When the European Commission’s White Paper on Artificial Intelligence was published on 19 February, there was disappointment among many members of the High-Level Expert Group on Artificial Intelligence (HLEG AI). Too vague, too soon, too noncommittal, too unrealistic. Crucial issues were simply dropped. Some also asked: Why has the HLEG AI worked for one and a half years, only for its detailed proposals to be mostly ignored or mentioned only in passing? Here we present our assessment as ethics experts. We think that the White Paper makes a number of strong points, but far fewer than we would have liked.
Trustworthy, still
Let’s start with the positives. We are encouraged by the fact that the Commission is holding firm to the ideal of “trustworthiness” and thereby, albeit indirectly, to an ethics-based approach. There are, however, some problems with the specific formulations. By now, everybody engaged in the global AI debate knows that there are fundamental conceptual problems with calling a technology itself, as opposed to the human beings who use it, “trustworthy”. AI systems that are robust, reliable, and largely transparent could easily be used in ways that contradict the intentions of the White Paper, for example by the governments of EU member states such as Hungary or Poland, or by American companies such as Facebook or Google. Nevertheless, the sustained emphasis on trustworthiness is ethically laudable.
A chance for Europe to lead: Merging the Green Deal and Europe’s AI approach?
Another positive feature of the document is its focus on climate change, sustainability, and the protection of resources. Von der Leyen and her teams clearly recognise the historic opportunity afforded by the unfolding climate crisis: they systematically exploit the connections and massive synergies between future AI research and the European Green Deal. If von der Leyen manages to pull off the Green Deal while simultaneously developing Trustworthy AI made in Europe, the whole world will soon want to buy smart climate technology manufactured in the bloc. Europe has the chance to lead by example and develop its own unique selling point.
Last week, in response to the COVID-19 crisis, Ursula von der Leyen called for a “Marshall Plan” for Europe. We think that all three goals should now be integrated under a single, consistent normative vision: Trustworthy AI and the European Green Deal could become the ethically grounded underpinning of economic recovery after the coronavirus crisis.
Assessing risk before it is posed. Or imported.
Beyond that, as ethicists we think that the strongest proposal in the document is the set of ex ante conformity assessments for AI technologies sold on the European market to European consumers. This proposal is in line with the long-standing idea, arising out of technology ethics and responsible innovation research, that it is best to build ethics in at the development stage; the assessments are one way to stimulate this.
What is problematic, however, is that the assessments are proposed only for high-risk sectors. The distinction between “high-risk” and “low-risk” sectors is too undifferentiated and coarse-grained, and it could easily turn into a Trojan horse: everything that is not unequivocally flagged as “high-risk” can now be smuggled into Europe, perhaps revealing its potential for harm in unexpected domains of application or only in the medium term.
Psychological risks of AI are finally recognised
This leads us to another positive: the authors of the White Paper are truly innovative in their mention of “mental safety risks”. With their rage- and division-driven business models, large US companies like Facebook systematically put the mental health of European citizens at risk. They not only undermine social cohesion; they also use AI to gradually make their users more predictable, extract their attentional resources, and then sell those resources to their customers. Facebook AI is not Trustworthy AI. Europe is surrounded by predators, psychologically as well as economically, and the Commission is beginning to realise it. Europe’s response should not be to compete by becoming just another “data-agile” surveillance shark, but instead to create incentives for innovation that make the mental health, privacy, and autonomy of its citizens, along with proactive contributions to social cohesion, key ethical requirements.
The sector-specific approach is uncalled for and highly problematic
Especially given their vague and noncommittal nature, these strengths of the White Paper should not blind us to the major ethical omissions and problems in the document.
Perhaps most importantly, we highlight an issue that jeopardises the core of the proposed policy: a sector-specific approach combined with a deliberately simplistic high-risk/low-risk distinction. There is no good reason to single out specific sectors, since attractive applications easily drift from one sector to another, and ethical problems with AI software are likely to arise across all sectors. Moreover, there are many kinds of risk, and levels of risk evolve over time. Our general impression is that the authors of the White Paper wanted to be tough on some public sectors, such as health and mobility, while leaving the private sector alone.
Europe needs to train a new generation of AI ethicists
We fully support the idea of establishing a pan-European “Ecosystem of Excellence”. But the Commission does not seem to see that, for the coming decades and in order to make this Ecosystem work, we will also need hundreds of well-trained experts in the ethics of AI. We need Ethical Excellence, too. Big companies need independently trained, fully credible ethicists as internal experts. Political institutions across Europe need a whole new generation of brilliant young people with a rigorous interdisciplinary education in philosophy, ethics, and AI.
Unfortunately, the Commission ignores the fact that the HLEG AI, in its investment and policy recommendations, proposed establishing 720 professorial chairs for AI ethics, one for every major European university. Professors appointed to these positions could also strengthen the links between science and civil society by convening public debates, interpreting research results, and offering interdisciplinary curricula to new generations of students.
Where did the ethics go?
Ethics, it seems, has lost all importance for the Commission. This becomes clear once we consider how thoroughly any substantive ethical positioning has been purged from the document. Largely in line with the US perspective, von der Leyen’s White Paper basically says: “Either ethics gets done by industry, or there will be no ethics at all!” First, ethics was used merely as an elegant public decoration for a large-scale investment strategy; now it is completely absent. This leaves a glaring gap in the Digital Education Action Plan, which now needs filling; it also goes against the spirit of the Commission’s own overall digital strategy, which we understand to include real ethics.
The second omission concerns “Lethal Autonomous Weapon Systems”. The White Paper’s ideal of “human oversight” with the “ability to intervene in real time” will quickly be thrown overboard if the EU now enters an AI arms race. In 2018 the European Parliament adopted, by 566 votes to 47, a resolution expressing its deep concern about autonomous weapon systems and calling for the start of negotiations on an international, legally binding instrument prohibiting such weapons. Many parliaments in Europe and around the world have voiced similar worries. The Belgian parliament explicitly called for a ban, and for good reason. The current document completely ignores the issue, and this arouses our deep suspicion. Has the arms industry successfully intervened in the process? No matter what we think as ethicists, the legitimate representatives of European citizens clearly want to see a binding protective commitment here very soon.
Experts were ignored in the process
Finally, it was disappointing that the White Paper was developed without any direct input from the High-Level Expert Group. Not only did the authors publish it without even waiting for our final results; they then failed to do us the courtesy of sending us the White Paper, instead leaving us to read about it in the press. To members of the HLEG AI, this demonstrates how unimportant we really are and how little weight Ursula von der Leyen gives to our work. We sadly conclude that the few ethicists among the 52 members of the AI advisory group have been nothing but a fig leaf.
Thomas Metzinger is a Professor of Theoretical Philosophy at the University of Mainz, Germany. In the High-Level Expert Group on Artificial Intelligence (HLEG AI) he represents the 850 European universities of the European University Association (EUA).
Mark Coeckelbergh is a Professor of Philosophy of Media and Technology at the University of Vienna, Austria, and also a member of the HLEG AI. His book “AI Ethics” was recently published in the MIT Press Essential Knowledge series.
The original article can be retrieved in German here.