"For seventy years, our societies have been imagining their companionship with 'artificial intelligences' (...). Even before AI technologies had the slightest concrete existence, philosophers, ethicists and novelists were already at work imagining the rules intended to domesticate them," observe Bilel Benbouzid and Dominique Cardon, coordinators of this new issue of the Revue Réseaux.
This issue of Réseaux explores a range of arenas in which the development of artificial intelligence (AI) technologies is subject to normative debate. "It seeks to explain the inflation of reflections and essays, recommendations and charters, fears and criticisms, regulations and laws of which AI is the subject."
The two coordinators of this issue of Réseaux have chosen the term "control" in its general sense "because it makes it possible to bring together approaches that aim to question, criticize or constrain the development of AI". This formulation opens up several lines of questioning:
The first line of questioning concerns the close association of AI with ethical reflection. This link took shape in the debates conducted within the post-war cybernetics movement, notably concerning scientists' contribution to the making of the atomic bomb.
A second line of questioning concerns the relationship between norms and the objects they seek to control. Although AI is often personified under a single label, the technical entities it designates are in fact very difficult to define. This vagueness is all the more significant as the entities called AI have changed considerably with the successive paradigm shifts in the scientific history of the field. This transformation of AI's objects has important consequences for the way their regulation is debated.
A third line of questioning concerns the difficulty of exercising political control over technological development, a classic theme in studies of the governance of emerging technologies.
It is in this context that we need to understand how the control of AI has become a major concern in the public space, "an ethical, philosophical, and scientific specialty and even, more recently, a developing legal framework." A very wide variety of documents has emerged from this "AI ethics wave", with a common intention: "to propose readable frameworks for action for developers of AI solutions, based on high-level principles such as transparency, fairness, accountability or robustness". At least 84 ethical guidelines have thus been developed to supply these high-level principles as a basis for the ethical development of AI. "A diverse set of issues is now highlighted: biases in machine learning, risks of manipulation by recommender systems, interpretability of classifications, etc. So many themes that are now being addressed by research and that are generating rich scientific controversies."

Two themes run through this issue: the institutionalization of AI control, on the one hand, and controversies, promises and mobilizations, on the other.
Institutionalization of AI control
"How are the ways of framing artificial intelligence discussed? What standards are needed?" To answer these questions, much of the analysis currently turns to the Artificial Intelligence Act of the European Union. "As in other areas, the European Commission seeks, with its draft regulation, to reconcile the expansion of a huge and extremely profitable market with the prevention of harm to individuals and society."By drawing up a "cartography of definitional conflicts" , Bilel Benbouzid, Yannick Meneceur and Nathalie Alisa Smuha show that the Artificial Intelligence Act, by focusing on products and relying on the paradigm of risks, only deals with a limited aspect of Artificial Intelligence and its control. They observe the existence of at least four differentiated normative arenas: "speculative reflections (notably transhumanist) on the dangers of superintelligence and the problem of its alignment with human values; the self-empowerment of researchers developing a regulatory science entirely devoted to the technical certification of machines; the criticism of the harmful effects of AI systems on fundamental rights; and finally, the European regulation of the market through the control of safety due to AI products and services. The authors show " how the social space of regulation has been structured around a tendency that evolves from a control in abstracto to a control in concreto. Thus, from a principial and abstract regulation, the European regulator has progressively moved to a concrete, local, product by product, sector by sector regulation, partly because the objects of AI are realities producing tangible effects such as accidents and discriminations". Through the case of AI systems for detection and diagnosis in medical imaging, Léo Mignot and Émilien Schultz show how the control of AI in this sector proceeds from an autonomous regulation, both professional and economic. "It is neither health agencies, nor the law, nor technical standards that currently regulate the construction and use of devices, but the professional standards of radiologists. The survey reveals that the concerns of the actors are far removed from the way the problem is posed in the public debate. The problem of social control then appears as that of knowing who controls what? " In radiology, the state remains fairly uninvolved in regulation, so that the responsibility of actors in AI-assisted diagnostics remains, as it stands, in the hands of industrialists and professional networks."Ljupcho Grozdanovski returns to the question of liability in case of malfunction of a learning machine evolving without the direct intervention of its designer and user. This legal question, which is very old in law, is coming up again with machine learning, as the decision-making processes of contemporary machines that use it remain "if not incomprehensible to human understanding, at least difficult to trace with the methods available". "Can we assimilate the most advanced AI systems to agents, which would imply recognizing their rights and duties? What criteria should be used to designate a responsible human in the case of damage caused by an AI system?". "The implementation of liability in law with respect to AI systems is one of the most urgent avenues of research for legal scholars."
Controversies, promises and mobilizations
Beyond the institutional forms of AI regulation, this issue also examines the way its development fits into a set of controversies, promises and mobilizations.

Maxime Crépel and Dominique Cardon propose a mapping of controversies around AI in the Anglo-Saxon press, based on digital methods applied to a corpus of 29,342 articles published between 2015 and 2019. Using a machine learning method, they detected the 7% of articles with a critical coloration in order to identify the main objects of controversy surrounding a very open range of technical systems labeled as AI by journalists. The interest of this analysis is that it differentiates two sets of objects, robots and algorithms, which are associated with very different types of press discourse: prophecy and criticism. "If philosophy and ethical reflection feed the critique of the artificial life of robots, the political stakes of the control of algorithms are much more clearly aligned with other forms of regulation related to work, discrimination issues or questions of transparency and asymmetry of information and power."

Jonathan Roberge, Guillaume Dandurand, Kevin Morin and Marius Senneville look back at the industrial and political adventure of the company Element AI, which aimed to make Canada one of the dominant poles of global artificial intelligence. By documenting the history of Element AI, they reveal the immense persuasive power of this technological innovation. "However, nothing went as planned. The promises surrounding Element AI abruptly collapsed, unraveling a web of beliefs, funding, and public policy arrangements designed to encourage this ecosystem." Above all, the authors show that "even if we find the most facilitating arrangements to encourage its industrial development, the translation of AI prophecy into operational services is uncertain and sometimes improbable".

In turn, Ke Huang analyzes the mobilizations of workers at a Chinese delivery platform, Meituan. The ethnographic survey conducted in three Chinese cities reveals the forms of resistance and the points of friction raised by workers to demand specific platform regulations. The control of AI, in this case, is not played out "on the abstract and universal stage of ethics but in the negotiation of specific, professional rules proper to the multiple fields of application in which AI is being inserted."
Contents
Bilel Benbouzid and Dominique Cardon: Controlling AI
Bilel Benbouzid, Yannick Meneceur, Nathalie Alisa Smuha: Four shades of regulation of artificial intelligence. A mapping of definitional conflicts
Léo Mignot, Émilien Schultz: Artificial intelligence innovations in radiology tested against health system regulations
Ljupcho Grozdanovski: Algorithmic agentivity, futuristic fiction or imperative of procedural justice? Reflections on the future of the product liability regime in the European Union
Maxime Crépel, Dominique Cardon: Robots vs algorithms. Prophecy and criticism in the media representation of AI controversies
Jonathan Roberge, Guillaume Dandurand, Kevin Morin, Marius Senneville: Do narwhals and unicorns hide to die? On cybernetics, governance and Element AI
Ke Huang: Ethical implications of algorithmic systems and worker practices of meal delivery platforms. The case of Meituan