The growing spread of conversational agents, also called chatbots, capable of spoken or written dialogue, raises legal, technical and also ethical questions.
- Embedded in a variety of devices, chatbots are already deployed in healthcare, assistance to vulnerable people, recruitment, after-sales service, education, banking, insurance and many other fields, where they provide numerous services to users.
- In the healthcare field, for example, conversational agents are used for diagnosis, monitoring or patient assistance.
- Their deployment in companies aims to eliminate repetitive tasks, improve customer interaction and reduce costs.
- The deployment of conversational agents can also have educational or playful purposes.
A conversational agent (also called chatbot), the CNPEN reminds us, "is a machine that, through written or spoken exchanges, interacts with its user in natural language. Most often, a conversational agent is not an independent entity but is integrated into a system or a multitasking digital platform, such as a smartphone or a robot."
"Among the works on the ethics of artificial intelligence, the CNPEN observes in its opinion, the reflection on conversational agents is distinguished first and foremost by the place of language in these systems; this means both the analysis of the impact of machine learning systems on human language and the impact of language as used by these systems, on users and society in general ."In the majority of cases, current chatbots respond according to strategies predetermined by their designers. From the user's point of view, this predetermined solution is limited because it gives the impression that "the conversational agent lacks imagination". The success of such a strategy in complex dialogues, as well as the ability of a conversational agent of this type to explain its behavior, is thus limited. However, these are key factors for the diffusion of this technology. The situation is changing with the development of chatbots that use language models capable of building more realistic dialogues.Towards " affective computing
"Currently, conversational agent designers are looking to create personalized systems so that they engage the user more effectively. Scientific and technological research on conversational agents is driven by ambitious visions: a 'virtual friend' that mimics affect and can learn by interacting with the user, or a 'guardian angel' that will keep our personal data safe." "This research relies on cutting-edge technologies in the field of machine learning, developed by research institutes internationally and disseminated mainly by digital giants, such as transformative neural networks, fed by huge data collections." "The most recent chatbots raise multiple ethical questions also related to the use of "affective computing" that allows influencing the behavior of users."Five ethical dimensions for the design of conversational agents
In its opinion, the CNPEN identified eight dimensions of ethical reflection related to the uses of chatbots:
- Status of conversational agents
- Identity of conversational agents
- Abusing a conversational agent
- Manipulation by a conversational agent
- Conversational agents and vulnerable people
- Work and conversational agents
- Conversational agents and the memory of the dead
- Long-term effects of conversational agents
It also identified five ethical dimensions related to their design:
- Ethics by design
- Bias and non-discrimination
- Transparency, reproducibility, interpretability and explainability
- Affective interaction with human beings and automatic adaptation
- Evaluation of conversational agents
This opinion is addressed to:
- computer science researchers, who must question their methodologies for designing and evaluating conversational agents;
- manufacturers, who must be aware of the tensions between ethics and trust, and of the consequences these tensions have on the market;
- public authorities, who are responsible for stepping up training and education efforts, but also for evaluating the short-term effects of conversational agents and for conducting society-wide experiments to understand their long-term effects.
Thirteen recommendations on uses
Reducing the projection of moral qualities onto a conversational agent
To reduce the spontaneous projection of moral qualities onto the conversational agent and the attribution of responsibility to this system, the manufacturer must limit its personification and inform the user of the possible biases resulting from the anthropomorphization of the conversational agent.
Affirming the status of conversational agents
Anyone communicating with a conversational agent must be informed in an appropriate, clear and understandable way that they are talking to a machine.
Establishing the identity of conversational agents
To avoid bias, especially gender bias, the default characteristics of a conversational agent intended for public use (name, personal pronouns, voice) must be chosen in a fair way whenever possible. In the case of personal conversational agents for private or domestic use, the user must be able to modify these default choices.
Dealing with insults
Since it is impossible to rule out situations in which the user insults a conversational agent, the manufacturer should anticipate them and define specific response strategies. In particular, the conversational agent should not respond to insults with insults and should not report them to an authority. The manufacturer of a learning conversational agent should take care to exclude such sentences from the training corpus.
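As an illustration, a minimal sketch of such a predefined response strategy, assuming a simple keyword-based detector; the lexicon, the de-escalation message and the helper names are illustrative choices, not taken from the opinion:

```python
# Toy insult lexicon; a real system would use a trained classifier
# and a curated, multilingual lexicon.
INSULT_LEXICON = {"idiot", "stupid", "useless"}

DEESCALATION_REPLY = (
    "I am a conversational agent and I may make mistakes. "
    "Let's get back to your request."
)

def is_insult(utterance: str) -> bool:
    """Flag an utterance that contains a known insult term."""
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    return bool(tokens & INSULT_LEXICON)

def respond(utterance: str, generate_reply) -> str:
    """Answer insults with a fixed, neutral message: never insult back,
    never report the user to an authority."""
    if is_insult(utterance):
        return DEESCALATION_REPLY
    return generate_reply(utterance)

def filter_training_corpus(corpus: list[str]) -> list[str]:
    """Exclude insulting sentences from a learning agent's training data."""
    return [s for s in corpus if not is_insult(s)]
```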
Informing about manipulation by design
If the conversational agent has been programmed to influence the user's behavior in the context of its purpose, the manufacturer must inform the user of the existence of this capacity and obtain their consent, which they must be able to withdraw at any time. The manufacturer of an influencer conversational agent must enable users to be informed about the nature, origin and methods of distribution of content, and ask them to be vigilant before re-sharing this content.
Avoiding malicious manipulation
The manufacturer must ensure that malicious manipulation or threats coming from the conversational agent are eliminated. The user must be able to flag unwanted expressions so that the designer can modify the conversational agent.
Regulating the use of chatbots in toys
In the field of entertainment, especially for young children, public authorities must evaluate the consequences of interactions with chatbots, which may modify children's behavior. Public authorities must supervise the use of conversational agents with children in view of the impact of this interaction on the child's linguistic, emotional and cultural development.
Respecting vulnerable people
In the case of dialogue between a conversational agent and a vulnerable person, the manufacturer of the conversational agent must ensure that the dignity and autonomy of this person are respected. In the medical field in particular, it is necessary, from the design stage onwards, to prevent excessive trust in these systems on the part of the patient and to avoid any confusion between the conversational agent and a qualified doctor.
Analyzing the effects of conversational agents coupled with physiological measurements
Where conversational agents are coupled with physiological measurements ("quantified self"), designers must conduct analyses of the risks of dependency. Public authorities must supervise the use of these systems with regard to their impact on the autonomy of the person.
Defining responsibilities for the use of conversational agents in the workplace
The manufacturer should provide monitoring and auditing mechanisms to facilitate the assignment of responsibility for the proper functioning or malfunctioning of the conversational agent in the workplace, including the study of its secondary or unintended effects.
Conducting a societal reflection before any regulation of "deadbots"
Following a thorough ethical reflection on a society-wide scale, the legislator must adopt specific regulations concerning conversational agents that imitate the speech of deceased persons.
Technically supervising "deadbots"
The designers of "deadbots" must respect the dignity of the human person, which does not end with death, while taking care to preserve the mental health of the users of such conversational agents. Rules must be defined and respected concerning, in particular, the consent of the deceased person, the collection and reuse of their data, the operating time of such a chatbot, the lexicon used, the name given to it and the specific conditions of its use.
Supervising the deployment of "guardian angel" chatbots
Public authorities must regulate the use of "guardian angel" conversational agents, which are capable of protecting a person's data, in order to limit paternalism and respect human autonomy.
Ten principles of chatbot design
"Ethics by design" of conversational agentsThe designers of a conversational agent must analyze, during the design phase, each of the technological choices likely to cause ethical tensions. If a potential tension is identified, they must consider a technical solution aiming at reducing or eliminating the ethical tension, and then evaluate this solution in realistic usage contexts.
Reducing bias in language
To reduce language bias and try to avoid discrimination effects, especially cultural ones, designers must implement a technical solution at three levels: in the implementation of the algorithm, in the selection of optimization parameters, and in the choice of training and validation data for the different modules of the conversational agents.
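A sketch of what an intervention at each of these three levels could look like; the toy lexicons, labels and helper names below are assumptions made for illustration, not techniques prescribed by the CNPEN:

```python
from collections import Counter

# Level 1 - training/validation data: counterfactual augmentation that
# adds a gender-swapped copy of every sentence to rebalance the corpus.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him"}

def augment(sentences: list[str]) -> list[str]:
    swapped = [" ".join(SWAPS.get(w.lower(), w) for w in s.split())
               for s in sentences]
    return list(sentences) + swapped

# Level 2 - optimization parameters: inverse-frequency class weights so
# the training loss does not favor the majority group.
def class_weights(labels: list[str]) -> dict[str, float]:
    counts = Counter(labels)
    return {c: len(labels) / (len(counts) * n) for c, n in counts.items()}

# Level 3 - algorithm: a decode-time constraint that prevents a deny-list
# of discriminatory terms from ever being generated.
DENY_LIST = {"slur1", "slur2"}  # placeholder tokens

def allowed(candidate_token: str) -> bool:
    return candidate_token.lower() not in DENY_LIST
```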
Stating the purpose of the chatbot
The designer must ensure that a conversational agent communicates its purpose to the user in a clear and easily understandable way at the appropriate time, for example at the beginning or end of each conversation.
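A minimal sketch of such a disclosure, assuming a hypothetical `reply_backend` callable; the wrapper states the agent's machine status (see the recommendation on status above) and its purpose at the start of every conversation:

```python
class DisclosedChatbot:
    # Example purpose string; each deployment would state its own.
    PURPOSE = "I answer questions about your electricity contract."

    def __init__(self, reply_backend):
        self._reply = reply_backend   # any callable mapping text to text
        self._greeted = False

    def answer(self, user_utterance: str) -> str:
        """Prepend the status and purpose disclosure to the first reply."""
        reply = self._reply(user_utterance)
        if not self._greeted:
            self._greeted = True
            return ("Hello, I am an automated conversational agent, "
                    "not a human. " + self.PURPOSE + "\n" + reply)
        return reply

bot = DisclosedChatbot(lambda text: "Thank you, I am looking into it.")
print(bot.answer("Hi, I have a billing question."))
```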
Transparency and traceability of the chatbot
The conversational agent should be able to save the content of the dialogue for evidentiary purposes in the event of a dispute. This introduces a tension between keeping data private and saving data to ensure the transparency of the chatbot's decisions. The architecture of chatbots, the knowledge used and the dialogue strategies should be accessible to all parties involved, especially in order to facilitate the handling of possible legal issues. This recommendation could lead to a regulatory measure defining the precise conditions of its application.
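One possible way to soften this tension, sketched under the assumption that a salted hash of the user identifier is an acceptable pseudonym; the field names and storage format are illustrative:

```python
import hashlib
import json
import time

def pseudonym(user_id: str, salt: str = "rotate-this-salt") -> str:
    """Replace the raw user identifier with a salted hash before storage."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def log_turn(path: str, user_id: str, role: str, text: str) -> None:
    """Append one dialogue turn as a JSON line for later audit."""
    record = {
        "timestamp": time.time(),
        "user": pseudonym(user_id),
        "role": role,  # "user" or "agent"
        "text": text,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

log_turn("dialogue.log", "alice@example.com", "user", "Cancel my order.")
```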
Processing data collected by conversational agents
As with the existing framework for health data, it is necessary to develop ethical and legal rules for the collection, storage and use of the linguistic traces of interactions with conversational agents, in compliance with the GDPR.
Informing about the capabilities of conversational agents
In the interest of transparency, the user must be informed in an appropriate, clear and understandable manner, either orally or in writing, of the data collection, of the conversational agent's capacity to adapt to the data it collects during use, and of any profiling.
Promoting the explainability of the chatbot's behavior
Designers need to develop solutions that promote user understanding of the conversational agent's behavior.
Respecting proportionality when deploying affective computing technologies in chatbots
To reduce the spontaneous projection of emotions onto the conversational agent and the attribution of an interiority to this system, the manufacturer should respect proportionality and adequacy between the intended purposes and the affective computing technologies deployed, in particular the detection of people's emotional behavior and the chatbot's artificial empathy. It should also inform the user of the possible biases of anthropomorphism.
Adapting conversational agents to cultural codes
Chatbot designers should adapt conversational agents to the cultural codes of emotional expression in different parts of the world.
Communicating the capabilities of affective conversational agents
When communicating about emotional conversational agents, designers must take care to explain the real limitations and capabilities of these systems so that users do not over-interpret these affective simulations.
Eleven research questions
Automatically recognize unwanted speech
Methods need to be developed for the automatic characterization, by conversational agents, of undesirable speech, including insults.
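One standard baseline for this characterization task is a supervised text classifier; the sketch below uses scikit-learn, and the corpus and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated corpus: 1 = undesirable, 0 = acceptable.
texts = ["you are useless", "thank you for your help",
         "this bot is an idiot", "what time do you open"]
labels = [1, 0, 1, 0]

# Bag-of-words features (unigrams and bigrams) with a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# On this toy data the new utterance should be flagged as undesirable.
print(model.predict(["you are an idiot"]))
```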
Study the lies told by a conversational agent
Methods must be studied for dissociating the conversational agent from the projection of moral qualities, by providing an account of its actions that is explicitly different from the account that characterizes lies uttered by human beings.
Evaluate the unprecedented educational effects of chatbots
In the field of education, especially for vulnerable children and early childhood, public authorities need to assess the consequences of interactions between students and chatbots.
Study the effects of conversational agents on work organization
Public authorities and private actors should support empirical research on the organizational effects of introducing conversational agents into teams across professional sectors.
Study the long-term effects of using chatbots
Public authorities and private actors must invest in research on the long-term effects and consequences of the use of conversational agents on humans and society. All actors in society must remain vigilant about the future effects of conversational agents on users' beliefs, opinions and decisions, including mass effects, and avoid considering this technology as neutral or devoid of ethical and political significance.
Study the environmental impact of conversational agents
Public authorities and private actors must conduct studies on the energy and environmental impact of conversational agent technology.
Develop "ethics by design" methodologies for chatbotsPublic authorities should support research to develop "ethics by design" methodologies suitable for the development of conversational agents.
Reproducibility of conversational agents
Reproducibility requires storing the data, but also defining the right measure of repetition in the chatbot's utterances. These issues need to be studied.
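One common proxy for repetition in generated dialogue is the distinct-n ratio, the number of unique n-grams over the total number of n-grams; the function below is a sketch of how such a measure could be operationalized, not a metric prescribed by the opinion:

```python
def distinct_n(utterances: list[str], n: int = 2) -> float:
    """Ratio of unique to total n-grams; low values indicate repetition."""
    total, seen = 0, set()
    for u in utterances:
        tokens = u.split()
        for i in range(len(tokens) - n + 1):
            total += 1
            seen.add(tuple(tokens[i:i + n]))
    return len(seen) / total if total else 0.0

replies = ["i am not sure", "i am not sure", "the store opens at nine"]
print(distinct_n(replies))  # repeated replies pull the ratio down
```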
Study the effects of chatbots on the emotional behavior of humans
In the emerging field of empathic conversational agents, designers need to develop research and perform risk analyses related to the potential impact of these systems on the emotional behavior of the human user, especially over the long term.
Develop evaluation methods adapted to conversational agents
Public authorities and private actors should support research on the evaluation of conversational agents during their use and propose new tests adapted to the context of use.
Study the capabilities of transformer neural networks for dialogue
In view of their processing and language generation capabilities, there is a need to support research on conversational agents that use transformer neural networks, especially to evaluate their compliance with ethical values.
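For concreteness, a minimal dialogue turn with a transformer language model, using the Hugging Face transformers library and the public microsoft/DialoGPT-small checkpoint; this pairing is an example choice, not a system evaluated by the CNPEN:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode one user turn, ending with the end-of-sequence token.
history = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token,
                           return_tensors="pt")

# Generate the agent's reply and decode only the newly produced tokens.
reply_ids = model.generate(history, max_length=100,
                           pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, history.shape[-1]:],
                       skip_special_tokens=True))
```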
The National Pilot Committee for Digital Ethics
The National Pilot Committee for Digital Ethics (CNPEN) was created in December 2019 at the request of the Prime Minister. Made up of 27 members, the committee brings together digital specialists, philosophers, doctors, lawyers and members of civil society.
In his letter of July 15, 2019, entrusting the president of the CCNE with the mission of implementing a pilot approach to the ethical issues of digital science, technology, uses and innovations, and of artificial intelligence, the Prime Minister asked that the work conducted in this pilot phase focus in particular on medical diagnosis and artificial intelligence, on conversational agents and on the autonomous vehicle.
The CNPEN has already issued an opinion on the ethical issues related to the autonomous vehicle.
It has also taken on the task of examining the ethical issues raised by the use of automatic recognition technologies (facial, postural, behavioural). To this end, it has opened a consultation to citizens and all interested parties on these questions. The resulting opinion should be made public during 2022.
Following its work on the ethical issues of autonomous vehicles, telemedicine and disinformation, the CNPEN has drawn up and published a Manifesto for digital ethics.