In April 2021, the European Commission unveiled a draft regulation that aims, for the first time, to introduce binding rules for artificial intelligence (AI) systems.
In order to establish a European vision of AI grounded in ethics and to prevent the risks inherent in these technologies, the European Commission proposed categorizing algorithmic tools according to four levels of risk:
- unacceptable risk, which results in a ban;
- high risk, which requires compliance with various obligations before deployment;
- limited risk, which carries transparency obligations;
- minimal risk.
For its part, Equinet, the European network of equality bodies (of which the Défenseure des droits is the French member), calls for "making the principle of non-discrimination a central concern in any European regulation dedicated to AI".
Since April 2021, the Member States, the European Parliament and the Commission have been seeking a legal approach that promotes innovation while respecting fundamental rights.
Many questions divide the European partners, starting with the very definition of artificial intelligence.
Almost 3,000 amendments have been tabled in the Parliament. The final version of the regulation could be adopted by the Parliament in November. The text would then enter the trilogue phase between the Parliament, the Council of the European Union and the Commission.
A risk-based approach
"The new rules, based on a future-proof definition of AI, will be directly applicable in all member states," the European Commission explains. "They follow a risk-based approach:Unacceptable riskAI systems that are considered a clear threat to human safety, livelihoods, and rights will be banned. These include AI systems or applications that manipulate human behavior to deprive users of their free will (e.g., toys using voice assistance that encourage minors to engage in dangerous behavior) and systems that enable social rating by states.
High risk
Among the AI systems considered high risk are:
- AI technologies that are used in critical infrastructure (e.g., transportation) and have the potential to endanger the lives and health of citizens;
- AI technologies used in education or vocational training, which can determine an individual's access to education and career path (e.g., scoring of exam tests);
- AI technologies used in product safety components (e.g., the application of AI in robot-assisted surgery);
- AI technologies used in employment, workforce management and access to self-employment (e.g., resume-sorting software for recruitment procedures);
- AI technologies used in critical private and public services (e.g., credit risk assessment, which denies some citizens the ability to obtain a loan);
- AI technologies used in law enforcement, which are likely to interfere with the fundamental rights of individuals (e.g., checking the reliability of evidence);
- AI technologies used in the field of migration management, asylum and border control (e.g., verification of the authenticity of travel documents);
- AI technologies used in the administration of justice and democratic processes (e.g., applying the law to a concrete set of facts).
Before being placed on the market, high-risk AI systems will be subject to strict obligations:
- adequate risk assessment and mitigation systems;
- high quality of the datasets feeding the system, to minimize risks and discriminatory outcomes;
- recording of activities to ensure traceability of results;
- detailed documentation providing all the necessary information on the system and its purpose to enable the authorities to assess its compliance;
- clear and adequate information for the user;
- appropriate human oversight to minimize risk;
- high level of robustness, security and accuracy.
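The four risk tiers and the high-risk obligations checklist above can be summarized as a simple lookup. The following Python sketch is purely illustrative: the tier names, obligation strings and function are our own shorthand for the draft's categories, not anything defined by the regulation itself.

```python
# Illustrative sketch of the draft AI Act's risk tiers; names and
# structure are our own paraphrase, not part of the regulation.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # free use, no intervention

# The seven duties the draft attaches to high-risk systems (paraphrased).
HIGH_RISK_OBLIGATIONS = [
    "risk assessment and mitigation systems",
    "high-quality datasets to minimize discriminatory outcomes",
    "activity logging to ensure traceability of results",
    "detailed documentation for compliance assessment by authorities",
    "clear and adequate information for the user",
    "appropriate human oversight to minimize risk",
    "high level of robustness, security and accuracy",
]

def obligations(tier: RiskTier) -> list[str]:
    """Return the compliance duties attached to a risk tier (simplified)."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("system is banned; there is no path to compliance")
    if tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if tier is RiskTier.LIMITED:
        return ["inform users they are interacting with an AI system"]
    return []  # minimal risk: the draft provides for no intervention

print(obligations(RiskTier.LIMITED))
```

The point of the sketch is the asymmetry of the scheme: one tier is prohibited outright, one carries a substantial checklist, and the remaining two carry a single duty or none at all.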
Limited risk
For this category, the draft regulation includes specific transparency obligations: "When using AI systems such as chatbots, users must know that they are interacting with a machine so that they can make an informed decision about whether to proceed."
Minimal risk
The legislative proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft regulation does not provide for intervention here, as these systems pose little or no risk to the rights or safety of citizens.
The Commission proposes that the competent national market surveillance authorities ensure compliance with the new rules. Their implementation would be facilitated by the creation of a European Artificial Intelligence Board, which would also be responsible for stimulating the development of AI standards.
In addition, the proposal provides for voluntary codes of conduct for non-high-risk AI systems, as well as "regulatory sandboxes" to facilitate responsible innovation.
An AI investment plan
The draft regulation is accompanied by a coordinated plan to accelerate investment in AI and stimulate the implementation of national strategies. It will be financed through existing programmes such as the Digital Europe programme, Horizon Europe and an allocation from the European recovery plan. This European plan provides for:
- creating the conditions for the development of AI;
- fostering excellence in AI (public-private partnerships);
- building new skills in AI technologies;
- establishing European technological leadership in AI in key sectors (sustainable environment, health, agriculture, etc.).
Sources
1. A Europe fit for the digital age: Commission proposes new rules and actions to promote excellence and trust in artificial intelligence
2. Artificial intelligence: the opinion of the CNIL and its counterparts on the future European regulation
3. Artificial intelligence: the Défenseure des droits calls for the principle of non-discrimination to be placed at the heart of the European Commission's draft regulation