[Feature] Artificial intelligence and public services: what doctrine of use?
Foreword
The third phase of the national strategy for artificial intelligence (AI), released a few days before the Paris AI Action Summit of February 2025, includes a major section devoted to the use of AI in public action.
"AI will become a priority focus translated into ministerial roadmaps to be presented no later than June 2025. (...) The Ministry of Public Action, Civil Service and Simplification will coordinate interministerial actions to deploy AI in the public sector. (...) AI will be one of the priorities of the State Reform Fund".
"A plan to deploy generative AI tools will enable every agent to benefit from AI assistants capable of saving them time and making their daily administrative tasks more efficient. (...) This trusted AI will always have a public agent in the loop. (...) The tight steering of these deployments will enable their performance to be assessed in terms of time savings, costs and efficiency. On the basis of this evaluation and feedback, proven use cases will be deployed on a larger scale."
In 2022, the Conseil d'État had laid the foundations for an AI administrative doctrine, organized around seven principles of trusted public AI.
A few months later, ChatGPT burst onto the scene. The rise of generative AI is reshuffling the deck.
As artificial intelligence systems (AIS) make their appearance in many areas of public action, the time has come to take stock.
The Conseil économique, social et environnemental (CESE), the Défenseure des droits, the Cour des comptes, the Commission de l'IA and the Direction générale de l'administration et de la fonction publique (DGAFP) all delivered their initial analyses in 2024 and early 2025: all of them advocate strengthening interministerial steering and clarifying a doctrine for the use of AI in public services.
- By emphasizing the "empowering" dimension of AI for public agents, the AI Commission implicitly sketches out a doctrine of digital transformation deployed "from below": "The arrival of AI could be an opportunity to give public agents the ability to transform their own work, rather than undergoing transformation from above".
- The Défenseure des droits, responsible for defending the rights and freedoms of users of public services, examines the growing use of algorithms and artificial intelligence (AI) systems in French public services, particularly in central government. Despite the "efficiency gains in decision-making", she warns against "infringements of users' fundamental rights" and calls for human intervention to guarantee respect for those rights.
- In June 2024, the DGAFP published a strategy for the use of artificial intelligence in human resources management (HRM) in the state civil service. It highlights use cases and issues guiding principles.
- While the Cour des comptes recommends prioritizing "the missions and processes for which AI is likely to deliver significant gains in efficiency and productivity", it also sets out the broad outlines of a "robust public AI", based on sobriety in terms of data, models and infrastructures, and on the ability of organizations to retain full control over them.
- The CESE calls on administrations to question the purposes of AI systems. To this end, it formulates two conditions for their deployment: "that they improve the quality of public service and the working conditions of the agents who contribute to it", and "that the obligations of transparency, accountability and explicability are genuinely applied, when AI systems are deployed to enforce the law, or when they lead to individual decisions producing legal, economic or social effects that affect individuals".
Reference:
Seven principles for "trusted public artificial intelligence" (Conseil d'État, 2022)
In August 2022, the Conseil d'État published a 360-page study on the contribution of AI to public action. It recommended that public authorities define, without delay, a doctrine for the use of AI: an administrative doctrine that would make it possible to "control risks of all kinds (individual and collective)" and "guarantee the public utility of AI systems in all circumstances".
The high court proposed to base the use of AI in the public sphere on seven principles:
- Human primacy. "Public AIS are conceived as tools at the service of human beings, which presupposes that they serve a general interest purpose and that the interference in fundamental rights and freedoms that results from their implementation is not disproportionate to the benefits expected from them".
- Performance. "Deterioration in the quality of a service due to automation is one of the most destructive factors for confidence in digital tools, particularly when, in the case of public services, users cannot turn to a competitor. Public authorities must therefore identify system performance indicators (accuracy, technical robustness, response time, etc.) and define the acceptable level of performance, taking into account the consequences of errors, while ensuring that the quality of the service provided is not adversely affected."
- Equity and non-discrimination. "AIS designers must choose, from among the various conceptions of fairness, the one that will guide the operation of the systems, and formalize this choice, while respecting the principle of equality. They must also take care to prevent unintentional discrimination, a particularly important issue for AIS decision support systems based on machine learning".
- Transparency. "This principle includes, at the very least, the right of access to the system's documentation, a requirement of loyalty consisting in informing people of the use of an AIS with regard to them, the auditability of the system by the competent authorities, and the guarantee of explicability. The technical complexity of certain AIS, in particular those based on deep learning, and the difficulty or inability to formalize the reasoning that led to the result produced, run the risk of accentuating the feeling of mistrust if the people on whom this result has an impact cannot obtain, in simple language, an explanation of the main reasons behind the decision or recommendation formulated by the system".
- Security (cybersecurity). "Any AIS must integrate the issue of security, i.e. preventing cyberattacks and remedying their consequences."
- Environmental sustainability. "The environmental impact of AIS must be taken into account both in the overall public AI strategy, around a principle of global neutrality of AI, and in the design of each system, by comparing the extra performance enabled by greater computing power with its ecological footprint."
- Strategic autonomy. "Since AIS increasingly contribute to the essential functions of public power, they must be designed in such a way as to guarantee the nation's autonomy. While digital autarky would be an illusory and counter-productive objective, France must equip itself with the necessary resources, in terms of skills, infrastructure and data research structures, to reduce and choose its dependencies".
Administrations, however, have not waited for such a doctrine before putting AI into practice, here and there, with varying degrees of maturity. "Above all, we note a very great heterogeneity in the maturity of administrations, the degree of progress of reflections and projects, and the scale of investments devoted to them", the Conseil d'État writes. Pleading for a "proactive and lucid" strategy for the deployment of public AI, the high court invited the State to strike a balance between the development of AI for internal use and AI that directly benefits citizens, and between uses dedicated to "service" and uses dedicated to "control".
Reference:
Giving public employees the ability to transform their own work, rather than having it transformed from above (AI Commission, 2024)
The AI Commission, created in 2023, devotes a chapter of the report it submitted to the Prime Minister in March 2024 to the implementation of AI in public services. In it, the Commission draws up a harsh assessment of the digitization of public services: "Too often, digital transformation has stopped at the dematerialization of procedures, without transforming in depth the flow of information or the processing of requests. Promises to personalize (and therefore humanize) public services, to speed up processing and to simplify the work of agents have not been kept."
For the Commission, AI is first and foremost "an opportunity to relaunch the digital transformation of public services".
"Through its ease of use, generative AI offers the opportunity to unleash agents' creativity by enabling them to experiment with technology at their own level, without always needing a specific system". The Commission therefore recommends "encouraging public servants to seize these tools, all the more so when they are free for occasional use (...) The public service would benefit from equipping public servants with configurable AI so that they can deploy it themselves, whether these are off-the-shelf solutions or solutions specific to the public service. If public servants appropriate AI, uses will abound and will make it possible to identify more quickly where AI has value".
By emphasizing the "empowering" dimension of AI for civil servants, the AI Commission thus sketches out a doctrine of digital transformation that would be deployed "from below": "The arrival of AI could be an opportunity to give civil servants the ability to transform their own work, rather than having it transformed from above".
Two pitfalls: the "big AI project" and the "all ChatGPT" approach
In passing, the Commission warns against two pitfalls: "On the one hand, the 'big AI project', designed to do everything, replace everything, developed far from agents, users and the reality of public service. On the other hand, the 'all ChatGPT', in which a commercial and foreign universal conversational robot would become the only use of AI in public service".
Citizens themselves could be called upon to contribute to AI-enabled public services, "whether to define how they operate or to participate in their construction. Their involvement is crucial if AI-enabled transformation of public services is to avoid reinforcing bureaucratic, inexplicable and distant centralization". The Commission mentions the "alignment assemblies" convened in Taiwan to define rules for AI deployment and behavior in the public service.
To give wide access to generative AI services and avoid duplicate investment, the Commission recommends strengthening the capacity for technical steering and execution in public services. "At interministerial level, a real technology directorate should be able to provide not only doctrine, but also quality infrastructures (hosting, computing power, software factory, digital identity, etc.), expertise and transformation budgets. At a time when the State is seeking to reinternalize skills and circulate them between ministries, it needs to strengthen its ability to produce quality digital products".
Off-the-shelf AI solutions?
The Commission, incidentally, sets the terms for a debate on the use of off-the-shelf AI solutions.
"From 2024, public services will have to decide whether to use off-the-shelf AI solutions, enter into partnerships with companies, or redevelop their own tools. (...) Off-the-shelf solutions will have the advantage of performance and simplicity, as they are immediately available, and can even be integrated directly into agents' tools (office suite, search engine). Some ministries, for example, have banned the use of IT development tools such as GitHub's Copilot or GPT-4."
However, the Commission reminds us that off-the-shelf solutions have their limits, "either because they don't fit in with existing tools, or because they aren't adapted to certain sensitive uses. It would be tricky to ask ChatGPT to summarize a memo to a minister, for example. But it would be absurd for every ministry and local authority to redevelop or buy an AI capable of summarizing a memo without leaking the data".
Reference:
Human intervention as a guarantee of user rights (Défenseure des droits, 2024)
In a report published on November 13, 2024, the Défenseure des droits, responsible for defending the rights and freedoms of users of public services, examines the growing use of algorithms or AI systems in French public services, particularly in central government.
"Despite gains in decision-making efficiency", she warns against "infringements of users' fundamental rights".
In this report, the Défenseure des droits assesses the effectiveness of two guarantees that are particularly important in ensuring respect for these rights: human intervention in decision-making and the control of systems, and the requirement of transparency towards users.
Although user complaints are still few and far between, the Défenseure des droits has identified a "systemic" problem of trust between users and the administration. In this respect, she points out that the transparency of public action must be seen as a "prerequisite for combating possible errors, abuses and discrimination (...) However, information obligations are sometimes little or poorly respected due to the opacity of procedures carried out by or with the help of algorithms".
When an administrative decision is said to be "partially automated", a public official must take a positive, concrete and significant action in the decision-making process, based on or alongside the result generated by the algorithm. The Défenseure des droits notes, particularly on the basis of the complaints she receives, that this intervention is sometimes non-existent, and sometimes inconsistent or biased, when the people involved in individual decision-making tend to endorse the results produced by the system without questioning them.
In view of these limitations and their impact on the rights of public service users, the Défenseure des droits recommends the introduction of mandatory criteria and operating procedures to define more precisely the nature of the human intervention required.
Read more: The Défenseure des droits calls for vigilance regarding the use of algorithms in public services