The growing deployment of artificial intelligence systems (AIS) in companies and government agencies is raising major questions about the future of work.
After two years of surveys, the LaborIA action-research laboratory, set up by the French Ministry of Labor and Inria, has published groundbreaking results on human-machine interactions and the challenges of appropriating AI in the world of work.
Survey results
After gathering the perceptions of stakeholders (decision-makers, designers, engineers and employees) in different types of organizations (private companies, public authorities, public institutions), the survey reveals three types of results.
First result: the deployment of AIS in organizations is not the end point of an innovation process, following the phases of ideation, prototyping and experimentation, but rather a new starting point.
Human-machine interactions involve extended and uncertain learning periods. Workers must not only use AIS, but also engage in their maintenance, improvement and supervision. This work is often given little or no recognition by organizations, leading to disengagement and, in the long run, to the failure of AI projects.
Second result: the failures and successes of AI projects also depend on conflicting work priorities.
The survey reveals an opposition between a managerial logic of AI, promoted by designers and decision-makers, and a "logic of real work" specific to employees.
Managerial logic sees AI as "a means of optimizing processes, reducing the risk of errors or even improving performance and increasing labor productivity".
The logic of real work is driven more by issues of appropriation of AI in work activity, "raising questions in terms of recognition, autonomy, responsibility and meaning of work". Thus, as the authors of the study note, "during the deployment of AIS, the priorities of decision-makers often come up against the concerns of workers faced with changes in their tasks, skills and working conditions".
This conflict of rationalities is a source of ambivalence and paradox: "AIS are perceived both as useful assistants (saving time, facilitating work) and as a source of threats (to jobs or to the content of work), both as a promising solution capable of performing complex tasks and as software lacking maturity, stability, relevance and reliability".
This conflict of rationalities can lead to very different outcomes, depending on the ability of organizations to establish compromises between the different stakeholders.
"The absence or failure of such compromises gives rise to alienating human-machine configurations (overconfidence or caution with regard to AI, loss of skills and autonomy...) in which workers feel dispossessed of their work".
Conversely, the presence of a successful rationality compromise "gives rise to enabling configurations in which AIS increase human skills".
Third result: the deployment of AIS in organizations can have unexpected effects on work organization and management.
These unexpected effects include a reconfiguration of professional roles and qualification frameworks, a questioning of the role of middle managers, a polarization of work, and so on.
What's more, if AI changes work (and its organization), work also changes AI: "different modes of work organization - highly hierarchical/centralized or, on the contrary, leaving more room for autonomy - strongly influence the reception and appropriation of an AIS".
Six recommendations for technological social dialogue
Creating the conditions for AI-enabled work
"The effects of AIS deployment are therefore manifold, from the individual performing his or her work, to the tasks, activities and methods required to carry it out, via changes in job references, qualifications and skills, and the reorganization of management and work organization".
"Never definitively established, these consequences can be ambivalent, contradictory, but also profoundly alienating or empowering, depending on the outcome given to the rationality conflicts", add the authors of the study. "A final major contribution of this study consists in identifying the conditions of possibility of the rationality compromise and the conditions of emergence of empowering configurations".
Involve workers in the innovation process, from the outset, to enable the emergence of a rationality compromise.
More than a simple consultation to facilitate adoption, it is a question of starting from real work ("what workers really do") rather than prescribed work ("what workers are supposed to do in theory") in order to make well-being, quality and meaning at work possible.
Ensure co-design of AIS and organize ongoing dialogue
Promote close interaction with all the protagonists of the AIS project (decision-makers, designers, engineers, operators, and employee representative bodies) in order to co-define the socio-technical configuration:
- The goals of AI, now and in the future;
- The rules governing AI usage;
- The ergonomics of the tooling and interface;
- The distribution of roles and tasks, including maintenance, improvement and supervision, and the appropriate forms of recognition for these activities;
- The parameterization of algorithms and the acceptable level of error in the work.
"Considering the dynamic, learning and empirical properties of AIS, these interactions are not limited to an initial design phase: they must be continuous, organized and collegial with a view to developing a collective culture of the uses and impacts of AIS on work".
Put AI to work to make workers safer
"Aim for an increase in safety that reassures workers", i.e. a deployment of AIS focused on improving the quality of working life, reducing socio-professional risks and supporting professional practices (AI as a "safety catch").
"This security then makes possible new forms of interaction and new uses that deliver the full value of AIS: performance, quality and productivity."
Make AIS "explainable": open the black box
Strive to make AI operations and results understandable to workers in real-life situations.
Without entering here into the high-level debates and controversies on the explainability of algorithms, "what is at stake here is situated explainability, which puts the AIS to the test in its context of use in order to concretely evaluate how the worker's understanding of its results influences his or her power to act".
Learn by doing: accept a degree of unpredictability in the upheavals produced by AI
Take into account the potential unexpected effects of AI on workers, work and the organization.
"The empirical nature of AIS can give rise to new situations linked to the heterogeneity of workers' interpretations, to discrepancies between situations experienced and situations dealt with by AIS, and to learning by both AIS and workers".
These situations need to be the subject of feedback in order to develop a shared culture of usage and appropriate organizational postures: tolerance of error, initiative-taking, technological social dialogue, and quality conflict, i.e. the possibility of deliberating on the quality criteria of work with AIS.
LaborIA
In response to one of the recommendations of the Villani Report, the French Ministry of Labor and Employment and Inria jointly founded LaborIA in 2021, "a laboratory aimed at building and consolidating a field vision to better identify artificial intelligence and its effects on work, the workforce, employment, skills and social dialogue".
LaborIA's objectives are to inform the public debate on the impacts of the spread of AI in organizations, to produce recommendations to foster the development of responsible AI, and to support the decision-making of the Ministry in charge of labor and employment.
Reference: