The Digital Society Lab's "interview" format aims to give a voice to a diverse range of experts on digital issues, who all speak on their own behalf. The analyses and opinions expressed are those of the interviewees alone and should not be interpreted as the official position of the ANCT's Digital Society Program. This interview was conducted on behalf of the Digital Society Lab by François Huguet, a researcher in digital humanities (PhD, Institut Polytechnique de Paris) whose work consists of investigating, documenting, and promoting digital technology in the public interest.
Foreword
Hubert Guillaud is a French journalist and essayist, renowned for his critical analysis of digital technology, its social uses, and their political and cultural implications. A leading voice in thinking about digital technology beyond technosolutionist discourse, he now runs the media outlet Dans les Algorithmes, whose work echoes that of the AI Now Institute and Data & Society across the Atlantic, and of Algorithm Watch in Europe: actors who investigate the concrete effects of algorithms and highlight the risks they pose to fundamental rights.
Digital technology in the public interest: a turning point?
François Huguet: In your latest book, Algorithms Against Society, you begin by recalling the "false promises" of the digital revolution. You also show how we are now surrounded by technical systems imbued with ultra-liberal logic, which makes it very difficult to build political alternatives. Should we conclude that digital technology in the public interest can only be conceived as a break with what digital technology has become?
Hubert Guillaud: My observation today is that it is extremely difficult to imagine any major disruptions in relation to the range of professional, social, economic, and political digital tools that "script" us so much and constrain us so strongly. Power is now a script, a sequence of instructions that unfolds instantly and instantiates itself, as is the case with computer programs. The possibilities for action offered by these tools are immense, as are their calculation methods, but they tend to impose ways of doing things on us, assign us specific roles and uses, and limit and circumscribe our possibilities. They reduce our capacity for action, our agency. In other words, we are very "constrained" by these systems: constrained to function as the models have predicted, to think as they impose on us... or even not to think at all.
To illustrate this, I would like to give the example of a meeting I attended recently concerning an upcoming public event. I quickly realized that our role as partners in this project was limited to collectively commenting on several sheets of a very complex spreadsheet that contained all the project details: schedule, budgets, deliverables, etc. Our job was to validate cells and dates, but in no way to discuss the deeper issues of the project: why we were carrying it out, why we thought it was important to do so, or how. Presented in this form of rows, columns, and cells, the project seemed fluid: a succession of tasks to be assigned. We forgot to question the meaning of the event and its challenges, even though its very form had been defined in advance, such as the fact that all the round tables would be occupied by the project partners, regardless of whether some of them advocated a completely different concept, particularly one open to public participation. The spreadsheet format and the scripted economic organization took up all the space, leaving none for meaning. All business processes today are like this, totally encoded in practices imposed by digital management tools, as described by sociologist Marie-Anne Dujarier when she speaks of management disembodied by tools that produce a social relationship without connection.
These tools can be very useful and are often extremely powerful. But they constrain relationships to such an extent that they prevent any other course of action and impose their methods without any escape, like the marketing funnels of automated mailing, which are used as much for joining an association as for selling the worst product imaginable. Worse still, regardless of the reliability and effectiveness of these methods, operated by systems that everyone uses, we are stuck, surrounded by these practices, instantiated in tools with no way out.
Thinking about digital technology in the public interest requires finding ways to break free from these practices, questioning and finding ways to overcome their limitations, breaking the scripts they embody and, with them, the ideologies they carry. This is possible because there is no single way of calculating.
However, the standardization of these tools imposes uniform calculation methods on us, as with the fraud risk scoring systems used in social welfare systems, which are deployed around the world according to the same methods, with the same flaws, and often by the same large companies. The challenge is to regain control of these systems, to understand whom they miscalculate, and to find ways to calculate differently.
Digital mediation and blind spots
François Huguet: You also outline, implicitly but precisely, the economic characteristics of digital technology in the public interest. For example, you point out that digital technology has often been used as a tool for budgetary rationalization: fewer staff, more services with fewer resources, more citizens to serve with fewer public reception areas, etc. You also denounce the lack of genuine democratic and scientific oversight mechanisms in the development of artificial intelligence applications: everything becomes calculable, but we too often forget to observe, question, and control what is calculated—and how it is calculated. Those involved in socio-digital mediation have been making this observation for many years. In your opinion, is this one of the roles of this sector? To make these blind spots visible?
Hubert Guillaud: One of the major problems with algorithms is the lack of democratic control over their development. Parcoursup is a very good example of this: there are no user representatives on its ethics and science committee, no students, no parents, no teachers, etc. The same goes for a whole host of other tools that have a huge impact on society, from social media ranking algorithms to job matching systems.
However, all over the world, it is citizens who are exposing the problems with algorithmic systems: scientists, journalists, activist groups. Users are the first to know when systems are mistreating them.
In France, revelations about the limitations of the CAF's scoring systems were brought to light by researchers, journalists, and activists. The Changer de Cap collective became aware of the problem when it was approached by beneficiaries who reported unexplained cancellations of their social rights. The collective investigated to understand the nature of the problem. It was the collection of testimonies that brought the issue to the forefront. And this is hardly surprising: users are always the most knowledgeable about the problems they face. They are best placed to describe their difficulties. These voices are rarely heard, and the world of digital mediation is the first to listen to them.
Digital mediators are the ones who encounter these difficulties most often, as they are always on the front line. In this sense, their impact should be very strong. But they still need to find people to whom they can report these difficulties.
The world of digital mediation also constantly reminds us that we must not lose more public services, human relationships, and service counters. The same is true in businesses: budget cuts in customer relations mean that the quality of service on offer is declining. There is something very paradoxical about this: users must always question their own ability to use these digital tools, portals, and interfaces. Everything rests on their responsibility. It is always the user who is at fault and who has very little recourse; the responsibility of those who design these services is never questioned. The "mental load" of digital disengagement rests solely on users, never on the designers and the scripts they impose on us. Data scientists and engineers believe they can solve all the world's problems with a confidence that should deeply challenge our society.
In a recent study, researchers Nel Escher, Jeffrey Bilik, Nikola Banovic, and Ben Green asked computer scientists to develop software that provides automated advice on eligibility for bankruptcy proceedings. The computer scientists completed the task quickly, with one of them noting that it was very simple to do. However, analysis of the software they developed revealed numerous errors and misinterpretations of the law. Despite these flaws, the computer scientists were convinced of the practical usefulness of their tools. Seventy-nine percent of them said they would gladly accept their tool replacing judges in bankruptcy court, even though their tools were flawed and problematic.
Digital technology in the public interest and techno-criticism
François Huguet: Moral issues also run through your book from start to finish. The ethics of those who design these technologies appear to be an important key to understanding them, and have recently been explored by Olivier Tesquet and Nastasia Hadjadji in their book Apocalypse Nerds. In your opinion, to conceive of digital technology as being in the public interest, should we refer to techno-critical thinkers such as Jacques Ellul or Ivan Illich, or are there other more contemporary ideas that inform your own thinking?
Hubert Guillaud: Illich and Ellul are fundamental references. They speak to us from their eras, posing questions that remain highly relevant today. But I think there are many intellectuals who enable us to better understand the challenges posed by today's technologies. I am thinking, for example, of Kate Crawford, Melanie Mitchell, Arvind Narayanan, Virginia Eubanks, Karen Levy, Miriam Posner... and the work done by Data & Society, DAIR, Algorithm Watch, and the AI Now Institute. These are all references that I often cite in Dans les Algorithmes and that help us understand the digital stranglehold closing in around us.
Alternumerism and artificial public service
François Huguet: What exactly does the expression "artificial public service" mean, which you use several times in your book?
Hubert Guillaud: The expression comes from the book published by the Solidaires Finances Publiques union, AI in Taxation: Union Reflections and Actions. Its authors use the term to refer to the replacement of public servants by computerized technical systems. They explain that in the tax administration, data scientists have taken over and now dictate the work of civil servants, which is not without consequences. Despite budgetary rationalization, there are still civil servants, but their tasks are now determined by AI. Instead of qualitative investigation and detailed knowledge of the public, powerful computer tools now aggregate data on taxpayers on a massive scale and are used, for example, to check for errors in tax returns rather than to detect sophisticated tax fraud, which is harder to find and yet far more significant in financial terms. The risk, warns Solidaires, is that we are creating an "artificial public service," where modernization leads to a loss of skills and a deterioration in service, and where the administration is reduced to a statistical machine. The risk, as the Council of State has warned, is that the ability to detect errors will be dulled by systems whose workings very few people understand. The artificial public service describes a likely future in which public service is replaced by machines on the pretext of saving money, to the detriment of the relationship with the public, and at the risk of no longer fulfilling its role of rebalancing inequalities and ensuring equal treatment of citizens.
François Huguet: Can we still believe in alternumerism? You rightly point out the criticisms made by Julia Laïnae and Nicolas Alep, figures of radical technocriticism, in their book Contre l'alternumérisme (Against Alternumerism), while emphasizing that reforming technology can also serve as a bulwark against its total deployment. Can you explain this form of schizophrenia: reforming to contain, without legitimizing?
Hubert Guillaud: Is another digital world still possible? Perhaps, but we are not really moving in that direction, except marginally, certainly because the regulator is not playing its role and because austerity is sweeping everything away in its path. The problem is that this ideology shapes the tools, determines the scripts and the calculation methods. This overlooks the fact that there are always other ways of calculating. That we can produce tools to monitor employers' social security contribution fraud rather than tools to monitor undue payments and overpayments to citizens down to the last euro. We can use them for highly specialized research into sophisticated tax evasion schemes and focus less on the large mass of taxable individuals. We can also give more resources to agents, trust the field, and trust investigations. Doing things by hand works very well too. The people who report algorithmic malfunctions are citizens, journalists, and scientists. Most of the time, they don't have very sophisticated IT tools to investigate, but they are the ones who put together cases and take them to court so that the issue becomes public. Let's never forget their power!
The recent example of Senator Fabien Gay's work on the opacity of business support policy is quite illustrative: by denouncing the multiplicity of schemes, the difficulty in measuring their real cost, and the lack of precise monitoring of beneficiaries and effects, he showed that there was a major problem here. His work was not carried out by AI, but by women and men who analyzed piles of documents and showed that business aid must become a conditional, evaluated, and democratically controlled political tool, serving employment, social justice, and ecological transition.
François Huguet: You conclude your book by discussing the principles of data feminism put forward by two American researchers, Catherine D'Ignazio and Lauren Klein. How, in practical terms, could this ambition be translated into the design and implementation of digital technology in the public interest?
Hubert Guillaud: Indeed, data feminism is one of the most interesting proposals to have emerged in recent years. It is surprising that their book has not yet been translated into French, given that it offers a different perspective on our relationship with data, a perspective that I believe is widely shared in the world of digital mediation.
What I take away from this is above all a demonstration that things can be done differently: data and algorithms can be used for other purposes, not solely for profit, but to empower the communities and individuals concerned.
To better understand the side effects of the IT tools we use, we need to question the calculations they perform: why, how, and what biases do they contain? The problem is the complexity of the systems and their calculations, which makes those side effects difficult to understand. We should question all these calculations. This is what Camille Girard-Chanudet, Estelle Hary, and Soizic Pénicaud of the Observatoire des algorithmes publics (Public Algorithms Observatory) are doing here in France, and it is truly important work in the public interest. The principles of data feminism invite us to break free not only from the technical scripts imposed on us, but also from this atmosphere of calculation that always pushes in the same direction: reducing costs, monitoring, controlling. They show us that we can and must do things differently, for users and with users.
Sources
1. On the concept of "script," we refer in particular to the description given by sociologist Madeleine Akrich: the set of uses, roles, behaviors, and social relations that a technical device anticipates, prescribes, or makes more likely. See Madeleine Akrich, "La description des objets techniques," in Madeleine Akrich, Michel Callon, and Bruno Latour, Sociologie de la traduction, Presses des Mines, 2006.
3. Nel Escher, Jeffrey Bilik, Nikola Banovic, and Ben Green, "Code-ifying the Law: How Disciplinary Divides Afflict the Development of Legal Software," Proceedings of the ACM on Human-Computer Interaction, vol. 8, issue CSCW2, article no. 398, pp. 1-37, 2024.
4. Nastasia Hadjadji and Olivier Tesquet, Apocalypse Nerds, Divergences, 2025.
5. Solidaires Finances Publiques, AI in Taxation: Union Reflections and Actions.
6. Transparency and Evaluation of Public Aid to Businesses: A Democratic Expectation, a Guarantee of Economic Efficiency, Senate Report No. 808 (2024-2025), vol. I, July 1, 2025.
7. Council of State, Artificial Intelligence and Public Action: Building Trust, Serving Performance, study commissioned by the Prime Minister, August 2022.
8. Catherine D'Ignazio and Lauren F. Klein, Data Feminism, The MIT Press, 2020.
9. Summary of Catherine D'Ignazio and Lauren Klein's Data Feminism (The MIT Press, 2020), presenting the seven principles that guide data feminism.