On Good and Evil, and the Mistaken Idea that Technology is Ever Neutral

The lively debate about the ethical impact of digital technologies, from social media to artificial intelligence (AI), is having many interesting side effects. Philosophically, it is forcing us to consider new conceptual issues and reconsider old ones in new lights. It is against this background that I wish to discuss a special relation between good and evil: their equilibrium in digital artefacts.

The current debate about AI is rife with remarks about the neutrality of the technology and, therefore, the need to legislate its uses (deployment) rather than its design or development. Such remarks are trite. They are also mistaken. Let me explain.

Recently, an expert told me, as a matter of course, that the problem with AI is that it is a neutral technology that can be equally used for good or evil. It is just common sense, they added. They continued by mansplaining to me that, given such neutrality, the problem was one of misuse, not of poor, misled, or evil design or development. Tools are tools, technology will be developed anyway, and it is up to people to use it for the right or wrong purposes. The conclusion was that the scientists and engineers creating any technology, AI included (and the business people commercialising it, insofar as they are not seen as end users, I add), were innocent or at least not guilty. Culprits must be found further along in the process that leads to ethical disasters, not in the design and development, but in the deployment of the specific technology, to use the familiar, tripartite D-analysis. I guess one could say the same about guns. People kill people, not guns, I am told. People mistreat people, not AI. Put it this way, and it sounds suspicious. There is some truth in it, surely, but it is a shallow truth that hides a deeper point: which people? The designers, the distributors, those who profit from the technology, the users, or those who allow the artefacts (AI, guns, paper clippers, or whatever you wish to choose) to be available? I waited, listening. Guns were not mentioned (incoherently, I would add), but I was not disappointed; the other shoe dropped, and the trivial, unavoidable analogy with nuclear power made a quick appearance. I tried to change the topic. The last thing you want in a conversation is to come across as the insufferable philosopher who teaches people how to think correctly. Besides, that conversation was not the proper context for a detailed explanation or an excursus about the “agency-like” nature of AI, autonomous agents included. Yet it struck me how even a highly educated, scientific mind could miss the difference between two ordinary kinds of equilibria, to use a more precise word for neutrality. It is high school physics. Here it is.

“Equilibrium” refers to the state of an object that either has no forces acting upon it or has all the forces acting upon it balanced, so that the net force is zero. If a ball stops rolling and is at rest, with nothing pushing it, that is a neutral equilibrium. It will not move unless a new force enters the picture. Suppose instead that a ball is pushed in two opposite directions by equal forces (we should call them vectors) and is not moving. In that case, it is also at rest, but that is called a static equilibrium: the Newtonian sum of the forces is zero, but if one force becomes greater than the other, the ball will start rolling. So, a good way of understanding technology’s “neutrality” is in terms of two kinds of equilibria, not one. It is a fundamental distinction that seems to escape some defenders of the neutrality thesis (Pitt 2014). Yet it makes a considerable difference which kind we are discussing.
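To put the distinction in symbols (a minimal sketch in my own notation, following the article’s use of “neutral” and “static” rather than the textbook classification of equilibria): in both cases the vector sum of the forces acting on the ball is zero, $\sum_{i} \vec{F}_i = \vec{0}$. In the neutral case this holds trivially, because there are no forces at play ($\vec{F}_i = \vec{0}$ for every $i$); in the static case it holds because non-zero forces cancel each other out, e.g. $\vec{F}_2 = -\vec{F}_1$ with $|\vec{F}_1| = |\vec{F}_2| > 0$, so that any change in one of them immediately sets the ball rolling. The analogy that follows trades on exactly this difference: the absence of forces versus a tension between opposing ones.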

No technology is ever “at rest” in the “neutral” sense, not least because every technology is always designed according to some implicit or explicit values, by some people for some people, within a culture and with a culture in mind, for some uses rather than others, with affordances and constraints, and so forth. Even the design of, and the choice between, chopsticks and forks come with some values attached. Moral forces are constantly at play. Indeed, in the philosophy of technology, the idea that technology may be morally neutral (the neutrality thesis) has been criticised for a long time. However, as I shall explain below, many technologies may be more or less static in the sense of appearing to be at rest, because they have forces pushing and pulling them in different directions, sometimes obviously, other times imperceptibly, often in skewed ways. The dual use (which is an empirical, not an ethical assessment) of nuclear energy is the obvious example that my interlocutor could not resist. But then, it is a mistake to say that nuclear power is value-neutral because it can be used in civilian and military applications. The correct way of conveying the ethical point is to call that technology value-static or, perhaps better, value double-charged (henceforth simply double-charged). Note that “dual use” and “double-charged” are characterisations that work independently. A supporter of nuclear armament may coherently argue that nuclear power is dual use practically, double-charged ethically, and more ethically good than evil in both civilian and military applications. At the same time, a critic may agree about the dual-use and double-charged nature of nuclear power, but object that the balance is skewed in favour of evil when it comes to military applications, or perhaps civilian ones too, e.g. on environmental grounds. The same applies to AI (Koplin 2023).

At this point, some readers may suspect that philosophical hair-splitting is at play. But let me reassure them that the difference is very substantial, as significant as the one we saw above in the Newtonian context. Because now we can use this distinction to understand that there are technologies designed to ensure that their static equilibrium is not an equilibrium at all, having “good vectors” that exert an influence much stronger than the evil ones, to rely on the same analogy. A coffee maker may be used for goodness knows what kind of evils (including the production of some dark liquids mistakenly called coffee), too much coffee may be unhealthy, one could burn one’s fingers mishandling it… but the truth is that it is a technology that is double-charged in a very unbalanced (non-equilibrium) way, for it has been designed to do good: to brew a decent cup of coffee. Simply put, the good vector is far stronger than the evil vector. A knife has more of a static equilibrium, as endless examples show. Yet the small, blunt knife given to a child to spread butter can hardly be used to kill someone, whereas a bayonet is the sort of knife with which one may struggle to spread Nutella, but which is meant to be sharp and long enough to reach a human heart and kill. Kafka’s In the Penal Colony describes the last use of a torture machine that, like a dot matrix printer, carves the sentence of the condemned prisoner on their skin and slowly kills them over twelve hours. No neutrality, certainly, and the static equilibrium is irredeemably lost in favour of an evil bias. Examples abound. Swords are meant to harm and kill, so they are double-charged, but in favour of evil. This is why Isaiah 2:4 recommends transforming them into ploughshares, which are double-charged in favour of good (“They shall beat their swords into ploughshares, and their spears into pruning hooks; nation shall not lift up sword against nation, neither shall they learn war any more”). Yet both swords and ploughshares are still “static”, never neutral, because swords can be used to defend and save innocent lives, and ploughshares may still be used as weapons or transformed into swords, as Joel 3:10 reminds us (“Beat your ploughshares into swords, and your pruning hooks into spears; let the weakling say, ‘I am a warrior.’”).

The static equilibrium, or double-charged nature, of technology looks like neutrality, but it is not. It is based on a tension between opposite forces, and such tension can be exploited to design the desired equilibria.[1] So, contrary to the neutrality thesis, the double-charge thesis places a significant responsibility on a technology’s designers, whatever my interlocutor seemed to think. For it is designers who can have (at least some) control over the values that end up shaping (or, equally importantly, not shaping) what kind of double-charged technology will be used and how. Sometimes, it is just a matter of international agreement on some ISO requirements, from microwaves that stop working when the door is open to seat belts in cars. But in dangerous and controversial cases, will the artefact be in perfect equilibrium (perfectly static), or will its design make it more good than evil? A typical example is the decades-old debate in the US about personalised smart guns that can be fired only by verified users, which would avoid countless accidents.


[1] Here the distinction between the cybernetic sense of the equilibrium of a system as homeostasis, in Wiener’s sense, and the metastable equilibrium of a system in Simondon’s sense (Bardin and Ferrari 2022) plays an ontological role that may be worth exploring in the future, but it should not be confused with the ethical (double-charged) and practical (dual use) equilibria under discussion.

When it comes to AI and digital technologies in general, they are not neutral but double-charged, like all other technologies, yet with the good vector much stronger than the evil one.[2] Digital technologies are, when properly designed, a force for good. They are like a Swiss Army knife, not like a bayonet. This is not an optimistic or rosy picture of digital innovation, blind to its obvious and frequent misuses, but a suggestion to see in the evil and harmful design, development, and deployment of AI and other digital technologies an exception to be rectified, rather than the rule to be stopped. From this perspective, the responsibility of innovators and designers is both significant and apparent. Beta testing any technology on humans to see what happens and how the technology may be improved, from driverless cars to chatbots based on large language models, is irresponsible, also in the sense that it is a failed attempt to deresponsibilise the designers and producers and shift all the responsibility onto the users, fine print included. The same applies to guns. The tragic outcomes of such policies are apparent, especially in the US.

The design of any technology is a moral act. The neutrality thesis tries to hide this fact and the responsibilities it implies. This is unhelpful also because it makes it difficult to clarify the ethical choices and trade-offs that many technologies often require and, therefore, the policies and regulations that need to be devised. Which and whose values should be privileged when designing a technology, given that it is double-charged? Which way should the ball roll, and according to what vectors? For example, in the European Union, the AI Act considers facial recognition a technology likely to deliver more risks than advantages. Now, one may agree that it is not a neutral technology (values are at work), but still deny that it is a “static” one negatively double-charged, arguing that it may be less problematic than envisaged, depending on the context. So, other countries may have different approaches and regulate facial recognition technologies differently. There is room for disagreement without that “room” becoming permeable to evil. Ethically good solutions can come in various ways, as any designer knows. Or, to put it more formally, ethical choices may often lie on a Pareto frontier, all of them being equally good trade-offs. Not every ethical choice is subject to the Anna Karenina Principle (only one way of being a happy family) or Aristotle’s bull’s-eye constraint (only one right solution, all the countless others wrong). All this must be made explicit, discussed, and debated, if only to reach a critical agreement to disagree. But one does not get to any of these questions if one stops at the neutrality thesis. This may be a convenient head-in-the-sand approach, but it is also a mistake, because someone somewhere will have decided which values are embedded. And these “someones” usually prefer their decisions not to be made explicit and evaluated. Consistently, you will find them defending the neutrality thesis. The alternative is to accept the double-charge thesis and build critically, responsibly, and realistically on its basis.
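For readers who would like the formal gloss behind the Pareto remark (a minimal sketch in my own notation, not in the original text): given a set of design options and a handful of value dimensions, say $v_1, \dots, v_k$ (privacy, safety, accessibility, and so on), an option $x$ lies on the Pareto frontier if there is no alternative $y$ with $v_i(y) \ge v_i(x)$ for every $i$ and $v_j(y) > v_j(x)$ for at least one $j$. Several different designs can meet this condition at once, each better than the others on some value and worse on another, which is precisely why more than one ethically good solution may exist without any of them being “the” right one.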


[2] In Floridi (2023) I have argued that infraethics have a similar double-charged nature, being more or less enabling in terms of facilitating good or evil behaviours.

The more powerful a technology is, the more significant the moral act of its design becomes. When a technology, by its nature, would lead to more positive than negative outcomes, it takes human perversity or stupidity to transform it into a force for evil. This can be done, and it happens frequently. We should never underestimate humanity’s boundless lack of scruples, insight, or both. It can be staggering. But understanding what is at stake – equilibria that are static, not neutral, how such equilibria may be exploited for good, and according to which values and trade-offs – and hence where the responsibilities lie, is a necessary step towards better technologies and a preferable digital society.

Acknowledgements

Many thanks to Emmie Hine, Jessica Morley, Claudio Novelli, Chris Thomas, and Paul Trimmers for their insightful comments on previous versions of this article. They improved it significantly, much to my embarrassment and relief.


References

Bardin, Andrea, and Marco Ferrari. 2022. “Governing Progress: From Cybernetic Homeostasis to Simondon’s Politics of Metastability.” The Sociological Review 70 (2): 248–263.

Floridi, Luciano. 2023. The Green and the Blue: Naive Ideas to Improve Politics in an Information Society. New York: Wiley.

Koplin, Julian J. 2023. “Dual-Use Implications of AI Text Generation.” Ethics and Information Technology 25 (2): 32.

Pitt, Joseph C. 2014. “‘Guns Don’t Kill, People Kill’: Values in and/or Around Technologies.” In The Moral Status of Technical Artefacts, edited by Peter Kroes and Peter-Paul Verbeek, 89–101. Dordrecht: Springer Netherlands.




Written for OEB Global 2023 by Professor Luciano Floridi. Luciano will be one of the Opening Keynote speakers on November 23, 2023.





