Machines of Caring Grace - Boston Review
The goal should be to support humans, not to replace them.
Morozov poses a provocative question, asking how AI might have been directed to different ends than the ones that drive the runaway industry today. As with any technology, we need to question both the technical imperatives and the underlying human values and uses. In the words of the decades-old slogan of Computer Professionals for Social Responsibility, “Technology is driving the future … it is up to us to do the steering.”
Morozov also accurately points out the dominant role of the “Efficiency Lobby” in steering the direction for AI so far, as well as many other modern computing technologies. The question to be asked from a socially meaningful point of view, however, is not where else we could have gone in an alternative world, but how we move forward from here.
That is not to say that learning from the past isn’t useful. There were indeed alternatives of the sort Morozov seeks from the very beginning of AI and kindred technologies. A visionary example was Gordon Pask’s Musicolour machine, built in 1953 in collaboration with Robin McKinnon-Wood, which translated musical input into visual output in a way that learned from the interaction with the musician operating it. As Pask put it:
Given a suitable design and a happy choice of visual vocabulary, the performer (being influenced by the visual display) could become involved in a close participant interaction with the system. He trained the machine and it played a game with him. In this sense, the system acted as an extension of the performer with which he could co-operate to achieve effects that he could not achieve on his own.
This and other explorations like it in subsequent decades did point in a direction that the world (or, to be more precise, the commercial technology developers) did not choose to take. But is this the direction in which we should be looking for a broad alternative to current AI?
I am not as enamored as Morozov seems to be with the world of Storm’s “flâneur.” I agree that there is something attractive about the image of playfulness, imagination, originality, with no problems to solve, no goals to pursue. But there are deeper human consequences and opportunities that are at stake when we design technologies. What Morozov leaves out in his efficient-versus-playful dichotomy is the role of human care and concern. This is evident in the way he talks about intelligence, which he sees as the measure of being human. Thus he seeks alternative kinds of “non-teleological forms of intelligence—those that aren’t focused on problem solving or goal attainment.”
But care is not a form of intelligence. The philosopher John Haugeland famously said “the trouble with artificial intelligence is that computers don’t give a damn.” This is just as true of today’s LLM-based systems as it was of the “good old-fashioned AI” Haugeland critiqued. Rather than a kind of intelligence, care is an underlying foundation of human meaning. We don’t want to fill the world with uncaring playful machines any more than with uncaring efficiency generators.
Morozov has also missed the main underlying points of the examples he cites from my work with Fernando Flores. The Coordinator was indeed marketed with offers of increased organizational efficiency, but the underlying philosophy reflected a deeper view of human relationships. It was centered on the role of commitment as the basis for language. The Coordinator’s structure was designed to encourage those who used it to be explicit in their own recognition and expressions of commitment, within their everyday practical communications. The theme of this and Flores’s subsequent work is of “instilling a culture of commitment” in our working relationships, allowing us to focus on what we are creating of value together.
My analogy of AI to bureaucracy evokes not just the mechanics of bureaucratic rule-following but the hollowing out of human meaning and intention. We are all familiar with a bureaucratic interaction where our interlocutor says, “I’m sorry, I understand your concern, but the rules say that you have to …” That is, care for the lifeworld of the person being told what to do cannot be a consideration. To return to Haugeland’s insight, the bureaucratic system doesn’t give a damn. It’s designed that way on purpose, to remove human subjectivity and judgment from matters even when they are of crucial, life-determining importance.
Morozov recognizes that as long as AI remains largely under corporate control, placing our trust in this technology to solve big societal problems might as well mean placing our trust in the market. But putting it under government control, given the current nature of governments in the world, may not be an improvement. The problem isn’t how to engender AI systems that are more playful and less boring but to lay out what it would mean to create and deploy systems that are supportive of human concern and care. I agree these would be systems designed to enhance the interaction of humans, not to replace it. As outlined in Douglas Engelbart’s early vision, the goal should be intelligence augmentation rather than artificial intelligence.
There have been many calls for moving toward AI “alignment” with human values and concerns, but there is no simple mechanism of alignment that we can appeal to. As Arturo Escobar argues, conventional technology design tends to support a singular, globalized worldview that prioritizes efficiency, economic growth, and technological progress, often at the expense of cultural diversity and ecological health. This is not the result of “closed world” assumptions, but of the consequences of the process by which data are collected, networks are trained, and models are deployed.
We return to the question we started with: not “How might things have happened differently?” but “How might things be different in the future?” Morozov ends with a tantalizing proclamation: the lesson of the Latin American experiments is that “technology’s emancipatory potential will only be secured through a radical political project.” What is the radical political project of our times, within existing national and international systems of governance, that has the promise to nurture AI’s emancipatory potential? Unfortunately, this is a far more difficult and consequential question.