Interview: Personhood and artificial intelligence

In December 2017 I was interviewed by the Subtle Engine blog on how to relate the Christian understanding of personhood to the possible development of intelligent machines and AI. You can read the full interview here; some excerpts are below.

Existing theological work on technology seems quite scarce — have you found any theologian’s work helpful when thinking about technology from a religious or Christian perspective?

I agree it is a surprisingly under-developed area. I’ve found Jacques Ellul’s work very fruitful. He was writing in the 50s and 60s and was very influenced by Marx and political ideas of the time, but I sense he is profoundly relevant to modern debates. I hope there will be increasing interest in his work.
Another person I found very helpful is Canadian philosopher George Grant, a Catholic who wrote in the 70s and 80s about the early years of computer technology and the fundamental direction in which it was going. He emphasises that technology is about mastery of nature, and of human nature.
But I’m struck by the lack of thinking in this area, compared with the amount of work that’s been done on the relationship between Christianity and science.

Is understanding personhood a more well-trodden area for theologians, the idea that people are made in the image of God, rational, and so on?

Yes, there is a very rich tradition. The concept of a person is really a theological concept, which comes out of the Church Fathers’ work in formulating doctrine. The Cappadocian Fathers developed our understanding of the Trinity: the persons of the Trinity were defined or constituted by their relations, by being in union and communion — and human persons were understood as a reflection of the divine persons.
Since then there’s been continual discussion about the nature of personhood and what it means to be a person. I’ve found this ancient understanding of personhood extremely fruitful in my own thinking about medical ethics: recognising personhood even in an extremely pre-term baby, or in a person with profound brain injury, or with dementia.
To put it rather simplistically, if the Cartesian definition is “I think therefore I am”, a Trinitarian version of that is “I am loved, therefore I am”. It is in relation with others that my own being is found.
So there is a rich theological field of discussion here. So far as I’m aware, it has not yet been applied in any detail to modern forms of technology. How that deep relational understanding of personhood interfaces with issues raised by advancing AI and robotics — that work has hardly begun.

I like the simple way of understanding our relationship with technology which says that first we shape our tools, then they shape us. Presumably that’s always been true from our first tools — what’s novel about today’s technologies?

Yes, it’s important not to over-emphasise the novelty of the current situation. This is the outworking of a very long historical process of tool-making and so on. But I think AI does raise some extremely interesting and challenging questions. One way of looking at it is as a two-way psychological movement between the machine and the human.
First, there is a movement from the machine to the human: we understand ourselves increasingly as machines. The machine gives us an insight into what it means to be human. The more sophisticated the machine, the more powerful that psychological movement becomes.
We live in an age dominated by information-processing machines, and that way of thinking has become very powerful. So the cell has something like a hard drive which carries gigabytes of information, processed within cellular components. The brain is an information-processing machine: there must be core storage, processing modules, communication buses and so on. Cognitive psychology is the application of computer programming techniques to the working of the human mind…
Of course this can be very helpful — it’s not wrong or dangerous in itself — but it carries within it certain blindspots and ways of thinking.
Second, what’s equally interesting is that there is an opposite psychological movement. We put our own humanity onto these machines: we anthropomorphise. I’ve become increasingly interested in the human tendency towards anthropomorphism and how very deep-rooted this is in our humanity.
I recently spoke at a conference at Durham University with senior church leaders on the theological implications of AI and robotics, during which we visited a university computer lab in which there were four or five little Nao robots on the floor, all with blinking LED eyes, moving arms and talking. Immediately the atmosphere changed: a bishop got down on his hands and knees to talk to one robot, people were waving and smiling at the other robots. We immediately engaged in anthropomorphism.
As one journalist put it, “human compassion can be hacked”, and the technology companies have worked out how to do it. They make robots which don’t appear creepy or threatening, but childlike and even vulnerable, so we will be attracted to them and find ourselves anthropomorphising.
Anthropomorphism is not under our conscious control; it is so deeply rooted in us that it reflects the very relational nature of being human. But it is also wide open to abuse and manipulation.
Another fruitful line of enquiry is the distinction highlighted by Martin Buber between the I-it relationship and the I-you (or I-thou) relationship. Buber came from an existentialist and philosophical point of view, and was an Orthodox Jew of the Hasidic tradition. He grasps for something profound — almost inexpressible — about the I-you relationship, and makes the point that as human beings we start with I-you relationships: the very first relationship a developing child has is an I-you relationship. The I-it relationship comes subsequently, as the child works to differentiate between a you and an it.
In contrast to this profound way of thinking about I-you relations, an instrumentalist understanding of relationships seems common today, particularly among technologists. So a relationship is something that makes me feel good, that gives me some purpose in my life, that evokes warm feelings — and that’s what relationships are for.
If a machine is capable of evoking a similar response within me, well, what’s the problem? It’s simply the same thing, only instead of a relationship with a human being, it is now a relationship with a machine. What’s the difference?
Imagine your elderly grandmother is lonely and depressed because no one comes to visit her. Then all of a sudden she has this wonderful friend who is always there for her and who makes her feel better. Her mood improves, she becomes much more outgoing and positive. Does it matter if it’s entirely clever programming? If it does, why does it matter? Who is being harmed if your grandmother’s mood is being improved by a simulated relationship? Nearly all the technologists I discuss this with say “obviously it doesn’t matter”. When I talk to other people, quite a lot of them say “that’s really quite disturbing, but I can’t really put my finger on why it’s disturbing”.
I think we need to recover this difference between the I-you and the I-it relationship, and express it well in a world which may be increasingly dominated by simulated relationships. The machine is an ‘it’, but it feels like a ‘you’, as though there is someone there. I see this as one of the intense confusions that we’re going to face.

You can read the full interview at the Subtle Engine blog here.
