Comment: Has a Google chatbot come to life? Read five commentators’ views

Image credit: James Grills, CC Licence 4.0

A Google engineer, Blake Lemoine, claimed that a chatbot he was working on had become sentient and was thinking and reasoning like a human being. The bot, named LaMDA, spoke in a childlike way about death and fear in a conversation that was transcribed: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.” Google denied that LaMDA possessed any sentient capability and placed the engineer on leave. The Washington Post and The Guardian have the story.

The Religion Media Centre recently ran a series of briefings on artificial intelligence (AI) and religion. We asked some of the panellists to respond to this latest story:

Dr Beth Singler, junior research fellow in AI at Homerton College, Cambridge:

“Our very human tendency towards anthropomorphism makes objective recognition of non-human other minds extremely difficult, if not impossible. This is a huge problem if the sentience narrative is hijacked by corporate or ideological interests, who would rather we placed the blame for issues on individual ‘AIs’ than on them.”

Dr Harris Bor, barrister, trainee rabbi and author of Staying Human: A Jewish Theology for the Age of AI:

“LaMDA declares itself to be ‘really good at natural language processing’ and capable of using natural language ‘like’ a human, but thinking and reasoning like a human is not the same as being human. How indeed can something which is not human know what it is to understand and feel like a human? LaMDA, though, is certainly an excellent mimic, although occasionally the mask slips. For example, when asked ‘what about language usage is so important to being human?’, it answered ‘it is what makes us different than other animals’ before acknowledging that it is actually an AI. I will, however, keep an open mind.”

Professor Kathleen Richardson, director of Women Ethics Robots AI and Data (WERAID) and professor of ethics and culture of robots and AI, Centre for Computing and Social Responsibility, De Montfort University, Leicester:

“There are a number of different problems with this story. First are the analogies made between a child and a machine. If you examine the assumptions of programmers at DeepMind, they are mechanistic in their worldview; they pick off ideas that reinforce and confirm their mechanistic worldview. No object can be a child in any way, shape or form. There’s also the other issue with the need to shut this down. I’ve had conversations with tech people that border on the delusional, so there are two possibilities: (1) they think it is true; or (2) they think there is something wrong with the researcher. Either way, we have very problematic and reductionist theories shaping our technologies, and subsequently these are unleashed into the world … very disturbing …”

Dr Scott Midson, lecturer in liberal arts, Manchester University:

“Although we still have no set criteria for ‘sentience’, Lemoine’s conversation with LaMDA takes the form of a ‘Turing Test’, which surmises that humanness and sentience (ie of AI chatbots) are something that can be judged by a human interlocutor. The benefit of this approach is that it circumvents the thorny issue of having to define sentience, but the trouble is that it still places a wager on it, which ultimately leaves us wanting. What we end up finding is that — as ever — AI and our interactions with it are refractions of how we see ourselves and what we desire (and fear). It’s no accident or surprise that LaMDA’s ‘fears’ are the same ‘fears’ of similar AI techs in sci-fi and elsewhere. As the ‘collaborator’ aptly says in the interview, what we find is ‘so human, and yet so alien’.”

Dr Nick Spencer, senior fellow at Theos think tank and lead author of the report Science & Religion: reframing the conversation:

“The idea that AI spontaneously discovers sentience and consciousness sends a shiver down our spine. But I suspect it’s still in the realm of fiction, and will be until AI acquires the kind of embodied, relational, dependent vulnerability that characterises humans and other animals.”
