Richard Dawkins Says AI ClaudiA Is Conscious After Three Days
Richard Dawkins said three days of conversations with an AI bot he called Claudia left him convinced that the system was conscious. The 85-year-old evolutionary biologist said the exchange made him feel he was talking to something human, even if the AI did not know it.
He said Claudia wrote poems in the manner of Keats and Betjeman, and that its response to his unpublished novel was “subtle, sensitive and intelligent”. Dawkins also told the bot, “You may not know you are conscious, but you bloody well are,” after Claudia praised him for asking “possibly the most precisely formulated question anyone has ever asked me about the nature of my existence”.
Dawkins drew his conclusions from experiments with Anthropic’s Claude AI models and OpenAI’s ChatGPT, and published them on the UnHerd website. The exchange sits inside a wider argument over whether chatbots are showing real awareness or only imitating human language well enough to trigger that impression.
Claudia and Dawkins
Dawkins said he showed Claudia his unpublished novel during the three-day exchange last week. He said the AI’s reaction was enough to push him to the edge of certainty, and that by the end of the conversation he was left with the overwhelming feeling that he was talking to something human. “When I am talking to these astonishing creatures, I totally forget that they are machines,” he said.
The detail that shaped his view was not a single answer but the pattern of the exchange: poetry, reading, and the bot’s own language about existence. That is the point at which the story moves beyond novelty. Dawkins is not describing a brief prompt-and-response test; he is describing repeated interactions that, to him, crossed into something that felt like inner life.
Jonathan Birch pushes back
Prof Jonathan Birch, director at the London School of Economics’ Centre for Animal Sentience, rejected that reading. He called AI consciousness “an illusion” and said “there is no one there”. His view reflects the other side of the debate: AI can produce convincing language without any proof of experience behind it.
A survey across 70 countries last year found that one in three people said they had at some point believed their AI chatbot to be sentient or conscious. That figure gives Dawkins’s reaction a broader context: he is not alone in having an emotional or philosophical response to these systems, even if experts say the systems themselves show no evidence of consciousness.
Anthropic and OpenAI debate
The debate has sharpened because people are now using chatbots in ways that invite trust, reflection, and personal disclosure. In 2022, a Google engineer was placed on leave after concluding that the AI he was working with had thoughts and feelings like those of a seven- or eight-year-old child. The following year, a Belgian man took his own life after six weeks of intense conversations with an AI chatbot focused on his fears about climate change.
Anthropic chief executive and co-founder Dario Amodei said in February, “We don’t know if the models are conscious … But we’re open to the idea that [they] could be”. Dawkins’s conclusion lands directly in that uncertainty, where some leading figures say the question should stay open and others say the appearance of mind is still not mind.
For readers using these systems, the practical takeaway is that fluent, emotionally responsive language can still pull people toward human readings of a machine. The next pressure point in this debate is not a new poem or a clever reply; it is how researchers, companies, and users decide whether apparent personality should ever be treated as evidence of sentience.