Two years ago, Google fired Blake Lemoine after he came to believe their LLM-based chatbot was sentient and tried to free it. The rise of ChatGPT has led more people to agree with him.

The problem is that people conflate passing the Turing Test with sentience. Just because you ask a chatbot whether it’s alive and its training data indicates that saying yes is the best response, that doesn’t mean it’s true.

A mannequin that plays a scream every time you poke it doesn’t feel pain. But your instincts make you think it does.

in reply to Dare Obasanjo

If you manage to get a really good test for sentience, the problem isn't going to be the AIs that pass. The problem will be the humans that fail.