I am getting inundated with posts about “conscious” AI and end-of-humanity scenarios involving a machine takeover à la Skynet. Can machines be intelligent in the human sense? Yes, they can, and by some metrics they may even be more intelligent than we are. Have you ever had a conversation online and wondered whether you were chatting with a real person or some fancy computer program? Those are the questions the Turing Test tries to answer. Proposed by computer science pioneer Alan Turing in 1950, the test is like a game of pretend, but for machines.
Here’s the gist: a human judge chats with two hidden entities, one a human and the other a computer program. If the judge can’t reliably tell which is which based on the conversation alone, then the program is considered to have “passed” the Turing Test.
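The setup above can be sketched as a tiny simulation. This is a hypothetical toy, not a real evaluation: the two respondents are stand-in functions, and the judge here has no usable signal, so it can only guess. Accuracy near 50% is exactly what “the judge can’t reliably tell which is which” means.

```python
import random

# Hypothetical stand-in respondents. In a real test these would be a live
# person and a chatbot; here both give the same stock answer, so the judge
# receives no distinguishing signal at all.
def human_respondent(question):
    return "I'd have to think about that."

def machine_respondent(question):
    return "I'd have to think about that."

def run_imitation_game(judge, questions, rounds=1000):
    """Each round, the machine randomly takes seat 'a' or 'b' and the judge
    guesses which seat holds the machine. Returns the judge's accuracy:
    a value near 0.5 means the machine is indistinguishable, i.e. it
    'passes' the Turing Test."""
    correct = 0
    for _ in range(rounds):
        machine_is_a = random.random() < 0.5
        transcript = {}
        for q in questions:
            answer_a = machine_respondent(q) if machine_is_a else human_respondent(q)
            answer_b = human_respondent(q) if machine_is_a else machine_respondent(q)
            transcript[q] = (answer_a, answer_b)
        guess = judge(transcript)  # judge returns 'a' or 'b'
        if guess == ("a" if machine_is_a else "b"):
            correct += 1
    return correct / rounds

random.seed(0)
# A judge facing identical answers can only pick a seat at random.
acc = run_imitation_game(lambda transcript: random.choice("ab"),
                         ["What is love?"])
print(round(acc, 2))  # hovers around 0.5: the machine passes
```

The interesting part is what this sketch leaves out: nothing in the scoring loop touches the respondents’ inner states. The test measures only the judge’s hit rate on transcripts, which is exactly the limitation discussed below.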

Seems simple, right? Not quite.
Based on the current state of chatbot technology, there is little doubt that the best models could pass the Turing Test. But here’s the thing: passing the Turing Test doesn’t mean a machine is truly conscious or sentient. It just means it’s really good at playing a role. Imagine a great actor completely embodying a character: they can say and do all the right things, but they’re not actually feeling those emotions. A machine cannot have subjective experience, meta-awareness, or an internal life.
The term “conscious” is at the heart of the misunderstanding. On the analytic idealist view, consciousness is the root of Being, and we are local, dissociated nodes of one universal consciousness.[1] David Chalmers articulated the “hard problem of consciousness”: physical accounts can describe behavior but cannot explain or predict subjective experience, so a machine built from physics alone cannot be shown to be conscious.[2]
The Turing Test might be a good benchmark for a machine’s ability to reason and communicate, but it doesn’t tell us anything about its inner world. In fact, AI scientists often don’t even know how AI comes up with its responses—the so-called “Black Box” issue. Maybe someday AI will reach a point where it can not only mimic human conversation but also have its own thoughts and feelings. But for now, the Turing Test is just a game of pretend, and a pretty fascinating one at that.
What do you think? Could a machine ever be sentient? Let’s chat in the comments!
[1] Kastrup, B. (2019). Analytic idealism: A consciousness-only ontology (PhD dissertation). Radboud University Nijmegen.
[2] Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200-219.