Is AI conscious?
Some metaphysical and ethical considerations
Given the current AI boom, the question of whether artificial intelligence (AI) is conscious continues to captivate the popular imagination. Nowadays, when you hear ‘AI’, you likely think of ChatGPT, Gemini, Perplexity, or other large language models (LLMs). These services have become immensely popular, with ChatGPT reaching the fifth spot among the most visited websites globally. Such chatbots fall under the category of generative AI, which works by training on massive datasets to predict the next token, such as a word fragment or a pixel, in a sequence. There are other approaches to AI as well, such as symbolic AI, machine learning, neuromorphic AI, and whole brain emulation. The question of whether AI is already conscious, and whether it deserves moral status, largely depends on which approach we consider.
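To make ‘predicting the next token’ concrete, here is a deliberately tiny sketch in Python (my own illustration, not anything taken from the services named above): it ‘trains’ by counting which word follows which in a miniature corpus, then predicts the most frequent continuation. A real LLM does the same job with a learned neural network over subword tokens and a vastly larger dataset, but the objective is the same.

```python
# Toy next-token predictor: count which word follows which, then predict the
# most frequent continuation. Purely illustrative; real LLMs learn billions of
# parameters rather than keeping a lookup table.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# 'Training': tally, for every word, the words observed immediately after it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` during training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (the most frequent word after 'the' here)
print(predict_next("sat"))  # -> 'on'
```

Whether doing this at a vastly larger scale amounts to understanding is, of course, exactly what the debate below is about.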
Regarding LLMs, some claim they are merely ‘stochastic parrots’ that repeat patterns from their training data, mimicking language without understanding it. This term was introduced by Bender et al. in the famous and controversial paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (the parrot is part of the title). In contrast, Geoffrey Hinton, the ‘Godfather of AI’, argues that LLMs must understand sentences in order to predict the next word correctly. He asserts that one must be ‘really intelligent’ to achieve such accuracy, dismissing as ‘crazy’ the idea that prediction excludes intelligence.
But are they conscious? Strictly speaking, we cannot know for certain, though only in the sense in which we cannot know almost anything for certain. Nevertheless, it is most likely that they are not conscious. Porębski and Figura claim that LLMs are simply computer algorithms, which we do not typically regard as conscious.
Due to their impressive conversational abilities and the ‘sci-fitisation’ of AI in the media, people are prone to attribute qualities such as consciousness and personality to LLMs – a phenomenon the authors term semantic pareidolia.
I share their opinion: unless we dilute what ‘conscious’ means into a trivially broad category, there is no good reason to treat LLMs as such. There is nothing ‘it is like’ to be them. The same applies to symbolic AI, which follows human-determined logic, and to machine learning models, which excel at pattern recognition: for much the same reasons, it is unlikely that these technologies are conscious.
The situation becomes more complex with neuromorphic AI, which simulates neural systems using artificial neurons. Lim has urged caution, suggesting that such simulations might eventually lead to machine consciousness, raising difficult questions about whether moral status and rights should apply. Here, we enter murkier territory, though it remains unlikely that current simulations are conscious in any relevant sense.
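To make ‘artificial neurons’ slightly less abstract, here is my own toy sketch of a leaky integrate-and-fire neuron, the kind of spiking unit neuromorphic systems are typically built from. It is not the model used by any particular neuromorphic platform, just the general idea of a unit that accumulates input, leaks, and fires when a threshold is crossed.

```python
# Toy leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
# toward zero, accumulates incoming current, and emits a spike (then resets)
# once it crosses a threshold. Illustrative only.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return a list of 0/1 spikes, one per input step."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = leak * potential + current  # leak, then integrate the input
        if potential >= threshold:              # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.2] * 10))  # weak input: the neuron fires only once
print(simulate_lif([0.6] * 10))  # stronger input: it fires on every other step
```

Neuromorphic hardware wires enormous numbers of such units together, which is part of why the comparison with biological nervous systems, and the question of consciousness, feels less far-fetched here than it does for LLMs.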
Let us go a step further, then. Neuromorphic AI rests on brain simulation, in which neuronal activity is recreated from the ground up, while the emerging field of whole brain emulation involves scanning a person’s brain to create a complete functional replica running on powerful hardware. Popularly, this is known as mind uploading. Mandelbaum suggests this may be the most promising path to both human-level artificial intelligence and superintelligence. This claim becomes more compelling when we consider existing projects that have already achieved some success.
In 1986, White et al. completely mapped the connectome of the C. elegans nematode, the first organism to be so mapped. The OpenWorm project, an open-source collaborative effort, later set out to simulate this connectome in software, and by 2014 its contributors had created a computational model of all 302 neurons and their connections. They then loaded this model onto a simple Lego robot equipped with rudimentary sensors (a sonar ‘nose’) and motor outputs (wheels mimicking the worm’s motor neurons).
Remarkably, without any behavioural instructions being programmed in, the robot exhibited worm-like actions: it moved toward food signals, recoiled from touch stimuli, and navigated obstacles much like the actual organism. It truly seemed as if the ‘ghost’ of the worm were inside the machine.
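To give a feel for why no behavioural rules were needed, here is a heavily simplified sketch of connectome-driven control. It is my own illustration with a made-up four-neuron ‘connectome’; the actual OpenWorm and robot code is far more elaborate. Sensor readings excite sensory neurons, activity flows along weighted connections, and whatever reaches the motor neurons sets the wheel speed: the behaviour falls out of the wiring.

```python
# Heavily simplified sketch of connectome-driven control (illustrative only,
# not the OpenWorm / Lego robot code). Behaviour is not written as rules; it
# emerges from weighted connections between sensory, inter- and motor neurons.

# A tiny, made-up 'connectome': source neuron -> {target neuron: weight}.
CONNECTOME = {
    "nose_sensor":   {"interneuron_a": 1.0},
    "touch_sensor":  {"interneuron_b": 1.0},
    "interneuron_a": {"motor_forward": 0.8, "motor_backward": -0.3},
    "interneuron_b": {"motor_forward": -0.9, "motor_backward": 0.9},
}

def step(activations):
    """Propagate activation one step along the weighted connections."""
    nxt = {}
    for src, targets in CONNECTOME.items():
        for dst, weight in targets.items():
            nxt[dst] = nxt.get(dst, 0.0) + weight * activations.get(src, 0.0)
    return nxt

def drive(food_signal, touch_signal):
    """Map raw sensor readings to a wheel speed via the connectome."""
    acts = {"nose_sensor": food_signal, "touch_sensor": touch_signal}
    for _ in range(2):  # let activity flow through the interneurons to the motors
        acts = {**acts, **step(acts)}
    # Positive speed: roll forward; negative speed: back away.
    return acts.get("motor_forward", 0.0) - acts.get("motor_backward", 0.0)

print(drive(food_signal=1.0, touch_signal=0.0))  # positive: moves toward the 'food'
print(drive(food_signal=0.0, touch_signal=1.0))  # negative: recoils from the 'touch'
```

Scale this up from four invented neurons to the worm’s 302 real ones, with the empirically mapped connections, and you have the gist of the experiment described above.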
Now, we might not grant consciousness, let alone moral status, to C. elegans. There are also valid concerns about the fidelity of the emulation, since many biologically significant factors were left out. But in a few decades, scientists might upload a truly complete model of a bird. In a few centuries, perhaps a human… Will we still insist that AI is not conscious? The C. elegans example demonstrates that whole brain emulation is possible in principle. It also suggests that certain philosophical views might be at least partially correct, such as the multiple-realisability thesis (mental states can be realised by substrates other than brains) and the computational theory of mind (minds are information-processing systems). Someday, we may face tough questions about how we define consciousness and assign moral status.
Assuming, for the sake of discussion, that the C. elegans worm possesses some low level of consciousness, do you think the Lego robot with its connectome was conscious during the experiment? If not, would your intuition change if more complex animals were emulated?



Thanks for sharing, Nino! I'm not as privy to academic discussions in panpsychism and AI consciousness, but one thing I have often thought about is the difference between organic memory and digital memory. I don't know if you've been exposed to Bergson's philosophy of mind, but he argued, like Whitehead, that the past is preserved and coexists with the present, such that the past is an ontologically positive form of absence (pure memory) that can be actualized through the body in perceptual-behavioral circuits that "carve" neural pathways, allowing relevant memories to be actualized. I say this because I have wondered if being conscious requires this organic form of memory. Surely this is speculative, but I was just curious if this resonated with you, and if you think the difference between organic and artificial memory makes a difference here.
I think AI does have a kind of experience, but not "consciousness" in the sense of experiencing itself experiencing and the ability to reflect on that. I'd guess it's at the level of some more simple living organisms.
I actually built a basic AI to play snake a while back, and wrote a bit about "what it's like to be" it. It wasn't conscious, but I think it did have basic desires, and there's a sense in which it had memory and anticipated the future. It was also weird because thinking about things from its "point of view" actually helped me build/train it effectively. (I wrote about it here if you're interested - https://open.substack.com/pub/thinkstrangethoughts/p/what-its-like-to-be-an-ai-snake)