In a recent opinion piece published in the journal Trends in Neurosciences, a group of neuroscientists from Estonia, Germany, and Australia expresses scepticism about the prospect of building conscious machines, shedding light on the fundamental disparities between human cognition and artificial intelligence. While interactions with AI systems like ChatGPT can give the impression of conversing with a conscious entity, the authors contend that such systems are likely devoid of genuine consciousness.
The neuroscientists present a compelling case, pointing out three key factors that set human consciousness apart from current AI models. First, they highlight the absence of embodied and embedded sensory information in AI language models. Unlike humans, who derive their understanding of the world through rich sensory experiences, AI systems lack this essential connection to the physical environment.
Second, the authors emphasize that contemporary AI architectures lack critical features found in the thalamocortical system, a neural structure closely tied to conscious awareness in mammals. This disparity suggests that AI models remain far from replicating the intricacies of human consciousness.
Third, the researchers argue that the evolutionary and developmental trajectories that gave rise to conscious living organisms have no parallel in today's artificial systems. They stress that the existence of living beings depends on a complex web of biological processes, spanning cellular interactions, multi-level agency, and consciousness itself, which AI models have yet to emulate.
While there is no consensus among researchers on how consciousness arises in the human brain, the neuroscientists underline that the mechanisms underpinning AI language models are significantly less complex than those governing human consciousness. They draw attention to a fundamental distinction between biological neurons and their artificial counterparts in neural networks: real neurons are physical entities that can grow and change shape, whereas artificial "neurons" are essentially short pieces of code with no physical existence.
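To make that contrast concrete, here is a minimal, illustrative sketch of what an artificial "neuron" typically amounts to in practice: a weighted sum of numerical inputs passed through a nonlinearity. The function name and the example values are hypothetical, not drawn from any particular model or framework discussed by the authors.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Illustrative artificial 'neuron': a weighted sum of inputs passed
    through a sigmoid nonlinearity. It has no physical substrate, no growth,
    and no changing shape; it is just arithmetic performed on numbers."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid squashes to (0, 1)

# Hypothetical inputs, weights, and bias, chosen purely for illustration.
print(artificial_neuron([0.5, -1.2, 3.0], [0.4, 0.1, -0.7], bias=0.2))
```

Large language models compose vast numbers of such units, but each one is still, as the authors note, a few lines of arithmetic rather than a living cell.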
In light of these disparities, the authors caution against assuming that AI language models like ChatGPT possess consciousness. They argue that attributing consciousness to such systems underestimates the intricacy of the neural mechanisms responsible for human consciousness, signalling that building conscious machines remains a formidable, and still distant, challenge for AI development.