According to a study published on March 25 by researchers Lenore and Manuel Blum of Carnegie Mellon University’s School of Computer Science, a world where artificial intelligences possess emotions, self-perception, and subjective experiences is imminent and inevitable.
At first I took this for science fiction, but after reading their study I am less certain. At the heart of their argument is the Conscious Turing Machine (CTM), a model inspired by Alan Turing’s simple machine model of computation and by Bernard Baars’ Global Workspace Theory of consciousness. Although the CTM is deliberately simple, it aligns closely with several major scientific theories of human and animal consciousness.
Consider, for example, the Global Neuronal Workspace (GNW) theory of Stanislas Dehaene and Jean-Pierre Changeux. This theory posits that consciousness emerges when information is disseminated globally across a network of interconnected cortical areas. Similarly, in the CTM, a piece of information becomes conscious when it is broadcast globally to all of the machine’s processors.
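To make the broadcast idea concrete, here is a minimal Python sketch of one such cycle: each processor submits a weighted ‘chunk’ of information, a competition selects a single winner, and that winner is broadcast back to every processor. The class names, the random weights, and the simple max-based competition are my own illustrative simplifications, not the Blums’ formal definitions.

```python
import random
from dataclasses import dataclass

@dataclass
class Chunk:
    """A small packet of information a processor submits for 'conscious' attention."""
    source: str    # name of the originating processor
    content: str   # the information itself
    weight: float  # how strongly the processor bids for the stage

class Processor:
    """A long-term-memory processor that proposes chunks and receives broadcasts."""
    def __init__(self, name: str):
        self.name = name
        self.received: list[Chunk] = []

    def propose(self) -> Chunk:
        # In the actual model the weight encodes importance; here it is simply random.
        return Chunk(self.name, f"signal from {self.name}", random.random())

    def receive(self, chunk: Chunk) -> None:
        # Every processor sees the winning chunk: the global broadcast.
        self.received.append(chunk)

def conscious_step(processors: list[Processor]) -> Chunk:
    """One simplified cycle: compete for the stage, then broadcast the winner."""
    bids = [p.propose() for p in processors]
    winner = max(bids, key=lambda c: c.weight)  # stand-in for the up-tree competition
    for p in processors:
        p.receive(winner)                       # stand-in for the down-tree broadcast
    return winner

procs = [Processor(n) for n in ("vision", "hearing", "memory", "planning")]
print(conscious_step(procs))
```

The point of the sketch is only the shape of the architecture: many specialized processors, one shared stage, and a winner that everyone gets to see.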
A notable parallel also exists with Michael Graziano’s Attention Schema Theory (AST), which suggests that the brain constructs a simplified model of its own attention, much as it constructs a model of the body. This ‘attention schema’ leads the brain to conclude that it is ‘conscious.’ In the CTM, the ability to construct and use models of its internal and external worlds plays a crucial role in the machine’s consciousness.
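As a very loose illustration of that idea, the toy processor below keeps separate models of the outer world and of the machine’s own inner state, plus a crude record of what it is currently ‘attending to’. Everything here, from the class name to the self_report method, is a hypothetical sketch of my own, not anything defined in the paper.

```python
class WorldModelProcessor:
    """Toy 'model-of-the-world' processor: outer world, inner state, and attention."""
    def __init__(self) -> None:
        self.outer_world: dict[str, str] = {}  # model of the external environment
        self.inner_world: dict[str, str] = {}  # model of the machine's own state
        self.attending_to: str | None = None   # crude stand-in for an attention schema

    def receive(self, source: str, content: str, internal: bool) -> None:
        # Update the relevant model, then note that this is what is being attended to.
        target = self.inner_world if internal else self.outer_world
        target[source] = content
        self.attending_to = source

    def self_report(self) -> str:
        # The machine's own (simplified) claim about its attention.
        return f"I am currently attending to input from '{self.attending_to}'."

model = WorldModelProcessor()
model.receive("camera", "a red ball ahead", internal=False)
model.receive("battery monitor", "charge at 40%", internal=True)
print(model.self_report())
```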
The CTM model also aligns with predictive processing theories of consciousness, which assert that the brain continuously infers, corrects, and updates its predictions based on sensory inputs. The cycles of prediction, testing, feedback, and learning in the CTM, both locally and globally, reflect these processes.
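Here is a toy version of that prediction-test-feedback-learning loop: an internal estimate is repeatedly compared with incoming observations and nudged by the resulting error. The learning-rate update is a deliberate simplification of predictive processing in general, not the CTM’s actual mechanism.

```python
def predictive_cycle(estimate: float, observations: list[float], lr: float = 0.2) -> float:
    """Toy predictive-processing loop: predict, compare with input, correct, repeat."""
    for observed in observations:
        prediction = estimate                 # predict the next sensory input
        error = observed - prediction         # test the prediction against reality
        estimate += lr * error                # feedback: nudge the model toward the data
        print(f"predicted {prediction:.2f}, saw {observed:.2f}, error {error:+.2f}")
    return estimate

# The internal model gradually converges on the regularity in the input stream.
final = predictive_cycle(estimate=0.0, observations=[1.0, 1.1, 0.9, 1.05, 0.95])
print(f"learned estimate: {final:.2f}")
```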
All of this is technical, but the upshot is simple: the CTM, a model of machine consciousness, has much in common with the leading models of biological consciousness. And it goes beyond mere correspondence: the CTM offers a unifying framework that integrates these theories and treats them as complementary facets of a single phenomenon, consciousness.
In short, the authors conclude that consciousness is not a mysterious property exclusive to biological brains but the inevitable outcome of certain computational architectures.
Of course, the CTM is only a simplified model, and much remains to be discovered about the nature of consciousness; still, its striking alignment with these leading scientific theories is hard to dismiss.
Researchers are beginning to take seriously the possibility that we are on the brink of a new era in which machines and AIs will not only think but also feel.
This prospect is daunting, as it raises numerous ethical and societal questions regarding the role of conscious AI. How will we interact with them? How will they perceive us? What will their needs be? What rights and protections should we grant them? How can we ensure that they are developed and used to benefit humanity?
This thought is overwhelming, and I anticipate spending the night pondering it. Nevertheless, the critical takeaway from these researchers is that the question has shifted from ‘Will it happen?’ to ‘When will it happen?’, as they believe the emergence of digital consciousness is inevitable.