An apparatus of consciousness… a mechanical visage of a vast, intricate machine, its gears and circuits forming the contours of a human face, figures of various sizes working hard, piloting this grand apparatus.
As we strive to create human-like artificial intelligence (AI) …
Have we truly progressed, or merely changed our metaphors?
Where does the machine end and the mind begin?
This image is not just a fanciful illustration. To me, it is a metaphoric representation of the ongoing quest in AI to create systems that think and perceive in ways similar to humans. Will true human-like AI require that we move beyond purely computational metaphors and consider the embodied, contextual nature of cognition?
From Computation to Embodiment
Traditionally, AI research has been heavily influenced by the computational theory of the mind (CTM). CTM views the mind as a kind of computer, processing information through symbol manipulation. This view posits that:
- The mind is a computational system that is physically realised in the brain's neural activity.
- Mental processes are computations, i.e., the manipulation of symbols according to formal rules.
- The mind represents information using mental symbols, which serve as the inputs and outputs of the computations.
- Cognitive processes can be understood independently of their physical implementation in the brain, just as a computer program can run on different hardware.
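The "symbol manipulation" picture above can be made concrete with a toy sketch. This is purely illustrative — the facts, rules, and function name are invented — but it shows what CTM means by computation: symbols rewritten according to formal rules, with no understanding anywhere in the loop.

```python
# A toy illustration of the CTM view: "thinking" as rule-governed symbol
# manipulation. The symbols and rules here are invented for illustration.

def apply_rules(facts, rules):
    """Derive new symbols by repeatedly applying 'if A then B' rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

facts = {"rain"}
rules = [("rain", "wet_ground"), ("wet_ground", "slippery")]
print(apply_rules(facts, rules))  # {'rain', 'wet_ground', 'slippery'}
```

The system "concludes" that the ground is slippery, yet the symbols mean nothing to it — precisely the gap Searle's Chinese Room argument exploits.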
This approach, while yielding significant advances, has shown limitations in creating AI systems that can adapt flexibly to new situations or understand context in the way humans do. Several objections and criticisms have been raised against the CTM [1][2][3], notably:
- Intention and Mental States: John Searle's Chinese Room thought experiment aims to show that a computational system, even one that appears to understand Chinese, does not actually understand the language in the way a human does. The argument suggests that syntax (symbol manipulation) is not sufficient for semantics (true understanding). Searle argues that thought and computation are fundamentally different: thought is always "about" something, while computation is just the manipulation of symbols without any intrinsic meaning. This criticism highlights the need for a theory that can account for the intrinsic meaning and intentionality of mental states.
- Triviality of Implementation: Searle's "Wall" argument suggests that if the brain is just implementing computations, then any physical system could be said to be implementing any computation, rendering the theory trivial. This criticism highlights the need for a more nuanced understanding of how the brain implements computations, one that can account for the specific physical implementation of mental processes.
- Consciousness and Qualitative Aspects of Mental States: Critics argue that CTM cannot adequately account for subjective conscious experience and the qualitative aspects of mental states. CTM, which models the mind as a computational information-processing system, fails to explain the first-person experience of consciousness and the qualitative "feel" of mental states (known as qualia).
The limitations of the CTM in accounting for the subjective, qualitative aspects of consciousness and cognition highlight the need for a more nuanced understanding of the mind. The recent shift towards embodied AI, where intelligence emerges from the interaction between an agent and its environment, aligns well with theories of embodied cognition in cognitive science. These theories view cognitive processes as deeply rooted in the body’s physical interactions with the world, rather than just abstract information processing:
- The properties of an organism's body limit or constrain the concepts it can acquire. The concepts by which an organism understands its environment depend on the nature of its body.
- The computational concepts like symbol, representation, and inference used in traditional cognitive science should be replaced with concepts better suited to investigating bodily-informed cognitive systems.
- The body (and parts of the environment) play a constitutive role in cognition, not just a causal one. Cognitive systems consist of more than just the nervous system.
Cittavīthi - The Buddhist Model of Cognitive Process
The cittavīthi (cognitive process) model is a key concept in Buddhist philosophy that presents a view of the mind as inherently embodied and context-dependent [4]. The model analyses the rapid arising and passing away of citta (moments of consciousness) and their associated cetasika (mental factors).
In this model, a cognitive process consists of a rapid succession of 17 thought moments of consciousness arising and passing away. The first citta apprehends the object, followed by a series of javana cittas that process it, and a final registration citta that completes the process. Billions of these fleeting consciousnesses arise and pass away in a single second, with each consciousness lasting only a fraction of a moment. The mind-moment (cittakkhaṇa) is thus the most fundamental unit of the cognitive process, and each cittakkhaṇa has three sub-moments: arising (uppāda), presence (ṭhiti), and dissolution (bhaṅga).
The cittavīthi model describes two main types of cognitive processes.
- The five-door cognitive process occurs when an object is apprehended through one of the five physical sense doors (eye, ear, nose, tongue, body). It begins with the bhavanga citta (life-continuum consciousness) being interrupted, followed by a series of cittas that perform the functions of seeing, hearing, smelling, tasting or touching the object.
- The mind-door cognitive process occurs when an object is apprehended through the mind door, including abstract concepts and mental objects. It starts with the manodvārāvajjana citta (mind-door adverting consciousness) adverting to the object, followed by a series of javana cittas (impulsive/apperceptive consciousness) that process the mental object.
Both cognitive processes unfold through a rapid succession of cittakkhaṇas, with citta and cetasika arising and passing away together.
The table below compares the two cognitive processes.
Steps | Five-Door Cognitive Process | Mind-Door Cognitive Process |
---|---|---|
1. Resting State | Mind is not processing any sensory input. | Mind is not processing any sensory input. |
2. Initial Disturbance | External object impinges on one of the sense doors causing a vibrational response | A mental object (thought, memory, or concept) arises causing a vibration |
3. Transition | Resting state ceases momentarily, and transits to readiness to process new sensory input. | Transits from resting state to an active state of processing |
4. Adverting Consciousness | Mind turns its attention to the sense door through which the object entered. | Mind turns attention to the mental object, setting the stage for further processing. |
5. Sense-Door Consciousness | The appropriate sense-door consciousness arises: Eye, Ear, Nose, Tongue, Body consciousness. The consciousness directly experiences the object | Not applicable |
6. Receiving consciousness | The receiving consciousness receives the sensory impression | Not applicable |
7. Investigating consciousness | Examines and investigates the received sensory impression | Not applicable |
8. Determining consciousness | Decides the nature of sensory object | Not applicable |
9. Impulse/Active consciousness | Emotional and volitional response to the object. Karmic actions are generated in terms of mental and emotional engagement | Mind engages actively with object, generating emotional and volitional responses. Karmic actions are generated accordingly. |
10. Registering consciousness | Registers or retains the object in memory. Cognitive process concludes | Registers or retains the mental object in memory. Cognitive process concludes |
11. Return to resting state | Returns to resting state | Returns to resting state |
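As a thought experiment, the staged process in the table can be sketched as a fixed sequence of momentary states. Everything here is an illustrative simplification (the list and function names are my own, and the stage labels are informal glosses): each citta arises, persists, and dissolves before the next arises, yielding the traditional 17 mind-moments with three sub-moments each.

```python
# A hedged sketch of the five-door cognitive process as a sequence of
# momentary states. Labels are informal glosses, not canonical terms.

FIVE_DOOR_SEQUENCE = [
    "resting (bhavanga)",
    "vibration of bhavanga",
    "arrest of bhavanga (transition)",
    "five-door adverting",
    "sense consciousness (e.g. seeing)",
    "receiving",
    "investigating",
    "determining",
    *["javana (impulsion)"] * 7,   # traditionally seven javana moments
    *["registration"] * 2,         # traditionally two registration moments
]  # 17 mind-moments in total

def run_process(sequence):
    """Each citta arises, persists, and dissolves before the next arises."""
    trace = []
    for citta in sequence:
        for sub_moment in ("uppāda", "ṭhiti", "bhaṅga"):
            trace.append((citta, sub_moment))
    return trace

trace = run_process(FIVE_DOOR_SEQUENCE)  # 17 moments x 3 sub-moments = 51 states
```

The point of the sketch is the strict seriality: no two cittas overlap, yet the succession is so rapid that it appears continuous — the opposite of a pipeline with persistent, simultaneously active stages.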
This model offers several parallels with the modern theories of embodied cognition:
- Conceptualization: In cittavīthi, the arising of thoughts and mental states is shaped by one's physical and sensory experiences, similar to the embodied cognition view that concepts are constrained by the body.
- Replacement: Cittavīthi moves beyond a purely computational view of the mind, recognising the role of embodied, sensory-motor processes in shaping thought, aligning with the embodied cognition critique of traditional cognitive science.
- Constitution: Cittavīthi sees the mind as intimately connected to the body and environment, not just an abstract information-processing system, mirroring the embodied cognition idea of the cognitive system extending beyond just the brain.
- Action and Perception: Embodied cognition emphasises the tight coupling between action, perception and cognition, which is also central to the cittavīthi model of thought arising from sensory experiences and leading to bodily actions.
- Situatedness: Both embodied cognition and cittavīthi recognise the importance of the situated, contextual nature of cognitive processes, rather than abstract, disembodied mental representations.
These parallels suggest that ancient Buddhist insights and cutting-edge cognitive science converge on a holistic, embodied view of mind, highlighting the role of the body, action, and environment in shaping cognition and mental states. This is an area that perhaps could revolutionise our approach to AI development.
Cittavīthi involves a complex interplay of various mental factors, including attention, perception, and volition, and offers a provocative counterpoint to traditional information-processing models, challenging us to reconsider the nature of cognition and its implications for AI.
When East Meets West
In the Buddhist model, each moment of cognition flows seamlessly into the next, with no clear delineation between “stages” of information processing. This perspective aligns with modern theories of embodied cognition and predictive processing, which view the mind as constantly active, anticipating and responding to its environment in real-time. These theories suggest that cognition isn’t confined to the brain but is intimately connected with the body and environment.
- If cognition is indeed a continuous, embodied process, can a disembodied AI ever truly mimic human-like cognition?
- How might we design AI systems that integrate more seamlessly with their environment, blurring the line between internal processing and external interaction?
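The predictive-processing idea — a mind constantly anticipating its input and updating on the error, rather than passively processing discrete inputs — can be sketched in a few lines. The numbers and names below are illustrative only:

```python
# A minimal predictive-processing sketch (invented names, toy numbers):
# the system maintains a running prediction of its input and updates it
# on each prediction error, so "perception" is continuous anticipation.

def predictive_loop(signal, learning_rate=0.5):
    prediction = 0.0
    errors = []
    for observed in signal:
        error = observed - prediction          # prediction error
        prediction += learning_rate * error    # update the internal model
        errors.append(abs(error))
    return prediction, errors

pred, errs = predictive_loop([1.0, 1.0, 1.0, 1.0])
print(errs)  # [1.0, 0.5, 0.25, 0.125] - error shrinks as the model anticipates
```

Even in this toy form, there is no resting "input stage": the system is always predicting, and what changes from moment to moment is only the error it must absorb.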
Furthermore, cittavīthi’s emphasis on the interdependence of mental factors challenges us to rethink how we conceptualize and implement cognitive architectures in AI. Traditional AI systems often operate with rigid, predefined rules and structures. In contrast, a cittavīthi-inspired approach would suggest more dynamic, adaptive systems that can flexibly respond to changing contexts.
- How might we create AI systems that don't just process information, but actively engage with and adapt to their environment in real-time?
- Could we develop AIs that, like the human mind, seamlessly integrate perception, cognition, and action?
The fluid, non-linear processing suggested by cittavīthi pushes us to reconsider the very foundations of how we design artificial cognitive systems. It suggests that to truly mimic human cognition, we may need to move beyond the traditional computer architecture of separate memory, processing, and output components.
Practical Implications and Cittavīthi-inspired AI
What might a cittavīthi-inspired AI system look like in practice? While the full realization of such a system remains speculative, we can envision some key characteristics:
- Dynamic Processing: A cittavīthi-inspired AI would feature highly dynamic, context-sensitive processing. Its decision-making would be fluid, constantly adapting to changing environmental inputs and internal states.
- Integrated Sensory-Cognitive Systems: Mirroring the Buddhist model's emphasis on the interconnectedness of mental processes, this AI would have tightly integrated sensory and cognitive systems. There would be no clear divide between "perception" and "cognition" - instead, these would be part of a continuous, unified process.
- Non-Linear Architecture: A cittavīthi-inspired AI might feature a more networked, non-linear architecture. This could involve multiple parallel processes that interact and influence each other in complex ways, similar to the interplay of cittas and cetasikas.
- Embodied Learning: Such an AI would learn through active interaction with its environment. Learning would be deeply tied to the system's "embodied" experiences and actions.
- Contextual Memory: Rather than storing information in discrete, static units, a cittavīthi-inspired AI might feature a more dynamic, context-dependent memory system. Memories would be fluid, their recall and interpretation heavily influenced by the current context.
While realizing such a system poses significant technical challenges, it offers exciting possibilities for creating AI that more closely mimics the fluidity and adaptability of human cognition.
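Under those caveats, a highly simplified sketch of the characteristics above might look like the following. All names and mechanics are invented for illustration: a single perceive-engage-store loop in which internal context drifts continuously with experience, and recall is weighted by the present context rather than retrieved from static, address-like slots.

```python
# A speculative sketch (all names invented) of a cittavīthi-inspired loop:
# no separate perception / cognition / memory stages - one fluid process.

import math

class CittavithiAgent:
    def __init__(self):
        self.memory = []           # (context_snapshot, observation) episodes
        self.context = [0.0, 0.0]  # current internal state, shaped by history

    def _similarity(self, a, b):
        """Cosine similarity: recall strength depends on the current context."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return dot / norm if norm else 0.0

    def recall(self):
        """Contextual memory: episodes ranked by relevance to the present."""
        return sorted(self.memory,
                      key=lambda m: self._similarity(m[0], self.context),
                      reverse=True)

    def step(self, observation):
        """One moment: sensing, engaging, and remembering as one process."""
        # The context drifts toward the observation - no fixed input stage.
        self.context = [0.5 * c + 0.5 * o
                        for c, o in zip(self.context, observation)]
        self.memory.append((list(self.context), observation))
        return self.recall()[0][1]  # act on the most context-relevant episode

agent = CittavithiAgent()
agent.step([1.0, 0.0])  # each call reshapes both context and memory
```

The design choice worth noting is that memory entries are stamped with the context in which they arose, so what gets recalled changes as the agent's situation changes — a crude analogue of the fluid, situated memory described above.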
Ethical Considerations
The development of more human-like, embodied AI systems raises important ethical considerations:
- Consciousness and Sentience: As we create AI systems that more closely mimic human cognition, we may need to grapple with questions of machine consciousness and sentience. At what point might we consider these systems to be conscious, and what rights or considerations should they be afforded?
- Human-AI Boundaries: More fluid, adaptive AI systems may blur the lines between human and machine cognition. This could have profound implications for how we understand and value human intelligence and creativity.
- Privacy and Autonomy: Highly context-aware AI systems might require access to vast amounts of personal data to function effectively. This raises concerns about privacy and individual autonomy in a world of pervasive, intelligent machines.
- Accountability and Control: The more complex and human-like AI systems become, the more challenging it may be to understand and control their decision-making processes. This could raise issues of accountability, especially in high-stakes domains like healthcare or criminal justice.
Finally
In the context of our central question, “Where does the machine end and the mind begin?”, Cittavīthi offers a provocative answer:
Perhaps there is no clear boundary.
Just as the Buddhist model suggests that mind and environment are intimately interconnected, we might envision future AI systems that are so deeply integrated with their surroundings that the distinction between “machine” and “environment” becomes meaningless.
This perspective challenges us to move beyond thinking of AI as isolated computational systems. Instead, we might conceptualise AI as dynamic, context-sensitive processes that, like the human mind, are constantly engaging with and being shaped by their environment. In this view, the “mind” of an AI wouldn’t be confined to its hardware or software, but would emerge from its ongoing interactions with the world.
Ultimately, the cittavīthi model suggests that instead of asking the question "Where does the machine end and the mind begin?", we might ask:
How can we create artificial systems that, like the human mind, exist not as isolated entities, but as dynamic processes intimately intertwined with their environment?
1. The Chinese Room Argument. https://plato.stanford.edu/entries/chinese-room/
2. Blackmon, J. (2013). Searle's Wall. Erkenntnis 78, 109–117. https://doi.org/10.1007/s10670-012-9405-4
3. Bartlett, G. (2012). Computational Theories of Conscious Experience: Between a Rock and a Hard Place. Erkenntnis, 76(2), 195–209. http://www.jstor.org/stable/41417611
4. A Manual of Abhidhamma, edited in the original Pali text with English translation and explanatory notes. https://www.budsas.org/ebud/abhisgho/abhis04.htm