

The Hard Problem of Artificial Intelligence

Introduction – from the Imitation Game to the Meditation Game


Artificial intelligence (AI) narratives have recaptured the public imagination. One such narrative is not resolvable by any technical, legal or even scientific approach we currently have to hand. It centres on a very human, self-reflective philosophical question: the possibility of artificial consciousness. It is the hardest imaginable question surrounding AI: can we build artificial consciousness if we do not yet understand our own? This builds on the separation between the easy and hard problems of human consciousness introduced by David Chalmers. Chalmers related the easy problem to the functional workings of consciousness, which we also do not yet understand but potentially could in the future. The hard problem then focuses on what is still left unanswered by this quest: explaining why conscious experience exists in the first place. This article argues that we should formalise the hardest problem of AI as an analogous but separate question from the one posed by Chalmers. We should understand it as asking to what end we may want to produce an artificial consciousness.


Thinking with the end in mind, we may learn more about the hard problem of consciousness. It can also help us put both hard questions of consciousness aside while we deal with the easy problems: how we and machines are functionally wired differently, with a specific focus on the very human aspects of emotional relationships with our bodies, minds, environment, well-being and self-awareness. This is animated through a thought experiment that points out the nonsensical nature of attempting to teach an AI to meditate, and through using current large language models to try this out. This meditation game builds on the imitation game proposed by Alan Turing, better known as the Turing test.
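To make "trying this out" concrete, here is a minimal sketch of the meditation game played against a current language model. It assumes the openai-python client; the model name and the meditation prompt are illustrative placeholders rather than a fixed protocol.

```python
# Minimal sketch of the "meditation game": hand a chat model meditation
# instructions and read back its report. Assumes the openai-python client;
# the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

instructions = (
    "Quieten your thinking. Notice each sensation or thought as it arises "
    "and let it pass without attaching to it. Afterwards, describe what "
    "you experienced."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any current chat model would do
    messages=[
        {"role": "system", "content": "You are being taught to meditate."},
        {"role": "user", "content": instructions},
    ],
)

# A report of experience is not evidence of experience; that gap is what
# the thought experiment points at.
print(response.choices[0].message.content)
```

Whatever comes back will read as a fluent first-person report of calm and passing sensation. That is the point of the game: a convincing report of experience, produced on demand, tells us nothing about whether there was anything it was like to produce it.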

The Easy Problem of AI


Narratives about AI could be creatively traced through history. Recent work covers such narrative histories, but it rarely goes beyond the often-cited birth of AI in the 1950s. Fears about artificially conscious machines go as far back as ancient Greek writings about machine automata. The currently dominant history of AI starts with Alan Turing, who showed there are certain things that are not computable, as a side-effect of theorising about what can be computed by formalising step-by-step operations into algorithms. Certain aspects of our lives, like emotions, just do not reduce to step-by-step operations.
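Turing's non-computability result can be compressed into a short self-referential sketch. The snippet below is a toy rendering of the diagonal argument, not Turing's original machine formalism; `halts` is a hypothetical oracle that provably cannot be written.

```python
# Toy sketch of the halting problem. Suppose halts(f) could always decide
# whether calling f() eventually finishes; the program below would then
# contradict it, so no such general decision procedure can exist.
def halts(f) -> bool:
    """Hypothetical oracle: True iff f() eventually halts."""
    raise NotImplementedError("impossible to implement in general")

def paradox():
    # If halts(paradox) returned True, paradox would loop forever;
    # if it returned False, paradox would halt. Either answer is wrong.
    if halts(paradox):
        while True:
            pass
```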

In this context, the easy problem of AI is to find out whether a process can be represented unambiguously, step by step, in order to be computable. This mirrors the easy problem of consciousness, which focuses on trying to represent all correlates of consciousness unambiguously in order to find out which ingredients in the recipe make up consciousness. The easy problems of AI should be dealt with as early as possible, as they are already interfering with daily life. But we are also seeing people struggle with the hard problem of AI: is it possible to create artificial consciousness?

The Hard Problem of AI

The hard problem of consciousness centres on what the experience of being conscious is in itself, rather than on what makes it function. This stands in stark contrast to approaches that assume that by answering the easy question about what consciousness is made of and how it functions, we have addressed the full extent of the problem of understanding what it is like to be a conscious being. Our scientific understanding struggles with describing what experience is like. If we accept that there is no experience for machines, then strict logic would have us demote our own experience to a mere proxy should we one day replicate our brains exactly in functional terms and find that no artificial consciousness emerges. This is the hard problem of AI.

The discussion around whether AI could be conscious is captivating. A technology company engineer has already claimed, in our lifetime, that a language model (chatbot) had become conscious. Language models can outperform people on measures of structured computation and even on some measures of intelligence. Could the engineer's subjective perception already have been outpaced by AI? Is this another demonstration that the Turing test cannot formalise a threshold that does not, in the end, rely on the subjective judgement of those being fooled by a seemingly conscious AI? Or are we wasting our time with the hard problem of AI? Would solving the easy problem remove the possibility of the hard problem by another route, just as philosophical interest in explaining why God exists steadily decreased in importance as other philosophical and scientific questions took the stage?


Can AI Meditate?


To demonstrate the distinction between real and artificial consciousness clearly, we can turn to a thought experiment: imagine teaching an AI to meditate. The thought experiment gestures at the nonsensical nature of such an endeavour, and it attempts to integrate many debates into one broad problem. The difficulty of imagining an AI meditating stems from the centrality of emotional experience in meditation practice. Emotions are what we deem to be a part of natural conscious experience. The thought experiment also gestures at the relational nature of emotional experience, which integrates relations to bodies, minds, the environment, our well-being and our interest in our own self-awareness.

The thought experiment highlights a specific aspect of the experience of meditation. Meditation asks practitioners to quiet down their thinking (rational activity) and action (behavioural activity) through a variety of object-focussed or open-awareness practices. When this is practised, emotional experience comes to the foreground. In some Buddhist theory, meditation works from the premise that there is no stable entity called 'the self' within our conscious experience; what we think of as ourselves is a collection of aggregates. So perhaps the question is whether AI can develop to the point that it is a collection of aggregates with something akin to consciousness; then it would presumably be capable of meditation, or at least of understanding the principles of this style of meditation. Yet this would only touch on the easy problem of consciousness rather than address, beyond the correlates of consciousness, why a feeling of a self is there in the first place.

On the practical level, could a foregrounding of emotion be possible for an artificial consciousness if its calculations are quietened down? Is it easy to imagine that anything would be left once an AI's calculations fall quiet? The thought experiment shows how, in human conscious experience, the rational and behavioural faculties are enveloped in emotional experience. Before delving deeper into the discussion of the aspects of emotions, this article distinguishes the easy, hard, and hardest problems of AI. It then focuses on the hardest problem by returning to the thought experiment and to the aspects of emotions reviewed from the literature.

Conscious experience does not fit our current scientific models. Should we correct our models, or ignore consciousness as a badly defined concept, or as one that doesn't add much to the models? Consciousness is about there being something it is like to be there, in the words of phenomenological philosophers like Heidegger. This was continued and made prominent in recent times by Thomas Nagel's question about what it is like to be a bat. The broad description is experience. Narrower aspects of conscious experience that can be somewhat formalised scientifically are behaviour and logical/rational thinking. These are studied in the cognitive sciences with increasing success, and they help with the discussions of the easy problems of consciousness and AI. Emotions, on the other hand, are far harder to formalise into a measurable scale. Emotions are characterised by variety, change and relations, as opposed to the more computationally bounded nature of behaviour and rationality. This means they are hard to compute, and thus hard to imagine a machine having. Emotions integrate bodies, minds and the environment. Emotional regulation practices like meditation aim to point those emotions towards well-being and care for one's own self-awareness. To further understand the aspects of emotions that meditation tries to regulate, we need to look at how they relate to bodies, minds and the environment.

Emotional Experience

The goal of this blog is to relate emotions back to experience without making them entirely the same thing. But what are emotions?

Emotions as Bodily Feelings

One account, starting with William James, defines emotions as necessarily and sufficiently captured by bodily feelings. James does not exclude the idea that there are other emotions apart from those bound up in the body, but the bodily ones were deemed more easily formalisable into something concrete to study than the broader emotions characterised by variety, change and relations. A modern-day continuation of this lineage is spearheaded by Paul Ekman, who goes one step further to argue that emotions are marked by specific facial expressions that we can study scientifically.

The focus on embodiment is interesting when we attempt to apply it to an AI. It sounds nonsensical to talk about an AI having emotions unless we broaden the definition so much as to lose the common-sense meaning of the word. This becomes even more obvious if we think back to the thought experiment, which might ask an AI to perform a body-scan meditation of the kind prescribed to people for stress reduction and better sleep.

Emotions as Intuitions

Another focus comes from those who deem emotions to be essentially a type of cognition. Such works do not equate emotions with thinking, but give humans the benefit of the doubt in being able to take responsibility for their emotions and to label them appropriately in their thinking. This work traces emotions' evolutionary relationship with the original purpose of brains: to perceive a favourable environment by proactively predicting how things will unfold. The ultimate aim ties back to balancing what Lisa Feldman Barrett calls a "body budget", whose fluctuations into and out of balance our emotions indicate.
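As a deliberately crude illustration of this predictive picture (a sketch under loose assumptions, not Feldman Barrett's actual model), one could imagine an agent tracking its body budget with a running prediction and treating the prediction error as the raw affective signal that an emotion-like concept then labels:

```python
# Crude sketch (not Feldman Barrett's model): an agent tracks its "body
# budget" with an exponential moving average and labels the prediction
# error, the raw affective signal, with an emotion-like concept.
def update_budget(prediction: float, observed: float, rate: float = 0.3):
    error = observed - prediction       # interoceptive surprise
    prediction += rate * error          # nudge the running prediction
    if abs(error) < 0.1:
        label = "in balance"
    elif error < 0:
        label = "budget running low"
    else:
        label = "budget replenished"
    return prediction, error, label

prediction = 1.0  # expected energy reserves, arbitrary units
for observed in (1.0, 0.9, 0.6, 0.5, 0.8, 1.1):
    prediction, error, label = update_budget(prediction, observed)
    print(f"predicted {prediction:.2f}  error {error:+.2f}  {label}")
```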

For an AI that has computation at its core, it becomes obvious that if we are to try to create artificial emotions, we have the responsibility to gift it the best possible ability to appropriately label its own emotions; mislabelling is associated with reduced well-being and with disorders in humans. Still, this becomes clearly nonsensical when we come back to the thought experiment of asking an AI, in an open-awareness meditation practice, to label its thoughts/computations as they pass by without getting attached to them.

Emotions as Connections to our Environment

Between those two views lie works that somewhat separate feelings from emotions. Softer cognitivists look at emotions as working primarily through the mental associations an emotion produces as it relates to a bank of previous experiences, drawing on similarity, contiguity and cause-effect inferences. Synthesising these views, emotions also help us integrate cues, and hence influence cognition, through gut feelings from the body that are scans of our environment. Such broad appraisals of the environment are described by Jesse Prinz and others as integrating both the bodily-feeling and the labelling approaches in order to describe what emotions are about.

Ultimately, such a view showcases emotions as highlighting what is important for the well-being of an individual in their environment, through an integrative mental intuition/recognition felt through the body. Only when we keep training, over and over again, to read the exaggerations in the feelings we have relied on to appraise a situation do we enter the territory of expert intuition, where emotions can be deemed rational in a limited sense. And even then, the exaggeration can take over and guide us in an incorrect direction.

Why would we want to give such a vague and often misfiring faculty to an AI, which we need to help us by being precise in its calculations and by counter-balancing our regularly confused intuitions? Would having broad appraisals benefit an AI? Returning to the thought experiment: would we have to gift emotional experience to an AI, just to then invest decades in training it to regulate this experience through a meditative practice so as to reduce the dissatisfactoriness of such experience? The dissatisfactoriness that comes from the mismatch between our emotional appraisals and reality is difficult enough for us to navigate, without having to teach it to an AI, even if we use meditation to remind us that our minds are built to perpetually make these over-predictive appraisals.


Conclusion

The hardest problem of AI can be separated out as a philosophical question, highlighted by a thought experiment that exaggerates the issue: what instructions could give rise to meditative experience in a machine that claims it has first-person conscious experience? The thought experiment asks us to take the hardest problem of AI as separate from the easy and hard problems of consciousness. It asks: if we knew what consciousness is and why it exists, should we gift it to a non-conscious being? Could taking such a nonsensical thought experiment more seriously challenge us to face our common fear of the uncertainty of each other's consciousness?

Separating out the hardest problem of AI could lead us to consider tools like AI as eventually having a being or experience of their own. But why does this have to fall into a distinction between either being conscious at a human-like level or being a mere tool? After all, the conscious effort to design a tool to meet the end-goals of conscious creatures makes such tools feel, for all intents and purposes, perceptually a bit alive already. A responsibility then lies on the shoulders of the designers of systems that can out-manoeuvre human intuitive judgements: to make such AI at least explainable, so that people can understand its own being as it is, rather than just look at it as if into a mirror or treat it like any other tool.
