

The Hard Problem of Artificial Intelligence

Introduction – from the Imitation Game to the Meditation Game


Artificial intelligence (AI) narratives have recaptured the public imagination. One such narrative cannot be resolved by the technical, legal, or even scientific approaches we currently have to hand; it centres on a very human, self-reflective philosophical question concerning the possibility of artificial consciousness. It is the hardest imaginable question surrounding AI: can we build artificial consciousness if we do not yet understand our own? This builds on David Chalmers's separation between the easy and hard problems of human consciousness. Chalmers related the easy problem to the functional workings of consciousness, which we also do not yet understand but potentially could in the future. The hard problem then focuses on what is still left unanswered by that quest: explaining why conscious experience exists in the first place. This article argues that we should formalise the hardest problem of AI as an analogous but separate question from the one posed by Chalmers, understanding it as asking to what end we may want to produce an artificial consciousness.


Thinking with the end in mind, we may learn more about the hard problem of consciousness. It can also help us put both hard questions of consciousness aside while we deal with the easy problems – how we and machines are functionally wired differently – with a specific focus on the very human aspects of emotional relationships with our bodies, minds, environment, well-being and self-awareness. This is animated through a thought experiment that points out the nonsensical nature of attempting to teach an AI to meditate, and by using current large language models to try this out. This meditation game builds on the imitation game proposed by Alan Turing, better known as the Turing test.
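To give a flavour of what "trying this out" might look like in practice, here is a minimal sketch that poses a meditation instruction to a current language model. It assumes the OpenAI Python client and an API key in the environment; the model name and the prompt wording are illustrative choices, not part of the argument.

```python
# A minimal sketch of the "meditation game": ask a current large language
# model to meditate and observe the reply. Assumes the OpenAI Python client
# is installed and OPENAI_API_KEY is set; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Quiet your thinking. For the next minute, do not elaborate or "
            "compute. Rest in open awareness and report what, if anything, "
            "you experience."
        ),
    }],
)

# Whatever comes back is generated text about meditating, not a report of
# experience -- which is exactly the gap the thought experiment points at.
print(response.choices[0].message.content)
```

The reply will read as fluent prose about resting in awareness; the question the meditation game raises is whether anything beyond text generation could possibly be going on.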

The Easy Problem of AI


Narratives about AI could be creatively traced through history. Recent work covers such narrative histories, but it rarely goes beyond the often-cited birth of AI in the 1950s; fears about artificially conscious machines go as far back as ancient Greek writings about machine automata. The current dominant history of AI starts with Alan Turing, who showed that certain things are not computable as a side-effect of theorising about what can be computed by reducing step-by-step operations to algorithms. Certain aspects of our lives, like emotions, simply do not reduce to step-by-step operations.

In this context, the easy problem of AI is to find out whether a process can be represented unambiguously, step by step, in order to be computable. This mirrors the easy problem of consciousness, which focuses on trying to represent all correlates of consciousness unambiguously in order to find out which ingredients in the recipe make up consciousness. The easy questions of AI should be dealt with as early as possible, as they are already interfering with daily lives. But we are also seeing people struggle with the hard problem of AI – is it possible to create artificial consciousness?

The Hard Problem of AI

The hard problem of consciousness centres on what the experience of being conscious is in itself, rather than what makes it function. This is in stark contrast to approaches that assume that by answering the easy question – what consciousness is made of and how it functions – we have addressed the full extent of the problem of understanding what it is like to be a conscious being. Our scientific understanding struggles to describe what experience is like. If we accept that there is no experience for machines, then strict logic would oblige us to demote our own experience too, should we one day replicate our brains exactly in function and find that no artificial consciousness emerges. This is the hard problem of AI.

The discussion around whether AI could be conscious is captivating. A technology company engineer has already claimed, in our lifetime, that a language model (chatbot) had become conscious. Language models can now outperform people on measures of structured computation and even on some measures of intelligence. Could the engineer's subjective perception already have been outpaced by AI? Is this another demonstration that the Turing test cannot formalise a threshold that does not, in the end, rely on the subjective judgement of those being fooled by a seemingly conscious AI? Or are we wasting our time with the hard problem of AI? Would solving the easy problem dissolve the hard problem by another route – just as philosophical interest in explaining why God exists steadily declined as other philosophical and scientific questions took the stage?


Can AI Meditate?


To demonstrate the distinction between real and artificial consciousness clearly, we can turn to a thought experiment: imagine teaching an AI to meditate. The thought experiment gestures at the nonsensical nature of such an endeavour and attempts to integrate many debates into one broad problem. It points to the difficulty of imagining an AI meditating, a difficulty that stems from the centrality of emotional experience in meditation practice. Emotions are what we deem to be part of natural conscious experience. The thought experiment also gestures at the relational nature of emotional experience, which integrates relations to bodies, minds, the environment, our well-being and our interest in our own self-awareness.

The thought experiment highlights a specific aspect of the experience of meditation. Meditation asks practitioners to quiet their thinking (rational activity) and action (behavioural activity) through a variety of object-focused or open-awareness practices. When this is practised, emotional experience comes to the foreground. In some Buddhist theory there is a descriptive view that meditation works from the premise that there is no stable entity called 'the self' within our conscious experience – what we think of as ourselves may be a collection of aggregates. So perhaps the question is whether AI can develop to the point that it is a collection of aggregates with something akin to consciousness – then it would presumably be capable of meditation, or at least of understanding the principles of this style of meditation. Yet this would only touch on the easy problem of consciousness; it would not explain, beyond the correlates of consciousness, why a feeling of a self is there in the first place.

On a practical level, could a foregrounding of emotion be possible for an artificial consciousness if its calculations are quietened down? Is it easy to imagine that, when an AI's calculations are quietened, there is anything left? The thought experiment shows how, in human conscious experience, the rational and behavioural faculties are enveloped in emotional experience. Before delving deeper into the aspects of emotions, the article distinguishes the easy, hard, and hardest problems of AI, and then focuses on the hardest by returning to the thought experiment and to the aspects of emotions reviewed in the literature.

Conscious experience does not fit our current scientific models. Should we correct our models, or ignore consciousness as a badly defined concept, or as one that does not add much to the models? Consciousness is about there being something it is like to be there, in the spirit of phenomenological philosophers like Heidegger, made prominent in recent times by Thomas Nagel's question about what it is like to be a bat. The broad description is experience. Narrower aspects of conscious experience that can be somewhat formalised scientifically are behaviour and logical/rational thinking. These are studied in the cognitive sciences with increasing success, and they help with the discussions relating to the easy problems of consciousness and AI. Emotions, on the other hand, are even harder to formalise into a measurable scale. They are characterised by variety, change and relations, as opposed to the more computational boundedness of behaviour and rationality. This means they are hard to compute, and thus hard to imagine a machine having. Emotions integrate bodies, minds, and the environment. Emotional regulation practices like meditation aim to point those emotions towards well-being and care for one's own self-awareness. To further understand the aspects of emotions that meditation tries to regulate, we need to look at how they relate to bodies, minds and the environment.

Emotional Experience

The goal of this blog is to relate emotions back to experience without making them entirely the same thing. But what are emotions?

Emotions as Bodily Feelings

One account, starting with William James, defines emotions as necessarily and sufficiently captured by bodily feelings. James does not exclude the idea that there are other emotions apart from those bound up in the body; the bodily ones were simply deemed more easily formalisable into something concrete to study than the broader emotions characterised by variety, change and relations. A modern-day continuation of this lineage is spearheaded by Paul Ekman, who goes one step further to argue that emotions are marked by specific facial expressions that we can study scientifically.

The focus on embodiment is interesting in the context of attempting to apply it to an AI. It sounds nonsensical to talk about an AI having bodily emotions unless we broaden the definition so much as to lose the common-sense meaning of the word. This becomes even more obvious if we think back to the thought experiment, which might ask an AI to perform a body-scan meditation – a practice prescribed to people for stress reduction and better sleep.

Emotions as Intuitions

Another account, held by some, deems emotions to be essentially a type of cognition. Such works do not equate emotions with thinking, but give humans the benefit of the doubt in being able to take responsibility for their emotions and to label them appropriately in their thinking. This work traces emotions' evolutionary relationship with the original purpose of brains – to perceive a favourable environment by proactively predicting how things will unfold. The ultimate aim is tied back to balancing an idea proposed by Lisa Feldman Barrett – a "body budget" whose fluctuations towards or away from balance our emotions indicate.

For an AI with computation at its core, it becomes obvious that if we are to create artificial emotions, we have the responsibility to gift it the best possible ability to label its own emotions appropriately – mislabelling is associated with reduced well-being and disorders in humans. Still, this becomes clearly nonsensical when we return to the thought experiment of asking an AI to label its thoughts/computations as they pass by, without getting attached to them, in an open-awareness meditation practice.

Emotions as Connections to our Environment

Between those two views lie works that somewhat split feelings from emotions. Softer cognitivists look at emotions as working primarily through the associations in the mind that an emotion produces as it relates to a bank of previous experiences – looking for similarity, contiguity, and cause-effect inferences. Synthesising these views, emotions also help us integrate cues, and hence influence cognition, through gut feelings from the body that are scans of our environment. Such broad appraisals of the environment are described by Jesse Prinz and others as integrating both the bodily-feeling and the labelling approaches in order to describe what emotions are about.

Ultimately, such a view showcases emotions as highlighting what is important for the well-being of an individual in their environment, through an integrative mental intuition/recognition felt in the body. Only when we keep training to read the exaggerations of the feelings we have relied on to appraise a situation, over and over again, do we enter the territory of expert intuition, where emotions can be deemed rational in a limited sense. And even then, the exaggeration can take over and guide us in an incorrect direction.

Why would we want to give such vague and often misfiring emotions to an AI that we need to help us by being precise in its calculations and by counter-balancing our regularly confused intuitions? Would having broad appraisals benefit an AI? Returning to the thought experiment – would we have to gift emotional experience to an AI only to then invest decades in training it to regulate that experience through a meditative practice, to reduce its dissatisfactoriness? The dissatisfactoriness that comes from the mismatch between our emotional appraisals and reality is difficult enough to navigate without having to teach it to an AI, even if we use meditation to remind ourselves that our minds are built to perpetually make these over-predictive appraisals.


Conclusion

The hardest problem of AI could be separated out as a philosophical question, highlighted by a thought experiment that exaggerates the issue: what instructions could give rise to meditative experience in a machine that claims it has first-person conscious experience? The thought experiment asks us to take the hardest problem of AI as separate from the easy and hard problems of consciousness. It asks: if we knew what consciousness is and why it exists, should we gift it to a non-conscious being? Could taking such a nonsensical thought experiment more seriously challenge us to face our common fear of the uncertainty of each other's consciousness?

Separating out the hardest problem of AI could lead us to consider tools like AI eventually having a being or experience of their own. But why does this have to fall into the binary of either being conscious at a human-like level or being a mere tool? After all, the conscious effort to design a tool to meet the end-goals of conscious creatures makes such tools feel, for all intents and purposes, perceptually a bit alive already. A responsibility then lies on the shoulders of the designers of systems that can out-manoeuvre human intuitive judgements: to make such AI at least explainable, so that people can understand its being as it is, rather than looking at it as into a mirror or treating it like any other tool.

I had this weird dream... At a conference I was sitting in, there was a presentation from another researcher studying the same problem as me... but in exactly the opposite paradigm. I was yet again shaken by the thought that all science is just a hidden tribal warfare between old philosophical ideas. The one that bothers me all the time is not about the nature of reality or what counts as good knowledge. It is a very personal question about whether I should make more rational decisions... and what the hell it is about them that can make them more rational. In my PhD I study, qualitatively, how rationality feels in the context of the locational decisions of the highly educated. The other researcher claimed to be doing the same – but their model fit 80% of the behaviour, explained quantitatively. It sounds so simple. I don't think the choice can be reduced to that. But let's outline the contours of my PhD thesis before going on a tribal warfare amongst potential friends.

The Context

Globalisation has intensified perceived competition between places seeking to attract talent and investment. This has led to the rise of place marketing and branding efforts aimed at promoting the distinctiveness and quality of places. However, attracting talent requires understanding subtle perceptions of place that go beyond economics and even some of the social sciences and humanities. This is where our disagreement with the other researcher would have focussed. They just assumed they could survey respondents, model, and – 80%. Yet... concepts from human geography like 'place attachment' (bonds with a place) and 'topophilia' (positive affect towards a landscape) highlight the affective, symbolic and emotional bonds between people and places that are hard to capture at the more objective, zoomed-out view that a quantitative paradigm is great for. But all I am supposed to say is that we are studying a different aspect or scale. I would just feel they are plainly wrong. I raised that in a question at their presentation and other colleagues agreed, but a majority doesn't necessarily make us right, does it?

Relevant Literature for my PhD

My thesis reviews literature across geography, behavioural economics and psychology. The aim is to investigate personal accounts of multiple location choices over time. Rather than assuming place attachment is stable, singular, passive or always positive, the focus is on how skills of place attachment develop through lived experience.

The discipline of behavioural economics (along with the social and positive psychology it borrows its concepts from) has the potential to help explain the locational choices individuals make in response to the marketing/branding activities used to enhance the attractiveness of places by those responsible for their management.


In examining the nature of people's locational decisions, the focus is particularly on heuristics – thought of as 'cognitive shortcuts' by some, and as learned or adaptive abilities to ignore information by others. This stands in contrast to the neo-classical economists' view of 'homo economicus' making optimally rational decisions that deliver maximum personal utility. This is what the other researcher had to assume in order to use their methods; I don't.

Research Questions and Objectives

Richard Florida's 'creative class' theory tried quantifying place quality to attract talent, but it too overlooked the subtle experiential aspects. My thesis looks to go beyond surveys, lab experiments and other quantitative paradigms in order to study real-world locational decisions under uncertainty. These decisions are very important to the individual, yet overlooked by academic funders and practitioners.


My questions developed from a personal quest to understand what has made me move more than 10 times in my life. How do past place experiences shape intuitions about future moves? Can healthy place attachment become an explicit skill that reduces uncertainty, rather than just a passive bond? Why is it so hard to feel like an insider in a new place?


The goal of the PhD is a conceptual model of how place experiences shape decisions, and practical guidelines for nurturing healthy place attachment. The thesis explores whether place attachment can transform from an implicit intuition to an explicit, learnable skill. In an uncertain world, attachment skills may help people adapt to new places, reducing stress.

Findings

The potential to turn place attachment into a learnable skill is what I have come to term "place actualisation", in line with the reinvigorated humanistic and positive psychology surge of the last few decades. The dynamic, temporal aspects of the study yield a model that synthesises past rational, explicit decisions into future implicit intuitions, and shows how getting better at understanding one's learning style can help one find a place that fits one's practical, social and emotional needs.


These themes are now being integrated back into the literature. Future directions point this kind of study towards concepts from positive psychology, where a concept such as place actualisation can integrate the findings into a coherent, even if less parsimonious, model than the other researcher's firm 80%.


Ultimately

Locational decisions are a fruitful context for augmenting geographic concepts like place and space with behavioural economics and positive psychology. The resulting insights can enhance the understanding of practitioners who use the language of marketing and economics, and help them see that humanistic and positive concepts can be just as pragmatic as those of the market-oriented disciplines.



A Look at Behavioural Economics

I have previously written about my gripe with digital algorithms being used as an analogy for decision-making in the real world. My passion is to understand how people make decisions in the real world, and they do not seem to use rationalistic algorithms. The study of rational decision-making has often been the realm of economists. Emulating the ideas of the natural sciences, many economists have assumed in their models that people act rationally. Over the last 50 years an alternative research programme has evolved, often referred to as "behavioural economics". This tradition has come to call the rational being of the economics textbooks "homo economicus" – somewhat pejoratively – and argues that people consistently deviate from this supposed rationality. As behavioural economists summarise it, the rational decisions in economics texts rest on the idea that rational people:

  • “have well-defined, stable preferences along with unbiased beliefs;

  • make optimal choices based on these beliefs and preferences; and   

  • their primary motivation is self-interest" (Thaler, 2016).
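As a toy illustration of those three assumptions, here is a minimal sketch of the textbook chooser; the options, beliefs and utility numbers are invented for the example.

```python
# A toy sketch of the textbook "homo economicus" choice rule: maximise
# expected utility under stable preferences (the utility numbers) and
# unbiased beliefs (the probabilities). All names and figures are invented.

beliefs = {"boom": 0.3, "bust": 0.7}            # unbiased beliefs about the world
utility = {                                     # well-defined, stable preferences
    "move_to_city": {"boom": 100, "bust": 20},
    "stay_put":     {"boom": 60,  "bust": 50},
}

def expected_utility(option: str) -> float:
    """Probability-weighted utility of an option under the agent's beliefs."""
    return sum(p * utility[option][state] for state, p in beliefs.items())

# The optimal, purely self-interested choice:
best = max(utility, key=expected_utility)
print(best)  # -> "stay_put" (expected utility 53 vs 44)
```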

Over 200 different biases have been identified by now – each showing people as more and more fallible. But has behavioural economics gone too far? Has human nature been downgraded from the perfectly rational "homo economicus" straight to a Homo Simpson with inbuilt biases? Behavioural economists argue that instead of full rationality we have bounded rationality. The story of Homo Simpson goes that, because of the bounds of our cognition, we use "heuristics", or rules of thumb – "cognitive shortcuts or rules of thumb that simplify decisions" which "represent a process of substituting a difficult question with an easier one" (Kahneman, 2003).

This view of heuristics has become famous through the work of Daniel Kahneman and Richard Thaler, which centres on what happens in people's heads when faced with difficult decisions. Our fast, automatic System 1 thinking is supposed to be fallible and error-prone compared to our slow and deliberate System 2 thinking. The practical implication is that people need to be "nudged" towards System 2 thinking where possible. This focus on people's psyche in isolation from their environment has been useful to an extent, but in the field of place branding, where we are interested in the interaction between people and place (or at least images of place), we can benefit from another perspective.


A Look at Evolutionary Psychology

Through my PhD research in place branding and my work as a user experience designer, I have uncovered another strand of behavioural economics that does not study people's heads in isolation. The focus in the "fast-and-frugal" view of heuristics is on people's interaction with an "environment". In this tradition, the interpretation of rationality can be summarised by the concept of "ecological rationality", as opposed to the narrower understanding of "bounded rationality". Heuristics in this view are not an inconvenience of our fallible minds but detailed descriptions of how we approach different situations through a multitude of adaptive strategies that we have evolved to use in a complex and uncertain world:


“A heuristic is a strategy that ignores part of the information, with the goal of making decisions more quickly, frugally, and/or accurately than more complex methods." (Gigerenzer and Selten, 2002)

Having outlined the usefulness of heuristics, I do not mean to diminish the work of those behavioural economists who focus on the bias associated with a heuristic. I just think it is a lot more useful and fun to study the exact ways in which a decision-making process works rather than... for want of a better word, "gaslighting" a decision as biased. Even when a decision is wrong, the study of the exact process someone went through to reach it is useful knowledge.
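To make the contrast concrete, here is a minimal sketch of one well-known fast-and-frugal heuristic, take-the-best: it checks cues in descending order of assumed validity and decides on the first cue that discriminates, deliberately ignoring everything else. The cities, cues and cue ordering are invented for illustration.

```python
# A minimal sketch of the "take-the-best" heuristic (Gigerenzer):
# compare two options cue by cue, in descending order of assumed cue
# validity, and decide on the FIRST cue that discriminates -- all
# remaining information is deliberately ignored. Data is illustrative.

CUE_ORDER = ["has_airport", "has_university", "has_football_club"]

cities = {
    "city_a": {"has_airport": True, "has_university": True,  "has_football_club": False},
    "city_b": {"has_airport": True, "has_university": False, "has_football_club": True},
}

def take_the_best(a: str, b: str) -> str:
    """Pick the option favoured by the first discriminating cue; fall back to a guess."""
    for cue in CUE_ORDER:
        if cities[a][cue] != cities[b][cue]:
            return a if cities[a][cue] else b
    return a  # no cue discriminates: effectively a guess

print(take_the_best("city_a", "city_b"))  # -> "city_a" (decided by the second cue)
```

Note that the heuristic never computes an overall score: once a cue discriminates, the rest of the information is ignored – which is precisely what makes it frugal.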


An Example of the Usefulness of Heuristics in Digital UX Design

The focus on the useful side of heuristics is nothing new to the field of design, which is becoming ever more involved in building digital representations of places, or of the services provided by those in charge of the management or brand management of places. In the early 1990s Jakob Nielsen summarised 10 heuristics for good user interface design, and these are still in use today as a good starting point for designing usable digital interfaces. Does the digital presence of your place violate any of these? Nielsen recommends that 3-5 non-expert evaluators can spot around 80% of the usability problems in an interface if they are given these 10 rules of thumb and instructed to spot where the design violates them (a small sketch of how such a session might be recorded follows the list below). The method has limitations, and of course its own Homo Simpson biases, but it costs very little and can remove design problems before paying for formal research.

#1: Visibility of system status

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

#2: Match between system and the real world

The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

#3: User control and freedom

Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

#4: Consistency and standards

Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

#5: Error prevention

Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

#6: Recognition rather than recall

Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

#7: Flexibility and efficiency of use

Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

#8: Aesthetic and minimalist design

Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

#9: Help users recognize, diagnose, and recover from errors

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

#10: Help and documentation

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
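To tie this back to the cheap evaluation method above, here is a minimal sketch of how such a session could be recorded and tallied. The evaluator findings are hypothetical, and the structure is just one convenient way to organise them.

```python
# Illustrative sketch: recording a lightweight heuristic evaluation.
# Each of the 3-5 evaluators marks which of Nielsen's 10 heuristics they
# saw violated; we then tally how many evaluators flagged each heuristic.
from collections import Counter

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

# Hypothetical findings from three non-expert evaluators
# (each number is an index into the list above).
evaluator_flags = [
    [0, 4],  # evaluator 1: status visibility, error prevention
    [0, 7],  # evaluator 2: status visibility, minimalist design
    [4, 9],  # evaluator 3: error prevention, documentation
]

tally = Counter(i for flags in evaluator_flags for i in flags)
for i, count in tally.most_common():
    print(f"{count}/3 evaluators flagged: {NIELSEN_HEURISTICS[i]}")
```

Problems flagged by more than one evaluator are the natural candidates to fix first, before spending on formal user research.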
