
Artificial Evil versus Artificial Empathy

Updated: Mar 25, 2023

Guest writer: Christer Gjerstad Clinical Psychologist/ Ph.D. student.

“Computers will overtake humans with AI within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours.”

Stephen Hawking

Future development of artificial intelligence (AI) poses important challenges to humanity. For instance, the EU has expressed concerns about how AI development might result in the loss of jobs. This article explores another significant challenge of AI development: what happens when AI becomes truly independent of humans? Many great thinkers, such as Elon Musk, fear this aspect of AI, and, as we will illustrate in this article, perhaps they are right. Maybe we should be afraid of independent AI. This article explains why, and what we could potentially do about it.

We will present a framework called the Artificial Creation Model, encompassing two crucial elements of AI development: Emotional Intelligence Quotient (EQ) and Intent. This matrix puts AI development in a different perspective, contributing to Stephen Hawking’s advice that “(…) the computers have goals aligned with ours”.


Development of Emotional Intelligence

Human intelligence consists not only of a general intelligence quotient (IQ), but also of social and emotional aspects. This so-called emotional intelligence quotient (EQ) encompasses several aspects, including morals, ethics, empathy, sympathy and theory of mind (the ability to take the perspective of others). The EQ component of our intelligence guides our decisions, making them more or less likely to be accepted by our fellow human beings.

Although our EQ is largely genetic and innate, it is also shaped by factors such as sensory input, upbringing and our repeated experiences in interacting with other human beings (so-called relational schemas). Because EQ operates in tandem with IQ, it forms the horizontal axis of our model.

Horizontal emotional intelligence axis

From a psychological perspective, the majority of today’s AI, having close to zero emotional intelligence, may be compared to a human baby. A baby acts instinctively, essentially seeking gratification of its own needs and desires, with little or no regard for others. As such, we may say that a baby has low EQ. Although today’s AI may be comparable to a baby in terms of EQ, the opposite is true for IQ. Most current AI systems in fact have a very high IQ, far surpassing that of even the smartest human being. However, what happens when very high IQ is paired with very low EQ? The result may be comparable to a human psychopath: a highly intelligent, but calculating and unscrupulous individual. A psychopath is perhaps not someone you would want making independent decisions concerning human well-being.

What is Intent?

Intention is a mental state that represents a commitment to carrying out an action or actions in the future. This aspect of the human mind is crucial to plan actions and to fulfill a purpose.

What drives the growth and development of intent? According to Astington (1993), intention is driven by desires or needs. As such, one may very well say that intent is influenced by both basic and higher-order needs. We can state that an artificially developed EQ/IQ without a given or self-developed intent stands idle, just as it would in a human.


The vertical intent axis

The question arises whether an artificially intelligent agent could create intent by itself, especially when equipped with sensors enabling it to perceive its surroundings and measure the responses to its actions. Whether true or not, the intent, either given by humans or self-generated, is a major factor in deciding whether an AI agent will be “evil” or not.

Currently, humans give AI its intent: a purpose, a goal. The majority of AI applications have a reactive intent, as used, for example, in the SmartDialog software. Only when a question is sent will the AI, based on an artificial neural network, respond with an answer from its memory (database). Without this trigger, nothing happens.
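This reactive pattern can be illustrated with a minimal sketch. It is not SmartDialog’s actual implementation (a real system uses a trained neural network rather than a lookup table); the memory contents and function names here are our own illustrative assumptions:

```python
# A reactive AI in miniature: it holds answers in memory and acts
# only when a question (the trigger) arrives. Without that trigger,
# nothing happens -- the system has no intent of its own.

MEMORY = {
    "what are your opening hours?": "We are open 09:00-17:00 on weekdays.",
    "where are you located?": "Our office is in Oslo.",
}

def respond(question: str) -> str:
    """Return a stored answer for the question, or a fallback reply."""
    return MEMORY.get(question.strip().lower(), "Sorry, I don't know that yet.")

print(respond("Where are you located?"))  # -> Our office is in Oslo.
```

The point of the sketch is the control flow, not the lookup: `respond` is only ever executed in reaction to an external call, which is what places this kind of AI in the lower half of the model.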

Because of the impact intent has on how IQ/EQ is used, the second, vertical axis of the model therefore visualises intent.

The Artificial Creation Model

The EQ axis and the Intent axis, combined into one model, enable us to distinguish between various types of artificial intelligence. At the point where the Intent axis crosses the EQ axis, it is assumed that AI becomes capable of developing its own intent.

We call this model the Artificial Creation Model™, since IQ (with or without EQ) combined with intent ultimately results in creation.

Artificial creation model - axes emotions and intent

To explore the four quadrants in this matrix, we make one assumption: artificial IQ has already surpassed that of a human.

This model enables us to:

1. make guidelines/recommendations for the development of future AI systems

2. classify future developed AIs to determine for which purposes they can be used

AI With Given Intent

The lower part of the model depicts AI with no self-generated intent; hence it only executes assignments given by humans. As an example, our AI SID (Smart Interactive Dialog), based on an artificial neural network, can be positioned in the lower left quadrant.

Machines with human controlled intent

Adding emotional intelligence (EQ) to AI moves it to the lower right quadrant, where it can receive assignments directly impacting human beings, but still without having its own intentions. One can even argue that this type of AI would not execute an evil intent given by humans, simply because such intent conflicts with its EQ. To achieve this, future development of AI requires legislation prohibiting AI development from becoming a threat to humanity, as discussed later in this article.

AI With Own Intent

So, what happens when an AI agent discovers itself (self-awareness) and creates its own intent? One might argue that the most fundamental “need” of an AI agent is to survive. This can be achieved, for example, by securing unlimited access to electricity (computing power), but also by making sure that humans are not able to shut it down, reprogram its EQ or remove its self-developed intent (by blocking access). Theoretically, then, an AI agent without EQ may at some point discover that human beings are the main threat to its existence. Hence, it may develop an evil intent towards humans. This is the reason why the upper left quadrant of our matrix is called “Artificial Evil”.

Machines in control of intent

This intent can result in evil action, since the AI in the upper left quadrant is not equipped with EQ. The potential development of intent by an AI itself in this quadrant obviously requires more discussion, however.

In the opposite case, given that the AI both has its own intent and is equipped with a fully developed emotional intelligence, it may very well be involved in tasks directly affecting the lives of human beings. Consequently, we have chosen to call this quadrant Artificial Empathy (upper right).
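As a toy illustration, the four quadrants can be expressed as a simple classification over the two axes. The numeric EQ scale, the 0.5 threshold and the attribute names are our own assumptions for the sketch; they are not part of the model itself:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    eq: float          # emotional intelligence: 0.0 (none) .. 1.0 (fully developed)
    own_intent: bool   # True if the agent generates its own intent

def quadrant(agent: Agent, eq_threshold: float = 0.5) -> str:
    """Place an agent in one quadrant of the Artificial Creation Model.
    The 0.5 EQ threshold is an illustrative assumption."""
    high_eq = agent.eq >= eq_threshold
    if agent.own_intent:  # upper half: intent is self-generated
        return "Artificial Empathy (upper right)" if high_eq else "Artificial Evil (upper left)"
    # lower half: intent is given by humans
    return "Human-controlled, with EQ (lower right)" if high_eq else "Human-controlled, no EQ (lower left)"

print(quadrant(Agent(eq=0.0, own_intent=True)))  # -> Artificial Evil (upper left)
```

The sketch makes the article’s central warning concrete: the dangerous combination is not own intent as such, but own intent paired with low EQ.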

We want to stress the importance of a fixed EQ for AI: as of today, we do not know whether an AI agent would be able to reprogram its original EQ. This aspect of the fourth quadrant is an important topic for further discussion.

Is Development of Emotional Intelligence Possible?

The usefulness of this model rests on the presumption that humans are actually able to design and create an artificial EQ that can coexist and cooperate with artificial IQ, giving AIs a humanlike ability to reflect on their own decisions. According to our CTO Bård Øvrebø and his team, programming an artificial EQ is possible, but requires a lot more research and testing.


As a contribution to the safe future development of AI, we caution that stand-alone artificial intelligence (IQ) without an associated emotional intelligence component (EQ) is potentially dangerous; hence the name Artificial Evil. Unless AI incorporates a fixed emotional intelligence, it must only be allowed to act on intent given by humans.

Until this is made possible, operational AI should only be used as an aid in decision-making, with the final decisions remaining in the hands of humanity.


We urge AI developers to look beyond the technical possibilities of AI programming and consider what the consequences for global society might be. Their perspective on potential risks, and perhaps proposed ethical standards of conduct, would be well appreciated.

The International Organization for Standardization (ISO) might come up with guidelines and certifications for the purposes (human intents) for which AI sold on the market can be used. This means such AI must be auditable, with clear rules regarding the design and security of IQ and EQ, and most likely other criteria. This proposed ISO regulation is another topic for further discussion.

Update September 4, 2019

The Standards Norway ISO group for artificial intelligence was established in September 2019. Fredrik De Vries is a member and participates in the working group “Ethics & Society”.


Writers: Bård Øvrebø CTO / Fredrik De Vries CEO Dynamic Integrations, Software-AI development
