AI & the Uncanny Valley

Definition and History

The uncanny valley is the unsettling feeling people have when a robot or AI appears almost human, but something about it is still clearly artificial.

The original concept came from a 1970 essay by the Japanese roboticist Masahiro Mori, who argued that robots can become unlikeable if they are too humanlike, but not quite humanlike enough. Normally, adding human characteristics to robots makes them more likeable, but there appears to be a point where this reverses and people feel a sense of eeriness or even disgust, hence the uncanny valley. The peak on the other side of the valley would be perfectly convincing, likeable robots or AI.

The uncanny valley diagram.

The actual reason for the feeling of unease or disgust is debated. Some argue it is the juxtaposition of natural and unnatural elements. Others suggest it is tied to biology, with things that fall into the uncanny valley appearing diseased. Superstition is another argument, with some robots or images appearing zombie-like or undead, another form of unnaturalness, perhaps reminding us of our own mortality. Historically, similar reactions can be seen in people finding porcelain dolls, realistic puppets, clowns, and waxworks disturbing or creepy.

The Uncanny Valley in the Modern World

Since Masahiro Mori’s original essay, the idea has been discussed across many fields, including computer graphics, image generation, and AI chat conversations. Over the past decade, one focus has been Hollywood, which has used more and more computer graphics in its films, reaching a stage where it can create footage of living or dead actors that is almost believable, but where something still seems off. A famous example was 2016’s Rogue One: A Star Wars Story, in which Peter Cushing was digitally resurrected to play a character from the original 1977 Star Wars, and Carrie Fisher, playing Princess Leia, was de-aged to match her 1977 self. Despite how realistic they looked, something about both actors seemed waxy, glossy, or stiff, creating an uncanny feeling that detracted from the cinematic experience.

The Uncanny Valley and AI

In terms of AI, the term has been used for similar phenomena when generated images or videos have that same uncanny sheen. There is also some overlap with the idea of ‘artifacts’: small mistakes or imperfections found in AI images that humans would never make, such as people in the images having six fingers.

As discussed in another of our articles, Ethical Considerations in AI Adoption, there is a growing backlash against the blatant use of AI art, as consumers are concerned about the impact on artists, and about the way AI images can feel inauthentic and fake.

A related phenomenon can be seen in deepfakes, where fake computer-generated footage of a well-known figure, such as Barack Obama, has become more and more convincing by leveraging AI trained on footage of his speeches. In these cases, the presentation may be so realistic that the video itself does not trigger the uncanny valley. Instead, the question becomes whether what the figure is saying is credible. If it is too unrealistic or implausible, for example if Obama endorsed Trump, it could create that same disconnected feeling, as it does not match what we know about the person’s character.

This issue of authenticity also applies to interactions with AI chatbots. Increasingly, such chatbots can develop a convincing rapport, to the point where someone interacting with one might not even realise they are speaking to an AI. If they unexpectedly discover the truth, that the ‘person’ is in fact an AI trying to sell them a product, they will feel manipulated and will likely be unwilling to buy from the brand.

Even when people are already aware they are interacting with an AI, the uncanny valley can still be triggered. Users might notice odd turns of phrase an AI is using that humans would not, creating that same sense of eeriness. Paradoxically, as awareness of AI grows, this can even happen when people aren’t dealing with AI at all. For example, customer service agents often copy and paste stock text responses during text conversations with customers. This kind of robotic pattern can trigger the uncanny valley and upset the customer.

Humanoid robots, like this one, aren't the only things that can trigger an uncanny valley reaction. Image by Max Aguilera-Hellweg.

How Businesses Using AI Can Deal with the Uncanny Valley

First, it should be noted that, unlike many other problems faced by AI, the uncanny valley should naturally become less of a problem as the technology improves to become truly lifelike.

Considering a less realistic representation.

That said, for now, the simplest solution is a counterintuitive one: businesses should make everything deliberately less realistic. If you have a picture or video of a person that looks almost real, or a text chat interface that feels human for 99% of its responses until it suddenly trips up, this will create a strong uncanny valley effect.


If the picture is instead a deliberate cartoon or stylized artwork of a person, AI can still be used to create a great image, but no one will be disappointed when they discover it is not actually real. Similarly, a chatbot or text interface can be programmed to make regular references to itself being an AI assistant, which makes it far harder for people to start feeling it is a real person.

In short, by making it clear that people are dealing with AI, and not pretending too closely to be or act like a human, the uncanny valley can be avoided, at least until technology advances enough that it is no longer a problem.
