Sense and sensibility: Do we want AI to master emotions?

We examine the workings of emotion-recognition technologies, their usefulness, and the privacy concerns they inspire.

How AI learns to recognize human emotions

Imagine you come home one day in a bad mood, shout at the door for not opening fast enough and at the light bulb because it burned out — and the smart speaker immediately starts playing chill music, and the coffee machine pours you a mocha. Or, as you walk into a store, the robot assistant that was about to approach sees your unhappy face, backs off, and helps another customer instead. Sound like science fiction?

In fact, emotion recognition technologies are already being introduced into many areas of life, and in the near future our mood could well be under the watchful eye of gadgets, household appliances, cars, you name it. In this post, we explore how such technologies work, and how useful — and sometimes dangerous — they might be.

Artificial EQ

Most existing emotion-recognition systems analyze an individual's facial expression and voice, as well as any words they say or write. For example, if the corners of a person's mouth are raised, the machine might conclude that the person is in a good mood, whereas a wrinkled nose suggests anger or disgust. A high, trembling voice and hurried speech can indicate fear, and if someone shouts "Cheers!" they are probably happy.
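
To get a feel for that logic, here's a toy Python sketch of the kind of rule-based mapping described above. The measurement names and thresholds are invented for illustration; real systems learn far subtler cues from data.

```python
# Toy rule-based emotion guesser over facial and voice measurements.
# The measurement names and thresholds are assumptions for this sketch,
# not taken from any real emotion-recognition product.

def guess_emotion(cues: dict) -> str:
    """Guess an emotion from a few normalized measurements.

    Assumed inputs (all hypothetical, in the range 0..1):
      mouth_corner_lift: how far the mouth corners sit above the lip center
      nose_wrinkle:      vertical compression of the nose bridge
      voice_tremble:     pitch instability extracted from speech
    """
    if cues.get("mouth_corner_lift", 0.0) > 0.2:
        return "happy"          # raised mouth corners suggest a smile
    if cues.get("nose_wrinkle", 0.0) > 0.3:
        return "anger/disgust"  # a wrinkled nose
    if cues.get("voice_tremble", 0.0) > 0.4:
        return "fear"           # a high, trembling voice
    return "neutral"

print(guess_emotion({"mouth_corner_lift": 0.35}))  # -> happy
```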

More-complex systems also analyze gestures and even take into consideration the surrounding environment along with facial expressions and speech. Such a system recognizes that a person being forced to smile at gunpoint is probably not overjoyed.

Emotion-recognition systems generally learn to determine the link between an emotion and its external manifestation from large arrays of labeled data. The data may include audio or video recordings of TV shows, interviews and experiments involving real people, clips of theatrical performances or movies, and dialogues acted out by professional actors.
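
In code, that learning step boils down to fitting a classifier on labeled feature vectors. Below is a minimal sketch using scikit-learn with synthetic stand-in features; a production system would instead train deep networks on real annotated audio and video, but the principle, features in, emotion labels out, is the same.

```python
# Minimal supervised-learning sketch: learn emotion labels from feature
# vectors. The features here are random stand-ins; a real system would
# extract them from labeled audio or video recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
EMOTIONS = ["happy", "angry", "afraid"]

# Synthetic dataset: 300 samples x 8 features (e.g. pitch, speech rate,
# mouth-corner lift), each tagged with a human-assigned label.
X = rng.normal(size=(300, 8))
y = rng.choice(EMOTIONS, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On random data, accuracy hovers near chance (~1/3); with real labeled
# recordings the classifier would pick up genuine feature-emotion links.
print(f"accuracy: {clf.score(X_test, y_test):.2f}")
```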

Simpler systems can be trained on photos or text corpora, depending on the purpose. For example, this Microsoft project tries to guess people’s emotions, gender, and approximate age based on photographs.

What’s emotion recognition for?

Gartner predicts that by 2022, one in ten gadgets will be fitted with emotion-recognition technologies, but some organizations are already using them: step into an office, bank, or restaurant, and you might be greeted by a friendly robot that reads your mood. Here are just a few areas in which such systems might prove beneficial.

Security

Emotion recognition can be used to prevent violence — domestic and otherwise. Numerous scientific articles have touched on this issue, and entrepreneurs are already selling such systems to schools and other institutions.

Recruitment

Some companies deploy emotion-recognition AI as an HR assistant. The system evaluates the keywords, intonation, and facial expressions of applicants at the initial, most time-consuming stage of the selection process, and compiles a report for human recruiters on whether the candidate is genuinely interested in the position, whether they seem honest, and so on.
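
The keyword part of such screening can be pictured as a simple scoring function. The word lists and scoring below are purely hypothetical; real systems combine text analysis with voice and video signals.

```python
# Hypothetical keyword scoring for an interview transcript (illustration only).
# Real HR systems fuse this kind of text signal with intonation and
# facial-expression analysis.
INTEREST_WORDS = {"excited", "passionate", "motivated", "love"}
HEDGE_WORDS = {"maybe", "whatever", "suppose", "guess"}

def interest_score(transcript: str) -> float:
    """Crude interest estimate: interest words minus hedging words, per word."""
    words = [w.strip(".,!?").lower() for w in transcript.split()]
    hits = sum(w in INTEREST_WORDS for w in words)
    hedges = sum(w in HEDGE_WORDS for w in words)
    return (hits - hedges) / max(len(words), 1)

print(interest_score("I am excited about this role and love the product."))
```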

Customer focus

The Roads and Transport Authority in Dubai launched an interesting system this year at its customer service centers: AI-equipped cameras compare people's emotions when they enter and leave the building to determine their level of satisfaction. If the calculated score falls below a certain threshold, the system advises center employees to take steps to improve the quality of service. (For privacy reasons, photos of visitors are not saved.)
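
The scoring details of the Dubai system aren't public, but the entry-versus-exit comparison can be imagined as something like the sketch below, with invented scores and an invented threshold.

```python
# Hypothetical reconstruction of an entry-vs-exit satisfaction check.
# The scores and the threshold are assumptions; the real system's
# internals have not been disclosed.
ALERT_THRESHOLD = 0.0  # flag any drop in mood between entry and exit

def satisfaction_delta(entry_happiness: float, exit_happiness: float) -> float:
    """Both scores in [0, 1], as a camera-side emotion model might emit them."""
    return exit_happiness - entry_happiness

delta = satisfaction_delta(entry_happiness=0.6, exit_happiness=0.4)
if delta < ALERT_THRESHOLD:
    print(f"mood change {delta:+.2f}: advise staff to review service quality")
```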

Socialization of children with special needs

Another project aims to help autistic children interpret the feelings of those around them. The system runs on Google Glass smart glasses. When the child interacts with another person, the glasses use graphics and sound to give clues about that person's emotions. Tests have shown that children socialize faster with this virtual helper.

How effective are emotion detectors?

Emotion-recognition technologies are far from perfect. A case in point is the aggression-detection technology deployed in many US schools. As it turns out, the system considers a cough more alarming than a bloodcurdling scream.

Researchers at the University of Southern California have found that emotion recognition based on facial expressions is also easy to dupe. The machine automatically associates certain facial expressions with particular emotions, but it fails to distinguish, for example, malicious or gloating smiles from genuine ones.

Emotion-recognition systems that take context into account are therefore more accurate. But they are also more complex and far less common.

What matters is not only what the machine is looking at, but also what it was trained on. For example, a system trained on acted-out emotions might struggle with real-life ones.

Emotions as personal data

The spread of emotion-recognition technologies raises another important issue. Regardless of how effective they are, such systems invade people’s personal space. Consider, for example, the following scenario: You take a fancy to a random passerby’s outfit, and before you know it, you’re being bombarded with ads for clothes from the same brand. Or you frown disapprovingly during a meeting and subsequently get passed over for promotion.

According to Gartner, more than half of all US and British residents do not want AI to interpret their feelings and moods. In some places, meanwhile, emotion- and facial-recognition technologies are prohibited by law. In October, for example, California introduced legislation banning law enforcement officers from recording, collecting, and analyzing biometric information using body-worn cameras, including facial expressions and gestures.

According to the drafters of the bill, the use of facial-recognition technology is tantamount to demanding passersby show their passport every second. It violates the rights of citizens, and it can cause people guilty of minor misconduct, such as having an unpaid parking ticket, to be wary of reporting more serious crimes to the police.

Artificial emotionlessness

The privacy problem is so acute that fooling emotion detectors has even become a subject of scientific research. For example, scientists at Imperial College London have developed a privacy-preserving technology that removes feelings from the human voice. The result is that a voice assistant fitted with emotion-recognition technology can understand the meaning of what is said, but cannot interpret the speaker's mood.
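
The published approach involves sophisticated voice processing, but the general idea can be illustrated with a toy example that strips just one emotional cue, the swings in loudness, from a signal. Everything below is a simplified stand-in, not the Imperial College method.

```python
# Toy illustration of removing one emotional cue from speech: flatten
# frame-level loudness so an agitated, swelling voice comes out even.
# This is a simplified stand-in; the real research also handles pitch
# and other prosodic features.
import numpy as np

def flatten_loudness(signal: np.ndarray, frame: int = 1024) -> np.ndarray:
    """Rescale each frame to the signal's average loudness (RMS)."""
    out = signal.astype(float).copy()
    target = np.sqrt(np.mean(out ** 2)) + 1e-12
    for start in range(0, len(out), frame):
        chunk = out[start:start + frame]
        rms = np.sqrt(np.mean(chunk ** 2)) + 1e-12
        out[start:start + frame] = chunk * (target / rms)
    return out

def loudness_variation(signal: np.ndarray, frame: int = 1024) -> float:
    """Standard deviation of per-frame RMS: how much the volume swings."""
    rms = [np.sqrt(np.mean(signal[i:i + frame] ** 2))
           for i in range(0, len(signal), frame)]
    return float(np.std(rms))

# Synthetic "speech": a 220 Hz tone whose volume swells, as an agitated voice might.
t = np.linspace(0.0, 1.0, 16000)
speech = np.sin(2 * np.pi * 220 * t) * (0.2 + 0.8 * t)
print(loudness_variation(speech))                    # noticeable swings
print(loudness_variation(flatten_loudness(speech)))  # near zero: evened out
```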

Placing such limits on AI will surely complicate the development of empathy in AI systems, which even now are prone to error. But it's good to have a safeguard in case our world turns into Black Mirror and the machines start poking around too deeply in our subconscious. After all, we shouldn't count on emotion recognition being scrapped, especially given that in some areas the technology genuinely does good.