Intelligent Machines

We Need Computers with Empathy

An emerging trend in artificial intelligence is to get computers to detect how we’re feeling and respond accordingly. They might even help us develop more compassion for one another.

Oct 20, 2017

I was rehearsing a speech for an AI conference recently when I happened to mention Amazon Alexa. At which point Alexa woke up and announced: “Playing Selena Gomez.” I had to yell “Alexa, stop!” a few times before she even heard me.

But Alexa was oblivious to my annoyance. Like the majority of virtual assistants and other technology out there, she’s clueless about what we’re feeling.

We’re now surrounded by hyper-connected smart devices that are autonomous, conversational, and relational, but they’re completely devoid of any ability to tell how annoyed or happy or depressed we are. And that’s a problem.

What if, instead, these technologies—smart speakers, autonomous vehicles, television sets, connected refrigerators, mobile phones—were aware of your emotions? What if they sensed nonverbal behavior in real time? Your car might notice that you look tired and offer to take the wheel. Your fridge might work with you on a healthier diet. Your wearable fitness tracker and TV might team up to get you off the couch. Your bathroom mirror could sense that you’re stressed and adjust the lighting while turning on the right mood-enhancing music. Mood-aware technologies would make personalized recommendations and encourage people to do things differently, better, or faster.

Today, an emerging category of AI—artificial emotional intelligence, or emotion AI—is focused on developing algorithms that can identify not only basic human emotions such as happiness, sadness, and anger but also more complex cognitive states such as fatigue, attention, interest, confusion, and distraction.

My company, Affectiva, is among those working to build such systems. We’ve compiled a vast corpus of six million face videos collected in 87 countries, allowing an AI engine to be tuned to real expressions of emotion in the wild and to account for cultural differences in emotional expression.

Using computer vision, speech analysis, and deep learning, we classify facial and vocal expressions of emotion. Quite a few open challenges remain—how do you train such multi-modal systems? And how do you collect data for less frequent emotions, like pride or inspiration?
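
To make that concrete, here is a minimal sketch, in PyTorch, of one common way to build such a multi-modal classifier: encode each modality separately, then fuse the encodings before a shared classification head. The label set, feature dimensions, and late-fusion design are illustrative assumptions, not a description of Affectiva’s actual models.

```python
import torch
import torch.nn as nn

# Hypothetical label set: a few basic emotions plus cognitive states.
EMOTIONS = ["happiness", "sadness", "anger", "fatigue", "confusion", "distraction"]

class LateFusionEmotionNet(nn.Module):
    """Toy multi-modal classifier: separate encoders for face and voice
    features, fused by concatenation before a shared classification head."""

    def __init__(self, face_dim=512, voice_dim=128, hidden=256,
                 n_classes=len(EMOTIONS)):
        super().__init__()
        # Each modality gets its own small encoder.
        self.face_encoder = nn.Sequential(nn.Linear(face_dim, hidden), nn.ReLU())
        self.voice_encoder = nn.Sequential(nn.Linear(voice_dim, hidden), nn.ReLU())
        # Late fusion: concatenate the two encodings, then classify.
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, face_feats, voice_feats):
        fused = torch.cat([self.face_encoder(face_feats),
                           self.voice_encoder(voice_feats)], dim=-1)
        return self.head(fused)  # raw logits, one score per label

model = LateFusionEmotionNet()
# Stand-ins for features extracted from a face crop and an audio clip.
face, voice = torch.randn(1, 512), torch.randn(1, 128)
probs = model(face, voice).softmax(dim=-1)
print(dict(zip(EMOTIONS, probs.squeeze().tolist())))
```

One appeal of late fusion is that the face and voice encoders can be trained or pretrained separately, which helps when the two modalities are rarely labeled together.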

Nonetheless, the field is progressing so fast that I expect the technologies that surround us to become emotion-aware in the next five years. They will read and respond to human cognitive and emotional states, just the way humans do. Emotion AI will be ingrained in the technologies we use every day, running in the background and making our interactions more personalized, relevant, and authentic. It’s hard to remember now what it was like before we had touch interfaces and speech recognition. Eventually we’ll feel the same way about our emotion-aware devices.

Here are a few of the applications I’m most excited about.

Automotive: An occupant-aware vehicle could monitor the driver for fatigue, distraction, and frustration. Beyond safety, your car might personalize the in-cab experience, changing the music or ergonomic settings according to who’s in the car.
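
As a rough sketch of how a vehicle might act on those readings, assume a hypothetical estimator that reports fatigue and distraction scores between 0 and 1 for each camera frame; the thresholds and interventions below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    fatigue: float      # 0.0 (alert) to 1.0 (nodding off)
    distraction: float  # 0.0 (eyes on road) to 1.0 (looking away)

# Illustrative thresholds; a real system would tune these on road data.
FATIGUE_ALERT = 0.7
DISTRACTION_ALERT = 0.6

def respond(state: DriverState) -> str:
    """Escalating responses to a hypothetical occupant-monitoring estimate."""
    if state.fatigue > FATIGUE_ALERT and state.distraction > DISTRACTION_ALERT:
        return "engage driver assistance and suggest a rest stop"
    if state.fatigue > FATIGUE_ALERT:
        return "sound a gentle chime and lower the cabin temperature"
    if state.distraction > DISTRACTION_ALERT:
        return "flash a heads-up reminder to watch the road"
    return "no intervention"

print(respond(DriverState(fatigue=0.8, distraction=0.2)))
```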

Education: In online learning environments, it is often hard to tell whether a student is struggling. By the time test scores are lagging, it’s often too late—the student has already quit. But what if intelligent learning systems could provide a personalized learning experience? These systems would offer a different explanation when the student is frustrated, slow down in times of confusion, or just tell a joke when it’s time to have some fun.
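
Here is a toy sketch of what such an adaptive tutoring policy could look like, assuming a hypothetical upstream detector that labels the student’s state with a confidence score; the states and responses are made up for illustration.

```python
# Toy policy for an affect-aware tutor. A real system would learn
# this mapping from data rather than hard-code it.
RESPONSES = {
    "frustrated": "offer an alternative explanation of the same concept",
    "confused": "slow down and revisit the last step",
    "bored": "tell a joke or switch to a game-like exercise",
    "engaged": "continue at the current pace",
}

def next_action(detected_state: str, confidence: float) -> str:
    # Only adapt when the emotion estimate is reasonably certain;
    # otherwise keep teaching as planned.
    if confidence < 0.6 or detected_state not in RESPONSES:
        return RESPONSES["engaged"]
    return RESPONSES[detected_state]

print(next_action("frustrated", confidence=0.8))
```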

Health care: Just as we can track our fitness and physical health, we could track our mental state, sending alerts to a doctor if we chose to share this data. Researchers are looking into emotion AI for the early diagnosis of disorders such as Parkinson’s and coronary artery disease, as well as suicide prevention and autism support.

Communication: There’s a lot of evidence that we already treat our devices, especially conversational interfaces, the way we treat each other. People name their social robots, they confide in Siri that they were physically abused, and they ask a chatbot for moral support as they head out for chemotherapy. And that’s before we’ve even added empathy. On the other hand, we know that younger generations are losing some ability to empathize because they grow up with digital interfaces in which emotion, the main dimension of what makes us human, is missing. So emotion AI just might bring us closer together. 

As with any novel technology, there is potential for both good and harm. It’s hard to get more personal than data about your emotions. People should have to opt in to any kind of data sharing, and they should know what the data is being used for. We’ll also need to decide whether certain applications cross moral lines, establish rules around privacy and ethics, and work to avoid building bias into these applications. But I’m a strong believer that the potential for good far outweighs the bad.

Rana el Kaliouby is the CEO and cofounder of Affectiva. In 2012 she was named one of MIT Technology Review’s 35 Innovators Under 35.