Thinking About Emotional Machines

Affective Computing

Jan 1, 1998

I was trying to summarize Rosalind Picard’s ideas in Affective Computing while talking with a technician working on my home computer, a machine as fond of crashes as any Indianapolis 500 grandstander. “What’s that book you’re reading?” he asked. “Affective Computing,” was my gruff, short-tempered reply. “That’s a good one,” he said. “You’re reading about effective computing and you can’t get your computer to work!” I let the misunderstanding remain, realizing that “effective” and “affective” also have similar meanings for Picard, an associate professor of media technology at the MIT Media Laboratory. Her book is a revolutionary proposal to make computing more effective by factoring in emotions.

While Picard’s title might suggest a study centering upon our emotional responses to computing, her vision actually encompasses an even more daring and unsettling theme: the possibility that computers can become capable of responding intelligently to the emotions of their users. As Picard writes in her preface: “I have come to the conclusion that if we want computers to be genuinely intelligent, to adapt to us, and to interact naturally with us, then they will need the ability to recognize and express emotions, to have emotions, and to have what has been called ‘emotional intelligence.’”

What sounds like a totally preposterous premise (feeling computers, a cyber-age update of the Tin Man in The Wizard of Oz, a machine retrofitted with a “heart”) becomes an increasingly plausible reality as the reader follows Picard’s captivating (if labyrinthine) course of reasoning. While refusing to narrowly define “feelings” or endorse any single psychological theory regarding their meaning in terms of cognition and personality, Picard focuses upon the physical manifestations of feelings (facial expression, vocal intonation, pupil dilation) and considers how, through sensing devices (gloves, masks, sensor-laden jewelry and clothing), computers might receive and usefully act upon input about user emotionality. That input could be interpreted by programs that reason about what these biological patterns imply about the user’s computing needs. In the chapter entitled “Applications of Affective Computers,” Picard lists a range of possible practical uses, including adding emotional inflection to synthesized speech programs for the disabled, and educational software capable of recognizing student frustration with technology and altering its patterns of presentation accordingly.

It needs to be emphasized, and Picard does remind her readers, that affective computing is in its infancy. It is precisely because the feasibility of affect-sensitive technology is still so unknown that Picard’s book seems to exist in a curious zone somewhere between science and science fiction. The text is divided into two parts, underscoring the book’s hybrid identity. Part one is a compelling (though somewhat rambling) account of how and why Picard developed this approach. Part two, far less readable than the first, particularly for nonscientists, describes actual and possible affective computer designs, leaving me with the impression that many years will pass before affective computing is substantially realized.

There is a dizzying number of technical issues involved. Biosensing devices to communicate the physical consequences of a user’s emotional responses are already readily available. The dicey aspects of the technology turn upon how to program computers to process emotional data consistently and intelligently. Picard seems to rely upon the conceptual model of emotion offered by Daniel Goleman in Emotional Intelligence (Bantam, 1995): emotions are events we can reason about. Humans can think about feelings in order to chart desirable courses of action, and Picard convincingly demonstrates that computers can also be designed to think about feelings and to act rationally in light of them.
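For readers who think in code, that loop of sensing, inferring, and responding can be sketched in a few toy lines. The fragment below is my own illustration in Python, not anything drawn from Picard’s book; every sensor name, threshold, and response in it is invented.

# A toy illustration of the sense-infer-respond loop described above.
# Every sensor name, value, and threshold here is invented for illustration;
# nothing below comes from Picard's book.

def infer_state(skin_conductance, typing_errors_per_minute):
    """Guess the user's emotional state from two imagined biosensor readings."""
    if skin_conductance > 0.7 and typing_errors_per_minute > 10:
        return "frustrated"
    if skin_conductance < 0.3:
        return "calm"
    return "neutral"

def choose_response(state):
    """Reason about the inferred emotion and pick a course of action,
    in the spirit of tutoring software that slows down for a frustrated student."""
    responses = {
        "frustrated": "simplify the lesson and offer a hint",
        "calm": "continue at the current pace",
        "neutral": "keep monitoring",
    }
    return responses[state]

state = infer_state(skin_conductance=0.82, typing_errors_per_minute=14)
print(state, "->", choose_response(state))

Even this caricature makes the hard part visible: the sensing is easy to imagine, while the rules mapping bodily signals to states and states to sensible actions are exactly what remains unknown.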

Picard crosses from science into science fiction when she contemplates computers possessing emotional intelligence of their own, independent of their human operators. In her mind-boggling contribution to David G. Stork’s HAL’s Legacy: 2001’s Computer as Dream and Reality (MIT Press, 1997), she raises a usefully disturbing question: “Can we create computers that will recognize and express affect, feel empathy, exhibit creativity and intelligent problem solving, and never bring about harm through their emotional reactions?”

But even if all of the technical obstacles surrounding affective computing can be overcome, the deep question about affective technology might not pivot on a “can” but a “should.” Should we look toward machinery to better understand what to do with the feelings in our hearts? A pessimistic image comes to mind as the question is posed. The near-human HAL of the film 2001 has a distinctly Frankenstein-like undercurrent: HAL kills the spaceship’s human crew with nary a twinge of conscience.

But let’s return to the image of the Tin Man in The Wizard of Oz, which might be a better metaphor for Picard’s vision than HAL. Remember that the Wizard does some surgery on the Tin Man, cutting a space through his armor and inserting a silk heart filled with sawdust. “Isn’t it a beauty?” asks the Wizard. And in two sentences, the author, L. Frank Baum, cuts to the chase, anticipating by nearly a century the debate over how to create (and whether we should create) humane and humanizing technologies: “It is indeed!” replies the Tin Man. “But is it a kind heart?”

It is very much to Picard’s credit that she is aware of the many ethical and moral dilemmas raised by the prospect of affective technology. Yet the book doesn’t really attempt to deal in any depth with those dimensions. Rather, it is a groundbreaking preface to a plausible direction in computing design, one that will inevitably open Pandora’s boxes beyond the domain of technological invention.

I love the fact that Picard has me asking questions like “Can my computer be programmed to demonstrate genuine empathy toward me?” Equally attractive is her insightful conclusion about the measured meaning of her discovery: “There is a time to express emotion, and a time to forbear; a time to sense what others are feeling and a time to ignore feelings. … In every time, we need a balance, and this balance is missing in computing. Designers of future computing can continue with the development of computers that ignore emotions … or they can take the risk of making machines that recognize emotions, communicate them, and ‘have’ them, at least in the ways in which emotions aid in intelligent interaction and decision making.”

The fact that Picard is taking the risk of leaping into terra incognita is a reason to rejoice. There is deep sensibility married to technical savvy throughout these pages, and I trust that Picard would have given HAL some model of a feeling brain and the Tin Man a good heart, and that she will give, if possible, future affective computers something akin to a noble soul.