Ever talk to someone at a party or conference reception only to discover that he or she is constantly scanning the room, looking this way and that, perhaps finding you boring, perhaps looking for someone more important? Doesn’t the person realize that you notice?
Welcome to the new world of wearable computers, where we will tread uneasily as we risk continual distraction, continual diversion of attention, and continual blank stares in hopes of achieving focused attention, continual enhancement, and better interaction, understanding, and retention. Google’s latest hardware toy, Glass, which has received a lot of attention, is only the beginning of this challenge.
Actually, it isn’t the beginning—this stuff has been around for over a decade. In my former roles as a cognitive scientist and vice president of technology at Apple, and now as a management consultant in product design, I visit research laboratories at companies and universities all over the world. I’ve experienced many of these devices. I’ve worn virtual-reality goggles that had me wandering through complex computerized mazes, rooms, and city streets, as well as augmented realities where the real world was overlaid with information.
And yes, I’ve worn Google Glass. Unlike “immersive” displays that capture your full attention, Glass is deliberately designed to be inconspicuous and nondistracting. The display is only in the upper right of the visual field, the goal being to avoid diverting the user’s attention and to provide relevant supplementary information only when needed.
Even so, the risk of distracting the user is significant. And once Google allows third-party developers to provide applications, it loses control over the ways in which the device will be used. Sebastian Thrun, who was in charge of Google’s experimental projects when Glass was conceived, told me that while he was on the project, he insisted that Glass provide only limited e-mail functionality, not a full e-mail system. Well, now that outside developers have their hands on it, guess what one of the first things they built was? Yup, full e-mail.
It’s a great myth that people can multitask without any loss in the quality of their work. Numerous psychology experiments show that when two relatively complex tasks are done at the same time, performance deteriorates measurably. Some of these experiments were done by me, back when I was a practicing cognitive scientist. David Strayer, whose research group at the University of Utah has studied these issues for decades, has shown that hands-free phones are just as distracting as handheld ones, and that using one while driving is just as dangerous as driving while drunk.
Even pairs of tasks as simple as walking and talking can show a performance decrement: it happens to me all the time. While I am thinking or deep in conversation on my morning walk, I often stop walking when I come to difficult and profound thoughts. The stopping is subconscious, perceived only when my conscious mind breaks its concentration to notice that the walking has halted. The psychologist (and Nobel laureate) Danny Kahneman notes in his book Thinking, Fast and Slow that he discovered he couldn’t think at all when he walked too fast. He had to slow down to allow new thoughts.
If performing tasks simultaneously is so deleterious, why do people maintain that they can do it without any deterioration? Well, it is for somewhat the same reason that drunk drivers think they can drive safely: monitoring our own performance is yet another task, and it suffers. The impairment in mental skills makes it difficult to notice the impairment.
So while the supplementary, just-in-time information provided by wearable computers seems wonderful, as we come to rely upon it more and more, we can lose engagement with the real world. Sure, it is nice to be reminded of people’s names and perhaps their daughter’s recent skiing accident, but while I am being reminded, I am no longer there—I am somewhere in ether space, being told what is happening.
Years ago, I wrote a piece called “I Go to a Sixth Grade Play” in which I discussed the parents so anxiously video-recording their children in the play that they didn’t experience the event until the next day. Detached engagement is not the same thing as full engagement; it lacks the emotional dimension.
There is a flip side to this argument, however: when implemented and used mindfully, wearable technology can enhance our abilities significantly. Thad Starner, a wearable-computer champion who has worn these devices for two decades and was a technical advisor to Google Glass, sent me comments on an early draft of this article. “I am very bad at multitasking,” he said, noting that when he attends a lecture, “[by] putting the physical focus of the display at the depth of the blackboard and having a fast text entry method, I could (suddenly) both pay attention and take good notes.” He did far better than he could with paper and pencil, which forced his attention to shift between notebook and blackboard. He then reminded me of a conversation we had on this topic in 2002. I didn’t remember the conversation, so he described the interaction, reminding me of both his comments and my responses.
How can Starner remember the details of a conversation from more than 10 years ago? He takes notes during his conversations, one hand in his pocket typing away on a special keyboard. The result is that during any interaction, he is far more focused and attentive than many of my non-computer-wearing colleagues: the act of taking notes forces him to concentrate upon the content of the interaction. Moreover, he has records of his interactions, allowing him to review what took place—which is how he “remembered” our decade-old conversation. (See the Q&A with Starner, and “You Will Want Google Goggles.”)
Without the right approach, the continual distraction of multiple tasks exerts a toll. It takes time to switch tasks, to get back what attention theorists call “situation awareness.” Interruptions disrupt performance, and even a voluntary switching of attention from one task to another is an interruption of the task being left behind.
Furthermore, it will be difficult to resist the temptation of powerful technology that guides us with useful side information, suggestions, and even commands. Sure, other people will be able to see that we are being assisted, but they won’t know by whom, just as we will be able to tell that they are being assisted, but we won’t know by whom.
Eventually we will be able to eavesdrop on both our own internal states and those of others. Tiny sensors and clever software will infer emotional and mental states. Worse, the inferences will often be wrong: many factors could cause a person’s pulse rate to go up or skin conductance to change, but technologists are apt to focus upon a simple, single interpretation.
Is this what we want? People staring blankly at the real world as their virtual minders tell them what is happening? We are entering unknown territory, and much of what is being done is happening simply because it can be done.
In the end, either wearable technologies will be able to augment our experiences and focus our attention on a current task and the people with whom we are interacting, or they’ll distract us—diverting our attention through tasty morsels of information that are irrelevant to the current activity.
When technologies are used to supplement our activities, when the additional information being provided is of direct relevance, our attention can become more highly focused and our understanding and retention enhanced. When the additional information is off target, no matter how enticing it is, that’s the distracting and disruptive side.
I like to look on the positive side of technology. I even wrote a book, Things That Make Us Smart, about the power of artifacts to enhance human abilities. I am fully dependent upon modern technologies, because they make me more powerful, not less. Because they take away the dreary, unessential parts of life, I can concentrate upon the important, human aspects. I can direct high-level activities and strategies and maintain friendships with people all over the world. That’s the focused side.

On the other hand, I spend many hours each day simply keeping up with people who continually contact me, almost always with interesting comments, news, and invitations, but nonetheless exceeding my ability to cope and distracting me from my primary activities. Yes, I welcome these distractions because they are a pleasant diversion from the hard work of writing, thinking, and decision-making, but procrastination, enjoyable as it is, does not help get the work done. I have already had to hire a human assistant to help keep me focused. Will the continual stream of messages from wearable devices prove irresistible, diverting me from my work, or will it amplify my abilities?
A standard response is to put the burden on the individual: it is our responsibility to use technology responsibly. I agree in theory, but not in practice. I know all too well the temptations of distraction: all that fascinating news, all those friends who send me status reports and wish me to respond with my own. I find it easy to succumb, doing anything to avoid the difficult, dreary concentration required to produce something of value. I’ve often had to unplug my computer from the Internet to complete my work. The providers of these technologies must share the burden of responsible design.
Can wearable devices be helpful? Absolutely. But they can also be horrid. It all depends upon whether we use them to focus and augment our activities or to distract. It is up to us, and up to those who create these new wearable wonders, to decide which it is to be.
This review was revised on August 19, 2013.
Don Norman is a cognitive science professor (UC San Diego, Northwestern) turned executive (Apple vice president) turned designer (IDEO Fellow), and author of 20 books, including Living with Complexity and The Design of Everyday Things. He can be found at jnd.org.