Intelligent Machines

What Will It Take to Build a Virtuous AI?

A billion-dollar nonprofit backed by Silicon Valley heavyweights aims to make machines more moral. How it will do so isn’t yet clear.

Dec 15, 2015

A billion dollars is quite a lot of money to invest—even if your goal is saving the world from the ill effects of super-smart artificial intelligence that humans can’t control.

That’s the amount of cash several big-name entrepreneurs, including Elon Musk, Peter Thiel, and Sam Altman of Y Combinator, have committed to investing in OpenAI, a new nonprofit aimed at developing artificial intelligence that “benefits humanity” (see “A Billion-Dollar Effort to Make a Kinder AI”).

The idea is this: without answering to either academia or industry, OpenAI will be better able to consider the ethical implications of the technology it is creating. OpenAI isn’t the first nonprofit dedicated to furthering artificial intelligence, and Google, Facebook, and others have taken steps to make some of the AI technology they are developing freely available (see “Facebook Joins the Stampede to Give Away Artificial Intelligence Technology”). However, OpenAI has attracted a large amount of attention already, both because of the big names involved and because of its lofty and futuristic goals.

It might seem like a worthwhile and sensible endeavor. After all, many AI experts believe it’s worth considering the implications of technology that is becoming ever more powerful and disruptive.

However, some important questions remain to be answered. It is not clear how the billion dollars invested in OpenAI will be spent, or over what time frame. Nor is it evident how the organization’s ethical objectives will be squared with the basic work of building AI algorithms. Perhaps most confusing of all, it’s difficult to imagine how the ethical goals that would be programmed into future AIs might be agreed upon.

“I’d agree that AI is a very powerful tool, and some designs and uses can be better or worse than others,” says Patrick Lin, a philosopher at Cal Poly who studies the ethics of automation and artificial intelligence. But, he adds: “OpenAI says that it aims at ‘a good outcome for all’ and to ‘benefit humanity as a whole,’ but who gets to define what the good outcome is?”

Lin notes that some of those backing OpenAI are known for their libertarian views, which may not align with those of the public at large. And he points to Tesla’s decision to beta-test its self-driving technology on public roads as ethically questionable. Lin also says that simply removing a financial incentive does not necessarily eradicate the potential for things to go wrong.

Others, especially those who are already worried about the long-term implications of the technology they are developing, are more encouraging. “I’m enthusiastic about the initiative,” says Eric Horvitz, technical fellow and managing director of Microsoft Research in Redmond, Washington, who has long been interested in discussing the long-term implications of AI. “I hope that Microsoft Research and Microsoft more broadly will be able to assist Elon and Sam and others—and to collaborate with folks at OpenAI.”

In 2009, as president of the Association for the Advancement of Artificial Intelligence, Horvitz convened a conference to explore the long-term impact of AI. And last year he founded an academic effort to explore this issue: the One Hundred Year Study on Artificial Intelligence at Stanford University. “It is not only about aligning preferences and making sure that systems do the right thing,” Horvitz says, “but also about their ability to understand when they must reach out to people for assistance when they are confused.”
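Horvitz’s point about systems that know when to ask for help corresponds to a familiar pattern in machine learning: deferring to a person whenever a model’s confidence in its own answer is low. The sketch below is purely illustrative and is not drawn from OpenAI or Microsoft; the 0.75 threshold and the scikit-learn-style predict_proba interface are assumptions.

```python
# Illustrative sketch (assumed, not from the article): defer to a human whenever
# the classifier's confidence falls below a threshold.
import numpy as np

CONFIDENCE_THRESHOLD = 0.75  # assumed value, chosen for illustration

def predict_or_defer(model, x):
    """Return the model's label, or a request for human help if it is unsure."""
    probs = model.predict_proba([x])[0]  # assumes a scikit-learn-style classifier
    confidence = float(np.max(probs))
    if confidence < CONFIDENCE_THRESHOLD:
        return {"action": "ask_human", "confidence": confidence}
    return {"action": "predict", "label": int(np.argmax(probs)), "confidence": confidence}
```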

Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, another nonprofit, is similarly positive about the announcement. “I think everybody thinks this is a good thing,” he says. “The people involved are exceptionally talented.”

The OpenAI announcement certainly comes at a remarkable moment in the rebirth of AI. There has been rapid and sometimes startling progress—especially using a technique based on neural networks, known as deep learning (see “10 Breakthrough Technologies: Deep Learning”)—in building algorithms enabling machines to perform tasks that previously eluded them, including recognizing the contents of images, understanding spoken words, and parsing basic text.

The founding of OpenAI also follows a series of warnings, from Musk and others, over the potential long-term implications of artificial intelligence, fueled in part by speculation about the difficulty of controlling an AI capable of superhuman intelligence and ingenuity. Much of this seems to have been inspired by a book called Superintelligence: Paths, Dangers, Strategies, written by Nick Bostrom, a philosopher and director of the Future of Humanity Institute at the University of Oxford. Bostrom’s book contains many troubling thought experiments about future AI.

Some caution that these long-term fears should not outweigh more pressing concerns, such as AI’s impact on employment and the law.

“I would definitely be thinking about the social and economic consequences of AI now,” says Etzioni. “These other, more far-fetched questions of whether AI is an existential threat—I’m worried that they could be an unwelcome distraction.”

The new nonprofit certainly won’t lack for technical expertise. It will be led by research director Ilya Sutskever, who was a student of the deep-learning pioneer Geoffrey Hinton and most recently worked at Google, after it acquired the company he cofounded with Hinton (see “Innovators Under 35: Ilya Sutskever”).

The OpenAI project has not yet explained how its moral objectives will change the machine-learning techniques that Sutskever and others are developing. Some technical experts say this isn’t an insurmountable challenge. “I actually don’t think it’s that hard to encode ethical considerations into machine learning algorithms as part of the objective function that they optimize,” says Pedro Domingos, a professor at the University of Washington and the author of the recent book The Master Algorithm. However, he notes: “The big question is whether we human beings are able to formalize our ethical beliefs in a halfway coherent and complete way. Another option is to have the AIs learn ethics by observing our behavior, but I have a feeling they could wind up pretty confused if they did.”
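To make Domingos’s point concrete, one way an “ethical consideration” can enter the objective function is as a penalty term added to the ordinary training loss, so that the optimizer trades raw accuracy against the constraint. The sketch below is a generic illustration rather than anything OpenAI has proposed; the fairness-style penalty, the function name, and the weight are assumptions.

```python
# Illustrative sketch (assumed, not OpenAI's method): an "ethical" constraint,
# here an even error rate across groups, folded into the quantity being minimized.
import numpy as np

def ethical_objective(predictions, labels, group_ids, penalty_weight=1.0):
    """Ordinary squared-error loss plus a penalty for uneven error across groups."""
    base_loss = np.mean((predictions - labels) ** 2)  # standard task loss
    group_errors = [
        np.mean((predictions[group_ids == g] - labels[group_ids == g]) ** 2)
        for g in np.unique(group_ids)
    ]
    disparity = max(group_errors) - min(group_errors)  # gap between best- and worst-served groups
    return base_loss + penalty_weight * disparity      # optimizer balances accuracy against the gap
```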

However OpenAI chooses to program morality into its algorithms, Lin argues that the issue should be opened up for greater debate. “Society should have some input into what a good outcome should be, especially for a game-changing technology such as AI,” he says. “Without broader input, they’ll be unlikely to get it right.”