In 2018, several high-profile controversies involving AI served as a wake-up call for technologists, policymakers, and the public. The technology may have brought us welcome advances in many fields, but it can also fail catastrophically when built shoddily or applied carelessly.
It’s hardly a surprise, then, that Americans have mixed support for the continued development of AI and overwhelmingly agree that it should be regulated, according to a new study from the Center for the Governance of AI and Oxford University’s Future of Humanity Institute.
These are important lessons for policymakers and technologists to consider as they debate how best to advance and regulate AI, says Allan Dafoe, director of the center and a coauthor of the report. “There isn’t currently a consensus in favor of developing advanced AI, or that it’s going to be good for humanity,” he says. “That kind of perception could lead to the development of AI being perceived as illegitimate or cause political backlashes against the development of AI.”
It’s clear that decision makers in the US and around the world need to have a better understanding of the public’s concerns—and how they should be addressed. Here are some of the key takeaways from the report.
While more Americans support than oppose AI development, there isn’t a strong consensus either way.
More respondents also believe that high-level machine intelligence would do more harm than good for humanity than believe it would do more good than harm.
When asked to rank their specific concerns, they listed a weakening of data privacy and the increased sophistication of cyber-attacks at the top—both as issues of high importance and as those highly likely to affect many Americans within the next 10 years. Autonomous weapons closely followed in importance but were ranked with a lower likelihood of wide-scale impact.
More than 8 in 10 Americans believe that AI and robotics should be managed carefully.
That is easier said than done, because respondents also don’t trust any one entity to pick up that mantle. Of the options presented, including federal and international agencies, companies, nonprofits, and universities, none received more than 50% of respondents’ trust to develop and manage AI responsibly. The US military and university researchers did, however, receive the most trust for developing the technology, while tech companies and nonprofits received more trust than government actors for regulating it.
“I believe AI could be a tremendous benefit,” Dafoe says. But the report highlights a major obstacle to getting there: “You have to make sure that you have a broad legitimate consensus around what society is going to undertake.”
Correction: An earlier version of this story had a typo in the first chart, showing that 31% of Americans "somewhat oppose" AI development. It has been corrected to 13%.