
AI savants, recognizing bias, and building machines that think like people

Despite impressive advances, three speakers at EmTech Digital show how far there is to go in the AI world.

Mar 26, 2018
Jeremy Portje

Even with all the amazing examples of progress in artificial intelligence, such as self-driving cars and the victories of AlphaGo, the technology is still very narrow in its accomplishments and far from autonomous. Indeed, says Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence, today’s machine-learning systems are “AI savants.”

Etzioni, speaking today at MIT Technology Review’s annual EmTech Digital conference in San Francisco, explained that self-driving cars and speech recognition are built on machine learning. And even today, he said, 99 percent of machine learning depends on human work.

Etzioni pointed out that machine learning requires large amounts of data, all of which must be labeled: this is a dog; this is a cat. People then have to supply the appropriate algorithms. All of this relies on manual labor.

AI is “not something mystical. It’s not something magical,” he said. “It will take a lot of work to go beyond current capabilities.” 

Key to recognizing the limitations of today’s AI is understanding the difference between autonomy and intelligence. People have both. But Etzioni pointed out that AI systems, even if they have very high intelligence for a specific task, tend to be low in autonomy. And that lack of autonomy limits the technology’s ability to take on many broad problems.

Despite the limitations of AI, Etzioni is keen on many potential applications. Near the top of his list are AI-based scientific breakthroughs. Machine learning can read millions of scientific papers, looking for trends and forming hypotheses. “Imagine a cure for cancer buried in the thousands and even millions of clinical trials,” he said.

Etzioni also warned about the dangers introduced by machine learning, such as algorithmic bias: “Machine learning is looking for patterns in data. If you start with racist data, you will end up with even more racist models.” He said, “This is a real problem.”

But in taking on such problems, the strategy is clear. “AI is a tool,” he said. “The choice about how it gets deployed is ours.”

NYU professor Brenden Lake, who focuses on the intersection of data science and psychology, showed how looking at cognitive science can help build the next generation of artificial intelligence. Model building, instead of pattern recognition, can help AI systems develop new skills, like playing an unfamiliar game or recognizing new handwritten characters. The study of the human mind “will play a key role in developing this technology in the future,” Lake believes. 

Microsoft researcher Timnit Gebru brought a dose of reality to the proceedings with her examples of AI bias in action and a reminder to the audience about the need to develop smart standards for using the data at scientists’ fingertips. She compared the lack of standards in the AI industry to the initial chaos and subsequent regulation in the automobile industry, pointing out that cars were initially believed to be “inherently evil,” much as AI systems are feared today. “AI has a lot of opportunities, but we have to take this idea of a safety standardization process seriously,” she said.