Google CEO Sundar Pichai told investors last month that advances in machine-learning technology would soon have an impact on every product or service the company works on. “We are rethinking everything we are doing,” he said.
Part of that push to make its services smarter involves rethinking the way it’s employing machine learning, which enables computers to learn on their own from data. In short, Google is working to teach those systems to be a little more human.
Google discussed some of those efforts at a briefing Tuesday at its headquarters in Mountain View, California. “We’re at the Commander Data stage,” staff research engineer Pete Warden said in a reference to the emotionless android in the television show Star Trek: The Next Generation. “But we’re trying to get a bit more Counselor Troi into the system”—the starship Enterprise’s empathetic counselor.
Warden works on the team that developed Google Photos, which lets you search for things like “beach” or “dog” in your snaps. The underlying technology emerged from a long research effort into enabling software to identify objects in photos. But Warden and his coworkers discovered that just being able to spot, say, children, eggs, or baskets wasn’t enough. People wanted to search for “Easter egg hunt.” Likewise, the system needed to be trained to understand that photos with a turkey and plates taken in late November should be associated with “Thanksgiving.”
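The article doesn’t describe how Google Photos actually builds those associations, but the basic idea of combining detected objects with context such as the date to produce an event-level tag can be sketched in a few lines. The labels, rules, and `tag_event` helper below are hypothetical illustrations only; in a real system these associations would be learned from data rather than hand-written.

```python
from datetime import date

def tag_event(labels, taken):
    """Toy illustration: map detected object labels plus a photo's date
    to a higher-level event tag, in the spirit of the examples above."""
    labels = set(labels)
    # Eggs, baskets, and children in springtime suggest an Easter egg hunt.
    if {"egg", "basket", "child"} <= labels and taken.month in (3, 4):
        return "Easter egg hunt"
    # A turkey and plates in late November suggest Thanksgiving.
    if {"turkey", "plate"} <= labels and taken.month == 11 and taken.day >= 20:
        return "Thanksgiving"
    return None

print(tag_event(["turkey", "plate", "table"], date(2015, 11, 26)))  # Thanksgiving
```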
Another Google project, nicknamed GlassBox, is trying to stop software trained on a limited sample of data from making what look to humans like simple, dumb mistakes. Headed by senior staff research scientist Maya Gupta, it aims to give the software something of the common sense that enables humans to discount misleading examples.
For example, a person shown a few examples of houses and their associated prices could see immediately that larger houses generally cost more—even if there was one outlier, such as a tiny house offered for $1.8 million in the expensive city of Palo Alto, California. But that same outlier might cause a machine-learning system looking for a relationship in the same sample of data to attribute high prices to another factor, such as house color. Gupta has developed mathematical methods to smooth out the influence of outliers like this one, which can trip up a machine-learning system. “We’re trying to put back as much of the human knowledge as we can,” Gupta told MIT Technology Review.
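The article doesn’t spell out Gupta’s methods, so the following is only a rough sketch of the underlying problem: with made-up numbers, a single Palo Alto-style outlier flips the slope of an ordinary least-squares fit of price against size, while a simple robust estimator (a median-of-pairwise-slopes approach) discounts the outlier much as a person would. This is an illustration of the failure mode, not Google’s technique.

```python
import numpy as np

# Hypothetical training sample: house size (square feet) vs. price (dollars).
# The last entry is the tiny-but-expensive outlier.
size  = np.array([1200, 1500, 1800, 2200, 2600, 3000,  600], dtype=float)
price = np.array([300e3, 370e3, 450e3, 540e3, 640e3, 730e3, 1.8e6])

# Ordinary least squares: the single outlier drags the fitted slope negative.
ols_slope, ols_intercept = np.polyfit(size, price, deg=1)

# A simple robust alternative: take the median of all pairwise slopes,
# which largely ignores the one misleading example.
i, j = np.triu_indices(len(size), k=1)
pairwise_slopes = (price[j] - price[i]) / (size[j] - size[i])
robust_slope = np.median(pairwise_slopes)

print(f"OLS slope:    {ols_slope:8.1f} $/sq ft")   # negative, misled by the outlier
print(f"Robust slope: {robust_slope:8.1f} $/sq ft")  # positive, as a person would expect
```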
Google has increased its investment in machine-learning research in recent years, after the emergence of a technology known as deep learning, which uses networks of roughly simulated neurons (see “10 Breakthrough Technologies 2013: Deep Learning”). The technique has produced striking improvements in speech recognition and image recognition. Facebook, Google, IBM, Microsoft, and Baidu are all investigating how deep learning can enable machines to understand language, and perhaps even converse with us (see “Teaching Machines to Understand Us”).
In the past week, Google has confirmed that its core search service is now processing a large portion of queries using a new deep-learning-driven system called RankBrain. And on Tuesday it debuted a service called Smart Reply that uses machine learning to automatically offer several short suggested responses to e-mail messages.
Greg Corrado, a senior research scientist and cofounder of Google’s deep-learning team, says the e-mail-writing software is just an early example of how machine learning is now creating completely new products, not just enhancing existing ones, such as spam filtering or search.