Intelligent Machines

Boosting AI’s IQ

Today’s artificial intelligence is far from intelligent. But Josh Tenenbaum, PhD ’99, is working on it.

Jun 27, 2018
Bob O'Connor

On a Sunday afternoon in late March, cognitive scientist Josh Tenenbaum, PhD ’99, sits before a wooden table game, wielding a magnetic paddle and attempting to knock a small orange ball into his opponent’s goal. Tenenbaum, a professor of computational cognitive science, and his opponent, a first-year graduate student, laugh as the ball bounces off the side of the board and ricochets backwards into Tenenbaum’s goal, costing him a point. 

Tenenbaum’s Building 46 workspace is filled with toys—not only the magnetic paddle game, but also a tree-shaped puzzle and a game called Stormy Seas, in which players try to balance small, pirate-themed objects on an unstable platform. A first grader would be right at home here. Yet the toys also inspire Tenenbaum and his students to explore headier questions about what makes people intelligent—what allows us to think and learn so quickly, and ultimately what makes us smarter than any current artificial-intelligence system.

A human being watching Tenenbaum and his student whack the ball back and forth would quickly understand that they were playing a game. After a few moments, the observer would figure out the rules and be able to participate. “It’d be hard enough to get a robot to pick up most of the things on this table, let alone figure out what’s going on,” says Tenenbaum. Artificial intelligence is not equipped to “work backwards from a goal and plan in a novel way,” he says. Humans, however, can do so, in part by creating mental models of the world around them. It may be possible to build smarter machines by reverse-engineering these mental processes.

For all the recent hype around artificial intelligence, Tenenbaum considers none of today’s AI systems truly intelligent. Yes, AI has made striking advances in enabling computers to perform relatively narrow tasks, like recognizing faces or objects. But while many of these focused AI systems work incredibly well and are having a huge impact, researchers have not succeeded in emulating the general, flexible intelligence that lets people solve problems without being specially trained to do so. His work, he says, is inspired by philosophy, including questions contemplated by Plato, Aristotle, Kant, and Hume. How do you go from particulars to generalities? How do you look at one thing and say, “What I learn is not just about that thing—it’s about other things, too”? “That is exciting to me—thinking about these big questions about how we come to know anything,” he says.

Tenenbaum argues that focusing on the human mind is a valuable way to advance AI. Historically, “many if not most” innovations in the field were accomplished by people interested in understanding human intelligence from a mathematical or engineering perspective, he says. So while this is not the only possible approach, he believes it is likely to be a fruitful one. In addition, “many of the kinds of technology we look to AI for are technologies that will live in a human world,” he says. These include machines that can help with household chores or even take care of children or the elderly, adopting tasks on a human scale. And, he says, “if we want machines to interact with us the way we interact with each other, then to some extent it is essential for them to have a human-like intelligence.”

In March, when the Institute launched MIT Intelligence Quest (now called the MIT Quest for Intelligence), an initiative to explore the underpinnings of human intelligence, Tenenbaum was candid about the limits of existing AI systems. “Each has to be built by really smart engineers,” he said, “and each does just one thing.” But he elaborated on his bolder vision of developing AI to emulate human learning. “Imagine if we could build a machine that grows into intelligence the way a human being does, that starts like a baby and learns like a child,” he said. “That would be AI that is really intelligent, if we could build it.” He added that it almost certainly couldn’t be done in 10 years, probably not in 20—and quite possibly not even in our lifetimes. “But that’s okay,” he said. “That’s why we call it a quest.”

Intuitive physics

“In this age of big data and deep learning, it is easy to get seduced,” says Robert Goldstone, a professor of psychological and brain science at Indiana University. He means it’s tempting to think machine-learning systems don’t need coherent interpretations of the world around them to behave intelligently—who needs theory when we have enough data? Tenenbaum’s work, however, demonstrates the importance of theory, says Goldstone, because it shows that to get ahead, learners need to create internal models of their environments.

Consider the realm of everyday physics, in which humans display ready intuition. When we see a child climbing a tree, we have a sense of whether the branches are strong enough to support her weight. When we see a cup positioned at the edge of a table, we know that if the table is jostled, the cup will probably fall. Each day, we encounter a visually diverse range of scenarios, yet we can reason and make judgments about them. Tenenbaum thinks people have an ability to mentally simulate the mechanisms of physics, and this allows us to predict how objects will behave.

As he tests this theory, Tenenbaum’s goal is not only to understand human cognition better but also to create computer programs that might approximate it more effectively. In one project, he and his team built a series of virtual towers with tools used in video-game design. They designed each tower with 10 blocks stacked in more or less precarious arrangements. Then the team presented human subjects with these constructs and asked them whether the towers would fall, as in a game of Jenga. After the subject answered, the tower fell—or didn’t. Tenenbaum argues that people approach this task using an “intuitive physics engine”—a mental model based on an understanding of mass, friction, gravity, and other aspects of the physical world.

Meanwhile, Tenenbaum and his team wrote a computer program to capture human thought and perception about such physical scenarios. The goal was for the program to encode information about a block tower, such as the position and configuration of its blocks, and then run simulations based on that information to predict whether the tower would fall. Its predictions were probabilistic, which, Tenenbaum says, is also true of our own physical reasoning. (We tend to think in statistical terms about different possible outcomes, he explains, because in the everyday world we must grapple with uncertainty about the objects we observe.)
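The team's actual model ran on a full 3-D game physics engine. Purely as an illustration of the "noisy simulation" idea, here is a minimal two-dimensional sketch in Python; the block positions, the noise level, and the simplified stability rule are all invented for this example and are not taken from the published work:

    import random

    def tower_falls(x_offsets, block_width=1.0):
        """Deterministic stability check for a 2-D stack of identical blocks.

        x_offsets[i] is the horizontal center of block i (block 0 rests on the
        table). The stack topples if, for any block, the combined center of mass
        of that block and everything above it overhangs the block below.
        """
        n = len(x_offsets)
        for i in range(1, n):
            above = x_offsets[i:]
            center_of_mass = sum(above) / len(above)
            support_center = x_offsets[i - 1]
            if abs(center_of_mass - support_center) > block_width / 2:
                return True
        return False

    def probability_of_falling(x_offsets, perceptual_noise=0.08, n_samples=1000):
        """Monte Carlo estimate: jitter the perceived block positions, run the
        deterministic simulation on each sample, and report the fraction of
        samples in which the tower topples."""
        falls = 0
        for _ in range(n_samples):
            noisy = [x + random.gauss(0.0, perceptual_noise) for x in x_offsets]
            if tower_falls(noisy):
                falls += 1
        return falls / n_samples

    # A fairly stable tower versus a precarious one (positions are made up).
    print(probability_of_falling([0.0, 0.05, -0.05, 0.1]))   # close to 0
    print(probability_of_falling([0.0, 0.3, 0.55, 0.8]))     # close to 1

The point of the sketch is the structure of the computation: a single, deterministic physics rule becomes a graded, probabilistic judgment once the simulation is run many times over noisy perceptions of the scene.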

In a paper published in 2013, Tenenbaum, postdoc Peter Battaglia, and Jessica Hamrick ’11, MEng ’12, described testing the system and finding that its predictions were strongly correlated with those made by human subjects. In further experiments, the team manipulated the masses of the blocks, so that some were heavy and some were light. They also modeled a jolt to the table and asked which blocks were most likely to fall to the floor. In each of these cases, the computer model’s predictions were similar to those made by human subjects, says Tenenbaum: “We were able to model the kind of simulation someone might build in their own head.”

Learning to learn

To date, the best AI algorithms have required systems to be trained on thousands—or in some cases millions—of examples. This is true, for instance, in the domain of letter identification, a standard benchmark in machine learning. “What’s remarkable about human cognition, though, is that we don’t need 6,000 examples of a letter to learn what it is and recognize it in different fonts or different handwritings,” says Brenden Lake, PhD ’14, an assistant professor of psychology and data science at New York University and a former graduate student of Tenenbaum’s.

Beginning in 2009, Lake, Tenenbaum, and postdoc Ruslan Salakhutdinov (now an associate professor at Carnegie Mellon University and Apple’s director of AI research) set out to develop a program inspired by insights into how people learn about and perceive characters. After videotaping 20 subjects drawing letters from foreign alphabets they’d never used before, they found strong commonalities in how people went about the task—the parts they broke the characters into, their starting point, and the direction of their strokes. On the basis of these observations, the researchers decided to create a handwriting algorithm that would take into account how letters were produced and not simply what pixels were present in the final result.

“The key idea was to represent the concept of a letter as a program, which is a much richer representation than is typically used in machine learning,” says Lake. They also made the program probabilistic. It could observe a written character, infer the way it had most likely been drawn (the “program” for drawing it), and then imagine how new instances of that same character might be drawn. It could also accommodate some uncertainty in the information it worked with and generate a range of outputs: each example of the letter A, for instance, was slightly different. This ability to tolerate uncertainty helped the algorithm handle real-world situations. With experience, the program could also improve at drawing new characters. “Part of what excited us was this idea of ‘learning to learn,’” says Tenenbaum. And that, he explains, is how we gain the ability to generalize efficiently, using sparse data or even just one example.
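The published model is far richer, but a toy sketch can convey what "a letter as a program" means. In the following Python fragment (the stroke coordinates, noise level, and distance score are all invented for illustration), a character concept is a small generative program whose output varies slightly on every run:

    import random

    # A "concept" is an ordered list of strokes; each stroke is an ordered
    # list of (x, y) control points. Running the program with motor noise
    # yields a new, slightly different token of the same character each time.
    CONCEPT_A = [
        [(0.0, 0.0), (0.5, 1.0)],      # left diagonal stroke
        [(0.5, 1.0), (1.0, 0.0)],      # right diagonal stroke
        [(0.25, 0.5), (0.75, 0.5)],    # crossbar
    ]

    def generate_token(concept, motor_noise=0.03):
        """Run the character 'program': copy each stroke's control points with
        a little Gaussian jitter, so every generated token differs slightly."""
        token = []
        for stroke in concept:
            token.append([(x + random.gauss(0, motor_noise),
                           y + random.gauss(0, motor_noise)) for (x, y) in stroke])
        return token

    def token_distance(token_a, token_b):
        """Crude similarity score: summed distance between corresponding
        control points (assumes both tokens share the same stroke structure)."""
        total = 0.0
        for stroke_a, stroke_b in zip(token_a, token_b):
            for (xa, ya), (xb, yb) in zip(stroke_a, stroke_b):
                total += ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        return total

    # Two runs of the same program: shared structure, different details,
    # like two handwritten copies of the letter A.
    example_1 = generate_token(CONCEPT_A)
    example_2 = generate_token(CONCEPT_A)
    print(token_distance(example_1, example_2))

Because the representation is generative, the same concept can be run forward to produce new examples or compared against an observed character to ask how that character was most likely drawn.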

In a 2015 paper published in Science, the team tested the algorithm by showing it a single unfamiliar letter and asking it to produce a novel character that was similar. They presented human subjects with the same task, and then asked judges to determine which of the characters had been created by people and which by machines. Remarkably, the vast majority of judges could not tell the difference. In other words, the algorithm passed a simple, visual form of the Turing test, the classic criterion for machine intelligence.

Modeling babies

In the early days of artificial intelligence, the science of human learning was not advanced enough to offer much guidance. “Turing could only presume what a child’s brain was like,” says Tenenbaum—the British mathematician regarded it as a blank notebook—“but no, it is much more.” And as part of his drive to understand the origins of knowledge, Tenenbaum has also collaborated on work exploring the minds of small children. In 2008, he taught a class at Harvard with the renowned psychologist Elizabeth Spelke, who has shown that even young babies have certain forms of knowledge that appear to be built in. (In contrast to the foundational work of Jean Piaget—who believed that children develop object permanence, the understanding that objects don’t simply wink in and out of existence, at around eight months—Spelke showed that some form of this understanding is present in two- to three-month-olds.) Spelke convinced Tenenbaum that the brain is set up to develop certain concepts, and that in order to appreciate human intelligence fully, it is necessary to model the core knowledge of very young children. This includes intuition about both physics and psychology—in particular, our ability to interpret other people’s actions in terms of their mental states, beliefs, and objectives.

Most recently, Tenenbaum and Spelke explored how 10-month-old babies are able to figure out what other people want and how much they value the goals they’re pursuing. To do so, the researchers created animations in which blue, red, and yellow characters jumped over walls or performed other physical tasks in order to reach one another. For instance, a red figure might jump over a small wall to get to a blue figure but refuse to jump over a medium wall. That same figure, however, might scale a tall wall in order to reach its yellow counterpart. The researchers hypothesized that if babies saw a character work harder to achieve a goal, they would infer that the character valued it more than the alternative. Indeed, when researchers showed a red figure standing between yellow and blue with no wall and then caused it to move toward blue, the babies stared for longer. That is, having seen the red figure perform more work to reach yellow, they had inferred that red preferred yellow and were surprised when it “chose” blue instead. This suggests that even preverbal babies have an intuitive understanding of the balance between costs and rewards and how it figures into other people’s behavior, says Tenenbaum, adding that this psychological intuition is foundational for more complex social understanding later on. The work, led by postdoc Tomer Ullman, PhD ’15, and Spelke’s PhD student Shari Liu, was published in Science in 2017. 
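The logic the infants are credited with can be written down very compactly. As a rough sketch, assuming made-up numbers for the effort each wall demands, an observed choice to pay or to refuse a cost puts bounds on how much the agent must value each goal (this Python fragment is an illustration of the reasoning, not the researchers' model):

    # Wall heights stand in for effort; the numeric costs are invented.
    WALL_COST = {"none": 0.0, "small": 1.0, "medium": 2.0, "tall": 3.0}

    def infer_value_bounds(observations):
        """observations: list of (goal, wall, acted) tuples. If the agent acted,
        its value for the goal must exceed the cost it paid; if it refused,
        its value must fall below that cost. Returns (lower, upper) bounds."""
        bounds = {}
        for goal, wall, acted in observations:
            lower, upper = bounds.get(goal, (float("-inf"), float("inf")))
            cost = WALL_COST[wall]
            if acted:
                lower = max(lower, cost)
            else:
                upper = min(upper, cost)
            bounds[goal] = (lower, upper)
        return bounds

    # Red jumps a tall wall for yellow but refuses a medium wall for blue.
    observations = [("yellow", "tall", True), ("blue", "medium", False)]
    print(infer_value_bounds(observations))
    # yellow's value exceeds 3.0 while blue's is below 2.0, so red prefers yellow,
    # which is why choosing blue when no wall is present looks surprising.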

Taking a hike

Hamrick was a junior at MIT when she joined Tenenbaum’s lab in 2010 and began working on the Jenga-like physics project. When students are first exposed to research, they typically focus narrowly on the problem before them, she says. But Tenenbaum inspires them to think about how it fits into the larger questions in science, too (not unlike the way he says we learn). “It made me feel that research was an awesome thing to be doing,” Hamrick adds, crediting her experience in Tenenbaum’s lab with the decision to pursue a career in science. (She’s now a researcher at DeepMind, a British AI company owned by Google.)

One way that Tenenbaum has fostered a close and collaborative spirit in his lab is by taking his students and postdocs on annual hikes, some of which have been “notoriously intense,” he says. Growing up in California, he considered Yosemite National Park one of his favorite places. And starting in 2002, when he became an assistant professor at MIT, he began organizing yearly trips to the White Mountains in New Hampshire. The group usually goes around Columbus Day, at the height of fall foliage season. But even then, they have occasionally encountered wintry conditions at the higher altitudes. Recalling one trip along the Franconia Ridge Trail, Tenenbaum jokes: “It wasn’t a death march—nobody died—but there was some snow and not everyone was prepared, though they were warned!”

Along with that intense physicality, hiking also affords group members an opportunity to talk broadly about science and life. When people hike along a narrow trail, they tend to spread out. “Sometimes I find myself in the front, sometimes at the back,” says Tenenbaum. Either way, he adds, “I always have such a great range of conversations moving up and down the line, all while being inspired by this beautiful place we’re walking through.”

Tenenbaum also draws inspiration from parenthood. His 16-year-old daughter, Abi, has been known to lend a hand with his research; at age eight or nine, she was the first participant in the Jenga experiment. “Right away she found a bug—essentially a way to get a perfect score—and when she explained it to us, we were like, ‘Oh, good point,’ and rewrote the whole experiment,” he recalls. But more broadly, parenthood dovetails with Tenenbaum’s interest in human learning, especially in babies and children. “The experience of watching your kid have their first taste of ice cream or see their first horse is great,” he says. It “taps into my interest in what it is that you learn from the first time you see something.”

 

Tackling big questions at the intersection of brains, minds, and intelligent machines


MIT Intelligence Quest (now called the MIT Quest for Intelligence) aims to imbue AI with true intelligence by bringing together thinkers in a range of disciplines, from psychology to computer science. The initiative treats the human brain (and the AI systems that try to mimic it) as a grand engineering challenge. When the project was announced in March, Josh Tenenbaum shared this chart summarizing some of the big questions MIT researchers are working on in six major categories—and pointed out that questions yet to be asked may prove most important.

Center for Brains, Minds, and Machines at MIT