A Collection of Articles

35 Innovators Under 35

Inventors

Creating technologies that make it possible to reimagine how things are done.

Yunji Chen, 32

Improvements in artificial intelligence call out for new hardware.

Yunji Chen, iconoclastic and cosmopolitan, is sporting an untucked flannel shirt and sipping a mango smoothie at an Italian coffee shop in Beijing. He is talking about how he can make deep learning, a hot field of artificial intelligence, far more useful to people.

Once an obscure research branch, deep learning has quickly improved image search, speech recognition, and other aspects of computing (see “Teaching Machines to Understand Us”). Companies such as Google and Baidu are heavily invested in using it to get computers to learn about the world from vast quantities of data without having to be manually taught. However, the technology is resource-intensive: when the Google Brain project trained a computer to recognize a cat face in 2012, it required 16,000 microprocessor cores. That dismays Chen. “The expense and energy consumption is quite high,” he says, noting that only large companies can afford it.

The reason is that most processors can quickly repeat basic math functions but need “hundreds of instructions” to perform the more elaborate functions needed in advanced AI techniques, Chen says. So he is designing dedicated deep-learning processors, optimized “to compute the basic blocks of machine learning.” In his lab at the Institute of Computing Technology, research assistants run a computer program that simulates how precise tweaks in chip blueprints will affect processing speeds. “We are changing the wires, the connections, the circuits,” he says. His latest design appears to be hundreds of times faster than today’s central processing units, yet it requires only a thousandth as much energy.
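To make that contrast concrete, here is a minimal sketch, assuming a single artificial neuron with a ReLU activation; the sizes and instruction counts are illustrative and are not taken from Chen’s designs. On a general-purpose core the neuron decomposes into a long stream of scalar multiplies and adds, while a dedicated accelerator could treat the whole dot-product-plus-activation as one “basic block” operation.

```python
# Illustrative sketch only: contrasts how many scalar instructions a
# general-purpose core issues for one neuron's output versus an accelerator
# that treats the whole dot-product-plus-activation as a single hardware
# "basic block". All numbers and names here are hypothetical.

def neuron_scalar(inputs, weights, bias):
    """General-purpose CPU style: one scalar instruction per step."""
    instructions = 0
    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w          # one multiply + one add per weight
        instructions += 2
    out = max(acc, 0.0)       # ReLU activation: one compare/select
    instructions += 1
    return out, instructions

def neuron_fused(inputs, weights, bias):
    """Accelerator style: the whole block counts as one fused operation."""
    out = max(sum(x * w for x, w in zip(inputs, weights)) + bias, 0.0)
    return out, 1             # a single "basic block" operation

if __name__ == "__main__":
    xs = [0.5] * 256          # a hypothetical 256-input neuron
    ws = [0.1] * 256
    _, scalar_ops = neuron_scalar(xs, ws, 0.0)
    _, fused_ops = neuron_fused(xs, ws, 0.0)
    print(f"scalar instructions: {scalar_ops}, fused ops: {fused_ops}")
    # prints: scalar instructions: 513, fused ops: 1
```

The gap here is only in counted operations; a real accelerator implements the fused block directly in silicon, which is where the kind of speed and energy savings Chen describes would come from.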

As impressive as that may be, Chen, who entered college at age 14 and raced through his PhD in computer science by 24, envisions reducing energy consumption by a factor of 10,000, which could let deep-learning functions work on mobile or wearable devices. “After five or more years,” he says, “I think each cell phone can be as powerful as Google Brain.”

—Christina Larson