Connectivity

Air Force Tests IBM’s Brain-Inspired Chip as an Aerial Tank Spotter

Chips with silicon “neurons” could make satellites, aircraft, and drones smarter.

Jan 11, 2017

Satellites, aircraft, and growing numbers of drones—the U.S. Air Force has a lot of electronic eyes in the sky. Now it’s exploring whether brain-inspired computer chips could give those systems the smarts to do things like automatically identify vehicles such as tanks or anti-aircraft systems.

The Air Force Research Lab (AFRL) reports good results from using a “neuromorphic” chip made by IBM to identify military and civilian vehicles in radar-generated aerial imagery. The unconventional chip got the job done about as accurately as a regular high-powered computer, using less than a 20th of the energy.

The AFRL awarded IBM a contract worth $550,000 in 2014, making the lab the first paying customer for the company’s brain-inspired TrueNorth chip. The chip processes data using a network of one million elements designed to mimic the neurons of a mammalian brain, connected by 256 million “synapses.”

Such chips are very different from those in existing computers, and for some problems they should be much more power efficient (see “Thinking In Silicon”). The Air Force is interested because that might make it possible to deploy advanced machine vision, which usually requires a lot of computing power, in places where resources and space are limited. Satellites, high-altitude aircraft, air bases reliant on generators, and small drones could all benefit, says AFRL principal electronics engineer Qing Wu. “Air Force mission domains are air, space, and cyberspace. [All are] very sensitive to power constraints,” he says.

Wu has been staging contests between TrueNorth and a high-powered Nvidia computer called the Jetson TX1, which costs around $500 and is designed to make it easier to deploy powerful machine-learning technology onboard machines such as cars or mobile robots.

The competing computers used different implementations of neural-network-based image-processing software to try to distinguish 10 classes of military and civilian vehicles represented in a public data set called MSTAR. Examples included Russian T-72 tanks, armored personnel carriers, and bulldozers. Both systems achieved about 95 percent accuracy, but the IBM chip used between a 20th and a 30th as much power.
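Neither team’s code is described in detail, but the kind of neural-network classifier typically applied to a 10-class problem like MSTAR can be sketched in a few dozen lines. The sketch below is a generic convolutional network in PyTorch; the layer sizes, the 128-by-128 single-channel image chips, and the single training step are illustrative assumptions, not the software the Air Force or IBM actually ran.

```python
# A minimal sketch (not the Air Force's or IBM's software) of a convolutional
# classifier for 10 vehicle classes, assuming 128x128 single-channel image
# chips like those in MSTAR. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, 10),            # one output score per vehicle class
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in batch: 8 random "image chips" and random labels, just to show one
# training step; a real run would load labeled MSTAR chips instead.
images = torch.randn(8, 1, 128, 128)
labels = torch.randint(0, 10, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step, loss = {loss.item():.3f}")
```

A real experiment would train on labeled MSTAR chips for many passes and report accuracy on a held-out test split; the point here is only the overall shape of the classifier the two systems were running in different forms.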

IBM’s chip should have an efficiency advantage at such tasks. The conventional computer ran its neural-network software on general-purpose hardware, intended to solve any kind of problem. The TrueNorth chip’s hardware is hard-coded to represent artificial neural networks, with one million physical “neurons” customized to the task.

One reason that architecture offers better efficiency is that the chip’s neurons and synapses both store and operate on data, says Wu. In a conventional system like the Jetson TX1, the components that perform calculations are separate from memory. That means data must be shuttled from memory to the processor to be analyzed, and then back to memory to be stored, consuming time and energy.
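The article doesn’t spell out TrueNorth’s neuron circuits, but the co-located style of computing can be illustrated with a toy, software-only spiking neuron: each neuron keeps its own synaptic weights and membrane potential and does a small amount of work only when a spike arrives. The leaky integrate-and-fire rule and the constants below are common textbook simplifications, assumed here for illustration rather than taken from IBM’s design.

```python
# A toy, software-only model of an event-driven "neuron" in the spirit of
# neuromorphic hardware. It is NOT TrueNorth's actual neuron circuit; the
# threshold, leak rate, and weights are illustrative assumptions. The point
# is that the neuron stores its synaptic weights and membrane potential
# locally and updates them only when input spikes arrive.

class SpikingNeuron:
    def __init__(self, weights, threshold=1.0, leak=0.95):
        self.weights = list(weights)   # synaptic weights stored with the neuron
        self.potential = 0.0           # membrane potential, also local state
        self.threshold = threshold
        self.leak = leak

    def receive(self, synapse_index):
        """Integrate one incoming spike on the given synapse."""
        self.potential += self.weights[synapse_index]

    def tick(self):
        """Advance one time step: leak, then fire if the threshold is crossed."""
        self.potential *= self.leak
        if self.potential >= self.threshold:
            self.potential = 0.0       # reset after firing
            return True                # a spike would travel to downstream neurons
        return False


# Example: a neuron with three input synapses receiving a short burst of spikes.
neuron = SpikingNeuron(weights=[0.4, 0.3, 0.5])
events = [0, 2, 1, 0]                  # indices of synapses that spiked, per step
for synapse in events:
    neuron.receive(synapse)
    if neuron.tick():
        print("spike emitted")
```

On a conventional processor this loop would still move the weights and potential through a memory hierarchy; on a neuromorphic chip each physical neuron holds that state beside its own arithmetic, which is the advantage Wu describes.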

Massimiliano Versace, who directs the Boston University Neuromorphics Lab and worked on another part of the Pentagon contract that funded IBM’s work on TrueNorth, says the results are promising. But he notes that IBM’s chip currently comes with trade-offs.

It is much easier to deploy neural networks on conventional computers, thanks to software made available by Nvidia, Google, and others. And IBM’s unusual chip is much more expensive. “Ease of use and price are [the] two main factors rowing against specialized neuromorphic chips,” says Versace.

Wu says that the hardware should get much cheaper if IBM is able to attract enough interest to ramp up production. The company says it is working on making software development for the platform easier.