Drive down a Manhattan street, and your car's navigation system will blink in and out of service. That's because the Global Positioning System (GPS) satellite signals that car navigation systems and other technologies rely on are blocked by buildings. GPS also doesn't work well indoors, inside tunnels and subway systems, or in caves, a problem for everyone from emergency workers to soldiers.
But in a recent advance that has not yet been published, researchers at Sarnoff, in Princeton, NJ, say their prototype technology, which uses advanced processing of stereo video images to fill in GPS gaps, can maintain location accuracy to within one meter over a half-kilometer of travel through so-called GPS-denied environments.
That kind of accuracy is a major advance in the field, giving GPS-like precision over distances relevant to the intermittent service gaps that might be encountered in urban combat or downtown driving. "This is a general research problem in computer vision, but nobody has gotten the kind of accuracy we are getting," says Rakesh Kumar, a computer scientist at Sarnoff. The work builds partly on earlier research done at Sarnoff by David Nister, a computer scientist now at the University of Kentucky.
Motilal Agrawal, a computer scientist at SRI International, in Menlo Park, CA, which is also developing GPS-denied location technologies, agrees, saying the advance essentially represents a fivefold leap in accuracy. "We haven't seen that reported error rate before," Agrawal says. "That's pretty damned good. For us a meter of error is typical over 100 meters, and to get that over 500 meters is remarkable and quite good."
The approach uses four small cameras, which might eventually be affixed to a soldier's helmet or a car's bumper. Two cameras face forward, and two face backward. When GPS signals fade out, the technology computes the rig's location in 3-D space from the way fixed objects appear to move through the cameras' 2-D fields of view as the rig itself moves.
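The stereo pairing is what makes those calculations metric: with two calibrated cameras a known distance apart, the apparent shift of a feature between the left and right images yields its depth directly. The Python sketch below illustrates only that one textbook relationship; the function name and the numbers are illustrative assumptions, not part of Sarnoff's system.

```python
# Depth from stereo disparity: a toy illustration, not Sarnoff's implementation.
# Assumes a calibrated, rectified stereo pair with known focal length and baseline.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the distance (meters) to a point matched in both cameras.

    focal_px     -- focal length in pixels (from calibration)
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal shift of the point between left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px

# Example: a 700-pixel focal length, a 12 cm baseline, and an 8-pixel
# disparity place the feature about 10.5 meters away.
print(depth_from_disparity(700.0, 0.12, 8.0))  # 10.5
```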
This is a multi-step task. The technology first infers distance traveled by computing how a series of fixed objects "move" relative to the camera image, then adds up these small movements to compute the total distance; this part of the technology is called "visual odometry." But adding up many small motions introduces error over time, a problem called "drift," so the software identifies landmarks and re-finds the same landmarks in subsequent frames to correct the drift. Finally, the technology discerns which objects are actually moving and filters them out so they don't throw off the calculations. It works even in challenging, cluttered environments, says Kumar.
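To see why the drift correction matters, consider a stripped-down 2-D sketch of the odometry step. It is a hypothetical illustration under simplified assumptions, not Sarnoff's algorithm: each frame pair yields a small estimated motion, the motions are composed into a running pose, and even a tiny per-step heading error compounds over distance. That compounding is precisely the error that re-matching stored landmarks keeps bounded.

```python
# Visual-odometry pose chaining, reduced to 2-D for illustration.
# All names and numbers here are assumptions for the sketch.
import math

def compose(pose, step):
    """Apply a frame-to-frame motion (dx, dy, dtheta), expressed in the
    vehicle frame, to the running global pose (x, y, heading in radians)."""
    x, y, th = pose
    dx, dy, dth = step
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
# 1,000 half-meter steps (500 m total), each carrying a tiny
# 0.001-radian heading error from imperfect frame-to-frame matching.
steps = [(0.5, 0.0, 0.001)] * 1000
for step in steps:
    pose = compose(pose, step)

# A drift-free system would end at (500, 0, 0); instead the accumulated
# heading error has bent the estimated path far off the straight line.
# Periodically re-localizing against landmarks is what resets this drift.
print(pose)
```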
“The essential method is like how people navigate,” says Kumar. “When people close their eyes while walking, they will swerve to the left or right. You use your eyesight to know whether you are going straight or turning. Then you use eyesight to recognize landmarks.”
While the general idea has been pursued for years, Sarnoff achieved the one-meter accuracy milestone only in the past three months, an advance that will be published soon, says Kumar. It's an important result, says Frank Dellaert, a computer scientist at Georgia Tech. "This is significant," he says. "The reason is that adding these velocities over time accumulates error, and getting this type of accuracy over such a distance means that the 'visual odometry' component of their system is very high quality."
Kumar says the technology also lets users, whether soldiers, robots, or, eventually, drivers, build accurate maps of where they have been and communicate with one another to assemble a common picture of their relative locations.
Kurt Konolige, Agrawal's research partner at SRI, of which Sarnoff is a subsidiary, says one goal is to reduce the computational horsepower needed for such intensive processing of video images, something Kumar's group is working on. But if the size and cost could be brought low enough, he says, "you could also imagine small devices that people could wear as they moved around a city, say, or inside a large building, that would keep track of their position and guide them to locations."
The technology, funded by the Office of Naval Research (ONR), is being tested by military units for use in urban combat. Dylan Schmorrow, the ONR program manager, says, "Sarnoff's work is unique and important because their technology adds a relatively low-cost method of making visual landmarks with ordinary cameras to allow other sensors to work more accurately."
Kumar says that while the first priority is to deliver mature versions of the technology to Sarnoff's military sponsors, the next step will be to produce a version suited to the automotive industry. The technology has not yet been presented to car companies, he says, but "we plan to do that." Ultimately, he adds, the biggest commercial application would be to beef up car navigation systems.