Look at a clock on a nearby wall. The clock itself should appear sharp, while the scene around it is blurred, as if your brain were sketching your surroundings, or, in computer graphics terms, rendering a low-resolution version of the scene.
Nvidia is applying the same trick to rendering virtual reality, and it could significantly improve the realism of virtual worlds. By concentrating graphics rendering power on a smaller area, the image a person sees can be made far sharper.
Leonardo da Vinci was the first person to notice this visual phenomenon, called foveal vision, in the 15th century. David Luebke, together with four other researchers at Nvidia, has spent the last nine months attempting to mimic the principle in VR by rendering at full resolution only the specific area where a player is looking and leaving the rest of the scene at far lower resolution.
When the player using the Nvidia system focuses on a new area of the scene, eye-tracking software shifts the focus of the rendering accordingly. To render a full VR scene at 90 frames per second, the lowest acceptable frame rate before users begin to report feelings of nausea, four million pixels must be redrawn almost a hundred times a second. But by concentrating the rendering on the player’s line of sight, huge computational savings can be made. “The performance gains are too large to be ignored,” says Luebke.
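The scale of those savings is easy to estimate. The Python sketch below takes the four-million-pixel, 90-frames-per-second figures above as given; the size of the foveal region and the peripheral downsampling factor are illustrative assumptions, not numbers Nvidia has published.

```python
# Back-of-the-envelope pixel throughput for full vs. foveated rendering.
# The 4-million-pixel, 90 Hz figures come from the article; the foveal
# fraction and peripheral downsampling factor are illustrative guesses.

TOTAL_PIXELS = 4_000_000      # pixels the headset must display
FRAME_RATE = 90               # frames per second, the VR comfort floor

FOVEAL_FRACTION = 0.10        # assumed: 10% of the display at full detail
PERIPHERY_SCALE = 0.25        # assumed: periphery shaded at 1/4 resolution
                              # per axis, i.e. 1/16 of the pixel work

full = TOTAL_PIXELS * FRAME_RATE

foveal = TOTAL_PIXELS * FOVEAL_FRACTION
periphery = TOTAL_PIXELS * (1 - FOVEAL_FRACTION) * PERIPHERY_SCALE ** 2
foveated = (foveal + periphery) * FRAME_RATE

print(f"Full rendering:     {full:,.0f} pixels/s")
print(f"Foveated rendering: {foveated:,.0f} pixels/s")
print(f"Savings:            {1 - foveated / full:.0%}")
```

Under these assumed values, foveated rendering shades roughly one sixth of the pixels the full pipeline would, which is the kind of headroom Luebke is describing.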
The principle is not new in VR research. Indeed, the Kickstarter-backed Fove headset uses a similar system (see "Point, Click, and Fire in Virtual Reality—With Just Your Eyes"). Luebke has spent much of the past 15 years studying the area, first as a professor at the University of Virginia and now at Nvidia. Previously, however, eye-tracking technology has struggled to keep up with the whip-quick speed of human eye movements, causing a stomach-churning latency effect when a user switches from, say, the left side of a scene to the right. A new prototype eye-tracking VR display by SensoMotoric Instruments is capable of accurate, low-latency eye tracking at 250 Hz. “For the first time we have eye-trackers that you can’t outrun with your eyes,” explains Luebke.
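The arithmetic behind that claim is simple enough to sketch: at 250 Hz the tracker reports gaze position every 4 milliseconds, while even a fast saccade takes tens of milliseconds to complete. The saccade durations below are typical figures from the vision literature, not measurements from Luebke's team.

```python
# Why 250 Hz is fast enough: the tracker samples gaze every 4 ms, while
# a saccade takes tens to hundreds of milliseconds to complete, so the
# renderer can catch the eye mid-flight. Saccade durations are typical
# values from the vision literature, not figures from the article.

TRACKER_HZ = 250
sample_interval_ms = 1000 / TRACKER_HZ        # 4 ms between gaze samples

for saccade_ms in (20, 200):                  # short and long eye movements
    samples = saccade_ms / sample_interval_ms
    print(f"A {saccade_ms} ms saccade is sampled {samples:.0f} times in flight")
```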
Even with this capability, Nvidia’s team needed to spend a great deal of time calculating exactly how much it could lower the resolution of the periphery of a scene before a viewer would notice. “Peripheral vision is very good at detecting flicker,” explains Luebke. “It’s used to help us see tigers in the woods.”
Any flicker from the degraded periphery is therefore disconcerting. Likewise, if the periphery becomes too blurred, it can create a tunnel-vision effect, as if the viewer were looking through a pair of binoculars. “You can tell something’s wrong, even if you can’t quite put your finger on what,” says Luebke.
To solve the issue, Nvidia’s researchers found that if they increase the contrast of the peripheral scene while lowering its resolution, the human visual system is completely fooled.
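A minimal sketch of that idea, under assumed parameters: downsample everything outside a circular foveal region around the gaze point, then stretch the periphery's contrast about its mean to mask the lost detail. The function and its parameters are hypothetical illustrations, a crude stand-in for whatever Nvidia's renderer actually does.

```python
import numpy as np

def foveate(image, gaze_xy, fovea_radius, downscale=4, contrast_boost=1.3):
    """Render the periphery at low resolution, then boost its contrast.

    `downscale` and `contrast_boost` are illustrative values, not
    figures from Nvidia's research.
    """
    h, w = image.shape[:2]

    # Cheap low-resolution periphery: sample every Nth pixel, then
    # repeat each sample to fill the frame back out.
    low = image[::downscale, ::downscale]
    periphery = np.repeat(np.repeat(low, downscale, axis=0),
                          downscale, axis=1)[:h, :w]

    # Stretch contrast about the mean so the coarse periphery does not
    # read as washed out.
    mean = periphery.mean()
    periphery = np.clip(mean + contrast_boost * (periphery - mean), 0.0, 1.0)

    # Keep full resolution inside a circle around the gaze point.
    ys, xs = np.mgrid[0:h, 0:w]
    in_fovea = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= fovea_radius ** 2
    if image.ndim == 3:               # broadcast the mask over color channels
        in_fovea = in_fovea[..., None]
    return np.where(in_fovea, image, periphery)

# Toy usage on a random grayscale "frame" with the gaze at its center.
frame = np.random.rand(480, 640)
out = foveate(frame, gaze_xy=(320, 240), fovea_radius=100)
```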
While Nvidia has no products in production that support the technique, the company, which provides hardware and software for many VR companies, hopes its findings will encourage the major headset makers to include eye-trackers in their future head-mounted displays. “Part of what we are doing here is helping to define the rules of the road for VR,” says Luebke.
The technology is unlikely to appear outside of VR (in laptops, for example), since eye-trackers become far less accurate the farther they sit from the user’s face. VR, by contrast, where the tracker rests just a few centimeters from the eye, offers an ideal pairing. The technology will likely shape the company’s future graphics cards, giving developers the opportunity to prioritize computational work on specific pixels and to redefine rendering algorithms.