What’s Next for VR? More Throughput, Less Latency … and a Killer App
By Jim Bask
Virtual reality is going to get a lot more real.
With a vision of VR serving as the future of computer interfaces, NVIDIA has set its sights on refining the rendering process — to increase throughput, reduce latency and create a more mind-blowing visual experience for users.
That effort was the subject of a presentation Tuesday at the GPU Technology Conference, where Morgan McGuire, an associate professor of computer science at Williams College who will soon join NVIDIA as a distinguished research scientist, told attendees that there are significant challenges to overcome.
For instance, McGuire said future graphics systems need to be able to process 100,000 megapixels per second, up from the roughly 450 megapixels per second they’re capable of today. That would help push rendering latency from its current threshold of about 150 milliseconds to 20 milliseconds, with a longer-term goal of getting it under one millisecond, approaching the limits of human perception.
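The figures McGuire cited can be put in perspective with some quick arithmetic. The numbers below are the ones quoted in the talk; combining them into separate throughput and latency ratios is my own illustrative framing:

```python
# Back-of-the-envelope gap between today's VR pipelines and the stated targets.
current_throughput_mpix = 450        # megapixels per second today
target_throughput_mpix = 100_000     # megapixels per second needed

current_latency_ms = 150.0           # typical rendering latency today
target_latency_ms = 1.0              # long-term goal, near human perception limits

throughput_gap = target_throughput_mpix / current_throughput_mpix
latency_gap = current_latency_ms / target_latency_ms

print(f"throughput must grow ~{throughput_gap:.0f}x")   # ~222x
print(f"latency must shrink ~{latency_gap:.0f}x")       # ~150x
```

Multiplied together, those two ratios land in the tens of thousands, which squares with McGuire’s point that this is far more than an incremental improvement.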
“We’re about five or six orders of magnitude [between] modern VR systems and what we want,” McGuire said during a well-attended talk. “That’s not an incremental increase.”
What makes latency an even more pressing problem is that as VR systems increase resolution, their throughput demands grow as well, which in turn drives up latency. So even as per-stage latency shrinks, the gains are often offset by the growth in throughput.
“You can’t process the first pixel of the next stage until you’ve completed the final pixel in the previous stage,” McGuire said.
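The serialization McGuire describes means a frame’s end-to-end latency is roughly the sum of its stage times, because each stage must finish the whole frame before the next can begin. A minimal sketch of that accounting follows; the stage names and millisecond figures are hypothetical, for illustration only:

```python
# Illustrative staged rendering pipeline: stage N+1 cannot start on a pixel
# until stage N has completed the final pixel of the frame, so the frame's
# end-to-end latency is the sum of all stage times.
stage_times_ms = {
    "geometry": 4.0,
    "rasterize": 3.0,
    "shade": 6.0,
    "post_fx": 2.0,
    "display_scanout": 8.0,
}

frame_latency_ms = sum(stage_times_ms.values())
print(f"end-to-end latency: {frame_latency_ms:.1f} ms")  # 23.0 ms
```

Under this model, cutting latency means either making each stage faster or removing stages outright, which is exactly the direction the experiments below take.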
To bring latency down enough, McGuire said NVIDIA is, and will be, experimenting in many areas:
- It starts with the renderer, which drives most of the latency. McGuire said that removing path tracing, the film industry’s primary rendering technique, from the process and replacing it with a combination of rasterization and GPU-accelerated techniques speeds up rendering.
- NVIDIA research teams have found that eliminating post-rasterization tools such as shading and post FX increases throughput and reduces latency, but it also reduces image quality.
- Eye-tracking software enables VR systems to deliver the sharpest resolution to whatever parts of an image the user is looking at, allowing the rendering process to deliver lower resolution imagery for the rest of the display.
- Breaking an image into many versions and angles of that image — like creating a bug’s view of an image — also brings down latency, but it requires a lot of throughput, just as if it were processing many images simultaneously.
- NVIDIA researchers also have been testing the effectiveness of using a sheet of holographic glass that replaces the assortment of lenses and filters that a camera uses, enabling focus-on-the-fly by moving back and forth as the user’s eyes move to different parts of an image.
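The eye-tracking approach above, commonly called foveated rendering, can be sketched as a resolution budget that falls off with angular distance from the gaze point. The region radii and scale factors below are illustrative assumptions, not values from the talk:

```python
# Hedged sketch of a foveated rendering resolution budget: full resolution
# where the eye is looking, progressively coarser shading in the periphery.
def shading_rate(dist_from_gaze_deg: float) -> float:
    """Fraction of full resolution to render at a given angular
    distance (in degrees) from the tracked gaze point."""
    if dist_from_gaze_deg < 5:      # foveal region: full detail
        return 1.0
    elif dist_from_gaze_deg < 20:   # near periphery: half resolution
        return 0.5
    else:                           # far periphery: quarter resolution
        return 0.25

print(shading_rate(2))    # 1.0
print(shading_rate(10))   # 0.5
print(shading_rate(45))   # 0.25
```

Because most of the display falls outside the foveal region, even this crude falloff cuts the pixel workload substantially, which is what lets the renderer spend its throughput where the viewer will actually notice.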
Will VR Kill the Keyboard?
Probably the most surprising part of McGuire’s talk was the subject of text. When he brought this up, audience members were momentarily confused, until he explained further that if VR becomes the gateway to augmented reality and the interface for consumer computer use, it will one day replace keyboards with some other tool. And that means that how text is entered and displayed becomes a major consideration.
In this scenario, McGuire said, “text is actually the killer app” for VR — definitely not what anyone in the room expected to hear.
Naturally, all of these improvements are likely to drive up the price tag for desktop VR systems, which have typically cost about $5,000. McGuire declined to speculate on what pricing of future systems might look like, but he made it clear the increase won’t be as dramatic as some might fear.