
Foveated Rendering on the VIVE PRO Eye

Following on from our previous work on foveated rendering, we have used the latest hardware advancements to improve the technique and achieve even greater resolution and performance gains.

First shown at CES 2019, this project used the latest HMD from HTC: the new VIVE Pro Eye. As Announcement Partners for the headset, we got early access to the hardware before its unveiling at the event. The new HMD features the same high-spec AMOLED 2,880 x 1,600 90Hz display as the previous VIVE Pro, but with the new addition of integrated Tobii eye tracking technology.

Our previous foveated rendering demos relied on a technique that renders multiple views of the scene per eye: one for the peripheral area and one for the high-resolution fovea area. While this reduces the pixel fill rate cost, it adds CPU overhead and extra vertex cost due to the additional render pass. For the Vive Pro Eye experience, we used NVIDIA's latest Turing GPU, the Quadro RTX 6000, to develop a new technique that avoids these overheads by taking advantage of Variable Rate Shading (VRS). VRS enables the application to render to a buffer at varying pixel densities.
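To make the mechanism concrete, here is a minimal sketch of how an application binds a screen-space shading-rate image using the standard Direct3D 12 VRS Tier 2 API. Note this is illustrative only: the demo shipped before this API was public (Turing initially exposed VRS through NVAPI), so the function name and setup below are assumptions, not ZeroLight's actual code.

```cpp
// Sketch: binding a per-frame shading-rate image with D3D12 VRS Tier 2.
// Each texel of the rate image controls the shading density of one screen
// tile (16x16 pixels on Turing; query ShadingRateImageTileSize to be sure).
#include <d3d12.h>

void BindFoveationMask(ID3D12GraphicsCommandList5* cmdList,
                       ID3D12Resource* shadingRateImage) // DXGI_FORMAT_R8_UINT
{
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        // Combiner 0: pipeline base rate vs. per-primitive rate.
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH,
        // Combiner 1: let the screen-space image override that result,
        // so the eye-tracked mask fully controls the shading density.
        D3D12_SHADING_RATE_COMBINER_OVERRIDE
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList->RSSetShadingRateImage(shadingRateImage);
}
```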

The video below shows the VRS mask that we update each frame based on the position of the eyes. The colours represent the indices in a pixel density table: the green area is shaded at full 1x1 density; the yellow at 2x2; the red at 4x4.
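The per-frame update itself is cheap. As a rough sketch, the CPU (or a simple compute pass) writes an index into each tile of the mask based on its distance from the gaze point; the index-to-density table (1x1, 2x2, 4x4) is configured separately through the graphics API. The function name and the radii below are illustrative assumptions, not measured values from the demo.

```cpp
// Sketch: fill the foveation mask from the tracked gaze position.
// mask holds one byte per screen tile; the value is an index into the
// pixel density table (0 = 1x1 / green, 1 = 2x2 / yellow, 2 = 4x4 / red).
#include <cstdint>
#include <cmath>

void UpdateFoveationMask(uint8_t* mask, int tilesX, int tilesY,
                         float gazeX, float gazeY) // gaze in tile coordinates
{
    const float fovealRadius = tilesX * 0.10f; // full-detail region (assumed)
    const float midRadius    = tilesX * 0.25f; // transition ring   (assumed)
    for (int y = 0; y < tilesY; ++y) {
        for (int x = 0; x < tilesX; ++x) {
            float d = std::hypot(x + 0.5f - gazeX, y + 0.5f - gazeY);
            mask[y * tilesX + x] = d < fovealRadius ? 0   // 1x1
                                 : d < midRadius    ? 1   // 2x2
                                 :                    2;  // 4x4
        }
    }
}
```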

Variable pixel density is the optimal rendering method for foveated rendering. The draw thread no longer needs to process additional scene data to render multiple passes for the foveal and peripheral views; this removes a large part of the CPU overhead. Additionally, foveated rendering greatly reduces the pixel cost; the more pixel- and fill-rate-bound the application is, the greater the performance boost can be.

In addition to foveated rendering, we enhanced the BMW M Virtual Experience further to push quality to the max. Using a technique called supersampling, we render at 9x the HMD's display resolution, then filter this data down to produce a clean image without the aliasing artefacts typically seen in VR.
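As a rough sketch of the resolve step, 9x supersampling can be read as a 3x3 grid of samples per display pixel. The filter kernel we used isn't detailed here, so the example below assumes a simple box filter purely to illustrate the downsample; the function name is hypothetical.

```cpp
// Sketch: resolve a 9x supersampled buffer (3x per axis) to display
// resolution by averaging each 3x3 block of samples (box filter assumed).
#include <cstdint>
#include <vector>

std::vector<float> ResolveSupersampled(const std::vector<float>& src,
                                       int dstW, int dstH) // src is (3*dstW) x (3*dstH), single channel
{
    std::vector<float> dst(dstW * dstH);
    const int srcW = dstW * 3;
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            float sum = 0.0f;
            for (int sy = 0; sy < 3; ++sy)       // average the 3x3 sample block
                for (int sx = 0; sx < 3; ++sx)
                    sum += src[(y * 3 + sy) * srcW + (x * 3 + sx)];
            dst[y * dstW + x] = sum / 9.0f;
        }
    }
    return dst;
}
```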

VR is prone to aliasing due to the constant small movements detected by the tracking system. Aliasing can be very distracting and reduce user immersion. These artefacts are even more prominent when rendering very detailed geometry like CAD data. Small details result in sub-pixel triangles that flicker as the user moves, disappearing and reappearing between frames. Using supersampling greatly reduces this distracting artefact.

You can find more information on how to implement this technique on our NVIDIA Developer Blog.

Follow our ZeroLight Tech Twitter page for all our latest tech news and developments.