The announcement of the next-generation virtual reality head-mounted display (HMD), the Varjo VR-1, which features a human-eye resolution display, has enabled the team at ZeroLight to push VR graphical quality to a new level. In this post, we explain how we achieved this by optimising our software's performance and quality using NVIDIA's new GPU feature: Multi-View Rendering.

(Image courtesy of Varjo)

The VR-1 features a Bionic Display™, which combines a 1920x1080 low-persistence micro-OLED focus display with a 1440x1600 low-persistence AMOLED context display. The HMD has an 87-degree field of view and integrated 100 Hz stereo eye tracking, and it uses SteamVR version 1.0 or 2.0 base stations.
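To give a feel for why the focus display reaches "human-eye resolution", the pixels-per-degree (PPD) figures can be estimated from the specs above. This is a rough sketch: the ~30-degree angular span assumed for the focus region is our illustration, not a figure from this post.

```python
# Rough pixels-per-degree (PPD) estimate for the VR-1's two display layers.
# Assumption (not from the post): the focus micro-OLED covers roughly a
# 30-degree central region of the view.

FOCUS_WIDTH_PX = 1920      # focus micro-OLED horizontal resolution
CONTEXT_WIDTH_PX = 1440    # context AMOLED horizontal resolution
FOCUS_FOV_DEG = 30         # assumed central focus region
CONTEXT_FOV_DEG = 87       # field of view quoted for the HMD

focus_ppd = FOCUS_WIDTH_PX / FOCUS_FOV_DEG        # ~64 PPD, near the ~60 PPD
                                                  # often cited as "human-eye"
context_ppd = CONTEXT_WIDTH_PX / CONTEXT_FOV_DEG  # ~16.6 PPD, typical of
                                                  # standard VR HMDs

print(f"focus: {focus_ppd:.1f} PPD, context: {context_ppd:.1f} PPD")
```

The roughly 4x jump in angular resolution between the context and focus layers is what makes the fine detail in the centre of the view possible.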

(Image courtesy of Varjo)

The VR-1's display produces a hardware-based foveated rendering effect, delivering human-eye resolution in the centre of the view when the user looks directly ahead.

Integrating the VR-1 is very similar to integrating any other HMD, except that rather than rendering 2 views - one per eye - we render 4. This is similar to the foveated rendering technique we developed for the StarVR One, but without eye tracking, as the fovea area is fixed in place by the hardware.
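Because the fovea never moves, the four views can be described once at startup rather than recomputed per frame as an eye-tracked system would require. The following is a minimal sketch of that idea; the field names and structure are illustrative, not ZeroLight's or Varjo's actual API.

```python
# Minimal sketch of the four fixed views rendered per frame for the VR-1.
# The focus region never moves, so these descriptors can be built once at
# startup. Resolutions are the display resolutions quoted for the HMD.

from dataclasses import dataclass

@dataclass(frozen=True)
class ViewDesc:
    eye: str          # "left" or "right"
    layer: str        # "context" (whole view) or "focus" (central region)
    width: int        # render-target width in pixels
    height: int       # render-target height in pixels

VR1_VIEWS = [
    ViewDesc("left",  "context", 1440, 1600),
    ViewDesc("right", "context", 1440, 1600),
    ViewDesc("left",  "focus",   1920, 1080),
    ViewDesc("right", "focus",   1920, 1080),
]

# A standard HMD needs only the two context views; the VR-1 adds two focus views.
assert len(VR1_VIEWS) == 4
```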

For standard two-view VR headsets, we use NVIDIA's Single Pass Stereo (SPS), which instances the view data from one eye to the other, removing the need to traverse the scene hierarchy twice. Rendering the 4 views that the VR-1 requires, however, incurs a large CPU overhead due to the complexity of the vehicle and scene: the draw thread must transform and upload material data to the GPU for thousands of objects every frame. We initially rendered the 4 views using 2 SPS passes, which halved the CPU rendering overhead. To further optimise performance on the VR-1, we integrated a feature of the new NVIDIA Turing GPUs: Multi-View Rendering (MVR). This feature is similar to SPS, but supports up to 4 viewports in a single render pass.
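The CPU saving can be modelled with simple arithmetic: each scene traversal submits one draw per object, SPS covers two views per traversal, and MVR covers up to four. The object count below is illustrative ("thousands of objects every frame"), not a measured ZeroLight figure.

```python
# Back-of-the-envelope model of CPU draw submissions per frame for the VR-1's
# 4 views. Each scene traversal pass submits one draw per object; SPS
# instances 2 views per pass, and MVR (Turing) up to 4.

N_OBJECTS = 5000            # illustrative: "thousands of objects every frame"
N_VIEWS = 4                 # VR-1: 2 context views + 2 focus views

def passes_needed(views_per_pass: int) -> int:
    # Ceiling division: scene traversals needed to cover all views.
    return -(-N_VIEWS // views_per_pass)

naive = passes_needed(1) * N_OBJECTS   # 4 passes  -> 20000 submissions
sps   = passes_needed(2) * N_OBJECTS   # 2 SPS passes -> 10000 (halved)
mvr   = passes_needed(4) * N_OBJECTS   # 1 MVR pass   -> 5000

print(naive, sps, mvr)
```

Under this model, moving from 2 SPS passes to a single MVR pass halves the per-frame draw submissions again, which is where the further CPU saving comes from.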


The image above shows the typical use case for MVR: optimal rendering for a wide-FOV HMD display. The image below shows how we use the same technique to render the 4 Varjo viewports. The bottom two renders show the human-eye resolution content displayed in the focus area; the top two show the content rendered to the remainder of the view. All four viewports are rendered in a single pass, saving on both vertex shader transformation and CPU command buffer creation.

Delivering the highest-quality content is always a key focus of our work, but especially so in VR. When you put someone in an entirely virtual world, every detail must be as true-to-life as possible, or the illusion of reality breaks. To make sure our models are of the absolute highest quality, we use a single digital twin asset across every display type. This made our VR-1 integration seamless: the detail required for the human-eye resolution display is already present in the digital twin models we use in our UHD display solutions (find out more here).

The image below compares a capture through the VR-1 display with standard VR, demonstrating the fine details the HMD can display.

(Image courtesy of Varjo)

For the latest tech news, follow our ZeroLight Tech Twitter page and our #ZLTech hashtag.