Real-Time Ray Tracing Part 2: GPU Hybrid Ray Tracing

This is the second article in our ray tracing series, outlining our history in the area of real-time ray tracing.

To recap: ray tracing is a technique used in CGI to recreate how light interacts with objects and materials in the real world. It's the holy grail of real-time rendering, and it's vital if we're going to create truly photorealistic 3D visualisation. In our previous blog, we discussed how we overcame some of the challenges of real-time ray tracing by using sparse voxel reflections ahead of GTC 2018. If you've not read that blog yet, you can check it out here.

In this blog, we outline the new ray-tracing technology from Microsoft and NVIDIA and how we used early access to these new developments to design and prototype our next generation of real-time ray-traced experiences before presenting them at GTC 2019.

DXR

With the announcement of Microsoft DXR (DirectX Raytracing) and NVIDIA RTX (ray-tracing hardware acceleration) technologies in 2018, we began developing the next generation of our rendering pipeline. DXR adds interfaces to DirectX that make it easy for developers to add geometry to acceleration structures and cast rays against that data, with hit and miss shaders creating the results. RTX hardware, on the other hand, accelerates this whole system, giving developers the ability to cast many more rays in the small number of milliseconds available in a real-time rendered frame. To add support for DXR in our revamped platform, we separated the work into two sections: the core rendering update from DirectX 11 to DirectX 12, and the integration of hybrid render layers into our shader lighting logic.
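
To make the shape of the API concrete, here is a minimal sketch of how triangle geometry might be registered with a DXR bottom-level acceleration structure. It is illustrative rather than our production code: the helper name and position-only vertex layout are our own, and buffer allocation, prebuild sizing (via GetRaytracingAccelerationStructurePrebuildInfo), and the ray-tracing pipeline state are all omitted.

```cpp
#include <d3d12.h>

// Sketch: registering one triangle mesh with a DXR bottom-level
// acceleration structure (BLAS). Assumes the device and command list
// are already initialised and that the BLAS/scratch buffers were sized
// with GetRaytracingAccelerationStructurePrebuildInfo beforehand.
void buildBlas(ID3D12GraphicsCommandList4* cmdList,
               ID3D12Resource* vertexBuffer, UINT vertexCount,
               ID3D12Resource* indexBuffer, UINT indexCount,
               ID3D12Resource* scratchBuffer, ID3D12Resource* blasBuffer) {
    D3D12_RAYTRACING_GEOMETRY_DESC geometry = {};
    geometry.Type = D3D12_RAYTRACING_GEOMETRY_TYPE_TRIANGLES;
    geometry.Flags = D3D12_RAYTRACING_GEOMETRY_FLAG_OPAQUE;
    geometry.Triangles.VertexBuffer.StartAddress = vertexBuffer->GetGPUVirtualAddress();
    geometry.Triangles.VertexBuffer.StrideInBytes = 3 * sizeof(float); // position-only layout
    geometry.Triangles.VertexFormat = DXGI_FORMAT_R32G32B32_FLOAT;
    geometry.Triangles.VertexCount = vertexCount;
    geometry.Triangles.IndexBuffer = indexBuffer->GetGPUVirtualAddress();
    geometry.Triangles.IndexFormat = DXGI_FORMAT_R32_UINT;
    geometry.Triangles.IndexCount = indexCount;

    D3D12_BUILD_RAYTRACING_ACCELERATION_STRUCTURE_DESC build = {};
    build.Inputs.Type = D3D12_RAYTRACING_ACCELERATION_STRUCTURE_TYPE_BOTTOM_LEVEL;
    build.Inputs.DescsLayout = D3D12_ELEMENTS_LAYOUT_ARRAY;
    build.Inputs.NumDescs = 1;
    build.Inputs.pGeometryDescs = &geometry;
    build.DestAccelerationStructureData = blasBuffer->GetGPUVirtualAddress();
    build.ScratchAccelerationStructureData = scratchBuffer->GetGPUVirtualAddress();

    // The driver builds a BVH on the GPU; ray-generation shaders then
    // trace against it, with hit and miss shaders shading the results.
    cmdList->BuildRaytracingAccelerationStructure(&build, 0, nullptr);
}
```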

To get a head start on building the technology required to merge ray-traced render passes into our rasterised pipeline, we decided to develop GPU Compute Ray Tracing: an external path-tracing solution that we could use to experiment with different techniques and passes.

GPU Compute Ray Tracing

We used a compute-accelerated path tracer to aid our experiments. This solution was not fast enough for real time, but it enabled experimentation with different techniques and meant we could blend multiple layers of ray-traced data into our rasterised content.
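
As a rough illustration of what such a pass computes, the self-contained CPU stand-in below runs the same kind of per-pixel Monte Carlo loop for an ambient-occlusion layer over a toy one-sphere scene. The real system runs this in GPU compute across full vehicle scenes, so treat the sketch as conceptual only.

```cpp
// CPU stand-in for the per-pixel Monte Carlo loop a compute path tracer
// runs on the GPU. The "scene" is a single sphere resting on a ground
// plane; each pixel accumulates hemisphere visibility samples to
// estimate an ambient-occlusion layer, printed as ASCII art.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
};
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Ray (origin o, unit direction d) vs. sphere at c with radius r;
// returns the hit distance, or -1 on a miss.
static float hitSphere(Vec3 o, Vec3 d, Vec3 c, float r) {
    Vec3 oc = o - c;
    float b = dot(oc, d);
    float disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return t > 1e-4f ? t : -1.0f;
}

int main() {
    const int width = 64, height = 32, samples = 64;
    const Vec3 sphereCentre = {0.0f, 1.0f, 0.0f};  // touches the plane y = 0
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);

    for (int py = 0; py < height; ++py) {
        for (int px = 0; px < width; ++px) {
            // Shade the point on the ground plane under this pixel.
            Vec3 p = {(px / (float)width - 0.5f) * 8.0f, 0.0f,
                      (py / (float)height - 0.5f) * 8.0f};
            int unoccluded = 0;
            for (int s = 0; s < samples; ++s) {
                // Uniform sample on the upper hemisphere.
                float up = uni(rng), phi = 6.2831853f * uni(rng);
                float sinT = std::sqrt(1.0f - up * up);
                Vec3 d = {sinT * std::cos(phi), up, sinT * std::sin(phi)};
                if (hitSphere(p, d, sphereCentre, 1.0f) < 0.0f) ++unoccluded;
            }
            // Brighter character = less occluded, like an AO layer texel.
            float ao = unoccluded / (float)samples;
            putchar(" .:-=+*#%@"[(int)(ao * 9.99f)]);
        }
        putchar('\n');
    }
    return 0;
}
```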

Ray-traced render passes were created for specular reflections, clear-coat reflections, ambient occlusion (AO), global illumination (GI), diffuse lighting, and shadows. Examples of these passes on the interior of an Audi can be seen in the image below (in the order listed above).

Layered Rendering

Rendering the content in multiple passes and then compositing them in the real-time shaders added great flexibility to the visualisation. Each pass could enhance the final visuals without drastically changing the look and feel of the image. This setup enabled us to discover some major issues with these techniques and develop solutions that accelerated the development of the next generation of real-time ray tracing.
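
A hypothetical per-pixel composite using the layer names from above might look like the following. Scalar floats stand in for RGB, and the structure and weights are illustrative rather than our actual shader logic:

```cpp
// Hypothetical per-pixel composite of the ray-traced layers over the
// rasterised base value. Scalar floats stand in for RGB; the weights
// and structure are illustrative, not the production shader.
struct RtLayers {
    float specular;   // specular reflections
    float clearCoat;  // clear-coat reflections
    float ao;         // ambient occlusion, 1 = fully open
    float gi;         // global illumination
    float diffuse;    // diffuse lighting
    float shadow;     // 1 = lit, 0 = fully shadowed
};

float compositePixel(float rasterised, const RtLayers& rt, float blend) {
    // Occlusion terms attenuate the rasterised result...
    float lit = rasterised * rt.ao * rt.shadow;
    // ...while the lighting layers add energy on top of it.
    float added = rt.gi + rt.diffuse + rt.specular + rt.clearCoat;
    // 'blend' lets each pass enhance the image without drastically
    // changing the original look and feel.
    return lit + blend * added;
}
```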

The main issues we tackled are detailed below.

Aliasing - We experimented with lower-resolution render targets to reduce the number of rays that needed to be cast. This caused major aliasing issues where pixels in the rasterised image would miss the corresponding pixels in the ray-traced content.
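
One common remedy, sketched here under our own assumptions, is a depth-aware upsample that down-weights low-resolution taps whose depth disagrees with the full-resolution pixel, so ray-traced values do not smear across silhouettes. The array-based image layout and falloff constant are illustrative:

```cpp
#include <algorithm>
#include <cmath>

// Depth-aware upsample of a half-resolution ray-traced layer. Plain
// bilinear sampling would mix values across silhouettes, producing
// the aliasing described above; weighting each low-res tap by depth
// similarity keeps ray-traced data from bleeding across object edges.
float upsampleRtLayer(int fullX, int fullY,
                      const float* rtLow, const float* depthLow,
                      const float* depthFull,
                      int lowW, int lowH, int fullW) {
    int lx = fullX / 2, ly = fullY / 2;  // matching half-res texel
    float centreDepth = depthFull[fullY * fullW + fullX];
    float sum = 0.0f, weightSum = 0.0f;
    for (int dy = 0; dy <= 1; ++dy) {
        for (int dx = 0; dx <= 1; ++dx) {
            int sx = std::min(lx + dx, lowW - 1);
            int sy = std::min(ly + dy, lowH - 1);
            // Taps whose depth disagrees with this pixel get tiny weights.
            float depthGap = std::fabs(depthLow[sy * lowW + sx] - centreDepth);
            float w = 1.0f / (1.0f + 100.0f * depthGap);  // arbitrary falloff
            sum += w * rtLow[sy * lowW + sx];
            weightSum += w;
        }
    }
    return sum / weightSum;  // safe: every tap has a positive weight
}
```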

Noise - In any ray tracer, noise is an issue unless you cast thousands of rays per pixel. We were looking for real-time solutions and experimented with a range of different denoisers to reduce the number of rays we had to cast.
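
The blog does not name the specific denoisers we tried, but the simplest building block is temporal accumulation, sketched below: blending each frame's noisy result into a history buffer grows the effective sample count over time.

```cpp
// Temporal accumulation, the simplest denoising building block: blend
// the current frame's noisy ray-traced value into a history buffer.
// Reprojection and disocclusion tests are omitted for brevity.
float accumulate(float noisySample, float history, bool historyValid) {
    const float alpha = 0.1f;  // fraction of the new sample taken per frame
    if (!historyValid) return noisySample;  // e.g. after a camera cut
    // Exponential moving average: at steady state the estimate keeps only
    // about alpha / (2 - alpha) of a single frame's noise variance.
    return history + alpha * (noisySample - history);
}
```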

Number of rays - The two problems above pushed up the number of rays that had to be cast per pixel. We discovered that different surfaces require different numbers of rays, which meant we could lower the ray count, and improve performance, in many areas. We developed a variable rate shading solution that would go on to become an essential component of the real-time solution.
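
A toy version of such a per-surface budget might look like this, with the thresholds and scales invented for illustration:

```cpp
#include <algorithm>

// Toy ray-budget heuristic: smooth, mirror-like surfaces resolve with a
// single ray, while rough or visibly noisy pixels earn a larger budget.
// Thresholds and scale factors are invented for illustration.
int raysForPixel(float roughness, float luminanceVariance) {
    if (roughness < 0.05f) return 1;  // mirror reflections converge quickly
    float budget = 1.0f + 8.0f * roughness + 16.0f * luminanceVariance;
    return std::min(static_cast<int>(budget), 16);  // respect the frame budget
}
```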

Lighting hacks - Many of the 3D models feature tweaks and hacks that achieve a desired look rather than a physically correct representation of light on a surface. Combining the ray-traced data per object enabled the lighting algorithm to be tweaked per part, preserving some of these important features.
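
Conceptually, this amounts to carrying per-part overrides into the composite; all the field names below are invented for the sketch:

```cpp
// Illustrative per-part override: because ray-traced data is combined
// per object, each part can keep its artist-tuned "hacks" while still
// receiving the new ray-traced layers.
struct PartLightingOverride {
    float giStrength;       // scales ray-traced GI for this part
    float reflectionScale;  // preserves a tweaked reflection intensity
    bool  receiveShadows;   // some hacked parts opt out of new shadows
};

float shadePart(float base, float rtGi, float rtReflection, float rtShadow,
                const PartLightingOverride& part) {
    float shadow = part.receiveShadows ? rtShadow : 1.0f;
    return base * shadow + part.giStrength * rtGi
                         + part.reflectionScale * rtReflection;
}
```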

Memory - Much of the content (mesh and texture data) had to be duplicated on the GPU as it was required in both the compute ray-tracing passes and the standard rendering process. This limited the complexity of the assets that we could experiment with.

The compute path tracer is much slower than the voxel solution, but it enabled us to develop and discover solutions to issues that we would face in the next stage of real-time ray-tracing development.

GTC 2019

After putting the final touches on this tech demo, we presented our developments at GTC 2019 along with our advances in XR. You can view the presentation here.

The next blog in this series will break down the impact of DXR and RTX in more detail and explain how we used them to take our ray-tracing tech to the next level. If you can't wait for that, check out our ZLTech Twitter page for all the latest ZeroLight tech updates!