Real-time Dreams

By Patrick Trivencevic

I have fond memories of studying Graphics as a Computing Science student at UTS and learning about Ray Tracing. Our end-of-year assignment was to write a basic Ray Tracer. A daunting task, I thought at the time; it was by far my most complex and challenging uni assignment to date. We were to produce a basic 3D scene to be rendered, by means of Ray Tracing, to a 2D output image. The output was to be “high res”, 640x480 in resolution. Remember, this was back in the nineties, and our home workstations were not even Pentium-class boxes. 3D GPUs didn’t exist, at least not for students; in any case, my Ray Tracer was CPU-based by design.

My 3D scene included basic geometry: cubes, spheres and cylinders. Bonus marks were awarded if you produced glass-like objects, i.e. going beyond basic diffuse Gouraud/Phong shading and using refractions and reflections. After spending what seemed like months (in reality it was weeks), I managed to write a Ray Tracer that produced a 2D output of a 3D scene with one of the spheres being “glass-like.” The renders from my Ray Tracer on a 486 DX machine took hours, and sometimes days, to produce an output.

"Have Nvidia attained the holy grail of Computer Graphics? No, not as yet, but it sure has brought us a lot closer to what resembles Ray Tracing in real time."
Patrick Trivencevic

That was a single frame at 640x480. If animation is considered, where anything from 12 fps to 60 fps is required for a sequence, you can quickly do the math to find it would take a very long time to produce a full-length feature film with the Ray Tracing techniques and CPU power of the time.
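To make that math concrete, here is a quick back-of-the-envelope sketch in Python. The per-frame render time is an assumed, purely illustrative figure, not a measured one:

```python
# Back-of-the-envelope: total render time for a feature film if every
# frame takes hours to ray trace on a 1990s CPU (numbers are illustrative).
frame_hours = 4            # assumed hours per 640x480 frame on a 486
fps = 24                   # standard feature-film frame rate
film_minutes = 90          # typical feature length

total_frames = fps * film_minutes * 60
total_years = total_frames * frame_hours / (24 * 365)
print(f"{total_frames} frames -> about {total_years:.0f} years of rendering")
```

Even with generous assumptions, a single machine of that era would have needed decades of wall-clock time for one film, which is why studios turned to render farms.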

The Media and Entertainment industry, particularly the Film and Gaming sectors, has been working with Ray Tracing and rasterization techniques throughout the past decades and has long regarded real-time Ray Tracing as the “Holy Grail” of Computer Graphics.

VFX/3D artists spend a lot of time iterating on their creative work, and to date much of it has required a time- and resource-expensive process known as rendering. It is true that render times have improved significantly since my attempts at writing a Ray Tracer at uni; with the right continual investments, these render times are mere hours, minutes or even seconds. But what if there were no apparent rendering at all? What if we could effectively render in real time? It doesn’t take much to imagine how the creative iterative process suddenly expands, and how productivity effectively increases along with the creative output.

What is Raytracing – Rasterization vs Raytracing

Raytracing is a rendering technique for generating images by tracing the path of light (rays) through the pixels of an image plane, to paraphrase a Wikipedia entry. The resulting images are based on how virtual light rays interact with virtual objects, and therefore have very realistic or “life-like” qualities.

The sheer number of rays interacting with a scene is challenging even with all our advancements in CPUs and GPUs. This means Ray Tracers set a quality level that determines how many rays are cast from each pixel of the scene in various directions. Further calculations are then required wherever a ray intersects an object in the scene, tracing a number of secondary rays out from each intersection to model reflected light. These calculations are relatively complex and expensive, and until recently were typically limited to CPUs. GPU renderers such as Redshift and Arnold have delivered improvements in ray tracing render times, but have done little to meet the real-time requirements of modelling a 3D scene.
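As a rough illustration of the core step described above, here is a minimal sketch that casts one primary ray per pixel and tests it against a single sphere. The scene values and function names are illustrative only; real ray tracers add secondary rays, shading and acceleration structures on top of this:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance t, or None.
    Solves |o + t*d - c|^2 = r^2, a quadratic in t."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None            # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def render(width, height):
    """Cast one primary ray per pixel toward a single sphere at z = -3."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel centre to a point on the image plane in [-1, 1].
            u = (2 * (x + 0.5) / width) - 1
            v = 1 - (2 * (y + 0.5) / height)
            t = ray_sphere_hit((0, 0, 0), (u, v, -1), (0, 0, -3), 1.0)
            row.append('#' if t is not None else '.')
        image.append(''.join(row))
    return image

for line in render(24, 12):
    print(line)
```

Every extra bounce multiplies the number of these intersection tests, which is exactly the cost the RT Cores discussed below are built to absorb.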

Rasterization, or more specifically 3D rasterization such as we see from DirectX and OpenGL, has been employed to bridge the gap for the real-time requirements of creative iteration, previewing and visualisation. These rasterization graphics APIs provide a means to preview an approximation of a 3D scene in real time. These rasterised approximations are a “good enough” estimation to aid the iterative creative process, allowing for relatively quick changes in lighting and visualising basic shading.
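Rasterization works the other way around from ray tracing: instead of tracing rays from pixels into the scene, it projects geometry onto the screen and tests which pixels each primitive covers. Here is a minimal sketch of that per-pixel coverage test for one triangle (illustrative only, not how DirectX or OpenGL are implemented internally):

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: which side of edge A->B is point P on?"""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(w, h, tri):
    """Fill pixels whose centres fall inside the triangle --
    the coverage test at the heart of rasterization."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    grid = [['.'] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            px, py = x + 0.5, y + 0.5
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            # Inside if P is on the same side of all three edges
            # (accept either winding order).
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                grid[y][x] = '#'
    return [''.join(row) for row in grid]

for line in rasterize_triangle(20, 10, [(2, 1), (18, 3), (8, 9)]):
    print(line)
```

Because each triangle is handled independently with simple arithmetic, this maps extremely well to GPU hardware, which is why rasterization has been the real-time workhorse while ray tracing remained offline.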

Nvidia introduces Turing – RTX

In the past month, at two separate events, Nvidia announced their new Turing architecture. At the first event, SIGGRAPH 2018, aimed at graphics professionals and industry, Nvidia showcased its new professional-series RTX GPUs. We learned that the Turing architecture goes beyond Shader and Compute units, introducing RT Cores and Tensor Cores, along with some new terminology, or more specifically a new unit of measure: Giga Rays/sec.

The new RT Cores (Ray Tracing Cores) enable real-time ray tracing of objects and environments with physically accurate shadows, reflections, refractions and global illumination. There isn’t a whole lot of information about how this is accomplished; Bounding Volume Hierarchy intersection calculations are expensive, and the RT Cores are said to be tuned for exactly these kinds of calculations. Couple the RT Cores with the Turing Tensor Cores, which accelerate another critical process in ray tracing: denoising. Images rendered with low sample or ray counts give you speed, but also what is essentially noise, so a denoising pass is required. GPUs are well optimised for this kind of process, but not optimised enough for real-time denoising, and this is where Tensor Core denoising steps up to the challenge. Turing Tensor Cores are designed to accelerate deep neural network training and inference, which are critical to powering AI-enhanced rendering. AI-based denoising delivers real-time performance with near final-quality output.
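To see why low sample counts force a denoising pass, here is a toy Monte Carlo sketch (the “pixel” and its random contributions are illustrative, not any renderer’s API). Noise shrinks only with the square root of the sample count, which is why simply casting more rays is so expensive and an AI denoiser is attractive:

```python
import random
import statistics

def estimate_pixel(samples, rng):
    """Toy Monte Carlo pixel estimate: average `samples` noisy light-path
    contributions (uniform in [0, 1], true value 0.5)."""
    return sum(rng.random() for _ in range(samples)) / samples

rng = random.Random(42)
results = {}
for spp in (1, 16, 256):
    # Render the "same pixel" 2000 times and measure the spread: that
    # spread is the grain you see in low-sample ray-traced images.
    estimates = [estimate_pixel(spp, rng) for _ in range(2000)]
    results[spp] = statistics.stdev(estimates)
    print(f"{spp:>3} samples/pixel -> noise (stddev) {results[spp]:.3f}")
```

Going from 1 to 256 samples per pixel only cuts the noise by a factor of 16, so a denoiser that cleans up a fast, low-sample render is a far cheaper route to a final-looking image.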

Key attributes of the new GPUs:

[Table: Turing RTX GPU key specifications]

[Chart: Pascal vs Turing comparison]

There are many other new features that Turing RTX Architecture brings that I won’t be discussing as we are focusing on the Ray Tracing and how RTX is bringing us closer to our real-time dreams.

The second and more recent event, Gamescom 2018, saw the introduction of Nvidia’s gaming or consumer-class RTX GPUs. Along with the new hardware announcements, we saw many exciting examples of ray tracing in games. Game engines like Unreal and Unity, building on DirectX, demonstrated complex 3D scenes with real-time ray-traced lighting and shadows. Reflections that are not possible with the simple screen-space reflections used in Unreal Engine today are now achievable. All this at the real-time, 60 fps performance that gaming demands. These Turing-based RTX GPUs and APIs could also be employed to give 3D modelling applications like Maya, Houdini and C4D real-time and more accurate previews, with near-rendered quality, accelerating the iterative creative process.

I urge those interested in Nvidia’s latest announcements to check out the keynotes from SIGGRAPH and Gamescom; there are a few more details and some really cool demos and examples of RTX Ray Tracing.

Who is supporting the RTX platform

Ray tracing extensions have been developed and adopted across the stack, from Nvidia’s own OptiX API for professional GPUs and applications through to the more mainstream Microsoft DXR and Vulkan support for gaming/consumer-class GPUs.

Many Media and Entertainment vendors already support OptiX and have professional applications that will soon support the new ray tracing extensions and capabilities of the Turing RTX family.


More to look forward to

As I was looking for information about the Turing RTX releases, I came across Nvidia’s press release for the Quadro RTX Series unveiling, which had some interesting information about what’s coming in early 2019, particularly around data-center rendering solutions.

“The Quadro RTX Server defines a new standard for on-demand rendering in the data center, enabling easy configuration of on-demand render nodes for batch and interactive rendering.
It combines Quadro RTX GPUs with new Quadro Infinity software (available in the first quarter of 2019) to deliver a powerful and flexible architecture to meet the demands of creative professionals. Quadro Infinity will enable multiple users to access a single GPU through virtual workstations, dramatically increasing the density of the data center. End-users can also easily provision render nodes and workstations based on their specific needs.

With industry-leading content creation and render software pre-installed, the Quadro RTX Server provides a powerful and easy-to-deploy rendering solution that can scale from small installations to the largest datacenters, at one-quarter of the cost of CPU-only render farms.” Source – Nvidia News

Has Nvidia attained the holy grail of Computer Graphics? No, not yet, but it sure has brought us a lot closer to what resembles Ray Tracing in real time. The VFX creative process has the most to gain, with better modelling previews that look closer to final renders and powerful rendering optimisations that promise 2-6x performance improvements.

Need to optimise your rendering system? Digistor can help! Contact us to discuss your requirements.