Ray-Tracing Hardware

Ray-Tracing vs. Rasterization

The question of which of these techniques is “better” is nearly as old as the field of computer graphics itself. As ray-tracing simulates light transport, it is simple to get realistic images with it; this is why it is often used when realism is the top priority, e.g. in rendering movies. Rasterization, on the other hand, is easy to accelerate and is the de facto standard for interactive visualisations and games. (This is of course a simplified view.)

One important difference is that rasterization handles each primitive (e.g. each triangle) separately and never needs full knowledge of the whole scene. To find the correct intersection of a ray with the scene, however, knowledge of the whole scene is always needed. This does not mean that a test against each primitive has to be performed; normally a spatial acceleration structure is used to minimize the number of intersection tests, as the sketch below illustrates.
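
To make the idea concrete, here is a minimal C++ sketch of such a traversal (the node layout and all names are invented for this example, not taken from any real GPU or engine): a bounding volume hierarchy in which a cheap ray/box test prunes whole subtrees, so only a handful of triangles ever reach the exact intersection test.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Ray { Vec3 origin, invDir; };  // invDir = 1/direction, precomputed

struct BVHNode {
    Vec3 boxMin, boxMax;    // axis-aligned bounding box of this subtree
    int left, right;        // child node indices, -1 for leaves
    std::vector<int> tris;  // triangle indices (leaves only)
};

// Cheap "slab" test: does the ray hit the box at all?
bool hitsBox(const Ray& r, const Vec3& bmin, const Vec3& bmax) {
    float t1 = (bmin.x - r.origin.x) * r.invDir.x;
    float t2 = (bmax.x - r.origin.x) * r.invDir.x;
    float tmin = std::min(t1, t2), tmax = std::max(t1, t2);
    t1 = (bmin.y - r.origin.y) * r.invDir.y;
    t2 = (bmax.y - r.origin.y) * r.invDir.y;
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));
    t1 = (bmin.z - r.origin.z) * r.invDir.z;
    t2 = (bmax.z - r.origin.z) * r.invDir.z;
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));
    return tmax >= std::max(tmin, 0.0f);
}

// Recursive traversal: skip whole subtrees whose box the ray misses.
void traverse(const std::vector<BVHNode>& nodes, int idx, const Ray& r,
              std::vector<int>& candidates) {
    const BVHNode& n = nodes[idx];
    if (!hitsBox(r, n.boxMin, n.boxMax)) return;  // prune this subtree
    if (n.left < 0) {                             // leaf: collect triangles
        candidates.insert(candidates.end(), n.tris.begin(), n.tris.end());
        return;
    }
    traverse(nodes, n.left,  r, candidates);
    traverse(nodes, n.right, r, candidates);
}
```

Only the collected candidate triangles then need an exact ray-triangle test; for a well-built hierarchy this is a tiny fraction of the scene.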

Interactive ray-tracing has been possible for a while now, and there have been attempts to build hardware specifically for ray-tracing since the mid-90s. It has never been a huge success, however: while ray-tracing could produce images similar to rasterization in real-time and could add, for example, better reflections and shadows, it was never as fast as rasterization.

A new generation of Ray-Tracing Hardware

We will likely see a new attempt to bring ray-tracing hardware to the masses soon: Imagination Technologies has already presented a prototype of the PowerVR Wizard GPU. So what has changed, and why could it work now? In my opinion, the reason ray-tracing hardware could now succeed is that we already do a lot of ray-tracing! Well, more ray-casting than ray-tracing (finding the first intersection of a ray with the scene instead of following that ray over multiple reflections), but it has become very common in real-time rendering.

Shadow maps, Screen-Space Ambient Occlusion (SSAO), Real-time Local Reflections (RLR) and Voxel Cone Tracing are just some of the techniques that help basic rasterization add complex illumination effects. These techniques have in common that they are implemented inside the fragment shader and use some kind of simplified scene representation (a depth image from the light’s point of view, the depth buffer from the camera’s point of view, or a voxelised scene representation). The fragment shader then performs a simple variant of ray-casting into this simplified scene to detect occlusion or to find the source of indirect illumination. The ray-casting can be implemented as ray-marching or even simplified to a single texture lookup; the algorithmic idea, however, is always to intersect secondary rays with the scene.
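
To give a rough picture of what such a shader does, here is the core loop of a screen-space ray-march, written as plain C++ rather than GLSL; the depth-buffer lookup is passed in as a function, and all names are made up for illustration:

```cpp
#include <functional>
#include <optional>

struct Vec3 { float x, y, z; };

// March a ray through screen space in fixed steps and compare each sample
// against the depth buffer (supplied here as a lookup function). The first
// position where the ray falls behind the stored depth is treated as a hit,
// mirroring what RLR/SSAO-style fragment shaders do with texture fetches.
std::optional<Vec3> screenSpaceRayMarch(
        Vec3 start, Vec3 dir, int maxSteps, float stepSize,
        const std::function<float(float, float)>& depthAt) {
    Vec3 pos = start;                       // (screenX, screenY, depth)
    for (int i = 0; i < maxSteps; ++i) {
        pos = { pos.x + dir.x * stepSize,
                pos.y + dir.y * stepSize,
                pos.z + dir.z * stepSize };
        if (pos.z > depthAt(pos.x, pos.y))  // behind visible geometry?
            return pos;                     // approximate intersection
    }
    return std::nullopt;                    // nothing hit within range
}
```

A real implementation would choose the step size more carefully and refine the hit point, but the principle is exactly this: a secondary ray tested against a simplified scene, here the depth buffer.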

Hybrid Ray-Tracing

As we can see, ray-tracing and rasterization are not mutually exclusive. Simplified variants of ray-tracing are already used for complex lighting effects in games, implemented completely in shaders on simplified, shader-friendly scene representations. These effects are where ray-tracing hardware could come in handy: it could replace or extend the shader-based ray-casting hacks with real ray-tracing, operating either on a simplified scene or even on the full, highly complex one.

For example, shadows can be implemented with ray-tracing to get pixel-perfect edges from just one ray per pixel and light source. As no shadow maps are needed, bandwidth-limited GPU architectures (e.g. mobile GPUs) might benefit the most from this. Ambient occlusion would no longer be limited to screen space, and reflections could support multiple bounces.
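
A minimal sketch of that shadow test, assuming a generic occlusion query such as ray-tracing hardware or an RT API might expose (the query interface and all names here are invented for illustration):

```cpp
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// occluded(origin, dir, maxDist) stands in for the hardware intersection
// query: does a ray hit anything closer than maxDist? (Invented interface.)
using OcclusionQuery = std::function<bool(Vec3, Vec3, float)>;

// One shadow ray per pixel and light: the surface point is lit exactly
// when nothing blocks the segment between it and the light source.
bool isLit(const OcclusionQuery& occluded, Vec3 point, Vec3 lightPos) {
    Vec3 toLight = { lightPos.x - point.x,
                     lightPos.y - point.y,
                     lightPos.z - point.z };
    float dist = std::sqrt(toLight.x * toLight.x +
                           toLight.y * toLight.y +
                           toLight.z * toLight.z);
    Vec3 dir = { toLight.x / dist, toLight.y / dist, toLight.z / dist };
    // Offset the start slightly along the ray so we don't re-hit the
    // surface we start on ("shadow acne").
    const float eps = 1e-3f;
    Vec3 start = { point.x + dir.x * eps,
                   point.y + dir.y * eps,
                   point.z + dir.z * eps };
    return !occluded(start, dir, dist - 2 * eps);
}
```

The result directly replaces the shadow-map lookup: no shadow map has to be rendered, stored, or filtered, which is exactly where the bandwidth savings come from.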

I would not expect ray-tracing to replace rasterization in real-time rendering any time soon, but it is already extending rasterization, and hardware support for these effects will be beneficial.

Besides graphics, the idea of intersecting rays with the scene can also be used in the context of AI or physics (think line-of-sight tests or collision queries), so hardware support would be useful there as well.

Chicken and Egg

Moving a pure-rasterization rendering pipeline to a hybrid pipeline will require a fair amount of work and research into what works best on the ray-tracing part of the GPU versus the traditional shader hacks. Without a sufficient number of hybrid GPUs in the hands of customers, the motivation to spend many resources on this will be low. The motivation of a hardware manufacturer, on the other hand, to spend resources on licensing and die space will be low unless there is enough support from developers… Unless all target devices include a hybrid GPU, two code paths have to be developed and maintained.

So who might be interested in getting into this technology first? Maybe a (mobile) console manufacturer: consoles are closed ecosystems anyway and often use unusual hardware architectures (remember the Cell processor?) or closed APIs. For smartphones, only Apple currently has the power to push a new feature into all new devices within a short period of time, as they build their own chips and software (sadly, Apple isn’t known to be the first to support new graphics technology; OpenGL is far behind on Macs and OpenGL ES might be stuck at 3.0 forever…).

Many games are built on a small number of engines (the popular Unity engine comes to mind), so getting support into one or two of them could already result in a large enough developer base for, e.g., an Android manufacturer to include a hybrid GPU in some smartphones.

I’m sure that we will hear much more about ray-tracing hardware in 2016.

If you want to learn more about hybrid ray-tracing, there is an article by Gareth Morgan in GPU Pro 6, “Hybrid Ray Tracing on a PowerVR GPU”. There are also a few blog posts by Imagination Technologies about the Wizard architecture.


5 thoughts on “Ray-Tracing Hardware”
  • Alex says:

    Very interesting considerations, but don’t you think that the level of complexity needed to achieve complex illumination with the rasterization approach is good motivation for hardware manufacturers to push definitively into ray-tracing hardware?

    • Robert says:

      Good global illumination is a complex problem; this is true for ray-tracing and rasterization alike (yes, Monte Carlo path tracing is simple to implement and gives very good results, but the complexity comes in when you want to make ray-tracing not only good but also fast enough).
      So the real question is: where is the best trade-off between algorithmic complexity, computational complexity and image quality, given the current requirements (frame rate and image size) and the current transistor budget for a GPU?
      If the hardware manufacturers come to a different conclusion here than the application developers, they can push for RT hardware all they want ;-) The success would depend on how much effort the game developers are willing to put into this.

      • Alex says:

        From my experience, good global illumination based on rasterization sometimes has a lot of parameters that usually aren’t very intuitive. This results in very long tuning attempts before good image quality and a good frame rate are obtained. So I think that game developers would be quite prone to adopt RT hardware. But I could be proven wrong.

  • Felix says:

    Aren’t current GPUs already pretty much hybrid hardware, what with compute shaders and GPGPU programming interfaces such as OpenCL and CUDA?

    I don’t really see any need, nor am I particularly interested in seeing yet another closed standard added to the environment, and as far as I can tell they haven’t yet submitted it (‘open’RL) to Khronos.

    • Robert says:

      GPGPU is “just” using the shader cores outside of the graphics pipeline. You can implement a ray-tracer with those, but it’s not hybrid. The PowerVR Wizard, for example, adds a hardware module that implements the ray-triangle intersection tests to reduce the workload of the shader cores.
      The API used here is OpenRL, and it would probably be up to the hardware manufacturers to use an open API for the RT part or something closed (just like most vendors pick OpenGL and/or Vulkan while Apple goes with its own Metal).

      But I, too, hope that if ray-tracing hardware becomes a reality, it will be based on an open standard. That way the same API might be implementable on GPUs which don’t support it in hardware, via GPGPU techniques.
