Ray-Tracing vs. Rasterization
The question of which of these techniques is “better” is nearly as old as the field of computer graphics itself. Because ray-tracing simulates light transport, it is comparatively easy to produce realistic images with it. This is why it is often used when realism is the top priority, e.g. in rendering movies. Rasterization, on the other hand, is easy to accelerate and is the de facto standard for interactive visualisations and games. (This is of course a simplified view.)
One important difference is that rasterization handles each primitive (e.g. each triangle) separately and never needs full knowledge of the whole scene. To find the correct intersection of a ray with the scene, however, knowledge of the whole scene is always needed. That does not mean a test against every primitive has to be performed; normally a spatial data structure (such as a bounding volume hierarchy) is used to minimize the number of intersection tests.
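To make the idea of minimizing intersection tests concrete, here is a minimal sketch (illustrative only, not how any particular GPU does it) of the ray–AABB “slab test” that acceleration structures use: if a ray misses a primitive group's bounding box, every primitive inside the box can be skipped at once. The function name is my own invention.

```python
def ray_aabb_hit(origin, direction, box_min, box_max):
    """Slab test: True if the ray hits the axis-aligned bounding box."""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray is parallel to this slab; it misses unless the origin
            # already lies between the two slab planes.
            if o < lo or o > hi:
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        # The ray is inside the box where all three slab intervals overlap.
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return False
    return t_far >= 0.0  # otherwise the box lies behind the ray origin

# A ray along +x hits a unit box centered at the origin...
print(ray_aabb_hit((-5, 0, 0), (1, 0, 0), (-1, -1, -1), (1, 1, 1)))  # True
# ...but misses the same box when the ray is shifted off-axis.
print(ray_aabb_hit((-5, 5, 0), (1, 0, 0), (-1, -1, -1), (1, 1, 1)))  # False
```

In a bounding volume hierarchy this test is applied recursively, so large parts of the scene are rejected with a handful of box tests instead of per-triangle intersections.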
Interactive ray-tracing has been possible for a while, and the first attempts to build hardware specifically for ray-tracing date back to the mid-90s. It has never been a huge success, however: while ray-tracing could produce images comparable to rasterization in real time, and could add, for example, better reflections and shadows, it was never as fast as rasterization.
A new generation of Ray-Tracing Hardware
We will likely see a new attempt to bring ray-tracing hardware to the masses soon; Imagination Technologies has already presented a prototype of the PowerVR Wizard GPU. So what has changed, and why could it work now? In my opinion, ray-tracing hardware could succeed now because we already do a lot of ray-tracing! Well, ray-casting rather than ray-tracing (finding the first intersection of a ray with the scene instead of following that ray over multiple reflections), but it has become very common in real-time rendering.
Shadowmaps, Screen-Space Ambient Occlusion (SSAO), Real-time Local Reflections (RLR) and Voxel Cone Tracing are just some of the techniques that help basic rasterization add complex illumination effects. What these techniques have in common is that they are implemented inside the fragment shader and use some kind of simplified scene representation (a depth image from the light's point of view, the depth buffer from the camera's point of view, or a voxelised scene representation). The fragment shader then performs a simple variant of ray-casting into this simplified scene to detect occlusion or to find the source of indirect illumination. This ray-casting can be implemented as ray-marching or even simplified to a single texture lookup; the algorithmic idea, however, is always based on intersecting secondary rays with the scene.
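The ray-marching idea behind these screen-space techniques can be sketched in a few lines. The following is a hedged toy version, reduced from 2-D screen space to a 1-D depth buffer treated as a heightfield: step along the ray and report the first sample where the ray dips below the stored surface. Real implementations do this per-fragment in a shader; the function and its parameters are illustrative names of my own.

```python
def ray_march_occluded(depth, start_x, start_h, dx, dh, steps):
    """March a ray over a 1-D heightfield; True if it hits the surface."""
    x, h = start_x, start_h
    for _ in range(steps):
        x += dx
        h += dh
        ix = int(x)
        if ix < 0 or ix >= len(depth):
            return False  # ray left the buffer without a hit
        if h <= depth[ix]:
            return True   # ray fell below the stored surface: occluded
    return False

# A flat floor (height 0) with a wall of height 3 at indices 5-6:
depth_buffer = [0, 0, 0, 0, 0, 3, 3, 0, 0, 0]
# A shallow ray starting at height 1 runs into the wall...
print(ray_march_occluded(depth_buffer, 0, 1.0, 1.0, 0.1, 10))  # True
# ...while a steep upward ray escapes over it.
print(ray_march_occluded(depth_buffer, 0, 1.0, 1.0, 1.0, 10))  # False
```

SSAO and screen-space reflections follow the same pattern, just with the camera's depth buffer as the “heightfield” and the march performed in screen space.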
As we can see, ray-tracing and rasterization are not mutually exclusive. Simplified variants of ray-tracing are already used for complex lighting effects in games – implemented completely in shaders using simplified, shader-friendly scene representations. And these effects are where ray-tracing hardware could come in handy, replacing or extending these shader-based ray-casting hacks with real ray-tracing. This could still use a simplified scene, or even the real, highly complex one.
For example, shadows can be implemented via ray-tracing to get pixel-perfect edges with just one ray per pixel and light source. As no shadowmaps are needed, bandwidth-limited GPU architectures might benefit most from this (e.g. mobile GPUs). Ambient occlusion would no longer be limited to screen space, and reflections could support multiple bounces.
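A single shadow ray per pixel can be sketched as follows: shoot a ray from the shaded point toward the light and check whether any occluder blocks it before the light is reached – no shadowmap involved. This is a minimal illustration with one sphere as the only occluder; the function name and scene are hypothetical.

```python
import math

def in_shadow(point, light, sphere_center, sphere_radius):
    """True if the segment from point to light intersects the sphere."""
    to_light = [l - p for p, l in zip(point, light)]
    dist_to_light = math.sqrt(sum(d * d for d in to_light))
    dirn = [d / dist_to_light for d in to_light]  # normalized ray direction
    # Standard ray-sphere intersection via the quadratic's discriminant.
    oc = [p - c for p, c in zip(point, sphere_center)]
    b = sum(o * d for o, d in zip(oc, dirn))
    c = sum(o * o for o in oc) - sphere_radius * sphere_radius
    disc = b * b - c
    if disc < 0:
        return False  # ray misses the sphere entirely
    t = -b - math.sqrt(disc)  # nearest intersection distance along the ray
    # Only intersections strictly between the surface and the light occlude.
    return 1e-6 < t < dist_to_light

light = (0.0, 10.0, 0.0)
# A sphere of radius 1 at (0, 5, 0) sits between the light and the origin:
print(in_shadow((0.0, 0.0, 0.0), light, (0.0, 5.0, 0.0), 1.0))  # True
# A point off to the side has a clear line of sight to the light:
print(in_shadow((5.0, 0.0, 0.0), light, (0.0, 5.0, 0.0), 1.0))  # False
```

The small epsilon offset avoids the ray immediately re-hitting the surface it starts on – the same self-intersection problem that shadow-ray implementations on real hardware have to handle.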
I would not expect ray-tracing to replace rasterization in real-time rendering any time soon, but it is already extending rasterization and hardware support for these effects will be beneficial.
Besides graphics, the idea of intersecting rays with a scene can also be used in the context of AI or physics, so hardware support would be useful there as well.
Chicken and Egg
Moving a pure-rasterization rendering pipeline to a hybrid pipeline will require a fair amount of work and research into what works best on the ray-tracing part of the GPU versus the traditional shader hacks. Without a sufficient number of hybrid GPUs in the hands of customers, the motivation to spend many resources on this will be low. The motivation of a hardware manufacturer, on the other hand, to spend resources on licensing and die space will be low unless there is enough support from developers… And unless all target devices include a hybrid GPU, two code paths have to be developed and maintained.
So who might be interested in getting into this technology first? Maybe a (mobile) console manufacturer. Consoles are closed ecosystems anyway and often use unusual hardware architectures (remember the Cell processor?) or closed APIs. For smartphones, only Apple currently has the power to push a new feature into all new devices within a short period of time – they also build their own chips and software (sadly, Apple isn't known for being the first to support new graphics technology; OpenGL is far behind on Macs and OpenGL ES might be stuck at 3.0 forever…).
Many games use a small number of engines (the popular Unity engine comes to mind), getting support into one or two of them could already result in a large enough developer user base for e.g. an Android manufacturer to include a hybrid GPU in some smartphones.
I’m sure that we will hear much more about ray-tracing hardware in 2016.
If you want to learn more about hybrid ray-tracing, there is an article by Gareth Morgan in GPU Pro 6, “Hybrid Ray Tracing on a PowerVR GPU”. There are also a few blog posts by Imagination Technologies about the Wizard architecture.