RenderingPipeline

from geometry to pixels

OpenGL Programmable Blending – APPLE_shader_framebuffer_fetch

Large parts of the rendering pipeline are user-programmable through shaders nowadays. One of the remaining fixed functions is fragment blending: the new fragments (shaded by the fragment shader) don’t necessarily overwrite the corresponding framebuffer entry; we can select different blending operations. The set of possible operations, however, is fixed.

There have been requests for a programmable blending stage for quite some time now, and finally this functionality is available – somewhere we probably didn’t expect it: on iOS, from Apple. The fruit-branded computer company is not exactly well known for high-end OpenGL support, but this time they provide a long-awaited feature for the first time.

The extension for programmable blending is called APPLE_shader_framebuffer_fetch and is available for OpenGL ES on iOS 6. What we get with this extension is a new built-in variable called gl_LastFragData, which simply holds what would be the destination of the blending operation in fixed function (our current fragment color would be the source). Of course it is read-only – we write to the framebuffer in this shader anyway!
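A minimal fragment shader using the extension might look like this (a sketch; `colorVarying` is an assumed varying, and note that gl_LastFragData is an array – index 0 is the framebuffer color):

```glsl
#extension GL_APPLE_shader_framebuffer_fetch : require

varying lowp vec4 colorVarying; // assumed input from the vertex shader

void main(void)
{
    // What the framebuffer currently holds at this fragment's position,
    // i.e. the "destination" of a fixed-function blend.
    lowp vec4 dst = gl_LastFragData[0];

    // A hand-written additive blend as a simple example.
    gl_FragColor = clamp(dst + colorVarying, 0.0, 1.0);
}
```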

Besides reimplementing the known fixed-function blend modes (which can now differ on a per-fragment basis!), we can add completely new modes (e.g. Photoshop-like blend modes). Special blending modes for non-color data are also possible…
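As a sketch (with an assumed varying name and an arbitrary per-fragment mode selection), classic source-alpha blending next to a Photoshop-style “screen” mode could look like this:

```glsl
#extension GL_APPLE_shader_framebuffer_fetch : require

varying lowp vec4 srcColor; // assumed varying: the shaded fragment color

void main(void)
{
    lowp vec4 dst = gl_LastFragData[0];

    // Fixed-function equivalent of GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA:
    lowp vec3 alphaBlend = srcColor.rgb * srcColor.a
                         + dst.rgb * (1.0 - srcColor.a);

    // Photoshop-style "screen" mode: 1 - (1-src) * (1-dst)
    lowp vec3 screen = 1.0 - (1.0 - srcColor.rgb) * (1.0 - dst.rgb);

    // Pick a mode per fragment, e.g. based on alpha (arbitrary example) --
    // something fixed-function blending cannot do.
    gl_FragColor = vec4(srcColor.a < 0.5 ? screen : alphaBlend, 1.0);
}
```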

Another possibility is implementing per-fragment post-processing effects without rendering to a texture first: just render a fullscreen quad, read the framebuffer color and modify it in the shader (think of color adjustments etc.).
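For example, a simple grayscale conversion applied by drawing a fullscreen quad over the finished frame (a sketch):

```glsl
#extension GL_APPLE_shader_framebuffer_fetch : require

void main(void)
{
    // The color already in the framebuffer from previous draws.
    lowp vec4 dst = gl_LastFragData[0];

    // Standard luminance weights for an RGB-to-gray conversion.
    lowp float gray = dot(dst.rgb, vec3(0.299, 0.587, 0.114));

    gl_FragColor = vec4(vec3(gray), dst.a);
}
```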

Access to the depth buffer is not provided, but if that’s needed, it might be possible to also store the depth (probably at a lower precision) in one of the color channels.
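One way to sketch this: if the alpha channel is otherwise unused, the geometry pass could write gl_FragCoord.z into it; a later fullscreen pass could then fetch it back. A hypothetical second-pass shader (the fog color is an assumption for illustration):

```glsl
#extension GL_APPLE_shader_framebuffer_fetch : require

void main(void)
{
    lowp vec4 dst = gl_LastFragData[0];

    // Coarse (8 bit) depth stashed in alpha by the earlier geometry pass.
    lowp float depth = dst.a;

    // Example use: simple depth-based fog.
    lowp vec3 fogColor = vec3(0.5);
    gl_FragColor = vec4(mix(dst.rgb, fogColor, depth), 1.0);
}
```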

If you have access to the 2012 WWDC videos from Apple (you need a developer account), check out session 513!

Now, if only we could get this on the desktop as well (and as an ARB extension, or even in core) :-) Sadly, this might not be so simple, as blending is implemented as fixed-function hardware on most GPUs. At least, that’s what Graham Sellers told me, and I have no reason to doubt it:

"It's (very flexible) fixed function in most architectures I know of. Would be quite hard to make programmable."

A quick look at the Intel HD 4000 documentation suggests that programmable blending isn’t possible there either (I picked that GPU because it is up to date and well documented).


8 thoughts on “OpenGL Programmable Blending – APPLE_shader_framebuffer_fetch”
  • John says:

    The second sentence in the second paragraph was supposed to say “The fruit-branded computer company is *not* exactly well known for hi-end OpenGL support…”

  • Daniel Rakos says:

    NVIDIA’s Tegra has had programmable blending for a long time, so iOS 6 is not the first to support such a thing. In fact, it’s available on all Tegra-based Android devices through the GL ES extension NV_shader_framebuffer_fetch. Apple’s extension is probably a replica of that spec. (Maybe the iPhone 5 has a Tegra GPU?)

    Also, mobile GPU architectures are quite different from desktop ones, so something that is efficient to implement on one platform might not be on the other. It’s no coincidence that most mobile GPUs don’t support things like geometry shaders or tessellation.

    Finally, while desktop GPUs don’t support programmable blending directly, you can in fact do programmable blending on any recent GPU, as you have load/store images (or UAVs, in DX terminology) that allow you to perform read-modify-write operations if you really want to. You can even do scattered writes (i.e. write to “other pixels”). They can do even more: using load/store images and atomic counters you can even do order-independent transparency *with* programmable blending, which is far more than any mobile GPU can achieve.

    • Robert says:

      This Apple extension was first demonstrated on an iPhone 4S, so it runs on ImgTec GPUs; I also highly doubt that Apple switched to NVIDIA for the iPhone 5.

      That said, it’s true that mobile GPUs are very different from desktop GPUs. You’re also totally right that we can do tricks like programmable blending and much more with load/store, but this workaround is still much more manual and more complex than the simpler options provided by mobile GPUs.

      While it’s debatable whether simple access to programmable blending should be provided on desktop GPUs, it is one of the features most often demanded by developers.

  • Christophe says:

    On Kepler and Southern Islands hardware, I am pretty convinced that merely exposing, say, 50% of the LDS in the fragment shader would let us do a lot of interesting things related to programmable blending.

  • Radek Mackowiak says:

    Sadly this extension doesn’t work for me. :(

    #extension GL_APPLE_shader_framebuffer_fetch : require

    varying lowp vec4 colorVarying;

    void main(void) {
        gl_FragColor = gl_LastFragData[0] + vec4(colorVarying.x, colorVarying.y, colorVarying.z, 1.0);
    }

    extension ‘GL_APPLE_shader_framebuffer_fetch’ is not supported

    Tried to run it on the iOS 6.0 iPad Simulator and on an actual iPad with 6.0
