A look at the Bayer Pattern

Have you ever wondered what a ‘raw’ image file from a camera looks like? I have, so I tried to visualise one. Contrary to popular belief, you can actually do that quite easily…

Most digital cameras can not only store images as JPEGs, but also as ‘raw’ files, in a format that normally has to be developed before it can be viewed. In fact, the camera performs the same development before saving an image as a JPEG, even in cameras that can’t store the raw itself, like smartphones. Almost all digital cameras, from phones up to DSLRs, have only one sensor, whose elements can only detect the brightness of the incoming light. To get a coloured image, per-pixel colour filters are placed in front of the sensor so that each pixel captures only red, green or blue light. The arrangement of these filters can vary, but most resemble the so-called Bayer pattern, invented by Bryce Bayer in 1976.

Visualising the Bayer pattern

The three colours are arranged in a 2×2 pattern in which green is used twice as often as the other colours, as the human eye is most sensitive to this colour. The arrangement is depicted below:

RGGB Bayer Pattern

Note that the pattern doesn’t have to start with red, but at least in the cameras for which I could check the raw data it did. Instead of mixing each RGGB block down to a single pixel, the missing colours are reconstructed at every sensor element to get a higher resolution out of the sensor. So the sensor above would result in 12 by 10 RGB pixels. So, yes, your 10 megapixel camera only has 5 million green, 2.5 million red and 2.5 million blue sensor elements, and 66% of the resulting image data is interpolated, not actually captured!
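
Expressed in code, the mapping from sensor position to filter colour is just a parity check. A minimal Python sketch (my own illustration, assuming the pattern starts with red in the top left corner):

    def bayer_colour(row, col):
        """Colour of the filter at sensor element (row, col) in an
        RGGB pattern that starts with red at (0, 0)."""
        if row % 2 == 0:                     # even rows: R, G, R, G, ...
            return "R" if col % 2 == 0 else "G"
        return "G" if col % 2 == 0 else "B"  # odd rows: G, B, G, B, ...

Every 2×2 block thus contains one red, two green and one blue sample, which is exactly where the 25%/50%/25% split from above comes from.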

So what does the data that comes out of the sensor look like? I took the photo below (as raw) and used libraw to decode it. Luckily libraw comes with a sample called ‘unprocessed_raw’ which can dump the sensor data and which served as a starting point for me.

Converted in Lightroom, no adjustments were made.

What comes out of the ‘unprocessed_raw’ sample from libraw just looks black, but this is because the sensor data gets saved in a 16 bit grey image while the camera might not have a 16 bit analog-to-digital converter. So the pixel values had to be shifted, in this case by 2 places, as the camera was only set to 12 bit. Below you can see a small part of the image developed in Adobe Lightroom 4 and the same part of the original sensor data. On the right I coloured the sensor data with the corresponding colours it represents (the colour of the filter for each pixel).

Resized 200%. You might have to click on the image to see the pattern better in full resolution.
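
If you want to reproduce the shift yourself, a small Python sketch like the following works (the file name and format are assumptions; depending on the libraw version, ‘unprocessed_raw’ writes its dump e.g. as a 16 bit TIFF or PGM):

    import numpy as np
    from PIL import Image

    # Load the 16 bit grey dump of the sensor data ('sensor_dump.tiff'
    # is a made-up name for the output of 'unprocessed_raw').
    raw = np.asarray(Image.open("sensor_dump.tiff")).astype(np.uint16)

    # The camera delivers fewer than 16 bits, so the values sit in the
    # low bits and the image looks black. Shift them up to make the
    # data visible; the amount depends on the camera and its settings.
    shift = 2
    Image.fromarray(raw << shift).save("sensor_visible.png")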

The image is quite dark and green is very dominant. To get the true colours out of the raw we would have to rescale the colour channels and find a proper white balance.
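
Such a rescaling can be applied directly to the sensor data by scaling the red and blue sites of the mosaic. A minimal sketch with invented gains (the real multipliers are stored in the raw file’s metadata, or can be derived from a grey patch in the scene):

    import numpy as np

    def white_balance_rggb(mosaic, r_gain=2.1, b_gain=1.6):
        """Scale the red and blue sites of a 16 bit RGGB mosaic;
        the default gains are made up, not from any real camera."""
        wb = mosaic.astype(np.float32)
        wb[0::2, 0::2] *= r_gain   # red sites
        wb[1::2, 1::2] *= b_gain   # blue sites (green stays fixed)
        return np.clip(wb, 0, 65535).astype(np.uint16)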

Getting the missing colours back

But first let’s see how the missing 66% of the data can be restored. The simplest way is to take the value from the nearest sample of the correct colour: e.g. for a pixel that already has a red value, take the blue from the pixel one to the right and one below, and the green from the pixel directly below. This nearest neighbour filtering leads to the coloured result below (left). Now that the image is very roughly restored, the colours can be adjusted. The raw file stores additional information for that (e.g. the white balance from the camera), but here I just tried to roughly match the output from Lightroom (middle).
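
Such a nearest neighbour reconstruction is quickly hacked together; the sketch below (my own, not the exact code behind the images) simply replicates the samples of each 2×2 block to all four of its pixels, which amounts to picking a nearest sample of each colour:

    import numpy as np

    def nearest_demosaic(mosaic):
        """Nearest neighbour demosaic of an RGGB mosaic
        (even width and height assumed)."""
        def upsample(channel):
            return np.repeat(np.repeat(channel, 2, axis=0), 2, axis=1)
        r = mosaic[0::2, 0::2]   # the red sample of every 2x2 block
        g = mosaic[0::2, 1::2]   # one of the two green samples
        b = mosaic[1::2, 1::2]   # the blue sample
        return np.dstack([upsample(r), upsample(g), upsample(b)])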

Left to right: Nearest neighbour interpolation without and with colour correction, bilinear filtering with colour correction (resized 200%).

A slightly better way to reconstruct the missing colours is to average the surrounding samples instead of just picking one: simple bilinear filtering. This is shown on the right (also colour adjusted). As you can see, this already improves the highlight on the eye dramatically.
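
Bilinear demosaicing also fits into a few lines, e.g. as two convolutions (again my own illustration, assuming an RGGB pattern): the known samples of each channel are spread into separate planes and the gaps are filled with the average of the neighbouring samples.

    import numpy as np
    from scipy.signal import convolve2d

    # At a known sample the centre tap returns it unchanged; at a gap
    # the two or four neighbouring samples are averaged.
    K_G  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    K_RB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    def bilinear_demosaic(mosaic):
        """Bilinear demosaic of an RGGB mosaic."""
        m = mosaic.astype(np.float32)
        r, g, b = np.zeros_like(m), np.zeros_like(m), np.zeros_like(m)
        r[0::2, 0::2] = m[0::2, 0::2]
        g[0::2, 1::2] = m[0::2, 1::2]
        g[1::2, 0::2] = m[1::2, 0::2]
        b[1::2, 1::2] = m[1::2, 1::2]
        return np.dstack([convolve2d(r, K_RB, mode="same"),
                          convolve2d(g, K_G,  mode="same"),
                          convolve2d(b, K_RB, mode="same")])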

If the same colour adjustments (in fact just a curve in Photoshop) are applied to the coloured Bayer pattern we can visualise it even better:

Left: from Lightroom, right: the coloured Bayer pattern. Resized 300%.

We can clearly see the dominance of the red coloured pixels on the rose, the green ones on the leaves and the mixture of all colours on the wooden table.

Results from the pros

Even the bilinear reconstruction is far from ideal. There are lots of algorithms that handle Bayer demosaicing better (also written ‘demosaicking’ if you want to google the details), and as I didn’t want to implement all of them I switched to already available software: the already mentioned Lightroom, and RawTherapee, which has the nice feature of letting you choose the algorithm for Bayer demosaicing.

Left to right: Nearest neighbour, bilinear, RawTherapee ‘fast’ & ‘amaze’, Lightroom. Resized 200%; click the image to see it in full resolution.

Here we can faintly see the leaf structure of the rose in the images reconstructed by Lightroom and by RawTherapee’s different algorithms. My own quick hack did not reconstruct it but only created a bit of noise. As you can see now, the rose is actually an artificial flower, so the structure is not an artefact. No additional sharpening was used in any image. Below are the same parts of the image with enhanced contrast and sharpening applied:

Increased contrast and sharpened; click on the image to see it in full resolution.

The visibility of the structure of the fabric is highly dependent on the algorithm used. All images so far were captured with a camera that has an optical low-pass filter, which is intended to reduce the aliasing that demosaicing the Bayer pattern can introduce. However, it also reduces the sharpness of the image. The images below were taken with a camera that has 50% more sensor elements but no optical low-pass filter anymore. But again, the quality differences were mostly a result of the demosaicing technique chosen.

Same scene, different sensor: 50% higher resolution and no low-pass filter.

The only moiré-like patterns I found were in images from the camera with the low-pass filter (but demosaiced with a low-quality algorithm). So I assume that moiré can be avoided by better algorithms at least as well as by optical filters. ‘Image Demosaicing: A Systematic Survey’ by Li et al. also shows the great influence of the demosaicing technique on moiré.

Nearest neighbour, bilinear, ‘fast’ and ‘amaze’ from RawTherapee, and Lightroom; resized 400%. Click on the image to see it in full resolution, otherwise some artefacts might not be visible!

This last example shows that fine structures can be challenging for demosaicing: the algorithms used by RawTherapee seem to add some sharpening that goes wrong here, as can be seen in the few overly dark and bright pixels. The bilinear reconstruction, on the other hand, can hardly preserve the structure of the fabric at all.

Recap

As it turns out, it’s actually not too hard to get access to the raw sensor data of a DSLR and visualise it. A basic reconstruction of the full colour data is hacked together quite quickly, but is by far no match for the already available implementations (no surprise here). If you own a DSLR I would recommend shooting in RAW mode, as you can see that there are wide differences even in the reconstruction quality of the Bayer pattern (in addition to the higher bit depth and the avoidance of JPEG artefacts!). Who knows what is implemented inside the camera, and what awesome new algorithms will be invented in the future that could improve your already captured images even more?

Other ways to capture an image

While most cameras use the Bayer pattern, this isn’t the only way to capture an image of course:

  • You can arrange the colour filter array in a different order.
  • You can add colour filters other than just red, green and blue, e.g. yellow, white or cyan.
  • You can capture red, green and blue in one sensor (Foveon).
  • You can use three separate sensors – if you can manage to align them precisely, a mechanical problem that gets more and more challenging the higher the resolution gets.
  • You can even capture the colours one after another by placing removable colour filters in front of a sensor – sounds impractical? This is used by NASA for some spacecraft and landers…


2 thoughts on “A look at the Bayer Pattern”
  • poo says:

    Hi, I have a question: when I set a black paper or black object as the custom white balance and then take a photo, why does my photo come out green?

    • Robert says:

      You should set the white balance based on a light grey image. If you do it on a black object, it might not be perfectly black but have a slight colour cast; this will negatively influence the white balance and the images will get a strong colour cast (green in your case).
