Choosing a color space

In image processing, choosing a color space is the first task. The difficulty is that content sources such as images and movies define their content in a color space like RGB (not to mention sRGB and Adobe RGB), or YCbCr, where chroma subsampling may be applied so that you get 4:4:4, 4:2:2, or even 4:2:0.

On the other hand, image processing algorithms such as contrast enhancement or motion estimation expect the color information to carry a clear indication of lightness (how bright everything in the scene is).

In this blog, we’ll talk about the different choices you have for extracting lightness data from image content. We will start from the same picture every time and show the grayscale result in each of the color spaces discussed here.



A lot of images contain their information as RGB values for every pixel. As the following sections show, the G component carries the most luminance information. So if not much computing power is available, or there is no time to convert to another color space, replace every pixel, p, as follows: $$ p_{i,j}(r,g,b) = p_{i,j}(g,g,g) $$

where i is the row index, j is the column index, and r, g, b are the RGB values of the pixel located at (i, j). Applying this to our example image gives the following result.


Remark: one can easily think of images that contain little or no green, where this approach will fail.


If the RGB values are weighted as in the ITU-R BT.601 standard (note: the coefficients below are the BT.601 ones; BT.709 uses different weights), then the Y component of YCbCr (luma, a non-linear indication of lightness) is derived from the original values as follows, with the color components c_b and c_r set to zero:

$$ y = 0.299r + 0.587g + 0.114b $$ $$ c_b = 0 $$ $$ c_r = 0 $$

Going back to RGB is easy, because if c_b and c_r are both 0, then $$ r = y $$ $$ g = y $$ $$ b = y $$

so for every pixel, p, we substitute the calculated value: $$ p_{i,j} = p_{i,j}(y,y,y) $$ where *y* has been calculated from the *r, g, b* values of *p* at position *i, j*.
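A minimal Go sketch of this weighted sum (the helper name `luma` is mine; the result is rounded to the nearest 8-bit value):

```go
package main

import "fmt"

// luma computes y = 0.299r + 0.587g + 0.114b (the BT.601 weights from
// the text) and rounds to the nearest 8-bit value; c_b and c_r are
// simply dropped, which makes the round trip back to RGB trivial.
func luma(r, g, b uint8) uint8 {
	y := 0.299*float64(r) + 0.587*float64(g) + 0.114*float64(b)
	return uint8(y + 0.5)
}

func main() {
	fmt.Println(luma(255, 0, 0)) // pure red → 76
	fmt.Println(luma(255, 255, 255)) // white stays 255
}
```

Note that the three weights sum to 1.0, so a neutral gray input maps to itself.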


As can be seen, this result is already a better approximation of what humans perceive as lightness.


The LAB color space was designed to be computationally feasible while remaining relatively well suited for computing color differences, indicating lightness, separating color from lightness, and so on.

That’s why we would also like to show the resulting image when computing the brightness of the scene in LAB.

Without going too deep into computational details, we can say that for every pixel p, l can be calculated from the r, g, b values of the pixel, while a and b are set to 0. $$ l = F_{forward}(r,g,b) $$ $$ a = 0 $$ $$ b = 0 $$ Going back to the original RGB color space of the image (applying *F_backward(l,a,b)*), the result is as follows.


As can be seen in the image, there are slight differences between the LAB and YCbCr results. But these details can produce a hugely different output from the algorithm in which the lightness information is used. What should also be clear is that, even when your image content is defined in YCbCr or a similar color space, it is not obvious that the value of the Y component can be used directly as the lightness information.

Example code is available on GitHub: Go Color To Gray.


The last example clearly shows the problems with the first method: the red accents are completely gone in the converted picture. To get a proper view of the differences, it’s best to download the images and scroll through them to see the changes.

[Comparison images: G to Gray, YCbCr, LAB]

written by: Benoit Catteau