Case Study: High Dynamic Range (HDR)

Our eyes and brain process images very differently from a camera. Aside from the size and shape of the image frame, the world simply looks different. That is why we often look at something, take a picture of it, and find that the picture comes out too dark or too bright. Sometimes there are both overly bright and overly dark areas in the same frame. That happens because our eyes and a typical camera sensor do not handle dynamic range equally.

Dynamic range is a measure of the spread between the lowest and highest light energy levels in a scene (Ang, 2010). Since we are talking about light energy, we can safely equate the lowest and highest levels to darkness and lightness. So, let’s say we are talking about a black and white image. Our eyes can perceive a whole lot more shades of grey than a camera sensor can. The same applies to a color image: more shades of each color are visible to the human eye. While not an exact representation, the figure below illustrates that we perceive a much wider dynamic range than a typical digital camera.

[Figure: dynamic range of the human eye compared to a typical digital camera]
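
To put a rough number on that spread, photographers usually express dynamic range in stops, where each stop is a doubling of light. Here is a minimal sketch of that arithmetic; the luminance readings are made-up values for illustration, not measurements from any real scene or sensor.

```python
import math

def dynamic_range_stops(brightest: float, darkest: float) -> float:
    """Dynamic range in stops: the number of doublings of light that
    separate the darkest and brightest measurable levels in a scene."""
    return math.log2(brightest / darkest)

# Hypothetical readings: a sunlit window roughly 5000x brighter than a
# shadowed corner of the room.
print(round(dynamic_range_stops(5000, 1), 1))  # -> 12.3 stops
```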

This, however, did not stop people from finding a workaround.

High Dynamic Range (HDR) is a technique that was conceived to address this shortcoming on the camera’s part. An HDR image lets you combine the brightest and darkest parts of a scene that your camera wouldn’t be able to record in a single shot (Nightingale, 2009). Multiple shots of the same frame taken at different exposures are composited in such a way that each shot fills in the blanks left by the others.
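
To make the compositing step concrete, here is one possible sketch of it using OpenCV’s Debevec merge; this is not necessarily how any particular HDR tool does it, and the file names and exposure times are placeholders.

```python
import cv2
import numpy as np

# Three bracketed shots of the same frame: under-, normally, and overexposed.
# File names and exposure times (in seconds) are placeholders.
files = ["under.jpg", "normal.jpg", "over.jpg"]
times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Merge the bracket into a single floating-point radiance map, where each
# exposure fills in the detail the others clipped to black or white.
hdr = cv2.createMergeDebevec().process(images, times)

# The Radiance (.hdr) format preserves the full dynamic range on disk.
cv2.imwrite("merged.hdr", hdr)
```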

[Figure: dynamic range of an HDR composite compared to the human eye and a typical digital camera]

As you can deduce from the above figure, through the HDR effect it becomes possible for an image to exhibit a dynamic range even wider than human perception. This is called hyperrealism, and it was very popular some years ago due to its wow factor, where even mundane subjects become larger than life.

Here is the thing, though. Even though it is possible to create a true HDR photograph, it cannot be rendered by output devices that only support lower dynamic ranges. So, in order to avoid that problem, a high dynamic range image needs to be compressed back into a low dynamic range image without losing that HDR look. This process is called tone mapping.
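
Continuing the sketch above, tone mapping could look something like this with OpenCV’s Reinhard operator; the parameter values are only a starting point, not a recipe for any particular look.

```python
import cv2
import numpy as np

# Load the floating-point radiance map produced by the merge step.
hdr = cv2.imread("merged.hdr", cv2.IMREAD_UNCHANGED)

# Compress the wide range of radiances into the 0-1 range an ordinary
# display can show, while preserving local contrast (the "HDR look").
tonemap = cv2.createTonemapReinhard(gamma=2.2, intensity=0.0,
                                    light_adapt=1.0, color_adapt=0.0)
ldr = tonemap.process(hdr)

# Convert to plain 8-bit so any device can display it.
cv2.imwrite("tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```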

Salzburg, Austria (2014)

Unfortunately, like any other art form, HDR photography lost much of its novelty once too many people got into it. Suddenly, just about anyone with a recent version of Adobe Photoshop was applying it to their pictures. Of course, the quality of the end results varied from the exceptional, to the acceptable, to the horrible.

Perhaps the debate over its merits will go on for a while longer. But it should be pointed out that HDR is about more than just producing hyperrealistic pictures. While there will always be a time and place for those, HDR can also be a powerful tool for working around the camera’s limitations to a lesser degree.

I personally work more often with the subtler side of HDR — just enough to get details to pop out.

HDR also serves a more practical purpose for me. Take, for example, the collage below. The frames are of a second-floor den with a balcony outside, taken on a sunny morning. If you were to take a single shot, you would have to choose between an underexposed den and an overexposed window. Applying the HDR technique lets you get the best of both worlds.

Baguio City, Philippines (2013)

By combining three pictures of the same frame taken at different exposure levels, I was able to come up with a picture that comes quite close to what I was actually seeing at the time, without it ending up looking too hyperrealistic or cartoonish.
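
For this subtler, more practical use, exposure fusion is one possible shortcut: it blends the bracketed shots directly into a displayable picture without building a radiance map or tone mapping it. Here is a sketch with placeholder file names, again using OpenCV; it illustrates the general approach, not the exact workflow behind the picture above.

```python
import cv2
import numpy as np

# Bracketed shots of the den (placeholder file names): the darkest frame
# keeps the window from blowing out, the brightest keeps the room visible.
files = ["den_dark.jpg", "den_mid.jpg", "den_bright.jpg"]
images = [cv2.imread(f) for f in files]

# Mertens exposure fusion weights the best-exposed regions of each frame
# and blends them; no exposure times or tone-mapping step needed.
fused = cv2.createMergeMertens().process(images)

cv2.imwrite("den_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```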

Tutorials on HDR abound on the Web. I encourage you to read up on the technique and practice its application. The important thing afterwards is to develop the intuition to identify where and when to make use of it.

References:

Nightingale, D. (2012). Practical HDR.