About 10 years ago I saw some images that were very different from the normal photography I was used to seeing. At first I could not figure out what made them so impressive. I could gaze into them and pick out detail in every area of the frame. The imagery seemed almost fake, yet very lifelike. I struggled with this contradiction for a while and concluded that it didn't matter as long as I enjoyed the photograph. Photography is art, after all, and I was enjoying these images.
Sometime later (thanks to the greatness that is the internet) I discovered that these images are collectively known as HDR, or High Dynamic Range. In photography, dynamic range is the difference between the brightest and darkest parts of an image. It is measured in Exposure Values, or EVs for short. One EV is a doubling of the amount of light; two EVs is four times as much (2 × 2 = 4), and so on.
It turns out that the human eye can see about 20 EVs of light, which is a really wide range. By comparison, a high-end camera such as the Nikon D810 can record about 14 EVs, and a typical monitor can only display or reproduce about 10 EVs. Can you see the problem? Our eyes can see a huge range of tones, yet our best cameras can record only 70% of that information. Even worse, the average display medium, such as an LCD monitor, can show only about 10 EVs, just half of the information. How do we solve this dilemma? We have to compress the dynamic range down so that our recording and output media can handle it.
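Because each EV is a doubling, those EV figures translate into enormous contrast ratios. A minimal sketch in Python (the function name is mine, purely for illustration):

```python
# EV arithmetic: each EV step doubles the amount of light, so a
# difference of n EV corresponds to a 2**n ratio in brightness.

def ev_to_ratio(ev_difference: float) -> float:
    """Convert a difference in Exposure Values to a linear light ratio."""
    return 2.0 ** ev_difference

# The ranges mentioned above, expressed as contrast ratios:
print(ev_to_ratio(20))  # human eye:   1,048,576 : 1
print(ev_to_ratio(14))  # Nikon D810:     16,384 : 1
print(ev_to_ratio(10))  # LCD monitor:     1,024 : 1
```

Seen this way, the gap is striking: the eye handles a contrast range roughly a thousand times wider than a typical monitor can reproduce.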
In the 1850s the French photographer Gustave Le Gray ran into this problem while taking seascape photographs. The difference in brightness between sky and sea was extreme and would not fit into the dynamic range of the negatives he was using. He pioneered the idea of making multiple exposures to render both the sky and the sea properly. In the darkroom he carefully spliced the two negatives together to get a composite image with proper exposure in both. The world's first HDR image was born.
In the modern world we can simplify this process using digital editing tools. As long as we have a series of images taken at different exposures, we can combine them into a composite that contains the best elements of each exposure. The end result is much closer to how your eye would see the scene if you were actually standing there.
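As a rough sketch of how such a merge can work, here is a simplified, single-channel version of the well-exposedness weighting used by Mertens-style exposure fusion: each pixel in each exposure is weighted by how close it is to mid-gray, so blown highlights and crushed shadows contribute little. This assumes NumPy, and the function, its name, and its parameters are illustrative, not any particular editing tool's API:

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Blend a bracketed series of grayscale images (values in 0..1).

    Pixels near mid-gray (0.5) get the highest weight; blown-out or
    crushed pixels get almost none. A simplified sketch of the idea
    behind Mertens-style exposure fusion.
    """
    stack = np.stack(exposures).astype(float)      # shape: (n, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)                 # normalize per pixel
    return (weights * stack).sum(axis=0)

# Three tiny synthetic "exposures" of the same scene, dark to bright:
dark   = np.array([[0.05, 0.10], [0.20, 0.45]])
mid    = np.array([[0.20, 0.40], [0.60, 0.90]])
bright = np.array([[0.55, 0.85], [0.98, 1.00]])
fused  = fuse_exposures([dark, mid, bright])
```

Each fused pixel ends up between the darkest and brightest value recorded for it, leaning toward whichever exposure captured that spot best. Real tools add detail-preserving refinements (multi-scale blending, color handling, image alignment), but the weighted-average core is the same.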
There is a long-standing stigma attached to manipulating photos (the history of how that started is a topic for another post). It's well known that many great photographers, such as Ansel Adams, manipulated their negatives by lightening or darkening different areas of the image. They generally did not remove or add elements that were not in the composition; they were mostly increasing or decreasing exposure to place emphasis on certain parts of it.
Photos of the natural world tend to be criticized and discredited as "Photoshopped", even when the manipulation results in an image that is more faithful to the scene as seen by the photographer's eye. Fortunately, modern HDR composites are generally done more tastefully and naturally than the surreal-looking images produced in the early 2000s. Today a whole set of software and techniques exists for creating very natural-looking HDR images.
For me, capturing a set of images at varying exposures lets me record all the data from a particular scene: all the detail in the shadows and all the detail in the highlights, everything I need to create an image that is faithful to what I saw with my eyes when I made the photograph. HDR processing techniques have allowed me to capture exciting compositions in difficult lighting conditions, and I have been steadily refining and incorporating them into my workflow.
I hope this short article helped you gain some insight into HDR photography.
To learn more about the technical aspects, check out the Wikipedia article on HDR photography.