Many people have asked me over the years what HDR photography is, so it seemed like a good topic for a blog post. HDR probably got its highest profile shortly after Apple added it as a feature to the iPhone camera. It's something I've occasionally dabbled with but ultimately found very difficult to do well (and exceptionally easy to do badly). To understand HDR photography, as with many photographic concepts, you first need to appreciate quite how amazing the human eye is.
Our eyes are incredibly adaptive to light. We can navigate a room in almost pitch-black darkness or a bright sandy beach in extreme sunlight. And not only do we have this very impressive range of sight from very dark to very light, we can also process both in quick succession. In a church or cathedral we can pick out the intricacies of a brightly lit stained glass window and then, with but a flick of the eye, pick up the details in the dark brickwork that frames said window. Wherever we cast our gaze, our eye (and brain) quickly readjusts to the light levels and enables us to 'see' what we're looking at. The range of brightness our eyes can handle is rather breathtaking, and the speed at which it adjusts makes us forget quite how bright the stained glass window is and quite how dark the brickwork is.
The speed at which this adjustment happens is incredible, but it means we forget it happens at all. You only notice it when the margin between the darkness and the brightness is particularly extreme. Take, for example, stepping from inside a dark, cool hotel lobby out into the bright midday sun. It takes a few seconds for our eyes to adjust to the extreme brightness and to properly 'see' the details. Likewise, when walking along a country lane at night your eyes can make out the lane, the trees, most of what is around you, until of course a car comes towards you and dazzles you with its headlights. Suddenly your 'night vision' has disappeared and it takes a little while to readjust to the situation.
You have now begun to appreciate exposure: that there is a process of adjusting from one level of brightness to another, and that our eyes have 'settings' that are adjusted to 'see' different levels of brightness. As explained above, our eyes are constantly doing this to allow us to see the details in whatever we are looking at, be it the stained glass window or the brickwork around it. Cameras have similar settings which replicate what the eye does so they can capture different levels of brightness, and so can also 'see' the detail in the brickwork or the detail in the stained glass window. But unlike our eyes, which frantically change settings depending on where we are looking, a photograph can only have one setting. We choose (or our camera chooses) which setting to use, and that 'exposure' is set in stone for that image.
Suddenly our eyes don't get to choose anymore. As we navigate a photograph, we have no further control over how bright or dark different sections are. The 'exposure' for the whole image was chosen when we took the photograph. For example, if we had aimed our camera at the stained glass window discussed above, the details of the window may be clearly visible in the photograph but the brickwork will be almost black. Likewise, if we had aimed our camera at the brickwork, the details of the brickwork will be clear but the stained glass window to the side will come out almost blank white.
High Dynamic Range images seek to resolve this. They replicate the quick exposure adjustments the eye makes by taking differently 'exposed' images for the lighter parts and the darker parts: one photograph with the camera settings adjusted to show the details of the dark areas (such as the brickwork), and another with the camera settings adjusted to show the details of the light areas (such as the stained glass window). We can then merge these photographs using software such as Photomatix Pro so every part of the photo is shown as our eye would see it as it moved around the scene. So that when we look at the brickwork in the photograph we can see the detail, and when we look at the stained glass window in the photograph we can see the detail there too.
That's the principle, anyway. The problem is that it is very easy to completely lose the fact that some things are lighter and some things darker. As we compress this range to have detail in everything, we can end up with an image that looks too flat. When HDR is done well it is breathtaking. But it should be done so that you don't notice it, just as you don't notice your eyes adjusting when you're looking at that stained glass window and brickwork in that church.
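For the more technically minded, the merging step can be sketched in code. This is a deliberately simplified, hypothetical illustration of the idea, not what Photomatix Pro or any real HDR tool actually does: real software works on full images with far more sophisticated weighting and tone mapping. Here each 'exposure' is just a single brightness value between 0 (black) and 1 (white), and we favour whichever exposure rendered the pixel closest to mid-grey, i.e. the one where it is neither crushed to black nor blown to white.

```python
import math

def fusion_weight(value, mid=0.5, sigma=0.2):
    """Score a pixel value by how close it is to mid-grey.
    Well-exposed values (near 0.5) score highest; values crushed
    towards 0 or blown towards 1 score close to zero.
    The bell-curve shape and sigma are arbitrary choices for this sketch."""
    return math.exp(-((value - mid) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Blend the same pixel taken from several exposures, weighted
    towards whichever exposure captured it best."""
    weights = [fusion_weight(v) for v in exposures]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, exposures)) / total

# The same pixel captured three ways: underexposed (too dark),
# a middle-ground exposure, and overexposed (nearly blown out).
dark, middle, bright = 0.05, 0.45, 0.95
print(fuse([dark, middle, bright]))
```

The blended result sits close to the well-exposed middle value because that exposure dominates the weighting, which is the whole point: for each part of the scene, the merge leans on the photograph that captured that part with detail intact.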
Example: City sunset scene
Below are three photos of exactly the same scene. With the camera on a tripod, I could change the camera settings without changing what it was pointed at. Notice how different the brightnesses are. All three images were taken within only a few seconds of one another, yet by changing the camera settings to get a different 'exposure', we get different details in each.
1. This first image is exposed for the buildings. We can see the windows, the colours of the street, the parked cars, all of which I could clearly see when I looked at them at the time this was taken.
2. This second image is a middle ground, a compromise between the darker bits and the lighter bits. We can now see pink in the sky immediately above the rooftops on the right and can tell that it's either sunset or sunrise. Ultimately, though, the dark bits are too dark and the light bits are too light.
3. This third image is exposed for the sky. We can now see the amazing colours in the sky and make out the detail in the clouds and the crane - again all of which I could see when I looked at them at the time.
So by combining these three images in software, we take the details from the street in the first image (the darker bits at the time the photo was taken) and add the details from the sky in the third image (the brighter bits at the time the photo was taken). We also add the details from the second image for any middle details, the bits that were in between the darkest bits and the brightest bits. In this example there aren't many, but the most obvious is the pink in the skyline on the right: in the first image it is almost white and in the third image it is a dark grey. In a different scene, there may be far more 'middle' detail to use.
So we put all of this together and then choose which bits we want from which images and tweak various settings to blend the three:
It's not a particularly pretty example, but it is certainly closer to the true scene I saw than any of the three source images. Hopefully it demonstrates how HDR images are made and why they can be useful.
I'm keen to know whether this is a good explanation or whether it needs improving, or any other input you have, so please add a comment by clicking here and choosing the 'Add Comment' button.