Most common images are coded on 8 bits per channel (24 bits for a three-channel colour image), which gives 256 different values. The problem is that the human eye is sensitive across a huge range of luminosity and adapts quickly to different contrasts. As a result, photos taken in high-contrast scenes will look over- or under-exposed.
One way to create an image that looks closer to reality is High Dynamic Range (HDR) imaging. HDR combines several shots of the same scene taken at different exposures into a single 16-bit or 32-bit image. Each pixel in such an image represents a physical value: the luminance.
Of course, since monitors, printers, etc. cannot display such an image, an additional step is required: tone mapping. Tone mapping is a projection from the real luminance values back to 8-bit values. A simple linear projection won't do the trick; several algorithms exist that can do this for you. The resulting images are often referred to as HDR, which is not strictly correct.
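To make the two steps concrete, here is a minimal NumPy sketch of the pipeline: the merge uses a triangle weight in the spirit of the Debevec and Malik method (assuming a linear sensor response, which real cameras don't have), and the tone mapping is a simple global Reinhard-style operator. The shots are synthesised rather than loaded from files, and all names and parameters are illustrative, not taken from any particular tool.

```python
import numpy as np

# Synthetic stand-ins for three 8-bit shots of the same scene at known
# exposure times (in practice these would be loaded from image files).
rng = np.random.default_rng(0)
scene = rng.uniform(0.05, 4.0, size=(48, 48))       # "true" luminance
times = np.array([0.1, 0.4, 1.6])                   # exposure times in seconds
shots = [np.clip(scene * t * 160, 0, 255).astype(np.uint8) for t in times]

def merge_hdr(shots, times):
    # Each shot gives an estimate of luminance = pixel_value / exposure_time.
    # A triangle weight trusts mid-range pixels and ignores clipped ones.
    num = np.zeros(shots[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(shots, times):
        v = img.astype(np.float64)
        w = 1.0 - np.abs(v - 127.5) / 127.5         # triangle weight
        num += w * v / t
        den += w
    return num / np.maximum(den, 1e-6)

def tonemap(lum):
    # Global Reinhard-style operator L/(1+L): compresses high luminances
    # smoothly instead of clipping them, then maps back to 8 bits.
    l = lum / lum.mean()
    return np.clip(255 * l / (1.0 + l), 0, 255).astype(np.uint8)

hdr = merge_hdr(shots, times)   # floating-point luminance map
ldr = tonemap(hdr)              # displayable 8-bit image
print(hdr.dtype, ldr.dtype)
```

The nonlinearity of the L/(1+L) curve is the whole point: it spends most of the 256 output values on the mid-tones while still keeping some separation in the highlights, which a linear scaling cannot do.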
Qtpfsgui, an open-source HDR workflow tool, can help you create your own HDR images. Here are some examples.
In a church in Amsterdam: three shots at different exposures. The camera was steady, so no registration step was required before the HDR generation.
And the result:
The same process on a street in Shanghai. Here the camera was not steady, so registration was necessary first:
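When the camera moves slightly between shots, registration boils down to estimating a small translation between the frames. A common way to do this is phase correlation; the sketch below simulates camera shake with `np.roll` and recovers the shift. This is an illustration of the idea, not the algorithm Qtpfsgui actually uses.

```python
import numpy as np

# Simulate two shots of the same scene where the camera moved slightly.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shifted = np.roll(ref, shift=(3, -5), axis=(0, 1))  # camera shake of (3, -5) px

def estimate_shift(a, b):
    # Phase correlation: the normalised cross-power spectrum transforms
    # back to a spike located at the translation between the two images.
    f = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(f / np.maximum(np.abs(f), 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Indices past the midpoint correspond to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

print(estimate_shift(shifted, ref))  # recovers (3, -5)
```

Once the shift is known, each frame can be translated back before merging. Note that this only handles a global translation of the whole frame, which is exactly why it cannot fix subjects that move on their own between shots.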
And the result:
Here you can see the ghosting on people who didn't want to stand still while I was taking the pictures. In the second between the first and the third shot they moved, and of course the registration process is not designed to compensate for that.