What is HDR Deghosting?

HDR (or HDRI) is short for "high dynamic range image". Images we see on our computer screens range in brightness from 0 to 255, but the real world contains many more intensities of light. This is why photographs of magnificent scenes, like sunsets, often turn out dull. High dynamic range images capture a wider range of light, and they can be processed by tone mapping so they can be viewed on regular monitors.

To capture an HDR image with a regular camera, one can take several pictures of the same scene, adjusting the shutter speed with each exposure, resulting in a series of pictures ranging from dark to light. An HDR image can then be made by combining the well-exposed pixels from the source images. However, there are frequently moving objects (such as people) in the scene, making each picture inconsistent with the rest. This results in ghosting: moving objects appear semi-transparent in the merged image. The process of removing these artifacts is called deghosting.
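The merge step described above can be sketched in a few lines. This is a minimal illustration, not the actual code behind this post: the hat-shaped weighting function and the function names are my own assumptions, and real pipelines would also linearize the camera response first.

```python
import numpy as np

def hat_weight(pixels):
    # Assumed weighting: well-exposed (mid-range) pixels get weight near 1,
    # under- and over-exposed pixels get weight near 0.
    return 1.0 - np.abs(pixels / 255.0 - 0.5) * 2.0

def merge_hdr(images, exposure_times):
    # Weighted average of per-exposure radiance estimates.
    # images: list of aligned uint8 arrays; exposure_times: shutter times.
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = hat_weight(img.astype(np.float64))
        acc += w * img / t          # divide by exposure time to estimate radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

Because every exposure with a nonzero weight contributes to each output pixel, any object that moved between shots gets averaged in partially, which is exactly the semi-transparent ghosting described above.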

Choosing a Pixel with the Largest Weight

Edit: I made a mistake in my code, so that actually wasn't the real result. Below is the real result.

If you zoom in, you can see that the leaves don't look right. They are being cut off before their edges, although the ghostly branches are alleviated a little. That makes sense: pixels near the center of a leaf are likely to still fall within the leaf even with some displacement, but pixels at the edges are probably encroaching on space that is mostly sky.

The mistake I was making was simple, and stupid: instead of "result = tmp", I wrote "tmp = result". So each pixel in the noisy image you see below is the last well-exposed pixel in the stack, which for most pixels comes from the last image in the stack.
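A reversed assignment like that is easy to make in a per-pixel selection loop. The sketch below is a guess at the shape of the code (the names `result` and `tmp` come from the post; everything else is illustrative) showing the corrected direction:

```python
import numpy as np

def pick_max_weight(stack, weights):
    # stack: (N, H, W) pixel values; weights: (N, H, W) per-pixel weights.
    # For each pixel, keep the candidate from the frame with the largest weight.
    result = stack[0].copy()
    best_w = weights[0].copy()
    for tmp, w in zip(stack[1:], weights[1:]):
        better = w > best_w
        result[better] = tmp[better]   # correct: copy tmp INTO result
        # the bug was the reverse ("tmp[better] = result[better]"), which
        # overwrites the scratch frame and leaves result untouched, so the
        # output ends up dominated by whichever frame was assigned last
        best_w = np.maximum(best_w, w)
    return result
```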


The simplest way to eliminate the stubborn traces of ghosting is to just ignore the lower-weighted pixels, and this is the result of using only the pixels with the largest weights:

Not only did this eliminate the ghost person, it also completely eliminated the blurriness of the branches (click the image to zoom in). However, the price to pay for this is a lot of noise. One of the Khan algorithm's side benefits is that it reduces noise by blending pixels from different images, so choosing a single image's pixel no longer gives that benefit. In anticipation of this, I tilted the initial weights to favor pixels near a brightness of 220/255, but that didn't seem to help. Maybe I didn't tilt the correct weights.
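For what it's worth, the "tilt" idea could look something like the sketch below: replace the usual mid-gray-centered weight with a bump centered on the target brightness, so the winning pixel tends to come from a bright, low-noise exposure. The Gaussian form, the width, and the function name are my own assumptions, not what Khan et al. or this post actually used.

```python
import numpy as np

def tilted_weight(pixels, target=220.0, width=80.0):
    # Assumed weighting: a Gaussian bump centered on `target` (220/255 here)
    # instead of mid-gray, so brighter well-exposed pixels win the argmax.
    p = pixels.astype(np.float64)
    return np.exp(-((p - target) ** 2) / (2.0 * width ** 2))
```

Whether tilting helps depends on which weights get tilted: biasing the initial (exposure) weights changes which frame wins, but if the ghost-removal iterations later renormalize those weights, the bias can wash out, which would be consistent with the tilt appearing to do nothing.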
