There are some cases where Khan's algorithm doesn't work at all. In the previous post (the fountain scene), the tree branches could not be deghosted because the wind moved them in every frame, so the algorithm never found a consistent background. The same goes for large movements: as long as the images differ enough from one another in the same region that no background emerges, the algorithm will have trouble. A typical example is taking pictures of crowded places.
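For reference, the core of Khan's approach is an iterative reweighting: each pixel of each exposure gets a weight estimating how likely it is to belong to the static background, and the weights are refined by a kernel density estimate over a small neighbourhood taken from all the inputs. The sketch below is a loose NumPy rendition of that idea with my own simplifying assumptions (a Gaussian kernel, a fixed window radius, wrap-around borders), not the exact formulation from the paper; it's only meant to show why a region that looks different in every input ends up with uniformly low, indecisive weights.

```python
import numpy as np

def iterate_weights(images, weights, n_iters=5, sigma=0.1, radius=1):
    """Loose sketch of a Khan-style iterative reweighting.

    images:  list of N aligned, exposure-normalised float images,
             each of shape (H, W, 3) with values in [0, 1]
    weights: array of shape (N, H, W), initial per-pixel weights (e.g. all ones)

    Each pixel's new weight is a kernel density estimate of its colour against
    a small window taken from *all* inputs, weighted by the current weights --
    pixels that agree with the weighted majority get pushed up, outliers down.
    """
    stack = np.stack(images)                        # (N, H, W, 3)
    n = stack.shape[0]

    for _ in range(n_iters):
        new_w = np.zeros_like(weights)
        for i in range(n):
            num = np.zeros(weights.shape[1:])
            den = np.zeros(weights.shape[1:])
            for j in range(n):
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        # Wrap-around borders keep the sketch short.
                        nb = np.roll(stack[j], (dy, dx), axis=(0, 1))
                        wb = np.roll(weights[j], (dy, dx), axis=(0, 1))
                        d2 = np.sum((stack[i] - nb) ** 2, axis=-1)
                        k = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian kernel
                        num += wb * k
                        den += wb
            new_w[i] = num / np.maximum(den, 1e-12)
        weights = new_w
    return weights
```

If nearly every input disagrees in some region, the numerator stays small for all of them and no exposure ever pulls ahead, which is exactly what happens in the crowded scene below.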
Source images:
Even though this scene is far from crowded, there's a lot of overlap in where the movements occur.
After 1 iteration:
After 5 iterations:
After 12 iterations:
The only difference between the first iteration and the 12th iteration is... the error is more apparent in the latter =P. One way to attempt a solution is to encourage choosing from the same image when the weights are indecisive, but then the order in which the pixels are processed becomes very important, and how would we know when it's OK to switch to a different image? We can't simply choose the input with the largest weight, because that could cut into some objects. For example, the body of the man on the right has a relatively high weight, while his head has a lower weight than the wall behind it, and we don't want to decapitate him. Doing this properly would require computing the most coherent "edge" along which to switch images, which is very slow. We could use the calculated weights to help make that decision (similar weights = better seams), but I haven't sorted out the details yet...
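To make that idea concrete, here is a small NumPy sketch of per-pixel source selection. The stick_margin parameter is a hypothetical knob, not something in my code: it keeps the previously chosen image whenever its weight is within that margin of the best one. It also makes the order dependence obvious, since the loop simply walks each row left to right and a different scan order would produce different seams.

```python
import numpy as np

def pick_labels(weights, stick_margin=0.05):
    """Per-pixel source selection with a bias toward the previous choice.

    weights: (N, H, W) array of per-pixel weights, one slice per input image.
    Returns an (H, W) integer label map: which input each pixel comes from.

    stick_margin is a hypothetical knob: keep the previously chosen image
    as long as its weight is within that margin of the best weight.
    """
    n, h, w = weights.shape
    labels = np.zeros((h, w), dtype=int)
    for y in range(h):
        prev = int(weights[:, y, 0].argmax())   # start each row with plain argmax
        for x in range(w):
            col = weights[:, y, x]
            best = int(col.argmax())
            # "Indecisive" weights: stay with the previous image if it is
            # nearly as good, instead of flipping images pixel by pixel.
            if col[prev] >= col[best] - stick_margin:
                best = prev
            labels[y, x] = best
            prev = best
    return labels
```

With stick_margin set to 0 this degenerates to plain weights.argmax(axis=0), i.e. exactly the "largest weight wins" choice that decapitates the man on the right.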