Bug Description

In some cases the individual pictures contain
significant brightness
differences (caused by heavy vignetting, an auto-exposure
program, or, as I found recently, the use of a polarisation
filter). These would look better if they could be blended
with a heavier blur:

Based on the detailed description of the blending
algorithm, I would suggest
the following _optional_ modification: in the situations
described above it could be useful to blend the pyramid
levels (especially the higher-numbered ones associated
with low spatial frequencies) with a softer mask. (I
think this is equivalent to using a mask transformed
from a higher level.)

I am not fully familiar with the algorithm used to
generate the Laplacian pyramids, so this might be wrong:
it might not be feasible to implement this idea using
blending masks transformed from a higher level. Applying
a "blur" function (like in image-editing software) to
the blending masks created at each level of the
Laplacian pyramid, with a different blur radius per
level, would probably give some flexibility.

The general idea is that the width of the blending mask
grows in proportion to the spatial frequency of the image
features. In standard Laplacian pyramids the spatial
frequencies are divided into an exponential series, so there
is plenty of room to change the constant of proportionality,
or to come up with a more sophisticated relationship.
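As a rough sketch of that idea, the seam mask could be softened once per
pyramid level, with the blur radius doubling at each coarser level to track
the exponential spacing of the frequency bands. The function names, the
box-blur stand-in for a proper blur, and the doubling rule below are all
illustrative assumptions, not Enblend's actual code:

```python
import numpy as np

def box_blur(mask, radius):
    """Simple separable box blur; stands in for any 'blur' function.
    Edge padding keeps blurred mask values inside [0, 1] near borders."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    for axis in (0, 1):
        mask = np.apply_along_axis(
            lambda row: np.convolve(
                np.pad(row, radius, mode="edge"), kernel, mode="valid"),
            axis, mask)
    return mask

def per_level_masks(seam_mask, levels, base_radius=1):
    """Return one softened copy of the binary seam mask per pyramid level.
    The blur radius doubles at each coarser level, mirroring the
    exponential division of spatial frequencies in the pyramid.
    (Hypothetical sketch; the doubling constant is one possible choice
    of the proportionality discussed above.)"""
    return [box_blur(seam_mask.astype(float), base_radius * 2 ** lvl)
            for lvl in range(levels)]
```

A different constant, or a non-exponential schedule, only changes the
`base_radius * 2 ** lvl` expression.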

The only caveat I can think of off-hand is that the width of
the blending mask can exceed the size of the image overlap
region. This happens even with blending mask generation in
the current version of Enblend. The result is that Enblend
identifies that certain pixels are close to the seam, and it
knows that it has to blend them against something in order
to prevent a visible seam, but there is nothing to blend
them against. This all stems from the fact that we have
images that overlap in irregular ways. It is an open
research question.

If the mask and the pyramids both use the same filter size,
and the seam line never leaves the overlap region, then this
problem does not occur. The image filter can automatically
perform a multiresolution extrapolation which fixes the
problem. Enblend does this now.

If you want the blend mask to grow a little faster, then we
first need to solve the problem of what to do when the mask
extends beyond the image data.

A possible solution might be some kind of extrapolation where
the blending mask covers non-overlapping regions. I think it
would be worth trying a modified blur algorithm for this:
when calculating the weighted average for a 'missing' pixel,
only the pixels actually present should be taken into account.
Formally, the value of a grey-scale pixel at position (X,Y)
could be

SUM_x( SUM_y( p(x,y)*f( sqrt( (X-x)^2 + (Y-y)^2 ) )*m(x,y) ) )
/ SUM_x( SUM_y( f( sqrt( (X-x)^2 + (Y-y)^2 ) )*m(x,y) ) )

where
p(x,y) is the value of the pixel at position (x,y),
f( d ) is the weight function for the blur,
m(x,y) is 1 if pixel (x,y) is present in the image, otherwise 0.
The blur radius (a parameter of the function f() ) could be
proportional to the distance to the closest existing pixel
in the source image.
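The weighted average above is a normalized ("masked") convolution: absent
pixels contribute to neither the numerator nor the denominator. A minimal
brute-force sketch, assuming a Gaussian weight for f(d); the names are
illustrative and this is not Enblend code:

```python
import numpy as np

def masked_blur(p, m, radius):
    """Implements SUM(p*f*m) / SUM(f*m) from the formula above:
    p is the grey-scale image, m the 0/1 presence mask, and
    f(d) = exp(-d^2 / (2*radius^2)) a Gaussian weight (an assumed
    choice).  Pixels with m == 0 are skipped entirely, so their
    values never influence the result; where m == 0 the output is
    an extrapolation from the present pixels."""
    h, w = p.shape
    out = np.zeros_like(p, dtype=float)
    r = int(np.ceil(3 * radius))  # truncate the window at 3 sigma
    for Y in range(h):
        for X in range(w):
            num = den = 0.0
            for y in range(max(0, Y - r), min(h, Y + r + 1)):
                for x in range(max(0, X - r), min(w, X + r + 1)):
                    if m[y, x]:
                        d2 = (X - x) ** 2 + (Y - y) ** 2
                        f = np.exp(-d2 / (2 * radius ** 2))
                        num += p[y, x] * f
                        den += f
            out[Y, X] = num / den if den > 0 else 0.0
    return out
```

Making the radius grow with the distance to the nearest present pixel, as
suggested above, would mean computing a distance transform of m first and
passing a per-pixel radius instead of a constant.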