Having several of these images with different brightness, histograms, etc., how would you try to make them all more similar to each other? I was hoping that one of your auto* scripts could do this, since you analyze the histograms and make adjustments from there.

My script histmatch may work, but not well if your histogram is too sparse. My best suggestion is snibgo's gain and bias, which matches the brightness and contrast of images: if you know the desired mean and standard deviation, it applies a global correction. My script space does something similar in an adaptive way. For images similar to the one you posted, I would guess the global method is what you need.
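To make the idea concrete, here is a minimal sketch of the gain-and-bias approach, assuming grayscale pixel values normalized to [0,1]; the function name and interface are mine, not taken from either script.

```python
import statistics

def gain_bias(pixels, target_mean, target_std):
    """Linearly remap pixels so their mean and std match the target's:
    vOut = vIn * gain + bias."""
    mean = statistics.fmean(pixels)
    std = statistics.pstdev(pixels)
    gain = target_std / std if std else 1.0
    bias = target_mean - gain * mean
    # Clamp back into the valid range after the linear remap.
    return [min(1.0, max(0.0, v * gain + bias)) for v in pixels]
```

After the remap, the image has the desired mean and standard deviation (unless clamping intervenes), which is what "matching brightness and contrast" amounts to here.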

Looking at the results of gainbias and histmatch, they are sometimes good and sometimes not so good, as would be expected, since we are trying to make a set of images look as much as possible like another reference image.

I think this is a different kind of problem: how to transform a set of images towards a homogeneous "color surface", which has to be computed by looking at all the images before transforming any of them.

Looking at the abstract in the pdf it seems pretty simple (attempted humour):

"(...)To obtain color seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on the so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. The gamma function is derived from the adaptive target and the adaptive source local stats. It is applied to the source images to obtain the color balanced output images. Five target color surface models are proposed. They are color point (or single color), color grid, 1st, 2nd and 3rd 2D polynomials. Least Square Fitting is used to obtain the polynomial target color surfaces. Target color surfaces are automatically computed based on all source images or based on an external target image. (...)"

Some of Fred's scripts already compute target transformations from the original image or from a target image. The point seems to be to get target transformations from a set of images.

The results published in the article are superb.

There is high demand for this kind of processing in the (my) GIS area, since many non-experts receive aerial or satellite imagery that often lacks quality color processing.

So I was wondering what your thoughts would be on this, and whether there is any existing script that could be adapted. Maybe gainbias could calculate these "global" transformations instead of looking at just one reference image?

The problem of seamless joining of mosaic images is similar to making a movie out of time-lapse photos from a fixed camera of a building site (see recent thread), but more difficult.

Changing all images to a common standard sometimes works well, but in the general case we want an evolving standard. A set of aerial photos might include areas of towns, vegetation, mountain and desert. The standard needs to evolve between areas -- a moving window. For a seamless mosaic, we need to blend between standards.
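If a "standard" is just a (mean, std) target, blending between two local standards across a seam could be as simple as linear interpolation; this is only a sketch of the idea, not anything from the scripts discussed:

```python
def blend_standards(left, right, x):
    """Linearly interpolate between two (mean, std) standards.
    x is the position across the seam: 0.0 at left, 1.0 at right."""
    (m0, s0), (m1, s1) = left, right
    return (m0 + (m1 - m0) * x, s0 + (s1 - s0) * x)
```

Each pixel column near the seam would then be corrected toward its own interpolated standard, giving an evolving rather than fixed target.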

The Zhou paper uses "vOut = vIn ^ gamma", which is one-dimensional. It affects brightness and contrast, but not independently.
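A common way to derive such a gamma (my simplification, not necessarily the paper's exact derivation) is to solve vOut = vIn ^ gamma at the mean, picking gamma so the source mean maps to the target mean, with values normalized to (0,1):

```python
import math

def gamma_for_means(source_mean, target_mean):
    """Gamma such that source_mean ** gamma == target_mean,
    for means strictly between 0 and 1."""
    return math.log(target_mean) / math.log(source_mean)
```

Note that applying this gamma per pixel matches the mean only approximately (the mean of the powered pixels is not the power of the mean), which illustrates why a one-parameter curve cannot control brightness and contrast independently.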

The gain-and-bias method is based on "vOut = vIn * a + b". It is two-dimensional, changing lightness and contrast independently.

Of course, either technique can be applied independently to RGB channels or HSL or Lab or whatever.

I do not know if this is relevant, but I have a script, space, that does spatially adaptive contrast enhancement. That is, it adjusts the brightness and contrast differently for each part of the image. Snibgo has a similar script that modifies the histogram adaptively.

Fred, that's relevant, and useful for correcting in-image differences. I also see a good use case for these scripts, because sometimes the people who produce the aerials join images from different dates, resulting in two very different halves within a single image.

For the case explained above, the trick is determining the transformation function for one image from a whole set of images.
So instead of having, for instance, gain-bias look at one reference image, it would look at ten images, compute a "statistical" reference, and apply that to our image.
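That set-wide reference could be as simple as averaging the per-image statistics and then correcting each image toward the average; here is a sketch under that assumption (values normalized to [0,1], function names are mine):

```python
import statistics

def pooled_reference(images):
    """A 'statistical' reference (mean, std) for a set of images:
    simply the average of the per-image means and stds."""
    means = [statistics.fmean(img) for img in images]
    stds = [statistics.pstdev(img) for img in images]
    return statistics.fmean(means), statistics.fmean(stds)

def match_to_set(img, images):
    """Linearly remap img so its mean/std hit the set-wide reference."""
    t_mean, t_std = pooled_reference(images)
    mean = statistics.fmean(img)
    std = statistics.pstdev(img)
    gain = t_std / std if std else 1.0
    bias = t_mean - gain * mean
    return [min(1.0, max(0.0, v * gain + bias)) for v in img]
```

Every image in the set gets pulled toward the same statistical target, rather than toward any single reference image.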

I'm sorry... this is as much as I can understand of the algorithm... I just thought you guys might have something that would apply to this.