The flat field calibration plugin was ably demonstrated by Jeff, Michael, and Eric Chan in a recent tutorial, where it was used to remove light falloff and lens cast with color shading. The technique can also be used to remove dust marks on the sensor while correcting for vignetting and other forms of uneven illumination. Martin Evening demonstrated how flat field calibration can be done in Photoshop to remove sensor dust marks using the divide blending mode, but that technique is rather tricky. Flat field calibration should also be quite useful in photomicroscopy, where uneven illumination and dust on the illumination optics can be a major problem.

Hopefully, forum members will post their experience with other uses of this plugin.

In C1 there is a semi-HDR process using an LCC that I've tried a few times. See here on Capture Integration's site: hdr-via-lcc

There's no way to do this with the LR flat field process directly. But I wonder about doing it indirectly...? Out of curiosity I forced LR to use the flat field process by saving an image out as a DNG and then applying it directly to itself. I got this somewhat interesting result (the original image, with no adjustments, is included for comparison). Perhaps there is some way to use this result as a mask in Photoshop. But unfortunately my Photoshop skills, well, suck.

In C1 there is a semi-HDR process using an LCC that I've tried a few times. See here on Capture Integration's site: hdr-via-lcc

There's no way to do this with the LR flat field process directly. But I wonder about doing it indirectly...? Out of curiosity I forced LR to use the flat field process by saving an image out as a DNG and then applying it directly to itself. I got this somewhat interesting result (the original image, with no adjustments, is included for comparison). Perhaps there is some way to use this result as a mask in Photoshop. But unfortunately my Photoshop skills, well, suck.

Dave,

I have no experience with your method, but I did try out the flat field plugin with photomicroscopy. It works very well to remove nonuniform illumination, but unfortunately has little effect on dust spots.

Image without flatfield:

Flat field image. Note the dust spots on the right and the nonuniform illumination.

Corrected image. Illumination is good, but dust spots remain.

Martin Evening's method using Photoshop's divide blending mode requires careful adjustment of the curve to eliminate the dust spots, and the process is rather tricky. It would be nice to accomplish the same effect in Lightroom.

Martin Evening's method using Photoshop's divide blending mode requires careful adjustment of the curve to eliminate the dust spots, and the process is rather tricky. It would be nice to accomplish the same effect in Lightroom.

If anyone has some ideas, please advise.

Hi Bill,

The flat field correction normally uses an input that is blurred before normalization and division. The blur is required to remove high spatial frequency detail (e.g. noise), leaving a low frequency image with only the global fall-off. The blurring will also reduce or even remove the dust signature. Since the division requires a linear gamma for the highest quality and simplest calculation, one needs to fiddle around with the curve adjustment to match the response curve of the flatfield image to the response curve of the actual image. It helps to do the flatfield correction as early as possible in the processing chain, when the response/gamma curve is still relatively simple.

Dust spot removal does not require significant low-pass filtration of the flatfield image; quite the contrary. So one can use the same flatfield image, but two passes are required: one blurred for flat-fielding, and one not (or only minimally) blurred for dust shadow removal. The dust removal image also needs to be flatfielded itself. Alternatively one can try a single pass with only a minimally blurred flatfield image to remove both dust and fall-off in one operation, but that potentially increases the noise.
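As a sketch of the blur/normalize/divide sequence Bart describes, here is a minimal Python/NumPy illustration. This is not the plugin's actual implementation: the box blur is a crude stand-in for a proper low-pass filter, and the pixel data are assumed to be linear gamma.

```python
import numpy as np

def box_blur(img, radius):
    """Crude separable box blur: a stand-in for the low-pass filter
    that removes noise/detail and leaves only the global fall-off."""
    k = 2 * radius + 1
    for _ in range(2):  # blur rows, transpose, blur again -> both axes
        p = np.pad(img, ((radius, radius), (0, 0)), mode='edge')
        c = np.cumsum(p, axis=0)
        # sliding-window sums via cumulative sums
        top = np.vstack([np.zeros((1, c.shape[1])), c[:-k]])
        img = ((c[k - 1:] - top) / k).T
    return img

def flat_field_correct(image, flat, blur_radius=8):
    """Divide the image by the blurred, normalized flat (linear gamma assumed)."""
    low = box_blur(flat.astype(float), blur_radius)
    gain = low / low.max()   # 1.0 at the brightest point, <1.0 in the corners
    return image / gain      # center unchanged, corners lifted
```

For real flat fields the blur radius must be large enough to wipe out dust and noise, as discussed above; the tiny radius here only suits synthetic test data.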

Lightroom is not the most obvious type of application for such mathematical image processing, but the functionality can be (and is) built into an experimental dedicated plugin.

As mentioned, the same division procedure can also be used to reduce image contrast as a pseudo-HDR operation, by using the original image as its own flatfield image but at a reduced strength. Capture One makes that quite easy.

Bill, the FF plug-in intentionally does not remove dust spots. (The dust spots are in fact detected and ignored as part of the FF process.) This was a necessary compromise given the FF plug-in's approach is metadata-driven (it does not bake the results into the image pixels themselves).

Bill, the FF plug-in intentionally does not remove dust spots. (The dust spots are in fact detected and ignored as part of the FF process.) This was a necessary compromise given the FF plug-in's approach is metadata-driven (it does not bake the results into the image pixels themselves).

Hi Eric,

That makes sense, thanks for clarifying. It would be interesting if a future version could offer something metadata-driven to deal with dust spots automatically. They are a real pain with e.g. focus-stacking techniques, where the dust spots produce a trail that sticks out like a sore thumb.

It is of course possible to apply the flat-fielding to the flat field image itself, which should leave us with a dust mask. That dust mask can then be used in Photoshop as a divide layer, or in Lightroom as a template for the Spot Removal (Q) tool, giving a set of adjustment brush settings that can be copied to other images.

With a spot-healing template one just has to verify that the healing sample locations still make sense given the actual image content, but it is now much harder to miss spots. It's faster to remove problematic healing brush locations that have little impact due to image detail than to meticulously search for spots amidst image detail and overlook one or two, only noticing after the resulting image has been sent off to the client.

A darkframe removal plugin would also be welcomed, to improve long exposures where pattern noise and hot pixels start to ruin images.

The flat field correction normally uses an input that is blurred before normalization and division. The blur is required to remove high spatial frequency detail (e.g. noise), leaving a low frequency image with only the global fall-off. The blurring will also reduce or even remove the dust signature. Since the division requires a linear gamma for the highest quality and simplest calculation, one needs to fiddle around with the curve adjustment to match the response curve of the flatfield image to the response curve of the actual image. It helps to do the flatfield correction as early as possible in the processing chain, when the response/gamma curve is still relatively simple.

The whole process is somewhat confusing to one (such as myself) who has a limited understanding of the underlying principles. I repeated my work with Martin Evening's method using the divide blending mode in Photoshop, and used PV2010 with a linear tone curve to avoid unnecessary adjustments to the images prior to the correction process. Of course, the files are still gamma encoded; one could linearize the images in the ProPhoto RGB space using the linear_RIMM-RGB_v4.icc profile, but this entails extra work. Martin Evening did state that the brightest areas of the flat field should be white, and this did help in the process.
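To see why the gamma encoding matters, here is a small Python sketch. It uses the sRGB transfer curve purely as a familiar stand-in (the thread concerns ProPhoto/linear RIMM), and the pixel values are invented. The point is that dividing gamma-encoded values is not the same as dividing in linear light, especially in the shadows where the curve departs from a pure power law.

```python
import numpy as np

# sRGB transfer curve, used here only as a stand-in for a non-linear
# encoding; the same argument applies to any tone curve that is not
# a pure power law.
def to_linear(v):
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def to_encoded(v):
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

img_enc  = np.array([0.03, 0.70])   # hypothetical shadow and midtone pixels
flat_enc = np.array([0.88, 0.88])   # normalized flat reading at those pixels

naive   = img_enc / flat_enc                                    # divide encoded values
correct = to_encoded(to_linear(img_enc) / to_linear(flat_enc))  # divide in linear light
```

For the shadow pixel the two results differ noticeably, which is why the division should be done in (or carefully matched to) linear gamma.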

Here is the result of implementing your suggestions. The illumination is uniform and the dust spots are gone. Thanks for your help!

Dust spot removal does not require a significant low pass filtration of the flatfield image, on the contrary. So one can use the same flatfield image, but two passes are required, one blurred for flat-fielding, one not (or only minimally) blurred for dust shadow removal. The dust removal image also needs to be flatfielded. Alternatively one can try with only a minimally blurred flatfield image to remove both dust and fall-off in one operation, but that potentially increases the noise.

This is interesting. How does one apply the two passes to obtain the final image?

Hi Bill,

You're welcome.

The flat fielding is basically done by generating a (very) low spatial frequency version of the light distribution, so that what remains is only the vignetting and light fall-off signature. That flat-field 'image' is normalized by dividing the entire image by the brightest pixels (usually found in the center) of the FF image. That gives a data file with the value 1.0 in its center, and slightly lower values towards the corners. By then dividing an actual image pixel by pixel by the corresponding pixels in that normalized data file, the brightness of the center remains unchanged (division by 1.0) and the corners are lifted (division by e.g. 0.8). This assumes linear gamma images to simplify the normalization and division operations. That's why Martin stated that the brightest areas of the flat field should be white (= normalized, preferably by division in linear gamma).
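As a toy numeric check of the normalization and division just described (all values invented for illustration):

```python
import numpy as np

flat = np.array([0.8, 1.0, 0.8])   # blurred flat: bright center, darker corners
gain = flat / flat.max()           # normalize -> [0.8, 1.0, 0.8], center = 1.0
scene = np.array([0.4, 0.5, 0.4])  # a uniform scene dimmed by the same fall-off
corrected = scene / gain           # center /1.0 unchanged, corners /0.8 lifted
```

`corrected` comes out flat at 0.5 everywhere: the center is untouched and the corners are lifted back to the center value.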

Quote

This is interesting. How does one apply the two passes to obtain the final image?

Things get complicated and laborious very fast, but the FF image can be flat-fielded itself, which leaves the dust as the only variable. Normalizing that dust image, also to bright = white, produces an image that can be used as a layer with divide blending mode to remove/lift the brightness of only the dust spots (which are slightly darker than white) on the image that was already flat-fielded earlier. As with the blurred FF image, a curves adjustment layer for gamma adjustment is required.
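The two passes can be sketched numerically. This is only a synthetic 1-D illustration of the idea, not anyone's actual workflow; in particular, the assumption that the low-pass in pass 1 recovers exactly the fall-off is idealized (a real blur only approximates it).

```python
import numpy as np

# Synthetic 1-D row of pixels: smooth fall-off plus one dark dust spot.
x = np.linspace(-1.0, 1.0, 101)
falloff = 1.0 - 0.2 * x ** 2          # vignetting-style shading, 1.0 at center
dust = np.ones_like(x)
dust[48:53] = 0.85                    # dust shadow, ~15% darker
flat = falloff * dust                 # what the flat-field shot records

# Pass 1: heavy low-pass; assumed here to recover exactly the fall-off.
gain = falloff / falloff.max()        # normalized: 1.0 at the brightest point

# Pass 2: flat-field the flat itself; dust is the only variable left.
dust_mask = flat / gain
dust_mask /= dust_mask.max()          # normalize: brightest = 1.0 ("white")

scene = 0.6 * falloff * dust          # uniform scene seen through both defects
step1 = scene / gain                  # fall-off removed, dust shadow remains
step2 = step1 / dust_mask             # dust shadow removed as well
```

The division by `dust_mask` plays the role of the divide blending layer described above: it lifts only the pixels under the dust shadow back to the surrounding level.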

All this requires battling with gamma compensations and image math which is not exactly Lightroom territory, but with Photoshop on the side one can get quite reasonable output.

A darkframe removal plugin would also be welcomed, to improve long exposures where pattern noise and hot pixels start to ruin images.

Bart,

You are probably aware that Raw Therapee has a dark frame removal as well as a flat field option, but just in case you have not seen these options, they are described in the RawTherapee manual. I have tried the latter and it works well.

The flat fielding is basically done by generating a (very) low spatial frequency version of the light distribution, so that what remains is only the vignetting and light fall-off signature. That flat-field 'image' is normalized by dividing the entire image by the brightest pixels (usually found in the center) of the FF image. That gives a data file with the value 1.0 in its center, and slightly lower values towards the corners. By then dividing an actual image pixel by pixel by the corresponding pixels in that normalized data file, the brightness of the center remains unchanged (division by 1.0) and the corners are lifted (division by e.g. 0.8). This assumes linear gamma images to simplify the normalization and division operations. That's why Martin stated that the brightest areas of the flat field should be white (= normalized, preferably by division in linear gamma).

Things get complicated and laborious very fast, but the FF image can be flat-fielded itself, which leaves the dust as the only variable. Normalizing that dust image, also to bright = white, produces an image that can be used as a layer with divide blending mode to remove/lift the brightness of only the dust spots (which are slightly darker than white) on the image that was already flat-fielded earlier. As with the blurred FF image, a curves adjustment layer for gamma adjustment is required.

All this requires battling with gamma compensations and image math which is not exactly Lightroom territory, but with Photoshop on the side one can get quite reasonable output.

Bart,

Thanks for the additional explanation, which was quite helpful. I just noted that Raw Therapee also has a flat field operation which works directly on raw files, so the gamma compensation problem is avoided. It has a filtration option, and one can use a low value to remove dust spots. If one has a low noise image, larger blur amounts are not needed. One can apply the tone curve and other adjustments directly in the program.

One thing I don't know with this facility and with the LR flat field plugin is what exposure is needed for the flat field image. As you mentioned the brightest areas should be white, but this could be adjusted by an exposure compensation for the flat field raw image so that one would not have to bracket or otherwise fiddle with exposure. I don't know if this is done in either program, but a bit of experimentation could determine this. I don't know if a two pass correction is possible in Raw Therapee.

You are probably aware that Raw Therapee has a dark frame removal as well as a flat field option, but just in case you have not seen these options, they are described in the RawTherapee manual. I have tried the latter and it works well.

Hi Bill,

Yes, (of course) I'm aware of the huge range of RawTherapee features, and even though we're in a Lightroom oriented forum I do recommend that readers check it out. Yet, assuming that people like Eric Chan (who is 'only' human) cannot visit all sites on the WWW, I try to concentrate my Adobe related comments on LuLa (also because my time to repeat myself is limited, and because it may benefit others).

Thanks for the additional explanation, which was quite helpful. I just noted that Raw Therapee also has a flat field operation which works directly on raw files, so the gamma compensation problem is avoided. It has a filtration option, and one can use a low value to remove dust spots.

Hi Bill,

Indeed, just what the doctor ordered. A low amount of blur to preserve some (dust) detail yet reduce noise influence, and a larger blur to eliminate the higher spatial frequency signal. Of course there are better ways to remove detail than a simple (Gaussian) blur, which literally meets its limits at the image boundaries (something that e.g. ImageMagick can deal with by activating 'virtual' pixels).

Quote

If one has a low noise image, larger blur amounts are not needed. One can apply the tone curve and other adjustments directly in the program.

One thing I don't know with this facility and with the LR flat field plugin is what exposure is needed for the flat field image.

Exactly, that's why I would recommend shooting one's LCCs or FF images at a low (native) ISO with an ETTR exposure, just short of clipping the tail of the shot noise. The lower the relative noise level (and thus the higher the S/N ratio), the easier and better the postprocessing and number crunching can be.

Quote

As you mentioned the brightest areas should be white, but this could be adjusted by an exposure compensation for the flat field raw image so that one would not have to bracket or otherwise fiddle with exposure.

Correct, the software should normalize the input data, but it 'won't hurt' to maximize the image's S/N ratio either. At lower ISOs, it should help to +EV correct a camera metering of a uniform surface by 2-3 stops before shot noise tail clipping becomes an issue.
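A back-of-the-envelope illustration of that S/N argument, using an idealized shot-noise-only model with an invented photon count:

```python
import math

# Shot-noise-limited SNR of a photon count N is sqrt(N),
# so each +1 EV (doubling of exposure) raises SNR by sqrt(2).
n_metered = 10_000            # hypothetical photon count at the metered exposure
for stops in (0, 1, 2, 3):
    n = n_metered * 2 ** stops
    print(f"+{stops} EV: SNR = {math.sqrt(n):.0f}")
```

With these numbers, +2 EV doubles the SNR (100 to 200), so the small extra effort of an ETTR flat pays off directly in a cleaner division.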

Quote

I don't know if this is done in either program, but a bit of experimentation could determine this. I don't know if a two pass correction is possible in Raw Therapee.

Since RT is not layer oriented, I doubt it, but since I use Capture One for most of my Raw conversions, I haven't tried the RawTherapee Flat Field corrections with moderate blur radius settings. It may be possible to find a good compromise.