Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off 'top hat' filter (perhaps more of a bowler) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur. Another thing that RT lacks is any kind of adaptivity in its RL deconvolution. The question is whether that would add significantly to the processing time. It's on my list of things to look into.
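For what it's worth, the shape difference is easy to play with in a toy setting. Here's a minimal 1-D Richardson-Lucy sketch in numpy (not RT's actual implementation) using a normalized disk/'top hat' kernel as the PSF:

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=100, eps=1e-12):
    """Plain 1-D Richardson-Lucy deconvolution, no regularization."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode='same')
    return estimate

# Disk ("top hat") PSF, closer to a defocus kernel than a Gaussian.
radius = 3
psf = np.ones(2 * radius + 1)
psf /= psf.sum()

# Synthetic scene: two point highlights, blurred with the disk PSF.
scene = np.zeros(64)
scene[20] = 1.0
scene[40] = 0.5
observed = np.convolve(scene, psf, mode='same')

restored = richardson_lucy_1d(observed, psf)
```

With a matched PSF the iteration re-concentrates the smeared highlights back toward spikes; mismatch the kernel shape (e.g. use a Gaussian against disk-blurred data) and the recovery degrades, which is the point about choosing the right PSF model.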

Not in theory. For spatially bounded functions the Fourier transform is an analytic function, which means that if it is known over a certain frequency range then techniques such as analytic continuation can be used to extend the solution to the whole frequency range. However, as I indicated earlier, such resolution-boosting techniques have difficulty in practice due to noise. It has been estimated in a particular case that to succeed in analytic continuation an SNR (amplitude ratio) of 1000 is required.

IIRC, even in the presence of noise, it has been claimed that a twofold to fourfold improvement of the Rayleigh resolution in the restored image over that in the acquired image may be achieved, provided the transfer function of the imaging system is known sufficiently accurately and resolution-boosting analytic continuation techniques are used.
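The Gerchberg-Papoulis iteration is one concrete way this continuation idea is usually implemented: alternately enforce the measured passband spectrum and the known finite spatial support of the object. A noise-free numpy sketch (the sizes and band limit are arbitrary illustrative choices):

```python
import numpy as np

# Gerchberg-Papoulis band-limited extrapolation. We "measure" only a
# low-frequency passband, but also know the object has finite spatial
# support; alternating those two constraints extends the spectrum
# beyond the measured band.
N = 128
support = np.zeros(N, dtype=bool)
support[30:51] = True                         # known object support
x_true = np.zeros(N)
x_true[35], x_true[45] = 1.0, 0.7             # two points inside it

k = np.fft.fftfreq(N, d=1.0 / N)              # integer frequency bins
band = np.abs(k) <= 10                        # measured passband only

X_meas = np.fft.fft(x_true) * band            # what the instrument records
x0 = np.fft.ifft(X_meas).real                 # naive band-limited image

x = x0.copy()
for _ in range(500):
    x = np.where(support, x, 0.0)             # enforce finite support
    X = np.fft.fft(x)
    X = np.where(band, X_meas, X)             # restore measured in-band data
    x = np.fft.ifft(X).real

err0 = np.linalg.norm(x0 - x_true)            # error of naive reconstruction
err = np.linalg.norm(x - x_true)              # error after extrapolation
```

In this noiseless setup the error decreases monotonically and genuine out-of-band content is regenerated; add noise to X_meas and the scheme degrades rapidly, which is the SNR requirement mentioned above.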

I would be surprised if any method can do more than guess at obliterated detail (data in the original beyond the Rayleigh limit). The problem is much akin to upsampling an image; in both cases there is a hard cutoff on frequency content somewhat below Nyquist (in the case of upsampling, I mean the Nyquist of the target resolution). Yes, there are methods for upsampling, such as the algorithm in Genuine Fractals, but they amount to pleasing extrapolations of the image rather than genuinely restored detail. That's not to say the result is not pleasing, and perhaps analytic continuation for super-resolution yields a pleasing result; in fact it sounds a bit similar to the use of fractal scaling to extrapolate image content to higher frequency bands.
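To make the "hard cutoff" point concrete: ideal (sinc) upsampling by spectral zero-padding leaves the new frequency band exactly empty. A small numpy sketch (1-D, factor-of-two case):

```python
import numpy as np

# Ideal sinc upsampling via spectral zero-padding: the enlarged signal
# has a hard cutoff at the ORIGINAL Nyquist; nothing appears in the
# new, higher frequency band.
def fft_upsample(x, factor):
    N = len(x)
    X = np.fft.rfft(x)
    X_pad = np.zeros(factor * N // 2 + 1, dtype=complex)
    X_pad[: len(X)] = X                     # copy original band, pad rest with 0
    return np.fft.irfft(X_pad, n=factor * N) * factor

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
y = fft_upsample(x, 2)
Y = np.fft.rfft(y)                          # spectrum of the upsampled signal
```

Bins above the original Nyquist (index 16 here) stay at zero; fractal or other "creative" upscalers fill that band, but with invented content.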

Well, for lens blur I imagine it could be a bit better to use something more along the lines of a rounded-off 'top hat' filter (perhaps more of a bowler) rather than a Gaussian, since that more accurately approximates the structure of OOF specular highlights, which in turn ought to reflect (no pun intended) the PSF of lens blur. Another thing that RT lacks is any kind of adaptivity in its RL deconvolution; that could mitigate some of the noise amplification if done properly. The question is whether that would add significantly to the processing time. It's on my list of things to look into.

With the chain of blur-upon-blur in images, doesn't it get complicated? Diffraction blur, defocus blur, lens aberrations, motion blur, AA filter blur... most of it changing from point to point in the frame. (I think it was mentioned that multiple blurs tend to become Gaussian?)

Maybe targeting AA filter blur would give a lot of bang for the buck? (Not much help for digital back users, though.)

Not sure the problem is akin to upsampling, as upsampling does not create new information, whereas analytic continuation does create new information.

I'm getting increasingly dubious here about the ability of any method to "create" information if assumptions about the missing pieces are not made. Fractal upsampling software, for instance, is tuned to assume that certain objects are "clean" boundary lines and curves - it will thus "recreate" perfect typography in box shots. In this sense, if assumptions about the origins of the image data are made, e.g. by means of a texture vocabulary, then a method tuned for these assumptions will do well "creating" image data when provided with such images, and presumably fail when the hypotheses are not met. Which also means that we need to define which measure we use to distinguish a good result from a bad one, and I respectfully suggest that photoreconnaissance, astronomy and beauty photography have different metrics.

I don't see what the difference is between a spectral density that is zero beyond the inverse Airy disk radius, and a spectral density that is zero beyond Nyquist. If you are going to extend the spectral density to higher frequencies, in effect that information is being invented. This is different from straight deconvolution, where the function being recovered has been multiplied by some nonzero function, and one recovers the original function by dividing out by the (nonzero) FT of the PSF. To generate information where the spectral density is initially zero, one has to invent a rule for doing so, and the issue then is how closely that rule hews to the properties of some family of 'natural' images.
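A noise-free toy version of that division, with an everywhere-nonzero Gaussian transfer function (illustrative numbers only):

```python
import numpy as np

# Straight deconvolution in the frequency domain: blur with a Gaussian
# whose transfer function is nowhere zero, then recover the original by
# dividing the spectrum by that same transfer function. Noise-free toy
# case; with real noise this division blows up at high frequencies.
N = 128
scene = np.zeros(N)
scene[40], scene[80] = 1.0, 0.5

f = np.fft.fftfreq(N)
sigma = 1.5                                   # blur width in samples
H = np.exp(-2.0 * (np.pi * f * sigma) ** 2)   # Gaussian MTF, strictly > 0

blurred = np.fft.ifft(np.fft.fft(scene) * H).real
restored = np.fft.ifft(np.fft.fft(blurred) / H).real
```

Because H never hits zero, the division is well defined everywhere and the recovery is exact up to roundoff; a diffraction-limited MTF, by contrast, is exactly zero beyond cutoff, and no division can bring that band back.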

Perhaps you mean oversampling and not upsampling.

Quote from: ejmartin

This is different from straight deconvolution, where the function being recovered has been multiplied by some nonzero function, and one recovers the original function by dividing out by the (nonzero) FT of the PSF.

The intent is to use deconvolution to recover the spectrum in the passband of the imaging system and then use analytic continuation to extend it out to those frequencies where it was zero before.

Analytic continuation of what? We're talking about discrete data... so at best we're talking about some assumption about a smooth analytic function that interpolates the discrete data in the region you like (low frequencies) and extrapolates it into the region you don't (high frequencies).

Also, analytic continuation is simply one of many possible assumptions about how to extend the data; the issue is whether it, or any other rule, invents new data that is visually pleasing.

Anyway, I've made my point and I don't want this to hijack the thread.

This line of reasoning started because you said that "zero MTF is going to remain zero", and I pointed out that that is not true in theory, and even in practice in the presence of noise some gains might be achieved (though not as good as the theory says). It appears now you are saying that it is one of the ways of "extrapolating/inventing" the data, thereby negating the position that zero MTF would stay zero.

Quote from: ejmartin

Anyway, I've made my point and I don't want this to hijack the thread.

deja, yes, the Detail slider in CR 6 & LR 3 is a blend of sharpening/deblur methods and if you want the deconv-based method then you crank up the Detail slider (even up to 100 if you want the pure deconv-based method). I do this for most landscapes and images with a lot of texture (rocks, bark, twigs, etc.) and I find it's not bad for that.

Erik, yes it will indeed amplify noise, which does become a little tricky (but not impossible) to differentiate from texture. I have some basic ideas on how to improve this, but for now the best way to treat this is to increase the Luminance slider and apply a bit of Masking (remember you can use the Option/Alt key with the Masking slider to get a visualization of which areas of the image are being masked off). Furthermore, if there are big areas of the image that you simply don't want to sharpen then you can paint those out with a local adjustment brush and a minus Sharpness value.

Bill, unfortunately I can't go into the PSF and other details of the CR 6 / LR 3 sharpen method. Sorry.

That pattern will differ for each AA-filter and camera(type) combination. Some of the variables are the thickness(es) of the crossed filter layers, their (individual and combined) orientation/rotation, and the distance to the microlenses/sensels.

(Un)fortunately, in practice, the PSF of a lens (residual aberrations + diffraction, assuming perfect focus and no camera or subject motion) plus an optical low-pass filter (OLPF) and the sensel mask and spacing will resemble a modified Gaussian rather than just the OLPF's PSF. As with many natural sources of noise, when several are combined, a (somewhat) modified Gaussian approximation can be made.

I have analyzed the PSF of the full optical system (different lenses + OLPF at various apertures + aperture mask of the sensels) of e.g. my 1Ds3 (and the 1Ds2 and 20D before that), and the effect a Raw converter has on the captured data, and have found that a certain combination of multiple Gaussians does a reasonably good job of characterizing the system PSF. The complicating factor is that it thus requires prior knowledge to effectively counteract the effects.

The practical solution is to employ either a quasi-intelligent PSF determination based on the image at hand (or a test image under more controlled circumstances), or a flexible interactive interface (some intelligent choices can be made to simplify things for the average user) that lets the user steer the process (human vision is, e.g., quite good at comparing before/after images, especially when superimposed).

There is a lot of ground to cover before simple tools are available, but threads like these serve to at least increase awareness.

Hi Bart,

One doesn't necessarily need to rely on the "combination effect" to get closer to a Gaussian function. Any reasonable (finite-energy) function can be represented by a linear combination of Gaussians (much like a Fourier expansion). So, for example, even if you isolate a system component (OLPF, etc.) and its response does not look Gaussian, it can still be expanded into a sum of a number of Gaussians.
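As a hypothetical illustration of that expansion, here is a least-squares fit of a top-hat profile (a crude defocus-PSF cross-section) by a few zero-centred Gaussians of fixed widths in numpy; the widths are an arbitrary choice, not measured from any real camera, and only the mixture weights are solved for:

```python
import numpy as np

# Fit a top-hat profile with a sum of fixed-width Gaussians by solving
# a linear least-squares problem for the mixture weights.
n = np.arange(-20, 21).astype(float)
target = (np.abs(n) <= 6).astype(float)     # crude top-hat "PSF" slice

sigmas = [1.0, 2.0, 4.0, 8.0]               # arbitrary illustrative widths
basis = np.stack([np.exp(-0.5 * (n / s) ** 2) for s in sigmas], axis=1)

weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
fit = basis @ weights
rms = np.sqrt(np.mean((fit - target) ** 2))

# For comparison: the best fit with a SINGLE Gaussian (sigma = 4).
w1, *_ = np.linalg.lstsq(basis[:, 2:3], target, rcond=None)
rms1 = np.sqrt(np.mean((basis[:, 2:3] @ w1 - target) ** 2))
```

Even four terms beat any single Gaussian on a decidedly non-Gaussian shape, which is the sense in which a "certain combination of multiple Gaussians" can characterize a system PSF.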