I have supplied a link to the data file. You can read the .dat file with WordPad or a similar simple text editor. You can input those numbers (rounded to 16-bit values, or converted to 8-bit numbers by dividing by 65535, multiplying by 255, and rounding to integers). A small warning: the lower the accuracy, the lower the output quality will be. For convenience I've added a 16-bit Greyscale TIFF (convert to RGB mode if needed). I have turned it into an 11x11 kernel (9x9 plus a black border) because the program you referenced apparently (from the description) requires a zero background level.

I used the floating point data file, not the TIFF.

My algorithm can work on a PSF up to the size of the image itself, in this case 1115x1115... but I'd have to synthesize an Airy disk myself, and haven't learned how to do that yet. So I just padded Bart's 9x9 one with black.

When I said it's "very fast" I meant relative to other deconvolution algorithms. It takes about 2.5 seconds on my Intel E8400 (3.0 GHz) with DDR2 800 to process a 1115x1115 image single-threaded. It probably scales roughly as O(N log N), where N=X*Y is the number of pixels, since most of its time is spent calculating FFTs. It takes 39 seconds to process a 4096x4096 image (yep, that's O(N log N) all right). But with multi-threading it could be much faster, so maybe it could indeed be practical for raw conversion...

But first things first. I need to write some code to make it adapt its frequency cutoff to which frequencies are most garbled by noise, and I need to address the awful edge effects it currently has, and of course make it work with non-square images (EDIT: It now works with non-square images).

Thanks Edmund, and thanks Eric. It is indeed simple, which makes me wonder why seemingly nobody else has thought of it. However I do think it has a lot of room for improvement, for example dealing with a noisy or quantized image — my current solution is to cut off frequencies that are noisy, but that results in ringing artifacts. Maybe I can add an algorithm that fiddles with the noisy frequencies in order to reduce the appearance of ringing (not sure at this point how to go about it, though). And there's of course the issue of edge effects, which I haven't tried to tackle yet.

I intend to post the source code, but it's rather messy right now (the main problem is that it uses raw files instead of TIFFs), so I'd like to clean it up first. Unless you'd really like to play with it right away, in which case I can post it as-is...

Meanwhile, I've improved the algorithm: 1) Do a gradual frequency cutoff instead of a threshold discontinuity; 2) Use the exact floating point kernel for deconvolution, instead of using a kernel-blurred white pixel rounded to 16 bits/channel. The result: 0343_Crop+Diffraction+DFT_division_v3.jpg (1.2 MB)
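The post doesn't say exactly which rolloff was used for the gradual cutoff, but a Wiener-style regularized division is one common way to get this effect. A minimal sketch, where `B` and `H` are assumed to be lists of complex DFT coefficients of the blurred image and of the transfer function:

```python
def smooth_inverse_filter(B, H, eps=1e-3):
    """Wiener-style regularized division: B*conj(H)/(|H|^2 + eps).
    Where |H| >> sqrt(eps) this approaches plain division B/H; as |H| -> 0
    it fades smoothly to zero instead of cutting off at a hard threshold."""
    return [b * h.conjugate() / (abs(h) ** 2 + eps) for b, h in zip(B, H)]
```

With `eps = 0` this reduces to exact DFT division; raising `eps` trades restored high-frequency detail for noise suppression, which is the same trade-off the hard threshold makes, but without the discontinuity.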

I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.

Hi David,

The more the merrier, I'm glad you joined.

Quote

My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB)

David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results, it's that all other algorithms I've tried cannot restore data that has been lost (too low S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result, thus truncating the accuracy to 16-bit at best. Maybe you were performing all subsequent steps while keeping the intermediate results in floating point?

Quote

My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that a single white pixel in the center of a black background has a uniformly gray (flat) DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.

Yes, that's basic restoration by deconvolution in Fourier space.
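In miniature, the quoted procedure looks something like this: a 1-D sketch with a naive O(N²) DFT standing in for FFTW, and `threshold` playing the role of the divisor cutoff (all names here are illustrative, not from David's program).

```python
import cmath

def dft(x):
    # Naive forward DFT; the real program uses FFTW for O(N log N) speed.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def cconv(x, h):
    # Circular convolution: how a PSF blur acts on a periodic signal.
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

def deconvolve(blurred, blurred_impulse, threshold=1e-6):
    # blurred_impulse is the PSF applied to a unit impulse ("white pixel");
    # its DFT is the transfer function, since the impulse's own DFT is flat.
    B = dft(blurred)
    H = dft(blurred_impulse)
    R = [b / h if abs(h) > threshold else 0.0 for b, h in zip(B, H)]
    return [z.real for z in idft(R)]
```

For a PSF whose transfer function never gets close to zero, the recovery is exact up to floating-point error; bins that fall below `threshold` are simply zeroed, which is what produces the ringing discussed later in the thread.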

Quote

This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.

The iterations that were mentioned are part of the Richardson-Lucy algorithm, which is an iterative Bayesian maximum-likelihood method. Regular deconvolution is faster, but poses other challenges. One of the things you can do to improve the edge performance is padding with e.g. mirrored image data, and making a final crop after deconvolution.
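The mirrored-padding idea can be sketched as follows, for an image stored as a list of rows and assuming the pad width is smaller than the image:

```python
def mirror_pad(img, pad):
    # Reflect a sequence at both ends without repeating the edge element.
    def reflect(seq, p):
        return seq[1:p + 1][::-1] + seq + seq[-p - 1:-1][::-1]
    rows = [reflect(row, pad) for row in img]  # pad left/right
    return reflect(rows, pad)                  # pad top/bottom

def crop(img, pad):
    # Undo the padding after deconvolution.
    return [row[pad:-pad] for row in img[pad:-pad]]
```

The point of the reflection is that pixel values vary smoothly across the border, so the DFT's implicit periodicity doesn't introduce an artificial discontinuity there.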

David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results, it's that all other algorithms I've tried cannot restore data that has been lost (too low S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result, thus truncating the accuracy to 16-bit at best. Maybe you were performing all subsequent steps while keeping the intermediate results in floating point?

Well, in a way it is too good to be true. It's VERY sensitive to tiny changes in the convoluted input, and the edges need to be completely intact and fade out to black. But my program is indeed using a 16-bit/ch data file as input (the PNG I posted), which I'm absolutely sure of because: 1) the convolution and deconvolution are done in separate runs, with only files on the hard disk as input and output; 2) I've tried messing with the convoluted input, which thoroughly ruins the deconvoluted output unless the algorithm is made less sensitive by increasing the frequency drop-off threshold; and 3) I've tried pasting your 16-bit/ch convoluted PNG on top of mine, and the deconvolution still looks good but has a bit of noise near the edges, probably because of the difference between the JPEG crop and the original crop.

I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).

I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).

Hi David,

Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:

Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:

Tada:

But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work (EDIT: indeed, it didn't work). I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

BTW, the edges can actually be any color, as long as it's uniform. I can just subtract it out (making some values negative within the image itself), and then add it back in before saving the result.


Pretty close. And here is (a max quality JPEG version of) the original before convolution:

Quote

But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work. I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

As your experiment with the single white pixel shows, each and every pixel contributes to the reconstruction of the entire image, although the amplitude reduces with distance. The ringing at edges has to do with the abrupt discontinuity of the pixels contributing to the image. An image is presumed to have infinite dimensions, which can be approximated by creating a symmetric image, such as in the "Reflected" image padding example at: http://reference.wolfram.com/mathematica/ref/ImagePad.html

This also allows one to low-pass filter the larger Fourier transform with an appropriate function (a Gaussian is a simple one), but the highest spatial frequencies are sacrificed to prevent other artifacts.
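A Gaussian low-pass applied to a list of DFT coefficients, as suggested, could be sketched like this (`sigma`, in frequency bins, is an illustrative parameter):

```python
import math

def gaussian_lowpass(X, sigma):
    # Attenuate DFT coefficients with a Gaussian centered on DC; bin k is
    # mapped to a signed frequency so the taper is symmetric about zero.
    N = len(X)
    def weight(k):
        f = k if k <= N // 2 else k - N
        return math.exp(-0.5 * (f / sigma) ** 2)
    return [x * weight(k) for k, x in enumerate(X)]
```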

It is also important to note that here the PSF is known to high accuracy, whereas in real-life situations we can only approximate it, and noise can spoil the party.

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

I'm vaguely following this - however, I would have thought that if the Fourier transform of the blur function is invertible then it's pretty obvious that you'll get the original back - with some uncertainty in areas with a lot of higher frequency due to noise in the original and computational approximation.

Edmund

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

I think (possibly wrongly) that the main problem is estimating the real world PSF, both at the focal plane and ideally in a space around the focal plane (see Nijboer-Zernike). Inverting known issues is relatively easy in comparison. This is only based on my experience trying to fix astronomical images but I suspect (photographically vs astrophotographically) optical issues are equivalent, tracking errors are similar to camera movement and turbulence errors are similar to subject movement. Depth of field should make things more complex for photos. And of course, one doesn't lack distorted point sources in astro images.

I'm vaguely following this - however, I would have thought that if the Fourier transform of the blur function is invertible then it's pretty obvious that you'll get the original back - with some uncertainty in areas with a lot of higher frequency due to noise in the original and computational approximation.

The frequencies above the cutoff frequency should be zero, so should not be invertible. (1/zero = ∞)

The frequencies above the cutoff frequency should be zero, so should not be invertible. (1/zero = ∞)

Speaking without experience, again, I would assume that is always the case. When you Fourier transform or DFT or whatever, you will truncate at the cutoff or twice the cutoff or whatever - after that it doesn't make sense anymore.

Maybe I should go and try actually doing some computational experiments with Matlab and an image processing book, and refresh my knowledge, rather than spout nonsense.

Edmund

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

Wow, you are absolutely right. It makes a HUGE difference in this case; 9x9 was far too small to nullify the higher-than-cutoff frequencies in the same way the full Airy disk does. I tried a 127x127 kernel and indeed, my algorithm now cannot recover those frequencies at all. Also I notice that global contrast is quite visibly reduced using the 127x127 kernel, which didn't happen with the 9x9 for obvious reasons.

I used the Airy disk formula from Wikipedia, using the libc implementation of the Bessel function, double _j1(double). My result differed slightly from Bart's in the inner 9x9 pixels. Any idea why? Bart, was your kernel actually an Airy disk convolved with an OLPF?
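Generating an Airy-disk kernel from the Wikipedia intensity formula I(x) = (2·J1(x)/x)² looks roughly like this. Python's standard library has no Bessel functions, so a truncated power series stands in for libc's j1(); the 3.8317 first-zero radius of J1 and the `first_zero_px` parameter are assumptions for the sketch, not taken from David's or Bart's code.

```python
import math

def bessel_j1(x):
    # Power series for J1; adequate for the small arguments inside a kernel.
    # (C code would just call j1() from <math.h>.)
    s, term = 0.0, x / 2.0
    for k in range(40):
        s += term
        term *= -(x / 2.0) ** 2 / ((k + 1) * (k + 2))
    return s

def airy_intensity(x):
    # I(x) = (2 J1(x)/x)^2, with the limit I(0) = 1.
    return 1.0 if x == 0.0 else (2.0 * bessel_j1(x) / x) ** 2

def airy_kernel(size, first_zero_px):
    # size x size kernel whose first dark ring lies first_zero_px pixels from
    # the center (J1's first zero is at x = 3.8317); normalized to sum to 1.
    c = (size - 1) / 2.0
    scale = 3.8317 / first_zero_px
    k = [[airy_intensity(math.hypot(ix - c, iy - c) * scale)
          for ix in range(size)] for iy in range(size)]
    total = sum(map(sum, k))
    return [[v / total for v in row] for row in k]
```

A 9x9 kernel truncates the Airy pattern after barely one ring, which may be part of why a 127x127 kernel suppresses the above-cutoff frequencies so much more convincingly.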

So as you can see, my algorithm doesn't do so well with a real Airy disk. It has significant ringing. Do the other algorithms/programs demonstrated earlier in the thread deal with this 127x127-convoluted 16-bit PNG just as well as they dealt with the 9x9-convoluted one?

BTW, Bart, do you have protanomalous or protanopic vision? I notice you always change your links to blue instead of the default red, and I've been doing the same thing because the red is hard for me to tell at a glance from black, against a pale background.