I come from a background in convolving 1-D LTI audio signals, where there is a mature theory for these things. I assume someone has worked out the theory for extending this to non-time/shift-invariant systems?

So, basically, a known, invertible linear system with known output can be inverted. The degradations of a camera lens can be undone to the degree that the entire system behaves like such a system.
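As a minimal sketch of that idea (my own toy 1-D example, not anyone's production pipeline): blur a signal with a known kernel, then undo it by dividing in the frequency domain. The kernel values here are arbitrary, chosen so its frequency response has no zeros and the system really is invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)

# A known 3-tap blur whose frequency response has no zeros,
# so the system is exactly invertible.
psf = np.array([0.6, 0.3, 0.1])

n = len(signal)
H = np.fft.fft(psf, n)                # frequency response of the system
blurred = np.fft.ifft(np.fft.fft(signal) * H).real

# Invert the known system: divide by its known frequency response.
restored = np.fft.ifft(np.fft.fft(blurred) / H).real

print(np.max(np.abs(restored - signal)))   # tiny: exact up to float rounding
```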

This is a handy theoretical model for understanding what is going on, but there are several practical complications:

1. It is impossible to measure the PSF perfectly, and even if you did, it might change ever so slightly two seconds later (a sketch of what a small PSF error does follows this list).

2. To know the entire output, you would need the output beyond the sensor's edges. I guess this can be handled by discarding results within 0.5–1 effective PSF kernel widths of the image edge (cropping), mirroring the image to artificially extend it beyond the edge, or living with artifacts at the edges (the padding sketch after this list shows the mirroring trick).

3. The PSF might be very large. Perhaps deconvolution plus contrast modelling works to give good pictures, but surely there must be cases where the PSF reduces contrast in the right half of the image and not the left half. A global contrast adjustment would not fix that, and a 2 MP PSF would be impractical?

4. The kernel might contain infinitely deep nulls, i.e. exact zeros in its frequency response (zeros located on the unit circle of the Z-transform). These represent a complete loss of information that cannot be brought back. I don't know if this is common or even possible for a PSF (a toy example follows this list).

5. Light itself contains noise, being a stream of discrete photons (shot noise; a short simulation follows this list).

6. Any real camera contains an image sensor with a CFA, sensels of limited density, clipped highlights, and noise. Most also have an OLPF. We don't get to process the light from the lens until it has been further distorted by those things.
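On point 1, here is a rough illustration, reusing the toy 1-D setup from above with a made-up "measured" PSF, of how a small PSF error contaminates the restoration:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)
psf_true = np.array([0.6, 0.3, 0.1])
psf_meas = np.array([0.59, 0.31, 0.10])   # ~1-2% measurement error

n = len(signal)
blurred = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf_true, n)).real
restored = np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(psf_meas, n)).real

# A percent-level PSF error leaves a percent-level residual here, and
# the residual grows wherever the true frequency response is small.
rms = np.sqrt(np.mean((restored - signal) ** 2))
print(rms)   # no longer machine precision
```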
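On point 2, the mirror-extension trick is cheap to do in NumPy (a sketch; the pad width of 8 is made up and would really come from the effective PSF radius):

```python
import numpy as np

img = np.random.default_rng(1).random((64, 64))
pad = 8                                  # ~ one effective PSF radius

# Mirror the image outward so the deconvolution sees plausible data
# beyond the sensor edge, then crop back to the original frame.
padded = np.pad(img, pad, mode="reflect")
# ... deconvolve `padded` here ...
restored = padded[pad:-pad, pad:-pad]
assert restored.shape == img.shape
```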
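On point 4, such zeros are at least easy to construct: a plain box blur (moving average) already has them, which is why those frequencies are unrecoverable no matter how the inverse filter is regularized. A quick check:

```python
import numpy as np

n = 64
box = np.zeros(n)
box[:4] = 0.25                   # 4-tap moving average (box blur)
H = np.fft.fft(box)

# The 4-tap box has exact zeros at k = 16, 32, 48: whatever the scene
# contained at those frequencies was multiplied by zero and is gone.
print(np.abs(H[[16, 32, 48]]))   # ~[0, 0, 0] up to float rounding
```

A defocus disk behaves similarly in 2-D (its transfer function crosses zero at certain spatial frequencies), so this is at least one physically plausible case for a PSF.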
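On point 5, photon arrivals are well modelled as Poisson, so the signal-to-noise ratio only grows with the square root of the photon count (a toy simulation, with a made-up flux):

```python
import numpy as np

rng = np.random.default_rng(2)
flux = 100.0                         # mean photons per pixel (made up)
counts = rng.poisson(flux, size=100_000)

# Shot noise: standard deviation ~ sqrt(mean), so SNR ~ sqrt(flux).
print(counts.mean(), counts.std())   # ~100 and ~10
```

Any inverse filter then amplifies that noise wherever the frequency response it divides by is small.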