Can computer corrections make simple lenses look good?

Modern lenses tend to be large and expensive, with multiple glass elements combining to minimise optical aberrations. But what if we could just use a cheap single-element lens, and remove those aberrations computationally instead? This is the question scientists at the University of British Columbia and the University of Siegen are asking, and they've come up with a way of improving images from a simple single-element lens that gives pretty impressive results.

Image scientists are looking at whether a complex lens can be replaced by a simple one plus a lot of computation.

The method is described in detail in the researchers' paper. It works by characterising the lens's 'point spread function' (PSF) - the way point light sources are blurred by the optics - and how this changes across the frame. Knowing this, in principle it's possible to analyse an image from a simple lens and reconstruct how it should look, through a computational process known as 'deconvolution'.
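To make the idea of deconvolution concrete, here is a minimal sketch in Python/NumPy of classic frequency-domain Wiener deconvolution with a known, spatially uniform PSF. This is a generic textbook formulation, not the researchers' actual algorithm (which handles a spatially varying PSF and uses a cross-channel prior); the Gaussian PSF and the regularisation constant `k` are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(size=15, sigma=2.0):
    """A normalised 2D Gaussian point spread function (purely illustrative)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def psf_to_otf(psf, shape):
    """Zero-pad the PSF to the image size and centre it at the origin before the FFT."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Frequency-domain Wiener deconvolution with regularisation constant k."""
    H = psf_to_otf(psf, blurred.shape)          # optical transfer function
    G = np.fft.fft2(blurred)
    # conj(H) / (|H|^2 + k) behaves like 1/H but stays finite where H is near zero.
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F_hat))

# Usage sketch: simulate a blur, then undo it.
rng = np.random.default_rng(0)
sharp = rng.random((256, 256))
psf = gaussian_psf()
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * psf_to_otf(psf, sharp.shape)))
restored = wiener_deconvolve(blurred, psf)
```

In practice the PSF has to be measured rather than assumed, and the regularisation term is what keeps the division from blowing up at frequencies the lens barely transmits.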

The Point Spread Function diagram for a simple f/4.5 lens of the plano-convex type (i.e. one side curved, the other flat). The centre shows broad discs due to chromatic aberration, while the cross-shapes towards the corners are due to coma and astigmatism. The researchers split the image up into 'tiles', each with its own PSF.
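As a rough illustration of the tiling idea (not the researchers' implementation, which would need overlapping tiles and blending, and estimates its PSFs from calibration data), one can deconvolve each tile with its own PSF and stitch the pieces back together. The sketch below reuses `gaussian_psf`, `wiener_deconvolve` and `blurred` from the previous snippet; the per-tile blur widths are invented, loosely mimicking a blur that worsens towards the corners.

```python
import numpy as np

def deconvolve_tiled(image, psfs, tiles=(4, 4)):
    """Deconvolve each tile with its own PSF (naive: no overlap or feathering)."""
    out = np.zeros_like(image)
    th, tw = image.shape[0] // tiles[0], image.shape[1] // tiles[1]
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            ys, xs = slice(i * th, (i + 1) * th), slice(j * tw, (j + 1) * tw)
            out[ys, xs] = wiener_deconvolve(image[ys, xs], psfs[i][j])
    return out

# Hypothetical per-tile PSFs: broader blur towards the corners.
psfs = [[gaussian_psf(sigma=1.0 + 0.5 * (abs(i - 1.5) + abs(j - 1.5)))
         for j in range(4)] for i in range(4)]
corrected = deconvolve_tiled(blurred, psfs)
```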

This isn't a new idea, but the team of researchers claims to have made some key advances in the field, making their method more robust than those previously suggested. For example, chromatic aberration means that a simple lens can deliver detailed information in one colour channel while significantly blurring the others, so they've decided to use cross-channel information to reconstruct the finest detail possible.
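One crude way to picture the cross-channel idea is to transfer high-frequency detail from the channel the lens renders most sharply into the blurrier channels. This is only a toy stand-in; the paper formulates it as a cross-channel prior inside the deconvolution itself, and the function name and parameters below are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def borrow_detail(blurry_channel, sharp_channel, radius=2.0, amount=1.0):
    """Add the sharp channel's high-frequency detail to a blurrier channel.
    A toy stand-in for a cross-channel prior, not the paper's formulation."""
    detail = sharp_channel - gaussian_filter(sharp_channel, radius)
    return np.clip(blurry_channel + amount * detail, 0.0, 1.0)

# e.g. with an RGB float image `img` in [0, 1] where green is the sharpest channel:
# img[..., 0] = borrow_detail(img[..., 0], img[..., 1])  # red borrows from green
# img[..., 2] = borrow_detail(img[..., 2], img[..., 1])  # blue borrows from green
```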

One serious problem with deconvolution approaches is that they often struggle to reach a single 'best' solution. The group claims to have solved this by optimising each colour channel in turn, rather than trying to deal with them all simultaneously.

This is all very clever, of course, but does it work? The group shows several before and after examples on its website, shot using a simple f/4.5 plano-convex lens on a Canon EOS 40D, and the results are quite impressive.

Original version

Image de-blurred using deconvolution

So will this be coming to a camera near you anytime soon? In this precise form, probably not - the system still has problems understanding areas of the image which are slightly out-of-focus, and won't work with large aperture lenses. And while the images are certainly improved, they're unlikely to satisfy committed pixel peepers. In fact we'd guess it's most likely to be useful in smartphones, where the mechanical simplicity and robustness of simple lenses should be appealing. However, it certainly offers an interesting glimpse of the way results could be improved when shooting with a 'soft' lens.

This is why I was charmed by, but ultimately skeptical of, the Lytro camera. Perhaps I will be proven wrong, but it seems as though you can't create information that isn't there; you may be able to alter the information that is there a great deal, but at the cost of diminishing returns.

That's not necessarily an accurate description. Yes, it's possible to irrecoverably destroy information in a photo. A perfect Gaussian blur or aliasing are examples of this. However, a lot of the perceived loss of detail here doesn't necessarily destroy it, but just obscures it. I don't really see a problem with a mathematical reconstruction of details, as long as it isn't recreating information that never actually existed in the original photo.

An old post but I'll respond anyway. Lytro has lots of extra information, since it uses only a subset of it for any given focus. So a 20MP capture renders a 4MP photo (or something like that). Basically, the extra pixels capture additional light angles, since there is a microlens array in front of the sensor to do just that. Using that information allows rendering of different focus points.

I think Adobe Photoshop had a similar concept, taking a blurred image and sharpening it. It was a hoax where they just reversed what they did to an original sharp image.

In this case, creating an algorithm based on an element with a well-defined correction is cool. I'm not sure what happens if something is so blurred out that a wrong interpretation results in incorrect image processing. This is like an extreme case of lens correction. Since consumers want cheaper products, this might be better suited to something like a bridge camera or P&S.

The idea is interesting, but the execution needs more thought. Without knowing anything about the lens's optical characteristics, I can go into Photoshop and apply these two operations to the original version above: 1. Filter/Sharpen/Smart Sharpen, Lens Blur, 79%, 5.7 pixels ('More Accurate' checked)

2. Filter/Sharpen/Unsharp Mask 47%, 5.7 pixels

I haven't corrected it for chromatic aberration, but examining the green channel alone it's clear that the corrections above produce a better result, with much better separation of tonality in the details and more fine detail. If I had a chromatic aberration tool I am confident I could outdo their effort. Perhaps someone here can apply that on top of these steps and post the final result.

Science marches on. The question is whether we will fall in step and reap the benefits of progress or fall by the wayside, mired in 20th century superstition and anxiety about the limitless power of optical and computer engineering.

Noise reduction *is* an intractable problem. There is a thing called quantum shot noise, which is always going to be present and (in many situations) is the dominant contributor to image noise -- especially for "good" sensors like the modern Nikon and Sony (used by Olympus too) ones. If you're shooting something like animal fur or bird feathers -- fine, low-contrast detail -- no amount of clever wavelet transforms is going to let you figure out what's signal and what's noise.

Deconvolution makes noise worse by a huge amount -- essentially, all it is is heavily customized sharpening.

1. The deconvolution matrix is defined for an unlimited number of monochromatic light sources.

2. The basic resolution of the uncorrected lens is not near the diffraction limit, where wave effects with interference take place. So only pure monochromatic geometrical aberrations could be corrected.

1. Because of the limited number of color-sensitive detectors, I expect an improvement factor no better than the number of color channels. So using a single-element lens instead of a 20-element lens is an illusion.

2. The highest-performing lenses are Gauss-based designs with a fixed aperture of around f/4.0 which are diffraction limited. These are special enlarger lenses for 1:10, but it is possible to calculate them for infinity. These lenses would profit only to a small extent from processing.

More interesting today are algorithms which improve existing lenses without exact knowledge of the lens type or sample.

I assume there are too many variables to calculate a good result. For example, LoCAs are distance-dependent, so unless we know the distance to ALL objects in the photo, we can't calculate back the initial image. Similarly with a non-flat focus plane (field curvature): if the object is blurred by that, one must know the distance to the object for the reverse calculation of the "ideal" image.

From a detail and sharpness perspective, I'd say the USM wins handily, with a much more natural-looking result.

*BUT*

What the deconvolved image gets you is chromatic aberration correction. A lot of the twigs in the image have pretty severe red or blue bands at the edges. The USM doesn't correct that at all, but the deconvo does a pretty good job of fixing it up.

To all of you thinking this is just some "sharpening" process, it's not really the same: 1) with a dedicated calibration pattern, you measure a lens's MTFs/PSF (spatially, for every RGB channel); this basically gives you the mathematical formula that describes the spatial chromatic aberrations for the three RGB wavelengths; 2) you then know how to invert the chromatic aberration process by HEAVY computation.
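In other words, the calibration step amounts to photographing known point sources and recording how each one spreads, per colour channel. Here is a toy sketch of that measurement; the function name, dot position and background handling are assumptions, not the paper's calibration procedure.

```python
import numpy as np

def estimate_psf(calib_image, centre, half=10):
    """Estimate a per-channel PSF from a photo of a single point light source.
    Crop a window around the (known) dot position and normalise each channel.
    A toy version of PSF calibration; real pipelines fit many dots across the frame."""
    cy, cx = centre
    crop = calib_image[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    crop -= np.median(crop, axis=(0, 1))      # crude background subtraction
    crop = np.clip(crop, 0.0, None)
    return crop / crop.sum(axis=(0, 1))       # one normalised PSF per RGB channel

# e.g. psf_rgb = estimate_psf(calibration_shot, centre=(512, 768)), where
# calibration_shot is an (H, W, 3) array containing a small white dot on black.
```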

It is already being done, somewhat, on the MFT system. Open a file in ACR and it has already corrected the distortion and aberrations, but not the vignetting, which is unfortunate. DxO corrects all three, but I much prefer to work with ACR.

Hi DxO Labs, may I ask how many copies of a lens you use to calibrate each lens? There's lots of variation between different copies of the same lens. Also, can I use other RAW converters with no sharpening and just use DxO for lens-specific sharpening? Thanks

Since there can be wide variations between different copies, it would be even better to have a custom calibration based on the specific copy of the lens that you have... I'm not sure how complex this calibration is, or whether there would be a way to let end users do it?

I recall this has been done quite a number of times, based on the similar concept of mathematically calculating and calibrating the image to make pseudo-lens imaging out of a simple lens (not always just a single element). But as before, there are multiple environmental limitations and real-world factors that cannot simply be factored in, so it will always be limited in plenty of ways. I can see this being applied to the likes of smartphones, though, due to the nature of the sensor size vs the lens size / focal length.

It would be more useful in medical and industrial applications, though. Say, high-power X-ray imaging.

I wager it's more than just that. I checked both of those images enlarged; the deblurred image might look fine when you view it full screen and thus heavily downsized, but when you check how well it actually resolves, it shows up pretty badly, with loads of artifacts and a loss of definition / resolving power.

Ultimately one cannot just get something out of thin air. The image details must first be captured before they can be delivered. That won't change no matter how good the software gets.

This is not really impressive as I duplicated this result with a simple "sharpen" command in Gimp. Also, you can't get more information than you put in, meaning that you can't create detail from blur. You can clarify detail that's already there, but I would always prefer to do it optically to the maximum possible extent, and only then use software to try to go even further. Intentionally designing bad lenses and relying on software to make them mediocre is not a good idea.

"Also, you can't get more information than you put in, meaning that you can't create detail from blur." Google 'adaptive optics' and you will understand why this statement is wrong. The additional info comes from knowing how your lens behaves.

The fatal flaw in your sharpening "duplication" of the results is that you only applied a uniform amount of sharpening to all pixels. That's not where this technology is going. Your sharpening cannot correct chromatic aberration or intentionally compensate for compromises in the design of the lens to save money (as is done in several cameras' firmware already). Applying a blind, uniform sharpening value is to miss the point.

Sorry, but in articles of this type, a reply posting "I got the same thing by sharpening in GIMP" usually discredits the post right away.

You need to read up on the theory here, as it's pretty in-depth and complicated. Even though the original image is blurred, ALL of the information is still contained within the image. Recovering this information is the problem. The best we can do is make approximations. Generally speaking, the better the approximation, the more computational power is required. You can think of the sharpening filter as a very crude approximation.

A smartphone taking good pictures is certainly interesting for many users, but long battery life is much more important IMHO. Using (huge?) computational power to correct lens defects implies a trade-off between lens simplicity/robustness and battery life...

One also has to consider that for the few (or even dozens of) pictures people take with their phones, a few extra milliseconds of processing per image is insignificant. You'll "waste" far more power firing up the radios and sending a couple of selected images up to Facebook.

I've been studying PSFs for several years now. The biggest problem with deconvolution is that the PSFs are not really convolved in the first place -- especially for out-of-focus regions of the image. Still, there's lots one can do with better computational methods; I use genetic algorithms for this sort of thing.

What do you mean by the PSFs "not really convolved"? Does it mean that the idealized model of a (slowly varying) linear convolution does not describe the errors contributed by the lens? If not, what kind of physical process is it?

If you had access to highly detailed info about the lens (e.g. sweep monochromatic light from 400-800nm on a target print of impulses (or wavelets) distributed across the frame and sweep this target from close focus limit towards infinity), how much better could things be? Is it fundamentally a problem of gathering enough data, or is it about finding the right algorithms to apply?

Fundamentally it is that deconvolution, although computationally cheap, isn't quite the right algorithm.

Three major issues. (1) Rays coming in through different portions of the lens are actually different viewpoints; I use this for single-lens stereo capture, but it implies that out-of-focus PSFs are subject to occlusion (see Figure 5 in http://aggregate.org/DIT/SPIEEI2012/spieei2012paper.pdf ). (2) Standard frequency-domain deconvolution algorithms cannot handle arbitrary PSFs. (3) Modeling as convolution essentially assumes positive summation, but the wave nature of light means summation is signed (and negative results clipped) for small PSFs.

My approach has been largely attempting to directly search for the object distance and RGB energy in each pixel's view, attempting to match the actual image when a more accurate model of image construction is applied. So far, it is still scary expensive computationally....

If the lens designers know that a given lens correction is available, they might be able to tailor-make a PSF that is easy to correct (no deep zeros, Gaussian-like?), rather than a PSF that is as small as possible.

Perhaps that would allow better system performance for a given cost/size/weight?

Easy-to-recognize PSFs are commonly used in research -- they usually get called "coded apertures" and are very non-smooth patterns. Gaussian PSFs are not easy to make and, although deconvolution would be easy with them, they would have huge problems with noise. Incidentally, I've been using anaglyph-like color-coding of the aperture in my research over the past few years....

Based on the particle theory of light one could reverse the distortion. However, this is not the whole story. Lens imperfections could greatly increase the processing required. Also, the wave nature of light will result in interference effects that are less easy to remove, as there is randomness involved. Still, with the large number of pixels on even very small sensors, much can be done using statistical analysis in addition to structural decomposition. Where the target resolution is much lower than the capture resolution - as in phone cameras - one can trade resolution for sharpness and the result will be very good, especially if multiple exposures at millisecond intervals can be used to remove motion blur. For more serious photography I would certainly favour less glass, but the improvements will probably come from curved sensors matched to the lens for primes (a la Ricoh) and, possibly, flexible lenses for zooms. I am sure we will see even more software correction in future.

I think this will be more useful to the point and shoot crowd rather than DSLR/MILC owners. With P&S's, compactness is a necessity, and since their lenses are small anyway (which means huge DOF), the algorithm won't have any problems dealing with out of focus elements.

The key is that the "just a blur" thingy can be (more or less) accurately described as a function of the original, sharp image. Find that function, find a suitable inverse, and you can remove some blur.

I'm a software engineer and into computer science, and I never trust software, computers or programmers. Windows, Chrome and all your software crash quite often; think about it. The last thing I want is bugs in my lens correction.

He's not being entirely naive. Aircraft software (and financial software such as BACS) is written using extremely rigorous processes, as lives (or large amounts of money) are at stake. This high-integrity software runs under OSs written to similarly rigorous standards - I certainly wouldn't get on an aircraft running Windows or Apple OSs, but of course such a system wouldn't stand any chance of being accepted by any aviation authority in the first place. Leaving aside their high failure rates, their complexity makes these OSs unqualifiable.

Having said that, camera software is written to rigorous standards. Imagine the cost to a manufacturer if they put out a firmware update that locked up your camera such that it couldn't be subsequently corrected without a return to the manufacturer.

If this worked on an image where every pixel has all color channels plus depth information (to distinguish aberrations in the focal plane from those outside it), digital corrections should give fantastic results, given enough computational power.

The computers in our cameras already make corrections. The discussion is only about what corrections you want to hand over to the image processing, and to what degree.

With everyone clamoring for wider aperture optics (and make it cheap!), there is intense pressure on manufacturers to increase the software corrections ...

With the massive pixel densities available in modern sensors, and the heavy noise-reduction processing that is already going on even at low ISO, lens corrections can be done pretty much for free, with little additional loss in image quality.

The only downside is any "character" of the lens is airbrushed out, but that's only something traditionalists like myself need to worry about.

A blurry lens causes irretrievable damage to the information that was gathered. This loss manifests itself as a bunch of noise in the corrected image. All this software does is let me trade off between sharpness and noise--but the overall SNR is unaffected. The lens in this article looks like it causes at least a couple stops of damage to the image judging by how noisy the corrected image is. There's no free lunch.

I can see cutting corners on CA or geometric distortion since fixing those doesn't really change the noise levels. But sharpness is not something to cut corners on IMO.

There is absolutely nothing "irretrievable" about that damage as long as (a) pixels are not saturated, (b) dynamic range of the sensor is sufficient, and (c) there is a precise enough mathematical model of how exactly the light that entered the lens was distributed.

A hologram is, in a way, an ultimate "blur" and yet it can easily be used to reconstruct "3D" image using purely optical means... Throw in enough math and number crunching power, and almost any "blur" that is a superposition of the source light can be reconstructed, let alone a very simple one produced by a single lens.

Mathematically, if you know the exact transfer function that caused the blur and you apply the inverse transform, you will arrive back at the original image (i.e. deconvolution, which is nothing new and nothing magical).

That is the reason they are seeking to understand the lens's point spread function. Understanding a function's impulse response tells you a lot about how the function behaves.

Blur doesn't mean that information is lost - it only means that information is spread out. In a purely academic example, if you apply a simple Gaussian or spatial-averaging blur to an image, you can get the original image back from the blurry image just by applying the blur function's inverse.

Of course, a lens' blur characteristics are more complex than the simple academic examples, but it doesn't mean the information is irrecoverably lost.

The transfer function only applies to the image itself; the sensor noise is added after the lens, so it gets boosted by the deconvolution. In other words, the lens damages the dynamic range (or SNR) of the image irretrievably. Obviously if you have large image details, or details with large contrast (black-to-white transitions), then the deconvolution makes them more visible (along with the noise). But finer details or lower-contrast details (e.g. textures) are blurred below the noise floor, and the deconvolution does nothing to recover them.
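That trade-off is easy to reproduce numerically: blur a sparse "fine detail" signal, add noise after the blur, then deconvolve with different amounts of regularisation. Weak regularisation amplifies the noise, strong regularisation suppresses both the noise and the fine detail. Below is a 1D toy sketch; the signal, blur width and noise level are arbitrary assumptions, not measurements from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
signal = (rng.random(n) > 0.99).astype(float)            # sparse "fine detail"
kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0 ** 2)
kernel /= kernel.sum()
H = np.fft.fft(np.roll(kernel, -n // 2))                  # centre the kernel at the origin

blurred = np.real(np.fft.ifft(np.fft.fft(signal) * H))
noisy = blurred + rng.normal(0, 0.01, n)                  # sensor noise added after the lens

def wiener_1d(g, H, k):
    """Wiener deconvolution of a 1D signal with regularisation constant k."""
    return np.real(np.fft.ifft(np.conj(H) * np.fft.fft(g) / (np.abs(H) ** 2 + k)))

for k in (1e-6, 1e-3, 1e-1):                              # weak vs strong regularisation
    restored = wiener_1d(noisy, H, k)
    err = np.sqrt(np.mean((restored - signal) ** 2))
    print(f"k={k:g}  RMS error={err:.3f}")
```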

Vadims, if item (b) is true (which it clearly is not for the example in the article), then I'd argue you might as well stop the lens down and achieve optical sharpness in the first place. The result will be about the same and you avoid power hungry post-processing (and the cheap lens can probably be even cheaper without the larger aperture setting.)

I wonder if they can eventually make this so that a moderately simple lens could look awesome: with the right design, optimized for software correction in post, however that may be achieved, and just simple spherical elements or something.

> there are details in the "improved" version that do not exist in the "original"

Even though the "details" do not exist in the original version, the *information* does. What is needed to reconstruct the image is that information from the "original" image, plus information of how exactly the lens distorted the light. The latter can be thought of as a very, very extensive and detailed version of what we know today as "lens profile".

Modern digital images have more data than you can visually see. You are not looking at the actual data but the interpretation of that data.

Take a raw file and run it through different raw converters to see what I mean. You get different results. A sufficiently good sensor will typically capture more than you can see.

Have you ever shown an amateur how to recover highlight and shadow info? In the case of good sensors, say a Sony APS-C or FF, what visually appears white or black actually has a ton of detail that can be recovered. You just can't see it until you manually pull out those details in software. The data is there.

As someone above said, for example, if you do a Gaussian blur, you haven't destroyed that data. All you've done is rearranged it. It's possible to mathematically reverse it completely, provided you haven't saved it out in a lossy format.

Having worked a bit on lens correction, the trouble you have with the PSF is the fact that it oscillates and goes through multiple zero crossings. For those who are interested, the PSF is essentially the lens's response to a point impulse. An ideal lens would have a PSF given by the Airy disk (the diffraction-limited case). That is, even a lens free of all aberration will image a point impulse as an Airy disk.

If you've seen the PSFs caused by many aberrations, you will know they can be incredibly complex. In order to undo the aberration, you essentially have to divide by the FT of the PSF, and this leads to infinities. This is what causes most of the problems in deconvolution. Unless you have specially designed apertures, such as coded-aperture lenses for computational photography where you can tailor the PSF, this is an almost impossible issue to avoid, as far as I know. You have to make simplifying approximations, which means you can never fully undo the aberration.

I understand the divide-by-zero problem... but when the correction is done by a physical lens system there is no such issue :-D Is it that we do not have a good model of the PSF, or is more information available to the physical system (e.g. the distance to the object) than to the software solution?

Deconvolution (especially in the digital realm) has always had a divide-by-zero (or divide-by-almost-zero) problem. Actually, even dividing by almost zero creates big errors (i.e. huge noise and ringing artefacts); most research is aimed exactly at finding better algorithms to avoid these artefacts. In the current case the authors rely partially on CA: different color planes have different PSFs, and where division by zero occurs in one plane, a decent result can still be obtained from another. Or that is how I understood their main idea; I may be mistaken, of course. A physically corrected lens does not perform deconvolution, so no divide-by-zero problem happens.
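The per-channel point can be illustrated with two toy blur kernels of different sizes: each one's transfer function has near-zeros, but at different spatial frequencies, so a frequency that is wiped out in one colour plane may still survive in another. The box kernels below are a stand-in for real defocus PSFs, not the lens in the article.

```python
import numpy as np

n = 512
freqs = np.fft.rfftfreq(n)

def box_otf(width):
    """Transfer function of a 1D box blur of the given width (a toy defocus stand-in)."""
    k = np.zeros(n)
    k[:width] = 1.0 / width
    k = np.roll(k, -(width // 2))            # centre the kernel at the origin
    return np.fft.rfft(k)

H_red, H_green = box_otf(9), box_otf(13)     # per-channel blurs of different sizes

# Frequencies where one channel's OTF nearly vanishes but the other's does not:
dead_in_red = (np.abs(H_red) < 0.02) & (np.abs(H_green) > 0.1)
print(freqs[dead_in_red])
```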

The corrected version looks a hell of a lot better to me. Yeah, there's no substituting a better lens, but when that costs you over $1000 (you need a camera to use that lens also), I'd say this is a pretty good idea for phones and inexpensive compacts.

new boyz: If you average pixels, all you are doing is spreading the impulse response even further, so you'd need appropriately more aggressive deconvolution (with more unstable infinities) to get back to the sharper image. Even worse, if you use adaptive averaging (which is basically the way image noise reduction works in all cameras / PC software, because that looks the least bad), you don't even _have_ a point spread function anymore and the whole theory falls flat. All in all, the sad fact is that you cannot add information by removing more of it.

Maybe deconvolution is an unstable operator, but like it or not, that is what a physical lens system performs just fine. Why do you think it cannot be done programmatically: is it a data problem, or just not having the right algorithm yet?

How about still using a few lens elements, say 2-5, combined with this? Could it produce good images while still lowering the element count and reducing the cost and complexity?

When they tried this with the Hubble Space Telescope (correction of optical defects via computer) they got only marginal results. It took a new set of corrective optics to do the job. You can't "create" results in resolution using software, you can only approximate.

Actually you can "create" results in resolution using software, if you know exactly how the image was blurred. However imprecise knowledge of the blurring operator and noise in the image limit the amount you can recover using this technique.

That said, it is good not to be too impressed with the research done at the University of British Columbia and Siegen. The bad side effect of using deconvolution to deblur the image produced by an imperfect lens is the usual increase in noise. Anyone who has used Photoshop knows that noise can become a serious problem as you try to sharpen an image more. Indeed, sharpening can be considered a particular form of deconvolution, and as a general rule, any deconvolution increases noise and other artifacts.
