Forty years ago I was the only doc in a bioengineering lab in a famous technical university. We had our own computer, which took up a giant room, and attached to it a rare device in those days: a scanner for converting transparencies to computer-readable form. It was really a cathode ray tube with a sensor on the other side of the transparency. The CRT scanned, the sensor sensed, and an analog-to-digital converter converted. About that time I got interested, along with an engineering colleague, in taking two-dimensional x-ray images, which were really projections or shadows of the interior of an object, and using a bunch of them taken at different angles to let the computer disentangle the spatial relationships. When my colleague worked out the mathematics, using Fourier analysis, we found the methods had already been used in x-ray and radio astronomy by Bracewell. But we had a new use with some new wrinkles.
I went to Woolworth’s dime store (of blessed memory), bought a 99-cent plastic lazy susan, and stuck a vertical rod in the middle of it, to which I strapped the two bones of the lower leg, the tibia and the fibula (disarticulated from a skeleton used to teach anatomy). I brought this contraption to the radiology department of a famous medical school and took x-rays of it, rotating the lazy susan in 5-degree increments. I then took all these x-rays back to the lab, digitized them with our scanner, and with an algorithm whose heart was a Fast Fourier Transform (written in Fortran), we reconstructed a slice through the two bones. When my colleague presented the picture at an international conference in Japan, it was probably the first CAT scan ever shown. We didn’t pursue this commercially, which is why, in my old age, I am writing a blog in freezing cold weather instead of lounging somewhere at my beachfront villa.
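For the curious: the mathematical trick at the heart of that kind of reconstruction is the projection-slice theorem, which says the Fourier transform of a projection (a shadow) of an object equals one slice through the object's full 2-D Fourier transform. Take enough projections at different angles and you fill in the whole spectrum, which can then be inverted back into an image. Here is a minimal numpy sketch of the theorem (a toy illustration, not our original Fortran code):

```python
import numpy as np

# Toy "object": a 2-D density map standing in for a slice through the leg bones.
rng = np.random.default_rng(0)
obj = rng.random((64, 64))

# An x-ray taken straight down is just a projection: sum the density along one axis.
projection = obj.sum(axis=0)

# Projection-slice theorem: the 1-D FFT of that projection equals the central
# row (the slice through the origin) of the object's 2-D FFT. Rotating the
# object gives other slices, so many projections at different angles fill in
# the full 2-D spectrum, which an inverse FFT turns back into the image.
slice_from_projection = np.fft.fft(projection)
central_slice = np.fft.fft2(obj)[0, :]

print(np.allclose(slice_from_projection, central_slice))  # True
```

Real CT reconstruction adds interpolation between the radial slices and a filtering step, but this identity is the engine underneath.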

But I was reminded of this episode when I spotted a news story in New Scientist about making blurry pictures sharp by combining a bunch of low-res shots:

Several low-resolution images can be combined into one super-sharp image more effectively thanks to new algorithms

Software that creates high-quality images by combining several blurry ones could lead to safer X-rays and digital cameras that automatically sharpen up snapshots.

The software uses a number of low-resolution images to produce a single high resolution one — a technique known as “super-resolution”. Forensic scientists and astronomers already use super-resolution to produce clearer images from blurred security and astronomical images.

US researchers have now adapted the approach to produce usable X-ray images with less radiation. Meanwhile researchers in the UK have improved the algorithms behind the technique, perhaps paving the way for super-resolution consumer cameras. (New Scientist)

[snip]

Super resolution involves taking several low-resolution images by shifting a camera slightly each time. Each resulting image is subtly different. These images are then automatically aligned by comparing them all, two at a time.

By analysing the way the same features are blurred differently in each low-quality image, it is possible to mathematically reverse the blurring effect and make a higher-resolution image.
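The principle described in the excerpt can be seen in a toy 1-D example: several low-resolution samplings of the same scene, each shifted by a sub-pixel amount, jointly contain the information of one high-resolution image. This sketch assumes the shifts are known exactly and ignores blur, which is exactly the hard part the real algorithms solve:

```python
import numpy as np

# Toy 1-D version of super-resolution: a detailed scene, sampled at low
# resolution several times with a sub-pixel shift between exposures.
scene = np.sin(np.linspace(0, 4 * np.pi, 16))

# Each "camera" sees only every 4th sample, at a different offset.
shots = [scene[offset::4] for offset in range(4)]

# Once the shifts are known (here they are known exactly), the low-res
# samples interleave onto a fine grid, recovering the full-resolution scene.
recovered = np.empty_like(scene)
for offset, shot in enumerate(shots):
    recovered[offset::4] = shot

print(np.allclose(recovered, scene))  # True
```

In a real camera the shifts are unknown and fractional and the optics blur each shot, so the registration and deblurring steps the article mentions do the heavy lifting; the toy just shows why multiple shifted exposures carry more resolution than any one of them.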

The advance here is the use of new fast algorithms to do the huge amount of number crunching needed to sharpen images in any reasonable time. Comparing and precisely registering 15 images two at a time requires 105 pairwise registrations, but the new techniques accomplish this in one pass instead of 105. The new methods are fast enough that you could contemplate using them in consumer devices like digital cameras, or in radiology departments to take sharper x-rays with less radiation. X-ray sharpness usually comes from using more x-rays, but with this method you can use about 30% less dose even though you have to take more pictures. There are other methods being tried to accomplish the same end, but the use of multiple x-rays to produce something new and of clinical value, derived from similar techniques in astronomy, bore an uncanny resemblance to my earlier experience.
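Where does 105 come from? Registering n images two at a time means comparing every pair, which is n-choose-2 comparisons, and that count grows quadratically:

```python
from math import comb

# Registering n images two at a time means comparing every pair:
# n-choose-2 comparisons. For the 15 images in the article that is 105.
n = 15
pairwise_registrations = comb(n, 2)
print(pairwise_registrations)  # 105

# A single-pass method, as described in the article, scales with n itself
# rather than with n * (n - 1) / 2, which is what makes it practical for
# consumer cameras.
```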

Here is a neat YouTube video that shows how 60 blurry 48 × 48 pixel photos of the moon were sharpened to produce a single high-resolution 480 × 480 pixel image:

I feel relatively confident these guys are going to exploit this commercially via patenting or licensing. Can’t blame them. Of course with global warming on the way, there’s going to be a lot of new beaches soon. Maybe I’ll get my wish of having beachfront property without even having to move!

The human eye is naturally fairly low-res; there aren’t that many light receptors, not even in the macula. But the eye jiggles, essentially taking many low-res images, and the brain uses that data to build a single high-res image.