
Conventional imaging systems — like digital cameras — use a lens to map an object onto a detector plane. They acquire information as pixels and perform software compression after the fact. As pixel densities climb ever higher, we will soon reach a point where this strategy is just not going to cut it anymore. Fortunately, there are much better ways to extract the information present in a visual scene. One of the more exotic among them is to use metamaterials to perform hardware compression during image acquisition.

Researchers at Duke University have built a device that illuminates objects with K-band microwave radiation (18.5 to 25 GHz) and images the reflections in a single pass. As with many things metamaterial, it only works at microwave frequencies, but it has the redeeming merit of being able to see through materials such as clothing, wood, rain, and dust, which means it could provide an alternative to expensive lidar systems. The device consists of thousands of tiny apertures arranged in a strip 40 cm long, and it records images in 2D — one dimension running across the strip and the other for depth.

Efforts to circumvent runaway image resolutions have led to the field of compressive sensing. If a little extra care is taken in how an image is sampled, far fewer measurements are actually needed, and the original image can be reconstructed later from a much smaller data set. A refinement of the technique, known as the single-pixel camera, can replace the 10 million measurements of a 10-megapixel camera with just the 10,000 most important ones.
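
To make that concrete, here is a minimal compressive-sensing sketch in Python (a toy illustration of the general idea, not the Duke group's code; the signal size, sparsity level, and the choice of orthogonal matching pursuit for recovery are all my own assumptions). A sparse signal of length 1,000 is recovered from only 100 random measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 1000, 100, 5            # signal length, number of measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse test signal

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = A @ x                                     # 100 measurements instead of 1,000 samples

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then least-squares refit on the selected support.
support = []
residual = y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Real photographs are not sparse in the pixel basis, but they typically are in a wavelet or similar basis, which is what makes a trade on the order of 10 million pixels for 10,000 measurements plausible.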

A single-pixel camera acquires images through masks that only allow parts of the view to reach the detector at any one time. A reconstruction algorithm is then used to recreate the original image with an arbitrarily small loss of information. The more you know about the kinds of images you will be acquiring, the better the compression can be. If you are photographing stars in the night sky, for example, you may not be interested in accurately capturing the particulars of flying elephants. The one drawback of the single-pixel camera is that it is comparatively slow. The power of the metamaterial technique is that the entire image can be collected in a single sweep through the illumination frequency range.
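
Here is a toy sketch of that measurement model (again my own illustration, with invented names and sizes): each mask reduces the whole scene to a single number, and the stack of masks plays the role of the measurement matrix in the sketch above. In the metamaterial aperture, each illumination frequency effectively selects a different mask, which is why one frequency sweep can replace the single-pixel camera's slow sequence of mask changes:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((32, 32))    # stand-in for the scene being imaged (1,024 pixels)

def single_pixel_measure(scene, num_masks):
    # One 0/1 mask per measurement; each reading is the total light
    # passing through that mask, i.e. one scalar per mask.
    masks = rng.integers(0, 2, size=(num_masks, scene.size)).astype(float)
    readings = masks @ scene.ravel()
    return masks, readings

masks, readings = single_pixel_measure(scene, num_masks=200)  # 200 readings vs. 1,024 pixels
# Reconstruction proceeds as in the sketch above, with `masks` as the
# measurement matrix and a sparsifying basis (e.g. wavelets) for the scene.
```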

For the present, we still enjoy the luxury of light-sensitive silicon fabricated for next to nothing by semiconductor processes, and we tolerate its liberal data production. Even patches of blue sky in an outdoor image remain tough to compress in software because, while largely similar, they still contain enough variation to contribute to the fullness of the scene. For now, the real market for this technology will remain in the non-visible wavelengths — at least until our computers begin to seize under the weight of the pixel.

This is a very interesting technique. It reminds me of a potential move to vector-based graphics processing in computing, a move some believe to be inevitable as resolutions increase. Indeed, I’m inclined to think that, at some point, much of image rendering may be pushed to the display device’s own dedicated processor while formulas for what to display are calculated and sent from the CPU/GPU. Such an approach would put the burden of pixel resolution on the display device rather than the connected system, and developers would then only need to worry about aspect ratio.

In such a world, essentially all images would just be formulas for what is to be displayed – the only limit on resolution would be how precisely the image is calculated, rather like classic mathematical objects such as the Mandelbrot set, which has as much detail as one is willing to compute.
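
As a quick illustration of the commenter's point, here is a tiny escape-time Mandelbrot sketch (a toy example, not anything from the article): the "image" is just a formula, and resolution is simply a parameter chosen at render time:

```python
import numpy as np

def mandelbrot(width, height, max_iter=50):
    # Sample the complex plane at whatever resolution is requested.
    re = np.linspace(-2.0, 1.0, width)
    im = np.linspace(-1.5, 1.5, height)
    c = re[None, :] + 1j * im[:, None]
    z = np.zeros_like(c)
    counts = np.zeros(c.shape, dtype=int)
    for i in range(max_iter):
        active = np.abs(z) <= 2          # points that have not yet escaped
        z[active] = z[active] ** 2 + c[active]
        counts[active] = i
    return counts

# Same formula, any resolution: detail scales with how much you compute.
preview = mandelbrot(80, 60)
poster = mandelbrot(1920, 1080, max_iter=200)
```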

jhewitt123

Joel, thanks for the correction, and again for the insightful comment. The original article is very vague on just how the two dimensions were imaged.