Spotlight summary:
3D-camera technology’s rise in popularity over the past several years suggests that consumers and researchers alike desire something beyond “conventional” photographs. Recent commercially successful 3D cameras include the Lytro and Raytrix systems, both of which rely on a light field architecture to capture multiple perspectives of a scene within a single snapshot. These perspectives are then combined through digital post-processing to achieve a number of useful effects. For example, 3D light field cameras can digitally refocus a scene to different depths, generate depth maps, create stereoscopic images for glasses-based 3D viewing, and even provide improved object segmentation.
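The digital refocusing mentioned above can be pictured as a shift-and-sum operation over the captured perspectives. The sketch below is purely illustrative and not from the article: the function name `refocus`, the assumption that the raw capture has already been resampled into registered sub-aperture views with known per-view baseline offsets, and the focus parameter `alpha` are all hypothetical.

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-sum refocusing over a stack of sub-aperture views.

    views   : (K, H, W) array of K grayscale perspective views
              (hypothetical input; a real light field camera would
              produce these by resampling its raw sensor data)
    offsets : (K, 2) array of each view's (dy, dx) baseline offset
              from the central view, in pixels
    alpha   : scalar selecting the synthetic focal plane; alpha = 0
              leaves the views unshifted
    """
    acc = np.zeros_like(views[0], dtype=float)
    for view, (dy, dx) in zip(views, offsets):
        # Shift each view proportionally to its baseline, then average;
        # scene points at the chosen depth align and come into focus.
        shift = (int(round(alpha * dy)), int(round(alpha * dx)))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(views)
```

Points at the selected depth add coherently across views, while points at other depths are smeared out, which is why varying `alpha` after capture sweeps the focal plane through the scene.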

In their recent article, Latorre-Carmona et al. extend 3D image capture technology into the multispectral realm, offering the applications described above for each color in the rainbow. To achieve this, their proposed setup uses a liquid crystal tunable filter (LCTF) placed in the aperture of a particular 3D light field camera design, known as an integral imaging system. The modified integral imaging camera then takes a sequence of 3D images, each containing the same scene filtered through a particular narrowband spectral window set by the LCTF, and fuses the images together into one large dataset. By appending a spectral fingerprint to each point in the scene’s 3D reconstruction, the camera’s output effectively contains four dimensions of highly descriptive, information-rich data. Put another way, Latorre-Carmona et al.’s camera can estimate a precise depth and spectral signature for every pixel in a conventional 2D photograph. Such a capability opens the door for a number of computational procedures – like scene segmentation, classification, and object tracking – to progress into new territory.
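The fused four-dimensional output described above can be pictured as a per-pixel record of position, depth, and spectral fingerprint. The minimal sketch below makes that concrete; it assumes the narrowband reconstructions have already been registered and a depth map already estimated, and the function name `fuse_spectral_depth` and both inputs are hypothetical, not from the article.

```python
import numpy as np

def fuse_spectral_depth(band_images, depth_map):
    """Attach a spectral fingerprint to each reconstructed scene point.

    band_images : (B, H, W) array, one registered reconstruction per
                  LCTF narrowband window (assumed preprocessed/aligned)
    depth_map   : (H, W) array of estimated depth per pixel

    Returns an (H*W, 3 + B) array whose rows are (x, y, z, s_1, ..., s_B):
    pixel coordinates, depth, and the B-band spectral signature.
    """
    B, H, W = band_images.shape
    ys, xs = np.mgrid[0:H, 0:W]
    spectra = band_images.reshape(B, -1).T  # (H*W, B) signatures
    coords = np.stack([xs.ravel(), ys.ravel(), depth_map.ravel()], axis=1)
    return np.hstack([coords, spectra])
```

Each row of the result is one scene point carrying both its 3D location and its spectrum, which is the data structure that downstream segmentation, classification, and tracking procedures would consume.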

From distinguishing between healthy and diseased tissue to following a tank beneath a canopy, multispectral 2D imaging has already found widespread use in a host of computer vision applications. Data describing both the multispectral and depth content of a scene can therefore only improve the accuracy of such current image post-processing methods. At the same time, the multispectral integral imager can encourage the creation of new and innovative classification tasks. Cited potential areas of application include underwater 3D visualization, melanoma detection, remote sensing pattern recognition, and assistance in photon-starved conditions, such as at night or in fog. Additional examples may include searching for blood vessels within tissue, identifying cavities and other dental defects, classifying canopy and vegetation differences from the air, or even distinguishing body gestures for improved human-computer interaction devices. Given this wide range of applications, the multispectral integral imager concept could clearly be adopted and tweaked to help nearly any niche computational imaging system achieve its detection goals.

While several others have explored the possibility of jointly obtaining the spectral and depth content of a scene, the camera proposed by Latorre-Carmona et al. is unique in that it requires no active scene illumination and offers a particularly compact design. If a portable system that can operate under any type of illumination is realized, many of the candidate applications listed above stand to benefit. As development continues, extending the camera to also capture polarization and dynamic range content would move the multispectral integral imager further from what is typically considered a “conventional” camera, and closer to a device that simply extracts as much useful optical information from a scene as possible. Viewed this way, the proposed platform promotes the idea that cameras designed to capture images for computers, rather than the human eye, are what we really need in an increasingly digital world.