Address image capture, processing challenges in portables, Part 3

Part 1 of this article series identified the primary drivers for the ongoing development of camera modules. Part 2 discussed wafer-level camera architectures. This third part focuses on software-enhanced optics.

The advent of wafer-scale manufacturing techniques has made it possible to produce extremely compact camera modules at very low cost. For reasons of physics, a small camera module will have inferior performance to a larger one, but the deficiencies can be corrected by exploiting the novel lens structures that wafer-scale manufacturing makes possible. Nevertheless, this merely preserves the status quo in terms of image quality and does nothing to enhance the user experience.

Designers of higher-resolution camera phones have, until recently, been able to sell to consumers solely on the headline pixel count. With the proliferation of camera phones (more than 80 per cent of handset models now include one or more cameras), consumers have come to realise that picture quality and pixel count are not strongly connected. Indeed, the stunning pictures sent back by the Mars Rover vehicles were taken by ~1-Mpixel cameras. Likewise, designers of professional-grade cameras have long known that obtaining the highest-quality digital images requires combining optics with software.
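The software half of that combination can be as simple as restoring the edge contrast that a small lens blurs away. As an illustration only (not a technique described in this article), the sketch below implements classic unsharp masking in plain NumPy: it blurs the image, treats the difference between original and blur as lost high-frequency detail, and adds a scaled copy of that detail back. The function name and parameters are assumptions for the example.

```python
import numpy as np

def unsharp_mask(image, amount=1.0, kernel_size=3):
    """Sharpen a grayscale image (values in 0..1) by unsharp masking:
    add back the detail lost between the original and a blurred copy.
    `amount` scales how much detail is restored (illustrative defaults)."""
    pad = kernel_size // 2
    # Replicate edge pixels so the blur is defined at the borders.
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Box blur: average every kernel_size x kernel_size neighbourhood.
    blurred = np.zeros(image.shape, dtype=float)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= kernel_size ** 2
    # (image - blurred) is the high-frequency detail the blur removed;
    # adding a scaled copy back steepens edges without shifting flat areas.
    sharpened = image + amount * (image - blurred)
    return np.clip(sharpened, 0.0, 1.0)
```

In a real camera pipeline this step would be tuned to the measured blur of the specific lens (and more capable filters than a box blur would be used), but the principle is the same: the optics set a floor on raw image quality, and software recovers contrast that the physics of a small module gives away.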