
Abstract

Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic film in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process, enabling new capabilities and better performance in the overall imaging system. A leading effort in this area is the plenoptic camera, which aims to capture the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concepts of the plenoptic function and the light field from the perspective of geometric optics. This is followed by a discussion of early attempts and recent advances in the construction of the plenoptic camera. We then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last but not least, we consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.


Figures (10)

Illustrative plots of the ray-space diagram. (a) A regular array of light rays, from a set of points in the u plane to a set of points in the x plane. (b) A set of light rays arriving at the same x position. (c) A set of light rays approaching a location behind the x′ plane. (d) A set of light rays diverging after converging at a location before the x′′ plane.

Bringing the x plane closer to the u plane results in a tilted line in ray-space at an angle ψ. (a) Moving the second plane closer to the first, by a factor of α. (b) Corresponding shearing in ray-space, with ψ = tan⁻¹(1 − α).
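The shearing described in this caption can be sketched numerically. The helper below is hypothetical and assumes the two-plane parameterization in which a ray from point u on the first plane to point x on the second intersects an intermediate plane, at fraction α of the separation, at x′ = αx + (1 − α)u; under that assumption, rays sharing the same x trace a line in ray-space tilted by ψ = tan⁻¹(1 − α), matching the caption.

```python
import numpy as np

def shear_ray(x, u, alpha):
    """Intercept of the ray (u -> x) with a plane moved toward the
    u plane by a factor alpha (hypothetical helper; assumes the
    two-plane parameterization x' = alpha * x + (1 - alpha) * u)."""
    return alpha * x + (1 - alpha) * u

alpha = 0.5
# Rays with the same x but different u: their intercepts x' now vary
# linearly with u, with slope dx'/du = 1 - alpha, i.e. a ray-space
# line tilted by psi = arctan(1 - alpha).
u_samples = np.array([0.0, 1.0, 2.0])
x_primes = shear_ray(x=1.0, u=u_samples, alpha=alpha)
slope = (x_primes[1] - x_primes[0]) / (u_samples[1] - u_samples[0])
psi = np.arctan(slope)  # equals arctan(1 - alpha)
```

With α = 1 (no plane movement) the map is the identity and ψ = 0, recovering the untilted lines of the previous figure.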

Light field camera system. Light rays marked in red show how the microlenses separate them, so the photodetector array can capture a sampling of the light field. Light rays marked in blue show that the photodetector array can also be thought of as recording the images of the exit pupil.
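Since each microlens maps a small patch of photodetector pixels to a set of ray directions, the raw sensor frame can be unpacked into a 4D light field by a reshape and axis reorder. The sketch below is illustrative only; the grid dimensions (Nx, Ny microlenses, each covering nu × nv pixels) are assumed, and real plenoptic data requires calibration and resampling that this ignores.

```python
import numpy as np

# Assumed sensor geometry: an Nx x Ny grid of microlenses, each
# covering a contiguous nu x nv patch of pixels (illustrative values).
Nx, Ny, nu, nv = 4, 4, 3, 3
raw = np.arange(Nx * nu * Ny * nv, dtype=float).reshape(Nx * nu, Ny * nv)

# Each microlens image is one nu x nv block of the raw frame, so the
# 4D light field L(x, y, u, v) is a reshape plus an axis reorder.
L = raw.reshape(Nx, nu, Ny, nv).transpose(0, 2, 1, 3)

# Summing over the angular axes (u, v) integrates each microlens
# patch, giving the photograph focused at the microlens plane.
photo = L.sum(axis=(2, 3))
```

Refocusing to other depths then amounts to shearing the (x, u) and (y, v) coordinates before this angular integration, as developed in the imaging model of the paper.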

Illustration of the projection-slice theorem. The projection at angle ψ of a 2D function in the x–u plane yields a 1D function of ρ; its 1D Fourier transform equals the slice at angle ψ of the function's 2D Fourier transform in the f_x–f_u plane.
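The theorem in this caption can be verified discretely for the axis-aligned case ψ = 0, where projecting along u then taking a 1D DFT equals the f_u = 0 slice of the 2D DFT. This is a minimal numerical check on a toy grid, not the continuous rotated-slice statement.

```python
import numpy as np

# Toy 2D function on an 8 x 8 (x, u) grid.
rng = np.random.default_rng(0)
L = rng.standard_normal((8, 8))

# Project along u (integrate out the angular coordinate).
projection = L.sum(axis=1)

# Slice the 2D DFT at f_u = 0: this should equal the 1D DFT of the
# projection, which is the projection-slice theorem at psi = 0.
slice_of_2d_fft = np.fft.fft2(L)[:, 0]

assert np.allclose(np.fft.fft(projection), slice_of_2d_fft)
```

For general ψ the same identity holds with a rotated projection and a rotated slice; that continuous form underlies the Fourier-slice approach to digital refocusing discussed in the paper.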