
Abstract

We report on the image formation pipeline developed to efficiently form gigapixel-scale imagery generated by the AWARE-2 multiscale camera. The AWARE-2 camera consists of 98 “microcameras” imaging through a shared spherical objective, covering a 120° × 50° field of view with approximately 40 microradian instantaneous field of view (the angular extent of a pixel). The pipeline is scalable, capable of producing imagery ranging in scope from “live” one-megapixel views to full-resolution gigapixel images. Architectural choices that enable trivially parallelizable algorithms for rapid image formation and on-the-fly microcamera alignment compensation are discussed.

Figures (11)

Fig. 1 The microcameras are tiled on the surface of a hemisphere to optimally cover the full system FoV without gaps. The projection of the microcamera FoVs into object space is shown in (a), where the microcameras populated in AWARE-2 have been highlighted. The machined aluminum geodesic dome, approximately 11.5” in diameter, which holds the microcameras in this configuration, is shown in (b).

Fig. 2 The MapReduce approach breaks the image formation process into two parts. The map step transforms a list of key/value pairs representing the intensity value at a given pixel on a given microcamera into an intermediate list of key/value pairs representing the intensity value at a given location in object space. This location corresponds directly to a pixel in the output image. The reduce step combines key/value pairs sharing the same key to form an estimate of the intensity that was present at that single location in object space.
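As a toy illustration of this two-step structure (the function names and the geometry below are hypothetical, not the pipeline's actual code), the map step can be written as a pure function on single pixels and the reduce step as a bucketed average:

```python
from collections import defaultdict

def to_object_space(cam_id, px, py):
    # Toy geometry: camera 1 is offset two pixels from camera 0 (hypothetical).
    return (px + 2 * cam_id, py)

def map_pixel(cam_id, px, py, value):
    """Map step: re-key a sensor pixel by its location in object space."""
    return to_object_space(cam_id, px, py), value

def reduce_values(mapped):
    """Reduce step: combine all values sharing the same object-space key."""
    buckets = defaultdict(list)
    for key, value in mapped:
        buckets[key].append(value)
    # One simple estimator: average the overlapping contributions.
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

# Two pixels from neighboring microcameras that land on the same object point.
mapped = [map_pixel(0, 2, 0, 10), map_pixel(1, 0, 0, 14)]
composite = reduce_values(mapped)
print(composite[(2, 0)])  # -> 12.0
```

Because each map call touches one pixel and each reduce touches one key, both steps parallelize trivially, which is the property the caption highlights.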

Fig. 3 Parametric models are used to predict the distortion and relative illumination as a function of radial position on a sensor. (a) Comparison of several polynomial functions to the distortion found in a ZEMAX simulation, demonstrating that a 9th order polynomial is sufficient to achieve a pixel-accurate distortion prediction. (b) Fit for the relative illumination model using an 8th order polynomial.
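A minimal sketch of this kind of radial model fit, using a synthetic distortion curve in place of the ZEMAX simulation data (the curve and tolerance below are invented for illustration):

```python
import numpy as np

# Synthetic stand-in for the simulated distortion as a function of
# normalized radial position on the sensor (hypothetical coefficients).
r = np.linspace(0.0, 1.0, 200)
true_distortion = 0.05 * r**3 - 0.02 * r**5 + 0.003 * r**7

# Fit a 9th-order polynomial, as in the caption, and evaluate it.
coeffs = np.polyfit(r, true_distortion, deg=9)
model = np.polyval(coeffs, r)

# For this synthetic curve the residual is far below a pixel.
max_err = np.max(np.abs(model - true_distortion))
print(max_err < 1e-6)  # -> True
```

The same polyfit/polyval pattern applies to the relative illumination model, with an 8th-order polynomial instead.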

Fig. 4 Values from every pixel from every microcamera are mapped into object space. Pixels that overlap in the shared coordinate system are reduced to a single value. These operations can run in parallel on every pixel to quickly form a stitched output image. The set of images on the left depicts a collection of detector outputs. The image on the right is a portion of the final image generated from this group of individual microcamera images that have been positioned with an understanding of the geometry of the array.

Fig. 6 AWARE utilizes the Message Passing Interface (MPI) framework to distribute compositing work among a pool of workers. Each processing core in each server is designated a worker. The root node receives commands via some user interface and distributes the jobs to the workers.
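The root/worker pattern can be sketched in a few lines; the sketch below uses a Python thread pool purely as a stand-in for MPI ranks, with hypothetical per-tile compositing jobs as the unit of work:

```python
from concurrent.futures import ThreadPoolExecutor

def composite_tile(tile_id):
    # Stand-in for the per-tile compositing work a worker would perform.
    return tile_id, f"tile-{tile_id}"

def root_distribute(num_tiles, num_workers):
    # The root enumerates the jobs; the pool farms them out to the workers
    # and gathers the finished tiles, analogous to the MPI pattern above.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        return dict(pool.map(composite_tile, range(num_tiles)))

results = root_distribute(num_tiles=8, num_workers=4)
print(len(results))  # -> 8
```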

Fig. 7 The relative time to composite an image decreases as more workers are used in the computation. This experiment was run on an NVIDIA GTX 570 with 480 cores; requesting more workers than there are available cores therefore yields a reduced performance gain.

Fig. 9 SIFT and SURF algorithms are used to identify clusters of features (shown as markers in the images) in neighboring microcameras. The clusters are transformed into object space and compared to calculate a registration error. The transformation parameters are adjusted to minimize the error.
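For the simplest case, a purely translational drift between two microcameras, the error-minimizing adjustment is just the mean displacement of the matched features. A least-squares sketch with synthetic matches (all coordinates here are invented; a real registration would also fit rotation and scale):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matched feature clusters, already projected into object space.
features_a = rng.uniform(0, 100, size=(20, 2))
true_offset = np.array([1.5, -0.7])        # drift between neighboring cameras
features_b = features_a + true_offset      # same features seen by the neighbor

# Registration error before adjusting the transformation parameters.
err_before = np.mean(np.linalg.norm(features_b - features_a, axis=1))

# Least-squares translation estimate: the mean displacement of matched pairs.
offset_hat = np.mean(features_b - features_a, axis=0)
err_after = np.mean(np.linalg.norm(features_b - offset_hat - features_a, axis=1))

print(err_after < 1e-9 < err_before)  # -> True
```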

Fig. 10 (a) A composite formed with unregistered camera angles will have stitching errors due to mechanical and thermal drift, as shown in this overlap region between three cameras. (b) The extracted features can be used to find a globally optimal registration, leading to an improved composited image.

Fig. 11 A composited, tone-mapped HDR image from the AWARE-2 camera using the proposed image formation architecture. Each microcamera in the array automatically chooses a focal position and exposure time optimized for the distances and intensities found in the portion of the scene it is imaging.