This paper presents details of the SkySat-1 mission, the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially available high-frame-rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame-rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved in calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.
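The core of the "digital TDI" idea, co-adding many registered raw frames so that noise averages down while signal accumulates, can be sketched as follows. This is a simplified illustration of frame stacking, not Skybox's actual pipeline; the function and parameters are hypothetical, and real processing must also resolve sub-pixel motion between frames:

```python
import numpy as np

def digital_tdi_stack(frames, shifts):
    """Co-add registered frames to boost SNR (a simplified stand-in for
    the frame-combination step the abstract calls 'digital TDI').

    frames : list of 2-D arrays, raw frames of the same scene
    shifts : list of (dy, dx) integer offsets aligning each frame to the first
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for frame, (dy, dx) in zip(frames, shifts):
        # undo the known inter-frame motion, then accumulate
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return acc / len(frames)  # averaging N frames cuts noise std by ~sqrt(N)
```

Averaging N aligned frames reduces the noise standard deviation by roughly the square root of N, which is the SNR gain the abstract attributes to combining data from multiple frames.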

Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical
and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often
space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to
these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages
including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer
imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations
in the digital processor. Hence, exploiting the advantages of jointly designed computational imaging systems
requires low-complexity algorithms that enable space-varying sharpening.
In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite
impulse response (FIR) sharpening required to restore heavily aberrated optical images. Our framework leverages the
space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe
an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational
savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation
common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost
space-varying FIR filter architecture.
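One way to realize the filter-bank idea above is to index a small set of FIR kernels by normalized distance from the optical axis, so each pixel is sharpened with the kernel tuned to its radial zone. The sketch below is illustrative only; the kernel bank is left to the caller, all names are hypothetical, and a hardware implementation would vectorize the per-pixel loop:

```python
import numpy as np

def radial_filter_bank_sharpen(img, bank):
    """Space-varying FIR sharpening with a radially indexed filter bank.

    bank : list of K small square 2-D kernels, bank[0] for the image
           center, bank[-1] for the corners; exploits the (assumed)
           rotational symmetry of the PSF about the optical axis.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rmax = np.hypot(cy, cx)
    pad = bank[0].shape[0] // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            r = np.hypot(y - cy, x - cx) / rmax          # normalized radius
            k = bank[min(int(r * len(bank)), len(bank) - 1)]
            patch = padded[y:y + 2 * pad + 1, x:x + 2 * pad + 1]
            out[y, x] = np.sum(patch * k)                # per-pixel FIR tap
    return out
```

Because only the bank index (not the kernel contents) depends on pixel position, the per-pixel cost is a fixed-size FIR evaluation plus one radius lookup, which is the low-complexity property the abstract targets.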

Recent research in the area of electro-optical system design identified the benefits of spherical aberration for
extending the depth-of-field of electro-optical imaging systems. In such imaging systems, spherical aberration
is deliberately introduced by the optical system, lowering the system modulation transfer function (MTF), and is
subsequently corrected using digital processing. Previous research, however, requires complex digital postprocessing
algorithms, severely limiting its applicability to expensive systems. In this paper, we examine the
ability of low-cost spatially invariant finite impulse response (FIR) digital filters to restore system MTF degraded
by spherical aberration. We introduce an analytical model for choosing the minimum, and hence cheapest, FIR
filter size capable of providing the critical level of sharpening needed to render artifact-free images. We identify a robust
quality criterion based on the post-processed MTF for developing this model. We demonstrate the reliability
of the estimated model by showing simulated spherical coded imaging results. We also evaluate the hardware
complexity of the FIR filters implemented for various spherical aberrations on a low-end Field-Programmable
Gate Array (FPGA) platform.
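The quality criterion above operates on the post-processed MTF, i.e. the product of the aberrated optical MTF and the FIR filter's frequency response. A minimal 1-D sketch of that quantity follows; it is an illustrative model, not the paper's, and the optical-MTF function is supplied by the caller:

```python
import numpy as np

def restored_mtf(optical_mtf, fir_taps, n_freq=64):
    """Post-processed MTF of a 1-D FIR sharpening filter applied after an
    aberrated optical system.

    optical_mtf : callable, f in [0, 0.5] cycles/pixel -> MTF value
    fir_taps    : 1-D FIR filter coefficients
    """
    freqs = np.linspace(0.0, 0.5, n_freq)
    n = np.arange(len(fir_taps))
    # frequency response magnitude of the FIR filter at each frequency
    H = np.abs(np.array([np.sum(fir_taps * np.exp(-2j * np.pi * f * n))
                         for f in freqs]))
    return freqs, H * np.array([optical_mtf(f) for f in freqs])
```

Sweeping the filter size and checking where this restored MTF first stays above a chosen quality threshold mirrors the abstract's minimum-size selection.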

Recently, joint analysis and optimization of both the optical sub-system and the algorithmic capabilities of
digital processing have created new digital-optical imaging systems with system-level benefits. We explore
a special class of digital-optical imaging systems called spherical coding that combine lens systems having
controlled amounts of spherical aberration with digital sharpening filters to achieve fast, low-cost, extended
depth-of-field (EDoF) imaging systems. We provide an analysis of the optimal amount of spherical aberration
required as a function of desired depth-of-field extension. We also characterize the MSE-optimal filters
required to restore contrast. Finally, we describe a simple method for designing spherical coded systems and
demonstrate several advantages such as improved manufacturing yield using an actual lens design.
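For a known shift-invariant PSF, the MSE-optimal contrast-restoring filter takes the familiar Wiener form. The sketch below assumes a scalar noise-to-signal ratio for simplicity and is illustrative, not the paper's exact filters:

```python
import numpy as np

def wiener_restoration(blurred, psf, nsr=0.01):
    """MSE-optimal (Wiener) restoration for a known shift-invariant PSF.

    psf : array of the same shape as `blurred`, with its peak at index
          [0, 0] (FFT wrap-around convention)
    nsr : assumed noise-to-signal power ratio (a scalar here)
    """
    H = np.fft.fft2(psf)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
```

The `nsr` term regularizes frequencies where the spherically aberrated MTF is low, trading noise amplification against contrast restoration.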

In many imaging applications, the objects of interest have a broad range of strongly correlated spectral components.
For example, the spectral components of grayscale objects such as media printed with black ink or toner are nearly perfectly correlated spatially. We describe how to exploit such correlation during the design of electro-optical imaging systems to achieve greater imaging performance and lower optical component cost.
These advantages are achieved by jointly optimizing optical, detector, and digital image processing subsystems
using a unified statistical imaging performance measure. The resulting optical systems have lower F# and greater depth-of-field than systems that do not exploit spectral correlations.

A recent theory claims that the Italian late-Renaissance painter Lorenzo Lotto secretly built a concave-mirror
projector to project an image of a carpet onto his canvas and trace it during the execution of <i>Husband and
wife</i> (c. 1543). Key evidence adduced to support this claim includes "perspective anomalies" and changes in
"magnification" that the theory's proponents ascribe to Lotto refocusing his projector to overcome its limitations
in depth of field. We find, though, that there are important geometrical constraints upon such a putative optical
projector not incorporated into the proponents' analyses, and that when properly included, the argument for the
use of optics loses its force. We used Zemax optical design software to create a simple model of Lotto's studio
and putative projector, and incorporated the optical properties proponents inferred from geometrical properties
of the depicted carpet. Our central contribution derives from including the 116-cm-wide canvas screen; we found
that this screen forces the incident light to strike the concave mirror at large angles (≥ 15°) and that this, in
turn, means that the projected image would reveal severe off-axis aberrations, particularly astigmatism. Such
aberrations are roughly as severe as the defocus blur claimed to have led Lotto to refocus the projector. In short,
we find that the projected images would not have gone in and out of focus in the way claimed by proponents,
a result that undercuts their claim that Lotto used a projector for this painting. We speculate on the value of
further uses of sophisticated ray-tracing analyses in the study of fine arts.

There is a long history of using light to change the shape of a material. More than a decade ago, our group proposed and demonstrated that the length of an optical fiber should change due to a guided mode in analogy to the refractive index change due to the Optical Kerr Effect. The mechanisms that we postulated as being responsible included photothermal heating and photoisomerization. In the present studies, we report on a polymer optical fiber cantilever, which is excited by launching a light beam off-axis into the fiber. In measurements of the degree of bending as a function of time after the light beam is turned on or turned off, we find that there are two distinct time responses, each of different magnitude. We show that the dynamics of photobending is consistent with coupling between the photothermal heating and photoisomerization mechanisms. More interestingly, we find that a collective release of stress must be invoked to describe the observations. We propose new kinetic models of the phenomena, and show that they are consistent with the data.

Reliable fabrication and assembly of high-quality electro-optical imaging systems is a critical challenge facing
electro-optical imaging system manufacturers. Optical compensation is one standard approach for minimizing
the effects of errors introduced during the manufacture and construction of the optical subsystems. We
describe how digital image processing should be considered as a form of compensation when evaluating a
complete imaging system. We describe a novel method for digital-optical compensation which jointly adjusts
both optical parameters and image processing parameters to maximize end-to-end imaging performance. We
verify the superiority of this joint compensation strategy over the traditional sequential compensation through
several example imaging systems.

The traditional approach to designing an electro-optical imaging system involves first optimizing the lens subsystem using an optical measure of performance and second optimizing the image processing subsystem. Designing the system in this sequential fashion fails to exploit opportunities for efficient cooperation between the optical and digital systems. We introduce a novel framework for designing digital imaging systems and specifically an end-to-end merit function based on pixel-wise mean squared error. We describe how we adapt commercial ray tracing software to design the matched optical and image processing subsystems in a joint fashion while satisfying the constraints imposed on each of the subsystems.
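The end-to-end merit function can be pictured as: simulate the optical blur, add detector noise, apply the digital restoration, and score the result against the ideal scene by pixel-wise mean squared error. The toy version below is illustrative; the paper embeds an equivalent measure inside commercial ray-tracing software, and every name here is hypothetical:

```python
import numpy as np

def end_to_end_mse(scene, otf, restore, noise_std=0.01, seed=0):
    """Pixel-wise mean-squared-error merit for a simulated optics +
    digital-processing chain.

    scene   : ideal 2-D image
    otf     : optical transfer function, same shape as `scene`
    restore : callable mapping the noisy sensor image -> restored image
    """
    blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * otf))    # optics model
    rng = np.random.default_rng(seed)
    sensor = blurred + rng.normal(0.0, noise_std, scene.shape)   # detector noise
    return np.mean((restore(sensor) - scene) ** 2)               # merit value
```

A joint design loop would adjust both the optical parameters behind `otf` and the parameters of `restore` to minimize this single number, rather than optimizing each subsystem in sequence.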

In the last two decades a variety of super-resolution (SR) methods have been proposed. These methods usually address the problem of fusing a set of monochromatic images to produce a single monochromatic image with higher spatial resolution. In this paper we address the dynamic and color SR problems of reconstructing a high-quality set of colored super-resolved images from low-quality mosaiced frames. Our approach includes a hybrid method for simultaneous SR and demosaicing, thereby taking into account the practical color measurements encountered in video sequences. For the case of translational motion and common space-invariant blur, the proposed method is based on a very fast and memory-efficient approximation of the Kalman filter. Experimental results on both simulated and real data are supplied, demonstrating the presented algorithm and its strengths.

In the last two decades, many papers have been published proposing a variety of methods for multi-frame resolution enhancement. These methods, which have a wide range of complexity, memory, and time requirements, are usually very sensitive to their assumed model of data and noise, often limiting their utility. Different implementations of the non-iterative Shift and Add concept have been proposed as very fast and effective super-resolution algorithms. The paper of Elad and Hel-Or (2001) provided an adequate mathematical justification for the Shift and Add method for the simple case of an additive Gaussian noise model. In this paper we prove that an additive Gaussian distribution is not a proper model for super-resolution noise. Specifically, we show that L_p norm minimization (1 ≤ p ≤ 2) results in a pixelwise weighted mean algorithm which requires the least possible amount of computation time and memory and produces a maximum likelihood solution. We also justify the use of a robust prior information term based on the bilateral filter idea. Finally, for the underdetermined case, where the number of non-redundant low-resolution frames is less than the square of the resolution enhancement factor, we propose a method for detection and removal of outlier pixels. Our experiments using commercial digital cameras show that our proposed super-resolution method provides significant improvements in both accuracy and efficiency.
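The pixelwise estimator that the L_p analysis leads to can be illustrated for p = 1, where the weighted mean reduces to a robust pixel-wise median over the frames contributing to each high-resolution location. The sketch below assumes pure integer shifts on the high-resolution grid, a simplification of the general sub-pixel case, and all names are hypothetical:

```python
import numpy as np

def shift_and_add(frames, shifts, r):
    """Non-iterative shift-and-add super-resolution for translational
    motion, using the pixel-wise median (the p = 1 case, robust to
    outliers).

    frames : list of (h, w) low-resolution frames
    shifts : list of (dy, dx) offsets on the HR grid, with 0 <= dy, dx < r
    r      : integer resolution-enhancement factor
    """
    h, w = frames[0].shape
    # stack every frame's samples onto the high-resolution grid
    hr = np.full((h * r, w * r, len(frames)), np.nan)
    for i, (frame, (dy, dx)) in enumerate(zip(frames, shifts)):
        hr[dy::r, dx::r, i] = frame
    # median over the frames that actually observed each HR pixel
    return np.nanmedian(hr, axis=2)
```

HR pixels observed by no frame remain NaN, which is where the abstract's outlier-detection and underdetermined-case handling would take over in a full method.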
