I'm a third-year Ph.D. student at Stanford University. I completed my B.S. and M.S. degrees in Electrical Engineering at Brigham Young University and am interested in problems in imaging, optimization, computer vision, and remote sensing. I'm currently working in Stanford's Computational Imaging Lab on time-of-flight sensors, imaging around corners, and next-generation LIDAR systems.

News

March 2019

Our paper on acoustic non-line-of-sight imaging was accepted as an oral to CVPR!

June 2018

I'm interning at the Intelligent Systems Lab at Intel this summer with Vladlen Koltun.

Publications

Non-line-of-sight (NLOS) imaging enables unprecedented capabilities in a wide range of applications, including robotic and machine vision, remote sensing, autonomous vehicle navigation, and medical imaging. Recent approaches to solving this challenging problem employ optical time-of-flight imaging systems with highly sensitive time-resolved photodetectors and ultra-fast pulsed lasers. However, despite recent successes in NLOS imaging using these systems, widespread implementation and adoption of the technology remains a challenge because of the requirement for specialized, expensive hardware. We introduce acoustic NLOS imaging, which is orders of magnitude less expensive than most optical systems and captures hidden 3D geometry at longer ranges with shorter acquisition times compared to state-of-the-art optical methods. Inspired by hardware setups used in radar and algorithmic approaches to model and invert wave-based image formation models developed in the seismic imaging community, we demonstrate a new approach to seeing around corners.

Active 3D imaging systems have broad applications across disciplines, including biological imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D imaging in practical scenarios where widely-varying photon counts are observed.
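The pileup distortion described above can be illustrated with the classical Coates-style histogram correction, which analytically inverts the bias introduced when a SPAD records only the first photon per laser cycle. This is a simplified sketch for intuition only; it is not the probabilistic image formation model or the statistical priors developed in the paper.

```python
import numpy as np

def coates_correction(counts, n_cycles):
    """Classical Coates-style pileup correction for a SPAD first-photon
    histogram.

    counts:   photon counts per time bin (first-photon histogram)
    n_cycles: number of laser cycles used to build the histogram
    Returns an estimate of the undistorted per-bin photon rate.
    """
    counts = np.asarray(counts, dtype=float)
    # Number of cycles still "armed" (no photon detected yet) at each bin
    cum = np.concatenate(([0.0], np.cumsum(counts)[:-1]))
    alive = n_cycles - cum
    # Detection probability per bin, conditioned on the detector being
    # armed, converted back to a Poisson rate estimate
    with np.errstate(divide="ignore", invalid="ignore"):
        p = np.where(alive > 0, counts / alive, 0.0)
        rate = -np.log1p(-np.clip(p, 0.0, 1.0 - 1e-12))
    return rate
```

For example, two bins with raw counts 500 and 250 out of 1000 cycles yield identical corrected rates: the later bin only looks dimmer because half the cycles were already "used up" by the earlier bin, which is exactly the pileup effect.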

Sensors that capture 3D scene information provide useful data for tasks in vehicle navigation, gesture recognition, human pose estimation, and geometric reconstruction. Active illumination time-of-flight sensors in particular have become widely used to estimate a 3D representation of a scene. However, the maximum range, density of acquired spatial samples, or overall acquisition time of these sensors is fundamentally limited by the minimum signal required to estimate depth reliably. In this paper, we propose a data-driven method for photon-efficient 3D imaging which leverages sensor fusion and computational reconstruction to rapidly and robustly estimate a dense depth map from low photon counts. Our sensor fusion approach uses measurements of single photon arrival times from a low-resolution single-photon detector array and an intensity image from a conventional high-resolution camera. Using a multi-scale deep convolutional network, we jointly process the raw measurements from both sensors and output a high-resolution depth map. To demonstrate the efficacy of our approach, we implement a hardware prototype and show results using captured data. At low signal-to-background levels, our depth reconstruction algorithm with sensor fusion outperforms other methods for depth estimation from noisy measurements of photon arrival times.

Active imaging at the picosecond timescale reveals transient light transport effects otherwise not accessible by computer vision and image processing algorithms. For example, analyzing the time of flight of short laser pulses emitted into a scene and scattered back to a detector allows for depth imaging, which is crucial for autonomous driving and many other applications. Moreover, analyzing or removing global light transport effects from photographs becomes feasible.

While several transient imaging systems have recently been proposed using various imaging technologies, none is capable of acquiring transient images at interactive framerates. In this paper, we present an imaging system that records transient images at up to 25 Hz. We show several transient video clips recorded with this system and demonstrate transient imaging applications, including direct-global light transport separation and enhanced depth imaging.

Imaging objects hidden from a camera’s view is a problem of fundamental importance to many fields of research with applications in robotic vision, defense, remote sensing, medical imaging, and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging aims at reconstructing the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical due to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that confocalizing the scanning procedure provides a means to address these key challenges. Confocal scanning facilitates the derivation of a novel closed-form solution to the NLOS reconstruction problem, which requires computational and memory resources that are orders of magnitude fewer than previous reconstruction methods and recovers hidden objects at unprecedented image resolutions. Confocal scanning also uniquely benefits from a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate real-time tracking capabilities, and derive efficient algorithms that incorporate image priors and a physically-accurate noise model. Most notably, we demonstrate successful outdoor experiments for NLOS imaging under indirect sunlight.
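The memory efficiency of the closed-form solution stems from the fact that, after resampling, confocal NLOS image formation becomes a shift-invariant 3D convolution, which can be inverted in the frequency domain. The sketch below shows only that final Wiener deconvolution step; the kernel, resampling, and filter constant here are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def wiener_deconv3d(meas, kernel, snr=100.0):
    """3-D Wiener deconvolution of a measurement volume by a
    shift-invariant kernel (same shape as meas, centered in the volume).
    Illustrative sketch of the frequency-domain inversion step used in
    closed-form confocal NLOS reconstruction."""
    M = np.fft.fftn(meas)
    # Shift the centered kernel so its peak sits at the array origin
    H = np.fft.fftn(np.fft.ifftshift(kernel))
    # Wiener filter: regularized inverse of the blur kernel
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifftn(W * M))
```

Because the inversion is a pair of FFTs and an element-wise multiply, its cost is O(N log N) in the number of voxels, which is the source of the orders-of-magnitude savings over iterative backprojection-style reconstructions.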

Computer vision algorithms build on 2D images or 3D videos that capture dynamic events at the millisecond time scale. However, capturing and analyzing “transient images” at the picosecond scale—i.e., at one trillion frames per second—reveals unprecedented information about a scene and the light transport within it. This is not only crucial for time-of-flight range imaging, but it also helps further our understanding of light transport phenomena at a more fundamental level and potentially allows us to revisit many assumptions made in different computer vision algorithms.

In this work, we design and evaluate an imaging system that builds on single photon avalanche diode (SPAD) sensors to capture multi-path responses with picosecond-scale active illumination. We develop inverse methods that use modern approaches to deconvolve and denoise measurements in the presence of Poisson noise, and compute transient images at a higher quality than previously reported. The small form factor, fast acquisition rates, and relatively low cost of our system potentially makes transient imaging more practical for a range of applications.
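As a point of reference for deconvolution under Poisson noise, the classical Richardson-Lucy iteration is the maximum-likelihood baseline; the sketch below shows it in 1-D for a transient histogram. This is illustrative only; the paper's inverse methods use more modern solvers and priors.

```python
import numpy as np

def richardson_lucy(meas, kernel, n_iter=50):
    """Richardson-Lucy deconvolution: iterative maximum-likelihood
    deblurring under a Poisson noise model, shown in 1-D.

    meas:   blurred, photon-count measurement (nonnegative)
    kernel: instrument response (sums to 1)
    """
    est = np.full_like(meas, meas.mean(), dtype=float)
    k_flip = kernel[::-1]  # adjoint of convolution = correlation
    for _ in range(n_iter):
        conv = np.convolve(est, kernel, mode="same")
        ratio = meas / np.maximum(conv, 1e-12)
        est *= np.convolve(ratio, k_flip, mode="same")
    return est
```

A useful property of this multiplicative update is that it preserves nonnegativity and total photon counts, which matches the physics of photon-counting measurements.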

Arctic sea ice can be classified as first-year (FY) or multiyear (MY) based on data collected by satellite microwave scatterometers. The Oceansat-2 Ku-band Scatterometer (OSCAT) was operational from 2009 to 2014 and is used here to classify ice as FY or MY during those years. Due to similarities in backscatter measurements from sea ice and open water, a NASA Team ice concentration product derived from passive microwave brightness temperatures is used to restrict the classification area to within the sea ice extent. Classification of FY and MY ice is completed with OSCAT by applying a temporally adjusted threshold to backscatter values. The classification method is also applied to the Quick Scatterometer (QuikSCAT) data set, and ice age classifications are processed using QuikSCAT for 1999-2009. The combined QuikSCAT and OSCAT classifications represent a 15-year record, which extends from 1999 to 2014. The classifications show a decrease in MY ice, while the total area of the ice cover remains consistent throughout winter seasons over the time series.
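The masked, threshold-based classification step can be sketched as follows; the threshold value here is a hypothetical placeholder, and in the actual method it is adjusted over time, as noted above.

```python
import numpy as np

def classify_ice(sigma0_db, threshold_db, ice_mask):
    """Classify sea-ice pixels as multiyear (MY) or first-year (FY) by
    thresholding Ku-band backscatter (dB). `ice_mask` restricts the
    classification to the sea-ice extent, e.g. from a passive-microwave
    ice concentration product, to avoid confusing ice with open water.
    Returns 2 for MY, 1 for FY, 0 outside the ice extent.
    """
    labels = np.zeros_like(sigma0_db, dtype=int)
    # MY ice is rougher/lower-loss and returns a brighter Ku-band echo
    labels[ice_mask & (sigma0_db >= threshold_db)] = 2
    labels[ice_mask & (sigma0_db < threshold_db)] = 1
    return labels
```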

The concentration, type, and extent of sea ice in the Arctic can be estimated based on measurements from satellite active microwave sensors, passive microwave sensors, or both. Here, data from the Advanced Scatterometer (ASCAT) and the Special Sensor Microwave Imager/Sounder (SSMIS) are employed to broadly classify Arctic sea ice type as first-year (FY) or multiyear (MY). Combining data from both active and passive sensors can improve the performance of MY and FY ice classification. The classification method uses C-band σ0 measurements from ASCAT and 37 GHz brightness temperature measurements from SSMIS to derive a probabilistic model based on a multivariate Gaussian distribution. Using a Gaussian model, a Bayesian estimator selects between FY and MY ice to classify pixels in images of Arctic sea ice. The ASCAT/SSMIS classification results are compared with classifications using the Oceansat-2 scatterometer (OSCAT), the Equal-Area Scalable Earth Grid (EASE-Grid) Sea Ice Age dataset available from the National Snow and Ice Data Center (NSIDC), and the Canadian Ice Service (CIS) charts, also available from the NSIDC. The MY ice extent of the ASCAT/SSMIS classifications demonstrates an average difference of 282 thousand km² from that of the OSCAT classifications from 2009 to 2014. The difference is an average of 13.6% of the OSCAT MY ice extent, which averaged 2.19 million km² over the same period. Compared to the ice classified as two years or older in the EASE-Grid Sea Ice Age dataset (EASE-2+) from 2009 to 2012, the average difference is 617 thousand km². The difference is an average of 22.8% of the EASE-2+ MY ice extent, which averaged 2.79 million km² from 2009 to 2012. Comparison with the CIS charts shows that most ASCAT/SSMIS classifications of MY ice correspond to a MY ice concentration of approximately 50% or greater in the CIS charts.
The addition of the passive SSMIS data appears to improve classifications by mitigating misclassifications caused by ASCAT's sensitivity to rough patches of ice, which can appear similar to MY ice but are not.
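The Bayesian selection between FY and MY ice under per-class multivariate Gaussian models can be sketched as below. The feature vector combines an ASCAT σ0 value with an SSMIS 37 GHz brightness temperature; the class means and covariances in the example are illustrative placeholders, not values estimated in the paper.

```python
import numpy as np

def bayes_classify(x, means, covs, priors=(0.5, 0.5)):
    """Classify feature vectors x = [sigma0 (dB), Tb 37 GHz (K)] as
    FY (0) or MY (1) ice by maximizing the Gaussian log-posterior.

    means/covs: per-class mean vectors and covariance matrices
    priors:     per-class prior probabilities
    """
    x = np.atleast_2d(x)
    scores = []
    for mu, cov, pi in zip(means, covs, priors):
        diff = x - mu
        inv = np.linalg.inv(cov)
        # log N(x; mu, cov) up to a constant shared by both classes
        logp = -0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)
        logp -= 0.5 * np.log(np.linalg.det(cov))
        scores.append(logp + np.log(pi))
    return np.argmax(np.stack(scores), axis=0)  # 0 = FY, 1 = MY
```

Using both channels lets the estimator separate classes that overlap in either measurement alone, which is the motivation for fusing active and passive data.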

Satellite-borne C-band scatterometer measurements of the radar backscatter coefficient (σ0) of the Earth can be used to estimate soil moisture levels over land. Such estimates are currently produced at 25- and 50-km resolution using the Advanced Scatterometer (ASCAT) sensor and a change detection algorithm originally developed at the Vienna University of Technology (TU-Wien). Using the ASCAT spatial response function (SRF), high-resolution (approximately 15-20 km per pixel) images of σ0 can be produced, enabling the creation of a high-resolution soil moisture product using a modified version of the TU-Wien algorithm. The high-resolution soil moisture images are compared to images produced with the Water Retrieval Package 5.5 (WARP 5.5) algorithm, which is also based on the TU-Wien algorithm, and to in situ measurements from the National Oceanic and Atmospheric Administration U.S. Climate Reference Network (NOAA CRN). The WARP 5.5 and high-resolution image products generally show good agreement with each other; the high-resolution estimates appear to resolve soil moisture features at a finer scale and demonstrate a tendency toward greater moisture values in some areas. When compared to volumetric soil moisture measurements from NOAA CRN stations for 2010 and 2011, the WARP 5.5 and high-resolution soil moisture estimates perform similarly, with both having a root-mean-square difference from the in situ data of approximately 0.06 m³/m³ in one study area and 0.09 m³/m³ in another.
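At its core, TU-Wien-style change detection scales the current backscatter between historical dry and wet reference levels to obtain a relative soil moisture (degree of saturation). The sketch below shows only that scaling step; the operational algorithm additionally normalizes for incidence angle and corrects for vegetation effects.

```python
import numpy as np

def relative_soil_moisture(sigma0_db, sigma0_dry_db, sigma0_wet_db):
    """TU-Wien-style change detection: linearly scale the observed
    backscatter (dB) between per-pixel dry and wet references to get
    relative surface soil moisture in [0, 1].
    """
    ms = (sigma0_db - sigma0_dry_db) / (sigma0_wet_db - sigma0_dry_db)
    return np.clip(ms, 0.0, 1.0)
```

For example, an observation halfway (in dB) between the dry and wet references maps to a relative soil moisture of 0.5; observations drier than the dry reference clip to 0.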