Image Analysis & Enhancement

We investigate various image processing and classification tasks, primarily for applications in security and optical inspection. Current emphasis is on:

Superresolution

Remote Sensing

Visual Inspection

Recent publications:

IEEE PAMI 2019: Toward Bridging the Simulated-to-Real Gap: Benchmarking Super-Resolution on Real Data
Capturing ground truth data to benchmark super-resolution (SR) is challenging. Therefore, current quantitative studies are mainly evaluated on simulated data artificially sampled from ground truth images. We argue that such evaluations overestimate the actual performance of SR methods compared to their behavior on real images. Toward bridging this simulated-to-real gap, we introduce the Super-Resolution Erlangen (SupER) database, the first comprehensive laboratory SR database of all-real acquisitions with pixel-wise ground truth. It consists of more than 80k images of 14 scenes combining different facets: CMOS sensor noise, real sampling at four resolution levels, nine scene motion types, two photometric conditions, and lossy video coding at five levels. As such, the database exceeds existing benchmarks by an order of magnitude in quality and quantity. This paper also benchmarks 19 popular single-image and multi-frame algorithms on our data. The benchmark comprises a quantitative study by exploiting ground truth data and qualitative evaluations in a large-scale observer study. We also rigorously investigate agreements between both evaluations from a statistical perspective. One interesting result is that top-performing methods on simulated data may be surpassed by others on real data. Our insights can spur further algorithm development, and the publicly available dataset can foster future evaluations.
paper | website

Solar Energy 2019: Automatic Classification of Defective Photovoltaic Module Cells in Electroluminescence Images
Electroluminescence (EL) imaging is a useful modality for the inspection of photovoltaic modules. However, the analysis of EL images is typically a manual process that is expensive, time-consuming, and requires expert knowledge of many different types of defects. In this work, we investigate two approaches for automatic detection of such defects in a single image of a PV cell.
paper

IEEE JSTARS 2019: Fast and Efficient Limited Data Hyperspectral Remote Sensing Image Classification via GMM-based Synthetic Samples
In hyperspectral remote sensing (HSRS), feature data can potentially become very high dimensional. At the same time, manual labeling of that data is an expensive task. As a consequence of these two factors, one of the core challenges is to perform multi-class classification using only relatively few training data points. In this work, we investigate the classification performance with limited training data. First, we revisit the optimization of the internal parameters of a classifier in the context of limited training data. Second, we report an interesting alternative to parameter optimization: classification performance can also be considerably increased by adding synthetic GMM data to the feature space while using a classifier with unoptimized parameters. Third, we show that using variational expectation maximization, we can achieve a much faster convergence in fitting the GMM on the data.
paper
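To illustrate the idea of GMM-based synthetic samples (not the paper's exact pipeline), the following sketch fits a per-class Gaussian mixture on a small labeled set, draws synthetic samples from it, and trains a classifier with default, unoptimized parameters, as in the paper's setup. All data sizes, component counts, and the choice of scikit-learn are assumptions made for this toy example:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for hyperspectral features: two classes, only 15 labeled
# samples each in a 10-dimensional feature space.
X_a = rng.normal(0.0, 1.0, size=(15, 10))
X_b = rng.normal(2.0, 1.0, size=(15, 10))
X = np.vstack([X_a, X_b])
y = np.array([0] * 15 + [1] * 15)

def gmm_augment(X, y, n_synth=100, n_components=2, seed=0):
    """Augment each class with synthetic samples drawn from a per-class GMM."""
    X_aug, y_aug = [X], [y]
    for c in np.unique(y):
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              random_state=seed).fit(X[y == c])
        X_s, _ = gmm.sample(n_synth)
        X_aug.append(X_s)
        y_aug.append(np.full(n_synth, c))
    return np.vstack(X_aug), np.concatenate(y_aug)

X_train, y_train = gmm_augment(X, y)
clf = SVC().fit(X_train, y_train)  # classifier left at default (unoptimized) parameters
```

The augmented set here contains the 30 real samples plus 100 synthetic samples per class; the synthetic points densify the sparsely sampled feature space before training.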

Zeitschrift für Medizinische Physik 2019: A Gentle Introduction to Deep Learning in Medical Image Processing
This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we review the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously, medical image processing is one of the areas that has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modeling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future.
paper
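The perceptron basics the paper reviews can be sketched in a few lines. This toy example, which is purely illustrative and not taken from the paper, trains a single Rosenblatt perceptron with the classic mistake-driven update rule on a linearly separable AND-style problem:

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Classic perceptron: labels y in {-1, +1}, learns a linear boundary."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only on misclassified (or on-boundary) samples.
            if yi * (xi @ w + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# AND-style toy problem: only [1, 1] belongs to the positive class.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
pred = np.sign(X @ w + b)
```

Since the data is linearly separable, the perceptron convergence theorem guarantees that this loop stops making updates after finitely many mistakes.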

IEEE TCI 2016: A Comparative Error Analysis of Current Time-of-Flight Sensors
Time-of-flight (ToF) cameras suffer from systematic errors, which can be an issue in many application scenarios. In this paper, we investigate the error characteristics of eight different ToF cameras. We present up to six experiments for each camera to quantify different types of errors. The results discussed in this paper enable the community to make appropriate decisions in choosing the best matching camera for a certain application. This work also lays the foundation for a framework to benchmark future ToF cameras. Furthermore, our results demonstrate the necessity for correcting characteristic measurement errors.
paper

Solar Energy 2014: Continuous Short-Term Irradiance Forecasts using Sky Images
We present a system for forecasting occlusions of the sun and the expected Global Horizontal Irradiance (GHI) for solar power plants. Our system uses non-rigid registration for detecting cloud motion and a Kalman filter to establish continuous forecasts for up to 10 minutes. The Kalman filter and the use of a dense motion field instead of a global cloud speed prove to be key elements of the forecasting pipeline: by incorporating information from previous forecasts into the current one, Kalman filtering facilitates forecasting times below 3 minutes, and the dense motion field enhances the accuracy of our forecasts.
paper
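The Kalman-filtering idea behind such continuous forecasts can be illustrated with a toy 1-D constant-velocity filter that tracks a declining signal and then extrapolates it forward without new measurements. The state model and all noise parameters below are assumptions for this sketch, not the configuration used in the paper:

```python
import numpy as np

# 1-D constant-velocity Kalman filter: state = [value, rate of change].
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition, unit time step
H = np.array([[1.0, 0.0]])              # we observe only the value
Q = np.eye(2) * 1e-3                    # process noise (assumed)
R = np.array([[0.05]])                  # measurement noise (assumed)

def kalman_step(x, P, z):
    # Predict one step ahead.
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct with the new measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

def forecast(x, n):
    """Forecast n steps ahead by repeated prediction without measurements."""
    for _ in range(n):
        x = F @ x
    return x[0]

# Feed a noisy, linearly decreasing signal (e.g., irradiance dropping
# as a cloud approaches the sun).
rng = np.random.default_rng(1)
x, P = np.array([1.0, 0.0]), np.eye(2)
for t in range(30):
    z = 1.0 - 0.02 * t + rng.normal(0, 0.05)
    x, P = kalman_step(x, P, z)
```

Because the filter carries its state between measurements, each new forecast reuses information from all previous ones, which is the property the abstract highlights.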

IEEE TIP 2014: Multi-Illuminant Estimation with Conditional Random Fields
Most existing color constancy algorithms assume uniform illumination. However, in real-world scenes, this is often not the case. Thus, we propose a novel framework for estimating the colors of multiple illuminants and their spatial distribution in the scene. We formulate this problem as an energy minimization task within a Conditional Random Field over a set of local illuminant estimates. In order to quantitatively evaluate the proposed method, we created a novel dataset of two-dominant-illuminant images comprising laboratory, indoor, and outdoor scenes. Unlike prior work, our database includes accurate pixel-wise ground truth illuminant information. Experimental results show that our framework clearly outperforms single-illuminant estimators, as well as a recently proposed multi-illuminant estimation approach.
paper
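The energy-minimization formulation can be illustrated on a toy 1-D chain of image patches with two hypothetical candidate illuminants: a unary term measures agreement with each patch's local estimate, and a pairwise term encourages neighboring patches to share a label. This sketch uses simple iterated conditional modes (ICM) for inference, not the paper's actual CRF machinery, and all colors and weights are invented for the example:

```python
import numpy as np

# Two hypothetical candidate illuminant colors (RGB).
candidates = np.array([[1.0, 0.9, 0.8],   # "warm" illuminant
                       [0.8, 0.9, 1.0]])  # "cool" illuminant

# Noisy local per-patch estimates: first half warm, second half cool.
rng = np.random.default_rng(0)
local = np.vstack([candidates[0] + rng.normal(0, 0.03, (6, 3)),
                   candidates[1] + rng.normal(0, 0.03, (6, 3))])

# Unary energy: squared distance between local estimate and each candidate.
unary = np.array([[np.sum((e - c) ** 2) for c in candidates] for e in local])
LAMBDA = 0.05  # smoothness weight (assumed)

def icm(unary, lam, iters=10):
    """Minimize unary + Potts pairwise energy on a chain by coordinate descent."""
    labels = unary.argmin(axis=1)  # initialize with the best unary label
    n = len(labels)
    for _ in range(iters):
        for i in range(n):
            costs = unary[i].copy()
            for l in range(len(costs)):
                if i > 0:
                    costs[l] += lam * (l != labels[i - 1])
                if i < n - 1:
                    costs[l] += lam * (l != labels[i + 1])
            labels[i] = costs.argmin()
    return labels

labels = icm(unary, LAMBDA)
```

The pairwise term suppresses isolated label flips caused by noisy local estimates, which is the role the smoothness prior plays in the full per-pixel formulation.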