Abstract [en]

Light field technology, which emerged in response to the growing demand for visually immersive experiences, has shown extraordinary potential for scene content representation and reconstruction. Unlike conventional photography, which maps a 3D scene onto a 2D plane by a projective transformation, a light field preserves both spatial and angular information, enabling further processing steps such as computational refocusing and image-based rendering. However, some stages of the pipeline, such as light field demosaicing, remain barely studied. In this paper, we propose a depth-assisted demosaicing method for light field data. First, we analyze the sampling geometry of the light field data with respect to the scene content using ray tracing, and develop a sampling model of light field capture. Then we carry out the demosaicing process in a layered object space, using object-space sampling adjacencies rather than pixel placement. Finally, we compare our results with state-of-the-art approaches and discuss potential research directions for the proposed sampling model to show the significance of our approach.
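The core idea of demosaicing in object space rather than on the sensor grid can be illustrated with a toy sketch. The 1-D geometry, the function names, and the inverse-distance weighting below are illustrative assumptions for exposition, not the paper's actual algorithm: each sample is backprojected through its depth to an object-space position, and a missing color channel is interpolated from the samples that are adjacent in object space instead of adjacent on the pixel grid.

```python
# Toy 1-D sketch: interpolate a missing colour channel from object-space
# neighbours (via depth) instead of pixel-grid neighbours. All names and
# the geometry are hypothetical; this is not the authors' method.

def backproject(pixel_index, depth, focal_length=1.0):
    """Map a sensor pixel to a 1-D object-space coordinate via its depth."""
    return pixel_index * depth / focal_length

def demosaic_object_space(samples, depths, k=2):
    """samples: list of (pixel_index, channel, value) from a CFA pattern.
    Returns green-channel estimates at every pixel, interpolated from the
    k nearest *object-space* green samples with inverse-distance weights."""
    greens = [(backproject(i, depths[i]), v)
              for i, c, v in samples if c == "G"]
    estimates = {}
    for i, c, v in samples:
        if c == "G":
            estimates[i] = v  # green is measured directly at this pixel
            continue
        x = backproject(i, depths[i])
        nearest = sorted(greens, key=lambda g: abs(g[0] - x))[:k]
        weights = [1.0 / (abs(gx - x) + 1e-9) for gx, _ in nearest]
        estimates[i] = (sum(w * gv for w, (_, gv) in zip(weights, nearest))
                        / sum(weights))
    return estimates
```

With a fronto-parallel depth map this reduces to ordinary grid interpolation; the object-space formulation only changes the result where depth varies, which is precisely where sensor-grid neighbors may belong to different scene surfaces.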

Li, Yongwei

Abstract [en]

The past twenty years have witnessed the transition from film-based cameras to digital cameras, along with impressive technological advances in processing massively digitized media content. Today, a new evolution has emerged -- the migration from 2D content to immersive perception. This rising trend has a profound and long-term impact on our society, fostering technologies such as teleconferencing and remote surgery. The trend is also reflected in the scientific research community, where more attention has been drawn to the light field and its applications.

The purpose of this dissertation is to develop a better understanding of the light field structure by analyzing its sampling behavior, and to address three problems concerning the light field processing pipeline: 1) How to estimate depth when color and texture information is limited. 2) How to improve the rendered image quality by using the inherent depth information. 3) How to resolve the interdependence between demosaicing and depth estimation.

The first problem is solved by a hybrid depth estimation approach that combines the advantages of correspondence matching and depth-from-focus, and handles occlusion by combining multiple depth maps in a voting scheme. The second problem is divided into two specific tasks -- demosaicing and super-resolution -- where depth-assisted light field analysis is employed to surpass traditional image processing. The third problem is tackled with an inferential graph model that explicitly encodes the connections between demosaicing and depth estimation and performs a joint global optimization over both tasks.
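The occlusion-handling idea of fusing multiple depth maps by voting can be sketched as follows. The binning strategy, bin size, and function names are assumptions made for illustration (the dissertation only states that multiple depth maps vote); the sketch shows how majority voting suppresses a depth estimate that an occlusion has corrupted in one of the maps.

```python
# Toy per-pixel voting over several candidate depth maps: each map casts
# one vote, votes are quantized into bins, and the most supported bin wins,
# rejecting outliers (e.g. estimates corrupted by occlusion). Bin size and
# tie-breaking are illustrative choices, not the dissertation's algorithm.

from collections import Counter

def vote_depth(depth_maps, bin_size=0.25):
    """depth_maps: list of equally sized per-pixel depth lists.
    Returns a fused depth map: per pixel, the mean of the votes that fall
    into the most voted depth bin."""
    n_pixels = len(depth_maps[0])
    fused = []
    for p in range(n_pixels):
        votes = [dm[p] for dm in depth_maps]
        bins = Counter(round(v / bin_size) for v in votes)
        best_bin, _ = bins.most_common(1)[0]
        members = [v for v in votes if round(v / bin_size) == best_bin]
        fused.append(sum(members) / len(members))
    return fused
```

For example, if one of three maps reports a far background depth at a pixel where the other two agree on a near surface, the two consistent votes share a bin and the outlier is discarded rather than averaged in.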

The proposed depth estimation approach shows a noticeable improvement in point clouds and depth maps compared with reference methods. Furthermore, objective metrics and visual quality are compared with classical sensor-based demosaicing and multi-image super-resolution to show the effectiveness of the proposed depth-assisted light field processing methods. Finally, a multi-task graph model is proposed to improve upon the sequential light field image processing pipeline. The proposed method is validated on various kinds of light fields and outperforms the state of the art in both the demosaicing and depth estimation tasks.

The work presented in this dissertation offers a novel view of the light field data structure in general and provides tools for solving specific image processing problems. Its impact can be manifold: supporting scientific research with light field microscopes, stabilizing the performance of range cameras for industrial applications, and providing individuals with a high-quality immersive experience.