(Author) A light field camera captures both radiance and angular information, providing a novel solution for depth estimation. This paper proposes two improvements for a lenslet light field camera: distortion model optimization and depth estimation refinement. For distortion model optimization, a novel 14-parameter distortion model that incorporates sub-aperture image generation is applied to correct the light field camera images. For depth estimation refinement, an algorithm based on multi-view stereo matching with a cost volume is proposed to reduce the strong influence of outliers on depth estimation in weakly textured regions. Experimental results show that, compared with state-of-the-art algorithms, the projection error decreases by about 30% with our distortion correction method and the depth root-mean-squared error on real-world images decreases by about 42% with our depth estimation method. This verifies the correctness and effectiveness of the proposed methods and shows a significant improvement in the accuracy of depth map estimation.
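The abstract does not detail the cost-volume construction, so the sketch below only illustrates the general idea: a photometric cost volume is built from sub-aperture views, and ambiguous (weak-texture) pixels are flagged with a simple best-versus-second-best ratio test and filled crudely. All names (`build_cost_volume`, `depth_from_cost_volume`), the shift-based warping, and the ratio-based confidence rule are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift  # fractional image shifts

def build_cost_volume(center, views, offsets, disparities):
    """Photometric cost volume: for each disparity hypothesis, shift every
    sub-aperture view toward the central view and accumulate the absolute
    intensity difference. `offsets` are the (du, dv) angular positions of
    the views relative to the central sub-aperture (assumed convention)."""
    h, w = center.shape
    cost = np.zeros((len(disparities), h, w), dtype=np.float32)
    for d_idx, d in enumerate(disparities):
        for view, (du, dv) in zip(views, offsets):
            warped = subpixel_shift(view, shift=(d * dv, d * du), order=1)
            cost[d_idx] += np.abs(center - warped)
        cost[d_idx] /= len(views)
    return cost

def depth_from_cost_volume(cost, disparities, ratio_thresh=0.9):
    """Winner-take-all disparity with an ambiguity test: in weak-texture
    regions the best and second-best costs are nearly equal, so those
    pixels are flagged and filled from the confident estimates."""
    disparities = np.asarray(disparities, dtype=np.float32)
    best_idx = np.argmin(cost, axis=0)
    disparity = disparities[best_idx]
    sorted_cost = np.sort(cost, axis=0)
    confident = sorted_cost[0] < ratio_thresh * (sorted_cost[1] + 1e-6)
    # crude fill for ambiguous pixels: global median of confident estimates
    disparity[~confident] = np.median(disparity[confident])
    return disparity, confident
```

A robust refinement such as the one the abstract describes would replace the crude median fill with a scheme that down-weights outlier costs before the winner-take-all step; the ratio test here only marks where such a scheme would act.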

(Author) This paper presents a new method for improving the geometric accuracy of photogrammetric reconstruction by modeling and correcting the thermal effect on the camera image sensor. The objective is to verify that, when the temperature of the image sensor varies during acquisition, the image deformation induced by the temperature change is quantifiable, can be modeled, and can be corrected. A temperature sensor integrated in the camera measures the image sensor temperature at exposure. It is therefore natural and appropriate to take this effect into account and, after a calibration step, to model and correct it. Nowadays, in cartography applications performed with UAVs, the acquisition frame rate is continuously increasing. A high frame rate over a long acquisition can cause a significant temperature increase of the image sensor and thus introduce image deformations. Correcting this effect can improve measurement accuracy. We present three methods to calibrate the thermal effect, and experiments on two datasets are carried out to verify the improvement in photogrammetric accuracy.
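The abstract does not specify the form of the thermal model; as one plausible minimal sketch, the correction below treats sensor heating as an isotropic scale change about the principal point, linear in the temperature offset recorded at exposure. The function name, the linear scale model, and the coefficient `alpha_per_deg` are assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

def correct_thermal_scale(points_px, principal_point, temperature_c,
                          ref_temperature_c, alpha_per_deg):
    """Minimal thermal correction (assumed model): treat sensor heating as an
    isotropic scale change about the principal point, linear in the
    temperature offset measured at exposure. `alpha_per_deg` (scale change
    per degree Celsius) would come from a prior calibration step."""
    pts = np.asarray(points_px, dtype=np.float64)
    pp = np.asarray(principal_point, dtype=np.float64)
    scale = 1.0 + alpha_per_deg * (temperature_c - ref_temperature_c)
    return pp + (pts - pp) / scale  # undo the expansion observed in the image

# Example: a point observed at 35 degC, calibration reference at 20 degC
corrected = correct_thermal_scale([[2010.0, 1508.0]], [2000.0, 1500.0],
                                  temperature_c=35.0, ref_temperature_c=20.0,
                                  alpha_per_deg=2e-5)
```

Richer models (e.g. anisotropic scaling or a temperature-dependent principal point) would follow the same pattern, with additional coefficients estimated during calibration.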

(Author) Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges, which naturally provide semi-dense geometric information without any pre-processing operation, and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU.
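As a rough sketch of the space-sweep idea behind EMVS (the abstract notes the method fits in a few lines of code), the snippet below traces each event's viewing ray through a disparity space image (DSI) discretized over depth planes in a reference view, accumulates votes, and keeps depths only where the vote count is high, yielding a semi-dense map. Function names, the per-plane loop, and the vote threshold are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def emvs_dsi(events_px, poses_w_c, K, T_ref_c_w, depth_planes, dsi_shape):
    """Space-sweep voting: every event defines a viewing ray; the ray is
    traced through a DSI discretized over depth planes in a reference view,
    and each traversed cell receives a vote. Scene edges then appear as
    local maxima of the vote count. (Assumed sketch, not the paper's code.)"""
    n_planes = len(depth_planes)
    h, w = dsi_shape
    dsi = np.zeros((n_planes, h, w), dtype=np.float32)
    K_inv = np.linalg.inv(K)
    R_ref, t_ref = T_ref_c_w[:3, :3], T_ref_c_w[:3, 3]
    for (u, v), T_w_c in zip(events_px, poses_w_c):
        # event ray expressed in the reference camera frame
        origin_ref = R_ref @ T_w_c[:3, 3] + t_ref
        dir_ref = R_ref @ T_w_c[:3, :3] @ (K_inv @ np.array([u, v, 1.0]))
        if abs(dir_ref[2]) < 1e-9:
            continue
        for z_idx, z in enumerate(depth_planes):
            t = (z - origin_ref[2]) / dir_ref[2]
            if t <= 0:
                continue
            p = origin_ref + t * dir_ref          # intersection with plane z
            u_r, v_r, _ = K @ (p / p[2])
            iu, iv = int(round(u_r)), int(round(v_r))
            if 0 <= iu < w and 0 <= iv < h:
                dsi[z_idx, iv, iu] += 1.0
    return dsi

def semidense_depth(dsi, depth_planes, min_votes=5):
    """Keep the best depth plane per pixel only where the vote count is high
    enough, i.e. near scene edges, which yields a semi-dense depth map."""
    best = np.argmax(dsi, axis=0)
    votes = np.max(dsi, axis=0)
    depth = np.asarray(depth_planes, dtype=np.float32)[best]
    depth[votes < min_votes] = np.nan
    return depth
```

The paper reports real-time performance on a CPU; a per-event, per-plane Python loop like this would be far slower and is only meant to make the voting scheme concrete.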