IPOL Articles — Latest articles published in IPOL (http://www.ipol.im/feed/)

The Production of Ground Truths for Evaluating Highly Accurate Stereovision Algorithms
http://www.ipol.im/pub/art/2018/187/
Tristan Dagobert (2018-01-01)
The design and improvement of algorithms for subpixel stereovision require very precise test databases. A survey of the image sets used extensively by the scientific community shows that they are often incomplete and imprecise with respect to their stated goals. We present a method based on image synthesis to produce stereoscopic pairs with ground truths, such as disparity and occlusion maps, reaching an accuracy of about 1e-6 pixels. An a priori noise estimate is also taken into account. This process allows us to deliver a new image database consisting of 66 stereo pairs together with their ground truths.
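As an illustration of how such ground truths are typically consumed, an estimated disparity map can be scored against the reference while masking the pixels flagged in the occlusion map. This is a minimal sketch, not part of the article's code; the function and argument names are hypothetical:

```python
# Sketch: score an estimated disparity map against a ground-truth map,
# ignoring pixels marked as occluded (illustrative, not the IPOL code).

def disparity_rmse(estimate, truth, occlusion_mask):
    """RMSE over non-occluded pixels. All three inputs are flat lists of
    equal length; occlusion_mask[i] is True where the truth is occluded."""
    errs = [(e - t) ** 2
            for e, t, occ in zip(estimate, truth, occlusion_mask)
            if not occ]
    if not errs:
        raise ValueError("no valid pixels to evaluate")
    return (sum(errs) / len(errs)) ** 0.5

# The occluded third pixel (error 6.0) does not affect the score.
print(disparity_rmse([1.0, 2.5, 3.0], [1.0, 2.0, 9.0], [False, False, True]))
```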
A Contrario 3D Point Alignment Detection Algorithm
http://www.ipol.im/pub/art/2017/214/
Álvaro Gómez, Gregory Randall, Rafael Grompone von Gioi (2017-12-29)
In this article we present an algorithm for the detection of perceptually relevant alignments in 3D point clouds. The algorithm is an extension of the algorithm developed by Lezama et al. [J. Lezama, J-M. Morel, G. Randall, R. Grompone von Gioi, A Contrario 2D Point Alignment Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, 37 (3), pp. 499-512, 2015] for the case of sets of 2D points. The algorithm is based on the a contrario detection theory that mathematically formalizes the non-accidentalness principle proposed for perception: an observed structure is relevant if it rarely occurs by chance. This framework has been widely used in different detection tasks and leads to algorithms with a single critical parameter to control the number of false detections.
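In a contrario detectors of this family, a candidate structure is accepted when its Number of False Alarms (NFA) — the number of tests times the probability that at least as many points fall on the structure by chance — is below a threshold. A minimal sketch of that computation with a binomial background model (illustrative; the article's actual test statistic is more elaborate):

```python
from math import comb

def binomial_tail(n, k, p):
    """P[B(n, p) >= k]: probability that at least k of n background points
    fall inside a candidate structure purely by chance."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

def nfa(n_tests, n, k, p):
    """Number of False Alarms. A structure is 'epsilon-meaningful' when
    nfa < epsilon (epsilon = 1 in most a contrario detectors)."""
    return n_tests * binomial_tail(n, k, p)

# 10 of 10 points inside a region of relative measure 0.5, over 100 tests:
print(nfa(100, 10, 10, 0.5))  # 0.09765625 < 1, hence meaningful
```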
Local Region Expansion: a Method for Analyzing and Refining Image Matches
http://www.ipol.im/pub/art/2017/154/
Erez Farhan, Elad Meir, Rami Hagege (2017-12-22)
We present a novel method for locating large amounts of local matches between images, with highly accurate localization. Point matching is one of the most fundamental tasks in computer vision, extensively used in applications such as object detection, object tracking and structure from motion. The major challenge in point matching is to preserve large numbers of accurate matches between corresponding scene locations under different geometric and radiometric conditions, while keeping the number of false positives low. Recent publications have shown that applying an affine transformation model on local regions is a particularly suitable approach for point matching. Yet, affine invariant methods are not used extensively for two reasons: first, because these methods are computationally demanding; and second because the derived affine estimations have limited accuracy. In this work, we propose a novel method of region expansion that enhances region matches detected by any state-of-the-art method. The method is based on accurate estimation of affine transformations, which are used to predict matching locations beyond initially detected matches. We use the improved estimations of affine transformations to locally verify tentative matches in an efficient way. We systematically reject false matches, while improving the localization of correct matches that are usually rejected by state-of-the-art methods.
Non-Local Patch-Based Image Inpainting
http://www.ipol.im/pub/art/2017/189/
Alasdair Newson, Andrés Almansa, Yann Gousseau, Patrick Pérez (2017-12-12)
Image inpainting is the process of filling in missing regions in an image in a plausible way.
In this contribution, we propose and describe an implementation of a patch-based image inpainting algorithm. The method is a two-dimensional version of our video inpainting algorithm proposed in
[A. Newson et al., Video inpainting of complex scenes, SIAM Journal on Imaging Sciences, 7 (2014)].
The algorithm attempts to minimize a highly non-convex functional, first introduced by Wexler et al. in
[Y. Wexler et al., Space-time video completion, CVPR (2004)].
The functional
specifies that a good solution to the inpainting problem should be an image where each patch is very similar
to its nearest neighbor in the unoccluded area.
Iterations are performed in a multi-scale framework which yields globally coherent results. In this manner, two of the major goals of image inpainting, the correct reconstruction
of textures and structures, are addressed.
We address a series of important practical issues which arise when using such an
approach. In particular, we reduce execution times by using the PatchMatch
[C. Barnes, PatchMatch: a randomized correspondence algorithm for structural image editing, ACM Transactions on Graphics, (2009)]
algorithm for nearest neighbor searches, and we propose a modified
patch distance which improves the comparison of textured patches.
We address the crucial issue of initialization and the choice of the
number of pyramid levels, two points which are rarely discussed in
such approaches.
We provide several examples which illustrate the advantages of our algorithm,
and compare our results with those of state-of-the-art methods.
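The core primitive of such patch-based inpainting is a patch distance and a nearest-neighbor search over the unoccluded area. The sketch below uses a brute-force exhaustive search where the article uses PatchMatch, and a plain SSD distance where the article proposes a modified one; names are illustrative:

```python
def patch_ssd(img, y0, x0, y1, x1, s):
    """Sum of squared differences between the two s-by-s patches of img
    whose top-left corners are (y0, x0) and (y1, x1)."""
    return sum((img[y0 + dy][x0 + dx] - img[y1 + dy][x1 + dx]) ** 2
               for dy in range(s) for dx in range(s))

def nearest_patch(img, y, x, s, candidates):
    """Best-matching top-left corner among candidate (row, col) positions,
    i.e. the exhaustive search that PatchMatch accelerates."""
    return min(candidates, key=lambda c: patch_ssd(img, y, x, c[0], c[1], s))

img = [[0, 0, 5, 5, 0, 0],
       [0, 0, 5, 5, 0, 0]]
print(nearest_patch(img, 0, 0, 2, [(0, 2), (0, 4)]))  # (0, 4), an exact match
```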
A Sub-Pixel Edge Detector: an Implementation of the Canny/Devernay Algorithm
http://www.ipol.im/pub/art/2017/216/
Rafael Grompone von Gioi, Gregory Randall (2017-11-27)
An image edge detector is described which produces chained edge points with sub-pixel accuracy. The method incorporates the main ideas of the classic Canny and Devernay algorithms. The analysis shows that a slight modification to the original formulation improves the accuracy of the edge points.
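The sub-pixel refinement in such detectors classically fits a parabola through three gradient-magnitude samples around a local maximum and places the edge point at the parabola's vertex. A sketch of that standard correction (illustrative; the article analyzes and slightly modifies Devernay's formulation):

```python
def subpixel_offset(g_prev, g, g_next):
    """Offset in (-0.5, 0.5) of the true maximum of a parabola through the
    samples (-1, g_prev), (0, g), (1, g_next), where g is a local maximum
    of the gradient magnitude along the gradient direction."""
    denom = g_prev - 2.0 * g + g_next
    if denom == 0.0:
        return 0.0
    return 0.5 * (g_prev - g_next) / denom

print(subpixel_offset(1.0, 4.0, 3.0))  # 0.25: the edge lies past the pixel center
```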
Comparison of Motion Smoothing Strategies for Video Stabilization using Parametric Models
http://www.ipol.im/pub/art/2017/209/
Javier Sánchez (2017-11-25)
This paper is devoted to a rigorous implementation and to an exhaustive comparison of video stabilization techniques. These techniques aim at removing the undesirable effects of camera shake. They first estimate a global transform from frame to frame, which can be a translation, a similarity, an affine map or a homography. This generates a signal that can be smoothed and used to compensate the noisy transform signal. This paper compares all classic smoothing methods and their boundary conditions. It also analyzes two algorithms to crop the video after stabilization.
The stabilization results are displayed in a scale-space form, which permits extracting valuable information about the ego-motion, such as its frequencies and general tendencies.
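For the simplest parametric model, a per-frame translation, the smoothing-and-compensation scheme can be sketched as below. This is a toy 1D version with one boundary condition (replicate), not the paper's implementation:

```python
from math import exp

def smooth(signal, sigma):
    """Gaussian convolution of a 1D motion signal with replicate boundary
    handling, one of the boundary strategies such methods compare."""
    r = max(1, int(3 * sigma))
    kernel = [exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    norm = sum(kernel)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in zip(range(i - r, i + r + 1), kernel):
            acc += w * signal[min(max(j, 0), n - 1)]  # clamp = replicate
        out.append(acc / norm)
    return out

def compensation(path, sigma):
    """Correction to apply per frame: measured path minus smoothed path."""
    return [p - s for p, s in zip(path, smooth(path, sigma))]
```

A constant path (no shake) yields zero compensation, as expected.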
Multi-Scale DCT Denoising
http://www.ipol.im/pub/art/2017/201/
Nicola Pierazzo, Jean-Michel Morel, Gabriele Facciolo (2017-10-28)
DCT denoising is a classic low-complexity method built into the JPEG compression standard. Once made translation invariant, this algorithm was still shown to be competitive at the beginning of this century. Since then, it has been outperformed by patch-based methods, which are far more complex. This paper proposes a two-step multi-scale version of the algorithm that boosts its performance and reduces its artifacts.
The multi-scale strategy decomposes the image into a dyadic DCT pyramid, which keeps the noise white at all scales. The single-scale denoising is then applied at all scales, thus giving multiple denoised versions of the low-frequency coefficients of the image. A 'multi-scale fusion' of these multiple estimates avoids the ringing artifacts resulting from the pyramid recomposition. The final algorithm attains a good PSNR and much improved visual image quality. It is shown to have a deficit of only 1 dB with respect to state-of-the-art algorithms, but its complexity is two orders of magnitude lower.
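The single-scale building block is hard thresholding of DCT coefficients. A pure-Python 1D sketch (the article works on 2D patches and adds the multi-scale pyramid and fusion on top of this primitive):

```python
from math import cos, pi, sqrt

def dct(x):
    """Orthonormal DCT-II of a 1D signal (naive O(n^2) version)."""
    n = len(x)
    f = [sqrt(1.0 / n) if k == 0 else sqrt(2.0 / n) for k in range(n)]
    return [f[k] * sum(x[i] * cos(pi * (i + 0.5) * k / n) for i in range(n))
            for k in range(n)]

def idct(c):
    """Inverse transform (orthonormal DCT-III)."""
    n = len(c)
    f = [sqrt(1.0 / n) if k == 0 else sqrt(2.0 / n) for k in range(n)]
    return [sum(f[k] * c[k] * cos(pi * (i + 0.5) * k / n) for k in range(n))
            for i in range(n)]

def dct_denoise(x, sigma, thr=3.0):
    """Zero AC coefficients below thr * sigma (the DC term is kept)."""
    c = dct(x)
    c = [c[0]] + [v if abs(v) > thr * sigma else 0.0 for v in c[1:]]
    return idct(c)
```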
The Bilateral Filter for Point Clouds
http://www.ipol.im/pub/art/2017/179/
Julie Digne, Carlo de Franchis (2017-10-28)
Point sets obtained by 3D scanners are often corrupted with noise, which can have several causes, such as a tangential acquisition direction, changing environmental lighting or a reflective object material. It is thus crucial to design efficient tools to remove noise from the acquired data without removing important information such as sharp edges or shape details. To do so, Fleishman et al. introduced a bilateral filter for meshes adapted from the bilateral filter for gray level images. This anisotropic filter denoises a point with respect to its neighbors by considering not only the distance from the neighbors to the point but also the distance along a normal direction. This simple fact allows for a much better preservation of sharp edges. In this paper, we analyze a parallel implementation of the bilateral filter adapted for point clouds.
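The per-point update can be sketched as follows: each neighbor contributes its height along the point's normal, weighted by both its Euclidean distance and that height. This is a Fleishman-style sketch with hypothetical names, not the article's parallel C++ code:

```python
from math import exp

def bilateral_point(p, normal, neighbors, sigma_d, sigma_n):
    """Move point p along its unit normal by a bilateral average of the
    neighbor heights <q - p, n>; sigma_d weights spatial distance,
    sigma_n weights the height along the normal (edge preservation)."""
    wsum = dsum = 0.0
    for q in neighbors:
        diff = [qi - pi for qi, pi in zip(q, p)]
        dist2 = sum(d * d for d in diff)
        h = sum(d * ni for d, ni in zip(diff, normal))  # height along normal
        w = exp(-dist2 / (2 * sigma_d ** 2)) * exp(-h * h / (2 * sigma_n ** 2))
        wsum += w
        dsum += w * h
    if wsum == 0.0:
        return list(p)
    t = dsum / wsum
    return [pi + t * ni for pi, ni in zip(p, normal)]
```

An outlier lifted above a flat neighborhood is pulled back onto the plane.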
An Algorithm for Gaussian Texture Inpainting
http://www.ipol.im/pub/art/2017/198/
Bruno Galerne, Arthur Leclaire (2017-10-08)
Inpainting consists in computing a plausible completion of missing parts of an image given the available content.
In the case of images composed of a homogeneous microtexture, inpainting can be addressed by relying on Gaussian conditional simulation.
In this paper we describe an algorithm that performs inpainting by Gaussian conditional simulation in a scalable way.
We provide a detailed numerical study of this algorithm.
2D Filtering of Curvilinear Structures by Ranking the Orientation Responses of Path Operators (RORPO)
http://www.ipol.im/pub/art/2017/207/
Odyssee Merveille, Benoît Naegel, Hugues Talbot, Laurent Najman, Nicolas Passat (2017-09-30)
We present a filtering method for 2D curvilinear structures, called RORPO (Ranking the Orientation Responses of Path Operators). RORPO is based on path operators, a recently developed family of mathematical morphology filters. Compared with state of the art methods, RORPO is non-local and well adapted to the intrinsic anisotropy of curvilinear structures. Since RORPO does not depend on a linear scale-space framework, it tends to preserve object contours without a blurring effect. Due to these properties, RORPO is a useful low-level filter and can also serve as a curvilinear prior in segmentation frameworks. In this article, after introducing RORPO, we develop the 2D version of the algorithm and present a few applications.
Hyperspectral Image Classification Using Graph Clustering Methods
http://www.ipol.im/pub/art/2017/204/
Zhaoyi Meng, Ekaterina Merkurjev, Alice Koniges, Andrea L. Bertozzi (2017-08-18)
Hyperspectral imagery is a challenging modality due to the dimension of the pixels, which can range from hundreds to over a thousand frequencies depending on the sensor. Most methods in the literature reduce the dimension of the data using a method such as principal component analysis; however, this procedure can lose information. More recently, methods have been developed to address classification of large datasets in high dimensions. This paper presents two classes of graph-based classification methods for hyperspectral imagery. Using the full dimensionality of the data, we consider a similarity graph based on pairwise comparisons of pixels. The graph is segmented using a pseudospectral algorithm for graph clustering that requires information about the eigenfunctions of the graph Laplacian but does not require computation of the full graph. We develop a parallel version of the Nyström extension method to randomly sample the graph to construct a low rank approximation of the graph Laplacian. With at most a few hundred eigenfunctions, we can implement the clustering method designed to solve a variational problem for a graph-cut-based semi-supervised or unsupervised classification problem. We implement OpenMP directive-based parallelism in our algorithms and show performance improvement and strong, almost ideal, scaling behavior. The method can handle very large datasets, including a video sequence with over a million pixels, and the problem of segmenting a data set into a pre-determined number of classes.
The Image Curvature Microscope: Accurate Curvature Computation at Subpixel Resolution
http://www.ipol.im/pub/art/2017/212/
Adina Ciomaga, Pascal Monasse, Jean-Michel Morel (2017-07-27)
We detail in this paper the numerical implementation of the so-called image curvature microscope, an algorithm that computes accurate image curvatures at subpixel resolution and yields a curvature map consistent with our visual perception. In contrast to standard methods, which compute the curvature by a finite difference scheme, the curvatures are evaluated directly on the level lines of the bilinearly interpolated image, after their independent smoothing, a step necessary to remove pixelization artifacts. The smoothing step consists in the affine erosion of the level lines through a geometric scheme, and can be applied in parallel to all level lines. The online algorithm allows the user to visualize the image of curvatures at different resolutions, as well as the set of level lines before and after smoothing.
Iterative Hough Transform for Line Detection in 3D Point Clouds
http://www.ipol.im/pub/art/2017/208/
Christoph Dalitz, Tilman Schramke, Manuel Jeltsch (2017-07-18)
The Hough transform is a voting scheme for locating geometric objects in point clouds. This paper describes its application for detecting lines in three dimensional point clouds. For parameter quantization, a recently proposed method for Hough parameter space regularization is used. The voting process is done in an iterative way by selecting the line with the most votes and removing the corresponding points in each step. To overcome the inherent inaccuracies of the parameter space discretization, each line is estimated with an orthogonal least squares fit among the candidate points returned from the Hough transform.
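The final refinement step, an orthogonal least squares line fit, amounts to anchoring the line at the centroid of the candidate points and taking the dominant eigenvector of their scatter matrix as the direction. A minimal sketch using power iteration (illustrative; names are hypothetical):

```python
def fit_line_3d(points, iters=200):
    """Orthogonal least-squares 3D line fit: returns (anchor, direction),
    where anchor is the centroid and direction is the dominant eigenvector
    of the scatter matrix, found by power iteration."""
    n = len(points)
    c = [sum(p[k] for p in points) / n for k in range(3)]
    centered = [[p[k] - c[k] for k in range(3)] for p in points]
    # Scatter matrix S = sum of outer products x x^T
    S = [[sum(x[i] * x[j] for x in centered) for j in range(3)]
         for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0.0:
            break
        v = [x / norm for x in w]
    return c, v
```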
Realistic Film Grain Rendering
http://www.ipol.im/pub/art/2017/192/
Alasdair Newson, Noura Faraj, Bruno Galerne, Julie Delon (2017-07-17)
Film grain is the unique texture which results from the silver halide based analog photographic process. Film emulsions are made up of microscopic photo-sensitive silver grains, and the fluctuating density of these grains leads to what is known as film grain. This texture is valued by photographers and film directors for its artistic value. We present two implementations of a film grain rendering algorithm based on a physically realistic film grain model. The rendering algorithm uses a Monte Carlo simulation to determine the value of each output rendered pixel. A significant advantage of using this model is that the images can be rendered at any resolution, so that arbitrary zoom factors are possible, even to the point where the individual grains can be observed. We provide a method to choose the best implementation automatically, with respect to execution time.
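The Monte Carlo evaluation of one output pixel can be sketched as estimating the fraction of the pixel area covered by at least one grain disk. This toy version omits the stochastic grain model and filtering of the actual algorithm; names are hypothetical:

```python
import random

def render_pixel(grains, x, y, n_samples=256, rng=random):
    """Monte Carlo estimate of the grain coverage of the unit pixel with
    top-left corner (x, y); grains is a list of (cx, cy, radius) disks
    drawn from the film grain (Boolean) model."""
    hits = 0
    for _ in range(n_samples):
        sx, sy = x + rng.random(), y + rng.random()
        if any((sx - cx) ** 2 + (sy - cy) ** 2 <= r * r
               for cx, cy, r in grains):
            hits += 1
    return hits / n_samples
```

Because the grains live in continuous coordinates, the same routine works at any zoom factor, which is the advantage the abstract highlights.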
Vanishing Point Detection in Urban Scenes Using Point Alignments
http://www.ipol.im/pub/art/2017/148/
José Lezama, Gregory Randall, Rafael Grompone von Gioi (2017-07-13)
We present a method for the automatic detection of vanishing points in urban scenes based
on finding point alignments in a dual space, where converging lines in the image are mapped
to aligned points. To compute this mapping the recently introduced PClines transformation
is used. A robust point alignment detector is run to detect clusters of aligned points in the
dual space. Finally, a post-processing step discriminates relevant from spurious vanishing point
detections with two options: using a simple hypothesis of three orthogonal vanishing points
(Manhattan-world) or the hypothesis that one vertical and multiple horizontal vanishing points
exist. Qualitative and quantitative experimental results are shown. On two public standard
datasets, the method achieves state-of-the-art performance. Finally, an optional procedure for
accelerating the method is presented.
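The key property exploited here can be illustrated with the classical slope-intercept dual rather than PClines (which the article uses because it also handles vertical lines): a line y = m*x + b through a common point (x0, y0) satisfies b = y0 - m*x0, so the dual points (m, b) of concurrent lines are collinear. A self-contained sketch:

```python
def dual_point(m, b):
    """Slope-intercept dual: the line y = m*x + b maps to the point (m, b).
    (Illustrative stand-in for the PClines mapping used in the article.)"""
    return (m, b)

def collinear(p, q, r, eps=1e-9):
    """Cross-product test for collinearity of three 2D points."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (q[1] - p[1]) * (r[0] - p[0])) < eps

# Four lines through the common point (vanishing point) (2, 3):
x0, y0 = 2.0, 3.0
duals = [dual_point(m, y0 - m * x0) for m in (-1.0, 0.0, 0.5, 2.0)]
print(all(collinear(duals[0], duals[1], p) for p in duals[2:]))  # True
```

Detecting the vanishing point thus reduces to detecting a point alignment in the dual space.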
A Fast Approximation of the Bilateral Filter using the Discrete Fourier Transform
http://www.ipol.im/pub/art/2017/184/
Pravin Nair, Anmol Popli, Kunal N. Chaudhury (2017-05-23)
The bilateral filter is a popular non-linear smoother that has applications in image processing, computer vision, and computational photography.
The novelty of the filter is that a range kernel is used in tandem with a spatial kernel for performing edge-preserving smoothing, where both kernels are usually Gaussian.
A direct implementation of the bilateral filter is computationally expensive, and several fast approximations have been proposed to address this problem.
In particular, it was recently demonstrated in a series of papers that a fast and accurate approximation of the bilateral filter can be obtained by approximating the Gaussian range kernel using polynomials and trigonometric functions. By adopting some of the ideas from this line of work, we propose a fast algorithm based on the discrete Fourier transform of the samples of the range kernel. We develop a parallel C implementation of the resulting algorithm for Gaussian kernels, and analyze the effect of various extrinsic and intrinsic parameters on the approximation quality and the run time. A key component of the implementation is the recursive Gaussian filtering of Deriche and Young.
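For reference, the brute-force computation that such fast methods approximate looks like this in 1D (a sketch, not the article's parallel C code): each output sample is a normalized average in which the range kernel suppresses neighbors with very different values, preserving edges.

```python
from math import exp

def bilateral_1d(signal, sigma_s, sigma_r, radius):
    """Direct O(n * radius) bilateral filter of a 1D signal with Gaussian
    spatial (sigma_s) and range (sigma_r) kernels."""
    n = len(signal)
    out = []
    for i in range(n):
        wsum = acc = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) \
                * exp(-((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            acc += w * signal[j]
        out.append(acc / wsum)
    return out
```

With a small sigma_r, a step edge passes through almost untouched, which is the edge-preserving behavior the abstract refers to.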
Data Adaptive Dual Domain Denoising: a Method to Boost State of the Art Denoising Algorithms
http://www.ipol.im/pub/art/2017/203/
Nicola Pierazzo, Gabriele Facciolo (2017-05-23)
This article presents DA3D (Data Adaptive Dual Domain Denoising), a 'last step denoising' method that takes as input a noisy image and, as a guide, the result of any state-of-the-art denoising algorithm. The method performs frequency domain shrinkage on shape- and data-adaptive patches.
DA3D does not process all the image samples, which allows it to use large patches (64 x 64 pixels). The shape- and data-adaptive patches are dynamically selected, effectively concentrating the computations on areas with more details, thus accelerating the process considerably.
DA3D also reduces the staircasing artifacts sometimes present in smooth parts of the guide images. The effectiveness of DA3D is confirmed by extensive experimentation. DA3D improves the result of almost all state-of-the-art methods, and this improvement requires little additional computation time.
An Unsupervised Algorithm for Detecting Good Continuation in Dot Patterns
http://www.ipol.im/pub/art/2017/176/
José Lezama, Gregory Randall, Jean-Michel Morel, Rafael Grompone von Gioi (2017-04-23)
In this article we describe an algorithm for the automatic detection of perceptually relevant configurations of 'good continuation' of points in 2D point patterns. The algorithm is based on the 'a contrario' detection theory and on the assumption that 'good continuations' of points are locally quasi-symmetric. The algorithm has only one critical parameter, which controls the number of false detections.
Midway Video Equalization
http://www.ipol.im/pub/art/2017/181/
Javier Sánchez (2017-04-23)
This article presents an implementation of the 'Midway Equalization' method for videos. This technique allows us to modify the image histograms so that they present similar luminances. We propose two algorithms: the first one based on histogram inversion and the second one on the sorting of image intensities. The former computes the histograms and then finds the contrast change functions by convolving the inverse histograms with a Gaussian function. The latter starts by sorting the pixels of each image by intensity; the temporal signals, composed of all gray levels of the same rank, are then convolved with a Gaussian function. In this sorting method, the resulting histograms are more similar and homogeneous. Nevertheless, the histogram strategy is faster and provides good results in general. The algorithms include a 'dithering' option for reducing quantization artifacts. The whole implementation depends on a single parameter, the standard deviation used for the Gaussian convolutions. The experiments show several examples, including the quantization artifacts that appear in some situations and the benefits of dithering. We observe that these artifacts are usually more pronounced in the histogram method.
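The sorting strategy reduces, for two frames, to replacing pixels of equal intensity rank by the average of their values, so both frames end up with the same 'midway' histogram. In the video method a Gaussian convolution across time replaces this plain average; the two-image sketch below is illustrative, with hypothetical names:

```python
def midway_pair(a, b):
    """Midway equalization of two equal-size images given as flat lists:
    pixels of equal rank are replaced by the average of their values."""
    ra = sorted(range(len(a)), key=lambda i: a[i])  # indices sorted by value
    rb = sorted(range(len(b)), key=lambda i: b[i])
    mid = [(a[i] + b[j]) / 2.0 for i, j in zip(ra, rb)]
    out_a, out_b = [0.0] * len(a), [0.0] * len(b)
    for k, (i, j) in enumerate(zip(ra, rb)):
        out_a[i] = mid[k]
        out_b[j] = mid[k]
    return out_a, out_b
```

After the transform, the two outputs contain exactly the same multiset of values, i.e. identical histograms.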
Implementation of Local Fourier Burst Accumulation for Video Deblurring
http://www.ipol.im/pub/art/2017/197/
Jérémy Anger, Enric Meinhardt-Llopis (2017-03-18)
This article presents Local Fourier Burst Accumulation, a recent motion deblurring method.
This method processes image bursts and can be naturally extended to video deblurring. The
algorithm first registers each frame to its neighboring frames by consistency-controlled optical
flow and then fuses the frames temporally by a weighted average in the Fourier domain. As
the method does not require the estimation of the blur kernel, it is less sensitive to common
deblurring issues such as ringing. The algorithm is detailed, and a comparison with the results of the original authors is performed.
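The Fourier-domain fusion can be sketched on 1D signals: at each frequency, the registered frames are averaged with weights proportional to their spectral magnitudes raised to a power p, so the sharpest frame dominates where it retains the most energy. A self-contained naive-DFT sketch (p = 11 is a typical choice in Fourier Burst Accumulation; the article works on 2D windows):

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform of a real 1D signal."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse DFT; returns the real part of the reconstruction."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def fba(frames, p=11):
    """Fourier Burst Accumulation on registered 1D frames: per-frequency
    weighted average with weights |F_i(w)|^p."""
    specs = [dft(f) for f in frames]
    n = len(frames[0])
    fused = []
    for k in range(n):
        ws = [abs(s[k]) ** p for s in specs]
        tot = sum(ws) or 1.0  # all-zero weights only occur for a zero bin
        fused.append(sum(w * s[k] for w, s in zip(ws, specs)) / tot)
    return idft(fused)
```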
Robust Phase Retrieval with the Swept Approximate Message Passing (prSAMP) Algorithm
http://www.ipol.im/pub/art/2017/178/
Boshra Rajaei, Sylvain Gigan, Florent Krzakala, Laurent Daudet (2017-01-25)
In phase retrieval, the goal is to recover a complex signal from the magnitudes of its linear measurements. While many well-known algorithms guarantee deterministic recovery of the unknown signal using i.i.d. random measurement matrices, they suffer from serious convergence issues with some ill-conditioned measurement matrices.
As an example, this happens in optical imagers using binary intensity-only spatial light modulators to shape the input wavefront. The problem of ill-conditioned measurement matrices has also been a topic of interest for compressed sensing researchers during the past decade.
In this paper, using recent advances in generic compressed sensing, we propose a new phase retrieval algorithm that behaves well for a large class of measurement matrices, including Gaussian and Bernoulli binary i.i.d. random matrices, with both sparse and dense input signals. This algorithm is also robust to the strong noise levels found in some imaging applications.
Cradle Removal in X-Ray Images of Panel Paintings
http://www.ipol.im/pub/art/2017/174/
Gábor Fodor, Bruno Cornelis, Rujie Yin, Ann Dooms, Ingrid Daubechies (2017-01-13)
We address the problem of mitigating the visually displeasing effects of cradling in X-ray images of panel paintings. The proposed algorithm consists of three stages. In the first stage the location of the cradling is detected semi-automatically and the grayscale inconsistency, caused by the thickness of the cradling, is adjusted. In a second stage we use a blind source separation method to decompose the X-ray image into a so-called cartoon part and a texture part, where the latter contains mostly the wood grain from both the panel and the cradling. In the third and final stage the algorithm tries to learn the distinction between the texture patterns that originate from the cradling and those from other components such as the panel and/or the painting. The goal of the proposed research is to improve the readability of X-ray images of paintings for art experts.
Efros and Freeman Image Quilting Algorithm for Texture Synthesis
http://www.ipol.im/pub/art/2017/171/
Lara Raad, Bruno Galerne (2017-01-10)
Exemplar-based texture synthesis is defined as the process of generating, from an input texture sample, new texture images that are perceptually equivalent to the input. Efros and Freeman's method is a non-parametric patch-based method which computes an output texture image by quilting together patches taken from the input sample. The main innovation of their work lies in the stitching technique, which significantly reduces the transition effect between patches. In this paper, we propose a detailed analysis and implementation of their work. We provide a complete mathematical description of the dynamic programming problem used for the quilting step as well as implementation details. Additionally, we propose a partially parallel version of the quilting technique.
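The stitching step, the minimum error boundary cut, is a dynamic program over the overlap error surface: each row extends the cheapest seam from one of the three neighboring columns above. A minimal sketch for a vertical cut (illustrative; names are hypothetical):

```python
def min_boundary_cut(err):
    """Minimum error boundary cut through an overlap error surface err
    (a list of rows); returns the seam column chosen in each row."""
    h, w = len(err), len(err[0])
    cost = [row[:] for row in err]
    # Forward pass: accumulate the cheapest path cost from the top row.
    for i in range(1, h):
        for j in range(w):
            cost[i][j] += min(cost[i - 1][max(0, j - 1):min(w, j + 2)])
    # Backtrack from the cheapest bottom cell.
    j = min(range(w), key=lambda c: cost[h - 1][c])
    seam = [j]
    for i in range(h - 1, 0, -1):
        j = min(range(max(0, j - 1), min(w, j + 2)),
                key=lambda c: cost[i - 1][c])
        seam.append(j)
    return seam[::-1]

# The seam follows the low-error valley in the middle column:
print(min_boundary_cut([[5, 0, 5], [5, 0, 5], [5, 0, 5]]))  # [1, 1, 1]
```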
Analysis and Extension of the PCA Method, Estimating a Noise Curve from a Single Image
http://www.ipol.im/pub/art/2016/124/
Miguel Colom, Antoni Buades (2016-12-28)
In the article 'Image Noise Level Estimation by Principal Component Analysis', S. Pyatykh, J. Hesser, and L. Zheng propose a new method to estimate the variance of the noise in an image from the eigenvalues of the covariance matrix of the overlapping blocks of the noisy image.
Instead of using all the patches of the noisy image, the authors propose an iterative strategy to adaptively choose the optimal set containing the patches with lowest variance. Although the method measures uniform Gaussian noise, it can be easily adapted to deal with signal-dependent noise, such as the realistic Poisson noise produced by the CMOS or CCD sensor of a digital camera.
An Iterative Optimization Algorithm for Lens Distortion Correction Using Two-Parameter Models
http://www.ipol.im/pub/art/2016/130/
Daniel Santana-Cedrés, Luis Gómez, Miguel Alemán-Flores, Agustín Salgado, Julio Esclarín, Luis Mazorra, Luis Álvarez (2016-12-17)
We present a method for the automatic estimation of two-parameter radial distortion models, considering polynomial as well as division models. The method first detects the longest distorted lines within the image by applying the Hough transform enriched with a radial distortion parameter. From these lines, the first distortion parameter is estimated; we then initialize the second distortion parameter to zero, and the two-parameter model is embedded into an iterative nonlinear optimization process to improve the estimation. This optimization aims at reducing the distance from the edge points to the lines, adjusting the two distortion parameters as well as the coordinates of the center of distortion. Furthermore, this makes it possible to detect more points belonging to the distorted lines, so that the Hough transform is iteratively repeated to extract a better set of lines until no improvement is achieved. We present some experiments on real images with significant distortion to show the ability of the proposed approach to automatically correct this type of distortion, as well as a comparison between the polynomial and division models.
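For the polynomial variant, the two-parameter radial model and its inversion by fixed-point iteration can be sketched as follows. This is a generic illustration of the model class, not the article's optimization code; names and the inversion scheme are assumptions:

```python
def distort(x, y, k1, k2, cx=0.0, cy=0.0):
    """Two-parameter polynomial radial distortion:
    p_d = c + (p_u - c) * (1 + k1*r^2 + k2*r^4), with r = |p_u - c|."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    s = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + s * dx, cy + s * dy

def undistort(x, y, k1, k2, cx=0.0, cy=0.0, iters=20):
    """Invert the model by fixed-point iteration (no closed form for two
    parameters); the article instead optimizes k1, k2 and the center so
    that detected lines become straight."""
    ux, uy = x, y
    for _ in range(iters):
        dxp, dyp = distort(ux, uy, k1, k2, cx, cy)
        ux += x - dxp
        uy += y - dyp
    return ux, uy
```

For moderate distortion the iteration is a contraction, so a round trip through distort and undistort recovers the original point.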