Co-author: Martin Welk (UMIT Hall, Austria) Image enhancement with forward-and-backward (FAB) diffusion is numerically very challenging due to its negative diffusivities. As a remedy, we first extend the explicit nonstandard scheme by Welk et al. (2009) from the 1D scenario to the practically relevant two-dimensional setting. We prove that under a fairly severe time step restriction, this 2D scheme preserves a maximum-minimum principle. Moreover, we find an interesting Lyapunov sequence which guarantees convergence to a flat steady state. Since a global application of the time step size restriction leads to very slow algorithms and is more restrictive than necessary for most pixels, we introduce a much more efficient scheme with locally adapted time step sizes. It applies diffusive interactions of adjacent pixel pairs in a randomized order and adapts the time step size locally. These space-variant time steps are synchronized at sync times which are determined by stability properties of the explicit forward diffusion scheme. Experiments show that our novel two-pixel scheme makes it possible to compute FAB diffusion with guaranteed stability in the maximum norm at a speed that can be three orders of magnitude higher than its explicit counterpart with a global time step size.
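To fix ideas, the following is a minimal sketch of the naive explicit 1D baseline that the nonstandard and two-pixel schemes improve upon. The diffusivity is an illustrative forward-and-backward choice in the spirit of the FAB literature (positive, i.e. smoothing, for small gradients; negative, i.e. sharpening, for large ones); the parameters lam_f and lam_b and the step size are assumptions, not the authors' values.

```python
import numpy as np

def fab_diffusivity(s2, lam_f=1.0, lam_b=4.0):
    # Illustrative FAB diffusivity: positive (smoothing) for small
    # squared gradients s2, negative (sharpening) for larger ones.
    # lam_f and lam_b are hypothetical contrast parameters.
    return 2.0 / (1.0 + s2 / lam_f**2) - 1.0 / (1.0 + s2 / lam_b**2)

def fab_explicit_1d(u, tau=0.01, steps=100):
    # Naive explicit 1D scheme with one small global time step tau for
    # all pixels -- the slow baseline the locally adaptive scheme avoids.
    u = np.asarray(u, dtype=float).copy()
    for _ in range(steps):
        du = np.diff(u)                      # u_{i+1} - u_i
        flux = fab_diffusivity(du**2) * du   # pairwise diffusive flux
        u[:-1] += tau * flux                 # flux form: the mean grey
        u[1:]  -= tau * flux                 # value is conserved exactly
    return u
```

Note how the pairwise flux form makes the average grey value an invariant, which is the starting point for the randomized two-pixel interactions described above.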

This talk discusses two seemingly unrelated data analysis methodologies: kernel clustering and graphical models. Clustering is widely used for general data, where kernel methods are particularly popular due to their discriminating power. Graphical models such as Markov Random Fields (MRF) and related continuous geometric methods represent the state-of-the-art regularization methodology for image segmentation. While both clustering and regularization models are very widely used in machine learning and computer vision, they had not been combined before due to significant differences in the corresponding optimization, e.g. spectral relaxation vs. combinatorial methods for submodular optimization and its approximations. This talk reviews the general properties of kernel clustering and graphical models, discusses their limitations (including newly discovered "density biases" in kernel methods), and proposes a general unified framework based on our new bound optimization algorithm. In particular, we show that popular MRF potentials introduce principled geometric and contextual constraints into clustering, while standard kernel methodology allows graphical models to work with arbitrary high-dimensional features.

The talk will survey the idea of using over-parametrization in variational methods, in cases when some parameterized models for the signals of interest are available. Recently such methods were observed to yield state-of-the-art results in recovering optic flow fields and in some signal segmentation problems.

In this talk we will discuss nonlinear spectral decompositions in Banach spaces, which shed new light on multiscale methods in imaging and open new possibilities for filtering techniques. We provide a novel geometric interpretation of nonlinear eigenvalue problems in Banach spaces and provide conditions under which gradient flows for norms or seminorms yield a spectral decomposition. We will see that under these conditions standard variational schemes are equivalent to the gradient flows for arbitrarily large time steps, recovering previous results, e.g. for the one-dimensional total variation flow, as special cases.

The talk is based on joint work with Guy Gilboa, Michael Moeller, Martin Benning, Daniel Cremers, and Lina Eckardt.

A brief overview will be given on current results in nonlinear eigenvalue analysis for one-homogeneous functionals. We will then discuss how one can go beyond the convex framework by analyzing decay patterns of iterative filtering, based on sparsity constraints.

This work provides a new method to colorize gray-scale images. While the reverse operation is straightforward, the colorization process is an ill-posed problem that requires some priors. In the literature two classes of approaches exist. The first class includes manual methods that require the user to manually add colors on the image to be colorized. The second class includes exemplar-based approaches where a color image, with a similar semantic content, is provided as input to the method. These two types of priors have their own advantages and drawbacks. In this work, a new variational framework for exemplar-based colorization is proposed. A non-local approach is used to find relevant colors in the source image in order to suggest colors for the gray-scale image. The spatial coherency of the result as well as the final color selection is provided by a non-convex variational framework based on a total variation, minimized with an efficient primal-dual algorithm. In this work, we also extend the proposed exemplar-based approach to combine both exemplar-based and manual methods, providing a single framework that unifies the advantages of both approaches. Finally, experiments and comparisons with state-of-the-art methods illustrate the efficiency of our method. This is joint work with Fabien Pierre, Aurélie Bugeau, Nicolas Papadakis, and Vinh Tong Ta.
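As a point of reference, the crudest exemplar-based baseline simply matches each gray value to the source pixel of closest luminance and copies its color; the talk's method replaces this with non-local patch matching and a TV-based variational selection step. The sketch below uses the RGB mean as a stand-in for luminance, which is an assumption for illustration only.

```python
import numpy as np

def exemplar_colorize(gray, src_rgb):
    # Naive exemplar-based colorization baseline: for every gray pixel,
    # find the source pixel with the closest luminance (approximated
    # here by the RGB mean) and copy its color. Brute force, for
    # illustration on tiny images only.
    src_lum = src_rgb.mean(axis=-1).ravel()
    flat = src_rgb.reshape(-1, 3)
    idx = np.abs(gray.ravel()[:, None] - src_lum[None, :]).argmin(axis=1)
    return flat[idx].reshape(gray.shape + (3,))
```

This baseline produces spatially incoherent results on real images, which is precisely the motivation for the non-convex variational selection described above.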

In this joint work with T. Pock (TU Graz, Austria) we present a relaxation of line energies which depend on the curvature, such as the elastica functional, introduced in particular to complete contours in image inpainting problems. Our relaxation is convex and tight on C^2 curves.

In variational imaging and other inverse problem modeling, regularisation plays a major role. In recent years, high-order regularisers such as the mean curvature, the Gaussian curvature and Euler's elastica have been increasingly studied and applied, and many impressive improvements over the widely used gradient-based models have been reported.

Here we present some results from studying another class of high- and non-integer-order regularisers based on fractional order derivatives, and focus on two aspects of this class of models: (i) theoretical analysis and advantages; (ii) efficient algorithms. We found that models regularized by fractional order derivatives are convex in a suitable space and that methods exploiting structured matrices can be employed to design efficient algorithms. Applications to restoration and registration are illustrated. This opens many opportunities to apply these regularisers to a wide class of imaging problems.

Co-author: Hongpeng Sun (Renmin University of China) We present preconditioned and accelerated versions of the Douglas-Rachford (DR) splitting method for the solution of convex-concave saddle-point problems which often arise in variational imaging. The methods make it possible to replace the solution of a linear system in each iteration step of the corresponding DR iteration by approximate solvers without the need of controlling the error. These iterations are shown to converge in Hilbert space under minimal assumptions on the preconditioner and for any step size. Moreover, ergodic sequences associated with the iteration admit at least an O(1/N) convergence rate in terms of restricted primal-dual gaps. Further, strong convexity of one or both of the involved functionals allows for acceleration strategies that yield improved rates of O(1/N^2) and linear convergence for strong convexity of one and of both functionals, respectively. The methods are applied to non-smooth and convex variational imaging problems. We discuss denoising and deconvolution with L^2 and L^1 discrepancy and total variation (TV) as well as total generalized variation (TGV) penalty. Preconditioners which are specific to these problems are presented, the results of numerical experiments are shown, and the benefits of the respective preconditioned iterations are discussed.
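For orientation, here is the basic (plain, unpreconditioned and unaccelerated) Douglas-Rachford iteration on a toy composite problem whose solution is known in closed form; the preconditioned and accelerated variants of the talk build on exactly this splitting structure. The problem, step size and iteration count are illustrative choices.

```python
import numpy as np

def soft(v, t):
    # Proximal map of t*||.||_1 (component-wise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dr_l1_denoise(b, lam, iters=500, tau=1.0):
    # Plain Douglas-Rachford splitting for
    #   min_x 0.5*||x - b||^2 + lam*||x||_1,
    # a toy problem whose exact solution is soft(b, lam). Both proximal
    # maps are explicit here; in the imaging problems of the talk the
    # quadratic prox requires a linear solve, which the preconditioned
    # DR method replaces by a cheap approximate solver.
    z = np.zeros_like(b)
    for _ in range(iters):
        x = (z + tau * b) / (1.0 + tau)          # prox of tau*0.5||.-b||^2
        z = z + soft(2.0 * x - z, tau * lam) - x  # reflected prox of lam||.||_1
    return x
```

Since the quadratic term is strongly convex, the iterates converge linearly here, matching the accelerated regime discussed in the abstract.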

Minimal paths have long been used as an interactive tool to find edges or tubular structures as cost-minimizing curves. The user usually provides start and end points on the image and gets the minimal path as output. These minimal paths correspond to minimal geodesics according to some adapted metric. They are a way to find a (set of) curve(s) globally minimizing the geodesic active contours energy. The geodesic distance can be computed by solving the Eikonal equation with the efficient Fast Marching method. Different metrics can be adapted to various problems. In the past years we have introduced different extensions of these minimal paths that improve either the interactive aspects or the results. For example, the metric can take into account both the scale and orientation of the path; this leads to solving an anisotropic minimal path problem in a 2D or 3D+radius space. We recently introduced the use of Finsler metrics, which make it possible to take the local curvature into account in order to smooth the path. The approach can also be adapted to include a region term inside the closed curve formed by a set of minimal geodesics. Co-authors: Da Chen and J.-M. Mirebeau
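To illustrate the core idea, the following sketch computes a minimal path on a discrete cost grid. It uses Dijkstra's algorithm on a 4-connected graph as a simple stand-in for the Fast Marching solution of the Eikonal equation (Fast Marching works on the continuous equation and is more accurate); the path is backtracked from the end point through predecessors. The edge-weight convention is an assumption for illustration.

```python
import heapq
import numpy as np

def minimal_path(cost, start, end):
    # Dijkstra on a 4-connected grid, a discrete stand-in for solving
    # the Eikonal equation |grad U| = cost by Fast Marching. Each move
    # is weighted by the average cost of the two pixels it connects.
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if (i, j) == end:
            break
        if d > dist[i, j]:
            continue                     # stale queue entry
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + 0.5 * (cost[i, j] + cost[ni, nj])
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    prev[(ni, nj)] = (i, j)
                    heapq.heappush(pq, (nd, (ni, nj)))
    path = [end]                         # backtrack the minimal geodesic
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

Choosing the cost map, e.g. small values along image edges or vessels, is exactly the "adapted metric" design question discussed in the abstract.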

This talk will address several issues related to training neural networks using stochastic gradient methods. First, we'll talk about the difficulties of training in a distributed environment, and present a new method called centralVR for boosting the scalability of training methods. Then, we'll talk about the issue of automating stochastic gradient descent, and show that learning rate selection can be simplified using "Big Batch" strategies that adaptively choose minibatch sizes.
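A schematic version of the "Big Batch" idea on a least-squares problem might look as follows. The doubling schedule below is a crude proxy for the adaptive, variance-driven batch-size rule discussed in the talk, and all parameters are illustrative assumptions.

```python
import numpy as np

def big_batch_sgd(X, y, lr=0.1, init_batch=4, steps=60, seed=0):
    # Schematic "Big Batch" SGD for min_w 0.5*||Xw - y||^2: the
    # minibatch size grows over time. Here it doubles every 20 steps,
    # a fixed stand-in for the adaptive rule that monitors gradient
    # variance to decide when larger batches are needed.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    batch = init_batch
    for step in range(steps):
        idx = rng.choice(len(X), size=min(batch, len(X)), replace=False)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w -= lr * grad
        if (step + 1) % 20 == 0:
            batch *= 2        # larger batches reduce gradient noise late on
    return w
```

The appeal of the adaptive variant is that a single learning rate works across the run, since growing batches shrink the gradient noise instead of requiring a decaying step size.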

I will consider the Valued Constraint Satisfaction Problem (VCSP), whose goal is to minimize a sum of local terms where each term comes from a fixed set of functions (called a "language") over a fixed discrete domain. I will present recent results characterizing languages that can be solved using the basic LP relaxation. This includes languages consisting of submodular functions, as well as their generalizations.

One such generalization is the class of k-submodular functions. In the second part of the talk I will present an application of such functions in computer vision. Based on joint papers with Igor Gridchyn, Andrei Krokhin, Michal Rolinek, Johan Thapper and Stanislav Zivny.

Co-authors: Maryam Yashtini (Georgia Institute of Technology), Wei Zhu (The University of Alabama) Recent developments of fast algorithms, based on operator splitting, augmented Lagrangian, and alternating minimization, enabled us to revisit some of the variational image inpainting models. In this talk, we will present some fast algorithms for Euler's Elastica image inpainting model, and variational edge-weighted image colorization model based on chromaticity and brightness models. Main ideas of the models and algorithms, some analysis and numerical results will be presented.

This talk will present a set of sensitivity analysis and activity identification results for a class of convex functions with a strong geometric structure, that we coin "mirror-stratifiable". These functions are such that there is a bijection between a primal and a dual stratification of the space into partitioning sets, called strata. This pairing is crucial to track the strata that are identifiable by solutions of parametrized optimization problems or by iterates of optimization algorithms. This class of functions encompasses all regularizers routinely used in signal and image processing, machine learning, and statistics. We show that this "mirror-stratifiable" structure enjoys a nice sensitivity theory, allowing us to study stability of solutions of optimization problems to small perturbations, as well as activity identification of first-order proximal splitting-type algorithms.

Existing results in the literature typically assume that, under a non-degeneracy condition, the active set associated to a minimizer is stable to small perturbations and is identified in finite time by optimization schemes. In contrast, our results do not require any non-degeneracy assumption: in consequence, the optimal active set is not necessarily stable anymore, but we are able to track precisely the set of identifiable strata. We show that these results have crucial implications when solving challenging ill-posed inverse problems via regularization, a typical scenario where the non-degeneracy condition is not fulfilled. Our theoretical results, illustrated by numerical simulations, make it possible to characterize the instability behaviour of the regularized solutions, by locating the set of all low-dimensional strata that can potentially be identified by these solutions.

In this talk, I will introduce a novel low-dimensional manifold model for image processing problems. The model is based on the observation that for many natural images, the patch manifold usually has a low-dimensional structure. We therefore use the dimension of the patch manifold as a regularizer to recover the original image. Using formulas from differential geometry, the problem is reduced to solving a Laplace-Beltrami equation on the manifold, which is solved by the point integral method. Numerical tests show that this method gives very good results for image inpainting, denoising and super-resolution problems. This is joint work with Stanley Osher and Wei Zhu.

1. A well-known result of H. Attouch states that the Mosco convergence of a sequence of proper convex lower semicontinuous functions defined on a Hilbert space is equivalent to the pointwise convergence of the associated Moreau envelopes. In the present paper we generalize this result to Hadamard spaces. More precisely, while it has already been known that the Mosco convergence of a sequence of convex lower semicontinuous functions on a Hadamard space implies the pointwise convergence of the corresponding Moreau envelopes, the converse implication was an open question. We now fill this gap. Our result has several consequences. It implies, for instance, the equivalence of the Mosco and Frolik-Wijsman convergences of convex sets. As another application, we show that there exists a complete metric on the cone of proper convex lower semicontinuous functions on a separable Hadamard space such that a sequence of functions converges in this metric if and only if it converges in the sense of Mosco.

2. We extend the parallel Douglas-Rachford algorithm to the manifold-valued setting.

This talk is divided into three parts. In the first part, we will introduce the essential ideas in using augmented Lagrangian/operator-splitting techniques to obtain fast numerical algorithms for minimizing Euler's elastica energy. In the second part, we consider an Euler's elastica based image segmentation model. An interesting feature of this model lies in its preference for convex segmentation contours. However, due to the high-order and non-differentiable term, it is often nontrivial to minimize the associated functional. In this work, we propose using the augmented Lagrangian method to tackle the minimization problem. In particular, we design a novel augmented Lagrangian functional that deals with the mean curvature term differently from those in previous works. The new treatment reduces the number of Lagrange multipliers employed and, more importantly, it helps represent the curvature more effectively and faithfully. Numerical experiments validate the efficiency of the proposed augmented Lagrangian method and also demonstrate new features of this particular segmentation model, such as shape-driven and data-driven properties. In the third part, we will introduce some recent fast algorithms for minimizing Euler's elastica energy for interface problems. The method combines level set and binary representations of interfaces. The algorithm only needs to solve a Rudin-Osher-Fatemi problem and re-distance the level set function to minimize the elastica energy. The algorithm is easy to implement, fast and efficient. The content of this talk is based on joint work with Egil Bae, Tony Chan, Jinming Duan and Wei Zhu.
Related links:
1) ftp://ftp.math.ucla.edu/pub/camreport/cam17-36.pdf
2) https://www.researchgate.net/profile/Xue_Cheng_Tai/publication/312519936_Augmented_Lagrangian_method_for_an_Euler's_elastica_based_segmentation_model_that_promotes_convex_contours/links/58a1b9d292851c7fb4c1907f/Augmented-Lagrangian-method-for-an-Eulers-elastica-based-segmentation-model-that-promotes-convex-contours.pdf
3) https://www.researchgate.net/publication/257592616_Image_Segmentation_Using_Euler%27s_Elastica_as_the_Regularization.
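The Rudin-Osher-Fatemi subproblem mentioned in the third part can be solved, for instance, with Chambolle's dual projection algorithm; the sketch below is one standard (not the talk's own) realization for min_u TV(u) + ||u - f||^2/(2*lam), with illustrative parameter values.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary (last row/column zero).
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad (backward differences with boundary terms).
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0] = px[0]; dx[1:-1] = px[1:-1] - px[:-2]; dx[-1] = -px[-2]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_denoise(f, lam=0.2, iters=100, tau=0.125):
    # Chambolle's dual projection algorithm for the ROF model
    #   min_u TV(u) + ||u - f||^2 / (2*lam),
    # with the step size restriction tau <= 1/8 for stability.
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        norm = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * norm)
        py = (py + tau * gy) / (1.0 + tau * norm)
    return f - lam * div(px, py)       # primal solution from dual field p
```

In the interface algorithm above, this solver would be alternated with a re-distancing of the level set function.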

For many years, discrete optimization models such as conditional random fields (CRFs) have defined the state-of-the-art for classical correspondence problems such as motion and stereo. One of the most important ingredients in those models is the choice of the feature transform that is used to compute the similarity between image patches. For a long time, hand-crafted features such as the celebrated scale invariant feature transform (SIFT) defined the state-of-the-art. Triggered by the recent success of convolutional neural networks (CNNs), it is quite natural to learn such a feature transform from data. In this talk, I will show how to efficiently learn such CNN features from data using an end-to-end learning approach. It turns out that our learned models yield state-of-the-art results on a number of established benchmark databases.

In this talk, I will introduce a new directional regularization based on the total generalized variation (TGV), which is very useful for applications with strong directional information. I will show that it has the same essential properties as TGV. With automatic direction estimators, we demonstrate the improvement of using directional TGV compared to standard TGV. Numerical simulations are carried out for image restoration and computed tomography reconstruction.

In this talk I will present a convex relaxation technique for a particular class of energy functionals consisting of a pointwise nonconvex data term and a total variation regularization as frequently used in image processing and computer vision problems. The method is based on the technique of functional lifting in which the minimization problem is reformulated in a higher dimensional space in order to obtain a tighter approximation of the original nonconvex energy.

New generations of imaging devices aim to produce high-resolution and high-dynamic-range images. In this context, the associated high-dimensional inverse problems can become extremely challenging from an algorithmic viewpoint. In addition, the quality and accuracy of the reconstructed images often depend on the precision with which the imaging device has previously been calibrated. Unfortunately, calibration depends not only on the device but may also depend on the time and direction of the acquisitions. This leads to the need to perform joint image reconstruction and calibration, and thus to solve non-convex blind deconvolution problems. We focus on the joint calibration and imaging problem in the context of radio-interferometric imaging in astronomy. In this case, the sparse images of interest can reach gigapixel or terapixel size, while the calibration variables consist of a large number of low-resolution images related to each antenna of the telescope. To solve this problem, we leverage a block-coordinate forward-backward algorithm, specifically designed to minimize non-smooth, non-convex and high-dimensional objective functions. We demonstrate by simulation the performance of this first joint imaging and calibration method in radio astronomy.

We consider inverse problems where a distributed parameter is known a priori to take on values only from a given discrete set. This property can be promoted in Tikhonov regularization with the aid of a suitable convex but nondifferentiable regularization term. This allows applying standard approaches to show well-posedness and convergence rates in the Bregman distance. Using the specific properties of the regularization term, it can be shown that convergence (albeit without rates) actually holds pointwise. Furthermore, the resulting Tikhonov functional can be minimized efficiently using a semi-smooth Newton method. Numerical examples illustrate the properties of the regularization term and the numerical solution.

This is joint work with Thi Bich Tram Do, Florian Kruse, and Karl Kunisch.

There has been increasing interest in constrained nonconvex regularized block multiconvex optimization problems. We introduce an approach that effectively exploits the multiconvex structure of the coupling term and enables complex application-dependent regularization terms to be used. The proposed Alternating Structure-Adapted Proximal (ASAP) gradient descent algorithm enjoys simple, well-defined updates. Global convergence of the algorithm to a critical point is proved using the so-called Kurdyka-Lojasiewicz property. What is more, we prove that a large class of useful objective functions obeying our assumptions are subanalytic and thus satisfy the Kurdyka-Lojasiewicz property. Finally, we present an application of the algorithm to big-data airborne sequences of images.
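To make the block multiconvex structure concrete, here is a stripped-down alternating-gradient sketch on matrix factorization, a prototypical multiconvex problem. With no regularizer the proximal step is the identity, so each block update reduces to a single gradient step with a block-wise Lipschitz step size; this is a stand-in for, not an implementation of, the full ASAP scheme.

```python
import numpy as np

def alt_prox_grad_factorize(M, rank=2, iters=1000, seed=0):
    # Alternating proximal-gradient sketch for the block multiconvex
    # problem min_{U,V} 0.5*||M - U V||_F^2. The objective is convex
    # in U for fixed V and in V for fixed U, so each block is updated
    # by a gradient step with step size 1/L for that block.
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((M.shape[0], rank))
    V = rng.standard_normal((rank, M.shape[1]))
    for _ in range(iters):
        R = U @ V - M
        U -= (R @ V.T) / (np.linalg.norm(V, 2) ** 2 + 1e-12)  # U-block step
        R = U @ V - M
        V -= (U.T @ R) / (np.linalg.norm(U, 2) ** 2 + 1e-12)  # V-block step
    return U, V
```

In the full framework, each block step would additionally apply the proximal map of an application-dependent regularizer, and convergence to a critical point follows from the Kurdyka-Lojasiewicz property.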

In practice, ill-posed inverse problems are often dealt with by introducing a suitable regularization functional. The idea is to stabilize the problem while promoting "desirable" solutions. Here, we are interested in contrasting the effect of Tikhonov vs. total-variation-like regularization. To that end, we first consider a discrete setting and present two representer theorems that characterize the solution of general convex minimization problems subject to $\ell_2$ vs. $\ell_1$ regularization constraints. Next, we adopt a continuous-domain formulation where the regularization semi-norm is a generalized version of total variation tied to some differential operator L. We prove that the extreme points of the corresponding minimization problem are nonuniform L-splines with fewer knots than the number of measurements. For instance, when L is the derivative operator, then the solution is piecewise constant, which confirms a standard observation and explains why the solution is intrinsically sparse. The powerful aspect of this characterization is that it applies to any linear inverse problem.
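The discrete $\ell_2$ vs. $\ell_1$ contrast can be seen numerically in a few lines: ridge regression gives a generically dense solution, while the $\ell_1$-regularized solution (computed here with ISTA, a standard proximal gradient method chosen for illustration) has many exactly zero coefficients.

```python
import numpy as np

def ridge(A, b, lam):
    # l2 (Tikhonov) regularized least squares: closed-form solution,
    # generically dense (no exactly zero coefficients).
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def ista(A, b, lam, steps=3000):
    # l1-regularized least squares min_x 0.5||Ax-b||^2 + lam||x||_1
    # via ISTA: gradient step followed by soft-thresholding, which
    # drives most coefficients exactly to zero.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of gradient
    for _ in range(steps):
        x = x - (A.T @ (A @ x - b)) / L        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # prox step
    return x
```

This sparsity of the $\ell_1$ solution is the discrete counterpart of the continuous-domain result that the extreme points are L-splines with few knots.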

In this talk, I will propose a variational method to reconstruct operators with smooth kernels from scattered and noisy impulse responses. The proposed approach relies on the formalism of smoothing in reproducing kernel Hilbert spaces and on the choice of an appropriate regularization term that takes the smoothness of the operator into account. It is numerically tractable in very large dimensions and yields a representation that can be used for achieving fast matrix-vector products. We study the estimator's robustness to noise and analyze its approximation properties with respect to the size and the geometry of the dataset. It turns out to be minimax optimal.

We finally show applications of the proposed algorithms to reconstruction of spatially varying blur operators in microscopy imaging.

Co-author: Lena Gorelick (University of Western Ontario) Many computer vision problems require optimization of binary non-submodular energies. In this context, local iterative submodularization techniques based on trust region (LSA-TR) and auxiliary functions (LSA-AUX) have been recently proposed. They achieve state-of-the-art results on a number of computer vision applications. We extend the LSA-AUX framework in two directions. First, unlike LSA-AUX, which selects auxiliary functions based solely on the current solution, we propose to incorporate several additional criteria. This results in tighter bounds for configurations that are more likely or closer to the current solution. Second, we propose move-making extensions of LSA-AUX which achieve tighter bounds by restricting the search space. Finally, we evaluate our methods on several applications. We show that for each application at least one of our extensions significantly outperforms the original LSA-AUX. Moreover, the best extension of LSA-AUX is comparable to or better than LSA-TR on four out of six applications.

Co-Author: Jan Lellmann (Institute of Mathematics and Image Computing, University of Lübeck)

One strategy in functional lifting is to consider probability measures on the label space of interest, which can be discrete or continuous. The considered functionals often make use of a total variation regularizer which, when lifted, allows for a dual formulation introducing a Lipschitz constraint. In our recent work, we proposed to use a similar formulation of total variation for the restoration of so-called Q-Ball images. In this talk, we present a mathematical framework for total variation regularization that is inspired from the theory of Optimal Transport and that covers all of the previous cases, including probability measures on discrete and continuous label spaces and on manifolds. This framework nicely explains the above-mentioned Lipschitz constraint and comes with a robust theoretical background.

Co-authors: Kristian Bredies (University of Graz), Martin Storath (University of Heidelberg), Andreas Weinmann (Darmstadt University of Applied Sciences) Introduced in 2010, the total generalized variation (TGV) functional is nowadays amongst the most successful regularization functionals for variational image reconstruction. It is defined for an arbitrary order of differentiation and provides a convex model for piecewise smooth vector-space data. On the other hand, variational models for manifold-valued data have become popular recently and many successful approaches, such as first- and second-order TV regularization, have been successfully generalized to this setting. Despite the fact that TGV regularization is, generally, considered to be preferable to such approaches, an appropriate extension for manifold-valued data was still missing. In this talk we introduce the notion of second-order total generalized variation (TGV) regularization for manifold-valued data. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present concrete instances that fulfill the proposed axioms. We prove well-posedness results and present algorithms for a numerical realization of these generalizations to the manifold setup. Further, we provide experimental results for synthetic and real data to further underpin the proposed generalization numerically and show its potential for applications with manifold-valued data.

In the talk I will present variational methods for image segmentation with application to brain MRI tissue classification. In particular I will present an 'unconventional' use of the multinomial logistic regression function. The work is based on joint work with Jacob Goldberger, Shiri Gordon and Boris Kodner.