Analytic and Geometric Methods in Medical Imaging

Conformal (C) / Quasi-conformal (QC) geometry has a long history in pure mathematics and is an active field in both modern geometry and modern physics. Recently, with the rapid development of 3D digital scanning technology, the demand for effective geometric processing and shape analysis is ever increasing. Computational conformal / quasi-conformal geometry plays an important role for these purposes. Applications can be found in areas such as medical imaging, computer vision and computer graphics.

In this talk, I will first give an overview of how conformal geometry can be applied in medical imaging and computer graphics. Examples include brain registration and texture mapping, where the mappings are constructed to be as conformal as possible to reduce geometric distortions. In reality, most registrations and surface mappings involve non-conformal distortions, which require more general theories to study. A direct generalization of conformal mapping is quasiconformal mapping, where the mapping is allowed to have bounded conformality distortion. In the second part of my talk, the theory of quasiconformal geometry and its applications will be presented. In particular, I will talk about how QC geometry can be used for the registration of biological surfaces, shape analysis, medical morphometry and the inpainting of surface diffeomorphisms.

Calderon's problem asks if and how one can determine the conductivity structure of a material from boundary current-voltage measurements. In two dimensions the problem admits a complete solution, including a uniqueness proof for (very) rough coefficients, the development of new reconstruction algorithms, and their computer implementation.

In this talk we give an overview of the recent progress on the EIT problem in two dimensions. The talk is based on joint works with M. Lassas, L.
Päivärinta, S. Siltanen, J. Müller and A. Perämäki.

We will give a survey of some recent results on travel time tomography, which consists in determining the index of refraction of a medium by measuring the travel times of sound waves going through the medium. We will also consider the related problem of tensor tomography, which consists in determining a function, a vector field or a tensor field of higher rank from its integrals along geodesics.

Rapid development of 3D data acquisition technologies stimulates research on 3D surface analysis. Intrinsic descriptors of 3D surfaces are crucial for both processing and analyzing surfaces. In this talk, I will present our recent work on 3D surface analysis using the Laplace-Beltrami (LB) eigen-system. The intrinsically defined LB operator provides a powerful tool to study surface geometry through its eigen-system. By combining it with other variational PDEs on surfaces, I will show our results on skeleton construction, feature extraction, pattern identification and surface mapping in 3D brain imaging using LB eigen-geometry. The nature of the LB eigen-system guarantees that our methods are robust to rotations and translations of the surfaces.
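
To make the LB eigen-system concrete, here is a minimal sketch (not the talk's implementation) that builds the standard cotangent-weight discretization of the LB operator on a triangulated surface and computes the first few eigenpairs. The test mesh, function names and solver settings are all illustrative assumptions.

```python
# Sketch: Laplace-Beltrami eigen-system on a triangle mesh via the
# standard cotangent stiffness matrix and lumped mass matrix.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def cotangent_laplacian(verts, faces):
    """Build the cotangent stiffness matrix L and lumped mass matrix M."""
    n = len(verts)
    I, J, V = [], [], []
    mass = np.zeros(n)
    for f in faces:
        p = verts[f]                       # 3x3 array of triangle corners
        area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        mass[f] += area / 3.0              # barycentric (lumped) mass
        for k in range(3):                 # angle at f[k] faces edge (i, j)
            i, j = f[(k + 1) % 3], f[(k + 2) % 3]
            e1 = verts[i] - verts[f[k]]
            e2 = verts[j] - verts[f[k]]
            w = 0.5 * (e1 @ e2) / np.linalg.norm(np.cross(e1, e2))
            I += [i, j, i, j]
            J += [j, i, i, j]
            V += [-w, -w, w, w]
    L = sp.csr_matrix((V, (I, J)), shape=(n, n))
    return L, sp.diags(mass)

def lb_eigensystem(verts, faces, k=5):
    """Smallest-k generalized eigenpairs L v = lam M v (shift-invert)."""
    L, M = cotangent_laplacian(verts, faces)
    lam, vecs = spla.eigsh(L, k=k, M=M, sigma=-1e-8)
    order = np.argsort(lam)
    return lam[order], vecs[:, order]

# Tiny test surface: a triangulated flat unit square.
m = 6
xs, ys = np.meshgrid(np.linspace(0, 1, m), np.linspace(0, 1, m))
verts = np.c_[xs.ravel(), ys.ravel(), np.zeros(m * m)]
faces = []
for r in range(m - 1):
    for c in range(m - 1):
        a = r * m + c
        faces.append([a, a + 1, a + m])
        faces.append([a + 1, a + m + 1, a + m])
lam, vecs = lb_eigensystem(verts, np.array(faces))
# The first eigenvalue is ~0 with a constant eigenfunction.
```

A quick sanity check of the intrinsic-invariance property mentioned above: rotating or translating `verts` leaves `lam` unchanged, since only edge lengths and areas enter the matrices.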

Image-based bio-markers have become powerful diagnostic tools due to the rapid and amazing development of medical hardware. In such a context, efficient processing and understanding of the corresponding images has gained significant attention over the past decade. The task to be addressed is extremely challenging due to: (i) curse of non-linearity (images and desired bio-markers exhibit a non-linear relationship), (ii) curse of dimensionality (number of degrees of freedom versus their inference), (iii) curse of non-convexity (designed objective functions present numerous local minima) and (iv) curse of modularity (variability of organs, imaging modalities).
In this talk, we will provide some preliminary answers to the aforementioned challenges by exploiting graphical models and discrete optimization algorithms. Furthermore, concrete examples will be presented towards addressing fundamental problems in biomedical perception, such as knowledge-based segmentation and deformable image fusion.

The role of curvature in visual perception goes back to 1954 and is due to Attneave. It can be argued on neurological grounds that the human brain could not possibly use all the information provided by states of stimulation. The information that stimulates the retina is concentrated at regions where color changes abruptly (contours), and further at angles and peaks of curvature. Yet a direct computation of curvatures on a raw image is impossible.
We show in this presentation how curvatures can be accurately estimated, at subpixel resolution, by a direct computation on level lines after their independent smoothing. This view of shape analysis requires a representation of an image in terms of its level lines. At the same time, it involves short-time smoothing (namely Curve Shortening or Affine Shortening) applied simultaneously to level lines and images.

In this setting, we found an explicit connection between the geometric approach of Curve / Affine Shortening and the viscosity approach of Mean / Affine Curvature Motion, based on a complete image processing pipeline that we term Level Lines (Affine) Shortening, LL(A)S for short. We show that LL(A)S provides an accurate visualization tool for image curvatures, which we call an Image Curvature Microscope. As an application we give some illustrative examples of image visualization and restoration: noise, JPEG artifacts, and aliasing will be shown to be nicely smoothed out by the subpixel curvature motion.
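
As a down-to-earth companion to the Image Curvature Microscope idea, the sketch below estimates the curvature of image level lines with the standard level-set formula on a pixel grid. It is a plain finite-difference stand-in, without the subpixel level-line extraction and independent smoothing described above.

```python
# Sketch: curvature of the level lines of an image u via the level-set
# formula  kappa = (u_xx u_y^2 - 2 u_x u_y u_xy + u_yy u_x^2) / |grad u|^3.
import numpy as np

def level_line_curvature(u, eps=1e-12):
    uy, ux = np.gradient(u)        # axis 0 = y (rows), axis 1 = x (cols)
    uyy, _ = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    num = uxx * uy**2 - 2.0 * ux * uy * uxy + uyy * ux**2
    den = (ux**2 + uy**2) ** 1.5 + eps
    return num / den

# Sanity check: for u(x, y) = sqrt(x^2 + y^2) the level lines are circles
# of radius r, whose curvature is 1/r.
y, x = np.mgrid[-50:51, -50:51]
u = np.sqrt(x**2 + y**2)
kappa = level_line_curvature(u)
# e.g. at (x, y) = (20, 0), i.e. array index [50, 70], kappa ~ 1/20.
```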

Biophysical models are increasingly used for medical applications at the organ scale. However, model predictions are rarely associated with a confidence measure, although there are important sources of uncertainty in computational physiology methods: for instance, the clinical data used to adjust the model parameters (personalization) are sparse and noisy, and soft tissue physiology is difficult to model accurately. Recent theoretical progress in stochastic models makes their use computationally tractable, but estimating patient-specific parameters with such models remains a challenge.

In this talk I will describe an efficient Bayesian inference method for model personalization (parameter estimation) using polynomial chaos and compressed sensing. I will demonstrate the method in the context of cardiac electrophysiology and show how this can help in quantifying the impact of the data characteristics and uncertainty on the personalization (and thus prediction) results.
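
To give a flavor of the polynomial chaos ingredient, here is a toy one-parameter sketch: a scalar model output f(theta), with theta a standard normal parameter, is expanded in probabilists' Hermite polynomials and the coefficients are obtained by Gauss-Hermite quadrature. The model f below is an illustrative stand-in, not the cardiac electrophysiology model of the talk.

```python
# Sketch: polynomial chaos expansion f(theta) ~ sum_k c_k He_k(theta),
# theta ~ N(0,1), with coefficients via Gauss-Hermite_e quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def pce_coefficients(f, degree, nquad=40):
    """c_k = E[f(theta) He_k(theta)] / k!  (He_k orthogonal w.r.t. N(0,1))."""
    nodes, weights = hermegauss(nquad)      # weight function exp(-x^2/2)
    weights = weights / sqrt(2 * pi)        # normalize to the N(0,1) density
    fx = f(nodes)
    coeffs = []
    for k in range(degree + 1):
        e_k = np.zeros(k + 1)
        e_k[k] = 1.0                        # select He_k
        coeffs.append(np.sum(weights * fx * hermeval(nodes, e_k)) / factorial(k))
    return np.array(coeffs)

def pce_eval(coeffs, theta):
    """Evaluate the truncated expansion sum_k c_k He_k(theta)."""
    return hermeval(theta, coeffs)

# Example with a known answer: f(theta) = exp(theta) has exact
# coefficients c_k = e^{1/2} / k! in this basis.
c = pce_coefficients(np.exp, degree=8)
```

Once the (cheap) surrogate `pce_eval` is built, Bayesian parameter estimation can sample it instead of the expensive forward model, which is the computational point of the approach.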

The described method can be beneficial for the clinical use of personalized models, as it explicitly takes into account the uncertainties in the data and the model parameters while still enabling simulations that can be used to optimize treatment. Such uncertainty handling can be pivotal for the proper use of modeling as a clinical tool, because it is crucial to know the confidence one can have in personalized models.

Multiscale analysis can give useful insight into various natural and manmade phenomena.
In this talk, we will discuss some new techniques of multiscale analysis in the context
of digital images.

We will discuss multiscale image processing using variational and partial differential equations. We will describe novel integro-differential equations based on the Rudin-Osher-Fatemi decomposition and its variants.
In the second part of the talk, we will discuss the problem of tracing blood vessel boundaries in placental histology images using a combination of global/local registration and Chan-Vese segmentation.

We present a novel reconstruction technique for image reconstruction in positron emission tomography (PET). This technique provides an effective combination of accurately inverting the Radon transform and of implementing an appropriate regularisation for noise removal. In contrast to the majority of existing algorithms which apply denoising to the reconstructed image, our work applies a regularisation both in the measurement and the image space. For this task we use an alternating total variation algorithm. This is joint work with P. E. Barbano and T. Fokas.

In this talk we consider analytical reconstruction formulas for photoacoustic experiments in which the illumination is focused to a plane. In standard photoacoustic experiments, where the whole specimen is uniformly illuminated, the total energy required can be prohibitively large, and thus focusing becomes necessary.

Such focusing experiments require novel reconstruction techniques for
imaging, which will be the core topic of this talk. Moreover, the reconstruction
algorithms also depend on the measurement setup - we will discuss standard
point detectors, realizable for instance by piezo crystals, and integrating
detectors, realizable for instance by Mach-Zehnder interferometers.
In addition, we review photoacoustic imaging formulas to put the work in perspective.

This is joint work with P. Elbau and R. Schulze (RICAM, Linz, Austria).

We present a new spatiotemporal model for 4D-CT from a matrix perspective: a Robust PCA based 4D-CT model. Instead of viewing a 4D object as a temporal collection of three-dimensional (3D) images and looking for local coherence in time or space independently, we exploit the maximum temporal coherence of the spatial structure among phases. This Robust PCA based 4D-CT model is also applicable to other imaging problems for motion reduction and/or change detection. A dynamic data acquisition procedure, i.e., a temporally spiral scheme, is proposed that can potentially maintain similar reconstruction accuracy while using fewer projections of the data. The key point of this dynamic scheme is to reduce the total number of measurements, and hence the radiation dose, by acquiring complementary data in different phases without redundant measurements of the common background structure.
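
The Robust PCA decomposition at the heart of the model splits a data matrix into a low-rank part (the common background across phases) and a sparse part (motion/change). A compact sketch via the inexact augmented Lagrange multiplier method follows; variable names, the penalty schedule and the synthetic data are illustrative choices, not the talk's actual 4D-CT solver.

```python
# Sketch: Robust PCA  M ~ L + S  (low-rank + sparse) by inexact ALM,
# alternating singular value thresholding and soft-thresholding.
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca(M, lam=None, iters=300, rho=1.05):
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard RPCA weight
    mu = m * n / (4.0 * np.abs(M).sum())      # common initial penalty
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)              # dual (multiplier) update
        mu = min(mu * rho, 1e7)               # gently increase the penalty
    return L, S

# Synthetic check: rank-1 "background" plus sparse "motion".
rng = np.random.default_rng(0)
a = rng.normal(size=(30, 1))
b = rng.normal(size=(1, 30))
L0 = a @ b
S0 = np.zeros((30, 30))
idx = rng.choice(900, size=40, replace=False)
S0.flat[idx] = rng.normal(scale=5.0, size=40)
M = L0 + S0
L, S = robust_pca(M)
```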

We develop fast numerical methods for the practical solution of the famous EIT and DC-resistivity problems in the presence of discontinuities and potentially many experiments or data.

Based on a Gauss-Newton (GN) approach coupled with preconditioned conjugate gradient (PCG) iterations, we propose two algorithms. One determines adaptively the number of inner PCG iterations required to stably and effectively carry out each GN iteration. The other algorithm, useful especially in the presence of many experiments, employs a randomly chosen subset of experiments at each GN iteration that is controlled using a cross validation approach. Numerical examples demonstrate the efficacy of our algorithms.
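
The inexact Gauss-Newton scheme can be sketched generically: each outer GN step solves the normal equations J^T J dm = -J^T r only approximately, by a capped number of CG iterations. The forward model below is a deliberately tiny two-parameter exponential fit, standing in for the EIT/DC-resistivity operator of the talk.

```python
# Sketch: Gauss-Newton with truncated inner CG on the normal equations.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

t = np.linspace(0.0, 3.0, 50)

def forward(m):
    """Toy forward model F(m) = m0 * exp(-m1 * t)."""
    return m[0] * np.exp(-m[1] * t)

def jacobian(m):
    e = np.exp(-m[1] * t)
    return np.column_stack([e, -m[0] * t * e])

def gauss_newton(d, m0, outer=20, inner=5):
    m = np.array(m0, dtype=float)
    for _ in range(outer):
        r = forward(m) - d
        J = jacobian(m)
        # Matrix-free J^T J, as one would use for a PDE-based Jacobian.
        op = LinearOperator((2, 2), matvec=lambda v: J.T @ (J @ v),
                            dtype=float)
        dm, _ = cg(op, -J.T @ r, maxiter=inner)   # truncated inner CG
        m = m + dm
    return m

m_true = np.array([2.0, 0.7])
d = forward(m_true)                               # noise-free data
m_est = gauss_newton(d, m0=[1.5, 0.5])
```

In the talk's setting the `inner` budget is what gets chosen adaptively per outer iteration; here it is simply fixed.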

We develop photoacoustic tomography (PAT) for functional and molecular imaging by physically combining optical and ultrasonic waves via energy transduction. Key applications include early-cancer and functional imaging. Light provides rich tissue contrast but does not penetrate biological tissue in straight paths as x-rays do. Consequently, high-resolution pure optical imaging (e.g., confocal microscopy, two-photon microscopy, and optical coherence tomography) is limited to depths within one optical transport mean free path (~1 mm in the skin). Ultrasonic imaging, by contrast, provides good image resolution but suffers from poor contrast in early-stage tumors as well as strong speckle artifacts. PAT, embodied in the forms of computed tomography and focused scanning, overcomes the above problems because ultrasonic scattering is ~1000 times weaker than optical scattering. In PAT, a pulsed laser beam illuminates the tissue and generates a small but rapid temperature rise, which induces emission of ultrasonic waves due to thermoelastic expansion. The short-wavelength ultrasonic waves are then detected to form high-resolution tomographic images. PAT has broken through the diffusion limit for penetration and achieved high-resolution images at depths up to 7 cm in tissue. Further depths can be reached by thermoacoustic tomography (TAT), which uses microwaves or RF waves instead of light for excitation.

In this talk I will present a challenging mathematical problem in
biomedical imaging: that of registering (bringing into spatial alignment)
two highly deformable surfaces of the colon. Solving this problem has
clinical application in CT Colonography, also known as "virtual
colonoscopy", used to screen patients for colorectal lesions. Our
registration approach relies on conformally flattening each colon surface
using Ricci flow, which is a partial differential equation that deforms the
metric of a Riemannian manifold.

We then map 3D differential geometric shape descriptors to the flattened surfaces and perform a cylindrical registration to derive the final registration. With the registration in place, we can determine corresponding points between the different surfaces with an accuracy of approximately 6 millimeters.

Drug development involves huge and extremely expensive global experiments, which at late phase can involve many thousands of patients recruited at hundreds of hospitals all over the world. Imaging biomarkers have the potential to provide standardized, quantitative measurements of patient eligibility for the study, drug side effects, and drug efficacy, but deploying them in such global studies presents great challenges. Alzheimer's Disease (AD) is an area of huge unmet medical need, large sums are being invested in testing potential new treatments for AD, and drug companies are being very ambitious in their use of sophisticated image analysis methods in these studies. The technical and regulatory challenges of these applications are quite different from the normal focus of image analysis research, but the potential benefit of overcoming them is better tools to help bring safe and effective medicines to patients.

In the past decade, the field of cancer imaging has continued to expand and grow in response to new challenges posed by our increasing understanding of the molecular basis of cancer and introduction of novel targeted treatments to the clinic. Technological advancement in imaging software and hardware has enabled more rapid acquisition and processing of images, which has led to the growth of functional imaging techniques. We are now able to apply a number of imaging techniques in oncological practice for early diagnosis of disease, detection of small volume disease, providing a roadmap for treatment planning, enabling novel assessment of treatment response, as well as for disease prognostication. However there are a number of imaging challenges which are currently unmet. There is a need for continued validation and qualification of imaging biomarkers using histology, patient outcome data and corroborative multi-parametric imaging. There is a recognised gap in translating imaging findings to treatment delivery, particularly in radiotherapy. There is also a desire to move from simplistic unidimensional tumour burden estimates to volumetric disease quantification across the body. Last but not least, the presence of physiological motion continues to pose diagnostic and therapeutic challenges to pinpoint biologically relevant disease for focussed treatments.

Open for Business Panel Discussion with L.V. Wang, G. Slabaugh, D. Hill & D.-M. Koh. This discussion aims to encourage the exchange of information and discussion on possible collaboration between academia and industry on future challenges in medical imaging.

Optical Tomography has developed enormously in the last 20 years. In this modality, light in the visible or near infrared part of the spectrum is injected into an object and its transmitted intensity measured on the boundary of the domain. Several inverse problems can be described which correspond to parameter identification problems, inverse source problems, or both. Both linear and non-linear approaches can be used.
In this talk I will describe several of these problems, their applications, and methods for their solution.

Hybrid (multi-physics) inverse problems aim at combining the high contrast of one imaging modality (such as Electrical Impedance Tomography or Optical Tomography in medical imaging) with the high resolution of another (such as one based on ultrasound or magnetic resonance). Mathematically, these problems often take the form of inverse problems with internal information. This talk will review several results of uniqueness and stability obtained recently for such inverse problems.

This tutorial will describe the basic steps of the construction of shape spaces via the action of diffeomorphic transformations. We will discuss how right-invariant distances and Riemannian metrics can be projected onto distances or metrics on shape spaces, and review how they can be built in a computational framework. We will then discuss particular cases, with a special focus on point sets, and show applications of this framework to registration problems and data analysis in shape spaces.

Electroencephalography (EEG) and Magnetoencephalography (MEG) provide the two most efficient imaging techniques for the study of the functional brain, because of their time resolution. Almost all analytical studies of EEG and MEG are based on the spherical model of the brain, while studies in more realistic geometries are restricted to numerical treatments alone. The human brain can best be approximated by an ellipsoid with average semi-axes equal to 6, 6.5 and 9 centimeters. An analytic study of brain activity in ellipsoidal geometry, though, is not a trivial problem, and a complete closed-form solution does not seem possible for either EEG or MEG. In the present work we introduce vector surface ellipsoidal harmonics, discuss their peculiar orthogonality properties, and finally use them to decompose the neuronal current within the brain into the part that is detectable by EEG and the part that is detectable by MEG measurements. The decomposition of a vector field in vector surface ellipsoidal harmonics leads to three subspaces R, D and T, depending on the character of the surface harmonics that span these subspaces. We see that both the electric field obtained from EEG and the magnetic field obtained from MEG have no T-component. Furthermore, the T-component of the neuronal current does not influence the EEG recordings, while the MEG recordings depend on all three components of the current.

Tubular and tree structures, such as vessels, microtubules or neuronal cells, appear very commonly in biomedical images. Minimal paths have long been used as an interactive tool to segment these structures as cost-minimizing curves. The user usually provides start and end points on the image and gets the minimal path as output. These minimal paths correspond to minimal geodesics according to some adapted metric, and are a way to find a (set of) curve(s) globally minimizing the geodesic active contours energy. Finding the geodesic distance can be done by solving the Eikonal equation with the fast and efficient Fast Marching method. In past years we have introduced different extensions of these minimal paths that improve either the interactive aspects or the results. For example, the metric can take into account both the scale and orientation of the path; this leads to solving an anisotropic minimal path problem in a 2D or 3D+radius space. On a different level, the user interaction can be minimized by iteratively adding what we called keypoints, for example to obtain a closed curve from a single initial point. The result is then a set of minimal paths between pairs of keypoints. This can also be applied to branching structures in both 2D and 3D images. We also proposed different criteria to obtain automatically a set of end points of a tree structure from only one starting point. More recently, we introduced a new general idea that we called Geodesic Voting or Geodesic Density. The approach consists in computing geodesics between a given source point and a set of points scattered in the image. The geodesic density is defined at each pixel of the image as the number of geodesics that pass over this pixel. The target structure corresponds to image points with a high geodesic density. We will illustrate different possible applications of this approach. The work we will present also involved F. Benmansour, Y. Rouchdy and J. Mille at CEREMADE.
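
The distance-then-backtrack pattern above can be sketched with stdlib tools alone: below, Dijkstra's algorithm on a 4-connected grid with a positive cost (metric) map plays the role of the Fast Marching step, and a steepest-descent walk on the distance map extracts the minimal path. A true Fast Marching solver would instead update distances with a first-order Eikonal stencil; everything here is an illustrative stand-in.

```python
# Sketch: graph geodesic distance (Dijkstra) + minimal-path backtracking.
import heapq

def geodesic_distance(cost, start):
    """Distance map U for a graph version of the Eikonal equation."""
    h, w = len(cost), len(cost[0])
    INF = float("inf")
    U = [[INF] * w for _ in range(h)]
    U[start[0]][start[1]] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > U[i][j]:
            continue                       # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + 0.5 * (cost[i][j] + cost[ni][nj])
                if nd < U[ni][nj]:
                    U[ni][nj] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return U

def backtrack(U, end):
    """Follow the steepest descent of U from the end point to the source."""
    path = [end]
    i, j = end
    while U[i][j] > 0.0:
        nbrs = [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < len(U) and 0 <= j + dj < len(U[0])]
        i, j = min(nbrs, key=lambda p: U[p[0]][p[1]])
        path.append((i, j))
    return path[::-1]

# Uniform metric: the geodesic distance reduces to Manhattan distance.
cost = [[1.0] * 10 for _ in range(10)]
U = geodesic_distance(cost, (0, 0))
path = backtrack(U, (4, 3))
```

With a non-uniform `cost` (low along a vessel, high elsewhere), the same path extraction hugs the tubular structure, which is the segmentation principle described above.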

In this talk we consider the question of how inverse problems posed for continuous objects, for instance continuous functions, can be discretized, that is, approximated by finite-dimensional inverse problems. We will consider linear inverse problems of the form $m=Au+\epsilon$. Here, the function $m$ is the measurement, $A$ is an ill-conditioned linear operator, $u$ is an unknown function, and $\epsilon$ is random noise.

The inverse problem means determination of $u$ when $m$ is given.
In particular, we consider X-ray tomography with sparse or limited-angle measurements, where $A$ corresponds to integrals of the attenuation function $u(x)$ over lines in a family $\Gamma$.

The traditional solutions for the problem include generalized Tikhonov regularization and the estimation of $u$ using Bayesian methods. To solve the problem in practice, $u$ and $m$ are discretized, that is, approximated by vectors in a finite-dimensional vector space. We show positive results on when this approximation can be done successfully, and consider examples of problems that can appear. As examples, we consider total variation (TV) and Besov norm penalty regularization, and Bayesian analysis based on total variation and Besov priors.
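
A minimal finite-dimensional illustration of this setting, with a simple quadratic (Tikhonov) penalty rather than the TV/Besov penalties of the talk: a smoothing forward matrix makes the naive inversion explode under noise, while the regularized normal equations give a stable estimate. The blur kernel, noise level and regularization weight are all illustrative assumptions.

```python
# Sketch: discretized linear inverse problem m = A u + eps, solved by
# Tikhonov regularization  u_alpha = (A^T A + alpha I)^{-1} A^T m.
import numpy as np

n = 60
x = np.linspace(0.0, 1.0, n)
# Forward operator: convolution with a Gaussian kernel (severe smoothing
# makes A ill-conditioned).
A = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.03**2))
A /= A.sum(axis=1, keepdims=True)

u_true = (np.abs(x - 0.5) < 0.2).astype(float)   # box-car "attenuation"
rng = np.random.default_rng(1)
m = A @ u_true + 1e-3 * rng.normal(size=n)       # noisy measurement

def tikhonov(A, m, alpha):
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(k), A.T @ m)

u_naive = np.linalg.solve(A, m)                  # unregularized: noise blows up
u_reg = tikhonov(A, m, alpha=1e-4)               # stable regularized estimate
```

A TV penalty would replace `alpha * np.eye(k)` by a term built from a discrete gradient, preserving the sharp edges of the box-car that the quadratic penalty smooths out.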

The accelerated development of imaging techniques in biomedical engineering is challenging mathematicians and computer scientists to develop appropriate methods for the representation and the statistical analysis of various geometrically structured data like submanifolds.

We will first explain how the concepts of homogeneous spaces and Riemannian manifolds embedded in the large deformation diffeomorphic metric mapping (LDDMM) setting, together with the introduction of mathematical currents into this setting by Glaunes and Vaillant, have provided a powerful and effective framework to support local statistical analysis in more and more complex shape spaces.

We will then discuss a new extension when the submanifolds are the supports of informative fields that need to be also analyzed in a common geometrical-functional representation (joint work with Nicolas Charon).

We consider the problem of recovering an isotropic conductivity outside some perfectly conducting or insulating inclusions from the interior measurement of the magnitude of one current density field $|J|$. We prove that the conductivity outside the inclusions, and the shape and position of the perfectly conducting and insulating inclusions, are uniquely determined (except in an exceptional case) by the magnitude of the current generated by imposing a given boundary voltage. We have found an extension of the notion of admissibility to the case of possible presence of perfectly conducting and insulating inclusions. This makes it possible to extend the results on uniqueness of the minimizers of the least gradient problem $F(u)=\int_{\Omega}a|\nabla u|$ with $u|_{\partial \Omega}=f$ to cases where $u$ has flat regions (is constant on open sets). This is joint work with Adrian Nachman and Alexandru Tamasan.

We discuss inversion formulae for the attenuated X-ray transform on curves in the two-dimensional unit disc. This tomographic problem has applications in the medical imaging modality SPECT, and has more recently arisen in the problem of determining the internal permittivity and permeability parameters of a conductive body from external measurements.

Image registration and segmentation tasks lie at the heart of Medical Imaging.
In registration, our concern is to align two or more images using deformable transforms that have desirable regularities. In a multimodal image registration scenario, where two given images have similar features, but non-comparable intensity variations, the sum of squared differences is not suitable to measure image similarities.

In this talk, we first propose a new variational model based on combining intensity and geometric transformations, as an alternative to using mutual information and an improvement to the work by Modersitzki and Wirtz (2006, LNCS, vol.4057), and then develop a fast multigrid algorithm for solving the underlying system of fourth order and nonlinear partial differential equations. We can demonstrate the effective smoothing property of the adopted primal-dual smoother by a local Fourier analysis.
An earlier use of mean curvature to regularise image denoising models was in T. F. Chan and W. Zhu (2008), and previous work on developing a multigrid algorithm for the Chan-Zhu model was by Brito-Chen (2010). Numerical tests will be presented to show both the improvements achieved in image registration quality and the multigrid efficiency.
Joint work with Dr Noppadol Chumchob.

We consider Discrete Total Variation Flows. Using a combinatorial point of view, we show that these differential inclusions can be exactly computed and we give some properties of the trajectories. An application to contrast-preserving image denoising is presented.

Personalization of biophysical models consists in estimating parameters from patient specific data. In this presentation, various strategies for the estimation of parameters of electromechanical models of the heart will be covered. In particular the issue of observability will be raised since only a subset of biophysical parameters can be estimated from common measurements. Personalization results of electrophysiological models from measured isochrones and mechanical models from cine MR images will be presented.

Phase-field methods and length or perimeter penalization have been successfully applied to many imaging problems, such as for instance the Mumford-Shah approach to segmentation and its phase-field counterpart by Ambrosio and Tortorelli.

In this talk we shall illustrate how these techniques may also be used to treat inverse problems where a discontinuous function has to be recovered. As an example we consider the inverse problem of determining insulating cracks or cavities by performing a few electrostatic measurements on the boundary. We show the validity of these methods by a convergence analysis and by numerical experiments. The numerical experiments have been performed jointly with Wolfgang Ring (University of Graz, Austria).

In the past few years there has been a growing interest, in diverse scientific communities, in endowing Shape Spaces with Riemannian metrics, so as to be able to measure similarities between shapes and perform statistical analysis on data sets (e.g. for object recognition, target detection and tracking, classification, and automated medical diagnostics).

The knowledge of curvature on a Riemannian manifold is essential in that it allows one to infer about the existence of conjugate points, the behavior of geodesic curves, the well-posedness of the problem of computing the implicit mean (and higher statistical moments) of samples on the manifold, and more. In shape analysis such issues are of fundamental importance since they allow one to build templates, i.e. shape classes that represent typical situations in different applications (e.g. in the field of computational anatomy).

The actual differential geometry of Shape Spaces has started to emerge only very recently: in this talk we will explore the sectional curvature for the Shape Space of landmark points, endowed with the Riemannian metric induced by the action of a diffeomorphism group.
Applications to Medical Imaging will be discussed and numerical results will be shown.

Medical morphometry is an important area in medical imaging for disease analysis. Its goal is to systematically analyze anatomical structures of different subjects and to generate diagnostic images that help doctors visualize abnormalities. Quasiconformal (QC) Teichmüller theory, which studies the distortions of the deformation patterns between shapes, has become an important tool for this purpose. In practice, objects are usually represented discretely by triangulation meshes. In this talk, I will first describe how quasi-conformal geometry can be discretized on triangulation meshes. This gives a discrete analogue of QC geometry on the discrete meshes which represent anatomical structures. Then, I will talk about how computational QC geometry can be applied in practical medical shape analysis.

I will discuss a generalization of the Shannon Sampling Theorem that allows for reconstruction of signals in arbitrary bases (and frames). Not only can one reconstruct in arbitrary bases, but this can also be done in a completely stable way. When extra information is available, such as sparsity or compressibility of the signal in a particular basis, one may reduce the number of samples dramatically. This is done via Compressed Sensing techniques, however, the usual finite-dimensional framework is not sufficient. To overcome this obstacle I'll introduce the concept of Infinite-Dimensional Compressed Sensing.