Combined PET/MR simultaneously provides molecular and functional imaging as well as excellent soft-tissue contrast. However, it does not allow a direct measurement of the attenuation properties of the scanned tissues, even though accurate attenuation maps are necessary for quantitative PET imaging. Several methods have therefore been proposed for MR-based attenuation correction (MR-AC). So far, they have only been evaluated on data acquired from separate MR and PET scanners. We evaluated several MR-AC methods on data from 10 patients acquired on a combined BrainPET/MR scanner. This allowed us to account for PET/MR-specific issues, such as the RF coil, which attenuates and scatters 511 keV gammas. We evaluated simple MR thresholding methods as well as atlas- and machine-learning-based MR-AC. CT-based AC served as the gold-standard reference. To comprehensively evaluate the MR-AC accuracy, we used regions of interest (RoIs) from two anatomic brain atlases with different levels of detail.
Visual inspection of the PET images indicated that even the basic FLASH-threshold MR-AC may be sufficient for several applications. Using a UTE sequence for bone prediction in MR-based thresholding occasionally led to false prediction of bone tissue inside the brain, causing a significant overestimation of PET activity. Although the UTE-based method yielded a lower mean underestimation of activity, it exhibited the highest variance of all methods. The atlas-averaging approach had a smaller mean error, but showed a high maximum overestimation on the RoIs of the more detailed atlas. The Naïve Bayes and Atlas-Patch MR-AC yielded the smallest variance, and the Atlas-Patch method also showed the smallest mean error.
In conclusion, atlas-based AC using only MR information yields a level of accuracy on the BrainPET/MR that is sufficient for clinical quantitative imaging requirements. The Atlas-Patch approach was superior to the alternative atlas-based methods, yielding a quantification error below 10% for all but very small RoIs.
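As a rough illustration of the simplest of these approaches, the sketch below segments an MR volume into air, soft tissue, and (optionally, with UTE) bone by intensity thresholds and assigns standard 511 keV linear attenuation coefficients. The thresholds, the three-class segmentation, and the coefficient values are illustrative assumptions, not the parameters of the evaluated methods.

    import numpy as np

    # Approximate linear attenuation coefficients at 511 keV (1/cm).
    MU_AIR, MU_SOFT, MU_BONE = 0.0, 0.096, 0.151

    def mr_threshold_ac(mr_image, air_thresh, bone_thresh=None):
        """Map MR intensities to a 511 keV attenuation map (mu-map)."""
        mu_map = np.full(mr_image.shape, MU_SOFT)
        mu_map[mr_image < air_thresh] = MU_AIR  # low signal -> air
        if bone_thresh is not None:             # bone prediction needs UTE
            mu_map[mr_image > bone_thresh] = MU_BONE
        return mu_map

With a FLASH sequence alone, bone and air are nearly indistinguishable, which is why bone prediction requires the UTE sequence discussed above.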

73rd Annual Meeting of the Institute of Mathematical Statistics (IMS), August 2010 (talk)

Abstract

We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that detects objects of various shapes in noisy images. We prove results on the consistency and algorithmic complexity of our procedures.
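To make the percolation idea concrete, here is a minimal sketch of one plausible reading of such a procedure: binarize the noisy image at an intensity threshold and reject the noise-only hypothesis when the largest connected cluster of exceedances is larger than noise alone could plausibly produce. The threshold and the critical cluster size are hypothetical inputs, not values from our results.

    import numpy as np
    from scipy import ndimage

    def detect_object(image, intensity_thresh, critical_cluster_size):
        """Return True if a significant cluster is found (noise-only H0 rejected)."""
        exceedances = image > intensity_thresh  # binarize the noisy image
        labels, n = ndimage.label(exceedances)  # connected clusters (4-neighborhood in 2D)
        if n == 0:
            return False
        largest = np.bincount(labels.ravel())[1:].max()  # size of the biggest cluster
        return largest > critical_cluster_size

Consistency of such a test rests on choosing the critical cluster size from the subcritical versus supercritical behavior of percolation clusters, which is where the percolation-theoretic results enter.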

24th European Conference on Operational Research (EURO XXIV), July 2010 (talk)

Abstract

We introduce cooperative cut, a minimum cut problem whose cost is a submodular function on sets of edges: the cost of an edge that is added to a cut set depends on the other edges in the set. Applications arise, e.g., in probabilistic graphical models and image processing. We prove NP-hardness and a polynomial lower bound on the approximation factor, and we give upper bounds via four approximation algorithms based on different techniques. Our additional heuristics have attractive practical properties, e.g., relying only on standard min-cut solvers. Both our algorithms and heuristics appear to do well in practice.
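One way such a min-cut-only heuristic could look is sketched below: repeatedly replace the submodular cost f by a modular (per-edge) approximation around the current cut and re-solve a standard min-cut. The marginal-gain weighting and the fixed iteration count are illustrative choices of ours, not the exact bounds from the paper; f is assumed monotone so the derived capacities stay nonnegative.

    import networkx as nx

    def cooperative_cut_heuristic(G, f, s, t, iters=10):
        """G: nx.Graph with s-t terminals; f: submodular cost on edge sets."""
        cut_edges = set()  # first round uses the singleton costs f({e})
        for _ in range(iters):
            H = G.copy()
            for e in H.edges():
                # Modular approximation: marginal gain of e w.r.t. the current cut.
                H.edges[e]["capacity"] = f(cut_edges | {e}) - f(cut_edges - {e})
            _, (S, _) = nx.minimum_cut(H, s, t)
            cut_edges = {(u, v) for u, v in G.edges() if (u in S) != (v in S)}
        return cut_edges, f(cut_edges)

Each iteration solves only a standard min-cut, which is what makes this family of heuristics attractive in practice.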

The problem of making decisions is ubiquitous in life. It becomes even more complex when decisions must be made sequentially: the execution of an action at a given time changes the environment, and this change cannot be predicted with certainty. The aim of a decision-making process is to optimally select actions in an uncertain environment. To this end, the environment is often modeled as a dynamical system with multiple states, and actions are executed so that the system evolves toward a desirable state.

In this thesis, we proposed a family of stochastic models and algorithms to improve the quality of the decision-making process. The proposed models are alternatives to Markov Decision Processes, a widely used framework for this type of problem.

In particular, we showed that the state of a dynamical system can be represented more compactly if it is described in terms of predictions of certain future events. We also showed that even the cognitive process of selecting actions, known as a policy, can be seen as a dynamical system. Starting from this observation, we proposed a panoply of algorithms, all based on predictive policy representations, for solving different decision-making problems such as decentralized planning, reinforcement learning, and imitation learning.

We also demonstrated, analytically and empirically, that the proposed approaches reduce computational complexity and increase the quality of the decisions compared to standard approaches for planning and learning under uncertainty.
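To ground the idea of representing state by predictions of future events, here is a minimal sketch of the state update in a linear predictive state representation (an assumed, standard formulation; the parameters M_ao and m_ao would be learned from data and are placeholders here, not part of the thesis text):

    import numpy as np

    def psr_update(b, M_ao, m_ao):
        """Update the predictive state after an action-observation pair (a, o).

        b    : (k,) vector of predicted probabilities for k core tests
        M_ao : (k, k) update matrix for the pair (a, o)
        m_ao : (k,) vector with b @ m_ao = P(o | history, a)
        """
        p_o = b @ m_ao           # probability of observing o after taking a
        return (b @ M_ao) / p_o  # renormalized predictions of the core tests

The appeal of such representations is that the vector of core-test predictions can be lower-dimensional than a latent-state belief while carrying the same information about the future.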

16th Conference of the International Linear Algebra Society (ILAS), June 2010 (talk)

Abstract

We study the fundamental problem of nonnegative least squares. This problem was apparently introduced by Lawson and Hanson [1] under the name NNLS. As is evident from its name, NNLS seeks least-squares solutions that are also nonnegative. Owing to its wide applicability, numerous algorithms have been derived for NNLS, beginning with the active-set approach of Lawson and Hanson [1] and leading up to the sophisticated interior-point method of Bellavia et al. [2]. We present a new algorithm for NNLS that combines projected subgradients with the non-monotonic gradient-descent idea of Barzilai and Borwein [3]. Our resulting algorithm is called BBSG, and we guarantee its convergence by exploiting properties of NNLS in conjunction with projected subgradients. BBSG is surprisingly simple and scales well to large problems. We substantiate our claims by empirically evaluating BBSG and comparing it with established convex solvers and specialized NNLS algorithms. The numerical results suggest that BBSG is a practical method for solving large-scale NNLS problems.
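A minimal sketch of the underlying idea (a projected gradient step for min ||Ax - b||^2 subject to x >= 0, with the non-monotone Barzilai-Borwein step length) is given below; the iteration count and safeguards are simplified assumptions of ours, so this is not the authors' exact BBSG method:

    import numpy as np

    def bb_projected_nnls(A, b, iters=500, alpha0=1e-4):
        """Solve min 0.5*||Ax - b||^2 s.t. x >= 0 (illustrative sketch)."""
        x = np.zeros(A.shape[1])
        g = A.T @ (A @ x - b)  # gradient of 0.5*||Ax - b||^2
        alpha = alpha0
        for _ in range(iters):
            x_new = np.maximum(x - alpha * g, 0.0)  # project onto x >= 0
            g_new = A.T @ (A @ x_new - b)
            s, y = x_new - x, g_new - g
            sy = s @ y
            alpha = (s @ s) / sy if sy > 0 else alpha0  # BB1 step length
            x, g = x_new, g_new
        return x

The Barzilai-Borwein step deliberately allows the objective to increase between iterations, which is what makes the method non-monotonic yet often much faster than classical projected gradient descent.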

Brain-computer interfaces (BCI) work by making the user perform a specific mental task, such as imagining moving body parts or performing some other covert mental activity, or attending to a particular stimulus out of an array of options, in order to encode their intention into a measurable brain signal. Signal-processing and machine-learning techniques are then used to decode the measured signal, identify the encoded mental state, and hence extract the user's initial intention.
The high-noise, high-dimensional nature of brain signals makes robust decoding techniques a necessity. Generally, the approach has been to use relatively simple feature-extraction techniques, such as template matching and band-power estimation, coupled with simple linear classifiers. This has led to a prevailing view among applied BCI researchers that (sophisticated) machine learning is irrelevant, since it doesn't matter what classifier you use once your features are extracted.
Using examples from our own MEG and EEG experiments, I'll demonstrate how machine-learning principles can be applied to improve BCI performance when they are formulated in a domain-specific way. The result is a type of data-driven analysis that is more than just classification, and can be used to find better feature extractors.
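For concreteness, the conventional pipeline described above might look like the sketch below (band-power features feeding a linear classifier); the band edges, sampling rate, and use of Welch's method are illustrative assumptions, not details from the talk:

    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression

    def band_power_features(trials, fs=250.0, band=(8.0, 30.0)):
        """trials: (n_trials, n_channels, n_samples) array of EEG/MEG."""
        freqs, psd = welch(trials, fs=fs, axis=-1)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.log(psd[..., mask].mean(axis=-1))  # log band power per channel

    # X = band_power_features(trials); clf = LogisticRegression().fit(X, y)

The domain-specific alternative argued for in the talk would instead let the learning machinery shape the feature extraction itself, rather than treating it as a fixed preprocessing step.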

The acquisition and self-improvement of novel motor skills are among the most important problems in robotics. Motor primitives offer one of the most promising frameworks for the application of machine-learning techniques in this context. Employing the Dynamic Systems Motor Primitives originally introduced by Ijspeert et al. (2003), we present learning algorithms for a concerted approach combining imitation and reinforcement learning. Using these algorithms, new motor skills, namely Ball-in-a-Cup, Ball-Paddling, and Dart-Throwing, are learned.
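As background, a single transformation system of such a motor primitive can be written as a damped spring model driven by a learned forcing term; the sketch below uses illustrative gains, and the forcing function f would normally be fitted to demonstrations rather than given:

    import numpy as np

    def dmp_rollout(x0, g, f, tau=1.0, dt=0.001, K=100.0, D=20.0, alpha=4.0):
        """Integrate one dynamical-systems motor primitive from x0 toward goal g."""
        x, v, s = x0, 0.0, 1.0
        traj = [x]
        for _ in range(int(tau / dt)):
            v_dot = (K * (g - x) - D * v + (g - x0) * f(s)) / tau
            x += dt * v / tau
            v += dt * v_dot
            s += dt * (-alpha * s) / tau  # canonical system: decaying phase
            traj.append(x)
        return np.array(traj)

    # With f = lambda s: 0.0 the primitive reduces to a stable attractor on g;
    # learning shapes f so the transient reproduces (and then improves) a skill.

Imitation learning initializes the forcing term from a demonstration, and reinforcement learning then refines it, which is the concerted approach referred to above.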

Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.