2007

In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all or what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
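As a concrete illustration of the kind of algorithm such a tutorial derives, here is a minimal sketch of normalized spectral clustering on a fully connected Gaussian similarity graph. The bandwidth sigma, the row normalization (in the style of Ng, Jordan and Weiss), and the use of k-means on the spectral embedding are illustrative choices, not the only variant discussed.

import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering(X, k, sigma=1.0):
    # Fully connected similarity graph with Gaussian edge weights.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalized Laplacian L_sym = I - D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L_sym = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    # The eigenvectors of the k smallest eigenvalues give the embedding.
    _, vecs = np.linalg.eigh(L_sym)
    U = vecs[:, :k]
    # Row-normalize the embedding, then cluster it with k-means.
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)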

The abilities to learn and to categorize are fundamental for cognitive systems, be it animals or machines, and therefore have attracted attention from engineers and psychologists alike. Modern machine learning methods and psychological models of categorization are remarkably similar, partly because these two fields share a common history in artificial neural networks and reinforcement learning. However, machine learning is now an independent and mature field that has moved beyond psychologically or neurally inspired algorithms towards providing foundations for a theory of learning that is rooted in statistics and functional analysis. Much of this research is potentially interesting for psychological theories of learning and categorization but is hardly accessible to psychologists. Here, we provide a tutorial introduction to a popular class of machine learning tools, called kernel methods. These methods are closely related to perceptrons, radial-basis-function neural networks, and exemplar theories of categorization. Recent theoretical advances in machine learning are closely tied to the idea that the similarity of patterns can be encapsulated in a positive definite kernel. Such a positive definite kernel can define a reproducing kernel Hilbert space, which allows one to use powerful tools from functional analysis for the analysis of learning algorithms. We give basic explanations of some key concepts (the so-called kernel trick, the representer theorem and regularization) which may open up the possibility that insights from machine learning can feed back into psychology.
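To make the representer theorem concrete, here is a minimal sketch of kernel ridge regression: the regularized solution is a weighted sum of kernel functions centered on the training points, exactly as the theorem guarantees. The RBF kernel, the bandwidth gamma and the ridge parameter lam are illustrative choices, not prescriptions from the tutorial.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # k(a, b) = exp(-gamma * ||a - b||^2), a positive definite kernel.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_kernel_ridge(X, y, gamma=1.0, lam=1e-2):
    # Representer theorem: the regularized solution has the form
    # f(x) = sum_i alpha_i k(x_i, x) over the training points x_i.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return alpha

def predict(X_train, alpha, X_test, gamma=1.0):
    return rbf_kernel(X_test, X_train, gamma) @ alpha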

Background: For splice site recognition, one has to solve two classification problems: discriminating true from decoy splice sites for both acceptor and donor sites. Gene finding systems typically rely on Markov chains to solve these tasks.
Results: In this work we consider support vector machines for splice site recognition. We employ the so-called weighted degree kernel, which turns out to be well suited for this task, as we illustrate in several experiments comparing its prediction accuracy with that of recently proposed systems. We apply our method to the genome-wide recognition of splice sites in Caenorhabditis elegans, Drosophila melanogaster, Arabidopsis thaliana, Danio rerio, and Homo sapiens. Our performance estimates indicate that splice sites can be recognized very accurately in these genomes and that our method outperforms many other methods, including Markov chains, GeneSplicer and SpliceMachine. We provide genome-wide predictions of splice sites and a stand-alone prediction tool ready for incorporation into a gene finder.
Availability: Data, splits, additional information on the model selection, the whole-genome predictions, as well as the stand-alone prediction tool are available for download at http://www.fml.mpg.de/raetsch/projects/splice.
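For intuition, a minimal sketch of a weighted degree kernel between two equal-length sequence windows: it counts matching substrings of length 1 through D at corresponding positions, with longer matches weighted by beta_d. The weighting beta_d = 2(D-d+1)/(D(D+1)) is a commonly used choice; the paper's production implementation is far more optimized than this sketch.

def weighted_degree_kernel(x, y, D=5):
    # x, y: equal-length strings over {A, C, G, T}.
    assert len(x) == len(y)
    L, value = len(x), 0.0
    for d in range(1, D + 1):
        # Common choice of down-weighting for each substring length d.
        beta = 2.0 * (D - d + 1) / (D * (D + 1))
        matches = sum(x[i:i + d] == y[i:i + d] for i in range(L - d + 1))
        value += beta * matches
    return value

# Example: similarity of two candidate splice-site windows.
print(weighted_degree_kernel("ACGTACGT", "ACGTTCGT", D=3))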

Recently, Udwadia (Proc. R. Soc. Lond. A 2003:1783-1800, 2003) suggested deriving tracking controllers for mechanical systems with redundant degrees of freedom (DOFs) using a generalization of Gauss' principle of least constraint. This method allows control problems to be reformulated as a special class of optimal controllers. In this paper, we take this line of reasoning one step further and demonstrate that several well-known as well as novel nonlinear robot control laws can be derived from this generic methodology. We show experimental verifications on a Sarcos Master Arm robot for some of the derived controllers. The suggested approach offers a promising unification and simplification of nonlinear control law design for robots obeying rigid-body dynamics equations, with or without external constraints, with over-actuation or under-actuation, and with open-chain or closed-chain kinematics.
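For reference, a hedged sketch of the construction this line of work builds on (the Udwadia-Kalaba form of Gauss' principle; the notation here is ours and details may differ from the papers'): given rigid-body dynamics $M(q)\ddot{q} = F(q,\dot{q}) + u$ and a task encoded as a constraint $A(q,\dot{q})\,\ddot{q} = b(q,\dot{q})$, the constraint-satisfying control force of minimal Gauss cost is

$$ u = M^{1/2}\,\bigl(A M^{-1/2}\bigr)^{+}\,\bigl(b - A M^{-1} F\bigr), $$

where $(\cdot)^{+}$ denotes the Moore-Penrose pseudoinverse.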

Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for diverse applications. However, the true potential of these methods is not realized, since existing implementations are not openly shared, resulting in software with low usability and weak interoperability. We argue that this situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model. Additionally, we outline the problems authors are faced with when trying to publish algorithmic implementations of machine learning methods. We believe that a resource of peer-reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community.

Journal of the Optical Society of America A, 24(10):3233-3241, October 2007

In experiments measuring the masking effects of Mach bands on random-polarity signal bars, performance as a function of location near the Mach bands shows ripples, or oscillations, at 8 cycles/deg. The oscillations for increments are 180 degrees out of phase with those for decrements. The oscillations, much larger than the measurement error, appear to relate to the weighting function of the spatial-frequency-tuned channel detecting the broadband signals. The ripples disappear with step maskers and become much smaller at durations below 25 ms, implying either that the site of masking has changed or that the weighting function, and hence the spatial-frequency tuning, is slow to develop.

Human immunodeficiency virus type 1 (HIV-1) evolves in the human body, and its exposure to a drug often causes mutations that enhance resistance to that drug. To design an effective pharmacotherapy for an individual patient, it is important to accurately predict drug resistance from genotype data. Notably, resistance is not simply the sum of the effects of all mutations. Structural biology studies suggest that the association of mutations is crucial: even if mutation A or B alone does not affect resistance, a significant change may occur when the two mutations occur together. Linear regression methods cannot take such associations into account, while decision tree methods can reveal only limited associations. Kernel methods and neural networks implicitly use all possible associations for prediction, but cannot select salient associations explicitly. Our method, itemset boosting, performs linear regression in the complete space of mutation combinations (the power set of mutations). It implements a forward feature selection procedure in which, in each iteration, one mutation combination is found by an efficient branch-and-bound search. This method uses all possible combinations, and salient associations are shown explicitly. In experiments, our method worked particularly well for predicting resistance to nucleotide reverse transcriptase inhibitors (NRTIs). Furthermore, it successfully recovered many mutation associations known in the biological literature.
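A toy sketch of the underlying idea (not the paper's algorithm itself): greedily add mutation-combination features to a linear model, refitting on the residual each round. The real method searches the full power set with an efficient branch-and-bound; here we brute-force itemsets up to size 2 for readability, and all names are illustrative.

import itertools
import numpy as np

def forward_itemset_selection(X, y, n_rounds=5, max_size=2):
    # X: binary matrix (samples x mutations); y: drug-resistance values.
    n, m = X.shape
    chosen, residual = [], y - y.mean()
    for _ in range(n_rounds):
        best, best_score = None, 0.0
        for size in range(1, max_size + 1):
            for combo in itertools.combinations(range(m), size):
                # Itemset feature: 1 iff all mutations in the combo co-occur.
                f = X[:, list(combo)].all(axis=1).astype(float)
                score = abs(f @ residual)  # correlation with the residual
                if score > best_score:
                    best, best_score = combo, score
        if best is None:
            break
        chosen.append(best)
        f = X[:, list(best)].all(axis=1).astype(float)
        # Least-squares coefficient for the new feature; update the residual.
        w = (f @ residual) / max(f @ f, 1e-12)
        residual = residual - w * f
    return chosen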

Electrophysiological signals of the developing fetal brain and heart can be investigated by fetal magnetoencephalography (fMEG). During such investigations, the fetal heart activity and that of the mother should be monitored continuously to provide an important indication of current well-being. Due to the physical constraints of an fMEG system, it is not possible to use clinically established heart monitors for this purpose. Considering this constraint, we developed a real-time heart monitoring system for biomagnetic measurements and showed its reliability and applicability in research and clinical examinations. The developed system consists of real-time access to fMEG data, an algorithm based on Independent Component Analysis (ICA), and a graphical user interface (GUI). The algorithm extracts the current fetal and maternal heart signals from a noisy and artifact-contaminated data stream in real time and is able to adapt automatically to continuously varying environmental parameters. This algorithm has been named Adaptive Real-time ICA (ARICA) and is applicable to real-time artifact removal as well as to related blind signal separation problems.
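A minimal offline sketch of the blind-source-separation step only: separating cardiac components from multichannel magnetometer data with batch FastICA. ARICA itself is a real-time, adaptive algorithm; this sketch merely illustrates the underlying idea, and the function and parameter names are ours.

import numpy as np
from sklearn.decomposition import FastICA

def extract_cardiac_sources(data, n_sources=10):
    # data: (n_samples, n_channels) stream from the biomagnetic sensors.
    ica = FastICA(n_components=n_sources, max_iter=500)
    sources = ica.fit_transform(data)  # (n_samples, n_sources)
    # Heart components would then be identified among the sources, e.g. by
    # their characteristic quasi-periodic QRS-like peaks.
    return sources, ica.mixing_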

The final properties of sophisticated products can be affected by many unapparent dependencies within the manufacturing process, and a product's integrity can often only be checked in a final measurement. Troubleshooting can therefore be very tedious, if not impossible, in large assembly lines. In this paper we show that feature selection is an efficient tool for serial-grouped lines to reveal causes of irregularities in product attributes. We compare the performance of several feature selection methods on real-world problems in the mass production of semiconductor devices.
Note to Practitioners: We present a data-based procedure to localize flaws in large production lines: using the results of final quality inspections and information about which machines processed which batches, we are able to identify machines that cause low yield.
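A toy sketch of the data-based idea, not the paper's procedure: encode which machines processed which batches as binary features and rank machines by how strongly they are associated with low final yield. All variable names are illustrative.

import numpy as np

def rank_suspect_machines(routes, yields):
    # routes: (n_batches, n_machines) binary matrix, 1 if the machine
    # processed the batch; yields: final-inspection yield per batch.
    y = yields - yields.mean()
    scores = []
    for j in range(routes.shape[1]):
        x = routes[:, j] - routes[:, j].mean()
        denom = np.sqrt((x ** 2).sum() * (y ** 2).sum()) + 1e-12
        scores.append((x @ y) / denom)  # correlation with yield
    # The most negative correlations point at machines linked to low yield.
    return np.argsort(scores)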

Motivation: Identifying significant genes among thousands of sequences on a microarray is a central challenge for cancer research in bioinformatics. The ultimate goal is to detect the genes that are involved in disease outbreak and progression. A multitude of methods have been proposed for this task of feature selection, yet the selected gene lists differ greatly between different methods. To accomplish biologically meaningful gene selection from microarray data, we have to understand the theoretical connections and the differences between these methods. In this article, we define a kernel-based framework for feature selection based on the Hilbert–Schmidt independence criterion and backward elimination, called BAHSIC. We show that several well-known feature selectors are instances of BAHSIC, thereby clarifying their relationship. Furthermore, by choosing a different kernel, BAHSIC allows us to easily define novel feature selection algorithms. As a further advantage, feature selection via BAHSIC works directly on multiclass problems.
Results: In a broad experimental evaluation, the members of the BAHSIC family reach high levels of accuracy and robustness when compared to other feature selection techniques. Experiments show that features selected with a linear kernel provide the best classification performance in general, but if strong non-linearities are present in the data then non-linear kernels can be more suitable.
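A minimal sketch of the backward-elimination idea behind BAHSIC, using the biased empirical HSIC estimate and, for simplicity, linear kernels on both features and labels. Swapping in other kernels is exactly where the framework's flexibility comes from; this is a sketch of the principle, not the paper's reference implementation.

import numpy as np

def hsic(K, L):
    # Biased empirical HSIC: tr(K H L H) / (n - 1)^2 with H = I - 11^T / n.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def bahsic(X, y, n_keep):
    L = np.outer(y, y).astype(float)  # linear kernel on the labels
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        scores = []
        for j in active:
            rest = [f for f in active if f != j]
            K = X[:, rest] @ X[:, rest].T  # linear kernel on the features
            scores.append((hsic(K, L), j))
        # Drop the feature whose removal keeps the dependence on y highest,
        # i.e. the feature the labels depend on least.
        _, worst = max(scores)
        active.remove(worst)
    return active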

The cDNA array technology is a powerful tool to analyze a high number of genes in parallel. We investigated whether large-scale gene expression analysis allows clustering and identification of cellular phenotypes of chondrocytes in different in vivo and in vitro conditions. In 100% of cases, clustering analysis distinguished between in vivo and in vitro samples, suggesting fundamental differences in chondrocytes in situ and in vitro regardless of the culture conditions or disease status. It also allowed us to differentiate between healthy and osteoarthritic cartilage. The clustering also revealed the relative importance of the investigated culturing conditions (stimulation agent, stimulation time, bead/monolayer). We augmented the cluster analysis with a statistical search for genes showing differential expression. The identified genes provided hints to the molecular basis of the differences between the sample classes. Our approach shows the power of modern bioinformatic algorithms for understanding and classifying chondrocytic phenotypes in vivo and in vitro. Although it does not generate new experimental data per se, it provides valuable information regarding the biology of chondrocytes and may provide tools for diagnosing and staging the osteoarthritic disease process.

The genomes of individuals from the same species vary in sequence as a result of different evolutionary processes. To examine the patterns of, and the forces shaping, sequence variation in Arabidopsis thaliana, we performed high-density array resequencing of 20 diverse strains (accessions). More than 1 million nonredundant single-nucleotide polymorphisms (SNPs) were identified at moderate false discovery rates (FDRs), and ~4% of the genome was identified as being highly dissimilar or deleted relative to the reference genome sequence. Patterns of polymorphism are highly nonrandom among gene families, with genes mediating interaction with the biotic environment having exceptional polymorphism levels. At the chromosomal scale, regional variation in polymorphism was readily apparent. A scan for recent selective sweeps revealed several candidate regions, including a notable example in which almost all variation was removed in a 500-kilobase window. Analyzing the polymorphisms we describe in larger sets of accessions will enable a detailed understanding of forces shaping population-wide sequence variation in A. thaliana.

Given a sample from a probability measure with support on a submanifold in Euclidean space, one can construct a neighborhood graph which can be seen as an approximation of the submanifold. The graph Laplacian of such a graph is used in several machine learning methods like semi-supervised learning, dimensionality reduction and clustering. In this paper we determine the pointwise limit of three different graph Laplacians used in the literature as the sample size increases and the neighborhood size approaches zero. We show that for a uniform measure on the submanifold all graph Laplacians have the same limit up to constants. However, in the case of a non-uniform measure on the submanifold, only the so-called random walk graph Laplacian converges to the weighted Laplace-Beltrami operator.
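For concreteness, a minimal sketch of the three graph Laplacians in question, built here from a fully connected Gaussian neighborhood graph with bandwidth h. Sign and normalization conventions vary across the literature, so take this as one common convention rather than the paper's definitive one.

import numpy as np

def graph_laplacians(X, h=0.5):
    # Neighborhood graph with Gaussian weights (bandwidth h).
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * h ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    I = np.eye(len(X))
    L_unnorm = np.diag(d) - W                # unnormalized: D - W
    L_rw = I - W / d[:, None]                # random walk: I - D^{-1} W
    L_sym = I - W / np.sqrt(np.outer(d, d))  # symmetric: I - D^{-1/2} W D^{-1/2}
    return L_unnorm, L_rw, L_sym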

A Bayesian framework is developed to reconstruct the density of states from multiple canonical simulations. The framework encompasses the histogram reweighting method of Ferrenberg and Swendsen. The new approach applies to nonparametric as well as parametric models and does not require simulation data to be discretized. It offers a means to assess the precision of the reconstructed density of states and of derived thermodynamic quantities.
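As a point of reference for the method being generalized, a minimal sketch of single-histogram reweighting in the spirit of Ferrenberg and Swendsen: samples drawn at inverse temperature beta0 are reweighted to estimate an average at a nearby beta. This illustrates the classical starting point, not the Bayesian framework itself.

import numpy as np

def reweight_mean_energy(energies, beta0, beta):
    # Samples drawn at inverse temperature beta0 are reweighted by
    # w_i proportional to exp(-(beta - beta0) * E_i); subtract the max
    # log-weight for numerical stability.
    log_w = -(beta - beta0) * energies
    w = np.exp(log_w - log_w.max())
    return (w * energies).sum() / w.sum()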

Motivation: Despite many years of research on how to properly align sequences in the presence of sequencing errors, alternative splicing and micro-exons, the correct alignment of mRNA sequences to genomic DNA is still a challenging task.
Results: We present a novel approach based on large margin learning that combines accurate splice site predictions with common sequence alignment techniques. By solving a convex optimization problem, our algorithm, called PALMA, tunes the parameters of the model such that true alignments score higher than other alignments. We study the accuracy of alignments of mRNAs containing artificially generated micro-exons to genomic DNA. In a carefully designed experiment, we show that our algorithm accurately identifies the intron boundaries as well as the boundaries of the optimal local alignment. It outperforms all other methods: for 5702 artificially shortened EST sequences from C. elegans and human, it correctly identifies the intron boundaries in all except two cases. The best other method, a recently proposed one called exalin, misaligns 37 of the sequences. Our method also demonstrates robustness to mutations, insertions and deletions, retaining accuracy even at high noise levels.
Availability: Datasets for training, evaluation and testing, additional results and a stand-alone alignment tool implemented in C++ and python are available at http://www.fml.mpg.de/raetsch/projects/palma.

Most of the literature on support vector machines (SVMs) concentrates on the dual optimization problem. In this paper, we point out that the primal problem can also be solved efficiently, both for linear and non-linear SVMs, and that there is no reason to ignore this possibility. On the contrary, from the primal point of view, new families of algorithms for large-scale SVM training can be investigated.
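A minimal sketch of the primal viewpoint: subgradient descent on the regularized hinge loss of a linear SVM. The learning rate, epoch count and plain gradient scheme are illustrative; the paper's argument concerns more sophisticated primal solvers for both linear and non-linear SVMs.

import numpy as np

def primal_svm(X, y, lam=1e-2, lr=0.1, epochs=100):
    # Minimize lam/2 * ||w||^2 + mean(max(0, 1 - y * (Xw + b))) directly.
    # X: (n, d) inputs; y: labels in {-1, +1}.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1  # points violating the margin
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b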

While kernel canonical correlation analysis (CCA) has been applied in many contexts, the convergence of finite sample estimates of the associated functions to their population counterparts has not yet been established. This paper gives a mathematical proof of the statistical convergence of kernel CCA, providing a theoretical justification for the method. The proof uses covariance operators defined on reproducing kernel Hilbert spaces, and analyzes the convergence of their empirical estimates of finite rank to their population counterparts, which can have infinite rank. The result also gives a sufficient condition on the regularization coefficient involved in kernel CCA for convergence: it should decrease as n^{-1/3}, where n is the number of data points.

The pedestal or dipper effect is the large improvement in the detectability of a sinusoidal grating observed when it is added to a masking or pedestal grating of the same spatial frequency, orientation, and phase. We measured the pedestal effect in both broadband and notched noise (noise from which a 1.5-octave band centered on the signal frequency had been removed). Although the pedestal effect persists in broadband noise, it almost disappears in the notched noise. Furthermore, the pedestal effect is substantial when either high-pass or low-pass masking noise is used. We conclude that the pedestal effect in the absence of notched noise results principally from the use of information derived from channels with peak sensitivities at spatial frequencies different from that of the signal and the pedestal. We speculate that the spatial-frequency components of the notched noise above and below the spatial frequency of the signal and the pedestal prevent off-frequency looking, that is, prevent the use of information about changes in contrast carried in channels tuned to spatial frequencies very different from that of the signal and the pedestal. Thus, the pedestal or dipper effect measured without notched noise appears not to be a characteristic of individual spatial-frequency-tuned channels.

Relative depth judgments of vertical lines based on horizontal disparity deteriorate enormously when the lines form part of closed configurations (Westheimer, 1979). In studies showing this effect, perspective was not manipulated and was thus inconsistent with horizontal disparity. We show that stereoacuity improves dramatically when perspective and horizontal disparity are made consistent. Observers appear to use unhelpful perspective cues in judging the relative depth of the vertical sides of rectangles in a way not incompatible with a form of cue weighting. However, 95% confidence intervals for the weights derived for the cues usually exceed the a priori [0, 1] range.

Ring-resonators are in general not amenable to strain-free (non-contact) displacement measurements. We show that this limitation may be overcome if the ring-resonator, here a fiber-loop, is designed to contain a gap, such that the light traverses a free-space part between two aligned waveguide ends. Displacements are determined with nanometer sensitivity by measuring the associated changes in the resonance frequencies. Miniaturization should increase the sensitivity of the ring-resonator interferometer. Ring geometries that contain an optical circulator can be used to profile reflective samples.

We show that magnetic-field-induced circular differential deflection of light can be observed in reflection or refraction at a single interface. The difference in the reflection or refraction angles between the two circular polarization components is a function of the magnetic-field strength and the Verdet constant, and permits the observation of the Faraday effect not via polarization rotation in transmission, but via changes in the propagation direction. Deflection measurements do not suffer from nπ ambiguities and are shown to be another means to map magnetic fields with high axial resolution, or to determine the sign and magnitude of magnetic-field pulses in a single measurement.

In an optically active liquid the diffraction angle depends on the circular polarization state of the incident light beam. We report the observation of circular differential diffraction in an isotropic chiral medium, and we demonstrate that double diffraction is an alternate means to determine the handedness (enantiomeric excess) of a solution.

HFSP Journal: Frontiers of Interdisciplinary Research in the Life Sciences, 1(2):115-126, 2007

Research in robotics has moved away from its primary focus on industrial applications. The New Robotics is a vision, developed in recent years by our own university and many other national and international research institutions, of how increasingly human-like robots can live among us and take over tasks where our current society has shortcomings. Elder care, physical therapy, child education, search and rescue, and general assistance in daily-life situations are some of the examples that will benefit from the New Robotics in the near future. With these goals in mind, research for the New Robotics has to embrace a broad interdisciplinary approach, ranging from traditional mathematical issues of robotics to novel issues in psychology, neuroscience, and ethics. This paper outlines some of the important research problems that will need to be resolved to make the New Robotics a reality.

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.