We show via an equivalence of mathematical programs that a support vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm, one-class leveraging, starting from the one-class support vector machine (1-SVM). This is a first step toward unsupervised learning in a boosting framework. Building on so-called barrier methods known from the theory of constrained optimization, the algorithm returns a function, written as a convex combination of base hypotheses, that characterizes whether a given test point is likely to have been generated from the distribution underlying the training data. Simulations on one-class classification problems demonstrate the usefulness of our approach.
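The abstract's central object, a convex combination of base hypotheses that scores whether a test point was likely generated by the training distribution, can be sketched with Gaussian base functions centred on the training points. This is a Parzen-window stand-in for the idea, not the 1-SVM or the leveraging algorithm itself; all names, data, and parameters below are invented for illustration:

```python
import math

def one_class_score(x, train, sigma=1.0):
    """Convex combination of Gaussian base functions centred on the
    training points; large values mean x looks typical of the data."""
    w = 1.0 / len(train)                      # uniform convex weights
    return sum(w * math.exp(-(x - t) ** 2 / (2 * sigma ** 2)) for t in train)

train = [0.1, 0.2, 0.0, -0.1, 0.15]           # toy 1-D training sample
# accept a point if it scores at least as well as the worst training point
threshold = min(one_class_score(t, train) for t in train)

print(one_class_score(0.05, train) >= threshold)   # typical point  -> True
print(one_class_score(5.0, train) >= threshold)    # clear outlier  -> False
```

The trained 1-SVM would instead learn sparse, non-uniform weights and the threshold jointly from the data.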

We consider the learning problem of finding a dependency between a general class of objects and another, possibly different, class of objects. The objects can be, for example, vectors, images, strings, trees, or graphs. Such a task is made tractable by employing similarity measures in both input and output spaces via kernel functions, thus embedding the objects into vector spaces. Output kernels also make it possible to encode prior information and/or invariances in the loss function in an elegant way. We experimentally validate our approach on several tasks: mapping strings to strings, pattern recognition, and reconstruction from partial images.
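As an illustration of a kernel on non-vectorial objects, a k-spectrum kernel counts shared length-k substrings. This is a standard string kernel, shown here only as a minimal example of embedding strings into a vector space via a similarity measure; it is not necessarily the kernel used in the work above:

```python
from collections import Counter

def spectrum_kernel(s, t, k=2):
    """k-spectrum kernel: inner product of substring-count feature maps.
    Each string is implicitly embedded as a vector of k-gram counts."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[sub] * ct[sub] for sub in cs)

print(spectrum_kernel("abab", "ab"))   # shared 2-gram "ab": 2 * 1 -> 2
print(spectrum_kernel("abc", "xyz"))   # no shared 2-grams -> 0
```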

In recent years, an increasing number of research projects investigated whether the central nervous system employs internal models in motor control. While inverse models in the control loop can be identified more readily in both motor behavior and the firing of single neurons, providing direct evidence for the existence of forward models is more complicated. In this paper, we will discuss such an identification of forward models in the context of the visuomotor control of an unstable dynamic system, the balancing of a pole on a finger. Pole balancing imposes stringent constraints on the biological controller, as it needs to cope with the large delays of visual information processing while keeping the pole at an unstable equilibrium. We hypothesize various model-based and non-model-based control schemes by which visuomotor control could be accomplished in this task, including Smith predictors, predictors with Kalman filters, tapped-delay-line control, and delay-uncompensated control. Behavioral experiments with human participants allow us to exclude most of the hypothesized control schemes. In the end, our data support the existence of a forward model in the sensory preprocessing loop of control. As an important part of our research, we will provide a discussion of when and how forward models can be identified, and also of the possible pitfalls in the search for forward models in control.
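The Smith-predictor scheme mentioned above can be sketched in a few lines of discrete-time simulation: an internal delay-free forward model predicts the current plant state, and the delayed measurement only corrects model error. This toy uses a stable first-order plant rather than an unstable pole, and all parameter values are invented:

```python
a, b, d, K, r = 0.9, 1.0, 5, 0.5, 1.0        # plant x' = a*x + b*u, delay d, gain K, setpoint r
xs, xhats = [0.0], [0.0]                     # true plant and internal forward-model trajectories

for k in range(80):
    y_delayed = xs[k - d] if k >= d else 0.0     # feedback arrives d steps late
    x_old_hat = xhats[k - d] if k >= d else 0.0  # what the model predicted for that old time
    x_pred = xhats[k] + (y_delayed - x_old_hat)  # Smith-predictor correction
    u = K * (r - x_pred)                         # act on the predicted *current* state
    xs.append(a * xs[k] + b * u)                 # true (delayed-output) plant
    xhats.append(a * xhats[k] + b * u)           # delay-free internal forward model
```

With a perfect forward model the correction term vanishes and the controller behaves as if there were no delay, converging to the fixed point b*K*r / (1 - a + b*K); with feedback on the raw delayed signal and a gain this high, the loop would oscillate.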

The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. In Experiment 1, subjects performed 5-10% better for colored than for black-and-white images, independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: the improvement in recognition memory depends on the color congruence of the presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework.

Practical experience has shown that in order to obtain the best possible performance, prior knowledge about invariances of a classification
problem at hand ought to be incorporated into the training procedure. We describe and review the major known methods for doing so in support vector machines,
provide experimental results, and discuss their respective merits. Among the new results is the lowest test error reported to date on the well-known MNIST
digit recognition benchmark, achieved with SVM training times significantly faster than those of previous SVM methods.
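One of the simplest ways to incorporate a known invariance, and one of the methods reviewed for SVMs, is the virtual-example approach: train on transformed copies of the data so the learner sees the invariance directly. A minimal sketch for 1-D translations (function name and zero-padding convention are our own):

```python
def virtual_examples(image, shifts=(1, -1)):
    """Augment a 1-D 'image' with translated copies (zero padding),
    so a learner sees the desired translation invariance in the data."""
    out = [list(image)]
    for s in shifts:
        if s > 0:
            out.append([0] * s + list(image[:-s]))   # shift right
        else:
            out.append(list(image[-s:]) + [0] * (-s))  # shift left
    return out

print(virtual_examples([1, 2, 3]))   # -> [[1, 2, 3], [0, 1, 2], [2, 3, 0]]
```

The virtual support vector method refines this by applying the transformations only to the support vectors of an initially trained SVM, which keeps the augmented training set small.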

The detectability of contrast increments was measured as a function of the contrast of a masking or pedestal grating at a number of different spatial frequencies ranging from 2 to 16 cycles per degree of visual angle. The pedestal grating always had the same orientation, spatial frequency, and phase as the signal. The shape of the contrast-increment-threshold versus pedestal-contrast (TvC) functions depends on the performance level used to define the threshold, but when both axes are normalized by the contrast corresponding to 75% correct detection at each frequency, the TvC functions at a given performance level are identical. Confidence intervals on the slope of the rising part of the TvC functions are so wide that our data cannot reject Weber's law.

In this paper we investigate connections between statistical learning
theory and data compression on the basis of support vector machine
(SVM) model selection. Inspired by several generalization bounds we
construct "compression coefficients" for SVMs, which measure the
amount by which the training labels can be compressed by some
classification hypothesis. The main idea is to relate the coding
precision of this hypothesis to the width of the margin of the
SVM. The compression coefficients connect well-known quantities such
as the radius-margin ratio R^2/rho^2, the eigenvalues of the kernel
matrix and the number of support vectors. To test whether they are
useful in practice we ran model selection experiments on several real
world datasets. We found that compression coefficients can
fairly accurately predict the parameters for which the test error is
minimized.
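The radius-margin ratio R^2/rho^2 mentioned above is easy to state concretely: R is the radius of the smallest ball containing the data and rho the margin of the classifier. A toy computation for a fixed unit-direction linear classifier, with invented data, labels, and weight vector (a real SVM would of course learn w):

```python
import math

def radius_margin_ratio(X, y, w):
    """R^2 / rho^2 for a fixed linear classifier w:
    R   = radius of the smallest origin-centred ball containing the data,
    rho = smallest geometric margin of a training point."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    R = max(math.sqrt(sum(xi * xi for xi in x)) for x in X)
    rho = min(yi * sum(wi * xi for wi, xi in zip(w, x)) / norm
              for x, yi in zip(X, y))
    return (R / rho) ** 2

X = [(1.0, 1.0), (2.0, 0.5), (-1.0, -1.0), (-0.5, -2.0)]   # toy separable data
y = [1, 1, -1, -1]
print(radius_margin_ratio(X, y, (1.0, 1.0)))   # ≈ 2.125 here
```

Generalization bounds of the radius-margin type scale with this ratio, which is why it appears as an ingredient of the compression coefficients.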

Detection performance was measured with sinusoidal and pulse-train gratings. Although the 2.09-c/deg pulse-train (line) gratings contained at least eight harmonics, all at equal contrast, they were no more detectable than their most detectable component. The addition of broadband pink noise designed to equalize the detectability of the components of the pulse train made the pulse train about a factor of four more detectable than any of its components. However, in contrast-discrimination experiments with a pedestal or masking grating of the same form and phase as the signal at 15% contrast, the noise did not affect the discrimination performance of the pulse train relative to that obtained with its sinusoidal components. We discuss the implications of these observations for models of early vision, in particular for the possible sources of internal noise.

Sum- or difference-frequency generation (SFG or DFG) in isotropic media is, in the electric-dipole approximation, symmetry-allowed only for optically active systems. The hyperpolarizability giving rise to these three-wave mixing processes features only one isotropic component. It factorizes into two terms, an energy (denominator) factor and a triple product of transition moments. This structure forbids degenerate SFG, i.e., second-harmonic generation, as well as the linear electro-optic (Pockels) effect in isotropic media. This second-order response also has no static limit, which leads to particularly strong resonance phenomena that are qualitatively different from those usually seen in the ubiquitous even-wave-mixing spectroscopies. In particular, the participation of two (rather than the usual one) excited states is essential to achieve dramatic resonance enhancement. We report our first efforts to observe such resonantly enhanced, chirality-specific SFG.

Sum-frequency generation in isotropic media is, in the electric-dipole approximation, symmetry-allowed only for chiral systems. We demonstrate that the sum-frequency intensity from an optically active liquid depends quadratically on the difference in concentration of the two enantiomers. The dominant contribution to the signal is found to be due to the chirality-specific electric-dipolar three-wave mixing nonlinearity. Selecting the polarization of all fields allows the chiral electric-dipolar contributions to the bulk sum-frequency signal to be discerned from any achiral magnetic-dipolar and electric-quadrupolar contributions.
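The quadratic concentration dependence follows directly from the fact that the two enantiomers contribute chiral hyperpolarizabilities of equal magnitude and opposite sign, so the bulk susceptibility scales with their number-density difference while the intensity is quadratic in the susceptibility. In our notation (N_R, N_S the enantiomer number densities, beta the isotropic chiral hyperpolarizability component):

```latex
\chi^{(2)}_{\mathrm{chiral}} \propto (N_R - N_S)\,\beta ,
\qquad
I_{\mathrm{SFG}} \propto \bigl|\chi^{(2)}_{\mathrm{chiral}}\bigr|^{2}
                \propto (N_R - N_S)^{2}\,|\beta|^{2} .
```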

Coherent second-order nonlinear optical processes are electric-dipole allowed in isotropic media only if the media are optically active. Sum-frequency generation in chiral liquids has recently been observed, and difference-frequency generation and optical rectification have been predicted to exist in isotropic chiral media. Both Rayleigh-Schrödinger perturbation theory and the density-matrix approach are used to discuss the quantum-chemical basis of optical rectification in optically active liquids. For pinene, we compute the corresponding orientationally averaged hyperpolarizability and estimate the light-induced dc electric polarization, as well as the voltage it may give rise to across a measuring capacitor near resonance.

Locally weighted learning (LWL) is a class of techniques from nonparametric statistics that provides useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of robotic systems. This paper introduces several LWL algorithms that have been tested successfully in real-time learning of complex robot tasks. We discuss two major classes of LWL: memory-based LWL, and purely incremental LWL, which does not need to store any data explicitly. In contrast to the traditional belief that LWL methods cannot work well in high-dimensional spaces, we provide new algorithms that have been tested on learning problems of up to 90 dimensions. The applicability of our LWL algorithms is demonstrated in various robot learning examples, including devil-sticking, pole balancing by a humanoid robot arm, and inverse-dynamics learning for seven- and 30-degree-of-freedom robots. In all these examples, our statistical neural-network techniques allowed faster or more accurate acquisition of motor control than classical control engineering approaches.
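The simplest memory-based LWL scheme is kernel-weighted averaging over stored data: points near the query dominate the prediction. The paper's algorithms (locally weighted regression variants) fit a local model rather than a constant, but the sketch below conveys the idea; bandwidth and data are invented:

```python
import math

def lwl_predict(q, data, h=0.5):
    """Memory-based locally weighted learning in its simplest form:
    a kernel-weighted average in which points near the query q dominate."""
    ws = [math.exp(-(x - q) ** 2 / (2 * h ** 2)) for x, _ in data]
    return sum(w * y for w, (_, y) in zip(ws, data)) / sum(ws)

data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-20, 21)]  # samples of y = x^2
print(lwl_predict(0.0, data))   # ≈ 0.25: smoothing bias of order h**2
```

Locally weighted *regression* removes most of this smoothing bias by fitting a weighted linear model around each query instead of a weighted constant.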

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.