Nanorobots are untethered structures of sub-micron size that can be controlled in a non-trivial way. Such nanoscale robotic agents are envisioned to revolutionize medicine by enabling minimally invasive diagnostic and therapeutic procedures. To be useful, nanorobots must be operated in complex biological fluids and tissues, which are often difficult to penetrate. In this chapter, we first discuss potential medical applications of motile nanorobots. We briefly present the challenges related to swimming at such small scales and we survey the rheological properties of some biological fluids and tissues. We then review recent experimental results in the development of nanorobots and in particular their design, fabrication, actuation, and propulsion in complex biological fluids and tissues. Recent work shows that their nanoscale dimension is a clear asset for operation in biological tissues, since many biological tissues consist of networks of macromolecules that prevent the passage of larger micron-scale structures, but contain dynamic pores through which nanorobots can move.

Haptics is an interdisciplinary field that seeks to both understand and engineer touch-based interaction. Although a wide range of systems and applications are being investigated, haptics researchers often concentrate on perception and manipulation through the human hand.
A haptic interface is a mechatronic system that modulates the physical interaction between a human and his or her tangible surroundings. Haptic interfaces typically involve mechanical, electrical, and computational layers that work together to sense user motions or forces, quickly process these inputs with other information, and physically respond by actuating elements of the user’s surroundings, thereby enabling him or her to act on and feel a remote and/or virtual environment.

Recent approaches to independent component analysis have used kernel
independence measures to obtain very good performance in ICA, particularly
in areas where classical methods experience difficulty (for instance,
sources with near-zero kurtosis). In this chapter, we compare two efficient
extensions of these methods for large-scale problems: random subsampling
of entries in the Gram matrices used in defining the independence
measures, and incomplete Cholesky decomposition of these matrices.
We derive closed-form, efficiently computable approximations for the
gradients of these measures, and compare their performance on ICA using
both artificial and music data. We show that kernel ICA can scale to much larger
problems than previously attempted, and that incomplete Cholesky decomposition
performs better than random sampling.
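The incomplete Cholesky decomposition mentioned above can be sketched as follows. This is an illustrative implementation of the standard greedy-pivoted algorithm, not the authors' exact code; the function name, tolerance, and kernel are assumptions. The key property is that the Gram matrix is never formed in full: only kernel evaluations for the pivot columns are needed.

```python
import numpy as np

def incomplete_cholesky(X, kernel, tol=1e-8, max_rank=None):
    """Pivoted incomplete Cholesky of the Gram matrix K[i, j] = kernel(X[i], X[j]),
    returning L of shape (n, m) with K ~= L @ L.T, without ever forming K."""
    n = X.shape[0]
    max_rank = n if max_rank is None else max_rank
    diag = np.array([kernel(X[i], X[i]) for i in range(n)])  # residual diagonal
    L = np.zeros((n, max_rank))
    perm = np.arange(n)
    m = 0
    while m < max_rank and diag[m:].sum() > tol:
        i = m + np.argmax(diag[m:])          # greedy pivot: largest residual
        perm[[m, i]] = perm[[i, m]]          # swap pivot into position m
        diag[[m, i]] = diag[[i, m]]
        L[[m, i], :] = L[[i, m], :]
        L[m, m] = np.sqrt(diag[m])
        k_col = np.array([kernel(X[perm[j]], X[perm[m]]) for j in range(m + 1, n)])
        L[m + 1:, m] = (k_col - L[m + 1:, :m] @ L[m, :m]) / L[m, m]
        diag[m + 1:] -= L[m + 1:, m] ** 2
        m += 1
    out = np.zeros((n, m))
    out[perm] = L[:, :m]                     # undo the pivoting permutation
    return out

# Sanity check on a small Gaussian-kernel Gram matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
rbf = lambda a, b: float(np.exp(-np.sum((a - b) ** 2)))
L = incomplete_cholesky(X, rbf)
K = np.array([[rbf(a, b) for b in X] for a in X])
err = np.abs(L @ L.T - K).max()
```

Because the residual trace bounds the approximation error, the rank can be chosen adaptively by the tolerance rather than fixed in advance.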

Most of the literature on Support Vector Machines (SVMs) concentrates on
the dual optimization problem. In this paper, we would like to point out
that the primal problem can also be solved efficiently, both for linear
and non-linear SVMs, and that there is no reason to ignore this possibility.
On the contrary, from the primal point of view new families of algorithms for
large scale SVM training can be investigated.
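A minimal sketch of what primal training looks like in the linear case, assuming plain (sub)gradient descent on the regularized hinge loss; the function name, step-size schedule, and hyperparameters are illustrative, not the paper's algorithm:

```python
import numpy as np

def primal_linear_svm(X, y, lam=0.01, epochs=200, lr=0.5):
    """Minimize lam/2 * ||w||^2 + mean_i max(0, 1 - y_i*(w.x_i + b))
    directly in the primal by (sub)gradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for t in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # margin violators
        grad_w = lam * w - (y[active] @ X[active]) / n
        grad_b = -y[active].sum() / n
        step = lr / (1 + lam * lr * t)             # decaying step size
        w -= step * grad_w
        b -= step * grad_b
    return w, b

# Toy separable problem: two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])
w, b = primal_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
```

Note that only the points currently violating the margin contribute to the gradient, which is one way the primal view exposes structure useful for large-scale training.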

A wealth of computationally efficient approximation methods for Gaussian process regression have been recently proposed. We give a unifying overview of sparse approximations, following Quiñonero-Candela and Rasmussen (2005), and a brief review of approximate matrix-vector multiplication methods.

The annual Neural Information Processing Systems (NIPS) conference is the flagship meeting on neural computation and machine learning. It draws a diverse group of attendees, including physicists, neuroscientists, mathematicians, statisticians, and computer scientists, interested in theoretical and applied aspects of modeling, simulating, and building neural-like or intelligent systems. The presentations are interdisciplinary, with contributions in algorithms, learning theory, cognitive science, neuroscience, brain imaging, vision, speech and signal processing, reinforcement learning, and applications. Only twenty-five percent of the papers submitted are accepted for presentation at NIPS, so the quality is exceptionally high. This volume contains the papers presented at the December 2006 meeting, held in Vancouver.

Convex learning algorithms, such as Support Vector Machines (SVMs), are often
seen as highly desirable because they offer strong practical properties and are
amenable to theoretical analysis. However, in this work we show how nonconvexity
can provide scalability advantages over convexity. We show how concave-convex
programming can be applied to produce (i) faster SVMs where training errors are
no longer support vectors, and (ii) much faster Transductive SVMs.
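The concave-convex procedure underlying both results can be stated in a few lines: split the objective into a convex and a concave part, linearize the concave part at the current iterate, and solve the resulting convex problem. The toy one-dimensional objective below is an illustrative assumption, chosen so the convex subproblem has a closed form; it is not the ramp-loss SVM objective of the paper.

```python
import numpy as np

def cccp(grad_concave, solve_convex, x0, iters=50):
    """Concave-convex procedure: to minimize f(x) = u(x) + v(x) with u convex
    and v concave, repeatedly linearize v at the current iterate and minimize
    the convex upper bound u(x) + v'(x_t) * x."""
    x = x0
    for _ in range(iters):
        x = solve_convex(grad_concave(x))
    return x

# Toy objective f(x) = x^2 - sqrt(1 + x^2):
# u(x) = x^2 is convex, v(x) = -sqrt(1 + x^2) is concave; the minimum is x = 0.
grad_v = lambda x: -x / np.sqrt(1.0 + x**2)
solve_u = lambda c: -c / 2.0      # argmin_x x^2 + c*x, in closed form
x_star = cccp(grad_v, solve_u, x0=3.0)
```

Each iteration minimizes a convex upper bound that touches the objective at the current point, so the objective value decreases monotonically; for ramp-loss SVMs the same scheme makes well-classified outliers drop out of the set of support vectors.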

In this chapter we are concerned with the problem of reconstructing patterns from their representation in feature space, known as the pre-image problem. We review existing algorithms and propose a learning-based approach. All algorithms are discussed with regard to their usability and complexity, and evaluated on an image denoising application.
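Among the existing algorithms, a classical baseline for Gaussian kernels is the fixed-point iteration in the style of Mika et al.; the sketch below is an illustrative version of that baseline (function name and parameters assumed), not the learning-based approach proposed in the chapter.

```python
import numpy as np

def rbf_preimage(X, gamma, z0, ell=1.0, iters=100):
    """Fixed-point iteration for the pre-image of a feature-space expansion
    Psi = sum_i gamma_i * phi(x_i) under the Gaussian kernel
    k(x, z) = exp(-||x - z||^2 / (2 ell^2)): each step moves z to the
    kernel-weighted average of the training points."""
    z = np.array(z0, dtype=float)
    for _ in range(iters):
        w = gamma * np.exp(-np.sum((X - z) ** 2, axis=1) / (2.0 * ell**2))
        z = (w @ X) / w.sum()
    return z

# Sanity check: if the expansion consists of a single training point,
# its pre-image should be exactly that point.
rng = np.random.default_rng(3)
X = rng.normal(size=(10, 2))
gamma = np.zeros(10)
gamma[4] = 1.0
z = rbf_preimage(X, gamma, z0=np.zeros(2))
```

The iteration can get stuck in poor local fixed points depending on the initialization, which is one motivation for learning the feature-space-to-input-space map directly instead.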

In the past, computational motor control has been approached from at least two major frameworks: the dynamic systems approach and the viewpoint of optimal control. The dynamic systems approach emphasizes motor control as a process of self-organization between an animal and its environment. Nonlinear differential equations that can model entrainment and synchronization behavior are among the favorite tools of dynamic systems modelers. In contrast, optimal control approaches view motor control as the evolutionary or developmental result of a nervous system that tries to optimize rather general organizational principles, e.g., energy consumption or accurate task achievement. Optimal control theory is usually employed to develop appropriate theories. Interestingly, there is rather little interaction between dynamic systems and optimal control modelers, as the two approaches follow rather different philosophies and are often viewed as diametrically opposed. In this paper, we develop a computational approach to motor control that offers a unifying modeling framework for both dynamic systems and optimal control approaches. In discussions of several behavioral experiments and some theoretical and robotics studies, we demonstrate how our computational ideas allow both the representation of self-organizing processes and the optimization of movement based on reward criteria. Our modeling framework is rather simple and general, and opens opportunities to revisit many previous modeling results from this novel unifying view.

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.