Symmetry breaking and the emergence of self-organized patterns are hallmarks of complexity. Here, we demonstrate that a sessile drop containing titania powder particles with negligible self-propulsion exhibits a transition to collective motion leading to self-organized flow patterns. This phenomenology emerges through a novel mechanism involving the interplay between the chemical activity of the photocatalytic particles, which induces Marangoni stresses at the liquid–liquid interface, and the geometrical confinement provided by the drop. The response of the interface to the chemical activity of the particles is the source of a significantly amplified hydrodynamic flow within the drop, which moves the particles. Furthermore, in ensembles of such active drops, long-ranged ordering of the flow patterns within the drops is observed. We show that the ordering is dictated by chemical communication between drops, i.e., an alignment of the flow patterns is induced by the gradients of the chemicals emanating from the active particles, rather than by hydrodynamic interactions.

Variational Autoencoders (VAEs) provide a theoretically backed framework for deep generative models. However, they often produce “blurry” images, which is linked to their training objective. Sampling in the most popular implementation, the Gaussian VAE, can be interpreted as simply injecting noise into the input of a deterministic decoder. In practice, this simply enforces a smooth latent space structure. We challenge the adoption of the full VAE framework on this specific point in favor of a simpler, deterministic one. Specifically, we investigate how substituting stochasticity with other explicit and implicit regularization schemes can lead to a meaningful latent space without having to force it to conform to an arbitrarily chosen prior. To retrieve a generative mechanism for sampling new data points, we propose to employ an efficient ex-post density estimation step that can be readily adopted both for the proposed deterministic autoencoders and to improve sample quality of existing VAEs. We show in a rigorous empirical study that regularized deterministic autoencoding achieves state-of-the-art sample quality on the common MNIST, CIFAR-10 and CelebA datasets.
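The ex-post density estimation step can be sketched as follows. This is a minimal illustration with synthetic 2-D "latent codes" standing in for the output of a trained deterministic encoder; a Gaussian mixture is one density estimator that can play this role.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical 2-D latent codes standing in for the output of a trained
# deterministic encoder on two modes of the data distribution.
rng = np.random.default_rng(0)
latents = np.concatenate([rng.normal(-2.0, 0.5, size=(200, 2)),
                          rng.normal(+2.0, 0.5, size=(200, 2))])

# Ex-post density estimation: fit a density model over the latent codes.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(latents)

# New samples are drawn from the fitted density and would then be decoded
# by the deterministic decoder to produce new data points.
z_new, _ = gmm.sample(64)
print(z_new.shape)  # (64, 2)
```

The same fitting step can be applied to the aggregate posterior of an already-trained VAE, replacing sampling from the prior.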

Traditional photodetectors generally show a unipolar photocurrent response when illuminated with light of wavelength equal to or shorter than the optical bandgap. Here, we report that a thin film of gallium oxide (GO) decorated with plasmonic nanoparticles, surprisingly, exhibits a change in the polarity of the photocurrent for different UV bands. Silver (Ag) nanoparticles are vacuum-deposited onto β-Ga2O3, and the AgNP@GO thin films show a record responsivity of 250 A/W, which significantly outperforms bare GO planar photodetectors. The photoresponsivity reverses sign from +157 µA/W in the UV-C band under unbiased operation to -353 µA/W in the UV-A band. The current reversal is rationalized by considering the charge dynamics stemming from hot electrons generated when the incident light excites a local surface plasmon resonance (LSPR) in the Ag nanoparticles. The Ag nanoparticles improve the external quantum efficiency and detectivity by nearly one order of magnitude, with high values of 1.2×10^5 and 3.4×10^14 Jones, respectively. This plasmon-enhanced solar-blind GO detector allows UV regions to be spectrally distinguished, which is useful for the development of sensitive dynamic imaging photodetectors.

In 2020 IEEE International Conference on Robotics and Automation (ICRA), February 2020 (conference)

Abstract

Non-contact manipulation is of great importance in the actuation of micro-robotics. It is challenging to manipulate micro-scale objects without contact over large spatial distances in fluid. Here, we describe a novel approach for the dynamic position control of microparticles in three-dimensional (3D) space, based on high-speed acoustic streaming generated by a micro-fabricated gigahertz transducer. Due to the vertical lifting force and the horizontal centripetal force generated by the streaming, microparticles can be stably trapped at a position far away from the transducer surface and manipulated over centimeter distances in all three directions. Only the hydrodynamic force is utilized in the system for particle manipulation, making it a versatile tool regardless of the material properties of the trapped particle. The system shows high reliability and manipulation velocity, revealing its potential for applications in robotics and automation at small scales.

There have been several reports of plasmonically enhanced graphene photodetectors in the visible and the near infrared regime but rarely in the ultraviolet. In a previous work, we have reported that a graphene-silver hybrid structure shows a high photoresponsivity of 13 A/W at 270 nm. Here, we consider the likely mechanisms that underlie this strong photoresponse. We investigate the role of the plasmonic layer and examine the response using silver and gold nanoparticles of similar dimensions and spatial arrangement. The effect on local doping, strain, and absorption properties of the hybrid is also probed by photocurrent measurements and Raman and UV-visible spectroscopy. We find that the local doping from the silver nanoparticles is stronger than that from gold and correlates with a measured photosensitivity that is larger in devices with a higher contact area between the plasmonic nanomaterials and the graphene layer.

Transurethral resection of the prostate (TURP) is a minimally invasive endoscopic procedure that requires surgical experience and skill. To permit surgical training under realistic conditions, we report a novel phantom of the human prostate that can be resected with TURP. The phantom mirrors the anatomy and haptic properties of the gland and permits quantitative evaluation of important surgical performance indicators. Mixtures of soft materials are engineered to mimic the physical properties of the human tissue, including the mechanical strength, the electrical and thermal conductivity, and the appearance under an endoscope. Electrocautery resection of the phantom closely resembles the procedure on human tissue. An ultrasound contrast agent was applied to the central zone to serve as a label for the quantitative evaluation of the surgery; it was not detectable by the surgeon during the procedure but showed high contrast when imaged afterwards. Quantitative criteria for performance assessment are established and evaluated by automated image analysis. We present the workflow of a surgical simulation on a prostate phantom followed by quantitative evaluation of the surgical performance. Surgery on the phantom is useful for medical training, and enables the development and testing of endoscopic and minimally invasive surgical instruments.

A robot senses its environment, processes the sensory information, acts in response to these inputs, and possibly communicates with the outside world. Robots generally achieve these tasks with electronics-based hardware or by receiving inputs from some external hardware. In contrast, simple microorganisms can autonomously perceive, act, and communicate via purely physicochemical processes in soft material systems. A key property of biological systems is that they are built from energy-consuming ‘active’ units. Exciting developments in materials science show that even very simple artificial active building blocks can exhibit surprisingly rich emergent behaviors. Active non-equilibrium systems are therefore predicted to play an essential role in realizing interactive materials. A major challenge is to find robust ways to couple and integrate the energy-consuming building blocks with the mechanical structure of the material. Success in this endeavor will lead to a new generation of sophisticated micro- and soft-robotic systems that can operate autonomously.

We introduce two new functionals, the constrained covariance and the kernel mutual information, to measure the degree of independence of random variables. These quantities are both based on the covariance between functions of the random variables in reproducing kernel Hilbert spaces (RKHSs). We prove that when the RKHSs are universal, both functionals are zero if and only if the random variables are pairwise independent. We also show that the kernel mutual information is an upper bound near independence on the Parzen window estimate of the mutual information. Analogous results apply for two correlation-based dependence functionals introduced earlier: we show the kernel canonical correlation and the kernel generalised variance to be independence measures for universal kernels, and prove the latter to be an upper bound on the mutual information near independence. The performance of the kernel dependence functionals in measuring independence is verified in the context of independent component analysis.

In IGAIA 2005, pages: 324-333, 2nd International Symposium on Information Geometry and its Applications, December 2005 (inproceedings)

Abstract

The purpose of this paper is to propose a method of constructing exponential families of Hilbert manifolds, on which estimation theory can be built. Although there have been works on infinite-dimensional exponential families of Banach manifolds (Pistone and Sempi, 1995; Gibilisco and Pistone, 1998; Pistone and Rogantin, 1999), they are not appropriate for discussing statistical estimation with a finite number of samples: the likelihood function with finite samples is not continuous on the manifold.
In this paper we use a reproducing kernel Hilbert space as a functional space for constructing an exponential manifold. A reproducing kernel Hilbert space is defined as a Hilbert space of functions such that evaluation of a function at an arbitrary point is a continuous functional on the Hilbert space. Since we can discuss the value of a function in this space, it is very natural to use a manifold associated with a reproducing kernel Hilbert space as a basis of estimation theory.
We focus on maximum likelihood estimation (MLE) with the exponential manifold of a reproducing kernel Hilbert space. As in many non-parametric estimation methods, a straightforward extension of MLE to an infinite-dimensional exponential manifold suffers from ill-posedness, caused by the fact that the estimator must be chosen from an infinite-dimensional space given only a finite number of constraints from the data. To solve this problem, a pseudo-maximum likelihood method is proposed by restricting the infinite-dimensional manifold to a series of finite-dimensional submanifolds, which enlarge as the number of samples increases. Some asymptotic results in the limit of infinite samples are shown, including the consistency of the pseudo-MLE.

We provide a new unifying view, including all existing proper probabilistic sparse approximations for Gaussian process regression. Our approach relies on expressing the effective prior which the methods are using. This allows new insights to be gained, and highlights the relationship between existing methods. It also allows for a clear, theoretically justified ranking of the closeness of the known approximations to the corresponding full GPs. Finally, we point directly to designs of new, better sparse approximations, combining the best of the existing strategies, within attractive computational constraints.
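In this view, each approximation is characterized by the effective prior covariance it substitutes for the full GP prior. A minimal sketch, under an assumed squared-exponential kernel and hand-picked inducing inputs, of two such effective priors: the subset-of-regressors (Nyström) covariance Q, and a FITC-style covariance that restores the exact prior variances on the diagonal.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # squared-exponential kernel (assumed for illustration)
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * ls**2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))     # training inputs (synthetic)
Z = np.linspace(-3, 3, 10)[:, None]      # inducing inputs (a modelling choice)

Knn = rbf(X, X)
Knm = rbf(X, Z)
Kmm = rbf(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for numerical stability

# Subset-of-regressors / Nystrom effective prior covariance
Q = Knm @ np.linalg.solve(Kmm, Knm.T)
# FITC-style effective prior: same off-diagonals, exact prior variances
fitc = Q + np.diag(np.diag(Knn - Q))

print(np.allclose(np.diag(fitc), np.diag(Knn)))  # True
```

The ranking mentioned above can be read off such expressions: the closer an effective prior is to Knn, the closer the approximation is to the full GP.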

Data mining algorithms are facing the challenge to deal with an increasing number of complex objects. For graph data, a whole toolbox of data mining algorithms becomes available by defining a kernel function on instances of graphs. Graph kernels based on walks, subtrees and cycles in graphs have been proposed so far. As a general problem, these kernels are either computationally expensive or limited in their expressiveness. We try to overcome this problem by defining expressive graph kernels which are based on paths. As the computation of all paths and longest paths in a graph is NP-hard, we propose graph kernels based on shortest paths. These kernels are computable in polynomial time, retain expressivity and are still positive definite. In experiments on classification of graph models of proteins, our shortest-path kernels show significantly higher classification accuracy than walk-based kernels.
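A shortest-path kernel of this kind can be sketched as follows: compute all-pairs shortest-path lengths per graph (Floyd-Warshall), then compare the two multisets of path lengths with a delta (equality) base kernel. The tiny example graphs are illustrative only.

```python
import numpy as np

def floyd_warshall(adj):
    # all-pairs shortest-path lengths from an unweighted adjacency matrix
    n = len(adj)
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def shortest_path_kernel(adj_a, adj_b):
    # delta base kernel: count pairs of shortest paths with equal length
    da, db = floyd_warshall(adj_a), floyd_warshall(adj_b)
    pa = da[np.isfinite(da) & (da > 0)]
    pb = db[np.isfinite(db) & (db > 0)]
    return int(sum(np.sum(pb == v) for v in pa))

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # complete on 3 nodes
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])     # path on 3 nodes
print(shortest_path_kernel(triangle, chain))  # 24
```

Floyd-Warshall runs in O(n^3) per graph and the pairwise comparison in O(n^4) in this naive form, i.e., polynomial time, in contrast to the NP-hard all-paths and longest-path computations.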

In this paper we present a primal-dual decomposition algorithm for support vector machine training. As with existing methods that use very small working sets (such as Sequential Minimal Optimization (SMO), Successive Over-Relaxation (SOR) or the Kernel Adatron (KA)), our method scales well, is straightforward to implement, and does not require an external QP solver. Unlike SMO, SOR and KA, the method is applicable to a large number of SVM formulations regardless of the number of equality constraints involved. The effectiveness of our algorithm is demonstrated on an SVM variant that is more difficult in this respect, namely semi-parametric support vector regression.

We propose an independence criterion based on the eigenspectrum of covariance operators in reproducing kernel Hilbert spaces (RKHSs), consisting of an empirical estimate of the Hilbert-Schmidt norm of the cross-covariance operator (we term this a Hilbert-Schmidt Independence Criterion, or HSIC). This approach has several advantages compared with previous kernel-based independence criteria. First, the empirical estimate is simpler than any other kernel dependence test, and requires no user-defined regularisation. Second, there is a clearly defined population quantity which the empirical estimate approaches in the large sample limit, with exponential convergence guaranteed between the two: this ensures that independence tests based on HSIC do not suffer from slow learning rates. Finally, we show in the context of independent component analysis (ICA) that the performance of HSIC is competitive with that of previously published kernel-based criteria, and of other recently published ICA methods.
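The biased empirical HSIC estimate amounts to tr(KHLH)/(m-1)^2, where K and L are kernel Gram matrices on the two samples and H is the centering matrix. A minimal sketch with an assumed Gaussian kernel and fixed bandwidth:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    # Gaussian kernel Gram matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y, sigma=1.0):
    # biased empirical HSIC: tr(K H L H) / (m - 1)^2
    m = X.shape[0]
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    H = np.eye(m) - np.ones((m, m)) / m  # centering matrix
    return np.trace(K @ H @ L @ H) / (m - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
hsic_indep = hsic(x, rng.normal(size=(200, 1)))              # independent pair
hsic_dep = hsic(x, x**2 + 0.05 * rng.normal(size=(200, 1)))  # dependent pair
print(hsic_dep > hsic_indep)
```

As the abstract notes, no user-defined regularisation enters the estimate; the only choices are the two kernels.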

Journal of Computer and System Sciences, 71(3):333-359, October 2005 (article)

Abstract

In order to apply the maximum margin method in arbitrary metric spaces, we suggest embedding the metric space into a Banach or Hilbert space and performing linear classification in this space. We propose several embeddings and recall that an isometric embedding into a Banach space is always possible, while an isometric embedding into a Hilbert space is only possible for certain metric spaces. As a result, we obtain a general maximum margin classification algorithm for arbitrary metric spaces (whose solution is approximated by an algorithm of Graepel). Interestingly, the embedding approach, when applied to a metric which can be embedded into a Hilbert space, yields the SVM algorithm, which emphasizes the fact that its solution depends on the metric and not on the kernel. Furthermore, we give upper bounds on the capacity of the function classes corresponding to both embeddings in terms of Rademacher averages. Finally, we compare the capacities of these function classes directly.

This paper deals with an unusual phenomenon where most machine learning algorithms yield good performance on the training set but systematically worse-than-random performance on the test set. This has been observed so far for some natural data sets and demonstrated for some synthetic data sets when the classification rule is learned from a small set of training samples drawn from some high-dimensional space. The initial analysis presented in this paper shows that anti-learning is a property of data sets and is quite distinct from overfitting of the training data. Moreover, the analysis leads to a specification of some machine learning procedures which can overcome anti-learning and generate machines able to classify training and test data consistently.

Gaussian process priors can be used to define flexible, probabilistic classification models. Unfortunately exact Bayesian inference is analytically intractable and various approximation techniques have been proposed. In this work we review and compare Laplace’s method and Expectation Propagation for approximate Bayesian inference in the binary Gaussian process classification model. We present a comprehensive comparison of the approximations, their predictive performance and marginal likelihood estimates to results obtained by MCMC sampling. We explain theoretically and corroborate empirically the advantages of Expectation Propagation compared to Laplace’s method.
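As a small illustration of Laplace-approximated GP classification, the sketch below uses scikit-learn's GaussianProcessClassifier, which approximates the non-Gaussian posterior with Laplace's method; the data and the RBF kernel choice are assumptions for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Toy binary task (hypothetical data): the label is the sign of the first input.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)

# GaussianProcessClassifier performs Laplace-approximate Bayesian inference.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0).fit(X, y)
proba = gpc.predict_proba(np.array([[2.0, 0.0], [-2.0, 0.0]]))
print(proba[0, 1] > 0.5, proba[1, 1] < 0.5)
```

Expectation Propagation, the alternative compared in the paper, is not implemented in scikit-learn; toolkits such as GPy offer it.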

Gauss' principle of least constraint and its generalizations have provided useful insights for the development of tracking controllers for mechanical systems [1]. Using this concept, we present a novel methodology for the design of a specific class of robot controllers. With our new framework, we demonstrate that well-known as well as several novel nonlinear robot control laws can be derived from this generic framework, and we show experimental verifications on a Sarcos Master Arm robot for some of these controllers. We believe that the suggested approach unifies and simplifies the design of optimal nonlinear control laws for robots obeying rigid-body dynamics equations, with or without external constraints (holonomic or nonholonomic), with over-actuation or under-actuation, and for both open-chain and closed-chain kinematics.

Several large scale data mining applications, such as text categorization and gene expression analysis, involve high-dimensional data that is also inherently directional in nature. Often such data is L2 normalized so that it lies on the surface of a unit hypersphere. Popular models such as (mixtures of) multivariate Gaussians are inadequate for characterizing such data. This paper proposes a generative mixture-model approach to clustering directional data based on the von Mises-Fisher (vMF) distribution, which arises naturally for data distributed on the unit hypersphere. In particular, we derive and analyze two variants of the Expectation Maximization (EM) framework for estimating the mean and concentration parameters of this mixture. Numerical estimation of the concentration parameters is non-trivial in high dimensions since it involves functional inversion of ratios of Bessel functions. We also formulate two clustering algorithms corresponding to the variants of EM that we derive. Our approach provides a theoretical basis for the use of cosine similarity that has been widely employed by the information retrieval community, and obtains the spherical kmeans algorithm (kmeans with cosine similarity) as a special case of both variants. Empirical results on clustering of high-dimensional text and gene-expression data based on a mixture of vMF distributions show that the ability to estimate the concentration parameter for each vMF component, which is not present in existing approaches, yields superior results, especially for difficult clustering tasks in high-dimensional spaces.
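The spherical kmeans special case mentioned above can be sketched directly: ordinary kmeans with cosine similarity as the assignment rule and centroids re-projected onto the unit sphere. The two tight directional clusters below are hypothetical data for illustration.

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, seed=0):
    # kmeans with cosine similarity: data and centroids live on the unit sphere
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)  # assign by cosine similarity
        for j in range(k):
            members = X[labels == j]
            if len(members):
                mean = members.sum(axis=0)
                centers[j] = mean / np.linalg.norm(mean)  # re-project onto sphere
    return labels, centers

# two tight directional clusters (hypothetical data)
rng = np.random.default_rng(1)
a = rng.normal([1.0, 0.0], 0.05, size=(50, 2))
b = rng.normal([0.0, 1.0], 0.05, size=(50, 2))
labels, _ = spherical_kmeans(np.vstack([a, b]), k=2)
print(len(set(labels[:50].tolist())), len(set(labels[50:].tolist())))
```

The full vMF mixture additionally estimates a concentration parameter per component, which this hard-assignment special case discards.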

We propose statistical learning methods for approximating implicit surfaces and computing dense 3D deformation fields. Our approach is based on Support Vector (SV) Machines, which are state of the art in machine learning. It is straightforward to implement and computationally competitive; its parameters can be automatically set using standard machine learning methods.
The surface approximation is based on a modified Support Vector regression. We present applications to 3D head reconstruction, including automatic removal of outliers and hole filling.
In a second step, we build on our SV representation to compute dense 3D deformation fields between two objects.
The fields are computed using a generalized SV Machine enforcing correspondence between the previously learned implicit SV object representations, as well as correspondences between feature points if such points are available.
We apply the method to the morphing of 3D heads and other objects.

Support vector machines (SVMs) have been successfully used to classify proteins into functional categories. Recently, to integrate multiple data sources, a semidefinite programming (SDP) based SVM method was introduced by Lanckriet et al. (2004). In SDP/SVM, multiple kernel matrices, corresponding to each of the data sources, are combined with weights obtained by solving an SDP. However, when trying to apply SDP/SVM to large problems, the computational cost can become prohibitive, since both converting the data to a kernel matrix for the SVM and solving the SDP are time and memory demanding. Another application-specific drawback arises when some of the data sources are protein networks. A common method of converting a network to a kernel matrix is the diffusion kernel method, which has time complexity of O(n^3) and produces a dense matrix of size n x n. We propose an efficient method of protein classification using multiple protein networks. Available protein networks, such as a physical interaction network or a metabolic network, can be directly incorporated. Vectorial data can also be incorporated after conversion into a network by means of neighbor point connection. Similarly to the SDP/SVM method, the combination weights are obtained by convex optimization. Due to the sparsity of network edges, the computation time is nearly linear in the number of edges of the combined network. Additionally, the combination weights provide information useful for discarding noisy or irrelevant networks. Experiments on function prediction of 3588 yeast proteins show promising results: the computation time is enormously reduced, while the accuracy remains comparable to the SDP/SVM method.

ENTROPY index monitoring, based on the spectral entropy of the electroencephalogram, is a promising new method to measure the depth of anaesthesia. We examined the association between spectral entropy and regional cerebral blood flow in healthy subjects anaesthetised with 2%, 3% and 4% end-expiratory concentrations of sevoflurane and 7.6, 12.5 and 19.0 microg.ml(-1) plasma drug concentrations of propofol. Spectral entropy from the frequency band 0.8-32 Hz was calculated and cerebral blood flow assessed using positron emission tomography and [(15)O]-labelled water at baseline and at each anaesthesia level. Both drugs induced significant reductions in spectral entropy and in cortical and global cerebral blood flow. Midfrontal-central spectral entropy was associated with individual frontal and whole-brain blood flow values across all conditions, suggesting that this novel measure of anaesthetic depth can depict global changes in neuronal activity induced by the drugs. The cortical areas of the most significant associations were remarkably similar for both drugs.
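The band-limited spectral entropy underlying such an index can be sketched as follows. This is a generic normalized spectral entropy over the 0.8-32 Hz band, not the proprietary ENTROPY algorithm; the test signals are synthetic.

```python
import numpy as np

def spectral_entropy(signal, fs, f_lo=0.8, f_hi=32.0):
    # Shannon entropy of the normalized power spectrum within a band,
    # scaled by the entropy of a flat spectrum so the result lies near [0, 1]
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    p = psd[band] / psd[band].sum()
    h = -np.sum(p * np.log(p + 1e-12))
    return h / np.log(band.sum())

fs = 128                                # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
tone = np.sin(2 * np.pi * 10 * t)       # narrowband signal: low entropy
noise = rng.normal(size=t.size)         # broadband noise: high entropy
h_tone, h_noise = spectral_entropy(tone, fs), spectral_entropy(noise, fs)
print(h_tone < h_noise)  # True
```

Deeper anaesthesia regularizes the EEG, concentrating its power spectrum and lowering this entropy, which is why the index tracks anaesthetic depth.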

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.