Climbing robots have been widely applied in many industries involving hard-to-access, dangerous, or hazardous environments to replace human workers. Climbing speed, payload capacity, the ability to overcome obstacles, and wall-to-wall transitioning are key characteristics of climbing robots. Here, multilinked track wheel-type climbing robots are proposed to enhance these characteristics. The robots have been developed over five years in collaboration among three universities: Seoul National University, Carnegie Mellon University, and Yeungnam University. Four types of robots are presented for different applications, with different surface-attachment methods and mechanisms: MultiTank for indoor sites, the flexible caterpillar robot (FCR) and Combot for heavy industrial sites, and MultiTrack for high-rise buildings. The surface-attachment method differs for each robot and application, and the joints between links are designed as active or passive according to the requirements of a given robot. Conceptual design, practical design, and control issues of these climbing robots are reported; a proper choice of attachment method and joint type proves essential for a successful multilinked track wheel-type climbing robot across different surface materials, robot sizes, and computational budgets.

Marker-based motion capture (mocap) is widely criticized as producing lifeless animations. We argue that important information about body surface motion is present in standard marker sets but is lost when extracting a skeleton. We demonstrate a new approach, called MoSh (Motion and Shape capture), that automatically extracts this detail from mocap data. MoSh estimates body shape and pose together from sparse marker data by exploiting a parametric model of the human body. In contrast to previous work, MoSh solves for the marker locations relative to the body and estimates accurate body shape directly from the markers without the use of 3D scans; this effectively turns a mocap system into an approximate body scanner. MoSh is able to capture soft-tissue motions directly from markers by allowing body shape to vary over time. We evaluate the effect of different marker sets on pose and shape accuracy and propose a new sparse marker set for capturing soft-tissue motion. We illustrate MoSh by recovering body shape, pose, and soft-tissue motion from archival mocap data and using this to produce animations with subtlety and realism. We also show soft-tissue motion retargeting to new characters and show how to magnify the 3D deformations of soft tissue to create animations with appealing exaggerations.
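A minimal sketch of the shape-from-markers idea: an invented linear shape model stands in for the parametric body model MoSh uses, and shape coefficients are recovered from sparse, noisy marker positions by least squares (marker count, basis size, and noise level here are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "body model": vertex positions are a mean shape
# plus a linear combination of shape basis vectors (a stand-in for the
# parametric model; the real method also solves for pose and marker
# offsets, which are omitted here).
n_vertices, n_betas = 200, 5
mean_shape = rng.normal(size=(n_vertices, 3))
shape_basis = rng.normal(size=(n_betas, n_vertices, 3))

def body(betas):
    return mean_shape + np.tensordot(betas, shape_basis, axes=1)

# Sparse markers: a fixed subset of vertices plays the role of the
# marker locations on the body surface.
marker_idx = np.arange(0, n_vertices, 10)                # 20 "markers"

true_betas = rng.normal(size=n_betas)
observed = body(true_betas)[marker_idx]                  # one mocap frame
observed += rng.normal(scale=1e-3, size=observed.shape)  # marker noise

# Fit shape by linear least squares on the marker residuals only --
# the sparse-marker analogue of a shape objective.
A = shape_basis[:, marker_idx, :].reshape(n_betas, -1).T
b = (observed - mean_shape[marker_idx]).ravel()
est_betas, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(est_betas, true_betas, atol=1e-2))
```

With 20 markers (60 scalar constraints) and 5 shape coefficients, the system is well overdetermined, which is why a handful of markers can pin down body shape.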

ACM Transactions on Applied Perception for the Symposium on Applied Perception, 11(3):13:1-13:18, September 2014 (article)


The goal of this research was to investigate women’s sensitivity to changes in their perceived weight by altering the body mass index (BMI) of the participants’ personalized avatars displayed on a large-screen immersive display. We created the personalized avatars with a full-body 3D scanner that records both the participants’ body geometry and texture. We altered the weight of the personalized avatars to produce changes in BMI while keeping height, arm length and inseam fixed, exploiting the correlation between body geometry and anthropometric measurements encapsulated in a statistical body shape model created from thousands of body scans. In a 2x2 psychophysical experiment, we investigated the relative importance of visual cues, namely shape (own shape vs. an average female body shape with equivalent height and BMI to the participant) and texture (own photo-realistic texture vs. a checkerboard-pattern texture), on the ability to accurately perceive own current body weight (by asking participants ‘Is the avatar the same weight as you?’). Our results indicate that shape (with height and BMI fixed) had little effect on the perception of body weight. Interestingly, the participants perceived their body weight veridically when they saw their own photo-realistic texture and significantly underestimated their body weight when the avatar had a checkerboard-patterned texture. The range that the participants accepted as their own current weight was a tolerance range of approximately +0.83% to −6.05% BMI change around their perceived weight. Both the shape and the texture had an effect on the reported similarity of the body parts and the whole avatar to the participant’s body. This work has implications for new measures for patients with body image disorders, as well as for researchers interested in creating personalized avatars for games, training applications or virtual reality.
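The weight-editing step can be sketched as follows, with a purely synthetic stand-in for the statistical body shape model (the invented coefficients and correlations below are illustrative only; the real model is learned from thousands of 3D scans):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "scans": shape coefficients whose first components are
# made to correlate with height and BMI (invented numbers).
n_scans, n_betas = 1000, 8
height = rng.normal(170, 8, n_scans)                 # cm
bmi = rng.normal(24, 4, n_scans)                     # kg/m^2
betas = rng.normal(size=(n_scans, n_betas))
betas[:, 0] += 0.05 * height                         # invented correlations
betas[:, 1] += 0.10 * bmi

# Learn a linear map from (1, height, BMI) to shape coefficients --
# the correlation the experiment exploits to edit weight while
# keeping height (and the other fixed measurements) unchanged.
X = np.column_stack([np.ones(n_scans), height, bmi])
W, *_ = np.linalg.lstsq(X, betas, rcond=None)        # W[2]: BMI direction

def edit_bmi(beta, delta_bmi):
    """Shift shape coefficients along the BMI direction, height fixed."""
    return beta + delta_bmi * W[2]

subject = betas[0]
heavier = edit_bmi(subject, 2.0)                     # +2 BMI avatar
print(heavier[1] - subject[1])                       # ~0.10 * 2 here
```

Because only the BMI regressor changes, the edited avatar moves along the population's weight-correlated shape direction while the height-correlated components stay put.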

Hierarchical assembly of self-healing adhesive proteins creates strong and robust structural and interfacial materials, but the molecular design rules and structure–property relationships of such structural proteins remain unclear. Elucidating these relationships would allow rational design of next-generation genetically engineered self-healing structural proteins. Here we report a general self-healing and self-assembly strategy based on a multiphase recombinant-protein-based material. The segmented structure of the protein combines soft glycine- and tyrosine-rich segments with self-healing capability and hard beta-sheet segments. The soft segments are strongly plasticized by water, lowering the self-healing temperature close to body temperature. The hard segments self-assemble into nanoconfined domains that reinforce the material. The healing strength scales sublinearly with contact time, consistent with the diffusion and wetting processes underlying autohesion. These findings suggest that recombinant structural proteins from heterologous expression have potential as strong and repairable engineering materials.

Modeling how the human body deforms during breathing is important for the realistic animation of lifelike 3D avatars. We learn a model of body shape deformations due to breathing for different breathing types and provide simple animation controls to render lifelike breathing regardless of body shape. We capture and align high-resolution 3D scans of 58 human subjects. We compute deviations from each subject’s mean shape during breathing and study the statistics of such shape changes for different genders, body shapes, and breathing types. We use the volume of the registered scans as a proxy for lung volume and learn a novel non-linear model relating volume and breathing type to 3D shape deformations and pose changes. We then augment a SCAPE body model so that body shape is determined by identity, pose, and the parameters of the breathing model. These parameters provide an intuitive interface with which animators can synthesize 3D human avatars with realistic breathing motions. We also develop a novel interface for animating breathing using a spirometer, which measures the changes in breathing volume of a “breath actor.”
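The animation interface described above can be sketched with a toy model: a per-breathing-type deformation direction scaled by a nonlinear function of normalized lung volume. Everything here (mesh size, blendshapes, volume range, the exponent) is an invented stand-in for the learned non-linear model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the breathing model's interface: one deformation
# direction per breathing type, driven by normalized lung volume.
n_vertices = 500
mean_shape = rng.normal(size=(n_vertices, 3))
blendshapes = {                         # hypothetical per-type directions
    "chest": rng.normal(scale=0.01, size=(n_vertices, 3)),
    "belly": rng.normal(scale=0.01, size=(n_vertices, 3)),
}

def breathe(breath_type, volume, v_min=2.0, v_max=6.0):
    """Deform the mean shape for a breathing type and lung volume (L)."""
    t = np.clip((volume - v_min) / (v_max - v_min), 0.0, 1.0)
    w = t ** 1.5                        # stand-in nonlinearity
    return mean_shape + w * blendshapes[breath_type]

# Animate one inhale: the volume trace could come from a spirometer,
# as in the paper's "breath actor" interface; here it is synthetic.
volumes = np.linspace(2.0, 6.0, 5)
frames = [breathe("belly", v) for v in volumes]
print(np.allclose(frames[0], mean_shape), frames[-1].shape)
```

The key design point survives the simplification: animators drive a single intuitive parameter (volume) plus a discrete breathing type, and the model supplies the full-body deformation.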

As we move toward the miniaturization of devices to perform tasks at the nano- and microscale, it has become increasingly important to develop new methods for actuation, sensing, and control. Over the past decade, bio-hybrid methods have been investigated as a promising approach to overcome the challenges of scaling down robotic and other functional devices. These methods integrate biological cells with artificial components and can therefore take advantage of the intrinsic actuation and sensing functionalities of biological cells. Here, recent advancements in bio-hybrid actuation are reviewed, and the challenges associated with the design, fabrication, and control of bio-hybrid microsystems are discussed. As a case study, focus is placed on the development of bacteria-driven microswimmers, which have been investigated as targeted drug-delivery carriers. Finally, a future outlook for the development of these systems is provided. The continued integration of biological and artificial components is envisioned to enable tasks at ever smaller scales, leading to the parallel and distributed operation of functional systems at the microscale.

Tissue and biological fluids are complex viscoelastic media with a nanoporous macromolecular structure. Here, we demonstrate that helical nanopropellers can be controllably steered through such a biological gel. The screw-propellers have a filament diameter of about 70 nm and are smaller than previously reported nanopropellers as well as any swimming microorganism. We show that the nanoscrews move through high-viscosity solutions at velocities comparable to those of larger micropropellers, even though they are so small that Brownian forces suppress their actuation in pure water. When actuated in viscoelastic hyaluronan gels, the nanopropellers have a significant advantage, as they are of the same size range as the gel’s mesh size. Whereas larger helices show very low or negligible propulsion in hyaluronan solutions, the nanoscrews display significantly enhanced propulsion velocities, exceeding the highest speeds measured in Newtonian fluids. The nanopropellers are not only promising for applications in the extracellular environment but also small enough to be taken up by cells.

One of the central problems in computer vision is the detection of semantically important objects and the estimation of their pose. Most work in object detection has been based on single-image processing, and its performance is limited by occlusions and by ambiguity in appearance and geometry. This paper proposes an active approach to object detection that controls the viewpoint of a mobile depth camera. When an initial static detection phase identifies an object of interest, several hypotheses are made about its class and orientation. The sensor then plans a sequence of viewpoints that balances the energy spent on motion against the chance of identifying the correct hypothesis. We formulate an active M-ary hypothesis testing problem that includes sensor mobility and solve it using a point-based approximate POMDP algorithm. The validity of our approach is verified through simulation and real-world experiments with the PR2 robot. The results suggest a significant improvement over static object detection.
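The paper solves a POMDP; a myopic (greedy) policy that trades expected information gain against a per-view movement cost conveys the same idea in a few lines. All hypothesis counts, observation models, and costs below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Greedy sketch of active M-ary hypothesis testing: pick the viewpoint
# maximizing expected entropy reduction minus movement cost, observe,
# then update the belief over hypotheses with Bayes' rule.
n_hyp, n_views, n_obs = 4, 5, 3
p_obs = rng.dirichlet(np.ones(n_obs), size=(n_views, n_hyp))  # P(o|view,h)
move_cost = np.linspace(0.0, 0.2, n_views)                    # invented costs

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_view(belief):
    scores = []
    for v in range(n_views):
        p_o = belief @ p_obs[v]                  # predictive P(o) at view v
        post = p_obs[v].T * belief               # unnormalized posteriors
        post /= post.sum(axis=1, keepdims=True)
        exp_H = np.sum(p_o * [entropy(q) for q in post])
        scores.append(entropy(belief) - exp_H - move_cost[v])
    return int(np.argmax(scores))

belief = np.full(n_hyp, 1 / n_hyp)
true_h = 2
for _ in range(10):                              # plan, observe, update
    v = best_view(belief)
    o = rng.choice(n_obs, p=p_obs[v, true_h])
    belief = belief * p_obs[v, :, o]
    belief /= belief.sum()
print(belief.round(3))
```

A point-based POMDP solver, as used in the paper, additionally reasons over multi-step plans rather than one viewpoint at a time; the greedy version only illustrates the cost-versus-information trade-off.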

In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms, which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, we conduct experiments with different feature combinations. Furthermore, we show that by employing context derived from the proposed method, we are able to improve over the state of the art in object detection and object-orientation estimation in challenging and cluttered urban environments.

We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.

Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.
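The offset-correction idea can be sketched as follows. MOCA's estimator is a penalized maximum-likelihood fit; the sketch below substitutes a simple L2-penalized sliding-window version, which has a closed form, and all dimensions, noise levels, and the penalty weight are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of adaptive offset correction in a Kalman-filter decoder:
# observations acquire an abrupt offset mid-session, and a penalized
# estimate of that offset is subtracted before the filter update.
dx, dz, T = 2, 10, 400
A = 0.95 * np.eye(dx)                         # intended-velocity dynamics
H = rng.normal(size=(dz, dx))                 # neural tuning model
Q, R = 0.01 * np.eye(dx), 0.5 * np.eye(dz)

x_true = np.zeros(dx)
x, P = np.zeros(dx), np.eye(dx)
b_hat, lam = np.zeros(dz), 1.0                # offset estimate, L2 penalty
window, true_b = [], np.zeros(dz)

for t in range(T):
    if t == T // 2:
        true_b = rng.normal(size=dz)          # abrupt nonstationarity
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(dx), Q)
    z = H @ x_true + true_b + rng.multivariate_normal(np.zeros(dz), R)

    # Standard Kalman predict/update on the offset-corrected observation.
    x, P = A @ x, A @ P @ A.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x - b_hat)
    P = (np.eye(dx) - K @ H) @ P

    # Penalized offset update: argmin_b sum_i ||r_i - b||^2 + lam ||b||^2
    window = (window + [z - H @ x])[-20:]     # recent observation residuals
    b_hat = np.sum(window, axis=0) / (len(window) + lam)

print(np.linalg.norm(b_hat - true_b) < np.linalg.norm(true_b))
```

Because the update needs only recent residuals, it can track shifts on the order of seconds without any knowledge of the intended movement, which is the property the abstract highlights.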

Objective. Patients in the completely locked-in state (CLIS), due to, for example, amyotrophic lateral sclerosis (ALS), no longer possess voluntary muscle control. Assessing attention and cognitive function in these patients during the course of the disease is a challenging but essential task for both nursing staff and physicians. Approach. An electrophysiological cognition test battery, including auditory and semantic stimuli, was applied in a late-stage ALS patient at four different time points during a six-month epidural electrocorticography (ECoG) recording period. Event-related cortical potentials (ERPs), together with changes in the ECoG signal spectrum, were recorded via 128 channels that partially covered the left frontal, temporal and parietal cortex. Main results. Auditory but not semantic stimuli induced significant and reproducible ERPs projecting to specific temporal and parietal cortical areas. N1/P2 responses could be detected throughout the whole study period. The highest P3 ERP was measured immediately after the patient's last communication through voluntary muscle control, which was paralleled by low theta and high gamma spectral power. Three months after the patient's last communication, i.e., in the CLIS, P3 responses could no longer be detected. At the same time, increased activity in low-frequency bands and a sharp drop in gamma spectral power were recorded. Significance. Cortical electrophysiological measures indicate at least partially intact attention and cognitive function during sparse volitional motor control for communication. Although the P3 ERP and frequency-specific changes in the ECoG spectrum may serve as indicators of CLIS, closely spaced monitoring will be required to define the exact time point of the transition.

Time plays an essential role in the diffusion of information, influence, and disease over networks. In many cases we can only observe when a node is activated by a contagion—when a node learns about a piece of information, makes a decision, adopts a new behavior, or becomes infected with a disease. However, the underlying network connectivity and transmission rates between nodes are unknown. Inferring the underlying diffusion dynamics is important because it leads to new insights and enables forecasting, as well as influencing or containing information propagation. In this paper we model diffusion as a continuous temporal process occurring at different rates over a latent, unobserved network that may change over time. Given information diffusion data, we infer the edges and dynamics of the underlying network. Our model naturally imposes sparse solutions and requires no parameter tuning. We develop an efficient inference algorithm that uses stochastic convex optimization to compute online estimates of the edges and transmission rates. We evaluate our method by tracking information diffusion among 3.3 million mainstream media sites and blogs, and experiment with more than 179 million different instances of information spreading over the network in a one-year period. We apply our network inference algorithm to the top 5,000 media sites and blogs and report several interesting observations. First, information pathways for general recurrent topics are more stable across time than for on-going news events. Second, clusters of news media sites and blogs often emerge and vanish in a matter of days for on-going news events. Finally, major events, for example, large scale civil unrest as in the Libyan civil war or Syrian uprising, increase the number of information pathways among blogs, and also increase the network centrality of blogs and social media sites.
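The inference step can be sketched with a tiny synthetic network, assuming an exponential transmission model and using projected stochastic gradient ascent on the cascade log-likelihood (the 4-node network, all rates, and the step sizes below are invented; the paper's algorithm is an online convex-optimization method at a vastly larger scale):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy network inference from cascades: edge (j, i) transmits with an
# exponential delay of rate a[j, i]; we recover rates from activation
# times via projected stochastic gradient on the log-likelihood.
n = 4
true_a = np.zeros((n, n))
true_a[0, 1], true_a[0, 2], true_a[1, 3] = 0.8, 0.5, 0.6    # edges j -> i

def sample_cascade(T=10.0):
    """One cascade seeded at node 0: sample edge delays, shortest paths."""
    scale = 1.0 / np.where(true_a > 0, true_a, 1.0)
    d = np.where(true_a > 0, rng.exponential(scale), np.inf)
    t = np.full(n, np.inf)
    t[0] = 0.0
    for _ in range(n):                                      # Bellman-Ford
        for j in range(n):
            for i in range(n):
                t[i] = min(t[i], t[j] + d[j, i])
    t[t > T] = np.inf                                       # censored
    return t, T

def grad(a, t, T):
    """Gradient of the exponential-model cascade log-likelihood."""
    g = np.zeros_like(a)
    for i in range(n):
        end = t[i] if np.isfinite(t[i]) else T
        parents = [j for j in range(n) if j != i and t[j] < end]
        if not parents:
            continue
        if np.isfinite(t[i]):
            s = sum(a[j, i] for j in parents) + 1e-12
            for j in parents:
                g[j, i] += 1.0 / s - (t[i] - t[j])          # hazard + survival
        else:
            for j in parents:
                g[j, i] -= T - t[j]                         # survival only
    return g

a = np.full((n, n), 0.3)
np.fill_diagonal(a, 0.0)
for _ in range(3000):                                       # online updates
    t, T = sample_cascade()
    a = np.clip(a + 0.01 * grad(a, t, T) - 5e-5, 0.0, 5.0)  # L1 + projection
    np.fill_diagonal(a, 0.0)

print(a[0, 1] > a[3, 0])
```

The L1 decrement plus non-negativity projection is what makes the recovered rate matrix sparse without parameter tuning per edge; pairs that never explain an activation are steadily driven toward zero.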

Objective. Brain–computer interface (BCI) systems are often based on motor- and/or sensory processes that are known to be impaired in late stages of amyotrophic lateral sclerosis (ALS). We propose a novel BCI designed for patients in late stages of ALS that only requires high-level cognitive processes to transmit information from the user to the BCI. Approach. We trained subjects via EEG-based neurofeedback to self-regulate the amplitude of gamma-oscillations in the superior parietal cortex (SPC). We argue that parietal gamma-oscillations are likely to be associated with high-level attentional processes, thereby providing a communication channel that does not rely on the integrity of sensory- and/or motor-pathways impaired in late stages of ALS. Main results. Healthy subjects quickly learned to self-regulate gamma-power in the SPC by alternating between states of focused attention and relaxed wakefulness, resulting in an average decoding accuracy of 70.2%. One locked-in ALS patient (ALS-FRS-R score of zero) achieved an average decoding accuracy significantly above chance-level though insufficient for communication (55.8%). Significance. Self-regulation of gamma-power in the SPC is a feasible paradigm for brain–computer interfacing and may be preserved in late stages of ALS. This provides a novel approach to testing whether completely locked-in ALS patients retain the capacity for goal-directed thinking.
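The decoding step can be sketched as gamma band-power estimation per trial followed by a threshold classifier. The sampling rate, trial counts, exact band edges, and SNR below are invented; the study used EEG over superior parietal cortex with neurofeedback training:

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal sketch: gamma band power per trial, thresholded to classify
# "focused attention" (label 1) vs "relaxed wakefulness" (label 0)
# on synthetic single-channel data.
fs, dur, n_trials = 250, 2.0, 100
t = np.arange(0, dur, 1 / fs)
band = (55, 85)                                      # gamma band, Hz

def gamma_power(x):
    f = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(f >= band[0]) & (f <= band[1])].mean()

labels = rng.integers(0, 2, n_trials)                # 1 = focused attention
trials = [rng.normal(size=t.size)
          + (0.6 if y else 0.2) * np.sin(2 * np.pi * 70 * t)
          for y in labels]

power = np.array([gamma_power(x) for x in trials])
thresh = np.median(power)                            # per-subject threshold
acc = ((power > thresh).astype(int) == labels).mean()
print(acc > 0.5)
```

A per-subject threshold (here, the median of the observed powers) mirrors the idea that self-regulation training shifts a subject's own gamma-power distribution between the two mental states.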

We describe a novel polarization interferometer which permits the determination of the refractive indices for circularly polarized light. It is based on a Jamin-Lebedeff interferometer, modified with waveplates, and permits us to experimentally determine the refractive indices n(L) and n(R) of the respectively left- and right-circularly polarized modes in a cholesteric liquid crystal. Whereas optical rotation measurements only determine the circular birefringence, i.e. the difference (n(L) - n(R)), the interferometer also permits the determination of their absolute values. We report refractive indices of a cholesteric liquid crystal in the region of selective (Bragg) reflection as a function of temperature. © 2014 Optical Society of America
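For orientation, the circular birefringence accessible to a standard polarimetric measurement enters through the textbook relation for the rotation angle of a medium of thickness d at vacuum wavelength λ:

```latex
\theta = \frac{\pi d}{\lambda}\,\bigl(n_{\mathrm{L}} - n_{\mathrm{R}}\bigr)
```

so θ fixes only the difference n(L) − n(R); the modified Jamin-Lebedeff interferometer is what additionally yields the absolute values of the two indices.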

Motility in living systems is due to an array of complex molecular nanomotors that are essential for the function and survival of cells. These protein nanomotors operate not only despite but also because of stochastic forces. Artificial means of realizing motility rely on local concentration or temperature gradients that are established across a particle, resulting in slip velocities at the particle surface and thus motion of the particle relative to the fluid. However, it remains unclear whether these artificial motors can function at the smallest of scales, where Brownian motion dominates and no actively propelled living organisms are found. Recently, the first reports have appeared suggesting that the swimming mechanisms of artificial structures may also apply to catalytically active enzymes. Here we report a scheme to realize artificial Janus nanoparticles (JNPs) with an overall size comparable to that of some enzymes, ∼30 nm. Our JNPs can catalyze the decomposition of hydrogen peroxide to water and oxygen and thus actively move by self-electrophoresis. Geometric anisotropy of the Pt-Au Janus nanoparticles permits the simultaneous observation of their translational and rotational motion by dynamic light scattering. While their dynamics is strongly influenced by Brownian rotation, the artificial Janus nanomotors show bursts of linear ballistic motion, resulting in enhanced diffusion.
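The interplay of propulsion and Brownian rotation described above is captured by the standard active-Brownian-particle expressions (quoted here in 3D as textbook results, not values from this work; D_T and D_R are the translational and rotational diffusion coefficients, v the propulsion speed):

```latex
\langle \Delta r^{2}(t) \rangle \simeq 6 D_{T}\, t + v^{2} t^{2}
\quad \text{for } t \ll \tau_{R} = \frac{1}{2 D_{R}},
\qquad
D_{\mathrm{eff}} = D_{T} + \frac{v^{2}}{6 D_{R}}
```

Since D_R grows steeply as particle size shrinks, a ∼30 nm motor has a very short ballistic window τ_R: the swimming direction is randomized quickly, which is consistent with the observed bursts of ballistic motion superimposed on enhanced diffusion.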

Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.