In order to self-organize symbols from observed motion patterns, continuous flows of motion patterns must be temporally segmented into meaningful chunks. For the acquired information to be reusable, repeatedly observed patterns are important, which means that segmentation, memorization, recognition, and abstraction depend on one another. From this point of view, we propose methods for motio...
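The truncated abstract does not specify the segmentation method, so the following is only a hedged illustration of temporal segmentation in general: a toy change-point detector that cuts a 1-D motion stream wherever the local mean shifts. The function name `segment` and the window/threshold values are illustrative, not taken from the paper.

```python
# Toy motion segmentation by change-point detection: cut the stream where
# the local mean shifts by more than a threshold.  A stand-in only; the
# paper's actual method is not given in the truncated abstract.
def segment(stream, window=3, threshold=0.5):
    """Split `stream` into chunks at detected mean shifts."""
    cuts = [0]
    for i in range(window, len(stream) - window):
        before = sum(stream[i - window:i]) / window
        after = sum(stream[i:i + window]) / window
        # Cut only when the shift is large and the previous cut is not too close.
        if abs(after - before) > threshold and i - cuts[-1] >= window:
            cuts.append(i)
    cuts.append(len(stream))
    return [stream[a:b] for a, b in zip(cuts, cuts[1:])]

# A flat segment followed by a raised one: exactly one change point expected.
stream = [0.0] * 6 + [1.0] * 6
chunks = segment(stream)
```

Repeatedly observed chunks produced this way could then be memorized and matched against new observations, which is the mutual dependence the abstract refers to.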

This paper describes an integrated, intelligent humanoid robot system for tasks in daily-life environments. We have realized complex behaviors of a humanoid robot in a daily-life environment based on motion-planning techniques using environment and manipulation knowledge. However, in order to adapt to unknown or dynamic situations, sensor-based behavior variation is essential. In this paper, we ...

Autonomous humanoid navigation in non-trivial environments requires high accuracy because stable bipedal locomotion is difficult to achieve. In particular, an accurate localisation estimate is needed to plan footstep placement on a narrow staircase. This paper reports the development of an accurate 6-DOF particle-filter-based localisation system for a humanoid robot moving within a kno...
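The core predict/update/resample cycle of a particle filter can be sketched as below. This is a generic 2-D toy, not the paper's 6-DOF map-based system; the noise parameters and the point-fix measurement model are illustrative assumptions.

```python
import random
import math

def resample(particles, weights):
    """Draw a new particle set with probability proportional to weight."""
    return random.choices(particles, weights=weights, k=len(particles))

def particle_filter_step(particles, control, measurement,
                         motion_noise=0.05, measurement_noise=0.2):
    """One predict/update/resample cycle.

    `control` is a (dx, dy) odometry increment; `measurement` is an
    observed (x, y) fix (a stand-in for the paper's real sensor data).
    """
    # Predict: apply the control with additive Gaussian noise.
    moved = [(x + control[0] + random.gauss(0, motion_noise),
              y + control[1] + random.gauss(0, motion_noise))
             for (x, y) in particles]
    # Update: weight each particle by its measurement likelihood.
    weights = [math.exp(-((x - measurement[0]) ** 2 +
                          (y - measurement[1]) ** 2)
                        / (2 * measurement_noise ** 2))
               for (x, y) in moved]
    return resample(moved, weights)

random.seed(0)
particles = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]
for _ in range(20):
    particles = particle_filter_step(particles, (0.1, 0.0), (1.0, 0.0))
# The cloud should concentrate near the repeated measurement (1.0, 0.0).
est_x = sum(p[0] for p in particles) / len(particles)
est_y = sum(p[1] for p in particles) / len(particles)
```

The narrow-staircase requirement in the abstract corresponds to keeping this posterior cloud tight enough that the worst-case spread still fits a footstep onto a tread.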

Being able to identify and localize objects is an important requirement for various humanoid robot applications. In this paper, we present a method that uses PCA-SIFT in combination with a clustered voting scheme to detect and localize multiple objects in real-time video data. Our approach provides robustness against constraints that are common for humanoid vision systems, such as...
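The clustered-voting idea can be sketched as follows: each feature match casts a vote for an object centre, and dense vote clusters are reported as detections, which naturally separates multiple objects and rejects outlier matches. The PCA-SIFT matching itself is omitted here; the votes, the grid-binning clusterer, and all thresholds are illustrative assumptions, not the paper's implementation.

```python
from collections import Counter

def cluster_votes(votes, cell=20, min_votes=3):
    """Bin (x, y) votes into a coarse grid; return well-supported cells."""
    bins = Counter((int(x // cell), int(y // cell)) for x, y in votes)
    detections = []
    for (bx, by), count in bins.items():
        if count >= min_votes:
            # Report the centroid of the votes falling in this cell.
            members = [(x, y) for x, y in votes
                       if int(x // cell) == bx and int(y // cell) == by]
            cx = sum(x for x, _ in members) / len(members)
            cy = sum(y for _, y in members) / len(members)
            detections.append((cx, cy, count))
    return detections

# Two simulated objects near (50, 50) and (151, 84), plus one outlier vote.
votes = [(48, 52), (51, 49), (50, 50),
         (150, 85), (152, 83), (151, 84),
         (10, 200)]
hits = cluster_votes(votes)
```

A real system would typically use a soft clustering (e.g. mean shift) rather than hard grid cells, since a cluster can straddle a cell boundary; the hard binning here keeps the sketch short.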

Automatic speech recognition (ASR) is essential for human-humanoid communication. One of the main problems with ASR is that a humanoid inevitably generates motor noise. This noise is easily captured by the humanoid's microphones because the noise sources are closer to the microphones than the target speech source. Thus, the signal-to-noise ratio (SNR) of the input speech becomes quite low (somet...
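The geometry argument can be made concrete with a back-of-the-envelope SNR calculation. The source powers and distances below are hypothetical, and the inverse-square falloff is a simplifying assumption; the point is only that a quiet noise source very close to the microphone can still swamp louder but distant speech.

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels."""
    return 10 * math.log10(signal_power / noise_power)

# Hypothetical powers at the source: speech 1.0 at 1 m, motor noise 0.1 at 0.1 m.
speech_at_mic = 1.0 / 1.0 ** 2    # inverse-square falloff with distance
noise_at_mic = 0.1 / 0.1 ** 2     # ten times closer -> 100x power gain
ratio = snr_db(speech_at_mic, noise_at_mic)   # negative: noise dominates
```

With these numbers the input SNR is -10 dB, consistent with the abstract's observation that on-body motor noise drives the SNR of captured speech quite low.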

Joint attention is the skill of attending to the same object another person is looking at. The acquisition of this skill is crucial in children for the development of many social and communicative abilities, and has been proposed as a critical social capability for interactive robots. Although recent attempts to model the acquisition of this skill on a robot have been moderately successful (Nagai ...

In this paper, we deal with imitation learning of arm movements in humanoid robots. Hidden Markov models (HMMs) are used to generalize movements demonstrated to a robot multiple times. They are trained on the characteristic features (key points) of each demonstration. Using the same HMM, key points common to all demonstrations are identified; only those are considered when reproducing a ...
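The selection step can be illustrated without the HMM machinery: the paper uses the trained HMM to align demonstrations and find shared key points, whereas the sketch below just keeps the key-point labels that occur in every demonstration, in the order of the first one. The labels and the naive set intersection are illustrative assumptions.

```python
# Naive stand-in for common-key-point selection (the paper aligns
# demonstrations with an HMM; this simple intersection only conveys the idea).
def common_key_points(demonstrations):
    """Key points present in every demonstration, ordered as in the first."""
    shared = set(demonstrations[0]).intersection(*map(set, demonstrations[1:]))
    return [kp for kp in demonstrations[0] if kp in shared]

demos = [
    ["start", "lift", "wiggle", "peak", "lower", "end"],
    ["start", "lift", "peak", "pause", "lower", "end"],
    ["start", "lift", "peak", "lower", "end"],
]
trajectory = common_key_points(demos)
```

Incidental key points such as "wiggle" or "pause" appear in only one demonstration and are dropped, so the reproduced movement interpolates only through the key points every demonstration shares.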

In this paper, we present the current improvements to our biologically motivated interacting and learning vision system for humanoids. Building on the work presented by Goerick et al., 2005, the system now features very natural gaze selection and interaction for learning freely presented complex objects in real time. The new features are enabled by two major contributions. First, by the introdu...

Current research has identified the need to equip robots with perceptual capabilities that not only recognise objective entities such as visual or auditory objects but are also capable of assessing the affective evaluations of the human communication partner, in order to render the communication situation more natural and social. By analogy with Watzlawick's statement that "one cannot not co...

Previous research has shown that human actions can be detected from motion patterns. However, labeling motion patterns is not sufficient in a cognitive system that must reason about the agent's intentions and about how the environmental context affects the way an action is performed. In this paper, we develop a graphical model that captures how the movements that realize the action vary depending ...