Abstract: Research on intelligent robots aims to produce robots that can operate in everyday environments, adapt their behavior to environmental changes, and cooperate with other team members and with humans. To operate in human environments, robots must process a large amount of sensory data (such as vision, laser, and microphone signals) in real time in order to determine the best action. Intelligent algorithms have been successfully applied to link complex sensory data to robot actions. This editorial briefly summarizes recent findings in the field of intelligent robots as described in the articles published in this Special Issue.

Keywords: adaptive robots; reinforcement learning; evolution; multiple agents

Robots are expected to operate in everyday environments. In contrast to industrial robots, which perform a set of predetermined motions, robots operating in human environments must adapt their policies to the surrounding conditions. Over the last few decades, many successful applications of intelligent algorithms, such as neural networks, genetic algorithms, and fuzzy logic, have been proposed. Intelligent algorithms have also been applied to multi-robot systems for tasks such as environment exploration, collaborative object transportation, and surveillance missions. The focus of this Special Issue is on recent findings in the field of intelligent robots [1,2,3,4,5].

Genetic Programming has proven to give good results in the field of intelligent robots. Kuyucu et al. [1] present a bio-inspired decision mechanism that provides a convenient way for evolution to configure the conditions and timing under which the robots behave as a swarm or as a modular robot in an exploration scenario. The collective decision of a multi-robot system to switch from one type of behavior to another, while each individual robot makes its own decisions, is very complex. The authors used Genetic Programming (GP) to evolve a controller for these decisions that acts without a centralized mechanism and with limited inter-robot communication.
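To illustrate the general idea of evolving such a decision rule, the following is a minimal toy sketch: candidate rules map two sensed values (obstacle density and neighbor count, both hypothetical) to a swarm/modular mode choice, and a simple evolutionary loop selects the best-performing rules. The representation, sensors, and fitness function here are illustrative assumptions, not the actual setup of Kuyucu et al. [1].

```python
import random

random.seed(1)

# Comparison operators a rule may use on one sensed value.
OPS = [lambda a, b: a > b, lambda a, b: a < b]

def random_rule():
    # A rule compares one sensor reading against a random threshold
    # and returns True (modular mode) or False (swarm mode).
    op = random.choice(OPS)
    thresh = random.uniform(0, 1)
    idx = random.randrange(2)
    return lambda sensors, op=op, t=thresh, i=idx: op(sensors[i], t)

def fitness(rule):
    # Made-up ground truth: switch to modular mode when obstacle
    # density (sensor 0) exceeds 0.5. Score = correct decisions.
    cases = [((d, n), d > 0.5)
             for d in (0.1, 0.4, 0.6, 0.9) for n in (0.2, 0.8)]
    return sum(rule(s) == want for s, want in cases)

def evolve(pop_size=30, generations=20):
    # Truncation selection: keep the top half, refill with new rules.
    pop = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:pop_size // 2]
        pop += [random_rule() for _ in range(pop_size - len(pop))]
    return max(pop, key=fitness)
```

Full GP would evolve expression trees with crossover and mutation rather than flat threshold rules, but the selection loop above captures how a decentralized decision policy can be improved without any centralized coordination mechanism.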

In the field of intelligent robots, reinforcement learning has proven effective for creating robots that can learn and adapt their policies in dynamic environments. An interesting application of reinforcement learning to multi-agent systems is presented by Kuremoto et al. [2]. In contrast to conventional Q-learning algorithms, the authors propose a computational motivation function that adopts the two principal affective factors, "Arousal" and "Pleasure", of Russell's circumplex model. Computer simulations of pursuit problems with static and dynamic prey show fast and stable learning performance.
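The core idea of augmenting Q-learning with an internal motivation signal can be sketched as follows. The `motivation` function below is a hypothetical stand-in for the affective model of Kuremoto et al. [2]; their actual formulation is not reproduced here, so a simple weighted combination of arousal and pleasure is assumed.

```python
def motivation(arousal, pleasure):
    # Assumed combination of the two affective factors; the exact
    # form in Kuremoto et al. [2] differs.
    return 0.5 * arousal + 0.5 * pleasure

def q_update(q, state, action, reward, next_state,
             alpha=0.1, gamma=0.9, arousal=0.0, pleasure=0.0):
    # Standard tabular Q-learning update, with an internal
    # motivation bonus added to the external reward.
    r = reward + motivation(arousal, pleasure)
    best_next = max(q[next_state])
    q[state][action] += alpha * (r + gamma * best_next - q[state][action])
    return q
```

For example, with `alpha=0.1`, `gamma=0.9`, an external reward of 1.0, and a motivation bonus of 0.2, a zero-initialized Q-value whose best successor value is 1.0 is updated to 0.1 * (1.2 + 0.9) = 0.21. The motivation term biases the agent toward emotionally salient states without changing the structure of the Q-learning update itself.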

Control of flexible manipulators, which have several advantages over their rigid counterparts, is a challenging research problem. The combination of strong, and particularly unstructured, nonlinearities, such as Coulomb friction and joint elasticity, significantly changes the system's dynamics. Chaoui et al. [4] applied an adaptive fuzzy logic controller that can deal with both structured and unstructured dynamic uncertainties. The authors show that a type-2 fuzzy controller outperforms its type-1 counterpart when uncertainties of large magnitude are present.

Brain-machine interfaces (BMIs) have been proposed as a novel technique for controlling prosthetic devices aimed at restoring motor function in paralyzed patients. In this issue, Mano et al. [5] propose a neural-network-based controller that maps brain signals into robot movement. The experiments were performed with electrodes implanted in a rat's brain. First, the rat is trained to move the robot, and the signals from four electrodes, two in the motor cortex and two in the somatosensory cortex, are collected. These data are used to train several neural networks, which are then employed online to map brain signals into robot motion.
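The decoding stage of such a BMI can be pictured as a small feedforward network that maps firing rates from the four electrodes to a robot velocity command. The architecture below (one hidden layer, tanh activation, two outputs) is an assumption for illustration only; Mano et al. [5] describe the general neural-network approach, not this exact model, and in practice the weights would be trained on the recorded rat data rather than randomly initialized.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder: 4 electrode channels -> 8 hidden units
# -> 2 outputs (e.g., forward velocity and turning rate).
W1 = rng.normal(0.0, 0.1, (8, 4))
b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.1, (2, 8))
b2 = np.zeros(2)

def decode(rates):
    # rates: firing rates from the four implanted electrodes.
    h = np.tanh(W1 @ rates + b1)
    return W2 @ h + b2
```

Run online, `decode` would be applied to a sliding window of spike counts, with its output sent directly to the robot's motor controller.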