
You’re bombarded with sensory information every day — sights, sounds, smells, touches and tastes. A constant barrage that your brain has to manage, deciding which information to trust or which sense to use as a backup when another fails. Understanding how the brain evaluates and juggles all this input could be the key to designing better therapies for patients recovering from stroke, nerve injuries, or other conditions. It could also help engineers build more realistic virtual experiences for everyone from gamers to fighter pilots to medical patients.

Now, some researchers are using virtual reality (VR) and even robots to learn how the brain pulls off this juggling act.

Do You Believe Your Eyes?

At the University of Reading in the U.K., psychologist Peter Scarfe and his team are currently exploring how the brain combines information from touch, vision, and proprioception – our sense of where our body is positioned – to form a clear idea of where objects are in space.

Generally, the brain goes with whichever sense is more reliable at the time. For instance, in a dark room, touch and proprioception trump vision. But when there’s plenty of light, you’re more likely to believe your eyes. Part of what Scarfe’s crew hopes to eventually unravel is how the brain combines information from both senses and whether that combination is more accurate than touch or sight alone. Does the brain trust input from one sense and ignore the other, does it split the difference between the two, or does it do something more complex?

To find out, the team is using a VR headset and a robot called Haptic Master.

While volunteers wear the VR headset, they see four virtual balls – three in a triangle formation and one in the center. They can also reach out and touch four real spheres that appear in the same place as the ones they see in VR: the three in the triangle formation are just plastic and never move, but the fourth is actually a ball bearing at the end of Haptic Master’s robot arm. Researchers use the robot to move this fourth ball between repetitions of the test. Think of the three-ball triangle as defining a flat plane in space. The participant has to decide whether the fourth ball is higher or lower than the level of that triangle.

It’s a task that requires the brain to weigh and combine information from multiple senses to decide where the fourth ball is in relation to the other three. Participants get visual cues about the ball’s location through the VR headset, but they also use their haptic sense – the combination of touch and proprioception – to feel where the ball is in space.

The VR setup makes it easier to control the visual input and make sure volunteers aren’t using other cues, like the location of the robot arm or other objects in the room, to make their decisions.

Collectively, volunteers have performed this task hundreds of times. The researchers are looking at how accurately participants judge the ball’s position when they use only their eyes, only their haptic sense, or both senses at once. The team is then comparing those results to several computer models, each predicting how a person would estimate the ball’s position if their brain combined the sensory information in a different way.
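One standard candidate among such models is minimum-variance (maximum-likelihood) cue combination, in which each sense’s estimate is weighted by its reliability, so the fused estimate is never noisier than the best single cue. Here is a minimal sketch of that model; the variance numbers are purely illustrative, not data from the study:

```python
# Minimum-variance (maximum-likelihood) cue combination:
# each cue is weighted by its reliability, w_i = (1/var_i) / sum_j (1/var_j).
# All numbers below are illustrative, not measured values.

def combine_cues(estimates, variances):
    """Fuse independent Gaussian cue estimates into one optimal estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total  # always <= the smallest single-cue variance
    return fused, fused_var

# Vision says the ball is 2 mm above the plane; haptics says 6 mm below.
vision_est, vision_var = 2.0, 1.0   # vision is the more reliable cue here
haptic_est, haptic_var = -6.0, 4.0

pos, var = combine_cues([vision_est, haptic_est], [vision_var, haptic_var])
print(pos, var)  # 0.4 0.8: closer to vision, less variance than either cue alone
```

The hallmark prediction of this model is the reduced variance of the combined estimate, which is exactly what experiments like Scarfe’s can test against alternatives such as "trust one cue and ignore the other".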

So far, the team needs more data to learn which model best describes how the brain combines sensory cues. But they say that their results, and those of others working in the field, could one day help design more accurate haptic feedback, which could make interacting with objects in virtual reality feel more realistic.

On Shaky Footing

Anat Lubetzky, a physical therapy researcher at New York University, is also turning to VR. She uses the burgeoning technology to study how our brains weigh different sensory input to help us when things get shaky — specifically, if people rely on their sense of proprioception or their vision to keep their balance.

Conventional wisdom in sports medicine says that standing on an uneven surface is a good proprioception workout for patients in rehabilitation after an injury. That’s because it forces your somatosensory system, the nerves involved in proprioception, to work harder. So if your balance is suffering because of nerve damage, trying to stabilize yourself while standing on an uneven surface, like a bosu ball, should help.

But Lubetzky’s results tell a different story.

In the lab, Lubetzky’s subjects strap on VR headsets and stand on either a solid floor or an unsteady surface, like a wobble board. She projects some very subtly moving dots onto the VR display and uses a pressure pad on the floor to measure how participants’ bodies sway.

It turns out, when people stand on an unstable surface, they’re more likely to sway in time with the moving dots. But on a stable surface, they seem to pay less attention to the dots.
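One simple way to quantify “swaying in time with the moving dots” is to correlate the stimulus trace with the body-sway trace recorded by the pressure pad. A toy sketch with synthetic signals (all signal shapes and values are illustrative, not Lubetzky’s data):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic example: dot motion as a slow sine wave, plus sway that either
# mostly follows the dots (unstable surface) or mostly ignores them (stable).
t = [i / 10.0 for i in range(200)]
dots = [math.sin(0.5 * s) for s in t]                            # visual stimulus
sway_unstable = [0.8 * d + 0.1 * math.sin(3.1 * s) for d, s in zip(dots, t)]
sway_stable = [0.1 * d + 0.8 * math.sin(3.1 * s) for d, s in zip(dots, t)]

print(pearson(dots, sway_unstable))  # high coupling: vision dominates balance
print(pearson(dots, sway_stable))    # low coupling: vision matters less
```

A higher stimulus–sway correlation on the wobbly surface is the signature of the visual-dependence effect described above.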

So rather than working their somatosensory systems harder, it seems people use their vision to look for a fixed reference point to help keep them balanced. In other words, the brain switches from a less reliable sense to a more reliable one, a process called sensory weighting.

Ultimately, Lubetzky hopes her VR setup could help measure how much a patient with somatosensory system damage relies on their vision. This knowledge, in turn, could help measure the severity of the problem so doctors can design a better treatment plan.

As VR gets more realistic and more immersive – partly thanks to experiments like these – it could offer researchers an even more refined tool for picking apart what’s going on in the brain.

Methods
Subjects were eligible if they were able to perform the robot-assisted game training and were randomly divided into an RCT group and an OCT group. The RCT group performed one daily 30-minute session of robot-assisted game training with a rehabilitation robot plus one daily 30-minute session of conventional rehabilitation training, 5 days a week for 2 weeks. The OCT group performed two daily 30-minute sessions of conventional rehabilitation training. The effects of training were measured with the Manual Function Test (MFT), Manual Muscle Test (MMT), Korean version of the Modified Barthel Index (K-MBI) and a questionnaire about satisfaction with training. These measurements were taken before and after the 2-week training.

Results
Each group contained 25 subjects. After training, both groups showed significant improvements over baseline in motor and daily functions as measured by the MFT, MMT, and K-MBI. The two groups demonstrated similar training effects, except for the motor power of wrist flexion. Patients in the RCT group were more satisfied than those in the OCT group.

Conclusion
There were no significant differences between the two types of training in the changes of most motor and daily functions. However, patients in the RCT group were more satisfied than those in the OCT group. Therefore, RCT could be a useful upper extremity rehabilitation training method.

INTRODUCTION

Stroke is a central nervous system disease caused by cerebrovascular problems such as infarction or hemorrhage. Stroke may lead to impairment of various physical functions, including hemiplegia, language disorder, swallowing disorder or cognitive disorder, depending on the location and extent of the damage [1]. Among these, hemiplegia is a common symptom, occurring in 85% of stroke patients. In particular, upper extremity paralysis is more frequent and requires a longer recovery time than lower extremity paralysis [2, 3]. Use of the upper extremities is essential for maintaining the basic functions of ordinary life; therefore, upper extremity paralysis commonly causes problems in performing the activities of daily living [2].

Robot-assisted rehabilitation has recently been widely investigated as an effective neurorehabilitation approach that may augment the effects of physical therapy and facilitate motor recovery [4]. Robot-assisted rehabilitation treatments have been developed in recent decades to reduce the effort and time demanded of therapists, to reproduce accurate repetitive motions and to interact with force feedback [5, 6]. The most important advantage of robot-assisted rehabilitation is the ability to deliver high-dosage and high-intensity training [7].

During rehabilitation, patients may find such repetitive exercises monotonous and boring, and may lose motivation over time [8]. Upper extremity rehabilitation training using video games, such as Nintendo Wii and PlayStation EyeToy games, has enhanced upper extremity functions and resulted in greater patient satisfaction than conventional rehabilitation treatment [9, 10, 11, 12, 13].

The objective of this study was to determine the effects of combining robot-assisted game training with conventional upper extremity rehabilitation training (RCT) on motor and daily functions, in comparison to conventional upper extremity rehabilitation training alone (OCT), in stroke patients. This study was a randomized controlled trial in which we evaluated motor power, upper extremity motor function, daily function and satisfaction. […]

The paper proposes a therapeutic device for hemiparesis that combines robot-assisted rehabilitation and mirror therapy. The robot, which consists of a motor, a position sensor, and a torque sensor, is applied not only to the paralyzed wrist but also to the unaffected wrist, to induce symmetric movement between the joints. As the user rotates the healthy wrist in flexion or extension, the motor on the affected side rotates, reflecting the motion of the healthy side to the symmetric angular position. To verify the performance of the device, five stroke patients participated in a clinical experiment involving a 10-minute mirroring exercise. Subjects at Brunnstrom stage 3 showed relatively high repulsive torques toward their neutral wrist positions due to severe spasticity, with a maximum magnitude of 0.300 kgf·m, which was reduced to 0.161 kgf·m after the exercise. Subjects at stage 5 practiced active bilateral exercises using both wrists, with a small repulsive torque of 0.052 kgf·m only at the extreme extension angle. The range of motion (ROM) of the affected wrist increased as spasticity decreased. The therapeutic device not only guided a voluntary exercise to reduce spasticity and increase the ROM of the affected wrist, but also helped distinguish patients at different Brunnstrom stages according to the magnitude of the repulsive torque and the phase difference between the torque and the wrist position.
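The mirroring behavior described above amounts to a position-tracking loop: the affected-side motor is driven toward the angle measured at the healthy wrist. A toy proportional-control sketch; the gain, units, and update model are illustrative and not taken from the paper:

```python
def mirror_step(healthy_angle, affected_angle, kp=0.5):
    """One control tick: drive the affected wrist toward the healthy wrist's
    mirrored angle. In a bilateral symmetric setup, the set-point equals the
    healthy-side angle measured about each wrist's own neutral position."""
    target = healthy_angle             # mirrored set-point
    error = target - affected_angle
    command = kp * error               # proportional motor command (deg/tick)
    return affected_angle + command

# Healthy wrist flexes to 30 degrees; the affected wrist starts at 0
# and converges to the mirrored position over repeated control ticks.
angle = 0.0
for _ in range(20):
    angle = mirror_step(30.0, angle)
print(round(angle, 2))  # 30.0
```

A real device would add torque limits and safety checks (the paper measures repulsive torque precisely because spastic wrists resist this tracking), but the mirrored set-point idea is the core.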

Lower extremity function recovery is one of the most important goals in stroke rehabilitation. Many paradigms and technologies have been introduced for lower limb rehabilitation over the past decades, but their outcomes indicate a need for a complementary approach. One attempt to achieve better functional recovery is to combine bottom-up and top-down approaches by means of brain-computer interfaces (BCIs). In this study, a BCI-controlled robotic mirror therapy system is proposed for lower limb recovery following stroke. An experimental paradigm including four states is introduced to combine the robotic training (bottom-up) and mirror therapy (top-down) approaches. A BCI system is presented to classify the electroencephalography (EEG) evidence. In addition, a probabilistic model is presented to assist patients in transitioning across the experiment states based on their intent. To demonstrate the feasibility of the system, both offline and online analyses were performed for five healthy subjects. The experimental results show a promising performance for the system, with an average accuracy of 94% in offline and 75% in online sessions.

Robot-assisted therapy is regarded as an effective and reliable method for delivering the highly repetitive training that is needed to trigger neuroplasticity following a stroke. However, the lack of fully adaptive assist-as-needed control of the robotic devices and the absence of an immersive virtual environment that can promote active participation during training are obstacles to achieving better training results with fewer training sessions. This study addresses these research gaps by combining these two key components into a rehabilitation system, with special attention to the rehabilitation of fine hand motion skills. The effectiveness of the proposed system is tested in a clinical trial with a chronic stroke patient and verified through clinical evaluation methods by measuring key kinematic features such as active range of motion (ROM), finger strength, and velocity. By comparing the pretraining and post-training results, the study demonstrates that the proposed method can further enhance the effectiveness of fine hand motion rehabilitation training by improving finger ROM, strength, and coordination.

Repeated use of brain-computer interfaces (BCIs) providing contingent sensory feedback of brain activity was recently proposed as a rehabilitation approach to restore motor function after stroke or spinal cord lesions. However, only a few clinical studies have investigated the feasibility and effectiveness of such an approach. Here we report on a placebo-controlled, multicenter clinical trial that investigated whether stroke survivors with severe upper limb (UL) paralysis benefit from 10 BCI training sessions, each lasting up to 40 min. A total of 74 patients participated: median time since stroke, 8 months (25th and 75th percentiles [3.0; 13.0]); median severity of UL paralysis, 4.5 points [0.0; 30.0] on the Action Research Arm Test (ARAT) and 19.5 points [11.0; 40.0] on the Fugl-Meyer Motor Assessment (FMMA). Patients in the BCI group (n = 55) performed motor imagery of opening their affected hand. Motor imagery-related brain electroencephalographic activity was translated into contingent hand exoskeleton-driven opening movements of the affected hand. In a control group (n = 19), hand exoskeleton-driven opening movements of the affected hand were independent of brain electroencephalographic activity. Evaluation of the UL clinical assessments indicated that both groups improved, but only the BCI group showed an improvement in the ARAT grasp score from 0.0 [0.0; 14.0] to 3.0 [0.0; 15.0] points (p < 0.01) and in the pinch score from 0.0 [0.0; 7.0] to 1.0 [0.0; 12.0] points (p < 0.01). Upon training completion, 21.8% and 36.4% of the patients in the BCI group improved their ARAT and FMMA scores, respectively; the corresponding numbers for the control group were 5.1% (ARAT) and 15.8% (FMMA). These results suggest that adding BCI control to exoskeleton-assisted physical therapy can improve post-stroke rehabilitation outcomes.
Both the maximum and mean percentages of successfully decoded imagery-related EEG activity were higher than chance level, and the classification accuracy correlated with the improvement in upper extremity function. An improvement of motor function was found for patients with different durations, severities and locations of stroke.
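Whether a decoding accuracy is genuinely “higher than chance level” is commonly checked with a one-sided binomial test against the chance rate (0.5 for a two-class motor-imagery task). A stdlib-only sketch; the trial counts are illustrative, not figures from this trial:

```python
from math import comb

def binomial_p_above_chance(successes, trials, chance=0.5):
    """One-sided p-value: probability of observing >= `successes` correct
    trials if the classifier were merely guessing at the chance rate."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# Example: 70 correct decodings out of 100 trials in a two-class task.
p = binomial_p_above_chance(70, 100)
print(p < 0.05)  # True: an accuracy of 70% over 100 trials is clearly above chance
```

The same test at 50/100 would give a p-value near 0.5, which is why accuracy alone, without trial counts, cannot establish above-chance decoding.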

As motor imagery results in specific modulations of brain electroencephalographic (EEG) signals, e.g., sensorimotor rhythms (SMR) (Pfurtscheller and Aranibar, 1979), it can be used to voluntarily control an external device, such as a robot or exoskeleton, through a brain-computer interface (BCI) (Nicolas-Alonso and Gomez-Gil, 2012). Such a system, allowing voluntary control of an exoskeleton that moves a paralyzed limb, can be used as an assistive device restoring lost function (Maciejasz et al., 2014). Besides visual feedback, the user receives haptic and kinesthetic feedback that is contingent upon the imagination of a specific movement.

Here we report a randomized and controlled multicenter study investigating whether 10 sessions of BCI-controlled hand-exoskeleton active training after subacute and chronic stroke yield a better clinical outcome than 10 sessions in which hand-exoskeleton-induced passive movements were not controlled by motor imagery-related modulations of brain activity. Besides assessing the effect of BCI training on clinical scores such as the ARAT and FMMA, we tested whether improvements in upper extremity function correlate with the patient’s ability to generate motor imagery-related modulations of EEG activity. […]

The PABLO is the latest in a long line of clinically tried and tested robot- and computer-assisted therapy devices for arms and hands. The new design and the specially developed tyroS software make the PABLO more flexible and offer an expanded spectrum of therapy options.

The rapid progress of robotic technology provides new opportunities for biomedical and healthcare engineering. For instance, a micro-nano robot allows us to study fundamental problems at the cellular scale owing to its precise positioning and manipulation ability; the medical robot paves a new way for minimally invasive and highly efficient clinical operations; and the rehabilitation robot is able to improve the rehabilitative efficacy of patients. This special issue aims at exhibiting the latest research achievements, findings, and ideas in the field of robotics in biomedical and healthcare engineering, especially focusing on upper/lower limb rehabilitation, walking assistive robots, telerobotic surgery, and radiosurgery.

Currently, there is an increasing population of patients suffering from limb motor dysfunction, which can be caused by nerve injuries associated with stroke, traumatic brain injury, or multiple sclerosis. Past studies have demonstrated that highly repetitive movement training can result in improved recovery. The robot-assisted technique is a novel and rapidly expanding technology in upper/lower limb rehabilitation that can enhance the recovery process and facilitate the restoration of physical function by delivering high-dose and high-intensity training. This special issue covers several interesting papers addressing these challenges. X. Tu and coworkers introduced an upper limb rehabilitation robot powered by pneumatic artificial muscles that cooperates with functional electrical stimulation arrays to realize active reach-to-grasp training for stroke patients. Dynamic models of a pneumatic muscle and of functional electrical stimulation-induced muscle are built for reaching training. Using surface electromyography, the subject’s active intent can be identified, and grasping and releasing behaviors are realized by functional electrical stimulation array electrodes. C. Guo and coworkers proposed an impedance-based iterative learning control method to analyze the squatting training of stroke patients in the iterative and time domains. The patient’s training trajectory can be corrected by integrating the iterative learning control scheme with the impedance value. In addition, the method can gradually improve trajectory-tracking performance by learning from past tracking information, and can obtain the specific training conditions of different individuals. The paper demonstrates an effective control methodology for repeated tracking control problems and periodic disturbance rejection. Apart from these works, J. Li and coworkers designed an open-structured treadmill gait trainer for lower limb rehabilitation, and T. Sun and coworkers proposed a method for detecting the motion of human lower limbs, including all degrees of freedom, via inertial sensors, which permits analyzing motion ability according to rehabilitation needs.
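The iterative learning control idea mentioned above has a compact classic form: the command for the next repetition is the previous command plus a learning gain times the previous tracking error, u_{k+1}(t) = u_k(t) + L·e_k(t). A toy scalar example of that update law; the plant model and gains are illustrative, not from the cited paper:

```python
def ilc_trial(u, plant_gain=0.8):
    """Run one repetition: this toy 'plant' simply scales the input command."""
    return [plant_gain * ui for ui in u]

def ilc_update(u, error, learn_gain=0.9):
    """Classic ILC law: u_{k+1}(t) = u_k(t) + L * e_k(t)."""
    return [ui + learn_gain * ei for ui, ei in zip(u, error)]

reference = [0.0, 0.5, 1.0, 0.5, 0.0]   # desired trajectory over one repetition
u = [0.0] * len(reference)              # start with no command at all

for _ in range(30):                     # repeat the same motion 30 times
    y = ilc_trial(u)
    error = [r - yi for r, yi in zip(reference, y)]
    u = ilc_update(u, error)

final_error = max(abs(r - yi) for r, yi in zip(reference, ilc_trial(u)))
print(final_error)  # effectively zero: tracking error shrinks across repetitions
```

The error contracts by a factor of |1 − L·G| per repetition (here 0.28), which is why ILC suits rehabilitation training, where the same motion is repeated many times.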

Other biomedical and healthcare robots included in this special issue cover a range of interesting topics, such as walking assistive robots, telerobotic surgery, and radiosurgery. To improve the walking ability of the elderly, the walker-type rehabilitation robot has become a popular research topic over the last decade. C. Tao and coworkers proposed a hierarchical shared control method for a walking-aid robot that combines human motion intention recognition with an obstacle emergency-avoidance method based on the artificial potential field. In their implementation, the human motion intention is obtained from interaction force measurements by a sensory system composed of force sensing resistors and a torque sensor, while a forward-facing laser range finder detects obstacles and guides the operator using the repulsion force calculated by the artificial potential field. The robot realizes obstacle avoidance while partially preserving the operator’s original walking intention. X. Li and coworkers demonstrated a general framework of robot-assisted surgical simulators for more robust and resilient robotic surgery. They created a hardware-in-the-loop simulator platform and integrated it with a physics engine and a state-of-the-art path planning algorithm to help surgeons acquire an optimal sense of manipulating the robot’s instrument arm, eventually achieving autonomous motion of the surgical robot. To cope with the workspace limitations of the Linac system in radiosurgery, Y. Noh et al. presented a specialized robotic system and elaborated its design and implementation. All of these works showed advantages over classical approaches and hold great potential for informing the practical and systematic design of robots for broad applications in biomedical and healthcare engineering.
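The repulsion force used in the artificial-potential-field method has a standard form (due to Khatib): it is zero beyond an influence radius ρ₀ and grows sharply as the obstacle distance ρ shrinks. A minimal 2-D sketch, with illustrative gains rather than values from the cited work:

```python
import math

def repulsion_force(robot, obstacle, eta=1.0, rho0=2.0):
    """Classic APF repulsive force: magnitude eta*(1/rho - 1/rho0)/rho^2,
    directed away from the obstacle, and zero outside the influence radius."""
    dx = robot[0] - obstacle[0]
    dy = robot[1] - obstacle[1]
    rho = math.hypot(dx, dy)
    if rho >= rho0 or rho == 0.0:
        return (0.0, 0.0)
    mag = eta * (1.0 / rho - 1.0 / rho0) / rho**2
    return (mag * dx / rho, mag * dy / rho)

far = repulsion_force((5.0, 0.0), (0.0, 0.0))
near = repulsion_force((0.5, 0.0), (0.0, 0.0))
print(far)   # (0.0, 0.0): outside the influence radius, no repulsion
print(near)  # (6.0, 0.0): strong push in +x, away from the obstacle
```

In a shared-control walker, this force would be blended with the operator’s intention force, which is how the robot can avoid obstacles while "keeping partially the operator's original walking intention."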

The objectives of the special issue were reached in terms of advancing the state of the art of robotic techniques and addressing challenging problems in biomedical and healthcare engineering. Several critical problems in these areas were addressed, and most of the proposed contributions showed very promising results that outperform existing studies. Some of the proposed approaches were also validated from the patients’ perspective, demonstrating the applicability of these techniques in realistic environments.

Acknowledgments
We would like to express our thanks to all the authors who
submitted their work to this special issue and to all the
reviewers who helped us ensure the quality.

Abstract

Nowadays, the communication gap between humans and computers can be reduced thanks to the multimodal sensors available on the market. It is therefore important to know the specifications of these sensors and how they are being used in order to create human-computer interfaces that tackle complex tasks. The purpose of this paper is to review recent research on the up-to-date application areas of the following sensors:


This contribution focuses on the design, analysis, fabrication, experimental characterization and evaluation of a family of prototypes of robotic extra fingers that can be used as grasp-compensation devices for the hemiparetic upper limb.

The devices are the result of experimental sessions with chronic stroke patients and consultations with clinical experts. All the devices share a common working principle: they act in opposition to the paretic hand/wrist so as to restrain the motion of an object.

Robotic supernumerary fingers can be used by chronic stroke patients to compensate for grasping in several Activities of Daily Living (ADL), with a particular focus on bimanual tasks.

The devices are designed to be extremely portable and wearable. They can be wrapped up like bracelets when not in use, to further reduce encumbrance. The motion of the robotic devices is controlled through an electromyography (EMG)-based interface embedded in a cap, which allows the user to drive the device by contracting the frontalis muscle. The performance characteristics of the devices were measured in an experimental setup, and their shape adaptability was confirmed by grasping various objects of different shapes. We tested the devices in qualitative experiments based on ADL involving a group of chronic stroke patients, in collaboration with the Rehabilitation Center of the Azienda Ospedaliera Universitaria Senese.
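At its simplest, an EMG interface of this kind detects when the rectified muscle signal exceeds a threshold and uses that event as an on/off command. A toy onset detector; the threshold, window length, and signal values are all illustrative, and the actual interface described here is more sophisticated:

```python
def detect_contraction(emg, threshold=0.3, window=5):
    """Return True if the moving average of the rectified EMG signal ever
    exceeds the threshold -- a minimal muscle-contraction onset detector."""
    rectified = [abs(s) for s in emg]
    for i in range(window, len(rectified) + 1):
        if sum(rectified[i - window:i]) / window > threshold:
            return True
    return False

# Synthetic samples: low-amplitude noise at rest, then a burst of activity
# such as a deliberate frontalis contraction.
rest = [0.05, -0.04, 0.06, -0.05, 0.04, 0.05, -0.06]
contraction = rest + [0.5, -0.6, 0.55, -0.45, 0.5]

print(detect_contraction(rest))         # False: no command issued
print(detect_contraction(contraction))  # True: device motion triggered
```

Rectification plus windowed averaging is the usual first stage of an EMG envelope; a deployed device would calibrate the threshold per user and debounce the trigger.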

The prototypes successfully enabled the patients to complete various bimanual tasks. The results show that the proposed robotic devices improve patients’ autonomy in ADL and allow them to complete tasks that were previously impossible to perform.