Several kinds of brain-computer interface (BCI) systems have been proposed to compensate for the lack of medical technology for assisting patients who have lost the ability to use motor functions to communicate with the outside world. However, most of the proposed systems are limited by their non-portability, impracticality, and inconvenience because of the adoption of wired or invasive electroencephalography (EEG) acquisition devices. Another common limitation is the shortage of functions provided, owing to the difficulty of integrating multiple functions into one BCI system. In this study, we propose a wireless, non-invasive, and multifunctional assistive system that integrates a steady-state visually evoked potential (SSVEP)-based BCI and a robotic arm to assist patients in feeding themselves. Patients are able to control the robotic arm via the BCI to serve themselves food. Three other functions (video entertainment, video calling, and active interaction) are also integrated by designing a functional menu and combining multiple subsystems. A refinement decision-making mechanism is incorporated to ensure the accuracy and applicability of the system. Fifteen participants were recruited to validate the usability and performance of the system. The average accuracy and information transfer rate (ITR) achieved are 90.91% and 24.94 bits per minute, respectively. Feedback from the participants demonstrates that this assistive system can significantly improve the quality of daily life.
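
As a point of reference for the ITR figure quoted above, the following minimal Python sketch computes the standard Wolpaw information transfer rate from the number of targets, the selection accuracy, and the selection time. The parameter values in the example (an 8-target menu and a 6 s selection time) are illustrative assumptions only and are not taken from the paper.

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selection_time_s: float) -> float:
    """Wolpaw ITR in bits per minute.

    bits per selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))
    """
    if accuracy <= 1.0 / n_targets:
        return 0.0
    p = accuracy
    bits = math.log2(n_targets) + p * math.log2(p)
    if p < 1.0:
        bits += (1.0 - p) * math.log2((1.0 - p) / (n_targets - 1))
    return bits * (60.0 / selection_time_s)

# Illustrative values only: a hypothetical 8-target SSVEP menu with one
# selection every 6 s at 90.91% accuracy (not the authors' exact setup).
print(wolpaw_itr(n_targets=8, accuracy=0.9091, selection_time_s=6.0))
```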

Electroencephalogram (EEG) signals are usually contaminated with various artifacts, such as signals associated with muscle activity, eye movement, and body motion, which have a noncerebral origin. The amplitude of such artifacts is larger than that of the electrical activity of the brain, so they mask the cortical signals of interest, resulting in biased analysis and interpretation. Several blind source separation methods have been developed to remove artifacts from EEG recordings. However, the iterative process for measuring separation within multichannel recordings is computationally intractable. Moreover, manually excluding the artifact components requires a time-consuming offline process. This work proposes a real-time artifact removal algorithm based on canonical correlation analysis (CCA), feature extraction, and a Gaussian mixture model (GMM) to improve the quality of EEG signals. CCA was used to decompose EEG signals into components, feature extraction to obtain representative features, and the GMM to cluster these features into groups in order to recognize and remove artifacts. The feasibility of the proposed algorithm was demonstrated by effectively removing artifacts caused by blinks, head/body movement, and chewing from EEG recordings while preserving the temporal and spectral characteristics of the signals that are important to cognitive research.
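
The pipeline described above can be sketched in a few lines of Python with scikit-learn. In this minimal sketch, CCA between the recording and a one-sample-delayed copy yields components ordered by autocorrelation, per-component features feed a two-cluster GMM, and the cluster with the lowest mean autocorrelation is treated as artifact. The delayed-copy formulation, the particular features, and the artifact-labelling rule are assumptions made for illustration and do not reproduce the authors' exact implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.mixture import GaussianMixture

def remove_artifacts(eeg, n_clusters=2):
    """Illustrative CCA + feature extraction + GMM artifact removal.

    eeg : ndarray of shape (n_channels, n_samples); returns a cleaned copy.
    """
    n_ch, _ = eeg.shape
    X = eeg[:, :-1].T                              # current samples
    Y = eeg[:, 1:].T                               # one-sample-delayed copy
    cca = CCA(n_components=n_ch, max_iter=1000)
    S, _ = cca.fit_transform(X, Y)                 # components, most autocorrelated first

    # Simple per-component features: lag-1 autocorrelation and log-variance.
    feats = np.array([[np.corrcoef(S[:-1, k], S[1:, k])[0, 1],
                       np.log(np.var(S[:, k]) + 1e-12)] for k in range(n_ch)])

    # Cluster the component features; treat the cluster whose mean
    # autocorrelation is lowest as artifact (an illustrative rule, e.g. for EMG).
    gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(feats)
    labels = gmm.predict(feats)
    artifact_cluster = int(np.argmin(gmm.means_[:, 0]))

    # Zero the artifact components and project back with a least-squares mixing fit.
    S_clean = S.copy()
    S_clean[:, labels == artifact_cluster] = 0.0
    mean = X.mean(axis=0)
    mixing, *_ = np.linalg.lstsq(S, X - mean, rcond=None)   # S @ mixing ~ X - mean
    cleaned = eeg.copy()
    cleaned[:, :-1] = (S_clean @ mixing + mean).T            # last sample left untouched
    return cleaned
```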

Improving the degree of assistance given by in-car navigation systems is an important issue for the safety of both drivers and passengers. There is a vast body of research that assesses the usability and interfaces of existing navigation systems, but very few investigations study the impact of navigation-based driving on brain activity. In this study, a real-world experiment is designed to acquire electroencephalography (EEG) and in-car information to analyze dynamic brain activity while the driver performs a lane-changing task based on auditory instructions from an in-car navigation system. The results show that auditory cues can influence speed and increase frontal EEG delta and beta power, which are related to motor preparation and decision making during a lane change. However, no significant effects were found on alpha power. A better lane-change assessment can be obtained by combining specific vehicle information (lateral acceleration and heading angle) with EEG features for future naturalistic driving studies.
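
For readers unfamiliar with the band-power features referred to above, the sketch below shows one common way to estimate delta and beta power from a single-channel EEG epoch using Welch's method in SciPy. The sampling rate, epoch length, and band edges are generic assumptions, not the preprocessing actually used in the study.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(epoch, fs, band):
    """Power of a 1-D EEG epoch within `band` = (lo, hi) Hz via Welch's PSD."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return trapezoid(psd[mask], freqs[mask])

# Illustrative use on a synthetic 2-second frontal-channel epoch at 500 Hz.
fs = 500
epoch = np.random.randn(2 * fs)
print(band_power(epoch, fs, (1, 4)),     # delta: 1-4 Hz
      band_power(epoch, fs, (13, 30)))   # beta: 13-30 Hz
```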

Despite their shortcomings, Screen Readers have been the primary tool for internet use by the visually impaired. In this paper, we present a framework for an advanced Screen Reader that aims at eliminating the drawbacks associated with existing systems. The proposed framework makes use of an informed search technique to enhance usability and navigability. Some of its features, such as background music to convey the layout structure of a web page and mouse hovering to speak out glimpses of the underlying text, make use of image processing techniques. These features are implemented independently of the rest of the development; therefore, they can also be used to enhance any existing Screen Reader.

Researchers and developers have to deal with performance variation in motor imagery (MI) [1] across and within subjects and its fluctuation over time. In addition, MI achievement variations within subjects are closely correlated with neurophysiological variables [2]. In our study, an MI task was administered to a group of healthy subjects before and after playing the BCIGEM videogame for 90 minutes. Several EEG features were found, suggesting a different pathway of activation within the mu rhythm during motor imagery after a mentally challenging activity such as playing a videogame.

The current study presents a Brain Computer Interface (BCI) based communication system in which intentional eye blinks are extracted from single-channel EEG data. This system could be useful for sufferers of motor disease, locked-in syndrome, and paralysis. As a new Human Computer Interface (HCI), it can also benefit healthy users. To detect intentional eye blinking in real time, a score was calculated from the delta, theta, and gamma power bands of the brain dynamics acquired wirelessly from a single channel through a NeuroSky headset. Soft and hard blinks represent '0' and '1' in this system, respectively, and form four-bit strings that map to pre-defined text. The mapped text was converted into speech and sent to the speaker. The experimental results show that this system can provide an accurate and convenient way to communicate through brain dynamics.
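
To make the coding scheme above concrete, the following sketch maps per-blink scores to bits, groups them into four-bit strings, looks them up in a pre-defined phrase table, and speaks the result. The threshold, the phrase table contents, and the use of the pyttsx3 text-to-speech library are illustrative assumptions rather than details taken from the paper.

```python
import pyttsx3

# Hypothetical mapping from four-bit blink codes to pre-defined phrases.
PHRASES = {
    "0000": "yes",
    "0001": "no",
    "0010": "I am thirsty",
    "0011": "I need help",
    # remaining codes would be defined in the same way
}

def classify_blink(score, threshold=0.5):
    """Map a per-blink score (e.g. from delta/theta/gamma power) to a bit."""
    return "1" if score >= threshold else "0"   # hard blink -> '1', soft -> '0'

def blinks_to_speech(scores):
    """Group blink scores into four-bit codes, look up phrases, and speak them."""
    engine = pyttsx3.init()
    for i in range(0, len(scores) - len(scores) % 4, 4):
        code = "".join(classify_blink(s) for s in scores[i:i + 4])
        engine.say(PHRASES.get(code, "unknown code"))
    engine.runAndWait()

# Example: two soft blinks followed by two hard blinks -> "0011" -> "I need help".
blinks_to_speech([0.1, 0.2, 0.9, 0.9])
```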

As virtual reality (VR) emerges as a mainstream platform, designers have started to experiment with new interaction techniques to enhance the user experience. This is a challenging task because designers not only strive to provide designs with good performance but must also be careful not to disrupt users' immersive experience. There is a dire need for a new evaluation tool that extends beyond traditional quantitative measurements to assist designers in the design process. We propose an EEG-based experiment framework that evaluates interaction techniques in VR by measuring intentionally elicited cognitive conflict. Through the analysis of the feedback-related negativity (FRN) as well as other quantitative measurements, this framework allows designers to evaluate the effect of the variables of interest. We studied the framework by applying it to the fundamental task of 3D object selection using direct 3D input, i.e., a tracked hand in VR. The cognitive conflict is intentionally elicited by manipulating the selection radius of the target object. Our first behavioral experiment validated the framework, showing conflict-induced behavior adjustments in line with those reported in other classical psychology experiment paradigms. Our second, EEG-based experiment examined the effect of the appearance of the virtual hands. We found that the amplitude of the FRN correlates with the level of realism of the virtual hands, which concurs with the Uncanny Valley theory.
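
As an illustration of how the FRN described above is typically quantified, the sketch below averages feedback-locked epochs from a fronto-central channel and takes the mean amplitude in a post-feedback window. The channel choice, the 230-330 ms window, and the baseline interval are common ERP conventions assumed here, not necessarily the settings used in the paper.

```python
import numpy as np

def frn_amplitude(epochs, fs, t0=0.1, window=(0.23, 0.33), baseline=(-0.1, 0.0)):
    """Mean FRN amplitude from feedback-locked EEG epochs.

    epochs : ndarray (n_trials, n_samples) from one fronto-central channel
             (e.g. FCz), each epoch starting `t0` seconds before feedback onset.
    fs     : sampling rate in Hz.
    """
    erp = epochs.mean(axis=0)                      # average over trials -> ERP
    t = np.arange(erp.size) / fs - t0              # time axis, 0 = feedback onset
    erp = erp - erp[(t >= baseline[0]) & (t < baseline[1])].mean()  # baseline-correct
    win = (t >= window[0]) & (t <= window[1])
    return erp[win].mean()                         # mean amplitude in the FRN window

# Illustrative use with synthetic data: 40 trials of 0.6 s epochs at 250 Hz.
fs = 250
epochs = np.random.randn(40, int(0.6 * fs))
print(frn_amplitude(epochs, fs))
```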