Control Interface using Gestures for the ABC wheelchair

Foeng, Vivian, 2019. Control Interface using Gestures for the ABC wheelchair, Flinders University, College of Science and Engineering.

Terms of Use: This electronic version is (or will be) made publicly available by Flinders University in accordance with its open access policy for student theses. Copyright in this thesis remains with the author. You may use this material for uses permitted under the Copyright Act 1968. If you are the owner of any included third party copyright material and/or you believe that any material has been made available without permission of the copyright owner please contact copyright@flinders.edu.au with the details.

Abstract

Smart wheelchairs offer a solution for wheelchair users who have difficulty controlling a standard power wheelchair, whether due to cognitive impairment or severe mobility problems. At Flinders University, the development of the ABC wheelchair aims to meet this goal through autonomous navigation and the ability to accept a variety of inputs. Alternative control methods have been explored, particularly for those who have limited movement of their upper limbs. Gestures have been identified as a promising option because they are natural to human communication and can be adapted to the capabilities of the user. Face and head gestures could be especially useful for those with limited control over their upper limbs.

This project explores the use of face and head gestures to control a smart wheelchair, using computer vision as a non-contact alternative. The emphasis is on the user and on the factors that influence the effectiveness of the system. The system was implemented using the open-source software OpenFace and an Arduino Mega 2560 for control of the wheelchair. The main challenges faced were deducing user intention and software limitations. The former was addressed by making assumptions about the user's intention; one such assumption was that a user would hold a gesture for longer if it was intentional. The software limitations were issues inherent to face tracking and computer vision. Through the implementation of this system, further considerations were identified and addressed.
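The hold-to-confirm assumption described above can be sketched as a simple per-frame filter: a gesture label is only accepted as an intentional command once it has been held for some minimum number of consecutive frames. This is an illustrative sketch only; the class name, threshold, and labels are assumptions, not details taken from the thesis implementation.

```python
DWELL_FRAMES = 15  # assumed threshold, e.g. ~0.5 s at 30 fps


class DwellFilter:
    """Accept a gesture only after it is held for `dwell_frames` consecutive frames."""

    def __init__(self, dwell_frames=DWELL_FRAMES):
        self.dwell_frames = dwell_frames
        self.current = None  # gesture label seen on the previous frame
        self.count = 0       # consecutive frames showing that label

    def update(self, gesture):
        """Feed one per-frame gesture label; return it once confirmed, else None."""
        if gesture == self.current:
            self.count += 1
        else:
            self.current = gesture
            self.count = 1
        if gesture is not None and self.count == self.dwell_frames:
            return gesture  # held long enough -> treat as intentional
        return None


# Example: a brief flicker is ignored; only sustained gestures become commands.
f = DwellFilter(dwell_frames=3)
stream = ["left", "left", "left", None, "right", "right", "right"]
commands = [g for g in (f.update(s) for s in stream) if g]
print(commands)  # ['left', 'right']
```

In a real system the confirmed command would then be forwarded (e.g. over serial) to the wheelchair controller; here it is simply collected in a list.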

It was found that face and head gestures were a viable method of control but lacked reliability. The performance of the system ranged from 65.23% to 88.28% informedness, which indicates that while the system can most likely predict the correct gesture from the given information, several factors limit its performance. Individual differences in appearance, as well as in the execution of gestures, play a large role in this. More research into face and head tracking would therefore be needed to create a reliable and versatile system; using more varied training data or 3D data could potentially mitigate these issues. The challenge of deducing user intention could also be eased by integrating additional user-sensing capabilities such as a brain-computer interface (BCI). Further studies formally evaluating the system from a user's perspective are also proposed as future work.
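For readers unfamiliar with the metric quoted above: informedness (Youden's J, or bookmaker informedness) measures how much better than chance a classifier performs, with 0 meaning chance level and 1 meaning perfect. For the binary case it is sensitivity plus specificity minus one; the thesis evaluates a multi-gesture system, so its figures presumably use the multiclass generalisation, but the binary form below illustrates the idea. The counts used are invented for the example.

```python
def informedness(tp, fn, tn, fp):
    """Binary informedness (Youden's J): TPR + TNR - 1.

    0.0 = no better than chance, 1.0 = perfect discrimination.
    """
    tpr = tp / (tp + fn)  # sensitivity / true positive rate
    tnr = tn / (tn + fp)  # specificity / true negative rate
    return tpr + tnr - 1.0


# Illustrative confusion-matrix counts (not results from the thesis):
print(informedness(tp=90, fn=10, tn=80, fp=20))  # 0.9 + 0.8 - 1 = 0.7
```

Unlike raw accuracy, this score is not inflated by imbalanced classes, which matters for gesture recognition where "no gesture" frames typically dominate.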