An Experimental Overview on Electric Field Sensing

2019

Journal of Ambient Intelligence and Humanized Computing

Electric fields exist everywhere. They are influenced by living beings, conductive materials, and other charged entities. Electric field sensing is a passive capacitive measurement technique that detects changes in electric fields and has very low power consumption. We explore potential applications of this technology and compare it to other measurement approaches, such as active capacitive sensing. We created five prototypes that give an overview of potential use cases and show how the technology compares to alternatives. Our results reveal that electric field sensing can be used for indoor as well as outdoor applications. Even mobile usage is possible due to the technology's low energy consumption.

Designing and Evaluating Safety Services Using Depth Cameras

2019

Journal of Ambient Intelligence and Humanized Computing

Not receiving help in the case of an emergency is one of the most common fears of older adults living independently at home. Falls are a particularly frequent occurrence and often the cause of serious injuries. In recent years, various ICT solutions for supporting older adults at home have been developed. Based on sensors in a smart environment, they provide a wide range of services. In this work we designed and evaluated safety-related services based on a single Microsoft Kinect installed in a user’s home. We created two services to investigate the benefits and limitations of such solutions. The first is a fall detection service that registers falls in real time, using a novel combination of static and dynamic skeleton tracking. The second is a fall prevention service that detects potentially dangerous objects in the walking path, based on scene analysis in a depth image. We conducted technical and user evaluations for both services in order to get feedback on their feasibility, limitations, and potential future improvements.

Face morphing attacks create face images that are verifiable against multiple identities. Associating such images with identity documents leads to faulty identity links, enabling attacks on operations like border crossing. Most previously proposed morphing attack detection approaches directly classified features extracted from the investigated image. We discuss the operational opportunity of having a live face probe to support the morphing detection decision and propose a detection approach that takes advantage of it. Our proposed solution considers the shifting patterns of facial landmarks between reference and probe images. These are represented by directed distances to avoid confusion with shifts caused by other variations. We validated our approach using a publicly available database built on 549 identities. Our proposed detection concept was tested with three landmark detectors and outperformed the baseline concept based on hand-crafted and transferable CNN features.
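The directed landmark-shift idea can be sketched as follows; the function name, feature layout, and toy coordinates are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def directed_landmark_shifts(ref_landmarks, probe_landmarks):
    """Directed (signed) displacement of each facial landmark from a
    reference image to a live probe image. Keeping magnitude AND
    direction lets systematic morphing shifts be separated from
    symmetric variation such as expression changes."""
    ref = np.asarray(ref_landmarks, dtype=float)
    probe = np.asarray(probe_landmarks, dtype=float)
    deltas = probe - ref                          # per-landmark (dx, dy)
    distances = np.linalg.norm(deltas, axis=1)    # shift magnitudes
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])  # shift directions
    # feature vector: all magnitudes followed by all directions
    return np.concatenate([distances, angles])

# toy example: 3 landmarks, one shifted by +1 pixel in x
ref = [(10, 10), (20, 10), (15, 20)]
probe = [(11, 10), (20, 10), (15, 20)]
features = directed_landmark_shifts(ref, probe)
```

Such a feature vector would then be fed into a binary classifier trained to separate bona fide from morphed references.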

Performing Indoor Localization with Electric Potential Sensing

2019

Journal of Ambient Intelligence and Humanized Computing

Location-based services and smart home applications depend on an accurate indoor positioning system. These systems are typically divided into token-based and token-free localization systems. In this work, we focus on token-free systems based on smart floor technology. Smart floors can typically be built using pressure sensors or capacitive sensors. However, these set-ups are often hard to deploy, as mechanical or electrical features are required below the surface, and even harder to repair when a sensor malfunction is detected. Therefore we present a novel indoor positioning system using an uncommon form of passive electric field sensing (EPS), which detects the electric potential variation caused by body movement. The EPS-based smart floor set-up is easy to install by deploying a grid of passive electrode wires underneath any non-conductive surface. Easy maintenance is also ensured by the fact that the sensors are not placed underneath the surface, but at its side. Due to the passive measuring nature, low power consumption is achieved as opposed to active capacitive measurement. Since we do not collect image data as in vision-based systems and all sensor data is processed locally, we preserve the user’s privacy. The proposed architecture achieves high position accuracy and excellent spatial resolution. In an evaluation conducted in our living lab, we measured a mean positioning error of only 12.7 cm.

A Look at Feet: Recognizing Tailgating via Capacitive Sensing

In many everyday places, the ability to reliably determine how many individuals are within an automated access control area is of great importance. Especially in high-security areas such as banks and at country borders, access systems like mantraps or drop-arm turnstiles serve this purpose. These automated systems are designed to ensure that only one person can pass through a particular transit area at a time. State-of-the-art systems use camera systems mounted in the ceiling to detect people sneaking in behind authorized individuals to pass through the transit space (tailgating attacks). Our novel method is inspired by recent results in capacitive indoor localization. Instead of estimating the position of humans, the pervasive capacitance of feet in the transit space is measured to detect tailgating attacks. We explore suitable sensing techniques and sensor-grid layouts for this application. In contrast to existing work, we use machine learning techniques to classify the sensor's feature vector. The performance is evaluated on the hardware level by defining its physical effectiveness. Tests with simulated attacks show its performance in comparison with competitive camera-image methods. Our method detects tailgating attacks with an equal error rate of 3.5%, which outperforms other methods. We conclude with an evaluation of the amount of data needed for classification and highlight the usefulness of this method when combined with other imaging techniques.

An Intuitive and Personal Projection Interface for Enhanced Self-management

Smart environments offer high potential to improve intuitive and personal interactions in our everyday lives. Nowadays, we often get distracted by interfaces and have to adapt ourselves to the technology, instead of the interfaces focusing on human needs. Especially in work situations, it is important to focus on the essentials in terms of goal setting and to have a far-reaching vision of ourselves. Particularly with regard to self-employment, challenges like efficient self-management, regulated work times, and sufficient self-reflection arise. Therefore, we present ‘Selv’, a novel transportable device intended to increase user productivity and self-reflection by providing an overview of obligations, targets, and successes. ‘Selv’ is an adaptive interface that changes its interactions in order to fit into the user’s everyday routine. Our approach uses a pen on a projected interface. Adapting to the user’s own sense of naturalness, ‘Selv’ learns common interactions through handwriting recognition. By addressing users’ needs in a personal and natural way, it is more likely to build a mutual relationship and convey a new feeling of an interface. This paper includes an elaborate concept and a prototypical realization within an Internet of Things environment. We conclude with an evaluation of tests and improvements in terms of interactions and hardware.

An Ontology for Wearables Data Interoperability and Ambient Assisted Living Application Development

2018

Recent Developments and the New Direction in Soft-Computing Foundations and Applications

World Conference on Soft Computing <6, 2016, Berkeley, USA>

Studies in Fuzziness and Soft Computing (STUDFUZZ), 361

Over the last decade, a number of technologies have been developed that support individuals in keeping themselves active. This can be done via e-coaching mechanisms and by installing more advanced technologies in their homes. The objective of the Active Healthy Ageing (AHA) Platform is to integrate existing tools, hardware, and software that assist individuals in improving and/or maintaining a healthy lifestyle. This architecture is realized by integrating several hardware/software components that generate various types of data, for example heart-rate data, coaching information, in-home activity patterns, and mobility patterns. Various subsystems in the AHA platform can share their data in a semantic and interoperable way through the use of an AHA data store and a wearable devices ontology. This paper presents such an ontology for wearable data interoperability in Ambient Assisted Living environments. The ontology includes concepts such as height, weight, locations, activities, activity levels, activity energy expenditure, heart rate, and stress levels, among others. Its purpose is to serve application development in Ambient Intelligence scenarios ranging from activity monitoring and smart homes to active healthy ageing and lifestyle profiling.

The rapid development of VR technology in the past three years has allowed artists, filmmakers, and other media producers to create great experiences in this new medium. Filmmakers, however, face big challenges when it comes to cinematic narration in VR. The old, established rules of filmmaking do not apply to VR films, and important techniques of cinematography and editing must be completely rethought. Possibly, a new filmic language will be found. But even though filmmakers are already eagerly experimenting with the new medium, there exist relatively few scientific studies on the differences between classical filmmaking and filmmaking in 360 degrees and VR. We therefore present this study on cinematic narration in VR. In it, we give a comprehensive overview of techniques and concepts applied in current VR films and games. We place previous research on narration, film, games, and human perception into the context of VR experiences and deduce consequences for cinematic narration in VR. We base our assumptions on an empirical test with 50 participants and on an additional online survey. In the empirical study, we selected 360-degree videos and showed them to a test group while the viewers’ behavior and attention were observed and documented. As a result of this paper, we present guidelines that suggest methods of guiding the viewers’ attention as well as approaches to cinematography, staging, and editing in VR.

Once upon a time, there was a blacklisted criminal who usually avoided appearing in public. He was surfing the Web when he noticed what had to be a targeted advertisement announcing a concert of his favorite band. The concert was in a nearby town, and the only way to get there was by train. He was worried, because he had heard in the news about the new face identification system installed at the train station. From his last stay with the police, he remembered that they took those special face images with the white background. He thought about what he could do to avoid being identified, and an idea popped into his mind: “What if I make a crazy-face, as the kids call it, to make my face look different? What exactly do I have to do? And will it work?”. He called his geeky childhood friend and asked him if he could build him a face recognition application he could tinker with. The geeky friend was always interested in such small projects where he could use open-source resources and, as usual, didn’t really care about the goal. The criminal tested the application and played around, trying to figure out how he could make a crazy-face that wouldn’t be identified as himself. On the day of the concert, he took off to the train station with some doubt in his mind and fear in his soul. To know what happened next, you should read the rest of this paper.

Deep and Multi-algorithmic Gender Classification of Single Fingerprint Minutiae

Accurate fingerprint gender estimation can positively affect several applications, since fingerprints are one of the most widely deployed biometrics. For example, gender classification in criminal investigations may significantly narrow the list of potential subjects. Previous work mainly offered solutions for gender classification based on complete fingerprints. However, partial fingerprint captures occur frequently in many applications, including forensics and the fast-growing field of consumer electronics. Moreover, partial fingerprints are not well-defined. Therefore, this work improves gender decision performance on a well-defined partition of the fingerprint: it performs gender estimation on the level of a single minutia. Working on this level, we propose three main contributions that were evaluated on a publicly available database. First, we offer a convolutional neural network model that outperformed baseline solutions based on hand-crafted features. Second, we tested several multi-algorithmic fusion approaches that combine the outputs of different gender estimators and further increase classification accuracy. Third, we propose including minutia detection reliability in the fusion process, which enhances the overall gender decision performance. The achieved gender classification performance of a single minutia is comparable to the accuracy that previous work reported on quarters of aligned fingerprints containing more than 25 minutiae.

Deep Learning-based Face Recognition and the Robustness to Perspective Distortion

Face recognition technology is spreading into a wide range of applications, driven mainly by social acceptance and the performance boost achieved by deep learning-based solutions in recent years. Perspective distortion is an understudied distortion in face recognition that causes converging verticals when imaging 3D objects, depending on the distance to the object. The effect of this distortion on face recognition was previously studied for algorithms based on hand-crafted features, with a clear negative effect on verification performance. Proposed solutions compensated the distortion effect on the face image level, which requires knowing the camera settings and capturing a high-quality image. This work investigates the effect of perspective distortion on the performance of a deep learning-based face recognition solution. It also provides a device-parameter-independent solution to decrease this effect by creating more perspective-robust face representations. This was achieved by training the deep learning model on perspective-diverse data, without increasing the size of the training data. Experiments performed on the deep model at hand and a specifically collected database concluded that perspective distortion affects face verification performance if not considered in the training process, and that this can be mitigated by our proposal of creating robust face representations through proper selection of the training data.

Eliminating the Ground Reference for Wireless Electric Field Sensing

Capacitive systems are receiving more and more attention these days. Many systems, such as smartphone screens, laptops, and non-mechanical buttons, use capacitive techniques to measure events within several centimeters of distance. The reason that battery-powered devices don't reach higher measurement ranges lies in the principle of capacitive measurement itself: the electrical ground is an inherent part of the measurement. In this paper, we present a method for passive and wireless capacitive systems that eliminates the reference to ground. This offers several advantages for mobile, battery-powered capacitive sensor designs in the field of ambient intelligence. We compare the detection range of conventional passive capacitive systems with our new approach. The results show that our improvements yield a higher detection range and higher power efficiency.

Fingerprint and Iris Multi-biometric Data Indexing and Retrieval

Indexing of multi-biometric data is required to facilitate fast search in large-scale biometric systems. Previous works addressing this issue in multi-biometric databases focused on multi-instance indexing, mainly of iris data. Few works addressed indexing in multi-modal databases, with basic candidate list fusion solutions limited to joining face and fingerprint data. Iris and fingerprint are widely used in large-scale biometric systems where fast retrieval is a significant issue. This work proposes a joint multi-biometric retrieval solution based on fingerprint and iris data. This solution is evaluated under eight different candidate list fusion approaches with variable complexity on a database of 10,000 reference and probe records of irises and fingerprints. Our proposed multi-biometric retrieval of fingerprint and iris data resulted in a reduction of the miss rate (1 - hit rate) at 0.1% penetration rate by 93% compared to fingerprint indexing and 88% compared to iris indexing.
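One simple candidate-list fusion scheme in this spirit is rank-sum (Borda-style) fusion; the paper evaluates eight approaches, and this hypothetical sketch is not necessarily among them:

```python
from collections import defaultdict

def fuse_candidate_lists(ranked_lists):
    """Rank-sum fusion of per-modality candidate lists: each subject
    receives the sum of its ranks across modalities (subjects missing
    from a list are penalized with rank = list length), and the fused
    list is sorted by that sum, best candidates first."""
    penalty = max(len(lst) for lst in ranked_lists)
    scores = defaultdict(int)
    subjects = set()
    for lst in ranked_lists:
        subjects.update(lst)
    for subject in subjects:
        for lst in ranked_lists:
            scores[subject] += lst.index(subject) if subject in lst else penalty
    return sorted(subjects, key=lambda s: scores[s])

# fingerprint and iris retrieval each return a short candidate list
fingerprint_list = ["id7", "id3", "id9"]
iris_list = ["id3", "id7", "id1"]
fused = fuse_candidate_lists([fingerprint_list, iris_list])
```

Subjects ranked highly by both modalities rise to the top of the fused list, which is what shortens the penetration needed to hit the true identity.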

Quantified Self has seen increased interest in recent years, with devices such as smartwatches, smartphones, and other wearables that allow users to monitor their fitness levels. These are often combined with mobile apps that use gamification to motivate the user to perform fitness activities or increase the amount of sports exercise. Thus far, most applications rely on accelerometers or gyroscopes integrated into the devices, which have to be worn on the body to track activities. In this work, we investigated the use of a speaker and a microphone integrated into a smartphone to track exercises performed close to it. We combined active sonar and Doppler signal analysis in the ultrasound spectrum, which is not perceivable by humans. We measured the body-weight exercises bicycles, toe touches, and squats, as these consist of challenging radial movements towards the measuring device. We tested several classification methods, ranging from support vector machines to convolutional neural networks, and achieved an accuracy of 88% for bicycles, 97% for toe touches, and 91% for squats on our test set.
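The core of such a Doppler analysis is measuring energy that motion smears into the sidebands around the emitted carrier. A minimal sketch, assuming a 48 kHz sample rate and a 20 kHz carrier (both hypothetical choices, not taken from the paper):

```python
import numpy as np

FS = 48_000          # sample rate in Hz (assumed)
CARRIER = 20_000     # inaudible carrier tone in Hz (assumed)

def doppler_band_energy(frame, fs=FS, carrier=CARRIER, band=200):
    """Energy in the Doppler-shifted sidebands around the emitted
    ultrasound carrier. Motion toward/away from the microphone shifts
    reflected energy to frequencies above/below the carrier."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    # exclude the carrier's own peak, keep a +/- `band` Hz neighborhood
    sideband = (np.abs(freqs - carrier) > 25) & (np.abs(freqs - carrier) < band)
    return float(np.sum(spectrum[sideband] ** 2))

# synthetic check: a pure carrier has little sideband energy, while a
# carrier plus a 20.1 kHz component (a simulated Doppler shift) has more
t = np.arange(4096) / FS
still = np.sin(2 * np.pi * CARRIER * t)
moving = still + 0.5 * np.sin(2 * np.pi * (CARRIER + 100) * t)
```

Per-frame sideband energies (or the full sideband spectra) would then serve as features for the classifiers mentioned above.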

Investigating Large Curved Interaction Devices

2018

Personal and Ubiquitous Computing

Large interactive surfaces enable novel forms of interaction for their users, particularly in terms of collaborative interaction. During longer interactions, the ergonomic factors of interaction systems have to be taken into consideration. Using the full interaction space may require considerable motion of the arms and upper body over a prolonged period of time, potentially causing fatigue. In this work, we present Curved, a large-surface interaction device, whose shape is designed based on the natural movement of an outstretched arm. It is able to track one or two hands above or on its surface by using 32 capacitive proximity sensors. Supporting both touch and mid-air interaction can enable more versatile modes of use. We use image processing methods for tracking the user's hands and classify gestures based on their motion. Virtual reality is a potential use case for such interaction systems and was chosen for our demonstration application. We conducted a study with ten users to test the gesture tracking performance, as well as user experience and user preference for the adjustable system parameters.

Normalization is an important step in many fusion, classification, and decision-making applications. Previous normalization approaches focused on bringing values from different sources into a common range or common distribution characteristics. In this work we propose a new normalization approach that transfers values into a normalized space where their relative performance in binary decision making is aligned across their whole range. Multi-biometric verification is a typical problem where information from different sources is normalized and fused to make a binary decision, and therefore a good platform to evaluate the proposed normalization. We conducted an evaluation on two publicly available databases and showed that our normalization solution consistently outperformed state-of-the-art and best-practice approaches, e.g. by reducing the false rejection rate at 0.01% false acceptance rate by 60-75% compared to the widely used z-score normalization under sum-rule fusion.
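The z-score and sum-rule baseline that the proposed normalization is compared against can be sketched as follows; the matcher names, statistics, and score ranges are made up for illustration:

```python
import numpy as np

def z_score_normalize(scores, mean, std):
    """Classical z-score normalization: map comparison scores of one
    matcher to zero mean and unit variance, using statistics estimated
    on a development set."""
    return (np.asarray(scores, dtype=float) - mean) / std

def sum_rule_fusion(score_lists):
    """Sum-rule fusion: average the normalized scores of all matchers
    to obtain a single fused score per comparison."""
    return np.mean(np.vstack(score_lists), axis=0)

# two hypothetical matchers with very different raw score ranges
matcher_a = [0.62, 0.10, 0.55]          # e.g. similarity in [0, 1]
matcher_b = [4200.0, 900.0, 3900.0]     # e.g. raw correlation counts
norm_a = z_score_normalize(matcher_a, mean=0.3, std=0.2)
norm_b = z_score_normalize(matcher_b, mean=2000.0, std=1200.0)
fused = sum_rule_fusion([norm_a, norm_b])
```

A threshold on the fused score then yields the binary accept/reject decision; the paper's contribution is a different mapping into the normalized space, which this baseline only frames.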

Prototyping Shape-Sensing Fabrics Through Physical Simulation

Embedding sensors into fabrics can enable substantial improvements in application areas like work safety, 3D modeling, and health care, for example by recognizing the risk of developing skin ulcers. Finding a suitable setup and sensor combination for a shape-sensing fabric currently relies on the intuition of an application engineer. We introduce a novel approach: simulating the shape-sensing fabric first and optimizing the design to achieve better real-world implementations. In order to enable developers to easily prototype their shape-sensing scenario, we have implemented a framework for soft-body simulation and virtual prototyping. To evaluate our approach, we investigate the design of a system detecting sleeping postures. We simulate potential designs first and then implement a bed cover consisting of 40 distributed acceleration sensors. The validity of our framework is confirmed by comparing the simulated and real evaluation results. We show that both approaches achieve similar performance, with an F-measure of 85% for the virtual prototype and 89% for the real-world implementation.

Step by Step: Early Detection of Diseases Using an Intelligent Floor

The development of sensor technologies in smart homes helps to increase user comfort and to create safety through the recognition of emergency situations. For example, lighting in the home can be controlled, or an emergency call can be triggered if sensors hidden in the floor detect that a person has fallen. It makes sense to also use these technologies for the prevention and early detection of diseases. By detecting deviations and behavioral changes through long-term monitoring of daily life activities, it is possible to identify physical or cognitive diseases. In this work, we first examine in detail the existing possibilities for recognizing activities of daily life and the capability of such a system to infer illnesses from the given data. Then we propose a model for the use of floor-based sensor technology to help diagnose diseases and behavioral changes by analyzing the time spent in bed as well as the walking speed of users. Finally, we show that the system can be used in a real environment.

Smart environments should be able to understand a user's needs without explicit interaction. One step towards this is to build a system that can recognize and track common activities of the user. This way, we can provide services for controlling installed appliances and offering help with everyday activities. Applying these services in the user's environment should make their life more comfortable, easier, and safer. In this paper, we introduce an embedded sensor system using surface acoustic arrays to analyze human activities in a smart environment. We defined basic activity groups ranging from walking and cupboard closing to falling, including extended sub-activity groups. We expanded walking into walking barefoot, with shoes, and with high heels, and further extended cupboard closing to three cupboards located at different positions. We also investigated the use of a single pickup versus a combination of four pickups and their effect on recognition precision. We achieved an overall precision of 97.23% with 10-fold cross-validation using a support vector machine (SVM) for all sub-activity groups combined. Even using only one pickup, we achieve an overall precision of more than 93%, which a combination of pickups raises to 97.23%.

Text Localization in Born-Digital Images of Advertisements

Localizing text in images is an important step in a number of applications and fundamental for optical character recognition. While born-digital text localization might look similar to other complex tasks in this field, it has certain distinct characteristics. Our novel approach combines the individual strengths of two commonly used methods, the stroke width transform and extremal regions, with a method based on edge-based morphological growing. We present a parameter-free method with high flexibility towards varying text sizes and colorful image elements. We evaluate our method on a novel image database of retail brochures containing textual product information. Our results show a higher F-score than competitive methods on this particular task.

The Dark Side of the Face: Exploring the Ultraviolet Spectrum for Face Biometrics

Facial recognition in the visible spectrum is a widely used application, but it is also still a major field of research. In this paper we present melanin face pigmentation (MFP) as a new modality to extend classical face biometrics. Melanin pigmentations are sun-damaged cells that occur as revealed and/or unrevealed patterns on human skin. Most MFP can be found in the faces of some people when using ultraviolet (UV) imaging. To prove the relevance of this feature for biometrics, we present a novel image dataset of 91 multiethnic subjects in both the visible and the UV spectrum. We show a method to extract the MFP features from the UV images using the well-known SURF features and compare it with other techniques. In order to prove its benefits, we use weighted score-level fusion and evaluate the performance in a one-against-all comparison. As a result, we observed a significant amplification of performance when traditional face recognition in the visible spectrum is extended with MFP from UV images. We conclude with a future perspective on the use of these features for future research and discuss observed issues and limitations.

Affective computing allows machines to simulate and detect emotional states. The most common method is observation of the face by camera. However, in our increasingly observed society, more privacy-aware methods that do not require facial images but instead look at other physiological indicators of emotion are worth exploring. In this work we present the Emotive Couch, a sensor-augmented piece of smart furniture that detects proximity and motion of the human body. We present the design rationale and use standard machine learning techniques to detect the three basic emotions Anxiety, Interest, and Relaxation. We evaluate the performance of our approach with 15 participants in a study that includes various affect elicitation methods, achieving an accuracy of 77.7%.

What Can a Single Minutia Tell about Gender?

2018

2018 International Workshop on Biometrics and Forensics (IWBF)

Since fingerprints are one of the most widely deployed biometrics, several applications can benefit from accurate fingerprint gender estimation. Previous work mainly tackled gender estimation based on complete fingerprints. However, partial fingerprint captures occur frequently in many applications, including forensics and consumer electronics, where the considered portion of the fingerprint varies. Therefore, this work investigates gender estimation on a small, detectable, and well-defined partition of a fingerprint: the level of a single minutia. Working on this level, we propose a feature extraction process that deals with the rotation and translation invariance problems of fingerprints. It is evaluated on a publicly available database with five different binary classifiers. As a result, the information of a single minutia achieves an accuracy on the gender classification task comparable to previous work using quarters of aligned fingerprints with an average of more than 25 minutiae.

3D-printed Electrodes for Electric Field Sensing Technologies

2017

Darmstadt, TU, Master Thesis, 2017

Electric field sensing and capacitive sensing have been intensively explored research topics for over a century. Combined with the rising popularity of rapid prototyping technologies, like affordable all-in-one micro-controller boards and especially fused filament fabrication 3D printing, new possibilities arise. 3D printing drives the ambition of custom-designed objects with fully integrated and unobtrusive electronics. Conductive 3D-printing materials (filaments) can be used to create electrodes for electric field sensing. These electrodes can be 3D-printed as an integral part of the overall object. However, none of the previous work examines the properties of these conductive materials, the chosen 3D-printing configurations, and patterns regarding their sensing performance and costs. This thesis provides a first insight into the interdependency between the chosen 3D-printing parameters and the overall sensing performance. For this, 30 3D-printed electrodes were created from graphene filament and evaluated against one copper electrode and a placebo electrode. The evaluation was performed with a custom-made measuring toolkit, the CapLiper, which was itself evaluated for proper sensing behavior. The results show that 3D-printed electrodes can compete with the sensing performance of copper electrodes, with some even exceeding it. Using these results, as well as lessons learned in creating two different prototypes, the thesis establishes best practices and gives an outlook on potential future work in this domain.

An Exploratory Study on Electric Field Sensing

Electric fields are influenced by the human body and other conducting materials. Capacitive measurement techniques are used in touch-screens, in the automobile industry, and for presence and activity recognition in Ubiquitous Computing. However, a drawback of capacitive technology is its energy consumption, which is an important aspect for mobile devices. In this paper we explore possible applications of electric field sensing, a purely passive capacitive measurement technique which can be implemented with extremely low power consumption. To cover a wide range of applications, we examine five possible use cases in more detail. The results show that the application is feasible both indoors and outdoors. Moreover, due to the low energy consumption, mobile usage is also possible.

Assistive Apps for Activities of Daily Living Supporting Persons with Down's Syndrome

2017

Journal of Ambient Intelligence and Smart Environments

Supporting persons with Down's Syndrome in their daily activities using ICT is a key element in further advancing their independence and integration into society. The POSEIDON project embraces these goals and develops technology that creates adjustable and personalizable assistive systems. We present a system for money-handling training and assistance for shopping. In this paper we present results from evaluating the Money-Handling Training App in different pilot studies and workshops with a larger group of persons with Down's Syndrome, comparing different interaction devices such as tablets, personal computers, and interactive tables. Furthermore, we present evaluation results for the Shopping App.

Curved - Free-Form Interaction Using Capacitive Proximity Sensors

Large interactive surfaces have found increased popularity in recent years. However, with increased surface size, ergonomics becomes more important, as interacting for extended periods may cause fatigue. Curved is a large-surface interaction device designed to follow the natural movement of an outstretched arm when performing gestures. It tracks one or two hands above the surface using an array of capacitive proximity sensors and supports both touch and mid-air gestures. It requires specific object tracking methods and synchronized measurements from 32 sensors. We have created an example application for users wearing a virtual reality headset while seated, who may benefit from haptic feedback and ergonomically shaped surfaces. A prototype with adaptive curvature allows us to evaluate gesture recognition performance and different surface inclinations.

E-Textile Couch: Towards Smart Garments Integrated Furniture

Application areas like health care and smart environments have greatly benefited from embedding sensors into everyday objects, enabling, for example, sleep apnea detection. We propose to further integrate parts of the sensors into the objects' own materials. Thus, in this work we explore integrating smart garments into furniture, using a couch as our use case. Equipped with textile capacitive sensing electrodes, our prototype outperforms existing systems, achieving an F-measure of 94.1%. Furthermore, we discuss implications and limitations of the integration process.

Efficient, Accurate, and Rotation-Invariant Iris Code

2017

IEEE Signal Processing Letters

The large scale of recently demanded biometric systems has put pressure on creating more efficient, accurate, and private biometric solutions. Iris biometrics is one of the most distinctive and widely used biometric characteristics. High-performing iris representations suffer from the curse of rotation inconsistency. This is usually solved by assuming a range of rotational errors and performing a number of comparisons over this range, which results in a high computational effort and limits indexing and template protection. This work presents a generic and parameter-free transformation of the binary iris representation into a rotation-invariant space. The goal is to perform accurate and efficient comparison and to enable further indexing and template protection deployment. The proposed approach was tested on a database of 10,000 subjects of the ISYN1 iris database generated by CASIA. Besides providing a compact and rotation-invariant representation, the proposed approach reduced the equal error rate by more than 55% and the computational time by a factor of up to 44 compared to the original representation.
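
The abstract does not spell out the transformation itself. As a purely illustrative sketch (not the paper's actual method), one generic way to make a binary code invariant to cyclic shifts, which is the form head rotation takes in an unrolled iris code, is to compare the magnitudes of the discrete Fourier transform of each code row, since DFT magnitudes are unchanged by circular shifts:

```python
import cmath

def dft_magnitudes(bits):
    """Magnitude spectrum of a binary sequence; invariant to cyclic shifts."""
    n = len(bits)
    return [abs(sum(bits[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

row = [1, 0, 1, 1, 0, 0, 1, 0]        # one toy iris-code row
rotated = row[3:] + row[:3]           # simulated iris rotation (cyclic shift)
a = [round(m, 6) for m in dft_magnitudes(row)]
b = [round(m, 6) for m in dft_magnitudes(rotated)]
# The two magnitude spectra coincide, so comparison needs no shift search.
```

Under this kind of representation a single comparison replaces the usual sweep over assumed rotation offsets, which is where the reported speedup would come from.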

Enabling an Internet of Things Framework for Ambient Assisted Living

2017

Ambient Assisted Living

Ambient Assisted Living (AAL) <9, 2016, Frankfurt, Germany>

Ambient Assisted Living (AAL) technologies hold great potential to meet the challenges of health, support, comfort, and social services in European countries. After years of research, innovation, and development in the field of health care and life support, there is still a lack of good practices on how to improve the market uptake of AAL solutions, how to commercialize laboratory results and prototypes, and how to achieve widely accepted mature solutions with a significant footprint in the European market. The Internet of Things (IoT) consists of Internet-connected objects such as sensors and actuators, as well as smart appliances. Due to its characteristics, requirements, and impact on real-life systems, the IoT has gained significant attention over the last few years. The major goal of this paper is to strategically specify and demonstrate the impact of using IoT technology and adhering to IoT specifications on the quality, future collaborative usage, and extensibility of AAL solutions deployed in real life.

Exercise Monitoring On Consumer Smart Phones Using Ultrasonic Sensing

Quantified self has been a trend over the last several years. An increasing number of people use devices, such as smartwatches or smartphones to log activities of daily life, including step count or vital information. However, most of these devices have to be worn by the user during the activities, as they rely on integrated motion sensors. Our goal is to create a technology that enables similar precision with remote sensing, based on common sensors installed in every smartphone, in order to enable ubiquitous application. We have created a system that uses the Doppler effect in ultrasound frequencies to detect motion around the smartphone. We propose a novel use case to track exercises, based on several feature extraction methods and machine learning classification. We conducted a study with 14 users, achieving an accuracy between 73% and 92% for the different exercises.
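
The core idea, that motion near the device shifts reflected ultrasound away from the emitted carrier, can be sketched with a Goertzel filter measuring energy at an off-carrier frequency. The carrier, shift, amplitudes, and window length below are illustrative assumptions, not the parameters of the study:

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power at a single frequency bin (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * freq / fs)
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs, carrier, n = 48_000, 20_000, 4_800                 # 0.1 s analysis window
still = [math.sin(2 * math.pi * carrier * t / fs) for t in range(n)]
# A moving limb reflects a weak, Doppler-shifted copy of the carrier.
moving = [s + 0.2 * math.sin(2 * math.pi * (carrier + 100) * t / fs)
          for s, t in zip(still, range(n))]

p_still = goertzel_power(still, fs, carrier + 100)
p_moving = goertzel_power(moving, fs, carrier + 100)
# Energy at the shifted bin indicates motion toward or away from the phone.
```

In a full pipeline, features computed from several such off-carrier bins would feed a classifier that distinguishes the individual exercises.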

Fiber Defect Detection of Inhomogeneous Voluminous Textiles

Quality assurance of dry-cleaned industrial textiles is still a mostly manually operated task. In this paper, we present how computer vision and machine learning can be used to automate defect detection in this application. Most existing systems require textiles to be spread flat in order to detect defects. In contrast, we present a novel classification method that can be used when textiles are in an inhomogeneous, voluminous shape. Normalization and classification methods are combined in a decision-tree model in order to detect different kinds of textile defects. We evaluate the performance of our system in real-world settings with images of piles of textiles, taken using stereo vision. Our results show that our novel classification method, using key point pre-selection and convolutional neural networks, outperforms competitive methods in classification accuracy.

General Borda Count for Multi-biometric Retrieval

Indexing of multi-biometric data is required to facilitate fast search in large-scale biometric systems. Previous works addressing this issue were challenged by including biometric sources of different nature, utilizing the knowledge about the biometric sources, and optimizing and tuning the retrieval performance. This work presents a generalized multi-biometric retrieval approach that adapts the Borda count algorithm within an optimizable structure. The approach was tested on a database of 10k reference and probe instances of the left and the right irises. The experiments and comparisons to five baseline solutions proved to achieve advances in terms of general indexing performance, tunability to certain operating points, and response to missing data. A clear advantage of the proposed solution was noticed when faced by candidate lists of low quality.
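
A minimal sketch of rank-level Borda fusion with per-source weights and tolerance for missing candidates. The weight handling and scoring here are an illustrative baseline, not the optimizable structure the paper describes:

```python
def borda_fuse(rankings, weights=None):
    """Fuse ranked candidate lists: rank r in a list of length n
    contributes weight * (n - r) points; absent candidates get 0."""
    weights = weights or [1.0] * len(rankings)
    scores = {}
    for ranking, w in zip(rankings, weights):
        n = len(ranking)
        for rank, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0.0) + w * (n - rank)
    return sorted(scores, key=scores.get, reverse=True)

# Two biometric sources agree on subject 'a'; the second missed 'b' entirely,
# illustrating graceful handling of missing data.
fused = borda_fuse([['a', 'b', 'c'], ['a', 'c']])
```

Making the weights trainable per source is one natural way to obtain the tunability to certain operating points mentioned above.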

Biometrics is a rapidly developing field of research, and biometric-based identification systems are experiencing massive growth all around the world, driven by gaining industrial, government, and citizen acceptance. The US-VISIT program uses biometric systems to enforce homeland and border security, whereas in the United Arab Emirates (UAE), biometric systems play a major role in the border control process. Similarly, in India, biometrics has gained a great deal of attention, as the Unique Identification Authority of India (UIDAI) has already registered over one billion Indian citizens in the last 7 years (uidai.gov.in). Despite the rapid propagation of large-scale databases, the majority of researchers are still focusing on the matching accuracy of small databases, while neglecting scalability and speed issues. Identity association is usually determined by comparing input data against every entry in the database, which causes computational problems when it comes to large-scale databases. Biometric indexing aims to reduce the number of candidate identities to be considered by an identification system when searching for a match in large biometric databases. However, this is a challenging task, since biometric data is fuzzy and does not exhibit any natural sorting order. Current indexing methods are mainly based on tree traversal (using kd-trees, B-trees, R-trees), which suffers from the curse of dimensionality, while other indexing methods are based on hashing, which suffers from poor key generation. The goal of this thesis is to develop an indexing scheme based on multiple biometric modalities. It presents the main results of research focusing on iris and fingerprint indexing. Fingerprints are undisputedly the most studied biometric modality and are extensively used in civil and forensic recognition systems. Together with the potential rise in iris recognition accuracy along with enhanced robustness, indexing of these modalities becomes a promising field of research. Different unimodal and multimodal identification approaches have already been proposed in past years. However, most of them trade accuracy for fast identification rates, while the remaining ones make use of complex indexing structures, which require a complete restructuring if insertions or deletions are necessary. This work offers a framework for fast and accurate iris indexing as well as effective indexing schemes to combine multiple modalities. To achieve that, three main contributions are made: First, a new rotation-invariant iris representation was developed, reducing the equal error rate by more than 55% and the computation time by a factor of up to 44 compared to the original representation. Second, this representation was used to construct an indexing scheme which reaches a hit rate of 99.7% at a 0.1% penetration rate, outperforming state-of-the-art algorithms. And third, a general rank-level indexing fusion scheme was developed to effectively combine multiple sources, achieving an over 99.98% hit rate at the same penetration rate of 0.1%.

Indexing of Single and Multi-instance Iris Data Based on LSH-Forest and Rotation Invariant Representation

Indexing of iris data is required to facilitate fast search in large-scale biometric systems. Previous works addressing this issue were challenged by the tradeoffs between accuracy, computational efficiency, storage costs, and maintainability. This work presents an iris indexing approach based on a rotation-invariant iris representation and LSH-Forest to produce an accurate and easily maintainable indexing structure. The complexity of insertion or deletion in the proposed method is limited to the same logarithmic complexity as a query, and the required storage grows linearly with the database size. The proposed approach was extended into a multi-instance iris indexing scheme, resulting in a clear performance improvement. Single-iris indexing scored a hit rate of 99.7% at a 0.1% penetration rate, while multi-instance indexing scored a 99.98% hit rate at the same penetration rate. The evaluation of the proposed approach was conducted on a large database of 50k references and 50k probes of the left and the right irises. The advantage of the proposed solution was put into perspective by comparing the achieved performance to the results reported in previous works.

Indoor Localization Based on Passive Electric Field Sensing

The ability to perform accurate indoor positioning opens a wide range of opportunities, including smart home applications and location-based services. Smart floors are a well-established technology to enable marker-free indoor localization within an instrumented environment. Typically, they are based on pressure sensors or varieties of capacitive sensing. These systems, however, are often hard to deploy as mechanical or electrical features are required below the surface. They might also have a limited range or not be compatible with different floor materials. In this paper, we present a novel indoor positioning system using an uncommon form of passive electric field sensing, which detects the change in body electric potential during movement. It is easy to install by deploying a grid of passive wires underneath any non-conductive floor surface. The proposed architecture achieves a high position accuracy and an excellent spatial resolution. In our evaluation, we measure a mean positioning error of only 12.7 cm. The proposed system also combines the advantages of very low power consumption, easy installation, easy maintenance, and the preservation of privacy.

Invisible Human Sensing in Smart Living Environments Using Capacitive Sensors

2017

Ambient Assisted Living

Ambient Assisted Living (AAL) <9, 2016, Frankfurt, Germany>

Smart living environments aim at supporting their inhabitants in daily tasks by detecting their needs and dynamically reacting accordingly. This generally requires several sensor devices, whose acquired data is combined to assess the current situation. Capturing the full range of situations necessitates many sensors. Often cameras and motion detectors are used, which are rather large and difficult to hide in the environment. Capacitive sensors measure changes in the electric field and can be operated through any non-conductive material. They gained popularity in research in the last few years, with some systems becoming available on the market. In this work we will introduce how those sensors can be used to sense humans in smart living environments, providing applications in situation recognition and human-computer interaction. We will discuss opportunities and challenges of capacitive sensing and give an outlook on future scenarios.

Multi-biometrics aims at building more accurate unified biometric decisions based on the information provided by multiple biometric sources. Information fusion is used to optimize the process of creating this unified decision. In previous works dealing with score-level multi-biometric fusion, the scores of different biometric sources belonging to the comparison of interest are used to create the fused score. This is usually achieved by assigning static weights to the different biometric sources. In contrast, we focus on integrating the information embedded in the relative relation between the comparison scores (within a 1:N comparison) into the biometric fusion process using a dynamic weighting scheme. This is performed by considering the neighbors distance ratio in the ranked comparisons to influence the dynamic weights of the fused scores. The evaluation was performed on the Biometric Scores Set BSSR1 database. The enhanced performance induced by including the neighbors distance ratio information within a dynamic weighting scheme, in comparison to the baseline solution, was shown by an average reduction of the equal error rate by more than 40% over the different test scenarios.
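
The dynamic-weighting idea can be sketched as follows: a source whose best comparison score stands far apart from its runner-up gets more say in the fused decision. The ratio formula and normalization below are illustrative choices, not the scheme evaluated on BSSR1:

```python
def ndr_weights(score_lists):
    """Dynamic weight per source from the neighbors distance ratio:
    second-best / best comparison distance (larger = more confident).
    Assumes distance scores, where smaller means a better match."""
    ratios = []
    for scores in score_lists:
        best, second = sorted(scores)[:2]
        ratios.append(second / best if best > 0 else float('inf'))
    total = sum(ratios)
    return [r / total for r in ratios]

# Source A separates its top candidate clearly; source B is ambiguous,
# so A should dominate the fused score.
source_a = [0.10, 0.80, 0.90]
source_b = [0.50, 0.52, 0.55]
w_a, w_b = ndr_weights([source_a, source_b])
```

The fused score for each candidate would then be the weighted sum of the (normalized) per-source scores using these dynamic weights.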

New Approach for Optimizing the Usage of Situation Recognition Algorithms Within IoT Domains

The growth of the Internet of Things (IoT) over the past few years has enabled many application domains. Due to the increasing number of IoT-connected devices, the amount of generated data is increasing as well. Processing these huge amounts of data is complex due to continuously running situation recognition algorithms. To overcome these problems, this paper proposes an approach for optimizing the usage of situation recognition algorithms in Internet of Things domains. The key idea of our approach is to select important data based on situation recognition purposes, and to execute the situation recognition algorithms only after all relevant data have been collected. The main advantage of our approach is that situation recognition algorithms are not executed each time new data is received, thus reducing their execution frequency and saving computational resources.

New Approaches for Localization and Activity Sensing in Smart Environments

2017

Ambient Assisted Living

Ambient Assisted Living (AAL) <9, 2016, Frankfurt, Germany>

Smart environments need to be able to fulfill the wishes of their occupants unobtrusively. To achieve this goal, it has to be guaranteed that the current state of the environment is perceived at all times. One of the most important aspects is to find the current position of the inhabitants and to perceive how they move in this environment. Numerous technologies enable such supervision. Particularly challenging are marker-free systems that are also privacy-preserving. In this paper, we present two such systems for localizing inhabitants in a smart environment, using electric potential sensing and ultrasonic Doppler sensing. We present methods that infer location and track the user based on the acquired sensor data. Finally, we discuss the advantages and challenges of these sensing technologies and provide an overview of future research directions.

Opportunities for Biometric Technologies in Smart Environments

Smart environments describe spaces that are equipped with sensors, computing facilities, and output systems that aim at providing their inhabitants with targeted services and supporting them in their tasks. Increasingly, these environments face challenges in differentiating multiple users and providing secure authentication. This paper outlines how biometric technologies can be applied in smart environments to overcome these challenges. We give an introduction to these domains and show various applications that can benefit from the combination of biometrics and smart environments.

Safety Services in Smart Environments Using Depth Cameras

Falls of elderly persons are the most common cause of serious injuries in this age group. It is important to detect a fall in a timely manner: if medical help cannot be provided immediately, the patient's state may deteriorate. In order to tackle this challenge, we propose two combined safety services that utilize the same sensor to prevent and detect falls. The Dangerous Object Adviser detects small obstacles located on the floor and warns the user about the stumbling hazard when the user walks in their direction. The Fall Detection Service detects a fall and informs caregivers, enabling them to provide medical care in time. Both services are implemented using the Microsoft Kinect: the obstacles are extracted from the depth image, and skeleton tracking provides the necessary information on the user's position and pose.

This thesis introduces an approach for tracking three different activities, including their context extensions, with a precision of 94% by using multiple pickup/piezo sensors. The mechanical waves created by people touching various objects can be recognized with these sensors. The combination of classical signal processing and current machine learning methods enables the implementation of a processing pipeline for classifying these signal events and assigning each propagated event signal to its activity. Compared to the C4.5, CART, and BayesNet classifiers, the best balance of precision and performance is offered by the SVM classifier. The observed activities in this thesis are Walking, Closing a Cupboard, and Falling. Especially Walking and Closing a Cupboard provide a good basis for extending the context. For a context expansion of Walking, the classification classes are split by shoe type. Closing a Cupboard is divided into the cupboard instances, which have different positions and facing directions in the environment. To avoid creating a Non class, an Impact Filter is applied for preprocessing the recorded signals. The main features used are the RMS value and the zero crossings of the time-domain signal. They are extended by the FFT vector, statistical values like the mean and standard deviation of this vector, as well as the index of the maximal FFT value. With these results, it is possible to mitigate the issues of common sensors like wearables and cameras. An additional advantage is that the approach can very easily be integrated into any environment.
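
The two main time-domain features named above, the RMS value and the zero-crossing count, can be sketched directly. The decaying-oscillation signal is synthetic, standing in for a recorded impact:

```python
import math

def rms(signal):
    """Root-mean-square energy of a windowed signal."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def zero_crossings(signal):
    """Number of sign changes, a rough proxy for dominant frequency."""
    return sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)

# A footstep-like impact: exponentially decaying oscillation (synthetic).
impact = [math.exp(-t / 40) * math.sin(0.8 * t) for t in range(200)]
features = (rms(impact), zero_crossings(impact))
```

In the pipeline described above, such features (extended by FFT statistics) would form the vector fed to the SVM classifier.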

Talis - A Design Study for a Wearable Device to Assist People with Depression

One of the major diseases affecting the global population, depression has a strong emotional impact on its sufferers. In this design study, "Talis" is presented as a wearable device which uses emotion recognition as an interface between patient and machine to support psychotherapeutic treatment. We combine two therapy methods, "Cognitive Behavioral Therapy" and "Well-Being Therapy", with interactive methods thought to increase their practical application potential. In this study, we draw on results obtained in the area of "affective computing" for the use of emotions in empathic devices. The positive and negative phases experienced by the patient are identified through speech recognition and used for direct communication and later evaluation. After considering the design possibilities and suitable hardware, the future realization of such technology appears feasible. In order to design the wearable, user studies and technical experiments were carried out. The results suggest that the device could be beneficial for the treatment of patients with depression.

Multi-biometrics aims at building more accurate unified biometric decisions based on the information provided by multiple biometric sources. Information fusion is used to optimize the process of creating this unified decision. In previous works dealing with score-level multi-biometric fusion, the scores of different biometric sources belonging to the comparison of interest are used to create the fused score. This is usually achieved by assigning static weights to the different biometric sources, with more advanced solutions considering supplementary dynamic information like sample quality and the neighbors distance ratio. This work proposes embedding score coherence information in the fusion process. This is based on our assumption that a minority of biometric sources that points towards a different decision than the majority might have reached faulty conclusions and should be given a relatively smaller role in the final decision. The evaluation was performed on the BioSecure multimodal biometric database with different levels of simulated noise. The proposed solution incorporates, and was compared to, three baseline static weighting approaches. The enhanced performance induced by including the coherence information within a dynamic weighting scheme, in comparison to the baseline solutions, was shown by a reduction of the equal error rate by 45% to 85% over the different test scenarios, and the approach proved to maintain high performance when dealing with noisy data.

UPPERCARE: A Community Aware Environment for Post-surgical Musculoskeletal Recovery of Elderly Patients

2017

Proceedings of the 2017 IEEE 21st International Conference on Computer Supported Cooperative Work in Design (CSCWD)

International Conference on Computer Supported Cooperative Work in Design (CSCWD) <21, 2017, Wellington, New Zealand>

Disability from musculoskeletal diseases and comorbidities may worsen social and economic well-being through a multitude of paths. Moreover, since it is projected that in European Union (EU) Member States those aged 65 and over will become a much larger share of the population (rising from 17% to 30%), and those aged 80 and over (rising from 5% to 12%) will almost become as numerous as the young population by 2060, there is great potential for Information and Communication Technologies (ICT) solutions addressing the present and future living arrangements of older people. The UPPERCARE system is meant to positively affect both intergenerational and partner care, since it contributes to decreasing usability barriers and promotes collaborative environments for informal and self-care. UPPERCARE is a new approach to integrated care supported by ICT systems and services, focusing on post-operative rehabilitation of musculoskeletal pathologies, with knee post-operative prosthetic care as a case study. This paper presents the UPPERCARE system, which provides an ICT-supported integrated care solution for empowering self-care and adherence to rehabilitation plans through natural interfaces, gamification, and cross-modal paths for community care collaboration. The system addresses current barriers from technological, clinical, social, and organisational perspectives in a multidisciplinary environment. Special attention is given to the patients' needs and behaviours, entailing the participation of a wide care community (including clinical and non-clinical people, associations, institutions, and authorities) through a user-driven interaction with the system.

Attack Detection in an Autonomous Entrance System using Optical Flow

Unstaffed access control portals are becoming more common in high security areas. Existing systems require expensive hardware, or are sensitive to changing environmental conditions. We present a single camera system for a mantrap which is able to verify that only one individual is in the designated transit area. Our novel approach combines optical flow and machine-learning classification. A database was created that consists of images of attempted attacks and regular verification. The results show that our approach provides competitive results and outperforms detection rates in several attack scenarios.

Benchmarking Sensors in Smart Environments - Method and Use Cases

2016

Journal of Ambient Intelligence and Smart Environments

Smart environment applications can be based on a large variety of different sensors that may support the same use case but have specific advantages or disadvantages. Benchmarking can determine the most suitable sensor systems for a given application by calculating a single benchmarking score, based on a weighted evaluation of features that are relevant in smart environments. This set of features has to represent the complexity of applications in smart environments. In this work we present a benchmarking model that calculates a benchmarking score based on nine selected features that cover aspects of performance, the environment, and the pervasiveness of the application. Extensions are presented that normalize the benchmarking score if required and compensate for central tendency bias if necessary. We outline how this model is applied to capacitive proximity sensors, which measure properties of conductive objects over a distance. The model is used to identify existing and potential new application domains for this upcoming technology in smart environments.
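
The core of such a model can be sketched as a normalized weighted sum over the feature ratings. The feature names, ratings, and weights below are hypothetical placeholders, not the nine features or values from the paper:

```python
def benchmark_score(ratings, weights):
    """Weighted benchmarking score, normalized back to the rating scale."""
    assert len(ratings) == len(weights)
    return sum(r * w for r, w in zip(ratings, weights)) / sum(weights)

# Hypothetical 1-5 ratings for a capacitive proximity sensor (illustrative).
ratings = {'accuracy': 3, 'range': 2, 'power': 5, 'cost': 4, 'privacy': 5,
           'robustness': 3, 'setup': 4, 'unobtrusiveness': 5, 'maintenance': 4}
weights = {k: 1.0 for k in ratings}   # equal weighting as the default
weights['accuracy'] = 2.0             # emphasize performance for this use case
score = benchmark_score(list(ratings.values()), list(weights.values()))
```

Dividing by the weight sum keeps the score on the original rating scale regardless of how the application-specific weights are chosen, which is one simple form the normalization extension could take.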

Capacitive sensing is a common technology for finger-controlled touch screens. The variety of proximity sensors extends the range, thus supporting mid-air gesture interaction and application below any non-conductive materials. However, this comes at the cost of limited resolution for touch detection. In this paper, we present CapTap, which uses capacitive proximity and acoustic sensing to create an interactive surface that combines mid-air and touch gestures, while being invisibly integrated into living room furniture. We introduce capacitive imaging, investigating the use of computer vision methods to track hand and arm positions and present several use cases for CapTap. In a user study we found that the system has average localization errors of 1.5cm at touch distance and 5cm at an elevation of 20cm above the table. The users found the system intuitive and interesting to use.

Capacitive proximity sensors are a variety of the sensing technology that drives most finger-controlled touch screens today. However, they work over a larger distance. As they are not disturbed by non-conductive materials, they can be used to track hands above arbitrary surfaces, creating flexible interactive surfaces. Since the resolution is lower compared to many other sensing technologies, it is necessary to use sophisticated data processing methods for object recognition and tracking. In this work we explore machine learning methods for the detection and tracking of hands above an interactive surface created with capacitive proximity sensors. We discuss suitable methods and present our implementation based on Random Decision Forests. The system has been evaluated on a prototype interactive surface - the CapTap. Using a Kinect-based hand tracking system, we collect training data and compare the results of the learning algorithm to actual data.

Investigating Low-Cost Wireless Occupancy Sensors for Beds

Occupancy sensors are used in care applications to measure the presence of patients on beds or chairs. Sometimes it is necessary to swiftly alert help when patients try to get up, in order to prevent falls. Most systems on the market are based on pressure-mats that register changes in compression. This restricts their use to applications below soft materials. In this work we want to investigate two categories of occupancy sensors with the requirements of supporting wireless communication and a focus on low-cost of the systems. We chose capacitive proximity sensors and accelerometers that are placed below the furniture. We outline two prototype systems and methods that can be used to detect occupancy from the sensor data. Using object detection and activity recognition algorithms, we are able to distinguish the required states and communicate them to a remote system. The systems were evaluated in a study and reached a classification accuracy between 79 % and 96 % with ten users and two different beds.

Low-cost Indoor Localization Using Cameras - Evaluating AmbiTrack and its Applications in Ambient Assisted Living

2016

Journal of Ambient Intelligence and Smart Environments

Many systems have been proposed in recent years that provide tracking and localization of users in indoor environments, often with a specific focus on pervasive computing settings. Our solution AmbiTrack, as presented here, allows for marker-free localization and tracking of multiple persons, meaning that users are not required to carry special items or tags with them in order for the system to work. This approach allows for an application of AmbiTrack in circumstances where wearing a tag is not viable, e.g., in typical Ambient Assisted Living scenarios where the users of the provided technological systems are usually not technologically well-versed. In this contribution, we explain AmbiTrack and also introduce the adaptations we made for the 3rd EvAAL competition of 2013 in order to make the system more reliable in tracking multiple persons, to use context information for improving the recognition rate, and to simplify the setup and configuration process.

Money Handling Training - Applications for Persons with Down Syndrome

Paying for goods and services is a fundamental activity of daily living. Persons with Down Syndrome often find these situations challenging. Through the usage of assistive technologies, the project Poseidon aims to enable persons with Down Syndrome to be more independent. In this paper we describe a training application for handling money. The novelty is the concept of extending the screen of an application to a palpable table, which serves as a novel interaction device. Furthermore, we design the user interface to be highly personalizable in order to cover a large range of learning profiles of persons with Down Syndrome.

Prototyping Capacitive Sensing Applications with OpenCapSense

2016

GetMobile

OpenCapSense is a prototyping platform to develop innovative applications that rely on perceiving humans with electric fields. Despite today's use of capacitive sensing mostly as a method to detect touch, it offers many interesting facets that range from mid-air interaction to contactless indoor localization and identification. The platform provides active sensors to detect human interactions at distances of more than 40 cm, by generating electric fields. Passive sensors allow for measuring changes in electric fields that occur naturally in the environment, enabling detection distances up to 2 m.

Scaling up IoT: Impact of Semantic Open Platforms

2016

VDE-Kongress 2016 - Internet der Dinge

VDE-Kongress <2016, Mannheim>

The Internet of Things (IoT) consists of connected objects such as sensors and actuators, as well as smart services. Due to its characteristics, requirements, and impact on real-life systems, the IoT has gained significant attention over the last few years. The main reported issue is the exponentially growing number of "Things". Among the open platform technologies, a semantic open platform offers the opportunity to reduce system complexity, ensuring direct communication between heterogeneous components without their knowing each other, and sharing data based on a "common" semantic model without any need for a specific API. The major goal of this contribution is to clarify the previously highlighted advantages and to strategically recommend the usage of semantic open platforms, thus facilitating the growth of the IoT.

The visual detection of defects in textiles is an important application in the textile industry. Existing systems require textiles to be spread flat so they appear as 2D surfaces in order to detect defects. In contrast, we show classification of textiles and textile feature extraction methods that can be used when textiles are in an inhomogeneous, voluminous shape. We present a novel approach to image normalization for use in stain-defect recognition. The acquired database consists of images of piles of textiles, taken using stereo vision. The results show that a simple classifier using normalized images outperforms other machine learning approaches in classification accuracy.

Verification of Single-Person Access in a Mantrap Portal Using RGB-D Images

2016

XII Workshop de Visão Computacional. Proceedings

Workshop de Visão Computacional <2016, Campo Grande, Brasil>

Automatic entrance systems are increasingly gaining importance for guaranteeing security in, e.g., critical infrastructure. A pipeline is presented which verifies that only a single, authorized subject can enter a secured area. Verification scenarios are carried out using a set of RGB-D images. Features invariant to rotation and pose are used and classified by different metrics so that the approach can be applied in real time. The performance was evaluated using scenarios in which the system was attacked by a second subject. The results show that the presented approach outperforms competitive methods. We conclude with a summary of strengths and weaknesses and give an outlook on future work.

Acoustic Tracking of Hand Activities on Surfaces

Many common activities are tactile in nature. We touch, grasp, and interact with a plethora of objects every day. Some of those objects register our activities, such as the millions of touch screens we use every day. Adding perception to arbitrary objects is an active area of research, with a variety of technologies in use. Acoustic sensors, such as microphones, react to mechanical waves propagating through a medium. By attaching an acoustic sensor to a surface, we can analyze activities on this medium. In this paper, we present signal analysis and machine learning methods that enable us to detect a variety of interaction events on a surface. We extend previous work by combining swipe and touch detection in a single method, for the latter achieving an accuracy between 91% and 99% with a single microphone and 97% to 100% with two microphones.
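The paper's full pipeline is not reproduced here, but the first step of any such system, segmenting impulsive touch events out of the raw microphone stream, can be sketched as follows. This is a simplified short-time-energy onset heuristic, not the authors' method; the frame size and threshold are assumptions.

```python
import numpy as np

def detect_touch_events(signal, rate, frame=256, threshold=5.0):
    """Flag impulsive touch events in an acoustic surface signal.

    Hypothetical sketch: frames whose short-time energy exceeds
    `threshold` times the median frame energy are marked active,
    and only rising edges are reported as event onsets (seconds).
    A real system would feed the flagged segments to a classifier.
    """
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    energy = (frames ** 2).mean(axis=1)          # per-frame energy
    floor = np.median(energy) + 1e-12            # noise-floor estimate
    active = energy > threshold * floor
    # keep only frames where activity begins (rising edges)
    onsets = np.flatnonzero(active & ~np.r_[False, active[:-1]])
    return onsets * frame / rate
```

In a complete recognizer, the spectral content of each detected segment would then distinguish fingertip taps from knocks or swipes.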

Assessing Real World Imagery in Virtual Environments for People with Cognitive Disabilities

People with cognitive disabilities are often socially excluded. We propose a system based on Virtual and Augmented Reality that has the potential to act as an educational and support tool in everyday tasks for people with cognitive disabilities. Our solution consists of two components: the first enables users to train for several essential quotidian activities, while the second offers real-time guidance feedback for immediate support. In order to illustrate the functionality of our proposed system, we chose to train and support navigation skills. Thus, we conducted a preliminary study with people with Down Syndrome (DS) based on a navigation task. Our experiment was aimed at evaluating the visual and spatial perception of people with DS when interacting with different elements of our system. We provide a preliminary evaluation that illustrates how people with DS perceive different landmarks and types of visual feedback, in static images and videos. Although we focused our study on people with DS, people with other cognitive disabilities could also benefit from the features of our solution. This analysis is essential in the design of a virtual intelligent system with several functionalities that aims at helping people with disabilities develop basic knowledge of everyday tasks.

Capacitive Proximity Sensing in Smart Environments

2015

Journal of Ambient Intelligence and Smart Environments

To create applications for smart environments, we can select from a huge variety of sensors that measure environmental parameters or detect activities of different actors within the premises. Capacitive proximity sensors use weak electric fields to recognize conductive objects, such as the human body. They can be applied unobtrusively and even provide information when hidden from view. In recent years, various research groups have used this sensor category to create singular applications in this domain. On the following pages we discuss the application of capacitive proximity sensors in smart environments, establishing a classification in comparison to other sensor technologies. We give a detailed overview of the background of this sensing technology and identify specific application domains. Based on existing systems from the literature and a number of prototypes we have created in the past years, we specify benefits and limitations of this technology and give a set of guidelines to researchers who are considering this technology for their smart environment applications.

Inattentiveness is one of the major causes of traffic accidents. Advanced car safety systems try to mitigate this by detecting potential signs of distraction or tiredness and providing alerts to the driver. In this paper we present CapSeat, a car seat equipped with integrated capacitive proximity sensors that are used to measure a wide range of physiological parameters of the driver. This can support safety systems by detecting inattentiveness and increase passive safety by facilitating suitable seat adjustments and posture detection. We present a sensor electrode layout suitable for detecting the necessary parameters, and processing methods that acquire multiple physiological parameters from the sensor data using a variety of different algorithms. A prototype of the system is presented that was evaluated for all detectable parameters in a proof-of-concept study, achieving a classification precision between 95% and 100%.

Capacitive sensors in both touch and proximity varieties are becoming more common in many industrial and research applications. Each sensor requires one or more electrodes to create an electric field and measure changes thereof. The design and layout of those electrodes is crucial when designing applications and systems, as it can influence range, detectable objects, or refresh rate. In recent years, new measurement systems and materials, as well as advances in rapid prototyping technologies, have vastly increased the potential range of applications using flexible capacitive sensors. This paper contributes an extensive set of capacitive sensing measurements with different electrode materials and layouts for two measurement modes: self-capacitance and mutual capacitance. The evaluation of the measurement results reveals how well suited certain materials are for different applications. We evaluate the characteristics of those materials for capacitive sensing and enable application designers to choose the appropriate material for their application.

Enhancing Traffic Safety with Wearable Low-Resolution Displays

Safety is a major concern for non-motorized traffic participants, such as cyclists, pedestrians, or skaters. Due to their vulnerability compared to cars, accidents often lead to serious injuries. In this paper, we investigate how additional protection can be achieved with wearable displays attached to a person's arm, leg, or back. In contrast to prior work, we present an extensive study on design considerations for wearable displays in traffic. Based on interviews, experiments, and an online questionnaire with more than 100 participants, we identify potential placements, form factors, and use cases. These findings enabled us to develop a wearable display system for traffic safety, called beSeen. It can be attached to different parts of the human body, such as the arms, legs, or back. Our device unobtrusively recognizes turn indication gestures, braking, and its placement on the body. We evaluate beSeen's performance and show that it can be reliably used for enhancing traffic safety.

In the last few years, devices such as the Microsoft Kinect or the Leap Motion have enabled affordable gesture tracking in mid-air. While this is a fast and natural form of interaction, many applications can benefit from a combination of mid-air interaction and touch recognition. The CapTap is an interactive table that uses a combination of capacitive proximity sensors and acoustic touch detection, enabling various interaction modes. In this thesis, these capabilities were evaluated in the context of musical control scenarios. The goal of this research was to determine the complexity of the interaction modes and to learn how appealing and motivating the test persons found the musical interaction. Another focus was the comparison of an Augmented Reality visualization with a standard display, to obtain data about the preferred visualization technology. Furthermore, the acoustic touch detection of the CapTap was extended with fingernail input. The technical evaluation addressed the reliability of three touch inputs and resulted in an average detection rate of 85%. The user-experience evaluation indicated that all modes are highly stimulating and motivate further development. While the one- and two-handed mid-air control modes could be utilized without much practice and were rated as very attractive, the combined touch and mid-air modes received lower ratings. The comparison between an Augmented Reality visualization and a standard computer screen yielded a clear preference for the standard display.

ExerSeat - Sensor-Supported Exercise System for Ergonomic Microbreaks

The percentage of older adult workers in Europe has been increasing in the last decades. They are an important part of the work force, highly experienced and often hard to replace. However, their productivity can be affected by health problems, such as lower back pain. This increases the cost for employers and reduces the quality of life of the office workers. Knowledge workers that spend a large part of their day in front of a screen are particularly affected by back pain. Regular exercise can help to mitigate some of these issues. This training can be performed in microbreaks that are taken at regular intervals during the work day. In this work we present ExerSeat, a smart sensing chair that uses eight capacitive proximity sensors to precisely track the posture of persons on or near an office chair. It is augmented by desktop training software that is able to track exercises and training units during microbreaks by analyzing frequency and form. We performed an eight-week pilot study with ten office workers, who performed training units at regular intervals during their work day, and report on the findings.

International Conference on Information and Communication Technologies for Ageing Well and e-Health (ICT4AgeingWell) <1, 2015, Lisbon, Portugal>

Virtual coaching is an application area that allows individuals to improve existing skills or learn new ones; it ranges from simple textual tutoring tools to fully immersive 3D learning situations. The latter aim to improve the learning experience with realistic 3D environments. In highly individual training scenarios it can be beneficial to provide some level of personalization of the environment. This can be supported by procedural modeling, which makes it easy to modify the shape, look, and contents of an environment. We present the application of personalization using procedural modeling in learning applications in the project V2me. This project combines virtual and social networks to help senior citizens maintain and create meaningful relationships. We present a system that uses a procedurally generated ambient virtual coaching environment that can be adjusted by the training subjects themselves or in collaboration. A small user experience study was conducted that gives first insights into the acceptance of such an approach.

The Capacitive Chair

Modern office work often consists of spending long hours in a sitting position. This can cause a number of health-related issues, including chronic back pain. Ergonomic sitting requires suitably adjusted chairs and switching through a variety of different sitting positions throughout the day. Smart furniture can support this positive behavior by recognizing poses and activities and giving suitable feedback to the occupant. In this work we present the Capacitive Chair: a number of capacitive proximity sensors are integrated into a regular office chair and can sense various physiological parameters, ranging from pose to activity levels or breathing rate. We discuss suitable sensor layouts and processing methods that enable detecting activity levels, posture, and breathing rate. The system is evaluated in two user studies that test activity recognition throughout a work week and the recognition rate of different poses.
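As an illustration of how a slow physiological signal can be recovered from such sensor data, the following sketch estimates breathing rate as the dominant spectral peak in a typical breathing band. The band limits, sampling setup, and spectral approach are assumptions for illustration; the paper's actual processing may differ.

```python
import numpy as np

def breathing_rate_bpm(samples, rate_hz, band=(0.1, 0.5)):
    """Estimate breathing rate from a chest-facing capacitive
    proximity sensor signal.

    Hypothetical sketch: breathing appears as a slow periodic
    distance change, so we take the dominant spectral peak in the
    0.1-0.5 Hz band (6-30 breaths per minute) of the mean-removed
    signal and convert it to breaths per minute.
    """
    x = np.asarray(samples, dtype=float)
    x -= x.mean()                       # remove static occupant offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate_hz)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[mask][np.argmax(spectrum[mask])]
    return peak_hz * 60.0               # breaths per minute
```

Restricting the search to the physiological band is what keeps larger but slower posture shifts, and faster fidgeting, from dominating the estimate.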

The Technical Specification and Architecture of a Virtual Support Partner

Most elderly people prefer to live independently in their own homes for as long as possible. Support, where needed, is delivered by other people and/or through the use of technology. This paper describes how so-called conversational agents can be designed to provide virtual support and help in the daily life activities of older adults. The paper describes the concept and idea of a virtual support partner and its concrete realization in the EU-funded Miraculous-Life project. It describes the deployment setup, the components, and the architecture, and gives conclusions and lessons learned.

A Benchmarking Model for Sensors in Smart Environments

In smart environments, developers can choose from a large variety of sensors supporting their use case that have specific advantages or disadvantages. In this work we present a benchmarking model that allows estimating the utility of a sensor technology for a use case by calculating a single score, based on a weighting factor for applications and a set of sensor features. This set takes into account the complexity of smart environment systems that are comprised of multiple subsystems and applied in non-static environments. We show how the model can be used to find a suitable sensor for a use case and the inverse option to find suitable use cases for a given set of sensors. Additionally, extensions are presented that normalize differently rated systems and compensate for central tendency bias. The model is verified by estimating technology popularity using a frequency analysis of associated search terms in two scientific databases.
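The weighted single-score idea can be illustrated with a small sketch. The feature names, the 1-5 rating scale, and the normalization below are illustrative assumptions, not the paper's exact model.

```python
def benchmark_score(feature_ratings, use_case_weights):
    """Utility score of a sensor technology for a use case.

    Each sensor feature rating is multiplied by the weight the use
    case assigns to that feature, then divided by the total weight
    so scores over differently sized feature sets stay comparable.
    """
    total_weight = sum(use_case_weights.values())
    score = sum(
        use_case_weights[f] * feature_ratings.get(f, 0)
        for f in use_case_weights
    )
    return score / total_weight

# Hypothetical ratings for a capacitive proximity sensor (1 = poor, 5 = excellent)
capacitive = {"range": 2, "cost": 5, "unobtrusiveness": 5, "resolution": 3}
# Hypothetical weights an indoor-localization use case places on each feature
indoor_localization = {"range": 3, "cost": 2, "unobtrusiveness": 2, "resolution": 1}
```

Running the model in both directions, scoring several sensors against one use case or one sensor against several use cases, mirrors the two usages described in the abstract.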

A Gesture-Based Door Control Using Capacitive Sensors

In public places, sanitary conditions are always a concern, particularly for surfaces that are touched by a multitude of persons, such as door handles in restrooms. Similar issues also arise in medical facilities. Doors that open based on presence are common in environments such as shopping malls; however, they are not suited for sensitive areas, such as toilet stalls. Capacitive proximity sensors detect the presence of the human body over a distance and can be applied unobtrusively in order to enable hidden gesture-based interfaces that work without touch. In this paper we present a concept for a gesture-controlled automated door based on this sensor technology. We introduce the underlying technology and present the concept and electronic components used in detail. Novel interaction patterns and data processing methods allow users to open, close, lock, and unlock the door using simple gestures. A prototype device has been created and evaluated in a user study.

Smart environments feature a number of computing and sensing devices that support occupants in performing their tasks. In the last decades there have been a multitude of advances in miniaturizing sensors and computers while greatly increasing their performance. As a result, new devices with a plethora of functions are being introduced into our daily lives. Gathering information about the occupants is fundamental to adapting the smart environment according to preference and situation. There is a large number of different sensing devices available that can provide information about the user, including cameras, accelerometers, GPS, acoustic systems, and capacitive sensors. The latter use the properties of an electric field to sense the presence and properties of conductive objects within range. They are commonly employed in finger-controlled touch screens that are present in billions of devices. A less common variety is the capacitive proximity sensor. It can detect the presence of the human body over a distance, providing interesting applications in smart environments. Choosing the right sensor technology is an important decision in designing a smart environment application. Apart from looking at previous use cases, this process can be supported by more formal methods. In this work I present a benchmarking model that is designed to support this decision process for applications in smart environments. Previous benchmarks for pervasive systems have been adapted towards sensor systems and include metrics that are specific to smart environments. Based on distinct sensor characteristics, different ratings are used as weighting factors in calculating a benchmarking score. The method is verified using popularity matching in two scientific databases. Additionally, there are extensions to cope with central tendency bias and to normalize with regard to average feature rating.
Four relevant application areas are identified by applying this benchmark to applications in smart environments and capacitive proximity sensors: indoor localization, smart appliances, physiological sensing, and gesture interaction. Each application area has a set of challenges regarding the required sensor technology, the layout of the systems, and the processing, which can be tackled using various new or improved methods. I present a collection of existing and novel methods that support processing data generated by capacitive proximity sensors, in the areas of sparsely distributed sensors, model-driven fitting methods, heterogeneous sensor systems, image-based processing, and physiological signal processing. To evaluate the feasibility of these methods, several prototypes have been created and tested for performance and usability; six of them are presented in detail. Based on these evaluations and the knowledge generated in the design process, I am able to classify capacitive proximity sensing in smart environments. This classification consists of a comparison to other popular sensing technologies in smart environments, the major benefits of capacitive proximity sensors, and their limitations. In order to support parties interested in developing smart environment applications using capacitive proximity sensors, I present a set of guidelines that support the decision process from technology selection to the choice of processing methods.

Automotive Interfaces Using an Interactive Armrest

2014

Darmstadt, TU, Bachelor Thesis, 2014

Due to the rapid technological development of cars and their entertainment and infotainment systems, drivers are confronted with feature-rich interfaces that can become both confusing and distracting. Therefore, new ways of interaction between driver and car have to be developed in order to reduce driver distraction to a minimum. This is relevant to the safety of both the driver and the surrounding road users. In this thesis, gesture-based interaction in the automotive domain is examined, with the main focus on gestural interaction using capacitive sensors. An overview of related work in this area is given. Challenges in developing a capacitive system for gesture-based interaction in the automotive environment are presented and discussed. Afterwards, a model for a gesture-based input system using an augmented armrest is proposed. A prototypical system is implemented in order to test the possibilities and limitations of the proposed model. This system is then evaluated in order to test its general viability and to compare different kinds of gestures for interacting with in-car systems.

This work's general topic is advanced driver assistance systems. In particular, it addresses assisted driver seat adjustment based on anthropometric data, the detection of out-of-position postures, and driver drowsiness detection. Existing systems use sensors like in- and off-cabin cameras to detect drowsiness, or require the manual input of anthropometric data to adjust the driver's seat. In contrast to these approaches, the aim of this work is to build a system which captures drowsiness symptoms, tracks the head position, and captures anthropometric data solely through invisible, seat-integrated capacitive proximity sensors. The aim also includes evaluating the system's concepts to give direction for further investigation. The idea is to integrate several capacitive proximity sensors at meaningful positions in a driver's seat. Because these sensors can sense through non-conductive materials, they can be installed invisibly under the seat cover. The sensors measure changes in the electric field: occupants who are in range of the sensors change the electric field, so the sensor values can give information about the occupant's anthropometry and position. With these anthropometric data, an assisted seat adjustment becomes possible, and the movement of the driver's head in particular could give information about the driver's drowsiness. A first question of this report addresses the driver's anthropometry: what is a proper seat adjustment? Furthermore, what are the symptoms of drowsiness, and which of them could be measured with capacitive proximity sensors? Moreover, what is an out-of-position posture? Building on the anthropometrical requirements, the work shows which concepts can meet the demands on the system and how a prototype can be developed to evaluate these concepts.
The evaluation shows that the concepts can satisfy the demands on the system. The approaches that rely on machine learning classifiers yield reliable results. Nevertheless, the different approaches place different demands on the diversity of the data collected to train the algorithms. Besides the machine learning classifiers, many functions of the assisted seat adjustment depend on generic relations between the prototype's sensor system and the occupant's anthropometry. These functions show positive results. Nevertheless, a multiclass SVM approach with discrete adjustment classification could lead to better results, because this approach can include more sensors and thus capture further dependencies between the sensors' data and the anthropometry. Several functions of advanced driver assistance systems are integrated into the capacitive-proximity-sensing-supported advanced driver assistance system. The evaluation shows that invisibly seat-integrated capacitive proximity sensors can sense several symptoms of driver drowsiness. Furthermore, the system can assist with seat adjustment and detect out-of-position postures. The detection concepts are constrained by several requirements for a properly working system. Consequently, the next step is the further integration of the system into a real car. Additionally, the evaluation shows that the machine learning concepts require a large amount of diverse data; hence, further data collection will improve the system's reliability. Besides further data collection and real system integration, the developed prototype can be the basis for further function development, such as gesture recognition for controlling a multimedia system.

Curved Large-Area Surfaces for Gestural Interaction

2014

Darmstadt, TU, Bachelor Thesis, 2014

Gestures are a natural and intuitive part of human communication. Since the appearance of smartphones and tablet computers, gestural interaction has become familiar to many consumers. Usually gesture interaction is implemented on two-dimensional planar surfaces, although the natural movement of the human body results in elliptic or spherical paths. This thesis shows a way of equipping large-area curved surfaces with capacitive loading-mode proximity sensors and recognizing gestures from these sensors' data. For this purpose, existing techniques well known from planar systems were adapted for use in curved prototypes. To validate the results, both the interaction with the prototype and the gesture recognition were evaluated and the results discussed.

MoviBed - Sleep Analysis Using Capacitive Sensors

Sleep disorders are a wide-spread phenomenon that can gravely affect personal health and well-being. An individual sleep analysis is a first step in identifying unusual sleeping patterns, providing suitable means for further therapy, and preventing an escalation of symptoms. Typically, such an analysis is intrusive and requires the user to stay in a sleep laboratory. In this work we present a method for detecting sleep patterns based on capacitive proximity sensors invisibly integrated into the bed frame. These sensors work with weak electric fields and do not disturb sleep. Using the movements of the sleeping person, we are able to provide a continuous analysis of different sleep phases. The method was tested in a prototypical setup over multiple nights.
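Movement-based sleep analysis of this kind can be illustrated with a minimal sketch that scores fixed-length epochs of the sensor signal by sample-to-sample change. This is a hypothetical simplification; the epoch length and any downstream mapping of low-movement runs to sleep phases are assumptions, not the authors' pipeline.

```python
import numpy as np

def movement_index(samples, rate_hz, epoch_s=30):
    """Per-epoch movement index from a bed-integrated capacitive
    proximity sensor.

    Hypothetical sketch: the signal is split into fixed epochs and
    each epoch is scored by the mean absolute sample-to-sample
    change, so epochs containing body movement stand out from quiet
    sleep. Returns one score per complete epoch.
    """
    x = np.asarray(samples, dtype=float)
    n = int(epoch_s * rate_hz)                 # samples per epoch
    epochs = x[: len(x) // n * n].reshape(-1, n)
    return np.abs(np.diff(epochs, axis=1)).mean(axis=1)
```

A downstream step could then label runs of low-index epochs as candidate deep-sleep phases, analogous to actigraphy scoring.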

Robot-Supported Pointing Interaction for Intelligent Environments

Natural interaction with appliances in smart environments is a highly desired form of controlling the surroundings using intuitively learned interpersonal means of communication. Hand and arm gestures, recognized by depth cameras, are a popular representative of this interaction paradigm. However, they usually require stationary units that limit applicability in larger environments. To overcome this problem, we introduce a self-localizing mobile robot system that autonomously follows the user in the environment in order to recognize performed gestures independent of the current user position. We have realized a prototypical implementation using a custom robot platform and evaluated the system with various users.

Towards Interactive Car Interiors - the Active Armrest

Modern cars are often equipped with touch-based interaction systems, such as touchscreens or touchpads. However, these typically have to be visibly exposed within the car environment. In this paper, we present the Active Armrest: a regular car armrest equipped with capacitive proximity sensors that combine limb detection and gesture recognition. The sensors are designed for invisible integration into existing environments and can be used to create interactive surfaces in a car. We investigate two different types of gestural interaction: touch gestures with the arm lifted, and free-air finger gestures performed above the interactive area while the arm stays on the armrest. The system was integrated into a prototype and tested for gesture recognition precision and usability.

Systems providing tracking and localization of persons in indoor environments have been continuously proposed in recent years, particularly for Pervasive Computing applications. AmbiTrack is a system that provides marker-free localization and tracking, i.e., it does not require users to carry any tag in order to be localized. This allows easy application in circumstances where wearing a tag is not viable, e.g., in typical Ambient Assisted Living scenarios, where users may not be technologically well-versed. In this work, we present the AmbiTrack system and its adaptation for the EvAAL competition 2013. We present a marker-free, camera-based system for indoor environments designed for cost-effectiveness and reliability. We adapt our previously presented system to make it more reliable in tracking multiple persons, using context information to improve the recognition rate and simplify the installation.

Building Up Virtual Environments Using Gestures

When realizing human-machine interaction in smart environments, it is necessary to create a virtual representation of the environment that encompasses not only the locations of the supported devices but may also contain meta-information such as technical and logical communication layers or a description of supported functionalities, e.g., using semantics. Creating this representation typically requires technical knowledge and the manipulation of object representation files. Therefore, it is a major challenge to enable this set-up for regular users by providing an easy way to establish the virtual environment and the respective positions and orientations of integrated devices. In this work we present a novel user-centered approach to creating these physical parameters in the virtual representation. Based on intuitive gestural interaction, we are able to define the boundaries of appliances and select their capabilities. We have evaluated this method with various users in order to investigate whether such gestural modification of virtual representations provides an easy way for regular users to create their own smart environments.

Input devices based on arrays of capacitive proximity sensors allow the tracking of a user's hands in three dimensions. They can be hidden behind materials such as wood, wool, or plastics without limiting their functionality, making them ideal for Ambient Intelligence (AmI) scenarios. Most gesture recognition frameworks are targeted towards classical input devices and interpret two-dimensional data. In this work, we present a concept for adapting classical gesture recognition methods to capacitive input devices by extending the feature set to three-dimensional input data. This allows more robust gesture recognition for free-space interaction and training specific to capacitive input devices. We have implemented this concept in a prototypical setup and tested the device in various Ambient Intelligence scenarios, ranging from manipulating home appliances to controlling multimedia applications.

In the last few years, the number of intelligent systems has grown rapidly, and classical interaction devices like mouse and keyboard are being replaced in some use cases. Novel, goal-based interaction systems, e.g., based on gesture and speech, allow natural control of various devices. However, these are prone to misinterpretation of the user's intention. In this work we present a method for supporting goal-based interaction using multimodal interaction systems. By combining speech and gesture, we are able to compensate for the uncertainties of both interaction methods, thus improving intention recognition. Using a prototypical system, we have demonstrated the usability of such a system in a qualitative evaluation.

The need for novel interaction paradigms in automotive human-machine interface (HMI) applications has become apparent within the last decade. The number of functions to be controlled in modern cars rises constantly. In parallel, increasing traffic density demands more and more attention and concentration from the driver. Natural interaction paradigms, like gesture-based interaction, offer prospects for more intuitive and less distracting function control. Prominent research projects use vision-based approaches like cameras to successively replace buttons. Capacitive proximity sensing constitutes a promising alternative technology for realizing contactless gesture-based interaction; the required sensor electrode surfaces are installed at the locations where interaction should be possible. This thesis describes an approach for free-space contactless gesture recognition with transparent electrode surfaces attached to a common monitor display. An adaptation of the Condensation algorithm is proposed for object tracking, and recognition and discrimination of gestures is realized with an approach employing hidden Markov models. The developed method, with its required hardware and electrode layouts, is prototypically realized as a demonstrator. Based upon the finalized prototype, a user survey is undertaken to evaluate the user experience and the relevance of contactless gesture-based interaction paradigms for in-vehicle applications.
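The HMM scoring step that such a recognizer relies on, picking the gesture model under which the tracked observation sequence is most likely, can be sketched with the scaled forward algorithm. Discrete (quantized) tracker observations are an assumption here; the thesis's concrete models and feature encoding may differ.

```python
import numpy as np

def hmm_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in scaled form.

    obs   : sequence of observation symbol indices
    start : (N,) initial state probabilities
    trans : (N, N) state transition matrix, row-stochastic
    emit  : (N, M) emission probabilities per state and symbol
    """
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()                       # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]   # forward recursion step
        s = alpha.sum()
        log_p += np.log(s)                     # accumulate log of scale
        alpha /= s
    return log_p
```

A recognizer would evaluate this for one trained HMM per gesture class and report the class with the highest log-likelihood.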

Marker-Free Indoor Localization and Tracking of Multiple Users in Smart Environments Using a Camera-Based Approach

In recent years, various indoor tracking and localization approaches for usage in conjunction with Pervasive Computing systems have been proposed. In a nutshell, three categories of localization methods can be identified, namely active marker-based solutions, passive marker-based solutions, and marker-free solutions. Both active and passive marker-based solutions require a person to carry some type of tagging item in order to function, which, for a multitude of reasons, makes them less favorable than marker-free solutions, which are capable of localizing persons without additional accessories. In this work, we present a marker-free, camera-based approach for use in typical indoor environments that has been designed for reliability and cost-effectiveness. We successfully evaluated the system with two persons, and initial tests suggest that the number of simultaneously tracked users can be increased further.

Capacitive sensing allows the creation of unobtrusive user interfaces that are based on measuring the proximity to objects or recognizing their dielectric properties. Combining the data of many sensors, applications such as in-the-air gesture recognition, location tracking or fluid-level sensing can be realized. We present OpenCapSense, a highly flexible open-source toolkit that enables researchers to implement new types of pervasive user interfaces with low effort. The toolkit offers a high temporal resolution with sensor update rates up to 1 kHz. The typical spatial resolution varies between one millimeter at close object proximity and around one centimeter at distances of 35 cm or above.
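The distance-dependent resolution reported above has an intuitive explanation: capacitance falls off with distance, so a fixed capacitance noise floor maps to a distance uncertainty that grows with range. A first-order sketch under a simple loading-mode model C(d) = k/d (an illustrative approximation, not OpenCapSense's actual transfer function, with made-up constants):

```python
# Error propagation for C(d) = k/d: a capacitance noise floor dC yields a
# distance uncertainty dd = (d^2 / k) * dC, growing quadratically with d.
k = 50.0      # fF*cm, illustrative electrode constant
noise = 0.02  # fF, illustrative capacitance noise floor

for d in (2.0, 10.0, 35.0):          # object distance in cm
    dd = d * d / k * noise           # first-order distance uncertainty in cm
    print(f"d = {d:5.1f} cm -> resolution ~ {dd * 10:.2f} mm")
```

The exact numbers depend on electrode geometry and circuit noise; the sketch only shows the qualitative trend of fine resolution near the electrode and coarser resolution at larger distances.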

Personalized Smart Environments to Increase Inclusion of People with Down's Syndrome

Most people with Down's Syndrome (DS) experience low integration with society. Recent research and new opportunities for their integration in mainstream education and work have provided numerous cases where levels of achievement exceeded the (limiting) expectations. This paper describes a project, POSEIDON, aiming at developing a technological infrastructure which can foster a growing number of services developed to support people with DS. People with DS have their own strengths, preferences and needs, so POSEIDON will focus on using their strengths to provide support for their needs whilst allowing each individual to personalize the solution based on their preferences. This project is user-centred from its inception and will give all main stakeholders ample opportunities to shape the output of the project, which will ensure a final outcome that is of practical usefulness and interest to the intended users.

Providing Visual Support for Selecting Reactive Elements in Intelligent Environments

When realizing gestural interaction in a typical living environment there often is an offset between user-perceived and machine-perceived direction of pointing, which can hinder reliable selection of elements in the surroundings. This work presents a support system that provides visual feedback to a freely gesturing user, thus enabling reliable selection of and interaction with reactive elements in intelligent environments. We have created a prototype that showcases this feedback method, based on gesture recognition using the Microsoft Kinect and visual support provided by a custom-built laser robot. Finally, an evaluation has been performed in order to assess the efficiency of such a system, acquire usability feedback, and determine potential learning effects for gesture-based interaction.

Swiss-Cheese Extended proposes a novel real-time method for recognizing objects with capacitive proximity sensors. Applying this technique to ubiquitous user interfaces, it is possible to detect the 3D-position of multiple human hands in different configurations above a surface that is equipped with a small number of sensors. The retrieved object configurations can significantly improve a user's interaction experience or an application's execution context, for example by detecting multi-hand zoom and rotation gestures or recognizing a grasping hand. We emphasize the broad applicability of the proposed method with a study of a multi-hand gesture recognition device.
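As a first intuition for how a position can be recovered from only a few sensors (Swiss-Cheese Extended itself uses a more elaborate object-recognition method), a reading-weighted centroid over the sensor positions already localizes a single hand; the grid layout and readings below are illustrative:

```python
# Intuition sketch, not the Swiss-Cheese algorithm: estimate a hand's (x, y)
# position above a surface from a 2x2 grid of capacitive proximity sensors
# via a reading-weighted centroid. Readings are normalized to [0, 1].
sensors = [((0.0, 0.0), 0.1), ((1.0, 0.0), 0.6),
           ((0.0, 1.0), 0.1), ((1.0, 1.0), 0.6)]  # ((x, y), reading)

total = sum(r for _, r in sensors)
x = sum(p[0] * r for p, r in sensors) / total
y = sum(p[1] * r for p, r in sensors) / total
print((round(x, 3), round(y, 3)))  # hand biased towards the right edge
```

Resolving multiple hands requires separating the sensor readings into per-object contributions first, which is exactly where the more sophisticated recognition method comes in.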

Unobtrusive Recognition of Working Situations

In many countries, people are obliged to remain in their jobs for a long time. This results in an increased number of elderly people with certain disabilities in working life. Support from technical assistance systems can therefore avoid further health risks and help employees in their everyday work. An important step towards offering suitable assistance is the automatic recognition of working situations. In this paper we explore the unobtrusive acquisition and classification of working situations above a tabletop surface; to this end, a grid of capacitive sensors is deployed directly underneath the tabletop.

In the last decades the demographic change in Europe has become apparent. In Germany, 20% of the population is already older than 65. This age group is particularly affected by the increasing complexity of modern public transit systems. In this paper we present the results of a user requirements elicitation for a navigation assistant for elderly people in public transit. The system shall provide a targeted user experience and take into account the personal profile of each user, e.g. modeling mobility deficiencies that require walking aids and avoiding paths that would be impassable. We have performed an exhaustive user evaluation through expert interviews and focus groups to identify suitable interface choices, and in the process were able to exclude some systems that were considered obvious in initial assessments.

Capfloor

2012

Partnerships for Social Innovation in Europe

AAL Forum <3, 2011, Lecce, Italy>

Indoor localisation is an important part of integrated AAL solutions, providing continuous service to elderly people. Such systems are able to fulfil multiple purposes, ranging from energy saving or location-based reminders to burglary detection. Combined systems that include localisation as well as additional services, e.g. fall detection, are particularly useful. Capacitive sensing systems that allow the detection of the presence of a body over distance are a possible solution for indoor localisation that has been used in the past. However, the installation requirements are usually high and consequently such systems are expensive to integrate. We propose a flexible, integrated solution based on affordable, open-source hardware that allows indoor localisation and fall detection, specifically designed for challenges in the context of AAL. The system is composed of sensing mats that can be placed under various types of floor covering and that wirelessly transmit data to a central platform, providing localisation and fall detection services to connect to AAL platforms.

CapFloor - A Flexible Capacitive Indoor Localization System

Indoor localization is an important part of integrated AAL solutions providing continuous services to elderly persons. Such systems are able to fulfill multiple purposes, ranging from energy saving or location-based reminders to burglary detection. Particularly useful are combined systems that include localization as well as additional services, e.g. fall detection. Capacitive sensing systems that allow detecting the presence of a body over distance are a possible solution for indoor localization that has been used in the past. However, the installation requirements are usually high and consequently such systems are expensive to integrate. We propose a flexible, integrated solution based on affordable, open-source hardware that allows indoor localization and fall detection, specifically designed for challenges in the context of AAL. The system is composed of sensing mats that can be placed under various types of floor covering and that wirelessly transmit data to a central platform providing localization and fall detection services to connected AAL platforms. The system was evaluated non-competitively in the 2011 EvAAL indoor localization competition.

Dynamic User Representation in Video Phone Applications

Video phone applications are growing more commonplace with integration into mobile smart phone platforms like Apple iOS or into online social networks like Facebook. However, users may not wish to reveal their present mood or dishevelled appearance while still wanting to use such applications. Virtual user representations are an option to hide the actual appearance while still participating in video phone calls. This paper discusses different approaches to using virtual characters in video phone applications, dynamic self-representation, and user interface considerations.

Empowering and Integrating Senior Citizens With Virtual Coaching

With Europe's aging population and an increasing number of older people living alone or geographically distant from kin, loneliness is turning into a prevalent issue. This might involve deleterious consequences for both the older person and society, such as depression and increased use of healthcare services. Virtual coaches that act both as a friend in a para-social relationship and as a mentor helping the elderly end-user create meaningful relationships in their actual social environment are a powerful method to overcome loneliness and increase quality of life in the elderly population. The AAL Joint Programme projects A²E² (AAL-2008-1-071) and V2me (AAL-2009-2-107) are exploring virtual coaches and their application in AAL scenarios, including the use of user avatars, virtual self-representations that allow the user to be represented in communication scenarios. Other European research projects that focus on social integration of the elderly are e.g. ALICE (AAL-2009-2-091) or WeCare (AAL-2009-2-026). Outside the European Union, the negative implications of population aging can be observed in Japan, which has an even larger proportion of senior citizens and uses individual-centred devices, such as robot pets [1], to improve the quality of life of lonely elderly persons. The user groups involved are often not acquainted with modern ICT systems, and it is therefore a challenge to create intuitive, adaptive platforms that cater to individual needs and allow the user to interact easily.

Honeyfish - a High Resolution Gesture Recognition System Based on Capacitive Proximity Sensing

The recognition of gestures in free space using sensors that determine the proximity of a body mass based on electric field variance is a challenging research topic. Arrays of such capacitive proximity sensors allow the creation of novel user interaction systems based on recognizing presence and position of body parts and inferring performed gestures in three dimensions from that information. These systems may be incorporated as unobtrusive remote controls in home automation scenarios or automotive applications, for example as a smart car dashboard. Present systems that use time-division multiplexing have limitations in their temporal and spatial resolution. In this paper we present the Honeyfish - a novel capacitive proximity sensing system that uses a combination of frequency-division and time-division multiplexing, improving both temporal and spatial resolution. With this affordable system we are able to detect fast multi-hand gestures in three dimensions above large surface areas.

Multi-hand Interaction Using Custom Capacitive Proximity Sensors

2012

Darmstadt, TU, Master Thesis, 2012

The recognition of gestures in free space using sensors that determine the proximity of a body mass based on electric field variance is a challenging research topic. Arrays of such capacitive proximity sensors allow the creation of novel user interaction systems based on recognizing presence and position of body parts and inferring performed gestures in three dimensions. These systems may be incorporated as unobtrusive remote controls in home automation scenarios or automotive applications, for example as a smart car dashboard. Present systems that use time-division multiplexing have limitations in their temporal and spatial resolution. In this thesis a novel capacitive proximity sensor system that uses a combination of frequency-division and time-division multiplexing is presented, improving both temporal and spatial resolution. With this affordable system one is able to detect fast multi-hand gestures in three dimensions above large surface areas. Moreover, a method for object recognition using capacitive proximity sensors is extended and refined, and a new object tracking method employing particle filters is presented. A user evaluation emphasizes the feasibility of the presented capacitive sensing system as an explicit interaction modality.
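The particle-filter tracking idea mentioned in the thesis can be sketched in one dimension: diffuse a particle cloud with a motion model, weight the particles by the likelihood of the latest sensor estimate, and resample. The motion and measurement models below are illustrative stand-ins, not the thesis implementation:

```python
# Minimal 1D particle filter tracking a hand coordinate from noisy
# position estimates (all noise parameters are illustrative).
import random, math

random.seed(0)
N = 500
particles = [random.uniform(0.0, 1.0) for _ in range(N)]  # uniform prior

def step(particles, z, motion_std=0.05, meas_std=0.1):
    # Predict: diffuse particles with a random-walk motion model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Update: weight each particle by the Gaussian measurement likelihood.
    weights = [math.exp(-0.5 * ((p - z) / meas_std) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    return random.choices(particles, weights=weights, k=len(particles))

for z in (0.2, 0.25, 0.3, 0.35):        # noisy observations of a moving hand
    particles = step(particles, z)

estimate = sum(particles) / len(particles)  # posterior mean position
print(round(estimate, 2))
```

Extending this to 3D multi-hand tracking mainly means a richer state vector per particle and a measurement model derived from the capacitive sensor array.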

Session C1: Taking Part in the Self-service Society

2012

Partnerships for Social Innovation in Europe

AAL Forum <3, 2011, Lecce, Italy>

The main focus of this session is on the socio-technological challenges of the self-service society. Taking an active part in the self-service society should be encouraged for as long as possible for the whole population as they age or become unwell. Support is needed since the ICT-based self-service society presents problems, in particular for older people with physical or cognitive impairments or little or no familiarity with technologies. Solutions which increase independence and efficiency for experienced technology users may inadvertently threaten others with exclusion and loss of independence. Therefore, there is an emerging need to stimulate and support the capacities required for more inclusive or pervasive participation (e.g. mobility, physical, and cognitive). This session will give an overview of the topic but will also point out the difficulties and problems actual projects have encountered.

Session C3: Self-service in Daily Living

2012

Partnerships for Social Innovation in Europe

AAL Forum <3, 2011, Lecce, Italy>

The main goal of the session 'Self-service in daily living' is to enable support and its customization to meet individual needs across the whole service chain, comprising different providers, channels, methods and market segments. This session focuses on the integration of new ICT-based solutions available from existing service providers, channels or market segments that can be adapted to meet seniors' needs. A very important aspect here is the ease of use of services. There is also an urgent need to make services easily accessible to the elderly.

User Requirements in ICT-based Social Media Use: Acceptance of a Virtual Coach

2012

Partnerships for Social Innovation in Europe

AAL Forum <3, 2011, Lecce, Italy>

The AAL JP project V2me ("Virtual Coach reaches out to me") aims at increasing the quality of the social network in old age, thereby providing an opportunity to increase well-being and alleviate loneliness. In this contribution, the results of two empirical studies are presented that (1) aim to gain knowledge about user requirements and (2) offer first results on the usability and user acceptance of the prototype version of the V2me system.

V2me: Evaluating the First Steps in Mobile Friendship Coaching

2012

Journal of Ambient Intelligence and Smart Environments

Life events, such as retirement or being widowed, can change the social circle of older people considerably. It may be difficult to find new social contacts when one has never got used to, or perhaps never learnt, seeking and maintaining such contacts. Loneliness has many negative effects on well-being, including depression and even cardiovascular disease. The Ambient Assisted Living Joint Programme (AAL-JP) research project V2me seeks to find a solution for alleviating loneliness by means of easy-to-use technology including touch screen devices. The idea is to use a virtual coach for encouraging users to take an active role in contacting people and teaching them how to initiate and maintain meaningful and enduring relations. The first step in the process of creating the complete virtual coach-assisted system for preventing loneliness is to create a prototype and use the feedback from older users for developing the system. In this paper we discuss the results of the first pilot and what steps need to be taken next.

Visual Support System for Selecting Reactive Elements in Intelligent Environments

Concerning gestural interaction in realistic environments, there often is an offset between perceived and actual direction of pointing that makes it difficult to reliably select elements in the environment. This work presents a visual support system that provides feedback to a user gesturing freely in an environment, thus enabling reliable selection of and interaction with reactive elements in intelligent environments. A prototype has been created that showcases this feedback method, based on gesture recognition using the Microsoft Kinect and feedback provision using a custom laser robot. Finally, an evaluation has been performed in order to assess the efficiency of such a system, acquire usability feedback, and determine potential learning effects for gesture-based interaction.

Since the vision of the vanishing, ubiquitous computer was formulated in the 1990s, Intelligent Environments have become the main topic of many research efforts. Interaction with Intelligent Environments preferably follows the multi-modal interaction paradigm, as in the notable research on natural interaction that allows communication through facial expressions, voice commands, and gestures. Gestural interaction in terms of pointing for selection is the main focus of this thesis. Although regarded as intuitive for the user, pointing leads to a significant offset between the user's intention and the system's interpretation. This offset makes interaction with reactive elements in Intelligent Environments unintuitive and hardly predictable if no guidance is provided to the user. This thesis presents the challenges of the pointing-for-selection process, including the drawbacks of current guiding systems, and proposes a concept for solving these challenges with a ubiquitous visual guiding system. This system supports marker-free, full-body gestural interaction in Intelligent Environments by providing a visual cue at the location the user is currently pointing at. We expect this system to place users in a situation where they are able to correct their pointing themselves, without extensive training of user or machine. This results in a more accurate and intuitive selection of reactive elements in Intelligent Environments. A prototype system - the E.A.G.L.E. - was built to realize this concept using a robotic laser pointing system. A comparative evaluation with a group of 20 subjects was performed to confirm our expectations regarding the intention-to-interpretation offset and the effects of the self-correction process caused by the visual cue, resulting in a significant gain in accuracy.

This work presents our recently developed solution to the challenge of nutrition and food intake supervision. We give an overview of the system and the mechanisms implemented to aid users in supervising and improving their eating habits, and show the features that may be useful for persons who want to analyze and improve those habits. To this end, our system provides a cooking advisor that recognizes the available food and, based on it, presents the user a list of recipes that fit the available ingredients as well as the user's nutritional needs; the user may also set additional filter parameters. Additionally, the cooked menus are logged by the system and may be subject to further analysis. To determine the available ingredients, our system uses RFID technology and also provides the user community-like features for submitting new recipes or new ingredients.

Classification of User Postures with Capacitive Proximity Sensors in AAL-Environments

In Ambient Assisted Living (AAL), the context-dependent adaptation of a system to a person's needs is of particular interest. In the living area, a fine-grained context may contain not only information about the occupancy of certain furniture, but also the posture of a user on the occupied furniture. This information is useful in the application area of home automation, where, for example, a lying user may trigger a different system reaction than a sitting user. In this paper, we present an approach for determining contextual information from furniture, using capacitive proximity sensors. Moreover, we evaluate the performance of Naive Bayes classifiers, decision trees and radial basis function networks for the classification of user postures. To this end, we use our generic classification framework to visualize, train and evaluate postures with up to two persons on a couch. Based on a data set collected from multiple users, we show that this approach is robust and suitable for real-time classification.
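The Naive Bayes variant among the evaluated classifiers can be sketched on synthetic data: two capacitive features (say, a seat and a backrest signal) and two posture classes, modeled with per-class Gaussian feature distributions. Features, values, and classes below are illustrative, not the paper's data set:

```python
# Gaussian naive Bayes sketch on synthetic capacitive readings
# (seat signal, backrest signal) for "sitting" vs "lying" postures.
import math

train = {
    "sitting": [(0.8, 0.7), (0.9, 0.6), (0.7, 0.8)],
    "lying":   [(0.9, 0.1), (0.8, 0.2), (1.0, 0.1)],
}

def fit(train):
    # Per class, estimate mean and variance of each feature.
    stats = {}
    for label, samples in train.items():
        n = len(samples)
        means = [sum(s[i] for s in samples) / n for i in range(2)]
        vars_ = [max(sum((s[i] - means[i]) ** 2 for s in samples) / n, 1e-3)
                 for i in range(2)]  # variance floor for numerical stability
        stats[label] = (means, vars_)
    return stats

def predict(stats, x):
    # Maximum log-likelihood under the independent-Gaussian assumption.
    def log_lik(label):
        means, vars_ = stats[label]
        return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                   for xi, m, v in zip(x, means, vars_))
    return max(stats, key=log_lik)

stats = fit(train)
print(predict(stats, (0.85, 0.75)))  # strong backrest signal → sitting
```

The paper's actual classifiers are trained on real multi-user sensor data; the sketch only illustrates why a high backrest reading separates sitting from lying.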

The recent success of Nintendo's Wii and multi-touch input devices like the Apple iPhone clearly shows that people are more willing to accept new input device technologies based on intuitive forms of interaction. Gesture-based input is thus becoming important and even relevant in specific application scenarios. A sensor type especially suited for natural gesture recognition is the capacitive proximity sensor, which allows the detection of objects without any physical contact. In this paper we extend the input device taxonomy by Card et al. to include this detector category and allow modeling of devices based on advanced sensor units that involve data processing. We have created a prototype based on this modeling and evaluated its use in several application scenarios where such a device might be useful. The focus of this evaluation was to determine the suitability of the device for different interaction paradigms.

Interactive Personalization of Ambient Assisted Living Environments

2011

Human Interface and the Management of Information: Part I

Symposium on Human Interface <2011, Orlando, FL, USA>

Ambient Assisted Living (AAL) comprises methods, systems, and services applied to improve the quality of daily life for humans, especially elderly people. Recent research emphasizes the implementation of comprehensive AAL platforms which control all technological components included in the entire environment, such as one's apartment. The behavior of the system is often determined by a specific set of rules. Thus, personalization according to the person's needs and preferences includes a configuration of the given rule system. Assuming that configuration is conducted not only by technical staff but also by the person him- or herself, this process can be regarded as complex, requiring technical knowledge. In this work, we present an interactive and architectural approach to support the personalization of an AAL system by different types of users.

Passive Identification and Control of Arbitrary Devices in Smart Environments

Modern smart environments are comprised of multiple interconnected appliances controlled by a central system. Pointing at devices in order to control them is an intuitive way of interaction, often unconsciously performed when switching TV stations with an infrared remote, even though it is usually not required. However, only a limited number of devices have the required facilities for this kind of interaction, since it requires attaching transceivers and often results in the necessity of multiple remote controls. We propose a system giving a user the ability to intuitively control arbitrary devices in smart environments by identifying the appliance an interaction device is pointed at and providing means to manipulate it. The system is based on identifying the position and orientation of said interaction device and registering these values to a virtual representation of the physical environment, which is used to identify the selected appliance. We have created a prototype interaction device that manipulates the environment using gesture-based interaction.
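The core selection step described above, mapping a device pose onto a virtual room model, amounts to a ray query: cast a ray from the device position along its pointing direction and pick the first appliance volume it hits. A sketch with bounding spheres and an illustrative room layout (the paper does not specify its geometric representation):

```python
# Sketch of pose-to-appliance selection: intersect the pointing ray with
# each appliance's bounding sphere and return the nearest hit.
import math

appliances = {
    "tv":   ((0.0, 2.0, 1.0), 0.5),   # (center in meters, sphere radius)
    "lamp": ((2.0, 2.0, 1.5), 0.3),
}

def select(origin, direction, appliances):
    d = math.sqrt(sum(c * c for c in direction))
    direction = tuple(c / d for c in direction)        # normalize the ray
    best, best_t = None, math.inf
    for name, (center, radius) in appliances.items():
        oc = tuple(o - c for o, c in zip(origin, center))
        b = sum(di * oi for di, oi in zip(direction, oc))
        disc = b * b - (sum(oi * oi for oi in oc) - radius * radius)
        if disc >= 0:                                  # ray meets the sphere
            t = -b - math.sqrt(disc)                   # nearest intersection
            if 0 <= t < best_t:
                best, best_t = name, t
    return best

print(select(origin=(0.0, 0.0, 1.0), direction=(0.0, 1.0, 0.0),
             appliances=appliances))  # pointing straight ahead → tv
```

Bounding spheres keep the sketch short; a real virtual representation would more likely use boxes or meshes, and the perceived-versus-actual pointing offset discussed elsewhere in this list is exactly the error such a query inherits.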

Virtual Coach Reaches Out to me: The V2me-Project

2011

Ercim News

The V2me project combines real life and virtual social network elements to prevent and overcome loneliness in Europe's aging population. Its overall goal is to enhance the joy of living for the network members. To fulfil this goal, V2me supports active ageing by improving integration into society through the provision of advanced social connectedness and social network services and activities.

Within the last few years the market for input devices has seen a considerable shift towards novel technologies using advanced sensor units to register and interpret human behavior, examples being the gaming console and mobile device markets. Capacitive proximity sensors are devices that allow detecting the presence of a human body without physical contact, making them especially suited for unobtrusive applications. This thesis presents methods and algorithms to model input devices using data generated by a network of wireless capacitive proximity sensors. Furthermore, several input devices have been built and evaluated across several interaction techniques with the help of specifically implemented graphical applications. These devices focus on natural interaction, providing several usage scenarios within an ambient assisted living context.

In this paper we present a novel technique for integration in assistive environments, using capacitive proximity sensing to detect the presence of a human body, thus creating a medium for natural, deliberate or unaware interaction. As the world's population ages, we witness a growing number of health-related issues and the need to simplify interaction with technologies that are getting ever more complex. We present hardware and software prototype implementations of this versatile, low-cost technology, which can be easily and unobtrusively integrated into ambient assisted living environments.