Abstract

Background: Previous research has shown that hearing aid wearers can successfully self-train their instruments' gain-frequency response and compression parameters in everyday situations. Combining hearing aids with a smartphone introduces additional computing power, memory, and a graphical user interface that may enable greater setting personalization. To explore the benefits of self-training with a smartphone-based hearing system, a parameter space was chosen with four possible combinations of microphone mode (omnidirectional and directional) and noise reduction state (active and off). The baseline for comparison was the "untrained system," that is, the manufacturer's algorithm for automatically selecting microphone mode and noise reduction state based on acoustic environment. The "trained system" first learned each individual's preferences, self-entered via a smartphone in real-world situations, to build a trained model. The system then predicted the optimal setting (among available choices) using an inference engine, which considered the trained model and current context (e.g., sound environment, location, and time).

Purpose: To develop a smartphone-based prototype hearing system that can be trained to learn preferred user settings, and to determine whether user study participants showed a preference for trained over untrained system settings.

Research Design: An experimental within-participants study. Participants used a prototype hearing system, comprising two hearing aids, an Android smartphone, and a body-worn gateway device, for ∼6 weeks.

Study Sample: Sixteen adults with mild-to-moderate sensorineural hearing loss (HL) (ten males, six females; mean age = 55.5 yr). Fifteen had ≥6 mo of experience wearing hearing aids, and 14 had previous experience using smartphones.

Intervention: Participants were fitted and instructed to perform daily comparisons of settings ("listening evaluations") through a smartphone-based software application called Hearing Aid Learning and Inference Controller (HALIC). In the four-week-long training phase, HALIC recorded individual listening preferences along with sensor data from the smartphone, including environmental sound classification, sound level, and location, to build trained models. In the subsequent two-week-long validation phase, participants performed blinded listening evaluations comparing settings predicted by the trained system ("trained settings") to those suggested by the hearing aids' untrained system ("untrained settings").

Data Collection and Analysis: We analyzed data collected on the smartphone and hearing aids during the study. We also obtained audiometric and demographic information.

Results: Overall, the 15 participants with valid data significantly preferred trained settings to untrained settings (paired-samples t test). Seven participants had a significant preference for trained settings, while one had a significant preference for untrained settings (binomial test); the remaining seven had nonsignificant preferences. Pooling data across participants, the proportion of times that each setting was chosen in a given environmental sound class was on average very similar. However, breaking down the data by participant revealed strong and idiosyncratic individual preferences. Fourteen participants reported positive feelings of clarity, competence, and mastery when training via HALIC.

Conclusions: The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids. Individuals who are tech savvy and have milder HL seem well suited to take advantage of the benefits offered by training with a smartphone.
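The abstract does not publish HALIC's inference algorithm. As a minimal sketch of the idea it describes, assume the trained model simply counts, per environmental sound class, how often each of the four microphone-mode/noise-reduction combinations won a listening evaluation, and the inference engine predicts the most frequently preferred setting for the current context (class names, setting labels, and the fallback default below are all illustrative, not from the paper):

```python
from collections import Counter, defaultdict

# The four candidate settings: microphone mode x noise reduction state.
SETTINGS = [("omni", "nr_on"), ("omni", "nr_off"),
            ("directional", "nr_on"), ("directional", "nr_off")]

class PreferenceModel:
    """Toy trained model: counts how often each setting won a
    listening evaluation in each environmental sound class."""

    def __init__(self):
        # sound_class -> Counter mapping setting -> number of wins
        self.wins = defaultdict(Counter)

    def record_evaluation(self, sound_class, preferred_setting):
        """Log one listening evaluation outcome from the training phase."""
        self.wins[sound_class][preferred_setting] += 1

    def infer(self, sound_class):
        """Predict the setting most often preferred in this context;
        fall back to an (assumed) default for an unseen context."""
        counts = self.wins.get(sound_class)
        if not counts:
            return ("omni", "nr_off")
        return counts.most_common(1)[0][0]

model = PreferenceModel()
model.record_evaluation("speech_in_noise", ("directional", "nr_on"))
model.record_evaluation("speech_in_noise", ("directional", "nr_on"))
model.record_evaluation("quiet", ("omni", "nr_off"))
print(model.infer("speech_in_noise"))  # -> ('directional', 'nr_on')
```

A real system would additionally weight the other context signals the paper names (sound level, location, time); this sketch conditions on sound class alone for brevity.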

Abstract

Chronic wounds, including pressure ulcers, compromise the health of 6.5 million Americans and pose an annual estimated burden of $25 billion to the U.S. health care system. When treating chronic wounds, clinicians must use meticulous documentation to determine wound severity and to monitor healing progress over time. Yet current wound documentation practices using digital photography are often cumbersome and labor intensive. The process of transferring photos into Electronic Medical Records (EMRs) requires many steps and can take several days. Newer smartphone- and tablet-based solutions, such as Epic Haiku, have reduced EMR upload time. However, issues still exist involving patient positioning, image-capture technique, and patient identification. In this paper, we present the development and assessment of the SnapCap System for chronic wound photography. By leveraging the sensor capabilities of Google Glass, SnapCap enables hands-free digital image capture and the tagging and transfer of images to a patient's EMR. In a pilot study with wound care nurses at Stanford Hospital (n=16), we (i) examined feature preferences for hands-free digital image capture and documentation, and (ii) compared SnapCap to the state of the art in digital wound care photography, the Epic Haiku application. We used the Wilcoxon signed-ranks test to evaluate differences in mean ranks between preference options. Preferred hands-free navigation features included barcode scanning for patient identification, Z(15) = -3.873, p < 0.001, r = 0.71, and double-blinking to take photographs, Z(13) = -3.606, p < 0.001, r = 0.71. In the comparison between SnapCap and Epic Haiku, the SnapCap System was preferred for sterile image-capture technique, Z(16) = -3.873, p < 0.001, r = 0.68. Responses were divided with respect to image quality and overall ease of use.
The study's results will guide the implementation of new features aimed at enhancing mobile hands-free digital photography for chronic wound care.
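The Z and effect-size values reported above come from the Wilcoxon signed-ranks test. As a minimal illustration of how such a statistic is computed (a plain normal approximation without tie or continuity correction; the paper's exact software and options are not stated, and the sample data below are invented), the test ranks the absolute paired differences and sums the ranks of the positive ones:

```python
import math

def wilcoxon_signed_rank(before, after):
    """Wilcoxon signed-rank W+ statistic with a normal (z)
    approximation and effect size r = |z| / sqrt(n)."""
    # Signed differences; zero differences are dropped by convention.
    diffs = [b - a for b, a in zip(before, after) if b != a]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks for ties.
    abs_sorted = sorted(abs(d) for d in diffs)
    def avg_rank(value):
        first = abs_sorted.index(value) + 1          # 1-based first position
        count = abs_sorted.count(value)              # number of tied values
        return first + (count - 1) / 2               # midrank of the tie group
    w_plus = sum(avg_rank(abs(d)) for d in diffs if d > 0)
    # Normal approximation to the null distribution of W+.
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    r = abs(z) / math.sqrt(n)
    return w_plus, z, r

# Invented paired ratings, for illustration only.
w, z, r = wilcoxon_signed_rank([2, 3, 4, 1], [1, 1, 1, 2])
print(w)  # W+ = 8.5
```

In practice one would use a vetted statistics library (e.g. `scipy.stats.wilcoxon`), which also handles exact p-values for small samples.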

CHARACTERIZING REFLECTIVE PRACTICE IN DESIGN - WHAT ABOUT THOSE IDEAS YOU GET IN THE SHOWER? 18th International Conference on Engineering Design (ICED). Currano, R. M., Steinert, M., Leifer, L. J. DESIGN SOC. 2011: 374–383

UNDERSTANDING IDEALOGGING: THE USE AND PERCEPTION OF LOGBOOKS WITHIN A CAPSTONE ENGINEERING DESIGN COURSE. 17th International Conference on Engineering Design. Currano, R., Leifer, L. DESIGN SOC. 2009: 323–331

Abstract

Stroke is the leading cause of disability among adults in the United States. Behaviors such as learned nonuse hinder hemiplegic stroke survivors from the full use of both arms in activities of daily living. Active force-feedback cues, designed to restrain the use of the less-affected arm, were embedded into a meaningful driving simulation environment to create a robot-assisted therapy device, the driver's simulation environment for arm therapy (SEAT). The study hypothesized that the force-feedback control mode could "motivate" stroke survivors to increase the productive use of their impaired arm during a bilateral steering task, by providing motivating feedback and reinforcement cues to reduce the overuse of the less-affected arm. Experimental results demonstrate that the force cues counteracted the tendency of hemiplegic subjects to produce counterproductive torques only during bilateral steering tasks (p < 0.05) that required the movement of their impaired arm in steering directions up and against gravity. Impaired arm activity was quantified in terms of torques derived from the measured tangential forces on the split steering wheel of driver's SEAT during bilateral steering. Results were verified using surface electromyograms recorded from key muscles in the impaired arm.

Abstract

A desktop vocational assistant robotic workstation was evaluated by 24 high-level quadriplegics from the Palo Alto Veterans Affairs Spinal Cord Injury Center. The system is capable of performing daily living and vocational activities for individuals with high-level quadriplegia via voice control. Subjects were asked to use the robot to perform a repertoire of daily living activities, including preparing a meal and feeding themselves, washing their face, shaving, and brushing teeth. Pre- and post-test questionnaires, interviews, and observer assessments were conducted to determine the quality of the robot performance and the reaction of the disabled users toward this technology. Results of the evaluations were generally positive and demonstrated the usefulness of this technology in assisting high-level quadriplegics to perform daily activities and to gain a modicum of independence and privacy in their lives.

The Talking Glove: an expressive and receptive verbal communication aid for the deaf, deaf-blind and nonvocal. Kramer, J., Leifer, L. 1987

On the Nature of Design and an Environment for Design. In System Design: Behavioral Perspectives on Designers, Tools, and Organizations. Leifer, L. J. Edited by Rouse, W. B., Boff, K. R. North-Holland. 1987: 65–70

A Methodological Approach to Studying Group Design. Tang, J., Leifer, L. 1986

Development of an Advanced Robotic Aid: from Feasibility to Utility. Reprinted from the RESNA'86 Conference on Rehabilitation Engineering in Interactive Robotic Aids - One Option for Independent Living: an International Perspective. Leifer, L., Michalowski, S., Van der Loos, H. F. M. 1986