dwellSense (Embedded Assessment)

October 01, 2012

More than 200 people signed up for last week’s webinar, “From
the iPhone to the EMR: Can patients’ personal health data help improve their
clinical care?” to learn about the latest work by Project HealthDesign’s grantee
teams. Co-hosted by Stephen Downs of the
Robert Wood Johnson Foundation (RWJF) and Patricia Flatley Brennan of the
National Program Office, the webinar set a wider lens on the growing interest in
weaving patient-generated data into the matrix of clinical health care.

Steve Downs explained that as part of RWJF’s Pioneer
Portfolio, Project HealthDesign was created to stimulate innovation in the area
of personal health records. “We laid out a vision where your medical record would serve as a platform,” he explained, “and
then lots of third-party apps could be tailored to your specific needs.” The project’s five research teams took that
vision and created applications tailored for patients with chronic conditions, from
asthma to obesity to Alzheimer’s disease.

Project director Patti Brennan described what might be the
most significant finding to arise from the project: the discovery that information
from patients’ daily lives, data about things like diet, mood and stress level,
can play an important role in managing health care. These “Observations of Daily Living,” or
ODLs, became central to the grantees’ research projects and represent a significant
new category of patient-generated data. Brennan
also introduced a short video in which members of each project team illustrated their
projects’ findings and challenges.

Project investigator Katherine Kim described how her team’s
iN Touch application helped young people struggling with obesity to become more
engaged in managing their health and lifestyle.
Dr. Stephen Rothemich outlined findings
of his team’s BreathEasy project, showing how ODLs can provide clinically
useful information and in some cases can lead to changes in diagnosis or
therapy.

Finally, National Advisory Council member Dr. Michael
Christopher Gibbons spoke eloquently of the critical role personal health
records can play in a rapidly evolving society, where new approaches are needed
to tend to the needs of a growing population of seniors, minorities, immigrants
and the underserved.

If you registered for the webinar but weren’t able to view
it, follow this link and log in with the email address used at registration.

April 03, 2012

As mentioned in our previous blog post, each of our study participants has a different set of clinicians who provide care for them. At our request, our participants took out their address books to look up their doctor's information and shared it with us. Now we had the daunting task of trying to convince time-strapped physicians to spend 30-60 minutes with us to review the patient-generated data from the dwellSense system.

Our recruitment approach began with sending a letter to the physicians' offices that described the study, said it involved one of their patients, and emphasized how valuable their contributions would be to our study. We didn't hesitate to use whatever persuasive strategies we could think of, including name-dropping and some gentle groveling. We made sure to mention that the research was funded by the Robert Wood Johnson Foundation, a name that undoubtedly carried a lot of weight with clinicians. We also used the fact that I’m a student working to complete my Ph.D. dissertation. The letter asked the clinician to agree to an interview session not only to support the study, but also to support my dissertation. People are often willing to help students with their work in ways that they might not help a corporation or some other faceless entity. Fortunately, I didn't mind casting myself in the role of a struggling graduate student for the sake of science.

After we sent these letters to the clinicians’ offices, we waited. Fortunately, because many of the physicians were also part of the University of Pittsburgh Medical Center, we were able to find their email addresses on the University of Pittsburgh website and send the letter to them via email. To our surprise, we received responses from four of the primary care physicians we targeted within a week of sending the letter and/or email.

For those we didn't hear back from, we followed up with a phone call. Some offices had received the letter, but others reported that it was lost in the sea of paperwork that flows in and out of the office. Many asked to have the letter sent again by fax, a 1980s-era technology apparently still widely used in doctors’ offices. After faxing the letters, we followed up with the offices to make sure they had received them and that we were on the radar of the physician. We found that front-office staff were usually quite accommodating of our requests to make sure the letters (and presumably other paperwork) got into the hands of the physicians.

Thus far, we have been able to interview five physicians, and we still have approximately seven physicians to follow up with to see if they would like to participate. Overall, we have been delighted with the responses so far. Reflecting on the reasons for our success, we have found that certain physicians were genuinely interested in the idea of patient-generated data, in particular data about medication taking, and how it could provide a window into the everyday functioning of one of their patients (with multiple chronic conditions). We also think that leveraging the fact that this work is part of a Ph.D. dissertation, and giving busy physicians a chance to help a student, was another reason they were willing to make time for us. Stay tuned for a follow-up post about some of the interesting findings from our interviews with physicians.

March 27, 2012

In our evaluation of dwellSense, each study participant had a different primary care provider. We wanted to engage with each one of these physicians to identify how they would use the patient-generated data about medication taking, phone use, and coffee making. Unlike many of the other current Project HealthDesign teams that had one or two clinicians on their team who provided direct care for the participants, we had to reach out to the clinicians who were in our participants’ existing care networks.

The first step was convincing participants to allow us to speak to their health care providers (PCPs, specialists, nurses, therapists, etc.). Because they felt the data we were collecting about their observations of daily living were valuable for their doctors to have, it wasn't a stretch for them to give us permission. Our patients generally had very trusting and positive relationships with their health care providers. From this experience, we expect that a technological solution or service that shares data about ODLs or activities of daily living (ADLs) with clinicians would not normally be met with much opposition.

However, participants were a little tentative when we asked them to sign a release form authorizing their health care providers to speak to us, the research team members, about their medical conditions when we evaluated how care providers utilized the data about ODLs in their workflow. At this point, we reassured them that the point of the study wasn't to dig up all the details of their medical history from their care providers, but rather to give permission to the care providers to bring up any details that were relevant for evaluating the usefulness of patient-generated data in their workflow. All participants trusted us to keep matters confidential, as we were bound by university IRB confidentiality rules. Furthermore, all participants were open in sharing the details of their medical history and conditions with us as part of the study, so it was unlikely we would discover any secrets when speaking with their care providers.

Extrapolating from our experiences and applying them to how a third-party technology or service might share patient-generated data and collect medical details from the health care provider, we envision that these systems should limit the additional information they collect from health care providers, or at least allow the patient to control what information their health care providers can disclose to the system. Integrating patient-generated data about ODLs and ADLs directly into EHRs may work around this issue, but the patient-generated data, visualizations, and analytics may still reside with a third party. Any information about how the patient-generated data are used or accessed by the care provider may reveal sensitive patient information. Thus, we are learning that the information flows among patients, care providers, and third-party services should be an important part of designing a usable system for sharing and using patient-generated data about ODLs and ADLs.

March 06, 2012

Long-term deployments of technology give researchers an opportunity to understand how technology can be embedded and incorporated into people's lives, particularly since observations of daily living (ODLs) naturally occur in people's everyday lives. However, long-term deployments are not without their hazards. One of these hazards is that the longer the study, the more likely it is that things (both good and bad) will happen to affect the data collection and even the integrity of the ODL data. I'd like to detail a couple of issues that recently complicated our deployment: Wi-Fi coverage and residents moving apartments.

In our evaluation of the dwellSense system, we have deployed a suite of sensors in 12 apartments in a senior high-rise apartment building. For the last seven months, these sensors have captured information about participants’ daily activities such as medication taking, phone use, and meal preparation. To provide the connectivity we needed to collect the data from our sensors, we set up a Wi-Fi network that covered the apartments in our study. Initially, this was a challenge because 1) participants lived in different parts of the building and 2) the building’s dated architecture and thick walls reduced Wi-Fi coverage. During the initial deployment, we solved this coverage issue by installing wireless routers that piggybacked on existing broadband service in two of our participants’ homes. This was enough to cover three floors of the building where many of the study participants lived. All was well for about seven months.

At the beginning of the eighth (and final) month of the evaluation study, we learned that the 90-year-old, historic building was finally going to be renovated. Of our 12 study participants, three had to move apartments. Fortunately for them, they each moved to a beautiful corner apartment with a view of the city. Unfortunately for us, the Wi-Fi network and router we had installed in one of their apartments no longer reached any of the other residents.

The first solution we tried was to install a Wi-Fi repeater to increase the range of our network. This didn't work: the repeater connected well with other routers, but simply would not connect to ours, even after many frustrating hours of rebooting and reconfiguring both devices. And even if we had gotten them to connect, the repeated Wi-Fi signal was weak at best when extended two floors down to where we needed it.

The second solution we tried was to reinstate a Wi-Fi network in the location where it had been before the move. Fortunately, one of our study residents lived in the center of where we needed Wi-Fi coverage. We asked him if we could pay the additional cost to add broadband Internet service to his already extensive (and expensive) cable television service. He agreed, so we ordered the new service for his account, picked up the equipment from Comcast, and installed the cable modem and a new router in his home. All was well—the dwellSense system in his apartment was once again connected to Wi-Fi and uploading data. However, even though the router was just a few feet from its original position, the Wi-Fi coverage no longer reached all the study participants it had once covered. After countless visits to apartments, only to find computers struggling to connect to a signal that teetered between "weak" and "no signal," we realized that we needed to boost the signal with the Wi-Fi repeater. Finally, after installing two additional Wi-Fi repeaters, we were able to get everyone back on Wi-Fi for the final month of the study. All told, this took nearly a week and a half of trial-and-error debugging.

This Wi-Fi-induced nightmare should be a lesson that evaluating systems in the field can be difficult, particularly if the evaluation lasts a long time. The difficulties are twofold: 1) the unpredictability of people's lives (e.g., people had to move apartments) and 2) debugging IT infrastructure that is mostly invisible (e.g., it's hard to know how far Wi-Fi signals will reach except by trial and error). Still, for understanding how technology fits into people’s lives, long-term evaluations and field deployments remain the best way to get real, valid data.

January 19, 2012

Throughout our project, we have had issues with recruiting subjects for our study. Our goal was to recruit 30 elders, along with their caregivers and doctors. However, we have struggled to recruit even 20 subjects. There are a number of possible reasons for this: some stem from our recruiting not being targeted enough, and some from the many constraints our study imposes. To perform our study, we require a great deal of access to subjects and their apartments in order to perform assessments, to regularly check the state of the display and sensors, and to replace batteries.

From a time and manpower perspective, we needed to recruit subjects within a small geographic area. Otherwise, the commute time to and from our participants' apartments would have been unmanageable. So, we made a choice to identify a small number of locations from which to recruit subjects. That drastically limited our subject pool, but having subjects spread around the city would have reduced the number of subjects we could have actually worked with, too. It was not easy to identify the potential locations or potential subject populations centered in an area, particularly because we weren't looking for individuals with particular disabilities or diseases. Instead, we were looking for elders who were "at risk" but not necessarily exhibiting signs of cognitive decline.

Once we found suitable locations with large potential subject pools, we faced other difficulties in recruiting. One issue was the amount of time we wanted participants to commit to being part of our study. Because we expect the personal health information we collect via sensors to provide value over time rather than immediately, and because we wanted enough time for participants to get over the novelty factor of our embedded assessment appliances and displays, we needed participants to commit for several months. Many potential subjects had a hard time making this long-term commitment.

We also wanted to maximize the number of assessments we could make for each participant. This meant we were looking for subjects who regularly perform the three activities we monitor: coffee making, making and receiving phone calls, and taking medicine. Some potential subjects didn't work out because they drank tea instead of coffee, or made instant coffee rather than using a coffee maker. These habits, though perfectly reasonable, made them unsuitable for our study. Others had digital phone lines that were not compatible with our phone sensing hardware, which works only with analog lines. Furthermore, some took pills straight from the original pill bottles rather than from pill boxes. We were afraid that changing someone's habits (from pill bottles to pill boxes) would alter their behavior too much and introduce errors in actions, so we removed these individuals from the potential subject pool. In the end, we chose to accept subjects if they used pill boxes and satisfied the requirement for either coffee making or phone use.
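Our final eligibility rule boils down to a simple predicate. The sketch below is an illustration of that logic only; the function and parameter names are ours, not part of any actual screening instrument we used.

```python
# Hypothetical sketch of our screening rule; names are illustrative.
def is_eligible(uses_pillbox, makes_drip_coffee, has_analog_phone):
    """A candidate must use a pillbox, plus at least one of the
    other two monitored activities (coffee making or phone use)."""
    return uses_pillbox and (makes_drip_coffee or has_analog_phone)

# A tea drinker with an analog phone who uses a pillbox still qualifies:
print(is_eligible(True, False, True))   # True
# Someone who takes pills straight from bottles does not:
print(is_eligible(False, True, True))   # False
```

Making pillbox use mandatory reflects that medication taking was the assessment we could not do without, while the other two activities were substitutable.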

Further complications arose from the nature of the research itself. Some subjects declined to participate when they heard our description of the study. When we asked why, it became clear that participating in such a study, and using a system that tracks and represents health information, served as another reminder of their declining physical and/or cognitive health. The last thing they wanted was a "constant" notification of how they were performing everyday activities.

We are continuing to recruit subjects, but this is a slower process than we had expected. We’ve definitely learned a lot about recruiting for this type of study.

November 29, 2011

During an earlier phase of our research, we identified the prospective information needs of our various stakeholders: elders, caregivers, occupational therapists and primary care physicians. In particular, our elders were quite interested in real-time or daily information about their performance of their everyday activities, but didn’t find a lot of value in long-term data or trends. They weren’t opposed to the long-term data, but they really wanted to have the daily information in order to see if they had taken their medication, and to see how well they were doing each day.

We designed the visualization below to run as an application on a Samsung Galaxy 10.1” tablet. The visualization has three horizontal panels, one for each of the ODLs that our sensing systems are tracking: medication taking, making phone calls and coffee-making. The large type size, the colors used and the simple design were purposefully chosen to make it easy for the elders to consume the information.

For medication tracking, we present information about morning and evening pills. In the visualization below, you can see that the subject took their morning medicine on time at 5:07 a.m., but had difficulty determining the right day of the week, having opened the Sunday and Monday doors before correctly opening the Tuesday door. The subject missed the evening pill completely. The two bar charts on the right indicate how timely the subject was in taking their medicine and how well they performed the task. Because the subject missed their evening medicine, the timeliness bar chart is red. The other chart is yellow to indicate their difficulty in selecting the correct pillbox door. These bar charts are intended to serve as quick abstractions that make the medicine-taking information easy to consume.

For the phone calls, the visualization indicates the number of calls made, along with a bar chart that summarizes how well the user performed this task.

For coffee-making, the visualization indicates whether the subject made coffee correctly. In particular, it shows whether the subject performed each of four key coffee-making tasks correctly: getting water, adding the coffee, placing the carafe in the coffee maker and starting the machine. In this case, the subject performed all four tasks correctly, which is what the associated bar chart indicates.
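The day described above, morning pill taken after two wrong doors and the evening pill missed, maps naturally onto a small status function. The sketch below is an illustration of that mapping, not the actual dwellSense code; the function name, parameters, and color thresholds are our assumptions.

```python
# Illustrative sketch (not actual dwellSense code) of how one day's
# medication panel might be summarized into two status colors.
def med_status(doses_taken, doses_expected, wrong_doors_opened):
    """Return (timeliness_color, performance_color) for one day."""
    # Timeliness: red if any dose was missed, green otherwise.
    timeliness = "green" if doses_taken == doses_expected else "red"
    # Performance: yellow if the subject fumbled (opened wrong
    # pillbox doors first), green if the task went cleanly.
    performance = "green" if wrong_doors_opened == 0 else "yellow"
    return timeliness, performance

# The example day: one of two doses taken, two wrong doors opened.
print(med_status(doses_taken=1, doses_expected=2, wrong_doors_opened=2))
# -> ('red', 'yellow')
```

Collapsing the raw events into a color this way is what lets the bar charts act as at-a-glance abstractions while the detailed timeline remains available in the panel itself.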

We are still going to provide a long-term visualization to some of our subjects to see what value it can provide, despite their seeming lack of interest. This will be a more traditional line graph, with time on the horizontal axis, and the quality of the performance of the task on the vertical axis. Subjects will be able to click on any of the data points to receive the regular daily view for the chosen day. We expect that by providing both kinds of views, we can improve subjects’ awareness of their own performance, and that this will cause them to engage others in discussions (e.g., caregivers, occupational therapists, doctors) about their abilities, with the goal of improving their performance over time.

November 24, 2011

One idea that pervades each of the Project HealthDesign studies is that computing and advanced technology have the potential to change both people’s impressions of their health and their actual health. At least, that’s the promise. One of the challenges is that these positive changes often occur over a long period of time, and usually require individuals to integrate a new technology into their lives.

In human-computer interaction, the research field I call home, most of the studies of technologies for behavior change occur over a few weeks, with only a few lasting more than a month. The difficulties in running a longer study are well-documented: recruitment, attrition of participants, maintaining ecological validity while keeping the study as “clean” as possible, researcher fatigue, analysis strategies, funding, etc. Additionally, researchers are able to conduct research and get papers published without conducting such long studies.

The benefits too are well-documented: a long-term study may be the only reasonable and ecologically valid way to study the impact of a technological intervention, novelty effects of introduced technologies are minimized, there’s more time to test out technology in the field and to evolve it before needing to collect data, it gives time for users to feel comfortable with researchers and introduced technologies, and it provides an opportunity to study phenomena that occur over a long period of time.

In our project, we have the opportunity to conduct a long-term study with a few of the early-stage users of our technology. We reported on their use of visualizations of their activities after they had used our embedded assessment system for four months. Our subjects have now used our system for about a year, and we are planning a follow-up study to understand 1) whether the problems they noticed in their performance of everyday activities caused them to change their behavior to improve, 2) whether repeated but infrequent reflection on their data helps them understand their performance data better than a one-time reflection does, and 3) whether our users still find value in the embedded assessment system now.

Although short-term studies of such technologies are useful and common, truly understanding how performance of activities changes over time requires a long-term study.

November 18, 2011

I am visiting Southern India for six months — I came in June and will leave in late December. In keeping with my work on dwellSense, I have been looking for opportunities in India where advanced technologies, or at least better information, can be used to enhance clinical practice and personal decision-making. Very unscientifically, I have observed a few things that suggest that India is in need of such technology.

One observation is the huge number of advertisements for help for people with diabetes. Apparently, adult-onset diabetes has become a common issue as India’s growing middle class takes on more of a Western lifestyle and diet. As my cardiologist constantly tells me, “Indian bodies were not made to handle Western diets.”

Another issue is the huge number of people living below the poverty line. I don’t have exact numbers, but it is close to half a billion people, even with the poverty line set quite low (daily per person: 31 Indian rupees in urban areas, about $0.62 U.S., and 25 rupees in rural areas, about $0.50 U.S.). Many of those individuals can’t afford medical care, nor can they take the time to manage their health issues unless an acute problem develops. Therefore, the ability to collect observations of daily living (ODLs) in an unobtrusive, low-cost manner seems crucial.

A third observation is that although all doctors rely on self-reports from patients, homeopathic and ayurvedic doctors here seem to rely only on these reports and what they can observe during a visit. Rarely, if ever, are tests conducted to confirm or substantiate the patients’ self-reports. This calls for the collection of data that could help the diagnostic process.

The main issues here are time, cost and literacy. What’s needed is thinking about how to make technologies that require little manual effort, are cheap, and don’t require a literate or trained population. These requirements might apply to a number of other problems on the subcontinent, but they certainly apply here.

November 04, 2011

For a couple of years, we have been calling our project Embedded Assessment, based on a really nice paper by Margie Morris. In that paper, Morris and colleagues defined embedded assessment as systems that serve monitoring, prevention and compensation purposes; are personalized to a user and embedded in a user’s everyday environment; and monitor health status and look for meaningful patterns that can inform health-related decision-making. Although this definition still holds for our project, we have decided to change the project name to dwellSense.

There are a few reasons for the name change. First, embedded assessment describes a category of systems, not just our project; ours is one example of an embedded assessment system. Second, “Embedded Assessment” was itself a shortened version of a longer name. Sometimes we referred to the project as “Embedded Assessment of Aging Adults.” Other times it was “Embedded assessment of wellness with smart home sensors,” or “Task-based embedded assessment of functional abilities for older adults, caregivers, and clinicians.” All of these are accurate, but none rolls off the tongue particularly well and none is particularly memorable.

A couple of months ago, Matthew Lee, our dwellSense lead researcher, came up with the new name, dwellSense. I really like it. It’s short and succinct, and I think it sums up the main ideas behind our project: it includes sensing, focuses on where people live, or dwell, and, as you can see from the logo, has a particular focus on wellness.

As Gillian Hayes wrote in a post about the FitBaby team changing its name to Estrellita, a name is important. Although the new name won’t make a difference to the participants in our study, it will likely impact others who hear about our project, the work we are doing, and the goals we are working toward.

June 15, 2011

One of the many reasons observations of daily living (ODLs) are interesting is that people don’t normally pay much attention to them as they live their everyday lives. If ODLs easily lent themselves to simple mental accounting, we would not need special sensors or mobile applications to log them. Thus, when we present ODL data back to users, they often encounter an odd feeling that mixes familiarity (because, after all, the data show their own behaviors) and unfamiliarity (because they have never seen this particular behavior logged and presented this way). This showcasing of the mundane allows people to explore and attempt to make sense of this new kind of data. Is there a process by which people explore their own ODLs?

We had a chance to understand this process in our pilot study with two elders and four months of sensor data about their medication adherence. We gave each of them a special pillbox that could keep track of when it was picked up and when its doors were opened. When we showed them their data, our participants went through a process that involved three basic steps: 1) looking for anomalies, 2) explaining these anomalies, and 3) confirming their explanations with the details in the data. When first presented with their data, our participants wanted to find the anomalous behavior — in this case, instances of days when they did not take their pills. The visualization we used actually made it rather difficult to find the days where no pill-taking event occurred, but nonetheless our users were persistent in finding the days with missing pills. After finding these anomalies, they wanted to find an explanation for why they might have missed their pills. They first tried to find a benign explanation such as being away on travel, but they also looked for explanations that might show their own forgetfulness and lack of structure in their routines. And finally, after coming up with possible explanations for why they might have missed their pills on one occasion, they were able to use the low-level details in the data to confirm their explanations.
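Step 1 of this process, scanning the log for days with no pill-taking event at all, is easy to sketch in code. The event representation below (a flat list of timestamps) is our assumption for illustration, not the actual dwellSense data schema.

```python
# Illustrative sketch of anomaly-finding (step 1): given timestamped
# pillbox events, list the days in a range with no event at all.
from datetime import date, datetime, timedelta

def missed_days(events, start, end):
    """Return the dates in [start, end] with no recorded pillbox event."""
    days_with_events = {ts.date() for ts in events}
    day, missed = start, []
    while day <= end:
        if day not in days_with_events:
            missed.append(day)
        day += timedelta(days=1)
    return missed

# Pillbox opened on May 2 and May 4, so May 3 is flagged as anomalous:
events = [datetime(2011, 5, 2, 8, 5), datetime(2011, 5, 4, 7, 58)]
print(missed_days(events, date(2011, 5, 2), date(2011, 5, 4)))
# -> [datetime.date(2011, 5, 3)]
```

A visualization that surfaced this list directly would have spared our participants the hunt for missing days; steps 2 and 3 (explaining and confirming) remain human work against the detailed event records.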

We observed one instance with a study participant that illustrates this sense-making process. As she reviewed the data, the participant noticed that she had missed her evening pills on a Friday evening two weeks prior. She thought back and explained that she had visited her nephew that day and had probably taken her evening pills with her, taking them at her nephew’s home. She then looked at the detailed view of the pillbox data for that day and noticed that she had had the door open for 20 seconds in the morning — much longer than normal. She reasoned that she must have been taking her evening pills from that box and filling her travel pillbox in those 20 seconds.

This same sense-making process, which involves identifying anomalies, explaining them and confirming them with other or more detailed streams of data, can also apply to other times when people are exploring their ODLs for the first time. Granted, our example focuses on only one data stream, pillbox activity, but this process may very well be expanded and developed to account for multiple streams of data and different types of ODLs.

For more details on this process of making sense of ODL data, read our CHI 2011 paper.