Five lessons learned from ResApp’s botched trial

Brisbane, Australia-based ResApp intended to have its software, which uses a smartphone microphone to diagnose respiratory conditions based on coughs, well on its way through the FDA by now. But this past summer, the company’s big plans hit a major setback when the data came back from its landmark clinical trial, conducted at such august institutions as Massachusetts General Hospital, Cleveland Clinic, and Texas Children’s Hospital: Thanks to “procedural anomalies,” the company declared the data all but useless.

MobiHealthNews caught up with ResApp CEO Tony Keating in Boston this week, where he had flown in for meetings about the upcoming mulligan of the trial. Keating was happy to share some of the lessons learned from the failed trial in the hopes of keeping other startups from hitting the same roadblocks.

“One of the lessons is to realize that you are doing a unique clinical trial that no one’s ever done before,” he said. “Standard practices are not there. We took a top, world-class CRO to do the trial, thinking this is a standard clinical trial, they should be able to use the processes they already have to do it and it should be great. I think we lost sight of the fact that what we’re doing is very unique and very different.”

The company plans to start recruiting for the new trial next month and to have the data in by summer 2018. In the meantime, here are five lessons ResApp learned, and how the company is correcting for the next go-round.

1. Don’t make assumptions about technological fluency

“In some ways it might have been easier if it had not been a simple smartphone,” Keating said. “If it had been a black box. And I think that they thought it was better than it was in some ways.”

Keating said that many of the nurses who were actually responsible for taking the recordings were generally inexperienced with using smartphones. Others thought the smartphone microphone was much more sensitive than it was and attempted to record patients while the TV was on or in a noisy room.

“So we’ve worked a lot on, rather than a traditional medical device method, which is provide a big user manual and get people to read it, we’re starting to move a lot of that into the app itself,” he said. “So when we’re running this new trial we’ve got checklists that they have to check through before they’re allowed to test. It does a background noise estimation and then it gives them immediate feedback on whether there’s too much noise or it’s ok. Things like that.”

Notably, those checklist features and ambient noise checks will also help to improve the product when it’s deployed commercially.
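The ambient noise gate Keating describes can be sketched as a simple signal-level check: estimate the loudness of a short background recording and give immediate pass/fail feedback before a cough recording is allowed. The threshold, function names, and sample values below are illustrative assumptions, not ResApp’s actual implementation.

```python
import math

# Hypothetical threshold: reject recordings whose ambient RMS level
# exceeds -45 dBFS (value chosen for illustration only).
NOISE_THRESHOLD_DBFS = -45.0

def rms_dbfs(samples):
    """Return the RMS level of normalized samples (-1.0 to 1.0) in dBFS."""
    if not samples:
        return float("-inf")
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    if rms == 0.0:
        return float("-inf")
    return 20.0 * math.log10(rms)

def ambient_noise_ok(samples, threshold_dbfs=NOISE_THRESHOLD_DBFS):
    """Immediate feedback: is the room quiet enough to record a cough?"""
    return rms_dbfs(samples) <= threshold_dbfs

# A quiet room (tiny amplitudes) passes; TV-level noise fails.
quiet = [0.001] * 1000   # about -60 dBFS
noisy = [0.2] * 1000     # about -14 dBFS
print(ambient_noise_ok(quiet))  # True
print(ambient_noise_ok(noisy))  # False
```

In a production app this check would run on a rolling buffer from the microphone so the nurse sees the result before the test begins, which is the kind of in-app guardrail Keating says replaced the traditional paper manual.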

2. Make sure data collection protocols are clear and lead to meaningful data

Another thing that hurt the data in ResApp’s trial was a regional difference in how trials are conducted in working hospitals.

“In the US institutions you have the treating team that’s treating the patient and then you have the research team that are doing the clinical trials,” Keating explained. “And they’re very separate. The research team is not really invited in to even see a patient until the treating team’s happy. So what we found was that there were some patients who we were doing the ResApp test on who were better, who essentially were cured by the time we were able to do the test because the treating team had given them asthma inhalers and things like that before we got access to them.”

In the next version of the trial, Keating said, investigators will be clearer with the hospital staff about when to collect data.

3. Standardize the outcome measures in multi-site trials

Another problem with the trial was the absence of a gold standard against which the app’s diagnoses could be compared.

“Respiratory disease is not a black and white clinical diagnosis,” Keating said. “Pneumonia, bronchitis, with all these diseases there’s not a simple test today that tells you definitively this or this. So the FDA asked us for a clinical adjudication panel. They recognized this and said ‘You need to have three clinicians look at all medical records and make a decision.’”

Unfortunately, ResApp chose to use different panels at different sites, and found that the panels disagreed with one another more than expected, leading to even muddier data.

“So in the new trial we’re having centralized adjudication, so a panel of three or four clinicians based here in the Boston area that are going to be the ones that do every single patient, which means that they can be better trained on the definitions,” Keating said.

4. Make arrangements to catch problems early

“I think one of the things that we didn’t do well in the previous study was look at that as the study went,” Keating said. “I mean, it is a double-blind prospective study so it’s difficult to actually get access to things. But we didn’t have someone doing a full absolute quality control of audio recordings. We should have done that and we’re doing it this time.”

Keating said his team realized something was wrong immediately after the data started coming in, just based on clinical notes provided along with the recordings.

“We spent a week or so trying to get rid of some of the stuff we could get rid of and trying to clean it up as much as we could and then we ran the algorithms, and obviously that wasn’t great,” he said.

5. Choose good partners — and if at first you don’t succeed, try, try again

While ResApp faced some backlash from shareholders and media after the data came back bad, there was no ill will from the hospitals, which were all game for a do-over.

“The day after we had results, I had all three PIs from hospitals call me up and say ‘Hey, we want to do this again, we understand things went wrong, we didn’t understand how to do this properly’,” Keating said.

With all the stakeholders on the same page, Keating sees things going much more smoothly this time around, and has high hopes for the response from the FDA.

“With new technologies like this, when you go to the FDA or when you go for commercialization, having trials run at top-of-the-line hospitals is going to help,” he said. “And they’ve been really good supporters of what we’ve been doing. They see very strong promise to what we’re doing and they appreciate that we’re trying to do it the right way by going through the regulatory groups and trials.”