Stephen Knych, M.D., is leading an innovative initiative at Adventist Health System to improve surgical performance in robotic surgery

As data analytics becomes increasingly common in healthcare, it is being applied to ever more complex and advanced purposes, reaching into all sorts of innovative niches in patient care organizations. That is certainly the case these days at the Altamonte Springs, Fla.-based Adventist Health System, where Stephen Knych, M.D., the 45-hospital-campus health system’s chief quality and patient safety officer, is leading a pioneering effort to leverage analytics to improve the clinical skills and performance of surgeons engaged in robotic surgeries across the health system.

Dr. Knych presented on this initiative in a workshop focused on the “Business Case for Safety” last May at the annual NPSF Safety Congress, sponsored by the National Patient Safety Foundation, and also participated in the IHI National Forum on Quality Improvement in Health Care, held in Orlando in December and sponsored by the Cambridge, Mass.-based Institute for Healthcare Improvement.

Dr. Knych and his colleagues at Adventist have been partnering with the Seattle-based healthcare technology company C-SATS, in order to leverage analytics to improve surgeons’ clinical and operational performance.

Using C-SATS’ analytics at Adventist facilities, Dr. Knych and his colleagues have seen scientifically measured, statistically significant improvements in quality measures in robotic surgeries, specifically reductions in cases converted to open surgery and in blood loss. They have also seen significant reductions in procedural costs.

Recently, Dr. Knych, along with Derek Streat, CEO of C-SATS, spoke with Healthcare Informatics Editor-in-Chief Mark Hagland, to discuss the progress being made in this initiative. Below are excerpts from that interview.

Tell me about the origins of this program.

Stephen Knych, M.D.: Our interest in the program began at Adventist when we had developed a robotic-assisted minimally invasive surgery guideline, and had had that guideline approved internally, and our medical executive committees had approved it in whole or in part. We had 15 hospitals at that time engaged in robotically assisted minimally invasive surgery. And because there’s no nationally recognized body, as there is in weight-loss surgery, these guidelines helped us structure how we managed the program. So we put that out and gave our facilities a year and a half to adopt it; and then we recognized a gap for surgeons, around continuing medical education credits for robotic-assisted minimally invasive surgeries. Even the medical specialty societies hadn’t created programs around this.

So we started looking for information. Dr. Richard Satava, an independent surgeon based in Washington state and an expert in the field of simulation and training, knew about an innovative approach at the University of Washington and introduced us to this technology. He had worked on a fundamentals of robotic surgery curriculum, and he served on our robotic surgeon task force as an external simulation and training expert.

What pieces were missing, for practicing surgeons?

The robotic surgery guidelines are contained in a 25-or-so-page consensus document developed by our task force. What was asked for was a specific number of robotic surgery-specific CME [continuing medical education] credits for each privileging cycle that all doctors go through in hospitals. But surgeons weren’t able to obtain those credits from their specialty societies, so we had to create this.

What types of gaps were there? Technical, clinical, process gaps?

All of the above: education and training, privileging, and so on. We drew on what was in the literature in 2015, when the guidelines were offered, as it pertained to establishing robotic surgery guidelines: clinical practice, education and training, and some of the nuances around privileging and credentialing.

What was the role of C-SATS in this?

Derek Streat: As Dr. Knych mentioned, C-SATS was spun out of the University of Washington in 2014, based on research by my co-founder, Dr. Thomas Lendvay. Dr. Lendvay is our co-founder and CMO, and still a practicing pediatric urologist at Seattle Children’s Hospital. The goal was to address this area of skill improvement, and to do it in a scalable and effective way. Even in his own practice, he was finding it difficult to get feedback as a robotic surgeon. You’d get feedback from someone looking over your shoulder, and they’d be a colleague at best, a competitor at worst; and most of all, people didn’t have the time. So he came up with the idea of using distributed reviewers, people with certain skills around surgery. And then he figured out how to take a complex task like a surgical case, break it up into smaller pieces, and assign those pieces to people on a panel.

For example, you’d consider a robotic prostatectomy case, and take a video of the case being performed. And you’d say, OK, let’s have somebody evaluate the left and right-hand movements of the surgeon to determine their level of bimanual dexterity. So we take many pieces of the surgery and evaluate them. And he figured out that if you broke these surgeries into pieces and sent them out to people around the world, and got all that data back and rolled it up, you actually got a very accurate representation of that surgeon’s skill, and you could identify specific ways to improve performance. So we’re up to about 50 clinical articles, and we turned it into a product. So C-SATS is now a technology offering that people can use, to provide methodology and feedback for surgeons’ individual performance.
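The roll-up Streat describes can be sketched in a few lines. The segment names, reviewer assignments and 1-to-5 scale below are illustrative assumptions, not C-SATS’ actual rubric: the point is simply that per-segment scores from many reviewers can be averaged into a stable case-level score.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical micro-reviews: each reviewer scores only the segment
# they were assigned, on an assumed 1-5 scale.
reviews = [
    {"segment": "bimanual dexterity", "reviewer": "r1", "score": 4},
    {"segment": "bimanual dexterity", "reviewer": "r2", "score": 3},
    {"segment": "tissue handling",    "reviewer": "r3", "score": 5},
    {"segment": "tissue handling",    "reviewer": "r4", "score": 4},
    {"segment": "efficiency",         "reviewer": "r5", "score": 3},
]

def roll_up(reviews):
    """Average each segment's scores, then average segments into one case score."""
    by_segment = defaultdict(list)
    for r in reviews:
        by_segment[r["segment"]].append(r["score"])
    segment_scores = {seg: mean(scores) for seg, scores in by_segment.items()}
    case_score = mean(segment_scores.values())
    return segment_scores, case_score

segments, overall = roll_up(reviews)
```

With enough independent reviewers per segment, individual reviewer noise averages out, which is what makes the distributed approach a usable proxy for expert assessment.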

This has been commercialized as a service?

Yes, it’s a software-as-a-service offering.

How many organizations are using it?

Right now, we’re approaching about 100 hospitals around the country.

What percentage of the universe of robotic surgery is using this?

I’m not sure; this really works for any type of procedure, even beyond surgery, that can be video-recorded. As long as you can take a video of something, it can be evaluated in our system. As of today, five of the ten largest non-profit health systems in the country are using C-SATS to evaluate their robotic surgery procedures, and Adventist is one of those.

What kinds of things are you learning from using this application of this technology, Dr. Knych?

Knych: That’s a good question, and it’s interesting. The edict came down years ago from the Office of Technology in Washington, D.C., about converting to electronic medical records in hospitals, and yet people are still asking: what are we getting from spending billions and billions of dollars on converting to electronic records? One of the criticisms has been the question of what we are getting out of technology that makes the investment worthwhile. This is an example of where technology can be leveraged to provide some very meaningful impact. I would classify it this way: I as a surgeon can go into the OR, upload a case to C-SATS, and they will assess the case with 30 or so reviewers, using a carefully evaluated tool. They can come back to me in five to seven days, confidentially, with reviews that give me a quantitative score about my skills, as well as qualitative reviews of me and my case. And the pièce de résistance is that, from the comfort of my own home, looking at my own computer or mobile device, I can look at the educational skills offered through the program, improve myself, and take that into my case tomorrow.

So this shows how technology can speed up processes. In this case, technology is accelerating the learning process. And that’s what we’re doing, and that’s where I think the payoff might be. The other part of my job is performance improvement. And performance improvement science tells us that you need actionable feedback that is timely in order to create improvement; and this gives the surgeon actionable feedback that can accelerate the learning curve.

How many surgeons in the Adventist system are using this feedback solution and process now?

We have somewhere between 140 and 160 surgeons actively practicing using robotic assistance.

How many have made use of this process? All of them?

The latest numbers are showing us right around 74 percent. We instituted this program on a voluntary basis at our institutions. The surgeons choose to participate, and around 73 to 74 percent have done so. We had a nine-month pilot period that went from March 2016 through December 2016, and the program has been fully implemented at Adventist since the end of February 2017.

How would you say that this fits into the broader performance improvement movement in healthcare in general? And how does this fit into the broader culture around performance improvement at Adventist, and what you’re trying to accomplish?

It fits very much with the culture we’ve been advancing here. We have six imperatives at Adventist Health System that we are pursuing, and this fits squarely into several of those. One of those is called “Improve the Product”; another is “Improve People Systems.” And also, we’re finding that we’re lowering the cost, because we’re finding that as our skills increase, our efficiency increases, and our costs go down. So it hits three of our six imperatives. With regard to your question about surgeons being resistant to performance improvement initiatives in the past, I think some of that is changing, with the shift from volume to value. Surgeons are becoming much more accustomed to seeing performance measures that drill down to them. We have hospital medicine measures, patient satisfaction measures; they’re getting quite used to working with measures that relate directly to them.

But I agree with your premise about culture and process: when you look at how surgical performance and quality have been measured in the past, through very distant measures, such as mortality statistics, and such—when you start at that level, it’s very difficult for surgeons to embrace what their performance has had to do with that outcome; but when you give them a measure, saying, this is your individual surgical skill, they embrace that very quickly. So if you can give them measures that really connect to what they do, and to clinical outcomes, they will embrace those measures, as long as they are objective and connect directly to their practice. We all think we’re superior in skill, but will adjust when we’re able to see the data.

Streat: The speed with which you’re able to receive and respond to the feedback is impressive. You may see that your tissue-handling score was 4.0 for a particular case, and then you’ll get specific feedback about how you can improve that specific aspect, and can see videos of better performance. And doctors have said, I got feedback and immediately changed my behaviors with the next case. So the data becomes addictive to people; they want to see how their scores improve.

What will happen at Adventist in the next year or two, in this area?

Knych: We’re investigating two opportunities with C-SATS: first, deploying this to our advanced laparoscopic surgeons; and second, developing similar assessments for robotic surgical first assistants, the people who assist in the surgery. Some are surgical residents. There’s a whole industry that provides surgical assistants. Some will be residents and interns; but in community hospitals with no training programs, some are PAs, and some are nurses. But they go through training as surgical first assistants, and are privileged and credentialed according to those requirements.

What should the IT and data analytics professionals in other hospitals and health systems around the country know about this, especially with regard to its potential to change behaviors?

Yes, it will change behaviors. But it also has to fit within clinicians’ workflow. It has to be presented to them in a meaningful, actionable, timely way; and with people running back and forth across the hospital, that means presenting it on mobile devices, or wherever it’s convenient for them. That is very important, because knowledge has a very short half-life. If I see feedback on a case from a few weeks ago, it’s less meaningful than if I see it within a week. If I can be sitting in the physicians’ lounge and quickly take in the information, I can digest it right away. So the portability of this information, and the workflow integration, are essential.

Streat: I think two things are important here. One, this is very much a data story; in particular, it’s the creation of a new data set, connected to individual clinical performance, that either didn’t exist or has been hidden until recently. That needs to be unearthed. And there’s the other side of the story: I’m a technologist, and the companies I’ve built in healthcare and other industries have typically been data mining or AI companies. That’s where this is going. Even now, when a surgeon gets feedback, much of that has been powered by information coming from preceding systems. We’re approaching 3 million assessments run through our service; that’s a lot of data integrated into the system. Ultimately, that starts to allow you to predict the outcomes and the performance that surgeons and patients will experience. Looking at past cases allows you to do prescriptive improvement; it helps drive the outcomes you want to drive. That’s driven by having enough data, and by training the systems to produce the actionable recommendations you want; and that’s what we’re starting to do today.

Knych: The fact that I can get my own information back this quickly, assessed with a standardized, validated research assessment tool by 30 people in many locations plus one to three experts, and targeted specifically on the key aspects of the procedure that are important to outcomes: that is important. In terms of getting all those elements pulled together in that short a time, I can’t get that without technology.

Artificial intelligence (AI) has been a hot topic lately. Much has been said about its promise to improve our lives, as well as its threat to replace jobs ranging from receptionists to radiologists. These wider discussions have naturally led to some interesting questions about the future of medicine. What role will human beings have in an ever-changing technology landscape? When AI becomes a better "doctor," what will become of doctors? How will patients and medical professionals adjust to these changes?

While it is, of course, hard to make accurate predictions about the distant future, my experience both as a doctor and now CEO of a software company that uses AI to help doctors deliver safer care, gives me some insight into what the intermediate future will hold for the medical profession.

Medicine is one of the great professions in every culture in the world—an altruistic, challenging, aspirational vocation that often draws the best and the brightest. Doctors spend years in training to make decisions, perform procedures, and guide people through some of their most vulnerable points in life. But medicine is, for the most part, still stuck in a pre-internet era. Entering a hospital is like walking into a time capsule to a world where people still prefer paper, communication happens through pagers, and software looks like it’s from the 1980s or 1990s.

But this won’t last; three giant forces of technology have been building over the last few years, and they are about to fundamentally transform healthcare: the cloud, mobile, and AI. The force least understood by doctors is AI; after all, even technophobic doctors now spend a lot of time using the internet on their smartphones. Even so, AI is the one that will likely have the biggest impact on the profession.

A lot of people believe that AI will become the primary decision maker, replacing human doctors. In that eventuality, Dr. AI will still need a human “interface,” because it is likely patients will need the familiarity of a human to translate the AI’s clinical decision making and recommendations. I find it an intriguing thought—going to the doctor’s office and seeing a human whose job it is to read the recommendations of a computer just to offer the human touch.

But to understand what the future could hold, we must first understand the different types of problems that need to be solved. Broadly, problems can be split into simple, complicated, and complex ones. Simple and complicated problems can be solved using paradigmatic thought (following standardized sets of rules), something computers excel at. What makes complex problems unique is that they require judgment based on more than just numbers and logic. For the time being, the modern machine learning techniques that we classify as “AI” are not well suited to solving complex problems that require this deeper understanding of context, systems, and situation.

Given the abundance of complex problems in medicine, I believe that the human “interfaces” in an AI-powered future won't simply be compassionate people whose only job is to sit and hold the hand of a patient while reading from a script. These people will be real doctors, trained in medicine in much the same way as today—in anatomy, physiology, embryology, and more. They will understand the science of medicine and the decision making behind Dr. AI. They will be able to explain things to the patient and field their questions in a way that only people can. And most importantly, they will be able to focus on solving complex medical problems that require a deeper understanding, aided by Dr. AI.

I believe that the intermediate future of medicine will feel very similar to aviation today. Nobody questions whether commercial airline pilots should still exist, even though computers and autopilot now handle the vast majority of a typical flight. Like these pilots, doctors will let "auto-doc" automate the routine busy work that has regrettably taken over a lot of a clinician’s day—automatically tackling simple problems that only require human monitoring, such as tracking normal lab results or following an evidence-based protocol for treatment. This will let doctors concentrate on the far more complex situations, like pilots do for takeoffs and landings.

Dr. AI will become a trusted assistant who can help a human doctor make the best possible decision, with the human doctor still acting as the ultimate decision maker. Dr. AI can pull together all of the relevant pieces of data, potentially highlighting things a human doctor may not normally spot in an ocean of information, while the human doctor can take into consideration the patient and their situation as a whole.

Medicine is both an art and a science, requiring doctors to consider context when applying evidence-based practices. AI will certainly take over the science of medicine in the coming years but most likely won't take over the art for a while. However, in the near future, doctors will need to evolve from being scientists who understand the art of medicine to artists who understand the science.

Comparing healthcare CIOs’ priorities at the end of 2017 with the current moment, a new analysis has found that core clinical IT goals have shifted from EHR (electronic health record) integration to data analytics.

In December 2017, hospital CIOs said they planned to focus mostly on EHR integration, mobile adoption and physician buy-in, according to a survey conducted at the time by Springfield, Va.-based Spok, a clinical communications solutions company, of College of Healthcare Information Management Executives (CHIME) member CIOs.

The survey from one year ago found that, across hospitals, 40 percent of CIO respondents said deploying an enterprise analytics platform was a top priority for 2018. Seventy-one percent of respondents cited integrating with the EHR as a top priority, and 62 percent said physician adoption and buy-in for secure messaging was a top priority for the next 18 months. What’s more, 38 percent said optimizing EHR integration with other hospital systems was a key focus for 2018.

Spok researchers were curious whether their predictions became reality, so they analyzed several industry reports and asked a handful of CIOs to recap their experiences from 2018. The responses revealed that, compared to last year, when just 40 percent of CIOs said they were deploying an enterprise analytics platform in 2018, harnessing data analytics looks to be a huge priority in 2019: 100 percent of the CIOs reported it as top of mind.

Further comparisons on 2018 predictions to realities included:

62 percent of CIOs predicted 2018 as the year of EHR integration; 75 percent reported they are now integrating patient monitoring data

79 percent said they were selecting and deploying technology primarily for secure messaging; now, 90 percent of hospitals have adopted mobile technology and report that it’s helping improve patient safety and outcomes

54 percent said the top secure messaging challenge was adoption/buy-in; now, 51 percent say they involve clinicians in mobile policy and adoption

What’s more, regarding future predictions, 87 percent of CIOs said they expect to increase spending on cybersecurity in 2019, and in three years from now, 60 percent of respondents expect data to be stored in a hybrid/private cloud.

CIOs also expressed concern regarding big tech companies such as Apple, Amazon and Google disrupting the healthcare market; 70 percent said they were somewhat concerned.

Managing clinical variation continues to be a significant challenge facing most hospitals and health systems today as unwarranted clinical variation often results in higher costs without improvements to patient experience or outcomes.

Like many other hospitals and health systems, Flagler Hospital, a 335-bed community hospital in St. Augustine, Florida, had a board-level mandate to address its unwarranted clinical variation with the goal of improving outcomes and lowering costs, says Michael Sanders, M.D., Flagler Hospital’s chief medical information officer (CMIO).

“Every hospital has been struggling with this for decades, managing clinical variation,” he says, noting that traditional methods of addressing clinical variation management have been inefficient, as developing care pathways, which involves identifying best practices for high-cost procedures, often takes up to six months or even years to develop and implement. “By the time you finish, it’s out of date,” Sanders says. “There wasn’t a good way of doing this, other than picking your spots periodically, doing analysis and trying to make sense of the data.”

What’s more, available analytics software is incapable of correlating all the variables within the clinical, billing, analytics and electronic health record (EHR) databases, he notes.

Another limitation is that care pathways are vulnerable to the biases of the clinicians involved, Sanders says. “In medicine, what we typically do is we’ll have an idea of what we want to study, design a protocol, and then run the trial and collect the data that we think is important and then we try to disprove or prove our hypothesis,” he says.

Working with Palo Alto, Calif.-based machine intelligence software company Ayasdi, Flagler Hospital initiated a pilot project to use Ayasdi’s clinical variation management application to develop care pathways for both acute and non-acute conditions and then measure adherence to those pathways.

Flagler targeted its treatment protocols for pneumonia as an initial care process model. “We kicked around the idea of doing sepsis first, because it’s a huge problem throughout the country. We decided to use pneumonia first to get our feet wet and figure out how to use the tool correctly,” he says.

The AI tools from Ayasdi revealed new, improved care pathways for pneumonia after analyzing thousands of patient records from the hospital and identifying the commonalities between those with the best outcomes. The application uses unsupervised machine learning and supervised prediction to optimally align the sequence and timing of care with the goal of optimizing for patient outcomes, cost, readmissions, mortality rate, provider adherence, and other variables.

The hospital quickly implemented the new pneumonia pathway by changing the order set in its Allscripts EHR system. As a result, for the pneumonia care path, Flagler Hospital saved $1,350 per patient and reduced the length of stay (LOS) for these patients by two days, on average. What’s more, the hospital reduced readmissions sevenfold: the readmission rate dropped from 2.9 percent to 0.4 percent, hospital officials report. The initial work saved nearly $850,000 in unnecessary costs; the savings came from eliminating labs, X-rays and other processes that did not add value, and from the reductions in lengths of stay and readmissions.

With the success of the pneumonia care pathway, Flagler Hospital leaders also deployed a new sepsis pathway. The hospital has expanded its plans for using Ayasdi to develop new care pathways, from the original plan of tackling 12 conditions over three years, to now tackling one condition per month. Future plans are to tackle heart failure, total hip replacement, chronic obstructive pulmonary disease (COPD), coronary artery bypass grafting (CABG), hysterectomy and diabetes, among other conditions. Flagler Hospital expects to save at least $20 million from this program in the next three years, according to officials.

Finding the “Goldilocks” group

Strong collaboration between IT and physician teams has been a critical factor in deploying the AI tool and to continue to successfully implement new care pathways, Sanders notes.

The effort to create the first pathway began with the IT staff writing structured query language (SQL) code to extract the necessary data from the hospital’s Allscripts EHR, enterprise data warehouse, surgical, financial and corporate performance systems. This data was brought into the clinical variation management application using the FHIR (Fast Healthcare Interoperability Resources) standard.
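Once the extracted data is expressed as FHIR resources, it can be flattened into analysis-ready rows. The sketch below is an illustration only: the `Encounter` fields shown are standard FHIR elements, but the article does not detail which resources or fields Flagler actually extracted.

```python
from datetime import date

# A toy FHIR-style Bundle of Encounter resources, standing in for
# the real extract from the EHR and data warehouse.
bundle = {
    "resourceType": "Bundle",
    "entry": [
        {"resource": {"resourceType": "Encounter", "id": "e1",
                      "period": {"start": "2018-01-02", "end": "2018-01-06"}}},
        {"resource": {"resourceType": "Encounter", "id": "e2",
                      "period": {"start": "2018-02-10", "end": "2018-02-13"}}},
    ],
}

def encounter_rows(bundle):
    """Flatten Encounter resources into rows with a computed length of stay."""
    rows = []
    for entry in bundle.get("entry", []):
        res = entry["resource"]
        if res.get("resourceType") != "Encounter":
            continue
        start = date.fromisoformat(res["period"]["start"])
        end = date.fromisoformat(res["period"]["end"])
        rows.append({"id": res["id"], "los_days": (end - start).days})
    return rows

rows = encounter_rows(bundle)
```

Using a standard like FHIR for this hand-off is what lets one loading pipeline serve data coming from the EHR, the surgical system and the financial systems alike.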

“That was a major effort, but some of us had been data scientists before we were physicians, and so we parameterized all those calls. The first pneumonia care path was completed in about nine weeks. We then turned around and did a second care path, for sepsis, which is much harder, and we did that in two weeks. We’ve finished sepsis and have moved on to total hip and total knee replacements. We have about 18 or 19 care paths that we’re going to be doing over the next 18 months,” he says.

After being fed data on past pneumonia treatments, the software automatically created cohorts of patients who had similar outcomes, accompanied by the treatments they received at particular times and in what sequence. The program also calculated the direct variable costs, average lengths of stay, readmission and mortality rates for each of those cohorts, along with the statistical significance of its conclusions. Each group had different comorbidities, such as diabetes, COPD and heart failure, which were factored into the application's calculations. At the push of a button, the application created a care path based on the treatment given to the patients in each cohort.
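The per-cohort metrics described above can be illustrated with a toy calculation. The records and field names below are invented for illustration, and the real application clusters patients on full treatment sequences rather than a pre-assigned cohort label:

```python
from statistics import mean
from collections import defaultdict

# Toy patient records; in practice each record would carry the full
# sequence and timing of treatments, comorbidities and outcomes.
patients = [
    {"cohort": "A", "cost": 6200, "los": 4, "readmitted": False, "died": False},
    {"cohort": "A", "cost": 5800, "los": 3, "readmitted": False, "died": False},
    {"cohort": "B", "cost": 9400, "los": 6, "readmitted": True,  "died": False},
    {"cohort": "B", "cost": 8800, "los": 7, "readmitted": False, "died": True},
]

def cohort_metrics(patients):
    """Compute average cost, average LOS, readmission and mortality rates per cohort."""
    groups = defaultdict(list)
    for p in patients:
        groups[p["cohort"]].append(p)
    out = {}
    for name, grp in groups.items():
        out[name] = {
            "avg_cost": mean(p["cost"] for p in grp),
            "avg_los": mean(p["los"] for p in grp),
            "readmission_rate": sum(p["readmitted"] for p in grp) / len(grp),
            "mortality_rate": sum(p["died"] for p in grp) / len(grp),
        }
    return out

metrics = cohort_metrics(patients)
```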

The findings were then reviewed with the physician IT group, or what Sanders calls the PIT crew, to select what they refer to as the “Goldilocks” cohort. “This is a group of patients that had the combination of low cost, short length of stay, low readmissions and almost zero mortality rate. We then can publish the care path and then monitor adherence to that care path across our physicians,” Sanders says.
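In code terms, picking the “Goldilocks” cohort amounts to ranking cohorts on those metrics. The lexicographic ordering below (mortality first, then readmissions, cost and LOS) is an illustrative assumption; in practice the PIT crew applies clinical judgment rather than a fixed formula.

```python
def goldilocks(metrics):
    """Select the cohort that best combines near-zero mortality, low
    readmissions, low cost and short length of stay (assumed ordering)."""
    def rank(m):
        return (m["mortality_rate"], m["readmission_rate"],
                m["avg_cost"], m["avg_los"])
    return min(metrics, key=lambda name: rank(metrics[name]))

# Hypothetical per-cohort metrics for two candidate cohorts.
metrics = {
    "A": {"avg_cost": 6000, "avg_los": 3.5,
          "readmission_rate": 0.0, "mortality_rate": 0.0},
    "B": {"avg_cost": 9100, "avg_los": 6.5,
          "readmission_rate": 0.5, "mortality_rate": 0.5},
}

best = goldilocks(metrics)
```

The treatments given to the patients in the selected cohort then become the published care path, and physician adherence is measured against it.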

The AI application uncovered relationships and patterns that physicians either would not have identified or would have taken much longer to identify, Sanders says. For instance, the analysis revealed that for patients with pneumonia and COPD, beginning nebulizer treatments early in their hospital stays improved outcomes tremendously, hospital leaders report.

The optimal events, sequence, and timing of care were presented to the physician team using an intuitive interface that allowed them to understand exactly why each step, and the timing of the action, was recommended. Upon approval, the team operationalized the new care path by revising the emergency-department and inpatient order sets in the hospital EHR.

Sanders says having the data generated by the AI software is critical to getting physicians on board with the project. “When we deployed the tool for the pneumonia care pathway, our physicians were saying, ‘Oh no, not another tool’,” Sanders says. “I brought in a PIT Crew (physician IT crew) and we went through our data with them. I had physicians in the group going through the analysis and they saw that the data was real. We went into the EMR to make sure the data was in fact valid, and after they realized that, then they began to look at the outcomes, the length of stay, the drop in readmissions and how the costs dropped, and they were on board right away.”

The majority of Flagler physicians are adhering to the new care path, according to reports generated by the AI software's adherence application. The care paths effectively sourced the best practices from the hospital’s best doctors using the hospital’s own patient groups, and that is key, Sanders notes.

“When we had conversations with physicians about the data, some would say, ‘My patient is sicker than yours,’ or ‘I have a different patient population.’ However, we can drill down to the physician’s patients and show the physician where things are. It’s not based on an ivory tower analysis, it’s based on our own data. And, yes, our patients, and our community, are unique—a little older than most, and we have a lot of Europeans here visiting. We have some challenges, but this tool is taking our data and showing us what we need to pursue. That’s pretty powerful.”

He adds, “It’s been amazing to see physicians rally around this. We just never had the tool before that could do this.”

While Flagler Hospital is a small community hospital with fewer resources than academic medical centers or larger health systems—for example, the hospital doesn’t have a dedicated data scientist but rather uses its in-house informatics staff for this project—the hospital is progressive in its use of advanced analytics, according to Sanders.

“We’ve been able to do a lot of querying ourselves, and we have some sepsis predictive models that we’ve created and put into place. We do a lot of real-time monitoring for sepsis and central line-associated bloodstream infections,” he says. “Central line-associated bloodstream infections are a bane for all hospitals. In the past year and a half, since we’ve put in our predictive model, we’ve had zero bloodstream infections, and that’s just unheard of.”

Sanders and his team plan to continue to use the AI tool to analyze new data and adjust the care paths according to new discoveries. As the algorithms find more effective and efficient ways to deliver care that result in better outcomes, Flagler will continue to improve its care paths and measure the adherence of its providers.

There continues to be growing interest, and also some hype, around AI tools, but Sanders notes that AI and machine learning are simply another tool. “Historically, what we’ve done is that we had an idea of what we wanted to do, conducted a clinical trial and then proved or disproved the hypothesis, based on the data that we collected. We have a tool with AI which can basically show us relationships that we didn’t know even existed and answer questions that we didn’t know to ask. I think it’s going to open up a tremendous pathway in medicine for us to reduce cost, improve care and really take better care of our patients,” he says, adding, “When you can say that to physicians, they are on board. They respond to the data.”