AI as a patient safety assistant that reduces and prevents adverse events

The 30-year-old SXSW conference and cultural event has been rising as a healthcare venue for the past few years. One talk this Editor would like to have attended this past weekend was presented by Eric Horvitz, Microsoft Research Laboratory Technical Fellow and managing director, who holds both a Stanford PhD in computing and an MD. This combination makes him a unique warrior against medical errors, which annually kill over 250,000 patients. His point was that artificial intelligence is increasingly used in tools that act as 'safety nets' for medical staff in situations such as failure to rescue (the inability to treat complications that rapidly escalate), readmissions, and the analysis of medical images.

A readmissions clinical support tool he worked on eight years ago, RAM (Readmissions Management), now produced by Caradigm, predicts which patients have a high probability of readmission and which will need additional care. Failure to rescue often results from a concatenation of complications, happening quickly and with a lack of knowledge, that resembles the prelude to an aircraft crash. "We're considering [data from] thousands of patients, including many who died in the hospital after coming in for an elective procedure. So when a patient's condition deteriorates, they might lose an organ system. It might be kidney failure, for example, so renal people come in. Then cardiac failure kicks in so cardiologists come in and they don't know what the story is. The actual idea is to understand the pipeline down to the event so doctors can intervene earlier." One aim is to understand the patterns that led up to the event; another is to address potential problems that may lie outside the doctor's direct knowledge or experience, including how the Bayesian theory of surprise affects the thought process.

Dr Horvitz also discussed how machine learning can assist medical imaging and interpretation. His point was that AI and machine learning, applied to thousands of patient cases and images, are there to assist physicians, not to replace them, and not to replace the human touch. MedCityNews

Our definitions

Telehealth and Telecare Aware posts pointers to a broad range of news items. Authors of those items often use the terms 'telecare' and 'telehealth' in inventive and idiosyncratic ways. Telecare Aware's editors can generally live with that variation. However, when we use these terms we usually mean:

• Telecare: from simple personal alarms (AKA pendant/panic/medical/social alarms, PERS, and so on) through to smart homes that focus on alerts for risk including, for example: falls; smoke; changes in daily activity patterns and 'wandering'. Telecare may also be used to confirm that someone is safe and to prompt them to take medication. The alert generates an appropriate response to the situation allowing someone to live more independently and confidently in their own home for longer.

• Telehealth: as in remote vital signs monitoring. Vital signs of patients with long-term conditions are measured daily by devices at home and the data sent to a monitoring centre for response by a nurse or doctor if they fall outside predetermined norms. Telehealth has been shown to replace routine trips for check-ups, to speed interventions when health deteriorates, and to reduce stress by educating patients about their condition.

Telecare Aware's editors concentrate on what we perceive to be significant events and technological and other developments in telecare and telehealth. We make no apology for being independent and opinionated or for trying to be interesting rather than comprehensive.