
The combination of large data sets and powerful computers has offered a new way to leverage technology in patient care. AI can perform complex cognitive tasks and analyze large amounts of patient data almost instantly. However, despite these powerful capabilities, some physicians are skeptical about the safety of using AI in healthcare, especially in roles that can impact a patient's health.

Today, most consumers have been exposed to some form of AI. Services like Google Home and Amazon's Alexa use artificial intelligence and machine learning extensively as part of their core applications. But AI is not limited to taking basic commands to give weather forecasts or set reminders. Artificial intelligence has shown that it can perform complex cognitive tasks faster than a human. The automotive industry has already showcased its ability to leverage AI in driverless cars, while other industries have found ways to use machine learning to detect fraud or assess financial risks. These are just a few examples that highlight the maturity level of AI.

Companies such as IBM play a big part in pushing AI into healthcare. Its use of the Watson platform in cancer research, insurance claims and clinical support tools has encouraged many in the industry to recognize the importance of this technology. Despite these encouraging signs and positive uses of artificial intelligence in healthcare, there are still concerns and questions around its potential risks, and some healthcare professionals are uneasy about AI getting into the business of patient care. Below are four challenges of artificial intelligence in healthcare that need to be overcome before physicians will fully adopt the technology.

Concerns around security and privacy

Patient health data is protected under federal law, and any breach or failure to maintain its integrity can carry legal and financial penalties. Since AI used for patient care would need access to multiple health data sets, it would need to adhere to the same regulations that current applications and infrastructures must meet. Because most AI platforms are consolidated and require extensive computing power, patient data -- or parts of it -- would likely reside in the vendor's data centers. This not only raises data privacy concerns but could also pose significant risk if the platform is breached.
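One common safeguard before patient data ever leaves the organization is de-identification. The sketch below is a minimal, hypothetical illustration (the field names, salt handling and record layout are assumptions, not a compliance recipe): it replaces the record identifier with a salted hash and strips direct identifiers before anything is sent to an external AI platform.

```python
import hashlib

# Hypothetical salt; in practice this would be a secret managed by the organization.
SALT = b"example-secret-salt"

def deidentify(record: dict) -> dict:
    """Replace the patient identifier with a salted hash and drop direct identifiers."""
    pseudo_id = hashlib.sha256(SALT + record["patient_id"].encode()).hexdigest()
    # Keep only the fields the AI model actually needs (hypothetical field names).
    return {
        "pseudo_id": pseudo_id,
        "age": record["age"],
        "lab_results": record["lab_results"],
    }

record = {
    "patient_id": "MRN-001234",
    "name": "Jane Doe",
    "age": 52,
    "lab_results": {"a1c": 6.1},
}

clean = deidentify(record)
print("name" in clean)  # False: the direct identifier is gone
```

Real de-identification under HIPAA involves far more than this (dates, locations, rare conditions), but the sketch shows the basic idea of minimizing what a vendor platform ever sees.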

Lack of interoperability between AI vendors

One of the popular subjects in the healthcare industry in recent years has been interoperability. Hospitals across the nation struggle to efficiently exchange patient health data with other healthcare organizations, despite the availability of international data standards. Adding AI to the mix would likely complicate things even further. When vendors like IBM or Microsoft actively deliver health-related services using their AI capabilities, the likelihood of these platforms talking to each other is very slim due to competition and proprietary technology. However, policies requiring these platforms to meet current interoperability requirements could help address the exchange of data from the start.
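For context, the interoperability requirements mentioned above typically center on shared data standards such as HL7 FHIR, which represents clinical data as plain JSON resources any compliant system can parse. The sketch below is an illustrative, stripped-down example (real FHIR resources carry many more fields), showing why a common format matters more than any one vendor's platform.

```python
import json

# Minimal FHIR-style Patient resource (illustrative; real resources are richer).
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1972-04-01",
}

# Serialize for exchange; any standards-aware receiver can round-trip it.
payload = json.dumps(patient)
parsed = json.loads(payload)
print(parsed["resourceType"])  # Patient
```

If AI platforms were required to accept and emit resources like this, two competing vendors' systems could exchange patient data without sharing any proprietary internals.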

Humans are not perfect, but AI could be worse

Opponents of AI in healthcare have argued that computers are not always reliable and can fail from time to time. These failures can lead to catastrophic consequences if AI prescribes the wrong medication or gives a patient the wrong diagnosis. However, AI could eventually reach a stage where it can be trusted, once it has proven its safety and readiness for patient care. If its error margins are less than or equal to those of its human counterparts, then the platform could be ready to take on an active role in patient care.
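The error-margin comparison above can be framed as a simple acceptance check: measure the AI's error rate on labeled cases and require it to be no worse than the human baseline. A minimal sketch with made-up numbers (the labels, predictions and clinician baseline are all hypothetical):

```python
def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the ground-truth labels."""
    wrong = sum(p != l for p, l in zip(predictions, labels))
    return wrong / len(labels)

# Hypothetical evaluation data: 1 = disease present, 0 = absent.
labels   = [1, 0, 1, 1, 0, 0, 1, 0]
ai_preds = [1, 0, 1, 0, 0, 0, 1, 0]  # one mistake out of eight

human_error = 0.15  # assumed clinician baseline error rate

ai_ready = error_rate(ai_preds, labels) <= human_error
print(ai_ready)  # True: 12.5% error is below the 15% baseline
```

A real readiness assessment would of course need far larger samples, confidence intervals and clinically weighted error costs (a missed diagnosis is not equivalent to a false alarm), but the comparison itself reduces to this kind of threshold test.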

Logical choices are not always the same as human choices

AI has progressed to the point where robots or virtual characters can mimic human behavior and interact naturally with humans. Emotional responses expressed in voice tones or text have been engineered based on human emotional reactions. However, physicians make many decisions based on gut feeling and intuition that may never be replicated by algorithms and supercomputers. These are the areas of patient care that would be hard to hand over to a robot.

AI technology is advancing at a rapid rate. Several well-known scientists and popular figures, including Stephen Hawking, Bill Gates and Elon Musk, have warned that AI could become so powerful and self-aware that it puts its own interests before those of humans. But long before robots become the enemy, artificial intelligence offers tremendous benefits in healthcare, and many physicians are welcoming the technology. AI in healthcare offers the opportunity to help physicians identify better treatment options, detect cancer early and engage patients.
