By Aine Cryts

Artificial intelligence (AI) and its applications in healthcare have garnered considerable attention in recent years, and with good reason: a steadily growing body of medical research suggests that AI applications have the potential to become valuable tools to aid physicians with diagnostic and treatment decisions.

But how well does success in controlled trials translate to usefulness in clinical practice? “We are entering a period in which health systems around the country are going to be making more concerted efforts to implement artificial intelligence solutions in clinical practice,” said Casey Ross, a national technology correspondent for STAT, in a webinar titled “From Computer to Clinic: The Challenges of Implementing Artificial Intelligence in Practice” on Feb. 13, 2020.

“But how do you do that safely and effectively? How do you guard against bias and protect the privacy of patient information? How do you ensure that these technologies are not only accurate, but also improving patient outcomes?”

AI technologies still face significant barriers before they can enter clinical practice, Ross said in the webinar. For example, there must be a way to ensure that the data used to train these systems are accurate and unbiased; after all, one of the goals of AI is to remove bias from medical decision-making.

“If AI systems are not tested or trained on diverse data, they may not perform equally on different kinds of patients,” he said. “At a minimum, AI models need to be tested on diverse patient groups in different geographies, so they can generalize and perform across multiple populations. Otherwise, the bias you are trying to remove from patient care by implementing AI is actually going to increase and cause already marginalized groups to get even more so.”

Another major, and often overlooked, barrier to widespread implementation of AI in healthcare is changing clinician behaviors, Ross said: “I think this is an underestimated challenge both in terms of cost and logistics. It’s not enough to simply deliver an interesting piece of information; the piece of information must be meaningful to the doctor and delivered in a certain time window. And is it going to be presented in a form that physicians use and understand?”

Ross said it’s also important to ask: “What information is going to improve patient care?” After all, it can take many years to determine the long-term effects of specific treatment decisions on patient populations. Ross cited the example of computer-assisted detection (CAD) of breast cancer, which was hailed as a breakthrough in the 1990s. But, he said, later studies found that CAD did not improve outcomes by any metric studied, and it took more than 20 years for that evidence to become apparent.

AI systems are already making inroads into patient care in some specific areas, such as sepsis detection, and in pathology, oncology, and radiology—specialties with substantial digital datasets that lend themselves well to training AI systems, Ross said.

“Modern machine learning systems are showing a lot of potential for certain applications that can improve care, but they will ultimately fail if they don’t meet certain criteria: A few key ones are, it must work on a diverse population of people. It must deliver clinically meaningful information that can be incorporated into the process of delivering care. And finally, it must improve outcomes. Very few, if any, check all of those boxes so far, and that’s why we are not seeing as many in clinical settings yet.”

To hear the entire webinar, go to STAT.

Aine Cryts is a contributing writer for AXIS Imaging News.