By Matt Murren

In late 2020, the Centers for Medicare & Medicaid Services (CMS) announced a special pathway for reimbursement—of up to $1,040 per patient—for the use of artificial intelligence software to diagnose and treat stroke. Yet a survey by the American College of Radiology Data Science Institute found that only 30% of ACR radiologists were using AI clinically in 2020. The remaining 70% were unprepared to implement software that could improve time to treatment and capture the government payments tied to that care.

With AI-focused technology companies developing hundreds of FDA-cleared algorithms for clinical use in radiology, radiology practices should be making AI readiness a top priority. Besides planning for AI-software coding and billing, though, what does that readiness look like?

Here we drill down into the computing capacity, IT infrastructure, interoperability, data security, and HIPAA compliance aspects of AI readiness.

Benefits of AI

First, we should identify the reasons a group would want to be AI-ready. After all, more than half of all respondents in the 2020 survey said they saw “no benefit” in using AI for radiology.

While each product will have its distinct value proposition, the primary advantage of AI tools in radiology is workload reduction—a key strategy for addressing the looming problem of burnout. Exhaustion, disillusionment, and a sense of being overwhelmed affected all specialties during the COVID pandemic. Even before the pandemic, however, task forces had been assembled to address the alarming rate of burnout in radiology.

AI algorithms reduce the burden of image interpretation on humans, especially in the screening phases, and AI-driven “assistants” help lessen the administrative burden as well, particularly in the realm of report drafting. For its part, the ACR has encouraged the use of AI through its Data Science Institute® AI Central database, which lists vetted applications by modality, body part, type of disease/condition, FDA clearance category, and specific use case.

To realize these benefits, groups will need to integrate new applications into existing workflows while continuing to meet regulatory requirements for vendor management and patient data protection.

Increasing Computing Capacity

According to University of Chicago professor and Vice Chair of Radiology Informatics Paul J. Chang, MD, IT infrastructure is less sophisticated in radiology than in most other enterprise integration models. This “suboptimal IT infrastructure is a constraint in the real-world adoption of AI” at all levels: from development and validation to implementation, support, and management.

To be interoperable with AI programs, radiology groups will at a minimum need to upgrade their information systems, operating systems, interoperability platforms, and EHRs to the most recent versions. And adequate storage for scanned images—either on premises or in the cloud—will need to be secured before adding new AI screening or assessment steps to the workflow.

Radiology-specific interoperability demands include ensuring that new images meet existing DICOM technical standards. This requirement ramps up considerably if the group is considering training its own algorithm or innovating on a publicly available one.

Establishing New Workflows

While outsourcing radiology image interpretation to offshore radiologists or third-party partners has become common practice, the advent of AI-powered solutions introduces a new paradigm: on-site image analysis performed by junior personnel and guided by AI algorithms.

This approach can streamline workflows and also mitigate potential vulnerabilities associated with transmitting sensitive medical data across international borders. By eliminating the need to send images offshore, radiology practices can reduce their exposure to cybersecurity risks, thus strengthening data protection and patient privacy.

The true potential of AI in radiology extends far beyond efficiency gains. AI-trained algorithms can interpret medical images at speeds surpassing human capabilities. Moreover, these systems can rapidly adapt to new findings, incorporating emerging medical knowledge and abnormality patterns into their analysis. This agility enables radiologists to remain at the forefront, enhancing diagnostic accuracy and facilitating timely interventions, particularly in emergent situations.

To harness the full potential of AI in radiology, practices must ensure they have robust, up-to-date information and storage systems in place, as well as in-house expertise in image coding and interoperability standards. To safeguard all this data, practices will need to honestly assess their own security posture as well as the posture of their AI application vendors.

Ensuring Security and Compliance

AI vendors are subject to the same regulations and penalties as any other business partner, so vendor management is a critical part of AI readiness. Groups can conduct these evaluations in-house or seek a third-party assessment, but the evaluation is a necessary step given the current state of the industry: increasing data breaches, particularly those targeting healthcare vendors, and dwindling cyberinsurance options.

Before looking outward, radiology groups will want to take a hard look at their own security posture. Referring to a digital security maturity framework can help groups understand where their security processes and personnel meet basic requirements, and where they might be exceeding them—or falling short.

Three key questions addressed by these frameworks include:

  1. Do you have a 24-hour Security Operations Center assessing real-time threats to system integrity and patient data security?
  2. Do you have AI-capable MDR (managed detection and response) or XDR (extended detection and response) software systems in place to protect and defend your practice’s data?
  3. What is your process for evaluating vendors for security and for integration into your workflows? This includes failover, security, and business continuity.

Scoring high on the maturity framework means that a group can comply with HIPAA and HITECH requirements either automatically or with minimal effort—and further, that compliance is merely the baseline for the measures it is taking to ensure data security. The equivalent standard for AI vendors is System and Organization Controls 2 (SOC 2) certification, created by the American Institute of Certified Public Accountants. Assessing any prospective vendor’s SOC 2 Type 2 and/or HITRUST status is essential for radiology practices interested in limiting their exposure to data breaches and compliance violations.

Anticipating Innovation: Will Radiology Groups Lead the Way?

According to Jan Beger, head of AI advocacy at GE HealthCare, one of the most significant changes coming to radiology is that “Multimodal AI will uncover new diagnostic uses for images, leading to the advent of precision health and more accurate disease staging and quantification.” Beger bases this prediction on the establishment of a “substantial, widely available imaging database . . . promoting unbiased AI and reducing health disparities.” CMS is certainly seeking to provide just that. It claims to have “one of the most robust data portfolios in the federal government, comprising … patient and provider claims, beneficiary enrollments, and medical records, along with internal data such as budget documents and contract records.”

Beger foresees interdisciplinary teams in academic organizations being the ones to make the quickest and widest-ranging use of such a database. But what if radiology groups—already the primary focus of the AI technology companies—were to develop their own algorithms?

There is significant room for innovation here, but only the bolder groups will end up investing in honing their image interpretation processes via algorithms they refine themselves. These groups will encounter numerous decision points if they hope to market those algorithms or applications, including issues related to liability, regulatory compliance, and intellectual property.

Taking a step back, do the interests of self-sustainability for a radiology group practice outweigh the collective benefit of sharing an innovative algorithm with other providers or health systems? And who in the group (or in the group’s investors/backers) has the authority to make that determination?

Where We Are

AI is prompting what may be the biggest shift in radiology in a century. In addition to the benefits to patient outcomes and clinical efficiency, the business opportunities are immense. If a radiology group can create its own trusted database and targeted algorithms—drawing on the specific knowledge and current frustrations of expert radiologists—this could be a significant differentiator in the market.

But the flip side of opportunity is liability. Insurance companies are already asking providers whether a diagnosis was assisted by AI. Even amid all of this change, it is unlikely—if only for the sake of liability—that we will reach a point where a physician no longer needs to sign off on an AI-generated image interpretation. Along with innovation, then, advanced radiology groups will need a robust set of checks and balances, avenues for model correction, and human evaluation at every stage of development.

AI readiness will vary for radiology groups depending on their interest in proprietary algorithm development. But the essential elements—computing capacity, interoperability, data security, and compliance—are non-negotiable.

Matt Murren is CEO and founder of True North ITG.