By Shannon Werb

In my March article, we set the stage that imaging analytics—data and insight—hold the keys for radiology to become an indispensable partner by demonstrating value beyond the reading room. We discussed forms of meta-data—patient demographics, order information and the like—used to analyze traditional volumetrics (volume by physician, modality, etc.). These are the things you find in a typical “day in the life” of a radiologist before a final diagnostic report is delivered. But another important element is the significant amount of valuable unstructured data locked away in those radiology reports.

Unstructured data is information that does not follow a pre-defined model or organization. It is typically text-heavy and can contain dates, numbers and other important quantitative and qualitative facts. The leading IT research and advisory company, Gartner Group, predicted that data would grow 800 percent by 2017, and that 80 percent of that data would be unstructured. This is important because radiology reports hold clinically relevant information in exactly that form: free text lacking any pre-defined model.

Traditional analytics tools are locked out from leveraging such data for quality-based or evidence-based medicine analysis and conversations. How can we unlock this valuable information? What is the key? Natural Language Processing (NLP).

NLP sits at the intersection of computer science, artificial intelligence and linguistics. Applied to radiology, NLP allows computers to read, understand and extract meaningful data from the unstructured clinical information in radiology reports.

Basically, NLP technology can—automatically and without human intervention—turn unstructured data into structured data with context. That data can then be normalized and placed into a database—“unlocked”—so analytics tools can access it and connect it with traditional upstream meta-data (demographics, orders, etc.).
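To make that idea concrete, here is a minimal, illustrative Python sketch (not vRad's actual pipeline) in which a naive keyword lookup stands in for a real NLP engine. It extracts normalized finding codes from free-text report language and stores them in a small database where they could be joined with order and demographic meta-data. The term list, finding codes and table layout are assumptions for illustration only.

```python
# Illustrative sketch only: a naive keyword step stands in for a real NLP engine.
import re
import sqlite3

# Hypothetical mapping from report phrasing to normalized finding codes.
FINDING_TERMS = {
    r"\bpulmonary embol(ism|us)\b": "PE",
    r"\bpneumothorax\b": "PTX",
}

def extract_findings(report_text: str) -> list[str]:
    """Return normalized finding codes detected in unstructured report text."""
    text = report_text.lower()
    return [code for pattern, code in FINDING_TERMS.items() if re.search(pattern, text)]

def store_structured(conn: sqlite3.Connection, accession: str, modality: str,
                     findings: list[str]) -> None:
    """Persist structured findings so analytics can join them with upstream meta-data."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS report_findings (accession TEXT, modality TEXT, finding TEXT)"
    )
    rows = [(accession, modality, f) for f in findings] or [(accession, modality, "NONE")]
    conn.executemany("INSERT INTO report_findings VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    report = "IMPRESSION: Small right apical pneumothorax."
    conn = sqlite3.connect(":memory:")
    store_structured(conn, "ACC123", "CT", extract_findings(report))
    print(conn.execute("SELECT * FROM report_findings").fetchall())  # [('ACC123', 'CT', 'PTX')]
```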

In a recent vRad webinar with SyTrue, a healthcare data-refinement company that provides NLP solutions, CEO Kyle Silvestro noted that physicians create around 2 billion clinical notes and reports each year. That’s 95 new notes every second, or 8 million new notes a day. “The average human going through these documents manually would only see anywhere between 40 and 100 documents per day—we just don’t have the labor force to deal with this challenge,” he said. “It’s not enough just to go through information; you also have to be able to extract, normalize and validate that information. You have to make it usable for end users. Think of a Google on steroids where you have the ability to ask natural-language questions of a platform or a technology.”

Our practice saw NLP’s potential—when integrated with our clinical benchmarking platform and data normalization tools—to unlock that 80 percent and allow us to develop findings-based metrics. Findings are determined to be present if sufficient text is found to support an ICD code and/or if there is a documented abnormal indication. A “normal” or “no findings present” is determined by the lack of ICD-code-generating text or the presence of “negative” or “normal” phrasing within the impression section of the report.
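As a rough illustration of that rule (a simplified sketch, not our production logic), the classification can be expressed in a few lines of Python. It assumes an upstream NLP step has already produced any candidate ICD codes and the impression text; the negative-phrase list is hypothetical, and conflicts such as an ICD code alongside explicitly normal phrasing are resolved here toward the normal side.

```python
# Simplified sketch of the findings-present rule described above (illustrative only).
NEGATIVE_PHRASES = ("no acute", "unremarkable", "within normal limits", "negative for")

def classify_report(icd_codes: list[str], impression: str, abnormal_indication: bool) -> str:
    """Label a single report as 'findings present' or 'no findings present'."""
    impression_lc = impression.lower()
    explicitly_normal = any(phrase in impression_lc for phrase in NEGATIVE_PHRASES)
    # Findings are present if text supports an ICD code and/or an abnormal
    # indication is documented, and the impression is not explicitly normal.
    if (icd_codes or abnormal_indication) and not explicitly_normal:
        return "findings present"
    return "no findings present"

print(classify_report(["J93.11"], "Right apical pneumothorax.", abnormal_indication=False))
print(classify_report([], "No acute cardiopulmonary abnormality.", abnormal_indication=False))
```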

This findings-based NLP work is now a key element in our Global Patient Information (GPI) Report, which provides our final-read clients with consistent information for trending, benchmarking and imaging operating plan oversight—information not easily obtained, or even available, in an existing RIS or EMR. It is also part of our custom analytics, which help clients answer specific questions where unstructured radiology interpretation data holds clinically relevant insight. Finally, it underpinned the launch of the Radiology Patient Care (RPC) Indices, the first findings-based national benchmarking metrics for radiology imaging.

Such NLP-driven metrics allow us to ask better questions, gain insight and drive better-informed discussions about utilization. Are we ordering too many imaging studies given the lack of positive findings? Are we ordering the proper imaging modality based on the reason for the study? Can these findings be used to help inform and equip clinical decision support (CDS) solutions in a strategic, less intrusive manner?

These questions are even more important now as healthcare continues to move down the outcomes-based, evidence-based and quality-based reimbursement path. Consequently, radiology must understand the quality of the entire patient experience—and measure it. Were the presenting symptoms understood and consulted on appropriately? Was the patient referred for the right exam? Were findings present, or did the radiologist hedge? In mammography, can we provide further classification (BI-RADS) that helps the oncologist and the patient? Can we track and improve patient outcomes based on what the radiologist actually dictates in the report?

Metrics help radiology tie information back and understand quality from the beginning (i.e., the front end of the patient experience), to the middle (the radiologist/imaging), to the end (the follow-on process of managing the patient within the primary care physician’s oversight). Imaging metrics and analytics help radiology show value through meaningful information-driven conversations between radiologists, other physicians and hospital administration, generating “win-win-win” scenarios that start with evidence rather than emotion.

Given its potential, NLP will play an expanding role in imaging analytics. Right now, much of this analytics work is historical and retrospective in nature (look back, extract, measure, improve), and we are exploring NLP-powered analytics that help radiologists in real time—seamlessly and without getting in the way of caring for patients. One innovation involves an “auto call” system that automatically contacts referring physicians if critical findings are dictated as a radiologist finalizes a report. This reduces the time required to alert physicians to important, high-risk findings while improving communication, collaboration and care.
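A simplified sketch of how such a hook might look is below; the function names, finding codes and notification interface are hypothetical placeholders, not vRad's actual system.

```python
# Hypothetical "auto call" hook: alert the referring physician when a finalized
# report contains a critical finding (all names and codes are illustrative).
CRITICAL_FINDINGS = {"PE", "PTX", "ICH", "AORTIC_DISSECTION"}

def on_report_finalized(accession: str, findings: set[str], referrer_phone: str, notify) -> bool:
    """Trigger an automated call/page if any critical finding is present."""
    critical = findings & CRITICAL_FINDINGS
    if critical:
        notify(referrer_phone, f"Critical finding(s) {sorted(critical)} on study {accession}")
        return True
    return False

# Example usage with a stand-in notifier that simply prints the alert.
on_report_finalized("ACC123", {"PTX"}, "+1-555-0100",
                    notify=lambda to, msg: print(f"CALL {to}: {msg}"))
```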

There is no doubt that data must drive the discussions radiology needs to lead to show value—starting with asking the right questions, and even finding answers to questions we didn't think to ask. What if we move NLP-powered imaging analytics to the point of care and build intuitive capabilities for physicians to interact with them seamlessly? Could we provide better, more informed clinical information at the point of need based on NLP outcomes? Are there ways to use NLP to let diagnostic physicians query the wealth of existing healthcare information (reports) for similar studies that help inform the diagnostic process?

Whatever the next step, NLP will be a key piece that “power-boosts” imaging analytics.

###

Shannon Werb is Chief Information Officer for Virtual Radiologic (vRad).