The ambition of pioneering researchers in artificial intelligence (AI) was to create machines that could imitate human behavior, and some even entertained dreams of producing intelligent humanoid robots. Since then, the focus in artificial intelligence has shifted to solving specific problems, such as detecting fraudulent credit card usage patterns, intelligently managing braking systems for high-speed trains, and searching for underground mineral and oil deposits.

Radiology initially saw in artificial intelligence the potential for a highly educated assistant, essentially an automated master radiologist who could be called upon to help the diagnostician with a challenging case. Practice patterns, however, along with the fact that structured data from the medical record and the radiology report are not yet available online, have so far rendered such tools of little clinical use, and the systems developed to date are used primarily for pedagogical purposes.

But that situation is poised to change. All signposts are directing radiology toward soft-copy reading, and thus the potential for developing useful decision-support tools for computer-aided detection, computer-aided diagnosis, and therapy planning is greater and more intriguing than ever.

The four primary AI techniques used for radiology decision-support systems are artificial neural networks (ANNs), Bayesian networks, rule-based reasoning, and case-based reasoning. Early work concentrated on decision-support systems based on case-based and rule-based reasoning, but the focus in radiology has since shifted to ANNs and Bayesian networks for computer-aided detection and diagnosis (CAD).

ARTIFICIAL NEURAL NETWORKS

Artificial neural networks are designed to imitate the function of the human brain. Nodes act like neurons: each receives information from input sources, and from other connected nodes, and fires off a signal if a certain threshold or pattern of inputs is reached. Each node then passes its information along to other nodes, or to a results node, where it is interpreted.
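
To make the analogy concrete, the sketch below models a single artificial neuron that weighs its inputs and fires only when the weighted sum crosses a threshold. The feature names, weights, and threshold are hypothetical values chosen purely for illustration.

```python
# Minimal sketch of a single artificial "neuron": it weighs its inputs and
# fires only when the weighted sum reaches a threshold. The feature names and
# numeric values here are hypothetical, chosen purely for illustration.

def neuron_fires(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of inputs meets the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two illustrative input features for a breast mass, scaled 0..1.
inputs = [0.8, 0.3]    # e.g., degree of spiculation, margin irregularity
weights = [0.9, 0.4]   # learned importance of each feature
print(neuron_fires(inputs, weights, threshold=0.5))  # -> 1 (the node fires)
```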

The ANN system is trained on a database of inputs and outputs (for example, images and diagnoses) to create a pathway between those inputs and outputs. The system learns patterns from this database and, when confronted with a new input, stimulates the nodes, or neurons, along the pathway to the correct output. The process is similar to how the human brain works. While we do not have enough processing power available today to simulate a human brain completely, there is certainly enough for task-specific applications.

An advantage of this approach is that it learns directly from observations.1 ANN systems do not require a computer programmer (sometimes called a knowledge engineer) to encode rules or other knowledge in the system's knowledge base. Instead, the system relies on known inputs and outputs and is given the freedom to fill in the rest. It can then use its pattern-recognition approach to classify new, unknown cases.
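
As a hedged illustration of learning from observations rather than from hand-written rules, the following sketch trains a single-layer perceptron on a tiny, invented set of labeled cases; the features, labels, and learning rate are all assumptions made for the example.

```python
# A minimal sketch of learning from examples rather than hand-written rules:
# a single-layer perceptron adjusts its weights whenever it misclassifies a
# training case. The toy dataset (feature values and labels) is hypothetical.

# Each case: ([spiculation, calcification], label) with 1 = malignant, 0 = benign.
training_cases = [
    ([0.9, 0.8], 1),
    ([0.8, 0.2], 1),
    ([0.2, 0.3], 0),
    ([0.1, 0.1], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(features):
    s = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if s >= 0 else 0

for _ in range(20):                      # a few passes over the examples
    for features, label in training_cases:
        error = label - predict(features)
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print(predict([0.85, 0.7]))  # -> 1: classified like the malignant examples
print(predict([0.15, 0.2]))  # -> 0: classified like the benign examples
```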

“ANN systems can be used in deciding when a breast mass is cancerous,” says Charles Kahn, MD, associate dean for informatics at the Medical College of Wisconsin, Milwaukee. “It is not clear in the literature how much you should weight spiculation, calcifications, and so forth in these cases. ANN provides a way around the lack of knowledge by building its own associations,” Kahn says.

The biggest disadvantage of this approach is that the system cannot explain its reasoning. Training sees only inputs and outputs, and what happens in between has been called a black box. This could clearly limit the utility of an ANN, as physicians may resist accepting a conclusion without knowing the basis for it.

Another disadvantage of ANN-based systems is that there is no way to know what the system has learned. The system must be tested at length to determine whether it has learned in an effective way. Kahn cites a study about training a military neural network system to distinguish cars from tanks in a laboratory setting. The system performed perfectly in the laboratory when it was shown pictures. However, when it moved to a real-world environment, the system clearly failed. “All of the pictures of tanks were taken on sunny days and those of cars on cloudy days,” Kahn says. “The system learned that the most prominent diagnostic feature of a car was a cloudy day.”

Despite the disadvantages, ANN systems have become the most popular and widely used artificial intelligence technique in radiology. “In various studies, ANN systems have equaled or exceeded the diagnostic performance of human experts,” Kahn says.

BAYESIAN NETWORKS

Bayesian networks rely on probabilities obtained directly from published statistical data or from human experts to make decisions in uncertain situations. Also called a belief network or causal probability network, this system takes complex and abundant data from research and human experience and balances it to make decisions based on probabilities. The system does not try to determine new relationships, but instead relies on proven, accepted inferences.

“These systems often require probability estimates from human experts, because probabilities in the proper form are not as abundant in the literature as one would hope,” notes Curtis Langlotz, MD, advisor for informatics and clinical trials, Diagnostic Imaging Program, National Institutes of Health (NIH), Bethesda, Md. “Those that are published may not apply to the specific situation at hand, or may have come from flawed studies, so the question of whether they are valid is not always so clear-cut. Nevertheless, some data are better than none.”

“The network is a directed graph consisting of nodes and links,” explains Azar Dagher, MD, a researcher with the diagnostic radiology department at the NIH. “These nodes and links represent variables and associations among the variables. Each node has parents and children to which it is linked, and each link represents the conditional probability that the child node will assume any of its states given the states of its parents. Once the system is programmed, it can be fed clinical findings, laboratory values, demographics, and imaging study results that can be digested down to a decision as to the most likely diagnoses.”
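
A minimal sketch of this idea, assuming a single parent “disease” node linked to a child “imaging finding” node, appears below; all probability values are invented for illustration, and the posterior is obtained with Bayes' rule.

```python
# A minimal sketch of a two-node Bayesian network: a parent "disease" node and
# a child "imaging finding" node linked by a conditional probability table.
# All probability values here are invented for illustration only.

p_disease = 0.10                                   # prior P(disease present)
p_finding_given = {True: 0.85, False: 0.20}        # P(finding | disease state)

def posterior_disease_given_finding(finding_present=True):
    """Apply Bayes' rule to compute P(disease | finding observation)."""
    p = p_finding_given[True] if finding_present else 1 - p_finding_given[True]
    q = p_finding_given[False] if finding_present else 1 - p_finding_given[False]
    numerator = p * p_disease
    evidence = numerator + q * (1 - p_disease)
    return numerator / evidence

print(round(posterior_disease_given_finding(True), 3))   # ~0.321
print(round(posterior_disease_given_finding(False), 3))  # ~0.020
```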

Kahn constructed a model in the domain of acute hepatobiliary disease to demonstrate the utility of Bayesian networks for radiologic diagnosis and procedure selection. The model’s nodes represent diagnoses, physical findings, laboratory test results, and imaging study findings in the setting of acute abdominal pain. The network determined the a priori probabilities of, for instance, gallstones, acute cholecystitis, appendicitis, gastroenteritis, and small bowel obstruction, and incorporated laboratory and imaging results to calculate the a posteriori probabilities.1
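
In the same hedged spirit, the sketch below illustrates how a priori probabilities over competing diagnoses could be revised into a posteriori probabilities once a single imaging finding is known; the diagnoses follow Kahn's example, but every probability value is hypothetical.

```python
# A hedged sketch of revising a priori probabilities over competing diagnoses
# into a posteriori probabilities after one imaging finding is observed.
# The diagnoses echo the example above; every number is hypothetical.

priors = {
    "gallstones": 0.20, "acute cholecystitis": 0.10, "appendicitis": 0.15,
    "gastroenteritis": 0.35, "small bowel obstruction": 0.20,
}
# P(gallbladder wall thickening on ultrasound | diagnosis) -- invented values.
likelihood = {
    "gallstones": 0.30, "acute cholecystitis": 0.90, "appendicitis": 0.05,
    "gastroenteritis": 0.05, "small bowel obstruction": 0.05,
}

unnormalized = {dx: priors[dx] * likelihood[dx] for dx in priors}
total = sum(unnormalized.values())
posteriors = {dx: round(v / total, 3) for dx, v in unnormalized.items()}
print(posteriors)  # acute cholecystitis rises to the top of the differential
```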

A significant advantage to a system based on a Bayesian network is that it can explain why it made a decision, including referring to the scientific literature on which the decision was based. Because the inferences are based on probability, a user can query the system on how a decision was achieved, with the answer residing in the variables and the links between those variables. The variables typically represent concepts radiologists are accustomed to working with, such as specific features of the image.

A disadvantage is that the system is limited to currently available data and probability estimates from experts, so it is most useful where the data are well accepted and uncontroversial, and it leaves gaps where little is known or agreed upon. Unfortunately, those gaps are often the areas where the radiologist needs the most assistance.

RULE-BASED REASONING

Systems based on rule-based reasoning use If-Then rules, working much like an algorithm or flowchart. They are simpler systems that can be useful in some situations. One example is the ICON system, developed by Henry A. Swett, MD, while at Yale. ICON contains 70 rules that combine entered clinical information and image findings to produce the differential diagnosis of lung disease seen on chest radiographs in patients with lymphoproliferative disorders. For example, if a patient with Hodgkin’s disease has a pleural effusion and no lymphadenopathy, then there is a moderate probability that the effusion is caused by an infectious process.
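
The following sketch encodes that single rule to show the If-Then form; it is an illustration only, not ICON's actual implementation, and the field names are hypothetical.

```python
# An illustrative If-Then rule in the spirit of the ICON example above; this
# is not ICON's actual implementation, and the field names are hypothetical.

def effusion_rule(case):
    """If a Hodgkin's disease patient has a pleural effusion and no
    lymphadenopathy, suggest a moderate probability of an infectious cause."""
    if (case.get("diagnosis") == "Hodgkin's disease"
            and case.get("pleural_effusion")
            and not case.get("lymphadenopathy")):
        return "moderate probability: effusion caused by infectious process"
    return None

case = {"diagnosis": "Hodgkin's disease",
        "pleural_effusion": True,
        "lymphadenopathy": False}
print(effusion_rule(case))
```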

One limitation of this approach is that it must have solid clinical data on which to base its rules. Additionally, the rule set must be written from the ground up for each problem and is somewhat difficult to update as the data change. However, the system can demonstrate its reasoning and, like Bayesian networks, can even refer to the medical literature on which its decision was based. Though useful for some problems, these techniques are not likely to serve as the basis for a large-scale decision-support system.

“Another important limitation to these systems is that the rules are typically categorical,” Langlotz says. “The components of the rules must be either true or false, and not in between. But we are often uncertain about many basic clinical data. That uncertainty cannot easily be propagated by these categorical rules.”

CASE-BASED REASONING

With these systems, the problem solver reuses the solution from a past case to solve a current problem. The system is trained with old cases that have been reasoned through and indexed according to pertinent features; those prior cases can then be drawn on whenever a similar case is presented.

“It solves new problems by adapting solutions that were used to solve old problems,” Kahn says. An example of this is MacRad, in which cases solved by experts in a particular area are indexed by diagnosis, anatomic location, and age group. The diagnosis is represented by the features that led to the particular outcome. The variables from a new case are entered and reference is made to prior, similar cases and solutions. Additionally, MacRad can retrieve images and text from Radiology, Resource and Review (R3), a case-based resource based on radiographic features, to help solve current, similarly featured cases.
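
A minimal sketch of case-based retrieval appears below, assuming a small hypothetical case library indexed by location, age group, and margin; the similarity measure simply counts matching features.

```python
# A minimal sketch of case-based retrieval: prior cases are indexed by a few
# features, and the most similar case is returned for a new problem. The case
# library, features, and similarity measure are all hypothetical.

case_library = [
    {"diagnosis": "simple bone cyst",
     "features": {"location": "humerus", "age_group": "child", "margin": "well-defined"}},
    {"diagnosis": "giant cell tumor",
     "features": {"location": "knee", "age_group": "adult", "margin": "well-defined"}},
    {"diagnosis": "osteosarcoma",
     "features": {"location": "knee", "age_group": "adolescent", "margin": "ill-defined"}},
]

def similarity(a, b):
    """Count how many indexed features two cases share."""
    return sum(1 for key in a if a[key] == b.get(key))

def retrieve(new_features):
    """Return the prior case most similar to the new one."""
    return max(case_library, key=lambda c: similarity(new_features, c["features"]))

new_case = {"location": "knee", "age_group": "adult", "margin": "well-defined"}
print(retrieve(new_case)["diagnosis"])  # nearest prior case: "giant cell tumor"
```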

The advantage of this approach, as with artificial neural networks, is that a complete understanding of the problem area is not needed, so vast amounts of research data are not required. An existing set of cases is used to train the system, and the reasoning takes the form of references to learned cases. These systems can also continue learning through experience: when the expert adapts an old solution to fit a new case, the adapted case is indexed and added to the library.

A disadvantage of this approach is that it essentially encodes anecdotes rather than letting the likelihood of disease guide the decision. These systems have been extremely useful for pedagogical purposes, however, as they can help identify companion cases that bring home a teaching point.

WHO’S USING THEM AND WHERE?

Although the technology exists to create a commercial product, artificial intelligence decision-support systems are still only in the research stages. The goal is to get the necessary data, including image findings and patient data, into a usable format, which requires multiple advances that are currently under development.

“Decision-support techniques have been available for 20 years or more,” Langlotz notes. “The big barrier to the use of artificial intelligence systems in radiology and decision support in general is that the data those systems need to make their decisions and recommendations are not generally available. Patient data are beginning to be available at institutions that are pioneering the use of electronic medical records. The imaging findings are available only in those few instances where structured reporting systems are used.”

As the requisite patient data become available online, it will become economical to begin incorporating AI technology into the computer systems that radiologists use. Decision-support systems of the future will have access to patient laboratory studies, demographics, and physical examination findings in a format that can be directly utilized.

Also under development are structured reporting systems such as the Breast Imaging Reporting and Data System (BI-RADS™) used in mammography. These reporting systems standardize the lexicon radiologists use in their reports. A by-product is that the standardized language provides the report data in a format that a decision-support system can use. The system can capture data such as the clinical history, the findings, and the differential as given by the radiologist. It could then pull additional information from the aforementioned electronic patient record, prompt the radiologist for more detail, or use what is available to develop a differential of its own and compare it with the radiologist’s differential.
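
As a hedged sketch, the example below shows how fields captured from a structured, BI-RADS-style report might feed a toy decision-support function; the field names, vocabulary, and rule are simplified assumptions rather than the actual BI-RADS lexicon.

```python
# A hedged sketch of a structured report handing its data directly to a
# decision-support function. Field names mimic a BI-RADS-style lexicon but
# are simplified and hypothetical, as is the rule itself.

structured_report = {
    "modality": "mammography",
    "finding": "mass",
    "margin": "spiculated",
    "calcifications": "fine pleomorphic",
    "radiologist_assessment": "suspicious",
}

def suggest_assessment(report):
    """Toy decision support: flag feature values the radiologist may want to
    weigh; the thresholds and vocabulary here are illustrative only."""
    suspicious_features = [
        key for key in ("margin", "calcifications")
        if report.get(key) in ("spiculated", "fine pleomorphic")
    ]
    if suspicious_features:
        return f"consider biopsy; suspicious features: {', '.join(suspicious_features)}"
    return "no additional suggestions"

print(suggest_assessment(structured_report))
```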

“The future lies in software that is designed to capture structured imaging data as part of the routine reporting process,” Langlotz says. The system could then suggest that the radiologist consider factors not yet mentioned in the report.

CAD systems2 could be used in conjunction with decision-support systems to substantially enhance the quality of radiological interpretation. The combined system would not have to rely solely on the radiologist to enter the characteristics of an image finding. This could save time, because the radiologist would not have to learn the language of the decision-support system, and it would be more precise, because the system could take full advantage of its own knowledge.

HOW WILL THEY BE USED?

The exact role these systems will eventually play remains undefined. Will they be used for every image all day long, or will they be used on certain types of examinations or only when the radiologist needs a consult? As Langlotz describes, “There are two basic modes that these systems can work in: critiquing mode or consultation mode.”

In the critiquing mode, the system would work quietly alongside the radiologist, using some combination of CAD findings, the radiologist’s entered findings in the structured reporting system, and the patient’s electronic record to create a differential diagnosis or recommendation that is displayed unobtrusively. This mode could be used for every examination. A consultation-mode system would be used only when the radiologist asks for a consult, perhaps when there is a diagnostic dilemma such as a long differential diagnosis or uncommon findings, or when the diseases involved are not often considered in routine practice.

“The way to make this successful is not to create a stand-alone system that requires you to walk down the hall to enter information into some computer system to get an answer and then have to walk back to where you work,” Kahn asserts. “The way to make the systems work is to integrate them into the everyday work flow.”

THE FUTURE

No one really wants to hazard a guess as to when decision-support systems for radiology will be commercially available. The only consensus is that it will be after PACS and digital image interpretation are firmly in place.

“Until [decision support] is integrated, it will not take off,” Kahn believes. Decision-support systems will likely evolve as all the pieces come together, including electronic medical records, CAD, and structured reporting systems.

Although significant barriers to the development and adoption of decision-support systems remain, there are several compelling reasons to persevere. First, radiologists and clinicians are constantly searching for ways to improve diagnostic accuracy, and a number of studies show that decision-support systems perform at or above the level of human decision-makers. “However, radiologists should not be discouraged,” Langlotz notes, “because the best results are obtained when the two are used together, where the system makes suggestions and the human uses the information to arrive at a diagnosis. This combination truly has the potential to improve radiological decision-making.”

Another motivator is that the amount of medical knowledge is growing at a staggering pace. Most of the imaging technology used by radiologists today, such as ultrasound, CT, and MRI, was not available just 20 years ago. It is impossible to keep up with scientific advances by reading all of the literature. The estimates are that more than 2 million articles are published in the biomedical literature each year and that the mass of medical knowledge doubles every 5 years.1 Decision-support systems can help the radiologist not only keep up with this growing body of knowledge, but also actively utilize it.

How new information from the literature makes its way into a decision-support system, however, remains an active research question that has yet to be answered definitively.

There could be economic benefits as well. Decision-support systems have the potential to decrease costly errors, allow general radiologists to perform more specialized work, and, over time, give the radiologist the confidence to interpret images more rapidly. Ultimately, this could add up to valuable savings.

In conclusion, the technology exists for artificial intelligence-based decision-support systems, but the foundation of available online data on which these systems can operate has yet to be established. The research systems show much promise, and it is not really a matter of if, but rather when, they will become widely available. Decision-support systems will be not revolutionary but evolutionary, and they will likely become part of the practice of virtually every radiologist.

“My hope is that these systems can be incorporated as a routine part of the software that we will come to use as a part of our practice every day,” Langlotz says. “The display station that we use to display the image may have CAD incorporated into it that can be used to point out suspicious areas of an image, and a structured reporting system that has decision support built into it and that can suggest differentials and make recommendations.”


Peter Prokell, MD, is a contributing writer for Decisions in Axis Imaging News