By Aine Cryts

Artificial intelligence can help radiologists detect cerebral aneurysms. That’s according to a study published in JAMA Network Open in June.

In the study, Allison Park, an applied scientist at Microsoft, and her co-authors wrote about an algorithm called HeadXNet. The algorithm improved clinicians’ ability to identify an aneurysm “at a level equivalent to finding six more aneurysms in 100 scans that contain aneurysms,” according to an announcement. The co-authors note that additional research is needed to evaluate the generalizability of the AI tool prior to clinical deployment, because scanner hardware and imaging protocols differ across healthcare facilities.

AXIS Imaging News recently discussed the results of this study with Park. What follows is a lightly edited version of that conversation.

AXIS Imaging News: What’s one key finding from your research that radiologists should pay attention to and why?

Allison Park: We developed an artificial intelligence tool built around a model called HeadXNet. In a CT angiogram (CTA) scan of the head, HeadXNet assesses which part of the scan, if any, is likely to be an aneurysm. The output from HeadXNet is a collection of voxels (the 3D counterpart of a pixel in a 2D image) that have a high probability of being part of an aneurysm.

The tool then renders this output into a red overlay that radiologists can view on top of the CTA scan using their standard DICOM viewer. Because the outputs are 3D, radiologists can view these overlays in axial, coronal, and sagittal planes. In addition, radiologists can choose to turn off the overlay to read the scan without this artificial intelligence tool.
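To make that description concrete, here is a minimal sketch of how a per-voxel probability map might be thresholded into a binary mask and rendered as a semi-transparent red overlay on an axial slice. The threshold value, array shapes, and use of NumPy and Matplotlib are illustrative assumptions for this sketch, not details of the actual HeadXNet pipeline or the clinical DICOM viewer.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: a real CTA volume and model output would be loaded from
# DICOM files and model inference; shapes and values here are illustrative.
cta_volume = np.random.rand(64, 256, 256)      # grayscale CTA voxels (z, y, x)
aneurysm_prob = np.random.rand(64, 256, 256)   # per-voxel aneurysm probability

# Assumed threshold for this sketch; the study's operating point may differ.
mask = aneurysm_prob > 0.5                     # binary segmentation mask

def show_axial_slice(z, show_overlay=True):
    """Display one axial slice, optionally with the red aneurysm overlay."""
    plt.imshow(cta_volume[z], cmap="gray")
    if show_overlay:
        # Red, semi-transparent wherever the mask is positive; fully
        # transparent elsewhere, so the underlying scan stays visible.
        overlay = np.zeros(mask[z].shape + (4,))
        overlay[mask[z]] = [1.0, 0.0, 0.0, 0.4]
        plt.imshow(overlay)
    plt.axis("off")
    plt.show()

show_axial_slice(z=32)                          # read with the AI overlay
show_axial_slice(z=32, show_overlay=False)      # toggle the overlay off
```

The same masking idea extends to coronal and sagittal views by slicing the volume along the other two axes, which is what allows the overlay to be inspected in all three planes.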

This result suggests that we can improve diagnostic accuracy by equipping radiologists with emerging diagnostic artificial intelligence tools. Smooth integration of artificial intelligence into the current workflow is both safe and effective since it keeps humans in the loop by leaving it to the radiologist to accept or reject artificial intelligence outputs.

AXIS: What’s your desired outcome from doing this research and having radiologists apply this knowledge in their daily work?

Park: I hope that this research can be put into practice to help boost radiologists’ accuracy and reduce false negatives in aneurysm diagnosis. Our research shows that artificial intelligence models that simply output probabilities with high accuracy may not be enough. However, when those outputs are visualized, made easily interpretable, and combined with human expertise, they can be much more powerful.

Interpretability has been a big issue in assessing the practicality of many medical artificial intelligence models, and our study is another example that emphasizes the importance of interpretable outputs.

AXIS: How did your research team include radiologists in the study?

Park: Our study was made possible through a collaboration between engineers and clinicians. Because practicing radiologists are the people who will eventually use the tool, their guidance is valuable for making sure the artificial intelligence tool has a practical use. Our research team received feedback from radiologists, from the beginning stage of establishing a clinically valuable goal for the study to the final stage of making design decisions for the artificial intelligence interface.

Radiologist expertise is also essential for data collection, which is crucial because a larger training dataset can significantly improve the performance of an artificial intelligence model. Kristen Yeom, MD, a neuroradiologist at Stanford University Medical Center and a co-senior author of our paper, annotated the sizes and locations of clinically significant aneurysms in the dataset. The team then used these annotations to manually outline the aneurysms, producing the inputs given to the model.
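As a rough illustration of that last step, the sketch below shows one way per-slice outlines could be rasterized into a voxel-level label volume for training a segmentation model. The coordinates, volume shape, and use of scikit-image are assumptions made for this example; they are not the study’s actual annotation pipeline.

```python
import numpy as np
from skimage.draw import polygon

# Illustrative only: outlines would come from a radiologist's annotation tool,
# keyed by slice index; shapes and coordinates here are placeholders.
volume_shape = (64, 256, 256)                  # (slices, rows, cols) of the CTA scan
label_volume = np.zeros(volume_shape, dtype=np.uint8)

# Example: one aneurysm outlined on slice 30 as a polygon of (row, col) vertices.
outlines = {
    30: (np.array([100, 100, 120, 120]),       # row coordinates of the outline
         np.array([140, 160, 160, 140])),      # column coordinates of the outline
}

for z, (rows, cols) in outlines.items():
    rr, cc = polygon(rows, cols, shape=volume_shape[1:])
    label_volume[z, rr, cc] = 1                # voxels inside the outline become label 1
```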

I hope that this study encourages more radiologists to take an interest in collaborating with machine learning engineers to promote additional research in this field.

AXIS: What are some misconceptions radiologists might have about artificial intelligence? And how would you address them?

Park: There has been heated discussion about whether radiologists are going to be replaced by artificial intelligence. Radiologists have voiced concerns about the shortcomings of comparing an artificial intelligence model with a radiologist based on their performance on a single disease. In response to these concerns, we say that a rivalry between human clinicians and artificial intelligence is unnecessary.

Artificial intelligence isn’t a competitor threatening to replace radiologists; rather, it’s a tool that can complement them. Integrating artificial intelligence into diagnostic radiology should certainly be approached with caution, but neither with over-dependence on the new technology nor with hostility toward it.

I want to borrow an analogy from Curt Langlotz, MD, PhD, director of the Center for Artificial Intelligence in Medicine & Imaging at Stanford University. Just as autopilot didn’t replace pilots but instead made flying safer, artificial intelligence tools can augment radiologists and compensate for human shortcomings, leading to more accurate diagnoses and better healthcare.

Editor’s note: Pranav Rajpurkar, a co-author of this study, presented on this topic on September 23 at the Society for Imaging Informatics in Medicine’s 2019 Conference on Machine Learning in Medical Imaging, which took place in Austin, Texas.