Summary: A new deep learning model detects clinically significant prostate cancer on MRI as accurately as experienced abdominal radiologists, showing potential as an adjunct tool to improve cancer detection rates.
Key Takeaways
- A deep learning model performs at the level of experienced abdominal radiologists in detecting clinically significant prostate cancer on MRI, potentially serving as an adjunct tool to improve cancer detection rates and reduce false positives.
- This model, developed by researchers at the Mayo Clinic, does not require lesion annotation and uses a convolutional neural network to predict significant prostate cancer from multiparametric MRI, overcoming limitations of the PI-RADS system.
- The study’s results showed that combining the model’s output with radiologists’ interpretations enhances diagnostic performance, and future research will focus on integrating the model into clinical practice to assess its impact on radiologist decision-making.
——————————————————————————————————————————————————
A deep learning model performs at the level of an abdominal radiologist in the detection of clinically significant prostate cancer on MRI, according to a study published today in Radiology, a journal of the Radiological Society of North America (RSNA). The researchers hope the model can be used as an adjunct to radiologists to improve prostate cancer detection.
Multiparametric MRI
Prostate cancer is the second most common cancer in men worldwide. Radiologists typically use a technique that combines different MRI sequences (called multiparametric MRI) to diagnose clinically significant prostate cancer. Results are expressed through the Prostate Imaging-Reporting and Data System version 2.1 (PI-RADS), a standardized interpretation and reporting approach. However, lesion classification using PI-RADS has limitations.
“The interpretation of prostate MRI is difficult,” says study senior author Naoki Takahashi, MD, from the department of radiology at the Mayo Clinic in Rochester, Minn. “More experienced radiologists tend to have higher diagnostic performance.”
Enhancing Prostate Cancer Detection
Applying artificial intelligence (AI) algorithms to prostate MRI has shown promise for improving cancer detection and reducing observer variability, the inconsistency with which different readers interpret the same images. However, a major drawback of existing AI approaches is that the lesion must be annotated (outlined) by a radiologist or pathologist at the time of initial model development and again during model re-evaluation and retraining after clinical implementation.
“Radiologists annotate suspicious lesions at the time of interpretation, but these annotations are not routinely available, so when researchers develop a deep learning model, they have to redraw the outlines,” Takahashi says. “Additionally, researchers have to correlate imaging findings with the pathology report when preparing the dataset. If multiple lesions are present, it may not always be feasible to correlate lesions on MRI to their corresponding pathology results. Also, this is a time-consuming process.”
Takahashi and colleagues developed a new type of deep learning model to predict the presence of clinically significant prostate cancer without requiring information about lesion location. They compared its performance with that of abdominal radiologists in a large group of patients without known clinically significant prostate cancer who underwent MRI at multiple sites of a single academic institution.
CNN AI Detects Prostate Cancer from MRI
The researchers trained a convolutional neural network (CNN), a type of AI capable of discerning subtle patterns in images beyond what the human eye can detect, to predict clinically significant prostate cancer from multiparametric MRI.
Among 5,735 examinations in 5,215 patients, 1,514 examinations showed clinically significant prostate cancer. On both the internal test set of 400 exams and an external test set of 204 exams, the deep learning model’s performance in clinically significant prostate cancer detection was not different from that of experienced abdominal radiologists. A combination of the deep learning model and the radiologist’s findings performed better than radiologists alone on both the internal and external test sets.
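The article does not describe how the model's output and the radiologists' findings were fused; one common approach to this kind of combination is a stacked logistic regression over both signals. The sketch below is purely illustrative, using synthetic data and hypothetical variables (`model_prob`, `pirads`) rather than anything from the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500

# Synthetic ground truth and readings (hypothetical, for illustration only)
truth = rng.integers(0, 2, n)                                     # csPCa yes/no
model_prob = np.clip(0.4 * truth + rng.uniform(0, 0.6, n), 0, 1)  # AI probability
pirads = np.clip(rng.integers(1, 4, n) + 2 * truth, 1, 5)         # PI-RADS 1-5

# Fuse both signals with a logistic-regression "combiner"
X = np.column_stack([model_prob, pirads])
combiner = LogisticRegression().fit(X, truth)
combined = combiner.predict_proba(X)[:, 1]   # combined csPCa probability
auc = roc_auc_score(truth, combined)
```

On synthetic data like this, the combined score discriminates at least as well as either input alone, which mirrors the direction of the study's finding; the actual fusion method used by the authors may differ.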
Since the output from the deep learning model does not include tumor location, the researchers used gradient-weighted class activation mapping (Grad-CAM), a technique that highlights the image regions most influential in the model's prediction, to localize the tumors. The study showed that for true-positive examinations, Grad-CAM consistently highlighted the clinically significant prostate cancer lesions.
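Grad-CAM is a general technique, not specific to this study: it weights a convolutional layer's feature maps by the spatially averaged gradient of the output score, keeping only positive contributions. A minimal 3D sketch (not the authors' code; the toy model and layer choice are assumptions):

```python
import torch
import torch.nn as nn

def grad_cam_3d(model, target_layer, x):
    """Minimal Grad-CAM for a 3D CNN: weight the target layer's feature
    maps by the spatially averaged gradient of the output score."""
    store = {}
    fh = target_layer.register_forward_hook(
        lambda m, inp, out: store.update(acts=out))
    bh = target_layer.register_full_backward_hook(
        lambda m, gin, gout: store.update(grads=gout[0]))
    score = model(x).sum()          # scalar score to differentiate
    model.zero_grad()
    score.backward()
    fh.remove(); bh.remove()
    # Channel weights: global-average-pooled gradients
    weights = store["grads"].mean(dim=(2, 3, 4), keepdim=True)
    cam = torch.relu((weights * store["acts"]).sum(dim=1))  # weighted sum, ReLU
    return cam / (cam.max() + 1e-8)                         # normalize to [0, 1]

# Toy 3D model, purely illustrative
model = nn.Sequential(
    nn.Conv3d(1, 4, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(4, 1),
)
cam = grad_cam_3d(model, model[0], torch.randn(1, 1, 8, 16, 16))
```

The resulting `cam` is a coarse heat map over the input volume; in the study's setting, high values on true positives aligned with the biopsy-confirmed lesions.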
Takahashi sees the model as a potential assistant to the radiologist that can help improve diagnostic performance on MRI through increased cancer detection rates with fewer false positives. “I do not think we can use this model as a standalone diagnostic tool,” Takahashi says. “Instead, the model’s prediction can be used as an adjunct in our decision-making process.”
The researchers have continued to expand the dataset, which now contains twice the number of cases used in the original study. The next step is a prospective study examining how radiologists interact with the model's predictions.
“We’d like to present the model’s output to radiologists and assess how they use it for interpretation and compare the combined performance of radiologist and model to the radiologist alone in predicting clinically significant prostate cancer,” Takahashi says.
Featured image: Diagram shows the architecture of the image-only model and clinical model. The image-only model consisted of one set of three-dimensional (3D) convolutions (3D convolutional kernel [Conv3D], maximum pooling [MaxPool3D], and group normalization [GroupNorm]) for each input volume (T2, diffusion-weighted imaging [DWI], apparent diffusion coefficient [ADC], and dynamic contrast-enhanced [DCE]), followed by concatenation, three additional sets of 3D convolutions, global average pooling (GlobalAvgPool), and two fully connected layers. The clinical model consists of a neural network of two fully connected layers with prostate-specific antigen (PSA) level, whole-gland PSA density, and transition zone PSA density as input. The output from each model was the probability of clinically significant prostate cancer (csPCa). ch = channel, Grad-CAM = gradient-weighted class activation map.
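The architecture in the caption can be sketched in PyTorch. This is an illustrative reconstruction, not the authors' code: channel counts, kernel sizes, and the ReLU activations are assumptions; only the overall layout (per-sequence 3D conv sets, concatenation, three more conv sets, global average pooling, two fully connected layers, plus a two-layer clinical branch) comes from the caption.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One set of 3D convolutions per the caption:
    Conv3D -> MaxPool3D -> GroupNorm (ReLU added as a plausible choice)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.MaxPool3d(2),
            nn.GroupNorm(4, out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)

class ImageOnlyModel(nn.Module):
    """Image-only branch: one stem per MRI sequence (T2, DWI, ADC, DCE),
    concatenation, three more conv sets, global average pooling, two FC layers."""
    def __init__(self, n_sequences=4, ch=8):
        super().__init__()
        self.stems = nn.ModuleList(ConvBlock(1, ch) for _ in range(n_sequences))
        self.trunk = nn.Sequential(
            ConvBlock(n_sequences * ch, 2 * ch),
            ConvBlock(2 * ch, 4 * ch),
            ConvBlock(4 * ch, 8 * ch),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),   # GlobalAvgPool
            nn.Flatten(),
            nn.Linear(8 * ch, ch),
            nn.ReLU(),
            nn.Linear(ch, 1),
        )

    def forward(self, volumes):
        feats = [stem(v) for stem, v in zip(self.stems, volumes)]
        x = torch.cat(feats, dim=1)                # concatenate per-sequence features
        return torch.sigmoid(self.head(self.trunk(x)))  # probability of csPCa

# Clinical branch: two fully connected layers over PSA level,
# whole-gland PSA density, and transition-zone PSA density.
clinical_model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1))

# One fake examination: four single-channel volumes of 16 slices, 64x64 each
volumes = [torch.randn(1, 1, 16, 64, 64) for _ in range(4)]
prob = ImageOnlyModel()(volumes)
```

Because each branch ends in a single probability, the image-only and clinical outputs can be inspected separately or combined downstream, which is consistent with the caption describing two distinct models.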