The Radiological Society of North America (RSNA) presented its seventh Alexander R. Margulis Award for Scientific Excellence to Paras Lakhani, MD, from Thomas Jefferson University Hospital (TJUH) in Philadelphia, for the article, “Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks,” published online in April 2017.

Named for Alexander R. Margulis, MD, a distinguished investigator and inspiring visionary in the science of radiology, this annual award recognizes the best original scientific article published in RSNA’s peer-reviewed journal Radiology.

While imaging plays a pivotal role in the diagnosis and management of tuberculosis (TB), access to radiology is often limited in the developing countries where TB is most prevalent. Hoping to bridge that gap, Lakhani and colleague Baskaran Sundaram, MD, also from TJUH, investigated the efficacy of an automated method for detecting TB on chest radiographs. Specifically, the researchers used deep learning, a form of artificial intelligence (AI), with pre-trained deep convolutional neural networks (DCNNs) to identify TB on chest x-rays. The results of the research were promising.

“We determined that deep learning with DCNNs can classify TB at chest radiography,” says Lakhani, lead author on the study. “This method means that radiography may facilitate screening and evaluation efforts in TB-prevalent areas with limited access to radiologists.”

The potential for improving detection of TB, one of the top 10 causes of death worldwide, was a strong motivator for the research, Lakhani says. In 2016, approximately 10.4 million people fell ill from TB, resulting in 1.8 million deaths, according to the World Health Organization (WHO).

“An automated solution—or proof that an automated solution could work—could change the landscape of this disease, particularly in developing regions like Sub-Saharan Africa,” Lakhani says. “A great priority of WHO is ending TB.”

For the study, Lakhani and Sundaram obtained 1,007 chest x-rays of patients with and without active TB, drawn from multiple datasets provided by the National Institutes of Health, the Belarus Tuberculosis Portal, and TJUH. The cases were split into training (68%), validation (17.1%), and test (14.9%) sets.
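The split described above can be sketched in a few lines of Python. Only the proportions and the total of 1,007 radiographs come from the article; the random-shuffle procedure and function name are illustrative assumptions.

```python
import random

def split_dataset(n_cases, train_frac=0.68, val_frac=0.171, seed=0):
    """Partition case indices into training, validation, and test sets.

    The fractions (68% / 17.1% / 14.9%) are those reported in the
    article; the shuffling and rounding details are assumptions.
    """
    indices = list(range(n_cases))
    random.Random(seed).shuffle(indices)
    n_train = round(n_cases * train_frac)
    n_val = round(n_cases * val_frac)
    train = indices[:n_train]
    val = indices[n_train:n_train + n_val]
    test = indices[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(1007)
print(len(train), len(val), len(test))  # 685 172 150
```

Note that 14.9% of 1,007 works out to the 150 held-out test cases mentioned below.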

The cases were used to train two different DCNN models—AlexNet and GoogLeNet—which learned from TB-positive and TB-negative x-rays. The models’ accuracy was tested on 150 cases that were excluded from the training and validation sets. The best-performing AI model was an ensemble of AlexNet and GoogLeNet, with a net accuracy of 96%.
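One common way to combine two classifiers like this is to average their predicted probabilities. The article does not spell out the exact combination rule, so the sketch below is an assumption, with illustrative function names and a conventional 0.5 decision threshold.

```python
def ensemble_probability(p_alexnet, p_googlenet):
    """Average the TB probabilities output by the two DCNNs.

    Probability averaging is one standard ensembling scheme; the
    precise rule used in the study is not stated in the article.
    """
    return (p_alexnet + p_googlenet) / 2.0

def classify(p_tb, threshold=0.5):
    """Label a radiograph from its ensemble TB probability."""
    return "TB" if p_tb >= threshold else "normal"

print(classify(ensemble_probability(0.9, 0.7)))  # TB
print(classify(ensemble_probability(0.2, 0.3)))  # normal
```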

The two DCNN models disagreed on 13 of the 150 test cases. For these cases, the researchers evaluated a workflow in which an expert radiologist interpreted the images, diagnosing all 13 correctly. This workflow, incorporating a human into the loop, raised the net accuracy to nearly 99%.
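The human-in-the-loop step can be sketched as a simple adjudication rule: accept the label when the two networks agree, and defer to the radiologist when they do not. The function and argument names below are illustrative, not taken from the study.

```python
def adjudicate(pred_alexnet, pred_googlenet, radiologist_read):
    """Resolve a single case in the human-in-the-loop workflow.

    If the two DCNNs agree, their shared label is accepted; if they
    disagree, the case is routed to the radiologist, whose reading
    becomes the final label (mirroring the workflow in the article).
    """
    if pred_alexnet == pred_googlenet:
        return pred_alexnet
    return radiologist_read

# Agreement: the models' shared label stands.
print(adjudicate("TB", "TB", "normal"))      # TB
# Disagreement: the radiologist's reading decides.
print(adjudicate("TB", "normal", "normal"))  # normal
```

Routing only the 13 disputed cases to a human keeps the radiologist workload small while recovering most of the residual errors, which is how the net accuracy climbs from 96% toward 99%.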

The DCNNs were not trained to distinguish potential mimics of pulmonary TB, such as lung cancer, bacterial pneumonia, or tropical diseases, according to Lakhani.

“The goal of such algorithms is to differentiate normal from abnormal chest x-rays with respect to TB evaluation,” Lakhani says. “Those flagged as abnormal with characteristics of pulmonary TB should be followed by bacteriologic confirmation, as suggested by screening workflows presented by WHO. The goal in these workflows is cost savings, as the cost of digital radiography has decreased substantially in the past decade.”