Small leadless implanted electronic devices (LLIEDs) have emerged as a safer alternative to lead-dependent cardiac rhythm-management devices, driven by advances in miniaturization, battery technology, and communication. Intrathoracic LLIEDs can not only provide cardiac pacing but also monitor cardiovascular and electrophysiologic activity, as well as non-cardiovascular physiology.
However, the subsequent detection and identification of these devices (location, general category, specific type) are critical, especially before magnetic resonance imaging (MRI) examinations, which involve electromagnetic and radiofrequency exposures.
In pre-MRI safety screening, the existing methods of direct patient and physician interaction, review of electronic medical records (EMR), and chest X-ray (CXR) provide limited information and are therefore insufficient for recognizing evolving, infrequently encountered, and much smaller LLIEDs. The problem is compounded by suboptimal screening technique, motion-related blurring, and similarities in device appearance, so LLIEDs can easily be overlooked on a CXR, particularly in emergency situations.
In addition, the inability to tell whether an LLIED is a pacemaker or a recorder can put the patient at considerable risk during an MRI scan. Although both are considered “MRI conditional,” a pacemaker requires device and patient oversight by cardiology before and after, and possibly during, the MRI examination.
Responding to the need for prompt and accurate detection of LLIEDs during MRI pre-screening, researchers led by Richard D. White, an eminent radiologist at Mayo Clinic in Florida, previously developed an artificial intelligence (AI)-based model. In their recent study published in the SPIE Journal of Medical Imaging, White’s team assessed the readiness and operational prerequisites of this model with the aim of progressing toward real-world application.
“LLIEDs span a spectrum of categories based on their MRI exposure safety, from being ‘MRI conditional’ to being ‘MRI unsafe.’ Our AI model for recognizing continuously evolving LLIEDs is based on LLIED classification obtained from the identification and labeling of regions of interest from retrospective and/or future organization-wide CXR data,” explains White.
For the pre-deployment assessment, the team used a two-tier cascading methodology comprising LLIED detection (tier 1) followed by classification (tier 2). They performed five-fold cross-validation during tier 1 to assess the durability of the “Original LLIED Model,” which initially comprised 9 LLIED categories. To imitate real-world trialing, they further applied the two-tier cascading AI model to 150 new CXR images from randomly selected newer patients, which already revealed 3 new LLIED categories.
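In broad strokes, such a two-tier cascade runs a detector over the whole radiograph and then hands each proposed region to a separate classifier. The following is only a minimal sketch assembled from off-the-shelf torchvision components; the study’s actual architectures, trained weights, and thresholds are not described here, so every choice below (Faster R-CNN detector, ResNet-18 classifier, 0.5 score threshold, 224-pixel patches) is an assumption made purely for illustration.

```python
# Illustrative sketch only: a generic two-tier cascade (detect, then classify)
# built from off-the-shelf torchvision components. All model choices and
# thresholds are assumptions, not the study's actual implementation.
import torch
import torchvision
from torchvision.transforms.functional import resized_crop

# Tier 1: single-class detector that proposes LLIED bounding boxes on a CXR.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2  # background + "LLIED"
)
detector.eval()

# Tier 2: classifier that assigns each detected region to one of 9 LLIED categories.
NUM_LLIED_CLASSES = 9  # the "Original LLIED Model"; later expanded to 12
classifier = torchvision.models.resnet18(num_classes=NUM_LLIED_CLASSES)
classifier.eval()

@torch.no_grad()
def two_tier_inference(cxr: torch.Tensor, score_threshold: float = 0.5):
    """Run tier-1 detection, then tier-2 classification on each detected region.

    cxr: a 3 x H x W tensor holding one chest radiograph.
    Returns a list of (box, detection_score, predicted_class) tuples.
    """
    detections = detector([cxr])[0]            # tier 1: candidate LLIED locations
    results = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        # Crop the proposed region and resize it to the classifier's input size.
        patch = resized_crop(cxr, top=y1, left=x1,
                             height=max(y2 - y1, 1), width=max(x2 - x1, 1),
                             size=[224, 224])
        logits = classifier(patch.unsqueeze(0))  # tier 2: category prediction
        results.append((box.tolist(), float(score), int(logits.argmax(dim=1))))
    return results

# Example call on a dummy radiograph-sized tensor:
print(two_tier_inference(torch.rand(3, 1024, 1024)))
```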
The team also incorporated several essential technical developments to facilitate real-world deployment of their AI model. These included a Zero-Footprint (ZF GUI/Viewer) platform for image viewing, DICOM Structured Reports (DICOM-SR) to enable end-user adjudication of inference results, and, most importantly, continuous learning, with the 3 new LLIED types added to create a 12-class “Updated LLIED Model.” They then tested this model on additional new cases using the same two-tier methodology.
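One common way to implement this kind of continuous learning is to widen the classifier’s output layer from 9 to 12 classes while preserving the weights already learned for the original categories, then fine-tune on the newly collected cases. The sketch below assumes the hypothetical ResNet-18 tier-2 classifier from the previous example; the team’s actual retraining procedure may differ.

```python
# Illustrative sketch only: extending a trained 9-class classifier head to 12
# classes before fine-tuning on newly encountered LLIED types. The ResNet-18
# backbone is an assumption, not the study's documented architecture.
import torch
import torch.nn as nn
import torchvision

def expand_classifier_head(model: nn.Module, old_classes: int, new_classes: int) -> nn.Module:
    """Swap the final fully connected layer for a wider one, copying over the
    weights already learned for the original classes so only the rows for the
    new classes start from random initialization."""
    old_fc = model.fc                                   # ResNet's final Linear layer
    new_fc = nn.Linear(old_fc.in_features, new_classes)
    with torch.no_grad():
        new_fc.weight[:old_classes] = old_fc.weight     # reuse learned weights
        new_fc.bias[:old_classes] = old_fc.bias
    model.fc = new_fc
    return model

classifier = torchvision.models.resnet18(num_classes=9)        # 9-class "Original" head
updated = expand_classifier_head(classifier, old_classes=9, new_classes=12)
print(updated.fc)   # Linear(in_features=512, out_features=12)
# The 12-class model would then be fine-tuned on CXR patches that include
# the 3 newly added LLIED types.
```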
Tier 1 yielded 100% detection/location sensitivity for LLIEDs with both the 9-class and 12-class models, and this durability was confirmed by the five-fold cross-validation. In tier 2, both models achieved very high accuracy in identifying the type of LLIED (MRI safety category and specific type). While no LLIEDs went undetected in tier 1, the few misidentifications in tier 2 were attributed to suboptimal image quality. Notably, the AI model did not misidentify any of the “MRI stringently conditional” or “MRI unsafe” LLIEDs.
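For context, detection sensitivity here means the fraction of LLIEDs actually present that the tier-1 stage located, i.e. TP / (TP + FN). The snippet below only illustrates how per-fold sensitivity would be tallied; the fold counts are hypothetical placeholders, not the study’s data.

```python
# Illustrative sketch only: computing per-fold detection sensitivity (recall).
# The fold tallies below are made-up placeholders, not the study's results.
from typing import Dict, List

def detection_sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity = TP / (TP + FN): the fraction of present LLIEDs that were detected."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical per-fold tallies from a five-fold cross-validation:
folds: List[Dict[str, int]] = [
    {"tp": 30, "fn": 0}, {"tp": 28, "fn": 0}, {"tp": 31, "fn": 0},
    {"tp": 29, "fn": 0}, {"tp": 30, "fn": 0},
]
for i, f in enumerate(folds, start=1):
    print(f"fold {i}: sensitivity = {detection_sensitivity(f['tp'], f['fn']):.2%}")
```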
Discussing the significance of their study, White says, “While the actual value of the AI model can only be assessed in a true real-world clinical setting, these results harbor optimism in favor of deploying the AI model in the near future for assisting pre-screening evaluation by radiologists for patient safety.”
To mimic real-world conditions while validating their model, the team incorporated continuous learning, retraining, and updating of the AI model based on end-user experience. This is the first study of its kind to report AI-based radiographic detection and identification of leadless implanted electronic devices.
Going forward, White and his team plan to capitalize on these results and launch the AI model in a relevant clinical setting. They also expect to address the limitations of this study by future retraining and fine-tuning of the AI model.