Study published in Nature shows AI excels at assessing and diagnosing cardiac function by analyzing echocardiogram images
According to Cedars-Sinai investigators, whose research was published in the peer-reviewed journal Nature, AI is more accurate at assessing and diagnosing cardiac function than echocardiogram assessments made by sonographers.
The findings are reportedly based on a blinded, randomized clinical trial of AI in cardiology led by investigators in the Smidt Heart Institute and the Division of Artificial Intelligence in Medicine at Cedars-Sinai.
“The results have immediate implications for patients undergoing cardiac function imaging as well as broader implications for the field of cardiac imaging,” says cardiologist David Ouyang, MD, principal investigator of the clinical trial and senior author of the study. “This trial offers rigorous evidence that utilizing AI in this novel way can improve the quality and effectiveness of echocardiogram imaging for many patients.”
In 2020, researchers at the Smidt Heart Institute and Stanford University developed one of the first AI technologies to assess cardiac function, specifically left ventricular ejection fraction, the key measurement used in diagnosing cardiac function. That earlier work was also published in Nature.
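Left ventricular ejection fraction, the measurement the AI estimates, is conventionally derived from the heart's end-diastolic and end-systolic volumes. The formula below is standard; the function name and the example volumes are illustrative, not taken from the study:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) volumes: EF = (EDV - ESV) / EDV * 100."""
    if edv_ml <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Illustrative values: EDV 120 mL, ESV 50 mL gives an EF of about 58.3%
print(round(ejection_fraction(120.0, 50.0), 1))  # prints 58.3
```

An EF in the mid-50s to low-70s percent range is generally considered normal, which is why a model that reads this number directly from echocardiogram images can support diagnosis.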
“This successful clinical trial sets a superb precedent for how novel clinical AI algorithms can be discovered and tested within health systems, increasing the likelihood of seamless deployment for improved patient care,” says Sumeet Chugh, MD, director of the Division of Artificial Intelligence in Medicine and the Pauline and Harold Price Chair in Cardiac Electrophysiology Research.
Building on those findings, the new study tested whether AI was the more accurate evaluator by comparing initial assessments made by AI with those made by sonographers (ultrasound technicians) across 3,495 transthoracic echocardiogram studies.
Among the findings: cardiologists more frequently agreed with the AI's initial assessment, correcting only 16.8% of AI assessments compared with 27.2% of sonographer assessments. The physicians were unable to tell which assessments had been made by AI and which by sonographers, and the AI assistance saved both cardiologists and sonographers time.
“We asked our cardiologists to guess if the preliminary interpretation was performed by AI or by a sonographer, and it turns out that they couldn’t tell the difference,” Ouyang commented. “This speaks to the strong performance of the AI algorithm as well as the seamless integration into clinical software. We believe these are all good signs for future AI trial research in the field.”
The clinical trial and the subsequently published research reportedly shed light on the pathway to regulatory approval for such tools.
“This work raises the bar for artificial intelligence technologies being considered for regulatory approval, as the FDA has previously approved artificial intelligence tools without data from prospective clinical trials,” says Susan Cheng, MD, MPH, director of the Institute for Research on Healthy Aging in the Department of Cardiology at the Smidt Heart Institute, and co-senior author of the study. “We believe this level of evidence offers clinicians extra assurance as health systems work to adopt artificial intelligence more broadly as part of efforts to increase efficiency and quality overall.”