Summary: EchoCLIP, a machine learning algorithm developed by AI experts at Cedars-Sinai Medical Center and the Smidt Heart Institute, interprets echocardiogram images with high accuracy, helping clinicians identify patients who need treatment by assessing heart function, recognizing past surgeries and implanted devices, and detecting clinically significant changes over time.

Key Takeaways

  1. EchoCLIP, developed from a dataset of over 1 million echocardiograms, exhibits strong performance in assessing cardiac function, aiding in the identification of patients needing treatment.
  2. The algorithm’s foundation model efficiently identifies patients across various studies and time points, facilitating the detection of clinically significant changes in heart health.
  3. EchoCLIP demonstrates the capability to identify implanted intracardiac devices and accurately recognize clinically important changes, such as prior heart surgeries, from echocardiogram images.


AI experts at Cedars-Sinai Medical Center and the Smidt Heart Institute compiled a dataset of over 1 million echocardiograms and their clinical interpretations. This data fueled the creation of EchoCLIP, a potent machine learning algorithm capable of interpreting echocardiogram images and analyzing key findings.

EchoCLIP’s design and evaluation, detailed in a manuscript published in Nature Medicine, indicate that its interpretation provides clinician-level evaluations of heart function, past surgeries, and device assessments. This tool aids clinicians in identifying patients requiring treatment. Moreover, EchoCLIP’s foundation model efficiently identifies patients across multiple videos, studies, and time points, facilitating the detection of clinically significant changes in a patient’s heart.

Cardiology’s AI Revolution

“To our knowledge, this is the largest model trained on echocardiography images,” says corresponding author David Ouyang, MD, a faculty member in the cardiology department in the Smidt Heart Institute and in the Division of Artificial Intelligence in Medicine. “Many previous AI models for echocardiograms are only trained on tens of thousands of examples. In contrast, EchoCLIP’s uniquely strong performance in image interpretation is a result of its training on almost tenfold more data than existing models.”

“Our results suggest that large datasets of medical imaging and expert-adjudicated interpretations can serve as the basis for training medical foundation models, which are a form of generative artificial intelligence,” Ouyang says. He says this advanced foundation model could soon help cardiologists assess echocardiograms by generating preliminary assessments of cardiac measurements, identifying changes that happen over time, and recognizing common disease states.

Inside EchoCLIP

The team of investigators built a dataset of 1,032,975 cardiac ultrasound videos and corresponding expert interpretations to develop EchoCLIP. Key takeaways from the study include:

  • EchoCLIP displayed strong performance when assessing cardiac function using heart images.
  • The foundation model could identify implanted intracardiac devices, such as pacemakers and implanted mitral and aortic valves, from echocardiogram images.
  • EchoCLIP accurately identified unique patients across studies, recognized clinically important changes such as prior heart surgery, and enabled the development of preliminary text interpretations of echocardiogram images.
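As its name suggests, EchoCLIP appears to follow a CLIP-style contrastive design, in which images and text interpretations are mapped into a shared embedding space so that an image can be matched to the candidate interpretation whose embedding lies closest to it. The toy sketch below illustrates that matching idea only; the captions, embedding size, and vectors are made up for illustration and are not taken from the actual EchoCLIP model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_interpretation(image_emb, text_embs, captions):
    """Return the caption whose text embedding is closest to the image embedding."""
    scores = [cosine_similarity(image_emb, t) for t in text_embs]
    return captions[int(np.argmax(scores))]

# Toy embeddings standing in for the outputs of trained image/text encoders.
rng = np.random.default_rng(0)
captions = [
    "severely reduced ejection fraction",
    "normal left ventricular function",
    "pacemaker lead visualized",
]
text_embs = [rng.normal(size=64) for _ in captions]

# Simulate an echocardiogram whose embedding lies near the first caption's.
image_emb = text_embs[0] + 0.1 * rng.normal(size=64)

print(best_interpretation(image_emb, text_embs, captions))
# → severely reduced ejection fraction
```

The same embedding-distance idea would also support re-identifying a patient across studies: two echocardiograms from the same heart should embed closer together than scans from different patients.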

AI and Cardiology Converge

“Foundation models are one of the newest areas within generative AI, but most models do not have enough medical data to be useful in the healthcare arena,” says Christine M. Albert, MD, MPH, chair of the cardiology department in the Smidt Heart Institute and the Lee and Harold Kapelovitz Distinguished Chair in Cardiology.

Albert, who was not involved in the Nature Medicine study, says, “This novel foundation model integrates computer vision interpretation of echocardiogram images with natural language processing to augment cardiologists’ interpretation of echocardiograms.”