A number of vulnerabilities, known collectively as deep learning adversaries, hold artificial intelligence (AI) back from its full potential in applications like improving medical imaging quality and computer-aided diagnosis.

With the support of a National Science Foundation Faculty Early Career Development Program (CAREER) Award, Pingkun Yan, PhD, an assistant professor of biomedical engineering at Rensselaer Polytechnic Institute in Troy, N.Y., will lead a team of researchers in developing new AI techniques that protect algorithms from such vulnerabilities, which include contaminated data, malicious attacks, and independent algorithms that interfere with one another.

“We see great potential in AI, machine learning, and deep learning in biomedical imaging,” says Yan, a member of the Center for Biotechnology and Interdisciplinary Studies (CBIS) at Rensselaer. “We just need to build a system one step at a time to make it more robust, usable, and understandable.” 

AI techniques, like the ones Yan and his team have previously developed and tested, have the potential to advance image reconstruction, image quality, computer-aided diagnosis, and image-guided surgery. But deep learning adversaries, Yan says, remain a barrier to widespread implementation because they can produce inaccurate or misleading images.

“The adversary might cause the system to generate undesired or unwanted outputs,” Yan says. “The clinicians may be confused by the output of the system, or a wrong diagnosis may just slip through. It could cause significant medical errors in the diagnosis process and cause significant cost to our healthcare system.” 

While some tools currently exist to protect algorithms against adversaries, Yan says they often come at the cost of reduced performance. The goal of this research, funded with a nearly $550,000 grant, is to solve that problem by developing AI techniques robust enough to guard against deep learning adversaries without degrading the imaging tasks the algorithms are meant to carry out.

The tools Yan and his team develop will be able to detect adversarial images, correct the corrupted inputs, and improve the quality of the information the AI system produces. New algorithms the team develops must consider the overall system, not just individual components, Yan says, so that accuracy isn't sacrificed as robustness is increased.
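To give a sense of what an "adversarial image" is, the sketch below shows a generic fast gradient sign method (FGSM) perturbation in PyTorch. This is only a minimal illustration of the kind of adversarial input that robust medical imaging systems must detect and correct; it is not the method Yan's team is developing, and the `model`, `image`, and `label` inputs are assumed placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarial copy of `image` using the fast gradient sign method.

    A tiny, often imperceptible perturbation is added in the direction that
    increases the model's loss, which can flip the model's prediction.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the gradient, then clamp to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Defenses of the kind described above would flag inputs like the one produced here, or restore them, before they reach the diagnostic model.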

“This CAREER Award is a recognition of Professor Yan’s innovative research with broad applications to multiple areas,” says Deepak Vashishth, the director of CBIS. “For example, he and his team continue to advance the potential of AI to improve human health, while tackling persistent challenges that must be overcome along the way.” 

“We envision such systems will help to make healthcare imaging more robust, more accurate, and will also boost the confidence of the users—especially the medical professionals,” Yan says.