So maybe a patient waits 20 or 30 minutes to have an elbow imaged. Perhaps protocols for an MRI vary slightly within a group practice. And, OK, radiologists do make mistakes when reading an x-ray now and then; radiologists are human, too. So is improving quality in radiology really such a big deal? Stephen Swensen, MD, past chairman of radiology at the Mayo Clinic, Rochester, Minn, and its current director for quality, believes that quality in radiology is important not only to patients, but also to the future of radiology in North America.

“Radiology as a commodity will crash and burn in this flat world,” Swensen says. “Right now, for cents on a dollar, you can have images interpreted in other parts of the planet using teleradiology. Unless we can differentiate our product by quality—meaning quality as a combination of outcomes, safety, and service—then why wouldn’t someone send their images to Bangalore, India, for that dramatic savings in a commodity market? We have to be able to not just say that we’re better; we have to be able to prove it.”

To avoid having radiology interpretations become an international commodity, Swensen believes that radiology must significantly distinguish itself through measurable benchmarks that are achieved through systems engineering and science. Doing so will demonstrate that on-site radiology services are so reliable, efficient, and convenient as to be almost incomparable. In effect, Swensen’s theory is that if consumers are given the choice between the Yugo of radiology with a questionable reputation, and the Toyota of radiology with a stellar one, they will pay a markedly higher price for the one that is verified as being accurate, safe, convenient, and cost-efficient.

Forget the Moral Imperative

Although the Institute of Medicine’s landmark report, “To Err Is Human,” published in 2000, addresses the ethics and cost savings of quality care in medicine, Swensen’s focus is on the business case for quality—which is inevitably beneficial to patients, as well.

New Medicare Quality Measures

A new Centers for Medicare and Medicaid Services (CMS) quality initiative will give radiologists a 1.5% bonus for voluntarily reporting two stroke-related quality measures to CMS. The program, known as the Medicare Physician Quality Reporting Initiative (PQRI), will allow physicians to receive up to a 1.5% bonus for all allowable Medicare billings from July 1, 2007, to December 31, 2007. Physicians must meet minimum reporting requirements for measures listed under CMS’s Physician Voluntary Reporting Program (PVRP).

Currently, 66 measures apply to different fields of medicine. The two measures that affect radiology for the purpose of this bonus are measure #10, which asks radiologists to report on stroke and stroke rehabilitation when using CT or MRI; and measure #11, which also applies to stroke and stroke rehabilitation when carotid imaging is utilized.

Reporting to CMS will be accomplished by adding special G codes or CPT II codes to the Medicare claim form. Radiologists must report each of the two stroke-related measures for 80% of all patients with the qualifying condition. The bonus payment will be calculated and distributed retrospectively, after all Medicare claims data is received for the last 6 months of 2007. CMS will likely impose caps on maximum payout to an individual physician or practice.
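As a rough sketch, the bonus mechanics described above can be expressed in a few lines of Python. The 80% reporting floor and the 1.5% bonus rate come from the program as described here; the measure labels, case counts, and dollar figures are hypothetical, and any cap CMS may impose on the payout is not modeled.

```python
# Sketch of the PQRI bonus logic. The 80% reporting floor and 1.5% rate are
# from the program as described; all counts and dollar amounts below are
# hypothetical, and any CMS cap on the payout is not modeled.
REPORTING_THRESHOLD = 0.80  # each measure reported for >= 80% of qualifying patients
BONUS_RATE = 0.015          # up to 1.5% of allowed Medicare billings for the period

def pqri_bonus(reported, eligible, allowed_charges):
    """Return the bonus if every measure meets the reporting floor, else 0."""
    for measure, n_reported in reported.items():
        if n_reported / eligible[measure] < REPORTING_THRESHOLD:
            return 0.0
    return allowed_charges * BONUS_RATE

# Hypothetical half-year figures for one radiologist:
reported = {"measure 10 (stroke CT/MRI)": 45, "measure 11 (carotid imaging)": 28}
eligible = {"measure 10 (stroke CT/MRI)": 50, "measure 11 (carotid imaging)": 30}
print(pqri_bonus(reported, eligible, allowed_charges=400_000.0))  # 6000.0
```

Note the all-or-nothing behavior: letting either measure slip below the 80% floor zeroes the entire bonus, which is why the reporting requirement applies per measure rather than in aggregate.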

The American College of Radiology (ACR) is developing six to eight additional quality measures with the American Medical Association (AMA) and the American Academy of Neurology (AAN) to be included in the PVRP, but these will not be eligible for the bonus. The ACR is nearing completion of the following three measure sets, which will be posted on the ACR web site for member comment:

  • Communication of Diagnostic Imaging Findings
  • Radiation Dose Management in CT Procedures
  • Management of Intravascular Iodinated Contrast Media Administration

For more information on the PVRP and PQRI programs and the latest about the new quality measures, radiologists can visit the ACR’s web site or the PVRP web site.

—T. Valenza

In his presentation at the 2006 Annual Meeting of the Radiological Society of North America (RSNA) entitled “The Business Case for Quality,” Swensen described radiology as a $100 billion industry that is subject to the same quality control goals as other industries. He went on to acknowledge that there is absolutely a moral imperative for better quality in radiology; then he asked his audience to forget about it.

“Assume that 300 North American patients die in a plane crash every day from medical errors,” he said. “Forget about the 3,000 American patients on a cruise ship who are harmed by medical errors every day. Let’s assume that we’re not concerned about that, or about the variation in care in North America, or that 40% of what we do is wasteful because of overutilization or mis-utilization. Let’s just talk about the business case for quality and not the moral imperative.”

For Swensen, the goals for quality in radiology should be based on essentially the same principles that large manufacturers have been using for decades—namely, reducing needless variation, waste, and defects.

“The 3Ms, GEs, and Toyotas of the world have [quality control] as a business strategy, knowing that if you drive out waste, variation, and defects, you save money,” he says. “Well, medicine and radiology must do it for patients because it’s the right thing for patients, and because we find more cancers, we have fewer complications from strokes, infections, and so on. But, by the way, if you do it with the right approach with systems engineering, you’ll also save money and have a better bottom line.”

Why Quality Now? Three Reasons

Few medical professionals would state that quality care has not always been a goal in their practices. However, Swensen says that now, more than ever, there are three basic reasons why radiologists must go further and accurately measure their level of quality with science and systems engineering.

The first reason is the aforementioned threat of having radiology become a one-size-fits-all product, thanks to the Internet and international teleradiology. But Swensen notes that the commoditization threat also comes from radiologists across the United States, and even across town. “If you have two or three hospitals in a city, you should be able to show that your results are better or why someone should come there. Otherwise, we become a commodity, and people will just go to the lowest price.”

Swensen’s second reason why measuring and improving quality is important today is the impending Medicare pay-for-performance (P4P) initiatives that will become part of everyday medicine. The Deficit Reduction Act of 2005 kept most Medicare reimbursements stable for physicians until 2008, but only in exchange for voluntarily complying with new P4P reporting measures. Consequently, P4P is a looming reality, and Swensen believes that if radiologists do not create their own evidence-based standards and measures for quality and outcomes, then the government will do it for them. (For more on P4P and its possible effects on radiology practices, read “P4P: A Coming Out Party for Outcomes?” in the December 2006 issue of Axis Imaging News.)

Third, and perhaps most important, Swensen says that taking responsibility for quality is part of the professionalism of being a physician. “We’ve always said that quality was important, and genuinely and sincerely, physicians thought that to be true,” he says. “But as a profession, we haven’t taken that to the level where we measure quality, safety, service, and outcomes of most everything we do, and then make those results transparent to whoever’s paying for it and to the patients.”

So What’s Wrong with Radiology?

Swensen first became interested in quality in radiology while studying for his master of medical management degree at Carnegie Mellon University’s Heinz School of Public Policy and Management, Pittsburgh. However, he credits his radiology colleague at the Mayo Clinic, C. Daniel Johnson, MD, with bringing the program to Mayo; Johnson believed that the clinic could not rest on its reputation and that it needed to be the best it could be.

Together, Johnson and Swensen developed a systems-based quality control program at Mayo. In an article they wrote for the Journal of the American College of Radiology, Johnson and Swensen outline where and when there are opportunities for improvement in radiology.1

Johnson and Swensen’s radiology value map pinpoints nine areas in radiology care that can be measured and improved, from the referral (where referring physicians may prescribe an inappropriate examination), to procedure protocols (where each radiologist within a practice may have varying protocols for the same procedure), to interpretation (where there are many opportunities for correcting errors). (See Table 1 below.)

Table 1. Mayo Clinic Radiology Quality Map Events and Metrics

  • Referring physician orders examination: ACR Appropriateness Criteria; intended examination (ordering error)
  • Appointment scheduled: access times
  • Initial radiology encounter: patient wait time; patient education (preparation, expectations, NPO status, diabetes); triage of patient health needs
  • Protocol selection: standardized protocol (best practice); IV contrast media (yes/no); IV/oral contrast media protocol
  • Patient examination: environment of care; safety, comfort; procedural complications; RN or RT credentialed; falls, infections, hand disinfectant; process effectiveness and efficiency
  • Interpretation: peer review, credentials; correct subspecialty interpretation; structured report; report answers clinical question
  • Report finalization: finalization errors
  • Communication of emergent/important findings: referring physician satisfaction; query answered/addressed
  • Outcomes: health improved? (QOL); patient satisfaction

Legend: IV = intravenous; NPO = nothing by mouth; QOL = quality of life; RN = registered nurse; RT = radiologic technologist

Within those nine areas, Johnson and Swensen outline many opportunities for improvements leading to more efficient, safer radiology care, including:

  1. the appropriateness of an examination ordered by the referring physician;
  2. performing evidence-based radiology;
  3. overutilization and underutilization of procedures;
  4. timely access to procedures based on urgency;
  5. waiting room times;
  6. procedure protocols that are standardized based on evidence-based best practices;
  7. patient safety during the examination (infections, falls, mislabeled exams, contrast-induced nephropathy, wrong procedure/site/side/patient, improper radiation doses);
  8. better peer review and avoiding conflicts of interest;
  9. decreasing the number of radiologic errors caused by poor perception, poor interpretation, lack of knowledge, or miscommunication;
  10. distributing prompt finalized reports to referring physicians;
  11. poor postprocedure communication with referring physicians; and
  12. lack of measured outcomes and transparency (to payors and to the public).

Variation: The Slippery Slope

Issues such as wait times and variation in protocols might seem unimportant compared with errors in interpretation, but Swensen believes variation becomes a slippery slope. In particular, Swensen mentions technologists having to look up each radiologist’s preferred protocols and contrast agents for the same procedure, which can be confusing and time-consuming and can lead to mistakes. “Variation is an environment that predisposes you to defects, mistakes, and inefficiency,” he says. “So, it costs more that way.”

Swensen believes that the short answer to variation is picking evidence-based best practices and having set standards of care that are based on clinical prediction rules.

As to who will ultimately choose the best practices, Swensen admits that payors who dictate P4P requirements may have the last word. But, he adds, “Individuals can start in their own practices, saying that if we’re going to be looking at, for example, diffuse lung disease with high-resolution CT, let’s pick a single protocol so that all of us do it the same way. Then, when a patient comes back, we can compare the same protocol exam to the one done 6 months ago, or 6 years ago, which is better patient care.”

Some may be concerned that having set standards will restrict radiologists’ personal judgments and expose them to malpractice suits, but Swensen thinks it might actually do the opposite.

“That’s the beauty of these clinical prediction rules or having standards of care,” Swensen says. “If you follow what’s considered to be the best care, then you have an opportunity to say, ‘I did not image this patient with CT because all the publications that lead to this clinical prediction rule, which is the highest level of evidence for appropriate use of imaging, say that this is the best way of doing it if the patient has these clinical signs and symptoms.’ So, it’s actually a help for that liability.”

Swensen also notes that once these standards and best practices are implemented, then innovation may be slowly tested against the outcomes of the current standards.

Other Solutions

Table 2. Mayo Clinic Radiology Quality Metrics.

Other solutions for quality are simpler but require a system in place to ensure compliance. For example, technologists and radiologists can reduce the risk of nosocomial infection by simply washing their hands between examinations.

To reduce perception and interpretation errors, Swensen and Johnson cite several studies where second readings, computer-aided detection, and dual interpretations with specially trained radiographers have reduced error rates—and also may be cost-efficient.2–6

To address overutilization, Swensen encourages radiologists and referring physicians to consult the American College of Radiology’s Appropriateness Criteria, which suggest appropriate radiology procedures for a variety of imaging and treatment decisions. If the criteria were one day linked electronically into medical records and office examination-ordering tools, they also could improve efficiency and reduce overutilization.

Swensen also says that radiologists should “maximize” communications with referring physicians in order to decrease postprocedure communication errors. Doing so may help reduce liability for malpractice.

“When you look at the top four causes for malpractice in radiology, one of them is failure to communicate results clearly and effectively,” he explains. “So, even if we find the cancer or the pneumonia or the pneumothorax, and we don’t communicate that via our report—or for urgent findings, via a phone call or page—to the ordering health care provider, we have fallen short of our commitment to patients. And we consistently fall short of that, and that’s one reason why [poor communication] is one of the top malpractice causes.”

To maximize the communication, Swensen recommends that radiologists submit structured reports and establish a follow-up mechanism that documents that the communication was made to the appropriate provider in a timely fashion.

Consumers with X-Ray Vision

The last step in Swensen and Johnson’s radiology value map is measuring outcomes. The metrics include patient satisfaction, morbidity, mortality, and quality of life.

Having those postprocedure facts and figures may be a useful tool for a radiology practice, but Swensen thinks this information should be available to the public as well. Will that mean that a patient can go to the Internet before a procedure and see how accurate a particular radiologist or hospital has been for mammography or any other procedure?

Swensen says yes. “When a woman is in a position to make a choice about the three screening centers in her metropolitan area, shouldn’t she know the accuracy rates of the three centers? Today, you don’t have a clue,” he says. “You probably can’t even tell how much the places charge—but, more importantly, how good they are. Why don’t we make that accurate and transparent?”

Furthermore, Swensen says that measuring outcomes for particular procedures will allow payors to be aware of radiologists who are performing poorly in a particular area. “Just because radiologists are board certified doesn’t mean they have the same level of accuracy in interpreting an exam,” he notes. “In mammography screening, the current median accuracy in the literature is 66%.7 If we want that median accuracy in the United States to be 71%, we’d have to take the bottom 6,000 radiologists, which is about one third of the practitioners, and say, ‘You have to improve your results, or you won’t be allowed to interpret mammography and get reimbursed for it.’”
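The arithmetic in this claim, that pulling the bottom third of readers up to a target accuracy lifts the national median to roughly that target, can be checked with a quick simulation. The accuracy distribution below is synthetic and purely illustrative; only the 66% median and the 71% target come from the quote.

```python
import random
import statistics

random.seed(0)
# Synthetic accuracy scores for ~18,000 radiologists, centered on the 66%
# median cited in the article (the distribution itself is invented).
scores = sorted(random.gauss(66, 8) for _ in range(18_000))

cutoff = len(scores) // 3   # the "bottom 6,000" readers, about one third
target = 71.0
# Suppose the bottom third improve to at least the target accuracy.
improved = [max(s, target) for s in scores[:cutoff]] + scores[cutoff:]

print(f"median before: {statistics.median(scores):.1f}%")    # about 66%
print(f"median after:  {statistics.median(improved):.1f}%")  # about 71%
```

Because a third of all scores end up sitting at or above the target, the 50th percentile of the improved distribution lands inside that block, which is why the median climbs to roughly the target value.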

The Way to Quality: Systems Engineering

Swensen does not believe that simply admitting mistakes and promising to work harder can achieve improvements in quality. Instead, solutions should be implemented through a scientific, systems-engineering approach that draws on many of the same quality control theories used by manufacturing industries.

Mayo initially developed its quality management program by working with The Baldrige Foundation. Later, Johnson and Swensen developed another approach based on a combination of Lean, Six Sigma, and the quality management principles of W. Edwards Deming. Swensen also mentions being influenced by W. Chan Kim and Renée Mauborgne’s book, Blue Ocean Strategy.8

Johnson and Swensen eventually used their knowledge to develop Mayo’s value management software, a quality management program that helps identify problems by tracking sentinel events and specific relevant measures. “You can measure everything from image labeling to nephropathy to hand-washing compliance and compliance with appropriateness criteria,” Swensen says. (See Table 2 above.)

No matter what protocol is put into service, Swensen says that the key to any value management tool is to use some sort of consistent, process-based systems approach.

In terms of quality management manpower, Mayo employs a core group of six individuals who exclusively manage quality control for radiology. Other hospital departments have their own teams.

Although having a quality control team might be more feasible for a large, well-funded facility like Mayo, Swensen says that all practices should implement some system, with costs proportional to the size of the group.

Administrators should expect to employ or consult with people who have expertise in systems engineering. Although the initial capital for the program may be expensive, Swensen is confident that the proper implementation of quality controls will have a return on investment in the long run.

“If it is done right, and the department owns the spectrum of activity—from technical activity and scanners to radiologists—instead of just taking a piece of it, this is a business strategy for increasing efficiency and saving money. Because as you have less variation, less waste, and fewer defects, your malpractice rates will go down, and your efficiency will go up. So, if you use a systems approach, driving out waste, variation, and defects, and you are in charge of the whole radiology enterprise, it is absolutely a cost-saving environment.”

P4P? Not at Mayo. P4P may ultimately have the final say on whether a radiologist is rewarded or penalized for performance. As for Mayo, Swensen says that the clinic currently does not reward physicians for quality improvements. “All of Mayo Clinic’s staff is salaried—unique in medicine today,” he notes. “There are no financial incentives for anything from productivity to academic rank. Socialism works very well and is absolutely the most patient-centered model.”

Tor Valenza is a staff writer for Axis Imaging News. For more information, contact .


  1. Swensen S, Johnson CD. Radiologic quality and safety: mapping value into radiology. J Am Coll Radiol. 2005;2:992–1000.
  2. Markus JB, Somers S, O’Malley BP, Stevenson GW. Double-contrast barium enema studies: effect of multiple reading on perception error. Radiology. 1990;175:155–156.
  3. Anderson ED, Muir BB, Walsh JS, Kirkpatrick AE. The efficacy of double reading mammograms in breast screening. Clin Radiol. 1994;49:248–251.
  4. Yerushalmy J, Harkness JT, Cope JH, Kennedy BR. The role of dual reading in mass radiography. Am Rev Tuberc. 1950;61:443–464.
  5. Stradling P, Johnston RN. Reducing observer error in a 70-mm chest radiography service for general practitioners. Lancet. 1955;1:1247–1250.
  6. Hessel SJ, Herman PG, Swensson RG. Improving performance by multiple interpretations of chest radiographs: effectiveness and cost. Radiology. 1978;127:589–594.
  7. Beam CA, Layde PM, Sullivan DC. Variability in the interpretation of screening mammograms by US radiologists: findings from a national sample. Arch Intern Med. 1996;156:209–213.
  8. Kim WC, Mauborgne R. Blue Ocean Strategy. Boston: Harvard Business School Press; 2005.