Cheryl Proval

A study conducted by Rebecca Seidel, MD, and Deborah Baumgarten, MD, MPH, among radiologists and radiology trainees in the radiology department of Emory University in Atlanta, turned up a conclusion that the authors suspect could be extrapolated across the academic landscape: there is a knowledge gap about pay for performance (P4P) programs among radiologists. Based on some of their data, published in the June issue of the Journal of the American College of Radiology, the conclusion could be taken a step further: most radiologists do not much care about P4P. Only 35% believed that P4P would be effective in improving quality; 42% were undecided; and 23% disagreed that P4P would have a positive impact on quality. Trainees proved far more cognizant than their professors of the fact that P4P would soon affect their income: when asked whether they believed P4P would have an impact on income from Medicare patients, 74% of trainees agreed that a portion of their future income would be tied to quality measures, while only 42% of faculty members believed this was true.

Not only have quality measures arrived, but radiologists can get paid by Medicare for reporting them. With very little ceremony and not enough fanfare, the Centers for Medicare and Medicaid Services (CMS) rolled out two radiology-related voluntary quality measures in July. Interventional radiologists may be eligible to participate in up to six of the 2007 CMS Physician Quality Reporting Initiative (PQRI) measures. Participating groups can earn a bonus of up to 1.5% for being at least 80% compliant in reporting.

(For more on these measures, see the PQRI guide at the ACR website.)

Although the quality measures look more like a fact-finding mission for a utilization management program than true quality measurement, any effort to link imaging with outcomes should be applauded and supported. Furthermore, radiologists should not complain about having to flag certain studies, at least not in front of their business managers, who will bear most of the burden of implementing a program to capture the requisite data. But radiologists must recognize that quality measures are here today in many forms, whether paid or unpaid, and that they will become an increasingly important part of their future.

Payors define quality in terms of efficiency, with utilization management programs; equipment, with privileging programs; and the ACR stamp of approval, with accreditation programs.

Consumers define quality by the speediness and readiness with which they can get the examination they need, the friendliness and efficiency of your staff, the size of the co-pay, and, yes, the comfort of your waiting room.

Referring physicians define quality by the clarity and accuracy of the report, the availability of subspecialized reads, access to images, and the speed of service.

How does radiology define quality? What is high-quality radiology?

Certainly, the ACR has developed programs, used by payors and referring physicians, that ensure the technology in use is properly serviced and calibrated and that the images produced contain an acceptable level of information for radiologists to interpret.

The college also has designed a peer review program that radiology practices can fairly easily integrate into their practice patterns and use internally to assess agreement with previously interpreted studies. The six new voluntary data registries launched by the ACR will also have a feedback component so that participating facilities can find out how their images and results compare with those produced by other facilities.

Those programs are valuable tools that can be used to distinguish acceptable facilities from inferior ones and to assess agreement among peers on a report. But defining quality is as much an act of interpretation as a radiologist's reading of an image. And just as the radiologist must take into account the age, gender, and symptoms of the patient when reading, so will the perception of quality vary not only with the buyer, but also with the rapidly evolving marketplace in which the buyer purchases these services.

Those perceptions can be at odds, and conflict is already percolating over a plan by UnitedHealthcare to release so-called quality and cost rankings of physicians. The plan was effectively turned back by outraged physicians in Missouri, who claimed that the data were flawed and that the cost rankings were based on the cost to the plan, not the cost to patients. The New York attorney general's office has threatened UnitedHealthcare with legal action if it follows through on a plan to release such data on New York physicians in September.

Radiology provides extraordinary value to health care, but it needs to spend more time assessing the needs of its constituents, evaluating the hallmarks of quality for each of its constituents, developing metrics to measure those hallmarks, and then communicating those measures to all constituents, including consumers. What good is a quality metric that only the radiologist knows and understands?

If radiology does not take a more proactive stand on the definition of quality, then it may have to settle for being judged on the variety of magazines in its waiting rooms, or, worse, the cost of the study.

Cheryl Proval is the business editor of Axis Imaging News.