Editor’s Note: Due to an editing error, the print version of this article misstated how frequently technologists must scan the phantom. The requirements, which are scheduled to take effect in August, call for the technologist to conduct two scans on the American College of Radiology phantom once a week, not once a month, in addition to checking film quality and verifying that the components of the MRI system are in good condition. Our apologies to Dr. Bell and the ACR.

Reality can be boring. How else can we explain the popularity of soap operas among Americans? People love to get caught up in rumor and hyperbole, and many appear to be doing just that with MRI accreditation through the American College of Radiology (ACR).

In its short 5-year history, ACR’s MRI accreditation program has been attacked as a mechanism to drive low-field systems out of business, as too costly, as too labor intensive, and as fundamentally unfair. However, to borrow from act five of Macbeth, past criticisms appear to have been “full of sound and fury, signifying nothing.” The program’s wide adoption among all types of MRI providers argues strongly for its efficacy and fairness. To date, almost 4,300 MRI units from more than 3,300 sites have applied, and more than 3,000 systems (at almost 2,600 sites) have been accredited. Thus, about 60% of the MRI sites in the United States have achieved accreditation or are in the process of doing so.

The benefits of ACR MRI accreditation have also been recognized by the reimbursement community. Payors are demanding proof of quality and consistency. Almost 2 years ago, Aetna/US Healthcare was among the first to announce that it would require its providers to have MRI accreditation. Since that time, Rhode Island and Connecticut have made accreditation a state requirement, Blue Cross and Highmark Blue Shield of Pennsylvania require the ACR program, Florida insurance regulations allow higher reimbursement for accredited sites, and Alabama Blue Cross Blue Shield is preparing to announce its requirements.

ACR MRI accreditation was originally designed to establish minimum performance criteria for MRI examinations in the United States. After 5 years, it has become apparent that the original concept addressed only part of the problem. Setting universal minimum standards does not necessarily guarantee superior performance. The program has made tremendous strides by raising awareness of performance criteria in the MRI community. However, some MRI units generate substantially higher signal-to-noise ratio (SNR) values per unit time than others, and such systems may not be challenged by the minimum specifications that all systems must achieve. Therefore, last August the ACR announced two new requirements, scheduled to take effect in August 2002. The additions give individual sites the ability to test against their own baselines and to sense trouble, often before it reaches critical proportions.

1. Every accredited MRI site must have a functioning weekly quality control (QC) program meeting minimum ACR standards.

2. Each MRI system must be benchmarked annually. ACR strongly suggests this be done by a qualified medical physicist or MRI scientist.

Although ACR MRI accreditation has established basic performance metrics, rumor and mischaracterization continue to distort the opportunity it offers to MRI providers. Some complain that on-site QC is too difficult, too time-consuming, and simply not needed. They argue that vendor service is all that is needed to ensure high-quality imaging. However, it is physicians, not vendors, who select clinical protocols, and vendors usually have only minor input into clinical image quality after initial system applications. Further, in many areas local service personnel may each be responsible for 6 to 10 MRI systems, which radically limits the time and attention they can devote to an individual site. The ACR program allows participants to demonstrate (1) a basic level of achievement in the clinical and technical aspects of MRI, and (2) consistent instrument performance. It also encourages local personnel to become more sensitive to changes in their systems.

PERSONAL TESTIMONY

Why is more testing necessary? For the past 15 years, I have assisted health care providers through performance testing of MRI systems and by helping with quality control issues. Based on my testing of more than 350 magnets from 0.18 T to 3 T, I have found problems that need attention in approximately 70% of cases. These have ranged from simple gradient calibration errors or laser alignment light adjustments to bad head and body coils, internal landmark errors, broken patient tables, RF leakage, and inadequate site control of magnetic fringe fields. I have seen vibration-induced SNR loss, excessive ghosting artifacts, distorted monitors, and laser cameras out of calibration. In most instances, the local physicians and technologists were unaware that their system was not performing properly. A competent on-site quality control program can help guard against such difficulties.

NEW ACR REQUIREMENTS

Weekly QC. What is included in the new ACR on-site QC requirements? Once a week, local technologists must conduct two scans on the ACR phantom: a sagittal slice that takes about 1 minute and an 11-slice axial series that takes slightly more than 2 minutes. These images are reviewed according to the nine criteria listed below. Additionally, technologists must check film quality and verify that the components of their MRI system are in good condition. Total time for the process, including data analysis, is usually no more than 15 to 20 minutes.

Position accuracy. The ACR phantom allows the local user to check the accuracy of the longitudinal and the transverse alignment lights. Further, this test examines the accuracy of an axial slice prescribed from a sagittal slice (internal landmark accuracy).

Center frequency drift. All superconductive magnets lose strength over time, albeit very slowly. As this occurs, either the RF circuitry must be adjusted to the new static field level or the magnet must be boosted back up to its original strength. Monitoring the center frequency tracks this change and can flag abnormal drift, which may indicate a magnet problem.
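
To put such a drift in perspective, the Larmor relationship ties the center frequency to the static field strength (about 42.58 MHz per tesla for protons). The short sketch below is offered only as an illustration, not as part of the ACR requirements; the 500 Hz drift is a hypothetical example.

    # Illustration only (not an ACR requirement): the Larmor relationship
    # f = (gamma/2pi) * B0, with gamma/2pi ~= 42.58 MHz/T for protons, converts
    # an observed center-frequency drift into an implied static-field change.

    GAMMA_MHZ_PER_T = 42.58  # proton gyromagnetic ratio over 2*pi, in MHz per tesla

    def implied_field_drift_microtesla(freq_drift_hz: float) -> float:
        """Static-field change (microtesla) implied by a center-frequency drift (Hz)."""
        return freq_drift_hz / (GAMMA_MHZ_PER_T * 1e6) * 1e6

    # Hypothetical example: a magnet whose center frequency has fallen 500 Hz
    print(f"Implied field drift: {implied_field_drift_microtesla(500):.1f} microtesla")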

Transmitter gain (attenuation). The amount of RF energy needed to produce a 90-degree pulse depends on many factors. Tracking the transmitter gain over time therefore provides a useful probe of system performance.

Geometric accuracy. Distortions of shapes and sizes within images can obscure diagnostic information. Checking known dimensions verifies proper calibration.

High contrast spatial resolution. The ability to distinguish small structures is usually determined by the field-of-view (FOV) and the acquisition matrix. However, spatial filters and low SNR can limit resolution.
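
As a simple illustration of the first point, the nominal in-plane pixel size is the FOV divided by the acquisition matrix. The numbers in the sketch below (a 250-mm FOV and a 256-point matrix) are illustrative values, not ACR-specified acquisition parameters, and the detail actually resolved can be coarser than this nominal figure when filtering or low SNR intervenes.

    # Minimal sketch: nominal in-plane pixel size = field of view / acquisition matrix.
    # The 250 mm FOV and 256-point matrix below are illustrative, not ACR parameters.

    def pixel_size_mm(fov_mm: float, matrix: int) -> float:
        """Nominal in-plane pixel dimension in millimeters."""
        return fov_mm / matrix

    print(f"Nominal pixel size: {pixel_size_mm(250, 256):.2f} mm")  # about 0.98 mm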

Low contrast object detectability (LCOD). Small objects that differ slightly in signal strength from surrounding tissue may be very hard to detect. The LCOD insert offers the opportunity to assess objects as small as 1.5 mm with contrast differences as low as 1.4%.

Assessment of image artifacts. Ghosting, zippers, bright spots, and other anomalies should be brought to the attention of vendor service as soon as possible. These can degrade image quality and may presage equipment failure.

Hard copy image QC (film). Review media should offer the same contrast variations as found in the original data. Film and cameras should be checked to verify the accuracy of the recorded image.

Visual checklist of scanner functionality. A thorough check of instrument components can detect misalignment, strange noises, frayed wires, and other sources of potential problems. Bringing these to the attention of vendor service on a timely basis can improve uptime and avoid catastrophic failure.

ANNUAL SYSTEM BENCHMARK

The annual benchmark analysis builds on the weekly review. In addition to a review of the weekly parameters and the original technical ACR accreditation criteria, this in-depth assessment examines static field homogeneity, slice position accuracy, slice thickness accuracy, RF coil function (SNR, uniformity, for instance), interslice cross-talk, and soft copy monitors. The person responsible for the annual review, presumably a qualified medical physicist or MRI scientist, will also review the weekly QC data, suggest appropriate action limits, and be a resource when variations are found or questions arise. Such a review usually takes about 8 to 10 hours on-site to collect data and can usually be scheduled after hours to minimize impact on normal patient scanning. Another 8 to 10 hours of office time are typically consumed analyzing the data and authoring the final report. This document should describe what tests were conducted, the data obtained, the analysis against standards, and recommendations for further action. Details should be sufficient to allow any qualified person to repeat the tests based on locally available materials.
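
For readers wondering what “appropriate action limits” might look like in practice, the sketch below shows one simple screening rule: flag weekly values that fall outside two standard deviations of a baseline period. Both the rule and the transmitter-attenuation readings are assumptions used only for illustration; the reviewing physicist or MRI scientist sets the actual limits for a given system.

    # Illustrative sketch of screening weekly QC values (eg, transmitter attenuation
    # or center frequency) against action limits. The +/- 2 standard deviation rule
    # and the dB readings below are assumptions for illustration only; actual limits
    # should be set by the qualified medical physicist or MRI scientist.
    from statistics import mean, stdev

    def out_of_limits(baseline, weekly, n_sd=2.0):
        """Return (week, value) pairs falling outside mean +/- n_sd of the baseline."""
        center, spread = mean(baseline), stdev(baseline)
        low, high = center - n_sd * spread, center + n_sd * spread
        return [(week, v) for week, v in enumerate(weekly, start=1) if not low <= v <= high]

    # Hypothetical transmitter-attenuation readings (dB): a stable baseline period,
    # followed by this quarter's weekly values
    baseline = [12.1, 12.0, 12.2, 12.1, 11.9, 12.0, 12.1, 12.2]
    this_quarter = [12.0, 12.1, 12.2, 12.9, 12.1]
    print(out_of_limits(baseline, this_quarter))  # flags week 4 at 12.9 dB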

Now that the reality of the ACR MRI program has been reviewed, it may be useful to explore some of the myths and rumors that appear to be circulating.

Myth No. 1. ACR MRI accreditation is a badge of honor that will demonstrate the high level of service offered by an MRI site. Although ACR accreditation is certainly desirable, passing levels for the criteria are the bare minimum standards that every MRI system and operation should be able to meet. For example, the low contrast object detectability (LCOD) test consists of four disks, each of which contains 10 spokes of holes. The passing level is the ability to see 9 of the 40 spokes. Most high-field systems should not be satisfied with fewer than 30.

Myth No. 2. The program is designed to favor high-field systems. Any commercially available whole-body system, if operating properly, can meet or exceed the passing criteria.

Myth No. 3. The program is too expensive. The ACR program costs about $2,700, including the phantom and the initial ACR review. Considering that the average MRI site generates about $1,700,000 in revenue each year (~2,750 examinations per year at an average technical fee of $600), 1 day of downtime costs about $6,750. If the initial review or on-site QC reduces downtime by half a day per year, the program has more than paid for itself, not to mention ancillary benefits such as reduced liability exposure from better operational performance.
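
For those who want to check the arithmetic, the short sketch below reproduces the payback estimate from the figures quoted above; the 252 working days per year is an assumption used only for the illustration.

    # Back-of-the-envelope payback check using the figures quoted above.
    annual_revenue = 1_700_000   # quoted above (~2,750 exams/year at a $600 average technical fee)
    working_days = 252           # assumed scanning days per year (illustration only)
    program_cost = 2_700         # phantom plus initial ACR review

    downtime_cost_per_day = annual_revenue / working_days   # about $6,750 per day
    breakeven_days = program_cost / downtime_cost_per_day   # about 0.4 day of avoided downtime
    print(f"${downtime_cost_per_day:,.0f} per day of downtime; "
          f"break-even after avoiding {breakeven_days:.2f} days")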

Myth No. 4. On-site quality control takes too much time. Acquiring data for initial submission takes about 1 hour of system time and 1 to 2 days for a supervisor to collect the necessary paperwork. On-site QC requires 3 minutes of scanning and about 10 to 15 minutes of technologist time once each week.

Myth No. 5. My vendor provides all of the quality control that I need. Vendors usually do not monitor clinical image quality. They often look for changes in electronic parameters that may or may not reflect image alterations.

Myth No. 6. Quality control just is not worth the time and energy. Many payors appear to disagree with this attitude. Demonstrable quality is a high priority.

Before instituting your QC program, request a system tune-up from your vendor. Vendors have equipment and phantoms to test many imaging parameters, and your service engineer may even be willing to take an active role in your testing. Additionally, make sure the SMPTE [Society of Motion Picture and Television Engineers] test pattern (with the proper window and level settings) is available in your software. You will need this for the hard copy (film) QC.

MRI instruments are complex and sensitive devices. In my 15 years of testing MRI systems, I have observed problems in more than half of the units examined. A small commitment of time and effort can yield the rewards of increased uptime, more consistent performance, and better trained personnel. ACR MRI accreditation can provide reassurance that your present MRI services are excellent and a convenient way to help keep them state-of-the-art.

Robert A. Bell, PhD, is president of a health care consulting firm in Encinitas, Calif, [email protected], (858) 759-0150. He specializes in technical and operational services for diagnostic imaging equipment.