Figure 1. Monitor calibration performed using DICOM test patterns.

Quality assurance (QA) and quality control (QC) have long been familiar terms in diagnostic and therapeutic radiology, with an excellent body of knowledge supporting processes ranging from darkroom fog checks and exposure timing to results charting and reject analysis. Comprehensive books on the subject of quality control have provided functional guidelines for more than 30 years. However, with the advent and rapid growth of digital imaging and picture archiving and communications systems (PACS), the direct applicability of such publications has diminished, and the well-defined QC processes and procedures used to manage film-based, analog imaging systems no longer fully apply. In fact, when digital imaging and PACS are implemented, a general reengineering and renegotiation of QC-related tasks, roles, and procedures is required.

The addition of digital imaging and PACS has moved us into an environment where images and associated data are merged into a logically composite folder of information related to each patient, study, series, and image. The PACS and radiology information system (RIS) manage the content, structure, and format of those data, as well as their display characteristics. The quality of the data in these composite folders is, naturally, a direct result of every factor that affected the data before it entered the folder and every factor that has affected its storage, display, and updating since.
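The patient, study, series, and image hierarchy described above maps directly onto the DICOM information model. A minimal sketch of such a composite folder is shown below; the field names are hypothetical simplifications, and a real PACS database carries far more attributes:

```
from dataclasses import dataclass, field
from typing import List

@dataclass
class Image:
    sop_instance_uid: str            # unique identifier for a single image
    acquisition_time: str = ""

@dataclass
class Series:
    series_instance_uid: str
    modality: str                    # e.g., "CT", "CR", "MR"
    images: List[Image] = field(default_factory=list)

@dataclass
class Study:
    study_instance_uid: str
    accession_number: str            # links the study back to the RIS order
    series: List[Series] = field(default_factory=list)

@dataclass
class PatientFolder:
    patient_id: str
    patient_name: str
    studies: List[Study] = field(default_factory=list)
```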

An additional complication to the QC process is that maintenance activities are often performed on the digital imaging modalities, PACS, and other connected systems with little regard for their impact on PACS and digital imaging quality. It is not uncommon to hear of a new software load on a CT scanner that wiped out the DICOM linkage with the PACS, with the problem going undiscovered until the CT work flow stalled and diagnostic reports were overdue. Maintenance and QC need to be linked from both a management and a functional standpoint, and maintenance tasks should overlap with QC tasks.
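One lightweight way to catch this class of failure is to verify DICOM connectivity from each modality's destination list immediately after any maintenance activity. The following is a minimal sketch using the pynetdicom library; the host name, port, and AE titles are hypothetical placeholders for a site's actual configuration:

```
from pynetdicom import AE

# Hypothetical PACS archive endpoint; substitute the site's actual values.
PACS_HOST = "pacs-archive.example.org"
PACS_PORT = 104
PACS_AE_TITLE = "PACS_ARCHIVE"

def verify_dicom_link() -> bool:
    """Send a DICOM C-ECHO (Verification) request and report success."""
    ae = AE(ae_title="QC_CHECK")
    # Verification SOP Class UID (C-ECHO)
    ae.add_requested_context("1.2.840.10008.1.1")
    assoc = ae.associate(PACS_HOST, PACS_PORT, ae_title=PACS_AE_TITLE)
    if not assoc.is_established:
        print("Association rejected or unreachable -- check network/AE configuration")
        return False
    status = assoc.send_c_echo()
    assoc.release()
    ok = bool(status) and status.Status == 0x0000
    print("C-ECHO succeeded" if ok else f"C-ECHO failed: {status}")
    return ok

if __name__ == "__main__":
    verify_dicom_link()
```

Running such a check after every software load or service call takes seconds and catches broken DICOM linkages before the clinical work flow does.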

QC and Maintenance Task Allocation

In the digital imaging and PACS environment, a clear understanding of quality control and maintenance services and the parties responsible for the performance of those services is required. The simultaneous planning of quality control and maintenance tasks to ensure that no gaps exist between these two critical programs is recommended. A task allocation chart can be developed that lists the equipment items to be included in the quality control and maintenance programs. The list should be as complete as possible so that all major subsystems of the PACS and the different types of imaging devices are included. The list can even include infrastructure items, such as networks and power systems, and upstream information systems, such as the RIS, transcription system, and hospital interface engines.

Table 1. Sample task allocation chart resulting from simultaneous negotiations of maintenance and QC. SA represents system administrator and SE, system engineer.

Table 1 is a sample of a task allocation chart that lists several imaging, PACS, and infrastructure systems and distributes the QC and maintenance tasks across a mix of in-house and vendor resources.

The task allocation chart in Table 1 is an example of what could result from the simultaneous negotiations of the Quality Control and Maintenance programs. The negotiations should include all parties listed in the table to ensure that they are adequately staffed, trained, tooled, and compensated to perform their assigned tasks.

For actual hospital use, all different types of imaging modalities, output devices, and PACS components should be further stratified to expose those with different QC and maintenance requirements. For example, diagnostic workstations with CRT displays will have different QC requirements than workstations that have flat panel displays.

Note that there are multiple levels of QC (QC1 and QC2), Preventive Maintenance (PM1 and PM2), and Repair Services (first call, minor repairs, and major repairs) listed in the table. The advantage of this stratification is that tasks can be allocated to the resource mix that best serves the hospital. Digital imaging and PACS require daily, weekly, and monthly operational and QC functions to be performed. They also require less frequent but more sophisticated maintenance checks and services. It is impractical and economically unreasonable to require equipment vendors to perform all of these tasks. The exact terms and conditions for each vendor’s responsibilities in the program should be negotiated on a case-by-case basis.
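As an illustration of how these levels can be tracked, the allocation can be captured in a simple machine-readable form and reviewed at contract renegotiation. The sketch below uses hypothetical equipment items and responsible parties; the actual mix is a matter of local negotiation:

```
# Illustrative task allocation: equipment item -> task level -> responsible party.
# "SA" = system administrator, "SE" = system engineer (per Table 1); parties are hypothetical.
task_allocation = {
    "CT scanner": {
        "QC1": "technologist",
        "QC2": "in-house SE",
        "PM1": "in-house SE",
        "PM2": "modality vendor",
        "first call": "in-house SE",
        "minor repairs": "in-house SE",
        "major repairs": "modality vendor",
    },
    "Diagnostic workstation": {
        "QC1": "PACS SA",
        "QC2": "PACS SA",
        "PM1": "PACS SA",
        "PM2": "PACS vendor",
        "first call": "PACS SA",
        "minor repairs": "PACS SA",
        "major repairs": "PACS vendor",
    },
}

def responsible_party(equipment: str, task: str) -> str:
    """Look up who is assigned a given QC or maintenance task."""
    return task_allocation[equipment][task]
```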

Component Tests

After the task allocation chart is completed at the level described in Table 1, a further level of detail must be added to expose the attributes of each task. These attributes become specific to the QC and maintenance requirements of individual components and imaging devices. Table 2 is a sample of this detail expansion for three tasks associated with diagnostic workstations.

Table 2. Sample task attribute table for a diagnostic workstation.

Table 2 would be completed for all tasks and equipment listed in Table 1. There would possibly be different service requirements for different makes and models of devices, such as diagnostic workstations with cathode ray tube (CRT) monitors and display cards vs flat panel monitors and display cards.

Additionally, Table 2 details the resource requirements for each task: the man-hours, tools, and test equipment required. These will also differ among specific makes and models of equipment. For example, CR manufacturers recommend different QC and calibration tools for their equipment. The tools the manufacturer recommends should be used for departmental QC so that there is no difference between the checks performed by the vendor service teams and those performed by the in-house QC and service teams.
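One way to capture this level of detail is a small attribute record per task, of the kind that Table 2 expands. The fields below are illustrative and would be adjusted to each make and model:

```
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskAttributes:
    """Illustrative attribute record for one QC/maintenance task (cf. Table 2)."""
    equipment: str            # e.g., "Diagnostic workstation (CRT)"
    task: str                 # e.g., "Monitor calibration check"
    frequency: str            # e.g., "weekly"
    estimated_hours: float    # man-hours per performance of the task
    responsible_party: str    # e.g., "PACS SA", "vendor SE"
    tools_required: List[str] = field(default_factory=list)
    acceptance_criteria: str = ""

example = TaskAttributes(
    equipment="Diagnostic workstation (CRT)",
    task="Monitor calibration check",
    frequency="weekly",
    estimated_hours=0.5,
    responsible_party="PACS SA",
    tools_required=["photometer", "DICOM test patterns"],
    acceptance_criteria="Luminance response within vendor tolerance",
)
```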

Thread Tests

Any comprehensive test of a digital imaging device or PACS should include the transport of data and images over the actual network that is in use during clinical operations. A thread test is exactly that: a test that exercises the functions of the imaging devices and the PACS between specific sources and destinations, with clinical data transported over the network that supports daily clinical activities. Thread tests should be designed around actual clinical scenarios or around the interoperability of specific components.

Clinical Scenario Thread Tests. Clinical scenario thread tests should be used to check the interaction of data systems such as the HIS, RIS, transcription system, PACS components, and imaging equipment in scenarios such as (1) a scheduled radiographic examination, (2) an unscheduled radiographic examination, and (3) a John Doe radiographic examination. In each of these three scenarios, different upstream data system transactions occur, and each should propagate through all downstream systems, including the PACS. The purpose of this type of thread test is to validate the interoperability of all devices and systems involved in the complete processing of each clinical scenario.

Clinical thread tests can be performed by developing test patients in the HIS, RIS, and PACS and tracking examinations ordered against these “dummy patients,” or by tracking true clinical activities as they naturally occur. There are advantages and disadvantages to both approaches, and a combination of the two methods is recommended. Dummy patients are best used when repeated testing is required, such as during system setup and integration or following a software upgrade in any of the information systems or imaging devices. After the systems are integrated and data flow is correct, periodic and as-needed tracking of actual clinical data transactions will suffice.
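A clinical scenario thread test can be organized as an ordered list of checkpoints that must all be observed for a dummy patient before the thread is declared intact. The sketch below is illustrative; the checkpoint names and the verification functions are hypothetical stand-ins for site-specific queries against the HIS, RIS, and PACS:

```
from typing import Callable, List, Tuple

# Each checkpoint pairs a description with a verification callable that
# returns True when the downstream system shows the expected data.
Checkpoint = Tuple[str, Callable[[str], bool]]

def run_thread_test(accession_number: str, checkpoints: List[Checkpoint]) -> bool:
    """Walk a dummy-patient examination through each checkpoint in order."""
    for description, verify in checkpoints:
        if not verify(accession_number):
            print(f"THREAD BROKEN at: {description} (accession {accession_number})")
            return False
        print(f"OK: {description}")
    print("Thread test passed end to end")
    return True

# Example wiring for a scheduled radiographic examination; the stub lambdas
# would query the actual HIS/RIS/PACS interfaces at a real site.
checkpoints = [
    ("Order visible in RIS",                     lambda acc: True),
    ("Entry appears on modality worklist",       lambda acc: True),
    ("Images stored in PACS archive",            lambda acc: True),
    ("Study displays on diagnostic workstation", lambda acc: True),
    ("Report transcribed and approved in RIS",   lambda acc: True),
]

if __name__ == "__main__":
    run_thread_test("TEST-0001", checkpoints)
```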

Modality Thread Tests. The best method of testing a specific imaging modality is to perform a thread test that includes all supporting upstream data transactions and downstream image and data output functions pertinent to the performance of that modality. The basic upstream transactions include RIS orders and status updates to the point of study completion. RIS transactions subsequent to completed image acquisition, such as the dictation/transcription and report approval, are tested in the Clinical Scenario Thread Tests and need not be repeated here. The PACS will contribute the DICOM Modality Worklist, and the Storage, Display, and Printing functions to the modality thread test process.

A modality thread test should be designed to evaluate the individual performance and functionality of each specific imaging device. It cannot be assumed that because CT1 performed its thread tests well, CT2 will also perform well. The same is true of all imaging modalities. Even if a hospital has the same make and model of a given modality, its internal configuration can make it behave quite differently when communicating to PACS and printing devices.

Some PACS-specific attributes that should be tested in modality thread tests include:

  • Modality Worklist
  • DICOM Performed Procedure Step functionality (if applicable)
  • DICOM Store
  • DICOM Query/Retrieve (if applicable)
  • Display of stored studies in the PACS
  • Modality-specific hanging protocols in the PACS
  • Proper listing of patient, study, series, and image data on PACS lists and overlays
  • Proper modality-specific annotations, tools, and overlays on displayed images
  • Proper modality-specific annotations and overlays on printed images
  • Direct print capabilities from the modality to a printer
  • DICOM Print from the PACS to a printer
  • Correlation of overlays and annotations between direct modality printing and PACS printing
  • Correlation of annotations between printed and workstation displays
  • Correlation of printed image tone scales and PACS display tone scales
  • Transmission and display speed to workstations
  • Transmission and output speed to printers
  • Magnification factor validation on printed images
  • Magnification factor validation of workstation images
  • Proper image tone scaling on printers

Setting up and testing for these attributes should be part of digital modality and PACS installation processes, and benchmarks of acceptable performance should be made at that time. Periodic and situational resampling of these attributes should be part of the QC and maintenance programs.
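As one concrete example, the first item on the list, the Modality Worklist, can be spot-checked from the scripting level. The sketch below queries a worklist SCP for scheduled CT procedures using the pynetdicom and pydicom libraries; the host, port, and AE titles are hypothetical:

```
from pydicom.dataset import Dataset
from pynetdicom import AE

# Hypothetical RIS/broker worklist endpoint; substitute the site's actual values.
MWL_HOST = "ris-broker.example.org"
MWL_PORT = 3320
MWL_AE_TITLE = "WORKLIST_SCP"
MWL_SOP_CLASS = "1.2.840.10008.5.1.4.31"  # Modality Worklist Information Model - FIND

def query_ct_worklist() -> int:
    """Issue a C-FIND against the worklist SCP and count matching entries."""
    query = Dataset()
    query.PatientName = ""
    query.ScheduledProcedureStepSequence = [Dataset()]
    query.ScheduledProcedureStepSequence[0].Modality = "CT"
    query.ScheduledProcedureStepSequence[0].ScheduledProcedureStepStartDate = ""

    ae = AE(ae_title="QC_CHECK")
    ae.add_requested_context(MWL_SOP_CLASS)
    assoc = ae.associate(MWL_HOST, MWL_PORT, ae_title=MWL_AE_TITLE)
    if not assoc.is_established:
        print("Could not associate with worklist SCP")
        return 0

    matches = 0
    for status, identifier in assoc.send_c_find(query, MWL_SOP_CLASS):
        # 0xFF00/0xFF01 are pending statuses carrying a matching identifier.
        if status and status.Status in (0xFF00, 0xFF01) and identifier is not None:
            matches += 1
    assoc.release()
    print(f"Worklist returned {matches} scheduled CT entries")
    return matches

if __name__ == "__main__":
    query_ct_worklist()
```

Similar small checks can be scripted for DICOM Store and Query/Retrieve, and the benchmark results retained for comparison during periodic resampling.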

Workstation Tests

One of the critical components of a filmless environment is the PACS workstation. It is, after all, the workstation that, in a filmless environment, provides the final clinical image and supporting clinical data to the physicians. Quality lost at the final stage of a long and expensive clinical and technical process, such as diagnostic imaging, is certainly unacceptable.

Workstations can be classified into basic categories: Local Diagnostic Workstations, Local Review Workstations, Local PC Workstations, and the remote versions of these three types. The term local is used to indicate LAN connections and the implied performance and security features associated with them. The differentiating characteristics between workstations are primarily based on (1) display quality, (2) viewing and manipulation tools, (3) display speed, and (4) the privileges associated with the user of each workstation. Each of these characteristics can be further defined and is highly dependent upon the individual PACS architecture and software (SW), workstation hardware, and network configuration.

Workstations are subject to a wide variety of degradation and failure modes. These can be categorized into general performance problems, monitor problems, and workstation software problems. Each of these problem areas can be caused by an outright hardware failure, which would trigger a service request, but they can also be caused by more subtle events such as CRT energy drift. Additionally, activities in the upstream and downstream segments of the system are often the cause of complaints by PACS users, because the PACS is, in most cases, responsible for the output of the consolidated data.

A listing of workstation attributes to be tested and some of the subtle causative events should include, at least, the items listed in Table 3.

Table 3. A listing of workstation attributes to be tested and sample causative events.

All of the tests in Table 3 are best done using standard test protocols and test patterns that are stored in the PACS. In that way, users can be assured that the tests are repeatable and relate directly to the benchmark tests. An apples-to-apples comparison should always be maintained when discussing quality control and maintenance issues.

Some tests can be done less frequently than others. Tests of attributes that are subject to drift need to be performed at frequencies derived from the drift rate of each attribute. Tests of attributes that change because of hardware and software configuration, upgrade, replacement, or repair can be performed much less frequently, and in many cases only when the triggering event occurs.

The monitor calibration tests need to be the most frequent tests performed, due to the high drift rate of the monitors and the criticality of presenting acceptable quality images to the various users of the PACS.

Data- and configuration-related checks are less subject to drift but are highly subject to problems induced through human error and connected system perturbations such as SW upgrades and maintenance operations. For this reason, these checks can be performed much less frequently than monitor calibration checks.
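In practice, this translates into a check schedule keyed to each attribute's drift behavior. A minimal sketch of such a schedule and a due-date calculation is shown below; the intervals are illustrative, not recommendations, and real intervals should come from measured drift rates and vendor guidance:

```
from datetime import date, timedelta

# Illustrative check intervals in days, keyed to expected drift/trigger behavior.
qc_schedule = {
    "monitor calibration (CRT)": 7,          # fast-drifting attribute, checked most often
    "monitor calibration (flat panel)": 30,
    "overlay/annotation configuration": 90,   # mainly perturbed by SW upgrades
    "hanging protocol configuration": 90,
}

def next_due(check: str, last_performed: date) -> date:
    """Return the next due date for a QC check based on its interval."""
    return last_performed + timedelta(days=qc_schedule[check])

if __name__ == "__main__":
    print(next_due("monitor calibration (CRT)", date.today()))
```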

Discussion

The reengineering of policies and procedures for QC, QA, and maintenance operations must be incorporated into the clinical and business environments. A collaborative program of standards and procedures must be established and shared among the clinical users of PACS, the PACS vendors, imaging device vendors, information systems vendors, and third-party service providers. The end goal has not changed: radiology is still charged with maximizing the quality and accessibility of diagnostic information.

PACS and digital imaging have changed the context of quality control for radiology. PACS outputs (workstation displays and printed film) will reflect problems that actually lie in the upstream systems as well as problems in the PACS itself. For this reason, the causes of PACS problems must become known to the QC and maintenance resources and must be used as triggers for QC and maintenance services.

The quality control checks being performed need to be standardized and shared between the clinical QC resources and the operators and maintenance resources. Renegotiation of these tasks and resources is part of the process. All parties must come to the table prepared to look toward the future and act for the best interest of the patients.

In QC and maintenance practice, efficiency and effectiveness must be the watchwords. Movement is needed toward more holistic management tools and performance tests that look not only at individual vendor and component performance, but also at the interactive and multidimensional nature of digital imaging, PACS, and medical informatics. The use of thread tests, component tests, and task allocation matrices facilitates this reengineering process.

John Romlein is director of clinical projects for a health care consulting company based in Frederick, Md, (301) 682-3200, [email protected].