With the ever-increasing size of radiology imaging data and the growth of procedure volumes, digital storage is quickly becoming a primary component in the total cost of ownership (TCO) of picture archiving and communications systems (PACS). To keep PACS storage affordable in the midst of this rapid data expansion, economical storage solutions must be delivered to PACS-based hospitals and imaging centers.

While the market prices for digital storage are rapidly decreasing, not all PACS vendors (or storage vendors) have passed these critical savings on to their customers. In order to leverage these technological advances, PACS users will need to explore all digital storage alternatives available today, while understanding the strengths and weaknesses of each.

Savvy users of PACS may even need to persuade their vendors to support these advanced and more affordable solutions (a far easier task to accomplish prior to vendor selection). The intent of this article is to educate PACS users about the options available for archiving image data in cooperation with, or independent of, their PACS providers.

Hospitals must be concerned with both the clinical archive used daily by their health care providers to treat and manage patients, and the legal archive for very long-term data retention (preferably on removable media such as tape or digital versatile disk [DVD]).

The remainder of this article will explore alternatives for the clinical archive, while legal archive options will be discussed in subsequent writings.

DISK TECHNOLOGY

Hard disk drives (HDDs) are the basic component of any storage strategy. An HDD contains one or more magnetically sensitive platters that maintain magnetic polarity states. The platters rotate on a spindle at anywhere from 4,200 (laptops) to 15,000 (servers) revolutions per minute (RPM). Each platter is paired with an actuator arm that moves the read/write head across the platter’s tracks, very similar to how a compact disc player reads a CD or a video player reads a DVD. The read/write head floats on a cushion of air and never touches the platter; if it did, the disk would immediately fail (a head crash). The rotational delay (the time for the platter to spin to the right position) and the seek time (the time required to position the heads for a read or write operation) together determine the latency (the total wait from when an operation is requested to when it commences). Once an operation begins (reading or writing), the time to completion depends on the rotational speed of the HDD, the internal electronics, and the technology employed to transmit the data to the HDD.
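As a rough illustration of these relationships (the speeds and seek times below are typical assumed values, not measurements from any particular drive), latency can be estimated as the average rotational delay plus the average seek time:

    # Rough estimate of HDD access latency (illustrative values only).
    def average_latency_ms(rpm, avg_seek_ms):
        ms_per_revolution = 60000.0 / rpm            # one full rotation, in milliseconds
        rotational_delay = ms_per_revolution / 2.0   # on average, half a rotation is needed
        return avg_seek_ms + rotational_delay

    # Assumed seek times: ~4 ms for a 15,000-RPM server drive, ~12 ms for a laptop drive.
    print(average_latency_ms(15000, 4))    # roughly 6 ms
    print(average_latency_ms(4200, 12))    # roughly 19 ms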

DISK INTERFACES

Disk interfaces determine connectivity from the HDD to the computer, and there are two major interface technologies in use today: SCSI (small computer systems interface) and ATA (advanced technology attachment, also known as IDE or integrated drive electronics). Every HDD must be connected to a host computer’s drive controller (the traffic cop that manages the communication between the host and the HDD or storage device). In the fiber channel world, the controller is referred to as an HBA (host bus adapter).

  • SCSI: This is the most common HDD interface in high-end desktop and server computers. It can connect to multiple HDDs via daisy-chained copper cables, with up to 16 drives per controller, or to hundreds of HDDs via fiber cables (SCSI FCP, or fiber channel protocol). Every couple of years, new performance standards and improvements in the HDD’s electronics are introduced; as of this writing, the top-performing copper-connected HDDs support bursts of 320 MBps (megabytes per second). Fiber-connected SCSI HDDs support 250 MBps in each direction, full duplex (transmitting data to and from the same host simultaneously). The next iteration of SCSI will incorporate a new standard, SAS (serial attached SCSI). The standards for SAS are still being defined, but it promises to improve performance and manageability.

  • ATA: This interface typically supports only up to two drives per controller. ATA is also the most common HDD interface in laptop and desktop computers. ATA HDDs contain less intelligence than their SCSI counterparts; unlike SCSI HDDs, the intelligence is located on the controller rather than on the drive. ATA HDDs typically have slower overall performance than SCSI drives, but have much higher capacities. The highest-performing ATA HDDs can attain speeds of 133 MBps. A new ATA standard emerging in product offerings is SATA (Serial ATA). It promises to offer higher performance and densities at lower costs.

Table 1. Performance characteristics of six different RAID configurations. The number of asterisks (*) corresponds to the degree to which the technology exhibits the characteristic.

HDD AGGREGATION-RAID

Regardless of the interface employed or the number of HDDs and controllers installed or connected, host computers require management systems in order to access storage. If 50 HDDs were connected to a single host computer with no management, only access to each individual drive would be allowed. The application or user would need to remember the drive letter (Windows: A through Z, limiting the total drives to 26) or device name (Unix). There would be no performance gains or fault redundancy built into the system, and the management of the storage would be completely manual. In order to solve these and other issues, RAID (redundant array of independent disks) was conceived. The idea is to make many HDDs appear logically as one large HDD to the host computer. Other benefits include improved performance, achieved through techniques such as striping (writing units of data, or blocks, across each HDD member within the RAID group), and the ability to add data redundancy. Redundancy (fault tolerance) is achieved either by mirroring all writes destined for a specific HDD to an assigned identical twin or through parity. Parity is algorithmically generated data that allows for data reconstruction in the event of an HDD failure. For a conceptual example, assume the actual data equals 2345, the parity data equals 1234, and the total equals 3579. If an HDD were to fail, the actual data could be retrieved by subtracting the parity from the total: 3579-1234=2345. In order to be as resilient as possible, the actual data and its parity data are always stored on different HDDs.

When an HDD that is a member of a RAID group fails and is replaced, it can take hours or days before all the data has been re-created through the parity or mirroring process. While the drive is being re-created, the storage system is said to be degraded. This implies a performance penalty and a loss of data redundancy, meaning that if a second drive were to fail before the rebuild was complete, the whole volume (all the data on the RAID group) would be lost.
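Real RAID implementations compute parity with a bitwise exclusive OR (XOR) rather than an arithmetic sum, but the reconstruction idea is the same. The brief sketch below, using made-up block values, shows how a lost block can be rebuilt from the surviving blocks and the parity:

    # Minimal XOR-parity sketch (illustrative only, not a real RAID implementation).
    from functools import reduce

    data_blocks = [0b10110010, 0b01101100, 0b11110000]   # blocks on three data drives
    parity = reduce(lambda a, b: a ^ b, data_blocks)     # parity stored on a fourth drive

    # Simulate losing drive 1: rebuild its block from the survivors plus the parity.
    survivors = [data_blocks[0], data_blocks[2]]
    rebuilt = reduce(lambda a, b: a ^ b, survivors, parity)
    assert rebuilt == data_blocks[1]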

There are several standard RAID configurations, identified by number: RAID 0 through 5, plus some combinations such as RAID 0+1.

  • RAID 0: This groups multiple HDDs and makes them appear as one volume to the host computer. It has the performance advantages of data striping, but no redundancy. If one drive fails, all the data is lost.
  • RAID 1: This features no parity or striping, but mirrors each HDD to a twin, providing data redundancy (very costly: half of the total storage is consumed by redundancy).
  • RAID 2: This stripes data at the bit level and stores Hamming-code error-correction information across multiple dedicated disks. In practice, it is rarely implemented because of the storage and controller overhead.
  • RAID 3: This stripes data across all but one drive; the remaining drive is dedicated to parity information. Because of the dedicated parity drive, reads and writes are slow in multi-user environments.
  • RAID 4: Large stripes are employed, making for fast reads. Write operations are slow because multiple writes cannot occur simultaneously when a single drive is dedicated to parity.
  • RAID 5: This stripes data and parity across all HDDs, allowing for fast reads and fast writes; multiple writes can occur simultaneously (see the block-placement sketch after this list).
  • RAID 0+1: This combines RAID level 1 and RAID level 0 to create a volume where the data is striped and mirrored to an equivalent twin volume. This provides for very fast reads and very fast writes, but also has the expense associated with losing half of the space to redundancy.
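To make the striping and parity-placement ideas concrete, the sketch below maps block and stripe numbers to member drives. The RAID 0 rule is straightforward; for RAID 5 one simple rotating-parity layout is assumed (real controllers use several different layouts):

    # Which drive holds a given block? (illustrative striping rules only)
    def raid0_drive(block, num_drives):
        return block % num_drives                   # blocks simply rotate across the drives

    def raid5_placement(stripe, num_drives):
        parity_drive = stripe % num_drives          # parity location rotates each stripe
        data_drives = [d for d in range(num_drives) if d != parity_drive]
        return parity_drive, data_drives

    print([raid0_drive(b, 4) for b in range(8)])    # [0, 1, 2, 3, 0, 1, 2, 3]
    print(raid5_placement(0, 4))                    # (0, [1, 2, 3])
    print(raid5_placement(1, 4))                    # (1, [0, 2, 3])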

To improve the performance and manageability of aggregated HDDs, most storage companies offer intelligent, feature-rich, stand-alone storage devices. These devices implement RAID internally (often referred to as hardware RAID), reporting only the aggregate logical volume(s) to the host computer. Other device features include backup, data migration, and a variety of other, usually proprietary, functions to move, copy, and manage storage volumes.

PRESENTATION: DAS, SAN, NAS

There are a variety of ways to present the physical hard drives to host computers.

Direct Attached Storage (DAS). DAS describes HDDs that are directly attached to a host computer; examples include laptop and desktop computer systems. Servers with modest storage requirements often implement this simple storage solution. Volume creation (carving the HDD into units called partitions, which can be translated into drive letters such as C:, D:, E:) and formatting (the process of preparing the volumes for use by creating the file system) are directly controlled by the host computer.

Storage Area Network (SAN). A SAN is a dedicated network based on SCSI FCP (not the same network used for email and the internet) that connects multiple storage devices, each containing numerous HDDs, to multiple host computers. All data that is read or written is arbitrated block by block: a unit of data stored on the storage unit is referred to as a block, and when combined with other associated blocks, it makes up a file. Each host computer is responsible for managing the file system on a given volume, and only one host can access a given volume at a time. (Sharing of the volume with other hosts is complex and requires a layer of software that provides NAS-like functionality.) The aggregation of drives to form larger or redundant volumes is managed by the storage device. The storage device and its associated software (usually proprietary) present the volumes to the host computer as if they were very large DAS.

  • SAN over iSCSI (Internet SCSI): This is identical to a conventional SAN except that it allows block transfers over standard Internet Protocol (IP) networks (the same network used for an institution’s email), but is about 50% slower. Although iSCSI SANs may be segmented onto a private network, this is not required.
  • NAS (network attached storage): This describes a strategy in which storage devices are connected to an institution’s existing network and expose storage via standard UNC paths (universal naming convention: \\server name\shared folder). Data is copied to the NAS device file by file, not block by block (internally, NAS devices still write data to disk block by block). Each NAS device is a self-contained computer with a storage subsystem that can be either DAS or SAN; when the subsystem is a SAN, this computer is often referred to as a NAS head. Multiple hosts can access data stored on a single device because file system management is done by the NAS device, not the host computer (see the sketch following this list). This approach is very similar to how computers in a workgroup can share folders with other users.
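To make the file-level access model concrete, the sketch below reads an image directly from a NAS share using only its UNC path. The server, share, and file names are hypothetical; any authorized host on the network could issue the same request:

    # Read a file from a NAS share via its UNC path (hypothetical names).
    # Any authorized host can do this; the NAS device manages the file system.
    unc_path = r"\\nas01\pacs_archive\exam_0001\image_001.dcm"

    with open(unc_path, "rb") as f:     # plain file access; no block-level protocol on the host
        image_bytes = f.read()

    print(len(image_bytes), "bytes read")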

PUTTING IT ALL TOGETHER

The clinical archive must be online 24 hours a day. It must provide users with recent examinations and prior examinations, sometimes going back many years. Many institutions have policies dictating that all studies will be forever online and be retrievable to a workstation within seconds. For some time now, the principal storage topology for PACS has incorporated some combination of SAN and DAS. Some PACS companies have started to utilize NAS, but usually as a gateway to a SAN solution.

Directly attaching storage (DAS) to servers within the PACS core appears to be a simple, viable option. Unfortunately, management and scalability issues can turn this simple implementation into a complex management and resource drain. Careful planning of how much storage to attach to each PACS server would need to be undertaken, yet the usual result, regardless of planning, is that one server runs out of storage while another is underutilized. This leads to constant rebalancing of the storage allocations, which can be done only while the systems are offline and could lead to out-of-budget-cycle storage purchases. Furthermore, growing the storage over multiple years and eventually replacing the servers themselves are so dependent on manual processes that sooner or later a mistake is made and data is lost.

Storage solutions have traditionally been supplied by PACS vendors and are typically very expensive and difficult to manage without extensive vendor support. Technological advances, the rapidly dropping cost of disk storage, commodity pricing, and more robust hospital networks create the opportunity for storage alternatives such as running a SAN over iSCSI, as well as non-SAN solutions such as NAS.

Table 2. Performance characteristics of four different storage presentations. The number of asterisks (*) corresponds to the degree to which the technology exhibits the characteristic.

STORAGE ALTERNATIVES

Ideally, the clinical archive should keep two copies of each examination (all examinations forever) in separate storage devices. Below are three simple PACS storage examples that could be scaled to fit any size institution.

Storage Assumptions:

  • Total examinations per year: 140,000
  • Yearly storage requirement: 2.5 TB (terabytes)

Five terabytes of physical storage will be required to accommodate two copies of each examination. Each examination will be compressed 2:1 lossless (compression that will not degrade image quality: the image will still be full fidelity). The sample PACS core will contain three gateways (a generic term for the computer that serves as the entry point of an examination into the PACS).
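The arithmetic behind these assumptions is straightforward; the short calculation below derives the implied average examination size and the yearly purchase from the figures above:

    # Yearly storage sizing from the assumptions above.
    exams_per_year = 140_000
    storage_per_copy_tb = 2.5          # per year, compressed 2:1 lossless
    copies = 2

    avg_exam_mb = storage_per_copy_tb * 1_000_000 / exams_per_year   # ~18 MB per compressed exam
    yearly_purchase_tb = storage_per_copy_tb * copies                # 5 TB of physical storage

    print(round(avg_exam_mb, 1), yearly_purchase_tb)   # 17.9 5.0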

Storage Directives:

  • Multiple copies of each examination on removable media with near-line automation for business continuance (not the focus of this article)
  • Clinical archive: Keep two copies of each examination in separate storage devices
  • Additional storage to be purchased yearly, and all studies are forever online
  • Multi-vendor capable

Storage Alternative No. 1

The initial configuration would require at least one storage device complete with a fiber switch, numerous fiber drives, redundant controller cards, and a fiber backplane (a common path/conduit that allows the HDDs and controller cards to interact). To ensure a smooth installation, all components would be purchased from one vendor. Each of the PACS gateways would be allocated one third of the total addressable storage (addressable storage is the amount of storage left after the redundancy overhead has been factored in). Usually, storage vendors promote RAID 1 (50% overhead) or RAID 5 (20% overhead). Because of the cost of the storage device, there would usually be only one device until the third or fourth year; for the first couple of years, additional HDDs would be purchased from the same vendor (negotiate this cost up front). Implementing a single monolithic storage device is usually acceptable because of the internal, although costly, redundant electronics built into the storage device. This approach would also have a higher dependency on the disaster/legal archive (to be covered in a future article).
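Using the RAID overheads quoted above, a quick calculation shows how much raw capacity must be purchased to deliver the 5 TB of addressable storage needed for year one, and how that addressable storage would be split across the three gateways:

    # Raw capacity needed for a given addressable target (illustrative arithmetic only).
    addressable_needed_tb = 5.0        # two copies of the 2.5 TB yearly requirement
    gateways = 3

    raw_raid5 = addressable_needed_tb / (1 - 0.20)    # 20% parity overhead -> 6.25 TB raw
    raw_raid1 = addressable_needed_tb / (1 - 0.50)    # 50% mirroring overhead -> 10 TB raw
    per_gateway = addressable_needed_tb / gateways    # ~1.67 TB addressable per gateway

    print(raw_raid5, raw_raid1, round(per_gateway, 2))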

Total estimated implementation cost: $160,000

Advantages:

  • Accepted practice for mass storage
  • High performance
  • “Nobody gets fired for buying the brand name”
  • Mature and very reliable RAID technologies: very solid
  • Dedicated storage network prevents storage traffic from interfering with application traffic
  • Scalable and (usually) modifiable without system downtime

Disadvantages:

  • Vendor dependency for future upgrades and maintenance
  • Significant training: need to learn proprietary software and management protocols
  • Performance is scalable only by adding expensive storage devices
  • Multi-vendor environments can be complex and full of support issues (more vendors, more software to manage, more training required)
  • Total cost of ownership is very high because of the vendor dependence
  • Does not leverage existing infrastructure: requires a private network, host bus adapters, and switches (unless iSCSI is implemented)
  • Does not allow for easy/cost-effective geographic separation (different buildings)
  • If one gateway were to fail, all studies stored through that gateway become unavailable (manual/automatic failover is possible if the PACS application and SAN management software allow for it, but it is not easy)

Storage Alternative No. 2

This option is to deploy multiple small NAS devices, each with about 1 TB of addressable storage. Each device is a separate, self-contained computer with 100/1,000 Mbps network access; aggregate throughput is the number of NAS devices times the network speed. Storage will be grown yearly, with an initial purchase of six devices (~5.8 TB usable). This starting configuration presents an aggregate throughput of 6 Gbps. For redundancy, all examinations will be stored twice, on two distinct NAS devices.
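A quick check of the aggregate numbers for this starting configuration (assuming each device has a single gigabit network connection):

    # Aggregate throughput and capacity for the initial NAS purchase.
    devices = 6
    link_gbps = 1.0                     # gigabit Ethernet per device (assumed)
    usable_per_device_tb = 0.97         # roughly 1 TB addressable per device

    aggregate_gbps = devices * link_gbps                  # 6 Gbps in total
    aggregate_mbytes_per_s = aggregate_gbps * 1000 / 8    # ~750 MBps across all devices
    usable_tb = devices * usable_per_device_tb            # ~5.8 TB, covering year one (5 TB)

    print(aggregate_gbps, aggregate_mbytes_per_s, round(usable_tb, 1))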

Many PACS applications are architected to function in a SAN infrastructure. Therefore, to leverage NAS, the host requires that the NAS devices look like SAN devices. For example, the PACS application may map a storage location to a physical device (in the Windows world, “F:”). To remap this to a NAS destination, F: can be reassigned to a UNC path (F: can represent, or point to, \\server1\share1).
The NAS devices would be configured to utilize DFS (distributed file system; this method of aggregating NAS devices for data redundancy is also called file replication). DFS monitors all writes to a given network share (UNC) and automatically replicates those writes to one or more other NAS devices. If a NAS device were to fail, the system would automatically work off one of the backup devices. Each PACS gateway would be configured to attach to one or more NAS devices via UNC paths.
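DFS replication is configured in the operating system rather than in application code, but its effect can be pictured with a small sketch: every examination written to the primary share also ends up on a replica share on a second NAS device. The server and share names below are hypothetical, and in practice DFS performs the mirroring automatically:

    # Conceptual illustration of file replication across two NAS devices.
    # In production, DFS mirrors the write automatically; this sketch only shows the effect.
    import shutil

    primary = r"\\nas01\pacs_archive"
    replica = r"\\nas02\pacs_archive"

    def store_exam(local_path, filename):
        shutil.copy(local_path, primary + "\\" + filename)   # gateway writes the file once...
        shutil.copy(local_path, replica + "\\" + filename)   # ...DFS would produce this second copy

    store_exam(r"C:\incoming\exam_0001\image_001.dcm", "exam_0001_image_001.dcm")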

Total estimated implementation cost: $60,000

Advantages:

  • Very cost-effective: could make three clinical copies and still be significantly less expensive than a SAN solution
  • Relatively high performance and easy scalability: the more devices and hosts, the higher the aggregate throughput
  • Effectively utilizes the existing network infrastructure. Funds that would be spent on a private network could be used to enhance the current network, thus improving performance for all applications, not just storage.
  • Leverages connectivity components (standard Ethernet controllers) integrated into the motherboards of the NAS devices and host computers
  • Easy multi-vendor integration
  • No proprietary software training: DFS is supplied with all Windows server operating systems (simple to learn and maintain)
  • Low vendor reliance: changes and maintenance can be done by local information systems support
  • PACS gateways could load-balance because all NAS volumes can be seen by all hosts.
  • Drive technology could be ATA, SCSI, fiber…or a mix. An existing SAN could also be presented through a NAS Head (NAS gateway to SAN storage devices).
  • Automatic or near-automatic failover could occur if a gateway were to fail. The PACS application could have all gateways see all volumes masking any failure.

Disadvantages:

  • Some latency introduced on image retrieval due to the extra redirection of data from the NAS devices. Workstations would make requests for examinations through a PACS gateway, which in turn must request the data from the NAS device. This creates two network requests instead of one.
  • The PACS gateway will incur a doubling of its network load because of the redirection described above.
  • PACS vendor reluctance: not experienced with NAS or reluctant to jeopardize current third-party relationships with “Big Iron” SAN storage vendors.
  • Some PACS applications are not architecturally designed to exploit the advantages of NAS
  • Greater dependency on local IS personnel

Storage Alternative No. 3

The ideal NAS configuration for PACS would implement all elements from alternative 2 except for gateway redirection. This can be achieved by allowing the PACS workstation to have direct access to the NAS device: all requests for examinations would be routed directly to the NAS device without any interaction with the PACS gateways. This design takes full advantage of the aggregate throughput and switching of the network. All examinations still enter the system through the PACS gateway; they are stored on NAS and retrieved directly from NAS.

Total estimated implementation cost: $60,000

Advantages (same as alternative 2 plus):

  • No introduced latencies: one network retrieval; no redirection through a PACS gateway.
  • Distributed examination retrieval: gateways are not central to examination retrieval.
  • PACS gateways are primarily dedicated to performing the numerous tasks of importing a study into the system, not serving priors to the PACS workstations
  • Improvements are cost neutral: same hardware storage cost as option 2
  • Not dependent on any single system for study retrieval

Disadvantages:

The disadvantages that may remain concern the PACS vendor’s and its storage partners’ acceptance of NAS, as well as the PACS application’s ability to leverage the advantages.

CONCLUSIONS

The current trend in storage is to use NAS for file-based applications such as document sharing, and SAN or DAS for block-based applications such as database systems. For PACS, the trend is toward open-standards, NAS-centric storage solutions that, within the NAS device, utilize DAS, SAN, or both. The drive technology within the NAS can be ATA, SATA, SCSI, or SAS; it does not matter.

NAS devices today leverage file replication technologies providing data protection at a cost point never before available. And, since NAS purchases can easily be decoupled from the PACS core, users can finally choose between a variety of commodity NAS providers while negotiating for the best price, support, and reliability available at the time.

Thomas Schultz is chief engineer

Keith J. Dreyer, DO, PhD, is vice chairman of radiology, computing and information sciences

David Hirschorn, MD, is a computing and information sciences fellow in the Department of Radiology, Massachusetts General Hospital, Boston.