Storing and archiving medical images was once a fairly straightforward proposition. Not anymore. Now, owing to the explosive growth of imaging, both in study volume and in the number of images per exam, information technology (IT) departments are scrambling to embrace new and better strategies for managing their massive accumulations.

The approaches taken by imaging enterprises vary in accordance with each organization's size, geographic reach, data-generation processes, technology base, and, of course, financial resources, as well as appetite for bold action.

St Peter's Health Care Services

The image-management strategy is quite bold indeed at St Peter's Health Care Services in Albany, NY. It amounts to packing as much data as possible onto the enterprise's storage area network (SAN) so that server hardware can be eliminated. Consolidation, says CIO Jonathan Goldberg, pushes St Peter's toward a single, unified storage environment. “As our storage requirements grow, we want to be dealing with just one set of storage versus multiple sets of storage,” he says. “If we have one set of storage, it will mean, among other things, a lower cost of entry for future informatics initiatives. For example, the project that conventionally costs $500,000 could end up costing us only $400,000 because we don't need the $100,000 storage solution that the vendor customarily offers with it.”

St Peter's consists of a 442-bed, acute care hospital (established in 1869) and two skilled-nursing facilities with 320 beds between them. Eleven on-campus buildings house those units, along with hospice services, comprehensive addictions treatment, and a sweep of other programs and services. Operating on a budget of nearly $350 million, St Peter's is one of Albany's busiest hospitals. It has long strived to position itself at the technological forefront; today, St Peter's possesses the latest in imaging devices and informatics.

Until 3 years ago, the hospital's IT operation was supported by more than 150 servers, each with its own localized storage and tied into a centralized Veritas tape backup system from Symantec Corp, Cupertino, Calif. PACS has been with the hospital since 2004, when a system for radiology was installed, followed not long after by one in cardiology. Storage for each was originally accomplished using a DVD optical jukebox. But in less than a year, radiology's storage solution was approaching full capacity. “We needed to do something to increase the capacity,” Goldberg says. “The easiest response would have been to build out the jukebox. But the limitations of that particular jukebox's technology were such that capacity could be expanded only a small amount. And it would do nothing to help us with the challenge of our backups.

“At that point, our direction became clearer to us,” he continues. “We discussed the possibility of eliminating the jukebox in favor of a scalable, long-term storage solution, something that offered faster retrieval speed than what was possible with optical disks.”

The solution involved using the VMware product from EMC Corp, Hopkinton, Mass, to virtualize the servers and, thus, dramatically increase the size and performance of the enterprise's SAN-based storage. “From 150 physical servers, we pared that down to 55 by virtualizing 100 of them and running them on 5 VMware servers,” Goldberg says. “That gave us a total of 18 terabytes of data under management, compared to 13 terabytes total in the days before the 100 servers were virtualized.”

At the heart of St Peter's storage environment are a mid-tier EMC CLARiiON CX700 (which scales up to 117 terabytes and includes fiber-channel SAN support for up to 256 dual-connected hosts), two EMC Centera systems (offering a combined total of 31 terabytes of storage), and five physical VMware servers. A Z-Series mainframe from IBM, White Plains, NY, is also used for the hospital's clinical system.

“The CLARiiON is designated for our short-term storage needs,” Goldberg says. “Because stored data can be deleted from it, we don't use the CLARiiON for anything we want retained on a long-term basis. For purposes of long-term storage, we have our Centera systems, each having built-in safeguards to prevent even accidental deletion of data.”

Limitations of the backup capabilities of the outdated tape system led the hospital to acquire a disk-based backup solution from Avamar, Irvine, Calif. “It used to be that backup was a near-continuous process, night and day,” says Curt Damhof, network manager at St Peter's. “That wasn't just because we had so much data requiring backup; it also was due to the fact that, quite often, the backups didn't take. They errored out, and there were problems with data being incorrectly copied. It was very frustrating. It was a manpower issue as well, because it required allocation of one dedicated FTE.”

The new disk-based backup system, which grabs data from the hospital's servers in very small increments, permits completion of archive chores in a fraction of the time. “It now takes between 1 and 2 hours a night to complete all our backups, and we no longer need continual monitoring,” Damhof says. “All we need do now is have someone take a quick glance at everything in the morning to make sure it's all working. In addition to being a big labor-saver, it also has eliminated our worries about whether backups are occurring and taking effect.”
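The incremental approach Damhof describes can be sketched generically: rather than copying everything each night, the backup client hashes fixed-size blocks and transfers only those that changed since the last run. The following is an illustrative sketch of block-level incremental backup, not Avamar's actual implementation; the `changed_blocks` function and the 4 KB block size are assumptions.

```python
import hashlib

BLOCK_SIZE = 4096  # back up in small, fixed-size increments


def changed_blocks(data: bytes, prior_hashes: dict[int, str]) -> dict[int, bytes]:
    """Return only the blocks whose content hash differs from the last backup run.

    prior_hashes maps block offset -> SHA-256 of that block at the last backup;
    it is updated in place so the next run sees the current state.
    """
    delta = {}
    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if prior_hashes.get(offset) != digest:
            delta[offset] = block          # new or modified block
            prior_hashes[offset] = digest  # remember it for the next run
    return delta
```

On the first run every block is "changed" and gets copied; on subsequent runs an unmodified file yields an empty delta, which is why nightly windows shrink from near-continuous to an hour or two.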

Radiological Associates of Sacramento

The elusiveness of a fast and sure backup was something that likewise once plagued Craig Roy at Radiological Associates of Sacramento (RAS), where he serves as CIO. That difficulty has since been overcome, thanks to a decision by the private-practice group to reduce its reliance on tape-based backup in favor of a disk-to-disk-to-tape arrangement.

“The next step for us might be to convert to a purely disk-based backup solution that leaves tape entirely out of the equation,” Roy says.

Need for better backups, and storage, arose a number of years ago when the group began deploying digital modalities at its nearly two dozen Northern California imaging centers. At that time, acquisition of a PACS was not contemplated, even though the group's radiologists were eager to begin reading images on monitors instead of printing to film. “This led us to explore using a pay-per-click/pay-per-study company called InSiteOne,” Roy says of the Wallingford, Conn-based company. “Their product gave us the ability to use any kind of vendor-agnostic viewer to look at those studies. But it also gave us the ability to start storing our digital modalities on spinning disks.”

Today, RAS is moving away from pay-per-click now that PACS has come aboard. “As we shopped for a PACS, we saw that the product usually comes with its own storage solution, so we decided that would be the direction our storage strategy should take,” he says. “Also, we found that pay-per-click did not have the ability to store HL7 information.”

The PACS eventually acquired was from Emageon, Birmingham, Ala. Today, this system is tied into all of the digital modalities at each of the group's outpatient imaging centers. “The centers have their own individual storage caches,” says Eric Ganz, supervisor of network administration at RAS. “Each cache contains images of just the patients who receive services at that facility. In addition, a copy of each image from each local cache is stored in our centralized long-term archive. In this way, we're able to scale each local cache so that it can store 90 days' worth of patient images. The actual size of the cache depends on the amount of imaging volume typically generated by the individual site. Right now, that ranges from 2 terabytes to as much as 9 terabytes of storage per cache.”
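Ganz's sizing rule, 90 days of local imaging volume per site, lends itself to a simple capacity estimate. The sketch below is illustrative only; the function name, the 20% growth headroom, and the sample figures in the usage note are assumptions, not RAS's actual numbers.

```python
def cache_size_tb(studies_per_day: float, avg_study_gb: float,
                  retention_days: int = 90, headroom: float = 1.2) -> float:
    """Estimate local cache capacity: daily volume x retention window,
    padded with headroom for volume growth. Returns terabytes."""
    gb = studies_per_day * avg_study_gb * retention_days * headroom
    return round(gb / 1024, 2)  # convert GB to TB
```

For example, a hypothetical site doing 100 studies a day at 0.25 GB per study would need roughly 2.6 TB, which is consistent with the 2 to 9 TB range quoted for the RAS caches.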

At the largest of the centers, the storage horsepower is supplied by an EMC product that pulls double duty as the enterprise's permanent archive. At the smaller centers, storage needs are met with the aid of a NetApp 960 system from Network Appliance Inc, Sunnyvale, Calif. “We recently signed a contract for installation of an Emageon content manager, which will let us create a real-time copy of our archive,” Roy adds. “That means the archive will be shareable, so we can engage in load-sharing among the devices. Our feeling is that system performance will be much better if what referring physicians access when they request images from our distributed PACS environment is a copy of the archive rather than the original, since the original is part of our live-production environment. By the same token, this should help lighten usage of the production environment and keep it from bogging down with traffic.”

Children's Oncology Group

Storing and archiving images from relatively small imaging centers within a radius of about 100 miles is one thing. Doing the same with images from large hospitals scattered across the entire North American continent is another matter entirely. That is why the Children's Oncology Group (COG) and the New Treatments in Neuroblastoma Therapy (NANT) cancer research organizations, along with 40 of their member hospitals, elected to employ a storage and archiving strategy based on the Globus Grid system.

“The Globus Grid makes it possible for our hospitals' radiologists, physicians, and pediatric oncologists to quickly and securely exchange high-resolution medical images, and it does so by coordinating noncentrally controlled resources through use of standard, open, general-purpose protocols and interfaces, resulting in the delivery of nontrivial qualities of service from each participating hospital's existing information systems,” says Stephan G. Erberich, PhD, director of biomedical imaging and informatics at Children's Hospital Los Angeles, flagship site of the COG and NANT Grids. “One of the many beauties of the Grid is that each hospital needs only one interface to connect to it. That one interface serves the whole hospital, which has the effect of reusing the hospital's capital investment in DICOM visualization devices.

“Also, the Grid is useful because it utilizes Web Service Interface,” Erberich continues. “This data-transport feature allows other companies to provide services while sharing the same infrastructure and its components, including X509 security certificates. Because of that, it is possible for a hospital connected to the Grid to search for available services or resources, such as an image-processing or storage service, and to then be authorized to draw upon those services or resources.”

The Grid was created by the Globus Alliance, a research and software-development project led by Ian Foster of the Argonne National Laboratory at the University of Chicago and by Carl Kesselman of the Information Sciences Institute at the University of Southern California (where Erberich is an assistant professor of radiology in the Keck School of Medicine and Biomedical Engineering at USC). “The goal of the Globus Alliance is to make Grid computing a realistic possibility in those situations where collaborative work happens,” Erberich says. “This is of ample relevance for medicine, where research and clinical operations are converging toward interdisciplinary and collaborative work. The days of doing research or clinical work as individuals are gone. Thus, we need technology that can bring physicians together to share information. We see benefits for both research and clinical image workflow.”

The Grid as applied to the COG was structured in accordance with guidelines from a government, academic, and industry consortium known as MEDICUS, which Erberich had a hand in organizing. “We built the network's Grid system by basing it directly upon earlier work by the DICOM standards committee,” Erberich says. “What we attempted to do is translate DICOM, which is very slow, into Grid, which is very fast.”

A big benefit of the Grid is that the price to connect hospitals new to the COG is almost negligible. “A Grid gateway attached to a high-bandwidth Internet connection costs a hospital about $1,000,” he says. “The gateway provides two-way access to the Grid, allowing upload of de-identified local images and also continuing access to a catalog of archived DICOM records.”
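The gateway's upload of de-identified images implies a scrub step before any study leaves the hospital. The sketch below illustrates the idea on a simple dictionary standing in for a DICOM header; a real deployment would operate on actual DICOM tags and follow the DICOM PS3.15 de-identification profiles. The `PHI_TAGS` set and `deidentify` function are hypothetical names, and the tag list is an illustrative subset only.

```python
# Illustrative subset of patient-identifying attributes to scrub before upload.
PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate", "PatientAddress"}


def deidentify(header: dict) -> dict:
    """Return a copy of a DICOM-style header with patient identifiers replaced.

    The original header is left untouched; only the copy that travels to
    the Grid has its identifying fields overwritten.
    """
    clean = dict(header)
    for tag in PHI_TAGS:
        if tag in clean:
            clean[tag] = "ANONYMIZED"
    return clean
```

Keeping clinical attributes (modality, study date, series descriptions) intact while overwriting identifiers is what lets the de-identified copy remain useful for multi-site research reads.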

Meanwhile, data entries are cached at the local level. “The cache size is flexible, so it can match any size of operations,” Erberich says, adding that the Grid incorporates services to replicate data. “Disaster recovery fail-over is achieved by alternative replica location in the Grid.”

Iowa Health System

A grid network linking multiple hospitals is similarly a feature of the strategy employed by Iowa Health System, Des Moines, composed of 11 urban hospitals and a string of rural hospitals in 14 Iowa farm towns. (The enterprise also is partnered with more than 125 clinics in more than 30 communities across Iowa, western Illinois, and eastern Nebraska.)

Movement toward a grid strategy began after Iowa Health System acquired its first batch of PACS in 2004, one for each of the biggest hospitals. Although those PACS installations were decentralized, their storage repositories were not: Image and text files were managed by a single, centrally maintained IBM DS4000 storage solution offering 60 terabytes of archival capacity delivered through lower-cost serial advanced technology attachment (SATA) disk drives, with backup to tape.

Bob Thompson, IT governance director at Iowa Health System, says that his team did not expect storage-growth demand to exceed 10 terabytes annually. To their surprise, it did. As a result, tape for timely backup and restore became less and less feasible, Thompson reveals.

That development spurred Iowa Health System to acquire IBM Grid Medical Archive Solution (GMAS) software that was loaded onto the existing DS4000 platform.

“The storage process today begins at the local hospital level, where we have a RAID-5 DAS [direct attached storage] or SAN with at least 3.5 terabytes of capacity; these we procured from two respected storage vendors,” Thompson explains. “The images in RAID are designated for short-term storage and remain there until a preset capacity threshold is reached, generally about 90% full, at which point the oldest images are purged as required to reclaim storage and reduce it to 80% full. As images are sent to RAID short-term storage, copies go simultaneously to redundant GMAS repositories in central data centers.
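The watermark purge Thompson describes (trim the oldest images once the local cache passes roughly 90% full, stopping near 80%) can be sketched as follows. The data layout and function name are assumptions for illustration; a production PACS would track studies in a database rather than a list.

```python
def purge_to_low_watermark(cache: list[tuple[str, int, int]], capacity: int,
                           high: float = 0.90, low: float = 0.80) -> list[str]:
    """Cache entries are (study_id, acquired_timestamp, size_bytes).

    When usage crosses the high watermark, delete the oldest studies
    until usage falls to the low watermark. Returns the purged study ids;
    purging locally is safe because every image already has a copy in
    the central GMAS archive.
    """
    used = sum(size for _, _, size in cache)
    purged = []
    if used < high * capacity:
        return purged  # still below threshold; nothing to do
    for study_id, _, size in sorted(cache, key=lambda e: e[1]):  # oldest first
        if used <= low * capacity:
            break
        purged.append(study_id)
        used -= size
    return purged
```

Purging down to 80% rather than just below 90% gives the cache breathing room, so the purge job runs occasionally instead of on every new study.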

“With regard to retrievals, we use prefetch logic to download any archived images potentially needed locally prior to a physician's examination,” he continues. “Retrievals from the archive also can be on demand. Any image retrieved remains on short-term storage until the normal aging process marks it for purging. In that sense, short-term storage at each site may be considered a data silo for that site. All this data, however, also resides on the redundant central archive and is available to any authorized user enterprise-wide. By accepting data from multiple vendors, applications, and sites, GMAS is a tool that enables data silos to be centralized, managed, and protected.”
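Prefetch logic of the kind Thompson mentions typically walks the next day's exam schedule and pulls each patient's prior studies from the central archive before the physician needs them. The sketch below is a simplified illustration; the data structures (`schedule`, `local_cache`, `archive_index`) and the function name are assumptions, not the GMAS interface.

```python
def prefetch_worklist(schedule: list[dict], local_cache: set[str],
                      archive_index: dict[str, list[str]]) -> list[str]:
    """Build the list of archived prior studies to download ahead of exams.

    schedule: upcoming appointments, each with a "patient_id" key.
    local_cache: study ids already on the site's short-term storage.
    archive_index: patient id -> that patient's archived study ids.
    """
    to_fetch = []
    for appointment in schedule:
        for study_id in archive_index.get(appointment["patient_id"], []):
            if study_id not in local_cache:  # skip what is already local
                to_fetch.append(study_id)
    return to_fetch
```

Running this against tomorrow's schedule overnight means priors are already on local disk when the radiologist opens the study, while on-demand retrieval covers the unscheduled cases.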

For purposes of disaster recovery, Iowa Health System's strategy relies heavily on those replicated GMAS archives and databases. “Replication entails use of selected tape-based components culled from the two data centers,” Thompson says. “In the future, we'll have primary fail-over within the primary data center and secondary fail-over to the secondary data center.”

From here, Iowa Health System plans to expand its GMAS to include other high-volume, nondynamic data (scanned documents, for instance). “GMAS works with multiple applications from multiple vendors, so only one central archive solution is needed,” Thompson clarifies. “As storage capacity expands, so does the number of servers handling the storage. The ability to manage and manipulate data at required performance levels is thereby maintained as storage capacity expands. Further, as data ages on GMAS and online-versus-near-line requirements are refined, selected data can be ported from GMAS disk to tape libraries.”

Thompson finds much to like about the strategy his enterprise has adopted in response to the explosive growth of imaging. “We needed, and found, a solution that provided us enhanced application performance and both extensibility and scalability,” he enthuses. “It made a great deal of sense and was seen to be a good investment.”

Rich Smith is a contributing writer for Medical Imaging.