Peter A. Janick, MD, PhD, remembers how it was 15 years ago when he worked at a Baltimore radiology group that had acquired a 3D workstation to process cross-sectional images and he was chosen to make this system his happy hunting ground. Soon, the computer-savvy Janick was creating gorgeous renderings of organs, vessels, and other anatomic structures—at an average speed of 2 hours per study.
“The senior partner came by, tapped me on the shoulder, and told me that I couldn’t be spending that much time on each study,” says Janick, who today is radiology chairman at Sparrow Health System, Lansing, Mich, and a senior partner with 18-radiologist Lansing Radiology Associates. “He warned me that I should be spending no more than 10 minutes to read the study, or else the group was going to lose money. So, I trained a couple of our techs to do the bulk of the rendering chores, and I was able to get the time down to 20 or 30 minutes per study.”
Flash forward 15 years to the present. Three-dimensional renderings are more attractive than ever—and demanded by referring physicians and their patients as never before. Today, however, instead of being formed from image data sets numbering in the low hundreds, they are made from data sets in the low thousands, thanks to the advent of scanner technology that has progressively reduced the size of a single cross-sectional slice to a mere 0.5 mm.
“I’m in a different practice now, yet we’re back to spending an hour or more at the 3D workstation, segmenting out all the vessels and cutting away the bones in order to make these pretty pictures,” Janick says. “And here I am, telling the younger partners that they can’t be spending all this time doing these renderings, that they have 10 minutes to read the study, or we’re going to go broke.”
Janick’s group—like many others around the country that daily use 64-slice CT scanners and the latest in MRI systems—is awash in cross-sectional image data sets. Consequently, he and his radiologist partners are treading water as hard as they can to keep their heads above the surface.
Interventional neuroradiologist J. Neal Rutledge, MD, of 70-radiologist Austin Radiological Association, Austin, Tex, knows the feeling well. “Reading time is directly proportional to the number of images, and we’ve certainly witnessed an explosion in images over the past several years,” he says. “Then you add to that the fact that many of these images are becoming more complex and richly detailed, adding more time. With thinner slices, there are more pathological changes to potentially look at—not to mention the greater amount of effort that has to be put into reconstructing and manipulating those slices into understandable image planes.”
Reading of these massive data sets is further slowed by dependence on plodding workstations. “It’s hardly a revelation that the speed at which images come up on the monitor affects radiologist productivity,” says Raym Geis, MD, one of two neuroradiologists with Advanced Medical Imaging Consultants, Fort Collins, Colo—a 21-radiologist group that provides coverage at a 240-bed hospital operated by Poudre Valley Health System and at another hospital owned by Banner Health System in the adjacent town of Loveland, Colo.
Janick adds, “The problem with going through a thousand slices on a PACS workstation is that it takes a long time to physically scroll through each image.”
University of Maryland diagnostic radiology professor Eliot Siegel, MD (who also is vice chairman of the school’s radiology department and director of radiology for the VA Maryland Healthcare System), asserts that a PACS workstation would be useful for the reading of huge cross-sectional studies only if it possessed the ability to take multiple thin sections and combine them into one or more thicker sections—something almost no PACS currently on the market can do. “The problem we have is that the PACS vendors, for the most part, are not providing the tools that are necessary for more advanced volume visualization, although that has recently started to change,” he says. “[In general], what we have is the radiologist forced to get up from his PACS workstation and walk over to an advanced visualization workstation and there perform maximum intensity projection [MIP] processing and other multivolume visualizations. That alone—taking the walk and getting settled in at the second station—can add a couple of minutes to his or her read time.”
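The slab-combining operation Siegel describes can be sketched in a few lines of NumPy. This is an illustrative example with synthetic data, not any vendor’s implementation: averaging groups of thin slices yields thicker mean sections, while taking the maximum instead yields slab MIPs.

```python
import numpy as np

def thick_slabs(volume, slab_thickness, mode="average"):
    """Combine groups of thin axial slices into thicker slabs.

    volume: 3D array shaped (slices, rows, cols)
    slab_thickness: number of thin slices merged into each output slab
    mode: "average" for a mean slab, "mip" for a maximum-intensity slab
    """
    # Drop any leftover slices that don't fill a complete slab
    n = (volume.shape[0] // slab_thickness) * slab_thickness
    grouped = volume[:n].reshape(-1, slab_thickness, *volume.shape[1:])
    return grouped.mean(axis=1) if mode == "average" else grouped.max(axis=1)

# A hypothetical 1,000-slice study collapsed into 125 eight-slice slabs
thin = np.random.rand(1000, 64, 64).astype(np.float32)
slabs = thick_slabs(thin, 8)   # shape: (125, 64, 64)
```

Collapsing 1,000 thin slices into 125 slabs is exactly the kind of reduction that makes a study manageable to scroll, at the cost of through-plane detail.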
In Janick’s offices, cross-sectional angiographic imaging reads take place exclusively on a 3D workstation programmed to allow relatively fast automatic segmentation of vessels. “On our 3D workstation, instead of flipping through discrete images, we deal with a volume of images,” he explains. “And thanks to the processing power of today’s computers and optimized reconstruction routines, it’s possible to go from the top of the volume to the bottom of the volume in just 2 or 3 seconds. Now we can sit at the workstation and read out an uncomplicated study involving, say, the head and neck in anywhere from 5 to 10 or 15 minutes. But a complicated study—such as for peripheral vascular disease—can still be time-consuming, because there are 20 different segments of the vessel that have to be looked at and measured.”
Vendors of 3D workstations have been rising to this challenge by striving to develop systems that incorporate some of the workflow features of PACS workstations. Meanwhile, PACS vendors have been trying to return the favor with workstations offering the advanced visualization features of their 3D counterparts.
“The next thing vendors need to do,” Janick suggests, “is take these 3D workstations and embed them into PACS and optimize them for reading, rather than for making batch reconstructions.”
Basic Survival Tool
Workstation technologic issues notwithstanding, the biggest challenge to productivity remains the need for eyes on a large number of individual images making up a single study. “Each of our radiologists is developing their own way to deal with large data sets—these include 2D multiplanar reconstructions [MPRs] and use of computer-aided detection systems,” Geis says.
The value of MPRs is difficult to overstate, and that is why their use constitutes a preferred survival strategy for many groups. “When you start reading in multiplanar mode, you regain much of the speed you had back in the days when you could comfortably read in stack mode,” Janick says. “All the important diagnostic decision-making measurements come from the MPRs. The advantage of MPR is that we’re no longer confined to the X-Y plane; it allows us to look in the X-Z or Y-Z plane or an oblique plane and have images that are exactly the same quality as the axial plane. In fact, some of the off-plane reconstructions actually look better because the statistical averaging over a number of slices causes loss of significant amounts of quantum mottle.”
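Janick’s point about reading planes can be illustrated with a small NumPy sketch, using a synthetic volume in place of real scanner data: with isotropic voxels, the coronal and sagittal planes are just different index orders over the same volume, and averaging a few neighboring slices trades through-plane resolution for reduced quantum mottle.

```python
import numpy as np

# Synthetic isotropic volume standing in for an axial stack, indexed (z, y, x)
vol = np.random.rand(64, 64, 64).astype(np.float32)

# With isotropic voxels, no resampling is needed -- each plane is a slice
axial    = vol[32, :, :]   # X-Y plane (fixed z)
coronal  = vol[:, 32, :]   # X-Z plane (fixed y)
sagittal = vol[:, :, 32]   # Y-Z plane (fixed x)

# Averaging a few neighboring slices suppresses quantum mottle (noise)
# at the cost of some through-plane resolution
coronal_slab = vol[:, 30:35, :].mean(axis=1)
```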
That view is echoed by Vassilios D. Raptopoulos, MD, director of CT services and associate radiologist-in-chief at Beth Israel Deaconess Medical Center, Boston. “With the current generation of scanners, MPR image quality is very good,” he says. “MPRs also are easy to create. In our department, our scanners produce MPRs automatically using some very good protocols that we’ve developed.”
However, the ease of creation of MPRs by the scanner comes with a price, as Siegel has discovered. “When you generate a lot of images at the scanner, that increases the number of images you have to archive and move around,” he says. Siegel’s solution is to make the MPRs—and MIPs, too—directly within the server. “We call it server-side rendering, and we believe it’s a smarter way to go,” he says. “With server-side rendering, the workstation is used to open a window into the server’s memory and there perform the reconstructions. No images are physically sent to the workstation; they remain on the server at all times. It’s significantly faster than if we were to attempt to send images to the workstation, and faster still in comparison to the time it takes to send images to PACS.”
The strategy for regaining read-speed embraced by Austin Radiological Association’s Rutledge requires what he calls an “optimization of perception.” He explains it thus: “I achieve optimization of perception by reviewing images in optimal planes under the most ideal conditions in my area of specialty. By ideal conditions, I mean having an ergonomic workstation, an ergonomic chair and desk, a monitor set for best brightness and contrast levels, and the ambient room lighting equal to that of the monitor but not allowing distracting reflections on the screen. Also, the image must be displayed at the ideal size of about 6 inches for CT and MR; larger images can result in subtle eye fatigue. I also minimize distractions on the screen—no mouse on the field, no buttons. Foremost, I always try to review images by scrolling through the series; this optimizes human perception of change by working with our sensitivity to motion, and minimizes iconic memory errors and unconscious inference.”
Rutledge, who also serves as an adjunct professor of psychology at the University of Texas and medical director of its Imaging Research Center, San Antonio, adds that in addition to reviewing images by scrolling, he uses the highest contrast and a hierarchical physiological-sensitive order in his review. “With regard to hierarchical physiological-sensitive images on MR, I look at the diffusion-weighted images first, because they’re most sensitive to pathology,” he says. “Then I look at FLAIR images next, because their high detail and conspicuity in lesion detection provide the greatest amount of difference per background. After that, I go through the different pulse sequences and planes as needed based on the pathophysiology.”
This approach is not at all dissimilar to that employed by pathologists. Jim Whitfill, MD, medical informatics specialist and CIO with Scottsdale Medical Imaging—a group of 35 radiologists, with subspecialists culled from top academic and health systems across the United States—asserts that radiologists might find the pathology model worth considering. “Pathologists have had image data overload for generations,” says Whitfill, an internal medicine specialist before turning to diagnostic imaging. “Think about it: the pathologist receives an entire kidney from the operating room and needs to understand the data contained in that kidney, so he’ll take representative slices of different places and then go down to the microscopic level and use other techniques to guide him to the areas he needs to focus on and look into further. In radiology, following the pathology model, we take the data from the scanner, reformulate it into a 3D pattern, and use it to gain a bird’s-eye view; then, we go into those individual slices as needed for additional information.”
MPRs and MIPs are each seen in some circles as waystations along the road to 3D. However, Janick contends that it is unlikely that true 3D will ever be used for reading purposes, because in 3D mode, subtleties of the image are lost. “Surface rendering is what you produce in 3D, and this is done for the benefit of referring clinicians and their patients as a means of more easily demonstrating to them your findings developed from, for instance, an MPR,” he says. “But a radiologist isn’t likely to diagnose from an actual 3D image for the reason that surface renderings prevent you from seeing what lies within the virtual body—a tumor, for instance.”
The same shortcoming applies to MIPs, he adds. “MIPs are very useful, but great care must be exercised,” Janick says. “For example, if you have an eccentric lesion, but it is eccentric in the wrong projection, the vessel can look normal on the MIPs, even though it’s highly stenotic. The problem is that MIPs show you only the highest intensity pixel. So, if there’s one in the way, it will cover up the lesion.”
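The pitfall Janick describes can be demonstrated with a toy NumPy volume (the geometry here is purely illustrative): an eccentric stenosis vanishes in a MIP taken along the axis the plaque faces, because a brighter voxel on the same projection ray masks it, while a MIP from the orthogonal direction reveals the narrowing.

```python
import numpy as np

# Toy volume indexed (z, y, x): a contrast-filled vessel running along z,
# with a lumen two voxels tall (y = 1..2) and one voxel wide (x = 2)
vol = np.zeros((10, 5, 5), dtype=np.float32)
vol[:, 1:3, 2] = 100.0

# Eccentric stenosis at z = 4: plaque obliterates the y = 2 half of the lumen
vol[4, 2, 2] = 0.0

# MIP along y: the remaining bright half of the lumen lies on the same ray,
# so the projected vessel looks completely normal at the stenosis
mip_y = vol.max(axis=1)    # mip_y[4, 2] is still 100

# MIP along x: the two halves of the lumen project to separate rays,
# so the narrowing at z = 4 becomes visible
mip_x = vol.max(axis=2)    # mip_x[4, 2] drops to 0; mip_x[4, 1] stays 100
```

Only the projection orthogonal to the plaque shows the lesion, which is why Janick cautions that a vessel can look normal on a MIP while remaining highly stenotic.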
The task of creating 3D renderings can be parceled out in any of three ways: to the radiologist (least productive), to the technologist (least efficient), or to an in-house laboratory (least economical).
Peter A. Janick, MD, PhD, radiology chairman at Sparrow Health System, Lansing, Mich, and a senior partner with Lansing Radiology Associates, thinks the in-house laboratory model is probably the best option, given the downside to the other two.
“An in-house lab frees the radiologist from the chore of producing the 3D renderings and will allow the renderings to be readied much faster than if the job is assigned to the technologist,” he says. “And even though the in-house lab model introduces a cost center to the practice, it probably will pay for itself several times over when you weigh it against the gains to be had from not harming—and probably adding to—radiologist and technologist productivity and efficiency.”
A major cost involved in a 3D laboratory is the reconstruction software—about $100,000 for a decent package, Janick reports. “You also need workstations—one for the lab tech who will perform the surface reconstructions, and then one for each location where a radiologist will be looking at the various cross-sectional images,” he says.
The mechanics of getting the images from the laboratory to the radiologist workstation are not all that complicated. “It’s easy to set up the scanners so they automatically route a copy of the thin-sliced data to the lab tech’s workstation,” he says. “You can route those images through PACS, provided you’re using the most up-to-date technology. Just 6 months ago, when we still had our older PACS, it could take 25 minutes to move a 1,000-image trauma CT case from scanner to PACS. That, of course, was because the PACS had to validate each individual image against the RIS data, which represented a lot of system-slowing database calls on an overburdened system. To work around that, we sent only thicker-slice images, so we could reduce the image volume to about 300 per CT study. It wasn’t the best solution because the thin-sliced data would be routed to the technologist for 3D processing, after which, another 600 or 700 images would be spit back to the PACS—bringing us right back to where we started: 1,000 images. Worse, the thin-slice data were discarded, precluding reanalysis. Additionally, the reconstructions were separated over 10 or 15 different series, none clearly labeled, making it hard for the radiologist to determine whether he was looking at the right, left, or oblique view of the structure in question, and overwhelming the PACS workstation, which can’t accommodate more than eight series without making the view port too small to see the image.
“With our recent upgrade to a gigabit-speed network with new hardware and an upgraded PACS back end, the same 1,000-image CT is now available for viewing in under 5 minutes. We now archive the thin-slice data and look to drastically reduce the number of reconstructions.”
“The best way to go is with a 3D workstation in place of a PACS workstation for this purpose,” Janick continues. “It is so much more efficient. And efficiency is the name of the game here.”
Read Them or Weep
Some radiologists are leery of 3D for another reason. They believe the renderings give rise to the notion—wrong, they say—that one no longer need look at every image in the set. “We still are responsible for everything on every image,” Geis says. “Radiologists simply cannot migrate away from the practice pattern of interrogating each image.”
In response, Rutledge asks a simple question: “What’s going to happen when we have infinite resolution and infinite planes? Today, you have 500 images, tomorrow it’s 3,000, and in 3 years, it’ll be 30,000 images per study. To be able to look at 30,000 images, there will need to be a paradigm shift that frees us from the responsibility of looking at every image and instead, obliges us to find the pathology. Then, the question becomes, how do we interrogate large sets of data to find the pathology? We need to look at how other fields have done this—data mining is not a new science.”
Janick is not so sure radiologists will be confronted with 30,000-image data-set studies any time soon. He believes the march to thinner slices that began a few years ago has, for now, halted and is not likely to resume until there is a major, dramatic breakthrough of some sort in scanner detection technology. “Having thinner slices beyond today’s 0.5 mm won’t help that much because we’re already at isotropic pixel levels,” he says. “Without corresponding improvements in the detectors, thinner slices will lead only to image degradation rather than image improvement.”
However, all bets are off if 4D imaging ever comes to be a routine process. Says Janick, “3D is not the destination; volume is the destination. Progress is not going to involve a journey from 2D to 3D, but rather a journey from 2D to 4D, where we can look at things temporally. But for that to happen, there will need to be an advance that allows us to deliver a huge reduction in radiation dose so that we can image the target many more times than we do now.
“Of course, the potential problem with 4D imaging is that it will make today’s struggle with gigantic image data sets seem like a walk in the park by comparison. But I’m hopeful that radiology’s proven ability to devise solutions to each prior leap in the size of data sets will permit the profession to cope with the leap into 4D. I think our experiences with MPR suggest this is possible. For instance, with chest CT, it’s helpful that we can clone the stack of images and create view ports for soft tissues, for lungs, and for bone, which can then be synchronized so that the same lesion can be observed in all three windows and levels. I believe something similar will be done with regard to 4D—volume visualization will morph in such a way that as you move in one plane or another, it will be possible to sync all the temporal volumes and thereby be able to look at the arterial phase, the venous phase, and a delayed phase all at the same time. It will be fast enough that the radiologist can deal with all the different series and not be overwhelmed.”
But that is the future, this is the present. How well are radiologists coping with the large image data sets in the here and now? Whitfill offers a clue: “The homegrown solutions we’ve all come up with are greatly varied, and, by and large, they are working—even if some of us still feel as if we’re hanging on by our fingernails.”
Rich Smith is a contributing writer for Axis Imaging News.