Virtual reality employs medical images as a foundation for innovative methods of reviewing and analyzing the data they contain. From the traditional definition of VR, which uses head-tracking devices, stereo glasses, and fully immersive environments for interaction with the data, to emerging techniques that combine key elements of the animation process with medical images from any three-dimensional modality, virtual reality provides valuable information for medical education, clinical practice, and research.

Combining 3D DICOM image data sets produced by medical imaging equipment with high-performance computing, VR has entered new realms of value and usefulness for the medical community.
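
All of these applications start from the same raw material: a stack of DICOM slices assembled into a volume of intensity values. A minimal sketch of that first step, assuming the open-source pydicom and NumPy libraries and a hypothetical directory of slices:

```python
# Minimal sketch: assemble a 3D volume from a DICOM series.
# pydicom and NumPy are assumed tools; the directory path is hypothetical.
from pathlib import Path

import numpy as np
import pydicom

def load_dicom_volume(directory):
    """Read every DICOM slice in a directory and stack it into a 3D array."""
    slices = [pydicom.dcmread(p) for p in Path(directory).glob("*.dcm")]
    # Sort slices by their position along the scanner's z-axis.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices]).astype(np.int16)

volume = load_dicom_volume("ct_series/")  # hypothetical path
print(volume.shape)  # (slices, rows, columns)
```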

Traditional VR
The University of Illinois at Chicago (UIC) began developing networked virtual reality applications for medical education in 1997 to assist medical students and residents in their understanding of complex 3D anatomic structures. The Virtual Reality in Medicine Lab (VRMedLab) at UIC uses VR displays developed by the Electronic Visualization Laboratory (EVL, also at UIC) for its projects. The lab began with a virtual temporal bone, virtual pelvic floor, and virtual craniofacial anatomy to teach its surgical residents as well as to share surgical expertise with other institutions throughout the world.

The lab has moved on to new projects with real-time applications to meet the clinical need for cranial implants, says Mary Rasmussen, director of the VRMedLab. Because the VR system is networked, collaborators from throughout the world can evaluate CT data that have been rendered as a virtual reality image to design an implant that will precisely fit a skull defect.

“Someone who has suffered trauma to the head, for example, in a car accident, or who has lost a portion of their skull to a disease such as cancer, or a child with a congenital defect, may be missing a piece of their skull,” Rasmussen explains. While surgical intervention often involves harvesting bone from other parts of the patient’s body, several substances can be used during surgery to fill in the space left by the defect. The networked system provides a way to take the patient’s CT data and, within 10 minutes, make it available for review anywhere in the world that has at least a desktop VR system.

The Silicon Graphics Octane2 workstation.

Once a treatment decision is made, the data are sent to a stereolithography system where medical modelers create a physical 3D model that fits the defect precisely. The resulting implant is sterilized and delivered to the operating room, where it is placed to seal the defect. To date, these techniques have been used to perform nine successful surgical repairs.

The latest endeavors for the VRMedLab staff involve the use of a new VR prototype from the EVL that includes haptics to provide a sense of touch, which Rasmussen expects to offer new capabilities in the future.

TGS Inc (San Diego, Calif) has been providing interactive graphics software for more than 20 years. Two years ago, the company created amiraVR to enable non-programmers to visualize images in VR.

“There are a multitude of options in what you can do,” says Steve Lutz, vice president of sales and marketing for TGS. “In CAVE [CAVE Automatic Virtual Environment], dual display, large reality center or in a stereo environment where you see the image come toward you and interact with it, there is a lot of activity in the medical field with immersive surgery and treatment programs to visualize what they will do.”

Once 3D medical images from MRI, CT, or any other 3D data set have been imported into the amira software, the user is able to interact with the images. Lutz explains that the user can not only see a structure in excellent detail but also cut it in half, cut it into slices, render it as a volume, animate it, and add different effects to highlight various aspects of the object.
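
amira’s own programming interface is proprietary and not shown here, but the same class of operations can be sketched with the open-source VTK toolkit: volume-render a CT series, then “cut it in half” with a clipping plane. The directory name and the intensity thresholds below are assumptions:

```python
import vtk

# Illustrative volume rendering with a cut plane, using open-source VTK.
# This is not amira's API; directory and thresholds are assumptions.
reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("ct_series")  # hypothetical DICOM directory
reader.Update()

# Map voxel intensities to opacity and color (threshold values are guesses).
opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)      # air: transparent
opacity.AddPoint(500, 0.2)    # soft tissue: faint
opacity.AddPoint(1200, 0.9)   # bone: nearly opaque

color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(1200, 1.0, 1.0, 0.9)

prop = vtk.vtkVolumeProperty()
prop.SetScalarOpacity(opacity)
prop.SetColor(color)
prop.ShadeOn()

mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())

# "Cut it in half": clip the rendered volume with a plane through its center.
plane = vtk.vtkPlane()
plane.SetOrigin(reader.GetOutput().GetCenter())
plane.SetNormal(1, 0, 0)
mapper.AddClippingPlane(plane)

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
interactor.Start()
```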

Head-tracking or wand-tracking capabilities enable interaction with the images. A wand-tracking device operates much like a computer mouse: the user can point at the data, grab and move them, and perform the interactive functions. A head-tracking system allows the image to be viewed from a variety of vantage points.
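
Conceptually, the wand’s “grab” works by caching the transform between the wand and the object at the moment the button is pressed, after which the object follows the wand rigidly. A minimal sketch, assuming a hypothetical tracking system that delivers 4 x 4 pose matrices each frame:

```python
import numpy as np

# Conceptual sketch of wand "grab" interaction. Poses are 4x4 homogeneous
# transforms supplied each frame by a (hypothetical) tracking system.
grab_offset = None  # wand-to-object transform captured at grab time

def on_button_press(wand_pose, object_pose):
    """Cache the object's pose relative to the wand when grabbing starts."""
    global grab_offset
    grab_offset = np.linalg.inv(wand_pose) @ object_pose

def on_frame(wand_pose):
    """While the button is held, the object follows the wand rigidly."""
    return wand_pose @ grab_offset
```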

Geoffrey A. Dorn, PhD, executive director of the Center for Visualization and research professor at the University of Colorado (Boulder), used amira software to complete a feasibility study of the potential of VR technology in stereotactic radiosurgery and conformal radiation therapy, in collaboration with Ronald V. Dorn, MD, a radiation oncologist at the Mountain States Tumor Institute (Boise, Idaho). A geophysicist by background, Geoffrey Dorn explored with his group crossover applications of technology from his work in the energy industry to assist in the visualization of tumors for radiation therapy planning.

After an initial assessment of current practice in tumor visualization, the team evaluated a variety of visualization environments, from desktop systems with stereo (called fish-tank VR), to bench-sized systems that provide increased immersion, up to room-sized systems.

“There is some advanced segmentation technology that we’ve developed in the energy industry to segment geobodies of seismic volumes of data,” explains Geoff Dorn. “Very promising in this study, we took some of that technology, made modifications to account for the difference between seismic and medical volume, and applied that segmentation technology to MRI and MRA volumes.”
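
The group’s geobody segmentation technology is proprietary, so the following substitutes a generic stand-in, seeded region growing, to illustrate the kind of operation involved: start from a voxel inside a structure and grow outward while the intensity stays within a tolerance. The sketch assumes scikit-image and a hypothetical preprocessed MRA volume:

```python
# Generic seeded region growing; a stand-in, not the group's method.
import numpy as np
from skimage.segmentation import flood

volume = np.load("mra_volume.npy")  # hypothetical preprocessed MRA volume
seed = (40, 128, 128)               # hypothetical voxel inside a vessel

# Grow a connected region of voxels whose intensity stays within
# `tolerance` of the seed voxel's intensity.
mask = flood(volume, seed, tolerance=75.0)
print("segmented voxels:", mask.sum())
```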

The resulting images were used both in brachytherapy treatment for prostate and cervical cancer and in imaging the boundary of brain tumors for external beam conformal therapy.

They studied the use of a VR display with tracking to allow the physician to rapidly plan and adjust the treatment. The goal is to have the system automatically calculate the pattern of radiation beams to optimally “paint” the tumor with radiation while sparing other critical structures in the brain. They anticipate a more efficient and effective plan generated through VR visualization than with current systems.
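
The planning algorithm itself is not described in the study; a common generic formulation treats the problem as choosing nonnegative beam weights so that the delivered dose approaches the prescription, which can be solved as a nonnegative least-squares problem. A toy sketch of that idea, with an entirely synthetic dose matrix:

```python
# Generic inverse-planning sketch (not the group's actual optimizer):
# choose nonnegative beam weights w so that the delivered dose D @ w
# approaches the prescription, where D[i, j] is the dose beam j deposits
# at voxel i. The dose matrix here is random stand-in data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_beams = 500, 24
D = rng.random((n_voxels, n_beams))  # hypothetical dose matrix

prescription = np.zeros(n_voxels)
prescription[:100] = 60.0            # tumor voxels: 60 Gy target
# remaining voxels are healthy tissue with a target of 0 Gy

weights, residual = nnls(D, prescription)
dose = D @ weights
print("mean tumor dose:", dose[:100].mean())
print("mean healthy-tissue dose:", dose[100:].mean())
```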

At this point, the group is seeking further funding to continue their work in these VR approaches.

Todd Lempert, MD, chief of interventional neuroradiology at Scripps Memorial Hospital (La Jolla, Calif), is using a Silicon Graphics Inc (Mountain View, Calif) high-speed computer in addition to the hospital’s rotational angiography system to accomplish an image fusion technique with stereoscopic renderings, both to examine cerebral vascular malformations and to evaluate vertebroplasty procedures.

“We take stereoscopic views [of the images] from the right and left, separated by about 6 degrees. When you use special glasses to fuse the two images, it provides a realistic 3D illusion,” says Lempert. “It’s as if the image is suspended right in front of you, and you can reach out and touch it.”
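
The geometry Lempert describes is easy to sketch: render the vasculature from two vantage points about 6 degrees apart and fuse the pair. The sketch below fuses them as a red-cyan anaglyph, one simple fusion method (the hospital’s actual stereo display is not specified); render_view is a hypothetical renderer that returns a grayscale image at a given azimuth:

```python
import numpy as np

def make_anaglyph(render_view, azimuth_deg=0.0, separation_deg=6.0):
    """Fuse two views about 6 degrees apart into a red-cyan anaglyph.

    render_view(angle) is a hypothetical renderer returning an (H, W)
    grayscale float image of the scene at the given azimuth angle.
    """
    left = render_view(azimuth_deg - separation_deg / 2.0)
    right = render_view(azimuth_deg + separation_deg / 2.0)
    # Left eye (red filter) sees the left view; right eye (cyan filter)
    # sees the right view in the green and blue channels.
    return np.stack([left, right, right], axis=-1)
```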

Dr Rory McCloy (with glasses) of the University of Manchester’s teaching hospital uses SGI Visual Area Networking technologies to bring 3D images of a patient’s brain into the operating room.

They are using these techniques to study blood vessels in the head, looking for narrowing or a malformation such as an aneurysm to determine the best endovascular treatment. In the case of an aneurysm, Lempert navigates a microcatheter into the brain artery and prevents rupture of the defect by placing a series of platinum coils within the aneurysm to seal it off from circulation. The 3D stereoscopic images provide a level of detail unavailable with other techniques.

In addition to their cerebral vascular applications, they use the imaging capabilities to direct vertebroplasty procedures, determining the exact placement of cement injected into vertebral bodies. This intervention is used to stabilize spinal compression fractures caused by osteoporosis and other conditions.

“We can fly down the spinal canal as if we’re inside it and make sure the cement is not encroaching,” explains Lempert.

Augmented Reality
“From the traditional virtual reality point of view, you create a virtual world, immerse yourself in it, and try to act in that virtual space,” says Branislav Jaramaz, PhD, scientific director, Institute for Computer Assisted Orthopedic Surgery (ICAOS), The Western Pennsylvania Hospital (Pittsburgh). “That is useful in training, but for our goals, based in real surgical interventions, we have developed something we call ‘augmented reality,’ which is a hybrid of virtual reality and reality.”

Working in collaboration with CASurgica Inc (Pittsburgh, Pa), the Robotics Institute at Carnegie Mellon University, and others, the group has designed two different applications based on CT scans of the patient’s hip.

3D image created by Data Inc shows stroke damage in a human brain.

In the first scenario, they use CT images to produce a virtual model of the planned joint replacement. For a hip replacement, for example, they take the virtual model and test it against a variety of leg motions to determine whether the joint is likely to dislocate. There are five or six leg motions typically associated with dislocation, and the team tests the range of motion of the planned implant before a problem can arise. They can then try other implant scenarios until they find an appropriate match for the particular patient, improving preoperative planning to enhance the outcome.
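
The ICAOS software is not described in detail; conceptually, the test compares each candidate implant’s simulated impingement-free range of motion against the leg motions known to provoke dislocation. A toy sketch of that comparison, with every number hypothetical:

```python
# Conceptual sketch only (not the ICAOS software): compare an implant
# configuration's simulated impingement-free flexion range against the
# leg motions typically associated with dislocation. All numbers are
# hypothetical illustrations, not clinical data.
DISLOCATION_MOTIONS = {
    "sit-to-stand": 95,          # degrees of hip flexion required
    "cross-legged sitting": 110,
    "tying a shoelace": 120,
}

def risky_motions(safe_flexion_deg):
    """Return the test motions that exceed the implant's safe range."""
    return [motion for motion, required in DISLOCATION_MOTIONS.items()
            if required > safe_flexion_deg]

for implant, safe in [("cup A, 45-degree abduction", 105),
                      ("cup B, 40-degree abduction", 125)]:
    flagged = risky_motions(safe)
    print(implant, "->", flagged if flagged else "no predicted dislocation")
```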

At UCLA, Paul Thompson, PhD, using an SGI Onyx system, has built dramatic time-lapse videos depicting damage to the brain caused by Alzheimer’s disease.

 This cutaway of a 3D brain image from Data Inc shows a hemorrhagic stroke in the frontal lobe.

In addition to these virtual model studies, they have developed a device that projects a CT image overlay from preoperative scans onto the patient’s hip. This provides the surgeon with the exact alignment of structures inside the patient prior to the first incision, which enables minimally invasive procedures. Jaramaz likens this technique to providing “x-ray vision” for the surgeon.

Education and Clinical Practice
Although the traditional view of VR involves the use of a system to project images into 3D space, as described in the applications above, there are other approaches that merge technologies such as animation used in the film industry with medical images to offer additional information for clinicians.

Alias (Toronto, Ontario, Canada) produces innovative 3D graphics software used in film, broadcast, and computer games, according to Chris Ruffo, Alias’ global industry market manager for digital content creation and feature film. The company’s interactive 3D software package, Maya, which received a Science and Technical Academy Award last March, was used extensively for projects ranging from Finding Nemo, The Lord of the Rings, and Spider-Man to the Kennewick Man project for National Geographic magazine. Now entering the realm of medical imaging animation, Maya has been used to produce interactive and dynamic animations based on medical imaging data sets.

“What the software allows you to do is to take complex geometry and make it interactive,” explains Ruffo. “You can spin it around, work in 3D with it, add lights and shading, and render it out.”

Aaron Oliker, CEO of CyberFiber Inc (Ridgefield Park, NJ), has used Maya for two innovative medical imaging projects.

Oliker collaborated with Court B. Cutting, MD, associate professor of plastic surgery at the New York University Medical Center, to produce animated images to teach surgeons in Third World countries how to perform cleft lip and palate repairs for The Smile Train, a not-for-profit organization.

This polygon mesh line drawing (top) serves as the building block for these 3D representations of the heart (right and bottom). The images were made using plug-ins for Maya software, which were created by CyberFiber Inc.

“We took 1 mm slice CT data and created a 3D model,” says Oliker. They animated the model to demonstrate all of the procedures necessary for cleft lip and palate reconstructions, and put the images onto three CD-ROM training videos for distribution throughout the world. To see examples of the animations, go to www.smiletrain.org/medpro/training_cds.htm.
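
The pipeline Oliker describes, from a stack of 1 mm CT slices to a surface model ready for animation, can be approximated with open-source tools (the article does not say which software CyberFiber used for this step). A minimal sketch using the marching cubes algorithm from scikit-image, with a hypothetical threshold and voxel spacing:

```python
# One standard way to turn a stack of 1 mm CT slices into a surface mesh:
# extract an isosurface with marching cubes. Threshold, spacing, and the
# input file are hypothetical.
import numpy as np
from skimage import measure

volume = np.load("ct_stack.npy")  # hypothetical (slices, rows, cols) array

# spacing=(1.0, 0.5, 0.5): 1 mm between slices, 0.5 mm in-plane (assumed).
verts, faces, normals, values = measure.marching_cubes(
    volume, level=300, spacing=(1.0, 0.5, 0.5))
print(f"{len(verts)} vertices, {len(faces)} triangles")
```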

The next project that Oliker tackled was to produce an animation of a beating heart based on anatomic photographs from the Visible Human Project and on data from high-speed biplane fluoroscopy images of sheep hearts from Stanford University.

“With the human heart from the Visible Human, and the sheep data from that study, we merged the data to create a moving heart, specifically the mitral valve in the left ventricle, in exquisite detail,” says Oliker. These images are visible on www.cyberfibermed.com.

Data Inc (Denver, Colo) has combined DICOM images from an electron beam computed tomography system at the Colorado Heart and Body Imaging center to produce a QuickTime VR file. The resulting CT and VR images are burned on a CD-ROM and sent to the patient.

“We work with the software that they are using to drive the scanners, such as TeraRecon or Viatronix, to position the model at a given angle,” explains Data’s technical director, Tom Richardson. Using 1 mm slices, they interpolate approximately 120 to 140 sections through the heart. “Then we use our software and our methods to create a polygonal model that we can texture map and manipulate in a way that they never were able to look at it before.” The end product is used to teach the patient about his or her particular heart anatomy and physiology, or to teach others about scan topics.
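
Data’s software is proprietary; the interpolation step Richardson mentions can be sketched generically as resampling the slice stack along the scan axis. A minimal sketch, assuming SciPy and hypothetical file names:

```python
# Generic slice resampling along the scan axis; SciPy stands in for
# Data Inc's own software, and the file name is hypothetical.
import numpy as np
from scipy.ndimage import zoom

stack = np.load("ebct_stack.npy")  # hypothetical (n_slices, H, W) array
target_slices = 130                # roughly 120 to 140 sections
factor = target_slices / stack.shape[0]
resampled = zoom(stack, (factor, 1, 1), order=1)  # linear interpolation
print(resampled.shape)
```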

Another approach to creating virtual images that are unavailable from a single MRI scan has been designed by Paul Thompson, PhD, assistant professor of neurology, University of California, Los Angeles, in collaboration with a multi-institution team. They developed a network to permit sharing images from MRI brain scans of Alzheimer’s patients and their healthy elderly counterparts, using a Silicon Graphics Inc (Mountain View, Calif) Onyx 3400 server equipped with InfiniteReality graphics. At this time, they have acquired close to 7,000 scans.

“The goal of this project was to visualize disease spreading in the brain,” explains Thompson. They took standard MRI scans of the brains of patients diagnosed with Alzheimer’s disease every 3 months and created a time-lapse “movie” of the disease progression. “We can detect subtler changes than you could see in an individual image.” This capability becomes very important in testing new pharmaceuticals.
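
Once serial scans are registered to one another, the arithmetic behind the time-lapse idea is simple: subtract, and systematic changes too subtle to see in either image stand out in the difference map. A minimal sketch, assuming registration has already been done and using hypothetical file names:

```python
# Voxelwise change map between two registered scans of the same brain.
# Registration is assumed to have been done already; files are hypothetical.
import numpy as np

baseline = np.load("scan_month_0.npy")
followup = np.load("scan_month_3.npy")

change = followup.astype(np.float64) - baseline
percent_change = 100.0 * change / np.clip(baseline, 1, None)
print("mean absolute change: %.2f%%" % np.abs(percent_change).mean())
```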

Their next projects involve scanning patients with schizophrenia, in whom they see sequential grey matter loss in the brain. The other focus of their future work is to scan the brains of large numbers of normal children to look for physical changes in the scans of those who go on to develop autism or dyslexia.

A research collaboration between the Sibley Heart Center Cardiology at Children’s Healthcare of Atlanta (CHOA) and the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University is producing physical heart models for children born with a single-ventricle heart defect.

W. James Parks, MD, a pediatric cardiologist at CHOA, performs MRI scans of these children in standard format using routine imaging planes.

“We take that information and send it to the engineers at Georgia Tech, who create models that are compatible with the images,” says Parks. “Then they use a fluid, the same consistency as blood, to flow through the model they have made, with transducers on either end of the vessels. They can change the pressure, volume, flow rate, and shape of the vessels to see if we can improve the efficiency.” All of these data provide valuable information for surgeons who perform reconstructions to allow the single ventricle to circulate blood to the lungs and throughout the body.

Ajit Yoganathan, PhD, Regents’ Professor of biomedical engineering at Georgia Tech, serves as the principal investigator for the project, which is designed to improve what is called the Fontan circulation. Children born with a single heart ventricle may have variable anatomy, so the surgery they require is individualized. With the ability to see how the corrections are working, the surgeons gain insight that helps them with the next patient and in the event that they need to perform additional surgery on the original patient.

“The downside of some of the corrections is that they are [in some cases] inefficient from an energy standpoint,” says Yoganathan. If that occurs, blood backs up from the inferior vena cava into the hepatic veins and gastrointestinal circulation. After careful study of the blood flow through the anatomically exact model produced from the MRI images, they can offer suggestions to maximize the efficiency of any connections. “It’s a hemodynamic issue, and we are using MRI to reconstruct the geometries.”
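
One common way to quantify that efficiency (not necessarily the group’s exact method) is a control-volume energy balance: the power dissipated in a connection is the energy flux entering at its inlets minus the flux leaving at its outlets. A sketch with entirely hypothetical pressures and flows:

```python
# Control-volume energy balance for a Fontan-style connection; a generic
# formulation, not necessarily the group's method. Numbers are hypothetical.
RHO = 1060.0  # blood density, kg/m^3

def energy_flux(pressure_pa, flow_m3s, area_m2):
    """(static + dynamic pressure) * volumetric flow, in watts."""
    velocity = flow_m3s / area_m2
    return (pressure_pa + 0.5 * RHO * velocity**2) * flow_m3s

inlets = [  # (pressure Pa, flow m^3/s, cross-section m^2)
    (1600.0, 2.0e-5, 1.8e-4),   # inferior vena cava
    (1550.0, 1.2e-5, 1.3e-4),   # superior vena cava
]
outlets = [
    (1400.0, 1.7e-5, 1.5e-4),   # left pulmonary artery
    (1420.0, 1.5e-5, 1.5e-4),   # right pulmonary artery
]

loss = (sum(energy_flux(*v) for v in inlets)
        - sum(energy_flux(*v) for v in outlets))
print("power dissipated in the connection: %.4f W" % loss)
```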

Conclusion
As computers have become increasingly sophisticated and powerful, they have brought many benefits to the clinical management of a wide range of patient conditions. Researchers are pursuing a number of innovative solutions designed to provide clinicians with the best possible information, both for their own education and for improving patient care. Merging these disparate technologies holds the potential to benefit health care around the world, and around the corner, too.