Jeffrey G. Jarvik, M.D., M.P.H., became interested in radiology outcomes research as a fellow in the Robert Wood Johnson Clinical Scholars Program at the University of Washington in 1993. Now, as an associate professor of radiology, neurological surgery, and health services at the University of Washington in Seattle, he continues to explore outcomes research and is currently conducting a randomized trial comparing rapid magnetic resonance imaging, which uses fast spin echo techniques, with plain X-rays to determine the most effective way to diagnose people suffering from low back pain.
Through his research, Jarvik hopes to determine whether rapid MR, which can be performed in less than three minutes, can help physicians make better therapeutic decisions and reduce the need for additional tests down the road compared with plain films. If so, rapid MR may be the more cost-effective alternative in the long run.
Jarvik says a pilot study suggested rapid MR might provide better patient outcomes more cost-effectively. The question, Jarvik adds, is whether the MR study picks up abnormalities unrelated to the back pain, which may only make diagnosis more difficult. Jarvik expects to release the study's findings in fall 2001.
Radiology has been behind the curve in performing outcome studies such as his, Jarvik says. But the field is catching up as the importance of such studies, both for improving patient outcomes and for the bottom line, becomes more widely recognized and as researchers are trained to perform them.
What prompted your involvement in studying patient outcomes for diagnosing back pain?
The reason we're looking at patient outcomes in these studies is that intermediate outcomes, such as diagnostic accuracy, are much harder to measure for low back pain than for other entities, primarily because of the lack of a good standard. It's difficult to say whether one diagnostic test is more accurate than another for low back pain. And then, even if you can say one is more accurate than another, it's still not clear whether it's the more useful test for low back pain. That's primarily because of the large number of incidental findings on imaging studies. So you may have a diagnostic test that is a lot more sensitive, able to pick up a lot more anatomic changes than another diagnostic test, but that information may be irrelevant to the low back pain. So ultimately, in order to decide if one test is better than another, you have to know what happens to the patients. Is one group of patients benefiting from a diagnostic test? That's what we're trying to answer.
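To make the accuracy point concrete: sensitivity and specificity, the usual accuracy measures, can only be computed against a trusted reference standard that says who truly has the causal lesion, which is exactly what low back pain lacks. The sketch below uses made-up counts purely for illustration.

```python
# Minimal sketch, hypothetical counts only: sensitivity and specificity are
# computed against a reference standard that labels who truly has the
# condition. For low back pain no such trusted standard exists, so the 2x2
# table itself cannot be filled in reliably.

def sensitivity_specificity(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from a 2x2 table of test results
    versus a reference standard."""
    sensitivity = tp / (tp + fn)  # true positives / all who truly have the condition
    specificity = tn / (tn + fp)  # true negatives / all who truly do not
    return sensitivity, specificity

# Invented numbers, purely for illustration
sens, spec = sensitivity_specificity(tp=45, fp=20, fn=5, tn=130)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```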
What is the value of outcome studies to radiology, and why should physicians pay attention to them?
Ultimately it's the outcomes that matter to patients and to payors and to caregivers. You can look at health services research as looking at three different aspects of healthcare delivery. One is quality of healthcare, one is access to healthcare, and the final one is healthcare outcomes. And when you look at outcomes, especially when you look at diagnostic tests, there are different levels of outcomes that you focus on. There are the traditional outcomes, which are mortality and various measures of morbidity. Functional status and quality of life are the outcomes currently in vogue.
But the problem when you're examining these outcomes is that they're several steps removed from your intervention. That's unlike therapeutic trials, in which you take patients who have a certain disease, randomize them to one therapy or another, and then look at the outcomes. There, the outcomes are usually directly related to which therapeutic intervention the patient got.
Diagnosis is a little harder because you're several steps removed. You have the diagnostic test, but then certain information is gleaned from that diagnostic test, which may or may not be used in an appropriate way. And even if it is used in an appropriate way, there may or may not be good therapies for the patient's condition. Or if there are good therapies, the patient's physicians may not administer those therapies appropriately. So there are many layers between the diagnostic test and the eventual patient outcomes. Even though you do have these problems and it's harder to measure patient outcomes with diagnostic tests, it's still vital information because ultimately that's what's important. Are we doing good or harm with a certain diagnostic test? That's what must be answered.
What are the challenges in conducting outcome studies?
Randomized trials are very expensive to conduct, and you can only do them on a certain number of questions. You just have to be judicious about what questions you want to study. Those questions should have a high impact on society, either because the condition is prevalent, such as low back pain, where even small changes in the diagnostic workup or orthopedic intervention have large economic consequences, or because the condition is very serious and accounts for substantial mortality or morbidity, like heart disease or stroke. That's one of the fundamental questions agencies look at when funding randomized trials: is it an important question to society? That's something people who want to do this kind of research should think about.
Very few randomized trials are being done, however. But you don't have to have a randomized trial to do outcomes research. That's just one tool. Another way of doing outcomes research is to look at secondary databases. The problem with administrative databases is that they're not set up to do research. So there's often a lot of information that's not available, or you have to worry about the fidelity of the information that you're getting. Nevertheless, they're extraordinarily important ways to do research because it's much cheaper and you get answers much faster.
Do radiology outcome studies work toward standardizing healthcare?
Yes, this is an important point of research right now. One of the areas of emphasis for AHRQ, the Agency for Healthcare Research and Quality, is called translational research: translating research into practice. Changing physician behavior is not an easy thing. And research has shown that simply presenting physicians with new data doesn't always change their behavior right away. There's interesting work going on now called academic detailing, where, in order to change physician behaviors, researchers are adopting practices from the pharmaceutical industry. Traditionally, pharmaceutical reps go out on a one-on-one basis and tell physicians about the drugs, give them samples of the drugs, and leave them small gifts. There's interesting research looking at the efficacy of adopting that model to change physician practice, not only for new technologies but for new ways of practicing medicine that might be more cost-effective. And they're finding that, though it's certainly not a panacea, it is effective to a certain extent.
Are guidelines helpful to physicians?
Yes and no. The guideline development process is often as political as it is science-based. Frequently, when you look at guidelines, they're too vague to be useful. That's not always true. It depends on who does them. The guidelines put out by the Agency for Healthcare Research and Quality were quite influential and directly impacted the use of diagnostic tests for low back pain, or should have. There are other guidelines put out by various societies that have their own agendas to pursue, and those may or may not be as useful as other guidelines. The watchword in guidelines now is to be evidence-based.
The other problem is that guidelines typically come out for the entire United States. There may be very different issues in Seattle than there are in Tampa. So the guidelines may be more or less useful in some areas than in others. A lot of hospitals are coming up with their own guidelines, or they might be called critical pathways. Any large institution will have critical pathways, created partly by economics and partly by the desire to standardize care and deliver better care.
So how can physicians evaluate the guidelines or the results that come from these studies?
Often it's difficult. The first thing to do is look at where the guidelines came from. Are they from a well-known organization or not? Who made up the panel that developed the guidelines? Was it a multidisciplinary panel or not? How did they gather the evidence for their evidence-based guidelines? Was there a systematic approach to identifying the evidence and critically evaluating it, because not all published studies are equal? Are there explicit criteria in the guidelines for deciding which articles to include and which not to include?
It's not easy to decide if these are good or bad guidelines. Ultimately, it might be like many other things in life: you have to go with what you trust. I've put a fair amount of trust in the Agency for Healthcare Research and Quality. Part of that trust is based on the fact that their guideline development process was very transparent; you could see all the steps that they went through. There are other agencies, private organizations, that have guidelines that also go through a very rigorous process and are quite reputable. So you have to base it somewhat on reputation.
How does cost-effectiveness tie in with these outcome studies?
From a societal perspective, cost is very important. It includes not only the cost of the test, but also the cost of the entire diagnostic workup and treatment of the patient. Cost-effectiveness incorporates, just as the term suggests, two different variables: the cost of a strategy and its overall effectiveness. Effectiveness can be measured in a number of different ways. You can look at effectiveness as improvement on a certain health scale. For example, the Roland scale is a widely used measure for low back pain. So you can frame your cost-effectiveness equation in terms of the cost per three-point change on the Roland scale.
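As a rough illustration of that framing, the sketch below computes an incremental cost-effectiveness ratio using invented per-patient costs and response rates; the numbers are hypothetical and are not drawn from Jarvik's trial.

```python
# Minimal sketch with invented numbers: an incremental cost-effectiveness
# ratio, framed as cost per additional patient achieving a three-point
# improvement on the Roland scale.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost divided by incremental effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical per-patient figures for two diagnostic strategies
rapid_mr = {"cost": 2400.0, "roland_3pt_rate": 0.42}    # proportion improving >= 3 points
plain_film = {"cost": 2150.0, "roland_3pt_rate": 0.38}

ratio = icer(rapid_mr["cost"], plain_film["cost"],
             rapid_mr["roland_3pt_rate"], plain_film["roland_3pt_rate"])
print(f"Incremental cost per additional three-point Roland responder: ${ratio:,.0f}")
```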
Or more recently, say in the last 10 years or so, quality of life has become more and more important, and that translates into utilities. Now there's a term, cost-utility analysis, which is, in essence, a type of cost-effectiveness analysis. It's a sub-type where, instead of looking at some arbitrary measure of effectiveness, you're looking at quality-adjusted life years.
The advantage of doing that is that if you have a denominator that is common to many different types of cost-effectiveness analyses, you can then compare them and decide, "Well, is society's money better spent doing screening mammography or doing vaccinations for children?" It tries to answer those types of questions. That sounds easy, but there are a tremendous number of assumptions and unknowns that go into building these cost-effectiveness models, and they get quite complex, quite quickly. Even though you end up with what looks like a very solid number, if you start delving into it, it might not be so solid.
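To make the common-denominator idea concrete, here is a minimal sketch comparing two programs on cost per quality-adjusted life year gained; the costs and QALY gains are invented for illustration, not real estimates for either intervention.

```python
# Minimal sketch with invented numbers: cost-utility analysis uses a common
# denominator -- quality-adjusted life years (QALYs) -- so very different
# programs can be compared on cost per QALY gained.

def cost_per_qaly(incremental_cost, incremental_qalys):
    return incremental_cost / incremental_qalys

# Hypothetical programs, purely to illustrate the shared denominator
programs = {
    "screening mammography": cost_per_qaly(50000.0, 1.2),
    "childhood vaccination": cost_per_qaly(10000.0, 2.5),
}
for name, ratio in sorted(programs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${ratio:,.0f} per QALY gained")
```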
What are some examples of radiology outcome studies that have changed the way patients are diagnosed and treated?
There are two areas where there has been a lot of work on outcome studies. One is screening mammography. There are a number of large trials showing the benefits of mammography for various age groups, or calling those benefits into question.
The other area has to do with contrast agents, where randomized trials have been done to show the efficacy of one type of contrast agent versus another. There have been some very large-scale studies looking at patient outcomes comparing ionic versus non-ionic contrast agents. There was a lot of work done in the 1980s looking at reactions or side effects to these contrast agents and the use of steroids for preventing contrast reactions.
There are not a lot of outcome studies going on. But I would say there are more going on now than 10 years ago. More importantly, there's a growing cohort of people in radiology who have been trained in how to do health services research. These are people who have gone through programs such as the Robert Wood Johnson Clinical Scholars Program, which is a two-year fellowship after training in radiology where you focus on things like study design and learning how to do rigorous outcomes research. Some of these programs have been going on for a number of years, so there are probably 15 to 20 radiologists who are now part of this group. And in fact, there's a society that Bruce Hillman (M.D., professor of radiology at the University of Virginia) founded called the Society for Health Services Research in Radiology.
Why are outcome studies so important to radiology in 2000?
I think economics had a lot to do with it. About five or six years ago, radiology was feeling a lot of pressure economically, as was the whole field of medicine. And I think that served as the catalyst to get people more involved in answering these sorts of questions. And as insurers and government agencies were paying more attention to the economics as well as the outcomes aspects, radiologists didn't want to be left out in the cold. They at least wanted a seat at the table as those conversations were occurring. That's why there was a push to train people. I think it's interesting research, and ultimately it's what's important for patients. We've been somewhat slow to recognize it compared to other specialties. But we're getting there.