Why we still need prostate cancer comparative effectiveness research

by The Incidental Economist on December 12, 2012

There have already been many studies of prostate cancer treatment outcomes. And yet, more of both randomized controlled trials and observational studies are warranted. In this post, I will make the case for the latter by summarizing some of the body of work to date and pointing out its key limitations. By way of full disclosure, I am the principal investigator on an observational study of prostate cancer treatment funded by the Department of Veterans Affairs’ Health Services Research & Development Service. My co-investigators are Steven Pizer, Price Kerfoot, Graeme Fincke, Michael McWilliams, and Bruce Landon. Though based on the work of the group, what follows are my views and do not necessarily reflect the position or policy of the Department of Veterans Affairs, my co-investigators, or the other institutions with which they or I are affiliated.

In a 2007 review of the literature, the American Urological Association (PDF) identified 27 prostate cancer treatment studies based on randomized controlled trials. However, very few directly compared two different major treatment modalities; most focused on variations within a particular modality. Thus, as of 2007, high-quality comparative effectiveness evidence upon which to base decisions across main modalities was lacking. Three separate trials that compared treatment modalities, led by Bill-Axelson, Iversen, and Paulson, were insufficiently powered to address disease-specific (as opposed to all-cause) mortality or risk of metastatic spread. Moreover, those trials were conducted in an era prior to PSA testing, which is now common.

The PIVOT study compared all-cause mortality and prostate cancer mortality in a sample of 731 American men randomized to radical prostatectomy (removal of the prostate) versus watchful waiting (monitoring without definitive treatment). For men with PSA scores below 10 ng/mL, the investigators found no statistically significant difference in all-cause or prostate cancer mortality between the two groups. However, with less than 15% of eligible individuals agreeing to participate, non-random selection into the trial may threaten generalizability, and the study has been critiqued as underpowered. Wilt and colleagues note differences between PIVOT enrollees and non-enrollees in age, race, degree of differentiation of prostate cancer, and self-reported health status, suggesting non-random selection may be an issue. Recruitment challenges such as those found in the PIVOT trial are a significant threat to cancer trials in general and prostate cancer trials in particular. This is one advantage observational studies have over trials.

Variations in prostate cancer treatment practice patterns offer opportunities to compare outcomes through observational studies. Though such studies have limitations, they do not face the same challenges of recruitment and generalizability as randomized trials. The results of observational studies could be useful in guiding treatment decisions while awaiting additional results from ongoing or planned, long-term clinical trials. Variation in application of radiation or surgical treatment has been documented for at least two decades. Using 1995-1999 National Cancer Institute SEER data, Krupski et al. documented variation across ten states or metropolitan regions in the likelihood of surgical or radiation therapy. Variation is large (odds ratios ranging from 0.44 to 2.01) and significant even after controlling for age, ethnicity, income, education, and histologic grade. Similar patterns of variation were found in the Veterans Health Administration, even after adjusting for diagnosing (as opposed to treating) institution, age, grade, stage, race, ethnicity, insurance type, and marital status.

Existing observational studies suffer from limitations that can be overcome. In a review of 473 observational studies of prostate cancer treatment outcomes conducted prior to September 2007, Wilt et al. found wide variation in effectiveness estimates, outcome reporting, and use of controls or risk adjustment. More importantly,

Investigators rarely stratified outcomes according to patient and tumor characteristics. Many [observational] studies included patients with locally advanced disease but did not analyze outcomes on the basis of tumor stage.

Consequently, a consensus on the relative effectiveness of treatment options does not emerge from the body of observational research. The vast majority of observational studies focus on emerging technologies, harms, and quality of life outcomes. Among observational studies that have compared primary treatments, “[o]verall and disease-specific mortality were infrequently reported.”

Within the Veterans Health Administration and in other settings, there are ample data and sample sizes to conduct observational, comparative effectiveness studies of prostate cancer treatment. Crucially, practice pattern variation exists and can be exploited as a source of quasi-randomness. This leads to an “instrumental variables” design, which has its own set of limitations. However, randomized trials, though valuable, are also limited in ways that observational studies are not, as described above. We need more of both, and prostate cancer treatment is just one area in which they can be put to good use. In a subsequent post I will describe another.
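For readers unfamiliar with instrumental variables, a toy simulation may help convey the logic. This is a minimal sketch with made-up numbers, not the study's actual model or data: regional practice pattern stands in as the instrument, which affects which treatment a patient receives but (by assumption) affects outcomes only through treatment, while an unobserved health variable confounds the naive comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # hypothetical simulated sample

# z = regional practice pattern (the instrument): shifts treatment
#     choice but, by assumption, affects the outcome only via treatment.
# u = unobserved health status: confounds treatment and outcome.
z = rng.binomial(1, 0.5, n)          # high- vs. low-surgery region
u = rng.normal(size=n)               # unobserved confounder
# Treatment is more likely in high-surgery regions and for healthier patients.
t = ((0.8 * z + 0.8 * u + rng.normal(size=n)) > 0).astype(float)
true_effect = 1.0
y = true_effect * t + 1.5 * u + rng.normal(size=n)

def ols(x, y):
    """Slope and intercept from least squares of y on [1, x]."""
    X = np.column_stack([np.ones(len(y)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive comparison is biased because u drives both treatment and outcome.
naive = ols(t, y)[1]

# Two-stage least squares: predict treatment from the instrument,
# then regress the outcome on predicted treatment.
t_hat = np.column_stack([np.ones(n), z]) @ ols(z, t)
iv = ols(t_hat, y)[1]

print(f"naive OLS estimate: {naive:.2f}")  # biased away from 1.0
print(f"2SLS (IV) estimate: {iv:.2f}")     # close to the true effect of 1.0
```

In this contrived setup the naive estimate is badly biased by the unobserved confounder, while the two-stage estimate recovers the true treatment effect because the regional instrument is independent of patient health. The design's real-world limitations, alluded to above, enter precisely where the simulation cheats: the exclusion restriction (regions affect outcomes only through treatment) cannot be verified from the data.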