It is crude but true that cancer makes for profitable business. The FDA approved 19 new drugs in 2012 and three this year, according to CenterWatch: Kadcyla ($9,800 per month), Pomalyst ($10,500 per 28 days), and Stivarga ($9,350 per 28 days). There is no doubt a need for more and better cancer treatments. No doubt.
But two new studies in the past month cast light on the trouble of pushing research harder than perhaps it should be pushed. In an article titled “Characteristics of Oncology Clinical Trials: Insights From a Systematic Analysis of ClinicalTrials.gov,” published in JAMA Internal Medicine, the authors conclude that the research on which cancer treatment is based often lacks the statistical rigor and robustness of research into other diseases. “An inherent tension arises between the desire to use new, life-saving treatments and the imperative to develop the evidence that patients, clinicians, regulatory agencies, and advocacy groups need to make sound decisions,” said Bradford Hirsch, lead author of the study, in a press release. “Unfortunately, the high prevalence of small studies that lack rigor limits the ability to assess the evidence supporting specific treatments.”
And released just yesterday in PLOS ONE, a new study targets the widespread lack of reproducibility that afflicts much of the oncology literature:
When asked if investigators had ever tried to reproduce a finding from a published paper and not been able to do so, 54.6% (237/434) of all survey respondents said that they had, with 58.5% (154/263) of faculty having experienced the inability to reproduce data, and 48.5% (83/171) of trainees having the same experience. Of note, some of the non-repeatable data were published in well-known and respected journals including several high impact journals…
Upon finding results from a paper that could not be repeated, the question arises: “what to do with that contradictory data?” When survey respondents were asked if they contacted the authors of the research paper, 78% (71/91) said that they had, but the ultimate results of that contact were very mixed…Overall, only 33.3% of respondents were ever able to explain or resolve their discrepant findings. This implies that 66.7% of conflicting results between research groups were not resolved through communication with the authors of the original publication.
These complementary problems run deep through the cycle that connects R&D with commercial markets, and solutions on any meaningful scale may require systemic examination and overhaul. But there is little choice. As one of the papers concludes, “the lives of our patients depend on it.”