Why cancer biomarkers haven’t lived up to their hype

It’s a frequent complaint that, despite all the money poured into cancer research in the last few decades, progress has only ever seemed incremental. But perhaps nowhere is this more apparent, at least in the last 10 years, than in the field of cancer screening — in the biological indicators or “biomarkers” that promise early detection and better chances of survival.

In a recent paper in the Journal of the National Cancer Institute, clinical biochemist Eleftherios Diamandis examines some of the bigger blunders in cancer-diagnosis techniques — explaining how experimental data can be misinterpreted and how, as a result, once-touted breakthroughs turned out to be far less than they first appeared. Diamandis spoke to TIME earlier this week about his findings, and about how fizzled hopes can affect medicine:

TIME: What exactly is a cancer biomarker?

ED: There are many definitions. But a biomarker, quite simply, is a substance — it could be a protein, it could be a cell, it could be a nucleic acid — that is present in a patient’s biological fluids and that can be used for diagnosis, prognosis or monitoring. So it’s a biological indicator of a disease process, in this case obviously cancer.

TIME: In your new paper, you give some examples of what you call “highly publicized breakthrough biomarkers” that, after an initial period of hope, subsequently could not be validated. Can you give an example of one of those and explain why the biomarker turned out not to be as useful as first anticipated?

ED: In the paper I think I give about seven or eight examples, and I chose these examples — I could have cited many more — because each one has a specific flaw that is different from the others. All of them are high-profile. They’ve all been published in what you’d call big journals.

I think in recent times, the most compelling example is the one that [in the paper] is called “proteomic profiling of serum by mass spectrometry for ovarian cancer diagnosis.” That was published in 2002, and the reason this is a very good example is that it came from spectacular investigators — very famous investigators. It was published in Lancet [one of the top medical journals in the world]. It was using new technology, mass spectrometry, and the authors claimed they could diagnose ovarian cancer with close to 100% accuracy.

I’m not sure if you remember, but this report reached CNN. It even went to Congress; Congress took note of it. And yet when you scrutinize this paper you will find that people made many mistakes. You know, it took others three or four years to try it independently, and then find out that it wasn’t working at all. It was all an artifact because the samples the researchers initially used were not homogeneous. Some cancer samples were much older than the samples from the controls, and the age of the samples — the way you store them — makes a difference in the reading. Later, bioinformaticians found that the method used to analyze the data was flawed.
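
To make the sample-age artifact concrete, here is a minimal Python sketch (an illustration constructed for this explanation, not code or data from the original study): when a confounder such as storage age tracks the case/control labels, a classifier can separate "cancer" from "control" almost perfectly even though the measurements contain no biological signal, and the apparent accuracy collapses to roughly chance level once the samples are properly matched.

```python
# Illustrative sketch of a confounding artifact (hypothetical data, not the 2002 study).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 50

# "Biology": pure noise -- there is no real difference between cases and controls.
X = rng.normal(size=(n, p))
y = np.repeat([0, 1], n // 2)  # 0 = control, 1 = cancer

# Confounder: cancer samples were stored much longer, which shifts every measured feature.
storage_years = np.where(y == 1, 5.0, 1.0) + rng.normal(scale=0.5, size=n)
X_confounded = X + 0.3 * storage_years[:, None]

clf = LogisticRegression(max_iter=1000)
print("Accuracy with confounded samples:",
      cross_val_score(clf, X_confounded, y, cv=5).mean())  # near 1.0 -- an artifact
print("Accuracy with age-matched samples:",
      cross_val_score(clf, X, y, cv=5).mean())             # near 0.5 -- chance level
```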

TIME: But if it’s just methodological mistakes, how do these things get through the journals’ peer-review process and get published in the first place?

ED: Authors will say we did this and we found that — and reviewers are not sophisticated enough to understand every word of every paper that comes across their desks for review. So if the authors say they used a mathematical algorithm developed by X-Y-Z, and the results came out like so, the reviewers often won’t worry about the middle part. They know what the input is. They know what the output is. But a regular reviewer will never know what’s happening in the computer — in the black box. It was only when people took the raw data — and biostatisticians did that subsequently, [after publication] — that they found out the interpretation was no good. There’s no way a journal reviewer would normally do that. People have faith that researchers who have had great success in the past will keep publishing good results as they go on. But you know, even Einstein published blunders.

TIME: When these things do get published, they can get people’s hopes up, and get a lot of money for clinical implementation. What do you think is the best way to avoid this kind of false result?

ED: Unfortunately, diagnostics are different from therapeutics. When a therapeutic comes out, to get to the patient, as you know, there are very stringent criteria. Therapeutics have to get FDA approval. They have to go through Phase I, Phase II, and Phase III trials, so there’s a very well-defined roadmap. The diagnostics business is very different because we don’t have these stringent phases that people have to go through to make their case. If somebody says, I measured this, in this patient, and this thing goes up or down, you can publish without too much difficulty. It’s only when something is published that other people say, okay, let’s see if it’s going to work. If it doesn’t work, then we find out three years later. But eventually we will find out. I’ll tell you this much: if somebody publishes something substantial, of clinical value, somebody eventually will have to try the method, and if the method doesn’t work, people will find out.

TIME: Do you think this is harmful for medical progress or for patients, or do you think it’s just sort of the way it is?

ED: I think this kind of report creates a lot of damage. People get their hopes up based on crappy reports, and I think that’s not fair. You tell people, “Don’t worry. We can catch your cancer early,” and people run to take the test, only to find out that the test is not good. But what can you do? I don’t think these things will ever disappear. I hope authors will think twice before they publish in high-profile journals and seek attention. I mean, if I were one of these seven authors who published in the New England Journal or the Lancet and somebody comes along and says, “Your paper is crappy,” it’s painful.