What the ‘limits of DNA’ story reveals about the challenges of science journalism in the ‘big data’ age
By Erika Check Hayden | April 6, 2012 | 30 Comments
As a science journalist, I sympathize with book reviewers who wrestle with the question of whether to write negative reviews. It seems a waste of time to write about a dog of a book when there are so many other worthy ones; but readers deserve to know if Oprah is touting a real stinker.
On 2 April, Science Translational Medicine published a study on DNA’s shortcomings in predicting disease. My editors and I had decided not to cover the study last week after we saw it in the journal’s embargoed press packet, because my sources offered heavy critiques of its methods. But it was a tough choice: we knew the paper was bound to get a lot of other coverage, as it conveyed a provocative message, would be published in a prominent journal, and would be highlighted at a press conference at the well-attended annual meeting of the American Association for Cancer Research. Its lead authors, Bert Vogelstein and Victor Velculescu of the Johns Hopkins Kimmel Cancer Center in Baltimore, Maryland, are also leaders in the cancer genetics field.
I ended up writing about the paper anyway after it made a huge media splash that prompted fury among geneticists. In a thoughtful post at the Knight Science Journalism Tracker, Paul Raeburn asked yesterday why other reporters didn't notice the problems with the study that I wrote about. Having been burned by my own share of splashy papers that go bust, I think the "limits of DNA" story underscores a few broader issues for our work as science journalists:
1. Science consists of more and more “big data” studies whose findings depend on statistical methods that few of us reporters can understand on our own. I never would have detected the statistical problems with the Vogelstein paper by myself. We can look for certain red flags that a study might not be up to snuff, such as small sample sizes or weak clinical trial designs, but it’s a lot harder to sniff out potential problems with complicated statistical methods.
2. Challenges in the news business are ratcheting up pressure on all of us. Reporters are doing much more work in much less time than we have in the past as we compete with an expanded universe of news providers who have sped up the news cycle. Yet it still takes time and effort to make sense of the developments we cover. It took me about three days to report my piece on the Vogelstein paper while I was simultaneously working on other assignments. That’s probably longer than most reporters can spend on a piece like this.
3. We are only as good as our sources. Science has splintered into so many disciplines that it's hard to know when we're actually asking the right sources for feedback. And even when we do find the right people, only certain sources are willing to take the time and energy to explain things in language that we can understand. Statistical geneticist Luke Jostins was patient enough to explain his critique of the paper to me through a phone call and many follow-up emails; I don't know how many other researchers would have endured my (doubtless naive) questioning. At the blog Genomes Unzipped, Luke has now written a call to arms for his fellow statisticians: "If we are annoyed that a bad paper got the message across," Luke argues, "then we should be annoyed with ourselves that we never communicated our own results properly."
4. It's becoming more difficult to trust traditional scientific authorities. This paper was published by respected researchers in a respected journal and promoted by a respected scientific society. Yet it wasn't all that media reports made it out to be. Because of the splintering of disciplines, even scientific journals aren't always able to catch serious flaws in the peer review process (arsenic life, anyone?). And institutions and scientists themselves are struggling to manage the increasing complexity of science funding and oversight; this is one reason why it took so long to stop the flawed clinical trials involving the former Duke researcher Anil Potti, the Institute of Medicine recently concluded.
5. Beware the deceptively simple storyline. When we’re competing for readers against the Whitney Houston autopsy and the Presidential campaign, it sometimes seems that the only way to sell science is to claim that it’s either saving or destroying the world. Everyone leaped on the “DNA is worthless” message of this study, but the truth is more complex. Yes, the predictive power of the genome is limited, for most of us, right now. But we’re still at the very early days of seeing what genomics will do in the clinic, and genomics has actually saved some patients’ lives.
6. Getting the story right matters more than ever. I wondered whether it was worth writing about the problems with the Vogelstein study when its basic message – that DNA is no crystal ball – is actually true.
Then, a commenter on my Nature blog post pointed to a patient who had responded to a CNN story about the new study. The patient had tested positive for a genetic mutation linked to a hereditary disease that, she said, had caused her brother to have a heart attack. The patient said that the new study "was good news": "It takes some of the fear of the unknown away," she wrote. Unfortunately, as other commenters pointed out, the patient might actually have one of the rare genetic mutations that is linked to serious, preventable medical problems, and it might be harmful to her health to dismiss that information.
I agree with Raeburn when he writes that “it’s not the concern of reporters how their stories affect support for research.” But people use our reporting to help inform decisions that have a major impact on their health and welfare. We owe it to our readers to be sure that we’re telling them the whole story, and that our stories are based on solid science.
Photo credit: Proceed with caution; a spider explores a Venus fly trap. Courtesy ~Sage~/flickr.