What the ‘limits of DNA’ story reveals about the challenges of science journalism in the ‘big data’ age


As a science journalist, I sympathize with book reviewers who wrestle with the question of whether to write negative reviews. It seems a waste of time to write about a dog of a book when there are so many other worthy ones; but readers deserve to know if Oprah is touting a real stinker.

On 2 April, Science Translational Medicine published a study on DNA’s shortcomings in predicting disease. My editors and I had decided not to cover the study last week after we saw it in the journal’s embargoed press packet, because my sources offered heavy critiques of its methods. But it was a tough choice: we knew the paper was bound to get a lot of other coverage, as it conveyed a provocative message, would be published in a prominent journal, and would be highlighted at a press conference at the well-attended annual meeting of the American Association for Cancer Research. Its lead authors, Bert Vogelstein and Victor Velculescu of the Johns Hopkins Kimmel Cancer Center in Baltimore, Maryland, are also leaders in the cancer genetics field.

I ended up writing about the paper anyway after it made a huge media splash that prompted fury among geneticists. In a thoughtful post at the Knight Science Journalism tracker, Paul Raeburn asked yesterday why other reporters didn’t notice the problems with the study that I wrote about. Having been burned by my own share of splashy papers that go bust, I think the “limits of DNA” story underscores a few broader issues for our work as science journalists:

1. Science consists of more and more “big data” studies whose findings depend on statistical methods that few of us reporters can understand on our own. I never would have detected the statistical problems with the Vogelstein paper by myself. We can look for certain red flags that a study might not be up to snuff, such as small sample sizes or weak clinical trial designs, but it’s a lot harder to sniff out potential problems with complicated statistical methods.

2. Challenges in the news business are ratcheting up pressure on all of us. Reporters are doing much more work in much less time than we have in the past as we compete with an expanded universe of news providers who have sped up the news cycle. Yet it still takes time and effort to make sense of the developments we cover. It took me about three days to report my piece on the Vogelstein paper while I was simultaneously working on other assignments. That’s probably longer than most reporters can spend on a piece like this.

3. We are only as good as our sources. Science has splintered into so many disciplines that it’s hard to know when we’re actually asking the right sources for feedback. And even when we do find the right people, only certain sources are willing to take the time and energy to explain things in language that we can understand. Statistical geneticist Luke Jostins was patient enough to explain his critique of the paper to me through a phone call and many follow-up emails; I don’t know how many other researchers would have endured my (doubtless naive) questioning. At the blog Genomes Unzipped, Luke has now written a call to arms for his fellow statisticians: “If we are annoyed that a bad paper got the message across,” Luke argues, “then we should be annoyed with ourselves that we never communicated our own results properly.”

4. It’s becoming more difficult to trust traditional scientific authorities. This paper was published by respected researchers in a respected journal and promoted by a respected scientific society. Yet it wasn’t all that media reports made it out to be. Because of the splintering of disciplines, even scientific journals aren’t always able to catch serious flaws in the peer review process (arsenic life, anyone?). And institutions and scientists themselves are struggling to manage the increasing complexity of science funding and oversight; this is one reason it took so long to stop the flawed clinical trials involving former Duke researcher Anil Potti, the Institute of Medicine recently concluded.

5. Beware the deceptively simple storyline. When we’re competing for readers against the Whitney Houston autopsy and the Presidential campaign, it sometimes seems that the only way to sell science is to claim that it’s either saving or destroying the world. Everyone leaped on the “DNA is worthless” message of this study, but the truth is more complex. Yes, the predictive power of the genome is limited, for most of us, right now. But we’re still at the very early days of seeing what genomics will do in the clinic, and genomics has actually saved some patients’ lives.

6. Getting the story right matters more than ever. I wondered whether it was worth writing about the problems with the Vogelstein study when its basic message – that DNA is no crystal ball – is actually true.

Then, a commenter on my Nature blog post pointed to a patient who had responded to one CNN story about the new study. The patient had tested positive for a genetic mutation linked to a hereditary disease that, she said, had caused her brother to have a heart attack. The patient said that the new study “was good news”; “It takes some of the fear of the unknown away,” she wrote. Unfortunately, as other commenters pointed out, the patient might actually have one of the rare genetic mutations that is linked to serious, preventable medical problems, and it might be harmful to her health to dismiss that information.

I agree with Raeburn when he writes that “it’s not the concern of reporters how their stories affect support for research.” But people use our reporting to help inform decisions that have a major impact on their health and welfare. We owe it to our readers to be sure that we’re telling them the whole story, and that our stories are based on solid science.

***

Photo credit: Proceed with caution; a spider explores a Venus fly trap. Courtesy ~Sage~/flickr.


Categorized in: Commentary, Erika, Health/Medicine, On Writing


30 thoughts on “What the ‘limits of DNA’ story reveals about the challenges of science journalism in the ‘big data’ age”

  1. It’s a good example of the conflict between the needs of daily journalism and formats where there can be more time to digest and more space for context.

  2. One thing I wanted to highlight is that while there are challenges in reporting a scientific breakthrough, Twitter and other social media have also made it easy to offer a counterpoint very quickly.
    We saw this happen with the Arsenic story in a big way. And it happened here too – as your Storify account of the criticisms of the paper indicates.

  3. @Omespeak: Absolutely agree. The Twitter outcry is part of what convinced me that the “other side” of this story needed to be told.

    @Michael Wosnick: Thanks! I appreciated this point from your post: “while it is easy to criticize the reporters who may have rushed to judgment and perhaps overly sensationalized what for many is a non-story, on balance one surely also has to hold accountable the very scientists themselves who may have allowed this steamroller to roll down the hill, and indeed, from what I can surmise, may have even given it quite a little push.”

  4. We are all somewhat complicit (authors, reviewers, editors, journalists). The need to position or sell one’s research to get the attention of major journal editors creates a tension that can sometimes go awry. The best means to temper this is to ensure there are consequences. It relates back to the issue of reproducibility and the failure of so many discoveries to be validated. This is partly the cost of research (a study can be flawed or misinterpreted for valid reasons of missing data or knowledge) but it is exacerbated by the pressures to publish.

  5. Fantastic work here (and in your article), Erika. This rings perfectly with my own experience writing about genetics. The science is getting more complicated, but there seems more pressure than ever both within science and in the media to offer stark interpretations of the ‘save the world’ or ‘that stuff is crap’ variety — big claims or big contrarian claims. Amid this we need somehow to convey the nuances, complexities, and sheer uncertainties of a field (genetics and genomics) that is at once becoming one of the most important disciplines, because of its vast potential, but also one of the most immature, because it is SO DAMNED EARLY.

    Same goes for neuroscience. At the big neurosci meeting last year I asked a table-full of researchers how far, on a scale of 100, neuroscience had come toward understanding the brain well enough to speak with reasonable confidence about how all aspects of this quite-entwined organ work. The consensus was “maybe 5 percent.”

    I doubt we’re even that far with genetics and genomics, yet people want to either insist that they’ve Figured It Out or that It’s Worthless. Writing well about this stuff amid this is our challenge. Cheers to you for always trying to do so, despite all.

  6. This makes me wonder about data journalism, where there isn’t even peer review…

  7. Is this due to a “better-to-be-first-and-wrong-than-right-and-last” attitude on the part of the researchers? And to a sort of “pass” given to the researchers for being big names in the field? A lesser-known scientist would be hard pressed to get this paper accepted even in a puny Impact Factor 3 journal. Nice job here Erika!

  8. @Jim Woodgett: I’m interested to know what consequences you think apply for scientists; for journalists, our reputations are at stake.

    @David Dobbs: Thanks for your comment; that’s a fascinating observation from the neuroscience meeting. I think genome scientists would agree that the field is still very young. Certainly when I talked to people for a piece on the tenth anniversary of the Human Genome Project, some of them felt that way, e.g. U Penn mathematical biologist Joshua Plotkin: “Just the sheer existence of these exotic regulators suggests that our understanding about the most basic things — such as how a cell turns on and off — is incredibly naive.” (http://www.nature.com/news/2010/100331/full/464664a.html)

    @Robin Meadows: Agree – would be interested to see what happens when you open up that particular can of worms :)

  9. Erika_Check: Since I became aware of your post after I had done mine, I have now added an update (http://cancer-research101.blogspot.ca/2012/04/science-and-steamrollers-how-research_06.html) to make sure that anyone reading there gets linked back here.

    Jim Woodgett: As you and I have been discussing, I agree with your assessment.

    David Dobbs: As I alluded to in my posts, as someone who led two major cancer funding agencies in the past, both charities, we also were under enormous pressure to constantly be pitching “breakthrough” type stories to the media.

    I established some wonderful relationships with journalists that way, but I refused to go overboard and hype something out of all proportion, even though that might have excited the donors. That sometimes got me in trouble with my Board, but the high road has to be taken whenever in doubt.

  10. Scientific reputations are at stake – if the flaws are called out. Likewise, journal standards, etc. However, there seems to be a code of silence or willing ignorance among the research community with respect to certain publications. This may partly be because of “hierarchy” but it also reflects dominance and herd behaviour. What we don’t appear willing to acknowledge is the wasted effort in poor reproducibility or the difficulty in actually publishing work that tries to correct an “established” finding. It’s typically too much effort and so the errors are compounded.

  11. Great article and excellent comments. All of the points you make are important and I would especially note the problems with being only as good as your sources, and with trusting traditional scientific authorities. I agree that the splintering of disciplines was one of the major causes of the arsenic debacle. The referees must have just not been the “right” experts. I work in publicity for a NASA observatory and I’ve heard of situations where experts were consulted before publicity but then a very different message was received from other scientists after the publicity was done. This is similar to one of the comments at the KSJ Tracker.

    I sometimes joke that new results are either wrong or they’ve been done before. It’s possible for both of these faults to be present, but it’s generally less likely than just one. Between the “this study is beating a dead horse” quote in your Nature article and all of the other criticisms that you heard from experts, it seems like they might have hit a double here!

    For difficult and controversial results like this I sometimes wonder if there is a bias in the criticism that’s heard. That is, the researchers who hear about a study and see a few problems – but nothing major – aren’t motivated enough to jump on the web with criticism or answer email from reporters. This is where having a trusted set of experts can be very helpful.

  12. Thanks first of all for sharing this inside view of how you work and what’s involved; it’s pretty insightful.
    I must say I find it kind of interesting that the many stories which grossly overstate the power of DNA didn’t cause a fury among geneticists. I understand they are passionate about their subject and believe in its importance and potential, but at the same time, if you ask most people about genetics today, I think you’ll find that most overestimate how much of their lives, capacities and health is genetically determined. So this “DNA is worthless” kind of story might just nudge many people toward a more balanced view and thus might serve a good purpose…

  13. This is a wonderful piece. It really gets to the grit of a lot of challenges that we’re currently facing.

    Regarding the point that “we are only as good as our sources”, one could argue that the embargo system makes this problem unnecessarily hard. I’ve often found stories where I’ve thought: “This is really complicated and interdisciplinary. What would be really useful is if I could just tweet a link to it and get my thousand-strong network to give me their opinions, or tell me who would be good to ask.” This would also help to deal with problem #4, because it provides speaking opportunities for people who don’t normally get their voices heard in the media.

    But I can’t. Because it’s embargoed.

  14. @Ed Yong: I made a comment like that on the Guardian piece “Science journalists should be asking questions and deflating exaggeration”.

    If papers came out when papers came out–and science bloggers had a look at them–we’d all be in a much better place to say how relevant or exciting the work is. And a decent and accurate story could be written. Does it really matter that it’s 5 days after the paper launches publicly?

    http://www.guardian.co.uk/discussion/comment-permalink/15013608

    But I was told that advertising income matters more. That’s especially troubling when I spotted the comment on CNN about the woman who was ready to dump her genetic test result because of the reporting of this paper. There are real consequences to misunderstanding.

  17. Having recently written a number of pieces comparing what people 15 to 20 years ago were predicting genetics/genomics would tell us with what has happened to date, I should point out the intrinsic difficulty with reporting on the area.
    Every month there is another tidal wave of genomics data. For example, the estimate is that up to 10,000 new vertebrate species genomes will be available in the next few years. And at the same time there is no, or next to no, theory which explains what we have discovered, either as a mathematical equation or as a logical argument. No E=mc². No periodic table. Rather the information feels like a jigsaw puzzle where, when you think you have put one piece together, that linkage suddenly reveals a dozen other pieces which now have to be fit together and — this is the most unnerving part — probably somewhere else.
    How do journalists sensibly report about this?
    I would say: They don’t, because nobody can. We all know genetics/genomics is severely meaningful because we all know we are looking at a biochemical equation whose final expression is “= being alive”. But since no geneticist can logically/mathematically explain how you get to the living point, the equation today ends with an “=?????” I personally have found this so frustrating that I have begun asking geneticists/genomicists not to tell me what they think their discoveries are saying, but instead to express those results in a biologically meaningful equation.
    You wouldn’t be surprised to learn this request irritates them no end.

  18. @Ed Yong: That’s an excellent point about one of the limitations imposed by the embargo system. Have you tried this canvassing of your Twitter followers for papers that haven’t been under embargo, or for papers where the embargo has dropped? In the latter case, if the initial wave of stories is flawed, you may end up with an important article summarizing a lot of dissent (like Carl Zimmer’s for the arsenic story), or at least a new list of expert contacts for future articles in the same field.

    Universe Today has taken the extreme move of giving up on embargoed stories, by ignoring them until information is publicly available:

    http://embargowatch.wordpress.com/2011/03/31/universe-today-jettisons-embargoes/

    But the cynic could argue they’re not giving up much, because relatively few astronomy stories appear in embargo-happy journals like Nature and Science.

    @Stephen Strauss: Good comment and one that surely applies to many areas of research.

    @Erika Check Hayden: I would be interested to hear if Vogelstein responds to all of this criticism.

  19. “It’s becoming more difficult to trust traditional scientific authorities.” This is a very important point. There are significant critiques about the lack of reproducibility of science (e.g. Begley and Ellis, Nature 2012, WSJ http://online.wsj.com/article/SB118972683557627104.html, http://online.wsj.com/article/SB10001424052702303627104576411850666582080.html). It is much too easy for an academic scientist to publish an article and move on to the next problem with little heed for whether their published work stands the test of time.

    A comment from the world of finance: complexity is fraud. It does not matter that the data sets are large. If something cannot be explained in simple terms, be cautious. As science moves into Big Data, the complexity of calculations will increase while the logistics of independent verification will become more burdensome. A ripe mix for deception and fraud.

    But there is more to this story. Look at Wikipedia, a resource that has been accused of being maintained by masses of dilettantes. Yes, there are certainly some defaced and erroneous pages, but on the whole it is an extraordinarily useful resource. If traditional scientific authorities are increasingly found wanting while nontraditional organizations like Wikipedia are producing useful results, then traditional science is heading for a significant shakeup.

    David

Comments are closed.