Trust no one, and other lessons I learned from physics reporters


As I’ve been thinking about the challenges facing science journalism, a little voice in my head has been murmuring, “Yes, but isn’t all this navel-gazing a bit biology-centric?”

Number one on my list of lessons from the “limits of DNA” story is that datasets are getting bigger, and few of us reporters are well equipped to cope on our own with the statistics behind the analysis of these datasets. But datasets in physics have been huge for a long time now. And while it’s definitely becoming more difficult to trust traditional scientific authorities in biology and biomedicine due to the splintering of disciplines and the struggle for funding, some of the forces that are driving fierce competition in biology don’t apply to physics. Physicists don’t have to pretty up their data to try to win drug approvals, for instance.

But I only know what I do, and all I do is biology. So I asked a highly unscientific sample of my favorite physics reporters and writers (n=4) what they thought about these issues. And what I learned surprised me.

First, I learned that the data in physics is so big that it actually helps reporters to be more certain about the results that they cover. As my Nature colleague, physics reporter Geoff Brumfiel, points out, “There are a lot of stats in physics, but there is also a lot more data. Experiments are highly repeatable and typically done thousands or millions of times. So the stats you see in physics tend to be quite solid.”
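
To see why all that repetition matters, here’s a minimal back-of-the-envelope sketch (in Python, using made-up numbers and simulated Gaussian noise rather than anyone’s real experimental data) of how the statistical uncertainty on a measured value shrinks as an experiment is repeated:

```python
import numpy as np

# Toy illustration only: simulate repeated measurements of an invented
# quantity and watch the standard error of the mean fall off as 1/sqrt(N).
rng = np.random.default_rng(0)

true_value = 1.0   # hypothetical quantity being measured
noise = 0.5        # spread of a single measurement

for n in (10, 10_000, 1_000_000):
    measurements = rng.normal(true_value, noise, size=n)
    std_err = measurements.std(ddof=1) / np.sqrt(n)
    print(f"N={n:>9,}  mean={measurements.mean():.4f}  std. error={std_err:.5f}")
```

Going from ten repeats to a million tightens the error bar by a factor of roughly 300, which is the sense in which the stats in physics can end up “quite solid.”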

But even if the results are usually reliable, research groups still differ over their interpretation. So my physics friends agreed with me that we’re only as good as our sources, and that these sources all have their own agendas.

Fellow LWONer Richard Panek, for instance, has written a lot about how human factors such as greed, competition and stubbornness can influence the way that physicists choose to interpret their data.

LWON’s own Ann Finkbeiner says she largely trusts physicists, because they demand highly significant statistical results. But, she adds, “all this believability and trust is called off when the subject has political implications” – as happens frequently with federally funded labs and agencies.

“So when Fermilab, for instance, announces a tantalizingly new clue to some secret of the universe, I just think ‘Fermilab’s being sparkly for the media and Congress again, so it can stay in business,’ because Fermilab has good reason to worry about going out of business,” Ann says by way of example.

Whether physics is as corruptible as biology is one of those oft-debated chestnuts that’s currently sparking a fresh round of discussion; Carl Zimmer has a piece in The New York Times exploring whether competition for publications and funding is driving a “dysfunctional climate” in biomedical research, but Sean Carroll thinks that physics is somewhat immune to the factors that may or may not be corrupting biology.

For reporters, it doesn’t really matter whether there is more or less competition in one field or another. It’s simply important for us to be aware that our sources have human motivations like anyone else, and to take that into account in everything we do. Or, as my Nature colleague, physics and astronomy reporter Eric Hand, puts it, “We have to do our jobs,” which means calling people, finding critics, and generally applying the same skeptical filter to science that reporters should apply to any subject.

Or, as evolutionary genomicist Joe Pickrell Tweeted after my last LWON post: “Trust no one.”

I couldn’t agree more.

***

Image: Physics reporters are used to navigating galaxies of data. (Get it?) Grand Spiral Galaxy NGC 1232, courtesy NASA, Astronomy Picture of the Day.

11 thoughts on “Trust no one, and other lessons I learned from physics reporters”

  1. I’m in psychology, and ended up being interviewed on Swedish Radio when the Stapel case broke. An obvious idea was that this could only happen in something as soft and squishy as social psychology. But just a year before, I had read Eugenie Samuel Reich’s book Plastic Fantastic, about Jan Hendrik Schön’s fraudulent research at Bell Labs – which involved multiple publications in prestigious journals. Very much physics. (I got the tip from Seth Roberts’s blog.)

    The “trust no one” part is sad, because science builds so much on trust. If you lose trust, there is nothing.

  2. The additional risk in biology, though, is the readership making flawed personal interpretations of disease or treatment possibilities.

    If you dispute a star distance or planet size, that doesn’t have a huge impact on your daily life. If you ignore your cardiac mutation, you could in fact drop dead on your next bike ride.

  3. I’ve been a reporter a while. And I have covered stuff that isn’t science. And while I have a (small and relatively cursory) science background, I learned a long time ago that you don’t trust anyone. As I said at one gathering where they had just this discussion, when you have to pore over government documents and corporate budgets, there is no peer review.

    Now, the difference with covering physics or other sciences is that you have less call to assume every source is a liar (or telling lies of omission) than you do whenever you cover cops or politics.

    But really, if you have to tell a reporter to not trust his sources completely, then you have just found someone who should be in another line of work.

    I always ask scientists (when time and space allow) how they came up with an idea, or what led them to it. As a layman I have to ask, “out of all the possible experiments, why this one?” and “why this line of investigation?” and “why now?”

    Those alone will tell you a lot. Even if it is something the source isn’t aware of initially. I ask myself those questions all the time pursuing stories.

    I think maybe many who see themselves as science writers have never been burned by a source (I have), and never been put in a position where they had to make really tough ethical decisions. That’s a tribute to the nature of the scientific enterprise, but it also means there are certain mental muscles, I guess, or habits, that don’t get developed. One of those is just assuming that everyone is a liar 🙂 or at least self-serving.

  4. @Åse: Science builds on trust, but it’s more like “trust but verify,” right? Replication seems to be an important part of the scientific process.

    @Mary M: I absolutely agree.

    @Jesse: It’s an interesting question whether science reporters treat their subjects with more reverence than they deserve, and Richard brought this up in our conversation for this post. I don’t think that unwarranted credulity is necessarily unique to science writers though. There’s certainly been abundant criticism of insufficient skepticism among reporters in other fields, e.g., among journalists covering the lead-up to the 2003 Iraq war.

  5. The problem in 2003 wasn’t credulity per se — if you asked the people actually covering stuff, most probably didn’t trust the Bush team at all. The problem was trying to prove the “liberal media” canard wrong. In an effort to show they were good patriots, the big news organizations’ management (with a few shining exceptions) got on board with the propaganda.

    There were examples of reporters for whom there is no excuse — I’m looking at you, Judith Miller — but fundamentally, covering science is different: it breeds people who aren’t so much reverent as just not experienced with getting burned.

    In physics especially, you don’t run into sordid things like near-statutory rape by a policeman, a local postdoc, say, getting the crap kicked out of him by a group of thugs, or someone calling you up and making threats. I’ve had those things happen to me and covered stuff like that. I asked how many sci journos in a room had done so, and there was me and one other person in a group of 20 or so.

    Now, that means that skeptical as you might be, you don’t develop certain habits of mind. If you aren’t looking for deception you won’t find it, you know? Because you aren’t used to being routinely lied to and insulted. 🙂

  6. “Trust no one” is a fine rule to apply in science – whether it be research, publicity or science journalism – but challenging to apply in practice. For example, when I’m scanning papers (I work in publicity for Chandra, a NASA observatory) I’m automatically skeptical about interesting results authored by people I’ve never heard of. But, does that mean I’m not skeptical enough about people who have built up a solid reputation? No one is infallible.

    I agree with Sean Carroll that fraud is probably less common in the physical sciences. But, there’s still plenty of room for errors in statistical analysis, methodology and interpretation. I know from direct experience that these problems can be very difficult to find, even for the smartest reviewers. So, the challenge for science journalists can be daunting, as Erika Check Hayden pointed out on this blog recently. There are no easy solutions to this problem. An astronomy colleague of mine has complained that science journalists should be less afraid of making mistakes, but I think it’s the height of professionalism to obsess about them and to continually search for solutions, as this blog and others do.

    The comments about political motivation especially interested me. There are many possible sources of motivation for publicity from scientists at federally funded labs and agencies. In the case of NASA, it’s a “statutory responsibility” for them to “provide for the widest practicable and appropriate dissemination of information concerning its activities and the results thereof.” I can’t see why Fermilab wouldn’t have a similar directive. It’s important for the public to see status reports on the research that they help fund. Can politics creep into this process? Of course, but based on my experience it’s not a common problem. I think peer review and general human fallibility cause a lot more problems.

    The fallout can be substantial when there are problems with publicized results, whether by NASA or Fermilab or a private institution. In particular, serious damage can be done to the reputation of the scientists involved. If a scientist’s work isn’t trusted by other scientists, then grants can be harder to earn, papers can be treated more harshly, and collaborations and promotions can be limited. Most scientists are aware of and respect these pitfalls when doing publicity and there’s only so much that “PR machines” can do to influence them. We all know about the exceptions.

    1. Peter, I was the one who talked about political motivation, so I’m answering you. I don’t really have an answer.

      I thoroughly understand the responsibility of publicly-funded institutions to report to the public. I don’t think that’s a problem at all. The problem for me as a writer and total outsider is knowing how to trust what I hear from the institutions. And maybe the problem I’m having is less with the institutions than with the “PR machines” you talk about. Big breakthroughs, farthest whatnots, tantalizing hints of particles — and I know if I call the actual scientist, I’m going to hear, “Well, yeah. That’s sorta 2 sigma.”

      That’s pretty much all I was saying.

  7. Ann, thanks for answering. I can appreciate being unsure about how much you can trust institutions. As Mitt Romney might say, “institutions are people” and we’ve already agreed that it makes sense not to trust anyone. I’ve been thinking about general issues like this recently, and plan to write more about them.

    One thing that struck me about the quote you provided in the article was the high level of trust you have in physicists who are *not* in federally funded labs and agencies. But, I can think of several reasons one might not want to trust these researchers, such as private agendas like career advancement, rivalry with competing researchers, or pressure from universities for publicity. Problems won’t be common, as I don’t think the pressures are nearly as severe as those in medical-related fields, but they can occur.

    It’s easy to come up with examples of problematic publicity efforts both inside and outside federally funded institutions, across many different fields. Examples include alien dinosaurs, artistic Krakens and – going much further back – cold fusion. As far as I know, all of these were publicized by professional societies and universities. Of course on the federal side there are stories like arsenic life and the Martian meteorite back in the 90s. Obviously any story remotely related to alien life should be treated with great suspicion!

    One point I can make is that there’s a big difference between distrusting a source and consistently finding problems, such as exaggeration of results. That’s why your follow-up comment intrigued me: “I know if I call the actual scientist, I’m going to hear, ‘Well, yeah. That’s sorta 2 sigma.’” Are you saying that when you call a scientist they usually end up explaining that the result being publicized is not statistically significant? Is that what you meant? Is this what most science journalists think? I’m very interested to hear what general reactions journalists have to publicity, whether federally funded or not.

    We work closely with scientists in preparing releases, and the other NASA observatories I’m familiar with do the same. Clearly these standards aren’t adopted by everyone. Charles Choi pointed out not too long ago on Twitter that many press releases may not be vetted by scientists, and he did a Storify on this:

    http://storify.com/cqchoi/the-dangers-of-press-releases

    I was surprised that a lack of vetting is common, but my explanation that *we* have scientists review releases was too late to make it into his Storify.

    Back to your comment: from our perspective, any differences between the wording of a press release and a scientist’s own words will mostly come down to mundane reasons, like a scientist’s reliance on familiar, technical language when talking to a reporter. It’s also possible that the scientist just isn’t paying attention, but I don’t think that happens very often.

    Taking your comment literally, I’ll say it’s very unlikely that we would base a whole release on a result that’s significant at only about a 2 sigma level. Maybe if the authors are willing to speculate about some secondary effect at low significance, we’d say there’s a hint of it being there, but there would have to be a primary detection of something at higher significance.

    Stories with solid statistics are sometimes relatively easy to publicize. The trickiest stories are often ones where multiple interpretations are possible and the statistical significance of each is difficult or impossible to quantify.

    Reputation is important for scientists, but it’s also important to us. If we became known for exaggerating or distorting stories then it would damage our credibility with astronomers and result in a lack of cooperation in doing stories. The first thing I ask when I contact people about a paper is usually “are you interested in doing publicity?” I expect, if we developed a bad reputation, that we’d also find fewer people coming to us with interesting papers. All of this would be counterproductive in trying to publicize the research Chandra is involved in. These are serious concerns for us.

    I use a term like “PR machine” or “publicity machine” with reluctance, since it does a poor job of describing what goes on behind the scenes. In particular, it implies that little or no thought or discretion is applied by people working on publicity and that press releases are just cranked out automatically. This isn’t true for the NASA observatories I’m familiar with. Mistakes can and have been made, of course, including by us, but it’s easy for outsiders to be critical and label publicity efforts as hype – and I’m not talking about you but some of the critics I read about – without understanding some of the challenges involved in doing publicity. Like umpiring in sports, publicity efforts tend to be ignored when a good job is done and panned when mistakes are made.

    I could go on and explain in detail about more of the challenges we face, but this comment is already very long. A personal challenge for me is that I’m naturally long-winded.

  8. Peter, you sound like a pub info guy I could trust. My own personal challenge is occasionally writing too short. So re: the 2-sigma business: I should have written that when I’ve gotten a press release announcing the newest, biggest, farthest something, and check it with an outside, knowledgeable scientist, I often hear, “Yeah, it’s interesting if it’s true but it really needs checking.” And I translate that in my own mind to, “the source believes it at the 2-sigma level.”

  9. Thanks Ann. Yes, I can understand hearing that sort of thing from outside experts. They’re not infallible either, but if you hear doubts from several different people on the same story, you know something is going on.


Categorized in: Erika, On Writing, Physics
