This is what a meaningless study looks like.

So many press releases land in my inbox that I don’t have time to read them all. But a recent release from the Canadian Medical Association Journal (CMAJ) caught my eye — “Treatment by naturopathic doctors shows reduction in cardiovascular risk factors. Randomized controlled trial.”

This would be big news if it were true. Naturopathic physicians are eager for respect in the mainstream, and if a randomized, controlled trial really showed that naturopathic medicine could produce measurable results, this could give naturopaths some bona fide credibility.

At this point you may be asking, what exactly is “naturopathic medicine” anyway? According to the American Association of Naturopathic Physicians, “Naturopathic physicians work with nature to restore people’s health.” Got that? They work with nature. Not to be confused with physicians who work against nature. For instance, oncologists who work to stop cancers made possible by genes that evolved in nature. Or infectious disease docs who fight parasites that have naturally co-evolved with their hosts.

Here’s what the American Cancer Society says about naturopathy: “Available scientific evidence does not support claims that naturopathic medicine is effective for most health problems.” A letter published in Allergy, Asthma & Clinical Immunology in 2011 examined the treatments offered at 53 naturopathic clinics in Alberta and British Columbia and concluded,

A review of the therapies advertised on the websites of clinics offering naturopathic treatments does not support the proposition that naturopathic medicine is a science and evidence-based practice.

So if the CMAJ study truly showed that naturopathic medicine was effective for something, this would give naturopaths some ammunition against their critics. Alas, the study provides nothing of the sort.

The authors split the study’s 246 volunteers into two groups. The control group received “enhanced usual care,” while the treatment group got seven sessions of “naturopathic care” in addition to the enhanced usual care.

What are enhanced usual care and naturopathic care? Who knows. Here’s how they’re explained in the paper:

Because a range of interventions were recommended to participants in the naturopathic group, the frequency and composition of each recommendation as well as participant adherence are not reported. We did not have direct control over the care given to the control group; thus, we did not track or report recommendations made by the participants’ family physicians.

In other words, the researchers have no idea what treatments and advice any of the patients were receiving. Naturopaths could select the recommendations they offered their patients from a menu of more than 20 different interventions. Many of these, like losing weight and exercising, are no different from what a traditional doctor might tell a patient. Others, like pomegranate juice and cinnamon, are more “alternative.”

The paper states, “This study has been peer reviewed,” and it looks as though the authors simply printed the reviewers’ comments in the discussion section. For instance,

Some may perceive this as an unfair comparison and would prefer that we had asked whether the addition of, for example, 7 sessions of naturopathic care to usual care reduced cardiovascular risk compared to the addition of 7 sessions with a family physician.

The authors also state that critics

might reasonably suggest that our design was unfair and was geared toward showing a benefit in the intervention group.

But they didn’t let such shortcomings stop them from drawing sweeping conclusions. Their results showed that people in the naturopathy group had a modestly lower 10-year cardiovascular risk score and slightly lower rates of metabolic syndrome than people in the control group. From this, the researchers conclude:

Our findings support the hypothesis that the addition of naturopathic care to enhanced usual care may reduce the risk of cardiovascular disease among those at high risk.

No, that’s not what this study found. The findings don’t support the hypothesis; they merely fail to contradict it. An editorial by CMAJ editor Matthew B. Stanbrook, MD, PhD, gets to the heart of the matter.

To the extent that these may have driven the observed cardiovascular risk reductions, one might say that the intervention worked because the naturopaths were, in effect, practising medicine.

There is nothing about this study that proves that naturopathy, rather than increased interactions with a physician, produced the better outcomes in the treatment group. In his editorial, Stanbrook writes,

Physicians should hold complementary medicine accountable to scientific standards equivalent to — but not higher than — medicine itself. Consequently, medical journals must be open to publishing complementary medicine research that succeeds in meeting these standards. CMAJ, for one, will continue to do so.

I commend CMAJ for its willingness to publish research on complementary medicine. Scientific minds must remain open to the possibility that new evidence will support treatments that fall outside of the mainstream. But here CMAJ has failed in its task of holding naturopathic medicine to scientific scrutiny. The problem with this study is not that the treatment being tested falls outside of standard practice, but that the experiment was not properly designed to test the hypothesis.

7 thoughts on “This is what a meaningless study looks like.”

  1. If you like this article, then you should read some of the works of Nassim Nicholas Taleb, such as “Antifragile” or “Fooled by Randomness”. Taleb’s latest book, Antifragile, also talks a bit about how things are “proven” in science and medicine. Spoiler alert… many people who call themselves scientists make some very fundamental mistakes when they apply statistics to their work. Definitely worth reading!

  2. This speaks more to the issue that people don’t understand the designs of RCTs and what they can answer and are powered to answer. While I can’t find the full study in the CMAJ to read about the design, I would second what was quoted from the editorial: this doesn’t prove anything. Depending on how the trial was designed, it would shape the type of conclusions that could be made, but if the outcomes did not reach significance, you can’t really say anything (again, this would be less of an issue if they had used a non-inferiority design and did achieve significance, in which case they could say that the addition of naturopathic medicine is not worse than conventional medicine). The biggest issue appears to be the lack of the C in RCT: control. The less tightly controlled your interventions are, the less you can read into the outcomes (I’m not sure how the trial happened without a defined protocol, at least on the conventional medicine side). I would also argue, at least in a Canadian setting, where we have universal health care, that you would need to use a superiority design to justify the inclusion of naturopathic care in the treatment of cardiovascular disease. The reason is that everyone has access to the standard of care, and since you have to pay for sessions with the ND, if it’s just not worse than standard of care, why would you pay the money to go and see them? If it were actually better than just getting the standard of care, then you could make a stronger recommendation to patients to consider the option, or make the case that inclusion of naturopathic care become standard of care for the treatment of these conditions…
    I’m strongly in favor of evidence-based practice, as long as practice is based on the results of the trials being considered, rather than the conclusions of the authors included in the article.

  3. Doesn’t “enhanced usual” treatment seem slightly self-contradictory?

    “It’s normal, but we’re paying special attention to it” – so whatever this treatment was, its name suggests something out of the ordinary anyway. Hardly a good set of control conditions.

    Though as outlined above this isn’t the only problem with this study.

  4. 1. What if all physicians spent as much time counseling patients on risk factors? Would the CMA fund this?
    2. What is self-directed care as described in the article? Is this sticking to recommendations?
    3. How was the placebo effect handled? 1 hour spent with each patient and then an additional 3 hours should suggest these volunteers felt someone was interested.
    4. What exactly were the dietary and supplemental interventions? And how do these differ from ordinary suggestions to help decrease risk factors?
