By Sally Adee | December 24, 2012
I don’t like it when people tell me we only use 10 per cent of our brains. Someday, I hope to acquire the guts to issue the following rejoinder: “Which 10 per cent do you use?” But because I don’t like confrontation, I usually just make a face of mute disappointment and change the subject.
If you read LWON, you already know we use 100 per cent of our brain. That’s not the point of this post. But you know what is? I’ve spread similarly outrageous rumours about the brain.
This week, my esteemed colleagues will try to convince you that chemistry is the most nightmarish discipline to cover as a science journalist, or maybe archaeology or biology or physics. They will be wrong. The most dangerous science is neuroscience, because it gives us journalists so much rope with which to hang ourselves.
The brain is such an agreeable little lump of meat. Swing a tennis racket and neurons fire in your motor cortex. Try not to smoke a cigarette after two glasses of mulled wine, and extra blood flows to the executive manager in the dorsolateral prefrontal cortex.
It’s so seductive to try to explain the brain’s complicated behaviours by pointing to specific functional areas as though they were cuts of meat on a butcher’s chart.
Thinking about math? That’ll be the pork loin. Trying to resist that piece of chocolate? You’re engaging the prime rib. Trying too hard to be funny in a post about neuroscience strains the neurons in the anterior flank steak.
It’s a bit more complicated than that. Greater activity in certain brain areas might reflect what you’re doing or thinking, but there is also an enormous number of connections among all the different areas, so many that it’s not clear how meaningful it is when any one of them lights up.
Anyway, what do we even mean when we accuse one of these cuts of meat of “lighting up”?
Susan Greenfield and I once had a chat about this. “They talk about the brain lighting up in certain areas when certain thought processes are happening,” she told me. “I’m reminded that whenever I use my coffee maker, the light goes on. I know it’s making coffee by looking at the light, but that light doesn’t tell me anything about how the machine makes coffee.”
And even that thin method has its limits. For example, fMRI studies of people trapped in clanking, claustrophobic machines for several hours have shown that when they think about swinging a tennis racket, maybe some part of their brain gets a bit of extra blood flow, rendered in false-colour images. But when someone is in an fMRI machine, thinking about playing tennis, how similar is that to actually playing tennis? No idea, because no one’s ever managed to wear an fMRI machine during a game of doubles.
These thin results can get a bit dangerous when they’re overinterpreted, as Sharon Begley brilliantly explained a few years ago at the Daily Beast. She looked at some discredited neuroscience papers that had overrelied on spurious correlations and functional magnetic resonance imaging — fMRI, the neuroscientist’s favourite brain imaging technique — to draw questionable conclusions. “What’s striking about the discredited papers,” Begley said, “is how blithely they tend to vindicate the crudest of stereotypes.” Depending on what you were trying to prove, fMRI data could apparently be made to show almost anything.
By no means am I suggesting neuroscientists can’t do amazing things by looking at the brain’s blood flow changes and electrical signatures. It is possible to use fMRI to tell when a person who was thought to be in a vegetative state is thinking the word “no.” You can even apply that information; a thought-controlled prosthetic arm contains signal-processing algorithms that can divine your neurons’ intent to move your arm and translate it to an electronic replacement.
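To give a flavour of how that kind of decoding works, here is a minimal sketch in Python. It is emphatically not the algorithm inside any real prosthetic: the linear “tuning” model, the synthetic data, and the least-squares fit are all assumptions chosen for illustration.

```python
# A sketch of the idea behind a neural decoder, NOT the actual algorithms
# in any real prosthetic. All data here is synthetic; the linear "tuning"
# model and the least-squares fit are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_samples = 50, 1000
# Hypothetical intended 2-D arm velocities (x, y) over time.
velocity = rng.standard_normal((n_samples, 2))
# Assume each neuron's firing rate is a noisy linear function of
# intended velocity (a crude stand-in for a tuning curve).
tuning = rng.standard_normal((2, n_neurons))
rates = velocity @ tuning + 0.5 * rng.standard_normal((n_samples, n_neurons))

# "Train" the decoder: a least-squares map from firing rates back to velocity.
weights, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode intent from the recorded activity and compare with the truth.
decoded = rates @ weights
print("decoded-vs-true correlation:", np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1])
```

Note what the sketch does and doesn’t do: it fits a statistical map from firing rates to intended movement and inverts it, without containing a single fact about how the brain actually works.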
But figuring out how to use the brain’s signals to do something interesting is not the same thing as understanding the brain. The blood flow changes revealed by fMRI are not answers, they’re clues. The reason they can’t be answers is that there is as yet no general theory of neuroscience. Neuroscience has no E = mc² or F = ma.
Or rather, it has lots of different ones. The human brain has about 100 billion neurons, each networked with up to 25,000 other neurons by way of communication channels called synapses. A synapse will only pass information to a neighbouring neuron after an electrical impulse called an action potential has motivated the transfer; then ion channels open and neurotransmitters are released. This is the basic interaction that creates all of human experience. So that’s biology (neurons), physics (action potentials), chemistry (ion channels), pharmacology (neurotransmitters), and a lot of calculus. Some researchers believe quantum mechanical processes are involved too. Neuroscience, in other words, is all the sciences rolled up into one.
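To see how little machinery that basic interaction needs on paper, here is a toy “leaky integrate-and-fire” neuron, the simplest textbook caricature of a spiking cell. It is a cartoon, not biophysics: every parameter below is invented, and the ion channels and neurotransmitters are nowhere in it.

```python
# A toy "leaky integrate-and-fire" neuron: the simplest textbook caricature
# of the interaction described above. Every number here is invented.
dt = 0.1          # timestep, ms
tau = 10.0        # membrane time constant, ms
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0   # millivolts
drive = 20.0      # constant input from "upstream" neurons (arbitrary units)

v = v_rest
spike_times = []
for step in range(1000):                    # simulate 100 ms
    # The membrane voltage leaks back toward rest while integrating input.
    v += (dt / tau) * (v_rest - v + drive)
    if v >= v_thresh:                       # threshold crossed:
        spike_times.append(step * dt)       # fire an action potential...
        v = v_reset                         # ...and reset the membrane.

print(f"{len(spike_times)} spikes in 100 ms, first at {spike_times[0]:.1f} ms")
```

Even this cartoon quietly commits to the neuron as the basic unit, which, as it turns out, is a contested choice.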
Which of them is the governing discipline? Whose equations are most important? These are important questions when it comes to modelling anything. A model needs a basic unit. Is it the neuron? Depends on who you talk to. Henry Markram, a neuroscientist at EPFL who has spent almost a decade trying to simulate the human brain on a supercomputer, thinks the basic unit should be the neocortical column, a small cylinder that in human brains comprises about 70,000 neurons. He likens it to the bricks that make up a house. But others think that brick is the atom. Or the molecule. The danger is that under enough scrutiny, any of these bricks could eventually begin to seem as complex as the house itself.
When there’s no equation to fall back on, there’s a lot of leeway to just make shit up. Falsely reassured by false-colour images and stretched metaphors, we science writers can stretch the data to draw convenient conclusions about human nature (“women love shopping!”), but that is only part of the problem: the metaphors are even worse. “The brain is wired very much like a microprocessor,” I once mansplained in a blog post for IEEE Spectrum, without having the slightest clue that what I was parroting was absolute bollocks.
In the grand scheme of things, journalists inadvertently misrepresenting neuroscience is a bit of a first-world problem. The neuroscientists know the score; the journalists will keep getting either bamboozled or ignored. No one ever died from believing we only use 10 per cent of our brains.
There is one group, however, who will suffer disproportionately. Won’t someone please think of the Singularitarians?
Enabled by tech journalists who bang on about the brain’s “wiring diagram” and its similarity to microprocessor architecture, this group firmly believes that within ten, twenty, or fifty years, computers will rival the sophistication and complexity of the human brain. As far as I can tell — because there are different sects — Singularitarians believe that this will be the magic moment when we will be able to upload our brains into machines and float up into the Great Big Cloud Computer of immortality. When you tell them that this is highly improbable, they get very angry.
The 10-percenters aren’t entirely off base. We may use 100 per cent of our brain, but I’ll be damned if we understand 10 per cent of it. So the next time someone brings up that pernicious statistic, remember that it’s not their fault: it’s neuroscience’s fault for being the cruellest of them all.
Brain illustration from Shutterstock
Cow figurine from Shutterstock
MRI machine from Shutterstock
Matrix brain from Shutterstock