“Reading Minds” with fMRI

Some of you, I suspect, have read in Time, Slate, NPR, Popular Science, Wired, or dozens of other news outlets that scientists have figured out how to read minds. I hate to always be the neurotech downer, but that claim is just false. Laughably false.

That’s not to say that the study behind all of the commotion, published late last month in Current Biology, isn’t impressive and worth talking about. But, as happens all too often with brain imaging studies, this one was hyped, big time. Few reporters* bothered to look for critical, or even thoughtful, comments from experts outside the research team. And so their stories wound up with headlines like, “Scientists Can (Almost) Read Your Mind,” and “Soon Enough, You May Be Able to DVR Your Dreams.”

I’ll admit to being seduced by the video, below, of the study’s basic results (and I’m not the only one: it has amassed 1.6 million YouTube views since September 21st). On the left are movie clips that the study’s three subjects (who happened to be the researchers themselves) looked at while lying in a brain scanner. On the right is a computer’s best guess of what the subject was looking at:

So how did the scientists make this incredible, if creepy, thing? First the subjects watched hours and hours of movie clips while lying passively inside a functional magnetic resonance imaging (fMRI) machine, which measures second-to-second changes in blood flow inside the brain. Then the researchers fed that raw data — the brain activity from the visual cortex, at the back of the brain, along with the visual information from the movies — into a computer that could associate certain blips of neural activity with certain visual changes on the screen.
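For the technically curious, that “associate” step is essentially a regression problem. Below is a minimal sketch of the idea, assuming the movies have already been boiled down to a numeric feature vector per time point (the actual study used banks of motion-energy filters); the file names, array shapes, and the choice of scikit-learn’s ridge regression are my own illustration, not the authors’ code.

    # Sketch of the encoding-model training step (illustrative, not the authors' code).
    # movie_features:  (n_timepoints, n_features) visual features of the training clips
    # voxel_responses: (n_timepoints, n_voxels) fMRI signal from visual cortex,
    #                  shifted in time to account for the sluggish blood-flow response
    import numpy as np
    from sklearn.linear_model import Ridge

    movie_features = np.load("train_features.npy")
    voxel_responses = np.load("train_bold.npy")

    # Fit one regularized linear model per voxel: which visual changes
    # does each voxel respond to?
    encoding_model = Ridge(alpha=1.0)
    encoding_model.fit(movie_features, voxel_responses)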

Then came the big test: if the subjects looked at a new set of movies in the scanner, could this algorithm use only their brain activity to figure out what they were looking at? The answer is yes, sort of. The algorithm can’t create movies on its own; it’s more of a sophisticated matching machine. So the researchers gave it a bank of 18 million seconds of YouTube videos to choose from, and it picked the ones that seemed to best match the subjects’ brain activity. (The movies the subjects actually watched were not included in those 18 million.) The result is that seductive comparison above.
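In code, that matching step looks something like the sketch below: use the fitted encoding weights to predict what brain activity each candidate clip should evoke, then rank the clips by how well those predictions line up with the activity actually measured. Again, the file names, shapes, and the simple correlation score are assumptions for illustration; the study’s reconstructions were built from the best-matching clips rather than a single winner.

    # Sketch of the "matching machine" (illustrative, not the authors' code).
    import numpy as np

    # candidate_features: (n_clips, n_features) features of the 18-million-second clip bank
    # encoding_weights:   (n_features, n_voxels) linear model learned during training
    # observed_activity:  (n_voxels,) brain activity measured while viewing an unknown clip
    candidate_features = np.load("youtube_clip_features.npy")
    encoding_weights = np.load("encoding_weights.npy")
    observed_activity = np.load("test_bold.npy")

    # Predict the brain activity each candidate clip *would* evoke
    predicted = candidate_features @ encoding_weights        # (n_clips, n_voxels)

    # Score each candidate by its correlation with the observed activity
    pred_z = (predicted - predicted.mean(axis=1, keepdims=True)) / predicted.std(axis=1, keepdims=True)
    obs_z = (observed_activity - observed_activity.mean()) / observed_activity.std()
    scores = pred_z @ obs_z / obs_z.size                     # (n_clips,)

    # The clips shown on the right of the video are drawn from the top matches
    best_matches = np.argsort(scores)[::-1][:100]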

I first read the study with an overly cynical eye, probably because I was so annoyed by what I had read in the press. Now that I’ve had some time to cool down, I think it’s actually a remarkable piece of work, and one that neuroscientists of 10 or 20 years ago might not have been able to imagine. My beef, as usual, is not with the data, but with how the researchers (and journalists) have spun it.

On a website they created to explain their results, the researchers write that, “as long as we have good measurements of brain activity and good computational models of the brain, it should be possible in principle to decode the visual content of mental processes like dreams, memory, and imagery.” What’s more, if they collect appropriate data from non-visual areas of the brain, like the frontal cortex or the parietal cortex, then they should be able to “decode” thoughts outside of the visual arena. “Our results will be crucial for developing brain-reading technologies that can decode dynamic experiences,” they write.

I asked a few other neuroscientists what they thought about the study. “It was overly hyped, but it’s actually very cool work,” said Russell Poldrack, of the University of Texas at Austin, who is certainly not afraid of calling bullshit on ridiculous brain imaging claims. “Computationally, it is quite a feat,” said Marco Iacoboni of UCLA.

The main problem, though, according to Iacoboni, is that the researchers assume that our brains work like an algorithm does, using predictable if-then computations. Since the 1950s, lots of computer scientists have tried to build artificial intelligence with that basic approach. They’ve had a few successes — like IBM’s Jeopardy-winning Watson — but have failed at building anything that comes close to the perceptual and cognitive skills of a human infant.

“The whole idea of reading minds with machine learning is clearly destined to fail,” Iacoboni says.

Even if the scientists could recreate our visual experiences with 100 percent accuracy, that doesn’t mean that they’re doing it in the same way that the brain does. And what would that give us, anyway? “You can ‘see’ that I am representing in my mind a house. So what?” Iacoboni says. “I may be thinking of buying it, reminiscing about a house where I had a brief encounter with a lover, or it may be the house of a friend where I am going to have dinner tonight.”

The bottom line, he says, is that “the construction of meaning is not simple.” So why do journalists always make it seem like it is?

*Props to Erica Westly, whose Technology Review piece did quote other researchers (whose comments were extremely positive) and mentioned caveats of the study.

“Neuron fractal” image by Anthony Mattox on Flickr; “Like a Neural Network” image by Cobalt123

8 thoughts on ““Reading Minds” with fMRI”

  1. The picture on the right is a blond girl in a blue shirt and dark trousers holding a pumpkin.

    What makes people think it’s a parrot? 🙂

  2. Well, Marco Iacoboni?… Isn’t he the one who published, ahem, wrote that NYT op-ed piece claiming to use fMRI to predict voters’ attitudes toward the 2008 presidential candidates? A “feat” that many considered (at best) a collection of irritating just-so stories [http://kolber.typepad.com/ethics_law_blog/2007/11/this-is-your-br.html], to the point that some of the most respected names in the field — including Russ Poldrack — felt the urge to write a scathing response [http://www.nytimes.com/2007/11/14/opinion/lweb14brain.html]. So having him express words of caution feels bitterly ironic.

    Plus, argumentum ad hominem aside, Iacoboni is blatantly missing the point here by arguing that Gallant’s method will fail to uncover the non-visual aspects of experience, when the authors have explicitly anticipated this point: Gallant et al.’s goal is not only to use their method to decode visual experience from visual cortex activity but also to “decode thoughts” (sic) from higher-order regions (e.g. the frontal cortex). In other words: once this is achieved, shazam! Mr Iacoboni, there would be nothing left to decode if you could tell apart, say, my intention to buy and my visual recollection of that one apple pie I just saw…

    I am arguing at a purely logical level here. As a matter of fact, I am afraid that applying Gallant’s method to decode thought will not be anywhere near as easy as pie.

  3. Loving the LWON posts. This one is particularly fascinating. Keep up the great work.
