Reading Minds – An Overview of the Last Decade’s Research

On January 6, 2014
By scanning brain activity, scientists may be able to decode people’s thoughts, dreams and even intentions. A recent news article in Nature (Nature 502, 428–430 (2013); doi:10.1038/502428a) by Kerri Smith discusses the work of Professor Jack Gallant’s laboratory at the University of California, Berkeley, which focuses on computational modeling of the visual system. Gallant’s group and others are trying to find out what underlies the activity patterns that brain scans reveal, and to work out the codes and algorithms the brain uses to make sense of the world around it. They hope that these techniques can reveal the basic principles governing brain organization and how the brain encodes memories, behavior and emotion.
Illustration by Peter Quinnell; Photo: Kevork Djansezian/Getty

Back in 2012 we published an article here on Neurorelay about Gallant’s research (click here for the article). The Nature article goes further and presents recent developments in this direction. It looks like groups around the world are using techniques like these to try to decode brain scans and decipher what people are seeing, hearing and feeling, as well as what they remember or even dream about.
Neuroscientists can predict what a person is seeing or dreaming by looking at their brain activity.

Although companies are starting to pursue brain decoding for a few applications, such as market research (neuromarketing) and lie detection, scientists are far more interested in using this process to learn about the brain itself.

Applying their techniques beyond the encoding of pictures and movies will require a vast leap in complexity. “I don’t do vision because it’s the most interesting part of the brain,” says Gallant. “I do it because it’s the easiest part of the brain. It’s the part of the brain I have a hope of solving before I’m dead.” But in theory, he says, “you can do basically anything with this”.

Decoding techniques interrogate more of the information in a brain scan than conventional analyses, which ask only where activity is strongest. Early studies of this sort showed that objects are encoded not just by one small, highly active area, but by a much more distributed array of regions. The recordings are fed into a ‘pattern classifier’, a machine-learning algorithm that learns the activity patterns associated with each picture or concept. Once the program has seen enough samples, it can start to deduce what the person is looking at or thinking about; a minimal sketch of this idea follows below. Closer attention to these patterns also lets researchers test hypotheses about the nature of psychological processes, asking questions about, for example, the strength and distribution of memories.
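
To make the idea concrete, here is a minimal sketch of pattern classification in Python. Nothing in it is a real lab pipeline: the “voxel” responses, the two stimulus categories and the noise levels are all simulated, and scikit-learn’s logistic regression simply stands in for whatever classifier a given study actually used.

```python
# A minimal sketch of fMRI pattern classification on simulated data.
# The "voxel" responses, categories and noise are invented, and logistic
# regression stands in for the classifier a real study might use.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500  # hypothetical number of scans and voxels

# Two stimulus categories, each evoking its own (noisy) distributed
# activity pattern across the voxels.
labels = rng.integers(0, 2, n_trials)                  # 0 = "scissors", 1 = "shoes"
category_patterns = rng.normal(0, 0.5, (2, n_voxels))  # mean pattern per category
scans = category_patterns[labels] + rng.normal(0, 1.0, (n_trials, n_voxels))

# Train on example scans, then estimate how well the classifier can
# deduce the viewed category from trials it has never seen.
classifier = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(classifier, scans, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f}")  # chance is 0.50
```

Accuracy reliably above chance is the telltale sign that the distributed voxel pattern carries information about what the person was looking at, which is exactly what the early category-decoding studies established.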

  • In early studies, scientists were able to show that they could get enough information from these patterns to tell what category of object someone was looking at — scissors, bottles and shoes, for example. “We were quite surprised it worked as well as it did,” says Jim Haxby at Dartmouth College in New Hampshire, who led the first decoding study in 2001.
  • Soon after, two other teams independently used it to confirm fundamental principles of human brain organization. It was known from studies using electrodes implanted into monkey and cat brains that many visual areas react strongly to the orientation of edges, combining them to build pictures of the world. In the human brain, these edge-loving regions are too small to be seen with conventional fMRI techniques. But by applying decoding methods to fMRI data, John-Dylan Haynes and Geraint Rees, both then at University College London, and Yukiyasu Kamitani at ATR Computational Neuroscience Laboratories in Kyoto, Japan, with Frank Tong, now at Vanderbilt University in Nashville, Tennessee, demonstrated in 2005 that pictures of edges also triggered very specific patterns of activity in humans. The researchers showed volunteers lines in various orientations — and the different voxel mosaics told the team which orientation the person was looking at.
  • Edges became complex pictures in 2008, when Gallant’s team developed a decoder that could identify which of 120 pictures a subject was viewing — a much bigger challenge than inferring what general category an image belongs to, or deciphering edges. They then went a step further, developing a decoder that could produce primitive-looking movies of what the participant was viewing based on brain activity (a toy version of the identification step is sketched after this list).
  • Kamitani and his team published their attempts at dream decoding in Science in 2013. They let participants fall asleep in the scanner and then woke them periodically, asking them to recall what they had seen. The team tried first to reconstruct the actual visual information in dreams, but eventually resorted to word categories. Their program was able to predict with 60% accuracy what categories of objects, such as cars, text, men or women, featured in people’s dreams. The subjective nature of dreaming makes it a challenge to extract further information, says Kamitani. But dreams may engage more than just the brain’s visual realm, and involve areas for which it’s harder to build reliable models.
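
The identification step in the 2008 work can be illustrated with a short sketch, under heavy simplifying assumptions: a linear encoding model whose weights are taken as given, random numbers standing in for real image features, and Gaussian noise standing in for scanner noise. It shows the matching logic only, not Gallant’s published method.

```python
# A toy sketch of identification-style decoding: a linear encoding model
# predicts each voxel's response from image features, and the decoder
# picks whichever candidate image's predicted pattern best matches the
# observed scan. Features, weights and noise are simulated assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_images, n_features, n_voxels = 120, 50, 300

features = rng.normal(size=(n_images, n_features))  # stand-in image features
weights = rng.normal(size=(n_features, n_voxels))   # encoding weights (assumed already fitted)
predicted = features @ weights                      # predicted voxel pattern for each image

# Simulate a scan: the subject views one image, and we observe its
# predicted pattern corrupted by measurement noise.
viewed = 42
scan = predicted[viewed] + rng.normal(scale=3.0, size=n_voxels)

def correlation(a, b):
    """Pearson correlation between two voxel patterns."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Identify the viewed image: score every candidate and take the best match.
scores = np.array([correlation(scan, p) for p in predicted])
print("identified image:", scores.argmax(), "| true image:", viewed)
```

Scaling up the same matching logic, with richer features, weights fitted to real scans, and movie frames in place of static pictures, is roughly what separates this toy from the systems described above.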

Decoding (reverse engineering) relies on the fact that correlations can be established between brain activity and the outside world. And simply identifying these correlations is sufficient if all you want to do, for example, is use a signal from the brain to command a robotic hand. But Gallant and others want to do more; they want to work back to find out how the brain organizes and stores information in the first place — to crack the complex codes the brain uses.
