If you're like me, you probably think that using magnetic resonance imaging (MRI) or more traditional electroencephalograms (EEG) to image our brains gives accurate results. Wrong! Our brain is always changing its mind, switching from one activity to another every millisecond or so. But current imaging techniques average these activities over seconds, creating blurry images of active areas in the brain. Now, after eight years of work, neurobiologists at the University of California, San Diego, have developed a new technique to capture thinking as it happens. The researchers are currently using this new technique to study patients with epilepsy and autism.
A team led by University of California San Diego neurobiologists has developed a new approach to interpreting brain electroencephalograms, or EEGs, that provides an unprecedented view of thought in action and has the potential to advance our understanding of disorders like epilepsy and autism.
The significance of the advance is that thought processes occur on the order of milliseconds -- thousandths of a second -- but current brain imaging techniques, such as functional magnetic resonance imaging (fMRI) and traditional EEGs, are averaged over seconds. This provides a "blurry" picture of how the neural circuits in the brain are activated, just as a picture of waves breaking on the shore would be a blur if it were created from the average of multiple snapshots.
Here is how the new technique works.
To take an EEG, recording electrodes -- small metal disks -- are attached to the scalp. These electrodes can detect the tiny electrical impulses nerve cells in the brain send to communicate with each other. However, interpreting the pattern of electrical activity recorded by the electrodes is complicated because each scalp electrode indiscriminately sums all of the electrical signals it detects from the brain and non-brain sources, like muscles in the scalp and the eyes.
"The challenge of interpreting an EEG is that you have a composite of signals from all over the brain and you need to find out what sources actually contributed to the pattern," explains Scott Makeig, a research scientist at the Swartz Center for Computational Neuroscience in UCSD's Institute for Neural Computation. "It is a bit like listening in on a cocktail party and trying to isolate the sound of each voice. We found that it is possible, using a mathematical technique called Independent Component Analysis, to separate each signal or 'voice' in the brain by just treating the voices as separate sources of information, but without other prior knowledge about each voice."
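To get a feel for the "cocktail party" idea Makeig describes, here is a minimal toy sketch of Independent Component Analysis. This is not the researchers' actual pipeline -- the synthetic signals, the mixing weights, and the use of scikit-learn's `FastICA` are all illustrative assumptions -- but it shows the core trick: recovering independent sources from indiscriminate weighted sums, with no prior knowledge of the mixing.

```python
# Toy ICA demo: two independent "voices" are mixed at two "electrodes",
# then unmixed using only their statistical independence.
# Hypothetical example -- not the UCSD team's code.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two independent sources: a sine wave and a square wave.
s1 = np.sin(2 * t)
s2 = np.sign(np.sin(3 * t))
S = np.c_[s1, s2]
S += 0.02 * rng.standard_normal(S.shape)  # small sensor noise

# Each "electrode" records a weighted sum of both sources,
# like a scalp electrode summing signals from many brain areas.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])  # assumed mixing weights
X = S @ A.T  # observed mixtures, shape (n_samples, n_electrodes)

# ICA separates the mixtures back into independent components,
# without being told the mixing weights A.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)

# Each recovered component should match one true source closely
# (up to sign, scale, and ordering, which ICA cannot determine).
for i in range(2):
    corr = max(abs(np.corrcoef(S_est[:, i], S[:, j])[0, 1])
               for j in range(2))
    print(f"component {i}: best |correlation| with a true source = {corr:.3f}")
```

The key point is that independence alone is enough: ICA never sees the mixing matrix, yet each recovered component lines up with one original "voice" almost perfectly.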
Below is a snapshot of a movie showing, in real time, patterns of activity in the brain, illustrated by the colored spheres (Credit: Scott Makeig et al.). You can even download this movie if you like (4.26 MB).
For more information about this technique, you can read the research paper published by the journal Public Library of Science Biology in its June 2004 issue. The full paper, titled "Electroencephalographic Brain Dynamics Following Manually Responded Visual Targets," is available here.
Sources: University of California San Diego news release, June 15, 2004, via EurekAlert!; Public Library of Science Biology, Volume 2, Issue 6, June 2004