Capturing your thoughts

A team led by University of California San Diego (UCSD) neurobiologists has developed a new approach to interpreting brain electroencephalograms, or EEGs, that provides an unprecedented view of thought in action and has the potential to advance our understanding of disorders like epilepsy and autism.

The new information-processing and visualization methods, which make it possible to follow activation in different areas of the brain dynamically, were detailed in a paper featured in a recent issue of the journal Public Library of Science (PLoS) Biology.

Thought processes occur on the order of milliseconds (thousandths of a second), but current brain imaging techniques, such as functional magnetic resonance imaging (fMRI) and traditional EEGs, are averaged over seconds. This provides a “blurry” picture of how the neural circuits in the brain are activated, just as a picture of waves breaking on the shore would be a blur if it were created from the average of multiple snapshots.

“The new technique can parse EEG data and identify the individual signals coming from different areas of the brain,” says Scott Makeig, a research scientist at the Swartz Center for Computational Neuroscience in UCSD’s Institute for Neural Computation.

“This much more comprehensive view of brain dynamics was only made possible by exploiting recent advances in mathematics and increases in computing power.”

To take an EEG, recording electrodes (small metal disks) are attached to the scalp. These electrodes can detect the tiny electrical impulses that nerve cells in the brain send to communicate with each other. However, interpreting the pattern of electrical activity recorded by the electrodes is complicated, because each scalp electrode indiscriminately sums all of the electrical signals it detects from brain and non-brain sources alike, such as muscles in the scalp and the eyes.
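As a concrete picture of that summing, here is a minimal sketch in Python: a few made-up source signals (two invented cortical rhythms and an eye-blink artifact, none of them real data) are combined with arbitrary weights into the composite traces a small set of electrodes would record. All names, signal shapes, and weights are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 1000)                          # 2 seconds of simulated recording

# Hypothetical sources: two cortical rhythms plus an eye-blink artifact.
alpha_rhythm = np.sin(2 * np.pi * 10 * t)            # ~10 Hz oscillation
beta_rhythm = np.sign(np.sin(2 * np.pi * 23 * t))    # ~23 Hz square-ish rhythm
eye_blinks = (rng.random(t.size) < 0.01).astype(float)  # sparse artifact spikes

S = np.column_stack([alpha_rhythm, beta_rhythm, eye_blinks])  # true sources

# Each electrode records its own weighted sum of all sources.
# In a real recording these weights are unknown; here they are random.
A = rng.normal(size=(4, 3))   # 4 electrodes, 3 sources
X = S @ A.T                   # the composite signals the electrodes actually record
```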

“The challenge of interpreting an EEG is that you have a composite of signals from all over the brain and you need to find out what sources actually contributed to the pattern,” explains Makeig.

“We found that it is possible, using a mathematical technique called Independent Component Analysis, to separate each signal, or ‘voice’, in the brain just by treating the voices as separate sources of information, without any other prior knowledge about each voice.”

Independent component analysis, or ICA, looks at how distinct the activity in each patch of the brain’s cortex is, and uses that information to determine the location of each patch and to separate out the signals from non-brain sources. Because ICA can distinguish signals that are active at the same time, it also makes it possible to pick out the brain signals that correspond to the brain telling the muscles to take an action.
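Continuing the mixing sketch above, the separation step can be illustrated with scikit-learn’s FastICA, which stands in here for ICA in general rather than for the specific implementation the researchers used. Given only the composite electrode traces X, it recovers estimated sources that line up with the originals.

```python
from sklearn.decomposition import FastICA
import numpy as np

# Unmix the composite electrode signals X from the sketch above.
ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)   # one estimated source ("voice") per column
A_est = ica.mixing_            # estimated per-electrode weights for each source

# Compare each true source to each estimated source by correlation:
# each true source should correlate strongly (near +/-1) with exactly one estimate.
corr = np.corrcoef(S.T, S_est.T)[:3, 3:]
print(np.round(np.abs(corr), 2))
```

ICA returns its components in arbitrary order and scale, which is why this sketch matches the estimated sources to the originals by correlation rather than comparing them directly.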

The research was funded by the Swartz Foundation, the National Institutes of Health and the Howard Hughes Medical Institute.