This is an archived article that was published in 2010, and information in the article may be outdated. It is provided only for personal research purposes and may not be reprinted.

University of Utah researchers have demonstrated the feasibility of translating brain signals into words, a preliminary step toward technology that could allow severely paralyzed people to turn thoughts into computer-generated words.

In a study published this month, the team decoded spoken words from signals recorded in the brain of a volunteer subject whose cranium was temporarily opened for epilepsy treatment.

"This is proof of concept," said lead author Bradley Greger, an assistant professor of bioengineering at the U. "We showed that using micro-scale signals in the brain, we could decode speech. Having proved that, we can refine it."

The team rigged a volunteer patient's brain with tiny electrodes, then recorded the signals as he repeated 10 words: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less. Then they tried to deduce which signals represented each word by analyzing patterns.

When they compared neural activity generated by two words, they could accurately attribute the signal to the correct word 76 to 90 percent of the time. But the success rate dropped to less than half when they examined neural patterns arising from all 10 words at once.
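The two figures are not directly comparable, because the chance baselines differ: guessing between two words succeeds 50 percent of the time, while guessing among 10 succeeds only 10 percent of the time, and each added candidate gives a decoder one more way to be wrong. A minimal, purely illustrative sketch (invented signal templates and noise levels, not the study's actual decoding method) reproduces this effect with a simple nearest-template classifier:

```python
import random

random.seed(0)

WORDS = ["yes", "no", "hot", "cold", "hungry",
         "thirsty", "hello", "goodbye", "more", "less"]
DIM = 16  # loosely mirroring the study's 16-electrode grids

# Hypothetical setup: each word gets a fixed "neural template" vector,
# and every trial is that template plus random noise.
templates = {w: [random.gauss(0, 1) for _ in range(DIM)] for w in WORDS}

def trial(word, noise=1.5):
    """Simulate one noisy recording of a spoken word."""
    return [x + random.gauss(0, noise) for x in templates[word]]

def nearest(signal, candidates):
    """Pick the candidate word whose template is closest to the signal."""
    def dist(w):
        return sum((s - t) ** 2 for s, t in zip(signal, templates[w]))
    return min(candidates, key=dist)

def accuracy(candidates, n_trials=200):
    """Fraction of simulated trials classified correctly."""
    hits = 0
    for _ in range(n_trials):
        w = random.choice(candidates)
        hits += nearest(trial(w), candidates) == w
    return hits / n_trials

pair_acc = accuracy(WORDS[:2])  # two-word discrimination (chance = 50%)
full_acc = accuracy(WORDS)      # ten-way discrimination (chance = 10%)
print(f"pairwise: {pair_acc:.0%}, ten-way: {full_acc:.0%}")
```

With more candidate words, the nearest-template rule has more ways to err, so ten-way accuracy falls below pairwise accuracy even though the underlying signals are unchanged.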

"It does work, but we have to get it better," Greger said. "We would like to get that from 10 to 50 words and also do the alphabet. You could restore a lot of communication."

Such a breakthrough could improve the lives of stroke patients, head trauma victims and others who have lost their ability to speak due to a condition known as "locked-in syndrome."

The team conducted the research using special micro-electrodes that rested on the volunteer's brain without penetrating the tissue. Greger, whose research interest is developing neural interfaces to control prosthetic devices, had already discovered that such electrodes could interpret brain signals that control arm movements. His subjects were epilepsy patients who had parts of their skulls removed, leaving their brains exposed so that doctors could locate the source of their seizures and neutralize them. His research piggybacked on brain surgeries that were clinically necessary.

One of the patients from the arm-movement study also volunteered for the speech study. U. neurosurgeon Paul House, an author on both studies, performed the craniotomy and installed the larger conventional electrocorticography electrodes to address the patient's epilepsy. For the speech study, House positioned two additional arrays of micro-electrodes on two critical areas of the patient's brain. One governs facial movements associated with speech and the other is involved with language comprehension.

"It's a big operation. We are opening a large window over the brain, but these electrodes are no more dangerous than ones we ordinarily use," House said. "What we were listening to has to do with forming speech. We were decoding facial movement because it is intimately tied to speech formation."

While the patient recovered in the ICU, the team asked him to repeat the 10 test words over and over, recording the corresponding brain activity in the two regions. This protocol was repeated in one-hour sessions over four consecutive days.

Analyzing the data for patterns that could be translated into words, researchers achieved greater accuracy — 85 vs. 76 percent, they reported — in decoding signals from the facial motor cortex as opposed to the language center (known in neuroscience as Wernicke's area). Greger's next step is to further investigate these brain regions with larger electrode grids, increasing the number from 16 to 121, to gather a much larger stream of data.

"There's a high patient need, and the technology is simple enough to have a short time horizon to get it to a real-world application. I'm hoping in two to three years for a trial in paralyzed patients," Greger said.

The research is published in the current edition of the Journal of Neural Engineering. Co-authors include U. engineering dean Richard Brown, doctoral candidate Spencer Kellis and University of Washington neuroscientist Kai Miller. Funding was provided by the National Institutes of Health, the National Science Foundation, the U. and the Defense Advanced Research Projects Agency, or DARPA.