Posts Tagged ‘brain activity’

By Hilary Hurd Anyaso

Leading theories propose that sleep presents an opportune time for important new memories to become stabilized, and it has long been known which brain waves are produced during sleep. In a new study, researchers sought to better understand the brain mechanisms that secure memory storage.

The team from Northwestern and Princeton universities set out to find more direct and precisely timed evidence for the involvement of one particular sleep wave — known as the “sleep spindle.”

In the study, sleep spindles, described as bursts of brain activity typically lasting around one second, were linked to memory reactivation. The paper, “Sleep spindle refractoriness segregates periods of memory reactivation,” was published today in the journal Current Biology.

“The most novel aspect of our study is that we found these spindles occur rhythmically — about every three to six seconds — and this rhythm is related to memory,” said James W. Antony, first author of the study and a postdoctoral fellow in Princeton’s Computational Memory Lab.

Three experiments explored how recent memories are reactivated during sleep. While volunteers took an afternoon nap, sound cues, each linked to a specific memory, were surreptitiously played. The researchers’ final experiment showed that if a cue was presented at an opportune time, such that a spindle could follow it, the linked memory was more likely to be retained. If a cue was presented when a spindle was unlikely to follow, the linked memory was more likely to be forgotten.
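For readers curious how such spindle-contingent timing could work, here is a minimal sketch in Python, assuming a simple offline analysis of a single EEG channel. The sampling rate, sigma-band limits, detection threshold and the three-second refractory window are illustrative stand-ins, not the study’s actual implementation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 200                 # EEG sampling rate in Hz (illustrative)
SIGMA_BAND = (11, 16)    # spindle ("sigma") frequency band in Hz
REFRACTORY_S = 3.0       # assume a new spindle is unlikely within ~3 s of the last

def sigma_power(eeg, fs=FS):
    """Band-pass the EEG in the sigma band and return its sample-wise power."""
    sos = butter(4, SIGMA_BAND, btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg) ** 2

def candidate_cue_times(eeg, threshold, fs=FS):
    """Return moments (in seconds) when a cue could be followed by a spindle,
    i.e. moments far enough past the last detected spindle."""
    power = sigma_power(eeg, fs)
    last_spindle = -np.inf
    times = []
    for i, p in enumerate(power):
        t = i / fs
        if p > threshold:                     # crude threshold-based spindle detection
            last_spindle = t
        elif t - last_spindle > REFRACTORY_S:
            times.append(t)                   # refractory period has elapsed
    return times
```

On this logic, a cue played at one of the returned times has a good chance of being followed by a spindle, whereas a cue played shortly after a spindle falls in its refractory period, matching the retention pattern reported above.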

“One particularly remarkable aspect of the study was that we were able to monitor spindles moment by moment while people slept,” said Ken A. Paller, senior author of the study and professor of psychology at Northwestern’s Weinberg College of Arts and Sciences. “Therefore, we could know when the brain was most ready for us to prompt memory reactivation.”

If the researchers reminded people of a recently learned fact, a spindle would likely be evident in the cerebral cortex, and memory for that information would be improved, added Paller, also director of Northwestern’s Cognitive Neuroscience Program.

“In memory research, we know it’s important to segregate experiences while you’re awake so that everything doesn’t just blend together,” said Antony, who worked in Paller’s lab at Northwestern as a doctoral student. “If that happens, you may have difficulty retrieving information because so many things will come to mind at once. We believe the spindle rhythmicity shown here might play a role in segregating successive memory reactivations from each other, preventing overlap that might cause later interference between memories.”

Ultimately, the researchers’ goal is to understand how sleep affects memory under natural conditions and how aging or disease can impact these functions.

“With that goal in mind, we’ve helped elucidate the importance of sleep spindles more generally,” Antony said.

Paller said they are on the trail of the physiology of memory reactivation.

“Future work will be needed to see how spindles fit together with other aspects of the physiology of memory and will involve other types of memory testing and other species,” Paller said.

In addition to Antony and Paller, co-authors are Luis Piloto, Margaret Wang, Paula Pacheco and Kenneth A. Norman, all of Princeton.

https://news.northwestern.edu/stories/2018/may/bursts-of-brain-activity-linked-to-memory-reactivation/

Talking to yourself used to be a strictly private pastime. That’s no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world.

“If you’re reading text in a newspaper or a book, you hear a voice in your own head,” says Brian Pasley at the University of California, Berkeley. “We’re trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak.”

When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words.

In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brains to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain’s temporal lobe were active only in response to certain aspects of sound, such as a specific frequency. One set of neurons might respond only to sound waves with a frequency of 1000 hertz, for example, while another set might respond only to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269).
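As a rough illustration of this kind of decoding, a toy version in Python might look like the sketch below, assuming the mapping from electrode activity to an audio spectrogram can be approximated linearly. The array shapes, the random stand-in data and the use of ridge regression are hypothetical choices, not the paper’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical shapes: time points x electrodes, and time points x spectrogram bins.
n_time, n_electrodes, n_freq_bins = 5000, 64, 32
rng = np.random.default_rng(0)
neural = rng.standard_normal((n_time, n_electrodes))  # stand-in for recorded activity
spectrogram = rng.random((n_time, n_freq_bins))       # stand-in for heard speech

# One linear map per frequency bin: electrodes tuned to ~1000 Hz energy get large
# weights for that bin, electrodes tuned to ~2000 Hz for theirs, and so on.
decoder = Ridge(alpha=1.0).fit(neural, spectrogram)

# Given neural activity alone, predict the spectrogram of the heard speech.
reconstructed = decoder.predict(neural)
```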

The team hypothesised that hearing speech and thinking to oneself might spark some of the same neural signatures in the brain. They supposed that an algorithm trained to identify speech heard out loud might also be able to identify words that are thought.

Mind-reading

To test the idea, they recorded brain activity in another seven people undergoing epilepsy surgery, while they looked at a screen that displayed text from either the Gettysburg Address, John F. Kennedy’s inaugural address or the nursery rhyme Humpty Dumpty.

Each participant was asked to read the text aloud, read it silently in their head and then do nothing. While they read the text out loud, the team worked out which neurons were reacting to which aspects of speech and generated a personalised decoder to interpret this information. The decoder was used to create a spectrogram – a visual representation of the different frequencies of sound waves heard over time. Because each frequency corresponds to particular sounds in each word spoken, the spectrogram can be used to recreate what was said. They then applied the decoder to the brain activity that occurred while the participants read the passages silently to themselves.
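In the same toy framework as the earlier sketch, the key transfer step might look like this: fit the personalised decoder on the overt-reading block, where the true spectrogram is known, then apply it to the silent-reading block. Every variable here is a hypothetical stand-in, not the study’s code.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_time, n_electrodes, n_freq_bins = 4000, 64, 32
neural_overt = rng.standard_normal((n_time, n_electrodes))   # reading aloud
neural_silent = rng.standard_normal((n_time, n_electrodes))  # reading in one's head
spec_overt = rng.random((n_time, n_freq_bins))               # audio of the overt reading

# Train on the overt block, where brain activity can be paired with real audio...
decoder = Ridge(alpha=1.0).fit(neural_overt, spec_overt)

# ...then decode the silent block, where no sound was ever produced. The predicted
# spectrogram could be compared against the spectrograms of the passage's words
# to work out which word was being read.
spec_imagined = decoder.predict(neural_silent)
```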

Although the neural activity produced by imagined speech differs slightly from that of actual speech, the decoder was able to reconstruct which words several of the volunteers were thinking from their neural activity alone (Frontiers in Neuroengineering, doi.org/whb).

The algorithm isn’t perfect, says Stephanie Martin, who worked on the study with Pasley. “We got significant results but it’s not good enough yet to build a device.”

In practice, if the decoder is to be used by people who are unable to speak it would have to be trained on what they hear rather than their own speech. “We don’t think it would be an issue to train the decoder on heard speech because they share overlapping brain areas,” says Martin.

The team is now fine-tuning its algorithms by looking at the neural activity associated with speaking rate and different pronunciations of the same word, for example. “The bar is very high,” says Pasley. “It’s preliminary data, and we’re still working on making it better.”

The team has also turned its hand to predicting what songs a person is listening to by playing lots of Pink Floyd to volunteers, and then working out which neurons respond to which aspects of the music. “Sound is sound,” says Pasley. “It all helps us understand different aspects of how the brain processes it.”

“Ultimately, if we understand covert speech well enough, we’ll be able to create a medical prosthesis that could help someone who is paralysed, or locked in and can’t speak,” he says.

Several other researchers are also investigating ways to read the human mind. Some can tell what pictures a person is looking at, others have worked out what neural activity represents certain concepts in the brain, and one team has even produced crude reproductions of movie clips that someone is watching just by analysing their brain activity. So is it possible to put it all together to create one multisensory mind-reading device?

In theory, yes, says Martin, but it would be extraordinarily complicated. She says you would need a huge amount of data for each thing you are trying to predict. “It would be really interesting to look into. It would allow us to predict what people are doing or thinking,” she says. “But we need individual decoders that work really well before combining different senses.”

http://www.newscientist.com/article/mg22429934.000-brain-decoder-can-eavesdrop-on-your-inner-voice.html