Posts Tagged ‘neuroscience’


Brain tissue from deceased patients with Alzheimer’s has more tau protein buildup (brown spots) and fewer neurons (red spots) as compared to healthy brain tissue.

By Yasemin Saplakoglu

Alzheimer’s disease might be attacking the brain cells responsible for keeping people awake, resulting in daytime napping, according to a new study.

Excessive daytime napping might thus be considered an early symptom of Alzheimer’s disease, according to a statement from the University of California, San Francisco (UCSF).

Some previous studies suggested that such sleepiness in patients with Alzheimer’s results directly from poor nighttime sleep due to the disease, while others have suggested that sleep problems might cause the disease to progress. The new study suggests a more direct biological pathway between Alzheimer’s disease and daytime sleepiness.

In the current study, researchers studied the brains of 13 people who’d had Alzheimer’s and died, as well as the brains from seven people who had not had the disease. The researchers specifically examined three parts of the brain that are involved in keeping us awake: the locus coeruleus, the lateral hypothalamic area and the tuberomammillary nucleus. These three parts of the brain work together in a network to keep us awake during the day.

The researchers compared the number of neurons, or brain cells, in these regions in the healthy and diseased brains. They also measured the level of a telltale sign of Alzheimer’s: tau proteins. These proteins build up in the brains of patients with Alzheimer’s and are thought to slowly destroy brain cells and the connections between them.

The brains from patients who had Alzheimer’s in this study had significant levels of tau tangles in these three brain regions, compared to the brains from people without the disease. What’s more, in these three brain regions, people with Alzheimer’s had lost up to 75% of their neurons.

“It’s remarkable because it’s not just a single brain nucleus that’s degenerating, but the whole wakefulness-promoting network,” lead author Jun Oh, a research associate at UCSF, said in the statement. “This means that the brain has no way to compensate, because all of these functionally related cell types are being destroyed at the same time.”

The researchers also compared the brains from people with Alzheimer’s with tissue samples from seven people who had two other forms of dementia caused by the accumulation of tau: progressive supranuclear palsy and corticobasal disease. Results showed that despite the buildup of tau, these brains did not show damage to the neurons that promote wakefulness.

“It seems that the wakefulness-promoting network is particularly vulnerable in Alzheimer’s disease,” Oh said in the statement. “Understanding why this is the case is something we need to follow up in future research.”

Though amyloid proteins, and the plaques that they form, have been the major target in several clinical trials of potential Alzheimer’s treatments, increasing evidence suggests that tau proteins play a more direct role in promoting symptoms of the disease, according to the statement.

The new findings suggest that “we need to be much more focused on understanding the early stages of tau accumulation in these brain areas in our ongoing search for Alzheimer’s treatments,” senior author Dr. Lea Grinberg, an associate professor of neurology and pathology at the UCSF Memory and Aging Center, said in the statement.

The findings were published Monday (Aug. 12) in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.

https://www.livescience.com/alzheimers-attacks-wakefulness-neurons.html?utm_source=notification


by Bob Yirka

A team of researchers from the University of California and Stanford University has found that the tendency to see people from different racial groups as interchangeable has a neuronal basis. In their paper published in Proceedings of the National Academy of Sciences, the group describes studies they conducted with volunteers and what they found.

One often-heard phrase connected with racial profiling is “they all look the same to me,” a phrase usually perceived as racist. It implies that people of one race have difficulty discerning the facial characteristics of people of another race. In this new effort, the researchers conducted experiments to find out if this is valid—at least among one small group of young, white men.

In the first experiment, young, white male volunteers looked at photographs of human faces, some depicting black people, others white, while undergoing an fMRI scan. Afterward, the researchers found that the part of the brain involved in facial recognition activated more for white faces than it did for black faces.

In the second experiment, the same volunteers looked at photographs of faces that had been doctored to make the subjects appear more alike, regardless of skin color. The researchers report that the brains of the volunteers activated when dissimilarities were spotted, regardless of skin color, though it was more pronounced when the photo was of a white face.

In a third series of experiments, the volunteers rated how different they found faces in a series of photographs or whether they had seen a given face before. The researchers report that the volunteers had a tendency to rate the black faces as more similar to one another than the white faces. And they found it easier to tell if they had seen a particular white face before.

The researchers suggest that the results of their experiments indicate a neural basis that makes it more difficult for people to see differences between individuals of other races. They note that they did account for social contexts such as whether the volunteers had friends and/or associates of other races. They suggest that more work is required to determine if such neuronal biases can be changed based on social behavior.

Brent L. Hughes et al. Neural adaptation to faces reveals racial outgroup homogeneity effects in early perception, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1822084116

https://medicalxpress.com/news/2019-07-neuronal-alike.html


Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right) which then could be synthesized to reconstruct the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery

A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract—an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.

The new system being developed in the laboratory of Edward Chang, MD—described April 24, 2019 in Nature—demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Brief animation illustrates how patterns of brain activity from the brain’s speech centers in somatosensory cortex (top left) were first decoded into a computer simulation of a research participant’s vocal tract movements (top right), which were then translated into a synthesized version of the participant’s voice (bottom). Credit: Chang lab / UCSF Dept. of Neurosurgery. Simulated Vocal Tract Animation Credit: Speech Graphics
Virtual Vocal Tract Improves Naturalistic Speech Synthesis

The research was led by Gopala Anumanchipalli, Ph.D., a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.

From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”

In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center—patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery—to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
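To make the two-stage design concrete, here is a minimal sketch of such a decode pipeline in Python with PyTorch: one recurrent network maps neural activity to vocal-tract movement trajectories, and a second maps those movements to acoustic features. The layer sizes, channel counts, and use of bidirectional LSTMs are illustrative assumptions, not the published architecture.

```python
# Minimal sketch of a two-stage decode pipeline: neural activity -> vocal-tract
# kinematics -> acoustic features. All shapes and layer choices are illustrative.
import torch
import torch.nn as nn

class KinematicsDecoder(nn.Module):
    """Stage 1: map recorded neural features to vocal-tract movement trajectories."""
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, neural):                 # neural: (batch, time, n_electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                     # (batch, time, n_articulators)

class SpeechSynthesizer(nn.Module):
    """Stage 2: map vocal-tract movements to acoustic features (e.g. spectrogram frames)."""
    def __init__(self, n_articulators=33, n_acoustic=80, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

# Toy forward pass with random data standing in for recorded brain activity.
neural = torch.randn(1, 200, 256)              # 200 time steps of 256-channel activity
kinematics = KinematicsDecoder()(neural)
acoustics = SpeechSynthesizer()(kinematics)
print(kinematics.shape, acoustics.shape)       # (1, 200, 33) and (1, 200, 80)
```

In a sketch like this, the two stages would be trained separately and then chained, mirroring the idea that the brain encodes movements rather than sounds.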

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.

“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance

The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.


Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF

Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.

The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”

https://medicalxpress.com/news/2019-04-synthetic-speech-brain.html

By Emily Underwood

One of the thorniest debates in neuroscience is whether people can make new neurons after their brains stop developing in adolescence—a process known as neurogenesis. Now, a new study finds that even people long past middle age can make fresh brain cells, and that past studies that failed to spot these newcomers may have used flawed methods.

The work “provides clear, definitive evidence that neurogenesis persists throughout life,” says Paul Frankland, a neuroscientist at the Hospital for Sick Children in Toronto, Canada. “For me, this puts the issue to bed.”

Researchers have long hoped that neurogenesis could help treat brain disorders like depression and Alzheimer’s disease. But last year, a study in Nature reported that the process peters out by adolescence, contradicting previous work that had found newborn neurons in older people using a variety of methods. The finding was deflating for neuroscientists like Frankland, who studies adult neurogenesis in the rodent hippocampus, a brain region involved in learning and memory. It “raised questions about the relevance of our work,” he says.

But there may have been problems with some of this earlier research. Last year’s Nature study, for example, looked for new neurons in 59 samples of human brain tissue, some of which came from brain banks where samples are often immersed in the fixative paraformaldehyde for months or even years. Over time, paraformaldehyde forms bonds between the components that make up neurons, turning the cells into a gel, says neuroscientist María Llorens-Martín of the Severo Ochoa Molecular Biology Center in Madrid. This makes it difficult for fluorescent antibodies to bind to the doublecortin (DCX) protein, which many scientists consider the “gold standard” marker of immature neurons, she says.

The number of cells that test positive for DCX in brain tissue declines sharply after just 48 hours in a paraformaldehyde bath, Llorens-Martín and her colleagues report today in Nature Medicine. After 6 months, detecting new neurons “is almost impossible,” she says.

When the researchers used a shorter fixation time—24 hours—to preserve donated brain tissue from 13 deceased adults, ranging in age from 43 to 87, they found tens of thousands of DCX-positive cells in the dentate gyrus, a curled sliver of tissue within the hippocampus that encodes memories of events. Under a microscope, the neurons had hallmarks of youth, Llorens-Martín says: smooth and plump, with simple, undeveloped branches.

In the sample from the youngest donor, who died at 43, the team found roughly 42,000 immature neurons per square millimeter of brain tissue. From the youngest to oldest donors, the number of apparent new neurons decreased by 30%—a trend that fits with previous studies in humans showing that adult neurogenesis declines with age. The team also showed that people with Alzheimer’s disease had 30% fewer immature neurons than healthy donors of the same age, and the more advanced the dementia, the fewer such cells.

Some scientists remain skeptical, including the authors of last year’s Nature paper. “While this study contains valuable data, we did not find the evidence for ongoing production of new neurons in the adult human hippocampus convincing,” says Shawn Sorrells, a neuroscientist at the University of Pittsburgh in Pennsylvania who co-authored the 2018 paper. One critique hinges on the DCX stain, which Sorrells says isn’t an adequate measure of young neurons because the DCX protein is also expressed in mature cells. That suggests the “new” neurons the team found were actually present since childhood, he says. The new study also found no evidence of pools of stem cells that could supply fresh neurons, he notes. What’s more, Sorrells says two of the brain samples he and his colleagues looked at were only fixed for 5 hours, yet they still couldn’t find evidence of young neurons in the hippocampus.

Llorens-Martín says her team used multiple other proteins associated with neuronal development to confirm that the DCX-positive cells were actually young, and were “very strict” in their criteria for identifying young neurons.

Heather Cameron, a neuroscientist at the National Institute of Mental Health in Bethesda, Maryland, remains persuaded by the new work. Based on the “beauty of the data” in the new study, “I think we can all move forward pretty confidently in the knowledge that what we see in animals will be applicable in humans,” she says. “Will this settle the debate? I’m not sure. Should it? Yes.”

https://www.sciencemag.org/news/2019/03/new-neurons-life-old-people-can-still-make-fresh-brain-cells-study-finds?utm_campaign=news_daily_2019-03-25&et_rid=17036503&et_cid=2734364

by David Nield

How exactly do our brains sort between the five taste groups: sweet, sour, salty, bitter and umami? We’ve now got a much better idea, thanks to research that has pinned down where in the brain this taste processing happens.

Step forward: the insular cortex. Already thought to be involved in everything from motor control to social empathy, it can now add flavour identification to its list of jobs.

It’s an area of the brain scientists have previously suspected could be responsible for sorting tastes, and which has been linked to taste in rodents, but this new study is much more precise in figuring out the role it plays in decoding what our tongues are telling us.

“We have known that tastes activate the human brain for some time, but not where primary taste types such as sweet, sour, salty, and bitter are distinguished,” says one of the team, Adam Anderson from Cornell University in New York.

“By using some new techniques that analyse fine-grained activity patterns, we found a specific portion of the insular cortex – an older cortex in the brain hidden behind the neocortex – represents distinct tastes.”

Anderson and his team used detailed fMRI scans of 20 adults as well as a new statistical model to dig deeper than previous studies into the link between the insular cortex and taste. This helped separate the taste response from other related responses – like the disgust we might feel when eating something sour or bitter.

Part of the problem in pinning down the taste-testing parts of the brain is that multiple regions of neurons get busy whenever we’re eating something. However, this study helps to cut through some of that noise.

In particular, it seems that different tastes don’t necessarily affect different parts of the insular cortex, but rather prompt different patterns of activity. Those patterns help the brain determine what it’s tasting.
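Reading a taste out of a spatial pattern of activity, rather than out of the overall level of activation in one spot, is the kind of analysis a multivariate “pattern classifier” performs. Below is a minimal sketch of that idea in Python with scikit-learn, using synthetic data in place of real fMRI responses; the voxel counts, noise levels, and choice of a linear classifier are illustrative assumptions, not the study’s actual statistical model.

```python
# Minimal sketch of pattern-based decoding: classifying taste from the spatial
# pattern of voxel responses rather than from overall activation level.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials_per_taste, n_voxels = 40, 200
tastes = ["sweet", "sour", "salty", "bitter", "umami"]

# Give each taste the same average activation but a distinct spatial pattern,
# mimicking the finding that patterns, not locations, carry the information.
patterns = rng.normal(0, 1, (len(tastes), n_voxels))
X = np.vstack([p + rng.normal(0, 2, (n_trials_per_taste, n_voxels)) for p in patterns])
y = np.repeat(tastes, n_trials_per_taste)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f} (chance = 0.20)")
```

If the classifier decodes taste well above the 20 percent chance level, the information must be present in the fine-grained pattern, even when the same patch of cortex responds to every taste.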

For example, one particular section of the insular cortex was found to light up – in terms of neural activity – whenever something sweet was tasted. It’s a literal sweet spot, in other words, though the study also showed that different brains are wired differently.

“While we identified a potential sweet spot, its precise location differed across people and this same spot responded to other tastes, but with distinct patterns of activity,” says Anderson.

“To know what people are tasting, we have to take into account not only where in the insula is stimulated, but also how.”

The work follows on from previous research showing just how big a role the brain plays in perceiving taste. It used to be thought that receptors on the tongue did most of the taste testing, but now it seems the brain is largely in charge of the process.

That prior study showed how switching certain brain cells on and off in mice was enough to prevent them from distinguishing between sweet and bitter. The conclusion is that while the tongue does identify certain chemicals, it’s the brain that interprets them.

The new research adds even more insight into what’s going on in the brain in humans when we need to work out what we’re tasting – and shows just how important a job the insular cortex is doing.

“The insular cortex represents experiences from inside our bodies,” says Anderson. “So taste is a bit like perceiving our own bodies, which is very different from other external senses such as sight, touch, hearing or smell.”

The research has been published in Nature Communications.

https://www.sciencealert.com/now-we-know-the-part-of-the-brain-that-tells-us-what-we-re-tasting

by Parashkev Nachev

“Why can’t you just relax into it?” is a question many of us have asked in frustration with ourselves or others – be it on the dance floor, the sporting field or in rather more private circumstances. The task typically requires us to respond spontaneously to external events, without any deliberation whatsoever. It ought to be easy – all you have to do is let go – yet it can be infuriatingly difficult.

“Stop thinking about it!” is the standard remedial advice, although cancelling thought with thought is something of a paradox. The retort, “I am trying!”, is equally puzzling, for deliberate intent is precisely what we are here struggling to avoid. So what is this act of choosing not to choose, of consciously relinquishing control over our actions? Our new study, published in Communications Biology, has finally provided insights into how this capacity is expressed in the brain.

Astonishingly, this fundamental human phenomenon has no name. It might have escaped academic recognition entirely had the German philosopher Friedrich Nietzsche not given it a brilliant gloss in his first book The Birth of Tragedy, itself a paradoxical work of philosophy in tacitly encouraging the reader to stop reading and get a drink instead. Whereas other thinkers saw culture on a single continuum, evolving into ever greater refinement, order and rationality, Nietzsche saw it as distributed across two radically different but equally important planes.

Perpendicular to the conventional “Apolline” dimension of culture, he introduced the “Dionysiac”: chaotic, spontaneous, vigorous and careless of the austere demands of rationality. Neither aspect was held to be superior, each may be done badly or well, and both are needed for a civilisation to find its most profound creative expression. Every Batman needs a Joker, he might have said, had he lived in a more comical age.

Of course, Nietzsche was not the first to observe that human beings sometimes behave with wanton abandon. His innovation consisted in realising it is a constitutional characteristic we could and should develop. And as with any behavioural characteristic, the facility to acquire it will vary from one person to another.

Seeing the light

As Dionysus and neuroscientists are mostly strangers, it should come as no surprise that the capacity for “meta-volition” – to give it a name that captures the notion of choosing not to choose one’s actions – has until now escaped experimental study. To find out how our brains allow us to give up control and explain why some of us are better at it than others, my colleagues and I wanted to develop a behavioural test and examine the patterns of brain activity that go with lesser or greater ability.

Most tests in behavioural neuroscience pit conscious, deliberate, complex actions against their opposites, measuring the power to suppress them. A classic example is the anti-saccade task, which purportedly measures “cognitive control”. Participants are instructed not to look towards the light when they see a brief flash in the visual periphery, but instead to the opposite side. That’s hard to do because looking towards the light is the natural inclination. People who are better at this are said to have greater cognitive control.

To measure how good people are at relinquishing control, we cannot simply flip a task around. If people are asked to look into the light, will and instinct are placed in perfect agreement. To put the two in opposition, we must make the automatic task unconscious so that volition could only be a hindrance.


White matter map of the brain (ray traced rendering), with the area correlated with spontaneity in red. Credit: Parashkev Nachev

It turns out that this is easy to do by flashing two lights on opposite sides of the visual periphery nearly simultaneously, and asking the subject to orient as fast as possible to the one they see first. If one flash comes a few dozen milliseconds before the other, people typically develop an automatic bias toward the first flash. You need at least double that gap to reach the threshold for consciously detecting which one came first. Thinking about which came first can only impair your performance, because your instinct operates well beneath the threshold at which conscious deliberation gets a foothold.

Amazingly for such a simple task, people vary dramatically in their ability. Some – the Dionysiacs – effortlessly relax into allowing themselves to be guided by the first light, requiring no more than a few milliseconds between the flashes. Others – the Apollines – cannot let go, even when the flashes are many times further apart. Since trying harder does not help, the differences are not a matter of effort but appear to be part of who we are.
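How might such an individual threshold be quantified? A minimal sketch in Python follows: it simulates an observer responding to pairs of flashes at varying onset asynchronies (SOAs) and fits a psychometric curve to estimate the asynchrony needed for reliable performance. The simulated observer, trial counts, and 75%-correct criterion are illustrative assumptions, not the study’s procedure.

```python
# Minimal sketch of estimating an asynchrony threshold from a temporal-order task:
# present two flashes separated by a varying SOA, record whether the observer
# orients to the earlier one, and fit a psychometric function.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(1)
soas_ms = np.repeat([5, 10, 20, 40, 80, 160], 50)   # onset asynchronies, 50 trials each

# Simulated observer: probability of orienting to the first flash rises with SOA.
true_sigma = 30.0
responses = rng.random(soas_ms.size) < norm.cdf(soas_ms / true_sigma)

def psychometric(soa, sigma):
    """Probability correct as a function of SOA; 0.5 at zero asynchrony."""
    return norm.cdf(soa / sigma)

soa_levels = np.unique(soas_ms)
observed = [responses[soas_ms == s].mean() for s in soa_levels]
(sigma_hat,), _ = curve_fit(psychometric, soa_levels, observed, p0=[20.0])

# Threshold: the SOA at which the observer is correct on 75% of trials.
threshold = sigma_hat * norm.ppf(0.75)
print(f"estimated 75%-correct threshold ≈ {threshold:.1f} ms")
```

A Dionysiac observer would yield a threshold of only a few milliseconds; an Apolline one, many times that.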

We used magnetic resonance imaging to investigate the brains of people performing the task, focusing on white matter – the brain’s wiring. A striking picture emerged. Extensive sections of the wiring of the right prefrontal lobe, a region heavily implicated in complex decision making, were stronger in those who were worse at the task: the Apollines. The more developed the neural substrates of volition, it seems, the harder it is to switch them off.

By contrast, no part of the Dionysiac brain showed evidence of stronger wiring. Suppressing volition appears to depend not on a better-developed “meta-volitional centre” but on the interplay between spontaneous and deliberate action. We can think of it as two coalitions of brain cells in competition, with the outcome dependent on the relative strength of the teams, not the qualities of any umpire.

The competitive brain

The results demonstrate how the brain operates by competition at least as much as by cooperation. It may fail at a task not because it does not have the power, but because another, more dominant power stands in opposition. Our decisions reflect the outcomes of battles between warring factions that differ in their characteristics and evolutionary lineage, battles we can do little to influence because we are ourselves their products.

People also differ widely in their qualities, including spontaneity, not because evolution has not yet arrived at an optimum, but because it seeks to diversify the field as far as possible. That’s why it creates individuals tuned to respond to their environment in very different ways. The task of evolution is less to optimise a species for the present than to prepare it for a multiplicity of futures unknown.

That our lives are now dominated by a rational, Apolline order does not mean we shall not one day descend into an instinctual, Dionysiac chaos. Our brains are ready for it – our culture should be too.

https://medicalxpress.com/news/2019-03-neuroscience-nietzsche-people-wired-spontaneous.html

Clumps of harmful proteins that interfere with brain functions have been partially cleared in mice using nothing but light and sound.

Research led by MIT has found strobe lights and a low pitched buzz can be used to recreate brain waves lost in the disease, which in turn remove plaque and improve cognitive function in mice engineered to display Alzheimer’s-like behaviour.

It’s a little like using light and sound to trigger their own brain waves to help fight the disease.

This technique hasn’t been clinically trialled in humans as yet, so it’s too soon to get excited – brain waves are known to work differently in humans and mice.

But, if replicated, these early results hint at a possible cheap and drug-free way to treat the common form of dementia.

So how does it work?

Building on a previous study that showed flashing light 40 times a second into the eyes of engineered mice treated their version of Alzheimer’s disease, the researchers added sound at a similar frequency and found it dramatically improved the results.

“When we combine visual and auditory stimulation for a week, we see the engagement of the prefrontal cortex and a very dramatic reduction of amyloid,” says Li-Huei Tsai, one of the researchers from MIT’s Picower Institute for Learning and Memory.

It’s not the first study to investigate the role sound can play in clearing the brain of the tangles and clumps of tau and amyloid proteins at least partially responsible for the disease.

Previous studies showed bursts of ultrasound make blood vessels leaky enough to allow powerful treatments to slip into the brain, while also encouraging the nervous system’s waste-removal experts, microglia, to pick up the pace.

Several years ago, Tsai discovered light flickering at a frequency of about 40 flashes a second had similar benefits in mice engineered to build up amyloid in their brain’s nerve cells.

“The result was so mind-boggling and so robust, it took a while for the idea to sink in, but we knew we needed to work out a way of trying out the same thing in humans,” Tsai told Helen Thomson at Nature at the time.

The only problem was this effect was confined to visual parts of the brain, missing key areas that contribute to the formation and retrieval of memory.

While the method’s practical applications looked a little limited, the results pointed to a way oscillations could help the brain recover from the grip of Alzheimer’s disease.

As our brain’s neurons transmit signals, they also generate oscillating waves of electrical activity that help keep remote regions in sync – so-called ‘brain waves’.

One such set of oscillations is known as gamma frequencies, rippling across the brain at around 30 to 90 waves per second. These brain waves are most active when we’re paying close attention, searching our memories in order to make sense of what’s going on.
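For readers curious what isolating “gamma” looks like in practice, here is a minimal sketch in Python of pulling the 30–90 Hz band out of a recorded signal with a band-pass filter. The synthetic signal and the filter settings are illustrative assumptions, not the study’s analysis pipeline.

```python
# Minimal sketch: band-pass filter a signal to the gamma range (~30-90 Hz)
# and measure the power that remains.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000                                    # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic "recording": a 40 Hz gamma rhythm embedded in a slower rhythm plus noise.
recording = (0.5 * np.sin(2 * np.pi * 40 * t)
             + np.sin(2 * np.pi * 10 * t)
             + rng.normal(0, 1, t.size))

b, a = butter(4, [30, 90], btype="bandpass", fs=fs)   # 4th-order Butterworth filter
gamma = filtfilt(b, a, recording)                     # zero-phase filtering

print(f"mean gamma-band power: {np.mean(gamma ** 2):.3f}")
```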

Tsai’s previous study had suggested these gamma waves are impeded in individuals with Alzheimer’s, and might play a pivotal role in the pathology itself.

Light is just one way to coax parts of the brain into humming in the key of gamma. Sound can do the same in other areas.

Instead of the high-pitched scream of ultrasound, Tsai used a much lower droning noise of just 40 Hertz, a sound only just high enough for humans to hear.
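A stimulus like that is easy to generate. Below is a minimal sketch in Python that writes a 40 Hz tone to a WAV file; the duration, sample rate, and amplitude are illustrative assumptions rather than the parameters used in the study.

```python
# Minimal sketch: synthesize a 40 Hz tone and save it as a WAV file.
import numpy as np
from scipy.io import wavfile

fs = 44100                                   # samples per second
duration_s = 5.0
t = np.arange(0, duration_s, 1 / fs)

tone = 0.5 * np.sin(2 * np.pi * 40 * t)      # 40 Hz sine, near the lower limit of human hearing
wavfile.write("tone_40hz.wav", fs, (tone * 32767).astype(np.int16))
```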

Exposing their mouse subjects to just one hour of this monotonous buzz every day for a week led to a significant drop in the amount of amyloid build-up in the auditory regions, while also stimulating those microglial cells and blood vessels.

“What we have demonstrated here is that we can use a totally different sensory modality to induce gamma oscillations in the brain,” says Tsai.

As an added bonus, it also helped clear the nearby hippocampus – an important section associated with memory.

The effects weren’t just evident in the test subjects’ brain chemistry. Functionally, mice exposed to the treatment performed better in a range of cognitive tasks.

Adding the light therapy from the previous study saw an even more dramatic effect, clearing plaques in a number of areas across the brain, including in the prefrontal cortex. Those trash-clearing microglia also went to town.

“These microglia just pile on top of one another around the plaques,” says Tsai.

Discovering new mechanisms in the way nervous systems clear waste and synchronise activity is a huge step forward in the development of treatments for all kinds of neurological disorders.

Translating discoveries like this to human brains will take more work, especially when there are potential contrasts in how gamma waves appear in mice and human Alzheimer’s brains.

So far, early safety testing suggests the process has no clear side effects.

This research was published in Cell.

https://www.sciencealert.com/astonishing-new-study-treats-alzheimer-s-in-mice-with-a-light-and-sound-show