Poor Sleep Linked with Future Amyloid-β Buildup

by Abby Olena

There’s evidence in people and animals that short-term sleep deprivation can change the levels of amyloid-β, a peptide that can accumulate in the aging brain and cause Alzheimer’s disease. Scientists now show that long-term consequences may also result from sustained poor sleep. In a study published September 3 in Current Biology, researchers found that healthy individuals with lower-quality sleep were more likely to have amyloid-β accumulation in the brain years later. The study could not say whether poor sleep caused amyloid-β accumulation or vice versa, but the authors say that sleep could be an indicator of present and future amyloid-β levels.

“Traditionally, sleep disruptions have been accepted as a symptom of Alzheimer’s disease,” says Ksenia Kastanenka, a neuroscientist at Massachusetts General Hospital who was not involved in the work. Her group showed in 2017 that improving sleep in a mouse model of Alzheimer’s disease, in which the animals’ slow wave sleep is disrupted as it usually is in people with the disease, halted disease progression.

Collectively, the results from these studies and others raise the possibility that “sleep rhythm disruptions are not an artifact of disease progression, but actually are active contributors, if not a cause,” she says, hinting at the prospect of using these sleep measures as a biomarker for Alzheimer’s disease.

As a graduate student at the University of California, Berkeley, Joseph Winer, who is now a postdoc at Stanford University, and his colleagues were interested in whether sleep could predict how the brain changes over time. They collaborated with the team behind the Berkeley Aging Cohort Study, which includes a group of 32 cognitively healthy adults averaging about 75 years of age. The participants took part in a sleep study, then had periodic cognitive assessments and between two and five positron emission tomography (PET) scans to check for the presence of amyloid-β in their brains over an average of about four years after the sleep study.

At the baseline PET scan, which happened within six months of the sleep study, the researchers found that 20 of the 32 participants already had some amyloid-β accumulation, which was not unexpected given their average age. They also showed that slow wave sleep, an indicator of depth of sleep, and sleep efficiency, the proportion of time in bed spent actually sleeping, were both predictive of the rate of amyloid change several years later. In other words, people with lower levels of slow wave sleep and sleep efficiency were more likely to have faster amyloid buildup.
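The two sleep measures described above are straightforward to compute from a scored night of sleep. A minimal sketch, assuming a hypothetical hypnogram that labels each 30-second epoch of a night in bed ("W" for wake, "N3" for slow wave sleep; the labels and numbers are invented for illustration, not the study's data):

```python
def sleep_measures(hypnogram):
    """Return (sleep efficiency, slow wave sleep fraction) from epoch labels.

    Sleep efficiency = time asleep / time in bed.
    Slow wave fraction = share of sleep spent in stage N3.
    """
    asleep = [e for e in hypnogram if e != "W"]
    efficiency = len(asleep) / len(hypnogram)
    slow_wave = sum(1 for e in asleep if e == "N3") / len(asleep)
    return efficiency, slow_wave

# Toy night: 8 hours in bed = 960 thirty-second epochs
night = ["W"] * 96 + ["N3"] * 192 + ["N2"] * 672
eff, sws = sleep_measures(night)
print(round(eff, 2), round(sws, 2))  # prints: 0.9 0.22
```

Lower values on either measure, in the study's framing, predicted a faster rate of amyloid change on later PET scans.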

The subjects all remained cognitively healthy over the duration of the study, says Winer. “We do expect that they’re at higher risk for developing Alzheimer’s in their lifetime because of the amyloid plaque.”

The strengths of the study include the well-characterized participants with detailed sleep assessments, as well as cognitive testing and longitudinal amyloid PET imaging, says Brendan Lucey, a sleep neurologist at Washington University in St. Louis who did not participate in the work.

There are still open questions about the link between sleep and amyloid deposition over time. “Amyloid accumulation on PET increases at different rates in amyloid-negative and amyloid-positive individuals, and even within amyloid-positive individuals,” Lucey explains. “Without adjusting for participants’ starting amyloid [levels], we don’t know if some participants would have been more likely to have increased amyloid compared to others, independent of sleep.”

“It is very hard to untangle this question of baselines,” acknowledges Winer. Because the sleep measures the team identified in the study are related to amyloid levels, to actually tease apart the effect of sleep quality on amyloid deposition and vice versa, it’d be necessary to study people starting as early as their fifties, when they’re much less likely to have amyloid accumulation, he says.

This study is “a great start,” David Holtzman, a neurologist and collaborator of Lucey at Washington University in St. Louis who did not participate in the work, tells The Scientist. In addition to controlling for the amount of amyloid deposition that is present in a subject’s brain at the beginning of the study, it would be important to see if the findings bear out in larger numbers of people and what role genetic factors play.

“The most important question down the road is to test the idea in some sort of a treatment paradigm,” Holtzman adds. “You can do something to improve the quality of sleep or increase slow wave sleep, and then determine if it actually slows down the onset of Alzheimer’s disease clinically.”

J.R. Winer et al., “Sleep disturbance forecasts β-amyloid accumulation across subsequent years,” Current Biology, doi:10.1016/j.cub.2020.08.017, 2020.


Researchers implant a memory into a bird’s brain


Animals learn by imitating behaviors, such as when a baby mimics her mother’s speaking voice or a young male zebra finch copies the mating song of an older male tutor, often his father. In a study published today in Science, researchers identified the neural circuit that a finch uses to learn the duration of the syllables of a song and then manipulated this pathway with optogenetics to create a false memory that juvenile birds used to develop their courtship song.

“In order to learn from observation, you have to create a memory of someone doing something right and then use this sensory information to guide your motor system to learn to perform the behavior. We really don’t know where and how these memories are formed,” says Dina Lipkind, a biologist at York College who did not participate in the study. The authors “addressed the first step of the process, which is how you form the memory that will later guide [you] towards performing this behavior.”

“Our original goals were actually much more modest,” says Todd Roberts, a neuroscientist at UT Southwestern Medical Center. Initially, Wenchan Zhao, a graduate student in his lab, set out to test whether disrupting neural activity while a young finch interacted with a tutor could block the bird’s ability to form a memory of the interchange. She used light to manipulate genetically engineered, light-sensitive cells in a brain circuit previously implicated in song learning in juvenile birds.

Zhao turned the cells on by shining a light into the birds’ brains while they spent time with their tutors and, as a control experiment, when the birds were alone. Then she noticed that the songs that the so-called control birds developed were unusual—different from the songs of birds that had never met a tutor but also unlike the songs of those that interacted with an older bird.

Once Zhao and her colleagues picked up on the unusual songs, they decided to “test whether or not the activity in this circuit would be sufficient to implant memories,” says Roberts.

The researchers stimulated birds’ neural circuits with sessions of 50- or 300-millisecond optogenetic pulses over five days during the period when they would typically be interacting with a tutor, but without an adult male bird present. When these finches grew up, they sang adult courtship songs that corresponded to the duration of light they’d received: those that got the short pulses sang songs with sounds lasting about 50 milliseconds, while the ones that received the extended pulses held their notes longer. Some song features, including pitch and how noisy harmonic syllables were in the song, didn’t seem to be affected by the optogenetic manipulation. Another measure, entropy, which approximates the amount of information carried in the communication, was indistinguishable between the songs of normally tutored birds and those of birds that received the 50-millisecond pulses, but it was higher in the songs of tutored birds than in those of either isolated birds or birds that received the 300-millisecond pulses.
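A common way to quantify the entropy of a sound in birdsong analysis is the entropy of its normalized power spectrum: a pure tone concentrates power in one frequency bin and scores near zero, while noise spreads power across bins and scores high. This sketch uses Shannon spectral entropy as an illustrative proxy, not necessarily the paper's exact metric:

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy (bits) of the normalized power spectrum.
    Pure tones score near zero; white noise scores high."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    p = power / power.sum()
    p = p[p > 0]  # skip empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

fs = 8000                          # sampling rate (Hz)
t = np.arange(fs) / fs             # one second of signal
tone = np.sin(2 * np.pi * 440 * t)                  # pure 440 Hz tone
noise = np.random.default_rng(0).normal(size=fs)    # white noise

assert spectral_entropy(tone) < spectral_entropy(noise)
```

By this kind of measure, a richer, more structured song carries a different entropy signature than either silence-reared birds' songs or heavily manipulated ones.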

While the manipulation of the circuit affected the duration of the sounds in the finches’ songs, other elements of singing behavior—including the timeline of vocal development, how frequently the birds practiced, and in what social contexts they eventually used the songs—were similar to those of juveniles that had learned from an adult bird.

The researchers then determined that when the birds received light stimulation at the same time as they interacted with a singing tutor, their adult songs were more like those of birds that had only received light stimulation, indicating that optogenetic stimulation can supplant tutoring.

When the team lesioned the circuit before young birds met their tutors, the birds made no attempt to imitate the adult courtship songs. But if the juveniles were given a chance to interact with a tutor before the circuit was damaged, they had no problem learning the song. This finding points to an essential role for the pathway in forming the initial memory of the timing of vocalizations, but not in storing that memory long-term so that it can be referenced to guide song formation.

“What we were able to implant was information about the duration of syllables that the birds want to attempt to learn how to sing,” Roberts tells The Scientist. But there are many more characteristics birds have to attend to when they’re learning a song, including pitch and how to put the syllables in the correct order, he says. The next steps are to identify the circuits that are carrying other types of information and to investigate the mechanisms for encoding these memories and where in the brain they’re stored.

Sarah London, a neuroscientist at the University of Chicago who did not participate in the study, agrees that the strategies used here could serve as a template to tease apart where other characteristics of learned song come from. But more generally, this work in songbirds connects to the bigger picture of our understanding of learning and memory, she says.

Song learning “is a complicated behavior that requires multiple brain areas coordinating their functions over long stretches of development. The brain is changing anyway, and then on top of that the behavior’s changing in the brain,” she explains. Studying the development of songs in zebra finches can give insight into “how maturing neural circuits are influenced by the environment,” both the brain’s internal environment and the external, social environment, she adds. “This is a really unique opportunity, not just for song, not just for language, but for learning in a little larger context—of kids trying to understand and adopt behavioral patterns appropriate to their time and place.”

W. Zhao et al., “Inception of memories that guide vocal learning in the songbird,” Science, doi:10.1126/science.aaw4226, 2019.


New lung cell with role in cystic fibrosis discovered

Ionocytes (orange) extend through neighboring epithelial cells (nuclei, cyan) to the surface of the respiratory epithelial lining. This newly identified cell type expresses high levels of CFTR, a gene that is associated with cystic fibrosis when mutated.


Two independent research teams have used single-cell RNA sequencing to generate detailed molecular atlases of mouse and human airway cells. The findings, reported in two studies today (August 1) in Nature, reveal the gene-expression patterns of thousands of lung cells, as well as the existence of a previously unknown cell type that expresses high levels of the gene mutated in cystic fibrosis, the cystic fibrosis transmembrane conductance regulator (CFTR).

“These papers are extremely exciting,” says Amy Ryan, a lung biologist at the University of Southern California who was not involved in either study. “They’ve interrogated the cellular composition and the cellular hierarchy of the airways by using a single-cell RNA-sequencing technique. That kind of information is going to have a significant impact on advancing the research that we can do, and hopefully the derivation of new therapeutic approaches for any number of airway diseases.”

Jayaraj Rajagopal, a pulmonary physician at Massachusetts General Hospital and Harvard University and coauthor of one of the studies, had been studying lung regeneration and wanted to use single-cell sequencing to look at differences in the lungs’ stem-cell populations. He and his colleagues teamed up with Aviv Regev, a computational biologist at the Broad Institute of MIT and Harvard University, and together, the two groups characterized the transcriptomes of thousands of epithelial cells from the adult mouse trachea.

Rajagopal, Regev, and colleagues uncovered previously unknown differences in gene expression in several groups of airway cells; identified novel structures in the lung; and found new paths of cellular differentiation. They also described several new cell types, including one that the team has named the pulmonary ionocyte, after salt-regulating cells in fish and amphibian skin. These lung cells express many of the same genes as fish and amphibian ionocytes, the team found, including a gene coding for the transcription factor Foxi1, which regulates genes that play a role in ion transport.

The team also showed that pulmonary ionocytes express CFTR at high levels and are in fact the primary source of the CFTR protein—which helps regulate fluid transport and the consistency of mucus—in both mouse and human lungs, suggesting that the cells might play a role in cystic fibrosis.

“So much that we found rewrites the way we think about lung biology and lung cells,” says Rajagopal. “I think the entire community of pulmonologists and lung biologists will have to take a step back and think about their problems with respect to all these new cell types.”

For the other study, Aron Jaffe, a biologist at Novartis who studies how different airway cell types are made, combined forces with Harvard systems biologist Allon Klein and his team. Klein’s group had previously developed a single-cell RNA-sequencing method that Jaffe describes as “the perfect technology to take a big picture view and really define the full repertoire of epithelial cell types in the airway.”

Jaffe, Klein, and colleagues sequenced RNA from thousands of single human bronchial epithelial and mouse tracheal epithelial cells. The atlas generated by their sequencing analysis revealed pulmonary ionocytes, as well as new gene-expression patterns in familiar cells. The team examined the expression of CFTR in human and mouse ionocytes in order to better understand the possible role for the cells in cystic fibrosis. Consistent with the findings of the other study, the researchers showed that pulmonary ionocytes make the majority of CFTR protein in the airways of humans and mice.
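The kind of analysis both teams describe—flagging a rare cell population by a marker gene in a single-cell expression matrix, then asking what share of a gene's total output that population accounts for—can be sketched in a toy example. The gene names come from the studies; the counts below are invented for illustration:

```python
import numpy as np

# Toy single-cell counts: rows = cells, columns = genes.
genes = ["FOXI1", "CFTR", "SCGB1A1"]
counts = np.array([
    [0,  1, 8],   # secretory-like cell: little CFTR
    [0,  0, 9],
    [0,  2, 7],
    [6, 30, 0],   # rare ionocyte-like cell: FOXI1+ and CFTR-rich
])

# Gate on the marker gene, then compute the population's CFTR share.
ionocyte_like = counts[:, genes.index("FOXI1")] >= 3
cftr = counts[:, genes.index("CFTR")]
share = cftr[ionocyte_like].sum() / cftr.sum()
print(f"{ionocyte_like.sum()} ionocyte-like cell(s), CFTR share {share:.0%}")
# prints: 1 ionocyte-like cell(s), CFTR share 91%
```

The surprise in both papers follows this pattern: a population making up a tiny fraction of cells nonetheless accounts for most of the airway's CFTR output.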

“Finding this new rare cell type that accounts for the majority of CFTR activity in the airway epithelium was really the biggest surprise,” Jaffe tells The Scientist. “CFTR has been studied for a long time, and it was thought that the gene was broadly expressed in many cells in the airway. It turns out that the epithelium is more complicated than previously appreciated.”

These studies are “very exciting work [and] a wonderful example of how new technologies that have come online in the last few years—in this case single-cell RNA sequencing—have made a very dramatic advance in our understanding of aspects of biology,” says Ann Harris, a geneticist at Case Western Reserve University who did not participate in either study.

In terms of future directions, the authors “have shown that transcription factor [Foxi1] is central to the transcriptional program of these ionocytes,” says Harris. One of the next questions is, “does it directly interact with the CFTR gene or is it working through other transcription factors or other proteins that regulate CFTR gene expression?”

According to Jennifer Alexander-Brett, a pulmonary physician and researcher at Washington University School of Medicine in St. Louis who was not involved in the studies, the possibility that a rare cell type could be playing a part in regulating airway physiology is “captivating.”

Apart from investigating the potential role for ionocytes in lung function, Alexander-Brett says that researchers can likely make broad use of the data from the studies—particularly details on the expression of genes coding for transcription factors and cell-surface markers. “One area that we really struggle with in airway biology . . . is [that] we just don’t have good markers” to differentiate cell types, she explains. But these papers are “very comprehensive. There’s a ton of data here.”

D.T. Montoro et al., “A revised airway epithelial hierarchy includes CFTR-expressing ionocytes,” Nature, doi:10.1038/s41586-018-0393-7, 2018.

L.W. Plasschaert et al., “A single-cell atlas of the airway epithelium reveals the CFTR-rich pulmonary ionocyte,” Nature, doi:10.1038/s41586-018-0394-6, 2018.


Ultrasound Fires Up the Auditory Cortex—Even Though Animals Can’t Hear It

Ultrasound activates auditory pathways in the rodent brain (red arrows) regardless of where in the brain the ultrasound-generating transducer is placed.

by Abby Olena

Activating or suppressing neuronal activity with ultrasound has shown promise both in the lab and the clinic, based on the ability to focus noninvasive, high-frequency sound waves on specific brain areas. But in mice and guinea pigs, it appears that the technique has effects that scientists didn’t expect. In two studies published today (May 24) in Neuron, researchers demonstrate that ultrasound activates the brains of rodents by stimulating an auditory response—not, as previously presumed, only the specific neurons where the ultrasound is focused.

“These papers are a very good warning to folks who are trying to use ultrasound as a tool to manipulate brain activity,” says Raag Airan, a neuroradiologist and researcher at Stanford University Medical Center who did not participate in either study, but coauthored an accompanying commentary. “In doing these experiments going forward [the hearing component] is something that every single experimenter is going to have to think about and control,” he adds.

Over the past decade, researchers have used ultrasound to elicit electrical responses from cells in culture and motor and sensory responses from the brains of rodents and primates. Clinicians have also used so-called ultrasonic neuromodulation to treat movement disorders. But the mechanism by which high-frequency sound waves exert their influence is not well understood.

The University of Minnesota’s Hubert Lim studies ways to restore hearing, but many of the strategies that his group uses are invasive, such as cochlear implants, which require surgery to insert a device inside the ear. He says that he and his colleagues were excited by the prospect of using noninvasive and precise ultrasound to activate the parts of the brain responsible for hearing.

Lim’s team started by stimulating the brains of guinea pigs with audible noise or with pulsed ultrasound directly over the auditory cortex. They were surprised to observe similar neuronal responses to the two different stimuli because ultrasound is outside the spectrum that the guinea pigs—and humans—can hear. The researchers also found that the rodents’ neurons showed comparable electrical activity in the auditory cortex regardless of where in the brain the researchers directed the ultrasound. This raised the question: are the animals’ brains responding directly to the ultrasound or to responses of the auditory system?

When the authors cut the guinea pigs’ auditory nerves or removed their cochlear fluid, the animals stopped responding both to the ultrasound and to audible noise. Lim’s team concluded that ultrasound must move through brain tissue and vibrate the cochlear fluid; this vibration then triggers auditory signaling and indirectly activates the auditory cortex and other brain regions, rather than ultrasound acting directly on the neurons where it is focused.

“I am actually very hopeful that ultrasound can be a powerful tool that can not only modulate but also treat different neurologic and psychiatric disorders, and that it can achieve a noninvasive yet localized activation,” says Lim. “But what we’re trying to show in this paper is that there are many confounding effects that are actually happening with ultrasound, and we have to remove those effects to really see how it’s activating the brain.”

A coauthor on the companion study, Mikhail Shapiro of Caltech, says that previous work, which showed that applying ultrasound to the brains of mice and rats can elicit electrical activity and limb movement, left him and his colleagues curious about how the technique works. To determine where and when neural activation happens, they applied ultrasonic pulses to the brains of transgenic mice whose neurons light up when stimulated. As with guinea pigs, ultrasound is inaudible to mice.

“To our surprise, we found that the main activation pattern that we were seeing was not in the region where we were applying the ultrasound directly, but actually in the auditory areas of the brain, those responsible for processing information about sound,” Shapiro tells The Scientist.

Consistent with the findings of Lim and colleagues, Shapiro and his coauthors determined that the mouse brains lit up across the cortex, starting from the auditory cortex. And as in the guinea pigs, the mouse neurons responded similarly to ultrasound and audible sounds. The researchers also showed that both ultrasound and audible noise elicited motor movements that decreased when they used chemicals to deafen the mice.

“We’re not trying to imply that [the effects of ultrasound observed in previous studies are] due to this auditory side effect,” says Shapiro. “We’re very optimistic that now that we know that it’s there, we will be able to design ways to get around it and still be able to use this technology scientifically.”

Shy Shoham, a neuroscientist and biomedical engineer at New York University Langone Medical Center who did not participate in the studies, tells The Scientist that these papers highlight how careful researchers must be in the future when using ultrasound to modify neuronal function. “In the field of neural stimulation in general, we should always be very concerned about off-target effects,” he says. We must “delineate what is real and what isn’t.”

“The big take home point here is that we need to take care of the auditory effects,” says Kim Butts Pauly, who studies ultrasound neuromodulation at Stanford University Medical Center and who coauthored the accompanying commentary with Airan. “There’s been very compelling data from other studies that ultrasound can stimulate the brain and change recordings from the brain that are completely separate from any auditory effects. As we get rid of the auditory effects, then the more subtle effects may become apparent.”

H. Guo et al., “Ultrasound produces extensive brain activation via a cochlear pathway,” Neuron, doi:10.1016/j.neuron.2018.04.036, 2018.

T. Sato et al., “Ultrasonic neuromodulation causes widespread cortical activation via an indirect auditory mechanism,” Neuron, doi:10.1016/j.neuron.2018.05.009, 2018.