Scientists can now induce hallucinations in mice with lasers


A mouse exploring one of the custom hologram generators used in the experiments at Stanford. By stimulating particular neurons, scientists were able to make engineered mice see visual patterns that weren’t there.

By Carl Zimmer

In a laboratory at the Stanford University School of Medicine, the mice are seeing things. And it’s not because they’ve been given drugs.

With new laser technology, scientists have triggered specific hallucinations in mice by switching on a few neurons with beams of light. The researchers reported the results on Thursday in the journal Science.

The technique promises to provide clues to how the billions of neurons in the brain make sense of the environment. Eventually the research also may lead to new treatments for psychological disorders, including uncontrollable hallucinations.

“This is spectacular — this is the dream,” said Lindsey Glickfeld, a neuroscientist at Duke University, who was not involved in the new study.

In the early 2000s, Dr. Karl Deisseroth, a psychiatrist and neuroscientist at Stanford, and other scientists engineered neurons in the brains of living mice to switch on when exposed to a flash of light. The technique is known as optogenetics.

In the first wave of these experiments, researchers used light to learn how various types of neurons worked. But Dr. Deisseroth wanted to be able to pick out any individual cell in the brain and turn it on and off with light.

So he and his colleagues designed a new device: Instead of just bathing a mouse’s brain in light, it allowed the researchers to deliver tiny beams of red light that could strike dozens of individual neurons at once.

To try out this new system, Dr. Deisseroth and his colleagues focused on the brain’s perception of the visual world. When light enters the eyes — of a mouse or a human — it triggers nerve endings in the retina that send electrical impulses to the rear of the brain.

There, in a region called the visual cortex, neurons quickly detect edges and other patterns, which the brain then assembles into a picture of reality.

The scientists inserted two genes into neurons in the visual cortices of mice. One gene made the neurons sensitive to the red laser light. The other caused neurons to produce a green flash when turned on, letting the researchers track their activity in response to stimuli.

The engineered mice were shown pictures on a monitor. Some were of vertical stripes, others of horizontal stripes. Sometimes the stripes were bright, sometimes fuzzy. The researchers trained the mice to lick a pipe only if they saw vertical stripes. If they performed the test correctly, they were rewarded with a drop of water.

As the mice were shown images, thousands of neurons in their visual cortices flashed green. One population of cells switched on in response to vertical stripes; other neurons flipped on when the mice were shown horizontal ones.

The researchers picked a few dozen neurons from each group to target. They again showed the stripes to the mice, and this time they also fired light at the neurons from the corresponding group. Switching on the correct neurons helped the mice do better at recognizing stripes.

Then the researchers turned off the monitor, leaving the mice in darkness. Now the scientists switched on the neurons for horizontal and vertical stripes, without anything for the rodents to see. The mice responded by licking the pipe, as if they were actually seeing vertical stripes.

Anne Churchland, a neuroscientist at Cold Spring Harbor Laboratory who was not involved in the study, cautioned that this kind of experiment can’t reveal much about a mouse’s inner experience.

“It’s not like a creature can tell you, ‘Oh, wow, I saw a horizontal bar,’” she said.

Dr. Churchland said that it would take more research to better understand why the mice behaved as they did in response to the flashes of red light. Did they see the horizontal stripes more clearly, or were they less distracted by misleading signals?

One of the most remarkable results came when Dr. Deisseroth and his colleagues narrowed their beams of red light to fewer and fewer neurons. Even then, the mice kept licking the pipe as if they were seeing the vertical stripes.

In the end, the scientists found they could trigger the hallucinations by stimulating as few as two neurons. Thousands of other neurons in the visual cortex would follow the lead of those two cells, flashing green as they became active.

Clusters of neurons in the brain may be tuned so that they’re ready to fire at even a slight stimulus, Dr. Deisseroth and his colleagues concluded — like a snowbank poised to become an avalanche.
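
As a rough illustration of that snowbank picture, the toy simulation below wires up a random excitatory network poised near its firing threshold and shows how activating just two seed cells can recruit a large ensemble. It is a sketch of the general idea only; the network and every parameter are invented, not the study’s model.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000       # neurons in the toy ensemble (invented)
K = 80         # average excitatory inputs per neuron (invented)
THRESHOLD = 2  # active inputs needed to recruit a quiet neuron

# Random excitatory wiring: conn[i, j] is True if neuron j excites neuron i.
conn = rng.random((N, N)) < K / N

def cascade(seeds):
    """Iteratively recruit every neuron receiving >= THRESHOLD active inputs."""
    active = np.zeros(N, dtype=bool)
    active[list(seeds)] = True
    while True:
        drive = conn[:, active].sum(axis=1)    # active inputs per neuron
        newly = (drive >= THRESHOLD) & ~active
        if not newly.any():
            return int(active.sum())
        active |= newly

# With wiring poised near threshold, stimulating just two cells can
# recruit most of the network -- the snowbank-to-avalanche picture.
print(cascade({0, 1}), "of", N, "neurons active after stimulating 2 cells")
```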

But it doesn’t take a fancy optogenetic device to make a few neurons fire. Even when they’re not receiving a stimulus, neurons sometimes just fire at random.

That raises a puzzle: If all it takes is two neurons, why are we not hallucinating all the time?

Maybe our brain wiring prevents it, Dr. Deisseroth said. When a neuron fires at random, others may send it signals to quiet down.

Dr. Glickfeld speculated that attention may be crucial to triggering the avalanche of neuronal action only at the right times. “Attention allows you to ignore a lot of the background activity,” she said.

Dr. Deisseroth hopes to see what other hallucinations he can trigger with light. In other parts of the brain, he might be able to cause mice to perceive more complex images, such as the face of a cat. He might be able to coax neurons to create phantom sounds, or even phantom smells.

As a psychiatrist, Dr. Deisseroth has treated patients who have suffered from visual hallucinations. In his role as a neuroscientist, he’d like to find out more about how individual neurons give rise to these images — and how to stop them.

“Now we know where those cells are, what they look like, what their shape is,” he said. “In future work, we can get to know them in much more detail.”

Immune cells invade aging brains, disrupt new nerve cell formation

A study by Stanford University School of Medicine investigators has revealed that immune cells infiltrate the rare newborn nerve-cell nurseries of the aging brain. There’s every reason to think those interlopers are up to no good. Experiments in a dish and in living animals indicate they’re secreting a substance that chokes off new nerve cell production.

While most of the experiments in the study were carried out in mice, the central finding—the invasion, by immune cells called killer T cells, of neurogenic niches (specialized spots in the brain where new nerve cells, or neurons, are generated)—was corroborated in tissue excised from autopsied human brains.

The findings could accelerate progress in hunting down the molecules in the body that promote the common deterioration of brain function in older individuals and in finding treatments that might stall or even reverse that deterioration. They also signify a crack in the wall of dogma that’s deemed the healthy brain impervious to invasion by the body’s immune cells, whose unbridled access to the organ could cause damage.

“The textbooks say that immune cells can’t easily get into the healthy brain, and that’s largely true,” said Anne Brunet, Ph.D., professor of genetics and senior author of the study. “But we’ve shown that not only do they get into otherwise healthy aging brains—including human brains—but they reach the very part of the brain where new neurons arise.”

Lead authorship of the study, to be published online July 3 in Nature, is shared by medical student Ben Dulken, Ph.D., graduate student Matthew Buckley and postdoctoral scholar Paloma Navarro Negredo, Ph.D.

The cells that aid memory

Many a spot in a young mammal’s brain is bursting with brand new neurons. But for the most part, those neurons have to last a lifetime. Older mammals’ brains retain only a couple of neurogenic niches, consisting of several cell types whose mix is critical for supporting neural stem cells that can both differentiate into neurons and generate more of themselves. New neurons spawned in these niches are considered essential to forming new memories and to learning, as well as to odor discrimination.

In order to learn more about the composition of the neurogenic niche, the Stanford researchers catalogued, one cell at a time, the activation levels of the genes in each of nearly 15,000 cells extracted from the subventricular zone (a neurogenic niche found in mice and human brains) of healthy 3-month-old mice and healthy 28- or 29-month-old mice.

This high-resolution, single-cell analysis allowed the scientists to characterize each cell they looked at and see what activities it was engaged in. Their analysis confirmed the presence of nine familiar cell types known to compose the neurogenic niche. But when Brunet and her colleagues compared their observations in the brains of young mice (equivalent in human years to young adults) with what they saw in the brains of old mice (equivalent to people in their 80s), they identified a couple of cell types in the older mice not typically expected to be there—and barely present in the young mice. In particular, they found immune cells known as killer T cells lurking in the older mice’s subventricular zone.
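
For readers curious what this kind of analysis looks like in practice, here is a minimal single-cell clustering sketch in Python using the scanpy library. It is not the authors’ pipeline; the input file name and all parameter choices are assumptions.

```python
# Minimal single-cell clustering sketch with scanpy; NOT the authors' pipeline.
# The input file name and all parameter values are assumptions.
import scanpy as sc

adata = sc.read_h5ad("svz_counts.h5ad")  # hypothetical cells-by-genes matrix

# Standard preprocessing: normalize sequencing depth, log-transform,
# and keep the most variable genes.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()

# Reduce dimensionality, build a cell-cell neighbor graph, and cluster it.
sc.tl.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)  # clusters correspond to candidate cell types

# Marker genes per cluster; a Cd8a-high cluster would flag killer T cells.
sc.tl.rank_genes_groups(adata, groupby="leiden")
```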

The healthy brain is by no means devoid of immune cells. In fact, it boasts its own unique version of them, called microglia. But the much greater variety of immune cells abounding in the blood, spleen, gut and elsewhere in the body is ordinarily denied entry to the brain, as the blood vessels pervading the brain have tightly sealed walls. The resulting so-called blood-brain barrier renders a healthy brain safe from the intrusion of potentially harmful immune cells on an inflammatory tear as the result of a systemic illness or injury.

“We did find an extremely sparse population of killer T cells in the subventricular zone of young mice,” said Brunet, who is the Michele and Timothy Barakett Endowed Professor. “But in the older mice, their numbers were expanded by 16-fold.”

That dovetailed with reduced numbers of proliferation-enabled neural stem cells in the older mice’s subventricular zone. Further experiments demonstrated several aspects of the killer T cells’ not-so-mellow interaction with neural stem cells. For one thing, tests in laboratory dishware and in living animals indicated that killer T cells isolated from old mice’s subventricular zone were far more disposed than those from the same mice’s blood to pump out an inflammation-promoting substance that stopped neural stem cells from generating new nerve cells.

Second, killer T cells were seen nestled next to neural stem cells in old mice’s subventricular zones and in tissue taken from the corresponding neurogenic niche in autopsied brains of old humans; where this was the case, the neural stem cells were less geared up to proliferate.

Possible brain-based antigens

A third finding was especially intriguing. Killer T cells’ job is to roam through the body probing the surfaces of cells for biochemical signs of a pathogen’s presence or of the possibility that a cell is becoming, or already is, cancerous. Such telltale biochemical features are called antigens. The tens of billions of killer T cells in a human body are able to recognize a gigantic range of antigens by means of receptors on their own surfaces. That’s because every unexposed, or naïve, killer T cell has its own unique receptor shape.

When an initially naïve killer T cell is exposed to an unfamiliar antigen that fits its uniquely shaped receptor, it reacts by undergoing multiple successive rounds of replication, culminating in a large set of warlike cells all sharing the same receptor and all poised to destroy any cells bearing the offending antigen. This process is called clonal expansion.

The killer T cells found in old mice’s brains had undergone clonal expansion, indicating likely exposure to triggering antigens. But the receptors on those killer T cells differed from the ones found in the old mice’s blood, suggesting that the brain-localized killer T cells hadn’t just traipsed through a disrupted blood-brain barrier via passive diffusion but were, rather, reacting to different, possibly brain-based, antigens.

Brunet’s group is now trying to determine what those antigens are. “They may bear some responsibility for the disruption of new neuron production in the aging brain’s neurogenic niches,” she said.

Single-cell analysis reveals T cell infiltration in old neurogenic niches, Nature (2019). DOI: 10.1038/s41586-019-1362-5, https://www.nature.com/articles/s41586-019-1362-5

https://medicalxpress.com/news/2019-07-immune-cells-invade-aging-brains.html

Low-carb ‘keto’ diet (‘Atkins-style’) may modestly improve cognition in older adults

In a pilot study of 14 older adults with mild cognitive problems suggestive of early Alzheimer’s disease, Johns Hopkins Medicine researchers report that a high-fat, low-carbohydrate diet may improve brain function and memory.

Although the researchers say that finding participants willing to undertake restrictive diets for the three-month study—or partners willing to help them stick to those diets—was challenging, those who adhered to a modified Atkins diet (very low carbohydrates and extra fat) had small but measurable improvements on standardized tests of memory compared with those on a low-fat diet.

The short-term results, published in the April issue of the Journal of Alzheimer’s Disease, are far from proof that the modified Atkins diet has the potential to stave off progression from mild cognitive impairment to Alzheimer’s disease or other dementias. However, they are promising enough, the researchers say, to warrant larger, longer-term studies of dietary impact on brain function.

“Our early findings suggest that perhaps we don’t need to cut carbs as strictly as we initially tried. We may eventually see the same beneficial effects by adding a ketone supplement that would make the diet easier to follow,” says Jason Brandt, Ph.D., professor of psychiatry and behavioral sciences and neurology at the Johns Hopkins University School of Medicine. “Most of all, if we can confirm these preliminary findings, using dietary changes to mitigate cognitive loss in early-stage dementia would be a real game-changer. It’s something that 400-plus experimental drugs haven’t been able to do in clinical trials.”

Brandt explains that, typically, the brain uses the sugar glucose—a product of carbohydrate breakdown—as a primary fuel. However, research has shown that in the early stage of Alzheimer’s disease the brain isn’t able to efficiently use glucose as an energy source. Some experts, he says, even refer to Alzheimer’s as “type 3 diabetes.”

Using brain scans that show energy use, researchers have also found that ketones—chemicals formed during the breakdown of dietary fat—can be used as an alternative energy source in the brains of healthy people and those with mild cognitive impairment. For example, when a person is on a ketogenic diet, consisting of lots of fat and very few sugars and starches, the brain and body use ketones as an energy source instead of carbs.

For the current study, the researchers wanted to see if people with mild cognitive impairment, often an indicator of developing Alzheimer’s disease, would benefit from a diet that forced the brain to use ketones instead of carbohydrates for fuel.

After 2 1/2 years of recruitment efforts, the researchers were able to enroll 27 people in the 12-week diet study. There were a few dropouts, and so far, 14 participants have completed the study. The participants’ average age was 71. Half were women, and all but one were white.

To enroll, each participant required a study partner (typically a spouse) who was responsible for ensuring that the participant followed one of two diets for the full 12 weeks. Nine participants followed a modified Atkins diet meant to restrict carbs to 20 grams per day or less, with no restriction on calories. The typical American consumes between 200 and 300 grams of carbs a day. The other five participants followed a National Institute of Aging diet, similar to the Mediterranean diet, that doesn’t restrict carbohydrates, but favors fruits, vegetables, low- or fat-free dairy, whole grains and lean proteins such as seafood or chicken.

The participants and their partners were also asked to keep food diaries. Before starting the diets, those assigned to the modified Atkins diet were consuming about 158 grams of carbs per day. By week six of the diet, they had cut back to an average of 38.5 grams per day, and their intake dropped further by week nine, though it remained short of the 20-gram target, before rising to an average of 53 grams by week 12. Participants on the National Institute of Aging diet continued to eat well over 100 grams of carbs per day.

Each participant also gave urine samples at the start of the dietary regimens and every three weeks until the end of the study, which were used to track ketone levels. More than half of the participants on the modified Atkins diet had at least some ketones in their urine from six weeks into the diet through the end; as expected, none of the participants on the National Institute of Aging control diet had any detectable ketones.

Participants completed the Montreal Cognitive Assessment, the Mini-Mental State Examination and the Clinical Dementia Rating Scale at the start of the study. They were tested with a brief collection of neuropsychological memory tests before starting their diets and at six weeks and 12 weeks on the diet. At the six-week mark, the researchers found a significant improvement on memory tests, which coincided with the highest levels of ketones and lowest carb intakes.

When comparing the results of tests of delayed recall—the ability to recollect something they were told or shown a few minutes earlier—those who stuck to the modified Atkins diet improved by a couple of points on average (about 15% of the total score), whereas those who didn’t follow the diet on average dropped a couple of points.

The researchers say the biggest hurdle was finding people willing to make drastic changes to their eating habits and partners willing to enforce the diets. The increase in carbohydrate intake later in the study period, they said, suggests that the diet becomes unpalatable over long periods.

“Many people would rather take a pill that causes them all kinds of nasty side effects than change their diet,” says Brandt. “Older people often say that eating the foods they love is one of the few pleasures they still enjoy in life, and they aren’t willing to give that up.”

But because Brandt’s team observed promising results even in participants who were lax about the diet, they believe that a milder version of the high-fat/low-carb diet, perhaps in conjunction with ketone supplement drinks, is worth further study. Because this study also depended on caregivers/partners to do most of the work of preparing and implementing the diet, the group also wants to see whether participants with less severe mild cognitive impairment can make their own dietary choices and be more apt to stick to a ketogenic diet.

A standardized modified Atkins diet was created and tested at Johns Hopkins Medicine in 2002, initially to treat some seizure disorders. It’s still used very successfully for this purpose.

According to the Alzheimer’s Association, about 5.8 million Americans have Alzheimer’s disease, and by 2050 the number is projected to increase to 14 million people.

Jason Brandt et al. Preliminary Report on the Feasibility and Efficacy of the Modified Atkins Diet for Treatment of Mild Cognitive Impairment and Early Alzheimer’s Disease, Journal of Alzheimer’s Disease (2019). DOI: 10.3233/JAD-180995

https://medicalxpress.com/news/2019-06-low-carb-keto-diet-atkins-style-modestly.html

Study suggests neuronal origin of ‘they all look alike’

by Bob Yirka

A team of researchers from the University of California and Stanford University has found that the tendency to see people from different racial groups as interchangeable has a neuronal basis. In their paper published in Proceedings of the National Academy of Sciences, the group describes studies they conducted with volunteers and what they found.

One often-heard phrase connected with racial profiling is “they all look the same to me,” a phrase usually perceived as racist. It implies that people of one race have difficulty discerning the facial characteristics of people of another race. In this new effort, the researchers conducted experiments to test whether this difficulty is real—at least among one small group of young, white men.

In the first experiment, young, white male volunteers looked at photographs of human faces, some depicting black people, others white, while undergoing an fMRI scan. Afterward, the researchers found that the part of the brain involved in facial recognition activated more for white faces than it did for black faces.

In the second experiment, the same volunteers looked at photographs of faces that had been doctored to make the subjects appear more alike, regardless of skin color. The researchers report that the brains of the volunteers activated when dissimilarities were spotted, regardless of skin color, though it was more pronounced when the photo was of a white face.

In a third series of experiments, the volunteers rated how different they found faces in a series of photographs or whether they had seen a given face before. The researchers report that the volunteers had a tendency to rate the black faces as more similar to one another than the white faces. And they found it easier to tell if they had seen a particular white face before.

The researchers suggest that the results of their experiments indicate a neural basis that makes it more difficult for people to see differences between individuals of other races. They note that they did account for social contexts such as whether the volunteers had friends and/or associates of other races. They suggest that more work is required to determine if such neuronal biases can be changed based on social behavior.

Brent L. Hughes et al. Neural adaptation to faces reveals racial outgroup homogeneity effects in early perception, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1822084116

https://medicalxpress.com/news/2019-07-neuronal-alike.html

Stem Cells Sprayed into the Nose Restore Mice’s Ability to Smell

by KERRY GRENS

In mice whose sense of smell has been disabled, a squirt of stem cells into the nose can restore olfaction, researchers report today (May 30) in Stem Cell Reports. The introduced “globose basal cells,” which are precursors to smell-sensing neurons, engrafted in the nose, matured into nerve cells, and sent axons to the mice’s olfactory bulbs in the brain.

“We were a bit surprised to find that cells could engraft fairly robustly with a simple nose drop delivery,” senior author Bradley Goldstein of the University of Miami Miller School of Medicine says in a press release. “To be potentially useful in humans, the main hurdle would be to identify a source of cells capable of engrafting, differentiating into olfactory neurons, and properly connecting to the olfactory bulbs of the brain. Further, one would need to define what clinical situations might be appropriate, rather than the animal model of acute olfactory injury.”

Goldstein and others have previously tried stem cell therapies to restore olfaction in animals, but he and his coauthors note in their study that it has been difficult to determine whether the regained function came from the transplant or from endogenous repair stimulated by the experimental injury used to induce the loss of olfaction. His team therefore developed a mouse whose resident globose basal cells made only nonfunctional neurons, ensuring that any restoration of smell could be attributed to the introduced cells.

The team developed the stem cell transplant by engineering mice that produce easily traceable green fluorescent cells. The researchers then harvested glowing green globose basal cells (as identified by the presence of a receptor called c-kit) and delivered them into the noses of the genetically engineered, smell-impaired mice. Four weeks later, the team observed the green cells in the nasal epithelium, with axons working their way into the olfactory bulb.

Behaviorally, the mice appeared to have a functioning sense of smell after the stem cell treatment. Unlike untreated animals, they avoided an area of an enclosure that had a bad smell to normal mice.

To move this technology into humans suffering from a loss of olfaction, more experiments in animals are necessary, says James Schwob, an olfactory researcher at Tufts University who has collaborated with Goldstein but was not involved in the latest study, in an interview with Gizmodo. “The challenge is going to be trying to [engraft analogous cells] in humans in a way . . . that [would] not make things worse.”

https://www.the-scientist.com/news-opinion/stem-cells-delivered-to-the-nose-restore-mices-ability-to-smell-65953

Brain imaging biomarker developed for suicidal ideation in patients with PTSD


Brains of individuals with PTSD and suicidal thoughts (top) show higher levels of mGluR5 compared to healthy controls (bottom).

By Bill Hathaway

The risk of suicide among individuals with post-traumatic stress disorder (PTSD) is much higher than in the general population, but identifying those individuals at greatest risk has been difficult. However, a team at Yale has discovered a biological marker linked to individuals with PTSD who are most likely to think about suicide, the researchers report May 13 in the journal Proceedings of the National Academy of Sciences.

Researchers used PET imaging to measure levels of metabotropic glutamatergic receptor 5 (mGluR5) — which has been implicated in anxiety and mood disorders — in individuals with PTSD and major depressive disorder. They found high levels of mGluR5 in the PTSD group with current suicidal thoughts. They found no such elevated levels in the PTSD group with no suicidal thoughts or in those with depression, with or without current suicidal thoughts.

There are two FDA-approved treatments for PTSD, both of which are antidepressants. It can take weeks or months to determine whether they are effective, the researchers note, and that can be too late for those who are suicidal.

“If you have people who suffer from high blood pressure, you want to reduce those levels right away,” said Irina Esterlis, associate professor of psychiatry at Yale and senior author of the study. “We don’t have that option with PTSD.”

Esterlis said testing for levels of mGluR5 in people who have experienced severe trauma might help identify those at greatest risk of harming themselves and prompt psychiatric interventions. Also, researchers might investigate ways to regulate levels of mGluR5 in hopes of minimizing suicide risk in PTSD patients, she said.

https://news.yale.edu/2019/05/13/biomarker-reveals-ptsd-sufferers-risk-suicide

A newly identified type of dementia that is sometimes mistaken for Alzheimer’s disease

Doctors have newly outlined a type of dementia that could be more common than Alzheimer’s among the oldest adults, according to a report published Tuesday in the journal Brain.

The disease, called LATE, may often mirror the symptoms of Alzheimer’s disease, though it affects the brain differently and develops more slowly than Alzheimer’s. Doctors say the two are frequently found together, and in those cases may lead to a steeper cognitive decline than either by itself.

In developing its report, the international team of authors is hoping to spur research — and, perhaps one day, treatments — for a disease that tends to affect people over 80 and “has an expanding but under-recognized impact on public health,” according to the paper.

“We’re really overhauling the concept of what dementia is,” said lead author Dr. Peter Nelson, director of neuropathology at the University of Kentucky Medical Center.

Still, the disease itself didn’t come out of the blue. The evidence has been building for years, including reports of patients who didn’t quite fit the mold for known types of dementia such as Alzheimer’s.

“There isn’t going to be one single disease that is causing all forms of dementia,” said Sandra Weintraub, a professor of psychiatry, behavioral sciences and neurology at Northwestern University Feinberg School of Medicine. She was not involved in the new paper.

Weintraub said researchers have been well aware of the “heterogeneity of dementia,” but figuring out precisely why each type can look so different has been a challenge. Why do some people lose memory first, while others lose language or have personality changes? Why do some develop dementia earlier in life, while others develop it later?

Experts say this heterogeneity has complicated dementia research, including Alzheimer’s, because it hasn’t always been clear what the root cause was — and thus, if doctors were treating the right thing.

What is it?

The acronym LATE stands for limbic-predominant age-related TDP-43 encephalopathy. The full name refers to the area in the brain most likely to be affected, as well as the protein at the center of it all.

“These age-related dementia diseases are frequently associated with proteinaceous glop,” Nelson said. “But different proteins can contribute to the glop.”

In Alzheimer’s, you’ll find one set of glops. In Lewy body dementia, another glop.

And in LATE, the glop is a protein called TDP-43. Doctors aren’t sure why the protein is found in a modified, misfolded form in a disease like LATE.

“TDP-43 likes certain parts of the brain that the Alzheimer’s pathology is less enamored of,” explained Weintraub, who is also a member of Northwestern’s Mesulam Center for Cognitive Neurology and Alzheimer’s Disease.

“This is an area that’s going to be really huge in the future. What are the individual vulnerabilities that cause the proteins to go to particular regions of the brain?” she said. “It’s not just what the protein abnormality is, but where it is.”

More than a decade ago, doctors first linked the TDP-43 protein to amyotrophic lateral sclerosis, otherwise known as ALS or Lou Gehrig’s disease. It was also linked to another type of dementia, called frontotemporal lobar degeneration.

LATE “is a disease that’s 100 times more common than either of those, and nobody knows about it,” said Nelson.

The new paper estimates, based on autopsy studies, that between 20 and 50% of people over 80 will have brain changes associated with LATE. And that prevalence increases with age.

Experts say nailing down these numbers — as well as finding better ways to detect and research the disease — is what they hope comes out of consensus statements like the new paper, which gives scientists a common language to discuss it, according to Nelson.

“People have, in their own separate bailiwicks, found different parts of the elephant,” he said. “But this is the first place where everybody gets together and says, ‘This is the whole elephant.’ ”

What this could mean for Alzheimer’s

The new guidelines could have an impact on Alzheimer’s research, as well. For one, experts say some high-profile drug trials may have suffered as a result of some patients having unidentified LATE — and thus not responding to treatment.

In fact, Nelson’s colleagues recently saw that firsthand: a patient, now deceased, who was part of an Alzheimer’s drug trial but developed dementia anyway.

“So, the clinical trial was a failure for Alzheimer’s disease,” Nelson said, “but it turns out he didn’t have Alzheimer’s disease. He had LATE.”

Nina Silverberg, director of the Alzheimer’s Disease Research Centers Program at the National Institute on Aging, said she suspects examples like this are not the majority — in part because people in clinical trials tend to be on the younger end of the spectrum.

“I’m sure it plays some part, but maybe not as much as one might think at first,” said Silverberg, who co-chaired the working group that led to the new paper.

Advances in testing had already shown that some patients in these trials lacked “the telltale signs of Alzheimer’s,” she said.

In some cases, perhaps it was LATE — “and it’s certainly possible that there are other, as yet undiscovered, pathologies that people may have,” she added.

“We could go back and screen all the people that had failed their Alzheimer’s disease therapies,” Nelson said. “But what we really need to do is go forward and try to get these people out of the Alzheimer’s clinical trials — and instead get them into their own clinical trials.”

Silverberg describes the new paper as “a roadmap” for research that could change as we come to discover more about the disease. And researchers can’t do it without a large, diverse group of patients, she added.

“It’s probably going to take years and research participants to help us understand all of that,” she said.

https://www.cnn.com/2019/04/30/health/dementia-late-alzheimers-study/index.html

New high-risk genes for schizophrenia discovered

Summary: Study identifies 104 high-risk genes for schizophrenia. One gene considered high-risk is also suspected in the development of autism.

Source: Vanderbilt University

Using a unique computational framework they developed, a team of scientist cyber-sleuths in the Vanderbilt University Department of Molecular Physiology and Biophysics and the Vanderbilt Genetics Institute (VGI) has identified 104 high-risk genes for schizophrenia.

Their discovery, which was reported April 15 in the journal Nature Neuroscience, supports the view that schizophrenia is a developmental disease, one which potentially can be detected and treated even before the onset of symptoms.

“This framework opens the door for several research directions,” said the paper’s senior author, Bingshan Li, PhD, associate professor of Molecular Physiology and Biophysics and an investigator in the VGI.

One direction is to determine whether drugs already approved for other, unrelated diseases could be repurposed to improve the treatment of schizophrenia. Another is to determine in which brain cell types these genes are active along the developmental trajectory.

Ultimately, Li said, “I think we’ll have a better understanding of how prenatally these genes predispose risk, and that will give us a hint of how to potentially develop intervention strategies. It’s an ambitious goal … (but) by understanding the mechanism, drug development could be more targeted.”

Schizophrenia is a chronic, severe mental disorder characterized by hallucinations and delusions, “flat” emotional expression and cognitive difficulties.

Symptoms usually start between the ages of 16 and 30. Antipsychotic medications can relieve symptoms, but there is no cure for the disease.

Genetics plays a major role. While schizophrenia occurs in 1% of the population, the risk rises sharply to 50% for a person whose identical twin has the disease.

Recent genome-wide association studies (GWAS) have identified more than 100 loci, or fixed positions on different chromosomes, associated with schizophrenia. That may not be where high-risk genes are located, however. The loci could be regulating the activity of the genes at a distance — nearby or very far away.

To solve the problem, Li, with first authors Rui Chen, PhD, research instructor in Molecular Physiology and Biophysics, and postdoctoral research fellow Quan Wang, PhD, developed a computational framework they called the “Integrative Risk Genes Selector.”

The framework pulled the top genes from previously reported loci based on their cumulative supporting evidence from multi-dimensional genomics data as well as gene networks.

Which genes have high rates of mutation? Which are expressed prenatally? These are the kinds of questions a genetic “detective” might ask to identify and narrow the list of “suspects.”
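
The published framework is more sophisticated, but a toy sketch conveys the flavor of aggregating evidence layers into a single per-gene score. Every column, weight, and number below is invented for illustration.

```python
# Toy evidence-aggregation ranking; the columns, weights, and numbers are
# invented for illustration and are not the published method.
import pandas as pd

genes = pd.DataFrame({
    "gene":                 ["GENE_A", "GENE_B", "GENE_C"],
    "distance_to_locus_kb": [12,       450,      35],
    "mutation_evidence":    [0.8,      0.1,      0.5],  # normalized 0-1
    "prenatal_expression":  [0.9,      0.2,      0.7],  # normalized 0-1
    "network_support":      [0.6,      0.3,      0.8],  # normalized 0-1
}).set_index("gene")

# Combine the evidence layers into one cumulative score per gene.
weights = {"mutation_evidence": 1.0,
           "prenatal_expression": 1.0,
           "network_support": 0.5}
score = sum(w * genes[col] for col, w in weights.items())
score -= 0.001 * genes["distance_to_locus_kb"]  # mild penalty for distant genes

# Highest-scoring genes are the prioritized "suspects."
print(score.sort_values(ascending=False))
```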

The result was a list of 104 high-risk genes, some of which encode proteins targeted in other diseases by drugs already on the market. One gene is suspected in the development of autism spectrum disorder.

Much work remains to be done. But, said Chen, “Our framework can push GWAS a step forward … to further identify genes.” It also could be employed to help track down genetic suspects in other complex diseases.

Also contributing to the study were Li’s lab members Qiang Wei, PhD, Ying Ji and Hai Yang, PhD; VGI investigators Xue Zhong, PhD, Ran Tao, PhD, James Sutcliffe, PhD, and VGI Director Nancy Cox, PhD.

Chen also credits investigators in the Vanderbilt Center for Neuroscience Drug Discovery — Colleen Niswender, PhD, Branden Stansley, PhD, and center Director P. Jeffrey Conn, PhD — for their critical input.

Funding: The study was supported by the Vanderbilt Analysis Center for the Genome Sequencing Program and National Institutes of Health grant HG009086.


Synthetic speech generated from brain recordings


Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right) which then could be synthesized to reconstruct the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery

A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract—an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.

The new system being developed in the laboratory of Edward Chang, MD—described April 24, 2019 in Nature—demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Brief animation illustrates how patterns of brain activity from the brain’s speech centers in somatosensory cortex (top left) were first decoded into a computer simulation of a research participant’s vocal tract movements (top right), which were then translated into a synthesized version of the participant’s voice (bottom). Credit: Chang lab / UCSF Dept. of Neurosurgery. Simulated vocal tract animation credit: Speech Graphics

Virtual Vocal Tract Improves Naturalistic Speech Synthesis

The research was led by Gopala Anumanchipalli, Ph.D., a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.

From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”

In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center—patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery—to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
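
A schematic version of that two-stage design, sketched here in Python with PyTorch, may help make the architecture concrete. The layer sizes, the use of simple GRUs, and the feature dimensions are illustrative assumptions, not the published system.

```python
# Schematic two-stage pipeline: brain activity -> vocal tract movements ->
# audio features. Layer sizes, GRUs, and dimensions are illustrative guesses.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Stage 1: neural recording features -> vocal tract kinematics."""
    def __init__(self, n_electrodes=256, n_articulators=33):
        super().__init__()
        self.rnn = nn.GRU(n_electrodes, 128, num_layers=2, batch_first=True)
        self.out = nn.Linear(128, n_articulators)

    def forward(self, neural):              # (batch, time, electrodes)
        h, _ = self.rnn(neural)
        return self.out(h)                  # (batch, time, articulators)

class Synthesizer(nn.Module):
    """Stage 2: vocal tract kinematics -> acoustic features for a vocoder."""
    def __init__(self, n_articulators=33, n_acoustic=32):
        super().__init__()
        self.rnn = nn.GRU(n_articulators, 128, batch_first=True)
        self.out = nn.Linear(128, n_acoustic)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)                  # spectral features, not raw audio

decoder, synthesizer = Decoder(), Synthesizer()
neural = torch.randn(1, 100, 256)           # 100 time steps of fake recordings
audio_features = synthesizer(decoder(neural))
print(audio_features.shape)                 # torch.Size([1, 100, 32])
```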

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.

“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance

The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.


Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF

Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.

The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”

https://medicalxpress.com/news/2019-04-synthetic-speech-brain.html

Neuroscientists reverse some behavioral symptoms of Williams syndrome

Williams Syndrome, a rare neurodevelopmental disorder that affects about one in 10,000 babies born in the United States, produces a range of symptoms including cognitive impairments, cardiovascular problems, and extreme friendliness, or hypersociability.

In a study of mice, MIT neuroscientists have garnered new insight into the molecular mechanisms that underlie this hypersociability. They found that loss of one of the genes linked to Williams Syndrome leads to a thinning of the fatty layer that insulates neurons and helps them conduct electrical signals in the brain.

The researchers also showed that they could reverse the symptoms by boosting production of this coating, known as myelin. This is significant, because while Williams Syndrome is rare, many other neurodevelopmental disorders and neurological conditions have been linked to myelination deficits, says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research.

“The importance is not only for Williams Syndrome,” says Feng, who is one of the senior authors of the study. “In other neurodevelopmental disorders, especially in some of the autism spectrum disorders, this could be potentially a new direction to look into, not only the pathology but also potential treatments.”

Zhigang He, a professor of neurology and ophthalmology at Harvard Medical School, is also a senior author of the paper, which appears in the April 22 issue of Nature Neuroscience. Former MIT postdoc Boaz Barak, currently a principal investigator at Tel Aviv University in Israel, is the lead author and a senior author of the paper.

Impaired myelination

Williams Syndrome, which is caused by the loss of one of the two copies of a segment of chromosome 7, can produce learning impairments, especially for tasks that require visual and motor skills, such as solving a jigsaw puzzle. Some people with the disorder also exhibit poor concentration and hyperactivity, and they are more likely to experience phobias.

In this study, the researchers decided to focus on one of the 25 genes in that segment, known as Gtf2i. Based on studies of patients with a smaller subset of the genes deleted, scientists have linked the Gtf2i gene to the hypersociability seen in Williams Syndrome.

Working with a mouse model, the researchers devised a way to knock out the gene specifically from excitatory neurons in the forebrain, which includes the cortex, the hippocampus, and the amygdala (a region important for processing emotions). They found that these mice did show increased levels of social behavior, measured by how much time they spent interacting with other mice. The mice also showed deficits in fine motor skills and increased anxiety in nonsocial contexts, both also symptoms of Williams Syndrome.

Next, the researchers sequenced the messenger RNA from the cortex of the mice to see which genes were affected by loss of Gtf2i. Gtf2i encodes a transcription factor, so it controls the expression of many other genes. The researchers found that about 70 percent of the genes with significantly reduced expression levels were involved in the process of myelination.
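
A claim like that typically rests on an enrichment test: given the genes whose expression dropped, is the myelination category over-represented among them? A standard hypergeometric check is sketched below with made-up counts; the paper’s actual numbers and method may differ.

```python
# Back-of-the-envelope enrichment check: are myelination genes over-represented
# among the down-regulated genes? All counts below are invented.
from scipy.stats import hypergeom

total_genes   = 15000  # genes tested (assumption)
myelin_genes  = 600    # annotated myelination genes (assumption)
downregulated = 200    # significantly down-regulated genes (assumption)
overlap       = 140    # down-regulated genes that are myelination-related (~70%)

# P(observing >= 140 myelination genes by chance) under random sampling:
p = hypergeom.sf(overlap - 1, total_genes, myelin_genes, downregulated)
print(f"enrichment p-value: {p:.2e}")
```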

“Myelin is the insulation layer that wraps the axons that extend from the cell bodies of neurons,” Barak says. “When they don’t have the right properties, it will lead to faster or slower electrical signal transduction, which affects the synchronicity of brain activity.”

Further studies revealed that the mice had only about half the normal number of mature oligodendrocytes—the brain cells that produce myelin. However, the number of oligodendrocyte precursor cells was normal, so the researchers suspect that the maturation and differentiation processes of these cells are somehow impaired when Gtf2i is missing in the neurons.

This was surprising because Gtf2i was not knocked out in oligodendrocytes or their precursors. Thus, knocking out the gene in neurons may somehow influence the maturation process of oligodendrocytes, the researchers suggest. It is still unknown how this interaction might work.

“That’s a question we are interested in, but we don’t know whether it’s a secreted factor, or another kind of signal or activity,” Feng says.

In addition, the researchers found that the myelin surrounding axons of the forebrain was significantly thinner than in normal mice. Furthermore, electrical signals were smaller, and took more time to cross the brain, in mice lacking Gtf2i.

Symptom reversal

It remains to be discovered precisely how this reduction in myelination leads to hypersociability. The researchers suspect that the lack of myelin affects brain circuits that normally inhibit social behaviors, making the mice more eager to interact with others.

“That’s probably the explanation, but exactly which circuits and how does it work, we still don’t know,” Feng says.

The researchers also found that they could reverse the symptoms by treating the mice with drugs that improve myelination. One of these drugs, an FDA-approved antihistamine called clemastine fumarate, is now in clinical trials to treat multiple sclerosis, which affects myelination of neurons in the brain and spinal cord. The researchers believe it would be worthwhile to test these drugs in Williams Syndrome patients because they found thinner myelin and reduced numbers of mature oligodendrocytes in brain samples from human subjects who had Williams Syndrome, compared to typical human brain samples.

“Mice are not humans, but the pathology is similar in this case, which means this could be translatable,” Feng says. “It could be that in these patients, if you improve their myelination early on, it could at least improve some of the conditions. That’s our hope.”

Such drugs would likely help mainly the social and fine-motor issues caused by Williams Syndrome, not the symptoms that are produced by deletion of other genes, the researchers say. They may also help treat other disorders, such as autism spectrum disorders, in which myelination is impaired in some cases, Feng says.

“We think this can be expanded into autism and other neurodevelopmental disorders. For these conditions, improved myelination may be a major factor in treatment,” he says. “We are now checking other animal models of neurodevelopmental disorders to see whether they have myelination defects, and whether improved myelination can improve some of the pathology of the defects.”

More information: Neuronal deletion of Gtf2i, associated with Williams syndrome, causes behavioral and myelin alterations rescuable by a remyelinating drug, Nature Neuroscience (2019). DOI: 10.1038/s41593-019-0380-9, https://www.nature.com/articles/s41593-019-0380-9

https://medicalxpress.com/news/2019-04-neuroscientists-reverse-behavioral-symptoms-williams.html