Posts Tagged ‘neuroscience’

Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right), which could then be used to synthesize the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery

A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract—an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.

The new system being developed in the laboratory of Edward Chang, MD—described April 24, 2019 in Nature—demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Brief animation illustrates how patterns of brain activity from the brain’s speech centers in somatosensory cortex (top left) were first decoded into a computer simulation of a research participant’s vocal tract movements (top right), which were then translated into a synthesized version of the participant’s voice (bottom). Credit: Chang lab / UCSF Dept. of Neurosurgery. Simulated Vocal Tract Animation Credit: Speech Graphics
Virtual Vocal Tract Improves Naturalistic Speech Synthesis

The research was led by Gopala Anumanchipalli, Ph.D., a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.

From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”

In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center—patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery—to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
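As a rough illustration of that two-stage design, here is a minimal data-flow sketch in Python. Everything in it is an assumption made for illustration: the channel and feature counts are hypothetical, and plain random linear maps stand in for the study’s trained neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 256 recording channels, 33 vocal tract
# kinematic features, 32 acoustic features per time step.
N_CHANNELS, N_KINEMATICS, N_ACOUSTICS = 256, 33, 32

# Plain linear maps standing in for the two trained networks.
W_decode = rng.standard_normal((N_KINEMATICS, N_CHANNELS)) * 0.01
W_synth = rng.standard_normal((N_ACOUSTICS, N_KINEMATICS)) * 0.01

def decode_movements(neural_activity):
    """Stage 1 (decoder): brain activity -> virtual vocal tract movements."""
    return W_decode @ neural_activity

def synthesize_acoustics(movements):
    """Stage 2 (synthesizer): vocal tract movements -> acoustic features."""
    return W_synth @ movements

# One second of simulated recordings at 200 samples/s: (channels, time).
ecog = rng.standard_normal((N_CHANNELS, 200))
movements = decode_movements(ecog)
acoustics = synthesize_acoustics(movements)
print(movements.shape, acoustics.shape)  # (33, 200) (32, 200)
```

The point is only the pipeline’s shape: acoustics are never decoded directly from brain activity, but always via the intermediate articulatory representation.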

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.

“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance

The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice, and whether the system could generalize to anything the user wishes to say.

Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF

Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.

The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”


By Emily Underwood

One of the thorniest debates in neuroscience is whether people can make new neurons—a process known as neurogenesis—after their brains stop developing in adolescence. Now, a new study finds that even people long past middle age can make fresh brain cells, and that past studies that failed to spot these newcomers may have used flawed methods.

The work “provides clear, definitive evidence that neurogenesis persists throughout life,” says Paul Frankland, a neuroscientist at the Hospital for Sick Children in Toronto, Canada. “For me, this puts the issue to bed.”

Researchers have long hoped that neurogenesis could help treat brain disorders like depression and Alzheimer’s disease. But last year, a study in Nature reported that the process peters out by adolescence, contradicting previous work that had found newborn neurons in older people using a variety of methods. The finding was deflating for neuroscientists like Frankland, who studies adult neurogenesis in the rodent hippocampus, a brain region involved in learning and memory. It “raised questions about the relevance of our work,” he says.

But there may have been problems with some of this earlier research. Last year’s Nature study, for example, looked for new neurons in 59 samples of human brain tissue, some of which came from brain banks where samples are often immersed in the fixative paraformaldehyde for months or even years. Over time, paraformaldehyde forms bonds between the components that make up neurons, turning the cells into a gel, says neuroscientist María Llorens-Martín of the Severo Ochoa Molecular Biology Center in Madrid. This makes it difficult for fluorescent antibodies to bind to the doublecortin (DCX) protein, which many scientists consider the “gold standard” marker of immature neurons, she says.

The number of cells that test positive for DCX in brain tissue declines sharply after just 48 hours in a paraformaldehyde bath, Llorens-Martín and her colleagues report today in Nature Medicine. After 6 months, detecting new neurons “is almost impossible,” she says.

When the researchers used a shorter fixation time—24 hours—to preserve donated brain tissue from 13 deceased adults, ranging in age from 43 to 87, they found tens of thousands of DCX-positive cells in the dentate gyrus, a curled sliver of tissue within the hippocampus that encodes memories of events. Under a microscope, the neurons had hallmarks of youth, Llorens-Martín says: smooth and plump, with simple, undeveloped branches.

In the sample from the youngest donor, who died at 43, the team found roughly 42,000 immature neurons per square millimeter of brain tissue. From the youngest to oldest donors, the number of apparent new neurons decreased by 30%—a trend that fits with previous studies in humans showing that adult neurogenesis declines with age. The team also showed that people with Alzheimer’s disease had 30% fewer immature neurons than healthy donors of the same age, and the more advanced the dementia, the fewer such cells.

Some scientists remain skeptical, including the authors of last year’s Nature paper. “While this study contains valuable data, we did not find the evidence for ongoing production of new neurons in the adult human hippocampus convincing,” says Shawn Sorrells, a neuroscientist at the University of Pittsburgh in Pennsylvania who co-authored the 2018 paper. One critique hinges on the DCX stain, which Sorrells says isn’t an adequate measure of young neurons because the DCX protein is also expressed in mature cells. That suggests the “new” neurons the team found were actually present since childhood, he says. The new study also found no evidence of pools of stem cells that could supply fresh neurons, he notes. What’s more, Sorrells says two of the brain samples he and his colleagues looked at were only fixed for 5 hours, yet they still couldn’t find evidence of young neurons in the hippocampus.

Llorens-Martín says her team used multiple other proteins associated with neuronal development to confirm that the DCX-positive cells were actually young, and were “very strict” in their criteria for identifying young neurons.

Heather Cameron, a neuroscientist at the National Institute of Mental Health in Bethesda, Maryland, remains persuaded by the new work. Based on the “beauty of the data” in the new study, “I think we can all move forward pretty confidently in the knowledge that what we see in animals will be applicable in humans,” she says. “Will this settle the debate? I’m not sure. Should it? Yes.”

by David Nield

How exactly do our brains sort between the five taste groups: sweet, sour, salty, bitter and umami? We’ve now got a much better idea, thanks to research that has pinned down where in the brain this taste processing happens.

Step forward: the insular cortex. Already thought to be responsible for everything from motor control to social empathy, this region can now add flavour identification to its list of jobs.

It’s an area of the brain scientists have previously suspected could be responsible for sorting tastes, and which has been linked to taste in rodents, but this new study is much more precise in figuring out the role it plays in decoding what our tongues are telling us.

“We have known that tastes activate the human brain for some time, but not where primary taste types such as sweet, sour, salty, and bitter are distinguished,” says one of the team, Adam Anderson from Cornell University in New York.

“By using some new techniques that analyse fine-grained activity patterns, we found a specific portion of the insular cortex – an older cortex in the brain hidden behind the neocortex – represents distinct tastes.”

Anderson and his team used detailed fMRI scans of 20 adults as well as a new statistical model to dig deeper than previous studies into the link between the insular cortex and taste. This helped separate the taste response from other related responses – like the disgust we might feel when eating something sour or bitter.

Part of the problem in pinning down the taste-testing parts of the brain is that multiple regions of neurons get busy whenever we’re eating something. However, this study helps to cut through some of that noise.

In particular, it seems that different tastes don’t necessarily affect different parts of the insular cortex, but rather prompt different patterns of activity. Those patterns help the brain determine what it’s tasting.

For example, one particular section of the insular cortex was found to light up – in terms of neural activity – whenever something sweet was tasted. It’s a literal sweet spot, in other words, but the study also showed that different brains have different wiring.

“While we identified a potential sweet spot, its precise location differed across people and this same spot responded to other tastes, but with distinct patterns of activity,” says Anderson.

“To know what people are tasting, we have to take into account not only where in the insula is stimulated, but also how.”

The work follows on from previous research showing just how big a role the brain plays in perceiving taste. It used to be thought that receptors on the tongue did most of the taste testing, but now it seems the brain is largely in charge of the process.

That prior study showed how switching certain brain cells on and off in mice was enough to prevent them from distinguishing between sweet and bitter. The conclusion is that while the tongue does identify certain chemicals, it’s the brain that interprets them.

The new research adds even more insight into what’s going on in the brain in humans when we need to work out what we’re tasting – and shows just how important a job the insular cortex is doing.

“The insular cortex represents experiences from inside our bodies,” says Anderson. “So taste is a bit like perceiving our own bodies, which is very different from other external senses such as sight, touch, hearing or smell.”

The research has been published in Nature Communications.

by Parashkev Nachev

“Why can’t you just relax into it?” is a question many of us have asked in frustration with ourselves or others – be it on the dance floor, the sporting field or in rather more private circumstances. The task typically requires us to respond spontaneously to external events, without any deliberation whatsoever. It ought to be easy – all you have to do is let go – yet it can be infuriatingly difficult.

“Stop thinking about it!” is the standard remedial advice, although cancelling thought with thought is something of a paradox. The retort, “I am trying!”, is equally puzzling, for deliberate intent is precisely what we are here struggling to avoid. So what is this act of choosing not to choose, of consciously relinquishing control over our actions? Our new study, published in Communications Biology, has finally provided insights into how this capacity is expressed in the brain.

Astonishingly, this fundamental human phenomenon has no name. It might have escaped academic recognition entirely had the German philosopher Friedrich Nietzsche not given it a brilliant gloss in his first book The Birth of Tragedy, itself a paradoxical work of philosophy in tacitly encouraging the reader to stop reading and get a drink instead. Whereas other thinkers saw culture on a single continuum, evolving into ever greater refinement, order and rationality, Nietzsche saw it as distributed across two radically different but equally important planes.

Perpendicular to the conventional “Apolline” dimension of culture, he introduced the “Dionysiac”: chaotic, spontaneous, vigorous and careless of the austere demands of rationality. Neither aspect was held to be superior, each may be done badly or well, and both are needed for a civilisation to find its most profound creative expression. Every Batman needs a Joker, he might have said, had he lived in a more comical age.

Of course, Nietzsche was not the first to observe that human beings sometimes behave with wanton abandon. His innovation consisted in realising it is a constitutional characteristic we could and should develop. And as with any behavioural characteristic, the facility to acquire it will vary from one person to another.

Seeing the light

As Dionysus and neuroscientists are mostly strangers, it should come as no surprise that the capacity for “meta-volition” – to give it a name that captures the notion of choosing not to choose one’s actions – has until now escaped experimental study. To find out how our brains allow us to give up control and explain why some of us are better at it than others, my colleagues and I wanted to develop a behavioural test and examine the patterns of brain activity that go with lesser or greater ability.

Most tests in behavioural neuroscience pit conscious, deliberate, complex actions against their opposites, measuring the power to suppress them. A classic example is the anti-saccade task, which purportedly measures “cognitive control”. Participants are instructed not to look towards the light when they see a brief flash in the visual periphery, but instead to the opposite side. That’s hard to do because looking towards the light is the natural inclination. People who are better at this are said to have greater cognitive control.

To measure how good people are at relinquishing control, we cannot simply flip a task around. If people are asked to look into the light, will and instinct are placed in perfect agreement. To put the two in opposition, we must make the automatic task unconscious so that volition could only be a hindrance.

White matter map of the brain (ray traced rendering), with the area correlated with spontaneity in red. Credit: Parashkev Nachev

It turns out that this is easy to do by flashing two lights on opposite sides of the visual periphery nearly simultaneously, and asking the subject to orient as fast as possible to the one they see first. If one flash comes a few dozen milliseconds before the other, people typically develop an automatic bias toward the first flash. You need at least double that amount of time to reach the threshold for consciously detecting which one comes first. Thinking about which came first can only impair your performance, because your instinct operates well beneath the threshold at which the conscious mind gets a foothold.
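A toy model makes the logic concrete. The thresholds below are illustrative guesses consistent with the rough figures in the text (“a few dozen milliseconds” for the automatic bias, about double that for conscious detection), not measured values:

```python
# Illustrative thresholds in milliseconds; real values vary by person.
REFLEX_THRESHOLD_MS = 30      # automatic bias to the earlier flash
CONSCIOUS_THRESHOLD_MS = 60   # roughly double: conscious order judgement

def orients_to_first(gap_ms, deliberating):
    """One idealised trial: does the subject orient to the earlier flash?

    Deliberating only helps once the gap between flashes is long enough
    to be consciously detectable; below that, it can only get in the way.
    """
    threshold = CONSCIOUS_THRESHOLD_MS if deliberating else REFLEX_THRESHOLD_MS
    return gap_ms >= threshold

# A 40 ms gap sits above the reflex threshold but below the conscious one:
print(orients_to_first(40, deliberating=False))  # True
print(orients_to_first(40, deliberating=True))   # False
```

In this idealised picture, relying on instinct succeeds at gaps where deliberation fails, which is exactly the opposition the task is designed to create.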

Amazingly for such a simple task, people vary dramatically in their ability. Some – the Dionysiacs – effortlessly relax into allowing themselves to be guided by the first light, requiring no more than a few milliseconds between the flashes. Others – the Apollines – cannot let go, even when the flashes are many times further apart. Since trying harder does not help, the differences are not a matter of effort but appear to be part of who we are.

We used magnetic resonance imaging to investigate the brains of people performing the task, focusing on white matter – the brain’s wiring. A striking picture emerged. Extensive sections of the wiring of the right prefrontal lobe, a region heavily implicated in complex decision making, were revealed to be stronger in those who were worse at the task: the Apollines. The more developed the neural substrates of volition, it seems, the harder they are to switch off.

By contrast, no part of the Dionysiac brain showed evidence of stronger wiring. Suppressing volition thus appears to depend not on a better-developed “meta-volitional centre” but on the interplay between spontaneous and deliberate actions. We can think of it as two coalitions of brain cells in competition, with the outcome dependent on the relative strength of the teams, not the qualities of any umpire.

The competitive brain

The results demonstrate how the brain operates by competition at least as much as by cooperation. It may fail at a task not because it does not have the power, but because another, more dominant power stands in opposition. Our decisions reflect the outcomes of battles between warring factions that differ in their characteristics and evolutionary lineage, battles we can do little to influence because we are ourselves their products.

People also differ widely in their qualities, including spontaneity, not because evolution has not yet arrived at an optimum, but because it seeks to diversify the field as far as possible. That’s why it creates individuals tuned to respond to their environment in very different ways. The task of evolution is less to optimise a species for the present than to prepare it for a multiplicity of futures unknown.

That our lives are now dominated by a rational, Apolline order does not mean we shall not one day descend into an instinctual, Dionysiac chaos. Our brains are ready for it – our culture should be too.

Clumps of harmful proteins that interfere with brain functions have been partially cleared in mice using nothing but light and sound.

Research led by MIT has found strobe lights and a low pitched buzz can be used to recreate brain waves lost in the disease, which in turn remove plaque and improve cognitive function in mice engineered to display Alzheimer’s-like behaviour.

It’s a little like using light and sound to trigger their own brain waves to help fight the disease.

This technique hasn’t been clinically trialled in humans as yet, so it’s too soon to get excited – brain waves are known to work differently in humans and mice.

But, if replicated, these early results hint at a possible cheap and drug-free way to treat the common form of dementia.

So how does it work?

Building on a previous study which showed that flashing light into the eyes of engineered mice 40 times a second treated their version of Alzheimer’s disease, researchers added sound of a similar frequency and found it dramatically improved their results.

“When we combine visual and auditory stimulation for a week, we see the engagement of the prefrontal cortex and a very dramatic reduction of amyloid,” says Li-Huei Tsai, one of the researchers from MIT’s Picower Institute for Learning and Memory.

It’s not the first study to investigate the role sound can play in clearing the brain of the tangles and clumps of tau and amyloid proteins at least partially responsible for the disease.

Previous studies showed bursts of ultrasound make blood vessels leaky enough to allow powerful treatments to slip into the brain, while also encouraging the nervous system’s waste-removal experts, microglia, to pick up the pace.

Several years ago, Tsai discovered light flickering at a frequency of about 40 flashes a second had similar benefits in mice engineered to build up amyloid in their brain’s nerve cells.

“The result was so mind-boggling and so robust, it took a while for the idea to sink in, but we knew we needed to work out a way of trying out the same thing in humans,” Tsai told Helen Thomson at Nature at the time.

The only problem was this effect was confined to visual parts of the brain, missing key areas that contribute to the formation and retrieval of memory.

While the method’s practical applications looked a little limited, the results pointed to a way oscillations could help the brain recover from the grip of Alzheimer’s disease.

As our brain’s neurons transmit signals, they also generate electromagnetic waves that help keep remote regions in sync – so-called ‘brain waves’.

One such set of oscillations are defined as gamma-frequencies, rippling across the brain at around 30 to 90 waves per second. These brain waves are most active when we’re paying close attention, searching our memories in order to make sense of what’s going on.

Tsai’s previous study had suggested these gamma waves are impeded in individuals with Alzheimer’s, and might play a pivotal role in the pathology itself.

Light was just one way to trick the parts of the brain into humming in the key of gamma. Sounds can also manage this in other areas.

Instead of the high pitched scream of ultrasound, Tsai used a much lower droning noise of just 40 Hertz, a sound only just high enough for humans to hear.
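The stimulus itself is simple to construct. A minimal sketch (assuming a standard 44.1 kHz audio sample rate, which the article does not specify) generates one second of a 40 Hz sine tone and checks its dominant frequency:

```python
import numpy as np

SAMPLE_RATE = 44100  # assumed standard audio rate, samples per second
FREQ_HZ = 40         # the gamma-band frequency used in the study
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
tone = np.sin(2 * np.pi * FREQ_HZ * t)

# The spectrum of the one-second tone should peak at exactly 40 Hz.
spectrum = np.abs(np.fft.rfft(tone))
peak_hz = np.fft.rfftfreq(len(tone), d=1 / SAMPLE_RATE)[np.argmax(spectrum)]
print(peak_hz)  # 40.0
```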

Exposing their mouse subjects to just one hour of this monotonous buzz every day for a week led to a significant drop in the amount of amyloid build-up in the auditory regions, while also stimulating those microglial cells and blood vessels.

“What we have demonstrated here is that we can use a totally different sensory modality to induce gamma oscillations in the brain,” says Tsai.

As an added bonus, it also helped clear the nearby hippocampus – an important section associated with memory.

The effects weren’t just evident in the test subjects’ brain chemistry. Functionally, mice exposed to the treatment performed better in a range of cognitive tasks.

Adding the light therapy from the previous study saw an even more dramatic effect, clearing plaques in a number of areas across the brain, including in the prefrontal cortex. Those trash-clearing microglia also went to town.

“These microglia just pile on top of one another around the plaques,” says Tsai.

Discovering new mechanisms in the way nervous systems clear waste and synchronise activity is a huge step forward in the development of treatments for all kinds of neurological disorders.

Translating discoveries like this to human brains will take more work, especially when there are potential contrasts in how gamma waves appear in mice and human Alzheimer’s brains.

So far, early safety testing has shown no clear side effects from the process.

This research was published in Cell.

Neuroscientists can read brain activity to predict decisions 11 seconds before people act

Free will, from a neuroscience perspective, can look quite quaint. In a study published this week in the journal Scientific Reports, researchers in Australia were able to predict basic choices participants made 11 seconds before they consciously declared their decisions.

In the study, 14 participants—each placed in an fMRI machine—were shown two patterns, one of red horizontal stripes and one of green vertical stripes. They were given a maximum of 20 seconds to choose between them. Once they’d made a decision, they pressed a button and had 10 seconds to visualize the pattern as hard as they could. Finally, they were asked “what did you imagine?” and “how vivid was it?” They answered these questions by pressing buttons.

Using the fMRI to monitor brain activity and machine learning to analyze the neuroimages, the researchers were able to predict which pattern participants would choose up to 11 seconds before they consciously made the decision. And they were able to predict how vividly the participants would be able to envisage it.
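The decoding idea can be sketched in miniature. The snippet below is not the study’s pipeline (which applied machine learning to real fMRI images); it just shows the general pattern-classification principle on synthetic data, with the voxel count, templates, and nearest-template rule all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

N_VOXELS = 100  # hypothetical voxel count in a region of interest

# Synthetic stand-ins for the two stimulus-specific activity patterns.
template_red = rng.standard_normal(N_VOXELS)
template_green = rng.standard_normal(N_VOXELS)

def predict_choice(pattern):
    """Decode a pre-decision pattern by which template it correlates
    with more strongly (a simple nearest-template classifier)."""
    r_red = np.corrcoef(pattern, template_red)[0, 1]
    r_green = np.corrcoef(pattern, template_green)[0, 1]
    return "red" if r_red > r_green else "green"

# A noisy pattern leaning toward the 'red' template is decoded as a
# forthcoming 'red' choice, before any explicit report is made.
noisy = template_red + 0.5 * rng.standard_normal(N_VOXELS)
print(predict_choice(noisy))  # red
```

The study’s striking result is that patterns informative enough for this kind of classification were present well before participants reported having decided.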

Lead author Joel Pearson, cognitive neuroscience professor at the University of New South Wales in Australia, said that the study suggests traces of thoughts exist unconsciously before they become conscious. “We believe that when we are faced with the choice between two or more options of what to think about, non-conscious traces of the thoughts are there already, a bit like unconscious hallucinations,” he said in a statement. “As the decision of what to think about is made, executive areas of the brain choose the thought-trace which is stronger. In other words, if any pre-existing brain activity matches one of your choices, then your brain will be more likely to pick that option as it gets boosted by the pre-existing brain activity.”

The work has implications for how we understand uncomfortable thoughts: Pearson believes the findings explain why thinking about something only leads to more thoughts on the subject, as it creates “a positive feedback loop.” The study also suggests that unwelcome visualizations, such as those experienced with post-traumatic stress disorder, begin as unconscious thoughts.

Though this is just one study, it’s not the first to show that thoughts can be predicted before they are conscious. As the researchers note, similar techniques have been able to predict motor decisions between seven and 10 seconds before they’re conscious, and abstract decisions up to four seconds before they’re conscious. Taken together, these studies show how understanding the brain complicates our conception of free will.

Neuroscientists have long known that the brain prepares to act before you’re consciously aware, and there are just a few milliseconds between when a thought is conscious and when you enact it. Those milliseconds give us a chance to consciously reject unconscious impulses, which seems to form a foundation of free will.

Freedom, however, can be enacted by both the unconscious and conscious self—and there are neuroscientists who claim that being controlled by our own unconscious brain is hardly an affront to free will. Studies showing that neuroscientists can predict our actions long before we’re aware of them don’t necessarily negate the concept of free will, but they certainly complicate our conception of our own minds.

A team from the Department of Psychological Medicine and Department of Biochemistry at the Yong Loo Lin School of Medicine at the National University of Singapore (NUS) has found that seniors who consume more than two standard portions of mushrooms weekly may have 50 per cent reduced odds of having mild cognitive impairment (MCI).

A portion was defined as three quarters of a cup of cooked mushrooms with an average weight of around 150 grams. Two portions would be equivalent to approximately half a plate. While the portion sizes act as a guideline, it was shown that even one small portion of mushrooms a week may still be beneficial to reduce chances of MCI.

“This correlation is surprising and encouraging. It seems that a commonly available single ingredient could have a dramatic effect on cognitive decline,” said Assistant Professor Lei Feng, who is from the NUS Department of Psychological Medicine, and the lead author of this work.

The six-year study, which was conducted from 2011 to 2017, collected data from more than 600 Chinese seniors over the age of 60 living in Singapore. The research was carried out with support from the Life Sciences Institute and the Mind Science Centre at NUS, as well as the Singapore Ministry of Health’s National Medical Research Council. The results were published online in the Journal of Alzheimer’s Disease on 12 March 2019.

Determining MCI in seniors

MCI is typically viewed as the stage between the cognitive decline of normal ageing and the more serious decline of dementia. Seniors afflicted with MCI often display some form of memory loss or forgetfulness and may also show deficits in other cognitive functions such as language, attention and visuospatial abilities. However, the changes can be subtle, as they do not experience the disabling cognitive deficits that affect everyday life activities, which are characteristic of Alzheimer’s and other forms of dementia.

“People with MCI are still able to carry out their normal daily activities. So, what we had to determine in this study is whether these seniors had poorer performance on standard neuropsychological tests than other people of the same age and education background,” explained Asst Prof Feng. “Neuropsychological tests are specifically designed tasks that can measure various aspects of a person’s cognitive abilities. In fact, some of the tests we used in this study are adapted from a commonly used IQ test battery, the Wechsler Adult Intelligence Scale (WAIS).”

As such, the researchers conducted extensive interviews and tests with the senior citizens to determine an accurate diagnosis. “The interview takes into account demographic information, medical history, psychological factors, and dietary habits. A nurse will measure blood pressure, weight, height, handgrip, and walking speed. They will also do a simple screening test for cognition, depression, and anxiety,” said Asst Prof Feng.

After this, a two-hour standard neuropsychological assessment was performed, along with a dementia rating. The overall results of these tests were discussed in depth with expert psychiatrists involved in the study to get a diagnostic consensus.

Mushrooms and cognitive impairment

Six commonly consumed mushrooms in Singapore were referenced in the study. They were golden, oyster, shiitake and white button mushrooms, as well as dried and canned mushrooms. However, it is likely that other mushrooms not referenced would also have beneficial effects.

The researchers believe the reason for the reduced prevalence of MCI in mushroom eaters may be down to a specific compound found in almost all varieties. “We’re very interested in a compound called ergothioneine (ET),” said Dr. Irwin Cheah, Senior Research Fellow at the NUS Department of Biochemistry. “ET is a unique antioxidant and anti-inflammatory which humans are unable to synthesise on their own. But it can be obtained from dietary sources, one of the main ones being mushrooms.”

An earlier study by the team on elderly Singaporeans revealed that plasma levels of ET in participants with MCI were significantly lower than those of age-matched healthy individuals. The work, which was published in the journal Biochemical and Biophysical Research Communications in 2016, led to the belief that a deficiency in ET may be a risk factor for neurodegeneration, and that increasing ET intake through mushroom consumption might promote cognitive health.

Other compounds contained within mushrooms may also be advantageous for decreasing the risk of cognitive decline. Certain hericenones, erinacines, scabronines and dictyophorines may promote the synthesis of nerve growth factors. Bioactive compounds in mushrooms may also protect the brain from neurodegeneration by inhibiting the production of beta-amyloid and phosphorylated tau, and by inhibiting acetylcholinesterase.

Next steps

The potential next stage of research for the team is to perform a randomised controlled trial with the pure compound ET and other plant-based ingredients, such as L-theanine and catechins from tea leaves, to determine the efficacy of such phytonutrients in delaying cognitive decline. Such interventional studies will lead to more robust conclusions about causal relationships. In addition, Asst Prof Feng and his team also hope to identify other dietary factors that could be associated with healthy brain ageing and reduced risk of age-related conditions in the future.