Posts Tagged ‘brain’

Summary: Study identifies 104 high-risk genes for schizophrenia. One gene considered high-risk is also suspected in the development of autism.

Source: Vanderbilt University

Using a unique computational framework they developed, a team of scientist cyber-sleuths in the Vanderbilt University Department of Molecular Physiology and Biophysics and the Vanderbilt Genetics Institute (VGI) has identified 104 high-risk genes for schizophrenia.

Their discovery, which was reported April 15 in the journal Nature Neuroscience, supports the view that schizophrenia is a developmental disease, one which potentially can be detected and treated even before the onset of symptoms.

“This framework opens the door for several research directions,” said the paper’s senior author, Bingshan Li, PhD, associate professor of Molecular Physiology and Biophysics and an investigator in the VGI.

One direction is to determine whether drugs already approved for other, unrelated diseases could be repurposed to improve the treatment of schizophrenia. Another is to determine in which brain cell types these genes are active along the developmental trajectory.

Ultimately, Li said, “I think we’ll have a better understanding of how prenatally these genes predispose risk, and that will give us a hint of how to potentially develop intervention strategies. It’s an ambitious goal … (but) by understanding the mechanism, drug development could be more targeted.”

Schizophrenia is a chronic, severe mental disorder characterized by hallucinations and delusions, “flat” emotional expression and cognitive difficulties.

Symptoms usually start between the ages of 16 and 30. Antipsychotic medications can relieve symptoms, but there is no cure for the disease.

Genetics plays a major role. While schizophrenia occurs in 1% of the population, the risk rises sharply to 50% for a person whose identical twin has the disease.

Recent genome-wide association studies (GWAS) have identified more than 100 loci, or fixed positions on different chromosomes, associated with schizophrenia. That may not be where high-risk genes are located, however. The loci could be regulating the activity of the genes at a distance — nearby or very far away.

To solve the problem, Li, with first authors Rui Chen, PhD, research instructor in Molecular Physiology and Biophysics, and postdoctoral research fellow Quan Wang, PhD, developed a computational framework they called the “Integrative Risk Genes Selector.”

The framework pulled the top genes from previously reported loci based on their cumulative supporting evidence from multi-dimensional genomics data as well as gene networks.

Which genes have high rates of mutation? Which are expressed prenatally? These are the kinds of questions a genetic “detective” might ask to identify and narrow the list of “suspects.”
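The evidence-weighing that these "detective" questions describe can be sketched as a toy scoring scheme. This is purely illustrative: the gene names and feature values below are invented, and the actual framework integrates far richer genomic and network data than a simple sum.

```python
# Toy illustration of ranking candidate genes by cumulative supporting
# evidence. Gene names and feature scores are invented for the example;
# the real framework combines multi-dimensional genomics data and
# gene networks with more sophisticated statistics.
candidate_genes = {
    "GENE_A": {"mutation_burden": 0.9, "prenatal_expression": 0.8, "network_score": 0.7},
    "GENE_B": {"mutation_burden": 0.2, "prenatal_expression": 0.4, "network_score": 0.3},
    "GENE_C": {"mutation_burden": 0.6, "prenatal_expression": 0.1, "network_score": 0.5},
}

def cumulative_evidence(features):
    """Combine independent lines of evidence into one score."""
    return sum(features.values())

ranked = sorted(candidate_genes,
                key=lambda g: cumulative_evidence(candidate_genes[g]),
                reverse=True)
print(ranked)  # ['GENE_A', 'GENE_C', 'GENE_B']
```

Genes with strong support across several independent lines of evidence float to the top of the list, which is the intuition behind narrowing more than 100 loci down to 104 high-risk genes.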

The result was a list of 104 high-risk genes, some of which encode proteins targeted in other diseases by drugs already on the market. One gene is suspected in the development of autism spectrum disorder.

Much work remains to be done. But, said Chen, “Our framework can push GWAS a step forward … to further identify genes.” It also could be employed to help track down genetic suspects in other complex diseases.

Also contributing to the study were Li’s lab members Qiang Wei, PhD, Ying Ji and Hai Yang, PhD; VGI investigators Xue Zhong, PhD, Ran Tao, PhD, James Sutcliffe, PhD, and VGI Director Nancy Cox, PhD.

Chen also credits investigators in the Vanderbilt Center for Neuroscience Drug Discovery — Colleen Niswender, PhD, Branden Stansley, PhD, and center Director P. Jeffrey Conn, PhD — for their critical input.

Funding: The study was supported by the Vanderbilt Analysis Center for the Genome Sequencing Program and National Institutes of Health grant HG009086.


Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right), which could then be synthesized to reconstruct the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery

A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract—an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.

The new system being developed in the laboratory of Edward Chang, MD—described April 24, 2019 in Nature—demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Brief animation illustrates how patterns of brain activity from the brain’s speech centers in somatosensory cortex (top left) were first decoded into a computer simulation of a research participant’s vocal tract movements (top right), which were then translated into a synthesized version of the participant’s voice (bottom). Credit: Chang lab / UCSF Dept. of Neurosurgery. Simulated Vocal Tract Animation Credit: Speech Graphics
Virtual Vocal Tract Improves Naturalistic Speech Synthesis

The research was led by Gopala Anumanchipalli, Ph.D., a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.

From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”

In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center—patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery—to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
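The two-stage decoder/synthesizer pipeline can be sketched as a simple data-flow example. The real system used trained neural networks; the linear maps and dimensions here are invented placeholders that only illustrate how the two stages chain together.

```python
import numpy as np

# Illustrative two-stage pipeline. Assumption: the actual study trained
# recurrent neural networks on real recordings; random linear maps and
# made-up dimensions stand in here purely to show the data flow.
rng = np.random.default_rng(0)
W_decode = rng.standard_normal((33, 256))   # brain features -> vocal tract kinematics
W_synth = rng.standard_normal((32, 33))     # kinematics -> acoustic features

def decode_to_kinematics(neural):
    """Stage 1: brain activity -> articulator movements (lips, jaw, tongue, larynx)."""
    return W_decode @ neural

def synthesize(kinematics):
    """Stage 2: articulator movements -> acoustic features of the voice."""
    return W_synth @ kinematics

neural_frame = rng.standard_normal(256)     # one time step of recorded activity
audio_features = synthesize(decode_to_kinematics(neural_frame))
print(audio_features.shape)  # (32,)
```

The key design choice the article describes is exactly this intermediate representation: decoding movements first, then sounds, rather than mapping brain activity to acoustics directly.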

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.

“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance

The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.

Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF

Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.

The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”

Williams Syndrome, a rare neurodevelopmental disorder that affects about one in 10,000 babies born in the United States, produces a range of symptoms including cognitive impairments, cardiovascular problems, and extreme friendliness, or hypersociability.

In a study of mice, MIT neuroscientists have garnered new insight into the molecular mechanisms that underlie this hypersociability. They found that loss of one of the genes linked to Williams Syndrome leads to a thinning of the fatty layer that insulates neurons and helps them conduct electrical signals in the brain.

The researchers also showed that they could reverse the symptoms by boosting production of this coating, known as myelin. This is significant, because while Williams Syndrome is rare, many other neurodevelopmental disorders and neurological conditions have been linked to myelination deficits, says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research.

“The importance is not only for Williams Syndrome,” says Feng, who is one of the senior authors of the study. “In other neurodevelopmental disorders, especially in some of the autism spectrum disorders, this could be potentially a new direction to look into, not only the pathology but also potential treatments.”

Zhigang He, a professor of neurology and ophthalmology at Harvard Medical School, is also a senior author of the paper, which appears in the April 22 issue of Nature Neuroscience. Former MIT postdoc Boaz Barak, currently a principal investigator at Tel Aviv University in Israel, is the lead author and a senior author of the paper.

Impaired myelination

Williams Syndrome, which is caused by the loss of one of the two copies of a segment of chromosome 7, can produce learning impairments, especially for tasks that require visual and motor skills, such as solving a jigsaw puzzle. Some people with the disorder also exhibit poor concentration and hyperactivity, and they are more likely to experience phobias.

In this study, the researchers decided to focus on one of the 25 genes in that segment, known as Gtf2i. Based on studies of patients with a smaller subset of the genes deleted, scientists have linked the Gtf2i gene to the hypersociability seen in Williams Syndrome.

Working with a mouse model, the researchers devised a way to knock out the gene specifically from excitatory neurons in the forebrain, which includes the cortex, the hippocampus, and the amygdala (a region important for processing emotions). They found that these mice did show increased levels of social behavior, measured by how much time they spent interacting with other mice. The mice also showed deficits in fine motor skills and increased anxiety in nonsocial contexts, which are also symptoms of Williams Syndrome.

Next, the researchers sequenced the messenger RNA from the cortex of the mice to see which genes were affected by loss of Gtf2i. Gtf2i encodes a transcription factor, so it controls the expression of many other genes. The researchers found that about 70 percent of the genes with significantly reduced expression levels were involved in the process of myelination.
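The tally described above amounts to a set overlap between differentially expressed genes and a pathway gene set. The sketch below is purely illustrative: the gene lists are invented stand-ins (a few real myelin gene names are mixed in for flavor), not the study's actual data.

```python
# Illustrative only: what fraction of significantly down-regulated genes
# fall in a known pathway gene set. These gene lists are invented for the
# example and are NOT the study's data.
downregulated = {"Mbp", "Plp1", "Mag", "Mog", "Sox10",
                 "GeneV", "GeneW", "GeneX", "GeneY", "GeneZ"}
myelination_set = {"Mbp", "Plp1", "Mag", "Mog", "Sox10", "Cnp", "Mobp"}

overlap = downregulated & myelination_set
fraction = len(overlap) / len(downregulated)
print(f"{fraction:.0%} of down-regulated genes are myelination-related")  # 50%
```

In practice such overlaps are evaluated with enrichment statistics (e.g., a hypergeometric test) rather than a raw percentage, but the raw fraction is what the 70 percent figure above reports.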

“Myelin is the insulation layer that wraps the axons that extend from the cell bodies of neurons,” Barak says. “When they don’t have the right properties, it will lead to faster or slower electrical signal transduction, which affects the synchronicity of brain activity.”

Further studies revealed that the mice had only about half the normal number of mature oligodendrocytes—the brain cells that produce myelin. However, the number of oligodendrocyte precursor cells was normal, so the researchers suspect that the maturation and differentiation processes of these cells are somehow impaired when Gtf2i is missing in the neurons.

This was surprising because Gtf2i was not knocked out in oligodendrocytes or their precursors. Thus, knocking out the gene in neurons may somehow influence the maturation process of oligodendrocytes, the researchers suggest. It is still unknown how this interaction might work.

“That’s a question we are interested in, but we don’t know whether it’s a secreted factor, or another kind of signal or activity,” Feng says.

In addition, the researchers found that the myelin surrounding axons of the forebrain was significantly thinner than in normal mice. Furthermore, electrical signals were smaller and took more time to cross the brain in mice lacking Gtf2i.

Symptom reversal

It remains to be discovered precisely how this reduction in myelination leads to hypersociability. The researchers suspect that the lack of myelin affects brain circuits that normally inhibit social behaviors, making the mice more eager to interact with others.

“That’s probably the explanation, but exactly which circuits and how does it work, we still don’t know,” Feng says.

The researchers also found that they could reverse the symptoms by treating the mice with drugs that improve myelination. One of these drugs, an FDA-approved antihistamine called clemastine fumarate, is now in clinical trials to treat multiple sclerosis, which affects myelination of neurons in the brain and spinal cord. The researchers believe it would be worthwhile to test these drugs in Williams Syndrome patients because they found thinner myelin and reduced numbers of mature oligodendrocytes in brain samples from human subjects who had Williams Syndrome, compared to typical human brain samples.

“Mice are not humans, but the pathology is similar in this case, which means this could be translatable,” Feng says. “It could be that in these patients, if you improve their myelination early on, it could at least improve some of the conditions. That’s our hope.”

Such drugs would likely help mainly the social and fine-motor issues caused by Williams Syndrome, not the symptoms that are produced by deletion of other genes, the researchers say. They may also help treat other disorders, such as autism spectrum disorders, in which myelination is impaired in some cases, Feng says.

“We think this can be expanded into autism and other neurodevelopmental disorders. For these conditions, improved myelination may be a major factor in treatment,” he says. “We are now checking other animal models of neurodevelopmental disorders to see whether they have myelination defects, and whether improved myelination can improve some of the pathology of the defects.”

More information: Neuronal deletion of Gtf2i, associated with Williams syndrome, causes behavioral and myelin alterations rescuable by a remyelinating drug, Nature Neuroscience (2019). DOI: 10.1038/s41593-019-0380-9

Summary: A new study looks at Leonardo da Vinci’s contribution to neuroscience and the advancement of modern sciences.

Source: Profiles, Inc.

May 2, 2019, marks the 500th anniversary of Leonardo da Vinci’s death. A cultural icon, artist, engineer and experimentalist of the Renaissance period, Leonardo continues to inspire people around the globe. Jonathan Pevsner, PhD, professor and research scientist at the Kennedy Krieger Institute, wrote an article featured in the April edition of The Lancet titled, “Leonardo da Vinci’s studies of the brain.” In the piece, Pevsner highlights the exquisite drawings and curiosity, dedication and scientific rigor that led Leonardo to make penetrating insights into how the brain functions.

Through his research, Pevsner shares that Leonardo was the first to identify the olfactory nerve as a cranial nerve. He details how Leonardo performed intricate studies on the peripheral nervous system, challenging the findings of earlier authorities and introducing methods centuries earlier than other anatomists and physiologists. Pevsner also delves into Leonardo’s pioneering experiment on the ventricles by replicating his technique of injecting wax to make a cast of the ventricles in the brain to determine their overall shape and size. This further demonstrates Leonardo’s original thinking and advanced intelligence.

“Leonardo’s work reflects the emergence of the modern scientific era and forms a key part of his integrative approach to art and science,” said Pevsner.

“He asked questions about how the brain works in health and in disease. He sought to understand changes in the brain that occur in epilepsy, or why the mental state of a pregnant mother can directly affect the physical well-being of her child. At the Kennedy Krieger Institute, many of us struggle to answer the same questions. While science and technology have advanced at a breathtaking pace, we still need Leonardo’s qualities of passion, curiosity, the ability to visualize knowledge, and clear thinking to guide us forward.”

While Pevsner is viewed as an expert in Leonardo da Vinci, his main profession and passion is research into the molecular basis of childhood and adult brain disorders in his lab at Kennedy Krieger Institute. His lab reported the mutation that causes Sturge-Weber syndrome, and ongoing studies include bipolar disorder, autism spectrum disorder and schizophrenia. He is the author of the textbook, Bioinformatics and Functional Genomics.

by Amirah Al Idrus

When Nicolas Tremblay put three electroencephalogram (EEG) electrodes into a baseball cap, he was trying to build a tool to track focus in children with ADHD. He was pitching the device at a health hackathon last October when a nurse from the Montreal Heart Institute approached him with an idea: What if it could be modified for use in hospitals to diagnose patients with delirium?

Delirium—a state of confusion characterized by reduced awareness of the sufferer’s environment—comes on suddenly and can last from hours to days. The American Delirium Society estimates the condition affects more than 7 million hospitalized Americans each year and, according to a Harvard Health report, delirium is the most common complication of hospitalization in people 65 and older.

Compared to hospitalized patients without delirium, those who suffer delirium tend to stay longer in the hospital and are more likely to develop dementia or other types of cognitive impairment and need long-term care after leaving the hospital. Delirium is commonly detected via the Confusion Assessment Method, which helps health professionals identify problems with attention, memory, orientation and visual ability. Essentially, patients are asked a set of questions to assess their mental state. Though the method is standardized, it is not an objective test for the condition. What’s more, this approach doesn’t detect delirium early.

“Current methods are only able to detect delirium when the brain is already malfunctioning,” Tremblay said. “When delirium is detected at a later stage, it takes longer to bring the patient back. It costs a lot to the hospital because they have to keep the patient in hospital to revert delirium.”

NeuroServo set about creating a device to catch attention problems in hospitalized patients early, before these deficits manifest physically. Its educational tool, the electrode-fitted hat, measures electrical activity in the brain and signals attention—or lack thereof—via a built-in light that changes color. The device can also send EEG results via Bluetooth to a tablet app used by a teacher.

With input from doctors and nurses, NeuroServo developed a sterile version of the device, a disposable plastic strip holding three EEG electrodes that can be adhered to the patient’s forehead. It attaches to a portable EEG module that clips onto the patient’s jacket.

Using EEG to detect delirium isn’t a new concept; there is scientific evidence that delirium can be detected with EEG, Tremblay said. But using a traditional EEG on large numbers of patients just isn’t practical: the equipment is cumbersome, the process can require as many as 256 electrodes placed all over the scalp, and a neurologist is needed to interpret the results.

NeuroServo’s device uses several algorithms specialized in a specific area of signal analysis, Tremblay said.

“The sum of these analyses is then used to return an easy-to-read graph and results to the nurse or caregiver,” he said.

As for the number of electrodes, NeuroServo’s electronics and algorithms are designed to obtain the best medical-grade EEG signal out of the forehead. “This allows us to carefully track brain signals in the prefrontal cortex, which is responsible for executive functions like attention control or cognitive flexibility,” Tremblay said.
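For context on what tracking attention from a frontal EEG channel can look like computationally, here is a minimal sketch using a classic frontal-EEG marker from the ADHD literature, the theta/beta band-power ratio. This is an assumption-laden illustration on synthetic data; NeuroServo's actual algorithms are proprietary and not described in the article.

```python
import numpy as np

# Illustrative only: compute band power from a single frontal EEG channel
# and form the theta/beta ratio, a marker studied in ADHD attention research.
# The signal here is synthetic (a 6 Hz "theta" sine plus a weaker 20 Hz
# "beta" sine); the sampling rate is an assumed value.
fs = 256                                    # sampling rate in Hz (assumed)
t = np.arange(fs * 4) / fs                  # 4 seconds of signal
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

def band_power(signal, fs, lo, hi):
    """Mean spectral power between lo and hi Hz via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

theta = band_power(eeg, fs, 4, 8)     # theta band: 4-8 Hz
beta = band_power(eeg, fs, 13, 30)    # beta band: 13-30 Hz
print(theta / beta)  # higher ratios are often associated with inattention
```

A real device would run this kind of analysis continuously on streaming data and combine several such features, as Tremblay's description of summing multiple specialized analyses suggests.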

He hopes to keep serving the educational market even as NeuroServo makes a push into the medtech sector. The company is still selling the cap for kids with ADHD, and the device is currently in a pilot study in France in children with autism spectrum disorder. As for its use as a delirium diagnostic tool, the Montreal Heart Institute is kicking off a pilot study this month. McGill University Health Centre will start a pilot later this year, and NeuroServo is working on a third study at a hospital in Boston.

What comes next depends on the outcome of those studies.

“We are waiting for the pilot results to be able to apply for approval from Health Canada, the FDA and so on,” Tremblay said.

NeuroServo is just one player working to make EEG possible for an area in which it has historically not been viable. Mountain View, California-based Ceribell came up with a portable device that quickly detects nonconvulsive seizures in ICU patients. Like NeuroServo’s device, Ceribell’s system doesn’t require a specialist to read its results—instead, it converts EEG signals into sound for a yes/no diagnosis within minutes.

Two-photon imaging shows neurons firing in a mouse brain. Recordings like this enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors. The image is credited to Yiyang Gong, Duke University.

Summary: Convolutional neural network model significantly outperforms previous methods and is as accurate as humans in segmenting active and overlapping neurons.

Source: Duke University

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, based on using artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals. These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process. Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is fussy and slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time — data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In their framework, this team created an algorithm that could process both spatial and timing information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving the accuracy.
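The input/output relationship of such a segmentation step can be illustrated with a deliberately simple stand-in: collapse each pixel's activity over time, then threshold it into a mask. The Duke method used a trained 3D convolutional network, not this thresholding; the sketch only shows what "video in, neuron mask out" means.

```python
import numpy as np

# Minimal stand-in for spatiotemporal neuron segmentation. A synthetic
# recording contains one "neuron" that flashes briefly; we summarize each
# pixel's activity over time and threshold to get a candidate mask.
# The actual Duke algorithm is a trained 3D convolutional network that
# learns to do this far more robustly on real two-photon data.
video = np.zeros((100, 64, 64))             # (frames, height, width)
video[10:20, 30:35, 30:35] = 1.0            # a 5x5 "neuron" active in frames 10-19

temporal_max = video.max(axis=0)            # collapse time: peak activity per pixel
mask = temporal_max > 0.5                   # binary segmentation mask

print(mask.sum())  # pixels assigned to the active neuron -> 25
```

Simple projections like this fail exactly where the article says humans struggle—overlapping neurons in different layers—which is why a learned model that uses the timing of each spark, not just its peak brightness, is needed.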

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.

CTE is a neurodegenerative disease that has been associated with a history of repetitive head impacts, including those that may or may not be associated with concussion symptoms in American football players. The image is in the public domain.

Summary: PET imaging of former NFL players who exhibited cognitive decline and psychiatric symptoms linked to CTE showed higher levels of tau in areas of the brain associated with the neurodegenerative disease. Using an experimental positron emission tomography (PET) scan, researchers have found elevated amounts of abnormal tau protein in brain regions affected by chronic traumatic encephalopathy (CTE) in a small group of living former National Football League (NFL) players with cognitive, mood and behavior symptoms. The study was published online in the New England Journal of Medicine.

Source: Boston University School of Medicine

The researchers also found the more years of tackle football played (across all levels of play), the higher the tau protein levels detected by the PET scan. However, there was no relationship between the tau PET levels and cognitive test performance or severity of mood and behavior symptoms.

“The results of this study provide initial support for the flortaucipir PET scan to detect abnormal tau from CTE during life. However, we’re not there yet,” cautioned corresponding author Robert Stern, PhD, professor of neurology, neurosurgery and anatomy and neurobiology at Boston University School of Medicine (BUSM). “These results do not mean that we can now diagnose CTE during life or that this experimental test is ready for use in the clinic.”

CTE is a neurodegenerative disease that has been associated with a history of repetitive head impacts, including those that may or may not be associated with concussion symptoms in American football players. At this time, CTE can only be diagnosed after death by a neuropathological examination, with the hallmark findings of the build-up of an abnormal form of tau protein in a specific pattern in the brain. Like Alzheimer’s disease (AD), CTE has been suggested to be associated with a progressive loss of brain cells. In contrast to AD, the diagnosis of CTE is based in part on the pattern of tau deposition and a relative lack of amyloid plaques.

The study was conducted in Boston and Arizona by a multidisciplinary group of researchers from BUSM, Banner Alzheimer’s Institute, Mayo Clinic Arizona, Brigham and Women’s Hospital and Avid Radiopharmaceuticals. Experimental flortaucipir PET scans were used to assess tau deposition and FDA-approved florbetapir PET scans were used to assess amyloid plaque deposition in the brains of 26 living former NFL players with cognitive, mood, and behavior symptoms (ages 40-69) and a control group of 31 same-age men without symptoms or history of traumatic brain injury. Results showed that the tau PET levels were significantly higher in the former NFL group than in the controls, and the tau was seen in the areas of the brain which have been shown to be affected in post-mortem cases of neuropathologically diagnosed CTE.

Interestingly, the former player and control groups did not differ in their amyloid PET measurements. Indeed, only one former player had amyloid PET measurements comparable to those seen in Alzheimer’s disease.

“Our findings suggest that mild cognitive, emotional, and behavioral symptoms observed in athletes with a history of repetitive impacts are not attributable to AD, and they provide a foundation for additional research studies to advance the scientific understanding, diagnosis, treatment, and prevention of CTE in living persons,” said co-author Eric Reiman, MD, Executive Director of Banner Alzheimer’s Institute in Phoenix, Arizona. “More research is needed to draw firm conclusions, and contact sports athletes, their families, and other stakeholders are waiting.”

With support from NIH, the authors are working with additional researchers to conduct a longitudinal study called the DIAGNOSE CTE Research Project in former NFL players, former college football players, and persons without a history of contact sports play to help address these and other important questions. Initial results of that study are expected in early 2020.