by David Nield

Since the 1980s scientists have spotted a link between naval sonar systems and beaked whales seemingly killing themselves – by deliberately getting stranded on beaches. Now, researchers might have revealed the horrifying reason why.

In short, the sound pulses appear to scare the whales to death, acting like a shot of adrenaline might in a human, and causing deadly changes in their otherwise perfectly calibrated diving techniques.

By studying mass stranding events (MSEs) from recent history, the team found that beaked whales bring a sort of decompression sickness (also known as ‘the bends’ or ‘divers’ disease’) on themselves when they sense sonar. When panicked, their veins fill up with nitrogen gas bubbles, their brains suffer severe haemorrhaging, and other organs get damaged.

“In the presence of sonar they are stressed and swim vigorously away from the sound source, changing their diving pattern,” one of the researchers, Yara Bernaldo de Quiros from the University of Las Palmas de Gran Canaria in Spain, told AFP.

“The stress response, in other words, overrides the diving response, which makes the animals accumulate nitrogen.”

The end result is these poor creatures die in agony after getting the whale version of the bends – not something you would normally expect from whales that are so adept at navigating deep underwater.

Typically, these animals naturally lower their heart rate to reduce oxygen use and prevent nitrogen build-up when they plunge far below the surface. Tragically, it appears that a burst of sonar actually overrides these precautions.

The researchers weighed up the evidence from some 121 MSEs between the years 1960 and 2004, and particularly focussed on the autopsies of 10 dead whales stranded in the Canary Islands in 2002 after a nearby naval exercise.

It’s here that the decompression sickness effects were noticed, as they have been in other stranding events that the researchers looked at.

While the team notes that the effects of sonar on whales seem to “vary among individuals or populations”, and “predisposing factors may contribute to individual outcomes”, there does seem to be a common thread in terms of what happens to these unsuspecting mammals.

That’s especially true for Cuvier’s beaked whale (Ziphius cavirostris) – of the 121 MSEs we’ve mentioned, 61 involved Cuvier’s beaked whales, and the researchers say they appear particularly vulnerable to sonar.

There’s also a particular kind of sonar to be worried about: mid-frequency active sonar (MFAS), in the range of about 5 kilohertz.

Now the researchers behind the new report want to see the use of such sonar technology banned in areas where whales are known to live – such a ban has been in place in the Canary Islands since the 2002 incident.

“Up until then, the Canaries were a hotspot for this kind of atypical stranding,” de Quiros told AFP. “Since the moratorium, none have occurred.”

The research has been published in Proceedings of the Royal Society B.

https://www.sciencealert.com/this-is-the-horrifying-reason-why-sonar-makes-beaked-whales-beach-themselves


by SIDNEY FUSSELL

Walgreens is piloting a new line of “smart coolers”—fridges equipped with cameras that scan shoppers’ faces and make inferences on their age and gender. On January 14, the company announced its first trial at a store in Chicago, and it plans to equip stores in New York and San Francisco with the tech.

Demographic information is key to retail shopping. Retailers want to know what people are buying, segmenting shoppers by gender, age, and income (to name a few characteristics) and then targeting them precisely. To that end, these smart coolers are a marvel.

If, for example, Pepsi launched an ad campaign targeting young women, it could use smart-cooler data to see if its campaign was working. These machines can draw all kinds of useful inferences: Maybe young men buy more Sprite if it’s displayed next to Mountain Dew. Maybe older women buy more ice cream on Thursday nights than any other day of the week. The tech also has “iris tracking” capabilities, meaning the company can collect data on which displayed items are the most looked at.

Crucially, the “Cooler Screens” system does not use facial recognition. Shoppers aren’t identified when the fridge cameras scan their face. Instead, the cameras analyze faces to make inferences about shoppers’ age and gender. First, the camera takes their picture, which an AI system then analyzes, measuring, say, the width of someone’s eyes, the distance between their lips and nose, and other micro measurements. From there, the system can estimate if the person who opened the door is, say, a woman in her early 20s or a man in his late 50s. It’s analysis, not recognition.
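To make the distinction concrete, here is a rough Python sketch of what analysis-without-recognition could look like. Everything in it is hypothetical: the landmark names, the measurements, and the tiny training set; the article does not describe Cooler Screens’ actual pipeline.

```python
# Illustrative sketch only: landmark names, measurements, and the toy training
# set are made up. The idea is "analysis, not recognition": estimate a coarse
# demographic bucket from facial proportions, never an identity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def micro_measurements(landmarks):
    """Reduce facial landmarks (x, y pixel coordinates) to a few anonymous ratios."""
    eye_width = np.linalg.norm(landmarks["eye_outer_l"] - landmarks["eye_inner_l"])
    lip_to_nose = np.linalg.norm(landmarks["nose_tip"] - landmarks["lip_top"])
    face_width = np.linalg.norm(landmarks["jaw_l"] - landmarks["jaw_r"])
    # Dividing by face width keeps the features about proportions, not about
    # any one person's absolute face size.
    return np.array([eye_width / face_width, lip_to_nose / face_width])

# Toy training set (hypothetical): proportion features -> coarse age/gender buckets.
X_train = np.array([[0.28, 0.20], [0.31, 0.18], [0.25, 0.23], [0.27, 0.21]])
y_train = ["woman_20s", "man_50s", "woman_20s", "man_50s"]
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# At the cooler: a face detector supplies landmarks; only the bucket is recorded.
shopper = {
    "eye_outer_l": np.array([120.0, 210.0]), "eye_inner_l": np.array([150.0, 212.0]),
    "nose_tip": np.array([165.0, 250.0]),    "lip_top": np.array([165.0, 272.0]),
    "jaw_l": np.array([95.0, 260.0]),        "jaw_r": np.array([235.0, 262.0]),
}
print(model.predict([micro_measurements(shopper)]))  # prints the estimated bucket
```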

The distinction between the two is very important. In Illinois, facial recognition in public is outlawed under BIPA, the Biometric Information Privacy Act. For two years, Google and Facebook fought class-action suits filed under the law, after plaintiffs claimed the companies obtained their facial data without their consent. Home-security cams with facial-recognition abilities, such as Nest or Amazon’s Ring, also have those features disabled in the state; even Google’s viral “art selfie” app is banned. The suit against Facebook was dismissed in January, but privacy advocates champion BIPA as a would-be template for a world where facial recognition is federally regulated.

Walgreens’s camera system makes note only of what shoppers picked up and basic information on their age and gender. Last year, a Canadian mall used cameras to track shoppers and make inferences about which demographics prefer which stores. Shoppers’ identities weren’t collected or stored, but the mall ended the pilot after widespread backlash.

The smart cooler is just one of dozens of tracking technologies emerging in retail. At Amazon Go stores, for example—which do not have cashiers or self-checkout stations—sensors make note of shoppers’ purchases and charge them to their Amazon account; the resulting data are part of the feedback loop the company uses to target ads at customers, making it more money.

https://www.theatlantic.com/technology/archive/2019/01/walgreens-tests-new-smart-coolers/581248/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

by Debora MacKenzie

We may finally have found a long-elusive cause of Alzheimer’s disease: Porphyromonas gingivalis, the key bacteria in chronic gum disease. That’s bad, as gum disease affects around a third of all people. But the good news is that a drug that blocks the main toxins of P. gingivalis is entering major clinical trials this year, and research published this week shows it might stop and even reverse Alzheimer’s. There could even be a vaccine.

Alzheimer’s is one of the biggest mysteries in medicine. As populations have aged, dementia has skyrocketed to become the fifth biggest cause of death worldwide. Alzheimer’s constitutes some 70 per cent of these cases and yet, we don’t know what causes it. The disease often involves the accumulation of proteins called amyloid and tau in the brain, and the leading hypothesis has been that the disease arises from defective control of these two proteins. But research in recent years has revealed that people can have amyloid plaques without having dementia. So many efforts to treat Alzheimer’s by moderating these proteins have failed, and the hypothesis has now been seriously questioned.

Indeed, evidence has been growing that amyloid proteins may function as a defence against bacteria, leading to a spate of recent studies looking at bacteria in Alzheimer’s, particularly those that cause gum disease, which is known to be a major risk factor for the condition.

Bacteria involved in gum disease and other illnesses have been found after death in the brains of people who had Alzheimer’s, but until now, it hasn’t been clear whether these bacteria caused the disease or simply got in via brain damage caused by the condition.

Gum disease link

Multiple research teams have been investigating P. gingivalis, and have so far found that it invades and inflames brain regions affected by Alzheimer’s; that gum infections can worsen symptoms in mice genetically engineered to have Alzheimer’s; and that it can cause Alzheimer’s-like brain inflammation, neural damage, and amyloid plaques in healthy mice.

“When science converges from multiple independent laboratories like this, it is very compelling,” says Casey Lynch of Cortexyme, a pharmaceutical firm in San Francisco, California.

In the new study, Cortexyme have now reported finding the toxic enzymes – called gingipains – that P. gingivalis uses to feed on human tissue in 96 per cent of the 54 Alzheimer’s brain samples they looked at, and found the bacteria themselves in all three Alzheimer’s brains whose DNA they examined.

“This is the first report showing P. gingivalis DNA in human brains, and the associated gingipains, co-localising with plaques,” says Sim Singhrao, of the University of Central Lancashire, UK. Her team previously found that P. gingivalis actively invades the brains of mice with gum infections. She adds that the new study is also the first to show that gingipains slice up tau protein in ways that could allow it to kill neurons, causing dementia.

The bacteria and their enzymes were found at higher levels in those who had experienced worse cognitive decline, and who had more amyloid and tau accumulations. The team also found the bacteria in the spinal fluid of living people with Alzheimer’s, suggesting that this technique may provide a long-sought-after method of diagnosing the disease.

When the team gave P. gingivalis gum disease to mice, it led to brain infection, amyloid production, tangles of tau protein, and neural damage in the regions and nerves normally affected by Alzheimer’s.

Cortexyme had previously developed molecules that block gingipains. Giving some of these to mice reduced their infections, halted amyloid production, lowered brain inflammation and even rescued damaged neurons.

The team found that an antibiotic that killed P. gingivalis did this too, but less effectively, and the bacteria rapidly developed resistance. They did not resist the gingipain blockers. “This provides hope of treating or preventing Alzheimer’s disease one day,” says Singhrao.

New treatment hope

Some brain samples from people without Alzheimer’s also had P. gingivalis and protein accumulations, but at lower levels. We already know that amyloid and tau can accumulate in the brain for 10 to 20 years before Alzheimer’s symptoms begin. This, say the researchers, shows that P. gingivalis could be a cause of Alzheimer’s, not a result of it.

Gum disease is far more common than Alzheimer’s. But “Alzheimer’s strikes people who accumulate gingipains and damage in the brain fast enough to develop symptoms during their lifetimes,” says Lynch. “We believe this is a universal hypothesis of pathogenesis.”

Cortexyme reported in October that the best of their gingipain blockers had passed initial safety tests in people, and entered the brain. It also appeared to benefit participants with Alzheimer’s. Later this year the firm will launch a larger trial of the drug, looking for P. gingivalis in spinal fluid, and cognitive improvements, before and after.

They also plan to test it against gum disease itself. Efforts to fight that have led a team in Melbourne to develop a vaccine for P. gingivalis that started tests in 2018. A vaccine for gum disease would be welcome – but if it also stops Alzheimer’s the impact could be enormous.

Journal reference: Science Advances

https://www.newscientist.com/article/2191814-we-may-finally-know-what-causes-alzheimers-and-how-to-stop-it/


Coloured positron emission tomography (PET, centre) and computed tomography (CT, left) scans of the brain of a 62-year-old woman with Alzheimer’s disease.

By Pam Belluck

In dementia research, so many paths have led nowhere that any glimmer of optimism is noteworthy.

So some experts are heralding the results of a large new study, which found that people with hypertension who received intensive treatment to lower their blood pressure were less likely than those receiving standard blood pressure treatment to develop minor memory and thinking problems that often progress to dementia.

The study, published Monday in JAMA, is the first large, randomized clinical trial to find something that can help many older people reduce their risk of mild cognitive impairment — an early stage of faltering function and memory that is a frequent precursor to Alzheimer’s disease and other dementias.

The results apply only to those age 50 or older who have elevated blood pressure and who do not have diabetes or a history of stroke. But that’s a condition affecting a lot of people — more than 75 percent of people over 65 have hypertension, the study said. So millions might eventually benefit by reducing not only their risk of heart problems but of cognitive decline, too.

“It’s kind of remarkable that they found something,” said Dr. Kristine Yaffe, a professor of psychiatry and neurology at University of California San Francisco, who was not involved in the research. “I think it actually is very exciting because it tells us that by improving vascular health in a comprehensive way, we could actually have an effect on brain health.”

The research was part of a large cardiovascular study called Sprint, begun in 2010 and involving more than 9,000 racially and ethnically diverse people at 102 sites in the United States. The participants had hypertension, defined as a systolic blood pressure (the top number) from 130 to 180, without diabetes or a history of stroke.

These were people who could care for themselves, were able to walk and get themselves to doctors’ appointments, said the principal investigator, Dr. Jeff D. Williamson, chief of geriatric medicine and gerontology at Wake Forest School of Medicine.

The primary goal of the Sprint study was to see if people treated intensively enough that their blood pressure dropped below 120 would do better than people receiving standard treatment, which brought their blood pressure just under 140. They did. In fact, in 2015 the trial was stopped because the intensively treated participants had such a significantly lower risk of cardiovascular events and death that it would have been unethical not to inform the standard group of the benefit of further lowering their blood pressure.

But the cognitive arm of the study, called Sprint Mind, continued to follow the participants for three more years even though they were no longer monitored for whether they continued with intensive blood pressure treatment. About 8,500 participants received at least one cognitive assessment.

The primary outcome researchers measured was whether patients developed “probable dementia.” Fewer patients did so in the group whose blood pressure was lowered to 120. But the difference — 149 people in the intensive-treatment group versus 176 people in the standard-treatment group — was not enough to be statistically significant.

But in the secondary outcome — developing mild cognitive impairment or MCI — results did show a statistically significant difference. In the intensive group, 287 people developed it, compared to 353 people in the standard group, giving the intensive treatment group a 19 percent lower risk of mild cognitive impairment, Dr. Williamson said.
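A quick back-of-envelope calculation shows roughly where a figure like that comes from: with treatment arms of about equal size, the relative risk reduction can be approximated straight from the reported case counts (the trial’s own 19 percent figure comes from a formal hazard-ratio analysis).

```python
# Back-of-envelope only: with roughly equal-sized treatment arms, the relative
# risk reduction can be approximated from the case counts the article reports.
intensive_cases = 287   # mild cognitive impairment, intensive-treatment group
standard_cases = 353    # mild cognitive impairment, standard-treatment group

relative_reduction = 1 - intensive_cases / standard_cases
print(f"approx. {relative_reduction:.0%} lower risk")  # approx. 19% lower risk
```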

Because dementia often develops over many years, Dr. Williamson said he believes that following the patients for longer would yield enough cases to definitively show whether intensive blood pressure treatment helps prevent dementia too. To find out, the Alzheimer’s Association said Monday it would fund two more years of the study.

“Sprint Mind 2.0 and the work leading up to it offers genuine, concrete hope,” Maria C. Carrillo, the association’s chief science officer, said in a statement. “MCI is a known risk factor for dementia, and everyone who experiences dementia passes through MCI. When you prevent new cases of MCI, you are preventing new cases of dementia.”

Dr. Yaffe said the study had several limitations and left many questions unanswered. It’s unclear how it applies to people with diabetes or other conditions that often accompany high blood pressure. And she said she would like to see data on the participants older than 80, since some studies have suggested that in people that age, hypertension might protect against dementia.

The researchers did not specify which type of medication people took, although Dr. Williamson said they plan to analyze by type to see if any of the drugs produced a stronger cognitive benefit. Side effects of the intensive treatment stopped being monitored after the main trial ended, but Dr. Williamson said the biggest negative effect was dehydration.

Dr. Williamson said the trial has changed how he treats patients, offering those with blood pressure over 130 the intensive treatment. “I’ll tell them it will give you a 19 percent lower chance of developing early memory loss,” he said.

Dr. Yaffe is more cautious about changing her approach. “I don’t think we’re ready to roll it out,” she said. “It’s not like I’m going to see a patient and say ‘Oh my gosh your blood pressure is 140; we need to go to 120.’ We really need to understand much more about how this might differ by your age, by the side effects, by maybe what else you have.”

Still, she said, “I do think the take-home message is that blood pressure and other measures of vascular health have a role in cognitive health. And nothing else has worked.”

by George Dvorsky

Using brain-scanning technology, artificial intelligence, and speech synthesizers, scientists have converted brain patterns into intelligible verbal speech—an advance that could eventually give voice to those without.

It’s a shame Stephen Hawking isn’t alive to see this, as he may have gotten a real kick out of it. The new speech system, developed by researchers at the Neural Acoustic Processing Lab at Columbia University in New York City, is something the late physicist might have benefited from.

Hawking had amyotrophic lateral sclerosis (ALS), a motor neuron disease that took away his verbal speech, but he continued to communicate using a computer and a speech synthesizer. By using a cheek switch affixed to his glasses, Hawking was able to pre-select words on a computer, which were read out by a voice synthesizer. It was a bit tedious, but it allowed Hawking to produce around a dozen words per minute.

But imagine if Hawking didn’t have to manually select and trigger the words. Indeed, some individuals, whether they have ALS, locked-in syndrome, or are recovering from a stroke, may not have the motor skills required to control a computer, even by just a tweak of the cheek. Ideally, an artificial voice system would capture an individual’s thoughts directly to produce speech, eliminating the need to control a computer.

New research published today in Scientific Reports takes us an important step closer to that goal, but instead of capturing an individual’s internal thoughts to reconstruct speech, it uses the brain patterns produced while listening to speech.

To devise such a speech neuroprosthesis, neuroscientist Nima Mesgarani and his colleagues combined recent advances in deep learning with speech synthesis technologies. Their resulting brain-computer interface, though still rudimentary, captured brain patterns directly from the auditory cortex, which were then decoded by an AI-powered vocoder, or speech synthesizer, to produce intelligible speech. The speech was very robotic sounding, but nearly three in four listeners were able to discern the content. It’s an exciting advance—one that could eventually help people who have lost the capacity for speech.

To be clear, Mesgarani’s neuroprosthetic device isn’t translating an individual’s covert speech—that is, the thoughts in our heads, also called imagined speech—directly into words. Unfortunately, we’re not quite there yet in terms of the science. Instead, the system captured an individual’s distinctive cognitive responses as they listened to recordings of people speaking. A deep neural network was then able to decode, or translate, these patterns, allowing the system to reconstruct speech.

“This study continues a recent trend in applying deep learning techniques to decode neural signals,” Andrew Jackson, a professor of neural interfaces at Newcastle University who wasn’t involved in the new study, told Gizmodo. “In this case, the neural signals are recorded from the brain surface of humans during epilepsy surgery. The participants listen to different words and sentences which are read by actors. Neural networks are trained to learn the relationship between brain signals and sounds, and as a result can then reconstruct intelligible reproductions of the words/sentences based only on the brain signals.”

Epilepsy patients were chosen for the study because they often have to undergo brain surgery. Mesgarani, with the help of Ashesh Dinesh Mehta, a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute and a co-author of the new study, recruited five volunteers for the experiment. The team used invasive electrocorticography (ECoG) to measure neural activity as the patients listened to continuous speech sounds. The patients listened, for example, to speakers reciting digits from zero to nine. Their brain patterns were then fed into the AI-enabled vocoder, resulting in the synthesized speech.
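For readers curious what such a decoder looks like in practice, here is a minimal sketch, not the authors’ code: a small regression network maps frames of auditory-cortex activity onto vocoder parameters that a speech synthesizer would then turn back into audio. The channel count, layer sizes, and random stand-in data are all assumptions.

```python
# Minimal sketch of the decoding idea, not the authors' code: a regression
# network maps frames of auditory-cortex activity (recorded while a patient
# listens to speech) onto vocoder parameters, which a speech synthesizer would
# then turn back into audio. Channel counts, layer sizes, and the random
# stand-in data are all assumptions.
import torch
import torch.nn as nn

N_ELECTRODES = 128      # ECoG channels over auditory cortex (assumed)
N_VOCODER_PARAMS = 32   # per-frame synthesis parameters (assumed)

decoder = nn.Sequential(                      # deep net: neural frame -> speech frame
    nn.Linear(N_ELECTRODES, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_VOCODER_PARAMS),
)

# Training pairs: one row of neural activity per audio frame, aligned with the
# vocoder parameters of the sound the patient was hearing at that instant.
neural = torch.randn(1000, N_ELECTRODES)             # stand-in for recorded ECoG
speech_params = torch.randn(1000, N_VOCODER_PARAMS)  # stand-in for target frames

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(200):                                  # fit the decoder to the pairs
    opt.zero_grad()
    loss = loss_fn(decoder(neural), speech_params)
    loss.backward()
    opt.step()

# At test time, each decoded frame would be handed to the vocoder for playback.
with torch.no_grad():
    decoded = decoder(torch.randn(50, N_ELECTRODES))
print(decoded.shape)  # torch.Size([50, 32])
```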

The results were very robotic-sounding, but fairly intelligible. In tests, listeners could correctly identify spoken digits around 75 percent of the time. They could even tell if the speaker was male or female. Not bad, and a result that even came as “a surprise” to Mesgarani, as he told Gizmodo in an email.

Recordings of the speech synthesizer can be found here (the researchers tested various techniques, but the best result came from the combination of deep neural networks with the vocoder).

The use of a voice synthesizer in this context, as opposed to a system that can match and recite pre-recorded words, was important to Mesgarani. As he explained to Gizmodo, there’s more to speech than just putting the right words together.

“Since the goal of this work is to restore speech communication in those who have lost the ability to talk, we aimed to learn the direct mapping from the brain signal to the speech sound itself,” he told Gizmodo. “It is possible to also decode phonemes [distinct units of sound] or words, however, speech has a lot more information than just the content—such as the speaker [with their distinct voice and style], intonation, emotional tone, and so on. Therefore, our goal in this particular paper has been to recover the sound itself.”

Looking ahead, Mesgarani would like to synthesize more complicated words and sentences, and collect brain signals of people who are simply thinking or imagining the act of speaking.

Jackson was impressed with the new study, but he said it’s still not clear if this approach will apply directly to brain-computer interfaces.

“In the paper, the decoded signals reflect actual words heard by the brain. To be useful, a communication device would have to decode words that are imagined by the user,” Jackson told Gizmodo. “Although there is often some overlap between brain areas involved in hearing, speaking, and imagining speech, we don’t yet know exactly how similar the associated brain signals will be.”

William Tatum, a neurologist at the Mayo Clinic who was also not involved in the new study, said the research is important in that it’s the first to use artificial intelligence to reconstruct speech from the brain waves involved in generating known acoustic stimuli. The significance is notable, “because it advances application of deep learning in the next generation of better designed speech-producing systems,” he told Gizmodo. That said, he felt the sample size of participants was too small, and that the use of data extracted directly from the human brain during surgery is not ideal.

Another limitation of the study is that the neural networks, in order for them to do more than just reproduce words from zero to nine, would have to be trained on a large number of brain signals from each participant. The system is patient-specific, as we all produce different brain patterns when we listen to speech.

“It will be interesting in future to see how well decoders trained for one person generalize to other individuals,” said Jackson. “It’s a bit like early speech recognition systems that needed to be individually trained by the user, as opposed to today’s technology, such as Siri and Alexa, that can make sense of anyone’s voice, again using neural networks. Only time will tell whether these technologies could one day do the same for brain signals.”

No doubt, there’s still lots of work to do. But the new paper is an encouraging step toward the achievement of implantable speech neuroprosthetics.

https://gizmodo.com/neuroscientists-translate-brain-waves-into-recognizable-1832155006

https://www.nature.com/articles/s41598-018-37359-z

by Nicola Davies, PhD

Robots are infiltrating the field of psychiatry, with experts like Dr Joanne Pransky of the San Francisco Bay area in California advocating for robots to be embraced in the medical field. In this article, Dr Pransky shares some examples of robots that have shown impressive psychiatric applications, as well as her thoughts on giving robots the critical role of delivering healthcare to human beings.

Meet the world’s first robotic psychiatrist

Dr Pransky, who was named the world’s first “robotic psychiatrist” because her patients are robots, said, “In 1986, I said that one day, when robots are as intelligent as humans, they would need assistance in dealing with humans on a day-to-day basis.” She imagines that in the near future it will be normal for families to come to a clinic with their robot to help the robot deal with the emotions it develops as a result of interacting with human beings. She also believes that having a robot as part of the family will reshape human family dynamics.

While Dr Pransky’s expertise may sound like science fiction to some, it illustrates just how interlaced robotics and psychiatry are becoming. With 32 years of experience in robotics, she said technology has come a long way, “to the point where robots are used as therapeutic tools.”

Robots in psychiatry

Dr Pransky cites some cases of robots that have been developed to help people with psychiatric health needs. One example is Paro, a robotic baby harp seal developed by the National Institute of Advanced Industrial Science and Technology (AIST), one of the largest public research organizations in Japan. Paro is used in the care of elderly people with dementia, Alzheimer disease, and other mental conditions.1 It has an appealing physical appearance that helps create a calming effect and encourages emotional responses from people. “The designers found that Paro enhances social interaction and communication. Patients can hold and pet the fur-covered seal, which is equipped with different tactile sensors. The seal can also respond to sounds and learn names, including its own,” said Dr Pransky. In 2009, Paro was certified as a type of neurologic therapeutic device by the US Food and Drug Administration (FDA).

Mabu, which is being developed by the patient care management firm Catalia Health in San Francisco, California, is another example. Mabu is a voice-activated robot designed to provide cognitive behavioral therapy by coaching patients on their daily health needs and sending health data to medical professionals.2 Dr Pransky points out that the team developing Mabu is composed of experts in psychiatry and robotics.

Then there is ElliQ, which was developed by Intuition Robotics in San Francisco to provide a social companion for the elderly. ElliQ is powered by artificial intelligence (AI) to provide personalized advice to senior patients regarding activities that can help them stay engaged, active, and mentally sharp.3 It also provides a communication channel between elderly patients and their loved ones.

Besides small robot assistants, however, robotics technology is also being integrated into current medical devices, such as Axilum Robotics’ (Strasbourg, France) TMS-Robot, which assists with transcranial magnetic stimulation (TMS). TMS is a painless, non-invasive brain stimulation technique performed in patients with major depression and other neurologic diseases.4 TMS is usually performed manually, but the TMS-Robot automates the procedure, providing more accuracy for patients while saving the operator from performing a repetitive and painful task.

Chatbots are another way in which robotics technology is providing care to psychiatric patients. Using AI and a conversational user interface, chatbots interact with individuals in a human-like manner. For example, Woebot (Woebot Labs, Inc, San Francisco), which runs in Facebook Messenger, converses with users to monitor their mood, make assessments, and recommend psychological treatments.5

Will robots replace psychiatrists?

Robotics has started to become an integral part of mental health treatment and management. Yet critics say there are potential negative side effects and safety issues in incorporating robotics technology too far into human lives. For instance, over-reliance on robots may have social and legal implications, as well as encroach on human dignity.6 These issues can be distinctly problematic in the field of psychiatry, in which patients share highly emotional and sensitive personal information. Dr Pransky herself has worked on films such as Ender’s Game and Eagle Eye, which have presented the risks to humans of robots with excessive control and intelligence.

However, Dr Pransky points out that robots are meant to supplement, not supplant, and to facilitate physicians’ work, not replace them. “I think there will be therapeutic success for robotics, but there’s nothing like the understanding of the human experience by a qualified human being. Robotics should extend and augment what a psychiatrist can do,” she said. “It’s not the technology I would worry about but the people developing and using it. Robotics needs to be safe, so we have to design safe,” she added, explaining that emotional and psychological safety should be key components in the design.

Who stands to benefit from robotics in psychiatry?

Dr Pransky explains that robots can help address psychiatric issues that a psychiatrist may be unable to tackle with traditional techniques and tools: “The greatest benefit of robotics use will be in filling gaps. For example, for people who are not comfortable or available to talk about their problems with another human being, a robotic tool can be a therapeutic asset or a diagnostic tool.”

An interesting example of a robot that could be used to fill gaps in psychiatric care is the robot used in BlabDroid, a 2012 documentary created by Alex Reben at the MIT Media Lab for his Master’s thesis. It was the first documentary ever filmed and directed by robots. The robot interviewed strangers on the streets of New York City,7 and people surprisingly opened up to it. “Some humans are better off with something they feel is non-threatening,” said Dr Pransky.

https://www.psychiatryadvisor.com/practice-management/the-robot-will-see-you-now-the-increasing-role-of-robotics-in-psychiatric-care/article/828253/2/

Levels of a protein called neurofilament light chain increase in the blood and spinal fluid of some Alzheimer’s patients 16 years before they develop symptoms, according to a study published January 21 in Nature Medicine.

The results suggest that neurofilament light chain (NfL), which is part of the cytoskeleton of neurons and has previously been tied to brain damage in mice, could serve as a biomarker to noninvasively track the progression of the disease. “This is something that would be easy to incorporate into a screening test in a neurology clinic,” coauthor Brian Gordon, an assistant professor of radiology at Washington University, says in a press release.

Gordon and his colleagues measured NfL in nearly 250 people carrying an Alzheimer’s-risk allele and more than 160 of their relatives who did not carry the variant. They found that those at risk of developing the disease had higher levels of the protein early on, and that NfL levels in both the blood and spinal fluid were on the rise well before the patients began to show signs of neurodegeneration, more than 16 years before disease onset.

Examining a subset of the patients more closely, the team saw that the rate of increase in NfL correlated with the shrinkage of a brain region called the precuneus, and that patients whose NfL levels were rising rapidly performed worse on cognitive tests. “It is not necessarily the absolute levels which tell you your neurodegeneration is ongoing, it is the rate of change,” coauthor Mathias Jucker, a professor of cellular neurology at the German Center for Neurodegenerative Diseases in Tübingen, tells The Guardian.
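Jucker’s point is easy to illustrate with made-up numbers: two people can have similar NfL levels today, but it is the person whose level is climbing fastest whom the study links to ongoing neurodegeneration. A simple least-squares slope fitted over repeat blood draws captures that rate of change.

```python
# Illustration only, with invented values: the warning sign is the slope of
# NfL over repeat blood draws, not the absolute level at any single visit.
import numpy as np

years = np.array([0.0, 2.0, 4.0, 6.0])          # time of each blood draw
patient_a = np.array([20.0, 21.0, 20.5, 21.5])  # pg/ml, roughly stable
patient_b = np.array([18.0, 22.0, 27.0, 33.0])  # pg/ml, rising steeply

for name, levels in [("A", patient_a), ("B", patient_b)]:
    slope = np.polyfit(years, levels, 1)[0]     # least-squares slope, pg/ml per year
    print(f"patient {name}: {slope:+.1f} pg/ml per year")
# Patient B's steeper slope, not a higher starting level, is what stands out.
```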

The Alzheimer’s-linked mutation carried by patients examined in this study only affects about 1 percent of people who get the neurodegenerative disease, so the approach must be validated in a broader patient population, James Pickett, the head of research at the Alzheimer’s Society, tells The Guardian.

“We validated it in people with Alzheimer’s disease because we know their brains undergo lots of neurodegeneration, but this marker isn’t specific for Alzheimer’s,” Gordon says in the release. “I could see this being used in the clinic in a few years to identify signs of brain damage in individual patients.”

Meanwhile, a research team at Seoul National University in South Korea described another potential blood test for Alzheimer’s, focusing on the tau and amyloid proteins known to be associated with the disease. According to their study published today in Brain, blood levels of tau and amyloid correlate with how much tau has accumulated in the brain, as well as other markers of neurodegeneration such as hippocampal volume. “These results indicate that combination of plasma tau and amyloid-β1–42 levels might be potential biomarkers for predicting brain tau pathology and neurodegeneration,” the researchers write in their report.

https://www.the-scientist.com/news-opinion/protein-changes-detected-in-blood-years-before-alzheimers-onset-65347