More steps per day could significantly reduce depressive symptoms

Key takeaways:

  • Daily step counts of 5,000 or more were associated with reduced depressive symptoms across 33 studies.
  • The associations may be due to several mechanisms, such as improvements in sleep quality and reductions in inflammation.

Daily step counts of 5,000 or more corresponded with fewer depressive symptoms among adults, results of a systematic review and meta-analysis published in JAMA Network Open suggested.

The results are consistent with previous studies linking exercise to various risk reductions for mental health disorders and show that setting step goals “may be a promising and inclusive public health strategy for the prevention of depression,” the researchers wrote.

According to Bruno Bizzozero-Peroni, PhD, MPH, from Universidad de Castilla-La Mancha in Spain, and colleagues, daily step counts are a “simple and intuitive objective measure” of physical activity, while tracking such counts has become increasingly feasible for the general population thanks to the availability of fitness trackers.

“To our knowledge, the association between the number of daily steps measured with wearable trackers and depression has not been previously examined through a meta-analytic approach,” they wrote.

The researchers searched multiple databases for analyses assessing the effects of daily step counts on depressive symptoms, ultimately including a total of 27 cross-sectional studies and six longitudinal studies comprising 96,173 adults aged 18 years or older.

They found that in the cross-sectional studies, daily step counts of 10,000 or more (standardized mean difference [SMD] = -0.26; 95% CI, -0.38 to -0.14), 7,500 to 9,999 (SMD = -0.27; 95% CI, -0.43 to -0.11) and 5,000 to 7,499 (SMD = -0.17; 95% CI, -0.30 to -0.04) corresponded with reduced depressive symptoms vs. daily step counts of less than 5,000.

In the prospective cohort studies, people taking 7,000 or more steps a day had a reduced risk for depression vs. people taking fewer than 7,000 daily steps (RR = 0.69; 95% CI, 0.62-0.77), and each additional 1,000 steps a day was associated with a lower risk for depression (RR = 0.91; 95% CI, 0.87-0.94).
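As a rough illustration (our own arithmetic, not an analysis from the study), a per-1,000-step risk ratio of 0.91 compounds multiplicatively if one assumes a log-linear dose-response, so larger increases in daily steps correspond to progressively lower relative risk:

```python
# Illustrative arithmetic only: compounding the reported per-1,000-step
# risk ratio under an assumed log-linear dose-response. The extrapolation
# beyond +1,000 steps is NOT reported in the study itself.
rr_per_1000 = 0.91  # reported RR per additional 1,000 daily steps

for extra_steps in (1000, 2000, 3000):
    rr = rr_per_1000 ** (extra_steps / 1000)
    print(f"+{extra_steps} steps/day -> RR = {rr:.3f}")
```

Under that assumption, an extra 3,000 daily steps would correspond to roughly a 25% lower relative risk, though the authors note the true dose-response shape, including any ceiling, remains an open question.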

The study had limitations. The researchers noted that reverse causation is possible, and they could not rule out residual confounding.

They also pointed out some remaining questions, such as whether there is a ceiling beyond which additional steps no longer reduce the risk for depression.

Bizzozero-Peroni and colleagues highlighted several possible biological and psychosocial mechanisms behind the associations, like changes in sleep quality, inflammation, social support, self-esteem, neuroplasticity and self-efficacy.

They concluded that “a daily active lifestyle may be a crucial factor in regulating and reinforcing these pathways” regardless of the exact combination of mechanisms responsible for the positive link.

“Specifically designed experimental studies are still needed to explore whether there are optimal and maximal step counts for specific population subgroups,” they wrote.

Sources/Disclosures

Source: 

Bizzozero-Peroni B, et al. JAMA Netw Open. 2024;doi:10.1001/jamanetworkopen.2024.51208.

Impact of HD Gene on Childhood IQ and Brain Growth

by Jennifer Brown, University of Iowa

The genetic mutation that causes Huntington’s disease (HD)—a devastating brain disease that disrupts mobility and diminishes cognitive ability—may also enhance early brain development and play a role in promoting human intelligence.

This revelation comes from more than 10 years of brain imaging and brain function data, including motor, cognitive, and behavioral assessments, collected from a unique population—children and young adults who carry the gene for HD. While an HD mutation will eventually cause fatal brain disease in adulthood, the study finds that early in life, children with the HD mutation have bigger brains and higher IQ than children without the mutation.

“The finding suggests that early in life, the gene mutation is actually beneficial to brain development, but that early benefit later becomes a liability,” says Peg Nopoulos, MD, professor and head of psychiatry at the UI Carver College of Medicine, and senior author on the study published in Annals of Neurology.

The finding may also have implications for developing effective treatments for HD. If the gene’s early action is beneficial, then simply aiming to knock out the gene might result in loss of the developmental benefit, too. Creating therapies that can disrupt the gene’s activity later in the patient’s lifetime might be more useful.

The new data about the gene’s positive effect on early brain development is also exciting to Nopoulos for another reason.

“We are very interested in the fact that this appears to be a gene that drives IQ,” she says. “No previous study has found any gene of significant effect on IQ, even though we know intelligence is heritable.”

HD gene linked to better brain development in early life

Huntington’s disease is caused by a mutation in the huntingtin (HTT) gene. The protein produced by the HTT gene is necessary for normal development, but variations within a segment of the protein have a profound effect on the brain.

The segment in question is a long repeat of one amino acid called glutamine. More repeats are associated with bigger, more complex brains. For example, species such as sea urchins or fish have no repeats, but these repeats start to appear higher up the evolutionary ladder. Rodents have a few repeats, while apes (our closest relatives) have even more repeats; and humans have the most.

Most people have repeats in the range of 10–26, but if a person has 40 or more repeats, then they develop HD. Although the gene expansion is present before birth, HD symptoms do not appear until middle age. Nopoulos’s team at the University of Iowa has a long history of studying how the HTT gene expansion affects brain development in the decades before disease onset.

“We know that the expanded gene causes a horrible degenerative disease later in life, but we also know it is a gene that is crucial for general development,” she says.

“We were surprised to find that it does have a positive effect on brain development early in life. Those who have the gene expansion have an enhanced brain with larger volumes of the cerebrum and higher IQ compared to those who don’t.”

In particular, the study found that decades before HD symptoms appeared, children with the HD gene expansion showed significantly better cognitive, behavioral, and motor scores compared to children with repeats within the normal range. Children with the expanded gene also had larger cerebral volumes and greater cortical surface area and folding. After this initial peak, a prolonged deterioration was seen in both brain function and structure.

The study gathered this data by following almost 200 participants in the Kids-HD study, the only longitudinal study of children and young adults at risk for HD due to having a parent or grandparent with the disease.

Evolutionary benefit comes at a cost

Although surprising, the findings are in line with studies by evolutionary biologists who believe that genes like HTT may have been “positively selected” for human brain evolution. This theory, known as antagonistic pleiotropy, suggests that certain genes can produce a beneficial effect early in life, but come at a cost later in life.

The finding also challenges the idea that the protein produced by the HD gene is solely a toxic protein that causes brain degeneration.

“Overall, our study suggests that we should rethink the notion of the toxic protein theory,” says Nopoulos, who is also a member of the Iowa Neuroscience Institute.

“Instead, we should consider the theory of antagonistic pleiotropy—a theory that suggests that genes like HTT build a better brain early in life, but the cost of the superior brain is that it isn’t built to last and may be prone to premature or accelerating aging.

“This means that instead of knocking down the gene for therapy, drugs that slow the aging process may be more effective.”

Next steps

Nopoulos’s team is already making progress extending the research from the Kids-HD program. Nopoulos has established the Children to Adult Neurodevelopment in Gene-Expanded Huntington’s Disease (ChANGE-HD), a multi-site study that aims to recruit hundreds of participants for a total of over 1,200 assessments to validate the key findings from the Kids-HD study and to enhance future research on HD.

A primary area of focus will be understanding how an enlarged brain can later lead to degeneration. One hypothesis Nopoulos and her team will explore involves the idea that an enlarged cortex might produce excess glutamate (an important neurotransmitter), which is beneficial in early brain development, but later leads to neurotoxicity and brain degeneration.

In addition to Nopoulos, the UI team included Mohit Neema, MD, UI research scientist and first author of the study; Jordan Schultz, PharmD; Douglas Langbehn, MD, Ph.D.; Amy Conrad, Ph.D.; Eric Epping, MD, Ph.D.; and Vincent Magnotta, Ph.D.

More information: Mohit Neema et al, Mutant Huntingtin Drives Development of an Advantageous Brain Early in Life: Evidence in Support of Antagonistic Pleiotropy, Annals of Neurology (2024). DOI: 10.1002/ana.27046

Journal information: Annals of Neurology 

Provided by University of Iowa 

https://medicalxpress.com/news/2024-11-huntington-disease-gene-early-brain.html

Copenhagen Scientists Unveil Appetite-Control Drug with No Side Effects

by University of Copenhagen

Scientists at the University of Copenhagen have discovered a new weight loss drug target that reduces appetite, increases energy expenditure, and improves insulin sensitivity without causing nausea or loss of muscle mass. The discovery was reported in the journal Nature and could lead to a new therapy for millions of people with both obesity and type 2 diabetes who do not respond well to current treatments.

Millions of people around the world benefit from weight-loss drugs based on the incretin hormone GLP-1. These drugs also improve kidney function, reduce the risk of fatal cardiac events, and are linked to protection against neurodegeneration.

However, many people stop taking the drugs due to common side effects, including nausea and vomiting. Studies also show that incretin-based therapies like Wegovy and Mounjaro are much less effective at lowering weight in people living with both obesity and type 2 diabetes—a group numbering more than 380 million people globally.

In the study, scientists from the University of Copenhagen describe a powerful new drug candidate that lowers appetite without loss of muscle mass or side effects like nausea and vomiting. And, unlike the current generation of treatments, the drug also increases the body’s energy expenditure—the capacity of the body to burn calories.

“While GLP-1-based therapies have revolutionized patient care for obesity and type 2 diabetes, safely harnessing energy expenditure and controlling appetite without nausea remain two Holy Grails in this field. By addressing these needs, we believe our discovery will propel current approaches to make more tolerable, effective treatments accessible to millions more individuals,” says Associate Professor Zach Gerhart-Hines from the Novo Nordisk Foundation Center for Basic Metabolic Research (CBMR) at the University of Copenhagen.

NK2R activation lowers body weight and reverses diabetes

Our weight is largely determined by the balance between the energy we consume and the amount of energy we expend. Eating more and burning less creates a positive energy balance leading to weight gain, while eating less and burning more creates a negative balance, resulting in weight loss.
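That balance can be sketched as a back-of-the-envelope calculation. The ~7,700 kcal-per-kilogram conversion below is a common rule of thumb for body fat, not a figure from the article, and real-world weight change is far less linear than this suggests:

```python
# Back-of-the-envelope energy-balance sketch (not from the article).
# Assumes the common ~7,700 kcal per kg of body fat rule of thumb,
# which is itself only a rough approximation.
KCAL_PER_KG_FAT = 7700  # assumed conversion factor

def weight_change_kg(intake_kcal_per_day, expenditure_kcal_per_day, days):
    """Estimate weight change from a sustained daily energy balance."""
    balance = intake_kcal_per_day - expenditure_kcal_per_day  # + gain, - loss
    return balance * days / KCAL_PER_KG_FAT

# A sustained 300 kcal/day deficit over 30 days (negative = weight loss):
print(round(weight_change_kg(2500, 2800, 30), 2))
```

The point of the sketch is simply that the same deficit can come from eating less, burning more, or both, which is why a drug that raises expenditure as well as lowering appetite targets both sides of the equation.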

The current generation of incretin-based therapies tip the scales toward a negative energy balance by lowering appetite and the total calories a person consumes. But scientists have also recognized the potential on the other side of the equation—increasing the calories the body burns.

This approach is especially relevant, given recent research that has shown that our bodies seem to be burning fewer calories at rest than they did a few decades ago. However, there are currently no clinically approved ways to safely increase energy expenditure, and few options are in development.

This was the starting point when scientists at the University of Copenhagen decided to test the effect of activating the neurokinin 2 receptor (NK2R) in mice. The Gerhart-Hines Group identified the receptor through genetic screens that suggested NK2R played a role in maintaining energy balance and glucose control.

They were astonished by the results of the studies—not only did activating the receptor safely increase calorie-burning, it also lowered appetite without any signs of nausea.

Further studies in non-human primates with type 2 diabetes and obesity showed that NK2R activation lowered body weight and reversed their diabetes by increasing insulin sensitivity and lowering blood sugar, triglycerides, and cholesterol.

“One of the biggest hurdles in drug development is translation between mice and humans. This is why we were excited that the benefits of NK2R agonism translated to diabetic and obese nonhuman primates, which represents a big step towards clinical translation,” says Ph.D. Student Frederike Sass from CBMR at the University of Copenhagen, and first author of the study.

The discovery could result in the next generation of drug therapies that bring more efficacious and tolerable treatments for the almost 400 million people globally who live with both type 2 diabetes and obesity.

The University of Copenhagen holds the patent rights for targeting NK2R. To date, research by the Gerhart-Hines lab has led to the creation of three biotech companies—Embark Biotech, Embark Laboratories, and Incipiam Pharma.

In 2023, Embark Biotech was acquired by Novo Nordisk to develop next generation therapeutics for cardiometabolic disease.

More information: Zachary Gerhart-Hines, NK2R control of energy expenditure and feeding to treat metabolic diseases, Nature (2024). DOI: 10.1038/s41586-024-08207-0. www.nature.com/articles/s41586-024-08207-0

Journal information: Nature 

Provided by University of Copenhagen 

https://medicalxpress.com/news/2024-11-weight-loss-drug-energy-lowers.html

Transforming Neurosurgery with FastGlioma AI Technology

by University of Michigan

Researchers have developed an AI-powered model that—in 10 seconds—can determine during surgery if any part of a cancerous brain tumor that could be removed remains, a study published in Nature suggests.

The technology, called FastGlioma, outperformed conventional methods for identifying what remains of a tumor by a wide margin, according to the research team led by University of Michigan and University of California San Francisco.

“FastGlioma is an artificial intelligence-based diagnostic system that has the potential to change the field of neurosurgery by immediately improving comprehensive management of patients with diffuse gliomas,” said senior author Todd Hollon, M.D., a neurosurgeon at University of Michigan Health and assistant professor of neurosurgery at U-M Medical School.

“The technology works faster and more accurately than current standard of care methods for tumor detection and could be generalized to other pediatric and adult brain tumor diagnoses. It could serve as a foundational model for guiding brain tumor surgery.”

When a neurosurgeon removes a life-threatening tumor from a patient’s brain, they are rarely able to remove the entire mass.

What remains is known as residual tumor.

Commonly, the tumor is missed during the operation because surgeons are not able to differentiate between healthy brain and residual tumor in the cavity where the mass was removed. Residual tumor’s ability to resemble healthy brain tissue remains a major challenge in surgery.

Neurosurgical teams employ different methods to locate that residual tumor during a procedure.

They may get MRI imaging, which requires intraoperative machinery that is not available everywhere. The surgeon might also use a fluorescent imaging agent to identify tumor tissue, which is not applicable for all tumor types. These limitations prevent their widespread use.

In this international study of the AI-driven technology, neurosurgical teams analyzed fresh, unprocessed specimens sampled from 220 patients who had operations for low- or high-grade diffuse glioma.

FastGlioma detected and calculated how much tumor remained with an average accuracy of approximately 92%.

In a comparison of surgeries guided by FastGlioma predictions or image- and fluorescent-guided methods, the AI technology missed high-risk, residual tumor just 3.8% of the time—compared to a nearly 25% miss rate for conventional methods.
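Using the reported figures, the relative improvement can be computed directly (the “relative reduction” framing is our own arithmetic, not an endpoint from the study):

```python
# Illustrative comparison of the reported miss rates. The input numbers
# come from the article; the relative-reduction calculation is ours.
fastglioma_miss = 0.038   # high-risk residual tumor missed by FastGlioma
conventional_miss = 0.25  # approximate miss rate for conventional methods

relative_reduction = (conventional_miss - fastglioma_miss) / conventional_miss
print(f"relative reduction in missed residual tumor = {relative_reduction:.0%}")
```

That works out to roughly an 85% relative reduction in missed high-risk residual tumor compared with conventional guidance.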

“This model is an innovative departure from existing surgical techniques by rapidly identifying tumor infiltration at microscopic resolution using AI, greatly reducing the risk of missing residual tumor in the area where a glioma is resected,” said co-senior author Shawn Hervey-Jumper, M.D., professor of neurosurgery at University of California San Francisco and a former neurosurgery resident at U-M Health.

“The development of FastGlioma can minimize the reliance on radiographic imaging, contrast enhancement or fluorescent labels to achieve maximal tumor removal.”

How it works

To assess what remains of a brain tumor, FastGlioma combines microscopic optical imaging with a type of artificial intelligence called foundation models. These are AI models, such as GPT-4 and DALL·E 3, trained on massive, diverse datasets that can be adapted to a wide range of tasks.

After large scale training, foundation models can classify images, act as chatbots, reply to emails and generate images from text descriptions.

To build FastGlioma, investigators pre-trained the visual foundation model using over 11,000 surgical specimens and 4 million unique microscopic fields of view.

The tumor specimens are imaged through stimulated Raman histology, a method of rapid, high resolution optical imaging developed at U-M. The same technology was used to train DeepGlioma, an AI based diagnostic screening system that detects a brain tumor’s genetic mutations in under 90 seconds.

“FastGlioma can detect residual tumor tissue without relying on time-consuming histology procedures and large, labeled datasets in medical AI, which are scarce,” said Honglak Lee, Ph.D., co-author and professor of computer science and engineering at U-M.

Full resolution images take around 100 seconds to acquire using stimulated Raman histology; a “fast mode” lower resolution image takes just 10 seconds.

Researchers found that the full resolution model achieved accuracy up to 92%, with the fast mode slightly lower at approximately 90%.

“This means that we can detect tumor infiltration in seconds with extremely high accuracy, which could inform surgeons if more resection is needed during an operation,” Hollon said.

AI’s future in cancer

Over the last 20 years, the rates of residual tumor after neurosurgery have not improved.

Not only does residual tumor result in worse quality of life and earlier death for patients, but it increases the burden on a health system that anticipates 45 million annual surgical procedures needed worldwide by 2030.

Global cancer initiatives have recommended incorporating new technologies, including advanced methods of imaging and AI, into cancer surgery.

In 2015, The Lancet Oncology Commission on global cancer surgery noted that “the need for cost effective… approaches to address surgical margins in cancer surgery provides a potent drive for novel technologies.”

Not only is FastGlioma an accessible and affordable tool for neurosurgical teams operating on gliomas, but, researchers say, it can also accurately detect residual tumor for several non-glioma tumor diagnoses, including pediatric brain tumors, such as medulloblastoma and ependymoma, and meningiomas.

“These results demonstrate the advantage of visual foundation models such as FastGlioma for medical AI applications and the potential to generalize to other human cancers without requiring extensive model retraining or fine-tuning,” said co-author Aditya S. Pandey, M.D., chair of the Department of Neurosurgery at U-M Health.

“In future studies, we will focus on applying the FastGlioma workflow to other cancers, including lung, prostate, breast, and head and neck cancers.”

More information: Foundation models for fast, label-free detection of glioma infiltration, Nature (2024). DOI: 10.1038/s41586-024-08169-3. www.nature.com/articles/s41586-024-08169-3

Journal information: Nature 

Provided by University of Michigan 

New research shows that different fears are controlled by different parts of the brain

by Rodielon Putol, Earth.com staff writer

Fear strikes in many forms – standing on the edge of a towering skyscraper, glimpsing a tarantula, or feeling your heart race as you prepare to deliver a speech.

The scientific community long believed these scenarios stimulated the brain in similar ways.

“There’s this story that we’ve had in the literature that the brain regions that predict fear are things like the amygdala, or the orbital frontal cortex area, or the brainstem,” said Ajay Satpute, an associate professor of psychology at Northeastern University.

“Those are thought to be part of a so-called ‘fear circuit’ that’s been a very dominant model in neuroscience for decades.”

Challenging the fear circuit model

In early October 2024, Satpute and his team released a study challenging this long-held belief.

The researchers used MRI scans to examine the brain’s response to three distinct fear-inducing scenarios: fear of heights, spiders, and public speaking.

Contrary to prior assumptions, the study revealed each type of fear activated different brain regions, debunking the idea of a universal “fear circuit.”

“Much of the debate on the nature of emotion concerns the uniformity or heterogeneity of representation for particular emotion categories,” noted the researchers.

The team discovered that “the overwhelming majority of brain regions that predict fear only do so for certain situations.”

Research suggests responses to fear are more specific than previously thought. These findings carry important implications for understanding anxiety across species and for developing neural signatures for personalized treatments.

Machine learning and fear in the brain

The research tested long-standing assumptions about how fear works, particularly as neuroscience increasingly relies on AI and machine learning to predict emotions.

“Most of those approaches assume that there is a single pattern that underlies the brain-behavior relationship: there’s a single pattern that predicts disgust. There’s a single pattern that predicts anger,” said Satpute.

“Well, if that’s true, then such a pattern should be apparent for different varieties of fear.”

However, when it comes to fear, the study showed a more complex picture.

Focus of the research

In the experiment, the researchers asked 21 participants to identify their fears and used magnetic resonance imaging (MRI) scans to monitor brain activity as they watched videos depicting anxiety-inducing scenarios.

“We tried to find really scary videos of spiders,” Satpute said. “Because I don’t want a neural predictive model that says ‘you’re looking at a spider.’ I want a neural predictive model that says ‘you’re experiencing fear.’”

Revealing fear’s hidden complexities

Following each video, participants rated their levels of fear, valence (how pleasant or unpleasant the experience was), and arousal on a questionnaire.

The study revealed two surprising insights: responses were observed in a wider array of brain regions and not all brain regions were involved across all scenarios.

“The amygdala, for instance, seemed to carry information that predicted fear during the heights context, but not some of the other contexts,” Satpute said. “We’re not seeing these so-called ‘classic threat areas’ involved in being predictive of fear across situations.”

Body’s response to emotional triggers

The research is part of a broader body of work from Satpute’s lab, which focuses on understanding how fear manifests in the body.

In a previous 2021 study, the team explored physiological responses to fear such as sweat and heart rate when facing different triggers like heights or confrontations with law enforcement.

The study also revealed that different triggers caused varied bodily reactions, supporting the idea that fear isn’t one-size-fits-all.

Implications for future treatments

Satpute hopes to replicate these findings with a larger, more diverse participant pool while factoring in demographics like age and gender.

While the current study has a small sample size, the results could reshape how health professionals approach treating fear and anxiety disorders.

“When we look at the brain and the neural correlates of fear, part of the reason we want to understand is so we can intervene on it,” noted Satpute. “Our findings suggest the interventions might also need to be tailored to the person and situation.”

Revolutionizing fear-based therapies

This shift in understanding could revolutionize behavior-based therapies for conditions like phobias and PTSD. It might even impact drug-based treatments.

“Drug-based therapies that target a particular circuit do work, but only for about fiftyish percent of people,” Satpute said. “It’s not really clear why.”

“Our research offers at least some explanation – the brain regions that are going to matter for any emotional experience are going to vary by the person and situation. If you focus only on what’s common, you ignore so much.”

This understanding of fear moves beyond the idea of a “fear circuit” and opens doors for personalized treatments.

Whether it’s the fear of falling, facing a spider, or standing in front of an audience, the research shows fear is more complex than once believed.

The study is published in The Journal of Neuroscience.

https://www.earth.com/news/spiders-heights-or-public-speaking-each-fear-has-a-unique-place-in-the-brain/

Biobots arise from the cells of dead organisms − pushing the boundaries of life, death and medicine

Life and death are traditionally viewed as opposites. But the emergence of new multicellular life-forms from the cells of a dead organism introduces a “third state” that lies beyond the traditional boundaries of life and death.

Usually, scientists consider death to be the irreversible halt of functioning of an organism as a whole. However, practices such as organ donation highlight how organs, tissues and cells can continue to function even after an organism’s demise. This resilience raises the question: What mechanisms allow certain cells to keep working after an organism has died?

We are researchers who investigate what happens within organisms after they die. In our recently published review, we describe how certain cells – when provided with nutrients, oxygen, bioelectricity or biochemical cues – have the capacity to transform into multicellular organisms with new functions after death.

Life, death and emergence of something new

The third state challenges how scientists typically understand cell behavior. While caterpillars metamorphosing into butterflies, or tadpoles developing into frogs, may be familiar developmental transformations, there are few instances where organisms change in ways that are not predetermined. Tumors, organoids and cell lines that can indefinitely divide in a petri dish, like HeLa cells, are not considered part of the third state because they do not develop new functions.

However, researchers found that skin cells extracted from deceased frog embryos were able to adapt to the new conditions of a petri dish in a lab, spontaneously reorganizing into multicellular organisms called xenobots. These organisms exhibited behaviors that extend far beyond their original biological roles. Specifically, these xenobots use their cilia – small, hair-like structures – to navigate and move through their surroundings, whereas in a living frog embryo, cilia are typically used to move mucus.

Xenobots are also able to perform kinematic self-replication, meaning they can physically replicate their structure and function without growing. This differs from more common replication processes that involve growth within or on the organism’s body.

Researchers have also found that solitary human lung cells can self-assemble into miniature multicellular organisms that can move around. These anthrobots behave and are structured in new ways. They are not only able to navigate their surroundings but also repair both themselves and injured neuron cells placed nearby.

Taken together, these findings demonstrate the inherent plasticity of cellular systems and challenge the idea that cells and organisms can evolve only in predetermined ways. The third state suggests that organismal death may play a significant role in how life transforms over time.

Postmortem conditions

Several factors influence whether certain cells and tissues can survive and function after an organism dies. These include environmental conditions, metabolic activity and preservation techniques.

Different cell types have varying survival times. For example, in humans, white blood cells die between 60 and 86 hours after organismal death. In mice, skeletal muscle cells can be regrown after 14 days postmortem, while fibroblast cells from sheep and goats can be cultured up to a month or so postmortem.

Metabolic activity plays an important role in whether cells can continue to survive and function. Active cells that require a continuous and substantial supply of energy to maintain their function are more difficult to culture than cells with lower energy requirements. Preservation techniques such as cryopreservation can allow tissue samples such as bone marrow to function similarly to that of living donor sources.

Inherent survival mechanisms also play a key role in whether cells and tissues live on. For example, researchers have observed a significant increase in the activity of stress-related genes and immune-related genes after organismal death, likely to compensate for the loss of homeostasis. Moreover, factors such as trauma, infection and the time elapsed since death significantly affect tissue and cell viability.

Factors such as age, health, sex and type of species further shape the postmortem landscape. This is seen in the challenge of culturing and transplanting metabolically active islet cells, which produce insulin in the pancreas, from donors to recipients. Researchers believe that autoimmune processes, high energy costs and the degradation of protective mechanisms could be the reason behind many islet transplant failures.

How the interplay of these variables allows certain cells to continue functioning after an organism dies remains unclear. One hypothesis is that specialized channels and pumps embedded in the outer membranes of cells serve as intricate electrical circuits. These channels and pumps generate electrical signals that allow cells to communicate with each other and execute specific functions such as growth and movement, shaping the structure of the organism they form.

The extent to which different types of cells can undergo transformation after death is also uncertain. Previous research has found that specific genes involved in stress, immunity and epigenetic regulation are activated after death in mice, zebrafish and people, suggesting widespread potential for transformation among diverse cell types.

Implications for biology and medicine

The third state not only offers new insights into the adaptability of cells; it also opens prospects for new treatments.

For example, anthrobots could be sourced from an individual’s living tissue to deliver drugs without triggering an unwanted immune response. Engineered anthrobots injected into the body could potentially dissolve arterial plaque in atherosclerosis patients and remove excess mucus in cystic fibrosis patients.

Importantly, these multicellular organisms have a finite life span, naturally degrading after four to six weeks. This “kill switch” prevents the growth of potentially invasive cells.

A better understanding of how some cells continue to function and metamorphose into multicellular entities some time after an organism’s demise holds promise for advancing personalized and preventive medicine.

https://theconversation.com/biobots-arise-from-the-cells-of-dead-organisms-pushing-the-boundaries-of-life-death-and-medicine-238176

Researchers identify new therapeutic approach targeting astrocytes, the brain’s most abundant cells

A team led by scientists at Case Western Reserve University School of Medicine has identified a new therapeutic approach for combating neurodegenerative diseases, offering hope of improved treatments for Alzheimer’s disease, Parkinson’s disease, Vanishing White Matter disease and multiple sclerosis, among others. 

Neurodegenerative diseases, which affect millions of people worldwide, occur when nerve cells in the brain or nervous system lose function over time and ultimately die, according to the National Institutes of Health. Alzheimer’s disease and Parkinson’s disease are the most common.

The research team’s new study, published online Feb. 20 in the journal Nature Neuroscience, focused on astrocytes—the brain’s most abundant cells, which normally support healthy brain function. Growing evidence indicates astrocytes can switch to a harmful state that increases nerve-cell loss in neurodegenerative diseases.

The researchers created a new cellular technique to test thousands of possible medications for their ability to prevent these rogue astrocytes from forming. 

“By harnessing the power of high-throughput drug-screening, we’ve identified a key protein regulator that, when inhibited, can prevent the formation of harmful astrocytes,” said Benjamin Clayton, lead author and National Multiple Sclerosis Society career transition fellow in the laboratory of Paul Tesar at the Case Western Reserve School of Medicine.

They found that blocking the activity of a particular protein, HDAC3, may prevent the development of harmful astrocytes. In mouse models, administering medications that specifically target HDAC3 prevented the formation of these astrocytes and significantly increased the survival of nerve cells.

“This research establishes a platform for discovering therapies to control diseased astrocytes and highlights the therapeutic potential of regulating astrocyte states to treat neurodegenerative diseases,” said Tesar, the Dr. Donald and Ruth Weber Goodman Professor of Innovative Therapeutics and the study’s principal investigator.  

Tesar, also director of the School of Medicine’s Institute for Glial Sciences, said more research needs to be done before patients might benefit from the promising approach. But, he said, their findings could lead to the creation of novel therapies that disarm harmful astrocytes and support neuroprotection—perhaps improving the lives of people with neurodegenerative illnesses in the future.

“Therapies for neurodegenerative disease typically target the nerve cells directly,” Tesar said, “but here we asked if fixing the damaging effects of astrocytes could provide therapeutic benefit. Our findings redefine the landscape of neurodegenerative disease treatment and open the door to a new era of astrocyte targeting medicines.”

Additional contributing researchers from the Case Western Reserve School of Medicine, the George Washington School of Medicine, The Ohio State University and the University of Tampa included James Kristell, Kevin Allan, Erin Cohn, Yuka Maeno-Hikichi, Annalise Sturno, Alexis Kerr, Elizabeth Shick, Molly Karl, Eric Garrison, Robert Miller, Andrew Jerome, Jesse Sepeda, Andrew Sas, Benjamin Segal, and Eric Freundt.

The research was supported by grants from the National Institutes of Health, National Multiple Sclerosis Society and Hartwell Foundation, and philanthropic support by sTF5 Care and the R. Blane & Claudia Walter, Long, Goodman, Geller and Weidenthal families.

Researchers Make Mice Smell Odors that Aren’t Really There

by Ruth Williams

By activating a particular pattern of nerve endings in the brain’s olfactory bulb, researchers can make mice smell a non-existent odor, according to a paper published June 18 in Science. Manipulating these activity patterns reveals which aspects are important for odor recognition.

“This study is a beautiful example of the use of synthetic stimuli . . . to probe the workings of the brain in a way that is just not possible currently with natural stimuli,” neuroscientist Venkatesh Murthy of Harvard University, who was not involved with the study, writes in an email to The Scientist.

A fundamental goal of neuroscience is to understand how a stimulus—a sight, sound, taste, touch, or smell—is interpreted, or perceived, by the brain. While a large number of studies have shown the various ways in which such stimuli activate brain cells, very little is understood about what these activations actually contribute to perception.

In the case of smell, for example, it is well known that odorous molecules traveling up the nose bind to receptors on cells that then transmit signals along their axons to bundles of nerve endings—glomeruli—in a brain area called the olfactory bulb. A single molecule can cause a whole array of different glomeruli to fire in quick succession, explains neurobiologist Kevin Franks of Duke University, who also did not participate in the research. And because these activity patterns “have many different spatial and temporal features,” he says, “it is difficult to know which of those features is actually most relevant [for perception].”

To find out, neuroscientist Dmitry Rinberg of New York University and colleagues bypassed the nose entirely. “The clever part of their approach is to gain direct control of these neurons with light, rather than by sending odors up the animal’s nose,” Caltech neurobiologist Markus Meister, who was not involved in the work, writes in an email to The Scientist.

The team used mice genetically engineered to produce light-sensitive ion channels in their olfactory bulb cells. They then used precisely focused lasers to activate a specific pattern of glomeruli in the region of the bulb closest to the top of the animal’s head, through a surgically implanted window in the skull. The mice were trained to associate this activation pattern with a reward—water, delivered via a lick-tube. The same mice did not associate random activation patterns with the reward, suggesting they had learned to distinguish the reward-associated pattern, or synthetic smell, from others.

Although the activation patterns were not based on any particular odors, they were designed to be as life-like as possible. For example, the glomeruli were activated one after the other within the space of 300 milliseconds from the time at which the mouse sniffed—detected by a sensor. “But, I’ll be honest with you, I have no idea if it stinks [or] it is pleasant” for the mouse, Rinberg says.

Once the mice were thoroughly trained, the team made methodical alterations to the activity pattern—changing the order in which the glomeruli were activated, switching out individual activation sites for alternatives, and changing the timing of the activation relative to the sniff. They tried “hundreds of different combinations,” Rinberg says. He likened it to altering the notes in a tune. “If you change the notes, or the timing of the notes, does the song remain the same?” he asks. That is, would the mice still be able to recognize the induced scent?

From these experiments, a general picture emerged: alterations to the earliest-activated regions caused the most significant impairment to the animal’s ability to recognize the scent. “What they showed is that, even though an odor will [induce] a very complex pattern of activity, really it is just the earliest inputs, the first few glomeruli that are activated that are really important for perception,” says Franks.

Rinberg says he thinks these early glomeruli most likely represent the receptors to which an odorant binds most strongly.

With these insights into the importance of glomeruli firing times for scent recognition, “the obvious next question,” says Franks, is to go deeper into the brain to where the olfactory bulb neurons project and ask, “How does the cortex make sense of this?”

E. Chong et al., “Manipulating synthetic optogenetic odors reveals the coding logic of olfactory perception,” Science, 368:eaba2357, 2020.

https://www.the-scientist.com/news-opinion/researchers-make-mice-smell-odors-that-arent-really-there-67643

Light Enables Long-Term Memory Maintenance in Fruit Flies


S. Inami et al., “Environmental light is required for maintenance of long-term memory in Drosophila,” J Neurosci, 40:1427–39, 2020.

by Diana Kwon

As Earth rotates around its axis, the organisms that inhabit its surface are exposed to daily cycles of darkness and light. In animals, light has a powerful influence on sleep, hormone release, and metabolism. Work by Takaomi Sakai, a neuroscientist at Tokyo Metropolitan University, and his team suggests that light may also be crucial for forming and maintaining long-term memories.

The puzzle of how memories persist in the brain has long been of interest to Sakai. Researchers had previously demonstrated, in both rodents and flies, that the production of new proteins is necessary for maintaining long-term memories, but Sakai wondered how this process persisted over several days given cells’ molecular turnover. Maybe, he thought, an environmental stimulus, such as the light-dark cycles, periodically triggered protein production to enable memory formation and storage.

Sakai and his colleagues conducted a series of experiments to see how constant darkness would affect the ability of Drosophila melanogaster to form long-term memories. Male flies exposed to light after interacting with an unreceptive female showed reduced courtship behaviors toward new female mates several days later, indicating they had remembered the initial rejection. Flies kept in constant darkness, however, continued their attempts to copulate.

The team then probed the molecular mechanisms of these behaviors and discovered a pathway by which light activates cAMP response element-binding protein (CREB)—a transcription factor previously identified as important for forming long-term memories—within certain neurons found in the mushroom bodies, the memory center in fly brains.

“The fact that light is essential for long-term memory maintenance is fundamentally interesting,” says Seth Tomchick, a neuroscientist at the Scripps Research Institute in Florida who wasn’t involved in the study. However, he adds, “more work will be necessary” to fully characterize the molecular mechanisms underlying these effects.

https://www.the-scientist.com/the-literature/lasting-memories-67441

How a New AI Translated Brain Activity to Speech With 97 Percent Accuracy

By Edd Gent

The idea of a machine that can decode your thoughts might sound creepy, but for thousands of people who have lost the ability to speak due to disease or disability it could be game-changing. Even for the able-bodied, being able to type out an email by just thinking or sending commands to your digital assistant telepathically could be hugely useful.

That vision may have come a step closer after researchers at the University of California, San Francisco demonstrated that they could translate brain signals into complete sentences with error rates as low as three percent, which is below the threshold for professional speech transcription.

While we’ve been able to decode parts of speech from brain signals for around a decade, so far most of the solutions have been a long way from consistently translating intelligible sentences. Last year, researchers used a novel approach that achieved some of the best results so far by using brain signals to animate a simulated vocal tract, but only 70 percent of the words were intelligible.

The key to the improved performance achieved by the authors of the new paper in Nature Neuroscience was their realization that there were strong parallels between translating brain signals to text and machine translation between languages using neural networks, which is now highly accurate for many languages.

While most efforts to decode brain signals have focused on identifying neural activity that corresponds to particular phonemes—the distinct chunks of sound that make up words—the researchers decided to mimic machine translation, where the entire sentence is translated at once. This has proven a powerful approach; as certain words are always more likely to appear close together, the system can rely on context to fill in any gaps.
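The way neighboring words constrain each other can be sketched with a toy bigram model. This is purely illustrative (the study used neural networks, and the corpus and candidate words below are invented), but it shows how counting which words tend to follow which lets a decoder use context to fill in an uncertain word:

```python
from collections import Counter

# Toy corpus standing in for a fixed sentence set (invented examples).
corpus = [
    "the dog chased the ball",
    "the dog ate the food",
    "the cat chased the dog",
]

# Count bigrams: how often each word follows another.
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[(prev, nxt)] += 1

def fill_gap(prev_word, candidates):
    """Pick the candidate word most likely to follow prev_word."""
    return max(candidates, key=lambda w: bigrams[(prev_word, w)])

# A decoder unsure between two similar-sounding words after "dog"
# can use sentence context to resolve the ambiguity.
print(fill_gap("dog", ["chased", "chase"]))  # → chased
```

A real sentence-level decoder learns far richer context than adjacent-word counts, but the principle is the same: likely word sequences constrain unlikely ones.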

The team used the same encoder-decoder approach commonly used for machine translation, in which one neural network analyzes the input signal—normally text, but in this case brain signals—to create a representation of the data, and then a second neural network translates this into the target language.
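The encoder-decoder data flow can be sketched as follows. This is a minimal NumPy illustration with random, untrained weights; the dimensions are invented (except the 250-word vocabulary, which matches the study), and the real system used trained recurrent networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dimensions: 64 recording channels, a 128-dim hidden state,
# and a 250-word output vocabulary (the vocabulary size in the study).
N_CHANNELS, HIDDEN, VOCAB = 64, 128, 250

# Random weights stand in for trained parameters.
W_enc = rng.normal(size=(N_CHANNELS, HIDDEN)) * 0.1
W_dec = rng.normal(size=(HIDDEN, VOCAB)) * 0.1

def encode(signals):
    """Fold a sequence of brain-signal frames into one representation."""
    h = np.zeros(HIDDEN)
    for frame in signals:          # one frame per time step
        h = np.tanh(frame @ W_enc + h)
    return h

def decode(h, n_words):
    """Emit one vocabulary index per word from the representation."""
    words = []
    for _ in range(n_words):
        logits = h @ W_dec
        words.append(int(np.argmax(logits)))
        h = np.tanh(h)             # step the decoder state (toy update)
    return words

# A fake 10-frame recording run end to end.
signals = rng.normal(size=(10, N_CHANNELS))
sentence = decode(encode(signals), n_words=5)
print(sentence)  # five vocabulary indices
```

The key design point the article describes survives even in this toy form: the encoder compresses the whole input sequence into a representation before any words are emitted, so each output word can depend on the entire input rather than on one time slice.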

They trained their system using brain activity recorded from four women with electrodes implanted in their brains to monitor seizures as they read out a set of 50 sentences containing 250 unique words. This allowed the first network to work out what neural activity correlated with which parts of speech.

In testing, it relied only on the neural signals and was able to achieve error rates of below eight percent on two out of the four subjects, which matches the kinds of accuracy achieved by professional transcribers.
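Error rates like these are conventionally measured as word error rate: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the reference, divided by the reference length. The sketch below implements this standard metric (the example sentences are invented, not from the study):

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

print(word_error_rate("the dog chased the ball",
                      "the dog chased a ball"))  # → 0.2
```

By this measure, a decoded sentence with one wrong word out of five scores a 20 percent error rate, which puts the sub-eight-percent figures above in perspective.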

Inevitably, there are caveats. Firstly, the system was only able to decode 30-50 specific sentences using a limited vocabulary of 250 words. It also requires people to have electrodes implanted in their brains, which is currently only permitted for a limited number of highly specific medical reasons. However, there are a number of signs that this direction holds considerable promise.

One concern was that because the system was being tested on sentences that were included in its training data, it might simply be learning to match specific sentences to specific neural signatures. That would suggest it wasn’t really learning the constituent parts of speech, which would make it harder to generalize to unfamiliar sentences.

But when the researchers added another set of recordings to the training data that were not included in testing, it reduced error rates significantly, suggesting that the system is learning sub-sentence information like words.

They also found that pre-training the system on data from the volunteer that achieved the highest accuracy before training on data from one of the worst performers significantly reduced error rates. This suggests that in practical applications, much of the training could be done before the system is given to the end user, and they would only have to fine-tune it to the quirks of their brain signals.

The vocabulary of such a system is likely to improve considerably as people build upon this approach—but even a limited palette of 250 words could be incredibly useful to a paraplegic, and could likely be tailored to a specific set of commands for telepathic control of other devices.

Now the ball is back in the court of the scrum of companies racing to develop the first practical neural interfaces.
