Drinking Coffee in the Morning, but Not All Day, Decreased the Risk of Death

Key takeaways:

  • Morning coffee drinkers had a 16% risk reduction for death from all causes.
  • Morning coffee drinkers who consumed more than two to three cups achieved the greatest benefit.

People who drink coffee in the morning have a lower risk for death from all causes compared with those who do not drink coffee at all, results from an observational cohort study published in the European Heart Journal showed.

The association between morning coffee consumption and reduced mortality risk appeared especially strong with respect to CVD, according to researchers. Meanwhile, the analysis revealed that those who drank coffee throughout the day did not achieve the same mortality benefits as morning drinkers.

Data derived from: Wang X, et al. Eur Heart J. 2025;doi:10.1093/eurheartj/ehae871.

“While moderate coffee drinking has been recommended for the beneficial relations with health based on previous studies, primary care providers [should] be informed that the time of coffee drinking also matters, beyond the amounts consumed,” Lu Qi, MD, PhD, a professor at Tulane University Celia Scott Weatherhead School of Public Health and Tropical Medicine, told Healio.

Current research suggests that coffee consumption “doesn’t raise the risk of cardiovascular disease, and it seems to lower the risk of some chronic diseases, such as type 2 diabetes,” Qi said in a press release.

“Given the effects that caffeine has on our bodies, we wanted to see if the time of day when you drink coffee has any impact on heart health.”

In the study, Qi and colleagues assessed links between mortality and coffee consumption — including the volume and timing — using data from the National Health and Nutrition Examination Survey from 1999 to 2018.

The analysis comprised 40,725 adults who provided dietary data for at least 1 day. This included a subgroup of 1,463 adults who completed a detailed food and drink diary for an entire week.

Overall, 48% of the cohort did not drink coffee, 36% had a morning-type coffee drinking pattern — primarily drinking from 4 a.m. to 11:59 a.m. — and 16% had an all-day drinking pattern.

The researchers found that, after adjusting for factors like sleep hours and caffeinated and decaffeinated coffee intake amounts, morning coffee drinkers were 16% (HR = 0.84; 95% CI, 0.74-0.95) less likely to die of any cause and 31% (HR = 0.69; 95% CI, 0.55-0.87) less likely to die from CVD compared with those who did not drink coffee.

People who drank coffee all day did not have any risk reductions vs. those who did not drink coffee.

The amount of coffee consumed among morning drinkers also influenced risk reductions, as researchers reported HRs for all-cause mortality of:

  • 0.85 (95% CI, 0.71-1.01) among those who consumed more than zero to one cup;
  • 0.84 (95% CI, 0.73-0.96) among those who consumed more than one to two cups;
  • 0.72 (95% CI, 0.60-0.86) among those who consumed more than two to three cups; and
  • 0.79 (95% CI, 0.65-0.97) among those who consumed more than three cups.
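
For readers less familiar with hazard ratios: an HR below 1 corresponds to a percent risk reduction of (1 − HR) × 100, and an association is conventionally read as statistically significant only when the 95% CI excludes 1. A minimal Python sketch using the values above:

    # Reading the hazard ratios above: percent risk reduction is (1 - HR) * 100,
    # and the association is conventionally called significant only when the
    # 95% CI excludes 1.
    hazard_ratios = {
        ">0 to 1 cup":  (0.85, 0.71, 1.01),
        ">1 to 2 cups": (0.84, 0.73, 0.96),
        ">2 to 3 cups": (0.72, 0.60, 0.86),
        ">3 cups":      (0.79, 0.65, 0.97),
    }

    for cups, (hr, lo, hi) in hazard_ratios.items():
        reduction = (1 - hr) * 100          # e.g., HR 0.72 -> 28% lower risk
        print(f"{cups}: {reduction:.0f}% lower risk, "
              f"95% CI {lo}-{hi}, significant: {hi < 1}")

Run as written, this flags the lowest-intake category as the only one whose CI crosses 1, which is why the strongest conclusions center on the higher intake levels.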

Study results showed similar patterns for mortality from CVD, “but the interaction term was not significant,” Qi and colleagues wrote.

The researchers identified a couple of study limitations. For example, the analysis used self-reported dietary data, opening the potential for recall bias, and they could not rule out possible residual and unmeasured confounders.

The study did not explain why morning coffee consumption reduced the risk for death from CVD, Qi said in the release.

“A possible explanation is that consuming coffee in the afternoon or evening may disrupt circadian rhythms and levels of hormones such as melatonin,” he said. “This, in turn, leads to changes in cardiovascular risk factors such as inflammation and [BP].”

Qi told Healio that regarding future research, “more studies are needed to investigate coffee drinking timing with other health outcomes, in different populations, and clinical trials would be helpful to provide evidence for causality.”

More steps per day could significantly reduce the risk for depression symptoms

Key takeaways:

  • Daily step counts of 5,000 or more were associated with fewer depression symptoms across 33 studies.
  • The associations may be due to several mechanisms, like improvement in sleep quality and inflammation.

Daily step counts of 5,000 or more corresponded with fewer depressive symptoms among adults, results of a systematic review and meta-analysis published in JAMA Network Open suggested.

The results are consistent with previous studies linking exercise to various risk reductions for mental health disorders and show that setting step goals “may be a promising and inclusive public health strategy for the prevention of depression,” the researchers wrote.

According to Bruno Bizzozero-Peroni, PhD, MPH, from Universidad de Castilla-La Mancha in Spain, and colleagues, daily step counts are a “simple and intuitive objective measure” of physical activity, while tracking such counts has become increasingly feasible for the general population thanks to the availability of fitness trackers.

“To our knowledge, the association between the number of daily steps measured with wearable trackers and depression has not been previously examined through a meta-analytic approach,” they wrote.

The researchers searched multiple databases for analyses assessing the effects of daily step counts on depressive symptoms, ultimately including a total of 27 cross-sectional studies and six longitudinal studies comprising 96,173 adults aged 18 years or older.

They found that in the cross-sectional studies, daily step counts of 10,000 or more (standardized mean difference [SMD] = −0.26; 95% CI, −0.38 to −0.14), 7,500 to 9,999 (SMD = −0.27; 95% CI, −0.43 to −0.11) and 5,000 to 7,499 (SMD = −0.17; 95% CI, −0.30 to −0.04) corresponded with reduced depressive symptoms vs. daily step counts less than 5,000.

In the prospective cohort studies, people with 7,000 or more steps a day had a reduced risk for depression vs. people with fewer than 7,000 daily steps (RR = 0.69; 95% CI, 0.62-0.77), and each additional 1,000 steps a day was associated with a lower risk for depression (RR = 0.91; 95% CI, 0.87-0.94).
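
As a rough way to read the incremental estimate, one can compound the per-1,000-step RR, with the caveat that this assumes a log-linear dose-response that the meta-analysis does not claim beyond its observed range. A minimal sketch:

    # Rough extrapolation only: assumes the reported RR of 0.91 per extra
    # 1,000 daily steps compounds log-linearly, a simplification the
    # meta-analysis does not claim beyond its observed range.
    rr_per_1000 = 0.91

    for extra in (1000, 2000, 3000, 5000):
        rr = rr_per_1000 ** (extra / 1000)
        print(f"+{extra} steps/day -> RR ~ {rr:.2f} "
              f"({(1 - rr) * 100:.0f}% lower risk for depression)")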

There were a couple of study limitations. The researchers noted that reverse causation (depression leading to fewer daily steps) is possible, and they could not rule out residual confounders.

They also pointed out that there are some remaining questions, such as whether there is a ceiling limit after which further step counts would no longer reduce the risk for depression.

Bizzozero-Peroni and colleagues highlighted several possible biological and psychosocial mechanisms behind the associations, like changes in sleep quality, inflammation, social support, self-esteem, neuroplasticity and self-efficacy.

They concluded that “a daily active lifestyle may be a crucial factor in regulating and reinforcing these pathways” regardless of the exact combination of mechanisms responsible for the positive link.

“Specifically designed experimental studies are still needed to explore whether there are optimal and maximal step counts for specific population subgroups,” they wrote.

Source: Bizzozero-Peroni B, et al. JAMA Netw Open. 2024;doi:10.1001/jamanetworkopen.2024.51208.

Copenhagen Scientists Unveil Appetite-Control Drug with No Side Effects

by University of Copenhagen

Scientists at the University of Copenhagen have discovered a new weight loss drug target that reduces appetite, increases energy expenditure, and improves insulin sensitivity without causing nausea or loss of muscle mass. The discovery was reported in the journal Nature and could lead to a new therapy for millions of people with both obesity and type 2 diabetes who do not respond well to current treatments.

Millions of people around the world benefit from weight-loss drugs based on the incretin hormone GLP-1. These drugs also improve kidney function, reduce the risk of fatal cardiac events, and are linked to protection against neurodegeneration.

However, many people stop taking the drugs due to common side effects, including nausea and vomiting. Studies also show that incretin-based therapies like Wegovy and Mounjaro are much less effective at lowering weight in people living with both obesity and type 2 diabetes—a group numbering more than 380 million people globally.

In the study, scientists from the University of Copenhagen describe a powerful new drug candidate that lowers appetite without loss of muscle mass or side effects like nausea and vomiting. And, unlike the current generation of treatments, the drug also increases the body’s energy expenditure—the capacity of the body to burn calories.

“While GLP-1-based therapies have revolutionized patient care for obesity and type 2 diabetes, safely harnessing energy expenditure and controlling appetite without nausea remain two Holy Grails in this field. By addressing these needs, we believe our discovery will propel current approaches to make more tolerable, effective treatments accessible to millions more individuals,” says Associate Professor Zach Gerhart-Hines from the Novo Nordisk Foundation Center for Basic Metabolic Research (CBMR) at the University of Copenhagen.

NK2R activation lowers body weight and reverses diabetes

Our weight is largely determined by the balance between the energy we consume and the amount of energy we expend. Eating more and burning less creates a positive energy balance leading to weight gain, while eating less and burning more creates a negative balance, resulting in weight loss.

The current generation of incretin-based therapies tip the scales toward a negative energy balance by lowering appetite and the total calories a person consumes. But scientists have also recognized the potential on the other side of the equation—increasing the calories the body burns.
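
Since the framing above is really just an equation, here is a toy sketch of the balance arithmetic; the calorie values are hypothetical, not from the study:

    # Toy sketch of the energy-balance framing; calorie numbers are
    # hypothetical, not values from the study.
    def energy_balance(intake_kcal: float, expenditure_kcal: float) -> str:
        balance = intake_kcal - expenditure_kcal
        if balance > 0:
            return f"+{balance:.0f} kcal/day: positive balance, weight gain"
        if balance < 0:
            return f"{balance:.0f} kcal/day: negative balance, weight loss"
        return "0 kcal/day: weight stable"

    # Incretin drugs mainly cut intake; NK2R agonism also raises expenditure.
    print(energy_balance(2000, 2500))   # eating less while burning more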

This approach is especially relevant, given recent research that has shown that our bodies seem to be burning fewer calories at rest than they did a few decades ago. However, there are currently no clinically approved ways to safely increase energy expenditure, and few options are in development.

This was the starting point when scientists at the University of Copenhagen decided to test the effect of activating the neurokinin 2 receptor (NK2R) in mice. The Gerhart-Hines Group identified the receptor through genetic screens that suggested NK2R played a role in maintaining energy balance and glucose control.

They were astonished by the results of the studies—not only did activating the receptor safely increase calorie-burning, it also lowered appetite without any signs of nausea.

Further studies in non-human primates with type 2 diabetes and obesity showed that NK2R activation lowered body weight and reversed their diabetes by increasing insulin sensitivity and lowering blood sugar, triglycerides, and cholesterol.

“One of the biggest hurdles in drug development is translation between mice and humans. This is why we were excited that the benefits of NK2R agonism translated to diabetic and obese nonhuman primates, which represents a big step towards clinical translation,” says PhD student Frederike Sass from CBMR at the University of Copenhagen, first author of the study.

The discovery could result in the next generation of drug therapies that bring more efficacious and tolerable treatments for the almost 400 million people globally who live with both type 2 diabetes and obesity.

The University of Copenhagen holds the patent rights for targeting NK2R. To date, research by the Gerhart-Hines lab has led to the creation of three biotech companies—Embark Biotech, Embark Laboratories, and Incipiam Pharma.

In 2023, Embark Biotech was acquired by Novo Nordisk to develop next generation therapeutics for cardiometabolic disease.

More information: Zachary Gerhart-Hines, NK2R control of energy expenditure and feeding to treat metabolic diseases, Nature (2024). DOI: 10.1038/s41586-024-08207-0. www.nature.com/articles/s41586-024-08207-0

Journal information: Nature 

Provided by University of Copenhagen 

https://medicalxpress.com/news/2024-11-weight-loss-drug-energy-lowers.html

Transforming Neurosurgery with FastGlioma AI Technology

by University of Michigan

Researchers have developed an AI-powered model that—in 10 seconds—can determine during surgery whether any removable cancerous brain tumor remains, a study published in Nature suggests.

The technology, called FastGlioma, outperformed conventional methods for identifying what remains of a tumor by a wide margin, according to the research team led by University of Michigan and University of California San Francisco.

“FastGlioma is an artificial intelligence-based diagnostic system that has the potential to change the field of neurosurgery by immediately improving comprehensive management of patients with diffuse gliomas,” said senior author Todd Hollon, M.D., a neurosurgeon at University of Michigan Health and assistant professor of neurosurgery at U-M Medical School.

“The technology works faster and more accurately than current standard of care methods for tumor detection and could be generalized to other pediatric and adult brain tumor diagnoses. It could serve as a foundational model for guiding brain tumor surgery.”

When a neurosurgeon removes a life-threatening tumor from a patient’s brain, they are rarely able to remove the entire mass.

What remains is known as residual tumor.

Commonly, the tumor is missed during the operation because surgeons are not able to differentiate between healthy brain and residual tumor in the cavity where the mass was removed. Residual tumor’s ability to resemble healthy brain tissue remains a major challenge in surgery.

Neurosurgical teams employ different methods to locate that residual tumor during a procedure.

They may use intraoperative MRI, which requires machinery that is not available everywhere. The surgeon might also use a fluorescent imaging agent to identify tumor tissue, which is not applicable for all tumor types. These limitations prevent their widespread use.

In this international study of the AI-driven technology, neurosurgical teams analyzed fresh, unprocessed specimens sampled from 220 patients who had operations for low- or high-grade diffuse glioma.

FastGlioma detected and calculated how much tumor remained with an average accuracy of approximately 92%.

In a comparison of surgeries guided by FastGlioma predictions or image- and fluorescent-guided methods, the AI technology missed high-risk, residual tumor just 3.8% of the time—compared to a nearly 25% miss rate for conventional methods.

“This model is an innovative departure from existing surgical techniques by rapidly identifying tumor infiltration at microscopic resolution using AI, greatly reducing the risk of missing residual tumor in the area where a glioma is resected,” said co-senior author Shawn Hervey-Jumper, M.D., professor of neurosurgery at University of California San Francisco and a former neurosurgery resident at U-M Health.

“The development of FastGlioma can minimize the reliance on radiographic imaging, contrast enhancement or fluorescent labels to achieve maximal tumor removal.”

How it works

To assess what remains of a brain tumor, FastGlioma combines microscopic optical imaging with a type of artificial intelligence called foundation models. These are AI models, such as GPT-4 and DALL·E 3, trained on massive, diverse datasets that can be adapted to a wide range of tasks.

After large scale training, foundation models can classify images, act as chatbots, reply to emails and generate images from text descriptions.

To build FastGlioma, investigators pre-trained the visual foundation model using over 11,000 surgical specimens and 4 million unique microscopic fields of view.

The tumor specimens are imaged through stimulated Raman histology, a method of rapid, high resolution optical imaging developed at U-M. The same technology was used to train DeepGlioma, an AI based diagnostic screening system that detects a brain tumor’s genetic mutations in under 90 seconds.

“FastGlioma can detect residual tumor tissue without relying on time-consuming histology procedures and large, labeled datasets in medical AI, which are scarce,” said Honglak Lee, Ph.D., co-author and professor of computer science and engineering at U-M.

Full resolution images take around 100 seconds to acquire using stimulated Raman histology; a “fast mode” lower resolution image takes just 10 seconds.

Researchers found that the full resolution model achieved accuracy up to 92%, with the fast mode slightly lower at approximately 90%.
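
As a loose illustration of the workflow described above (embed each imaged field of view with a pre-trained visual foundation model, score it for infiltration, and aggregate across the sampled margin), here is a hedged Python sketch; the embedding and scoring functions are placeholders rather than FastGlioma's actual components:

    import numpy as np

    # Hypothetical sketch of the inference loop described above: embed each
    # microscopic field of view, score it for tumor infiltration, and average
    # across the sampled margin. Both functions below are placeholders, not
    # FastGlioma's actual model components.
    rng = np.random.default_rng(0)

    def embed_field_of_view(image: np.ndarray) -> np.ndarray:
        """Placeholder for a pre-trained visual foundation model encoder."""
        return rng.normal(size=128)

    def infiltration_score(embedding: np.ndarray) -> float:
        """Placeholder scoring head mapping an embedding to [0, 1]."""
        return float(1 / (1 + np.exp(-embedding.mean())))

    def score_margin(fields_of_view: list) -> float:
        """Mean infiltration score over all fields of view from the margin."""
        return float(np.mean([infiltration_score(embed_field_of_view(f))
                              for f in fields_of_view]))

    fake_images = [rng.normal(size=(256, 256)) for _ in range(8)]  # fast-mode stand-ins
    print(f"estimated infiltration: {score_margin(fake_images):.2f}")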

“This means that we can detect tumor infiltration in seconds with extremely high accuracy, which could inform surgeons if more resection is needed during an operation,” Hollon said.

AI’s future in cancer

Over the last 20 years, the rates of residual tumor after neurosurgery have not improved.

Not only does residual tumor result in worse quality of life and earlier death for patients, but it increases the burden on a health system that anticipates 45 million annual surgical procedures needed worldwide by 2030.

Global cancer initiatives have recommended incorporating new technologies, including advanced methods of imaging and AI, into cancer surgery.

In 2015, The Lancet Oncology Commission on global cancer surgery noted that “the need for cost effective… approaches to address surgical margins in cancer surgery provides a potent drive for novel technologies.”

Not only is FastGlioma an accessible and affordable tool for neurosurgical teams operating on gliomas but, researchers say, it can also accurately detect residual tumor in several non-glioma diagnoses, including pediatric brain tumors such as medulloblastoma and ependymoma, as well as meningiomas.

“These results demonstrate the advantage of visual foundation models such as FastGlioma for medical AI applications and the potential to generalize to other human cancers without requiring extensive model retraining or fine-tuning,” said co-author Aditya S. Pandey, M.D., chair of the Department of Neurosurgery at U-M Health.

“In future studies, we will focus on applying the FastGlioma workflow to other cancers, including lung, prostate, breast, and head and neck cancers.”

More information: Foundation models for fast, label-free detection of glioma infiltration, Nature (2024). DOI: 10.1038/s41586-024-08169-3. www.nature.com/articles/s41586-024-08169-3

Journal information: Nature 

Provided by University of Michigan 

Open-label placebo injection demonstrates ‘modest’ benefit in chronic back pain

Key takeaways:

  • A non-deceptive placebo injection reduced chronic back pain with an effect size similar to that of typical treatments.
  • Secondary outcome benefits and brain changes lasted up to 1 year.

A single saline injection, openly prescribed as a placebo, yielded approximately 1 month of chronic back pain improvement, along with longer-term benefits in depression and sleep, according to data published in JAMA Network Open.

“We have known that placebos can be powerful pain relievers, but it has been unclear how to use them ethically, without patient deception,” Yoni K. Ashar, PhD, assistant professor at the University of Colorado Anschutz Medical Campus, told Healio. “This spurred the development of the ‘open label,’ non-deceptive placebo treatment, which we studied here.”

To investigate the long-term efficacy of open-label placebo in chronic back pain, Ashar and colleagues recruited 101 adults (mean age, 40.4 years) with moderate chronic back pain from the Boulder, Colorado, area between November 2017 and August 2018, with a follow-up at 1 year.

Trial participants were randomly assigned to either continue their usual care alone or to also receive a single, open-label lumbar saline injection, along with information about how the placebo effect can lead to pain relief. The primary outcome was average pain over the previous week, measured on a scale of 0 to 10 at 1 month after treatment. Secondary outcomes assessed pain interference, depression, anxiety, anger and sleep quality.

At 1 month, those who received placebo injections reported greater reductions in chronic back pain than the usual care group (relative reduction, 0.61; Hedges g = −0.45; 95% CI, −0.89 to −0.04), according to the researchers.

By 1 year post-treatment, the between-group difference in pain relief was no longer significant. At 1 year, however, significant benefits persisted in depression, anger, anxiety and sleep disruption, with “medium sized” effect sizes ranging from 0.3 to 0.5 (P < .03 for all).
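
For context, Hedges g is Cohen's d with a small-sample correction, so a g near 0.45 means the groups differed by a bit under half a pooled standard deviation. A minimal sketch of the computation, with illustrative pain scores rather than the trial's raw data:

    import math

    # Hedges g: Cohen's d with a small-sample correction factor J.
    def hedges_g(m1, m2, sd1, sd2, n1, n2):
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                              / (n1 + n2 - 2))
        d = (m1 - m2) / pooled_sd            # Cohen's d
        j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction
        return j * d

    # Illustrative 0-10 pain scores, not the trial's raw data:
    print(f"g = {hedges_g(3.1, 4.2, 2.3, 2.5, 50, 51):.2f}")   # ~ -0.45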

The researchers also compared neuroimaging between the groups. Functional MRI scans were taken as participants performed both an “evoked” back pain procedure, which used an inflating balloon to cause back distention and pain, and a “spontaneous” pain procedure, where patients rated their pain once per minute over the course of an 8-minute scan.

Overall, the neuroimaging showed “altered brain responses to evoked back pain and altered functional connectivity during spontaneous pain consistent with engagement of descending modulatory pain pathways,” Ashar and colleagues wrote.

The researchers described the placebo injection’s pain relief benefit as “modest in magnitude” but clinically significant and comparable with the effect sizes of typical treatments such as NSAIDs, but with fewer adverse events.

“These findings speak to the power of healing rituals, even when we know they are healing rituals,” Ashar said. “Although we view this study as more mechanistic and conceptually provocative than as clinically applicable, it suggests that providers may be able to ethically prescribe a placebo for their patients one day, without deception. In addition, the duration of benefits on secondary outcomes and the observed brain changes were surprising, considering how brief and minimalist the intervention was.”

New research shows that different fears are controlled by different parts of the brain

By Rodielon Putol, Earth.com staff writer

Fear strikes in many forms – standing on the edge of a towering skyscraper, glimpsing a tarantula, or feeling your heart race as you prepare to deliver a speech.

The scientific community long believed these scenarios stimulated the brain in similar ways.

“There’s this story that we’ve had in the literature that the brain regions that predict fear are things like the amygdala, or the orbital frontal cortex area, or the brainstem,” said Ajay Satpute, an associate professor of psychology at Northeastern University.

“Those are thought to be part of a so-called ‘fear circuit’ that’s been a very dominant model in neuroscience for decades.”

Challenging the fear circuit model

In early October 2024, Satpute and his team released a study challenging this long-held belief.

The researchers used MRI scans to examine the brain’s response to three distinct fear-inducing scenarios: fear of heights, spiders, and public speaking.

Contrary to prior assumptions, the study revealed that each type of fear activated different brain regions, debunking the idea of a universal “fear circuit.”

“Much of the debate on the nature of emotion concerns the uniformity or heterogeneity of representation for particular emotion categories,” noted the researchers.

The team discovered that “the overwhelming majority of brain regions that predict fear only do so for certain situations.”

The research suggests responses to fear are more specific than previously thought. These findings carry important implications for understanding anxiety across species and for developing neural signatures for personalized treatments.

Machine learning and fear in the brain

The research tested long-standing assumptions about how fear works, particularly as neuroscience increasingly relies on AI and machine learning to predict emotions.

“Most of those approaches assume that there is a single pattern that underlies the brain-behavior relationship: there’s a single pattern that predicts disgust. There’s a single pattern that predicts anger,” said Satpute.

“Well, if that’s true, then such a pattern should be apparent for different varieties of fear.”

However, when it comes to fear, the study showed a more complex picture.
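
The logic of that test can be illustrated with a toy decoding example: if fear had one shared neural pattern, a classifier trained on one situation should transfer to another. The sketch below builds synthetic data with deliberately context-specific signal to show how transfer fails; it is a schematic, not the study's actual pipeline:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy version of the generalization test: if a single pattern underlay all
    # fear, a decoder trained on one situation (heights) should transfer to
    # another (spiders). Synthetic data with context-specific signal built in.
    rng = np.random.default_rng(0)
    n, voxels = 200, 50
    fear = rng.integers(0, 2, size=2 * n)              # fear vs. no-fear labels

    X = rng.normal(size=(2 * n, voxels))
    X[:n, :10] += fear[:n, None] * 1.5                 # "heights": voxels 0-9
    X[n:, 10:20] += fear[n:, None] * 1.5               # "spiders": voxels 10-19

    idx = rng.permutation(n)
    train, test = idx[:150], idx[150:]
    clf = LogisticRegression(max_iter=1000).fit(X[train], fear[train])
    print("within heights (held out):", clf.score(X[test], fear[test]))  # high
    print("transfer to spiders:", clf.score(X[n:], fear[n:]))            # ~chance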

Focus of the research

In the experiment, the researchers asked 21 participants to identify their fears and used magnetic resonance imaging (MRI) scans to monitor brain activity as they watched videos depicting anxiety-inducing scenarios.

“We tried to find really scary videos of spiders,” Satpute said. “Because I don’t want a neural predictive model that says ‘you’re looking at a spider.’ I want a neural predictive model that says ‘you’re experiencing fear.’”

Revealing fear’s hidden complexities

Following each video, participants rated their levels of fear, valence (how pleasant or unpleasant the experience was), and arousal on a questionnaire.

The study revealed two surprising insights: responses were observed in a wider array of brain regions than expected, and not all brain regions were involved across all scenarios.

“The amygdala, for instance, seemed to carry information that predicted fear during the heights context, but not some of the other contexts,” Satpute said. “We’re not seeing these so-called ‘classic threat areas’ involved in being predictive of fear across situations.”

Body’s response to emotional triggers

The research is part of a broader body of work from Satpute’s lab, which focuses on understanding how fear manifests in the body.

In a previous 2021 study, the team explored physiological responses to fear such as sweat and heart rate when facing different triggers like heights or confrontations with law enforcement.

The study also revealed that different triggers caused varied bodily reactions, supporting the idea that fear isn’t one-size-fits-all.

Implications for future treatments

Satpute hopes to replicate these findings with a larger and more diverse participant pool, factoring in demographics like age and gender.

While the current study has a small sample size, the results could reshape how health professionals approach treating fear and anxiety disorders.

“When we look at the brain and the neural correlates of fear, part of the reason we want to understand is so we can intervene on it,” noted Satpute. “Our findings suggest the interventions might also need to be tailored to the person and situation.”

Revolutionizing fear-based therapies

This shift in understanding could revolutionize behavior-based therapies for conditions like phobias and PTSD. It might even impact drug-based treatments.

“Drug-based therapies that target a particular circuit do work, but only for about fiftyish percent of people,” Satpute said. “It’s not really clear why.”

“Our research offers at least some explanation – the brain regions that are going to matter for any emotional experience are going to vary by the person and situation. If you focus only on what’s common, you ignore so much.”

This understanding of fear moves beyond the idea of a “fear circuit” and opens doors for personalized treatments.

Whether it’s the fear of falling, facing a spider, or standing in front of an audience, the research shows fear is more complex than once believed.

The study is published in The Journal of Neuroscience.

https://www.earth.com/news/spiders-heights-or-public-speaking-each-fear-has-a-unique-place-in-the-brain/

Is this how complex life evolved? Experiment that put bacteria inside fungi offers clues

Biologists created a symbiotic system that hints at how cell features such as mitochondria and chloroplasts might have emerged a billion years ago.

Scientists wielding a minute hollow needle — and a bike pump — have managed to implant bacteria into a larger cell, creating a relationship similar to those that sparked the evolution of complex life.

The feat — described in Nature on 2 October [1] — could help researchers to understand the origins of pairings that gave rise to specialized organelles called mitochondria and chloroplasts more than one billion years ago.

Endosymbiotic relationships — in which a microbial partner lives harmoniously within the cells of another organism — are found in numerous life forms, including insects and fungi. Scientists think that mitochondria, the organelles that are responsible for cells’ energy production, evolved when a bacterium took up residence inside an ancestor of eukaryotic cells. Chloroplasts emerged when an ancestor of plants swallowed a photosynthetic microorganism.

Determining the factors that formed and sustained these couplings is difficult because they occurred so long ago. To get around this problem, a team led by microbiologist Julia Vorholt at the Swiss Federal Institute of Technology in Zurich (ETH Zurich) has spent the past few years engineering endosymbioses in the laboratory. Their approach uses a 500–1,000-nanometre-wide needle to puncture host cells and then deliver bacterial cells one at a time.

Sparking symbiosis

Even with this technical wizardry, initial pairings tended to fail; for instance, because the would-be symbiont divided too fast and killed its host [2]. The team’s luck changed when they recreated a natural symbiosis that occurs between some strains of a fungal plant pathogen, Rhizopus microsporus, and the bacterium Mycetohabitans rhizoxinica, which produces a toxin that protects the fungus from predation.

Yet delivering bacterial cells into the fungi, which have thick cell walls that maintain a high internal pressure, was a challenge. After piercing the wall with the needle, the researchers used a bicycle pump — and later an air compressor — to maintain enough pressure to deliver the bacteria.

After overcoming the initial shock of surgery, the fungi continued their life cycles and produced spores, a fraction of which contained bacteria. When these spores germinated, bacteria were also present in the cells of the next generation of fungi. This showed that the new endosymbiosis could be passed on to offspring — a key finding.

Vanishing bacteria

But the germination success of the bacteria-containing spores was low. In a mixed population of spores (some with bacteria and some without), those with bacteria vanished after two generations. To see whether relations could be improved, the researchers used a fluorescent cell sorter to select spores containing bacteria — which had been labelled with a glowing protein — and propagated only these spores in future rounds of reproduction. By ten generations, the bacteria-containing spores germinated nearly as efficiently as those without bacteria.

The basis of this adaptation isn’t clear. Genome sequencing identified a handful of mutations associated with improved germination success in the fungus — which was a strain of R. microsporus not known to carry endosymbionts naturally — and found no changes in the bacteria.

The line that germinated most efficiently tended to limit the number of bacteria in each spore, says study co-author Gabriel Giger, a microbiologist at ETH Zurich. “There are ways for these two partners to make a better, easier living with each other. That’s something that’s really important for us to understand.”

Fungal immune system

Researchers don’t know much about the genetics of R. microsporus. But Thomas Richards, an evolutionary biologist at the University of Oxford, UK, wonders whether a fungal immune system is preventing symbiosis — and whether mutations to this system could be easing relations. “I’m a big fan of this work,” he adds.

Eva Nowack, a microbiologist at Heinrich Heine University Düsseldorf in Germany, was surprised at how quickly adaptations to symbiotic life seemed to evolve. In the future, she would love to see what happens after even longer time periods; for example, more than 1,000 generations.

Engineering such symbioses could lead to the development of novel organisms with useful traits, such as the ability to consume carbon dioxide or atmospheric nitrogen, says Vorholt. “That’s the idea: to bring in new traits that an organism doesn’t have, and that would be difficult to implement otherwise.”

doi: https://doi.org/10.1038/d41586-024-03224-5

References

  1. Giger, G. H. et al. Nature https://doi.org/10.1038/s41586-024-08010-x (2024).
  2. Gäbelein, C. G., Reiter, M. A., Ernst, C., Giger, G. H. & Vorholt, J. A. ACS Synth. Biol. 11, 3388–3396 (2022).

Your Brain Divides the Day Into “Chapters” Based on Priorities

Summary: New research shows that the brain divides the day into “chapters” based on what a person focuses on. These mental boundaries aren’t solely prompted by changes in surroundings but also by internal goals and priorities. In experiments using audio narratives, participants’ brains organized events differently depending on whether they focused on specific details.

This study suggests that how we experience and remember events is influenced by both context and what matters most to us at the time.

Key Facts:

  • The brain forms new “chapters” based on attention and personal goals, not just environment.
  • MRI scans showed that people segmented stories differently depending on their focus.
  • The research may help explain how expectations influence memory formation.

Source: Columbia University

The moment a person steps off the street and into a restaurant—to take just one example—the brain mentally starts a new “chapter” of the day, a change that causes a big shift in brain activity. Shifts like this happen all day long, as people encounter new environments, like going out for lunch, attending their kid’s soccer game, or settling in for a night of watching TV.

But what determines how the brain divides the day into individual events that we can understand and remember separately?  

That’s what a new paper in the journal Current Biology aimed to find out. 

The research team, led by Christopher Baldassano, an associate professor of Psychology, and Alexandra De Soares, then a member of his lab, turned up interesting results.

The researchers wanted to better understand what prompts the brain to form a boundary around the events we encounter, effectively registering each as a new “chapter” in the day.

One possibility is that new chapters are entirely caused by big changes in a person’s surroundings, like how walking into a restaurant takes them from outdoors to indoors.

Another possibility, however, is that the new chapters are prompted by internal scripts that our brain writes based on past experience, and that even big environmental changes might be ignored by our brain if they are not related to our current priorities and goals.

To test their hypothesis, researchers developed a set of 16 audio narratives, each about three to four minutes long. Each narrative took place in one of four locations (a restaurant, an airport, a grocery store, and a lecture hall) and dealt with one of four social situations (a breakup, a proposal, a business deal, and a meet cute).

The researchers found that the way the brain divides up an experience into individual events depends on what a person currently cares about and is paying attention to.

When listening to a story about a marriage proposal at a restaurant, for example, subjects’ prefrontal cortex would usually be organizing the story into events related to the proposal, leading up (hopefully) to the final “yes.”

But the researchers found that they could force the prefrontal cortex to organize the story in a different way if they instead asked study participants to focus on the events related to the dinner orders of the couple. For study participants who were told to focus on these details, moments like ordering dishes became critical new chapters in the story.

“We wanted to challenge the theory that the sudden shifts in brain activity when we start a new chapter of our day are only being caused by sudden shifts in the world—that the brain isn’t really ‘doing’ anything interesting when it creates new chapters, it’s just responding passively to a change in sensory inputs,” Baldassano said.

“Our research found that isn’t the case: The brain is, in fact, actively organizing our life experiences into chunks that are meaningful to us.”

The researchers measured where the brain created new chapters both by looking at MRI scans of the brain to identify fresh brain activity, and, in a separate group of participants, by asking them to press a button to indicate when they thought a new part of the story had begun.
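
One generic way to locate such shifts in imaging data, not necessarily the exact method used here, is to flag time points where the spatial activity pattern abruptly decorrelates from the preceding one. A sketch on synthetic data with a built-in chapter change:

    import numpy as np

    # Generic boundary detection from pattern shifts: flag time points where
    # the correlation between successive activity patterns drops sharply.
    # Synthetic data with a built-in "chapter" change at time point 50.
    rng = np.random.default_rng(0)
    pattern_a, pattern_b = rng.normal(size=100), rng.normal(size=100)
    activity = np.array(
        [pattern_a + rng.normal(scale=0.5, size=100) for _ in range(50)]
        + [pattern_b + rng.normal(scale=0.5, size=100) for _ in range(50)]
    )

    successive_r = [np.corrcoef(activity[t], activity[t + 1])[0, 1]
                    for t in range(len(activity) - 1)]
    boundaries = [t + 1 for t, r in enumerate(successive_r) if r < 0.5]
    print("detected boundary near:", boundaries)   # expect [50]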

They found that the brain divided stories into separate chapters depending on the perspective they were told to be attuned to—and it didn’t just apply to the proposal-in-a-restaurant scenario: A person hearing a story about a breakup in an airport could, if prompted to pay attention to details of the airport experience, register new chapters as they went through security and arrived at their gate.

Meanwhile, a person who heard a story about a person closing a business deal while grocery shopping could be prompted to register either the new steps of the business deal as new chapters, or to be attuned primarily to the phases of grocery shopping instead.

The details that the study participants were prompted to pay attention to influenced what their brain perceived as a new chapter in the story.

Moving forward, the researchers hope to investigate the impact that expectations have on long-term memory. As part of this study, the researchers also asked each participant to tell them everything they remembered about each story.

They are still in the process of analyzing the data to understand how the perspective participants were asked to adopt while listening changes the way they remember each story. More broadly, this study is part of an ongoing effort in the field to build a comprehensive theory about how real-life experiences are divided up into event memories.

The results indicate that prior knowledge and expectations are a key ingredient in how this cognitive system works.

Baldassano described the work as a passion project.

“Tracking activity patterns in the brain over time is a big challenge that requires using complex analysis tools,” he said. “Using meaningful stories and mathematical models to discover something new about cognition is exactly the kind of unconventional research in my lab that I am most proud of and excited about.”

About this neuroscience research news

Author: Christopher Shea
Source: Columbia University

Serotonin levels in the brain increase with reward value

by Dartmouth College

Serotonin is often referred to as the “happiness molecule.” It plays a critical role in regulating mood and is also a neurotransmitter that sends signals within the brain and the body.

Researchers have generally thought that the chemical plays a global role in modulating brain states by acting over a longer timescale than dopamine, which signals reward but operates on a much shorter timeframe.

Now, a Dartmouth study published in The Journal of Neuroscience reports that serotonin increases in anticipation of a reward and scales with the value of that reward.

For decades, research has examined how dopamine release encodes the value of rewards at a subsecond timescale, using a technique that enabled scientists to monitor it throughout different areas of the mouse brain.

Techniques for monitoring serotonin at this timescale did not previously exist, leaving many unknowns about when serotonin is released in the brain, in part because of its widespread projections. Serotonin is an extraordinarily complex system: its cells are located in one small region of the brain yet send their messages to nearly every other area of the brain.

There are 14 serotonin receptors, which are like 14 different locks; the key, serotonin, can fit into any one of them, unlocking a different message depending on the door. This explains why past studies focused on targeting those receptors before it was possible to examine serotonin itself.

“In this research we used a new biosensor called GRAB-serotonin, for short, that could, for the first time, measure the molecule by ‘grabbing’ serotonin released in the brain, while the mouse was running around receiving a tasty treat,” says senior author Kate Nautiyal, an assistant professor of psychological and brain sciences at Dartmouth.

In a technique called fiber photometry, light is used to trigger and then measure fluorescence fluctuations from a biosensor like GRAB whenever serotonin is detected. The team was able to study the release of serotonin in mice while they received rewards, in this case varied concentrations of evaporated milk, which mice love. The researchers were then able to look at how serotonin levels changed depending on how good the reward was.
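
Photometry analyses of this kind typically express the sensor signal as a fractional change over a pre-event baseline (ΔF/F) and then average traces aligned to the event. A minimal sketch with a synthetic signal; the sampling rate, window sizes and transient shape are arbitrary assumptions:

    import numpy as np

    # Minimal photometry-style analysis: compute dF/F against a pre-event
    # baseline, then average traces aligned to reward delivery. The signal
    # here is synthetic; rates and windows are arbitrary assumptions.
    rng = np.random.default_rng(0)
    fs = 100                                   # samples per second
    trace = rng.normal(1.0, 0.02, size=60 * fs)
    reward_times_s = [10, 25, 40]
    for t in reward_times_s:                   # inject a transient per reward
        i = t * fs
        trace[i:i + fs] += 0.1 * np.exp(-np.linspace(0, 5, fs))

    def aligned_dff(trace, event_s, pre_s=2, post_s=3):
        i = event_s * fs
        baseline = trace[i - pre_s * fs:i].mean()
        return (trace[i - pre_s * fs:i + post_s * fs] - baseline) / baseline

    mean_response = np.mean([aligned_dff(trace, t) for t in reward_times_s], axis=0)
    print(f"peak dF/F after reward delivery: {mean_response.max():.3f}")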

“We had a pretty good understanding that if you alter serotonin signaling by targeting receptors or manipulating reuptake such as with selective serotonin reuptake inhibitors, which are used in antidepressants, you get these broad impacts on mood and can change the way that animals or individuals seem to regulate behavior,” says co-author Mitchell Spring, a postdoctoral researcher who worked on this project in the Nautiyal Lab, a behavioral neuroscience lab in the Department of Psychological and Brain Sciences at Dartmouth.

The results showed that consumption of higher concentrations of the reward was associated with greater serotonin release. When the mice were thirsty and were given water, there was a big serotonin signal, and when they were satiated with a good reward and were full, the serotonin signal was not as strong.

The findings also showed that if mice are given a cue that predicts the reward, serotonin levels rise during the cue, in anticipation of the reward.

“We found that you can modulate the serotonin signal with the subjective value of the reward,” says Nautiyal. “Our results tell us that serotonin is really a signal in the brain monitoring how good a reward is.”

In measuring the release of serotonin, the team focused on one brain region, the dorsomedial striatum, which has previously been associated very strongly with dopamine, decision-making, and impulsivity.

The researchers say that selective serotonin reuptake inhibitors are widely prescribed and generally effective, but it is not fully understood how they work or what serotonin is doing to address the behaviors these antidepressants treat.

“A better understanding of how serotonin is operating at baseline or in healthy individuals during a positive experience could be used to develop more targeted treatments for psychiatric disorders like depression and addiction,” says Nautiyal.

More information: Mitchell G. Spring et al, Striatal serotonin release signals reward value, The Journal of Neuroscience (2024). DOI: 10.1523/JNEUROSCI.0602-24.2024

Journal information: Journal of Neuroscience 

https://medicalxpress.com/news/2024-09-fiber-photometry-technique-serotonin-brain.html

Research Misconduct Committed by Director of the Division of Neuroscience at the National Institute on Aging

The NIH announced that Eliezer Masliah, MD, the former director of the division of neuroscience at the National Institute on Aging, engaged in research misconduct while serving at the agency.

The NIH said in a statement that Masliah committed falsification and/or fabrication involving repeated use and relabeling of “figure panels representing different experimental results in two publications.”

The NIH further stated it will notify the two journals in which the panels appeared of its findings so that appropriate action can be taken.

The agency initiated a research misconduct review process in May 2023 after it was notified of allegations from the HHS Office of Research Integrity. An investigation was subsequently initiated in December 2023 and concluded on Sept. 15.

Masliah joined the NIH in 2016 as director of the division of neuroscience at the National Institute on Aging (NIA) and an intramural researcher studying synaptic damage in patients with neurodegenerative disorders. Per the NIH, he no longer serves as director. NIA Deputy Director Amy Kelley, MD, has assumed the role of acting director.

The NIH declined to comment further when asked about details surrounding its decision.

https://www.healio.com/news/neurology/20240926/nih-finds-neuroscience-director-engaged-in-research-misconduct