The risk of lasting consequences if our brains don’t get adequate stimulation in our early years

by Bahar Golipour

What is the earliest memory you have?

Most people can’t remember anything that happened to them or around them in their toddlerhood. The phenomenon, called childhood amnesia, has long puzzled scientists. Some have argued that we forget because the young brain hasn’t fully developed the ability to store memories. Others argue that the fast-growing brain is rewiring itself so much that it overwrites what it has already registered.

New research that appears in Nature Neuroscience this week suggests that those memories are not forgotten. The study shows that when juvenile rats have an experience during this infantile amnesia period, the memory of that experience is not lost. Instead, it is stored as a “latent memory trace” for a long time. If something later reminds them of the original experience, the memory trace reemerges as a full-blown, long-lasting memory.

Taking a (rather huge) leap from rats to humans, this could explain how early life experiences that you don’t remember still shape your personality; how growing up in a rich environment can make you smarter; and how early trauma puts you at higher risk for mental health problems later on.

Scientists don’t know whether we can access those memories. But the new study shows childhood amnesia coincides with a critical time for the brain ― specifically the hippocampus, a seahorse-shaped brain structure crucial for memory and learning. Childhood amnesia corresponds to the time that your brain matures and new experiences fuel the growth of the hippocampus.

In humans, this period occurs before pre-school, likely between the ages of 2 and 4. During this time, a child’s brain needs adequate stimulation (mostly from healthy social interactions) so it can better develop the ability to learn.

And not getting enough healthy mental stimulation during this period may impede the development of the brain’s learning and memory centers in a way that cannot be compensated for later.

“What our findings tell us is that children’s brains need to get enough and healthy activation even before they enter pre-school,” said study leader Cristina Alberini, a professor at New York University’s Center for Neural Science. “Without this, the neurological system runs the risk of not properly developing learning and memory functions.”

The findings may illustrate one mechanism that could partly explain research showing that poverty can shrink children’s brains.

Extensive research spanning decades has shown that low socioeconomic status is linked to problems with cognitive abilities, higher risk for mental health issues and poorer performance in school. In recent years, psychologists and neuroscientists have found that the brain’s anatomy may look different in poor children. Poverty is also linked to smaller brain surface area and smaller volume of the white matter connecting brain areas, as well as a smaller hippocampus. And a 2015 study found that differences in brain development explain up to 20 percent of the academic performance gap between children from high- and low-income families.

Critical Periods

For the brain, the first few years of life set the stage for the rest of life.

Even though the nervous system keeps some of its ability to rewire throughout life, several biochemical events that shape its core structure happen only at certain times. During these critical developmental periods, the brain is acutely sensitive to new sights, sounds, experiences and external stimulation.

Critical periods are best studied in the visual system. In the 1960s, scientists David Hubel and Torsten Wiesel showed that if they closed one eye of a kitten from birth for just a few months, its brain never learned to see properly. Neurons in the visual areas of the brain lost their ability to respond to the deprived eye. Adult cats treated the same way don’t show this effect, which demonstrates the importance of critical periods for proper brain development. This finding was part of the pioneering work that earned Hubel and Wiesel the 1981 Nobel Prize in Physiology or Medicine.

In the new study in rats, the team shows that a similar critical period may occur in the hippocampus.

Alberini and her colleagues took a close look at what exactly happens in the brain of rats in their first 17 days of life (equivalent to the first three years of a human’s life). They created a memory for the rodents of a negative experience: every time the animals entered a specific corner of their cage, they received a mildly painful shock to the foot. Young rats, like kids, aren’t great at remembering things that happen to them during the infantile amnesia period. So although they avoided that corner right after the shock, they returned to it just a day later. In contrast, a group of older rats retained the memory and avoided the place for a long time.

However, the younger rats had actually kept a trace of the memory. A reminder (such as another foot shock in another corner) was enough to resurrect the memory and make the animals avoid the first corner of the cage.

Researchers found a cascade of biochemical events in the young rats’ brains that are typically seen in developmental critical periods.

“We were excited to see the same type of mechanism in the hippocampus,” Alberini told The Huffington Post.

The Learning Brain And Its Mysteries

Just like the kittens’ brains needed light from the eyes to learn to see, the hippocampus may need novel experiences to learn to form memories.

“Early in life, while the brain cannot efficiently form long-term memories, it is ‘learning’ how to do so, making it possible to establish the abilities to memorize long-term,” Alberini said. “However, the brain needs stimulation through learning so that it can get in the practice of memory formation―without these experiences, the ability of the neurological system to learn will be impaired.”

This does not mean that you should put your kids in pre-pre-school, Alberini told HuffPost. Rather, it highlights the importance of healthy social interaction, especially with parents, and growing up in an environment rich in stimulation. Most kids in developed countries are already benefiting from this, she said.

But what does this all mean for children who grow up exposed to low levels of environmental stimulation, something more likely in poor families? Does it explain why poverty is linked to smaller brains? Alberini thinks many other factors likely contribute to the link between poverty and the brain. But it is possible, she said, that low stimulation during the development of the hippocampus also plays a part.

Psychologist Seth Pollak of the University of Wisconsin-Madison, who has found that children raised in poverty show differences in hippocampal development, agrees.

Pollak believes the findings of the new study represent “an extremely plausible link between early childhood adversity and later problems.”

“We must always be cautious about generalizing studies of rodents to understanding human children,” Pollak added. “But the nonhuman animal studies, such as this one, provide testable hypotheses about specific mechanisms underlying human behavior.”

Although the link between poverty and cognitive performance has been seen repeatedly in numerous studies, scientists don’t have a good handle on how exactly the many related factors unfold inside the developing brain, said Elizabeth Sowell, a researcher at Children’s Hospital Los Angeles. Studies like this one provide “a lot of food for thought,” she added.

http://www.huffingtonpost.com.au/2016/07/24/the-things-you-dont-remember-shape-who-you-are/

Scientists discover key brain cells that control eating portion size

While researching the brain’s learning and memory system, scientists at Johns Hopkins say they stumbled upon a new type of nerve cell that seems to control feeding behaviors in mice. The finding, they report, adds significant detail to the way brains tell animals when to stop eating and, if confirmed in humans, could lead to new tools for fighting obesity. Details of the study were published by the journal Science today.

“When the type of brain cell we discovered fires and sends off signals, our laboratory mice stop eating soon after,” says Richard Huganir, Ph.D., director of the Department of Neuroscience at the Johns Hopkins University School of Medicine. “The signals seem to tell the mice they’ve had enough.”

Huganir says his team’s discovery grew out of studies of the proteins that strengthen and weaken the intersections, or synapses, between brain cells. These are an important target of research because synapse strength, particularly among cells in the hippocampus and cortex of the brain, is important in learning and memory.

In a search for details about synapse strength, Huganir and graduate student Olof Lagerlöf, M.D., focused on the enzyme OGT — a biological catalyst involved in many bodily functions, including insulin use and sugar chemistry. The enzyme’s job is to add a molecule called N-acetylglucosamine (GlcNAc), a derivative of glucose, to proteins, a phenomenon first discovered in 1984 by Gerald Hart, Ph.D., director of the Johns Hopkins University School of Medicine’s Department of Biological Chemistry and co-leader of the current study. By adding GlcNAc molecules, OGT alters the proteins’ behavior.

To learn about OGT’s role in the brain, Lagerlöf deleted the gene that codes for it from the primary nerve cells of the hippocampus and cortex in adult mice. Even before he looked directly at the impact of the deletion in the rodents’ brains, Lagerlöf reports, he noticed that the mice doubled in weight in just three weeks. It turned out that fat buildup, not muscle mass, was responsible.

When the team monitored the feeding patterns of the mice, they found that those missing OGT ate the same number of meals — on average, 18 a day — as their normal littermates but tarried over the food longer and ate more calories at each meal. When their food intake was restricted to that of a normal lab diet, they no longer gained extra weight, suggesting that the absence of OGT interfered with the animals’ ability to sense when they were full.

“These mice don’t understand that they’ve had enough food, so they keep eating,” says Lagerlöf.

Because the hippocampus and cortex are not known to directly regulate feeding behaviors in rodents or other mammals, the researchers looked for changes elsewhere in the brain, particularly in the hypothalamus, which is known to control body temperature, feeding, sleep and metabolism. There, they found OGT missing from a small subset of nerve cells within a cluster of neurons called the paraventricular nucleus.

Lagerlöf says these cells already were known to send and receive multiple signals related to appetite and food intake. When he looked for changes in the levels of those factors that might be traced to the absence of OGT, he found that most of them were not affected, and the activity of the appetite signals that many other research groups have focused on didn’t seem to be causing the weight gain, he adds.

Next, the team examined the chemical and biological activity of the OGT-negative cells. By measuring the background electrical activity in nonfiring brain cells, the researchers estimated the number of incoming synapses on the cells and found that they had only about one-third as many as normal cells.

“That result suggests that, in these cells, OGT helps maintain synapses,” says Huganir. “The number of synapses on these cells was so low that they probably aren’t receiving enough input to fire. In turn, that suggests that these cells are responsible for sending the message to stop eating.”

To verify this idea, the researchers genetically manipulated the cells in the paraventricular nucleus so that they would add blue light-sensitive proteins to their membranes. When they stimulated the cells with a beam of blue light, the cells fired and sent signals to other parts of the brain, and the mice decreased the amount they ate in a day by about 25 percent.

Finally, because glucose is needed to produce GlcNAc, they thought that glucose levels, which increase after meals, might affect the activity of OGT. Indeed, they found that if they added glucose to nerve cells in petri dishes, the level of proteins with the GlcNAc addition increased in proportion to the amount of glucose in the dishes. And when they looked at cells in the paraventricular nucleus of mice that hadn’t eaten in a while, they saw low levels of GlcNAc-decorated proteins.

“There are still many things about this system that we don’t know,” says Lagerlöf, “but we think that glucose works with OGT in these cells to control ‘portion size’ for the mice. We believe we have found a new receiver of information that directly affects brain activity and feeding behavior, and if our findings bear out in other animals, including people, they may advance the search for drugs or other means of controlling appetites.”

http://www.eurekalert.org/pub_releases/2016-03/jhm-pcc031416.php

New study shows that medical marijuana cuts average number of migraine headaches in half

Marijuana may give relief to migraine sufferers, according to research published online in Pharmacotherapy.

The research included 121 patients diagnosed with migraines and treated with medical marijuana between January 2010 and September 2014. Patients in the study used both inhaled marijuana and edible marijuana. The researchers said inhaled marijuana seemed to be preferred for treating current headaches, and edibles seemed to be favored for headache prevention.

The researchers found that 103 study participants said they had a decrease in their monthly migraines. Fifteen patients said they had the same number of migraines, and 3 reported an increase in headaches. Overall, the patients’ number of migraines fell from 10.4 to 4.6 per month, which is statistically and clinically significant.

“There was a substantial improvement for patients in their ability to function and feel better,” senior author Laura Borgelt, PharmD, a professor in the School of Pharmacy and Pharmaceutical Sciences at the University of Colorado Anschutz Medical Campus in Aurora, said in a university news release. “Like any drug, marijuana has potential benefits and potential risks. It’s important for people to be aware that using medical marijuana can also have adverse effects.”

Reference

Rhyne D, Anderson SL, Gedde M, Borgelt LM. Effects of Medical Marijuana on Migraine Headache Frequency in an Adult Population. Pharmacotherapy. 2016.

Uploading Our Minds into Digital Space


Human cortical neurons in the brain. (David Scharf/Corbis)

By Jerry Adler
Smithsonian Magazine

Ken Hayworth, a neuroscientist, wants to be around in 100 years but recognizes that, at 43, he’s not likely to make it on his own. Nor does he expect to get there preserved in alcohol or a freezer; despite the claims made by advocates of cryonics, he says, the ability to revivify a frozen body “isn’t really on the horizon.” So Hayworth is hoping for what he considers the next best thing. He wishes to upload his mind—his memories, skills and personality—to a computer that can be programmed to emulate the processes of his brain, making him, or a simulacrum, effectively immortal (as long as someone keeps the power on).

Hayworth’s dream, which he is pursuing as president of the Brain Preservation Foundation, is one version of the “technological singularity.” It envisions a future of “substrate-independent minds,” in which human and machine consciousness will merge, transcending biological limits of time, space and memory. “This new substrate won’t be dependent on an oxygen atmosphere,” says Randal Koene, who works on the same problem at his organization, Carboncopies.org. “It can go on a journey of 1,000 years, it can process more information at a higher speed, it can see in the X-ray spectrum if we build it that way.” Whether Hayworth or Koene will live to see this is an open question. Their most optimistic scenarios call for at least 50 years, and uncounted billions of dollars, to implement their goal. Meanwhile, Hayworth hopes to achieve the ability to preserve an entire human brain at death—through chemicals, cryonics or both—to keep the structure intact with enough detail that it can, at some future time, be scanned into a database and emulated on a computer.

That approach presumes, of course, that all of the subtleties of a human mind and memory are contained in its anatomical structure—conventional wisdom among neuroscientists, but it’s still a hypothesis. There are electrochemical processes at work. Are they captured by a static map of cells and synapses? We won’t know, advocates argue, until we try to do it.

The initiatives require a big bet on the future of technology. A 3-D map of all the cells and synapses in a nervous system is called a “connectome,” and so far researchers have produced exactly one, for a roundworm called Caenorhabditis elegans, with 302 neurons and about 7,000 connections among them. A human brain, according to one reasonable estimate, has about 86 billion neurons and 100 trillion synapses. And then there’s the electrochemical activity on top of that. In 2013, announcing a federal initiative to produce a complete model of the human brain, Francis Collins, head of the National Institutes of Health, said it could generate “yottabytes” of data—a million million million megabytes. To scan an entire human brain at the scale Hayworth thinks is necessary—effectively slicing it into virtual cubes ten nanometers on a side—would require, with today’s technology, “a million electron microscopes running in parallel for ten years.” Mainstream researchers are divided between those who regard Hayworth’s quest as impossible in practice, and those, like Miguel Nicolelis of Duke University, who consider it impossible in theory. “The brain,” he says, “is not computable.”
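For a sense of the scale Hayworth is talking about, here is a minimal back-of-the-envelope sketch in Python. Only the 10-nanometer voxel size and the yottabyte conversion come from the text; the brain volume and the one-byte-per-voxel storage figure are assumptions added purely for illustration.

```python
# Rough data-volume estimate for scanning a whole brain at 10-nm resolution.
# Assumed (not from the article): ~1.2-liter brain, 1 byte stored per voxel.

BRAIN_VOLUME_M3 = 1.2e-3      # ~1.2 liters, a typical adult human brain
VOXEL_EDGE_M = 10e-9          # 10-nanometer cubes, the scale Hayworth cites
BYTES_PER_VOXEL = 1           # deliberately minimal storage assumption

voxels = BRAIN_VOLUME_M3 / VOXEL_EDGE_M ** 3
raw_bytes = voxels * BYTES_PER_VOXEL

print(f"voxels:   {voxels:.1e}")                       # ~1.2e+21
print(f"raw data: {raw_bytes / 1e21:.1f} zettabytes")  # ~1.2 ZB at 1 byte/voxel
# For comparison, Collins' "yottabytes" figure: 1 YB = 1e24 bytes,
# i.e., a million million million megabytes.
```

Even under this deliberately stingy one-byte assumption, the structural scan alone lands at around a zettabyte, which makes the yottabyte estimate for a full functional model look less hyperbolic.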

And what does it mean for a mind to exist outside a brain? One immediately thinks of the disembodied HAL in 2001: A Space Odyssey. But Koene sees no reason that, if computers continue to grow smaller and more powerful, an uploaded mind couldn’t have a body—a virtual one, or a robotic one. Will it sleep? Experience hunger, pain, desire? In the absence of hormones and chemical neurotransmitters, will it feel emotion? It will be you, in a sense, but will you be it?

These questions don’t trouble Hayworth. To him, the brain is the most sophisticated computer on earth, but only that, and he figures his mind could also live in one made of transistors instead. He hopes to become the first human being to live entirely in cyberspace, to send his virtual self into the far future.

Read more: http://www.smithsonianmag.com/innovation/quest-upload-mind-into-digital-space-180954946/#OBRGToqVzeqftrBt.99

Women can navigate better when given testosterone, study finds

To investigate whether the differences in how men and women navigate are related to our sex or to cultural conditioning, researchers in Norway measured male and female brain activity while volunteers tried to find their way through a virtual reality maze.

Wearing 3D goggles and using a joystick to make their way through an artificial environment, the participants (18 males and 18 females) had their brain functions continuously recorded by an fMRI scanner as they carried out virtual navigation tasks.

In line with previous findings, the men performed better, using shortcuts, orienting themselves more using cardinal directions, and solving 50 percent more tasks than the women in the study.

“Men’s sense of direction was more effective,” said Carl Pintzka, a neuroscientist at the Norwegian University of Science and Technology (NTNU). “They quite simply got to their destination faster.”

One reason for this is the difference in how men and women use their brains when finding their way around. According to the researchers, men use the hippocampus more, whereas women place greater reliance on their brains’ frontal areas.

“That’s in sync with the fact that the hippocampus is necessary to make use of cardinal directions,” said Pintzka. “[M]en usually go in the general direction where [their destination is] located. Women usually orient themselves along a route to get there.”

Generally, the cardinal approach is more efficient, as it depends less on where you start.

But women’s brains make them better at finding objects locally, the researchers say. “In ancient times, men were hunters and women were gatherers. Therefore, our brains probably evolved differently,” said Pintzka. “In simple terms, women are faster at finding things in the house, and men are faster at finding the house.”

What was most remarkable about the study was what happened when the researchers gave women a drop of testosterone to see how it affected their ability to navigate the virtual maze. In a separate experiment, 21 women received a drop of testosterone under their tongues, while 21 got a placebo.

The researchers found that the women receiving testosterone showed improved knowledge of the layout of the maze, and relied on their hippocampus more to find their way around. Having said that, these hormone-derived benefits didn’t enable them to solve more maze tasks in the exercise.

It’s worth bearing in mind that the study used a fairly small sample size in both of the experiments carried out, so the findings need to be read in light of that. Nonetheless, the scientists believe their paper, which is published in Behavioural Brain Research, will help us to better understand the different ways male and female brains work, which could assist in the fight against diseases such as Alzheimer’s.

“Almost all brain-related diseases are different in men and women, either in the number of affected individuals or in severity,” said Pintzka. “Therefore, something is likely protecting or harming people of one sex. Since we know that twice as many women as men are diagnosed with Alzheimer’s disease, there might be something related to sex hormones that is harmful.”

http://www.sciencealert.com/women-can-navigate-better-when-given-testosterone-study-finds

Thanks to Dr. Enrique Leira for bringing this to the It’s Interesting community.

Scientists encode memories in a way that bypasses damaged brain tissue

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have developed a brain prosthesis that is designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If damage at any region prevents this translation, there is the possibility that a long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from long ago – things that were already translated into long-term memories before the brain damage occurred – but has difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed those signals into the model and read how the signals generated from the first region of the hippocampus were translated into signals generated by the second region of the hippocampus.

In hundreds of trials conducted with nine patients, the algorithm accurately predicted how the signals would be translated with about 90 percent accuracy.
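The translation step lends itself to a compact sketch. The actual prosthesis is built on a nonlinear multi-input multi-output (MIMO) model fitted to paired recordings; the toy version below substitutes an ordinary linear least-squares fit on synthetic spike counts, so every array shape and number here is an assumption for illustration, not the team’s method.

```python
# Toy version of the idea: learn a mapping from spiking activity in one
# hippocampal region to the activity the next region should produce.
# Synthetic data throughout; a linear fit stands in for the real MIMO model.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_in, n_out = 500, 16, 8                      # hypothetical electrode counts

X = rng.poisson(3.0, (n_trials, n_in)).astype(float)    # region-1 spike counts
W_true = rng.normal(0, 0.3, (n_in, n_out))              # unknown "translation"
Y = X @ W_true + rng.normal(0, 0.5, (n_trials, n_out))  # region-2 spike counts

# Fit the translation from paired recordings of the two regions.
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Use the fitted model to predict downstream activity for fresh input,
# which is what the prosthesis does when it bypasses a damaged region.
x_new = rng.poisson(3.0, (1, n_in)).astype(float)
print("predicted region-2 activity:", np.round(x_new @ W_fit, 2))
```

This is also why Berger’s Spanish-to-French analogy works: the fit only needs paired input-output recordings, never an interpretation of what the signals mean.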

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions in order to try to bypass the damage and enable the formation of an accurate long-term memory.

http://medicalxpress.com/news/2015-09-scientists-bypass-brain-re-encoding-memories.html#nRlv

How LSD works in the brain

by Natalie Wolchover

LSD, or acid, and its mind-bending effects have been made famous by pop culture hits like “Fear and Loathing in Las Vegas,” a film about the psychedelic escapades of writer Hunter S. Thompson. Oversaturated colors, swirling walls and intense emotions all supposedly come into play when you’re tripping. But how does acid make people trip?

Life’s Little Mysteries asked Andrew Sewell, a Yale psychiatrist and one of the few U.S.-based psychedelic drug researchers, to explain why LSD, short for lysergic acid diethylamide, does what it does to the brain.

His explanation begins with a brief rundown of how the brain processes information under normal circumstances. It all starts in the thalamus, a node perched on top of the brain stem, right smack dab in the middle of the brain. “Most sensory impressions are routed through the thalamus, which acts as a gatekeeper, determining what’s relevant and what isn’t and deciding where the signals should go,” Sewell said.

“Consequently, your perception of the world is governed by a combination of ‘bottom-up’ processing, starting … with incoming signals, combined with ‘top-down’ processing, in which selective filters are applied by your brain to cut down the overwhelming amount of information to a more manageable and relevant subset that you can then make decisions about.

“In other words, people tend to see what they’ve been trained to see, and hear what they’ve been trained to hear.”

The main theory of psychedelics, first fleshed out by a Swiss researcher named Franz Vollenweider, is that drugs like LSD and psilocybin, the active ingredient in “magic” mushrooms, tune down the thalamus’ activity. Essentially, the thalamus on a psychedelic drug lets unprocessed information through to consciousness, like a bad email spam filter. “Colors become brighter, people see things they never noticed before and make associations that they never made before,” Sewell said.
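As a toy illustration of that spam-filter analogy (purely illustrative, not drawn from Vollenweider’s work): model the thalamus as a gate that passes only signals whose relevance score beats a threshold, and let the drug lower the threshold. The signals and scores below are invented.

```python
# Toy "thalamic gate": pass only signals whose relevance beats a threshold.
# The theory described above amounts to psychedelics lowering the threshold.

def thalamic_gate(signals, threshold):
    """Return the names of (name, relevance) signals that pass the gate."""
    return [name for name, relevance in signals if relevance >= threshold]

incoming = [("face ahead", 0.9), ("hum of the fridge", 0.2),
            ("texture of the wall", 0.3), ("own heartbeat", 0.1)]

print(thalamic_gate(incoming, threshold=0.8))   # sober: ['face ahead']
print(thalamic_gate(incoming, threshold=0.15))  # gate tuned down: nearly all pass
```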

In a recent paper advocating the revival of psychedelic drug research, psychiatrist Ben Sessa of the University of Bristol in England explained the benefits that psychedelics lend to creativity. “A particular feature of the experience is … a general increase in complexity and openness, such that the usual ego-bound restraints that allow humans to accept given pre-conceived ideas about themselves and the world around them are necessarily challenged. Another important feature is the tendency for users to assign unique and novel meanings to their experience together with an appreciation that they are part of a bigger, universal cosmic oneness.”

But according to Sewell, these unique feelings and experiences come at a price: “disorganization, and an increased likelihood of being overwhelmed.” At least until the drugs wear off, and then you’re left just trying to make sense of it all.

http://www.livescience.com/33167-how-acid-lsd-make-people-trip.html?li_source=pm&li_medium=most-popular&li_campaign=related_test

An axon self-destruct mechanism that kills damaged axons

Just as losing a limb can spare a life, parting with a damaged axon by way of Wallerian degeneration can spare a neuron. A protein called SARM1 acts as the self-destruct button, and now researchers led by Jeffrey Milbrandt of Washington University Medical School in St. Louis believe they have figured out how. They report in the April 24 Science that SARM1 forms dimers that trigger the destruction of NAD+. Basic biochemistry dictates that this enzyme cofactor is essential for cell survival.

SARM1 and NAD+ have emerged as key players in the complex, orderly process underlying Wallerian degeneration. Scientists are still filling in other parts of the pathway. SARM1, short for sterile alpha and TIR motif-containing 1, seems to act as a damage sensor, but researchers are not sure how. Recently, researchers led by Marc Tessier-Lavigne at Rockefeller University, New York, found that SARM1 turns on a mitogen-activated protein (MAP) kinase cascade that is involved. Loss of NAD+ may also contribute to axon degeneration, because its concentration drops in dying axons, and Wlds mutant mice that overproduce an NAD+ synthase show slower Wallerian degeneration.

Now, first author Josiah Gerdts confirms that SARM1 is the self-destruct switch. He engineered a version of the protein with a target sequence for tobacco etch virus (TEV) protease embedded in it. Using a rapamycin-activated form of TEV, he eliminated SARM1 from axons he had sliced off of mouse dorsal root ganglion (DRG) neurons. Without SARM1, the severed axons survived.

SARM1 contains SAM and TIR domains, which promote protein-protein interactions. Previously, Gerdts discovered that the TIR domain was sufficient to induce degeneration, even in healthy axons, but it relied on the SAM region to bring multiple SARM1 molecules together. He hypothesized that axonal SARM1 multimerizes upon axon damage. To test this idea, he used a standard biochemical technique to force the SARM1 TIR domains together. He fused domains to one or another of the rapamycin-binding peptides Frb and Fkbp and expressed them in DRG neurons. When he added rapamycin to the cultures, the Frb and Fkbp snapped the TIR domains together within minutes. As Gerdts had predicted, this destroyed axons, confirming that SARM1 activates via dimerization.

Next, the authors investigated what happens to NAD+ during that process. Using high-performance liquid chromatography, Gerdts measured the concentration of NAD+ in the disembodied axons. Normally, its level dropped by about two-thirds within 15 minutes of severing. In axons from SARM1 knockout mice, however, the NAD+ concentration stayed unchanged. In neurons carrying the forced-dimerization constructs, adding rapamycin was sufficient to knock down NAD+ levels—Gerdts did not even have to cut the axons. Ramping up NAD+ production by overexpressing its synthases, NMNAT and NAMPT, overcame the effects of TIR dimerization, and the axons survived. Gerdts concluded that loss of NAD+ was a crucial, SARM1-controlled step on the way to degeneration.

He still wondered what caused the loss of NAD+. It might be that the axon simply stopped making it, or maybe the Wallerian pathway actively destroyed it. To distinguish between these possibilities, Gerdts added radiolabeled exogenous NAD+ to human embryonic kidney HEK293 cultures expressing the forced-dimerization TIR domains. Rapamycin caused them to rapidly degrade the radioactive NAD+, confirming that the cell actively disposes of it.

Gerdts suspects that with this essential cofactor gone, the axon runs out of energy and can no longer survive. He speculated that the MAP kinase cascade reportedly turned on by SARM1 might lead to NAD+ destruction. Alternatively, SARM1 might induce distinct MAP kinase and NAD+ destruction pathways in parallel, he suggested.

“Demonstrating how NAD+ is actively and locally degraded in the axon is a big advance,” commented Andrew Pieper of the Iowa Carver College of Medicine in Iowa City, who was not involved in the study. Jonathan Gilley and Michael Coleman of the Babraham Institute in Cambridge, U.K., predict that there will be more to the story. They note that a drug called FK866, which prevents NAD+ production, protects axons in some instances. Gerdts suggested that FK866 acts on processes upstream of SARM1, delaying the start of axon degeneration. In contrast, his paper only addressed what happens after SARM1 activates. “It will be fascinating to see how the apparent contradictions raised by this new study will be resolved,” wrote Gilley and Coleman.

Could these findings help researchers looking for ways to prevent neurodegeneration? “The study supports the notion that augmenting NAD+ levels is potentially a valuable approach,” said Pieper. He and his colleagues developed a small molecule that enhances NAD+ synthesis, now under commercial development. It improved symptoms in ALS model mice, and protected neurons in mice mimicking Parkinson’s. NAD+ also activates sirtuin, an enzyme important for longevity and stress resistance as well as learning and memory.

However, both Pieper and Gerdts cautioned that they cannot clearly predict which conditions might benefit from an anti-SARM1 or NAD+-boosting therapy. At this point, Gerdts said, researchers do not fully understand how much axon degeneration contributes to symptoms of diseases like Alzheimer’s and Parkinson’s. He suggested that crossing SARM1 knockout mice with models for various neurodegenerative conditions would indicate how well an anti-Wallerian therapy might work.

—Amber Dance

http://www.alzforum.org/news/research-news/axon-self-destruct-button-triggers-energy-woes

Scientists manage to give mice ‘eating disorders’ by knocking out one gene

By Rachel Feltman

If you give a mouse an eating disorder, you might just figure out how to treat the disease in humans. In a new study published Thursday in Cell Reports, researchers created mice that lacked a gene associated with disordered eating in humans. Without it, the mice showed behaviors not unlike those seen in humans with eating disorders: They tended to be obsessive-compulsive and have trouble socializing, and they were less interested in eating high-fat food than the control mice. The findings could lead to novel drug treatments for some of the 24 million Americans estimated to suffer from eating disorders.

In a 2013 study, the same researchers went looking for genes that might contribute to the risk of an eating disorder. Anorexia nervosa and bulimia nervosa aren’t straightforwardly inherited — there’s definitely more to an eating disorder than your genes — but it does seem like some families might have higher risks than others. Sure enough, the study of two large families, each with several members who had eating disorders, yielded mutations in two interacting genes. In one family, the estrogen-related receptor α (ESRRA) gene was mutated. The other family had a mutation on another gene that seemed to affect how well ESRRA could do its job.

So in the latest study, they created mice that didn’t have ESRRA in the parts of the brain associated with eating disorders.

“You can’t go testing this kind of gene expression in a human,” lead author and University of Iowa neuroscientist Michael Lutter said. “But in mice, you can manipulate the expression of the gene and then look at how it changes their behavior.”

It’s not a perfect analogy to what the gene mutation might do in a human, but the similarities can allow researchers to figure out the mechanism that causes the connection between your DNA and your eating habits.

The mice without ESRRA were tested for several eating-disorder-like behaviors: The researchers tested how hard they were willing to work for high-fat food when they were hungry (less hard, it turned out; so much so that they weighed 15 percent less than their unaltered littermates), how compulsive they were, and how they behaved socially.

In general, the ESRRA-lacking mice were twitchier: They tended to overgroom, a common sign of anxiety in mice, and they were more wary of novelty, growing anxious when researchers put marbles into their cages. They also showed an inability to adapt: When researchers taught the mice how to exit a maze and then changed where the exit was, the mice without ESRRA spent way more time checking out the area where the exit should have been before looking for where it had gone.

The social changes were even more striking: Mice will usually show more interest in a new mouse than one they’ve met before, but in tests the modified mice showed the opposite preference, socializing with a familiar mouse when a new one was also presented.

They were also universally submissive to other mice, something the researchers detected with a sort of scientific game of chicken. Two mice are placed at either end of a tube, and one always plows past the other to get to the opposite side. It’s just the way mice size each other up — someone has to be on top. But every single one of the modified mice let themselves get pushed around.

“100% of the mice lacking this gene were subordinate,” Lutter said. “I’ve never seen an experiment before that produced a 0% versus 100% result.”

The avoidance of fats has an obvious connection to human disorders. But the social anxiety and rigidity are also close analogies to disordered eating in humans.

Now that Lutter and his colleagues know that the gene does something similar in mice, they can start looking for the actual mechanism that’s tripping these switches in the brain. They know that the gene’s pathway is very important for energy metabolism, especially in the breakdown of glucose. It’s possible that mutations in the gene cause some kind of impairment in neurons’ ability to get and process energy, but they can’t be sure yet.

They’ll see if they can pinpoint affected neurons and fix them. They’re also going to test some drugs that are known to affect this gene and its pathways. It’s possible that they’ll land on a treatment that helps calm these negative behaviors in affected mice, leading to treatments for humans with the mutation.

http://www.washingtonpost.com/news/speaking-of-science/wp/2015/04/09/scientists-manage-to-give-mice-eating-disorders-by-knocking-out-one-gene/

Open Access Article here: http://www.cell.com/cell-reports/abstract/S2211-1247(15)00301-0

Scientists achieve implantation of memory into the brains of mice while they sleep

Sleeping minds: prepare to be hacked. For the first time, conscious memories have been implanted into the minds of mice while they sleep. The same technique could one day be used to alter memories in people who have undergone traumatic events.

When we sleep, our brain replays the day’s activities. The pattern of brain activity exhibited by mice when they explore a new area during the day, for example, will reappear, speeded up, while the animal sleeps. This is thought to be the brain practising an activity – an essential part of learning. People who miss out on sleep do not learn as well as those who get a good night’s rest, and when the replay process is disrupted in mice, so too is their ability to remember what they learned the previous day.

Karim Benchenane and his colleagues at the Industrial Physics and Chemistry Higher Educational Institution in Paris, France, hijacked this process to create new memories in sleeping mice. The team targeted the rodents’ place cells – neurons that fire in response to being in or thinking about a specific place. These cells are thought to help us form internal maps, and their discoverers won a Nobel prize last year.

Benchenane’s team used electrodes to monitor the activity of mice’s place cells as the animals explored an enclosed arena, and in each mouse they identified a cell that fired only in a certain arena location. Later, when the mice were sleeping, the researchers monitored the animals’ brain activity as they replayed the day’s experiences. A computer recognised when the specific place cell fired; each time it did, a separate electrode would stimulate brain areas associated with reward.
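The closed-loop logic is simple enough to sketch. Below is a minimal simulation of the protocol as described: watch one place cell’s spike count window by window, and pulse the reward-area electrode whenever it fires. The interfaces, cell index and threshold are hypothetical stand-ins, not the team’s actual rig or any real acquisition API.

```python
# Minimal simulation of the closed-loop pairing: place-cell firing -> reward.
import random

TARGET_CELL = 7    # the place cell identified while the mouse explored (arbitrary)
THRESHOLD = 3      # spikes per 50-ms window that count as a replay event (assumed)

def read_spike_count(cell_id):
    """Simulated acquisition: spike count for one unit in a 50-ms window."""
    return random.choices([0, 1, 2, 3, 4], weights=[70, 15, 8, 5, 2])[0]

def stimulate_reward_area():
    """Simulated stand-in for the electrode in the reward circuitry."""
    print("reward pulse delivered")

def closed_loop(n_windows=1000):
    """Pair firing of the target place cell with reward, window by window."""
    for _ in range(n_windows):
        if read_spike_count(TARGET_CELL) >= THRESHOLD:
            stimulate_reward_area()   # reward coincides with place replay

closed_loop()
```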

When the mice awoke, they made a beeline for the location represented by the place cell that had been linked to a rewarding feeling in their sleep. A brand new memory – linking a place with reward – had been formed.

It is the first time a conscious memory has been created in animals during sleep. In recent years, researchers have been able to form subconscious associations in sleeping minds – smokers keen to quit can learn to associate cigarettes with the smells of rotten eggs and fish in their sleep, for example.

Previous work suggested that if this kind of subconscious learning had occurred in Benchenane’s mice, they would have explored the arena in a random manner, perhaps stopping at the reward-associated location. But these mice headed straight for the location, suggesting a conscious memory. “The mouse develops a goal-directed behaviour to go towards the place,” says Benchenane. “It proves that it’s not an automatic behaviour. What we create is an association between a particular place and a reward that can be consciously accessed by the mouse.”

“The mouse is remembering enough abstract information to think ‘I want to go to a certain place’, and go there when it wakes up,” says neuroscientist Neil Burgess at University College London. “It’s a bigger breakthrough [than previous studies] because it really does show what the man in the street would call a memory – the ability to bring to mind abstract knowledge which can guide behaviour in a directed way.”

Benchenane doesn’t think the technique can be used to implant many other types of memories, such as skills – at least for the time being. Spatial memories are easier to modify because they are among the best understood.

His team’s findings also provide some of the strongest evidence for the way in which place cells work. It is almost impossible to test whether place cells function as an internal map while animals are awake, says Benchenane, because these animals also use external cues, such as landmarks, to navigate. By specifically targeting place cells while the mouse is asleep, the team were able to directly test theories that specific cells represent specific places.

“Even when those place cells fire in sleep, they still convey spatial information,” says Benchenane. “That provides evidence that when you’ve got activation of place cells during the consolidation of memories in sleep, you’ve got consolidation of the spatial information.”

Benchenane hopes that his technique could be developed to help alter people’s memories, perhaps of traumatic events.

Loren Frank at the University of California, San Francisco, agrees. “I think this is a really important step towards helping people with memory impairments or depression,” he says. “It is surprising to me how many neurological and psychiatric illnesses have something to do with memory, including schizophrenia and obsessive compulsive disorder.”

“In principle, you could selectively change brain processing during sleep to soften memories or change their emotional content,” he adds.

Journal reference: Nature Neuroscience, doi:10.1038/nn.3970

http://www.newscientist.com/article/dn27115-new-memories-implanted-in-mice-while-they-sleep.html#.VP_L9uOVquD

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.