Archive for the ‘Neuroscience’ Category

by Bahar Golipour

What is the earliest memory you have?

Most people can’t remember anything that happened to them or around them in their toddlerhood. The phenomenon, called childhood amnesia, has long puzzled scientists. Some argue that we forget because the young brain hasn’t fully developed the ability to store memories. Others argue that the fast-growing brain is rewiring itself so extensively that it overwrites what it has already registered.

New research appearing in Nature Neuroscience this week suggests that those memories are not forgotten. The study shows that when juvenile rats have an experience during this infantile amnesia period, the memory of that experience is not lost. Instead, it is stored for a long time as a “latent memory trace.” If something later reminds them of the original experience, the trace reemerges as a full-blown, long-lasting memory.

Taking a (rather huge) leap from rats to humans, this could explain how early-life experiences that you don’t remember still shape your personality; how growing up in a rich environment makes you smarter; and how early trauma puts you at higher risk for mental health problems later on.

Scientists don’t know whether we can access those memories. But the new study shows that childhood amnesia coincides with a critical time for the brain ― specifically for the hippocampus, a seahorse-shaped brain structure crucial for memory and learning. Childhood amnesia corresponds to the time when the brain matures and new experiences fuel the growth of the hippocampus.

In humans, this period occurs before pre-school, likely between the ages of 2 and 4. During this time, a child’s brain needs adequate stimulation (mostly from healthy social interactions) so it can better develop the ability to learn.

And not getting enough healthy mental stimulation during this period may impede the development of the brain’s learning and memory centers in a way that cannot be compensated for later.

“What our findings tell us is that children’s brains need to get enough and healthy activation even before they enter pre-school,” said study leader Cristina Alberini, a professor at New York University’s Center for Neural Science. “Without this, the neurological system runs the risk of not properly developing learning and memory functions.”

The findings may point to one mechanism behind the scientific research showing that poverty can shrink children’s brains.

Extensive research spanning decades has shown that low socioeconomic status is linked to problems with cognitive abilities, higher risk for mental health issues and poorer performance in school. In recent years, psychologists and neuroscientists have found that the brain’s anatomy may look different in poor children. Poverty is also linked to smaller brain surface area and smaller volume of the white matter connecting brain areas, as well as a smaller hippocampus. And a 2015 study found that these differences in brain development explain up to 20 percent of the academic performance gap between children from high- and low-income families.

Critical Periods

For the brain, the first few years of life set the stage for the rest of life.

Even though the nervous system keeps some of its ability to rewire throughout life, several biochemical events that shape its core structure happen only at certain times. During these critical periods of development, the brain is acutely sensitive to new sights, sounds, experiences and external stimulation.

Critical periods are best studied in the visual system. In the 1960s, scientists David Hubel and Torsten Wiesel showed that if they closed one eye of a kitten from birth for just a few months, its brain never learned to see properly. The neurons in the visual areas of the brain lost their ability to respond to the deprived eye. Adult cats treated the same way don’t show this effect, which demonstrates the importance of critical periods in brain development for proper functioning. This finding was part of the pioneering work that earned Hubel and Wiesel the 1981 Nobel Prize in Physiology or Medicine.

In the new study in rats, the team shows that a similar critical period may exist for the hippocampus.

Alberini and her colleagues took a close look at what exactly happens in the brains of rats in their first 17 days of life (equivalent to the first three years of a human’s life). They created a memory for the rodents of a negative experience: every time the animals entered a specific corner of their cage, they received a mildly painful shock to the foot. Young rats, like kids, aren’t great at remembering things that happen to them during the infantile amnesia period. So although they avoided that corner right after the shock, they returned to it just a day later. In contrast, a group of older rats retained the memory and avoided the corner for a long time.

However, the younger rats had actually kept a trace of the memory. A reminder (such as another foot shock in another corner) was enough to resurrect the memory and make the animals avoid the first corner of the cage.

Researchers found a cascade of biochemical events in the young rats’ brains that are typically seen in developmental critical periods.

“We were excited to see the same type of mechanism in the hippocampus,” Alberini told The Huffington Post.

The Learning Brain And Its Mysteries

Just as the kittens’ brains needed light from the eyes to learn to see, the hippocampus may need novel experiences to learn to form memories.

“Early in life, while the brain cannot efficiently form long-term memories, it is ‘learning’ how to do so, making it possible to establish the abilities to memorize long-term,” Alberini said. “However, the brain needs stimulation through learning so that it can get in the practice of memory formation―without these experiences, the ability of the neurological system to learn will be impaired.”

This does not mean that you should put your kids in pre-pre-school, Alberini told HuffPost. Rather, it highlights the importance of healthy social interaction, especially with parents, and growing up in an environment rich in stimulation. Most kids in developed countries are already benefiting from this, she said.

But what does this all mean for children who grow up exposed to low levels of environmental stimulation, something more likely in poor families? Does it explain why poverty is linked to smaller brains? Alberini thinks many other factors likely contribute to the link between poverty and brain development. But it is possible, she said, that low stimulation during the development of the hippocampus plays a part, too.

Psychologist Seth Pollak of the University of Wisconsin-Madison, who has found that children raised in poverty show differences in hippocampal development, agrees.

Pollak believes the findings of the new study represent “an extremely plausible link between early childhood adversity and later problems.”

“We must always be cautious about generalizing studies of rodents to understanding human children,” Pollak added. “But the nonhuman animal studies, such as this one, provide testable hypotheses about specific mechanisms underlying human behavior.”

Although the link between poverty and cognitive performance has been seen repeatedly in numerous studies, scientists don’t have a good handle on how exactly the many related factors unfold inside the developing brain, said Elizabeth Sowell, a researcher at Children’s Hospital Los Angeles. Studies like this one provide “a lot of food for thought,” she added.

http://www.huffingtonpost.com.au/2016/07/24/the-things-you-dont-remember-shape-who-you-are/

While researching the brain’s learning and memory system, scientists at Johns Hopkins say they stumbled upon a new type of nerve cell that seems to control feeding behaviors in mice. The finding, they report, adds significant detail to the way brains tell animals when to stop eating and, if confirmed in humans, could lead to new tools for fighting obesity. Details of the study were published in the journal Science today.

“When the type of brain cell we discovered fires and sends off signals, our laboratory mice stop eating soon after,” says Richard Huganir, Ph.D., director of the Department of Neuroscience at the Johns Hopkins University School of Medicine. “The signals seem to tell the mice they’ve had enough.”

Huganir says his team’s discovery grew out of studies of the proteins that strengthen and weaken the intersections, or synapses, between brain cells. These are an important target of research because synapse strength, particularly among cells in the hippocampus and cortex of the brain, is important in learning and memory.

In a search for details about synapse strength, Huganir and graduate student Olof Lagerlöf, M.D., focused on the enzyme OGT — a biological catalyst involved in many bodily functions, including insulin use and sugar chemistry. The enzyme’s job is to add a molecule called N-acetylglucosamine (GlcNAc), a derivative of glucose, to proteins, a phenomenon first discovered in 1984 by Gerald Hart, Ph.D., director of the Johns Hopkins University School of Medicine’s Department of Biological Chemistry and co-leader of the current study. By adding GlcNAc molecules, OGT alters the proteins’ behavior.

To learn about OGT’s role in the brain, Lagerlöf deleted the gene that codes for it from the primary nerve cells of the hippocampus and cortex in adult mice. Even before he looked directly at the impact of the deletion in the rodents’ brains, Lagerlöf reports, he noticed that the mice doubled in weight in just three weeks. It turned out that fat buildup, not muscle mass, was responsible.

When the team monitored the feeding patterns of the mice, they found that those missing OGT ate the same number of meals — on average, 18 a day — as their normal littermates but tarried over the food longer and ate more calories at each meal. When their food intake was restricted to that of a normal lab diet, they no longer gained extra weight, suggesting that the absence of OGT interfered with the animals’ ability to sense when they were full.

“These mice don’t understand that they’ve had enough food, so they keep eating,” says Lagerlöf.

Because the hippocampus and cortex are not known to directly regulate feeding behaviors in rodents or other mammals, the researchers looked for changes elsewhere in the brain, particularly in the hypothalamus, which is known to control body temperature, feeding, sleep and metabolism. There, they found OGT missing from a small subset of nerve cells within a cluster of neurons called the paraventricular nucleus.

Lagerlöf says these cells were already known to send and receive multiple signals related to appetite and food intake. When he looked for changes in the levels of those factors that might be traced to the absence of OGT, he found that most of them were unaffected. The activity of the appetite signals that many other research groups have focused on didn’t seem to be causing the weight gain, he adds.

Next, the team examined the chemical and biological activity of the OGT-negative cells. By measuring the background electrical activity in nonfiring brain cells, the researchers estimated the number of incoming synapses on the cells and found that these cells had only about one-third as many incoming synapses as normal cells.

“That result suggests that, in these cells, OGT helps maintain synapses,” says Huganir. “The number of synapses on these cells was so low that they probably aren’t receiving enough input to fire. In turn, that suggests that these cells are responsible for sending the message to stop eating.”

To verify this idea, the researchers genetically manipulated the cells in the paraventricular nucleus so that they would add blue light-sensitive proteins to their membranes. When they stimulated the cells with a beam of blue light, the cells fired and sent signals to other parts of the brain, and the mice decreased the amount they ate in a day by about 25 percent.

Finally, because glucose is needed to produce GlcNAc, they thought that glucose levels, which increase after meals, might affect the activity of OGT. Indeed, they found that if they added glucose to nerve cells in petri dishes, the level of proteins with the GlcNAc addition increased in proportion to the amount of glucose in the dishes. And when they looked at cells in the paraventricular nucleus of mice that hadn’t eaten in a while, they saw low levels of GlcNAc-decorated proteins.

“There are still many things about this system that we don’t know,” says Lagerlöf, “but we think that glucose works with OGT in these cells to control ‘portion size’ for the mice. We believe we have found a new receiver of information that directly affects brain activity and feeding behavior, and if our findings bear out in other animals, including people, they may advance the search for drugs or other means of controlling appetites.”

http://www.eurekalert.org/pub_releases/2016-03/jhm-pcc031416.php

Marijuana may give relief to migraine sufferers, according to research published online in Pharmacotherapy.

The research included 121 patients diagnosed with migraines and treated with medical marijuana between January 2010 and September 2014. Patients in the study used both inhaled and edible marijuana. The researchers said inhaled marijuana seemed to be preferred for treating acute headaches, while edibles seemed to be favored for headache prevention.

The researchers found that 103 study participants reported a decrease in their monthly migraines. Fifteen patients said they had the same number of migraines, and three reported an increase in headaches. Overall, the patients’ average number of migraines fell from 10.4 to 4.6 per month, a drop that is both statistically and clinically significant.
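For readers who want to check the arithmetic, here is a minimal sketch in Python. Only the summary figures quoted above are used; the per-patient data are not public, so nothing beyond these numbers is computed.

```python
# Back-of-envelope check of the summary figures reported above.
patients = 121
improved, unchanged, worsened = 103, 15, 3
assert improved + unchanged + worsened == patients

responder_rate = improved / patients        # fraction reporting fewer migraines
before, after = 10.4, 4.6                   # mean migraines per month
relative_reduction = (before - after) / before

print(f"Responder rate: {responder_rate:.0%}")      # ~85%
print(f"Mean reduction: {relative_reduction:.0%}")  # ~56%
```

Roughly 85 percent of the patients reported improvement, and the average monthly migraine count fell by a little more than half.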

“There was a substantial improvement for patients in their ability to function and feel better,” senior author Laura Borgelt, PharmD, a professor in the School of Pharmacy and Pharmaceutical Sciences at the University of Colorado Anschutz Medical Campus in Aurora, said in a university news release. “Like any drug, marijuana has potential benefits and potential risks. It’s important for people to be aware that using medical marijuana can also have adverse effects.”

Reference

Rhyne D, Anderson SL, Gedde M, Borgelt LM. Effects of Medical Marijuana on Migraine Headache Frequency in an Adult Population. Pharmacotherapy. 2016.


Human cortical neurons in the brain. (David Scharf/Corbis)

By Jerry Adler
Smithsonian Magazine

Ken Hayworth, a neuroscientist, wants to be around in 100 years but recognizes that, at 43, he’s not likely to make it on his own. Nor does he expect to get there preserved in alcohol or a freezer; despite the claims made by advocates of cryonics, he says, the ability to revivify a frozen body “isn’t really on the horizon.” So Hayworth is hoping for what he considers the next best thing. He wishes to upload his mind—his memories, skills and personality—to a computer that can be programmed to emulate the processes of his brain, making him, or a simulacrum, effectively immortal (as long as someone keeps the power on).

Hayworth’s dream, which he is pursuing as president of the Brain Preservation Foundation, is one version of the “technological singularity.” It envisions a future of “substrate-independent minds,” in which human and machine consciousness will merge, transcending biological limits of time, space and memory. “This new substrate won’t be dependent on an oxygen atmosphere,” says Randal Koene, who works on the same problem at his organization, Carboncopies.org. “It can go on a journey of 1,000 years, it can process more information at a higher speed, it can see in the X-ray spectrum if we build it that way.” Whether Hayworth or Koene will live to see this is an open question. Their most optimistic scenarios call for at least 50 years, and uncounted billions of dollars, to implement their goal. Meanwhile, Hayworth hopes to achieve the ability to preserve an entire human brain at death—through chemicals, cryonics or both—to keep the structure intact with enough detail that it can, at some future time, be scanned into a database and emulated on a computer.

That approach presumes, of course, that all of the subtleties of a human mind and memory are contained in its anatomical structure—conventional wisdom among neuroscientists, but it’s still a hypothesis. There are electrochemical processes at work. Are they captured by a static map of cells and synapses? We won’t know, advocates argue, until we try to do it.

The initiatives require a big bet on the future of technology. A 3-D map of all the cells and synapses in a nervous system is called a “connectome,” and so far researchers have produced exactly one, for a roundworm called Caenorhabditis elegans, with 302 neurons and about 7,000 connections among them. A human brain, according to one reasonable estimate, has about 86 billion neurons and 100 trillion synapses. And then there’s the electrochemical activity on top of that. In 2013, announcing a federal initiative to produce a complete model of the human brain, Francis Collins, head of the National Institutes of Health, said it could generate “yottabytes” of data—a million million million megabytes. To scan an entire human brain at the scale Hayworth thinks is necessary—effectively slicing it into virtual cubes ten nanometers on a side—would require, with today’s technology, “a million electron microscopes running in parallel for ten years.” Mainstream researchers are divided between those who regard Hayworth’s quest as impossible in practice, and those, like Miguel Nicolelis of Duke University, who consider it impossible in theory. “The brain,” he says, “is not computable.”
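To get a feel for the scale Hayworth is talking about, here is a rough back-of-envelope calculation in Python. It is a sketch only: the ~1.2-liter brain volume and the one-byte-per-voxel encoding are assumptions for illustration, not figures from the article.

```python
# Rough estimate of the raw data volume for a whole-brain scan at the
# resolution Hayworth describes (10-nanometer virtual cubes).
BRAIN_VOLUME_M3 = 1.2e-3   # assumed: ~1.2 liters of brain tissue
VOXEL_EDGE_M = 10e-9       # 10 nm cube edge, per the article
BYTES_PER_VOXEL = 1        # assumed: minimal one-byte encoding

voxels = BRAIN_VOLUME_M3 / VOXEL_EDGE_M ** 3
raw_bytes = voxels * BYTES_PER_VOXEL

print(f"Voxels:   {voxels:.1e}")                       # ~1.2e+21
print(f"Raw data: {raw_bytes / 1e21:.1f} zettabytes")  # ~1.2 ZB
```

Even this stripped-down estimate lands around a zettabyte of raw structure alone; richer per-voxel encodings, plus the electrochemical activity layered on top of the anatomy, push the total toward the “yottabytes” Collins mentioned.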

And what does it mean for a mind to exist outside a brain? One immediately thinks of the disembodied HAL in 2001: A Space Odyssey. But Koene sees no reason that, if computers continue to grow smaller and more powerful, an uploaded mind couldn’t have a body—a virtual one, or a robotic one. Will it sleep? Experience hunger, pain, desire? In the absence of hormones and chemical neurotransmitters, will it feel emotion? It will be you, in a sense, but will you be it?

These questions don’t trouble Hayworth. To him, the brain is the most sophisticated computer on earth, but only that, and he figures his mind could also live in one made of transistors instead. He hopes to become the first human being to live entirely in cyberspace, to send his virtual self into the far future.

Read more: http://www.smithsonianmag.com/innovation/quest-upload-mind-into-digital-space-180954946/#OBRGToqVzeqftrBt.99

To investigate whether differences in how men and women navigate are related to sex or to cultural conditioning, researchers in Norway measured male and female brain activity while volunteers tried to find their way through a virtual reality maze.

Wearing 3D goggles and using a joystick to make their way through an artificial environment, the participants (18 males and 18 females) had their brain functions continuously recorded by an fMRI scanner as they carried out virtual navigation tasks.

In line with previous findings, the men performed better, using shortcuts, orienting themselves more using cardinal directions, and solving 50 percent more tasks than the women in the study.

“Men’s sense of direction was more effective,” said Carl Pintzka, a neuroscientist at the Norwegian University of Science and Technology (NTNU). “They quite simply got to their destination faster.”

One reason for this is the difference in how men and women use their brains when finding their way around. According to the researchers, men rely more on the hippocampus, whereas women place greater reliance on their brains’ frontal areas.

“That’s in sync with the fact that the hippocampus is necessary to make use of cardinal directions,” said Pintzka. “[M]en usually go in the general direction where [their destination is] located. Women usually orient themselves along a route to get there.”

Generally, the cardinal approach is more efficient, as it depends less on where you start.

But women’s brains make them better at finding objects locally, the researchers say. “In ancient times, men were hunters and women were gatherers. Therefore, our brains probably evolved differently,” said Pintzka. “In simple terms, women are faster at finding things in the house, and men are faster at finding the house.”

What was most remarkable about the study was what happened when the researchers gave women a drop of testosterone to see how it affected their ability to navigate the virtual maze. In a separate experiment, 21 women received a drop of testosterone under their tongues, while 21 got a placebo.

The researchers found that the women receiving testosterone showed improved knowledge of the layout of the maze, and relied on their hippocampus more to find their way around. Having said that, these hormone-derived benefits didn’t enable them to solve more maze tasks in the exercise.

It’s worth bearing in mind that the study used a fairly small sample size in both of the experiments carried out, so the findings need to be read in light of that. Nonetheless, the scientists believe their paper, which is published in Behavioural Brain Research, will help us to better understand the different ways male and female brains work, which could assist in the fight against diseases such as Alzheimer’s.

“Almost all brain-related diseases are different in men and women, either in the number of affected individuals or in severity,” said Pintzka. “Therefore, something is likely protecting or harming people of one sex. Since we know that twice as many women as men are diagnosed with Alzheimer’s disease, there might be something related to sex hormones that is harmful.”

http://www.sciencealert.com/women-can-navigate-better-when-given-testosterone-study-finds

Thanks to Dr. Enrique Leira for bringing this to the It’s Interesting community.

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have developed a brain prosthesis that is designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If there’s damage at any region that prevents this translation, then there is the possibility that long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from a long time ago – things that were already translated into long-term memories before the brain damage occurred – but have difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed those signals into the model and read how the signals generated from the first region of the hippocampus were translated into signals generated by the second region of the hippocampus.

In hundreds of trials conducted with nine patients, the algorithm predicted how the signals would be translated with about 90 percent accuracy.
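The article doesn’t spell out the team’s algorithm, which is a far more sophisticated nonlinear multi-input multi-output model, but the core idea — learning a mapping from one region’s recorded activity to the next region’s — can be sketched as below. This is a simplified stand-in, not the USC model: the arrays are simulated placeholders for the patients’ electrode recordings, and plain ridge regression replaces the team’s nonlinear approach.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulated stand-ins for binned firing activity recorded during memory
# formation, shape (n_trials, n_electrodes). The real data came from
# electrodes implanted in two regions of patients' hippocampi.
rng = np.random.default_rng(0)
region1 = rng.poisson(5.0, size=(300, 16)).astype(float)  # upstream region
true_map = rng.normal(size=(16, 16))                      # unknown "translation"
region2 = region1 @ true_map + rng.normal(scale=2.0, size=(300, 16))

# Fit a linear mapping from region-1 activity to region-2 activity
# on the first 200 trials.
model = Ridge(alpha=1.0).fit(region1[:200], region2[:200])

# Evaluate on held-out trials: how well does the learned "translation"
# predict the downstream signal?
print(f"Held-out R^2: {model.score(region1[200:], region2[200:]):.2f}")
```

A prosthesis built on this principle would run the fitted model in real time, taking the upstream region’s signal as input and delivering the predicted downstream signal past the damaged tissue.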

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions in order to try to bypass the damage and enable the formation of an accurate long-term memory.

http://medicalxpress.com/news/2015-09-scientists-bypass-brain-re-encoding-memories.html#nRlv

by Natalie Wolchover

LSD, or acid, and its mind-bending effects have been made famous by pop culture hits like “Fear and Loathing in Las Vegas,” a film about the psychedelic escapades of writer Hunter S. Thompson. Oversaturated colors, swirling walls and intense emotions all supposedly come into play when you’re tripping. But how does acid make people trip?

Life’s Little Mysteries asked Andrew Sewell, a Yale psychiatrist and one of the few U.S.-based psychedelic drug researchers, to explain why LSD, short for lysergic acid diethylamide, does what it does to the brain.

His explanation begins with a brief rundown of how the brain processes information under normal circumstances. It all starts in the thalamus, a node perched on top of the brain stem, right smack dab in the middle of the brain. “Most sensory impressions are routed through the thalamus, which acts as a gatekeeper, determining what’s relevant and what isn’t and deciding where the signals should go,” Sewell said.

“Consequently, your perception of the world is governed by a combination of ‘bottom-up’ processing, starting … with incoming signals, combined with ‘top-down’ processing, in which selective filters are applied by your brain to cut down the overwhelming amount of information to a more manageable and relevant subset that you can then make decisions about.

“In other words, people tend to see what they’ve been trained to see, and hear what they’ve been trained to hear.”

The main theory of psychedelics, first fleshed out by a Swiss researcher named Franz Vollenweider, is that drugs like LSD and psilocybin, the active ingredient in “magic” mushrooms, tune down the thalamus’ activity. Essentially, the thalamus on a psychedelic drug lets unprocessed information through to consciousness, like a bad email spam filter. “Colors become brighter, people see things they never noticed before and make associations that they never made before,” Sewell said.
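Sewell’s spam-filter analogy can be made concrete with a toy model. This is purely illustrative: the salience scores and threshold values are invented for the sketch, not drawn from Vollenweider’s work.

```python
# Toy model of thalamic gating: a signal reaches "consciousness" only if
# its salience clears the gate's threshold. Lowering the threshold stands
# in for the proposed psychedelic effect. All values are invented.
signals = {
    "conversation": 0.9,
    "hum of the fridge": 0.2,
    "texture of the wall": 0.3,
    "own heartbeat": 0.1,
}

def thalamic_gate(signals, threshold):
    """Return the signals whose salience clears the gate's threshold."""
    return [name for name, salience in signals.items() if salience >= threshold]

print("Normal filter:  ", thalamic_gate(signals, threshold=0.5))
print("Dampened filter:", thalamic_gate(signals, threshold=0.15))
```

With the gate dampened, stimuli that would normally be filtered out flood through, which is one way to picture the brighter colors and novel associations Sewell describes.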

In a recent paper advocating the revival of psychedelic drug research, psychiatrist Ben Sessa of the University of Bristol in England explained the benefits that psychedelics lend to creativity. “A particular feature of the experience is … a general increase in complexity and openness, such that the usual ego-bound restraints that allow humans to accept given pre-conceived ideas about themselves and the world around them are necessarily challenged. Another important feature is the tendency for users to assign unique and novel meanings to their experience together with an appreciation that they are part of a bigger, universal cosmic oneness.”

But according to Sewell, these unique feelings and experiences come at a price: “disorganization, and an increased likelihood of being overwhelmed.” At least until the drugs wear off, and then you’re left just trying to make sense of it all.

http://www.livescience.com/33167-how-acid-lsd-make-people-trip.html?li_source=pm&li_medium=most-popular&li_campaign=related_test