As the result of a six-year research process, Fredrick R. Schumacher, a cancer epidemiology researcher at Case Western Reserve University School of Medicine, and an international team of more than 100 colleagues have identified 63 new genetic variants that could indicate a higher risk of prostate cancer in men of European descent. The findings, published as a research letter in Nature Genetics, carry significant implications for which men may need to be regularly screened because of higher genetic risk of prostate cancer. The new findings also represent the largest increase in genetic markers for prostate cancer since they were first identified in 2006.

The changes, known as genetic markers or single nucleotide polymorphisms (SNPs, pronounced “snips”), occur when a single base in the DNA differs from the usual base at that position. There are four types of bases: adenine (A), thymine (T), guanine (G) and cytosine (C). The order of these bases determines DNA’s instructions, or genetic code. A SNP can serve as a flag to physicians that a person may be at higher risk for a certain disease. Previously, about 100 SNPs had been associated with increased risk of prostate cancer. There are 3 billion base pairs in the human genome; of these positions, 163 have now been associated with prostate cancer.
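
As a toy illustration (both sequences below are invented for this example), a SNP is simply a single-position difference between two otherwise identical DNA sequences:

```python
# Toy illustration of a SNP: two invented sequences differing at one base.
reference = "ATGCGTAC"  # hypothetical reference sequence
variant   = "ATGCATAC"  # same sequence with a single-base change (G -> A)

snp_positions = [i for i, (ref, var) in enumerate(zip(reference, variant))
                 if ref != var]
print(snp_positions)  # [4]: the one position where the bases differ
```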

One in seven men will be diagnosed with prostate cancer during their lifetimes.

“Our findings will allow us to identify which men should have early and regular PSA screenings and these findings may eventually inform treatment decisions,” said Schumacher. Prostate-specific antigen (PSA) screenings measure how much PSA, a protein produced by both cancerous and noncancerous tissue in the prostate, is in the blood.

Adding the 63 new SNPs to the 100 that are already known allows for the creation of a genetic risk score for prostate cancer. In the new study, the researchers found that men in the top one percent of the genetic risk score had a six-fold increased risk of prostate cancer compared to men with an average genetic risk score. Those who had the fewest of these SNPs, or a low genetic risk score, had the lowest likelihood of having prostate cancer.
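
As a rough sketch of how such a genetic risk score might be computed (the SNP identifiers and weights below are invented; a real score would use the per-SNP effect sizes estimated in the association study), the score is essentially a weighted count of risk alleles:

```python
# Minimal sketch of a polygenic risk score; SNP ids and weights are invented.
SNP_WEIGHTS = {          # SNP id -> log odds ratio per risk allele (invented)
    "rs0001": 0.12,
    "rs0002": 0.08,
    "rs0003": 0.05,
}

def risk_score(genotype):
    """genotype maps SNP id -> number of risk alleles carried (0, 1, or 2)."""
    return sum(w * genotype.get(snp, 0) for snp, w in SNP_WEIGHTS.items())

patient = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(risk_score(patient))  # 0.32; higher totals mean higher estimated risk
```

Men whose scores fall in the top percentile of such a distribution would be the six-fold-risk group described above.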

In a meta-analysis that combined both previous and new research data, Schumacher, with colleagues from Europe and Australia, examined DNA sequences of about 80,000 men with prostate cancer and about 60,000 men who didn’t have the disease. They found that 63 SNPs occurred at a higher frequency in the men with cancer than in the men without the disease. Additionally, the more of these SNPs a man has, the more likely he is to develop prostate cancer.

The researchers estimate that there are about 500-1,000 genetic variants possibly linked to prostate cancer, not all of which have yet been identified. “We probably only need to know 10 percent to 20 percent of these to provide relevant screening guidelines,” continued Schumacher, who is an associate professor in the Department of Population and Quantitative Health Sciences at Case Western Reserve School of Medicine.

Currently, researchers don’t know which of the SNPs are the most predictive of increased prostate cancer risk. Schumacher and a number of colleagues are working to rank those most likely to be linked with prostate cancer, especially with aggressive forms of the disease that require surgery, as opposed to slowly developing versions that call for “watchful waiting” and monitoring.

The research lays a foundation for determining which men should undergo PSA tests and how often. “In the future, your genetic risk score may be highly indicative of your prostate cancer risk, which will determine the intensity of PSA screening,” said Schumacher. “We will be working to determine the precise genetic risk score range that would trigger testing. Additionally, if you have a low score, you may need screening less frequently, such as every two to five years.” A further implication of the new study’s findings is the possibility of precise treatments that do not involve surgery. “Someday it may be feasible to target treatments based on a patient’s prostate cancer genetic risk score,” said Schumacher.

In addition to the work in the new study, which targets men of European background, there are parallel efforts underway looking at genetic signals of prostate cancer in men of African-American and Asian descent.

http://thedaily.case.edu/researchers-identify-dozens-new-gene-changes-point-elevated-risk-prostate-cancer-men-european-descent/

By Aaron E. Carroll

The medical research grant system in the United States, run through the National Institutes of Health, is intended to fund work that spurs innovation and fosters research careers. In many ways, it may be failing.

It has been getting harder for researchers to obtain grant support. A study published in 2015 in JAMA showed that from 2004 to 2012, research funding in the United States increased only 0.8 percent year to year. It hasn’t kept up with the rate of inflation; officials say the N.I.H. has lost about 23 percent of its purchasing power in a recent 12-year span.

Because the money available for research doesn’t go as far as it used to, it now takes longer for scientists to get funding. The average researcher with an M.D. is 45 years old (for a Ph.D. it’s 42 years old) before she or he obtains that first R01 (think “big” grant).

Given that R01-level funding is necessary to obtain promotion and tenure (not to mention its role in the science itself), this means that more promising researchers are washing out than ever before. Only about 20 percent of postdoctoral candidates who aim to earn a tenured position in a university achieve that goal.

This new reality can be justified only if those who are weeded out really aren’t as good as those who remain. Are we sure that those who make it are better than those who don’t?

A recent study suggests the grant-making system may be unreliable in distinguishing between the grants it funds and those that get nothing — its very purpose.

When health researchers believe they have a good idea for a research study, they most often submit a proposal to the N.I.H. It’s not easy to do so. Grants are hard to write, take a lot of time, and require a lot of experience to obtain.

After they are submitted, applications are sorted by topic areas and then sent to a group of experts called a study section. If any experts have a conflict of interest, they recuse themselves. Applications are usually first reviewed by three members of the study section and then scored on a number of domains from 1 (best) to 9 (worst).

The scores are averaged. Although the bottom half of applications will receive written comments and scores from reviewers, the applications are not discussed in the study section meetings. The top half are presented in the meeting by the reviewers, then the entire study section votes using the same nine-point scale. The grants are then ranked by scores, and the best are funded based on how much money is available. Grants have to have a percentile better than the “payline,” which is, today, usually between 10 and 15 percent.
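
A minimal sketch of that ranking step, with invented scores and a payline at the top of the quoted range (real percentile conventions are more involved):

```python
# Toy sketch of the scoring pipeline described above; all numbers invented.
# Reviewers score each application from 1 (best) to 9 (worst); mean scores
# are ranked into percentiles and compared against a payline.

def percentile_rank(mean_scores):
    """Map each application to its percentile; lower percentile = better."""
    ordered = sorted(mean_scores, key=lambda app: mean_scores[app])
    return {app: 100 * i / len(ordered) for i, app in enumerate(ordered)}

reviews = {  # application -> scores from its three assigned reviewers
    "A": [2, 3, 2],
    "B": [5, 4, 6],
    "C": [1, 2, 2],
    "D": [7, 8, 6],
}
means = {app: sum(s) / len(s) for app, s in reviews.items()}
PAYLINE = 15  # percent, the upper end of the 10-15 percent range cited

for app, pct in percentile_rank(means).items():
    print(app, "funded" if pct < PAYLINE else "not funded")
```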

Given that there are far more applications than can be funded, and that only the best ones are even discussed, we would hope that study sections can agree on the scores applications receive, especially at the top end of the spectrum.

In this study of the system, researchers obtained 25 funded proposals from the National Cancer Institute. Sixteen of them were considered “excellent,” as they were funded the first time they were submitted. The other nine were funded on resubmission — grant applications can be submitted twice — and so can still be considered “very good.”

They then set up mock study sections. They recruited researchers to serve on them just as they do on actual study sections. They assigned those researchers to grant applications, which were reviewed as they would be for the N.I.H. They brought those researchers together in groups of eight to 10 and had them discuss and then score the proposals as they would if this were for actual funding.

The intraclass correlation — a statistic that refers to how much groups agree — was 0 for the scores assigned. This meant that there was no agreement at all on the quality of any application. Because they were concerned about the reliability of this result, the researchers also computed a Krippendorff’s alpha, another statistic of agreement. A score above 0.7 (range 0 to 1) is considered “acceptable.” None were; the values were all very close to zero. A final statistic measured overall similarity scores and found that scores for the same application were no more similar than scores for different applications.
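
For readers unfamiliar with the statistic, here is a hedged sketch of a one-way intraclass correlation computed on invented scores (the study’s exact model may differ, but the contrast between agreeing and disagreeing reviewers is the same idea):

```python
# Sketch of a one-way random-effects intraclass correlation, ICC(1,1).
# All ratings below are invented for illustration.
import numpy as np

def icc1(ratings):
    """ratings: n_targets x k_raters array; returns ICC(1,1)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Four proposals, each scored by three reviewers (1 best, 9 worst).
agree    = [[2, 2, 3], [5, 5, 4], [7, 8, 7], [3, 3, 2]]
disagree = [[2, 7, 4], [5, 1, 8], [7, 3, 2], [3, 9, 5]]
print(icc1(agree))     # about 0.94: reviewers largely agree
print(icc1(disagree))  # below zero: no agreement, as the study found
```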

There wasn’t even any difference between the scores for those funded immediately and those requiring resubmission.

New evidence suggests a mechanism by which progressive accumulation of Tau protein in brain cells may lead to Alzheimer’s disease. Scientists studied more than 600 human brains and fruit fly models of Alzheimer’s disease and found the first evidence of a strong link between Tau protein within neurons and the activity of particular DNA sequences called transposable elements, which might trigger neurodegeneration. The study appears in the journal Cell Reports.

“One of the key characteristics of Alzheimer’s disease is the accumulation of Tau protein within brain cells, in combination with progressive cell death,” said corresponding author Dr. Joshua Shulman, associate professor of neurology, neuroscience and molecular and human genetics at Baylor College of Medicine and investigator at the Jan and Dan Duncan Neurological Research Institute at Texas Children’s Hospital. “In this study we provide novel insights into how accumulation of Tau protein may contribute to the development of Alzheimer’s disease.”

Although scientists have studied for years what happens when Tau forms aggregates inside neurons, it still is not clear why brain cells ultimately die. One thing that scientists have noticed is that neurons affected by Tau accumulation also appear to have genomic instability.

“Genomic instability refers to an increased tendency to have alterations in the genetic material, DNA, such as mutations or other impairments. This means that the genome is not functioning correctly. Genomic instability is known to be a major driving force behind other diseases such as cancer,” Shulman said. “Our study focused on a new possible causal connection between Tau accumulation within neurons and the resulting genomic instability in Alzheimer’s disease.”

Enter transposable elements
Previous studies of brain tissues from patients with other neurologic diseases, and of animal models, have suggested that the affected neurons present not only with genomic instability but also with activation of transposable elements.

“Transposable elements are short pieces of DNA that do not seem to contribute to the production of proteins that make cells function. They behave in a way similar to viruses; they can make copies of themselves that are inserted within the genome and this can create mutations that lead to disease,” Shulman said. “Although most transposable elements are dormant or dysfunctional, some may become active in human brains late in life or in disease. That’s what led us to look specifically at Alzheimer’s disease and the possible association between Tau accumulation and activated transposable elements.”

Shulman and his colleagues began their investigations by studying more than 600 human brains from a population study run by co-author Dr. David Bennett at Rush University Medical Center in Chicago. This population study follows participants throughout their lives and at death, allowing the researchers to examine their brains in detail postmortem. One of the evaluations is the amount of Tau accumulation across many brain regions. In addition, co-author Dr. Philip De Jager at the Broad Institute and Columbia University comprehensively profiled gene expression in the same brains.

“With this large amount of data, we looked to identify signatures of active transposable elements, but this was not easy,” Shulman said. “We therefore reached out to Dr. Zhandong Liu, a co-author in this study, and together we developed a new software tool to detect signatures of active transposable elements from postmortem human brains. Then we conducted a statistical analysis in which we compared the amount of active transposable elements signatures with the amount of Tau accumulation, brain by brain.” Liu also is assistant professor of pediatrics – neurology at Baylor and a member of the Dan L Duncan Comprehensive Cancer Center.

The researchers found a strong link between the amount of Tau accumulation in neurons and detectable activity of transposable elements.

“We identified individual transposable elements that were active when Tau aggregates were present. Surprisingly, we also found evidence that the activation of transposable elements was quite broad across the genome,” Shulman said.

Other research has shown that Tau may disrupt the tightly packed architecture of the genome. It is believed that tightly packed DNA limits gene activation, while opening up the DNA may promote it. Keeping the DNA tightly packed may be an important mechanism to suppress the activity of transposable elements that lead to disease.

“The fact that Tau aggregates can affect the architecture of the genome may be one possible mechanism by which transposable elements are activated in Alzheimer’s disease,” Shulman said. “However, our studies in human brains only establish an association between Tau accumulation and activation of transposable elements. To determine whether Tau accumulation could in fact cause transposable element activation, we conducted studies with a fruit fly model of Alzheimer’s disease.”

In this fruit fly model of the disease, the researchers found that triggering Tau changes similar to those observed in human brains resulted in the activation of fruit fly transposable elements. This strongly suggests that Tau aggregates, by disrupting the architecture of the genome, can mediate the activation of transposable elements and ultimately cause neurodegeneration.

“We think our experiments reveal new and potentially important insights relevant for understanding Alzheimer’s disease mechanisms,” Shulman said. “There is still a lot of work to be done, but by presenting our results we hope we can stimulate others in the research community to help work on this problem.”

https://www.bcm.edu/news/neurology/research-links-tau-aggregates-cell-death

by CHRISTIAN COTRONEO

The mind may seem to thrive on stimuli — the honking horns, the pixels percolating on this screen at this very moment, or even the way the keyboard feels under your fingers at any given time of day.

But, in fact, it may be what lies between — the time between honks, if you will — that matters most: a window when the brain focuses on encoding the information, according to a new study from Neuroscience Research Australia (NeuRA) and the University of New South Wales.

Of course, we’ve long known that silence is golden — especially when it comes to mental health and dealing with stress. But the new research points to the absence of stimulation as a window when the brain has a chance to learn from its environment.

Think of it as a micro-breather for the mind, allowing it to grasp and distill what it’s experiencing.

To reach that conclusion, researchers Ingvars Birznieks and Richard Vickery developed a unique way to control the neural information that’s presented to the brain. Essentially, they delivered short mechanical taps to the fingertips of study subjects.

Birznieks and Vickery ensured that each tap generated a corresponding nerve impulse to a neuron in the brain. By triggering the sense of touch — which the brain registers from vibrations along the ridges of our fingertips — the scientists were able to monitor how nerve impulses encoded the information.

The thing is, the frequency of those neuron bursts didn’t match the frequency of taps.

“Instead, it was the silent period between bursts that best explained the subjects’ experiences,” Birznieks noted in the NeuRA blog.

Prevailing theories had it that every vibration or tap would have a corresponding nerve impulse, or that the brain would be able to detect a periodic regularity in the impulse patterns.

“We were hoping to disprove one of the two competing theories, but showing they were both incorrect and finding a completely new coding strategy totally surprised us,” Birznieks added.

The brain just kept ticking along to its own beat, independent of how often those fingertips were stimulated.
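
As a toy contrast between the two readouts (all numbers invented), the same burst train can give different answers depending on whether one decodes the burst rate or the silent gaps between bursts:

```python
# Toy contrast of two candidate neural codes; burst times are invented.
bursts = [(0, 12), (40, 52), (80, 92), (120, 132)]  # (start_ms, end_ms)

# Readout 1: overall burst rate across the train.
burst_rate_hz = 1000.0 * len(bursts) / (bursts[-1][1] - bursts[0][0])

# Readout 2: the silent period between the end of one burst and the start
# of the next, which the study found best explained perception.
gaps_ms = [start2 - end1
           for (_, end1), (start2, _) in zip(bursts, bursts[1:])]
silence_code_hz = 1000.0 / (sum(gaps_ms) / len(gaps_ms))

print(burst_rate_hz, silence_code_hz)  # the two readouts disagree
```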

For neuroscience, the findings could be a game-changer. A better understanding of how the brain fields daily neural impulses could pave the way for more efficient interfaces between brain and machine.

And for the rest of us, it suggests that in an increasingly noise-addled society — where every sense seems in danger of over-stimulation — it may do a body good to give the brain a breather.

https://www.mnn.com/green-tech/research-innovations/stories/silence-brain-study-touch-stimulation

When the disembodied cockroach leg twitched, Yeongin Kim knew he had finally made it.

A graduate student at Stanford, Kim had been working with an international team of neuroengineers on a crazy project: an artificial nerve that acts like the real thing. Like sensory neurons embedded in our skin, the device—which kind of looks like a bendy Band-Aid—detects touch, processes the information, and sends it off to other nerves.

Yup, even if that downstream nerve is inside a cockroach leg.

Of course, the end goal of the project isn’t to fiddle with bugs for fun. Rather, the artificial nerve could soon provide prosthetics with a whole new set of sensations.

Touch is just the beginning: future versions could include a sense of temperature, feelings of movement, texture, and different types of pressure—everything that helps us navigate the environment.

The artificial nerve fundamentally processes information differently than current computer systems. Rather than dealing with 0s and 1s, the nerve “fires” like its biological counterpart. Because it uses the same language as a biological nerve, the device can directly communicate with the body—whether it be the leg of a cockroach or residual nerve endings from an amputated limb.

But prosthetics are only part of it. The artificial nerve can potentially combine with an artificial “brain”—for example, a neuromorphic computer chip that processes input somewhat like our brains—to interpret its output signals. The result is a simple but powerful multi-sensory artificial nervous system, ready to power our next generation of bio-robots.

“I think that would be really, really interesting,” said materials engineer Dr. Alec Talin at Sandia National Laboratory in California, who was not involved in the work. The team described their device in Science.

Feeling Good

Current prosthetic devices are already pretty remarkable. They can read a user’s brain activity and move accordingly. Some have sensors embedded, allowing the user to receive sparse feelings of touch or pressure. Newer experimental devices even incorporate a bio-hack that gives its wearer a sense of movement and position in space, so that the user can grab a cup of coffee or open a door without having to watch their prosthetic hand.

Yet our natural senses are far more complex, and even state-of-the-art prosthetics can generate a sense of “other,” often resulting in the device being abandoned. Replicating all the sensors in our skin has been a longtime goal of bioengineers, but hard to achieve without—here’s the kicker—actually replicating how our skin’s sensors work.

Embedded inside a sliver of our skin are thousands of receptors sensitive to pressure, temperature, pain, itchiness, and texture. When activated, these sensors shoot electrical signals down networks of sensory nerves, integrating at “nodes” along the way. Only if the signals are strong enough—if they reach a threshold—does the information get passed on to the next node, and eventually, to the spinal cord and brain for interpretation.

This “integrate-and-fire” mode of neuronal chatter is partly why our sensory system is so effective. It manages to ignore hundreds of insignificant, noisy inputs and only passes on information that is useful. Ask a classic computer to process all these data in parallel—even if running state-of-the-art deep learning algorithms—and it chokes.

Neuromorphic Code

One thing was clear to Kim and his colleagues: forget computers, it’s time to go neural.

Working with Dr. Zhenan Bao at Stanford University and Dr. Tae-Woo Lee at Seoul National University in South Korea, Kim set his sights on fabricating a flexible organic device that works like an artificial nerve.

The device contains three parts. The first is a series of sensitive touch sensors that can detect the slightest changes in pressure. Touching these sensors sparks an electrical voltage, which is then picked up by the next component: a “ring oscillator.” This is just a fancy name for a circuit that transforms voltage into electrical pulses, much like a biological neuron.

The pulses are then passed down to the third component, a synaptic transistor. That’s the Grand Central Station for the device: it takes in the electrical pulses from all active sensors and integrates the signals. If the input is sufficiently strong, the transistor fires off a chain of electrical pulses of various frequencies and magnitudes, similar to those produced by biological neurons.
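
Here is a conceptual sketch of that pipeline, not the authors’ device model: an oscillator stage whose pulse rate scales with pressure feeds a leaky integrate-and-fire stage standing in for the synaptic transistor (every constant below is invented):

```python
# Conceptual sketch of the sensor -> oscillator -> integrate-and-fire chain.
# Not the authors' device model; all constants here are invented.

def oscillator(pressure, duration_ms, max_rate_hz=200):
    """Emit pulse times; pulse rate grows with applied pressure (0..1)."""
    period_ms = 1000.0 / max(1e-9, pressure * max_rate_hz)
    t, pulses = 0.0, []
    while t < duration_ms:
        pulses.append(t)
        t += period_ms
    return pulses

def integrate_and_fire(pulses, duration_ms, leak=0.95, weight=0.3, threshold=1.0):
    """Integrate incoming pulses with a leak; fire when threshold is crossed."""
    v, spikes, pulse_set = 0.0, [], {round(p) for p in pulses}
    for t in range(int(duration_ms)):
        v *= leak            # leaky decay each millisecond
        if t in pulse_set:
            v += weight      # each incoming pulse adds a bit of charge
        if v >= threshold:
            spikes.append(t)
            v = 0.0          # reset after firing
    return spikes

light = integrate_and_fire(oscillator(0.2, 200), 200)
firm  = integrate_and_fire(oscillator(0.8, 200), 200)
print(len(light), len(firm))  # firmer pressure -> more output spikes
```

In this toy version the light touch never crosses threshold at all, mirroring how an integrate-and-fire scheme filters out weak, insignificant inputs.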

In other words, the outputs of the artificial nerve are electrical patterns that the body can understand—the “neural code.”

“The neural code is at the same time rich and efficient, being an optimal choice to design artificial systems for sensing and perception,” explained Dr. Chiara Bartolozzi at the Italian Institute of Technology in Genova, who was not involved in the work.

Neural Magic

In a series of tests, the team proved her right.

In one experiment, they moved a small rod across the pressure sensor in different directions and found that the device could distinguish each movement and provide an estimate of its speed.

Another test showed that a more complicated artificial nerve could differentiate between various Braille letters. The team hooked up two sets of synaptic transistors with oscillators. When the device “felt” the Braille characters, the pressure signals integrated, generating a specific output electrical pattern for each letter.

“This approach mimics the process of tactile information processing in a biological somatosensory system,” said the authors, adding that raw inputs are partially processed at synapses first before delivery to the brain.

Then there was the cockroach experiment. Here, the team hooked up the device to a single, detached cockroach leg. They then applied pressure to the device in tiny increments, which was processed and passed on to the cockroach through the synaptic transistor. The cockroach’s nervous system took the outputs as its own, twitching its leg more or less vigorously depending on how much pressure was initially applied.

The device can be used in a “hybrid bioelectronics reflex arc,” the authors explained, in that it can be used to control biological muscles. Future artificial nerves could potentially act the same way, giving prosthetics and robots both touch sensations and reflexes.

The work is still in its infancy, but the team has high hopes for their strategy. Because organic electronics like the ones used here are small and cheap to make, bioengineers could potentially pack more sensors into smaller areas. This would allow multiple artificial nerves to transmit a wider array of sensations for future prosthetic wearers, transforming the robotic appendage into something that feels more natural and “self.”

Natural haptic feedback could help users with fine motor control in prosthetic hands, such as gently holding a ripe banana. When embedded in the feet of lower-limb prosthetics, the artificial nerves could help the user walk more naturally because of pressure feedback from the ground.

The team also dreams of covering entire robots with the stretchy device. Tactile information could help robots better interact with objects, or allow surgeons to more precisely control remote surgical robots that require finesse.

And perhaps one day, the artificial nerve could even be combined with a neuromorphic chip—a computer chip that acts somewhat like the brain—and result in a simple but powerful multi-sensory artificial nervous system for future robots.

“We take skin for granted but it’s a complex sensing, signaling, and decision-making system,” said study author Dr. Zhenan Bao at Stanford University. “This artificial sensory nerve system is a step toward making skin-like sensory neural networks for all sorts of applications.”

https://singularityhub.com/2018/06/05/robots-will-be-able-to-feel-touch-with-this-artificial-nerve/?utm_source=Singularity+Hub+Newsletter&utm_campaign=b70e3ec468-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-b70e3ec468-58158129#sm.000wtv3xx101kcveszb20uis8nwpj


Fresh or frozen human blood samples can be directly transformed into patient-specific neurons to study disorders such as schizophrenia and autism, Stanford researcher Marius Wernig has found.

Human immune cells in blood can be converted directly into functional neurons in the laboratory in about three weeks with the addition of just four proteins, researchers at the Stanford University School of Medicine have found.

The dramatic transformation does not require the cells to first enter a state called pluripotency but instead occurs through a more direct process called transdifferentiation.

The conversion occurs with relatively high efficiency — generating as many as 50,000 neurons from 1 milliliter of blood — and it can be achieved with fresh or previously frozen and stored blood samples, which vastly enhances opportunities for the study of neurological disorders such as schizophrenia and autism.

“Blood is one of the easiest biological samples to obtain,” said Marius Wernig, MD, associate professor of pathology and a member of Stanford’s Institute for Stem Cell Biology and Regenerative Medicine. “Nearly every patient who walks into a hospital leaves a blood sample, and often these samples are frozen and stored for future study. This technique is a breakthrough that opens the possibility to learn about complex disease processes by studying large numbers of patients.”

A paper describing the findings was published online June 4 in the Proceedings of the National Academy of Sciences. Wernig is the senior author. Former postdoctoral scholar Koji Tanabe, PhD, and graduate student Cheen Ang are the lead authors.

Dogged by challenges

The transdifferentiation technique was first developed in Wernig’s laboratory in 2010 when he and his colleagues showed that they could convert mouse skin cells into mouse neurons without first inducing the cells to become pluripotent — a developmentally flexible stage from which the cells can become nearly any type of tissue. They went on to show the technique could also be used on human skin and liver cells.

But each approach has been dogged by challenges, particularly for researchers wishing to study genetically complex mental disorders, such as autism or schizophrenia, for which many hundreds of individual, patient-specific samples are needed in order to suss out the relative contributions of dozens or more disease-associated mutations.

“Generating induced pluripotent stem cells from large numbers of patients is expensive and laborious. Moreover, obtaining skin cells involves an invasive and painful procedure,” Wernig said. “The prospect of generating iPS cells from hundreds of patients is daunting and would require automation of the complex reprogramming process.”

Although it’s possible to directly convert skin cells to neurons, the biopsied skin cells first have to be grown in the laboratory for a period of time until their numbers increase — a process likely to introduce genetic mutations not found in the person from whom the cells were obtained.

The researchers wondered if there was an easier, more efficient way to generate patient-specific neurons.

‘Somewhat mind-boggling’
In the new study, Wernig and his colleagues focused on highly specialized immune cells called T cells that circulate in the blood. T cells protect us from disease by recognizing and killing infected or cancerous cells. In contrast, neurons are long and skinny cells capable of conducting electrical impulses along their length and passing them from cell to cell. But despite the cells’ vastly different shapes, locations and biological missions, the researchers found it unexpectedly easy to complete their quest.

“It’s kind of shocking how simple it is to convert T cells into functional neurons in just a few days,” Wernig said. “T cells are very specialized immune cells with a simple round shape, so the rapid transformation is somewhat mind-boggling.”

The resulting human neurons aren’t perfect. They lack the ability to form mature synapses, or connections, with one another. But they are able to carry out the main fundamental functions of neurons, and Wernig and his colleagues are hopeful they will be able to further optimize the technique in the future. In the meantime, they’ve started to collect blood samples from children with autism.

“We now have a way to directly study the neuronal function of, in principle, hundreds of people with schizophrenia and autism,” Wernig said. “For decades we’ve had very few clues about the origins of these disorders or how to treat them. Now we can start to answer so many questions.”

Other Stanford co-authors are postdoctoral scholars Soham Chanda, PhD, and Daniel Haag, PhD; undergraduate student Victor Olmos; professor of psychiatry and behavioral sciences Douglas Levinson, MD; and professor of molecular and cellular physiology Thomas Südhof, MD.

The research was supported by the National Institutes of Health (grants MH092931 and MH104172), the California Institute for Regenerative Medicine, the New York Stem Cell Foundation, the Howard Hughes Medical Institute, the Siebel Foundation and the Stanford Schizophrenia Genetics Research Fund.

http://med.stanford.edu/news/all-news/2018/06/human-blood-cells-transformed-into-functional-neurons.html

The majority of the cells in the brain are not neurons but glia (from the Greek for “glue”), cells that support the structure and function of the brain. Astrocytes (“star cells”) are star-shaped glial cells that provide many supportive functions for the neurons surrounding them, such as supplying nutrients and regulating their chemical environment. Newer studies have shown that astrocytes also monitor and modulate neuronal activity. For example, these studies have shown that astrocytes are necessary for the ability of neurons to change the strength of the connections between them, the process underlying learning and memory, and that astrocytes are also necessary for normal cognitive function. However, it was still unknown whether astrocytic activity is merely necessary, or whether it may also be sufficient, to induce synaptic potentiation and enhance cognitive performance.

In a new study published in Cell, two graduate students, Adar Adamsky and Adi Kol, from Inbal Goshen’s lab employed chemogenetic and optogenetic tools that allow specific activation of astrocytes in behaving mice to explore their role in synaptic activity and memory performance. They found that astrocytic activation in the hippocampus, a brain region that plays an important role in memory acquisition and consolidation, potentiated the synaptic connections in this region, as measured in brain slices. Moreover, in the intact brain, astrocytic activation enhanced hippocampal neuronal activity in a task-dependent way: only when it was combined with memory acquisition, not when the mice were in their home cage with no meaningful stimuli. The ability of astrocytes to increase neuronal activity during memory acquisition had a significant effect on cognitive function: astrocytic activation during learning resulted in enhanced memory in two memory tests. In contrast, direct neuronal activation in the hippocampus induced a non-selective increase in activity (both during learning and in the home cage), and thus resulted in drastic memory impairment.

The results suggest that the memory enhancement induced by astrocytic activation during learning is not simply a result of a general increase in hippocampal neuronal activity. Rather, the astrocytes, which sense and respond to changes in the surrounding neuronal activity, can detect and specifically enhance only the neuronal activity involved in learning, without affecting the general activity. This may explain why general astrocytic activation improves memory performance, whereas a similar activation of neurons impairs it.

Memory is not a binary process (remember/don’t remember); the strength of a memory can vary greatly, either for the same memory or between different memories. Here, we show that activating astrocytes in mice with intact cognition improves their memory performance. This finding has important clinical implications for cognitive augmentation treatments. Furthermore, the ability of astrocytes to strengthen neuronal communication and improve memory performance supports the claim that astrocytes are able to take an active part in the neuronal processes underlying cognitive function. This perspective expands the definition of the role of astrocytes, from passive support cells to active cells that can modulate neural activity and thus shape behavior.

Link: https://www.cell.com/cell/pdf/S0092-8674(18)30575-0.pdf

https://elsc.huji.ac.il/content/article-month-june-2018-goshens-lab