Posts Tagged ‘neuroscience’


When bad things happen, we don’t want to remember. We try to block, resist, ignore – but we should perhaps be doing the opposite, researchers say.

A new study led by scientists in Texas suggests the act of intentionally forgetting is linked to increased cerebral engagement with the unwanted information in question. In other words, to forget something, you actually need to focus on it.

“A moderate level of brain activity is critical to this forgetting mechanism,” explains psychologist Tracy Wang from the University of Texas at Austin.

“Too strong, and it will strengthen the memory; too weak, and you won’t modify it.”

Trying to actively forget unwanted memories doesn’t just help prevent your brain from getting overloaded.

It also lets people move on from painful experiences and emotions they’d rather not recall, which is part of the reason it’s an area of active interest to neuroscientists.

“We may want to discard memories that trigger maladaptive responses, such as traumatic memories, so that we can respond to new experiences in more adaptive ways,” says one of the researchers, Jarrod Lewis-Peacock.

“Decades of research has shown that we have the ability to voluntarily forget something, but how our brains do that is still being questioned.”

Much prior research on intentional forgetting has focussed on brain activity in the prefrontal cortex, and the brain’s memory centre, the hippocampus.

In the new study, the researchers monitored a different part of the brain called the ventral temporal cortex, which helps us process and categorise visual stimuli.

In an experiment with 24 healthy young adults, the participants were shown pictures of scenes and people’s faces, and were instructed to either remember or forget each image.

During the experiment, each of the participants had their brain activity monitored by functional magnetic resonance imaging (fMRI) machines.

When the researchers examined activity in the ventral temporal cortex, they found that the act of forgetting effectively uses more brain power than remembering.

“Pictures followed by a forget instruction elicited higher levels of processing in [the] ventral temporal cortex compared to those followed by a remember instruction,” the authors write in their paper.

“This boost in processing led to more forgetting, particularly for items that showed moderate (vs. weak or strong) activation.”

Of course, forgetting specific images on demand in a contrived laboratory experiment is very different to moving on from painful or traumatic memories of events experienced in the real world.

But the mechanisms at work could be the same, researchers say, and figuring out how to activate them could be a huge benefit to people around the world who need to forget things, but don’t know how.

This finding in particular challenges our natural intuition to suppress unwanted memories; instead, forgetting seems to require devoting more attention to the unwanted information, not less.

“Importantly, it’s the intention to forget that increases the activation of the memory,” Wang says.

“When this activation hits the ‘moderate level’ sweet spot, that’s when it leads to later forgetting of that experience.”

The findings are reported in JNeurosci.


by Ruth Williams

The brains of people in vegetative, partially conscious, or fully conscious states have differing profiles of activity as revealed by functional magnetic resonance imaging (fMRI), according to a report today (February 6) in Science Advances. The results of the study indicate that, compared with patients lacking consciousness, the brains of healthy individuals exhibit highly dynamic and complex connectivity.

“This new study provides a substantial advance in characterizing the ‘fingerprints’ of consciousness in the brain,” Anil Seth, a neuroscientist at the University of Sussex, UK, who was not involved in the project, writes in an email to The Scientist. “It opens new doors to determining conscious states—or their absence—in a range of different conditions.”

A person can lose consciousness temporarily, such as during sleep or anesthesia, or more permanently as is the case with certain brain injuries. But while unconsciousness manifests behaviorally as a failure to respond to stimuli, such behavior is not necessarily the result of unconsciousness.

Some seemingly unresponsive patients, for example, can display brain activities similar to those of fully conscious individuals when asked to imagine performing a physical task such as playing tennis. Such a mental response in the absence of physical feedback is a condition known as cognitive-motor dissociation.

Researchers are therefore attempting to build a better picture of what is happening in the human brain during consciousness and unconsciousness. In some studies, electroencephalography (EEG) recordings of the brain’s electrical activities during sleep, under anesthesia, or after brain injury have revealed patterns of brain waves associated with consciousness. But, says Jacobo Sitt of the Institute of Brain and Spinal Cord in Paris, such measurements do not provide good spatial information about brain activity. With fMRI, on the other hand, “we know where the activity is coming from.”

Sitt and colleagues performed fMRI brain scans on a total of 47 healthy individuals and 78 patients who either had unresponsive wakefulness syndrome (UWS)—a vegetative state in which the patient’s eyes open, but they never exhibit voluntary movement—or were in a minimally conscious state (MCS)—having more complex behaviors, such as the ability to follow an object with their eyes, but remaining unable to communicate thoughts or feelings. The scans were performed by an international team of collaborators at three different facilities in Paris, New York, and Liège, Belgium.

Data from the fMRI scans, which generated roughly 400 images in approximately 20 minutes for each patient, was computationally analyzed for identifiable patterns of activity. Four patterns were reproducibly detected within the data from each facility. And, for two of these patterns, the likelihood of their occurrence in a given individual’s scan depended on diagnosis.

Healthy individuals, for example, were more likely than patients to display pattern 1—characterized by high spatial complexity and interregional connectivity indicating brain-wide coordination. Patients with UWS, on the other hand, rarely displayed pattern 1, most often displaying pattern 4—characterized by low complexity and reduced interregional connectivity. Generally speaking, MCS patients fell somewhere between. Patterns 2 and 3 were equally likely to occur across all groups.
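As a rough illustration of how recurring activity patterns can be pulled out of scan data, here is a toy clustering of simulated connectivity profiles — entirely hypothetical numbers, and a far simpler algorithm than the study’s actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened interregional connectivity profiles:
# 20 scans with high, coordinated connectivity and 20 with low.
high_coord = rng.normal(0.8, 0.05, size=(20, 10))
low_coord = rng.normal(0.1, 0.05, size=(20, 10))
scans = np.vstack([high_coord, low_coord])

def two_means(X, iters=20):
    """Minimal 2-cluster k-means with deterministic initialization."""
    # Start from the first scan and the scan farthest from it.
    c0 = X[0]
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))]
    centroids = np.stack([c0, c1])
    for _ in range(iters):
        # Assign each scan to its nearest centroid...
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each centroid to its cluster's mean.
        centroids = np.stack([X[labels == j].mean(axis=0) for j in (0, 1)])
    return labels

labels = two_means(scans)
print(np.bincount(labels))
```

Scans generated from the same underlying pattern land in the same cluster; the study recovered its four patterns analogously, and reproduced them independently at each facility.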

The team went on to analyze a second set of 11 patients at a facility in Ontario, Canada. Again the four distinct patterns were detected within the fMRI images. Six of these patients had UWS and predominantly displayed pattern 4, while the remaining five, who had cognitive-motor dissociation, had higher rates of pattern 1, supporting previous evidence for consciousness in these patients.

With such a mix of patients, facilities, scanners, and researchers, the study “had every possibility of failing,” says neuroscientist Tristan Bekinschtein of the University of Cambridge, UK, who did not participate in the research. However, the results were “brutally consistent,” he says.

Having identifiable signatures of consciousness and unconsciousness might ultimately help doctors and families make difficult decisions about continuing life support for vegetative patients, says anesthesiology researcher Anthony Hudetz of the University of Michigan who was not involved with the work. It might also provide insight into whether particular rehabilitation methods or other treatments are working.

“All that hinges on a better understanding of what goes on in the brains of these patients versus healthy or aware [people],” Hudetz says. To that end, this paper “makes a major step forward.”

A. Demertzi et al., “Human consciousness is supported by dynamic complex patterns of brain signal coordination,” Sci Adv, 5: eaat7603, 2019.

To be classed as a psychopath, a person has to score highly on tests like the Hare Psychopathy Checklist, answering questions about superficial charm, impulsive behaviour, and pathological lying.

But there could be a simpler test: yawning.

It’s hard not to yawn when someone else does, because yawning is so contagious. Even dogs can catch them. But according to a study from 2015, published in the journal Personality and Individual Differences, psychopaths aren’t so susceptible.

The researchers from Baylor University recruited 135 students and measured their personalities for psychopathic traits. They then subjected them to a contagious yawning experiment.

Those who scored highly on the psychopathic scale were much less likely to catch a yawn.

In previous research, yawning has been linked to empathy. For example, in one study, children with autism were less likely to catch yawns, possibly because they find it harder to read other people. Babies don’t catch yawns either, and won’t until they are at least 4 years old, when they have more emotional awareness.

The researchers suggest empathy could be at play in their experiment, as psychopaths tend to lack it.

This isn’t to say that someone who doesn’t yawn when you do must be a psychopath. It’s just an intriguing trait of people who struggle to connect with other people’s emotions.

Also, people can catch yawns to different degrees. For some, just reading the word “yawn” is enough to set them off. So if you yawned the whole way through reading this article, you might be able to conclude that your empathy is pretty high.

by George Dvorsky

Using brain-scanning technology, artificial intelligence, and speech synthesizers, scientists have converted brain patterns into intelligible verbal speech—an advance that could eventually give voice to those without.

It’s a shame Stephen Hawking isn’t alive to see this, as he may have gotten a real kick out of it. The new speech system, developed by researchers at the ​Neural Acoustic Processing Lab at Columbia University in New York City, is something the late physicist might have benefited from.

Hawking had amyotrophic lateral sclerosis (ALS), a motor neuron disease that took away his verbal speech, but he continued to communicate using a computer and a speech synthesizer. By using a cheek switch affixed to his glasses, Hawking was able to pre-select words on a computer, which were read out by a voice synthesizer. It was a bit tedious, but it allowed Hawking to produce around a dozen words per minute.

But imagine if Hawking didn’t have to manually select and trigger the words. Indeed, some individuals, whether they have ALS, locked-in syndrome, or are recovering from a stroke, may not have the motor skills required to control a computer, even by just a tweak of the cheek. Ideally, an artificial voice system would capture an individual’s thoughts directly to produce speech, eliminating the need to control a computer.

New research published today in Scientific Reports takes us an important step closer to that goal, but instead of capturing an individual’s internal thoughts to reconstruct speech, it uses the brain patterns produced while listening to speech.

To devise such a speech neuroprosthesis, neuroscientist Nima Mesgarani and his colleagues combined recent advances in deep learning with speech synthesis technologies. Their resulting brain-computer interface, though still rudimentary, captured brain patterns directly from the auditory cortex, which were then decoded by an AI-powered vocoder, or speech synthesizer, to produce intelligible speech. The speech was very robotic sounding, but nearly three in four listeners were able to discern the content. It’s an exciting advance—one that could eventually help people who have lost the capacity for speech.

To be clear, Mesgarani’s neuroprosthetic device isn’t translating an individual’s covert speech—that is, the thoughts in our heads, also called imagined speech—directly into words. Unfortunately, we’re not quite there yet in terms of the science. Instead, the system captured an individual’s distinctive cognitive responses as they listened to recordings of people speaking. A deep neural network was then able to decode, or translate, these patterns, allowing the system to reconstruct speech.

“This study continues a recent trend in applying deep learning techniques to decode neural signals,” Andrew Jackson, a professor of neural interfaces at Newcastle University who wasn’t involved in the new study, told Gizmodo. “In this case, the neural signals are recorded from the brain surface of humans during epilepsy surgery. The participants listen to different words and sentences which are read by actors. Neural networks are trained to learn the relationship between brain signals and sounds, and as a result can then reconstruct intelligible reproductions of the words/sentences based only on the brain signals.”

Epilepsy patients were chosen for the study because they often have to undergo brain surgery. Mesgarani, with the help of Ashesh Dinesh Mehta, a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute and a co-author of the new study, recruited five volunteers for the experiment. The team used invasive electrocorticography (ECoG) to measure neural activity as the patients listened to continuous speech sounds. The patients listened, for example, to speakers reciting digits from zero to nine. Their brain patterns were then fed into the AI-enabled vocoder, resulting in the synthesized speech.
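The decoding pipeline — neural activity in, reconstructed audio out — can be sketched with a toy linear decoder fitted by least squares on simulated data. The numbers and the linear model here are illustrative stand-ins, not the study’s deep-network vocoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: each time frame pairs 64 "electrode" features with a
# 16-bin spectrogram frame; the neural features are a noisy linear
# function of the audio (hypothetical numbers, for illustration only).
n_frames, n_electrodes, n_bins = 500, 64, 16
true_map = rng.normal(size=(n_bins, n_electrodes))
spec = rng.uniform(size=(n_frames, n_bins))            # audio frames
neural = spec @ true_map + 0.01 * rng.normal(size=(n_frames, n_electrodes))

# Train a linear decoder (neural -> spectrogram) on the first 400 frames.
train, test = slice(0, 400), slice(400, 500)
W, *_ = np.linalg.lstsq(neural[train], spec[train], rcond=None)

# Reconstruct held-out frames and score them by per-bin correlation.
pred = neural[test] @ W
corr = np.mean([np.corrcoef(pred[:, i], spec[test, i])[0, 1]
                for i in range(n_bins)])
print(round(corr, 3))
```

A real system would map brain activity to vocoder parameters with a deep network and then synthesize audio; the least-squares fit only shows the supervised-learning framing of the problem.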

The results were very robotic-sounding, but fairly intelligible. In tests, listeners could correctly identify spoken digits around 75 percent of the time. They could even tell if the speaker was male or female. Not bad, and a result that even came as “a surprise” to Mesgarani, as he told Gizmodo in an email.

Recordings of the synthesized speech were released alongside the paper (the researchers tested various techniques, but the best result came from combining deep neural networks with the vocoder).

The use of a voice synthesizer in this context, as opposed to a system that can match and recite pre-recorded words, was important to Mesgarani. As he explained to Gizmodo, there’s more to speech than just putting the right words together.

“Since the goal of this work is to restore speech communication in those who have lost the ability to talk, we aimed to learn the direct mapping from the brain signal to the speech sound itself,” he told Gizmodo. “It is possible to also decode phonemes [distinct units of sound] or words, however, speech has a lot more information than just the content—such as the speaker [with their distinct voice and style], intonation, emotional tone, and so on. Therefore, our goal in this particular paper has been to recover the sound itself.”

Looking ahead, Mesgarani would like to synthesize more complicated words and sentences, and collect brain signals of people who are simply thinking or imagining the act of speaking.

Jackson was impressed with the new study, but he said it’s still not clear if this approach will apply directly to brain-computer interfaces.

“In the paper, the decoded signals reflect actual words heard by the brain. To be useful, a communication device would have to decode words that are imagined by the user,” Jackson told Gizmodo. “Although there is often some overlap between brain areas involved in hearing, speaking, and imagining speech, we don’t yet know exactly how similar the associated brain signals will be.”

William Tatum, a neurologist at the Mayo Clinic who was also not involved in the new study, said the research is important in that it’s the first to use artificial intelligence to reconstruct speech from the brain waves evoked by known acoustic stimuli. The significance is notable, “because it advances application of deep learning in the next generation of better designed speech-producing systems,” he told Gizmodo. That said, he felt the sample size of participants was too small, and that the use of data extracted directly from the human brain during surgery is not ideal.

Another limitation of the study is that the neural networks, in order for them to do more than just reproduce words from zero to nine, would have to be trained on a large number of brain signals from each participant. The system is patient-specific, as we all produce different brain patterns when we listen to speech.

“It will be interesting in future to see how well decoders trained for one person generalize to other individuals,” said Jackson. “It’s a bit like early speech recognition systems that needed to be individually trained by the user, as opposed to today’s technology, such as Siri and Alexa, that can make sense of anyone’s voice, again using neural networks. Only time will tell whether these technologies could one day do the same for brain signals.”

No doubt, there’s still lots of work to do. But the new paper is an encouraging step toward the achievement of implantable speech neuroprosthetics.

Levels of a protein called neurofilament light chain increase in the blood and spinal fluid of some Alzheimer’s patients 16 years before they develop symptoms, according to a study published January 21 in Nature Medicine.

The results suggest that neurofilament light chain (NfL), which is part of the cytoskeleton of neurons and has previously been tied to brain damage in mice, could serve as a biomarker to noninvasively track the progression of the disease. “This is something that would be easy to incorporate into a screening test in a neurology clinic,” coauthor Brian Gordon, an assistant professor of radiology at Washington University, says in a press release.

Gordon and his colleagues measured NfL in nearly 250 people carrying an Alzheimer’s-risk allele and more than 160 of their relatives who did not carry the variant. They found that those at risk of developing the disease had higher levels of the protein early on, and that NfL levels in both the blood and spinal fluid were on the rise well before the patients began to show signs of neurodegeneration, more than 16 years before disease onset.

Examining a subset of the patients more closely, the team saw that the rate of increase in NfL correlated with the shrinkage of a brain region called the precuneus, and patients whose NfL levels were rising rapidly performed worse on cognitive tests. “It is not necessarily the absolute levels which tell you your neurodegeneration is ongoing, it is the rate of change,” coauthor Mathias Jucker, a professor of cellular neurology at the German Center for Neurodegenerative Diseases in Tübingen, tells The Guardian.

The Alzheimer’s-linked mutation carried by patients examined in this study only affects about 1 percent of people who get the neurodegenerative disease, so the approach must be validated in a broader patient population, James Pickett, the head of research at the Alzheimer’s Society, tells The Guardian.

“We validated it in people with Alzheimer’s disease because we know their brains undergo lots of neurodegeneration, but this marker isn’t specific for Alzheimer’s,” Gordon says in the release. “I could see this being used in the clinic in a few years to identify signs of brain damage in individual patients.”

Meanwhile, a research team at Seoul National University in South Korea described another potential blood test for Alzheimer’s, focusing on the tau and amyloid proteins known to be associated with the disease. According to their study published today in Brain, blood levels of tau and amyloid correlate with how much tau has accumulated in the brain, as well as other markers of neurodegeneration such as hippocampal volume. “These results indicate that combination of plasma tau and amyloid-β1–42 levels might be potential biomarkers for predicting brain tau pathology and neurodegeneration,” the researchers write in their report.

Dr. Lewis L. Judd led the National Institute of Mental Health from 1988 to 1990. (National Library of Medicine)

By Emily Langer

Lewis L. Judd, a nationally known psychiatrist who helped turn the focus of his profession from psychoanalysis to neuroscience, an approach that sought to destigmatize mental illness by treating it like cancer, heart disease or any other medical problem, died Dec. 16 in La Jolla, Calif. He was 88.

The cause was cardiac arrest, said his wife, Pat Judd.

For decades, psychiatrists were schooled in the theories of Sigmund Freud, the founder of psychoanalysis, who posited that mental disturbances could be treated through dialogue with a therapist. Practitioners sought to interpret their patients’ dreams, giving little attention to the physical functioning of the brain or the chemicals that regulate it.

Dr. Judd agreed, he once told the Associated Press, that a physician must look at patients as a “whole individual,” with all their “worries, concerns, aspirations and needs,” and not resort to simply “popping a pill in their mouth.” But he found the long-prevailing psychoanalytic approach too limiting to explain or treat afflictions such as depression, bipolar disorder, severe anxiety and schizophrenia — “these serious mental disorders that have defied our understanding for centuries,” he once told the Chicago Tribune.

Instead, he advocated a biological approach, starting at the molecular level of the brain. As director of the National Institute of Mental Health in Bethesda, Md. — a post he held from 1988 to 1990, during a hiatus from his decades-long chairmanship of the psychiatry department at the University of California at San Diego — he helped launch a federal research initiative known as the “Decade of the Brain.”

“He was obsessed with educating the public and the profession . . . that mental illnesses were biological illnesses, that schizophrenia and depression were diseases of the brain,” Alan I. Leshner, Dr. Judd’s deputy at NIMH and later chief executive of the American Association for the Advancement of Science, said in an interview. “At the time, that was a heretical thought.”

Today, the biological component of many mental illnesses is widely accepted. When Dr. Judd led NIMH, it was not; he once cited a survey in which 71 percent of respondents said mental illness was a result of personal weakness and a third attributed it to sinful behavior. Poor parenting was another common alleged culprit.

Dr. Judd argued that the biological approach to psychiatry held the promise not only of deepening understanding of the body’s most complex organ but of improving lives: If mental disorders could be shown to be a result of brain chemistry or of physical dysfunction, patients might feel less stigmatized and therefore more willing to seek treatment.

“We look at the homeless and feel that if they only got their act together, they could lift themselves up,” Dr. Judd told the Los Angeles Times in 1988, discussing the prevalence of mental illness among homeless people. “We would never believe that about someone who has cancer or some other physical disease.”

As head of NIMH, which is an arm of the National Institutes of Health and the chief federal agency for research on mental illness, Dr. Judd oversaw more than $500 million in research money. He described the Decade of the Brain — a designation conferred by Congress and President George H.W. Bush — as a “research plan designed to bring a precise and detailed understanding of all the elements of brain function within our own lifetimes.”

During his tenure at NIMH, scientists for the first time successfully grew brain tissue in a laboratory. Dr. Judd was among those scientists who touted the potential of medical imaging, such as MRIs and PET scans, to reveal the inner workings of the brain and the potential causes of diseases such as schizophrenia.

Almost 30 years after the Decade of the Brain began, much about the organ remains elusive. Leshner credited the initiative with helping bring attention to the importance of brain research as well as inspiring the Brain Initiative, a public-private research effort advanced by the Obama administration.

“The brain is really the last frontier for scientists,” Dr. Judd said.

Lewis Lund Judd was born in Los Angeles on Feb. 10, 1930. His father was an obstetrician-gynecologist, and his mother was a homemaker. Dr. Judd’s brother, Howard Judd, also became an OB/GYN and a noted researcher in women’s health at the University of California at Los Angeles.

Dr. Judd received a bachelor’s degree in psychology from the University of Utah in 1954 and a medical degree from UCLA in 1958. In the early years of his career, he served in the Air Force as a base psychiatrist.

He joined UC-San Diego in 1970 and became department chairman in 1977, helping grow his faculty into one of the most respected in the country. He stepped down as chairman in 2013 and retired in 2015.

Dr. Judd’s first marriage, to Anne Nealy, ended in divorce. Survivors include his wife of 45 years, the former Patricia Hoffman, who is also a psychiatry professor at UC-San Diego, of La Jolla; three daughters from his first marriage, Allison Fee of Whidbey Island, Wash., Catherine Judd of Miami and Stephanie Judd of Chevy Chase, Md.; and four grandchildren.

Ever exploring the outer reaches of his field, Dr. Judd participated in a dialogue with the Dalai Lama in 1989 about life and the mind.

“Our model of mental health is mostly defined in terms of the absence of mental illness,” Dr. Judd told the New York Times, reflecting on the Tibetan Buddhist leader’s discussion of wisdom and compassion. “They may have more positive ones that might be worth our study.”

Patterns of gene expression unite the prairie vole Microtus ochrogaster with other monogamous species, including certain frogs, fish, and birds. YVA MOMATIUK AND JOHN EASTCOTT/MINDEN PICTURES

By Kelly Servick

In the animal world, monogamy has some clear perks. Living in pairs can give animals some stability and certainty in the constant struggle to reproduce and protect their young—which may be why it has evolved independently in various species. Now, an analysis of gene activity within the brains of frogs, rodents, fish, and birds suggests there may be a pattern common to monogamous creatures. Despite very different brain structures and evolutionary histories, these animals all seem to have developed monogamy by turning on and off some of the same sets of genes.

“It is quite surprising,” says Harvard University evolutionary biologist Hopi Hoekstra, who was not involved in the new work. “It suggests that there’s a sort of genomic strategy to becoming monogamous that evolution has repeatedly tapped into.”

Evolutionary biologists have proposed various benefits to so-called social monogamy, where mates pair up for at least a breeding season to care for their young and defend their territory. When potential mates are scarce or widely dispersed, for example, forming a single-pair bond can ensure they get to keep reproducing.

Neuroscientist Hans Hofmann and evolutionary biologist Rebecca Young at the University of Texas in Austin wanted to explore how the regulation of genes in the brain might have changed when a nonmonogamous species evolved to become monogamous. For example, the complex set of genes that underlie the ability to tolerate the presence of another member of one’s species presumably exists in nonmonogamous animals, but might be activated in different patterns to allow prolonged partnerships in monogamous ones.

“We wanted to be bold—and maybe a little bit crazy” in the new experiment, Hofmann says. Instead of doing a relatively straightforward genetic comparison between closely related species on either side of the monogamy divide, he and colleagues wanted to hunt down a gene activity signature associated with monogamy in males across a wide variety of species—frogs, mice, voles, birds, and fish. So in each of these groups, they selected two species, one monogamous and one nonmonogamous.

Rounding up the brains of those animals took an international team and years of effort. Hostile regional authorities and a complicated permitting system confronted the team in Romania as they tried to capture two types of a native songbird. Hofmann donned scuba gear and plunged into Africa’s Lake Tanganyika to chase finger-length cichlid fish into nets. Delicately debraining them while aboard a rocking boat, he says, was a struggle.

Back in the lab, the researchers grouped roughly comparable genes across all 10 species based on similarities in their sequences. For each of these cross-species gene groups, they measured activity based on how much the cells in the brain transcribed the DNA’s protein-making instructions into strands of RNA.

Among the monogamous animals, a pattern emerged. The researchers found certain sets of genes were more likely to be “turned up” or “turned down” in those creatures than in the nonmonogamous species. And they ruled out other reasons why these monogamous animals might have similar gene expression patterns, including similar environments or close evolutionary relationships.
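The kind of comparison described above can be sketched with toy expression values — the gene-group names and numbers below are hypothetical, not the study’s data:

```python
import numpy as np

# Toy expression levels (arbitrary units) for 5 gene groups in 5 taxa,
# one monogamous and one nonmonogamous species per taxon.
gene_groups = ["neural_dev", "cell_signal", "learning", "memory", "other"]
mono = np.array([
    [9.0, 8.0, 7.5, 8.2, 5.0],   # taxon 1 (e.g. rodents)
    [8.5, 7.8, 7.9, 8.0, 5.2],   # taxon 2 (e.g. birds)
    [9.2, 8.1, 7.7, 8.4, 4.9],
    [8.8, 7.9, 7.6, 8.1, 5.1],
    [9.1, 8.2, 7.8, 8.3, 5.0],
])
nonmono = np.array([
    [6.0, 6.1, 5.9, 6.2, 5.1],
    [6.2, 5.9, 6.0, 6.1, 4.9],
    [5.9, 6.0, 6.1, 6.0, 5.0],
    [6.1, 6.2, 5.8, 6.3, 5.2],
    [6.0, 5.8, 6.2, 6.1, 5.1],
])

# Log2 fold change per taxon, then keep gene groups that shift in the
# same direction across every taxon (sign-consistent "turned up" genes).
lfc = np.log2(mono / nonmono)
consistent_up = [g for g, col in zip(gene_groups, lfc.T)
                 if np.all(col > 0.25)]
print(consistent_up)
```

Only the column with near-equal expression fails the consistency check here; the study’s equivalent step additionally controlled for phylogeny and environment.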

Among the genes with increased activity in monogamous species were those involved in neural development, signaling between cells, learning, and memory, the researchers report online today in the Proceedings of the National Academy of Sciences. They speculate that genes that make the brain more adaptable—and better able to remember—might also help animals recognize their mates and find their presence rewarding.

It makes sense that genes involved in brain development and function would underlie a complex behavior like monogamy, says behavioral neuroscientist Claudio Mello of Oregon Health & Science University in Portland. But because the researchers didn’t dissect out specific brain regions and analyze their RNA production independently, they can’t describe the finely tuned patterns of gene expression in areas that are key to reproductive behavior. “It seems to me unlikely that by themselves these genes will be able to ‘explain’ this behavior,” he says.

“The fact that they got any common genes at all is interesting,” adds Lisa Stubbs, a developmental geneticist at the University of Illinois in Urbana. “It is a superb data set and an expert analysis,” she says, “[but] the authors have not actually uncovered many important biological insights into monogamy.”

The study did turn up a curious outlier. Some of the genes with decreased expression in most of the monogamous species showed increased expression in one of them—the poison dart frog Ranitomeya imitator. Young notes that in this species’s evolutionary history, fathers cared for the young before cooperative parenting evolved. As a result, these frogs may have had a different evolutionary starting point than other animals in the study, later tapping into different genes to become monogamous.

Hoekstra, who has studied the genetics of monogamy in mice, sees “a lot of exciting next steps.” There are likely mutations in other regions of DNA that regulate the expression of the genes this study identified. But it will take more work to show a causal relationship between any particular genetic sequence and monogamous behavior.

People also often opt for monogamy, albeit for a complicated set of social and cultural reasons. So, do we share the gene activity signature common to monogamous birds, fish, and frogs? “We don’t know that,” says Hofmann, but “we certainly would speculate that the kind of gene expression patterns … might [show up] in humans as well.”