Posts Tagged ‘neuroscience’

By Laura Counts

Can’t stop checking your phone, even when you’re not expecting any important messages? Blame your brain.

A new study by researchers at UC Berkeley’s Haas School of Business has found that information acts on the brain’s dopamine-producing reward system in the same way as money or food.

“To the brain, information is its own reward, above and beyond whether it’s useful,” says Assoc. Prof. Ming Hsu, a neuroeconomist whose research employs functional magnetic resonance imaging (fMRI), psychological theory, economic modeling, and machine learning. “And just as our brains like empty calories from junk food, they can overvalue information that makes us feel good but may not be useful—what some may call idle curiosity.”

The paper, “Common neural code for reward and information value,” was published this month by the Proceedings of the National Academy of Sciences. Authored by Hsu and graduate student Kenji Kobayashi, now a post-doctoral researcher at the University of Pennsylvania, it demonstrates that the brain converts information into the same common scale as it does for money. It also lays the groundwork for unraveling the neuroscience behind how we consume information—and perhaps even digital addiction.

“We were able to demonstrate for the first time the existence of a common neural code for information and money, which opens the door to a number of exciting questions about how people consume, and sometimes over-consume, information,” Hsu says.

Rooted in the study of curiosity

The paper is rooted in the study of curiosity and what it looks like inside the brain. While economists have tended to view curiosity as a means to an end, valuable when it can help us get information to gain an edge in making decisions, psychologists have long seen curiosity as an innate motivation that can spur actions by itself. For example, sports fans might check the odds on a game even if they have no intention of ever betting.

Sometimes, we want to know something, just to know.

“Our study tried to answer two questions. First, can we reconcile the economic and psychological views of curiosity, or why do people seek information? Second, what does curiosity look like inside the brain?” Hsu says.

The neuroscience of curiosity

To understand more about the neuroscience of curiosity, the researchers scanned the brains of people while they played a gambling game. Each participant was presented with a series of lotteries and needed to decide how much they were willing to pay to find out more about the odds of winning. In some lotteries, the information was valuable—for example, when what seemed like a longshot was revealed to be a sure thing. In other cases, the information wasn’t worth much, such as when little was at stake.

For the most part, the study subjects made rational choices based on the economic value of the information (how much money it could help them win). But that didn’t explain all their choices: people tended to over-value information in general, and particularly in higher-valued lotteries. It appeared that the higher stakes increased people’s curiosity about the information, even when it had no effect on their decision to play.

The researchers determined that this behavior could only be explained by a model that captured both economic and psychological motives for seeking information. People acquired information based not only on its actual benefit, but also on the anticipation of its benefit, whether or not it had use.

Hsu says that’s akin to wanting to know whether we received a great job offer, even if we have no intention of taking it. “Anticipation serves to amplify how good or bad something seems, and the anticipation of a more pleasurable reward makes the information appear even more valuable,” he says.
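
The model’s equations aren’t spelled out in this article, but the basic idea, willingness to pay for information as the sum of an instrumental (economic) term and an anticipation-driven (psychological) term, can be sketched in a few lines of Python. The functional forms, the curiosity weight kappa, and the example numbers below are illustrative assumptions, not the authors’ published model.

```python
def instrumental_value(prize, win_prob, ticket_price):
    """Economic value of learning the true odds before deciding to play.
    Without information, you play only if the gamble looks worthwhile on average;
    with (perfect) information, you pay the ticket price only on winning draws."""
    value_without_info = max(win_prob * prize - ticket_price, 0.0)
    value_with_info = win_prob * (prize - ticket_price)
    return max(value_with_info - value_without_info, 0.0)


def anticipatory_value(prize, win_prob, kappa=0.05):
    """Non-instrumental 'curiosity' term: anticipation scales with the stakes,
    so higher-value lotteries feel more worth knowing about.
    kappa is a hypothetical curiosity weight, not a parameter from the paper."""
    return kappa * win_prob * prize


def willingness_to_pay(prize, win_prob, ticket_price):
    return (instrumental_value(prize, win_prob, ticket_price)
            + anticipatory_value(prize, win_prob))


# A free lottery: the information cannot change the decision to play,
# yet the model still predicts a positive bid, driven by anticipation alone.
print(willingness_to_pay(prize=100.0, win_prob=0.7, ticket_price=0.0))   # 3.5
# A costly longshot: here the information also has real economic value.
print(willingness_to_pay(prize=100.0, win_prob=0.2, ticket_price=30.0))  # 15.0
```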

Common neural code for information and money

How does the brain respond to information? Analyzing the fMRI scans, the researchers found that information about the games’ odds activated the regions of the brain known to be involved in valuation (the striatum and the ventromedial prefrontal cortex, or VMPFC), the same dopamine-producing reward areas activated by food, money, and many drugs. This was the case whether or not the information was useful and changed the person’s original decision.

Next, using a machine learning technique called support vector regression, the researchers determined that the brain uses the same neural code for information about the lottery odds as it does for money. The technique allowed them to characterize the neural code for how the brain responds to varying amounts of money, and then ask whether that same code can be used to predict how much a person will pay for information. It can.

In other words, just as we can convert such disparate things as a painting, a steak dinner, and a vacation into a dollar value, the brain converts curiosity about information into the same common code it uses for concrete rewards like money, Hsu says.

“We can look into the brain and tell how much someone wants a piece of information, and then translate that brain activity into monetary amounts,” he says.
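
As a rough illustration of that cross-decoding logic, the sketch below trains a support vector regression model on synthetic “voxel patterns” labeled with monetary amounts, then applies the same trained model to patterns from information trials. The data, shapes, and preprocessing are placeholders standing in for the fMRI signals from the striatum and VMPFC; this is not the authors’ analysis pipeline.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Synthetic "voxel patterns" whose amplitude tracks a latent value signal.
money_value = rng.uniform(0, 10, n_trials)      # dollar amounts on money trials
tuning = rng.normal(size=n_voxels)              # hypothetical voxel tuning weights
money_patterns = (np.outer(money_value, tuning)
                  + rng.normal(scale=2.0, size=(n_trials, n_voxels)))

# Step 1: learn the neural code for monetary value.
decoder = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
decoder.fit(money_patterns, money_value)

# Step 2: apply the *same* decoder to patterns from information trials.
# If information shares the neural code for money, the decoder's output should
# track how much participants were willing to pay for the information.
info_wtp = rng.uniform(0, 5, n_trials)          # willingness to pay for information
info_patterns = (np.outer(info_wtp, tuning)
                 + rng.normal(scale=2.0, size=(n_trials, n_voxels)))
predicted_wtp = decoder.predict(info_patterns)

print("correlation:", np.corrcoef(predicted_wtp, info_wtp)[0, 1])
```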

Raising questions about digital addiction

While the research does not directly address overconsumption of digital information, the fact that information engages the brain’s reward system is a necessary condition for the addiction cycle, he says. And it explains why we find those alerts saying we’ve been tagged in a photo so irresistible.

“The way our brains respond to the anticipation of a pleasurable reward is an important reason why people are susceptible to clickbait,” he says. “Just like junk food, this might be a situation where previously adaptive mechanisms get exploited now that we have unprecedented access to novel curiosities.”

How information is like snacks, money, and drugs—to your brain


By Abby Olena

Animals learn by imitating behaviors, such as when a baby mimics her mother’s speaking voice or a young male zebra finch copies the mating song of an older male tutor, often his father. In a study published today in Science, researchers identified the neural circuit that a finch uses to learn the duration of the syllables of a song and then manipulated this pathway with optogenetics to create a false memory that juvenile birds used to develop their courtship song.

“In order to learn from observation, you have to create a memory of someone doing something right and then use this sensory information to guide your motor system to learn to perform the behavior. We really don’t know where and how these memories are formed,” says Dina Lipkind, a biologist at York College who did not participate in the study. The authors “addressed the first step of the process, which is how you form the memory that will later guide [you] towards performing this behavior.”

“Our original goals were actually much more modest,” says Todd Roberts, a neuroscientist at UT Southwestern Medical Center. Initially, Wenchan Zhao, a graduate student in his lab, set out to test whether or not disrupting neural activity while a young finch interacted with a tutor could block the bird’s ability to form a memory of the interchange. She used light to manipulate cells genetically engineered to be sensitive to illumination in a brain circuit previously implicated in song learning in juvenile birds.

Zhao turned the cells on by shining a light into the birds’ brains while they spent time with their tutors and, as a control experiment, when the birds were alone. Then she noticed that the songs that the so-called control birds developed were unusual—different from the songs of birds that had never met a tutor but also unlike the songs of those that interacted with an older bird.

Once Zhao and her colleagues picked up on the unusual songs, they decided to “test whether or not the activity in this circuit would be sufficient to implant memories,” says Roberts.

The researchers stimulated birds’ neural circuits with sessions of 50- or 300-millisecond optogenetic pulses over five days during the time at which they would typically be interacting with a tutor but without an adult male bird present. When these finches grew up, they sang adult courtship songs that corresponded to the duration of light they’d received. Those that got the short pulses sang songs with sounds that lasted about 50 milliseconds, while the ones that received the extended pulses held their notes longer. Some song features—including pitch and how noisy harmonic syllables were in the song—didn’t seem to be affected by optogenetic manipulation. Another measure, entropy, which approximates the amount of information carried in the communication, was not distinguishable in the songs of normally tutored birds and those that received 50-millisecond optogenetic pulses, but was higher in the songs of birds that had received tutoring than in the songs of either isolated birds or those that received the 300-millisecond light pulses.

While the manipulation of the circuit affected the duration of the sounds in the finches’ songs, other elements of singing behavior—including the timeline of vocal development, how frequently the birds practiced, and in what social contexts they eventually used the songs—were similar to juveniles who’d learned from an adult bird.

The researchers then determined that when the birds received light stimulation at the same time as they interacted with a singing tutor, their adult songs were more like those of birds that had only received light stimulation, indicating that optogenetic stimulation can supplant tutoring.

When the team lesioned the circuit before young birds met their tutors, they didn’t make attempts to imitate the adult courtship songs. But if the juveniles were given a chance to interact with a tutor before the circuit was damaged, they had no problem learning the song. This finding points to an essential role for the pathway in forming the initial memory of the timing of vocalizations, but not in storing it long-term so that it can be referenced to guide song formation.

“What we were able to implant was information about the duration of syllables that the birds want to attempt to learn how to sing,” Roberts tells The Scientist. But there are many more characteristics birds have to attend to when they’re learning a song, including pitch and how to put the syllables in the correct order, he says. The next steps are to identify the circuits that are carrying other types of information and to investigate the mechanisms for encoding these memories and where in the brain they’re stored.

Sarah London, a neuroscientist at the University of Chicago who did not participate in the study, agrees that the strategies used here could serve as a template to tease apart where other characteristics of learned song come from. But more generally, this work in songbirds connects to the bigger picture of our understanding of learning and memory, she says.

Song learning “is a complicated behavior that requires multiple brain areas coordinating their functions over long stretches of development. The brain is changing anyway, and then on top of that the behavior’s changing in the brain,” she explains. Studying the development of songs in zebra finches can give insight into “how maturing neural circuits are influenced by the environment,” both the brain’s internal environment and the external, social environment, she adds. “This is a really unique opportunity, not just for song, not just for language, but for learning in a little larger context—of kids trying to understand and adopt behavioral patterns appropriate to their time and place.”

W. Zhao et al., “Inception of memories that guide vocal learning in the songbird,” Science, doi:10.1126/science.aaw4226, 2019.

https://www.the-scientist.com/news-opinion/researchers-implant-memories-in-zebra-finch-brains-66527


Brain tissue from deceased patients with Alzheimer’s has more tau protein buildup (brown spots) and fewer neurons (red spots) as compared to healthy brain tissue.

By Yasemin Saplakoglu

Alzheimer’s disease might be attacking the brain cells responsible for keeping people awake, resulting in daytime napping, according to a new study.

Excessive daytime napping might thus be considered an early symptom of Alzheimer’s disease, according to a statement from the University of California, San Francisco (UCSF).

Some previous studies suggested that such sleepiness in patients with Alzheimer’s results directly from poor nighttime sleep due to the disease, while others have suggested that sleep problems might cause the disease to progress. The new study suggests a more direct biological pathway between Alzheimer’s disease and daytime sleepiness.

In the current study, researchers examined the brains of 13 people who’d had Alzheimer’s and died, as well as the brains of seven people who had not had the disease. They focused on three parts of the brain that work together in a network to keep us awake during the day: the locus coeruleus, the lateral hypothalamic area, and the tuberomammillary nucleus.

The researchers compared the number of neurons, or brain cells, in these regions in the healthy and diseased brains. They also measured the level of a telltale sign of Alzheimer’s: tau proteins. These proteins build up in the brains of patients with Alzheimer’s and are thought to slowly destroy brain cells and the connections between them.

The brains from patients who had Alzheimer’s in this study had significant levels of tau tangles in these three brain regions, compared to the brains from people without the disease. What’s more, in these three brain regions, people with Alzheimer’s had lost up to 75% of their neurons.

“It’s remarkable because it’s not just a single brain nucleus that’s degenerating, but the whole wakefulness-promoting network,” lead author Jun Oh, a research associate at UCSF, said in the statement. “This means that the brain has no way to compensate, because all of these functionally related cell types are being destroyed at the same time.”

The researchers also compared the brains from people with Alzheimer’s with tissue samples from seven people who had two other forms of dementia caused by the accumulation of tau: progressive supranuclear palsy and corticobasal degeneration. Results showed that despite the buildup of tau, these brains did not show damage to the neurons that promote wakefulness.

“It seems that the wakefulness-promoting network is particularly vulnerable in Alzheimer’s disease,” Oh said in the statement. “Understanding why this is the case is something we need to follow up in future research.”

Though amyloid proteins, and the plaques that they form, have been the major target in several clinical trials of potential Alzheimer’s treatments, increasing evidence suggests that tau proteins play a more direct role in promoting symptoms of the disease, according to the statement.

The new findings suggest that “we need to be much more focused on understanding the early stages of tau accumulation in these brain areas in our ongoing search for Alzheimer’s treatments,” senior author Dr. Lea Grinberg, an associate professor of neurology and pathology at the UCSF Memory and Aging Center, said in the statement.

The findings were published Monday (Aug. 12) in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.

https://www.livescience.com/alzheimers-attacks-wakefulness-neurons.html

by Bob Yirka

A team of researchers from the University of California and Stanford University has found that the tendency to see people from different racial groups as interchangeable has a neuronal basis. In their paper published in Proceedings of the National Academy of Sciences, the group describes studies they conducted with volunteers and what they found.

One often-heard phrase connected with racial profiling is “they all look the same to me,” a phrase usually perceived as racist. It implies that people of one race have difficulty discerning the facial characteristics of people of another race. In this new effort, the researchers conducted experiments to find out if this is valid—at least among one small group of young, white men.

In the first experiment, young, white male volunteers looked at photographs of human faces, some depicting black people, others white, while undergoing an fMRI scan. Afterward, the researchers found that the part of the brain involved in facial recognition activated more for white faces than it did for black faces.

In the second experiment, the same volunteers looked at photographs of faces that had been doctored to make the subjects appear more alike, regardless of skin color. The researchers report that the volunteers’ brains responded when dissimilarities were spotted, regardless of skin color, though the response was more pronounced when the photo was of a white face.

In a third series of experiments, the volunteers rated how different they found faces in a series of photographs or whether they had seen a given face before. The researchers report that the volunteers had a tendency to rate the black faces as more similar to one another than the white faces. And they found it easier to tell if they had seen a particular white face before.

The researchers suggest that the results of their experiments indicate a neural basis that makes it more difficult for people to see differences between individuals of other races. They note that they did account for social contexts such as whether the volunteers had friends and/or associates of other races. They suggest that more work is required to determine if such neuronal biases can be changed based on social behavior.

Brent L. Hughes et al. Neural adaptation to faces reveals racial outgroup homogeneity effects in early perception, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1822084116

https://medicalxpress.com/news/2019-07-neuronal-alike.html


Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right) which then could be synthesized to reconstruct the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery

A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract—an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.

The new system being developed in the laboratory of Edward Chang, MD—described April 24, 2019 in Nature—demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Brief animation illustrates how patterns of brain activity from the brain’s speech centers in somatosensory cortex (top left) were first decoded into a computer simulation of a research participant’s vocal tract movements (top right), which were then translated into a synthesized version of the participant’s voice (bottom). Credit: Chang lab / UCSF Dept. of Neurosurgery. Simulated vocal tract animation credit: Speech Graphics

Virtual Vocal Tract Improves Naturalistic Speech Synthesis

The research was led by Gopala Anumanchipalli, Ph.D., a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.

From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”

In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center—patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery—to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
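
To make that two-stage design concrete, here is a minimal PyTorch sketch: a decoder network that maps brain-activity features to vocal tract movements, chained to a synthesizer network that maps those movements to acoustic features. The layer types, dimensions, and feature definitions are assumptions chosen for illustration; the study’s actual architectures and training procedure are described in the Nature paper.

```python
import torch
import torch.nn as nn

class NeuralToArticulationDecoder(nn.Module):
    """Stage 1: brain-activity features (e.g., per-electrode signals over time)
    are decoded into estimated vocal tract movements."""
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, brain_activity):          # (batch, time, n_electrodes)
        h, _ = self.rnn(brain_activity)
        return self.out(h)                      # (batch, time, n_articulators)

class ArticulationToSpeechSynthesizer(nn.Module):
    """Stage 2: vocal tract movements are converted into acoustic features
    that a vocoder could render as audio."""
    def __init__(self, n_articulators=33, n_acoustic=32, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

# Chaining the two stages: brain activity in, synthetic-speech features out.
decoder = NeuralToArticulationDecoder()
synthesizer = ArticulationToSpeechSynthesizer()
brain_activity = torch.randn(1, 500, 256)       # one utterance, 500 time steps
acoustic_features = synthesizer(decoder(brain_activity))
print(acoustic_features.shape)                  # torch.Size([1, 500, 32])
```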

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.

“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance

The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.


Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF

Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.

The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”

https://medicalxpress.com/news/2019-04-synthetic-speech-brain.html

By Emily Underwood

One of the thorniest debates in neuroscience is whether people can make new neurons after their brains stop developing in adolescence—a process known as neurogenesis. Now, a new study finds that even people long past middle age can make fresh brain cells, and that past studies that failed to spot these newcomers may have used flawed methods.

The work “provides clear, definitive evidence that neurogenesis persists throughout life,” says Paul Frankland, a neuroscientist at the Hospital for Sick Children in Toronto, Canada. “For me, this puts the issue to bed.”

Researchers have long hoped that neurogenesis could help treat brain disorders like depression and Alzheimer’s disease. But last year, a study in Nature reported that the process peters out by adolescence, contradicting previous work that had found newborn neurons in older people using a variety of methods. The finding was deflating for neuroscientists like Frankland, who studies adult neurogenesis in the rodent hippocampus, a brain region involved in learning and memory. It “raised questions about the relevance of our work,” he says.

But there may have been problems with some of this earlier research. Last year’s Nature study, for example, looked for new neurons in 59 samples of human brain tissue, some of which came from brain banks where samples are often immersed in the fixative paraformaldehyde for months or even years. Over time, paraformaldehyde forms bonds between the components that make up neurons, turning the cells into a gel, says neuroscientist María Llorens-Martín of the Severo Ochoa Molecular Biology Center in Madrid. This makes it difficult for fluorescent antibodies to bind to the doublecortin (DCX) protein, which many scientists consider the “gold standard” marker of immature neurons, she says.

The number of cells that test positive for DCX in brain tissue declines sharply after just 48 hours in a paraformaldehyde bath, Llorens-Martín and her colleagues report today in Nature Medicine. After 6 months, detecting new neurons “is almost impossible,” she says.

When the researchers used a shorter fixation time—24 hours—to preserve donated brain tissue from 13 deceased adults, ranging in age from 43 to 87, they found tens of thousands of DCX-positive cells in the dentate gyrus, a curled sliver of tissue within the hippocampus that encodes memories of events. Under a microscope, the neurons had hallmarks of youth, Llorens-Martín says: smooth and plump, with simple, undeveloped branches.

In the sample from the youngest donor, who died at 43, the team found roughly 42,000 immature neurons per square millimeter of brain tissue. From the youngest to oldest donors, the number of apparent new neurons decreased by 30%—a trend that fits with previous studies in humans showing that adult neurogenesis declines with age. The team also showed that people with Alzheimer’s disease had 30% fewer immature neurons than healthy donors of the same age, and the more advanced the dementia, the fewer such cells.

Some scientists remain skeptical, including the authors of last year’s Nature paper. “While this study contains valuable data, we did not find the evidence for ongoing production of new neurons in the adult human hippocampus convincing,” says Shawn Sorrells, a neuroscientist at the University of Pittsburgh in Pennsylvania who co-authored the 2018 paper. One critique hinges on the DCX stain, which Sorrells says isn’t an adequate measure of young neurons because the DCX protein is also expressed in mature cells. That suggests the “new” neurons the team found were actually present since childhood, he says. The new study also found no evidence of pools of stem cells that could supply fresh neurons, he notes. What’s more, Sorrells says two of the brain samples he and his colleagues looked at were only fixed for 5 hours, yet they still couldn’t find evidence of young neurons in the hippocampus.

Llorens-Martín says her team used multiple other proteins associated with neuronal development to confirm that the DCX-positive cells were actually young, and were “very strict” in their criteria for identifying young neurons.

Heather Cameron, a neuroscientist at the National Institute of Mental Health in Bethesda, Maryland, remains persuaded by the new work. Based on the “beauty of the data” in the new study, “I think we can all move forward pretty confidently in the knowledge that what we see in animals will be applicable in humans,” she says. “Will this settle the debate? I’m not sure. Should it? Yes.”

https://www.sciencemag.org/news/2019/03/new-neurons-life-old-people-can-still-make-fresh-brain-cells-study-finds

by David Nield

How exactly do our brains distinguish between the five taste groups: sweet, sour, salty, bitter, and umami? We now have a much better idea, thanks to research that has pinned down where in the brain this taste processing happens.

Step forward: the insular cortex. Already thought to be responsible for everything from motor control to social empathy, we can now add flavour identification to its list of jobs.

It’s an area of the brain scientists have previously suspected could be responsible for sorting tastes, and which has been linked to taste in rodents, but this new study is much more precise in figuring out the role it plays in decoding what our tongues are telling us.

“We have known that tastes activate the human brain for some time, but not where primary taste types such as sweet, sour, salty, and bitter are distinguished,” says one of the team, Adam Anderson from Cornell University in New York.

“By using some new techniques that analyse fine-grained activity patterns, we found a specific portion of the insular cortex – an older cortex in the brain hidden behind the neocortex – represents distinct tastes.”

Anderson and his team used detailed fMRI scans of 20 adults as well as a new statistical model to dig deeper than previous studies into the link between the insular cortex and taste. This helped separate the taste response from other related responses – like the disgust we might feel when eating something sour or bitter.

Part of the problem in pinning down the taste-testing parts of the brain is that multiple regions of neurons get busy whenever we’re eating something. However, this study helps to cut through some of that noise.

In particular, it seems that different tastes don’t necessarily affect different parts of the insular cortex, but rather prompt different patterns of activity. Those patterns help the brain determine what it’s tasting.

For example, one particular section of the insular cortex was found to light up – in terms of neural activity – whenever something sweet was tasted. It’s a literal sweet spot, in other words, but it also showed that different brains have different wiring.

“While we identified a potential sweet spot, its precise location differed across people and this same spot responded to other tastes, but with distinct patterns of activity,” says Anderson.

“To know what people are tasting, we have to take into account not only where in the insula is stimulated, but also how.”
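
One generic way to test the “patterns, not just places” idea is multivoxel pattern analysis: train a classifier on the fine-grained pattern of activity across insular voxels and ask whether it can tell tastes apart even when their locations overlap. The sketch below uses synthetic data and a linear classifier as a stand-in; it is not the statistical model the Cornell team developed.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
tastes = ["sweet", "sour", "salty", "bitter", "umami"]
n_trials_per_taste, n_voxels = 40, 60

# Each taste evokes its own spatial pattern over the *same* set of voxels.
taste_patterns = {t: rng.normal(size=n_voxels) for t in tastes}
X = np.vstack([taste_patterns[t]
               + rng.normal(scale=1.5, size=(n_trials_per_taste, n_voxels))
               for t in tastes])
y = np.repeat(tastes, n_trials_per_taste)

# If distinct tastes evoke distinct activity patterns, cross-validated decoding
# accuracy should beat the 20% chance level for five tastes.
scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5)
print("decoding accuracy:", scores.mean())
```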

The work follows on from previous research showing just how big a role the brain plays in perceiving taste. It used to be thought that receptors on the tongue did most of the taste testing, but now it seems the brain is largely in charge of the process.

That prior study showed how switching certain brain cells on and off in mice was enough to prevent them from distinguishing between sweet and bitter. The conclusion is that while the tongue does identify certain chemicals, it’s the brain that interprets them.

The new research adds even more insight into what’s going on in the brain in humans when we need to work out what we’re tasting – and shows just how important a job the insular cortex is doing.

“The insular cortex represents experiences from inside our bodies,” says Anderson. “So taste is a bit like perceiving our own bodies, which is very different from other external senses such as sight, touch, hearing or smell.”

The research has been published in Nature Communications.

https://www.sciencealert.com/now-we-know-the-part-of-the-brain-that-tells-us-what-we-re-tasting