Researchers Make Mice Smell Odors that Aren’t Really There

by Ruth Williams

By activating a particular pattern of nerve endings in the brain’s olfactory bulb, researchers can make mice smell a non-existent odor, according to a paper published June 18 in Science. Manipulating these activity patterns reveals which aspects are important for odor recognition.

“This study is a beautiful example of the use of synthetic stimuli . . . to probe the workings of the brain in a way that is just not possible currently with natural stimuli,” neuroscientist Venkatesh Murthy of Harvard University, who was not involved with the study, writes in an email to The Scientist.

A fundamental goal of neuroscience is to understand how a stimulus—a sight, sound, taste, touch, or smell—is interpreted, or perceived, by the brain. While a large number of studies have shown the various ways in which such stimuli activate brain cells, very little is understood about what these activations actually contribute to perception.

In the case of smell, for example, it is well-known that odorous molecules traveling up the nose bind to receptors on cells that then transmit signals along their axons to bundles of nerve endings—glomeruli—in a brain area called the olfactory bulb. A single molecule can cause a whole array of different glomeruli to fire in quick succession, explains neurobiologist Kevin Franks of Duke University, who also did not participate in the research. And because these activity patterns “have many different spatial and temporal features,” he says, “it is difficult to know which of those features is actually most relevant [for perception].”

To find out, neuroscientist Dmitry Rinberg of New York University and colleagues bypassed the nose entirely. “The clever part of their approach is to gain direct control of these neurons with light, rather than by sending odors up the animal’s nose,” Caltech neurobiologist Markus Meister, who was not involved in the work, writes in an email to The Scientist.

The team used mice genetically engineered to produce light-sensitive ion channels in their olfactory bulb cells. Then, through a surgically implanted window in the skull, they used precisely focused lasers to activate a specific pattern of glomeruli in the region of the bulb closest to the top of the animal’s head. The mice were trained to associate this activation pattern with a reward—water, delivered via a lick-tube. The same mice did not associate random activation patterns with the reward, suggesting they had learned to distinguish the reward-associated pattern, or synthetic smell, from others.

Although the activation patterns were not based on any particular odors, they were designed to be as life-like as possible. For example, the glomeruli were activated one after the other within the space of 300 milliseconds from the time at which the mouse sniffed—detected by a sensor. “But, I’ll be honest with you, I have no idea if it stinks [or] it is pleasant” for the mouse, Rinberg says.

Once the mice were thoroughly trained, the team made methodical alterations to the activity pattern—changing the order in which the glomeruli were activated, switching out individual activation sites for alternatives, and changing the timing of the activation relative to the sniff. They tried “hundreds of different combinations,” Rinberg says. He likened it to altering the notes in a tune. “If you change the notes, or the timing of the notes, does the song remain the same?” he asks. That is, would the mice still be able to recognize the induced scent?

From these experiments, a general picture emerged: alterations to the earliest-activated regions caused the most significant impairment to the animal’s ability to recognize the scent. “What they showed is that, even though an odor will [induce] a very complex pattern of activity, really it is just the earliest inputs, the first few glomeruli that are activated that are really important for perception,” says Franks.

Rinberg says he thinks these early glomeruli most likely represent the receptors to which an odorant binds most strongly.

With these insights into the importance of glomeruli firing times for scent recognition, “the obvious next question,” says Franks, is to go deeper into the brain to where the olfactory bulb neurons project and ask, “How does the cortex make sense of this?”

E. Chong et al., “Manipulating synthetic optogenetic odors reveals the coding logic of olfactory perception,” Science, 368:eaba2357, 2020.

https://www.the-scientist.com/news-opinion/researchers-make-mice-smell-odors-that-arent-really-there-67643

Light Enables Long-Term Memory Maintenance in Fruit Flies

by Diana Kwon

S. Inami et al., “Environmental light is required for maintenance of long-term memory in Drosophila,” J Neurosci, 40:1427–39, 2020.

As Earth rotates on its axis, the organisms that inhabit its surface are exposed to daily cycles of darkness and light. In animals, light has a powerful influence on sleep, hormone release, and metabolism. Work by Takaomi Sakai, a neuroscientist at Tokyo Metropolitan University, and his team suggests that light may also be crucial for forming and maintaining long-term memories.

The puzzle of how memories persist in the brain has long been of interest to Sakai. Researchers had previously demonstrated, in both rodents and flies, that the production of new proteins is necessary for maintaining long-term memories, but Sakai wondered how this process persisted over several days given cells’ molecular turnover. Maybe, he thought, an environmental stimulus, such as the light-dark cycles, periodically triggered protein production to enable memory formation and storage.

Sakai and his colleagues conducted a series of experiments to see how constant darkness would affect the ability of Drosophila melanogaster to form long-term memories. Male flies exposed to light after interacting with an unreceptive female showed reduced courtship behaviors toward new female mates several days later, indicating they had remembered the initial rejection. Flies kept in constant darkness, however, continued their attempts to copulate.

The team then probed the molecular mechanisms of these behaviors and discovered a pathway by which light activates cAMP response element-binding protein (CREB)—a transcription factor previously identified as important for forming long-term memories—within certain neurons found in the mushroom bodies, the memory center in fly brains.

“The fact that light is essential for long-term memory maintenance is fundamentally interesting,” says Seth Tomchick, a neuroscientist at the Scripps Research Institute in Florida who wasn’t involved in the study. However, he adds, “more work will be necessary” to fully characterize the molecular mechanisms underlying these effects.

https://www.the-scientist.com/the-literature/lasting-memories-67441

How a New AI Translated Brain Activity to Speech With 97 Percent Accuracy

By Edd Gent

The idea of a machine that can decode your thoughts might sound creepy, but for thousands of people who have lost the ability to speak due to disease or disability it could be game-changing. Even for the able-bodied, being able to type out an email by just thinking or sending commands to your digital assistant telepathically could be hugely useful.

That vision may have come a step closer after researchers at the University of California, San Francisco demonstrated that they could translate brain signals into complete sentences with error rates as low as three percent, which is below the threshold for professional speech transcription.

While researchers have been able to decode elements of speech from brain signals for around a decade, most solutions so far have fallen well short of consistently translating intelligible sentences. Last year, researchers used a novel approach that achieved some of the best results to date by using brain signals to animate a simulated vocal tract, but only 70 percent of the words were intelligible.

The key to the improved performance achieved by the authors of the new paper in Nature Neuroscience was their realization that there were strong parallels between translating brain signals to text and machine translation between languages using neural networks, which is now highly accurate for many languages.

While most efforts to decode brain signals have focused on identifying neural activity that corresponds to particular phonemes—the distinct chunks of sound that make up words—the researchers decided to mimic machine translation, where the entire sentence is translated at once. This has proven a powerful approach; as certain words are always more likely to appear close together, the system can rely on context to fill in any gaps.

The team used the same encoder-decoder approach commonly used for machine translation, in which one neural network analyzes the input signal—normally text, but in this case brain signals—to create a representation of the data, and then a second neural network translates this into the target language.
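The study’s actual networks are, of course, far more sophisticated, but the two-stage idea can be sketched in miniature. The toy Python below (all vocabulary, embeddings, and sentences are invented for illustration, not taken from the paper) treats the encoder as averaging token vectors into one fixed-length representation and the decoder as retrieval of the closest known sentence:

```python
import random

# Toy sketch of the encoder-decoder idea, not the study's model:
# fixed random vectors stand in for learned embeddings.
random.seed(0)
VOCAB = ["the", "cat", "sat", "dog", "ran", "fast"]
EMB = {w: [random.gauss(0, 1) for _ in range(8)] for w in VOCAB}

def encode(tokens):
    # "Encoder": collapse a token sequence into one fixed-length vector.
    dims = len(EMB[tokens[0]])
    return [sum(EMB[t][d] for t in tokens) / len(tokens) for d in range(dims)]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def decode(vector, candidates):
    # "Decoder" as retrieval: pick the whole sentence whose encoding
    # lies closest to the intermediate representation.
    return min(candidates, key=lambda c: distance(vector, encode(c)))

sentences = [["the", "cat", "sat"], ["the", "dog", "ran", "fast"]]
z = encode(["the", "cat", "sat"])   # intermediate representation
print(decode(z, sentences))         # → ['the', 'cat', 'sat']
```

Scoring whole sentences rather than isolated phonemes is what lets such a system use context to fill in gaps, as described above.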

They trained their system using brain activity recorded from four women who had electrodes implanted in their brains to monitor seizures as they read aloud a set of 50 sentences comprising 250 unique words. This allowed the first network to work out which neural activity correlated with which elements of speech.

In testing, the system relied only on the neural signals and achieved error rates below eight percent for two of the four subjects, matching the accuracy of professional transcribers.

Inevitably, there are caveats. First, the system was only able to decode 30 to 50 specific sentences built from a limited vocabulary of 250 words. It also requires people to have electrodes implanted in their brains, which is currently permitted only for a limited number of highly specific medical reasons. However, there are a number of signs that this direction holds considerable promise.

One concern was that because the system was being tested on sentences that were included in its training data, it might simply be learning to match specific sentences to specific neural signatures. That would suggest it wasn’t really learning the constituent parts of speech, which would make it harder to generalize to unfamiliar sentences.

But when the researchers added another set of recordings to the training data that were not included in testing, it reduced error rates significantly, suggesting that the system is learning sub-sentence information like words.

They also found that pre-training the system on data from the volunteer who achieved the highest accuracy, before training on data from one of the worst performers, significantly reduced error rates. This suggests that in practical applications, much of the training could be done before the system is given to the end user, who would only have to fine-tune it to the quirks of their own brain signals.

The vocabulary of such a system is likely to improve considerably as people build upon this approach—but even a limited palette of 250 words could be incredibly useful to a paraplegic, and could likely be tailored to a specific set of commands for telepathic control of other devices.

Now the ball is back in the court of the scrum of companies racing to develop the first practical neural interfaces.


New evidence that dogs can recognize vowel changes in words

by Gege Li

Dogs pay much closer attention to what humans say than we realised, even to words that are probably meaningless to them.

Holly Root-Gutteridge at the University of Sussex, UK, and her colleagues played audio recordings of people saying six words to 70 pet dogs of various breeds. The dogs had never heard these voices before, and the words differed only in their vowels, such as “had”, “hid” and “who’d”.

Each recording was altered so the voices were at the same pitch, ensuring that the only cue the dogs had was the difference between vowels, rather than how people said the words.

After hearing the recordings just once, 48 of the dogs reacted when either the same speaker said a new word or the same word was said by a different speaker. The remainder either didn’t visibly respond or got distracted.

The team based its assessment of the dogs’ reactions on how long they paid attention when the voice or word changed – if the dogs moved their ears or shifted eye contact, for example, it showed that they noticed the change. In contrast, when the dogs heard the same word repeated several times, their attention waned.

Until now, it was thought that only humans could detect vowels in words and realise that these sounds stay the same across different speakers. But the dogs could do both spontaneously without any previous training.

“I was surprised by how well some of the dogs responded to unfamiliar voices,” says Root-Gutteridge. “It might mean that they comprehend more than we give them credit for.”

This ability may be the result of domestication, says Root-Gutteridge, as dogs that pay closer attention to human sounds are more likely to have been chosen for breeding.

The work highlights the strength of social interactions between humans and dogs, says Britta Osthaus at Canterbury Christ Church University, UK. “It would be interesting to see whether a well-trained dog would react differently to the command of ‘sat’ instead of ‘sit’,” she says.

Journal reference: Biology Letters, DOI: 10.1098/rsbl.2019.0555

Read more: https://www.newscientist.com/article/2225746-dogs-have-a-better-ear-for-language-than-we-thought/

Ketamine Could Help Cut Alcohol Consumption by Rewiring Memory of Alcohol Reward


Preliminary findings from a clinical trial of heavy drinkers suggest that the drug can weaken certain memories tied to the reward of imbibing, although the mechanisms aren’t fully clear.

by CATHERINE OFFORD

The anesthetic drug ketamine could be used to rewire heavy drinkers’ memories and help them cut down on alcohol consumption, according to a study published yesterday (November 26) in Nature Communications. In a clinical trial of people who reported consuming around 590 grams of alcohol—equivalent to nearly two cases of beer—per week on average, researchers found that a procedure that involved administering the drug while people were thinking about drinking durably reduced consumption.

While it’s not clear how the method works at a neurological level, the study represents “a really exciting development,” Amy Milton, a behavioral neuroscientist at the University of Cambridge who was not involved in the work, tells STAT. She adds that the findings mark “the first time it’s been shown in a clinical population that this can be effective.”

The study was designed to manipulate the brain’s retrieval and stabilization of memories—in this case, those linking the sight and thoughts of alcohol to the reward of drinking it, study coauthor Ravi Das, a psychopharmacologist at University College London, tells Science News. “We’re trying to break down those memories to stop that process from happening.”

To do that, the team asked 30 of the participants to look at a glass of beer, followed by a sequence of images of alcoholic and non-alcoholic drinks. On the first day of tests, the session ended with participants being invited to drink the beer. On the second day, after viewing the beer and images, the screen cut off, and instead of drinking the beer, participants were given a shot of ketamine.

Among various functions, ketamine blocks NMDA receptors—key proteins in the brain’s reward pathways—so the researchers hypothesized that administering the drug during memory retrieval would help weaken participants’ associations between the sight or contemplation of alcohol and the reward of drinking it. Their results somewhat support that hypothesis. Nine months following the several-day trial, the volunteers reported cutting their drinking back by half.

“To actually get changes in [participants’] behavior when they go home and they’re not in the lab is a big deal,” Mary Torregrossa, a neuroscientist at the University of Pittsburgh who was not involved in the work, tells Science. But she notes that it’s not clear whether it was the ketamine or some other part of the procedure that led to the effect.

Another 60 participants, split into two control groups, received slightly different procedures that involved either beer or ketamine and still showed, on average, a 35 percent decrease in alcohol consumption after nine months. The participants themselves were recruited to the study through online ads—meaning that the researchers may have selected for people already interested in reducing consumption.

Whatever the mechanisms behind the effect, the results so far suggest the method is worth investigating, David Epstein, an addiction researcher at the National Institute on Drug Abuse, tells Science News. “If a seemingly small one-time experience in a lab produces any effects that are detectable later in real life, the data are probably pointing toward something important.”

Catherine Offord is an associate editor at The Scientist. Email her at cofford@the-scientist.com.

https://www.the-scientist.com/news-opinion/ketamine-could-help-cut-alcohol-consumption-by-rewiring-memory-66792

Study shows extra virgin olive oil staves off multiple forms of dementia in mice

Boosting brain function is key to staving off the effects of aging. And if there were one thing every person should consider doing right now to keep their brain young, it would be to add extra virgin olive oil (EVOO) to their diet, according to research by scientists at the Lewis Katz School of Medicine at Temple University (LKSOM). EVOO is a superfood, rich in cell-protecting antioxidants and known for its multiple health benefits, including helping put the brakes on diseases linked to aging, most notably cardiovascular disease. Previous LKSOM research on mice also showed that EVOO preserves memory and protects the brain against Alzheimer’s disease.

In a new study in mice published online in the journal Aging Cell, LKSOM scientists show that yet another group of aging-related diseases can be added to that list—tauopathies, which are characterized by the gradual buildup of an abnormal form of a protein called tau in the brain. This process leads to a decline in mental function, or dementia. The findings are the first to suggest that EVOO can defend against a specific type of mental decline linked to tauopathy known as frontotemporal dementia.

Alzheimer’s disease is itself one form of dementia. It primarily affects the hippocampus—the memory storage center in the brain. Frontotemporal dementia affects the frontal and temporal lobes, the areas of the brain near the forehead and ears. Symptoms typically emerge between ages 40 and 65 and include changes in personality and behavior, difficulties with language and writing, and eventual deterioration of memory and the ability to learn from prior experience.

Senior investigator Domenico Praticò, MD, Scott Richards North Star Foundation Chair for Alzheimer’s Research, Professor in the Departments of Pharmacology and Microbiology, and Director of the Alzheimer’s Center at Temple at LKSOM, describes the new work as supplying another piece in the story about EVOO’s ability to ward off cognitive decline and to protect the junctions where neurons come together to exchange information, which are known as synapses.

“EVOO has been a part of the human diet for a very long time and has many benefits for health, for reasons that we do not yet fully understand,” he said. “The realization that EVOO can protect the brain against different forms of dementia gives us an opportunity to learn more about the mechanisms through which it acts to support brain health.”

In previous work using a mouse model in which animals were destined to develop Alzheimer’s disease, Dr. Praticò’s team showed that EVOO supplied in the diet protected young mice from memory and learning impairment as they aged. Most notably, when the researchers looked at brain tissue from mice fed EVOO, they did not see features typical of cognitive decline, particularly amyloid plaques—sticky proteins that gum up communication pathways between neurons in the brain. Rather, the animals’ brains looked normal.

The team’s new study shows that the same is true in the case of mice engineered to develop tauopathy. In these mice, normal tau protein turns defective and accumulates in the brain, forming harmful tau deposits, also called tangles. Tau deposits, similar to amyloid plaques in Alzheimer’s disease, block neuron communication and thereby impair thinking and memory, resulting in frontotemporal dementia.

Tau mice were put on a diet supplemented with EVOO at a young age, comparable to about age 30 or 40 in humans. Six months later, when the mice were the equivalent of 60-year-old humans, the tauopathy-prone animals showed a 60 percent reduction in damaging tau deposits compared to littermates that were not fed EVOO. Animals on the EVOO diet also performed better on memory and learning tests than animals deprived of EVOO.

When Dr. Praticò and colleagues examined brain tissue from EVOO-fed mice, they found that improved brain function was likely facilitated by healthier synapse function, which in turn was associated with greater-than-normal levels of a protein known as complexin-1. Complexin-1 is known to play a critical role in maintaining healthy synapses.

Dr. Praticò and colleagues now plan to explore what happens when EVOO is fed to older animals that have begun to develop tau deposits and signs of cognitive decline, which more closely reflects the clinical scenario in humans. “We are particularly interested in knowing whether EVOO can reverse tau damage and ultimately treat tauopathy in older mice,” Dr. Praticò added.

More information: Elisabetta Lauretti et al, Extra virgin olive oil improves synaptic activity, short‐term plasticity, memory, and neuropathology in a tauopathy model, Aging Cell (2019). DOI: 10.1111/acel.13076

https://m.medicalxpress.com/news/2019-11-extra-virgin-olive-oil-staves.html

People who cannot read may be three times as likely to develop dementia

New research has found that people who are illiterate, meaning they never learned to read or write, may have nearly three times greater risk of developing dementia than people who can read and write. The study is published in the November 13, 2019, online issue of Neurology®, the medical journal of the American Academy of Neurology.

According to the United States Department of Education, approximately 32 million adults in the country are illiterate.

“Being able to read and write allows people to engage in more activities that use the brain, like reading newspapers and helping children and grandchildren with homework,” said study author Jennifer J. Manly, Ph.D., of Columbia University Vagelos College of Physicians and Surgeons in New York. “Previous research has shown such activities may reduce the risk of dementia. Our new study provides more evidence that reading and writing may be important factors in helping maintain a healthy brain.”

The study looked at people with low levels of education who lived in northern Manhattan. Many were born and raised in rural areas in the Dominican Republic where access to education was limited. The study involved 983 people with an average age of 77. Each person went to school for four years or less. Researchers asked each person, “Did you ever learn to read or write?” Researchers then divided people into two groups; 237 people were illiterate and 746 people were literate.

Participants had medical exams and took memory and thinking tests at the beginning of the study and at follow-up appointments that occurred every 18 months to two years. Testing included recalling unrelated words and producing as many words as possible when given a category like fruit or clothing.

Researchers found that among the people who were illiterate, 83 of 237, or 35 percent, had dementia at the start of the study. Among the people who were literate, 134 of 746, or 18 percent, had dementia. After adjusting for age, socioeconomic status, and cardiovascular disease, people who could not read and write had nearly three times the chance of having dementia at the start of the study.
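As a back-of-the-envelope check, the unadjusted odds ratio can be computed directly from the baseline counts reported above; the “nearly three times” figure is the authors’ estimate after adjustment, so the raw number comes out somewhat lower:

```python
# Baseline dementia counts from the study, as reported above.
illiterate_dementia, illiterate_total = 83, 237
literate_dementia, literate_total = 134, 746

# Odds = cases / non-cases within each group.
odds_illiterate = illiterate_dementia / (illiterate_total - illiterate_dementia)
odds_literate = literate_dementia / (literate_total - literate_dementia)

odds_ratio = odds_illiterate / odds_literate
print(round(odds_ratio, 2))  # → 2.46 before adjustment
```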

Among participants without dementia at the start of the study, during follow-up an average of four years later, 114 of 237 people who were illiterate, or 48 percent, had dementia. Of the people who were literate, 201 of 746 people, or 27 percent, had dementia. After adjusting for age, socioeconomic status and cardiovascular disease, researchers found that people who could not read and write were twice as likely to develop dementia during the study.

When researchers evaluated language, speed, spatial, and reasoning skills, they found that adults who were illiterate had lower scores at the start of the study. But their test scores did not decline at a more rapid rate as the study progressed.

“Our study also found that literacy was linked to higher scores on memory and thinking tests overall, not just reading and language scores,” said Manly. “These results suggest that reading may help strengthen the brain in many ways that may help prevent or delay the onset of dementia.”

Manly continued, “Even if they only have a few years of education, people who learn to read and write may have lifelong advantages over people who never learn these skills.”

Manly said future studies should find out whether putting more resources into programs that teach people to read and write helps reduce the risk of dementia.

A limitation of the study was that researchers did not ask how or when literate study participants learned to read and write.

The study was supported by the National Institutes of Health and National Institute on Aging.

Story Source:

Materials provided by American Academy of Neurology. Note: Content may be edited for style and length.

Journal Reference:

Miguel Arce Rentería, Jet M.J. Vonk, Gloria Felix, Justina F. Avila, Laura B. Zahodne, Elizabeth Dalchand, Kirsten M. Frazer, Michelle N. Martinez, Heather L. Shouel, Jennifer J. Manly. Illiteracy, dementia risk, and cognitive trajectories among older adults with low education. Neurology, 2019; DOI: 10.1212/WNL.0000000000008587

https://www.sciencedaily.com/releases/2019/11/191114180033.htm

Scientists reverse cognitive symptoms in a mouse model of Down Syndrome

By Kristin Houser

Down syndrome is a genetic condition that can affect a person’s memory or ability to learn — intellectual impairments researchers traditionally thought were untreatable and irreversible.

But now, researchers from the University of California San Francisco and Baylor College of Medicine say they’ve reversed the impairments in mouse models of Down syndrome — potentially foreshadowing an ethically-fraught future in which doctors can do the same for humans with the condition.

All people with Down syndrome share one thing in common: an extra copy of chromosome 21. For that reason, much of the research on Down syndrome has focused on genetics.

But for this new study, published Friday in the prestigious journal Science, researchers focused on protein production in the brains of mice modeling Down syndrome. That led them to the discovery that the animals’ hippocampi produced 39 percent less protein than those of typical mice.

Further study led the researchers to conclude that the presence of an extra chromosome likely prompted the animals’ hippocampal cells to trigger the integrated stress response (ISR), which decreased protein production.

“The cell is constantly monitoring its own health,” researcher Peter Walter said in a press release. “When something goes wrong, the cell responds by making less protein, which is usually a sound response to cellular stress. But you need protein synthesis for higher cognitive functions, so when protein synthesis is reduced, you get a pathology of memory formation.”

By blocking the activity of PKR, the enzyme that prompted the ISR in the mouse model’s hippocampal cells, the researchers found they could not only reverse the decreased protein production but also improve the animals’ cognitive function.

Of course, just because something works in mice doesn’t mean it’ll work in humans.

However, when the researchers analyzed postmortem brain tissue samples of people with Down syndrome, they found evidence that the ISR had been activated. They also obtained a tissue sample from a person with Down syndrome who only had the extra copy of chromosome 21 in some of their cells — and those cells were the only ones with ISR activated.

“We started with a situation that looked hopeless,” Walter said. “Nobody thought anything could be done. But we may have struck gold.”

https://futurism.com/neoscope/scientists-reverse-cognitive-deficiets-of-down-syndrome-mice


Robert Provine, neuroscientist known for pioneering work on laughter, yawning, hiccupping, and tears, dies at 76.

by EMILY MAKOWSKI

Neuroscientist Robert Provine, known for his groundbreaking research on common but mysterious human behavior such as laughter and yawning, died October 17 of complications from non-Hodgkin’s lymphoma, according to The Washington Post. He was 76.

Provine studied human social behaviors through innovative methods. In one 1993 study, his team observed people laughing outside of the lab setting, such as in shopping malls or while walking down the street. He found that, contrary to the scientific belief of the time, most instances of laughter were not responses to overt humor but instead served to strengthen social bonds, acknowledge a superior’s authority, or, when used negatively, exclude someone from a group.

“Laughter is part of this universal human vocabulary. Everyone speaks this language. Just as birds of a given species all sing their species’ typical song, laughter is part of our own human song,” Provine once told NPR, according to the Post.

Born on May 11, 1943 in Tulsa, Oklahoma, Provine showed an aptitude for science at a young age when he built telescopes in high school. He received his bachelor’s degree in psychology from Oklahoma State University in 1965 and PhD in psychology from Washington University in St. Louis in 1971. He was a member of the American Association for the Advancement of Science, the Association for Psychological Science, and the Psychonomic Society, and wrote two popular science books: Laughter: A Scientific Investigation in 2000 and Curious Behavior: Yawning, Laughing, Hiccupping, and Beyond in 2012.

“Provine’s research on topics such as yawning, laughter, tickling, and emotional tears provided fascinating insights into the fundamental building blocks of human social behavior,” according to a memorial on the University of Maryland, Baltimore County’s (UMBC) website. Provine taught at UMBC for four decades before becoming a professor emeritus in 2013.

“His approach was just amazing. It was different than what pretty much anyone was doing,” Robert Spencer, one of Provine’s former PhD students and currently the chief of neuropsychology at the VA Ann Arbor Healthcare System, tells The Scientist. “He was opening up a whole new set of methods, things that he would refer to as ‘sidewalk neuroscience,’ which was essentially ethology as applied to humans. And he answered questions you just can’t answer in synthetic lab situations,” he says.

Spencer remembers Provine as having a quirky personality, a distinctive Oklahoma accent, and a lab that “looked like a museum,” adding that it was “just full of equipment that he had built himself” in order to conduct experiments. In addition, he had many interests outside of the lab, such as saxophone playing, race car driving, and martial arts.

He is survived by his wife of 23 years, his son and daughter from his first marriage, and three grandchildren.

https://www.the-scientist.com/news-opinion/robert-provine–researcher-of-universal-human-behavior–dies-66622

How information is like snacks, money, and drugs—to your brain

By Laura Counts

Can’t stop checking your phone, even when you’re not expecting any important messages? Blame your brain.

A new study by researchers at UC Berkeley’s Haas School of Business has found that information acts on the brain’s dopamine-producing reward system in the same way as money or food.

“To the brain, information is its own reward, above and beyond whether it’s useful,” says Assoc. Prof. Ming Hsu, a neuroeconomist whose research employs functional magnetic resonance imaging (fMRI), psychological theory, economic modeling, and machine learning. “And just as our brains like empty calories from junk food, they can overvalue information that makes us feel good but may not be useful—what some may call idle curiosity.”

The paper, “Common neural code for reward and information value,” was published this month in the Proceedings of the National Academy of Sciences. Authored by Hsu and graduate student Kenji Kobayashi, now a postdoctoral researcher at the University of Pennsylvania, it demonstrates that the brain converts information onto the same common scale it uses for money. It also lays the groundwork for unraveling the neuroscience behind how we consume information—and perhaps even digital addiction.

“We were able to demonstrate for the first time the existence of a common neural code for information and money, which opens the door to a number of exciting questions about how people consume, and sometimes over-consume, information,” Hsu says.

Rooted in the study of curiosity

The paper is rooted in the study of curiosity and what it looks like inside the brain. While economists have tended to view curiosity as a means to an end, valuable when it can help us get information to gain an edge in making decisions, psychologists have long seen curiosity as an innate motivation that can spur actions by itself. For example, sports fans might check the odds on a game even if they have no intention of ever betting.

Sometimes, we want to know something, just to know.

“Our study tried to answer two questions. First, can we reconcile the economic and psychological views of curiosity, or why do people seek information? Second, what does curiosity look like inside the brain?” Hsu says.

The neuroscience of curiosity

To understand more about the neuroscience of curiosity, the researchers scanned the brains of people while they played a gambling game. Each participant was presented with a series of lotteries and needed to decide how much they were willing to pay to find out more about the odds of winning. In some lotteries, the information was valuable—for example, when what seemed like a longshot was revealed to be a sure thing. In other cases, the information wasn’t worth much, such as when little was at stake.

For the most part, the study subjects made rational choices based on the economic value of the information (how much money it could help them win). But that didn’t explain all their choices: people tended to overvalue information in general, and particularly in higher-valued lotteries. It appeared that the higher stakes increased people’s curiosity about the information, even when the information had no effect on their decisions about whether to play.

The researchers determined that this behavior could only be explained by a model that captured both economic and psychological motives for seeking information. People acquired information based not only on its actual benefit, but also on the anticipation of its benefit, whether or not it had use.
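As a rough illustration of how such a combined model might work, consider a valuation rule that adds an anticipation term, scaled by the stakes, to the purely instrumental benefit. The function, its parameter names, and the weights below are illustrative assumptions, not the model actually fit in the paper:

```python
# Toy sketch of a valuation rule combining economic and psychological
# motives for seeking information. All names and weights here are
# illustrative assumptions, not the study's fitted model.

def information_value(instrumental_benefit, stakes, w_econ=1.0, w_antic=0.5):
    """Value of information = its economic (instrumental) benefit plus an
    anticipation term that grows with the size of the potential reward."""
    return w_econ * instrumental_benefit + w_antic * stakes

# Purely economic view: only the instrumental benefit matters.
print(information_value(instrumental_benefit=5.0, stakes=0.0))   # 5.0

# Psychological motive: higher stakes raise the information's value
# even when knowing it would change no decision.
print(information_value(instrumental_benefit=0.0, stakes=10.0))  # 5.0
```

The point of the second call is the behavioral signature described above: a purely economic model would assign that information zero value, yet subjects still paid for it.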

Hsu says that’s akin to wanting to know whether we received a great job offer, even if we have no intention of taking it. “Anticipation serves to amplify how good or bad something seems, and the anticipation of a more pleasurable reward makes the information appear even more valuable,” he says.

Common neural code for information and money

How does the brain respond to information? Analyzing the fMRI scans, the researchers found that information about the games’ odds activated the brain regions known to be involved in valuation (the striatum and the ventromedial prefrontal cortex, or VMPFC), the same dopamine-producing reward areas activated by food, money, and many drugs. This was the case whether or not the information was useful and changed the person’s original decision.

Next, using a machine learning technique called support vector regression, the researchers determined that the brain uses the same neural code for information about the lottery odds as it does for money. The technique allowed them to learn the neural code for how the brain responds to varying amounts of money, and then ask whether that same code predicts how much a person will pay for information. It does.
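The cross-decoding logic can be sketched in a few lines: train a decoder that maps brain activity to dollar amounts, then reuse it unchanged on information trials. Everything below is an assumption for illustration only: the “voxel” data are synthetic, the shared linear code is built in by construction, and ordinary least squares stands in for the paper’s support vector regression:

```python
import numpy as np

# Illustrative sketch of cross-decoding (NOT the study's actual pipeline).
# We simulate voxel responses that encode value via one shared linear code,
# fit a decoder on money trials, and reuse it on information trials.

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

# Assumed shared neural code: each voxel weighs value the same way
# whether the value comes from money or from information.
true_code = rng.normal(size=n_voxels)

# Money trials: voxel activity = dollar amount * code + noise.
money = rng.uniform(0, 10, n_trials)
X_money = np.outer(money, true_code) + rng.normal(scale=0.5, size=(n_trials, n_voxels))

# Fit a linear decoder (voxels -> dollars); OLS stands in for SVR here.
w, *_ = np.linalg.lstsq(X_money, money, rcond=None)

# Information trials: activity now scales with willingness to pay for info.
info_value = rng.uniform(0, 10, n_trials)
X_info = np.outer(info_value, true_code) + rng.normal(scale=0.5, size=(n_trials, n_voxels))

# Reuse the money-trained decoder, unchanged, on the information trials.
predicted = X_info @ w
r = np.corrcoef(predicted, info_value)[0, 1]
print(f"cross-decoding correlation: {r:.2f}")  # high only if the code is shared
```

If money and information were encoded by different patterns, the money-trained decoder would fail on the information trials; a high correlation is the synthetic analogue of the “common neural code” result.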

In other words, just as we can convert such disparate things as a painting, a steak dinner, and a vacation into a dollar value, the brain converts curiosity about information into the same common code it uses for concrete rewards like money, Hsu says.

“We can look into the brain and tell how much someone wants a piece of information, and then translate that brain activity into monetary amounts,” he says.

Raising questions about digital addiction

While the research does not directly address overconsumption of digital information, the fact that information engages the brain’s reward system is a necessary condition for the addiction cycle, he says. And it explains why we find those alerts saying we’ve been tagged in a photo so irresistible.

“The way our brains respond to the anticipation of a pleasurable reward is an important reason why people are susceptible to clickbait,” he says. “Just like junk food, this might be a situation where previously adaptive mechanisms get exploited now that we have unprecedented access to novel curiosities.”
