Self-Taught AI Masters Rubik’s Cube Without Human Help

by George Dvorsky

Fancy algorithms capable of solving a Rubik’s Cube have appeared before, but a new system from the University of California, Irvine uses artificial intelligence to solve the 3D puzzle from scratch and without any prior help from humans—and it does so with impressive speed and efficiency.

New research published this week in Nature Machine Intelligence describes DeepCubeA, a system capable of solving any jumbled Rubik’s Cube it’s presented with. More impressively, it can find the most efficient path to success—that is, the solution requiring the fewest moves—around 60 percent of the time. On average, DeepCubeA needed just 28 moves to solve the puzzle, requiring about 1.2 seconds to calculate the solution.

Sounds fast, but other systems have solved the 3D puzzle in less time, including a robot that can solve a Rubik’s Cube in just 0.38 seconds. But these systems were specifically designed for the task, using human-scripted algorithms to solve the puzzle in the most efficient manner possible. DeepCubeA, on the other hand, taught itself to solve Rubik’s Cube using an approach to artificial intelligence known as reinforcement learning.

“Artificial intelligence can defeat the world’s best human chess and Go players, but some of the more difficult puzzles, such as the Rubik’s Cube, had not been solved by computers, so we thought they were open for AI approaches,” said Pierre Baldi, the senior author of the new paper, in a press release. “The solution to the Rubik’s Cube involves more symbolic, mathematical and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan and make decisions.”

Indeed, an expert system designed for one task and one task only—like solving a Rubik’s Cube—will forever be limited to that domain, but a system like DeepCubeA, with its highly adaptable neural net, could be leveraged for other tasks, such as solving complex scientific, mathematical, and engineering problems. What’s more, this system “is a small step toward creating agents that are able to learn how to think and plan for themselves in new environments,” Stephen McAleer, a co-author of the new paper, told Gizmodo.

Reinforcement learning works the way it sounds. Systems are motivated to achieve a designated goal, gaining points for deploying successful actions or strategies and losing points for straying off course. This allows the algorithms to improve over time without human intervention.
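
To make the idea concrete, here is a minimal sketch of that scoring loop: tabular Q-learning on a toy ten-step puzzle. It is not DeepCubeA’s actual training code (the real system uses a deep neural network at vastly larger scale), but the reward logic is the same in spirit. The agent earns points for reaching the goal, pays a small penalty for every extra move, and gradually learns the shortest route on its own.

```python
# Minimal Q-learning sketch of the reward idea, not DeepCubeA's code:
# an agent on a 10-state line learns to walk to the goal state.
import random

N_STATES, GOAL, ACTIONS = 10, 9, (-1, +1)    # actions: step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1        # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # explore occasionally; otherwise take the best-known action
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda m: Q[(s, m)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else -0.01  # gain points at the goal, lose them wandering
        best_next = max(Q[(s2, m)] for m in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# after training, the learned policy points toward the goal from every state
print([max(ACTIONS, key=lambda m: Q[(s, m)]) for s in range(N_STATES - 1)])
```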

Reinforcement learning makes sense for a Rubik’s Cube, owing to the hideous number of possible combinations on the 3x3x3 puzzle, which amount to around 43 quintillion. Simply choosing random moves in the hope of solving the cube is not going to work, for humans or for the world’s most powerful supercomputers.
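
A quick back-of-the-envelope calculation shows why. Even under the generous, entirely hypothetical assumption of a machine checking a billion cube states every second, enumerating the full state space would take more than a thousand years:

```python
# Why brute force fails: checking a billion states per second (a
# hypothetical rate) still takes over a millennium to cover the space.
states = 43_252_003_274_489_856_000            # ~43 quintillion reachable states
rate = 1e9                                     # assumed states checked per second
years = states / rate / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")                   # ≈ 1,371 years
```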

DeepCubeA is not the first kick at the can for these University of California, Irvine researchers. Their earlier system, called DeepCube, used a conventional tree-search strategy and a reinforcement learning scheme similar to the one employed by DeepMind’s AlphaZero. But while this approach works well for one-on-one board games like chess and Go, it proved clumsy for Rubik’s Cube. In tests, the DeepCube system required too much time to make its calculations, and its solutions were often far from ideal.

The UCI team used a different approach with DeepCubeA. Starting with a solved cube, the system made random moves to scramble the puzzle. Basically, it learned to be proficient at Rubik’s Cube by playing it in reverse. At first the moves were few, but the jumbled state got more and more complicated as training progressed. In all, DeepCubeA played 10 billion different combinations in two days as it worked to solve the cube in less than 30 moves.
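
A sketch of that reverse-scramble recipe is below. The cube object and its turn method are hypothetical placeholders, not DeepCubeA’s API; the point is the training-data curriculum: start solved, apply k random moves, and let k grow as training progresses.

```python
# Sketch of the reverse-scramble training curriculum described above.
# `cube_factory` and `turn` are hypothetical placeholders, not the paper's API.
import random

MOVES = [face + d for face in "UDLRFB" for d in ("", "'")]  # 12 quarter turns

def scrambled_examples(cube_factory, max_depth, n):
    """Yield (cube, depth) training pairs; depth = moves away from solved."""
    for _ in range(n):
        k = random.randint(1, max_depth)   # curriculum: raise max_depth over training
        cube = cube_factory()              # a freshly solved cube
        for move in random.choices(MOVES, k=k):
            cube.turn(move)                # scramble it, one move at a time
        yield cube, k                      # k upper-bounds the cost back to solved
```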

“DeepCubeA attempts to solve the cube using the least number of moves,” explained McAleer. “Consequently, the moves tend to look much different from how a human would solve the cube.”

After training, the system was tasked with solving 1,000 randomly scrambled Rubik’s Cubes. In tests, DeepCubeA found a solution to 100 percent of all cubes, and it found a shortest path to the goal state 60.3 percent of the time. The system required 28 moves on average to solve the cube, which it did in about 1.2 seconds. By comparison, the fastest human puzzle solvers require around 50 moves.

“Since we found that DeepCubeA is solving the cube in the fewest moves 60 percent of the time, it’s pretty clear that the strategy it is using is close to the optimal strategy, colloquially referred to as God’s algorithm,” study co-author Forest Agostinelli told Gizmodo. “While human strategies are easily explainable with step-by-step instructions, defining an optimal strategy often requires sophisticated knowledge of group theory and combinatorics. Though mathematically defining this strategy is not in the scope of this paper, we can see that the strategy DeepCubeA is employing is one that is not readily obvious to humans.”

To showcase the flexibility of the system, DeepCubeA was also taught to solve other puzzles, including sliding-tile puzzle games, Lights Out, and Sokoban, which it did with similar proficiency.

https://gizmodo.com/self-taught-ai-masters-rubik-s-cube-without-human-help-1836420294

These fungi drug cicadas with psilocybin or amphetamine to make them mate nonstop


The cicada-infecting Massospora cicadina fungus makes an amphetamine called cathinone, which spurs cicadas to mate and spread fungal spores. Other species of the fungus produce psilocybin, more often found in hallucinogenic mushrooms.

A cicada-infecting fungus produces drugs that make the insects literally mate their butts off.

Massospora fungi make either a drug found in hallucinogenic mushrooms or an amphetamine found in khat leaves, plant pathologist Matthew Kasson of West Virginia University in Morgantown reported June 22 at the ASM Microbe 2019 meeting.

The fungi may use psilocybin, which causes people to hallucinate, or the amphetamine cathinone to suppress cicadas’ appetites and keep the insects moving and mating even after they lose big chunks of their bodies. The finding marks the first time researchers have discovered a fungus other than mushrooms producing psilocybin, and the first time an organism outside of plants has been found to make an amphetamine.

Massospora fungi are transmitted sexually from cicada to cicada. Huge plugs of fungi form on the insects’ abdomens, and during mating, parts of the abdomens may break away, Kasson said.

Losing body parts would surely slow most organisms down, and yet for the fungal-infected cicadas, “two-thirds of their body might be missing, and they would be whistling as they walk down the street,” Kasson said. The infected insects mate nearly nonstop, spreading the fungi to partners, he and colleagues report June 25 in Fungal Ecology.

Overall, the team discovered 1,176 small molecules in fungus-infected cicadas, including the two psychoactive drugs. The researchers aren’t sure how the fungi produce the drugs, which in other organisms require enzymes that seem to be missing from Massospora. So the fungi may be using new ways to make the compounds, Kasson said. The team is also trying to determine what the other molecules do to influence cicada behavior.

Experience Apollo 11 in real time

If the 50th anniversary coverage of the first Moon landing is getting you inspired, step back in time to the real thing. Apollo 11 in Real Time is a website that will drop you into the mission in progress at that very second, exactly 50 years ago.

The website streams photos, television broadcasts, film shot by the astronauts and transcripts of the mission in real time — including, for the first time, 50 channels of mission-control audio.

https://apolloinrealtime.org/11/

Goats reveal their feelings with the sound of distinctive bleats

By Clare Wilson

“Maaah.” Goat calls might all sound the same to us, but the animals seem to recognise when one of their herd-mates is happy or sad from their bleats alone.

When goats hear a series of calls that change in emotional tone, they look towards the source of the sound – and their heart-rate readings indicate the animals’ own emotions are swayed by the noises.

Luigi Baciadonna of Queen Mary University of London and colleagues recorded goats bleating in different emotional states to see how they are affected by hearing each other’s calls.

To elicit positive sounds, they recorded goats that could see someone approaching with a bucket of food. To get negative ones, they let an animal see another being fed while getting no food itself, or kept one in isolation for five minutes. “This was not extreme distress – I don’t think most people could tell the difference in their calls,” says Baciadonna.

Bleats with meaning
Then, to a different goat, the team played a bleat every 20 seconds, with nine positive ones followed by three negative or vice versa. At the start, the animal looked towards the source of the sound, but this tailed off as it got used to it. When the switch between emotional bleats happened, the goat was more likely to look again – but only with the second call of the batch of three. “There’s a bit of a delay in spotting the difference,” says Baciadonna.

The team also tried to see how the goats hearing the recordings felt, by measuring the variation in the time between heartbeats. In people, a high value for this is linked with more positive mood, while low values correlate with feeling depressed or stressed. Sure enough, when goats heard the happy bleats, their heart-rate variability was higher than when they heard the sad ones.
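
One common way to put a number on that variability (the study’s exact metric isn’t given here, so treat this as illustrative) is the root mean square of successive differences between inter-beat intervals, or RMSSD; higher values go with the calmer state:

```python
# RMSSD, a standard heart-rate-variability metric, on made-up inter-beat
# intervals (milliseconds); illustrative only, not the study's pipeline.
import math

def rmssd(intervals_ms):
    """Root mean square of successive differences between heartbeats."""
    diffs = [b - a for a, b in zip(intervals_ms, intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

relaxed = [810, 790, 845, 770, 830, 800]   # hypothetical, more variable rhythm
stressed = [700, 705, 698, 702, 699, 701]  # hypothetical, flatter rhythm
print(rmssd(relaxed), rmssd(stressed))     # relaxed > stressed, as in the goats
```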

“I don’t doubt any of this,” says David Harwood, senior vice-president of the UK’s Goat Veterinary Society. “Goat owners are always telling us how intelligent their animals are.”

Journal reference: Frontiers in Zoology, DOI: 10.1186/s12983-019-0323-z

https://www.newscientist.com/article/2209218-goats-reveal-their-feelings-with-the-sound-of-distinctive-bleats/

Bluehead wrasse fish switch from female to male in just 20 days

By Michael Le Page

For many fish, changing sex is a normal part of life. For the first time, we have found out exactly how one of these species – a small cleaner fish called the bluehead wrasse – does it.

Erica Todd at the University of Otago in New Zealand and her colleagues removed some male bluehead wrasse from a few sites on reefs off Key Largo in Florida. This triggers females to change sex. They then caught changing females at regular intervals and looked at what was happening in their bodies down to the level of which genes were turning on or off.

They found that the loss of males makes some females stressed. They become more aggressive and start performing male courtship behaviours.

In individuals that become dominant in a social group, the genes associated with female hormones shut down in a day or two, and their colours begin to change – females of the species are yellow and brown, while the males are green and blue.

At the same time, the egg-producing tissues in their ovaries start to shrink and begin to be replaced by sperm-producing tissues. In just 8 to 10 days, the mature ovaries are transformed into testes, and the fish can mate with females and sire offspring.

After around 20 days, the fish have the full male colours and the process is complete. “The bluehead is certainly remarkable for its speed,” says Todd. “Other species do take much longer.”

However, as the fish only live around two or three years, those 20 days are a fair chunk of their lifespan, roughly equivalent to two years of a human lifetime.
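
The arithmetic checks out, give or take the assumed human lifespan:

```python
# Scaling 20 days of a ~2.5-year wrasse life to a human life. The 75-year
# human figure is an assumption; the result lands near the quoted two years.
fish_days = 20
fish_lifespan_days = 2.5 * 365
human_lifespan_years = 75
print(fish_days / fish_lifespan_days * human_lifespan_years)  # ≈ 1.6 years
```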

Around 500 species of fish can change sex, a fact long known to biologists but which got wider attention recently when the Blue Planet II documentary narrated by David Attenborough showed Asian sheepshead wrasse changing sex. It is most common for female fish to turn into males but in some species including clownfish the males turn into females.

In at least one species, the hawkfish found around southern Japan, the females can not only turn into males but also turn back into females again if circumstances require it. For one species of shrimp, there is no need to change back. It starts out male but becomes a hermaphrodite – a phenomenon known as protandric simultaneous hermaphroditism.

Journal reference: Science Advances, DOI: 10.1126/sciadv.aaw7006

https://www.newscientist.com/article/2209254-bluehead-wrasse-fish-switch-from-female-to-male-in-just-20-days/

The Pentagon has a laser that can identify people from at least 200 meters away by the pattern of their heartbeat.

by David Hambling

Everyone’s heart is different. Like the iris or fingerprint, our unique cardiac signature can be used as a way to tell us apart. Crucially, it can be done from a distance.

It’s that last point that has intrigued US Special Forces. Other long-range biometric techniques include gait analysis, which identifies someone by the way he or she walks. This method was supposedly used to identify an infamous ISIS terrorist before a drone strike. But gaits, like faces, are not necessarily distinctive. An individual’s cardiac signature is unique, though, and unlike faces or gait, it remains constant and cannot be altered or disguised.

Long-range detection
A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).

The most common way of carrying out remote biometric identification is by face recognition. But this needs a good, frontal view of the face, which can be hard to obtain, especially from a drone. Face recognition may also be confused by beards, sunglasses, or headscarves.

Cardiac signatures are already used for security identification. The Canadian company Nymi has developed a wrist-worn pulse sensor as an alternative to fingerprint identification. The technology has been trialed by the Halifax building society in the UK.

Jetson extends this approach by adapting an off-the-shelf device that is usually used to check vibration from a distance in structures such as wind turbines. For Jetson, a special gimbal was added so that an invisible, quarter-size laser spot could be kept on a target. It takes about 30 seconds to get a good return, so at present the device is only effective where the subject is sitting or standing.

Better than face recognition
Remaly’s team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it’s likely that Jetson would be used alongside facial recognition or other identification methods.
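
The article doesn’t reveal how those algorithms work. As a rough illustration of one plausible first step (everything here, from the sample rate to the synthetic signal, is an assumption), one could band-pass filter the vibrometry trace to isolate the frequency band where heartbeat-driven surface motion lives, before doing any signature matching:

```python
# Illustrative only: band-pass filtering a noisy vibrometry trace to pull
# out the ~0.8-3 Hz band where heartbeat-driven surface motion would sit.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0                                      # assumed sample rate, Hz
t = np.arange(0, 30, 1 / fs)                    # ~30 s dwell, as the article describes
heartbeat = 0.5 * np.sin(2 * np.pi * 1.2 * t)   # synthetic ~72 bpm chest motion
trace = heartbeat + np.random.normal(0, 1.0, t.size)  # buried in sensor noise

low, high = 0.8 / (fs / 2), 3.0 / (fs / 2)      # band edges, normalized to Nyquist
b, a = butter(4, [low, high], btype="band")
clean = filtfilt(b, a, trace)                   # zero-phase band-pass filter
# `clean` now shows the periodic cardiac component; a real system would go on
# to extract per-beat features and compare them against enrolled signatures.
```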

Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. “Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy,” he says.

One glaring limitation is the need for a database of cardiac signatures, but even without this the system has its uses. For example, an insurgent seen in a group planting an IED could later be positively identified from a cardiac signature, even if the person’s name and face are unknown. Biometric data is also routinely collected by US armed forces in Iraq and Afghanistan, so cardiac data could be added to that library.

In the longer run, this technology could find many more uses, its developers believe. For example, a doctor could scan for arrhythmias and other conditions remotely, or hospitals could monitor the condition of patients without having to wire them up to machines.

https://www.technologyreview.com/s/613891/the-pentagon-has-a-laser-that-can-identify-people-from-a-distanceby-their-heartbeat/

Immune cells invade aging brains, disrupt new nerve cell formation

A study by Stanford University School of Medicine investigators has revealed that immune cells infiltrate the rare newborn nerve-cell nurseries of the aging brain. There’s every reason to think those interlopers are up to no good. Experiments in a dish and in living animals indicate they’re secreting a substance that chokes off new nerve cell production.

While most of the experiments in the study were carried out in mice, the central finding—the invasion, by immune cells called killer T cells, of neurogenic niches (specialized spots in the brain where new nerve cells, or neurons, are generated)—was corroborated in tissue excised from autopsied human brains.

The findings could accelerate progress in hunting down the molecules in the body that promote the common deterioration of brain function in older individuals and in finding treatments that might stall or even reverse that deterioration. They also signify a crack in the wall of dogma that’s deemed the healthy brain impervious to invasion by the body’s immune cells, whose unbridled access to the organ could cause damage.

“The textbooks say that immune cells can’t easily get into the healthy brain, and that’s largely true,” said Anne Brunet, Ph.D., professor of genetics and senior author of the study. “But we’ve shown that not only do they get into otherwise healthy aging brains—including human brains—but they reach the very part of the brain where new neurons arise.”

Lead authorship of the study, to be published online July 3 in Nature, is shared by medical student Ben Dulken, Ph.D., graduate student Matthew Buckley and postdoctoral scholar Paloma Navarro Negredo, Ph.D.

The cells that aid memory

Many a spot in a young mammal’s brain is bursting with brand new neurons. But for the most part, those neurons have to last a lifetime. Older mammals’ brains retain only a couple of neurogenic niches, consisting of several cell types whose mix is critical for supporting neural stem cells that can both differentiate into neurons and generate more of themselves. New neurons spawned in these niches are considered essential to forming new memories and to learning, as well as to odor discrimination.

In order to learn more about the composition of the neurogenic niche, the Stanford researchers catalogued, one cell at a time, the activation levels of the genes in each of nearly 15,000 cells extracted from the subventricular zone (a neurogenic niche found in mice and human brains) of healthy 3-month-old mice and healthy 28- or 29-month-old mice.

This high-resolution, single-cell analysis allowed the scientists to characterize each cell they looked at and see what activities it was engaged in. Their analysis confirmed the presence of nine familiar cell types known to compose the neurogenic niche. But when Brunet and her colleagues compared their observations in the brains of young mice (equivalent in human years to young adults) with what they saw in the brains of old mice (equivalent to people in their 80s), they identified a couple of cell types in the older mice not typically expected to be there—and barely present in the young mice. In particular, they found immune cells known as killer T cells lurking in the older mice’s subventricular zone.
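
For a feel of how such a comparison works, here is a toy version of that analysis (not the study’s actual pipeline): cluster a cells-by-genes expression matrix, then flag clusters made up almost entirely of cells from old animals, as the infiltrating T cells were. The data below are synthetic, with one planted “intruder” population:

```python
# Toy single-cell workflow, not the study's pipeline: PCA + k-means on a
# synthetic cells-by-genes matrix, then flag clusters dominated by old cells.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_cells, n_genes = 3000, 500                    # the study profiled ~15,000 cells
X = rng.poisson(1.0, (n_cells, n_genes)).astype(float)
age = rng.choice(["young", "old"], n_cells)

# plant a synthetic intruder population: 100 old cells with a distinctive
# signature in the first 20 genes (stand-ins for killer T cell markers)
intruders = rng.poisson(1.0, (100, n_genes)).astype(float)
intruders[:, :20] += 8.0
X = np.vstack([X, intruders])
age = np.concatenate([age, ["old"] * 100])

X = np.log1p(X)                                 # standard log transform of counts
pcs = PCA(n_components=20).fit_transform(X)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(pcs)

for c in range(10):
    mask = labels == c
    frac_old = np.mean(age[mask] == "old")
    if frac_old > 0.9:                          # a cluster found almost only in old samples
        print(f"cluster {c}: {mask.sum()} cells, {frac_old:.0%} from old animals")
```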

The healthy brain is by no means devoid of immune cells. In fact, it boasts its own unique version of them, called microglia. But a much greater variety of immune cells abounding in the blood, spleen, gut and elsewhere in the body are ordinarily denied entry to the brain, as the blood vessels pervading the brain have tightly sealed walls. The resulting so-called blood-brain barrier renders a healthy brain safe from the intrusion of potentially harmful immune cells on an inflammatory tear as the result of a systemic illness or injury.

“We did find an extremely sparse population of killer T cells in the subventricular zone of young mice,” said Brunet, who is the Michele and Timothy Barakett Endowed Professor. “But in the older mice, their numbers were expanded by 16-fold.”

That dovetailed with reduced numbers of proliferation-enabled neural stem cells in the older mice’s subventricular zone. Further experiments demonstrated several aspects of the killer T cells’ not-so-mellow interaction with neural stem cells. For one thing, tests in laboratory dishware and in living animals indicated that killer T cells isolated from old mice’s subventricular zone were far more disposed than those from the same mice’s blood to pump out an inflammation-promoting substance that stopped neural stem cells from generating new nerve cells.

Second, killer T cells were seen nestled next to neural stem cells in old mice’s subventricular zones and in tissue taken from the corresponding neurogenic niche in autopsied brains of old humans; where this was the case, the neural stem cells were less geared up to proliferate.

Possible brain-based antigens

A third finding was especially intriguing. Killer T cells’ job is to roam through the body probing the surfaces of cells for biochemical signs of a pathogen’s presence or of the possibility that a cell is becoming, or already is, cancerous. Such telltale biochemical features are called antigens. The tens of billions of killer T cells in a human body are able to recognize a gigantic range of antigens by means of receptors on their own surfaces. That’s because every unexposed, or naïve, killer T cell has its own unique receptor shape.

When an initially naïve killer T cell is exposed to an unfamiliar antigen that fits its uniquely shaped receptor, it reacts by undergoing multiple successive rounds of replication, culminating in a large set of warlike cells all sharing the same receptor and all poised to destroy any cells bearing the offending antigen. This process is called clonal expansion.

The killer T cells found in old mice’s brains had undergone clonal expansion, indicating likely exposure to triggering antigens. But the receptors on those killer T cells differed from the ones found in the old mice’s blood, suggesting that the brain-localized killer T cells hadn’t just traipsed through a disrupted blood-brain barrier via passive diffusion but were, rather, reacting to different, possibly brain-based, antigens.

Brunet’s group is now trying to determine what those antigens are. “They may bear some responsibility for the disruption of new neuron production in the aging brain’s neurogenic niches,” she said.

Single-cell analysis reveals T cell infiltration in old neurogenic niches, Nature (2019). DOI: 10.1038/s41586-019-1362-5, https://www.nature.com/articles/s41586-019-1362-5

https://medicalxpress.com/news/2019-07-immune-cells-invade-aging-brains.html

Low-carb ‘keto’ diet (‘Atkins-style’) may modestly improve cognition in older adults

In a pilot study of 14 older adults with mild cognitive problems suggestive of early Alzheimer’s disease, Johns Hopkins Medicine researchers report that a high-fat, low-carbohydrate diet may improve brain function and memory.

Although the researchers say that finding participants willing to undertake restrictive diets for the three-month study—or partners willing to help them stick to those diets—was challenging, those who adhered to a modified Atkins diet (very low carbohydrates and extra fat) had small but measurable improvements on standardized tests of memory compared with those on a low-fat diet.

The short-term results, published in the April issue of the Journal of Alzheimer’s Disease, are far from proof that the modified Atkins diet has the potential to stave off progression from mild cognitive impairment to Alzheimer’s disease or other dementias. However, they are promising enough, the researchers say, to warrant larger, longer-term studies of dietary impact on brain function.

“Our early findings suggest that perhaps we don’t need to cut carbs as strictly as we initially tried. We may eventually see the same beneficial effects by adding a ketone supplement that would make the diet easier to follow,” says Jason Brandt, Ph.D., professor of psychiatry and behavioral sciences and neurology at the Johns Hopkins University School of Medicine. “Most of all, if we can confirm these preliminary findings, using dietary changes to mitigate cognitive loss in early-stage dementia would be a real game-changer. It’s something that 400-plus experimental drugs haven’t been able to do in clinical trials.”

Brandt explains that, typically, the brain uses the sugar glucose—a product of carbohydrate breakdown—as a primary fuel. However, research has shown that in the early stage of Alzheimer’s disease the brain isn’t able to efficiently use glucose as an energy source. Some experts, he says, even refer to Alzheimer’s as “type 3 diabetes.”

Using brain scans that show energy use, researchers have also found that ketones—chemicals formed during the breakdown of dietary fat—can be used as an alternative energy source in the brains of healthy people and those with mild cognitive impairment. For example, when a person is on a ketogenic diet, consisting of lots of fat and very few sugars and starches, the brain and body use ketones as an energy source instead of carbs.

For the current study, the researchers wanted to see if people with mild cognitive impairment, often an indicator of developing Alzheimer’s disease, would benefit from a diet that forced the brain to use ketones instead of carbohydrates for fuel.

After 2 1/2 years of recruitment efforts, the researchers were able to enroll 27 people in the 12-week diet study. There were a few dropouts, and so far, 14 participants have completed the study. The participants were an average age of 71. Half were women, and all but one were white.

To enroll, each participant required a study partner (typically a spouse) who was responsible for ensuring that the participant followed one of two diets for the full 12 weeks. Nine participants followed a modified Atkins diet meant to restrict carbs to 20 grams per day or less, with no restriction on calories. The typical American consumes between 200 and 300 grams of carbs a day. The other five participants followed a National Institute on Aging diet, similar to the Mediterranean diet, that doesn’t restrict carbohydrates but favors fruits, vegetables, low- or fat-free dairy, whole grains and lean proteins such as seafood or chicken.

The participants and their partners were also asked to keep food diaries. Prior to starting the diets, those assigned to the modified Atkins diet were consuming about 158 grams of carbs per day. By week six of the diet, they had cut back to an average of 38.5 grams per day, and they dropped further by nine weeks, though still short of the 20-gram target, before rising to an average of 53 grams of carbs by week 12. Participants on the National Institute on Aging diet continued to eat well over 100 grams of carbs per day.

Each participant also gave urine samples at the start of the dietary regimens and every three weeks up to the end of the study, which were used to track ketone levels. More than half of the participants on the modified Atkins diet had at least some ketones in their urine from six weeks into the diet until the end; as expected, none of the participants on the National Institute on Aging control diet had any detectable ketones.

Participants completed the Montreal Cognitive Assessment, the Mini-Mental State Examination and the Clinical Dementia Rating Scale at the start of the study. They were tested with a brief collection of neuropsychological memory tests before starting their diets and at six weeks and 12 weeks on the diet. At the six-week mark, the researchers found a significant improvement on memory tests, which coincided with the highest levels of ketones and lowest carb intakes.

When comparing the results of tests of delayed recall—the ability to recollect something they were told or shown a few minutes earlier—those who stuck to the modified Atkins diet improved by a couple of points on average (about 15% of the total score), whereas those who didn’t follow the diet on average dropped a couple of points.

The researchers say their biggest hurdle was finding people willing to make drastic changes to their eating habits, and partners willing to enforce the diets. The increase in carbohydrate intake later in the study period, they said, suggests that the diet becomes unpalatable over long periods.

“Many people would rather take a pill that causes them all kinds of nasty side effects than change their diet,” says Brandt. “Older people often say that eating the foods they love is one of the few pleasures they still enjoy in life, and they aren’t willing to give that up.”

But, because Brandt’s team observed promising results even in those lax with the diet, they believe that a milder version of the high-fat/low-carb diet, perhaps in conjunction with ketone supplement drinks, is worth further study. As this study also depended on caregivers/partners to do most of the work preparing and implementing the diet, the group also wants to see if participants with less severe mild cognitive impairment can make their own dietary choices and be more apt to stick to a ketogenic diet.

A standardized modified Atkins diet was created and tested at Johns Hopkins Medicine in 2002, initially to treat some seizure disorders. It’s still used very successfully for this purpose.

According to the Alzheimer’s Association, about 5.8 million Americans have Alzheimer’s disease, and by 2050 the number is projected to increase to 14 million people.

Jason Brandt et al. Preliminary Report on the Feasibility and Efficacy of the Modified Atkins Diet for Treatment of Mild Cognitive Impairment and Early Alzheimer’s Disease, Journal of Alzheimer’s Disease (2019). DOI: 10.3233/JAD-180995

https://medicalxpress.com/news/2019-06-low-carb-keto-diet-atkins-style-modestly.html

Study suggests neuronal origin of ‘they all look alike’

by Bob Yirka

A team of researchers from the University of California and Stanford University has found that the tendency to see people from different racial groups as interchangeable has a neuronal basis. In their paper published in Proceedings of the National Academy of Sciences, the group describes studies they conducted with volunteers and what they found.

One often-heard phrase connected with racial profiling is “they all look the same to me,” a phrase usually perceived as racist. It implies that people of one race have difficulty discerning the facial characteristics of people of another race. In this new effort, the researchers conducted experiments to find out if this is valid—at least among one small group of young, white men.

In the first experiment, young, white male volunteers looked at photographs of human faces, some depicting black people, others white, while undergoing an fMRI scan. Afterward, the researchers found that the part of the brain involved in facial recognition activated more for white faces than it did for black faces.

In the second experiment, the same volunteers looked at photographs of faces that had been doctored to make the subjects appear more alike, regardless of skin color. The researchers report that the volunteers’ brains responded when dissimilarities were spotted, regardless of skin color, though the response was more pronounced when the photo was of a white face.

In a third series of experiments, the volunteers rated how different they found faces in a series of photographs or whether they had seen a given face before. The researchers report that the volunteers had a tendency to rate the black faces as more similar to one another than the white faces. And they found it easier to tell if they had seen a particular white face before.

The researchers suggest that the results of their experiments indicate a neural basis that makes it more difficult for people to see differences between individuals of other races. They note that they did account for social contexts such as whether the volunteers had friends and/or associates of other races. They suggest that more work is required to determine if such neuronal biases can be changed based on social behavior.

Brent L. Hughes et al. Neural adaptation to faces reveals racial outgroup homogeneity effects in early perception, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1822084116

https://medicalxpress.com/news/2019-07-neuronal-alike.html

Drug to treat malaria could mitigate hereditary hearing loss


Kumar Alagramam, PhD, Case Western Reserve University

The ability to hear depends on key proteins reaching the outer membrane of sensory cells in the inner ear. But in certain types of hereditary hearing loss, mutations prevent the protein from reaching these membranes. Using a zebrafish model, researchers at Case Western Reserve University School of Medicine have found that an anti-malarial drug called artemisinin may help prevent hearing loss associated with this genetic disorder.

In a recent study, published in the Proceedings of the National Academy of Sciences (PNAS), researchers found the classic anti-malarial drug can help sensory cells of the inner ear recognize and transport an essential protein to specialized membranes using established pathways within the cell.

The sensory cells of the inner ear are marked by hair-like projections on the surface, earning them the nickname “hair cells.” Hair cells convert sound and movement-induced vibrations into electrical signals that are conveyed through nerves and translated in the brain as information used for hearing and balance.

The mutant form of the protein, clarin1, renders hair cells unable to recognize and transport it to membranes essential for hearing via their typical pathways. Instead, most mutant clarin1 protein gets trapped inside hair cells, where it is ineffective and detrimental to cell survival. Faulty clarin1 secretion can occur in people with Usher syndrome, a common genetic cause of hearing and vision loss.

The study found artemisinin restores inner ear sensory cell function—and thus hearing and balance—in zebrafish genetically engineered to have human versions of an essential hearing protein.

Senior author on the study, Kumar N. Alagramam, the Anthony J. Maniglia Chair for Research and Education and associate professor at Case Western Reserve University School of Medicine Department of Otolaryngology at University Hospitals Cleveland Medical Center, has been studying ways to get mutant clarin1 protein to reach cell membranes to improve hearing in people with Usher syndrome.

“We knew mutant protein largely fails to reach the cell membrane, except patients with this mutation are born hearing,” Alagramam said. “This suggested to us that, somehow, at least a fraction of the mutant protein must get to cell membranes in the inner ear.”

Alagramam’s team searched for any unusual secretion pathways mutant clarin1 could take to get to hair cell membranes. “If we can understand how the human clarin1 mutant protein is transported to the membrane, then we can exploit that mechanism therapeutically,” Alagramam said.

For the PNAS study, Alagramam’s team created several new zebrafish models. They swapped the genes encoding zebrafish clarin1 with human versions—either normal clarin1, or clarin1 containing mutations found in humans with a type of Usher syndrome, which can lead to profound hearing loss.

“Using these ‘humanized’ fish models,” Alagramam said, “we were able to study the function of normal clarin1 and, more importantly, the functional consequences of its mutant counterpart. To our knowledge, this is the first time a human protein involved in hearing loss has been examined in this manner.”

Zebrafish offer several advantages for studying hearing. Their larvae are transparent, making it easy to monitor inner ear cell shape and function. Their genes are also nearly identical to those of humans—particularly when it comes to genes that underlie hearing. Replacing zebrafish clarin1 with human clarin1 made an even more precise model.

The researchers found the unconventional cellular secretion pathway they were looking for by using fluorescent labels to track human clarin1 moving through zebrafish hair cells. The mutated clarin1 gets to the cell membrane using proteins and trafficking mechanisms within the cell normally reserved for misfolded proteins “stuck” in certain cellular compartments.

“As far as we know, this is the first time a human mutant protein associated with hearing loss has been shown to be ‘escorted’ by the unconventional cellular secretion pathway,” Alagramam said. “This mechanism may shed light on the process underlying hearing loss associated with other mutant membrane proteins.”

The study showed the majority of mutant clarin1 gets trapped inside a network of tubules within the cell, analogous to stairs and hallways that help proteins, including clarin1, get from place to place. Alagramam’s team surmised that liberating the mutant protein from this tubular network would be therapeutic, and tested two drugs that target it: thapsigargin (an anti-cancer drug) and artemisinin (an anti-malarial drug).

The drugs did enable zebrafish larvae to liberate the trapped proteins and raise clarin1 levels in the membrane, but artemisinin was the more effective of the two. Not only did the drug help mutant clarin1 reach the membrane; hearing and balance functions were also better preserved in zebrafish treated with the anti-malarial drug than in untreated fish.

In zebrafish, survival depends on normal swim behavior, which in turn depends on balance and the ability to detect water movement, both of which are tied to hair cell function. Survival rates in zebrafish expressing the mutant clarin1 jumped from 5% to 45% after artemisinin treatment.

“Our report highlights the potential of artemisinin to mitigate both hearing and vision loss caused by clarin1 mutations,” Alagramam said. “This could be a re-purposable drug, with a safe profile, to treat Usher syndrome patients.”

Alagramam added that the unconventional secretion mechanism and the activation of that mechanism using artemisinin or similar drugs may also be relevant to other genetic disorders that involve mutant membrane proteins aggregating in the cell’s tubular network, including sensory and non-sensory disorders.

Gopal SR, et al. “Unconventional secretory pathway activation restores hair cell mechanotransduction in an USH3A model.” PNAS.
