Posts Tagged ‘communication’

The world’s smallest bears copy one another’s facial expressions as a means of communication.

A team at the University of Portsmouth, UK, studied 22 sun bears at the Bornean Sun Bear Conservation Centre in Malaysia. In total, 21 matched the open-mouthed expressions of their playmates during face-to-face interactions.

When they were facing each other, 13 bears made the expressions within 1 second of observing a similar expression from their playmate.

“Mimicking the facial expressions of others in exact ways is one of the pillars of human communication,” says Marina Davila-Ross, who was part of the team. “Other primates and dogs are known to mimic each other, but only great apes and humans were previously known to show such complexity in their facial mimicry.”

Sun bears have no special evolutionary link to humans, unlike monkeys or apes, nor are they domesticated animals like dogs. The team believes this suggests the behaviour may also be present in various other species.

Also known as honey bears, sun bears are the smallest members of the bear family. They grow to between 120 and 150 centimetres long and weigh up to 80 kilograms. The species is endangered and lives in the tropical forests of South-East Asia.

While the bears prefer a solitary life, the team says that they engage in gentle and rough play and may use facial mimicry to indicate they are ready to play more roughly, or to strengthen social bonds.

“It is widely believed that we only find complex forms of communication in species with complex social systems,” says Derry Taylor, also on the team. “As sun bears are a largely solitary species, our study of their facial communication questions this belief, because it shows a complex form of facial communication that until now was known only in more social species.”

Journal reference: Scientific Reports, DOI: 10.1038/s41598-019-39932-6

By Alison George

Human speech contains more than 2000 different sounds, from the ubiquitous “m” and “a” to the rare clicks of some southern African languages. But why are certain sounds more common than others? A ground-breaking, five-year investigation shows that diet-related changes in human bite led to new speech sounds that are now found in half the world’s languages.

More than 30 years ago, the linguist Charles Hockett noted that speech sounds called labiodentals, such as “f” and “v”, were more common in the languages of societies that ate softer foods. Now a team of researchers led by Damián Blasi at the University of Zurich, Switzerland, has pinpointed how and why this trend arose.

They found that the upper and lower incisors of ancient human adults were aligned, making it hard to produce labiodentals, which are formed by touching the lower lip to the upper teeth. Later, our jaws changed to an overbite structure, making it easier to produce such sounds.

The team showed that this change in bite correlated with the development of agriculture in the Neolithic period. Food became easier to chew at this point, which led to changes in human jaws and teeth: for instance, because it takes less pressure to chew softer, farmed foods, the jawbone doesn’t have to do as much work and so doesn’t grow to be so large.

Analyses of a language database also confirmed that there was a global change in the sound of world languages after the Neolithic era, with the use of “f” and “v” increasing dramatically in recent millennia. These sounds are still not found in the languages of many hunter-gatherer people today.

This research overturns the prevailing view that all human speech sounds were present when Homo sapiens evolved around 300,000 years ago. “The set of speech sounds we use has not necessarily remained stable since the emergence of our species, but rather the immense diversity of speech sounds that we find today is the product of a complex interplay of factors involving biological change and cultural evolution,” said team member Steven Moran, a linguist at the University of Zurich, at a briefing about this study.

This new approach to studying language evolution is a game changer, says Sean Roberts at the University of Bristol, UK. “For the first time, we can look at patterns in global data and spot new relationships between the way we speak and the way we live,” he says. “It’s an exciting time to be a linguist.”

Journal reference: Science, DOI: 10.1126/science.aav3218

https://www.newscientist.com/article/2196580-humans-couldnt-pronounce-f-and-v-sounds-before-farming-developed/

Songbirds known as Japanese tits communicate using human-like rules for language and can mentally picture what they’re talking about, research suggests.

by Brandon Keim

Hear a word, particularly an important one — like “snake!” — and an image appears in your mind. Now scientists are finding that this basic property of human language is shared by certain birds and, perhaps, many other creatures.

In a series of clever tests, a researcher has found that birds called Japanese tits not only chirp out a distinctive warning for snakes, but also appear to imagine a snake when they hear that cry. This glimpse into the mind’s eye of a bird hints at just how widespread this ostensibly human-like capacity may be.

“Animal communication has been considered very different from human speech,” says Toshitaka Suzuki, an ethologist at Japan’s Kyoto University. “My results suggest that birds and humans may share similar cognitive abilities for communication.”

Perhaps this went unappreciated for so long, says Suzuki, simply because “we have not yet found a way to look at the animals’ minds.”

Over the last several years, Suzuki conducted a series of experiments deciphering the vocalizations of Japanese tits — or Parus minor, whose family includes such everyday birds as chickadees and titmice — and describing their possession of syntax, or the ability to produce new meanings by combining words in various orders. (“Open the door,” for example, versus “the open door.”)

Syntax has long been considered unique to human language, and language in turn is often thought to set humans apart from other animals. Yet Suzuki found it not in birds typically celebrated for intelligence, such as crows or parrots, but in the humble P. minor.

MENTAL PICTURES
Once he realized that birds are using their own form of language, Suzuki wondered: what happens in their minds when they talk? Might words evoke corresponding images, as happens for us?

Suzuki tested that proposition by broadcasting recordings of P. minor’s snake-specific alarm call from a tree-mounted speaker. Then he analyzed the birds’ responses to a stick that he’d hung along the trunk and could manipulate to mimic a climbing snake.

If the call elicited a mental image, Suzuki figured the birds would pay extra-close attention to the snake-like stick. Indeed they did, he recently reported in the journal Proceedings of the National Academy of Sciences.

In contrast, when Suzuki broadcast a call used by tits to convey a general, non-specific alarm, the birds didn’t pay much notice to the stick. And when he set the stick swinging from side to side in a decidedly non-snakelike manner, the birds ignored it.

“Simply hearing these calls causes tits to become more visually perceptive to objects resembling snakes,” he writes in PNAS. “Before detecting a real snake, tits retrieve its visual image from snake-specific alarm calls and use this to search out snakes.”

Rob Magrath, a behavioral ecologist at Australian National University who specializes in bird communication, thinks Suzuki’s interpretation is consistent with the results. He also calls the work “truly delightful.”

“I love the way that Suzuki employs simple experiments, literally using sticks and string, to test ideas,” Magrath says. Similarly impressed is ecologist Christine Sheppard of the American Bird Conservancy. “It’s incredibly challenging to devise an experiment that would allow you to answer this question,” she says. “It’s really neat.”

MINDS OF THEIR OWN
Sheppard says it makes evolutionary sense for animals to possess a ‘mind’s eye’ that works in tandem with their communications: It allows individuals to respond more quickly to threats. Suzuki agrees, and believes it’s likely found not only in P. minor and their close relatives, but in many other birds and across the animal kingdom.

“Many other animals produce specific calls when finding specific types of food or predators,” he says. He hopes researchers will use his methodology to peek into the mind’s eye of other animals.

For Sheppard, the findings also speak to how people think about birds: not just as pretty or interesting or ecologically important, but as fellow beings with rich minds of their own.

“When I was in school, people still thought that birds were little automata. Now ‘bird brain’ is becoming a compliment,” she says.

“I think this kind of insight helps people see birds as living, breathing creatures with whom we share the planet,” she says.

https://news.nationalgeographic.com/2018/02/japanese-songbirds-process-language-syntax/

From the way you move and sleep, to how you interact with people around you, depression changes just about everything. It is even noticeable in the way you speak and express yourself in writing. Sometimes this “language of depression” can have a powerful effect on others. Just consider the impact of the poetry and song lyrics of Sylvia Plath and Kurt Cobain, who both killed themselves after suffering from depression.

Scientists have long tried to pin down the exact relationship between depression and language, and technology is helping us get closer to a full picture. Our new study, published in Clinical Psychological Science, has now unveiled a class of words that can help accurately predict whether someone is suffering from depression.

Traditionally, linguistic analyses in this field have been carried out by researchers reading and taking notes. Nowadays, computerised text analysis methods allow the processing of extremely large data banks in minutes. This can help spot linguistic features which humans may miss, calculating the percentage prevalence of words and classes of words, lexical diversity, average sentence length, grammatical patterns and many other metrics.
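As a rough illustration of the kinds of metrics described above (this is not the researchers' actual pipeline; the pronoun list and sample text are invented for demonstration), a few lines of standard-library Python suffice:

```python
import re

def text_metrics(text):
    """Compute simple linguistic metrics of the kind used in computerised text analysis."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    first_person = {"i", "me", "my", "myself", "mine"}
    return {
        # lexical diversity: unique words as a fraction of all word tokens
        "lexical_diversity": len(set(words)) / len(words),
        # average sentence length in words
        "avg_sentence_len": len(words) / len(sentences),
        # percentage prevalence of first-person singular pronouns
        "pct_first_person": 100 * sum(w in first_person for w in words) / len(words),
    }

sample = "I feel lonely. Nothing I do helps me. I am always sad."
print(text_metrics(sample))
```

On real data banks the same counting approach scales to millions of words in minutes, which is what makes patterns invisible to a human reader detectable.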

So far, personal essays and diary entries by depressed people have been useful, as has the work of well-known artists such as Cobain and Plath. For the spoken word, snippets of natural language of people with depression have also provided insight. Taken together, the findings from such research reveal clear and consistent differences in language between those with and without symptoms of depression.

Content
Language can be separated into two components: content and style. The content relates to what we express – that is, the meaning or subject matter of statements. It will surprise no one to learn that those with symptoms of depression use an excessive number of words conveying negative emotions, specifically negative adjectives and adverbs – such as “lonely”, “sad” or “miserable”.

More interesting is the use of pronouns. Those with symptoms of depression use significantly more first person singular pronouns – such as “me”, “myself” and “I” – and significantly fewer second and third person pronouns – such as “they”, “them” or “she”. This pattern of pronoun use suggests people with depression are more focused on themselves, and less connected with others. Researchers have reported that pronouns are actually more reliable in identifying depression than negative emotion words.

We know that rumination (dwelling on personal problems) and social isolation are common features of depression. However, we don’t know whether these findings reflect differences in attention or thinking style. Does depression cause people to focus on themselves, or do people who focus on themselves get symptoms of depression?

Style
The style of language relates to how we express ourselves, rather than the content we express. Our lab recently conducted a big data text analysis of 64 different online mental health forums, examining over 6,400 members. “Absolutist words” – which convey absolute magnitudes or probabilities, such as “always”, “nothing” or “completely” – were found to be better markers for mental health forums than either pronouns or negative emotion words.

From the outset, we predicted that those with depression would have a more black-and-white view of the world, and that this would manifest in their style of language. Compared to 19 different control forums (for example, Mumsnet and StudentRoom), the prevalence of absolutist words is approximately 50% greater in anxiety and depression forums, and approximately 80% greater in suicidal ideation forums.
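To make the percentage-prevalence comparison concrete, here is a minimal sketch. The mini word list and toy posts are invented for illustration; the study used a much larger absolutist dictionary and real forum data:

```python
import re

# A small, illustrative subset of absolutist words (the study used a larger dictionary).
ABSOLUTIST = {"always", "never", "nothing", "completely", "totally", "everyone"}

def absolutist_prevalence(posts):
    """Percentage of all word tokens that are absolutist words."""
    words = re.findall(r"[a-z']+", " ".join(posts).lower())
    return 100 * sum(w in ABSOLUTIST for w in words) / len(words)

depression_posts = ["Nothing ever goes right and I always feel completely alone."]
control_posts = ["We usually meet on Fridays and sometimes go for a walk."]

print(absolutist_prevalence(depression_posts))  # higher
print(absolutist_prevalence(control_posts))     # lower
```

Comparing this single percentage across forum types is what yields figures like the "approximately 50% greater" difference reported above.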

Pronouns showed a distributional pattern similar to that of absolutist words across the forums, but the effect was smaller. By contrast, negative emotion words were paradoxically less prevalent in suicidal ideation forums than in anxiety and depression forums.

Our research also included recovery forums, where members who feel they have recovered from a depressive episode write positive and encouraging posts about their recovery. Here we found that negative emotion words were used at comparable levels to control forums, while positive emotion words were elevated by approximately 70%. Nevertheless, the prevalence of absolutist words remained significantly greater than that of controls, but slightly lower than in anxiety and depression forums.

Crucially, those who have previously had depressive symptoms are more likely to have them again. Therefore, their greater tendency for absolutist thinking, even when there are currently no symptoms of depression, is a sign that it may play a role in causing depressive episodes. The same effect is seen in use of pronouns, but not for negative emotion words.

Practical implications
Understanding the language of depression can help us understand the way those with symptoms of depression think, but it also has practical implications. Researchers are combining automated text analysis with machine learning (computers that can learn from experience without being explicitly programmed) to classify a variety of mental health conditions from natural language text samples such as blog posts.

Such classification is already outperforming that made by trained therapists. Importantly, machine learning classification will only improve as more data is provided and more sophisticated algorithms are developed. This goes beyond looking at the broad patterns of absolutism, negativity and pronouns already discussed. Work has begun on using computers to accurately identify increasingly specific subcategories of mental health problems – such as perfectionism, self-esteem problems and social anxiety.
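The classification step can be illustrated with a toy bag-of-words Naive Bayes text classifier. This is a generic textbook sketch in standard-library Python, not the researchers' model, and the training texts and labels are invented:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """A minimal bag-of-words Naive Bayes text classifier (toy sketch)."""

    def fit(self, texts, labels):
        self.label_counts = Counter(labels)
        self.word_counts = {lab: Counter() for lab in self.label_counts}
        for text, lab in zip(texts, labels):
            self.word_counts[lab].update(tokenize(text))
        self.vocab = set().union(*self.word_counts.values())
        return self

    def predict(self, text):
        scores = {}
        total = sum(self.label_counts.values())
        for lab in self.label_counts:
            # log prior plus log likelihoods with add-one smoothing
            score = math.log(self.label_counts[lab] / total)
            n = sum(self.word_counts[lab].values())
            for w in tokenize(text):
                score += math.log((self.word_counts[lab][w] + 1) / (n + len(self.vocab)))
            scores[lab] = score
        return max(scores, key=scores.get)

# Invented toy training data, purely for demonstration.
texts = ["I always feel completely alone and nothing helps",
         "nothing ever goes right for me I am so sad",
         "we had a lovely walk and a nice chat today",
         "looking forward to seeing everyone at the party"]
labels = ["depressed", "depressed", "control", "control"]

clf = NaiveBayes().fit(texts, labels)
print(clf.predict("nothing helps me I feel alone"))  # likely "depressed"
```

Real systems train on far larger corpora and richer features (the absolutist, pronoun and emotion-word patterns discussed above among them), but the principle of scoring word frequencies per class is the same.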

That said, it is of course possible to use language associated with depression without actually being depressed. Ultimately, it is how you feel over time that determines whether you are suffering. But as the World Health Organisation estimates that more than 300 million people worldwide are now living with depression, an increase of more than 18% since 2005, having more tools available to spot the condition is certainly important to improve health and prevent tragic suicides such as those of Plath and Cobain.

https://theconversation.com/people-with-depression-use-language-differently-heres-how-to-spot-it-90877

By Drake Baer

Everybody knows that humpback whales make excellent professional wrestlers: With zero hesitation, these gentle giants will leap out of the sea, corkscrew their bodies, and then slam back into the water with 66,000 pounds of fury.

It turns out that these cetaceans aren’t just doing this to show off: According to a recent paper in Marine Mammal Science, the breaching serves as an acoustic telegram, communicating with far-off pods. It’s like how European or African peoples would send sonic signals from village to village via drum, or how wolves howl at the moon. Make a big enough splash, and the percussion speaks for itself.

As noted in the marine-life publication Hakai magazine, University of Queensland marine biologist Ailbhe S. Kavanagh and colleagues observed 76 humpback groups off the coast of Australia for 200 hours between 2010 and 2011. They found that breaching is way more common when pods are at least 2.5 miles apart, with fin- or fluke-slapping deployed when fellow whales are nearby.

The breaching probably carries better than whales’ signature songs: “They’re potentially using [these behaviors] when background noise levels are higher,” Kavanagh tells Hakai, “as the acoustic signal possibly travels better than a vocal signal would.” Given that whale songs have regional accents, you have to wonder if their aerial gymnastics have a certain patois, too.

http://nymag.com/scienceofus/2017/02/why-whales-jump-into-the-air.html

Thanks to Pete Cuomo for bringing this to the It’s Interesting community.

by Tori Rodriguez, MA, LPC

The social transmission of emotions has been reported in several studies in recent years. Research published in 2013, for example, found that joy and fear are transmissible between people, while a 2011 study showed that stress — as measured by an increase in cortisol — can be transmitted from others who are under pressure.1,2 Results of a new study that appeared in Science Advances suggest that pain may also be communicable.3

“Being able to perceive and communicate pain to others probably gives an evolutionary advantage to animals,” study co-author Andrey E. Ryabinin, PhD, a professor of behavioral neuroscience at Oregon Health & Science University, told Clinical Pain Advisor. Such awareness may trigger self-protective or caretaking behaviors, for instance, that facilitate the survival of the individual and the group.

In the current study, Ryabinin and colleagues investigated whether “bystander” mice would develop hyperalgesia after being housed in the same room as “primary” mice who had received a noxious stimulus. In one experiment, the paws of primary mice were injected with complete Freund’s adjuvant (CFA), which, as expected, induced persistent hypersensitivity that was apparent for 2 weeks. Bystander mice who had been injected with phosphate-buffered saline (PBS) similarly demonstrated hypersensitivity throughout the same 2-week period.

Bystander mice also displayed acquired hypersensitivity in another set of experiments in which primary mice experienced pain related to withdrawal from morphine and alcohol. This suggests that the transfer of hyperalgesia is not limited to the effects of inflammatory stimuli. In addition, the transfer was consistent across mechanical, thermal, and chemical modalities of nociception.

Tests revealed that nociceptive thresholds returned to basal levels in both primary and bystander mice within 4 days, and the transferred hyperalgesia was not accounted for by familiarity, as the effects were similar between mice that were not familiar with the others and those that were.

Finally, the authors determined that the transfer of hyperalgesia was mediated by olfactory cues (as measured by exposing naïve mice to the bedding of hypersensitive co-housed mice), and it could not be accounted for by anxiety, visual cues, or stress-induced hyperalgesia.

Future research is needed to pinpoint the molecular messenger involved in the transfer of hyperalgesia, and to determine whether a similar process occurs in humans.

“Here we show for the first time that you do not need an injury or inflammation to develop a pain state: pain can develop simply because of social cues,” said Dr Ryabinin. These findings have important implications for the treatment of chronic pain patients. “We cannot dismiss people with chronic pain if they have no physical pathology. They can be in pain without the pathology and need to be treated for their pain despite lack of injury.”

References
1. Dezecache G, Conty L, Chadwick M, et al. Evidence for unintentional emotional contagion beyond dyads. PLoS One. 2013;8(6):e67371.
2. Buchanan TW, Bagley SL, Stansfield RB, Preston SD. The empathic, physiological resonance of stress. Soc Neurosci. 2012;7(2):191-201.
3. Smith ML, Hostetler CM, Heinricher MM, Ryabinin AE. Social transfer of pain in mice. Sci Adv. 2016;2(10):e1600855.

http://www.psychiatryadvisor.com/anxiety/social-transfer-of-hyperalgesia/article/571087/

By Ben Westcott

A conversation between dolphins may have been recorded by scientists for the first time, a Russian researcher claims.

Two adult Black Sea bottlenose dolphins, named Yasha and Yana, didn’t interrupt each other during an interaction taped by scientists and may have formed words and sentences with a series of pulses, Vyacheslav Ryabov says in a new paper.

“Essentially, this exchange resembles a conversation between two people,” Ryabov said.

Joshua Smith, a research fellow at Murdoch University Cetacean Research Unit, says there will need to be more research before scientists can be sure whether dolphins are chatting.

“I think it’s very early days to be drawing conclusions that the dolphins are using signals in a kind of language context, similar to humans,” he told CNN.

Dolphins use two different types of sounds for communication: whistles and clicks, the latter also known as pulses.

Using new recording techniques, Ryabov separated the individual “non-coherent pulses” the two dolphins made and theorized that each pulse was a word in the dolphins’ language, while a collection of pulses is a sentence.

“As this language exhibits all the design features present in the human spoken language, this indicates a high level of intelligence and consciousness in dolphins,” he said in the paper, which was published in the St. Petersburg Polytechnical University Journal: Physics and Mathematics last month.

“Their language can be ostensibly considered a highly developed spoken language.”

In his paper, Ryabov calls for humans to create a device by which human beings can communicate with dolphins.

“Humans must take the first step to establish relationships with the first intelligent inhabitants of the planet Earth by creating devices capable of overcoming the barriers that stand in the way of … communications between dolphins and people,” he said.

Smith said while the results were an exciting advance in the under-researched field of dolphin communication, the results first needed to be replicated in open water environments.

“If we boil it down we pretty much have two animals in an artificial environment where reverberations are a problem … It wouldn’t make much sense for animals (in a small area) to make sounds over each other because they wouldn’t get much (sonar) information,” he said.

“It would be nice to see a variety of alternate explanations to this rather than the one they’re settling on.”

http://www.cnn.com/2016/09/13/europe/dolphin-language-conversation-research/index.html