New evidence that dogs can recognize vowel changes in words

by Gege Li

Dogs pay much closer attention to what humans say than we realised, even to words that are probably meaningless to them.

Holly Root-Gutteridge at the University of Sussex, UK, and her colleagues played audio recordings of people saying six words to 70 pet dogs of various breeds. The dogs had never heard these voices before, and the words differed only in their vowels, such as “had”, “hid” and “who’d”.

Each recording was altered so the voices were at the same pitch, ensuring that the only cue the dogs had was the difference between vowels, rather than how people said the words.
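
The paper doesn't publish its processing pipeline, but a minimal sketch of this kind of pitch normalisation, assuming Python with the librosa and soundfile libraries and an arbitrary 200 Hz target pitch (all assumptions for illustration, not details from the study), might look like this:

```python
# Minimal sketch of pitch-normalising speech recordings so that every speaker
# ends up at the same fundamental frequency. Assumes librosa and soundfile;
# the target pitch and file names are illustrative, not from the study.
import numpy as np
import librosa
import soundfile as sf

TARGET_F0 = 200.0  # Hz, an arbitrary common pitch for all speakers

def normalise_pitch(in_path, out_path):
    y, sr = librosa.load(in_path, sr=None)
    # Estimate the speaker's median fundamental frequency over voiced frames.
    f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    median_f0 = np.nanmedian(f0)
    # Shift the whole recording so its median F0 lands on the target pitch.
    n_steps = 12 * np.log2(TARGET_F0 / median_f0)  # semitones
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)
    sf.write(out_path, y_shifted, sr)

normalise_pitch("speaker1_had.wav", "speaker1_had_normalised.wav")
```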

After hearing the recordings just once, 48 of the dogs reacted when either the same speaker said a new word or the same word was said by a different speaker. The remainder either didn’t visibly respond or got distracted.

The team based its assessment of the dogs’ reactions on how long they paid attention when the voice or word changed – if the dogs moved their ears or shifted eye contact, for example, it showed that they noticed the change. In contrast, when the dogs heard the same word repeated several times, their attention waned.

Until now, it was thought that only humans could detect vowels in words and realise that these sounds stay the same across different speakers. But the dogs could do both spontaneously without any previous training.

“I was surprised by how well some of the dogs responded to unfamiliar voices,” says Root-Gutteridge. “It might mean that they comprehend more than we give them credit for.”

This ability may be the result of domestication, says Root-Gutteridge, as dogs that pay closer attention to human sounds are more likely to have been chosen for breeding.

The work highlights the strength of social interactions between humans and dogs, says Britta Osthaus at Canterbury Christ Church University, UK. “It would be interesting to see whether a well-trained dog would react differently to the command of ‘sat’ instead of ‘sit’,” she says.

Journal reference: Biology Letters, DOI: 10.1098/rsbl.2019.0555

https://www.newscientist.com/article/2225746-dogs-have-a-better-ear-for-language-than-we-thought/

New Hearing Aid Includes Fitness Tracking, Language Translation

Starkey Hearing Technologies recently unveiled their latest hearing aid, the Livio AI. The aid leverages artificially intelligent software to adapt to users’ listening environments. Starkey says the device does a lot more than just assist in hearing, and includes a range of additional technology, such as a physical activity tracker and integrated language translation.

Hearing loss has a disabling effect on 466 million people worldwide, including over 7 million children under 5 years old. Modern hearing aids already include some pretty sophisticated connectivity, including Bluetooth and internet functionality. The Livio device, however, goes quite a few steps further, and capitalizes on the current craze for fitness devices by including a host of health-minded integrations.

The Future is Hear
Launched August 27 at an event at Starkey’s Minnesota HQ, the Livio contains advances which the Starkey CTO Achin Bhowmik was keen to compare to those seen in the phone market over the last twenty years. The eponymous “artificial intelligence” aspect of the device includes the ability to detect the location and environment in which the user is wearing the aid and optimize the listening experience based on this information. This is, arguably, not the most eye-catching (ear-catching?) feature of the Livio – such capabilities have been advertised in other hearing aid technology.

Rather, the Livio’s main party trick is its integration of inertial sensors, which lets it track physical activity much like other fitness devices. It can count your steps and exercise, and cleverly combines this with a “brain health” measurement to derive a mind-and-body health score. The brain health measurement is partly calculated from how much you wear the device, and while it’s arguable whether simply wearing a hearing aid counts as training your brain, another component, which increases the score when users interact with different people in different environments, sounds like a neat way to check on the social health of elderly users. Furthermore, the inertial sensor can detect whether a wearer has fallen, which Bhowmik was keen to point out is a major health hazard for older people.

The translation software is also a major draw, and the promise of sci-fi level language conversion, covering 27 languages, shows Starkey are aiming to bring the multi-billion-dollar hearing aid industry into the future.

As for whether the device can meet these lofty promises, you’ll simply have to keep an eye (and er, ear) out to see if the Livio performs as well as Starkey hope.

Vaitheki Maheswaran, Audiology Specialist for UK-based charity Action on Hearing Loss, said: “The innovation in technology is interesting, not only enabling users to hear better but to monitor their body and mental fitness with the use of an app. However, while this technology is not currently available in the UK, it is important to speak to an audiologist who can help you in choosing the most suitable type of hearing aid for your needs because one type of hearing aid is not suitable for everyone.”

https://www.technologynetworks.com/informatics/news/new-hearing-aid-includes-fitness-tracking-language-translation-309458

Previously unknown language in Southeast Asia has many words to describe sharing and cooperation, and none for stealing.

STARRE VARTAN

Linguists from Lund University in Sweden have discovered a previously undocumented language — a perfect example of why field research is so important in the social sciences. Only spoken by about 280 people in northern Peninsular Malaysia, this language includes a “rich vocabulary of words to describe exchanging and sharing,” according to researchers Niclas Burenhult and Joanne Yager, who published their findings in the journal Linguistic Typology.

Burenhult and Yager discovered the language while surveying for a subproject of the DOBES (Documentation of Endangered Languages) initiative. Under the Tongues of the Semang project, they were looking for language data from speakers of various Aslian languages.

They named the new language Jedek. “Jedek is not a language spoken by an unknown tribe in the jungle, as you would perhaps imagine, but in a village previously studied by anthropologists. As linguists, we had a different set of questions and found something that the anthropologists missed,” Burenhult, an associate professor of general linguistics, said in a university release.

The people who speak Jedek are settled hunter-gatherers, and their language may influence — or reflect — other aspects of their culture. As detailed by the linguists, “There are no indigenous verbs to denote ownership such as borrow, steal, buy or sell, but there is a rich vocabulary of words to describe exchanging and sharing.”

The community in which Jedek is spoken differs in other ways beyond sharing versus owning. It’s more gender-equal than Western societies, according to the linguists. They also report that there are no professions; everyone knows how to do everything. “There are no indigenous words for occupations or for courts of law. There is almost no interpersonal violence, they consciously encourage their children not to compete, and there are no laws or courts.”

https://www.mnn.com/lifestyle/arts-culture/blogs/malaysias-jedek-language-rich-vocabulary-words-describe-sharing-cooperation

Experiments Reveal What Birds See in Their Mind’s Eye

Songbirds known as Japanese tits communicate using human-like rules for language and can mentally picture what they’re talking about, research suggests.

by Brandon Keim

Hear a word, particularly an important one — like “snake!” — and an image appears in your mind. Now scientists are finding that this basic property of human language is shared by certain birds and, perhaps, many other creatures.

In a series of clever tests, a researcher has found that birds called Japanese tits not only chirp out a distinctive warning for snakes, but also appear to imagine a snake when they hear that cry. This glimpse into the mind’s eye of a bird hints at just how widespread this ostensibly human-like capacity may be.

“Animal communication has been considered very different from human speech,” says Toshitaka Suzuki, an ethologist at Japan’s Kyoto University. “My results suggest that birds and humans may share similar cognitive abilities for communication.”

Perhaps this went unappreciated for so long, says Suzuki, simply because “we have not yet found a way to look at the animals’ minds.”

Over the last several years, Suzuki conducted a series of experiments deciphering the vocalizations of Japanese tits — or Parus minor, whose family includes such everyday birds as chickadees and titmice — and describing their possession of syntax, or the ability to produce new meanings by combining words in various orders. (“Open the door,” for example, versus “the open door.”)

Syntax has long been considered unique to human language, and language in turn is often thought to set humans apart from other animals. Yet Suzuki found it not in birds typically celebrated for intelligence, such as crows or parrots, but in the humble P. minor.

MENTAL PICTURES
Once he realized that birds are using their own form of language, Suzuki wondered: what happens in their minds when they talk? Might words evoke corresponding images, as happens for us?

Suzuki tested that proposition by broadcasting recordings of P. minor’s snake-specific alarm call from a tree-mounted speaker. Then he analyzed the birds’ responses to a stick that he’d hung along the trunk and could manipulate to mimic a climbing snake.

If the call elicited a mental image, Suzuki figured the birds would pay extra-close attention to the snake-like stick. Indeed they did, he recently reported in the journal Proceedings of the National Academy of Sciences.

In contrast, when Suzuki broadcast a call used by tits to convey a general, non-specific alarm, the birds didn’t pay much notice to the stick. And when he set the stick swinging from side to side in a decidedly non-snakelike manner, the birds ignored it.

“Simply hearing these calls causes tits to become more visually perceptive to objects resembling snakes,” he writes in PNAS. “Before detecting a real snake, tits retrieve its visual image from snake-specific alarm calls and use this to search out snakes.”

Rob Magrath, a behavioral ecologist at the Australian National University who specializes in bird communication, thinks Suzuki’s interpretation is consistent with the results. He also calls the work “truly delightful.”

“I love the way that Suzuki employs simple experiments, literally using sticks and string, to test ideas,” Magrath says. Similarly impressed is ecologist Christine Sheppard of the American Bird Conservancy. “It’s incredibly challenging to devise an experiment that would allow you to answer this question,” she says. “It’s really neat.”

MINDS OF THEIR OWN
Sheppard says it makes evolutionary sense for animals to possess a ‘mind’s eye’ that works in tandem with their communications: It allows individuals to respond more quickly to threats. Suzuki agrees, and believes it’s likely found not only in P. minor and their close relatives, but in many other birds and across the animal kingdom.

“Many other animals produce specific calls when finding specific types of food or predators,” he says. He hopes researchers will use his methodology to peek into the mind’s eye of other animals.

For Sheppard, the findings also speak to how people think about birds: not just as pretty or interesting or ecologically important, but as fellow beings with rich minds of their own.

“When I was in school, people still thought that birds were little automata. Now ‘bird brain’ is becoming a compliment,” she says.

“I think this kind of insight helps people see birds as living, breathing creatures with whom we share the planet,” she says.

https://news.nationalgeographic.com/2018/02/japanese-songbirds-process-language-syntax/

People with depression use language differently

From the way you move and sleep, to how you interact with people around you, depression changes just about everything. It is even noticeable in the way you speak and express yourself in writing. Sometimes this “language of depression” can have a powerful effect on others. Just consider the impact of the poetry and song lyrics of Sylvia Plath and Kurt Cobain, who both killed themselves after suffering from depression.

Scientists have long tried to pin down the exact relationship between depression and language, and technology is helping us get closer to a full picture. Our new study, published in Clinical Psychological Science, has now unveiled a class of words that can help accurately predict whether someone is suffering from depression.

Traditionally, linguistic analyses in this field have been carried out by researchers reading and taking notes. Nowadays, computerised text analysis methods allow the processing of extremely large data banks in minutes. This can help spot linguistic features which humans may miss, calculating the percentage prevalence of words and classes of words, lexical diversity, average sentence length, grammatical patterns and many other metrics.
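
As a rough illustration of the kind of metrics these tools compute, here is a minimal Python sketch; the word list and the sample text are invented for illustration and are not taken from the study:

```python
# Minimal sketch of computerised text metrics of the kind described above:
# percentage prevalence of a word class, lexical diversity and average
# sentence length. The word list and sample text are illustrative only.
import re

NEGATIVE_WORDS = {"lonely", "sad", "miserable"}  # toy word class

def text_metrics(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "negative_word_prevalence_pct": 100 * sum(w in NEGATIVE_WORDS for w in words) / len(words),
        "lexical_diversity": len(set(words)) / len(words),   # type/token ratio
        "avg_sentence_length": len(words) / len(sentences),  # words per sentence
    }

print(text_metrics("I feel so lonely. Everything seems sad and miserable today."))
```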

So far, personal essays and diary entries by depressed people have been useful, as has the work of well-known artists such as Cobain and Plath. For the spoken word, snippets of natural language of people with depression have also provided insight. Taken together, the findings from such research reveal clear and consistent differences in language between those with and without symptoms of depression.

Content
Language can be separated into two components: content and style. The content relates to what we express – that is, the meaning or subject matter of statements. It will surprise no one to learn that those with symptoms of depression use an excessive amount of words conveying negative emotions, specifically negative adjectives and adverbs – such as “lonely”, “sad” or “miserable”.

More interesting is the use of pronouns. Those with symptoms of depression use significantly more first person singular pronouns – such as “me”, “myself” and “I” – and significantly fewer second and third person pronouns – such as “they”, “them” or “she”. This pattern of pronoun use suggests people with depression are more focused on themselves, and less connected with others. Researchers have reported that pronouns are actually more reliable in identifying depression than negative emotion words.

We know that rumination (dwelling on personal problems) and social isolation are common features of depression. However, we don’t know whether these findings reflect differences in attention or thinking style. Does depression cause people to focus on themselves, or do people who focus on themselves get symptoms of depression?

Style
The style of language relates to how we express ourselves, rather than the content we express. Our lab recently conducted a big data text analysis of 64 different online mental health forums, examining over 6,400 members. “Absolutist words” – which convey absolute magnitudes or probabilities, such as “always”, “nothing” or “completely” – were found to be better markers for mental health forums than either pronouns or negative emotion words.

From the outset, we predicted that those with depression would have a more black-and-white view of the world, and that this would manifest in their style of language. Compared with 19 different control forums (for example, Mumsnet and StudentRoom), the prevalence of absolutist words is approximately 50% greater in anxiety and depression forums, and approximately 80% greater in suicidal ideation forums.
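
As an illustration of what such a prevalence comparison involves, here is a toy Python sketch; the absolutist word list is a small illustrative subset and the example posts are invented, so the numbers it prints mean nothing beyond showing the calculation:

```python
# Toy sketch of comparing the prevalence of "absolutist" words between a
# control forum and a depression forum. The word list is a small illustrative
# subset and the example posts are invented, not data from the study.
import re

ABSOLUTIST_WORDS = {"always", "never", "nothing", "completely", "totally"}

def absolutist_prevalence(posts):
    words = [w for post in posts for w in re.findall(r"[a-z']+", post.lower())]
    return 100 * sum(w in ABSOLUTIST_WORDS for w in words) / len(words)

control = ["I always take the kids to school, but today we walked instead."]
depression = ["Nothing ever works out for me, it always goes completely wrong."]

control_pct = absolutist_prevalence(control)
depression_pct = absolutist_prevalence(depression)
print(f"control: {control_pct:.1f}%   depression forum: {depression_pct:.1f}%")
print(f"relative increase: {100 * (depression_pct / control_pct - 1):.0f}%")
```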

Pronouns produced a similar distributional pattern as absolutist words across the forums, but the effect was smaller. By contrast, negative emotion words were paradoxically less prevalent in suicidal ideation forums than in anxiety and depression forums.

Our research also included recovery forums, where members who feel they have recovered from a depressive episode write positive and encouraging posts about their recovery. Here we found that negative emotion words were used at comparable levels to control forums, while positive emotion words were elevated by approximately 70%. Nevertheless, the prevalence of absolutist words remained significantly greater than that of controls, but slightly lower than in anxiety and depression forums.

Crucially, those who have previously had depressive symptoms are more likely to have them again. Therefore, their greater tendency for absolutist thinking, even when there are currently no symptoms of depression, is a sign that it may play a role in causing depressive episodes. The same effect is seen in use of pronouns, but not for negative emotion words.

Practical implications
Understanding the language of depression can help us understand the way those with symptoms of depression think, but it also has practical implications. Researchers are combining automated text analysis with machine learning (computers that can learn from experience without being explicitly programmed) to classify a variety of mental health conditions from natural language text samples such as blog posts.

Such classification is already outperforming that made by trained therapists. Importantly, machine learning classification will only improve as more data is provided and more sophisticated algorithms are developed. This goes beyond looking at the broad patterns of absolutism, negativity and pronouns already discussed. Work has begun on using computers to accurately identify increasingly specific subcategories of mental health problems – such as perfectionism, self-esteem problems and social anxiety.
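
A minimal sketch of this kind of pipeline, assuming scikit-learn and a handful of invented example posts (real studies train on large labelled corpora), could look like this:

```python
# Minimal sketch of automated text analysis plus machine learning for
# classifying posts, assuming scikit-learn. The six example posts and their
# labels are invented; real studies train on large labelled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Nothing ever goes right for me, I am completely alone.",
    "I always ruin everything, nobody would miss me.",
    "I feel so sad and worthless all the time.",
    "Had a great walk with the dog this morning.",
    "Looking forward to the weekend with friends.",
    "The new recipe turned out better than expected.",
]
labels = ["depressed", "depressed", "depressed", "control", "control", "control"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["Everything is completely hopeless and it never gets better."]))
```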

That said, it is of course possible to use language associated with depression without actually being depressed. Ultimately, it is how you feel over time that determines whether you are suffering. But as the World Health Organisation estimates that more than 300m people worldwide are now living with depression, an increase of more than 18% since 2005, having more tools available to spot the condition is certainly important to improve health and prevent tragic suicides such as those of Plath and Cobain.

https://theconversation.com/people-with-depression-use-language-differently-heres-how-to-spot-it-90877

Swearing Is Good For You—And Chimps Do It, Too

By Simon Worrall

Swearing is usually regarded as simply lazy language or an abusive lapse in civility. But as Emma Byrne shows in her book, Swearing Is Good for You: The Amazing Science of Bad Language, new research reveals that profanity has many positive virtues, from promoting trust and teamwork in the office to increasing our tolerance to pain.

When National Geographic caught up with Byrne at her home in London, she explained why humans aren’t the only primates that can curse and why, though women are swearing more today than before, it is still regarded by many as “unfeminine.”

You write, “I’ve had a certain pride in my knack for colorful and well-timed swearing.” Tell us about your relationship to bad language, and in what sense it is good for us?

My first memory of being punished for swearing was calling my little brother a four-letter word, twat, which I thought was just an odd pronunciation of the word twit. I must have been about eight at the time; my brother was still pre-school. My mother froze, then belted me round the ear. That made me realize that some words had considerably more power than others, and that the mere shift in a vowel was enough to completely change the emotional impact of a word.

I’ve always had a curiosity about things I’ve been told I am not meant to be interested in, which is why I wound up in a fairly male-dominated field of artificial intelligence for my career. There’s a certain cussedness to my personality that means, as soon as someone says, “No, that’s not for you,” I absolutely have to know about it.

My relationship with swearing is definitely one example. I tend to use it as a way of marking myself out as being more like my male colleagues, like having a working knowledge of the offside rule in soccer. It’s a good way of making sure that I’m not seen as this weird, other person, based on my gender.

There’s great research coming out of Australia and New Zealand, which is perhaps not surprising, that says that jocular abuse, particularly swearing among friends, is a strong signal of the degree of trust that those friends share. When you look at the transcripts of these case studies of effective teams in sectors like manufacturing and IT, those that can joke with each other in ways that transgress polite speech, which includes a lot of swearing, tend to report that they trust each other more.

One of the reasons why there’s probably this strong correlation is that swearing has such an emotional impact. You’re demonstrating that you have a sophisticated theory of mind about the person that you’re talking to, and that you have worked out where the limit is between being shocking enough to make them giggle or notice you’ve used it but not so shocking that they’ll be mortally offended. That’s a hard target to hit right in the bullseye. Using swear words appropriate for that person shows how well you know them, and how well you understand their mental model.

You were inspired to write this book by a study carried out by Dr. Richard Stephens. Tell us about the experiment, and why it was important in our understanding of swearing.

Richard Stephens works out of Keele University in the U.K. He’s a behavioral psychologist, who is interested in why we do things that we’ve been told are bad for us. For years, the medical profession has been saying that swearing is incredibly bad for you if you’re in pain. It’s what’s called a “catastrophizing response,” focusing on the negative thing that’s happened. His take on this was, if it’s so maladaptive, why do we keep doing it?

He initially had 67 volunteers, although he has since replicated the study multiple times. He had them hold their hands in ice water, randomly assigning them to repeat either a swear word or a neutral word, and compared how long they could keep their hands submerged. On average, when they were swearing they could keep their hands in the iced water for half as long again as when they were using a neutral word. That response is anything but maladaptive: swearing really does allow you to withstand pain for longer.

Have men always sworn more than women? And, if so, why?

Definitely not! Historians of the English language describe how women were equally praised for their command of exceedingly expressive insults and swearing, right up to the point in 1673 when a book by Richard Allestree was published titled The Ladies Calling. Allestree says that women who swear are acting in a way that is biologically incompatible with being a woman and, as a result, will begin to take on masculine characteristics, like growing facial hair or becoming infertile. He wrote, “There is no sound more odious to the ears of God than an oath in the mouth of a woman.”

Today, sadly, we are still in much the same place on men versus women swearing. Although women are still considered to swear less than men, we know from studies that they don’t. They swear just as much as men. But attitudinal surveys show that both men and women tend to judge women’s swearing much more harshly. And that judgement can have serious implications. For example, when women with breast cancer or arthritis swear as a result of their condition, they’re much more likely to lose friends, particularly female friends. Whereas men who swear about conditions like testicular cancer tend to bond more closely with other men using the same vocabulary. The idea that swearing is a legitimate means of expressing a negative emotion is much more circumscribed for women.

I was fascinated to discover that it’s not just humans that swear—primates do it, too! Tell us about Project Washoe.

Out in the wild, chimps are inveterate users of their excrement to mark their territory or show their annoyance. So the first thing you do, if you want to teach a primate sign language, is potty train them. That means, just like human children at a similar age, that they end up with a taboo around excrement. In Project Washoe, the sign for “dirty” was bringing the knuckles up to the underside of the chin. And what happened spontaneously, without the scientists teaching them, was that the chimps started to use the sign for “dirty” in exactly the same way as we use our own excremental swear words.

Washoe was a female chimpanzee that was originally adopted by R. Allen Gardner and Beatrix T. Gardner in the 1960s. Later, she was taken on by a researcher in Washington State called Roger Fouts. Washoe was the matriarch to three younger chimps: Loulis, Tatu, and Dar. By the time they brought in Loulis, the youngest, the humans had stopped teaching them language, so they looked to see if the chimps would transmit language through the generations, which they did.

Not only that: as soon as they had internalized the toilet taboo, with the sign “dirty” as something shameful, they started using that sign as an admonition or to express anger, like a swear word. When Washoe and the other chimps were really angry, they would smack their knuckles on the underside of their chins, so you could hear this chimp-teeth-clacking sound.

Washoe and the other chimps would sign things like “Dirty Roger!” or “Dirty Monkey!” when they were angry. The humans hadn’t taught them this! What had happened is that they had internalized that taboo, they had a sign associated with that taboo, so all of a sudden that language was incredibly powerful and was being thrown about, just like real excrement is thrown about by wild chimpanzees.

You say, “swearing is a bellwether—a foul-beaked canary in the coalmine—that tells us what our social taboos are.” Unpack that idea for us, and how it has changed over the centuries.

The example that most people will be familiar with in English-speaking countries is blasphemy. There are still parts of the U.S. that are more observant of Christianity than others but, in general, the kinds of language that would have resulted in censorship in other eras are now freely used in print and TV media. However, the “n-word,” which was once used as the title of an Agatha Christie book and even in nursery rhymes, is now taboo because there is a greater awareness that it is a painful reminder of how African-Americans suffered because of racism over the centuries. In some communities the word has been reclaimed, the idea being that using it themselves immunizes them against its negative effects.

That is an example of a word that has fallen out of general conversation and literature into the realm of the unsayable. It’s quite different from the copulatory or excretory swearing in that it is so divisive. The great thing about the copulatory and excretory swearing is that they are common to the entire human race.

In the digital world, you can swear at someone without actually being face to face. Is this changing the way we curse? And what will swearing in tomorrow’s world look like?

One of the difficulties with swearing in online discourse is that there is no face-to-face repercussion, so it allows people to lash out without seeing the person that they’re speaking to as fully human. But it’s not swearing that is the problem. It’s possible to say someone is worth less as a human being based on their race, gender or sexuality using the most civil of language. For example, when Donald Trump called Hillary Clinton “a nasty woman” rather than using the c-word, most of us were able to break the code. We knew what he meant but because he hadn’t sworn it was seen as acceptable discourse.

In the future, I think that swearing will inevitably be reinvented; we’ve seen it change so much over the years. As our taboos change, that core of language that has the ability to surprise, shock or stun the emotional side of the brain will change, too. But I can’t predict where those taboos will go.

https://news.nationalgeographic.com/2018/01/science-swearing-profanity-curse-emma-byrne/

Facebook realized that its AI chatbots were talking to each other in their own invented language, so it shut the experiment down.

BY MARK WILSON

Bob: “I can can I I everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

To you and me, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency – and perhaps, hidden nuance – than you or I ever could? Because it is.

This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

“Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that Facebook has observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
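
Batra’s “the”-repetition example can be read as a tiny protocol. A toy decoder for that kind of shorthand, written purely for illustration (this is not Facebook’s actual agent code), might look like this:

```python
# Toy decoder for the repetition shorthand Batra describes, where repeating a
# token encodes a quantity ("the the the the the" -> five copies of an item).
# Purely illustrative; this is not Facebook's actual negotiation-agent code.
def decode_shorthand(utterance, item="item"):
    """Interpret the number of repeated 'the' tokens as a requested quantity."""
    quantity = utterance.split().count("the")
    return {item: quantity}

print(decode_shorthand("the the the the the", item="ball"))  # {'ball': 5}
```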

Indeed. Humans have developed unique dialects for everything from trading pork bellies on the floor of the Mercantile Exchange to hunting down terrorists as SEAL Team Six – simply because humans sometimes perform better by not abiding by normal language conventions.

So should we let our software do the same thing? Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.

The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.

WE TEACH BOTS TO TALK, BUT WE’LL NEVER LEARN THEIR LANGUAGE
Facebook ultimately opted to require its negotiation bots to speak in plain old English. “Our interest was having bots who could talk to people,” says Mike Lewis, research scientist at FAIR. Facebook isn’t alone in that perspective. When I asked Microsoft about computer-to-computer languages, a spokesperson clarified that Microsoft was more interested in human-to-computer speech. Meanwhile, Google, Amazon, and Apple are all also focusing incredible energies on developing conversational personalities for human consumption. They’re the next wave of user interface, like the mouse and keyboard for the AI era.

The other issue, as Facebook admits, is that it has no way of truly understanding any divergent computer language. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra. We already don’t generally understand how complex AIs think because we can’t really see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse.

But at the same time, it feels shortsighted, doesn’t it? If we can build software that can speak to other software more efficiently, shouldn’t we use that? Couldn’t there be some benefit?

Because, again, we absolutely can lead machines to develop their own languages. Facebook has three published papers proving it. “It’s definitely possible, it’s possible that [language] can be compressed, not just to save characters, but compressed to a form that it could express a sophisticated thought,” says Batra. Machines can converse with any baseline building blocks they’re offered. That might start with human vocabulary, as with Facebook’s negotiation bots. Or it could start with numbers, or binary codes. But as machines develop meanings, these symbols become “tokens” – they’re imbued with rich meanings. As Facebook researcher Dauphin points out, machines might not think as you or I do, but tokens allow them to exchange incredibly complex thoughts through the simplest of symbols.

The way I think about it is with algebra: If A + B = C, the “A” could encapsulate almost anything. But to a computer, what “A” can mean is so much bigger than what that “A” can mean to a person, because computers have no outright limit on processing power.

“It’s perfectly possible for a special token to mean a very complicated thought,” says Batra. “The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.” Computers don’t need to simplify concepts. They have the raw horsepower to process them.

WHY WE SHOULD LET BOTS GOSSIP
But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn’t get a firm answer.

However, as paradoxical as this might sound, we might see big gains in such software better understanding our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through strange new data we feed it more effectively. For example, one researcher recently tried to teach a neural net to create new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that clear paste, by the way, was assigned to a light green). But then they made a simple change to the data they were feeding the machine to train it. They made everything lowercase – because lowercase and uppercase letters were confusing it. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better, with RGB values as opposed to other numerical color codes.
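
A sketch of the kind of preprocessing change described there, with invented colour rows standing in for the real dataset (the hex values are made up; only the two quoted names come from the article):

```python
# Sketch of the preprocessing change described above: lowercase the colour
# names and represent each colour as an RGB triple before training.
# The hex values are invented; only the two names come from the article.
raw_rows = [
    ("Sudden Pine", "#7A8B5C"),
    ("Clear Paste", "#C9E4B4"),
]

def preprocess(rows):
    cleaned = []
    for name, hex_code in rows:
        rgb = tuple(int(hex_code[i:i + 2], 16) for i in (1, 3, 5))  # "#RRGGBB" -> (R, G, B)
        cleaned.append((name.lower(), rgb))  # lowercase so case doesn't inflate the vocabulary
    return cleaned

print(preprocess(raw_rows))
# [('sudden pine', (122, 139, 92)), ('clear paste', (201, 228, 180))]
```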

Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer’s language. As one coder put it to me, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amicable for machine learning.”

In other words, machines allowed to speak and generate machine languages could somewhat ironically allow us to communicate with (and even control) machines better, simply because they’d be predisposed to have a better understanding of the words we speak.

As one insider at a major AI technology company told me: No, his company wasn’t actively interested in AIs that generated their own custom languages. But if it were, the greatest advantage he imagined was that it could conceivably allow software, apps, and services to learn to speak to each other without human intervention.

Right now, companies like Apple have to build APIs–basically a software bridge–involving all sorts of standards that other companies need to comply with in order for their products to communicate. However, APIs can take years to develop, and their standards are heavily debated across the industry in decade-long arguments. But software, allowed to freely learn how to communicate with other software, could generate its own shorthands for us. That means our “smart devices” could learn to interoperate, no API required.

https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

Thanks to Michael Moore for bringing this to the It’s Interesting community.

Dolphins may have a spoken language, new research suggests

By Ben Westcott

A conversation between dolphins may have been recorded by scientists for the first time, a Russian researcher claims.

Two adult Black Sea bottlenose dolphins, named Yasha and Yana, didn’t interrupt each other during an interaction taped by scientists and may have formed words and sentences with a series of pulses, Vyacheslav Ryabov says in a new paper.

“Essentially, this exchange resembles a conversation between two people,” Ryabov said.

Joshua Smith, a research fellow at Murdoch University Cetacean Research Unit, says there will need to be more research before scientists can be sure whether dolphins are chatting.

“I think it’s very early days to be drawing conclusions that the dolphins are using signals in a kind of language context, similar to humans,” he told CNN.

There are two different types of noises dolphins use for communication, whistles and clicks, also known as pulses.

Using new recording techniques, Ryabov separated the individual “non coherent pulses” the two dolphins made and theorized each pulse was a word in the dolphins’ language, while a collection of pulses is a sentence.

“As this language exhibits all the design features present in the human spoken language, this indicates a high level of intelligence and consciousness in dolphins,” he said in the paper, which was published in the St. Petersburg Polytechnical University Journal: Physics and Mathematics last month.

“Their language can be ostensibly considered a highly developed spoken language.”

In his paper, Ryabov calls for humans to create a device by which human beings can communicate with dolphins.

“Humans must take the first step to establish relationships with the first intelligent inhabitants of the planet Earth by creating devices capable of overcoming the barriers that stand in the way of … communications between dolphins and people,” he said.

Smith said while the results were an exciting advance in the under-researched field of dolphin communication, the results first needed to be replicated in open water environments.

“If we boil it down we pretty much have two animals in an artificial environment where reverberations are a problem … It wouldn’t make much sense for animals (in a small area) to make sounds over each other because they wouldn’t get much (sonar) information,” he said.

“It would be nice to see a variety of alternate explanations to this rather than the one they’re settling on.”

http://www.cnn.com/2016/09/13/europe/dolphin-language-conversation-research/index.html

New Real-Time In-Ear Device Translator By Waverly Labs To Be Released Soon

Language barriers around the world may become less of a problem, thanks to an in-ear device that translates foreign speech into the wearer’s native language in real time.

A company called Waverly Labs has developed a device called “The Pilot” that performs real-time translation while worn in the ear.

A smartphone app lets the user choose between different languages – currently Spanish, French, Italian and English. Additional languages, including East Asian, Hindi, Semitic, Arabic, Slavic and African languages, will follow. The device also requires an always-on data connection to the wearer’s smartphone.

To use the device, two people share the pair of earpieces. As each speaks in their own language, the earpieces translate for the other wearer so the two can understand each other.

The device will cost $129.

New research shows that infants need to be able to freely move their tongues in order to distinguish sounds.

A team of researchers led by Dr Alison Bruderer, a postdoctoral fellow at the University of British Columbia, has discovered a direct link between tongue movements of infants and their ability to distinguish speech sounds.

“Until now, research in speech perception development and language acquisition has primarily used the auditory experience as the driving factor. Researchers should actually be looking at babies’ oral-motor movements as well,” said Dr Bruderer, who is the lead author on a study published in the Proceedings of the National Academy of Sciences on October 12, 2015.

In the study, teething toys were placed in the mouths of six-month-old English-learning infants while they listened to speech sounds – two different Hindi ‘d’ sounds that infants at this age can readily distinguish.

When the teethers restricted movements of the tip of the tongue, the infants were unable to distinguish between the two sounds.

But when their tongues were free to move, the babies were able to make the distinction.

“Before infants are able to speak, their articulatory configurations affect the way they perceive speech, suggesting that the speech production system shapes speech perception from early in life,” the scientists said.

“These findings implicate oral-motor movements as more significant to speech perception development and language acquisition than current theories would assume and point to the need for more research.”

http://www.sci-news.com/othersciences/psychology/science-infants-tongue-movement-speech-sounds-03336.html