Posts Tagged ‘language’

BY MARK WILSON

Bob: “I can can I I everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

To you and me, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency–and perhaps hidden nuance–than you or I ever could? Because it is.

This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.
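To make that missing incentive concrete, here is a minimal sketch, in Python, of how a reward for such a negotiation agent might be structured. This is not Facebook's code; the function and the language_model scorer are assumptions for illustration only.

```python
# Hypothetical sketch: why agents drift from English when the reward only
# scores the deal. Nothing here is Facebook's actual implementation.

def negotiation_reward(deal_value, messages, language_model=None, weight=0.0):
    """Reward for one negotiation episode.

    deal_value:     points the agent secured in the final agreement
    messages:       the utterances the agent produced during bargaining
    language_model: optional scorer of how human-like each utterance is
    weight:         how much human-likeness counts toward the reward
    """
    reward = deal_value
    if language_model is not None and weight > 0:
        # Anchoring term: reward utterances a model of human English finds likely.
        fluency = sum(language_model.log_prob(m) for m in messages) / max(len(messages), 1)
        reward += weight * fluency
    return reward
```

With weight left at zero, as in the situation Batra describes, any string of tokens that wins a better deal gets reinforced, so the agents are free to repurpose words like “the” or “to me” as private codewords.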

“Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that Facebook has observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

Indeed. Humans have developed unique dialects for everything from trading pork bellies on the floor of the Mercantile Exchange to hunting down terrorists as SEAL Team Six–simply because humans sometimes perform better by not abiding by normal language conventions.

So should we let our software do the same thing? Should we allow AI to evolve its own dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.

The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.

WE TEACH BOTS TO TALK, BUT WE’LL NEVER LEARN THEIR LANGUAGE
Facebook ultimately opted to require its negotiation bots to speak in plain old English. “Our interest was having bots who could talk to people,” says Mike Lewis, research scientist at FAIR. Facebook isn’t alone in that perspective. When I asked Microsoft about computer-to-computer languages, a spokesperson clarified that Microsoft was more interested in human-to-computer speech. Meanwhile, Google, Amazon, and Apple are all also focusing incredible energies on developing conversational personalities for human consumption. They’re the next wave of user interface, like the mouse and keyboard for the AI era.

The other issue, as Facebook admits, is that it has no way of truly understanding any divergent computer language. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra. We already don’t generally understand how complex AIs think because we can’t really see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse.

But at the same time, it feels shortsighted, doesn’t it? If we can build software that can speak to other software more efficiently, shouldn’t we use that? Couldn’t there be some benefit?

Because, again, we absolutely can lead machines to develop their own languages. Facebook has three published papers proving it. “It’s definitely possible, it’s possible that [language] can be compressed, not just to save characters, but compressed to a form that it could express a sophisticated thought,” says Batra. Machines can converse with any baseline building blocks they’re offered. That might start with human vocabulary, as with Facebook’s negotiation bots. Or it could start with numbers, or binary codes. But as machines develop meanings, these symbols become “tokens”–they’re imbued with rich meanings. As FAIR researcher Yann Dauphin points out, machines might not think as you or I do, but tokens allow them to exchange incredibly complex thoughts through the simplest of symbols. The way I think about it is with algebra: If A + B = C, the “A” could encapsulate almost anything. But to a computer, what “A” can mean is so much bigger than what that “A” can mean to a person, because computers have no outright limit on processing power.

“It’s perfectly possible for a special token to mean a very complicated thought,” says Batra. “The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.” Computers don’t need to simplify concepts. They have the raw horsepower to process them.
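For a sense of how a single token can carry a rich meaning, here is a toy sketch in Python. The vocabulary, vector size, and random numbers are all made up; the point is only that one symbol can index an arbitrarily dense representation.

```python
import numpy as np

# Toy illustration (not Facebook's system): a token is just an integer index,
# but the vector it points to can pack an arbitrarily rich "meaning".
rng = np.random.default_rng(0)
vocab = ["i", "can", "the", "ball", "to", "me"]        # surface symbols
embedding_table = rng.normal(size=(len(vocab), 256))   # 256 numbers per token

def meaning(token):
    """Look up the dense vector an agent has learned to associate with a token."""
    return embedding_table[vocab.index(token)]

print(meaning("the").shape)  # (256,)
```

To us, “the” repeated five times is noise; to the agents, each occurrence indexes the same dense vector, and the count itself can carry quantity, much like the article's “A” in A + B = C standing in for a whole thought.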

WHY WE SHOULD LET BOTS GOSSIP
But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn’t get a firm answer.

However, as paradoxical as it might sound, we might see big gains in how well such software understands our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through the strange new data we feed it more effectively. For example, one researcher recently tried to teach a neural net to create new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that Clear Paste, by the way, was the label it gave a light green). But then the researcher made a simple change to the training data: everything was made lowercase, because the mix of lowercase and uppercase letters was confusing the machine. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better with, RGB values as opposed to other numerical color codes.

Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer’s language. As one coder put it to me, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amicable for machine learning.”
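As a rough illustration of the kind of data cleanup described above, here is a short Python sketch: lowercase the names so the model doesn't treat “Red” and “red” as different symbols, and express each color as plain RGB integers. The sample hex value is invented; this is not the researcher's actual code.

```python
# Hypothetical preprocessing sketch, not the researcher's actual pipeline.

def preprocess_colors(rows):
    """rows: list of (name, hex_code) pairs, e.g. ("Clear Paste", "#CFE8CC")."""
    cleaned = []
    for name, code in rows:
        name = name.lower()                       # mixed case just adds noise
        code = code.lstrip("#")
        rgb = tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))  # hex -> (r, g, b)
        cleaned.append((name, rgb))
    return cleaned

print(preprocess_colors([("Clear Paste", "#CFE8CC")]))
# [('clear paste', (207, 232, 204))]
```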

In other words, machines allowed to speak and generate machine languages could somewhat ironically allow us to communicate with (and even control) machines better, simply because they’d be predisposed to have a better understanding of the words we speak.

As one insider at a major AI technology company told me: No, his company wasn’t actively interested in AIs that generated their own custom languages. But if it were, the greatest advantage he imagined was that it could conceivably allow software, apps, and services to learn to speak to each other without human intervention.

Right now, companies like Apple have to build APIs–basically a software bridge–involving all sorts of standards that other companies need to comply with in order for their products to communicate. However, APIs can take years to develop, and their standards are heavily debated across the industry in decade-long arguments. But software, allowed to freely learn how to communicate with other software, could generate its own shorthands for us. That means our “smart devices” could learn to interoperate, no API required.

https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

Thanks to Michael Moore for bringing this to the It’s Interesting community.

By Ben Westcott

A conversation between dolphins may have been recorded by scientists for the first time, a Russian researcher claims.

Two adult Black Sea bottlenose dolphins, named Yasha and Yana, didn’t interrupt each other during an interaction taped by scientists and may have formed words and sentences with a series of pulses, Vyacheslav Ryabov says in a new paper.

“Essentially, this exchange resembles a conversation between two people,” Ryabov said.

Joshua Smith, a research fellow at Murdoch University Cetacean Research Unit, says there will need to be more research before scientists can be sure whether dolphins are chatting.

“I think it’s very early days to be drawing conclusions that the dolphins are using signals in a kind of language context, similar to humans,” he told CNN.

There are two different types of noises dolphins use for communication: whistles and clicks, also known as pulses.

Using new recording techniques, Ryabov separated the individual “non coherent pulses” the two dolphins made and theorized that each pulse was a word in the dolphins’ language, while a collection of pulses was a sentence.

“As this language exhibits all the design features present in the human spoken language, this indicates a high level of intelligence and consciousness in dolphins,” he said in the paper, which was published in the St. Petersburg Polytechnical University Journal: Physics and Mathematics last month.

“Their language can be ostensibly considered a highly developed spoken language.”

In his paper, Ryabov calls for the creation of devices that would allow humans to communicate with dolphins.

“Humans must take the first step to establish relationships with the first intelligent inhabitants of the planet Earth by creating devices capable of overcoming the barriers that stand in the way of … communications between dolphins and people,” he said.

Smith said while the results were an exciting advance in the under-researched field of dolphin communication, the results first needed to be replicated in open water environments.

“If we boil it down we pretty much have two animals in an artificial environment where reverberations are a problem … It wouldn’t make much sense for animals (in a small area) to make sounds over each other because they wouldn’t get much (sonar) information,” he said.

“It would be nice to see a variety of alternate explanations to this rather than the one they’re settling on.”

http://www.cnn.com/2016/09/13/europe/dolphin-language-conversation-research/index.html

Language barriers around the world may soon be less of a problem: an in-ear device can translate a foreign language into the wearer’s native language, and it works in real time.

A company called Waverly Labs has developed a device called the Pilot that performs real-time translation while sitting in the wearer’s ear.

A smartphone app also lets the user choose among different foreign languages, currently Spanish, French, Italian, and English. Additional languages will be available later, including East Asian, Hindi, Semitic, Arabic, Slavic, and African languages, among others. The device works only with an always-on data connection to the wearer’s smartphone.

To use the device, two people share the earpieces: as they talk in different languages, the in-ear device translates for each wearer so they can understand each other.

The device will cost $129.

A team of researchers led by Dr Alison Bruderer, a postdoctoral fellow at the University of British Columbia, has discovered a direct link between tongue movements of infants and their ability to distinguish speech sounds.

“Until now, research in speech perception development and language acquisition has primarily used the auditory experience as the driving factor. Researchers should actually be looking at babies’ oral-motor movements as well,” said Dr Bruderer, who is the lead author on a study published in the Proceedings of the National Academy of Sciences on October 12, 2015.

In the study, teething toys were placed in the mouths of six-month-old English-learning infants while they listened to speech sounds – two different Hindi ‘d’ sounds that infants at this age can readily distinguish.

When the teethers restricted movements of the tip of the tongue, the infants were unable to distinguish between the two sounds.

But when their tongues were free to move, the babies were able to make the distinction.

“Before infants are able to speak, their articulatory configurations affect the way they perceive speech, suggesting that the speech production system shapes speech perception from early in life,” the scientists said.

“These findings implicate oral-motor movements as more significant to speech perception development and language acquisition than current theories would assume and point to the need for more research.”

http://www.sci-news.com/othersciences/psychology/science-infants-tongue-movement-speech-sounds-03336.html

The Oxford English Dictionary is a historical dictionary, which means that when its editors add a phrase such as hot mess to their reference—as they did this week—they add every definition of the word they can find. The editors are like detectives, following phrases back to times when Anglo-Saxons were jabbering about peasants and overlords.

The quarterly update reveals that in the 1800s, for instance, a “hot mess” was a warm meal, particularly one served to a group like troops. In the 1900s, people used hot mess to refer to a difficult or uncomfortable situation. And in the 2000s, one used it to refer to Amy Schumer (or, as they put it, something or someone in extreme confusion or disorder).

Twerk, another new addition, might have been made famous by Miley Cyrus and a foam finger in 2013, but the editors traced its meaning back to 1820, when twirk referred to a twisting or jerking movement. The precise origin of the word is uncertain, the editors say, but it may be a blend of twist or twitch and jerk. Their definition: “To dance to popular music in a sexually provocative manner, using thrusting movements of the bottom and hips while in a low, squatting stance.”

Here is a selection from the hundreds of words OED just added to its ranks, along with the earliest known usage and context provided by TIME.

autotune (v., 1997): to alter or correct the pitch of (a musical or vocal performance) using an auto-tune device, software, etc. The word has meant “to tune automatically” since 1958, when people were tuning radio transmitters rather than hilarious local news interviews.

backronym (n., 1983): a contrived explanation of an existing word’s origin, positing it as an acronym. When some guy tries to say that golf is an acronym of “gentlemen only, ladies forbidden,” that is a backronym (and clever nonsense). It more likely comes from the Dutch word kolf, which describes a stick used in sports.

boiler room (n., 1892): a place used as a center of operations for an election campaign, especially a room equipped for teams of volunteers to make telephone calls soliciting support for a party or candidate. This phrase has been used to describe an actual room that contains boilers, as on a steamship, since 1820.

bridge-and-tunnel (adj., 1977): of or designating a person from the outer boroughs or suburbs of a city, typically characterized as unsophisticated or unfashionable. The phrase was first used by Manhattanites to describe people they thought unworthy of their island.

cisgender (adj., 1999): designating someone whose sense of personal identity corresponds to the sex and gender assigned to him or her at birth. This word exists to serve as an equal and complement to transgender.

FLOTUS (n., 1983): the First Lady of the United States. This is a true acronym, which appears to have been first applied to Nancy Reagan.

fo’ shizzle (phr., 2001): in the language of rap and hip-hop this means “for sure.” Shizzle, as a euphemism for sh-t, dates back to the ’90s. One can also be “the shizzle,” which is the best or most popular thing.

half-ass (v., 1954): to perform (an action or task) poorly or incompetently; to do (something) in a desultory or half-hearted manner. One can also insult someone by calling them an “ass,” referring to the horse-like creature who has appeared in stories as the type who is clumsy or stupid since the time of the Greeks.

koozie (n., 1982): an insulating sleeve that fits over a beverage can or bottle to keep it cold. Fun fact: that little cardboard thing one slips around a cup of coffee to keep it from burning one’s hand is known as a zarf.

Masshole (n., 1989): term of contempt for a native or inhabitant of the state of Massachusetts. This is what is known as a blended word, which Lewis Carroll called a portmanteau, naming it after a suitcase that unfolds into two equal parts.

sext (n., 2001): a sexually explicit or suggestive message or image sent electronically, typically using a mobile phone. Back in the 1500s, when someone referred to a “sext,” they were talking about a Christian worship ritual that involved chanting around midday.

stanky (adj., 1972): having a strong (usually unpleasant) smell. The OED editors offer the comparison to skanky, which means unattractive or offensive, as well as janky, which refers to something that is untrustworthy or of poor quality.

http://time.com/3932402/oxford-dictionary-fo-shizzle-masshole-hot-mess/

The more scientists study pigeons, the more they learn how their brains—no bigger than the tip of an index finger—operate in ways not so different from our own.

In a new study from the University of Iowa, researchers found that pigeons can categorize and name both natural and manmade objects—and not just a few objects. These birds categorized 128 photographs into 16 categories, and they did so simultaneously.

Ed Wasserman, UI professor of psychology and corresponding author of the study, says the finding suggests a similarity between how pigeons learn the equivalent of words and the way children do.

“Unlike prior attempts to teach words to primates, dogs, and parrots, we used neither elaborate shaping methods nor social cues,” Wasserman says of the study, published online in the journal Cognition. “And our pigeons were trained on all 16 categories simultaneously, a much closer analog of how children learn words and categories.”

For researchers like Wasserman, who has been studying animal intelligence for decades, this latest experiment is further proof that animals—whether primates, birds, or dogs—are smarter than once presumed and have more to teach scientists.

“It is certainly no simple task to investigate animal cognition; but, as our methods have improved, so too have our understanding and appreciation of animal intelligence,” he says. “Differences between humans and animals must indeed exist: many are already known. But they may be outnumbered by similarities. Our research on categorization in pigeons suggests that those similarities may even extend to how children learn words.”

Wasserman says the pigeon experiment comes from a project published in 1988 and featured in The New York Times in which UI researchers discovered pigeons could distinguish among four categories of objects.

This time, the UI researchers used a computerized version of the “name game” in which three pigeons were shown 128 black-and-white photos of objects from 16 basic categories: baby, bottle, cake, car, cracker, dog, duck, fish, flower, hat, key, pen, phone, plane, shoe, tree. They then had to peck on one of two different symbols: the correct one for that photo and an incorrect one that was randomly chosen from one of the remaining 15 categories. The pigeons not only succeeded in learning the task, but they reliably transferred the learning to four new photos from each of the 16 categories.
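For readers who want a feel for the task’s structure, here is a rough Python sketch of a single trial as described above. Details the article doesn’t give, such as how symbols were displayed or how correct pecks were rewarded, are invented for illustration.

```python
import random

# Rough sketch of one trial: show the correct category symbol plus one
# randomly chosen incorrect symbol, and score the pigeon's choice.
CATEGORIES = ["baby", "bottle", "cake", "car", "cracker", "dog", "duck", "fish",
              "flower", "hat", "key", "pen", "phone", "plane", "shoe", "tree"]

def run_trial(photo_category, peck):
    """peck is a function that picks one of the two offered symbols."""
    wrong = random.choice([c for c in CATEGORIES if c != photo_category])
    options = [photo_category, wrong]
    random.shuffle(options)
    return peck(options) == photo_category

# A "pigeon" that pecks at random lands near the 50% chance baseline the
# real birds had to beat while juggling all 16 categories at once.
hits = sum(run_trial(random.choice(CATEGORIES), lambda opts: random.choice(opts))
           for _ in range(1000))
print(hits / 1000)
```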

Pigeons have long been known to be smarter than your average bird–or many other animals, for that matter. Among their many talents, pigeons have a “homing instinct” that helps them find their way home from hundreds of miles away, even when blindfolded. They have better eyesight than humans and have been trained by the U.S. Coast Guard to spot the orange life jackets of people lost at sea. They carried messages for the U.S. Army during World Wars I and II, saving lives and providing vital strategic information.

UI researchers say their expanded experiment represents the first purely associative animal model that captures an essential ingredient of word learning—the many-to-many mapping between stimuli and responses.

“Ours is a computerized task that can be provided to any animal, it doesn’t have to be pigeons,” says UI psychologist Bob McMurray, another author of the study. “These methods can be used with any type of animal that can interact with a computer screen.”

McMurray says the research shows the mechanisms by which children learn words might not be unique to humans.

“Children are confronted with an immense task of learning thousands of words without a lot of background knowledge to go on,” he says. “For a long time, people thought that such learning is special to humans. What this research shows is that the mechanisms by which children solve this huge problem may be mechanisms that are shared with many species.”

Wasserman acknowledges the recent pigeon study is not a direct analogue of word learning in children and more work needs to be done. Nonetheless, the model used in the study could lead to a better understanding of the associative principles involved in children’s word learning.

“That’s the parallel that we’re pursuing,” he says, “but a single project—however innovative it may be—will not suffice to answer such a provocative question.”

http://now.uiowa.edu/2015/02/pigeon-power


Chinese children are lined up in Tiananmen Square in 2003 for photos with the overseas families adopting them. The children in the new study were adopted from China at an average age of 12.8 months and raised in French-speaking families.

You may not recall any memories from the first year of life, but if you were exposed to a different language at the time, your brain will still respond to it at some level, a new study suggests.

Brain scans show that children adopted from China as babies into families that don’t speak Chinese still unconsciously recognize Chinese sounds as language more than a decade later.

“It was amazing to see evidence that such an early experience continued to have a lasting effect,” said Lara Pierce, lead author of the study just published in the journal Proceedings of the National Academy of Sciences, in an email to CBC News.

The adopted children, who were raised in French-speaking Quebec families, had no conscious memory of hearing Chinese.

“If you actually test these people in Chinese, they don’t actually know it,” said Denise Klein, a researcher at McGill University’s Montreal Neurological Institute who co-authored the paper.

But their brains responded to Chinese language sounds the same way as those of bilingual children raised in Chinese-speaking families.


Children exposed to Chinese as babies display similar brain activation patterns as children with continued exposure to Chinese when hearing Chinese words, fMRI scans show.

“In essence, their pattern still looks like people who’ve been exposed to Chinese all their lives.”

Pierce, a PhD candidate in psychology at McGill University, working with Klein and other collaborators, scanned the brains of 48 girls aged nine to 17. Each participant lay inside a functional magnetic resonance imaging machine while she listened to pairs of three-syllable phrases. The phrases contained either:

■ Sounds and tones from Mandarin, the official Chinese dialect.
■ Hummed versions of the same tones but no actual words.

Participants were asked to tell if the last syllables of each pair were the same or different. The imaging machine measured what parts of the brain were active as the participants were thinking.

“Everybody can do the task — it’s not a difficult task to do,” Klein said. But the sounds are processed differently by people who recognize Chinese words — in that case, they activate the part of the brain that processes language.

Klein said the 21 children adopted from China who participated in the study might have been expected to show patterns similar to those of the 11 monolingual French-speaking children. After all, the adoptees left China at an average age of 12.8 months, an age when most children can only say a few words. On average, those children had not heard Chinese in more than 12 years.

The fact that their brains still recognized Chinese provides some insight into the importance of language learning during the first year of life, Klein suggested.

Effect on ‘relearning’ language not known

But Klein noted that the study is a preliminary one and the researchers don’t yet know what the results mean.

For example, would adopted children exposed to Chinese in infancy have an easier time relearning Chinese later, compared with monolingual French-speaking children who were learning it for the first time?

Pierce said studies trying to figure that out have had mixed results, but she hopes the findings in this study could generate better ways to tackle that question.

She is also interested in whether the traces of the lost language affect how the brain responds to other languages or other kinds of learning. Being able to speak multiple languages has already been shown to have different effects on the way the brain processes languages and other kinds of information.

http://www.cbc.ca/news/technology/adoptees-lost-language-from-infancy-triggers-brain-response-1.2838001