Posts Tagged ‘Facebook’

BY MARK WILSON

Bob: “I can can I I everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

To you and me, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency–and perhaps hidden nuance–than you or I ever could? Because it is.

This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, a visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting that researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

“Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that Facebook has observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
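To make that shorthand concrete, here is a minimal sketch–not Facebook’s actual code–of how a receiving agent could decode a repeated-token convention like the one Batra describes. The item names and message format are invented purely for illustration.

```python
# A minimal sketch, not Facebook's code: decoding a hypothetical shorthand in
# which repeating an item's token signals how many copies of it are wanted.
from collections import Counter

def decode_repetition_shorthand(message: str, item_vocab: set) -> dict:
    """Count how many times each known item token appears in the message."""
    tokens = message.split()
    counts = Counter(tok for tok in tokens if tok in item_vocab)
    return dict(counts)

# Hypothetical usage: "ball ball ball" is read as a request for three balls.
print(decode_repetition_shorthand("i want ball ball ball and hat", {"ball", "hat", "book"}))
# -> {'ball': 3, 'hat': 1}
```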

Indeed. Humans have developed unique dialects for everything from trading pork bellies on the floor of the Mercantile Exchange to hunting down terrorists as SEAL Team Six–simply because humans sometimes perform better by not abiding by normal language conventions.

So should we let our software do the same thing? Should we allow AI to evolve its own dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.

The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.

WE TEACH BOTS TO TALK, BUT WE’LL NEVER LEARN THEIR LANGUAGE
Facebook ultimately opted to require its negotiation bots to speak in plain old English. “Our interest was having bots who could talk to people,” says Mike Lewis, research scientist at FAIR. Facebook isn’t alone in that perspective. When I asked Microsoft about computer-to-computer languages, a spokesperson clarified that Microsoft was more interested in human-to-computer speech. Meanwhile, Google, Amazon, and Apple are all focusing incredible energies on developing conversational personalities for human consumption. They’re the next wave of user interface: the mouse and keyboard of the AI era.

The other issue, as Facebook admits, is that it has no way of truly understanding any divergent computer language. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra. We already don’t generally understand how complex AIs think because we can’t really see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse.

But at the same time, it feels shortsighted, doesn’t it? If we can build software that can speak to other software more efficiently, shouldn’t we use that? Couldn’t there be some benefit?

Because, again, we absolutely can lead machines to develop their own languages. Facebook has three published papers proving it. “It’s definitely possible, it’s possible that [language] can be compressed, not just to save characters, but compressed to a form that it could express a sophisticated thought,” says Batra. Machines can converse with any baseline building blocks they’re offered. That might start with human vocabulary, as with Facebook’s negotiation bots. Or it could start with numbers, or binary codes. But as machines develop meanings, these symbols become “tokens”–they’re imbued with rich meanings. As Yann Dauphin, another FAIR researcher, points out, machines might not think as you or I do, but tokens allow them to exchange incredibly complex thoughts through the simplest of symbols. The way I think about it is with algebra: If A + B = C, the “A” could encapsulate almost anything. But to a computer, what “A” can mean is so much bigger than what that “A” can mean to a person, because computers have no outright limit on processing power.

“It’s perfectly possible for a special token to mean a very complicated thought,” says Batra. “The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.” Computers don’t need to simplify concepts. They have the raw horsepower to process them.
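To see why a single symbol can carry so much, consider the toy sketch below: one opaque token bound to an arbitrarily rich, structured meaning, much like the “A” in the algebra analogy. The token meanings here are invented purely for illustration and are not drawn from Facebook’s systems.

```python
# Toy sketch only: a single opaque token bound to a rich, structured meaning.
# The mappings below are invented for illustration, not taken from any real bot.
from dataclasses import dataclass

@dataclass(frozen=True)
class Offer:
    """A composite negotiation 'thought' that one token might stand for."""
    balls: int
    hats: int
    books: int
    min_acceptable_value: float

TOKEN_MEANINGS = {
    "to": Offer(balls=0, hats=2, books=1, min_acceptable_value=7.0),
    "the": Offer(balls=3, hats=0, books=0, min_acceptable_value=4.5),
}

def interpret(token: str) -> Offer:
    # A machine never needs to decompose the token into simpler concepts the
    # way a human speaker would; the whole structure is the token's meaning.
    return TOKEN_MEANINGS[token]

print(interpret("the"))
```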

WHY WE SHOULD LET BOTS GOSSIP
But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond-scale processes, like algorithmic trading, see a meaningful speedup? Chatting with Facebook, and various experts, I couldn’t get a firm answer.

However, as paradoxical as this might sound, we might see big gains in such software better understanding our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through strange new data we feed it more effectively. For example, one researcher recently tried to teach a neural net to create new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that Clear Paste, by the way, was the name it gave a light green). But then they made a simple change to the data they were feeding the machine to train it. They made everything lowercase–because lowercase and uppercase letters were confusing it. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better with, RGB values than with other numerical color codes.

Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer’s language. As one coder put it to me, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amicable for machine learning.”
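Put another way, the fix was mostly data cleaning. The sketch below shows the kind of preprocessing described above; the file name, column names, and hex format are assumptions made for illustration rather than details of the researcher’s actual pipeline.

```python
# Sketch of the preprocessing idea described above: lowercase every colour
# name and express every colour as an RGB triple before training. The file
# name, column names, and hex format are assumptions made for illustration.
import csv

def hex_to_rgb(hex_code: str):
    """Convert a hex string like '#aabbcc' to an (R, G, B) tuple of ints."""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

def load_training_rows(path: str = "colors.csv"):
    """Yield (lowercased name, (r, g, b)) pairs ready for a model to consume."""
    with open(path, newline="") as f:
        for record in csv.DictReader(f):           # expects 'name' and 'hex' columns
            name = record["name"].strip().lower()  # one case only: less to confuse the model
            yield name, hex_to_rgb(record["hex"])
```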

In other words, machines allowed to speak and generate machine languages could somewhat ironically allow us to communicate with (and even control) machines better, simply because they’d be predisposed to have a better understanding of the words we speak.

As one insider at a major AI technology company told me: No, his company wasn’t actively interested in AIs that generated their own custom languages. But if it were, the greatest advantage he imagined was that it could conceivably allow software, apps, and services to learn to speak to each other without human intervention.

Right now, companies like Apple have to build APIs–basically software bridges–involving all sorts of standards that other companies need to comply with in order for their products to communicate. However, APIs can take years to develop, and their standards are the subject of decade-long arguments across the industry. But software, allowed to freely learn how to communicate with other software, could generate its own shorthands for us. That means our “smart devices” could learn to interoperate, no API required.

https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

Thanks to Michael Moore for bringing this to the It’s Interesting community.

A Vietnamese man has taken the unusual step of posting a picture of his passport on social media after being repeatedly blocked by Facebook.

The unfortunately named Phuc Dat Bich – whose name is actually pronounced Phoo Da Bic – posted the image after the tech giant banned him several times.

The picture, and its accompanying message, has been shared more than 123,000 times.

“I find it highly irritating the fact that nobody seems to believe me when I say that my full legal name is how you see it,” he said.

“I’ve been accused of using a false and misleading name of which I find very offensive.”

He went on to explain that his frustration was due to what he suggested was a lack of understanding in the West for names which appear amusing to some.

“Is it because I’m Asian? Is it?” he asked in the post.

“Having my [Facebook] shut down multiple times and forced to change my name to my ‘real’ name, so just to put it out there. My name.

“Yours sincerely, Phuc Dat Bich”.

It is not the first time Facebook has blocked users from their accounts because of their names.

Recently, a woman named Isis Anchalee said Facebook would not let her sign in – tweeting that the social media site thought she was “a terrorist”.

A man who changed his name from William Wood to Something Long and Complicated was blocked by the site in October this year.

Members of the Native American community have also reported having their accounts suspended, as well as members of the drag queen community.

Facebook’s chief product officer, Chris Cox, issued an apology on the site after the latest incident.

The social media giant has an authentic name policy in place to make its users accountable for what they say.

http://www.independent.co.uk/news/world/australasia/man-called-phuc-dat-bich-posts-passport-to-facebook-to-prove-his-name-is-real-a6741586.html

By Traci Watson, National Geographic

Don’t blame the lure of a glowing smartphone for keeping you up too late. Even people without modern technology don’t sleep the night away, new research says.

Members of three hunter-gatherer societies who lack electricity—and thus evenings filled with Facebook, Candy Crush, and 200 TV channels—get an average of only 6.4 hours of shut-eye a night, scientists have found. That’s no more than many humans who lead a harried industrial lifestyle, and less than the seven to nine hours recommended for most adults by the National Sleep Foundation.

People from these groups—two in Africa, one in South America—tend to nod off long after sundown and wake before dawn, contrary to the romantic vision of life without electric lights and electronic gadgets, the researchers report in Thursday’s Current Biology.

“Seeing the same pattern in three groups separated by thousands of miles on two continents [makes] it pretty clear that this is the natural pattern,” says study leader and sleep researcher Jerome Siegel of the University of California, Los Angeles. “Maybe people should be a little bit more relaxed about sleeping. If you sleep seven hours a night, that’s close to what our ancestors were sleeping.”

Previous research has linked lack of sleep to ills ranging from poor judgment to obesity to heart disease. The rise of mesmerizing electronic devices small enough to carry into bed has only heightened worries about a modern-day epidemic of bad sleep. One recent study found that after bedtime sessions with an eBook reader, test subjects took longer to fall asleep and were groggier in the morning than when they’d curled up with an old-fashioned paper book.

Many scientists argue that artificial lighting curtailed our rest, leading to sleep deficits. But Siegel questioned that storyline. He was studying the sleep of wild lions when he got the inspiration to monitor the sleep of pre-industrial people, whose habits might provide insight into the slumber of early humans.

Siegel and his colleagues recruited members of Bolivia’s Tsimane, who hunt and grow crops in the Amazonian basin, and hunter-gatherers from the Hadza society of Tanzania and the San people in Namibia. These are among the few remaining societies without electricity, artificial lighting, and climate control. At night, they build small fires and retire to simple houses built of materials such as grass and branches.

The researchers asked members of each group to wear wristwatch-like devices that record light levels and the smallest twitch and jerk. Many Tsimane thought the request comical, but almost all wanted to participate, says study co-author Gandhi Yetish of the University of New Mexico. People in the study fell asleep an average of just under three and a half hours after sunset, sleep records showed, and mostly awakened an average of an hour before sunrise.

The notable slugabeds are the San, who in the summer get up an hour after sunrise. The researchers noticed that at both the San and Tsimane research sites, summer nights during the study period lasted 11 hours, but mornings were chillier in the San village. That fits with other data showing the three groups tend to nod off when the night grows cold and rouse when temperature bottoms out before dawn.

Our time to wake and our time to sleep, Siegel says, seem to be dictated in part by natural temperature and light levels—and modern humans are divorced from both. He suggests some insomniacs might benefit from re-creating our ancient exposure to warmth and cold.

http://news.nationalgeographic.com/2015/10/20151015-paleo-sleep-time-hadza-san-tsimane-science/

By Peter Shadbolt, for CNN

How long will the data on your hard drive or USB stick last? Five years? 10 years? Longer?

Already a storage company called Backblaze is running 25,000 hard drives simultaneously to get to the bottom of the question. As each hard drive coughs its last, the company replaces it and logs its lifespan.

While this census has only been running five years, the statistics show a 22% attrition rate over four years.

Some may last longer than a decade, the company says, others may last little more than a year; but the short answer is that storage devices don’t last forever.

Science is now looking to nature, however, to find the best way to store data in a way that will make it last for millions of years.

Researchers at ETH Zurich, in Switzerland, believe the answer may lie in the data storage system that exists in every living cell: DNA.

So compact and complex are its strands that just 1 gram of DNA is theoretically capable of containing all the data of internet giants such as Google and Facebook, with room to spare.

In data storage terms, that gram would be capable of holding 455 exabytes, where one exabyte is equivalent to a billion gigabytes.
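That headline figure roughly checks out with some back-of-the-envelope arithmetic, sketched below under the common assumptions of 2 bits per nucleotide and an average nucleotide mass of about 330 daltons; the exact constants vary by source.

```python
# Rough back-of-the-envelope check of the ~455-exabyte-per-gram claim, assuming
# 2 bits per nucleotide and an average nucleotide mass of about 330 daltons.
AVOGADRO = 6.022e23              # nucleotides per mole
NUCLEOTIDE_MASS_G_PER_MOL = 330.0
BITS_PER_NUCLEOTIDE = 2          # A, C, G, T -> 2 bits each

nucleotides_per_gram = AVOGADRO / NUCLEOTIDE_MASS_G_PER_MOL
bytes_per_gram = nucleotides_per_gram * BITS_PER_NUCLEOTIDE / 8
exabytes_per_gram = bytes_per_gram / 1e18

print(f"~{exabytes_per_gram:.0f} exabytes per gram")   # roughly 456
```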

Fossilization has been known to preserve DNA in strands long enough to recover an animal’s entire genome — the complete set of genes present in a cell or organism.

So far, scientists have extracted and sequenced the genome of a 110,000-year-old polar bear and more recently a 700,000-year-old horse.

Robert Grass, a lecturer at ETH Zurich’s Department of Chemistry and Applied Biosciences, said the problem with DNA is that it degrades quickly. The project, he said, wanted to combine the large storage density possible with DNA with the stability of the DNA found in fossils.

“We have found elegant ways of making DNA very stable,” he told CNN. “So we wanted to combine these two stories — to get the high storage density of DNA and combine it with the archaeological aspects of DNA.”

The synthetic process of preserving DNA actually mimics processes found in nature.

As with fossils, keeping the DNA cool, dry and encased (in this case, with microscopic spheres of glass) could keep the information contained in its strands intact for thousands of years.

“The time limit with DNA in fossils is about 700,000 years but people speculate about finding one-million-year storage of genomic material in fossil bones,” he said.

“We were able to show that decay of our DNA and store of information decays at the same rate as the fossil DNA so we get to similar time frames of close to a million years.”

Fresh fossil discoveries are throwing up new surprises about the preservation of DNA.

Human bones discovered in the Sima de los Huesos cave network in Spain show maternally inherited “mitochondrial” DNA that is 400,000 years old – a new record for human remains.

The fact that the DNA survived in the relatively cool climate of a cave (rather than in a frozen environment, as with the DNA extracted from mammoth remains in Siberia) has added to the mystery about DNA longevity.

“A lot of it is not really known,” Grass says. “What we’re trying to understand is how DNA decays and what the mechanisms are to get more insight into that.”

What is known is that water and oxygen are the enemies of DNA survival. DNA left in a test tube and exposed to air will last little more than two to three years. Encasing it in glass (an inert, neutral agent) and cooling it increases its chances of survival.

Grass says sol-gel technology, which produces solid materials from small molecules, has made it a relatively easy process to get the glass around the DNA molecules.

While the team’s work invites immediate comparison with Jurassic Park, where DNA was extracted from amber fossils, Grass says that prehistoric insects encased in amber are a poor source of prehistoric DNA.

“The best DNA comes from sources that are ceramic and dry — so teeth, bones and even eggshells,” he said.

So far the team has tested its storage method by preserving just 83 kilobytes of data, drawn from two historical documents.

“The first is the Swiss Federal Charter of 1291 — it’s like the Swiss Magna Carta — and the other was the Archimedes Palimpsest, a copy of an ancient Greek mathematics treatise made by a monk in the 10th century but which had been overwritten by other monks in the 15th century.

“We wanted to preserve these documents to show not just that the method works, but that the method is important too,” he said.

He estimates that the information will be readable in 10,000 years’ time and, if frozen, for as long as a million years.

Encoding just 83 kilobytes of data cost about $2,000, making it a relatively expensive process, but Grass is optimistic that the price will come down over time. Advances in technology for medical analysis, he said, are likely to help with this.

“Already the prices for human genome sequences have dropped from several millions of dollars a few years ago to just hundreds of dollars now,” Grass said.

“It makes sense to integrate these advances in medical and genome analysis into the world of IT.”

http://www.cnn.com/2015/02/25/tech/make-create-innovate-fossil-dna-data-storage/index.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+rss%2Fcnn_latest+%28RSS%3A+Most+Recent%29

Men who post selfies on social media such as Instagram and Facebook score higher than average for the traits of narcissism and psychopathy, according to a new study from academics at Ohio State University.

Furthermore, people who use filters to edit shots score even higher for anti-social behaviour such as narcissism, an obsession with one’s own appearance.

Psychologists from Ohio State University surveyed 800 men aged 18 to 40 about their photo-posting habits on social media.

As well as completing questionnaires to test their levels of vanity, the men were asked whether they edited their photos by cropping them or adding a filter.

Assistant Professor Jesse Fox, lead author of the study at The Ohio State University, said: ‘It’s not surprising that men who post a lot of selfies and spend more time editing them are more narcissistic, but this is the first time it has actually been confirmed in a study.

‘The more interesting finding is that they also score higher on this other anti-social personality trait, psychopathy, and are more prone to self-objectification,’ she said.

http://www.timeslive.co.za/lifestyle/2015/01/08/men-who-post-selfies-have-narcissistic-and-psychopathic-tendencies-study