Archive for the ‘artificial intelligence’ Category

by Daniel Oberhaus

Amanda Feilding used to take lysergic acid diethylamide every day to boost creativity and productivity at work before LSD, known as acid, was made illegal in 1968. During her downtime, Feilding, who now runs the Beckley Foundation for psychedelic research, would get together with her friends to play the ancient Chinese game of Go, and came to notice something curious about her winning streaks.

“I found that if I was on LSD and my opponent wasn’t, I won more games,” Feilding told me over Skype. “For me that was a very clear indication that it improves cognitive function, particularly a kind of intuitive pattern recognition.”

An interesting observation to be sure. But was LSD actually helping Feilding in creative problem solving?

A half-century ban on psychedelic research has made answering this question in a scientific manner impossible. In recent years, however, psychedelic research has been experiencing something of a “renaissance” and now Feilding wants to put her intuition to the test by running a study in which participants will “microdose” while playing Go—a strategy game that is like chess on steroids—against an artificial intelligence.

Microdosing LSD is one of the hallmarks of the so-called “Psychedelic Renaissance.” It’s a regimen that involves regularly taking doses of acid that are so low they don’t impart any of the drug’s psychedelic effects. Microdosers claim the practice results in heightened creativity, lowered depression, and even relief from chronic somatic pain.

But so far, all evidence in favor of microdosing LSD has been based on self-reports, raising the possibility that these reported positive effects could all be placebo. So the microdosing community is going to have to do some science to settle the debate. That means clinical trials with quantifiable results like the one proposed by Feilding.

The first scientific trial to investigate the effects of microdosing, Feilding’s study will give 20 participants low doses—10, 20, or 50 micrograms of LSD—or a placebo on four separate occasions. After taking the acid, the subjects will have their brains imaged using MRI and MEG while they engage in a variety of cognitive tasks, such as the neuropsychology staples the Wisconsin Card Sorting Test and the Tower of London test. Importantly, the participants will also play Go against an AI, which will assess their performance during the match.

By imaging the brain while it’s under the influence of small amounts of LSD, Feilding hopes to learn how the substance changes connectivity in the brain to enhance creativity and problem solving. If the study goes forward, this will be only the second time that subjects on LSD have had their brains imaged while tripping. (The first, a 2016 study at Imperial College London also funded by the Beckley Foundation, found a significant uptick in neural activity in areas of the brain associated with vision during acid trips.)

Before Feilding can go ahead with her planned research, a number of obstacles remain in her way, starting with funding. She estimates she’ll need to raise about $350,000 to fund the study.

“It’s frightening how expensive this kind of research is,” Feilding said. “I’m very keen on trying to alter how drug policy categorizes these compounds because the research is much more costly simply because LSD is a controlled substance.”

To tackle this problem, Feilding has partnered with Rodrigo Niño, a New York entrepreneur who recently launched Fundamental, a platform for donations to support psychedelic research at institutions like the Beckley Foundation, Johns Hopkins University, and New York University.

The study will use smaller doses of LSD than Feilding’s previous LSD study, so she says she doesn’t anticipate problems getting ethical clearance. A far more difficult challenge will be procuring the acid for her research. In 2016, she was able to use LSD that had been synthesized for research purposes by a government-certified lab, but she suspects that this stash has long since been used up.

But if there’s anyone who can make the impossible possible, it would be Feilding, a psychedelic science pioneer known as much for drilling a hole in her own head (https://www.vice.com/en_us/article/drilling-a-hole-in-your-head-for-a-higher-state-of-consciousness) to explore consciousness as for the dozens of peer-reviewed scientific studies on psychedelic use she has authored in her lifetime. And according to Feilding, the potential benefits of microdosing are too great to be ignored and may even come to replace selective serotonin reuptake inhibitors, or SSRIs, as a common antidepressant.

“I think the microdose is a very delicate and sensitive way of treating people,” said Feilding. “We need to continue to research it and make it available to people.”

https://motherboard.vice.com/en_us/article/first-ever-lsd-microdosing-study-will-pit-the-human-brain-against-ai

To create a new drug, researchers have to test tens of thousands of compounds to determine how they interact. And that’s the easy part; after a substance is found to be effective against a disease, it has to perform well in three different phases of clinical trials and be approved by regulatory bodies.

It’s estimated that, on average, bringing a single new drug to market takes 1,000 people, 12 to 15 years, and up to $1.6 billion.

Last week, researchers published a paper detailing an artificial intelligence system made to help discover new drugs, and significantly shorten the amount of time and money it takes to do so.

The system is called AtomNet, and it comes from San Francisco-based startup Atomwise. The technology aims to streamline the initial phase of drug discovery, which involves analyzing how different molecules interact with one another—specifically, scientists need to determine which molecules will bind together and how strongly. They use trial and error and a process of elimination to analyze tens of thousands of compounds, both natural and synthetic.

AtomNet takes the legwork out of this process, using deep learning to predict how molecules will behave and how likely they are to bind together. The software teaches itself about molecular interaction by identifying patterns, similar to how AI learns to recognize images.

Remember the 3D models of molecules you made in high school, where you used pipe cleaners and foam balls to represent atoms and the bonds between them? AtomNet uses similar digital 3D models of molecules, incorporating data about their structure to predict their bioactivity.
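Atomwise has described AtomNet as a deep convolutional network applied to 3D grid representations of drug-target complexes. Below is a minimal sketch of that idea in PyTorch (illustrative only, not AtomNet itself; the grid size, channel count, and all names are assumptions):

import torch
import torch.nn as nn

class BindingScorer(nn.Module):
    """Toy 3D convolutional scorer for a voxelized drug-target complex."""
    def __init__(self, in_channels=8):  # e.g. one channel per atom type (assumption)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 5 * 5 * 5, 128), nn.ReLU(), nn.Linear(128, 1),
        )

    def forward(self, grid):
        # Output in (0, 1): higher means the molecules are likelier to bind.
        return torch.sigmoid(self.head(self.features(grid)))

grid = torch.randn(1, 8, 20, 20, 20)   # one hypothetical 20x20x20 voxel grid
binding_score = BindingScorer()(grid)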

As Atomwise COO Alexander Levy put it, “You can take an interaction between a drug and a huge biological system and you can decompose that into smaller and smaller interactive groups. If you study enough historical examples of molecules…you can then make predictions that are extremely accurate yet also extremely fast.”

“Fast” may even be an understatement; AtomNet can reportedly screen one million compounds in a day, a volume that would take months via traditional methods.

AtomNet can’t actually invent a new drug, or even say for sure whether a combination of two molecules will yield an effective drug. What it can do is predict how likely a compound is to work against a certain illness. Researchers then use those predictions to narrow thousands of options down to dozens (or fewer), focusing their testing where positive results are more likely.
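In code terms, that narrowing step is just ranking a compound library by predicted score and keeping the top of the list. A minimal sketch, assuming a trained predictor like the toy scorer above (all names here are hypothetical):

def shortlist(library, score_compound, top_k=24):
    # Rank thousands of candidate compounds by predicted activity and
    # keep a few dozen for real-world lab testing.
    return sorted(library, key=score_compound, reverse=True)[:top_k]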

The software has already proven itself by helping create new drugs for two diseases, Ebola and multiple sclerosis. The MS drug has been licensed to a British pharmaceutical company, and the Ebola drug is being submitted to a peer-reviewed journal for additional analysis.

https://singularityhub.com/2017/05/07/drug-discovery-ai-can-do-in-a-day-what-currently-takes-months/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

MICROSOFT IS BUYING a deep learning startup based in Montreal, a global hub for deep learning research. But two years ago, this startup wasn’t based in Montreal, and it had nothing to do with deep learning. Which just goes to show: striking it big in the world of tech is all about being in the right place at the right time with the right idea.

Sam Pasupalak and Kaheer Suleman founded Maluuba in 2011 as students at the University of Waterloo, about 400 miles from Montreal. The company’s name is an insider’s nod to one of their undergraduate computer science classes. From an office in Waterloo, they started building something like Siri, the digital assistant that would soon arrive on the iPhone, and they built it in much the same way Apple built the original, using techniques that had driven the development of conversational computing for years—techniques that require extremely slow and meticulous work, where engineers construct AI one tiny piece at a time. But as they toiled away in Waterloo, companies like Google and Facebook embraced deep neural networks, and this technology reinvented everything from image recognition to machine translations, rapidly learning these tasks by analyzing vast amounts of data. Soon, Pasupalak and Suleman realized they should change tack.

In December 2015, the two founders opened a lab in Montreal, and they started recruiting deep learning specialists from places like McGill University and the University of Montreal. Just thirteen months later, after growing to a mere 50 employees, the company sold itself to Microsoft. And that’s not an unusual story. The giants of tech are buying up deep learning startups almost as quickly as they’re created. At the end of December, Uber acquired Geometric Intelligence, a two-year-old AI startup of fifteen academic researchers that offered no product and no published research. The previous summer, Twitter paid a reported $150 million for Magic Pony, a two-year-old deep learning startup based in the UK. And in recent months, similarly small, similarly young deep learning companies have disappeared into the likes of General Electric, Salesforce, and Apple.

Microsoft did not disclose how much it paid for Maluuba, but some of these deep learning acquisitions have reached hefty sums, including Intel’s $400 million purchase of Nervana and Google’s $650 million acquisition of DeepMind, the British AI lab that made headlines last spring when it cracked the ancient game of Go, a feat experts didn’t expect for another decade.

At the same time, Microsoft’s buy is a little different than the rest. Maluuba is a deep learning company that focuses on natural language understanding, the ability to not just recognize the words that come out of our mouths but actually understand them and respond in kind—the breed of AI needed to build a good chatbot. Now that deep learning has proven so effective with speech recognition, image recognition, and translation, natural language is the next frontier. “In the past, people had to build large lexicons, dictionaries, ontologies,” Suleman says. “But with neural nets, we no longer need to do that. A neural net can learn from raw data.”

The acquisition is part of an industry-wide race towards digital assistants and chatbots that can converse like a human. Yes, we already have digital assistants like Microsoft Cortana, the Google Search Assistant, Facebook M, and Amazon Alexa. And chatbots are everywhere. But none of these services know how to chat (a particular problem for the chatbots). So, Microsoft, Google, Facebook, and Amazon are now looking at deep learning as a way of improving the state of the art.

Two summers ago, Google published a research paper describing a chatbot underpinned by deep learning that could debate the meaning of life (in a way). Around the same time, Facebook described an experimental system that could read a shortened form of The Lord of the Rings and answer questions about the Tolkien trilogy. Amazon is gathering data for similar work. And, none too surprisingly, Microsoft is gobbling up a startup that only just moved into the same field.

Winning the Game
Deep neural networks are complex mathematical systems that learn to perform discrete tasks by recognizing patterns in vast amounts of digital data. Feed millions of photos into a neural network, for instance, and it can learn to identify objects and people in photos. Pairing these systems with the enormous amounts of computing power inside their data centers, companies like Google, Facebook, and Microsoft have pushed artificial intelligence far further, far more quickly, than they ever could in the past.
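As a compressed sketch of that training process (the model, data, and sizes below are stand-ins, not any company’s production system):

import torch
import torch.nn as nn

# Stand-in data: one batch of 64x64 RGB "photos" labeled with 10 classes.
loader = [(torch.randn(32, 3, 64, 64), torch.randint(0, 10, (32,)))]

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for photos, labels in loader:        # in practice, millions of labeled photos
    loss = loss_fn(model(photos), labels)
    optimizer.zero_grad()
    loss.backward()                  # nudge the weights toward the correct labels
    optimizer.step()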

Now, these companies hope to reinvent natural language understanding in much the same way. But there are big caveats: It’s a much harder task, and the work has only just begun. “Natural language is an area where more research needs to be done, even basic research,” says University of Montreal professor Yoshua Bengio, one of the founding fathers of the deep learning movement and an advisor to Maluuba.

Part of the problem is that researchers don’t yet have the data needed to train neural networks for true conversation, and Maluuba is among those working to fill the void. Like Facebook and Amazon, it’s building brand new datasets for training natural language models: One involves questions and answers, and the other focuses on conversational dialogue. What’s more, the company is sharing this data with the larger community of researchers and encouraging them to share their own—a common strategy that seeks to accelerate the progress of AI research.

But even with adequate data, the task is quite different from image recognition or translation. Natural language isn’t necessarily something that neural networks can solve on their own. Dialogue isn’t a single task. It’s a series of tasks, each building on the one before. A neural network can’t just identify a pattern in a single piece of data. It must somehow identify patterns across an endless stream of data—and keep a “memory” of this stream. That’s why Maluuba is exploring AI beyond neural networks, including a technique called reinforcement learning.

With reinforcement learning, a system repeats the same task over and over again, while carefully keeping tabs on what works and what doesn’t. Engineers at Google’s DeepMind lab used this method in building AlphaGo, the system that topped Korean grandmaster Lee Sedol at the ancient game of Go. In essence, the machine learned to play Go at a higher level than any human by playing game after game against itself, tracking which moves won the most territory on the board. In similar fashion, reinforcement learning can help machines learn to carry on a conversation. Like a game, Bengio says, dialogue is interactive. It’s a back and forth.
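That trial-and-error bookkeeping is easy to make concrete with tabular Q-learning, the textbook form of reinforcement learning. The sketch below is generic, not DeepMind’s AlphaGo code, and the env interface is hypothetical:

import random
from collections import defaultdict

Q = defaultdict(float)               # (state, action) -> learned value estimate
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(10_000):        # repeat the same task over and over
    state, done = env.reset(), False
    while not done:
        actions = env.actions(state)
        if random.random() < epsilon:
            action = random.choice(actions)                     # explore
        else:
            action = max(actions, key=lambda a: Q[(state, a)])  # exploit what worked
        next_state, reward, done = env.step(action)
        best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions(next_state))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state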

For Microsoft, winning the game of conversation means winning an enormous market. Natural language could streamline practically any computer interface. With this in mind, the company is already building an army of chatbots, but so far, the results are mixed. In China, the company says, its Xiaoice chatbot has been used by 40 million people. But when it first unleashed a similar bot in the US, the service was coaxed into spewing racism, and the replacement is flawed in so many other ways. That’s why Microsoft acquired Maluuba. The startup was in the right place at the right time. And it may carry the right idea.

https://www.wired.com/2017/01/microsoft-thinks-machines-can-learn-converse-chats-become-game/

Google’s AI computers have created their own secret language, a fascinating and existentially challenging development.

In September, Google announced that its Neural Machine Translation system had gone live. It uses deep learning to produce better, more natural translations between languages.

Following on this success, GNMT’s creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa… could it translate Korean to Japanese, without resorting to English as a bridge between them?

This is called zero-shot translation.
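In Google’s published description of the system, each training sentence is marked with a token naming the desired target language, so one shared model learns every direction at once. A minimal sketch of that setup (translate stands in for a trained sequence-to-sequence model; it is not a real API):

train_pairs = [
    ("<2ko> How are you?", "잘 지내세요?"),    # English -> Korean
    ("<2en> 잘 지내세요?", "How are you?"),    # Korean -> English
    ("<2ja> How are you?", "お元気ですか。"),   # English -> Japanese
    ("<2en> お元気ですか。", "How are you?"),   # Japanese -> English
]

# After training on pairs like these, the zero-shot question is whether the
# same model can handle a direction it never saw:
output = translate("<2ja> 잘 지내세요?")       # Korean -> Japanese, never trained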

Indeed, Google’s AI evolved to produce reasonable translations between two languages that it has not explicitly linked in any way.

But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked… does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another?

In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google’s language and AI boffins think that it has.

This “interlingua” seems to exist as a deeper level of representation that sees similarities between a sentence or word in all three languages. Beyond that, it’s hard to say, since the inner processes of complex neural networks are infamously difficult to describe.
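One way researchers probe for such a representation is to embed translations of the same sentence and check whether they land near one another in the network’s internal vector space. A sketch, with encode standing in for the trained model’s encoder (a hypothetical name):

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

en = encode("The cat sat on the mat.")
ko = encode("고양이가 매트 위에 앉았다.")
ja = encode("猫がマットの上に座った。")

# If a shared "interlingua" exists, translations of the same sentence should
# score close to 1.0 with each other regardless of language:
print(cosine(en, ko), cosine(en, ja), cosine(ko, ja))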

It could be something sophisticated, or it could be something simple. But the fact that it exists at all — an original creation of the system’s own to aid in its understanding of concepts it has not been trained to understand — is, philosophically speaking, pretty powerful stuff.

Google’s AI translation tool seems to have invented its own secret internal language

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

by Jeremy Kahn

Google’s DeepMind AI unit, which earlier this year achieved a breakthrough in computer intelligence by creating software that beat the world’s best human player at the strategy game Go, is turning its attention to the sci-fi video game Starcraft II.

The company said it had reached a deal with Blizzard Entertainment Inc., the Irvine, California-based division of Activision Blizzard, which makes the Starcraft game series, to create an interface to let artificial intelligence researchers connect machine-learning software to the game.

London-based DeepMind, which Google purchased in 2014, has not said it has created software that can play Starcraft expertly — at least not yet. “We’re still a long way from being able to challenge a professional human player,” DeepMind research scientist Oriol Vinyals said in a blog post Friday. But the company’s announcement shows it’s looking seriously at Starcraft as a candidate for a breakthrough in machine intelligence.

Starcraft fascinates artificial intelligence researchers because it comes closer to simulating “the messiness of the real world” than games like chess or Go, Vinyals said. “An agent that can play Starcraft will need to demonstrate effective use of memory, an ability to plan over a long time and the capacity to adapt plans to new information,” he said, adding that techniques required to create a machine-learning system that mastered these skills in order to play Starcraft “could ultimately transfer to real-world tasks.”

Virtual Mining

In the game, which is played in real-time over the internet, players choose one of three character types, each of which has distinct strengths and weaknesses. Players must run an in-game economy, discovering and mining minerals and other commodities in order to conquer new territory. A successful player needs to remember large volumes of information about places they’ve scouted in the past, even when those places are not immediately observable on their screen.

The player’s view of what an opposing player is doing is limited — unlike chess or Go, where opponents can observe the whole board at one time. Furthermore, unlike in a game where players take turns, a machine-learning system has to deal with an environment that is constantly in flux. Starcraft in particular also requires an ability to plan both a long-term strategy and make very quick tactical decisions to stay ahead of an opponent — and designing software that is good at both types of decision-making is difficult.
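Whatever form the final interface takes, the research setup it enables is an agent loop over partial observations. The API below is invented for illustration (DeepMind and Blizzard had not released their interface at the time of writing):

memory = []                            # information scouted earlier, now off-screen

obs = game.reset()                     # `game` and `agent` are hypothetical names
while not obs.episode_over:
    memory.append(obs.visible_map)     # imperfect information must be remembered
    action = agent.act(obs, memory)    # long-term strategy plus split-second tactics
    obs = game.step(action)            # real time: the world changes regardless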

Facebook, Microsoft

Researchers at Facebook Inc. and Microsoft Corp. have also published papers on ways to interface artificial intelligence systems with earlier versions of Starcraft. And some Starcraft-playing bots have already been created, but so far these systems have not been able to defeat talented human players.

Microsoft Chief Executive Officer Satya Nadella has taken swipes at Google’s focus on games in its AI research, telling the audience at a company event in Atlanta in September that Microsoft was “not pursuing AI to beat humans at games” and that Microsoft wanted to build AI “to solve the most pressing problems of our society and economy.”

Games have long served as important tests and milestones for artificial intelligence research. In the mid-1990s, International Business Machines Corp.’s supercomputer Deep Blue defeated world chess champion Garry Kasparov on several occasions. IBM’s Watson artificial intelligence beat top human players in the game show Jeopardy in 2011, an achievement that showcased IBM’s strides in natural language processing. In 2015, DeepMind developed machine learning software that taught itself how to play dozens of retro Atari games, such as Breakout, as well as or better than a human. Then, in March of 2016, DeepMind’s AlphaGo program, trained in a different way, defeated Go world champion Lee Sedol.

In the nearly twenty years since Starcraft debuted, the game has acquired a massive and devoted following. More than 9.5 million copies of the original game were sold within the first decade of its release, with more than half of those being sold in Korea, where the game was especially popular. Starcraft II shattered sales records for a strategy game when it was released in 2010, selling 1.5 million copies within 48 hours. Pitting two players against one another in real-time, Starcraft was a pioneer in professional video game competitions and remains an important game in the world of e-sports, although its prominence has since been eclipsed by other games.

http://www.detroitnews.com/story/business/2016/11/05/deepmind-master-go-takes-video-game-starcraft/93370028/

by Bryan Nelson

Quantum physics has some spooky, counterintuitive effects, but it could also be essential to how actual intuition works, at least with regard to artificial intelligence.

In a new study, researcher Vedran Dunjko and co-authors applied a quantum analysis to a field within artificial intelligence called reinforcement learning, which deals with how to program a machine to make appropriate choices to maximize a cumulative reward. The field is surprisingly complex and must take into account everything from game theory to information theory.

Dunjko and his team found that quantum effects, when applied to reinforcement learning in artificial intelligence systems, could provide quadratic improvements in learning efficiency, reports Phys.org. Exponential improvements in performance over short stretches of a task might even be possible. The study was published in the journal Physical Review Letters.

“This is, to our knowledge, the first work which shows that quantum improvements are possible in more general, interactive learning tasks,” explained Dunjko. “Thus, it opens up a new frontier of research in quantum machine learning.”

One of the key quantum effects with regard to learning is quantum superposition, which potentially allows a machine to perform many computational steps simultaneously. Such a system has vastly improved processing power, which allows it to weigh more variables when making decisions.
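The quadratic language echoes Grover-style quantum search, where superposition lets a machine interrogate many candidates at once. As a back-of-envelope comparison (an illustration of the scaling, not the paper’s actual derivation):

T_{\text{classical}} = O(N), \qquad T_{\text{quantum}} = O(\sqrt{N})

For N = 1,000,000 candidate options, that is roughly a million classical queries against about a thousand quantum ones.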

The research is tantalizing, in part because it mirrors some theories about how biological brains might produce higher cognitive states, possibly even being related to consciousness. For instance, some scientists have proposed the idea that our brains pull off their complex calculations by making use of quantum computation.

Could quantum effects unlock consciousness in our machines? Quantum physics isn’t likely to produce HAL from “2001: A Space Odyssey” right away; the most immediate improvements in artificial intelligence will likely come in complex fields such as climate modeling or automated cars. But eventually, who knows?

You probably won’t want to be taking a joyride in an automated vehicle the moment it becomes conscious, if HAL is an example of what to expect.

“While the initial results are very encouraging, we have only begun to investigate the potential of quantum machine learning,” said Dunjko. “We plan on furthering our understanding of how quantum effects can aid in aspects of machine learning in an increasingly more general learning setting. One of the open questions we are interested in is whether quantum effects can play an instrumental role in the design of true artificial intelligence.”

http://www.mnn.com/green-tech/research-innovations/stories/quantum-artificial-intelligence-could-lead-super-smart-machines

Elon Musk has said that there is only a “one in billions” chance that we’re not living in a computer simulation.

Our lives are almost certainly being conducted within an artificial world powered by AI and high-powered computers, like in The Matrix, the Tesla and SpaceX CEO suggested at a tech conference in California.

Mr Musk, who has donated huge amounts of money to research into the dangers of artificial intelligence, said that he hopes his prediction is true because otherwise it means the world will end.

“The strongest argument for us probably being in a simulation I think is the following,” he told the Code Conference. “40 years ago we had Pong – two rectangles and a dot. That’s where we were.

“Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality.

“If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable.”

He said that even if the speed of those advancements dropped by a factor of 1,000, we would still be moving forward at an intense speed relative to the age of life.

Since that would lead to games that would be indistinguishable from reality that could be played anywhere, “it would seem to follow that the odds that we’re in ‘base reality’ is one in billions”, Mr Musk said.
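A one-line formalization of that reasoning, under the assumption (Musk’s, not an established fact) that advanced civilizations run N indistinguishable simulated histories for every base reality:

P(\text{base reality}) = \frac{1}{N + 1}, \qquad N \approx 10^9 \;\Rightarrow\; P \approx 10^{-9}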

Asked whether he was saying that the answer to the question of whether we are in a simulated computer game was “yes”, he said the answer is “probably”.

He said that arguably we should hope that it’s true that we live in a simulation. “Otherwise, if civilisation stops advancing, then that may be due to some calamitous event that stops civilisation.”

He said that either we will make simulations that we can’t tell apart from the real world, “or civilisation will cease to exist”.

Mr Musk said that he has had “so many simulation discussions it’s crazy”, and that it got to the point where “every conversation [he had] was the AI/simulation conversation”.

The question of whether what we see is real or simulated has perplexed humans since at least the ancient philosophers. But it has been given a new and different edge in recent years with the development of powerful computers and artificial intelligence, which some have argued shows how easily such a simulation could be created.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-artificial-intelligence-computer-simulation-gaming-virtual-reality-a7060941.html