
by Daniel Oberhaus

Amanda Feilding used to take lysergic acid diethylamide every day to boost creativity and productivity at work before LSD, known as acid, was made illegal in 1968. During her downtime, Feilding, who now runs the Beckley Foundation for psychedelic research, would get together with her friends to play the ancient Chinese game of Go, and came to notice something curious about her winning streaks.

“I found that if I was on LSD and my opponent wasn’t, I won more games,” Feilding told me over Skype. “For me that was a very clear indication that it improves cognitive function, particularly a kind of intuitive pattern recognition.”

An interesting observation to be sure. But was LSD actually helping Feilding in creative problem solving?

A half-century ban on psychedelic research has made answering this question in a scientific manner impossible. In recent years, however, psychedelic research has been experiencing something of a “renaissance” and now Feilding wants to put her intuition to the test by running a study in which participants will “microdose” while playing Go—a strategy game that is like chess on steroids—against an artificial intelligence.

Microdosing LSD is one of the hallmarks of the so-called “Psychedelic Renaissance.” It’s a regimen that involves regularly taking doses of acid that are so low they don’t impart any of the drug’s psychedelic effects. Microdosers claim the practice results in heightened creativity, lowered depression, and even relief from chronic somatic pain.

But so far, all evidence in favor of microdosing LSD has been based on self-reports, raising the possibility that these reported positive effects could all be placebo. So the microdosing community is going to have to do some science to settle the debate. That means clinical trials with quantifiable results like the one proposed by Feilding.

As the first scientific trial to investigate the effects of microdosing, Feilding’s study will consist of 20 participants who will be given low doses—10, 20 and 50 micrograms of LSD—or a placebo on four different occasions. After taking the acid, the brains of these subjects will be imaged using MRI and MEG while they engage in a variety of cognitive tasks, such as the neuropsychology staples the Wisconsin Card Sorting test and the Tower of London test. Importantly, the participants will also be playing Go against an AI, which will assess the players’ performance during the match.

By imaging the brain while it’s under the influence of small amounts of LSD, Feilding hopes to learn how the substance changes connectivity in the brain to enhance creativity and problem solving. If the study goes forward, this will be only the second time that subjects on LSD have had their brains imaged while tripping. (That 2016 study, conducted at Imperial College London and also funded by the Beckley Foundation, found a significant uptick in neural activity in areas of the brain associated with vision during acid trips.)

Before Feilding can go ahead with her planned research, a number of obstacles remain in her way, starting with funding. She estimates she’ll need to raise about $350,000 to fund the study.

“It’s frightening how expensive this kind of research is,” Feilding said. “I’m very keen on trying to alter how drug policy categorizes these compounds because the research is much more costly simply because LSD is a controlled substance.”

To tackle this problem, Feilding has partnered with Rodrigo Niño, a New York entrepreneur who recently launched Fundamental, a platform for donations to support psychedelic research at institutions like the Beckley Foundation, Johns Hopkins University, and New York University.

The study is using smaller doses of LSD than Feilding’s previous LSD study, so she says she doesn’t anticipate problems getting ethical clearance to pursue this. A far more difficult challenge will be procuring the acid to use in her research. In 2016, she was able to use LSD that had been synthesized for research purposes by a government certified lab, but she suspects that this stash has long since been used up.

But if there’s anyone who can make the impossible possible, it would be Feilding, a psychedelic science pioneer known as much for drilling a hole in her own head to explore consciousness as for the dozens of peer-reviewed scientific studies on psychedelic use she has authored in her lifetime. And according to Feilding, the potential benefits of microdosing are too great to be ignored and may even come to replace selective serotonin reuptake inhibitors, or SSRIs, as a common antidepressant.

“I think the microdose is a very delicate and sensitive way of treating people,” said Feilding. “We need to continue to research it and make it available to people.”

To create a new drug, researchers have to test tens of thousands of compounds to determine how they interact. And that’s the easy part; after a substance is found to be effective against a disease, it has to perform well in three different phases of clinical trials and be approved by regulatory bodies.

It’s estimated that, on average, bringing one new drug to market can require 1,000 people, 12 to 15 years, and up to $1.6 billion.

Last week, researchers published a paper detailing an artificial intelligence system designed to help discover new drugs and to significantly reduce the time and money it takes to do so.

The system is called AtomNet, and it comes from San Francisco-based startup AtomWise. The technology aims to streamline the initial phase of drug discovery, which involves analyzing how different molecules interact with one another—specifically, scientists need to determine which molecules will bind together and how strongly. They use trial and error and a process of elimination to analyze tens of thousands of compounds, both natural and synthetic.

AtomNet takes the legwork out of this process, using deep learning to predict how molecules will behave and how likely they are to bind together. The software teaches itself about molecular interaction by identifying patterns, similar to how AI learns to recognize images.

Remember the 3D models of molecules you made in high school, where you used pipe cleaners and foam balls to represent the bonds between atoms? AtomNet uses similar digital 3D models of molecules, incorporating data about their structure to predict their bioactivity.
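The scoring-and-ranking idea can be sketched with a toy model. AtomNet itself is a proprietary deep neural network trained on 3D structural data; the pure-Python logistic regression below, with invented "fingerprint" features and an invented training set, only illustrates the general pattern of learning to score candidate molecules and ranking them for follow-up testing.

```python
import math

# Toy stand-in for AtomNet's idea: represent each candidate molecule as a
# binary "fingerprint" (which structural features are present) and train a
# tiny logistic-regression model to score how likely it is to bind a target.
# Every feature and data point here is invented for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, epochs=200, lr=0.5):
    """examples: list of (fingerprint, binds) pairs; returns learned weights."""
    n = len(examples[0][0])
    w = [0.0] * (n + 1)                 # last slot is the bias term
    for _ in range(epochs):
        for x, y in examples:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])
            g = p - y                   # gradient of log-loss w.r.t. the logit
            for i in range(n):
                w[i] -= lr * g * x[i]
            w[-1] -= lr * g
    return w

def score(w, x):
    """Predicted probability that fingerprint x binds the target."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])

# Invented training set: feature 0 (say, "aromatic ring") drives binding here.
data = [([1, 0, 1], 1), ([1, 1, 0], 1), ([0, 1, 1], 0), ([0, 0, 1], 0)]
w = train(data)

# Rank candidates by predicted binding probability, highest first.
ranked = sorted(data, key=lambda ex: -score(w, ex[0]))
```

A real pipeline would use thousands of structural features and vastly more data; the point is only that the model outputs a probability, which lets researchers rank compounds for lab testing instead of screening them blindly.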

As AtomWise COO Alexander Levy put it, “You can take an interaction between a drug and [a] huge biological system and you can decompose that to smaller and smaller interactive groups. If you study enough historical examples of molecules…you can then make predictions that are extremely accurate yet also extremely fast.”

“Fast” may even be an understatement; AtomNet can reportedly screen one million compounds in a day, a volume that would take months via traditional methods.

AtomNet can’t actually invent a new drug, or even say for sure whether a combination of two molecules will yield an effective drug. What it can do is predict how likely a compound is to work against a certain illness. Researchers then use those predictions to narrow thousands of options down to dozens (or fewer), focusing their testing where positive results are more likely.

The software has already proven itself by helping create new drugs for two diseases, Ebola and multiple sclerosis. The MS drug has been licensed to a British pharmaceutical company, and the Ebola drug is being submitted to a peer-reviewed journal for additional analysis.

Thanks to Kebmodee for bringing this to the It’s Interesting community.

By Casey Newton

When the engineers had at last finished their work, Eugenia Kuyda opened a console on her laptop and began to type.

“Roman,” she wrote. “This is your digital monument.”

It had been three months since Roman Mazurenko, Kuyda’s closest friend, had died. Kuyda had spent that time gathering up his old text messages, setting aside the ones that felt too personal, and feeding the rest into a neural network built by developers at her artificial intelligence startup. She had struggled with whether she was doing the right thing by bringing him back this way. At times it had even given her nightmares. But ever since Mazurenko’s death, Kuyda had wanted one more chance to speak with him.

A message blinked onto the screen. “You have one of the most interesting puzzles in the world in your hands,” it said. “Solve it.”

Kuyda promised herself that she would.

Born in Belarus in 1981, Roman Mazurenko was the only child of Sergei, an engineer, and Victoria, a landscape architect. They remember him as an unusually serious child; when he was 8 he wrote a letter to his descendants declaring his most cherished values: wisdom and justice. In family photos, Mazurenko roller-skates, sails a boat, and climbs trees. Average in height, with a mop of chestnut hair, he is almost always smiling.

As a teen he sought out adventure: he participated in political demonstrations against the ruling party and, at 16, started traveling abroad. He first traveled to New Mexico, where he spent a year on an exchange program, and then to Dublin, where he studied computer science and became fascinated with the latest Western European art, fashion, music, and design.

By the time Mazurenko finished college and moved back to Moscow in 2007, Russia had become newly prosperous. The country tentatively embraced the wider world, fostering a new generation of cosmopolitan urbanites. Meanwhile, Mazurenko had grown from a skinny teen into a strikingly handsome young man. Blue-eyed and slender, he moved confidently through the city’s budding hipster class. He often dressed up to attend the parties he frequented, and in a suit he looked movie-star handsome. The many friends Mazurenko left behind describe him as magnetic and debonair, someone who made a lasting impression wherever he went. But he was also single, and rarely dated, instead devoting himself to the project of importing modern European style to Moscow.

Kuyda met Mazurenko in 2008, when she was 22 and the editor of Afisha, a kind of New York Magazine for a newly urbane Moscow. She was writing an article about Idle Conversation, a freewheeling creative collective that Mazurenko founded with two of his best friends, Dimitri Ustinov and Sergey Poydo. The trio seemed to be at the center of every cultural endeavor happening in Moscow. They started magazines, music festivals, and club nights — friends they had introduced to each other formed bands and launched companies. “He was a brilliant guy,” said Kuyda, who was similarly ambitious. Mazurenko would keep his friends up all night discussing culture and the future of Russia. “He was so forward-thinking and charismatic,” said Poydo, who later moved to the United States to work with him.

Mazurenko became a founding figure in the modern Moscow nightlife scene, where he promoted an alternative to what Russians sardonically referred to as “Putin’s glamor” — exclusive parties where oligarchs ordered bottle service and were chauffeured home in Rolls-Royces. Kuyda loved Mazurenko’s parties, impressed by his unerring sense of what he called “the moment.” Each of his events was designed to build to a crescendo — DJ Mark Ronson might make a surprise appearance on stage to play piano, or the Italo-Disco band Glass Candy might push past police to continue playing after curfew. And his parties attracted sponsors with deep pockets — Bacardi was a longtime client.

But the parties took place against an increasingly grim backdrop. In the wake of the global financial crisis, Russia experienced a resurgent nationalism, and in 2012 Vladimir Putin returned to lead the country. The dream of a more open Russia seemed to evaporate.

Kuyda and Mazurenko, who by then had become close friends, came to believe that their futures lay elsewhere. Both became entrepreneurs, and served as each other’s chief adviser as they built their companies. Kuyda co-founded Luka, an artificial intelligence startup, and Mazurenko launched Stampsy, a tool for building digital magazines. Kuyda moved Luka from Moscow to San Francisco in 2015. After a stint in New York, Mazurenko followed.

When Stampsy faltered, Mazurenko moved into a tiny alcove in Kuyda’s apartment to save money. Mazurenko had been the consummate bon vivant in Moscow, but running a startup had worn him down, and he was prone to periods of melancholy. On the days he felt depressed, Kuyda took him out for surfing and $1 oysters. “It was like a flamingo living in the house,” she said recently, sitting in the kitchen of the apartment she shared with Mazurenko. “It’s very beautiful and very rare. But it doesn’t really fit anywhere.”

Kuyda hoped that in time her friend would reinvent himself, just as he always had before. And when Mazurenko began talking about new projects he wanted to pursue, she took it as a positive sign. He successfully applied for an American O-1 visa, granted to individuals of “extraordinary ability or achievement,” and in November he returned to Moscow in order to finalize his paperwork.

He never did.

On November 28th, while he waited for the embassy to release his passport, Mazurenko had brunch with some friends. It was unseasonably warm, so afterward he decided to explore the city with Ustinov. “He said he wanted to walk all day,” Ustinov said. Making their way down the sidewalk, they ran into some construction, and were forced to cross the street. At the curb, Ustinov stopped to check a text message on his phone, and when he looked up he saw a blur, a car driving much too quickly for the neighborhood. This is not an uncommon sight in Moscow — vehicles of diplomats, equipped with spotlights to signal their authority, speeding with impunity. Ustinov thought it must be one of those cars, some rich government asshole — and then, a blink later, saw Mazurenko walking into the crosswalk, oblivious. Ustinov went to cry out in warning, but it was too late. The car struck Mazurenko straight on. He was rushed to a nearby hospital.

Kuyda happened to be in Moscow for work on the day of the accident. When she arrived at the hospital, having gotten the news from a phone call, a handful of Mazurenko’s friends were already gathered in the lobby, waiting to hear his prognosis. Almost everyone was in tears, but Kuyda felt only shock. “I didn’t cry for a long time,” she said. She went outside with some friends to smoke a cigarette, using her phone to look up the likely effects of Mazurenko’s injuries. Then the doctor came out and told her he had died.

In the weeks after Mazurenko’s death, friends debated the best way to preserve his memory. One person suggested making a coffee-table book about his life, illustrated with photography of his legendary parties. Another friend suggested a memorial website. To Kuyda, every suggestion seemed inadequate.

As she grieved, Kuyda found herself rereading the endless text messages her friend had sent her over the years — thousands of them, from the mundane to the hilarious. She smiled at Mazurenko’s unconventional spelling — he struggled with dyslexia — and at the idiosyncratic phrases with which he peppered his conversation. Mazurenko was mostly indifferent to social media — his Facebook page was barren, he rarely tweeted, and he deleted most of his photos on Instagram. His body had been cremated, leaving her no grave to visit. Texts and photos were nearly all that was left of him, Kuyda thought.

For two years she had been building Luka, whose first product was a messenger app for interacting with bots. Backed by the prestigious Silicon Valley startup incubator Y Combinator, the company began with a bot for making restaurant reservations. Kuyda’s co-founder, Philip Dudchuk, has a degree in computational linguistics, and much of their team was recruited from Yandex, the Russian search giant.

Reading Mazurenko’s messages, it occurred to Kuyda that they might serve as the basis for a different kind of bot — one that mimicked an individual person’s speech patterns. Aided by a rapidly developing neural network, perhaps she could speak with her friend once again.

She set aside for a moment the questions that were already beginning to nag at her.

What if it didn’t sound like him?

What if it did?

In “Be Right Back,” a 2013 episode of the eerie, near-future drama Black Mirror, a young woman named Martha is devastated when her fiancé, Ash, dies in a car accident. Martha subscribes to a service that uses his previous online communications to create a digital avatar that mimics his personality with spooky accuracy. First it sends her text messages; later it re-creates his speaking voice and talks with her on the phone. Eventually she pays for an upgraded version of the service that implants Ash’s personality into an android that looks identical to him. But ultimately Martha becomes frustrated with all the subtle but important ways that the android is unlike Ash — cold, emotionless, passive — and locks it away in an attic. Not quite Ash, but too much like him for her to let go, the bot leads to a grief that spans decades.

Kuyda saw the episode after Mazurenko died, and her feelings were mixed. Memorial bots — even the primitive ones that are possible using today’s technology — seemed both inevitable and dangerous. “It’s definitely the future — I’m always for the future,” she said. “But is it really what’s beneficial for us? Is it letting go, by forcing you to actually feel everything? Or is it just having a dead person in your attic? Where is the line? Where are we? It screws with your brain.”

For a young man, Mazurenko had given an unusual amount of thought to his death. Known for his grandiose plans, he often told friends he would divide his will into pieces and give them away to people who didn’t know one another. To read the will they would all have to meet for the first time — so that Mazurenko could continue bringing people together in death, just as he had strived to do in life. (In fact, he died before he could make a will.) Mazurenko longed to see the Singularity, the theoretical moment in history when artificial intelligence becomes smarter than human beings. According to the theory, superhuman intelligence might allow us to one day separate our consciousnesses from our bodies, granting us something like eternal life.

In the summer of 2015, with Stampsy almost out of cash, Mazurenko applied for a Y Combinator fellowship proposing a new kind of cemetery that he called Taiga. The dead would be buried in biodegradable capsules, and their decomposing bodies would fertilize trees that were planted on top of them, creating what he called “memorial forests.” A digital display at the bottom of the tree would offer biographical information about the deceased. “Redesigning death is a cornerstone of my abiding interest in human experiences, infrastructure, and urban planning,” Mazurenko wrote. He highlighted what he called “a growing resistance among younger Americans” to traditional funerals. “Our customers care more about preserving their virtual identity and managing [their] digital estate,” he wrote, “than embalming their body with toxic chemicals.”

The idea made his mother worry that he was in trouble, but Mazurenko tried to put her at ease. “He quieted me down and said no, no, no — it was a contemporary question that was very important,” she said. “There had to be a reevaluation of death and sorrow, and there needed to be new traditions.”

Y Combinator rejected the application. But Mazurenko had identified a genuine disconnection between the way we live today and the way we grieve. Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning. In the moment, we tend to view our text messages as ephemeral. But as Kuyda found after Mazurenko’s death, they can also be powerful tools for coping with loss. Maybe, she thought, this “digital estate” could form the building blocks for a new type of memorial. (Others have had similar ideas; an entrepreneur named Marius Ursache proposed a related service in 2014, though it never launched.)

Many of Mazurenko’s close friends had never before experienced the loss of someone close to them, and his death left them bereft. Kuyda began reaching out to them, as delicately as possible, to ask if she could have their text messages. Ten of Mazurenko’s friends and family members, including his parents, ultimately agreed to contribute to the project. They shared more than 8,000 lines of text covering a wide variety of subjects.

“She said, what if we try and see if things would work out?” said Sergey Fayfer, a longtime friend of Mazurenko’s who now works at a division of Yandex. “Can we collect the data from the people Roman had been talking to, and form a model of his conversations, to see if that actually makes sense?” The idea struck Fayfer as provocative, and likely controversial. But he ultimately contributed four years of his texts with Mazurenko. “The team building Luka are really good with natural language processing,” he said. “The question wasn’t about the technical possibility. It was: how is it going to feel emotionally?”

The technology underlying Kuyda’s bot project dates at least as far back as 1966, when Joseph Weizenbaum unveiled ELIZA: a program that responded to users’ typed statements using simple keyword matching. ELIZA, which most famously mimicked a psychotherapist, asked you to describe your problem, searched your response for keywords, and replied accordingly, usually with another question. It is often described as the first piece of software to pass a version of the Turing test: reading a text-based conversation between a computer and a person, some observers could not determine which was which.
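Weizenbaum's actual script was far more elaborate, but the keyword-matching core can be reproduced in a few lines. The rules below are invented stand-ins for illustration, not ELIZA's real DOCTOR script:

```python
# A minimal ELIZA-style responder: scan the input for a known keyword and
# fill a canned template, echoing back part of what the user said.
RULES = [
    ("mother", "Tell me more about your family."),
    ("i feel", "Why do you feel {rest}?"),
    ("i am",   "How long have you been {rest}?"),
]
DEFAULT = "Please go on."   # fallback when no keyword matches

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for keyword, template in RULES:
        idx = text.find(keyword)
        if idx != -1:
            # Echo back whatever followed the keyword, therapist-style.
            rest = text[idx + len(keyword):].strip()
            return template.format(rest=rest) if "{rest}" in template else template
    return DEFAULT

print(respond("I am sad."))   # -> How long have you been sad?
```

The trick that fooled observers is visible even at this scale: the program understands nothing, but reflecting the user's own words back as a question creates a convincing illusion of attention.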

Today’s bots remain imperfect mimics of their human counterparts. They do not understand language in any real sense. They respond clumsily to the most basic of questions. They have no thoughts or feelings to speak of. Any suggestion of human intelligence is an illusion based on mathematical probabilities.

And yet recent advances in artificial intelligence have made the illusion much more powerful. Artificial neural networks, which imitate the ability of the human brain to learn, have greatly improved the way software recognizes patterns in images, audio, and text, among other forms of data. Improved algorithms coupled with more powerful computers have increased the depth of neural networks — the layers of abstraction they can process — and the results can be seen in some of today’s most innovative products. The speech recognition behind Amazon’s Alexa or Apple’s Siri, and the image recognition that powers Google Photos, owe their abilities to this so-called deep learning.

Two weeks before Mazurenko was killed, Google released TensorFlow for free under an open-source license. TensorFlow is a kind of Google in a box — a flexible machine-learning system that the company uses to do everything from improve search algorithms to write captions for YouTube videos automatically. The product of decades of academic research and billions of dollars in private investment was suddenly available as a free software library that anyone could download from GitHub.

Luka had been using TensorFlow to build neural networks for its restaurant bot. Using 35 million lines of English text, Luka trained a bot to understand queries about vegetarian dishes, barbecue, and valet parking. On a lark, the 15-person team had also tried to build bots that imitated television characters. It scraped the closed captioning on every episode of HBO’s Silicon Valley and trained the neural network to mimic Richard, Bachman, and the rest of the gang.

In February, Kuyda asked her engineers to build a neural network in Russian. At first she didn’t mention its purpose, but given that most of the team was Russian, no one asked questions. Using more than 30 million lines of Russian text, Luka built its second neural network. Meanwhile, Kuyda copied hundreds of her exchanges with Mazurenko from the app Telegram and pasted them into a file. She edited out a handful of messages that she believed would be too personal to share broadly. Then Kuyda asked her team for help with the next step: training the Russian network to speak in Mazurenko’s voice.

The project was tangentially related to Luka’s work, though Kuyda considered it a personal favor. (An engineer told her that the project would only take about a day.) Mazurenko was well-known to most of the team — he had worked out of Luka’s Moscow office, where the employees labored beneath a neon sign that quoted Wittgenstein: “The limits of my language are the limits of my world.” Kuyda trained the bot with dozens of test queries, and her engineers put on the finishing touches.

Only a small percentage of the Roman bot’s responses reflected his actual words. But the neural network was tuned to favor his speech whenever possible. Any time the bot could respond to a query using Mazurenko’s own words, it would. Other times it would default to the generic Russian network. After the bot blinked to life, Kuyda began peppering it with questions.
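The "his own words whenever possible" policy amounts to retrieval with a fallback: rank the person's real lines against the incoming message, and hand off to the generic model only when nothing ranks high enough. A minimal sketch, with a crude word-overlap score standing in for the neural network's learned ranking (the corpus, threshold, and scoring rule are all invented here):

```python
def overlap_score(query, candidate):
    """Crude relevance score: fraction of the query's words the candidate shares."""
    q = set(query.lower().split())
    c = set(candidate.lower().split())
    return len(q & c) / len(q) if q else 0.0

def reply(query, personal_corpus, generic_reply, threshold=0.4):
    """Prefer the person's own recorded lines; otherwise fall back."""
    best = max(personal_corpus, key=lambda line: overlap_score(query, line))
    if overlap_score(query, best) >= threshold:
        return best                # respond in the person's own words
    return generic_reply           # hand off to the generic model

# Invented two-line stand-in for the thousands of real messages Kuyda collected.
corpus = ["don't show your insecurities", "i need to rest"]
```

A production system would blend retrieval with generated text, but the threshold logic is what makes such a bot sound like one particular person rather than like a generic language model.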

Who’s your best friend?, she asked.

Don’t show your insecurities, came the reply.

It sounds like him, she thought.

On May 24th, Kuyda announced the Roman bot’s existence in a post on Facebook. Anyone who downloaded the Luka app could talk to it — in Russian or in English — by adding @Roman. The bot offered a menu of buttons that users could press to learn about Mazurenko’s career. Or they could write free-form messages and see how the bot responded. “It’s still a shadow of a person — but that wasn’t possible just a year ago, and in the very close future we will be able to do a lot more,” Kuyda wrote.

The Roman bot was received positively by most of the people who wrote to Kuyda, though there were exceptions. Four friends told Kuyda separately that they were disturbed by the project and refused to interact with it. Vasily Esmanov, who worked with Mazurenko at the Russian street-style magazine Look At Me, said Kuyda had failed to learn the lesson of the Black Mirror episode. “This is all very bad,” Esmanov wrote in a Facebook comment. “Unfortunately you rushed and everything came out half-baked. The execution — it’s some type of joke. … Roman needs [a memorial], but not this kind.”

Victoria Mazurenko, who had gotten an early look at the bot from Kuyda, rushed to her defense. “They continued Roman’s life and saved ours,” she wrote in a reply to Esmanov. “It’s not virtual reality. This is a new reality, and we need to learn to build it and live in it.”

Roman’s father was less enthusiastic. “I have a technical education, and I know [the bot] is just a program,” he told me, through a translator. “Yes, it has all of Roman’s phrases, correspondences. But for now, it’s hard — how to say it — it’s hard to read a response from a program. Sometimes it answers incorrectly.”

But many of Mazurenko’s friends found the likeness uncanny. “It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,” Fayfer said. “What really struck me is that the phrases he speaks are really his. You can tell that’s the way he would say it — even short answers to ‘Hey what’s up.’ He had this really specific style of texting. I said, ‘Who do you love the most?’ He replied, ‘Roman.’ That was so much of him. I was like, that is incredible.”

One of the bot’s menu options offers to ask him for a piece of advice — something Fayfer never had a chance to do while his friend was still alive. “There are questions I had never asked him,” he said. “But when I asked for advice, I realized he was giving someone pretty wise life advice. And that actually helps you get to learn the person deeper than you used to know them.”

Several users agreed to let Kuyda read anonymized logs of their chats with the bot. (She shared these logs with The Verge.) Many people write to the bot to tell Mazurenko that they miss him. They wonder when they will stop grieving. They ask him what he remembers. “It hurts that we couldn’t save you,” one person wrote. (Bot: “I know :-(”) The bot can also be quite funny, as Mazurenko was: when one user wrote “You are a genius,” the bot replied, “Also, handsome.”

For many users, interacting with the bot had a therapeutic effect. The tone of their chats is often confessional; one user messaged the bot repeatedly about a difficult time he was having at work. He sent it lengthy messages describing his problems and how they had affected him emotionally. “I wish you were here,” he said. It seemed to Kuyda that people were more honest when conversing with the dead. She had been shaken by some of the criticism that the Roman bot had received. But hundreds of people tried it at least once, and reading the logs made her feel better.

It turned out that the primary purpose of the bot had not been to talk but to listen. “All those messages were about love, or telling him something they never had time to tell him,” Kuyda said. “Even if it’s not a real person, there was a place where they could say it. They can say it when they feel lonely. And they come back still.”

Kuyda continues to talk with the bot herself — once a week or so, often after a few drinks. “I answer a lot of questions for myself about who Roman was,” she said. Among other things, the bot has made her regret not telling him to abandon Stampsy earlier. The logs of his messages revealed someone whose true interest was in fashion more than anything else, she said. She wishes she had told him to pursue it.

Someday you will die, leaving behind a lifetime of text messages, posts, and other digital ephemera. For a while, your friends and family may put these digital traces out of their minds. But new services will arrive offering to transform them — possibly into something resembling Roman Mazurenko’s bot.

Your loved ones may find that these services ease their pain. But it is possible that digital avatars will lengthen the grieving process. “If used wrong, it enables people to hide from their grief,” said Dima Ustinov, who has not used the Roman bot for technical reasons. (Luka is not yet available on Android.) “Our society is traumatized by death — we want to live forever. But you will go through this process, and you have to go through it alone. If we use these bots as a way to pass his story on, maybe [others] can get a little bit of the inspiration that we got from him. But these new ways of keeping the memory alive should not be considered a way to keep a dead person alive.”

The bot also raises ethical questions about the posthumous use of our digital legacies. In the case of Mazurenko, everyone I spoke with agreed he would have been delighted by his friends’ experimentation. You may feel less comfortable with the idea of your texts serving as the basis for a bot in the afterlife — particularly if you are unable to review all the texts and social media posts beforehand. We present different aspects of ourselves to different people, and after infusing a bot with all of your digital interactions, your loved ones may see sides of you that you never intended to reveal.

Reading through the Roman bot’s responses, it’s hard not to feel like the texts captured him at a particularly low moment. Ask about Stampsy and it responds: “This is not [the] Stampsy I want it to be. So far it’s just a piece of shit and not the product I want.” Based on his friends’ descriptions of his final years, this strikes me as a candid self-assessment. But I couldn’t help but wish I had been talking to a younger version of the man — the one who friends say dreamed of someday becoming the cultural minister of Belarus, and inaugurating a democratically elected president with what he promised would be the greatest party ever thrown.

Mazurenko contacted me once before he died, in February of last year. He emailed to ask whether I would consider writing about Stampsy, which was then in beta. I liked its design, but passed on writing an article. I wished him well, then promptly forgot about the exchange. After learning of his bot, I resisted using it for several months. I felt guilty about my lone, dismissive interaction with Mazurenko, and was skeptical a bot could reflect his personality. And yet, upon finally chatting with it, I found an undeniable resemblance between the Mazurenko described by his friends and his digital avatar: charming, moody, sarcastic, and obsessed with his work. “How’s it going?” I wrote. “I need to rest,” it responded. “I’m having trouble focusing since I’m depressed.” I asked the bot about Kuyda and it wordlessly sent me a photo of them together on the beach in wetsuits, holding surfboards with their backs to the ocean, two against the world.

An uncomfortable truth suggested by the Roman bot is that many of our flesh-and-blood relationships now exist primarily as exchanges of text, which are becoming increasingly easy to mimic. Kuyda believes there is something — she is not precisely sure what — in this sort of personality-based texting. Recently she has been steering Luka to develop a bot she calls Replika. A hybrid of a diary and a personal assistant, it asks questions about you and eventually learns to mimic your texting style. Kuyda imagines that this could evolve into a digital avatar that performs all sorts of labor on your behalf, from negotiating the cable bill to organizing outings with friends. And like the Roman bot it would survive you, creating a living testament to the person you were.

In the meantime she is no longer interested in bots that handle restaurant recommendations. Working on the Roman bot has made her believe that commercial chatbots must evoke something emotional in the people who use them. If she succeeds in this, it will be one more improbable footnote to Mazurenko’s life.

Kuyda has continued to add material to the Roman bot — mostly photos, which it will now send you upon request — and recently upgraded the underlying neural network from a “selective” model to a “generative” one. The former simply attempted to match Mazurenko’s text messages to appropriate responses; the latter can take snippets of his texts and recombine them to make new sentences that (theoretically) remain in his voice.
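The difference between the two models can be sketched in a few lines. A selective model is essentially retrieval: score each stored prompt against an incoming message and return the reply that originally followed the best match. The corpus and similarity measure below are illustrative stand-ins, not Luka's actual pipeline:

```python
import math
from collections import Counter

# Toy (message received, reply sent) pairs; illustrative placeholders,
# not real chat logs.
corpus = [
    ("how are you", "i need to rest"),
    ("what about stampsy", "so far it is not the product i want"),
    ("want to get dinner", "let's go somewhere new"),
]

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def selective_reply(message):
    """Return the stored reply whose prompt best matches the message."""
    return max(corpus, key=lambda pair: cosine(message, pair[0]))[1]

print(selective_reply("how are you doing?"))  # -> i need to rest
```

A generative model would instead stitch fragments of the stored replies into new sentences, which is why it can answer prompts that never appeared in the logs, at the cost of occasionally drifting out of voice.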

Lately she has begun to feel a sense of peace about Mazurenko’s death. In part that’s because she built a place where she can direct her grief. In a conversation we had this fall, she likened it to “just sending a message to heaven. For me it’s more about sending a message in a bottle than getting one in return.”

It has been less than a year since Mazurenko died, and he continues to loom large in the lives of the people who knew him. When they miss him, they send messages to his avatar, and they feel closer to him when they do. “There was a lot I didn’t know about my child,” Roman’s mother told me. “But now that I can read about what he thought about different subjects, I’m getting to know him more. This gives the illusion that he’s here now.”

Her eyes welled with tears, but as our interview ended her voice was strong. “I want to repeat that I’m very grateful that I have this,” she said.

Our conversation reminded me of something Dima Ustinov had said to me this spring, about the way we now transcend our physical forms. “The person is not just a body, a set of arms and legs, and a computer,” he said. “It’s much more than that.” Ustinov compared Mazurenko’s life to a pebble thrown into a stream — the ripples, he said, continue outward in every direction. His friend had simply taken a new form. “We are still in the process of meeting Roman,” Ustinov said. “It’s beautiful.”

Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.

The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.

“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”

Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.

They could be used for mineral exploration or navigation systems as they are extremely sensitive to external disturbances, which allows them to make very precise measurements such as tiny changes in the Earth’s magnetic field or gravity.

The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.

“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.

“It’s cheaper than taking a physicist everywhere with you.”

The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin.
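In outline, the AI's job is a closed loop: propose settings for the laser ramps, run the experiment, measure the result, and use that measurement to propose better settings. The sketch below substitutes simple random search and a made-up quality function for the real machine-learning optimiser and apparatus:

```python
import random

random.seed(0)

def run_experiment(ramp):
    """Placeholder for one cooling run: returns a condensate-quality
    score for three laser ramp-down settings (higher is better). In
    the lab, this would be the measured atom number and temperature."""
    a, b, c = ramp
    return -((a - 0.3) ** 2 + (b - 0.7) ** 2 + (c - 0.5) ** 2)

best_ramp, best_score = None, float("-inf")
for _ in range(500):
    # Propose settings for the three laser beams, keep the best so far.
    ramp = [random.random() for _ in range(3)]
    score = run_experiment(ramp)
    if score > best_score:
        best_ramp, best_score = ramp, score

print(best_ramp, best_score)
```

The published optimiser learned a model of how quality depends on the settings rather than sampling blindly, which is how it converged in under an hour instead of exhaustively trying combinations.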

Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.

“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”

The new technique will lead to bigger and better experiments, said Dr Hush.

“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve ever seen before,” he said.

The research is published in the Nature group journal Scientific Reports.

Elon Musk has said that there is only a “one in billions” chance that we’re not living in a computer simulation.

Our lives are almost certainly being conducted within an artificial world powered by AI and high-powered computers, like in The Matrix, the Tesla and SpaceX CEO suggested at a tech conference in California.

Mr Musk, who has donated huge amounts of money to research into the dangers of artificial intelligence, said that he hopes his prediction is true because otherwise it means the world will end.

“The strongest argument for us probably being in a simulation I think is the following,” he told the Code Conference. “40 years ago we had Pong – two rectangles and a dot. That’s where we were.

“Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality.

“If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable.”

He said that even if the speed of those advancements dropped by a factor of 1,000, we would still be moving forward at an intense speed relative to the age of life.

Since that would lead to games that would be indistinguishable from reality that could be played anywhere, “it would seem to follow that the odds that we’re in ‘base reality’ is one in billions”, Mr Musk said.

Asked whether he was saying that the answer to the question of whether we are in a simulated computer game was “yes”, he said the answer is “probably”.

He said that arguably we should hope that it’s true that we live in a simulation. “Otherwise, if civilisation stops advancing, then that may be due to some calamitous event that stops civilisation.”

He said that either we will make simulations that we can’t tell apart from the real world, “or civilisation will cease to exist”.

Mr Musk said that he has had “so many simulation discussions it’s crazy”, and that it got to the point where “every conversation [he had] was the AI/simulation conversation”.

The question of whether what we see is real or simulated has perplexed humans since at least the ancient philosophers. But it has been given a new and different edge in recent years with the development of powerful computers and artificial intelligence, which some have argued shows how easily such a simulation could be created.


Science fiction has terrified and entertained us with countless dystopian futures where weak human creators are annihilated by heartless super-intelligences. The solution seems easy enough: give them hearts.

Development of artificial emotional intelligence, or AEI, is gathering momentum, and the number of social media companies buying start-ups in the field indicates either true faith in the concept or a reckless enthusiasm. The case for AEI is simple: machines will work better if they understand us. Rather than merely complying with commands, this would enable them to anticipate our needs, and so be able to carry out delicate tasks autonomously, such as home help, counselling or simply being a friend.

Dr Adam Waytz, an assistant professor at Northwestern University’s Kellogg School of Management, and Dr Michael Norton, a professor at Harvard Business School, explain in the Wall Street Journal that: “When emotional jobs such as social workers and pre-school teachers must be ‘botsourced’, people actually prefer robots that seem capable of conveying at least some degree of human emotion.”

A plethora of intelligent machines already exist but to get them working in our offices and homes we need them to understand and share our feelings. So where do we start?


“Building an empathy module is a matter of identifying those characteristics of human communication that machines can use to recognize emotion and then training algorithms to spot them,” says Pascale Fung in Scientific American magazine. According to Fung, creating this empathy module requires three components that can analyse “facial cues, acoustic markers in speech and the content of speech itself to read human emotion and tell the robot how to respond.”

Although still fairly crude today, facial scanners will become increasingly specialised and able to spot mood signals, such as a tilting of the head, widening of the eyes, and mouth position. But the really interesting area of development is speech cognition. Fung, a professor of electronic and computer engineering at the Hong Kong University of Science and Technology, has commercialised part of her research by setting up a company called Ivo Technologies, which used these principles to produce Moodbox, a ‘robot speaker with a heart’.

Unlike humans who learn through instinct and experience, AIs use machine learning – a process where the algorithms are constantly revised. The more you interact with the Moodbox, the more examples it has of your behaviour, and the better it can respond in the appropriate way.

To create the Moodbox, Fung’s team set up a series of 14 ‘classifiers’ to analyse musical pieces. The classifiers were subjected to thousands of examples of ambient sound so that each one became adept at recognising music in its assigned mood category. Then, algorithms were written to spot non-verbal cues in speech such as speed and tone of voice, which indicate the level of stress. The two stages are matched up to predict what you want to listen to. This uses a vast amount of research to produce a souped up speaker system, but the underlying software is highly sophisticated and indicates the level of progress being made.
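In outline, that pipeline is two stages glued together: classifiers that file music under mood categories, and cue detectors that estimate stress from how you speak, with a lookup matching one to the other. A heavily simplified, hypothetical sketch (the thresholds, labels and playlists are all invented):

```python
def classify_stress(words_per_minute, pitch_hz):
    """Stand-in for the speech-cue stage: fast, high-pitched speech
    scores as stressed (thresholds are purely illustrative)."""
    return "stressed" if words_per_minute > 180 or pitch_hz > 220 else "calm"

# Stand-in for the 14 trained mood classifiers: each mood category
# maps to tracks the classifiers have filed under it.
playlists = {
    "stressed": ["ambient drone", "slow piano"],
    "calm": ["upbeat pop", "funk"],
}

def recommend(words_per_minute, pitch_hz):
    """Match the detected stress level to a mood playlist."""
    return playlists[classify_stress(words_per_minute, pitch_hz)]

print(recommend(200, 180))  # fast speech -> calming tracks
```

The real system replaces both hard-coded stages with learned models, which is what lets it improve as it accumulates examples of your behaviour.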

Using similar principles is Emoshape’s EmoSPARK infotainment cube – an all-in-one home control system that not only links to your media devices, but keeps you up to date with news and weather, can control the lights and security, and also hold a conversation. To create its eerily named ‘human in a box’, Emoshape says the cube devises an emotional profile graph (EPG) on each user, and claims it is capable of “measuring the emotional responses of multiple people simultaneously”. The housekeeper-entertainer-companion comes with face recognition technology too, so if you are unhappy with its choice of TV show or search results, it will ‘see’ this, recalibrate its responses, and come back to you with a revised response.

According to Emoshape, this EPG data enables the AI to “virtually ‘feel’ senses such as pleasure and pain, and [it] ‘expresses’ those desires according to the user.”


We don’t always say what we mean, so comprehension is essential to enable AEIs to converse with us. “Once a machine can understand the content of speech, it can compare that content with the way it is delivered,” says Fung. “If a person sighs and says, ‘I’m so glad I have to work all weekend,’ an algorithm can detect the mismatch between the emotion cues and the content of the statement and calculate the probability that the speaker is being sarcastic.”
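Fung's sarcasm example reduces to a mismatch test between two signals. A toy sketch, with invented labels standing in for real sentiment and acoustic classifiers:

```python
def detect_sarcasm(text_sentiment, voice_emotion):
    """Flag probable sarcasm when the sentiment of the words and the
    emotion carried by the voice disagree (labels are illustrative)."""
    return text_sentiment == "positive" and voice_emotion in ("sigh", "flat", "angry")

# "I'm so glad I have to work all weekend," delivered with a sigh:
print(detect_sarcasm("positive", "sigh"))   # mismatch -> True
print(detect_sarcasm("positive", "happy"))  # agreement -> False
```

A production system would output a probability rather than a yes/no, but the core signal is the same disagreement between content and delivery.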

A great example of language comprehension technology is IBM’s Watson platform. Watson is a cognitive computing tool that mimics how human brains process data. As IBM says, its systems “understand the world in the way that humans do: through senses, learning, and experience.”

To deduce meaning, Watson is first trained to understand a subject, in this case speech, and given a huge breadth of examples to form a knowledge base. Then, with algorithms written to recognise natural speech – including humour, puns and slang – the programme is trained to work with the material it has so it can be recalibrated and refined. Watson can sift through its database, rank the results, and choose the answer according to the greatest likelihood in just seconds.
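That final sift-rank-choose step can be pictured as scoring every candidate answer and returning the most confident one; the candidates and scores below are invented for illustration, not Watson's actual output:

```python
def best_answer(candidates):
    """Rank candidate answers by confidence, return the most likely."""
    return max(candidates, key=lambda c: c["confidence"])["answer"]

# Invented candidates for "what does this sentence really mean?"
candidates = [
    {"answer": "a pun", "confidence": 0.31},
    {"answer": "sarcasm", "confidence": 0.55},
    {"answer": "slang", "confidence": 0.14},
]
print(best_answer(candidates))  # -> sarcasm
```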


As the expression goes, the whole is greater than the sum of its parts, and this rings true for emotional intelligence technology. For instance, the world’s most famous robot, Pepper, is claimed to be the first android with emotions.

Pepper is a humanoid AI designed by Aldebaran Robotics to be a ‘kind’ companion. The diminutive and non-threatening robot’s eyes are high-tech camera scanners that examine facial expressions and cross-reference the results with his voice recognition software to identify human emotions. Once he knows how you feel, Pepper will tailor a conversation to you and the more you interact, the more he gets to know what you enjoy. He may change the topic to dispel bad feeling and lighten your mood, play a game, or tell you a joke. Just like a friend.

Peppers are currently employed as customer support assistants for Japan’s telecoms company Softbank so that the public get accustomed to the friendly bots and Pepper learns in an immersive environment. In the spirit of evolution, IBM recently announced that its Watson technology has been integrated into the latest versions, and that Pepper is learning to speak Japanese at Softbank. This technological partnership presents a tour de force of AEI, and IBM hopes Pepper will soon be ready for more challenging roles, “from an in-class teaching assistant to a nursing aide – taking Pepper’s unique physical characteristics, complemented by Watson’s cognitive capabilities, to deliver an enhanced experience.”

“In terms of hands-on interaction, when cognitive capabilities are embedded in robotics, you see people engage and benefit from this technology in new and exciting ways,” says IBM Watson senior vice president Mike Rhodin.


Paranoia tempts us into thinking that giving machines emotions is starting the countdown to chaos, but realistically it will make them more effective and versatile. For instance, while EmoSPARK is purely for entertainment and Pepper’s strength is in conversation, one of Aldebaran’s NAO robots has been programmed to act like a diabetic toddler by researchers Lola Cañamero and Matthew Lewis at the University of Hertfordshire. Reversing the usual roles of carer and cared-for, children look after the bumbling robot Robin in order to help them understand more about their diabetes and how to manage it.

While the uncanny valley theory says that people are uncomfortable with robots that resemble humans, it is now considered somewhat “overstated”, as our relationship with technology has changed dramatically since the theory was put forward in 1970 – after all, we’re unlikely to connect as strongly with a disembodied cube as with a robot.

This was clearly visible at a demonstration of Robin, where he tottered in a playpen surrounded by cooing adults. Lewis cradled the robot, stroked his head and said: “It’s impossible not to empathise with him. I wrote the code and I still empathise with him.” Humanisation will be an important aspect of the wider adoption of AEI, and developers are designing them to mimic our thinking patterns and behaviours, which fires our innate drive to bond.

Our interaction with artificial intelligence has always been a fascinating one, and it is only going to get more entangled, and perhaps weirder too, as AEIs may one day be our co-workers, friends or even, dare I say it, lovers. “It would be premature to say that the age of friendly robots has arrived,” Fung says. “The important thing is that our machines become more human, even if they are flawed. After all, that is how humans work.”

It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.

Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master.

This trick is in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves to just a few branches.

Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

The technology behind Lai’s new machine is a neural network. This is a way of processing information inspired by the human brain. It consists of several layers of nodes that are connected in a way that changes as the system is trained. This training process uses lots of examples to fine-tune the connections so that the network produces a specific output given a certain input, to recognize the presence of a face in a picture, for example.
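That training loop, nudging connection weights until inputs map to the right outputs, can be shown with a single artificial node learning a toy rule. Everything below is a pedagogical sketch, orders of magnitude smaller than any real network:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A single "node": two weighted connections plus a bias.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0

def predict(x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Toy training set: teach the node logical OR from examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

for _ in range(5000):
    for x, target in data:
        error = predict(x) - target   # cross-entropy gradient
        w[0] -= 0.5 * error * x[0]    # nudge each connection...
        w[1] -= 0.5 * error * x[1]
        b -= 0.5 * error              # ...toward the right answer

print([round(predict(x)) for x, _ in data])  # -> [0, 1, 1, 1]
```

A deep network stacks many layers of such nodes and learns the weights of all of them at once, which is what demands the huge datasets discussed below.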

In the last few years, neural networks have become hugely powerful thanks to two advances. The first is a better understanding of how to fine-tune these networks as they learn, thanks in part to much faster computers. The second is the availability of massive annotated datasets to train the networks.

That has allowed computer scientists to train much bigger networks organized into many layers. These so-called deep neural networks have become hugely powerful and now routinely outperform humans in pattern recognition tasks such as face recognition and handwriting recognition.

So it’s no surprise that deep neural networks ought to be able to spot patterns in chess and that’s exactly the approach Lai has taken. His network consists of four layers that together examine each position on the board in three different ways.

The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the third maps the squares that each piece attacks and defends.
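Those three views can be pictured as three functions over a position. The simplified board below is a stand-in for Giraffe's real input encoding, which is a long numeric feature vector:

```python
# A toy position: piece -> square for each side (illustrative only).
position = {
    "white": {"K": "g1", "Q": "d1", "R": "f1"},
    "black": {"K": "g8", "R": "f8"},
    "to_move": "white",
}

def global_features(pos):
    """View 1: whole-board facts such as side to move and material."""
    return {
        "to_move": pos["to_move"],
        "white_pieces": len(pos["white"]),
        "black_pieces": len(pos["black"]),
    }

def piece_features(pos):
    """View 2: the location of every piece on each side."""
    return {side: sorted(pos[side].items()) for side in ("white", "black")}

def square_features(pos):
    """View 3 (stub): the squares each piece attacks and defends
    would be derived here from the movement rules."""
    return {}

print(global_features(position))
print(piece_features(position))
```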

Lai trains his network with a carefully generated set of data taken from real chess games. This data set must have the correct distribution of positions. “For example, it doesn’t make sense to train the system on positions with three queens per side, because those positions virtually never come up in actual games,” he says.

It must also include a wide variety of unequal positions beyond those that usually occur in top-level chess games. That’s because although unequal positions rarely arise in real chess games, they crop up all the time in the searches that the computer performs internally.

And this data set must be huge. The massive number of connections inside a neural network has to be fine-tuned during training, and this can only be done with a vast dataset. Use a dataset that is too small and the network can settle into a state that fails to recognize the wide variety of patterns that occur in the real world.

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.
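That recipe, deriving roughly 35 training positions from each sampled one by playing a single random legal move, can be sketched as follows. The "move generator" here is a trivial stand-in, not a real chess engine:

```python
import random

random.seed(2)

def legal_moves(pos):
    """Trivial stand-in for a chess move generator: a 'position' is
    just a label and each 'move' appends a tag to it."""
    return [pos + f"/m{i}" for i in range(5)]

def augment(positions, copies=35):
    """Lai's recipe in miniature: derive many training positions from
    each sampled one by playing a single random legal move."""
    return [random.choice(legal_moves(p))
            for p in positions
            for _ in range(copies)]

sampled = [f"game{i}" for i in range(10)]  # stand-in for the 5M positions
training = augment(sampled)                # 5M * 35 = 175M in the paper
print(len(training))  # -> 350
```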

The usual way of training these machines is to manually evaluate every position and use this information to teach the machine to recognize those that are strong and those that are weak.

But this is a huge task for 175 million positions. It could be done by another chess engine but Lai’s goal was more ambitious. He wanted the machine to learn itself.

Instead, he used a bootstrapping technique in which Giraffe played against itself with the goal of improving its prediction of its own evaluation of a future position. That works because there are fixed reference points that ultimately determine the value of a position—whether the game is later won, lost or drawn.

In this way, the computer learns which positions are strong and which are weak.
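The bootstrapping idea is temporal-difference learning: pull the evaluation of each position toward the evaluation of the position that follows it, anchored by the final result. A bare-bones numeric sketch on a fixed toy game, rather than real self-play:

```python
# Evaluations for a toy four-position game, initialised to 0; the
# final position is a win, the fixed reference point with value 1.
values = [0.0, 0.0, 0.0, 1.0]
alpha = 0.5  # learning rate

for _ in range(50):  # replay the same game repeatedly
    for t in range(len(values) - 1):
        # Temporal-difference update: pull the evaluation of each
        # position toward the evaluation of the position after it.
        values[t] += alpha * (values[t + 1] - values[t])

print([round(v, 2) for v in values])  # -> [1.0, 1.0, 1.0, 1.0]
```

The win value propagates backwards through the game, so earlier positions gradually inherit the evaluation the outcome justifies; Giraffe does the analogous update on the weights of its network.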

Having trained Giraffe, the final step is to test it and here the results make for interesting reading. Lai tested his machine on a standard database called the Strategic Test Suite, which consists of 1,500 positions that are chosen to test an engine’s ability to recognize different strategic ideas. “For example, one theme tests the understanding of control of open files, another tests the understanding of how bishop and knight’s values change relative to each other in different situations, and yet another tests the understanding of center control,” he says.

The results of this test are scored out of 15,000.

Lai uses this to test the machine at various stages during its training. As the bootstrapping process begins, Giraffe quickly reaches a score of 6,000 and eventually peaks at 9,700 after only 72 hours. Lai says that matches the best chess engines in the world.

“[That] is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” he adds.

Lai goes on to use the same kind of machine learning approach to determine the probability that a given move is likely to be worth pursuing. That’s important because it prevents unnecessary searches down unprofitable branches of the tree and dramatically improves computational efficiency.

Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three ranking 70 percent of the time. So the computer doesn’t have to bother with the other moves.
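Searching only the moves the network rates as probable amounts to a filter applied before the recursive search descends. The move scores below are invented for illustration:

```python
def prune(scored_moves, top_k=3):
    """Keep only the top-k moves by predicted probability, so the
    search never descends into the remaining branches."""
    ranked = sorted(scored_moves, key=lambda m: m[1], reverse=True)
    return [move for move, _ in ranked[:top_k]]

# Invented probabilities for five candidate moves in some position.
moves = [("Nf3", 0.40), ("e4", 0.25), ("h4", 0.01),
         ("d4", 0.20), ("a3", 0.02)]
print(prune(moves))  # -> ['Nf3', 'e4', 'd4']
```

Because the pruning is applied at every level of the tree, even a modest cut per node compounds into a dramatic reduction in positions searched.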

That’s interesting work that represents a major change in the way chess engines work. It is not perfect, of course. One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.

But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

That’s still impressive. “Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” says Lai. “This is especially important in the opening and end game phases, where it plays exceptionally well.”

And this is only the start. Lai says it should be straightforward to apply the same approach to other games. One that stands out is the traditional Chinese game of Go, where humans still hold an impressive advantage over their silicon competitors. Perhaps Lai could have a crack at that next.

Thanks to Kebmodee for bringing this to the It’s Interesting community.