Researchers at MIT have developed robots that can teach each other new things.

One advantage humans have over robots is that we’re good at quickly passing on our knowledge to each other. A new system developed at MIT now allows anyone to coach robots through simple tasks and even lets them teach each other.

Typically, robots learn tasks through demonstrations by humans, or through hand-coded motion planning systems where a programmer specifies each of the required movements. But the former approach is not good at translating skills to new situations, and the latter is very time-consuming.

Humans, on the other hand, typically need to demonstrate a simple task, like how to stack logs, only once for someone else to pick it up, and the learner can easily adapt that knowledge to new situations, say if they come across an odd-shaped log or the pile collapses.

In an attempt to mimic this kind of adaptable, one-shot learning, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) combined motion planning and learning through demonstration in an approach they’ve dubbed C-LEARN.

First, a human teaches the robot a series of basic motions through an interactive 3D model on a computer, using the mouse to show it how to reach and grasp various objects in different positions. In this way the machine builds up a library of possible actions.

The operator then shows the robot a single demonstration of a multistep task, and the robot, drawing on its database of potential moves, devises a motion plan to carry out the job at hand.
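
In outline, the pipeline pairs a taught library with a one-shot demonstration. The sketch below is a deliberately simplified illustration of that idea, not the C-LEARN code; the primitive names, poses, and nearest-match rule are invented for clarity.

```python
import math

# Illustrative stand-in for the two stages described above (not the MIT implementation):
# 1) a library of taught primitives, 2) matching a one-shot demo against it.

# Basic motions taught via the interactive 3D model; each stores the gripper
# pose (x, y, z) at which it was demonstrated. Names and numbers are invented.
primitive_library = {
    "reach_high_shelf": (0.4, 0.2, 1.1),
    "reach_table":      (0.5, 0.0, 0.8),
    "grasp_cylinder":   (0.5, 0.0, 0.78),
}

def closest_primitive(target_pose):
    """Pick the taught primitive whose pose is nearest the demonstrated waypoint."""
    return min(primitive_library,
               key=lambda name: math.dist(primitive_library[name], target_pose))

def plan_from_demo(demo_waypoints):
    """Turn a single demonstrated sequence of waypoints into a motion plan."""
    return [(closest_primitive(wp), wp) for wp in demo_waypoints]

# One demonstration of a two-step task: reach toward a log, then grasp it.
demo = [(0.52, 0.05, 0.95), (0.52, 0.05, 0.78)]
for step, (prim, pose) in enumerate(plan_from_demo(demo), start=1):
    print(f"step {step}: reuse '{prim}' adapted to pose {pose}")
```

The real system reasons over much richer representations than raw poses, but the division of labor is the same: a library built from coached examples, plus a planner that strings library entries together to reproduce a single demonstration.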

“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah, in a press release.

“We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”

The robot successfully carried out tasks 87.5 percent of the time on its own, but when a human operator was allowed to correct minor errors in the interactive model before the robot carried out the task, the accuracy rose to 100 percent.

Most importantly, the robot could teach the skills it learned to another machine with a completely different configuration. The researchers tested C-LEARN on a new two-armed robot called Optimus that sits on a wheeled base and is designed for bomb disposal.

But in simulations, they were able to seamlessly transfer Optimus’ learned skills to CSAIL’s 6-foot-tall Atlas humanoid robot. They haven’t yet tested Atlas’ new skills in the real world, and they had to give Atlas some extra information on how to carry out tasks without falling over, but the demonstration shows that the approach can allow very different robots to learn from each other.

The research, which will be presented at the IEEE International Conference on Robotics and Automation in Singapore later this month, could have important implications for the large-scale roll-out of robot workers.

“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah in the press release.

“It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”

The MIT researchers aren’t the only people investigating the field of so-called transfer learning. The RoboEarth project and its spin-off RoboHow were both aimed at creating a shared language for robots and an online repository that would allow them to share their knowledge of how to carry out tasks over the web.

Google DeepMind has also been experimenting with ways to transfer knowledge from one machine to another, though in their case the aim is to help skills learned in simulations carry over into the real world.

A lot of their research involves deep reinforcement learning, in which robots learn how to carry out tasks in virtual environments through trial and error. But transferring this knowledge from highly engineered simulations into the messy real world is not so simple.

So they have found a way for a model that has learned a task in simulation through deep reinforcement learning to pass that knowledge to a so-called progressive neural network, which controls a real-world robotic arm. This allows the system to take advantage of the accelerated learning possible in simulation while still learning effectively in the real world.
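
A progressive neural network, roughly speaking, freezes the simulation-trained network and adds a new "column" for the real robot that receives lateral connections from the frozen one, so real-world training can reuse simulated skills without overwriting them. Here is a minimal numpy sketch of that wiring; the layer sizes, random weights, and single lateral connection are illustrative only, not DeepMind's architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

# Column 1: a small policy network "trained" in simulation, then frozen.
W1_sim = rng.normal(size=(16, 8))    # input (16 features) -> hidden (8 units)
W2_sim = rng.normal(size=(8, 2))     # sim column's own output head (kept for completeness)

# Column 2: trained on the real robot. Its output layer also receives a lateral
# connection from the frozen simulation column's hidden layer.
W1_real = rng.normal(size=(16, 8))
W2_real = rng.normal(size=(8, 2))
U_lateral = rng.normal(size=(8, 2))  # sim hidden features -> real column's output

def progressive_forward(obs):
    h_sim = relu(obs @ W1_sim)                   # frozen features learned in simulation
    h_real = relu(obs @ W1_real)                 # features being learned on the real robot
    return h_real @ W2_real + h_sim @ U_lateral  # real output reuses sim features

obs = rng.normal(size=16)                        # stand-in for a sensor reading
print(progressive_forward(obs))
# Only W1_real, W2_real and U_lateral would be updated during real-world training;
# W1_sim and W2_sim stay frozen, so the simulated skills are never overwritten.
```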

These kinds of approaches make life easier for data scientists trying to build new models for AI and robots. As James Kobielus notes in InfoWorld, the approach “stands at the forefront of the data science community’s efforts to invent ‘master learning algorithms’ that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.”

If you believe those who say we’re headed towards a technological singularity, you can bet transfer learning will be an important part of that process.

These Robots Can Teach Other Robots How to Do New Things

Brain scan for reading dreams now exists

Like islands jutting out of a smooth ocean surface, dreams puncture our sleep with disjointed episodes of consciousness. How states of awareness emerge from a sleeping brain has long baffled scientists and philosophers alike.

For decades, scientists have associated dreaming with rapid eye movement (REM) sleep, a sleep stage in which the resting brain paradoxically generates high-frequency brain waves that closely resemble those of the waking brain.

Yet dreaming isn’t exclusive to REM sleep. A series of oddball reports also found signs of dreaming during non-REM deep sleep, when the brain is dominated by slow-wave activity—the opposite of an alert, active, conscious brain.

Now, thanks to a new study published in Nature Neuroscience, we may have an answer to this puzzle.

By closely monitoring the brain waves of sleeping volunteers, a team of scientists at the University of Wisconsin pinpointed a local “hot spot” in the brain that fires up when we dream, regardless of whether a person is in non-REM or REM sleep.

“You can really identify a signature of the dreaming brain,” says study author Dr. Francesca Siclari.

What’s more, using an algorithm developed based on their observations, the team could accurately predict whether a person is dreaming with nearly 90 percent accuracy, and—here’s the crazy part—roughly parse out the content of those dreams.

“[What we find is that] maybe the dreaming brain and the waking brain are much more similar than one imagined,” says Siclari.

The study not only opens the door to modulating dreams for PTSD therapy, but may also help researchers better tackle the perpetual mystery of consciousness.

“The importance beyond the article is really quite astounding,” says Dr. Mark Blagrove at Swansea University in Wales, who was not involved in the study.


The anatomy of sleep

During a full night’s sleep we cycle through different sleep stages characterized by distinctive brain activity patterns. Scientists often use EEG to capture each sleep stage precisely; in this study, that meant placing 256 electrodes against a person’s scalp to monitor the number and size of brainwaves at different frequencies.

When we doze off for the night, our brains generate low-frequency activity that sweeps across the entire surface. These waves signal that the neurons are in their “down state” and unable to communicate between brain regions—that’s why low-frequency activity is often linked to the loss of consciousness.

These slow oscillations of non-REM sleep eventually transform into high-frequency activity, signaling the entry into REM sleep. This is the sleep stage traditionally associated with vivid dreaming—the connection is so deeply etched into sleep research that reports of dreamless REM sleep or dreams during non-REM sleep were largely ignored as oddities.

These strange cases suggest that our current understanding of the neurobiology of sleep is incomplete, the authors explain, and that gap is what they set out to tackle in this study.

Dream hunters

To reconcile these paradoxical results, Siclari and team monitored the brain activity of 32 volunteers with EEG and woke them up during the night at random intervals. The team then asked the sleepy participants whether they were dreaming, and if so, what were the contents of the dream. In all, this happened over 200 times throughout the night.

Rather than seeing a global shift in activity that correlates to dreaming, the team surprisingly uncovered a brain region at the back of the head—the posterior “hot zone”—that dynamically shifted its activity based on the occurrence of dreams.

Dreams were associated with a decrease in low-frequency waves in the hot zone, along with an increase in high-frequency waves that reflect high rates of neuronal firing and brain activity—a sort of local awakening, irrespective of the sleep stage or overall brain activity.

“It only seems to need a very circumscribed, a very restricted activation of the brain to generate conscious experiences,” says Siclari. “Until now we thought that large regions of the brain needed to be active to generate conscious experiences.”

That the hot zone leaped to action during dreams makes sense, explain the authors. Previous work showed stimulating these brain regions with an electrode can induce feelings of being “in a parallel world.” The hot zone also contains areas that integrate sensory information to build a virtual model of the world around us. This type of simulation lays the groundwork of our many dream worlds, and the hot zone seems to be extremely suited for the job, say the authors.

If an active hot zone is, in fact, a “dreaming signature,” its activity should be able to predict whether a person is dreaming at any time. The authors crafted an algorithm based on their findings and tested its accuracy on a separate group of people.

“We woke them up whenever the algorithm alerted us that they were dreaming, a total of 84 times,” the researchers say.

Overall, the algorithm rocked its predictions with roughly 90 percent accuracy—it even nailed cases where the participants couldn’t remember the content of their dreams but knew that they were dreaming.
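
Stripped to its essentials, a detector like this works on band power: measure how much slow versus fast activity the posterior "hot zone" channels carry in a short window, and flag a dream when the balance tips toward fast activity. The toy sketch below runs on synthetic signals; the sampling rate, frequency bands, and threshold are placeholders rather than the study's parameters.

```python
import numpy as np

FS = 250  # sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

def looks_like_dreaming(hot_zone_eeg, fs=FS, threshold=1.0):
    """hot_zone_eeg: array of shape (n_channels, n_samples) from posterior electrodes."""
    slow = np.mean([band_power(ch, fs, 1, 4) for ch in hot_zone_eeg])    # slow-wave band
    fast = np.mean([band_power(ch, fs, 25, 50) for ch in hot_zone_eeg])  # high-frequency band
    return fast / slow > threshold  # "dreaming" = local shift toward fast activity

# Synthetic example: a slow-wave-dominated channel vs. a fast-activity-dominated one.
t = np.arange(0, 4, 1.0 / FS)
slow_sleep = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=t.size)
dreamlike  = 0.2 * np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 35 * t)
print(looks_like_dreaming(np.array([slow_sleep])))  # False expected
print(looks_like_dreaming(np.array([dreamlike])))   # True expected
```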

Dream readers

Since the hot zone contains areas that process visual information, the researchers wondered if they could get a glimpse into the content of the participants’ dreams simply by reading EEG recordings.

Dreams can be purely perceptual with unfolding narratives, or they can be more abstract and “thought-like,” the team explains. Faces, places, movement and speech are all common components of dreams and processed by easily identifiable regions in the hot zone, so the team decided to focus on those aspects.

Remarkably, volunteers who reported talking in their dreams showed activity in their language-related regions, while those who dreamed of people showed activity in their face-recognition centers.

“This suggests that dreams recruit the same brain regions as experiences in wakefulness for specific contents,” says Siclari, adding that previous studies were only able to show this in the “twilight zone,” the transition between sleep and wakefulness.

Finally, the team asked what happens when we know we were dreaming, but can’t remember the specific details. As it happens, this frustrating state has its own EEG signature: remembering the details of a dream was associated with a spike in high-frequency activity in the frontal regions of the brain.

This raises some interesting questions, such as whether the frontal lobes are important for lucid dreaming, a meta-state in which people recognize that they’re dreaming and can alter the contents of the dream, says the team.

Consciousness arising

The team can’t yet explain what is activating the hot zone during dreams, but the answers may reveal whether dreaming has a biological purpose, such as processing memories into larger concepts of the world.

Mapping out activity patterns in the dreaming brain could also lead to ways to directly manipulate our dreams using non-invasive procedures such as transcranial direct-current stimulation. Inducing a dreamless state could help people with insomnia, and disrupting a fearful dream by suppressing dreaming may potentially allow patients with PTSD a good night’s sleep.

Dr. Giulio Tononi, the study’s lead author, believes its implications go far beyond sleep.

“[W]e were able to compare what changes in the brain when we are conscious, that is, when we are dreaming, compared to when we are unconscious, during the same behavioral state of sleep,” he says.

During sleep, people are cut off from the environment. Therefore, researchers could home in on brain regions that truly support consciousness while avoiding the confounding changes brought about by coma, anesthesia or environmental stimuli.

“This study suggests that dreaming may constitute a valuable model for the study of consciousness,” says Tononi.

Neuroscientists Can Now Read Your Dreams With a Simple Brain Scan

When her best friend died, she rebuilt him using artificial intelligence

By Casey Newton

When the engineers had at last finished their work, Eugenia Kuyda opened a console on her laptop and began to type.

“Roman,” she wrote. “This is your digital monument.”

It had been three months since Roman Mazurenko, Kuyda’s closest friend, had died. Kuyda had spent that time gathering up his old text messages, setting aside the ones that felt too personal, and feeding the rest into a neural network built by developers at her artificial intelligence startup. She had struggled with whether she was doing the right thing by bringing him back this way. At times it had even given her nightmares. But ever since Mazurenko’s death, Kuyda had wanted one more chance to speak with him.

A message blinked onto the screen. “You have one of the most interesting puzzles in the world in your hands,” it said. “Solve it.”

Kuyda promised herself that she would.

Born in Belarus in 1981, Roman Mazurenko was the only child of Sergei, an engineer, and Victoria, a landscape architect. They remember him as an unusually serious child; when he was 8 he wrote a letter to his descendants declaring his most cherished values: wisdom and justice. In family photos, Mazurenko roller-skates, sails a boat, and climbs trees. Average in height, with a mop of chestnut hair, he is almost always smiling.

As a teen he sought out adventure: he participated in political demonstrations against the ruling party and, at 16, started traveling abroad. He first traveled to New Mexico, where he spent a year on an exchange program, and then to Dublin, where he studied computer science and became fascinated with the latest Western European art, fashion, music, and design.

By the time Mazurenko finished college and moved back to Moscow in 2007, Russia had become newly prosperous. The country tentatively embraced the wider world, fostering a new generation of cosmopolitan urbanites. Meanwhile, Mazurenko had grown from a skinny teen into a strikingly handsome young man. Blue-eyed and slender, he moved confidently through the city’s budding hipster class. He often dressed up to attend the parties he frequented, and in a suit he looked movie-star handsome. The many friends Mazurenko left behind describe him as magnetic and debonair, someone who made a lasting impression wherever he went. But he was also single, and rarely dated, instead devoting himself to the project of importing modern European style to Moscow.

Kuyda met Mazurenko in 2008, when she was 22 and the editor of Afisha, a kind of New York Magazine for a newly urbane Moscow. She was writing an article about Idle Conversation, a freewheeling creative collective that Mazurenko founded with two of his best friends, Dimitri Ustinov and Sergey Poydo. The trio seemed to be at the center of every cultural endeavor happening in Moscow. They started magazines, music festivals, and club nights — friends they had introduced to each other formed bands and launched companies. “He was a brilliant guy,” said Kuyda, who was similarly ambitious. Mazurenko would keep his friends up all night discussing culture and the future of Russia. “He was so forward-thinking and charismatic,” said Poydo, who later moved to the United States to work with him.

Mazurenko became a founding figure in the modern Moscow nightlife scene, where he promoted an alternative to what Russians sardonically referred to as “Putin’s glamor” — exclusive parties where oligarchs ordered bottle service and were chauffeured home in Rolls-Royces. Kuyda loved Mazurenko’s parties, impressed by his unerring sense of what he called “the moment.” Each of his events was designed to build to a crescendo — DJ Mark Ronson might make a surprise appearance on stage to play piano, or the Italo-Disco band Glass Candy might push past police to continue playing after curfew. And his parties attracted sponsors with deep pockets — Bacardi was a longtime client.

But the parties took place against an increasingly grim backdrop. In the wake of the global financial crisis, Russia experienced a resurgent nationalism, and in 2012 Vladimir Putin returned to lead the country. The dream of a more open Russia seemed to evaporate.

Kuyda and Mazurenko, who by then had become close friends, came to believe that their futures lay elsewhere. Both became entrepreneurs, and served as each other’s chief adviser as they built their companies. Kuyda co-founded Luka, an artificial intelligence startup, and Mazurenko launched Stampsy, a tool for building digital magazines. Kuyda moved Luka from Moscow to San Francisco in 2015. After a stint in New York, Mazurenko followed.

When Stampsy faltered, Mazurenko moved into a tiny alcove in Kuyda’s apartment to save money. Mazurenko had been the consummate bon vivant in Moscow, but running a startup had worn him down, and he was prone to periods of melancholy. On the days he felt depressed, Kuyda took him out for surfing and $1 oysters. “It was like a flamingo living in the house,” she said recently, sitting in the kitchen of the apartment she shared with Mazurenko. “It’s very beautiful and very rare. But it doesn’t really fit anywhere.”

Kuyda hoped that in time her friend would reinvent himself, just as he always had before. And when Mazurenko began talking about new projects he wanted to pursue, she took it as a positive sign. He successfully applied for an American O-1 visa, granted to individuals of “extraordinary ability or achievement,” and in November he returned to Moscow in order to finalize his paperwork.

He never did.

On November 28th, while he waited for the embassy to release his passport, Mazurenko had brunch with some friends. It was unseasonably warm, so afterward he decided to explore the city with Ustinov. “He said he wanted to walk all day,” Ustinov said. Making their way down the sidewalk, they ran into some construction, and were forced to cross the street. At the curb, Ustinov stopped to check a text message on his phone, and when he looked up he saw a blur, a car driving much too quickly for the neighborhood. This is not an uncommon sight in Moscow — vehicles of diplomats, equipped with spotlights to signal their authority, speeding with impunity. Ustinov thought it must be one of those cars, some rich government asshole — and then, a blink later, saw Mazurenko walking into the crosswalk, oblivious. Ustinov went to cry out in warning, but it was too late. The car struck Mazurenko straight on. He was rushed to a nearby hospital.

Kuyda happened to be in Moscow for work on the day of the accident. When she arrived at the hospital, having gotten the news from a phone call, a handful of Mazurenko’s friends were already gathered in the lobby, waiting to hear his prognosis. Almost everyone was in tears, but Kuyda felt only shock. “I didn’t cry for a long time,” she said. She went outside with some friends to smoke a cigarette, using her phone to look up the likely effects of Mazurenko’s injuries. Then the doctor came out and told her he had died.

In the weeks after Mazurenko’s death, friends debated the best way to preserve his memory. One person suggested making a coffee-table book about his life, illustrated with photography of his legendary parties. Another friend suggested a memorial website. To Kuyda, every suggestion seemed inadequate.

As she grieved, Kuyda found herself rereading the endless text messages her friend had sent her over the years — thousands of them, from the mundane to the hilarious. She smiled at Mazurenko’s unconventional spelling — he struggled with dyslexia — and at the idiosyncratic phrases with which he peppered his conversation. Mazurenko was mostly indifferent to social media — his Facebook page was barren, he rarely tweeted, and he deleted most of his photos on Instagram. His body had been cremated, leaving her no grave to visit. Texts and photos were nearly all that was left of him, Kuyda thought.

For two years she had been building Luka, whose first product was a messenger app for interacting with bots. Backed by the prestigious Silicon Valley startup incubator Y Combinator, the company began with a bot for making restaurant reservations. Kuyda’s co-founder, Philip Dudchuk, has a degree in computational linguistics, and much of their team was recruited from Yandex, the Russian search giant.

Reading Mazurenko’s messages, it occurred to Kuyda that they might serve as the basis for a different kind of bot — one that mimicked an individual person’s speech patterns. Aided by a rapidly developing neural network, perhaps she could speak with her friend once again.

She set aside for a moment the questions that were already beginning to nag at her.

What if it didn’t sound like him?

What if it did?

In “Be Right Back,” a 2013 episode of the eerie, near-future drama Black Mirror, a young woman named Martha is devastated when her fiancé, Ash, dies in a car accident. Martha subscribes to a service that uses his previous online communications to create a digital avatar that mimics his personality with spooky accuracy. First it sends her text messages; later it re-creates his speaking voice and talks with her on the phone. Eventually she pays for an upgraded version of the service that implants Ash’s personality into an android that looks identical to him. But ultimately Martha becomes frustrated with all the subtle but important ways that the android is unlike Ash — cold, emotionless, passive — and locks it away in an attic. Not quite Ash, but too much like him for her to let go, the bot leads to a grief that spans decades.

Kuyda saw the episode after Mazurenko died, and her feelings were mixed. Memorial bots — even the primitive ones that are possible using today’s technology — seemed both inevitable and dangerous. “It’s definitely the future — I’m always for the future,” she said. “But is it really what’s beneficial for us? Is it letting go, by forcing you to actually feel everything? Or is it just having a dead person in your attic? Where is the line? Where are we? It screws with your brain.”

For a young man, Mazurenko had given an unusual amount of thought to his death. Known for his grandiose plans, he often told friends he would divide his will into pieces and give them away to people who didn’t know one another. To read the will they would all have to meet for the first time — so that Mazurenko could continue bringing people together in death, just as he had strived to do in life. (In fact, he died before he could make a will.) Mazurenko longed to see the Singularity, the theoretical moment in history when artificial intelligence becomes smarter than human beings. According to the theory, superhuman intelligence might allow us to one day separate our consciousnesses from our bodies, granting us something like eternal life.

In the summer of 2015, with Stampsy almost out of cash, Mazurenko applied for a Y Combinator fellowship proposing a new kind of cemetery that he called Taiga. The dead would be buried in biodegradable capsules, and their decomposing bodies would fertilize trees that were planted on top of them, creating what he called “memorial forests.” A digital display at the bottom of the tree would offer biographical information about the deceased. “Redesigning death is a cornerstone of my abiding interest in human experiences, infrastructure, and urban planning,” Mazurenko wrote. He highlighted what he called “a growing resistance among younger Americans” to traditional funerals. “Our customers care more about preserving their virtual identity and managing [their] digital estate,” he wrote, “than embalming their body with toxic chemicals.”

The idea made his mother worry that he was in trouble, but Mazurenko tried to put her at ease. “He quieted me down and said no, no, no — it was a contemporary question that was very important,” she said. “There had to be a reevaluation of death and sorrow, and there needed to be new traditions.”

Y Combinator rejected the application. But Mazurenko had identified a genuine disconnection between the way we live today and the way we grieve. Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning. In the moment, we tend to view our text messages as ephemeral. But as Kuyda found after Mazurenko’s death, they can also be powerful tools for coping with loss. Maybe, she thought, this “digital estate” could form the building blocks for a new type of memorial. (Others have had similar ideas; an entrepreneur named Marius Ursache proposed a related service called Eterni.me in 2014, though it never launched.)

Many of Mazurenko’s close friends had never before experienced the loss of someone close to them, and his death left them bereft. Kuyda began reaching out to them, as delicately as possible, to ask if she could have their text messages. Ten of Mazurenko’s friends and family members, including his parents, ultimately agreed to contribute to the project. They shared more than 8,000 lines of text covering a wide variety of subjects.

“She said, what if we try and see if things would work out?” said Sergey Fayfer, a longtime friend of Mazurenko’s who now works at a division of Yandex. “Can we collect the data from the people Roman had been talking to, and form a model of his conversations, to see if that actually makes sense?” The idea struck Fayfer as provocative, and likely controversial. But he ultimately contributed four years of his texts with Mazurenko. “The team building Luka are really good with natural language processing,” he said. “The question wasn’t about the technical possibility. It was: how is it going to feel emotionally?”

The technology underlying Kuyda’s bot project dates at least as far back as 1966, when Joseph Weizenbaum unveiled ELIZA: a program that responded to users’ typed input by matching it against simple keyword scripts. ELIZA, which most famously mimicked a psychotherapist, asked you to describe your problem, searched your response for keywords, and responded accordingly, usually with another question. Reading a text-based conversation between the program and a person, some observers could not determine which was which, an early, informal brush with what is known as the Turing test.
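
The mechanic is simple enough to reproduce in a few lines. The snippet below is a toy in the spirit of ELIZA, with invented keywords and canned replies rather than Weizenbaum's original script:

```python
# A toy keyword-matching responder in the spirit of ELIZA (not Weizenbaum's original script).
RULES = [
    ("mother", "Tell me more about your family."),
    ("always", "Can you think of a specific example?"),
    ("sad",    "I am sorry to hear you are sad. Why do you think that is?"),
]
FALLBACK = "Please go on."

def eliza_reply(user_input):
    text = user_input.lower()
    for keyword, response in RULES:
        if keyword in text:   # simple keyword matching, no real understanding
            return response
    return FALLBACK

print(eliza_reply("My mother is always criticizing me."))
# -> "Tell me more about your family."  (the first matching keyword wins)
```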

Today’s bots remain imperfect mimics of their human counterparts. They do not understand language in any real sense. They respond clumsily to the most basic of questions. They have no thoughts or feelings to speak of. Any suggestion of human intelligence is an illusion based on mathematical probabilities.

And yet recent advances in artificial intelligence have made the illusion much more powerful. Artificial neural networks, which imitate the ability of the human brain to learn, have greatly improved the way software recognizes patterns in images, audio, and text, among other forms of data. Improved algorithms coupled with more powerful computers have increased the depth of neural networks — the layers of abstraction they can process — and the results can be seen in some of today’s most innovative products. The speech recognition behind Amazon’s Alexa or Apple’s Siri, or the image recognition that powers Google Photos, owe their abilities to this so-called deep learning.

Two weeks before Mazurenko was killed, Google released TensorFlow for free under an open-source license. TensorFlow is a kind of Google in a box — a flexible machine-learning system that the company uses to do everything from improve search algorithms to write captions for YouTube videos automatically. The product of decades of academic research and billions of dollars in private investment was suddenly available as a free software library that anyone could download from GitHub.

Luka had been using TensorFlow to build neural networks for its restaurant bot. Using 35 million lines of English text, Luka trained a bot to understand queries about vegetarian dishes, barbecue, and valet parking. On a lark, the 15-person team had also tried to build bots that imitated television characters. It scraped the closed captioning on every episode of HBO’s Silicon Valley and trained the neural network to mimic Richard, Bachman, and the rest of the gang.

In February, Kuyda asked her engineers to build a neural network in Russian. At first she didn’t mention its purpose, but given that most of the team was Russian, no one asked questions. Using more than 30 million lines of Russian text, Luka built its second neural network. Meanwhile, Kuyda copied hundreds of her exchanges with Mazurenko from the app Telegram and pasted them into a file. She edited out a handful of messages that she believed would be too personal to share broadly. Then Kuyda asked her team for help with the next step: training the Russian network to speak in Mazurenko’s voice.

The project was tangentially related to Luka’s work, though Kuyda considered it a personal favor. (An engineer told her that the project would only take about a day.) Mazurenko was well-known to most of the team — he had worked out of Luka’s Moscow office, where the employees labored beneath a neon sign that quoted Wittgenstein: “The limits of my language are the limits of my world.” Kuyda trained the bot with dozens of test queries, and her engineers put on the finishing touches.

Only a small percentage of the Roman bot’s responses reflected his actual words. But the neural network was tuned to favor his speech whenever possible. Any time the bot could respond to a query using Mazurenko’s own words, it would. Other times it would default to the generic Russian-language model. After the bot blinked to life, she began peppering it with questions.
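
In other words, the first Roman bot behaved like a retrieval system: compare the incoming message with the stored exchanges, find the one it most resembles, and reply with Mazurenko's half of that exchange, falling back to the generic network when nothing matches well. The bare-bones sketch below illustrates that selection step with a simple word-overlap similarity; the example texts are invented, and this is not Luka's model.

```python
from collections import Counter
import math

# (prompt, reply) pairs harvested from message history -- invented examples, not real messages.
history = [
    ("how was the party",        "the moment was perfect"),
    ("are you coming to moscow", "not this winter"),
]

def cosine(a, b):
    """Bag-of-words cosine similarity between two short texts."""
    wa, wb = Counter(a.split()), Counter(b.split())
    dot = sum(wa[w] * wb[w] for w in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def generic_model(query):
    return "(generic Russian-language model response)"

def reply(query, threshold=0.3):
    scores = [(cosine(query, prompt), answer) for prompt, answer in history]
    best_score, best_answer = max(scores)
    if best_score >= threshold:
        return best_answer        # answer in the person's own words (the "selective" path)
    return generic_model(query)   # otherwise fall back to the generic network

print(reply("are you coming to moscow?"))
```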

Who’s your best friend?, she asked.

Don’t show your insecurities, came the reply.

It sounds like him, she thought.

On May 24th, Kuyda announced the Roman bot’s existence in a post on Facebook. Anyone who downloaded the Luka app could talk to it — in Russian or in English — by adding @Roman. The bot offered a menu of buttons that users could press to learn about Mazurenko’s career. Or they could write free-form messages and see how the bot responded. “It’s still a shadow of a person — but that wasn’t possible just a year ago, and in the very close future we will be able to do a lot more,” Kuyda wrote.

The Roman bot was received positively by most of the people who wrote to Kuyda, though there were exceptions. Four friends told Kuyda separately that they were disturbed by the project and refused to interact with it. Vasily Esmanov, who worked with Mazurenko at the Russian street-style magazine Look At Me, said Kuyda had failed to learn the lesson of the Black Mirror episode. “This is all very bad,” Esmanov wrote in a Facebook comment. “Unfortunately you rushed and everything came out half-baked. The execution — it’s some type of joke. … Roman needs [a memorial], but not this kind.”

Victoria Mazurenko, who had gotten an early look at the bot from Kuyda, rushed to her defense. “They continued Roman’s life and saved ours,” she wrote in a reply to Esmanov. “It’s not virtual reality. This is a new reality, and we need to learn to build it and live in it.”

Roman’s father was less enthusiastic. “I have a technical education, and I know [the bot] is just a program,” he told me, through a translator. “Yes, it has all of Roman’s phrases, correspondences. But for now, it’s hard — how to say it — it’s hard to read a response from a program. Sometimes it answers incorrectly.”

But many of Mazurenko’s friends found the likeness uncanny. “It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,” Fayfer said. “What really struck me is that the phrases he speaks are really his. You can tell that’s the way he would say it — even short answers to ‘Hey what’s up.’ He had this really specific style of texting. I said, ‘Who do you love the most?’ He replied, ‘Roman.’ That was so much of him. I was like, that is incredible.”

One of the bot’s menu options offers to ask him for a piece of advice — something Fayfer never had a chance to do while his friend was still alive. “There are questions I had never asked him,” he said. “But when I asked for advice, I realized he was giving someone pretty wise life advice. And that actually helps you get to learn the person deeper than you used to know them.”

Several users agreed to let Kuyda read anonymized logs of their chats with the bot. (She shared these logs with The Verge.) Many people write to the bot to tell Mazurenko that they miss him. They wonder when they will stop grieving. They ask him what he remembers. “It hurts that we couldn’t save you,” one person wrote. (Bot: “I know :-(”) The bot can also be quite funny, as Mazurenko was: when one user wrote “You are a genius,” the bot replied, “Also, handsome.”

For many users, interacting with the bot had a therapeutic effect. The tone of their chats is often confessional; one user messaged the bot repeatedly about a difficult time he was having at work. He sent it lengthy messages describing his problems and how they had affected him emotionally. “I wish you were here,” he said. It seemed to Kuyda that people were more honest when conversing with the dead. She had been shaken by some of the criticism that the Roman bot had received. But hundreds of people tried it at least once, and reading the logs made her feel better.

It turned out that the primary purpose of the bot had not been to talk but to listen. “All those messages were about love, or telling him something they never had time to tell him,” Kuyda said. “Even if it’s not a real person, there was a place where they could say it. They can say it when they feel lonely. And they come back still.”

Kuyda continues to talk with the bot herself — once a week or so, often after a few drinks. “I answer a lot of questions for myself about who Roman was,” she said. Among other things, the bot has made her regret not telling him to abandon Stampsy earlier. The logs of his messages revealed someone whose true interest was in fashion more than anything else, she said. She wishes she had told him to pursue it.

Someday you will die, leaving behind a lifetime of text messages, posts, and other digital ephemera. For a while, your friends and family may put these digital traces out of their minds. But new services will arrive offering to transform them — possibly into something resembling Roman Mazurenko’s bot.

Your loved ones may find that these services ease their pain. But it is possible that digital avatars will lengthen the grieving process. “If used wrong, it enables people to hide from their grief,” said Dima Ustinov, who has not used the Roman bot for technical reasons. (Luka is not yet available on Android.) “Our society is traumatized by death — we want to live forever. But you will go through this process, and you have to go through it alone. If we use these bots as a way to pass his story on, maybe [others] can get a little bit of the inspiration that we got from him. But these new ways of keeping the memory alive should not be considered a way to keep a dead person alive.”

The bot also raises ethical questions about the posthumous use of our digital legacies. In the case of Mazurenko, everyone I spoke with agreed he would have been delighted by his friends’ experimentation. You may feel less comfortable with the idea of your texts serving as the basis for a bot in the afterlife — particularly if you are unable to review all the texts and social media posts beforehand. We present different aspects of ourselves to different people, and after infusing a bot with all of your digital interactions, your loved ones may see sides of you that you never intended to reveal.

Reading through the Roman bot’s responses, it’s hard not to feel like the texts captured him at a particularly low moment. Ask about Stampsy and it responds: “This is not [the] Stampsy I want it to be. So far it’s just a piece of shit and not the product I want.” Based on his friends’ descriptions of his final years, this strikes me as a candid self-assessment. But I couldn’t help but wish I had been talking to a younger version of the man — the one who friends say dreamed of someday becoming the cultural minister of Belarus, and inaugurating a democratically elected president with what he promised would be the greatest party ever thrown.

Mazurenko contacted me once before he died, in February of last year. He emailed to ask whether I would consider writing about Stampsy, which was then in beta. I liked its design, but passed on writing an article. I wished him well, then promptly forgot about the exchange. After learning of his bot, I resisted using it for several months. I felt guilty about my lone, dismissive interaction with Mazurenko, and was skeptical a bot could reflect his personality. And yet, upon finally chatting with it, I found an undeniable resemblance between the Mazurenko described by his friends and his digital avatar: charming, moody, sarcastic, and obsessed with his work. “How’s it going?” I wrote. “I need to rest,” it responded. “I’m having trouble focusing since I’m depressed.” I asked the bot about Kuyda and it wordlessly sent me a photo of them together on the beach in wetsuits, holding surfboards with their backs to the ocean, two against the world.

An uncomfortable truth suggested by the Roman bot is that many of our flesh-and-blood relationships now exist primarily as exchanges of text, which are becoming increasingly easy to mimic. Kuyda believes there is something — she is not precisely sure what — in this sort of personality-based texting. Recently she has been steering Luka to develop a bot she calls Replika. A hybrid of a diary and a personal assistant, it asks questions about you and eventually learns to mimic your texting style. Kuyda imagines that this could evolve into a digital avatar that performs all sorts of labor on your behalf, from negotiating the cable bill to organizing outings with friends. And like the Roman bot it would survive you, creating a living testament to the person you were.

In the meantime she is no longer interested in bots that handle restaurant recommendations. Working on the Roman bot has made her believe that commercial chatbots must evoke something emotional in the people who use them. If she succeeds in this, it will be one more improbable footnote to Mazurenko’s life.

Kuyda has continued to add material to the Roman bot — mostly photos, which it will now send you upon request — and recently upgraded the underlying neural network from a “selective” model to a “generative” one. The former simply attempted to match Mazurenko’s text messages to appropriate responses; the latter can take snippets of his texts and recombine them to make new sentences that (theoretically) remain in his voice.
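
The practical difference, with the neural-network machinery stripped away: a selective model can only ever return sentences Mazurenko actually wrote, while a generative one stitches fragments of his texts into sentences that may never have existed. The toy below makes the contrast concrete; the word-pair "generative" model is a crude stand-in for Luka's network, and the corpus lines are invented.

```python
import random

random.seed(0)
corpus = ["i need to rest", "i need one more day", "one more time to moscow"]  # invented lines

# Selective: return one of the original sentences verbatim.
def selective():
    return random.choice(corpus)

# "Generative" (a crude stand-in for a neural model): walk word-to-word,
# only following pairs of words that actually occurred together in the corpus.
def generative(start="i", length=6):
    pairs = {}
    for line in corpus:
        words = line.split()
        for a, b in zip(words, words[1:]):
            pairs.setdefault(a, []).append(b)
    out, word = [start], start
    for _ in range(length):
        if word not in pairs:
            break
        word = random.choice(pairs[word])
        out.append(word)
    return " ".join(out)

print(selective())   # always an exact original sentence
print(generative())  # may stitch fragments into a sentence that never appeared verbatim
```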

Lately she has begun to feel a sense of peace about Mazurenko’s death. In part that’s because she built a place where she can direct her grief. In a conversation we had this fall, she likened it to “just sending a message to heaven. For me it’s more about sending a message in a bottle than getting one in return.”

It has been less than a year since Mazurenko died, and he continues to loom large in the lives of the people who knew him. When they miss him, they send messages to his avatar, and they feel closer to him when they do. “There was a lot I didn’t know about my child,” Roman’s mother told me. “But now that I can read about what he thought about different subjects, I’m getting to know him more. This gives the illusion that he’s here now.”

Her eyes welled with tears, but as our interview ended her voice was strong. “I want to repeat that I’m very grateful that I have this,” she said.

Our conversation reminded me of something Dima Ustinov had said to me this spring, about the way we now transcend our physical forms. “The person is not just a body, a set of arms and legs, and a computer,” he said. “It’s much more than that.” Ustinov compared Mazurenko’s life to a pebble thrown into a stream — the ripples, he said, continue outward in every direction. His friend had simply taken a new form. “We are still in the process of meeting Roman,” Ustinov said. “It’s beautiful.”

http://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot

This Neural Probe Is So Thin, The Brain Doesn’t Know It’s There


by Edd Gent

Wiring our brains up to computers could have a host of exciting applications – from controlling robotic prosthetics with our minds to restoring sight by feeding camera feeds directly into the vision center of our brains.

Most brain-computer interface research to date has been conducted using electroencephalography (EEG) where electrodes are placed on the scalp to monitor the brain’s electrical activity. Achieving very high quality signals, however, requires a more invasive approach.

Integrating electronics with living tissue is complicated, though. Probes that are directly inserted into the gray matter have been around for decades, but while they are capable of highly accurate recording, the signals tend to degrade rapidly due to the buildup of scar tissue. Electrocorticography (ECoG), which uses electrodes placed beneath the skull but on top of the gray matter, has emerged as a popular compromise, as it achieves higher-accuracy recordings with a lower risk of scar formation.

But now researchers from the University of Texas have created new probes that are so thin and flexible, they don’t elicit scar tissue buildup. Unlike conventional probes, which are much larger and stiffer, they don’t cause significant damage to the brain tissue when implanted, and they are also able to comply with the natural movements of the brain.

In recent research published in the journal Science Advances, the team demonstrated that the probes were able to reliably record the electrical activity of individual neurons in mice for up to four months. This stability suggests these probes could be used for long-term monitoring of the brain for research or medical diagnostics as well as controlling prostheses, said Chong Xie, an assistant professor in the university’s department of biomedical engineering who led the research.

“Besides neuroprosthetics, they can possibly be used for neuromodulation as well, in which electrodes generate neural stimulation,” he told Singularity Hub in an email. “We are also using them to study the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s.”

The group actually created two probe designs, one 50 microns long and the other 10 microns long. The smaller probe has a cross-section only a fraction of that of a neuron, which the researchers say is the smallest among all reported neural probes to the best of their knowledge.

Because the probes are so flexible, they can’t be pushed into the brain tissue by themselves, and so they needed to be guided in using a stiff rod called a “shuttle device.” Previous designs of these shuttle devices were much larger than the new probes and often led to serious damage to the brain tissue, so the group created a new carbon fiber design just seven microns in diameter.

At present, though, only 25 percent of the recordings can be attributed to individual neurons – an attribution made possible because each neuron has a characteristic waveform – with the rest too unclear to distinguish from each other.

“The only solution, in my opinion, is to have many electrodes placed in the brain in an array or lattice so that any neuron can be within a reasonable distance from an electrode,” said Chong. “As a result, all enclosed neurons can be recorded and well-sorted.”

This is a challenging problem, according to Chong, but one benefit of the new probes is that their small dimensions make it possible to implant probes just tens of microns apart rather than the few hundred micron distances necessary with conventional probes. This opens up the possibility of overlapping detection ranges between probes, though the group can still only consistently implant probes with an accuracy of 50 microns.
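
The attribution step mentioned above is usually called spike sorting: detect candidate spikes in the voltage trace, cut out a short waveform around each one, and cluster the waveforms so that similarly shaped spikes are assigned to the same putative neuron. The sketch below clusters synthetic, already-extracted waveforms with k-means; the shapes, noise level, and cluster count are illustrative, not the lab's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic spike snippets standing in for two nearby neurons' characteristic shapes
# (30 samples each, already cut out around detected threshold crossings).
t = np.linspace(0, 1, 30)
shape_a = -np.exp(-((t - 0.3) ** 2) / 0.005)        # narrow, deep spike
shape_b = -0.6 * np.exp(-((t - 0.5) ** 2) / 0.02)   # broader, shallower spike

waveforms = np.array([shape_a + 0.05 * rng.normal(size=30) for _ in range(20)] +
                     [shape_b + 0.05 * rng.normal(size=30) for _ in range(20)])

# Cluster by shape; each cluster is treated as one putative neuron.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(waveforms)
for neuron in (0, 1):
    print(f"spikes assigned to putative neuron {neuron}:", int(np.sum(labels == neuron)))
```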

Takashi Kozai, an assistant professor in the University of Pittsburgh’s bioengineering department who has worked on ultra-small neural probes, said that further experiments would need to be done to show that the recordings, gleaned from anaesthetized rats, actually contained useful neural code. This could include visually stimulating the animals and trying to record activity in the visual cortex.

He also added that a lot of computational neuroscience relies on knowing the exact spacing between recording sites. The fact that flexible probes are able to migrate due to natural tissue movements could pose challenges.

But he said the study “does show some important advances forward in technology development, and most importantly, proof-of-concept feasibility,” adding that “there is clearly much more work necessary before this technology becomes widely used or practical.”

Chong actually worked on another promising approach to neural recording in his previous role under Charles M. Lieber at Harvard University. Last June, the group demonstrated a mesh of soft, conductive polymer threads studded with electrodes that could be injected into the skulls of mice with a syringe, where it would then unfurl to both record and stimulate neurons.

As 95 percent of the mesh is free space, cells are able to arrange themselves around it, and the study reported no signs of an elevated immune response after five weeks. But the implantation required a syringe 100 microns in diameter, which causes considerably more damage than the new ultra-small probes developed in Chong’s lab.

It could be some time before the probes are tested on humans. “The major barrier is that this is still an invasive surgical procedure, including cranial surgery and implantation of devices into brain tissue,” said Chong. But, he said, the group is considering testing the probes on epilepsy patients, as it is common practice to implant electrodes inside the skulls of those who don’t respond to medication to locate the area of their brains responsible for their seizures.

https://singularityhub.com/2017/02/27/this-neural-probe-is-so-thin-the-brain-doesnt-know-its-there/

Talking to a Computer May Soon Be Enough to Diagnose Illness

By Vanessa Bates Ramirez

In recent years, technology has been producing more and more novel ways to diagnose and treat illness.

Urine tests will soon be able to detect cancer: https://singularityhub.com/2016/10/14/detecting-cancer-early-with-nanosensors-and-a-urine-test/

Smartphone apps can diagnose STDs: https://singularityhub.com/2016/12/25/your-smartphones-next-big-trick-to-make-you-healthier-than-ever/

Chatbots can provide quality mental healthcare: https://singularityhub.com/2016/10/10/bridging-the-mental-healthcare-gap-with-artificial-intelligence/

Joining this list is a minimally invasive technique that’s been getting increasing buzz across various sectors of healthcare: disease detection by voice analysis.

It’s basically what it sounds like: you talk, and a computer analyzes your voice and screens for illness. Most of the indicators that machine learning algorithms can pick up aren’t detectable to the human ear.

When we do hear irregularities in our own voices or those of others, the fact we’re noticing them at all means they’re extreme; elongating syllables, slurring, trembling, or using a tone that’s unusually flat or nasal could all be indicators of different health conditions. Even if we can hear them, though, unless someone says, “I’m having chest pain” or “I’m depressed,” we don’t know how to analyze or interpret these biomarkers.

Computers soon will, though.

Researchers from various medical centers, universities, and healthcare companies have collected voice recordings from hundreds of patients and fed them to machine learning software that compares the voices to those of healthy people, with the aim of establishing patterns clear enough to pinpoint vocal disease indicators.
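
Most of these studies follow the same recipe: convert each recording into a vector of acoustic measurements (pitch, loudness, timing, and their variability), then train a classifier on recordings labeled as patient or control. The skeleton below uses two toy features and synthetic "voices"; it is not the feature set or model any of these groups actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 16000                      # sample rate of the toy "recordings"
rng = np.random.default_rng(0)
t = np.arange(FS) / FS          # one second of audio

def acoustic_features(waveform, fs=FS):
    """Two toy features per recording: overall energy and a crude pitch estimate."""
    energy = float(np.mean(waveform ** 2))
    spec = np.fft.rfft(waveform)
    autocorr = np.fft.irfft(spec * np.conj(spec))   # fast (circular) autocorrelation
    lo, hi = fs // 400, fs // 50                    # search for pitch between 50 and 400 Hz
    pitch = fs / (lo + int(np.argmax(autocorr[lo:hi])))
    return [energy, pitch]

def fake_voice(f0, amplitude):
    """Synthetic stand-in for a voice recording: a tone plus a little noise."""
    return amplitude * np.sin(2 * np.pi * f0 * t) + 0.01 * rng.normal(size=t.size)

# Invented training set: "controls" speak higher and louder than "patients".
X = [acoustic_features(fake_voice(f0, 1.0)) for f0 in rng.uniform(110, 130, 20)] + \
    [acoustic_features(fake_voice(f0, 0.4)) for f0 in rng.uniform(95, 105, 20)]
y = [0] * 20 + [1] * 20   # 0 = control, 1 = patient

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([acoustic_features(fake_voice(100, 0.45))]))  # expect [1] ("patient")
```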

In one particularly encouraging study, doctors from the Mayo Clinic worked with Israeli company Beyond Verbal to analyze voice recordings from 120 people who were scheduled for a coronary angiography. Participants used an app on their phones to record 30-second intervals of themselves reading a piece of text, describing a positive experience, then describing a negative experience. Doctors also took recordings from a control group of 25 patients who were either healthy or getting non-heart-related tests.

The doctors found 13 different voice characteristics associated with coronary artery disease. Most notably, the biggest differences between heart patients’ and non-heart patients’ voices occurred when they talked about a negative experience.

Heart disease isn’t the only condition that voice analysis shows promise for diagnosing. Researchers are also making headway with the conditions below.

ADHD: German company Audioprofiling is using voice analysis to diagnose ADHD in children, achieving greater than 90 percent accuracy in identifying previously diagnosed kids based on their speech alone. The company’s founder gave speech rhythm as an example indicator for ADHD, saying children with the condition speak in syllables less equal in length.

PTSD: With the goal of decreasing the suicide rate among military service members, Boston-based Cogito partnered with the Department of Veterans Affairs to use a voice analysis app to monitor service members’ moods. Researchers at Massachusetts General Hospital are also using the app as part of a two-year study to track the health of 1,000 patients with bipolar disorder and depression.

Brain injury: In June 2016, the US Army partnered with MIT’s Lincoln Lab to develop an algorithm that uses voice to diagnose mild traumatic brain injury. Brain injury biomarkers may include elongated syllables and vowel sounds or difficulty pronouncing phrases that require complex facial muscle movements.

Parkinson’s: Parkinson’s disease has no biomarkers and can only be diagnosed via a costly in-clinic analysis with a neurologist. The Parkinson’s Voice Initiative is changing that by analyzing 30-second voice recordings with machine learning software, achieving 98.6 percent accuracy in detecting whether or not a participant suffers from the disease.

Challenges remain before vocal disease diagnosis becomes truly viable and widespread. For starters, there are privacy concerns over the personal health data identifiable in voice samples. It’s also not yet clear how well algorithms developed for English-speakers will perform with other languages.

Despite these hurdles, our voices appear to be on their way to becoming key players in our health.

https://singularityhub.com/2017/02/13/talking-to-a-computer-may-soon-be-enough-to-diagnose-illness/

Scientists invent machine that allows people with complete locked-in syndrome to communicate

Wendy was barely 20 years old when she received a devastating diagnosis: juvenile amyotrophic lateral sclerosis (ALS), an aggressive neurodegenerative disorder that destroys motor neurons in the brain and the spinal cord.

Within half a year, Wendy was completely paralyzed. At 21 years old, she had to be artificially ventilated and fed through a tube placed into her stomach. Even more horrifyingly, as paralysis gradually swept through her body, Wendy realized that she was rapidly being robbed of ways to reach out to the world.

Initially, Wendy was able to communicate to her loved ones by moving her eyes. But as the disease progressed, even voluntary eye twitches were taken from her. In 2015, a mere three years after her diagnosis, Wendy completely lost the ability to communicate—she was utterly, irreversibly trapped inside her own mind.

Complete locked-in syndrome is the stuff of nightmares. Patients in this state remain fully conscious and cognitively sharp, but are unable to move or signal to the outside world that they’re mentally present. The consequences can be dire: when doctors mistake locked-in patients for comatose and decide to pull the plug, there’s nothing the patients can do to intervene.

Now, thanks to a new system developed by an international team of European researchers, Wendy and others like her may finally have a rudimentary link to the outside world. The system, a portable brain-machine interface, translates brain activity into simple yes or no answers to questions with around 70 percent accuracy.

That may not seem like enough, but the system represents the first sliver of hope that we may one day be able to reopen reliable communication channels with these patients.

Four people were tested in the study, with some locked-in for as long as seven years. In just 10 days, the patients were able to reliably use the system to finally tell their loved ones not to worry—they’re generally happy.

The results, though imperfect, came as “enormous relief” to their families, says study leader Dr. Niels Birbaumer at the University of Tübingen. The study was published this week in the journal PLOS Biology.

Breaking Through

Robbed of words and other routes of contact, locked-in patients have always turned to technology for communication.

Perhaps the most famous example is physicist Stephen Hawking, who became partially locked-in due to ALS. Hawking’s workaround is a speech synthesizer that he operates by twitching his cheek muscles. Jean-Dominique Bauby, an editor of the French fashion magazine Elle who became locked-in after a massive stroke, wrote an entire memoir by blinking his left eye to select letters from the alphabet.

Recently, the rapid development of brain-machine interfaces has given paralyzed patients increasing access to the world—not just the physical one, but also the digital universe.

These devices read brain waves directly through electrodes implanted into the patient’s brain, decode the pattern of activity, and correlate it to a command—say, move a computer cursor left or right on a screen. The technology is so reliable that paralyzed patients can even use an off-the-shelf tablet to Google things, using only the power of their minds.
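
Schematically, such a decoder is a learned mapping from a vector of neural features, such as firing rates on each electrode, to a control signal like cursor velocity, fitted during calibration trials where the intended movement is known. The sketch below fits a linear (ridge-regression) decoder on synthetic data; the channel count, noise model, and regression choice are illustrative, not any specific clinical system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend calibration data: firing rates on 32 electrodes while the user intends
# known cursor velocities (vx, vy). In a real system these come from cued trials.
n_trials, n_channels = 200, 32
true_mapping = rng.normal(size=(n_channels, 2))
rates = rng.poisson(lam=5.0, size=(n_trials, n_channels)).astype(float)
velocity = rates @ true_mapping + 0.5 * rng.normal(size=(n_trials, 2))

# Fit a linear decoder (ridge regression) mapping firing rates -> intended velocity.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_channels), rates.T @ velocity)

# At run time, each new burst of firing rates nudges the cursor.
new_rates = rng.poisson(lam=5.0, size=(n_channels,)).astype(float)
vx, vy = new_rates @ W
print(f"decoded cursor step: dx={vx:.2f}, dy={vy:.2f}")
```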

But all of the above workarounds require one critical factor: the patient has to have control of at least one muscle—often, this is a cheek or an eyelid. People like Wendy who are completely locked-in are unable to control such brain-machine interfaces. This is especially perplexing, since these systems read intentions directly from the brain and shouldn’t require voluntary muscle movements at all.

The unexpected failure of brain-machine interfaces for completely locked-in patients has been a major stumbling block for the field. Although speculative, Birbaumer believes that it may be because over time, the brain becomes less efficient at transforming thoughts into actions.

“Anything you want, everything you wish does not occur. So what the brain learns is that intention has no sense anymore,” he says.


First Contact

In the new study, Birbaumer overhauled common brain-machine interface designs to get the brain back on board.

First off was how the system reads brain waves. Generally, this is done through EEG, which measures certain electrical activity patterns of the brain. Unfortunately, the usual solution was a no-go.

“We worked for more than 10 years with neuroelectric activity [EEG] without getting into contact with these completely paralyzed people,” says Birbaumer.

It may be because the electrodes have to be implanted to produce a more accurate readout, explains Birbaumer to Singularity Hub. But surgery comes with additional risks and expenses to the patients. In a somewhat desperate bid, the team turned their focus to a technique called functional near-infrared spectroscopy (fNIRS).

Like fMRI, fNIRS measures brain activity by measuring changes in blood flow through a specific brain region—generally speaking, more blood flow equals more activation. Unlike fMRI, which requires the patient to lie still in a gigantic magnet, fNIRS uses infrared light to measure blood flow. The light source is embedded into a swimming cap-like device that’s tightly worn around the patient’s head.

To train the system, the team started with statements about the world and about the patients’ own lives that they could easily judge true or false. Over the course of 10 days, the patients were repeatedly asked to respond yes or no to statements such as “Paris is the capital of Germany” or “Your husband’s name is Joachim.” Throughout the entire training period, the researchers carefully monitored the patients’ alertness and concentration using EEG, to ensure that they were actually participating in the task at hand.

The answers were then used to train an algorithm that matched the responses to their respective brain activation patterns. Eventually, the algorithm was able to tell yes or no based on these patterns alone, at about 70 percent accuracy for a single trial.
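
To make that training step concrete, here is a minimal sketch of how such a yes/no decoder could be fit, assuming each trial has already been reduced to a fixed-length vector of fNIRS features. The placeholder data, variable names, and the use of scikit-learn are illustrative assumptions, not the study’s actual pipeline.

```python
# Minimal sketch: train a yes/no decoder on fNIRS-style features.
# Assumes each trial has been reduced to a fixed-length feature vector
# (e.g., mean oxygenation change per channel); not the study's pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder data: 200 training trials, 20 channels each. In the study,
# labels come from statements with known answers
# ("Paris is the capital of Germany" -> no).
X = rng.normal(size=(200, 20))       # per-trial activation features
y = rng.integers(0, 2, size=200)     # 1 = "yes", 0 = "no"

decoder = make_pipeline(StandardScaler(), LinearSVC())

# Cross-validated accuracy on held-out trials; the paper reports roughly
# 70 percent for single trials.
scores = cross_val_score(decoder, X, y, cv=5)
print(f"mean single-trial accuracy: {scores.mean():.2f}")
```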

“After 10 years [of trying], I felt relieved,” says Birbaumer. If the study can be replicated in more patients, we may finally have a way to restore useful communication with these patients, he added in a press release.

“The authors established communication with complete locked-in patients, which is rare and has not been demonstrated systematically before,” says Dr. Wolfgang Einhäuser-Treyer to Singularity Hub. Einhäuser-Treyer is a professor at Bielefeld University in Germany who had previously worked on measuring pupil response as a means of communication with locked-in patients and was not involved in this current study.

Generally Happy

With more training, the algorithm is expected to improve even further.

For now, researchers can average out mistakes by asking a patient the same question multiple times. And even at an “acceptable” 70 percent accuracy rate, the system has already allowed locked-in patients to speak their minds—and somewhat endearingly, just like in real life, the answer may be rather unexpected.

One of the patients, a 61-year-old man, was asked whether his daughter should marry her boyfriend. The father said no a striking nine out of ten times—but the daughter went ahead anyway, much to her father’s consternation, which he was able to express with the help of his new brain-machine interface.
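
That nine-out-of-ten tally also shows why repetition helps. Under the simplifying assumption that errors are independent across repetitions (real trials are likely correlated, so treat this as an optimistic bound), a 70-percent-accurate decoder becomes considerably more reliable once answers are put to a majority vote; a short calculation makes the point:

```python
# Sketch: how repetition raises the reliability of a 70-percent-accurate
# yes/no decoder, assuming independent errors across repetitions
# (an idealization; real trials are likely correlated).
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that more than half of n independent trials are correct."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 5, 9):
    print(n, round(majority_vote_accuracy(0.7, n), 3))
# 1 -> 0.7, 5 -> ~0.837, 9 -> ~0.901
```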

Perhaps the most heart-warming result from the study is that the patients were generally happy and content with their lives.

The researchers were originally surprised, says Birbaumer. But on further thought, it made sense: these four patients had accepted ventilation to support their lives despite their condition.

“In a sense, they had already chosen to live,” says Birbaumer. “If we could make this technique widely clinically available, it could have a huge impact on the day-to-day lives of people with completely locked-in syndrome.”

Next, the team hopes to extend the system beyond simple yes or no questions. They want to give patients access to the entire alphabet, allowing them to spell out words using their brain waves—something that has already been done in partially locked-in patients but has never before been possible for those who are completely locked-in.

“To me, this is a very impressive and important study,” says Einhäuser-Treyer. The downsides are mostly economic.

“The equipment is rather expensive and not easy to use. So the challenge for the field will be to develop this technology into an affordable ‘product’ that caretakers [sic], families or physicians can simply use without trained staff or extensive training,” he says. “In the interest of the patients and their families, we can hope that someone takes this challenge.”

https://singularityhub.com/2017/02/12/families-finally-hear-from-completely-paralyzed-patients-via-new-mind-reading-device/?utm_source=Singularity+Hub+Newsletter&utm_campaign=978304f198-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-978304f198-58158129

As bee populations dwindle, robot bees may pick up some of their pollination slack

by Amina Khan

One day, gardeners might not just hear the buzz of bees among their flowers, but the whirr of robots, too. Scientists in Japan say they’ve managed to turn an unassuming drone into a remote-controlled pollinator by attaching horsehairs coated with a special, sticky gel to its underbelly.

The system, described in the journal Chem, is nowhere near ready to be sent to agricultural fields, but it could help pave the way to developing automated pollination techniques at a time when bee colonies are suffering precipitous declines.

In flowering plants, sex often involves a threesome. Flowers looking to get the pollen from their male parts into another bloom’s female parts need an envoy to carry it from one to the other. Those third players are animals known as pollinators — a diverse group of critters that includes bees, butterflies, birds and bats, among others.

Animal pollinators are needed for the reproduction of 90% of flowering plants and one third of human food crops, according to the U.S. Department of Agriculture’s Natural Resources Conservation Service. Chief among those are bees — but many bee populations in the United States have been in steep decline in recent decades, likely due to a combination of factors, including agricultural chemicals, invasive species and climate change. Just last month, the rusty patched bumblebee became the first wild bee in the United States to be listed as an endangered species (although the Trump administration just put a halt on that designation).

Thus, the decline of bees isn’t just worrisome because it could disrupt ecosystems, but also because it could disrupt agriculture and the economy. People have been trying to come up with replacement techniques, the study authors say, but none of them are especially effective yet — and some might do more harm than good.

“One pollination technique requires the physical transfer of pollen with an artist’s brush or cotton swab from male to female flowers,” the authors wrote. “Unfortunately, this requires much time and effort. Another approach uses a spray machine, such as a gun barrel and pneumatic ejector. However, this machine pollination has a low pollination success rate because it is likely to cause severe denaturing of pollens and flower pistils as a result of strong mechanical contact as the pollens bursts out of the machine.”

Scientists have thought about using drones, but they haven’t figured out how to make free-flying robot insects that can rely on their own power source without being attached to a wire.

“It’s very tough work,” said senior author Eijiro Miyako, a chemist at the National Institute of Advanced Industrial Science and Technology in Japan.

Miyako’s particular contribution to the field involves a gel, one he’d considered a mistake 10 years before. The scientist had been attempting to make fluids that could be used to conduct electricity, and one attempt left him with a gel that was as sticky as hair wax. Clearly this wouldn’t do, and so Miyako stuck it in a storage cabinet in an uncapped bottle. When it was rediscovered a decade later, it looked exactly the same – the gel hadn’t dried up or degraded at all.

“I was so surprised, because it still had a very high viscosity,” Miyako said.

The chemist noticed that when dropped, the gel absorbed an impressive amount of dust from the floor. Miyako realized this material could be very useful for picking up pollen grains. He took ants, slathered the ionic gel on some of them and let both the gelled and ungelled insects wander through a box of tulips. Those ants with the gel were far more likely to end up with a dusting of pollen than those that were free of the sticky substance.

The next step was to see if this worked with mechanical movers as well. He and his colleagues chose a four-propeller drone with a retail value of $100 and attached horsehairs to its smooth surface to mimic a bee’s fuzzy body. They coated those horsehairs in the gel, then maneuvered the drone over Japanese lilies, where it would pick up pollen from one flower and deposit it on another bloom, thus fertilizing it.

The scientists looked at the hairs under a scanning electron microscope and counted up the pollen grains attached to the surface. They found that the robots whose horsehairs had been coated with the gel had on the order of 10 times more pollen than those hairs that had not been coated with the gel.

“A certain amount of practice with remote control of the artificial pollinator is necessary,” the study authors noted.

Miyako does not think such drones would replace bees altogether, but could simply help bees with their pollinating duties.

“In combination is the best way,” he said.

There’s a lot of work to be done before that’s a reality, however. Small drones will need to become more maneuverable and energy efficient, as well as smarter, he said — with better GPS and artificial intelligence, programmed to travel in highly effective search-and-pollinate patterns.
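
To give a sense of what a “search-and-pollinate pattern” might look like in its simplest form (the article does not describe the team’s actual flight plan), here is a hypothetical boustrophedon, or lawnmower, sweep over a rectangular plot; the plot dimensions and row spacing are made-up parameters.

```python
# Hypothetical sketch of a simple "search-and-pollinate" flight pattern:
# a boustrophedon (lawnmower) sweep over a rectangular plot. Illustrative
# only; not the pattern used by the researchers.
from typing import Iterator, Tuple

def lawnmower_waypoints(width_m: float, length_m: float,
                        row_spacing_m: float) -> Iterator[Tuple[float, float]]:
    """Yield (x, y) waypoints that sweep the plot row by row."""
    y, direction = 0.0, 1
    while y <= length_m:
        xs = (0.0, width_m) if direction > 0 else (width_m, 0.0)
        for x in xs:
            yield (x, y)
        y += row_spacing_m
        direction *= -1

# Example: a 10 m x 3 m plot swept with 1 m between rows.
for waypoint in lawnmower_waypoints(10.0, 3.0, 1.0):
    print(waypoint)
```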

http://www.latimes.com/science/sciencenow/la-sci-sn-robot-bees-20170209-story.html#pt0-805728

Wearable Devices Can Actually Tell When You’re About to Get Sick

Feeling run down? Have a case of the sniffles? Maybe you should have paid more attention to your smartwatch.

No, that’s not the pitch line for a new commercial peddling wearable technology, though no doubt a few companies will be interested in the latest research published in PLOS Biology for the next advertising campaign. It turns out that some of the data logged by our personal tracking devices regarding health—heart rate, skin temperature, even oxygen saturation—appear useful for detecting the onset of illness.

“We think we can pick up the earliest stages when people get sick,” says Michael Snyder, a professor and chair of genetics at Stanford University and senior author of the study, “Digital Health: Tracking Physiomes and Activity Using Wearable Biosensors Reveals Useful Health-Related Information.”

Snyder said his team was surprised that the wearables were so effective at detecting the start of the flu, or even Lyme disease, but in hindsight the results make sense: because these devices monitor vital signs such as heart rate continuously, they produce a dense stream of data against which aberrations stand out, even on the least sensitive wearables.
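
As a minimal sketch of that baseline idea, the rule below flags heart-rate samples that sit far above a person’s own rolling baseline. The minute-level readings, window size, and z-score threshold are assumptions for illustration; this is not the software used in the study.

```python
# Minimal sketch of baseline-deviation detection: flag heart-rate samples
# that exceed a person's own rolling baseline by several standard
# deviations. Illustrative only; not the study's detection software.
import numpy as np

def flag_anomalies(heart_rate: np.ndarray, window: int = 1440,
                   z_threshold: float = 3.0) -> np.ndarray:
    """Return indices where heart rate exceeds the rolling baseline
    by more than z_threshold standard deviations."""
    flags = []
    for i in range(window, len(heart_rate)):
        baseline = heart_rate[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and (heart_rate[i] - mu) / sigma > z_threshold:
            flags.append(i)
    return np.array(flags)

# Example: minute-level readings with a simulated elevated stretch at the end.
rng = np.random.default_rng(1)
hr = rng.normal(65, 3, size=2000)
hr[-60:] += 20                       # simulated fever-like elevation
print(flag_anomalies(hr)[:5])        # indices of the first flagged samples
```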

“[Wearables are] pretty powerful because they’re a continuous measurement of these things,” notes Snyder during an interview with Singularity Hub.

The researchers collected data for up to 24 months on a small study group, which included Snyder himself. Known as Participant #1 in the paper, Snyder benefited from the study when the wearable devices detected marked changes in his heart rate and skin temperature from his normal baseline. A test about two weeks later confirmed he had contracted Lyme disease.

In fact, during the nearly two years he was monitored, the wearables detected 11 periods of elevated heart rate, corresponding to each instance of illness Snyder experienced during that time. They also flagged anomalies on four occasions when Snyder was not feeling ill.

An expert in genomics, Snyder said his team was interested in looking at the effectiveness of wearables technology to detect illness as part of a broader interest in personalized medicine.

“Everybody’s baseline is different, and these devices are very good at characterizing individual baselines,” Snyder says. “I think medicine is going to go from reactive—measuring people after they get sick—to proactive: predicting these risks.”

That’s essentially what genomics is all about: trying to catch disease early, he notes. “I think these devices are set up for that,” Snyder says.

The cost savings could be substantial if a better preventive strategy for healthcare can be found. A landmark report in 2012 from the Cochrane Collaboration, an international group of medical researchers, analyzed 14 large trials with more than 182,000 people. The findings: routine checkups are basically a waste of time. They did little to lower the risk of serious illness or premature death. A news story from Reuters estimated that the US spends about $8 billion a year on annual physicals.

The study also found that wearables have the potential to detect individuals at risk for Type 2 diabetes. Snyder and his co-authors argue that biosensors could be developed to detect variations in heart rate patterns, which tend to differ for those experiencing insulin resistance.

Finally, the researchers also noted that wearables capable of tracking blood oxygenation provided additional insights into physiological changes caused by flying. While a drop in blood oxygenation during flight due to changes in cabin pressure is a well-known medical fact, the wearables recorded a drop in levels during most of the flight, which was not known before. The paper also suggested that lower oxygen in the blood is associated with feelings of fatigue.

Speaking while en route to the airport for yet another fatigue-causing flight, Snyder is still tracking his vital signs today. He hopes to continue the project by improving on the software his team originally developed to detect deviations from baseline health and sense when people are becoming sick.

In addition, Snyder says his lab plans to make the software work on all smart wearable devices, and eventually develop an app for users.

“I think [wearables] will be the wave of the future for collecting a lot of health-related information. It’s a very inexpensive way to get very dense data about your health that you can’t get in other ways,” he says. “I do see a world where you go to the doctor and they’ve downloaded your data. They’ll be able to see if you’ve been exercising, for example.

“It will be very complementary to how healthcare currently works.”

https://singularityhub.com/2017/02/07/wearable-devices-can-actually-tell-when-youre-about-to-get-sick/?utm_source=Singularity+Hub+Newsletter&utm_campaign=1fcfffbc06-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-1fcfffbc06-58158129

Microsoft Thinks Machines Can Learn to Converse by Making Chat a Game

MICROSOFT IS BUYING a deep learning startup based in Montreal, a global hub for deep learning research. But two years ago, this startup wasn’t based in Montreal, and it had nothing to do with deep learning. Which just goes to show: striking it big in the world of tech is all about being in the right place at the right time with the right idea.

Sam Pasupalak and Kaheer Suleman founded Maluuba in 2011 as students at the University of Waterloo, about 400 miles from Montreal. The company’s name is an insider’s nod to one of their undergraduate computer science classes. From an office in Waterloo, they started building something like Siri, the digital assistant that would soon arrive on the iPhone, and they built it in much the same way Apple built the original, using techniques that had driven the development of conversational computing for years—techniques that require extremely slow and meticulous work, where engineers construct AI one tiny piece at a time. But as they toiled away in Waterloo, companies like Google and Facebook embraced deep neural networks, and this technology reinvented everything from image recognition to machine translations, rapidly learning these tasks by analyzing vast amounts of data. Soon, Pasupalak and Suleman realized they should change tack.

In December 2015, the two founders opened a lab in Montreal, and they started recruiting deep learning specialists from places like McGill University and the University of Montreal. Just thirteen months later, after growing to a mere 50 employees, the company sold itself to Microsoft. And that’s not an unusual story. The giants of tech are buying up deep learning startups almost as quickly as they’re created. At the end of December, Uber acquired Geometric Intelligence, a two-year-old AI startup with fifteen academic researchers but no product and no published research. The previous summer, Twitter paid a reported $150 million for Magic Pony, a two-year-old deep learning startup based in the UK. And in recent months, similarly small, similarly young deep learning companies have disappeared into the likes of General Electric, Salesforce, and Apple.

Microsoft did not disclose how much it paid for Maluuba, but some of these deep learning acquisitions have reached hefty sums, including Intel’s $400 million purchase of Nervana and Google’s $650 million acquisition of DeepMind, the British AI lab that made headlines last spring when it cracked the ancient game of Go, a feat experts didn’t expect for another decade.

At the same time, Microsoft’s buy is a little different than the rest. Maluuba is a deep learning company that focuses on natural language understanding, the ability to not just recognize the words that come out of our mouths but actually understand them and respond in kind—the breed of AI needed to build a good chatbot. Now that deep learning has proven so effective with speech recognition, image recognition, and translation, natural language is the next frontier. “In the past, people had to build large lexicons, dictionaries, ontologies,” Suleman says. “But with neural nets, we no longer need to do that. A neural net can learn from raw data.”

The acquisition is part of an industry-wide race towards digital assistants and chatbots that can converse like a human. Yes, we already have digital assistants like Microsoft Cortana, the Google Search Assistant, Facebook M, and Amazon Alexa. And chatbots are everywhere. But none of these services know how to chat (a particular problem for the chatbots). So, Microsoft, Google, Facebook, and Amazon are now looking at deep learning as a way of improving the state of the art.

Two summers ago, Google published a research paper describing a chatbot underpinned by deep learning that could debate the meaning of life (in a way). Around the same time, Facebook described an experimental system that could read a shortened form of The Lord of the Rings and answer questions about the Tolkien trilogy. Amazon is gathering data for similar work. And, none too surprisingly, Microsoft is gobbling up a startup that only just moved into the same field.

Winning the Game
Deep neural networks are complex mathematical systems that learn to perform discrete tasks by recognizing patterns in vast amounts of digital data. Feed millions of photos into a neural network, for instance, and it can learn to identify objects and people in photos. Pairing these systems with the enormous amounts of computing power inside their data centers, companies like Google, Facebook, and Microsoft have pushed artificial intelligence far further, far more quickly, than they ever could in the past.
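
For readers curious what “recognizing patterns in vast amounts of digital data” looks like in code, here is a minimal training loop for a tiny feedforward network, with random tensors standing in for a photo dataset. It is a sketch of the general technique in PyTorch, not any company’s production system.

```python
# Minimal sketch of the pattern-recognition idea: a tiny feedforward
# network learning to classify images. Real systems train on millions of
# photos; here random tensors stand in for an image dataset.
import torch
from torch import nn, optim

model = nn.Sequential(
    nn.Flatten(),                # 28x28 "image" -> 784 features
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),          # scores for 10 object classes
)

images = torch.randn(256, 1, 28, 28)      # placeholder photos
labels = torch.randint(0, 10, (256,))     # placeholder object labels

loss_fn = nn.CrossEntropyLoss()
opt = optim.SGD(model.parameters(), lr=0.1)

for _ in range(10):                       # a few passes over the batch
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```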

Now, these companies hope to reinvent natural language understanding in much the same way. But there are big caveats: It’s a much harder task, and the work has only just begun. “Natural language is an area where more research needs to be done in terms of research, even basic research,” says University of Montreal professor Yoshua Bengio, one of the founding fathers of the deep learning movement and an advisor to Maluuba.

Part of the problem is that researchers don’t yet have the data needed to train neural networks for true conversation, and Maluuba is among those working to fill the void. Like Facebook and Amazon, it’s building brand new datasets for training natural language models: One involves questions and answers, and the other focuses on conversational dialogue. What’s more, the company is sharing this data with the larger community of researchers and encouraging them to share their own—a common strategy that seeks to accelerate the progress of AI research.

But even with adequate data, the task is quite different from image recognition or translation. Natural language isn’t necessarily something that neural networks can solve on their own. Dialogue isn’t a single task. It’s a series of tasks, each building on the one before. A neural network can’t just identify a pattern in a single piece of data. It must somehow identify patterns across an endless stream of data—and keep a “memory” of this stream. That’s why Maluuba is exploring AI beyond neural networks, including a technique called reinforcement learning.

With reinforcement learning, a system repeats the same task over and over again, while carefully keeping tabs on what works and what doesn’t. Engineers at Google’s DeepMind lab used this method in building AlphaGo, the system that topped Korean grandmaster Lee Sedol at the ancient game of Go. In essence, the machine learned to play Go at a higher level than any human by playing game after game against itself, tracking which moves won the most territory on the board. In similar fashion, reinforcement learning can help machines learn to carry on a conversation. Like a game, Bengio says, dialogue is interactive. It’s a back and forth.
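
A toy sketch of that trial-and-error loop: an agent repeats the same task, keeps a running estimate of how well each action pays off, and gradually settles on the best one. The action names and reward values below are invented for illustration and bear no resemblance to the scale of AlphaGo or Maluuba’s research systems.

```python
# Toy sketch of the reinforcement-learning loop: repeat a task, track the
# reward each action earns, and gradually prefer what works. Action names
# and rewards are invented for illustration.
import random

ACTIONS = ["greet", "clarify", "answer", "end"]
TRUE_REWARD = {"greet": 0.2, "clarify": 0.5, "answer": 0.9, "end": 0.1}

values = {a: 0.0 for a in ACTIONS}   # running estimate of each action's worth
counts = {a: 0 for a in ACTIONS}
epsilon, episodes = 0.1, 5000

for _ in range(episodes):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    reward = random.gauss(TRUE_REWARD[action], 0.1)   # noisy feedback
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))   # converges to "answer"
```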

For Microsoft, winning the game of conversation means winning an enormous market. Natural language could streamline practically any computer interface. With this in mind, the company is already building an army of chatbots, but so far, the results are mixed. In China, the company says, its Xiaoice chatbot has been used by 40 million people. But when it first unleashed a similar bot in the US, the service was coaxed into spewing racism, and the replacement is flawed in so many other ways. That’s why Microsoft acquired Maluuba. The startup was in the right place at the right time. And it may carry the right idea.

https://www.wired.com/2017/01/microsoft-thinks-machines-can-learn-converse-chats-become-game/

24/7 Robot Miners Working in Australia

by Tom Simonite

Each of these trucks is the size of a small two-story house. None has a driver or anyone else on board.

Mining company Rio Tinto has 73 of these titans hauling iron ore 24 hours a day at four mines in Australia’s Mars-red northwest corner. At this one, known as West Angelas, the vehicles work alongside robotic rock drilling rigs. The company is also upgrading the locomotives that haul ore hundreds of miles to port—the upgrades will allow the trains to drive themselves, and be loaded and unloaded automatically.

Rio Tinto intends its automated operations in Australia to preview a more efficient future for all of its mines—one that will also reduce the need for human miners. The rising capabilities and falling costs of robotics technology are allowing mining and oil companies to reimagine the dirty, dangerous business of getting resources out of the ground.

BHP Billiton, the world’s largest mining company, is also deploying driverless trucks and drills on iron ore mines in Australia. Suncor, Canada’s largest oil company, has begun testing driverless trucks on oil sands fields in Alberta.

“In the last couple of years we can just do so much more in terms of the sophistication of automation,” says Herman Herman, director of the National Robotics Engineering Center at Carnegie Mellon University, in Pittsburgh. The center helped Caterpillar develop its autonomous haul truck. Mining company Fortescue Metals Group is putting them to work in its own iron ore mines. Herman says the technology can be deployed sooner for mining than other applications, such as transportation on public roads. “It’s easier to deploy because these environments are already highly regulated,” he says.

Rio Tinto uses driverless trucks provided by Japan’s Komatsu. They find their way around using precision GPS and look out for obstacles using radar and laser sensors.
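
Reduced to a caricature (and emphatically not Komatsu’s or Rio Tinto’s software), the control idea is: follow surveyed GPS waypoints, and hold position whenever radar or laser returns report something inside a safety envelope. A hypothetical sketch:

```python
# Hypothetical sketch: follow surveyed GPS waypoints, but stop whenever a
# radar/laser return reports an obstacle inside a safety envelope.
# Illustrates the concept only; not Komatsu's or Rio Tinto's system.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Truck:
    position: int = 0          # index of the current waypoint
    stopped: bool = False

def step(truck: Truck, waypoints: List[str],
         nearest_obstacle_m: Optional[float], safety_m: float = 30.0) -> Truck:
    """Advance to the next waypoint unless a sensed obstacle is too close."""
    if nearest_obstacle_m is not None and nearest_obstacle_m < safety_m:
        truck.stopped = True                 # hold position and wait
        return truck
    truck.stopped = False
    if truck.position < len(waypoints) - 1:
        truck.position += 1
    return truck

route = ["pit", "ramp", "haul road", "crusher"]
t = Truck()
for reading in (None, 80.0, 12.0, None):     # simulated sensor returns (meters)
    t = step(t, route, reading)
    print(route[t.position], "stopped" if t.stopped else "moving")
```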

Rob Atkinson, who leads productivity efforts at Rio Tinto, says the fleet and other automation projects are already paying off. The company’s driverless trucks have proven to be roughly 15 percent cheaper to run than vehicles with humans behind the wheel, says Atkinson—a significant saving since haulage is by far a mine’s largest operational cost. “We’re going to continue as aggressively as possible down this path,” he says.

Trucks that drive themselves can spend more time working because software doesn’t need to stop for shift changes or bathroom breaks. They are also more predictable in how they do things like pull up for loading. “All those places where you could lose a few seconds or minutes by not being consistent add up,” says Atkinson. They also improve safety, he says.

The driverless locomotives, due to be tested extensively next year and fully deployed by 2018, are expected to bring similar benefits. Atkinson also anticipates savings on train maintenance, because software can be more predictable and gentle than any human in how it uses brakes and other controls. Diggers and bulldozers could be next to be automated.

Herman at CMU expects all large mining companies to widen their use of automation in the coming years as robotics continues to improve. The recent, sizeable investments by auto and tech companies in driverless cars will help accelerate improvements in the price and performance of the sensors, software, and other technologies needed.

Herman says many mining companies are well placed to expand automation rapidly, because they have already invested in centralized control systems that use software to coördinate and monitor their equipment. Rio Tinto, for example, gave the job of overseeing its autonomous trucks to staff at the company’s control center in Perth, 750 miles to the south. The center already plans train movements and in the future will shift from sending orders to people to directing driverless locomotives.

Atkinson of Rio Tinto acknowledges that just like earlier technologies that boosted efficiency, those changes will tend to reduce staffing levels, even if some new jobs are created servicing and managing autonomous machines. “It’s something that we’ve got to carefully manage, but it’s a reality of modern day life,” he says. “We will remain a very significant employer.”

https://www.technologyreview.com/s/603170/mining-24-hours-a-day-with-robots/

Thanks to Kebmodee for bringing this to the It’s Interesting community.