Posts Tagged ‘The Singularity’

When someone commits suicide, their family and friends can be left with the heartbreaking and answerless question of what they could have done differently. Colin Walsh, data scientist at Vanderbilt University Medical Center, hopes his work in predicting suicide risk will give people the opportunity to ask “what can I do?” while there’s still a chance to intervene.

Walsh and his colleagues have created machine-learning algorithms that predict, with unnerving accuracy, the likelihood that a patient will attempt suicide. In trials, results have been 80-90% accurate when predicting whether someone will attempt suicide within the next two years, and 92% accurate in predicting whether someone will attempt suicide within the next week.

The prediction is based on data that’s widely available from all hospital admissions, including age, gender, zip codes, medications, and prior diagnoses. Walsh and his team gathered data on 5,167 patients from Vanderbilt University Medical Center who had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts.

This set of more than 5,000 cases was used to train the machine to distinguish those at risk of attempted suicide from those who committed self-harm but showed no evidence of suicidal intent. The researchers also built algorithms to predict attempted suicide among a group of 12,695 randomly selected patients with no documented history of suicide attempts. These proved even more accurate at making suicide risk predictions within this large general population of patients admitted to the hospital.

Walsh’s paper, published in Clinical Psychological Science in April, is just the first stage of the work. He’s now working to establish whether his algorithm is effective with a completely different data set from another hospital. And, once confident that the model is sound, Walsh hopes to work with a larger team to establish a suitable method of intervening. He expects to have an intervention program in testing within the next two years. “I’d like to think it’ll be fairly quick, but fairly quick in health care tends to be in the order of months,” he adds.

Suicide is such an intensely personal act that it seems, from a human perspective, impossible to make such accurate predictions based on a crude set of data. Walsh says it’s natural for clinicians to ask how the predictions are made, but the algorithms are so complex that it’s impossible to pull out single risk factors. “It’s a combination of risk factors that gets us the answers,” he says.
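Walsh’s point that no single factor drives the prediction can be illustrated with the simplest kind of risk model, a logistic combination of weighted features. This is a minimal sketch of that idea, not the Vanderbilt model; the feature names and weights below are hypothetical.

```python
import math

# Hypothetical feature weights (log-odds contributions); the real model's
# features and weights are not public in this form.
WEIGHTS = {"prior_self_harm": 1.4, "sleep_disorder_rx": 0.9,
           "age_under_30": 0.3, "recent_admission": 0.6}
BIAS = -3.0  # baseline log-odds for the cohort

def risk_score(patient):
    """Combine many weak signals into a single probability."""
    z = BIAS + sum(w for f, w in WEIGHTS.items() if patient.get(f))
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# No single feature dominates: each one only shifts the log-odds a little,
# and the answer emerges from their combination.
low = risk_score({"age_under_30": True})
high = risk_score({"prior_self_harm": True, "sleep_disorder_rx": True,
                   "recent_admission": True})
print(round(low, 3), round(high, 3))
```

In a model like this, asking “which factor caused the prediction?” has no clean answer, which is the intuition behind Walsh’s remark, even though the actual algorithms are far more complex than a single logistic formula.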

That said, Walsh and his team were surprised to note that taking melatonin seemed to be a significant factor in calculating the risk. “I don’t think melatonin is causing people to have suicidal thinking. There’s no physiology that gets us there. But one thing that’s been really important to suicide risk is sleep disorders,” says Walsh. It’s possible that prescriptions for melatonin capture the risk of sleep disorders—though that’s currently a hypothesis that’s yet to be proved.

The research raises broader ethical questions about the role of computers in health care and how truly personal information could be used. “There’s always the risk of unintended consequences,” says Walsh. “We mean well and build a system to help people, but sometimes problems can result down the line.”

Researchers will also have to decide how much computer-based decisions will determine patient care. As a practicing primary care doctor, Walsh says it’s unnerving to recognize that he could effectively follow orders from a machine. “Is there a problem with the fact that I might get a prediction of high risk when that’s not part of my clinical picture?” he says. “Are you changing the way I have to deliver care because of something a computer’s telling me to do?”

For now, the machine-learning algorithms are based on data from hospital admissions. But Walsh recognizes that many people at risk of suicide do not spend time in hospital beforehand. “So much of our lives is spent outside of the health care setting. If we only rely on data that’s present in the health care setting to do this work, then we’re only going to get part of the way there,” he says.

And where else could researchers get data? The internet is one promising option. We spend so much time on Facebook and Twitter, says Walsh, that there may well be social media data that could be used to predict suicide risk. “But we need to do the work to show that’s actually true.”

Facebook announced earlier this year that it was using its own artificial intelligence to review posts for signs of self-harm. The results are reportedly already more accurate than the reports Facebook receives from users flagging friends as at-risk.

Training machines to identify warning signs of suicide is far from straightforward. And, for predictions and interventions to be done successfully, Walsh believes it’s essential to destigmatize suicide. “We’re never going to help people if we’re not comfortable talking about it,” he says.

But, with suicide leading to 800,000 deaths worldwide every year, this is a public health issue that cannot be ignored. Given that most humans, including doctors, are pretty terrible at identifying suicide risk, machine learning could provide an important solution.

https://www.doximity.com/doc_news/v2/entries/8004313


With a selfie and some audio, a startup called Oben says it can make you an avatar that can say—or sing—anything.

by Rachel Metz

I’ve met Nikhil Jain in the flesh, and now, on the laptop screen in front of me, I’m looking at a small animated version of him from the torso up, talking in the same tone and lilting accented English—only this version of Jain is bald (hair is tricky to animate convincingly), and his voice has a robotic sound.

For the past three years, Jain has been working on Oben, the startup he cofounded and leads. It’s building technology that uses a single image and an audio clip to automate the construction of what are sort of like digital souls: avatars that look and sound a lot like anyone, and can be made to speak or sing anything.

Of course it won’t really be you—or Beyoncé, or Michael Jackson, or whomever an Oben avatar depicts—but it could be a decent, potentially fun approximation that’s useful for all kinds of things. Maybe, like Jain, you want a virtual you to read stories to your kids when you can’t be there in person. Perhaps you’re a celebrity who wants to let fans do duets with your avatar on a mobile or virtual-reality app, or the estate of a dead celebrity who wants to continue to keep that person “alive” with avatar-based performances. The opportunities are endless—and, perhaps, endlessly eerie.

Oben, based in Pasadena, California, has raised about $9 million so far. The company is planning to release an app late this year that lets people make their own personal avatar and share video clips of it with friends.

Oben is also working with some as-yet-unnamed bands in Asia to make mobile-based avatars that will be able to sing duets with fans, and last month it announced it will launch a virtual-reality-enabled version of its avatar technology with the massively popular social app WeChat, for the HTC Vive headset.

For now, producing the kind of avatar Jain showed me still takes a lot of time, and it doesn’t even include the body below the waist (Jain says the company is experimenting with animating other body parts, but mainly it’s “focusing on other things”). While the avatar can be made with just one photo and two to 20 minutes of reading from a phoneme-rich script (the more, the better), a good avatar still takes Oben’s deep-learning system about eight hours to create. This includes cleaning up the recorded audio, creating a voice print for the person that reflects qualities such as accent and timbre, and making the 3-D visual model (facial movements are predicted from the selfie and voice print, Jain says). While speaking sounds pretty good, the singing clips I heard sounded very Auto-Tuned.

The avatars in the forthcoming app will be less focused on perfection but much faster to build, he says. Oben is also trying to figure out how to match speech and facial expressions so that the avatars can speak any language in a natural-looking way; for now, they’re limited to English and Chinese.

If digital copies like Oben’s are any good, they will raise questions about what should happen to your digital self over time. If you die, should an existing avatar be retained? Is it disturbing if others use digital breadcrumbs you left behind to, in a sense, re-create your digital self?

Jain isn’t sure what the right answer is, though he agrees that, like other companies that deal with user data, Oben does have to address death. Beyond the big questions, the issue holds potentially big business opportunities, and the company’s business model is likely to be predicated on them in part: he says Oben has been approached by the estates of numerous celebrities, some of them long dead, some recently deceased.

https://www.technologyreview.com/s/607885/how-to-save-your-digital-soul/

One advantage humans have over robots is that we’re good at quickly passing on our knowledge to each other. A new system developed at MIT now allows anyone to coach robots through simple tasks and even lets them teach each other.

Typically, robots learn tasks through demonstrations by humans, or through hand-coded motion planning systems where a programmer specifies each of the required movements. But the former approach is not good at translating skills to new situations, and the latter is very time-consuming.

Humans, on the other hand, typically need to demonstrate a simple task, like how to stack logs, only once for someone else to pick it up, and that person can easily adapt the knowledge to new situations, say if they come across an odd-shaped log or the pile collapses.

In an attempt to mimic this kind of adaptable, one-shot learning, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) combined motion planning and learning through demonstration in an approach they’ve dubbed C-LEARN.

First, a human teaches the robot a series of basic motions using an interactive 3D model on a computer. Using the mouse to show it how to reach and grasp various objects in different positions helps the machine build up a library of possible actions.

The operator then shows the robot a single demonstration of a multistep task, and using its database of potential moves, it devises a motion plan to carry out the job at hand.
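The two-stage idea above—first build a library of primitive reach-and-grasp actions, then sequence them from a single demonstration—can be sketched as a nearest-neighbor lookup over the learned library. This is an illustrative sketch, not the CSAIL implementation; the primitives, poses, and matching metric are all hypothetical.

```python
# Hypothetical action library built during the teaching phase: each
# primitive is named and tagged with the end-effector pose it reaches.
LIBRARY = {
    "reach_high": (0.2, 0.0, 0.9),
    "reach_low":  (0.3, 0.1, 0.2),
    "grasp_bar":  (0.3, 0.1, 0.15),
    "place_left": (-0.4, 0.2, 0.5),
}

def nearest_primitive(waypoint):
    """Match a demonstrated waypoint to the closest known primitive."""
    def dist2(pose):
        return sum((a - b) ** 2 for a, b in zip(pose, waypoint))
    return min(LIBRARY, key=lambda name: dist2(LIBRARY[name]))

def plan_from_demo(demo_waypoints):
    """Turn one multistep demonstration into a plan of known actions."""
    return [nearest_primitive(w) for w in demo_waypoints]

# A single noisy demonstration of a three-step task:
demo = [(0.21, 0.02, 0.88), (0.31, 0.09, 0.16), (-0.38, 0.18, 0.52)]
print(plan_from_demo(demo))
```

Because the plan is expressed in terms of the robot’s own action library rather than raw joint trajectories, the same demonstrated sequence can, in principle, be mapped onto a different robot with its own library—the property exploited later when skills are transferred between machines.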

“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah, in a press release.

“We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”

The robot successfully carried out tasks 87.5 percent of the time on its own, but when a human operator was allowed to correct minor errors in the interactive model before the robot carried out the task, the accuracy rose to 100 percent.

Most importantly, the robot could teach the skills it learned to another machine with a completely different configuration. The researchers tested C-LEARN on a new two-armed robot called Optimus that sits on a wheeled base and is designed for bomb disposal.

But in simulations, they were able to seamlessly transfer Optimus’ learned skills to CSAIL’s 6-foot-tall Atlas humanoid robot. They haven’t yet tested Atlas’ new skills in the real world, and they had to give Atlas some extra information on how to carry out tasks without falling over, but the demonstration shows that the approach can allow very different robots to learn from each other.

The research, which will be presented at the IEEE International Conference on Robotics and Automation in Singapore later this month, could have important implications for the large-scale roll-out of robot workers.

“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah in the press release.

“It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”

The MIT researchers aren’t the only people investigating the field of so-called transfer learning. The RoboEarth project and its spin-off RoboHow were both aimed at creating a shared language for robots and an online repository that would allow them to share their knowledge of how to carry out tasks over the web.

Google DeepMind has also been experimenting with ways to transfer knowledge from one machine to another, though in their case the aim is to help skills learned in simulations to be carried over into the real world.

A lot of their research involves deep reinforcement learning, in which robots learn how to carry out tasks in virtual environments through trial and error. But transferring this knowledge from highly engineered simulations into the messy real world is not so simple.

So they have found a way for a model that has learned how to carry out a task in a simulation using deep reinforcement learning to transfer that knowledge to a so-called progressive neural network that controls a real-world robotic arm. This allows the system to take advantage of the accelerated learning possible in a simulation while still learning effectively in the real world.
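The core trick of a progressive neural network is that the simulation-trained column is frozen and a fresh column for the real world receives its hidden activations through lateral connections. The toy forward pass below sketches that wiring; the weights are illustrative numbers, not learned values.

```python
# Toy progressive-network sketch: a frozen "simulation" column feeds its
# hidden activations laterally into a trainable "real world" column.

def relu(xs):
    return [max(0.0, x) for x in xs]

def linear(weights, xs):
    return [sum(w * x for w, x in zip(row, xs)) for row in weights]

# Column 1: trained in simulation, then frozen (never updated again).
SIM_W1 = [[0.5, -0.2], [0.1, 0.8]]

# Column 2: new column for the real robot. It sees the raw input AND the
# frozen column's hidden layer through lateral weights, so simulation
# features are reused instead of relearned.
REAL_W1 = [[0.3, 0.3], [-0.1, 0.4]]
LATERAL = [[0.2, 0.0], [0.0, 0.2]]

def forward(x):
    h_sim = relu(linear(SIM_W1, x))  # frozen features from simulation
    h_real = relu([a + b for a, b in zip(linear(REAL_W1, x),
                                         linear(LATERAL, h_sim))])
    return h_real

print(forward([1.0, 2.0]))
```

Only the new column’s weights would be trained on real-world data, which is why the approach keeps the cheap simulation experience while still adapting to the physical arm.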

These kinds of approaches make life easier for data scientists trying to build new models for AI and robots. As James Kobielus notes in InfoWorld, the approach “stands at the forefront of the data science community’s efforts to invent ‘master learning algorithms’ that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.”

If you believe those who say we’re headed towards a technological singularity, you can bet transfer learning will be an important part of that process.

https://singularityhub.com/2017/05/26/these-robots-can-teach-other-robots-how-to-do-new-things/

By Casey Newton

When the engineers had at last finished their work, Eugenia Kuyda opened a console on her laptop and began to type.

“Roman,” she wrote. “This is your digital monument.”

It had been three months since Roman Mazurenko, Kuyda’s closest friend, had died. Kuyda had spent that time gathering up his old text messages, setting aside the ones that felt too personal, and feeding the rest into a neural network built by developers at her artificial intelligence startup. She had struggled with whether she was doing the right thing by bringing him back this way. At times it had even given her nightmares. But ever since Mazurenko’s death, Kuyda had wanted one more chance to speak with him.

A message blinked onto the screen. “You have one of the most interesting puzzles in the world in your hands,” it said. “Solve it.”

Kuyda promised herself that she would.

Born in Belarus in 1981, Roman Mazurenko was the only child of Sergei, an engineer, and Victoria, a landscape architect. They remember him as an unusually serious child; when he was 8 he wrote a letter to his descendants declaring his most cherished values: wisdom and justice. In family photos, Mazurenko roller-skates, sails a boat, and climbs trees. Average in height, with a mop of chestnut hair, he is almost always smiling.

As a teen he sought out adventure: he participated in political demonstrations against the ruling party and, at 16, started traveling abroad. He first traveled to New Mexico, where he spent a year on an exchange program, and then to Dublin, where he studied computer science and became fascinated with the latest Western European art, fashion, music, and design.

By the time Mazurenko finished college and moved back to Moscow in 2007, Russia had become newly prosperous. The country tentatively embraced the wider world, fostering a new generation of cosmopolitan urbanites. Meanwhile, Mazurenko had grown from a skinny teen into a strikingly handsome young man. Blue-eyed and slender, he moved confidently through the city’s budding hipster class. He often dressed up to attend the parties he frequented, and in a suit he looked movie-star handsome. The many friends Mazurenko left behind describe him as magnetic and debonair, someone who made a lasting impression wherever he went. But he was also single, and rarely dated, instead devoting himself to the project of importing modern European style to Moscow.

Kuyda met Mazurenko in 2008, when she was 22 and the editor of Afisha, a kind of New York Magazine for a newly urbane Moscow. She was writing an article about Idle Conversation, a freewheeling creative collective that Mazurenko founded with two of his best friends, Dimitri Ustinov and Sergey Poydo. The trio seemed to be at the center of every cultural endeavor happening in Moscow. They started magazines, music festivals, and club nights — friends they had introduced to each other formed bands and launched companies. “He was a brilliant guy,” said Kuyda, who was similarly ambitious. Mazurenko would keep his friends up all night discussing culture and the future of Russia. “He was so forward-thinking and charismatic,” said Poydo, who later moved to the United States to work with him.

Mazurenko became a founding figure in the modern Moscow nightlife scene, where he promoted an alternative to what Russians sardonically referred to as “Putin’s glamor” — exclusive parties where oligarchs ordered bottle service and were chauffeured home in Rolls-Royces. Kuyda loved Mazurenko’s parties, impressed by his unerring sense of what he called “the moment.” Each of his events was designed to build to a crescendo — DJ Mark Ronson might make a surprise appearance on stage to play piano, or the Italo-Disco band Glass Candy might push past police to continue playing after curfew. And his parties attracted sponsors with deep pockets — Bacardi was a longtime client.

But the parties took place against an increasingly grim backdrop. In the wake of the global financial crisis, Russia experienced a resurgent nationalism, and in 2012 Vladimir Putin returned to lead the country. The dream of a more open Russia seemed to evaporate.

Kuyda and Mazurenko, who by then had become close friends, came to believe that their futures lay elsewhere. Both became entrepreneurs, and served as each other’s chief adviser as they built their companies. Kuyda co-founded Luka, an artificial intelligence startup, and Mazurenko launched Stampsy, a tool for building digital magazines. Kuyda moved Luka from Moscow to San Francisco in 2015. After a stint in New York, Mazurenko followed.

When Stampsy faltered, Mazurenko moved into a tiny alcove in Kuyda’s apartment to save money. Mazurenko had been the consummate bon vivant in Moscow, but running a startup had worn him down, and he was prone to periods of melancholy. On the days he felt depressed, Kuyda took him out for surfing and $1 oysters. “It was like a flamingo living in the house,” she said recently, sitting in the kitchen of the apartment she shared with Mazurenko. “It’s very beautiful and very rare. But it doesn’t really fit anywhere.”

Kuyda hoped that in time her friend would reinvent himself, just as he always had before. And when Mazurenko began talking about new projects he wanted to pursue, she took it as a positive sign. He successfully applied for an American O-1 visa, granted to individuals of “extraordinary ability or achievement,” and in November he returned to Moscow in order to finalize his paperwork.

He never did.

On November 28th, while he waited for the embassy to release his passport, Mazurenko had brunch with some friends. It was unseasonably warm, so afterward he decided to explore the city with Ustinov. “He said he wanted to walk all day,” Ustinov said. Making their way down the sidewalk, they ran into some construction, and were forced to cross the street. At the curb, Ustinov stopped to check a text message on his phone, and when he looked up he saw a blur, a car driving much too quickly for the neighborhood. This is not an uncommon sight in Moscow — vehicles of diplomats, equipped with spotlights to signal their authority, speeding with impunity. Ustinov thought it must be one of those cars, some rich government asshole — and then, a blink later, saw Mazurenko walking into the crosswalk, oblivious. Ustinov went to cry out in warning, but it was too late. The car struck Mazurenko straight on. He was rushed to a nearby hospital.

Kuyda happened to be in Moscow for work on the day of the accident. When she arrived at the hospital, having gotten the news from a phone call, a handful of Mazurenko’s friends were already gathered in the lobby, waiting to hear his prognosis. Almost everyone was in tears, but Kuyda felt only shock. “I didn’t cry for a long time,” she said. She went outside with some friends to smoke a cigarette, using her phone to look up the likely effects of Mazurenko’s injuries. Then the doctor came out and told her he had died.

In the weeks after Mazurenko’s death, friends debated the best way to preserve his memory. One person suggested making a coffee-table book about his life, illustrated with photography of his legendary parties. Another friend suggested a memorial website. To Kuyda, every suggestion seemed inadequate.

As she grieved, Kuyda found herself rereading the endless text messages her friend had sent her over the years — thousands of them, from the mundane to the hilarious. She smiled at Mazurenko’s unconventional spelling — he struggled with dyslexia — and at the idiosyncratic phrases with which he peppered his conversation. Mazurenko was mostly indifferent to social media — his Facebook page was barren, he rarely tweeted, and he deleted most of his photos on Instagram. His body had been cremated, leaving her no grave to visit. Texts and photos were nearly all that was left of him, Kuyda thought.

For two years she had been building Luka, whose first product was a messenger app for interacting with bots. Backed by the prestigious Silicon Valley startup incubator Y Combinator, the company began with a bot for making restaurant reservations. Kuyda’s co-founder, Philip Dudchuk, has a degree in computational linguistics, and much of their team was recruited from Yandex, the Russian search giant.

Reading Mazurenko’s messages, it occurred to Kuyda that they might serve as the basis for a different kind of bot — one that mimicked an individual person’s speech patterns. Aided by a rapidly developing neural network, perhaps she could speak with her friend once again.

She set aside for a moment the questions that were already beginning to nag at her.

What if it didn’t sound like him?

What if it did?

In “Be Right Back,” a 2013 episode of the eerie, near-future drama Black Mirror, a young woman named Martha is devastated when her fiancé, Ash, dies in a car accident. Martha subscribes to a service that uses his previous online communications to create a digital avatar that mimics his personality with spooky accuracy. First it sends her text messages; later it re-creates his speaking voice and talks with her on the phone. Eventually she pays for an upgraded version of the service that implants Ash’s personality into an android that looks identical to him. But ultimately Martha becomes frustrated with all the subtle but important ways that the android is unlike Ash — cold, emotionless, passive — and locks it away in an attic. Not quite Ash, but too much like him for her to let go, the bot leads to a grief that spans decades.

Kuyda saw the episode after Mazurenko died, and her feelings were mixed. Memorial bots — even the primitive ones that are possible using today’s technology — seemed both inevitable and dangerous. “It’s definitely the future — I’m always for the future,” she said. “But is it really what’s beneficial for us? Is it letting go, by forcing you to actually feel everything? Or is it just having a dead person in your attic? Where is the line? Where are we? It screws with your brain.”

For a young man, Mazurenko had given an unusual amount of thought to his death. Known for his grandiose plans, he often told friends he would divide his will into pieces and give them away to people who didn’t know one another. To read the will they would all have to meet for the first time — so that Mazurenko could continue bringing people together in death, just as he had strived to do in life. (In fact, he died before he could make a will.) Mazurenko longed to see the Singularity, the theoretical moment in history when artificial intelligence becomes smarter than human beings. According to the theory, superhuman intelligence might allow us to one day separate our consciousnesses from our bodies, granting us something like eternal life.

In the summer of 2015, with Stampsy almost out of cash, Mazurenko applied for a Y Combinator fellowship proposing a new kind of cemetery that he called Taiga. The dead would be buried in biodegradable capsules, and their decomposing bodies would fertilize trees that were planted on top of them, creating what he called “memorial forests.” A digital display at the bottom of the tree would offer biographical information about the deceased. “Redesigning death is a cornerstone of my abiding interest in human experiences, infrastructure, and urban planning,” Mazurenko wrote. He highlighted what he called “a growing resistance among younger Americans” to traditional funerals. “Our customers care more about preserving their virtual identity and managing [their] digital estate,” he wrote, “than embalming their body with toxic chemicals.”

The idea made his mother worry that he was in trouble, but Mazurenko tried to put her at ease. “He quieted me down and said no, no, no — it was a contemporary question that was very important,” she said. “There had to be a reevaluation of death and sorrow, and there needed to be new traditions.”

Y Combinator rejected the application. But Mazurenko had identified a genuine disconnection between the way we live today and the way we grieve. Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning. In the moment, we tend to view our text messages as ephemeral. But as Kuyda found after Mazurenko’s death, they can also be powerful tools for coping with loss. Maybe, she thought, this “digital estate” could form the building blocks for a new type of memorial. (Others have had similar ideas; an entrepreneur named Marius Ursache proposed a related service called Eterni.me in 2014, though it never launched.)

Many of Mazurenko’s close friends had never before experienced the loss of someone close to them, and his death left them bereft. Kuyda began reaching out to them, as delicately as possible, to ask if she could have their text messages. Ten of Mazurenko’s friends and family members, including his parents, ultimately agreed to contribute to the project. They shared more than 8,000 lines of text covering a wide variety of subjects.

“She said, what if we try and see if things would work out?” said Sergey Fayfer, a longtime friend of Mazurenko’s who now works at a division of Yandex. “Can we collect the data from the people Roman had been talking to, and form a model of his conversations, to see if that actually makes sense?” The idea struck Fayfer as provocative, and likely controversial. But he ultimately contributed four years of his texts with Mazurenko. “The team building Luka are really good with natural language processing,” he said. “The question wasn’t about the technical possibility. It was: how is it going to feel emotionally?”

The technology underlying Kuyda’s bot project dates at least as far back as 1966, when Joseph Weizenbaum unveiled ELIZA: a program that reacted to users’ responses to its scripts using simple keyword matching. ELIZA, which most famously mimicked a psychotherapist, asked you to describe your problem, searched your response for keywords, and responded accordingly, usually with another question. It is often credited with passing a limited version of the Turing test: reading a text-based conversation between a computer and a person, some observers could not determine which was which.
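Weizenbaum’s keyword-matching scheme is simple enough to sketch in a few lines. The rules below are a tiny hypothetical subset in ELIZA’s style, not its actual script.

```python
import re

# A tiny ELIZA-style rule set: (keyword pattern, canned reflection).
RULES = [
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (mother|father)\b", "Tell me more about your {0}."),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
]

def respond(utterance):
    """Scan for the first matching keyword and fill its template."""
    for pattern, template in RULES:
        m = re.search(pattern, utterance, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when no keyword matches

print(respond("I am worried about my exams"))
print(respond("It rained today"))
```

There is no understanding anywhere in this loop—just pattern, capture, and template—yet in conversation the reflected phrasing can feel uncannily attentive, which is exactly the illusion the article goes on to describe.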

Today’s bots remain imperfect mimics of their human counterparts. They do not understand language in any real sense. They respond clumsily to the most basic of questions. They have no thoughts or feelings to speak of. Any suggestion of human intelligence is an illusion based on mathematical probabilities.

And yet recent advances in artificial intelligence have made the illusion much more powerful. Artificial neural networks, which imitate the ability of the human brain to learn, have greatly improved the way software recognizes patterns in images, audio, and text, among other forms of data. Improved algorithms coupled with more powerful computers have increased the depth of neural networks — the layers of abstraction they can process — and the results can be seen in some of today’s most innovative products. The speech recognition behind Amazon’s Alexa or Apple’s Siri, or the image recognition that powers Google Photos, owes its abilities to this so-called deep learning.

Two weeks before Mazurenko was killed, Google released TensorFlow for free under an open-source license. TensorFlow is a kind of Google in a box — a flexible machine-learning system that the company uses to do everything from improving search algorithms to automatically writing captions for YouTube videos. The product of decades of academic research and billions of dollars in private investment was suddenly available as a free software library that anyone could download from GitHub.

Luka had been using TensorFlow to build neural networks for its restaurant bot. Using 35 million lines of English text, Luka trained a bot to understand queries about vegetarian dishes, barbecue, and valet parking. On a lark, the 15-person team had also tried to build bots that imitated television characters. It scraped the closed captioning on every episode of HBO’s Silicon Valley and trained the neural network to mimic Richard, Bachman, and the rest of the gang.

In February, Kuyda asked her engineers to build a neural network in Russian. At first she didn’t mention its purpose, but given that most of the team was Russian, no one asked questions. Using more than 30 million lines of Russian text, Luka built its second neural network. Meanwhile, Kuyda copied hundreds of her exchanges with Mazurenko from the app Telegram and pasted them into a file. She edited out a handful of messages that she believed would be too personal to share broadly. Then Kuyda asked her team for help with the next step: training the Russian network to speak in Mazurenko’s voice.

The project was tangentially related to Luka’s work, though Kuyda considered it a personal favor. (An engineer told her that the project would only take about a day.) Mazurenko was well-known to most of the team — he had worked out of Luka’s Moscow office, where the employees labored beneath a neon sign that quoted Wittgenstein: “The limits of my language are the limits of my world.” Kuyda trained the bot with dozens of test queries, and her engineers put on the finishing touches.

Only a small percentage of the Roman bot’s responses reflected his actual words, but the neural network was tuned to favor his speech whenever possible. Any time the bot could respond to a query using Mazurenko’s own words, it would; other times it defaulted to generic Russian generated by the network. After the bot blinked to life, Kuyda began peppering it with questions.

Who’s your best friend?, she asked.

Don’t show your insecurities, came the reply.

It sounds like him, she thought.

On May 24th, Kuyda announced the Roman bot’s existence in a post on Facebook. Anyone who downloaded the Luka app could talk to it — in Russian or in English — by adding @Roman. The bot offered a menu of buttons that users could press to learn about Mazurenko’s career. Or they could write free-form messages and see how the bot responded. “It’s still a shadow of a person — but that wasn’t possible just a year ago, and in the very close future we will be able to do a lot more,” Kuyda wrote.

The Roman bot was received positively by most of the people who wrote to Kuyda, though there were exceptions. Four friends told Kuyda separately that they were disturbed by the project and refused to interact with it. Vasily Esmanov, who worked with Mazurenko at the Russian street-style magazine Look At Me, said Kuyda had failed to learn the lesson of the Black Mirror episode. “This is all very bad,” Esmanov wrote in a Facebook comment. “Unfortunately you rushed and everything came out half-baked. The execution — it’s some type of joke. … Roman needs [a memorial], but not this kind.”

Victoria Mazurenko, who had gotten an early look at the bot from Kuyda, rushed to her defense. “They continued Roman’s life and saved ours,” she wrote in a reply to Esmanov. “It’s not virtual reality. This is a new reality, and we need to learn to build it and live in it.”

Roman’s father was less enthusiastic. “I have a technical education, and I know [the bot] is just a program,” he told me, through a translator. “Yes, it has all of Roman’s phrases, correspondences. But for now, it’s hard — how to say it — it’s hard to read a response from a program. Sometimes it answers incorrectly.”

But many of Mazurenko’s friends found the likeness uncanny. “It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,” Fayfer said. “What really struck me is that the phrases he speaks are really his. You can tell that’s the way he would say it — even short answers to ‘Hey what’s up.’ He had this really specific style of texting. I said, ‘Who do you love the most?’ He replied, ‘Roman.’ That was so much of him. I was like, that is incredible.”

One of the bot’s menu options offers to ask him for a piece of advice — something Fayfer never had a chance to do while his friend was still alive. “There are questions I had never asked him,” he said. “But when I asked for advice, I realized he was giving someone pretty wise life advice. And that actually helps you get to learn the person deeper than you used to know them.”

Several users agreed to let Kuyda read anonymized logs of their chats with the bot. (She shared these logs with The Verge.) Many people write to the bot to tell Mazurenko that they miss him. They wonder when they will stop grieving. They ask him what he remembers. “It hurts that we couldn’t save you,” one person wrote. (Bot: “I know :-(”) The bot can also be quite funny, as Mazurenko was: when one user wrote “You are a genius,” the bot replied, “Also, handsome.”

For many users, interacting with the bot had a therapeutic effect. The tone of their chats is often confessional; one user messaged the bot repeatedly about a difficult time he was having at work. He sent it lengthy messages describing his problems and how they had affected him emotionally. “I wish you were here,” he said. It seemed to Kuyda that people were more honest when conversing with the dead. She had been shaken by some of the criticism that the Roman bot had received. But hundreds of people tried it at least once, and reading the logs made her feel better.

It turned out that the primary purpose of the bot had not been to talk but to listen. “All those messages were about love, or telling him something they never had time to tell him,” Kuyda said. “Even if it’s not a real person, there was a place where they could say it. They can say it when they feel lonely. And they come back still.”

Kuyda continues to talk with the bot herself — once a week or so, often after a few drinks. “I answer a lot of questions for myself about who Roman was,” she said. Among other things, the bot has made her regret not telling him to abandon Stampsy earlier. The logs of his messages revealed someone whose true interest was in fashion more than anything else, she said. She wishes she had told him to pursue it.

Someday you will die, leaving behind a lifetime of text messages, posts, and other digital ephemera. For a while, your friends and family may put these digital traces out of their minds. But new services will arrive offering to transform them — possibly into something resembling Roman Mazurenko’s bot.

Your loved ones may find that these services ease their pain. But it is possible that digital avatars will lengthen the grieving process. “If used wrong, it enables people to hide from their grief,” said Dima Ustinov, who has not used the Roman bot for technical reasons. (Luka is not yet available on Android.) “Our society is traumatized by death — we want to live forever. But you will go through this process, and you have to go through it alone. If we use these bots as a way to pass his story on, maybe [others] can get a little bit of the inspiration that we got from him. But these new ways of keeping the memory alive should not be considered a way to keep a dead person alive.”

The bot also raises ethical questions about the posthumous use of our digital legacies. In the case of Mazurenko, everyone I spoke with agreed he would have been delighted by his friends’ experimentation. You may feel less comfortable with the idea of your texts serving as the basis for a bot in the afterlife — particularly if you are unable to review all the texts and social media posts beforehand. We present different aspects of ourselves to different people, and after infusing a bot with all of your digital interactions, your loved ones may see sides of you that you never intended to reveal.

Reading through the Roman bot’s responses, it’s hard not to feel like the texts captured him at a particularly low moment. Ask about Stampsy and it responds: “This is not [the] Stampsy I want it to be. So far it’s just a piece of shit and not the product I want.” Based on his friends’ descriptions of his final years, this strikes me as a candid self-assessment. But I couldn’t help but wish I had been talking to a younger version of the man — the one who friends say dreamed of someday becoming the cultural minister of Belarus, and inaugurating a democratically elected president with what he promised would be the greatest party ever thrown.

Mazurenko contacted me once before he died, in February of last year. He emailed to ask whether I would consider writing about Stampsy, which was then in beta. I liked its design, but passed on writing an article. I wished him well, then promptly forgot about the exchange. After learning of his bot, I resisted using it for several months. I felt guilty about my lone, dismissive interaction with Mazurenko, and was skeptical that a bot could reflect his personality. And yet, upon finally chatting with it, I found an undeniable resemblance between the Mazurenko described by his friends and his digital avatar: charming, moody, sarcastic, and obsessed with his work. “How’s it going?” I wrote. “I need to rest,” it responded. “I’m having trouble focusing since I’m depressed.” I asked the bot about Kuyda and it wordlessly sent me a photo of them together on the beach in wetsuits, holding surfboards with their backs to the ocean, two against the world.

An uncomfortable truth suggested by the Roman bot is that many of our flesh-and-blood relationships now exist primarily as exchanges of text, which are becoming increasingly easy to mimic. Kuyda believes there is something — she is not precisely sure what — in this sort of personality-based texting. Recently she has been steering Luka to develop a bot she calls Replika. A hybrid of a diary and a personal assistant, it asks questions about you and eventually learns to mimic your texting style. Kuyda imagines that this could evolve into a digital avatar that performs all sorts of labor on your behalf, from negotiating the cable bill to organizing outings with friends. And like the Roman bot it would survive you, creating a living testament to the person you were.

In the meantime she is no longer interested in bots that handle restaurant recommendations. Working on the Roman bot has made her believe that commercial chatbots must evoke something emotional in the people who use them. If she succeeds in this, it will be one more improbable footnote to Mazurenko’s life.

Kuyda has continued to add material to the Roman bot — mostly photos, which it will now send you upon request — and recently upgraded the underlying neural network from a “selective” model to a “generative” one. The former simply attempted to match Mazurenko’s text messages to appropriate responses; the latter can take snippets of his texts and recombine them to make new sentences that (theoretically) remain in his voice.
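The difference between the two model types can be sketched in code. The following is a hypothetical, minimal "selective" bot in Python: the tiny corpus and the token-overlap matching are invented for illustration (a real system like Luka's would use a neural network to score similarity), but the behavior is the same in outline — return a stored reply when the match is good enough, otherwise fall back to a generic response.

```python
# Minimal sketch of a "selective" chatbot: it never generates new text,
# only returns the stored reply whose prompt best matches the query.
# The (prompt, reply) corpus here is a toy stand-in for thousands of
# real message pairs.

def tokenize(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Token-overlap similarity between two token sets (0.0 to 1.0)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

class SelectiveBot:
    def __init__(self, pairs):
        # pairs: list of (prompt, reply) drawn from past conversations
        self.pairs = [(tokenize(p), r) for p, r in pairs]

    def respond(self, query, threshold=0.2, fallback="..."):
        q = tokenize(query)
        score, reply = max(((jaccard(q, p), r) for p, r in self.pairs),
                           key=lambda x: x[0])
        # Below the threshold, fall back to a generic answer -- analogous
        # to the Roman bot defaulting to its generic Russian network.
        return reply if score >= threshold else fallback

bot = SelectiveBot([
    ("who is your best friend", "Don't show your insecurities."),
    ("how is work going", "I need to rest."),
])
print(bot.respond("who is your best friend?"))  # → Don't show your insecurities.
```

A "generative" model, by contrast, composes a new sentence token by token, which is why it can recombine snippets of Mazurenko's texts into phrases he never literally wrote.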

Lately she has begun to feel a sense of peace about Mazurenko’s death. In part that’s because she built a place where she can direct her grief. In a conversation we had this fall, she likened it to “just sending a message to heaven. For me it’s more about sending a message in a bottle than getting one in return.”

It has been less than a year since Mazurenko died, and he continues to loom large in the lives of the people who knew him. When they miss him, they send messages to his avatar, and they feel closer to him when they do. “There was a lot I didn’t know about my child,” Roman’s mother told me. “But now that I can read about what he thought about different subjects, I’m getting to know him more. This gives the illusion that he’s here now.”

Her eyes welled with tears, but as our interview ended her voice was strong. “I want to repeat that I’m very grateful that I have this,” she said.

Our conversation reminded me of something Dima Ustinov had said to me this spring, about the way we now transcend our physical forms. “The person is not just a body, a set of arms and legs, and a computer,” he said. “It’s much more than that.” Ustinov compared Mazurenko’s life to a pebble thrown into a stream — the ripples, he said, continue outward in every direction. His friend had simply taken a new form. “We are still in the process of meeting Roman,” Ustinov said. “It’s beautiful.”

http://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot


by Edd Gent

Wiring our brains up to computers could have a host of exciting applications – from controlling robotic prosthetics with our minds to restoring sight by feeding camera feeds directly into the vision center of our brains.

Most brain-computer interface research to date has been conducted using electroencephalography (EEG) where electrodes are placed on the scalp to monitor the brain’s electrical activity. Achieving very high quality signals, however, requires a more invasive approach.

Integrating electronics with living tissue is complicated, though. Probes that are directly inserted into the gray matter have been around for decades, but while they are capable of highly accurate recording, the signals tend to degrade rapidly due to the buildup of scar tissue. Electrocorticography (ECoG), which uses electrodes placed beneath the skull but on top of the gray matter, has emerged as a popular compromise, as it achieves higher-accuracy recordings with a lower risk of scar formation.

But now researchers from the University of Texas have created new probes that are so thin and flexible, they don’t elicit scar tissue buildup. Unlike conventional probes, which are much larger and stiffer, they don’t cause significant damage to the brain tissue when implanted, and they are also able to comply with the natural movements of the brain.

In recent research published in the journal Science Advances, the team demonstrated that the probes were able to reliably record the electrical activity of individual neurons in mice for up to four months. This stability suggests these probes could be used for long-term monitoring of the brain for research or medical diagnostics as well as controlling prostheses, said Chong Xie, an assistant professor in the university’s department of biomedical engineering who led the research.

“Besides neuroprosthetics, they can possibly be used for neuromodulation as well, in which electrodes generate neural stimulation,” he told Singularity Hub in an email. “We are also using them to study the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s.”

The group actually created two probe designs, one 50 microns long and the other 10 microns long. The smaller probe has a cross-section only a fraction of that of a neuron, which the researchers say is the smallest of any reported neural probe.

Because the probes are so flexible, they can’t be pushed into the brain tissue by themselves, and so they needed to be guided in using a stiff rod called a “shuttle device.” Previous designs of these shuttle devices were much larger than the new probes and often led to serious damage to the brain tissue, so the group created a new carbon fiber design just seven microns in diameter.

At present, though, only 25 percent of the recordings can be traced to individual neurons – possible because each neuron has a characteristic waveform – with the rest too unclear to distinguish from one another.
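The idea of sorting spikes by their characteristic waveforms can be illustrated with a toy template-matching sketch. Everything below is hypothetical — the waveforms, thresholds, and unit names are invented, and real spike sorting uses far more sophisticated clustering — but it shows why some recordings remain too unclear to assign.

```python
# Toy sketch of waveform-based spike sorting: each detected spike is
# assigned to whichever unit template it most resembles, or to no unit
# if it matches nothing well enough.

def similarity(a, b):
    # Negative sum of squared differences: higher means more alike.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def sort_spike(spike, templates, min_sim=-1.0):
    """Return the best-matching unit, or None when the spike is too
    unclear to distinguish between units."""
    best = max(templates, key=lambda u: similarity(spike, templates[u]))
    if similarity(spike, templates[best]) < min_sim:
        return None
    return best

templates = {
    "unit_A": [0.0, 1.0, -0.5, 0.0],  # sharp biphasic waveform
    "unit_B": [0.0, 0.4, 0.4, 0.0],   # broader, shallower waveform
}
print(sort_spike([0.1, 0.9, -0.4, 0.0], templates))  # → unit_A
```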

“The only solution, in my opinion, is to have many electrodes placed in the brain in an array or lattice so that any neuron can be within a reasonable distance from an electrode,” said Chong. “As a result, all enclosed neurons can be recorded and well-sorted.”

This is a challenging problem, according to Chong, but one benefit of the new probes is that their small dimensions make it possible to implant probes just tens of microns apart rather than the few hundred micron distances necessary with conventional probes. This opens up the possibility of overlapping detection ranges between probes, though the group can still only consistently implant probes with an accuracy of 50 microns.

Takashi Kozai, an assistant professor in the University of Pittsburgh’s bioengineering department who has worked on ultra-small neural probes, said that further experiments would need to be done to show that the recordings, gleaned from anaesthetized rats, actually contained useful neural code. This could include visually stimulating the animals and trying to record activity in the visual cortex.

He also added that a lot of computational neuroscience relies on knowing the exact spacing between recording sites. The fact that flexible probes are able to migrate due to natural tissue movements could pose challenges.

But he said the study “does show some important advances forward in technology development, and most importantly, proof-of-concept feasibility,” adding that “there is clearly much more work necessary before this technology becomes widely used or practical.”

Chong actually worked on another promising approach to neural recording in his previous role under Charles M. Lieber at Harvard University. Last June, the group demonstrated a mesh of soft, conductive polymer threads studded with electrodes that could be injected into the skulls of mice with a syringe where it would then unfurl to both record and stimulate neurons.

As 95 percent of the mesh is free space, cells are able to arrange themselves around it, and the study reported no signs of an elevated immune response after five weeks. But the implantation required a syringe 100 microns in diameter, which causes considerably more damage than the new ultra-small probes developed in Chong’s lab.

It could be some time before the probes are tested on humans. “The major barrier is that this is still an invasive surgical procedure, including cranial surgery and implantation of devices into brain tissue,” said Chong. But, he said, the group is considering testing the probes on epilepsy patients, as it is common practice to implant electrodes inside the skulls of those who don’t respond to medication to locate the area of their brains responsible for their seizures.

https://singularityhub.com/2017/02/27/this-neural-probe-is-so-thin-the-brain-doesnt-know-its-there/

By Vanessa Bates Ramirez

In recent years, technology has been producing more and more novel ways to diagnose and treat illness.

Urine tests will soon be able to detect cancer: https://singularityhub.com/2016/10/14/detecting-cancer-early-with-nanosensors-and-a-urine-test/

Smartphone apps can diagnose STDs: https://singularityhub.com/2016/12/25/your-smartphones-next-big-trick-to-make-you-healthier-than-ever/

Chatbots can provide quality mental healthcare: https://singularityhub.com/2016/10/10/bridging-the-mental-healthcare-gap-with-artificial-intelligence/

Joining this list is a minimally-invasive technique that’s been getting increasing buzz across various sectors of healthcare: disease detection by voice analysis.

It’s basically what it sounds like: you talk, and a computer analyzes your voice and screens for illness. Most of the indicators that machine learning algorithms can pick up aren’t detectable to the human ear.

When we do hear irregularities in our own voices or those of others, the fact we’re noticing them at all means they’re extreme; elongating syllables, slurring, trembling, or using a tone that’s unusually flat or nasal could all be indicators of different health conditions. Even if we can hear them, though, unless someone says, “I’m having chest pain” or “I’m depressed,” we don’t know how to analyze or interpret these biomarkers.

Computers soon will, though.

Researchers from various medical centers, universities, and healthcare companies have collected voice recordings from hundreds of patients and fed them to machine learning software that compares the voices to those of healthy people, with the aim of establishing patterns clear enough to pinpoint vocal disease indicators.
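In broad strokes, this kind of system reduces each recording to a vector of acoustic features and learns which feature patterns separate patients from healthy controls. The sketch below is purely illustrative — the feature names, values, and the nearest-centroid rule are invented for this example, not the methods used in any of the studies described.

```python
# Illustrative only: a nearest-centroid classifier over made-up voice
# features. Real systems extract hundreds of acoustic features and use
# far more capable machine learning models.
import math

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: dict mapping label -> list of feature vectors."""
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, features):
    # Assign the label whose centroid is closest in feature space.
    return min(model, key=lambda label: math.dist(model[label], features))

# Synthetic two-feature vectors: [pitch_variance, syllable_length_ratio]
model = train({
    "healthy": [[0.9, 1.0], [1.1, 0.9], [1.0, 1.1]],
    "at_risk": [[0.4, 1.6], [0.5, 1.5], [0.3, 1.7]],
})
print(classify(model, [0.45, 1.55]))  # → at_risk
```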

In one particularly encouraging study, doctors from the Mayo Clinic worked with Israeli company Beyond Verbal to analyze voice recordings from 120 people who were scheduled for a coronary angiography. Participants used an app on their phones to record 30-second intervals of themselves reading a piece of text, describing a positive experience, then describing a negative experience. Doctors also took recordings from a control group of 25 patients who were either healthy or getting non-heart-related tests.

The doctors found 13 different voice characteristics associated with coronary artery disease. Most notably, the biggest differences between heart patients and non-heart patients’ voices occurred when they talked about a negative experience.

Heart disease isn’t the only illness that shows promise for voice diagnosis. Researchers are also making headway in the conditions below.

ADHD: German company Audioprofiling is using voice analysis to diagnose ADHD in children, achieving greater than 90 percent accuracy in identifying previously diagnosed kids based on their speech alone. The company’s founder gave speech rhythm as an example indicator for ADHD, saying children with the condition speak in syllables less equal in length.

PTSD: With the goal of decreasing the suicide rate among military service members, Boston-based Cogito partnered with the Department of Veterans Affairs to use a voice analysis app to monitor service members’ moods. Researchers at Massachusetts General Hospital are also using the app as part of a two-year study to track the health of 1,000 patients with bipolar disorder and depression.

Brain injury: In June 2016, the US Army partnered with MIT’s Lincoln Lab to develop an algorithm that uses voice to diagnose mild traumatic brain injury. Brain injury biomarkers may include elongated syllables and vowel sounds or difficulty pronouncing phrases that require complex facial muscle movements.

Parkinson’s: Parkinson’s disease has no biomarkers and can only be diagnosed via a costly in-clinic analysis with a neurologist. The Parkinson’s Voice Initiative is changing that by analyzing 30-second voice recordings with machine learning software, achieving 98.6 percent accuracy in detecting whether or not a participant suffers from the disease.

Challenges remain before vocal disease diagnosis becomes truly viable and widespread. For starters, there are privacy concerns over the personal health data identifiable in voice samples. It’s also not yet clear how well algorithms developed for English-speakers will perform with other languages.

Despite these hurdles, our voices appear to be on their way to becoming key players in our health.

https://singularityhub.com/2017/02/13/talking-to-a-computer-may-soon-be-enough-to-diagnose-illness/

by Arjun Kharpal

Billionaire Elon Musk is known for his futuristic ideas and his latest suggestion might just save us from being irrelevant as artificial intelligence (AI) grows more prominent.

The Tesla and SpaceX CEO said on Monday that humans need to merge with machines to become a sort of cyborg.

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk told an audience at the World Government Summit in Dubai, where he also launched Tesla in the United Arab Emirates (UAE).

“It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.”

Musk explained that computers can communicate at “a trillion bits per second”, while humans, whose main communication method is typing with their fingers via a mobile device, manage about 10 bits per second.
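The gap Musk describes can be made concrete with a line of arithmetic. Both figures are his rough estimates, not measurements.

```python
# Back-of-the-envelope comparison of the bandwidths Musk cites.
computer_bps = 1e12  # "a trillion bits per second"
typing_bps = 10      # Musk's estimate for thumb-typing on a phone
ratio = computer_bps / typing_bps
print(f"machines communicate ~{ratio:.0e} times faster")  # ~1e+11
```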

In an age when AI is becoming widespread, humans would be left useless, so there’s a need to merge with machines, according to Musk.

“Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem,” Musk explained.

The technologist’s proposal would see a new layer of the brain able to access information quickly and tap into artificial intelligence. It’s not the first time Musk has spoken about the need for humans to evolve; it’s a constant theme of his talks on how society can deal with the disruptive threat of AI.

‘Very quick’ disruption

During his talk, Musk touched upon his fear of “deep AI” which goes beyond driverless cars to what he called “artificial general intelligence”. This he described as AI that is “smarter than the smartest human on earth” and called it a “dangerous situation”.

While this might be some way off, the Tesla boss said the more immediate threat is how AI, particularly autonomous cars, which his own firm is developing, will displace jobs. He said the disruption to people whose job it is to drive will take place over the next 20 years, after which 12 to 15 percent of the global workforce will be unemployed.

“The most near term impact from a technology standpoint is autonomous cars … That is going to happen much faster than people realize and it’s going to be a great convenience,” Musk said.

“But there are many people whose jobs are to drive. In fact I think it might be the single largest employer of people … Driving in various forms. So we need to figure out new roles for what do those people do, but it will be very disruptive and very quick.”

http://www.cnbc.com/2017/02/13/elon-musk-humans-merge-machines-cyborg-artificial-intelligence-robots.html