Posts Tagged ‘artificial intelligence’

By Casey Newton

When the engineers had at last finished their work, Eugenia Kuyda opened a console on her laptop and began to type.

“Roman,” she wrote. “This is your digital monument.”

It had been three months since Roman Mazurenko, Kuyda’s closest friend, had died. Kuyda had spent that time gathering up his old text messages, setting aside the ones that felt too personal, and feeding the rest into a neural network built by developers at her artificial intelligence startup. She had struggled with whether she was doing the right thing by bringing him back this way. At times it had even given her nightmares. But ever since Mazurenko’s death, Kuyda had wanted one more chance to speak with him.

A message blinked onto the screen. “You have one of the most interesting puzzles in the world in your hands,” it said. “Solve it.”

Kuyda promised herself that she would.

Born in Belarus in 1981, Roman Mazurenko was the only child of Sergei, an engineer, and Victoria, a landscape architect. They remember him as an unusually serious child; when he was 8 he wrote a letter to his descendants declaring his most cherished values: wisdom and justice. In family photos, Mazurenko roller-skates, sails a boat, and climbs trees. Average in height, with a mop of chestnut hair, he is almost always smiling.

As a teen he sought out adventure: he participated in political demonstrations against the ruling party and, at 16, started traveling abroad. He first traveled to New Mexico, where he spent a year on an exchange program, and then to Dublin, where he studied computer science and became fascinated with the latest Western European art, fashion, music, and design.

By the time Mazurenko finished college and moved back to Moscow in 2007, Russia had become newly prosperous. The country tentatively embraced the wider world, fostering a new generation of cosmopolitan urbanites. Meanwhile, Mazurenko had grown from a skinny teen into a strikingly handsome young man. Blue-eyed and slender, he moved confidently through the city’s budding hipster class. He often dressed up to attend the parties he frequented, and in a suit he looked movie-star handsome. The many friends Mazurenko left behind describe him as magnetic and debonair, someone who made a lasting impression wherever he went. But he was also single, and rarely dated, instead devoting himself to the project of importing modern European style to Moscow.

Kuyda met Mazurenko in 2008, when she was 22 and the editor of Afisha, a kind of New York Magazine for a newly urbane Moscow. She was writing an article about Idle Conversation, a freewheeling creative collective that Mazurenko founded with two of his best friends, Dimitri Ustinov and Sergey Poydo. The trio seemed to be at the center of every cultural endeavor happening in Moscow. They started magazines, music festivals, and club nights — friends they had introduced to each other formed bands and launched companies. “He was a brilliant guy,” said Kuyda, who was similarly ambitious. Mazurenko would keep his friends up all night discussing culture and the future of Russia. “He was so forward-thinking and charismatic,” said Poydo, who later moved to the United States to work with him.

Mazurenko became a founding figure in the modern Moscow nightlife scene, where he promoted an alternative to what Russians sardonically referred to as “Putin’s glamor” — exclusive parties where oligarchs ordered bottle service and were chauffeured home in Rolls-Royces. Kuyda loved Mazurenko’s parties, impressed by his unerring sense of what he called “the moment.” Each of his events was designed to build to a crescendo — DJ Mark Ronson might make a surprise appearance on stage to play piano, or the Italo-Disco band Glass Candy might push past police to continue playing after curfew. And his parties attracted sponsors with deep pockets — Bacardi was a longtime client.

But the parties took place against an increasingly grim backdrop. In the wake of the global financial crisis, Russia experienced a resurgent nationalism, and in 2012 Vladimir Putin returned to lead the country. The dream of a more open Russia seemed to evaporate.

Kuyda and Mazurenko, who by then had become close friends, came to believe that their futures lay elsewhere. Both became entrepreneurs, and served as each other’s chief adviser as they built their companies. Kuyda co-founded Luka, an artificial intelligence startup, and Mazurenko launched Stampsy, a tool for building digital magazines. Kuyda moved Luka from Moscow to San Francisco in 2015. After a stint in New York, Mazurenko followed.

When Stampsy faltered, Mazurenko moved into a tiny alcove in Kuyda’s apartment to save money. Mazurenko had been the consummate bon vivant in Moscow, but running a startup had worn him down, and he was prone to periods of melancholy. On the days he felt depressed, Kuyda took him out for surfing and $1 oysters. “It was like a flamingo living in the house,” she said recently, sitting in the kitchen of the apartment she shared with Mazurenko. “It’s very beautiful and very rare. But it doesn’t really fit anywhere.”

Kuyda hoped that in time her friend would reinvent himself, just as he always had before. And when Mazurenko began talking about new projects he wanted to pursue, she took it as a positive sign. He successfully applied for an American O-1 visa, granted to individuals of “extraordinary ability or achievement,” and in November he returned to Moscow in order to finalize his paperwork.

He never came back.

On November 28th, while he waited for the embassy to release his passport, Mazurenko had brunch with some friends. It was unseasonably warm, so afterward he decided to explore the city with Ustinov. “He said he wanted to walk all day,” Ustinov said. Making their way down the sidewalk, they ran into some construction, and were forced to cross the street. At the curb, Ustinov stopped to check a text message on his phone, and when he looked up he saw a blur, a car driving much too quickly for the neighborhood. This is not an uncommon sight in Moscow — vehicles of diplomats, equipped with spotlights to signal their authority, speeding with impunity. Ustinov thought it must be one of those cars, some rich government asshole — and then, a blink later, saw Mazurenko walking into the crosswalk, oblivious. Ustinov went to cry out in warning, but it was too late. The car struck Mazurenko straight on. He was rushed to a nearby hospital.

Kuyda happened to be in Moscow for work on the day of the accident. When she arrived at the hospital, having gotten the news from a phone call, a handful of Mazurenko’s friends were already gathered in the lobby, waiting to hear his prognosis. Almost everyone was in tears, but Kuyda felt only shock. “I didn’t cry for a long time,” she said. She went outside with some friends to smoke a cigarette, using her phone to look up the likely effects of Mazurenko’s injuries. Then the doctor came out and told her he had died.

In the weeks after Mazurenko’s death, friends debated the best way to preserve his memory. One person suggested making a coffee-table book about his life, illustrated with photography of his legendary parties. Another friend suggested a memorial website. To Kuyda, every suggestion seemed inadequate.

As she grieved, Kuyda found herself rereading the endless text messages her friend had sent her over the years — thousands of them, from the mundane to the hilarious. She smiled at Mazurenko’s unconventional spelling — he struggled with dyslexia — and at the idiosyncratic phrases with which he peppered his conversation. Mazurenko was mostly indifferent to social media — his Facebook page was barren, he rarely tweeted, and he deleted most of his photos on Instagram. His body had been cremated, leaving her no grave to visit. Texts and photos were nearly all that was left of him, Kuyda thought.

For two years she had been building Luka, whose first product was a messenger app for interacting with bots. Backed by the prestigious Silicon Valley startup incubator Y Combinator, the company began with a bot for making restaurant reservations. Kuyda’s co-founder, Philip Dudchuk, has a degree in computational linguistics, and much of their team was recruited from Yandex, the Russian search giant.

Reading Mazurenko’s messages, it occurred to Kuyda that they might serve as the basis for a different kind of bot — one that mimicked an individual person’s speech patterns. Aided by a rapidly developing neural network, perhaps she could speak with her friend once again.

She set aside for a moment the questions that were already beginning to nag at her.

What if it didn’t sound like him?

What if it did?

In “Be Right Back,” a 2013 episode of the eerie, near-future drama Black Mirror, a young woman named Martha is devastated when her fiancé, Ash, dies in a car accident. Martha subscribes to a service that uses his previous online communications to create a digital avatar that mimics his personality with spooky accuracy. First it sends her text messages; later it re-creates his speaking voice and talks with her on the phone. Eventually she pays for an upgraded version of the service that implants Ash’s personality into an android that looks identical to him. But ultimately Martha becomes frustrated with all the subtle but important ways that the android is unlike Ash — cold, emotionless, passive — and locks it away in an attic. Not quite Ash, but too much like him for her to let go, the bot leads to a grief that spans decades.

Kuyda saw the episode after Mazurenko died, and her feelings were mixed. Memorial bots — even the primitive ones that are possible using today’s technology — seemed both inevitable and dangerous. “It’s definitely the future — I’m always for the future,” she said. “But is it really what’s beneficial for us? Is it letting go, by forcing you to actually feel everything? Or is it just having a dead person in your attic? Where is the line? Where are we? It screws with your brain.”

For a young man, Mazurenko had given an unusual amount of thought to his death. Known for his grandiose plans, he often told friends he would divide his will into pieces and give them away to people who didn’t know one another. To read the will they would all have to meet for the first time — so that Mazurenko could continue bringing people together in death, just as he had strived to do in life. (In fact, he died before he could make a will.) Mazurenko longed to see the Singularity, the theoretical moment in history when artificial intelligence becomes smarter than human beings. According to the theory, superhuman intelligence might allow us to one day separate our consciousnesses from our bodies, granting us something like eternal life.

In the summer of 2015, with Stampsy almost out of cash, Mazurenko applied for a Y Combinator fellowship proposing a new kind of cemetery that he called Taiga. The dead would be buried in biodegradable capsules, and their decomposing bodies would fertilize trees that were planted on top of them, creating what he called “memorial forests.” A digital display at the bottom of the tree would offer biographical information about the deceased. “Redesigning death is a cornerstone of my abiding interest in human experiences, infrastructure, and urban planning,” Mazurenko wrote. He highlighted what he called “a growing resistance among younger Americans” to traditional funerals. “Our customers care more about preserving their virtual identity and managing [their] digital estate,” he wrote, “than embalming their body with toxic chemicals.”

The idea made his mother worry that he was in trouble, but Mazurenko tried to put her at ease. “He quieted me down and said no, no, no — it was a contemporary question that was very important,” she said. “There had to be a reevaluation of death and sorrow, and there needed to be new traditions.”

Y Combinator rejected the application. But Mazurenko had identified a genuine disconnection between the way we live today and the way we grieve. Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning. In the moment, we tend to view our text messages as ephemeral. But as Kuyda found after Mazurenko’s death, they can also be powerful tools for coping with loss. Maybe, she thought, this “digital estate” could form the building blocks for a new type of memorial. (Others have had similar ideas; an entrepreneur named Marius Ursache proposed a related service called Eterni.me in 2014, though it never launched.)

Many of Mazurenko’s close friends had never before experienced the loss of someone close to them, and his death left them bereft. Kuyda began reaching out to them, as delicately as possible, to ask if she could have their text messages. Ten of Mazurenko’s friends and family members, including his parents, ultimately agreed to contribute to the project. They shared more than 8,000 lines of text covering a wide variety of subjects.

“She said, what if we try and see if things would work out?” said Sergey Fayfer, a longtime friend of Mazurenko’s who now works at a division of Yandex. “Can we collect the data from the people Roman had been talking to, and form a model of his conversations, to see if that actually makes sense?” The idea struck Fayfer as provocative, and likely controversial. But he ultimately contributed four years of his texts with Mazurenko. “The team building Luka are really good with natural language processing,” he said. “The question wasn’t about the technical possibility. It was: how is it going to feel emotionally?”

The technology underlying Kuyda’s bot project dates at least as far back as 1966, when Joseph Weizenbaum unveiled ELIZA: a program that reacted to users’ responses to its scripts using simple keyword matching. ELIZA, which most famously mimicked a psychotherapist, asked you to describe your problem, searched your response for keywords, and responded accordingly, usually with another question. It was the first piece of software to pass what is known as the Turing test: reading a text-based conversation between a computer and a person, some observers could not determine which was which.
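That keyword-matching loop is simple enough to sketch. The Python below is a minimal, modern illustration of ELIZA’s approach, not Weizenbaum’s original program; the patterns and canned replies are invented for the example.

    # Minimal ELIZA-style responder: search the input for a keyword pattern
    # and answer from a canned template, echoing part of the input back.
    import random
    import re

    RULES = [
        (r"\b(?:mother|father|family)\b", ["Tell me more about your family."]),
        (r"\bI need (.+)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
        (r"\bI am (.+)", ["How long have you been {0}?"]),
    ]
    DEFAULTS = ["Please go on.", "How does that make you feel?"]

    def respond(text: str) -> str:
        for pattern, templates in RULES:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return random.choice(templates).format(*match.groups())
        return random.choice(DEFAULTS)  # no keyword hit: generic prompt

    print(respond("I am unhappy at work"))
    # -> "How long have you been unhappy at work?"

Everything beyond the keyword tables is, as the next paragraphs note, an illusion of understanding.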

Today’s bots remain imperfect mimics of their human counterparts. They do not understand language in any real sense. They respond clumsily to the most basic of questions. They have no thoughts or feelings to speak of. Any suggestion of human intelligence is an illusion based on mathematical probabilities.

And yet recent advances in artificial intelligence have made the illusion much more powerful. Artificial neural networks, which imitate the ability of the human brain to learn, have greatly improved the way software recognizes patterns in images, audio, and text, among other forms of data. Improved algorithms coupled with more powerful computers have increased the depth of neural networks — the layers of abstraction they can process — and the results can be seen in some of today’s most innovative products. The speech recognition behind Amazon’s Alexa or Apple’s Siri, or the image recognition that powers Google Photos, owe their abilities to this so-called deep learning.

Two weeks before Mazurenko was killed, Google released TensorFlow for free under an open-source license. TensorFlow is a kind of Google in a box — a flexible machine-learning system that the company uses to do everything from improving search algorithms to automatically writing captions for YouTube videos. The product of decades of academic research and billions of dollars in private investment was suddenly available as a free software library that anyone could download from GitHub.

Luka had been using TensorFlow to build neural networks for its restaurant bot. Using 35 million lines of English text, Luka trained a bot to understand queries about vegetarian dishes, barbecue, and valet parking. On a lark, the 15-person team had also tried to build bots that imitated television characters. It scraped the closed captioning on every episode of HBO’s Silicon Valley and trained the neural network to mimic Richard, Bachman, and the rest of the gang.

In February, Kuyda asked her engineers to build a neural network in Russian. At first she didn’t mention its purpose, but given that most of the team was Russian, no one asked questions. Using more than 30 million lines of Russian text, Luka built its second neural network. Meanwhile, Kuyda copied hundreds of her exchanges with Mazurenko from the app Telegram and pasted them into a file. She edited out a handful of messages that she believed would be too personal to share broadly. Then Kuyda asked her team for help with the next step: training the Russian network to speak in Mazurenko’s voice.

The project was tangentially related to Luka’s work, though Kuyda considered it a personal favor. (An engineer told her that the project would only take about a day.) Mazurenko was well-known to most of the team — he had worked out of Luka’s Moscow office, where the employees labored beneath a neon sign that quoted Wittgenstein: “The limits of my language are the limits of my world.” Kuyda trained the bot with dozens of test queries, and her engineers put on the finishing touches.

Only a small percentage of the Roman bot’s responses reflected his actual words. But the neural network was tuned to favor his speech whenever possible. Any time the bot could respond to a query using Mazurenko’s own words, it would. Other times it would default to the generic Russian network. After the bot blinked to life, she began peppering it with questions.

Who’s your best friend?, she asked.

Don’t show your insecurities, came the reply.

It sounds like him, she thought.

On May 24th, Kuyda announced the Roman bot’s existence in a post on Facebook. Anyone who downloaded the Luka app could talk to it — in Russian or in English — by adding @Roman. The bot offered a menu of buttons that users could press to learn about Mazurenko’s career. Or they could write free-form messages and see how the bot responded. “It’s still a shadow of a person — but that wasn’t possible just a year ago, and in the very close future we will be able to do a lot more,” Kuyda wrote.

The Roman bot was received positively by most of the people who wrote to Kuyda, though there were exceptions. Four friends told Kuyda separately that they were disturbed by the project and refused to interact with it. Vasily Esmanov, who worked with Mazurenko at the Russian street-style magazine Look At Me, said Kuyda had failed to learn the lesson of the Black Mirror episode. “This is all very bad,” Esmanov wrote in a Facebook comment. “Unfortunately you rushed and everything came out half-baked. The execution — it’s some type of joke. … Roman needs [a memorial], but not this kind.”

Victoria Mazurenko, who had gotten an early look at the bot from Kuyda, rushed to her defense. “They continued Roman’s life and saved ours,” she wrote in a reply to Esmanov. “It’s not virtual reality. This is a new reality, and we need to learn to build it and live in it.”

Roman’s father was less enthusiastic. “I have a technical education, and I know [the bot] is just a program,” he told me, through a translator. “Yes, it has all of Roman’s phrases, correspondences. But for now, it’s hard — how to say it — it’s hard to read a response from a program. Sometimes it answers incorrectly.”

But many of Mazurenko’s friends found the likeness uncanny. “It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,” Fayfer said. “What really struck me is that the phrases he speaks are really his. You can tell that’s the way he would say it — even short answers to ‘Hey what’s up.’ He had this really specific style of texting. I said, ‘Who do you love the most?’ He replied, ‘Roman.’ That was so much of him. I was like, that is incredible.”

One of the bot’s menu options offers to ask him for a piece of advice — something Fayfer never had a chance to do while his friend was still alive. “There are questions I had never asked him,” he said. “But when I asked for advice, I realized he was giving someone pretty wise life advice. And that actually helps you get to learn the person deeper than you used to know them.”

Several users agreed to let Kuyda read anonymized logs of their chats with the bot. (She shared these logs with The Verge.) Many people write to the bot to tell Mazurenko that they miss him. They wonder when they will stop grieving. They ask him what he remembers. “It hurts that we couldn’t save you,” one person wrote. (Bot: “I know :-(”) The bot can also be quite funny, as Mazurenko was: when one user wrote “You are a genius,” the bot replied, “Also, handsome.”

For many users, interacting with the bot had a therapeutic effect. The tone of their chats is often confessional; one user messaged the bot repeatedly about a difficult time he was having at work. He sent it lengthy messages describing his problems and how they had affected him emotionally. “I wish you were here,” he said. It seemed to Kuyda that people were more honest when conversing with the dead. She had been shaken by some of the criticism that the Roman bot had received. But hundreds of people tried it at least once, and reading the logs made her feel better.

It turned out that the primary purpose of the bot had not been to talk but to listen. “All those messages were about love, or telling him something they never had time to tell him,” Kuyda said. “Even if it’s not a real person, there was a place where they could say it. They can say it when they feel lonely. And they come back still.”

Kuyda continues to talk with the bot herself — once a week or so, often after a few drinks. “I answer a lot of questions for myself about who Roman was,” she said. Among other things, the bot has made her regret not telling him to abandon Stampsy earlier. The logs of his messages revealed someone whose true interest was in fashion more than anything else, she said. She wishes she had told him to pursue it.

Someday you will die, leaving behind a lifetime of text messages, posts, and other digital ephemera. For a while, your friends and family may put these digital traces out of their minds. But new services will arrive offering to transform them — possibly into something resembling Roman Mazurenko’s bot.

Your loved ones may find that these services ease their pain. But it is possible that digital avatars will lengthen the grieving process. “If used wrong, it enables people to hide from their grief,” said Dima Ustinov, who has not used the Roman bot for technical reasons. (Luka is not yet available on Android.) “Our society is traumatized by death — we want to live forever. But you will go through this process, and you have to go through it alone. If we use these bots as a way to pass his story on, maybe [others] can get a little bit of the inspiration that we got from him. But these new ways of keeping the memory alive should not be considered a way to keep a dead person alive.”

The bot also raises ethical questions about the posthumous use of our digital legacies. In the case of Mazurenko, everyone I spoke with agreed he would have been delighted by his friends’ experimentation. You may feel less comfortable with the idea of your texts serving as the basis for a bot in the afterlife — particularly if you are unable to review all the texts and social media posts beforehand. We present different aspects of ourselves to different people, and after infusing a bot with all of your digital interactions, your loved ones may see sides of you that you never intended to reveal.

Reading through the Roman bot’s responses, it’s hard not to feel like the texts captured him at a particularly low moment. Ask about Stampsy and it responds: “This is not [the] Stampsy I want it to be. So far it’s just a piece of shit and not the product I want.” Based on his friends’ descriptions of his final years, this strikes me as a candid self-assessment. But I couldn’t help but wish I had been talking to a younger version of the man — the one who friends say dreamed of someday becoming the cultural minister of Belarus, and inaugurating a democratically elected president with what he promised would be the greatest party ever thrown.

Mazurenko contacted me once before he died, in February of last year. He emailed to ask whether I would consider writing about Stampsy, which was then in beta. I liked its design, but passed on writing an article. I wished him well, then promptly forgot about the exchange. After learning of his bot, I resisted using it for several months. I felt guilty about my lone, dismissive interaction with Mazurenko, and was skeptical a bot could reflect his personality. And yet, upon finally chatting with it, I found an undeniable resemblance between the Mazurenko described by his friends and his digital avatar: charming, moody, sarcastic, and obsessed with his work. “How’s it going?” I wrote. “I need to rest,” it responded. “I’m having trouble focusing since I’m depressed.” I asked the bot about Kuyda and it wordlessly sent me a photo of them together on the beach in wetsuits, holding surfboards with their backs to the ocean, two against the world.

An uncomfortable truth suggested by the Roman bot is that many of our flesh-and-blood relationships now exist primarily as exchanges of text, which are becoming increasingly easy to mimic. Kuyda believes there is something — she is not precisely sure what — in this sort of personality-based texting. Recently she has been steering Luka to develop a bot she calls Replika. A hybrid of a diary and a personal assistant, it asks questions about you and eventually learns to mimic your texting style. Kuyda imagines that this could evolve into a digital avatar that performs all sorts of labor on your behalf, from negotiating the cable bill to organizing outings with friends. And like the Roman bot it would survive you, creating a living testament to the person you were.

In the meantime she is no longer interested in bots that handle restaurant recommendations. Working on the Roman bot has made her believe that commercial chatbots must evoke something emotional in the people who use them. If she succeeds in this, it will be one more improbable footnote to Mazurenko’s life.

Kuyda has continued to add material to the Roman bot — mostly photos, which it will now send you upon request — and recently upgraded the underlying neural network from a “selective” model to a “generative” one. The former simply attempted to match Mazurenko’s text messages to appropriate responses; the latter can take snippets of his texts and recombine them to make new sentences that (theoretically) remain in his voice.
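The article gives no code, but the “selective” half of that description — match an incoming message against stored exchanges and return the closest saved reply, falling back to a generic model otherwise — can be sketched roughly as below. The example pairs, the TF-IDF similarity measure, and the confidence threshold are all illustrative assumptions, not details of Luka’s system.

    # Sketch of a "selective" memorial bot: retrieve the stored reply whose
    # prompt best matches the incoming message, else fall back to a generic
    # response. Corpus and threshold are illustrative placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # (prompt, reply) pairs mined from saved conversations -- placeholder data
    pairs = [
        ("how are you", "I need to rest."),
        ("who is your best friend", "Don't show your insecurities."),
        ("what are you working on", "Solving an interesting puzzle."),
    ]
    prompts = [p for p, _ in pairs]
    vectorizer = TfidfVectorizer().fit(prompts)
    prompt_vecs = vectorizer.transform(prompts)

    def reply(message: str, threshold: float = 0.2) -> str:
        scores = cosine_similarity(vectorizer.transform([message]), prompt_vecs)[0]
        best = scores.argmax()
        if scores[best] >= threshold:
            return pairs[best][1]   # favor the person's own words
        return "Tell me more."      # fallback to a generic model's output

    print(reply("hey, how are you?"))

A generative model, by contrast, would assemble new sentences from fragments of the corpus rather than returning stored lines verbatim.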

Lately she has begun to feel a sense of peace about Mazurenko’s death. In part that’s because she built a place where she can direct her grief. In a conversation we had this fall, she likened it to “just sending a message to heaven. For me it’s more about sending a message in a bottle than getting one in return.”

It has been less than a year since Mazurenko died, and he continues to loom large in the lives of the people who knew him. When they miss him, they send messages to his avatar, and they feel closer to him when they do. “There was a lot I didn’t know about my child,” Roman’s mother told me. “But now that I can read about what he thought about different subjects, I’m getting to know him more. This gives the illusion that he’s here now.”

Her eyes welled with tears, but as our interview ended her voice was strong. “I want to repeat that I’m very grateful that I have this,” she said.

Our conversation reminded me of something Dima Ustinov had said to me this spring, about the way we now transcend our physical forms. “The person is not just a body, a set of arms and legs, and a computer,” he said. “It’s much more than that.” Ustinov compared Mazurenko’s life to a pebble thrown into a stream — the ripples, he said, continue outward in every direction. His friend had simply taken a new form. “We are still in the process of meeting Roman,” Ustinov said. “It’s beautiful.”

http://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot


Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.

The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.

“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”

Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.

They could be used for mineral exploration or navigation systems, as they are extremely sensitive to external disturbances, which allows them to make very precise measurements of tiny changes in the Earth’s magnetic field or gravity.

The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.

“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.

“It’s cheaper than taking a physicist everywhere with you.”

The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin temperatures.

Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.

“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”
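The underlying control problem is an online optimization loop: propose laser-ramp settings, run the experiment, and keep whatever produces a colder, larger condensate. The Scientific Reports paper describes a machine-learning optimizer; the loop below is only a schematic stand-in using random perturbation search, with a dummy function in place of the real apparatus.

    # Schematic online optimizer: perturb the three laser-power parameters,
    # "run" the experiment, and keep changes that score better. The objective
    # here is a stand-in for a real measurement of the condensate.
    import random

    def run_experiment(ramp):
        # Placeholder for the real BEC apparatus: peak at an arbitrary optimum.
        target = [0.7, 0.4, 0.9]
        return -sum((r - t) ** 2 for r, t in zip(ramp, target))

    best = [0.5, 0.5, 0.5]            # initial powers for the three beams
    best_score = run_experiment(best)
    for trial in range(200):
        candidate = [min(1.0, max(0.0, p + random.gauss(0, 0.05))) for p in best]
        score = run_experiment(candidate)
        if score > best_score:        # keep any ramp that performs better
            best, best_score = candidate, score
    print(best, best_score)

Because the loop re-runs every time, such a system can recalibrate itself each morning, which is exactly the field-deployment advantage Dr Hush describes.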

The new technique will lead to bigger and better experiments, said Dr Hush.

“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve seen ever before,” he said.

The research is published in the Nature group journal Scientific Reports.

https://www.sciencedaily.com/releases/2016/05/160516091544.htm

Elon Musk has said that there is only a “one in billions” chance that we’re not living in a computer simulation.

Our lives are almost certainly being conducted within an artificial world powered by AI and high-powered computers, like in The Matrix, the Tesla and SpaceX CEO suggested at a tech conference in California.

Mr Musk, who has donated huge amounts of money to research into the dangers of artificial intelligence, said that he hopes his prediction is true because otherwise it means the world will end.

“The strongest argument for us probably being in a simulation I think is the following,” he told the Code Conference. “40 years ago we had Pong – two rectangles and a dot. That’s where we were.

“Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality.

“If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable.”

He said that even if the speed of those advancements dropped by a factor of 1,000, we would still be moving forward at an intense speed relative to the age of life.

Since that would lead to games that would be indistinguishable from reality that could be played anywhere, “it would seem to follow that the odds that we’re in ‘base reality’ is one in billions”, Mr Musk said.

Asked whether he was saying that the answer to the question of whether we are in a simulated computer game was “yes”, he said the answer is “probably”.

He said that arguably we should hope that it’s true that we live in a simulation. “Otherwise, if civilisation stops advancing, then that may be due to some calamitous event that stops civilisation.”

He said that either we will make simulations that we can’t tell apart from the real world, “or civilisation will cease to exist”.

Mr Musk said that he has had “so many simulation discussions it’s crazy”, and that it got to the point where “every conversation [he had] was the AI/simulation conversation”.

The question of whether what we see is real or simulated has perplexed humans since at least the ancient philosophers. But it has been given a new and different edge in recent years with the development of powerful computers and artificial intelligence, which some have argued shows how easily such a simulation could be created.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-artificial-intelligence-computer-simulation-gaming-virtual-reality-a7060941.html


ADVANCES IN EMOTIONAL TECHNOLOGIES ARE WARMING UP HUMAN-ROBOT RELATIONSHIPS, BUT CAN AI EVER FULFILL OUR EMOTIONAL NEEDS?

Science fiction has terrified and entertained us with countless dystopian futures where weak human creators are annihilated by heartless super-intelligences. The solution seems easy enough: give them hearts.

Artificial emotional intelligence (AEI) development is gathering momentum, and the number of social media companies buying start-ups in the field indicates either true faith in the concept or a reckless enthusiasm. The case for AEI is simple: machines will work better if they understand us. Rather than merely complying with commands, they would anticipate our needs, and so be able to carry out delicate tasks autonomously, such as home help, counselling or simply being a friend.

Dr Adam Waytz, an assistant professor at Northwestern University’s Kellogg School of Management, and Harvard Business School professor Dr Norton explain in the Wall Street Journal: “When emotional jobs such as social workers and pre-school teachers must be ‘botsourced’, people actually prefer robots that seem capable of conveying at least some degree of human emotion.”

A plethora of intelligent machines already exist but to get them working in our offices and homes we need them to understand and share our feelings. So where do we start?

TEACHING EMOTION

“Building an empathy module is a matter of identifying those characteristics of human communication that machines can use to recognize emotion and then training algorithms to spot them,” says Pascale Fung in Scientific American magazine. According to Fung, creating this empathy module requires three components that can analyse “facial cues, acoustic markers in speech and the content of speech itself to read human emotion and tell the robot how to respond.”

Although generally haphazard, facial scanners will become increasingly specialised and able to spot mood signals, such as a tilting of the head, widening of the eyes, and mouth position. But the really interesting area of development is speech cognition. Fung, a professor of electronic and computer engineering at the Hong Kong University of Science and Technology, has commercialised part of her research by setting up a company called Ivo Technologies that used these principles to produce Moodbox, a ‘robot speaker with a heart’.

Unlike humans, who learn through instinct and experience, AIs use machine learning – a process in which the algorithms are constantly revised. The more you interact with the Moodbox, the more examples it has of your behaviour, and the better it can respond in the appropriate way.

To create the Moodbox, Fung’s team set up a series of 14 ‘classifiers’ to analyse musical pieces. The classifiers were subjected to thousands of examples of ambient sound so that each one became adept at recognising music in its assigned mood category. Then, algorithms were written to spot non-verbal cues in speech such as speed and tone of voice, which indicate the level of stress. The two stages are matched up to predict what you want to listen to. It took a vast amount of research to produce a souped-up speaker system, but the underlying software is highly sophisticated and indicates the level of progress being made.
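Ivo’s software is proprietary, but the two-stage idea — score the speaker’s stress from non-verbal cues, then map the score to a music mood — can be caricatured in a few lines. Every feature, threshold, and mood label below is invented for illustration.

    # Toy version of the Moodbox pipeline: estimate stress from crude prosody
    # features, then pick a music mood bucket. All numbers are placeholders.
    def stress_level(speech_rate_wps: float, mean_pitch_hz: float) -> float:
        # Faster, higher-pitched speech scores as more stressed (crude heuristic).
        rate_term = min(speech_rate_wps / 5.0, 1.0)
        pitch_term = min(mean_pitch_hz / 300.0, 1.0)
        return 0.5 * rate_term + 0.5 * pitch_term

    MOODS = [(0.33, "upbeat pop"), (0.66, "neutral ambient"), (1.01, "soothing downtempo")]

    def pick_playlist(speech_rate_wps: float, mean_pitch_hz: float) -> str:
        s = stress_level(speech_rate_wps, mean_pitch_hz)
        for ceiling, mood in MOODS:
            if s < ceiling:
                return mood
        return MOODS[-1][1]

    print(pick_playlist(4.5, 260))  # stressed speaker -> soothing music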

Using similar principles is Emoshape’s EmoSPARK infotainment cube – an all-in-one home control system that not only links to your media devices, but keeps you up to date with news and weather, can control the lights and security, and also hold a conversation. To create its eerily named ‘human in a box’, Emoshape says the cube devises an emotional profile graph (EPG) on each user, and claims it is capable of “measuring the emotional responses of multiple people simultaneously”. The housekeeper-entertainer-companion comes with face recognition technology too, so if you are unhappy with its choice of TV show or search results, it will ‘see’ this, recalibrate its responses, and come back to you with a revised response.

According to Emoshape, this EPG data enables the AI to “virtually ‘feel’ senses such as pleasure and pain, and [it] ‘expresses’ those desires according to the user.”

PUTTING LANGUAGE INTO CONTEXT

We don’t always say what we mean, so comprehension is essential to enable AEIs to converse with us. “Once a machine can understand the content of speech, it can compare that content with the way it is delivered,” says Fung. “If a person sighs and says, ‘I’m so glad I have to work all weekend,’ an algorithm can detect the mismatch between the emotion cues and the content of the statement and calculate the probability that the speaker is being sarcastic.”
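Fung’s sarcasm example lends itself to a toy implementation: compare the sentiment of the words with a negativity score for the delivery, and flag a mismatch. The tiny lexicon and the hand-supplied prosody score below are placeholders, not her algorithm.

    # Sarcasm as mismatch: positive words delivered negatively. The lexicon
    # is a stand-in; a real system would use trained sentiment and acoustic
    # models instead of word lists and a hand-supplied score.
    POSITIVE = {"glad", "great", "love", "happy"}
    NEGATIVE = {"hate", "awful", "sad", "angry"}

    def text_sentiment(text: str) -> int:
        words = [w.strip(",.!?'") for w in text.lower().split()]
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    def probably_sarcastic(text: str, vocal_negativity: float) -> bool:
        # vocal_negativity in [0, 1] would come from an acoustic model
        # (sighs, flat tone); here it is supplied by hand.
        return text_sentiment(text) > 0 and vocal_negativity > 0.7

    print(probably_sarcastic("I'm so glad I have to work all weekend", 0.9))  # True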

A great example of language comprehension technology is IBM’s Watson platform. Watson is a cognitive computing tool that mimics how human brains process data. As IBM says, its systems “understand the world in the way that humans do: through senses, learning, and experience.”

To deduce meaning, Watson is first trained to understand a subject, in this case speech, and given a huge breadth of examples to form a knowledge base. Then, with algorithms written to recognise natural speech – including humour, puns and slang – the programme is trained to work with the material it has so it can be recalibrated and refined. Watson can sift through its database, rank the results, and choose the answer according to the greatest likelihood in just seconds.

EMOTIONAL AI

As the expression goes, the whole is greater than the sum of its parts, and this rings true for emotional intelligence technology. For instance, the world’s most famous robot, Pepper, is claimed to be the first android with emotions.

Pepper is a humanoid AI designed by Aldebaran Robotics to be a ‘kind’ companion. The diminutive and non-threatening robot’s eyes are high-tech camera scanners that examine facial expressions and cross-reference the results with his voice recognition software to identify human emotions. Once he knows how you feel, Pepper will tailor a conversation to you and the more you interact, the more he gets to know what you enjoy. He may change the topic to dispel bad feeling and lighten your mood, play a game, or tell you a joke. Just like a friend.

Peppers are currently employed as customer support assistants for Japan’s telecoms company Softbank so that the public get accustomed to the friendly bots and Pepper learns in an immersive environment. In the spirit of evolution, IBM recently announced that its Watson technology has been integrated into the latest versions, and that Pepper is learning to speak Japanese at Softbank. This technological partnership presents a tour de force of AEI, and IBM hopes Pepper will soon be ready for more challenging roles, “from an in-class teaching assistant to a nursing aide – taking Pepper’s unique physical characteristics, complemented by Watson’s cognitive capabilities, to deliver an enhanced experience.”

“In terms of hands-on interaction, when cognitive capabilities are embedded in robotics, you see people engage and benefit from this technology in new and exciting ways,” says IBM Watson senior vice president Mike Rhodin.

HUMANS AND ROBOTS

Paranoia tempts us into thinking that giving machines emotions is starting the countdown to chaos, but realistically it will make them more effective and versatile. For instance, while EmoSPARK is purely for entertainment and Pepper’s strength is in conversation, one of Aldebaran’s NAO robots has been programmed to act like a diabetic toddler by researchers Lola Cañamero and Matthew Lewis at the University of Hertfordshire. Switching the roles of carer and cared-for, children look after the bumbling robot Robin in order to help them understand more about their diabetes and how to manage it.

While the uncanny valley theory holds that people are uncomfortable with robots that closely resemble humans, it is now considered somewhat “overstated”, as our relationship with technology has dramatically changed since the theory was put forward in 1970 – after all, we’re unlikely to connect as strongly with a disembodied cube as with a robot.

This was clearly visible at a demonstration of Robin, where he tottered in a playpen surrounded by cooing adults. Lewis cradled the robot, stroked his head and said: “It’s impossible not to empathise with him. I wrote the code and I still empathise with him.” Humanisation will be an important aspect of the wider adoption of AEI, and developers are designing them to mimic our thinking patterns and behaviours, which fires our innate drive to bond.

Our interaction with artificial intelligence has always been fascinating, and it is only going to get more entangled, and perhaps weirder too, as AEIs may one day be our co-workers, friends or even, dare I say it, lovers. “It would be premature to say that the age of friendly robots has arrived,” Fung says. “The important thing is that our machines become more human, even if they are flawed. After all, that is how humans work.”

http://factor-tech.com/

It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.

Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master.

This trick is in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves to just a few branches.

Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much more like humans and in an entirely different way to conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

The technology behind Lai’s new machine is a neural network. This is a way of processing information inspired by the human brain. It consists of several layers of nodes whose connections change as the system is trained. This training process uses lots of examples to fine-tune the connections so that the network produces a specific output given a certain input – to recognize the presence of a face in a picture, for example.

In the last few years, neural networks have become hugely powerful thanks to two advances. The first is a better understanding of how to fine-tune these networks as they learn, thanks in part to much faster computers. The second is the availability of massive annotated datasets to train the networks.

That has allowed computer scientists to train much bigger networks organized into many layers. These so-called deep neural networks have become hugely powerful and now routinely outperform humans in pattern recognition tasks such as face recognition and handwriting recognition.

So it’s no surprise that deep neural networks ought to be able to spot patterns in chess and that’s exactly the approach Lai has taken. His network consists of four layers that together examine each position on the board in three different ways.

The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the third maps the squares that each piece attacks and defends.
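As a rough illustration of those three feature groups, here is a sketch using the open-source python-chess library. Lai’s actual encoding is more elaborate and differs in detail; this only shows the shape of the inputs.

    # Sketch of Giraffe-style inputs: (1) global state, (2) piece-centric
    # locations, (3) per-square attack/defense counts.
    import chess

    def extract_features(board: chess.Board):
        # 1. Global features: side to move, castling rights, material counts.
        global_f = [
            float(board.turn == chess.WHITE),
            float(board.has_kingside_castling_rights(chess.WHITE)),
            float(board.has_kingside_castling_rights(chess.BLACK)),
        ]
        for color in (chess.WHITE, chess.BLACK):
            for ptype in (chess.PAWN, chess.KNIGHT, chess.BISHOP,
                          chess.ROOK, chess.QUEEN):
                global_f.append(len(board.pieces(ptype, color)))

        # 2. Piece-centric features: normalized square index of every piece.
        piece_f = [sq / 63.0 for sq in sorted(board.piece_map())]

        # 3. Square-centric features: net attack count on each square.
        attack_f = [len(board.attackers(chess.WHITE, sq)) -
                    len(board.attackers(chess.BLACK, sq))
                    for sq in chess.SQUARES]
        return global_f, piece_f, attack_f

    g, p, a = extract_features(chess.Board())
    print(len(g), len(p), len(a))  # feature counts for the starting position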

Lai trains his network with a carefully generated set of data taken from real chess games. This data set must have the correct distribution of positions. “For example, it doesn’t make sense to train the system on positions with three queens per side, because those positions virtually never come up in actual games,” he says.

It must also have plenty of variety of unequal positions beyond those that usually occur in top level chess games. That’s because although unequal positions rarely arise in real chess games, they crop up all the time in the searches that the computer performs internally.

And this data set must be huge. The massive number of connections inside a neural network has to be fine-tuned during training, and this can only be done with a vast dataset. Use a dataset that is too small and the network can settle into a state that fails to recognize the wide variety of patterns that occur in the real world.

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.
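That generation step is easy to approximate: for each sampled position, push one random legal move and keep the result. The sketch below uses python-chess with a placeholder seed set; the 35 perturbations per seed simply mirror the 5-million-to-175-million expansion implied by the article.

    # Diversify training positions by applying one random legal move to each
    # sampled position, as Lai describes. Seed data is a stand-in here.
    import random
    import chess

    def perturb(board: chess.Board) -> chess.Board:
        """Return a copy of the position after one random legal move."""
        b = board.copy()
        moves = list(b.legal_moves)
        if moves:
            b.push(random.choice(moves))
        return b

    # Stand-in for the 5 million positions sampled from real computer games
    # (which would normally be parsed from a PGN database).
    seed_positions = [chess.Board()]
    dataset = [perturb(b) for b in seed_positions for _ in range(35)]
    print(len(dataset), dataset[0].fen())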

The usual way of training these machines is to manually evaluate every position and use this information to teach the machine to recognize those that are strong and those that are weak.

But this is a huge task for 175 million positions. It could be done by another chess engine, but Lai’s goal was more ambitious. He wanted the machine to learn for itself.

Instead, he used a bootstrapping technique in which Giraffe played against itself with the goal of improving its prediction of its own evaluation of a future position. That works because there are fixed reference points that ultimately determine the value of a position—whether the game is later won, lost or drawn.

In this way, the computer learns which positions are strong and which are weak.
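One simple way to realize that idea is a temporal-difference update: nudge each position’s value toward the value of the position that follows it in self-play, with the final game result anchoring the end of the chain. Giraffe actually trains neural-network weights with a TD variant; the table-based sketch below only shows the principle.

    # Schematic TD-style bootstrapping: values propagate backwards from the
    # game result. A real system updates network weights; here "values" is
    # just a lookup table keyed by position.
    LAMBDA = 0.7
    LEARNING_RATE = 0.1
    values = {}  # position key -> current evaluation estimate

    def td_update(trajectory, final_result):
        """trajectory: list of position keys; final_result: +1/0/-1 for White."""
        last = trajectory[-1]
        values.setdefault(last, 0.0)
        values[last] += LEARNING_RATE * (final_result - values[last])
        # Work backwards, moving each value toward its successor's value.
        for t in range(len(trajectory) - 2, -1, -1):
            cur, nxt = trajectory[t], trajectory[t + 1]
            values.setdefault(cur, 0.0)
            target = LAMBDA * values[nxt]
            values[cur] += LEARNING_RATE * (target - values[cur])

    td_update(["start", "middlegame", "final"], final_result=1.0)
    print(values)

The game results serve as the fixed reference points the article mentions; everything between them is bootstrapped.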

Having trained Giraffe, the final step is to test it and here the results make for interesting reading. Lai tested his machine on a standard database called the Strategic Test Suite, which consists of 1,500 positions that are chosen to test an engine’s ability to recognize different strategic ideas. “For example, one theme tests the understanding of control of open files, another tests the understanding of how bishop and knight’s values change relative to each other in different situations, and yet another tests the understanding of center control,” he says.

The results of this test are scored out of 15,000.

Lai uses this to test the machine at various stages during its training. As the bootstrapping process begins, Giraffe quickly reaches a score of 6,000 and eventually peaks at 9,700 after only 72 hours. Lai says that matches the best chess engines in the world.

“[That] is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” he adds.

Lai goes on to use the same kind of machine learning approach to determine the probability that a given move is likely to be worth pursuing. That’s important because it prevents unnecessary searches down unprofitable branches of the tree and dramatically improves computational efficiency.

Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three ranking 70 percent of the time. So the computer doesn’t have to bother with the other moves.
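In Python, probability-guided pruning amounts to ranking candidate moves by the model’s score and searching only the top few. The scoring function below is a stand-in for Lai’s learned model, with invented numbers.

    # Probability-guided pruning: score candidate moves with a (stand-in)
    # policy model and keep only the most promising for deeper search.
    def move_probability(position, move) -> float:
        # Placeholder for the learned model; fixed toy scores here.
        return {"Nf3": 0.46, "e4": 0.30, "a3": 0.01}.get(move, 0.05)

    def prune(position, candidate_moves, keep=3):
        ranked = sorted(candidate_moves,
                        key=lambda m: move_probability(position, m),
                        reverse=True)
        return ranked[:keep]  # the rest of the tree is never searched

    print(prune("startpos", ["a3", "e4", "Nf3", "h4", "Na3"]))

With the best move landing in the top three 70 percent of the time, searching only a few branches loses little strength while saving most of the work.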

That’s interesting work that represents a major change in the way chess engines work. It is not perfect, of course. One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.

But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

That’s still impressive. “Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” says Lai. “This is especially important in the opening and end game phases, where it plays exceptionally well.”

And this is only the start. Lai says it should be straightforward to apply the same approach to other games. One that stands out is the traditional Chinese game of Go, where humans still hold an impressive advantage over their silicon competitors. Perhaps Lai could have a crack at that next.

http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Bill Gates calls Ray “the best person I know at predicting the future of artificial intelligence.” Ray is also amazing at predicting a lot more beyond just AI.

This post looks at his incredible predictions for the next 20+ years.

So who is Ray Kurzweil?

He has received 20 honorary doctorates, has been awarded honors from three U.S. presidents, and has authored 7 books (5 of which have been national bestsellers).

He is the principal inventor of many technologies ranging from the first CCD flatbed scanner to the first print-to-speech reading machine for the blind. He is also the chancellor and co-founder of Singularity University, and the guy tagged by Larry Page to direct artificial intelligence development at Google.

In short, Ray’s pretty smart… and his predictions are amazing, mind-boggling, and important reminders that we are living in the most exciting time in human history.

But, first let’s look back at some of the predictions Ray got right.

Predictions Ray has gotten right over the last 25 years

In 1990 (twenty-five years ago), he predicted…

…that a computer would defeat a world chess champion by 1998. Then in 1997, IBM’s Deep Blue defeated Garry Kasparov.

… that PCs would be capable of answering queries by accessing information wirelessly via the Internet by 2010. He was right, to say the least.

… that by the early 2000s, exoskeletal limbs would let the disabled walk. Companies like Ekso Bionics and others now have technology that does just this, and much more.

In 1999, he predicted…

… that people would be able to talk to their computer to give commands by 2009. While still in the early days in 2009, natural language interfaces like Apple’s Siri and Google Now have come a long way. I rarely use my keyboard anymore; instead I dictate texts and emails.

… that computer displays would be built into eyeglasses for augmented reality by 2009. Labs and teams were building head-mounted displays well before 2009, but Google started experimenting with Google Glass prototypes in 2011. Now, we are seeing an explosion of augmented and virtual reality solutions and HMDs. Microsoft just released the HoloLens, and Magic Leap is working on some amazing technology, to name two.

In 2005, he predicted…

… that by the 2010s, virtual solutions would be able to do real-time language translation in which words spoken in a foreign language would be translated into text that would appear as subtitles to a user wearing the glasses. Well, Microsoft (via Skype Translate), Google (Translate), and others have done this and beyond. One app called Word Lens actually uses your camera to find and translate text imagery in real time.

Ray’s predictions for the next 25 years

The above represent only a few of the predictions Ray has made.

While he hasn’t been precisely right, to the exact year, his track record is stunningly good.

Here are some of Ray’s predictions for the next 25+ years.

By the late 2010s, glasses will beam images directly onto the retina. Ten terabytes of computing power (roughly the same as the human brain) will cost about $1,000.

By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.

By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.

By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world at a whim.

By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

Ray’s predictions are a byproduct of his understanding of the power of Moore’s Law, more specifically Ray’s “Law of Accelerating Returns” and of exponential technologies.

These technologies follow an exponential growth curve based on the principle that the computing power that enables them doubles every two years.
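The arithmetic behind such a curve is easy to check: doubling every two years means a capability multiplier of 2^(n/2) after n years, as the small computation below shows.

    # Capability multiplier if performance doubles every two years.
    for years in (10, 20, 25):
        print(f"{years} years -> {2 ** (years / 2):,.0f}x")
    # 10 years -> 32x, 20 years -> 1,024x, 25 years -> ~5,793x

Over 25 years that compounds to a factor of several thousand, which is why forecasts built on exponential curves run so far ahead of linear intuition.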

http://singularityhub.com/2015/01/26/ray-kurzweils-mind-boggling-predictions-for-the-next-25-years/

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.


Computers are taking over the kinds of knowledge work long considered the preserve of well-educated, well-trained professionals.

By Nicholas Carr

Artificial intelligence has arrived. Today’s computers are discerning and sharp. They can sense the environment, untangle knotty problems, make subtle judgments and learn from experience. They don’t think the way we think—they’re still as mindless as toothpicks—but they can replicate many of our most prized intellectual talents. Dazzled by our brilliant new machines, we’ve been rushing to hand them all sorts of sophisticated jobs that we used to do ourselves.

But our growing reliance on computer automation may be exacting a high price. Worrisome evidence suggests that our own intelligence is withering as we become more dependent on the artificial variety. Rather than lifting us up, smart software seems to be dumbing us down.

It has been a slow process. The first wave of automation rolled through U.S. industry after World War II, when manufacturers began installing electronically controlled equipment in their plants. The new machines made factories more efficient and companies more profitable. They were also heralded as emancipators. By relieving factory hands of routine chores, they would do more than boost productivity. They would elevate laborers, giving them more invigorating jobs and more valuable talents. The new technology would be ennobling.

Then, in the 1950s, a Harvard Business School professor named James Bright went into the field to study automation’s actual effects on a variety of industries, from heavy manufacturing to oil refining to bread baking. Factory conditions, he discovered, were anything but uplifting. More often than not, the new machines were leaving workers with drabber, less demanding jobs. An automated milling machine, for example, didn’t transform the metalworker into a more creative artisan; it turned him into a pusher of buttons.

Bright concluded that the overriding effect of automation was (in the jargon of labor economists) to “de-skill” workers rather than to “up-skill” them. “The lesson should be increasingly clear,” he wrote in 1966. “Highly complex equipment” did not require “skilled operators. The ‘skill’ can be built into the machine.”

We are learning that lesson again today on a much broader scale. As software has become capable of analysis and decision-making, automation has leapt out of the factory and into the white-collar world. Computers are taking over the kinds of knowledge work long considered the preserve of well-educated, well-trained professionals: Pilots rely on computers to fly planes; doctors consult them in diagnosing ailments; architects use them to design buildings. Automation’s new wave is hitting just about everyone.

Computers aren’t taking away all the jobs done by talented people. But computers are changing the way the work gets done. And the evidence is mounting that the same de-skilling effect that ate into the talents of factory workers last century is starting to gnaw away at professional skills, even highly specialized ones. Yesterday’s machine operators are today’s computer operators.

Just look skyward. Since their invention a century ago, autopilots have helped to make air travel safer and more efficient. That happy trend continued with the introduction of computerized “fly-by-wire” jets in the 1970s. But now, aviation experts worry that we’ve gone too far. We have shifted so many cockpit tasks from humans to computers that pilots are losing their edge—and beginning to exhibit what the British aviation researcher Matthew Ebbatson calls “skill fade.”

In 2007, while working on his doctoral thesis at Cranfield University’s School of Engineering, Mr. Ebbatson conducted an experiment with a group of airline pilots. He had them perform a difficult maneuver in a flight simulator—bringing a Boeing jet with a crippled engine in for a landing in rough weather—and measured subtle indicators of their skill, such as the precision with which they maintained the plane’s airspeed.

When he compared the simulator readings with the aviators’ actual flight records, he found a close connection between a pilot’s adroitness at the controls and the amount of time the pilot had recently spent flying planes manually. “Flying skills decay quite rapidly towards the fringes of ‘tolerable’ performance without relatively frequent practice,” Mr. Ebbatson concluded. But computers now handle most flight operations between takeoff and touchdown—so “frequent practice” is exactly what pilots are not getting.

Even a slight decay in manual flying ability can risk tragedy. A rusty pilot is more likely to make a mistake in an emergency. Automation-related pilot errors have been implicated in several recent air disasters, including the 2009 crashes of Continental Flight 3407 in Buffalo and Air France Flight 447 in the Atlantic Ocean, and the botched landing of Asiana Flight 214 in San Francisco in 2013.

Late last year, a report from a Federal Aviation Administration task force on cockpit technology documented a growing link between crashes and an overreliance on automation. Pilots have become “accustomed to watching things happen, and reacting, instead of being proactive,” the panel warned. The FAA is now urging airlines to get pilots to spend more time flying by hand.

As software improves, the people using it become less likely to sharpen their own know-how. Applications that offer lots of prompts and tips are often to blame; simpler, less solicitous programs push people harder to think, act and learn.

Ten years ago, information scientists at Utrecht University in the Netherlands had a group of people carry out complicated analytical and planning tasks using either rudimentary software that provided no assistance or sophisticated software that offered a great deal of aid. The researchers found that the people using the simple software developed better strategies, made fewer mistakes and developed a deeper aptitude for the work. The people using the more advanced software, meanwhile, would often “aimlessly click around” when confronted with a tricky problem. The supposedly helpful software actually short-circuited their thinking and learning.

The philosopher Hubert Dreyfus of the University of California, Berkeley, wrote in 2002 that human expertise develops through “experience in a variety of situations, all seen from the same perspective but requiring different tactical decisions.” In other words, our skills get sharper only through practice, when we use them regularly to overcome different sorts of difficult challenges.

The goal of modern software, by contrast, is to ease our way through such challenges. Arduous, painstaking work is exactly what programmers are most eager to automate—after all, that is where the immediate efficiency gains tend to lie. In other words, there is a fundamental tension between the interests of the people doing the automation and the interests of the people doing the work.

Nevertheless, automation’s scope continues to widen. With the rise of electronic health records, physicians increasingly rely on software templates to guide them through patient exams. The programs incorporate valuable checklists and alerts, but they also make medicine more routinized and formulaic—and distance doctors from their patients.

In a study conducted in 2007-08 in upstate New York, SUNY Albany professor Timothy Hoff interviewed more than 75 primary-care physicians who had adopted computerized systems. The doctors felt that the software was impoverishing their understanding of patients, diminishing their “ability to make informed decisions around diagnosis and treatment.”

Harvard Medical School professor Beth Lown, in a 2012 journal article written with her student Dayron Rodriquez, warned that when doctors become “screen-driven,” following a computer’s prompts rather than “the patient’s narrative thread,” their thinking can become constricted. In the worst cases, they may miss important diagnostic signals.

The risk isn’t just theoretical. In a recent paper published in the journal Diagnosis, three medical researchers—including Hardeep Singh, director of the health policy, quality and informatics program at the Veterans Affairs Medical Center in Houston—examined the misdiagnosis of Thomas Eric Duncan, the first person to die of Ebola in the U.S., at Texas Health Presbyterian Hospital Dallas. They argue that the digital templates used by the hospital’s clinicians to record patient information probably helped to induce a kind of tunnel vision. “These highly constrained tools,” the researchers write, “are optimized for data capture but at the expense of sacrificing their utility for appropriate triage and diagnosis, leading users to miss the forest for the trees.” Medical software, they write, is no “replacement for basic history-taking, examination skills, and critical thinking.”

Even creative trades are increasingly suffering from automation’s de-skilling effects. Computer-aided design has helped architects to construct buildings with unusual shapes and materials, but when computers are brought into the design process too early, they can deaden the aesthetic sensitivity and conceptual insight that come from sketching and model-building.

Working by hand, psychological studies have found, unlocks designers’ originality, expands their working memory and strengthens their tactile sense. A sketchpad is an “intelligence amplifier,” says Nigel Cross, a design professor at the Open University in the U.K.

When software takes over, manual skills wane. In his book “The Thinking Hand,” the Finnish architect Juhani Pallasmaa argues that overreliance on computers makes it harder for designers to appreciate the subtlest, most human qualities of their buildings. “The false precision and apparent finiteness of the computer image” narrow a designer’s perspective, he writes, which can mean technically stunning but emotionally sterile work. As University of Miami architecture professor Jacob Brillhart wrote in a 2011 paper, modern computer systems can translate sets of dimensions into precise 3-D renderings with incredible speed, but they also breed “more banal, lazy, and uneventful designs that are void of intellect, imagination and emotion.”

We do not have to resign ourselves to this situation, however. Automation needn’t remove challenges from our work and diminish our skills. Those losses stem from what ergonomists and other scholars call “technology-centered automation,” a design philosophy that has come to dominate the thinking of programmers and engineers.

When system designers begin a project, they first consider the capabilities of computers, with an eye toward delegating as much of the work as possible to the software. The human operator is assigned whatever is left over, which usually consists of relatively passive chores such as entering data, following templates and monitoring displays.

This philosophy traps people in a vicious cycle of de-skilling. By isolating them from hard work, it dulls their skills and increases the odds that they will make mistakes. When those mistakes happen, designers respond by seeking to further restrict people’s responsibilities—spurring a new round of de-skilling.

Because the prevailing technique “emphasizes the needs of technology over those of humans,” it forces people “into a supporting role, one for which we are most unsuited,” writes the cognitive scientist and design researcher Donald Norman of the University of California, San Diego.

There is an alternative.

In “human-centered automation,” the talents of people take precedence. Systems are designed to keep the human operator in what engineers call “the decision loop”—the continuing process of action, feedback and judgment-making. That keeps workers attentive and engaged and promotes the kind of challenging practice that strengthens skills.

In this model, software plays an essential but secondary role. It takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.
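
For readers who think in code, here is a toy sketch of that division of labor; the threshold, the reading and the function names are all hypothetical illustrations, not a description of any real system. The software disposes of routine cases on its own, but anything unexpected is surfaced to the human operator for judgment.

    # Toy sketch of a "decision loop": software handles the routine cases,
    # alerts the operator to anomalies, and leaves the judgment to the human.
    # The range and readings here are invented for illustration.
    ROUTINE_RANGE = (95.0, 105.0)  # values the software may handle on its own

    def handle_reading(value, operator_decides):
        low, high = ROUTINE_RANGE
        if low <= value <= high:
            return "auto-logged"             # routine: safely automated away
        # Unexpected: surface context and hand the decision back to the human.
        context = f"reading {value} outside routine range {ROUTINE_RANGE}"
        return operator_decides(context)     # the human stays in the loop

    # Example: a stand-in for a human judgment call.
    decision = handle_reading(91.2, operator_decides=lambda ctx: f"operator reviews: {ctx}")
    print(decision)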

Pushing automation in a more humane direction doesn’t require any technical breakthroughs. It requires a shift in priorities and a renewed focus on human strengths and weaknesses.

Airlines, for example, could program cockpit computers to shift control back and forth between computer and pilot during a flight. By keeping the aviator alert and active, that small change could make flying even safer.

In accounting, medicine and other professions, software could be far less intrusive, giving people room to exercise their own judgment before serving up algorithmically derived suggestions.

When it comes to the computerization of knowledge work, writes John Lee of the University of Iowa, “a less-automated approach, which places the automation in the role of critiquing the operator, has met with much more success” than the typical practice of supplanting human judgment with machine calculations. The best decision-support systems provide professionals with “alternative interpretations, hypotheses, or choices.”
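
As a sketch of what that critiquing role might look like, consider the toy Python below; the table of alternatives is invented for illustration and comes from neither Lee’s research nor any real decision-support product. The point is the ordering: the professional commits to a judgment first, and only then does the software offer rival hypotheses.

    # Toy sketch of "automation as critic": the professional records a
    # judgment first; the software then suggests alternatives to consider.
    # The candidate lists here are invented for illustration.
    ALTERNATIVES = {
        "influenza": ["dengue fever", "malaria", "viral hemorrhagic fever"],
    }

    def critique(professional_judgment):
        """Offer rival hypotheses only after the human has committed to one."""
        rivals = ALTERNATIVES.get(professional_judgment, [])
        return [f"Also consider: {r}" for r in rivals]

    diagnosis = "influenza"  # the clinician's own judgment comes first
    for suggestion in critique(diagnosis):
        print(suggestion)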

Human-centered automation doesn’t constrain progress. Rather, it guides progress onto a more humanistic path, providing an antidote to the all-too-common, misanthropic view that venerates computers and denigrates people.

One of the most exciting examples of the human-focused approach is known as adaptive automation. It employs cutting-edge sensors and interpretive algorithms to monitor people’s physical and mental states, then uses that information to shift tasks and responsibilities between human and computer. When the system senses that an operator is struggling with a difficult procedure, it allocates more tasks to the computer to free the operator of distractions. But when it senses that the operator’s interest is waning, it ratchets up the person’s workload to capture their attention and build their skills.
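
As a rough sketch of the idea (the workload estimate, the thresholds and the task names are all hypothetical; real systems would draw on physiological or performance data), an adaptive allocator might look something like this:

    # Toy sketch of adaptive automation: task allocation shifts with the
    # operator's estimated workload. Thresholds and tasks are invented.
    def allocate_tasks(operator_workload, tasks):
        """Split tasks between human and computer from a 0-1 workload estimate."""
        if operator_workload > 0.8:      # struggling: offload more to the computer
            keep = 1
        elif operator_workload < 0.3:    # disengaged: give the human more to do
            keep = len(tasks)
        else:                            # in between: share the load
            keep = max(1, len(tasks) // 2)
        return tasks[:keep], tasks[keep:]  # (human's tasks, computer's tasks)

    human, computer = allocate_tasks(0.9, ["navigate", "monitor fuel", "radio", "log"])
    print("human:", human)
    print("computer:", computer)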

We are amazed by our computers, and we should be. But we shouldn’t let our enthusiasm lead us to underestimate our own talents. Even the smartest software lacks the common sense, ingenuity and verve of the skilled professional. In cockpits, offices or examination rooms, human experts remain indispensable. Their insight, ingenuity and intuition, honed through hard work and seasoned real-world judgment, can’t be replicated by algorithms or robots.

If we let our own skills fade by relying too much on automation, we are going to render ourselves less capable, less resilient and more subservient to our machines. We will create a world more fit for robots than for us.

Mr. Carr is the author of “The Shallows: What the Internet Is Doing to Our Brains” and most recently, of “The Glass Cage: Automation and Us.”

Thanks to R. Williams for bringing this to the attention of the It’s Interesting community.

http://online.wsj.com/articles/automation-makes-us-dumb-1416589342