Archive for the ‘The Singularity’ Category

Wendy was barely 20 years old when she received a devastating diagnosis: juvenile amyotrophic lateral sclerosis (ALS), an aggressive neurodegenerative disorder that destroys motor neurons in the brain and the spinal cord.

Within half a year, Wendy was completely paralyzed. At 21 years old, she had to be artificially ventilated and fed through a tube placed into her stomach. Even more horrifyingly, as paralysis gradually swept through her body, Wendy realized that she was rapidly being robbed of ways to reach out to the world.

Initially, Wendy was able to communicate with her loved ones by moving her eyes. But as the disease progressed, even voluntary eye movements were taken from her. In 2015, a mere three years after her diagnosis, Wendy completely lost the ability to communicate—she was utterly, irreversibly trapped inside her own mind.

Complete locked-in syndrome is the stuff of nightmares. Patients in this state remain fully conscious and cognitively sharp, but are unable to move or signal to the outside world that they’re mentally present. The consequences can be dire: when doctors mistake locked-in patients for comatose and decide to pull the plug, there’s nothing the patients can do to intervene.

Now, thanks to a new system developed by an international team of European researchers, Wendy and others like her may finally have a rudimentary link to the outside world. The system, a portable brain-machine interface, translates brain activity into simple yes or no answers to questions with around 70 percent accuracy.

That may not seem like enough, but the system represents the first sliver of hope that we may one day be able to reopen reliable communication channels with these patients.

Four people were tested in the study, with some locked-in for as long as seven years. In just 10 days, the patients were able to reliably use the system to finally tell their loved ones not to worry—they’re generally happy.

The results, though imperfect, came as “enormous relief” to their families, says study leader Dr. Niels Birbaumer at the University of Tübingen. The study was published this week in the journal PLOS Biology.

Breaking Through

Robbed of words and other routes of contact, locked-in patients have always turned to technology for communication.

Perhaps the most famous example is physicist Stephen Hawking, who became partially locked-in due to ALS. Hawking’s workaround is a speech synthesizer that he operates by twitching his cheek muscles. Jean-Dominique Bauby, an editor of the French fashion magazine Elle who became locked-in after a massive stroke, wrote an entire memoir by blinking his left eye to select letters from the alphabet.

Recently, the rapid development of brain-machine interfaces has given paralyzed patients increasing access to the world—not just the physical one, but also the digital universe.

These devices read brain waves directly through electrodes implanted into the patient’s brain, decode the pattern of activity, and correlate it to a command—say, move a computer cursor left or right on a screen. The technology is so reliable that paralyzed patients can even use an off-the-shelf tablet to Google things, using only the power of their minds.

But all of the above workarounds require one critical factor: the patient has to have control of at least one muscle—often, this is a cheek or an eyelid. People like Wendy who are completely locked-in are unable to control similar brain-machine interfaces. This is especially perplexing since these systems don’t require voluntary muscle movements, because they read directly from the mind.

The unexpected failure of brain-machine interfaces for completely locked-in patients has been a major stumbling block for the field. Though the idea is speculative, Birbaumer believes the reason may be that, over time, the brain becomes less efficient at transforming thoughts into actions.

“Anything you want, everything you wish does not occur. So what the brain learns is that intention has no sense anymore,” he says.


First Contact

In the new study, Birbaumer overhauled common brain-machine interface designs to get the brain back on board.

The first change was how the system reads brain signals. Generally, this is done through EEG, which measures the brain's patterns of electrical activity. Unfortunately, the usual solution was a no-go.

“We worked for more than 10 years with neuroelectric activity [EEG] without getting into contact with these completely paralyzed people,” says Birbaumer.

It may be because the electrodes have to be implanted to produce a more accurate readout, explains Birbaumer to Singularity Hub. But surgery comes with additional risks and expenses to the patients. In a somewhat desperate bid, the team turned their focus to a technique called functional near-infrared spectroscopy (fNIRS).

Like fMRI, fNIRS measures brain activity by tracking changes in blood flow through a specific brain region—generally speaking, more blood flow equals more activation. Unlike fMRI, which requires the patient to lie still inside a gigantic magnet, fNIRS uses infrared light to measure blood flow. The light source is embedded in a swimming-cap-like device worn snugly on the patient's head.

To train the system, the team started with statements about the world and the patients' own lives that they could easily verify. Over the course of 10 days, the patients were repeatedly asked to respond yes or no to statements like “Paris is the capital of Germany” or “Your husband’s name is Joachim.” Throughout the entire training period, the researchers carefully monitored the patients' alertness and concentration using EEG, to ensure that they were actually engaged in the task at hand.

The answers were then used to train an algorithm that matched the responses to their respective brain activation patterns. Eventually, the algorithm was able to tell yes or no based on these patterns alone, at about 70 percent accuracy for a single trial.
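The paper itself doesn't include code, but the basic recipe described here, turning each trial's fNIRS signal into features and fitting a binary classifier on the known-answer training statements, can be sketched roughly as follows. The feature choices and the classifier below are illustrative assumptions, not the authors' actual pipeline.

```python
# Rough sketch of a per-trial yes/no decoder for fNIRS data.
# Feature extraction and model choice are assumptions, not the study's method.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def trial_features(trial):
    """Summarize one trial (channels x time samples) as a small feature vector."""
    early = trial[:, : trial.shape[1] // 2].mean(axis=1)
    late = trial[:, trial.shape[1] // 2 :].mean(axis=1)
    return np.concatenate([trial.mean(axis=1), late - early])  # mean level + rise per channel

def single_trial_accuracy(trials, answers):
    """trials: list of (channels x time) arrays; answers: 1 for yes, 0 for no."""
    X = np.array([trial_features(t) for t in trials])
    y = np.array(answers)
    clf = SVC(kernel="linear")
    # Cross-validated accuracy; the study reports roughly 70% per trial.
    return cross_val_score(clf, X, y, cv=5).mean()
```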

“After 10 years [of trying], I felt relieved,” says Birbaumer. If the study can be replicated in more patients, we may finally have a way to restore useful communication with these patients, he added in a press release.

“The authors established communication with complete locked-in patients, which is rare and has not been demonstrated systematically before,” says Dr. Wolfgang Einhäuser-Treyer to Singularity Hub. Einhäuser-Treyer is a professor at Bielefeld University in Germany who had previously worked on measuring pupil response as a means of communication with locked-in patients and was not involved in this current study.

Generally Happy

With more training, the algorithm is expected to improve even further.

For now, researchers can average out mistakes by asking a patient the same question multiple times. And even at an “acceptable” 70 percent accuracy rate, the system has already allowed locked-in patients to speak their minds—and somewhat endearingly, just like in real life, the answers may be rather unexpected.
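To see why repetition helps, assume, simplistically, that each trial is an independent reading with a 70 percent chance of being decoded correctly. Taking the majority answer over several repetitions of the same question then pushes reliability well above the single-trial rate:

```python
# Back-of-the-envelope check: majority vote over repeated yes/no trials,
# assuming trials are independent and each is decoded correctly 70% of the time.
from math import comb

def majority_accuracy(p=0.7, n=9):
    """Probability that more than half of n (odd) independent trials are correct."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n // 2 + 1, n + 1))

print(f"{majority_accuracy(n=1):.2f}")  # 0.70 for a single trial
print(f"{majority_accuracy(n=9):.2f}")  # ~0.90 after nine repetitions of the same question
```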

One of the patients, a 61-year-old man, was asked whether his daughter should marry her boyfriend. The father said no a striking nine out of ten times—but the daughter went ahead anyway, much to her father’s consternation, which he was able to express with the help of his new brain-machine interface.

Perhaps the most heart-warming result from the study is that the patients were generally happy and content with their lives.

“We were originally surprised,” says Birbaumer. But on further thought, it made sense. These four patients had accepted ventilation to support their lives despite their condition.

“In a sense, they had already chosen to live,” says Birbaumer. “If we could make this technique widely clinically available, it could have a huge impact on the day-to-day lives of people with completely locked-in syndrome.”

As a next step, the team hopes to extend the system beyond simple yes-or-no questions. Instead, they want to give patients access to the entire alphabet, letting them spell out words using their brain activity—something that has already been done with partially locked-in patients but has never before been possible for those who are completely locked-in.

“To me, this is a very impressive and important study,” says Einhäuser-Treyer. The downsides are mostly economic.

“The equipment is rather expensive and not easy to use. So the challenge for the field will be to develop this technology into an affordable ‘product’ that caretakers [sic], families or physicians can simply use without trained staff or extensive training,” he says. “In the interest of the patients and their families, we can hope that someone takes this challenge.”

https://singularityhub.com/2017/02/12/families-finally-hear-from-completely-paralyzed-patients-via-new-mind-reading-device/

by Amina Khan

One day, gardeners might not just hear the buzz of bees among their flowers, but the whirr of robots, too. Scientists in Japan say they’ve managed to turn an unassuming drone into a remote-controlled pollinator by attaching horsehairs coated with a special, sticky gel to its underbelly.

The system, described in the journal Chem, is nowhere near ready to be sent to agricultural fields, but it could help pave the way to developing automated pollination techniques at a time when bee colonies are suffering precipitous declines.

In flowering plants, sex often involves a threesome. Flowers looking to get the pollen from their male parts into another bloom’s female parts need an envoy to carry it from one to the other. Those third players are animals known as pollinators — a diverse group of critters that includes bees, butterflies, birds and bats, among others.

Animal pollinators are needed for the reproduction of 90% of flowering plants and one third of human food crops, according to the U.S. Department of Agriculture’s Natural Resources Conservation Service. Chief among those are bees — but many bee populations in the United States have been in steep decline in recent decades, likely due to a combination of factors, including agricultural chemicals, invasive species and climate change. Just last month, the rusty patched bumblebee became the first wild bee in the United States to be listed as an endangered species (although the Trump administration just put a halt on that designation).

Thus, the decline of bees isn’t just worrisome because it could disrupt ecosystems, but also because it could disrupt agriculture and the economy. People have been trying to come up with replacement techniques, the study authors say, but none of them are especially effective yet — and some might do more harm than good.

“One pollination technique requires the physical transfer of pollen with an artist’s brush or cotton swab from male to female flowers,” the authors wrote. “Unfortunately, this requires much time and effort. Another approach uses a spray machine, such as a gun barrel and pneumatic ejector. However, this machine pollination has a low pollination success rate because it is likely to cause severe denaturing of pollens and flower pistils as a result of strong mechanical contact as the pollens bursts out of the machine.”

Scientists have thought about using drones, but they haven’t figured out how to make free-flying robot insects that can rely on their own power source without being attached to a wire.

“It’s very tough work,” said senior author Eijiro Miyako, a chemist at the National Institute of Advanced Industrial Science and Technology in Japan.

Miyako’s particular contribution to the field involves a gel, one he’d considered a mistake 10 years before. The scientist had been attempting to make fluids that could be used to conduct electricity, and one attempt left him with a gel that was as sticky as hair wax. Clearly this wouldn’t do, and so Miyako stuck it in a storage cabinet in an uncapped bottle. When it was rediscovered a decade later, it looked exactly the same – the gel hadn’t dried up or degraded at all.

“I was so surprised, because it still had a very high viscosity,” Miyako said.

The chemist noticed that when dropped, the gel absorbed an impressive amount of dust from the floor. Miyako realized this material could be very useful for picking up pollen grains. He took ants, slathered the ionic gel on some of them and let both the gelled and ungelled insects wander through a box of tulips. Those ants with the gel were far more likely to end up with a dusting of pollen than those that were free of the sticky substance.

The next step was to see if this worked with mechanical movers, as well. He and his colleagues chose a four-propeller drone whose retail value was $100, and attached horsehairs to its smooth surface to mimic a bee’s fuzzy body. They coated those horsehairs in the gel, and then maneuvered the drones over Japanese lilies, where they would pick up the pollen from one flower and then deposit the pollen at another bloom, thus fertilizing it.

The scientists looked at the hairs under a scanning electron microscope and counted up the pollen grains attached to the surface. They found that the robots whose horsehairs had been coated with the gel had on the order of 10 times more pollen than those hairs that had not been coated with the gel.

“A certain amount of practice with remote control of the artificial pollinator is necessary,” the study authors noted.

Miyako does not think such drones would replace bees altogether; rather, he believes they could help bees with their pollinating duties.

“In combination is the best way,” he said.

There’s a lot of work to be done before that’s a reality, however. Small drones will need to become more maneuverable and energy efficient, as well as smarter, he said — with better GPS and artificial intelligence, programmed to travel in highly effective search-and-pollinate patterns.

http://www.latimes.com/science/sciencenow/la-sci-sn-robot-bees-20170209-story.html#pt0-805728

Feeling run down? Have a case of the sniffles? Maybe you should have paid more attention to your smartwatch.

No, that’s not the pitch line for a new commercial peddling wearable technology, though no doubt a few companies will be interested in the latest research published in PLOS Biology for the next advertising campaign. It turns out that some of the data logged by our personal tracking devices regarding health—heart rate, skin temperature, even oxygen saturation—appear useful for detecting the onset of illness.

“We think we can pick up the earliest stages when people get sick,” says Michael Snyder, a professor and chair of genetics at Stanford University and senior author of the study, “Digital Health: Tracking Physiomes and Activity Using Wearable Biosensors Reveals Useful Health-Related Information.”

Snyder said his team was surprised that the wearables were so effective in detecting the start of the flu, or even Lyme disease, but in hindsight the results make sense: wearables monitor vital signs such as heart rate continuously, producing a dense stream of data against which aberrations stand out, even in the least sensitive devices.

“[Wearables are] pretty powerful because they’re a continuous measurement of these things,” notes Snyder during an interview with Singularity Hub.

The researchers collected data for up to 24 months on a small study group, which included Snyder himself. Known as Participant #1 in the paper, Snyder benefited from the study when the wearable devices detected marked changes in his heart rate and skin temperature from his normal baseline. A test about two weeks later confirmed he had contracted Lyme disease.

In fact, during the nearly two years he was monitored, the wearables detected 11 periods of elevated heart rate, corresponding to each instance of illness Snyder experienced during that time. They also flagged anomalies on four occasions when Snyder was not feeling ill.
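The Stanford group's detection code isn't reproduced here, but the underlying idea, flagging readings that drift well outside a person's own recent baseline, can be sketched like this (the window length and threshold are arbitrary illustrative choices, not the study's parameters):

```python
# Minimal sketch of baseline-deviation flagging on a continuous heart-rate stream.
# Not the study's actual algorithm; window and threshold are illustrative.
import numpy as np

def flag_elevated_periods(hourly_heart_rate, baseline_hours=7 * 24, z_threshold=3.0):
    """Return a boolean array marking hours where heart rate far exceeds the trailing baseline."""
    hr = np.asarray(hourly_heart_rate, dtype=float)
    flags = np.zeros(len(hr), dtype=bool)
    for i in range(baseline_hours, len(hr)):
        window = hr[i - baseline_hours : i]
        mu, sigma = window.mean(), window.std()
        if sigma > 0 and (hr[i] - mu) / sigma > z_threshold:
            flags[i] = True  # unusually high for this particular person
    return flags
```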

An expert in genomics, Snyder said his team was interested in looking at the effectiveness of wearables technology to detect illness as part of a broader interest in personalized medicine.

“Everybody’s baseline is different, and these devices are very good at characterizing individual baselines,” Snyder says. “I think medicine is going to go from reactive—measuring people after they get sick—to proactive: predicting these risks.”

That’s essentially what genomics is all about: trying to catch disease early, he notes. “I think these devices are set up for that,” Snyder says.

The cost savings could be substantial if a better preventive strategy for healthcare can be found. A landmark report in 2012 from the Cochrane Collaboration, an international group of medical researchers, analyzed 14 large trials with more than 182,000 people. The findings: routine checkups are basically a waste of time, doing little to lower the risk of serious illness or premature death. A news story from Reuters estimated that the US spends about $8 billion a year on annual physicals.

The study also found that wearables have the potential to detect individuals at risk for Type 2 diabetes. Snyder and his co-authors argue that biosensors could be developed to detect variations in heart rate patterns, which tend to differ for those experiencing insulin resistance.

Finally, the researchers also noted that wearables capable of tracking blood oxygenation provided additional insights into physiological changes caused by flying. While a drop in blood oxygenation during flight due to changes in cabin pressure is a well-known medical fact, the wearables recorded a drop in levels during most of the flight, which was not known before. The paper also suggested that lower oxygen in the blood is associated with feelings of fatigue.

Speaking while en route to the airport for yet another fatigue-causing flight, Snyder is still tracking his vital signs today. He hopes to continue the project by improving on the software his team originally developed to detect deviations from baseline health and sense when people are becoming sick.

In addition, Snyder says his lab plans to make the software work on all smart wearable devices, and eventually develop an app for users.

“I think [wearables] will be the wave of the future for collecting a lot of health-related information. It’s a very inexpensive way to get very dense data about your health that you can’t get in other ways,” he says. “I do see a world where you go to the doctor and they’ve downloaded your data. They’ll be able to see if you’ve been exercising, for example.

“It will be very complementary to how healthcare currently works.”

https://singularityhub.com/2017/02/07/wearable-devices-can-actually-tell-when-youre-about-to-get-sick/

MICROSOFT IS BUYING a deep learning startup based in Montreal, a global hub for deep learning research. But two years ago, this startup wasn’t based in Montreal, and it had nothing to do with deep learning. Which just goes to show: striking it big in the world of tech is all about being in the right place at the right time with the right idea.

Sam Pasupalak and Kaheer Suleman founded Maluuba in 2011 as students at the University of Waterloo, about 400 miles from Montreal. The company’s name is an insider’s nod to one of their undergraduate computer science classes. From an office in Waterloo, they started building something like Siri, the digital assistant that would soon arrive on the iPhone, and they built it in much the same way Apple built the original, using techniques that had driven the development of conversational computing for years—techniques that require extremely slow and meticulous work, where engineers construct AI one tiny piece at a time. But as they toiled away in Waterloo, companies like Google and Facebook embraced deep neural networks, and this technology reinvented everything from image recognition to machine translations, rapidly learning these tasks by analyzing vast amounts of data. Soon, Pasupalak and Suleman realized they should change tack.

In December 2015, the two founders opened a lab in Montreal, and they started recruiting deep learning specialists from places like McGill University and the University of Montreal. Just thirteen months later, after growing to a mere 50 employees, the company sold itself to Microsoft. And that’s not an unusual story. The giants of tech are buying up deep learning startups almost as quickly as they’re created. At the end of December, Uber acquired Geometric Intelligence, a two-year-old AI startup of fifteen academic researchers that offered no product and no published research. The previous summer, Twitter paid a reported $150 million for Magic Pony, a two-year-old deep learning startup based in the UK. And in recent months, similarly small, similarly young deep learning companies have disappeared into the likes of General Electric, Salesforce, and Apple.

Microsoft did not disclose how much it paid for Maluuba, but some of these deep learning acquisitions have reached hefty sums, including Intel’s $400 million purchase of Nervana and Google’s $650 million acquisition of DeepMind, the British AI lab that made headlines last spring when it cracked the ancient game of Go, a feat experts didn’t expect for another decade.

At the same time, Microsoft’s buy is a little different than the rest. Maluuba is a deep learning company that focuses on natural language understanding, the ability to not just recognize the words that come out of our mouths but actually understand them and respond in kind—the breed of AI needed to build a good chatbot. Now that deep learning has proven so effective with speech recognition, image recognition, and translation, natural language is the next frontier. “In the past, people had to build large lexicons, dictionaries, ontologies,” Suleman says. “But with neural nets, we no longer need to do that. A neural net can learn from raw data.”

The acquisition is part of an industry-wide race towards digital assistants and chatbots that can converse like a human. Yes, we already have digital assistants like Microsoft Cortana, the Google Search Assistant, Facebook M, and Amazon Alexa. And chatbots are everywhere. But none of these services know how to chat (a particular problem for the chatbots). So, Microsoft, Google, Facebook, and Amazon are now looking at deep learning as a way of improving the state of the art.

Two summers ago, Google published a research paper describing a chatbot underpinned by deep learning that could debate the meaning of life (in a way). Around the same time, Facebook described an experimental system that could read a shortened form of The Lord of the Rings and answer questions about the Tolkien trilogy. Amazon is gathering data for similar work. And, none too surprisingly, Microsoft is gobbling up a startup that only just moved into the same field.

Winning the Game

Deep neural networks are complex mathematical systems that learn to perform discrete tasks by recognizing patterns in vast amounts of digital data. Feed millions of photos into a neural network, for instance, and it can learn to identify objects and people in photos. Pairing these systems with the enormous amounts of computing power inside their data centers, companies like Google, Facebook, and Microsoft have pushed artificial intelligence far further, far more quickly, than they ever could in the past.
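As a concrete, if toy, illustration of that "feed in labeled photos, learn the patterns" loop, here is a minimal supervised training step written in PyTorch. It is a generic sketch, not any particular company's system, and the tiny network is only big enough to make the idea visible.

```python
# Toy illustration of supervised learning on labeled images (generic sketch only).
import torch
from torch import nn

model = nn.Sequential(                    # a very small classifier: 32x32 RGB image -> 10 classes
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (batch, 3, 32, 32) float tensor; labels: (batch,) class indices."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong the current predictions are
    loss.backward()                        # compute how to nudge every weight
    optimizer.step()                       # nudge the weights; repeat over millions of photos
    return loss.item()
```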

Now, these companies hope to reinvent natural language understanding in much the same way. But there are big caveats: It’s a much harder task, and the work has only just begun. “Natural language is an area where more research needs to be done in terms of research, even basic research,” says University of Montreal professor Yoshua Bengio, one of the founding fathers of the deep learning movement and an advisor to Maluuba.

Part of the problem is that researchers don’t yet have the data needed to train neural networks for true conversation, and Maluuba is among those working to fill the void. Like Facebook and Amazon, it’s building brand new datasets for training natural language models: One involves questions and answers, and the other focuses on conversational dialogue. What’s more, the company is sharing this data with the larger community of researchers and encouraging them to share their own—a common strategy that seeks to accelerate the progress of AI research.

But even with adequate data, the task is quite different from image recognition or translation. Natural language isn’t necessarily something that neural networks can solve on their own. Dialogue isn’t a single task. It’s a series of tasks, each building on the one before. A neural network can’t just identify a pattern in a single piece of data. It must somehow identify patterns across an endless stream of data—and keep a “memory” of this stream. That’s why Maluuba is exploring AI beyond neural networks, including a technique called reinforcement learning.

With reinforcement learning, a system repeats the same task over and over again, while carefully keeping tabs on what works and what doesn’t. Engineers at Google’s DeepMind lab used this method in building AlphaGo, the system that topped Korean grandmaster Lee Sedol at the ancient game of Go. In essence, the machine learned to play Go at a higher level than any human by playing game after game against itself, tracking which moves won the most territory on the board. In similar fashion, reinforcement learning can help machines learn to carry on a conversation. Like a game, Bengio says, dialogue is interactive. It’s a back and forth.
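As a bare-bones illustration of that "repeat the task and keep tabs on what works" loop, here is a generic tabular Q-learning sketch. It is not DeepMind's or Maluuba's code, and the environment interface (reset/step) is an assumed placeholder.

```python
# Generic tabular Q-learning sketch: learn action values through repeated trial and error.
# `env` is an assumed interface: reset() -> state, step(action) -> (next_state, reward, done).
import random
from collections import defaultdict

def q_learning(env, actions, episodes=5000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)  # (state, action) -> estimated long-run reward
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Mostly exploit the best-known move, occasionally explore a random one.
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # Nudge the estimate toward the reward plus the best expected follow-up value.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```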

For Microsoft, winning the game of conversation means winning an enormous market. Natural language could streamline practically any computer interface. With this in mind, the company is already building an army of chatbots, but so far, the results are mixed. In China, the company says, its Xiaoice chatbot has been used by 40 million people. But when it first unleashed a similar bot in the US, the service was coaxed into spewing racism, and the replacement is flawed in so many other ways. That’s why Microsoft acquired Maluuba. The startup was in the right place at the right time. And it may carry the right idea.

https://www.wired.com/2017/01/microsoft-thinks-machines-can-learn-converse-chats-become-game/

by Tom Simonite

Each of these trucks is the size of a small two-story house. None has a driver or anyone else on board.

Mining company Rio Tinto has 73 of these titans hauling iron ore 24 hours a day at four mines in Australia’s Mars-red northwest corner. At this one, known as West Angelas, the vehicles work alongside robotic rock drilling rigs. The company is also upgrading the locomotives that haul ore hundreds of miles to port—the upgrades will allow the trains to drive themselves, and be loaded and unloaded automatically.

Rio Tinto intends its automated operations in Australia to preview a more efficient future for all of its mines—one that will also reduce the need for human miners. The rising capabilities and falling costs of robotics technology are allowing mining and oil companies to reimagine the dirty, dangerous business of getting resources out of the ground.

BHP Billiton, the world’s largest mining company, is also deploying driverless trucks and drills on iron ore mines in Australia. Suncor, Canada’s largest oil company, has begun testing driverless trucks on oil sands fields in Alberta.

“In the last couple of years we can just do so much more in terms of the sophistication of automation,” says Herman Herman, director of the National Robotics Engineering Center at Carnegie Mellon University, in Pittsburgh. The center helped Caterpillar develop its autonomous haul truck. Mining company Fortescue Metals Group is putting them to work in its own iron ore mines. Herman says the technology can be deployed sooner for mining than other applications, such as transportation on public roads. “It’s easier to deploy because these environments are already highly regulated,” he says.

Rio Tinto uses driverless trucks provided by Japan’s Komatsu. They find their way around using precision GPS and look out for obstacles using radar and laser sensors.

Rob Atkinson, who leads productivity efforts at Rio Tinto, says the fleet and other automation projects are already paying off. The company’s driverless trucks have proven to be roughly 15 percent cheaper to run than vehicles with humans behind the wheel, says Atkinson—a significant saving since haulage is by far a mine’s largest operational cost. “We’re going to continue as aggressively as possible down this path,” he says.

Trucks that drive themselves can spend more time working because software doesn’t need to stop for shift changes or bathroom breaks. They are also more predictable in how they do things like pull up for loading. “All those places where you could lose a few seconds or minutes by not being consistent add up,” says Atkinson. They also improve safety, he says.

The driverless locomotives, due to be tested extensively next year and fully deployed by 2018, are expected to bring similar benefits. Atkinson also anticipates savings on train maintenance, because software can be more predictable and gentle than any human in how it uses brakes and other controls. Diggers and bulldozers could be next to be automated.

Herman at CMU expects all large mining companies to widen their use of automation in the coming years as robotics continues to improve. The recent, sizeable investments by auto and tech companies in driverless cars will help accelerate improvements in the price and performance of the sensors, software, and other technologies needed.

Herman says many mining companies are well placed to expand automation rapidly, because they have already invested in centralized control systems that use software to coördinate and monitor their equipment. Rio Tinto, for example, gave the job of overseeing its autonomous trucks to staff at the company’s control center in Perth, 750 miles to the south. The center already plans train movements and in the future will shift from sending orders to people to directing driverless locomotives.

Atkinson of Rio Tinto acknowledges that just like earlier technologies that boosted efficiency, those changes will tend to reduce staffing levels, even if some new jobs are created servicing and managing autonomous machines. “It’s something that we’ve got to carefully manage, but it’s a reality of modern day life,” he says. “We will remain a very significant employer.”

https://www.technologyreview.com/s/603170/mining-24-hours-a-day-with-robots/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs. But AI systems that can handle knowledge-based, white-collar work are also becoming increasingly competent.

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting by January 2017.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.

Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years.
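The "less than two years" payback follows directly from the figures reported by The Mainichi:

```python
# Payback period implied by the reported figures.
install_cost = 1_700_000             # one-time cost of installing the Watson system (USD)
maintenance_per_year = 128_000       # ongoing maintenance (USD)
salary_savings_per_year = 1_100_000  # salaries saved by replacing the 34 workers (USD)

net_savings_per_year = salary_savings_per_year - maintenance_per_year
print(install_cost / net_savings_per_year)  # about 1.75 years, i.e. under two years
```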

Watson AI is expected to improve productivity by 30%, Fukoku Mutual says. The company was encouraged by its earlier use of similar IBM technology to analyze customers’ voices during complaints. The software typically takes the customer’s words, converts them to text, and analyzes whether those words are positive or negative. Similar sentiment analysis software is also being used by a range of US companies for customer service; incidentally, a large benefit of the software is understanding when customers get frustrated with automated systems.
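At its simplest, the "positive or negative words" step can be approximated with a word-list score. The lexicon below is invented for illustration; production call-center systems use far richer models.

```python
# Crude lexicon-based sentiment sketch; word lists are made up for illustration only.
import re

POSITIVE = {"thanks", "great", "helpful", "resolved", "happy"}
NEGATIVE = {"angry", "frustrated", "cancel", "useless", "waiting"}

def sentiment_score(transcript: str) -> int:
    """Positive score suggests a mostly positive call; negative suggests frustration."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I have been waiting an hour and I am frustrated"))  # -2
print(sentiment_score("Thanks, that was very helpful"))                    # +2
```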

The Mainichi reports that three other Japanese insurance companies are testing or implementing AI systems to automate work such as finding ideal plans for customers. An Israeli insurance startup, Lemonade, has raised $60 million on the idea of “replacing brokers and paperwork with bots and machine learning,” says CEO Daniel Schreiber.

Artificial intelligence systems like IBM’s are poised to upend knowledge-based professions, like insurance and financial services, according to the Harvard Business Review, because many such jobs are “composed of work that can be codified into standard steps and of decisions based on cleanly formatted data.” But whether that means augmenting workers’ ability to be productive, or replacing them entirely, remains to be seen.

“Almost all jobs have major elements that—for the foreseeable future—won’t be possible for computers to handle,” HBR writes. “And yet, we have to admit that there are some knowledge-work jobs that will simply succumb to the rise of the robots.”

Japanese white-collar workers are already being replaced by artificial intelligence

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Google’s AI computers have created their own secret language, a fascinating and existentially challenging development.

In September, Google announced that its Neural Machine Translation system had gone live. It uses deep learning to produce better, more natural translations between languages.

Following this success, GNMT’s creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa… could it translate Korean to Japanese without resorting to English as a bridge between them?

This is called zero-shot translation.

Indeed, Google’s AI has learned to produce reasonable translations between two languages that it has not explicitly linked in any way.
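In Google's published multilingual system, a single model is trained on several language pairs at once, and the desired output language is signalled by an artificial token prepended to the input sentence; at test time the same token can request a pairing the model never saw during training. A rough sketch of that setup follows; the "<2xx>" token convention comes from the public paper, while everything else is made up for illustration.

```python
# Illustrative sketch of the zero-shot setup; not Google's internal code.
# One shared model is trained on several language pairs, and a prepended token
# tells it which language to produce.

TRAINING_PAIRS = {("en", "ko"), ("ko", "en"), ("en", "ja"), ("ja", "en")}

def format_input(text, target_lang):
    """Prepend the target-language token the shared model was trained with."""
    return f"<2{target_lang}> {text}"

def is_zero_shot(src, tgt):
    """True if the model was never explicitly trained on this pairing."""
    return (src, tgt) not in TRAINING_PAIRS

print(format_input("Where is the station?", "ja"))  # English -> Japanese: seen in training
print(is_zero_shot("ko", "ja"))                     # True: Korean -> Japanese was never trained directly
```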

But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked… does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another?

In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google’s language and AI boffins think that it has.

This “interlingua” seems to exist as a deeper level of representation that sees similarities between a sentence or word in all three languages. Beyond that, it’s hard to say, since the inner processes of complex neural networks are infamously difficult to describe.

It could be something sophisticated, or it could be something simple. But the fact that it exists at all — an original creation of the system’s own to aid in its understanding of concepts it has not been trained to understand — is, philosophically speaking, pretty powerful stuff.

Google’s AI translation tool seems to have invented its own secret internal language

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.