Archive for the ‘The Future’ Category

Feeling run down? Have a case of the sniffles? Maybe you should have paid more attention to your smartwatch.

No, that’s not the pitch line for a new commercial peddling wearable technology, though no doubt a few companies will be interested in using the latest research, published in PLOS Biology, in their next advertising campaigns. It turns out that some of the health data logged by our personal tracking devices—heart rate, skin temperature, even oxygen saturation—appear useful for detecting the onset of illness.

“We think we can pick up the earliest stages when people get sick,” says Michael Snyder, a professor and chair of genetics at Stanford University and senior author of the study, “Digital Health: Tracking Physiomes and Activity Using Wearable Biosensors Reveals Useful Health-Related Information.”

Snyder said his team was surprised that the wearables were so effective in detecting the start of the flu, or even Lyme disease, but in hindsight the results make sense: wearables continuously track vital signs such as heart rate, producing a set of data dense enough that aberrations stand out against a person’s baseline, even in the least sensitive devices.

“[Wearables are] pretty powerful because they’re a continuous measurement of these things,” notes Snyder during an interview with Singularity Hub.
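That density is what makes detection possible: the core idea is to flag readings that stray too far from a person’s own baseline. Here is a minimal sketch of that idea in Python, using a rolling z-score as a stand-in for the study’s actual, more elaborate algorithm:

```python
import numpy as np

def flag_anomalies(heart_rate, window=60, threshold=3.0):
    """Flag readings that deviate sharply from a personal rolling baseline.

    heart_rate: 1-D array of continuous sensor readings (beats/min).
    window: number of past readings used to estimate the baseline.
    threshold: how many standard deviations count as anomalous.
    """
    hr = np.asarray(heart_rate, dtype=float)
    flags = np.zeros(len(hr), dtype=bool)
    for i in range(window, len(hr)):
        baseline = hr[i - window:i]
        mu, sigma = baseline.mean(), baseline.std()
        if sigma > 0 and abs(hr[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

# Example: a resting rate near 62 bpm with a feverish stretch at the end.
readings = np.concatenate([np.random.normal(62, 2, 300),
                           np.random.normal(80, 3, 30)])
print(np.where(flag_anomalies(readings))[0])
```

Real sensor streams are far noisier than this toy series, and the paper’s method also folds in skin temperature and activity, but the baseline-and-deviation logic is the same.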

The researchers collected data for up to 24 months on a small study group, which included Snyder himself. Known as Participant #1 in the paper, Snyder benefited from the study when the wearable devices detected marked changes in his heart rate and skin temperature from his normal baseline. A test about two weeks later confirmed he had contracted Lyme disease.

In fact, during the nearly two years while he was monitored, the wearables detected 11 periods with elevated heart rate, corresponding to each instance of illness Snyder experienced during that time. It also detected anomalies on four occasions when Snyder was not feeling ill.

An expert in genomics, Snyder said his team was interested in the effectiveness of wearable technology for detecting illness as part of a broader interest in personalized medicine.

“Everybody’s baseline is different, and these devices are very good at characterizing individual baselines,” Snyder says. “I think medicine is going to go from reactive—measuring people after they get sick—to proactive: predicting these risks.”

That’s essentially what genomics is all about: trying to catch disease early, he notes. “I think these devices are set up for that,” Snyder says.

The cost savings could be substantial if a better preventive strategy for healthcare can be found. A landmark 2012 report from the Cochrane Collaboration, an international group of medical researchers, analyzed 14 large trials covering more than 182,000 people. The findings: routine checkups are basically a waste of time, doing little to lower the risk of serious illness or premature death. A Reuters news story estimated that the US spends about $8 billion a year on routine physicals.

The study also found that wearables have the potential to detect individuals at risk for Type 2 diabetes. Snyder and his co-authors argue that biosensors could be developed to detect variations in heart rate patterns, which tend to differ for those experiencing insulin resistance.

Finally, the researchers also noted that wearables capable of tracking blood oxygenation provided additional insights into the physiological changes caused by flying. While it is well known that blood oxygenation drops during flight because of changes in cabin pressure, the wearables showed that levels stay depressed for most of the flight, which had not been documented before. The paper also suggested that lower blood oxygen is associated with feelings of fatigue.

Speaking while en route to the airport for yet another fatigue-causing flight, Snyder is still tracking his vital signs today. He hopes to continue the project by improving on the software his team originally developed to detect deviations from baseline health and sense when people are becoming sick.

In addition, Snyder says his lab plans to make the software work on all smart wearable devices, and eventually develop an app for users.

“I think [wearables] will be the wave of the future for collecting a lot of health-related information. It’s a very inexpensive way to get very dense data about your health that you can’t get in other ways,” he says. “I do see a world where you go to the doctor and they’ve downloaded your data. They’ll be able to see if you’ve been exercising, for example.

“It will be very complementary to how healthcare currently works.”

https://singularityhub.com/2017/02/07/wearable-devices-can-actually-tell-when-youre-about-to-get-sick/


Microsoft is buying a deep learning startup based in Montreal, a global hub for deep learning research. But two years ago, this startup wasn’t based in Montreal, and it had nothing to do with deep learning. Which just goes to show: striking it big in the world of tech is all about being in the right place at the right time with the right idea.

Sam Pasupalak and Kaheer Suleman founded Maluuba in 2011 as students at the University of Waterloo, about 400 miles from Montreal. The company’s name is an insider’s nod to one of their undergraduate computer science classes. From an office in Waterloo, they started building something like Siri, the digital assistant that would soon arrive on the iPhone, and they built it in much the same way Apple built the original, using techniques that had driven the development of conversational computing for years—techniques that require extremely slow and meticulous work, where engineers construct AI one tiny piece at a time. But as they toiled away in Waterloo, companies like Google and Facebook embraced deep neural networks, and this technology reinvented everything from image recognition to machine translation, rapidly learning these tasks by analyzing vast amounts of data. Soon, Pasupalak and Suleman realized they should change tack.

In December 2015, the two founders opened a lab in Montreal, and they started recruiting deep learning specialists from places like McGill University and the University of Montreal. Just thirteen months later, after growing to a mere 50 employees, the company sold itself to Microsoft. And that’s not an unusual story. The giants of tech are buying up deep learning startups almost as quickly as they’re created. At the end of December, Uber acquired Geometric Intelligence, a two-year-old AI startup comprising fifteen academic researchers, despite it offering no product and no published research. The previous summer, Twitter paid a reported $150 million for Magic Pony, a two-year-old deep learning startup based in the UK. And in recent months, similarly small, similarly young deep learning companies have disappeared into the likes of General Electric, Salesforce, and Apple.

Microsoft did not disclose how much it paid for Maluuba, but some of these deep learning acquisitions have reached hefty sums, including Intel’s $400 million purchase of Nervana and Google’s $650 million acquisition of DeepMind, the British AI lab that made headlines last spring when it cracked the ancient game of Go, a feat experts didn’t expect for another decade.

At the same time, Microsoft’s buy is a little different from the rest. Maluuba is a deep learning company that focuses on natural language understanding, the ability to not just recognize the words that come out of our mouths but actually understand them and respond in kind—the breed of AI needed to build a good chatbot. Now that deep learning has proven so effective with speech recognition, image recognition, and translation, natural language is the next frontier. “In the past, people had to build large lexicons, dictionaries, ontologies,” Suleman says. “But with neural nets, we no longer need to do that. A neural net can learn from raw data.”

The acquisition is part of an industry-wide race towards digital assistants and chatbots that can converse like a human. Yes, we already have digital assistants like Microsoft Cortana, the Google Assistant, Facebook M, and Amazon Alexa. And chatbots are everywhere. But none of these services know how to chat (a particular problem for the chatbots). So, Microsoft, Google, Facebook, and Amazon are now looking at deep learning as a way of improving the state of the art.

Two summers ago, Google published a research paper describing a chatbot underpinned by deep learning that could debate the meaning of life (in a way). Around the same time, Facebook described an experimental system that could read a shortened form of The Lord of the Rings and answer questions about the Tolkien trilogy. Amazon is gathering data for similar work. And, none too surprisingly, Microsoft is gobbling up a startup that only just moved into the same field.

Winning the Game
Deep neural networks are complex mathematical systems that learn to perform discrete tasks by recognizing patterns in vast amounts of digital data. Feed millions of photos into a neural network, for instance, and it can learn to identify objects and people in photos. Pairing these systems with the enormous amounts of computing power inside their data centers, companies like Google, Facebook, and Microsoft have pushed artificial intelligence far further, far more quickly, than they ever could in the past.
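To make that pattern-learning loop concrete, here is a toy sketch (assuming PyTorch is installed): a tiny network trained on synthetic stand-ins for photos. It bears no relation to any production system at these companies.

```python
import torch
from torch import nn

# A small network that learns to separate two classes purely from examples.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 128), nn.ReLU(),
    nn.Linear(128, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic stand-ins for photos: class 1 "images" are brighter on average.
x = torch.rand(512, 1, 28, 28)
y = (x.mean(dim=(1, 2, 3)) > 0.5).long()

# Learning is just repeated exposure to labeled data plus weight updates.
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print("final training loss:", loss.item())
```

Scale the data to millions of real photos and the network to millions of parameters, and this same loop is what lets these systems identify objects and people.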

Now, these companies hope to reinvent natural language understanding in much the same way. But there are big caveats: It’s a much harder task, and the work has only just begun. “Natural language is an area where more research needs to be done, even basic research,” says University of Montreal professor Yoshua Bengio, one of the founding fathers of the deep learning movement and an advisor to Maluuba.

Part of the problem is that researchers don’t yet have the data needed to train neural networks for true conversation, and Maluuba is among those working to fill the void. Like Facebook and Amazon, it’s building brand new datasets for training natural language models: One involves questions and answers, and the other focuses on conversational dialogue. What’s more, the company is sharing this data with the larger community of researchers and encouraging them to share their own—a common strategy that seeks to accelerate the progress of AI research.

But even with adequate data, the task is quite different from image recognition or translation. Natural language isn’t necessarily something that neural networks can solve on their own. Dialogue isn’t a single task. It’s a series of tasks, each building on the one before. A neural network can’t just identify a pattern in a single piece of data. It must somehow identify patterns across an endless stream of data—and keep a “memory” of this stream. That’s why Maluuba is exploring AI beyond neural networks, including a technique called reinforcement learning.

With reinforcement learning, a system repeats the same task over and over again, while carefully keeping tabs on what works and what doesn’t. Engineers at Google’s DeepMind lab used this method in building AlphaGo, the system that topped Korean grandmaster Lee Sedol at the ancient game of Go. In essence, the machine learned to play Go at a higher level than any human by playing game after game against itself, tracking which moves won the most territory on the board. In similar fashion, reinforcement learning can help machines learn to carry on a conversation. Like a game, Bengio says, dialogue is interactive. It’s a back and forth.
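Here is a minimal sketch of that keep-tabs-on-what-works loop: tabular Q-learning on a toy game, nothing like AlphaGo’s scale, but the same trial-and-error principle.

```python
import random

# Toy "game": walk to the goal at the right end of a six-cell track.
N_STATES = 6                      # positions 0..5; 5 is the goal
ACTIONS = [-1, +1]                # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):        # repeat the task over and over
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:     # explore occasionally
            a = random.choice(ACTIONS)
        else:                             # otherwise exploit what has worked
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        # Keep tabs on what works: nudge the value of (state, action) pairs.
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy should be "step right" (+1) from every position.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

Swap the six-cell track for a Go board, and the lookup table for a deep neural network, and you have the shape of DeepMind’s approach; for dialogue, the “moves” become utterances and the “territory won” becomes a successful exchange.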

For Microsoft, winning the game of conversation means winning an enormous market. Natural language could streamline practically any computer interface. With this in mind, the company is already building an army of chatbots, but so far, the results are mixed. In China, the company says, its Xiaoice chatbot has been used by 40 million people. But when it first unleashed a similar bot in the US, the service was coaxed into spewing racism, and the replacement is flawed in so many other ways. That’s why Microsoft acquired Maluuba. The startup was in the right place at the right time. And it may carry the right idea.

https://www.wired.com/2017/01/microsoft-thinks-machines-can-learn-converse-chats-become-game/

by Tom Simonite

Each of these trucks is the size of a small two-story house. None has a driver or anyone else on board.

Mining company Rio Tinto has 73 of these titans hauling iron ore 24 hours a day at four mines in Australia’s Mars-red northwest corner. At this one, known as West Angelas, the vehicles work alongside robotic rock drilling rigs. The company is also upgrading the locomotives that haul ore hundreds of miles to port—the upgrades will allow the trains to drive themselves, and be loaded and unloaded automatically.

Rio Tinto intends its automated operations in Australia to preview a more efficient future for all of its mines—one that will also reduce the need for human miners. The rising capabilities and falling costs of robotics technology are allowing mining and oil companies to reimagine the dirty, dangerous business of getting resources out of the ground.

BHP Billiton, the world’s largest mining company, is also deploying driverless trucks and drills on iron ore mines in Australia. Suncor, Canada’s largest oil company, has begun testing driverless trucks on oil sands fields in Alberta.

“In the last couple of years we can just do so much more in terms of the sophistication of automation,” says Herman Herman, director of the National Robotics Engineering Center at Carnegie Mellon University, in Pittsburgh. The center helped Caterpillar develop its autonomous haul truck. Mining company Fortescue Metals Group is putting them to work in its own iron ore mines. Herman says the technology can be deployed sooner for mining than other applications, such as transportation on public roads. “It’s easier to deploy because these environments are already highly regulated,” he says.

Rio Tinto uses driverless trucks provided by Japan’s Komatsu. They find their way around using precision GPS and look out for obstacles using radar and laser sensors.

Rob Atkinson, who leads productivity efforts at Rio Tinto, says the fleet and other automation projects are already paying off. The company’s driverless trucks have proven to be roughly 15 percent cheaper to run than vehicles with humans behind the wheel, says Atkinson—a significant saving since haulage is by far a mine’s largest operational cost. “We’re going to continue as aggressively as possible down this path,” he says.

Trucks that drive themselves can spend more time working because software doesn’t need to stop for shift changes or bathroom breaks. They are also more predictable in how they do things like pull up for loading. “All those places where you could lose a few seconds or minutes by not being consistent add up,” says Atkinson. They also improve safety, he says.

The driverless locomotives, due to be tested extensively next year and fully deployed by 2018, are expected to bring similar benefits. Atkinson also anticipates savings on train maintenance, because software can be more predictable and gentle than any human in how it uses brakes and other controls. Diggers and bulldozers could be next to be automated.

Herman at CMU expects all large mining companies to widen their use of automation in the coming years as robotics continues to improve. The recent, sizeable investments by auto and tech companies in driverless cars will help accelerate improvements in the price and performance of the sensors, software, and other technologies needed.

Herman says many mining companies are well placed to expand automation rapidly, because they have already invested in centralized control systems that use software to coordinate and monitor their equipment. Rio Tinto, for example, gave the job of overseeing its autonomous trucks to staff at the company’s control center in Perth, 750 miles to the south. The center already plans train movements and in the future will shift from sending orders to people to directing driverless locomotives.

Atkinson of Rio Tinto acknowledges that just like earlier technologies that boosted efficiency, those changes will tend to reduce staffing levels, even if some new jobs are created servicing and managing autonomous machines. “It’s something that we’ve got to carefully manage, but it’s a reality of modern day life,” he says. “We will remain a very significant employer.”

https://www.technologyreview.com/s/603170/mining-24-hours-a-day-with-robots/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs. But AI systems that can handle knowledge-based, white-collar work are also becoming increasingly competent.

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting in January 2017.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring in injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.

Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years.
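The payback arithmetic implied by those figures is easy to check:

```python
install_cost = 1.7        # one-time cost, millions of dollars
maintenance = 0.128       # per year, millions of dollars
salary_savings = 1.1      # per year, millions of dollars

net_savings_per_year = salary_savings - maintenance   # ~0.97 million/year
payback_years = install_cost / net_savings_per_year
print(f"payback in about {payback_years:.2f} years")  # ~1.75 years
```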

Watson AI is expected to improve productivity by 30%, Fukoku Mutual says. The company was encouraged by its use of similar IBM technology to analyze customers’ voices during complaints. The software typically takes the customer’s words, converts them to text, and analyzes whether those words are positive or negative. Similar sentiment analysis software is also being used by a range of US companies for customer service; incidentally, a large benefit of the software is understanding when customers get frustrated with automated systems.
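A toy sketch of that polarity-scoring step, using a tiny hand-built word list; production systems use learned models and far richer features, and the speech-to-text step is assumed to have already produced a transcript.

```python
# Hand-built lexicons, purely for illustration.
POSITIVE = {"thanks", "great", "helpful", "resolved", "happy"}
NEGATIVE = {"angry", "useless", "waiting", "frustrated", "cancel"}

def polarity(transcript: str) -> float:
    """Score a transcript from -1 (all negative) to +1 (all positive)."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return score / max(len(words), 1)

print(polarity("I have been waiting an hour and I am frustrated"))  # < 0
print(polarity("thanks, that was helpful and quickly resolved"))    # > 0
```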

The Mainichi reports that three other Japanese insurance companies are testing or implementing AI systems to automate work such as finding ideal plans for customers. An Israeli insurance startup, Lemonade, has raised $60 million on the idea of “replacing brokers and paperwork with bots and machine learning,” says CEO Daniel Schreiber.

Artificial intelligence systems like IBM’s are poised to upend knowledge-based professions, like insurance and financial services, according to the Harvard Business Review, because many such jobs are “composed of work that can be codified into standard steps and of decisions based on cleanly formatted data.” But whether that means augmenting workers’ ability to be productive, or replacing them entirely, remains to be seen.

“Almost all jobs have major elements that—for the foreseeable future—won’t be possible for computers to handle,” HBR writes. “And yet, we have to admit that there are some knowledge-work jobs that will simply succumb to the rise of the robots.”

Japanese white-collar workers are already being replaced by artificial intelligence

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Google’s AI computers appear to have created their own secret language, a fascinating and existentially challenging development.

In September, Google announced that its Neural Machine Translation system had gone live. It uses deep learning to produce better, more natural translations between languages.

Following this success, GNMT’s creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa… could it translate Korean to Japanese without resorting to English as a bridge between them?

This is called zero-shot translation, sketched below.
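The mechanism, per Google’s published research, is a single shared model trained on all language pairs at once, with the desired target language supplied as an artificial token prepended to the source sentence. A schematic sketch of the setup (nothing here calls a real API):

```python
def prepare(source_sentence: str, target_lang: str) -> str:
    """Prepend the target-language token, as in Google's multilingual NMT."""
    return f"<2{target_lang}> {source_sentence}"

# Training only ever pairs English with Korean and English with Japanese:
training_pairs = [
    (prepare("How are you?", "ko"), "잘 지내세요?"),
    (prepare("잘 지내세요?", "en"), "How are you?"),
    (prepare("How are you?", "ja"), "お元気ですか。"),
    (prepare("お元気ですか。", "en"), "How are you?"),
]

# A zero-shot request combines languages never seen together in training:
print(prepare("잘 지내세요?", "ja"))   # Korean source, Japanese target
```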

Indeed, Google’s AI turned out to produce reasonable translations between two languages that it had never explicitly linked in any way.

But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked… does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another?

In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google’s language and AI boffins think that it has.

This “interlingua” seems to exist as a deeper level of representation that sees similarities between a sentence or word in all three languages. Beyond that, it’s hard to say, since the inner processes of complex neural networks are infamously difficult to describe.

It could be something sophisticated, or it could be something simple. But the fact that it exists at all — an original creation of the system’s own to aid in its understanding of concepts it has not been trained to understand — is, philosophically speaking, pretty powerful stuff.

Google’s AI translation tool seems to have invented its own secret internal language

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

By James Gallagher

An implant that beams instructions out of the brain has been used to restore movement in paralysed primates for the first time, say scientists.

Rhesus monkeys were paralysed in one leg due to a damaged spinal cord. The team at the Swiss Federal Institute of Technology bypassed the injury by sending the instructions straight from the brain to the nerves controlling leg movement. Experts said the technology could be ready for human trials within a decade.

Spinal-cord injuries block the flow of electrical signals from the brain to the rest of the body resulting in paralysis. It is a wound that rarely heals, but one potential solution is to use technology to bypass the injury.

In the study, a chip was implanted into the part of the monkeys’ brain that controls movement. Its job was to read the spikes of electrical activity that are the instructions for moving the legs and send them to a nearby computer, which deciphered the messages and sent instructions to an implant in the monkey’s spine to electrically stimulate the appropriate nerves. The process all takes place in real time. The results, published in the journal Nature, showed the monkeys regained some control of their paralysed leg within six days and could walk in a straight line on a treadmill.
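As a schematic, the closed loop looks something like the sketch below. Every function is a hypothetical stand-in for the actual hardware interfaces, and the update rate is illustrative rather than taken from the paper.

```python
import time

def read_motor_cortex_spikes() -> list:
    """Return the latest window of spike counts from the brain chip (stub)."""
    return []

def decode_intent(spikes: list) -> str:
    """Classify the spike pattern as a gait phase, e.g. 'flex' or 'extend' (stub)."""
    return "flex"

def stimulate_spinal_nerves(gait_phase: str) -> None:
    """Drive the spinal implant's electrodes for the decoded phase (stub)."""
    pass

for _ in range(100):       # in the real system this loop never stops
    spikes = read_motor_cortex_spikes()
    phase = decode_intent(spikes)
    stimulate_spinal_nerves(phase)
    time.sleep(0.02)       # ~50 Hz update loop (illustrative rate)
```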

Dr Gregoire Courtine, one of the researchers, said: “This is the first time that a neurotechnology has restored locomotion in primates.” He told the BBC News website: “The movement was close to normal for the basic walking pattern, but so far we have not been able to test the ability to steer.” The technology used to stimulate the spinal cord is the same as that used in deep brain stimulation to treat Parkinson’s disease, so it would not be a technological leap to do the same tests in patients. “But the way we walk is different to primates, we are bipedal and this requires more sophisticated ways to stimulate the muscle,” said Dr Courtine.

Jocelyne Bloch, a neurosurgeon from the Lausanne University Hospital, said: “The link between decoding of the brain and the stimulation of the spinal cord is completely new. For the first time, I can imagine a completely paralysed patient being able to move their legs through this brain-spine interface.”

Using technology to overcome paralysis is a rapidly developing field:
Brainwaves have been used to control a robotic arm
Electrical stimulation of the spinal cord has helped four paralysed people stand again
An implant has helped a paralysed man play a guitar-based computer game

Dr Mark Bacon, the director of research at the charity Spinal Research, said: “This is quite impressive work. Paralysed patients want to be able to regain real control, that is voluntary control of lost functions, like walking, and the use of implantable devices may be one way of achieving this. The current work is a clear demonstration that there is progress being made in the right direction.”

Dr Andrew Jackson, from the Institute of Neuroscience at Newcastle University, said: “It is not unreasonable to speculate that we could see the first clinical demonstrations of interfaces between the brain and spinal cord by the end of the decade.” However, he said, rhesus monkeys used all four limbs to move and only one leg had been paralysed, so it would be a greater challenge to restore the movement of both legs in people. “Useful locomotion also requires control of balance, steering and obstacle avoidance, which were not addressed,” he added.

The other approach to treating paralysis involves transplanting cells from the nasal cavity into the spinal cord to try to biologically repair the injury. Following this treatment, Darek Fidyka, who was paralysed from the chest down in a knife attack in 2010, can now walk using a frame.

Neither approach is ready for routine use.

http://www.bbc.com/news/health-37914543

Thanks to Kebmodee for bringing this to the It’s Interesting community.


Study paves way for personnel such as drone operators to have electrical pulses sent into their brains to improve effectiveness in high-pressure situations.

US military scientists have used electrical brain stimulators to enhance mental skills of staff, in research that aims to boost the performance of air crews, drone operators and others in the armed forces’ most demanding roles.

The successful tests of the devices pave the way for servicemen and women to be wired up at critical times of duty, so that electrical pulses can be beamed into their brains to improve their effectiveness in high-pressure situations.

The brain stimulation kits use five electrodes to send weak electric currents through the skull and into specific parts of the cortex. Previous studies have found evidence that by helping neurons to fire, these minor brain zaps can boost cognitive ability.

The technology is seen as a safer alternative to prescription drugs, such as modafinil and Ritalin, both of which have been used off-label as performance enhancing drugs in the armed forces.

But while electrical brain stimulation appears to have no harmful side effects, some experts say its long-term safety is unknown, and raise concerns about staff being forced to use the equipment if it is approved for military operations.

Others are worried about the broader implications of the science for the general workforce, given the advance of an unregulated technology.

In a new report, scientists at Wright-Patterson Air Force Base in Ohio describe how the performance of military personnel can slump soon after they start work if the demands of the job become too intense.

“Within the air force, various operations such as remotely piloted and manned aircraft operations require a human operator to monitor and respond to multiple events simultaneously over a long period of time,” they write. “With the monotonous nature of these tasks, the operator’s performance may decline shortly after their work shift commences.”


But in a series of experiments at the air force base, the researchers found that electrical brain stimulation can improve people’s multitasking skills and stave off the drop in performance that comes with information overload. Writing in the journal Frontiers in Human Neuroscience, they say that the technology, known as transcranial direct current stimulation (tDCS), has a “profound effect”.

For the study, the scientists had men and women at the base take a test developed by Nasa to assess multitasking skills. The test requires people to keep a crosshair inside a moving circle on a computer screen, while constantly monitoring and responding to three other tasks on the screen.

To investigate whether tDCS boosted people’s scores, half of the volunteers had a constant two milliamp current beamed into the brain for the 36-minute-long test. The other half formed a control group and had only 30 seconds of stimulation at the start of the test.

According to the report, the brain stimulation group started to perform better than the control group four minutes into the test. “The findings provide new evidence that tDCS has the ability to augment and enhance multitasking capability in a human operator,” the researchers write. Larger studies must now look at whether the improvement in performance is real and, if so, how long it lasts.

The tests are not the first to claim beneficial effects from electrical brain stimulation. Last year, researchers at the same US facility found that tDCS seemed to work better than caffeine at keeping military target analysts vigilant after long hours at the desk. Brain stimulation has also been tested for its potential to help soldiers spot snipers more quickly in VR training programmes.

Neil Levy, deputy director of the Oxford Centre for Neuroethics, said that compared with prescription drugs, electrical brain stimulation could actually be a safer way to boost the performance of those in the armed forces. “I have more serious worries about the extent to which participants can give informed consent, and whether they can opt out once it is approved for use,” he said. “Even for those jobs where attention is absolutely critical, you want to be very careful about making it compulsory, or there being a strong social pressure to use it, before we are really sure about its long-term safety.”

But while the devices may be safe in the hands of experts, the technology is freely available, because the sale of brain stimulation kits is unregulated. They can be bought on the internet or assembled from simple components, which raises a greater concern, according to Levy. Young people whose brains are still developing may be tempted to experiment with the devices, and try higher currents than those used in laboratories, he says. “If you use high currents you can damage the brain,” he says.

In 2014 another Oxford scientist, Roi Cohen Kadosh, warned that while brain stimulation could improve performance at some tasks, it made people worse at others. In light of the work, Kadosh urged people not to use brain stimulators at home.

If the technology is proved safe in the long run though, it could help those who need it most, said Levy. “It may have a levelling-up effect, because it is cheap and enhancers tend to benefit the people that perform less well,” he said.

https://www.theguardian.com/science/2016/nov/07/us-military-successfully-tests-electrical-brain-stimulation-to-enhance-staff-skills

Thanks to Kebmodee for bringing this to the It’s Interesting community.