

by Edd Gent

Wiring our brains up to computers could have a host of exciting applications – from controlling robotic prosthetics with our minds to restoring sight by feeding camera feeds directly into the vision center of our brains.

Most brain-computer interface research to date has been conducted using electroencephalography (EEG) where electrodes are placed on the scalp to monitor the brain’s electrical activity. Achieving very high quality signals, however, requires a more invasive approach.

Integrating electronics with living tissue is complicated, though. Probes that are directly inserted into the gray matter have been around for decades, but while they are capable of highly accurate recording, the signals tend to degrade rapidly due to the buildup of scar tissue. Electrocorticography (ECoG), which uses electrodes placed beneath the skull but on top of the gray matter, has emerged as a popular compromise, as it achieves higher-accuracy recordings with a lower risk of scar formation.

But now researchers from the University of Texas have created new probes that are so thin and flexible, they don’t elicit scar tissue buildup. Unlike conventional probes, which are much larger and stiffer, they don’t cause significant damage to the brain tissue when implanted, and they are also able to comply with the natural movements of the brain.

In recent research published in the journal Science Advances, the team demonstrated that the probes were able to reliably record the electrical activity of individual neurons in mice for up to four months. This stability suggests these probes could be used for long-term monitoring of the brain for research or medical diagnostics as well as controlling prostheses, said Chong Xie, an assistant professor in the university’s department of biomedical engineering who led the research.

“Besides neuroprosthetics, they can possibly be used for neuromodulation as well, in which electrodes generate neural stimulation,” he told Singularity Hub in an email. “We are also using them to study the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s.”

The group actually created two probe designs, one 50 microns long and the other 10 microns long. The smaller probe has a cross-section only a fraction of that of a neuron, which the researchers say is the smallest among all reported neural probes to the best of their knowledge.

Because the probes are so flexible, they can’t be pushed into the brain tissue by themselves, and so they needed to be guided in using a stiff rod called a “shuttle device.” Previous designs of these shuttle devices were much larger than the new probes and often led to serious damage to the brain tissue, so the group created a new carbon fiber design just seven microns in diameter.

At present, though, only 25 percent of the recordings can be traced to individual neurons – possible because each neuron has a characteristic waveform – with the rest too unclear to distinguish from one another.

“The only solution, in my opinion, is to have many electrodes placed in the brain in an array or lattice so that any neuron can be within a reasonable distance from an electrode,” said Chong. “As a result, all enclosed neurons can be recorded and well-sorted.”

This is a challenging problem, according to Chong, but one benefit of the new probes is that their small dimensions make it possible to implant them just tens of microns apart, rather than the few hundred microns necessary with conventional probes. This opens up the possibility of overlapping detection ranges between probes, though the group can still only consistently implant probes with an accuracy of 50 microns.

Takashi Kozai, an assistant professor in the University of Pittsburgh’s bioengineering department who has worked on ultra-small neural probes, said that further experiments would need to be done to show that the recordings, gleaned from anaesthetized rats, actually contained useful neural code. This could include visually stimulating the animals and trying to record activity in the visual cortex.

He also added that a lot of computational neuroscience relies on knowing the exact spacing between recording sites. The fact that flexible probes are able to migrate due to natural tissue movements could pose challenges.

But he said the study “does show some important advances forward in technology development, and most importantly, proof-of-concept feasibility,” adding that “there is clearly much more work necessary before this technology becomes widely used or practical.”

Chong actually worked on another promising approach to neural recording in his previous role under Charles M. Lieber at Harvard University. Last June, the group demonstrated a mesh of soft, conductive polymer threads studded with electrodes that could be injected into the skulls of mice with a syringe where it would then unfurl to both record and stimulate neurons.

As 95 percent of the mesh is free space, cells are able to arrange themselves around it, and the study reported no signs of an elevated immune response after five weeks. But the implantation required a syringe 100 microns in diameter, which causes considerably more damage than the new ultra-small probes developed in Chong’s lab.

It could be some time before the probes are tested on humans. “The major barrier is that this is still an invasive surgical procedure, including cranial surgery and implantation of devices into brain tissue,” said Chong. But, he said, the group is considering testing the probes on epilepsy patients, as it is common practice to implant electrodes inside the skulls of those who don’t respond to medication to locate the area of their brains responsible for their seizures.

by Vanessa Bates Ramirez

In recent years, technology has been producing more and more novel ways to diagnose and treat illness.

Urine tests will soon be able to detect cancer.

Smartphone apps can diagnose STDs.

Chatbots can provide quality mental healthcare.

Joining this list is a minimally-invasive technique that’s been getting increasing buzz across various sectors of healthcare: disease detection by voice analysis.

It’s basically what it sounds like: you talk, and a computer analyzes your voice and screens for illness. Most of the indicators that machine learning algorithms can pick up aren’t detectable to the human ear.

When we do hear irregularities in our own voices or those of others, the fact we’re noticing them at all means they’re extreme; elongating syllables, slurring, trembling, or using a tone that’s unusually flat or nasal could all be indicators of different health conditions. Even if we can hear them, though, unless someone says, “I’m having chest pain” or “I’m depressed,” we don’t know how to analyze or interpret these biomarkers.

Computers soon will, though.

Researchers from various medical centers, universities, and healthcare companies have collected voice recordings from hundreds of patients and fed them to machine learning software that compares the voices to those of healthy people, with the aim of establishing patterns clear enough to pinpoint vocal disease indicators.
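To make that comparison concrete, here is a minimal sketch of the kind of pattern matching such software performs, using a toy nearest-centroid classifier. The feature names and numbers (pitch variance, jitter, speech rate) are invented stand-ins; real systems extract hundreds of acoustic features per recording and use far more sophisticated models.

```python
# Toy sketch: classify a voice sample by comparing it to the average
# feature profile (centroid) of healthy speakers vs. patients.

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, healthy_centroid, patient_centroid):
    """Label a sample by whichever class centroid is closer (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    if dist(sample, patient_centroid) < dist(sample, healthy_centroid):
        return "patient"
    return "healthy"

# Hypothetical features per recording: [pitch variance, jitter, speech rate]
healthy = [[0.90, 0.020, 4.1], [1.00, 0.030, 4.3], [0.95, 0.025, 4.0]]
patients = [[0.50, 0.080, 3.2], [0.45, 0.090, 3.0], [0.55, 0.070, 3.1]]

h_c, p_c = centroid(healthy), centroid(patients)
print(classify([0.5, 0.085, 3.1], h_c, p_c))  # → patient
```

In practice the training step also involves cross-validation and far larger cohorts; the point is only that the algorithm learns a "healthy" profile and flags voices that sit closer to the disease profile.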

In one particularly encouraging study, doctors from the Mayo Clinic worked with Israeli company Beyond Verbal to analyze voice recordings from 120 people who were scheduled for a coronary angiography. Participants used an app on their phones to record 30-second intervals of themselves reading a piece of text, describing a positive experience, then describing a negative experience. Doctors also took recordings from a control group of 25 patients who were either healthy or getting non-heart-related tests.

The doctors found 13 different voice characteristics associated with coronary artery disease. Most notably, the biggest differences between heart patients and non-heart patients’ voices occurred when they talked about a negative experience.

Heart disease isn’t the only illness that shows promise for voice diagnosis. Researchers are also making headway in the conditions below.

ADHD: German company Audioprofiling is using voice analysis to diagnose ADHD in children, achieving greater than 90 percent accuracy in identifying previously diagnosed kids based on their speech alone. The company’s founder gave speech rhythm as an example indicator for ADHD, saying children with the condition speak in syllables less equal in length.
PTSD: With the goal of decreasing the suicide rate among military service members, Boston-based Cogito partnered with the Department of Veterans Affairs to use a voice analysis app to monitor service members’ moods. Researchers at Massachusetts General Hospital are also using the app as part of a two-year study to track the health of 1,000 patients with bipolar disorder and depression.
Brain injury: In June 2016, the US Army partnered with MIT’s Lincoln Lab to develop an algorithm that uses voice to diagnose mild traumatic brain injury. Brain injury biomarkers may include elongated syllables and vowel sounds or difficulty pronouncing phrases that require complex facial muscle movements.
Parkinson’s: Parkinson’s disease has no biomarkers and can only be diagnosed via a costly in-clinic analysis with a neurologist. The Parkinson’s Voice Initiative is changing that by analyzing 30-second voice recordings with machine learning software, achieving 98.6 percent accuracy in detecting whether or not a participant suffers from the disease.
Challenges remain before vocal disease diagnosis becomes truly viable and widespread. For starters, there are privacy concerns over the personal health data identifiable in voice samples. It’s also not yet clear how well algorithms developed for English-speakers will perform with other languages.

Despite these hurdles, our voices appear to be on their way to becoming key players in our health.

by Arjun Kharpal

Billionaire Elon Musk is known for his futuristic ideas and his latest suggestion might just save us from being irrelevant as artificial intelligence (AI) grows more prominent.

The Tesla and SpaceX CEO said on Monday that humans need to merge with machines to become a sort of cyborg.

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk told an audience at the World Government Summit in Dubai, where he also launched Tesla in the United Arab Emirates (UAE).

“It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.”

Musk explained what he meant by saying that computers can communicate at “a trillion bits per second”, while humans, whose main communication method is typing with their fingers via a mobile device, can do about 10 bits per second.

In an age when AI threatens to become widespread, humans would be useless, so there’s a need to merge with machines, according to Musk.

“Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem,” Musk explained.

The technologist’s proposal would see a new layer of the brain able to access information quickly and tap into artificial intelligence. It’s not the first time Musk has spoken about the need for humans to evolve; it’s a constant theme of his talks on how society can deal with the disruptive threat of AI.

‘Very quick’ disruption

During his talk, Musk touched upon his fear of “deep AI” which goes beyond driverless cars to what he called “artificial general intelligence”. This he described as AI that is “smarter than the smartest human on earth” and called it a “dangerous situation”.

While this might be some way off, the Tesla boss said the more immediate threat is how AI, particularly autonomous cars, which his own firm is developing, will displace jobs. He said the disruption to people whose job it is to drive will take place over the next 20 years, after which 12 to 15 percent of the global workforce will be unemployed.

“The most near term impact from a technology standpoint is autonomous cars … That is going to happen much faster than people realize and it’s going to be a great convenience,” Musk said.

“But there are many people whose jobs are to drive. In fact I think it might be the single largest employer of people … Driving in various forms. So we need to figure out new roles for what do those people do, but it will be very disruptive and very quick.”

Wendy was barely 20 years old when she received a devastating diagnosis: juvenile amyotrophic lateral sclerosis (ALS), an aggressive neurodegenerative disorder that destroys motor neurons in the brain and the spinal cord.

Within half a year, Wendy was completely paralyzed. At 21 years old, she had to be artificially ventilated and fed through a tube placed into her stomach. Even more horrifyingly, as paralysis gradually swept through her body, Wendy realized that she was rapidly being robbed of ways to reach out to the world.

Initially, Wendy was able to communicate to her loved ones by moving her eyes. But as the disease progressed, even voluntary eye twitches were taken from her. In 2015, a mere three years after her diagnosis, Wendy completely lost the ability to communicate—she was utterly, irreversibly trapped inside her own mind.

Complete locked-in syndrome is the stuff of nightmares. Patients in this state remain fully conscious and cognitively sharp, but are unable to move or signal to the outside world that they’re mentally present. The consequences can be dire: when doctors mistake locked-in patients for comatose and decide to pull the plug, there’s nothing the patients can do to intervene.

Now, thanks to a new system developed by an international team of European researchers, Wendy and others like her may finally have a rudimentary link to the outside world. The system, a portable brain-machine interface, translates brain activity into simple yes or no answers to questions with around 70 percent accuracy.

That may not seem like enough, but the system represents the first sliver of hope that we may one day be able to reopen reliable communication channels with these patients.

Four people were tested in the study, with some locked-in for as long as seven years. In just 10 days, the patients were able to reliably use the system to finally tell their loved ones not to worry—they’re generally happy.

The results, though imperfect, came as “enormous relief” to their families, says study leader Dr. Niels Birbaumer at the University of Tübingen. The study was published this week in the journal PLOS Biology.

Breaking Through

Robbed of words and other routes of contact, locked-in patients have always turned to technology for communication.

Perhaps the most famous example is physicist Stephen Hawking, who became partially locked-in due to ALS. Hawking’s workaround is a speech synthesizer that he operates by twitching his cheek muscles. Jean-Dominique Bauby, an editor of the French fashion magazine Elle who became locked-in after a massive stroke, wrote an entire memoir by blinking his left eye to select letters from the alphabet.

Recently, the rapid development of brain-machine interfaces has given paralyzed patients increasing access to the world—not just the physical one, but also the digital universe.

These devices read brain waves directly through electrodes implanted into the patient’s brain, decode the pattern of activity, and correlate it to a command—say, move a computer cursor left or right on a screen. The technology is so reliable that paralyzed patients can even use an off-the-shelf tablet to Google things, using only the power of their minds.

But all of the above workarounds require one critical factor: the patient has to have control of at least one muscle—often a cheek or an eyelid. People like Wendy who are completely locked-in have been unable to use even brain-machine interfaces. This is especially perplexing, since those systems read directly from the brain and don’t require voluntary muscle movements at all.

The unexpected failure of brain-machine interfaces for completely locked-in patients has been a major stumbling block for the field. Although speculative, Birbaumer believes that it may be because over time, the brain becomes less efficient at transforming thoughts into actions.

“Anything you want, everything you wish does not occur. So what the brain learns is that intention has no sense anymore,” he says.

First Contact

In the new study, Birbaumer overhauled common brain-machine interface designs to get the brain back on board.

First off was how the system reads brain waves. Generally, this is done through EEG, which measures certain electrical activity patterns of the brain. Unfortunately, the usual solution was a no-go.

“We worked for more than 10 years with neuroelectric activity [EEG] without getting into contact with these completely paralyzed people,” says Birbaumer.

It may be because the electrodes have to be implanted to produce a more accurate readout, explains Birbaumer to Singularity Hub. But surgery comes with additional risks and expenses to the patients. In a somewhat desperate bid, the team turned their focus to a technique called functional near-infrared spectroscopy (fNIRS).

Like fMRI, fNIRS measures brain activity by measuring changes in blood flow through a specific brain region—generally speaking, more blood flow equals more activation. Unlike fMRI, which requires the patient to lie still in a gigantic magnet, fNIRS uses infrared light to measure blood flow. The light source is embedded into a swimming cap-like device that’s tightly worn around the patient’s head.

To train the system, the team started with facts about the world and personal questions that the patients can easily answer. Over the course of 10 days, the patients were repeatedly asked to respond yes or no to questions like “Paris is the capital of Germany” or “Your husband’s name is Joachim.” Throughout the entire training period, the researchers carefully monitored the patients’ alertness and concentration using EEG, to ensure that they were actually participating in the task at hand.

The answers were then used to train an algorithm that matched the responses to their respective brain activation patterns. Eventually, the algorithm was able to tell yes or no based on these patterns alone, at about 70 percent accuracy for a single trial.

“After 10 years [of trying], I felt relieved,” says Birbaumer. If the study can be replicated in more patients, we may finally have a way to restore useful communication with these patients, he added in a press release.

“The authors established communication with complete locked-in patients, which is rare and has not been demonstrated systematically before,” says Dr. Wolfgang Einhäuser-Treyer to Singularity Hub. Einhäuser-Treyer is a professor at Bielefeld University in Germany who had previously worked on measuring pupil response as a means of communication with locked-in patients and was not involved in this current study.

Generally Happy

With more training, the algorithm is expected to improve even further.

For now, researchers can average out mistakes by repeatedly asking a patient the same question multiple times. And even at an “acceptable” 70 percent accuracy rate, the system has already allowed locked-in patients to speak their minds—and somewhat endearingly, just like in real life, the answer may be rather unexpected.
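Why repetition helps can be shown with simple probability: if each trial is independently correct 70 percent of the time, a majority vote over several repetitions is right far more often. The sketch below is an idealized model (independent trials, odd repetition counts), not the study’s actual procedure.

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority of n independent yes/no trials
    (each correct with probability p) gives the right answer.
    Assumes odd n so there are no ties."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# With 70% single-trial accuracy, a few repetitions go a long way:
for n in (1, 5, 9):
    print(n, round(majority_accuracy(0.7, n), 3))
```

Under these assumptions, five repetitions lift accuracy to roughly 84 percent and nine to about 90 percent, which is why asking the same question multiple times is a practical stopgap while the underlying decoder improves.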

One of the patients, a 61-year-old man, was asked whether his daughter should marry her boyfriend. The father said no a striking nine out of ten times—but the daughter went ahead anyway, much to her father’s consternation, which he was able to express with the help of his new brain-machine interface.

Perhaps the most heart-warming result from the study is that the patients were generally happy and content with their lives.

We were originally surprised, says Birbaumer. But on further thought, it made sense. These four patients had accepted ventilation to support their lives despite their condition.

“In a sense, they had already chosen to live,” says Birbaumer. “If we could make this technique widely clinically available, it could have a huge impact on the day-to-day lives of people with completely locked-in syndrome.”

For his next steps, the team hopes to extend the system beyond simple yes or no binary questions. Instead, they want to give patients access to the entire alphabet, thus allowing them to spell out words using their brain waves—something that’s already been done in partially locked-in patients but never before been possible for those completely locked-in.

“To me, this is a very impressive and important study,” says Einhäuser-Treyer. The downsides are mostly economical.

“The equipment is rather expensive and not easy to use. So the challenge for the field will be to develop this technology into an affordable ‘product’ that caretakers [sic], families or physicians can simply use without trained staff or extensive training,” he says. “In the interest of the patients and their families, we can hope that someone takes this challenge.”

by Amina Khan

One day, gardeners might not just hear the buzz of bees among their flowers, but the whirr of robots, too. Scientists in Japan say they’ve managed to turn an unassuming drone into a remote-controlled pollinator by attaching horsehairs coated with a special, sticky gel to its underbelly.

The system, described in the journal Chem, is nowhere near ready to be sent to agricultural fields, but it could help pave the way to developing automated pollination techniques at a time when bee colonies are suffering precipitous declines.

In flowering plants, sex often involves a threesome. Flowers looking to get the pollen from their male parts into another bloom’s female parts need an envoy to carry it from one to the other. Those third players are animals known as pollinators — a diverse group of critters that includes bees, butterflies, birds and bats, among others.

Animal pollinators are needed for the reproduction of 90% of flowering plants and one third of human food crops, according to the U.S. Department of Agriculture’s Natural Resources Conservation Service. Chief among those are bees — but many bee populations in the United States have been in steep decline in recent decades, likely due to a combination of factors, including agricultural chemicals, invasive species and climate change. Just last month, the rusty patched bumblebee became the first wild bee in the United States to be listed as an endangered species (although the Trump administration just put a halt on that designation).

Thus, the decline of bees isn’t just worrisome because it could disrupt ecosystems, but also because it could disrupt agriculture and the economy. People have been trying to come up with replacement techniques, the study authors say, but none of them are especially effective yet — and some might do more harm than good.

“One pollination technique requires the physical transfer of pollen with an artist’s brush or cotton swab from male to female flowers,” the authors wrote. “Unfortunately, this requires much time and effort. Another approach uses a spray machine, such as a gun barrel and pneumatic ejector. However, this machine pollination has a low pollination success rate because it is likely to cause severe denaturing of pollens and flower pistils as a result of strong mechanical contact as the pollens bursts out of the machine.”

Scientists have thought about using drones, but they haven’t figured out how to make free-flying robot insects that can rely on their own power source without being attached to a wire.

“It’s very tough work,” said senior author Eijiro Miyako, a chemist at the National Institute of Advanced Industrial Science and Technology in Japan.

Miyako’s particular contribution to the field involves a gel, one he’d considered a mistake 10 years before. The scientist had been attempting to make fluids that could be used to conduct electricity, and one attempt left him with a gel that was as sticky as hair wax. Clearly this wouldn’t do, and so Miyako stuck it in a storage cabinet in an uncapped bottle. When it was rediscovered a decade later, it looked exactly the same – the gel hadn’t dried up or degraded at all.

“I was so surprised, because it still had a very high viscosity,” Miyako said.

The chemist noticed that when dropped, the gel absorbed an impressive amount of dust from the floor. Miyako realized this material could be very useful for picking up pollen grains. He took ants, slathered the ionic gel on some of them and let both the gelled and ungelled insects wander through a box of tulips. Those ants with the gel were far more likely to end up with a dusting of pollen than those that were free of the sticky substance.

The next step was to see if this worked with mechanical movers, as well. He and his colleagues chose a four-propeller drone whose retail value was $100, and attached horsehairs to its smooth surface to mimic a bee’s fuzzy body. They coated those horsehairs in the gel, and then maneuvered the drones over Japanese lilies, where they would pick up the pollen from one flower and then deposit the pollen at another bloom, thus fertilizing it.

The scientists looked at the hairs under a scanning electron microscope and counted up the pollen grains attached to the surface. They found that the robots whose horsehairs had been coated with the gel had on the order of 10 times more pollen than those hairs that had not been coated with the gel.

“A certain amount of practice with remote control of the artificial pollinator is necessary,” the study authors noted.

Miyako does not think such drones would replace bees altogether, but could simply help bees with their pollinating duties.

“In combination is the best way,” he said.

There’s a lot of work to be done before that’s a reality, however. Small drones will need to become more maneuverable and energy efficient, as well as smarter, he said — with better GPS and artificial intelligence, programmed to travel in highly effective search-and-pollinate patterns.

Feeling run down? Have a case of the sniffles? Maybe you should have paid more attention to your smartwatch.

No, that’s not the pitch line for a new commercial peddling wearable technology, though no doubt a few companies will be interested in the latest research published in PLOS Biology for the next advertising campaign. It turns out that some of the data logged by our personal tracking devices regarding health—heart rate, skin temperature, even oxygen saturation—appear useful for detecting the onset of illness.

“We think we can pick up the earliest stages when people get sick,” says Michael Snyder, a professor and chair of genetics at Stanford University and senior author of the study, “Digital Health: Tracking Physiomes and Activity Using Wearable Biosensors Reveals Useful Health-Related Information.”

Snyder said his team was surprised that the wearables were so effective in detecting the start of the flu, or even Lyme disease, but in hindsight the results make sense: Wearables that track different parameters such as heart rate continuously monitor each vital sign, producing a dense set of data against which aberrations stand out even in the least sensitive wearables.

“[Wearables are] pretty powerful because they’re a continuous measurement of these things,” notes Snyder during an interview with Singularity Hub.

The researchers collected data for up to 24 months on a small study group, which included Snyder himself. Known as Participant #1 in the paper, Snyder benefited from the study when the wearable devices detected marked changes in his heart rate and skin temperature from his normal baseline. A test about two weeks later confirmed he had contracted Lyme disease.

In fact, during the nearly two years while he was monitored, the wearables detected 11 periods with elevated heart rate, corresponding to each instance of illness Snyder experienced during that time. It also detected anomalies on four occasions when Snyder was not feeling ill.

An expert in genomics, Snyder said his team was interested in looking at the effectiveness of wearables technology to detect illness as part of a broader interest in personalized medicine.

“Everybody’s baseline is different, and these devices are very good at characterizing individual baselines,” Snyder says. “I think medicine is going to go from reactive—measuring people after they get sick—to proactive: predicting these risks.”
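The baseline idea can be sketched in a few lines: flag any reading that strays more than a few standard deviations from the wearer’s own history. This toy z-score check uses invented heart-rate numbers and is not the team’s actual software.

```python
from statistics import mean, stdev

def find_anomalies(readings, baseline, threshold=3.0):
    """Flag indices where a reading deviates from the personal baseline
    by more than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

# Hypothetical resting heart rates (bpm): a week of personal baseline...
baseline = [62, 64, 63, 61, 65, 63, 62]
# ...then new readings, with an elevated stretch suggesting illness onset
readings = [63, 62, 78, 81, 64]
print(find_anomalies(readings, baseline))  # → [2, 3]
```

Because the threshold is relative to each person’s own variability, the same check adapts to someone whose resting rate is 55 bpm or 75 bpm, which is the point Snyder makes about individual baselines.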

That’s essentially what genomics is all about: trying to catch disease early, he notes. “I think these devices are set up for that,” Snyder says.

The cost savings could be substantial if a better preventive strategy for healthcare can be found. A landmark 2012 report from the Cochrane Collaboration, an international group of medical researchers, analyzed 14 large trials with more than 182,000 people. The findings: routine checkups are basically a waste of time, doing little to lower the risk of serious illness or premature death. A news story in Reuters estimated that the US spends about $8 billion a year on annual physicals.

The study also found that wearables have the potential to detect individuals at risk for Type 2 diabetes. Snyder and his co-authors argue that biosensors could be developed to detect variations in heart rate patterns, which tend to differ for those experiencing insulin resistance.

Finally, the researchers noted that wearables capable of tracking blood oxygenation provided additional insights into physiological changes caused by flying. While a drop in blood oxygenation during flight due to changes in cabin pressure is a well-known medical fact, the wearables recorded a drop in levels during most of the flight, an effect that had not been documented before. The paper also suggested that lower oxygen in the blood is associated with feelings of fatigue.

Speaking while en route to the airport for yet another fatigue-causing flight, Snyder is still tracking his vital signs today. He hopes to continue the project by improving on the software his team originally developed to detect deviations from baseline health and sense when people are becoming sick.

In addition, Snyder says his lab plans to make the software work on all smart wearable devices, and eventually develop an app for users.

“I think [wearables] will be the wave of the future for collecting a lot of health-related information. It’s a very inexpensive way to get very dense data about your health that you can’t get in other ways,” he says. “I do see a world where you go to the doctor and they’ve downloaded your data. They’ll be able to see if you’ve been exercising, for example.

“It will be very complementary to how healthcare currently works.”

by Tom Simonite

Each of these trucks is the size of a small two-story house. None has a driver or anyone else on board.

Mining company Rio Tinto has 73 of these titans hauling iron ore 24 hours a day at four mines in Australia’s Mars-red northwest corner. At this one, known as West Angelas, the vehicles work alongside robotic rock drilling rigs. The company is also upgrading the locomotives that haul ore hundreds of miles to port—the upgrades will allow the trains to drive themselves, and be loaded and unloaded automatically.

Rio Tinto intends its automated operations in Australia to preview a more efficient future for all of its mines—one that will also reduce the need for human miners. The rising capabilities and falling costs of robotics technology are allowing mining and oil companies to reimagine the dirty, dangerous business of getting resources out of the ground.

BHP Billiton, the world’s largest mining company, is also deploying driverless trucks and drills on iron ore mines in Australia. Suncor, Canada’s largest oil company, has begun testing driverless trucks on oil sands fields in Alberta.

“In the last couple of years we can just do so much more in terms of the sophistication of automation,” says Herman Herman, director of the National Robotics Engineering Center at Carnegie Mellon University, in Pittsburgh. The center helped Caterpillar develop its autonomous haul truck. Mining company Fortescue Metals Group is putting them to work in its own iron ore mines. Herman says the technology can be deployed sooner for mining than other applications, such as transportation on public roads. “It’s easier to deploy because these environments are already highly regulated,” he says.

Rio Tinto uses driverless trucks provided by Japan’s Komatsu. They find their way around using precision GPS and look out for obstacles using radar and laser sensors.

Rob Atkinson, who leads productivity efforts at Rio Tinto, says the fleet and other automation projects are already paying off. The company’s driverless trucks have proven to be roughly 15 percent cheaper to run than vehicles with humans behind the wheel, says Atkinson—a significant saving since haulage is by far a mine’s largest operational cost. “We’re going to continue as aggressively as possible down this path,” he says.

Trucks that drive themselves can spend more time working because software doesn’t need to stop for shift changes or bathroom breaks. They are also more predictable in how they do things like pull up for loading. “All those places where you could lose a few seconds or minutes by not being consistent add up,” says Atkinson. They also improve safety, he says.

The driverless locomotives, due to be tested extensively next year and fully deployed by 2018, are expected to bring similar benefits. Atkinson also anticipates savings on train maintenance, because software can be more predictable and gentle than any human in how it uses brakes and other controls. Diggers and bulldozers could be next to be automated.

Herman at CMU expects all large mining companies to widen their use of automation in the coming years as robotics continues to improve. The recent, sizeable investments by auto and tech companies in driverless cars will help accelerate improvements in the price and performance of the sensors, software, and other technologies needed.

Herman says many mining companies are well placed to expand automation rapidly, because they have already invested in centralized control systems that use software to coördinate and monitor their equipment. Rio Tinto, for example, gave the job of overseeing its autonomous trucks to staff at the company’s control center in Perth, 750 miles to the south. The center already plans train movements and in the future will shift from sending orders to people to directing driverless locomotives.

Atkinson of Rio Tinto acknowledges that just like earlier technologies that boosted efficiency, those changes will tend to reduce staffing levels, even if some new jobs are created servicing and managing autonomous machines. “It’s something that we’ve got to carefully manage, but it’s a reality of modern day life,” he says. “We will remain a very significant employer.”

Thanks to Kebmodee for bringing this to the It’s Interesting community.