Archive for the ‘Kebmodee’ Category

by Tom Simonite

Each of these trucks is the size of a small two-story house. None has a driver or anyone else on board.

Mining company Rio Tinto has 73 of these titans hauling iron ore 24 hours a day at four mines in Australia’s Mars-red northwest corner. At this one, known as West Angelas, the vehicles work alongside robotic rock drilling rigs. The company is also upgrading the locomotives that haul ore hundreds of miles to port—the upgrades will allow the trains to drive themselves, and be loaded and unloaded automatically.

Rio Tinto intends its automated operations in Australia to preview a more efficient future for all of its mines—one that will also reduce the need for human miners. The rising capabilities and falling costs of robotics technology are allowing mining and oil companies to reimagine the dirty, dangerous business of getting resources out of the ground.

BHP Billiton, the world’s largest mining company, is also deploying driverless trucks and drills on iron ore mines in Australia. Suncor, Canada’s largest oil company, has begun testing driverless trucks on oil sands fields in Alberta.

“In the last couple of years we can just do so much more in terms of the sophistication of automation,” says Herman Herman, director of the National Robotics Engineering Center at Carnegie Mellon University, in Pittsburgh. The center helped Caterpillar develop its autonomous haul trucks, which mining company Fortescue Metals Group is putting to work in its own iron ore mines. Herman says the technology can be deployed sooner for mining than for other applications, such as transportation on public roads. “It’s easier to deploy because these environments are already highly regulated,” he says.

Rio Tinto uses driverless trucks provided by Japan’s Komatsu. They find their way around using precision GPS and look out for obstacles using radar and laser sensors.

Rob Atkinson, who leads productivity efforts at Rio Tinto, says the fleet and other automation projects are already paying off. The company’s driverless trucks have proven to be roughly 15 percent cheaper to run than vehicles with humans behind the wheel, says Atkinson—a significant saving since haulage is by far a mine’s largest operational cost. “We’re going to continue as aggressively as possible down this path,” he says.

Trucks that drive themselves can spend more time working because software doesn’t need to stop for shift changes or bathroom breaks. They are also more predictable in how they do things like pull up for loading. “All those places where you could lose a few seconds or minutes by not being consistent add up,” says Atkinson. They also improve safety, he says.

The driverless locomotives, due to be tested extensively next year and fully deployed by 2018, are expected to bring similar benefits. Atkinson also anticipates savings on train maintenance, because software can be more predictable and gentle than any human in how it uses brakes and other controls. Diggers and bulldozers could be next to be automated.

Herman at CMU expects all large mining companies to widen their use of automation in the coming years as robotics continues to improve. The recent, sizeable investments by auto and tech companies in driverless cars will help accelerate improvements in the price and performance of the sensors, software, and other technologies needed.

Herman says many mining companies are well placed to expand automation rapidly, because they have already invested in centralized control systems that use software to coördinate and monitor their equipment. Rio Tinto, for example, gave the job of overseeing its autonomous trucks to staff at the company’s control center in Perth, 750 miles to the south. The center already plans train movements and in the future will shift from sending orders to people to directing driverless locomotives.

Atkinson of Rio Tinto acknowledges that just like earlier technologies that boosted efficiency, those changes will tend to reduce staffing levels, even if some new jobs are created servicing and managing autonomous machines. “It’s something that we’ve got to carefully manage, but it’s a reality of modern day life,” he says. “We will remain a very significant employer.”

https://www.technologyreview.com/s/603170/mining-24-hours-a-day-with-robots/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs. But AI systems that can handle knowledge-based, white-collar work are also becoming increasingly competent.

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting by January 2017.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.

Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years.
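
As a rough check on that payback claim, the arithmetic works out using only the figures reported above (a back-of-the-envelope sketch, not Fukoku Mutual's own accounting):

```python
# Back-of-the-envelope payback estimate using only the figures reported above.
install_cost = 1_700_000            # one-time installation cost (USD)
annual_maintenance = 128_000        # recurring maintenance (USD per year)
annual_salary_savings = 1_100_000   # salaries no longer paid (USD per year)

net_annual_savings = annual_salary_savings - annual_maintenance   # ~972,000 USD/year
payback_years = install_cost / net_annual_savings

print(f"Payback period: {payback_years:.2f} years")   # ~1.75 years, i.e. under two
```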

Watson AI is expected to improve productivity by 30%, Fukoku Mutual says. The company was encouraged by its use of similar IBM technology to analyze customers’ voices during complaints. The software typically takes the customer’s words, converts them to text, and analyzes whether those words are positive or negative. Similar sentiment analysis software is also being used by a range of US companies for customer service; incidentally, a large benefit of the software is understanding when customers get frustrated with automated systems.
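
For readers unfamiliar with sentiment analysis, the sketch below shows the general shape of such a pipeline. It uses NLTK's off-the-shelf VADER analyzer purely as a stand-in; it is not the IBM system described above, and the speech-to-text step is assumed to have already produced a transcript.

```python
# Minimal sketch of a text-sentiment step like the one described above.
# NLTK's VADER analyzer is a stand-in for the (unspecified) IBM software.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download

def classify_call(transcript: str) -> str:
    """Label a call transcript as positive, negative, or neutral."""
    compound = SentimentIntensityAnalyzer().polarity_scores(transcript)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

print(classify_call("This is terrible. I am extremely frustrated and angry."))  # negative
```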

The Mainichi reports that three other Japanese insurance companies are testing or implementing AI systems to automate work such as finding ideal plans for customers. An Israeli insurance startup, Lemonade, has raised $60 million on the idea of “replacing brokers and paperwork with bots and machine learning,” says CEO Daniel Schreiber.

Artificial intelligence systems like IBM’s are poised to upend knowledge-based professions, like insurance and financial services, according to the Harvard Business Review, because many jobs can be “composed of work that can be codified into standard steps and of decisions based on cleanly formatted data.” But whether that means augmenting workers’ ability to be productive, or replacing them entirely, remains to be seen.

“Almost all jobs have major elements that—for the foreseeable future—won’t be possible for computers to handle,” HBR writes. “And yet, we have to admit that there are some knowledge-work jobs that will simply succumb to the rise of the robots.”

Japanese white-collar workers are already being replaced by artificial intelligence

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Uber’s self-driving cars are making the move to San Francisco, in a new expansion of its pilot project with autonomous vehicles that will see Volvo SUVs outfitted with sensors and supercomputers begin picking up passengers in the city.

The autonomous cars won’t operate completely driverless, for the time being – as in Pittsburgh, where Uber launched self-driving Ford Focus vehicles this fall, each SUV will have a safety driver and Uber test engineer onboard to handle manual driving when needed and monitor progress with the tests. But the cars will still be picking up ordinary passengers – any customers who request uberX using the standard consumer-facing mobile app are eligible for a ride in one of the new XC90s operated by Uber’s Advanced Technologies Group (ATG).

There’s a difference here beyond the geography; this is the third generation of Uber’s autonomous vehicle, which is distinct from the second-generation Fords that were used in the Pittsburgh pilot. Uber has a more direct relationship with Volvo in turning its new XC90s into cars with autonomous capabilities; the Fords were essentially purchased stock off the line, while Uber’s partnership with Volvo means it can do more in terms of integrating its own sensor array into the ones available on board the vehicle already.

Uber ATG Head of Product Matt Sweeney told me in an interview that this third-generation vehicle actually uses fewer sensors than the Fords that are on the roads in Pittsburgh, though the loadout still includes a full complement of traditional optical cameras, radar, LiDAR and ultrasonic detectors. He said that fewer sensors are required in part because of the lessons learned from the Pittsburgh rollout, and from their work studying previous generation vehicles; with autonomy, you typically start by throwing everything you can think of at the problem, and then you narrow based on what’s specifically useful, and what turns out not to be so necessary. Still, the fused image of the world that results from data gathered from the Volvo’s sensor suite does not lack for detail.

“You combine [images and LiDAR] together you end up with an image which you know very explicitly distance information about, so it’s like this beautiful object that you can detect as you’re moving through,” Sweeney explained to me. “And with some of the better engineered integration here, we have some radars in the front and rear bumpers behind the facades.”
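
To make that camera-plus-LiDAR idea concrete, here is a minimal illustrative sketch of that kind of fusion: projecting 3D range points into the image plane so individual pixels carry explicit distance. The camera intrinsics and the points are invented placeholders, not Uber's data or code.

```python
# Illustrative camera/LiDAR fusion: project range points into the image plane
# so pixels gain explicit distance information. All numbers are placeholders.
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],   # assumed pinhole intrinsics (fx, fy, cx, cy)
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# LiDAR points already expressed in the camera frame (x right, y down, z forward), metres.
points = np.array([[ 1.2, -0.3, 14.0],
                   [-2.0,  0.1, 22.5],
                   [ 0.4,  0.0,  6.8]])

depth = np.full((720, 1280), np.inf)           # depth image aligned with a 1280x720 camera

for x, y, z in points:
    if z <= 0:                                 # behind the camera, ignore
        continue
    u, v = (K @ np.array([x, y, z]))[:2] / z   # perspective projection
    u, v = int(round(u)), int(round(v))
    if 0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]:
        depth[v, u] = min(depth[v, u], z)      # keep the nearest return per pixel

print(np.isfinite(depth).sum(), "pixels now carry explicit distance")
```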

Those radar arrays provide more than just the ability to see in conditions where it might be difficult to do so optically, such as poor weather; Sweeney notes that the radar units they’re using can actually bounce signal off the surface of the road, underneath or around vehicles in front, in order to look for and report back information on potential accidents or hazards not immediately in front of the autonomous Uber itself.

“The car is one of the reasons we’re really excited about this partnership, it’s a really tremendous vehicle,” Sweeney said. “It’s Volvo’s new SPA, the scalable platform architecture – the first car on their brand new, built from the ground up vehicle architecture, so you get all new mechanical, all new electrical, all new compute.”

Uber didn’t pick a partner blindly – Sweeney says they found a company with a reputation for nearly a hundred years of solid engineering, manufacturing and a commitment to iterating improvement in those areas.

“The vehicle that we’re building on top of, we’re very intentional about it,” Sweeney said, noting that cars like this one are engineered specifically for safety, which is not the main failure point when it comes to most automobile accidents today – that role is reserved for the human drivers behind the wheel.

Uber’s contributions are mainly in the sensor pod, and in the compute stack in the trunk, which takes up about half the surface area of the storage space and which Sweeney said is “a blade architecture, a whole bunch of CPUs and GPUs that we can swap out under there,” though he wouldn’t speak to who’s supplying those components specifically. Taken together, that computing power is key to identifying objects, doing so in higher volume, and finding better paths through complex city street environments.

For the actual rider, there’s an iPad-based interactive display in the rear of the vehicle, which takes over for the mobile app once you’ve actually entered the vehicle and are ready to start your ride. The display guides you through the steps of starting your trip, including ensuring your seat belt is fastened, checking your destination and then setting off on the ride itself.

During our demo, the act of actually leaving the curb and merging into traffic was handled by the safety driver on board, but in eventual full deployment of these cars the vehicles will handle even that tricky task. The iPad shows you when you’re in active self-driving mode, and also when it’s been disengaged and steering is being handled by the actual person behind the wheel instead. The screen also shows you a simplified version of what the autonomous car itself “sees,” displaying on a white background color-coded point- and line-based rudimentary versions of the objects and the world surrounding the vehicle. Objects in motion display trails as they move through this real-time virtual world.

The iPad-based display also lets you take a selfie and share the image from your ride, which definitely helps Uber promote its efforts, while also helping with the other key goal that the iPad itself seeks to achieve – making riders feel like this tech is both knowable and normal. Public perception remains one of autonomous driving’s highest bars to overcome, along with the tech problem and regulation, and selfies are one seemingly shallow way to legitimately address that.

So how did I feel during my ride? About as excited as I typically feel during any Uber ride, after the initial thrill wore off – which is to say mostly bored. The vehicle I was in had to negotiate some heavy traffic, a lot of construction and very unpredictable south-of-Market San Francisco drivers, and as such did disengage with fair frequency. But it also handled long open stretches of road at speed with aplomb, and kept its distance well in denser, stop-and-go traffic. It felt overall like a system that is making good progress in terms of learning – but one that also still has a long way to go before it can do without its human minders up front.

My companion for the ride in the backseat was Uber Chief of Watch Rachel Maran, who has been a driver in Uber’s self-driving pilot in Pittsburgh previously. She explained that the unpredictability and variety in any new driving environment is going to be one of the biggest challenges Uber’s autonomous driving systems have to overcome.

Uber’s pilot in San Francisco will be limited to the downtown area and will involve “a handful” of vehicles to start, with the intent of ramping up from there, according to the company. The autonomous vehicles in Pittsburgh will also continue to run concurrently with the San Francisco deployment. Where Pittsburgh offers a range of weather conditions and other environmental variables for testing, San Francisco will provide new challenges for Uber’s self-driving tech, including denser, often more chaotic traffic, plus narrower lanes and roads.

The company says it doesn’t require a permit from the California DMV to operate in the state, because with a safety operator always on board the cars don’t qualify as fully autonomous as defined by state law. Legally, it’s more akin to a Tesla with Autopilot than to a self-driving Waymo car, under current regulatory rules.

Ultimately, the goal for Uber in autonomy is to create safer roads, according to Sweeney, while at the same time improving urban planning and space problems stemming from a vehicle ownership model that sees most cars sitting idle and unused somewhere near 95 percent of the time. I asked Sweeney about concerns from drivers and members of the public who can be very vocal about autonomous tech’s safety on real roads.

“This car has got centimeter-level distance measurements 360-degrees around the vehicle constantly, 20 meters front and 20 meters back constantly,” Sweeney said, noting that even though the autonomous decision-making remains “a really big challenge,” the advances achieved by the sensors themselves and “their continuous attention and superhuman perception […] sets us up for the first really marked decrease in automotive fatalities since the airbag.”

“I think this is where we really push it down to zero,” Sweeney added. “People treat it as though it’s a fact of life; it’s only because we’re used to it. We can do way better than this.”

Uber’s self-driving cars start picking up passengers in San Francisco

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Google’s AI computers appear to have created their own secret internal language, a fascinating and existentially challenging development.

In September, Google announced that its Neural Machine Translation system had gone live. It uses deep learning to produce better, more natural translations between languages.

Following on this success, GNMT’s creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa… could it translate Korean to Japanese, without resorting to English as a bridge between them?

This is called zero-shot translation.

Indeed, Google’s AI has evolved to produce reasonable translations between two languages that it has not explicitly linked in any way.
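
According to Google's own description of the system, the mechanism that makes this possible is strikingly simple: a single shared model is trained on several language pairs, with an artificial token prepended to each source sentence to indicate which language to produce. The sketch below illustrates that setup; the dummy model and the exact token format are illustrative placeholders, not Google's actual code.

```python
# Sketch of the target-language-token trick behind zero-shot translation.
# The dummy model stands in for the trained multilingual network.

# One shared model is trained only on these directions...
training_pairs = [
    ("<2ko> How are you?", "어떻게 지내세요?"),   # English  -> Korean
    ("<2en> 어떻게 지내세요?", "How are you?"),   # Korean   -> English
    ("<2ja> How are you?", "お元気ですか？"),      # English  -> Japanese
    ("<2en> お元気ですか？", "How are you?"),      # Japanese -> English
]

def translate(model, source_sentence: str, target_language: str) -> str:
    """Request any direction from the shared model by prepending a target-language token."""
    return model(f"<2{target_language}> {source_sentence}")

def dummy_model(tagged_input: str) -> str:
    """Placeholder standing in for the trained multilingual network."""
    return f"[shared model output for: {tagged_input}]"

# Zero-shot: Korean -> Japanese never appears in the training pairs, yet the request
# is formed exactly the same way, because every direction shares one model and one
# internal representation.
print(translate(dummy_model, "어떻게 지내세요?", "ja"))
```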

But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked… does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another?

In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google’s language and AI boffins think that it has.

This “interlingua” seems to exist as a deeper level of representation that sees similarities between a sentence or word in all three languages. Beyond that, it’s hard to say, since the inner processes of complex neural networks are infamously difficult to describe.

It could be something sophisticated, or it could be something simple. But the fact that it exists at all — an original creation of the system’s own to aid in its understanding of concepts it has not been trained to understand — is, philosophically speaking, pretty powerful stuff.

Google’s AI translation tool seems to have invented its own secret internal language

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.


Ta’u Island’s residents live off a solar power and battery storage-enabled microgrid.

by Amelia Heathman

SolarCity was applauded when it announced its plans for solar roofs earlier this year. Now, it appears it is in the business of creating solar islands.

The island of Ta’u in American Samoa, more than 4,000 miles from the United States’ West Coast, now hosts a solar power and battery storage-enabled microgrid that can supply nearly 100 per cent of the island’s power needs from renewable energy.

The microgrid is made up of 1.4 megawatts of solar generation capacity from SolarCity and Tesla and six megawatt-hours of battery storage from 60 Tesla Powerpacks. The whole thing took just a year to implement.

Due to the remote nature of the island, its citizens were used to constant power rationing, outages and a high dependency on diesel generators. The installation of the microgrid, however, provides a cost-saving alternative to diesel, and the island’s core services such as the local hospital, schools and police stations don’t have to worry about outages or rationing anymore.

“It’s always sunny out here, and harvesting that energy from the sun will make me sleep a lot more comfortably at night, just knowing I’ll be able to serve my customers,” said Keith Ahsoon, a local resident whose family owns one of the food stores on the island.

The power from the new Ta’u microgrid provides energy independence for the nearly 600 residents of the island. The battery system also allows the residents to use stored solar energy at night, meaning energy will always be available. As well as providing energy, the project will allow the island to significantly save on energy costs and offset the use of more than 109,500 gallons of diesel per year.
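
Some quick arithmetic on those figures (the average-load number below is a made-up round figure for illustration, not a published value for Ta'u):

```python
# Quick arithmetic on the figures above. The average-load number is a made-up
# round figure for illustration, not a published value for Ta'u.
diesel_offset_gal_per_year = 109_500
print(diesel_offset_gal_per_year / 365)        # ~300 gallons of diesel avoided per day

solar_capacity_kw = 1_400                      # 1.4 MW of panels
storage_kwh = 6_000                            # 60 Tesla Powerpacks, 6 MWh in total

assumed_average_load_kw = 250                  # hypothetical demand, illustration only
print(storage_kwh / assumed_average_load_kw)   # ~24 hours of battery autonomy at that load
```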

With concerns over climate change and the effects that heavy use of fossil fuels is having on the planet, more initiatives are taking off to prove the power of solar energy, whether it is SolarCity fueling an entire island or Bertrand Piccard’s Solar Impulse plane flying around the world on only solar energy.

Obviously Ta’u’s sunny South Pacific location makes it a prime spot to harness the Sun’s energy, which wouldn’t necessarily work in the UK. Having said that, this is an exciting way to show where the future of solar energy could take us if it was amplified on a larger scale.

The project was funded by the American Samoa Economic Development Authority, the Environmental Protection Agency and the Department of Interior, whilst the microgrid is operated by the American Samoa Power Authority.

http://www.wired.co.uk/article/island-tau-solar-energy-solarcity

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

By James Gallagher

An implant that beams instructions out of the brain has been used to restore movement in paralysed primates for the first time, say scientists.

Rhesus monkeys were paralysed in one leg due to a damaged spinal cord. The team at the Swiss Federal Institute of Technology bypassed the injury by sending the instructions straight from the brain to the nerves controlling leg movement. Experts said the technology could be ready for human trials within a decade.

Spinal-cord injuries block the flow of electrical signals from the brain to the rest of the body resulting in paralysis. It is a wound that rarely heals, but one potential solution is to use technology to bypass the injury.

In the study, a chip was implanted into the part of the monkeys’ brain that controls movement. Its job was to read the spikes of electrical activity that are the instructions for moving the legs and send them to a nearby computer. It deciphered the messages and sent instructions to an implant in the monkey’s spine to electrically stimulate the appropriate nerves. The process all takes place in real time. The results, published in the journal Nature, showed the monkeys regained some control of their paralysed leg within six days and could walk in a straight line on a treadmill.
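
Schematically, the loop the researchers describe looks something like the sketch below. Every function body is a placeholder; the study's actual decoder and hardware interfaces are not reproduced here.

```python
# Schematic sketch of a real-time decode-and-stimulate loop of the kind described
# above. All function bodies are placeholders, not the study's actual system.
import time
import random

def read_motor_cortex_spikes() -> list[float]:
    """Stand-in for the implanted recording array: returns firing rates per channel."""
    return [random.random() for _ in range(96)]          # e.g. a 96-channel array

def decode_gait_phase(firing_rates: list[float]) -> str:
    """Stand-in decoder: map cortical activity to an intended phase of the step cycle."""
    return "swing" if sum(firing_rates) > 48 else "stance"

def stimulate_spinal_nerves(phase: str) -> None:
    """Stand-in for the spinal implant: trigger the electrode group for this phase."""
    print(f"stimulating {phase}-related nerve roots")

for _ in range(5):                 # the real system runs this loop continuously
    spikes = read_motor_cortex_spikes()
    stimulate_spinal_nerves(decode_gait_phase(spikes))
    time.sleep(0.02)               # ~50 Hz update, i.e. effectively real time
```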

Dr Gregoire Courtine, one of the researchers, said: “This is the first time that a neurotechnology has restored locomotion in primates.” He told the BBC News website: “The movement was close to normal for the basic walking pattern, but so far we have not been able to test the ability to steer.” The technology used to stimulate the spinal cord is the same as that used in deep brain stimulation to treat Parkinson’s disease, so it would not be a technological leap to doing the same tests in patients. “But the way we walk is different to primates, we are bipedal and this requires more sophisticated ways to stimulate the muscle,” said Dr Courtine.

Jocelyne Bloch, a neurosurgeon from the Lausanne University Hospital, said: “The link between decoding of the brain and the stimulation of the spinal cord is completely new. For the first time, I can imagine a completely paralysed patient being able to move their legs through this brain-spine interface.”

Using technology to overcome paralysis is a rapidly developing field:
Brainwaves have been used to control a robotic arm
Electrical stimulation of the spinal cord has helped four paralysed people stand again
An implant has helped a paralysed man play a guitar-based computer game

Dr Mark Bacon, the director of research at the charity Spinal Research, said: “This is quite impressive work. Paralysed patients want to be able to regain real control, that is voluntary control of lost functions, like walking, and the use of implantable devices may be one way of achieving this. The current work is a clear demonstration that there is progress being made in the right direction.”

Dr Andrew Jackson, from the Institute of Neuroscience at Newcastle University, said: “It is not unreasonable to speculate that we could see the first clinical demonstrations of interfaces between the brain and spinal cord by the end of the decade.” However, he said, rhesus monkeys used all four limbs to move and only one leg had been paralysed, so it would be a greater challenge to restore the movement of both legs in people. “Useful locomotion also requires control of balance, steering and obstacle avoidance, which were not addressed,” he added.

The other approach to treating paralysis involves transplanting cells from the nasal cavity into the spinal cord to try to biologically repair the injury. Following this treatment, Darek Fidyka, who was paralysed from the chest down in a knife attack in 2010, can now walk using a frame.

Neither approach is ready for routine use.

http://www.bbc.com/news/health-37914543

Thanks to Kebmodee for bringing this to the It’s Interesting community.


Study paves way for personnel such as drone operators to have electrical pulses sent into their brains to improve effectiveness in high pressure situations.

US military scientists have used electrical brain stimulators to enhance mental skills of staff, in research that aims to boost the performance of air crews, drone operators and others in the armed forces’ most demanding roles.

The successful tests of the devices pave the way for servicemen and women to be wired up at critical times of duty, so that electrical pulses can be beamed into their brains to improve their effectiveness in high pressure situations.

The brain stimulation kits use five electrodes to send weak electric currents through the skull and into specific parts of the cortex. Previous studies have found evidence that by helping neurons to fire, these minor brain zaps can boost cognitive ability.

The technology is seen as a safer alternative to prescription drugs, such as modafinil and Ritalin, both of which have been used off-label as performance enhancing drugs in the armed forces.

But while electrical brain stimulation appears to have no harmful side effects, some experts say its long-term safety is unknown, and raise concerns about staff being forced to use the equipment if it is approved for military operations.

Others are worried about the broader implications for the general workforce as an unregulated technology advances.

In a new report, scientists at Wright-Patterson Air Force Base in Ohio describe how the performance of military personnel can slump soon after they start work if the demands of the job become too intense.

“Within the air force, various operations such as remotely piloted and manned aircraft operations require a human operator to monitor and respond to multiple events simultaneously over a long period of time,” they write. “With the monotonous nature of these tasks, the operator’s performance may decline shortly after their work shift commences.”

But in a series of experiments at the air force base, the researchers found that electrical brain stimulation can improve people’s multitasking skills and stave off the drop in performance that comes with information overload. Writing in the journal Frontiers in Human Neuroscience, they say that the technology, known as transcranial direct current stimulation (tDCS), has a “profound effect”.

For the study, the scientists had men and women at the base take a test developed by Nasa to assess multitasking skills. The test requires people to keep a crosshair inside a moving circle on a computer screen, while constantly monitoring and responding to three other tasks on the screen.

To investigate whether tDCS boosted people’s scores, half of the volunteers had a constant two milliamp current beamed into the brain for the 36-minute-long test. The other half formed a control group and had only 30 seconds of stimulation at the start of the test.

According to the report, the brain stimulation group started to perform better than the control group four minutes into the test. “The findings provide new evidence that tDCS has the ability to augment and enhance multitasking capability in a human operator,” the researchers write. Larger studies must now look at whether the improvement in performance is real and, if so, how long it lasts.

The tests are not the first to claim beneficial effects from electrical brain stimulation. Last year, researchers at the same US facility found that tDCS seemed to work better than caffeine at keeping military target analysts vigilant after long hours at the desk. Brain stimulation has also been tested for its potential to help soldiers spot snipers more quickly in VR training programmes.

Neil Levy, deputy director of the Oxford Centre for Neuroethics, said that compared with prescription drugs, electrical brain stimulation could actually be a safer way to boost the performance of those in the armed forces. “I have more serious worries about the extent to which participants can give informed consent, and whether they can opt out once it is approved for use,” he said. “Even for those jobs where attention is absolutely critical, you want to be very careful about making it compulsory, or there being a strong social pressure to use it, before we are really sure about its long-term safety.”

But while the devices may be safe in the hands of experts, the technology is freely available, because the sale of brain stimulation kits is unregulated. They can be bought on the internet or assembled from simple components, which raises a greater concern, according to Levy. Young people whose brains are still developing may be tempted to experiment with the devices, and try higher currents than those used in laboratories, he says. “If you use high currents you can damage the brain,” he says.

In 2014 another Oxford scientist, Roi Cohen Kadosh, warned that while brain stimulation could improve performance at some tasks, it made people worse at others. In light of the work, Kadosh urged people not to use brain stimulators at home.

If the technology is proved safe in the long run though, it could help those who need it most, said Levy. “It may have a levelling-up effect, because it is cheap and enhancers tend to benefit the people that perform less well,” he said.

https://www.theguardian.com/science/2016/nov/07/us-military-successfully-tests-electrical-brain-stimulation-to-enhance-staff-skills

Thanks to Kebmodee for bringing this to the It’s Interesting community.