Posts Tagged ‘robot’

One advantage humans have over robots is that we’re good at quickly passing on our knowledge to each other. A new system developed at MIT now allows anyone to coach robots through simple tasks and even lets them teach each other.

Typically, robots learn tasks through demonstrations by humans, or through hand-coded motion planning systems where a programmer specifies each of the required movements. But the former approach is not good at translating skills to new situations, and the latter is very time-consuming.

Humans, on the other hand, can typically demonstrate a simple task, like how to stack logs, to someone else just once before they pick it up, and that person can easily adapt that knowledge to new situations, say if they come across an odd-shaped log or the pile collapses.

In an attempt to mimic this kind of adaptable, one-shot learning, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) combined motion planning and learning through demonstration in an approach they’ve dubbed C-LEARN.

First, a human teaches the robot a series of basic motions using an interactive 3D model on a computer. Using the mouse to show it how to reach and grasp various objects in different positions helps the machine build up a library of possible actions.

The operator then shows the robot a single demonstration of a multistep task, and the robot draws on its database of potential moves to devise a motion plan for carrying out the job at hand.
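
The matching step can be illustrated with a toy sketch. All names and coordinates below are invented, and the real C-LEARN planner reasons over far richer constraints than nearest-neighbour distance; the point is only that a single demonstration is interpreted against a library of previously taught primitives.

```python
# Illustrative sketch only: match each waypoint of one demonstration to the
# closest primitive in a library of taught reach/grasp motions.
import math

# Hypothetical library of primitives the operator taught via the 3D model.
# Each entry: name -> the (x, y, z) target that primitive reaches toward.
primitive_library = {
    "reach_left_shelf":  (0.2, 0.5, 0.3),
    "reach_right_shelf": (0.8, 0.5, 0.3),
    "grasp_low":         (0.5, 0.1, 0.1),
}

def nearest_primitive(waypoint):
    """Pick the library primitive whose target is closest to the waypoint."""
    return min(primitive_library,
               key=lambda name: math.dist(primitive_library[name], waypoint))

# One demonstration of a multistep task, given as rough waypoints.
demonstration = [(0.21, 0.48, 0.31), (0.52, 0.12, 0.09)]
plan = [nearest_primitive(w) for w in demonstration]
print(plan)  # ['reach_left_shelf', 'grasp_low']
```

The human correction step reported in the study would then amount to letting an operator override any primitive the matcher picked wrongly before execution.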

“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah, in a press release.

“We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”

The robot successfully carried out tasks 87.5 percent of the time on its own, but when a human operator was allowed to correct minor errors in the interactive model before the robot carried out the task, the accuracy rose to 100 percent.

Most importantly, the robot could teach the skills it learned to another machine with a completely different configuration. The researchers tested C-LEARN on a new two-armed robot called Optimus that sits on a wheeled base and is designed for bomb disposal.

But in simulations, they were able to seamlessly transfer Optimus’ learned skills to CSAIL’s 6-foot-tall Atlas humanoid robot. They haven’t yet tested Atlas’ new skills in the real world, and they had to give Atlas some extra information on how to carry out tasks without falling over, but the demonstration shows that the approach can allow very different robots to learn from each other.

The research, which will be presented at the IEEE International Conference on Robotics and Automation in Singapore later this month, could have important implications for the large-scale roll-out of robot workers.

“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah in the press release.

“It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”

The MIT researchers aren’t the only people investigating the field of so-called transfer learning. The RoboEarth project and its spin-off RoboHow were both aimed at creating a shared language for robots and an online repository that would allow them to share their knowledge of how to carry out tasks over the web.

Google DeepMind has also been experimenting with ways to transfer knowledge from one machine to another, though in their case the aim is to help skills learned in simulations to be carried over into the real world.

A lot of their research involves deep reinforcement learning, in which robots learn how to carry out tasks in virtual environments through trial and error. But transferring this knowledge from highly engineered simulations into the messy real world is not so simple.

So they have found a way for a model that has learned how to carry out a task in a simulation using deep reinforcement learning to transfer that knowledge to a so-called progressive neural network that controls a real-world robotic arm. This allows the system to take advantage of the accelerated learning possible in a simulation while still learning effectively in the real world.
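
The core idea of a progressive network can be sketched in a few lines. This is a minimal illustration with invented layer sizes in plain NumPy, not DeepMind's actual architecture: the column trained in simulation is frozen, and a new real-world column receives lateral connections from the frozen column's hidden features, so real-world training reuses simulated knowledge without overwriting it.

```python
# Sketch of a progressive network: a frozen "simulation" column feeding
# lateral connections into a trainable "real-world" column.
import numpy as np

rng = np.random.default_rng(0)

# Frozen simulation column: weights fixed after training in simulation.
W_sim = rng.normal(size=(4, 8))          # input -> sim hidden

# New real-world column: its own input weights plus a lateral matrix that
# reads the frozen sim features. Only these would be updated on the robot.
W_real = rng.normal(size=(4, 8))         # input -> real hidden
W_lateral = rng.normal(size=(8, 8))      # sim hidden -> real hidden

def forward(x):
    h_sim = np.tanh(x @ W_sim)                        # frozen features
    h_real = np.tanh(x @ W_real + h_sim @ W_lateral)  # reuses sim knowledge
    return h_real

x = rng.normal(size=(1, 4))
print(forward(x).shape)  # (1, 8)
```

Because the simulation column is never modified, the real-world column cannot "forget" what was learned in simulation, which is the property that makes the transfer safe.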

These kinds of approaches make life easier for data scientists trying to build new models for AI and robots. As James Kobielus notes in InfoWorld, the approach “stands at the forefront of the data science community’s efforts to invent ‘master learning algorithms’ that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.”

If you believe those who say we’re headed towards a technological singularity, you can bet transfer learning will be an important part of that process.

https://singularityhub.com/2017/05/26/these-robots-can-teach-other-robots-how-to-do-new-things/?utm_source=Singularity+Hub+Newsletter&utm_campaign=7c19f894b1-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-7c19f894b1-58158129


A viral video showing an army of little orange robots sorting out packages in a warehouse in eastern China is the latest example of how machines are increasingly taking over menial factory work on the mainland.

The behind-the-scenes footage of the self-charging robot army in a sorting centre of Chinese delivery powerhouse Shentong (STO) Express was shared on People’s Daily’s social media accounts on Sunday.

The video showed dozens of round orange Hikvision robots – each the size of a seat cushion – swivelling across the floor of the large warehouse in Hangzhou, Zhejiang province.

A worker was seen feeding each robot a package; the machines then carried the parcels to different areas of the sorting centre, flipping their lids to drop them into chutes beneath the floor.

The robots identified the destination of each package by scanning a code on the parcel, thus minimising sorting mistakes, according to the video.
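
The sort-by-scan logic amounts to a lookup from a scanned code to a drop chute. A toy sketch follows; the code format and chute numbers are invented, and a real system would include many more fallbacks:

```python
# Toy dispatch: a scanned parcel code maps to a destination region, and the
# region maps to a chute, so a mis-read code is the only way a parcel can be
# routed wrongly.
chute_for_region = {"hangzhou": 1, "beijing": 2, "shanghai": 3}

def route(scanned_code):
    """Return the chute for a parcel code like 'beijing-00042'."""
    region = scanned_code.split("-")[0]
    return chute_for_region.get(region)  # None -> divert to manual handling

print(route("beijing-00042"))  # 2
```

Unrecognised codes return `None`, standing in for the manual-handling fallback any real sorting centre would need.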

The machines can sort up to 200,000 packages a day and are self-charging, meaning they can operate around the clock.

An STO Express spokesman told the South China Morning Post on Monday that the robots had cut the company's sorting labour costs roughly in half compared with using human workers.

They also improved efficiency by around 30 per cent and maximised sorting accuracy, he said.

“We use these robots in two of our centres in Hangzhou right now,” the spokesman said. “We want to start using these across the country, especially in our bigger centres.”

Although the machines could run around the clock, they were currently used for only about six or seven hours a day, starting at 6pm, he said.

Manufacturers across China have been increasingly replacing human workers with machines.

The output of industrial robots in the country grew 30.4 per cent last year.

In the country’s latest five-year plan, the central government set a target aiming for annual production of these robots to reach 100,000 by 2020.

Apple’s supplier Foxconn last year replaced 60,000 factory workers with robots, according to a Chinese government official in Kunshan, eastern Jiangsu province.

The Taiwanese electronics manufacturer has several factories across China.

http://www.scmp.com/news/china/society/article/2086662/chinese-firm-cuts-costs-hiring-army-robots-sort-out-200000

Thanks to Kebmodee for bringing this to the It’s Interesting community.

by Tom Simonite

Each of these trucks is the size of a small two-story house. None has a driver or anyone else on board.

Mining company Rio Tinto has 73 of these titans hauling iron ore 24 hours a day at four mines in Australia’s Mars-red northwest corner. At this one, known as West Angelas, the vehicles work alongside robotic rock drilling rigs. The company is also upgrading the locomotives that haul ore hundreds of miles to port—the upgrades will allow the trains to drive themselves, and be loaded and unloaded automatically.

Rio Tinto intends its automated operations in Australia to preview a more efficient future for all of its mines—one that will also reduce the need for human miners. The rising capabilities and falling costs of robotics technology are allowing mining and oil companies to reimagine the dirty, dangerous business of getting resources out of the ground.

BHP Billiton, the world’s largest mining company, is also deploying driverless trucks and drills on iron ore mines in Australia. Suncor, Canada’s largest oil company, has begun testing driverless trucks on oil sands fields in Alberta.

“In the last couple of years we can just do so much more in terms of the sophistication of automation,” says Herman Herman, director of the National Robotics Engineering Center at Carnegie Mellon University, in Pittsburgh. The center helped Caterpillar develop its autonomous haul truck. Mining company Fortescue Metals Group is putting them to work in its own iron ore mines. Herman says the technology can be deployed sooner for mining than other applications, such as transportation on public roads. “It’s easier to deploy because these environments are already highly regulated,” he says.

Rio Tinto uses driverless trucks provided by Japan’s Komatsu. They find their way around using precision GPS and look out for obstacles using radar and laser sensors.

Rob Atkinson, who leads productivity efforts at Rio Tinto, says the fleet and other automation projects are already paying off. The company’s driverless trucks have proven to be roughly 15 percent cheaper to run than vehicles with humans behind the wheel, says Atkinson—a significant saving since haulage is by far a mine’s largest operational cost. “We’re going to continue as aggressively as possible down this path,” he says.

Trucks that drive themselves can spend more time working because software doesn’t need to stop for shift changes or bathroom breaks. They are also more predictable in how they do things like pull up for loading. “All those places where you could lose a few seconds or minutes by not being consistent add up,” says Atkinson. They also improve safety, he says.

The driverless locomotives, due to be tested extensively next year and fully deployed by 2018, are expected to bring similar benefits. Atkinson also anticipates savings on train maintenance, because software can be more predictable and gentle than any human in how it uses brakes and other controls. Diggers and bulldozers could be next to be automated.

Herman at CMU expects all large mining companies to widen their use of automation in the coming years as robotics continues to improve. The recent, sizeable investments by auto and tech companies in driverless cars will help accelerate improvements in the price and performance of the sensors, software, and other technologies needed.

Herman says many mining companies are well placed to expand automation rapidly, because they have already invested in centralized control systems that use software to coordinate and monitor their equipment. Rio Tinto, for example, gave the job of overseeing its autonomous trucks to staff at the company’s control center in Perth, 750 miles to the south. The center already plans train movements and in the future will shift from sending orders to people to directing driverless locomotives.

Atkinson of Rio Tinto acknowledges that just like earlier technologies that boosted efficiency, those changes will tend to reduce staffing levels, even if some new jobs are created servicing and managing autonomous machines. “It’s something that we’ve got to carefully manage, but it’s a reality of modern day life,” he says. “We will remain a very significant employer.”

https://www.technologyreview.com/s/603170/mining-24-hours-a-day-with-robots/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

by Bryan Nelson

Quantum physics has some spooky, counterintuitive effects, but it could also be essential to how actual intuition works, at least with regard to artificial intelligence.

In a new study, researcher Vedran Dunjko and co-authors applied a quantum analysis to a field within artificial intelligence called reinforcement learning, which deals with how to program a machine to make appropriate choices to maximize a cumulative reward. The field is surprisingly complex and must take into account everything from game theory to information theory.
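
Reinforcement learning itself is easy to illustrate classically. Below is a minimal sketch on a made-up five-state corridor (not the paper's setting): a tabular Q-learning agent learns by trial and error that moving right earns the reward. The quantum result concerns speeding up exactly this kind of interactive learning loop.

```python
# Tabular Q-learning on a tiny corridor: states 0..4, reward 1 for reaching
# state 4. The agent chooses left (-1) or right (+1) moves.
import random

n_states, actions = 5, [-1, +1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                     # episodes
    s = 0
    while s != n_states - 1:
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda a: Q[(s, a)]))
        s2 = min(max(s + a, 0), n_states - 1)   # clamp to the corridor
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# The learned greedy policy should move right from every non-goal state.
policy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
print(policy)  # [1, 1, 1, 1]
```

The cumulative-reward objective and the explore/exploit trade-off visible here are the quantities the quantum speed-ups in the study are measured against.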

Dunjko and his team found that quantum effects, when applied to reinforcement learning in artificial intelligence systems, could provide quadratic improvements in learning efficiency, reports Phys.org. Exponential improvements might even be possible over short-term performance tasks. The study was published in the journal Physical Review Letters.

“This is, to our knowledge, the first work which shows that quantum improvements are possible in more general, interactive learning tasks,” explained Dunjko. “Thus, it opens up a new frontier of research in quantum machine learning.”

One of the key quantum effects with regard to learning is quantum superposition, which potentially allows a machine to perform many steps simultaneously. Such a system has vastly improved processing power, which allows it to compute more variables when making decisions.

The research is tantalizing, in part because it mirrors some theories about how biological brains might produce higher cognitive states, possibly even being related to consciousness. For instance, some scientists have proposed the idea that our brains pull off their complex calculations by making use of quantum computation.

Could quantum effects unlock consciousness in our machines? Quantum physics isn’t likely to produce HAL from “2001: A Space Odyssey” right away; the most immediate improvements in artificial intelligence will likely come in complex fields such as climate modeling or automated cars. But eventually, who knows?

You probably won’t want to be taking a joyride in an automated vehicle the moment it becomes conscious, if HAL is an example of what to expect.

“While the initial results are very encouraging, we have only begun to investigate the potential of quantum machine learning,” said Dunjko. “We plan on furthering our understanding of how quantum effects can aid in aspects of machine learning in an increasingly more general learning setting. One of the open questions we are interested in is whether quantum effects can play an instrumental role in the design of true artificial intelligence.”

http://www.mnn.com/green-tech/research-innovations/stories/quantum-artificial-intelligence-could-lead-super-smart-machines

by Lorenzo Tanos

The mind-controlled robotic arm of Pennsylvania man Nathan Copeland hasn’t just gained the sense of touch. It has also shaken the hand of the U.S. President himself, Barack Obama.

Copeland, 30, was part of a groundbreaking research project involving researchers from the University of Pittsburgh and the University of Pittsburgh Medical Center. In this experiment, Copeland’s brain was implanted with microscopic electrodes — a report from the Washington Post describes them as being “smaller than a grain of sand.” With the electrodes implanted in the cortex of his brain, they then interacted with his robotic arm. This allowed Copeland to regain some feeling in the fingers of his paralyzed right hand, as the process worked around the spinal cord damage that had robbed him of the sense of touch.

More than a decade had passed since Copeland, then a college student in his teens, had suffered his injuries in a car accident. The wreck had resulted in tetraplegia, or the paralysis of both arms and legs, though it didn’t completely rob the Western Pennsylvania resident of the ability to move his shoulders. He then volunteered in 2011 for the University of Pittsburgh Medical Center project, a broader research initiative with the goal of helping paralyzed individuals feel again. The Washington Post describes this process as something “even more difficult” than helping these people move again.

For Nathan Copeland, the robotic arm experiment has proven to be a success, as he’s regained the ability to feel most of his fingers. He told the Washington Post on Wednesday that the type of feeling does differ at times, but he can “tell most of the fingers with definite precision.” Likewise, UPMC biomedical engineer Robert Gaunt told the publication that he felt “relieved” that the project allowed Copeland to feel parts of the hand that had no feeling for the past 10 years.

Prior to this experiment, mind-controlled robotic arm capabilities were already quite impressive, but lacking one key ingredient – the sense of touch. These prosthetics allowed people to move objects around, but since the individuals using the arms didn’t have working peripheral nerve systems, they couldn’t feel the sense of touch, and movements with the robotic limbs were typically mechanical in nature. But that’s not the case with Nathan Copeland, according to UPMC’s Gaunt.

“With Nathan, he can control a prosthetic arm, do a handshake, fist bump, move objects around,” Gaunt observed. “And in this (study), he can experience sensations from his own hand. Now we want to put those two things together so that when he reaches out to grasp an object, he can feel it. … He can pick something up that’s soft and not squash it or drop it.”

But it wasn’t just ordinary handshakes that Copeland was sharing on Thursday. That day, he exchanged a handshake and fist bump with President Barack Obama, who was in Pittsburgh for a White House Frontiers Conference. And Obama appeared suitably impressed with what Gaunt and his team had achieved, as it gave Copeland’s robotic arm and hand “pretty impressive” precision.

“When I’m moving the hand, it is also sending signals to Nathan so he is feeling me touching or moving his arm,” said Obama.

Unfortunately, Copeland won’t be able to go home with his specialized prosthesis. In a report from the Associated Press, he said that the experiment mainly amounts to having “done some cool stuff with some cool people.” But he nonetheless remains hopeful, as he believes that his experience with the robotic arm will mark some key advances in the quest to make paralyzed people regain their natural sense of touch.

Read more at http://www.inquisitr.com/3599638/paralyzed-mans-robotic-arm-gets-to-feel-again-shakes-obamas-hand/#xVzFDHGXukJWBV05.99

David Mills has opened up about his two-year ‘relationship’ with a doll.

The 57-year-old has just celebrated his second anniversary with Taffy, his £5000 “RealDoll2”, with silicone skin and steel joints.

He has revealed that some women are turned on by the doll and he’s even shared a threesome with one woman.

The twice-divorced dad says he still dates, and gets differing reactions when he tells women about his sex doll; some would “freak out”.

He told Men’s Health: “They’ll be like, ‘Don’t call me anymore, I’m unfriending you on Facebook, stay away from me and my children,’ that sort of thing.

“But I’ve met some women who were into me because of the doll. I’ve had sexual experiences that I never would’ve had without Taffy.”

The American bought the sex robot from a Californian company two years ago and paid an extra £300 for added freckles, to make her more realistic.

The robots come with a £5,000 price tag and the latest versions will even come with a pulse.

According to the website of sex doll suppliers Abyss, Taffy has an “ultra-realistic labia,” “stretchy lips,” and a hinged jaw that “opens and closes very realistically.”

In the first few months, he revealed, he would often come home, see the frozen figure sitting on a chair, and let out a blood-curdling scream.

David recalls one occasion when he brought a woman back to his house after a date, without telling her about his silicone companion.

He added: “I didn’t want my date to walk into the room and suddenly see Taffy, because if you’re not expecting her, she’s kind of terrifying.”

“So I say to this girl, ‘Give me a minute.’ And I run into the bedroom and quickly throw a sheet over Taffy.

“That was a close one.”

David laughs as he recalls one particular act with Taffy which would be impossible with a real woman.

He said: “Sometimes, when I just don’t feel like looking at her, I’ll take out her vagina.

“She stays in the bedroom, and I just walk around with her p***y. Isn’t modern technology wonderful?”

But David is keen to point out that his ownership of a sex robot doesn’t mean he is crazy.

He said: “I wouldn’t exactly call this a relationship.

“I think one of the misconceptions about sex robots is that owners view their dolls as alive, or that my doll is in love with me, or that I sit around and talk to her about whether I should buy Apple stock.”

He also revealed that his 20-year-old daughter is aware of Taffy’s existence.

“We don’t really talk about it,” he added. “Just like we don’t talk about my television set or washing machine.”

Sex robots have become much more sophisticated in recent years and experts say walking, talking dolls won’t be too far away.

The “RoxxxyGold” robot from True Companion — with a base price, before the extras, of £4,800 — offers options including “a heartbeat and a circulatory system” and the ability to “talk to you about soccer.”

‘I’ve met some women who were into me because of the doll’… Man reveals the truth of his two-year ‘relationship’ with a sex robot