Posts Tagged ‘robot’

By Sigal Samuel

A new priest named Mindar is holding forth at Kodaiji, a 400-year-old Buddhist temple in Kyoto, Japan. Like other clergy members, this priest can deliver sermons and move around to interface with worshippers. But Mindar comes with some … unusual traits. A body made of aluminum and silicone, for starters.

Mindar is a robot.

Designed to look like Kannon, the Buddhist deity of mercy, the $1 million machine is an attempt to reignite people’s passion for their faith in a country where religious affiliation is on the decline.

For now, Mindar is not AI-powered. It just recites the same preprogrammed sermon about the Heart Sutra over and over. But the robot’s creators say they plan to give it machine-learning capabilities that’ll enable it to tailor feedback to worshippers’ specific spiritual and ethical problems.

“This robot will never die; it will just keep updating itself and evolving,” said Tensho Goto, the temple’s chief steward. “With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It’s changing Buddhism.”

Robots are changing other religions, too. In 2017, Indians rolled out a robot that performs the Hindu aarti ritual, which involves moving a light round and round in front of a deity. That same year, in honor of the Protestant Reformation’s 500th anniversary, Germany’s Protestant Church created a robot called BlessU-2. It gave preprogrammed blessings to over 10,000 people.

Then there’s SanTO — short for Sanctified Theomorphic Operator — a 17-inch-tall robot reminiscent of figurines of Catholic saints. If you tell it you’re worried, it’ll respond by saying something like, “From the Gospel according to Matthew, do not worry about tomorrow, for tomorrow will worry about itself. Each day has enough trouble of its own.”

Roboticist Gabriele Trovato designed SanTO to offer spiritual succor to elderly people whose mobility and social contact may be limited. Next, he wants to develop devices for Muslims, though it remains to be seen what form those might take.

As more religious communities begin to incorporate robotics — in some cases, AI-powered and in others, not — it stands to change how people experience faith. It may also alter how we engage in ethical reasoning and decision-making, which is a big part of religion.

For the devout, there’s plenty of positive potential here: Robots can get disinterested people curious about religion or allow for a ritual to be performed when a human priest is inaccessible. But robots also pose risks for religion — for example, by making it feel too mechanized or homogenized or by challenging core tenets of theology. On the whole, will the emergence of AI religion make us better or worse off? The answer depends on how we design and deploy it — and on whom you ask.

Some cultures are more open to religious robots than others
New technologies often make us uncomfortable. Which ones we ultimately accept — and which ones we reject — is determined by an array of factors, ranging from our degree of exposure to the emerging technology to our moral presuppositions.

Japanese worshippers who visit Mindar are reportedly not too bothered by questions about the risks of siliconizing spirituality. That makes sense given that robots are already so commonplace in the country, including in the religious domain.

For years now, people who can’t afford to pay a human priest to perform a funeral have had the option to pay a robot named Pepper to do it at a much cheaper rate. And in China, at Beijing’s Longquan Monastery, an android monk named Xian’er recites Buddhist mantras and offers guidance on matters of faith.

What’s more, Buddhism’s non-dualistic metaphysical notion that everything has inherent “Buddha nature” — that all beings have the potential to become enlightened — may predispose its adherents to be receptive to spiritual guidance that comes from technology.

At the temple in Kyoto, Goto put it like this: “Buddhism isn’t a belief in a God; it’s pursuing Buddha’s path. It doesn’t matter whether it’s represented by a machine, a piece of scrap metal, or a tree.”

“Mindar’s metal skeleton is exposed, and I think that’s an interesting choice — its creator, Hiroshi Ishiguro, is not trying to make something that looks totally human,” said Natasha Heller, an associate professor of Chinese religions at the University of Virginia. She told me the deity Kannon, upon whom Mindar is based, is an ideal candidate for cyborgization because the Lotus Sutra explicitly says Kannon can manifest in different forms — whatever forms will best resonate with the humans of a given time and place.

Westerners seem more disturbed by Mindar, likening it to Frankenstein’s monster. In Western economies, we don’t yet have robots enmeshed in many aspects of our lives. What we do have is a pervasive cultural narrative, reinforced by Hollywood blockbusters, about our impending enslavement at the hands of “robot overlords.”

Plus, Abrahamic religions like Islam or Judaism tend to be more metaphysically dualistic — there’s the sacred and then there’s the profane. And they have more misgivings than Buddhism about visually depicting divinity, so they may take issue with Mindar-style iconography.

They also have different ideas about what makes a religious practice effective. For example, Judaism places a strong emphasis on intentionality, something machines don’t possess. When a worshipper prays, what matters is not just that their mouth forms the right words — it’s also very important that they have the right intention.

Meanwhile, some Buddhists use prayer wheels containing scrolls printed with sacred words and believe that spinning the wheel has its own spiritual efficacy, even if nobody recites the words aloud. In hospice settings, elderly Buddhists who don’t have people on hand to recite prayers on their behalf will use devices known as nianfo ji — small machines about the size of an iPhone, which recite the name of the Buddha endlessly.

Despite such theological differences, it’s ironic that many Westerners have a knee-jerk negative reaction to a robot like Mindar. The dream of creating artificial life goes all the way back to ancient Greece, where, as the Stanford classicist Adrienne Mayor documents in her book Gods and Robots, the ancients actually built animated machines. And there is a long tradition of religious robots in the West.

In the Middle Ages, Christians designed automata to perform the mysteries of Easter and Christmas. One proto-roboticist in the 16th century designed a mechanical monk that is, amazingly, performing ritual gestures to this day. With his right arm, he strikes his chest in a mea culpa; with his left, he raises a rosary to his lips.

In other words, the real novelty is not the use of robots in the religious domain but the use of AI.

How AI may change our theology and ethics
Even as our theology shapes the AI we create and embrace, AI will also shape our theology. It’s a two-way street.

Some people believe AI will force a truly momentous change in theology, because if humans create intelligent machines with free will, we’ll eventually have to ask whether they have something functionally similar to a soul.

“There will be a point in the future when these free-willed beings that we’ve made will say to us, ‘I believe in God. What do I do?’ At that point, we should have a response,” said Kevin Kelly, a Christian co-founder of Wired magazine who argues we need to develop “a catechism for robots.”

Other people believe that, rather than seeking to join a human religion, AI itself will become an object of worship. Anthony Levandowski, the Silicon Valley engineer who triggered a major Uber/Waymo lawsuit, has set up the first church of artificial intelligence, called Way of the Future. Levandowski’s new religion is dedicated to “the realization, acceptance, and worship of a Godhead based on artificial intelligence (AI) developed through computer hardware and software.”

Meanwhile, Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me AI may also force a traditional religion like Catholicism to reimagine its understanding of human priests as divinely called and consecrated — a status that grants them special authority.

“The Catholic notion would say the priest is ontologically changed upon ordination. Is that really true?” she asked. Maybe priestliness is not an esoteric essence but a programmable trait that even a “fallen” creation like a robot can embody. “We have these fixed philosophical ideas and AI challenges those ideas — it challenges Catholicism to move toward a post-human priesthood.” (For now, she joked, a robot would probably do better as a Protestant.)

Then there are questions about how robotics will change our religious experiences. Traditionally, those experiences are valuable in part because they leave room for the spontaneous and surprising, the emotional and even the mystical. That could be lost if we mechanize them.

To visualize an automated ritual, see the video of a robotic arm performing a Hindu aarti ceremony that accompanies the original article.

Another risk has to do with how an AI priest would handle ethical queries and decision-making. Robots whose algorithms learn from previous data may nudge us toward decisions based on what people have done in the past, incrementally homogenizing answers to our queries and narrowing the scope of our spiritual imagination.

That risk also exists with human clergy, Heller pointed out: “The clergy is bounded too — there’s already a built-in nudging or limiting factor, even without AI.”

But AI systems can be particularly problematic in that they often function as black boxes. We typically don’t know what sorts of biases are coded into them or what sorts of human nuance and context they’re failing to understand.

Let’s say you tell a robot you’re feeling depressed because you’re unemployed and broke, and the only job that’s available to you seems morally odious. Maybe the robot responds by reciting a verse from Proverbs 14: “In all toil there is profit, but mere talk tends only to poverty.” Even if it doesn’t presume to interpret the verse for you, in choosing that verse it’s already doing hidden interpretational work. It’s analyzing your situation and algorithmically determining a recommendation — in this case, one that may prompt you to take the job.

But perhaps it would’ve worked out better for you if the robot had recited a verse from Proverbs 16: “Commit your work to the Lord, and your plans will be established.” Maybe that verse would prompt you to pass on the morally dubious job, and, being a sensitive soul, you’ll later be happy you did. Or maybe your depression is severe enough that the job issue is somewhat beside the point and the crucial thing is for you to seek out mental health treatment.

A human priest who knows your broader context as a whole person may gather this and give you the right recommendation. An android priest might miss the nuances and just respond to the localized problem as you’ve expressed it.
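
To make that hidden interpretational work concrete, here is a deliberately crude Python sketch of a keyword-matching verse picker. Nothing suggests any real robot priest works this way; the keyword lists, the scoring rule, and the two verses are assumptions chosen purely to show how the selection step itself encodes a judgment.

```python
# Toy illustration of the "hidden interpretation" problem described above.
# The keyword lists, verses, and scoring rule are invented for this sketch.

VERSES = {
    "Proverbs 14:23": (
        "In all toil there is profit, but mere talk tends only to poverty.",
        {"unemployed", "broke", "money", "job", "work"},
    ),
    "Proverbs 16:3": (
        "Commit your work to the Lord, and your plans will be established.",
        {"worried", "uncertain", "plans", "decision", "morally"},
    ),
}

def pick_verse(confession: str) -> tuple[str, str]:
    """Return the verse whose keyword set best overlaps the user's words."""
    words = set(confession.lower().replace(",", " ").replace(".", " ").split())
    ref, (text, _) = max(VERSES.items(), key=lambda kv: len(kv[1][1] & words))
    return ref, text

print(pick_verse(
    "I'm depressed because I'm unemployed and broke, "
    "and the only job open to me seems morally odious."
))
```

Even this toy picker steers the outcome: because "unemployed", "broke", and "job" outnumber "morally" in the confession, it lands on the verse that nudges the user toward taking the job.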

The fact is human clergy members do so much more than provide answers. They serve as the anchor for a community, bringing people together. They offer pastoral care. And they provide human contact, which is in danger of becoming a luxury good as we create robots to more cheaply do the work of people.

On the other hand, Delio said, robots can excel in a social role in some ways that human priests might not. “Take the Catholic Church. It’s very male, very patriarchal, and we have this whole sexual abuse crisis. So would I want a robot priest? Maybe!” she said. “A robot can be gender-neutral. It might be able to transcend some of those divides and be able to enhance community in a way that’s more liberating.”

Ultimately, in religion as in other domains, robots and humans are perhaps best understood not as competitors but as collaborators. Each offers something the other lacks.

As Delio put it, “We tend to think in an either/or framework: It’s either us or the robots. But this is about partnership, not replacement. It can be a symbiotic relationship — if we approach it that way.”

https://www.vox.com/future-perfect/2019/9/9/20851753/ai-religion-robot-priest-mindar-buddhism-christianity

Thanks to Kebmodee for bringing this to the It’s Interesting community.


By Greg Ip

It’s time to stop worrying that robots will take our jobs — and start worrying that they will decide who gets jobs.

Millions of low-paid workers’ lives are increasingly governed by software and algorithms. This was starkly illustrated by a report last week that Amazon.com tracks the productivity of its employees and regularly fires those who underperform, with almost no human intervention.

“Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors,” a law firm representing Amazon said in a letter to the National Labor Relations Board, as first reported by technology news site The Verge. Amazon was responding to a complaint that it had fired an employee from a Baltimore fulfillment center for federally protected activity, which could include union organizing. Amazon said the employee was fired for failing to meet productivity targets.

Perhaps it was only a matter of time before software started firing people. After all, it already screens resumes, recommends job applicants, schedules shifts and assigns projects. In the workplace, “sophisticated technology to track worker productivity on a minute-by-minute or even second-by-second basis is incredibly pervasive,” says Ian Larkin, a business professor at the University of California at Los Angeles specializing in human resources.

Industrial laundry services track how many seconds it takes to press a laundered shirt; on-board computers track truckers’ speed, gear changes and engine revolutions per minute; and checkout terminals at major discount retailers report if the cashier is scanning items quickly enough to meet a preset goal. In all these cases, results are shared in real time with the employee, and used to determine who is terminated, says Mr. Larkin.

Of course, weeding out underperforming employees is a basic function of management. General Electric Co.’s former chief executive Jack Welch regularly culled the company’s underperformers. “In banking and management consulting it is standard to exit about 20% of employees a year, even in good times, using ‘rank and yank’ systems,” says Nick Bloom, an economist at Stanford University specializing in management.

For employees of General Electric, Goldman Sachs Group Inc. and McKinsey & Co., that risk is more than compensated for by the reward of stimulating and challenging work and handsome paychecks. The risk-reward trade-off in industrial laundries, fulfillment centers and discount stores is not nearly so enticing: the work is repetitive and the pay is low. Those who aren’t weeded out one year may be the next if the company raises its productivity targets. Indeed, wage inequality doesn’t fully capture how unequal work has become: enjoyable and secure at the top, monotonous and insecure at the bottom.

At fulfillment centers, employees locate, scan and box all the items in an order. Amazon’s “Associate Development and Performance Tracker,” or Adapt, tracks how each employee performs on these steps against externally-established benchmarks and warns employees when they are falling short.
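
Amazon has not published how Adapt works internally; the letter only describes rate tracking against benchmarks with automatic warnings and terminations. The Python sketch below shows that general pattern, with the thresholds, field names, and escalation policy all invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of a benchmark-based productivity tracker of the kind
# described in the letter to the NLRB. Thresholds, field names, and the
# warning/termination policy are invented for illustration only.

@dataclass
class ShiftRecord:
    employee_id: str
    units_per_hour: float      # measured pick/scan/box rate for the shift

def evaluate(records: list[ShiftRecord], benchmark: float = 100.0,
             max_warnings: int = 3) -> dict[str, str]:
    """Flag each employee whose rate falls below the benchmark.

    Returns 'ok', 'warning', or 'termination notice' per employee, with no
    supervisor in the loop -- the property critics of automated discipline
    object to."""
    warnings: dict[str, int] = {}
    for rec in records:
        if rec.units_per_hour < benchmark:
            warnings[rec.employee_id] = warnings.get(rec.employee_id, 0) + 1

    status: dict[str, str] = {}
    for rec in records:
        n = warnings.get(rec.employee_id, 0)
        if n == 0:
            status[rec.employee_id] = "ok"
        elif n < max_warnings:
            status[rec.employee_id] = "warning"
        else:
            status[rec.employee_id] = "termination notice"
    return status

print(evaluate([ShiftRecord("A12", 92.0), ShiftRecord("A12", 88.0),
                ShiftRecord("A12", 95.0), ShiftRecord("B07", 120.0)]))
```

In a real deployment the benchmark itself would be recalculated over time, which is what employees describe as ever-rising targets.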

Amazon employees have complained of being monitored continuously — even having bathroom breaks measured — and being held to ever-rising productivity benchmarks. There is no public data to determine if such complaints are more or less common at Amazon than its peers. The company says about 300 employees — roughly 10% of the Baltimore center’s employment level — were terminated for productivity reasons in the year before the law firm’s letter was sent to the NLRB.

Mr. Larkin says 10% is not unusually high. Yet, automating the discipline process, he says, “makes an already difficult job seem even more inhuman and undesirable. Dealing with these tough situations is one of the key roles of managers.”

“Managers make final decisions on all personnel matters,” an Amazon spokeswoman said. “The [Adapt system] simply tracks and ensures consistency of data and process across hundreds of employees to ensure fairness.” The number of terminations has decreased in the last two years at the Baltimore facility and across North America, she said. Termination notices can be appealed.

Companies use these systems because they work well for them.

Mr. Bloom and his co-authors find that companies that more aggressively hire, fire and monitor employees have faster productivity growth. They also have wider gaps between the highest- and lowest-paid employees.

Computers also don’t succumb to the biases managers do. Economists Mitchell Hoffman, Lisa Kahn and Danielle Li looked at how 15 firms used a job-testing technology that tested applicants on computer and technical skills, personality, cognitive skills, fit for the job and various job scenarios. Drawing on past correlations, the algorithm ranked applicants as having high, moderate or low potential. Their study found employees hired against the software’s recommendation were below-average performers: “This suggests that managers often overrule test recommendations because they are biased or mistaken, not only because they have superior private information,” they wrote.
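
The inference in that study is simple enough to sketch: if managers who override the test had superior private information, their exception hires should outperform comparable hires made by following the test. The snippet below shows only that comparison, on made-up numbers; it is not the authors’ dataset or methodology.

```python
# Illustrative comparison, not the study's data: did exception hires
# (made against the test's recommendation) outperform rule-following hires?
from statistics import mean

hires = [
    # (followed_test_recommendation, performance score) -- all values invented
    (True, 0.74), (True, 0.69), (True, 0.81), (True, 0.77),
    (False, 0.58), (False, 0.66), (False, 0.61),
]

followed = mean(score for ok, score in hires if ok)
overridden = mean(score for ok, score in hires if not ok)
print(f"followed test: {followed:.2f}   manager override: {overridden:.2f}")
# If overrides reflected better private information, the second number
# should be higher; in the study it was lower.
```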

Last fall Amazon raised its starting pay to $15 an hour, several dollars more than what the brick-and-mortar stores being displaced by Amazon pay. Ruthless performance tracking is how Amazon ensures employees are productive enough to merit that salary. This also means that, while employees may increasingly be supervised by technology, at least they’re not about to be replaced by it.

Write to Greg Ip at greg.ip@wsj.com

https://www.morningstar.com/news/glbnewscan/TDJNDN_201905017114/for-lowerpaid-workers-the-robot-overlords-have-arrived.html

Thanks to Kebmodee for bringing this to the It’s Interesting community.


by LINDSAY DODGSON

Medical training exercises are getting more and more realistic. Recently, companies have developed robots that medical students can practise on.

The idea is that these pretend people can lead us a little way into the uncanny valley, so we have to deal with the emotional response as well as the methodology behind a procedure.

One of the latest medical robots is called HAL. It takes the form of a five-year-old boy that can respond to certain questions, follow a finger with its eyes, bleed, and convulse.

It even has a pulse.

HAL was built by Gaumard Scientific, a company that produced the first synthetic human skeleton for medical schools.

The company’s technology has come a long way since then; it has now developed a synthetic boy that can simulate many medical problems, cry tears, and shout for its mother.

Using HAL is supposed to help students retain their knowledge better, because it is as close as they can get to treating a real person without actually using a human volunteer.

HAL’s other functions include going into cardiac arrest or anaphylactic shock, and it can have its blood sugar, blood oxygen, and carbon dioxide levels measured.

Also, its pupils dilate when a light is shined into its eyes.

In a promotional video, a doctor asks HAL how much its head hurts, and it responds “an eight”.

To prepare for the really bad injuries and problems, HAL can be hooked up to real hospital machines and shocked with a defibrillator.

When it’s awake it can be set to several different emotional states, including lethargic, angry, amazed, quizzical, and anxious.

The idea is to make HAL just realistic enough to help students with their studies, but not so realistic that it’s too traumatic to deal with when they have to slit its throat to insert a tracheal tube.

HAL is one of a few medical robots currently in use. On the Gaumard website there is also a premature baby simulator, and a scarily realistic robot that gives birth.

These pretend people are very different from the lifeless dummies medical professionals have used for decades.

“I’ve seen several nurses be like, ‘Whoa it moves!'” Marc Berg, the medical director at the Revive Initiative for Resuscitation Excellence at Stanford, told Wired in a chilling article.

“I think that’s kind of similar to the idea that if you’ve driven a car for 20 years and then you got a brand new car, you’re kind of amazed initially.”

Watch the video explaining all of HAL’s functions here:

https://www.sciencealert.com/this-robot-child-bleeds-screams-and-cries-for-its-mother

Robots, take note: When working in tight, crowded spaces, fire ants know how to avoid too many cooks in the kitchen.

Observations of fire ants digging an underground nest reveal that a few industrious ants do most of the work while others dawdle. Computer simulations confirm that, while this strategy may not be the fairest, it is the most efficient because it helps reduce overcrowding in tunnels that would gum up the works. Following fire ants’ example could help robot squads work together more efficiently, researchers report in the Aug. 17 Science.

Robots that can work in close, crowded quarters without tripping each other up may be especially good at digging through rubble for search-and-rescue missions, disaster cleanup or construction, says Justin Werfel, a collective behavior researcher at Harvard University who has designed insect-inspired robot swarms.

Daniel Goldman, a physicist at Georgia Tech in Atlanta, and colleagues pored over footage of about 30 fire ants digging tunnels during 12-hour stretches. “To our surprise, we found that there’s only about three to five ants doing anything” at a time, Goldman says. Although individual ants’ activity levels varied over time, about 30 percent of the ants did about 70 percent of the work in any given 12-hour period.

To investigate why fire ants divvy up work this way, Goldman’s team created computer simulations of two ant colonies digging tunnels. In one, the virtual ants mimicked the real insects’ unequal work split; in the other, all the ants pitched in equally. The colony with fewer heavy lifters was better at keeping tunnel traffic moving; in three hours, it dug a tunnel about three times as long as the one dug by the colony whose ants all did their fair share.

Goldman’s team then tested the fire ants’ teamwork strategy on autonomous robots. These robots trundled back and forth along a narrow track, scooping up plastic balls at one end and dumping them at the other. Programming the robots to do equal work is “not so bad when you have two or three,” Goldman says, “but when you get four in that little narrow tunnel, forget about it.” The four-bot fleet tended to get stuck in pileups. Programming the robots to share the workload unequally helped avoid these smashups and move material 35 percent faster, the researchers found.
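
A back-of-the-envelope way to see why fewer active diggers can move more material is sketched below. The quadratic congestion penalty is an assumption made up for this toy; it is not the cellular automaton or robophysical model the researchers actually used, but it captures the basic trade-off between more workers and more traffic.

```python
def tunnel_throughput(active_diggers: int, base_trip: float = 20.0,
                      congestion: float = 4.0, hours: float = 3.0) -> float:
    """Toy model: each round trip into a single-lane tunnel takes longer the
    more diggers are inside, because they must squeeze past one another.
    The quadratic penalty below is an invented assumption, not measured data.
    Returns pellets removed in `hours` (one pellet per trip per digger)."""
    trip_time = base_trip + congestion * (active_diggers - 1) ** 2   # minutes
    trips_per_digger = hours * 60.0 / trip_time
    return active_diggers * trips_per_digger

for k in (3, 5, 10):
    print(f"{k:2d} active diggers -> {tunnel_throughput(k):5.1f} pellets in 3 hours")
```

With these invented numbers, a few diggers outperform the full crew, which is the qualitative pattern the simulations and robot experiments report.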

J. Aguilar et al. Collective clog control: Optimizing traffic flow in confined biological and robophysical excavation. Science. Vol. 361, August 17, 2018, p. 672.

https://www.sciencenews.org/article/what-robots-could-learn-fire-ants


An electronics repair company gives a compassionate farewell to mechanical pets, with a traditional ceremony held in a historic temple.

By James Burch
A traveler happening upon a funeral for robot dogs might be taken aback.

Is this a performance art statement about modern life? Is it a hoax? A practical joke?

But this is actually a religious ceremony, and the emotions expressed by the human participants are genuine.

A dog-shaped robot—as opposed to, say, a dish on wheels with a built-in vacuum cleaner—represented a focus on entertainment and companionship. When Sony released the AIBO (short for “artificial intelligence robot”) in 1999, 3,000 units—the greater share of the first run—were sold to the Japanese market. At an initial cost of $3,000 in today’s money, those sold out in 20 minutes.

But AIBOs never became more than a niche product, and in 2006 Sony canceled production. In seven years, they’d sold 150,000 of the robots.

Some AIBO owners had already become deeply attached to their pet robots, though. And here is where the story takes an unexpected turn.

AIBOs aren’t like remote-control cars. They were designed to move in complex, fluid ways, with trainability and a simulated mischievous streak.

Over time, they would come to “know” their human companions, who grew attached to them as if they were real dogs.

The AIBOs’ programs included both doggish behaviors, like tail-wagging, and humanlike actions, such as dancing, and—in later models—speech.

So when Sony announced in 2014 that they would no longer support updates to the aging robots, some AIBO owners heard a much more somber message: Their pet robot dogs would die. The community of devoted owners began sharing tips on providing care for their pets in the absence of official support.

Nobuyuki Norimatsu didn’t intend to create a cyberhospital. According to Nippon.com, the former Sony employee, who founded the repair company A-Fun in Chiba Prefecture, near Tokyo, simply felt a duty to stand by the company’s products.

And then came a request to repair an AIBO. Nippon.com reports that, at first, no one knew exactly what to do, but months of trial and error saw the robodog back on its feet. Soon, A-Fun had a steady demand for AIBO repairs—which could only be made by cannibalizing parts from other, defunct AIBOs.

Hiroshi Funabashi, A-Fun’s repairs supervisor, observes that the company’s clients describe their pets’ complaints in such terms as “aching joints.” Funabashi realized that they were not seeing a piece of electronic equipment, but a family member.

And Norimatsu came to regard the broken AIBOs his company received as “organ donors.” Out of respect for the owners’ emotional connection to the “deceased” devices, Norimatsu and his colleagues decided to hold funerals.

A-Fun approached Bungen Oi, head priest of Kōfuku-ji, a Buddhist temple in Chiba Prefecture’s city of Isumi. Oi agreed to take on the duty of honoring the sacrifice of donor AIBOs before their disassembly. In 2015, the centuries-old temple held its first robot funeral for 17 decommissioned AIBOs. Just as with the repairs, demand for funeral ceremonies quickly grew.

The most recent service, in April 2018, brought the total number of dearly departed AIBOs to about 800. Tags attached to the donor bodies record the dogs’ and owners’ names.

Services include chanting and the burning of incense, as they would for the human departed. A-Fun employees attend the closed ceremonies, serving as surrogates for the “families” of the pets, and pliers are placed before the robodogs in place of traditional offerings like fruit. Robots even recite Buddhist sutras, or scriptures.

According to Head Priest Oi, honoring inanimate objects is consistent with Buddhist thought. Nippon.com quotes the priest: “Even though AIBO is a machine and doesn’t have feelings, it acts as a mirror for human emotions.” Speaking with videographer Kei Oumawatari, Oi cites a saying, “Everything has Buddha-nature.”

AIBOs and similar robots are especially popular among the elderly, and limited research hints that robots could potentially act like therapy animals—though attachment to machines could also be a symptom of loneliness, an increasing concern in Japan.

Sony has now introduced a new line of more advanced AIBOs, and although they are apparently not technologically compatible with their predecessors, it would seem they stand a good chance of finding similar popularity with those who can appreciate the soul of a machine.

Though AIBO funerals are closed to the public, travelers in Japan can at other times visit Isumi’s historic Kōfuku-ji, one of several temples in the region featuring work by the master wood carver Ihachi.

To learn about other personal robots, such as Paro, a therapeutic seal-bot, visit the permanent exhibit “Create your future” at Miraikan, the National Museum of Emerging Science and Innovation in Tokyo.

https://www.nationalgeographic.com/travel/destinations/asia/japan/in-japan–a-buddhist-funeral-service-for-robot-dogs/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

One advantage humans have over robots is that we’re good at quickly passing on our knowledge to each other. A new system developed at MIT now allows anyone to coach robots through simple tasks and even lets them teach each other.

Typically, robots learn tasks through demonstrations by humans, or through hand-coded motion planning systems where a programmer specifies each of the required movements. But the former approach is not good at translating skills to new situations, and the latter is very time-consuming.

Humans, on the other hand, can typically demonstrate a simple task, like how to stack logs, to someone else just once before they pick it up, and that person can easily adapt that knowledge to new situations, say if they come across an odd-shaped log or the pile collapses.

In an attempt to mimic this kind of adaptable, one-shot learning, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) combined motion planning and learning through demonstration in an approach they’ve dubbed C-LEARN.

First, a human teaches the robot a series of basic motions using an interactive 3D model on a computer. Using the mouse to show it how to reach and grasp various objects in different positions helps the machine build up a library of possible actions.

The operator then shows the robot a single demonstration of a multistep task, and using its database of potential moves, it devises a motion plan to carry out the job at hand.
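
A heavily simplified sketch of that two-stage idea follows. The class names, the symbolic preconditions and effects, and the matching rule are all invented here for illustration; the actual C-LEARN system reasons over geometric keyframe constraints and a motion planner rather than the toy symbols below.

```python
# Schematic sketch of the two-stage flow described above: an action library
# built from interactive teaching, then a plan assembled from one demonstration.
# All names and data structures are invented; this is not the C-LEARN codebase.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Primitive:
    name: str            # e.g. "reach", "grasp", "place"
    preconditions: set   # symbolic conditions that must already hold
    effects: set         # symbolic conditions made true afterwards

class ActionLibrary:
    """Stage 1: filled in as an operator demonstrates basic motions
    on the interactive 3D model."""
    def __init__(self):
        self.primitives = []

    def teach(self, primitive: Primitive) -> None:
        self.primitives.append(primitive)

    def achieving(self, needed_effect: str) -> Optional[Primitive]:
        return next((p for p in self.primitives if needed_effect in p.effects), None)

def plan_from_demo(library: ActionLibrary, demo_keyframes: list) -> list:
    """Stage 2: map each keyframe of a single demonstration onto a taught
    primitive, producing a step-by-step plan a robot can execute."""
    plan = []
    for goal in demo_keyframes:
        prim = library.achieving(goal)
        if prim is None:
            raise ValueError(f"no taught primitive achieves '{goal}'")
        plan.append(prim.name)
    return plan

lib = ActionLibrary()
lib.teach(Primitive("reach", set(), {"gripper_at_object"}))
lib.teach(Primitive("grasp", {"gripper_at_object"}, {"object_held"}))
lib.teach(Primitive("place", {"object_held"}, {"object_in_bin"}))

print(plan_from_demo(lib, ["gripper_at_object", "object_held", "object_in_bin"]))
# -> ['reach', 'grasp', 'place']
```

Because the plan refers to the library rather than to raw joint motions, the same demonstration can in principle be replayed on a different robot that has its own version of each primitive, which is the transfer property the article goes on to describe.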

“This approach is actually very similar to how humans learn in terms of seeing how something’s done and connecting it to what we already know about the world,” says Claudia Pérez-D’Arpino, a PhD student who wrote a paper on C-LEARN with MIT Professor Julie Shah, in a press release.

“We can’t magically learn from a single demonstration, so we take new information and match it to previous knowledge about our environment.”

The robot successfully carried out tasks 87.5 percent of the time on its own, but when a human operator was allowed to correct minor errors in the interactive model before the robot carried out the task, the accuracy rose to 100 percent.

Most importantly, the robot could teach the skills it learned to another machine with a completely different configuration. The researchers tested C-LEARN on a new two-armed robot called Optimus that sits on a wheeled base and is designed for bomb disposal.

But in simulations, they were able to seamlessly transfer Optimus’ learned skills to CSAIL’s 6-foot-tall Atlas humanoid robot. They haven’t yet tested Atlas’ new skills in the real world, and they had to give Atlas some extra information on how to carry out tasks without falling over, but the demonstration shows that the approach can allow very different robots to learn from each other.

The research, which will be presented at the IEEE International Conference on Robotics and Automation in Singapore later this month, could have important implications for the large-scale roll-out of robot workers.

“Traditional programming of robots in real-world scenarios is difficult, tedious, and requires a lot of domain knowledge,” says Shah in the press release.

“It would be much more effective if we could train them more like how we train people: by giving them some basic knowledge and a single demonstration. This is an exciting step toward teaching robots to perform complex multi-arm and multi-step tasks necessary for assembly manufacturing and ship or aircraft maintenance.”

The MIT researchers aren’t the only people investigating the field of so-called transfer learning. The RoboEarth project and its spin-off RoboHow were both aimed at creating a shared language for robots and an online repository that would allow them to share their knowledge of how to carry out tasks over the web.

Google DeepMind has also been experimenting with ways to transfer knowledge from one machine to another, though in their case the aim is to help skills learned in simulations to be carried over into the real world.

A lot of their research involves deep reinforcement learning, in which robots learn how to carry out tasks in virtual environments through trial and error. But transferring this knowledge from highly-engineered simulations into the messy real world is not so simple.

So they have found a way for a model that has learned how to carry out a task in a simulation using deep reinforcement learning to transfer that knowledge to a so-called progressive neural network that controls a real-world robotic arm. This allows the system to take advantage of the accelerated learning possible in a simulation while still learning effectively in the real world.
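
DeepMind’s published progressive-network architecture freezes the columns trained earlier and feeds their hidden activations into a new column through lateral connections. The PyTorch sketch below shows only that wiring; the layer sizes, two-layer depth, and single lateral adapter are simplifications chosen for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of the progressive-network idea: a column trained in
# simulation is frozen, and a new column for the real robot receives
# lateral connections from the frozen column's hidden layer.
# Layer sizes and depth are arbitrary choices for illustration.

class Column(nn.Module):
    def __init__(self, obs_dim: int, hidden: int, act_dim: int):
        super().__init__()
        self.h = nn.Linear(obs_dim, hidden)
        self.out = nn.Linear(hidden, act_dim)

    def forward(self, x):
        h = torch.relu(self.h(x))
        return self.out(h), h            # return hidden features for lateral reuse

class ProgressiveColumn(nn.Module):
    def __init__(self, obs_dim: int, hidden: int, act_dim: int, frozen: Column):
        super().__init__()
        self.frozen = frozen
        for p in self.frozen.parameters():     # simulation knowledge stays fixed
            p.requires_grad = False
        self.h = nn.Linear(obs_dim, hidden)
        self.lateral = nn.Linear(hidden, hidden)   # adapter from frozen features
        self.out = nn.Linear(hidden, act_dim)

    def forward(self, x):
        with torch.no_grad():
            _, h_sim = self.frozen(x)
        h = torch.relu(self.h(x) + self.lateral(h_sim))
        return self.out(h)

sim_column = Column(obs_dim=8, hidden=32, act_dim=4)    # pretend: trained in simulation
real_column = ProgressiveColumn(8, 32, 4, frozen=sim_column)
print(real_column(torch.randn(1, 8)).shape)             # torch.Size([1, 4])
```

Training would then update only the new column and the lateral adapter on real-robot data, so whatever was learned in simulation is reused but never overwritten.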

These kinds of approaches make life easier for data scientists trying to build new models for AI and robots. As James Kobielus notes in InfoWorld, the approach “stands at the forefront of the data science community’s efforts to invent ‘master learning algorithms’ that automatically gain and apply fresh contextual knowledge through deep neural networks and other forms of AI.”

If you believe those who say we’re headed towards a technological singularity, you can bet transfer learning will be an important part of that process.

https://singularityhub.com/2017/05/26/these-robots-can-teach-other-robots-how-to-do-new-things/?utm_source=Singularity+Hub+Newsletter&utm_campaign=7c19f894b1-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-7c19f894b1-58158129

A viral video showing an army of little orange robots sorting out packages in a warehouse in eastern China is the latest example of how machines are increasingly taking over menial factory work on the mainland.

The behind-the-scenes footage of the self-charging robot army in a sorting centre of Chinese delivery powerhouse Shentong (STO) Express was shared on People’s Daily’s social media accounts on Sunday.

The video showed dozens of round orange Hikvision robots – each the size of a seat cushion – swivelling across the floor of the large warehouse in Hangzhou, Zhejiang province.

A worker was seen feeding each robot with a package before the machines carried the parcels away to different areas around the sorting centre, then flipping their lids to deposit them into chutes beneath the floor.

The robots identified the destination of each package by scanning a code on the parcel, thus minimising sorting mistakes, according to the video.

The machines can sort up to 200,000 packages a day and are self-charging, meaning they can operate around the clock.
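
Stripped of the hardware, the routing step the video shows amounts to a lookup from a scanned destination code to a chute. The codes, chute numbers, and fallback behaviour below are made up for illustration.

```python
# Toy sketch of the scan-and-route step shown in the video: the scanned
# destination code on a parcel maps to a chute in the warehouse floor.
# Codes, chute numbers, and the fallback chute are invented.

CHUTE_BY_DESTINATION = {
    "HGH": 12,   # Hangzhou
    "PEK": 3,    # Beijing
    "CAN": 27,   # Guangzhou
}
MANUAL_SORT_CHUTE = 0    # unreadable or unknown codes go to a human sorter

def route_parcel(scanned_code: str) -> int:
    """Return the chute a carrier robot should drop this parcel into."""
    return CHUTE_BY_DESTINATION.get(scanned_code.strip().upper(), MANUAL_SORT_CHUTE)

for code in ("hgh", "PEK", "XXX"):
    print(code, "-> chute", route_parcel(code))
```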

An STO Express spokesman told the South China Morning Post on Monday that the robots had helped the company cut roughly in half the costs it would otherwise incur using human workers.

They also improved efficiency by around 30 per cent and maximised sorting accuracy, he said.

“We use these robots in two of our centres in Hangzhou right now,” the spokesman said. “We want to start using these across the country, especially in our bigger centres.”

Although the machines could run around the clock, they were presently used only for about six or seven hours each time from 6pm, he said.

Manufacturers across China have been increasingly replacing human workers with machines.

The output of industrial robots in the country grew 30.4 per cent last year.

In the country’s latest five-year plan, the central government set a target aiming for annual production of these robots to reach 100,000 by 2020.

Apple’s supplier Foxconn last year replaced 60,000 factory workers with robots, according to a Chinese government official in Kunshan, eastern Jiangsu province.

The Taiwanese contract electronics manufacturer has several factories across China.

http://www.scmp.com/news/china/society/article/2086662/chinese-firm-cuts-costs-hiring-army-robots-sort-out-200000

Thanks to Kebmodee for bringing this to the It’s Interesting community.