Archive for the ‘Kebmodee’ Category

A viral video showing an army of little orange robots sorting out packages in a warehouse in eastern China is the latest example of how machines are increasingly taking over menial factory work on the mainland.

The behind-the-scenes footage of the self-charging robot army in a sorting centre of Chinese delivery powerhouse Shentong (STO) Express was shared on People’s Daily’s social media accounts on Sunday.

The video showed dozens of round orange Hikvision robots – each the size of a seat cushion – swivelling across the floor of the large warehouse in Hangzhou, Zhejiang province.

A worker was seen feeding each robot a package; the machines then carried the parcels to different areas of the sorting centre before flipping their lids to deposit them into chutes beneath the floor.

The robots identified the destination of each package by scanning a code on the parcel, thus minimising sorting mistakes, according to the video.
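
As a rough illustration of the routing step described above (scan a code, look up the destination, drive to the matching chute), here is a minimal Python sketch. The region codes, chute numbers, and helper function are hypothetical stand-ins, not STO's or Hikvision's actual software.

```python
# Hypothetical mapping from destination region code to chute ID.
CHUTE_BY_REGION = {
    "HZ": 12,   # Hangzhou
    "BJ": 7,    # Beijing
    "SH": 3,    # Shanghai
}

def route_parcel(scanned_code: str) -> int:
    """Return the chute a robot should drive to for this parcel."""
    region = scanned_code.split("-")[0]   # e.g. "SH-20170403-0042" -> "SH"
    try:
        return CHUTE_BY_REGION[region]
    except KeyError:
        return 0                          # chute 0 = manual exception handling

print(route_parcel("SH-20170403-0042"))   # -> 3
```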

The machines can sort up to 200,000 packages a day and are self-charging, meaning they can operate around the clock.

An STO Express spokesman told the South China Morning Post on Monday that the robots had helped the company save half the costs it typically required to use human workers.

They also improved efficiency by around 30 per cent and maximised sorting accuracy, he said.

“We use these robots in two of our centres in Hangzhou right now,” the spokesman said. “We want to start using these across the country, especially in our bigger centres.”

Although the machines could run around the clock, they were currently used for only about six or seven hours a day, starting from 6pm, he said.

Manufacturers across China have been increasingly replacing human workers with machines.

The output of industrial robots in the country grew 30.4 per cent last year.

In the country’s latest five-year plan, the central government set a target aiming for annual production of these robots to reach 100,000 by 2020.

Apple’s supplier Foxconn last year replaced 60,000 factory workers with robots, according to a Chinese government official in Kunshan, eastern Jiangsu province.

The Taiwanese electronics manufacturer has several factories across China.

http://www.scmp.com/news/china/society/article/2086662/chinese-firm-cuts-costs-hiring-army-robots-sort-out-200000

Thanks to Kebmodee for bringing this to the It’s Interesting community.

by Arjun Kharpal

Billionaire Elon Musk is known for his futuristic ideas and his latest suggestion might just save us from being irrelevant as artificial intelligence (AI) grows more prominent.

The Tesla and SpaceX CEO said on Monday that humans need to merge with machines to become a sort of cyborg.

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk told an audience at the World Government Summit in Dubai, where he also launched Tesla in the United Arab Emirates (UAE).

“It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.”

Musk explained what he meant by saying that computers can communicate at “a trillion bits per second”, while humans, whose main communication method is typing with their fingers via a mobile device, can do about 10 bits per second.
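
For a sense of where a figure like "about 10 bits per second" might come from, here is a back-of-envelope check in Python. The typing speed, characters per word, and per-character information content below are illustrative assumptions, not numbers from Musk's talk.

```python
# Rough estimate of human "output bandwidth" when typing on a phone.
words_per_minute = 40      # assumed mobile typing speed
chars_per_word = 5         # common average, including the space
bits_per_char = 2.0        # rough information content of English text
                           # (estimates of English entropy run ~1-2 bits/char)

chars_per_second = words_per_minute * chars_per_word / 60
bits_per_second = chars_per_second * bits_per_char

print(f"{bits_per_second:.1f} bits/s")   # ~6.7 bits/s, the same order as Musk's ~10
```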

As AI becomes more widespread, humans risk becoming useless, so there is a need to merge with machines, according to Musk.

“Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem,” Musk explained.

The technologist’s proposal would see a new layer of the brain able to access information quickly and tap into artificial intelligence. It’s not the first time Musk has spoken about the need for humans to evolve; it’s a constant theme of his talks on how society can deal with the disruptive threat of AI.

‘Very quick’ disruption

During his talk, Musk touched upon his fear of “deep AI” which goes beyond driverless cars to what he called “artificial general intelligence”. This he described as AI that is “smarter than the smartest human on earth” and called it a “dangerous situation”.

While this might be some way off, the Tesla boss said the more immediate threat is how AI, particularly autonomous cars, which his own firm is developing, will displace jobs. He said the disruption to people whose job it is to drive will take place over the next 20 years, after which 12 to 15 percent of the global workforce will be unemployed.

“The most near term impact from a technology standpoint is autonomous cars … That is going to happen much faster than people realize and it’s going to be a great convenience,” Musk said.

“But there are many people whose jobs are to drive. In fact I think it might be the single largest employer of people … Driving in various forms. So we need to figure out new roles for what do those people do, but it will be very disruptive and very quick.”

http://www.cnbc.com/2017/02/13/elon-musk-humans-merge-machines-cyborg-artificial-intelligence-robots.html

In the steppes of southwestern Russia, there lies the largest Buddhist city in all of Europe, a town called Elista. In addition to giant monasteries and Buddhist sculptures, Elista is also home to kings and queens—but not in the royal sense.

Lying on the east side of Elista is Chess City, a culturally and architecturally distinct enclave in which, as the New York Times put it, “chess is king and the people are pawns.”

Chess City was built in 1998 by chess fanatic Kirsan Ilyumzhinov, the megalomaniac leader of Russia’s Kalmykia province and president of the International Chess Federation, who claims to have been abducted by aliens with the wild, utopian mission of bringing chess to Elista.

Following the aliens’ suggestion, Ilyumzhinov built Chess City just in time to host the 33rd Chess Olympiad in grand fashion. Featuring a swimming pool, a chess museum, a large open-air chess board, and a museum of Buddhist art, Chess City hosted hundreds of elite grandmasters in 1998 and was home to several smaller chess championships in later years. Also found in Chess City is a statue of Ostap Bender, a fictional literary con man obsessed with chess.

But while Chess City brought temporary international attention to Elista, it was also highly controversial. In the impoverished steppes of Elista, cutting food subsidies to fund a giant, $50 million complex for the short-term use of foreigners wasn’t a popular idea with much of the region. Once the Chess Olympiad was over, Chess City became sparsely used and largely vacated, a symbol to the people of Elista of the local government’s misguided priorities.

http://www.slate.com/blogs/atlas_obscura/2017/01/30/the_alien_inspired_chess_city_in_europe_is_a_haven_for_chess_lovers.html

Thanks to Kebmodee for bringing this to the It’s Interesting community.

by Tom Simonite

Each of these trucks is the size of a small two-story house. None has a driver or anyone else on board.

Mining company Rio Tinto has 73 of these titans hauling iron ore 24 hours a day at four mines in Australia’s Mars-red northwest corner. At this one, known as West Angelas, the vehicles work alongside robotic rock drilling rigs. The company is also upgrading the locomotives that haul ore hundreds of miles to port—the upgrades will allow the trains to drive themselves, and be loaded and unloaded automatically.

Rio Tinto intends its automated operations in Australia to preview a more efficient future for all of its mines—one that will also reduce the need for human miners. The rising capabilities and falling costs of robotics technology are allowing mining and oil companies to reimagine the dirty, dangerous business of getting resources out of the ground.

BHP Billiton, the world’s largest mining company, is also deploying driverless trucks and drills on iron ore mines in Australia. Suncor, Canada’s largest oil company, has begun testing driverless trucks on oil sands fields in Alberta.

“In the last couple of years we can just do so much more in terms of the sophistication of automation,” says Herman Herman, director of the National Robotics Engineering Center at Carnegie Mellon University, in Pittsburgh. The center helped Caterpillar develop its autonomous haul truck. Mining company Fortescue Metals Group is putting them to work in its own iron ore mines. Herman says the technology can be deployed sooner for mining than other applications, such as transportation on public roads. “It’s easier to deploy because these environments are already highly regulated,” he says.

Rio Tinto uses driverless trucks provided by Japan’s Komatsu. They find their way around using precision GPS and look out for obstacles using radar and laser sensors.
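
To make that control loop concrete, here is a deliberately simplified Python sketch of a truck that follows GPS waypoints and stops whenever a forward radar or laser range reading drops below a safety threshold. The threshold, data structures, and commands are assumptions for illustration, not Komatsu's or Rio Tinto's actual software.

```python
from dataclasses import dataclass
from typing import List, Tuple

SAFE_DISTANCE_M = 20.0   # assumed stopping envelope in metres

@dataclass
class Truck:
    position: Tuple[float, float]            # GPS easting/northing in metres
    waypoints: List[Tuple[float, float]]     # remaining route

def obstacle_ahead(range_readings_m: List[float]) -> bool:
    """range_readings_m: forward-facing radar/lidar distances in metres."""
    return any(r < SAFE_DISTANCE_M for r in range_readings_m)

def next_command(truck: Truck, range_readings_m: List[float]) -> str:
    if obstacle_ahead(range_readings_m):
        return "STOP"
    if not truck.waypoints:
        return "HOLD"
    return f"DRIVE_TO {truck.waypoints[0]}"

truck = Truck(position=(0.0, 0.0), waypoints=[(120.0, 40.0)])
print(next_command(truck, [85.0, 60.0, 18.5]))   # one reading under 20 m -> STOP
```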

Rob Atkinson, who leads productivity efforts at Rio Tinto, says the fleet and other automation projects are already paying off. The company’s driverless trucks have proven to be roughly 15 percent cheaper to run than vehicles with humans behind the wheel, says Atkinson—a significant saving since haulage is by far a mine’s largest operational cost. “We’re going to continue as aggressively as possible down this path,” he says.

Trucks that drive themselves can spend more time working because software doesn’t need to stop for shift changes or bathroom breaks. They are also more predictable in how they do things like pull up for loading. “All those places where you could lose a few seconds or minutes by not being consistent add up,” says Atkinson. They also improve safety, he says.

The driverless locomotives, due to be tested extensively next year and fully deployed by 2018, are expected to bring similar benefits. Atkinson also anticipates savings on train maintenance, because software can be more predictable and gentle than any human in how it uses brakes and other controls. Diggers and bulldozers could be next to be automated.

Herman at CMU expects all large mining companies to widen their use of automation in the coming years as robotics continues to improve. The recent, sizeable investments by auto and tech companies in driverless cars will help accelerate improvements in the price and performance of the sensors, software, and other technologies needed.

Herman says many mining companies are well placed to expand automation rapidly, because they have already invested in centralized control systems that use software to coördinate and monitor their equipment. Rio Tinto, for example, gave the job of overseeing its autonomous trucks to staff at the company’s control center in Perth, 750 miles to the south. The center already plans train movements and in the future will shift from sending orders to people to directing driverless locomotives.

Atkinson of Rio Tinto acknowledges that just like earlier technologies that boosted efficiency, those changes will tend to reduce staffing levels, even if some new jobs are created servicing and managing autonomous machines. “It’s something that we’ve got to carefully manage, but it’s a reality of modern day life,” he says. “We will remain a very significant employer.”

https://www.technologyreview.com/s/603170/mining-24-hours-a-day-with-robots/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs. But AI systems that can handle knowledge-based, white-collar work are also becoming increasingly competent.

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting by January 2017.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring in injuries, patient medical histories, and procedures administered. Automation of these research and data-gathering tasks will help the remaining human workers process the final payout faster, the release says.

Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years.
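
The payback arithmetic implied by those figures works out as follows; this is just a consistency check using the reported (approximate) dollar conversions.

```python
install_cost = 1_700_000          # one-off cost of the Watson-based system (USD)
maintenance_per_year = 128_000    # annual maintenance (USD)
salary_savings_per_year = 1_100_000

net_savings_per_year = salary_savings_per_year - maintenance_per_year
payback_years = install_cost / net_savings_per_year
print(f"{payback_years:.1f} years")   # ~1.7 years, consistent with "less than two years"
```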

Watson AI is expected to improve productivity by 30%, Fukoku Mutual says. The company was encouraged by its use of similar IBM technology to analyze customers’ voices during complaints. The software typically takes the customer’s words, converts them to text, and analyzes whether those words are positive or negative. Similar sentiment analysis software is also being used by a range of US companies for customer service; incidentally, a large benefit of the software is understanding when customers get frustrated with automated systems.
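
At its crudest, the pipeline described here (speech already converted to text, then scored as positive or negative) can be sketched as a lexicon lookup. The word lists below are invented for illustration; IBM's software is of course far more sophisticated than this.

```python
POSITIVE = {"thanks", "great", "helpful", "resolved"}
NEGATIVE = {"angry", "frustrated", "useless", "waiting", "cancel"}

def sentiment_score(transcript: str) -> int:
    """Positive score > 0, negative < 0, based on simple word counts."""
    words = transcript.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I have been waiting an hour and I am frustrated"))  # -> -2
```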

The Mainichi reports that three other Japanese insurance companies are testing or implementing AI systems to automate work such as finding ideal plans for customers. An Israeli insurance startup, Lemonade, has raised $60 million on the idea of “replacing brokers and paperwork with bots and machine learning,” says CEO Daniel Schreiber.

Artificial intelligence systems like IBM’s are poised to upend knowledge-based professions such as insurance and financial services, according to the Harvard Business Review, because many jobs can be “composed of work that can be codified into standard steps and of decisions based on cleanly formatted data.” But whether that means augmenting workers’ ability to be productive or replacing them entirely remains to be seen.

“Almost all jobs have major elements that—for the foreseeable future—won’t be possible for computers to handle,” HBR writes. “And yet, we have to admit that there are some knowledge-work jobs that will simply succumb to the rise of the robots.”

Japanese white-collar workers are already being replaced by artificial intelligence

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Uber’s self-driving cars are making the move to San Francisco, in a new expansion of its pilot project with autonomous vehicles that will see Volvo SUVs outfitted with sensors and supercomputers begin picking up passengers in the city.

The autonomous cars won’t operate completely driverless, for the time being – as in Pittsburgh, where Uber launched self-driving Ford Focus vehicles this fall, each SUV will have a safety driver and Uber test engineer onboard to handle manual driving when needed and monitor progress with the tests. But the cars will still be picking up ordinary passengers – any customers who request uberX using the standard consumer-facing mobile app are eligible for a ride in one of the new XC90s operated by Uber’s Advanced Technologies Group (ATG).

There’s a difference here beyond the geography; this is the third generation of Uber’s autonomous vehicle, which is distinct from the second-generation Fords that were used in the Pittsburgh pilot. Uber has a more direct relationship with Volvo in turning its new XC90s into cars with autonomous capabilities; the Fords were essentially purchased stock off the line, while Uber’s partnership with Volvo means it can do more in terms of integrating its own sensor array into the ones available on board the vehicle already.

Uber ATG Head of Product Matt Sweeney told me in an interview that this third-generation vehicle actually uses fewer sensors than the Fords that are on the roads in Pittsburgh, though the loadout still includes a full complement of traditional optical cameras, radar, LiDAR and ultrasonic detectors. He said that fewer sensors are required in part because of the lessons learned from the Pittsburgh rollout, and from their work studying previous generation vehicles; with autonomy, you typically start by throwing everything you can think of at the problem, and then you narrow based on what’s specifically useful, and what turns out not to be so necessary. Still, the fused image of the world that results from data gathered from the Volvo’s sensor suite does not lack for detail.

“You combine [images and LiDAR] together you end up with an image which you know very explicitly distance information about, so it’s like this beautiful object that you can detect as you’re moving through,” Sweeney explained to me. “And with some of the better engineered integration here, we have some radars in the front and rear bumpers behind the facades.”
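
The fusion Sweeney describes can be sketched as projecting 3-D LiDAR points into the camera image so that detected pixels also carry explicit distances. The camera intrinsics and sample points below are stand-in values for illustration, not Uber's calibration or data.

```python
import numpy as np

K = np.array([[1000.0,    0.0, 640.0],   # assumed camera intrinsic matrix
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def project_lidar_to_image(points_xyz: np.ndarray) -> np.ndarray:
    """points_xyz: (N, 3) LiDAR points already expressed in the camera frame.
    Returns (N, 3): pixel u, pixel v, and range in metres."""
    ranges = np.linalg.norm(points_xyz, axis=1)
    uvw = (K @ points_xyz.T).T            # homogeneous image coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide
    return np.hstack([uv, ranges[:, None]])

points = np.array([[ 2.0, 0.5, 10.0],     # a point about 10 m ahead of the camera
                   [-1.0, 0.0, 25.0]])
print(project_lidar_to_image(points))     # each row: pixel position + distance
```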

Those radar arrays provide more than just the ability to see in conditions where it might be difficult to do so optically, such as poor weather; Sweeney notes that the radar units they’re using can actually bounce signal off the surface of the road, underneath or around vehicles in front, in order to look for and report back information on potential accidents or hazards not immediately in front of the autonomous Uber itself.

“The car is one of the reasons we’re really excited about this partnership, it’s a really tremendous vehicle,” Sweeney said. “It’s Volvo’s new SPA, the scalable platform architecture – the first car on their brand new, built from the ground up vehicle architecture, so you get all new mechanical, all new electrical, all new compute.”

Uber didn’t pick a partner blindly – Sweeney says they found a company with a reputation built on nearly a hundred years of solid engineering and manufacturing, and a commitment to iterative improvement in those areas.

“The vehicle that we’re building on top of, we’re very intentional about it,” Sweeney said, noting that cars like this one are engineered specifically for safety, which is not the main failure point when it comes to most automobile accidents today – that role is reserved for the human drivers behind the wheel.

Uber’s contributions are mainly in the sensor pod, and in the compute stack in the trunk, which takes up about half the surface area of the storage space and which Sweeney said is “a blade architecture, a whole bunch of CPUs and GPUs that we can swap out under there,” though he wouldn’t speak to who’s supplying those components specifically. Taken together, that tremendous computing power is the key to identifying objects, doing so in higher volume, and doing better pathfinding in complex city street environments.

For the actual rider, there’s an iPad-based interactive display in the rear of the vehicle, which takes over for the mobile app once you’ve actually entered the vehicle and are ready to start your ride. The display guides you through the steps of starting your trip, including ensuring your seat belt is fastened, checking your destination and then setting off on the ride itself.

During our demo, the act of actually leaving the curb and merging into traffic was handled by the safety driver on board, but in eventual full deployment of these cars the vehicles will handle even that tricky task. The iPad shows you when you’re in active self-driving mode, and also when it’s been disengaged and steering is being handled by the actual person behind the wheel instead. The screen also shows a simplified version of what the autonomous car itself “sees,” displaying, on a white background, rudimentary color-coded point- and line-based versions of the objects and the world surrounding the vehicle. Objects in motion display trails as they move through this real-time virtual world.

The iPad-based display also lets you take a selfie and share the image from your ride, which definitely helps Uber promote its efforts, while also helping with the other key goal that the iPad itself seeks to achieve – making riders feel like this tech is both knowable and normal. Public perception remains one of autonomous driving’s highest bars to overcome, along with the tech problem and regulation, and selfies are one seemingly shallow way to legitimately address that.

So how did I feel during my ride? About as excited as I typically feel during any Uber ride, after the initial thrill wore off – which is to say mostly bored. The vehicle I was in had to negotiate some heavy traffic, a lot of construction and very unpredictable south-of-Market San Francisco drivers, and as such did disengage with fair frequency. But it also handled long open stretches of road at speed with aplomb, and kept its distance well in denser, stop-and-go traffic. It felt overall like a system that is making good progress in terms of learning – but one that also still has a long way to go before it can do without its human minders up front.

My companion for the ride in the backseat was Uber Chief of Watch Rachel Maran, who has been a driver in Uber’s self-driving pilot in Pittsburgh previously. She explained that the unpredictability and variety in any new driving environment is going to be one of the biggest challenges Uber’s autonomous driving systems have to overcome.

Uber’s pilot in San Francisco will be limited to the downtown area and will involve “a handful” of vehicles to start, with the intent of ramping up from there, according to the company. The autonomous vehicles in Pittsburgh will also continue to run concurrently with the San Francisco deployment. Where Pittsburgh offers a range of weather conditions and other environmental variables for testing, San Francisco will provide new challenges for Uber’s self-driving tech, including denser, often more chaotic traffic, plus narrower lanes and roads.

The company doesn’t require a permit from the California DMV to operate in the state, it says, because the cars don’t qualify as fully autonomous as defined by state law because of the always present onboard safety operator. Legally, it’s more akin to a Tesla with Autopilot than to a self-driving Waymo car, under current regulatory rules.

Ultimately, the goal for Uber in autonomy is to create safer roads, according to Sweeney, while at the same time improving urban planning and space problems stemming from a vehicle ownership model that sees most cars sitting idle and unused somewhere near 95 percent of the time. I asked Sweeney about concerns from drivers and members of the public who can be very vocal about autonomous tech’s safety on real roads.

“This car has got centimeter-level distance measurements 360-degrees around the vehicle constantly, 20 meters front and 20 meters back constantly,” Sweeney said, noting that even though the autonomous decision-making remains “a really big challenge,” the advances achieved by the sensors themselves and “their continuous attention and superhuman perception […] sets us up for the first really marked decrease in automotive fatalities since the airbag.”

“I think this is where we really push it down to zero,” Sweeney added. “People treat it as though it’s a fact of life; it’s only because we’re used to it. We can do way better than this.”

Uber’s self-driving cars start picking up passengers in San Francisco

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Google’s AI computers appear to have created their own secret language, a fascinating and existentially challenging development.

In September, Google announced that its Neural Machine Translation system had gone live. It uses deep learning to produce better, more natural translations between languages.

Following this success, GNMT’s creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa… could it translate Korean to Japanese, without resorting to English as a bridge between them?

This is called zero-shot translation.

Indeed, Google’s AI evolved to produce reasonable translations between two languages that it had not explicitly linked in any way.
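
The setup behind this result, as described in Google's multilingual NMT work, can be sketched by how the training data is built: a token naming the target language is prepended to each source sentence, and one model is trained on all the language pairs at once. The sentences and token format below are illustrative only, and the model itself is omitted; this is a data-construction sketch, not Google's code.

```python
def make_training_example(src_sentence: str, tgt_lang: str, tgt_sentence: str):
    """Prepend a target-language token; note no Korean->Japanese pair is ever built."""
    return (f"<2{tgt_lang}> {src_sentence}", tgt_sentence)

# Training pairs cover English<->Korean and English<->Japanese only.
training_data = [
    make_training_example("How are you?", "ko", "잘 지내세요?"),    # en -> ko
    make_training_example("잘 지내세요?", "en", "How are you?"),    # ko -> en
    make_training_example("How are you?", "ja", "お元気ですか。"),   # en -> ja
    make_training_example("お元気ですか。", "en", "How are you?"),   # ja -> en
]
# training_data would be fed to a single sequence-to-sequence model.

# Zero-shot request at inference time: Korean source with a Japanese target token,
# even though no Korean->Japanese example appeared in training.
zero_shot_input = "<2ja> 잘 지내세요?"
print(zero_shot_input)
```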

But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked… does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another?

In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google’s language and AI boffins think that it has.

This “interlingua” seems to exist as a deeper level of representation that sees similarities between a sentence or word in all three languages. Beyond that, it’s hard to say, since the inner processes of complex neural networks are infamously difficult to describe.

It could be something sophisticated, or it could be something simple. But the fact that it exists at all — an original creation of the system’s own to aid in its understanding of concepts it has not been trained to understand — is, philosophically speaking, pretty powerful stuff.

Google’s AI translation tool seems to have invented its own secret internal language

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.