Self-cloning crayfish may be unstoppable

by NOEL KIRKPATRICK

Invasions are usually difficult to miss, whether it’s a military invasion conducted by countries or political factions, or the fictional invasion of alien lifeforms and their very big ships.

However, one invasion began so quietly that we’re not even sure where, or how, it started. All we do know for sure is that the invaders are all over Europe and Madagascar, and that they have toeholds in other continents, including North America. Or maybe “clawholds” is a better phrase since the invaders are mutant crayfish that can clone themselves.

Yes, that’s right. Self-cloning crayfish called marbled crayfish (Procambarus virginalis) have invaded the planet, and it may not be possible to stop them.

Marbled crayfish didn’t even exist until around 1995. The story goes that scientists only became aware of them because of a German aquarium owner who had gotten a bag of “Texan crayfish” from an American pet trader. Not long after the crayfish reached adulthood, the owner suddenly had a tank full of the creatures. Indeed, a single marbled crayfish can produce hundreds of eggs at a time, all without needing to mate.

Scientists officially described the crayfish in 2003, confirming the reports of a crayfish capable of unisexual reproduction (all marbled crayfish are female), or parthenogenesis. These researchers did try to warn us about the havoc the crayfish could cause, writing that the species poses a “potential ecological threat” that could “outcompete native forms should even a single specimen be released into European lakes and rivers.”

Now, thanks to unwitting pet owners who dumped them into nearby lakes, feral populations of the marbled crayfish have been found in a number of countries, including Croatia, the Czech Republic, Hungary, Japan, Sweden and Ukraine. In Madagascar, the marbled crayfish is threatening the existence of seven other crayfish species because its population grows so quickly and it will eat just about anything. In the European Union, the species, which is also called marmorkrebs, is banned; it’s illegal to own, distribute, sell or release the marbled crayfish into the wild.

A team of researchers decided to get to the bottom of the marbled crayfish’s origins and began work on sequencing its genome in 2013. This was no easy task since no one had sequenced the genome of a crayfish before, or even a relative of the crayfish. Once they sequenced it, however, they sequenced another 15 specimens’ genomes to suss out how this invasive clone army got started.

The study of the marbled crayfish’s genome was published in Nature Ecology and Evolution.

Marbled crayfish likely got their start when two slough crayfish, a species found in Florida, mated. One of those slough crayfish had a mutation in a sex cell — researchers couldn’t determine if it was an egg or sperm cell — that carried two sets of chromosomes instead of just one. Despite this mutation, the sex cells fused and the result was a female crayfish with three sets of chromosomes instead of the usual two. Also unexpectedly, the female offspring didn’t have any deformities as a result of those extra chromosomes.

That female was able to induce her own eggs to develop and essentially clone herself, creating hundreds of offspring. The genetic similarities were constant across specimens, regardless of where they were collected: only a few letters in the crayfish’s DNA sequence differed.

As to how the crayfish is able to survive in such different waters, its extra set of chromosomes may provide enough genetic material for it to adapt. And it may need that extra set for other aspects of survival, too. Sexual reproduction creates different combinations of genes, which in turn can increase the odds of developing defenses against pathogens. Should a pathogen evolve a way to kill a single clone, the crayfish’s lack of genetic diversity could be the species’ downfall.

In the meantime, scientists are intrigued to observe how well the crayfish can thrive, and for how long.

“Maybe they just survive for 100,000 years,” Frank Lyko, lead author of the genome study, suggested to The New York Times. “That would be a long time for me personally, but in evolution it would just be a blip on the radar.”

https://www.mnn.com/earth-matters/animals/stories/marbled-crayfish-self-cloning-invasion

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Laser Scans Reveal Vast Interconnected Maya “Megalopolis” Below Guatemalan Jungle That Was Home to Millions of People

Laser technology known as LiDAR digitally removes the forest canopy to reveal ancient ruins below, showing that Maya cities such as Tikal were much larger than ground-based research had suggested.

By Tom Clynes

In what’s being hailed as a “major breakthrough” in Maya archaeology, researchers have identified the ruins of more than 60,000 houses, palaces, elevated highways, and other human-made features that have been hidden for centuries under the jungles of northern Guatemala.


Laser scans revealed more than 60,000 previously unknown Maya structures that were part of a vast network of cities, fortifications, farms, and highways.

Using a revolutionary technology known as LiDAR (short for “Light Detection And Ranging”), scholars digitally removed the tree canopy from aerial images of the now-unpopulated landscape, revealing the ruins of a sprawling pre-Columbian civilization that was far more complex and interconnected than most Maya specialists had supposed.
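The core of “digitally removing the tree canopy” is filtering a LiDAR point cloud down to its lowest returns, which approximate the bare earth beneath the vegetation. Here is a deliberately minimal sketch of that idea, assuming a simple grid-based minimum-elevation filter (real surveys use far more sophisticated ground-classification algorithms):

```python
# Minimal sketch of canopy removal: keep only the lowest LiDAR return
# in each grid cell, approximating the bare-earth surface underneath.
def bare_earth(points, cell=1.0):
    """points: iterable of (x, y, z) returns in metres.
    Returns {(col, row): lowest z} per (cell x cell) tile."""
    ground = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in ground or z < ground[key]:
            ground[key] = z
    return ground

# High-z canopy returns are discarded; terrain and ruins survive.
pts = [(0.2, 0.3, 25.0),   # treetop
       (0.4, 0.6, 3.1),    # low structure beneath the canopy
       (1.5, 0.2, 24.0),   # another treetop
       (1.7, 0.8, 0.4)]    # ground
surface = bare_earth(pts)
```

Rendering the resulting surface as a hillshaded elevation model is what makes buried walls, terraces, and causeways suddenly visible.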

“The LiDAR images make it clear that this entire region was a settlement system whose scale and population density had been grossly underestimated,” said Thomas Garrison, an Ithaca College archaeologist and National Geographic Explorer who specializes in using digital technology for archaeological research.

Garrison is part of a consortium of researchers who are participating in the project, which was spearheaded by the PACUNAM Foundation, a Guatemalan nonprofit that fosters scientific research, sustainable development, and cultural heritage preservation.

The project mapped more than 800 square miles (2,100 square kilometers) of the Maya Biosphere Reserve in the Petén region of Guatemala, producing the largest LiDAR data set ever obtained for archaeological research.

The results suggest that Central America supported an advanced civilization that was, at its peak some 1,200 years ago, more comparable to sophisticated cultures such as ancient Greece or China than to the scattered and sparsely populated city states that ground-based research had long suggested.

In addition to hundreds of previously unknown structures, the LiDAR images show raised highways connecting urban centers and quarries. Complex irrigation and terracing systems supported intensive agriculture capable of feeding masses of workers who dramatically reshaped the landscape.

The ancient Maya never used the wheel or beasts of burden, yet “this was a civilization that was literally moving mountains,” said Marcello Canuto, a Tulane University archaeologist and National Geographic Explorer who participated in the project.

“We’ve had this western conceit that complex civilizations can’t flourish in the tropics, that the tropics are where civilizations go to die,” said Canuto, who conducts archaeological research at a Guatemalan site known as La Corona. “But with the new LiDAR-based evidence from Central America and [Cambodia’s] Angkor Wat, we now have to consider that complex societies may have formed in the tropics and made their way outward from there.”

“LiDAR is revolutionizing archaeology the way the Hubble Space Telescope revolutionized astronomy,” said Francisco Estrada-Belli, a Tulane University archaeologist and National Geographic Explorer. “We’ll need 100 years to go through all [the data] and really understand what we’re seeing.”

The unaided eye sees only jungle and an overgrown mound, but LiDAR and augmented reality software reveal an ancient Maya pyramid.

Already, though, the survey has yielded surprising insights into settlement patterns, inter-urban connectivity, and militarization in the Maya Lowlands. At its peak in the Maya classic period (approximately A.D. 250–900), the civilization covered an area about twice the size of medieval England, but it was far more densely populated.

“Most people had been comfortable with population estimates of around 5 million,” said Estrada-Belli, who directs a multi-disciplinary archaeological project at Holmul, Guatemala. “With this new data it’s no longer unreasonable to think that there were 10 to 15 million people there—including many living in low-lying, swampy areas that many of us had thought uninhabitable.”


Hidden deep in the jungle, the newly-discovered pyramid rises some seven stories high but is nearly invisible to the naked eye.

Virtually all the Maya cities were connected by causeways wide enough to suggest that they were heavily trafficked and used for trade and other forms of regional interaction. These highways were elevated to allow easy passage even during rainy seasons. In a part of the world where there is usually too much or too little precipitation, the flow of water was meticulously planned and controlled via canals, dikes, and reservoirs.

Among the most surprising findings was the ubiquity of defensive walls, ramparts, terraces, and fortresses. “Warfare wasn’t only happening toward the end of the civilization,” said Garrison. “It was large-scale and systematic, and it endured over many years.”

The survey also revealed thousands of pits dug by modern-day looters. “Many of these new sites are only new to us; they are not new to looters,” said Marianne Hernandez, president of the PACUNAM Foundation. (Read “Losing Maya Heritage to Looters.”)

Environmental degradation is another concern. Guatemala is losing more than 10 percent of its forests annually, and habitat loss has accelerated along its border with Mexico as trespassers burn and clear land for agriculture and human settlement.

“By identifying these sites and helping to understand who these ancient people were, we hope to raise awareness of the value of protecting these places,” Hernandez said.

The survey is the first phase of the PACUNAM LiDAR Initiative, a three-year project that will eventually map more than 5,000 square miles (14,000 square kilometers) of Guatemala’s lowlands, part of a pre-Columbian settlement system that extended north to the Gulf of Mexico.

“The ambition and the impact of this project is just incredible,” said Kathryn Reese-Taylor, a University of Calgary archaeologist and Maya specialist who was not associated with the PACUNAM survey. “After decades of combing through the forests, no archaeologists had stumbled across these sites. More importantly, we never had the big picture that this data set gives us. It really pulls back the veil and helps us see the civilization as the ancient Maya saw it.”

https://news.nationalgeographic.com/2018/02/maya-laser-lidar-guatemala-pacunam/

Nasa partner Robert Bigelow says he is ‘absolutely convinced’ aliens are currently living on Earth

One of Nasa’s partners has said that he is “absolutely convinced” aliens exist – and that they are living on Earth right now.

Robert Bigelow, an entrepreneur who is working closely with Nasa on future space missions, has suggested that he knows that our planet has an alien presence that is “right under our noses”.

Mr Bigelow made the announcement during an episode of the show 60 Minutes that focused on his work with the space agency.

His company, Bigelow Aerospace, is developing an expandable craft for humans that can inflate and might provide the space habitats of the future.

They have already been tested out in journeys to the International Space Station. And the two organisations are working on further co-operation.

But during that episode Mr Bigelow began to talk about his belief in aliens – and his claim that UFOs have come to Earth and extraterrestrials have an “existing presence” here.

“I’m absolutely convinced [that aliens exist],” he told reporter Lara Logan. “That’s all there is to it.”

Asked by Ms Logan whether he also thought that UFOs had come to Earth, he said he did.

“There has been and is an existing presence, an ET presence,” Mr Bigelow said. “And I spent millions and millions and millions – I probably spent more as an individual than anybody else in the United States has ever spent on this subject.”

Ms Logan then asked if Mr Bigelow thought it was “risky” to say that he believes such things. He said that he doesn’t care what people think because it wouldn’t “change the reality of what I know”.

Mr Bigelow didn’t give any details about whether the research and private space travel that he is funding had revealed anything about aliens to him.

But he said that the hugely expensive work his company and Nasa are doing won’t be required to meet them – he said that people “don’t have to go anywhere”, because the aliens are “right under people’s noses”.

https://www.independent.co.uk/news/science/nasa-robert-bigelow-aliens-extraterrestrials-earth-aerospace-space-international-station-a7763441.html

Acceleration of drug discovery by A.I.

To create a new drug, researchers have to test tens of thousands of compounds to determine how they interact. And that’s the easy part; after a substance is found to be effective against a disease, it has to perform well in three different phases of clinical trials and be approved by regulatory bodies.

It’s estimated that, on average, bringing one new drug to market can take 1,000 people, 12 to 15 years, and up to $1.6 billion.

Last week, researchers published a paper detailing an artificial intelligence system made to help discover new drugs, and significantly shorten the amount of time and money it takes to do so.

The system is called AtomNet, and it comes from San Francisco-based startup AtomWise. The technology aims to streamline the initial phase of drug discovery, which involves analyzing how different molecules interact with one another—specifically, scientists need to determine which molecules will bind together and how strongly. They use trial and error and process of elimination to analyze tens of thousands of compounds, both natural and synthetic.

AtomNet takes the legwork out of this process, using deep learning to predict how molecules will behave and how likely they are to bind together. The software teaches itself about molecular interaction by identifying patterns, similar to how AI learns to recognize images.

Remember the 3D models of molecules you made in high school, where you used pipe cleaners and foam balls to represent atoms and the bonds between them? AtomNet uses similar digital 3D models of molecules, incorporating data about their structure to predict their bioactivity.
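Before a convolutional network can “look at” a molecule, the 3D structure has to be rasterized into a grid, much like pixels in an image. The following is a hypothetical, heavily simplified sketch of that voxelization step (the grid size, cell size, and example coordinates are invented; systems like AtomNet encode far richer per-atom features):

```python
# Hypothetical sketch of voxelizing a molecule so a 3D convolutional
# network can consume its structure: bin atoms into a coarse 3D grid.
def voxelize(atoms, grid=8, cell=2.0):
    """atoms: list of (x, y, z, element) tuples in angstroms.
    Returns a grid x grid x grid nested list of atom counts."""
    g = [[[0] * grid for _ in range(grid)] for _ in range(grid)]
    for x, y, z, _elem in atoms:
        i, j, k = int(x // cell), int(y // cell), int(z // cell)
        if 0 <= i < grid and 0 <= j < grid and 0 <= k < grid:
            g[i][j][k] += 1
    return g

# Invented toy fragment: two carbons and a hydrogen.
fragment = [(1.0, 1.0, 1.0, "C"), (3.0, 1.0, 1.0, "C"), (1.2, 1.1, 0.9, "H")]
vox = voxelize(fragment)
```

A deep network trained on many such grids, labeled with measured binding affinities, can then learn which local 3D patterns of atoms tend to bind a target.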

As AtomWise COO Alexander Levy put it, “You can take an interaction between a drug and huge biological system and you can decompose that to smaller and smaller interactive groups. If you study enough historical examples of molecules…you can then make predictions that are extremely accurate yet also extremely fast.”

“Fast” may even be an understatement; AtomNet can reportedly screen one million compounds in a day, a volume that would take months via traditional methods.

AtomNet can’t actually invent a new drug, or even say for sure whether a combination of two molecules will yield an effective drug. What it can do is predict how likely a compound is to work against a certain illness. Researchers then use those predictions to narrow thousands of options down to dozens (or fewer), focusing their testing where positive results are more likely.

The software has already proven itself by helping create new drugs for two diseases, Ebola and multiple sclerosis. The MS drug has been licensed to a British pharmaceutical company, and the Ebola drug is being submitted to a peer-reviewed journal for additional analysis.

Drug Discovery AI Can Do in a Day What Currently Takes Months

Chinese firm halves worker costs by hiring army of robots to sort out 200,000 packages a day

A viral video showing an army of little orange robots sorting out packages in a warehouse in eastern China is the latest example of how machines are increasingly taking over menial factory work on the mainland.

The behind-the-scenes footage of the self-charging robot army in a sorting centre of Chinese delivery powerhouse Shentong (STO) Express was shared on People’s Daily’s social media accounts on Sunday.

The video showed dozens of round orange Hikvision robots – each the size of a seat cushion – swivelling across the floor of the large warehouse in Hangzhou, Zhejiang province.

A worker was seen feeding each robot a package before the machines carried the parcels away to different areas around the sorting centre, then flipped their lids to deposit them into chutes beneath the floor.

The robots identified the destination of each package by scanning a code on the parcel, thus minimising sorting mistakes, according to the video.

The machines can sort up to 200,000 packages a day and are self-charging, meaning they can operate around the clock.

An STO Express spokesman told the South China Morning Post on Monday that the robots had helped the company save half the costs it typically required to use human workers.

They also improved efficiency by around 30 per cent and maximised sorting accuracy, he said.

“We use these robots in two of our centres in Hangzhou right now,” the spokesman said. “We want to start using these across the country, especially in our bigger centres.”

Although the machines could run around the clock, they were presently used only for about six or seven hours each time from 6pm, he said.

Manufacturers across China have been increasingly replacing human workers with machines.

The output of industrial robots in the country grew 30.4 per cent last year.

In the country’s latest five-year plan, the central government set a target aiming for annual production of these robots to reach 100,000 by 2020.

Apple’s supplier Foxconn last year replaced 60,000 factory workers with robots, according to a Chinese government official in Kunshan, eastern Jiangsu province.

The Taiwanese smartphone maker has several factories across China.

http://www.scmp.com/news/china/society/article/2086662/chinese-firm-cuts-costs-hiring-army-robots-sort-out-200000

Elon Musk: Humans must merge with machines or become irrelevant in AI age

by Arjun Kharpal

Billionaire Elon Musk is known for his futuristic ideas and his latest suggestion might just save us from being irrelevant as artificial intelligence (AI) grows more prominent.

The Tesla and SpaceX CEO said on Monday that humans need to merge with machines to become a sort of cyborg.

“Over time I think we will probably see a closer merger of biological intelligence and digital intelligence,” Musk told an audience at the World Government Summit in Dubai, where he also launched Tesla in the United Arab Emirates (UAE).

“It’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.”

Musk explained what he meant by saying that computers can communicate at “a trillion bits per second”, while humans, whose main communication method is typing with their fingers via a mobile device, can do about 10 bits per second.
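Taking Musk’s rough figures at face value, the gap he is describing is easy to make concrete (these are his back-of-the-envelope estimates, not measured values):

```python
# Musk's rough comparison as simple arithmetic (his estimates, not data):
machine_bps = 1e12   # "a trillion bits per second" between computers
human_bps = 10       # roughly what thumb-typing on a phone achieves
ratio = machine_bps / human_bps  # machines ~100 billion times faster
```

On these numbers, the output bottleneck he wants a brain interface to remove is about eleven orders of magnitude wide.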

In an age when AI threatens to become widespread, humans would be useless, so there’s a need to merge with machines, according to Musk.

“Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem,” Musk explained.

The technologist’s proposal envisions a new layer of the brain able to access information quickly and tap into artificial intelligence. It’s not the first time Musk has spoken about the need for humans to evolve; it has become a constant theme of his talks on how society can deal with the disruptive threat of AI.

‘Very quick’ disruption

During his talk, Musk touched upon his fear of “deep AI” which goes beyond driverless cars to what he called “artificial general intelligence”. This he described as AI that is “smarter than the smartest human on earth” and called it a “dangerous situation”.

While this might be some way off, the Tesla boss said the more immediate threat is how AI, particularly autonomous cars, which his own firm is developing, will displace jobs. He said the disruption to people whose job it is to drive will take place over the next 20 years, after which 12 to 15 percent of the global workforce will be unemployed.

“The most near term impact from a technology standpoint is autonomous cars … That is going to happen much faster than people realize and it’s going to be a great convenience,” Musk said.

“But there are many people whose jobs are to drive. In fact I think it might be the single largest employer of people … Driving in various forms. So we need to figure out new roles for what do those people do, but it will be very disruptive and very quick.”

http://www.cnbc.com/2017/02/13/elon-musk-humans-merge-machines-cyborg-artificial-intelligence-robots.html

This Russian City Was Built for Chess Fanatics According to Alien Specifications

In the steppes of southwestern Russia, there lies the largest Buddhist city in all of Europe, a town called Elista. In addition to giant monasteries and Buddhist sculptures, Elista is also home to kings and queens—but not in the royal sense.

Lying on the east side of Elista is Chess City, a culturally and architecturally distinct enclave in which, as the New York Times put it, “chess is king and the people are pawns.”

Chess City was built in 1998 by chess fanatic Kirsan Ilyumzhinov, the megalomaniac leader of Russia’s Kalmykia province and president of the International Chess Federation, who claims aliens abducted him and gave him the wild, utopian mission of bringing chess to Elista.

Following the aliens’ suggestion, Ilyumzhinov built Chess City just in time to host the 33rd Chess Olympiad in grand fashion. Featuring a swimming pool, a chess museum, a large open-air chess board, and a museum of Buddhist art, Chess City hosted hundreds of elite grandmasters in 1998 and was home to several smaller chess championships in later years. Also found in Chess City is a statue of Ostap Bender, a fictional literary con man obsessed with chess.

But while Chess City brought temporary international attention to Elista, it was also highly controversial. In the impoverished steppes of Elista, cutting food subsidies to fund a giant, $50 million complex for the short-term use of foreigners wasn’t a popular idea with much of the region. Once the Chess Olympiad was over, Chess City became sparsely used and largely vacated, a symbol to the people of Elista of the local government’s misguided priorities.

http://www.slate.com/blogs/atlas_obscura/2017/01/30/the_alien_inspired_chess_city_in_europe_is_a_haven_for_chess_lovers.html

24/7 Robot Miners Working in Australia

by Tom Simonite

Each of these trucks is the size of a small two-story house. None has a driver or anyone else on board.

Mining company Rio Tinto has 73 of these titans hauling iron ore 24 hours a day at four mines in Australia’s Mars-red northwest corner. At this one, known as West Angelas, the vehicles work alongside robotic rock drilling rigs. The company is also upgrading the locomotives that haul ore hundreds of miles to port—the upgrades will allow the trains to drive themselves, and be loaded and unloaded automatically.

Rio Tinto intends its automated operations in Australia to preview a more efficient future for all of its mines—one that will also reduce the need for human miners. The rising capabilities and falling costs of robotics technology are allowing mining and oil companies to reimagine the dirty, dangerous business of getting resources out of the ground.

BHP Billiton, the world’s largest mining company, is also deploying driverless trucks and drills on iron ore mines in Australia. Suncor, Canada’s largest oil company, has begun testing driverless trucks on oil sands fields in Alberta.

“In the last couple of years we can just do so much more in terms of the sophistication of automation,” says Herman Herman, director of the National Robotics Engineering Center at Carnegie Mellon University, in Pittsburgh. The center helped Caterpillar develop its autonomous haul truck. Mining company Fortescue Metals Group is putting them to work in its own iron ore mines. Herman says the technology can be deployed sooner for mining than other applications, such as transportation on public roads. “It’s easier to deploy because these environments are already highly regulated,” he says.

Rio Tinto uses driverless trucks provided by Japan’s Komatsu. They find their way around using precision GPS and look out for obstacles using radar and laser sensors.
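The control pattern described, following precision GPS waypoints while radar and laser returns can override motion, can be sketched at its simplest as a safety-gated decision loop. This is an illustrative toy, not Komatsu’s or Rio Tinto’s actual logic; the threshold and command names are invented:

```python
# Toy sketch of a safety-gated haul-truck decision loop (values invented):
# sensed obstacles override waypoint following, mirroring the described
# combination of precision GPS navigation with radar/laser obstacle checks.
def next_command(obstacle_distance_m, at_waypoint, stop_margin_m=25.0):
    """Brake if a sensed obstacle is inside the safety margin;
    otherwise hold at the waypoint or keep advancing toward it."""
    if obstacle_distance_m < stop_margin_m:
        return "brake"
    return "hold" if at_waypoint else "advance"

cmd = next_command(obstacle_distance_m=10.0, at_waypoint=False)  # too close
```

The safety check comes first unconditionally; navigation only runs when the sensor gate passes, which is the essential property of such systems.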

Rob Atkinson, who leads productivity efforts at Rio Tinto, says the fleet and other automation projects are already paying off. The company’s driverless trucks have proven to be roughly 15 percent cheaper to run than vehicles with humans behind the wheel, says Atkinson—a significant saving since haulage is by far a mine’s largest operational cost. “We’re going to continue as aggressively as possible down this path,” he says.

Trucks that drive themselves can spend more time working because software doesn’t need to stop for shift changes or bathroom breaks. They are also more predictable in how they do things like pull up for loading. “All those places where you could lose a few seconds or minutes by not being consistent add up,” says Atkinson. They also improve safety, he says.

The driverless locomotives, due to be tested extensively next year and fully deployed by 2018, are expected to bring similar benefits. Atkinson also anticipates savings on train maintenance, because software can be more predictable and gentle than any human in how it uses brakes and other controls. Diggers and bulldozers could be next to be automated.

Herman at CMU expects all large mining companies to widen their use of automation in the coming years as robotics continues to improve. The recent, sizeable investments by auto and tech companies in driverless cars will help accelerate improvements in the price and performance of the sensors, software, and other technologies needed.

Herman says many mining companies are well placed to expand automation rapidly, because they have already invested in centralized control systems that use software to coördinate and monitor their equipment. Rio Tinto, for example, gave the job of overseeing its autonomous trucks to staff at the company’s control center in Perth, 750 miles to the south. The center already plans train movements and in the future will shift from sending orders to people to directing driverless locomotives.

Atkinson of Rio Tinto acknowledges that just like earlier technologies that boosted efficiency, those changes will tend to reduce staffing levels, even if some new jobs are created servicing and managing autonomous machines. “It’s something that we’ve got to carefully manage, but it’s a reality of modern day life,” he says. “We will remain a very significant employer.”

https://www.technologyreview.com/s/603170/mining-24-hours-a-day-with-robots/

Japanese white-collar workers are already being replaced by artificial intelligence at Fukoku Mutual Life Insurance

Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs. But AI systems that can handle knowledge-based, white-collar work are also becoming increasingly competent.

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting by January 2017.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.

Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years.
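The payback claim follows directly from the reported figures. Working the arithmetic through:

```python
# Payback arithmetic from the figures reported by The Mainichi:
install = 1_700_000          # one-time cost of the Watson system, USD
maintenance = 128_000        # annual maintenance, USD
salary_savings = 1_100_000   # annual salary savings, USD

net_annual = salary_savings - maintenance   # net saving per year
payback_years = install / net_annual        # ~1.75 years
```

With roughly $972,000 of net savings per year, the $1.7 million outlay is recovered in about a year and nine months, consistent with the company’s “less than two years” expectation.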

Watson AI is expected to improve productivity by 30%, Fukoku Mutual says. The company was encouraged by its use of similar IBM technology to analyze customers’ voices during complaints. The software typically takes the customer’s words, converts them to text, and analyzes whether those words are positive or negative. Similar sentiment analysis software is also being used by a range of US companies for customer service; incidentally, a large benefit of the software is understanding when customers get frustrated with automated systems.
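At its crudest, the positive/negative scoring step described above can be done with a polarity word list. The sketch below uses tiny invented lexicons purely for illustration; production systems like IBM’s rely on much larger lexicons or learned models:

```python
# Minimal lexicon-based sentiment scoring sketch (word lists invented):
# count positive and negative words in a speech-to-text transcript and
# report the sign of the balance, as in the call-analysis step described.
POSITIVE = {"thanks", "great", "helpful", "resolved"}
NEGATIVE = {"angry", "useless", "frustrated", "waiting"}

def sentiment(transcript):
    words = transcript.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score < 0:
        return "negative"
    return "positive" if score > 0 else "neutral"

label = sentiment("I am frustrated with all this waiting")
```

Even this crude approach hints at the use case in the article: a run of negative words during an automated call is a cheap signal to hand the customer to a human.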

The Mainichi reports that three other Japanese insurance companies are testing or implementing AI systems to automate work such as finding ideal plans for customers. An Israeli insurance startup, Lemonade, has raised $60 million on the idea of “replacing brokers and paperwork with bots and machine learning,” says CEO Daniel Schreiber.

Artificial intelligence systems like IBM’s are poised to upend knowledge-based professions, like insurance and financial services, according to the Harvard Business Review, due to the fact that many jobs can be “composed of work that can be codified into standard steps and of decisions based on cleanly formatted data.” But whether that means augmenting workers’ ability to be productive, or replacing them entirely remains to be seen.

“Almost all jobs have major elements that—for the foreseeable future—won’t be possible for computers to handle,” HBR writes. “And yet, we have to admit that there are some knowledge-work jobs that will simply succumb to the rise of the robots.”

Japanese white-collar workers are already being replaced by artificial intelligence

Uber’s self-driving cars start picking up passengers in San Francisco

Uber’s self-driving cars are making the move to San Francisco, in a new expansion of its pilot project with autonomous vehicles that will see Volvo SUVs outfitted with sensors and supercomputers begin picking up passengers in the city.

The autonomous cars won’t operate completely driverless, for the time being – as in Pittsburgh, where Uber launched self-driving Ford Focus vehicles this fall, each SUV will have a safety driver and Uber test engineer onboard to handle manual driving when needed and monitor progress with the tests. But the cars will still be picking up ordinary passengers – any customers who request uberX using the standard consumer-facing mobile app are eligible for a ride in one of the new XC90s operated by Uber’s Advanced Technologies Group (ATG).

There’s a difference here beyond the geography; this is the third generation of Uber’s autonomous vehicle, which is distinct from the second-generation Fords that were used in the Pittsburgh pilot. Uber has a more direct relationship with Volvo in turning its new XC90s into cars with autonomous capabilities; the Fords were essentially purchased stock off the line, while Uber’s partnership with Volvo means it can do more in terms of integrating its own sensor array into the ones available on board the vehicle already.

Uber ATG Head of Product Matt Sweeney told me in an interview that this third-generation vehicle actually uses fewer sensors than the Fords that are on the roads in Pittsburgh, though the loadout still includes a full complement of traditional optical cameras, radar, LiDAR and ultrasonic detectors. He said that fewer sensors are required in part because of the lessons learned from the Pittsburgh rollout, and from their work studying previous generation vehicles; with autonomy, you typically start by throwing everything you can think of at the problem, and then you narrow based on what’s specifically useful, and what turns out not to be so necessary. Still, the fused image of the world that results from data gathered from the Volvo’s sensor suite does not lack for detail.

“You combine [images and LiDAR] together you end up with an image which you know very explicitly distance information about, so it’s like this beautiful object that you can detect as you’re moving through,” Sweeney explained to me. “And with some of the better engineered integration here, we have some radars in the front and rear bumpers behind the facades.”
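The camera–LiDAR fusion Sweeney describes — an image where every detected object carries explicit distance information — can be illustrated with a minimal sketch. This is not Uber’s actual pipeline; it is a generic pinhole-camera projection, with a hypothetical intrinsics matrix `K` chosen purely for the example, showing how 3D LiDAR points are mapped onto camera pixels so that pixels inherit metric depth.

```python
import numpy as np

# Hypothetical camera intrinsics: 1000 px focal length,
# principal point at (640, 360) for a 1280x720 image.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def project_lidar_to_image(points_cam):
    """Project Nx3 LiDAR points (already transformed into the camera
    frame, z pointing forward) to pixel coordinates.
    Returns one row (u, v, depth_m) per point in front of the camera."""
    pts = points_cam[points_cam[:, 2] > 0]   # discard points behind the camera
    proj = (K @ pts.T).T                     # homogeneous image coordinates
    uv = proj[:, :2] / proj[:, 2:3]          # perspective divide by depth
    return np.hstack([uv, pts[:, 2:3]])      # pixel coords + metric depth

# Example: a LiDAR return 10 m ahead and 1 m to the right of the camera.
result = project_lidar_to_image(np.array([[1.0, 0.0, 10.0]]))
print(result)  # pixel (740, 360) carrying a depth of 10 m
```

In a real system this step also involves an extrinsic calibration between the LiDAR and each camera, and the fused output feeds the object detectors; the sketch only captures the core geometric idea of giving image pixels distance information.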

Those radar arrays provide more than just the ability to see even in conditions it might be difficult to do so optically, as in poor weather; Sweeney notes that the radar units they’re using can actually bounce signal off the surface of the road, underneath or around vehicles in front, in order to look for and report back information on potential accidents or hazards not immediately in front of the autonomous Uber itself.

“The car is one of the reasons we’re really excited about this partnership, it’s a really tremendous vehicle,” Sweeney said. “It’s Volvo’s new SPA, the scalable platform architecture – the first car on their brand new, built from the ground up vehicle architecture, so you get all new mechanical, all new electrical, all new compute.”

Uber didn’t pick a partner blindly – Sweeney says they found a company with a reputation for nearly a hundred years of solid engineering, manufacturing and a commitment to iterating improvement in those areas.

“The vehicle that we’re building on top of, we’re very intentional about it,” Sweeney said, noting that cars like this one are engineered specifically for safety, which is not the main failure point when it comes to most automobile accidents today – that role is reserved for the human drivers behind the wheel.

Uber’s contributions are mainly in the sensor pod, and in the compute stack in the trunk, which takes up about half the surface area of the storage space and which Sweeney said is “a blade architecture, a whole bunch of CPUs and GPUs that we can swap out under there,” though he wouldn’t speak to who’s supplying those components specifically. Taken together, that tremendous computing power is the key to identifying objects in higher volume and to better pathfinding in complex city street environments.

For the actual rider, there’s an iPad-based interactive display in the rear of the vehicle, which takes over for the mobile app once you’ve actually entered the vehicle and are ready to start your ride. The display guides you through the steps of starting your trip, including ensuring your seat belt is fastened, checking your destination and then setting off on the ride itself.

During our demo, the act of actually leaving the curb and merging into traffic was handled by the safety driver on board, but in eventual full deployment of these cars the vehicles will handle even that tricky task. The iPad shows you when you’re in active self-driving mode, and also when it’s been disengaged and steering is being handled by the actual person behind the wheel instead. The screen also shows you a simplified version of what the autonomous car itself “sees,” displaying on a white background color-coded point- and line-based rudimentary versions of the objects and the world surrounding the vehicle. Objects in motion display trails as they move through this real-time virtual world.

The iPad-based display also lets you take a selfie and share the image from your ride, which definitely helps Uber promote its efforts, while also helping with the other key goal that the iPad itself seeks to achieve – making riders feel like this tech is both knowable and normal. Public perception remains one of autonomous driving’s highest bars to overcome, along with the tech problem and regulation, and selfies are one seemingly shallow way to legitimately address that.

So how did I feel during my ride? About as excited as I typically feel during any Uber ride, after the initial thrill wore off – which is to say mostly bored. The vehicle I was in had to negotiate heavy traffic, a lot of construction and very unpredictable south-of-Market San Francisco drivers, and as such did disengage with fair frequency, but it also handled long open stretches of road at speed with aplomb, and kept its distance well in denser, stop-and-go traffic. It felt overall like a system that is making good progress in terms of learning – but one that also still has a long way to go before it can do without its human minders up front.

My companion for the ride in the backseat was Uber Chief of Watch Rachel Maran, who has been a driver in Uber’s self-driving pilot in Pittsburgh previously. She explained that the unpredictability and variety in any new driving environment is going to be one of the biggest challenges Uber’s autonomous driving systems have to overcome.

Uber’s pilot in San Francisco will be limited to the downtown area and involve “a handful” of vehicles to start, with the intent of ramping up from there, according to the company. The autonomous vehicles in Pittsburgh will also continue to run concurrently with the San Francisco deployment. Where Pittsburgh offers a range of weather conditions and other environmental variables for testing, San Francisco will provide new challenges for Uber’s self-driving tech, including denser, often more chaotic traffic, plus narrower lanes and roads.

The company says it doesn’t require a permit from the California DMV to operate in the state, since the always-present onboard safety operator means the cars don’t qualify as fully autonomous as defined by state law. Legally, it’s more akin to a Tesla with Autopilot than to a self-driving Waymo car under current regulatory rules.

Ultimately, the goal for Uber in autonomy is to create safer roads, according to Sweeney, while at the same time improving urban planning and space problems stemming from a vehicle ownership model that sees most cars sitting idle and unused somewhere near 95 percent of the time. I asked Sweeney about concerns from drivers and members of the public who can be very vocal about autonomous tech’s safety on real roads.

“This car has got centimeter-level distance measurements 360-degrees around the vehicle constantly, 20 meters front and 20 meters back constantly,” Sweeney said, noting that even though the autonomous decision-making remains “a really big challenge,” the advances achieved by the sensors themselves and “their continuous attention and superhuman perception […] sets us up for the first really marked decrease in automotive fatalities since the airbag.”

“I think this is where we really push it down to zero,” Sweeney added. “People treat it as though it’s a fact of life; it’s only because we’re used to it. We can do way better than this.”

Thanks to Kebmodee for bringing this to the It’s Interesting community.