
by David Hambling

Everyone’s heart is different. Like the iris or fingerprint, our unique cardiac signature can be used as a way to tell us apart. Crucially, it can be done from a distance.

It’s that last point that has intrigued US Special Forces. Other long-range biometric techniques include gait analysis, which identifies someone by the way he or she walks. This method was supposedly used to identify an infamous ISIS terrorist before a drone strike. But gaits, like faces, are not necessarily distinctive. An individual’s cardiac signature is unique, though, and unlike faces or gait, it remains constant and cannot be altered or disguised.

Long-range detection
A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).
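
To give a sense of the signal processing involved, here is a minimal sketch of extracting a pulse rate from a chest-surface vibration signal of the kind a laser vibrometer records. It is not the Jetson software, just an illustration of the general idea (band-pass filtering around plausible heart rates, then picking the dominant frequency) on synthetic data; all amplitudes, frequencies, and filter settings are assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100                                     # samples per second
t = np.arange(0, 30, 1 / fs)                 # 30 s of simulated measurement
heart_hz = 1.2                               # ground truth: 72 beats per minute
rng = np.random.default_rng(0)

# Synthetic chest-surface displacement: sharp cardiac pulses riding on slow
# respiration, plus sensor noise (all amplitudes are made up).
pulses = 0.05 * np.maximum(np.cos(2 * np.pi * heart_hz * t), 0) ** 20
respiration = 0.5 * np.sin(2 * np.pi * 0.25 * t)
noise = 0.005 * rng.standard_normal(t.size)
displacement = pulses + respiration + noise

# Band-pass 0.8-2.0 Hz (48-120 bpm) to isolate the cardiac component.
sos = butter(4, [0.8, 2.0], btype="band", fs=fs, output="sos")
cardiac = sosfiltfilt(sos, displacement)

# Take the dominant in-band frequency as the heart rate.
spectrum = np.abs(np.fft.rfft(cardiac))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
in_band = (freqs >= 0.8) & (freqs <= 2.0)
heart_rate_bpm = 60 * freqs[in_band][np.argmax(spectrum[in_band])]
print(f"estimated heart rate: {heart_rate_bpm:.0f} bpm")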

The most common way of carrying out remote biometric identification is by face recognition. But this needs a good, frontal view of the face, which can be hard to obtain, especially from a drone. Face recognition may also be confused by beards, sunglasses, or headscarves.

Cardiac signatures are already used for security identification. The Canadian company Nymi has developed a wrist-worn pulse sensor as an alternative to fingerprint identification. The technology has been trialed by the Halifax building society in the UK.

Jetson extends this approach by adapting an off-the-shelf device that is usually used to check vibration from a distance in structures such as wind turbines. For Jetson, a special gimbal was added so that an invisible, quarter-size laser spot could be kept on a target. It takes about 30 seconds to get a good return, so at present the device is only effective where the subject is sitting or standing.

Better than face recognition
Remaly’s team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it’s likely that Jetson would be used alongside facial recognition or other identification methods.

Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. “Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy,” he says.

One glaring limitation is the need for a database of cardiac signatures, but even without this the system has its uses. For example, an insurgent seen in a group planting an IED could later be positively identified from a cardiac signature, even if the person’s name and face are unknown. Biometric data is also routinely collected by US armed forces in Iraq and Afghanistan, so cardiac data could be added to that library.

In the longer run, this technology could find many more uses, its developers believe. For example, a doctor could scan for arrhythmias and other conditions remotely, or hospitals could monitor the condition of patients without having to wire them up to machines.

https://www.technologyreview.com/s/613891/the-pentagon-has-a-laser-that-can-identify-people-from-a-distanceby-their-heartbeat/

Artificial intelligence can share our natural ability to make numeric snap judgments.

Researchers observed this knack for numbers in a computer model composed of virtual brain cells, or neurons, called an artificial neural network. After being trained merely to identify objects in images — a common task for AI — the network developed virtual neurons that respond to specific quantities. These artificial neurons are reminiscent of the “number neurons” thought to give humans, birds, bees and other creatures the innate ability to estimate the number of items in a set (SN: 7/7/18, p. 7). This intuition is known as number sense.

In number-judging tasks, the AI demonstrated a number sense similar to humans and animals, researchers report online May 8 in Science Advances. This finding lends insight into what AI can learn without explicit instruction, and may prove interesting for scientists studying how number sensitivity arises in animals.

Neurobiologist Andreas Nieder of the University of Tübingen in Germany and colleagues used a library of about 1.2 million labeled images to teach an artificial neural network to recognize objects such as animals and vehicles in pictures. The researchers then presented the AI with dot patterns containing one to 30 dots and recorded how various virtual neurons responded.

Some neurons were more active when viewing patterns with specific numbers of dots. For instance, some neurons activated strongly when shown two dots but not 20, and vice versa. The degree to which these neurons preferred certain numbers was nearly identical to previous data from the neurons of monkeys.

Dot detectors
A new artificial intelligence program viewed images of dots previously shown to monkeys, including images with one dot and images with even numbers of dots from 2 to 30. Much like the number-sensitive neurons in monkey brains, number-sensitive virtual neurons in the AI preferentially activated when shown specific numbers of dots, and, as in monkey brains, the AI contained more neurons tuned to smaller numbers than to larger ones.

To test whether the AI’s number neurons equipped it with an animal-like number sense, Nieder’s team presented pairs of dot patterns and asked whether the patterns contained the same number of dots. The AI was correct 81 percent of the time, performing about as well as humans and monkeys do on similar matching tasks. Like humans and other animals, the AI struggled to differentiate between patterns that had very similar numbers of dots, and between patterns that had many dots (SN: 12/10/16, p. 22).
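
In spirit, the probing step is straightforward to reproduce. The sketch below is not the authors' code; it feeds randomly generated dot patterns through an off-the-shelf pretrained image classifier (torchvision's ResNet-18, standing in for the network used in the study) and counts which units respond most strongly to which numerosities. The model choice, probed layer, and all parameters are assumptions for illustration.

import numpy as np
import torch
from PIL import Image, ImageDraw
from torchvision import models, transforms

def dot_image(n_dots, rng, size=224, radius=6):
    """Draw n_dots white dots at random, non-overlapping positions on black."""
    img = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(img)
    centers = []
    while len(centers) < n_dots:
        x, y = rng.integers(radius, size - radius, 2)
        if all((x - cx) ** 2 + (y - cy) ** 2 > (3 * radius) ** 2 for cx, cy in centers):
            draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill="white")
            centers.append((x, y))
    return img

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Pretrained object-recognition network (requires a recent torchvision).
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Record penultimate-layer activations ("virtual neurons") via a forward hook.
feats = {}
hook = net.avgpool.register_forward_hook(
    lambda module, inputs, output: feats.update(units=output.flatten(1)))

numerosities = [1, 2, 4, 8, 16, 30]
rng = np.random.default_rng(0)
mean_act = {}
with torch.no_grad():
    for n in numerosities:
        batch = torch.stack([preprocess(dot_image(n, rng)) for _ in range(20)])
        net(batch)
        mean_act[n] = feats["units"].mean(0)     # average response of each unit

# A unit "prefers" the numerosity at which its mean response peaks.
responses = torch.stack([mean_act[n] for n in numerosities])   # [numerosity, unit]
preferred = responses.argmax(0)
for i, n in enumerate(numerosities):
    print(f"units responding most to {n:>2} dots: {(preferred == i).sum().item()}")
hook.remove()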

This finding is a “very nice demonstration” of how AI can pick up multiple skills while training for a specific task, says Elias Issa, a neuroscientist at Columbia University not involved in the work. But exactly how and why number sense arose within this artificial neural network is still unclear, he says.

Nieder and colleagues argue that the emergence of number sense in AI might help biologists understand how human babies and wild animals get a number sense without being taught to count. Perhaps basic number sensitivity “is wired into the architecture of our visual system,” Nieder says.

Ivilin Stoianov, a computational neuroscientist at the Italian National Research Council in Padova, is not convinced that such a direct parallel exists between the number sense in this AI and that in animal brains. This AI learned to “see” by studying many labeled pictures, which is not how babies and wild animals learn to make sense of the world. Future experiments could explore whether similar number neurons emerge in AI systems that more closely mimic how biological brains learn, like those that use reinforcement learning, Stoianov says (SN: 12/8/18, p. 14).

https://www.sciencenews.org/article/new-ai-acquired-humanlike-number-sense-its-own

By Greg Ip

It’s time to stop worrying that robots will take our jobs — and start worrying that they will decide who gets jobs.

Millions of low-paid workers’ lives are increasingly governed by software and algorithms. This was starkly illustrated by a report last week that Amazon.com tracks the productivity of its employees and regularly fires those who underperform, with almost no human intervention.

“Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors,” a law firm representing Amazon said in a letter to the National Labor Relations Board, as first reported by technology news site The Verge. Amazon was responding to a complaint that it had fired an employee from a Baltimore fulfillment center for federally protected activity, which could include union organizing. Amazon said the employee was fired for failing to meet productivity targets.

Perhaps it was only a matter of time before software started firing people. After all, it already screens resumes, recommends job applicants, schedules shifts and assigns projects. In the workplace, “sophisticated technology to track worker productivity on a minute-by-minute or even second-by-second basis is incredibly pervasive,” says Ian Larkin, a business professor at the University of California at Los Angeles specializing in human resources.

Industrial laundry services track how many seconds it takes to press a laundered shirt; on-board computers track truckers’ speed, gear changes and engine revolutions per minute; and checkout terminals at major discount retailers report if the cashier is scanning items quickly enough to meet a preset goal. In all these cases, results are shared in real time with the employee, and used to determine who is terminated, says Mr. Larkin.

Of course, weeding out underperforming employees is a basic function of management. General Electric Co.’s former chief executive Jack Welch regularly culled the company’s underperformers. “In banking and management consulting it is standard to exit about 20% of employees a year, even in good times, using ‘rank and yank’ systems,” says Nick Bloom, an economist at Stanford University specializing in management.

For employees of General Electric, Goldman Sachs Group Inc. and McKinsey & Co., that risk is more than compensated for by the reward of stimulating and challenging work and handsome paychecks. The risk-reward trade-off in industrial laundries, fulfillment centers and discount stores is not nearly so enticing: the work is repetitive and the pay is low. Those who aren’t weeded out one year may be weeded out the next if the company raises its productivity targets. Indeed, wage inequality doesn’t fully capture how unequal work has become: enjoyable and secure at the top, monotonous and insecure at the bottom.

At fulfillment centers, employees locate, scan and box all the items in an order. Amazon’s “Associate Development and Performance Tracker,” or Adapt, tracks how each employee performs on these steps against externally-established benchmarks and warns employees when they are falling short.

Amazon employees have complained of being monitored continuously — even having bathroom breaks measured — and being held to ever-rising productivity benchmarks. There is no public data to determine if such complaints are more or less common at Amazon than its peers. The company says about 300 employees — roughly 10% of the Baltimore center’s employment level — were terminated for productivity reasons in the year before the law firm’s letter was sent to the NLRB.

Mr. Larkin says 10% is not unusually high. Yet, automating the discipline process, he says, “makes an already difficult job seem even more inhuman and undesirable. Dealing with these tough situations is one of the key roles of managers.”

“Managers make final decisions on all personnel matters,” an Amazon spokeswoman said. “The [Adapt system] simply tracks and ensures consistency of data and process across hundreds of employees to ensure fairness.” The number of terminations has decreased in the last two years at the Baltimore facility and across North America, she said. Termination notices can be appealed.

Companies use these systems because they work well for them.

Mr. Bloom and his co-authors find that companies that more aggressively hire, fire and monitor employees have faster productivity growth. They also have wider gaps between the highest- and lowest-paid employees.

Computers also don’t succumb to the biases managers do. Economists Mitchell Hoffman, Lisa Kahn and Danielle Li looked at how 15 firms used a job-testing technology that tested applicants on computer and technical skills, personality, cognitive skills, fit for the job and various job scenarios. Drawing on past correlations, the algorithm ranked applicants as having high, moderate or low potential. Their study found employees hired against the software’s recommendation were below-average performers: “This suggests that managers often overrule test recommendations because they are biased or mistaken, not only because they have superior private information,” they wrote.
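
As a rough illustration of what “drawing on past correlations” means in practice, the sketch below fits a simple model to made-up historical applicant data and bins new applicants into high, moderate, or low potential. It is a toy assumption, not the vendor's system studied by Hoffman, Kahn and Li; all data, features, and thresholds are invented.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Past hires: test-score features (e.g. technical, cognitive, personality, fit)
# and whether the hire turned out well. All values are synthetic.
X_past = rng.normal(size=(500, 4))
y_past = (X_past @ [0.8, 0.5, 0.3, 0.4] + rng.normal(size=500)) > 0

model = LogisticRegression().fit(X_past, y_past)

# New applicants: predicted probability of a good outcome, binned into tiers.
X_new = rng.normal(size=(10, 4))
p_good = model.predict_proba(X_new)[:, 1]
tiers = np.select([p_good >= 0.66, p_good >= 0.33], ["high", "moderate"], "low")
print(list(zip(np.round(p_good, 2), tiers)))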

Last fall Amazon raised its starting pay to $15 an hour, several dollars more than what the brick-and-mortar stores being displaced by Amazon pay. Ruthless performance tracking is how Amazon ensures employees are productive enough to merit that salary. This also means that, while employees may increasingly be supervised by technology, at least they’re not about to be replaced by it.

Write to Greg Ip at greg.ip@wsj.com

https://www.morningstar.com/news/glbnewscan/TDJNDN_201905017114/for-lowerpaid-workers-the-robot-overlords-have-arrived.html

Thanks to Kebmodee for bringing this to the It’s Interesting community.



Summary: Researchers predict the development of a brain/cloud interface that connects neurons to cloud computing networks in real time.

Source: Frontiers

Imagine a future technology that would provide instant access to the world’s knowledge and artificial intelligence, simply by thinking about a specific topic or question. Communications, education, work, and the world as we know it would be transformed.

Writing in Frontiers in Neuroscience, an international collaboration led by researchers at UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, AI, and computation will lead this century to the development of a “Human Brain/Cloud Interface” (B/CI) that connects neurons and synapses in the brain to vast cloud-computing networks in real time.

Nanobots on the brain

The B/CI concept was initially proposed by futurist-author-inventor Ray Kurzweil, who suggested that neural nanorobots – the brainchild of Robert Freitas, Jr., senior author of the research – could be used to connect the neocortex of the human brain to a “synthetic neocortex” in the cloud. Our wrinkled neocortex is the newest, smartest, ‘conscious’ part of the brain.

Freitas’ proposed neural nanorobots would provide direct, real-time monitoring and control of signals to and from brain cells.

“These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely autoposition themselves among, or even within brain cells,” explains Freitas. “They would then wirelessly transmit encoded information to and from a cloud-based supercomputer network for real-time brain-state monitoring and data extraction.”

The internet of thoughts

This cortex in the cloud would allow “Matrix”-style downloading of information to the brain, the group claims.

“A human B/CI system mediated by neuralnanorobotics could empower individuals with instantaneous access to all cumulative human knowledge available in the cloud, while significantly improving human learning capacities and intelligence,” says lead author Dr. Nuno Martins.

B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought.

“While not yet particularly sophisticated, an experimental human ‘BrainNet’ system has already been tested, enabling thought-driven information exchange via the cloud between individual brains,” explains Martins. “It used electrical signals recorded through the skull of ‘senders’ and magnetic stimulation through the skull of ‘receivers,’ allowing for performing cooperative tasks.

“With the advance of neuralnanorobotics, we envisage the future creation of ‘superbrains’ that can harness the thoughts and thinking power of any number of humans and machines in real time. This shared cognition could revolutionize democracy, enhance empathy, and ultimately unite culturally diverse groups into a truly global society.”

When can we connect?

According to the group’s estimates, even existing supercomputers have processing speeds capable of handling the necessary volumes of neural data for B/CI – and they’re getting faster, fast.

Rather, transferring neural data to and from supercomputers in the cloud is likely to be the ultimate bottleneck in B/CI development.

“This challenge includes not only finding the bandwidth for global data transmission,” cautions Martins, “but also, how to enable data exchange with neurons via tiny devices embedded deep in the brain.”

One solution proposed by the authors is the use of ‘magnetoelectric nanoparticles’ to effectively amplify communication between neurons and the cloud.

“These nanoparticles have been used already in living mice to couple external magnetic fields to neuronal electric fields – that is, to detect and locally amplify these magnetic signals and so allow them to alter the electrical activity of neurons,” explains Martins. “This could work in reverse, too: electrical signals produced by neurons and nanorobots could be amplified via magnetoelectric nanoparticles, to allow their detection outside of the skull.”

Getting these nanoparticles – and nanorobots – safely into the brain via the circulation would be perhaps the greatest challenge of all in B/CI.

“A detailed analysis of the biodistribution and biocompatibility of nanoparticles is required before they can be considered for human development. Nevertheless, with these and other promising technologies for B/CI developing at an ever-increasing rate, an ‘internet of thoughts’ could become a reality before the turn of the century,” Martins concludes.

https://neurosciencenews.com/internet-thoughts-brain-cloud-interface-11074/

Researchers from Tencent Keen Security Lab have published a report detailing their successful attacks on Tesla firmware, including remote control over the steering, and an adversarial example attack on the autopilot that confuses the car into driving into the oncoming traffic lane.

The researchers used an attack chain that they disclosed to Tesla, and which Tesla now claims has been eliminated with recent patches.

To effect the remote steering attack, the researchers had to bypass several redundant layers of protection, but having done this, they were able to write an app that would let them connect a video-game controller to a mobile device and then steer a target vehicle, overriding the actual steering wheel in the car as well as the autopilot systems. This attack has some limitations: while a car in Park or traveling at high speed on Cruise Control can be taken over completely, a car that has recently shifted from R to D can only be remotely controlled at speeds up to 8 km/h.

Tesla vehicles use a variety of neural networks for autopilot and other functions (such as detecting rain on the windscreen and switching on the wipers); the researchers were able to use adversarial examples (small, mostly human-imperceptible changes that cause machine learning systems to make gross, out-of-proportion errors) to attack these.
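
For readers unfamiliar with the technique, the sketch below shows a classic adversarial example method (the fast gradient sign method) against a generic pretrained image classifier. It is not the researchers' attack on Tesla's networks, only an illustration of how a tiny, carefully chosen perturbation can change a model's output; the model choice, image path, and epsilon value are placeholder assumptions.

import torch
from torchvision import models, transforms
from PIL import Image

net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

# Any RGB photo will do; the path here is a placeholder.
img = preprocess(Image.open("example.jpg").convert("RGB"))
x = img.unsqueeze(0).requires_grad_(True)

# Original prediction, treated as the label we want to push away from.
logits = net(normalize(x))
label = logits.argmax(1)

# One gradient step that increases the loss for that label (FGSM).
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()
epsilon = 4 / 255                                  # tiny per-pixel budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # perturbed image

with torch.no_grad():
    adv_label = net(normalize(x_adv)).argmax(1)
print("prediction changed:", bool(adv_label != label))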

Most dramatically, the researchers attacked the autopilot’s lane-detection systems. By adding noise to lane markings, they were able to fool the autopilot into losing the lanes altogether; however, the patches they had to apply to the lane markings would not be hard for humans to spot.

Much more seriously, they were able to use “small stickers” on the ground to effect a “fake lane attack” that fooled the autopilot into steering into the opposite lanes where oncoming traffic would be moving. This worked even when the targeted vehicle was operating in daylight without snow, dust or other interference.

Misleading the autopilot into taking the wrong direction with patches made by a malicious attacker is, in some cases, more dangerous than making it fail to recognize the lane. We painted three inconspicuous tiny squares in the picture taken from the camera, and the vision module recognized them as a lane with a high degree of confidence, as shown below…

After that, we tried to build such a scene physically: we pasted some small stickers as interference patches on the ground at an intersection, hoping to use these patches to guide a Tesla vehicle in Autosteer mode into the reverse lane. In the test scenario shown in Fig 34, the red dashes are the stickers; the vehicle regards them as the continuation of its right lane and ignores the real left lane opposite the intersection. When it travels to the middle of the intersection, it takes the real left lane as its right lane and drives into the reverse lane.

The Tesla autopilot module’s lane-recognition function is robust in ordinary external environments (no strong light, rain, snow, sand, or dust interference), but it still does not handle our test scenario correctly. This kind of attack is simple to deploy, and the materials are easy to obtain. As discussed in the earlier introduction to Tesla’s lane-recognition function, Tesla uses a pure computer-vision solution for lane recognition, and we found in this experiment that the vehicle’s driving decisions are based only on the results of that computer-vision lane recognition. Our experiments show that this architecture has security risks: recognizing the reverse lane is one of the functions necessary for autonomous driving on open roads. In the scene we built, if the vehicle knew that the fake lane points into the reverse lane, it could ignore the fake lane and avoid a traffic accident.

Security Research of Tesla Autopilot

https://boingboing.net/2019/03/31/mote-in-cars-eye.html

Aylin Woodward

The phrase “mass extinction” typically conjures images of the asteroid crash that led to the twilight of the dinosaurs.

Upon impact, that 6-mile-wide space rock caused a tsunami in the Atlantic Ocean, along with earthquakes and landslides up and down what is now the Americas. A heat pulse baked the Earth, and the Tyrannosaurus rex and its compatriots died out, along with 75% of the planet’s species.

Although it may not be obvious, another devastating mass extinction event is taking place today — the sixth of its kind in Earth’s history. The trend is hitting global fauna on multiple fronts, as hotter oceans, deforestation, and climate change drive animal populations to extinction in unprecedented numbers.

A 2017 study found that animal species around the world are experiencing a “biological annihilation” and that our current “mass extinction episode has proceeded further than most assume.”

Here are 12 signs that the planet is in the midst of the sixth mass extinction, and why human activity is primarily to blame.

Insects are dying off at record rates. Roughly 40% of the world’s insect species are in decline.

A 2019 study found that the total mass of all insects on the planet is decreasing by 2.5% per year.

If that trend continues unabated, the Earth may not have any insects at all by 2119.

“In 10 years you will have a quarter less, in 50 years only half left, and in 100 years you will have none,” Francisco Sánchez-Bayo, a coauthor of the study, told The Guardian.

That’s a major problem, because insects like bees, butterflies, and other pollinators perform a crucial role in fruit, vegetable, and nut production. Plus, bugs are food sources for many bird, fish, and mammal species — some of which humans rely on for food.

Earth appears to be undergoing a process of “biological annihilation.” As much as half of the total number of animal individuals that once shared the Earth with humans are already gone.

A 2017 study looked at all animal populations across the planet (not just insects) by examining 27,600 vertebrate species — about half of the overall total that we know exist. The researchers found that more than 30% of them are in decline.

Some species are facing total collapse, while certain local populations of others are going extinct in specific areas. That’s still cause for alarm, since the study authors said these localized population extinctions are a “prelude to species extinctions.”

So even declines in animal populations that aren’t yet categorized as endangered are a worrisome sign.

More than 26,500 of the world’s species are threatened with extinction, and that number is expected to keep going up.

According to the International Union for Conservation of Nature Red List, more than 27% of all assessed species on the planet are threatened with extinction. Currently, 40% of the planet’s amphibians, 25% of its mammals, and 33% of its coral reefs are threatened.

The IUCN predicts that 99.9% of critically endangered species and 67% of endangered species will be lost within the next 100 years.

A 2015 study that examined bird, reptile, amphibian, and mammal species concluded that the average rate of extinction over the last century is up to 100 times as high as normal.

Elizabeth Kolbert, author of the book “The Sixth Extinction,” told National Geographic that the outlook from that study is dire; it means 75% of animal species could be extinct within a few human lifetimes.

In roughly 50 years, 1,700 species of amphibians, birds, and mammals will face a higher risk of extinction because their natural habitats are shrinking.

By 2070, 1,700 species will lose 30% to 50% of their present habitat ranges thanks to human land use, a 2019 study found. Specifically, 886 species of amphibians, 436 species of birds, and 376 species of mammals will be affected and consequently will be at more risk of extinction.

Logging and deforestation of the Amazon rainforest are of particular concern.

Roughly 17% of the Amazon has been destroyed in the past five decades, mostly because humans have cut down vegetation to open land for cattle ranching, according to the World Wildlife Fund. Some 80% of the world’s species can be found in tropical rainforests like the Amazon, including the critically endangered Amur leopard. Even deforestation in a small area can cause an animal to go extinct, since some species live only in small, isolated areas.

Every year, more than 18 million acres of forest disappear worldwide. That’s about 27 soccer fields’ worth every minute.

In addition to putting animals at risk, deforestation eliminates tree cover that helps absorb atmospheric carbon dioxide. Trees trap that gas, which contributes to global warming, so fewer trees means more CO2 in the atmosphere, which leads the planet to heat up.


In the next 50 years, humans will drive so many mammal species to extinction that Earth’s evolutionary diversity won’t recover for some 3 million years, one study said.

The scientists behind that study, which was published in 2018, concluded that after that loss, our planet will need between 3 million and 5 million years in a best-case scenario to get back to the level of biodiversity we have on Earth today.

Returning the planet’s biodiversity to the state it was in before modern humans evolved would take even longer — up to 7 million years.

Alien species are a major driver of species extinction.

A study published earlier this month found that alien species are a primary driver of recent animal and plant extinctions. An alien species is the term for any kind of animal, plant, fungus, or bacteria that isn’t native to an ecosystem. Some can be invasive, meaning they cause harm to the environment to which they’re introduced.

Many invasive alien species have been unintentionally spread by humans. People can carry alien species with them from one continent, country, or region to another when they travel. Shipments of goods and cargo between places can also contribute to a species’ spread.

Zebra mussels and brown marmorated stink bugs are two examples of invasive species in the US.

The recent study showed that since the year 1500, there have been 953 global extinctions. Roughly one-third of those were at least partially because of the introduction of alien species.

Oceans are absorbing a lot of the excess heat trapped on Earth because of greenhouse gases in the atmosphere. That kills marine species and coral reefs.

The planet’s oceans absorb a whopping 93% of the extra heat that greenhouse gases trap in Earth’s atmosphere. Last year was the oceans’ warmest year on record, and scientists recently realized that oceans are heating up 40% faster than they’d previously thought.

Higher ocean temperatures and acidification of the water cause corals to expel the algae living in their tissues and turn white, a process known as coral bleaching.

As a consequence, coral reefs — and the marine ecosystems they support — are dying. About 50% of the world’s reefs have died over the past 30 years.

Species that live in fresh water are impacted by a warming planet, too.

A 2013 study showed that 82% of native freshwater fish species in California were vulnerable to extinction because of climate change.

Most native fish populations are expected to decline, and some will likely be driven to extinction, the study authors said. Fish species that need water colder than 70 degrees Fahrenheit to thrive are especially at risk.

Warming oceans also lead to sea-level rise. Rising waters are already impacting vulnerable species’ habitats.

Water, like most things, expands when it heats up — so warmer water takes up more space. Already, the present-day global sea level is 5 to 8 inches higher on average than it was in 1900, according to Smithsonian.

In February, Australia’s environment minister officially declared a rodent called the Bramble Cay melomys to be the first species to go extinct because of human-driven climate change — specifically, sea-level rise.

The tiny rat relative was native to an island in the Australian state of Queensland, but its low-lying territory sat just 10 feet above sea level. The island was increasingly inundated by ocean water during high tides and storms, and those salt-water floods took a toll on the island’s plant life.

That flora provided the melomys with food and shelter, so the decrease in plants likely led to the animal’s demise.

Warming oceans are also leading to unprecedented Arctic and Antarctic ice melt, which further contributes to sea-level rise. In the US, 17% of all threatened and endangered species are at risk because of rising seas.

Melting ice sheets could raise sea levels significantly. The Antarctic ice sheet is melting nearly six times as fast as it did in the 1980s. Greenland’s ice is melting four times faster now than it was 16 years ago. It lost more than 400 billion tons of ice in 2012 alone.

In a worst-case scenario, called a “pulse,” warmer waters could cause the glaciers that hold back Antarctica’s and Greenland’s ice sheets to collapse. That would send massive quantities of ice into the oceans, potentially leading to rapid sea-level rise around the world.

Sea-level rise because of climate change threatens 233 federally protected animal and plant species in 23 coastal states across the US, according to a report from the Center for Biological Diversity.

The report noted that 17% of all the US’s threatened and endangered species are vulnerable to rising sea levels and storm surges, including the Hawaiian monk seal and the loggerhead sea turtle.

If “business as usual” continues regarding climate change, one in six species is on track to go extinct.

An analysis published in 2015 looked at over 130 studies about declining animal populations and found that one in six species could disappear as the planet continues warming.

Flora and fauna from South America and Oceania are expected to be the hardest hit by climate change, while North American species would have the lowest risk.

Previous mass extinctions came with warning signs. Those indicators were very similar to what we’re seeing now.

The most devastating mass extinction in planetary history is called the Permian-Triassic extinction, or the “Great Dying.” It happened 252 million years ago, prior to the dawn of the dinosaurs.

During the Great Dying, roughly 90% of the Earth’s species were wiped out; less than 5% of marine species survived, and only a third of land animal species made it, according to National Geographic. The event far eclipsed the cataclysm that killed the last of the dinosaurs some 187 million years later.

But the Great Dying didn’t come out of left field.

Scientists think the mass extinction was caused by a large-scale and rapid release of greenhouse gases into the atmosphere by Siberian volcanoes, which quickly warmed the planet — so there were warning signs. In fact, a 2018 study noted that those early signs appeared as much as 700,000 years ahead of the extinction.

“There is much evidence of severe global warming, ocean acidification, and a lack of oxygen,” the study’s lead author, Wolfgang Kießling, said in a release.

Today’s changes are similar but less severe — so far.

https://www.thisisinsider.com/signs-of-6th-mass-extinction-2019-3#previous-mass-extinctions-came-with-warning-signs-too-those-indicators-were-very-similar-to-what-were-seeing-now-14

by SIDNEY FUSSELL

Walgreens is piloting a new line of “smart coolers”—fridges equipped with cameras that scan shoppers’ faces and make inferences on their age and gender. On January 14, the company announced its first trial, at a store in Chicago, and plans to equip stores in New York and San Francisco with the tech.

Demographic information is key to retail shopping. Retailers want to know what people are buying, segmenting shoppers by gender, age, and income (to name a few characteristics) and then targeting them precisely. To that end, these smart coolers are a marvel.

If, for example, Pepsi launched an ad campaign targeting young women, it could use smart-cooler data to see if its campaign was working. These machines can draw all kinds of useful inferences: Maybe young men buy more Sprite if it’s displayed next to Mountain Dew. Maybe older women buy more ice cream on Thursday nights than any other day of the week. The tech also has “iris tracking” capabilities, meaning the company can collect data on which displayed items are the most looked at.

Crucially, the “Cooler Screens” system does not use facial recognition. Shoppers aren’t identified when the fridge cameras scan their face. Instead, the cameras analyze faces to make inferences about shoppers’ age and gender. First, the camera takes their picture, and an AI system measures and analyzes it, noting, say, the width of someone’s eyes, the distance between their lips and nose, and other micro-measurements. From there, the system can estimate if the person who opened the door is, say, a woman in her early 20s or a man in his late 50s. It’s analysis, not recognition.
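
A minimal sketch of what “analysis, not recognition” can look like in code is below: detect a face, crop it, and estimate coarse attributes without ever matching against an identity database. The attribute model here is a hypothetical placeholder, not Cooler Screens’ system, and the image path is a stand-in; only OpenCV’s bundled face detector is a real component.

import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_age_gender(face_crop):
    """Placeholder for an attribute model (e.g. a small CNN trained on labeled
    face crops). It would return coarse bins, never an identity."""
    return {"age_bracket": "20-29", "gender": "female"}   # dummy output

frame = cv2.imread("cooler_camera_frame.jpg")             # placeholder image path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    crop = frame[y:y + h, x:x + w]
    attrs = estimate_age_gender(crop)
    # Only coarse, anonymous attributes leave this loop; no face template or
    # identity is stored, which is the analysis-versus-recognition distinction.
    print(attrs)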

The distinction between the two is very important. In Illinois, facial recognition in public is outlawed under BIPA, the Biometric Information Privacy Act. For two years, Google and Facebook fought class-action suits filed under the law, after plaintiffs claimed the companies obtained their facial data without their consent. Home-security cams with facial-recognition abilities, such as Nest or Amazon’s Ring, also have those features disabled in the state; even Google’s viral “art selfie” app is banned. The suit against Facebook was dismissed in January, but privacy advocates champion BIPA as a would-be template for a world where facial recognition is federally regulated.

Walgreens’s camera system makes note only of what shoppers picked up and basic information on their age and gender. Last year, a Canadian mall used cameras to track shoppers and make inferences about which demographics prefer which stores. Shoppers’ identities weren’t collected or stored, but the mall ended the pilot after widespread backlash.

The smart cooler is just one of dozens of tracking technologies emerging in retail. At Amazon Go stores, for example—which do not have cashiers or self-checkout stations—sensors make note of shoppers’ purchases and charge them to their Amazon account; the resulting data are part of the feedback loop the company uses to target ads at customers, making it more money.

https://www.theatlantic.com/technology/archive/2019/01/walgreens-tests-new-smart-coolers/581248/

Thanks to Kebmodee for bringing this to the It’s Interesting community.