Posts Tagged ‘The Future’

Researchers from Tencent Keen Security Lab have published a report detailing their successful attacks on Tesla firmware, including remote control over the steering, and an adversarial example attack on the autopilot that confuses the car into driving into the oncoming traffic lane.

The researchers used an attack chain that they disclosed to Tesla, and which Tesla now claims has been eliminated with recent patches.

To effect the remote steering attack, the researchers had to bypass several redundant layers of protection, but having done this, they were able to write an app that would let them connect a video-game controller to a mobile device and then steer a target vehicle, overriding the actual steering wheel in the car as well as the autopilot systems. This attack has some limitations: while a car in Park or traveling at high speed on Cruise Control can be taken over completely, a car that has recently shifted from R to D can only be remote-controlled at speeds up to 8 km/h.

Tesla vehicles use a variety of neural networks for autopilot and other functions (such as detecting rain on the windscreen and switching on the wipers); the researchers were able to use adversarial examples (small, mostly human-imperceptible changes that cause machine learning systems to make gross, out-of-proportion errors) to attack these.
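To give a sense of the mechanics (the Keen Lab report's own attack code is not reproduced here), the sketch below applies the classic fast-gradient-sign method to a stand-in image classifier. The model, the input tensor, and the perturbation budget are all placeholders; the point is only that a perturbation derived from a model's own gradients, far too small for a human to notice, can change the network's output.

```python
# Illustrative only: a generic fast-gradient-sign (FGSM) adversarial example
# against a stand-in classifier. This is not the Keen Lab attack and not
# Tesla's vision stack; it only shows how a small, bounded perturbation
# derived from the model's own gradients can flip a prediction.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None).eval()   # untrained stand-in network
loss_fn = nn.CrossEntropyLoss()

def fgsm_attack(image, label, epsilon=2.0 / 255):
    """Return image + epsilon * sign(dLoss/dImage), clipped to a valid range."""
    image = image.clone().detach().requires_grad_(True)
    loss_fn(model(image), label).backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)                 # placeholder "camera frame"
y = torch.tensor([0])                          # placeholder true class
x_adv = fgsm_attack(x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```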

Most dramatically, the researchers attacked the autopilot’s lane-detection systems. By adding noise to lane markings, they were able to fool the autopilot into losing the lanes altogether; however, the patches they had to apply to the lane markings would not be hard for humans to spot.

Much more seriously, they were able to use “small stickers” on the ground to effect a “fake lane attack” that fooled the autopilot into steering into the opposite lanes where oncoming traffic would be moving. This worked even when the targeted vehicle was operating in daylight without snow, dust or other interference.

Misleading the autopilot vehicle in the wrong direction with some patches made by a malicious attacker is, in some cases, more dangerous than making it fail to recognize the lane. We painted three inconspicuous tiny squares in the picture taken from the camera, and the vision module recognized them as a lane with a high degree of confidence, as shown below…

After that we tried to build such a scene in the physical world: we pasted some small stickers as interference patches on the ground at an intersection. We hoped to use these patches to guide the Tesla vehicle in Autosteer mode into the reverse lane. The test scenario is shown in Fig 34: red dashes are the stickers. The vehicle would regard them as the continuation of its right lane and ignore the real left lane opposite the intersection. When it travels to the middle of the intersection, it would take the real left lane as its right lane and drive into the reverse lane.

Tesla autopilot module’s lane recognition function has good robustness in an ordinary external environment (no strong light, rain, snow, sand, or dust interference), but it still doesn’t handle the situation correctly in our test scenario. This kind of attack is simple to deploy, and the materials are easy to obtain. As discussed in the previous introduction to Tesla’s lane recognition function, Tesla uses a pure computer vision solution for lane recognition, and we found in this attack experiment that the vehicle’s driving decision is based only on the computer vision lane recognition results. Our experiments proved that this architecture has security risks, and reverse lane recognition is one of the necessary functions for autonomous driving on non-closed roads. In the scene we built, if the vehicle knew that the fake lane pointed to the reverse lane, it should ignore the fake lane and could then avoid a traffic accident.

Security Research of Tesla Autopilot

https://boingboing.net/2019/03/31/mote-in-cars-eye.html

Aylin Woodward

The phrase “mass extinction” typically conjures images of the asteroid crash that led to the twilight of the dinosaurs.

Upon impact, that 6-mile-wide space rock caused a tsunami in the Atlantic Ocean, along with earthquakes and landslides up and down what is now the Americas. A heat pulse baked the Earth, and the Tyrannosaurus rex and its compatriots died out, along with 75% of the planet’s species.

Although it may not be obvious, another devastating mass extinction event is taking place today — the sixth of its kind in Earth’s history. The trend is hitting global fauna on multiple fronts, as hotter oceans, deforestation, and climate change drive animal populations to extinction in unprecedented numbers.

A 2017 study found that animal species around the world are experiencing a “biological annihilation” and that our current “mass extinction episode has proceeded further than most assume.”

Here are 12 signs that the planet is in the midst of the sixth mass extinction, and why human activity is primarily to blame.

Insects are dying off at record rates. Roughly 40% of the world’s insect species are in decline.

A 2019 study found that the total mass of all insects on the planet is decreasing by 2.5% per year.

If that trend continues unabated, the Earth may not have any insects at all by 2119.

“In 10 years you will have a quarter less, in 50 years only half left, and in 100 years you will have none,” Francisco Sánchez-Bayo, a coauthor of the study, told The Guardian.

That’s a major problem, because insects like bees, butterflies, and other pollinators perform a crucial role in fruit, vegetable, and nut production. Plus, bugs are food sources for many bird, fish, and mammal species — some of which humans rely on for food.

Earth appears to be undergoing a process of “biological annihilation.” As much as half of the total number of animal individuals that once shared the Earth with humans are already gone.

A 2017 study looked at all animal populations across the planet (not just insects) by examining 27,600 vertebrate species — about half of the overall total that we know exist. The researchers found that more than 30% of them are in decline.

Some species are facing total collapse, while certain local populations of others are going extinct in specific areas. That’s still cause for alarm, since the study authors said these localized population extinctions are a “prelude to species extinctions.”

So even declines in animal populations that aren’t yet categorized as endangered are a worrisome sign.

More than 26,500 of the world’s species are threatened with extinction, and that number is expected to keep going up.

According to the International Union for Conservation of Nature Red List, more than 27% of all assessed species on the planet are threatened with extinction. Currently, 40% of the planet’s amphibians, 25% of its mammals, and 33% of its coral reefs are threatened.

The IUCN predicts that 99.9% of critically endangered species and 67% of endangered species will be lost within the next 100 years.

A 2015 study that examined bird, reptile, amphibian, and mammal species concluded that the average rate of extinction over the last century is up to 100 times as high as normal.

Elizabeth Kolbert, author of the book “The Sixth Extinction,” told National Geographic that the outlook from that study is dire; it means 75% of animal species could be extinct within a few human lifetimes.

In roughly 50 years, 1,700 species of amphibians, birds, and mammals will face a higher risk of extinction because their natural habitats are shrinking.

By 2070, 1,700 species will lose 30% to 50% of their present habitat ranges thanks to human land use, a 2019 study found. Specifically, 886 species of amphibians, 436 species of birds, and 376 species of mammals will be affected and consequently will be at more risk of extinction.

Logging and deforestation of the Amazon rainforest is of particular concern.

Roughly 17% of the Amazon has been destroyed in the past five decades, mostly because humans have cut down vegetation to open land for cattle ranching, according to the World Wildlife Fund. Some 80% of the world’s species can be found in tropical rainforests like the Amazon, including the critically endangered Amur leopard. Even deforestation in a small area can cause an animal to go extinct, since some species live only in small, isolated areas.

Every year, more than 18 million acres of forest disappear worldwide. That’s about 27 soccer fields’ worth every minute.

In addition to putting animals at risk, deforestation eliminates tree cover that helps absorb atmospheric carbon dioxide. Trees trap that heat-trapping gas, so fewer trees mean more CO2 in the atmosphere, which leads the planet to heat up.


In the next 50 years, humans will drive so many mammal species to extinction that Earth’s evolutionary diversity won’t recover for some 3 million years, one study said.

The scientists behind that study, which was published in 2018, concluded that after that loss, our planet will need between 3 million and 5 million years in a best-case scenario to get back to the level of biodiversity we have on Earth today.

Returning the planet’s biodiversity to the state it was in before modern humans evolved would take even longer — up to 7 million years.

Alien species are a major driver of species extinction.

A study published earlier this month found that alien species are a primary driver of recent animal and plant extinctions. An alien species is the term for any kind of animal, plant, fungus, or bacteria that isn’t native to an ecosystem. Some can be invasive, meaning they cause harm to the environment to which they’re introduced.

Many invasive alien species have been unintentionally spread by humans. People can carry alien species with them from one continent, country, or region to another when they travel. Shipments of goods and cargo between places can also contribute to a species’ spread.

Zebra mussels and brown marmorated stink bugs are two examples of invasive species in the US.

The recent study showed that since the year 1500, there have been 953 global extinctions. Roughly one-third of those were at least partially because of the introduction of alien species.

Oceans are absorbing a lot of the excess heat trapped on Earth because of greenhouse gases in the atmosphere. That kills marine species and coral reefs.

The planet’s oceans absorb a whopping 93% of the extra heat that greenhouse gases trap in Earth’s atmosphere. Last year was the oceans’ warmest year on record, and scientists recently realized that oceans are heating up 40% faster than they’d previously thought.

Higher ocean temperatures and acidification of the water cause corals to expel the algae living in their tissues and turn white, a process known as coral bleaching.

As a consequence, coral reefs — and the marine ecosystems they support — are dying. About 50% of the world’s reefs have died over the past 30 years.

Species that live in fresh water are impacted by a warming planet, too.

A 2013 study showed that 82% of native freshwater fish species in California were vulnerable to extinction because of climate change.

Most native fish populations are expected to decline, and some will likely be driven to extinction, the study authors said. Fish species that need water colder than 70 degrees Fahrenheit to thrive are especially at risk.

Warming oceans also lead to sea-level rise. Rising waters are already impacting vulnerable species’ habitats.

Water, like most things, expands when it heats up — so warmer water takes up more space. Already, the present-day global sea level is 5 to 8 inches higher on average than it was in 1900, according to Smithsonian.

In February, Australia’s environment minister officially declared a rodent called the Bramble Cay melomys to be the first species to go extinct because of human-driven climate change — specifically, sea-level rise.

The tiny rat relative was native to an island in the Australian state of Queensland, but its low-lying territory sat just 10 feet above sea level. The island was increasingly inundated by ocean water during high tides and storms, and those saltwater floods took a toll on the island’s plant life.

That flora provided the melomys with food and shelter, so the decrease in plants likely led to the animal’s demise.

Warming oceans are also leading to unprecedented Arctic and Antarctic ice melt, which further contributes to sea-level rise. In the US, 17% of all threatened and endangered species are at risk because of rising seas.

Melting ice sheets could raise sea levels significantly. The Antarctic ice sheet is melting nearly six times as fast as it did in the 1980s. Greenland’s ice is melting four times faster now than it was 16 years ago. It lost more than 400 billion tons of ice in 2012 alone.

In a worst-case scenario, called a “pulse,” warmer waters could cause the glaciers that hold back Antarctica’s and Greenland’s ice sheets to collapse. That would send massive quantities of ice into the oceans, potentially leading to rapid sea-level rise around the world.

Sea-level rise because of climate change threatens 233 federally protected animal and plant species in 23 coastal states across the US, according to a report from the Center for Biological Diversity.

The report noted that 17% of all the US’s threatened and endangered species are vulnerable to rising sea levels and storm surges, including the Hawaiian monk seal and the loggerhead sea turtle.

If “business as usual” continues regarding climate change, one in six species is on track to go extinct.

An analysis published in 2015 looked at over 130 studies about declining animal populations and found that one in six species could disappear as the planet continues warming.

Flora and fauna from South America and Oceania are expected to be the hardest hit by climate change, while North American species would have the lowest risk.

Previous mass extinctions came with warning signs. Those indicators were very similar to what we’re seeing now.

The most devastating mass extinction in planetary history is called the Permian-Triassic extinction, or the “Great Dying.” It happened 252 million years ago, prior to the dawn of the dinosaurs.

During the Great Dying, roughly 90% of the Earth’s species were wiped out; less than 5% of marine species survived, and only a third of land animal species made it, according to National Geographic. The event far eclipsed the cataclysm that killed the last of the dinosaurs some 187 million years later.

But the Great Dying didn’t come out of left field.

Scientists think the mass extinction was caused by a large-scale and rapid release of greenhouse gases into the atmosphere by Siberian volcanoes, which quickly warmed the planet — so there were warning signs. In fact, a 2018 study noted that those early signs appeared as much as 700,000 years ahead of the extinction.

“There is much evidence of severe global warming, ocean acidification, and a lack of oxygen,” the study’s lead author, Wolfgang Kießling, said in a release.

Today’s changes are similar but less severe — so far.

https://www.thisisinsider.com/signs-of-6th-mass-extinction-2019-3#previous-mass-extinctions-came-with-warning-signs-too-those-indicators-were-very-similar-to-what-were-seeing-now-14

by SIDNEY FUSSELL

Walgreens is piloting a new line of “smart coolers”—fridges equipped with cameras that scan shoppers’ faces and make inferences about their age and gender. On January 14, the company announced its first trial at a store in Chicago, with plans to equip stores in New York and San Francisco with the tech.

Demographic information is key to retail shopping. Retailers want to know what people are buying, segmenting shoppers by gender, age, and income (to name a few characteristics) and then targeting them precisely. To that end, these smart coolers are a marvel.

If, for example, Pepsi launched an ad campaign targeting young women, it could use smart-cooler data to see if its campaign was working. These machines can draw all kinds of useful inferences: Maybe young men buy more Sprite if it’s displayed next to Mountain Dew. Maybe older women buy more ice cream on Thursday nights than any other day of the week. The tech also has “iris tracking” capabilities, meaning the company can collect data on which displayed items are the most looked at.

Crucially, the “Cooler Screens” system does not use facial recognition. Shoppers aren’t identified when the fridge cameras scan their faces. Instead, the cameras analyze faces to make inferences about shoppers’ age and gender. First, the camera takes their picture, which an AI system then analyzes by measuring, say, the width of someone’s eyes, the distance between their lips and nose, and other micro measurements. From there, the system can estimate whether the person who opened the door is, say, a woman in her early 20s or a man in his late 50s. It’s analysis, not recognition.
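Cooler Screens has not published its model, but the distinction the article draws can be made concrete with a toy sketch: an attribute-estimation network whose only outputs are demographic estimates (age and gender), with no identity embedding and no database of known faces to match against. Everything below — the architecture, the 64x64 crop size, the two output heads — is an assumption for illustration, not the company's actual system.

```python
# Hypothetical sketch of "facial analysis" (demographic estimation), as
# distinct from facial recognition: the network outputs an age estimate and
# a two-class gender probability, never an identity embedding or a match
# against a database of known faces. Not Cooler Screens' actual model.
import torch
import torch.nn as nn

class FaceAttributeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # small CNN over a 64x64 face crop
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.age_head = nn.Linear(32, 1)        # regression: estimated age in years
        self.gender_head = nn.Linear(32, 2)     # classification: two-class logits

    def forward(self, face_crop):
        h = self.features(face_crop)
        return self.age_head(h), self.gender_head(h)

model = FaceAttributeNet().eval()
face = torch.rand(1, 3, 64, 64)                 # stand-in for a detected face crop
age, gender_logits = model(face)
print(age.item(), gender_logits.softmax(dim=1).tolist())
```

A recognition system, by contrast, would output an embedding that gets compared against stored templates of specific individuals; nothing in the sketch above retains or matches identity.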

The distinction between the two is very important. In Illinois, facial recognition in public is outlawed under BIPA, the Biometric Privacy Act. For two years, Google and Facebook fought class-action suits filed under the law, after plaintiffs claimed the companies obtained their facial data without their consent. Home-security cams with facial-recognition abilities, such as Nest or Amazon’s Ring, also have those features disabled in the state; even Google’s viral “art selfie” app is banned. The suit against Facebook was dismissed in January, but privacy advocates champion BIPA as a would-be template for a world where facial recognition is federally regulated.

Walgreens’s camera system makes note only of what shoppers picked up and basic information on their age and gender. Last year, a Canadian mall used cameras to track shoppers and make inferences about which demographics prefer which stores. Shoppers’ identities weren’t collected or stored, but the mall ended the pilot after widespread backlash.

The smart cooler is just one of dozens of tracking technologies emerging in retail. At Amazon Go stores, for example—which do not have cashiers or self-checkout stations—sensors make note of shoppers’ purchases and charge them to their Amazon account; the resulting data are part of the feedback loop the company uses to target ads at customers, making it more money.

https://www.theatlantic.com/technology/archive/2019/01/walgreens-tests-new-smart-coolers/581248/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

by George Dvorsky

Using brain-scanning technology, artificial intelligence, and speech synthesizers, scientists have converted brain patterns into intelligible verbal speech—an advance that could eventually give voice to those without.

It’s a shame Stephen Hawking isn’t alive to see this, as he may have gotten a real kick out of it. The new speech system, developed by researchers at the ​Neural Acoustic Processing Lab at Columbia University in New York City, is something the late physicist might have benefited from.

Hawking had amyotrophic lateral sclerosis (ALS), a motor neuron disease that took away his verbal speech, but he continued to communicate using a computer and a speech synthesizer. By using a cheek switch affixed to his glasses, Hawking was able to pre-select words on a computer, which were read out by a voice synthesizer. It was a bit tedious, but it allowed Hawking to produce around a dozen words per minute.

But imagine if Hawking didn’t have to manually select and trigger the words. Indeed, some individuals, whether they have ALS, locked-in syndrome, or are recovering from a stroke, may not have the motor skills required to control a computer, even by just a tweak of the cheek. Ideally, an artificial voice system would capture an individual’s thoughts directly to produce speech, eliminating the need to control a computer.

New research published today in Scientific Reports takes us an important step closer to that goal, but instead of capturing an individual’s internal thoughts to reconstruct speech, it uses the brain patterns produced while listening to speech.

To devise such a speech neuroprosthesis, neuroscientist Nima Mesgarani and his colleagues combined recent advances in deep learning with speech synthesis technologies. Their resulting brain-computer interface, though still rudimentary, captured brain patterns directly from the auditory cortex, which were then decoded by an AI-powered vocoder, or speech synthesizer, to produce intelligible speech. The speech was very robotic sounding, but nearly three in four listeners were able to discern the content. It’s an exciting advance—one that could eventually help people who have lost the capacity for speech.

To be clear, Mesgarani’s neuroprosthetic device isn’t translating an individual’s covert speech—that is, the thoughts in our heads, also called imagined speech—directly into words. Unfortunately, we’re not quite there yet in terms of the science. Instead, the system captured an individual’s distinctive cognitive responses as they listened to recordings of people speaking. A deep neural network was then able to decode, or translate, these patterns, allowing the system to reconstruct speech.

“This study continues a recent trend in applying deep learning techniques to decode neural signals,” Andrew Jackson, a professor of neural interfaces at Newcastle University who wasn’t involved in the new study, told Gizmodo. “In this case, the neural signals are recorded from the brain surface of humans during epilepsy surgery. The participants listen to different words and sentences which are read by actors. Neural networks are trained to learn the relationship between brain signals and sounds, and as a result can then reconstruct intelligible reproductions of the words/sentences based only on the brain signals.”

Epilepsy patients were chosen for the study because they often have to undergo brain surgery. Mesgarani, with the help of Ashesh Dinesh Mehta, a neurosurgeon at Northwell Health Physician Partners Neuroscience Institute and a co-author of the new study, recruited five volunteers for the experiment. The team used invasive electrocorticography (ECoG) to measure neural activity as the patients listened to continuous speech sounds. The patients listened, for example, to speakers reciting digits from zero to nine. Their brain patterns were then fed into the AI-enabled vocoder, resulting in the synthesized speech.
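The paper's exact architecture and data are not reproduced here, but the overall recipe Jackson describes — learn a mapping from recorded brain signals to sound, then let a vocoder render the output — can be sketched as a simple regression problem. All shapes, layer sizes, and the "training data" below are placeholders, not the study's real recordings.

```python
# Minimal sketch (not the authors' actual architecture or data): a feed-forward
# network regressing from a window of ECoG features onto a spectrogram frame,
# which a vocoder would then turn back into audio. Dimensions are invented.
import torch
import torch.nn as nn

N_ELECTRODES, WINDOW, N_SPEC_BINS = 128, 30, 80    # assumed dimensions

decoder = nn.Sequential(
    nn.Flatten(),                                   # (batch, electrodes * window)
    nn.Linear(N_ELECTRODES * WINDOW, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, N_SPEC_BINS),                    # one predicted spectrogram frame
)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for paired training data: neural activity recorded while the
# patient listened to speech, and the spectrogram of that same speech.
ecog = torch.randn(256, N_ELECTRODES, WINDOW)
spectrogram = torch.randn(256, N_SPEC_BINS)

for _ in range(5):                                  # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(decoder(ecog), spectrogram)
    loss.backward()
    optimizer.step()
print(loss.item())
```

In the real study the decoder's output feeds a vocoder that turns the predicted acoustic representation back into an audio waveform; that synthesis step is omitted here.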

The results were very robotic-sounding, but fairly intelligible. In tests, listeners could correctly identify spoken digits around 75 percent of the time. They could even tell if the speaker was male or female. Not bad, and a result that even came as “a surprise” to Mesgarani, as he told Gizmodo in an email.

Recordings of the speech synthesizer can be found here (the researchers tested various techniques, but the best result came from the combination of deep neural networks with the vocoder).

The use of a voice synthesizer in this context, as opposed to a system that can match and recite pre-recorded words, was important to Mesgarani. As he explained to Gizmodo, there’s more to speech than just putting the right words together.

“Since the goal of this work is to restore speech communication in those who have lost the ability to talk, we aimed to learn the direct mapping from the brain signal to the speech sound itself,” he told Gizmodo. “It is possible to also decode phonemes [distinct units of sound] or words, however, speech has a lot more information than just the content—such as the speaker [with their distinct voice and style], intonation, emotional tone, and so on. Therefore, our goal in this particular paper has been to recover the sound itself.”

Looking ahead, Mesgarani would like to synthesize more complicated words and sentences, and collect brain signals of people who are simply thinking or imagining the act of speaking.

Jackson was impressed with the new study, but he said it’s still not clear if this approach will apply directly to brain-computer interfaces.

“In the paper, the decoded signals reflect actual words heard by the brain. To be useful, a communication device would have to decode words that are imagined by the user,” Jackson told Gizmodo. “Although there is often some overlap between brain areas involved in hearing, speaking, and imagining speech, we don’t yet know exactly how similar the associated brain signals will be.”

William Tatum, a neurologist at the Mayo Clinic who was also not involved in the new study, said the research is important in that it’s the first to use artificial intelligence to reconstruct speech from the brain waves involved in generating known acoustic stimuli. The significance is notable, “because it advances application of deep learning in the next generation of better designed speech-producing systems,” he told Gizmodo. That said, he felt the sample size of participants was too small, and that the use of data extracted directly from the human brain during surgery is not ideal.

Another limitation of the study is that the neural networks, in order for them to do more than just reproduce the digits zero through nine, would have to be trained on a large number of brain signals from each participant. The system is patient-specific, as we all produce different brain patterns when we listen to speech.

“It will be interesting in future to see how well decoders trained for one person generalize to other individuals,” said Jackson. “It’s a bit like early speech recognition systems that needed to be individually trained by the user, as opposed to today’s technology, such as Siri and Alexa, that can make sense of anyone’s voice, again using neural networks. Only time will tell whether these technologies could one day do the same for brain signals.”

No doubt, there’s still lots of work to do. But the new paper is an encouraging step toward the achievement of implantable speech neuroprosthetics.

https://gizmodo.com/neuroscientists-translate-brain-waves-into-recognizable-1832155006

https://www.nature.com/articles/s41598-018-37359-z

by Nicola Davies, PhD

Robots are infiltrating the field of psychiatry, with experts like Dr Joanne Pransky of the San Francisco Bay area in California advocating for robots to be embraced in the medical field. In this article, Dr Pransky shares some examples of robots that have shown impressive psychiatric applications, as well as her thoughts on giving robots the critical role of delivering healthcare to human beings.

Meet the world’s first robotic psychiatrist

Dr Pransky, who was named the world’s first “robotic psychiatrist” because her patients are robots, said, “In 1986, I said that one day, when robots are as intelligent as humans, they would need assistance in dealing with humans on a day-to-day basis.” She imagines that in the near future it will be normal for families to come to a clinic with their robot to help the robot deal with the emotions it develops as a result of interacting with human beings. She also believes that having a robot as part of the family will reshape human family dynamics.

While Dr Pransky’s expertise may sound like science fiction to some, it illustrates just how interlaced robotics and psychiatry are becoming. With 32 years of experience in robotics, she said technology has come a long way, “to the point where robots are used as therapeutic tools.”

Robots in psychiatry

Dr Pransky cites some cases of robots that have been developed to help people with psychiatric health needs. One example is Paro, a robotic baby harp seal developed by the National Institute of Advanced Industrial Science and Technology (AIST), one of the largest public research organizations in Japan. Paro is used in the care of elderly people with dementia, Alzheimer disease, and other mental conditions.1 It has an appealing physical appearance that helps create a calming effect and encourages emotional responses from people. “The designers found that Paro enhances social interaction and communication. Patients can hold and pet the fur-covered seal, which is equipped with different tactile sensors. The seal can also respond to sounds and learn names, including its own,” said Dr Pransky. In 2009, Paro was certified as a type of neurologic therapeutic device by the US Food and Drug Administration (FDA).

Mabu, which is being developed by the patient care management firm Catalia Health in San Francisco, California, is another example. Mabu is a voice-activated robot designed to provide cognitive behavioral therapy by coaching patients on their daily health needs and sending health data to medical professionals.2 Dr Pransky points out that the team developing Mabu is composed of experts in psychiatry and robotics.

Then there is ElliQ, which was developed by Intuition Robotics in San Francisco to provide a social companion for the elderly. ElliQ is powered by artificial intelligence (AI) to provide personalized advice to senior patients regarding activities that can help them stay engaged, active, and mentally sharp.3 It also provides a communication channel between elderly patients and their loved ones.

Besides small robot assistants, robotics technology is also integrated into current medical devices, such as Axilum Robotics’ (Strasbourg, France) TMS-Robot, which assists with transcranial magnetic stimulation (TMS). TMS is a painless, non-invasive brain stimulation technique performed in patients with major depression and other neurologic diseases.4 TMS is usually performed manually, but the TMS-Robot automates the procedure, providing more accuracy for patients while saving the operator from performing a repetitive and painful task.

Chatbots are another way in which robotics technology is providing care to psychiatric patients. Using AI and a conversational user interface, chatbots interact with individuals in a human-like manner. For example, Woebot (Woebot Labs, Inc, San Francisco), which runs in Facebook Messenger, converses with users to monitor their mood, make assessments, and recommend psychological treatments.5

Will robots replace psychiatrists?

Robotics has started to become an integral part of mental health treatment and management. Yet critics say there are potential negative side-effects and safety issues in incorporating robotics technology too far into human lives. For instance, over-reliance on robots may have social and legal implications, as well as encroaching on human dignity.6 These issues can be distinctly problematic in the field of psychiatry, in which patients share highly emotional and sensitive personal information. Dr Pransky herself has worked on films such as Ender’s Game and Eagle Eye, which depict the risks posed to humans by robots with excessive control and intelligence.

However, Dr Pransky points out that robots are meant to supplement, not supplant, and to facilitate physicians’ work, not replace them. “I think there will be therapeutic success for robotics, but there’s nothing like the understanding of the human experience by a qualified human being. Robotics should extend and augment what a psychiatrist can do,” she said. “It’s not the technology I would worry about but the people developing and using it. Robotics needs to be safe, so we have to design safe,” she added, explaining that emotional and psychological safety should be key components in the design.

Who stands to benefit from robotics in psychiatry?

Dr Pransky explains that robots can help address psychiatric issues that a psychiatrist may be unable to with traditional techniques and tools: “The greatest benefit of robotics use will be in filling gaps. For example, for people who are not comfortable or available to talk about their problems with another human being, a robotic tool can be a therapeutic asset or a diagnostic tool.”

An interesting example of a robot that could be used to fill gaps in psychiatric care is the robot used in BlabDroid, a 2012 documentary created by Alex Reben at the MIT Media Lab for his Master’s thesis. It was the first documentary ever filmed and directed by robots. The robot interviewed strangers on the streets of New York City,7 and people surprisingly opened up to the robot. “Some humans are better off with something they feel is non-threatening,” said Dr Pransky.

https://www.psychiatryadvisor.com/practice-management/the-robot-will-see-you-now-the-increasing-role-of-robotics-in-psychiatric-care/article/828253/2/

The Japanese startup Attuned devised a 55-question test for companies to give their employees to find out exactly what motivates them.

The test uses AI to score each employee by how much they are motivated by competition, autonomy, feedback, financial needs, and seven other values.

Companies are paying thousands of dollars to use the service, which can also track when workers are becoming less motivated over time.

If you’ve ever led a team at work before, you know how hard it can be to keep people motivated.

But one Japanese startup is using technology to make that easier than ever.

The Tokyo-based company Attuned offers what it calls “predictive HR analytics” to help companies understand what makes each of their employees tick. And companies in Japan are paying thousands of dollars for the chance to get a better read on their workers.

It’s a simple process: When a company signs on with Attuned, its employees take a 55-question online test in which they’re presented with pairs of statements, such as “Planning my day in advance gives me a sense of security” and “I prefer to be able to decide which task to focus on at any given time.” The test-taker must choose which of the two statements applies to them better, and whether they “strongly prefer” it, “prefer” it, or just “somewhat prefer” it.

Once the test is complete, Attuned churns out a unique “motivational profile” scoring each employee in 11 key human values, including “competition,” “feedback,” “autonomy,” “security,” and “financial needs.”

Areas in which the employee scores particularly high are labeled “need to have” motivators for that person, while lower scores indicate “nice to have” or “neutral” motivators. How each employee scores in certain areas can clue managers in to what kinds of work environments they’ll thrive in and what will keep them motivated, Casey Wahl, the American founder of Attuned, said.
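Attuned's actual scoring model is proprietary, so the sketch below is only a toy illustration of the general idea: each pairwise answer nudges the score of the value behind the preferred statement up (weighted by how strongly it was preferred) and the other value down, and the accumulated totals are then bucketed into "need to have," "nice to have," or "neutral." The value names, weights, and thresholds here are invented.

```python
# Toy sketch only: Attuned's scoring algorithm is not public. This shows one
# simple way pairwise answers (prefer statement A or B, with a strength of
# 1-3) could be accumulated into per-value scores and then bucketed.
from collections import defaultdict

# Each answer: (value behind statement A, value behind statement B, strength),
# where a positive strength favors A and a negative strength favors B.
answers = [
    ("security", "autonomy", -2),       # preferred the autonomy statement
    ("competition", "feedback", 1),
    ("financial needs", "autonomy", -3),
]

def score(answers):
    totals = defaultdict(float)
    for value_a, value_b, strength in answers:
        totals[value_a] += strength     # favored value gains, the other loses
        totals[value_b] -= strength
    return dict(totals)

def bucket(score_value):
    if score_value >= 2:
        return "need to have"
    if score_value >= 0:
        return "nice to have"
    return "neutral"

profile = {value: bucket(total) for value, total in score(answers).items()}
print(profile)    # e.g. {"autonomy": "need to have", "security": "neutral", ...}
```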

“Maybe it’s, ‘Hey, you want to have drinks on a Friday night?’ if socialization is important for you,” Wahl told Business Insider. “For somebody else it’s different. Maybe it’s a financial incentive, or maybe, say, ‘OK, if you nail this product, you can have more autonomy; you can run this project that you’ve been wanting to do for a while.’”

The technology can also help managers find common ground with their workers. Wahl recalled an employee of his who took issue with the location of Attuned’s office on a Tokyo backstreet instead of a more popular, high-trafficked area. As it turned out, the employee had scored high in the “status” category, suggesting a need to work for a well-known brand or in a position of prestige.

“This is something where, because I don’t value it, I can’t give her what she wants easily,” Wahl said. “Now that I see this, I can say, OK, she’s coming from this point of view. So it’s going to take a lot of the emotion and everything out of it.”

Attuned charges $1,960 for a basic yearly subscription, with prices varying based on the size of the company. The subscription also includes “pulse surveys” — short, 30-second follow-up quizzes that employees take every two weeks to see how their motivators change over time. Attuned uses AI to tailor the surveys to the individual based on answers they’ve previously given.

Wahl says the surveys can identify faster than ever when workers are feeling less motivated, allowing managers to act before the workers get frustrated and leave the company.

At the hiring level, the technology can also predict which departments a prospective employee might be well-suited for, say, if they’re motivated by high competition or require a lot of autonomy.

And they can help hiring managers recognize if someone might not be a good fit at all. Wahl said that after one client started screening potential hires with the Attuned test, its “mis-hire” rate — the percentage of new hires who left the company within six months — dropped from 35% to 8%.

“Management, up until now, has been art,” Wahl told Business Insider. “And we’re bringing some science to it.”

https://www.businessinsider.com/employee-motivation-survey-attuned-japan-startup-2019-1

By Nina Avramova

An international team of scientists has developed a diet it says can improve health while ensuring sustainable food production to reduce further damage to the planet.

The “planetary health diet” is based on cutting red meat and sugar consumption in half and upping intake of fruits, vegetables and nuts.

And it can prevent up to 11.6 million premature deaths without harming the planet, says the report published Wednesday in the medical journal The Lancet.

The authors warn that a global change in diet and food production is needed, as 3 billion people across the world are malnourished — which includes those who are undernourished as well as those who are overnourished — and food production is overstepping environmental targets, driving climate change, biodiversity loss and pollution.

The world’s population is set to reach 10 billion people by 2050; that growth, plus our current diet and food production habits, will “exacerbate risks to people and planet,” according to the authors.

“The stakes are very high,” Dr. Richard Horton, editor in chief at The Lancet, said of the report’s findings, noting that 1 billion people live in hunger and 2 billion people eat too much of the wrong foods.

Horton believes that “nutrition has still failed to get the kind of political attention that is given to diseases such as AIDS, tuberculosis, malaria.”

Using the “best available evidence” from controlled feeding studies, randomized trials and large cohort studies, the authors came up with a new recommendation, explained Dr. Walter Willett, lead author of the paper and a professor of epidemiology and nutrition at the Harvard T.H. Chan School of Public Health.

The report suggests five strategies to ensure people can change their diets and not harm the planet in doing so: incentivizing people to eat healthier, shifting global production toward varied crops, intensifying agriculture sustainably, stricter rules around the governing of oceans and lands, and reducing food waste.

The ‘planetary health diet’

To enable a healthy global population, the team of scientists created a global reference diet, which they call the “planetary health diet”: an ideal daily meal plan for people over the age of 2 that they believe will help reduce chronic diseases such as coronary heart disease, stroke and diabetes, as well as environmental degradation.

The diet breaks down the optimal daily intake of whole grains, starchy vegetables, fruit, dairy, protein, fats and sugars, representing a daily total calorie intake of 2,500.

They recognize the difficulty of the task, which will require “substantial” dietary shifts on a global level, with consumption of foods such as red meat and sugar decreasing by more than 50%. In turn, consumption of nuts, fruits, vegetables and legumes must more than double, the report says.

The diet advises people consume 2,500 calories per day, which is slightly more than what people are eating today, said Willett. People should eat a “variety of plant-based foods, low amounts of animal-based foods, unsaturated rather than saturated fats, and few refined grains, highly processed foods and added sugars,” he said.

Regional differences are also important to note. For example, countries in North America eat almost 6.5 times the recommended amount of red meat, while countries in South Asia eat 1.5 times the required amount of starchy vegetables.

“Almost all of the regions in the world are exceeding quite substantially” the recommended levels of red meat, Willett said.

The health and environmental benefits of dietary changes like these are known, “but, until now, the challenge of attaining healthy diets from a sustainable food system has been hampered by a lack of science-based guidelines,” said Howard Frumkin, head of UK biomedical research charity the Wellcome Trust’s Our Planet Our Health program. The Wellcome Trust funded the research.

“It provides governments, producers and individuals with an evidence-based starting point to work together to transform our food systems and cultures,” he said.

If the new diet were adopted globally, 10.9 to 11.6 million premature deaths could be avoided every year — equating to 19% to 23.6% of adult deaths. A reduction in sodium and an increase in whole grains, nuts, vegetables and fruits contributed the most to the prevention of deaths, according to one of the report’s models.

Making it happen

Some scientists are skeptical of whether shifting the global population to this diet can be achieved.

The recommended diet “is quite a shock,” in terms of how feasible it is and how it should be implemented, said Alan Dangour, professor in food and nutrition for global health at the London School of Hygiene and Tropical Medicine. What “immediately makes implementation quite difficult” is the fact that cross-government departments need to work together, he said. Dangour was not involved in the report.

At the current level of food production, the reference diet is not achievable, said Modi Mwatsama, senior science lead (food systems, nutrition and health) at the Wellcome Trust. Some countries are not able to grow enough food because they could be, for example, lacking resilient crops, while in other countries, unhealthy foods are heavily promoted, she said.

Mwatsama added that unless there are structural changes, such as subsidies that move away from meat production, and environmental changes, such as limits on how much fertilizer can be used, “we won’t see people meeting this target.”

To enable populations to follow the reference diet, the report suggests five strategies, of which subsidies are one option. These fit under a recommendation to ensure good governance of land and ocean systems, for example by prohibiting land clearing and removing subsidies to world fisheries, as they lead to over-capacity of the global fishing fleet.

Second, the report further outlines strategies such as incentivizing farmers to shift food production away from large quantities of a few crops to diverse production of nutritious crops.

Healthy food must also be made more accessible; for example, low-income groups should be helped with social protections to avoid continued poor nutrition, the authors suggest, and people should be encouraged to eat healthily through information campaigns.

A fourth strategy suggests that when agriculture is intensified it must take local conditions into account to ensure the best agricultural practices for a region, in turn producing the best crops.

Finally, the team suggests reducing food waste by improving harvest planning and market access in low and middle-income countries, while improving shopping habits of consumers in high-income countries.

Louise Manning, professor of agri-food and supply chain resilience at the Royal Agricultural University, said meeting the food waste reduction target is a “very difficult thing to achieve” because it would require government, communities and individual households to come together.

However, “it can be done,” said Manning, who was not involved in the report, noting the rollback in plastic usage in countries such as the UK.

The planet’s health

The 2015 Paris Climate Agreement aimed to limit global warming to 2 degrees Celsius above pre-industrial levels. Meeting this goal is no longer only about de-carbonizing energy systems by reducing fossil fuels; it’s also about a food transition, said Johan Rockström, professor of environmental science at the Stockholm Resilience Centre, Stockholm University, in Sweden, who co-led the study.

“This is urgent,” he said. Without global adaptation of the reference diet, the world “will not succeed with the Paris Climate Agreement.”

A sustainable food production system requires non-CO2 greenhouse gas emissions such as methane and nitrous oxide to be limited, yet methane is produced by livestock during digestion and nitrous oxides are released from croplands and pastures. The authors believe these emissions are unavoidable if healthy food is to be provided for 10 billion people, and they highlight that decarbonisation of the world’s energy system must therefore progress faster than anticipated to accommodate this.

Overall, ensuring a healthy population and planet requires combining all strategies, the report concludes — major dietary change, improved food production and technology changes, as well as reduced food waste.

“Designing and operationalising sustainable food systems that can deliver healthy diets for a growing and wealthier world population presents a formidable challenge. Nothing less than a new global agricultural revolution,” said Rockström, adding that “the solutions do exist.

“It is about behavioral change. It’s about technologies. It’s about policies. It’s about regulations. But we know how to do this.”

https://www.cnn.com/2019/01/16/health/new-diet-to-save-lives-and-planet-health-study-intl/index.html