Posts Tagged ‘The Future’

The Japanese startup Attuned devised a 55-question test for companies to give their employees to find out exactly what motivates them.

The test uses AI to score each employee by how much they are motivated by competition, autonomy, feedback, financial needs, and seven other values.

Companies are paying thousands of dollars to use the service, which can also track when workers are becoming less motivated over time.

If you’ve ever led a team at work before, you know how hard it can be to keep people motivated.

But one Japanese startup is using technology to make that easier than ever.

The Tokyo-based company Attuned offers what it calls “predictive HR analytics” to help companies understand what makes each of their employees tick. And companies in Japan are paying thousands of dollars for the chance to get a better read on their workers.

It’s a simple process: When a company signs on with Attuned, its employees take a 55-question online test in which they’re presented with pairs of statements, such as “Planning my day in advance gives me a sense of security,” and “I prefer to be able to decide which task to focus on at any given time.” The test-taker must choose which of the two statements applies to them better, and whether they “strongly prefer” it, “prefer” it, or just “somewhat prefer” it.

Once the test is complete, Attuned churns out a unique “motivational profile” scoring each employee in 11 key human values, including “competition,” “feedback,” “autonomy,” “security,” and “financial needs.”

Areas in which the employee scores particularly high are labeled “need to have” motivators for that person, while lower scores indicate “nice to have” or “neutral” motivators. How each employee scores in certain areas can clue managers in to what kinds of work environments they’ll thrive in and what will keep them motivated, Casey Wahl, the American founder of Attuned, said.
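Purely as an illustration, the pairwise answers and the tiering described above might be tallied like this. The value names, weights, and thresholds are invented for the sketch; Attuned’s actual scoring model is not public:

```python
# Toy scoring sketch for a pairwise-preference motivation test.
# The values, weights, and tier thresholds are invented for illustration;
# Attuned's actual model is not public.

ANSWER_WEIGHTS = {"strongly prefer": 3, "prefer": 2, "somewhat prefer": 1}

def score_profile(answers, values):
    """Tally weighted wins per value, then scale so the top value is 100."""
    raw = {v: 0 for v in values}
    for chosen, _rejected, strength in answers:
        raw[chosen] += ANSWER_WEIGHTS[strength]
    top = max(raw.values()) or 1          # avoid dividing by zero
    return {v: round(100 * s / top) for v, s in raw.items()}

def label(score):
    """Bucket a score into the motivator tiers the article describes."""
    if score >= 70:
        return "need to have"
    if score >= 40:
        return "nice to have"
    return "neutral"

values = ["autonomy", "security", "feedback", "competition", "financial needs"]
answers = [  # (statement's value chosen, value rejected, strength)
    ("autonomy", "security", "strongly prefer"),
    ("autonomy", "feedback", "prefer"),
    ("feedback", "competition", "somewhat prefer"),
]
profile = score_profile(answers, values)
```

Here the employee who consistently picks autonomy-flavored statements ends up with autonomy as a “need to have” motivator, while values that never win a pairing land in “neutral.”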

“Maybe it’s, ‘Hey, you want to have drinks on a Friday night?’ if socialization is important for you,” Wahl told Business Insider. “For somebody else it’s different. Maybe it’s a financial incentive, or maybe, say, ‘OK, if you nail this product, you can have more autonomy; you can run this project that you’ve been wanting to do for a while.’”

The technology can also help managers find common ground with their workers. Wahl recalled an employee of his who took issue with the location of Attuned’s office on a Tokyo backstreet instead of a more popular, high-trafficked area. As it turned out, the employee had scored high in the “status” category, suggesting a need to work for a well-known brand or in a position of prestige.

“This is something where, because I don’t value it, I can’t give her what she wants easily,” Wahl said. “Now that I see this, I can say, OK, she’s coming from this point of view. So it’s going to take a lot of the emotion and everything out of it.”

Attuned charges $1,960 for a basic yearly subscription, with prices varying based on the size of the company. The subscription also includes “pulse surveys” — short, 30-second follow-up quizzes that employees take every two weeks to see how their motivators change over time. Attuned uses AI to tailor the surveys to the individual based on answers they’ve previously given.

Wahl says the surveys can identify faster than ever when workers are feeling less motivated, allowing managers to act before the workers get frustrated and leave the company.

At the hiring level, the technology can also predict which departments a prospective employee might be well-suited for, say, if they’re motivated by high competition or require a lot of autonomy.

And it can help hiring managers recognize if someone might not be a good fit at all. Wahl said that after one client started screening potential hires with the Attuned test, its “mis-hire” rate — the percentage of new hires who left the company within six months — dropped from 35% to 8%.

“Management, up until now, has been art,” Wahl told Business Insider. “And we’re bringing some science to it.”

https://www.businessinsider.com/employee-motivation-survey-attuned-japan-startup-2019-1


By Nina Avramova

An international team of scientists has developed a diet it says can improve health while ensuring sustainable food production to reduce further damage to the planet.

The “planetary health diet” is based on cutting red meat and sugar consumption in half and upping intake of fruits, vegetables and nuts.

And it can prevent up to 11.6 million premature deaths without harming the planet, says the report published Wednesday in the medical journal The Lancet.

The authors warn that a global change in diet and food production is needed, as 3 billion people across the world are malnourished — which includes those who are under- and overnourished — and food production is overstepping environmental targets, driving climate change, biodiversity loss and pollution.

The world’s population is set to reach 10 billion people by 2050; that growth, plus our current diet and food production habits, will “exacerbate risks to people and planet,” according to the authors.

“The stakes are very high,” Dr. Richard Horton, editor in chief at The Lancet, said of the report’s findings, noting that 1 billion people live in hunger and 2 billion people eat too much of the wrong foods.

Horton believes that “nutrition has still failed to get the kind of political attention that is given to diseases such as AIDS, tuberculosis, malaria.”

Using the “best available evidence” from controlled feeding studies, randomized trials and large cohort studies, the authors came up with a new recommendation, explained Dr. Walter Willett, lead author of the paper and a professor of epidemiology and nutrition at the Harvard T.H. Chan School of Public Health.

The report suggests five strategies to ensure people can change their diets and not harm the planet in doing so: incentivizing people to eat healthier, shifting global production toward varied crops, intensifying agriculture sustainably, stricter rules around the governing of oceans and lands, and reducing food waste.

The ‘planetary health diet’

To enable a healthy global population, the team of scientists created a global reference diet they call the “planetary health diet”: an ideal daily meal plan for people over the age of 2 that they believe will help reduce chronic diseases such as coronary heart disease, stroke and diabetes, as well as environmental degradation.

The diet breaks down the optimal daily intake of whole grains, starchy vegetables, fruit, dairy, protein, fats and sugars, representing a daily total calorie intake of 2,500.

They recognize the difficulty of the task, which will require “substantial” dietary shifts on a global level, with consumption of foods such as red meat and sugar decreasing by more than 50%. In turn, consumption of nuts, fruits, vegetables, and legumes must increase more than two-fold, the report says.

The diet advises people consume 2,500 calories per day, which is slightly more than what people are eating today, said Willett. People should eat a “variety of plant-based foods, low amounts of animal-based foods, unsaturated rather than saturated fats, and few refined grains, highly processed foods and added sugars,” he said.

Regional differences are also important to note. For example, countries in North America eat almost 6.5 times the recommended amount of red meat, while countries in South Asia eat 1.5 times the required amount of starchy vegetables.

“Almost all of the regions in the world are exceeding quite substantially” the recommended levels of red meat, Willett said.

The health and environmental benefits of dietary changes like these are known, “but, until now, the challenge of attaining healthy diets from a sustainable food system has been hampered by a lack of science-based guidelines,” said Howard Frumkin, head of the Our Planet, Our Health program at the UK biomedical research charity the Wellcome Trust, which funded the research.

“It provides governments, producers and individuals with an evidence-based starting point to work together to transform our food systems and cultures,” he said.

If the new diet were adopted globally, 10.9 to 11.6 million premature deaths could be avoided every year — equating to 19% to 23.6% of adult deaths. A reduction in sodium and an increase in whole grains, nuts, vegetables and fruits contributed the most to the prevention of deaths, according to one of the report’s models.

Making it happen

Some scientists are skeptical of whether shifting the global population to this diet can be achieved.

The recommended diet “is quite a shock,” in terms of how feasible it is and how it should be implemented, said Alan Dangour, professor in food and nutrition for global health at the London School of Hygiene and Tropical Medicine. What “immediately makes implementation quite difficult” is the fact that cross-government departments need to work together, he said. Dangour was not involved in the report.

At the current level of food production, the reference diet is not achievable, said Modi Mwatsama, senior science lead (food systems, nutrition and health) at the Wellcome Trust. Some countries are not able to grow enough food because, for example, they lack resilient crops, while in other countries unhealthy foods are heavily promoted, she said.

Mwatsama added that unless there are structural changes, such as subsidies that move away from meat production, and environmental changes, such as limits on how much fertilizer can be used, “we won’t see people meeting this target.”

To enable populations to follow the reference diet, the report suggests five strategies, of which subsidies are one option. These fit under a recommendation to ensure good governance of land and ocean systems, for example by prohibiting land clearing and removing subsidies to world fisheries, as they lead to over-capacity of the global fishing fleet.

Second, the report suggests incentivizing farmers to shift food production away from large quantities of a few crops and toward diverse production of nutritious crops.

Healthy food must also be made more accessible; for example, low-income groups should be helped with social protections to avoid continued poor nutrition, the authors suggest, and people should be encouraged to eat healthily through information campaigns.

A fourth strategy suggests that when agriculture is intensified it must take local conditions into account to ensure the best agricultural practices for a region, in turn producing the best crops.

Finally, the team suggests reducing food waste by improving harvest planning and market access in low and middle-income countries, while improving shopping habits of consumers in high-income countries.

Louise Manning, professor of agri-food and supply chain resilience at the Royal Agricultural University, said meeting the food waste reduction target is a “very difficult thing to achieve” because it would require government, communities and individual households to come together.

However, “it can be done,” said Manning, who was not involved in the report, noting the rollback in plastic usage in countries such as the UK.

The planet’s health

The 2015 Paris Climate Agreement aimed to limit global warming to 2 degrees Celsius above pre-industrial levels. Meeting this goal is no longer only about de-carbonizing energy systems by reducing fossil fuels; it’s also about a food transition, said Johan Rockström, professor of environmental science at the Stockholm Resilience Centre, Stockholm University, in Sweden, who co-led the study.

“This is urgent,” he said. Without global adaptation of the reference diet, the world “will not succeed with the Paris Climate Agreement.”

A sustainable food production system requires greenhouse gas emissions other than carbon dioxide, such as methane and nitrous oxide, to be limited; yet methane is produced during digestion in livestock, and nitrous oxide is released from croplands and pastures. The authors believe these emissions are unavoidable if healthy food is to be provided for 10 billion people, and they highlight that decarbonization of the world’s energy system must therefore progress faster than anticipated to accommodate them.

Overall, ensuring a healthy population and planet requires combining all strategies, the report concludes — major dietary change, improved food production and technology changes, as well as reduced food waste.

“Designing and operationalising sustainable food systems that can deliver healthy diets for a growing and wealthier world population presents a formidable challenge. Nothing less than a new global agricultural revolution,” said Rockström, adding that “the solutions do exist.

“It is about behavioral change. It’s about technologies. It’s about policies. It’s about regulations. But we know how to do this.”

https://www.cnn.com/2019/01/16/health/new-diet-to-save-lives-and-planet-health-study-intl/index.html

by Isobel Asher Hamilton

– China’s state press agency has developed what it calls “AI news anchors,” avatars of real-life news presenters that read out news as it is typed.

– It developed the anchors with the Chinese search-engine giant Sogou.

– No details were given as to how the anchors were made, and one expert said they fell into the “uncanny valley,” in which avatars have an unsettling resemblance to humans.

China’s state-run press agency, Xinhua, has unveiled what it claims are the world’s first news anchors generated by artificial intelligence.

Xinhua revealed two virtual anchors at the World Internet Conference on Thursday. Both were modeled on real presenters: one speaks Chinese, the other English.

“AI anchors have officially become members of the Xinhua News Agency reporting team,” Xinhua told the South China Morning Post. “They will work with other anchors to bring you authoritative, timely, and accurate news information in both Chinese and English.”

In a post, Xinhua said the generated anchors could work “24 hours a day” on its website and various social-media platforms, “reducing news production costs and improving efficiency.”

Xinhua developed the virtual anchors with Sogou, China’s second-biggest search engine. No details were given about how they were made.

Though Xinhua presents the avatars as independently learning from “live broadcasting videos,” the avatars do not appear to rely on true artificial intelligence, as they simply read text written by humans.

“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted,” the English-speaking anchor says in its first video, using a synthesized voice.

The Oxford computer-science professor Michael Wooldridge told the BBC that the anchor fell into the “uncanny valley,” in which avatars or objects that closely but do not fully resemble humans make observers more uncomfortable than ones that are more obviously artificial.

https://www.businessinsider.com/ai-news-anchor-created-by-china-xinhua-news-agency-2018-11


Researchers at the University of Minnesota use a customized 3D printer to print electronics on a real hand. Image: McAlpine group, University of Minnesota

Soldiers are commonly thrust into situations where the danger is the unknown: Where is the enemy, how many are there, what weaponry is being used? The military already uses a mix of technology to help answer those questions quickly, and another may be on its way. Researchers at the University of Minnesota have developed a low-cost 3D printer that prints sensors and electronics directly on skin. The development could allow soldiers to directly print temporary, disposable sensors on their hands to detect such things as chemical or biological agents in the field.

The technology also could be used in medicine. The Minnesota researchers successfully used bioink with the device to print cells directly on the wounds of a mouse. Researchers believe it could eventually provide new methods of faster and more efficient treatment, or direct printing of grafts for skin wounds or conditions.

“The concept was to go beyond smart materials, to integrate them directly on to skin,” says Michael McAlpine, professor of mechanical engineering whose research group focuses on 3D printing functional materials and devices. “It is a biological merger with electronics. We wanted to push the limits of what a 3D printer can do.”

McAlpine calls it a very simple idea: “One of those ideas so simple, it turns out no one has done it.”

Others have used 3D printers to print electronics and biological cells. But printing on skin presented a few challenges. No matter how hard a person tries to remain still, there always will be some movement during the printing process. “If you put a hand under the printer, it is going to move,” he says.

To adjust for that, the printer the Minnesota team developed uses a machine vision algorithm written by Ph.D. student Zhijie Zhu to track the motion of the hand in real time while printing. Temporary markers are placed on the skin, which then is scanned. The printer tracks the hand using the markers and adjusts in real time to any movement. That allows the printed electronics to maintain a circuit shape. The printed device can be peeled off the skin when it is no longer needed.
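A minimal sketch of the compensation idea, assuming the printer and camera share a 2D coordinate frame and the hand’s motion between frames is a pure translation (the team’s actual machine vision algorithm is more sophisticated and runs in real time):

```python
# Hedged sketch of marker-based motion compensation: estimate how far the
# skin markers have drifted since calibration, then shift the planned
# toolpath by the same amount. Coordinates and drift values are made up.

def estimate_offset(ref_markers, current_markers):
    """Average displacement of the skin markers since calibration."""
    n = len(ref_markers)
    dx = sum(c[0] - r[0] for r, c in zip(ref_markers, current_markers)) / n
    dy = sum(c[1] - r[1] for r, c in zip(ref_markers, current_markers)) / n
    return dx, dy

def adjust_toolpath(toolpath, offset):
    """Shift every planned print coordinate by the hand's current offset."""
    dx, dy = offset
    return [(x + dx, y + dy) for x, y in toolpath]

ref = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]        # markers at calibration
now = [(1.0, 0.5), (11.0, 0.5), (1.0, 10.5)]        # hand drifted by (1, 0.5)
offset = estimate_offset(ref, now)
path = adjust_toolpath([(2.0, 2.0), (3.0, 2.0)], offset)
```

Re-running this estimate on every camera frame is what lets the printed trace hold its circuit shape even as the hand moves.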

The team also needed to develop a special ink that could not only be conductive but also print and cure at room temperature. Standard 3D-printing inks cure at high temperatures of around 212 °F (100 °C) and would burn skin.

In a paper recently published in Advanced Materials, the team identified three criteria for conductive inks: The viscosity of the ink should be tunable while maintaining self-supporting structures; the ink solvent should evaporate quickly so the device becomes functional on the same timescale as the printing process; and the printed electrodes should become highly conductive under ambient conditions.

The solution was an ink that uses silver flakes, rather than the particles more commonly used in other applications, to provide conductivity. Fibers were found to be too large and to cure only at high temperatures. The flakes are aligned by shear forces during printing, and the addition of ethanol to the mix speeds evaporation, allowing the ink to cure quickly at room temperature.

“Printing electronics directly on skin would have been a breakthrough in itself, but when you add all of these other components, this is big,” McAlpine says.

The printer is portable, lightweight, and costs less than $400. It consists of a delta robot, monitor cameras for long-distance observation of printing states, and tracking cameras mounted for precise localization of the surface. The team added a syringe-type nozzle to squeeze and deliver the ink.

Furthering the printer’s versatility, McAlpine’s team worked with staff from the university’s medical school and hospital to print skin cells directly on a skin wound of a mouse. The mouse was anesthetized, but still moved slightly during the procedure, he says. The initial success makes the team optimistic that it could open up a new method of treating skin diseases.

“Think about what the applications could be,” McAlpine says. “A soldier in the field could take the printer out of a pack and print a solar panel. On the cellular side, you could bring a printer to the site of an accident and print cells directly on wounds, speeding the treatment. Eventually, you may be able to print biomedical devices within the body.”

In its paper, the team suggests that devices can be “autonomously fabricated without the need for microfabrication facilities in freeform geometries that are actively adaptive to target surfaces in real time, driven by advances in multifunctional 3D printing technologies.”

Besides the ability to print directly on skin, McAlpine says the work may offer advantages over other skin electronic devices. For example, soft, thin, stretchable patches that stick to the skin have been fitted with off-the-shelf chip-based electronics for monitoring a patient’s health. They stick to skin like a temporary tattoo and send updates wirelessly to a computer.

“The advantage of our approach is that you don’t have to start with electronic wafers made in a clean room,” McAlpine says. “This is a completely new paradigm for printing electronics using 3D printing.”

http://www.asme.org/engineering-topics/articles/bioengineering/researchers-3d-print-skin-breakthrough

What if we could edit the sensations we feel: paste into our brain pictures that we never saw, cut out unwanted pain or insert non-existent scents into memory?

UC Berkeley neuroscientists are building the equipment to do just that, using holographic projection into the brain to activate or suppress dozens and ultimately thousands of neurons at once, hundreds of times each second, copying real patterns of brain activity to fool the brain into thinking it has felt, seen or sensed something.

The goal is to read neural activity constantly and decide, based on the activity, which sets of neurons to activate to simulate the pattern and rhythm of an actual brain response, so as to replace lost sensations after peripheral nerve damage, for example, or control a prosthetic limb.

“This has great potential for neural prostheses, since it has the precision needed for the brain to interpret the pattern of activation. If you can read and write the language of the brain, you can speak to it in its own language and it can interpret the message much better,” said Alan Mardinly, a postdoctoral fellow in the UC Berkeley lab of Hillel Adesnik, an assistant professor of molecular and cell biology. “This is one of the first steps in a long road to develop a technology that could be a virtual brain implant with additional senses or enhanced senses.”

Mardinly is one of three first authors of a paper appearing online April 30 in advance of publication in the journal Nature Neuroscience that describes the holographic brain modulator, which can activate up to 50 neurons at once in a three-dimensional chunk of brain containing several thousand neurons, and repeat that up to 300 times a second with different sets of 50 neurons.

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute, who was not involved in the research project. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

Holographic projection

Each of the 2,000 to 3,000 neurons in the chunk of brain was outfitted with a protein that, when hit by a flash of light, turns the cell on to create a brief spike of activity. One of the key breakthroughs was finding a way to target each cell individually without hitting them all at once.

To focus the light onto just the cell body — a target smaller than the width of a human hair — of nearly all cells in a chunk of brain, they turned to computer-generated holography, a method of bending and focusing light to form a three-dimensional spatial pattern. The effect is as if a 3D image were floating in space.

In this case, the holographic image was projected into a thin layer of brain tissue at the surface of the cortex, about a tenth of a millimeter thick, through a clear window into the brain.

“The major advance is the ability to control neurons precisely in space and time,” said postdoc Nicolas Pégard, another first author who works both in Adesnik’s lab and the lab of co-author Laura Waller, an associate professor of electrical engineering and computer sciences. “In other words, to shoot the very specific sets of neurons you want to activate and do it at the characteristic scale and the speed at which they normally work.”

The researchers have already tested the prototype in the touch, vision and motor areas of the brains of mice as they walk on a treadmill with their heads immobilized. While they have not noted any behavior changes in the mice when their brain is stimulated, Mardinly said that their brain activity — which is measured in real-time with two-photon imaging of calcium levels in the neurons — shows patterns similar to a response to a sensory stimulus. They’re now training mice so they can detect behavior changes after stimulation.

Prosthetics and brain implants

The area of the brain covered — now a slice one-half millimeter square and one-tenth of a millimeter thick — can be scaled up to read from and write to more neurons in the brain’s outer layer, or cortex, Pégard said. And the laser holography setup could eventually be miniaturized to fit in a backpack a person could haul around.

Mardinly, Pégard and the other first author, postdoc Ian Oldenburg, constructed the holographic brain modulator by making technological advances in a number of areas. Mardinly and Oldenburg, together with Savitha Sridharan, a research associate in the lab, developed better optogenetic switches to insert into cells to turn them on and off. The switches — light-activated ion channels on the cell surface that open briefly when triggered — turn on strongly and then quickly shut off, all in about 3 milliseconds, so they’re ready to be re-stimulated up to 50 or more times per second, consistent with normal firing rates in the cortex.

Pégard developed the holographic projection system using a liquid crystal screen that acts like a holographic negative to sculpt the light from 40W lasers into the desired 3D pattern. The lasers are pulsed in 300 femtosecond-long bursts every microsecond. He, Mardinly, Oldenburg and their colleagues published a paper last year describing the device, which they call 3D-SHOT, for three-dimensional scanless holographic optogenetics with temporal focusing.

“This is the culmination of technologies that researchers have been working on for a while, but have been impossible to put together,” Mardinly said. “We solved numerous technical problems at the same time to bring it all together and finally realize the potential of this technology.”

As they improve their technology, they plan to start capturing real patterns of activity in the cortex in order to learn how to reproduce sensations and perceptions to play back through their holographic system.

Reference:
Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., . . . Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature Neuroscience. doi:10.1038/s41593-018-0139-8

https://www.technologynetworks.com/neuroscience/news/using-holography-to-activate-the-brain-300329


Arnav Kapur, a researcher in the Fluid Interfaces group at the MIT Media Lab, demonstrates the AlterEgo project. Image: Lorrie Lejeune/MIT

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The researchers describe their device in a paper they presented at the Association for Computing Machinery’s Intelligent User Interface conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.

Subtle signals

The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or “subvocalization,” as it’s known.

But subvocalization as a computer interface is largely unexplored. The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.

But in current experiments, the researchers are getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.
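One way such an electrode-selection step could work, shown here only as a hedged illustration with made-up numbers, is to rank each site by how cleanly its signal separates the subvocalized words, for instance with a Fisher-style ratio of between-word to within-word variance (the researchers’ actual criterion may differ):

```python
# Illustrative electrode ranking: score each electrode site by how well
# its signal distinguishes words. Data values are invented; the paper's
# actual selection procedure is not specified here.

from statistics import mean, pvariance

def separability(signals_by_word):
    """Between-word variance over within-word variance for one electrode.

    signals_by_word maps each word to the signal values recorded at this
    electrode while that word was subvocalized.
    """
    word_means = [mean(v) for v in signals_by_word.values()]
    between = pvariance(word_means)
    within = mean(pvariance(v) for v in signals_by_word.values())
    return between / (within + 1e-9)   # guard against zero variance

# Two hypothetical electrodes recorded over two subvocalized words.
electrode_a = {"left": [1.0, 1.1, 0.9], "right": [3.0, 3.1, 2.9]}  # clean split
electrode_b = {"left": [1.0, 3.0, 2.0], "right": [2.1, 0.9, 3.1]}  # mostly noise
scores = {"a": separability(electrode_a), "b": separability(electrode_b)}
ranked = sorted(scores, key=scores.get, reverse=True)
```

Keeping only the top-ranked sites is what would let a headset shrink from 16 electrodes to the handful that carry most of the signal.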

Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies — about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.

Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, whose nodes process them and pass them on, and so on. The output of the final layer is the result of the classification task.
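The layered pass described above can be sketched as follows. The weights, two-word vocabulary, and two-feature “signal” are invented for illustration; a real system would learn its weights from recorded neuromuscular data:

```python
# Toy forward pass of a layered classifier: the signal enters the bottom
# layer, each layer transforms it and passes it up, and the top layer
# scores each candidate word. All weights here are made up.

import math

def relu(v):
    """Simple nonlinearity applied between hidden layers."""
    return [max(0.0, x) for x in v]

def dense(v, weights, biases):
    """One fully connected layer: each output node sums weighted inputs."""
    return [sum(w * x for w, x in zip(row, v)) + b
            for row, b in zip(weights, biases)]

def softmax(v):
    """Turn raw layer outputs into word probabilities."""
    exps = [math.exp(x - max(v)) for x in v]
    total = sum(exps)
    return [e / total for e in exps]

def classify(signal, layers, vocab):
    v = signal
    for i, (w, b) in enumerate(layers):
        v = dense(v, w, b)
        if i < len(layers) - 1:        # hidden layers use the nonlinearity
            v = relu(v)
    probs = softmax(v)
    return vocab[probs.index(max(probs))]

# Toy 2-feature signal, one hidden layer, two-word vocabulary.
layers = [
    ([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0]),   # hidden layer
    ([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0]),     # output layer
]
word = classify([0.9, 0.1], layers, vocab=["three", "times"])
```

The per-user customization the next paragraph mentions corresponds, in a network like this, to re-fitting only the last couple of layers while leaving the earlier ones fixed.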

The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
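Retraining only the last two layers can be sketched as follows. This is a toy: the three-layer network, its sizes, and the single calibration example are all assumptions, but the key move — leaving the first layer's weights frozen while gradient steps update only the final two — mirrors the customization process described.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical "pretrained" network; sizes chosen for illustration.
sizes = [32, 16, 8, 20]
W = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
b = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    h1 = relu(x @ W[0] + b[0])                 # frozen feature layer
    h2 = relu(h1 @ W[1] + b[1])                # retrained layer
    return h1, h2, softmax(h2 @ W[2] + b[2])   # retrained output layer

def calibration_step(x, target, lr=0.1):
    """One SGD step that updates only the last two layers; W[0] never changes."""
    h1, h2, p = forward(x)
    y = np.zeros_like(p)
    y[target] = 1.0
    d_out = p - y                              # softmax cross-entropy gradient
    d_h2 = (d_out @ W[2].T) * (h2 > 0)         # backprop through the ReLU
    W[2] -= lr * np.outer(h2, d_out); b[2] -= lr * d_out
    W[1] -= lr * np.outer(h1, d_h2); b[1] -= lr * d_h2

x = rng.normal(size=32)
before = forward(x)[2][3]
for _ in range(20):
    calibration_step(x, target=3)
after = forward(x)[2][3]
```

After a few calibration steps on a user's own signals, the probability assigned to the target word rises, while the frozen early layers keep the features learned from the pooled training data.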

Practical matters
Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. “We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.”

“I think that they’re a little underselling what I think is a real potential for the work,” says Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”

“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of the time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”

Uber has been sending self-driving trucks on delivery runs across Arizona since November, the first step in what promises to be a freight transportation revolution that could radically reshape the jobs of long-haul truckers.

After testing its technology earlier in 2017, Uber began contracting with trucking companies to use its own autonomous Volvo big rigs to take over loads as they traverse the state, it disclosed.

In Uber’s current program, a trucker meets the self-driving truck at the Arizona state border; the autonomous truck then takes the load across the state before handing it off to a second conventional trucker for the short-haul trip. During the autonomous leg, an Uber employee rides in the driver’s seat of the truck to monitor — but not to drive.

If one day both the technology and regulations play out in favor of self-driving trucks, two scenarios emerge.

The first would find self-driving trucks handling long-haul highway legs with no one at the wheel as they meet up with conventional truckers, who then drive the deliveries into city centers. The other possibility is Uber could sell its technology to trucking owner-operators, who then use it to sleep while the truck handles the bulk of long-distance driving.

Truckers make their money only when their rigs are on the road. They are also limited by law in how much time they can spend behind the wheel, a constraint self-driving trucks could ease. The technology could also enable more round-trip hauls that put a driver back home at the end of the day’s journey.

“The big step for us recently is that we can plan to haul goods in both directions, using Uber Freight to coordinate load pickups and dropoffs with local truckers,” said Alden Woodrow, who leads Uber’s self-driving truck effort. “Keeping trucking local allows these drivers to make money while staying closer to home.”

Uber Freight, which launched last May, is an app that matches shippers with loads using technology drawn from Uber’s ride-hailing app. Typically such trucking logistics have been coordinated through phone calls and emails.

The San Francisco-based company isn’t alone in its pursuit of self-driving truck technology, with start-ups such as Embark joining companies such as Tesla and its new Tesla Semi to carve out a slice of a $700 billion industry that moves 70% of all domestic freight, according to the American Trucking Association.

“Today we’re operating our own trucks, but in the future it remains to be seen what happens,” Woodrow says. “Trucking is a very large and sophisticated business with a lot of companies in the value chain who are good at what they do. So our desire is to partner.”

Uber’s trucks stick to the highway

Uber’s current Arizona pilot program does not feature trucks making end-to-end runs from pickup to delivery because it’s tough to make huge trucks navigate urban traffic on their own.

Instead, Uber’s Volvo trucks receive loads at state border weigh stations. These trucks are equipped with hardware, software and an array of sensors developed by Uber’s Advanced Technologies Group that help the truck make what amounts to a glorified cruise-control run across the state. Uber ATG also is behind ongoing self-driving car testing in Arizona, Pennsylvania and San Francisco.

Uber did not disclose what items it is transporting for which companies.

Once the Uber trucks exit at the next highway hub near the Arizona border, they are met by a different set of truckers, who hitch the trailer to their own cab to finish the delivery.

The idea is that truckers get to go home to their families instead of being on the road. In a video Uber created to tout the program, the company showcases a California trucker who, once at the Arizona border, hands his trailer over to an Uber self-driving truck for its trip east, while picking up a different load that needs to head back to California.

Autonomous vehicles are being pursued by dozens of companies ranging from large automakers to technology start-ups. Slowly, states are adapting their rules to try to be on the front lines of a potential transportation shift.

Michigan, California and Arizona, for example, have been constantly updating their autonomous car testing laws in order to court companies working on such tech. California recently joined Arizona in announcing that it would allow self-driving cars to be tested without a driver at the wheel.

Skeptics of the self-driving gold rush include the Consumer Watchdog Group’s John Simpson, who in a recent letter to lawmakers said “any autonomous vehicle legislation should require a human driver behind a steering wheel capable of taking control.”


Uber refocuses after lawsuit

Uber’s announcement aims to cast a positive light on the company’s trucking efforts and comes a few weeks after it settled a contentious year-old lawsuit brought by Waymo, Google’s self-driving car program.

Waymo’s suit argued that Uber was building light detection and ranging sensors — roof-top lasers that help vehicles interpret their surroundings — based on trade secrets stolen by Anthony Levandowski, who left Waymo to start a self-driving truck company called Otto. Months after its creation in early 2016, Uber bought Otto for around $680 million.

Last year, Travis Kalanick, the Uber CEO who negotiated the deal with Levandowski, was ousted from the company he co-founded after a rash of bad publicity surrounding charges that Uber ran a sexist operation that often skirted the law. Levandowski was fired by Uber after he repeatedly declined to answer questions from Waymo’s lawyers.

In settling the suit, Uber had to give Waymo $245 million in equity, but it did not admit guilt. Uber has long maintained that its LiDAR was built with its own engineering know-how.

“Our trucks do not run on the same self-driving (technology) as Otto trucks did,” says Woodrow. “It’s Uber tech, and we’re improving on it all the time.”

https://www.usatoday.com/story/tech/2018/03/06/uber-trucks-start-shuttling-goods-arizona-no-drivers/397123002/

Thanks to Kebmodee for bringing this to the It’s Interesting community.