
by Nicola Davies, PhD

Robots are infiltrating the field of psychiatry, with experts like Dr Joanne Pransky of the San Francisco Bay area in California advocating for robots to be embraced in the medical field. In this article, Dr Pransky shares some examples of robots that have shown impressive psychiatric applications, as well as her thoughts on giving robots the critical role of delivering healthcare to human beings.

Meet the world’s first robotic psychiatrist

Dr Pransky, who was named the world’s first “robotic psychiatrist” because her patients are robots, said, “In 1986, I said that one day, when robots are as intelligent as humans, they would need assistance in dealing with humans on a day-to-day basis.” She imagines that in the near future it will be normal for families to come to a clinic with their robot to help the robot deal with the emotions it develops as a result of interacting with human beings. She also believes that having a robot as part of the family will reshape human family dynamics.

While Dr Pransky’s expertise may sound like science fiction to some, it illustrates just how interlaced robotics and psychiatry are becoming. With 32 years of experience in robotics, she said technology has come a long way, “to the point where robots are used as therapeutic tools.”

Robots in psychiatry

Dr Pransky cites some cases of robots that have been developed to help people with psychiatric health needs. One example is Paro, a robotic baby harp seal developed by the National Institute of Advanced Industrial Science and Technology (AIST), one of the largest public research organizations in Japan. Paro is used in the care of elderly people with dementia, Alzheimer disease, and other mental conditions.1 It has an appealing physical appearance that helps create a calming effect and encourages emotional responses from people. “The designers found that Paro enhances social interaction and communication. Patients can hold and pet the fur-covered seal, which is equipped with different tactile sensors. The seal can also respond to sounds and learn names, including its own,” said Dr Pransky. In 2009, Paro was certified as a type of neurologic therapeutic device by the US Food and Drug Administration (FDA).

Mabu, which is being developed by the patient care management firm Catalia Health in San Francisco, California, is another example. Mabu is a voice-activated robot designed to provide cognitive behavioral therapy by coaching patients on their daily health needs and sending health data to medical professionals.2 Dr Pransky points out that the team developing Mabu is composed of experts in psychiatry and robotics.

Then there is ElliQ, which was developed by Intuition Robotics in San Francisco to provide a social companion for the elderly. ElliQ is powered by artificial intelligence (AI) to provide personalized advice to senior patients regarding activities that can help them stay engaged, active, and mentally sharp.3 It also provides a communication channel between elderly patients and their loved ones.

Besides small robot assistants, however, robotics technology is also being integrated into current medical devices, such as the TMS-Robot from Axilum Robotics (Strasbourg, France), which assists with transcranial magnetic stimulation (TMS). TMS is a painless, non-invasive brain stimulation technique performed in patients with major depression and other neurologic diseases.4 TMS is usually performed manually, but the TMS-Robot automates the procedure, providing more accuracy for patients while saving the operator from performing a repetitive and painful task.

Chatbots are another way in which robotics technology is providing care to psychiatric patients. Using AI and a conversational user interface, chatbots interact with individuals in a human-like manner. For example, Woebot (Woebot Labs, Inc, San Francisco), which runs in Facebook Messenger, converses with users to monitor their mood, make assessments, and recommend psychological treatments.5
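
Woebot’s underlying models are proprietary, but the general shape of such a conversational mood check-in can be sketched. The toy Python loop below is purely illustrative: the keywords, prompts, and scoring threshold are invented for the example and are not Woebot’s actual logic.

```python
# Toy sketch of a chatbot-style mood check-in. Keywords, prompts, and the
# scoring threshold are invented for illustration; this is not Woebot's logic.

NEGATIVE_WORDS = {"sad", "anxious", "hopeless", "stressed", "lonely"}

def assess_mood(reply: str) -> int:
    """Crude mood assessment: count negative keywords in the user's reply."""
    return sum(word.strip(".,!?") in NEGATIVE_WORDS
               for word in reply.lower().split())

def check_in() -> None:
    reply = input("How are you feeling today? ")
    score = assess_mood(reply)
    if score >= 2:
        print("That sounds hard. Want to try a short breathing exercise?")
    elif score == 1:
        print("Thanks for sharing. What do you think is behind that feeling?")
    else:
        print("Glad to hear it. Anything you'd like to reflect on?")

if __name__ == "__main__":
    check_in()
```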

Will robots replace psychiatrists?

Robotics has started to become an integral part of mental health treatment and management. Yet critics say there are potential negative side-effects and safety issues in incorporating robotics technology too far into human lives. For instance, over-reliance on robots may have social and legal implications and may encroach on human dignity.6 These issues can be distinctly problematic in the field of psychiatry, in which patients share highly emotional and sensitive personal information. Dr Pransky herself has worked on films such as Ender’s Game and Eagle Eye, which depict the risks posed to humans by robots with excessive control and intelligence.

However, Dr Pransky points out that robots are meant to supplement, not supplant, and to facilitate physicians’ work, not replace them. “I think there will be therapeutic success for robotics, but there’s nothing like the understanding of the human experience by a qualified human being. Robotics should extend and augment what a psychiatrist can do,” she said. “It’s not the technology I would worry about but the people developing and using it. Robotics needs to be safe, so we have to design safe,” she added, explaining that emotional and psychological safety should be key components of the design.

Who stands to benefit from robotics in psychiatry?

Dr Pransky explains that robots can help address psychiatric issues that a psychiatrist may be unable to address with traditional techniques and tools: “The greatest benefit of robotics use will be in filling gaps. For example, for people who are not comfortable or available to talk about their problems with another human being, a robotic tool can be a therapeutic asset or a diagnostic tool.”

An interesting example of a robot that could be used to fill gaps in psychiatric care is the robot used in BlabDroid, a 2012 documentary created by Alex Reben at the MIT Media Lab for his Master’s thesis, the first documentary ever filmed and directed by robots. The robot interviewed strangers on the streets of New York City,7 and people surprisingly opened up to it. “Some humans are better off with something they feel is non-threatening,” said Dr Pransky.

https://www.psychiatryadvisor.com/practice-management/the-robot-will-see-you-now-the-increasing-role-of-robotics-in-psychiatric-care/article/828253/2/


– The Japanese startup Attuned devised a 55-question test for companies to give their employees to find out exactly what motivates them.

– The test uses AI to score each employee by how much they are motivated by competition, autonomy, feedback, financial needs, and seven other values.

– Companies are paying thousands of dollars to use the service, which can also track when workers are becoming less motivated over time.

If you’ve ever led a team at work before, you know how hard it can be to keep people motivated.

But one Japanese startup is using technology to make that easier than ever.

The Tokyo-based company Attuned offers what it calls “predictive HR analytics” to help companies understand what makes each of their employees tick. And companies in Japan are paying thousands of dollars for the chance to get a better read on their workers.

It’s a simple process: When a company signs on with Attuned, its employees take a 55-question online test in which they’re presented with pairs of statements, such as “Planning my day in advance gives me a sense of security,” and “I prefer to be able to decide which task to focus on at any given time.” The test-taker must choose which of the two statements applies to them better, and whether they “strongly prefer” it, “prefer” it, or just “somewhat prefer” it.

Once the test is complete, Attuned churns out a unique “motivational profile” scoring each employee in 11 key human values, including “competition,” “feedback,” “autonomy,” “security,” and “financial needs.”

Areas in which the employee scores particularly high are labeled “need to have” motivators for that person, while lower scores indicate “nice to have” or “neutral” motivators. How each employee scores in certain areas can clue managers in to what kinds of work environments they’ll thrive in and what will keep them motivated, Casey Wahl, the American founder of Attuned, said.
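
Attuned has not disclosed how its AI turns answers into scores, but the basic mechanics of a pairwise preference test can be illustrated. In the minimal Python sketch below, the answer format, the weights, and the cutoffs separating “need to have” from “nice to have” motivators are all hypothetical assumptions, not Attuned’s actual model.

```python
from collections import defaultdict

# Hypothetical scoring of pairwise preference answers into a motivational
# profile. Value names, weights, and cutoffs are illustrative assumptions.

STRENGTH_WEIGHT = {"strongly prefer": 3, "prefer": 2, "somewhat prefer": 1}

def build_profile(answers):
    """answers: (chosen_value, rejected_value, strength) per question,
    where each value is one of the 11 motivators, e.g. 'autonomy'."""
    scores = defaultdict(int)
    for chosen, rejected, strength in answers:
        weight = STRENGTH_WEIGHT[strength]
        scores[chosen] += weight      # boost the value the chosen statement reflects
        scores[rejected] -= weight    # penalize the one traded off against it
    return dict(scores)

def label(score, high=10, low=0):
    """Illustrative cutoffs for 'need to have' vs 'nice to have' motivators."""
    if score >= high:
        return "need to have"
    return "nice to have" if score > low else "neutral"

answers = [
    ("security", "autonomy", "prefer"),
    ("autonomy", "competition", "strongly prefer"),
]
profile = build_profile(answers)
print({value: (score, label(score)) for value, score in profile.items()})
```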

“Maybe it’s, ‘Hey, you want to have drinks on a Friday night?’ if socialization is important for you,” Wahl told Business Insider. “For somebody else it’s different. Maybe it’s a financial incentive, or maybe, say, ‘OK, if you nail this product, you can have more autonomy; you can run this project that you’ve been wanting to do for a while.’”

The technology can also help managers find common ground with their workers. Wahl recalled an employee of his who took issue with the location of Attuned’s office on a Tokyo backstreet instead of a more popular, high-trafficked area. As it turned out, the employee had scored high in the “status” category, suggesting a need to work for a well-known brand or in a position of prestige.

“This is something where, because I don’t value it, I can’t give her what she wants easily,” Wahl said. “Now that I see this, I can say, OK, she’s coming from this point of view. So it’s going to take a lot of the emotion and everything out of it.”

Attuned charges $1,960 for a basic yearly subscription, with prices varying based on the size of the company. The subscription also includes “pulse surveys” — short, 30-second follow-up quizzes that employees take every two weeks to see how their motivators change over time. Attuned uses AI to tailor the surveys to the individual based on answers they’ve previously given.

Wahl says the surveys can identify faster than ever when workers are feeling less motivated, allowing managers to act before the workers get frustrated and leave the company.

At the hiring level, the technology can also predict which departments a prospective employee might be well-suited for, say, if they’re motivated by high competition or require a lot of autonomy.

And it can help hiring managers recognize if someone might not be a good fit at all. Wahl said that after one client started screening potential hires with the Attuned test, its “mis-hire” rate — the percentage of new hires who left the company within six months — dropped from 35% to 8%.

“Management, up until now, has been art,” Wahl told Business Insider. “And we’re bringing some science to it.”

https://www.businessinsider.com/employee-motivation-survey-attuned-japan-startup-2019-1

By Nina Avramova

An international team of scientists has developed a diet it says can improve health while ensuring sustainable food production to reduce further damage to the planet.

The “planetary health diet” is based on cutting red meat and sugar consumption in half and upping intake of fruits, vegetables and nuts.

And it can prevent up to 11.6 million premature deaths without harming the planet, says the report published Wednesday in the medical journal The Lancet.

The authors warn that a global change in diet and food production is needed as 3 billion people across the world are malnourished — which includes those who are under- and overnourished — and food production is overstepping environmental targets, driving climate change, biodiversity loss and pollution.

The world’s population is set to reach 10 billion people by 2050; that growth, plus our current diet and food production habits, will “exacerbate risks to people and planet,” according to the authors.

“The stakes are very high,” Dr. Richard Horton, editor in chief at The Lancet, said of the report’s findings, noting that 1 billion people live in hunger and 2 billion people eat too much of the wrong foods.

Horton believes that “nutrition has still failed to get the kind of political attention that is given to diseases such as AIDS, tuberculosis, malaria.”

“Using best available evidence” from controlled feeding studies, randomized trials and large cohort studies, the authors came up with a new recommendation, explained Dr. Walter Willett, lead author of the paper and a professor of epidemiology and nutrition at the Harvard T.H. Chan School of Public Health.

The report suggests five strategies to ensure people can change their diets and not harm the planet in doing so: incentivizing people to eat healthier, shifting global production toward varied crops, intensifying agriculture sustainably, stricter rules around the governing of oceans and lands, and reducing food waste.

The ‘planetary health diet’

To enable a healthy global population, the team of scientists created a global reference diet that they call the “planetary health diet”: an ideal daily meal plan for people over the age of 2 that they believe will help reduce chronic diseases such as coronary heart disease, stroke and diabetes, as well as environmental degradation.

The diet breaks down the optimal daily intake of whole grains, starchy vegetables, fruit, dairy, protein, fats and sugars, representing a total daily calorie intake of 2,500.

The authors recognize the difficulty of the task, which will require “substantial” dietary shifts on a global level: consumption of foods such as red meat and sugar will need to decrease by more than 50%, while consumption of nuts, fruits, vegetables, and legumes must increase more than two-fold, the report says.

The diet advises people consume 2,500 calories per day, which is slightly more than what people are eating today, said Willett. People should eat a “variety of plant-based foods, low amounts of animal-based foods, unsaturated rather than saturated fats, and few refined grains, highly processed foods and added sugars,” he said.

Regional differences are also important to note. For example, countries in North America eat almost 6.5 times the recommended amount of red meat, while countries in South Asia eat 1.5 times the required amount of starchy vegetables.

“Almost all of the regions in the world are exceeding quite substantially” the recommended levels of red meat, Willett said.

The health and environmental benefits of dietary changes like these are known, “but, until now, the challenge of attaining healthy diets from a sustainable food system has been hampered by a lack of science-based guidelines,” said Howard Frumkin, head of UK biomedical research charity The Wellcome Trust’s Our Planet Our Health program. The Wellcome Trust funded the research.

“It provides governments, producers and individuals with an evidence-based starting point to work together to transform our food systems and cultures,” he said.

If the new diet were adopted globally, 10.9 to 11.6 million premature deaths could be avoided every year — equating to 19% to 23.6% of adult deaths. A reduction in sodium and an increase in whole grains, nuts, vegetables and fruits contributed the most to the prevention of deaths, according to one of the report’s models.

Making it happen

Some scientists are skeptical of whether shifting the global population to this diet can be achieved.

The recommended diet “is quite a shock,” in terms of how feasible it is and how it should be implemented, said Alan Dangour, professor in food and nutrition for global health at the London School of Hygiene and Tropical Medicine. What “immediately makes implementation quite difficult” is the fact that cross-government departments need to work together, he said. Dangour was not involved in the report.

At the current level of food production, the reference diet is not achievable, said Modi Mwatsama, senior science lead (food systems, nutrition and health) at the Wellcome Trust. Some countries are not able to grow enough food because they lack, for example, resilient crops, while in other countries, unhealthy foods are heavily promoted, she said.

Mwatsama added that unless there are structural changes, such as subsidies that move away from meat production, and environmental changes, such as limits on how much fertilizer can be used, “we won’t see people meeting this target.”

To enable populations to follow the reference diet, the report suggests five strategies, of which subsidies are one option. These fit under a recommendation to ensure good governance of land and ocean systems, for example by prohibiting land clearing and removing subsidies to world fisheries, as they lead to over-capacity of the global fishing fleet.

Second, the report outlines incentives for farmers to shift food production away from large quantities of a few crops to diverse production of nutritious crops.

Healthy food must also be made more accessible; for example, low-income groups should be helped with social protections to avoid continued poor nutrition, the authors suggest, and people should be encouraged to eat healthily through information campaigns.

A fourth strategy suggests that when agriculture is intensified it must take local conditions into account to ensure the best agricultural practices for a region, in turn producing the best crops.

Finally, the team suggests reducing food waste by improving harvest planning and market access in low and middle-income countries, while improving shopping habits of consumers in high-income countries.

Louise Manning, professor of agri-food and supply chain resilience at the Royal Agricultural University, said meeting the food waste reduction target is a “very difficult thing to achieve” because it would require government, communities and individual households to come together.

However, “it can be done,” said Manning, who was not involved in the report, noting the rollback in plastic usage in countries such as the UK.

The planet’s health

The 2015 Paris Climate Agreement aimed to limit global warming to 2 degrees Celsius above pre-industrial levels. Meeting this goal is no longer only about de-carbonizing energy systems by reducing fossil fuels; it’s also about a food transition, said Johan Rockström, professor of environmental science at the Stockholm Resilience Centre, Stockholm University, in Sweden, who co-led the study.

“This is urgent,” he said. Without global adaptation of the reference diet, the world “will not succeed with the Paris Climate Agreement.”

A sustainable food production system requires limiting non-CO2 greenhouse gas emissions such as methane and nitrous oxide, yet methane is produced by livestock during digestion and nitrous oxide is released from croplands and pastures. The authors believe some of these emissions are unavoidable if healthy food is to be provided for 10 billion people, and they highlight that decarbonisation of the world’s energy system must therefore progress faster than anticipated to accommodate this.

Overall, ensuring a healthy population and planet requires combining all strategies, the report concludes — major dietary change, improved food production and technology changes, as well as reduced food waste.

“Designing and operationalising sustainable food systems that can deliver healthy diets for a growing and wealthier world population presents a formidable challenge. Nothing less than a new global agricultural revolution,” said Rockström, adding that “the solutions do exist.”

“It is about behavioral change. It’s about technologies. It’s about policies. It’s about regulations. But we know how to do this.”

https://www.cnn.com/2019/01/16/health/new-diet-to-save-lives-and-planet-health-study-intl/index.html

by Isobel Asher Hamilton

– China’s state press agency has developed what it calls “AI news anchors,” avatars of real-life news presenters that read out news as it is typed.

– It developed the anchors with the Chinese search-engine giant Sogou.

– No details were given as to how the anchors were made, and one expert said they fell into the “uncanny valley,” in which avatars have an unsettling resemblance to humans.

China’s state-run press agency, Xinhua, has unveiled what it claims are the world’s first news anchors generated by artificial intelligence.

Xinhua revealed two virtual anchors at the World Internet Conference on Thursday. Both were modeled on real presenters, one speaking Chinese and the other English.

“AI anchors have officially become members of the Xinhua News Agency reporting team,” Xinhua told the South China Morning Post. “They will work with other anchors to bring you authoritative, timely, and accurate news information in both Chinese and English.”

In a post, Xinhua said the generated anchors could work “24 hours a day” on its website and various social-media platforms, “reducing news production costs and improving efficiency.”

Xinhua developed the virtual anchors with Sogou, China’s second-biggest search engine. No details were given about how they were made.

Though Xinhua presents the avatars as independently learning from “live broadcasting videos,” the avatars do not appear to rely on true artificial intelligence, as they simply read text written by humans.

“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted,” the English-speaking anchor says in its first video, using a synthesized voice.

The Oxford computer-science professor Michael Wooldridge told the BBC that the anchor fell into the “uncanny valley,” in which avatars or objects that closely but do not fully resemble humans make observers more uncomfortable than ones that are more obviously artificial.

https://www.businessinsider.com/ai-news-anchor-created-by-china-xinhua-news-agency-2018-11


Researchers at the University of Minnesota use a customized 3D printer to print electronics on a real hand. Image: McAlpine group, University of Minnesota

Soldiers are commonly thrust into situations where the danger is the unknown: Where is the enemy, how many are there, what weaponry is being used? The military already uses a mix of technology to help answer those questions quickly, and another may be on its way. Researchers at the University of Minnesota have developed a low-cost 3D printer that prints sensors and electronics directly on skin. The development could allow soldiers to directly print temporary, disposable sensors on their hands to detect such things as chemical or biological agents in the field.

The technology also could be used in medicine. The Minnesota researchers successfully used bioink with the device to print cells directly on the wounds of a mouse. Researchers believe it could eventually provide new methods of faster and more efficient treatment, or direct printing of grafts for skin wounds or conditions.

“The concept was to go beyond smart materials, to integrate them directly on to skin,” says Michael McAlpine, professor of mechanical engineering whose research group focuses on 3D printing functional materials and devices. “It is a biological merger with electronics. We wanted to push the limits of what a 3D printer can do.”

McAlpine calls it a very simple idea: “One of those ideas so simple, it turns out no one has done it.”

Others have used 3D printers to print electronics and biological cells. But printing on skin presented a few challenges. No matter how hard a person tries to remain still, there always will be some movement during the printing process. “If you put a hand under the printer, it is going to move,” he says.

To adjust for that, the printer the Minnesota team developed uses a machine vision algorithm written by Ph.D. student Zhijie Zhu to track the motion of the hand in real time while printing. Temporary markers are placed on the skin, which then is scanned. The printer tracks the hand using the markers and adjusts in real time to any movement. That allows the printed electronics to maintain a circuit shape. The printed device can be peeled off the skin when it is no longer needed.
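
The article does not detail the team’s algorithm, but the underlying idea of marker-based motion compensation can be sketched with standard computer vision tools. In the rough Python/OpenCV illustration below, the marker coordinates and toolpath are made up; it simply estimates how the tracked markers have moved since the scan and remaps the planned print path accordingly.

```python
import numpy as np
import cv2

# Minimal sketch of motion compensation for printing on a moving surface:
# estimate how tracked skin markers have moved since the scan, then remap
# the planned toolpath. This illustrates the general idea only; the
# Minnesota team's actual real-time algorithm is more sophisticated.

# Marker positions (pixels) at scan time and in the current camera frame.
markers_at_scan = np.float32([[100, 100], [400, 110], [390, 380], [95, 370]])
markers_now     = np.float32([[112, 105], [410, 118], [398, 392], [104, 381]])

# Estimate a homography describing the surface motion between the two states.
H, _ = cv2.findHomography(markers_at_scan, markers_now)

# Planned electrode path, expressed in scan-time coordinates.
toolpath = np.float32([[150, 150], [200, 150], [250, 150]]).reshape(-1, 1, 2)

# Warp the path into current coordinates so the nozzle follows the hand.
compensated = cv2.perspectiveTransform(toolpath, H).reshape(-1, 2)
print(compensated)
```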

The team also needed to develop a special ink that could not only be conductive but print and cure at room temperature. Standard 3D printing inks cure at high temperatures of 212 °F and would burn skin.

In a paper recently published in Advanced Materials, the team identified three criteria for conductive inks: The viscosity of the ink should be tunable while maintaining self-supporting structures; the ink solvent should evaporate quickly so the device becomes functional on the same timescale as the printing process; and the printed electrodes should become highly conductive under ambient conditions.

The solution was an ink using silver flakes, rather than the particles more commonly used in other applications, to provide conductivity. Fibers were found to be too large and to require high curing temperatures. The flakes are aligned by shear forces during printing, and the addition of ethanol to the mix increases the speed of evaporation, allowing the ink to cure quickly at room temperature.

“Printing electronics directly on skin would have been a breakthrough in itself, but when you add all of these other components, this is big,” McAlpine says.

The printer is portable, lightweight, and costs less than $400. It consists of a delta robot, monitor cameras for long-distance observation of printing states, and tracking cameras mounted for precise localization of the surface. The team added a syringe-type nozzle to squeeze and deliver the ink.

Furthering the printer’s versatility, McAlpine’s team worked with staff from the university’s medical school and hospital to print skin cells directly on a skin wound of a mouse. The mouse was anesthetized, but still moved slightly during the procedure, he says. The initial success makes the team optimistic that it could open up a new method of treating skin diseases.

“Think about what the applications could be,” McAlpine says. “A soldier in the field could take the printer out of a pack and print a solar panel. On the cellular side, you could bring a printer to the site of an accident and print cells directly on wounds, speeding the treatment. Eventually, you may be able to print biomedical devices within the body.”

In its paper, the team suggests that devices can be “autonomously fabricated without the need for microfabrication facilities in freeform geometries that are actively adaptive to target surfaces in real time, driven by advances in multifunctional 3D printing technologies.”

Besides the ability to print directly on skin, McAlpine says the work may offer advantages over other skin electronic devices. For example, soft, thin, stretchable patches that stick to the skin have been fitted with off-the-shelf chip-based electronics for monitoring a patient’s health. They stick to skin like a temporary tattoo and send updates wirelessly to a computer.

“The advantage of our approach is that you don’t have to start with electronic wafers made in a clean room,” McAlpine says. “This is a completely new paradigm for printing electronics using 3D printing.”

http://www.asme.org/engineering-topics/articles/bioengineering/researchers-3d-print-skin-breakthrough

What if we could edit the sensations we feel: paste into our brains pictures that we never saw, cut out unwanted pain, or insert non-existent scents into memory?

UC Berkeley neuroscientists are building the equipment to do just that, using holographic projection into the brain to activate or suppress dozens and ultimately thousands of neurons at once, hundreds of times each second, copying real patterns of brain activity to fool the brain into thinking it has felt, seen or sensed something.

The goal is to read neural activity constantly and decide, based on the activity, which sets of neurons to activate to simulate the pattern and rhythm of an actual brain response, so as to replace lost sensations after peripheral nerve damage, for example, or control a prosthetic limb.

“This has great potential for neural prostheses, since it has the precision needed for the brain to interpret the pattern of activation. If you can read and write the language of the brain, you can speak to it in its own language and it can interpret the message much better,” said Alan Mardinly, a postdoctoral fellow in the UC Berkeley lab of Hillel Adesnik, an assistant professor of molecular and cell biology. “This is one of the first steps in a long road to develop a technology that could be a virtual brain implant with additional senses or enhanced senses.”

Mardinly is one of three first authors of a paper appearing online April 30 in advance of publication in the journal Nature Neuroscience that describes the holographic brain modulator, which can activate up to 50 neurons at once in a three-dimensional chunk of brain containing several thousand neurons, and repeat that up to 300 times a second with different sets of 50 neurons.

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute, who was not involved in the research project. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

Holographic projection

Each of the 2,000 to 3,000 neurons in the chunk of brain was outfitted with a protein that, when hit by a flash of light, turns the cell on to create a brief spike of activity. One of the key breakthroughs was finding a way to target each cell individually without hitting them all at once.

To focus the light onto just the cell body — a target smaller than the width of a human hair — of nearly all cells in a chunk of brain, they turned to computer-generated holography, a method of bending and focusing light to form a three-dimensional spatial pattern. The effect is as if a 3D image were floating in space.
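
The Berkeley system shapes light in three dimensions with temporal focusing (the 3D-SHOT device described below), which goes well beyond a toy example. The core idea of computer-generated holography can nonetheless be illustrated with the classic Gerchberg-Saxton algorithm, sketched here in Python under the simplifying assumptions of a 2D target pattern and an idealized optical path.

```python
import numpy as np

# Conceptual sketch of computer-generated holography: the classic
# Gerchberg-Saxton algorithm finds a phase mask (for a spatial light
# modulator, or SLM) whose far-field diffraction pattern approximates a
# target intensity image. This only illustrates the principle; 3D-SHOT
# targets 3D volumes and adds temporal focusing.

def gerchberg_saxton(target_intensity, iterations=100):
    target_amp = np.sqrt(target_intensity)
    # Start from a random phase guess in the image plane.
    field = target_amp * np.exp(1j * 2 * np.pi * np.random.rand(*target_amp.shape))
    for _ in range(iterations):
        slm_field = np.fft.ifft2(field)                    # back to the SLM plane
        slm_phase = np.angle(slm_field)                    # SLM can only shape phase
        field = np.fft.fft2(np.exp(1j * slm_phase))        # propagate a uniform beam
        field = target_amp * np.exp(1j * np.angle(field))  # enforce target amplitude
    return slm_phase

# Target: two bright spots, standing in for two neuron cell bodies.
target = np.zeros((64, 64))
target[20, 20] = target[44, 40] = 1.0
phase_mask = gerchberg_saxton(target)
print(phase_mask.shape)  # 64x64 phase values to display on the SLM
```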

In this case, the holographic image was projected into a thin layer of brain tissue at the surface of the cortex, about a tenth of a millimeter thick, through a clear window into the brain.

“The major advance is the ability to control neurons precisely in space and time,” said postdoc Nicolas Pégard, another first author who works both in Adesnik’s lab and the lab of co-author Laura Waller, an associate professor of electrical engineering and computer sciences. “In other words, to shoot the very specific sets of neurons you want to activate and do it at the characteristic scale and the speed at which they normally work.”

The researchers have already tested the prototype in the touch, vision and motor areas of the brains of mice as they walk on a treadmill with their heads immobilized. While they have not noted any behavior changes in the mice when their brain is stimulated, Mardinly said that their brain activity — which is measured in real-time with two-photon imaging of calcium levels in the neurons — shows patterns similar to a response to a sensory stimulus. They’re now training mice so they can detect behavior changes after stimulation.

Prosthetics and brain implants

The area of the brain covered — now a slice one-half millimeter square and one-tenth of a millimeter thick — can be scaled up to read from and write to more neurons in the brain’s outer layer, or cortex, Pégard said. And the laser holography setup could eventually be miniaturized to fit in a backpack a person could haul around.

Mardinly, Pégard and the other first author, postdoc Ian Oldenburg, constructed the holographic brain modulator by making technological advances in a number of areas. Mardinly and Oldenburg, together with Savitha Sridharan, a research associate in the lab, developed better optogenetic switches to insert into cells to turn them on and off. The switches — light-activated ion channels on the cell surface that open briefly when triggered — turn on strongly and then quickly shut off, all in about 3 milliseconds, so they’re ready to be re-stimulated up to 50 or more times per second, consistent with normal firing rates in the cortex.

Pégard developed the holographic projection system using a liquid crystal screen that acts like a holographic negative to sculpt the light from 40 W lasers into the desired 3D pattern. The lasers are pulsed in 300-femtosecond bursts every microsecond. He, Mardinly, Oldenburg and their colleagues published a paper last year describing the device, which they call 3D-SHOT, for three-dimensional scanless holographic optogenetics with temporal focusing.

“This is the culmination of technologies that researchers have been working on for a while, but have been impossible to put together,” Mardinly said. “We solved numerous technical problems at the same time to bring it all together and finally realize the potential of this technology.”

As they improve their technology, they plan to start capturing real patterns of activity in the cortex in order to learn how to reproduce sensations and perceptions to play back through their holographic system.

Reference:
Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., . . . Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature Neuroscience. doi:10.1038/s41593-018-0139-8

https://www.technologynetworks.com/neuroscience/news/using-holography-to-activate-the-brain-300329


Arnav Kapur, a researcher in the Fluid Interfaces group at the MIT Media Lab, demonstrates the AlterEgo project. Image: Lorrie Lejeune/MIT

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The researchers describe their device in a paper they presented at the Association for Computing Machinery’s Intelligent User Interface conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate major in electrical engineering and computer science.

Subtle signals

The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or “subvocalization,” as it’s known.

But subvocalization as a computer interface is largely unexplored. The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.

But in current experiments, the researchers are getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.

Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies — about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.

Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, and so on. The output of the final layer yields the result of the classification task.

The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
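
The paper’s exact architecture and input features are not specified in this article, so the PyTorch sketch below is only a plausible stand-in: a small feedforward classifier over electrode-signal features, plus a helper that freezes everything except the last two layers to mimic the per-user customization described above. The layer sizes and feature dimension are assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch of the kind of classifier described above: a small
# feedforward network mapping features from 7 electrode channels to a
# ~20-word vocabulary. Layer sizes and the feature dimension are assumed,
# not taken from the paper.

N_FEATURES = 7 * 40   # 7 electrodes x 40 signal features per window (assumed)
N_WORDS = 20          # limited task vocabulary, per the article

model = nn.Sequential(
    nn.Linear(N_FEATURES, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),      # last two Linear layers are retrained
    nn.Linear(32, N_WORDS),            # per user; earlier ones stay frozen
)

def customize_for_user(model):
    """Freeze all parameters, then unfreeze the last two layers so a short
    per-user calibration session only retrains those weights."""
    for param in model.parameters():
        param.requires_grad = False
    for layer in list(model)[-3:]:     # Linear(64,32), ReLU, Linear(32,N_WORDS)
        for param in layer.parameters():
            param.requires_grad = True
    return model

model = customize_for_user(model)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
logits = model(torch.randn(8, N_FEATURES))   # batch of 8 signal windows
print(logits.shape)                          # torch.Size([8, 20])
```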

Practical matters

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. “We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.”

“I think that they’re a little underselling what I think is a real potential for the work,” says Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”

“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”