How a New AI Translated Brain Activity to Speech With 97 Percent Accuracy

By Edd Gent

The idea of a machine that can decode your thoughts might sound creepy, but for the thousands of people who have lost the ability to speak due to disease or disability, it could be game-changing. Even for the able-bodied, being able to type out an email just by thinking, or to send commands to a digital assistant telepathically, could be hugely useful.

That vision may have come a step closer after researchers at the University of California, San Francisco demonstrated that they could translate brain signals into complete sentences with error rates as low as three percent, which is below the threshold for professional speech transcription.

While we’ve been able to decode parts of speech from brain signals for around a decade, so far most of the solutions have been a long way from consistently translating intelligible sentences. Last year, researchers used a novel approach that achieved some of the best results so far by using brain signals to animate a simulated vocal tract, but only 70 percent of the words were intelligible.

The key to the improved performance achieved by the authors of the new paper in Nature Neuroscience was their realization that there were strong parallels between translating brain signals to text and machine translation between languages using neural networks, which is now highly accurate for many languages.

While most efforts to decode brain signals have focused on identifying neural activity that corresponds to particular phonemes—the distinct chunks of sound that make up words—the researchers decided to mimic machine translation, where the entire sentence is translated at once. This has proven a powerful approach: because certain words are more likely to appear close together, the system can rely on context to fill in any gaps.

The team used the same encoder-decoder approach commonly used for machine translation, in which one neural network analyzes the input signal—normally text, but in this case brain signals—to create a representation of the data, and then a second neural network translates this into the target language.
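
To make the analogy concrete, here is a minimal sketch of that encoder-decoder pattern in Python (using PyTorch). This is illustrative only, not the authors' code: every dimension, layer choice, and name here is an assumption. One recurrent network summarizes a window of multichannel neural signals; a second emits word tokens conditioned on that summary.

```python
# Minimal encoder-decoder sketch (illustrative only, not the authors' code).
# All sizes are placeholders.
import torch
import torch.nn as nn

class BrainToTextModel(nn.Module):
    def __init__(self, n_channels=100, hidden=256, vocab_size=250):
        super().__init__()
        # Encoder: reads the multichannel neural time series.
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        # Decoder: emits one word token per step, conditioned on the
        # encoder's final hidden state (the "sentence representation").
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, signals, target_tokens):
        # signals: (batch, time, n_channels) neural recordings
        # target_tokens: (batch, n_words) word indices (teacher forcing)
        _, h = self.encoder(signals)       # summarize the whole sentence
        emb = self.embed(target_tokens)    # (batch, n_words, hidden)
        dec_out, _ = self.decoder(emb, h)  # decode conditioned on h
        return self.out(dec_out)           # (batch, n_words, vocab)

model = BrainToTextModel()
logits = model(torch.randn(8, 500, 100), torch.randint(0, 250, (8, 12)))
print(logits.shape)  # torch.Size([8, 12, 250])
```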

They trained their system using brain activity recorded from four women with electrodes implanted in their brains to monitor seizures as they read out a set of 50 sentences containing 250 unique words. This allowed the first network to work out which neural activity correlated with which parts of speech.

In testing, the system relied only on the neural signals and achieved error rates below eight percent for two of the four subjects, matching the accuracy of professional transcribers.

Inevitably, there are caveats. First, the system was only able to decode 30 to 50 specific sentences drawn from a limited vocabulary of 250 words. It also requires people to have electrodes implanted in their brains, which is currently only permitted for a small number of highly specific medical reasons. However, there are several signs that this direction holds considerable promise.

One concern was that because the system was being tested on sentences that were included in its training data, it might simply be learning to match specific sentences to specific neural signatures. That would suggest it wasn’t really learning the constituent parts of speech, which would make it harder to generalize to unfamiliar sentences.

But when the researchers added another set of recordings to the training data that were not included in testing, doing so reduced error rates significantly, suggesting that the system is learning sub-sentence information like words.

They also found that pre-training the system on data from the volunteer who achieved the highest accuracy before training on data from one of the worst performers significantly reduced error rates. This suggests that in practical applications, much of the training could be done before the system is given to the end user, who would only have to fine-tune it to the quirks of their brain signals.
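
In code, that two-stage recipe is ordinary transfer learning: pretrain on one subject's recordings, then fine-tune on a small amount of the new user's data, typically at a lower learning rate. A hedged sketch, reusing the hypothetical BrainToTextModel from the previous example (the training data here is random placeholder tensors):

```python
# Transfer-learning sketch (illustrative): pretrain on subject A,
# fine-tune on subject B. Reuses the hypothetical BrainToTextModel above.
import torch
import torch.nn.functional as F

def run_epoch(model, optimizer, signals, tokens):
    # One pass of teacher-forced training on (signals, tokens) pairs.
    logits = model(signals, tokens[:, :-1])  # predict the next word
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tokens[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

model = BrainToTextModel()

# Stage 1: pretrain on the high-accuracy subject's (larger) dataset.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    run_epoch(model, opt, torch.randn(8, 500, 100),
              torch.randint(0, 250, (8, 12)))

# Stage 2: fine-tune on the new user's small dataset, at a lower
# learning rate so the structure learned in stage 1 is preserved.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(10):
    run_epoch(model, opt, torch.randn(8, 500, 100),
              torch.randint(0, 250, (8, 12)))
```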

The vocabulary of such a system is likely to grow considerably as people build upon this approach—but even a limited palette of 250 words could be incredibly useful to someone who has lost the ability to speak, and could likely be tailored into a specific set of commands for telepathic control of other devices.

Now the ball is back in the court of the scrum of companies racing to develop the first practical neural interfaces.

Tiny Sensors Track Dopamine in the Brain for More Than a Year

New tiny sensors track dopamine in the brain for more than a year, and could be useful for monitoring patients with Parkinson’s and other diseases.


By Anne Trafton

Dopamine, a signaling molecule used throughout the brain, plays a major role in regulating our mood, as well as controlling movement. Many disorders, including Parkinson’s disease, depression, and schizophrenia, are linked to dopamine deficiencies.

MIT neuroscientists have now devised a way to measure dopamine in the brain for more than a year, which they believe will help them to learn much more about its role in both healthy and diseased brains.

“Despite all that is known about dopamine as a crucial signaling molecule in the brain, implicated in neurologic and neuropsychiatric conditions as well as our ability to learn, it has been impossible to monitor changes in the online release of dopamine over time periods long enough to relate these to clinical conditions,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and one of the senior authors of the study.

Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, and Robert Langer, the David H. Koch Institute Professor and a member of the Koch Institute, are also senior authors of the study. MIT postdoc Helen Schwerdt is the lead author of the paper, which appears in the Sept. 12 issue of Communications Biology.

Long-term sensing

Dopamine is one of many neurotransmitters that neurons in the brain use to communicate with each other. Traditional systems for measuring dopamine — carbon electrodes with a shaft diameter of about 100 microns — can be used reliably for only about a day, because scar tissue forms around them and interferes with the electrodes’ ability to interact with dopamine.

In 2015, the MIT team demonstrated that tiny microfabricated sensors could be used to measure dopamine levels in a part of the brain called the striatum, which contains dopamine-producing cells that are critical for habit formation and reward-reinforced learning.

Because these probes are so small (about 10 microns in diameter), the researchers could implant up to 16 of them to measure dopamine levels in different parts of the striatum. In the new study, the researchers wanted to test whether they could use these sensors for long-term dopamine tracking.

“Our fundamental goal from the very beginning was to make the sensors work over a long period of time and produce accurate readings from day to day,” Schwerdt says. “This is necessary if you want to understand how these signals mediate specific diseases or conditions.”

To develop a sensor that can be accurate over long periods of time, the researchers had to make sure that it would not provoke an immune reaction, to avoid the scar tissue that interferes with the accuracy of the readings.

The MIT team found that their tiny sensors were nearly invisible to the immune system, even over extended periods of time. After the sensors were implanted, populations of microglia (immune cells that respond to short-term damage) and astrocytes (immune cells that respond over longer periods) were the same as those in brain tissue without implanted probes.

In this study, the researchers implanted three to five sensors per animal, about 5 millimeters deep, in the striatum. They took readings every few weeks, after stimulating dopamine release from the brainstem; the released dopamine then travels to the striatum. They found that the measurements remained consistent for up to 393 days.
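
A simple way to quantify that kind of long-term consistency, sketched below with made-up numbers rather than the study's actual analysis, is to correlate each session's stimulation-evoked readings against the first session's:

```python
# Sketch of a long-term stability check (not the study's actual analysis):
# compare each session's evoked dopamine readings against session 1.
import numpy as np

rng = np.random.default_rng(0)
n_sessions, n_samples = 12, 200   # e.g. readings every few weeks

# Fake data: a stable evoked response plus small per-session noise.
template = np.sin(np.linspace(0, np.pi, n_samples))
sessions = [template + 0.05 * rng.standard_normal(n_samples)
            for _ in range(n_sessions)]

for day, reading in zip(range(0, 393, 33), sessions):
    r = np.corrcoef(sessions[0], reading)[0, 1]
    print(f"day {day:3d}: correlation with first session = {r:.3f}")
```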

“This is the first time that anyone’s shown that these sensors work for more than a few months. That gives us a lot of confidence that these kinds of sensors might be feasible for human use someday,” Schwerdt says.

Paul Glimcher, a professor of physiology and neuroscience at New York University, says the new sensors should enable more researchers to perform long-term studies of dopamine; such studies are essential for understanding phenomena like learning, which unfolds over long time periods.

“This is a really solid engineering accomplishment that moves the field forward,” says Glimcher, who was not involved in the research. “This dramatically improves the technology in a way that makes it accessible to a lot of labs.”

Monitoring Parkinson’s

If developed for use in humans, these sensors could be useful for monitoring Parkinson’s patients who receive deep brain stimulation, the researchers say. This treatment involves implanting an electrode that delivers electrical impulses to a structure deep within the brain. Using a sensor to monitor dopamine levels could help doctors deliver the stimulation more selectively, only when it is needed.

The researchers are now looking into adapting the sensors to measure other neurotransmitters in the brain, and to measure electrical signals, which can also be disrupted in Parkinson’s and other diseases.

“Understanding those relationships between chemical and electrical activity will be really important to understanding all of the issues that you see in Parkinson’s,” Schwerdt says.

The research was funded by the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Neurological Disorders and Stroke, the Army Research Office, the Saks Kavanaugh Foundation, the Nancy Lurie Marks Family Foundation, and Dr. Tenley Albright.

https://news.mit.edu/2018/brain-dopamine-tracking-sensors-0912

New Hearing Aid Includes Fitness Tracking, Language Translation

Starkey Hearing Technologies recently unveiled their latest hearing aid, the Livio AI. The aid leverages artificially intelligent software to adapt to users’ listening environments. Starkey says the device does a lot more than just assist in hearing, and includes a range of additional technology, such as a physical activity tracker and integrated language translation.

Hearing loss has a disabling effect on 466 million people worldwide, including over 7 million children under 5 years old. Modern hearing aids already include some pretty sophisticated connectivity, including Bluetooth and internet functionality. The Livio device, however, goes quite a few steps further, and capitalizes on the current craze for fitness devices by including a host of health-minded integrations.

The Future is Hear

Launched August 27 at an event at Starkey’s Minnesota HQ, the Livio contains advances that Starkey CTO Achin Bhowmik was keen to compare to those seen in the phone market over the last twenty years. The eponymous “artificial intelligence” aspect of the device includes the ability to detect the location and environment in which the user is wearing the aid and optimize the listening experience based on this information. This is, arguably, not the most eye-catching (ear-catching?) feature of the Livio – such capabilities have been advertised in other hearing aid technology.

Rather, the Livio’s integration of inertial sensors is its main party trick: this enables it to track physical activity much like other fitness devices. It can count your steps and exercise, and cleverly integrates this with a “brain health” measurement to derive a mind and body health score. The brain health measurement is partly calculated from how much you wear the device, and while it’s arguable whether simply wearing a hearing aid represents training your brain, another component increases users’ scores when they interact with different people in different environments, which sounds like a neat way to check on the social health of elderly users. Furthermore, the inertial sensor can detect whether a wearer has fallen, which Bhowmik was keen to point out is a major health hazard for older people.
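
Starkey has not published how the score is computed, but a purely hypothetical weighted-sum version, with every weight, cap, and input invented for illustration, might look like this:

```python
# Hypothetical wellness score (all weights and inputs are invented;
# Starkey has not published the Livio's actual formula).
def body_score(steps: int, active_minutes: int) -> float:
    # Cap each component so one metric can't dominate the score.
    return min(steps / 10_000, 1.0) * 60 + min(active_minutes / 30, 1.0) * 40

def brain_score(wear_hours: float, environments: int, conversations: int) -> float:
    # Wear time, distinct listening environments, and social interaction
    # each contribute, per the article's description of the components.
    return (min(wear_hours / 12, 1.0) * 40
            + min(environments / 4, 1.0) * 30
            + min(conversations / 10, 1.0) * 30)

print(body_score(steps=7500, active_minutes=20))                     # ~71.7
print(brain_score(wear_hours=10, environments=3, conversations=6))   # ~73.8
```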

The translation software is also a major draw, and the promise of sci-fi level language conversion, covering 27 languages, shows Starkey are aiming to bring the multi-billion-dollar hearing aid industry into the future.

As for whether the device can meet these lofty promises, you’ll simply have to keep an eye (and er, ear) out to see if the Livio performs as well as Starkey hope.

Vaitheki Maheswaran, Audiology Specialist for UK-based charity Action on Hearing Loss, said: “The innovation in technology is interesting, not only enabling users to hear better but to monitor their body and mental fitness with the use of an app. However, while this technology is not currently available in the UK, it is important to speak to an audiologist who can help you in choosing the most suitable type of hearing aid for your needs because one type of hearing aid is not suitable for everyone.”

https://www.technologynetworks.com/informatics/news/new-hearing-aid-includes-fitness-tracking-language-translation-309458

Brain-boosting prosthesis moves from mice to humans

by Robbie Gonzalez

The shape on the screen appears only briefly—just long enough for the test subject to commit it to memory. At the same time, an electrical signal snakes past the bony perimeter of her skull, down through a warm layer of grey matter toward a batch of electrodes near the center of her brain. Zap zap zap they go, in a carefully orchestrated pattern of pulses. The picture disappears from the screen. A minute later, it reappears, this time beside a handful of other abstract images. The patient pauses, recognizes the shape, then points to it with her finger.

What she’s doing is remarkable, not for what she remembers, but for how well she remembers. On average, she and seven other test subjects perform 37 percent better at the memory game with the brain pulses than they do without—making them the first humans on Earth to experience the memory-boosting benefits of a tailored neural prosthesis.

If you want to get technical, the brain-booster in question is a “closed-loop hippocampal neural prosthesis.” Closed loop because the signals passing between each patient’s brain and the computer to which it’s attached are zipping back and forth in near-real-time. Hippocampal because those signals start and end inside the test subject’s hippocampus, a seahorse-shaped region of the brain critical to the formation of memories. “We’re looking at how the neurons in this region fire when memories are encoded and prepared for storage,” says Robert Hampson, a neuroscientist at Wake Forest Baptist Medical Center and lead author of the paper describing the experiment in the latest issue of the Journal of Neural Engineering.

By distinguishing the patterns associated with successfully encoded memories from unsuccessful ones, he and his colleagues have developed a system that improves test subjects’ performance on visual memory tasks. “What we’ve been able to do is identify what makes a correct pattern, what makes an error pattern, and use microvolt level electrical stimulations to strengthen the correct patterns. What that has resulted in is an improvement of memory recall in tests of episodic memory.” Translation: They’ve improved short-term memory by zapping patients’ brains with individualized patterns of electricity.
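
At the heart of that pipeline is a pattern classifier: given a window of hippocampal firing, predict whether the memory will be encoded successfully. The sketch below, with synthetic firing-rate features and scikit-learn standing in for the team's actual model, shows the general shape of such a decoder:

```python
# Sketch: classify hippocampal firing patterns as successful vs. failed
# encoding (illustrative; not the Wake Forest/USC model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_units = 200, 32

# Fake firing-rate features: successful trials get a slight rate shift
# in half the units, mimicking a distinguishable "correct" pattern.
labels = rng.integers(0, 2, n_trials)            # 1 = later remembered
rates = rng.standard_normal((n_trials, n_units))
rates[labels == 1, : n_units // 2] += 0.8

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:",
      cross_val_score(clf, rates, labels, cv=5).mean())
```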

Today, their proof-of-concept prosthetic lives outside a patient’s head and connects to the brain via wires. But in the future, Hampson hopes, surgeons could implant a similar apparatus entirely within a person’s skull, like a neural pacemaker. It could augment all manner of brain functions—not just in victims of dementia and brain injury, but healthy individuals, as well.

If the possibility of a neuroprosthetic future strikes you as far-fetched, consider how far Hampson has come already. He’s been studying the formation of memories in the hippocampus since the 1980s. Then, about two decades ago, he connected with University of Southern California neural engineer Theodore Berger, who had been working on ways to model hippocampal activity mathematically. The two have been collaborating ever since. In the early aughts, they demonstrated the potential of a neuroprosthesis in slices of brain tissue. In 2011 they did it in live rats. A couple years later, they pulled it off in live monkeys. Now, at long last, they’ve done it in people.

“In one sense, that makes this prosthesis a culmination,” Hampson says. “But in another sense, it’s just the beginning. Human memory is such a complex process, and there is so much left to learn. We’re only at the edge of understanding it.”

To test their system in human subjects, the researchers recruited people with epilepsy; those patients already had electrodes implanted in their hippocampi to monitor for seizure-related electrical activity. By piggybacking on the diagnostic hardware, Hampson and his colleagues were able to record, and later deliver, electrical activity.

You see, the researchers weren’t just zapping their subjects’ brains willy nilly. They determined where and when to deliver stimulation by first recording activity in the hippocampus as each test subject performed the visual memory test described above. It’s an assessment of working memory—the short-term mental storage bin you use to stash, say, a two-factor authentication code, only to retrieve it seconds later.

All the while, electrodes were recording the brain’s activity, tracking the firing patterns in the hippocampus when the patient guessed right and wrong. From those patterns, Berger, together with USC biomedical engineer Dong Song, created a mathematical model that could predict how neurons in each subject’s hippocampus would fire during successful memory-formation. And if you can predict that activity, that means you can stimulate the brain to mimic that memory formation.

Stimulating the patients’ hippocampi had a similar effect on longer-term memory retention—like your ability to remember where you parked when you leave the grocery store. In a second test, Hampson’s team introduced a 30- to 60-minute delay between displaying an image and asking the subjects to pull it out of a lineup. On average, test subjects performed 35 percent better in the stimulated trials.

The effect came as a shock to the researchers. “We weren’t surprised to see improvement, because we’d had success in our preliminary animal studies. We were surprised by the amount of improvement,” Hampson says. “We could tell, as we were running the patients, that they were performing better. But we didn’t appreciate how much better until we went back and analyzed the results.”

The results have impressed other researchers, as well. “The loss of one’s memories and the ability to encode new memories is devastating—we are who we are because of the memories we have formed throughout our lifetimes,” Rob Malenka, a psychiatrist and neurologist at Stanford University who was unaffiliated with the study, said via email. In that light, he says, “this very exciting neural prosthetic approach, which borders on science fiction, has great potential value.” (Malenka has expressed cautious optimism about neuroprosthetic research in the past, noting as recently as 2015 that the translation of the technology from animal to human subjects would constitute “a huge leap.”) However, he says, it’s important to remain clear-headed. “This kind of approach is certainly worth pursuing with vigor, but I think it will still be decades before this kind of approach will ever be used routinely in large numbers of patient populations.”

Then again, with enough support, it could happen sooner than that. Facebook is working on brain-computer interfaces; so is Elon Musk. Berger himself briefly served as the chief science officer of Kernel, an ambitious neurotechnology startup led by entrepreneur Bryan Johnson. “Initially, I was very hopeful about working with Bryan,” Berger says now. “We were both excited about the possibility of the work, and he was willing to put in the kind of money that would be required to see it thrive.”

But the partnership crumbled, right in the middle of Kernel’s first clinical test. Berger declines to go into details, except to say that Johnson—either out of hubris or ignorance—wanted to move too fast. (Johnson declined to comment for this story.)

https://www.wired.com/story/hippocampal-neural-prosthetic

DARPA program aims to develop an implantable neural interface capable of connecting with one million neurons

A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.

Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.
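
Some back-of-envelope arithmetic shows why a million individually addressable neurons is a serious data problem. The sample rate and bit depth below are assumptions for illustration, not NESD specifications:

```python
# Back-of-envelope data-rate estimate (sample rate and bit depth assumed).
neurons = 1_000_000       # NESD target: individually addressable neurons
sample_rate = 30_000      # Hz, a common rate for resolving spikes
bits_per_sample = 10

bits_per_second = neurons * sample_rate * bits_per_sample
print(f"raw data rate: {bits_per_second / 8 / 1e9:.1f} GB/s")   # 37.5 GB/s
```

At tens of gigabytes per second of raw signal, it is clear why the program calls for compressing and re-representing the data with minimal loss of fidelity.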

Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.

To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.

To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting that runs Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-16/listing.html. More details about the Industry Group that will support NESD is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-17/listing.html. A Broad Agency Announcement describing the specific capabilities sought will be forthcoming on http://www.fbo.gov.

NESD is part of a broader portfolio of programs within DARPA that support President Obama’s brain initiative. For more information about DARPA’s work in that domain, please visit: http://www.darpa.mil/program/our-research/darpa-and-the-brain-initiative.

http://www.darpa.mil/news-events/2015-01-19


Scientists encode memories in a way that bypasses damaged brain tissue

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have developed a brain prosthesis designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If there’s damage at any region that prevents this translation, then there is the possibility that long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from a long time ago – things that were already translated into long-term memories before the brain damage occurred – but have difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.
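
Conceptually, the prosthesis learns a mapping from spiking in one hippocampal region to spiking in the next. The toy version below fits a purely linear multi-input, multi-output (MIMO) map on synthetic data; the published model is nonlinear and far more sophisticated, but the role of the prediction is the same: it stands in for the output the damaged region should have produced.

```python
# Toy MIMO mapping from one hippocampal region's activity to the next
# (illustrative; Berger and Song's published model is nonlinear).
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_in, n_out = 1000, 16, 16

# Fake paired recordings: region-2 activity as an unknown linear mix
# of region-1 activity plus noise.
region1 = rng.standard_normal((n_samples, n_in))
true_map = rng.standard_normal((n_in, n_out)) * 0.5
region2 = region1 @ true_map + 0.1 * rng.standard_normal((n_samples, n_out))

# Fit the mapping by least squares, then "predict" the downstream
# signal -- this prediction is what a bypass device would deliver.
learned_map, *_ = np.linalg.lstsq(region1, region2, rcond=None)
pred = region1 @ learned_map
r2 = 1 - ((region2 - pred) ** 2).sum() / ((region2 - region2.mean(0)) ** 2).sum()
print(f"variance explained: {r2:.2f}")
```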

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed those signals into the model and read how the signals generated from the first region of the hippocampus were translated into signals generated by the second region of the hippocampus.

In hundreds of trials conducted with nine patients, the algorithm accurately predicted how the signals would be translated with about 90 percent accuracy.

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions in order to try to bypass the damage and enable the formation of an accurate long-term memory.

http://medicalxpress.com/news/2015-09-scientists-bypass-brain-re-encoding-memories.html#nRlv

Paralyzed man walks again, using only his mind.


Paraplegic Adam Fritz works out with Kristen Johnson, a spinal cord injury recovery specialist, at the Project Walk facility in Claremont, California on September 24. A brain-to-computer technology that can translate thoughts into leg movements has enabled Fritz, paralyzed from the waist down by a spinal cord injury, to become the first such patient to walk without the use of robotics.

It’s a technology that sounds lifted from the latest Marvel movie—a brain-computer interface functional electrical stimulation (BCI-FES) system that enables paralyzed users to walk again. But thanks to neurologists, biomedical engineers and other scientists at the University of California, Irvine, it’s very much a reality, though admittedly with only one successful test subject so far.

The team, led by Zoran Nenadic and An H. Do, built a device that translates brain waves into electrical signals that can bypass the damaged region of a paraplegic’s spine and go directly to the muscles, stimulating them to move. To test it, they recruited 28-year-old Adam Fritz, who had lost the use of his legs five years earlier in a motorcycle accident.

Fritz first had to learn how exactly he’d been telling his legs to move for all those years before his accident. The research team fitted him with an electroencephalogram (EEG) cap that read his brain waves as he visualized moving an avatar in a virtual reality environment. After hours of training on the video game, he eventually figured out how to signal “walk.”

The next step was to transfer that newfound skill to his legs. The scientists wired up the EEG device so that it would send electrical signals to the muscles in Fritz’s leg. And then, along with physical therapy to strengthen his legs, he would practice walking—his legs suspended a few inches off the ground—using only his brain (and, of course, the device). On his 20th visit, Fritz was finally able to walk using a harness that supported his body weight and prevented him from falling. After a little more practice, he walked using just the BCI-FES system. After 30 trials run over a period of 19 weeks, he could successfully walk through a 12-foot-long course.
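
Functionally, the decoding reduces to a binary switch: classify the EEG as "walk" or "idle" and gate the muscle stimulator accordingly. The sketch below is a generic illustration of that idea, not the UC Irvine pipeline; the feature (mu-band power, which typically drops during imagined movement) and the threshold are assumptions:

```python
# Sketch of a binary EEG "walk/stop" gate driving functional electrical
# stimulation (illustrative; not the UC Irvine pipeline).
import numpy as np

FS = 256          # EEG sample rate (Hz), assumed
THRESHOLD = 1.5   # decision boundary, would be calibrated per user

def band_power(eeg_window: np.ndarray, lo=8.0, hi=12.0) -> float:
    # Power in a frequency band via a simple FFT periodogram,
    # normalized by window length.
    freqs = np.fft.rfftfreq(len(eeg_window), 1 / FS)
    power = np.abs(np.fft.rfft(eeg_window)) ** 2 / len(eeg_window)
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].mean()

def fes_command(eeg_window: np.ndarray) -> str:
    # Mu-band power typically drops when movement is imagined
    # (desynchronization), so low power maps to WALK here. The decoder
    # only has to emit a binary command, as Moritz notes below.
    return "WALK" if band_power(eeg_window) < THRESHOLD else "STOP"

rng = np.random.default_rng(3)
print(fes_command(rng.standard_normal(FS)))  # one second of fake EEG
```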

As encouraging as the trial sounds, there are experts who suggest the design has limitations. “It appears that the brain EEG signal only contributed a walk or stop command,” says Dr. Chet Moritz, an associate professor of rehab medicine, physiology and biophysics at the University of Washington. “This binary signal could easily be provided by the user using a sip-puff straw, eye-blink device or many other more reliable means of communicating a simple ‘switch.’”

Moritz believes it’s unlikely that an EEG alone would be reliable enough to extract any more specific input from the brain while the test subject is walking. In other words, it might not be able to do much more beyond beginning and ending a simple motion like moving your legs forward—not so helpful in stepping over curbs or turning a corner in a hallway.

The UC Irvine team hopes to improve the capability of its technology. A simplified version of the system has the potential to work as a means of noninvasive rehabilitation for a wide range of paralytic conditions, from less severe spinal cord injuries to stroke and multiple sclerosis.

“Once we’ve confirmed the usability of this noninvasive system, we can look into invasive means, such as brain implants,” said Nenadic in a statement announcing the project’s success. “We hope that an implant could achieve an even greater level of prosthesis control because brain waves are recorded with higher quality. In addition, such an implant could deliver sensation back to the brain, enabling the user to feel their legs.”

http://www.newsweek.com/paralyzed-man-walks-again-using-only-his-mind-379531

Mind-controlled drones promise a future of hands-free flying

There have been tentative steps into thought-controlled drones in the past, but Tekever and a team of European researchers just kicked things up a notch. They’ve successfully tested Brainflight, a project that uses your mental activity (detected through a cap) to pilot an unmanned aircraft. You have to learn how to fly on your own, but it doesn’t take long before you’re merely thinking about where you want to go. And don’t worry about crashing because of distractions or medical events like seizures — there are “algorithms” to prevent the worst from happening.

You probably won’t be using Brainflight to fly anything larger than a small drone, at least not in the near future. There’s no regulatory framework that would cover mind-controlled aircraft, after all. Tekever is hopeful that its technology will change how we approach transportation, though. It sees brain power reducing complex activities like flying or driving to something you can do instinctively, like walking — you’d have freedom to focus on higher-level tasks like navigation. The underlying technology would also let people with injuries and physical handicaps steer vehicles and their own prosthetic limbs. Don’t be surprised if you eventually need little more than some headgear to take to the skies.

http://www.engadget.com/2015/02/25/tekever-mind-controlled-drone/