Stanford scientists advance thought-control computer cursor movement

Stanford researchers have designed the fastest, most accurate mathematical algorithm yet for brain-implantable prosthetic systems that can help disabled people maneuver computer cursors with their thoughts. The algorithm’s speed, accuracy and natural movement approach those of a real arm.

On each side of the screen, a monkey moves a cursor with its thoughts, using the cursor to make contact with the colored ball. On the left, the monkey’s thoughts are decoded with an earlier mathematical algorithm known as Velocity. On the right, the monkey’s thoughts are decoded with a new algorithm known as ReFIT, with better results. The ReFIT system lets the monkey hit 21 targets in 21 seconds, as opposed to just 10 with the older system.

When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement activate, as if trying to make the immobile limb work again.

Despite a neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional.

In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons.

After those signals have been decoded through a mathematical algorithm, they can be used to control the movement of a cursor on a computer screen – in essence, the cursor is controlled by thoughts.

The work is part of a field known as neural prosthetics.

A team of Stanford researchers has now developed a new algorithm, known as ReFIT, that vastly improves the speed and accuracy of neural prosthetics that control computer cursors. The results were published Nov. 18 in the journal Nature Neuroscience in a paper by Krishna Shenoy, a professor of electrical engineering, bioengineering and neurobiology at Stanford, and a team led by research associate Dr. Vikash Gilja and bioengineering doctoral candidate Paul Nuyujukian.

In side-by-side demonstrations with rhesus monkeys, cursors controlled by the new algorithm doubled the performance of existing systems and approached the performance of the monkey’s actual arm in controlling the cursor. Better yet, more than four years after implantation, the new system is still going strong, while previous systems have seen a steady decline in performance over time.

“These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford,” said Shenoy.

The system relies on a sensor implanted into the brain, which records “action potentials” in neural activity from an array of electrode sensors and sends data to a computer. The frequency with which action potentials are generated provides the computer important information about the direction and speed of the user’s intended movement.
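
The article does not give the decoding equations, but the basic idea of turning per-channel firing rates into an intended cursor velocity can be sketched as a simple linear decode. Everything below (channel count, bin size, weights) is an illustrative assumption rather than the paper's fitted model:

```python
import numpy as np

# Illustrative only: a linear "firing rates -> cursor velocity" decode.
# A 96-channel array and 50 ms bins are assumed; in a real system the
# weight matrix L would be fit from data pairing neural activity with
# known arm movements.
n_channels = 96
L = np.random.randn(2, n_channels) * 0.01   # stand-in decoding weights
baseline = np.full(n_channels, 10.0)        # assumed mean firing rate (spikes/s)

def decode_velocity(spike_counts, dt=0.05):
    """Map one bin of per-channel spike counts to an intended (vx, vy)."""
    rates = spike_counts / dt - baseline    # deviation from baseline rate
    return L @ rates

# One simulated bin of activity driving one cursor update:
counts = np.random.poisson(0.5, size=n_channels)   # roughly 10 spikes/s in a 50 ms bin
vx, vy = decode_velocity(counts)
print(vx, vy)
```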

The ReFIT algorithm that decodes these signals represents a departure from earlier models. In most neural prosthetics research, scientists have recorded brain activity while the subject moves or imagines moving an arm, analyzing the data after the fact. “Quite a bit of the work in neural prosthetics has focused on this sort of offline reconstruction,” said Gilja, the first author of the paper.

The Stanford team wanted to understand how the system worked “online,” under closed-loop control conditions in which the computer analyzes and implements visual feedback gathered in real time as the monkey neurally controls the cursor toward an onscreen target.

The system is able to make adjustments on the fly when guiding the cursor to a target, just as a hand and eye would work in tandem to move a mouse-cursor onto an icon on a computer desktop.

If the cursor strays too far to the left, for instance, the user will likely adjust the imagined movements to redirect it to the right. The team designed the system to learn from these corrective movements, allowing the cursor to move more precisely than it could in earlier prosthetics.
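
The article does not publish the control loop itself, so the sketch below only illustrates the closed-loop structure: record, decode, move the cursor, and show the result so the user can correct on the next cycle. The functions read_spike_counts, decode_velocity and render are hypothetical placeholders for the real recording, decoding and display steps:

```python
import numpy as np

def closed_loop_step(cursor, read_spike_counts, decode_velocity, render, dt=0.05):
    """One cycle of closed-loop control (illustrative structure only)."""
    counts = read_spike_counts()                  # neural activity for this time bin
    velocity = decode_velocity(counts)            # decoded intended (vx, vy)
    cursor = cursor + np.asarray(velocity) * dt   # integrate into a new position
    render(cursor)                                # visual feedback the user reacts to
    return cursor

# The session simply repeats this step; the "learning" described above happens
# when the decoder is refit using data gathered during such closed-loop runs.
```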

To test the new system, the team gave monkeys the task of mentally directing a cursor to a target – an onscreen dot – and holding the cursor there for half a second. ReFIT performed vastly better than previous technology in terms of both speed and accuracy.

The path of the cursor from the starting point to the target was straighter and it reached the target twice as quickly as earlier systems, achieving 75 to 85 percent of the speed of the monkey’s arm.

“This paper reports very exciting innovations in closed-loop decoding for brain-machine interfaces. These innovations should lead to a significant boost in the control of neuroprosthetic devices and increase the clinical viability of this technology,” said Jose Carmena, an associate professor of electrical engineering and neuroscience at the University of California-Berkeley.

Critical to ReFIT’s time-to-target improvement was its superior ability to stop the cursor. While the old model’s cursor reached the target almost as fast as ReFIT, it often overshot the destination, requiring additional time and multiple passes to hold the target.

The key to this efficiency was in the step-by-step calculation that transforms electrical signals from the brain into movements of the cursor onscreen. The team had a unique way of “training” the algorithm about movement. When the monkey used his arm to move the cursor, the computer used signals from the implant to match the arm movements with neural activity.

Next, the monkey simply thought about moving the cursor, and the computer translated that neural activity into onscreen movement of the cursor. The team then used the monkey’s brain activity to refine their algorithm, increasing its accuracy.
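
The article describes this retraining only at a high level. One way to picture the refinement step is that, during retraining, each recorded cursor velocity is relabeled with the direction the monkey was presumably trying to move, straight toward the target, before the decoder is refit. The sketch below is a simplification built on that assumption, not the paper's exact procedure:

```python
import numpy as np

def reorient_toward_target(cursor_positions, decoded_velocities, target):
    """Relabel training velocities with the presumed intended direction.

    Keeps each decoded speed but points the vector from the cursor toward
    the target; on the target, the intended velocity is taken to be zero.
    (Simplified sketch under the "always aiming at the target" assumption.)
    """
    relabeled = []
    for pos, vel in zip(cursor_positions, decoded_velocities):
        to_target = np.asarray(target, dtype=float) - np.asarray(pos, dtype=float)
        dist = np.linalg.norm(to_target)
        speed = np.linalg.norm(vel)
        if dist < 1e-6:
            relabeled.append(np.zeros(2))          # on target: intend to stop
        else:
            relabeled.append(speed * to_target / dist)
    return np.array(relabeled)

# The relabeled velocities, paired with the recorded neural activity,
# are then used to refit the decoder for the next closed-loop session.
```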

The team introduced a second innovation in the way ReFIT encodes information about the position and velocity of the cursor. Gilja said that previous algorithms could interpret neural signals about either the cursor’s position or its velocity, but not both at once. ReFIT can do both, resulting in faster, cleaner movements of the cursor.
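
The article does not give the decoder's state equations, but tracking position and velocity at once is naturally expressed as one state vector evolving under a constant-velocity model, with velocity also feeding the position update. The matrices and numbers below are illustrative assumptions, and a full decoder would additionally correct this prediction with each new bin of neural data:

```python
import numpy as np

dt = 0.05   # assumed 50 ms update interval

# Joint state: [px, py, vx, vy], so position and velocity are handled together.
A = np.array([[1.0, 0.0, dt,  0.0],   # px <- px + vx*dt
              [0.0, 1.0, 0.0, dt ],   # py <- py + vy*dt
              [0.0, 0.0, 1.0, 0.0],   # vx carries over between bins
              [0.0, 0.0, 0.0, 1.0]])  # vy carries over between bins

def predict(state):
    """Advance the joint position/velocity state by one time bin."""
    return A @ state

state = np.array([0.0, 0.0, 0.05, -0.02])   # illustrative starting state
print(predict(state))                        # position has moved along the velocity
```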

Early research in neural prosthetics had the goal of understanding the brain and its systems more thoroughly, Gilja said, but he and his team wanted to build on this approach by taking a more pragmatic engineering perspective. “The core engineering goal is to achieve highest possible performance and robustness for a potential clinical device,” he said.

To create such a responsive system, the team decided to abandon one of the traditional methods in neural prosthetics.

Much of the existing research in this field has focused on differentiating among individual neurons in the brain. Importantly, such a fine-grained approach has allowed neuroscientists to build a detailed understanding of the individual neurons that control arm movement.

But the individual neuron approach has its drawbacks, Gilja said. “From an engineering perspective, the process of isolating single neurons is difficult, due to minute physical movements between the electrode and nearby neurons, making it error prone,” he said. ReFIT focuses on small groups of neurons instead of single neurons.

By abandoning the single-neuron approach, the team also reaped a surprising benefit: performance longevity. Neural implant systems that are fine-tuned to specific neurons degrade over time. It is a common belief in the field that after six months to a year they can no longer accurately interpret the brain’s intended movement. Gilja said the Stanford system is working very well more than four years later.

“Despite great progress in brain-computer interfaces to control the movement of devices such as prosthetic limbs, we’ve been left so far with halting, jerky, Etch-a-Sketch-like movements. Dr. Shenoy’s study is a big step toward clinically useful brain-machine technology that has faster, smoother, more natural movements,” said James Gnadt, a program director in Systems and Cognitive Neuroscience at the National Institute of Neurological Disorders and Stroke, part of the National Institutes of Health.

For the time being, the team has been focused on improving cursor movement rather than the creation of robotic limbs, but that is not out of the question, Gilja said. Near term, precise, accurate control of a cursor is a simplified task with enormous value for people with paralysis.

“We think we have a good chance of giving them something very useful,” he said. The team is now translating these innovations to people with paralysis as part of a clinical trial.

This research was funded by the Christopher and Dana Reeve Paralysis Foundation, the National Science Foundation, National Defense Science and Engineering Graduate Fellowships, Stanford Graduate Fellowships, Defense Advanced Research Projects Agency (“Revolutionizing Prosthetics” and “REPAIR”) and the National Institutes of Health (NINDS-CRCNS and Director’s Pioneer Award).

Other contributing researchers include Cynthia Chestek, John Cunningham, Byron Yu, Joline Fan, Mark Churchland, Matthew Kaufman, Jonathan Kao and Stephen Ryu.

http://news.stanford.edu/news/2012/november/thought-control-cursor-111812.html

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Brain-controlled helicopter may soon be available

For the last few years, Puzzlebox has been publishing open source software and hacking guides that walk makers through the modification of RC helicopters so that they can be flown and controlled using just the power of the mind. Full systems have also been custom built to introduce youngsters to brain-computer interfaces and neuroscience. The group is about to take the project to the next stage by making a Puzzlebox Orbit brain-controlled helicopter available to the public, while encouraging user experimentation by making all the code, schematics, 3D models, build guides and other documentation freely available under an open-source license.

The helicopter has a protective outer sphere that prevents the rotor blades from striking walls, furniture, floor and ceiling, and it is very similar in design to the Kyosho Space Ball. It’s not the same craft, though, and the ability to control it with the mind is not the only difference.

“There’s a ring around the top and bottom of the Space Ball which isn’t present on the Puzzlebox Orbit,” Castellotti says. “The casing around their servo motor looks quite different, too. The horizontal ring at mid-level is more rounded on the Orbit, and vertically it is more squat. We’re also selling the Puzzlebox Orbit in the U.S. for US$89 (including shipping), versus their $117 (plus shipping).”

Two versions of the Puzzlebox Orbit system are being offered to the public. The first is designed for use with mobile devices like tablets and smartphones. A NeuroSky MindWave Mobile EEG headset communicates with the device via Bluetooth. Proprietary software then analyzes the brainwave data in real time and translates the input as command signals, which are sent to the helicopter via an IR adapter plugged into the device’s audio jack.
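
The article names the headset (NeuroSky MindWave Mobile) and the IR adapter but not the control logic, so the sketch below is only a rough illustration of mapping an attention-style reading to a throttle command. read_attention and send_ir_command are hypothetical stand-ins; the real Puzzlebox protocol will differ:

```python
TAKEOFF_THRESHOLD = 60   # assumed concentration level needed to spin up

def control_step(read_attention, send_ir_command):
    """Map a 0-100 attention reading to a throttle command (illustrative)."""
    attention = read_attention()         # streamed from the EEG headset
    if attention >= TAKEOFF_THRESHOLD:
        # Scale the amount above threshold into a 0-255 throttle value.
        throttle = int(255 * (attention - TAKEOFF_THRESHOLD)
                       / (100 - TAKEOFF_THRESHOLD))
    else:
        throttle = 0                     # below threshold: idle / descend
    send_ir_command(throttle=throttle)   # pulsed out through the IR adapter
```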

This system isn’t quite ready for all mobile platforms, though. The team is “happy on Android but don’t have access to a wide variety of hardware for testing,” confirmed Castellotti, adding, “Some tuning after release is expected. We’ll have open source code available to iOS developers and will have initiated the App Store evaluation process if it’s not already been approved.”

The second offering comes with a Puzzlebox Pyramid, which was developed completely in-house and has a dual role as a home base for the Orbit helicopter and a remote control unit. At its heart is a programmable micro-controller that’s compatible with Arduino boards. On one face of the pyramid there’s a broken circle of multi-colored LED lights in a clock face configuration. These are used to indicate levels of concentration, mental relaxation, and the quality of the EEG signal from a NeuroSky MindWave EEG headset (which wirelessly communicates with a USB dongle plugged into the rear of the pyramid).
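
The firmware itself is not described in the article, but the LED indicator behavior is easy to sketch. For consistency with the other examples this is written in Python rather than Arduino C, and set_led is a hypothetical placeholder for driving one LED on the ring:

```python
NUM_LEDS = 12   # the clock-face ring described above

def update_ring(level, set_led):
    """Light a proportional arc of the ring for a 0-100 level
    (concentration, relaxation, or EEG signal quality)."""
    lit = round(level / 100 * NUM_LEDS)
    for i in range(NUM_LEDS):
        set_led(i, on=(i < lit))

# e.g. update_ring(75, set_led) would light 9 of the 12 LEDs.
```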

Twelve infrared LEDs at the top of each face actually control the Orbit helicopter, and with some inventive tweaking, these can also be used to control other IR toys and devices (including TVs).

In either case, a targeted mental state can be assigned to a helicopter control or flight path (such as hover in place or fly in a straight line) and actioned whenever that state is detected and maintained. Estimated Orbit flight time is around eight minutes (or more), after which the user will need to recharge the unit for 30 minutes before the next take-off.
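
That detect-and-hold behavior can be sketched as a simple loop; detect_state and trigger_action are hypothetical placeholders, and the hold time is an assumption rather than a documented Puzzlebox setting:

```python
import time

HOLD_SECONDS = 2.0   # assumed: the state must be maintained this long

def wait_and_trigger(detect_state, trigger_action, poll_interval=0.1):
    """Fire the assigned flight action (e.g. hover, fly straight) once the
    target mental state has been held continuously for HOLD_SECONDS."""
    held_since = None
    while True:
        if detect_state():                   # e.g. attention over a threshold
            if held_since is None:
                held_since = time.time()
            elif time.time() - held_since >= HOLD_SECONDS:
                trigger_action()
                held_since = None            # require a fresh hold next time
        else:
            held_since = None
        time.sleep(poll_interval)
```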

At the time of writing, a crowd-funding campaign on Kickstarter to take the prototype system into mass production has attracted almost three times its target. The Puzzlebox team has already secured enough hardware and materials to start shipping the first wave of Orbits next month. International backers will get their hands on the system early next year.

The brain-controlled helicopter is only a part of the package, however. The development team has promised to release the source code for the Linux/Mac/PC software and mobile apps, all protocols, and available hardware schematics under open-source licenses. Step-by-step how-to guides are also in the pipeline (like the one already on the Instructables website), together with educational aids detailing how everything works.

“We have prepared contributor tools for Orbit, including a wiki, source code browser, and ticket tracking system,” said Castellotti. “We are already using these tools internally to build the project. Access to these will be granted when the Kickstarter campaign closes.”

“We would really like to underline that we are producing more than just a brain-controlled helicopter,” he stressed. “The toy and concept is fun and certainly the main draw, but the true purpose lies in the open code and hacking guides. We don’t want to be the holiday toy that gets played with for ten minutes then sits forever in the corner or on a shelf. We want owners to be able to use the Orbit to experiment with biofeedback – practicing how to concentrate better or to unwind and relax with this physical and visual aid.”

“And when curiosity kicks in and they start to wonder how it actually works, all of the information is published freely. That’s how we hope to share knowledge and foster a community. For example, a motivated experimenter should be able to start with the hardware we provide, and using our tools and guides learn how to hack support for driving a remote controlled car or causing a television to change channels when attention levels are measured as being low for too long a period of time. Such advancements could then be contributed back to the rest of our users.”

The Kickstarter campaign will close on December 8, after which the team will concentrate its efforts on getting Orbit systems delivered to backers and ensuring that all the background and support documentation is in place. If all goes according to plan, a retail launch could follow as soon as Q1 2013.

It is hoped that the consumer Puzzlebox Orbit mobile/tablet edition with the NeuroSky headset will remain under US$200, followed by the Pyramid version at an as-yet undisclosed price.

http://www.gizmag.com/puzzlebox-orbit-brain-controlled-helicopter/25138/

Humans can learn a new sense: ‘Whisking’

Rats use a sense that humans don’t: whisking. They move their facial whiskers back and forth about eight times a second to locate objects in their environment. Could humans acquire this sense? And if they can, what could understanding the process of adapting to new sensory input tell us about how humans normally sense? At the Weizmann Institute, researchers explored these questions by attaching plastic “whiskers” to the fingers of blindfolded volunteers and asking them to carry out a location task. The findings, which recently appeared in the Journal of Neuroscience, have yielded new insight into the process of sensing, and they may point to new avenues in developing aids for the blind.

The scientific team, including Drs. Avraham Saig and Goren Gordon, and Eldad Assa in the group of Prof. Ehud Ahissar and Dr. Amos Arieli, all of the Neurobiology Department, attached a “whisker” – a 30 cm-long elastic “hair” with position and force sensors at its base – to the index finger of each hand of a blindfolded subject. Then two poles were placed at arm’s distance on either side and slightly to the front of the seated subject, with one a bit farther back than the other. Using just their whiskers, the subjects were challenged to figure out which pole – left or right – was the back one. As the experiment continued, the displacement between front and back poles was reduced, up to the point at which the subject could no longer distinguish front from back.

On the first day of the experiment, subjects picked up the new sense so well that they could correctly identify a pole that was set back by only eight cm. An analysis of the data revealed that the subjects did this by deriving spatial information from sensory timing. That is, moving their bewhiskered hands together, they could determine which pole was the back one because the whisker on that hand made contact earlier.

When they repeated the testing the next day, the researchers discovered that the subjects had improved their whisking skills significantly: The average sensory threshold went down to just three cm, with some being able to sense a displacement of just one cm. Interestingly, the ability of the subjects to sense time differences had not changed over the two days. Rather, they had improved in the motor aspects of their whisking strategies: Slowing down their hand motions – in effect lengthening the delay time – enabled them to sense a smaller spatial difference.
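
The arithmetic behind that trade-off is simple: the contact-time difference between the two hands is roughly the front-back offset divided by the hand speed, so for a fixed temporal resolution, a slower sweep makes a smaller offset detectable. The speeds and threshold below are purely illustrative assumptions, not measurements from the study:

```python
def smallest_detectable_offset(hand_speed_cm_s, time_resolution_s):
    """delta_x = v * delta_t: the smallest front-back offset whose contact-time
    difference still reaches the (fixed) temporal resolution threshold."""
    return hand_speed_cm_s * time_resolution_s

# Illustrative values only:
print(smallest_detectable_offset(80, 0.1))   # fast sweep   -> 8 cm offset needed
print(smallest_detectable_offset(30, 0.1))   # slower sweep -> 3 cm offset needed
```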

Saig: “We know that our senses are linked to muscles, for example the ocular and hand muscles. In order to sense the texture of cloth, for example, we move our fingers across it, and to see a stationary object, our eyes must be in constant motion. In this research, we see that changing our physical movements alone – without any corresponding change in the sensitivity of our senses – can be sufficient to sharpen our perception.”

Based on the experiments, the scientists created a statistical model to describe how the subjects updated their “world view” as they acquired new sensory information – up to the point at which they were confident enough to rely on that sense. The model, based on principles of information processing, could explain the number of whisking movements needed to arrive at the correct answer, as well as the pattern of scanning the subjects employed – a gradual change from long to short movements. With this strategy, the flow of information remains constant. “The experiment was conducted in a controlled manner, which allowed us direct access to all the relevant variables: hand motion, hand-pole contact and the reports of the subjects themselves,” says Gordon. “Not only was there a good fit between the theory and the experimental data, we obtained some useful quantitative information on the process of active sensing.”
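
The paper's actual model is not reproduced in the article, but the "update until confident" idea can be illustrated with a toy sequential Bayesian choice between the two hypotheses (left pole back vs. right pole back); all distributions and numbers here are my own illustrative assumptions:

```python
import numpy as np

def scan_until_confident(observe_dt, offset_s=0.03, noise_sd=0.05,
                         confidence=0.95, max_scans=50):
    """Toy accumulation of evidence from noisy contact-time differences.

    Each scan yields a measured delay dt; under "left pole is back" the delay
    is centred on +offset_s, under "right pole is back" on -offset_s. The
    posterior is updated after every scan and scanning stops once it crosses
    the confidence threshold either way.
    """
    p_left_back = 0.5                       # start undecided
    for scan in range(1, max_scans + 1):
        dt = observe_dt()
        like_left = np.exp(-(dt - offset_s) ** 2 / (2 * noise_sd ** 2))
        like_right = np.exp(-(dt + offset_s) ** 2 / (2 * noise_sd ** 2))
        p_left_back = p_left_back * like_left / (
            p_left_back * like_left + (1 - p_left_back) * like_right)
        if p_left_back > confidence or p_left_back < 1 - confidence:
            break
    return scan, p_left_back

# Simulated subject whose left pole really is set back (noisy timing estimates):
n_scans, posterior = scan_until_confident(lambda: np.random.normal(0.03, 0.05))
print(n_scans, posterior)
```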

“Both sight and touch are based on arrays of receptors that scan the outside world in an active manner,” says Ahissar. “Our findings reveal some new principles of active sensing, and show us that activating a new artificial sense in a ‘natural’ way can be very efficient.” Arieli adds: “Our vision for the future is to help blind people ‘see’ with their fingers. Small devices that translate video to mechanical stimulation, based on principles of active sensing that are common to vision and touch, could provide an intuitive, easily used sensory aid.”

Retinal device restores sight to blind mice

Researchers report they have developed in mice what they believe might one day become a breakthrough for humans: a retinal prosthesis that could restore near-normal sight to those who have lost their vision.

That would be a welcome development for the roughly 25 million people worldwide who are blind because of retinal disease, most notably macular degeneration.

The notion of using prosthetics to combat blindness is not new, with prior efforts involving retinal electrode implantation and/or gene therapy restoring a limited ability to pick out spots and rough edges of light.

The current effort takes matters to a new level. The scientists fashioned a prosthetic system packed with computer chips that replicate the “neural impulse codes” the eye uses to transmit light signals to the brain.

“This is a unique approach that hasn’t really been explored before, and we’re really very excited about it,” said study author Sheila Nirenberg, a professor and computational neuroscientist in the department of physiology and biophysics at Weill Medical College of Cornell University in New York City. “I’ve actually been working on this for 10 years. And suddenly, after a lot of work, I knew immediately that I could make a prosthetic that would work, by making one that could take in images and process them into a code that the brain can understand.”

Nirenberg and her co-author Chethan Pandarinath (a former Cornell graduate student now conducting postdoctoral research at Stanford University School of Medicine) report their work in the Aug. 14 issue of Proceedings of the National Academy of Sciences. Their efforts were funded by the U.S. National Institutes of Health and Cornell University’s Institute for Computational Biomedicine.

The study authors explained that retinal diseases destroy the light-catching photoreceptor cells on the retina’s surface. Without those, the eye cannot convert light into neural signals that can be sent to the brain.

However, most of these patients retain the use of their retina’s “output cells” — called ganglion cells — whose job it is to actually send these impulses to the brain. The goal, therefore, would be to jumpstart these ganglion cells by using a light-catching device that could produce critical neural signaling.

But past efforts to implant electrodes directly into the eye have only achieved a small degree of ganglion stimulation, and alternate strategies using gene therapy to insert light-sensitive proteins directly into the retina have also fallen short, the researchers said.

Nirenberg theorized that stimulation alone wasn’t enough if the neural signals weren’t exact replicas of those the brain receives from a healthy retina.

“So, what we did is figure out this code, the right set of mathematical equations,” Nirenberg explained. And by incorporating the code right into their prosthetic device’s chip, she and Pandarinath generated the kind of electrical and light impulses that the brain understood.
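
The article does not reproduce those equations. As background, ganglion-cell output is often modeled as a linear-nonlinear-Poisson cascade, and a toy encoder in that style (with a made-up filter, nonlinearity and rate scale, not the study's fitted code) might look like this:

```python
import numpy as np

def encode_frame(image_patch, spatial_filter, dt=0.01):
    """Toy image-to-spikes encoder in the linear-nonlinear-Poisson style.

    Illustrative only: filter the patch (linear stage), squash the result
    into a firing rate (nonlinear stage), then draw a spike count for one
    time bin (Poisson stage).
    """
    drive = float(np.sum(image_patch * spatial_filter))   # linear stage
    rate = 100.0 / (1.0 + np.exp(-drive))                 # rate in spikes/s
    return np.random.poisson(rate * dt)                   # spikes in this bin

patch = np.random.rand(8, 8)                 # stand-in image patch
kernel = np.full((8, 8), -0.02)              # crude center-surround filter
kernel[3:5, 3:5] = 0.5
print(encode_frame(patch, kernel))
```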

The team also used gene therapy to hypersensitize the ganglion output cells and get them to deliver the visual message up the chain of command.

Behavioral tests were then conducted among blind mice given a code-outfitted retinal prosthetic and among those given a prosthetic that lacked the code in question.

The result: The code group fared dramatically better on visual tracking than the non-code group, with the former able to distinguish images nearly as well as mice with healthy retinas.

“Now we hope to move on to human trials as soon as possible,” said Nirenberg. “Of course, we have to conduct standard safety studies before we get there. And I would say that we’re looking at five to seven years before this is something that might be ready to go, in the best possible case. But we do hope to start clinical trials in the next one to two years.”

However, results achieved in animal studies don't necessarily translate to humans.

Dr. Alfred Sommer, a professor of ophthalmology at Johns Hopkins University in Baltimore and dean emeritus of Hopkins’  Bloomberg School of Public Health, urged caution about the findings.

“This could be revolutionary,” he said. “But I doubt it. It’s a very, very complicated business. And people have been working on it intensively and incrementally for the last 30 years.”

“The fact that they have done something that sounds a little bit better than the last set of results is great,” Sommer added.  “It’s terrific. But this approach is really in its infancy. And I guarantee that it will be a long time before they get to the point where they can really restore vision to people using prosthetics.”

Other advances may offer benefits in the meantime, he said. “We now have new therapies that we didn’t have even five years ago,” Sommer said. “So we may be reaching a state where the amount of people losing their sight will decline even as these new techniques for providing artificial vision improve. It may not be as sci-fi. But I think it’s infinitely more important at this stage.”

http://health.usnews.com/health-news/news/articles/2012/08/13/retinal-device-restores-sight-to-blind-mice

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.