Brain implants: Restoring memory with a microchip

William Gibson’s popular science fiction tale “Johnny Mnemonic” foresaw sensitive information being carried by microchips in the brain by 2021. A team of American neuroscientists could be making this fantasy a reality. Their motivation is different, but the outcome would be somewhat similar. Hailed as one of 2013’s top ten technological breakthroughs by MIT Technology Review, the work by the University of Southern California, North Carolina’s Wake Forest University and other partners has actually spanned a decade.

But the U.S.-wide team now expects to see a memory device implanted in a small number of human volunteers within two years and available to patients in five to 10 years. They can’t quite contain their excitement. “I never thought I’d see this in my lifetime,” said Ted Berger, professor of biomedical engineering at the University of Southern California in Los Angeles. “I might not benefit from it myself but my kids will.”

Rob Hampson, associate professor of physiology and pharmacology at Wake Forest University, agrees. “We keep pushing forward, every time I put an estimate on it, it gets shorter and shorter.”

The scientists — who bring varied skills to the table, including mathematical modeling and psychiatry — believe they have cracked how long-term memories are made, stored and retrieved and how to replicate this process in brains that are damaged, particularly by stroke or localized injury.

Berger said they record a memory being made, in an undamaged area of the brain, then use that data to predict what a damaged area “downstream” should be doing. Electrodes are then used to stimulate the damaged area to replicate the action of the undamaged cells.
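
For readers who want the gist in code, here is a minimal sketch of that record-predict-stimulate loop. The function names and the simple linear model below are placeholders of our own, not the team’s actual software, which relies on far more sophisticated mathematical models of hippocampal signaling.

```python
# Minimal sketch of the record -> predict -> stimulate idea described above.
# All names and the linear model are hypothetical placeholders, not the
# research team's actual system.
import numpy as np

def predict_downstream_pattern(upstream_spikes: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Predict what the damaged 'downstream' region should be doing,
    using a simple linear input-output model fit on healthy tissue."""
    return weights @ upstream_spikes

def deliver_stimulation(pattern: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn the predicted activity pattern into on/off electrode pulses."""
    return (pattern > threshold).astype(int)

# Hypothetical example: 8 recording channels upstream, 4 stimulation electrodes downstream.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))            # stands in for the fitted model
upstream_spikes = rng.poisson(2.0, size=8)   # spike counts recorded in the undamaged area
pulses = deliver_stimulation(predict_downstream_pattern(upstream_spikes, weights))
print(pulses)                                # e.g. [1 0 1 0]
```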

They concentrate on the hippocampus — part of the cerebral cortex which sits deep in the brain — where short-term memories become long-term ones. Berger has looked at how electrical signals travel through neurons there to form those long-term memories and has used his expertise in mathematical modeling to mimic these movements using electronics.

Hampson, whose university has done much of the animal studies, adds: “We support and reinforce the signal in the hippocampus but we are moving forward with the idea that if you can study enough of the inputs and outputs to replace the function of the hippocampus, you can bypass the hippocampus.”

The team’s experiments on rats and monkeys have shown that certain brain functions can be replaced with signals via electrodes. You would think that the work of then creating an implant for people and getting such a thing approved would be a Herculean task, but think again.

For 15 years, people have been having brain implants to provide deep brain stimulation to treat epilepsy and Parkinson’s disease — a reported 80,000 people have now had such devices placed in their brains. So many of the hurdles have already been overcome — particularly the “yuck factor” and the fear factor.

“It’s now commonly accepted that humans will have electrodes put in them — it’s done for epilepsy, deep brain stimulation, (that has made it) easier for investigative research, it’s much more acceptable now than five to 10 years ago,” Hampson says.

Much of the work that remains now is in shrinking down the electronics.

“Right now it’s not a device, it’s a fair amount of equipment,” Hampson says. “We’re probably looking at devices in the five to 10 year range for human patients.”

The ultimate goal in memory research would be to treat Alzheimer’s disease, but unlike stroke or localized brain injury, Alzheimer’s tends to affect many parts of the brain, especially in its later stages, making these implants a less likely option any time soon.

Berger foresees a future, however, where drugs and implants could be used together to treat early dementia. Drugs could be used to enhance the action of cells that surround the most damaged areas, and the team’s memory implant could be used to replace a lot of the lost cells in the center of the damaged area. “I think the best strategy is going to involve both drugs and devices,” he says.

Unfortunately, the team found that its method can’t help patients with advanced dementia.

“When looking at a patient with mild memory loss, there’s probably enough residual signal to work with, but not when there’s significant memory loss,” Hampson said.

Constantine Lyketsos, professor of psychiatry and behavioral sciences at Johns Hopkins Medicine in Baltimore, which is trialing a deep brain stimulator implant for Alzheimer’s patients, was a little skeptical of the other team’s claims.

“The brain has a lot of redundancy; it can function pretty well if it loses one or two parts. But memory involves circuits diffusely dispersed throughout the brain, so it’s hard to envision.” However, he added that the approach was more likely to succeed in helping victims of stroke or localized brain injury, as its makers are indeed aiming to do.

The UK’s Alzheimer’s Society is cautiously optimistic.

“Finding ways to combat symptoms caused by changes in the brain is an ongoing battle for researchers. An implant like this one is an interesting avenue to explore,” said Doug Brown, director of research and development.

Hampson says the team’s breakthrough is “like the difference between a cane, to help you walk, and a prosthetic limb — it’s two different approaches.”

It will still take time for many people to accept their findings and their claims, he says, but they don’t expect a shortage of volunteers stepping forward to try their implant — the project is partly funded by the U.S. military, which is looking for help with battlefield injuries.

U.S. soldiers are coming back from operations with brain trauma, and a neurologist at DARPA (the Defense Advanced Research Projects Agency) is asking, “What can you do for my boys?” Hampson says.

“That’s what it’s all about.”

http://www.cnn.com/2013/05/07/tech/brain-memory-implants-humans/index.html?iref=allsearch

Bionic superhumans on the horizon

Around 220,000 people worldwide already walk around with cochlear implants — devices worn around the ear that turn sound waves into electrical impulses shunted directly into the auditory nerve.

Tens of thousands of people have been implanted with deep brain stimulators, devices that send an electrode tunneling several inches in the brain. Deep brain stimulators are used to control Parkinson’s disease, though lately they’ve also been tested — with encouraging results — in use against severe depression and obsessive compulsive disorder.

The most obvious bionics are those that replace limbs. Olympian “Blade Runner” Oscar Pistorius, now awaiting trial for the alleged murder of his girlfriend, made a splash with his Cheetah carbon fiber prostheses. Yet those are a relatively simple technology — a curved piece of slightly springy, super-strong material. In the digital age, we’re seeing more sophisticated limbs.

Consider the thought-controlled bionic leg that Zac Vawter used to climb all 103 floors of Chicago’s Willis Tower. Or the nerve-controlled bionic hand that Iraq war veteran Glen Lehman had attached after the loss of his original hand.

Or the even more sophisticated i-limb Ultra, an artificial hand with five independently articulating artificial fingers. Those limbs don’t just react mechanically to pressure. They actually respond to the thoughts and intentions of their owners, flexing, extending, gripping, and releasing on mental command.

The age when prostheses were largely inert pieces of wood, metal, and plastic is passing. Advances in microprocessors, in techniques to interface digital technology with the human nervous system, and in battery technology to allow prostheses to pack more power with less weight are turning replacement limbs into active parts of the human body.

In some cases, they’re not even part of the body at all. Consider the case of Cathy Hutchinson. In 1997, Cathy had a stroke, leaving her without control of her arms. Hutchinson volunteered for an experimental procedure that could one day help millions of people with partial or complete paralysis. She let researchers implant a small device in the part of her brain responsible for motor control. With that device, she is able to control an external robotic arm by thinking about it.

That, in turn, brings up an interesting question: If the arm isn’t physically attached to her body, how far away could she be and still control it? The answer is at least thousands of miles. In animal studies, scientists have shown that a monkey with a brain implant can control a robot arm 7,000 miles away. The monkey’s mental signals were sent over the internet, from Duke University in North Carolina, to the robot arm in Japan. In this day and age, distance is almost irrelevant.

The 7,000-mile-away prosthetic arm makes an important point: These new prostheses aren’t just going to restore missing human abilities. They’re going to enhance our abilities, giving us powers we never had before, and augmenting other capabilities we have. While the current generation of prostheses is still primitive, we can already see this taking shape when a monkey moves a robotic arm on the other side of the planet just by thinking about it.

Other research is pointing to enhancements to memory and decision making.

The hippocampus is a small, seahorse-shaped part of the brain that’s essential in forming new memories. If it’s damaged — by an injury to the head, for example — people start having difficulty forming new long-term memories. In the most extreme cases, this can lead to the complete inability to form new long-term memories, as in the film Memento. Working to find a way to repair this sort of brain damage, researchers in 2011 created a “hippocampus chip” that can replace damaged brain tissue. When they implanted it in rats with a damaged hippocampus, they found that not only could their chip repair damaged memory — it could improve the rats’ ability to learn new things.

Nor is memory the end of it. Another study, in 2012, demonstrated that we can boost intelligence — at least one sort — in monkeys. Scientists at Wake Forest University implanted specialized brain chips in a set of monkeys and trained those monkeys to perform a picture-matching game. When the implant was activated, it raised their scores by an average of 10 points on a 100-point scale. The implant makes monkeys smarter.

Both of those technologies for boosting memory and intelligence are in very early stages, in small animal studies only, and years (or possibly decades) away from wide use in humans. Still, they make us wonder — what happens when it’s possible to improve on the human body and mind?

The debate has started already, of course. Oscar Pistorius had to fight hard for inclusion in the Olympics. Many objected that his carbon fiber prostheses gave him a competitive advantage. He was able — with the help of doctors and biomedical engineers — to make a compelling case that his Cheetah blades didn’t give him any advantage on the field. But how long will that be true? How long until we have prostheses (not to mention drugs and genetic therapies) that make athletes better in their sports?

But the issue is much, much wider than professional sports. We may care passionately about the integrity of the Olympics or professional cycling or so on, but they only directly affect a very small number of us. In other areas of life — in the workforce in particular — enhancement technology might affect all of us.

When it’s possible to make humans smarter, sharper, and faster, how will that affect us? Will the effect be mostly positive, boosting our productivity and the rate of human innovation? Or will it be just another pressure to compete at work? Who will be able to afford these technologies? Will anyone be able to have their body, and more importantly, their brain upgraded? Or will only the rich have access to these enhancements?

We have a little while to consider these questions, but we ought to start. The technology will sneak its way into our lives, starting with people with disabilities, the injured, and the ill. It’ll improve their lives in ways that are unquestionably good. And then, one day, we’ll wake up and realize that we’re doing more than restoring lost function. We’re enhancing it.

Superhuman technology is on the horizon. Time to start thinking about what that means for us.

http://www.cnn.com/2013/04/24/opinion/bionic-superhumans-ramez-naam/index.html?iid=article_sidebar

Researchers explore connecting the brain to machines

Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a picture of a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
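
A toy simulation can make that feedback loop concrete. In the sketch below — our own illustration, not any lab’s actual code — the decoder’s mapping from electrodes to cursor velocity is fixed and imperfect, and a simulated “user” keeps random variations in its activity only when they bring the cursor output closer to the target, a crude stand-in for the trial-and-error adaptation described above.

```python
# Toy model of closed-loop adaptation: the decoder is fixed, and the simulated
# "brain" searches by trial and error for an activity pattern that produces the
# desired cursor movement. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
decoder = rng.normal(size=(2, 10))          # fixed electrode-to-velocity mapping
target = np.array([1.0, 0.5])               # cursor velocity the user wants to produce
activity = rng.normal(size=10)              # the user's initial firing pattern

for trial in range(500):
    error = np.linalg.norm(decoder @ activity - target)      # feedback seen on screen
    candidate = activity + 0.1 * rng.normal(size=10)          # try a small variation
    if np.linalg.norm(decoder @ candidate - target) < error:
        activity = candidate                                  # keep what works, drop the rest

print(np.round(decoder @ activity, 2), "vs target", target)   # typically ends up close to target
```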

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?'” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may be able to detect and respond to more abstract types of mental activity in the always-changing future.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines

Mind-meld brain power is best for steering spaceships

Two people have successfully steered a virtual spacecraft by combining the power of their thoughts – and their efforts were far more accurate than one person acting alone. One day groups of people hooked up to brain-computer interfaces (BCIs) might work together to control complex robotic and telepresence systems, maybe even in space.

A BCI system records the brain’s electrical activity using EEG signals, which are detected with electrodes attached to the scalp. Machine-learning software learns to recognise the patterns generated by each user as they think of a certain concept, such as “left” or “right”. BCIs have helped people with disabilities to steer a wheelchair, for example.
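
The pattern-recognition step can be sketched in a few lines. The example below uses synthetic band-power features and an off-the-shelf classifier to stand in for the machine-learning stage; a real EEG pipeline would add filtering, artifact rejection and per-user calibration, and the numbers here are made up.

```python
# Sketch of a per-user "left" vs "right" EEG classifier. Synthetic band-power
# features stand in for real recordings.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)
n_trials, n_features = 200, 16                            # e.g. band power on several channels
X_left = rng.normal(0.0, 1.0, (n_trials, n_features))     # calibration trials, thinking "left"
X_right = rng.normal(0.5, 1.0, (n_trials, n_features))    # calibration trials, thinking "right"
X = np.vstack([X_left, X_right])
y = np.array(["left"] * n_trials + ["right"] * n_trials)

clf = LinearDiscriminantAnalysis().fit(X, y)              # learned separately for each user
new_trial = rng.normal(0.5, 1.0, (1, n_features))         # a fresh trial to decode
print(clf.predict(new_trial))                             # most likely ["right"]
```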

Researchers are discovering, however, that they get better results in some tasks by combining the signals from multiple BCI users. Until now, this “collaborative BCI” technique has been used in simple pattern-recognition tasks, but a team at the University of Essex in the UK wanted to test it more rigorously.

So they developed a simulator in which pairs of BCI users had to steer a craft towards the dead centre of a planet by thinking about one of eight directions that they could fly in, like using compass points. Brain signals representing the users’ chosen direction, as interpreted by the machine-learning system, were merged in real time and the spacecraft followed that path.

The results, to be presented at an Intelligent User Interfaces conference in California in March, strongly favoured two-brain navigation. Simulation flights were 67 per cent accurate for a single user, but 90 per cent on target for two users. And when coping with sudden changes in the simulated planet’s position, reaction times were halved, too. Combining signals helps cancel out the random noise that dogs EEG recordings. “When you average signals from two people’s brains, the noise cancels out a bit,” says team member Riccardo Poli.
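
The noise-cancelling effect Poli describes is easy to illustrate. In the sketch below, two users’ direction estimates carry independent random errors; averaging them shrinks the typical error by roughly a factor of the square root of two. The numbers are illustrative and are not taken from the Essex study.

```python
# Illustration of why merging two users' decoded signals helps: independent
# noise partly cancels when the estimates are averaged.
import numpy as np

rng = np.random.default_rng(7)
trials = 10_000
noise_a = rng.normal(0, 0.6, trials)            # per-trial decoding error, user A (radians)
noise_b = rng.normal(0, 0.6, trials)            # per-trial decoding error, user B (radians)

err_single = np.abs(noise_a)                    # steering alone
err_merged = np.abs((noise_a + noise_b) / 2)    # real-time average of both users

print(f"mean error alone : {np.degrees(err_single).mean():.1f} deg")
print(f"mean error merged: {np.degrees(err_merged).mean():.1f} deg")   # about 1/sqrt(2) as large
```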

The technique can also compensate for a lapse in attention. “It is difficult to stay focused on the task at all times. So when a single user has momentary attention lapses, it matters. But when there are two users, a lapse by one will not have much effect, so you stay on target,” Poli says.

NASA’s Jet Propulsion Lab in Pasadena, California, has been observing the work while itself investigating BCI’s potential for controlling planetary rovers, for example. But don’t hold your breath, says JPL senior research scientist Adrian Stoica. “While potential uses for space applications exist, in terms of uses for planetary rover remote control, this is still a speculative idea,” he says.

http://www.newscientist.com/article/mg21729025.600-mindmeld-brain-power-is-best-for-steering-spaceships.html

Stanford scientists advance thought-control computer cursor movement

Stanford researchers have designed the fastest, most accurate mathematical algorithm yet for brain-implantable prosthetic systems that can help disabled people maneuver computer cursors with their thoughts. The algorithm’s speed, accuracy and natural movement approach those of a real arm.

On each side of the screen, a monkey moves a cursor with its thoughts, using the cursor to make contact with the colored ball. On the left, the monkey’s thoughts are decoded with the use of a mathematical algorithm known as Velocity. On the right, the monkey’s thoughts are decoded with a new algorithm known as ReFIT, with better results. The ReFIT system helps the monkey to click on 21 targets in 21 seconds, as opposed to just 10 clicks with the older system.

When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement activate, as if trying to make the immobile limb work again.

Despite a neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional.

In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons.

After those signals have been decoded through a mathematical algorithm, they can be used to control the movement of a cursor on a computer screen – in essence, the cursor is controlled by thoughts.

The work is part of a field known as neural prosthetics.

A team of Stanford researchers has now developed a new algorithm, known as ReFIT, that vastly improves the speed and accuracy of neural prosthetics that control computer cursors. The results were published Nov. 18 in the journal Nature Neuroscience in a paper by Krishna Shenoy, a professor of electrical engineering, bioengineering and neurobiology at Stanford, and a team led by research associate Dr. Vikash Gilja and bioengineering doctoral candidate Paul Nuyujukian.

In side-by-side demonstrations with rhesus monkeys, cursors controlled by the new algorithm doubled the performance of existing systems and approached performance of the monkey’s actual arm in controlling the cursor. Better yet, more than four years after implantation, the new system is still going strong, while previous systems have seen a steady decline in performance over time.

“These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford,” said Shenoy.

The system relies on a sensor implanted into the brain, which records “action potentials” in neural activity from an array of electrode sensors and sends data to a computer. The frequency with which action potentials are generated provides the computer important information about the direction and speed of the user’s intended movement.
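
In its simplest form, that decoding step is a mapping from binned firing rates to an intended cursor velocity. The sketch below fits such a mapping with ordinary least squares on simulated calibration data; it illustrates the general idea only and is not the algorithm the Stanford team used.

```python
# Sketch of firing-rate decoding: fit a linear map from binned spike rates to
# cursor velocity during calibration, then reuse it to decode new activity.
import numpy as np

rng = np.random.default_rng(3)
n_bins, n_channels = 500, 96                       # e.g. a 96-electrode array
true_map = rng.normal(size=(n_channels, 2))        # unknown tuning of each channel (simulated)
velocity = rng.normal(size=(n_bins, 2))            # arm velocity during calibration
rates = velocity @ true_map.T + rng.normal(0, 0.5, (n_bins, n_channels))

decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)   # learn rates -> velocity
decoded = rates @ decoder                          # this is what would drive the cursor
print("mean decoding error:", np.round(np.abs(decoded - velocity).mean(), 3))
```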

The ReFIT algorithm that decodes these signals represents a departure from earlier models. In most neural prosthetics research, scientists have recorded brain activity while the subject moves or imagines moving an arm, analyzing the data after the fact. “Quite a bit of the work in neural prosthetics has focused on this sort of offline reconstruction,” said Gilja, the first author of the paper.

The Stanford team wanted to understand how the system worked “online,” under closed-loop control conditions in which the computer analyzes and implements visual feedback gathered in real time as the monkey neurally controls the cursor toward an onscreen target.

The system is able to make adjustments on the fly when guiding the cursor to a target, just as a hand and eye would work in tandem to move a mouse-cursor onto an icon on a computer desktop.

If the cursor were straying too far to the left, for instance, the user likely adjusts the imagined movements to redirect the cursor to the right. The team designed the system to learn from the user’s corrective movements, allowing the cursor to move more precisely than it could in earlier prosthetics.
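
One ingredient reported for this kind of retraining is to assume that, during calibration, the user always intends to move straight toward the target, and to re-aim the recorded cursor velocities accordingly before refitting the decoder. The sketch below shows that re-aiming step for a single hypothetical time step; the specifics are our own illustration, not the published implementation.

```python
# Hedged sketch of intention re-estimation during closed-loop calibration:
# keep the decoded speed but point the velocity at the target, on the
# assumption that the user always intends to reach the target.
import numpy as np

def reaim_velocity(cursor_pos, target_pos, decoded_vel):
    """Rotate a decoded velocity to point at the target, preserving its speed."""
    to_target = target_pos - cursor_pos
    direction = to_target / (np.linalg.norm(to_target) + 1e-9)
    return np.linalg.norm(decoded_vel) * direction

# Hypothetical single time step:
cursor = np.array([0.2, 0.1])                # where the cursor currently is
target = np.array([1.0, 1.0])                # where the user is trying to go
vel = np.array([0.3, -0.1])                  # what the old decoder produced
print(reaim_velocity(cursor, target, vel))   # same speed, now aimed at the target
```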

To test the new system, the team gave monkeys the task of mentally directing a cursor to a target – an onscreen dot – and holding the cursor there for half a second. ReFIT performed vastly better than previous technology in terms of both speed and accuracy.

The path of the cursor from the starting point to the target was straighter and it reached the target twice as quickly as earlier systems, achieving 75 to 85 percent of the speed of the monkey’s arm.

“This paper reports very exciting innovations in closed-loop decoding for brain-machine interfaces. These innovations should lead to a significant boost in the control of neuroprosthetic devices and increase the clinical viability of this technology,” said Jose Carmena, an associate professor of electrical engineering and neuroscience at the University of California-Berkeley.

Critical to ReFIT’s time-to-target improvement was its superior ability to stop the cursor. While the old model’s cursor reached the target almost as fast as ReFIT, it often overshot the destination, requiring additional time and multiple passes to hold the target.

The key to this efficiency was in the step-by-step calculation that transforms electrical signals from the brain into movements of the cursor onscreen. The team had a unique way of “training” the algorithm about movement. When the monkey used his arm to move the cursor, the computer used signals from the implant to match the arm movements with neural activity.

Next, the monkey simply thought about moving the cursor, and the computer translated that neural activity into onscreen movement of the cursor. The team then used the monkey’s brain activity to refine their algorithm, increasing its accuracy.

The team introduced a second innovation in the way ReFIT encodes information about the position and velocity of the cursor. Gilja said that previous algorithms could interpret neural signals about either the cursor’s position or its velocity, but not both at once. ReFIT can do both, resulting in faster, cleaner movements of the cursor.
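
Decoders that carry position and velocity together are commonly built as Kalman filters, whose state vector holds both and is updated with every new bin of spike counts. The sketch below shows that structure with made-up matrices; it is a generic illustration, not the model reported in the paper.

```python
# Generic Kalman-filter decoder sketch: the state [x, y, vx, vy] tracks cursor
# position and velocity at once, updated each time bin from observed firing rates.
import numpy as np

dt = 0.05                                        # 50 ms bins (illustrative)
A = np.array([[1, 0, dt, 0],                     # state transition: position integrates velocity
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
Q = np.eye(4) * 1e-3                             # process noise
rng = np.random.default_rng(5)
C = rng.normal(size=(96, 4))                     # firing rates modeled as linear in the state
R = np.eye(96) * 0.5                             # observation noise

x_true = np.array([0.0, 0.0, 0.4, 0.2])          # a "true" cursor state to track
x, P = np.zeros(4), np.eye(4)
for _ in range(100):                             # one iteration per bin of spike counts
    x_true = A @ x_true                          # the true state drifts with its velocity
    rates = C @ x_true + rng.normal(0, 0.7, 96)  # stand-in for the recorded firing rates
    x_pred, P_pred = A @ x, A @ P @ A.T + Q      # predict forward one bin
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)   # Kalman gain
    x = x_pred + K @ (rates - C @ x_pred)        # correct with the new observation
    P = (np.eye(4) - K @ C) @ P_pred

print("decoded velocity:", np.round(x[2:], 2), "true velocity:", x_true[2:])
```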

Early research in neural prosthetics had the goal of understanding the brain and its systems more thoroughly, Gilja said, but he and his team wanted to build on this approach by taking a more pragmatic engineering perspective. “The core engineering goal is to achieve highest possible performance and robustness for a potential clinical device,” he said.

To create such a responsive system, the team decided to abandon one of the traditional methods in neural prosthetics.

Much of the existing research in this field has focused on differentiating among individual neurons in the brain. Importantly, such a detailed approach has allowed neuroscientists to create a detailed understanding of the individual neurons that control arm movement.

But the individual neuron approach has its drawbacks, Gilja said. “From an engineering perspective, the process of isolating single neurons is difficult, due to minute physical movements between the electrode and nearby neurons, making it error prone,” he said. ReFIT focuses on small groups of neurons instead of single neurons.

By abandoning the single-neuron approach, the team also reaped a surprising benefit: performance longevity. Neural implant systems that are fine-tuned to specific neurons degrade over time. It is a common belief in the field that after six months to a year they can no longer accurately interpret the brain’s intended movement. Gilja said the Stanford system is working very well more than four years later.

“Despite great progress in brain-computer interfaces to control the movement of devices such as prosthetic limbs, we’ve been left so far with halting, jerky, Etch-a-Sketch-like movements. Dr. Shenoy’s study is a big step toward clinically useful brain-machine technology that has faster, smoother, more natural movements,” said James Gnadt, a program director in Systems and Cognitive Neuroscience at the National Institute of Neurological Disorders and Stroke, part of the National Institutes of Health.

For the time being, the team has been focused on improving cursor movement rather than the creation of robotic limbs, but that is not out of the question, Gilja said. Near term, precise, accurate control of a cursor is a simplified task with enormous value for people with paralysis.

“We think we have a good chance of giving them something very useful,” he said. The team is now translating these innovations to people with paralysis as part of a clinical trial.

This research was funded by the Christopher and Dana Reeve Paralysis Foundation, the National Science Foundation, National Defense Science and Engineering Graduate Fellowships, Stanford Graduate Fellowships, Defense Advanced Research Projects Agency (“Revolutionizing Prosthetics” and “REPAIR”) and the National Institutes of Health (NINDS-CRCNS and Director’s Pioneer Award).

Other contributing researchers include Cynthia Chestek, John Cunningham, Byron Yu, Joline Fan, Mark Churchland, Matthew Kaufman, Jonathan Kao and Stephen Ryu.

http://news.stanford.edu/news/2012/november/thought-control-cursor-111812.html

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community