Researchers explore connecting the brain to machines


Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a green ball on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex; they detect the electrical impulses that indicate mental activity, and a computer translates those impulses into the movement of the ball on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the heavy lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting human capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to fathom, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
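To make that feedback loop concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption: a fixed random matrix stands in for the electrodes and their decoding software, and a simulated "brain" nudges its activity at random, keeping only the changes that move the cursor closer to its target. It is a cartoon of the trial-and-error principle described above, not a model of any lab's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 32
# A fixed, arbitrary decoder plays the role of the electrodes plus software:
# it maps a vector of firing rates to a 2-D cursor velocity. Because it is
# random, the simulated brain must discover by trial and error which activity
# patterns move the cursor where.
decoder = rng.normal(size=(2, n_neurons))

target = np.array([1.0, 1.0])       # where the cursor should end up
rates = rng.random(n_neurons)       # the brain's current activity pattern

def error(r):
    """How far the decoded cursor movement lands from the intended target."""
    return np.linalg.norm(decoder @ r - target)

# Trial and error with feedback: try a small random change in activity and
# keep it only if the cursor gets closer to the target.
for trial in range(5000):
    candidate = np.clip(rates + rng.normal(scale=0.05, size=n_neurons), 0, None)
    if error(candidate) < error(rates):
        rates = candidate

print(f"final error: {error(rates):.3f}")   # far smaller than at the start
```

After a few thousand simulated trials the error shrinks to a small fraction of its starting value, a toy analogue of the training that subjects like Oscar go through over weeks and months.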

That’s the principle behind such rapid progress in brain-computer interfaces and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs walk in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?’” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may be able to detect and respond to more abstract types of mental activity in the years ahead.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines

Mind over matter helps paralysed woman control robotic arm


A woman who is paralysed from the neck down has stunned doctors with her extraordinary skill at using a robotic arm that is controlled by her thoughts alone.

The 52-year-old patient, called Jan, lost the use of her limbs more than 10 years ago to a degenerative disease that damaged her spinal cord. The disruption to her nervous system was equivalent to having a broken neck.

But in training sessions at the University of Pittsburgh, doctors found she quickly learned to make fluid movements with the brain-controlled robotic arm, reaching levels of performance never seen before.

Doctors recruited the woman to test a robotic arm controlled by a new kind of computer program that translates the natural brain activity used to move our limbs into commands for the arm.

The design is intended to make the robotic arm more intuitive for patients to use. Instead of having to think where to move the arm, a patient can simply focus on the goal, such as “pick up the ball”.

Several groups around the world are developing so-called brain-machine interfaces to control robotic arms and other devices, such as computers, but none has achieved such impressive results.

Writing in the Lancet, researchers said Jan was able to move the robotic arm back, forward, right, left, and up and down only two days into her training. Within weeks she could reach out, and change the position of the hand to pick up objects on a table, including cones, blocks and small balls, and put them down at another location.

“We were blown away by how fast she was able to acquire her skill, that was completely unexpected,” said Andrew Schwartz, professor of neurobiology at the University of Pittsburgh. “At the end of a good day, when she was making these beautiful movements, she was ecstatic.”

To wire the woman up to the arm, doctors performed a four-hour operation to implant two tiny grids of electrodes, measuring 4mm on each side, into Jan’s brain. Each grid has 96 little electrodes that stick out 1.5mm. The electrodes were pushed just beneath the surface of the brain, near neurons that control hand and arm movement in the motor cortex.

Once the surgeons had implanted the electrodes, they replaced the part of the skull they had removed to expose the brain. Wires from the electrodes ran to connectors on the patient’s head, which doctors could then use to plug the patient into the computer system and robotic arm.

Before Jan could use the arm, doctors had to record her brain activity while she imagined various arm movements. To do this, they asked her to watch the robotic arm as it performed various moves, and got her to imagine moving her own arm in the same way.

While she was thinking, the computer recorded the electrical activity from individual neurons in her brain.
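The article does not describe the mathematics of this calibration step, but a common and simple version of the idea is a linear fit: pair the firing rates recorded while the patient imagines the observed movements with the arm's known velocities, then solve for a mapping from rates to velocity by least squares. The Python sketch below does that on synthetic data; the data, dimensions and plain least-squares fit are assumptions for illustration, not the Pittsburgh group's actual decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

n_neurons, n_samples = 96, 500   # 96 matches one electrode grid in the article

# Hypothetical calibration session: the robotic arm's velocity at each moment
# while the patient watches it and imagines the same movement, plus the firing
# rates recorded from the implant at those moments (all synthetic here).
observed_velocity = rng.normal(size=(n_samples, 3))              # x, y, z
true_map = rng.normal(size=(n_neurons, 3))                       # unknown in reality
rates = (observed_velocity @ true_map.T
         + rng.normal(scale=0.5, size=(n_samples, n_neurons)))   # noisy recordings

# Least-squares fit of a linear decoder: firing rates -> intended velocity.
decoder, *_ = np.linalg.lstsq(rates, observed_velocity, rcond=None)

# Once calibrated, a fresh pattern of firing rates can be turned into an arm
# command without the arm (or the patient) having to move first.
predicted = rates[:5] @ decoder
print(np.round(predicted - observed_velocity[:5], 2))   # residuals are small
```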

Neurons that control movement tend to have a preferred direction, and fire their electrical pulses more frequently to perform a movement in that direction. “Once we understand which direction each neuron likes to fire in, we can look at a larger group of neurons and figure out what direction the patient is trying to move the arm in,” Schwartz said.
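Schwartz's description matches the classic "population vector" picture: each neuron fires fastest for movements along its preferred direction, and the intended direction can be read out by letting every neuron vote for its preferred direction in proportion to how far its firing rate rises above baseline. The sketch below illustrates that idea with randomly generated, cosine-tuned neurons; the tuning model and every number are assumptions for demonstration, not recordings from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons = 96
# Each neuron's preferred direction: a random unit vector in the 2-D plane.
angles = rng.uniform(0, 2 * np.pi, n_neurons)
preferred = np.column_stack([np.cos(angles), np.sin(angles)])

def firing_rates(move_dir, baseline=10.0, modulation=8.0):
    """Cosine tuning: a neuron fires above baseline when the movement points
    toward its preferred direction and below baseline when it points away."""
    return baseline + modulation * (preferred @ move_dir)

def population_vector(rates, baseline=10.0):
    """Each neuron votes for its preferred direction, weighted by how far its
    rate sits above baseline; the summed vote estimates the intended direction."""
    vote = ((rates - baseline)[:, None] * preferred).sum(axis=0)
    return vote / np.linalg.norm(vote)

intended = np.array([np.cos(0.6), np.sin(0.6)])                          # imagined movement
rates = firing_rates(intended) + rng.normal(scale=1.0, size=n_neurons)   # noisy spike counts
print("intended:", np.round(intended, 2), "decoded:", np.round(population_vector(rates), 2))
```

With enough neurons the noise in any single cell averages out, which is why a grid of 96 electrodes can yield a usable direction estimate even though each neuron on its own is unreliable.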

To begin with, the robotic arm was programmed to help Jan’s movements, by ignoring small mistakes in movements. But she quickly progressed to controlling the arm without help. After three months of training, she completed tasks with the robotic arm 91.6% of the time, and 30 seconds faster than when the trial began.
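The early "help" described here is a form of shared control. The exact assistance scheme isn't spelled out in the article, but one simple way to implement the idea, sketched below purely as an assumption, is to blend the decoded velocity with an ideal goal-directed velocity and dial the blending weight down to zero as the patient improves.

```python
import numpy as np

def assisted_velocity(decoded_v, goal_dir, assistance):
    """Blend the patient's decoded command with an ideal goal-directed command.
    assistance=1.0 fully corrects errors; assistance=0.0 is unassisted control."""
    return (1.0 - assistance) * decoded_v + assistance * goal_dir

decoded = np.array([0.9, 0.3])   # decoded velocity, slightly off target
goal = np.array([1.0, 0.0])      # straight toward the object to be grasped

# Early training: heavy assistance smooths out small decoding mistakes.
print(assisted_velocity(decoded, goal, assistance=0.7))
# Later sessions: assistance ramps down until the arm is under pure brain control.
print(assisted_velocity(decoded, goal, assistance=0.0))
```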

In an accompanying article, Grégoire Courtine, at the Swiss Federal Institute of Technology in Lausanne, said: “This bioinspired brain-machine interface is a remarkable technological and biomedical achievement.”

There are hurdles ahead for mind-controlled robot limbs. Though Jan’s performance continued to improve after the Lancet study was written, she has plateaued recently, because scar tissue that forms around the tips of the electrodes degrades the brain signals the computer receives.

Schwartz said that using thinner electrodes, around five thousandths of a millimetre thick, should solve this problem, as they will be too small to trigger the scarring process in the body.

The researchers now hope to build senses into the robotic arm, so the patient can feel the texture and temperature of the objects they are handling. To do this, sensors on the fingers of the robotic hand could send information back to the sensory regions of the brain.

Another major focus of future work is to develop a wireless system, so the patient does not have to be physically plugged into the computer that controls the robotic arm.

Thanks to Kebmodee and Dr. Rajadhyaksha for bringing this to the attention of the It’s Interesting community.

http://www.guardian.co.uk/science/2012/dec/17/paralysed-woman-robotic-arm-pittsburgh