Archive for the ‘neural prosthetics’ Category

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have developed a brain prosthesis designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If there’s damage at any region that prevents this translation, then there is the possibility that long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from a long time ago – things that were already translated into long-term memories before the brain damage occurred – but have difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed those signals into the model and read how the signals generated from the first region of the hippocampus were translated into signals generated by the second region of the hippocampus.
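
In spirit, the modelling step is a fit-then-predict problem: learn how signals recorded in one hippocampal region map onto signals in the next, then predict the downstream signal on trials the model has never seen. The toy sketch below uses plain least squares on simulated data; the real USC model is a far more elaborate nonlinear multi-input multi-output model, and every size and variable name here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated recordings: "region 1" activity per trial, plus a hidden
# ground-truth translation that produces the "region 2" signal.
n_trials, n_in, n_out = 200, 16, 8
region1 = rng.normal(size=(n_trials, n_in))
true_map = rng.normal(size=(n_in, n_out))
region2 = region1 @ true_map + 0.1 * rng.normal(size=(n_trials, n_out))

# Fit the region-1 -> region-2 translation on the first 150 trials.
W, *_ = np.linalg.lstsq(region1[:150], region2[:150], rcond=None)

# Predict the downstream signal on held-out trials and score the fit.
pred = region1[150:] @ W
corr = np.corrcoef(pred.ravel(), region2[150:].ravel())[0, 1]
print(f"held-out correlation: {corr:.2f}")
```

Scoring on held-out trials mirrors the study's logic: a model that predicts region-2 output from region-1 input on new data could, in principle, stand in for the damaged region between them.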

In hundreds of trials conducted with nine patients, the algorithm predicted how the signals would be translated with about 90 percent accuracy.

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions in order to try to bypass the damage and enable the formation of an accurate long-term memory.

http://medicalxpress.com/news/2015-09-scientists-bypass-brain-re-encoding-memories.html#nRlv


Talking to yourself used to be a strictly private pastime. That’s no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world.

“If you’re reading text in a newspaper or a book, you hear a voice in your own head,” says Brian Pasley at the University of California, Berkeley. “We’re trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak.”

When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words.

In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brains to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain’s temporal lobe were only active in response to certain aspects of sound, such as a specific frequency. One set of neurons might respond only to sound waves at a frequency of 1000 hertz, for example, while another set responded only to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269).

The team hypothesised that hearing speech and thinking to oneself might spark some of the same neural signatures in the brain. They supposed that an algorithm trained to identify speech heard out loud might also be able to identify words that are thought.

Mind-reading

To test the idea, they recorded brain activity in another seven people undergoing epilepsy surgery, while they looked at a screen that displayed text from either the Gettysburg Address, John F. Kennedy’s inaugural address or the nursery rhyme Humpty Dumpty.

Each participant was asked to read the text aloud, read it silently in their head and then do nothing. While they read the text out loud, the team worked out which neurons were reacting to what aspects of speech and generated a personalised decoder to interpret this information. The decoder was used to create a spectrogram – a visual representation of the different frequencies of sound waves heard over time. As each frequency correlates to specific sounds in each word spoken, the spectrogram can be used to recreate what had been said. They then applied the decoder to the brain activity that occurred while the participants read the passages silently to themselves.
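
A spectrogram of the kind described above is straightforward to sketch: slide a short window along the audio, take the magnitude of its Fourier transform, and stack the results over time. This minimal NumPy version is not the study's actual pipeline, and the window size, hop, and sample rate are arbitrary choices; it simply shows how a 1000-hertz tone lights up a single frequency bin, the kind of frequency-specific feature those temporal-lobe neurons were tuned to.

```python
import numpy as np

def spectrogram(signal, sample_rate, window=256, hop=128):
    """Return (freqs, times, magnitude) for a 1-D signal."""
    win = np.hanning(window)  # taper each frame to reduce spectral leakage
    frames = [signal[i:i + window] * win
              for i in range(0, len(signal) - window + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))  # magnitude per frequency bin
    freqs = np.fft.rfftfreq(window, d=1.0 / sample_rate)
    times = np.arange(len(frames)) * hop / sample_rate
    return freqs, times, spec

# A pure 1000 Hz tone should concentrate its energy in the bin nearest
# 1000 Hz, just as one set of neurons responded only to that frequency.
sr = 8000
t = np.arange(sr) / sr  # one second of samples
tone = np.sin(2 * np.pi * 1000 * t)
freqs, times, spec = spectrogram(tone, sr)
peak_bin = spec.mean(axis=0).argmax()
print(f"peak energy near {freqs[peak_bin]:.0f} Hz")
```

Running the decoder in reverse, from neural activity back to a spectrogram like this one, is what lets the team recreate what had been said.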

Although neural activity during imagined speech differs slightly from that during actual speech, the decoder was able to reconstruct which words several of the volunteers were thinking, using neural activity alone (Frontiers in Neuroengineering, doi.org/whb).

The algorithm isn’t perfect, says Stephanie Martin, who worked on the study with Pasley. “We got significant results but it’s not good enough yet to build a device.”

In practice, if the decoder is to be used by people who are unable to speak it would have to be trained on what they hear rather than their own speech. “We don’t think it would be an issue to train the decoder on heard speech because they share overlapping brain areas,” says Martin.

The team is now fine-tuning their algorithms by looking at the neural activity associated with speaking rate and different pronunciations of the same word, for example. “The bar is very high,” says Pasley. “It’s preliminary data, and we’re still working on making it better.”

The team have also turned their hand to predicting what songs a person is listening to by playing lots of Pink Floyd to volunteers, and then working out which neurons respond to what aspects of the music. “Sound is sound,” says Pasley. “It all helps us understand different aspects of how the brain processes it.”

“Ultimately, if we understand covert speech well enough, we’ll be able to create a medical prosthesis that could help someone who is paralysed, or locked in and can’t speak,” he says.

Several other researchers are also investigating ways to read the human mind. Some can tell what pictures a person is looking at, others have worked out what neural activity represents certain concepts in the brain, and one team has even produced crude reproductions of movie clips that someone is watching just by analysing their brain activity. So is it possible to put it all together to create one multisensory mind-reading device?

In theory, yes, says Martin, but it would be extraordinarily complicated. She says you would need a huge amount of data for each thing you are trying to predict. “It would be really interesting to look into. It would allow us to predict what people are doing or thinking,” she says. “But we need individual decoders that work really well before combining different senses.”

http://www.newscientist.com/article/mg22429934.000-brain-decoder-can-eavesdrop-on-your-inner-voice.html

Imagine being confined to a bed, diagnosed as “vegetative”—the doctors think you’re completely unresponsive and unaware, but they’re wrong. As many as one-third of vegetative patients are misdiagnosed, according to a new study in The Lancet. Using brain imaging techniques, researchers found signs of minimal consciousness in 13 of 42 patients who were considered vegetative. “The consequences are huge,” lead author Dr. Steven Laureys, of the Coma Science Group at the Université de Liège, tells Maclean’s. “These patients have emotions; they may feel pain; studies have shown they have a better outcome [than vegetative patients]. Distinguishing between unconscious, and a little bit conscious, is very important.”

Detecting human consciousness following brain injury remains exceedingly difficult. Vegetative patients are typically diagnosed by a bedside clinical exam, and remain “neglected” in the health care system, Laureys says. Once diagnosed, “they might not be [re-examined] for years. Nobody questions whether or not there could be something more going on.” That’s about to change.

Laureys has collaborated previously with British neuroscientist Adrian Owen, based at Western University in London, Ont., who holds the Canada Excellence Research Chair in Cognitive Neuroscience and Imaging. (Owen’s work was featured in Maclean’s in October 2013.) Together they co-authored a now-famous paper in the journal Science, in 2006, in which a 23-year-old vegetative patient was instructed to either imagine playing tennis, or moving around her house. Using functional magnetic resonance imaging, or fMRI, they saw that the patient was activating two different parts of her brain, just like healthy volunteers did. Laureys and Owen also worked together on a 2010 follow-up study, in the New England Journal of Medicine, where the same technique was used to ask a patient to answer “yes” or “no” to various questions, presenting the stunning possibility that some vegetative patients might be able to communicate.

In the new Lancet paper, Laureys used two functional brain imaging techniques, fMRI and positron emission tomography (PET), to examine 126 patients with severe brain injury: 41 of them vegetative, four locked-in (a rare condition in which patients are fully conscious and aware, yet completely paralyzed from head-to-toe), and another 81 who were minimally conscious. After finding that 13 of 42 vegetative patients showed brain activity indicating minimal consciousness, they re-examined them a year later. By then, nine of the 13 had improved, and progressed into a minimally conscious state or higher.

The mounting evidence that some vegetative patients are conscious, even minimally so, carries ethical and legal implications. Just last year, Canada’s Supreme Court ruled that doctors couldn’t unilaterally pull the plug on Hassan Rasouli, a man in a vegetative state. This work raises the possibility that one day, some patients may be able to communicate through some kind of brain-machine interface, and maybe even weigh in on their own medical treatment. For now, doctors could make better use of functional brain imaging tests to diagnose these patients, Laureys believes. Kate Bainbridge, who was one of the first vegetative patients examined by Owen, was given a scan that showed her brain lighting up in response to images of her family. Her health later improved. “I can’t say how lucky I was to have the scan,” she said in an email to Maclean’s last year. “[It] really scares me to think what would have happened if I hadn’t had it.”

https://ca.news.yahoo.com/one-third-of-vegetative-patients-may-be-conscious–study-195412300.html


In a lab in Oxford University’s experimental psychology department, researcher Roi Cohen Kadosh is testing an intriguing treatment: He is sending low-dose electric current through the brains of adults and children as young as 8 to make them better at math.

A relatively new brain-stimulation technique called transcranial electrical stimulation may help people learn and improve their understanding of math concepts.

The electrodes are placed in a tightly fitted cap worn on the head. The device, run off a 9-volt battery commonly used in smoke detectors, induces only a gentle current and can be targeted to specific areas of the brain or applied generally. The mild current reduces the risk of side effects, which has opened up possibilities about using it, even in individuals without a disorder, as a general cognitive enhancer. Scientists also are investigating its use to treat mood disorders and other conditions.

Dr. Cohen Kadosh’s pioneering work on learning enhancement and brain stimulation is one example of the long journey faced by scientists studying brain-stimulation and cognitive-stimulation techniques. Like other researchers in the community, he has dealt with public concerns about safety and side effects, plus skepticism from other scientists about whether these findings would hold in the wider population.

There are also ethical questions about the technique. If it truly works to enhance cognitive performance, should it be accessible to anyone who can afford to buy the device—which already is available for sale in the U.S.? Should parents be able to perform such stimulation on their kids without monitoring?

“It’s early days but that hasn’t stopped some companies from selling the device and marketing it as a learning tool,” Dr. Cohen Kadosh says. “Be very careful.”

The idea of using electric current to treat diseases of the brain has a long and fraught history, perhaps most notably with what was called electroshock therapy, developed in 1938 to treat severe mental illness and often portrayed as a medieval treatment that rendered people zombielike in movies such as “One Flew Over the Cuckoo’s Nest.”

Electroconvulsive therapy has improved dramatically over the years and is considered appropriate for use against types of major depression that don’t respond to other treatments, as well as other related, severe mood states.

A number of new brain-stimulation techniques have been developed, including deep brain stimulation, which acts like a pacemaker for the brain. With DBS, electrodes are implanted into the brain and, through a battery pack in the chest, stimulate neurons continuously. DBS devices have been approved by U.S. regulators to treat tremors in Parkinson’s disease and continue to be studied as possible treatments for chronic pain and obsessive-compulsive disorder.

Transcranial electrical stimulation, or tES, is one of the newest brain stimulation techniques. Unlike DBS, it is noninvasive.

If the technique continues to show promise, “this type of method may have a chance to be the new drug of the 21st century,” says Dr. Cohen Kadosh.

The 37-year-old father of two completed graduate school at Ben-Gurion University in Israel before coming to London to do postdoctoral work with Vincent Walsh at University College London. Now, sitting in a small, tidy office with a model brain on a shelf, the senior research fellow at Oxford speaks with cautious enthusiasm about brain stimulation and its potential to help children with math difficulties.

Up to 6% of the population is estimated to have a math-learning disability called developmental dyscalculia, similar to dyslexia but with numerals instead of letters. Many more people say they find math difficult. People with developmental dyscalculia also may have trouble with daily tasks, such as remembering phone numbers and understanding bills.

Whether transcranial electrical stimulation proves to be a useful cognitive enhancer remains to be seen. Dr. Cohen Kadosh first thought about the possibility as a university student in Israel, where he conducted an experiment using transcranial magnetic stimulation, a tool that employs magnetic coils to induce a more powerful electrical current.

He found that he could temporarily turn off regions of the brain known to be important for cognitive skills. When the parietal lobe of the brain was stimulated using that technique, he found that the basic arithmetic skills of doctoral students who were normally very good with numbers were reduced to a level similar to those with developmental dyscalculia.

That led to his next inquiry: If current could turn off regions of the brain making people temporarily math-challenged, could a different type of stimulation improve math performance? Cognitive training helps to some extent in some individuals with math difficulties. Dr. Cohen Kadosh wondered if such learning could be improved if the brain was stimulated at the same time.

But transcranial magnetic stimulation wasn’t the right tool because the current induced was too strong. Dr. Cohen Kadosh puzzled over what type of stimulation would be appropriate until a colleague who had worked with researchers in Germany returned and told him about tES, at the time a new technique. Dr. Cohen Kadosh decided tES was the way to go.

His group has since conducted a series of studies suggesting that tES appears helpful in improving learning speed on various math tasks in adults who don’t have trouble with math. Now they’ve found preliminary evidence for those who struggle with math, too.

Participants typically come for 30-minute stimulation-and-training sessions daily for a week. His team is now starting to study children between 8 and 10 who receive twice-weekly training and stimulation for a month. Studies of tES, including the ones conducted by Dr. Cohen Kadosh, tend to have small sample sizes of up to several dozen participants; replication of the findings by other researchers is important.

In a small, toasty room, participants, often Oxford students, sit in front of a computer screen and complete hundreds of trials in which they learn to associate numerical values with abstract, nonnumerical symbols, figuring out which symbols are “greater” than others, in the way that people learn to know that three is greater than two.

When neurons fire, they transfer information, which could facilitate learning. The tES technique appears to work by lowering the threshold neurons need to reach before they fire, studies have shown. In addition, the stimulation appears to cause changes in neurochemicals involved in learning and memory.
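
The threshold-lowering hypothesis can be illustrated with a toy leaky integrate-and-fire neuron: a weak constant bias current does not fire the cell on its own, but it keeps the membrane closer to threshold, so the very same inputs trigger more spikes. All parameter values below are arbitrary; this is a sketch of the hypothesis, not a model of tES itself.

```python
import numpy as np

def count_spikes(inputs, bias=0.0, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate input, spike at threshold."""
    v, spikes = 0.0, 0
    for x in inputs:
        v = leak * v + x + bias  # membrane decays, then integrates input
        if v >= threshold:
            spikes += 1
            v = 0.0  # reset after a spike
    return spikes

rng = np.random.default_rng(2)
inputs = rng.uniform(0.0, 0.25, size=1000)  # identical input train both times

baseline = count_spikes(inputs)               # no stimulation
stimulated = count_spikes(inputs, bias=0.05)  # weak constant bias current
print(baseline, stimulated)
```

With the bias applied, the stimulated run fires more often than the baseline run on the same inputs, which is the sense in which stimulation "lowers the threshold" for firing.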

However, the results so far in the field appear to differ significantly by individual. Stimulating the wrong brain region, or stimulating at too high a current or for too long, has been known to inhibit learning. The young and the elderly, for instance, respond in exactly opposite ways to the same current in the same location, Dr. Cohen Kadosh says.

He and a colleague published a paper in January in the journal Frontiers in Human Neuroscience, in which they found that one individual with developmental dyscalculia improved her performance significantly while the other study subject didn’t.

What is clear is that anyone trying the treatment would need to train as well as to stimulate the brain. Otherwise “it’s like taking steroids but sitting on a couch,” says Dr. Cohen Kadosh.

Dr. Cohen Kadosh and Beatrix Krause, a graduate student in the lab, have been examining individual differences in response. Whether a room is dark or well-lighted, if a person smokes and even where women are in their menstrual cycle can affect the brain’s response to electrical stimulation, studies have found.

Results from his lab and others have shown that even after stimulation stops, those who benefited maintain a higher performance level than those who weren’t stimulated, for up to a year afterward. If there isn’t any follow-up training, everyone’s performance declines over time, but the stimulated group still performs better than the non-stimulated group. It remains to be seen whether reintroducing stimulation would then improve learning again, Dr. Cohen Kadosh says.

http://online.wsj.com/news/articles/SB10001424052702303650204579374951187246122?mod=WSJ_article_EditorsPicks&mg=reno64-wsj&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB10001424052702303650204579374951187246122.html%3Fmod%3DWSJ_article_EditorsPicks

Doctors in the US have induced feelings of intense determination in two men by stimulating a part of their brains with gentle electric currents.

The men were having a routine procedure to locate regions in their brains that caused epileptic seizures when they felt their heart rates rise, a sense of foreboding, and an overwhelming desire to persevere against a looming hardship.

The remarkable findings could help researchers develop treatments for depression and other disorders where people are debilitated by a lack of motivation.

One patient said the feeling was like driving a car into a raging storm. When his brain was stimulated, he sensed a shaking in his chest and a surge in his pulse. In six trials, he felt the same sensations time and again.

Comparing the feelings to a frantic drive towards a storm, the patient said: “You’re only halfway there and you have no other way to turn around and go back, you have to keep going forward.”

When asked by doctors to elaborate on whether the feeling was good or bad, he said: “It was more of a positive thing, like push harder, push harder, push harder to try and get through this.”

A second patient had similar feelings when his brain was stimulated in the same region, called the anterior midcingulate cortex (aMCC). He felt worried that something terrible was about to happen, but knew he had to fight and not give up, according to a case study in the journal Neuron.

Both men were having an exploratory procedure to find the focal point in their brains that caused them to suffer epileptic fits. In the procedure, doctors sink fine electrodes deep into different parts of the brain and stimulate them with tiny electrical currents until the patient senses the “aura” that precedes a seizure. Often, seizures can be treated by removing tissue from this part of the brain.

“In the very first patient this was something very unexpected, and we didn’t report it,” said Josef Parvizi at Stanford University in California. “But then I was doing functional mapping on the second patient and he suddenly experienced a very similar thing.”

“It’s extraordinary that two individuals with very different past experiences respond in a similar way to one or two seconds of very low intensity electricity delivered to the same area of their brain. These patients are normal individuals, they have their IQ, they have their jobs. We are not reporting these findings in sick brains,” Parvizi said.

The men were stimulated with between two and eight milliamps of electrical current, but in tests the doctors administered sham stimulation too. In the sham tests, they told the patients they were about to stimulate the brain, but had switched off the electrical supply. In these cases, the men reported no changes to their feelings. The sensation was only induced in a small area of the brain, and vanished when doctors implanted electrodes just five millimetres away.

Parvizi said a crucial follow-up experiment will be to test whether stimulation of the brain region really makes people more determined, or simply creates the sensation of perseverance. If future studies replicate the findings, stimulation of the brain region – perhaps without the need for brain-penetrating electrodes – could be used to help people with severe depression.

The anterior midcingulate cortex seems to be important in helping us select responses and make decisions in light of the feedback we get. Brent Vogt, a neurobiologist at Boston University, said patients with chronic pain and obsessive-compulsive disorder have already been treated by destroying part of the aMCC. “Why not stimulate it? If this would enhance relieving depression, for example, let’s go,” he said.

http://www.theguardian.com/science/2013/dec/05/determination-electrical-brain-stimulation

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.


by Emily Underwood
ScienceNOW

If you are one of the 20% of healthy adults who struggle with basic arithmetic, simple tasks like splitting the dinner bill can be excruciating. Now, a new study suggests that a gentle, painless electrical current applied to the brain can boost math performance for up to 6 months. Researchers don’t fully understand how it works, however, and there could be side effects.

The idea of using electrical current to alter brain activity is nothing new—electroshock therapy, which induces seizures for therapeutic effect, is probably the best known and most dramatic example. In recent years, however, a slew of studies has shown that much milder electrical stimulation applied to targeted regions of the brain can dramatically accelerate learning in a wide range of tasks, from marksmanship to speech rehabilitation after stroke.

In 2010, cognitive neuroscientist Roi Cohen Kadosh of the University of Oxford in the United Kingdom showed that, when combined with training, electrical brain stimulation can make people better at very basic numerical tasks, such as judging which of two quantities is larger. However, it wasn’t clear how those basic numerical skills would translate to real-world math ability.

To answer that question, Cohen Kadosh recruited 25 volunteers to practice math while receiving either real or “sham” brain stimulation. Two sponge-covered electrodes, fixed to either side of the forehead with a stretchy athletic band, targeted an area of the prefrontal cortex considered key to arithmetic processing, says Jacqueline Thompson, a Ph.D. student in Cohen Kadosh’s lab and a co-author on the study. The electrical current slowly ramped up to about 1 milliamp—a tiny current, far less than an AA battery can deliver—then randomly fluctuated between high and low values. For the sham group, the researchers simulated the initial sensation of the increase by releasing a small amount of current, then turned it off.

For roughly 20 minutes per day over 5 days, the participants memorized arbitrary mathematical “facts,” such as 4#10 = 23, then performed a more sophisticated task requiring multiple steps of arithmetic, also based on memorized symbols. A squiggle, for example, might mean “add 2,” or “subtract 1.” This is the first time that brain stimulation has been applied to improving such complex math skills, says neuroethicist Peter Reiner of the University of British Columbia, Vancouver, in Canada, who wasn’t involved in the research.

The researchers also used a brain imaging technique called near-infrared spectroscopy to measure how efficiently the participants’ brains were working as they performed the tasks.

Although the two groups performed at the same level on the first day, over the next 4 days people receiving brain stimulation along with training learned to do the tasks two to five times faster than people receiving a sham treatment, the authors reported in Current Biology. Six months later, the researchers called the participants back and found that people who had received brain stimulation were still roughly 30% faster at the same types of mathematical challenges. The targeted brain region also showed more efficient activity, Thompson says.

The fact that only participants who received electrical stimulation and practiced math showed lasting physiological changes in their brains suggests that experience is required to seal in the effects of stimulation, says Michael Weisend, a neuroscientist at the Mind Research Network in Albuquerque, New Mexico, who wasn’t involved with the study. That’s valuable information for people who hope to get benefits from stimulation alone, he says. “It’s not going to be a magic bullet.”

Although it’s not clear how the technique works, Thompson says, one hypothesis is that the current helps synchronize neuron firing, enabling the brain to work more efficiently. Scientists also don’t know if negative or unintended effects might result. Although no side effects of brain stimulation have yet been reported, “it’s impossible to say with any certainty” that there aren’t any, Thompson says.

“Math is only one of dozens of skills in which this could be used,” Reiner says, adding that it’s “not unreasonable” to imagine that this and similar stimulation techniques could replace the use of pills for cognitive enhancement.

In the future, the researchers hope to include groups that often struggle with math, such as people with neurodegenerative disorders and a condition called developmental dyscalculia. As long as further testing shows that the technique is safe and effective, children in schools could also receive brain stimulation along with their lessons, Thompson says. But there’s “a long way to go” before the method is ready for schools, she says. In the meantime, she adds, “We strongly caution you not to try this at home, no matter how tempted you may be to slap a battery on your kid’s head.”

http://news.sciencemag.org/sciencenow/2013/05/trouble-with-math-maybe-you-shou.html?ref=hp


William Gibson’s popular science fiction tale “Johnny Mnemonic” foresaw sensitive information being carried by microchips in the brain by 2021. A team of American neuroscientists could be making this fantasy world a reality. Their motivation is different but the outcome would be somewhat similar. Hailed as one of 2013’s top ten technological breakthroughs by MIT Technology Review, the work by the University of Southern California, North Carolina’s Wake Forest University and other partners has actually spanned a decade.

But the U.S.-wide team now thinks that it will see a memory device being implanted in a small number of human volunteers within two years and available to patients in five to 10 years. They can’t quite contain their excitement. “I never thought I’d see this in my lifetime,” said Ted Berger, professor of biomedical engineering at the University of Southern California in Los Angeles. “I might not benefit from it myself but my kids will.”

Rob Hampson, associate professor of physiology and pharmacology at Wake Forest University, agrees. “We keep pushing forward, every time I put an estimate on it, it gets shorter and shorter.”

The scientists — who bring varied skills to the table, including mathematical modeling and psychiatry — believe they have cracked how long-term memories are made, stored and retrieved and how to replicate this process in brains that are damaged, particularly by stroke or localized injury.

Berger said they record a memory being made, in an undamaged area of the brain, then use that data to predict what a damaged area “downstream” should be doing. Electrodes are then used to stimulate the damaged area to replicate the action of the undamaged cells.

They concentrate on the hippocampus — part of the cerebral cortex which sits deep in the brain — where short-term memories become long-term ones. Berger has looked at how electrical signals travel through neurons there to form those long-term memories and has used his expertise in mathematical modeling to mimic these movements using electronics.

Hampson, whose university has done much of the animal studies, adds: “We support and reinforce the signal in the hippocampus but we are moving forward with the idea that if you can study enough of the inputs and outputs to replace the function of the hippocampus, you can bypass the hippocampus.”

The team’s experiments on rats and monkeys have shown that certain brain functions can be replaced with signals via electrodes. You would think that the work of then creating an implant for people and getting such a thing approved would be a Herculean task, but think again.

For 15 years, people have been having brain implants to provide deep brain stimulation to treat epilepsy and Parkinson’s disease — a reported 80,000 people have now had such devices placed in their brains. So many of the hurdles have already been overcome — particularly the “yuck factor” and the fear factor.

“It’s now commonly accepted that humans will have electrodes put in them — it’s done for epilepsy, deep brain stimulation, (that has made it) easier for investigative research, it’s much more acceptable now than five to 10 years ago,” Hampson says.

Much of the work that remains now is in shrinking down the electronics.

“Right now it’s not a device, it’s a fair amount of equipment,” Hampson says. “We’re probably looking at devices in the five to 10 year range for human patients.”

The ultimate goal in memory research would be to treat Alzheimer’s disease, but unlike stroke or localized brain injury, Alzheimer’s tends to affect many parts of the brain, especially in its later stages, making these implants a less likely option any time soon.

Berger foresees a future, however, where drugs and implants could be used together to treat early dementia. Drugs could be used to enhance the action of cells that surround the most damaged areas, and the team’s memory implant could be used to replace a lot of the lost cells in the center of the damaged area. “I think the best strategy is going to involve both drugs and devices,” he says.

Unfortunately, the team found that its method can’t help patients with advanced dementia.

“When looking at a patient with mild memory loss, there’s probably enough residual signal to work with, but not when there’s significant memory loss,” Hampson said.

Constantine Lyketsos, professor of psychiatry and behavioral sciences at Johns Hopkins Medicine in Baltimore, which is trialing a deep brain stimulation implant for Alzheimer’s patients, was a little skeptical of the other team’s claims.

“The brain has a lot of redundancy, it can function pretty well if it loses one or two parts. But memory involves circuits diffusely dispersed throughout the brain so it’s hard to envision.” However, he added that it was more likely to be successful in helping victims of stroke or localized brain injury as indeed its makers are aiming to do.

The UK’s Alzheimer’s Society is cautiously optimistic.

“Finding ways to combat symptoms caused by changes in the brain is an ongoing battle for researchers. An implant like this one is an interesting avenue to explore,” said Doug Brown, director of research and development.

Hampson says the team’s breakthrough is “like the difference between a cane, to help you walk, and a prosthetic limb — it’s two different approaches.”

It will still take time for many people to accept their findings and their claims, he says, but they don’t expect to have a shortage of volunteers stepping forward to try their implant — the project is partly funded by the U.S. military which is looking for help with battlefield injuries.

There are U.S. soldiers coming back from operations with brain trauma and a neurologist at DARPA (the Defense Advanced Research Projects Agency) is asking “what can you do for my boys?” Hampson says.

“That’s what it’s all about.”

http://www.cnn.com/2013/05/07/tech/brain-memory-implants-humans/index.html?iref=allsearch

Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a picture of a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
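That trial-and-error search can be illustrated with a toy simulation. Everything here is an assumption for illustration: a fixed random “decoder” stands in for the electrode-to-cursor mapping the brain cannot see, and a stochastic hill-climb stands in for neuroplastic adaptation under visual feedback.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed, arbitrary decoder: maps 10 recorded firing rates to a 2-D cursor
# position. The "brain" does not know these weights; it must adapt to them.
decoder = rng.normal(size=(10, 2))
target = np.array([1.0, 0.5])

rates = rng.normal(size=10)  # current configuration of neural activity
best_err = np.linalg.norm(rates @ decoder - target)

# Trial and error: keep random perturbations that move the cursor closer
# to the target -- a crude stand-in for rewiring under feedback.
for _ in range(5000):
    trial = rates + rng.normal(scale=0.05, size=10)
    err = np.linalg.norm(trial @ decoder - target)
    if err < best_err:  # feedback: did the cursor move as intended?
        rates, best_err = trial, err

print(round(best_err, 3))  # error shrinks far below its starting value
```

The decoder never changes; only the activity pattern feeding it does. That asymmetry is why researchers don’t need to know in advance which neurons will fire at what rate.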

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?'” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifesciences showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may be able to detect and respond to more abstract types of mental activity in the always-changing future.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines

The world’s first brain-to-brain connection has given rats the power to communicate by thought alone.

“Many people thought it could never happen,” says Miguel Nicolelis at Duke University in Durham, North Carolina. Although monkeys have been able to control robots with their mind using brain-to-machine interfaces, work by Nicolelis’s team has, for the first time, demonstrated a direct interface between two brains – with the rats able to share both motor and sensory information.

The feat was achieved by first training rats to press one of two levers when an LED above that lever was lit. A correct action opened a hatch containing a drink of water. The rats were then split into two groups, designated as “encoders” and “decoders”.

An array of microelectrodes – each about one-hundredth the width of a human hair – was then implanted in the encoder rats’ primary motor cortex, an area of the brain that processes movement. The team used the implant to record the neuronal activity that occurs just before the rat made a decision in the lever task. They found that pressing the left lever produced a different pattern of activity from pressing the right lever, regardless of which was the correct action.

Next, the team recreated these patterns in decoder rats, using an implant in the same brain area that stimulates neurons rather than recording from them. The decoders received a few training sessions to prime them to pick the correct lever in response to the different patterns of stimulation.

The researchers then wired up the implants of an encoder and a decoder rat. The pair were given the same lever-press task again, but this time only the encoder rats saw the LEDs come on. Brain signals from the encoder rat were recorded just before they pressed the lever and transmitted to the decoder rat. The team found that the decoders, despite having no visual cue, pressed the correct lever between 60 and 72 per cent of the time.
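The core decoding step — telling left-lever activity patterns from right-lever ones — can be sketched with simulated data and a nearest-centroid classifier. The channel count, noise level and classifier choice are illustrative assumptions, not the published analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated pre-press activity: 32 channels whose mean firing differs
# between left- and right-lever trials, buried in heavy trial-to-trial noise.
n_trials, n_chan = 200, 32
mu_l, mu_r = rng.normal(size=n_chan), rng.normal(size=n_chan)
left = mu_l + rng.normal(scale=6.0, size=(n_trials, n_chan))
right = mu_r + rng.normal(scale=6.0, size=(n_trials, n_chan))

# Nearest-centroid decoding: estimate each class mean from half the trials,
# then classify held-out patterns by whichever mean they sit closer to.
c_l, c_r = left[:100].mean(axis=0), right[:100].mean(axis=0)

def predict(x):
    return "L" if np.linalg.norm(x - c_l) < np.linalg.norm(x - c_r) else "R"

hits = sum(predict(x) == "L" for x in left[100:]) + \
       sum(predict(x) == "R" for x in right[100:])
accuracy = hits / 200
print(accuracy)  # above the 50% chance level
```

Noisy but above-chance decoding of this kind is consistent with the reported 60–72 per cent success rate: the signal distinguishing the two patterns is real but far from perfectly transmitted.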

The rats’ ability to cooperate was reinforced by rewarding both rats if the communication resulted in a correct outcome. Such reinforcement led to the transmission of clearer signals, improving the rats’ success rate compared with cases where decoders were given a pre-recorded signal. This was a big surprise, says Nicolelis. “The encoder’s brain activity became more precise. This could have happened because the animal enhanced its attention during the performance of the next trial after a decoder error.”

If the decoders had not been primed to relate specific activity with the left or right lever prior to being linked with an encoder, the only consequence would be that it would have taken a bit more time for them to learn the task while interacting with the encoder, says Nicolelis. “We simply primed the decoder so that it would get the gist of the task it had to perform.” In unpublished monkey experiments doing a similar task, the team did not need to prime the animals at all.

In a second experiment, rats were trained to explore a hole with their whiskers and indicate whether it was narrow or wide by turning to the left or right. Pairs of rats were then connected as before, but this time the implants were placed in their primary somatosensory cortex, an area that processes touch. More than 60 per cent of the time, decoder rats correctly indicated the width of a gap that only the encoder rats were exploring.

Finally, encoder rats were held still while their whiskers were stroked with metal bars. The researchers observed patterns of activity in the somatosensory cortex of the decoder rats that matched that of the encoder rats, even though the whiskers of the decoder rats had not been touched.

Pairs of rats were even able to cooperate across continents using cyberspace. Brain signals from an encoder rat at the Edmond and Lily Safra International Institute of Neuroscience of Natal in Brazil were sent to a decoder in Nicolelis’s lab in North Carolina via the internet. Though there was a slight transmission delay, the decoder rat still performed with an accuracy similar to that of rats in closer proximity to their encoders.

Christopher James at the University of Warwick, UK, who works on brain-to-machine interfaces for prostheses, says the work is a “wake-up call” for people who haven’t caught up with recent advances in brain research.

We have the technology to create implants for long-term use, he says. What is missing, though, is a full understanding of the brain processes involved. In this case, Nicolelis’s team is “blasting a relatively large area of the brain with a signal they’re not sure is 100 per cent correct,” he says.

That’s because the exact information being communicated between the rats’ brains is not clear. The brain activity of the encoders cannot be transferred precisely to the decoders because that would require matching the patterns neuron for neuron, which is not currently possible. Instead, the two patterns are closely related in terms of their frequency and spatial representation.

“We are still using a sledgehammer to crack a walnut,” says James. “They’re not hearing the voice of God.” But the rats are certainly sending and receiving more than a binary signal that simply points to one or other lever, he says. “I think it will be possible one day to transfer an abstract thought.”

The decoders have to interpret relatively complex brain patterns, says Marshall Shuler at Johns Hopkins University in Baltimore, Maryland. The animals learn the relevance of these new patterns and their brains adapt to the signals. “But the decoders are probably not having the same quality of experience as the encoders,” he says.

Patrick Degenaar at Newcastle University in the UK says that the military might one day be able to deploy genetically modified insects or small mammals that are controlled by the brain signals of a remote human operator. These would be drones that could feed themselves, he says, and could be used for surveillance or even assassination missions. “You’d probably need a flying bug to get near the head [of someone to be targeted],” he says.

Nicolelis is most excited about the future of multiple networked brains. He is currently trialling the implants in monkeys, getting them to work together telepathically to complete a task. For example, each monkey might only have access to part of the information needed to make the right decision in a game. Several monkeys would then need to communicate with each other in order to successfully complete the task.

“In the distant future we may be able to communicate via a brain-net,” says Nicolelis. “I would be very glad if the brain-net my great grandchildren used was due to their great grandfather’s work.”

Journal reference: Scientific Reports, DOI: 10.1038/srep01319

Duke University researchers have effectively given laboratory rats a “sixth sense” using an implant in their brains.

An experimental device allowed the rats to “touch” infrared light – which is normally invisible to them.

The team at Duke University fitted the rats with an infrared detector wired up to microscopic electrodes that were implanted in the part of their brains that processes tactile information.

The results of the study were published in Nature Communications journal.

The researchers say that, in theory at least, a human with a damaged visual cortex might be able to regain sight through a device implanted in another part of the brain.

Lead author Miguel Nicolelis said this was the first time a brain-machine interface has augmented a sense in adult animals.

The experiment also shows that a new sensory input can be interpreted by a region of the brain that normally does something else (without having to “hijack” the function of that brain region).

“We could create devices sensitive to any physical energy,” said Prof Nicolelis, from the Duke University Medical Center in Durham, North Carolina.

“It could be magnetic fields, radio waves, or ultrasound. We chose infrared initially because it didn’t interfere with our electrophysiological recordings.”

His colleague Eric Thomson commented: “The philosophy of the field of brain-machine interfaces has until now been to attempt to restore a motor function lost to lesion or damage of the central nervous system.

“This is the first paper in which a neuroprosthetic device was used to augment function – literally enabling a normal animal to acquire a sixth sense.”

In their experiments, the researchers used a test chamber with three light sources that could be switched on randomly.

They taught the rats to choose the active light source by poking their noses into a port to receive a sip of water as a reward. They then implanted the microelectrodes, each about a tenth the diameter of a human hair, into the animals’ brains. These electrodes were attached to the infrared detectors.
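The sensor-to-brain coupling described here amounts to translating the detector’s output into stimulation pulses delivered to the touch cortex. Below is a minimal sketch assuming a simple linear mapping; the 0–400 Hz range and the function name are hypothetical, chosen only for illustration.

```python
# Hypothetical coupling between an infrared detector and a stimulator.
def ir_to_stim_rate(ir_intensity: float, max_rate_hz: float = 400.0) -> float:
    """Map a normalized IR reading (0..1) to a stimulation pulse rate in Hz."""
    clamped = min(max(ir_intensity, 0.0), 1.0)  # keep reading in range
    return clamped * max_rate_hz

# As the rat turns toward the active IR source, the detector reading rises
# and the somatosensory cortex receives faster pulse trains.
readings = [0.0, 0.25, 0.5, 1.0]
rates = [ir_to_stim_rate(r) for r in readings]
print(rates)  # [0.0, 100.0, 200.0, 400.0]
```

The key design point is that the animal is never told what the stimulation means; it is the graded relationship between head direction and pulse rate that lets the brain learn to treat the signal as a new sense.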

The scientists then returned the animals to the test chamber. At first, the rats scratched at their faces, indicating that they were interpreting the lights as touch. But after a month the animals learned to associate the signal in their brains with the infrared source.

They began to search actively for the signal, eventually achieving perfect scores in tracking and identifying the correct location of the invisible light source.

One key finding was that enlisting the touch cortex to detect infrared light did not reduce its ability to process touch signals.

http://www.bbc.co.uk/news/science-environment-21459745

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.