US military enhancing human skills with electrical brain stimulation


Study paves way for personnel such as drone operators to have electrical pulses sent into their brains to improve effectiveness in high-pressure situations.

US military scientists have used electrical brain stimulators to enhance mental skills of staff, in research that aims to boost the performance of air crews, drone operators and others in the armed forces’ most demanding roles.

The successful tests of the devices pave the way for servicemen and women to be wired up at critical times of duty, so that electrical pulses can be beamed into their brains to improve their effectiveness in high-pressure situations.

The brain stimulation kits use five electrodes to send weak electric currents through the skull and into specific parts of the cortex. Previous studies have found evidence that by helping neurons to fire, these minor brain zaps can boost cognitive ability.

The technology is seen as a safer alternative to prescription drugs, such as modafinil and Ritalin, both of which have been used off-label as performance-enhancing drugs in the armed forces.

But while electrical brain stimulation appears to have no harmful side effects, some experts say its long-term safety is unknown, and raise concerns about staff being forced to use the equipment if it is approved for military operations.

Others worry about the broader implications for the general workforce as this unregulated technology advances.

In a new report, scientists at Wright-Patterson Air Force Base in Ohio describe how the performance of military personnel can slump soon after they start work if the demands of the job become too intense.

“Within the air force, various operations such as remotely piloted and manned aircraft operations require a human operator to monitor and respond to multiple events simultaneously over a long period of time,” they write. “With the monotonous nature of these tasks, the operator’s performance may decline shortly after their work shift commences.”

But in a series of experiments at the air force base, the researchers found that electrical brain stimulation can improve people’s multitasking skills and stave off the drop in performance that comes with information overload. Writing in the journal Frontiers in Human Neuroscience, they say that the technology, known as transcranial direct current stimulation (tDCS), has a “profound effect”.

For the study, the scientists had men and women at the base take a test developed by Nasa to assess multitasking skills. The test requires people to keep a crosshair inside a moving circle on a computer screen, while constantly monitoring and responding to three other tasks on the screen.

To investigate whether tDCS boosted people’s scores, half of the volunteers had a constant two milliamp current beamed into the brain for the 36-minute-long test. The other half formed a control group and had only 30 seconds of stimulation at the start of the test.

According to the report, the brain stimulation group started to perform better than the control group four minutes into the test. “The findings provide new evidence that tDCS has the ability to augment and enhance multitasking capability in a human operator,” the researchers write. Larger studies must now look at whether the improvement in performance is real and, if so, how long it lasts.
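
To make the comparison concrete, here is a minimal, purely illustrative sketch of how performance scores from an active-stimulation group and a sham (control) group might be compared over the course of such a test. The data, group sizes and variable names are placeholders, not figures from the study.

```python
# Hypothetical sketch: compare time-binned multitasking scores for an
# active-tDCS group versus a sham group. All numbers are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores = {
    # rows = participants, columns = 4-minute bins of the 36-minute test
    "active": rng.normal(0.80, 0.05, size=(10, 9)),
    "sham":   rng.normal(0.72, 0.05, size=(10, 9)),
}

for t in range(scores["active"].shape[1]):
    t_stat, p = stats.ttest_ind(scores["active"][:, t], scores["sham"][:, t])
    print(f"bin {t}: active={scores['active'][:, t].mean():.2f} "
          f"sham={scores['sham'][:, t].mean():.2f} p={p:.3f}")
```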

The tests are not the first to claim beneficial effects from electrical brain stimulation. Last year, researchers at the same US facility found that tDCS seemed to work better than caffeine at keeping military target analysts vigilant after long hours at the desk. Brain stimulation has also been tested for its potential to help soldiers spot snipers more quickly in VR training programmes.

Neil Levy, deputy director of the Oxford Centre for Neuroethics, said that compared with prescription drugs, electrical brain stimulation could actually be a safer way to boost the performance of those in the armed forces. “I have more serious worries about the extent to which participants can give informed consent, and whether they can opt out once it is approved for use,” he said. “Even for those jobs where attention is absolutely critical, you want to be very careful about making it compulsory, or there being a strong social pressure to use it, before we are really sure about its long-term safety.”

But while the devices may be safe in the hands of experts, the technology is freely available, because the sale of brain stimulation kits is unregulated. They can be bought on the internet or assembled from simple components, which raises a greater concern, according to Levy. Young people whose brains are still developing may be tempted to experiment with the devices, and try higher currents than those used in laboratories, he said. “If you use high currents you can damage the brain,” he said.

In 2014 another Oxford scientist, Roi Cohen Kadosh, warned that while brain stimulation could improve performance at some tasks, it made people worse at others. In light of the work, Cohen Kadosh urged people not to use brain stimulators at home.

If the technology is proved safe in the long run though, it could help those who need it most, said Levy. “It may have a levelling-up effect, because it is cheap and enhancers tend to benefit the people that perform less well,” he said.

https://www.theguardian.com/science/2016/nov/07/us-military-successfully-tests-electrical-brain-stimulation-to-enhance-staff-skills

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Scientists encode memories in a way that bypasses damaged brain tissue

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have developed a brain prosthesis that is designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If there’s damage at any region that prevents this translation, then there is the possibility that long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from a long time ago – things that were already translated into long-term memories before the brain damage occurred – but has difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed those signals into the model and read how the signals generated from the first region of the hippocampus were translated into signals generated by the second region of the hippocampus.

In hundreds of trials conducted with nine patients, the algorithm predicted how the signals would be translated with about 90 percent accuracy.
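
For readers who want a concrete feel for the approach, here is a deliberately simplified sketch of the input-output idea: learn a mapping from upstream hippocampal activity to downstream activity and test how well it predicts held-out signals. The published model is a nonlinear multi-input multi-output (MIMO) model; the linear regression and synthetic data below are stand-ins used only to illustrate the prediction step.

```python
# Simplified, hypothetical sketch of predicting downstream hippocampal
# activity (e.g. CA1) from upstream activity (e.g. CA3). Synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
upstream = rng.poisson(3.0, size=(500, 32)).astype(float)         # spike counts
true_map = rng.normal(0.0, 0.2, size=(32, 16))
downstream = upstream @ true_map + rng.normal(0.0, 0.5, size=(500, 16))

model = LinearRegression().fit(upstream[:400], downstream[:400])
print(f"held-out R^2: {model.score(upstream[400:], downstream[400:]):.2f}")
```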

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions in order to try to bypass the damage and enable the formation of an accurate long-term memory.

http://medicalxpress.com/news/2015-09-scientists-bypass-brain-re-encoding-memories.html#nRlv

Paralyzed man walks again, using only his mind.


Paraplegic Adam Fritz works out with Kristen Johnson, a spinal cord injury recovery specialist, at the Project Walk facility in Claremont, California on September 24. A brain-to-computer technology that can translate thoughts into leg movements has enabled Fritz, paralyzed from the waist down by a spinal cord injury, to become the first such patient to walk without the use of robotics.

It’s a technology that sounds lifted from the latest Marvel movie—a brain-computer interface functional electrical stimulation (BCI-FES) system that enables paralyzed users to walk again. But thanks to neurologists, biomedical engineers and other scientists at the University of California, Irvine, it’s very much a reality, though admittedly with only one successful test subject so far.

The team, led by Zoran Nenadic and An H. Do, built a device that translates brain waves into electrical signals that can bypass the damaged region of a paraplegic’s spine and go directly to the muscles, stimulating them to move. To test it, they recruited 28-year-old Adam Fritz, who had lost the use of his legs five years earlier in a motorcycle accident.

Fritz first had to learn how exactly he’d been telling his legs to move for all those years before his accident. The research team fitted him with an electroencephalogram (EEG) cap that read his brain waves as he visualized moving an avatar in a virtual reality environment. After hours of training on the video game, he eventually figured out how to signal “walk.”

The next step was to transfer that newfound skill to his legs. The scientists wired up the EEG device so that it would send electrical signals to the muscles in Fritz’s leg. And then, along with physical therapy to strengthen his legs, he would practice walking—his legs suspended a few inches off the ground—using only his brain (and, of course, the device). On his 20th visit, Fritz was finally able to walk using a harness that supported his body weight and prevented him from falling. After a little more practice, he walked using just the BCI-FES system. After 30 trials run over a period of 19 weeks, he could successfully walk through a 12-foot-long course.
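
As a rough illustration of what a “walk” command pipeline could look like, the sketch below thresholds an EEG band-power feature into a binary walk/stop command that could drive a functional electrical stimulator. This is not the UC Irvine implementation; the frequency band, threshold and the send_fes() callback are hypothetical placeholders.

```python
# Hypothetical sketch: map EEG band power to a binary WALK/STOP command.
import numpy as np

def band_power(eeg_window: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean power of one EEG channel in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(eeg_window.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg_window)) ** 2
    return float(psd[(freqs >= lo) & (freqs <= hi)].mean())

def decode_command(eeg_window: np.ndarray, fs: float = 256.0,
                   threshold: float = 1e4) -> str:
    """Attempted walking often suppresses mu-band (8-12 Hz) power."""
    return "WALK" if band_power(eeg_window, fs, 8.0, 12.0) < threshold else "STOP"

# send_fes(decode_command(latest_window))  # hypothetical stimulator call
```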

As encouraging as the trial sounds, there are experts who suggest the design has limitations. “It appears that the brain EEG signal only contributed a walk or stop command,” says Dr. Chet Moritz, an associate professor of rehab medicine, physiology and biophysics at the University of Washington. “This binary signal could easily be provided by the user using a sip-puff straw, eye-blink device or many other more reliable means of communicating a simple ‘switch.’”

Moritz believes it’s unlikely that an EEG alone would be reliable enough to extract any more specific input from the brain while the test subject is walking. In other words, it might not be able to do much more beyond beginning and ending a simple motion like moving your legs forward—not so helpful in stepping over curbs or turning a corner in a hallway.

The UC Irvine team hopes to improve the capability of its technology. A simplified version of the system has the potential to work as a means of noninvasive rehabilitation for a wide range of paralytic conditions, from less severe spinal cord injuries to stroke and multiple sclerosis.

“Once we’ve confirmed the usability of this noninvasive system, we can look into invasive means, such as brain implants,” said Nenadic in a statement announcing the project’s success. “We hope that an implant could achieve an even greater level of prosthesis control because brain waves are recorded with higher quality. In addition, such an implant could deliver sensation back to the brain, enabling the user to feel their legs.”

http://www.newsweek.com/paralyzed-man-walks-again-using-only-his-mind-379531

Brain decoder can eavesdrop on your inner voice

Talking to yourself used to be a strictly private pastime. That’s no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world.

“If you’re reading text in a newspaper or a book, you hear a voice in your own head,” says Brian Pasley at the University of California, Berkeley. “We’re trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak.”

When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words.

In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brain to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain’s temporal lobe were only active in response to certain aspects of sound, such as a specific frequency. One set of neurons might react only to sound waves with a frequency of 1000 hertz, for example, while another set might respond only to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269).

The team hypothesised that hearing speech and thinking to oneself might spark some of the same neural signatures in the brain. They supposed that an algorithm trained to identify speech heard out loud might also be able to identify words that are thought.

Mind-reading

To test the idea, they recorded brain activity in another seven people undergoing epilepsy surgery, while they looked at a screen that displayed text from either the Gettysburg Address, John F. Kennedy’s inaugural address or the nursery rhyme Humpty Dumpty.

Each participant was asked to read the text aloud, read it silently in their head and then do nothing. While they read the text out loud, the team worked out which neurons were reacting to what aspects of speech and generated a personalised decoder to interpret this information. The decoder was used to create a spectrogram – a visual representation of the different frequencies of sound waves heard over time. As each frequency correlates to specific sounds in each word spoken, the spectrogram can be used to recreate what had been said. They then applied the decoder to the brain activity that occurred while the participants read the passages silently to themselves.
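
A minimal sketch of that decoding step, under the assumption of a simple linear mapping, is shown below: fit a model from electrode activity to a sound spectrogram during reading aloud, then apply it to activity recorded during silent reading. The shapes and values are synthetic placeholders, not the study’s recordings, and ridge regression stands in for whatever personalised decoder the team actually used.

```python
# Hypothetical sketch: train a linear decoder on overt speech, apply it
# to covert (silent) reading. Synthetic placeholder data throughout.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
neural_overt = rng.normal(size=(1000, 64))    # time bins x electrodes
spec_overt = rng.normal(size=(1000, 32))      # time bins x frequency bands
neural_silent = rng.normal(size=(200, 64))    # activity during silent reading

decoder = Ridge(alpha=1.0).fit(neural_overt, spec_overt)
reconstructed_spectrogram = decoder.predict(neural_silent)
print(reconstructed_spectrogram.shape)        # (200, 32)
```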

Despite the neural activity from imagined or actual speech differing slightly, the decoder was able to reconstruct which words several of the volunteers were thinking, using neural activity alone (Frontiers in Neuroengineering, doi.org/whb).

The algorithm isn’t perfect, says Stephanie Martin, who worked on the study with Pasley. “We got significant results but it’s not good enough yet to build a device.”

In practice, if the decoder is to be used by people who are unable to speak it would have to be trained on what they hear rather than their own speech. “We don’t think it would be an issue to train the decoder on heard speech because they share overlapping brain areas,” says Martin.

The team is now fine-tuning their algorithms, by looking at the neural activity associated with speaking rate and different pronunciations of the same word, for example. “The bar is very high,” says Pasley. “It’s preliminary data, and we’re still working on making it better.”

The team have also turned their hand to predicting what songs a person is listening to by playing lots of Pink Floyd to volunteers, and then working out which neurons respond to what aspects of the music. “Sound is sound,” says Pasley. “It all helps us understand different aspects of how the brain processes it.”

“Ultimately, if we understand covert speech well enough, we’ll be able to create a medical prosthesis that could help someone who is paralysed, or locked in and can’t speak,” he says.

Several other researchers are also investigating ways to read the human mind. Some can tell what pictures a person is looking at, others have worked out what neural activity represents certain concepts in the brain, and one team has even produced crude reproductions of movie clips that someone is watching just by analysing their brain activity. So is it possible to put it all together to create one multisensory mind-reading device?

In theory, yes, says Martin, but it would be extraordinarily complicated. She says you would need a huge amount of data for each thing you are trying to predict. “It would be really interesting to look into. It would allow us to predict what people are doing or thinking,” she says. “But we need individual decoders that work really well before combining different senses.”

http://www.newscientist.com/article/mg22429934.000-brain-decoder-can-eavesdrop-on-your-inner-voice.html

New research suggests that a third of patients diagnosed as vegetative may be conscious with a chance for recovery

Imagine being confined to a bed, diagnosed as “vegetative“—the doctors think you’re completely unresponsive and unaware, but they’re wrong. As many as one-third of vegetative patients are misdiagnosed, according to a new study in The Lancet. Using brain imaging techniques, researchers found signs of minimal consciousness in 13 of 42 patients who were considered vegetative. “The consequences are huge,” lead author Dr. Steven Laureys, of the Coma Science Group at the Université de Liège, tells Maclean’s. “These patients have emotions; they may feel pain; studies have shown they have a better outcome [than vegetative patients]. Distinguishing between unconscious, and a little bit conscious, is very important.”

Detecting human consciousness following brain injury remains exceedingly difficult. Vegetative patients are typically diagnosed by a bedside clinical exam, and remain “neglected” in the health care system, Laureys says. Once diagnosed, “they might not be [re-examined] for years. Nobody questions whether or not there could be something more going on.” That’s about to change.

Laureys has collaborated previously with British neuroscientist Adrian Owen, based at Western University in London, Ont., who holds the Canada Excellence Research Chair in Cognitive Neuroscience and Imaging. (Owen’s work was featured in Maclean’s in October 2013.) Together they co-authored a now-famous paper in the journal Science, in 2006, in which a 23-year-old vegetative patient was instructed to either imagine playing tennis, or moving around her house. Using functional magnetic resonance imaging, or fMRI, they saw that the patient was activating two different parts of her brain, just like healthy volunteers did. Laureys and Owen also worked together on a 2010 follow-up study, in the New England Journal of Medicine, where the same technique was used to ask a patient to answer “yes” or “no” to various questions, presenting the stunning possibility that some vegetative patients might be able to communicate.

In the new Lancet paper, Laureys used two functional brain imaging techniques, fMRI and positron emission tomography (PET), to examine 126 patients with severe brain injury: 41 of them vegetative, four locked-in (a rare condition in which patients are fully conscious and aware, yet completely paralyzed from head-to-toe), and another 81 who were minimally conscious. After finding that 13 of 42 vegetative patients showed brain activity indicating minimal consciousness, they re-examined them a year later. By then, nine of the 13 had improved, and progressed into a minimally conscious state or higher.

The mounting evidence that some vegetative patients are conscious, even minimally so, carries ethical and legal implications. Just last year, Canada’s Supreme Court ruled that doctors couldn’t unilaterally pull the plug on Hassan Rasouli, a man in a vegetative state. This work raises the possibility that one day, some patients may be able to communicate through some kind of brain-machine interface, and maybe even weigh in on their own medical treatment. For now, doctors could make better use of functional brain imaging tests to diagnose these patients, Laureys believes. Kate Bainbridge, who was one of the first vegetative patients examined by Owen, was given a scan that showed her brain lighting up in response to images of her family. Her health later improved. “I can’t say how lucky I was to have the scan,” she said in an email to Maclean’s last year. “[It] really scares me to think what would have happened if I hadn’t had it.”

https://ca.news.yahoo.com/one-third-of-vegetative-patients-may-be-conscious–study-195412300.html

Communication of thoughts between rats on different continents, connected via brain-to-brain interface

The world’s first brain-to-brain connection has given rats the power to communicate by thought alone.

“Many people thought it could never happen,” says Miguel Nicolelis at Duke University in Durham, North Carolina. Although monkeys have been able to control robots with their mind using brain-to-machine interfaces, work by Nicolelis’s team has, for the first time, demonstrated a direct interface between two brains – with the rats able to share both motor and sensory information.

The feat was achieved by first training rats to press one of two levers when an LED above that lever was lit. A correct action opened a hatch containing a drink of water. The rats were then split into two groups, designated as “encoders” and “decoders”.

An array of microelectrodes – each about one-hundredth the width of a human hair – was then implanted in the encoder rats’ primary motor cortex, an area of the brain that processes movement. The team used the implant to record the neuronal activity that occurred just before the rat made a decision in the lever task. They found that pressing the left lever produced a different pattern of activity from pressing the right lever, regardless of which was the correct action.

Next, the team recreated these patterns in decoder rats, using an implant in the same brain area that stimulates neurons rather than recording from them. The decoders received a few training sessions to prime them to pick the correct lever in response to the different patterns of stimulation.

The researchers then wired up the implants of an encoder and a decoder rat. The pair were given the same lever-press task again, but this time only the encoder rats saw the LEDs come on. Brain signals from the encoder rat were recorded just before they pressed the lever and transmitted to the decoder rat. The team found that the decoders, despite having no visual cue, pressed the correct lever between 60 and 72 per cent of the time.
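
Conceptually, the link between the two animals boils down to a classify-then-stimulate loop. The sketch below is a hypothetical illustration of that loop: classify the encoder rat’s population activity as a left or right press by comparing it with stored templates, then translate the result into a stimulation pattern for the decoder rat. The data, template matching and stimulate() call are stand-ins, not the published method.

```python
# Hypothetical encoder-to-decoder sketch: nearest-template classification
# of the encoder's firing pattern, mapped to a stimulation pulse pattern.
import numpy as np

def classify_press(rates: np.ndarray, template_left: np.ndarray,
                   template_right: np.ndarray) -> str:
    """Label a population firing-rate vector as a left or right press."""
    d_left = np.linalg.norm(rates - template_left)
    d_right = np.linalg.norm(rates - template_right)
    return "left" if d_left < d_right else "right"

def to_stimulation(choice: str) -> np.ndarray:
    """Translate the decoded choice into a pulse train (arbitrary counts)."""
    return np.ones(20 if choice == "left" else 60)

# stimulate(to_stimulation(classify_press(rates, tpl_left, tpl_right)))
```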

The rats’ ability to cooperate was reinforced by rewarding both rats if the communication resulted in a correct outcome. Such reinforcement led to the transmission of clearer signals, improving the rats’ success rate compared with cases where decoders were given a pre-recorded signal. This was a big surprise, says Nicolelis. “The encoder’s brain activity became more precise. This could have happened because the animal enhanced its attention during the performance of the next trial after a decoder error.”

If the decoders had not been primed to relate specific activity with the left or right lever prior to being linked with an encoder, the only consequence would have been that it took a bit more time for them to learn the task while interacting with the encoder, says Nicolelis. “We simply primed the decoder so that it would get the gist of the task it had to perform.” In unpublished monkey experiments doing a similar task, the team did not need to prime the animals at all.

In a second experiment, rats were trained to explore a hole with their whiskers and indicate if it was narrow or wide by turning to the left or right. Pairs of rats were then connected as before, but this time the implants were placed in their primary somatosensory cortex, an area that processes touch. Decoder rats were able to indicate over 60 per cent of the time the width of a gap that only the encoder rats were exploring.

Finally, encoder rats were held still while their whiskers were stroked with metal bars. The researchers observed patterns of activity in the somatosensory cortex of the decoder rats that matched those of the encoder rats, even though the whiskers of the decoder rats had not been touched.

Pairs of rats were even able to cooperate across continents using cyberspace. Brain signals from an encoder rat at the Edmond and Lily Safra International Institute of Neuroscience of Natal in Brazil were sent to a decoder in Nicolelis’s lab in North Carolina via the internet. Though there was a slight transmission delay, the decoder rat still performed with an accuracy similar to those of rats in closer proximity with encoders.

Christopher James at the University of Warwick, UK, who works on brain-to-machine interfaces for prostheses, says the work is a “wake-up call” for people who haven’t caught up with recent advances in brain research.

We have the technology to create implants for long-term use, he says. What is missing, though, is a full understanding of the brain processes involved. In this case, Nicolelis’s team is “blasting a relatively large area of the brain with a signal they’re not sure is 100 per cent correct,” he says.

That’s because the exact information being communicated between the rats’ brains is not clear. The brain activity of the encoders cannot be transferred precisely to the decoders because that would require matching the patterns neuron for neuron, which is not currently possible. Instead, the two patterns are closely related in terms of their frequency and spatial representation.

“We are still using a sledgehammer to crack a walnut,” says James. “They’re not hearing the voice of God.” But the rats are certainly sending and receiving more than a binary signal that simply points to one or other lever, he says. “I think it will be possible one day to transfer an abstract thought.”

The decoders have to interpret relatively complex brain patterns, says Marshall Shuler at Johns Hopkins University in Baltimore, Maryland. The animals learn the relevance of these new patterns and their brains adapt to the signals. “But the decoders are probably not having the same quality of experience as the encoders,” he says.

Patrick Degenaar at Newcastle University in the UK says that the military might one day be able to deploy genetically modified insects or small mammals that are controlled by the brain signals of a remote human operator. These would be drones that could feed themselves, he says, and could be used for surveillance or even assassination missions. “You’d probably need a flying bug to get near the head [of someone to be targeted],” he says.

Nicolelis is most excited about the future of multiple networked brains. He is currently trialling the implants in monkeys, getting them to work together telepathically to complete a task. For example, each monkey might only have access to part of the information needed to make the right decision in a game. Several monkeys would then need to communicate with each other in order to successfully complete the task.

“In the distant future we may be able to communicate via a brain-net,” says Nicolelis. “I would be very glad if the brain-net my great grandchildren used was due to their great grandfather’s work.”

Journal reference: Scientific Reports, DOI: 10.1038/srep01319

Lab rats given a 6th sense through a brain-machine interface

Duke University researchers have effectively given laboratory rats a “sixth sense” using an implant in their brains.

An experimental device allowed the rats to “touch” infrared light – which is normally invisible to them.

The team at Duke University fitted the rats with an infrared detector wired up to microscopic electrodes that were implanted in the part of their brains that processes tactile information.

The results of the study were published in Nature Communications journal.

The researchers say that, in theory at least, a human with a damaged visual cortex might be able to regain sight through a device implanted in another part of the brain.

Lead author Miguel Nicolelis said this was the first time a brain-machine interface has augmented a sense in adult animals.

The experiment also shows that a new sensory input can be interpreted by a region of the brain that normally does something else (without having to “hijack” the function of that brain region).

“We could create devices sensitive to any physical energy,” said Prof Nicolelis, from the Duke University Medical Center in Durham, North Carolina.

“It could be magnetic fields, radio waves, or ultrasound. We chose infrared initially because it didn’t interfere with our electrophysiological recordings.”

His colleague Eric Thomson commented: “The philosophy of the field of brain-machine interfaces has until now been to attempt to restore a motor function lost to lesion or damage of the central nervous system.

“This is the first paper in which a neuroprosthetic device was used to augment function – literally enabling a normal animal to acquire a sixth sense.”

In their experiments, the researchers used a test chamber with three light sources that could be switched on randomly.

They taught the rats to choose the active light source by poking their noses into a port to receive a sip of water as a reward. They then implanted the microelectrodes, each about a tenth the diameter of a human hair, into the animals’ brains. These electrodes were attached to the infrared detectors.
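
The core of the augmentation is a simple mapping from detector output to stimulation. The sketch below is a hypothetical illustration of that idea, scaling an infrared reading to a microstimulation pulse rate; the scaling, values and deliver_pulses() call are assumptions, not the published implementation.

```python
# Hypothetical sketch: scale an infrared detector reading to a
# stimulation pulse rate for the touch cortex.
def ir_to_pulse_rate(ir_reading: float, max_reading: float = 1023.0,
                     max_rate_hz: float = 400.0) -> float:
    """Clamp the reading to [0, max_reading] and scale it to a rate in Hz."""
    clamped = min(max(ir_reading, 0.0), max_reading)
    return max_rate_hz * clamped / max_reading

# while True:
#     deliver_pulses(rate_hz=ir_to_pulse_rate(read_ir_sensor()))
```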

The scientists then returned the animals to the test chamber. At first, the rats scratched at their faces, indicating that they were interpreting the lights as touch. But after a month the animals learned to associate the signal in their brains with the infrared source.

They began to search actively for the signal, eventually achieving perfect scores in tracking and identifying the correct location of the invisible light source.

One key finding was that enlisting the touch cortex to detect infrared light did not reduce its ability to process touch signals.

http://www.bbc.co.uk/news/science-environment-21459745

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Scientists Construct First Detailed Map of How the Brain Organizes Everything We See

Our eyes may be our window to the world, but how do we make sense of the thousands of images that flood our retinas each day? Scientists at the University of California, Berkeley, have found that the brain is wired to put in order all the categories of objects and actions that we see. They have created the first interactive map of how the brain organizes these groupings.

The result — achieved through computational models of brain imaging data collected while the subjects watched hours of movie clips — is what researchers call “a continuous semantic space.”

“Our methods open a door that will quickly lead to a more complete and detailed understanding of how the brain is organized. Already, our online brain viewer appears to provide the most detailed look ever at the visual function and organization of a single human brain,” said Alexander Huth, a doctoral student in neuroscience at UC Berkeley and lead author of the study published Dec. 19 in the journal Neuron.

A clearer understanding of how the brain organizes visual input can help with the medical diagnosis and treatment of brain disorders. These findings may also be used to create brain-machine interfaces, particularly for facial and other image recognition systems. Among other things, they could improve a grocery store self-checkout system’s ability to recognize different kinds of merchandise.

“Our discovery suggests that brain scans could soon be used to label an image that someone is seeing, and may also help teach computers how to better recognize images,” said Huth.

It has long been thought that each category of object or action humans see — people, animals, vehicles, household appliances and movements — is represented in a separate region of the visual cortex. In this latest study, UC Berkeley researchers found that these categories are actually represented in highly organized, overlapping maps that cover as much as 20 percent of the brain, including the somatosensory and frontal cortices.

To conduct the experiment, the brain activity of five researchers was recorded via functional Magnetic Resonance Imaging (fMRI) as they each watched two hours of movie clips. The brain scans simultaneously measured blood flow in thousands of locations across the brain.

Researchers then used regularized linear regression analysis, which finds correlations in data, to build a model showing how each of the roughly 30,000 locations in the cortex responded to each of the 1,700 categories of objects and actions seen in the movie clips. Next, they used principal components analysis, a statistical method that can summarize large data sets, to find the “semantic space” that was common to all the study subjects.
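
The two analysis steps translate naturally into code. The sketch below uses ridge regression and PCA as stand-ins for the paper’s regularized regression and principal components analysis, with random, scaled-down placeholder data in roughly the shapes described above (time points x categories, time points x voxels).

```python
# Illustrative sketch (placeholder data, scaled down from the real study):
# step 1 fits a regularized regression from category features to voxel
# responses; step 2 runs PCA on the voxel weight vectors to find a shared
# "semantic space".
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
category_features = rng.binomial(1, 0.01, size=(600, 1700)).astype(float)
bold_responses = rng.normal(size=(600, 3000))           # time x voxels

weights = Ridge(alpha=10.0).fit(category_features, bold_responses).coef_
semantic_space = PCA(n_components=4).fit(weights)       # weights: voxels x categories
print(semantic_space.explained_variance_ratio_)
```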

The results are presented in multicolored, multidimensional maps showing the more than 1,700 visual categories and their relationships to one another. Categories that activate the same brain areas have similar colors. For example, humans are green, animals are yellow, vehicles are pink and violet and buildings are blue.

“Using the semantic space as a visualization tool, we immediately saw that categories are represented in these incredibly intricate maps that cover much more of the brain than we expected,” Huth said.

Other co-authors of the study are UC Berkeley neuroscientists Shinji Nishimoto, An T. Vu and Jack Gallant.

Journal Reference:

1. Alexander G. Huth, Shinji Nishimoto, An T. Vu, Jack L. Gallant. A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain. Neuron, 2012; 76 (6): 1210. DOI: 10.1016/j.neuron.2012.10.014

http://www.sciencedaily.com/releases/2012/12/121219142257.htm

Mind over matter helps paralysed woman control robotic arm

A woman who is paralysed from the neck down has stunned doctors with her extraordinary skill at using a robotic arm that is controlled by her thoughts alone.

The 52-year-old patient, called Jan, lost the use of her limbs more than 10 years ago to a degenerative disease that damaged her spinal cord. The disruption to her nervous system was the equivalent to having a broken neck.

But in training sessions at the University of Pittsburgh, doctors found she quickly learned to make fluid movements with the brain-controlled robotic arm, reaching levels of performance never seen before.

Doctors recruited the woman to test a robotic arm controlled by a new kind of computer program that translates the natural brain activity used to move our limbs into commands for the arm.

The design is intended to make the robotic arm more intuitive for patients to use. Instead of having to think where to move the arm, a patient can simply focus on the goal, such as “pick up the ball”.

Several groups around the world are developing so-called brain-machine interfaces to control robotic arms and other devices, such as computers, but none has achieved such impressive results.

Writing in the Lancet, researchers said Jan was able to move the robotic arm back, forward, right, left, and up and down only two days into her training. Within weeks she could reach out, and change the position of the hand to pick up objects on a table, including cones, blocks and small balls, and put them down at another location.

“We were blown away by how fast she was able to acquire her skill, that was completely unexpected,” said Andrew Schwartz, professor of neurobiology at the University of Pittsburgh. “At the end of a good day, when she was making these beautiful movements, she was ecstatic.”

To wire the woman up to the arm, doctors performed a four-hour operation to implant two tiny grids of electrodes, measuring 4mm on each side, into Jan’s brain. Each grid has 96 little electrodes that stick out 1.5mm. The electrodes were pushed just beneath the surface of the brain, near neurons that control hand and arm movement in the motor cortex.

Once the surgeons had implanted the electrodes, they replaced the part of the skull they had removed to expose the brain. Wires from the electrodes ran to connectors on the patient’s head, which doctors could then use to plug the patient into the computer system and robotic arm.

Before Jan could use the arm, doctors had to record her brain activity imagining various arm movements. To do this, they asked her to watch the robotic arm as it performed various moves, and got her to imagine moving her own arm in the same way.

While she was thinking, the computer recorded the electrical activity from individual neurons in her brain.

Neurons that control movement tend to have a preferred direction, and fire their electrical pulses more frequently to perform a movement in that direction. “Once we understand which direction each neuron likes to fire in, we can look at a larger group of neurons and figure out what direction the patient is trying to move the arm in,” Schwartz said.
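
That description is essentially the classic population-vector idea: each neuron’s above-baseline firing “votes” for its preferred direction, and the votes are summed into an estimate of the intended movement. The sketch below is an illustrative implementation of that idea with made-up numbers, not the decoder actually used in Pittsburgh.

```python
# Illustrative population-vector decoder: sum preferred directions
# weighted by each neuron's above-baseline firing rate.
import numpy as np

def population_vector(rates: np.ndarray, baselines: np.ndarray,
                      preferred_dirs: np.ndarray) -> np.ndarray:
    """Estimate a 3-D movement direction from population firing rates."""
    weights = rates - baselines                          # above-baseline activity
    direction = (weights[:, None] * preferred_dirs).sum(axis=0)
    norm = np.linalg.norm(direction)
    return direction / norm if norm > 0 else direction

rng = np.random.default_rng(4)
dirs = rng.normal(size=(96, 3))                          # one unit vector per neuron
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
rates = rng.poisson(20, 96).astype(float)
print(population_vector(rates, np.full(96, 15.0), dirs))
```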

To begin with, the robotic arm was programmed to help Jan’s movements, by ignoring small mistakes in movements. But she quickly progressed to controlling the arm without help. After three months of training, she completed tasks with the robotic arm 91.6% of the time, and 30 seconds faster than when the trial began.

In an accompanying article, Grégoire Courtine, at the Swiss Federal Institute of Technology in Lausanne, said: “This bioinspired brain-machine interface is a remarkable technological and biomedical achievement.”

There are hurdles ahead for mind-controlled robot limbs. Though Jan’s performance continued to improve after the Lancet study was written, she has plateaued recently, because scar tissue that forms around the tips of the electrodes degrades the brain signals the computer receives.

Schwartz said that using thinner electrodes, around five thousandths of a millimetre thick, should solve this problem, as they will be too small to trigger the scarring process in the body.

The researchers now hope to build senses into the robotic arm, so the patient can feel the texture and temperature of the objects they are handling. To do this, sensors on the fingers of the robotic hand could send information back to the sensory regions of the brain.

Another major focus of future work is to develop a wireless system, so the patient does not have to be physically plugged into the computer that controls the robotic arm.

Thanks to Kebmodee and Dr. Rajadhyaksha for bringing this to the attention of the It’s Interesting community.

http://www.guardian.co.uk/science/2012/dec/17/paralysed-woman-robotic-arm-pittsburgh