
Wendy was barely 20 years old when she received a devastating diagnosis: juvenile amyotrophic lateral sclerosis (ALS), an aggressive neurodegenerative disorder that destroys motor neurons in the brain and the spinal cord.

Within half a year, Wendy was completely paralyzed. At 21 years old, she had to be artificially ventilated and fed through a tube placed into her stomach. Even more horrifyingly, as paralysis gradually swept through her body, Wendy realized that she was rapidly being robbed of ways to reach out to the world.

Initially, Wendy was able to communicate to her loved ones by moving her eyes. But as the disease progressed, even voluntary eye twitches were taken from her. In 2015, a mere three years after her diagnosis, Wendy completely lost the ability to communicate—she was utterly, irreversibly trapped inside her own mind.

Complete locked-in syndrome is the stuff of nightmares. Patients in this state remain fully conscious and cognitively sharp, but are unable to move or signal to the outside world that they’re mentally present. The consequences can be dire: when doctors mistake locked-in patients for comatose and decide to pull the plug, there’s nothing the patients can do to intervene.

Now, thanks to a new system developed by an international team of European researchers, Wendy and others like her may finally have a rudimentary link to the outside world. The system, a portable brain-machine interface, translates brain activity into simple yes or no answers to questions with around 70 percent accuracy.

That may not seem like enough, but the system represents the first sliver of hope that we may one day be able to reopen reliable communication channels with these patients.

Four people were tested in the study, some of whom had been completely locked-in for as long as seven years. In just 10 days, the patients were able to use the system reliably to finally tell their loved ones not to worry—they’re generally happy.

The results, though imperfect, came as “enormous relief” to their families, says study leader Dr. Niels Birbaumer at the University of Tübingen. The study was published this week in the journal PLOS Biology.

Breaking Through

Robbed of words and other routes of contact, locked-in patients have always turned to technology for communication.

Perhaps the most famous example is physicist Stephen Hawking, who became partially locked-in due to ALS. Hawking’s workaround is a speech synthesizer that he operates by twitching his cheek muscles. Jean-Dominique Bauby, an editor of the French fashion magazine Elle who became locked-in after a massive stroke, wrote an entire memoir by blinking his left eye to select letters from the alphabet.

Recently, the rapid development of brain-machine interfaces has given paralyzed patients increasing access to the world—not just the physical one, but also the digital universe.

These devices read brain waves directly through electrodes implanted into the patient’s brain, decode the pattern of activity, and correlate it to a command—say, move a computer cursor left or right on a screen. The technology is so reliable that paralyzed patients can even use an off-the-shelf tablet to Google things, using only the power of their minds.
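To make that decoding step concrete, here is a minimal sketch in Python: a trained linear readout turns one window of per-electrode activity into a left-or-right cursor command. Everything in it (the 96-channel layout, the weights, the threshold) is an illustrative placeholder, not any lab’s actual decoder.

```python
# Minimal sketch of the decode step in a brain-machine interface:
# reduce a window of neural activity to features, then apply a trained
# linear readout to pick a cursor command. Illustrative placeholders
# throughout; real clinical decoders are far more elaborate.
import numpy as np

def decode_cursor_command(spike_counts, weights, bias):
    """Map per-electrode spike counts to a 'left' or 'right' command."""
    score = spike_counts @ weights + bias
    return "right" if score > 0 else "left"

rng = np.random.default_rng(0)
window = rng.poisson(lam=5.0, size=96)   # stand-in for a 96-electrode readout
weights = rng.normal(size=96)            # placeholder weights; real ones are learned
print(decode_cursor_command(window, weights, bias=0.0))
```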

But all of the above workarounds require one critical factor: the patient has to retain control of at least one muscle—often, this is a cheek or an eyelid. People like Wendy who are completely locked-in can’t operate these brain-machine interfaces. That failure is especially perplexing because the systems don’t require voluntary muscle movement at all; they read directly from the brain.

The unexpected failure of brain-machine interfaces in completely locked-in patients has been a major stumbling block for the field. Birbaumer speculates that over time, the brain may become less efficient at transforming thoughts into actions.

“Anything you want, everything you wish does not occur. So what the brain learns is that intention has no sense anymore,” he says.


First Contact

In the new study, Birbaumer overhauled common brain-machine interface designs to get the brain back on board.

The first change was how the system reads brain waves. Generally, this is done through EEG, which measures patterns of electrical activity in the brain. Unfortunately, that usual solution was a no-go.

“We worked for more than 10 years with neuroelectric activity [EEG] without getting into contact with these completely paralyzed people,” says Birbaumer.

It may be because the electrodes have to be implanted to produce a sufficiently accurate readout, explains Birbaumer to Singularity Hub. But surgery comes with additional risks and expenses for the patients. In a somewhat desperate bid, the team turned their focus to a technique called functional near-infrared spectroscopy (fNIRS).

Like fMRI, fNIRS tracks brain activity by measuring changes in blood flow through a specific brain region—generally speaking, more blood flow equals more activation. Unlike fMRI, which requires the patient to lie still inside a gigantic magnet, fNIRS uses infrared light to measure blood flow. The light source is embedded in a swimming-cap-like device worn tightly on the patient’s head.
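The physics can be sketched in a few lines. Assuming a single wavelength and a made-up calibration constant, the toy calculation below applies the modified Beer-Lambert relationship that fNIRS analysis rests on: a change in detected light intensity is converted to a change in optical density, which is proportional to the change in hemoglobin concentration along the light’s path.

```python
# Toy version of the fNIRS principle: activation draws oxygenated blood,
# which changes how much near-infrared light is absorbed on its way to
# the detector. The lumped constant is a placeholder, not a calibrated
# value from the study.
import math

def delta_optical_density(intensity_now, intensity_baseline):
    """More absorption along the light's path -> higher delta-OD."""
    return -math.log10(intensity_now / intensity_baseline)

def hemoglobin_change(d_od, lumped_constant=1.0):
    """Delta-OD divided by a lumped constant (extinction coefficient x
    source-detector distance x pathlength factor); placeholder value."""
    return d_od / lumped_constant

d_od = delta_optical_density(intensity_now=0.92, intensity_baseline=1.00)
print(f"dOD = {d_od:.4f} -> relative d[Hb] = {hemoglobin_change(d_od):.4f}")
```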

To train the system, the team started with facts about the world and personal statements that the patients could easily verify. Over the course of 10 days, the patients were repeatedly asked to respond yes or no to statements like “Paris is the capital of Germany” or “Your husband’s name is Joachim.” Throughout the entire training period, the researchers carefully monitored the patients’ alertness and concentration using EEG, to ensure that they were actually engaged in the task at hand.

The answers were then used to train an algorithm to match each response to its brain activation pattern. Eventually, the algorithm could tell yes from no based on these patterns alone, with about 70 percent accuracy on a single trial.
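A toy version of that train-then-classify scheme looks like the following; the synthetic data stands in for real fNIRS recordings, and logistic regression is an assumed stand-in for whatever classifier the team actually used.

```python
# Sketch of the training scheme: each trial's activation pattern is
# paired with the known yes/no answer, a classifier is fit on those
# pairs, and held-out trials estimate single-trial accuracy.
# Synthetic data and the classifier choice are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_trials, n_channels = 200, 20
labels = rng.integers(0, 2, size=n_trials)          # 1 = "yes", 0 = "no"
# Weakly separable synthetic "activation patterns": signal plus noise.
signal = np.outer(labels - 0.5, rng.normal(size=n_channels))
features = signal + rng.normal(scale=1.3, size=(n_trials, n_channels))

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"single-trial accuracy: {clf.score(X_test, y_test):.0%}")
```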

“After 10 years [of trying], I felt relieved,” says Birbaumer. If the study can be replicated in more patients, we may finally have a way to restore useful communication with these patients, he added in a press release.

“The authors established communication with complete locked-in patients, which is rare and has not been demonstrated systematically before,” says Dr. Wolfgang Einhäuser-Treyer to Singularity Hub. Einhäuser-Treyer is a professor at Bielefeld University in Germany who had previously worked on measuring pupil response as a means of communication with locked-in patients and was not involved in this current study.

Generally Happy

With more training, the algorithm is expected to improve even further.

For now, researchers can average out mistakes by repeatedly asking a patient the same question multiple times. And even at an “acceptable” 70 percent accuracy rate, the system has already allowed locked-in patients to speak their minds—and somewhat endearingly, just like in real life, the answer may be rather unexpected.
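The arithmetic behind that trick is simple: if each reading is right 70 percent of the time and the errors are roughly independent (an assumption), a majority vote over repeated askings of the same question climbs well past 70 percent.

```python
# How repetition averages out mistakes: probability that the majority
# of n independent 70%-accurate readings is correct (n odd).
from math import comb

def majority_vote_accuracy(p_single, n_repeats):
    k_needed = n_repeats // 2 + 1    # votes needed for a correct majority
    return sum(comb(n_repeats, k) * p_single**k * (1 - p_single)**(n_repeats - k)
               for k in range(k_needed, n_repeats + 1))

for n in (1, 5, 9):
    print(f"{n} repeats: {majority_vote_accuracy(0.7, n):.1%}")
# 1 repeat: 70.0%; 5 repeats: ~83.7%; 9 repeats: ~90.1%
```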

One of the patients, a 61-year-old man, was asked whether his daughter should marry her boyfriend. The father said no a striking nine out of ten times—but the daughter went ahead anyway, much to her father’s consternation, which he was able to express with the help of his new brain-machine interface.

Perhaps the most heart-warming result from the study is that the patients were generally happy and content with their lives.

“We were originally surprised,” says Birbaumer. But on further thought, it made sense: these four patients had already accepted artificial ventilation to support their lives despite their condition.

“In a sense, they had already chosen to live,” says Birbaumer. “If we could make this technique widely clinically available, it could have a huge impact on the day-to-day lives of people with completely locked-in syndrome.”

For their next step, the team hopes to extend the system beyond binary yes-or-no questions. They want to give patients access to the entire alphabet, allowing them to spell out words using their brain waves—something that has already been done with partially locked-in patients but has never before been possible for those completely locked-in.

“To me, this is a very impressive and important study,” says Einhäuser-Treyer. The downsides are mostly economic.

“The equipment is rather expensive and not easy to use. So the challenge for the field will be to develop this technology into an affordable ‘product’ that caretakers [sic], families or physicians can simply use without trained staff or extensive training,” he says. “In the interest of the patients and their families, we can hope that someone takes this challenge.”

https://singularityhub.com/2017/02/12/families-finally-hear-from-completely-paralyzed-patients-via-new-mind-reading-device/

By James Gallagher

An implant that beams instructions out of the brain has been used to restore movement in paralysed primates for the first time, say scientists.

Rhesus monkeys were paralysed in one leg due to a damaged spinal cord. The team at the Swiss Federal Institute of Technology bypassed the injury by sending the instructions straight from the brain to the nerves controlling leg movement. Experts said the technology could be ready for human trials within a decade.

Spinal-cord injuries block the flow of electrical signals from the brain to the rest of the body, resulting in paralysis. It is a wound that rarely heals, but one potential solution is to use technology to bypass the injury.

In the study, a chip was implanted into the part of each monkey’s brain that controls movement. Its job was to read the spikes of electrical activity that are the instructions for moving the legs and send them to a nearby computer. The computer deciphered the messages and sent instructions to an implant in the monkey’s spine to electrically stimulate the appropriate nerves. The whole process takes place in real time.

The results, published in the journal Nature, showed the monkeys regained some control of their paralysed leg within six days and could walk in a straight line on a treadmill.
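Schematically, the loop the researchers describe looks something like the sketch below. Every function name and rate in it is an illustrative assumption, not the Swiss team’s actual software.

```python
# Schematic of the brain-to-spine loop: read motor-cortex activity,
# decode the intended gait phase, trigger the matching spinal
# stimulation, and repeat in real time. All placeholders.
import time

def read_motor_cortex_window():
    """Placeholder for the implanted chip's spike readout."""
    return [1.0] * 96                      # pretend activity on 96 channels

def decode_gait_phase(activity):
    """Placeholder decoder mapping activity to an intended gait phase."""
    return "swing" if sum(activity) > 48 else "stance"

def stimulate_spinal_nerves(phase):
    """Placeholder for the spinal implant triggering the right nerves."""
    print(f"stimulating nerves for {phase}")

for _ in range(3):                         # in reality this loop never stops
    phase = decode_gait_phase(read_motor_cortex_window())
    stimulate_spinal_nerves(phase)
    time.sleep(0.02)                       # assumed ~50 Hz control rate
```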

Dr Gregoire Courtine, one of the researchers, said: “This is the first time that a neurotechnology has restored locomotion in primates.” He told the BBC News website: “The movement was close to normal for the basic walking pattern, but so far we have not been able to test the ability to steer.”

The technology used to stimulate the spinal cord is the same as that used in deep brain stimulation to treat Parkinson’s disease, so it would not be a technological leap to do the same tests in patients. “But the way we walk is different to primates, we are bipedal and this requires more sophisticated ways to stimulate the muscle,” said Dr Courtine.

Jocelyne Bloch, a neurosurgeon from Lausanne University Hospital, said: “The link between decoding of the brain and the stimulation of the spinal cord is completely new. For the first time, I can imagine a completely paralysed patient being able to move their legs through this brain-spine interface.”

Using technology to overcome paralysis is a rapidly developing field:
Brainwaves have been used to control a robotic arm
Electrical stimulation of the spinal cord has helped four paralysed people stand again
An implant has helped a paralysed man play a guitar-based computer game

Dr Mark Bacon, the director of research at the charity Spinal Research, said: “This is quite impressive work. Paralysed patients want to be able to regain real control, that is voluntary control of lost functions, like walking, and the use of implantable devices may be one way of achieving this. The current work is a clear demonstration that there is progress being made in the right direction.”

Dr Andrew Jackson, from the Institute of Neuroscience at Newcastle University, said: “It is not unreasonable to speculate that we could see the first clinical demonstrations of interfaces between the brain and spinal cord by the end of the decade.” However, he said, the rhesus monkeys used all four limbs to move and only one leg had been paralysed, so it would be a greater challenge to restore the movement of both legs in people. “Useful locomotion also requires control of balance, steering and obstacle avoidance, which were not addressed,” he added.

The other approach to treating paralysis involves transplanting cells from the nasal cavity into the spinal cord to try to biologically repair the injury. Following this treatment, Darek Fidyka, who was paralysed from the chest down in a knife attack in 2010, can now walk using a frame.

Neither approach is ready for routine use.

http://www.bbc.com/news/health-37914543

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Frank Swain has been going deaf since his 20s. Now he has hacked his hearing so he can listen in to the data that surrounds us.

I am walking through my north London neighbourhood on an unseasonably warm day in late autumn. I can hear birds tweeting in the trees, traffic prowling the back roads, children playing in gardens and Wi-Fi leaching from their homes. Against the familiar sounds of suburban life, the sound of Wi-Fi is somehow incongruous and appropriate at the same time.

As I approach Turnpike Lane tube station and descend to the underground platform, I catch the now familiar gurgle of the public Wi-Fi hub, as well as the staff network beside it. On board the train, these sounds fade into silence as we burrow into the tunnels leading to central London.

I have been able to hear these fields since last week. This wasn’t the result of a sudden mutation or years of transcendental meditation, but an upgrade to my hearing aids. With a grant from Nesta, the UK innovation charity, sound artist Daniel Jones and I built Phantom Terrains, an experimental tool for making Wi-Fi fields audible.

Our modern world is suffused with data. Since radio towers began climbing over towns and cities in the early 20th century, the air has grown thick with wireless communication, the platform on which radio, television, cellphones, satellite broadcasts, Wi-Fi, GPS, remote controls and hundreds of other technologies rely. And yet, despite wireless communication becoming a ubiquitous presence in modern life, the underlying infrastructure has remained largely invisible.

Every day, we use it to read the news, chat to friends, navigate through cities, post photos to our social networks and call for help. These systems make up a huge and integral part of our lives, but the signals that support them remain intangible. If you have ever wandered in circles to find a signal for your cellphone, you will know what I mean.

Phantom Terrains opens the door to this world to a small degree by tuning into these fields. Running on a hacked iPhone, the software exploits the inbuilt Wi-Fi sensor to pick up details about nearby fields: router name, signal strength, encryption and distance. This wasn’t easy. Reams of cryptic variables and numerical values had to be decoded by changing the settings of our test router and observing the effects.

“On a busy street, we may see over a hundred independent wireless access points within signal range,” says Jones. The strength of the signal, direction, name and security level on these are translated into an audio stream made up of a foreground and background layer: distant signals click and pop like hits on a Geiger counter, while the strongest bleat their network ID in a looped melody. This audio is streamed constantly to a pair of hearing aids donated by US developer Starkey. The extra sound layer is blended with the normal output of the hearing aids; it simply becomes part of my soundscape. So long as I carry my phone with me, I will always be able to hear Wi-Fi.
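To give a flavour of that mapping, here is a toy version: scan results go in, click rates and a looped melody come out. The networks, formulas and thresholds are invented for illustration; the real Phantom Terrains audio engine is more sophisticated.

```python
# Toy sonification in the spirit described above: weak networks become
# sparse Geiger-style clicks, the strongest network's name becomes a
# looped melody. All data and formulas are invented for illustration.
scan = [("CafeGuest", -85), ("TurnpikeLaneWiFi", -48), ("Home-2.4G", -70)]

def click_rate(dbm):
    """Weaker signal -> sparser clicks, like distant Geiger counter hits."""
    return max(1, 100 + dbm)              # clicks per minute, toy formula

def name_to_melody(name, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Map each character of the network ID to a degree of a major scale."""
    return [scale[ord(ch) % len(scale)] for ch in name]

strongest = max(scan, key=lambda network: network[1])
for name, dbm in scan:
    if (name, dbm) == strongest:
        print(f"{name}: loop melody {name_to_melody(name)}")
    else:
        print(f"{name}: {click_rate(dbm)} clicks/min")
```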

Silent soundscape

From the roar of Oxford Circus, I make my way into the close silence of an anechoic booth on Harley Street. I have been spending a lot of time in these since 2012, when I was first diagnosed with hearing loss. I have been going deaf since my 20s, and two years ago I was fitted with hearing aids which instantly brought a world of missing sound back to my ears, although it took a little longer for my brain to make sense of it.

Recreating hearing is an incredibly difficult task. Unlike glasses, which simply bring the world into focus, digital hearing aids strive to recreate the soundscape, amplifying useful sound and suppressing noise. As this changes by the second, sorting one from the other requires a lot of programming.

In essence, I am listening to a computer’s interpretation of the soundscape, heavily tailored to what it thinks I need to hear. I am intrigued to see how far this editorialisation of my hearing can be pushed. If I have to spend my life listening to an interpretative version of the world, what elements could I add? The data that surrounds me seems a good place to start.

Mapping digital fields isn’t a new idea. Timo Arnall’s Light Painting Wi-Fi saw the artist and his collaborators build a rod of LEDs that lit up when exposed to digital signals, and carried it through the city at night. Captured in long exposure photographs, the topographies of wireless networks appear as a ghostly blue ribbon that waxes and wanes to the strength of nearby signals, revealing the digital landscape.

“Just as the architecture of nearby buildings gives insight to their origin and purpose, we can begin to understand the social world by examining the network landscape,” says Jones. For example, by tracing the hardware address transmitted with the Wi-Fi signal, the Phantom Terrains software can trace a router’s origin. We found that residential areas were full of low-security routers whereas commercial districts had highly encrypted routers and a higher bandwidth.
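That trace works because the first three bytes of a router’s hardware (MAC) address form an organizationally unique identifier, assigned by the IEEE to the device’s manufacturer. A minimal lookup, using a made-up excerpt of the registry:

```python
# Sketch of tracing a router's origin from its MAC address: the first
# three octets are an IEEE-assigned manufacturer prefix (OUI).
OUI_TABLE = {                    # hypothetical excerpt, not the real registry
    "00:1A:2B": "ExampleCorp",
    "F0:9F:C2": "ExampleNet Devices",
}

def manufacturer_from_mac(mac):
    prefix = mac.upper()[:8]     # first three octets, e.g. "00:1A:2B"
    return OUI_TABLE.get(prefix, "unknown vendor")

print(manufacturer_from_mac("00:1a:2b:33:44:55"))   # -> ExampleCorp
```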

For all the information it gathers, most people would balk at the idea of being forced to listen to the hum and crackle of invisible fields all day. How long I will tolerate the additional noise in my soundscape remains to be seen. But there is more to the project than a critique of digital transparency.

With the advent of the internet of things, our material world is becoming ever more draped in sensors, and it is important to think about how we might make sense of all this information. Hearing is a fantastic platform for interpreting dynamic, continuous, broad spectrum data.

Its use in this way is being aided by a revolution in hearing technology. The latest models, such as the Halo brand used in our project and ReSound’s Linx, boast a specialised low-energy Bluetooth function that can link to compatible gadgets. This has a host of immediate advantages, such as allowing people to fine-tune their hearing aids using a smartphone as an interface. More crucially, the continuous connectivity elevates hearing aids to something similar to Google Glass – an always-on, networked tool that can seamlessly stream data and audio into your world.

Already, we are talking to our computers more, using voice-activated virtual assistants such as Apple’s Siri, Microsoft’s Cortana and OK Google. Always-on headphones that talk back, whispering into our ear like discreet advisers, might well catch on ahead of Google Glass.

“The biggest challenge is human,” says Jones. “How can we create an auditory representation that is sufficiently sophisticated to express the richness and complexity of an ever-changing network infrastructure, yet unobtrusive enough to be overlaid on our normal sensory experience without being a distraction?”

Only time will tell if we have succeeded in this respect. If we have, it will be a further step towards breaking computers out of the glass-fronted box they have been trapped inside for the last 50 years.

Auditory interfaces also prompt a rethink about how we investigate data and communicate those findings, setting aside the precise and discrete nature of visual presentation in favour of complex, overlapping forms. Instead of boiling the stock market down to the movement of one index or another, for example, we could one day listen to the churning mass of numbers in real time, our ears attuned for discordant melodies.

In Harley Street, the audiologist shows me the graphical results of my tests. What should be a wide blue swathe – good hearing across all volume levels and sound frequencies – narrows sharply, permanently, at one end.

There is currently no treatment that can widen this channel, but assistive hearing technology can tweak the volume and pitch of my soundscape to pack more sound into the space available. It’s not much to work with, but I’m hoping I can inject even more into this narrow strait, to hear things in this world that nobody else can.
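One way hearing technology does that packing is frequency lowering: sounds from a region of hearing that no longer responds are remapped into the band that still does. A toy linear version, with invented band edges rather than a real audiogram:

```python
# Toy frequency-lowering: squeeze an 8 kHz input range into a narrower
# band the wearer can still hear. Band edges are invented numbers.
def compress_frequency(f_in, audible_max=3000.0, input_max=8000.0):
    """Linearly remap 0..input_max Hz into 0..audible_max Hz."""
    return f_in * (audible_max / input_max)

for f in (500, 4000, 8000):
    print(f"{f} Hz -> {compress_frequency(f):.0f} Hz")
```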

http://www.newscientist.com/article/mg22429952.300-the-man-who-can-hear-wifi-wherever-he-walks.html?full=true