‘Mind pilots’ steer plane sim with thoughts alone

Electrodes attached to a cap convert brain waves into signals that can be processed by the flight simulator for hands-free flying.

New research out of the Technische Universität München (TUM) in Germany is hinting that mind control might soon reach entirely new heights — even by us non-mutants. They’ve demonstrated that pilots might be able to fly planes through the sky using their thoughts alone.

The researchers hooked study participants to a cap containing dozens of electroencephalography (EEG) electrodes, sat them down in a flight simulator, and told them to steer the plane through the sim using their thoughts alone. The cap read the electrical signals from their brains and an algorithm then translated those signals into computer commands.
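The article doesn’t describe TUM’s algorithm, but a minimal EEG-to-command pipeline can be sketched: compute band-power features from each electrode and feed them to a linear classifier trained in a calibration session. Everything below — the sampling rate, frequency bands, command labels, and classifier weights — is an illustrative assumption, not TUM’s actual system.

```python
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def band_power(signal, fs, lo, hi):
    """Power of one EEG channel within a frequency band, via FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum()

def features(window, fs=FS):
    """Per-channel power in the mu (8-12 Hz) and beta (13-30 Hz) bands,
    which motor-imagery BCIs commonly rely on."""
    return np.array([band_power(ch, fs, lo, hi)
                     for ch in window
                     for lo, hi in ((8, 12), (13, 30))])

def decode(window, weights, labels=("left", "right", "climb", "descend")):
    """Map a feature vector to a steering command with a linear classifier.
    In practice `weights` would come from calibration; here it is arbitrary."""
    scores = weights @ features(window)
    return labels[int(np.argmax(scores))]

# Toy usage: 4 channels x 1 second of fake EEG, random classifier weights.
rng = np.random.default_rng(0)
window = rng.standard_normal((4, FS))
weights = rng.standard_normal((4, 8))  # 4 commands x (4 channels * 2 bands)
print(decode(window, weights))
```

A real system would also filter artifacts (eye blinks, muscle activity) and smooth the decoded commands over time before they reached the simulator’s controls.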

Seven people underwent the experiment and, according to the researchers, all were able to pilot the plane using their thoughts to such a degree that their performance could have satisfied some of the criteria for getting a pilot’s license.

What’s more, the study participants weren’t all pilots and had varying levels of flight experience. One had no cockpit experience at all.

We have, of course, seen similar thought-control experiments before — an artist who can paint with her thoughts (http://www.cnet.com/news/paralyzed-artist-paints-with-mind-alone/) and another who causes water to vibrate (http://www.cnet.com/news/artist-vibrates-water-with-the-power-of-thought/), for example, as well as a quadcopter controlled by brainwaves (http://www.cnet.com/news/mind-controlled-quadcopter-takes-to-the-air/) and a thought-powered typing solution (http://www.cnet.com/news/indendix-eeg-lets-you-type-with-your-brain/). But there’s something particularly remarkable about the idea of someone actually flying an airplane with just the mind.

The research was part of an EU-funded program called “Brainflight.” “A long-term vision of the project is to make flying accessible to more people,” aerospace engineer Tim Fricke, who heads the project at TUM, explained in a statement. “With brain control, flying, in itself, could become easier. This would reduce the workload of pilots and thereby increase safety. In addition, pilots would have more freedom of movement to manage other manual tasks in the cockpit.”

One of the outstanding challenges of the research is to provide feedback from the plane to the “mind pilots.” This is something normal pilots rely upon to gauge the state of their flight. For example, they would feel resistance from the controls if they begin to push the plane to its limits. TUM says the researchers are currently looking for ways to deliver such feedback to the pilots.

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.


After cardiac arrest, a final surge of brain activity could contain vivid experience, new research in rodents suggests.


What people experience as death creeps in—after the heart stops and the brain becomes starved of oxygen—seems to lie beyond the reach of science. But the authors of a new study on dying rats make a bold claim: After cardiac arrest, the rodents’ brains enter a state similar to heightened consciousness in humans. The researchers suggest that if the same is true for people, such brain activity could be the source of the visions and other sensations that make up so-called near-death experiences.

Estimated to occur in about 20% of patients who survive cardiac arrest, near-death experiences are frequently described as hypervivid or “realer-than-real,” and often include leaving the body and observing oneself from outside, or seeing a bright light. The similarities between these reports are hard to ignore, but the conversation about near-death experiences often bleeds into metaphysics: Are these visions produced solely by the brain, or are they a glimpse at an afterlife outside the body?

Neurologist Jimo Borjigin of the University of Michigan, Ann Arbor, got interested in near-death experiences during a different project—measuring the hormone levels in the brains of rodents after a stroke. Some of the animals in her lab died unexpectedly, and her measurements captured a surge in neurochemicals at the moment of their death. Previous research in rodents and humans has shown that electrical activity surges in the brain right after the heart stops, then goes flat after a few seconds. Without any evidence that this final blip contains meaningful brain activity, Borjigin says “it’s perhaps natural for people to assume that [near-death] experiences came from elsewhere, from more supernatural sources.” But after seeing those neurochemical surges in her animals, she wondered about those last few seconds, hypothesizing that even experiences seeming to stretch for days in a person’s memory could originate from a brief “knee-jerk reaction” of the dying brain.

To observe brains on the brink of death, Borjigin and her colleagues implanted electrodes into the brains of nine rats to measure electrical activity at six different locations. The team anesthetized the rats for about an hour, for ethical reasons, and then injected potassium chloride into each unconscious animal’s heart to cause cardiac arrest. In the approximately 30 seconds between a rat’s last heartbeat and the point when its brain stopped producing signals, the team carefully recorded its neuronal oscillations, or the frequency with which brain cells were firing their electrical signals.

The data produced by electroencephalograms (EEGs) of the nine rats revealed a highly organized brain response in the seconds after cardiac arrest, Borjigin and colleagues report online today in the Proceedings of the National Academy of Sciences. While overall electrical activity in the brain sharply declined after the last heartbeat, oscillations in the low gamma frequency (between 25 and 55 Hz) increased in power. Previous human research has linked gamma waves to waking consciousness, meditative states, and REM sleep. These oscillations in the dying rats were synchronized across different parts of the brain, even more so than in the rat’s normal waking state. The team also noticed that firing patterns in the front of the brain would be echoed in the back and sides. This so-called top-down signaling, which is associated with conscious perception and information processing, increased eightfold compared with the waking state, the team reports. When you put these features together, Borjigin says, they suggest that the dying brain is hyperactive in its final seconds, producing meaningful, conscious activity.
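The paper’s core measurement — spectral power in the 25 to 55 Hz low-gamma band — can be sketched with a simple periodogram. This is a toy illustration on synthetic data, not the study’s analysis pipeline; the sampling rate is assumed.

```python
import numpy as np

FS = 1000  # Hz; an assumed sampling rate, high enough to resolve gamma

def low_gamma_power(epoch, fs=FS, band=(25, 55)):
    """Mean periodogram power in the low-gamma band (25-55 Hz in the paper)."""
    freqs = np.fft.rfftfreq(epoch.size, 1.0 / fs)
    pxx = np.abs(np.fft.rfft(epoch)) ** 2 / epoch.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return pxx[mask].mean()

# Synthetic comparison: both epochs are broadband noise, but the
# "post-arrest" epoch carries an added 40 Hz oscillation, mimicking
# the gamma surge the study reports.
rng = np.random.default_rng(1)
t = np.arange(10 * FS) / FS
waking = rng.standard_normal(t.size)
post_arrest = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 40 * t)

ratio = low_gamma_power(post_arrest) / low_gamma_power(waking)
print(f"low-gamma power ratio (post-arrest / waking): {ratio:.1f}")
```

The study’s synchrony and top-down-signaling claims rest on additional measures (phase coupling and directed connectivity between electrode sites), not on band power alone.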

The team proposed that such research offers a “scientific framework” for approaching the highly lucid experiences that some people report after their brushes with death. But relating signs of consciousness in rat brains to human near-death experiences is controversial. “It opens more questions than it answers,” says Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle, Washington, of the research. Evidence of a highly organized and connected brain state during the animal’s death throes is surprising and fascinating, he says. But Koch, who worked with Francis Crick in the early 1980s to hypothesize that gamma waves are a hallmark of consciousness, says the increase in their frequency doesn’t necessarily mean that the rats were in a hyperconscious state. Not only is it impossible to project any mental experience onto these animals, but their response was also “still overlaid by the anesthesiology,” he says; this sedation likely influenced their brain response in unpredictable ways.

Others share Koch’s concerns. “There is no animal model of a near-death experience,” says critical care physician Sam Parnia of Stony Brook University School of Medicine in New York. We can never confirm what animals think or feel in their final moments, making it all but impossible to use them to study our own near-death experiences, he believes. Nonetheless, Parnia sees value in this new study from a clinical perspective, as a step toward understanding how the brain behaves right before death. He says that doctors might use a similar approach to learn how to improve blood flow or prolong electrical activity in the brain, preventing damage while resuscitating a patient.

Borjigin argues that the rat data are compelling enough to drive further study of near-death experiences in humans. She suggests monitoring EEG activity in people undergoing brain surgery that involves cooling the brain and reducing its blood supply. This procedure has prompted near-death experiences in the past, she says, and could offer a systematic way to explore the phenomenon.

Read more here: http://news.sciencemag.org/brain-behavior/2013/08/probing-brain%E2%80%99s-final-moments


Researchers explore connecting the brain to machines


Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a picture of a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the cursor on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the cursor as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
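That trial-and-error process can be caricatured in a few lines of code: a fixed, randomly wired decoder maps “firing rates” to a cursor, and a hill-climbing loop stands in for neuroplasticity, keeping only the changes that reduce the error. This is a deliberately crude sketch of the feedback principle, not a model of how real neurons learn.

```python
import numpy as np

rng = np.random.default_rng(42)

# A fixed, arbitrary decoder: the experimenters wire electrodes to a
# cursor without knowing in advance which neurons will do what.
N_NEURONS, N_DIMS = 20, 2
decoder = rng.standard_normal((N_DIMS, N_NEURONS)) / np.sqrt(N_NEURONS)
target = np.array([1.0, -1.0])

def cursor_error(rates):
    """Distance between the decoded cursor position and the target."""
    return np.linalg.norm(decoder @ rates - target)

# The "brain": random firing rates, adjusted by trial and error.
# Keeping only the perturbations that reduce the error is a crude
# stand-in for reward-driven neuroplasticity.
rates = rng.standard_normal(N_NEURONS)
err0 = best = cursor_error(rates)
for _ in range(2000):
    candidate = rates + 0.05 * rng.standard_normal(N_NEURONS)
    err = cursor_error(candidate)
    if err < best:  # feedback: keep only improvements
        rates, best = candidate, err

print(f"cursor error: {err0:.2f} before training, {best:.3f} after")
```

The point of the sketch is that the “learner” never inspects the decoder’s weights, just as the brain never sees the algorithm on the other side of the electrodes; feedback alone is enough.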

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?'” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may be able to detect and respond to more abstract types of mental activity in the always-changing future.


Musicians’ Brains Synchronize During Duets


The brain waves of two musicians synchronize when they are performing a duet, a new study found, suggesting that there’s a neural blueprint for coordinating actions with others.

A team of scientists at the Max Planck Institute for Human Development in Berlin used electrodes to record the brain waves of 16 pairs of guitarists while they played a sequence from “Sonata in G Major” by Christian Gottlieb Scheidler. In each pair, the two musicians played different voices of the piece. One guitarist was responsible for beginning the song and setting the tempo while the other was instructed to follow.

In 60 trials each, the pairs of musicians showed coordinated brain oscillations — or matching rhythms of neural activity — in regions of the brain associated with social cognition and music production, the researchers said.
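One standard way to quantify such coordinated oscillations is the phase-locking value (PLV) between two signals. The article doesn’t say which metric the Max Planck team used, so treat the following as a generic sketch on synthetic data, with a made-up tempo and lag.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (the standard Hilbert-transform construction)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value: 1 = perfectly phase-coupled, near 0 = unrelated."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return abs(np.exp(1j * dphi).mean())

# Hypothetical demo: two "guitarists" oscillating at the same tempo with
# a fixed lag are strongly phase-locked; independent noise is not.
fs = 250
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(7)
leader = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)
follower = np.sin(2 * np.pi * 6 * t - 0.8) + 0.2 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)

print(f"leader-follower PLV: {plv(leader, follower):.2f}")
print(f"leader-noise    PLV: {plv(leader, noise):.2f}")
```

Note that PLV rewards a consistent phase relationship, not identical waveforms — which matters here, because the guitarists were playing different voices of the piece.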

“When people coordinate their own actions, small networks between brain regions are formed,” study researcher Johanna Sänger said in a statement. “But we also observed similar network properties between the brains of the individual players, especially when mutual coordination is very important; for example at the joint onset of a piece of music.”

Sänger added that the internal synchronization of the lead guitarists’ brain waves was present, and actually stronger, before the duet began.

“This could be a reflection of the leading player’s decision to begin playing at a certain moment in time,” she explained.

Another Max Planck researcher involved in the study, Ulman Lindenberger, led a similar set of experiments in 2009. But in that study, which was published in the journal BMC Neuroscience, the pairs of guitarists played a song in unison, rather than a duet. Lindenberger and his team at the time observed the same type of coordinated brain oscillations, but noted that the synchronization could have been the result of the similarities of the actions performed by the pairs of musicians.

As the new study involved guitarists who were performing different parts of a song, the researchers say their results provide stronger evidence that there is a neural basis for interpersonal coordination. The team believes people’s brain waves might also synchronize during other types of actions, such as during sports games.

The study was published online Nov. 29 in the journal Frontiers in Human Neuroscience.


DARPA project suggests a mix of man and machine may be the most efficient way to spot danger: the Cognitive Technology Threat Warning System



Sentry duty is a tough assignment. Most of the time there’s nothing to see, and when a threat does appear, it can be hard to spot: in some military studies, human sentries detected only 47 percent of visible dangers.

A project run by the Defense Advanced Research Projects Agency (DARPA) suggests that combining the abilities of human sentries with those of machine-vision systems could be a better way to identify danger. The system also uses electroencephalography to identify spikes in brain activity that can correspond to subconscious recognition of an object.

An experimental system developed by DARPA sandwiches a human observer between layers of computer vision and has been shown to outperform either machines or humans used in isolation.

The so-called Cognitive Technology Threat Warning System consists of a wide-angle camera and radar, which collects imagery for humans to review on a screen, and a wearable electroencephalogram device that measures the reviewer’s brain activity. This allows the system to detect unconscious recognition of changes in a scene—called a P300 event.
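A P300 detector of the kind described can be sketched as epoch averaging: time-lock EEG segments to each stimulus, baseline-correct them, and flag a positive deflection roughly 300 ms later. The sampling rate, window, and threshold below are illustrative guesses, not DARPA’s parameters.

```python
import numpy as np

FS = 250  # Hz (assumed)

def detect_p300(eeg, stim_samples, fs=FS, window=(0.25, 0.50), thresh=2.0):
    """Average EEG epochs time-locked to stimuli and flag a P300 if the
    mean 250-500 ms post-stimulus amplitude exceeds `thresh` standard
    deviations of the averaged pre-stimulus baseline. Illustrative only."""
    pre, post = int(0.2 * fs), int(0.6 * fs)
    epochs = np.array([eeg[s - pre:s + post] for s in stim_samples])
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct
    erp = epochs.mean(axis=0)  # averaging cancels noise, keeps the response
    lo, hi = pre + int(window[0] * fs), pre + int(window[1] * fs)
    baseline_sd = erp[:pre].std() + 1e-12
    return erp[lo:hi].mean() / baseline_sd > thresh

# Hypothetical demo: noisy EEG with a positive bump about 300 ms after
# each of 40 "recognized target" events.
rng = np.random.default_rng(3)
eeg = rng.standard_normal(60 * FS)
stims = np.arange(40) * FS + FS  # one event per second, from t = 1 s
bump_t = np.arange(int(0.6 * FS)) / FS
bump = 1.5 * np.exp(-((bump_t - 0.3) ** 2) / (2 * 0.05 ** 2))
for s in stims:
    eeg[s:s + bump.size] += bump

print("P300 detected:", detect_p300(eeg, stims))
```

Averaging over many stimuli is what makes the single-microvolt P300 visible above the noise; a fielded system would need single-trial detection, which is considerably harder.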

In experiments, a participant was asked to review test footage shot at military test sites in the desert and rain forest. The system caught 91 percent of incidents (such as humans on foot or approaching vehicles) in the simulation. It also widened the field of view that could effectively be monitored. False alarms were raised only 0.2 percent of the time, down from 35 percent when a computer vision system was used on its own. When combined with radar, which detects things invisible to the naked eye, the accuracy of the system was close to 100 percent, DARPA says.

“The DARPA project is different from other ‘human-in-the-loop’ projects because it takes advantage of the human visual system without having the humans do any ‘work,’ ” says computer scientist Devi Parikh of the Toyota Technological Institute at Chicago. Parikh researches vision systems that combine human and machine expertise.

While electroencephalogram-measuring caps are commercially available for a few hundred dollars, Parikh warns that the technology is still in its infancy. Furthermore, she notes, the P300 signals may vary enough to require training or personalized processing, which could make it harder to scale up such a system for widespread use.



Japanese Shippo: cat tail that moves with your mood

When facial cues aren’t enough, there’s Shippo.

A Japanese company called Neurowear, which makes brain-wave interpreting products like the Necomimi cat ear set, is now developing a toy tail that wags in sync with a user’s mood.

By utilizing an electroencephalography (EEG) apparatus similar to that of the company’s popular cat ears, the Shippo tail reads electrical patterns emitted by the brain and manifests them as wagging.

A concentrating person emits brain waves in the range of 12 to 30 hertz, while a relaxed person’s waves measure in the 8- to 12-hertz range, NeuroSky, the San Jose-based company that developed the Necomimi, told CNET.

With Shippo, relaxed users’ tails will demonstrate “soft and slow” wagging, while concentrated users’ tails will display “hard and fast” wagging. The gadget is also social media enabled; a neural application reads the user’s mood and shares it to a map.
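Using the frequency bands quoted above, the mapping from brain waves to wag style reduces to comparing power in two bands. The sampling rate and the decision rule below are guesses for illustration; Neurowear’s actual algorithm isn’t public.

```python
import numpy as np

FS = 128  # Hz (a typical consumer-headset rate; assumed)

def band_power(x, fs, lo, hi):
    """Total FFT power of a signal within a frequency band."""
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2
    return pxx[(freqs >= lo) & (freqs < hi)].sum()

def wag_style(eeg, fs=FS):
    """Map the beta/alpha power balance to a tail behaviour, following the
    article's bands: 12-30 Hz for concentration, 8-12 Hz for relaxation.
    The simple beta > alpha cut-off is an illustrative guess."""
    beta = band_power(eeg, fs, 12, 30)
    alpha = band_power(eeg, fs, 8, 12)
    return "hard and fast" if beta > alpha else "soft and slow"

# Hypothetical demo: a dominant 10 Hz rhythm reads as relaxed,
# a dominant 20 Hz rhythm as concentrating.
t = np.arange(4 * FS) / FS
rng = np.random.default_rng(5)
relaxed = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
focused = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(t.size)
print(wag_style(relaxed))
print(wag_style(focused))
```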

But does the Shippo tail work? This entertaining video promo certainly makes it seem so. Unfortunately, since the project is only in its prototype phase, there aren’t any models available to test outside of the company’s Tokyo office, a Neurowear spokesperson told The Huffington Post in an email.

As HuffPost Tech’s review of the Necomimi explains, getting “in the zone” for the product to respond appropriately can prove difficult for some users (although not with our reviewer). It’s conceivable that the Shippo may present similar issues.

Neurowear names the “augmented human body” as a design concept on its Web site. If preliminary media reports are to be believed, the wacky gizmo might be a hard sell to North American audiences.