Archive for the ‘EEG’ Category


What people experience as death creeps in—after the heart stops and the brain becomes starved of oxygen—seems to lie beyond the reach of science. But the authors of a new study on dying rats make a bold claim: After cardiac arrest, the rodents’ brains enter a state similar to heightened consciousness in humans. The researchers suggest that if the same is true for people, such brain activity could be the source of the visions and other sensations that make up so-called near-death experiences.

Estimated to occur in about 20% of patients who survive cardiac arrest, near-death experiences are frequently described as hypervivid or “realer-than-real,” and often include leaving the body and observing oneself from outside, or seeing a bright light. The similarities between these reports are hard to ignore, but the conversation about near-death experiences often bleeds into metaphysics: Are these visions produced solely by the brain, or are they a glimpse at an afterlife outside the body?

Neurologist Jimo Borjigin of the University of Michigan, Ann Arbor, got interested in near-death experiences during a different project—measuring the hormone levels in the brains of rodents after a stroke. Some of the animals in her lab died unexpectedly, and her measurements captured a surge in neurochemicals at the moment of their death. Previous research in rodents and humans has shown that electrical activity surges in the brain right after the heart stops, then goes flat after a few seconds. Without any evidence that this final blip contains meaningful brain activity, Borjigin says “it’s perhaps natural for people to assume that [near-death] experiences came from elsewhere, from more supernatural sources.” But after seeing those neurochemical surges in her animals, she wondered about those last few seconds, hypothesizing that even experiences seeming to stretch for days in a person’s memory could originate from a brief “knee-jerk reaction” of the dying brain.

To observe brains on the brink of death, Borjigin and her colleagues implanted electrodes into the brains of nine rats to measure electrical activity at six different locations. The team anesthetized the rats for about an hour, for ethical reasons, and then injected potassium chloride into each unconscious animal’s heart to cause cardiac arrest. In the approximately 30 seconds between a rat’s last heartbeat and the point when its brain stopped producing signals, the team carefully recorded its neuronal oscillations, or the frequency with which brain cells were firing their electrical signals.

The data produced by electroencephalograms (EEGs) of the nine rats revealed a highly organized brain response in the seconds after cardiac arrest, Borjigin and colleagues report online today in the Proceedings of the National Academy of Sciences. While overall electrical activity in the brain sharply declined after the last heartbeat, oscillations in the low gamma frequency (between 25 and 55 Hz) increased in power. Previous human research has linked gamma waves to waking consciousness, meditative states, and REM sleep. These oscillations in the dying rats were synchronized across different parts of the brain, even more so than in the rats’ normal waking state. The team also noticed that firing patterns in the front of the brain would be echoed in the back and sides. This so-called top-down signaling, which is associated with conscious perception and information processing, increased eightfold compared with the waking state, the team reports. When you put these features together, Borjigin says, they suggest that the dying brain is hyperactive in its final seconds, producing meaningful, conscious activity.
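
For readers who want to see what an “increase in low-gamma power” looks like in practice, here is a minimal sketch of band-power estimation using Welch’s method in Python. The sampling rate, the synthetic signal, and the early-versus-late comparison are illustrative assumptions, not the study’s actual analysis pipeline.

```python
# Minimal sketch: estimating low-gamma (25-55 Hz) band power from an EEG-like trace.
# The synthetic signal and parameters are illustrative, not the study's data.
import numpy as np
from scipy.signal import welch

fs = 500                      # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)  # 30 seconds of "recording"

# Fake EEG: broadband noise plus a 40 Hz component whose amplitude grows over time,
# standing in for a rise in low-gamma power toward the end of the recording.
rng = np.random.default_rng(0)
eeg = rng.normal(size=t.size) + (t / t.max()) * np.sin(2 * np.pi * 40 * t)

def band_power(signal, fs, fmin, fmax):
    """Approximate the integrated power spectral density between fmin and fmax (Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Compare low-gamma power in the first versus the last ten seconds.
first, last = eeg[: 10 * fs], eeg[-10 * fs :]
print("low-gamma power, first 10 s:", band_power(first, fs, 25, 55))
print("low-gamma power, last 10 s: ", band_power(last, fs, 25, 55))
```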

The team proposed that such research offers a “scientific framework” for approaching the highly lucid experiences that some people report after their brushes with death. But relating signs of consciousness in rat brains to human near-death experiences is controversial. “It opens more questions than it answers,” says Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle, Washington, of the research. Evidence of a highly organized and connected brain state during the animal’s death throes is surprising and fascinating, he says. But Koch, who worked with Francis Crick in the late 1980s to hypothesize that gamma waves are a hallmark of consciousness, says the increase in their power doesn’t necessarily mean that the rats were in a hyperconscious state. Not only is it impossible to project any mental experience onto these animals, but their response was also “still overlaid by the anesthesiology,” he says; this sedation likely influenced their brain response in unpredictable ways.

Others share Koch’s concerns. “There is no animal model of a near-death experience,” says critical care physician Sam Parnia of Stony Brook University School of Medicine in New York. We can never confirm what animals think or feel in their final moments, making it all but impossible to use them to study our own near-death experiences, he believes. Nonetheless, Parnia sees value in this new study from a clinical perspective, as a step toward understanding how the brain behaves right before death. He says that doctors might use a similar approach to learn how to improve blood flow or prolong electrical activity in the brain, preventing damage while resuscitating a patient.

Borjigin argues that the rat data are compelling enough to drive further study of near-death experiences in humans. She suggests monitoring EEG activity in people undergoing brain surgery that involves cooling the brain and reducing its blood supply. This procedure has prompted near-death experiences in the past, she says, and could offer a systematic way to explore the phenomenon.

read more here: http://news.sciencemag.org/brain-behavior/2013/08/probing-brain%E2%80%99s-final-moments

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.


Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a green ball on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
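
As a toy illustration of that feedback loop, here is a sketch in Python: a fixed, arbitrary linear decoder maps simulated firing rates to a two-dimensional cursor velocity, and the “brain” keeps only those random tweaks to its firing pattern that move the cursor closer to the intended direction. The decoder, the firing-rate model, and the learning rule are all invented for illustration; real decoders and real neural adaptation are far more sophisticated.

```python
# Toy illustration of closed-loop BCI adaptation (not a real decoder or neural model).
import numpy as np

rng = np.random.default_rng(1)
n_units = 20                               # simulated recorded neurons
decoder = rng.normal(size=(2, n_units))    # fixed, arbitrary map: rates -> 2-D velocity
target_velocity = np.array([1.0, 0.0])     # "move the cursor to the right"

rates = rng.uniform(0, 1, size=n_units)    # initial firing pattern
best_error = np.linalg.norm(decoder @ rates - target_velocity)

# Trial and error: keep random tweaks to the firing pattern only when they reduce
# cursor error, standing in for the feedback-driven re-tuning the article describes.
for trial in range(2000):
    candidate = np.clip(rates + rng.normal(scale=0.05, size=n_units), 0, None)
    error = np.linalg.norm(decoder @ candidate - target_velocity)
    if error < best_error:
        rates, best_error = candidate, error

print("final cursor velocity:", decoder @ rates)
print("remaining error:", best_error)
```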

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?'” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may one day be able to detect and respond to far more abstract types of mental activity.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines


Two people have successfully steered a virtual spacecraft by combining the power of their thoughts – and their efforts were far more accurate than one person acting alone. One day groups of people hooked up to brain-computer interfaces (BCIs) might work together to control complex robotic and telepresence systems, maybe even in space.

A BCI system records the brain’s electrical activity using EEG signals, which are detected with electrodes attached to the scalp. Machine-learning software learns to recognise the patterns generated by each user as they think of a certain concept, such as “left” or “right”. BCIs have helped people with disabilities to steer a wheelchair, for example.
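
At its core, that pattern-recognition step is a classifier trained on features extracted from each user’s EEG, such as power in a few frequency bands at several electrodes. Here is a minimal sketch using linear discriminant analysis on synthetic per-trial features; the feature layout and data are stand-ins, not any particular group’s pipeline.

```python
# Minimal sketch of the pattern-recognition step in a BCI: classify "left" vs "right"
# from per-trial EEG features. Features here are synthetic stand-ins, not real data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_features = 200, 16   # e.g. band power at several electrodes (assumed)

# Two imagined classes with slightly different feature means, plus noise.
left = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_features))
right = rng.normal(loc=0.5, scale=1.0, size=(n_trials, n_features))
X = np.vstack([left, right])
y = np.array(["left"] * n_trials + ["right"] * n_trials)

clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])     # train on half the trials
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))   # test on the other half
```

In a live system, the same trained classifier would be applied to each new chunk of EEG to produce a “left” or “right” command.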

Researchers are discovering, however, that they get better results in some tasks by combining the signals from multiple BCI users. Until now, this “collaborative BCI” technique has been used in simple pattern-recognition tasks, but a team at the University of Essex in the UK wanted to test it more rigorously.

So they developed a simulator in which pairs of BCI users had to steer a craft towards the dead centre of a planet by thinking about one of eight directions that they could fly in, like using compass points. Brain signals representing the users’ chosen direction, as interpreted by the machine-learning system, were merged in real time and the spacecraft followed that path.

The results, to be presented at an Intelligent User Interfaces conference in California in March, strongly favoured two-brain navigation. Simulation flights were 67 per cent accurate for a single user, but 90 per cent on target for two users. And when coping with sudden changes in the simulated planet’s position, reaction times were halved, too. Combining signals helps cancel out the random noise that dogs EEG recordings. “When you average signals from two people’s brains, the noise cancels out a bit,” says team member Riccardo Poli.
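
Poli’s point about noise cancelling out can be checked with a few lines of arithmetic: if two users’ decoded directions carry independent noise, averaging them shrinks the error by roughly a factor of the square root of two. The numbers below are illustrative, not the Essex team’s data.

```python
# Why two brains beat one: averaging independent noisy readings of the same
# underlying "intended direction" reduces the noise. Values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
true_direction = 45.0                             # degrees, the direction both users intend
n_trials = 10_000
noise_a = rng.normal(scale=20.0, size=n_trials)   # decoding noise for user A
noise_b = rng.normal(scale=20.0, size=n_trials)   # independent noise for user B

single = true_direction + noise_a
merged = true_direction + (noise_a + noise_b) / 2  # average the two decoded directions

print("single-user error (std): %.1f deg" % single.std())
print("two-user error (std):    %.1f deg" % merged.std())  # roughly 1/sqrt(2) smaller
```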

The technique can also compensate for a lapse in attention. “It is difficult to stay focused on the task at all times. So when a single user has momentary attention lapses, it matters. But when there are two users, a lapse by one will not have much effect, so you stay on target,” Poli says.

NASA’s Jet Propulsion Lab in Pasadena, California, has been observing the work while itself investigating BCI’s potential for controlling planetary rovers, for example. But don’t hold your breath, says JPL senior research scientist Adrian Stoica. “While potential uses for space applications exist, in terms of uses for planetary rover remote control, this is still a speculative idea,” he says.

http://www.newscientist.com/article/mg21729025.600-mindmeld-brain-power-is-best-for-steering-spaceships.html


The brain waves of two musicians synchronize when they are performing a duet, a new study found, suggesting that there’s a neural blueprint for coordinating actions with others.

A team of scientists at the Max Planck Institute for Human Development in Berlin used electrodes to record the brain waves of 16 pairs of guitarists while they played a sequence from “Sonata in G Major” by Christian Gottlieb Scheidler. In each pair, the two musicians played different voices of the piece. One guitarist was responsible for beginning the song and setting the tempo while the other was instructed to follow.

In 60 trials each, the pairs of musicians showed coordinated brain oscillations — or matching rhythms of neural activity — in regions of the brain associated with social cognition and music production, the researchers said.
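
“Coordinated brain oscillations” here means that the phases of the two players’ rhythms line up. One standard way to quantify that kind of coupling is the phase-locking value, sketched below on synthetic signals using the Hilbert transform; this is a generic illustration, not the Max Planck team’s exact analysis.

```python
# One common way to quantify "coordinated oscillations" between two signals:
# the phase-locking value (PLV), computed via the Hilbert transform.
# The synthetic signals below stand in for two players' EEG channels.
import numpy as np
from scipy.signal import hilbert

fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(4)

shared = np.sin(2 * np.pi * 6 * t)                 # a shared 6 Hz rhythm
player1 = shared + 0.5 * rng.normal(size=t.size)   # each "brain" adds its own noise
player2 = shared + 0.5 * rng.normal(size=t.size)
unrelated = rng.normal(size=t.size)                # control: no shared rhythm

def plv(x, y):
    """Phase-locking value: 1 = perfectly locked phases, near 0 = unrelated."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

print("PLV, two players sharing a rhythm:", plv(player1, player2))
print("PLV, player vs. unrelated noise:  ", plv(player1, unrelated))
```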

“When people coordinate their own actions, small networks between brain regions are formed,” study researcher Johanna Sänger said in a statement. “But we also observed similar network properties between the brains of the individual players, especially when mutual coordination is very important; for example at the joint onset of a piece of music.”

Sänger added that the internal synchronization of the lead guitarists’ brain waves was present, and actually stronger, before the duet began.

“This could be a reflection of the leading player’s decision to begin playing at a certain moment in time,” she explained.

Another Max Planck researcher involved in the study, Ulman Lindenberger, led a similar set of experiments in 2009. But in that study, which was published in the journal BMC Neuroscience, the pairs of guitarists played a song in unison, rather than a duet. Lindenberger and his team at the time observed the same type of coordinated brain oscillations, but noted that the synchronization could have been the result of the similarities of the actions performed by the pairs of musicians.

As the new study involved guitarists who were performing different parts of a song, the researchers say their results provide stronger evidence that there is a neural basis for interpersonal coordination. The team believes people’s brain waves might also synchronize during other types of actions, such as during sports games.

The study was published online Nov. 29 in the journal Frontiers in Human Neuroscience.

http://www.livescience.com/25117-musicians-brains-sync-up-during-duet.html


Sentry duty is a tough assignment. Most of the time there’s nothing to see, and when a threat does pop up, it can be hard to spot. In some military studies, humans have been shown to detect only 47 percent of visible dangers.

A project run by the Defense Advanced Research Projects Agency (DARPA) suggests that combining the abilities of human sentries with those of machine-vision systems could be a better way to identify danger. It also uses electroencephalography to identify spikes in brain activity that can correspond to subconscious recognition of an object.

An experimental system developed by DARPA sandwiches a human observer between layers of computer vision and has been shown to outperform either machines or humans used in isolation.

The so-called Cognitive Technology Threat Warning System consists of a wide-angle camera and radar, which collects imagery for humans to review on a screen, and a wearable electroencephalogram device that measures the reviewer’s brain activity. This allows the system to detect unconscious recognition of changes in a scene—called a P300 event.
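
The P300 is a small positive voltage deflection that peaks roughly 300 milliseconds after a stimulus the brain has registered. The usual way to pull it out of noisy EEG is to cut short epochs around each event and average them, as in the sketch below; the signal and event times are synthetic, and this is not DARPA’s actual processing chain.

```python
# Sketch of how a P300-style response is usually pulled out of noisy EEG:
# cut epochs around event times and average them. Data here are synthetic.
import numpy as np

fs = 250                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(5)
eeg = rng.normal(scale=5.0, size=60 * fs)  # one minute of noisy "EEG" (microvolts)

# Pretend a recognized object appeared at these sample indices, each followed
# by a small positive bump peaking about 300 ms later.
events = np.arange(fs, 59 * fs, fs)
bump_t = np.arange(0, 0.6, 1 / fs)
bump = 5.0 * np.exp(-((bump_t - 0.3) ** 2) / (2 * 0.05 ** 2))  # peak at 300 ms
for e in events:
    eeg[e : e + bump.size] += bump

# Epoch 0-600 ms after each event and average: the P300-like bump emerges.
epochs = np.array([eeg[e : e + bump.size] for e in events])
erp = epochs.mean(axis=0)
peak_ms = 1000 * bump_t[np.argmax(erp)]
print("averaged response peaks at ~%.0f ms after the event" % peak_ms)
```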

In experiments, a participant was asked to review test footage shot at military test sites in the desert and rain forest. The system caught 91 percent of incidents (such as humans on foot or approaching vehicles) in the simulation. It also widened the field of view that could effectively be monitored. False alarms were raised only 0.2 percent of the time, down from 35 percent when a computer vision system was used on its own. When combined with radar, which detects things invisible to the naked eye, the accuracy of the system was close to 100 percent, DARPA says.

“The DARPA project is different from other ‘human-in-the-loop’ projects because it takes advantage of the human visual system without having the humans do any ‘work,’ ” says computer scientist Devi Parikh of the Toyota Technological Institute at Chicago. Parikh researches vision systems that combine human and machine expertise.

While electroencephalogram-measuring caps are commercially available for a few hundred dollars, Parikh warns that the technology is still in its infancy. Furthermore, she notes, the P300 signals may vary enough to require training or personalized processing, which could make it harder to scale up such a system for widespread use.

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.technologyreview.com/news/507826/sentry-system-combines-a-human-brain-with-computer-vision/

For the last few years, Puzzlebox has been publishing open source software and hacking guides that walk makers through the modification of RC helicopters so that they can be flown and controlled using just the power of the mind. Full systems have also been custom built to introduce youngsters to brain-computer interfaces and neuroscience. The group is about to take the project to the next stage by making a Puzzlebox Orbit brain-controlled helicopter available to the public, while encouraging user experimentation by making all the code, schematics, 3D models, build guides and other documentation freely available under an open-source license.

The helicopter has a protective outer sphere that prevents the rotor blades from impacting walls, furniture, floor and ceiling, and it is very similar in design to the Kyosho Space Ball. It’s not the same craft, though, and the ability to control it with the mind is not the only difference.

“There’s a ring around the top and bottom of the Space Ball which isn’t present on the Puzzlebox Orbit,” Castellotti says. “The casing around their servo motor looks quite different, too. The horizontal ring at mid-level is more rounded on the Orbit, and vertically it is more squat. We’re also selling the Puzzlebox Orbit in the U.S. for US$89 (including shipping), versus their $117 (plus shipping).”

Two versions of the Puzzlebox Orbit system are being offered to the public. The first is designed for use with mobile devices like tablets and smartphones. A NeuroSky MindWave Mobile EEG headset communicates with the device via Bluetooth. Proprietary software then analyzes the brainwave data in real time and translates the input as command signals, which are sent to the helicopter via an IR adapter plugged into the device’s audio jack.

This system isn’t quite ready for all mobile operating platforms, though. The team is “happy on Android but don’t have access to a wide variety of hardware for testing,” confirmed Castellotti, adding “Some tuning after release is expected. We’ll have open source code available to iOS developers and will have initiated the App Store evaluation process if it’s not already been approved.”

The second offering comes with a Puzzlebox Pyramid, which was developed completely in-house and has a dual role as a home base for the Orbit helicopter and a remote control unit. At its heart is a programmable micro-controller that’s compatible with Arduino boards. On one face of the pyramid there’s a broken circle of multi-colored LED lights in a clock face configuration. These are used to indicate levels of concentration, mental relaxation, and the quality of the EEG signal from a NeuroSky MindWave EEG headset (which wirelessly communicates with a USB dongle plugged into the rear of the pyramid).

Twelve infrared LEDs at the top of each face actually control the Orbit helicopter, and with some inventive tweaking, these can also be used to control other IR toys and devices (including TVs).

In either case, a targeted mental state can be assigned to a helicopter control or flight path (such as hover in place or fly in a straight line) and actioned whenever that state is detected and maintained. Estimated Orbit flight time is around eight minutes (or more), after which the user will need to recharge the unit for 30 minutes before the next take-off.
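
In software terms, assigning a mental state to a flight command and acting on it only when the state is “detected and maintained” boils down to thresholding a streaming attention value with a short hold requirement. The sketch below is a guess at that logic; the attention scale, threshold, and command names are assumptions, not Puzzlebox’s actual protocol.

```python
# Toy version of "assign a mental state to a flight command": throttle up only while
# the headset's attention reading stays above a threshold for long enough.
# The attention stream and command names are invented for illustration;
# this is not Puzzlebox's actual protocol.
from collections import deque

ATTENTION_THRESHOLD = 60     # NeuroSky-style attention is reported roughly 0-100
HOLD_SAMPLES = 3             # require the state to be maintained, not just touched

def command_stream(attention_readings):
    """Yield a command for each new attention sample."""
    recent = deque(maxlen=HOLD_SAMPLES)
    for level in attention_readings:
        recent.append(level)
        if len(recent) == HOLD_SAMPLES and min(recent) >= ATTENTION_THRESHOLD:
            yield "THROTTLE_UP"    # sustained concentration: take off / hover
        else:
            yield "THROTTLE_IDLE"  # otherwise let the craft settle

# Example: attention rises, is held, then lapses.
readings = [20, 45, 62, 70, 75, 71, 50, 30]
for level, cmd in zip(readings, command_stream(readings)):
    print(level, "->", cmd)
```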

At the time of writing, a crowd-funding campaign on Kickstarter to take the prototype system into mass production has attracted almost three times its target. The Puzzlebox team has already secured enough hardware and materials to start shipping the first wave of Orbits next month. International backers will get their hands on the system early next year.

The brain-controlled helicopter is only a part of the package, however. The development team has promised to release the source code for the Linux/Mac/PC software and mobile apps, all protocols, and available hardware schematics under open-source licenses. Step-by-step how-to guides are also in the pipeline (like the one already on the Instructables website), together with educational aids detailing how everything works.

“We have prepared contributor tools for Orbit, including a wiki, source code browser, and ticket tracking system,” said Castellotti. “We are already using these tools internally to build the project. Access to these will be granted when the Kickstarter campaign closes.”

“We would really like to underline that we are producing more than just a brain-controlled helicopter,” he stressed. “The toy and concept is fun and certainly the main draw, but the true purpose lies in the open code and hacking guides. We don’t want to be the holiday toy that gets played with for ten minutes then sits forever in the corner or on a shelf. We want owners to be able to use the Orbit to experiment with biofeedback – practicing how to concentrate better or to unwind and relax with this physical and visual aid.”

“And when curiosity kicks in and they start to wonder how it actually works, all of the information is published freely. That’s how we hope to share knowledge and foster a community. For example, a motivated experimenter should be able to start with the hardware we provide, and using our tools and guides learn how to hack support for driving a remote controlled car or causing a television to change channels when attention levels are measured as being low for too long a period of time. Such advancements could then be contributed back to the rest of our users.”

The Kickstarter campaign will close on December 8, after which the team will concentrate its efforts on getting Orbit systems delivered to backers and ensure that all the background and support documentation is in place. If all goes according to plan, a retail launch could follow as soon as Q1 2013.

It is hoped that the consumer Puzzlebox Orbit mobile/tablet edition with the NeuroSky headset will remain under US$200, followed by the Pyramid version at an as-yet undisclosed price.

http://www.gizmag.com/puzzlebox-orbit-brain-controlled-helicopter/25138/


When facial cues aren’t enough, there’s Shippo.

A Japanese company called Neurowear, which makes brain-wave interpreting products like the Necomimi cat ear set, is now developing a toy tail that wags in sync with a user’s mood.

By utilizing an electroencephalography (EEG) apparatus similar to that of the company’s popular cat ears, the Shippo tail reads electrical patterns emitted by the brain and manifests them as wagging.

A concentrating person emits brain waves in the range of 12 to 30 hertz, while a relaxed person’s waves measure in the 8- to 12-hertz range, NeuroSky, the San Jose-based company that developed the Necomimi, told CNET.

With Shippo, relaxed users’ tails will demonstrate “soft and slow” wagging, while concentrated users’ tails will display “hard and fast” wagging. The gadget is also social media enabled; a neural application reads the user’s mood and shares it to a map.
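
Given the two frequency bands quoted above, the wag decision could be as simple as comparing power in the relaxed band against power in the concentration band. The sketch below shows that mapping; the band-power inputs would come from a standard spectral estimate (Welch’s method, for example), and the decision rule is an illustrative guess, not Neurowear’s implementation.

```python
# Sketch of the mapping quoted above: compare power in the 8-12 Hz ("relaxed")
# and 12-30 Hz ("concentrating") bands and pick a wag style. The band powers are
# passed in directly here; the rule itself is a guess, not Neurowear's code.

def wag_style(relaxed_power: float, concentrating_power: float) -> str:
    """Return a wag style from relative power in the two quoted bands."""
    if relaxed_power <= 0 and concentrating_power <= 0:
        return "no wag"                      # no usable signal
    if relaxed_power > concentrating_power:
        return "soft and slow"               # relaxation dominates (8-12 Hz)
    return "hard and fast"                   # concentration dominates (12-30 Hz)

print(wag_style(relaxed_power=4.2, concentrating_power=1.1))  # soft and slow
print(wag_style(relaxed_power=0.8, concentrating_power=3.5))  # hard and fast
```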

But does the Shippo tail work? This entertaining video promo certainly makes it seem so. Unfortunately, since the project is only in its prototype phase, there aren’t any models available to test outside of the company’s Tokyo office, a Neurowear spokesperson told The Huffington Post in an email.

As HuffPost Tech’s review of the Necomimi explains, getting “in the zone” for the product to respond appropriately can prove difficult for some users (though it wasn’t for our reviewer). It’s conceivable that the Shippo may present similar issues.

Neurowear names the “augmented human body” as a design concept on its Web site. If preliminary media reports are to be believed, the wacky gizmo might be a hard sell to North American audiences.