Researchers explore connecting the brain to machines

Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the cursor on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the cursor as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so unfathomably complex, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
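
The trial-and-error adaptation described here can be caricatured as a simple optimization loop. The sketch below is purely illustrative: the fixed decoder weights, the single-target task and all numbers are invented, and the “brain” is reduced to a random search that keeps whatever firing pattern brings the cursor closer to its goal.

```python
import random

random.seed(0)  # reproducible toy run

# Toy model: a fixed "decoder" maps two electrode firing rates to a
# cursor velocity, and a simulated "brain" searches by trial and error
# for a firing pattern that produces the velocity it wants. All names
# and numbers are invented for illustration.

DECODER_WEIGHTS = [0.7, -0.3]  # fixed mapping, unknown to the "brain"

def cursor_velocity(firing_rates):
    return sum(w * r for w, r in zip(DECODER_WEIGHTS, firing_rates))

def adapt(target=1.0, trials=2000, step=0.05):
    rates = [0.0, 0.0]                    # current firing pattern
    best_err = abs(cursor_velocity(rates) - target)
    for _ in range(trials):
        # try a slightly different configuration of activity
        candidate = [r + random.uniform(-step, step) for r in rates]
        err = abs(cursor_velocity(candidate) - target)
        if err < best_err:                # feedback: keep what works
            rates, best_err = candidate, err
    return rates, best_err

rates, err = adapt()
print(f"learned firing pattern {rates}, residual error {err:.4f}")
```

The point of the toy is the division of labor: the decoder never changes, yet performance improves, because the searching happens on the brain’s side of the interface.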

That’s the principle behind the rapid progress in brain-computer interfaces and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked-in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?’” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifesciences showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may someday be able to detect and respond to more abstract types of mental activity.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines

Flip of a single molecular switch makes an old brain young

The flip of a single molecular switch helps create the mature neuronal connections that allow the brain to bridge the gap between adolescent impressionability and adult stability. Now Yale School of Medicine researchers have reversed the process, recreating a youthful brain that facilitated both learning and healing in the adult mouse.

Scientists have long known that the young and old brains are very different. Adolescent brains are more malleable or plastic, which allows them to learn languages more quickly than adults and speeds recovery from brain injuries. The comparative rigidity of the adult brain results in part from the function of a single gene that slows the rapid change in synaptic connections between neurons.

By monitoring the synapses in living mice over weeks and months, Yale researchers have identified the key genetic switch for brain maturation in a study released March 6 in the journal Neuron. The Nogo Receptor 1 gene is required to suppress high levels of plasticity in the adolescent brain and create the relatively quiescent levels of plasticity in adulthood. In mice without this gene, juvenile levels of brain plasticity persist throughout adulthood. When researchers blocked the function of this gene in old mice, they reset the old brain to adolescent levels of plasticity.

“These are the molecules the brain needs for the transition from adolescence to adulthood,” said Dr. Stephen Strittmatter, Vincent Coates Professor of Neurology, Professor of Neurobiology and senior author of the paper. “It suggests we can turn back the clock in the adult brain and recover from trauma the way kids recover.”

Rehabilitation after brain injuries like strokes requires that patients re-learn tasks such as moving a hand. Researchers found that adult mice lacking Nogo Receptor recovered from injury as quickly as adolescent mice and mastered new, complex motor tasks more quickly than adults with the receptor.

“This raises the potential that manipulating Nogo Receptor in humans might accelerate and magnify rehabilitation after brain injuries like strokes,” said Feras Akbik, a Yale doctoral student who is first author of the study.

Researchers also showed that Nogo Receptor slows loss of memories. Mice without Nogo receptor lost stressful memories more quickly, suggesting that manipulating the receptor could help treat post-traumatic stress disorder.

“We know a lot about the early development of the brain,” Strittmatter said, “but we know amazingly little about what happens in the brain during late adolescence.”

Other Yale authors are Sarah M. Bhagat, Pujan R. Patel and William B.J. Cafferty.

The study was funded by the National Institutes of Health. Strittmatter is scientific founder of Axerion Therapeutics, which is investigating applications of Nogo research to repair spinal cord damage.

http://news.yale.edu/2013/03/06/flip-single-molecular-switch-makes-old-brain-young

Communication of thoughts between rats on different continents, connected via brain-to-brain interface

The world’s first brain-to-brain connection has given rats the power to communicate by thought alone.

“Many people thought it could never happen,” says Miguel Nicolelis at Duke University in Durham, North Carolina. Although monkeys have been able to control robots with their mind using brain-to-machine interfaces, work by Nicolelis’s team has, for the first time, demonstrated a direct interface between two brains – with the rats able to share both motor and sensory information.

The feat was achieved by first training rats to press one of two levers when an LED above that lever was lit. A correct action opened a hatch containing a drink of water. The rats were then split into two groups, designated as “encoders” and “decoders”.

An array of microelectrodes – each about one-hundredth the width of a human hair – was then implanted in the encoder rats’ primary motor cortex, an area of the brain that processes movement. The team used the implant to record the neuronal activity that occurs just before the rat made a decision in the lever task. They found that pressing the left lever produced a different pattern of activity from pressing the right lever, regardless of which was the correct action.

Next, the team recreated these patterns in decoder rats, using an implant in the same brain area that stimulates neurons rather than recording from them. The decoders received a few training sessions to prime them to pick the correct lever in response to the different patterns of stimulation.

The researchers then wired up the implants of an encoder and a decoder rat. The pair were given the same lever-press task again, but this time only the encoder rat saw the LEDs come on. Brain signals from the encoder rat were recorded just before it pressed the lever and transmitted to the decoder rat. The team found that the decoders, despite having no visual cue, pressed the correct lever between 60 and 72 per cent of the time.
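
At its core, the decoding step is a two-class pattern-classification problem: given a vector of firing rates, decide “left lever” or “right lever”. Here is a minimal sketch with synthetic data; the electrode count, firing statistics and nearest-template classifier are all assumptions for illustration, not the study’s actual methods.

```python
import random

random.seed(1)  # reproducible toy run

# Synthetic stand-in for the recording step: each "trial" is a vector
# of firing rates across a small electrode array, drawn from one of two
# overlapping distributions (left vs right lever).

N_ELECTRODES = 32

def record_trial(lever):
    base_rate = 5.0 if lever == "left" else 7.0
    return [random.gauss(base_rate, 4.0) for _ in range(N_ELECTRODES)]

def mean_vector(trials):
    return [sum(col) / len(trials) for col in zip(*trials)]

# "Training": record labelled trials and build one template per lever.
templates = {lever: mean_vector([record_trial(lever) for _ in range(50)])
             for lever in ("left", "right")}

def decode(pattern):
    # nearest-template classification by squared Euclidean distance
    def dist(template):
        return sum((p - t) ** 2 for p, t in zip(pattern, template))
    return min(templates, key=lambda lever: dist(templates[lever]))

# Evaluate on fresh simulated trials.
trials = [("left", record_trial("left")) for _ in range(100)] + \
         [("right", record_trial("right")) for _ in range(100)]
accuracy = sum(decode(p) == lever for lever, p in trials) / len(trials)
print(f"decoding accuracy on synthetic data: {accuracy:.0%}")
```

Because the two distributions overlap, the classifier is well above chance but short of perfect, which is the qualitative regime the rats’ 60 to 72 per cent figure sits in.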

The rats’ ability to cooperate was reinforced by rewarding both rats if the communication resulted in a correct outcome. Such reinforcement led to the transmission of clearer signals, improving the rats’ success rate compared with cases where decoders were given a pre-recorded signal. This was a big surprise, says Nicolelis. “The encoder’s brain activity became more precise. This could have happened because the animal enhanced its attention during the performance of the next trial after a decoder error.”

If the decoders had not been primed to relate specific activity with the left or right lever prior to being linked with an encoder, the only consequence would have been that it took a bit more time for them to learn the task while interacting with the encoder, says Nicolelis. “We simply primed the decoder so that it would get the gist of the task it had to perform.” In unpublished monkey experiments involving a similar task, the team did not need to prime the animals at all.

In a second experiment, rats were trained to explore a hole with their whiskers and indicate if it was narrow or wide by turning to the left or right. Pairs of rats were then connected as before, but this time the implants were placed in their primary somatosensory cortex, an area that processes touch. Decoder rats were able to indicate over 60 per cent of the time the width of a gap that only the encoder rats were exploring.

Finally, encoder rats were held still while their whiskers were stroked with metal bars. The researchers observed patterns of activity in the somatosensory cortex of the decoder rats that matched that of the encoder rats, even though the whiskers of the decoder rats had not been touched.

Pairs of rats were even able to cooperate across continents using cyberspace. Brain signals from an encoder rat at the Edmond and Lily Safra International Institute of Neuroscience of Natal in Brazil were sent to a decoder in Nicolelis’s lab in North Carolina via the internet. Though there was a slight transmission delay, the decoder rat still performed with an accuracy similar to that of rats in closer proximity to encoders.

Christopher James at the University of Warwick, UK, who works on brain-to-machine interfaces for prostheses, says the work is a “wake-up call” for people who haven’t caught up with recent advances in brain research.

We have the technology to create implants for long-term use, he says. What is missing, though, is a full understanding of the brain processes involved. In this case, Nicolelis’s team is “blasting a relatively large area of the brain with a signal they’re not sure is 100 per cent correct,” he says.

That’s because the exact information being communicated between the rats’ brains is not clear. The brain activity of the encoders cannot be transferred precisely to the decoders because that would require matching the patterns neuron for neuron, which is not currently possible. Instead, the two patterns are closely related in terms of their frequency and spatial representation.

“We are still using a sledgehammer to crack a walnut,” says James. “They’re not hearing the voice of God.” But the rats are certainly sending and receiving more than a binary signal that simply points to one or other lever, he says. “I think it will be possible one day to transfer an abstract thought.”

The decoders have to interpret relatively complex brain patterns, says Marshall Shuler at Johns Hopkins University in Baltimore, Maryland. The animals learn the relevance of these new patterns and their brains adapt to the signals. “But the decoders are probably not having the same quality of experience as the encoders,” he says.

Patrick Degenaar at Newcastle University in the UK says that the military might one day be able to deploy genetically modified insects or small mammals that are controlled by the brain signals of a remote human operator. These would be drones that could feed themselves, he says, and could be used for surveillance or even assassination missions. “You’d probably need a flying bug to get near the head [of someone to be targeted],” he says.

Nicolelis is most excited about the future of multiple networked brains. He is currently trialling the implants in monkeys, getting them to work together telepathically to complete a task. For example, each monkey might only have access to part of the information needed to make the right decision in a game. Several monkeys would then need to communicate with each other in order to successfully complete the task.

“In the distant future we may be able to communicate via a brain-net,” says Nicolelis. “I would be very glad if the brain-net my great grandchildren used was due to their great grandfather’s work.”

Journal reference: Scientific Reports, DOI: 10.1038/srep01319

$300 glasses sold on Amazon will correct colorblindness

Mark Changizi and Tim Barber turned research on human vision and blood flow into colorblindness-correcting glasses you can buy on Amazon. Here’s how they did it.

About 10 years ago, Mark Changizi started to develop research on human vision and its sensitivity to changes in skin color. Like many academics, Changizi, an accomplished neurobiologist, went on to pen a book. The Vision Revolution challenged prevailing theories–no, we don’t see red only to spot berries and fruits amid the vegetation–and detailed the surprising reasons we see the way we do.

If it were up to academia, Changizi’s story might have ended there. “I started out in math and physics, trying to understand the beauty in these fields,” he says. “You are taught, or come to believe, that applying something useful is inherently not interesting.”

Not only did Changizi manage to beat that impulse out of himself, but he and Tim Barber, a friend from middle school, teamed up several years ago to form a joint research institute. 2AI Labs allows the pair to focus on research into cognition and perception in humans and machines, and then to commercialize it. The most recent project? A pair of glasses with filters that just happen to cure colorblindness.

Changizi and Barber didn’t set out to cure colorblindness. Changizi just put forth the idea that humans’ ability to see colors evolved to detect oxygenation and hemoglobin changes in the skin so they could tell if someone was scared, uncomfortable or unhealthy. “We as humans blush and blanch, regardless of overall skin tone,” Barber explains. “We associate color with emotion. People turn purple with anger in every culture.” Once Changizi fully understood the connection between color vision and blood physiology, he determined it would be possible to build filters that enhance the ability to see those subtle changes by making veins more or less distinct–by sharpening the ability to see the red-green or blue-yellow parts of the spectrum. He and Barber then began the process of patenting their invention.

When they started thinking about commercial applications, Changizi and Barber both admit their minds went straight to television cameras. Changizi was fascinated by the possibilities of infusing an already-enhanced HDTV experience with the capacity to see colors even more clearly.

“We looked into camera photoreceptors and decided that producing a filter for a camera would be too difficult and expensive,” Barber says. The easiest possible approach was not electronic at all, he says. Instead, they worked to develop a lens that adjusts the color signal that hits the human eye, and the O2Amp was born.

The patented lens technology simply perfects what the eye does naturally: it reads the changes in skin tone brought on by a flush, bruise, or blanch. The filters can be used in a range of products, from indoor lighting (especially for hospital trauma centers) to windows to, perhaps eventually, face cream. For now, one of the most promising applications is in glasses that correct colorblindness.

A veteran entrepreneur who founded Clickbank and Keynetics, among other ventures, Barber wasn’t interested in chasing the perfect color filter for a demo pair of glasses. “If you look for perfection you could spend a million dollars. And it is just a waste of time,” he says. A bunch of prototypes were created, and rejected. Some were too shiny, others too iridescent. “We finally found something that worked to get the tone spectrum we wanted and to produce a more interesting view of the world.”

What they got was about 90 percent of the way to total color enhancement across three different types of lenses: Oxy-Iso, Hemo-Iso, and Oxy-Amp. While the Amp, which boosts the wearer’s general perception of blood oxygenation under the skin (your own vision, but better), is the centerpiece of the technology, it was the Oxy-Iso, the lens that isolates and enhances the red-green part of the spectrum, that generated some unexpected feedback from users. Changizi says the testers told them that the Oxy-Iso lens appeared to “cure” their colorblindness.

Changizi knew this was a possibility, as the filter concentrates enhancement exactly where red-green colorblind people have a block. Professor Daniel Bor, a red-green colorblind neuroscientist at the University of Sussex, tried them and was practically giddy with the results. Changizi published Bor’s testimony on his blog: “When I first put one of them on [the Oxy-Iso], I got a shiver of excitement at how vibrant and red lips, clothes and other objects around me seemed. I’ve just done a quick 8 plate Ishihara colour blindness test. I scored 0/8 without the specs (so obviously colour blind), but 8/8 with them on (normal colour vision)!”

Despite these early testimonials, the pair thought that the O2Amp glasses would be primarily picked up by hospitals. The Hemo-Iso filter enhances variations along the yellow-blue dimension, which makes it easier for healthcare providers to see veins. “It’s a little scary to think about people drawing blood who can’t see the veins,” Barber says. EMT workers were enthusiastic users thanks to the Hemo-Iso’s capability of making bruising more visible.

From there, Barber and Changizi embarked on a two-year odyssey to find a manufacturer to make the eyewear that would enable them to sell commercially. Through 2AI Labs, they were able to push their discoveries into mainstream applications without having to rely on grants; any funding they earn from their inventions is reinvested. They also skipped some of the traditional development steps. “We bootstrapped the bench testing and we didn’t do any market research,” Barber says.

Plenty of cold calling to potential manufacturers ensued. “As scientists talking to manufacturers, it seemed like we were speaking a different language,” Barber says. Not to mention looking strange as they walked around wearing the purple and green-tinted glasses at trade shows. Changizi says they finally got lucky last year and found a few manufacturers able to produce the specialized specs. All are available on Amazon for just under $300.

Changizi and Barber aren’t done yet. In addition to overseeing sales reps who are trying to get the glasses into the hands of more buyers, the two are in talks with companies such as Oakley and Ray-Ban to put the technology into sunglasses. Imagine, says Changizi, if you could more easily see if you are getting a sunburn at the beach despite the glare. They’re testing a mirrored O2Amp lens specially for poker players (think: all the better to see the flush of a bluffer). Changizi says they are also working with cosmetics companies to embed the technology in creams that would enhance the skin’s vasculature. Move over Hope in a Jar. Barber says it’s not clear how profitable any of this will be yet: “We just want the technology to be used.”

http://www.popsci.com/science/article/2013-02/amazing-story-300-glasses-can-cure-colorblindness?page=2

Lab rats given a 6th sense through a brain-machine interface

Duke University researchers have effectively given laboratory rats a “sixth sense” using an implant in their brains.

An experimental device allowed the rats to “touch” infrared light – which is normally invisible to them.

The team at Duke University fitted the rats with an infrared detector wired up to microscopic electrodes that were implanted in the part of their brains that processes tactile information.

The results of the study were published in Nature Communications journal.

The researchers say that, in theory at least, a human with a damaged visual cortex might be able to regain sight through a device implanted in another part of the brain.

Lead author Miguel Nicolelis said this was the first time a brain-machine interface has augmented a sense in adult animals.

The experiment also shows that a new sensory input can be interpreted by a region of the brain that normally does something else (without having to “hijack” the function of that brain region).

“We could create devices sensitive to any physical energy,” said Prof Nicolelis, from the Duke University Medical Center in Durham, North Carolina.

“It could be magnetic fields, radio waves, or ultrasound. We chose infrared initially because it didn’t interfere with our electrophysiological recordings.”

His colleague Eric Thomson commented: “The philosophy of the field of brain-machine interfaces has until now been to attempt to restore a motor function lost to lesion or damage of the central nervous system.

“This is the first paper in which a neuroprosthetic device was used to augment function – literally enabling a normal animal to acquire a sixth sense.”

In their experiments, the researchers used a test chamber with three light sources that could be switched on randomly.

They taught the rats to choose the active light source by poking their noses into a port to receive a sip of water as a reward. They then implanted the microelectrodes, each about a tenth the diameter of a human hair, into the animals’ brains. These electrodes were attached to the infrared detectors.
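
Conceptually, the implant is a transducer: a sensor reading has to be turned into a stimulation pattern the cortex can pick up. One common scheme is to scale electrical pulse rate with signal intensity; the sketch below assumes that scheme, with made-up thresholds and rates, purely for illustration rather than the study’s actual parameters.

```python
# Illustrative sketch of the sensory-substitution idea: map a
# normalized infrared reading onto a stimulation pulse rate for
# implanted electrodes. All numbers are assumptions.

MIN_RATE_HZ = 0.0     # no stimulation when no infrared is detected
MAX_RATE_HZ = 400.0   # assumed ceiling for the stimulator

def stimulation_rate(ir_intensity, threshold=0.05):
    """Convert a normalized IR reading (0..1) to a pulse rate in Hz."""
    if ir_intensity < threshold:
        return MIN_RATE_HZ
    # linear mapping above threshold, clipped to the stimulator ceiling
    span = (ir_intensity - threshold) / (1.0 - threshold)
    return min(MAX_RATE_HZ, MIN_RATE_HZ + span * (MAX_RATE_HZ - MIN_RATE_HZ))

for reading in (0.0, 0.1, 0.5, 1.0):
    print(reading, round(stimulation_rate(reading), 1))
```

The brain’s side of the bargain is the part no code can stand in for: learning, over weeks, that this pulse rate carries information about an invisible light source.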

The scientists then returned the animals to the test chamber. At first, the rats scratched at their faces, indicating that they were interpreting the lights as touch. But after a month the animals learned to associate the signal in their brains with the infrared source.

They began to search actively for the signal, eventually achieving perfect scores in tracking and identifying the correct location of the invisible light source.

One key finding was that enlisting the touch cortex to detect infrared light did not reduce its ability to process touch signals.

http://www.bbc.co.uk/news/science-environment-21459745

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Scientists Construct First Detailed Map of How the Brain Organizes Everything We See

Our eyes may be our window to the world, but how do we make sense of the thousands of images that flood our retinas each day? Scientists at the University of California, Berkeley, have found that the brain is wired to put in order all the categories of objects and actions that we see. They have created the first interactive map of how the brain organizes these groupings.

The result — achieved through computational models of brain imaging data collected while the subjects watched hours of movie clips — is what researchers call “a continuous semantic space.”

“Our methods open a door that will quickly lead to a more complete and detailed understanding of how the brain is organized. Already, our online brain viewer appears to provide the most detailed look ever at the visual function and organization of a single human brain,” said Alexander Huth, a doctoral student in neuroscience at UC Berkeley and lead author of the study published Dec. 19 in the journal Neuron.

A clearer understanding of how the brain organizes visual input can help with the medical diagnosis and treatment of brain disorders. These findings may also be used to create brain-machine interfaces, particularly for facial and other image recognition systems. Among other things, they could improve a grocery store self-checkout system’s ability to recognize different kinds of merchandise.

“Our discovery suggests that brain scans could soon be used to label an image that someone is seeing, and may also help teach computers how to better recognize images,” said Huth.

It has long been thought that each category of object or action humans see — people, animals, vehicles, household appliances and movements — is represented in a separate region of the visual cortex. In this latest study, UC Berkeley researchers found that these categories are actually represented in highly organized, overlapping maps that cover as much as 20 percent of the brain, including the somatosensory and frontal cortices.

To conduct the experiment, the brain activity of five researchers was recorded via functional Magnetic Resonance Imaging (fMRI) as they each watched two hours of movie clips. The brain scans simultaneously measured blood flow in thousands of locations across the brain.

Researchers then used regularized linear regression analysis, which finds correlations in data, to build a model showing how each of the roughly 30,000 locations in the cortex responded to each of the 1,700 categories of objects and actions seen in the movie clips. Next, they used principal components analysis, a statistical method that can summarize large data sets, to find the “semantic space” that was common to all the study subjects.
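The two analysis steps described above can be sketched in a few lines. This is an illustrative toy, not the authors' code: it uses tiny synthetic data and hypothetical sizes to show how regularized (ridge) regression relates category labels to responses at each cortical location, and how PCA over the fitted weights then yields a shared low-dimensional "semantic space."

```python
# Toy sketch of the study's two-step analysis (synthetic data, made-up sizes).
import numpy as np

rng = np.random.default_rng(0)

n_timepoints = 200   # movie time samples
n_categories = 50    # stand-in for the ~1,700 labeled categories
n_locations = 300    # stand-in for the ~30,000 cortical locations

# Design matrix: which categories appear on screen at each timepoint.
X = rng.integers(0, 2, size=(n_timepoints, n_categories)).astype(float)
# Simulated fMRI responses at each cortical location.
true_w = rng.normal(size=(n_categories, n_locations))
Y = X @ true_w + 0.1 * rng.normal(size=(n_timepoints, n_locations))

# Step 1: ridge regression, closed form (X'X + aI)^-1 X'Y.
# Each column of W says how strongly each category drives one location.
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_categories), X.T @ Y)

# Step 2: PCA (via SVD of the centered weight matrix) summarizes the
# category-by-location weights into a few shared semantic dimensions.
Wc = W - W.mean(axis=0)
_, _, Vt = np.linalg.svd(Wc, full_matrices=False)
semantic_space = Wc @ Vt[:4].T   # each category gets a 4-d coordinate

print(W.shape, semantic_space.shape)  # (50, 300) (50, 4)
```

In the actual study the regression was fit per subject on far larger matrices, and the principal components were computed so as to capture the semantic dimensions common across all subjects; the sketch only shows the shape of the computation.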

The results are presented in multicolored, multidimensional maps showing the more than 1,700 visual categories and their relationships to one another. Categories that activate the same brain areas have similar colors. For example, humans are green, animals are yellow, vehicles are pink and violet and buildings are blue.

“Using the semantic space as a visualization tool, we immediately saw that categories are represented in these incredibly intricate maps that cover much more of the brain than we expected,” Huth said.

Other co-authors of the study are UC Berkeley neuroscientists Shinji Nishimoto, An T. Vu and Jack Gallant.

Journal Reference:

Alexander G. Huth, Shinji Nishimoto, An T. Vu, Jack L. Gallant. A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain. Neuron, 2012; 76 (6): 1210. DOI: 10.1016/j.neuron.2012.10.014

http://www.sciencedaily.com/releases/2012/12/121219142257.htm

Mother-Child Connection: Scientists Discover Children’s Cells Living in Mothers’ Brains, Including Male Cells Living in the Female Brain for Decades

The link between a mother and child is profound, and new research suggests a physical connection even deeper than anyone thought. The psychological and physical bonds between mother and child begin during gestation, when the mother is everything to the developing fetus, supplying warmth and sustenance while her heartbeat provides a soothing, constant rhythm.

The physical connection between mother and fetus is provided by the placenta, an organ built of cells from both mother and fetus, which serves as a conduit for the exchange of nutrients, gases, and wastes. Cells may migrate through the placenta between the mother and the fetus, taking up residence in many organs of the body, including the lung, thyroid, muscle, liver, heart, kidney, and skin. These migrant cells may have a broad range of impacts, from tissue repair and cancer prevention to sparking immune disorders.

It is remarkable that cells from one individual so commonly integrate into the tissues of another distinct person. We are accustomed to thinking of ourselves as singular, autonomous individuals, and these foreign cells seem to belie that notion, suggesting that most people carry remnants of other individuals. As remarkable as this may be, stunning results from a new study show that cells from other individuals are also found in the brain. In this study, male cells were found in the brains of women and had been living there, in some cases, for several decades. What impact they may have had is for now only a guess, but the study revealed that these cells were less common in the brains of women who had Alzheimer’s disease, suggesting they may be related to the health of the brain.

We all consider our bodies to be our own unique being, so the notion that we may harbor cells from other people in our bodies seems strange. Even stranger is the thought that, although we certainly consider our actions and decisions as originating in the activity of our own individual brains, cells from other individuals are living and functioning in that complex structure. However, the mixing of cells from genetically distinct individuals is not at all uncommon. This condition is called chimerism, after the fire-breathing Chimera of Greek mythology, a creature that was part serpent, part lion, and part goat. Naturally occurring chimeras are far less ominous, though, and include such creatures as the slime mold and corals.

Microchimerism is the persistent presence of a few genetically distinct cells in an organism. This was first noticed in humans many years ago when cells containing the male “Y” chromosome were found circulating in the blood of women after pregnancy. Since these cells are genetically male, they could not have been the women’s own, but most likely came from their babies during gestation.

In this new study, scientists observed that microchimeric cells are not only found circulating in the blood, they are also embedded in the brain. They examined the brains of deceased women for the presence of cells containing the male “Y” chromosome. They found such cells in more than 60 percent of the brains and in multiple brain regions. Since Alzheimer’s disease is more common in women who have had multiple pregnancies, they suspected that the number of fetal cells would be greater in women with AD compared to those who had no evidence for neurological disease. The results were precisely the opposite: there were fewer fetal-derived cells in women with Alzheimer’s. The reasons are unclear.

Microchimerism most commonly results from the exchange of cells across the placenta during pregnancy; however, there is also evidence that cells may be transferred from mother to infant through nursing. In addition to exchange between mother and fetus, there may be exchange of cells between twins in utero, and there is also the possibility that cells from an older sibling residing in the mother may find their way back across the placenta to a younger sibling during the latter’s gestation. Women may have microchimeric cells both from their mother as well as from their own pregnancies, and there is even evidence for competition between cells from grandmother and infant within the mother.

What it is that fetal microchimeric cells do in the mother’s body is unclear, although there are some intriguing possibilities. For example, fetal microchimeric cells are similar to stem cells in that they are able to become a variety of different tissues and may aid in tissue repair. One research group investigating this possibility followed the activity of fetal microchimeric cells in a mother rat after the maternal heart was injured: they discovered that the fetal cells migrated to the maternal heart and differentiated into heart cells, helping to repair the damage. In animal studies, microchimeric cells were found in maternal brains where they became nerve cells, suggesting they might be functionally integrated in the brain. It is possible that the same may be true of such cells in the human brain.

These microchimeric cells may also influence the immune system. A fetal microchimeric cell from a pregnancy is recognized by the mother’s immune system partly as belonging to the mother, since the fetus is genetically half identical to the mother, but partly foreign, due to the father’s genetic contribution. This may “prime” the immune system to be alert for cells that are similar to the self, but with some genetic differences. Cancer cells which arise due to genetic mutations are just such cells, and there are studies which suggest that microchimeric cells may stimulate the immune system to stem the growth of tumors. Many more microchimeric cells are found in the blood of healthy women compared to those with breast cancer, for example, suggesting that microchimeric cells can somehow prevent tumor formation. In other circumstances, the immune system turns against the self, causing significant damage. Microchimerism is more common in patients suffering from Multiple Sclerosis than in their healthy siblings, suggesting chimeric cells may have a detrimental role in this disease, perhaps by setting off an autoimmune attack.

This is a burgeoning new field of inquiry with tremendous potential for novel findings as well as for practical applications. But it is also a reminder of our interconnectedness.

http://www.scientificamerican.com/article.cfm?id=scientists-discover-childrens-cells-living-in-mothers-brain

The Death of “Near Death” Experiences?

You careen headlong into a blinding light. Around you, phantasms of people and pets lost. Clouds billow and sway, giving way to a gilded and golden entrance. You feel the air, thrust downward by delicate wings. Everything is soothing, comforting, familiar. Heaven.

It’s a paradise that some experience during an apparent demise. The surprising consistency of heavenly visions during a “near death experience” (or NDE) indicates for many that an afterlife awaits us. Religious believers interpret these similar yet varying accounts like blind men exploring an elephant—they each feel something different (the tail is a snake and the legs are tree trunks, for example); yet all touch the same underlying reality. Skeptics point to the curious tendency for Heaven to conform to human desires, or for Heaven’s fleeting visage to be so dependent on culture or time period.

Heaven, in a theological view, has some kind of entrance. When you die, this entrance is supposed to appear—a Platform 9 ¾ for those running towards the grave. Of course, the purported way to see Heaven without having to take the final run at the platform wall is the NDE. Thrust back into popular consciousness by a surgeon claiming that “Heaven is Real,” the NDE has come under both theological and scientific scrutiny for its supposed ability to preview the great gig in the sky.

But getting to see Heaven is hell—you have to die. Or do you?

This past October, neurosurgeon Dr. Eben Alexander claimed that “Heaven is Real”, making the cover of the now defunct Newsweek magazine. His account of Heaven was based on a series of visions he had while in a coma, suffering the ravages of a particularly vicious case of bacterial meningitis. Alexander claimed that because his neocortex was “inactivated” by this malady, his near death visions indicated an intellect apart from the grey matter, and therefore a part of us survives brain-death.

Alexander’s resplendent descriptions of the afterlife were intriguing and beautiful, but were also promoted as scientific proof. Because Alexander was a brain “scientist” (more accurately, a brain surgeon), his account carried apparent weight.

Scientifically, Alexander’s claims have been roundly criticized. Academic clinical neurologist Steve Novella removes the foundation of Alexander’s whole claim by noting that his assumption of cortex “inactivation” is flawed:

Alexander claims there is no scientific explanation for his experiences, but I just gave one. They occurred while his brain function was either on the way down or on the way back up, or both, not while there was little to no brain activity.

In another takedown of the popular article, neuroscientist Sam Harris (with characteristic sharpness) also points out this faulty premise, and notes that Alexander’s evidence for such inactivation is lacking:

The problem, however, is that “CT scans and neurological examinations” can’t determine neuronal inactivity—in the cortex or anywhere else. And Alexander makes no reference to functional data that might have been acquired by fMRI, PET, or EEG—nor does he seem to realize that only this sort of evidence could support his case.

Without a scientific foundation for Alexander’s claims, skeptics suggest he had a NDE later fleshed out by confirmation bias and colored by culture. Harris concludes in a follow-up post on his blog, “I am quite sure that I’ve never seen a scientist speak in a manner more suggestive of wishful thinking. If self-deception were an Olympic sport, this is how our most gifted athletes would appear when they were in peak condition.”

And these takedowns have company. Paul Raeburn in the Huffington Post, speaking of Alexander’s deathbed vision being promoted as a scientific account, wrote, “We are all demeaned, and our national conversation is demeaned, by people who promote this kind of thing as science. This is religious belief; nothing else.” We might expect this tone from skeptics, but even the faithful chime in. Greg Stier writes in the Christian post that while he fully believes in the existence of Heaven, we should not take NDE accounts like Alexander’s as proof of it.

These criticisms of Alexander point out that what he saw was a classic NDE—the white light, the tunnel, the feelings of connectedness, etc. This is effective in dismantling his account of an “immaterial intellect” because, so far, most symptoms of an NDE are in fact scientifically explainable. [Another article on this site provides a thorough description of the evidence, as does this study.]

One might argue that the scientific description of NDE symptoms is merely the physical account of what happens as you cross over. A brain without oxygen may experience “tunnel vision,” but a brain without oxygen is also near death and approaching the afterlife, for example. This argument rests on the fact that you are indeed dying. But without the theological gymnastics, I think there is an overlooked yet critical aspect to the near death phenomenon, one that can render Platform 9 ¾ wholly solid. Studies have shown that you don’t have to be near death to have a near death experience.

“Dying”

In 1990, a study was published in the Lancet that looked at the medical records of people who experienced NDE-like symptoms as a result of some injury or illness. It showed that out of 58 patients who reported “unusual” experiences associated with NDEs (tunnels, light, being outside one’s own body, etc.), 30 of them were not actually in any danger of dying, although they believed they were [1]. The authors of the study concluded that this finding offered support to the physical basis of NDEs, as well as the “transcendental” basis.

Why would the brain react to death (or even imagined death) in such a way? Well, death is a scary thing. Scientific accounts of the NDE characterize it as the body’s psychological and physiological response mechanism to such fear, producing chemicals in the brain that calm the individual while inducing euphoric sensations to reduce trauma.

Imagine an alpine climber whose pick fails to catch the next icy outcropping as he or she plummets towards a craggy mountainside. If one truly believes the next experience he or she will have is an intimate acquainting with a boulder, similar NDE-like sensations may arise (i.e., “My life flashed before my eyes…”). We know this because these men and women have come back to us, emerging from a cushion of snow after their fall rather than becoming a mountain’s Jackson Pollock installation.

You do not have to be, in reality, dying to have a near-death experience. Even if you are dying (but survive), you probably won’t have one. What does this make of Heaven? It follows that if you aren’t even on your way to the afterlife, the scientifically explicable NDE symptoms point to neurology, not paradise.

This Must Be the Place

Explaining the near death experience in a purely physical way is not to say that people cannot have a transformative vision or intense mental journey. The experience is real and tells us quite a bit about the brain (while raising even more fascinating questions about consciousness). But emotional and experiential gravitas says nothing of Heaven, or the afterlife in general. A healthy imbibing of ketamine can induce the same feelings, but rarely do we consider this euphoric haze a glance of God’s paradise.

In this case, as in science, a theory can be shot through with experimentation. As Richard Feynman said, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

The experiment is exploring an NDE under different conditions. Can the same sensations be produced when you are in fact not dying? If so, your rapping on the Pearly Gates is an illusion, even if Heaven were real. St. Peter surely can tell the difference between a dying man and a hallucinating one.

The near death experience as a foreshadowing of Heaven is a beautiful theory perhaps, but wrong.

Barring a capricious conception of “God’s plan,” one can experience a beautiful white light at the end of a tunnel while still having a firm grasp on this mortal coil. This is the death of near death. Combine explainable symptoms with a plausible, physical theory as to why we have them and you get a description of what it is like to die, not what it is like to glimpse God.

Sitting atop clouds fluffy and white, Heaven may be waiting. We can’t prove that it is not. But rather than helping to clarify, the near death experience, not dependent on death, may only point to an ever-interesting and complex human brain, nothing more.

http://blogs.scientificamerican.com/guest-blog/2012/12/03/the-death-of-near-death-even-if-heaven-is-real-you-arent-seeing-it/