Brain decoder can eavesdrop on your inner voice


Talking to yourself used to be a strictly private pastime. That’s no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world.

“If you’re reading text in a newspaper or a book, you hear a voice in your own head,” says Brian Pasley at the University of California, Berkeley. “We’re trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak.”

When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words.

In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brain to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain’s temporal lobe were only active in response to certain aspects of sound, such as a specific frequency. One set of neurons might only react to sound waves with a frequency of 1000 hertz, for example, while another set responded only to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology).

The team hypothesised that hearing speech and thinking to oneself might spark some of the same neural signatures in the brain. They supposed that an algorithm trained to identify speech heard out loud might also be able to identify words that are thought.


To test the idea, they recorded brain activity in another seven people undergoing epilepsy surgery, while they looked at a screen that displayed text from either the Gettysburg Address, John F. Kennedy’s inaugural address or the nursery rhyme Humpty Dumpty.

Each participant was asked to read the text aloud, read it silently in their head and then do nothing. While they read the text out loud, the team worked out which neurons were reacting to what aspects of speech and generated a personalised decoder to interpret this information. The decoder was used to create a spectrogram – a visual representation of the different frequencies of sound waves heard over time. As each frequency correlates to specific sounds in each word spoken, the spectrogram can be used to recreate what had been said. They then applied the decoder to the brain activity that occurred while the participants read the passages silently to themselves.
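The pipeline described above can be sketched in miniature. The article does not say what model the decoder used, so this assumes a simple linear mapping from neural activity to spectrogram features, fit by ridge regression on synthetic data; every name and size below is illustrative, not the authors' actual code.

```python
import numpy as np

# Hypothetical sketch: fit a linear decoder from neural activity to
# spectrogram features on "overt speech" data, then apply it to new activity.
rng = np.random.default_rng(0)

n_samples, n_neurons, n_freq_bins = 500, 64, 32

# Assumed ground-truth linear tuning: each frequency bin is driven by
# a weighted combination of neurons (a stand-in for frequency-tuned cells).
W_true = rng.normal(size=(n_neurons, n_freq_bins))
neural = rng.normal(size=(n_samples, n_neurons))   # recorded activity
spectrogram = neural @ W_true                      # sound features "heard"

# Fit the decoder by ridge regression: W = (X'X + aI)^-1 X'Y.
alpha = 1.0
W_hat = np.linalg.solve(neural.T @ neural + alpha * np.eye(n_neurons),
                        neural.T @ spectrogram)

# Apply the trained decoder to held-out activity (the analogue of
# decoding silently read text) and check how well it matches.
neural_test = rng.normal(size=(100, n_neurons))
pred = neural_test @ W_hat
truth = neural_test @ W_true
corr = np.corrcoef(pred.ravel(), truth.ravel())[0, 1]
print(round(corr, 2))
```

In this noiseless toy setting the fit is nearly perfect; the article makes clear that real imagined-speech decoding is far noisier.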

Although the neural activity from imagined speech differs slightly from that of actual speech, the decoder was able to reconstruct which words several of the volunteers were thinking, using neural activity alone (Frontiers in Neuroengineering).

The algorithm isn’t perfect, says Stephanie Martin, who worked on the study with Pasley. “We got significant results but it’s not good enough yet to build a device.”

In practice, if the decoder is to be used by people who are unable to speak it would have to be trained on what they hear rather than their own speech. “We don’t think it would be an issue to train the decoder on heard speech because they share overlapping brain areas,” says Martin.

The team is now fine-tuning their algorithms by looking at the neural activity associated with speaking rate and different pronunciations of the same word, for example. “The bar is very high,” says Pasley. “It’s preliminary data, and we’re still working on making it better.”

The team have also turned their hand to predicting what songs a person is listening to by playing lots of Pink Floyd to volunteers, and then working out which neurons respond to what aspects of the music. “Sound is sound,” says Pasley. “It all helps us understand different aspects of how the brain processes it.”

“Ultimately, if we understand covert speech well enough, we’ll be able to create a medical prosthesis that could help someone who is paralysed, or locked in and can’t speak,” he says.

Several other researchers are also investigating ways to read the human mind. Some can tell what pictures a person is looking at, others have worked out what neural activity represents certain concepts in the brain, and one team has even produced crude reproductions of movie clips that someone is watching just by analysing their brain activity. So is it possible to put it all together to create one multisensory mind-reading device?

In theory, yes, says Martin, but it would be extraordinarily complicated. She says you would need a huge amount of data for each thing you are trying to predict. “It would be really interesting to look into. It would allow us to predict what people are doing or thinking,” she says. “But we need individual decoders that work really well before combining different senses.”

James Rothman, one of today’s winners of the 2013 Nobel Prize in Physiology or Medicine, hopes the award will help him secure funds for the research for which he won the prize.


STOCKHOLM (AP) — Two Americans and a German-American won the Nobel Prize in medicine on Monday for discovering how key substances are transported within cells, a process involved in such important activities as brain cell communication and the release of insulin.

James Rothman, 62, of Yale University, Randy Schekman, 64, of the University of California, Berkeley, and Dr. Thomas Sudhof, 57, of Stanford University shared the $1.2 million prize for their research on how tiny bubbles called vesicles act as cargo carriers inside cells.

This traffic control system ensures that the cargo is delivered to the right place at the right time and keeps activities inside cells from descending into chaos, the committee said. Defects can be harmful, leading to neurological diseases, diabetes and disorders affecting the immune system.

“Imagine hundreds of thousands of people who are traveling around hundreds of miles of streets; how are they going to find the right way? Where will the bus stop and open its doors so that people can get out?” Nobel committee secretary Goran Hansson said. “There are similar problems in the cell.”

The winners’ discoveries in the 1970s, ’80s and ’90s have helped doctors diagnose a severe form of epilepsy and immune deficiency diseases in children, Hansson said. In the future, scientists hope the research could lead to medicines against more common types of epilepsy, diabetes and other metabolism deficiencies, he added.

Schekman said he was awakened at 1 a.m. at his home in California by the chairman of the prize committee, just as he was suffering from jetlag after returning from a trip to Germany the night before.

“I wasn’t thinking too straight. I didn’t have anything elegant to say,” he told The Associated Press. “All I could say was ‘Oh my God,’ and that was that.”

He called the prize a wonderful acknowledgment of the work he and his students had done and said he knew it would change his life.

“I called my lab manager and I told him to go buy a couple bottles of Champagne and expect to have a celebration with my lab,” he said.

In the 1970s, Schekman discovered a set of genes that were required for vesicle transport, while Rothman revealed in the 1980s and 1990s how vesicles delivered their cargo to the right places. Also in the ’90s, Sudhof identified the machinery that controls when vesicles release chemical messengers from one brain cell that let it communicate with another.

“This is not an overnight thing. Most of it has been accomplished and developed over many years, if not decades,” Rothman told the AP.

Rothman said he lost grant money for the work recognized by the Nobel committee, but he will now reapply, hoping the Nobel prize will make a difference in receiving funding.

Sudhof, who was born in Germany but moved to the U.S. in 1983 and also has U.S. citizenship, told the AP he received the call from the committee while driving toward the city of Baeza, in southern Spain, where he was due to give a talk.

“I got the call while I was driving and like a good citizen I pulled over and picked up the phone,” he said. “To be honest, I thought at first it was a joke. I have a lot of friends who might play these kinds of tricks.”

The medicine prize kicked off this year’s Nobel announcements. The awards in physics, chemistry, literature, peace and economics will be announced by other prize juries this week and next. Each prize is worth 8 million Swedish kronor ($1.2 million).

Rothman and Schekman won the Albert Lasker Basic Medical Research Award for their research in 2002 — an award often seen as a precursor of a Nobel Prize. Sudhof won the Lasker award this year.

“I might have been just as happy to have been a practicing primary-care doctor,” Sudhof said after winning that prize. “But as a medical student I had interacted with patients suffering from neurodegeneration or acute clinical schizophrenia. It left an indelible mark on my memory.”

Jeremy Berg, former director of the National Institute of General Medical Sciences in Bethesda, Maryland, said Monday’s announcement was “long overdue” and widely expected because the research was “so fundamental, and has driven so much other research.”

Berg, who now directs the Institute for Personalized Medicine at the University of Pittsburgh, said the work provided the intellectual framework that scientists use to study how brain cells communicate and how other cells release hormones. In both cases, vesicles play a key role by delivering their cargo to the cell surface and releasing it to the outside, he told the AP.

So the work has indirectly affected research into virtually all neurological disease as well as other diseases, he said.

Established by Swedish industrialist Alfred Nobel, the Nobel Prizes have been handed out by award committees in Stockholm and Oslo since 1901. The winners always receive their awards on Dec. 10, the anniversary of Nobel’s death in 1896.

Last year’s Nobel medicine award went to Britain’s John Gurdon and Japan’s Shinya Yamanaka for their contributions to stem cell science.

Economic crisis in Greece has lowered its air pollution


EVEN the darkest cloud may have a silver lining. The sharp drop in air pollution that accompanied Greece’s economic crisis could be a boon to the nation’s health.

Mihalis Vrekoussis of the Cyprus Institute in Nicosia and colleagues used three satellites and a network of ground-based instruments to measure air pollution over Greece between 2007 and 2011. Levels of nitrogen dioxide fell over the whole country, with a particularly steep drop of 30 to 40 per cent over Athens. Nitrogen monoxide, carbon monoxide and sulphur dioxide also fell (Geophysical Research Letters, DOI: 10.1002/grl.50118).

Pollution levels have been falling since 2002, but the rate accelerated after 2008 by a factor of 3.5, says Vrekoussis. He found that the drop in pollution correlated with a decline in oil consumption, industrial activity and the size of the economy. “This suggests that the additional reported reduction in gas pollutant levels is due to the economic recession,” he says.

In Athens, a combination of heavy car use and lots of sunshine has created serious health problems, so city dwellers should see real benefits. Sunlight triggers chemical reactions that make car exhaust pollution more harmful, for instance by forming small particulates that cause respiratory diseases. “Hospital admissions for asthma should decline,” says Dwayne Heard of the University of Leeds in the UK.

It’s not all good news: despite the drop in pollutants, levels of ground-level ozone – another cause of respiratory disease – have risen. Ozone would normally be suppressed by nitrogen oxides, but those have declined. That will take the edge off the benefit, says Heard.

Greece isn’t the only country where air pollution has dropped. Nitrogen oxide levels fell across Europe after the 2008 financial crisis (Scientific Reports). In the US, nitrogen dioxide levels fell between 2005 and 2011, with the sharpest fall at the height of the recession (Atmospheric Chemistry and Physics).

Such declines can be one-offs, or governments can help make them permanent, says Ronald Cohen of the University of California, Berkeley, who led the US study. “A time of crisis is a real opportunity to initiate change.” After the 2008 financial downturn, for instance, the US and Europe committed to pollution cuts. “In 10 years, there will be an end to air pollution in the US and Europe,” says Cohen. “It’s an incredible success story.”

Greece, however, is not seizing the current opportunity, says Vrekoussis. “Investments in clean technologies and low-carbon green strategies have been abandoned,” he says. “I’m afraid that in the long run the negative effects will override the positives.”

Global greenhouse gas emissions initially fell in the wake of the financial crisis, but not by much. Emerging economies like China and India continued their economic growth, so a small emissions drop in 2009 was followed by a huge rise in 2010 which continued in 2011.

Scientists Construct First Detailed Map of How the Brain Organizes Everything We See


Our eyes may be our window to the world, but how do we make sense of the thousands of images that flood our retinas each day? Scientists at the University of California, Berkeley, have found that the brain is wired to put in order all the categories of objects and actions that we see. They have created the first interactive map of how the brain organizes these groupings.

The result — achieved through computational models of brain imaging data collected while the subjects watched hours of movie clips — is what researchers call “a continuous semantic space.”

“Our methods open a door that will quickly lead to a more complete and detailed understanding of how the brain is organized. Already, our online brain viewer appears to provide the most detailed look ever at the visual function and organization of a single human brain,” said Alexander Huth, a doctoral student in neuroscience at UC Berkeley and lead author of the study published Dec. 19 in the journal Neuron.

A clearer understanding of how the brain organizes visual input can help with the medical diagnosis and treatment of brain disorders. These findings may also be used to create brain-machine interfaces, particularly for facial and other image recognition systems. Among other things, they could improve a grocery store self-checkout system’s ability to recognize different kinds of merchandise.

“Our discovery suggests that brain scans could soon be used to label an image that someone is seeing, and may also help teach computers how to better recognize images,” said Huth.

It has long been thought that each category of object or action humans see — people, animals, vehicles, household appliances and movements — is represented in a separate region of the visual cortex. In this latest study, UC Berkeley researchers found that these categories are actually represented in highly organized, overlapping maps that cover as much as 20 percent of the brain, including the somatosensory and frontal cortices.

To conduct the experiment, the brain activity of five researchers was recorded via functional Magnetic Resonance Imaging (fMRI) as they each watched two hours of movie clips. The brain scans simultaneously measured blood flow in thousands of locations across the brain.

Researchers then used regularized linear regression analysis, which finds correlations in data, to build a model showing how each of the roughly 30,000 locations in the cortex responded to each of the 1,700 categories of objects and actions seen in the movie clips. Next, they used principal components analysis, a statistical method that can summarize large data sets, to find the “semantic space” that was common to all the study subjects.
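The two-stage analysis described above (regularized regression per cortical location, then principal components analysis across the fitted weights) can be sketched on toy data. The study's real dimensions, roughly 30,000 locations and 1,700 categories, are scaled down here, and the data are synthetic; this is an illustrative assumption about the pipeline, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_categories, n_voxels = 300, 40, 200

# Stimulus matrix: which categories were on screen at each timepoint.
X = rng.binomial(1, 0.1, size=(n_timepoints, n_categories)).astype(float)

# Synthetic voxel responses driven by category weights plus noise.
B_true = rng.normal(size=(n_categories, n_voxels))
Y = X @ B_true + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Stage 1: regularized (ridge) linear regression gives every voxel
# a weight for each category.
alpha = 10.0
B_hat = np.linalg.solve(X.T @ X + alpha * np.eye(n_categories), X.T @ Y)

# Stage 2: PCA (via SVD) on the category-by-voxel weight matrix finds
# the shared low-dimensional "semantic space".
B_centered = B_hat - B_hat.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(B_centered, full_matrices=False)
explained = (S**2) / (S**2).sum()   # variance explained per component
print(explained[:4].round(2))
```

Each row of `U` then places a category in the recovered semantic space, which is what the study's multicoloured maps visualise.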

The results are presented in multicolored, multidimensional maps showing the more than 1,700 visual categories and their relationships to one another. Categories that activate the same brain areas have similar colors. For example, humans are green, animals are yellow, vehicles are pink and violet and buildings are blue.

“Using the semantic space as a visualization tool, we immediately saw that categories are represented in these incredibly intricate maps that cover much more of the brain than we expected,” Huth said.

Other co-authors of the study are UC Berkeley neuroscientists Shinji Nishimoto, An T. Vu and Jack Gallant.

Journal Reference:

1. Alexander G. Huth, Shinji Nishimoto, An T. Vu, Jack L. Gallant. A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain. Neuron, 2012; 76 (6): 1210. DOI: 10.1016/j.neuron.2012.10.014

Berkeley Laser Fires Pulses Hundreds of Times More Powerful Than All the World’s Electric Plants Combined

Blink and you’ll miss it. Don’t blink, and you’ll still miss it.

Imagine a device capable of delivering more power than all of the world’s electric plants. But this is not a prop for the next James Bond movie. A new laser at Lawrence Berkeley National Laboratory was put through its paces July 20, delivering pulses with a petawatt of power once per second. A petawatt is 10¹⁵ watts, or 1,000,000,000,000,000 watts—about 400 times as much as the combined instantaneous output of all the world’s electric plants.

How is that even possible? Well, the pulses at the Berkeley Lab Laser Accelerator (BELLA) are both exceedingly powerful and exceedingly short. Each petawatt burst lasts just 40 femtoseconds, or 0.00000000000004 second. Since it fires just one brief pulse per second, the laser’s average power is only about 40 watts—the same as an incandescent bulb in a reading lamp.
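The 40-watt figure follows from simple arithmetic: energy per pulse is peak power times pulse duration, and average power is that energy times the repetition rate.

```python
# Back-of-the-envelope check of the figures quoted above.
peak_power_w = 1e15          # 1 petawatt
pulse_duration_s = 40e-15    # 40 femtoseconds
repetition_rate_hz = 1.0     # one pulse per second

energy_per_pulse_j = peak_power_w * pulse_duration_s
average_power_w = energy_per_pulse_j * repetition_rate_hz
print(round(energy_per_pulse_j), round(average_power_w))  # 40 joules, 40 W
```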

BELLA’s laser is not the first to pack so much power—a laser at Lawrence Livermore National Laboratory, just an hour’s drive inland from Berkeley, reached 1.25 petawatts in the 1990s. And the University of Texas at Austin has its own high-power laser, which hit the 1.1-petawatt mark in 2008. But the Berkeley laser is the first to deliver petawatt pulses with such frequency, the lab says. At full power, for comparison, the Texas Petawatt Laser can fire one shot per hour.

The Department of Energy plans to use the powerful laser to drive a very compact particle accelerator via a process called laser wakefield acceleration, boosting electrons to high energies for use in colliders or for imaging or medical applications. Electron beams are already in use to produce bright pulses of x-rays for high-speed imaging. An intense laser pulse can ionize the atoms in a gas, separating electrons from protons to produce a plasma. And laser-carved waves in the plasma sweep up electrons, accelerating them outward at nearly the speed of light.

BELLA director Wim Leemans says that the project’s first experiments will seek to accelerate beams of electrons to energies of 10 billion electron-volts (or 10 GeV) by firing the laser through a plasma-based apparatus about one meter long. The laser apparatus itself is quite a bit larger, filling a good-size room. For comparison, the recently repurposed Stanford Linear Accelerator Center produced electron beams of 50 GeV from an accelerator 3.2 kilometers in length.
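The figures quoted above translate into a striking difference in acceleration gradient, which a quick back-of-the-envelope calculation makes concrete:

```python
# Energy gained per metre of accelerator, from the numbers in the article.
bella_gev, bella_m = 10.0, 1.0      # 10 GeV over ~1 metre of plasma
slac_gev, slac_m = 50.0, 3200.0     # 50 GeV over 3.2 km at SLAC

bella_gradient = bella_gev / bella_m    # GeV per metre
slac_gradient = slac_gev / slac_m

print(round(bella_gradient / slac_gradient))  # 640: gradient ratio
```

That factor of roughly 640 is the promise of laser wakefield acceleration: comparable energies from machines metres, rather than kilometres, long.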
