Researchers Are Examining a 9,000-Year-Old Bison Mummy

The well-mummified specimen of a steppe bison, a now-extinct species that lived in the Ice Age, has intact organs.

By Marissa Fessenden
smithsonian.com

In the far reaches of East Siberia, in the north of the Ust-Yana district, a section of lake shore slumped in 2011, revealing the frozen 9,000-year-old body of a bison. Locals found the remains and delivered them to the Academy of Sciences in Yakutia, which realized this mummified bison was remarkably well preserved, reports The Siberian Times. Now, the first results from the necropsy are in, the researchers announced at the Annual Meeting of the Society of Vertebrate Paleontology.

The bison is a steppe bison, a species that lived during the early Holocene epoch, or 9,000 to 12,000 years ago. The mummy is in such good condition that the brain appears to be complete, though shrunken. Other organs, including the heart, blood vessels and stomach look to be close to their normal size, writes the Daily Mail. The find is a rare opportunity, explains Olga Potopova of the Mammoth Site of Hot Springs in South Dakota:

It is one out of three relatively complete steppe bison mummies that exist in the world, and it is the most complete out of those three.

The body is in excellent condition. Normally, we find the mummies that are significantly damaged by predators in the past, or by modern arctic foxes and others, as soon as mummies are thawed out from the permafrost.

Such processes happen very quickly, and a mummy that thaws out during summer may be gone in a few months forever.

Very few complete steppe bison have ever surfaced. This Siberian bison joins a much older steppe bison skeleton discovered in 2013 and nicknamed Bison Bob, as well as the remarkably well-mummified Alaskan steppe bison (a different but related species) named Blue Babe. However, records of animals from the Siberian specimen’s era, known as the Pleistocene-Holocene boundary, are rare, Potopova says.

“The exclusively good preservation of the Yukagir bison mummy allows direct anatomical comparisons with modern species of Bison and cattle, as well as with extinct species of bison that were gone at the Pleistocene-Holocene boundary,” Evgeny Maschenko, a scientist from the Paleontological Institute in Moscow working on the project, says in a press release.

Further study of the bison’s parasites and stomach contents could give the researchers a more complete picture of life in the Holocene. They’ve noted so far that this animal had very little fat and may have died of starvation. He was about 4 years old. But more clues could lead them to possible causes of the whole species’ extinction.

Read more here: http://www.smithsonianmag.com/smart-news/researchers-are-examining-9000-year-old-extinct-bison-mummy-180953284/

91-year-old Polish woman declared dead, then wakes up in mortuary

A Polish woman who spent 11 hours in cold storage in a mortuary after being declared dead has returned to her family, complaining of feeling cold. Officials say Janina Kolkiewicz, 91, was declared dead after an examination by the family doctor. However, mortuary staff were astonished to notice movement in her body bag while it was in storage. The police have launched an investigation.

Back home, Ms Kolkiewicz warmed up with a bowl of soup and two pancakes. Her family and doctor said they were in shock, according to the website of the Polish newspaper Dziennik Wschodni.

The woman’s niece, in the eastern Polish town of Ostrow Lubelski, summoned the doctor after coming home one morning to find that her aunt did not seem to be breathing or to have a pulse. After examining the woman, the family doctor declared her dead and wrote out her death certificate.

The body was taken to the mortuary and preparations were made for a funeral in two days’ time. “I was sure she was dead,” Dr Wieslawa Czyz told the television channel TVP. “I’m stunned, I don’t understand what happened. Her heart had stopped beating, she was no longer breathing,” Dr Czyz said.

However, the mortuary staff called some hours later to report that the woman was not yet dead, her niece told Dziennik Wschodni. The death certificate has been declared invalid, the newspaper says.

Ms Kolkiewicz told her relatives she felt “normal, fine” after returning home. She is apparently unaware of how near she came to the grave. “My aunt has no inkling of what happened since she has late-stage dementia,” Bogumila Kolkiewicz, her niece, told local media.

http://www.bbc.com/news/world-europe-30048087

Human thoughts used to switch on genes

Could a futuristic society of humans with the power to control their own biological functions ever become reality?

It’s not as out there as it sounds, now that the technical foundations have been laid. Researchers have created a link between thoughts and cells, allowing people to switch on genes in mice using just their thoughts.

“We wanted to be able to use brainwaves to control genes. It’s the first time anyone has linked synthetic biology and the mind,” says Martin Fussenegger, a bioengineer at ETH Zurich in Basel, Switzerland, who led the team behind the work.

They hope to use the technology to help people who are “locked-in” – that is, fully conscious but unable to move or speak – to do things like self-administer pain medication. It might also be able to help people with epilepsy control their seizures.

In theory, the technology could be used for non-medical purposes, too. For example, we could give ourselves a hormone burst on demand, much like in the Culture – Iain M. Banks’s utopian society, where people are able to secrete hormones and other chemicals to change their mood.

Mouse, meet man

Fussenegger’s team started by inserting a light-responsive gene into human kidney cells in a dish. The gene is activated, or expressed, when exposed to infrared light. The cells were engineered so that when the gene activated, it caused a cascade of chemical reactions leading to the expression of another gene – the one the team wanted to switch on.

Next, they put the cells into an implant about the size of a 10-pence piece or a US quarter, alongside an infrared LED that could be controlled wirelessly. The implant was inserted under the skin of a mouse. A semi-permeable membrane allowed vital nutrients from the animal’s blood supply to reach the cells inside.

With the mouse part of the experiment prepared, the team turned to the human volunteers. Eight people, wearing EEG devices that monitored their brainwaves, were taught how to conjure up different mental states that the device could recognise by their distinctive brainwaves.

The volunteers were shown meditation techniques to produce a “relaxed” pattern of brainwaves, and played a computer game to produce patterns that reflected deep concentration. They also used a technique known as biofeedback, in which they learned by trial and error to control their thoughts to switch on a set of lights on a computer.

By linking the volunteer’s EEG device to the wireless LED implant in the mouse, they were able to switch on the LED using any of the three mental states. This activated the light-responsive gene in the kidney cells, which, in turn, led to the activation of the target gene. A human protein was produced that passed through the implant’s membrane and into the rodent’s bloodstream, where it could be detected. “We picked a protein that made an enzyme that was easy to identify in the mouse as a proof of concept, but essentially we think we could switch on any target gene we liked,” says Fussenegger.
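The control loop described above — classify a mental state from EEG band power, then decide whether to switch on the implant’s LED — can be sketched in a few lines. This is a minimal illustration, not the team’s actual software; the band names, thresholds, and function names are all assumptions.

```python
# Hypothetical sketch of the EEG -> LED -> gene pipeline: classify one
# EEG reading into a mental state by band power, then decide whether
# the implant's infrared LED (and hence the gene) should be switched on.

def classify_state(alpha_power, beta_power):
    """Crude classifier: high alpha power is associated with relaxation,
    high beta power with concentration; anything else is neutral.
    Powers are assumed normalised to [0, 1]."""
    if alpha_power > 0.6 and beta_power < 0.3:
        return "relaxed"
    if beta_power > 0.6 and alpha_power < 0.3:
        return "concentrating"
    return "neutral"

def led_command(state):
    """Any recognised (non-neutral) state switches the LED on,
    which in turn activates the light-responsive gene."""
    return state in ("relaxed", "concentrating")

# Three example readings: (alpha, beta) band powers
readings = [(0.8, 0.1), (0.1, 0.9), (0.4, 0.4)]
states = [classify_state(a, b) for a, b in readings]
print(states)  # ['relaxed', 'concentrating', 'neutral']
print([led_command(s) for s in states])  # [True, True, False]
```

In the actual study the classification would be far more sophisticated, but the shape of the loop — recognise a state, emit a binary light command — is the same.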

Behaviour controlled

The possibilities this could open up extend as far as your imagination. For example, the implant cells could produce hormones, so how about giving yourself a burst of oxytocin before a stressful social event – just by concentrating on a computer game?

That’s possible in principle, Fussenegger says, but for now his team is focused on creating a device to help people who are locked-in, or those with chronic pain, medicate themselves. For people with epilepsy, a similar device could potentially pick up the specific electrical patterns that appear in the brain just before a seizure. It might be possible to engineer cells to react to this pattern and release drugs to lessen the seizure.

While the applications are futuristic, the work itself is very interesting, says Florian Wurm, head of cellular biotechnology at EPFL in Lausanne, Switzerland. He says it shows for the first time that you can link together two really important ideas – synthetic biology and mind control.

“But we have to consider the ethical and legal challenges associated with this kind of technology,” Wurm says. “The moment you can control genes by thought you might be able to interfere with human behaviour, perhaps against someone’s wishes.” He doesn’t want to paint a negative picture, though. “We shouldn’t close our eyes to these inventions. It’s not going to be made into a medical device any time soon but it’s interesting to consider who it could help.”

Fussenegger says he would like to start a clinical trial within 10 years.

Journal reference: Nature Communications, DOI: 10.1038/ncomms6392

http://www.newscientist.com/article/dn26538-human-thoughts-used-to-switch-on-genes.html

The man who can hear Wi-Fi wherever he walks

Frank Swain has been going deaf since his 20s. Now he has hacked his hearing so he can listen in to the data that surrounds us.

I am walking through my north London neighbourhood on an unseasonably warm day in late autumn. I can hear birds tweeting in the trees, traffic prowling the back roads, children playing in gardens and Wi-Fi leaching from their homes. Against the familiar sounds of suburban life, it is somehow incongruous and appropriate at the same time.

As I approach Turnpike Lane tube station and descend to the underground platform, I catch the now familiar gurgle of the public Wi-Fi hub, as well as the staff network beside it. On board the train, these sounds fade into silence as we burrow into the tunnels leading to central London.

I have been able to hear these fields since last week. This wasn’t the result of a sudden mutation or years of transcendental meditation, but an upgrade to my hearing aids. With a grant from Nesta, the UK innovation charity, sound artist Daniel Jones and I built Phantom Terrains, an experimental tool for making Wi-Fi fields audible.

Our modern world is suffused with data. Since radio towers began climbing over towns and cities in the early 20th century, the air has grown thick with wireless communication, the platform on which radio, television, cellphones, satellite broadcasts, Wi-Fi, GPS, remote controls and hundreds of other technologies rely. And yet, despite wireless communication becoming a ubiquitous presence in modern life, the underlying infrastructure has remained largely invisible.

Every day, we use it to read the news, chat to friends, navigate through cities, post photos to our social networks and call for help. These systems make up a huge and integral part of our lives, but the signals that support them remain intangible. If you have ever wandered in circles to find a signal for your cellphone, you will know what I mean.

Phantom Terrains opens the door to this world to a small degree by tuning into these fields. Running on a hacked iPhone, the software exploits the inbuilt Wi-Fi sensor to pick up details about nearby fields: router name, signal strength, encryption and distance. This wasn’t easy. Reams of cryptic variables and numerical values had to be decoded by changing the settings of our test router and observing the effects.

“On a busy street, we may see over a hundred independent wireless access points within signal range,” says Jones. The strength of the signal, direction, name and security level on these are translated into an audio stream made up of a foreground and background layer: distant signals click and pop like hits on a Geiger counter, while the strongest bleat their network ID in a looped melody. This audio is streamed constantly to a pair of hearing aids donated by US developer Starkey. The extra sound layer is blended with the normal output of the hearing aids; it simply becomes part of my soundscape. So long as I carry my phone with me, I will always be able to hear Wi-Fi.
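The mapping Jones describes — weak networks as background clicks, the strongest network as a foreground melody — can be sketched roughly as follows. This is an illustrative toy, not the Phantom Terrains code; the RSSI range, click rates, and note mapping are all assumptions.

```python
# Illustrative sketch of the sonification mapping: distant networks
# become Geiger-style background clicks whose rate tracks signal
# strength, while the strongest network gets a looped foreground melody.

def click_rate(rssi_dbm):
    """Map an RSSI value (assumed range roughly -90 dBm weak to
    -30 dBm strong) to background clicks per second (1-20)."""
    strength = max(0.0, min(1.0, (rssi_dbm + 90) / 60))
    return round(1 + strength * 19, 1)

def melody_for(ssid):
    """Turn a network name into a short looped note sequence
    (semitone offsets), so each router has a recognisable signature."""
    return [ord(ch) % 12 for ch in ssid[:8]]

# Hypothetical scan results: (SSID, RSSI in dBm)
networks = [("CafeGuest", -45), ("home-net", -70), ("office5G", -85)]
strongest = max(networks, key=lambda n: n[1])
foreground = melody_for(strongest[0])
background = {ssid: click_rate(rssi)
              for ssid, rssi in networks if ssid != strongest[0]}
```

A real implementation would then synthesise these parameters into audio and mix the result with the hearing aids’ normal output.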

Silent soundscape

From the roar of Oxford Circus, I make my way into the close silence of an anechoic booth on Harley Street. I have been spending a lot of time in these since 2012, when I was first diagnosed with hearing loss. I have been going deaf since my 20s, and two years ago I was fitted with hearing aids which instantly brought a world of missing sound back to my ears, although it took a little longer for my brain to make sense of it.

Recreating hearing is an incredibly difficult task. Unlike glasses, which simply bring the world into focus, digital hearing aids strive to recreate the soundscape, amplifying useful sound and suppressing noise. As this changes by the second, sorting one from the other requires a lot of programming.

In essence, I am listening to a computer’s interpretation of the soundscape, heavily tailored to what it thinks I need to hear. I am intrigued to see how far this editorialisation of my hearing can be pushed. If I have to spend my life listening to an interpretative version of the world, what elements could I add? The data that surrounds me seems a good place to start.

Mapping digital fields isn’t a new idea. Timo Arnall’s Light Painting Wi-Fi saw the artist and his collaborators build a rod of LEDs that lit up when exposed to digital signals, and carried it through the city at night. Captured in long exposure photographs, the topographies of wireless networks appear as a ghostly blue ribbon that waxes and wanes to the strength of nearby signals, revealing the digital landscape.

“Just as the architecture of nearby buildings gives insight to their origin and purpose, we can begin to understand the social world by examining the network landscape,” says Jones. For example, by tracing the hardware address transmitted with the Wi-Fi signal, the Phantom Terrains software can trace a router’s origin. We found that residential areas were full of low-security routers whereas commercial districts had highly encrypted routers and a higher bandwidth.

Despite the information gathered, most people would balk at the idea of being forced to listen to the hum and crackle of invisible fields all day. How long I will tolerate the additional noise in my soundscape remains to be seen. But there is more to the project than a critique of digital transparency.

With the advent of the internet of things, our material world is becoming ever more draped in sensors, and it is important to think about how we might make sense of all this information. Hearing is a fantastic platform for interpreting dynamic, continuous, broad spectrum data.

Its use in this way is being aided by a revolution in hearing technology. The latest models, such as the Halo brand used in our project and ReSound’s Linx, boast a specialised low-energy Bluetooth function that can link to compatible gadgets. This has a host of immediate advantages, such as allowing people to fine-tune their hearing aids using a smartphone as an interface. More crucially, the continuous connectivity elevates hearing aids to something similar to Google Glass – an always-on, networked tool that can seamlessly stream data and audio into your world.

Already, we are talking to our computers more, using voice-activated virtual assistants such as Apple’s Siri, Microsoft’s Cortana and OK Google. Always-on headphones that talk back, whispering into our ear like discreet advisers, might well catch on ahead of Google Glass.

“The biggest challenge is human,” says Jones. “How can we create an auditory representation that is sufficiently sophisticated to express the richness and complexity of an ever-changing network infrastructure, yet unobtrusive enough to be overlaid on our normal sensory experience without being a distraction?”

Only time will tell if we have succeeded in this respect. If we have, it will be a further step towards breaking computers out of the glass-fronted box they have been trapped inside for the last 50 years.

Auditory interfaces also prompt a rethink about how we investigate data and communicate those findings, setting aside the precise and discrete nature of visual presentation in favour of complex, overlapping forms. Instead of boiling the stock market down to the movement of one index or another, for example, we could one day listen to the churning mass of numbers in real time, our ears attuned for discordant melodies.

In Harley Street, the audiologist shows me the graphical results of my tests. What should be a wide blue swathe – good hearing across all volume levels and sound frequencies – narrows sharply, permanently, at one end.

There is currently no treatment that can widen this channel, but assistive hearing technology can tweak the volume and pitch of my soundscape to pack more sound into the space available. It’s not much to work with, but I’m hoping I can inject even more into this narrow strait, to hear things in this world that nobody else can.

http://www.newscientist.com/article/mg22429952.300-the-man-who-can-hear-wifi-wherever-he-walks.html?full=true

The future of virtual-reality travel

Glynis Freeman stands on a tower balcony in nighttime London, peering down at the dizzying lights hundreds of feet below.

The distant rumble of city traffic rises up from the streets. A gust of wind brushes her hair. Freeman smiles while swiveling her head in all directions to take in the view.

“That was cool,” she said a few minutes later. “I want to go back to London.”

That’s because Freeman was never physically in London. The Marietta, Georgia, woman was 4,000 miles away in an Atlanta hotel lobby, wearing a headset and trying out a demonstration of new technology that can place people in exotic virtual settings almost anywhere on the planet.

It’s all part of a new experiment by Marriott, the global hotel chain, to let guests sample virtual destinations with the Oculus Rift, a headset whose high-definition, 3-D display immerses wearers in a lifelike interactive world.

“We really want to appeal to the next generation of travelers,” said Karen Olivares, director of global brand marketing for Marriott.

Virtual travel is in its infancy and a long way from being mainstream. But the travel industry is intrigued by its potential, which goes far beyond Google Street View or online “virtual tours” of hotels and resorts.

The idea is not that virtual travel will replace real-world travel, because nobody in the industry would go for that. Instead, the travel industry hopes that people who sample virtual snippets of alluring vacations — say, rafting the Grand Canyon or hiking the Great Wall of China — will be persuaded to splurge on the real thing.

Behind the Oculus Rift

Driving this trend are next-generation systems such as the Oculus Rift and Sony’s Project Morpheus, which promise a leap forward in virtual-reality technology.

The much-hyped Oculus Rift headset looks like something a skier or scuba diver might wear and fits snugly over the wearer’s face, paired with headphones. Its crisp 3D display immerses you in an interactive world — a medieval village or a tropical jungle — which you sometimes can navigate with the help of a game controller.

The goggles come packed with a 100-degree field of view, extending beyond viewers’ peripheral vision. They have an accelerometer, gyroscope and compass to track the position of your head and sync the visuals to the direction where you are looking — allowing Oculus to improve on the sometimes jerky visuals of other virtual-reality systems.
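The head tracking described above typically works by sensor fusion: the gyroscope gives fast but drifting orientation updates, while the accelerometer and compass provide a slower absolute reference that pulls the estimate back. A minimal single-axis complementary filter — an assumption about the general technique, not Oculus’s actual code — looks like this:

```python
# Minimal one-axis (yaw) complementary filter: blend integrated gyro
# rate (fast, drifts over time) with an absolute reference angle from
# the compass/accelerometer (slow, noisy). alpha weights the gyro path.

def fuse(prev_angle, gyro_rate_dps, ref_angle, dt, alpha=0.98):
    gyro_estimate = prev_angle + gyro_rate_dps * dt
    return alpha * gyro_estimate + (1 - alpha) * ref_angle

# Simulate a head turning at 10 deg/s, sampled at 100 Hz for 1 second:
angle = 0.0
for step in range(100):
    true_angle = 10.0 * (step + 1) * 0.01  # ground-truth heading
    angle = fuse(angle, 10.0, true_angle, dt=0.01)
# angle now tracks the true heading of ~10 degrees
```

The point of the blend is that gyro drift is continuously corrected without introducing the jerkiness of relying on the noisy absolute sensors alone.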

The Oculus Rift was designed to enhance video gaming. But Facebook paid $2 billion for its maker, Oculus VR, in March, seeing the device as a potential future communication platform.

One developer for the Oculus Rift is excited about the technology’s long-term potential to transform travel.

“I could go for a run in the morning in some exotic beach and in the evening stroll the streets of some city … I could be a virtual storm chaser close to a tornado and even travel deep in the ocean,” the developer wrote in an online forum.

“In fact these experiences will be so real, without risk, and of course cheap that I might actually have second thoughts about traveling … Antarctica without the cold … Jungles without the heat and bugs … And people who will provide (this) content will make millions.”

Virtual journeying

Consumer versions of the Oculus Rift and Project Morpheus — which works in much the same way — aren’t expected on the market until 2015 at the earliest. But that hasn’t stopped the travel industry from tinkering with prototypes.

Thomas Cook, the international travel agency, announced a trial program in August that will allow customers at one of its stores in England to don Oculus Rifts and experience a flight on one of its airplanes or tour a Sentido resort.

And Marriott has been touring U.S. cities this fall with its “Teleporter,” a booth that invites visitors to climb inside, strap on an Oculus Rift and take a virtual tour of Wai’anapanapa Black Sand Beach in Maui and Tower 42 in London.

Viewers watch a 90-second video produced by Framestore, the British creative studio that has done visual effects for “Gravity” and other movies. To make the experiences feel more lifelike, fans in the booths blow soft breezes while misters recreate the feel of ocean spray.

Whether such virtual-reality glimpses inspire someone to take a real trip remains to be seen. But visitors to the booths on a recent weekday in Atlanta came away impressed.

“That was truly amazing. It reminded me of something from ‘Star Trek,’ ” said Lisa Lewis of Monroe, Louisiana. “London has always been a dream destination of mine. And just to get a feel for a place — it was much more than I imagined.”

http://www.cnn.com/2014/10/31/travel/virtual-reality-travel/index.html?c=&page=3

School accidentally told over 700 parents their children were missing

Most of the 717 parents of students at John Adams Elementary School in Corona received a phone message last Thursday notifying them of their children’s absence.

Shane Reichardt was at a Banning City Council meeting on Thursday morning, Oct. 23, when a phone message started the clock on what he called “the longest eight minutes of my life.”

The call was from John Adams Elementary School in Corona, where his son, Drew, is a second-grader. The message noted that Drew was absent from school that day.

But he wasn’t absent, and eight minutes later, after Reichardt had bolted out of the meeting and run to the parking lot for the 46-mile drive back to Corona, another call from the school arrived.

It was all a mistake.

Reichardt, 45, didn’t know at the time that most of the parents of the 717 students at John Adams had received the same computer-generated calls, the first one at 11:11 a.m. All he knew was that Drew had been missing for almost 2 1/2 hours.

He had left Drew with his childcare provider, who was to drop him off at school a little before 8 a.m. “At this point my concern increased exponentially,” he said.

What happened, according to Evita Tapia-Gonzalez, spokeswoman for the Corona-Norco Unified School District, was an “inadvertent error” on the part of a school employee operating Blackboard Messaging, a broadcast messaging system used for communications.

“There is an option to send messages to a filtered group,” she said. “This one was sent to all parents.”

Before such messages are distributed, she said, “The software prompts you to reread and review what was sent and to whom. It was human error coupled with technology error.”

Shortly after the first phone call, several parents who live near the school arrived on campus to get more information. “Site administration immediately was available to offer their apologies to parents and answer questions,” Tapia-Gonzalez said.

One of the parents, Angel Lomeli, said, “I was scared to death.” Lomeli has four children attending John Adams and received a separate call about each.

But another parent, Susan Fonseca, said while she was concerned, she knew her daughter, Alexxi, a fifth-grader, was in class because Fonseca’s husband had dropped her off at school that morning.

Tapia-Gonzalez said in an email that the error has prompted changes at John Adams. “The school has developed a process that provides additional layers of review before messages are sent to parents,” she wrote.

Reichardt, who has a management position with the Riverside County Fire Department, was in Banning representing the department that morning. “I take safety and open communication very seriously,” he said. “The notification system in the school could play a vital role when something goes wrong. If the school can’t use it correctly on a good day, I have concerns about how they will function on a bad day.”

He added, “To tell a parent their child is unaccounted for could quite possibly be the scariest thing a parent could ever imagine. I hope they find a way to prevent it from happening (again).”

http://www.pe.com/articles/school-753273-parents-call.html

Thief distracts staff by squirting her breast milk

A mother in central Germany came up with an unusual tactic to allegedly steal from a pharmacy on Monday. She distracted staff at a pharmacy in Darmstadt, Hesse, by lifting up her top and squirting her breast milk at them.

The mother entered the store at 4.25pm and asked to buy a breast pump, police reported.

But after handing over a €200 note to pay for her €20 purchase, she suddenly uncovered one breast and used her fingers to squirt milk from it at the pharmacist.

She then rummaged through the counter display and went to a second cash register.

Ignoring the pleas of staff and customers to cover herself up, she again rooted through the counter displays and unleashed a fresh spray of milk.

Apparently satisfied with her handiwork, she quickly left the pharmacy, leaving the breast pump behind.

The pharmacists only noticed that €100 was missing from their cash register some time later when counting the day’s takings.

Police believe the woman, who they described as having a “robust” figure, long dark hair tied into a ponytail and speaking an unknown language, stole the cash while customers and staff were distracted by her antics.

Officers described the woman’s antics as “almost unbelievable”.

http://www.thelocal.de/20141028/thief-squirts-her-breast-milk-to-steal-german-pharmacy-darmstadt

Watch A Bowling Ball And Feather Falling In A Vacuum

Here is the perfect example of how any two objects will fall at the same rate in a vacuum, brought to us by physicist Brian Cox. He checked out NASA’s Space Simulation Chamber located at the Space Power Facility in Ohio. With a volume of 22,653 cubic meters, it’s the largest vacuum chamber in the world.

In this clip from the BBC, Cox drops a bowling ball and a feather together, first in normal conditions, and then after virtually all the air has been sucked out of the chamber. We know what happens, but that doesn’t stop it from being awesome, especially with the team’s ecstatic faces.
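The physics behind the clip is simple to check: the time to fall from rest in a vacuum, t = sqrt(2h/g), contains no mass term, so the ball and feather must land together. A quick sketch (drop height chosen for illustration):

```python
# In a vacuum, fall time from rest depends only on height and gravity:
# t = sqrt(2h / g). Mass never appears, so a 7 kg bowling ball and a
# few-gram feather dropped from the same height land at the same moment.
import math

def fall_time_vacuum(height_m, g=9.81):
    """Time (seconds) to fall height_m metres with no air resistance."""
    return math.sqrt(2 * height_m / g)

# Same answer for both objects from an illustrative 30 m drop:
t = fall_time_vacuum(30.0)  # ~2.47 s, regardless of mass
```

With air present, drag (which scales with area and speed, not just mass) is what separates the two — which is exactly what the first, non-vacuum drop in the clip shows.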

http://www.iflscience.com/physics/dropping-bowling-ball-and-feather-vacuum

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Scientists propose existence and interaction of parallel worlds: Many Interacting Worlds theory challenges foundations of quantum science

Griffith University academics are challenging the foundations of quantum science with a radical new theory based on the existence of, and interactions between, parallel universes.

In a paper published in the journal Physical Review X, Professor Howard Wiseman and Dr Michael Hall from Griffith’s Centre for Quantum Dynamics, and Dr Dirk-Andre Deckert from the University of California, take interacting parallel worlds out of the realm of science fiction and into that of hard science.

The team proposes that parallel universes really exist, and that they interact. That is, rather than evolving independently, nearby worlds influence one another by a subtle force of repulsion. They show that such an interaction could explain everything that is bizarre about quantum mechanics.

Quantum theory is needed to explain how the universe works at the microscopic scale, and is believed to apply to all matter. But it is notoriously difficult to fathom, exhibiting weird phenomena which seem to violate the laws of cause and effect.

As the eminent American theoretical physicist Richard Feynman once noted: “I think I can safely say that nobody understands quantum mechanics.”

However, the “Many-Interacting Worlds” approach developed at Griffith University provides a new and daring perspective on this baffling field.

“The idea of parallel universes in quantum mechanics has been around since 1957,” says Professor Wiseman.

“In the well-known ‘Many-Worlds Interpretation,’ each universe branches into a bunch of new universes every time a quantum measurement is made. All possibilities are therefore realised — in some universes the dinosaur-killing asteroid missed Earth. In others, Australia was colonised by the Portuguese.

“But critics question the reality of these other universes, since they do not influence our universe at all. On this score, our ‘Many Interacting Worlds’ approach is completely different, as its name implies.”

Professor Wiseman and his colleagues propose that:

• The universe we experience is just one of a gigantic number of worlds. Some are almost identical to ours while most are very different;
• All of these worlds are equally real, exist continuously through time, and possess precisely defined properties;
• All quantum phenomena arise from a universal force of repulsion between ‘nearby’ (i.e. similar) worlds which tends to make them more dissimilar.

Dr Hall says the “Many-Interacting Worlds” theory may even create the extraordinary possibility of testing for the existence of other worlds.

“The beauty of our approach is that if there is just one world our theory reduces to Newtonian mechanics, while if there is a gigantic number of worlds it reproduces quantum mechanics,” he says.

“In between it predicts something new that is neither Newton’s theory nor quantum theory.

“We also believe that, in providing a new mental picture of quantum effects, it will be useful in planning experiments to test and exploit quantum phenomena.”

The ability to approximate quantum evolution using a finite number of worlds could have significant ramifications in molecular dynamics, which is important for understanding chemical reactions and the action of drugs.

Professor Bill Poirier, Distinguished Professor of Chemistry at Texas Tech University, has observed: “These are great ideas, not only conceptually, but also with regard to the new numerical breakthroughs they are almost certain to engender.”

Journal Reference:

1. Michael J. W. Hall, Dirk-André Deckert, Howard M. Wiseman. Quantum Phenomena Modeled by Interactions between Many Classical Worlds. Physical Review X, 2014; 4 (4). DOI: 10.1103/PhysRevX.4.041013

http://www.sciencedaily.com/releases/2014/10/141030101654.htm

Brain decoder can eavesdrop on your inner voice

Talking to yourself used to be a strictly private pastime. That’s no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world.

“If you’re reading text in a newspaper or a book, you hear a voice in your own head,” says Brian Pasley at the University of California, Berkeley. “We’re trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak.”

When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words.

In a previous study, Pasley and his colleagues recorded brain activity in people who already had electrodes implanted in their brain to treat epilepsy, while they listened to speech. The team found that certain neurons in the brain’s temporal lobe were active only in response to particular aspects of sound, such as a specific frequency. One set of neurons might react only to sound waves with a frequency of 1000 hertz, for example, while another set responds only to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269).

The team hypothesised that hearing speech and thinking to oneself might spark some of the same neural signatures in the brain. They supposed that an algorithm trained to identify speech heard out loud might also be able to identify words that are thought.

Mind-reading

To test the idea, they recorded brain activity in another seven people undergoing epilepsy surgery, while they looked at a screen that displayed text from either the Gettysburg Address, John F. Kennedy’s inaugural address or the nursery rhyme Humpty Dumpty.

Each participant was asked to read the text aloud, read it silently in their head and then do nothing. While they read the text out loud, the team worked out which neurons were reacting to what aspects of speech and generated a personalised decoder to interpret this information. The decoder was used to create a spectrogram – a visual representation of the different frequencies of sound waves heard over time. As each frequency correlates to specific sounds in each word spoken, the spectrogram can be used to recreate what had been said. They then applied the decoder to the brain activity that occurred while the participants read the passages silently to themselves.

Although neural activity during imagined speech differs slightly from that during actual speech, the decoder was able to reconstruct which words several of the volunteers were thinking from their neural activity alone (Frontiers in Neuroengineering, doi.org/whb).
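The train-on-overt, test-on-covert pipeline described above can be sketched with synthetic data. Everything in the sketch — the simulated frequency tuning, the array sizes, the plain least-squares decoder — is an illustrative stand-in, not the study's actual methods:

```python
import numpy as np

# Toy sketch with synthetic data (all sizes and names are illustrative,
# not from the study): simulate "neurons" that each respond to one
# frequency band of a spectrogram, fit a linear decoder on one half of
# the data (the "read aloud" phase), then reconstruct the unseen half
# (the "read silently" phase) from neural activity alone.

rng = np.random.default_rng(0)
n_time, n_freq, n_neurons = 200, 16, 40

# a random "spectrogram": power in each frequency band over time
spec = rng.random((n_time, n_freq))

# each simulated neuron is tuned to a single frequency band
tuning = np.zeros((n_freq, n_neurons))
for j in range(n_neurons):
    tuning[j % n_freq, j] = 1.0
activity = spec @ tuning + 0.1 * rng.standard_normal((n_time, n_neurons))

# fit the decoder (least squares) on the first half of the recording ...
W, *_ = np.linalg.lstsq(activity[:100], spec[:100], rcond=None)
# ... and reconstruct the spectrogram of the unseen second half
recon = activity[100:] @ W
r = np.corrcoef(recon.ravel(), spec[100:].ravel())[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```

In this idealised setup the reconstruction correlates strongly with the true spectrogram; the study's difficulty is that real covert-speech activity only partially overlaps with the heard-speech activity the decoder is trained on.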

The algorithm isn’t perfect, says Stephanie Martin, who worked on the study with Pasley. “We got significant results but it’s not good enough yet to build a device.”

In practice, if the decoder is to be used by people who are unable to speak it would have to be trained on what they hear rather than their own speech. “We don’t think it would be an issue to train the decoder on heard speech because they share overlapping brain areas,” says Martin.

The team is now fine-tuning their algorithms, by looking at the neural activity associated with speaking rate and different pronunciations of the same word, for example. “The bar is very high,” says Pasley. “It’s preliminary data, and we’re still working on making it better.”

The team have also turned their hand to predicting what songs a person is listening to by playing lots of Pink Floyd to volunteers, and then working out which neurons respond to what aspects of the music. “Sound is sound,” says Pasley. “It all helps us understand different aspects of how the brain processes it.”

“Ultimately, if we understand covert speech well enough, we’ll be able to create a medical prosthesis that could help someone who is paralysed, or locked in and can’t speak,” he says.

Several other researchers are also investigating ways to read the human mind. Some can tell what pictures a person is looking at, others have worked out what neural activity represents certain concepts in the brain, and one team has even produced crude reproductions of movie clips that someone is watching just by analysing their brain activity. So is it possible to put it all together to create one multisensory mind-reading device?

In theory, yes, says Martin, but it would be extraordinarily complicated. She says you would need a huge amount of data for each thing you are trying to predict. “It would be really interesting to look into. It would allow us to predict what people are doing or thinking,” she says. “But we need individual decoders that work really well before combining different senses.”

http://www.newscientist.com/article/mg22429934.000-brain-decoder-can-eavesdrop-on-your-inner-voice.html