New theory on why some people may be better than others at getting inside people’s heads


Humans have an impressive ability to take on other viewpoints – it’s crucial for a social species like ours. So why are some of us better at it than others?

Picture two friends, Sally and Anne, having a drink in a bar. While Sally is in the bathroom, Anne decides to buy another round, but she notices that Sally has left her phone on the table. So no one can steal it, Anne puts the phone into her friend’s bag before heading to the bar. When Sally returns, where will she expect to see her phone?

If you said she would look at the table where she left it, congratulations! You have a theory of mind – the ability to understand that another person may have knowledge, ideas and beliefs that differ from your own, or from reality.

If that sounds like nothing out of the ordinary, perhaps it’s because we usually take it for granted. Yet it involves doing something no other animal can do to the same extent: temporarily setting aside our own ideas and beliefs about the world – that the phone is in the bag, in this case – in order to take on an alternative world view.

This process, also known as “mentalising”, not only lets us see that someone else can believe something that isn’t true, but also lets us predict other people’s behaviour, tell lies, and spot deceit by others. Theory of mind is a necessary ingredient in the arts and religion – after all, a belief in the spirit world requires us to conceive of minds that aren’t present – and it may even determine the number of friends we have.

Yet our understanding of this crucial aspect of our social intelligence is in flux. New ways of investigating and analysing it are challenging some long-held beliefs. As the dust settles, we are getting glimpses of how this ability develops, and why some of us are better at it than others. Theory of mind has “enormous cultural implications”, says Robin Dunbar, an evolutionary anthropologist at the University of Oxford. “It allows you to look beyond the world as we physically see it, and imagine how it might be different.”

The first ideas about theory of mind emerged in the 1970s, when it was discovered that at around the age of 4, children make a dramatic cognitive leap. The standard way to test a child’s theory of mind is called the Sally-Anne test, and it involves acting out the chain of events described earlier, only with puppets and a missing ball.

When asked, “When Sally returns, where will she look for the ball?”, most 3-year-olds say with confidence that she’ll look in the new spot, where Anne has placed it. The child knows the ball’s location, so they cannot conceive that Sally would think it was anywhere else.

Baby change
But around the age of 4, that changes. Most 4- and 5-year-olds realise that Sally will expect the ball to be just where she left it.

For over two decades that was the dogma, but more recently those ideas have been shaken. The first challenge came in 2005, when it was reported in Science (vol 308, p 255) that theory of mind seemed to be present in babies just 15 months old.

Such young children cannot answer questions about where they expect Sally to look for the ball, but you can tell what they’re thinking by having Sally look in different places and noting how long they stare: babies look for longer at things they find surprising.

When Sally searched for a toy in a place she should not have expected to find it, the babies did stare for longer. In other words, babies barely past their first birthdays seemed to understand that people can have false beliefs. More remarkable still, similar findings were reported in 2010 for 7-month-old infants (Science, vol 330, p 1830).
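For the technically minded, here is a minimal sketch of how such a looking-time (violation-of-expectation) comparison works. The looking times below are invented for illustration; they are not the published data.

```python
# Minimal sketch of a violation-of-expectation analysis.
# Looking times (in seconds) are invented for illustration only.
from statistics import mean
from scipy import stats

expected_looks = [6.1, 5.4, 7.0, 5.8, 6.3, 5.9, 6.6, 5.2]      # Sally searches where she left the toy
unexpected_looks = [9.8, 8.7, 10.4, 9.1, 11.0, 8.9, 9.5, 10.2]  # Sally searches somewhere she should not expect it

print(f"mean looking time, expected event:   {mean(expected_looks):.1f} s")
print(f"mean looking time, unexpected event: {mean(unexpected_looks):.1f} s")

# Longer looking at the unexpected search is taken as evidence that the
# infants had attributed a (false) belief to Sally.
t, p = stats.ttest_ind(unexpected_looks, expected_looks)
print(f"t = {t:.2f}, p = {p:.4f}")
```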

Some say that since theory of mind seems to be present in infants, it must be present in young children as well. Something about the design of the classic Sally-Anne test, these critics argue, must be confusing 3-year-olds.

Yet there’s another possibility: perhaps we gain theory of mind twice. From a very young age we possess a basic, or implicit, form of mentalising, so this theory goes, and then around age 4, we develop a more sophisticated version. The implicit system is automatic but limited in its scope; the explicit system, which allows for a more refined understanding of other people’s mental states, is what you need to pass the Sally-Anne test.

If you think that explanation sounds complicated, you’re not alone. “The key problem is explaining why you would bother acquiring the same concept twice,” says Rebecca Saxe, a cognitive scientist at Massachusetts Institute of Technology.

Yet there are other mental skills that develop twice. Take number sense. Long before they can count, infants have an ability to gauge rough quantities; they can distinguish, for instance, a general sense of “threeness” from “fourness”. Eventually, though, they do learn to count and multiply and so on, although the innate system still hums beneath the surface. Our decision-making ability, too, may develop twice. We seem to have an automatic and intuitive system for making gut decisions, and a second system that is slower and more explicit.

Double-think
So perhaps we also have a dual system for thinking about thoughts, says Ian Apperly, a cognitive scientist at the University of Birmingham, UK. “There might be two kinds of processes, on the one hand for speed and efficiency, and on the other hand for flexibility,” he argues (Psychological Review, vol 116, p 953).

Apperly has found evidence that we still possess the fast implicit system as adults. People were asked to study pictures showing a man looking at dots on a wall; sometimes the man could see all the dots, sometimes not. When asked how many dots there were, volunteers were slower and less accurate if the man could see fewer dots than they could. Even when trying not to take the man’s perspective into account, they couldn’t help but do so, says Apperly. “That’s a strong indication of an automatic process,” he says – in other words, an implicit system working at an unconscious level.

If this theory is true, it suggests we should pay attention to our gut feelings about people’s state of mind, says Apperly. Imagine surprising an intruder in your home. The implicit system might help you make fast decisions about what they see and know, while the explicit system could help you to make more calculated judgments about their motives. “Which system is better depends on whether you have time to make the more sophisticated judgement,” says Apperly.

The idea that we have a two-tier theory of mind is gaining ground. Further support comes from a study of people with autism, a group known to have difficulty with social skills, who are often said to lack theory of mind. In fact, tests on a group of high-functioning people with Asperger’s syndrome, a form of autism, showed they had the explicit system, yet they failed at non-verbal tests of the kind that reveal implicit theory of mind in babies (Science, vol 325, p 883). So people with autism can learn explicit mentalising skills, even without the implicit system, although the process remains “a little bit cumbersome” says Uta Frith, a cognitive scientist at University College London, who led the work. The finding suggests that the capacity to understand others should not be so easily written off in those with autism. “They can handle it when they have time to think about it,” says Frith.

If theory of mind is not an all-or-nothing quality, does that help explain why some of us seem to be better than others at putting ourselves into other people’s shoes? “Clearly people vary,” points out Apperly. “If you think of all your colleagues and friends, some are socially more or less capable.”

Unfortunately, that is not reflected in the Sally-Anne test, the mainstay of theory of mind research for decades. Nearly everyone over the age of 5 can pass it standing on their head.

To get the measure of the variation in people’s abilities, different approaches are needed. One is called the director task; based on a similar idea to Apperly’s dot pictures, this involves people moving objects around on a grid while taking into account the viewpoint of an observer. This test reveals how children and adolescents improve progressively as they mature, only reaching a plateau in their 20s.

How does that timing square with the fact that the implicit system – which the director test hinges on – is supposed to emerge in early infancy? Sarah-Jayne Blakemore, a cognitive neuroscientist at University College London who works with Apperly, has an answer. What improves, she reckons, is not theory of mind per se but how we apply it in social situations using cognitive skills such as planning, attention and problem-solving, which keep developing during adolescence. “It’s the way we use that information when we make decisions,” she says.

So teenagers can blame their reputation for being self-centred on the fact they are still developing their theory of mind. The good news for parents is that most adolescents will learn how to put themselves in others’ shoes eventually. “You improve your skills by experiencing social scenarios,” says Frith.

It is also possible to test people’s explicit mentalising abilities by asking them convoluted “who-thought-what-about-whom” questions. After all, we can do better than realising that our friend mistakenly thinks her phone will be on the table. If such a construct represents “second-order” theory of mind, most of us can understand a fourth-order sentence like: “John said that Michael thinks that Anne knows that Sally thinks her phone will be on the table.”
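To make the idea of “orders” concrete, here is a small sketch that assembles nested mental-state sentences of any depth. The agents and verbs are simply those from the example above; the code is purely illustrative.

```python
# Build an nth-order "theory of mind" sentence by nesting mental-state clauses.
# The agents and verbs are arbitrary; only the depth of nesting matters.

def nested_sentence(clauses, base):
    """clauses: list of (agent, verb) pairs, outermost first."""
    sentence = base
    for agent, verb in reversed(clauses):
        sentence = f"{agent} {verb} that {sentence}"
    return sentence

clauses = [
    ("John", "said"),
    ("Michael", "thinks"),
    ("Anne", "knows"),
    ("Sally", "thinks"),
]
print(nested_sentence(clauses, "her phone will be on the table"))
print(f"orders of embedding to track: {len(clauses)}")
```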

In fact, Dunbar’s team has shown that a sentence like that is about the limit for 20 per cent of the general population (British Journal of Psychology, vol 89, p 191). Sixty per cent of us can manage fifth-order theory of mind and the top 20 per cent can reach the heights of sixth order.

As well as letting us keep track of our complex social lives, this kind of mentalising is crucial for our appreciation of works of fiction. Shakespeare’s genius, according to Dunbar, was to make his audience work at the edge of their ability, tracking multiple mind states. In Othello, for instance, the audience has to understand that Iago wants jealous Othello to mistakenly think that his wife Desdemona loves Cassio. “He’s able to lift the audience to his limits,” says Dunbar.

So why do some of us operate at the Bard’s level while others are less socially capable? Dunbar argues it’s all down to the size of our brains.

According to one theory, during human evolution the prime driver of our expanding brains was the growing size of our social groups, with the resulting need to keep track of all those relatives, rivals and allies. Dunbar’s team has shown that among monkeys and apes, those living in bigger groups have a larger prefrontal cortex. This is the front portion of the brain’s outer layer, sitting roughly behind the forehead, where a lot of higher thought processes go on.

Last year, Dunbar applied that theory to a single primate species: us. His team got 40 people to fill in a questionnaire about the number of friends they had, and then imaged their brains in an MRI scanner. Those with the biggest social networks had a larger region of the prefrontal cortex tucked behind the eye sockets. They also scored better on theory of mind tests (Proceedings of the Royal Society B, vol 279, p 2157). “The size of the bits of prefrontal cortex involved in mentalising determine your mentalising competencies,” says Dunbar. “And your mentalising competencies then determine the number of friends you have.” It’s a bold claim, and one that has not convinced everyone in the field. After all, correlation does not prove causation. Perhaps having lots of friends makes this part of the brain grow bigger, rather than the other way round, or perhaps a large social network is a sign of more general intelligence.
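The claim rests on a correlation across individuals, and, as the text notes, a correlation on its own cannot settle which way the causal arrow points. A minimal sketch of that kind of analysis, on invented numbers rather than Dunbar’s data:

```python
# Sketch of a brain-volume vs. social-network-size correlation.
# The numbers are invented; they are not the published measurements.
from scipy import stats

orbital_pfc_volume = [10.2, 11.5, 9.8, 12.1, 10.9, 11.8, 9.5, 12.6]   # arbitrary units
number_of_friends  = [12,   18,   9,   24,   15,   21,   8,   27]

r, p = stats.pearsonr(orbital_pfc_volume, number_of_friends)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
# A positive r only says the two measures rise together across people;
# it does not say which one drives the other.
```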

Lying robots
What’s more, there seem to be several parts of the brain involved in mentalising – perhaps unsurprisingly for such a complex ability. In fact, so many brain areas have been implicated that scientists now talk about the theory of mind “network” rather than a single region.

A type of imaging called fMRI scanning, which can reveal which parts of the brain “light up” for specific mental functions, strongly implicates a region called the right temporoparietal junction, located towards the rear of the brain, as being crucial for theory of mind. In addition, people with damage to this region tend to fail the Sally-Anne test.

Other evidence has emerged for the involvement of the right temporoparietal junction. When Rebecca Saxe temporarily disabled that part of the brain in healthy volunteers, using transcranial magnetic stimulation (a magnetic coil held over the scalp), they did worse at tests that involved considering others’ beliefs while making moral judgments (PNAS, vol 107, p 6753).

Despite the explosion of research in this area in recent years, there is still lots to learn about this nifty piece of mental machinery. As our understanding grows, it is not just our own skills that stand to improve. If we can figure out how to give mentalising powers to computers and robots, they could become a lot more sophisticated. “Part of the process of socialising robots might draw upon things we’re learning from how people think about people,” Apperly says.

For instance, programmers at the Georgia Institute of Technology in Atlanta have developed robots that can deceive each other and leave behind false clues in a high-tech game of hide-and-seek. Such projects may ultimately lead to robots that can figure out the thoughts and intentions of people.

For now, though, the remarkable ability to thoroughly worm our way into someone else’s head exists only in the greatest computer of all – the human brain.

(Article by Kirsten Weir, who is a science writer based in Minneapolis).

http://beyondmusing.wordpress.com/2013/06/07/mind-reading-how-we-get-inside-other-peoples-heads/

Why music makes our brain sing


By ROBERT J. ZATORRE and VALORIE N. SALIMPOOR
Published: June 7, 2013

Music is not tangible. You can’t eat it, drink it or mate with it. It doesn’t protect against the rain, wind or cold. It doesn’t vanquish predators or mend broken bones. And yet humans have always prized music — more than prized it, loved it.

In the modern age we spend great sums of money to attend concerts, download music files, play instruments and listen to our favorite artists whether we’re in a subway or salon. But even in Paleolithic times, people invested significant time and effort to create music, as the discovery of flutes carved from animal bones would suggest.

So why does this thingless “thing” — at its core, a mere sequence of sounds — hold such potentially enormous intrinsic value?

The quick and easy explanation is that music brings a unique pleasure to humans. Of course, that still leaves the question of why. But for that, neuroscience is starting to provide some answers.

More than a decade ago, our research team used brain imaging to show that music that people described as highly emotional engaged the reward system deep in their brains — activating subcortical nuclei known to be important in reward, motivation and emotion. Subsequently we found that listening to what might be called “peak emotional moments” in music — that moment when you feel a “chill” of pleasure to a musical passage — causes the release of the neurotransmitter dopamine, an essential signaling molecule in the brain.

When pleasurable music is heard, dopamine is released in the striatum — an ancient part of the brain found in other vertebrates as well — which is known to respond to naturally rewarding stimuli like food and sex and which is artificially targeted by drugs like cocaine and amphetamine.

But what may be most interesting here is when this neurotransmitter is released: not only when the music rises to a peak emotional moment, but also several seconds before, during what we might call the anticipation phase.

The idea that reward is partly related to anticipation (or the prediction of a desired outcome) has a long history in neuroscience. Making good predictions about the outcome of one’s actions would seem to be essential in the context of survival, after all. And dopamine neurons, both in humans and other animals, play a role in recording which of our predictions turn out to be correct.
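The textbook way to formalise this anticipatory signal is the “reward prediction error” of temporal-difference learning. The toy model below is that standard account applied to a simple cue-then-reward sequence; it illustrates the idea, and is not the authors’ own analysis.

```python
# Toy temporal-difference (TD) model of reward prediction.
# A cue is followed, a few time steps later, by a reward. After training,
# the prediction error is large at the (unpredicted) cue and near zero at the
# (fully predicted) reward, mirroring the anticipatory dopamine signal.
import numpy as np

n_steps, n_trials, alpha, gamma = 5, 500, 0.1, 1.0
V = np.zeros(n_steps + 1)          # V[0] = value at cue onset; V[n_steps] = end of trial (stays 0)
reward = np.zeros(n_steps)
reward[-1] = 1.0                   # reward arrives at the last step of the trial

for _ in range(n_trials):
    for t in range(n_steps):
        delta = reward[t] + gamma * V[t + 1] - V[t]   # TD prediction error
        V[t] += alpha * delta

# The cue arrives unannounced, so the error at cue onset is the jump from 0 to V[0]:
error_at_cue = gamma * V[0]
error_at_reward = reward[-1] + gamma * V[n_steps] - V[n_steps - 1]
print(f"prediction error at the cue:    {error_at_cue:.2f}")    # large: reward is now anticipated
print(f"prediction error at the reward: {error_at_reward:.2f}") # near zero: nothing unexpected
```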

To dig deeper into how music engages the brain’s reward system, we designed a study to mimic online music purchasing. Our goal was to determine what goes on in the brain when someone hears a new piece of music and decides he likes it enough to buy it.

We used music-recommendation programs to customize the selections to our listeners’ preferences, which turned out to be indie and electronic music, matching Montreal’s hip music scene. And we found that neural activity within the striatum — the reward-related structure — was directly proportional to the amount of money people were willing to spend.

But more interesting still was the cross talk between this structure and the auditory cortex, which also increased for songs that were ultimately purchased compared with those that were not.

Why the auditory cortex? Some 50 years ago, Wilder Penfield, the famed neurosurgeon and the founder of the Montreal Neurological Institute, reported that when neurosurgical patients received electrical stimulation to the auditory cortex while they were awake, they would sometimes report hearing music. Dr. Penfield’s observations, along with those of many others, suggest that musical information is likely to be represented in these brain regions.

The auditory cortex is also active when we imagine a tune: think of the first four notes of Beethoven’s Fifth Symphony — your cortex is abuzz! This ability allows us not only to experience music even when it’s physically absent, but also to invent new compositions and to reimagine how a piece might sound with a different tempo or instrumentation.

We also know that these areas of the brain encode the abstract relationships between sounds — for instance, the particular sound pattern that makes a major chord major, regardless of the key or instrument. Other studies show distinctive neural responses from similar regions when there is an unexpected break in a repetitive pattern of sounds, or in a chord progression. This is akin to what happens if you hear someone play a wrong note — easily noticeable even in an unfamiliar piece of music.

These cortical circuits allow us to make predictions about coming events on the basis of past events. They are thought to accumulate musical information over our lifetime, creating templates of the statistical regularities that are present in the music of our culture and enabling us to understand the music we hear in relation to our stored mental representations of the music we’ve heard.
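One simple way to picture such “templates of statistical regularities” is a transition-probability model: learn how often each note follows another in familiar melodies, then score how surprising each new note is. The sketch below is a toy illustration under that assumption, not the model used by the researchers.

```python
# Toy model of musical expectation: learn note-to-note transition
# probabilities from familiar melodies, then flag surprising notes.
from collections import defaultdict

familiar_melodies = [
    ["C", "D", "E", "C", "C", "D", "E", "C"],
    ["E", "F", "G", "E", "F", "G"],
    ["C", "D", "E", "F", "G", "E", "C"],
]

counts = defaultdict(lambda: defaultdict(int))
for melody in familiar_melodies:
    for prev, nxt in zip(melody, melody[1:]):
        counts[prev][nxt] += 1

def transition_prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

new_phrase = ["C", "D", "E", "F#"]   # F# never followed E in the familiar melodies
for prev, nxt in zip(new_phrase, new_phrase[1:]):
    p = transition_prob(prev, nxt)
    verdict = "surprising!" if p < 0.1 else "expected"
    print(f"{prev} -> {nxt}: p = {p:.2f}  ({verdict})")
```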

So each act of listening to music may be thought of as both recapitulating the past and predicting the future. When we listen to music, these brain networks actively create expectations based on our stored knowledge.

Composers and performers intuitively understand this: they manipulate these prediction mechanisms to give us what we want — or to surprise us, perhaps even with something better.

In the cross talk between our cortical systems, which analyze patterns and yield expectations, and our ancient reward and motivational systems, may lie the answer to the question: does a particular piece of music move us?

When that answer is yes, there is little — in those moments of listening, at least — that we value more.

Robert J. Zatorre is a professor of neuroscience at the Montreal Neurological Institute and Hospital at McGill University. Valorie N. Salimpoor is a postdoctoral neuroscientist at the Baycrest Health Sciences’ Rotman Research Institute in Toronto.

Thanks to S.R.W. for bringing this to the attention of the It’s Interesting community.

Trouble With Math? Maybe You Should Get Your Brain Zapped


by Emily Underwood
ScienceNOW

If you are one of the 20% of healthy adults who struggle with basic arithmetic, simple tasks like splitting the dinner bill can be excruciating. Now, a new study suggests that a gentle, painless electrical current applied to the brain can boost math performance for up to 6 months. Researchers don’t fully understand how it works, however, and there could be side effects.

The idea of using electrical current to alter brain activity is nothing new—electroshock therapy, which induces seizures for therapeutic effect, is probably the best known and most dramatic example. In recent years, however, a slew of studies has shown that much milder electrical stimulation applied to targeted regions of the brain can dramatically accelerate learning in a wide range of tasks, from marksmanship to speech rehabilitation after stroke.

In 2010, cognitive neuroscientist Roi Cohen Kadosh of the University of Oxford in the United Kingdom showed that, when combined with training, electrical brain stimulation can make people better at very basic numerical tasks, such as judging which of two quantities is larger. However, it wasn’t clear how those basic numerical skills would translate to real-world math ability.

To answer that question, Cohen Kadosh recruited 25 volunteers to practice math while receiving either real or “sham” brain stimulation. Two sponge-covered electrodes, fixed to either side of the forehead with a stretchy athletic band, targeted an area of the prefrontal cortex considered key to arithmetic processing, says Jacqueline Thompson, a Ph.D. student in Cohen Kadosh’s lab and a co-author on the study. The electrical current slowly ramped up to about 1 milliamp—far less current than an AA battery can deliver—then randomly fluctuated between high and low values. For the sham group, the researchers simulated the initial sensation of the increase by releasing a small amount of current, then turned it off.

For roughly 20 minutes per day over 5 days, the participants memorized arbitrary mathematical “facts,” such as 4#10 = 23, then performed a more sophisticated task requiring multiple steps of arithmetic, also based on memorized symbols. A squiggle, for example, might mean “add 2,” or “subtract 1.” This is the first time that brain stimulation has been applied to improving such complex math skills, says neuroethicist Peter Reiner of the University of British Columbia, Vancouver, in Canada, who wasn’t involved in the research.
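To give a flavour of how a symbol-based arithmetic task of this kind can be structured, here is a small sketch. The symbols, their meanings and the memorised “facts” are invented stand-ins; the study’s actual materials are not reproduced here.

```python
# Hypothetical reconstruction of a symbol-arithmetic task of the kind described:
# memorise arbitrary "facts" and operator symbols, then chain them over several steps.
memorised_facts = {("4", "#", "10"): 23}    # e.g. "4#10 = 23", as in the article
memorised_operators = {
    "~": lambda x: x + 2,                   # a "squiggle" might mean "add 2"
    "^": lambda x: x - 1,                   # another symbol might mean "subtract 1"
}

def solve(fact_key, symbol_sequence):
    """Look up a memorised fact, then apply each memorised symbol in turn."""
    value = memorised_facts[fact_key]
    for symbol in symbol_sequence:
        value = memorised_operators[symbol](value)
    return value

# A multi-step problem: start from 4#10, then apply ~ and ^.
print(solve(("4", "#", "10"), ["~", "^"]))  # 23 -> 25 -> 24
```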

The researchers also used a brain imaging technique called near-infrared spectroscopy to measure how efficiently the participants’ brains were working as they performed the tasks.

Although the two groups performed at the same level on the first day, over the next 4 days people receiving brain stimulation along with training learned to do the tasks two to five times faster than people receiving a sham treatment, the authors reported in Current Biology. Six months later, the researchers called the participants back and found that people who had received brain stimulation were still roughly 30% faster at the same types of mathematical challenges. The targeted brain region also showed more efficient activity, Thompson says.

The fact that only participants who received electrical stimulation and practiced math showed lasting physiological changes in their brains suggests that experience is required to seal in the effects of stimulation, says Michael Weisend, a neuroscientist at the Mind Research Network in Albuquerque, New Mexico, who wasn’t involved with the study. That’s valuable information for people who hope to get benefits from stimulation alone, he says. “It’s not going to be a magic bullet.”

Although it’s not clear how the technique works, Thompson says, one hypothesis is that the current helps synchronize neuron firing, enabling the brain to work more efficiently. Scientists also don’t know if negative or unintended effects might result. Although no side effects of brain stimulation have yet been reported, “it’s impossible to say with any certainty” that there aren’t any, Thompson says.

“Math is only one of dozens of skills in which this could be used,” Reiner says, adding that it’s “not unreasonable” to imagine that this and similar stimulation techniques could replace the use of pills for cognitive enhancement.

In the future, the researchers hope to include groups that often struggle with math, such as people with neurodegenerative disorders and a condition called developmental dyscalculia. As long as further testing shows that the technique is safe and effective, children in schools could also receive brain stimulation along with their lessons, Thompson says. But there’s “a long way to go,” before the method is ready for schools, she says. In the meantime, she adds, “We strongly caution you not to try this at home, no matter how tempted you may be to slap a battery on your kid’s head.”

http://news.sciencemag.org/sciencenow/2013/05/trouble-with-math-maybe-you-shou.html?ref=hp

Cocaine Vaccine Passes Key Testing Hurdle of Preventing Drug from Reaching the Brain – Human Clinical Trials soon


Researchers at Weill Cornell Medical College have successfully tested their novel anti-cocaine vaccine in primates, bringing them closer to launching human clinical trials. Their study, published online by the journal Neuropsychopharmacology, used a radiological technique to demonstrate that the anti-cocaine vaccine prevented the drug from reaching the brain and producing a dopamine-induced high.

“The vaccine eats up the cocaine in the blood like a little Pac-man before it can reach the brain,” says the study’s lead investigator, Dr. Ronald G. Crystal, chairman of the Department of Genetic Medicine at Weill Cornell Medical College. “We believe this strategy is a win-win for those individuals, among the estimated 1.4 million cocaine users in the United States, who are committed to breaking their addiction to the drug,” he says. “Even if a person who receives the anti-cocaine vaccine falls off the wagon, cocaine will have no effect.”

Dr. Crystal says he expects to begin human testing of the anti-cocaine vaccine within a year.

Cocaine, a tiny molecule drug, works to produce feelings of pleasure because it blocks the recycling of dopamine — the so-called “pleasure” neurotransmitter — in two areas of the brain, the putamen in the forebrain and the caudate nucleus in the brain’s center. When dopamine accumulates at the nerve endings, “you get this massive flooding of dopamine and that is the feel good part of the cocaine high,” says Dr. Crystal.

The novel vaccine Dr. Crystal and his colleagues developed combines bits of the common cold virus with a particle that mimics the structure of cocaine. When the vaccine is injected into an animal, its body “sees” the cold virus and mounts an immune response against both the virus and the cocaine impersonator that is hooked to it. “The immune system learns to see cocaine as an intruder,” says Dr. Crystal. “Once immune cells are educated to regard cocaine as the enemy, it produces antibodies, from that moment on, against cocaine the moment the drug enters the body.”

In their first study in animals, the researchers injected billions of copies of their viral concoction into laboratory mice, and found a strong immune response was generated against the vaccine. Also, when the scientists extracted the antibodies produced by the mice and put them in test tubes, the antibodies gobbled up cocaine. They also saw that mice that received both the vaccine and cocaine were much less hyperactive than untreated mice given cocaine.

In this study, the researchers sought to precisely define how effective the anti-cocaine vaccine is in non-human primates, which are biologically closer to humans than mice are. They developed a tool to measure how much cocaine attached to the dopamine transporter, which picks up dopamine in the synapse between neurons and carries it back to be recycled. If cocaine is in the brain, it binds to the transporter, effectively blocking the transporter from ferrying dopamine out of the synapse, keeping the neurotransmitter active to produce a drug high.

In the study, the researchers attached a short-lived isotope tracer to the dopamine transporter. The activity of the tracer could be seen using positron emission tomography (PET). The tool measured how much of the tracer attached to the dopamine transporter in the presence or absence of cocaine.

The PET studies showed no difference in the binding of the tracer to the dopamine transporter in vaccinated compared to unvaccinated animals if these two groups were not given cocaine. But when cocaine was given to the primates, there was a significant drop in activity of the tracer in non-vaccinated animals. That meant that without the vaccine, cocaine displaced the tracer in binding to the dopamine transporter.

Previous research in humans had shown that at least 47 percent of the dopamine transporter had to be occupied by cocaine in order to produce a drug high. The researchers found that, in vaccinated primates, cocaine occupancy of the dopamine transporter was reduced to less than 20 percent.
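As a rough worked example of the occupancy logic (with made-up binding values, not the study’s measurements): occupancy can be read off, roughly, as the fractional drop in tracer binding when cocaine is present, and the question is whether it stays below the roughly 47 percent threshold for a high.

```python
# Toy occupancy calculation based on tracer displacement.
# Binding values are invented for illustration.
HIGH_THRESHOLD = 0.47    # ~47% transporter occupancy needed for a cocaine high

def occupancy(binding_without_cocaine, binding_with_cocaine):
    """Fraction of dopamine transporters occupied by cocaine,
    inferred from how much of the radiotracer it displaces."""
    return 1.0 - binding_with_cocaine / binding_without_cocaine

unvaccinated = occupancy(binding_without_cocaine=100.0, binding_with_cocaine=35.0)  # big displacement
vaccinated = occupancy(binding_without_cocaine=100.0, binding_with_cocaine=85.0)    # little displacement

for label, occ in [("unvaccinated", unvaccinated), ("vaccinated", vaccinated)]:
    verdict = "above" if occ > HIGH_THRESHOLD else "below"
    print(f"{label}: occupancy = {occ:.0%} ({verdict} the ~47% threshold for a high)")
```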

“This is a direct demonstration in a large animal, using nuclear medicine technology, that we can reduce the amount of cocaine that reaches the brain sufficiently so that it is below the threshold by which you get the high,” says Dr. Crystal.

When the vaccine is studied in humans, the non-toxic dopamine transporter tracer can be used to help study its effectiveness as well, he adds.

The researchers do not yet know how often the vaccine would need to be administered in humans to maintain its anti-cocaine effect. A single dose lasted 13 weeks in mice and seven weeks in non-human primates.

“An anti-cocaine vaccination will require booster shots in humans, but we don’t know yet how often these booster shots will be needed,” says Dr. Crystal. “I believe that for those people who desperately want to break their addiction, a series of vaccinations will help.”

Co-authors of the study include Dr. Anat Maoz, Dr. Martin J. Hicks, Dr. Shankar Vallabhajosula, Michael Synan, Dr. Paresh J. Kothari, Dr. Jonathan P. Dyke, Dr. Douglas J. Ballon, Dr. Stephen M. Kaminsky, Dr. Bishnu P. De and Dr. Jonathan B. Rosenberg from Weill Cornell Medical College; Dr. Diana Martinez from Columbia University; and Dr. George F. Koob and Dr. Kim D. Janda from The Scripps Research Institute.

The study was funded by grants from the National Institute on Drug Abuse (NIDA).

Thanks to Kebmodee and Dr. Rajadhyaksha for bringing this to the attention of the It’s Interesting community.

New study links first-person singular pronouns to relationship problems and higher rates of depression


Researchers in Germany have found that people who frequently use first-person singular words like “I,” “me,” and “myself,” are more likely to be depressed and have more interpersonal problems than people who often say “we” and “us.”

In the study, 103 women and 15 men completed 60- to 90-minute psychotherapeutic interviews about their relationships, their past, and their self-perception. (99 of the subjects were patients at a psychotherapy clinic who had problems ranging from eating disorders to anxiety.) They also filled out questionnaires about depression and their interpersonal behavior.

Then, researchers led by Johannes Zimmermann of Germany’s University of Kassel counted the number of first-person singular (I, me) and first-person plural (we, us) pronouns used in each interview. Subjects who said more first-person singular words scored higher on measures of depression. They also were more likely to show problematic interpersonal behaviors such as attention seeking, inappropriate self-disclosure, and an inability to spend time alone.
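The core measurement is simply a count of pronoun categories in transcribed speech. A minimal sketch of that kind of count follows; the word lists and the example sentence are illustrative, not the study’s coding scheme.

```python
# Count first-person singular vs. first-person plural pronouns in a transcript.
# The word lists and the example text are illustrative only.
import re

FIRST_SINGULAR = {"i", "me", "my", "mine", "myself"}
FIRST_PLURAL = {"we", "us", "our", "ours", "ourselves"}

def pronoun_counts(transcript):
    words = re.findall(r"[a-z']+", transcript.lower())
    singular = sum(w in FIRST_SINGULAR for w in words)
    plural = sum(w in FIRST_PLURAL for w in words)
    return singular, plural

interview = "I feel like nobody listens to me, and I keep asking myself why we never talk."
singular, plural = pronoun_counts(interview)
print(f"first-person singular: {singular}, first-person plural: {plural}")
```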

By contrast, the participants who used more pronouns like “we” and “us” tended to have what the researchers called a “cold” interpersonal style. But, they explained, the coldness functioned as a positive way to maintain appropriate relationship boundaries while still helping others with their needs.

“Using first-person singular pronouns highlights the self as a distinct entity,” Zimmermann says, “whereas using first-person plural pronouns emphasizes its embeddedness into social relationships.” According to the study authors, the use of more first-person singular pronouns may be part of a strategy to gain more friendly attention from others.

Zimmermann points out that there’s no evidence that using more “I” and “me” words actually causes depression—instead, the speaking habit probably reflects how people see themselves and relate to others, he says.

The study appears in the June 2013 issue of the Journal of Research in Personality.

http://www.popsci.com/science/article/2013-05/people-who-often-say-me-myself-and-i-are-more-depressed?src=SOC&dom=tw

Brain implants: Restoring memory with a microchip


William Gibson’s popular science fiction tale “Johnny Mnemonic” foresaw sensitive information being carried by microchips in the brain by 2021. A team of American neuroscientists could be making this fantasy world a reality. Their motivation is different but the outcome would be somewhat similar. Hailed as one of 2013’s top ten technological breakthroughs by MIT Technology Review, the work by the University of Southern California, North Carolina’s Wake Forest University and other partners has actually spanned a decade.

But the U.S.-wide team now thinks that it will see a memory device being implanted in a small number of human volunteers within two years and available to patients in five to 10 years. They can’t quite contain their excitement. “I never thought I’d see this in my lifetime,” said Ted Berger, professor of biomedical engineering at the University of Southern California in Los Angeles. “I might not benefit from it myself but my kids will.”

Rob Hampson, associate professor of physiology and pharmacology at Wake Forest University, agrees. “We keep pushing forward, every time I put an estimate on it, it gets shorter and shorter.”

The scientists — who bring varied skills to the table, including mathematical modeling and psychiatry — believe they have cracked how long-term memories are made, stored and retrieved and how to replicate this process in brains that are damaged, particularly by stroke or localized injury.

Berger said they record a memory being made, in an undamaged area of the brain, then use that data to predict what a damaged area “downstream” should be doing. Electrodes are then used to stimulate the damaged area to replicate the action of the undamaged cells.

They concentrate on the hippocampus — part of the cerebral cortex which sits deep in the brain — where short-term memories become long-term ones. Berger has looked at how electrical signals travel through neurons there to form those long-term memories and has used his expertise in mathematical modeling to mimic these movements using electronics.
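A cartoon of that record-predict-stimulate loop can be written as a simple linear input-output model fitted on simulated firing rates. This is only a sketch of the general logic, on fabricated data; it is not the team’s actual mathematical model.

```python
# Cartoon of the record -> predict -> stimulate loop: fit a linear map from
# "upstream" (intact) activity to "downstream" activity recorded while the circuit
# still worked, then use its predictions to decide how to stimulate the damaged area.
import numpy as np

rng = np.random.default_rng(0)

# Simulated training data: firing rates from 8 upstream and 4 downstream channels.
upstream = rng.poisson(lam=5.0, size=(500, 8)).astype(float)
true_map = rng.normal(size=(8, 4))
downstream = upstream @ true_map + rng.normal(scale=0.5, size=(500, 4))

# Fit the input-output model by least squares.
weights, *_ = np.linalg.lstsq(upstream, downstream, rcond=None)

# Later, with the downstream area damaged, only the upstream recording is available:
new_upstream = rng.poisson(lam=5.0, size=(1, 8)).astype(float)
predicted_downstream = new_upstream @ weights
print("stimulation targets (predicted downstream rates):", np.round(predicted_downstream, 2))
```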

Hampson, whose university has done much of the animal studies, adds: “We support and reinforce the signal in the hippocampus but we are moving forward with the idea that if you can study enough of the inputs and outputs to replace the function of the hippocampus, you can bypass the hippocampus.”

The team’s experiments on rats and monkeys have shown that certain brain functions can be replaced with signals via electrodes. You would think that the work of then creating an implant for people and getting such a thing approved would be a Herculean task, but think again.

For 15 years, people have been having brain implants to provide deep brain stimulation to treat epilepsy and Parkinson’s disease — a reported 80,000 people have now had such devices placed in their brains. So many of the hurdles have already been overcome — particularly the “yuck factor” and the fear factor.

“It’s now commonly accepted that humans will have electrodes put in them — it’s done for epilepsy, deep brain stimulation, (that has made it) easier for investigative research, it’s much more acceptable now than five to 10 years ago,” Hampson says.

Much of the work that remains now is in shrinking down the electronics.

“Right now it’s not a device, it’s a fair amount of equipment,” Hampson says. “We’re probably looking at devices in the five to 10 year range for human patients.”

The ultimate goal in memory research would be to treat Alzheimer’s disease, but unlike stroke or localized brain injury, Alzheimer’s tends to affect many parts of the brain, especially in its later stages, making these implants a less likely option any time soon.

Berger foresees a future, however, where drugs and implants could be used together to treat early dementia. Drugs could be used to enhance the action of cells that surround the most damaged areas, and the team’s memory implant could be used to replace a lot of the lost cells in the center of the damaged area. “I think the best strategy is going to involve both drugs and devices,” he says.

Unfortunately, the team found that its method can’t help patients with advanced dementia.

“When looking at a patient with mild memory loss, there’s probably enough residual signal to work with, but not when there’s significant memory loss,” Hampson said.

Constantine Lyketsos, professor of psychiatry and behavioral sciences at Johns Hopkins Medicine in Baltimore, which is trialing a deep brain stimulation implant for Alzheimer’s patients, was a little skeptical of the other team’s claims.

“The brain has a lot of redundancy, it can function pretty well if it loses one or two parts. But memory involves circuits diffusely dispersed throughout the brain so it’s hard to envision.” However, he added that it was more likely to be successful in helping victims of stroke or localized brain injury, as indeed its makers are aiming to do.

The UK’s Alzheimer’s Society is cautiously optimistic.

“Finding ways to combat symptoms caused by changes in the brain is an ongoing battle for researchers. An implant like this one is an interesting avenue to explore,” said Doug Brown, director of research and development.

Hampson says the team’s breakthrough is “like the difference between a cane, to help you walk, and a prosthetic limb — it’s two different approaches.”

It will still take time for many people to accept their findings and their claims, he says, but they don’t expect to have a shortage of volunteers stepping forward to try their implant — the project is partly funded by the U.S. military which is looking for help with battlefield injuries.

There are U.S. soldiers coming back from operations with brain trauma, and a neurologist at DARPA (the Defense Advanced Research Projects Agency) is asking, “What can you do for my boys?” Hampson says.

“That’s what it’s all about.”

http://www.cnn.com/2013/05/07/tech/brain-memory-implants-humans/index.html?iref=allsearch

New Study Ties Autism Risk to Creases in Placenta


After most pregnancies, the placenta is thrown out, having done its job of nourishing and supporting the developing baby.

But a new study raises the possibility that analyzing the placenta after birth may provide clues to a child’s risk for developing autism. The study, which analyzed placentas from 217 births, found that in families at high genetic risk for having an autistic child, placentas were significantly more likely to have abnormal folds and creases.

“It’s quite stark,” said Dr. Cheryl K. Walker, an obstetrician-gynecologist at the Mind Institute at the University of California, Davis, and a co-author of the study, published in the journal Biological Psychiatry. “Placentas from babies at risk for autism, clearly there’s something quite different about them.”

Researchers will not know until at least next year how many of the children whose placentas were studied, all now between 2 and 5 years old, will be found to have autism. Experts said, however, that if researchers find that children with autism had more placental folds, called trophoblast inclusions, visible after birth, the condition could become an early indicator or biomarker for babies at high risk for the disorder.

“It would be really exciting to have a real biomarker and especially one that you can get at birth,” said Dr. Tara Wenger, a researcher at the Center for Autism Research at Children’s Hospital of Philadelphia, who was not involved in the study.

The research potentially marks a new frontier, not only for autism, but also for the significance of the placenta, long considered an after-birth afterthought. Now, only 10 percent to 15 percent of placentas are analyzed, usually after pregnancy complications or a newborn’s death.

Dr. Harvey J. Kliman, a research scientist at the Yale School of Medicine and lead author of the study, said the placenta had typically been given such little respect in the medical community that wanting to study it was considered equivalent to someone in the Navy wanting to scrub ships’ toilets with a toothbrush. But he became fascinated with placentas and noticed that inclusions often occurred with births involving problematic outcomes, usually genetic disorders.

He also noticed that “the more trophoblast inclusions you have, the more severe the abnormality.” In 2006, Dr. Kliman and colleagues published research involving 13 children with autism, finding that their placentas were three times as likely to have inclusions. The new study began when Dr. Kliman, looking for more placentas, contacted the Mind Institute, which is conducting an extensive study, called Marbles, examining potential causes of autism.

“This person came out of the woodwork and said, ‘I want to study trophoblastic inclusions,’ ” Dr. Walker recalled. “Now I’m fairly intelligent and have been an obstetrician for years and I had never heard of them.”

Dr. Walker said she concluded that while “this sounds like a very smart person with a very intriguing hypothesis, I don’t know him and I don’t know how much I trust him.” So she sent him Milky Way bar-size sections of 217 placentas and let him think they all came from babies considered at high risk for autism because an older sibling had the disorder. Only after Dr. Kliman had counted each placenta’s inclusions did she tell him that only 117 placentas came from at-risk babies; the other 100 came from babies with low autism risk.

She reasoned that if Dr. Kliman found that “they all show a lot of inclusions, then maybe he’s a bit overzealous” in trying to link inclusions to autism. But the results, she said, were “astonishing.” More than two-thirds of the low-risk placentas had no inclusions, and none had more than two. But 77 of the high-risk placentas had inclusions; 48 of them had two or more, including 16 with between 5 and 15 inclusions.

Dr. Walker said that typically between 2 percent and 7 percent of at-risk babies develop autism, and 20 percent to 25 percent have either autism or another developmental delay. She said she is seeing some autism and non-autism diagnoses among the 117 at-risk children in the study, but does not yet know how those cases match with placental inclusions.

Dr. Jonathan L. Hecht, associate professor of pathology at Harvard Medical School, said the study was intriguing and “probably true if it finds an association between these trophoblast inclusions and autism.” But he said that inclusions were the placenta’s way of responding to many kinds of stress, so they might turn out not to be specific enough to predict autism.

Dr. Kliman calls inclusions a “check-engine light, a marker of: something’s wrong, but I don’t know what it is.”

That’s how Chris Mann Sullivan sees it, too. Dr. Sullivan, a behavioral analyst in Morrisville, N.C., was not in the study, but sent her placenta to Dr. Kliman after her daughter Dania, now 3, was born. He found five inclusions. Dr. Sullivan began intensive one-on-one therapy with Dania, who has not been given a diagnosis of autism, but has some relatively mild difficulties.

“What would have happened if I did absolutely nothing, I’m not sure,” Dr. Sullivan said. “I think it’s a great way for parents to say, ‘O.K., we have some risk factors; we’re not going to ignore it.’ ”

Thanks to Dr. Rajadhyaksha for bringing this to the attention of the It’s Interesting community.

Documentary on Sleep Paralysis this May


Stephanie Pappas, LiveScience Senior Writer

When filmmaker Carla MacKinnon started waking up several times a week unable to move, with the sense that a disturbing presence was in the room with her, she didn’t call up her local ghost hunter. She got researching. Now, that research is becoming a short film and multiplatform art project exploring the strange and spooky phenomenon of sleep paralysis. The film, supported by the Wellcome Trust and set to screen at the Royal College of Arts in London, will debut in May.

Sleep paralysis happens when people become conscious while their muscles remain in the ultra-relaxed state that prevents them from acting out their dreams. The experience can be quite terrifying, with many people hallucinating a malevolent presence nearby, or even an attacker suffocating them. Surveys put the number of sleep paralysis sufferers between about 5 percent and 60 percent of the population. “I was getting quite a lot of sleep paralysis over the summer, quite frequently, and I became quite interested in what was happening, what medically or scientifically, it was all about,” MacKinnon said.

Her questions led her to talk with psychologists and scientists, as well as to people who experience the phenomenon. Myths and legends about sleep paralysis persist all over the globe, from the incubus and succubus (male and female demons, respectively) of European tales to a pink dolphin-turned-nighttime seducer in Brazil. Some of the stories MacKinnon uncovered reveal why these myths are so chilling.

One man told her about his frequent sleep paralysis episodes, during which he’d experience extremely realistic hallucinations of a young child, skipping around the bed and singing nursery rhymes. Sometimes, the child would sit on his pillow and talk to him. One night, the tot asked the man a personal question. When he refused to answer, the child transformed into a “horrendous demon,” MacKinnon said.

For another man, who had the sleep disorder narcolepsy (which can make sleep paralysis more common), his dream world clashed with the real world in a horrifying way. His sleep paralysis episodes typically included hallucinations that someone else was in his house or his room — he’d hear voices or banging around. One night, he awoke in a paralyzed state and saw a figure in his room as usual. “He suddenly realizes something is different,” MacKinnon said. “He suddenly realizes that he is in sleep paralysis, and his eyes are open, but the person who is in the room is in his room in real life.” The figure was no dream demon, but an actual burglar.

Sleep paralysis experiences are almost certainly behind the myths of the incubus and succubus, demons thought to have sex with unsuspecting humans in their sleep. In many cases, MacKinnon said, the science of sleep paralysis explains these myths. The feeling of suffocating or someone pushing down on the chest that often occurs during sleep paralysis may be a result of the automatic breathing pattern people fall into during sleep. When they become conscious while still in this breathing pattern, people may try to bring their breathing under voluntary control, leading to the feeling of suffocating. Add to that the hallucinations that seem to seep in from the dream world, and it’s no surprise that interpretations lend themselves to demons, ghosts or even alien abduction, MacKinnon said.

What’s more, MacKinnon said, sleep paralysis is more likely when your sleep is disrupted in some way — perhaps because you’ve been traveling, you’re too hot or too cold, or you’re sleeping in an unfamiliar or spooky place. Those tendencies may make it more likely that a person will experience sleep paralysis when already vulnerable to thoughts of ghosts and ghouls. “It’s interesting seeing how these scientific narratives and the more psychoanalytical or psychological narratives can support each other rather than conflict,” MacKinnon said.

Since working on the project, MacKinnon has been able to bring her own sleep paralysis episodes under control — or at least learned to calm herself during them. The trick, she said, is to use episodes like a form of research, by paying attention to details like how her hands feel and what position she’s in. This sort of mindfulness tends to make scary hallucinations blink away, she said. “Rationalizing it is incredibly counterintuitive,” she said. “It took me a really long time to stop believing that it was real, because it feels so incredibly real.”

http://www.livescience.com/28325-spooky-film-explores-sleep-paralysis.html

Researchers explore connecting the brain to machines


Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a picture of a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
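A toy way to see why the decoder does not need to be perfect: fix a random mapping from “neural activity” to cursor movement and let the activity pattern itself adapt from error feedback. The sketch below illustrates that closed-loop idea with simulated numbers; it is not any lab’s actual algorithm.

```python
# Toy closed-loop "co-adaptation": the decoder (a fixed random matrix) never changes;
# the simulated neural activity pattern adapts from error feedback until the cursor
# reaches the target, loosely mirroring how a brain adapts to an implanted decoder.
import numpy as np

rng = np.random.default_rng(1)
decoder = rng.normal(size=(2, 10))   # fixed mapping: 10 "neurons" -> 2-D cursor position
activity = np.zeros(10)              # the firing pattern the "brain" is free to adjust
target = np.array([1.0, -0.5])

for _ in range(500):
    cursor = decoder @ activity
    error = cursor - target
    # Feedback-driven adjustment: nudge the activity in the direction that shrinks the error.
    activity -= 0.02 * decoder.T @ error

print("target:               ", target)
print("cursor after practice:", np.round(decoder @ activity, 3))
```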

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?'” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may be able to detect and respond to more abstract types of mental activity in the always-changing future.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines

Putting the Clock in ‘Cock-A-Doodle-Doo’

Of course, roosters crow with the dawn. But are they simply reacting to the environment, or do they really know what time of day it is? Researchers reporting on March 18 in Current Biology, a Cell Press publication, have evidence that puts the clock in “cock-a-doodle-doo.”

“‘Cock-a-doodle-doo’ symbolizes the break of dawn in many countries,” says Takashi Yoshimura of Nagoya University. “But it wasn’t clear whether crowing is under the control of a biological clock or is simply a response to external stimuli.”

That’s because other things — a car’s headlights, for instance — will set a rooster off, too, at any time of day. To find out whether the roosters’ crowing is driven by an internal biological clock, Yoshimura and his colleague Tsuyoshi Shimmura placed birds under constant light conditions and turned on recorders to listen and watch.

Under round-the-clock dim lighting, the roosters kept right on crowing each morning just before dawn, proof that the behavior is entrained to a circadian rhythm. The roosters’ reactions to external events also varied over the course of the day.

In other words, predawn crowing and the crowing that roosters do in response to other cues both depend on a circadian clock.

The findings are just the start of the team’s efforts to unravel the roosters’ innate vocalizations, which aren’t learned like songbird songs or human speech, the researchers say.

“We still do not know why a dog says ‘bow-wow’ and a cat says ‘meow,’” Yoshimura says. “We are interested in the mechanism of this genetically controlled behavior and believe that chickens provide an excellent model.”

Tsuyoshi Shimmura, Takashi Yoshimura. Circadian clock determines the timing of rooster crowing. Current Biology, 2013; 23 (6): R231 DOI: 10.1016/j.cub.2013.02.015

http://www.sciencedaily.com/releases/2013/03/130318132625.htm