DARPA project suggests a mix of man and machine may be the most efficient way to spot danger: the Cognitive Technology Threat Warning System


 

Sentry duty is a tough assignment. Most of the time there’s nothing to see, and when a threat does pop up, it can be hard to spot. In some military studies, humans have been shown to detect only 47 percent of visible dangers.

A project run by the Defense Advanced Research Projects Agency (DARPA) suggests that combining the abilities of human sentries with those of machine-vision systems could be a better way to identify danger. The system also uses electroencephalography to identify spikes in brain activity that can correspond to subconscious recognition of an object.

An experimental system developed by DARPA sandwiches a human observer between layers of computer vision and has been shown to outperform either machines or humans used in isolation.

The so-called Cognitive Technology Threat Warning System consists of a wide-angle camera and radar that collect imagery for humans to review on a screen, and a wearable electroencephalogram device that measures the reviewer’s brain activity. This allows the system to detect unconscious recognition of changes in a scene, known as a P300 event.
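
As a rough illustration of that last step, the snippet below flags a possible P300-like deflection in a single EEG epoch by comparing the average amplitude 250–500 ms after an image appears to a pre-stimulus baseline. It is only a sketch of the general technique; the sampling rate, window, and threshold are assumptions, not DARPA’s actual signal processing.

```python
# Minimal sketch (not DARPA's pipeline): flag a possible P300 event in one EEG
# epoch by comparing the mean amplitude 250-500 ms after stimulus onset to a
# pre-stimulus baseline. Sampling rate, window, and threshold are assumptions.
import numpy as np

FS = 250                  # samples per second (assumed)
BASELINE = (-0.2, 0.0)    # seconds relative to stimulus onset
P300_WIN = (0.25, 0.50)   # window where a P300 deflection is expected
THRESHOLD_UV = 5.0        # illustrative amplitude threshold in microvolts

def has_p300(epoch, onset_idx, fs=FS):
    """epoch: 1-D array of EEG samples (microvolts); onset_idx: stimulus sample."""
    def mean_window(t0, t1):
        i0, i1 = onset_idx + int(t0 * fs), onset_idx + int(t1 * fs)
        return epoch[i0:i1].mean()

    baseline = mean_window(*BASELINE)
    p300 = mean_window(*P300_WIN)
    return (p300 - baseline) > THRESHOLD_UV

# Example: a synthetic epoch with a bump about 350 ms after onset
t = np.arange(-0.2, 0.8, 1 / FS)
epoch = np.random.randn(t.size) + 8.0 * np.exp(-((t - 0.35) ** 2) / 0.01)
print(has_p300(epoch, onset_idx=int(0.2 * FS)))   # usually True for this epoch
```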

In experiments, a participant was asked to review test footage shot at military test sites in the desert and rain forest. The system caught 91 percent of incidents (such as humans on foot or approaching vehicles) in the simulation. It also widened the field of view that could effectively be monitored. False alarms were raised only 0.2 percent of the time, down from 35 percent when a computer vision system was used on its own. When combined with radar, which detects things invisible to the naked eye, the accuracy of the system was close to 100 percent, DARPA says.

“The DARPA project is different from other ‘human-in-the-loop’ projects because it takes advantage of the human visual system without having the humans do any ‘work,’ ” says computer scientist Devi Parikh of the Toyota Technological Institute at Chicago. Parikh researches vision systems that combine human and machine expertise.

While electroencephalogram-measuring caps are commercially available for a few hundred dollars, Parikh warns that the technology is still in its infancy. Furthermore, she notes, the P300 signals may vary enough to require training or personalized processing, which could make it harder to scale up such a system for widespread use.

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.technologyreview.com/news/507826/sentry-system-combines-a-human-brain-with-computer-vision/

Brain-controlled helicopter may soon be available

For the last few years, Puzzlebox has been publishing open source software and hacking guides that walk makers through the modification of RC helicopters so that they can be flown and controlled using just the power of the mind. Full systems have also been custom built to introduce youngsters to brain-computer interfaces and neuroscience. The group is about to take the project to the next stage by making a Puzzlebox Orbit brain-controlled helicopter available to the public, while encouraging user experimentation by making all the code, schematics, 3D models, build guides and other documentation freely available under an open-source license.

The helicopter has a protective outer sphere that prevents the rotor blades from striking walls, furniture, floor and ceiling, and it is very similar in design to the Kyosho Space Ball. It’s not the same craft, though, and the ability to control it with the mind is not the only difference.

“There’s a ring around the top and bottom of the Space Ball which isn’t present on the Puzzlebox Orbit,” Castellotti says. “The casing around their servo motor looks quite different, too. The horizontal ring at mid-level is more rounded on the Orbit, and vertically it is more squat. We’re also selling the Puzzlebox Orbit in the U.S. for US$89 (including shipping), versus their $117 (plus shipping).”

Two versions of the Puzzlebox Orbit system are being offered to the public. The first is designed for use with mobile devices like tablets and smartphones. A NeuroSky MindWave Mobile EEG headset communicates with the device via Bluetooth. Proprietary software then analyzes the brainwave data in real time and translates it into command signals, which are sent to the helicopter via an IR adapter plugged into the device’s audio jack.

This system isn’t quite ready for all mobile operating platforms, though. The team is “happy on Android but don’t have access to a wide variety of hardware for testing,” confirmed Castellotti, adding “Some tuning after release is expected. We’ll have open source code available to iOS developers and will have initiated the App Store evaluation process if it’s not already been approved.”

The second offering comes with a Puzzlebox Pyramid, which was developed completely in-house and has a dual role as a home base for the Orbit helicopter and a remote control unit. At its heart is a programmable micro-controller that’s compatible with Arduino boards. On one face of the pyramid there’s a broken circle of multi-colored LED lights in a clock face configuration. These are used to indicate levels of concentration, mental relaxation, and the quality of the EEG signal from a NeuroSky MindWave EEG headset (which wirelessly communicates with a USB dongle plugged into the rear of the pyramid).

Twelve infrared LEDs at the top of each face actually control the Orbit helicopter, and with some inventive tweaking, these can also be used to control other IR toys and devices (including TVs).

In either case, a targeted mental state can be assigned to a helicopter control or flight path (such as hover in place or fly in a straight line) and actioned whenever that state is detected and maintained. Estimated Orbit flight time is around eight minutes (or more), after which the user will need to recharge the unit for 30 minutes before the next take-off.
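
As a rough sketch of how a “detected and maintained” mental state might be turned into a flight command, the snippet below polls a hypothetical read_attention() function (standing in for the NeuroSky attention stream) and fires a hover command only after the level has stayed above a threshold for a couple of seconds. The threshold, timing, and both callbacks are assumptions for illustration; Puzzlebox’s real, open-source code is the authoritative reference.

```python
# Illustrative sketch, not Puzzlebox's code: trigger a command only when a
# target mental state is both detected and maintained for a short period.
import time

ATTENTION_THRESHOLD = 70   # assumed 0-100 level that counts as "concentrating"
HOLD_SECONDS = 2           # state must be maintained this long before acting

def fly_if_focused(read_attention, send_command):
    """read_attention: returns the headset's attention estimate once a second.
    send_command: sends an IR command string to the helicopter (placeholder)."""
    held = 0
    while True:
        level = read_attention()
        held = held + 1 if level >= ATTENTION_THRESHOLD else 0
        if held >= HOLD_SECONDS:
            send_command("hover")   # assigned flight action for this mental state
            held = 0
        time.sleep(1)
```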

At the time of writing, a crowd-funding campaign on Kickstarter to take the prototype system into mass production has attracted almost three times its target. The Puzzlebox team has already secured enough hardware and materials to start shipping the first wave of Orbits next month. International backers will get their hands on the system early next year.

The brain-controlled helicopter is only a part of the package, however. The development team has promised to release the source code for the Linux/Mac/PC software and mobile apps, all protocols, and available hardware schematics under open-source licenses. Step-by-step how-to guides are also in the pipeline (like the one already on the Instructables website), together with educational aids detailing how everything works.

“We have prepared contributor tools for Orbit, including a wiki, source code browser, and ticket tracking system,” said Castellotti. “We are already using these tools internally to build the project. Access to these will be granted when the Kickstarter campaign closes.”

“We would really like to underline that we are producing more than just a brain-controlled helicopter,” he stressed. “The toy and concept is fun and certainly the main draw, but the true purpose lies in the open code and hacking guides. We don’t want to be the holiday toy that gets played with for ten minutes then sits forever in the corner or on a shelf. We want owners to be able to use the Orbit to experiment with biofeedback – practicing how to concentrate better or to unwind and relax with this physical and visual aid.”

“And when curiosity kicks in and they start to wonder how it actually works, all of the information is published freely. That’s how we hope to share knowledge and foster a community. For example, a motivated experimenter should be able to start with the hardware we provide, and using our tools and guides learn how to hack support for driving a remote controlled car or causing a television to change channels when attention levels are measured as being low for too long a period of time. Such advancements could then be contributed back to the rest of our users.”

The Kickstarter campaign will close on December 8, after which the team will concentrate its efforts on getting Orbit systems delivered to backers and ensure that all the background and support documentation is in place. If all goes according to plan, a retail launch could follow as soon as Q1 2013.

It is hoped that the consumer Puzzlebox Orbit mobile/tablet edition with the NeuroSky headset will remain under US$200, followed by the Pyramid version at an as-yet undisclosed price.

http://www.gizmag.com/puzzlebox-orbit-brain-controlled-helicopter/25138/

 

New smell discovered

 

Scientists have discovered a new smell, but you may have to go to a laboratory to experience it yourself.

The smell is dubbed “olfactory white,” because it is the nasal equivalent of white noise, researchers reported Nov. 19 in the journal Proceedings of the National Academy of Sciences. Just as white noise is a mixture of many different sound frequencies and white light is a mixture of many different wavelengths, olfactory white is a mixture of many different smelly compounds.

In fact, the key to olfactory white is not the compounds themselves, researchers found, but the fact that there are a lot of them. 

“[T]he more components there were in each of two mixtures, the more similar the smell of those two mixtures became, even though the mixtures had no components in common,” they wrote.

Almost any given smell in the real world comes from a mixture of compounds. Humans are good at telling these mixtures apart (it’s hard to mix up the smell of coffee with the smell of roses, for example), but we’re bad at picking individual components out of those mixtures. (Quick, sniff your coffee mug and report back all the individual compounds that make that roasted smell. Not so easy, huh?)

Mixing multiple wavelengths that span the human visual range equally makes white light; mixing multiple frequencies that span the range of human hearing equally makes the whooshing hum of white noise. Neurobiologist Noam Sobel from the Weizmann Institute of Science in Israel and his colleagues wanted to find out whether a similar phenomenon happens with smell.

In a series of experiments, they exposed participants to hundreds of equally mixed smells, some containing as few as one compound and others containing up to 43 components. They first had 56 participants compare mixtures of the same number of compounds with one another. For example, a person might compare a 40-compound mixture with a 40-compound mixture, neither of which had any components in common.

This experiment revealed that the more components in a mixture, the worse participants were at telling them apart. A four-component mixture smells less similar to other four-component mixtures than a 43-component mixture smells to other 43-component mixtures.
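
That convergence is easy to reproduce in a toy model. The simulation below (not the study’s own analysis) represents each compound as a random “perceptual profile” vector, builds two mixtures that share no compounds, and measures how similar their averaged profiles are; as the number of components grows, the similarity climbs toward 1.

```python
# Toy simulation of the behavioral finding, not the paper's analysis: mixtures
# sharing NO compounds look more alike as the number of components grows.
# Dimensions, pool size, and trial counts are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
DIM = 20                             # assumed dimensionality of "smell space"
compounds = rng.random((200, DIM))   # a pool of 200 synthetic compounds

def mixture_similarity(n_components, trials=500):
    sims = []
    for _ in range(trials):
        idx = rng.permutation(len(compounds))
        a = compounds[idx[:n_components]].mean(axis=0)
        b = compounds[idx[n_components:2 * n_components]].mean(axis=0)
        sims.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return np.mean(sims)

for n in (1, 4, 10, 20, 40):
    print(n, round(mixture_similarity(n), 3))
# Similarity rises toward 1.0 as n grows: many-component mixtures converge on
# the same average profile -- the analogue of "olfactory white".
```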

The researchers seemed on track to finding the olfactory version of white noise. They set up a new experiment to confirm the find. In this experiment, they first created four 40-component mixtures. Twelve participants were then given one of the mixtures to sniff and told that it was called “Laurax,” a made-up word. Three of the participants were told mixture 1 was Laurax, three were told it was mixture 2, three were told it was mixture 3, and the rest were told it was mixture 4.

After three days of sniffing their version of Laurax in the lab, the participants were given four new scents and four scent labels, one of which was Laurax. They were asked to label each scent with the most appropriate label.

The researchers found that the label “Laurax” was most popular for scents with more compounds. In fact, the more compounds in a mixture, the more likely participants were to call it Laurax. The label went to mixtures with more than 40 compounds 57.1 percent of the time.

Another experiment replicated the first, except that it allowed for participants to label one of the scents “other,” a way to ensure “Laurax” wasn’t just a catch-all. Again, scents with more compounds were more likely to get the Laurax label.

The meaning of these results, the researchers wrote, is that olfactory white is a distinct smell, caused not by specific compounds but by certain mixes of compounds. The key is that the compounds are all of equal intensity and that they span the full range of human smells. That’s why roses and coffee, both of which have many smell compounds, don’t smell anything alike: Their compounds are unequally mixed and don’t span a large range of smells.

In other words, our brains treat smells as a single unit, not as a mixture of compounds to break down, analyze and put back together again. If they didn’t, they’d never perceive mixtures of completely different compounds as smelling the same.

Perhaps the next burning question is: What does olfactory white smell like? Unfortunately, the scent is so bland as to defy description. Participants rated it right in the middle of the scale for both pleasantness and edibility.

“The best way to appreciate the qualities of olfactory white is to smell it,” the researchers wrote.

http://www.livescience.com/24890-new-white-smell-discovered.html

Scientists decode why Einstein was a genius

 

Physicist Albert Einstein’s brain had an “extraordinary” prefrontal cortex – unlike those of most people – which may have contributed to his remarkable genius, a new study has claimed.

According to the study led by Florida State University evolutionary anthropologist Dean Falk, portions of Einstein’s brain have been found to be unlike those of most people and could be related to his extraordinary cognitive abilities.

Falk and his colleagues describe for the first time the entire cerebral cortex of Einstein’s brain from an examination of 14 recently discovered photographs.

The researchers compared Einstein’s brain to 85 “normal” human brains and, in light of current functional imaging studies, interpreted its unusual features.

“Although the overall size and asymmetrical shape of Einstein’s brain were normal, the prefrontal, somatosensory, primary motor, parietal, temporal and occipital cortices were extraordinary.

“These may have provided the neurological underpinnings for some of his visuospatial and mathematical abilities, for instance,” said Falk.

The study was published in the journal Brain.

On Einstein’s death in 1955, his brain was removed and photographed from multiple angles with the permission of his family. Furthermore, it was sectioned into 240 blocks from which histological slides were prepared.

A great majority of the photographs, blocks and slides were lost from public sight for more than 55 years. The 14 photographs used by the researchers now are held by the National Museum of Health and Medicine.

The study also published the “roadmap” to Einstein’s brain prepared in 1955 by Dr Thomas Harvey to illustrate the locations within his previously whole brain of 240 dissected blocks of tissue, which provides a key to locating the origins within the brain of the newly emerged histological slides.

http://www.phenomenica.com/2012/11/scientists-decode-why-einstein-was-a-genius.html

A Peek Inside Rappers’ Brains Shows Roots Of Improvisation

 The warmer orange colors show parts of the brain most active during improvisational rap. The blue regions are most active when rappers performed a memorized piece.

Some rappers have an impressive ability to make up lyrics on the fly, in a style known as freestyle rap.

These performers have a lot in common with jazz musicians, it turns out.

Scientists have found artists in both genres are using their brains in similar ways when they improvise.

A group of jazz pianists had their heads examined in a 2008 PLOS One study, which subjected the musicians to functional magnetic resonance imaging scans. These scans highlight areas of brain activity.

When riffing on a tune instead of playing a memorized composition, the musicians had lower activity in a part of the frontal brain that is thought to be responsible for planning and greater activity in another part of the frontal brain believed to motivate thought and action.

After hearing about the jazz study, Los Angeles rappers Michael Eagle and Daniel Rizik-Baer contacted one of the researchers, Allen Braun, chief of the voice, speech and language branch of the National Institute on Deafness and Other Communication Disorders. Eagle and Rizik-Baer proposed a similar study on freestyle rap.

Soon Braun’s colleague Siyuan Liu at NIDCD put together a team, which included the two rappers, to determine what was happening inside these performers’ brains.

In their study, published today in Scientific Reports, five professional rappers were given a set of lyrics to memorize. A week later each was placed inside an MRI machine to give his performance.

“It’s not a very natural environment, that’s for sure,” notes Braun, a co-author on the latest study. “It’s noisy and you have to lie on your back. And you need to stay still.”

The clinical setting may not have been a normal venue for the rappers, but they had little difficulty performing on cue. Each would perform the memorized rap and then switch to improvising over the same music track.

“By comparing the two, we could see the neural activity associated with freestyle rap,” Braun says.
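
For readers curious what such a comparison looks like computationally, the sketch below runs a generic voxelwise paired contrast between the improvised and memorized conditions on made-up data. It is an illustration of the general approach, not the study’s actual fMRI pipeline, and the data shapes and threshold are arbitrary.

```python
# Generic sketch of an improvised-vs-memorized contrast: a paired t-test per
# voxel across subjects, on fabricated data. Not the study's analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subjects, n_voxels = 5, 1000
improvised = rng.normal(0.1, 1.0, (n_subjects, n_voxels))   # mean activation per voxel
memorized = rng.normal(0.0, 1.0, (n_subjects, n_voxels))

t_vals, p_vals = stats.ttest_rel(improvised, memorized, axis=0)
active = np.where(p_vals < 0.001)[0]   # voxels "more active" while freestyling (uncorrected)
print(len(active), "voxels pass the illustrative threshold")
```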

When Liu and the other researchers examined the fMRI data, they found that, like the jazz musicians, the rappers’ brains were paying less conscious attention to what was going on but showed strong activity in the area that motivates action and thought.

“Unlike the jazz study, these changes were very strongly associated with the left hemisphere of the brain,” Braun says. That’s the half of the brain where, for most right-handed people, language is processed.

The team also found a network of connections in the performers’ brains during the freestyle raps, linking parts of the brain responsible for motivation, language, action and emotion.

And raps that were rated as more innovative correlated with more activity in the region of the brain that stores words. It’s not surprising, says Braun, “that the more creative the rap, the more they’re tapping the lexicon.”

The study is part of a larger body of research that is hoping to determine what is happening inside the brain during the creative process. Braun says that he’d like to know more about what happens in the next phase of creativity, revision. He has recruited a group of poets for that study.

http://www.npr.org/blogs/health/2012/11/14/165145967/a-peek-inside-rappers-brains-shows-roots-of-improvisation?sc=emaf

Thanks to Dr. Nakamura for bringing this to the attention of the It’s Interesting community.

How childhood neglect affects the brain

 

Science is painting a dramatic picture of how childhood neglect damages developing brains, so stunting them that neglect might be likened to physically violent abuse.

The latest addition to this research narrative comes from a study of mice placed in isolation early in their lives, an experiment that, on its surface, might seem redundant: After all, we already know that neglect is bad for humans, much less mice.

But the key to the study is in the details. The researchers found striking abnormalities in tissues that transmit electrical messages across the brain, suggesting a specific mechanism for some of the dysfunctions seen in neglected human children.

“This is very strong evidence that changes in myelin cause some of the behavioral problems caused by isolation,” said neurologist Gabriel Corfas of Harvard Medical School, a co-author of the new study, released Sept. 13 in Science.

 

Corfas and his team, led by fellow Harvard Med neuroscientist Manabu Makinodan, put 21-day-old mice in isolation for two weeks, then returned them to their colonies. When the mice reached adolescence, the researchers compared their brains and behavior to those of mice that hadn’t been isolated.

The isolated mice were antisocial, with striking deficits in memory. Their myelin, a cell layer that forms around nerve fibers like insulation around wires, was unusually thin, especially in the prefrontal cortex, a brain region central to cognition and personality.

Similar patterns of behavior have been seen, again and again, in children raised in orphanages or neglected by parents, as have changes to a variety of brain regions, including the prefrontal cortex. The myelin deficiencies identified by Corfas and Makinodan may underlie these defects.

 

“This is incredibly important data, because it gives us the neural mechanisms associated with the deleterious changes in the brain” that arise from neglect, said Nathan Fox, a cognitive neuroscientist at the University of Maryland.

Fox was not involved in the new study, but is part of a research group working on a long-term study of childhood neglect that is scientifically striking and poignantly tragic. Led by Harvard Medical School pediatricians Charles Nelson and Margaret Sheridan, the project has tracked for the last 12 years children who started their lives in an orphanage in Bucharest, Romania, a country infamous for the spartan, impersonal conditions of its orphanages.

Among children who spent their first two years in the orphanage, the researchers observed high levels of developmental problems, cognitive deficits, mental illness, and significant reductions in brain size. When the researchers measured the sheer amount of electrical activity generated by the brains of children who’d been isolated as toddlers, “it was like you’d had a rheostat, a dimmer, and dimmed down the amount of energy in these institutionalized children,” said Fox.

These problems persisted even when toddlers were later adopted, suggesting a crucial importance for those early years in setting a life’s neurological trajectory. “There’s a sensitive period for which, if a child is taken out of an institution, the effects appear to be remediated, and after which remediation is very, very difficult,” Fox said. The same pattern was observed in Corfas and Makinodan’s mice.

One phenomenon not studied in the mice, but regularly found in people neglected as children, is trouble with stress: mood disorders, anxiety, and general dysfunction of the body’s stress responses.

Those mechanisms have been studied in another animal, the rhesus monkey. While deprivation studies on non-human primates — and in particular chimpanzees — are controversial, the results from the monkey studies have been instructive.

Early-life isolation sets off a flood of hormones that permanently warp their responses to stress, leaving them anxious and prone to violent swings in mood.

Isolation is so damaging because humans, especially as infants, literally depend on social stimulation to shape their minds, said psychologist John Cacioppo of the University of Chicago.

“Human social processes were once thought to have been incidental to learning and cognition,” Cacioppo wrote in an e-mail. “However, we now think that the complexities and demands of social species have contributed to the evolution of the brain and nervous system and to various aspects of cognition.”

Corfas and Makinodan’s team linked specific genetic changes to the abnormalities in their mice, and hope they might someday inform the development of drugs that can help reverse isolation’s effects.

A more immediate implication of the research is social. As evidence of neglect’s severe, long-term consequences accumulates, it could shape the way people think not just of orphanages, but policy matters like maternity and paternity leave, or the work requirements of single parents on welfare.

“What this work certainly says is that the first years of life are crucially important for brain architecture,” Fox said. “Infants and young children have to grow up in an environment of social relationships, and experiencing those is critical for healthy cognitive, social and psychological development. As a society, we should be figuring out how to encourage all that to happen.”

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.wired.com/wiredscience/2012/09/neuroscience-of-neglect/

Pupil dilation in response to viewing erotic videos indicates sexual orientation

For the first time, researchers have used a specialized camera to measure pupillary changes in people watching erotic videos, the changes in pupil dilation revealing where the participant is located on the heterosexual-homosexual spectrum. The researchers at Cornell University who developed the technique say it provides an accurate method of gauging the precise sexual orientation of a subject. The work is detailed in the journal PLoS ONE.

Previously, researchers trying to assess sexual orientation simply asked people about their sexuality or used intrusive physiological measures, such as assessing their genital arousal.

“We wanted to find an alternative measure that would be an automatic indication of sexual orientation, but without being as invasive as previous measures. Pupillary responses are exactly that,” says lead researcher Gerulf Rieger. “With this new technology we are able to explore sexual orientation of people who would never participate in a study on genital arousal, such as people from traditional cultures. This will give us a much better understanding how sexuality is expressed across the planet.”

Experimenting with the technique, the researchers found heterosexual men showed strong pupillary responses to sexual videos of women, and little response to videos of men. Heterosexual women, however, showed pupillary responses to both sexes. This result confirms previous research suggesting that women have a very different type of sexuality than men.
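
Computationally, the comparison boils down to baseline-corrected pupil diameter averaged per stimulus category. The sketch below shows one way such a male-versus-female contrast could be formed; the function names, fake data, and simple difference score are assumptions for illustration, not the paper’s analysis.

```python
# Illustrative only: average pupil diameter per stimulus category, subtract a
# neutral-stimulus baseline, and form a simple male-vs-female contrast.
import numpy as np

def mean_dilation(trials, baseline):
    """trials: list of 1-D pupil-diameter traces (mm); baseline: scalar mm."""
    return float(np.mean([np.mean(t) - baseline for t in trials]))

def orientation_index(male_trials, female_trials, neutral_trials):
    baseline = float(np.mean([np.mean(t) for t in neutral_trials]))
    to_male = mean_dilation(male_trials, baseline)
    to_female = mean_dilation(female_trials, baseline)
    # Positive = stronger response to male stimuli; negative = to female stimuli
    return to_male - to_female

# Example on fabricated traces
rng = np.random.default_rng(3)
fake = lambda mu: [mu + 0.1 * rng.standard_normal(100) for _ in range(10)]
print(orientation_index(fake(3.4), fake(3.1), fake(3.0)))   # > 0 for this fake data
```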

Interestingly, the new study sheds new light on the long-standing debate on male bisexuality. Previous notions were that most bisexual men do not base their sexual identity on their physiological sexual arousal but on romantic and identity issues. Contrary to this claim, bisexual men in the new study showed substantial pupil dilations to sexual videos of both men and women.

“We can now finally argue that a flexible sexual desire is not simply restricted to women – some men have it, too, and it is reflected in their pupils,” said co-researcher Ritch C. Savin-Williams. “In fact, not even a division into ‘straight,’ ‘bi,’ and ‘gay’ tells the full story. Men who identify as ‘mostly straight’ really exist both in their identity and their pupil response; they are more aroused to males than straight men, but much less so than both bisexual and gay men.”

Thanks to Dr. A.R. for bringing this to the attention of the It’s Interesting community.

Retinal device restores sight to blind mice

 

Researchers report they have developed in mice what they believe might one day become a breakthrough for humans: a retinal prosthesis that could restore near-normal sight to those who have lost their vision.

That would be a welcome development for the roughly 25 million people worldwide who are blind because of retinal disease, most notably macular degeneration.

The notion of using prosthetics to combat blindness is not new, with prior efforts involving retinal electrode implantation and/or gene therapy restoring a limited ability to pick out spots and rough edges of light.

The current effort takes matters to a new level. The scientists fashioned a prosthetic system packed with computer chips that replicate the “neural impulse codes” the eye uses to transmit light signals to the brain.

“This is a unique approach that hasn’t really been explored before, and we’re really very excited about it,” said study author Sheila Nirenberg, a professor and computational neuroscientist in the department of physiology and biophysics at Weill Medical College of Cornell University in New York City. “I’ve actually been working on this for 10 years. And suddenly, after a lot of work, I knew immediately that I could make a prosthetic that would work, by making one that could take in images and process them into a code that the brain can understand.”

Nirenberg and her co-author Chethan Pandarinath (a former Cornell graduate student now conducting postdoctoral research at Stanford University School of Medicine) report their work in the Aug. 14 issue of Proceedings of the National Academy of Sciences. Their efforts were funded by the U.S. National Institutes of Health and Cornell University’s Institute for Computational Biomedicine.

The study authors explained that retinal diseases destroy the light-catching photoreceptor cells on the retina’s surface. Without those, the eye cannot convert light into neural signals that can be sent to the brain.

However, most of these patients retain the use of their retina’s “output cells” — called ganglion cells — whose job it is to actually send these impulses to the brain. The goal, therefore, would be to jumpstart these ganglion cells by using a light-catching device that could produce critical neural signaling.

But past efforts to implant electrodes directly into the eye have only achieved a small degree of ganglion stimulation, and alternate strategies using gene therapy to insert light-sensitive proteins directly into the retina have also fallen short, the researchers said.

Nirenberg theorized that stimulation alone wasn’t enough if the neural signals weren’t exact replicas of those the brain receives from a healthy retina.

“So, what we did is figure out this code, the right set of mathematical equations,” Nirenberg explained. And by incorporating the code right into their prosthetic device’s chip, she and Pandarinath generated the kind of electrical and light impulses that the brain understood.
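
The snippet below gives a generic flavor of that kind of encoder: a linear spatial filter standing in for a ganglion cell’s receptive field, a nonlinearity that converts the filtered image into a firing rate, and Poisson-like spike generation. The filter shape, nonlinearity, and rates are placeholders for illustration; they are not the study’s fitted equations.

```python
# Generic image-to-spike-train encoder sketch (linear filter -> nonlinearity ->
# stochastic spikes). Parameters are placeholders, not the study's code.
import numpy as np

rng = np.random.default_rng(1)

def center_surround_filter(size=11, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians filter, a stand-in for a ganglion receptive field."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma_c) - g(sigma_s)

def encode(image_patch, dt=0.001, duration=0.2, gain=20.0):
    """Map an 11x11 image patch to a spike train for one model ganglion cell."""
    drive = np.sum(center_surround_filter() * image_patch)    # linear stage
    rate = gain * np.log1p(np.exp(drive))                     # softplus nonlinearity (Hz)
    n_bins = int(duration / dt)
    return rng.random(n_bins) < rate * dt                     # Poisson-like spiking

patch = rng.random((11, 11))
spikes = encode(patch)
print(spikes.sum(), "spikes in 200 ms")
```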

The team also used gene therapy to hypersensitize the ganglion output cells and get them to deliver the visual message up the chain of command.

Behavioral tests were then conducted among blind mice given a code-outfitted retinal prosthetic and among those given a prosthetic that lacked the code in question.

The result: The code group fared dramatically better on visual tracking than the non-code group, with the former able to distinguish images nearly as well as mice with healthy retinas.

“Now we hope to move on to human trials as soon as possible,” said Nirenberg. “Of course, we have to conduct standard safety studies before we get there. And I would say that we’re looking at five to seven years before this is something that might be ready to go, in the best possible case. But we do hope to start clinical trials in the next one to two years.”

Results achieved in animal studies don’t necessarily translate to humans.

Dr. Alfred Sommer, a professor of ophthalmology at Johns Hopkins University in Baltimore and dean emeritus of Hopkins’  Bloomberg School of Public Health, urged caution about the findings.

“This could be revolutionary,” he said. “But I doubt it. It’s a very, very complicated business. And people have been working on it intensively and incrementally for the last 30 years.”

“The fact that they have done something that sounds a little bit better than the last set of results is great,” Sommer added.  “It’s terrific. But this approach is really in its infancy. And I guarantee that it will be a long time before they get to the point where they can really restore vision to people using prosthetics.”

Other advances may offer benefits in the meantime, he said. “We now have new therapies that we didn’t have even five years ago,” Sommer said. “So we may be reaching a state where the amount of people losing their sight will decline even as these new techniques for providing artificial vision improve. It may not be as sci-fi. But I think it’s infinitely more important at this stage.”

http://health.usnews.com/health-news/news/articles/2012/08/13/retinal-device-restores-sight-to-blind-mice

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Baboons Can Learn to Identify Printed Words

Dan the baboon sits in front of a computer screen. The letters BRRU pop up.  With a quick and almost dismissive tap, the monkey signals it’s not a word. Correct. Next comes, ITCS. Again, not a word. Finally KITE comes up.

He pauses and hits a green oval to show it’s a word. In the space of just a few seconds, Dan has demonstrated a mastery of what some experts say is a form of pre-reading and walks away rewarded with a treat of dried wheat.

Dan is part of new research that shows baboons are able to pick up the first step in reading – identifying recurring patterns and determining which four-letter combinations are words and which are just gobbledygook.

The study shows that reading’s early steps are far more instinctive than scientists first thought and it also indicates that non-human primates may be smarter than we give them credit for.

“They’ve got the hang of this thing,” said Jonathan Grainger, a French scientist and lead author of the research.

Baboons and other monkeys are good pattern finders and what they are doing may be what we first do in recognizing words.

It’s still a far cry from real reading. They don’t understand what these words mean, and are just breaking them down into parts, said Grainger, a cognitive psychologist at the Aix-Marseille University in France.

In 300,000 tests, the six baboons distinguished between real and fake words about three out of four times, according to the study published Thursday in the journal Science.

The 4-year-old Dan, the star of the bunch and about the equivalent age of a human teenager, got 80 percent of the words right and learned 308 four-letter words.

The baboons are rewarded with food when they press the right spot on the screen: A blue plus sign for bogus combos or a green oval for real words.

Even though the experiments were done in France, the researchers used English words because it is the language of science, Grainger said.

The key is that these animals not only learned by trial and error which letter combinations were correct, but they also noticed which letters tend to go together to form real words, such as SH but not FX, said Grainger. So even when new words were sprung on them, they did a better job at figuring out which were real.
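
A toy version of that statistical cue is easy to write down: score a four-letter string by how often its letter pairs appear in a small set of real words, and call high-scoring strings “words.” The word list and threshold below are invented for the example; they are not the study’s stimuli.

```python
# Toy bigram-frequency word/nonword classifier illustrating the cue described
# above. The training words and threshold are made up for this example.
from collections import Counter

training_words = ["KITE", "SHIP", "WISH", "DONE", "LAND", "THEN", "HAND", "CASH"]

bigrams = Counter(w[i:i+2] for w in training_words for i in range(len(w) - 1))

def bigram_score(s):
    return sum(bigrams[s[i:i+2]] for i in range(len(s) - 1))

def looks_like_word(s, threshold=2):
    return bigram_score(s) >= threshold

for s in ["BRRU", "ITCS", "KITE"]:
    print(s, bigram_score(s), looks_like_word(s))   # only KITE passes
```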

Grainger said a pre-existing capacity in the brain may allow them to recognize patterns and objects, and perhaps that’s how we humans also first learn to read.

The study’s results were called “extraordinarily exciting” by another language researcher, psychology professor Stanislas Dehaene of the Collège de France, who wasn’t part of this study. He said Grainger’s finding makes sense. Dehaene’s earlier work says a distinct part of the brain visually recognizes the forms of words. The new work indicates this is also likely in a non-human primate.

This new study also tells us a lot about our distant primate relatives.

“They have shown repeatedly amazing cognitive abilities,” said study co-author Joel Fagot, a researcher at the French National Center for Scientific Research.

Bill Hopkins, a professor of psychology at the Yerkes Primate Center in Atlanta, isn’t surprised.

“We tend to underestimate what their capacities are,” said Hopkins, who wasn’t part of the French research team. “Non-human primates are really specialized in the visual domain and this is an example of that.”

This raises interesting questions about how the complex primate mind works without language or what we think of as language, Hopkins said. While we use language to solve problems in our heads, such as deciphering words, it seems that baboons use a “remarkably sophisticated” method to attack problems without language, he said.

Key to the success of the experiment was a change in the testing technique, the researchers said. The baboons weren’t put in the computer stations and forced to take the test. Instead, they could choose when they wanted to work, going to one of the 10 computer booths at any time, even in the middle of the night.

The most ambitious baboons test 3,000 times a day; the laziest only 400.

The advantage of this type of experiment setup, which can be considered more humane, is that researchers get far more trials in a shorter time period, he said.

“They come because they want to,” Fagot said. “What do they want? They want some food. They want to solve some task.”

Speech-Jammer Gun Invented in Japan

 

 

 

Imagine sitting around a conference table with several of your colleagues as you hold an important meeting. Now imagine your boss pulling out what looks like a radar gun for catching speeding motorists and aiming it at anyone who speaks too long, very nearly instantly causing whoever is speaking to start stuttering, then mumbling, and then to stop speaking altogether. That’s the idea behind the SpeechJammer, a gun that can be fired at people to force them to stop speaking. It’s the brainchild of Koji Tsukada and Kazutaka Kurihara, science and technology researchers in Japan. They’ve published a paper describing how it works on the preprint server arXiv.

The idea is based on the fact that to speak properly, we humans need to hear what we’re saying so that we can constantly adjust how we go about it. It’s partly why singers are able to sing better when they wear headphones that allow them to hear their own voice as they sing with music, or use feedback monitors when onstage. Trouble comes, though, when there is a slight delay between the time the words are spoken and the time they are heard, an effect scientists call delayed auditory feedback. When that happens, people tend to get discombobulated and stop speaking, and that’s the whole idea behind the SpeechJammer. It’s basically just a gun that causes someone speaking to hear their own words delayed by 0.2 seconds.

To make that happen, the two attached a directional microphone and speaker to a box that also holds a laser pointer, a distance sensor and, of course, a computer board to compute the delay time based on the distance to the person talking. To make it work, the person using it points the gun at the person talking, using the laser pointer as a guide, then pulls the trigger. It works at distances up to a hundred feet.
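
The arithmetic behind the delay is simple. The sketch below shows one plausible way the distance reading could be used: subtract the sound’s round-trip travel time from the 0.2-second target so the talker hears their own words about 0.2 s late regardless of range. The sample rate, this particular use of the distance, and the buffering code are assumptions for illustration, not the published device’s firmware.

```python
# Illustrative delayed-auditory-feedback sketch, not the device's firmware:
# buffer the captured speech so the talker hears it ~0.2 s late, with a
# plausible (assumed) correction for the sound's round-trip travel time.
SPEED_OF_SOUND = 343.0    # m/s
TARGET_DELAY = 0.2        # seconds of perceived feedback delay
SAMPLE_RATE = 16000       # assumed audio sample rate

def playback_delay_samples(distance_m):
    """Samples to buffer before replaying the captured speech."""
    acoustic_round_trip = 2 * distance_m / SPEED_OF_SOUND   # voice to mic + speaker back
    device_delay = max(TARGET_DELAY - acoustic_round_trip, 0.0)
    return int(device_delay * SAMPLE_RATE)

def jam(samples, distance_m):
    """Return a delayed copy of the incoming audio for playback."""
    d = playback_delay_samples(distance_m)
    return [0.0] * d + list(samples[:max(len(samples) - d, 0)])

print(playback_delay_samples(10.0))   # roughly 0.14 s of buffering at 10 m
```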

The two say they have no plans to market the device, but because the technology is so simple, it’s doubtful they could patent it anyway.

http://www.physorg.com/news/2012-03-speechjammer-gun-quash-human-utterances.html