Scientists at Duke University say the world is on the brink of its sixth great extinction

Is the end near? Scientists at Duke University say the world is on the brink of its sixth great extinction, since certain species of plants and animals are now dying out at least 1,000 times faster than they did before humans came into existence.

The study, published Thursday in the journal Science, measured the rate at which species are disappearing from Earth. In 1995, the researchers estimated the pre-human background rate at roughly one extinction per million species per year. Now, the rate is about 100 to 1,000 extinctions per million species per year.
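
To make that comparison concrete, here is a minimal sketch of the arithmetic, assuming the rates are expressed in the standard unit of extinctions per million species per year (E/MSY); the figures are illustrative of the ranges reported, not values taken from the paper itself.

```python
# Illustrative comparison of background vs. current extinction rates.
# Unit assumed: extinctions per million species per year (E/MSY).

background_rate = 1.0                                 # approximate pre-human background rate
current_rate_low, current_rate_high = 100.0, 1000.0   # reported modern range

print(f"Current rate is {current_rate_low / background_rate:.0f}x to "
      f"{current_rate_high / background_rate:.0f}x the background rate")

# For a hypothetical pool of 2 million catalogued species, that range implies:
species_millions = 2.0
print(f"Roughly {current_rate_low * species_millions:.0f}-"
      f"{current_rate_high * species_millions:.0f} extinctions per year")
```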

Stuart Pimm, the study’s lead author, said habitat loss is mostly to blame for the increasing death rates. As humans continue to alter and destroy more land, animals and plants are increasingly being displaced from their natural habitats. Climate change is also a factor, he added.

“Whether we avoid it or not will depend on our actions,” Pimm warned.

Thanks to Da Brayn for bringing this to the attention of the It’s Interesting community.

http://theweek.com/article/index/262400/speedreads-earth-is-nearing-sixth-great-extinction-alarming-survey-says#axzz33DjJqxZg

Researchers explore connecting the brain to machines


Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a picture of a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.
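
The passage describes decoding motor-cortex activity into cursor movement. As a rough illustration only (the channel count, weights, and the choice of a linear decoder are assumptions for this sketch, not details of the lab's actual system), a simple rate-based decoder might look like this:

```python
import numpy as np

# Hypothetical sketch: map per-electrode firing rates to a 2-D cursor velocity.
# A real BCI decoder is calibrated per subject; all numbers here are invented.

rng = np.random.default_rng(0)
n_channels = 96                                        # assumed electrode count
W = rng.normal(scale=0.1, size=(2, n_channels))        # decoding weights (would be fit from data)
baseline = rng.uniform(5, 20, size=n_channels)         # baseline firing rates, spikes/s

def decode_velocity(firing_rates, gain=0.05):
    """Turn a vector of firing rates into an (x, y) cursor velocity."""
    modulation = firing_rates - baseline               # deviation from resting activity
    return gain * W @ modulation

cursor = np.zeros(2)
for _ in range(100):                                   # 100 decoding ticks
    rates = baseline + rng.normal(scale=3.0, size=n_channels)
    cursor += decode_velocity(rates)
print("final cursor position:", cursor)
```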

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
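
The trial-and-error process described above can be mimicked in a toy simulation: a "brain" that does not know how its activity maps to the arm simply perturbs its output, keeps whatever reduces the error it sees, and converges on a working pattern. This is a schematic analogy of the feedback loop, not a model of real neural learning, and every number in it is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed, unknown mapping from neural activity (10 channels) to arm position (2-D),
# standing in for "whichever neurons the electrodes happen to pick up".
M = rng.normal(size=(2, 10))
target = np.array([1.0, -0.5])            # where the subject wants the arm to go

activity = rng.normal(size=10)            # initial, uninformed activity pattern
best_err = np.linalg.norm(M @ activity - target)

for trial in range(5000):
    candidate = activity + rng.normal(scale=0.05, size=10)   # try a variation
    err = np.linalg.norm(M @ candidate - target)             # "visual feedback"
    if err < best_err:                                       # keep what works
        activity, best_err = candidate, err

print(f"error after training: {best_err:.4f}")   # approaches zero with feedback
```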

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?'” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifescience showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may eventually be able to detect and respond to more abstract types of mental activity.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines

Communication of thoughts between rats on different continents, connected via brain-to-brain interface

The world’s first brain-to-brain connection has given rats the power to communicate by thought alone.

“Many people thought it could never happen,” says Miguel Nicolelis at Duke University in Durham, North Carolina. Although monkeys have been able to control robots with their mind using brain-to-machine interfaces, work by Nicolelis’s team has, for the first time, demonstrated a direct interface between two brains – with the rats able to share both motor and sensory information.

The feat was achieved by first training rats to press one of two levers when an LED above that lever was lit. A correct action opened a hatch containing a drink of water. The rats were then split into two groups, designated as “encoders” and “decoders”.

An array of microelectrodes – each about one-hundredth the width of a human hair – was then implanted in the encoder rats’ primary motor cortex, an area of the brain that processes movement. The team used the implant to record the neuronal activity that occurs just before the rat made a decision in the lever task. They found that pressing the left lever produced a different pattern of activity from pressing the right lever, regardless of which was the correct action.

Next, the team recreated these patterns in decoder rats, using an implant in the same brain area that stimulates neurons rather than recording from them. The decoders received a few training sessions to prime them to pick the correct lever in response to the different patterns of stimulation.

The researchers then wired up the implants of an encoder and a decoder rat. The pair were given the same lever-press task again, but this time only the encoder rat saw the LEDs come on. Brain signals from the encoder rat were recorded just before it pressed the lever and transmitted to the decoder rat. The team found that the decoders, despite having no visual cue, pressed the correct lever between 60 and 72 per cent of the time.
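
As a schematic of the encoder-to-decoder pipeline described here (the template-matching step and every number below are assumptions made for illustration; the study used its own transformation of the recorded activity):

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons = 32

# Hypothetical firing-rate "templates" for left vs. right lever presses,
# standing in for the distinct patterns recorded from the encoder rat.
template_left = rng.uniform(5, 30, n_neurons)
template_right = rng.uniform(5, 30, n_neurons)

def classify(encoder_rates):
    """Decide which lever the encoder's activity most resembles."""
    d_left = np.linalg.norm(encoder_rates - template_left)
    d_right = np.linalg.norm(encoder_rates - template_right)
    return "left" if d_left < d_right else "right"

def stimulation_pattern(choice, pulses_left=10, pulses_right=40):
    """Convert the decision into a pulse count delivered to the decoder rat."""
    return pulses_left if choice == "left" else pulses_right

# One simulated trial: the encoder presses the left lever (noisy version of its template).
trial = template_left + rng.normal(scale=2.0, size=n_neurons)
choice = classify(trial)
print(choice, "->", stimulation_pattern(choice), "stimulation pulses")
```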

The rats’ ability to cooperate was reinforced by rewarding both rats if the communication resulted in a correct outcome. Such reinforcement led to the transmission of clearer signals, improving the rats’ success rate compared with cases where decoders were given a pre-recorded signal. This was a big surprise, says Nicolelis. “The encoder’s brain activity became more precise. This could have happened because the animal enhanced its attention during the performance of the next trial after a decoder error.”

If the decoders had not been primed to associate specific activity patterns with the left or right lever before being linked with an encoder, the only consequence would have been that it took a bit longer for them to learn the task while interacting with the encoder, says Nicolelis. “We simply primed the decoder so that it would get the gist of the task it had to perform.” In unpublished monkey experiments involving a similar task, the team did not need to prime the animals at all.

In a second experiment, rats were trained to explore a hole with their whiskers and indicate if it was narrow or wide by turning to the left or right. Pairs of rats were then connected as before, but this time the implants were placed in their primary somatosensory cortex, an area that processes touch. Decoder rats were able to indicate over 60 per cent of the time the width of a gap that only the encoder rats were exploring.

Finally, encoder rats were held still while their whiskers were stroked with metal bars. The researchers observed patterns of activity in the somatosensory cortex of the decoder rats that matched that of the encoder rats, even though the whiskers of the decoder rats had not been touched.

Pairs of rats were even able to cooperate across continents using cyberspace. Brain signals from an encoder rat at the Edmond and Lily Safra International Institute of Neuroscience of Natal in Brazil were sent to a decoder in Nicolelis’s lab in North Carolina via the internet. Though there was a slight transmission delay, the decoder rat still performed with an accuracy similar to that of rats tested in closer proximity to their encoders.

Christopher James at the University of Warwick, UK, who works on brain-to-machine interfaces for prostheses, says the work is a “wake-up call” for people who haven’t caught up with recent advances in brain research.

We have the technology to create implants for long-term use, he says. What is missing, though, is a full understanding of the brain processes involved. In this case, Nicolelis’s team is “blasting a relatively large area of the brain with a signal they’re not sure is 100 per cent correct,” he says.

That’s because the exact information being communicated between the rats’ brains is not clear. The brain activity of the encoders cannot be transferred precisely to the decoders because that would require matching the patterns neuron for neuron, which is not currently possible. Instead, the two patterns are closely related in terms of their frequency and spatial representation.

“We are still using a sledgehammer to crack a walnut,” says James. “They’re not hearing the voice of God.” But the rats are certainly sending and receiving more than a binary signal that simply points to one or other lever, he says. “I think it will be possible one day to transfer an abstract thought.”

The decoders have to interpret relatively complex brain patterns, says Marshall Shuler at Johns Hopkins University in Baltimore, Maryland. The animals learn the relevance of these new patterns and their brains adapt to the signals. “But the decoders are probably not having the same quality of experience as the encoders,” he says.

Patrick Degenaar at Newcastle University in the UK says that the military might one day be able to deploy genetically modified insects or small mammals that are controlled by the brain signals of a remote human operator. These would be drones that could feed themselves, he says, and could be used for surveillance or even assassination missions. “You’d probably need a flying bug to get near the head [of someone to be targeted],” he says.

Nicolelis is most excited about the future of multiple networked brains. He is currently trialling the implants in monkeys, getting them to work together telepathically to complete a task. For example, each monkey might only have access to part of the information needed to make the right decision in a game. Several monkeys would then need to communicate with each other in order to successfully complete the task.

“In the distant future we may be able to communicate via a brain-net,” says Nicolelis. “I would be very glad if the brain-net my great grandchildren used was due to their great grandfather’s work.”

Journal reference: Scientific Reports, DOI: 10.1038/srep01319

Lab rats given a 6th sense through a brain-machine interface


Duke University researchers have effectively given laboratory rats a “sixth sense” using an implant in their brains.

An experimental device allowed the rats to “touch” infrared light – which is normally invisible to them.

The team at Duke University fitted the rats with an infrared detector wired up to microscopic electrodes that were implanted in the part of their brains that processes tactile information.

The results of the study were published in Nature Communications journal.

The researchers say that, in theory at least, a human with a damaged visual cortex might be able to regain sight through a device implanted in another part of the brain.

Lead author Miguel Nicolelis said this was the first time a brain-machine interface has augmented a sense in adult animals.

The experiment also shows that a new sensory input can be interpreted by a region of the brain that normally does something else (without having to “hijack” the function of that brain region).

“We could create devices sensitive to any physical energy,” said Prof Nicolelis, from the Duke University Medical Center in Durham, North Carolina.

“It could be magnetic fields, radio waves, or ultrasound. We chose infrared initially because it didn’t interfere with our electrophysiological recordings.”

His colleague Eric Thomson commented: “The philosophy of the field of brain-machine interfaces has until now been to attempt to restore a motor function lost to lesion or damage of the central nervous system.

“This is the first paper in which a neuroprosthetic device was used to augment function – literally enabling a normal animal to acquire a sixth sense.”

In their experiments, the researchers used a test chamber with three light sources that could be switched on randomly.

They taught the rats to choose the active light source by poking their noses into a port to receive a sip of water as a reward. They then implanted the microelectrodes, each about a tenth the diameter of a human hair, into the animals’ brains. These electrodes were attached to the infrared detectors.
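
A minimal sketch of the idea of coupling a sensor to cortical stimulation, with all parameters invented for illustration (the published device had its own calibration): the stronger the infrared reading at the detector, the higher the pulse rate delivered to the touch-processing cortex.

```python
def stimulation_frequency(ir_reading, threshold=0.1, max_hz=300):
    """Map a normalized infrared detector reading (0-1) to a pulse rate in Hz.

    Below the threshold no stimulation is delivered; above it, the pulse
    rate scales linearly up to max_hz. All values are illustrative.
    """
    if ir_reading < threshold:
        return 0.0
    return max_hz * (ir_reading - threshold) / (1.0 - threshold)

# Example: readings as the rat sweeps its head past the active light source.
for reading in (0.05, 0.2, 0.6, 0.95):
    print(f"IR={reading:.2f} -> {stimulation_frequency(reading):.0f} Hz")
```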

The scientists then returned the animals to the test chamber. At first, the rats scratched at their faces, indicating that they were interpreting the lights as touch. But after a month the animals learned to associate the signal in their brains with the infrared source.

They began to search actively for the signal, eventually achieving perfect scores in tracking and identifying the correct location of the invisible light source.

One key finding was that enlisting the touch cortex to detect infrared light did not reduce its ability to process touch signals.

http://www.bbc.co.uk/news/science-environment-21459745

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Multi-Tasking Humpback Whales Sing While Feeding, Not Just Breeding


Humpback whales are famed for their songs, most often heard in breeding season when males are competing to mate with females. In recent years, however, reports of whale songs occurring outside traditional breeding grounds have become more common. A new study may help explain why.

Humpbacks sing for their supper — or at least, they sing while they hunt for it.

The research, published December 19 in PLoS ONE, uncovers the whales’ little-understood acoustic behavior while foraging.

It also reveals a previously unknown behavioral flexibility on their part that allows the endangered marine mammals to balance their need to feed continuously with the competing need to exhibit mating behaviors such as song displays.

“They need to feed. They need to breed. So essentially, they multi-task,” said study co-author Ari S. Friedlaender, research scientist at Duke University’s Nicholas School of the Environment. “This suggests the widely held behavioral dichotomy of breeding-versus-feeding for this species is too simplistic.”

Researchers from the U.S. Naval Postgraduate School, the University of California-Santa Barbara and Duke tracked 10 humpback whales in coastal waters along the Western Antarctic Peninsula in May and June 2010. The peninsula’s bays and fjords are important late-season feeding grounds where humpbacks feast on krill each austral autumn before migrating to warm-water calving grounds thousands of miles away.

Using non-invasive multi-sensor tags that attach to the whales with suction cups, the researchers recorded the whales’ underwater movements and vocalizations as they foraged.

All 10 of the tags picked up the sounds of background songs, and in two cases, they recorded intense and continuous whale singing with a level of organization and structure approaching that of a typical breeding-ground mating display. The song bouts sometimes lasted close to an hour and in one case occurred even while sensors indicated the whale, or a close companion, was diving and lunging for food.
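
One way to see how such tag records can reveal singing during foraging is to align the acoustic record with the movement sensors and flag intervals where song bouts and feeding lunges overlap. The sketch below assumes pre-labelled intervals purely for illustration; it is not a description of the study's actual detection methods.

```python
# Hypothetical, simplified illustration: find minutes in a tag record where
# detected song bouts and feeding lunges (from the movement sensors) overlap.

song_minutes = {12, 13, 14, 15, 40, 41, 42}     # minutes with song detected
lunge_minutes = {14, 15, 16, 30, 31, 41}        # minutes with feeding lunges

overlap = sorted(song_minutes & lunge_minutes)
print("singing while lunge-feeding at minutes:", overlap)
```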

Humpbacks sing most frequently during breeding season, but are known to sing on other occasions too, such as while escorting mother-calf pairs along migratory routes. Though the reasons they sing are still not thoroughly understood, one distinction is clear: Songs sung in breeding grounds are quite different in duration, phrase type and theme structure from those heard at other locations and times.

“The fact that we heard mating displays being sung in late-season foraging grounds off the coast of Antarctica suggests humpback whale behavior may be more closely tied to the time of year than to physical locations. This may signify an ability to engage in breeding activities outside their traditional warm-water breeding grounds,” said Douglas P. Nowacek, Repass-Rogers University Associate Professor of Conservation Technology at Duke’s Nicholas School.

As the region’s climate warms, sea ice cover around the Western Antarctic Peninsula has thinned in recent years and the water stays open later in the foraging season, he explained. Whales are remaining there longer into austral autumn to feast on krill instead of heading off to warm-water breeding grounds, as many scientists previously believed.

“Mating may now be taking place at higher latitudes,” Nowacek said. “This merits further study.”

Alison K. Stimpert, research associate in oceanography at the Naval Postgraduate School, was lead author of the new study. Lindsey E. Peavey, a PhD student at the University of California at Santa Barbara’s Bren School of Environmental Science and Management, co-authored it with Stimpert, Friedlaender and Nowacek.

Journal Reference:

1. Stimpert AK, Peavey LE, Friedlaender AS, Nowacek DP. Humpback Whale Song and Foraging Behavior on an Antarctic Feeding Ground. PLoS ONE, 2012. DOI: 10.1371/journal.pone.0051214

http://www.sciencedaily.com/releases/2012/12/121219174156.htm

Duke University scientists create Harry Potter invisibility cloak

Scientists seem to have unlocked another technology that was only available in fantasy movies. Physicists at Duke University have announced that they have successfully cloaked an object with “perfect” invisibility, straight out of Harry Potter.

In 2006, David Smith and his colleagues developed a theory called “transformation optics”. The theory is based on redirecting electromagnetic waves around an object, making it invisible, according to ScienceNOW.
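
For reference, the standard textbook coordinate transformation behind a cylindrical cloak (from the original transformation-optics proposals, and not necessarily the exact design used in the Duke experiment) compresses the disc r ≤ R₂ into the annulus R₁ ≤ r′ ≤ R₂, which in turn dictates the material parameters the metamaterial must realize:

```latex
r' = R_1 + \frac{R_2 - R_1}{R_2}\, r, \qquad \theta' = \theta, \qquad z' = z, \qquad 0 \le r \le R_2

\varepsilon_{r'} = \mu_{r'} = \frac{r' - R_1}{r'}, \qquad
\varepsilon_{\theta'} = \mu_{\theta'} = \frac{r'}{r' - R_1}, \qquad
\varepsilon_{z'} = \mu_{z'} = \left(\frac{R_2}{R_2 - R_1}\right)^{2} \frac{r' - R_1}{r'}
```

Waves entering from outside R₂ are guided around the interior region r′ < R₁, where a hidden object sits; imperfections in realizing these parameters are what produced the reflections that earlier cloaks suffered from.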

All previous attempts at testing the theory provided only partial invisibility. It wasn’t until Dr. Smith began experimenting with metamaterials, which are engineered to bend light and other radiation around them, that the team was able to create a Harry Potter-style invisibility cloak.

Graduate student Nathan Landy says all earlier versions of a Harry Potter cloak suffered from reflected light. Landy explained to Phys.org that “it was much like reflections seen on clear glass. The viewer can see through the glass just fine, but at the same time the viewer is aware the glass is present due to light reflected from the surface of the glass.”

The new cloak got around it by reworking the materials.

“Landy’s new microwave cloak is naturally divided into four quadrants, each of which have voids or blind spots at their intersections and corners with each other,” explains io9. “Thus, to avoid the reflectivity problem, Landy was able to correct for it by shifting each strip so that it met its mirror image at each interface.”

Smith said of the research:

“This to our knowledge is the first cloak that really addresses getting the transformation exactly right to get you that perfect invisibility.”