Bear eats monkey when forced to ride bike at Shanghai Wild Animal Park

A monkey was mauled by a bear after a disturbing circus stunt went wrong.

A video has emerged online of two monkeys and a black bear being forced to ride bicycles around a track in front of a large crowd.

After two laps of the track, one of the monkeys crashes, and the bear then attacks it as it lies trapped under the bicycle.

The video is believed to have been shot at the Shanghai Wild Animal Park, in China, which has hit the headlines in the past for its ‘Wild Animal Olympics’.

In the video, the audience can be heard cheering and laughing as the animals are sent riding around the small arena.

Circus workers holding sticks push the small bikes off, but after a few seconds one monkey and the bear crash.

Staff can be seen desperately trying to force the bear off as it grabs the small monkey in its mouth.

At one stage three workers, dressed in brightly coloured costumes, try to wrestle the bear away, while another leads the second monkey away.

Campaign group Animals Asia said it has previously documented cheetahs, lions, tigers, bears, chimpanzees and an elephant being forced to perform in the ‘Wild Animal Olympics’.

It is not clear when the latest video was taken but Shanghai Wild Animal Park said in 2006 that the Olympic event had been scrapped following complaints and ‘out of consideration for the safety of our visitors.’

Stunts in the show had included bears boxing one another and riding bicycles, kangaroos boxing humans, and monkeys lifting weights.

Visitors to the park can also pay to have their picture taken with the big cats and other animals.

Animals Asia said some of the creatures had also been declawed.

China Tour Online’s website said the park ‘offers animal performances, showing the charm and skill of the animals and their gift in performing.’

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Read more: http://www.dailymail.co.uk/news/article-2320745/Bear-forced-ride-bike-sick-circus-stunt-crashes-mauls-monkey-large-crowd.html#ixzz2T1OBSdWz

Researchers explore connecting the brain to machines


Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them into the movement of the cursor on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the cursor as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.
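
A toy sketch of that translation step, assuming a fixed linear decoder (an illustration of the general idea, not the ASU lab’s actual system; the array size and weights here are invented):

```python
import numpy as np

# Illustrative only: each electrode contributes a spike count per time
# bin, and a linear map learned beforehand turns those counts into a
# 2-D cursor velocity, which is integrated into a position.
rng = np.random.default_rng(0)

n_electrodes = 96                              # assumed array size
W = rng.normal(size=(2, n_electrodes)) * 0.01  # hypothetical learned weights

def decode_velocity(spike_counts):
    """Map one time bin of spike counts to a 2-D cursor velocity."""
    return W @ spike_counts

cursor = np.zeros(2)
for _ in range(100):                           # 100 time bins of "thought"
    counts = rng.poisson(5.0, n_electrodes)    # stand-in neural data
    cursor += decode_velocity(counts)          # velocity -> position
print("final cursor position:", cursor)
```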

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting healthy human capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
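
That trial-and-error loop is simple enough to simulate. The sketch below is purely illustrative: a stand-in "brain" perturbs its firing pattern at random against an arbitrary fixed decoder and keeps whatever brings the decoded output closer to the intended movement.

```python
import numpy as np

# Purely illustrative: the decoder (the "electrodes") is fixed and
# unknown to the stand-in "brain", which adapts by trial and error.
rng = np.random.default_rng(1)
decoder = rng.normal(size=(2, 20))
target = np.array([1.0, 0.0])       # intended movement: straight right

activity = rng.random(20)           # initial firing pattern
error = np.linalg.norm(decoder @ activity - target)

for _ in range(5000):
    candidate = activity + rng.normal(scale=0.05, size=20)  # try a variation
    cand_error = np.linalg.norm(decoder @ candidate - target)
    if cand_error < error:          # feedback: keep what works
        activity, error = candidate, cand_error

print(f"error after practice: {error:.4f}")  # shrinks toward zero
```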

That’s the principle behind the rapid progress in brain-computer interfaces and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?'” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifesciences showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”
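
Schalk’s analogy is easy to put into numbers. In this illustrative sketch, thousands of independent "voices" are summed with one weak rhythm they all share; no single voice can be recovered from the sum, but the shared rhythm (the crowd doing the wave) dominates the average:

```python
import numpy as np

# Sum thousands of independent "voices" plus one weak shared rhythm.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1000)                    # 1 s, ~1 kHz
wave = 0.1 * np.sin(2 * np.pi * 10 * t)        # weak shared 10 Hz rhythm

voices = rng.normal(size=(5000, t.size))       # independent chatter
scalp = (voices + wave).mean(axis=0)           # what the electrode sees

# The shared rhythm dominates the spectrum of the averaged signal,
# even though no individual voice is recoverable from it.
spectrum = np.abs(np.fft.rfft(scalp))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print("dominant frequency:", round(freqs[spectrum[1:].argmax() + 1]), "Hz")  # ~10
```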

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may one day be able to detect and respond to more abstract types of mental activity.
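
At heart, the Cornell result is a classification problem: given a pattern of brain activity, predict which of four labels it belongs to. A generic sketch of that setup (not the study’s actual method, and trained on fabricated data with a planted signal):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Fabricated data with a planted signal; labels say which of four
# people a subject was "thinking about" on each trial.
rng = np.random.default_rng(3)
n_trials, n_features, n_people = 200, 50, 4
X = rng.normal(size=(n_trials, n_features))  # fake activity features
y = rng.integers(0, n_people, n_trials)      # imagined person per trial
X[np.arange(n_trials), y] += 2.0             # plant a decodable signal

clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))  # well above 0.25 chance
```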

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines

Stanford scientists advance thought-controlled computer cursor movement

Stanford researchers have designed the fastest, most accurate mathematical algorithm yet for brain-implantable prosthetic systems that can help disabled people maneuver computer cursors with their thoughts. The algorithm’s speed, accuracy and natural movement approach those of a real arm.

On each side of the screen, a monkey moves a cursor with its thoughts, using the cursor to make contact with the colored ball. On the left, the monkey’s thoughts are decoded with a mathematical algorithm known as Velocity; on the right, with a new algorithm known as ReFIT, with better results. The ReFIT system helps the monkey hit 21 targets in 21 seconds, as opposed to just 10 with the older system.

When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement activate, as if trying to make the immobile limb work again.

Despite a neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional.

In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons.

After those signals have been decoded through a mathematical algorithm, they can be used to control the movement of a cursor on a computer screen – in essence, the cursor is controlled by thoughts.

The work is part of a field known as neural prosthetics.

A team of Stanford researchers has now developed a new algorithm, known as ReFIT, that vastly improves the speed and accuracy of neural prosthetics that control computer cursors. The results were published Nov. 18 in the journal Nature Neuroscience in a paper by Krishna Shenoy, a professor of electrical engineering, bioengineering and neurobiology at Stanford, and a team led by research associate Dr. Vikash Gilja and bioengineering doctoral candidate Paul Nuyujukian.

In side-by-side demonstrations with rhesus monkeys, cursors controlled by the new algorithm doubled the performance of existing systems and approached the performance of the monkey’s actual arm in controlling the cursor. Better yet, more than four years after implantation, the new system is still going strong, while previous systems have seen a steady decline in performance over time.

“These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford,” said Shenoy.

The system relies on a sensor implanted into the brain, which records “action potentials” in neural activity from an array of electrode sensors and sends data to a computer. The frequency with which action potentials are generated provides the computer important information about the direction and speed of the user’s intended movement.
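
A standard textbook picture of how a firing rate can carry direction and speed is "cosine tuning": each neuron fires fastest for one preferred direction, with the modulation scaling with speed. This is a generic illustration, not the paper’s own encoding model:

```python
import numpy as np

def firing_rate(move_dir, speed, preferred_dir, baseline=10.0, gain=8.0):
    """Spikes/s of one neuron under a cosine-tuning model (angles in radians)."""
    return baseline + gain * speed * np.cos(move_dir - preferred_dir)

# A neuron that prefers rightward movement:
print(firing_rate(0.0, 1.0, 0.0))    # 18.0 spikes/s for a rightward move
print(firing_rate(np.pi, 1.0, 0.0))  # 2.0 spikes/s for a leftward move
```

With many such neurons, each preferring a different direction, a computer can invert this relationship to estimate the intended direction and speed from the recorded rates.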

The ReFIT algorithm that decodes these signals represents a departure from earlier models. In most neural prosthetics research, scientists have recorded brain activity while the subject moves or imagines moving an arm, analyzing the data after the fact. “Quite a bit of the work in neural prosthetics has focused on this sort of offline reconstruction,” said Gilja, the first author of the paper.

The Stanford team wanted to understand how the system worked “online,” under closed-loop control conditions in which the computer analyzes and implements visual feedback gathered in real time as the monkey neurally controls the cursor toward an onscreen target.

The system is able to make adjustments on the fly when guiding the cursor to a target, just as a hand and eye work in tandem to move a mouse cursor onto an icon on a computer desktop.

If the cursor were straying too far to the left, for instance, the user would likely adjust the imagined movements to redirect it to the right. The team designed the system to learn from the user’s corrective movements, allowing the cursor to move more precisely than it could in earlier prosthetics.

To test the new system, the team gave monkeys the task of mentally directing a cursor to a target – an onscreen dot – and holding the cursor there for half a second. ReFIT performed vastly better than previous technology in terms of both speed and accuracy.

The path of the cursor from the starting point to the target was straighter, and it reached the target twice as quickly as with earlier systems, achieving 75 to 85 percent of the speed of the monkey’s arm.

“This paper reports very exciting innovations in closed-loop decoding for brain-machine interfaces. These innovations should lead to a significant boost in the control of neuroprosthetic devices and increase the clinical viability of this technology,” said Jose Carmena, an associate professor of electrical engineering and neuroscience at the University of California-Berkeley.

Critical to ReFIT’s time-to-target improvement was its superior ability to stop the cursor. While the old model’s cursor reached the target almost as fast as ReFIT, it often overshot the destination, requiring additional time and multiple passes to hold the target.

The key to this efficiency was in the step-by-step calculation that transforms electrical signals from the brain into movements of the cursor onscreen. The team had a unique way of “training” the algorithm about movement. When the monkey used his arm to move the cursor, the computer used signals from the implant to match the arm movements with neural activity.

Next, the monkey simply thought about moving the cursor, and the computer translated that neural activity into onscreen movement of the cursor. The team then used the monkey’s brain activity to refine their algorithm, increasing its accuracy.
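
The relabeling step at the heart of this training procedure can be sketched in a few lines. This is a simplified rendering of the intention-estimation idea (during training, the monkey is assumed to always intend to move straight toward the target), with the rest of the algorithm, including the Kalman-filter machinery, omitted:

```python
import numpy as np

def relabel_velocity(cursor_pos, target_pos, decoded_vel):
    """Redirect a decoded velocity at the target, preserving its speed."""
    to_target = target_pos - cursor_pos
    dist = np.linalg.norm(to_target)
    if dist < 1e-9:                  # already on target: the intent is to stop
        return np.zeros_like(decoded_vel)
    return np.linalg.norm(decoded_vel) * to_target / dist

# The relabeled (neural activity, intended velocity) pairs then serve as
# the training set for a second round of decoder fitting.
```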

The team introduced a second innovation in the way ReFIT encodes information about the position and velocity of the cursor. Gilja said that previous algorithms could interpret neural signals about either the cursor’s position or its velocity, but not both at once. ReFIT can do both, resulting in faster, cleaner movements of the cursor.
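
One common way to handle position and velocity together (the paper works in a Kalman-filter framework; this sketch is generic rather than its exact model) is to stack both into a single state vector whose dynamics say that position integrates velocity from one time bin to the next:

```python
import numpy as np

dt = 0.05                     # 50 ms time bins (an assumed value)
A = np.array([[1, 0, dt, 0],  # x  <- x + dt*vx
              [0, 1, 0, dt],  # y  <- y + dt*vy
              [0, 0, 1,  0],  # vx <- vx
              [0, 0, 0,  1]])

state = np.zeros(4)           # [x, y, vx, vy]
state = A @ state             # prediction step of a Kalman-style decoder
```

Because both quantities live in one state, neural evidence about velocity automatically sharpens the position estimate, and vice versa.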

Early research in neural prosthetics had the goal of understanding the brain and its systems more thoroughly, Gilja said, but he and his team wanted to build on this approach by taking a more pragmatic engineering perspective. “The core engineering goal is to achieve highest possible performance and robustness for a potential clinical device,” he said.

To create such a responsive system, the team decided to abandon one of the traditional methods in neural prosthetics.

Much of the existing research in this field has focused on differentiating among individual neurons in the brain. Importantly, such a fine-grained approach has allowed neuroscientists to create a detailed understanding of the individual neurons that control arm movement.

But the individual neuron approach has its drawbacks, Gilja said. “From an engineering perspective, the process of isolating single neurons is difficult, due to minute physical movements between the electrode and nearby neurons, making it error prone,” he said. ReFIT focuses on small groups of neurons instead of single neurons.

By abandoning the single-neuron approach, the team also reaped a surprising benefit: performance longevity. Neural implant systems that are fine-tuned to specific neurons degrade over time. It is a common belief in the field that after six months to a year they can no longer accurately interpret the brain’s intended movement. Gilja said the Stanford system is working very well more than four years later.

“Despite great progress in brain-computer interfaces to control the movement of devices such as prosthetic limbs, we’ve been left so far with halting, jerky, Etch-a-Sketch-like movements. Dr. Shenoy’s study is a big step toward clinically useful brain-machine technology that has faster, smoother, more natural movements,” said James Gnadt, a program director in Systems and Cognitive Neuroscience at the National Institute of Neurological Disorders and Stroke, part of the National Institutes of Health.

For the time being, the team has been focused on improving cursor movement rather than the creation of robotic limbs, but that is not out of the question, Gilja said. In the near term, precise, accurate control of a cursor is a simplified task with enormous value for people with paralysis.

“We think we have a good chance of giving them something very useful,” he said. The team is now translating these innovations to people with paralysis as part of a clinical trial.

This research was funded by the Christopher and Dana Reeve Paralysis Foundation, the National Science Foundation, National Defense Science and Engineering Graduate Fellowships, Stanford Graduate Fellowships, Defense Advanced Research Projects Agency (“Revolutionizing Prosthetics” and “REPAIR”) and the National Institutes of Health (NINDS-CRCNS and Director’s Pioneer Award).

Other contributing researchers include Cynthia Chestek, John Cunningham, Byron Yu, Joline Fan, Mark Churchland, Matthew Kaufman, Jonathan Kao and Stephen Ryu.

http://news.stanford.edu/news/2012/november/thought-control-cursor-111812.html

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

New monkey species discovered: Cercopithecus lomamiensis

Scientists are claiming they have discovered a new species of monkey living in the remote forests of the Democratic Republic of Congo — an animal well-known to local hunters but, until now, unknown to the outside world.

In a paper published Wednesday in the open-access journal PLoS ONE, the scientists describe the new species, which they call Cercopithecus lomamiensis, known locally as the Lesula, whose home is deep in central DR Congo’s Lomami forest basin. The scientists say it is only the second discovery of a new monkey species in 28 years.

In an age when so much of the earth’s surface has been photographed, digitized, and placed on a searchable map on the web, a discovery like this one by a group of American scientists seems a throwback to another time.

“We never expected to find a new species there,” says John Hart, the lead scientist of the project, “but the Lomami basin is a very large block that has had very little exploration by biologists.”

Hart says that the rigorous scientific process to determine the new species started with a piece of luck, strong field teams, and an unlikely field sighting in a small forest town.

“Our Congolese field teams were on a routine stop in Opala. It is the closest settlement of any kind to the area of forest we were working in,” says Hart.

The team came across a strange-looking monkey tethered to a post. It was the pet of Georgette, the daughter of the local school director.

She adopted the young monkey when its mother was killed by a hunter in the forest. Her father said it was a Lesula, well-known to hunters in that part of the forest. The field team took pictures and showed them to Hart.

“Right away I saw that this was something different. It looked a bit like a monkey from much further east, but the coloring was so different and the range was so different,” said Hart.

The monkey to the east is the semi-terrestrial owl-faced monkey. Based on the photos, Hart believed that their shape and size could be similar, but their morphology, or outward appearance, was very distinct.

The Lesula had strikingly large, almost human-like eyes, a pink face and a golden mane. Far to the east, across several large river systems, the Owl Face is aptly named. Its sunken eyes are set deep in a dark face, a white stripe running down from its brow to its mouth, like a line of chalk on a blackboard.

To a layman it looks like an open-and-shut case. But animals are often widely divergent within a species — humans are an obvious example — so Hart and his team needed science to prove their gut feeling.

“I got in touch with geneticists and anthropologists to get their advice. I knew it was important to have a collaborative team of experts,” says Hart.

The exhaustive study took three years.

Hart’s teams set up digital sound recorders in the forests to record the morning calls of the Owl Face and Lesula monkeys. They analyzed the ecology of the forest and the behavior of the shy, difficult-to-observe monkey.

Field teams collected Lesula specimens from hunters and from monkeys freshly killed by leopards and, once, by a crowned eagle (the field worker had to wait for the eagle to leave its perch, says Hart). The specimens were shipped to two research centers in the U.S., and the data were shared with labs across the country.

Christopher Gilbert, an anthropologist based at Hunter College in Manhattan, says the difference in appearance between the Lesula and the Owl Face was striking.

“After comparing the skins, we immediately concluded that this was probably something different from anything we had seen before,” says Gilbert, an expert in primate and monkey evolution.

Skulls of the Lesula and the Owl Face monkey were measured with calipers and digitally drawn in 3D. “We looked at the difference in shape and a number of landmarks in the skulls,” says Gilbert.

While the Owl Face and Lesula had similar-sized skulls, he says, the Lesula had significantly larger orbits and several other small, but statistically significant, differences in the hard anatomy of the skull.
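
The skull comparison boils down to a standard significance test. With made-up orbit measurements (the numbers below are hypothetical, not from the study), the kind of analysis described here looks like:

```python
import numpy as np
from scipy import stats

# Hypothetical orbit heights in millimeters; not the study's data.
lesula_orbits = np.array([26.1, 25.8, 26.5, 27.0, 26.3])
owlface_orbits = np.array([23.9, 24.2, 23.5, 24.0, 23.7])

t_stat, p_val = stats.ttest_ind(lesula_orbits, owlface_orbits, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")  # a small p-value: unlikely by chance
```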

The anatomical studies are backed up by genetics. Scientists at New York University and Florida Atlantic University were able to trace an ancient common ancestor. Scientists believe the monkeys evolved separately after a series of rivers separated their habitats.

“The clincher was that lab and field teams were able to document significant differences in conjunction with the genetics. The monkeys were different and have been different for a couple of million years. It demonstrates that there are places in the world that we do not know much about,” says Gilbert.

The Lesula’s range covers an area of about 6,500 square miles (17,000 square kilometers) between the Lomami and Tshuapa Rivers. Until recently, it was one of the Congo’s least biologically explored forest blocks.

Hart hopes that the announcement will bring a renewed effort to save central Africa’s pristine forests. Under threat from loggers, bushmeat hunters, and weak national governments, the forests are a potential well of important scientific discoveries and a linchpin of the earth’s biodiversity.

Teresa and John Hart’s Lukuru Foundation is working with the Congolese authorities to establish a national park in the Lomami basin before it loses its unique biodiversity.

“The challenge now is to make the Lesula an iconic species that carries the message for conservation of all of DR Congo’s endangered fauna,” says Hart.

And what of the first Lesula they found, Georgette’s pet? After he saw the pictures, Hart regularly sent a team to keep track of the young Lesula’s progress. At some point Georgette let the monkey roam free.

“It seems someone captured it,” says Hart. “It probably ended up in the cooking pot.”

He hopes that with proper protection, the Lesula, and the rest of Lomami’s incredible animal biodiversity, won’t suffer a similar fate.

Read more: http://www.abc15.com/dpp/news/national/scientists-discover-new-monkey#ixzz26JLXnMcp

Zookeeper licks monkey’s butt to help it defecate

Zhang Bangsheng, a 50-year-old zookeeper, used warm water to clean a small Francois’ leaf monkey’s buttocks, then began licking them, not stopping for over an hour, until the little monkey passed a single peanut. Only after the peanut was out did Zhang laugh with satisfaction.

The small Francois’ langur is only 3 months old and is the first to be born at the park in nearly 10 years. The Francois’ langur, a rare primate from Guangxi and Guizhou, is among China’s most protected animals. Because the infant is so precious, the zoo entrusted it to Zhang Bangsheng, a model worker and high-level expert, to care for and raise.

On the first day of the May 1st holiday, Zhang let the small Francois’ langur into the monkey exhibit for the first time to meet visitors and see more of the world. The next day, he discovered that the little monkey had indigestion and difficulty defecating, and he immediately became worried. Seeing peanut shells on the ground, he realized that visitors must have tossed peanuts to the small monkey, and that the toothless animal had swallowed one whole. If it could not pass the peanut quickly, the little monkey’s life would be in danger.

Because the monkey was so small, it was not safe to give it medicine to make it defecate. The only way was to lick its butt, prompting it to pass the peanut.

http://bossip.com/581209/did-you-hear-the-one-about-the-zookeeper-that-licked-a-monkeys-butt-to-make-it-poo/