Apes have mid-life crises

Too bad chimpanzees can’t buy sports cars. New research says it’s not just humans who go through midlife crises: Chimps and orangutans also experience a dip in happiness around the middle of their lives.

“There may be different things going on at the surface, but underneath it all, there’s something common in all three species that’s leading to this,” said study leader Alexander Weiss, a primate psychologist at the University of Edinburgh in Scotland.

The study team asked longtime caretakers of more than 500 chimpanzees and orangutans at zoos in five countries to fill out a questionnaire about the well-being of each animal they work with, including overall mood, how much the animals seemed to enjoy social interactions, and how successful they were in achieving goals (such as obtaining a desired item or spot within their enclosure).

The survey even asked the humans to imagine themselves as the animal and rate how happy they’d be.

When Weiss’s team plotted the results on a graph, they saw a familiar curve, bottoming out in the middle of the animals’ lives and rising again in old age. It’s the same U-shape that has shown up in several studies about age and happiness in people.
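The U-shape the team saw is the kind of pattern a quadratic fit makes visible: well-being falls, bottoms out, then rises. A toy sketch with invented numbers (the study's actual scores are not reproduced here) shows how such a fit locates the trough:

```python
import numpy as np

# Hypothetical well-being scores by age (illustrative only, not the study's data).
ages = np.array([5, 10, 15, 20, 25, 30, 35, 40, 45, 50])
wellbeing = np.array([7.0, 6.5, 6.0, 5.4, 5.0, 4.8, 5.1, 5.6, 6.2, 6.9])

# Fit a quadratic: a positive leading coefficient means the curve opens upward
# (the U-shape), and the parabola's vertex gives the age of lowest well-being.
a, b, c = np.polyfit(ages, wellbeing, 2)
trough_age = -b / (2 * a)

print(a > 0)       # True means the fitted curve is U-shaped
print(trough_age)  # age at the bottom of the curve
```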

“When you look at worldwide data, you see this U-shape,” said National Geographic Fellow Dan Buettner, author of Thrive: Finding Happiness the Blue Zones Way.

“It’s different for every country, but it’s usually somewhere between age 45 and 55 that you hit the bottom of the curve, and it continues to go up with age. You see centenarians in good health reporting higher well-being than teenagers.”

(Take Buettner’s True Happiness Test.)

Social and economic hypotheses may partly explain this happiness curve in human lifetimes: Maybe it’s tied to adjusting expectations, abandoning regret, or just getting more stuff as we grow older. But Weiss suspects there may be something more primal going on.

“We’re saying, take a step back and look at the big picture: Is there any evidence that there’s an evolutionary basis underlying this?” said Weiss, whose study was published today in the journal Proceedings of the National Academy of Sciences.

“Knowing that a similar phenomenon exists in human and nonhuman primates opens up the realm of possible explanations.”

Although the stereotype of a midlife crisis is generally negative—feelings of depression or discontentment with one’s life and where it’s headed—Weiss believes such ennui may have an evolutionary upside.

By the middle of one’s life, humans and apes often have access to more resources than when they were younger, which could make it easier to achieve goals. Feelings of discontentment may be nature’s way of motivating us to “strike while the iron is hot,” said Weiss.

“It may feel lousy, but your brain could be tricking you into improving your circumstances and situation, signaling you to get up and really start pushing while you’re absolutely at your prime,” he said. “And I think that’s a really powerful and positive message.”

Knowing that a midlife dip in happiness is a natural—and temporary—part of life could make it easier for humans to cope with the experience, Weiss said. It could also help caretakers improve captive apes’ quality of life, by identifying ages at which the animals might benefit from extra attention or enrichment.

(See pictures of places where people are happiest.)

“I don’t think this totally subsumes other explanations for age-related changes in happiness, but it adds another layer,” Weiss said.

Weiss has previously studied the correlation between personality and happiness in both chimpanzees and humans, and plans to look next at the impact of factors like sex and social groupings.

“I hope this raises awareness of all that we can learn by looking at our closest living animal relatives.”

http://news.nationalgeographic.com/news/2012/11/121119-apes-happiness-midlife-crises-science-animals/

The mannequin that spies on you

Mannequins in fashion boutiques are now being fitted with secret cameras to ‘spy’ on shoppers’ buying habits.

Benetton is among the High Street fashion chains to have deployed the dummies equipped with technology adapted from security systems used to identify criminals at airports.

From the outside, the $3,200 (£2,009) EyeSee dummy looks like any other mannequin, but behind its blank gaze it hides a camera feeding images into facial recognition software that logs the age, gender and race of shoppers.

This information is fed into a computer and is ‘aggregated’ to offer retailers using the system statistical and contextual information they can use to develop their marketing strategies.
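The 'aggregation' step is, in essence, event logging plus grouping: individual sightings are rolled up into counts by time and demographic. A minimal sketch of that idea (the event fields and values are assumptions for illustration, not Almax's actual data schema):

```python
from collections import Counter
from datetime import datetime

# Hypothetical detection events, as recognition software might log them.
events = [
    {"time": datetime(2012, 11, 20, 18, 25), "gender": "male",   "age_band": "adult"},
    {"time": datetime(2012, 11, 20, 18, 40), "gender": "female", "age_band": "adult"},
    {"time": datetime(2012, 11, 20, 18, 55), "gender": "female", "age_band": "child"},
    {"time": datetime(2012, 11, 20, 19, 10), "gender": "male",   "age_band": "adult"},
]

# Roll individual sightings into per-hour demographic counts: the retailer
# learns who walks past and when, without any image being stored.
by_hour = Counter((e["time"].hour, e["gender"]) for e in events)

print(by_hour[(18, "female")])  # prints 2
```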

Its makers boast: ‘From now on you can know how many people enter the store, record what time there is a greater influx of customers (and which type) and see if some areas risk to be overcrowded.’

However, privacy campaigners have denounced the system as ‘creepy’ and said that such surveillance is an instance of profit trumping privacy.

The device is marketed by Italian mannequin maker Almax and has already spurred shops into adjusting window displays, floor layouts and promotions, Bloomberg reported.

With growth slowing in the luxury goods industry, the technology taps into retailers’ desperation to personalise their offers to reach increasingly picky customers.

Although video profiling of customers is not new, Almax claims its offering is better at providing data because it stands at eye level with customers, who are more likely to look directly at the mannequins.

The video surveillance mannequins have been on sale for almost a year, and are already being used in three European countries and in the U.S.

Almax claims information from the devices led one outlet to adjust window displays after they found that men shopping in the first two days of a sale spent more than women, while another introduced a children’s line after the dummy showed youngsters made up more than half its afternoon traffic.

A third retailer placed Chinese-speaking staff by a particular entrance after it found a third of visitors using that door after 4pm were Asian.

Almax chief executive Max Catanese refused to name which retailers were using the new technology, telling Bloomberg that confidentiality agreements meant he could not disclose the names of clients.

But he did reveal that five companies – among them leading fashion brands – are using ‘a few dozen’ of the mannequins, with orders for at least that many more.

Almax is now hoping to update the technology to allow the mannequins – and by extension the retailers who operate them – to listen in on what customers are saying about the clothes on display.

Mr Catanese told Bloomberg the company also plans to add screens next to the dummies to prompt passers-by about products that fit their profile, similar to the way online retailers use cookies to personalise web browsing.

Almax insists that its system does not invade the privacy of shoppers since the camera inside the mannequin is ‘blind’, meaning that it does not record the images of passers-by, instead merely collecting data about them.

In an emailed statement, Mr Catanese told MailOnline: ‘Let’s say I pass in front of the mannequin. Nobody will know that “Max Catanese” passed in front of it.

‘The retailer will have the information that a male adult Caucasian passed in front of the mannequin at 6:25pm and spent 3 minutes in front of it. No sensible/private data, nor image is collected.

‘Different is the case if a place (shop, department store, etc.) is already covered by security cameras (by the way, basically almost every retailer in the world today).

‘In those cases we could even provide the regular camera as the data and customers images are already collected in the store which are authorised to do so.

‘In any case, just to avoid questions, so far we only offer the version with blind camera.’

Nevertheless, privacy groups are concerned about the roll-out of the technology. Emma Carr, deputy director of civil liberties campaign group Big Brother Watch, said: ‘Keeping cameras hidden in a mannequin is nothing short of creepy.

‘The use of covert surveillance technology by shops, in order to provide a personalised service, seems totally disproportionate.

‘The fact that the cameras are hidden suggests that shops are fully aware that many customers would object to this kind of monitoring.

‘It is not only essential that customers are fully informed that they are being watched, but that they also have real choice of service and on what terms it is offered.

‘Without this transparency, shops cannot be completely sure that their customers even want this level of personalised service.

‘This is another example of how the public are increasingly being monitored by retailers without ever being asked for their permission. Profit trumps privacy yet again.’

Read more: http://www.dailymail.co.uk/sciencetech/article-2235848/The-creepy-mannequin-stares-Fashion-retailers-adapt-airport-security-technology-profile-customers.html#ixzz2CsSISqiB

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Stanford scientists advance thought-control computer cursor movement

Stanford researchers have designed the fastest, most accurate mathematical algorithm yet for brain-implantable prosthetic systems that can help disabled people maneuver computer cursors with their thoughts. The algorithm’s speed, accuracy and natural movement approach those of a real arm.

On each side of the screen, a monkey moves a cursor with its thoughts, using the cursor to make contact with the colored ball. On the left, the monkey’s thoughts are decoded with the use of a mathematical algorithm known as Velocity. On the right, the monkey’s thoughts are decoded with a new algorithm known as ReFIT, with better results. The ReFIT system helps the monkey to click on 21 targets in 21 seconds, as opposed to just 10 clicks with the older system.

When a paralyzed person imagines moving a limb, cells in the part of the brain that controls movement activate, as if trying to make the immobile limb work again.

Despite a neurological injury or disease that has severed the pathway between brain and muscle, the region where the signals originate remains intact and functional.

In recent years, neuroscientists and neuroengineers working in prosthetics have begun to develop brain-implantable sensors that can measure signals from individual neurons.

After those signals have been decoded through a mathematical algorithm, they can be used to control the movement of a cursor on a computer screen – in essence, the cursor is controlled by thoughts.

The work is part of a field known as neural prosthetics.

A team of Stanford researchers has now developed a new algorithm, known as ReFIT, that vastly improves the speed and accuracy of neural prosthetics that control computer cursors. The results were published Nov. 18 in the journal Nature Neuroscience in a paper by Krishna Shenoy, a professor of electrical engineering, bioengineering and neurobiology at Stanford, and a team led by research associate Dr. Vikash Gilja and bioengineering doctoral candidate Paul Nuyujukian.

In side-by-side demonstrations with rhesus monkeys, cursors controlled by the new algorithm doubled the performance of existing systems and approached performance of the monkey’s actual arm in controlling the cursor. Better yet, more than four years after implantation, the new system is still going strong, while previous systems have seen a steady decline in performance over time.

“These findings could lead to greatly improved prosthetic system performance and robustness in paralyzed people, which we are actively pursuing as part of the FDA Phase-I BrainGate2 clinical trial here at Stanford,” said Shenoy.

The system relies on a sensor implanted into the brain, which records “action potentials” in neural activity from an array of electrode sensors and sends data to a computer. The frequency with which action potentials are generated provides the computer important information about the direction and speed of the user’s intended movement.
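At its simplest, that decoding step is a linear map from per-electrode firing rates to an intended cursor velocity. The sketch below is illustrative only; the weights and rates are invented, and real systems fit the mapping from training data (typically inside a Kalman filter):

```python
import numpy as np

# Firing rates (spikes per second) from a hypothetical 4-electrode array.
rates = np.array([40.0, 12.0, 25.0, 8.0])
baseline = np.array([10.0, 10.0, 10.0, 10.0])  # assumed resting rates

# Decoding weights mapping rate changes to (vx, vy); in a real system these
# are fit by regressing observed arm velocity against neural activity.
W = np.array([[ 0.02, -0.01],
              [-0.01,  0.03],
              [ 0.01,  0.00],
              [ 0.00, -0.02]])

velocity = (rates - baseline) @ W  # decoded (vx, vy), screen units per tick
print(velocity)  # roughly (0.73, -0.2)
```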

The ReFIT algorithm that decodes these signals represents a departure from earlier models. In most neural prosthetics research, scientists have recorded brain activity while the subject moves or imagines moving an arm, analyzing the data after the fact. “Quite a bit of the work in neural prosthetics has focused on this sort of offline reconstruction,” said Gilja, the first author of the paper.

The Stanford team wanted to understand how the system worked “online,” under closed-loop control conditions in which the computer analyzes and implements visual feedback gathered in real time as the monkey neurally controls the cursor toward an onscreen target.

The system is able to make adjustments on the fly when guiding the cursor to a target, just as a hand and eye would work in tandem to move a mouse-cursor onto an icon on a computer desktop.

If the cursor strays too far to the left, for instance, the user adjusts the imagined movements to redirect it to the right. The team designed the system to learn from the user’s corrective movements, allowing the cursor to move more precisely than it could in earlier prosthetics.

To test the new system, the team gave monkeys the task of mentally directing a cursor to a target – an onscreen dot – and holding the cursor there for half a second. ReFIT performed vastly better than previous technology in terms of both speed and accuracy.

The path of the cursor from the starting point to the target was straighter and it reached the target twice as quickly as earlier systems, achieving 75 to 85 percent of the speed of the monkey’s arm.

“This paper reports very exciting innovations in closed-loop decoding for brain-machine interfaces. These innovations should lead to a significant boost in the control of neuroprosthetic devices and increase the clinical viability of this technology,” said Jose Carmena, an associate professor of electrical engineering and neuroscience at the University of California-Berkeley.

Critical to ReFIT’s time-to-target improvement was its superior ability to stop the cursor. While the old model’s cursor reached the target almost as fast as ReFIT, it often overshot the destination, requiring additional time and multiple passes to hold the target.

The key to this efficiency was in the step-by-step calculation that transforms electrical signals from the brain into movements of the cursor onscreen. The team had a unique way of “training” the algorithm about movement. When the monkey used his arm to move the cursor, the computer used signals from the implant to match the arm movements with neural activity.

Next, the monkey simply thought about moving the cursor, and the computer translated that neural activity into onscreen movement of the cursor. The team then used the monkey’s brain activity to refine their algorithm, increasing its accuracy.

The team introduced a second innovation in the way ReFIT encodes information about the position and velocity of the cursor. Gilja said that previous algorithms could interpret neural signals about either the cursor’s position or its velocity, but not both at once. ReFIT can do both, resulting in faster, cleaner movements of the cursor.
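One way to picture decoding position and velocity "at once" is a joint state vector [x, y, vx, vy] updated by a constant-velocity model, as Kalman-filter decoders do. A minimal sketch (the 50 ms tick and the omission of the neural-measurement correction are simplifying assumptions, not the paper's actual filter):

```python
import numpy as np

dt = 0.05  # 50 ms decoder tick (an assumed, typical rate)

# State vector [x, y, vx, vy]: a constant-velocity model carries position
# and velocity together, so one update propagates both at once.
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])

state = np.array([0.0, 0.0, 2.0, -1.0])  # at the origin, drifting right and down
for _ in range(10):        # half a second of ticks
    state = A @ state      # predict step only; neural correction omitted

print(state[:2])  # position after 0.5 s of drift
```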

Early research in neural prosthetics had the goal of understanding the brain and its systems more thoroughly, Gilja said, but he and his team wanted to build on this approach by taking a more pragmatic engineering perspective. “The core engineering goal is to achieve highest possible performance and robustness for a potential clinical device,” he said.

To create such a responsive system, the team decided to abandon one of the traditional methods in neural prosthetics.

Much of the existing research in this field has focused on differentiating among individual neurons in the brain. Importantly, such a detailed approach has allowed neuroscientists to create a detailed understanding of the individual neurons that control arm movement.

But the individual neuron approach has its drawbacks, Gilja said. “From an engineering perspective, the process of isolating single neurons is difficult, due to minute physical movements between the electrode and nearby neurons, making it error prone,” he said. ReFIT focuses on small groups of neurons instead of single neurons.

By abandoning the single-neuron approach, the team also reaped a surprising benefit: performance longevity. Neural implant systems that are fine-tuned to specific neurons degrade over time. It is a common belief in the field that after six months to a year they can no longer accurately interpret the brain’s intended movement. Gilja said the Stanford system is working very well more than four years later.

“Despite great progress in brain-computer interfaces to control the movement of devices such as prosthetic limbs, we’ve been left so far with halting, jerky, Etch-a-Sketch-like movements. Dr. Shenoy’s study is a big step toward clinically useful brain-machine technology that has faster, smoother, more natural movements,” said James Gnadt, a program director in Systems and Cognitive Neuroscience at the National Institute of Neurological Disorders and Stroke, part of the National Institutes of Health.

For the time being, the team has been focused on improving cursor movement rather than the creation of robotic limbs, but that is not out of the question, Gilja said. Near term, precise, accurate control of a cursor is a simplified task with enormous value for people with paralysis.

“We think we have a good chance of giving them something very useful,” he said. The team is now translating these innovations to people with paralysis as part of a clinical trial.

This research was funded by the Christopher and Dana Reeve Paralysis Foundation, the National Science Foundation, National Defense Science and Engineering Graduate Fellowships, Stanford Graduate Fellowships, Defense Advanced Research Projects Agency (“Revolutionizing Prosthetics” and “REPAIR”) and the National Institutes of Health (NINDS-CRCNS and Director’s Pioneer Award).

Other contributing researchers include Cynthia Chestek, John Cunningham, Byron Yu, Joline Fan, Mark Churchland, Matthew Kaufman, Jonathan Kao and Stephen Ryu.

http://news.stanford.edu/news/2012/november/thought-control-cursor-111812.html

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

World’s Oldest Digital Computer is Being Re-Booted

One of the world’s first digital computers to replace the handwritten calculations of human “computors” is getting an official reboot that could lead to a spot in the Guinness Book of Records.

The 61-year-old Harwell Dekatron — about the size and weight of an SUV — was originally hailed as a slow, steady machine capable of delivering error-free calculations while running for 90 hours a week. It has survived to become the oldest original working digital computer following the announcement of its completed restoration by The National Museum of Computing (TNMOC) in the U.K. today (Nov. 20).

“In 1951, the Harwell Dekatron was one of perhaps a dozen computers in the world, and since then, it has led a charmed life surviving intact while its contemporaries were recycled or destroyed,” said Kevin Murrell, a trustee at TNMOC.

The computer relies on 480 relays that have more in common with telephone exchanges than with modern PCs or Macs. Such relays sit inside a collection of racks that also hold 828 flashing Dekatron valves — gas-filled counting tubes used in the early days of computing rather than the transistors of modern electronics. [Could the Computer Age Have Begun in Victorian England?]

“The restoration was quite a challenge, requiring work with components like valves, relays and paper tape readers that are rarely seen these days and are certainly not found in modern computers,” said Delwyn Holroyd, a volunteer at TNMOC.

Running the computer requires about 1,500 watts of power, roughly equivalent to the consumption of a modern hairdryer. By comparison, a laptop might use just 50 watts.

The computer does not work in the binary ones and zeroes of modern machines. Instead, each Dekatron valve is a gas-filled tube with 10 cathodes, and the position of its glowing discharge represents a digit in the machine’s decimal counting system.
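Functionally, each dekatron is one decimal digit, and a chain of them counts the way a mechanical odometer does: when a tube steps past 9 and wraps to 0, it pulses the next tube. A small model of that carry behaviour (a conceptual sketch, not an emulation of the Harwell machine):

```python
class Dekatron:
    """One decade counter: the glow sits on one of 10 cathodes (0 to 9)."""

    def __init__(self):
        self.cathode = 0

    def pulse(self):
        """Advance one step; return True on wrapping from 9 to 0 (a carry)."""
        self.cathode = (self.cathode + 1) % 10
        return self.cathode == 0

def count(chain, pulses):
    """Feed pulses to the least-significant tube, propagating carries up."""
    for _ in range(pulses):
        for tube in chain:        # chain[0] is the least-significant digit
            if not tube.pulse():  # no carry, so stop propagating
                break

chain = [Dekatron(), Dekatron(), Dekatron()]  # units, tens, hundreds
count(chain, 275)
print([tube.cathode for tube in chain])  # prints [5, 7, 2]
```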

Clattering paper readers and printers surround the computer to create a sound more like a roomful of typewriters than the quiet, whirring fans of modern computers.

The Harwell Dekatron first served at the Harwell Atomic Energy Research Establishment, the U.K.’s main center for nuclear research from the end of World War II through the 1990s. But the computer had become redundant by 1957 and ended up as a teaching machine at the Wolverhampton and Staffordshire Technical College until its retirement in 1973.

The computer joins other relics of the early computing age at The National Museum of Computing, such as a rebuilt Colossus computer originally made by the Allies to break Nazi codes during World War II.

http://www.livescience.com/24918-oldest-digital-computer-reboot.html

Brain-controlled helicopter may soon be available

For the last few years, Puzzlebox has been publishing open source software and hacking guides that walk makers through the modification of RC helicopters so that they can be flown and controlled using just the power of the mind. Full systems have also been custom built to introduce youngsters to brain-computer interfaces and neuroscience. The group is about to take the project to the next stage by making a Puzzlebox Orbit brain-controlled helicopter available to the public, while encouraging user experimentation by making all the code, schematics, 3D models, build guides and other documentation freely available under an open-source license.

The helicopter has a protective outer sphere that prevents the rotor blades from striking walls, furniture, floor and ceiling, and it is very similar in design to the Kyosho Space Ball. It’s not the same craft, though, and the ability to control it with the mind is not the only difference.

“There’s a ring around the top and bottom of the Space Ball which isn’t present on the Puzzlebox Orbit,” Castellotti says. “The casing around their servo motor looks quite different, too. The horizontal ring at mid-level is more rounded on the Orbit, and vertically it is more squat. We’re also selling the Puzzlebox Orbit in the U.S. for US$89 (including shipping), versus their $117 (plus shipping).”

Two versions of the Puzzlebox Orbit system are being offered to the public. The first is designed for use with mobile devices like tablets and smartphones. A NeuroSky MindWave Mobile EEG headset communicates with the device via Bluetooth. Proprietary software then analyzes the brainwave data in real time and translates the input as command signals, which are sent to the helicopter via an IR adapter plugged into the device’s audio jack.

This system isn’t quite ready for all mobile operating platforms, though. The team is “happy on Android but don’t have access to a wide variety of hardware for testing,” confirmed Castellotti, adding “Some tuning after release is expected. We’ll have open source code available to iOS developers and will have initiated the App Store evaluation process if it’s not already been approved.”

The second offering comes with a Puzzlebox Pyramid, which was developed completely in-house and has a dual role as a home base for the Orbit helicopter and a remote control unit. At its heart is a programmable micro-controller that’s compatible with Arduino boards. On one face of the pyramid there’s a broken circle of multi-colored LED lights in a clock face configuration. These are used to indicate levels of concentration, mental relaxation, and the quality of the EEG signal from a NeuroSky MindWave EEG headset (which wirelessly communicates with a USB dongle plugged into the rear of the pyramid).

Twelve infrared LEDs at the top of each face actually control the Orbit helicopter, and with some inventive tweaking, these can also be used to control other IR toys and devices (including TVs).

In either case, a targeted mental state can be assigned to a helicopter control or flight path (such as hover in place or fly in a straight line) and actioned whenever that state is detected and maintained. Estimated Orbit flight time is around eight minutes (or more), after which the user will need to recharge the unit for 30 minutes before the next take-off.
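That "detected and maintained" logic can be sketched as a simple threshold-with-hold loop: a command fires only after the measured mental state has stayed above a threshold for several consecutive readings. The values and command names below are invented for illustration; Puzzlebox's real implementation lives in its open-source code:

```python
def control(readings, threshold=70, hold_ticks=3):
    """Return a command per reading: 'hover' once attention has stayed at or
    above threshold for hold_ticks consecutive readings, else 'land'."""
    commands, streak = [], 0
    for attention in readings:  # e.g. a 0-100 attention value from the headset
        streak = streak + 1 if attention >= threshold else 0
        commands.append("hover" if streak >= hold_ticks else "land")
    return commands

# Attention dips, then is sustained; the command fires only after the hold.
print(control([80, 60, 75, 85, 90, 72]))
# prints ['land', 'land', 'land', 'land', 'hover', 'hover']
```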

At the time of writing, a crowd-funding campaign on Kickstarter to take the prototype system into mass production has attracted almost three times its target. The Puzzlebox team has already secured enough hardware and materials to start shipping the first wave of Orbits next month. International backers will get their hands on the system early next year.

The brain-controlled helicopter is only a part of the package, however. The development team has promised to release the source code for the Linux/Mac/PC software and mobile apps, all protocols, and available hardware schematics under open-source licenses. Step-by-step how-to guides are also in the pipeline (like the one already on the Instructables website), together with educational aids detailing how everything works.

“We have prepared contributor tools for Orbit, including a wiki, source code browser, and ticket tracking system,” said Castellotti. “We are already using these tools internally to build the project. Access to these will be granted when the Kickstarter campaign closes.”

“We would really like to underline that we are producing more than just a brain-controlled helicopter,” he stressed. “The toy and concept is fun and certainly the main draw, but the true purpose lies in the open code and hacking guides. We don’t want to be the holiday toy that gets played with for ten minutes then sits forever in the corner or on a shelf. We want owners to be able to use the Orbit to experiment with biofeedback – practicing how to concentrate better or to unwind and relax with this physical and visual aid.”

“And when curiosity kicks in and they start to wonder how it actually works, all of the information is published freely. That’s how we hope to share knowledge and foster a community. For example, a motivated experimenter should be able to start with the hardware we provide, and using our tools and guides learn how to hack support for driving a remote controlled car or causing a television to change channels when attention levels are measured as being low for too long a period of time. Such advancements could then be contributed back to the rest of our users.”

The Kickstarter campaign will close on December 8, after which the team will concentrate its efforts on getting Orbit systems delivered to backers and ensuring that all the background and support documentation is in place. If all goes according to plan, a retail launch could follow as soon as Q1 2013.

It is hoped that the consumer Puzzlebox Orbit mobile/tablet edition with the NeuroSky headset will remain under US$200, followed by the Pyramid version at an as-yet undisclosed price.

http://www.gizmag.com/puzzlebox-orbit-brain-controlled-helicopter/25138/

Michael Newman – drunk Australian man tries to ride saltwater crocodile

A drunk man who climbed into a crocodile enclosure in Australia and attempted to ride a 5m (16ft) long crocodile has survived his encounter.

The crocodile, called Fatso, bit the 36-year-old man’s leg, tearing chunks of flesh from him as he straddled the reptile.

He received surgery to serious wounds to his leg and is recovering in hospital, police say.

He had been chucked out of a pub in the town of Broome for being too drunk.

The man, Michael Newman, climbed over a fence and tried to sit on the 800kg (1,800lb) saltwater crocodile.

“Fatso has taken offence to this and has spun around and bit this man on the right leg,” Sgt Roger Haynes of Broome police told journalists.

“The crocodile has let him go and he’s been able to scale the fence again and leave the wildlife park.”

Malcolm Douglas, the park’s owner, said that the crocodile was capable of crushing a man to death with a single bite.

“The man who climbed the fence was fortunate because Fatso was a bit more sluggish than normal, due to the cooler nights we have been experiencing in Broome,” said Mr Douglas.

“If it had been warmer and Fatso was more alert, we would have been dealing with a fatality.”

“No person in their right mind would try to sit on a 5m crocodile. Saltwater crocodiles, once they get hold of you, are not renowned for letting you go.”

The man staggered back to the pub bleeding heavily.

Pub manager Mark Phillips said staff told him that the man reappeared at about 11pm with bits of bark hanging off him and flesh gouged out of his limbs.

“They said he had chunks out of his legs and things like that,” Mr Phillips told The West Australian news website.

An average of two people are killed each year in Australia by aggressive saltwater crocodiles, which can grow up to 7m (23 ft) long and weigh more than a tonne.

http://www.bbc.co.uk/news/10611973

Duke University scientists create Harry Potter invisibility cloak

Scientists seem to have unlocked another technology that was only available in fantasy movies.  Physicists at Duke University have announced that they have successfully cloaked an object with “perfect” invisibility, straight out of Harry Potter.

In 2006, David Smith and his colleagues developed a theory called “transformation optics”. The theory is based on redirecting electromagnetic fields around an object, making it invisible, according to ScienceNOW.

All earlier attempts at testing the theory provided only partial invisibility. It wasn’t until Smith started experimenting with metamaterials, which are engineered to bend light and other radiation around an object, that the team was able to create a Harry Potter-style invisibility cloak.

Graduate student Nathan Landy says all earlier versions of a Harry Potter cloak suffered from reflected light. Landy explained to Phys.org that “it was much like reflections seen on clear glass. The viewer can see through the glass just fine, but at the same time the viewer is aware the glass is present due to light reflected from the surface of the glass.”

The new cloak gets around this problem by reworking the materials.

“Landy’s new microwave cloak is naturally divided into four quadrants, each of which have voids or blind spots at their intersections and corners with each other,” explains io9. “Thus, to avoid the reflectivity problem, Landy was able to correct for it by shifting each strip so that it met its mirror image at each interface.”

Smith said of the research:

“This to our knowledge is the first cloak that really addresses getting the transformation exactly right to get you that perfect invisibility.”

Deepika Kurup, 14, is America’s Top Young Scientist: Her Solar-Powered Jug Cleans Water

A 14-year-old New Hampshire student was named “America’s Top Young Scientist” for inventing a solar-powered water jug that changes dirty water into purified drinking water.

Deepika Kurup not only surpassed nine finalists with her science and math skills to win $25,000 from Discovery Education and 3M, she also persuaded the judges with a dynamic five-minute live presentation about the plight of a billion poor people who have no access to clean drinking water.

Watch her presentation below.

The cost-effective and sustainable water-purification system, which harnesses solar energy to disinfect contaminated water, uses her own innovative process designed to overcome current problems with portable purification. Her process can kill many types of bacteria in a fraction of the time of other methods.

Kurup, a ninth grader at Nashua High School, won the prize last week following a live competition at the 3M Innovation Center in St. Paul, Minn.

During the past three months, Kurup and the other finalists had the exclusive opportunity to work directly with a 3M scientist as they created their personal innovations as part of a summer mentorship program. The 3M scientists provided guidance to the finalists as they developed their ideas from a theoretical concept into an actual prototype that would help solve a problem in everyday life.

The second, third and fourth place winners each received a $1,000 cash prize and a trip from Discovery Student Adventures to Costa Rica. These extraordinary students are:

  • Carolyn Jons, from Eden Prairie High School in Eden Prairie, Minn., received second place for her innovative packaging method that inhibits mold growth and helps keep food fresh longer.
  • Anin Sayana from Bellarmine College Preparatory in Cupertino, Calif., received third place for his innovation that selectively targets chemotherapy-resistant cancer stem cells.
  • Anishaa Sivakumar from Franklin Regional High School in Murrysville, Pa., received fourth place for her innovation that would help treat patients suffering from macular degeneration.

The six other finalists each received a $1,000 cash prize.

http://www.goodnewsnetwork.org/most-popular/americas-top-young-scientist-2012.html

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

New smell discovered


Scientists have discovered a new smell, but you may have to go to a laboratory to experience it yourself.

The smell is dubbed “olfactory white,” because it is the nasal equivalent of white noise, researchers reported Nov. 19 in the journal Proceedings of the National Academy of Sciences. Just as white noise is a mixture of many different sound frequencies and white light is a mixture of many different wavelengths, olfactory white is a mixture of many different smelly compounds.

In fact, the key to olfactory white is not the compounds themselves, researchers found, but the fact that there are a lot of them. 

“[T]he more components there were in each of two mixtures, the more similar the smell of those two mixtures became, even though the mixtures had no components in common,” they wrote.

Almost any given smell in the real world comes from a mixture of compounds. Humans are good at telling these mixtures apart (it’s hard to mix up the smell of coffee with the smell of roses, for example), but we’re bad at picking individual components out of those mixtures. (Quick, sniff your coffee mug and report back all the individual compounds that make that roasted smell. Not so easy, huh?)

Mixing multiple wavelengths that span the human visual range equally makes white light; mixing multiple frequencies that span the range of human hearing equally makes the whooshing hum of white noise. Neurobiologist Noam Sobel from the Weizmann Institute of Science in Israel and his colleagues wanted to find out whether a similar phenomenon happens with smell.
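The white-noise half of that analogy is easy to sketch numerically: sum many equal-amplitude sinusoids with random phases across a band of frequencies, and the components add incoherently into a hissy, noise-like signal. A minimal illustration (the sample rate, band edges, and component count here are arbitrary choices for the sketch, not figures from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
sr = 8000                                  # sample rate in Hz (arbitrary)
t = np.arange(sr) / sr                     # one second of samples
freqs = np.linspace(100.0, 3900.0, 500)    # equal coverage of the band
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)

# Sum equal-amplitude sinusoids spanning the band; random phases make
# the components add incoherently, like white noise.
signal = np.sin(2.0 * np.pi * freqs[:, None] * t + phases[:, None]).sum(axis=0)

# Each sinusoid contributes variance 1/2, so the sum's variance is
# close to 500 / 2 = 250 (a standard deviation of roughly 16).
noise_std = signal.std()
```

No single component dominates; it is the equal spread across the whole band that makes the result sound “white,” which is exactly the property the researchers went looking for in smell.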

In a series of experiments, they exposed participants to hundreds of equally mixed smells, some containing as few as one compound and others containing up to 43 components. They first had 56 participants compare mixtures of the same number of compounds with one another. For example, a person might compare a 40-compound mixture with a 40-compound mixture, neither of which had any components in common.

This experiment revealed that the more components in a mixture, the worse participants were at telling them apart. A four-component mixture smells less similar to other four-component mixtures than a 43-component mixture smells to other 43-component mixtures.
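This convergence is what a simple averaging model of smell would predict: if each compound excites a bank of receptors with its own random pattern, an equal-intensity mixture’s overall pattern is roughly the average of its components’ patterns, and averages over many random patterns all drift toward the same mean. A toy simulation of that idea (the receptor model and every number here are illustrative assumptions, not the study’s data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_compounds, n_receptors = 200, 50
# Each compound excites the receptor bank with its own random pattern
# (a stand-in assumption, not measured receptor data).
responses = rng.random((n_compounds, n_receptors))

def mixture_response(components):
    # An equal-intensity mixture's pattern ~ average of its components'.
    return responses[components].mean(axis=0)

def disjoint_pair_similarity(k, trials=500):
    """Mean cosine similarity between two k-component mixtures
    that share no components at all."""
    sims = []
    for _ in range(trials):
        picks = rng.choice(n_compounds, size=2 * k, replace=False)
        a = mixture_response(picks[:k])
        b = mixture_response(picks[k:])
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return float(np.mean(sims))

# More components -> both patterns hug the population average, so even
# disjoint mixtures end up (in this toy model) smelling more alike.
assert disjoint_pair_similarity(4) < disjoint_pair_similarity(43)
```

The effect is just concentration of measure: with 43 random components, the averaged pattern has little room left to be distinctive, mirroring the participants’ difficulty in telling large mixtures apart.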

The researchers seemed on track to finding the olfactory version of white noise, and they set up a new experiment to confirm the find. They first created four 40-component mixtures. Twelve participants were then given one of the mixtures to sniff and told that it was called “Laurax,” a made-up word. Three of the participants were told mixture 1 was Laurax, three were told it was mixture 2, three were told it was mixture 3, and the rest were told it was mixture 4.

After three days of sniffing their version of Laurax in the lab, the participants were given four new scents and four scent labels, one of which was Laurax. They were asked to label each scent with the most appropriate label.

The researchers found that the label “Laurax” was most popular for scents with more compounds. In fact, the more compounds in a mixture, the more likely participants were to call it Laurax. The label went to mixtures with more than 40 compounds 57.1 percent of the time.

Another experiment replicated the first, except that it allowed participants to label one of the scents “other,” a way to ensure “Laurax” wasn’t just a catch-all. Again, scents with more compounds were more likely to get the Laurax label.

The meaning of these results, the researchers wrote, is that olfactory white is a distinct smell, caused not by specific compounds but by certain mixes of compounds. The key is that the compounds are all of equal intensity and that they span the full range of human smells. That’s why roses and coffee, both of which have many smell compounds, don’t smell anything alike: Their compounds are unequally mixed and don’t span a large range of smells.

In other words, our brains treat smells as a single unit, not as a mixture of compounds to break down, analyze and put back together again. If they didn’t, they’d never perceive mixtures of completely different compounds as smelling the same.

Perhaps the next burning question is: What does olfactory white smell like? Unfortunately, the scent is so bland as to defy description. Participants rated it right in the middle of the scale for both pleasantness and edibility.

“The best way to appreciate the qualities of olfactory white is to smell it,” the researchers wrote.

http://www.livescience.com/24890-new-white-smell-discovered.html

Swedish woman charged with having sex with skeleton

A 37-year-old Swedish woman is accused of necrophilia and was formally charged on Tuesday at the Gothenburg District Court for the crime of “violating the peace of the dead.”

Police were initially notified in September that a gunshot had been fired from the woman’s apartment, which led to the discovery of some 100 skeleton parts there.

While searching her home, the police reportedly also found a CD titled “My Necrophilia” as well as photographs in which a woman is shown being intimate with the skeleton’s parts, including licking a skull, according to the Swedish news agency TT.

However, the woman has denied the charges, claiming she collected the bones out of historical interest, according to the AP.

“In the confidential section of the investigation we have material which indicates she used them in sexual situations,” the prosecutor Kristina Ehrenborg-Staffas told the TT news agency.

“Some of the photos show a woman licking a skull,” Ehrenborg-Staffas told The Local, a Swedish newspaper. “She has a lot of photos of morgues and chapels, and documents about how to have sex with recently deceased and otherwise dead people,” she told them.

http://www.huffingtonpost.com/2012/11/20/human-skeleton-sex-sweden_n_2167154.html