Brazilian woman almost killed by train while trying to retrieve her dropped cell phone

This video shows a woman trapped on the tracks below a commuter rail platform in Brazil escaping an oncoming train within an inch of her life. Two men pulled her to safety less than a second before the train sped past at the Corinthians-Itaquera station in Sao Paulo.

Bystanders said she jumped onto the tracks to retrieve her dropped cellphone, but couldn’t climb back out.

http://cnews.canoe.ca/CNEWS/WeirdNews/2013/04/02/20704421.html

Never Scrape Again: Windshield Coating Repels Frost

A fogged-up camera lens can ruin a perfect shot, and a frosty car window can lead to potentially deadly accidents. To help keep glass clear in harsh weather, scientists are developing an advanced new coating that resists both fogging and frosting.

Glass fogs up and frosts because of water. So you might assume so-called hydrophobic materials, which repel water, provide the best method of fighting such moisture. However, these solutions tend only to make water bead up, scattering light and obscuring views.

Researchers have also experimented with the opposite tactic, attempting to prevent fogging and frosting using hydrophilic materials, which attract water. Here, researchers hope to smear water across the glass surfaces in uniform sheets, to keep the moisture from distorting light. Although these materials work against fog, they can’t prevent frosting. When cold glass encounters humid air, the layer of water that develops simply freezes.

However, the new coating possesses both water-repelling and water-attracting properties, so it works against both fog and frost. The material contains organic compounds with both hydrophilic and hydrophobic components. The hydrophilic ingredients love water so much they absorb moisture, trapping it and keeping it from easily forming ice crystals. This lowers water’s usual freezing temperature and dramatically reduces frosting.

Meanwhile, the material’s hydrophobic components help repel contaminants that might spoil the hydrophilic effect.

“We have no freezing of water, even at low temperatures. It remains completely clear,” researcher Michael Rubner, a materials scientist at MIT, told TechNewsDaily.

When the new coating warms up from the freezing cold, it releases the water, “which just evaporates,” Rubner added.

The new coating does have its limits. “If it’s overwhelmed with water, any excess water can freeze,” Rubner said. “You wouldn’t want this on an airplane wing that constantly gets water on it, but an application like eyeglasses or windshields, it can be amazing.”

The researchers are now seeking to enhance the material’s durability to mechanical stresses. They detailed their findings online Jan. 29 in the journal ACS Nano.

http://www.livescience.com/27611-never-scrape-again-windshield-coating-repels-frost.html

Stealth Wear fashion to shield people from drones and face-recognition software

As debate over the use of unmanned aerial vehicles in the U.S. rages on, a fashion designer introduces clothing that blocks drone-mounted infrared cameras.

As the U.S. government draws up plans to use surveillance drones in domestic airspace, opposition to what many consider an unwarranted and significant invasion of privacy is mounting across the country, from rural Virginia to techopolis Seattle. While officials debate anti-drone legislation at the federal, state and local levels, one man is fighting back with high-tech apparel.

A New York City privacy advocate-turned-urban-guerilla fashion designer is selling garments designed to make their wearers invisible to infrared surveillance cameras, particularly those on drones. And although Adam Harvey admits that his three-item Stealth Wear line of scarves and capes is more of a political statement than a money-making venture, the science behind the fashion is quite sound.

“Fighting drones is not my full-time job, but it could be,” says Harvey, an instructor of physical computing at Manhattan’s School of Visual Arts and the creator of the CV Dazzle project, which seeks to develop makeup and hairstyles that camouflage people from face-recognition cameras and software.

Harvey’s newest medium, metalized fabric, has been around for more than 20 years. It holds in body heat that would burn bright for infrared cameras—a characteristic that could prove attractive to those who do not want unmanned aerial vehicles spying on them.

Metalized fabric
Metal is very good at absorbing and scattering infrared light, says Cheng Sun, a Northwestern University assistant professor of mechanical engineering. In that sense there is nothing exotic in how metalized fabric works—it “would strongly attenuate the [infrared] light,” he says. The metal would dissipate heat to surroundings as well, making the wearer harder to pinpoint.

To date, the fabric has primarily been used in tape and gaskets to protect electronics and communications equipment from static electricity and electromagnetic interference, according to Larry Creasy, director of technology for metalized fabric-maker Laird Technologies, based in Saint Louis.

Here’s how metalizing works, at least at Laird: Woven fabric, commonly nylon or polyester, is coated with a special catalyst—a precious metal Creasy declined to specify—that helps copper bind to the fiber. Once dry, the fabric is submerged in a copper sulfate–plating bath and dried. A nickel sulfamate bath follows to help the finished fabric withstand the elements and abrasions. The result is a flexible, breathable fabric that can be cut with ordinary tools but that protects against electromagnetic interference and masks infrared radiation, Creasy says. The process adds weight to the original fabric. An untreated square yard of nylon weighs about 42.5 grams. Treated, the same patch weighs more than 70 grams.

The fashion
Harvey’s fabric is coated with copper, nickel and silver, a combination that gives his scarves, head-and-shoulders cloak and thigh-length “burqa” a silvery and “luxurious” feel. The material blocks cell signals, as well, adding an element of risk to tweeting, texting and other mobile activities, as the wearer must break cover to communicate.

Stealth Wear is sold only via a U.K. Web site. The burqa goes for about $2,300, the “hoodie” is $481 and the scarf is $565—luxury items, but so, too, is privacy today, Harvey says.

The impetus
The high cost and limited availability are significant drawbacks—Harvey says he’s only sold one Stealth Wear item online, a scarf. But the Federal Aviation Administration (FAA) predicts 10,000 commercial drones will ply domestic airspace by 2017—almost twice the size of the U.S. Air Force’s current fleet of unmanned aircraft. The number of drones flying in the U.S. today is hard to pin down because not every company and agency that gets FAA approval to fly a drone actually puts one in the air. In fact, 1,428 private-sector and government requests have been approved since 2007, according to the FAA. A Los Angeles Times report states that 327 of those permits are still active. Meanwhile, President Obama signed a law in February 2012 that gives the FAA until September 2015 to draw up rules that dictate how law enforcement, the military and other entities may use drones in U.S. airspace.

As of October 2012, 81 law agencies, universities, an Indian tribal agency and other entities had applied to the FAA to fly drones, according to documents released by the FAA to the Electronic Frontier Foundation following a Freedom of Information Act lawsuit. Government entities as diverse as the U.S. Department of State and Otter Tail County, Minn., are among them.

Discomfort rising
Although Harvey’s anti-drone fashions are not currently flying off the shelves, he could soon find himself leading a seller’s market if recent events are any indication:

• The Charlottesville, Va., city council has passed a watered-down ordinance that asks the federal and commonwealth governments not to use drone-derived information in court. Proponents had sought to make the city drone-free (pdf).

• Virginia, Minnesota, Oregon, Montana, Arizona (pdf) and Idaho legislators are trying to at least regulate, or even prohibit, drones in their skies.

• Seattle Mayor Mike McGinn returned the city’s two surveillance drones after a hostile public reception.

• A bipartisan pair of U.S. Representatives has introduced legislation to limit information-gathering by government-operated drones as well as prohibit weapons on law-enforcement and privately owned unmanned aerial vehicles.

Drone advocates defend the use of the technology as a surveillance tool. “We clearly need to do a better job of educating people about the domestic use of drones,” says Ben Gielow, government relations manager for the Association for Unmanned Vehicle Systems International. Gielow says U.S. voters must decide the acceptability of data collection from all sources, adding, “Ultimately, an unmanned aircraft is no different than gathering data from the GPS on your phone or from satellites.”

GPS does not use infrared cameras, however, and satellites are not at the center of the current privacy debate brewing in Washington—factors that could make Harvey’s designs all the more fashionable.

http://www.scientificamerican.com/article.cfm?id=drone-proof-anti-infrared-apparel&page=2

Communication of thoughts between rats on different continents, connected via brain-to-brain interface

The world’s first brain-to-brain connection has given rats the power to communicate by thought alone.

“Many people thought it could never happen,” says Miguel Nicolelis at Duke University in Durham, North Carolina. Although monkeys have been able to control robots with their mind using brain-to-machine interfaces, work by Nicolelis’s team has, for the first time, demonstrated a direct interface between two brains – with the rats able to share both motor and sensory information.

The feat was achieved by first training rats to press one of two levers when an LED above that lever was lit. A correct action opened a hatch containing a drink of water. The rats were then split into two groups, designated as “encoders” and “decoders”.

An array of microelectrodes – each about one-hundredth the width of a human hair – was then implanted in the encoder rats’ primary motor cortex, an area of the brain that processes movement. The team used the implant to record the neuronal activity that occurs just before the rat made a decision in the lever task. They found that pressing the left lever produced a different pattern of activity from pressing the right lever, regardless of which was the correct action.

Next, the team recreated these patterns in decoder rats, using an implant in the same brain area that stimulates neurons rather than recording from them. The decoders received a few training sessions to prime them to pick the correct lever in response to the different patterns of stimulation.

The researchers then wired up the implants of an encoder and a decoder rat. The pair were given the same lever-press task again, but this time only the encoder rats saw the LEDs come on. Brain signals from the encoder rat were recorded just before they pressed the lever and transmitted to the decoder rat. The team found that the decoders, despite having no visual cue, pressed the correct lever between 60 and 72 per cent of the time.
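The core decoding step—matching a recorded activity pattern to one of two lever choices—can be sketched as a simple nearest-template classifier. This is a toy illustration in Python; the firing rates, templates and distance metric here are hypothetical, not the team’s actual neural decoding:

```python
# Toy sketch: the encoder's recorded firing-rate vector is compared
# against stored mean patterns ("templates") for each lever, and the
# closest match decides which stimulation pattern the decoder receives.
def classify_pattern(rates, templates):
    """Return the lever whose template is closest (squared Euclidean
    distance) to the recorded firing-rate vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda lever: dist(rates, templates[lever]))

templates = {
    "left":  [12.0, 3.0, 8.0],   # hypothetical mean firing rates (Hz)
    "right": [4.0, 10.0, 2.0],
}
recorded = [11.0, 4.0, 7.5]      # one trial's recorded activity
print(classify_pattern(recorded, templates))  # -> left
```

In the real experiment the decision is noisier and higher-dimensional, which is consistent with the decoders landing between 60 and 72 per cent accuracy rather than near-perfect performance.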

The rats’ ability to cooperate was reinforced by rewarding both rats if the communication resulted in a correct outcome. Such reinforcement led to the transmission of clearer signals, improving the rats’ success rate compared with cases where decoders were given a pre-recorded signal. This was a big surprise, says Nicolelis. “The encoder’s brain activity became more precise. This could have happened because the animal enhanced its attention during the performance of the next trial after a decoder error.”

If the decoders had not been primed to associate specific activity with the left or right lever prior to being linked with an encoder, the only consequence would have been that they took a bit more time to learn the task while interacting with the encoder, says Nicolelis. “We simply primed the decoder so that it would get the gist of the task it had to perform.” In unpublished monkey experiments involving a similar task, the team did not need to prime the animals at all.

In a second experiment, rats were trained to explore a hole with their whiskers and indicate if it was narrow or wide by turning to the left or right. Pairs of rats were then connected as before, but this time the implants were placed in their primary somatosensory cortex, an area that processes touch. Decoder rats were able to indicate over 60 per cent of the time the width of a gap that only the encoder rats were exploring.

Finally, encoder rats were held still while their whiskers were stroked with metal bars. The researchers observed patterns of activity in the somatosensory cortex of the decoder rats that matched that of the encoder rats, even though the whiskers of the decoder rats had not been touched.

Pairs of rats were even able to cooperate across continents using cyberspace. Brain signals from an encoder rat at the Edmond and Lily Safra International Institute of Neuroscience of Natal in Brazil were sent to a decoder in Nicolelis’s lab in North Carolina via the internet. Though there was a slight transmission delay, the decoder rat still performed with an accuracy similar to that of rats in closer proximity to their encoders.

Christopher James at the University of Warwick, UK, who works on brain-to-machine interfaces for prostheses, says the work is a “wake-up call” for people who haven’t caught up with recent advances in brain research.

We have the technology to create implants for long-term use, he says. What is missing, though, is a full understanding of the brain processes involved. In this case, Nicolelis’s team is “blasting a relatively large area of the brain with a signal they’re not sure is 100 per cent correct,” he says.

That’s because the exact information being communicated between the rats’ brains is not clear. The brain activity of the encoders cannot be transferred precisely to the decoders because that would require matching the patterns neuron for neuron, which is not currently possible. Instead, the two patterns are closely related in terms of their frequency and spatial representation.

“We are still using a sledgehammer to crack a walnut,” says James. “They’re not hearing the voice of God.” But the rats are certainly sending and receiving more than a binary signal that simply points to one or other lever, he says. “I think it will be possible one day to transfer an abstract thought.”

The decoders have to interpret relatively complex brain patterns, says Marshall Shuler at Johns Hopkins University in Baltimore, Maryland. The animals learn the relevance of these new patterns and their brains adapt to the signals. “But the decoders are probably not having the same quality of experience as the encoders,” he says.

Patrick Degenaar at Newcastle University in the UK says that the military might one day be able to deploy genetically modified insects or small mammals that are controlled by the brain signals of a remote human operator. These would be drones that could feed themselves, he says, and could be used for surveillance or even assassination missions. “You’d probably need a flying bug to get near the head [of someone to be targeted],” he says.

Nicolelis is most excited about the future of multiple networked brains. He is currently trialling the implants in monkeys, getting them to work together telepathically to complete a task. For example, each monkey might only have access to part of the information needed to make the right decision in a game. Several monkeys would then need to communicate with each other in order to successfully complete the task.

“In the distant future we may be able to communicate via a brain-net,” says Nicolelis. “I would be very glad if the brain-net my great grandchildren used was due to their great grandfather’s work.”

Journal reference: Nature Scientific Reports, DOI: 10.1038/srep01319

$300 glasses sold on Amazon will correct colorblindness

Mark Changizi and Tim Barber turned research on human vision and blood flow into colorblindness-correcting glasses you can buy on Amazon. Here’s how they did it.

About 10 years ago, Mark Changizi began researching human vision and how it perceives changes in skin color. Like many academics, Changizi, an accomplished neurobiologist, went on to pen a book. The Vision Revolution challenged prevailing theories–no, we don’t see red only to spot berries and fruits amid the vegetation–and detailed the surprising reasons we see the way we do.

If it were up to academia, Changizi’s story might have ended there. “I started out in math and physics, trying to understand the beauty in these fields,” he says, “You are taught, or come to believe, that applying something useful is inherently not interesting.”

Not only did Changizi manage to beat that impulse out of himself, but he and Tim Barber, a friend from middle school, teamed up several years ago to form a joint research institute. 2AI Labs allows the pair to focus on research into cognition and perception in humans and machines, and then to commercialize it. The most recent project? A pair of glasses with filters that just happen to cure colorblindness.

Changizi and Barber didn’t set out to cure colorblindness. Changizi just put forth the idea that humans’ ability to see colors evolved to detect oxygenation and hemoglobin changes in the skin so they could tell if someone was scared, uncomfortable or unhealthy. “We as humans blush and blanch, regardless of overall skin tone,” Barber explains. “We associate color with emotion. People turn purple with anger in every culture.” Once Changizi fully understood the connection between color vision and blood physiology, he determined it would be possible to build filters that enhance the ability to see those subtle changes by making veins more or less distinct–by sharpening the ability to see the red-green or blue-yellow parts of the spectrum. He and Barber then began the process of patenting their invention.

When they started thinking about commercial applications, Changizi and Barber both admit their minds went straight to television cameras. Changizi was fascinated by the possibilities of infusing an already-enhanced HDTV experience with the capacity to see colors even more clearly.

“We looked into cameras’ photoreceptors and decided that producing a filter for a camera would be too difficult and expensive,” Barber says. The easiest approach was not electronic at all, he says. Instead, they worked to develop a lens that adjusts the color signal that hits the human eye, and the O2Amp was born.

The patented lens technology simply perfects what the eye does naturally: it reads the changes in skin tone brought on by a flush, bruise or blanch. The filters can be used in a range of products, from indoor lighting (especially for hospital trauma centers) to windows to, perhaps eventually, face cream. For now, one of the most promising applications is in glasses that correct colorblindness.

As a veteran entrepreneur, founding Clickbank and Keynetics among other ventures, Barber wasn’t interested in chasing the perfect color filter for a demo pair of glasses. “If you look for perfection you could spend a million dollars. And it is just a waste of time,” he says. A bunch of prototypes were created, and rejected. Some were too shiny, others too iridescent. “We finally found something that worked to get the tone spectrum we wanted and to produce a more interesting view of the world.”

What they got was about 90 percent of the way to total color enhancement across three different types of lenses: Oxy-Iso, Hemo-Iso, and Oxy-Amp. While the Amp, which boosts the wearer’s general perception of blood oxygenation under the skin (your own vision, but better), is the centerpiece of the technology, it was the Oxy-Iso, the lens that isolates and enhances the red-green part of the spectrum, that generated some unexpected feedback from users. Changizi says the testers told them that the Oxy-Iso lens appeared to “cure” their colorblindness.

Changizi knew this was a possibility, as the filter concentrates enhancement exactly where red-green colorblind people have a block. Professor Daniel Bor, a red-green colorblind neuroscientist at the University of Sussex, tried them and was practically giddy with the results. Changizi published Bor’s testimony on his blog: “When I first put one of them on [the Oxy-Iso], I got a shiver of excitement at how vibrant and red lips, clothes and other objects around me seemed. I’ve just done a quick 8 plate Ishihara colour blindness test. I scored 0/8 without the specs (so obviously colour blind), but 8/8 with them on (normal colour vision)!”

Despite these early testimonials, the pair thought that the O2Amp glasses would be primarily picked up by hospitals. The Hemo-Iso filter enhances variations along the yellow-blue dimension, which makes it easier for healthcare providers to see veins. “It’s a little scary to think about people drawing blood who can’t see the veins,” Barber says. EMT workers were enthusiastic users thanks to the Hemo-Iso’s capability of making bruising more visible.

From there, Barber and Changizi embarked on a two-year odyssey to find a manufacturer to make the eyewear that would enable them to sell commercially. Through 2AI Labs, they were able to push their discoveries into mainstream applications without having to rely on grants; any funding they earn from their inventions is reinvested. They also forewent some of the traditional development steps. “We bootstrapped the bench testing and we didn’t do any market research,” Barber says.

Plenty of cold calling to potential manufacturers ensued. “As scientists talking to manufacturers, it seemed like we were speaking a different language,” Barber says. Not to mention looking strange as they walked around wearing the purple and green-tinted glasses at trade shows. Changizi says they finally got lucky last year and found a few manufacturers able to produce the specialized specs. All are available on Amazon for just under $300.

Changizi and Barber aren’t done yet. In addition to overseeing sales reps who are trying to get the glasses into the hands of more buyers, the two are in talks with companies such as Oakley and Ray-Ban to put the technology into sunglasses. Imagine, says Changizi, if you could more easily see if you are getting a sunburn at the beach despite the glare. They’re testing a mirrored O2Amp lens specially for poker players (think: all the better to see the flush of a bluffer). Changizi says they are also working with cosmetics companies to embed the technology in creams that would enhance the skin’s vasculature. Move over Hope in a Jar. Barber says it’s not clear how profitable any of this will be yet: “We just want the technology to be used.”

http://www.popsci.com/science/article/2013-02/amazing-story-300-glasses-can-cure-colorblindness?page=2

New tools for posting to social media sites after death

Death already has a surprisingly vivid presence online. Social media sites are full of improvised memorials and outpourings of grief for loved ones, along with the unintentional mementos the departed leave behind in comments, photo streams and blog posts.

Now technology is changing death again, with tools that let you get in one last goodbye after your demise, or even more extensive communications from beyond the grave. People have long left letters for loved ones (and the rare nemesis) with estate lawyers to be delivered after death. But a new crop of startups will handle sending prewritten e-mails and posting to Facebook or Twitter once a person passes. One company is even toying with a service that tweets just like a specific person after they are gone. The field got a boost last week when the British show “Black Mirror” built a plot around similar tools, inspiring an article by The Guardian.

“It really allows you to be creative and literally extend the personality you had while alive in death,” said James Norris, founder of DeadSocial. “It allows you to be able to say those final goodbyes.”

DeadSocial covers all the post-death social media options, scheduling public Facebook posts, tweets and even LinkedIn posts to go out after someone has died. The free service will publish the text, video or audio messages directly from that person’s social media accounts, or it can send a series of scheduled messages in the future, say on an anniversary or a loved one’s birthday. For now, all DeadSocial messages will be public, but the company plans to add support for private missives in the future.

DeadSocial’s founders consulted with end of life specialists while developing their service. They compare the final result to the physical memory boxes sometimes created by terminally ill parents for their children. The boxes are filled with sentimental objects and memorabilia they want to share.

“It’s not physical, but there are unseen treasures that can be released over time,” Norris said of the posthumous digital messages.

Among the early beta users, Norris observed that younger participants were more likely to make jokes about their own deaths, while slightly older people created more sincere and emotional messages. He’s considered the potential for abuse but thinks the public nature of the messages will be a deterrent. The site also requires members to pick a trusted executor, and there is a limit of six messages per week.

“I don’t think that somebody would continually be negative and troll from the afterlife,” Norris said optimistically. “Nobody really wants to be remembered as a horrible person.”

The UK-based startup will only guarantee messages scheduled for the next 100 years, but in theory you can schedule them for 400 years, should your descendants be able to receive Facebook messages on their Google corneas. The company has only tested DeadSocial with a group of beta members, but it will finally launch the service for the public at the South by Southwest festival in March. Fittingly, the event will take place at the Museum of the Weird.

For those interested in sending more personal messages — confessions of love, apologies, “I told you so,” a map to buried treasure — there’s If I Die. This company will also post a public Facebook message when you die (the message goes up when at least three of your appointed trustees tell the service you’ve died), but it can also send out private messages to specific people over Facebook or via e-mail.
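The release trigger described above amounts to a simple quorum check. A minimal sketch in Python (the threshold of three trustees comes from the article; everything else, including the names, is assumed for illustration):

```python
# Toy quorum check: a goodbye message is released only once at least
# three distinct appointed trustees have confirmed the death.
REQUIRED_CONFIRMATIONS = 3

def should_release(confirming_trustees, required=REQUIRED_CONFIRMATIONS):
    """True when the number of distinct confirming trustees meets the
    threshold; duplicates from the same trustee don't count twice."""
    return len(set(confirming_trustees)) >= required

print(should_release({"alice", "bob"}))           # -> False
print(should_release({"alice", "bob", "carol"}))  # -> True
```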

Though If I Die has attracted a number of terminally ill members, the company’s founders think it could appeal to a much wider audience.

“Somebody that knows he’s about to die gets time to prepare himself; the big challenge is when it happens unexpectedly,” said Erez Rubinstein, a partner at If I Die.

The Israeli site launched in 2011 and already has 200,000 users. Most have opted to leave sentimental goodbyes, and written messages are more common than videos, according to the company. So far, the service is entirely free, but it plans to launch premium paid options in the future.

“It’s an era where most of your life and most of your presence is digital, and you want to have some control over it. You want to be in charge of how you are perceived afterward,” Rubinstein said.

A more extreme version of this type of control lies at the heart of _LivesOn, a new project with the catchy tag line “When your heart stops beating, you’ll keep tweeting.”

Still in the early stages, _LivesOn is a Twitter tool in development at Lean Mean Fighting Machine, an advertising agency in the United Kingdom. The agency is partnering with Queen Mary University of London to create Twitter accounts that post in the voice of a specific person, even after he or she has died.

When people sign up, the service will monitor their Twitter habits and patterns to learn what types of content they like and, in the future, possibly even learn to mimic their syntax. The tool will collect data and start populating a shadow Twitter account with a daily tweet that the algorithm determines matches the person’s habits and interests. Users can help train it with feedback and by favoriting tweets.
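A very crude version of this kind of style mimicry can be sketched with a first-order Markov chain over someone’s past tweets. This is a toy illustration only; _LivesOn’s actual algorithm has not been published:

```python
import random
from collections import defaultdict

random.seed(0)

def build_chain(tweets):
    """Map each word to the list of words that followed it in the corpus."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate(chain, start, max_words=10):
    """Walk the chain from a start word, picking random successors,
    until a dead end or the word limit is reached."""
    words = [start]
    while len(words) < max_words and chain[words[-1]]:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)

corpus = ["coffee first then code", "coffee first then meetings"]
chain = build_chain(corpus)
print(generate(chain, "coffee"))
```

A real system would work at a much richer level (topics, timing, links), but the principle—learn transition statistics from a user’s history, then sample from them—is the same.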

“It’s meant to be like a twin,” said Dave Bedwood, a partner at Lean Mean Fighting Machine.

In the short term, Bedwood and his team said it will serve as a nice content-recommendation engine. But eventually, in the more distant future, the goal is to have Twitter accounts that can carry on tweeting in the style and voice of the original account.

The people behind the project warn against expecting Twitter feeds fully powered by artificial intelligence, or worrying about Skynet, any time soon.

“People seem to think there’s a button you can press, and we’re going to raise all these people from the dead,” joked Bedwood, who has seen a huge spike in interest in the project over the past week. “People have a real faith in what technology can do.”

Artificial intelligence is still a long way from being able to simulate a specific individual, but recreating the limited slice of personality reflected in a Twitter feed is an interesting place to start.

The _LivesOn service is hoping to roll out to a limited number of test users at the end of March. As with the other services, _LivesOn will require that members choose an executor. At this point, it’s as much a thought experiment as an attempt to create a usable tool.

All these companies see the potential for technology to change how people think about death. Goodbye messages can help people left behind through the grieving process, but composing them can also be comforting to people who are uncomfortable with or afraid of death.

“We shy away from death. It reaches us before we approach it,” DeadSocial’s Norris said. “We’re using tech to soften the impact that death has and dehumanize it. It allows us to think about death in a more logical way and detach ourselves from it.”

The prospect of artificial intelligence, even in 140-character bursts, can also be comforting to people who see it as a way to live on.

“The afterlife is not a new idea, it’s been around for quite a long time with all the different versions of heaven and hell,” Lean Mean Fighting Machine’s Bedwood said. “To me this isn’t any stranger than any one of those. In fact, it might be less strange.”

http://www.cnn.com/2013/02/22/tech/social-media/death-and-social-media/index.html?hpt=hp_c2

New dress becomes transparent when wearer is aroused

A Netherlands-based fashion designer has created a high-tech dress line that turns clear when you get excited. How’s that for being transparent on a date?

Called Intimacy — from designer Daan Roosegaarde, founder of Studio Roosegaarde — the project aims to explore the relationship between technology and the body’s interactions. The dresses, called ‘Intimacy White’ and ‘Intimacy Black,’ are made out of opaque smart e-foils. When the body gets excited and the heart races, the foils turn clear.

The smart foils combine wireless technology, LED lights, copper and other materials. “Social interactions determine the garments’ level of transparency, creating a sensual play of disclosure,” the company says on its site.

Although the concept isn’t entirely new — the company has been working on prototypes since 2010 — its new 2.0 line has been making its rounds online in advance of Valentine’s Day. The dresses are currently on display privately in Hong Kong and Paris and will be shown at Kent State University in Ohio in September.

Studio Roosegaarde also has other high-tech garments in mind: “We’re currently working on a suit for men which becomes transparent when they lie.”

http://mashable.com/2013/02/06/transparent-dress/

Mind-meld brain power is best for steering spaceships

Two people have successfully steered a virtual spacecraft by combining the power of their thoughts – and their efforts were far more accurate than one person acting alone. One day groups of people hooked up to brain-computer interfaces (BCIs) might work together to control complex robotic and telepresence systems, maybe even in space.

A BCI system records the brain’s electrical activity using EEG signals, which are detected with electrodes attached to the scalp. Machine-learning software learns to recognise the patterns generated by each user as they think of a certain concept, such as “left” or “right”. BCIs have helped people with disabilities to steer a wheelchair, for example.

Researchers are discovering, however, that they get better results in some tasks by combining the signals from multiple BCI users. Until now, this “collaborative BCI” technique has been used in simple pattern-recognition tasks, but a team at the University of Essex in the UK wanted to test it more rigorously.

So they developed a simulator in which pairs of BCI users had to steer a craft towards the dead centre of a planet by thinking about one of eight directions that they could fly in, like using compass points. Brain signals representing the users’ chosen direction, as interpreted by the machine-learning system, were merged in real time and the spacecraft followed that path.

The results, to be presented at an Intelligent User Interfaces conference in California in March, strongly favoured two-brain navigation. Simulation flights were 67 per cent accurate for a single user, but 90 per cent on target for two users. And when coping with sudden changes in the simulated planet’s position, reaction times were halved, too. Combining signals reduces the random noise that dogs EEG recordings. “When you average signals from two people’s brains, the noise cancels out a bit,” says team member Riccardo Poli.

The technique can also compensate for a lapse in attention. “It is difficult to stay focused on the task at all times. So when a single user has momentary attention lapses, it matters. But when there are two users, a lapse by one will not have much effect, so you stay on target,” Poli says.
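The averaging effect Poli describes can be demonstrated numerically. The sketch below is a toy model, not the Essex team's actual pipeline: it reduces the eight-direction task to a continuous steering angle and assumes each user's decoded estimate is the true direction plus independent Gaussian noise.

```python
import math
import random

random.seed(42)

def simulate(n_trials=10000, noise_sd=0.5):
    """Compare steering error for one decoded brain signal vs. two averaged."""
    single_sq_err = 0.0
    merged_sq_err = 0.0
    for _ in range(n_trials):
        # Both users intend the same target direction (in radians);
        # EEG decoding adds independent noise to each user's estimate.
        target = random.uniform(0.0, 2.0 * math.pi)
        est_a = target + random.gauss(0.0, noise_sd)
        est_b = target + random.gauss(0.0, noise_sd)
        single_sq_err += (est_a - target) ** 2
        merged_sq_err += ((est_a + est_b) / 2.0 - target) ** 2
    # Root-mean-square error for a single user vs. the two-user average.
    return (math.sqrt(single_sq_err / n_trials),
            math.sqrt(merged_sq_err / n_trials))

rmse_single, rmse_merged = simulate()
# Averaging two independent noisy estimates shrinks RMSE by about 1/sqrt(2).
```

This is the standard statistical argument: averaging n independent noise sources reduces the noise standard deviation by a factor of sqrt(n), which is why the two-user craft tracks the target more tightly.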

NASA’s Jet Propulsion Lab in Pasadena, California, has been observing the work while itself investigating BCI’s potential for controlling planetary rovers, for example. But don’t hold your breath, says JPL senior research scientist Adrian Stoica. “While potential uses for space applications exist, in terms of uses for planetary rover remote control, this is still a speculative idea,” he says.

http://www.newscientist.com/article/mg21729025.600-mindmeld-brain-power-is-best-for-steering-spaceships.html

‘Scarecrow’ Gene: Key to Efficient Crops, Could Lead to Staple Crops With Much Higher Yields

Cross section of a mature maize leaf showing Kranz (German for wreath) anatomy around a large vein. The bundle sheath cells (lighter red) encircle the vascular core (light blue). Mesophyll cells (dark red) encircle the bundle sheath cells. The interaction and cooperation between the mesophyll and bundle sheath is essential for the C4 photosynthetic mechanism. (Credit: Thomas Slewinski)

With projections of 9.5 billion people by 2050, humankind faces the challenge of feeding modern diets to additional mouths while using the same amounts of water, fertilizer and arable land as today.

Cornell researchers have taken a leap toward meeting those needs by discovering a gene that could lead to new varieties of staple crops with 50 percent higher yields.

The gene, called Scarecrow, is the first discovered to control a special leaf structure, known as Kranz anatomy, which leads to more efficient photosynthesis. Plants photosynthesize using one of two methods: C3, a less efficient, ancient method found in most plants, including wheat and rice; and C4, a more efficient adaptation employed by grasses, maize, sorghum and sugarcane that is better suited to drought, intense sunlight, heat and low nitrogen.

“Researchers have been trying to find the underlying genetics of Kranz anatomy so we can engineer it into C3 crops,” said Thomas Slewinski, lead author of a paper that appeared online in November in the journal Plant and Cell Physiology. Slewinski is a postdoctoral researcher in the lab of senior author Robert Turgeon, professor of plant biology in the College of Arts and Sciences.

The finding “provides a clue as to how this whole anatomical key is regulated,” said Turgeon. “There’s still a lot to be learned, but now the barn door is open and you are going to see people working on this Scarecrow pathway.” The promise of transferring C4 mechanisms into C3 plants has been fervently pursued and funded on a global scale for decades, he added.

If C4 photosynthesis is successfully transferred to C3 plants through genetic engineering, farmers could grow wheat and rice in hotter, drier environments with less fertilizer, while possibly increasing yields by half, the researchers said.

C3 photosynthesis originated at a time in Earth’s history when the atmosphere had a high proportion of carbon dioxide. C4 plants have independently evolved from C3 plants some 60 times at different times and places. The C4 adaptation involves Kranz anatomy in the leaves, which includes a layer of special bundle sheath cells surrounding the veins and an outer layer of cells called mesophyll. Bundle sheath cells and mesophyll cells cooperate in a two-step version of photosynthesis, using different kinds of chloroplasts.

By looking closely at plant evolution and anatomy, Slewinski recognized that the bundle sheath cells in leaves of C4 plants were similar to endodermal cells that surrounded vascular tissue in roots and stems.

Slewinski suspected that if C4 leaves shared endodermal genes with roots and stems, the genetics that controlled those cell types may also be shared. Slewinski looked for experimental maize lines with mutant Scarecrow genes, which he knew governed endodermal cells in roots. When the researchers grew those plants, they first identified problems in the roots, then checked for abnormalities in the bundle sheath. They found that the leaves of Scarecrow mutants had abnormal and proliferated bundle sheath cells and irregular veins.

In all plants, an enzyme called RuBisCo facilitates a reaction that captures carbon dioxide from the air, the first step in producing sucrose, the energy-rich product of photosynthesis that powers the plant. But in C3 plants RuBisCo also facilitates a competing reaction with oxygen, creating a byproduct that has to be degraded, at a cost of about 30 to 40 percent of overall efficiency. In C4 plants, carbon dioxide fixation takes place in two stages. The first step occurs in the mesophyll, and the product of this reaction is shuttled to the bundle sheath for the RuBisCo step. The RuBisCo step is very efficient because in the bundle sheath cells the oxygen concentration is low and the carbon dioxide concentration is high. This eliminates the problem of the competing oxygen reaction, making the plant far more efficient.

The study was funded by the National Science Foundation and the U.S. Department of Agriculture.

http://www.sciencedaily.com/releases/2013/01/130124134051.htm