Astronomer Sir Martin Rees believes aliens have transitioned from organic forms to machines – and that humans will do the same.

The British astrophysicist and cosmologist Sir Martin Rees believes that if we manage to detect aliens, it will not be by stumbling across organic life, but by picking up a signal made by machines.

He argues that these machines will likely have evolved from organic alien beings, and that humans will also make the transition from biological to mechanical in the future.

Sir Martin said that while the way we think has led to all culture and science on Earth, it will be a brief precursor to more powerful machine ‘brains’.

He thinks that life away from Earth has probably already gone through this transition from organic to machine.

On a planet orbiting a star far older than the sun, life ‘may have evolved much of the way toward a dominant machine intelligence,’ he writes.

Sir Martin believes it could be one or two more centuries before humans are overtaken by machine intelligence, which will then evolve over billions of years, either with us, or replacing us.

‘This suggests that if we were to detect ET, it would be far more likely to be inorganic: We would be most unlikely to “catch” alien intelligence in the brief sliver of time when it was still in organic form,’ he writes.

Despite this, the astronomer said SETI searches are worthwhile, because the stakes are so high.

SETI seeks out electromagnetic transmissions thought to be artificial in origin, but even if it did hit the jackpot and detect a possible message sent by aliens, Sir Martin says it is unlikely we would be able to decode it.

He thinks such a signal would probably be a byproduct or malfunction of a complex machine far beyond our understanding that could trace its lineage back to organic alien beings, which may still exist on a planet, or have died out.

He also points out that even if intelligence is widespread across the cosmos, we may only ever recognise a fraction of it because ‘brains’ may take a form unrecognisable to humans.

For example, instead of being an alien civilisation, ET may be a single integrated intelligence.

He mused that the galaxy may already teem with advanced life and that our descendants could ‘plug in’ to a galactic community.

http://www.dailymail.co.uk/sciencetech/article-3285966/Is-ET-ROBOT-Astronomer-Royal-believes-aliens-transitioned-organic-forms-machines-humans-same.html

Scientists encode memories in a way that bypasses damaged brain tissue

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have developed a brain prosthesis designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If there’s damage at any region that prevents this translation, then there is the possibility that long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from a long time ago – things that were already translated into long-term memories before the brain damage occurred – but have difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.
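To make that "translation without understanding" concrete, here is a minimal sketch, assuming nothing about the real device: a regression model is fitted to paired recordings so that activity in one region predicts the pattern the next region should receive. The team's actual model is a far more sophisticated nonlinear multi-input multi-output system; the data, dimensions and ridge-regression choice below are all invented for illustration.

```python
# A minimal sketch of translation-without-understanding. The team's actual
# model is a nonlinear multi-input multi-output system; everything here
# (data, dimensions, ridge regression) is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired recordings: binned spike counts per trial.
n_trials, n_in, n_out = 500, 16, 8
X = rng.poisson(2.0, size=(n_trials, n_in)).astype(float)        # upstream region
W_true = rng.normal(size=(n_in, n_out))                          # unknown biology
Y = X @ W_true + rng.normal(scale=0.5, size=(n_trials, n_out))   # downstream region

# Fit the "translator" (ridge regression, closed form).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_in), X.T @ Y)

# A prosthesis would apply W to live upstream activity and stimulate the
# downstream electrodes with the predicted pattern, bypassing the damage.
x_live = rng.poisson(2.0, size=(1, n_in)).astype(float)
print("predicted downstream pattern:", np.round(x_live @ W, 2))
```

Neither language needs to be "understood": the model only learns the mapping between the two signals, which is exactly the point of Berger's analogy.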

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed those signals into the model and read how the signals generated from the first region of the hippocampus were translated into signals generated by the second region of the hippocampus.

In hundreds of trials conducted with nine patients, the algorithm predicted how the signals would be translated with about 90 percent accuracy.

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions, to try to bypass the damage and enable the formation of an accurate long-term memory.

http://medicalxpress.com/news/2015-09-scientists-bypass-brain-re-encoding-memories.html

Amazing photo technology

Ever wonder how they ID’d the Boston bombers in a few days? This may help you understand what the government is looking at. This photo was taken in Vancouver, Canada, and shows about 700,000 people.

Hard to disappear in a crowd. Pick a small part of the crowd, click a couple of times, wait, then click a few more times and see how clear each individual face becomes. Or use the wheel on your mouse.

This picture was taken with a 70,000 x 30,000 pixel camera (2,100 megapixels). These cameras are not sold to the public and are being installed in strategic locations. The camera can identify a face among a multitude of people.
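For what it’s worth, the stated resolution does check out as simple arithmetic (though in practice images this large are usually stitched together from many exposures rather than captured in one shot):

```python
# Sanity check of the claimed resolution.
width, height = 70_000, 30_000
pixels = width * height
print(f"{pixels:,} pixels = {pixels / 1e6:,.0f} megapixels")
# 2,100,000,000 pixels = 2,100 megapixels (about 2.1 gigapixels)
```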

Place your computer’s cursor in the mass of people and double-click a couple of times. It is not so easy to hide in a crowd anymore.

http://www.gigapixel.com/mobile/?id=79995

Thanks to Pete Cuomo for bringing this to the It’s Interesting community.

Paralyzed man walks again, using only his mind.


Paraplegic Adam Fritz works out with Kristen Johnson, a spinal cord injury recovery specialist, at the Project Walk facility in Claremont, California on September 24. A brain-to-computer technology that can translate thoughts into leg movements has enabled Fritz, paralyzed from the waist down by a spinal cord injury, to become the first such patient to walk without the use of robotics.

It’s a technology that sounds lifted from the latest Marvel movie—a brain-computer interface functional electrical stimulation (BCI-FES) system that enables paralyzed users to walk again. But thanks to neurologists, biomedical engineers and other scientists at the University of California, Irvine, it’s very much a reality, though admittedly with only one successful test subject so far.

The team, led by Zoran Nenadic and An H. Do, built a device that translates brain waves into electrical signals that can bypass the damaged region of a paraplegic’s spine and go directly to the muscles, stimulating them to move. To test it, they recruited 28-year-old Adam Fritz, who had lost the use of his legs five years earlier in a motorcycle accident.

Fritz first had to learn how exactly he’d been telling his legs to move for all those years before his accident. The research team fitted him with an electroencephalogram (EEG) cap that read his brain waves as he visualized moving an avatar in a virtual reality environment. After hours of training on the video game, he eventually figured out how to signal “walk.”

The next step was to transfer that newfound skill to his legs. The scientists wired up the EEG device so that it would send electrical signals to the muscles in Fritz’s leg. And then, along with physical therapy to strengthen his legs, he would practice walking—his legs suspended a few inches off the ground—using only his brain (and, of course, the device). On his 20th visit, Fritz was finally able to walk using a harness that supported his body weight and prevented him from falling. After a little more practice, he walked using just the BCI-FES system. After 30 trials run over a period of 19 weeks, he could successfully walk through a 12-foot-long course.
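The article gives no implementation detail, but the loop it describes (read EEG, decode a binary walk/stop intent, drive the stimulator) can be sketched as follows. Everything here is an assumption rather than the UC Irvine design: the mu-band power feature, the threshold, and the `stimulate_legs` placeholder are purely illustrative.

```python
# Assumptions throughout: EEG arrives in one-second windows, a mu-band
# power threshold stands in for the trained decoder, and stimulate_legs
# is a placeholder for the functional electrical stimulator.
import numpy as np

FS = 256                                  # assumed sampling rate, Hz

def band_power(window, lo, hi, fs=FS):
    """Average spectral power of one EEG channel between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return spectrum[(freqs >= lo) & (freqs < hi)].mean()

def decode_intent(window, threshold):
    """Binary walk/stop decision: imagined walking typically suppresses
    mu-band (8-12 Hz) power over motor cortex."""
    return "walk" if band_power(window, 8, 12) < threshold else "stop"

def stimulate_legs(command):
    # Placeholder for the FES hardware interface.
    print("FES:", "pulse train on" if command == "walk" else "off")

rng = np.random.default_rng(1)
window = rng.normal(size=FS)              # simulated one-second EEG window
stimulate_legs(decode_intent(window, threshold=30.0))
```

As Moritz notes below, a decoder this simple only delivers an on/off switch, which is precisely the limitation he raises.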

As encouraging as the trial sounds, there are experts who suggest the design has limitations. “It appears that the brain EEG signal only contributed a walk or stop command,” says Dr. Chet Moritz, an associate professor of rehab medicine, physiology and biophysics at the University of Washington. “This binary signal could easily be provided by the user using a sip-puff straw, eye-blink device or many other more reliable means of communicating a simple ‘switch.’”

Moritz believes it’s unlikely that an EEG alone would be reliable enough to extract any more specific input from the brain while the test subject is walking. In other words, it might not be able to do much more beyond beginning and ending a simple motion like moving your legs forward—not so helpful in stepping over curbs or turning a corner in a hallway.

The UC Irvine team hopes to improve the capability of its technology. A simplified version of the system has the potential to work as a means of noninvasive rehabilitation for a wide range of paralytic conditions, from less severe spinal cord injuries to stroke and multiple sclerosis.

“Once we’ve confirmed the usability of this noninvasive system, we can look into invasive means, such as brain implants,” said Nenadic in a statement announcing the project’s success. “We hope that an implant could achieve an even greater level of prosthesis control because brain waves are recorded with higher quality. In addition, such an implant could deliver sensation back to the brain, enabling the user to feel their legs.”

http://www.newsweek.com/paralyzed-man-walks-again-using-only-his-mind-379531

Soon everyone you know will be able to rate you on the new ‘Yelp for people.’

You can already rate restaurants, hotels, movies, college classes, government agencies and bowel movements online.

So the most surprising thing about Peeple — basically Yelp, but for humans — may be the fact that no one has yet had the gall to launch something like it.

When the app does launch, probably in late November, you will be able to assign reviews and one- to five-star ratings to everyone you know: your exes, your co-workers, the old guy who lives next door. You can’t opt out — once someone puts your name in the Peeple system, it’s there unless you violate the site’s terms of service. And you can’t delete bad or biased reviews — that would defeat the whole purpose.

Imagine every interaction you’ve ever had suddenly open to the scrutiny of the Internet public.

“People do so much research when they buy a car or make those kinds of decisions,” said Julia Cordray, one of the app’s founders. “Why not do the same kind of research on other aspects of your life?”

This is, in a nutshell, Cordray’s pitch for the app — the one she has been making to development companies, private shareholders, and Silicon Valley venture capitalists. (As of Monday, the company’s shares put its value at $7.6 million.)

A bubbly, no-holds-barred “trendy lady” with a marketing degree and two recruiting companies, Cordray sees no reason you wouldn’t want to “showcase your character” online. Co-founder Nicole McCullough comes at the app from a different angle: As a mother of two in an era when people don’t always know their neighbors, she wanted something to help her decide whom to trust with her kids.

Given the importance of those kinds of decisions, Peeple’s “integrity features” are fairly rigorous — as Cordray will reassure you, in the most vehement terms, if you raise any concerns about shaming or bullying on the service. To review someone, you must be 21 and have an established Facebook account, and you must make reviews under your real name.

You must also affirm that you “know” the person in one of three categories: personal, professional or romantic. To add someone to the database who has not been reviewed before, you must have that person’s cell phone number.

Positive ratings post immediately; negative ratings are queued in a private inbox for 48 hours in case of disputes. If you haven’t registered for the site, and thus can’t contest those negative ratings, your profile only shows positive reviews.

On top of that, Peeple has outlawed a laundry list of bad behaviors, including profanity, sexism and mention of private health conditions.
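Taken together, those rules amount to a small piece of gating logic. The sketch below models them under stated assumptions: Peeple never published an API, the three-star cutoff between "positive" and "negative" is a guess, and the banned-content rules would require human moderation that no simple check can capture.

```python
# Hypothetical throughout: Peeple never published an API, the three-star
# cutoff between positive and negative is a guess, and the banned-content
# rules (profanity, sexism, health conditions) need human moderation.
from dataclasses import dataclass
from datetime import datetime, timedelta

KNOW_CATEGORIES = {"personal", "professional", "romantic"}

@dataclass
class Review:
    author_age: int
    author_has_facebook: bool
    author_real_name: str
    category: str
    stars: int                     # 1 to 5
    subject_in_database: bool
    subject_phone_known: bool
    subject_registered: bool

def can_post(review: Review) -> bool:
    """The gating rules as reported: age, identity, and relationship."""
    if review.author_age < 21 or not review.author_has_facebook:
        return False
    if not review.author_real_name or review.category not in KNOW_CATEGORIES:
        return False
    # Adding someone new to the database requires their cell phone number.
    if not review.subject_in_database and not review.subject_phone_known:
        return False
    return True

def visible_at(review: Review, posted: datetime) -> datetime:
    """Positive ratings post immediately; negative ones wait 48 hours,
    and never appear at all if the subject has not registered."""
    if review.stars >= 3:                      # assumed positive cutoff
        return posted
    if not review.subject_registered:
        return datetime.max                    # effectively never shown
    return posted + timedelta(hours=48)
```

Notice what the logic cannot express: there is no rule anywhere for the subject's consent, which is the critique the rest of the piece develops.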

“As two empathetic, female entrepreneurs in the tech space, we want to spread love and positivity,” Cordray stressed. “We want to operate with thoughtfulness.”

Unfortunately for the millions of people who could soon find themselves the unwilling subjects — make that objects — of Cordray’s app, her thoughts do not appear to have shed light on certain very critical issues, such as consent and bias and accuracy and the fundamental wrongness of assigning a number value to a person.

To borrow from the technologist and philosopher Jaron Lanier, Peeple is indicative of a sort of technology that values “the information content of the web over individuals;” it’s so obsessed with the perceived magic of crowd-sourced data that it fails to see the harms to ordinary people.

Where to even begin with those harms? There’s no way such a rating could ever accurately reflect the person in question: Even putting issues of personality and subjectivity aside, all rating apps, from Yelp to Rate My Professor, have a demonstrated problem with self-selection. (The only people who leave reviews are the ones who love or hate the subject.) In fact, as repeat studies of Rate My Professor have shown, ratings typically reflect the biases of the reviewer more than they do the actual skills of the teacher: On RMP, professors whom students consider attractive are way more likely to be given high ratings, and men and women are evaluated on totally different traits.

“Summative student ratings do not look directly or cleanly at the work being done,” the academic Edward Nuhfer wrote in 2010. “They are mixtures of affective feelings and learning.”

But at least student ratings have some logical and economic basis: You paid thousands of dollars to take that class, so you’re justified and qualified to evaluate the transaction. Peeple suggests a model in which everyone is justified in publicly evaluating everyone they encounter, regardless of their exact relationship.

It’s inherently invasive, even when complimentary. And it’s objectifying and reductive in the manner of all online reviews. One does not have to stretch far to imagine the distress and anxiety that such a system would cause even a slightly self-conscious person; it’s not merely the anxiety of being harassed or maligned on the platform — but of being watched and judged, at all times, by an objectifying gaze to which you did not consent.

Where once you may have viewed a date or a teacher conference as a private encounter, Peeple transforms it into a radically public performance: Everything you do can be judged, publicized, recorded.

“That’s feedback for you!” Cordray enthuses. “You can really use it to your advantage.”

That justification hasn’t worked out so well, though, for the various edgy apps that have tried it before. In 2013, Lulu promised to empower women by letting them review their dates, and to empower men by letting them see their scores.

After a tsunami of criticism — “creepy,” “toxic,” “gender hate in a prettier package” — Lulu added an automated opt-out feature to let men pull their names off the site. A year later, Lulu further relented by letting users rate only those men who opt in. In its current iteration, 2013’s most controversial start-up is basically a minor dating app.

That windy path is possible for Peeple too, Cordray says: True to her site’s radical philosophy, she has promised to take any and all criticism as feedback. If beta testers demand an opt-out feature, she’ll delay the launch date and add that in. If users feel uncomfortable rating friends and partners, maybe Peeple will professionalize: think Yelp meets LinkedIn. Right now, it’s Yelp for all parts of your life; that’s at least how Cordray hypes it on YouTube, where she’s publishing a reality Web series about the app’s process.

“It doesn’t matter how far apart we are in likes or dislikes,” she tells some bro at a bar in episode 10. “All that matters is what people say about us.”

It’s a weirdly dystopian vision to deliver to a stranger at a sports bar: In Peeple’s future, Cordray’s saying, the way some amorphous online “crowd” sees you will be definitively who you are.

https://www.washingtonpost.com/news/the-intersect/wp/2015/09/30/everyone-you-know-will-be-able-to-rate-you-on-the-terrifying-yelp-for-people-whether-you-want-them-to-or-not/

In 72 hours, a deep learning machine teaches itself to play chess at International Master level by evaluating the board rather than using brute force to work out every possible move – a computer first.

It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

But while computers have become faster, the way chess engines work has not changed. Their power relies on brute force, the process of searching through all possible future moves to find the best next one.

Of course, no human can match that or come anywhere close. While Deep Blue was searching some 200 million positions per second, Kasparov was probably searching no more than five a second. And yet he played at essentially the same level. Clearly, humans have a trick up their sleeve that computers have yet to master.

This trick is in evaluating chess positions and narrowing down the most profitable avenues of search. That dramatically simplifies the computational task because it prunes the tree of all possible moves to just a few branches.
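Chess programs do use a weak form of this trick: alpha-beta search prunes branches an opponent would never permit, guided by an evaluation function that also orders the moves. The sketch below shows the mechanics using the python-chess package; its material-count evaluation is a deliberately crude stand-in for the learned evaluators discussed here, and the article's point is that the quality of that evaluation determines how much pruning you can get away with.

```python
# Requires the python-chess package. The material-count evaluation is a
# deliberately crude stand-in; the search is standard negamax alpha-beta.
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Material balance from the side-to-move's point of view."""
    return sum(PIECE_VALUES[p.piece_type] * (1 if p.color == board.turn else -1)
               for p in board.piece_map().values())

def search(board: chess.Board, depth: int, alpha=-10**9, beta=10**9) -> int:
    if depth == 0 or board.is_game_over():
        return evaluate(board)

    def quick_score(move):                 # shallow look-ahead for ordering
        board.push(move)
        score = -evaluate(board)
        board.pop()
        return score

    # Trying promising moves first makes the beta cutoffs fire early,
    # which is what prunes most of the tree.
    for move in sorted(board.legal_moves, key=quick_score, reverse=True):
        board.push(move)
        value = -search(board, depth - 1, -beta, -alpha)
        board.pop()
        if value >= beta:
            return beta                    # opponent would never allow this line
        alpha = max(alpha, value)
    return alpha

print(search(chess.Board(), depth=3))      # typically 0: no material can be forced
```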

Computers have never been good at this, but today that changes thanks to the work of Matthew Lai at Imperial College London. Lai has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much as humans do, in an entirely different way from conventional chess engines.

Straight out of the box, the new machine plays at the same level as the best conventional chess engines, many of which have been fine-tuned over many years. On a human level, it is equivalent to FIDE International Master status, placing it within the top 2.2 percent of tournament chess players.

The technology behind Lai’s new machine is a neural network, a way of processing information inspired by the human brain. It consists of several layers of nodes whose connections change as the system is trained. This training process uses lots of examples to fine-tune the connections so that the network produces a specific output given a certain input, to recognize the presence of a face in a picture, for example.

In the last few years, neural networks have become hugely powerful thanks to two advances. The first is a better understanding of how to fine-tune these networks as they learn, thanks in part to much faster computers. The second is the availability of massive annotated datasets to train the networks.

That has allowed computer scientists to train much bigger networks organized into many layers. These so-called deep neural networks have become hugely powerful and now routinely outperform humans in pattern recognition tasks such as face recognition and handwriting recognition.

So it’s no surprise that deep neural networks ought to be able to spot patterns in chess and that’s exactly the approach Lai has taken. His network consists of four layers that together examine each position on the board in three different ways.

The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on. The second looks at piece-centric features such as the location of each piece on each side, while the third maps the squares that each piece attacks and defends.
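Using the python-chess package, those three feature groups could be read off a position roughly as follows. Giraffe's exact feature layout is more elaborate; this sketch only shows where each group of numbers comes from.

```python
# A rough sketch of the three feature groups, using the python-chess
# package. Giraffe's real feature layout differs in its details.
import chess

def global_features(board: chess.Board):
    # Group 1: piece counts per side, side to move, castling rights.
    counts = [len(board.pieces(piece_type, color))
              for color in (chess.WHITE, chess.BLACK)
              for piece_type in chess.PIECE_TYPES]
    flags = [int(board.turn == chess.WHITE),
             int(board.has_kingside_castling_rights(chess.WHITE)),
             int(board.has_queenside_castling_rights(chess.WHITE)),
             int(board.has_kingside_castling_rights(chess.BLACK)),
             int(board.has_queenside_castling_rights(chess.BLACK))]
    return counts + flags

def piece_features(board: chess.Board):
    # Group 2: the location of every piece on the board.
    return [(piece.symbol(), square) for square, piece in board.piece_map().items()]

def attack_defend_maps(board: chess.Board):
    # Group 3: for each occupied square, the squares its piece attacks or defends.
    return {square: list(board.attacks(square)) for square in board.piece_map()}

board = chess.Board()
print(len(global_features(board)), "global features on the start position")
```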

Lai trains his network with a carefully generated set of data taken from real chess games. This data set must have the correct distribution of positions. “For example, it doesn’t make sense to train the system on positions with three queens per side, because those positions virtually never come up in actual games,” he says.

It must also include plenty of unequal positions beyond those that usually occur in top-level chess games. That’s because although unequal positions rarely arise in real chess games, they crop up all the time in the searches that the computer performs internally.

And this data set must be huge. The massive number of connections inside a neural network has to be fine-tuned during training, and this can only be done with a vast dataset. Use a dataset that is too small and the network can settle into a state that fails to recognize the wide variety of patterns that occur in the real world.

Lai generated his dataset by randomly choosing five million positions from a database of computer chess games. He then created greater variety by adding a random legal move to each position before using it for training. In total he generated 175 million positions in this way.
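That generation step is easy to sketch with python-chess. The five-million-position source database is not reproduced here, and the 35-fold multiplier is only an inference from the reported numbers (175 million from 5 million).

```python
# Sketch of the dataset-expansion step, using the python-chess package.
# `source_positions` stands in for the five million positions sampled
# from a database of computer games.
import random
import chess

def perturb(fen: str, rng: random.Random) -> str:
    """Return the position reached after one random legal move, as FEN."""
    board = chess.Board(fen)
    moves = list(board.legal_moves)
    if not moves:                      # terminal position: keep it unchanged
        return fen
    board.push(rng.choice(moves))
    return board.fen()

rng = random.Random(42)
source_positions = [chess.Board().fen()]   # placeholder for the real database
# 175 million from 5 million suggests several random variations per source
# position; 35 here is only an inference from those two numbers.
training_positions = [perturb(fen, rng)
                      for fen in source_positions for _ in range(35)]
print(len(training_positions), "derived positions")
```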

The usual way of training these machines is to manually evaluate every position and use this information to teach the machine to recognize those that are strong and those that are weak.

But this is a huge task for 175 million positions. It could have been done by another chess engine, but Lai’s goal was more ambitious: he wanted the machine to learn by itself.

Instead, he used a bootstrapping technique in which Giraffe played against itself with the goal of improving its prediction of its own evaluation of a future position. That works because there are fixed reference points that ultimately determine the value of a position—whether the game is later won, lost or drawn.

In this way, the computer learns which positions are strong and which are weak.
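The bootstrapping idea can be shown in miniature: after each self-play game, nudge the evaluation of every visited position toward the evaluation of the position that followed it, with the final result anchoring the end of the chain. Giraffe does this with a neural network via temporal-difference methods; the lookup-table version below, which also omits side-to-move sign handling, is only a toy.

```python
# A lookup table replaces Giraffe's neural network, and sign handling for
# alternating sides is omitted, to keep the toy short.
def td_update(values, positions, final_result, alpha=0.1):
    """Move each position's value toward the value of the position that
    followed it; the game result anchors the last position.

    values: dict position -> estimated score in [-1, 1]
    positions: sequence of positions from one self-play game
    final_result: +1 win, 0 draw, -1 loss (first player's view)"""
    targets = [values.get(p, 0.0) for p in positions[1:]] + [final_result]
    for position, target in zip(positions, targets):
        old = values.get(position, 0.0)
        values[position] = old + alpha * (target - old)
    return values

values = td_update({}, ["opening", "middlegame", "endgame"], final_result=1.0)
print(values)   # the final position moves furthest toward the win
```

Repeated over many games, the win/loss/draw outcomes propagate backwards through the chain, which is how fixed reference points teach the evaluator which positions are strong.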

Having trained Giraffe, the final step is to test it and here the results make for interesting reading. Lai tested his machine on a standard database called the Strategic Test Suite, which consists of 1,500 positions that are chosen to test an engine’s ability to recognize different strategic ideas. “For example, one theme tests the understanding of control of open files, another tests the understanding of how bishop and knight’s values change relative to each other in different situations, and yet another tests the understanding of center control,” he says.

The results of this test are scored out of 15,000.

Lai uses this to test the machine at various stages during its training. As the bootstrapping process begins, Giraffe quickly reaches a score of 6,000 and eventually peaks at 9,700 after only 72 hours. Lai says that matches the best chess engines in the world.

“[That] is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” he adds.

Lai goes on to use the same kind of machine learning approach to determine the probability that a given move is likely to be worth pursuing. That’s important because it prevents unnecessary searches down unprofitable branches of the tree and dramatically improves computational efficiency.

Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three ranking 70 percent of the time. So the computer doesn’t have to bother with most of the other moves.

That’s interesting work that represents a major change in the way chess engines work. It is not perfect, of course. One disadvantage of Giraffe is that neural networks are much slower than other types of data processing. Lai says Giraffe takes about 10 times longer than a conventional chess engine to search the same number of positions.

But even with this disadvantage, it is competitive. “Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC,” says Lai. By comparison, the top engines play at super-Grandmaster level.

That’s still impressive. “Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” says Lai. “This is especially important in the opening and end game phases, where it plays exceptionally well.”

And this is only the start. Lai says it should be straightforward to apply the same approach to other games. One that stands out is the traditional Chinese game of Go, where humans still hold an impressive advantage over their silicon competitors. Perhaps Lai could have a crack at that next.

http://www.technologyreview.com/view/541276/deep-learning-machine-teaches-itself-chess-in-72-hours-plays-at-international-master/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Scientists implant memories into the brains of mice while they sleep

Sleeping minds: prepare to be hacked. For the first time, conscious memories have been implanted into the minds of mice while they sleep. The same technique could one day be used to alter memories in people who have undergone traumatic events.

When we sleep, our brain replays the day’s activities. The pattern of brain activity exhibited by mice when they explore a new area during the day, for example, will reappear, speeded up, while the animal sleeps. This is thought to be the brain practising an activity – an essential part of learning. People who miss out on sleep do not learn as well as those who get a good night’s rest, and when the replay process is disrupted in mice, so too is their ability to remember what they learned the previous day.

Karim Benchenane and his colleagues at the Industrial Physics and Chemistry Higher Educational Institution in Paris, France, hijacked this process to create new memories in sleeping mice. The team targeted the rodents’ place cells – neurons that fire in response to being in or thinking about a specific place. These cells are thought to help us form internal maps, and their discoverers won a Nobel prize last year.

Benchenane’s team used electrodes to monitor the activity of mice’s place cells as the animals explored an enclosed arena, and in each mouse they identified a cell that fired only in a certain arena location. Later, when the mice were sleeping, the researchers monitored the animals’ brain activity as they replayed the day’s experiences. A computer recognised when the specific place cell fired; each time it did, a separate electrode would stimulate brain areas associated with reward.
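The closed loop is conceptually simple, as this schematic sketch shows: scan the spike stream for the chosen place cell and fire the reward stimulation whenever it appears. The interfaces are hypothetical placeholders; the real experiment ran on dedicated spike-detection and stimulation hardware.

```python
# Schematic only; all interfaces here are hypothetical placeholders.
def closed_loop(spike_events, target_cell, stimulate):
    """Fire the reward stimulator every time the chosen place cell spikes."""
    for timestamp, cell_id in spike_events:      # (seconds, unit id) pairs
        if cell_id == target_cell:
            stimulate(timestamp)                 # pulse the reward pathway

stim_times = []
spikes = [(0.10, 3), (0.25, 7), (0.31, 7), (0.50, 1)]
closed_loop(spikes, target_cell=7, stimulate=stim_times.append)
print(stim_times)                                # [0.25, 0.31]
```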

When the mice awoke, they made a beeline for the location represented by the place cell that had been linked to a rewarding feeling in their sleep. A brand new memory – linking a place with reward – had been formed.

It is the first time a conscious memory has been created in animals during sleep. In recent years, researchers have been able to form subconscious associations in sleeping minds – smokers keen to quit can learn to associate cigarettes with the smells of rotten eggs and fish in their sleep, for example.

Previous work suggested that if this kind of subconscious learning had occurred in Benchenane’s mice, they would have explored the arena in a random manner, perhaps stopping at the reward-associated location. But these mice headed straight for the location, suggesting a conscious memory. “The mouse develops a goal-directed behaviour to go towards the place,” says Benchenane. “It proves that it’s not an automatic behaviour. What we create is an association between a particular place and a reward that can be consciously accessed by the mouse.”

“The mouse is remembering enough abstract information to think ‘I want to go to a certain place’, and go there when it wakes up,” says neuroscientist Neil Burgess at University College London. “It’s a bigger breakthrough [than previous studies] because it really does show what the man in the street would call a memory – the ability to bring to mind abstract knowledge which can guide behaviour in a directed way.”

Benchenane doesn’t think the technique can be used to implant many other types of memories, such as skills – at least for the time being. Spatial memories are easier to modify because they are among the best understood.

His team’s findings also provide some of the strongest evidence for the way in which place cells work. It is almost impossible to test whether place cells function as an internal map while animals are awake, says Benchenane, because these animals also use external cues, such as landmarks, to navigate. By specifically targeting place cells while the mouse is asleep, the team were able to directly test theories that specific cells represent specific places.

“Even when those place cells fire in sleep, they still convey spatial information,” says Benchenane. “That provides evidence that when you’ve got activation of place cells during the consolidation of memories in sleep, you’ve got consolidation of the spatial information.”

Benchenane hopes that his technique could be developed to help alter people’s memories, perhaps of traumatic events.

Loren Frank at the University of California, San Francisco, agrees. “I think this is a really important step towards helping people with memory impairments or depression,” he says. “It is surprising to me how many neurological and psychiatric illnesses have something to do with memory, including schizophrenia and obsessive compulsive disorder.”

“In principle, you could selectively change brain processing during sleep to soften memories or change their emotional content,” he adds.

Journal reference: Nature Neuroscience, doi:10.1038/nn.3970

http://www.newscientist.com/article/dn27115-new-memories-implanted-in-mice-while-they-sleep.html#.VP_L9uOVquD

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Mind-controlled drones promise a future of hands-free flying

There have been tentative steps into thought-controlled drones in the past, but Tekever and a team of European researchers just kicked things up a notch. They’ve successfully tested Brainflight, a project that uses your mental activity (detected through a cap) to pilot an unmanned aircraft. You have to learn how to fly on your own, but it doesn’t take long before you’re merely thinking about where you want to go. And don’t worry about crashing because of distractions or mental trauma, like seizures — there are “algorithms” to prevent the worst from happening.

You probably won’t be using Brainflight to fly anything larger than a small drone, at least not in the near future. There’s no regulatory framework that would cover mind-controlled aircraft, after all. Tekever is hopeful that its technology will change how we approach transportation, though. It sees brain power reducing complex activities like flying or driving to something you can do instinctively, like walking — you’d have freedom to focus on higher-level tasks like navigation. The underlying technology would also let people with injuries and physical handicaps steer vehicles and their own prosthetic limbs. Don’t be surprised if you eventually need little more than some headgear to take to the skies.

http://www.engadget.com/2015/02/25/tekever-mind-controlled-drone/?ncid=rss_truncated

The eternity drive: Why DNA could be the future of data storage

By Peter Shadbolt, for CNN

How long will the data on your hard drive or USB stick last? Five years? 10 years? Longer?

Already a storage company called Backblaze is running 25,000 hard drives simultaneously to get to the bottom of the question. As each hard drive coughs its last, the company replaces it and logs its lifespan.

While this census has only been running five years, the statistics show a 22% attrition rate over four years.

Some may last longer than a decade, the company says, others may last little more than a year; but the short answer is that storage devices don’t last forever.

Science is now looking to nature, however, for a way to store data that will last for millions of years.

Researchers at ETH Zurich, in Switzerland, believe the answer may lie in the data storage system that exists in every living cell: DNA.

So compact and complex are its strands that just 1 gram of DNA is theoretically capable of containing all the data of internet giants such as Google and Facebook, with room to spare.

In data storage terms, that gram would be capable of holding 455 exabytes, where one exabyte is equivalent to a billion gigabytes.
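That figure can be sanity-checked with back-of-envelope arithmetic, assuming two bits per nucleotide and an average single-strand nucleotide mass of roughly 330 g/mol (both approximations):

```python
# Back-of-envelope check, not the paper's calculation.
AVOGADRO = 6.022e23      # nucleotides per mole
MASS_PER_NT = 330.0      # g/mol, rough average for single-stranded DNA
BITS_PER_NT = 2          # A, C, G, T can encode two bits each

nucleotides_per_gram = AVOGADRO / MASS_PER_NT      # about 1.8e21
bits_per_gram = nucleotides_per_gram * BITS_PER_NT
exabytes_per_gram = bits_per_gram / 8 / 1e18
print(f"{exabytes_per_gram:.0f} exabytes per gram")  # about 456
```

That lands within rounding distance of the 455-exabyte figure.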

Fossilization has been known to preserve DNA in strands long enough to recover an animal’s entire genome — the complete set of genes present in a cell or organism.

So far, scientists have extracted and sequenced the genome of a 110,000-year-old polar bear and more recently a 700,000-year-old horse.

Robert Grass, a lecturer at ETH Zurich’s Department of Chemistry and Applied Biosciences, said the problem with DNA is that it degrades quickly. The project, he said, aimed to combine the high storage density possible with DNA with the stability of the DNA found in fossils.

“We have found elegant ways of making DNA very stable,” he told CNN. “So we wanted to combine these two stories — to get the high storage density of DNA and combine it with the archaeological aspects of DNA.”

The synthetic process of preserving DNA actually mimics processes found in nature.

As with fossils, keeping the DNA cool, dry and encased — in this case, in microscopic spheres of glass — could keep the information contained in its strands intact for thousands of years.

“The time limit with DNA in fossils is about 700,000 years but people speculate about finding one-million-year storage of genomic material in fossil bones,” he said.

“We were able to show that decay of our DNA and store of information decays at the same rate as the fossil DNA so we get to similar time frames of close to a million years.”

Fresh fossil discoveries are throwing up new surprises about the preservation of DNA.

Human bones discovered in the Sima de los Huesos cave network in Spain show maternally inherited “mitochondrial” DNA that is 400,000 years old — a new record for human remains.

The fact that the DNA survived in the relatively cool climate of a cave — rather than in a frozen environment, as with the DNA extracted from mammoth remains in Siberia — has added to the mystery about DNA longevity.

“A lot of it is not really known,” Grass says. “What we’re trying to understand is how DNA decays and what the mechanisms are to get more insight into that.”

What is known is that water and oxygen are the enemy of DNA survival. DNA in a test tube and exposed to air will last little more than two to three years. Encasing it in glass — an inert, neutral agent — and cooling it increases its chances of survival.

Grass says sol-gel technology, which produces solid materials from small molecules, has made it a relatively easy process to get the glass around the DNA molecules.

While the team’s work invites immediate comparison with Jurassic Park, where DNA was extracted from amber fossils, Grass says that prehistoric insects encased in amber are a poor source of prehistoric DNA.

“The best DNA comes from sources that are ceramic and dry — so teeth, bones and even eggshells,” he said.

So far the team has tested its storage method by preserving just 83 kilobytes of data, spread across two documents.

“The first is the Swiss Federal Charter of 1291 — it’s like the Swiss Magna Carta — and the other was the Archimedes Palimpsest; a copy of an Ancient Greek mathematics treatise made by a monk in the 10th century but which had been overwritten by other monks in the 15th century.

“We wanted to preserve these documents to show not just that the method works, but that the method is important too,” he said.

He estimates that the information will be readable in 10,000 years’ time, and if frozen, as long as a million years.
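At its simplest, encoding data in DNA means mapping bits to bases, two bits per nucleotide. The sketch below shows only that naive mapping; the ETH scheme adds redundancy with error-correcting codes and avoids sequences that synthesize or sequence poorly, none of which is shown here.

```python
# Naive two-bits-per-base mapping only; the real scheme also adds
# error-correcting redundancy and avoids hard-to-synthesize sequences.
BASES = "ACGT"

def encode(data: bytes) -> str:
    """Map each byte to four bases, most significant bits first."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    vals = [BASES.index(base) for base in strand]
    return bytes((vals[i] << 6) | (vals[i + 1] << 4) |
                 (vals[i + 2] << 2) | vals[i + 3]
                 for i in range(0, len(vals), 4))

message = b"1291"                      # a nod to the Swiss Federal Charter
strand = encode(message)
assert decode(strand) == message
print(strand)                          # ATACATAGATGCATAC
```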

Encoding just 83 KB of data cost about $2,000, making it a relatively expensive process, but Grass is optimistic that the price will come down over time. Advances in technology for medical analysis, he said, are likely to help with this.

“Already the prices for human genome sequences have dropped from several millions of dollars a few years ago to just hundreds of dollars now,” Grass said.

“It makes sense to integrate these advances in medical and genome analysis into the world of IT.”

http://www.cnn.com/2015/02/25/tech/make-create-innovate-fossil-dna-data-storage/index.html

Risk of American ‘megadroughts’ for decades, NASA warns

There is no precedent in contemporary weather records for the kinds of droughts the country’s West will face if greenhouse gas emissions stay on course, a NASA study said.

No precedent even in the past 1,000 years.

The feared droughts would cover most of the western half of the United States — the Central Plains and the Southwest.

Those regions have suffered severe drought in recent years. But it doesn’t compare in the slightest to the ‘megadroughts’ likely to hit them before the century is over due to global warming.

These will be epochal, worthy of a chapter in Earth’s natural history.

Even if emissions drop moderately, droughts in those regions will get much worse than they are now, NASA said.

The space agency’s study conjures visions of the sun scorching cracked earth that is baked dry of moisture for feet below the surface, across vast landscapes, for decades. Great lake reservoirs could dwindle to ponds, leaving cities to ration water to residents who haven’t fled east.

“Our projections for what we are seeing is that, with climate change, many of these types of droughts will likely last for 20, 30, even 40 years,” said NASA climate scientist Ben Cook.

That’s worse and longer than the historic Dust Bowl of the 1930s, when “black blizzards” — towering, blustery dust walls — buried Southern Plains homes, buggies and barns in dirt dunes.

It lasted about 10 years. Though long, it was within the framework of a contemporary natural drought.

To find something almost as extreme as what looms, one must go back to Medieval times.

Nestled in the shade of Southwestern mountain rock, earthen Ancestral Pueblo housing offers a foreshadowing. The tight, lively villages emptied out in the 13th century’s Great Drought that lasted more than 30 years.

No water. No crops. Starvation drove populations out to the east and south.

If NASA’s worst case scenario plays out, what’s to come could be worse.

Its computations are based on greenhouse gas emissions continuing on their current course. And they produce an 80% chance of at least one drought that could last for decades.

One “even exceeding the duration of the long term intense ‘megadroughts’ that characterized the really arid time period known as the Medieval Climate Anomaly,” Cook said.

That was a period of heightened global temperatures that lasted from about 1100 to 1300 — when those Ancestral Pueblos dispersed. Global average temperatures are already higher now than they were then, the study said.

The NASA team’s study was very data-heavy.

It examined past wet and dry periods using tree rings going back 1,000 years and compared them with soil moisture from 17 climate models, NASA said in the study published in Science Advances.

Scientists used supercomputers to calculate the models forward under human-induced global warming scenarios. The models all showed a much drier planet.
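A probability like the 80% figure typically comes from counting model runs: project soil moisture forward in each ensemble member and ask what fraction sustains a multi-decade drought. The toy below invents every one of its numbers purely to show the shape of that calculation; it is not NASA's analysis.

```python
# A toy of the ensemble logic only: all numbers below are invented.
import numpy as np

rng = np.random.default_rng(7)
n_models, n_years = 17, 85                     # 17 models, roughly 2015-2100
drying = -0.02                                 # invented mean drying per year
moisture = rng.normal(0.0, 0.3, (n_models, n_years)) + drying * np.arange(n_years)

def has_megadrought(series, threshold=-0.8, min_years=20):
    """True if soil moisture stays below threshold for min_years in a row."""
    run = 0
    for value in series:
        run = run + 1 if value < threshold else 0
        if run >= min_years:
            return True
    return False

risk = np.mean([has_megadrought(m) for m in moisture])
print(f"fraction of runs with a multi-decade drought: {risk:.0%}")
```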

Some Southwestern areas that are currently drought-stricken are filling up with more people, creating more demand for water while reservoirs are already strained.

The predicted megadroughts will strain water supplies much harder, NASA’s Goddard Space Flight Center said.

“These droughts really represent events that nobody in the history of the United States has ever had to deal with,” Cook said.

Compared with the last millennium, the dryness will be unprecedented. Adapting to it will be tough.

http://www.cnn.com/2015/02/14/us/nasa-study-western-megadrought/index.html