DNA is the future of data storage

A bioengineer and a geneticist at Harvard’s Wyss Institute have successfully stored 5.5 petabits of data — around 700 terabytes — in a single gram of DNA, smashing the previous DNA data density record a thousandfold.

The work, carried out by George Church and Sri Kosuri, basically treats DNA as just another digital storage device. Instead of encoding binary data as magnetic regions on a hard drive platter, they synthesize strands of DNA that each store 96 bits, with each of the bases (T, G, A, C) representing a binary value (T and G = 1, A and C = 0).
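
The scheme is simple enough to sketch in a few lines of Python. The T/G = 1, A/C = 0 mapping is from the article; the rule for choosing between the two candidate bases is an assumption here (alternating avoids runs of a single letter, which are hard to synthesize and sequence accurately):

```python
def bits_to_dna(bits: str) -> str:
    """Encode a binary string as DNA, one base per bit (T/G = 1, A/C = 0)."""
    out = []
    for bit in bits:
        candidates = "TG" if bit == "1" else "AC"
        # Assumed tie-break: pick whichever base doesn't repeat the last one.
        base = candidates[0] if (not out or out[-1] != candidates[0]) else candidates[1]
        out.append(base)
    return "".join(out)

def dna_to_bits(seq: str) -> str:
    """Decode a DNA sequence back to binary: T/G -> 1, A/C -> 0."""
    return "".join("1" if base in "TG" else "0" for base in seq)

strand = bits_to_dna("1011001110")   # -> "TATGACTGTA"
assert dna_to_bits(strand) == "1011001110"
```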

To read the data stored in DNA, you simply sequence it — just as if you were sequencing the human genome — and convert each of the T, G, A, C bases back into binary. To aid with sequencing, each strand of DNA has a 19-bit address block at the start, so a whole vat of DNA can be sequenced out of order and then sorted into usable data using the addresses.
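
In code, the readout step amounts to decoding each strand, splitting off its address, and sorting. The 19-bit address and 96-bit payload widths are from the article; everything else in this sketch (field layout, short example payloads) is illustrative:

```python
ADDRESS_BITS = 19  # per the article; payloads are 96 bits in the real scheme

def reassemble(decoded_strands: list[str]) -> str:
    """Sort decoded bit-strings by their address prefix and join the payloads."""
    fragments = {}
    for bits in decoded_strands:
        address = int(bits[:ADDRESS_BITS], 2)
        fragments[address] = bits[ADDRESS_BITS:]
    return "".join(fragments[addr] for addr in sorted(fragments))

# Two fragments "sequenced" out of order, with toy 8-bit payloads:
frag1 = format(1, f"0{ADDRESS_BITS}b") + "11110000"
frag0 = format(0, f"0{ADDRESS_BITS}b") + "00001111"
assert reassemble([frag1, frag0]) == "0000111111110000"
```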

Scientists have been eyeing up DNA as a potential storage medium for a long time, for three very good reasons: It’s incredibly dense (you can store one bit per base, and a base is only a few atoms large); it’s volumetric (a beaker) rather than planar (a hard disk); and it’s incredibly stable — where other bleeding-edge storage media need to be kept in sub-zero vacuums, DNA can survive for hundreds of thousands of years in a box in your garage.

It is only with recent advances in microfluidics and labs-on-a-chip that synthesizing and sequencing DNA has become an everyday task, though. While it took years for the original Human Genome Project to analyze a single human genome (some 3 billion DNA base pairs), modern lab equipment with microfluidic chips can do it in hours. Now this isn’t to say that Church and Kosuri’s DNA storage is fast — but it’s fast enough for very-long-term archival.

Just think about it for a moment: One gram of DNA can store 700 terabytes of data. That’s 14,000 50-gigabyte Blu-ray discs… in a droplet of DNA that would fit on the tip of your pinky. To store the same kind of data on hard drives — the densest storage medium in use today — you’d need 233 3TB drives, weighing a total of 151 kilos. In Church and Kosuri’s case, they have successfully stored around 700 kilobytes of data in DNA — Church’s latest book, in fact — and proceeded to make 70 billion copies (which they claim, jokingly, makes it the best-selling book of all time!) totaling 44 petabytes of data stored.
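
Those figures hold up to a quick sanity check, with the 44-petabyte total apparently counted in binary (2^50-byte) petabytes:

```python
TB = 10**12
print(700 * TB / (50 * 10**9))   # 14,000 fifty-gigabyte Blu-rays
print(700 * TB / (3 * TB))       # ~233 three-terabyte hard drives
total_bytes = 700e3 * 70e9       # 700 KB book x 70 billion copies
print(total_bytes / 2**50)       # ~43.5, i.e. the ~44 petabytes quoted
```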

Looking forward, they foresee a world where biological storage would allow us to record anything and everything without reservation. Today, we wouldn’t dream of blanketing every square meter of Earth with cameras, and recording every moment for all eternity/human posterity — we simply don’t have the storage capacity. There is a reason that backed up data is usually only kept for a few weeks or months — it just isn’t feasible to have warehouses full of hard drives, which could fail at any time. If the entirety of human knowledge — every book, uttered word, and funny cat video — can be stored in a few hundred kilos of DNA, it might just be possible to record everything.

http://refreshingnews99.blogspot.in/2012/08/harvard-cracks-dna-storage-crams-700.html

Thanks to kebmodee for bringing this to the attention of the It’s Interesting community.

Disney Is ‘Face Cloning’ People to Create Terrifyingly Realistic Robots

The Hall of Presidents is about to get a whole lot creepier, at least if Disney’s researchers get their way. That’s because they’re “face cloning” people at a lab in Zurich in order to create the most realistic animatronic characters ever made.

First of all, yes, Disney has a laboratory in Zurich. It’s one of six around the world where the company researches things like computer graphics, 3D technology and, I can only assume, how to most efficiently suck money out of your pocket when you visit Disneyworld.

What does “physical face cloning” involve? Researchers used video cameras to capture several expressions on a subject’s face, recreating them in 3D computer models down to individual wrinkles and facial hair. They then experimented with different thicknesses of silicone for each part of the face until they could create a mold for the perfect synthetic skin.

They slapped that silicone skin on a 3D-printed model of the subject’s head to create their very own replicant. As the authors of the study point out (PDF), it’s not all that different from creating a 3D model for a Pixar movie, except that in real life you have to worry about things like materials and the motors that make the face change expressions.

The plan is to develop a “complete process for automating the physical reproduction of a human face on an animatronics device,” meaning all you’ll have to do in the future is record a person’s face and the computer will do the rest. This is a different process than the one used to make the famous Geminoid robots from Osaka University, whose skin is individually crafted by artists through trial and error.

The next step is developing more advanced actuators and multi-layered synthetic skin to give the researchers more degrees of freedom in mimicking facial expressions. That means next time you go on the Pirates of the Caribbean ride, don’t be surprised to see a terrifyingly realistic Johnny Depp-bot cavorting with an appropriately dead-eyed Orlando Bloom.

Read more: http://techland.time.com/2012/08/15/disney-is-face-cloning-people-to-create-terrifyingly-realistic-robots/?iid=tl-article-latest#ixzz23fBwVu61

Retinal device restores sight to blind mice


Researchers report they have developed in mice what they believe might one day become a breakthrough for humans: a retinal prosthesis that could restore near-normal sight to those who have lost their vision.

That would be a welcome development for the roughly 25 million people worldwide who are blind because of retinal disease, most notably macular degeneration.

The notion of using prosthetics to combat blindness is not new: prior efforts involving retinal electrode implants and gene therapy have restored a limited ability to pick out spots and rough edges of light.

The current effort takes matters to a new level. The scientists fashioned a prosthetic system packed with computer chips that replicate the “neural impulse codes” the eye uses to transmit light signals to the brain.

“This is a unique approach that hasn’t really been explored before, and we’re really very excited about it,” said study author Sheila Nirenberg, a professor and computational neuroscientist in the department of physiology and biophysics at Weill Medical College of Cornell University in New York City. “I’ve actually been working on this for 10 years. And suddenly, after a lot of work, I knew immediately that I could make a prosthetic that would work, by making one that could take in images and process them into a code that the brain can understand.”

Nirenberg and her co-author Chethan Pandarinath (a former Cornell graduate student now conducting postdoctoral research at Stanford University School of Medicine) report their work in the Aug. 14 issue of Proceedings of the National Academy of Sciences. Their efforts were funded by the U.S. National Institutes of Health and Cornell University’s Institute for Computational Biomedicine.

The study authors explained that retinal diseases destroy the light-catching photoreceptor cells on the retina’s surface. Without those, the eye cannot convert light into neural signals that can be sent to the brain.

However, most of these patients retain the use of their retina’s “output cells” — called ganglion cells — whose job it is to actually send these impulses to the brain. The goal, therefore, would be to jumpstart these ganglion cells by using a light-catching device that could produce critical neural signaling.

But past efforts to implant electrodes directly into the eye have only achieved a small degree of ganglion stimulation, and alternate strategies using gene therapy to insert light-sensitive proteins directly into the retina have also fallen short, the researchers said.

Nirenberg theorized that stimulation alone wasn’t enough if the neural signals weren’t exact replicas of those the brain receives from a healthy retina.

“So, what we did is figure out this code, the right set of mathematical equations,” Nirenberg explained. And by incorporating the code right into their prosthetic device’s chip, she and Pandarinath generated the kind of electrical and light impulses that the brain understood.
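
The paper’s actual equations aren’t reproduced in the article, but encoders of this general kind are often built as a linear-nonlinear cascade: filter the image the way retinal circuitry would, then convert the result into spike rates. A minimal toy sketch under that assumption, not the authors’ model:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def encode_frame(image: np.ndarray, rng=np.random.default_rng()) -> np.ndarray:
    """Toy linear-nonlinear-Poisson encoder: image -> ganglion spike counts.

    An illustrative stand-in for the "neural impulse code" described in
    the article, not the published model.
    """
    # Linear stage: crude center-surround filtering (each pixel minus the
    # mean of its 3x3 neighborhood).
    surround = uniform_filter(image.astype(float), size=3)
    drive = image - surround
    # Nonlinear stage: rectify and scale into a firing rate per frame.
    rate = np.clip(drive, 0, None) * 0.5
    # Spiking stage: draw Poisson spike counts for each model ganglion cell.
    return rng.poisson(rate)

frame = np.zeros((8, 8))
frame[3:5, 3:5] = 10.0            # a bright spot
spikes = encode_frame(frame)      # spike counts concentrate near the spot
```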

The team also used gene therapy to hypersensitize the ganglion output cells and get them to deliver the visual message up the chain of command.

Behavioral tests were then conducted among blind mice given a code-outfitted retinal prosthetic and among those given a prosthetic that lacked the code in question.

The result: The code group fared dramatically better on visual tracking than the non-code group, with the former able to distinguish images nearly as well as mice with healthy retinas.

“Now we hope to move on to human trials as soon as possible,” said Nirenberg. “Of course, we have to conduct standard safety studies before we get there. And I would say that we’re looking at five to seven years before this is something that might be ready to go, in the best possible case. But we do hope to start clinical trials in the next one to two years.”

Results achieved in animal studies don’t necessarily translate to humans.

Dr. Alfred Sommer, a professor of ophthalmology at Johns Hopkins University in Baltimore and dean emeritus of Hopkins’ Bloomberg School of Public Health, urged caution about the findings.

“This could be revolutionary,” he said. “But I doubt it. It’s a very, very complicated business. And people have been working on it intensively and incrementally for the last 30 years.”

“The fact that they have done something that sounds a little bit better than the last set of results is great,” Sommer added.  “It’s terrific. But this approach is really in its infancy. And I guarantee that it will be a long time before they get to the point where they can really restore vision to people using prosthetics.”

Other advances may offer benefits in the meantime, he said. “We now have new therapies that we didn’t have even five years ago,” Sommer said. “So we may be reaching a state where the amount of people losing their sight will decline even as these new techniques for providing artificial vision improve. It may not be as sci-fi. But I think it’s infinitely more important at this stage.”

http://health.usnews.com/health-news/news/articles/2012/08/13/retinal-device-restores-sight-to-blind-mice

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Berkeley Laser Fires Pulses Hundreds of Times More Powerful Than All the World’s Electric Plants Combined

Blink and you’ll miss it. Don’t blink, and you’ll still miss it.

Imagine a device capable of delivering more power than all of the world’s electric plants. But this is not a prop for the next James Bond movie. A new laser at Lawrence Berkeley National Laboratory was put through its paces July 20, delivering pulses with a petawatt of power once per second. A petawatt is 10¹⁵ watts, or 1,000,000,000,000,000 watts—about 400 times as much as the combined instantaneous output of all the world’s electric plants.

How is that even possible? Well, the pulses at the Berkeley Lab Laser Accelerator (BELLA) are both exceedingly powerful and exceedingly short. Each petawatt burst lasts just 40 femtoseconds, or 0.00000000000004 second. Since it fires just one brief pulse per second, the laser’s average power is only about 40 watts—the same as an incandescent bulb in a reading lamp.
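
Spelled out, the arithmetic from the article is:

```python
peak_power = 1e15       # watts: one petawatt during the pulse
pulse_length = 40e-15   # seconds: 40 femtoseconds
rep_rate = 1            # one pulse per second

energy_per_pulse = peak_power * pulse_length   # = 40 joules
average_power = energy_per_pulse * rep_rate    # = 40 watts, reading-lamp territory
```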

BELLA’s laser is not the first to pack so much power—a laser at Lawrence Livermore National Laboratory, just an hour’s drive inland from Berkeley, reached 1.25 petawatts in the 1990s. And the University of Texas at Austin has its own high-power laser, which hit the 1.1-petawatt mark in 2008. But the Berkeley laser is the first to deliver petawatt pulses with such frequency, the lab says. At full power, for comparison, the Texas Petawatt Laser can fire one shot per hour.

The Department of Energy plans to use the powerful laser to drive a very compact particle accelerator via a process called laser wakefield acceleration, boosting electrons to high energies for use in colliders or for imaging or medical applications. Electron beams are already in use to produce bright pulses of x-rays for high-speed imaging. An intense laser pulse can ionize the atoms in a gas, separating electrons from protons to produce a plasma, and laser-carved waves in the plasma sweep up electrons, accelerating them outward at nearly the speed of light.

BELLA director Wim Leemans says that the project’s first experiments will seek to accelerate beams of electrons to energies of 10 billion electron-volts (or 10 GeV) by firing the laser through a plasma-based apparatus about one meter long. The laser apparatus itself is quite a bit larger, filling a good-size room. For comparison, the recently repurposed Stanford Linear Accelerator Center produced electron beams of 50 GeV from an accelerator 3.2 kilometers in length.
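
Dividing energy gained by distance makes the comparison concrete; the implied accelerating gradients differ by roughly three orders of magnitude:

```python
bella_gradient = 10e9 / 1.0       # 10 GeV over ~1 m   -> ~10 GeV per meter
slac_gradient = 50e9 / 3200.0     # 50 GeV over 3.2 km -> ~15.6 MeV per meter
print(bella_gradient / slac_gradient)   # -> 640x
```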

http://blogs.scientificamerican.com/observations/2012/08/01/berkeley-laser-fires-pulses-hundreds-of-times-more-powerful-than-all-the-worlds-electric-plants-combined/

Thanks to Ray Gaudette for bringing this to the attention of the It’s Interesting community.

Digital pills enter the marketplace


Digestible microchips embedded in drugs may soon tell doctors whether a patient is taking their medications as prescribed. These sensors are the first ingestible devices approved by the US Food and Drug Administration (FDA). To some, they signify the beginning of an era in digital medicine.

“About half of all people don’t take medications like they’re supposed to,” says Eric Topol, director of the Scripps Translational Science Institute in La Jolla, California. “This device could be a solution to that problem, so that doctors can know when to rev up a patient’s medication adherence.” Topol is not affiliated with the company that manufactures the device, Proteus Digital Health in Redwood City, California, but he embraces the sensor’s futuristic appeal, saying, “It’s like big brother watching you take your medicine.”

The sand-particle-sized sensor consists of a minute silicon chip containing trace amounts of magnesium and copper. When swallowed, it generates a slight voltage in response to digestive juices, which conveys a signal to the surface of a person’s skin, where a patch then relays the information to a mobile phone belonging to a healthcare provider.
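
The magnesium-copper pairing is what makes the pill self-powering: digestive fluid acts as the electrolyte of a tiny galvanic cell. As a rough theoretical ceiling from textbook standard electrode potentials (the real device’s output is far smaller and is modulated to carry the signal):

```python
# Idealized open-circuit voltage of a Mg-Cu galvanic cell. These are
# standard-table values; the in-body voltage will be much lower.
E_copper = +0.34      # volts, Cu2+/Cu half-cell (cathode)
E_magnesium = -2.37   # volts, Mg2+/Mg half-cell (anode)
E_cell = E_copper - E_magnesium   # ~2.71 V theoretical maximum
```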

Currently, the FDA and the analogous regulatory agency in Europe have only approved the device based on studies showing its safety and efficacy when embedded in placebo pills. But Proteus hopes to have the device approved within other drugs in the near future. Medicines that must be taken for years, such as those for drug-resistant tuberculosis and diabetes, and those taken by the elderly with chronic diseases, are top candidates, says George Savage, co-founder and chief medical officer at the company.

“The point is not for doctors to castigate people, but to understand how people are responding to their treatments,” Savage says. “This way doctors can prescribe a different dose or a different medicine if they learn that it’s not being taken appropriately.”

Proponents of digital medical devices predict that they will provide alternatives to doctor visits, blood tests, MRIs and CAT scans. Other gadgets in the pipeline include implantable devices that wirelessly inject drugs at pre-specified times, and sensors that deliver a person’s electrocardiogram to their smartphone.

In his book published in January, The Creative Destruction of Medicine, Topol says that the 2010s will be known as the era of digital medical devices. “There are so many of these new technologies coming along,” Topol says, “it’s going to be a new frontier for rendering care.”

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://blogs.nature.com/news/2012/07/digital-pills-make-their-way-to-market.html

Narrative Science: Can computers write convincing journalism stories?

Computer applications can drive cars, fly planes, play chess and even make music.

But can an app tell a story?

Chicago-based company Narrative Science has set out to prove that computers can tell stories good enough for a fickle human audience. It has created a program that takes raw data and turns it into a story, a system that’s worked well enough for the company to earn its own byline on Forbes.com.

Kristian Hammond, Narrative Science’s chief technology officer, said his team started the program by taking baseball box scores and turning them into game summaries.

“We did college baseball,” Hammond recalled. “And we built out a system that would take box scores and historical information, and we would write a game recap after a game. And we really liked it.”
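
Narrative Science hasn’t published how its system works, but the core idea of data-to-text generation can be sketched with a template: pull the salient facts out of the box score, make a few editorial choices, and slot everything into prose. A purely illustrative toy, not the company’s technology:

```python
def recap(box: dict) -> str:
    """Turn a toy box score into a short game recap."""
    home, away = box["home"], box["away"]
    winner, loser = (home, away) if home["runs"] > away["runs"] else (away, home)
    margin = winner["runs"] - loser["runs"]
    verb = "edged" if margin <= 2 else "routed"   # a crude editorial choice
    return (f"{winner['team']} {verb} {loser['team']} "
            f"{winner['runs']}-{loser['runs']} on {box['date']}. "
            f"{winner['star']} led the way with {winner['star_line']}.")

game = {
    "date": "Saturday",
    "home": {"team": "Wildcats", "runs": 7, "star": "J. Alvarez",
             "star_line": "three hits and four RBIs"},
    "away": {"team": "Bulldogs", "runs": 2, "star": "M. Lee",
             "star_line": "two hits"},
}
print(recap(game))
# Wildcats routed Bulldogs 7-2 on Saturday. J. Alvarez led the way with
# three hits and four RBIs.
```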

Narrative Science then began branching out into finance and other topics that are driven heavily by data. Soon, Hammond says, large companies came looking for help sorting huge amounts of data themselves.

“I think the place where this technology is absolutely essential is the area that’s loosely referred to as big data,” Hammond said. “So almost every company in the world has decided at one point that in order to do a really good job, they need to meter and monitor everything.”

Narrative Science hasn’t disclosed how much money is being made or whether a profit is being turned with the app. The firm employs about 30 people. At least one other company, based in North Carolina, is working on similar technology.

Meanwhile, Hammond says Narrative Science is looking to eventually expand into long-form news stories.

That’s an idea that’s unsettling to some journalism experts.

Kevin Smith, head of the Society of Professional Journalists Ethics Committee, says he laughed when he heard about the program.

“I can remember sitting there doing high school football games on a Friday night and using three-paragraph formulas,” Smith said. “So it made me laugh, thinking they have made a computer that can do that work.”

Smith says that, ultimately, it’s going to be hard for people to share the uniquely human custom of storytelling with a machine.

“I can’t imagine that a machine is going to tell a story and present it in a way that other human beings are going to accept it,” he said. “At least not at this time. I don’t see that happening. And the fact that we’re even attempting to do it — we shouldn’t be doing it.”

Other experts are not as concerned. Greg Bowers, who teaches at the Missouri School of Journalism, says computers don’t have the same capacity for pitch, emotion and story structure.

“I’m not alarmed about it as some people are,” Bowers said. “If you’re writing briefs that can be easily replicated by a computer, then you’re not trying hard enough.”

http://www.cnn.com/2012/05/11/tech/innovation/computer-assisted-writing/index.html?hpt=hp_c2

Japanese Remote Hand Shaking

Japanese scientists at Osaka University have created a robot hand so people can shake hands with someone remotely. The robot hand communicates grip force, body temperature and touch. The creators are considering building telepresence robots that incorporate the hand, so that users can shake hands with distant partners.

The creators of the robot hand say, “People have the preconceived notion that a robot hand will feel cold, so we give it a temperature slightly higher than skin temperature.”
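
The article doesn’t describe the electronics, but the three quantities it names suggest a simple telemetry loop between the sensing hand and the remote one. A hypothetical sketch, with every name and value invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class HandshakeFrame:
    """One telemetry sample for a remote handshake (hypothetical format)."""
    grip_force_n: float   # grip force in newtons
    skin_temp_c: float    # degrees Celsius; the article says the hand is
                          # kept slightly above human skin temperature
    in_contact: bool      # whether the palms are touching

def render(frame: HandshakeFrame) -> None:
    """Stand-in for the actuator commands on the remote hand."""
    print(f"grip={frame.grip_force_n:.1f} N, "
          f"temp={frame.skin_temp_c:.1f} C, contact={frame.in_contact}")

render(HandshakeFrame(grip_force_n=12.5, skin_temp_c=36.5, in_contact=True))
```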

http://www.sciencespacerobots.com/blog/32820121

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

The Nubrella


The Nubrella, which resembles a bubble wrapped around the user’s head and shoulders, works by strapping on a shoulder support and extending a canopy around the head.

Weighing just over 1 kg, it costs $49.99 and comes in either black or see-through style.

Inventor Alan Kaufman, 49, from Florida, said: “The major advantage is the wearer doesn’t have to carry anything when not in use, as it goes behind the head like a hood.

“The umbrella was long overdue for some innovation; now people can ride their bikes and work outdoors completely hands-free while staying protected.

“Millions of people are required to work outdoors no matter what the conditions are and simply can’t hold an umbrella and perform their tasks.

“We believe this will revolutionise the industry and are targeting people who can’t use an umbrella or are too tired to hold an umbrella.”

http://www.telegraph.co.uk/news/newstopics/howaboutthat/9195303/Hands-free-umbrella-to-help-battle-April-showers.html

Buy it here: http://www.nubrella.com/


Project Glass


Google says, “We think technology should work for you—to be there when you need it and get out of your way when you don’t. A group of us from Google[x] started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment. We’re sharing this information now because we want to start a conversation and learn from your valuable input. So we took a few design photos to show what this technology could look like and created a video to demonstrate what it might enable you to do.”

https://plus.google.com/u/0/111626127367496192147/posts#111626127367496192147/posts