Posts Tagged ‘DNA’

by PETER DOCKRILL

The appearance of wrinkled, weathered skin and the disappearance of hair are two of the regrettable hallmarks of getting older, but new research suggests these physical manifestations of ageing might not be permanent – and can potentially be reversed.

New experiments with mice show that by treating a mutation-based imbalance in mitochondrial function, animals that looked physically aged regrew hair and lost their wrinkles – restoring them to a healthy, youthful appearance in just weeks.

“To our knowledge, this observation is unprecedented,” says geneticist Keshav Singh from the University of Alabama at Birmingham.

One of the focal points of anti-ageing research is investigating the so-called mitochondrial theory of ageing, which posits that mutations in the DNA of our mitochondria – the ‘powerhouse of the cell’ – contribute over time to defects in these organelles, giving rise to ageing itself, associated chronic diseases, and other human pathologies.

To investigate these mechanisms, Singh and fellow researchers genetically modified mice to have depleted mitochondrial DNA (mtDNA).

They did this by adding the antibiotic doxycycline to the food and drinking water of transgenic mice. This turned on a mutation which causes mitochondrial dysfunction and depletes their healthy levels of mtDNA.

In the space of eight weeks, the previously healthy mice developed numerous physical changes reminiscent of natural ageing: greying and significantly thinning hair, wrinkled skin, along with slowed movements and lethargy.

The depleted mice also showed an increased number of skin cells, contributing to an abnormal thickening of the outer layer of their skin, in addition to dysfunctional hair follicles and an imbalance between the enzymes and inhibitors that usually prevent collagen fibres from wrinkling the skin.

But once the doxycycline was no longer fed to the animals, and their mitochondria could get back to doing what they do best, the mice regained their healthy, youthful appearance within just four weeks.

Effectively, they reverted to the animals they were before their mitochondrial DNA content was tampered with – which could mean mitochondria are reversible regulators of skin ageing and hair loss.

“It suggests that epigenetic mechanisms underlying mitochondria-to-nucleus cross-talk must play an important role in the restoration of normal skin and hair phenotype,” says Singh.

“Further experiments are required to determine whether phenotypic changes in other organs can also be reversed to wildtype level by restoration of mitochondrial DNA.”

Even though the mitochondrial depletion affected the entire animal, for the most part the induced mutation did not seem to greatly affect other organs – suggesting hair and skin tissue are most susceptible to the depletion.

But it could also mean the discovery here isn’t a fountain of youth for slowing or reversing the wider physiological causes of ageing – only its more superficial, cosmetic symptoms. And at least some in the scientific community aren’t persuaded yet.

“While this is a clever proof of principle, I am not convinced of the clinical relevance of this,” biologist Lindsay Wu, from the Laboratory for Ageing Research at the University of New South Wales, who was not involved in the study, told ScienceAlert.

“The rate of mitochondrial DNA mutations here is many orders of magnitude higher than the rate of mitochondrial DNA mutations observed during normal ageing.”

“I would be really keen to see what happens when they turn down the rate of mutations to a lower level more relevant to normal ageing,” Wu added.

In that vein – with further research, and assuming these effects can be replicated outside the bodies of mice, which isn’t yet known – it’s possible this could turn out to be a major discovery in the field.

For their part, at least, the researchers are convinced mtDNA mutations can teach us a lot more about how the clocks in our bodies might be stopped (or wound back to another time entirely).

“This mouse model should provide an unprecedented opportunity for the development of preventative and therapeutic drug development strategies to augment the mitochondrial functions for the treatment of ageing-associated skin and hair pathology,” the authors write in their paper, “and other human diseases in which mitochondrial dysfunction plays a significant role.”

The findings are reported in Cell Death and Disease.

https://www.sciencealert.com/unprecedented-dna-discovery-actually-reverses-wrinkles-and-hair-loss-mitochondria-mutation-mtdna


Scientists have revealed a new link between alcohol, heart health and our genes.

The researchers investigated faulty versions of a gene called titin, which are carried by about one in 100 people – around 600,000 people in the UK.

Titin is crucial for maintaining the elasticity of the heart muscle, and faulty versions are linked to a type of heart failure called dilated cardiomyopathy.

Now new research suggests the faulty gene may interact with alcohol to accelerate heart failure in some carriers, even if they drink only moderate amounts of alcohol.

The research was carried out by scientists from Imperial College London, Royal Brompton Hospital, and the MRC London Institute of Medical Sciences, and was published this week in the Journal of the American College of Cardiology.

The study was supported by the Department of Health and Social Care and the Wellcome Trust through the Health Innovation Challenge Fund.

In the first part of the study, the team analysed 141 patients with a type of heart failure called alcoholic cardiomyopathy (ACM). This condition is triggered by drinking more than 70 units a week (roughly seven bottles of wine) for five years or more. In severe cases the condition can be fatal, or leave patients requiring a heart transplant.

The team found that the faulty titin gene may also play a role in the condition. In the study, 13.5 per cent of patients were found to carry the mutation – much higher than the proportion of carriers in the general population.

These results suggest this condition is not simply the result of alcohol poisoning, but arises from a genetic predisposition – and that other family members may be at risk too, explained Dr James Ware, study author from the National Heart and Lung Institute at Imperial.

“Our research strongly suggests alcohol and genetics are interacting – and genetic predisposition and alcohol consumption can act together to lead to heart failure. At the moment this condition is assumed to be simply due to too much alcohol. But this research suggests these patients should also be checked for a genetic cause – by asking about a family history and considering testing for a faulty titin gene, as well as other genes linked to heart failure,” he said.

He added that relatives of patients with ACM should receive assessment and heart scans – and in some cases have genetic tests – to see if they unknowingly carry the faulty gene.

In a second part of the study, the researchers investigated whether alcohol may play a role in another type of heart failure called dilated cardiomyopathy (DCM). This condition causes the heart muscle to become stretched and thin, and has a number of causes including viral infections and certain medications. The condition can also be genetic, and around 12 per cent of cases of DCM are thought to be linked to a faulty titin gene.

In the study the team asked 716 patients with dilated cardiomyopathy how much alcohol they consumed.

None of the patients consumed the high levels of alcohol needed to cause ACM. But the team found that in patients whose DCM was caused by the faulty titin gene, even moderately increased alcohol intake (defined as drinking above the weekly recommended limit of 14 units) affected the heart’s pumping power.

Compared with DCM patients who didn’t consume excess alcohol (and whose condition wasn’t caused by the faulty titin gene), excess alcohol was linked to a 30 per cent reduction in heart output.

More research is now needed to investigate how alcohol may affect people who carry the faulty titin gene, but do not have heart problems, added Dr Paul Barton, study co-author from the National Heart and Lung Institute at Imperial:

“Alcohol and the heart have a complicated relationship. While moderate levels may have benefits for heart health, too much can cause serious cardiac problems. This research suggests that in people with titin-related heart failure, alcohol may worsen the condition.

“An important wider question is also raised by the study: do mutations in titin predispose people to heart failure when exposed to other things that stress the heart, such as cancer drugs or certain viral infections? This is something we are actively seeking to address.”

The research was supported by the Department of Health and Social Care and Wellcome Trust through the Health Innovation Challenge Fund, the Medical Research Council, the NIHR Cardiovascular Biomedical Research Unit at Royal Brompton & Harefield NHS Foundation Trust and the British Heart Foundation.

Reference: Ware, J. S., Amor-Salamanca, A., Tayal, U., Govind, R., Serrano, I., Salazar-Mendiguchía, J., … Garcia-Pavia, P. (2018). Genetic Etiology for Alcohol-Induced Cardiac Toxicity. Journal of the American College of Cardiology, 71(20), 2293–2302. https://doi.org/10.1016/j.jacc.2018.03.462

https://www.technologynetworks.com/genomics/news/faulty-gene-leads-to-alcohol-induced-heart-failure-304365?utm_campaign=Newsletter_TN_BreakingScienceNews&utm_source=hs_email&utm_medium=email&utm_content=63228690&_hsenc=p2ANqtz-9oqDIw3te1NPoj51s94kxnA1ClK8Oiecfela6I4WiITEbm_-SWdmw6pjMTwm2YP24gqSzRaBvUK1kkb2kZEJKPcL5JtQ&_hsmi=63228690

Prof Neil Gemmell, the New Zealand scientist leading a project to catalogue DNA in the waters of Loch Ness, said he did not believe in Nessie, but was confident of finding genetic codes for other creatures.

He said a “biological explanation” might be found to explain some of the stories about the Loch Ness Monster.

The team will collect tiny fragments of skin and scales for two weeks in June.

Prof Gemmell, from the University of Otago in Dunedin, said: “I don’t believe in the idea of a monster, but I’m open to the idea that there are things yet to be discovered and not fully understood.

“Maybe there’s a biological explanation for some of the stories.”

The University of the Highlands and Islands’ Rivers and Lochs Institute in Inverness is assisting in the project.

Other organisms

After the research team’s trip to Loch Ness, the samples will be sent to laboratories in New Zealand, Australia, Denmark and France to be analysed against a genetic database.

Prof Gemmell said: “There’s absolutely no doubt that we will find new stuff. And that’s very exciting.

“While the prospect of looking for evidence of the Loch Ness monster is the hook to this project, there is an extraordinary amount of new knowledge that we will gain from the work about organisms that inhabit Loch Ness – the UK’s largest freshwater body.”

The scientist said the team expected to find sequences of DNA from plants, fish and other organisms.

He said it would be possible to identify these plants and animals by comparing the sequences of their DNA against sequences held on a large, international database.
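
To give a sense of how that comparison works, here is a toy sketch in Python: a handful of made-up reference sequences and a simple shared-k-mer score standing in for the real alignment tools (such as BLAST) that would be run against large curated databases. The species names and sequences are invented for illustration only.

```python
# Toy stand-in for matching an environmental DNA fragment against a reference
# database. Real surveys use alignment software against curated databases;
# the reference entries below are invented for illustration.

def kmers(seq: str, k: int = 8) -> set:
    """All overlapping k-base substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_match(query: str, references: dict) -> str:
    """Return the reference name sharing the most k-mers with the query."""
    q = kmers(query)
    return max(references, key=lambda name: len(q & kmers(references[name])))

references = {
    "Atlantic salmon (hypothetical entry)": "ATGGCACACCCAACGCAAAGCCTTCGAC",
    "European eel (hypothetical entry)": "ATGGCAAACATCCGAAAATCTCACCCAC",
}

print(best_match("GCACACCCAACGCAAAGC", references))  # -> the salmon entry
```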

Prof Gemmell added: “There is this idea that an ancient Jurassic Age reptile might be in Loch Ness.

“If we find any reptilian DNA sequences in Loch Ness, that would be surprising and would be very, very interesting.”

The Loch Ness Monster is one of Scotland’s oldest and most enduring myths. It inspires books, TV shows and films, and sustains a major tourism industry around its home.

The story of the monster can be traced back almost 1,500 years, to when Irish missionary St Columba is said to have encountered a beast in the River Ness in 565 AD.

Later, in the 1930s, The Inverness Courier reported the first modern sighting of Nessie.

Whale-like creature

In 1933, the newspaper’s Fort Augustus correspondent, Alec Campbell, reported a sighting by Aldie Mackay of what she believed to be Nessie.

Mr Campbell’s report described a whale-like creature and the loch’s water “cascading and churning”.

The editor at the time, Evan Barron, suggested the beast be described as a “monster”, kick-starting the modern myth of the Loch Ness Monster.

Over the years various efforts have tried and failed to find the beast.

In tourism terms, there are two exhibitions dedicated to the monster, and there is scarcely a tourist shop in the Highlands – or more widely across Scotland – where a cuddly toy of Nessie cannot be found.

In 2016, the inaugural Inverness Loch Ness International Knitting Festival exhibited knitted Nessies from all parts of the world.

‘Record high’

In popular culture, the Loch Ness Monster has reared its head many times, including in 1975’s four-part Doctor Who serial Terror of the Zygons, the 1980s cartoon The Family-Ness, as well as The Simpsons and 1996’s Loch Ness starring Ted Danson.

In 2014, it was reported that for the first time in almost 90 years no “confirmed sightings” had been made of the Loch Ness Monster.

Gary Campbell, who keeps a register of sightings, said no-one had come forward in 18 months to say they had seen the monster.

But last year, sightings hit a record high.

http://www.bbc.com/news/uk-scotland-highlands-islands-44223259

In the age of big data, we are quickly producing far more digital information than we can possibly store. Last year, $20 billion was spent on new data centers in the US alone, doubling the capital expenditure on data center infrastructure from 2016. And even with skyrocketing investment in data storage, corporations and the public sector are falling behind.

But there’s hope.

With a nascent technology leveraging DNA for data storage, this may soon become a problem of the past. By encoding bits of data into tiny molecules of DNA, researchers and companies like Microsoft hope to fit entire data centers in a few flasks of DNA by the end of the decade.

But let’s back up.

Backdrop

Over the course of the 20th century, we graduated from magnetic tape, floppy disks, and CDs to sophisticated semiconductor memory chips capable of holding data in countless tiny transistors. In keeping with Moore’s Law, we’ve seen an exponential increase in the storage capacity of silicon chips. At the same time, however, the rate at which humanity produces new digital information is exploding. The size of the global datasphere is increasing exponentially, predicted to reach 160 zettabytes (160 trillion gigabytes) by 2025. As of 2016, digital users produced over 44 billion gigabytes of data per day. By 2025, the International Data Corporation (IDC) estimates this figure will surpass 460 billion. And with private sector efforts to improve global connectivity—such as OneWeb and Google’s Project Loon—we’re about to see an influx of data from five billion new minds.

By 2020, three billion new minds are predicted to join the web. With private sector efforts, this number could reach five billion. While companies and services are profiting enormously from this influx, it’s extremely costly to build data centers at the rate needed. At present, about $50 million worth of new data center construction is required just to keep up, not to mention millions in furnishings, equipment, power, and cooling. Moreover, memory-grade silicon is rarely found pure in nature, and researchers predict it will run out by 2040.

Take DNA, on the other hand. At its theoretical limit, we could fit 215 million gigabytes of data in a single gram of DNA.

But how?

Crash Course

DNA is built from a double helix chain of four nucleotide bases—adenine (A), thymine (T), cytosine (C), and guanine (G). Once formed, these chains fold tightly to form extremely dense, space-saving data stores. To encode data files into these bases, we can use various algorithms that convert binary to base nucleotides—0s and 1s into A, T, C, and G. “00” might be encoded as A, “01” as G, “10” as C, and “11” as T, for instance. Once encoded, information is then stored by synthesizing DNA with specific base patterns, and the final encoded sequences are stored in vials with an extraordinary shelf-life. To retrieve data, encoded DNA can then be read using any number of sequencing technologies, such as Oxford Nanopore’s portable MinION.
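
As a minimal sketch of that two-bits-per-base idea, using the illustrative mapping quoted above and ignoring the error correction, addressing, and chemical constraints real systems need, encoding and decoding might look like this:

```python
# Toy illustration of the 2-bits-per-base mapping described in the text
# ("00" -> A, "01" -> G, "10" -> C, "11" -> T). Real DNA storage pipelines
# add error correction, addressing, and chemistry-friendly constraints.

BITS_TO_BASE = {"00": "A", "01": "G", "10": "C", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Convert raw bytes into a string of nucleotide bases."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Convert a base string back into the original bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hello")
print(strand)                    # GCCAGCGGGCTAGCTAGCTT
assert decode(strand) == b"hello"
```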

Still in its deceptive growth phase, DNA data storage—or NAM (nucleic acid memory)—is only beginning to approach the knee of its exponential growth curve. But while the process remains costly and slow, several players are beginning to crack its greatest challenge: retrieval. Just as you might click on a specific file and filter a search term on your desktop, random-access across large data stores has become a top priority for scientists at Microsoft Research and the University of Washington.

Storing over 400 megabytes of DNA-encoded data, the University of Washington’s DNA storage system now offers random access across all of its data with no bit errors.

Applications

Even before we guarantee random access for data retrieval, DNA data storage has immediate market applications. According to IDC’s Data Age 2025 study, a huge proportion of enterprise data goes straight to an archive. Over time, the majority of stored data becomes only potentially critical, making it less of a target for immediate retrieval.

Particularly for storing past legal documents, medical records, and other archive data, why waste precious computing power, infrastructure, and overhead?

Data-encoded DNA can last 10,000 years—guaranteed—in cold, dark, and dry conditions at a fraction of the storage cost.

Now that we can easily use natural enzymes to replicate DNA, companies have tons to gain (literally) by using DNA as a backup system—duplicating files for later retrieval and risk mitigation.

And as retrieval algorithms and biochemical technologies improve, random access across data-encoded DNA may become as easy as clicking a file on your desktop.

As you scroll, researchers are already investigating the potential of molecular computing, completely devoid of silicon and electronics.

Harvard professor George Church and his lab, for instance, envision capturing data directly in DNA. As Church has stated, “I’m interested in making biological cameras that don’t have any electronic or mechanical components,” whereby information “goes straight into DNA.” According to Church, DNA recorders would capture audiovisual data automatically. “You could paint it up on walls, and if anything interesting happens, just scrape a little bit off and read it—it’s not that far off.” One day, we may even be able to record biological events in the body. In pursuit of this end, Church’s lab is working to develop an in vivo DNA recorder of neural activity, skipping electrodes entirely.

Perhaps the most ultra-compact, long-lasting, and universal storage mechanism at our fingertips, DNA offers us unprecedented applications in data storage—perhaps even computing.

Potential

As the cost of DNA data storage plummets and its speed rises, commercial user interfaces will become both critical and wildly profitable. Once corporations, startups, and people alike can easily save files, images or even neural activity to DNA, opportunities for disruption abound. Imagine uploading files to the cloud and having them travel to encrypted DNA vials rather than to massive, inefficient silicon-enabled data centers. Corporations could have their own warehouses, and local data networks could allow for heightened cybersecurity—particularly for archives.

And since DNA lasts millennia without maintenance, forget the need to copy databases and power digital archives. As long as we’re human, regardless of technological advances and changes, DNA will always be relevant and readable for generations to come.

But perhaps the most exciting potential of DNA is its portability. If we were to send a single exabyte of data (one billion gigabytes) to Mars using silicon binary media, it would take five Falcon Heavy rockets and cost $486 million in freight alone.

With DNA, we would need five cubic centimeters.
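
That figure squares with the article’s own numbers. As a back-of-envelope check (the density value is an assumption for illustration, not a figure from the article):

```python
# Rough check of the "five cubic centimeters per exabyte" claim using the
# theoretical limit quoted earlier (215 million GB, i.e. 215 PB, per gram).
# The density of dry DNA is assumed to be roughly 1 g/cm^3 for illustration.

PETABYTES_PER_GRAM = 215
EXABYTE_IN_PETABYTES = 1_000
ASSUMED_DENSITY_G_PER_CM3 = 1.0

grams = EXABYTE_IN_PETABYTES / PETABYTES_PER_GRAM        # ~4.7 g
volume_cm3 = grams / ASSUMED_DENSITY_G_PER_CM3           # ~4.7 cm^3

print(f"~{grams:.1f} g of DNA, roughly {volume_cm3:.0f} cubic centimeters")
```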

At scale, DNA has the true potential to dematerialize entire space colonies worth of data. Throughout evolution, DNA has unlocked extraordinary possibilities—from humans to bacteria. Soon hosting limitless data in almost zero space, it may one day unlock many more.

https://singularityhub.com/2018/04/26/the-answer-to-the-digital-data-tsunami-is-literally-in-our-dna/?utm_source=Singularity+Hub+Newsletter&utm_campaign=fa76321507-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-fa76321507-58158129#sm.000kbyugh140cf5sxiv1mnz7bq65u

by Michelle Z. Donahue

A baby girl who lived some 11,500 years ago survived for just six weeks in the harsh climate of central Alaska, but her brief life is providing a surprising and challenging wealth of information to modern researchers.

Her genome is the oldest complete genetic profile yet of a New World human. But if that isn’t enough, her genes also reveal the existence of a previously unknown population of people who are related to, but older and genetically distinct from, modern Native Americans.

This new information helps sketch in more details about how, when, and where the ancestors of all Native Americans became a distinct group, and how they may have dispersed into and throughout the New World.

The baby’s DNA showed that she belonged to a population that was genetically separate from other native groups present elsewhere in the New World at the end of the Pleistocene. Ben Potter, the University of Alaska Fairbanks archaeologist who unearthed the remains at the Upward Sun River site in 2013, named this new group “Ancient Beringians.”

The discovery of the baby, named Xach’itee’aanenh T’eede Gaay, or Sunrise Child-Girl, in a local Athabascan language, was completely unexpected, as were the genetic results, Potter says.

Found in 2006 and accessible only by helicopter, the Upward Sun River site is located in the dense boreal forest of central Alaska’s Tanana River Valley. The encampment was buried under feet of sand and silt, an acidic environment that makes the survival of organic artifacts exceedingly rare. Potter previously excavated the cremated remains of a three-year-old child from a hearth pit in the encampment, and it was beneath this first burial that the six-week-old baby and a second, even younger infant were found.

A genomics team in Denmark, including University of Copenhagen geneticist Eske Willerslev, performed the sequencing work on the remains, comparing the child’s genome with the genes of 167 ancient and contemporary populations from around the world. The results appeared today in the journal Nature.

“We didn’t know this population even existed,” Potter says. “Now we know they were here for many thousands of years, and that they were really successful. How did they do it? How did they change? We now have examples of two genetic groups of people who were adapting to this very harsh landscape.”

The genetic analysis points towards a divergence of all ancient Native Americans from a single east Asian source population somewhere between 36,000 and 25,000 years ago—well before humans crossed into Beringia, an area that includes the land bridge connecting Siberia and Alaska at the end of the last ice age. That means that somewhere along the way, either in eastern Asia or in Beringia itself, a group of people became isolated from other east Asians for about 10,000 years, long enough to become a unique strain of humanity.

The girl’s genome also shows that the Beringians became genetically distinct from all other Native Americans around 20,000 years ago. But since humans in North America are not reliably documented before 14,600 years ago, how and where these two groups could have been separated long enough to become genetically distinct is still unclear.

The new study posits two new possibilities for how the separation could have happened.

The first is that the two groups became isolated while still in east Asia, and that they crossed the land bridge separately—perhaps at different times, or using different routes.

A second theory is that a single group moved out of Asia, then split into Beringians and ancient Native Americans once in Beringia. The Beringians lingered in the west and interior of Alaska, while the ancestors of modern Native Americans continued on south some time around 15,700 years ago.

“It’s less like a tree branching out and more like a delta of streams and rivers that intersect and then move apart,” says Miguel Vilar, lead scientist for National Geographic’s Genographic Project. “Twenty years ago, we thought the peopling of America seemed quite simple, but then it turns out to be more complicated than anyone thought.”

John Hoffecker, who studies the paleoecology of Beringia at the University of Colorado-Boulder, says there is still plenty of room for debate about the geographic locations of the ancestral splits. But the new study fits well with where the thinking has been heading for the last decade, he adds.

“We think there was a great deal more diversity in the original Native American populations than is apparent today, so this is consistent with a lot of other evidence,” Hoffecker says.

However, that same diversity—revealed through research on Native American cranial morphology and tooth structure—creates its own dilemma. How does a relatively small group of New World migrants, barricaded by a challenging climate with no access to fresh genetic material, evolve such a deep bank of differences from their east Asian ancestors? It certainly doesn’t happen over just 15,000 years, Hoffecker insists, referring to the estimated date of divergence of ancient Native Americans from Beringians.

“We’ve been getting these signals of early divergence for decades—the first mitochondrial work in the 1990s from Native Americans were coming up with estimates of 30, 35, even 40,000 years ago,” Hoffecker says. “They were being dismissed by everybody, myself included. Then people began to suspect there were two dates: one for divergence, and one for dispersal, and this study supports that.”

“Knowing about the Beringians really informs us as to how complex the process of human migration and adaptation was,” adds Potter. “It prompts the scientist in all of us to ask better questions, and to be in awe of our capacity as a species to come into such a harsh area and be very successful.”

https://news.nationalgeographic.com/2018/01/alaska-dna-ancient-beringia-genome/

by Lisa Ryan

As genetic-ancestry kits increase in popularity, more white nationalists have been taking the spit-in-a-cup tests to prove their heritage — and many are left disappointed by results showing they aren’t as “white” as they had hoped, STAT News reports.

A new study from researchers at the University of California, Los Angeles, and the Data & Society Research Institute examined comments left in 12 million posts on the website Stormfront, left by more than 300,000 users. The team was able to find 70 discussion threads, where 153 users posted about their test results from companies like 23andMe and Ancestry.com — with more than 3,000 posts in response.

Sociologist Aaron Panofsky explained to STAT News that many of the white nationalists would post their results, even if they were upset to learn they weren’t completely “white” — which was surprising because “they will basically say if you want to be a member of Stormfront you have to be 100 percent white European, not Jewish.”

Only a third of people who posted their ancestry results were pleased with what they discovered — a commenter with the username Sloth even wrote, “Pretty damn pure blood.” Those who found themselves with results that weren’t 100 percent white European dealt with their disappointment by rejecting the test or disputing the results with the help of other users. Some would say they knew their genealogy better than whatever a genetic test may reveal; certain users also apparently tried to discredit the tests as a Jewish conspiracy.

Panofsky notes that there is “mainstream critical literature” on these tests that argues people should be cautious about the results. J. Scott Roberts, an associate professor at the University of Michigan who wasn’t involved in the study, told STAT News, “The science is often murky in those areas and gives ambiguous information. They try to give specific percentages from this region, or x percent disease risk, and my sense is that that is an artificially precise estimate.” However, STAT News points out that Ancestry.com and 23andMe are “meticulous” in how they analyze a person’s genetic material, and exclude outliers that can distort a person’s genetic data.

https://www.yahoo.com/news/study-finds-many-white-nationalists-172104845.html

by Andy Greenberg

WHEN BIOLOGISTS SYNTHESIZE DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect neither humans nor animals, but computers.

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it, the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it’s one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”

A Sci-Fi Hack

For now, that threat remains more of a plot point in a Michael Crichton novel than one that should concern computational biologists. But as genetic sequencing is increasingly handled by centralized services—often run by university labs that own the expensive gene sequencing equipment—that DNA-borne malware trick becomes ever so slightly more realistic. Especially given that the DNA samples come from outside sources, which may be difficult to properly vet.

If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing. Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest. “There are a lot of interesting—or threatening may be a better word—applications of this coming in the future,” says Peter Ney, a researcher on the project.

Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an “exploit”—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team. The researchers started by writing a well-known exploit called a “buffer overflow,” designed to fill the space in a computer’s memory meant for a certain piece of data and then spill out into another part of the memory to plant its own malicious commands.

But encoding that attack in actual DNA proved harder than they first imagined. DNA sequencers work by mixing DNA with chemicals that bind differently to DNA’s basic units of code—the chemical bases A, T, G, and C—and each emit a different color of light, captured in a photo of the DNA molecules. To speed up the processing, the images of millions of bases are split up into thousands of chunks and analyzed in parallel. So all the data that comprised their attack had to fit into just a few hundred of those bases, to increase the likelihood it would remain intact throughout the sequencer’s parallel processing.

When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail.
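
Those constraints, a balanced G/C-to-A/T ratio and no repeated stretches that make a strand fold on itself, are the kind of thing a simple pre-synthesis check can flag. The sketch below is hypothetical; the thresholds are chosen purely for illustration and are not taken from the researchers’ work.

```python
# Hypothetical pre-synthesis checks for the two physical constraints
# mentioned above: GC balance and avoidance of long repeated substrings.
# Thresholds are illustrative guesses, not the researchers' parameters.

def gc_fraction(strand: str) -> float:
    """Fraction of bases that are G or C."""
    return sum(base in "GC" for base in strand) / len(strand)

def has_long_repeat(strand: str, k: int = 12) -> bool:
    """True if any k-base substring occurs more than once (self-folding risk)."""
    seen = set()
    for i in range(len(strand) - k + 1):
        kmer = strand[i:i + k]
        if kmer in seen:
            return True
        seen.add(kmer)
    return False

def looks_synthesizable(strand: str) -> bool:
    return 0.4 <= gc_fraction(strand) <= 0.6 and not has_long_repeat(strand)
```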

The result, finally, was a piece of attack software that could survive the translation from physical DNA to the digital format, known as FASTQ, that’s used to store the DNA sequence. And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit, breaking out of the program and into the memory of the computer running the software to run its own arbitrary commands.
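
For readers unfamiliar with it, FASTQ stores each sequencing read as a four-line record: an identifier, the base calls, a separator, and per-base quality scores. A minimal, made-up example record (the read ID and values are invented):

```python
# A minimal, made-up FASTQ record: identifier, bases, separator line, and
# ASCII-encoded per-base quality scores. Real files contain millions of these
# records and are therefore routinely compressed.

record = (
    "@read_0001\n"          # read identifier (invented for illustration)
    "GATTACAGATTACA\n"      # the base calls
    "+\n"                   # separator line
    "IIIIIIIIIHHHHH\n"      # quality scores, one character per base
)

with open("example.fastq", "w") as handle:
    handle.write(record)
```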

A Far-Off Threat

Even then, the attack was fully translated only about 37 percent of the time, since the sequencer’s parallel processing often cut it short or—another hazard of writing code in a physical object—the program decoded it backward. (A strand of DNA can be sequenced in either direction, but code is meant to be read in only one. The researchers suggest in their paper that future, improved versions of the attack might be crafted as a palindrome.)

Despite that tortuous, unreliable process, the researchers admit, they also had to take some serious shortcuts in their proof-of-concept that verge on cheating. Rather than exploit an existing vulnerability in the fqzcomp program, as real-world hackers do, they modified the program’s open-source code to insert their own flaw allowing the buffer overflow. But aside from writing that DNA attack code to exploit their artificially vulnerable version of fqzcomp, the researchers also performed a survey of common DNA sequencing software and found three actual buffer overflow vulnerabilities in common programs. “A lot of this software wasn’t written with security in mind,” Ney says. That shows, the researchers say, that a future hacker might be able to pull off the attack in a more realistic setting, particularly as more powerful gene sequencers start analyzing larger chunks of data that could better preserve an exploit’s code.

Needless to say, any possible DNA-based hacking is years away. Illumina, the leading maker of gene-sequencing equipment, said as much in a statement responding to the University of Washington paper. “This is interesting research about potential long-term risks. We agree with the premise of the study that this does not pose an imminent threat and is not a typical cyber security capability,” writes Jason Callahan, the company’s chief information security officer. “We are vigilant and routinely evaluate the safeguards in place for our software and instruments. We welcome any studies that create a dialogue around a broad future framework and guidelines to ensure security and privacy in DNA synthesis, sequencing, and processing.”

But hacking aside, the use of DNA for handling computer information is slowly becoming a reality, says Seth Shipman, one member of a Harvard team that recently encoded a video in a DNA sample. (Shipman is married to WIRED senior writer Emily Dreyfuss.) That storage method, while mostly theoretical for now, could someday allow data to be kept for hundreds of years, thanks to DNA’s ability to maintain its structure far longer than magnetic encoding in flash memory or on a hard drive. And if DNA-based computer storage is coming, DNA-based computer attacks may not be so farfetched, he says.

“I read this paper with a smile on my face, because I think it’s clever,” Shipman says. “Is it something we should start screening for now? I doubt it.” But he adds that, with an age of DNA-based data possibly on the horizon, the ability to plant malicious code in DNA is more than a hacker parlor trick.

“Somewhere down the line, when more information is stored in DNA and it’s being input and sequenced constantly,” Shipman says, “we’ll be glad we started thinking about these things.”

https://www.wired.com/story/malware-dna-hack/?mbid=nl_81017_p1&CNDID=50678559