Doctors have newly outlined a type of dementia that could be more common than Alzheimer’s among the oldest adults, according to a report published Tuesday in the journal Brain.

The disease, called LATE, may often mirror the symptoms of Alzheimer’s disease, though it affects the brain differently and develops more slowly than Alzheimer’s. Doctors say the two are frequently found together, and in those cases may lead to a steeper cognitive decline than either by itself.

In developing its report, the international team of authors is hoping to spur research — and, perhaps one day, treatments — for a disease that tends to affect people over 80 and “has an expanding but under-recognized impact on public health,” according to the paper.

“We’re really overhauling the concept of what dementia is,” said lead author Dr. Peter Nelson, director of neuropathology at the University of Kentucky Medical Center.

Still, the disease itself didn’t come out of the blue. The evidence has been building for years, including reports of patients who didn’t quite fit the mold for known types of dementia such as Alzheimer’s.

“There isn’t going to be one single disease that is causing all forms of dementia,” said Sandra Weintraub, a professor of psychiatry, behavioral sciences and neurology at Northwestern University Feinberg School of Medicine. She was not involved in the new paper.

Weintraub said researchers have been well aware of the “heterogeneity of dementia,” but figuring out precisely why each type can look so different has been a challenge. Why do some people lose memory first, while others lose language or have personality changes? Why do some develop dementia earlier in life, while others develop it later?

Experts say this heterogeneity has complicated dementia research, including Alzheimer’s, because it hasn’t always been clear what the root cause was — and thus, if doctors were treating the right thing.

What is it?

The acronym LATE stands for limbic-predominant age-related TDP-43 encephalopathy. The full name refers to the area in the brain most likely to be affected, as well as the protein at the center of it all.

“These age-related dementia diseases are frequently associated with proteinaceous glop,” Nelson said. “But different proteins can contribute to the glop.”

In Alzheimer’s, you’ll find one set of glops. In Lewy body dementia, another glop.

And in LATE, the glop is a protein called TDP-43. Doctors aren’t sure why the protein is found in a modified, misfolded form in a disease like LATE.

“TDP-43 likes certain parts of the brain that the Alzheimer’s pathology is less enamored of,” explained Weintraub, who is also a member of Northwestern’s Mesulam Center for Cognitive Neurology and Alzheimer’s Disease.

“This is an area that’s going to be really huge in the future. What are the individual vulnerabilities that cause the proteins to go to particular regions of the brain?” she said. “It’s not just what the protein abnormality is, but where it is.”

More than a decade ago, doctors first linked the TDP protein to amyotrophic lateral sclerosis, otherwise known as ALS or Lou Gehrig’s disease. It was also linked to another type of dementia, called frontotemporal lobar degeneration.

LATE “is a disease that’s 100 times more common than either of those, and nobody knows about it,” said Nelson.

The new paper estimates, based on autopsy studies, that between 20 and 50% of people over 80 will have brain changes associated with LATE. And that prevalence increases with age.

Experts say nailing down these numbers — as well as finding better ways to detect and research the disease — is what they hope comes out of consensus statements like the new paper, which gives scientists a common language to discuss it, according to Nelson.

“People have, in their own separate bailiwicks, found different parts of the elephant,” he said. “But this is the first place where everybody gets together and says, ‘This is the whole elephant.’ ”

What this could mean for Alzheimer’s

The new guidelines could have an impact on Alzheimer’s research, as well. For one, experts say some high-profile drug trials may have suffered as a result of some patients having unidentified LATE — and thus not responding to treatment.

In fact, Nelson’s colleagues recently saw that firsthand: a patient, now deceased, who was part of an Alzheimer’s drug trial but developed dementia anyway.

“So, the clinical trial was a failure for Alzheimer’s disease,” Nelson said, “but it turns out he didn’t have Alzheimer’s disease. He had LATE.”

Nina Silverberg, director of the Alzheimer’s Disease Research Centers Program at the National Institute on Aging, said she suspects examples like this are not the majority — in part because people in clinical trials tend to be on the younger end of the spectrum.

“I’m sure it plays some part, but maybe not as much as one might think at first,” said Silverberg, who co-chaired the working group that led to the new paper.

Advances in testing had already shown that some patients in these trials lacked “the telltale signs of Alzheimer’s,” she said.

In some cases, perhaps it was LATE — “and it’s certainly possible that there are other, as yet undiscovered, pathologies that people may have,” she added.

“We could go back and screen all the people that had failed their Alzheimer’s disease therapies,” Nelson said. “But what we really need to do is go forward and try to get these people out of the Alzheimer’s clinical trials — and instead get them into their own clinical trials.”

Silverberg describes the new paper as “a roadmap” for research that could change as we come to discover more about the disease. And researchers can’t do it without a large, diverse group of patients, she added.

“It’s probably going to take years and research participants to help us understand all of that,” she said.

https://www.cnn.com/2019/04/30/health/dementia-late-alzheimers-study/index.html

Summary: Study identifies 104 high-risk genes for schizophrenia. One gene considered high-risk is also suspected in the development of autism.

Source: Vanderbilt University

Using a unique computational framework they developed, a team of scientist cyber-sleuths in the Vanderbilt University Department of Molecular Physiology and Biophysics and the Vanderbilt Genetics Institute (VGI) has identified 104 high-risk genes for schizophrenia.

Their discovery, which was reported April 15 in the journal Nature Neuroscience, supports the view that schizophrenia is a developmental disease, one which potentially can be detected and treated even before the onset of symptoms.

“This framework opens the door for several research directions,” said the paper’s senior author, Bingshan Li, PhD, associate professor of Molecular Physiology and Biophysics and an investigator in the VGI.

One direction is to determine whether drugs already approved for other, unrelated diseases could be repurposed to improve the treatment of schizophrenia. Another is to find in which cell types in the brain these genes are active along the development trajectory.

Ultimately, Li said, “I think we’ll have a better understanding of how prenatally these genes predispose risk, and that will give us a hint of how to potentially develop intervention strategies. It’s an ambitious goal … (but) by understanding the mechanism, drug development could be more targeted.”

Schizophrenia is a chronic, severe mental disorder characterized by hallucinations and delusions, “flat” emotional expression and cognitive difficulties.

Symptoms usually start between the ages of 16 and 30. Antipsychotic medications can relieve symptoms, but there is no cure for the disease.

Genetics plays a major role. While schizophrenia occurs in 1% of the population, the risk rises sharply to 50% for a person whose identical twin has the disease.

Recent genome-wide association studies (GWAS) have identified more than 100 loci, or fixed positions on different chromosomes, associated with schizophrenia. That may not be where high-risk genes are located, however. The loci could be regulating the activity of the genes at a distance — nearby or very far away.

To solve the problem, Li, with first authors Rui Chen, PhD, research instructor in Molecular Physiology and Biophysics, and postdoctoral research fellow Quan Wang, PhD, developed a computational framework they called the “Integrative Risk Genes Selector.”

The framework pulled the top genes from previously reported loci based on their cumulative supporting evidence from multi-dimensional genomics data as well as gene networks.

Which genes have high rates of mutation? Which are expressed prenatally? These are the kinds of questions a genetic “detective” might ask to identify and narrow the list of “suspects.”
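In spirit, that evidence-weighing can be sketched as a simple scoring exercise. The gene names, evidence types and weights below are hypothetical placeholders for illustration, not the published framework:

```python
# Hypothetical sketch of evidence-based gene ranking, loosely in the spirit of
# the "Integrative Risk Genes Selector" described above. Evidence types,
# weights and gene names are illustrative assumptions, not the authors' method.

# Each candidate gene at a GWAS locus carries several lines of evidence,
# scored 0-1 (e.g., burden of de novo mutations, prenatal brain expression,
# connectivity to known risk genes in a gene network).
candidates = {
    "locus_1": {
        "GENE_A": {"mutation_burden": 0.9, "prenatal_expression": 0.7, "network": 0.6},
        "GENE_B": {"mutation_burden": 0.2, "prenatal_expression": 0.4, "network": 0.3},
    },
    "locus_2": {
        "GENE_C": {"mutation_burden": 0.5, "prenatal_expression": 0.9, "network": 0.8},
    },
}

# Weights express how much each line of evidence contributes to the total.
weights = {"mutation_burden": 1.0, "prenatal_expression": 1.0, "network": 0.5}

def cumulative_score(evidence):
    """Weighted sum of the evidence supporting one candidate gene."""
    return sum(weights[kind] * value for kind, value in evidence.items())

# Rank candidates within each locus and keep the top-scoring gene.
high_risk = {
    locus: max(genes, key=lambda g: cumulative_score(genes[g]))
    for locus, genes in candidates.items()
}
print(high_risk)  # e.g. {'locus_1': 'GENE_A', 'locus_2': 'GENE_C'}
```

The real framework draws on many more genomic data dimensions, but the principle is the same: accumulate independent lines of evidence for each candidate gene and promote the strongest candidate at each locus.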

The result was a list of 104 high-risk genes, some of which encode proteins targeted in other diseases by drugs already on the market. One gene is suspected in the development of autism spectrum disorder.

Much work remains to be done. But, said Chen, “Our framework can push GWAS a step forward … to further identify genes.” It also could be employed to help track down genetic suspects in other complex diseases.

Also contributing to the study were Li’s lab members Qiang Wei, PhD, Ying Ji and Hai Yang, PhD; VGI investigators Xue Zhong, PhD, Ran Tao, PhD, James Sutcliffe, PhD, and VGI Director Nancy Cox, PhD.

Chen also credits investigators in the Vanderbilt Center for Neuroscience Drug Discovery — Colleen Niswender, PhD, Branden Stansley, PhD, and center Director P. Jeffrey Conn, PhD — for their critical input.

Funding: The study was supported by the Vanderbilt Analysis Center for the Genome Sequencing Program and National Institutes of Health grant HG009086.

https://neurosciencenews.com/high-risk-schizophrenia-genes-12021/

By Stephanie Pappas

The Big Bang is commonly thought of as the start of it all: About 13.8 billion years ago, the observable universe went boom and expanded into being.

But what were things like before the Big Bang?

Short answer: We don’t know. Long answer: It could have been a lot of things, each mind-bending in its own way.

The first thing to understand is what the Big Bang actually was.

“The Big Bang is a moment in time, not a point in space,” said Sean Carroll, a theoretical physicist at the California Institute of Technology and author of “The Big Picture: On the Origins of Life, Meaning and the Universe Itself” (Dutton, 2016).

So, scrap the image of a tiny speck of dense matter suddenly exploding outward into a void. For one thing, the universe at the Big Bang may not have been particularly small, Carroll said. Sure, everything in the observable universe today — a sphere with a diameter of about 93 billion light-years containing at least 2 trillion galaxies — was crammed into a space less than a centimeter across. But there could be plenty outside of the observable universe that Earthlings can’t see because it’s physically impossible for the light to have traveled that far in 13.8 billion years.
Thus, it’s possible that the universe at the Big Bang was teeny-tiny or infinitely large, Carroll said, because there’s no way to look back in time at the stuff we can’t even see today. All we really know is that it was very, very dense and that it very quickly got less dense.

As a corollary, there really isn’t anything outside the universe, because the universe is, by definition, everything. So, at the Big Bang, everything was denser and hotter than it is now, but there was no more an “outside” of it than there is today. As tempting as it is to take a godlike view and imagine you could stand in a void and look at the scrunched-up baby universe right before the Big Bang, that would be impossible, Carroll said. The universe didn’t expand into space; space itself expanded.

“No matter where you are in the universe, if you trace yourself back 14 billion years, you come to this point where it was extremely hot, dense and rapidly expanding,” he said.

No one knows exactly what was happening in the universe until 1 second after the Big Bang, when the universe cooled off enough for protons and neutrons to collide and stick together. Many scientists do think that the universe went through a process of exponential expansion called inflation during that first second. This would have smoothed out the fabric of space-time and could explain why matter is so evenly distributed in the universe today.

Before the bang

It’s possible that before the Big Bang, the universe was an infinite stretch of an ultrahot, dense material, persisting in a steady state until, for some reason, the Big Bang occurred. This extra-dense universe may have been governed by quantum mechanics, the physics of the extremely small scale, Carroll said. The Big Bang, then, would have represented the moment that classical physics took over as the major driver of the universe’s evolution.

For Stephen Hawking, this moment was all that mattered: Before the Big Bang, he said, events are unmeasurable, and thus undefined. Hawking called this the no-boundary proposal: Time and space, he said, are finite, but they don’t have any boundaries or starting or ending points, the same way that the planet Earth is finite but has no edge.

“Since events before the Big Bang have no observational consequences, one may as well cut them out of the theory and say that time began at the Big Bang,” he said in an interview on the National Geographic show “StarTalk” in 2018.

Or perhaps there was something else before the Big Bang that’s worth pondering. One idea is that the Big Bang isn’t the beginning of time, but rather that it was a moment of symmetry. In this idea, prior to the Big Bang, there was another universe, identical to this one but with entropy increasing toward the past instead of toward the future.

Increasing entropy, or increasing disorder in a system, is essentially the arrow of time, Carroll said, so in this mirror universe, time would run opposite to time in the modern universe and our universe would be in the past. Proponents of this theory also suggest that other properties of the universe would be flip-flopped in this mirror universe. For example, physicist David Sloan wrote in the University of Oxford Science Blog that asymmetries in molecules and ions (called chiralities) would be in opposite orientations to what they are in our universe.

A related theory holds that the Big Bang wasn’t the beginning of everything, but rather a moment in time when the universe switched from a period of contraction to a period of expansion. This “Big Bounce” notion suggests that there could be infinite Big Bangs as the universe expands, contracts and expands again. The problem with these ideas, Carroll said, is that there’s no explanation for why or how an expanding universe would contract and return to a low-entropy state.

Carroll and his colleague Jennifer Chen have their own pre-Big Bang vision. In 2004, the physicists suggested that perhaps the universe as we know it is the offspring of a parent universe from which a bit of space-time has ripped off.

It’s like a radioactive nucleus decaying, Carroll said: When a nucleus decays, it spits out an alpha or beta particle. The parent universe could do the same thing, except instead of particles, it spits out baby universes, perhaps infinitely. “It’s just a quantum fluctuation that lets it happen,” Carroll said. These baby universes are “literally parallel universes,” Carroll said, and don’t interact with or influence one another.

If that all sounds rather trippy, it is — because scientists don’t yet have a way to peer back to even the instant of the Big Bang, much less what came before it. There’s room to explore, though, Carroll said. The detection of gravitational waves from colliding black holes in 2015 opens the possibility that these waves could be used to solve fundamental mysteries about the universe’s expansion in that first crucial second.

Theoretical physicists also have work to do, Carroll said, like making more-precise predictions about how quantum forces like quantum gravity might work.

“We don’t even know what we’re looking for,” Carroll said, “until we have a theory.”

https://www.livescience.com/65254-what-happened-before-big-big.html

by Linda Geddes

You need only to look at families to see that height is inherited — and studies of identical twins and families have long confirmed that suspicion. About 80% of variation in height is down to genetics, they suggest. But since the human genome was sequenced nearly two decades ago, researchers have struggled to fully identify the genetic factors responsible.

Studies seeking the genes that govern height have identified hundreds of common gene variants linked to the trait. But the findings also posed a quandary: each variant had only a tiny effect on height, and together those effects didn’t amount to the genetic contribution predicted by family studies. This phenomenon, which occurs for many other traits and diseases, was dubbed missing heritability, and had even prompted some researchers to speculate that there’s something fundamentally wrong with our understanding of genetics.

Now, a study suggests that most of the missing heritability for height and body mass index (BMI) can, as some researchers had suspected, be found in rarer gene variants that had lain undiscovered until now.

“It is a reassuring paper because it suggests that there isn’t something terribly wrong with genetics,” says Tim Spector, a genetic epidemiologist at King’s College London. “It’s just that sorting it out is more complex than we thought.” The research was posted to the bioRxiv preprint server on 25 March.

Scouring the genome

To seek out the genetic factors that underlie diseases and traits, geneticists turn to mega-searches known as genome-wide association studies (GWAS). These scour the genomes of, typically, tens of thousands of people — or, increasingly, more than a million — for single-letter changes, or SNPs, in genes that commonly appear in individuals with a particular disease or that could explain a common trait such as height.

But GWAS have limitations. Because sequencing the entire genomes of thousands of people is expensive, GWAS themselves scan only a strategically selected set of SNPs, perhaps 500,000, in each person’s genome. That’s only a snapshot of the roughly six billion nucleotides — the building blocks of DNA — strung together in our genome. In turn, these 500,000 common variants would have been found from sequencing the genomes of just a few hundred people, says Timothy Frayling, a human geneticist at the University of Exeter, UK.
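At its core, a GWAS tests each genotyped SNP, one at a time, for a statistical association with the trait. The sketch below runs that per-SNP test on simulated data; it is only an illustration of the principle, not any study’s pipeline, which would add quality control, covariates and corrections for population structure:

```python
# Minimal per-SNP association test at the heart of a GWAS, on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_snps = 5000, 500          # a real GWAS scans ~500,000 SNPs or more

# Genotypes coded as allele counts 0/1/2; SNP 0 truly affects the trait.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))
height = 170 + 0.8 * genotypes[:, 0] + rng.normal(0, 5, n_people)

# Regress the trait on each SNP in turn and record the p-value.
p_values = np.array([
    stats.linregress(genotypes[:, j], height).pvalue for j in range(n_snps)
])

# Bonferroni-style threshold for the number of tests performed.
hits = np.where(p_values < 0.05 / n_snps)[0]
print("SNPs reaching significance:", hits)   # should recover SNP 0
```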

A team led by Peter Visscher at the Queensland Brain Institute in Brisbane, Australia, decided to investigate whether rarer SNPs than those typically scanned in GWAS might explain the missing heritability for height and BMI. They turned to whole-genome sequencing — performing a complete readout of all 6 billion bases — of 21,620 people. (The authors declined to comment on the preprint, because it is under submission at a journal.)

They relied on the simple, but powerful, principle that all people are related to some extent — albeit distantly — and that DNA can be used to calculate degrees of relatedness. Then, information on the people’s height and BMI could be combined to identify both common and rare SNPs that might be contributing to these traits.

Say, for instance, that a pair of third cousins is closer in height than a pair of second cousins is in a different family: that’s an indication that the third cousins’ height is mostly down to genetics, and the extent of that correlation will tell you how much, Frayling explains. “They used all of the genetic information, which enables you to work out how much of the relatedness was due to rarer things as well as the common things.”
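The preprint’s actual analysis rests on sophisticated variance-component models, but the underlying logic, asking whether people who are more alike genetically are also more alike in the trait, can be sketched with a genetic relationship matrix and a simple Haseman-Elston-style regression. Everything below is simulated and illustrative, not the authors’ code:

```python
# Rough illustration of relatedness-based heritability estimation:
# build a genetic relationship matrix (GRM) from standardized genotypes,
# then regress pairwise phenotype products on pairwise relatedness.
import numpy as np

rng = np.random.default_rng(1)
n, m = 2000, 5000                                # people, SNPs
freqs = rng.uniform(0.05, 0.5, m)
geno = rng.binomial(2, freqs, size=(n, m)).astype(float)

# Standardize genotypes; the GRM then measures genetic similarity per pair.
Z = (geno - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))
grm = Z @ Z.T / m

# Simulate a trait with true heritability of about 0.8.
betas = rng.normal(0, np.sqrt(0.8 / m), m)
y = Z @ betas + rng.normal(0, np.sqrt(0.2), n)
y = (y - y.mean()) / y.std()

# Haseman-Elston idea: regress y_i * y_j on relatedness; the slope is ~h^2.
pairs = np.triu_indices(n, k=1)
slope = np.polyfit(grm[pairs], np.outer(y, y)[pairs], 1)[0]
print(f"estimated heritability ~ {slope:.2f}")   # roughly the simulated 0.8
```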

As a result, the researchers captured genetic differences that occur in only 1 in 500, or even 1 in 5,000, people.

And by using information on both common and rare variants, the researchers arrived at roughly the same estimates of heritability as those indicated by twin studies. For height, Visscher and colleagues estimate a heritability of 79%, and for BMI, 40%. This means that if you take a large group of people, 79% of the height differences would be due to genes rather than to environmental factors, such as nutrition.

Complex processes

The researchers also suggest how the previously undiscovered variants might be contributing to physical traits. Tentatively, they found that these rare variants were slightly enriched in protein-coding regions of the genome, and that they had an increased likelihood of being disruptive to these regions, notes Terence Capellini, an evolutionary biologist at Harvard University in Cambridge, Massachusetts. This indicates that the rare variants might partly influence height by affecting protein-coding regions instead of the rest of the genome — the vast majority of which does not include instructions for making proteins, but might influence their expression.

The rarity of the variants also suggests that natural selection could be weeding them out, perhaps because they are harmful in some way.

The complexity of heritability means that understanding the roots of many common diseases — necessary if researchers are to develop effective therapies against them — will take considerably more time and money, and it could involve sequencing hundreds of thousands or even millions of whole genomes to identify the rare variants that explain a substantial portion of the illnesses’ genetic components.

The study reveals only the total amount of rare variants contributing to these common traits — not which ones are important, says Spector. “The next stage is to go and work out which of these rare variants are important for traits or diseases that you want to get a drug for.”

Nature 568, 444-445 (2019)

doi: 10.1038/d41586-019-01157-y

https://www.nature.com/articles/d41586-019-01157-y

Summary: A new study looks at Leonardo da Vinci’s contribution to neuroscience and the advancement of modern sciences.

Source: Profiles, Inc

May 2, 2019, marks the 500th anniversary of Leonardo da Vinci’s death. A cultural icon, artist, engineer and experimentalist of the Renaissance period, Leonardo continues to inspire people around the globe. Jonathan Pevsner, PhD, professor and research scientist at the Kennedy Krieger Institute, wrote an article featured in the April edition of The Lancet titled, “Leonardo da Vinci’s studies of the brain.” In the piece, Pevsner highlights the exquisite drawings and curiosity, dedication and scientific rigor that led Leonardo to make penetrating insights into how the brain functions.

Through his research, Pevsner shares that Leonardo was the first to identify the olfactory nerve as a cranial nerve. He details how Leonardo performed intricate studies on the peripheral nervous system, challenging the findings of earlier authorities and introducing methods centuries earlier than other anatomists and physiologists. Pevsner also delves into Leonardo’s pioneering experiment on the ventricles by replicating his technique of injecting wax to make a cast of the ventricles in the brain to determine their overall shape and size. This further demonstrates Leonardo’s original thinking and advanced intelligence.

“Leonardo’s work reflects the emergence of the modern scientific era and forms a key part of his integrative approach to art and science,” said Pevsner.

“He asked questions about how the brain works in health and in disease. He sought to understand changes in the brain that occur in epilepsy, or why the mental state of a pregnant mother can directly affect the physical well-being of her child. At the Kennedy Krieger Institute, many of us struggle to answer the same questions. While science and technology have advanced at a breathtaking pace, we still need Leonardo’s qualities of passion, curiosity, the ability to visualize knowledge, and clear thinking to guide us forward.”

While Pevsner is viewed as an expert in Leonardo da Vinci, his main profession and passion is research into the molecular basis of childhood and adult brain disorders in his lab at Kennedy Krieger Institute. His lab reported the mutation that causes Sturge-Weber syndrome, and ongoing studies include bipolar disorder, autism spectrum disorder and schizophrenia. He is the author of the textbook, Bioinformatics and Functional Genomics.

https://neurosciencenews.com/da-vinci-brain-knowledge-11070/


Two-photon imaging shows neurons firing in a mouse brain. Image credit: Yiyang Gong, Duke University.

Summary: Convolutional neural network model significantly outperforms previous methods and is as accurate as humans in segmenting active and overlapping neurons.

Source: Duke University

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, based on using artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals. These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process. Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is fussy and slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time — data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In their framework, this team created an algorithm that could process both spatial and timing information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving the accuracy.
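The Duke team’s published architecture isn’t detailed in this article, but the basic idea, a network that looks at both space and time in the video before deciding which pixels belong to active neurons, can be illustrated with a toy model. The layer choices below are assumptions for illustration only, not the published network:

```python
# Toy spatiotemporal segmentation network: maps a short two-photon video clip
# to a per-pixel neuron-probability mask. Architecture is illustrative only.
import torch
import torch.nn as nn

class ToySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions mix information across time (frames) and space.
        self.encode = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # After collapsing the time axis, predict a per-pixel probability.
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, clip):                     # clip: (batch, 1, frames, H, W)
        features = self.encode(clip)             # (batch, 16, frames, H, W)
        pooled = features.mean(dim=2)            # average over time
        return torch.sigmoid(self.head(pooled))  # (batch, 1, H, W) mask

model = ToySegmenter()
clip = torch.randn(1, 1, 16, 64, 64)             # 16 frames of 64x64 pixels
mask = model(clip)
print(mask.shape)                                 # torch.Size([1, 1, 64, 64])
```

Training such a model against human-drawn masks with a pixel-wise loss (for example, binary cross-entropy) is, in essence, what ‘mimicking the segmentation of a human analyst’ amounts to in practice.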

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.

https://neurosciencenews.com/artificial-intelligence-neurons-11076/

Sydney Brenner was one of the first to view James Watson and Francis Crick’s double helix model of DNA in April 1953. The 26-year-old biologist from South Africa was then a graduate student at the University of Oxford, UK. So enthralled was he by the insights from the structure that he determined on the spot to devote his life to understanding genes.

Iconoclastic and provocative, he became one of the leading biologists of the twentieth century. Brenner shared in the 2002 Nobel Prize in Physiology or Medicine for deciphering the genetics of programmed cell death and animal development, including how the nervous system forms. He was at the forefront of the 1975 Asilomar meeting to discuss the appropriate use of emerging abilities to alter DNA, was a key proponent of the Human Genome Project, and much more. He died on 5 April.

Brenner was born in 1927 in Germiston, South Africa, to poor immigrant parents. Bored by school, he preferred to read books borrowed (sometimes permanently) from the public library, or to dabble with a self-assembled chemistry set. His extraordinary intellect — he was reading newspapers by the age of four — did not go unnoticed. His teachers secured an award from the town council to send him to medical school.

Brenner entered the University of the Witwatersrand in Johannesburg at the age of 15 (alongside Aaron Klug, another science-giant-in-training). Here, certain faculty members, notably the anatomist Raymond Dart, and fellow research-oriented medical students enriched his interest in science. On finishing his six-year course, his youth legally precluded him from practising medicine, so he devoted two years to learning cell biology at the bench. His passion for research was such that he rarely set foot on the wards — and he initially failed his final examination in internal medicine.


Sydney Brenner (right) with John Sulston; the two shared the Nobel Prize in Physiology or Medicine with Robert Horvitz in 2002. Credit: Steve Russell/Toronto Star/Getty

In 1952 Brenner won a scholarship to the Department of Physical Chemistry at Oxford. His adviser, Cyril Hinshelwood, wanted to pursue the idea that the environment altered observable characteristics of bacteria. Brenner tried to convince him of the role of genetic mutation. Two years later, with doctorate in hand, Brenner spent the summer of 1954 in the United States visiting labs, including Cold Spring Harbor in New York state. Here he caught up with Watson and Crick again.

Impressed, Crick recruited the young South African to the University of Cambridge, UK, in 1956. In the early 1960s, using just bacteria and bacteriophages, Crick and Brenner deciphered many of the essentials of gene function in a breathtaking series of studies.

Brenner had proved theoretically in the mid-1950s that the genetic code is ‘non-overlapping’ — each nucleotide is part of only one triplet (three nucleotides specify each amino acid in a protein) and successive ‘triplet codons’ are read in order. In 1961, Brenner and Crick confirmed this in the lab. The same year, Brenner, with François Jacob and Matthew Meselson, published their demonstration of the existence of messenger RNA. Over the next two years, often with Crick, Brenner showed how the synthesis of proteins encoded by DNA sequences is terminated.
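That non-overlapping, in-order reading of triplets is easy to make concrete in code. The tiny codon table below is deliberately incomplete and purely illustrative:

```python
# Reading a coding sequence in non-overlapping triplets, as Brenner and Crick
# established experimentally. The codon table here is a small partial example.
codon_table = {"ATG": "Met", "AAA": "Lys", "GAT": "Asp", "TAA": "STOP"}

def translate(dna):
    """Step through the sequence three bases at a time, never re-reading a base."""
    protein = []
    for i in range(0, len(dna) - 2, 3):        # non-overlapping reading frame
        amino_acid = codon_table.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":               # chain termination
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGAAAGATTAA"))   # ['Met', 'Lys', 'Asp']
```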

This intellectual partnership dissolved when Brenner began to focus on whole organisms in the mid-1960s. He finally alighted on Caenorhabditis elegans. Studies of this tiny worm in Brenner’s arm of the legendary Laboratory of Molecular Biology (LMB) in Cambridge led to the Nobel for Brenner, Robert Horvitz and John Sulston.


Maxine Singer, Norton Zinder, Sydney Brenner and Paul Berg (left to right) at the 1975 meeting on recombinant DNA technology in Asilomar, California. Credit: NAS

And his contributions went well beyond the lab. In 1975, with Paul Berg and others, he organized a meeting at Asilomar, California, to draft a position paper on the United States’ use of recombinant DNA technology — introducing genes from one species into another, usually bacteria. Brenner was influential in persuading attendees to treat ethical and societal concerns seriously. He stressed the importance of thoughtful guidelines for deploying the technology to avoid overly restrictive regulation.

He served as director of the LMB for about a decade. Despite describing the experience as the biggest mistake in his life, he took the lab (with its stable of Nobel laureates and distinguished staff) to unprecedented prominence. In 1986, he moved to a new Medical Research Council (MRC) unit of molecular genetics at the city’s Addenbrooke’s Hospital, and began work in the emerging discipline of evolutionary genomics. Brenner also orchestrated Britain’s involvement in the Human Genome Project in the early 1990s.

From the late 1980s, Brenner steered the development of biomedical research in Singapore. Here he masterminded Biopolis, a spectacular conglomerate of chrome and glass buildings dedicated to biomedical research. He also helped to guide the Janelia Farm campus of the Howard Hughes Medical Institute in Ashburn, Virginia, and to restructure molecular biology in Japan.

Brenner dazzled, amused and sometimes offended audiences with his humour, irony and disdain of authority and dogma — prompting someone to describe him as “one of biology’s mischievous children; the witty trickster who delights in stirring things up.” His popular columns in Current Biology (titled ‘Loose Ends’ and, later, ‘False Starts’) in the mid-1990s led some seminar hosts to introduce him as Uncle Syd, a pen name he ultimately adopted.

Sydney was aware of the debt he owed to being in the right place at the right time. He attributed his successes to having to learn scientific independence in a remote part of the world, with few role models and even fewer mentors. He recounted the importance of arriving in Oxford with few scientific biases, and leaving with the conviction that seeing the double helix model one chilly April morning would be a defining moment in his life.

The Brenner laboratories (he often operated more than one) spawned a generation of outstanding protégés, including five Nobel laureates. Those who dedicated their careers to understanding the workings of C. elegans now number in the thousands. Science will be considerably poorer without Sydney. But his name will live forever in the annals of biology.

https://www.nature.com/articles/d41586-019-01192-9