Posts Tagged ‘science’

Summary: Study identifies 104 high-risk genes for schizophrenia. One gene considered high-risk is also suspected in the development of autism.

Source: Vanderbilt University

Using a unique computational framework they developed, a team of scientist cyber-sleuths in the Vanderbilt University Department of Molecular Physiology and Biophysics and the Vanderbilt Genetics Institute (VGI) has identified 104 high-risk genes for schizophrenia.

Their discovery, which was reported April 15 in the journal Nature Neuroscience, supports the view that schizophrenia is a developmental disease, one which potentially can be detected and treated even before the onset of symptoms.

“This framework opens the door for several research directions,” said the paper’s senior author, Bingshan Li, PhD, associate professor of Molecular Physiology and Biophysics and an investigator in the VGI.

One direction is to determine whether drugs already approved for other, unrelated diseases could be repurposed to improve the treatment of schizophrenia. Another is to identify the brain cell types in which these genes are active along the developmental trajectory.

Ultimately, Li said, “I think we’ll have a better understanding of how prenatally these genes predispose risk, and that will give us a hint of how to potentially develop intervention strategies. It’s an ambitious goal … (but) by understanding the mechanism, drug development could be more targeted.”

Schizophrenia is a chronic, severe mental disorder characterized by hallucinations and delusions, “flat” emotional expression and cognitive difficulties.

Symptoms usually start between the ages of 16 and 30. Antipsychotic medications can relieve symptoms, but there is no cure for the disease.

Genetics plays a major role. While schizophrenia occurs in 1% of the population, the risk rises sharply to 50% for a person whose identical twin has the disease.

Recent genome-wide association studies (GWAS) have identified more than 100 loci, or fixed positions on different chromosomes, associated with schizophrenia. That may not be where high-risk genes are located, however. The loci could be regulating the activity of the genes at a distance — nearby or very far away.

To solve the problem, Li, with first authors Rui Chen, PhD, research instructor in Molecular Physiology and Biophysics, and postdoctoral research fellow Quan Wang, PhD, developed a computational framework they called the “Integrative Risk Genes Selector.”

The framework pulled the top genes from previously reported loci based on their cumulative supporting evidence from multi-dimensional genomics data as well as gene networks.

Which genes have high rates of mutation? Which are expressed prenatally? These are the kinds of questions a genetic “detective” might ask to identify and narrow the list of “suspects.”
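
To make the idea concrete, here is a toy Python sketch of this kind of evidence-combining ranking. It is an illustration only, not the published Integrative Risk Genes Selector: the evidence categories, the [0, 1] scores and the equal weights are all hypothetical assumptions, and the real framework integrates its genomic and network evidence with far more statistical care.

```python
# Toy evidence-combining ranker in the spirit of the framework
# described above. NOT the published method; the feature names,
# weights and weighted-sum scoring are hypothetical.
def rank_candidates(genes):
    """genes: dict mapping gene name -> dict of evidence scores,
    each normalized to [0, 1]."""
    weights = {
        "mutation_intolerance": 1.0,   # hypothetical, equal weights
        "prenatal_expression": 1.0,
        "network_connectivity": 1.0,
    }
    totals = {
        gene: sum(w * evidence.get(feature, 0.0)
                  for feature, w in weights.items())
        for gene, evidence in genes.items()
    }
    # genes with the most cumulative supporting evidence come first
    return sorted(totals, key=totals.get, reverse=True)

candidates = {
    "GENE_A": {"mutation_intolerance": 0.9, "prenatal_expression": 0.8},
    "GENE_B": {"network_connectivity": 0.4},
}
print(rank_candidates(candidates))  # ['GENE_A', 'GENE_B']
```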

The result was a list of 104 high-risk genes, some of which encode proteins targeted in other diseases by drugs already on the market. One gene is suspected in the development of autism spectrum disorder.

Much work remains to be done. But, said Chen, “Our framework can push GWAS a step forward … to further identify genes.” It also could be employed to help track down genetic suspects in other complex diseases.

Also contributing to the study were Li’s lab members Qiang Wei, PhD, Ying Ji and Hai Yang, PhD; VGI investigators Xue Zhong, PhD, Ran Tao, PhD, James Sutcliffe, PhD, and VGI Director Nancy Cox, PhD.

Chen also credits investigators in the Vanderbilt Center for Neuroscience Drug Discovery — Colleen Niswender, PhD, Branden Stansley, PhD, and center Director P. Jeffrey Conn, PhD — for their critical input.

Funding: The study was supported by the Vanderbilt Analysis Center for the Genome Sequencing Program and National Institutes of Health grant HG009086.

https://neurosciencenews.com/high-risk-schizophrenia-genes-12021/


By Stephanie Pappas

The Big Bang is commonly thought of as the start of it all: About 13.8 billion years ago, the observable universe went boom and expanded into being.

But what were things like before the Big Bang?

Short answer: We don’t know. Long answer: It could have been a lot of things, each mind-bending in its own way.

The first thing to understand is what the Big Bang actually was.

“The Big Bang is a moment in time, not a point in space,” said Sean Carroll, a theoretical physicist at the California Institute of Technology and author of “The Big Picture: On the Origins of Life, Meaning and the Universe Itself” (Dutton, 2016).

So, scrap the image of a tiny speck of dense matter suddenly exploding outward into a void. For one thing, the universe at the Big Bang may not have been particularly small, Carroll said. Sure, everything in the observable universe today — a sphere with a diameter of about 93 billion light-years containing at least 2 trillion galaxies — was crammed into a space less than a centimeter across. But there could be plenty outside of the observable universe that Earthlings can’t see because it’s physically impossible for the light to have traveled that far in 13.8 billion years.

Thus, it’s possible that the universe at the Big Bang was teeny-tiny or infinitely large, Carroll said, because there’s no way to look back in time at the stuff we can’t even see today. All we really know is that it was very, very dense and that it very quickly got less dense.

As a corollary, there really isn’t anything outside the universe, because the universe is, by definition, everything. So, at the Big Bang, everything was denser and hotter than it is now, but there was no more an “outside” of it than there is today. As tempting as it is to take a godlike view and imagine you could stand in a void and look at the scrunched-up baby universe right before the Big Bang, that would be impossible, Carroll said. The universe didn’t expand into space; space itself expanded.

“No matter where you are in the universe, if you trace yourself back 14 billion years, you come to this point where it was extremely hot, dense and rapidly expanding,” he said.

No one knows exactly what was happening in the universe until 1 second after the Big Bang, when the universe cooled off enough for protons and neutrons to collide and stick together. Many scientists do think that the universe went through a process of exponential expansion called inflation during that first second. This would have smoothed out the fabric of space-time and could explain why matter is so evenly distributed in the universe today.

Before the bang

It’s possible that before the Big Bang, the universe was an infinite stretch of an ultrahot, dense material, persisting in a steady state until, for some reason, the Big Bang occurred. This extra-dense universe may have been governed by quantum mechanics, the physics of the extremely small scale, Carroll said. The Big Bang, then, would have represented the moment that classical physics took over as the major driver of the universe’s evolution.

For Stephen Hawking, this moment was all that mattered: Before the Big Bang, he said, events are unmeasurable, and thus undefined. Hawking called this the no-boundary proposal: Time and space, he said, are finite, but they don’t have any boundaries or starting or ending points, the same way that the planet Earth is finite but has no edge.

“Since events before the Big Bang have no observational consequences, one may as well cut them out of the theory and say that time began at the Big Bang,” he said in an interview on the National Geographic show “StarTalk” in 2018.

Or perhaps there was something else before the Big Bang that’s worth pondering. One idea is that the Big Bang isn’t the beginning of time, but rather that it was a moment of symmetry. In this idea, prior to the Big Bang, there was another universe, identical to this one but with entropy increasing toward the past instead of toward the future.

Increasing entropy, or increasing disorder in a system, is essentially the arrow of time, Carroll said, so in this mirror universe time would run opposite to time in the modern universe, and our universe would lie in its past. Proponents of this theory also suggest that other properties of the universe would be flip-flopped in the mirror universe. For example, physicist David Sloan wrote in the University of Oxford Science Blog that asymmetries in molecules and ions (called chiralities) would be in orientations opposite to those in our universe.

A related theory holds that the Big Bang wasn’t the beginning of everything, but rather a moment in time when the universe switched from a period of contraction to a period of expansion. This “Big Bounce” notion suggests that there could be infinite Big Bangs as the universe expands, contracts and expands again. The problem with these ideas, Carroll said, is that there’s no explanation for why or how an expanding universe would contract and return to a low-entropy state.

Carroll and his colleague Jennifer Chen have their own pre-Big Bang vision. In 2004, the physicists suggested that perhaps the universe as we know it is the offspring of a parent universe from which a bit of space-time has ripped off.

It’s like a radioactive nucleus decaying, Carroll said: When a nucleus decays, it spits out an alpha or beta particle. The parent universe could do the same thing, except instead of particles, it spits out baby universes, perhaps infinitely. “It’s just a quantum fluctuation that lets it happen,” Carroll said. These baby universes are “literally parallel universes,” Carroll said, and don’t interact with or influence one another.

If that all sounds rather trippy, it is; scientists don’t yet have a way to peer back to even the instant of the Big Bang, much less what came before it. There’s room to explore, though, Carroll said. The 2015 detection of gravitational waves from colliding black holes opens the possibility that these waves could be used to solve fundamental mysteries about the universe’s expansion in that first crucial second.

Theoretical physicists also have work to do, Carroll said, like making more-precise predictions about how quantum forces like quantum gravity might work.

“We don’t even know what we’re looking for,” Carroll said, “until we have a theory.”

https://www.livescience.com/65254-what-happened-before-big-big.html

by Linda Geddes

You need only to look at families to see that height is inherited — and studies of identical twins and families have long confirmed that suspicion. About 80% of variation in height is down to genetics, they suggest. But since the human genome was sequenced nearly two decades ago, researchers have struggled to fully identify the genetic factors responsible.

Studies seeking the genes that govern height have identified hundreds of common gene variants linked to the trait. But the findings also posed a quandary: each variant had a tiny effect on height, and together they didn’t amount to the genetic contribution predicted by family studies. This phenomenon, which occurs for many other traits and diseases, was dubbed missing heritability, and it even prompted some researchers to speculate that there is something fundamentally wrong with our understanding of genetics.

Now, a study suggests that most of the missing heritability for height and body mass index (BMI) can, as some researchers had suspected, be found in rarer gene variants that had lain undiscovered until now.

“It is a reassuring paper because it suggests that there isn’t something terribly wrong with genetics,” says Tim Spector, a genetic epidemiologist at King’s College London. “It’s just that sorting it out is more complex than we thought.” The research was posted to the bioRxiv preprint server on 25 March.

Scouring the genome

To seek out the genetic factors that underlie diseases and traits, geneticists turn to mega-searches known as genome-wide association studies (GWAS). These scour the genomes of, typically, tens of thousands of people — or, increasingly, more than a million — for single-letter DNA variants, or SNPs, that appear more often in individuals with a particular disease or that could explain a common trait such as height.

But GWAS have limitations. Because sequencing the entire genomes of thousands of people is expensive, GWAS themselves scan only a strategically selected set of SNPs, perhaps 500,000, in each person’s genome. That’s only a snapshot of the roughly six billion nucleotides — the building blocks of DNA — strung together in our genome. In turn, these 500,000 common variants would have been found from sequencing the genomes of just a few hundred people, says Timothy Frayling, a human geneticist at the University of Exeter, UK.
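
As a rough illustration of what such a scan computes, the sketch below tests each SNP, one at a time, for association with a case/control phenotype using a simple allele-count chi-square test. This is the textbook core of a GWAS, not the pipeline of any particular study, and the data layout is assumed.

```python
# Minimal per-SNP association scan (illustrative only).
import numpy as np
from scipy.stats import chi2_contingency

def gwas_scan(genotypes, phenotype):
    """genotypes: (n_people, n_snps) array of 0/1/2 alt-allele counts.
    phenotype: (n_people,) array of 0 (control) or 1 (case).
    Returns one chi-square p-value per SNP."""
    pvals = []
    for snp in genotypes.T:
        table = np.zeros((2, 2))
        for status in (0, 1):
            carriers = snp[phenotype == status]
            alt = carriers.sum()               # alt-allele count
            ref = 2 * carriers.size - alt      # remaining alleles
            table[status] = [ref, alt]
        pvals.append(chi2_contingency(table)[1])
    return np.array(pvals)
```

With 500,000 SNPs, the scan repeats this test 500,000 times, which is why GWAS require stringent significance thresholds to avoid false positives.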

A team led by Peter Visscher at the Queensland Brain Institute in Brisbane, Australia, decided to investigate whether rarer SNPs than those typically scanned in GWAS might explain the missing heritability for height and BMI. They turned to whole-genome sequencing — performing a complete readout of all 6 billion bases — of 21,620 people. (The authors declined to comment on the preprint, because it is under submission at a journal.)

They relied on the simple, but powerful, principle that all people are related to some extent — albeit distantly — and that DNA can be used to calculate degrees of relatedness. Then, information on the people’s height and BMI could be combined to identify both common and rare SNPs that might be contributing to these traits.

Say, for instance, that a pair of third cousins is closer in height than a pair of second cousins is in a different family: that’s an indication that the third cousins’ height is mostly down to genetics, and the extent of that correlation will tell you how much, Frayling explains. “They used all of the genetic information, which enables you to work out how much of the relatedness was due to rarer things as well as the common things.”

As a result, the researchers captured genetic differences that occur in only 1 in 500, or even 1 in 5,000, people.
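
In spirit, the calculation pairs genome-wide relatedness with phenotypic similarity. The Python sketch below, illustrative only and far simpler than the study’s variance-component analysis, builds a genetic relationship matrix from standardized genotypes and then estimates heritability with a Haseman-Elston-style regression: the more related a pair is, the more similar their trait values should be, and the slope of that relationship estimates heritability.

```python
# Illustrative heritability estimate from relatedness (not the
# authors' pipeline). Assumes a complete 0/1/2 genotype matrix.
import numpy as np

def relationship_matrix(G):
    """G: (n_people, n_snps) genotype matrix. Returns the genetic
    relationship matrix: average standardized allele sharing."""
    Z = (G - G.mean(axis=0)) / G.std(axis=0)
    return Z @ Z.T / G.shape[1]

def heritability(K, y):
    """Haseman-Elston-style regression: regress the product of each
    pair's standardized trait values on their relatedness. The slope
    approximates the trait's heritability."""
    y = (y - y.mean()) / y.std()
    pairs = np.triu_indices_from(K, k=1)   # each distinct pair once
    slope, _ = np.polyfit(K[pairs], np.outer(y, y)[pairs], 1)
    return slope
```

Rarer variants are poorly tagged by the common SNPs on genotyping arrays, so only whole-genome sequencing makes the relatedness at those sites, and hence their contribution to the estimate, visible.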

And by using information on both common and rare variants, the researchers arrived at roughly the same estimates of heritability as those indicated by twin studies. For height, Visscher and colleagues estimate a heritability of 79%, and for BMI, 40%. This means that if you take a large group of people, 79% of the height differences would be due to genes rather than to environmental factors, such as nutrition.

Complex processes

The researchers also suggest how the previously undiscovered variants might be contributing to physical traits. Tentatively, they found that these rare variants were slightly enriched in protein-coding regions of the genome, and that they had an increased likelihood of being disruptive to these regions, notes Terence Capellini, an evolutionary biologist at Harvard University in Cambridge, Massachusetts. This indicates that the rare variants might partly influence height by affecting protein-coding regions instead of the rest of the genome — the vast majority of which does not include instructions for making proteins, but might influence their expression.

The rarity of the variants also suggests that natural selection could be weeding them out, perhaps because they are harmful in some way.

The complexity of heritability means that understanding the roots of many common diseases — necessary if researchers are to develop effective therapies against them — will take considerably more time and money, and it could involve sequencing hundreds of thousands or even millions of whole genomes to identify the rare variants that explain a substantial portion of the illnesses’ genetic components.

The study reveals only the total amount of rare variants contributing to these common traits — not which ones are important, says Spector. “The next stage is to go and work out which of these rare variants are important for traits or diseases that you want to get a drug for.”

Nature 568, 444-445 (2019)

doi: 10.1038/d41586-019-01157-y

https://www.nature.com/articles/d41586-019-01157-y

Summary: A new study looks at Leonardo da Vinci’s contribution to neuroscience and the advancement of modern sciences.

Source: Profiles, Inc

May 2, 2019, marks the 500th anniversary of Leonardo da Vinci’s death. A cultural icon, artist, engineer and experimentalist of the Renaissance period, Leonardo continues to inspire people around the globe. Jonathan Pevsner, PhD, professor and research scientist at the Kennedy Krieger Institute, wrote an article featured in the April edition of The Lancet titled, “Leonardo da Vinci’s studies of the brain.” In the piece, Pevsner highlights the exquisite drawings and curiosity, dedication and scientific rigor that led Leonardo to make penetrating insights into how the brain functions.

Through his research, Pevsner shares that Leonardo was the first to identify the olfactory nerve as a cranial nerve. He details how Leonardo performed intricate studies on the peripheral nervous system, challenging the findings of earlier authorities and introducing methods centuries earlier than other anatomists and physiologists. Pevsner also delves into Leonardo’s pioneering experiment on the ventricles by replicating his technique of injecting wax to make a cast of the ventricles in the brain to determine their overall shape and size. This further demonstrates Leonardo’s original thinking and advanced intelligence.

“Leonardo’s work reflects the emergence of the modern scientific era and forms a key part of his integrative approach to art and science,” said Pevsner.

“He asked questions about how the brain works in health and in disease. He sought to understand changes in the brain that occur in epilepsy, or why the mental state of a pregnant mother can directly affect the physical well-being of her child. At the Kennedy Krieger Institute, many of us struggle to answer the same questions. While science and technology have advanced at a breathtaking pace, we still need Leonardo’s qualities of passion, curiosity, the ability to visualize knowledge, and clear thinking to guide us forward.”

While Pevsner is viewed as an expert in Leonardo da Vinci, his main profession and passion is research into the molecular basis of childhood and adult brain disorders in his lab at Kennedy Krieger Institute. His lab reported the mutation that causes Sturge-Weber syndrome, and ongoing studies include bipolar disorder, autism spectrum disorder and schizophrenia. He is the author of the textbook, Bioinformatics and Functional Genomics.

https://neurosciencenews.com/da-vinci-brain-knowledge-11070/


Two-photon imaging shows neurons firing in a mouse brain. Recordings like this enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors. The image is credited to Yiyang Gong, Duke University.

Summary: Convolutional neural network model significantly outperforms previous methods and is as accurate as humans in segmenting active and overlapping neurons.

Source: Duke University

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, based on using artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals. These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process. Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is fussy and slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time — data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In their framework, this team created an algorithm that could process both spatial and timing information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving the accuracy.
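
The following is a small PyTorch sketch of that idea, a toy network rather than the Duke team’s published model: 3D convolutions mix information across both space and the time axis of the video, and a final per-pixel score marks likely neurons.

```python
# Toy spatiotemporal segmentation network (illustrative only).
import torch
import torch.nn as nn

class NeuronSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions see spatial (H, W) and temporal (T) context
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # collapse time, then score each pixel as neuron vs. background
        self.head = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, video):                 # video: (batch, 1, T, H, W)
        x = self.features(video)
        x = x.mean(dim=2)                     # pool over time
        return torch.sigmoid(self.head(x))    # per-pixel neuron probability

mask = NeuronSegmenter()(torch.randn(1, 1, 16, 64, 64))
print(mask.shape)  # torch.Size([1, 1, 64, 64])
```

Training such a network on masks drawn by human analysts is what lets it mimic human segmentation at a fraction of the time.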

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.

https://neurosciencenews.com/artificial-intelligence-neurons-11076/

Sydney Brenner was one of the first to view James Watson and Francis Crick’s double helix model of DNA in April 1953. The 26-year-old biologist from South Africa was then a graduate student at the University of Oxford, UK. So enthralled was he by the insights from the structure that he determined on the spot to devote his life to understanding genes.

Iconoclastic and provocative, he became one of the leading biologists of the twentieth century. Brenner shared in the 2002 Nobel Prize in Physiology or Medicine for deciphering the genetics of programmed cell death and animal development, including how the nervous system forms. He was at the forefront of the 1975 Asilomar meeting to discuss the appropriate use of emerging abilities to alter DNA, was a key proponent of the Human Genome Project, and much more. He died on 5 April.

Brenner was born in 1927 in Germiston, South Africa to poor immigrant parents. Bored by school, he preferred to read books borrowed (sometimes permanently) from the public library, or to dabble with a self-assembled chemistry set. His extraordinary intellect — he was reading newspapers by the age of four — did not go unnoticed. His teachers secured an award from the town council to send him to medical school.

Brenner entered the University of the Witwatersrand in Johannesburg at the age of 15 (alongside Aaron Klug, another science-giant-in-training). Here, certain faculty members, notably the anatomist Raymond Dart, and fellow research-oriented medical students enriched his interest in science. When he finished his six-year course, his youth legally precluded him from practising medicine, so he devoted two years to learning cell biology at the bench. His passion for research was such that he rarely set foot on the wards — and he initially failed his final examination in internal medicine.


Sydney Brenner (right) with John Sulston; the two shared the 2002 Nobel Prize in Physiology or Medicine with Robert Horvitz. Credit: Steve Russell/Toronto Star/Getty

In 1952 Brenner won a scholarship to the Department of Physical Chemistry at Oxford. His adviser, Cyril Hinshelwood, wanted to pursue the idea that the environment altered observable characteristics of bacteria. Brenner tried to convince him of the role of genetic mutation. Two years later, with doctorate in hand, Brenner spent the summer of 1954 in the United States visiting labs, including Cold Spring Harbor in New York state. Here he caught up with Watson and Crick again.

Impressed, Crick recruited the young South African to the University of Cambridge, UK, in 1956. In the early 1960s, using just bacteria and bacteriophages, Crick and Brenner deciphered many of the essentials of gene function in a breathtaking series of studies.

Brenner had proved theoretically in the mid-1950s that the genetic code is ‘non-overlapping’ — each nucleotide is part of only one triplet (three nucleotides specify each amino acid in a protein) and successive ‘triplet codons’ are read in order. In 1961, Brenner and Crick confirmed this in the lab. The same year, Brenner, with François Jacob and Matthew Meselson, published their demonstration of the existence of messenger RNA. Over the next two years, often with Crick, Brenner showed how the synthesis of proteins encoded by DNA sequences is terminated.

This intellectual partnership dissolved when Brenner began to focus on whole organisms in the mid-1960s. He finally alighted on Caenorhabditis elegans. Studies of this tiny worm in Brenner’s arm of the legendary Laboratory of Molecular Biology (LMB) in Cambridge led to the Nobel for Brenner, Robert Horvitz and John Sulston.


Maxine Singer, Norton Zinder, Sydney Brenner and Paul Berg (left to right) at the 1975 meeting on recombinant DNA technology in Asilomar, California. Credit: NAS

And his contributions went well beyond the lab. In 1975, with Paul Berg and others, he organized a meeting at Asilomar, California, to draft a position paper on the United States’ use of recombinant DNA technology — introducing genes from one species into another, usually bacteria. Brenner was influential in persuading attendees to treat ethical and societal concerns seriously. He stressed the importance of thoughtful guidelines for deploying the technology to avoid overly restrictive regulation.

He served as director of the LMB for about a decade. Despite describing the experience as the biggest mistake in his life, he took the lab (with its stable of Nobel laureates and distinguished staff) to unprecedented prominence. In 1986, he moved to a new Medical Research Council (MRC) unit of molecular genetics at the city’s Addenbrooke’s Hospital, and began work in the emerging discipline of evolutionary genomics. Brenner also orchestrated Britain’s involvement in the Human Genome Project in the early 1990s.

From the late 1980s, Brenner steered the development of biomedical research in Singapore. Here he masterminded Biopolis, a spectacular conglomerate of chrome and glass buildings dedicated to biomedical research. He also helped to guide the Janelia Farm campus of the Howard Hughes Medical Institute in Ashburn, Virginia, and to restructure molecular biology in Japan.

Brenner dazzled, amused and sometimes offended audiences with his humour, irony and disdain of authority and dogma — prompting someone to describe him as “one of biology’s mischievous children; the witty trickster who delights in stirring things up.” His popular columns in Current Biology (titled ‘Loose Ends’ and, later, ‘False Starts’) in the mid-1990s led some seminar hosts to introduce him as Uncle Syd, a pen name he ultimately adopted.

Sydney was aware of the debt he owed to being in the right place at the right time. He attributed his successes to having to learn scientific independence in a remote part of the world, with few role models and even fewer mentors. He recounted the importance of arriving in Oxford with few scientific biases, and leaving with the conviction that seeing the double helix model one chilly April morning would be a defining moment in his life.

The Brenner laboratories (he often operated more than one) spawned a generation of outstanding protégés, including five Nobel laureates. Those who dedicated their careers to understanding the workings of C. elegans now number in the thousands. Science will be considerably poorer without Sydney. But his name will live forever in the annals of biology.

https://www.nature.com/articles/d41586-019-01192-9

by Antonio Regalado

Human intelligence is one of evolution’s most consequential inventions. It is the result of a sprint that started millions of years ago, leading to ever bigger brains and new abilities. Eventually, humans stood upright, took up the plow, and created civilization, while our primate cousins stayed in the trees.

Now scientists in southern China report that they’ve tried to narrow the evolutionary gap, creating several transgenic macaque monkeys with extra copies of a human gene suspected of playing a role in shaping human intelligence.

“This was the first attempt to understand the evolution of human cognition using a transgenic monkey model,” says Bing Su, the geneticist at the Kunming Institute of Zoology who led the effort.

According to their findings, the modified monkeys did better on a memory test involving colors and block pictures, and their brains also took longer to develop—as those of human children do. There wasn’t a difference in brain size.

The experiments, described on March 27 in a Beijing journal, National Science Review, and first reported by Chinese media, remain far from pinpointing the secrets of the human mind or leading to an uprising of brainy primates.

Instead, several Western scientists, including one who collaborated on the effort, called the experiments reckless and said they questioned the ethics of genetically modifying primates, an area where China has seized a technological edge.

“The use of transgenic monkeys to study human genes linked to brain evolution is a very risky road to take,” says James Sikela, a geneticist who carries out comparative studies among primates at the University of Colorado. He is concerned that the experiment shows disregard for the animals and will soon lead to more extreme modifications. “It is a classic slippery slope issue and one that we can expect to recur as this type of research is pursued,” he says.

Research using primates is increasingly difficult in Europe and the US, but China has rushed to apply the latest high-tech DNA tools to the animals. The country was first to create monkeys altered with the gene-editing tool CRISPR, and this January a Chinese institute announced it had produced a half-dozen clones of a monkey with a severe mental disturbance.

“It is troubling that the field is steamrolling along in this manner,” says Sikela.

Evolution story

Su, a researcher at the Kunming Institute of Zoology, specializes in searching for signs of “Darwinian selection”—that is, genes that have been spreading because they’re successful. His quest has spanned such topics as Himalayan yaks’ adaptation to high altitude and the evolution of human skin color in response to cold winters.

The biggest riddle of all, though, is intelligence. What we know is that our humanlike ancestors’ brains rapidly grew in size and power. To find the genes that caused the change, scientists have sought out differences between humans and chimpanzees, whose genes are about 98% similar to ours. The objective, says Sikela, was to locate “the jewels of our genome” — that is, the DNA that makes us uniquely human.

For instance, one popular candidate gene called FOXP2—the “language gene” in press reports—became famous for its potential link to human speech. (A British family whose members inherited an abnormal version had trouble speaking.) Scientists from Tokyo to Berlin were soon mutating the gene in mice and listening with ultrasonic microphones to see if their squeaks changed.

Su was fascinated by a different gene, MCPH1, or microcephalin. Not only did the gene’s sequence differ between humans and apes, but babies with damage to microcephalin are born with tiny heads, providing a link to brain size. With his students, Su once used calipers and head spanners to measure the heads of 867 Chinese men and women to see if the results could be explained by differences in the gene.

By 2010, though, Su saw a chance to carry out a potentially more definitive experiment—adding the human microcephalin gene to a monkey. China by then had begun pairing its sizeable breeding facilities for monkeys (the country exports more than 30,000 a year) with the newest genetic tools, an effort that has turned it into a mecca for foreign scientists who need monkeys to experiment on.

To create the animals, Su and collaborators at the Yunnan Key Laboratory of Primate Biomedical Research exposed monkey embryos to a virus carrying the human version of microcephalin. They generated 11 monkeys, five of which survived to take part in a battery of brain measurements. Those monkeys each have between two and nine copies of the human gene in their bodies.

Su’s monkeys raise some unusual questions about animal rights. In 2010, Sikela and three colleagues wrote a paper called “The ethics of using transgenic non-human primates to study what makes us human,” in which they concluded that human brain genes should never be added to apes, such as chimpanzees, because they are too similar to us. “You just go to the Planet of the Apes immediately in the popular imagination,” says Jacqueline Glover, a University of Colorado bioethicist who was one of the authors. “To humanize them is to cause harm. Where would they live and what would they do? Do not create a being that can’t have a meaningful life in any context.”

In an e-mail, Su says he agrees that apes are so close to humans that their brains shouldn’t be changed. But monkeys and humans last shared an ancestor 25 million years ago. To Su, that alleviates the ethical concerns. “Although their genome is close to ours, there are also tens of millions of differences,” he says. He doesn’t think the monkeys will become anything more than monkeys. “Impossible by introducing only a few human genes,” he says.

Smart monkey?

Judging by their experiments, the Chinese team did expect that their transgenic monkeys could end up with increased intelligence and brain size. That is why they put the creatures inside MRI machines to measure their white matter and gave them computerized memory tests. According to their report, the transgenic monkeys didn’t have larger brains, but they did better on a short-term memory quiz, a finding the team considers remarkable.

Several scientists think the Chinese experiment didn’t yield much new information. One of them is Martin Styner, a University of North Carolina computer scientist and specialist in MRI who is listed among the coauthors of the Chinese report. Styner says his role was limited to training Chinese students to extract brain volume data from MRI images, and that he considered removing his name from the paper, which he says was not able to find a publisher in the West.

“There are a bunch of aspects of this study that you could not do in the US,” says Styner. “It raised issues about the type of research and whether the animals were properly cared for.”

After what he’s seen, Styner says he’s not looking forward to more evolution research on transgenic monkeys. “I don’t think that is a good direction,” he says. “Now we have created this animal which is different than it is supposed to be. When we do experiments, we have to have a good understanding of what we are trying to learn, to help society, and that is not the case here.” One issue is that genetically modified monkeys are expensive to create and care for. With just five modified monkeys, it’s hard to reach firm conclusions about whether they really differ from normal monkeys in terms of brain size or memory skills. “They are trying to understand brain development. And I don’t think they are getting there,” says Styner.

In an e-mail, Su agreed that the small number of animals was a limitation. He says he has a solution, though. He is making more of the monkeys and is also testing new brain evolution genes. One that he has his eye on is SRGAP2C, a DNA variant that arose about two million years ago, just when Australopithecus was ceding the African savannah to early humans. That gene has been dubbed the “humanity switch” and the “missing genetic link” for its likely role in the emergence of human intelligence.

Su says he’s been adding it to monkeys, but that it’s too soon to say what the results are.

https://www.technologyreview.com/s/613277/chinese-scientists-have-put-human-brain-genes-in-monkeysand-yes-they-may-be-smarter/