Drug to treat malaria could mitigate hereditary hearing loss


Kumar Alagramam, PhD, Case Western Reserve University

The ability to hear depends on proteins reaching the outer membrane of sensory cells in the inner ear. But in certain types of hereditary hearing loss, mutations prevent a key protein from reaching these membranes. Using a zebrafish model, researchers at Case Western Reserve University School of Medicine have found that an anti-malarial drug called artemisinin may help prevent hearing loss associated with this genetic disorder.

In a recent study, published in the Proceedings of the National Academy of Sciences (PNAS), researchers found the classic anti-malarial drug can help sensory cells of the inner ear recognize and transport an essential protein to specialized membranes using established pathways within the cell.

The sensory cells of the inner ear are marked by hair-like projections on the surface, earning them the nickname “hair cells.” Hair cells convert sound and movement-induced vibrations into electrical signals that are conveyed through nerves and translated in the brain as information used for hearing and balance.

The mutant form of the protein clarin1 renders hair cells unable to recognize it and transport it to membranes essential for hearing via the cell’s typical pathways. Instead, most of the mutant clarin1 protein gets trapped inside hair cells, where it is ineffective and detrimental to cell survival. Faulty clarin1 secretion can occur in people with Usher syndrome, a common genetic cause of hearing and vision loss.

The study found artemisinin restores inner ear sensory cell function—and thus hearing and balance—in zebrafish genetically engineered to have human versions of an essential hearing protein.

Senior author Kumar N. Alagramam, the Anthony J. Maniglia Chair for Research and Education and associate professor in the Department of Otolaryngology at Case Western Reserve University School of Medicine and University Hospitals Cleveland Medical Center, has been studying ways to get the mutant clarin1 protein to reach cell membranes and improve hearing in people with Usher syndrome.

“We knew mutant protein largely fails to reach the cell membrane, except patients with this mutation are born hearing,” Alagramam said. “This suggested to us that, somehow, at least a fraction of the mutant protein must get to cell membranes in the inner ear.”

Alagramam’s team searched for any unusual secretion pathways mutant clarin1 could take to get to hair cell membranes. “If we can understand how the human clarin1 mutant protein is transported to the membrane, then we can exploit that mechanism therapeutically,” Alagramam said.

For the PNAS study, Alagramam’s team created several new zebrafish models. They swapped the genes encoding zebrafish clarin1 with human versions—either normal clarin1, or clarin1 containing mutations found in humans with a type of Usher syndrome, which can lead to profound hearing loss.

“Using these ‘humanized’ fish models,” Alagramam said, “we were able to study the function of normal clarin1 and, more importantly, the functional consequences of its mutant counterpart. To our knowledge, this is the first time a human protein involved in hearing loss has been examined in this manner.”

Zebrafish offer several advantages for studying hearing. Their larvae are transparent, making it easy to monitor inner ear cell shape and function. Their genes are also nearly identical to those of humans, particularly the genes that underlie hearing. Replacing zebrafish clarin1 with human clarin1 made for an even more precise model.

The researchers found the unconventional cellular secretion pathway they were looking for by using fluorescent labels to track human clarin1 moving through zebrafish hair cells. The mutated clarin1 gets to the cell membrane using proteins and trafficking mechanisms within the cell that are normally reserved for misfolded proteins “stuck” in certain cellular compartments.

“As far as we know, this is the first time a human mutant protein associated with hearing loss has been shown to be ‘escorted’ by the unconventional cellular secretion pathway,” Alagramam said. “This mechanism may shed light on the process underlying hearing loss associated with other mutant membrane proteins.”

The study showed that the majority of mutant clarin1 gets trapped inside a network of tubules within the cell, analogous to stairs and hallways, that helps proteins, including clarin1, get from place to place. Alagramam’s team surmised that liberating the mutant protein from this tubular network would be therapeutic, and tested two drugs that target it: thapsigargin (an anti-cancer drug) and artemisinin (an anti-malarial drug).

The drugs did enable zebrafish larvae to liberate the trapped proteins and raise clarin1 levels in the membrane, but artemisinin was the more effective of the two. Not only did the drug help mutant clarin1 reach the membrane, but hearing and balance were also better preserved in zebrafish treated with the anti-malarial drug than in untreated fish.

In zebrafish, survival depends on normal swim behavior, which in turn depends on balance and the ability to detect water movement, both of which are tied to hair cell function. Survival rates in zebrafish expressing the mutant clarin1 jumped from 5% to 45% after artemisinin treatment.

“Our report highlights the potential of artemisinin to mitigate both hearing and vision loss caused by clarin1 mutations,” Alagramam said. “This could be a re-purposable drug, with a safe profile, to treat Usher syndrome patients.”

Alagramam added that the unconventional secretion mechanism and the activation of that mechanism using artemisinin or similar drugs may also be relevant to other genetic disorders that involve mutant membrane proteins aggregating in the cell’s tubular network, including sensory and non-sensory disorders.

Gopal SR, et al. “Unconventional secretory pathway activation restores hair cell mechanotransduction in an USH3A model.” PNAS.


Scientists are now able to take an MRI scan of an atom

By Knvul Sheikh

As our devices get smaller and more sophisticated, so do the materials we use to make them. That means we have to get up close to engineer new materials. Really close.

Different microscopy techniques allow scientists to see remarkable detail, from the nucleotide-by-nucleotide genetic sequences in cells down to the couple-of-atoms resolution of an atomic force microscopy image. But scientists at the IBM Almaden Research Center in San Jose, Calif., and the Institute for Basic Science in Seoul have taken imaging a step further, developing a new magnetic resonance imaging technique that provides unprecedented detail, right down to the individual atoms of a sample.

The technique relies on the same basic physics behind the M.R.I. scans that are done in hospitals.

When doctors want to detect tumors, measure brain function or visualize the structure of joints, they employ huge M.R.I. machines, which apply a magnetic field across the human body. This temporarily disrupts the protons spinning in the nucleus of every atom in every cell. A subsequent, brief pulse of radio-frequency energy causes the protons to spin perpendicular to the pulse. Afterward, the protons return to their normal state, releasing energy that can be measured by sensors and made into an image.
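For a sense of scale, the frequency at which protons precess, and hence the radio frequency needed to tip them, is given by the Larmor relation of nuclear magnetic resonance (standard physics included here for context; the figure is not from the article, and γ is the proton's gyromagnetic ratio):

$$ f = \frac{\gamma}{2\pi} B \approx 42.58\ \text{MHz/T} \times B $$

so in the roughly 3-tesla field of a modern clinical scanner, protons resonate at about 128 MHz, which is why the excitation pulse sits in the radio-frequency band.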

But to gather enough diagnostic data, traditional hospital M.R.I.s must scan billions and billions of protons in a person’s body, said Christopher Lutz, a physicist at IBM. So he and his colleagues decided to pack the power of an M.R.I. machine into the tip of another specialized instrument known as a scanning tunneling microscope to see if they could image individual atoms.


Four M.R.I. scans, combined, of a single titanium atom, showing the magnetic field of the atom at different strengths. Credit: Willke et al.

The tip of a scanning tunneling microscope is just a few atoms wide. As it moves along the surface of a sample, it picks up details about the size and conformation of molecules.

The researchers attached magnetized iron atoms to the tip, effectively combining scanning-tunneling microscope and M.R.I. technologies.

When the magnetized tip swept over a metal wafer of iron and titanium, it applied a magnetic field to the sample, disrupting the electrons (rather than the protons, as a typical M.R.I. would) within each atom. Then the researchers quickly turned a radio-frequency pulse on and off, so that the electrons would emit energy that could be visualized. The results were described Monday in the journal Nature Physics.

“It’s a really magnificent combination of imaging technologies,” said A. Duke Shereen, director of the M.R.I. Core Facility at the Advanced Science Research Center in New York. “Medical M.R.I.s can do great characterization of samples, but not at this small scale.”

The atomic M.R.I. provides subångström-level resolution, meaning it can distinguish neighboring atoms from one another, as well as reveal which types of atoms are visible based on their magnetic interactions.

“It is the ultimate way to miniaturization,” Dr. Lutz said. He hopes the new technology could one day be used to design atomic-scale methods of storing information for quantum computers.

Current transistors are thousands of atoms wide and need to switch on and off to store a single bit of information in a computer. The ability to corral individual atoms could drastically increase computing power and enable researchers to tackle complex calculations such as predicting weather patterns or diagnosing illnesses with artificial intelligence.

Moving an atom from one location to another within a composite could also change the material’s properties and lead to the development of new ones.

The technique might also help scientists study how proteins fold and develop new drugs that bind to specific curves in a biological structure.

“We can now see something that we couldn’t see before,” Dr. Lutz said. “So our imagination can go to a whole bunch of new ideas that we can test out with this technology.”

Children wise to fear hand dryers, and 13-year-old proves it with published paper

Calgary student Nora Keegan has been studying decibel levels in hand dryers since she was 9 years old.

Children who say hand dryers “hurt my ears” are correct.

A new research paper by that very title has just been published in Paediatrics & Child Health, Canada’s premier peer-reviewed pediatric journal. And the researcher, 13-year-old Nora Keegan, has been studying the issue since she was nine years old.

“In Grade 4, I noticed that my ears kind of hurt after the hand dryer,” Keegan told the Calgary Eyeopener. “And then later, at the start of Grade 5, I also noticed that my ears were hurting after I used the hand dryer. So then I decided to test it to see if they were dangerous to hearing, and it turns out they are.”

Keegan used a decibel meter, and measured the noise at different heights and different distances from the wall.

“I thought it would be good to have a lot of children’s heights and also women’s height and men’s height, and then I measured 18 inches from the wall, which is the industry standard. And I also measured 12 inches from the wall since I thought the children might stand closer because their hands and arms are shorter.”

She discovered something even more alarming.

“And then one time I was testing on the decibel meter and my hand accidentally passed into the airstream flow, and the decibels shot up a lot,” she said. “So then I decided to make that another part of my testing method. So I also measured with hands in the air flow and without hands in the air.”

Keegan discovered that the sound was even louder with the hands in the airflow.

“And it was also really loud at children’s heights and manufacturers don’t measure for children’s height as much either.”

Eventually, Keegan determined that two models in particular are harmful to children’s ears: the Dyson and the XCelerator, both of which operate at about 110 decibels. Health Canada regulations require that no toy operate at more than 100 decibels.

“So this is very loud, around the level of a rock concert,” Keegan said. “And this is also louder than Health Canada’s regulation for children’s toys, as they know that at this level it poses a danger to children’s hearing.”
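Because the decibel scale is logarithmic, that 10-decibel gap is larger than it sounds. A minimal sketch of the arithmetic behind the comparison (the calculation is standard acoustics, not a measurement from Keegan's study):

```python
# Sound intensity scales as 10^(dB / 10): every extra 10 dB is a factor of 10.
def intensity_ratio(db_a: float, db_b: float) -> float:
    """How many times more intense a sound at db_a is than one at db_b."""
    return 10 ** ((db_a - db_b) / 10)

# A 110 dB hand dryer versus Health Canada's 100 dB limit for toys:
print(intensity_ratio(110, 100))  # 10.0 -- ten times the permitted intensity
```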

Children have smaller ear canals and more sensitive hearing. And they tend to stand closer to the dryers because their bodies are smaller and their arms are shorter.

These are all things Keegan started documenting in a series of research projects.

“So it started out as a school science fair in Grade 5. And then I really enjoyed it, and I thought I could do more with it,” she said. “So then I continued working on it in Grade 6, and then Grade 7, I started writing the paper, and it just got published now in Paediatrics & Child Health.”

Keegan is a Grade 8 student at Branton Junior High School in Calgary. The full title of her paper is, “Children who say hand dryers ‘hurt my ears’ are correct: A real-world study examining the loudness of automated hand dryers in public places.”

But the young scientist, who says she hopes to have a career as a marine biologist, isn’t stopping with this personal success. She wants to do something about the problem.

By experimenting with different materials, she’s made a model that reduces the noise by 11 decibels.

Keegan’s synthetic air filter, which looks like a fuzzy handbag, absorbs the sound waves.

“The air comes down further so even though your hands still reach the airflow, then your ears are a greater distance from where the air comes out.”

Keegan conducted an informal test of the air filter at her school.

“I couldn’t really find a way to test it, but I installed it in my school’s washroom and I found that it didn’t (heat up). People seemed to enjoy it and it didn’t seem to have a problem.”

Keegan said she hasn’t tried to do anything official with the air filter — yet.

“I think I might go and talk to the manufacturers and also I might go and talk to Health Canada because even though this is a study, it’s still only one study. So it’d be better if they tested more hand dryers and found more about that loudness of hand dryers.”

Keegan assessed 44 different hand dryers in places where kids would be using them all over Calgary: arenas, restaurants, schools, libraries and shopping malls.

https://www.cbc.ca/news/canada/calgary/calgary-student-nora-keegan-hand-dyer-research-decibel-1.5185853?utm_source=Nature+Briefing&utm_campaign=34225bcef1-briefing-dy-20190701&utm_medium=email&utm_term=0_c9dfd39373-34225bcef1-44039353

A newly identified type of dementia that is sometimes mistaken for Alzheimer’s disease

Doctors have newly outlined a type of dementia that could be more common than Alzheimer’s among the oldest adults, according to a report published Tuesday in the journal Brain.

The disease, called LATE, may often mirror the symptoms of Alzheimer’s disease, though it affects the brain differently and develops more slowly than Alzheimer’s. Doctors say the two are frequently found together, and in those cases may lead to a steeper cognitive decline than either by itself.

In developing its report, the international team of authors is hoping to spur research — and, perhaps one day, treatments — for a disease that tends to affect people over 80 and “has an expanding but under-recognized impact on public health,” according to the paper.

“We’re really overhauling the concept of what dementia is,” said lead author Dr. Peter Nelson, director of neuropathology at the University of Kentucky Medical Center.

Still, the disease itself didn’t come out of the blue. The evidence has been building for years, including reports of patients who didn’t quite fit the mold for known types of dementia such as Alzheimer’s.

“There isn’t going to be one single disease that is causing all forms of dementia,” said Sandra Weintraub, a professor of psychiatry, behavioral sciences and neurology at Northwestern University Feinberg School of Medicine. She was not involved in the new paper.

Weintraub said researchers have been well aware of the “heterogeneity of dementia,” but figuring out precisely why each type can look so different has been a challenge. Why do some people lose memory first, while others lose language or have personality changes? Why do some develop dementia earlier in life, while others develop it later?

Experts say this heterogeneity has complicated dementia research, including Alzheimer’s, because it hasn’t always been clear what the root cause was — and thus, if doctors were treating the right thing.

What is it?

The acronym LATE stands for limbic-predominant age-related TDP-43 encephalopathy. The full name refers to the area in the brain most likely to be affected, as well as the protein at the center of it all.

“These age-related dementia diseases are frequently associated with proteinaceous glop,” Nelson said. “But different proteins can contribute to the glop.”

In Alzheimer’s, you’ll find one set of glops. In Lewy body dementia, another glop.

And in LATE, the glop is a protein called TDP-43. Doctors aren’t sure why the protein is found in a modified, misfolded form in a disease like LATE.

“TDP-43 likes certain parts of the brain that the Alzheimer’s pathology is less enamored of,” explained Weintraub, who is also a member of Northwestern’s Mesulam Center for Cognitive Neurology and Alzheimer’s Disease.

“This is an area that’s going to be really huge in the future. What are the individual vulnerabilities that cause the proteins to go to particular regions of the brain?” she said. “It’s not just what the protein abnormality is, but where it is.”

More than a decade ago, doctors first linked the TDP-43 protein to amyotrophic lateral sclerosis, otherwise known as ALS or Lou Gehrig’s disease. It was also linked to another type of dementia, called frontotemporal lobar degeneration.

LATE “is a disease that’s 100 times more common than either of those, and nobody knows about it,” said Nelson.

The new paper estimates, based on autopsy studies, that between 20 and 50% of people over 80 will have brain changes associated with LATE. And that prevalence increases with age.

Experts say nailing down these numbers — as well as finding better ways to detect and research the disease — is what they hope comes out of consensus statements like the new paper, which gives scientists a common language to discuss it, according to Nelson.

“People have, in their own separate bailiwicks, found different parts of the elephant,” he said. “But this is the first place where everybody gets together and says, ‘This is the whole elephant.’ ”

What this could mean for Alzheimer’s

The new guidelines could have an impact on Alzheimer’s research, as well. For one, experts say some high-profile drug trials may have suffered as a result of some patients having unidentified LATE — and thus not responding to treatment.

In fact, Nelson’s colleagues recently saw that firsthand: a patient, now deceased, who was part of an Alzheimer’s drug trial but developed dementia anyway.

“So, the clinical trial was a failure for Alzheimer’s disease,” Nelson said, “but it turns out he didn’t have Alzheimer’s disease. He had LATE.”

Nina Silverberg, director of the Alzheimer’s Disease Research Centers Program at the National Institute on Aging, said she suspects examples like this are not the majority — in part because people in clinical trials tend to be on the younger end of the spectrum.

“I’m sure it plays some part, but maybe not as much as one might think at first,” said Silverberg, who co-chaired the working group that led to the new paper.

Advances in testing had already shown that some patients in these trials lacked “the telltale signs of Alzheimer’s,” she said.

In some cases, perhaps it was LATE — “and it’s certainly possible that there are other, as yet undiscovered, pathologies that people may have,” she added.

“We could go back and screen all the people that had failed their Alzheimer’s disease therapies,” Nelson said. “But what we really need to do is go forward and try to get these people out of the Alzheimer’s clinical trials — and instead get them into their own clinical trials.”

Silverberg describes the new paper as “a roadmap” for research that could change as we come to discover more about the disease. And researchers can’t do it without a large, diverse group of patients, she added.

“It’s probably going to take years and research participants to help us understand all of that,” she said.

https://www.cnn.com/2019/04/30/health/dementia-late-alzheimers-study/index.html

New high-risk genes for schizophrenia discovered

Summary: Study identifies 104 high-risk genes for schizophrenia. One gene considered high-risk is also suspected in the development of autism.

Source: Vanderbilt University

Using a unique computational framework they developed, a team of scientist cyber-sleuths in the Vanderbilt University Department of Molecular Physiology and Biophysics and the Vanderbilt Genetics Institute (VGI) has identified 104 high-risk genes for schizophrenia.

Their discovery, which was reported April 15 in the journal Nature Neuroscience, supports the view that schizophrenia is a developmental disease, one which potentially can be detected and treated even before the onset of symptoms.

“This framework opens the door for several research directions,” said the paper’s senior author, Bingshan Li, PhD, associate professor of Molecular Physiology and Biophysics and an investigator in the VGI.

One direction is to determine whether drugs already approved for other, unrelated diseases could be repurposed to improve the treatment of schizophrenia. Another is to determine in which brain cell types these genes are active along the developmental trajectory.

Ultimately, Li said, “I think we’ll have a better understanding of how prenatally these genes predispose risk, and that will give us a hint of how to potentially develop intervention strategies. It’s an ambitious goal … (but) by understanding the mechanism, drug development could be more targeted.”

Schizophrenia is a chronic, severe mental disorder characterized by hallucinations and delusions, “flat” emotional expression and cognitive difficulties.

Symptoms usually start between the ages of 16 and 30. Antipsychotic medications can relieve symptoms, but there is no cure for the disease.

Genetics plays a major role. While schizophrenia occurs in 1% of the population, the risk rises sharply to 50% for a person whose identical twin has the disease.

Recent genome-wide association studies (GWAS) have identified more than 100 loci, or fixed positions on different chromosomes, associated with schizophrenia. That may not be where high-risk genes are located, however. The loci could be regulating the activity of the genes at a distance — nearby or very far away.

To solve the problem, Li, with first authors Rui Chen, PhD, research instructor in Molecular Physiology and Biophysics, and postdoctoral research fellow Quan Wang, PhD, developed a computational framework they called the “Integrative Risk Genes Selector.”

The framework pulled the top genes from previously reported loci based on their cumulative supporting evidence from multi-dimensional genomics data as well as gene networks.

Which genes have high rates of mutation? Which are expressed prenatally? These are the kinds of questions a genetic “detective” might ask to identify and narrow the list of “suspects.”
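As a rough illustration of that kind of detective work, here is a minimal sketch of evidence-based gene ranking. The gene names, evidence features and scores are invented for illustration; this is a toy, not the published Integrative Risk Genes Selector code:

```python
# Toy example: rank candidate genes by cumulative supporting evidence.
from dataclasses import dataclass

@dataclass
class GeneEvidence:
    name: str
    mutation_burden: float       # e.g. scaled rate of damaging mutations, 0-1
    prenatal_expression: float   # e.g. scaled prenatal brain expression, 0-1
    network_support: float       # e.g. connectivity to known risk genes, 0-1

def cumulative_score(gene: GeneEvidence) -> float:
    # Sum the evidence dimensions into one ranking score.
    return gene.mutation_burden + gene.prenatal_expression + gene.network_support

candidates = [
    GeneEvidence("GENE_A", 0.9, 0.8, 0.7),  # hypothetical strong candidate
    GeneEvidence("GENE_B", 0.2, 0.4, 0.1),  # hypothetical weak candidate
]
ranked = sorted(candidates, key=cumulative_score, reverse=True)
print([g.name for g in ranked])  # ['GENE_A', 'GENE_B']
```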

The result was a list of 104 high-risk genes, some of which encode proteins targeted in other diseases by drugs already on the market. One gene is suspected in the development of autism spectrum disorder.

Much work remains to be done. But, said Chen, “Our framework can push GWAS a step forward … to further identify genes.” It also could be employed to help track down genetic suspects in other complex diseases.

Also contributing to the study were Li’s lab members Qiang Wei, PhD, Ying Ji and Hai Yang, PhD; VGI investigators Xue Zhong, PhD, Ran Tao, PhD, James Sutcliffe, PhD, and VGI Director Nancy Cox, PhD.

Chen also credits investigators in the Vanderbilt Center for Neuroscience Drug Discovery — Colleen Niswender, PhD, Branden Stansley, PhD, and center Director P. Jeffrey Conn, PhD — for their critical input.

Funding: The study was supported by the Vanderbilt Analysis Center for the Genome Sequencing Program and National Institutes of Health grant HG009086.


What Happened Before the Big Bang?

By Stephanie Pappas

The Big Bang is commonly thought of as the start of it all: About 13.8 billion years ago, the observable universe went boom and expanded into being.

But what were things like before the Big Bang?

Short answer: We don’t know. Long answer: It could have been a lot of things, each mind-bending in its own way.

The first thing to understand is what the Big Bang actually was.

“The Big Bang is a moment in time, not a point in space,” said Sean Carroll, a theoretical physicist at the California Institute of Technology and author of “The Big Picture: On the Origins of Life, Meaning and the Universe Itself” (Dutton, 2016).

So, scrap the image of a tiny speck of dense matter suddenly exploding outward into a void. For one thing, the universe at the Big Bang may not have been particularly small, Carroll said. Sure, everything in the observable universe today — a sphere with a diameter of about 93 billion light-years containing at least 2 trillion galaxies — was crammed into a space less than a centimeter across. But there could be plenty outside of the observable universe that Earthlings can’t see because it’s physically impossible for the light to have traveled that far in 13.8 billion years.
Thus, it’s possible that the universe at the Big Bang was teeny-tiny or infinitely large, Carroll said, because there’s no way to look back in time at the stuff we can’t even see today. All we really know is that it was very, very dense and that it very quickly got less dense.

As a corollary, there really isn’t anything outside the universe, because the universe is, by definition, everything. So, at the Big Bang, everything was denser and hotter than it is now, but there was no more an “outside” of it than there is today. As tempting as it is to take a godlike view and imagine you could stand in a void and look at the scrunched-up baby universe right before the Big Bang, that would be impossible, Carroll said. The universe didn’t expand into space; space itself expanded.

“No matter where you are in the universe, if you trace yourself back 14 billion years, you come to this point where it was extremely hot, dense and rapidly expanding,” he said.

No one knows exactly what was happening in the universe until 1 second after the Big Bang, when the universe cooled off enough for protons and neutrons to collide and stick together. Many scientists do think that the universe went through a process of exponential expansion called inflation during that first second. This would have smoothed out the fabric of space-time and could explain why matter is so evenly distributed in the universe today.

Before the bang

It’s possible that before the Big Bang, the universe was an infinite stretch of ultrahot, dense material, persisting in a steady state until, for some reason, the Big Bang occurred. This extra-dense universe may have been governed by quantum mechanics, the physics of the extremely small scale, Carroll said. The Big Bang, then, would have represented the moment that classical physics took over as the major driver of the universe’s evolution.

For Stephen Hawking, this moment was all that mattered: Before the Big Bang, he said, events are unmeasurable, and thus undefined. Hawking called this the no-boundary proposal: Time and space, he said, are finite, but they don’t have any boundaries or starting or ending points, the same way that the planet Earth is finite but has no edge.

“Since events before the Big Bang have no observational consequences, one may as well cut them out of the theory and say that time began at the Big Bang,” he said in an interview on the National Geographic show “StarTalk” in 2018.

Or perhaps there was something else before the Big Bang that’s worth pondering. One idea is that the Big Bang isn’t the beginning of time, but rather that it was a moment of symmetry. In this idea, prior to the Big Bang, there was another universe, identical to this one but with entropy increasing toward the past instead of toward the future.

Increasing entropy, or increasing disorder in a system, is essentially the arrow of time, Carroll said, so in this mirror universe, time would run opposite to time in the modern universe, and our universe would be in its past. Proponents of this theory also suggest that other properties of the universe would be flip-flopped in this mirror universe. For example, physicist David Sloan wrote in the University of Oxford Science Blog that asymmetries in molecules and ions (called chiralities) would be in opposite orientations to what they are in our universe.

A related theory holds that the Big Bang wasn’t the beginning of everything, but rather a moment in time when the universe switched from a period of contraction to a period of expansion. This “Big Bounce” notion suggests that there could be infinite Big Bangs as the universe expands, contracts and expands again. The problem with these ideas, Carroll said, is that there’s no explanation for why or how an expanding universe would contract and return to a low-entropy state.

Carroll and his colleague Jennifer Chen have their own pre-Big Bang vision. In 2004, the physicists suggested that perhaps the universe as we know it is the offspring of a parent universe from which a bit of space-time has ripped off.

It’s like a radioactive nucleus decaying, Carroll said: When a nucleus decays, it spits out an alpha or beta particle. The parent universe could do the same thing, except instead of particles, it spits out baby universes, perhaps infinitely. “It’s just a quantum fluctuation that lets it happen,” Carroll said. These baby universes are “literally parallel universes,” Carroll said, and don’t interact with or influence one another.

If that all sounds rather trippy, it is — because scientists don’t yet have a way to peer back to even the instant of the Big Bang, much less what came before it. There’s room to explore, though, Carroll said. The first detection of gravitational waves, from a pair of colliding black holes in 2015, opens the possibility that these waves could be used to solve fundamental mysteries about the universe’s expansion in that first crucial second.

Theoretical physicists also have work to do, Carroll said, like making more-precise predictions about how quantum forces like quantum gravity might work.

“We don’t even know what we’re looking for,” Carroll said, “until we have a theory.”

https://www.livescience.com/65254-what-happened-before-big-big.html

Genetic study homes in on height’s heritability mystery

by Linda Geddes

You need only look at families to see that height is inherited — and studies of identical twins and families have long confirmed that suspicion. They suggest that about 80% of the variation in height is down to genetics. But since the human genome was sequenced nearly two decades ago, researchers have struggled to fully identify the genetic factors responsible.

Studies seeking the genes that govern height have identified hundreds of common gene variants linked to the trait. But the findings also posed a quandary: each variant had a tiny effect on height, and together they didn’t amount to the genetic contribution predicted by family studies. This phenomenon, which occurs for many other traits and diseases, was dubbed missing heritability, and it even prompted some researchers to speculate that there’s something fundamentally wrong with our understanding of genetics.

Now, a study suggests that most of the missing heritability for height and body mass index (BMI) can, as some researchers had suspected, be found in rarer gene variants that had lain undiscovered until now.

“It is a reassuring paper because it suggests that there isn’t something terribly wrong with genetics,” says Tim Spector, a genetic epidemiologist at King’s College London. “It’s just that sorting it out is more complex than we thought.” The research was posted to the bioRxiv preprint server on 25 March.

Scouring the genome

To seek out the genetic factors that underlie diseases and traits, geneticists turn to mega-searches known as genome-wide association studies (GWAS). These scour the genomes of, typically, tens of thousands of people — or, increasingly, more than a million — for single-letter changes, or SNPs, in genes that commonly appear in individuals with a particular disease or that could explain a common trait such as height.

But GWAS have limitations. Because sequencing the entire genomes of thousands of people is expensive, GWAS themselves scan only a strategically selected set of SNPs, perhaps 500,000, in each person’s genome. That’s only a snapshot of the roughly six billion nucleotides — the building blocks of DNA — strung together in our genome. In turn, these 500,000 common variants would have been found from sequencing the genomes of just a few hundred people, says Timothy Frayling, a human geneticist at the University of Exeter, UK.

A team led by Peter Visscher at the Queensland Brain Institute in Brisbane, Australia, decided to investigate whether rarer SNPs than those typically scanned in GWAS might explain the missing heritability for height and BMI. They turned to whole-genome sequencing — performing a complete readout of all 6 billion bases — of 21,620 people. (The authors declined to comment on the preprint, because it is under submission at a journal.)

They relied on the simple, but powerful, principle that all people are related to some extent — albeit distantly — and that DNA can be used to calculate degrees of relatedness. Then, information on the people’s height and BMI could be combined to identify both common and rare SNPs that might be contributing to these traits.

Say, for instance, that a pair of third cousins is closer in height than a pair of second cousins is in a different family: that’s an indication that the third cousins’ height is mostly down to genetics, and the extent of that correlation will tell you how much, Frayling explains. “They used all of the genetic information, which enables you to work out how much of the relatedness was due to rarer things as well as the common things.”
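Here is a minimal simulated sketch of that idea, in the spirit of Haseman-Elston-style regression of pairwise trait similarity on genetic relatedness. The numbers are invented, and this is not the authors' actual pipeline:

```python
# Toy simulation: estimate heritability from relatedness and trait similarity.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 50_000
h2_true = 0.80  # heritability used to generate the simulated data

# Genome-wide relatedness for each pair (distant relatives: small values).
relatedness = rng.uniform(0.0, 0.10, n_pairs)

# For standardized traits, the expected product of a pair's trait values
# is relatedness times heritability, plus noise from everything else.
trait_products = h2_true * relatedness + rng.normal(0.0, 0.05, n_pairs)

# The slope of trait similarity regressed on relatedness estimates h2.
slope, intercept = np.polyfit(relatedness, trait_products, 1)
print(f"estimated heritability: {slope:.2f}")  # close to 0.80
```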

As a result, the researchers captured genetic differences that occur in only 1 in 500, or even 1 in 5,000, people.

And by using information on both common and rare variants, the researchers arrived at roughly the same estimates of heritability as those indicated by twin studies. For height, Visscher and colleagues estimate a heritability of 79%, and for BMI, 40%. This means that if you take a large group of people, 79% of the height differences would be due to genes rather than to environmental factors, such as nutrition.

Complex processes

The researchers also suggest how the previously undiscovered variants might be contributing to physical traits. Tentatively, they found that these rare variants were slightly enriched in protein-coding regions of the genome, and that they had an increased likelihood of being disruptive to these regions, notes Terence Capellini, an evolutionary biologist at Harvard University in Cambridge, Massachusetts. This indicates that the rare variants might partly influence height by affecting protein-coding regions instead of the rest of the genome — the vast majority of which does not include instructions for making proteins, but might influence their expression.

The rarity of the variants also suggests that natural selection could be weeding them out, perhaps because they are harmful in some way.

The complexity of heritability means that understanding the roots of many common diseases — necessary if researchers are to develop effective therapies against them — will take considerably more time and money, and it could involve sequencing hundreds of thousands or even millions of whole genomes to identify the rare variants that explain a substantial portion of the illnesses’ genetic components.

The study reveals only the total amount of rare variants contributing to these common traits — not which ones are important, says Spector. “The next stage is to go and work out which of these rare variants are important for traits or diseases that you want to get a drug for.”

Nature 568, 444-445 (2019)

doi: 10.1038/d41586-019-01157-y

https://www.nature.com/articles/d41586-019-01157-y?utm_source=Nature+Briefing&utm_campaign=26855a4182-briefing-dy-20190424&utm_medium=email&utm_term=0_c9dfd39373-26855a4182-44039353

Exploring Leonardo da Vinci’s knowledge of the brain

Summary: A new study looks at Leonardo da Vinci’s contribution to neuroscience and the advancement of modern sciences.

Source: Profiles, Inc

May 2, 2019, marks the 500th anniversary of Leonardo da Vinci’s death. A cultural icon, artist, engineer and experimentalist of the Renaissance period, Leonardo continues to inspire people around the globe. Jonathan Pevsner, PhD, professor and research scientist at the Kennedy Krieger Institute, wrote an article featured in the April edition of The Lancet titled, “Leonardo da Vinci’s studies of the brain.” In the piece, Pevsner highlights the exquisite drawings and curiosity, dedication and scientific rigor that led Leonardo to make penetrating insights into how the brain functions.

Through his research, Pevsner shares that Leonardo was the first to identify the olfactory nerve as a cranial nerve. He details how Leonardo performed intricate studies on the peripheral nervous system, challenging the findings of earlier authorities and introducing methods centuries before other anatomists and physiologists. Pevsner also delves into Leonardo’s pioneering experiment on the ventricles of the brain, in which wax was injected to make a cast of the ventricles and determine their overall shape and size. This further demonstrates Leonardo’s original thinking and advanced intelligence.

“Leonardo’s work reflects the emergence of the modern scientific era and forms a key part of his integrative approach to art and science,” said Pevsner.

“He asked questions about how the brain works in health and in disease. He sought to understand changes in the brain that occur in epilepsy, or why the mental state of a pregnant mother can directly affect the physical well-being of her child. At the Kennedy Krieger Institute, many of us struggle to answer the same questions. While science and technology have advanced at a breathtaking pace, we still need Leonardo’s qualities of passion, curiosity, the ability to visualize knowledge, and clear thinking to guide us forward.”

While Pevsner is viewed as an expert in Leonardo da Vinci, his main profession and passion is research into the molecular basis of childhood and adult brain disorders in his lab at Kennedy Krieger Institute. His lab reported the mutation that causes Sturge-Weber syndrome, and ongoing studies include bipolar disorder, autism spectrum disorder and schizophrenia. He is the author of the textbook, Bioinformatics and Functional Genomics.


Artificial intelligence singles out neurons faster than a human can


Two-photon imaging shows neurons firing in a mouse brain. Recordings like this enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors. The image is credited to Yiyang Gong, Duke University.

Summary: Convolutional neural network model significantly outperforms previous methods and is as accurate as humans in segmenting active and overlapping neurons.

Source: Duke University

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, based on using artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals. These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process. Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is fussy and slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time — data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In their framework, this team created an algorithm that could process both spatial and timing information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving the accuracy.
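As a toy illustration of a network that processes spatial and timing information together, here is a minimal spatiotemporal segmenter in PyTorch. The architecture, layer sizes and input shape are invented for illustration and are far simpler than the published model:

```python
# Toy spatiotemporal segmentation network for calcium-imaging video.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions mix information across space AND time.
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
        )
        # A 1x1x1 convolution maps features to a per-voxel neuron score.
        self.head = nn.Conv3d(8, 1, kernel_size=1)

    def forward(self, video):                      # video: (batch, 1, time, H, W)
        scores = self.head(self.features(video))
        return torch.sigmoid(scores.mean(dim=2))   # collapse time -> (batch, 1, H, W)

model = TinySegmenter()
clip = torch.randn(1, 1, 16, 64, 64)  # one 16-frame, 64x64-pixel recording
print(model(clip).shape)              # torch.Size([1, 1, 64, 64]) probability mask
```

In a real pipeline, the output mask would be thresholded and compared against human-drawn neuron outlines during training.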

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.


Sydney Brenner (1927-2019)

Sydney Brenner was one of the first to view James Watson and Francis Crick’s double helix model of DNA in April 1953. The 26-year-old biologist from South Africa was then a graduate student at the University of Oxford, UK. So enthralled was he by the insights from the structure that he determined on the spot to devote his life to understanding genes.

Iconoclastic and provocative, he became one of the leading biologists of the twentieth century. Brenner shared in the 2002 Nobel Prize in Physiology or Medicine for deciphering the genetics of programmed cell death and animal development, including how the nervous system forms. He was at the forefront of the 1975 Asilomar meeting to discuss the appropriate use of emerging abilities to alter DNA, was a key proponent of the Human Genome Project, and much more. He died on 5 April.

Brenner was born in 1927 in Germiston, South Africa, to poor immigrant parents. Bored by school, he preferred to read books borrowed (sometimes permanently) from the public library, or to dabble with a self-assembled chemistry set. His extraordinary intellect — he was reading newspapers by the age of four — did not go unnoticed. His teachers secured an award from the town council to send him to medical school.

Brenner entered the University of the Witwatersrand in Johannesburg at the age of 15 (alongside Aaron Klug, another science-giant-in-training). Here, certain faculty members, notably the anatomist Raymond Dart, and fellow research-oriented medical students enriched his interest in science. On finishing his six-year course, his youth legally precluded him from practising medicine, so he devoted two years to learning cell biology at the bench. His passion for research was such that he rarely set foot on the wards — and he initially failed his final examination in internal medicine.


Sydney Brenner (right) with John Sulston; the two shared the 2002 Nobel Prize in Physiology or Medicine with Robert Horvitz. Credit: Steve Russell/Toronto Star/Getty

In 1952 Brenner won a scholarship to the Department of Physical Chemistry at Oxford. His adviser, Cyril Hinshelwood, wanted to pursue the idea that the environment altered observable characteristics of bacteria. Brenner tried to convince him of the role of genetic mutation. Two years later, with doctorate in hand, Brenner spent the summer of 1954 in the United States visiting labs, including Cold Spring Harbor in New York state. Here he caught up with Watson and Crick again.

Impressed, Crick recruited the young South African to the University of Cambridge, UK, in 1956. In the early 1960s, using just bacteria and bacteriophages, Crick and Brenner deciphered many of the essentials of gene function in a breathtaking series of studies.

Brenner had proved theoretically in the mid-1950s that the genetic code is ‘non-overlapping’ — each nucleotide is part of only one triplet (three nucleotides specify each amino acid in a protein) and successive ‘triplet codons’ are read in order. In 1961, Brenner and Crick confirmed this in the lab. The same year, Brenner, with François Jacob and Matthew Meselson, published their demonstration of the existence of messenger RNA. Over the next two years, often with Crick, Brenner showed how the synthesis of proteins encoded by DNA sequences is terminated.
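The non-overlapping reading frame is simple enough to show in a few lines. A minimal illustration of the concept (not anything from Brenner's own work):

```python
# Non-overlapping triplet code: each base belongs to exactly one codon,
# and successive codons are read in order, three bases at a time.
sequence = "ATGGCGTAA"
codons = [sequence[i:i + 3] for i in range(0, len(sequence), 3)]
print(codons)  # ['ATG', 'GCG', 'TAA'] -> start (Met), alanine, stop
```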

This intellectual partnership dissolved when Brenner began to focus on whole organisms in the mid-1960s. He finally alighted on Caenorhabditis elegans. Studies of this tiny worm in Brenner’s arm of the legendary Laboratory of Molecular Biology (LMB) in Cambridge led to the Nobel for Brenner, Robert Horvitz and John Sulston.


Maxine Singer, Norton Zinder, Sydney Brenner and Paul Berg (left to right) at the 1975 meeting on recombinant DNA technology in Asilomar, California. Credit: NAS

And his contributions went well beyond the lab. In 1975, with Paul Berg and others, he organized a meeting at Asilomar, California, to draft a position paper on the United States’ use of recombinant DNA technology — introducing genes from one species into another, usually bacteria. Brenner was influential in persuading attendees to treat ethical and societal concerns seriously. He stressed the importance of thoughtful guidelines for deploying the technology to avoid overly restrictive regulation.

He served as director of the LMB for about a decade. Despite describing the experience as the biggest mistake in his life, he took the lab (with its stable of Nobel laureates and distinguished staff) to unprecedented prominence. In 1986, he moved to a new Medical Research Council (MRC) unit of molecular genetics at the city’s Addenbrooke’s Hospital, and began work in the emerging discipline of evolutionary genomics. Brenner also orchestrated Britain’s involvement in the Human Genome Project in the early 1990s.

From the late 1980s, Brenner steered the development of biomedical research in Singapore. Here he masterminded Biopolis, a spectacular conglomerate of chrome and glass buildings dedicated to biomedical research. He also helped to guide the Janelia Farm campus of the Howard Hughes Medical Institute in Ashburn, Virginia, and to restructure molecular biology in Japan.

Brenner dazzled, amused and sometimes offended audiences with his humour, irony and disdain of authority and dogma — prompting someone to describe him as “one of biology’s mischievous children; the witty trickster who delights in stirring things up.” His popular columns in Current Biology (titled ‘Loose Ends’ and, later, ‘False Starts’) in the mid-1990s led some seminar hosts to introduce him as Uncle Syd, a pen name he ultimately adopted.

Sydney was aware of the debt he owed to being in the right place at the right time. He attributed his successes to having to learn scientific independence in a remote part of the world, with few role models and even fewer mentors. He recounted the importance of arriving in Oxford with few scientific biases, and leaving with the conviction that seeing the double helix model one chilly April morning would be a defining moment in his life.

The Brenner laboratories (he often operated more than one) spawned a generation of outstanding protégés, including five Nobel laureates. Those who dedicated their careers to understanding the workings of C. elegans now number in the thousands. Science will be considerably poorer without Sydney. But his name will live forever in the annals of biology.

https://www.nature.com/articles/d41586-019-01192-9