A distinguishing characteristic of the disease myalgic encephalomyelitis/chronic fatigue syndrome appears to be the electrical response of patients’ blood cells when under stress, researchers report today (April 29) in PNAS. The team hopes the finding will speed diagnoses for people with the condition and facilitate research on it.
The University of California, San Diego’s Robert Naviaux, a genetics professor who was not involved in the research, tells the San Francisco Chronicle, “It’s a major milestone. If it holds up in larger numbers, this could be a transformative advance.”
Up to 2.5 million Americans are thought to have myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), whose symptoms can include severe fatigue that isn’t explained by exertion, pain, and difficulties concentrating or remembering. It is currently diagnosed based on symptoms, as no biomarker for it exists.
To see whether the blood cells of ME/CFS patients respond differently to stress than those of healthy people, the researchers stressed cells from a patient’s blood sample by exposing them to salt, then ran them through a device that measures electrical impedance—a proxy for energy use. After the test picked up differences between the patient’s blood and that of healthy people, the team used it to compare blood cells from 20 patients with ME/CFS and 20 healthy people, and found that it reliably distinguished members of the two groups.
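As a rough illustration of that kind of comparison, the sketch below simulates an impedance-change readout for two groups of 20 samples and separates them with a simple threshold. The numbers are entirely synthetic and the real test, a purpose-built impedance measurement on stressed blood samples, is not reproduced here; the point is only that a readout which differs sharply between groups can separate them cleanly.

```python
# Minimal sketch, not the published assay: compare a hypothetical
# "impedance change under salt stress" readout between ME/CFS and
# control samples, using synthetic numbers purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical impedance-change scores (arbitrary units), 20 samples per group.
mecfs_scores = rng.normal(loc=25.0, scale=5.0, size=20)    # larger response under stress
control_scores = rng.normal(loc=5.0, scale=5.0, size=20)   # smaller response

# Simple threshold classifier: call a sample "ME/CFS-like" if its score
# exceeds the midpoint between the two group means.
threshold = (mecfs_scores.mean() + control_scores.mean()) / 2
correct = (mecfs_scores > threshold).sum() + (control_scores <= threshold).sum()

print(f"threshold = {threshold:.1f}, correctly separated {correct} of 40 synthetic samples")
```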
“We don’t know exactly why the cells and plasma are acting this way, or even what they’re doing. But there is scientific evidence that this disease is not a fabrication of a patient’s mind,” Ron Davis, a biochemist at Stanford University who began studying ME/CFS after his son became bedridden with the disease, tells The Sacramento Bee. “We clearly see a difference in the way healthy and chronic fatigue syndrome immune cells process stress.”
Research on ME/CFS has been controversial, with scientists who test talk therapy and exercise for the condition facing harassment from activists who see such treatments as harmful and rooted in a mistaken idea that the illness is psychological, according to a report last month in Reuters. Davis tells the San Francisco Chronicle that he hopes the discovery of a biomarker “will help the medical community accept that this is a real disease.”
Simon Wessely, a psychiatrist at King’s College London’s Institute of Psychiatry, Psychology & Neuroscience who works with ME/CFS patients, writes in an email to Reuters that the study was unable to solve two key issues: “The (first) issue is, can any biomarker distinguish CFS patients from those with other fatiguing illnesses? And second, is it measuring the cause, and not the consequence, of illness?”
Sara Hinesley, a third-grader who was born with no hands, recently won a national handwriting award for her impressive cursive skills.
By Char Adams
Ten-year-old Sara Hinesley has never been one to back down from a challenge.
“The things I can’t do, I try to figure out the ways I can do it and try my best to make it work,” the third-grader told WJZ. “I just try my hardest and put my mind to it and this is what happens.”
Hinesley, of Maryland, was born with no hands. And, recently, she won the Nicholas Maxim Award in the 2019 Zaner-Bloser National Handwriting Contest. The award is for students who have a cognitive delay or an intellectual, physical or developmental disability, according to Good Morning America.
“I felt excited and proud,” she told GMA of earning the award, which comes with a trophy, prize money and educational materials.
Hinesley was born in China and adopted four years ago by an American family, according to GMA. As she grew up, the little girl developed her own method of writing by gripping the pencil or pen with her arms.
“Sara is very motivated and a disciplined student,” her mother, Cathryn Hinesley, told GMA. “She excels really at about anything she tries.”
Hinesley goes to St. John Regional Catholic School in Frederick, and when she isn’t busy excelling in the classroom, she enjoys doing the usual kid activities — “I like to play, I like to watch TV,” she told WJZ, adding that she loves spending time with her older sister Veronica.
She won the award for her impressive cursive writing, which the 10-year-old said wasn’t easy.
“I think it’s kind of hard — well sometimes easy and sometimes kind of hard — cause you don’t really remember all the letters to write,” she told the station.
Naturally, Hinesley’s excellence isn’t lost on school faculty.
“It’s pretty amazing given the physical disability,” Principal Karen Smith told WJZ. Hinesley’s teacher, Cheryl Churilla, told the Washington Post: “I have never heard this little girl say, ‘I can’t.’ She’s a little rock star. She tackles absolutely everything you can throw at her, and she gives it her best.”
She will receive her award at a ceremony on June 13.
“She has this independent streak where she just knows that she can do it and she’ll figure out her own way,” Cathryn told the Post. “She is beautiful and strong and mighty just the way she is, and she just lives that way. She really does.”
Summary: Study identifies 104 high-risk genes for schizophrenia. One gene considered high-risk is also suspected in the development of autism.
Source: Vanderbilt University
Using a unique computational framework they developed, a team of scientist cyber-sleuths in the Vanderbilt University Department of Molecular Physiology and Biophysics and the Vanderbilt Genetics Institute (VGI) has identified 104 high-risk genes for schizophrenia.
Their discovery, which was reported April 15 in the journal Nature Neuroscience, supports the view that schizophrenia is a developmental disease, one which potentially can be detected and treated even before the onset of symptoms.
“This framework opens the door for several research directions,” said the paper’s senior author, Bingshan Li, PhD, associate professor of Molecular Physiology and Biophysics and an investigator in the VGI.
One direction is to determine whether drugs already approved for other, unrelated diseases could be repurposed to improve the treatment of schizophrenia. Another is to find out in which brain cell types these genes are active along the developmental trajectory.
Ultimately, Li said, “I think we’ll have a better understanding of how prenatally these genes predispose risk, and that will give us a hint of how to potentially develop intervention strategies. It’s an ambitious goal … (but) by understanding the mechanism, drug development could be more targeted.”
Schizophrenia is a chronic, severe mental disorder characterized by hallucinations and delusions, “flat” emotional expression and cognitive difficulties.
Symptoms usually start between the ages of 16 and 30. Antipsychotic medications can relieve symptoms, but there is no cure for the disease.
Genetics plays a major role. While schizophrenia occurs in 1% of the population, the risk rises sharply to 50% for a person whose identical twin has the disease.
Recent genome-wide association studies (GWAS) have identified more than 100 loci, or fixed positions on different chromosomes, associated with schizophrenia. That may not be where high-risk genes are located, however. The loci could be regulating the activity of the genes at a distance — nearby or very far away.
To solve the problem, Li, with first authors Rui Chen, PhD, research instructor in Molecular Physiology and Biophysics, and postdoctoral research fellow Quan Wang, PhD, developed a computational framework they called the “Integrative Risk Genes Selector.”
The framework pulled the top genes from previously reported loci based on their cumulative supporting evidence from multi-dimensional genomics data as well as gene networks.
Which genes have high rates of mutation? Which are expressed prenatally? These are the kinds of questions a genetic “detective” might ask to identify and narrow the list of “suspects.”
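The sketch below gives a rough sense of how that kind of evidence-weighing could work: a handful of hypothetical candidate genes are scored by summing normalized support across several data types and then ranked. It is only an illustration, not the published Integrative Risk Genes Selector, and the gene names, evidence dimensions, and numbers are all invented.

```python
# Illustrative sketch only (not the published Integrative Risk Genes Selector):
# rank hypothetical candidate genes at a locus by summing normalized evidence
# across several data types. All names and numbers are made up.
import numpy as np

candidates = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]

# Rows = genes; columns = evidence dimensions, e.g.
# [de novo mutation burden, prenatal brain expression, gene-network connectivity]
evidence = np.array([
    [0.9, 0.8, 0.7],   # strong support on all three dimensions
    [0.2, 0.9, 0.1],
    [0.4, 0.3, 0.6],
    [0.1, 0.2, 0.2],
])

# Normalize each evidence dimension to [0, 1], then sum across dimensions
# to get a cumulative support score per gene.
norm = (evidence - evidence.min(axis=0)) / np.ptp(evidence, axis=0)
scores = norm.sum(axis=1)

for gene, score in sorted(zip(candidates, scores), key=lambda g: g[1], reverse=True):
    print(f"{gene}: cumulative evidence score {score:.2f}")
```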
The result was a list of 104 high-risk genes, some of which encode proteins targeted in other diseases by drugs already on the market. One gene is suspected in the development of autism spectrum disorder.
Much work remains to be done. But, said Chen, “Our framework can push GWAS a step forward … to further identify genes.” It also could be employed to help track down genetic suspects in other complex diseases.
Also contributing to the study were Li’s lab members Qiang Wei, PhD, Ying Ji and Hai Yang, PhD; VGI investigators Xue Zhong, PhD, Ran Tao, PhD, James Sutcliffe, PhD, and VGI Director Nancy Cox, PhD.
Chen also credits investigators in the Vanderbilt Center for Neuroscience Drug Discovery — Colleen Niswender, PhD, Branden Stansley, PhD, and center Director P. Jeffrey Conn, PhD — for their critical input.
Funding: The study was supported by the Vanderbilt Analysis Center for the Genome Sequencing Program and National Institutes of Health grant HG009086.
A previously undiscovered “sequel” to Anthony Burgess’ dystopian cult classic “A Clockwork Orange” has been found among the author’s archives.
In the unfinished “The Clockwork Condition,” the author responds to the moral panic caused by Stanley Kubrick’s film adaptation of his most-famous novel, which had come out just weeks before.
The nonfiction work, which also includes a series of philosophical thoughts on the human condition, runs to around 200 typewritten pages, and features several handwritten notes. It had been left for decades in his abandoned home in Bracciano, Italy, before being boxed up after his death in 1993 and sent to the International Anthony Burgess Foundation in Manchester, England, alongside several other works and possessions.
“It’s not finished, but there is quite a lot there,” Andrew Biswell, who works at the foundation and helped make the discovery, told CNN. “If you put the book together, you can see what might have been.”
“It’s given us more detail about a whole range of thoughts and feelings he had about culture, in the immediate aftermath of the film having come out,” Biswell added.
Kubrick’s 1971 adaptation ultimately received critical and commercial acclaim and boosted the popularity of Burgess’ book, but caused massive controversy on its release for its violent and sexual content.
The author was forced to confront suggestions that he glorified and encouraged violent acts through his work, which describes the horrific spree of “ultra-violence” by a gang of delinquent criminals in a futuristic Britain.
“Burgess felt very strongly that he was in the firing line,” Biswell says, describing the themes of the newly discovered manuscript. “He’s very concerned by the accusation that this film has provoked people to do evil things.”
In one section of the manuscript, Burgess writes that young people at the time had learned “a style of violence,” but not violence itself — which he felt was inherent in some people.
In another section, Burgess muses on the impact of television and the mass media on people in the 1970s. He writes of “man trapped in the world of machines, unable to grow as a human being and become himself.” He diagnoses the titular “Clockwork Condition” as the state of “feeling alienated, partly because of the mass media,” Biswell says.
“In that sense it’s a commentary about what’s happening to him, and his own life had been turned upside down by the success of the film,” he adds.
The text of “The Clockwork Condition” was meant to be supplemented by a series of around 80 photographs on the subject of freedom and the individual. The work was structured in the same way as one of his favorite poems, Dante’s “Inferno,” and was publicly mentioned by Burgess just once.
Burgess wrote a series of novels and comic works throughout his life, but none resonated with audiences like “A Clockwork Orange.” It was chosen by Time magazine as one of the 100 best English-language books written between 1923 and 2005, while Kubrick’s film was nominated for the Best Picture Oscar.
In a small study of patients referred to the Johns Hopkins Early Psychosis Intervention Clinic (EPIC), researchers report that about half the people referred to the clinic with a schizophrenia diagnosis did not actually have schizophrenia. People who reported hearing voices or having anxiety were the ones more likely to be misdiagnosed, according to the study published in the Journal of Psychiatric Practice.
The researchers say that therapies can vary widely for people with schizophrenia, bipolar disorder, major depression or other serious types of mental illness, and that misdiagnosis can lead to inappropriate or delayed treatment.
The findings, the researchers say, suggest that second opinions at a specialised schizophrenia clinic after initial diagnosis are wise efforts to reduce the risk of misdiagnosis, and ensure prompt and appropriate patient treatment.
“Because we’ve shined a spotlight in recent years on emerging and early signs of psychosis, diagnosis of schizophrenia is like a new fad, and it’s a problem especially for those who are not schizophrenia specialists because symptoms can be complex and misleading,” says Krista Baker, LCPC, Johns Hopkins Medicine, Baltimore, Maryland. “Diagnostic errors can be devastating for people, particularly the wrong diagnosis of a mental disorder,” she adds.
According to the National Institute of Mental Health, schizophrenia affects an estimated 0.5% of the world population, and is more common in men. It typically arises in late adolescence or the 20s, and even as late as the early 30s in women. Symptoms such as disordered thinking, hallucinations, delusions, reduced emotions and unusual behaviours can be disabling, and drug treatments often create difficult side effects.
The new study was prompted in part by anecdotal evidence among healthcare providers in Baker’s specialty clinic that a fair number of people were being seen who were misdiagnosed. These patients usually had other mental illnesses, such as depression.
To see if there was rigorous evidence of such a trend, the researchers looked at patient data from 78 cases referred to EPIC for consultation between February 2011 and July 2017. Patients were an average age of 19; about 69% were men, 74% were white, 12% African American and 14% another ethnicity. Patients were referred to the clinic by general psychiatrists, outpatient psychiatric centres, primary care physicians, nurse practitioners, neurologists or psychologists.
Each consultation by the clinic took 3 to 4 hours, and included interviews with the patient and the family, physical exams, questionnaires, and medical and psychosocial histories.
Of the patients referred to the clinic, 54 people came with a predetermined diagnosis of a schizophrenia spectrum disorder. Of those, 26 received a confirmed diagnosis of a schizophrenia spectrum disorder following their consultation with the EPIC team, which is composed of clinicians and psychiatrists. Of the 54 cases, 51% were rediagnosed by clinic staff as having anxiety or mood disorders. Anxiety symptoms were prominent in 14 of the misdiagnosed patients.
One of the other most common symptoms that the researchers believe may have contributed to misdiagnosis of schizophrenia was hearing voices, as almost all incorrectly diagnosed patients reported auditory hallucinations.
“Hearing voices is a symptom of many different conditions, and sometimes it is just a fleeting phenomenon with little significance,” says Russell L. Margolis, MD, Johns Hopkins Schizophrenia Center, Johns Hopkins University School of Medicine, Baltimore, Maryland. “At other times when someone reports ‘hearing voices’ it may be a general statement of distress rather than the literal experience of hearing a voice. The key point is that hearing voices on its own doesn’t mean a diagnosis of schizophrenia.”
In speculating about other reasons why there might be so many misdiagnoses, the researchers say that it could be due to overly simplified application of criteria listed in the Diagnostic and Statistical Manual of Mental Disorders, a standard guide to the diagnosis of psychiatric disorders.
“Electronic medical record systems, which often use pull-down diagnostic menus, increase the likelihood of this type of error,” says Dr. Margolis, who refers to the problem as “checklist psychiatry.”
“The big take-home message from our study is that careful consultative services by experts are important and likely underutilised in psychiatry,” says Dr. Margolis. “Just as a primary care clinician would refer a patient with possible cancer to an oncologist or a patient with possible heart disease to a cardiologist, it’s important for general mental health practitioners to get a second opinion from a psychiatry specialty clinic like ours for patients with confusing, complicated or severe conditions. This may minimise the possibility that a symptom will be missed or overinterpreted.”
Dr. Margolis cautioned that the study was limited to patients evaluated in 1 clinic. Nonetheless, he was encouraged by the willingness of so many patients, their families and their clinicians to ask for a second opinion from the Johns Hopkins clinic. If further study confirms their findings, it would lend support to the belief by the Johns Hopkins team that overdiagnosis may be a national problem, because they see patients from across the country who travel to Johns Hopkins for an opinion. They hope to examine the experience of other specialty consultation clinics in the future.
Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right) which then could be synthesized to reconstruct the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery
A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract—an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.
Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.
The new system being developed in the laboratory of Edward Chang, MD—described April 24, 2019 in Nature—demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.
“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”
Brief animation illustrates how patterns of brain activity from the brain’s speech centers in somatosensory cortex (top left) were first decoded into a computer simulation of a research participant’s vocal tract movements (top right), which were then translated into a synthesized version of the participant’s voice (bottom). Credit: Chang lab / UCSF Dept. of Neurosurgery. Simulated Vocal Tract Animation Credit: Speech Graphics
Virtual Vocal Tract Improves Naturalistic Speech Synthesis
The research was led by Gopala Anumanchipalli, Ph.D., a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.
From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.
“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”
In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center—patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery—to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.
Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.
This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
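The sketch below shows the shape of that two-stage pipeline. The study itself used recurrent neural networks trained on real recordings; here both stages are stand-in linear maps fitted to synthetic data, purely to show how a decoder (brain activity to vocal tract movements) feeds a synthesizer (movements to acoustic features).

```python
# Schematic stand-in for the two-stage decoder/synthesizer pipeline described
# above. The actual system used recurrent neural networks; these are plain
# least-squares linear maps on synthetic data, shown only for structure.
import numpy as np

rng = np.random.default_rng(1)
T, n_electrodes, n_articulators, n_acoustic = 500, 64, 30, 32  # sizes chosen arbitrarily

# Synthetic stand-ins for real recordings.
neural = rng.normal(size=(T, n_electrodes))                                   # cortical activity
kinematics = neural @ rng.normal(size=(n_electrodes, n_articulators)) * 0.1   # vocal tract movements
acoustics = kinematics @ rng.normal(size=(n_articulators, n_acoustic)) * 0.1  # acoustic features

# Stage 1 ("decoder"): brain activity -> vocal tract movements.
W_decode, *_ = np.linalg.lstsq(neural, kinematics, rcond=None)
# Stage 2 ("synthesizer"): vocal tract movements -> acoustic features.
W_synth, *_ = np.linalg.lstsq(kinematics, acoustics, rcond=None)

# At run time, new neural data passes through both stages in sequence.
new_neural = rng.normal(size=(10, n_electrodes))
predicted_acoustics = (new_neural @ W_decode) @ W_synth
print(predicted_acoustics.shape)  # (10, 32): features a vocoder could turn into audio
```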
The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.
As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.
“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”
Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance
The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.
Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF
Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.
The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.
“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”
Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”
The Big Bang is commonly thought of as the start of it all: About 13.8 billion years ago, the observable universe went boom and expanded into being.
But what were things like before the Big Bang?
Short answer: We don’t know. Long answer: It could have been a lot of things, each mind-bending in its own way.
The first thing to understand is what the Big Bang actually was.
“The Big Bang is a moment in time, not a point in space,” said Sean Carroll, a theoretical physicist at the California Institute of Technology and author of “The Big Picture: On the Origins of Life, Meaning and the Universe Itself” (Dutton, 2016).
So, scrap the image of a tiny speck of dense matter suddenly exploding outward into a void. For one thing, the universe at the Big Bang may not have been particularly small, Carroll said. Sure, everything in the observable universe today — a sphere with a diameter of about 93 billion light-years containing at least 2 trillion galaxies — was crammed into a space less than a centimeter across. But there could be plenty outside of the observable universe that Earthlings can’t see because it’s physically impossible for the light to have traveled that far in 13.8 billion years.
Thus, it’s possible that the universe at the Big Bang was teeny-tiny or infinitely large, Carroll said, because there’s no way to look back in time at the stuff we can’t even see today. All we really know is that it was very, very dense and that it very quickly got less dense.
As a corollary, there really isn’t anything outside the universe, because the universe is, by definition, everything. So, at the Big Bang, everything was denser and hotter than it is now, but there was no more an “outside” of it than there is today. As tempting as it is to take a godlike view and imagine you could stand in a void and look at the scrunched-up baby universe right before the Big Bang, that would be impossible, Carroll said. The universe didn’t expand into space; space itself expanded.
“No matter where you are in the universe, if you trace yourself back 14 billion years, you come to this point where it was extremely hot, dense and rapidly expanding,” he said.
No one knows exactly what was happening in the universe until 1 second after the Big Bang, when the universe cooled off enough for protons and neutrons to collide and stick together. Many scientists do think that the universe went through a process of exponential expansion called inflation during that first second. This would have smoothed out the fabric of space-time and could explain why matter is so evenly distributed in the universe today.
Before the bang
It’s possible that before the Big Bang, the universe was an infinite stretch of an ultrahot, dense material, persisting in a steady state until, for some reason, the Big Bang occurred. This extra-dense universe may have been governed by quantum mechanics, the physics of the extremely small scale, Carroll said. The Big Bang, then, would have represented the moment that classical physics took over as the major driver of the universe’s evolution.
For Stephen Hawking, this moment was all that mattered: Before the Big Bang, he said, events are unmeasurable, and thus undefined. Hawking called this the no-boundary proposal: Time and space, he said, are finite, but they don’t have any boundaries or starting or ending points, the same way that the planet Earth is finite but has no edge.
“Since events before the Big Bang have no observational consequences, one may as well cut them out of the theory and say that time began at the Big Bang,” he said in an interview on the National Geographic show “StarTalk” in 2018.
Or perhaps there was something else before the Big Bang that’s worth pondering. One idea is that the Big Bang isn’t the beginning of time, but rather that it was a moment of symmetry. In this idea, prior to the Big Bang, there was another universe, identical to this one but with entropy increasing toward the past instead of toward the future.
Increasing entropy, or increasing disorder in a system, is essentially the arrow of time, Carroll said, so in this mirror universe, time would run opposite to time in the modern universe, and our universe would be in the past. Proponents of this theory also suggest that other properties of the universe would be flip-flopped in this mirror universe. For example, physicist David Sloan wrote in the University of Oxford Science Blog that asymmetries in molecules and ions (called chiralities) would be in opposite orientations to what they are in our universe.
A related theory holds that the Big Bang wasn’t the beginning of everything, but rather a moment in time when the universe switched from a period of contraction to a period of expansion. This “Big Bounce” notion suggests that there could be infinite Big Bangs as the universe expands, contracts and expands again. The problem with these ideas, Carroll said, is that there’s no explanation for why or how an expanding universe would contract and return to a low-entropy state.
Carroll and his colleague Jennifer Chen have their own pre-Big Bang vision. In 2004, the physicists suggested that perhaps the universe as we know it is the offspring of a parent universe from which a bit of space-time has ripped off.
It’s like a radioactive nucleus decaying, Carroll said: When a nucleus decays, it spits out an alpha or beta particle. The parent universe could do the same thing, except instead of particles, it spits out baby universes, perhaps infinitely. “It’s just a quantum fluctuation that lets it happen,” Carroll said. These baby universes are “literally parallel universes,” Carroll said, and don’t interact with or influence one another.
If that all sounds rather trippy, it is — because scientists don’t yet have a way to peer back to even the instant of the Big Bang, much less what came before it. There’s room to explore, though, Carroll said. The detection of gravitational waves from colliding black holes in 2015 opens the possibility that these waves could be used to solve fundamental mysteries about the universe’s expansion in that first crucial second.
Theoretical physicists also have work to do, Carroll said, like making more-precise predictions about how quantum forces like quantum gravity might work.
“We don’t even know what we’re looking for,” Carroll said, “until we have a theory.”
You need only look at families to see that height is inherited — and studies of identical twins and families have long confirmed that suspicion. About 80% of variation in height is down to genetics, they suggest. But since the human genome was sequenced nearly two decades ago, researchers have struggled to fully identify the genetic factors responsible.
Studies seeking the genes that govern height have identified hundreds of common gene variants linked to the trait. But the findings also posed a quandary: each variant had only a tiny effect on height, and together the effects didn’t amount to the genetic contribution predicted by family studies. This phenomenon, which occurs for many other traits and diseases, was dubbed missing heritability, and had even prompted some researchers to speculate that there’s something fundamentally wrong with our understanding of genetics.
Now, a study suggests that most of the missing heritability for height and body mass index (BMI) can, as some researchers had suspected, be found in rarer gene variants that had lain undiscovered until now.
“It is a reassuring paper because it suggests that there isn’t something terribly wrong with genetics,” says Tim Spector, a genetic epidemiologist at King’s College London. “It’s just that sorting it out is more complex than we thought.” The research was posted to the bioRxiv preprint server on 25 March.
Scouring the genome
To seek out the genetic factors that underlie diseases and traits, geneticists turn to mega-searches known as genome-wide association studies (GWAS). These scour the genomes of, typically, tens of thousands of people — or, increasingly, more than a million — for single-letter changes, or SNPs, in genes that commonly appear in individuals with a particular disease or that could explain a common trait such as height.
But GWAS have limitations. Because sequencing the entire genomes of thousands of people is expensive, GWAS themselves scan only a strategically selected set of SNPs, perhaps 500,000, in each person’s genome. That’s only a snapshot of the roughly six billion nucleotides — the building blocks of DNA — strung together in our genome. In turn, these 500,000 common variants would have been found from sequencing the genomes of just a few hundred people, says Timothy Frayling, a human geneticist at the University of Exeter, UK.
A team led by Peter Visscher at the Queensland Brain Institute in Brisbane, Australia, decided to investigate whether rarer SNPs than those typically scanned in GWAS might explain the missing heritability for height and BMI. They turned to whole-genome sequencing — performing a complete readout of all 6 billion bases — of 21,620 people. (The authors declined to comment on the preprint, because it is under submission at a journal.)
They relied on the simple, but powerful, principle that all people are related to some extent — albeit distantly — and that DNA can be used to calculate degrees of relatedness. Then, information on the people’s height and BMI could be combined to identify both common and rare SNPs that might be contributing to these traits.
Say, for instance, that a pair of third cousins is closer in height than a pair of second cousins in a different family: that’s an indication that the third cousins’ height is mostly down to genetics, and the extent of that correlation will tell you how much, Frayling explains. “They used all of the genetic information, which enables you to work out how much of the relatedness was due to rarer things as well as the common things.”
As a result, the researchers captured genetic differences that occur in only 1 in 500, or even 1 in 5,000, people.
And by using information on both common and rare variants, the researchers arrived at roughly the same estimates of heritability as those indicated by twin studies. For height, Visscher and colleagues estimate a heritability of 79%, and for BMI, 40%. This means that if you take a large group of people, 79% of the height differences would be due to genes rather than to environmental factors, such as nutrition.
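For readers curious how relatedness and trait similarity combine into a heritability number, the toy calculation below simulates genotypes and a trait, then regresses pairwise phenotype similarity on pairwise genetic relatedness (a Haseman-Elston-style regression). It is a simplified stand-in for the study’s whole-genome analysis, not a reproduction of it, and every value is simulated.

```python
# Toy sketch of the underlying idea (not the study's actual analysis):
# estimate heritability by regressing phenotypic similarity between pairs of
# people on their genetic relatedness. All data below are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_snps, true_h2 = 400, 500, 0.8

# Simulate standardized genotypes and a trait with heritability true_h2.
genotypes = rng.normal(size=(n_people, n_snps))
effects = rng.normal(scale=np.sqrt(true_h2 / n_snps), size=n_snps)
phenotype = genotypes @ effects + rng.normal(scale=np.sqrt(1 - true_h2), size=n_people)
phenotype = (phenotype - phenotype.mean()) / phenotype.std()

# Genetic relatedness matrix: average allele sharing across SNPs.
grm = genotypes @ genotypes.T / n_snps

# For every pair of people, regress the product of their phenotypes on their
# relatedness; the slope estimates heritability.
iu = np.triu_indices(n_people, k=1)
pair_similarity = np.outer(phenotype, phenotype)[iu]
pair_relatedness = grm[iu]
h2_estimate = np.polyfit(pair_relatedness, pair_similarity, 1)[0]

print(f"estimated heritability ~ {h2_estimate:.2f} (simulated truth: {true_h2})")
```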
Complex processes
The researchers also suggest how the previously undiscovered variants might be contributing to physical traits. Tentatively, they found that these rare variants were slightly enriched in protein-coding regions of the genome, and that they had an increased likelihood of being disruptive to these regions, notes Terence Capellini, an evolutionary biologist at Harvard University in Cambridge, Massachusetts. This indicates that the rare variants might partly influence height by affecting protein-coding regions instead of the rest of the genome — the vast majority of which does not include instructions for making proteins, but might influence their expression.
The rarity of the variants also suggests that natural selection could be weeding them out, perhaps because they are harmful in some way.
The complexity of heritability means that understanding the roots of many common diseases — necessary if researchers are to develop effective therapies against them — will take considerably more time and money, and it could involve sequencing hundreds of thousands or even millions of whole genomes to identify the rare variants that explain a substantial portion of the illnesses’ genetic components.
The study reveals only the total amount of rare variants contributing to these common traits — not which ones are important, says Spector. “The next stage is to go and work out which of these rare variants are important for traits or diseases that you want to get a drug for.”
Having a go on your PlayStation, going to the cinema with your friends, playing outdoors — that’s how the spare time of most 12-year-old children looks.
That’s not how it is for Jackson Oswalt though. Two years ago, the now 14-year-old achieved something even some of the most renowned scientists have been unable to: he carried out nuclear fusion, in his parents’ garage in Texarkana, Ark.
“One day I had a sudden epiphany,” wrote the teen on amateur physicist forum, Fusor. “I realized that I could be the absolute best at whatever video game, but in the end it still wouldn’t mean much. I realized that, in the grand scheme of things, video games had no role to play.”
It was at this point that he decided to dedicate himself to science and to pursue a new hobby — nuclear fusion.
While other children want a bicycle or a game console for their birthday or Christmas, Oswalt ordered the parts he needed for a nuclear reactor from eBay.
Instead of watching videos of gamers, Oswalt would watch physics videos — his parents agreed to give him financial support if he promised to first check through expert guidelines on a forum and to pay attention to their tips and advice.
They spent somewhere between $8,000 and $10,000 collecting the parts he needed to build his nuclear reactor, a project that involved 50,000 volts and radioactive radiation.
Using the Open Source Fusor Research Consortium — an online forum for amateur physicists — Oswalt relied on trial and error to ensure he was taking the appropriate measures to build a reactor and successfully carry out fusion reactions.
According to Fox News, just before his thirteenth birthday in early 2018, Oswalt finally succeeded in what he’d been working towards for such a long time — a nuclear fusion reactor.
“Being a parent of someone that was as driven as he was for 12 months was really impressive to see. I mean it was everyday grinding; every day learning something different; every day failing and watching him work through all those things,” said his father, Chris Oswalt.
Whether Oswalt is actually the youngest person ever to have succeeded in doing something like this now needs to be confirmed by experts.
In addition to a world record title, Oswalt may also be given a letter of recommendation from his school for a scholarship.
In the meantime, however, he still has some plans: he wants to build an even bigger nuclear reactor.
The bees that live on the roof of Notre Dame are alive and buzzing, having survived the devastating fire that ripped through the cathedral on Monday, the beekeeper Nicolas Geant confirmed to CNN.
“I got a call from Andre Finot, the spokesman for Notre Dame, who said there were bees flying in and out of the hives which means they are still alive!” Geant said. “Right after the fire I looked at the drone pictures and saw the hives weren’t burnt but there was no way of knowing if the bees had survived. Now I know there’s activity it’s a huge relief!”
Notre Dame has housed three beehives on the first floor on a roof over the sacristy, just beneath the rose window, since 2013. Each hive has about 60,000 bees.
Geant said the hives were not touched by the blaze because they are located about 30 meters below the main roof where the fire spread.
“They weren’t in the middle of the fire, had they been they wouldn’t have survived,” Geant said. “The hives are made of wood so they would have gone up in flames.”
“Wax melts at 63 degrees, if the hive had reached that temperature the wax would have melted and glued the bees together, they would have all perished.”
While it is likely that the hives were filled with smoke, that doesn’t impact them like it would with humans, Geant explained.
“Bees don’t have lungs like us,” he said. “And secondly, for centuries to work with the bees we have used bee smokers.”
A bee smoker is a box with bellows which creates a white, thick cold smoke in the hives, prompting the bees to calmly gorge on the honey while beekeepers do their work, Geant said.
Geant said he wouldn’t be able to tell whether all of the bees are alive until he was able to inspect the site, but he’s confident because the hives didn’t burn, and because bees have been spotted flying in and out.
“I was incredibly sad about Notre Dame because it’s such a beautiful building, and as a Catholic it means a lot to me. But to hear there is life when it comes to the bees, that’s just wonderful. I was overjoyed,” he added.
“Thank goodness the flames didn’t touch them. It’s a miracle!”