Depression affects many aspects of life, and for some people it may mean that they sleep longer or different hours than they would normally. “We have known for some time that there is a relationship between sleep timing and mood, but a question we often hear from clinicians is: How much earlier do we need to shift people to see a benefit?” says senior author Celine Vetter, assistant professor of integrative physiology at the University of Colorado Boulder, in a press release.
A previous four-year study of 32,000 nurses by Vetter and collaborators found that “early risers” were 27 percent less likely to develop depression symptoms. But how would shifting a sleep schedule affect people? That’s what this new study focuses on.
The study followed 840,000 people and used genetic information to determine their chronotype, meaning the hours of the day a person is predisposed to prefer for sleep. One “clock gene” is thought to account for 12 to 42 percent of our sleep timing.
The researchers wanted to know whether people whose genetics incline them to be “early risers” also have a lower risk of depression. Some study participants wore sleep trackers and others filled out a sleep preference questionnaire; the team then connected those data to the genetic data.
The team focused on the sleep midpoint, calculated as halfway between bedtime and wake time. “We found that even one-hour earlier sleep timing is associated with significantly lower risk of depression,” says Vetter in the press release. So if someone who normally goes to bed at midnight instead goes to bed at 11 PM and sleeps for the same duration, they could cut their risk by 23 percent, according to the study. The effect could be nearly twice that if shifted by two hours.
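The midpoint calculation and the shift described above are simple arithmetic; as a rough sketch (the percentages are the study’s reported associations, not a predictive model, and the helper function here is purely illustrative):

```python
from datetime import datetime, timedelta

def sleep_midpoint(bedtime: str, wake_time: str) -> str:
    """Halfway point between bedtime and wake time, as "HH:MM".

    Handles schedules that cross midnight (e.g. 23:00 to 07:00).
    """
    fmt = "%H:%M"
    start = datetime.strptime(bedtime, fmt)
    end = datetime.strptime(wake_time, fmt)
    if end <= start:  # sleep period crosses midnight
        end += timedelta(days=1)
    mid = start + (end - start) / 2
    return mid.strftime(fmt)

# Bed at midnight, up at 8:00 gives a 4:00 midpoint; moving the whole
# schedule one hour earlier (bed at 11 PM) shifts the midpoint to 3:00 —
# the one-hour shift the study associates with a 23 percent lower risk.
print(sleep_midpoint("00:00", "08:00"))  # 04:00
print(sleep_midpoint("23:00", "07:00"))  # 03:00
```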
The researchers aren’t certain why they are seeing these results, but it may have to do with how our bodies react to light and darkness. Research has shown that light therapy can be helpful for treating some mood disorders.
The connection to depression symptoms could also be a result of societal norms. Simply having a chronotype that does not make you an early riser could be having an effect. “We live in a society that is designed for morning people, and evening people often feel as if they are in a constant state of misalignment with that societal clock,” says lead author Iyas Daghlas at the Broad Institute of MIT and Harvard.
If you want to shift to an earlier sleep schedule, there are some things you can do to help make that process easier. “Keep your days bright and your nights dark,” says Vetter. “Have your morning coffee on the porch. Walk or ride your bike to work if you can, and dim those electronics in the evening.”
The study, published in the journal Nature, shows how a drug available on the NHS can boost the fitness of healthy stem cells in the gut, making them more resistant to sabotage from the mutant stem cells that cause cancer.
Researchers in the Netherlands, funded by the UK charity Worldwide Cancer Research, have discovered a way to boost the fitness of healthy cells in the gut to prevent the development of bowel cancer. The findings have led to the initiation of a clinical trial to find out if a commonly used psychiatric drug could be used to prevent bowel cancer in people. The trial will recruit patients with a genetic mutation that means they are virtually 100% certain to develop bowel cancer in their lifetime, unless the entire large bowel is removed.
Bowel cancer affects more than 43,000 people each year in the UK and just over half of the people diagnosed survive their disease for 10 years or longer. It is thought that the majority of bowel cancer cases are caused by mutations in a gene called APC.
Intestinal stem cells with mutations to the APC gene have been shown to have a competitive advantage over their healthy counterparts and frequently outcompete them, leading to unrestricted growth and cancer.
Up until now it was unclear how the mutant stem cells win the upper hand, but new research, published in the journal Nature, now shows that the mutant stem cells actively emit signals that sabotage the function of healthy stem cells in the gut.
Professor Louis Vermeulen, Group Leader at the Center for Experimental Molecular Medicine at Amsterdam UMC, and senior author of the paper explained: “We have uncovered the very first steps in the development of bowel cancer. We found that following the occurrence of a mutation in a key gene that regulates stem cells in the intestine, these cells turn into cheaters that actively suppress the normal cells in the environment.
“This is a totally new concept as it was always thought that mutant cells that can turn into cancer simply proliferate faster or are resistant to cell death. But our findings indicate that cells on their way to a full malignancy can actively suppress the stem cells in the vicinity to gain a competitive edge. This is a concept we refer to as supercompetition.”
Critically, the researchers also discovered a way to prevent the mutant stem cells from interfering with the healthy ones. Lithium, a commonly used drug for the treatment of several psychiatric disorders, prevented the mutant stem cells from taking over and forming tumours in mice by rendering the healthy stem cells insensitive to the damaging signals.
A clinical trial funded by the Dutch Cancer Society (KWF) testing the effect of lithium on bowel cancer development in individuals with familial adenomatous polyposis (FAP) will now be performed in the Netherlands. FAP is a relatively uncommon genetic syndrome that affects about 1 in 7,000 to 1 in 22,000 people. FAP patients have mutations in their APC gene and develop hundreds of non-cancerous polyps and adenomas in their bowel. Without treatment, nearly all of them will develop bowel cancer between the ages of 35 and 45.
The trial is set to recruit 10 young adult patients with FAP and will observe the patients before, during and after treatment with lithium for a total of 18 months. The researchers will collect evidence on the preventive effect of lithium on mutant stem cells and polyp formation, as well as test the safety profile of lithium. Results of the clinical trial are likely to build the basis for larger trials with more patients.
Sanne van Neerven, the Ph.D. student who conducted the research, said: “Our clinical trial may reveal that lithium can be used to prevent cancer development in FAP individuals. But what is also important is that this trial can establish a proof of concept that competition between mutant cells and normal cells can be manipulated in such a way that the healthy cells outcompete the mutant cells. This is a novel strategy for cancer prevention and could be applied to many heritable cancer syndromes involving different mutations and organs, but more research is warranted in this area.”
Dr. Helen Rippon, chief executive at Worldwide Cancer Research said: “The discoveries made by Professor Vermeulen and his team are a huge breakthrough in our understanding of how bowel cancer develops. It’s amazing to see innovative research like this go from the lab to the clinic as it shows just how important early-stage discovery research is to starting new cancer cures. We are all very excited to see the results from this clinical trial and the future impact these findings might have on other people with inherited cancer syndromes.
“Around 1% of bowel cancers are caused by familial adenomatous polyposis (FAP). This may seem like a small number, but in the UK alone this means that over 400 people are diagnosed with bowel cancer caused by FAP every year. The only treatment option available for people with FAP is major surgery to remove the entire colon, which can be life altering and unfortunately cannot guarantee that cancer won’t develop. The launch of a clinical trial thanks to this incredible research will offer real hope to people that there could be a simple way to prevent bowel cancer in the future.”
There is a surprisingly relatable description of depression and heartbreak from 3,000 years ago in Mesopotamia, the land between the Euphrates and Tigris rivers that hosted the peoples of Babylon and Assyria:
“If Depression continually falls upon him, he continually sighs, he eats bread and drinks beer but it does not go well for him, then says, ‘Oh, my heart!’ and is dejected, he is sick with Lovesickness; it is the same for a man and a woman.”
Sighing, eating bread, and drinking beer, but not feeling better: These are all recognizable qualities of a low mood or break-up in the 21st century. And yet this text was translated from what’s known as the Diagnostic Handbook— a series of 40 clay tablets that date to the first millennium B.C.E. Portions of copies of the tablets were recovered in what is now Iraq and Syria, and put together to make a complete book.
The tablets were written in the Akkadian language in a writing system called cuneiform, which involved the text being impressed onto wet clay tablets and then dried—not chiseled into hard stone, as it may look.
The clay tablets are almost alien in their appearance—strange, geometric grooves pressed into the clay—but they contain a treasure trove of all-too-human experiences that can feel uncannily similar to emotions we feel today.
Though the handbook and other similar texts from this period describe physical conditions including epilepsy, seizure, and skin lesions, Moudhy Al-Rashid, an assyriologist at the University of Oxford, has been especially focused on unearthing the parts that have an emotional or mental component to them, like the depression-like symptoms in this passage from a medical text that describes ailments caused by witchcraft:
“If a man eats (and) drinks, but it does not approach his flesh, he is sometimes pale, sometimes red, sometimes his face becomes darker and darker, he is worried, he is depressed, his heart is not up to speaking.”
A text from around 900 to 600 B.C.E. described people forgetting speech, losing their appetite, having nightmares, struggling to fall asleep, or having low libido: “He has no desire for [bread and] beer, he has no desire to go to a woman, his ‘heart’ cannot arouse him toward a woman; he babbles, he has repeated cramps, he is depressed, he continually pours out, he says, ‘Have mercy on me!’” (A note on the translations: brackets mark a word that is missing in the original text and has been inferred from other original materials or copies. Parentheses denote words added to make the phrases more legible in English.)
Behaviors that suggested confusion— like wandering around without realizing what you’re doing, laughing without reason, or crying out—were recorded. There are, in fact, two phrases referring to crying out—one is just a sound, and the other translates to, “Oh my heart!” or “Oh my insides!”
The Diagnostic Handbook also includes, alongside the main emotional symptoms that Al-Rashid studies, many bodily complaints familiar to those dealing with anxiety or depression: stomach issues like indigestion, vertigo, dizziness, fatigue, sweating, weakness, and restlessness.
These documents are striking for how detailed they are, and also for the resonance a modern reader can find within them. They reveal how what we recognize in the present day as symptoms of mental and emotional distress have long existed in some form, even if they’ve been explained in different ways, depending on historical time and place. Sometimes, emotions are extreme enough that we enlist the help of others—in Mesopotamia that meant intervention from “exorcists” and “healers,” while today it’s psychotherapy or medication.
But the interpretation of these cuneiform texts raises an issue that we still struggle with: how best to categorize emotional distress in order to make sense of it. A previous tendency in the field, called “retrospective diagnosis,” is now being resisted by the next generation of interpreters, who do not want scholars to simply assign contemporary diagnostic categories, such as obsessive-compulsive disorder (OCD), schizophrenia, or psychopathy, to translations from the Diagnostic Handbook.
Looking at the ways mental symptoms were described forces us to reckon with our own meaning-making structures around emotional distress and place them in our specific cultural and historical contexts. It also, though, connects us to a broader legacy: For thousands of years, humans have been trying to make sense of their emotions as they exist in relationship with the world; our distant ancestors struggled with similar agonies we do; and all along, people have sought to treat, be treated for, and understand that distress.
“They, too, were trying to bring some kind of order to the chaos,” Al-Rashid said.
Al-Rashid has had depression sporadically since childhood, and the intricate characterizations of depression-like symptoms resonated with her lived experience. Similarly, I came to this topic through a paper from 2012 presenting what the authors called a description of “OCD” behaviors in Babylon. Because I have OCD, I was curious if the symptoms would be similar.
The paper was written by a pioneering translator of the Diagnostic Handbook, British assyriologist James Kinnier Wilson, in collaboration with the neurologist Edward Reynolds. They translated the “OCD” behaviors as such:
“He does not know why he is compelled to take (things), to hide (things)… to step in blood or walk about over a place where blood has been shed…(or why) he has a phobia of meeting an accursed person or of an accursed person meeting him, or of sleeping in the bed, sitting in the chair, eating at the table, or drinking from the cup of an accursed person.”
Kinnier Wilson also translated what he called “phobias”: “He does not know why he has a (morbid) fear of beds, chairs, tables, lighted stoves, lamps… of leaving or entering (such and such) city, city gate, or house, or of (such and such) a street, temple, or road.” Other “phobias” included fear of certain days or months, of hunger, or of having the name of a god invoked in his presence.
I have certainly felt anxiety around sharing another person’s cup due to intrusive thoughts around contamination, but it’s not accurate to project my experience of OCD into the past to describe what a person was going through, said Chiara Thumiger, a historian of medicine at Kiel University in Germany.
Al-Rashid said that in her opinion, Kinnier Wilson’s translations are quite liberal; the original work they are excerpting from is called Shurpu, a collection of incantations to accompany a particular ritual, and it includes lists of potential sins committed by someone who might need to consult that text.
“This list method follows established methods of recording and presenting information in Assyrian and Babylonian scholarly texts where they basically try to be exhaustive by listing possibilities of things,” she explained.
Calling these sins “phobias” is at best a metaphor; this could be seen as a case where retrospective diagnosis can distract from the original meaning. “The modern psychiatrist will recognize a remarkably accurate description of an agitated depression with biological features including insomnia, anorexia, weakness (and probably weight loss), impaired concentration and memory,” Kinnier Wilson and Reynolds wrote.
While the symptoms may be similar, our languages are different, as is our understanding of the body, science, and medicine. Retrospective diagnosis can obscure what the texts have recorded by trying to map modern illness concepts onto them, Al-Rashid said. It doesn’t tell us the complete story.
Diseases, including what we call mental illness, were understood in Mesopotamia to come from outside the body. Whether it was a seizure, a skin lesion, or depression, the cause was usually understood as supernatural. A broken heart, people thought, could have been caused by a goddess who needed to be appeased. Demons could cause illness, with specific demons associated with specific ailments. Gods and goddesses like Ishtar, the goddess of love and fertility, were held responsible for a wide variety of illnesses. Ghosts could also be responsible, and were often implicated in mental symptoms. Depression was often tied to the figure of the witch, which was wielded not as a personal accusation against specific people, but against a demonic, chaotic, unidentified figure.
These supernatural causes weren’t thought of as unusual. Our view of, and word for, the supernatural implies that it is beyond the natural. But to the Mesopotamians, the supernatural was part of the everyday.
The ašipu—translated as exorcists—who would often be called on to treat the mental symptoms were not shocking, horror-movie-like figures. They were part of a regulated office. Calling on them was as normal as calling any other kind of doctor or official.
“I tell my students sometimes you should think of it as if, in America, alongside the IRS, we also had the Department of Exorcism,” said Gina Konstantopoulos, an assistant professor in assyriology and cuneiform studies at the University of California, Los Angeles. “It was part of an administrative and bureaucratic framework, and was a technical profession that someone trained extensively in.”
Elsewhere in the Diagnostic Handbook are detailed descriptions that are very similar to what we understand as stroke or epilepsy—neurological conditions. In a description of what we might call today a focal motor seizure, a text describes how a person’s left eye will move to the side, his lips will pucker, spit will come out from his mouth, and the left side of his body will jerk “like a newly-slaughtered sheep.”
The authors of the text did not share our understanding of the causes of such seizures, but knew they were dangerous, and offered a quantified approach to assessing their effects: “If an epilepsy demon falls many times upon him and on a given day he seven times pursues and possesses him, his life will be spared. If he should fall upon him eight times his life may not be spared.”
Al-Rashid takes a philological approach, which means studying the language in its context and deriving its meaning from when the words are used, when they’re used with other words, and how frequently they appear. This is a lot of work—her dissertation on the context, meaning, and use of just three Akkadian words runs around 400 pages.
One of the phrases that Al-Rashid is working on right now is ḫīp libbi, which means the breaking of the heart—a literal translation. “I think it refers to a type of anxiety in some context,” she said. “But then you read another context and it’s quite clearly a stomachache.”
These complications around retrospective diagnosis don’t mean we can’t compare ancient texts to modern understandings—we just have to be thoughtful in the interpretations. “What I do think is useful is looking at symptoms, rather than disease or illness, and there is a lot of overlap there with what we experience today,” she said.
Al-Rashid is currently looking at the metaphors people use to describe their experiences, and where there are poignant overlaps to the present. For example, there are descriptions in the ancient texts that describe the heart as being low, or the face being downcast.
“I think it’s interesting that the ‘sad is down’ metaphor appears 3000 years ago,” Al-Rashid said. “And we still do that. The word depression literally means a sunken down place.”
“When is an emotional excess a sign of mental illness?” said Marke Ahonen, a lecturer and researcher at the University of Helsinki. “Is mental illness a thing of the body or a thing of the soul? Can philosophers treat mental illness or is it the prerogative of a medical doctor?”
These are disputes that are still unresolved, and it’s meaningful that the same issues arise in the study of the past. Today, there is ongoing discussion around the validity and application of the Diagnostic and Statistical Manual of Mental Disorders, or DSM, and whether its classifications lead to overdiagnosis and the medicalization of normal human emotions.
“You can go on psychiatric Twitter almost any day and find people arguing that depression and anxiety are normal parts of life,” said Jonathan Sadowsky, a historian of psychiatry at Case Western Reserve University, and author of The Empire of Depression.
Sadowsky agreed that sadness and anxiety are typical parts of life, and responses to all sorts of life events and circumstances. But one thing we gain from looking to the past is the understanding that, for as long as people have felt sadness and anxiety within an expected everyday range, there have also been chronic and extreme forms of these emotions that people have tried to understand and devise treatments for.
“Many of the people who want to deny depression and anxiety illness status want to focus on how it’s a new construction that came out of modern psychiatry,” Sadowsky said. “And in that sense, I think understanding that these observations of severe mood disorders in medical traditions are common and ancient does have some value.”
“From quite early on, we find the idea that low mood, fear and anxiety can sometimes arise without an adequate cause and are symptoms of an illness rather than ‘normal’ emotional states,” Ahonen said.
Melancholy, as a defined illness, appeared around the 1st century B.C.E. to the 1st century A.D. “In melancholy, people experience distress and fear that can be extreme and the condition often involves fanciful delusions,” Ahonen said. “It could even involve lycanthropy, the delusion that one was turned into a wolf or a wild dog. This melancholy resembles modern depression, but is also quite different from it.”
The fact that the Mesopotamian descriptions were found in the Diagnostic Handbook means that “presumably the stuff that makes it into the medical corpus is something that is sufficiently chronic or extreme to be considered not a part of normal expectable response to something,” Al-Rashid said.
But Sadowsky doesn’t think it’s the “oldness” of these emotions that definitively legitimizes depression as an illness. “I think what qualifies something as an illness category is actually a social decision that is made in different contexts,” he said. “It depends on how the culture regards the symptoms, or if they even regard them as symptoms of an illness, and how they treat them.” Depression and anxiety should be under the purview of medicine because there are treatments, Sadowsky said, both pharmacological and non-pharmacological, that can help people.
For depression, one part of the Diagnostic Handbook outlines a treatment for a person who “has frequent nervous breakdowns,” “shakes with fear in his bedroom and his limbs have become ‘weak,’” “his limbs often hang limp, and he is sometimes so frightened that he cannot sleep by day or night and constantly sees disturbing dreams,” “has a ‘weakness’ in his limbs (from) not having enough food and drink,” and “he forgets (cannot find) the word which he is trying to say.” Treating such a condition requires a ritual of creating clay figurines, sacrificing a sheep, and chanting an incantation appealing to the god and goddess who have bestowed these ills on him.
“There’s an understanding of extreme emotion and there’s an understanding of grief and sorrow and certainly rage,” said Konstantopoulos. “But there is also an understanding that these emotions which we would think of, at least at present, as depression and extreme anxiety can be fixed within a system that has treatments and ritual procedures to address them and ritual specialists trained in those procedures.”
The past can provide lessons for the future too; Sadowsky said that looking at how antiquity and other cultures dealt with depression can help us remember that there are forms of social support and ritual that can be helpful outside of treatments like drugs or ECT.
“As to treatment,” Ahonen said, “their methods often vary from cruel to ludicrous, but there are also quite sensible approaches: alleviating fear, instilling joy, bringing distraction, correcting erratic thoughts. Physical treatment, [like] drugs or bloodletting, and psychological treatment were often combined, as they still are.”
Mesopotamian treatments were often about a practitioner spending a lot of time with a person. “They would make them these fancy necklaces with shiny, precious stones, putting them on the patient, saying incantations over the patient,” said Willis Monroe, a historian at the University of British Columbia who studies astronomy and astrology in cuneiform texts. “You’re probably going to feel better after that to some degree. You walk away with a shiny necklace, a nice smelling sachet, and things seem a little bit brighter.”
One of the medical texts, Monroe said, begins by describing all the things a practitioner may see on their way to a sick person’s house. “In our modern conception, we wouldn’t think that has anything to do with a patient presenting symptoms,” he said. “But this text is teaching the practitioner to observe on the way to the house and think about what they see. It did train the practitioner to be observant in a way that I think we’re learning more to do now as well.”
Another facet of the texts is that physical illnesses aren’t privileged over mental illness, Konstantopoulos said. They were equally recognizable problems. And there wasn’t as much moralizing involved in mental symptoms, because they were caused by external forces.
“When we think about the stigmatization against mental illness that is present in the modern world, looking at a system where that isn’t necessarily present in the diagnostic handbooks in how it’s being presented and treated is a helpful thing to look at,” she said.
Thumiger, who studies Greco-Roman antiquity, said that there’s also a lack of a sharp separation between mind and body. “Mind and body are really in a continuum and the doctor looks at both things as if they were of equal importance,” Thumiger said.
Al-Rashid believes that the messy, imperfect process of naming is important, both in past and present, because it can help make sense of what we’re experiencing. Monroe said that the fact that these texts exist, that there were practitioners who specialize in these texts, and made a living off their services shows how deep down, humans have long been trying to understand and soothe the anxiety they have about the world.
“People have always been worried about their future and about how they’re feeling,” Monroe said. “And there has long been a whole genre of knowledge that has dealt with this issue: what is the future, what’s going to happen to you, how can we make you feel better in the moment.”
It can be incredibly soothing to know that people have felt like you did, when you were at your lowest. I still remember vividly the first International OCD Foundation conference I went to, at the start of my OCD treatment, where I listened to panelists describe feelings and challenges that I myself was experiencing. Warm feelings of camaraderie and solidarity—and also hope—washed over me. I was not the only one to feel the way I did, and it was possible to get through it.
To Al-Rashid, one example from the Mesopotamian texts that also serves that role is the Epic of Gilgamesh, often called the first story in the world. In it, the legendary ruler Gilgamesh grieved the loss of his friend and lover, Enkidu. Gilgamesh’s experience perfectly described what happens when you lose a person you love.
“Gilgamesh’s journey reminds anyone who has ever grieved that they’re not alone—the experience of extreme loss transcends the millennia-long gap between what it meant to be human then and what it means now,” Al-Rashid wrote recently in Psyche.
After the shock of Enkidu’s death and the subsequent funeral, Gilgamesh said, “Sorrow has entered my belly. I became afraid of death and go wandering the wild.”
“Even if the symptoms get organized slightly differently, or the labels are slightly different from one time period or place to the next, I think it’s important to show how old our experiences are,” Al-Rashid said. “There are these common denominators in our experiences of mental distress that have always been there. And a lot of people say it makes them feel less alone.”
Scientists at Jerusalem’s Hebrew University have managed to genetically engineer a potato to glow in a particular color when it is feeling under the weather.
Like humans, plants suffer stress if it is too hot or cold, or if they don’t get enough food or water.
New research published in Plant Physiology by Matanel Hipsch, under the direction of Dr. Shilo Rosenwasser of the university’s Department of Plant Sciences, describes the implanting of a gene encoding a fluorescent protein that changes color according to the level of free radicals — oxygen-containing molecules that accumulate when an organism is experiencing stress. High levels of free radicals can cause significant damage. The fluorescent signal is picked up by a special camera.
Dr. Rosenwasser told The Times of Israel that the work was still at the research and development stage and that the team planned to develop an easy-to-use and affordable camera for farmers to use. The hope is also to extend and if necessary adapt the technology to measure stress in other crops, he added.
Whereas the genetic approach modifies a plant’s own DNA to get plants to do certain things, plant nanobionics uses minuscule sensors — tiny engineered particles that can access a plant’s cells and even subcellular structures, such as chloroplasts.
The MIT sensors are made by combining infinitesimally small tubes with a polymer coating to create fluorescence and emit light. The fluorescence changes color the moment a target material binds with the polymer coating. This color change is picked up by an infrared camera, which sends an alert to a cellphone or email address.
The sensors have been used to detect the presence of materials such as arsenic in groundwater — a real problem for many rice farmers who cannot afford laboratory testing — and MIT’s laboratory, led by Prof. Michael Strano, has also begun to use them to intercept chemical signals that the plant sends when it is under stress.
Plants don’t only detect problems, but also have “internal signaling like humans have nerves,” Strano told The Times of Israel earlier this year.
MIT has even stretched the technology to make plants glow.
Rosenwasser said that the genetic approach had pros and cons. One advantage was that the genetic encoding only has to be done once. The characteristic passes on to all future generations of the plant that was tweaked. The downside was the fear that people have of genetically modified crops.
One way to counter the latter, Rosenwasser continued, was to plant a certain number of modified potatoes in a field that would communicate stress, and to remove them before the other potatoes are harvested for sale.
The research is being carried out at Hebrew University’s Robert H. Smith Faculty of Agriculture, Food and Environment.
A healthy lifestyle can lower dementia risk, even among those with a family history of cognitive decline, according to a study presented Thursday during an American Heart Association conference held virtually because of the COVID-19 pandemic.
This includes eating a healthy diet, exercising regularly, not smoking or drinking alcohol to excess and maintaining good sleep habits and a healthy body weight, the researchers said during the Epidemiology, Prevention, Lifestyle and Cardiometabolic Health Conference.
Adults ages 50 to 73 who embrace at least three of the behaviors can reduce their dementia risk by 30%, the data showed.
Those with a family history of dementia who followed at least three of the behaviors had a 25% to 35% reduced risk for the condition compared to those who followed two or fewer.
“When dementia runs in a family, both genetics and non-genetic factors, such as dietary patterns, physical activity and smoking status, affect an individual’s overall risk,” study co-author Angelique Brellenthin said in a press release.
However, the findings suggest “there may be opportunities for reducing risk by addressing those non-genetic factors,” said Brellenthin, an assistant professor of kinesiology at Iowa State University in Ames.
Having a close relative with dementia, such as a parent or sibling, can increase a person’s risk for the disease by nearly 75% compared to those with no family history, according to the Alzheimer’s Association.
Older age and high blood pressure, high cholesterol, Type 2 diabetes and depression also can increase a person’s risk for the condition, Brellenthin and her colleagues said.
For this study, the researchers analyzed health information on more than 302,000 adults ages 50 to 73 years who were free of dementia at the beginning of the study and filled out questionnaires about family health history and lifestyle habits.
Participants were given one point for each of six healthy lifestyle behaviors they followed.
These included eating a healthy diet with more fruits and vegetables and less processed meat and refined grains; meeting national exercise guidelines by engaging in 150 or more minutes of moderate to vigorous physical activity each week; and sleeping 6 to 9 hours each day.
They also received one point for drinking alcohol in moderation, not smoking and not being obese.
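The scoring scheme described above amounts to a simple count of behaviors. A minimal sketch, assuming hypothetical field names (the study's actual codebook is not public here):

```python
# One point per healthy behavior, mirroring the study's six-item score.
# Field names below are illustrative, not taken from the study itself.

BEHAVIORS = [
    "healthy_diet", "regular_exercise", "adequate_sleep",
    "moderate_alcohol", "non_smoker", "not_obese",
]

def lifestyle_score(participant: dict) -> int:
    """Count how many of the six healthy behaviors a participant follows."""
    return sum(1 for b in BEHAVIORS if participant.get(b, False))

# A hypothetical participant following four of the six behaviors:
p = {"healthy_diet": True, "regular_exercise": True, "adequate_sleep": True,
     "non_smoker": True, "moderate_alcohol": False, "not_obese": False}
print(lifestyle_score(p))  # 4
```

In the study's terms, a score of three or more was the threshold associated with the roughly 30% lower dementia risk.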
Participants’ health then was monitored for an average of about eight years.
Nearly 1,700, or 0.6%, of the participants developed dementia during that period, the data showed. Those with a family history of dementia had a nearly 70% higher risk for dementia compared to those who did not.
Following all six healthy lifestyle behaviors reduced participants’ risk for dementia by nearly one-half, compared to those who followed two or fewer healthy behaviors.
The results suggest that even modest changes — following at least three healthy lifestyle behaviors — can significantly lower a person's risk for dementia, according to the researchers.
“This study provides important evidence that a healthy lifestyle can have a positive impact on brain health,” American Heart Association president Dr. Mitchell S.V. Elkind said in a press release.
“It should be reassuring and inspiring to people to know that following just a few healthy behaviors can delay cognitive decline, prevent dementia and preserve brain health,” said Elkind, a professor of neurology and epidemiology at Columbia University in New York City who was not part of Brellenthin’s study.
When considering time, it’s easy to quickly get lost in the complexity of the topic. Time is all around us — it’s ever-present and is the basis of how we record life on Earth. It’s the constant that keeps the world, the solar system and even the universe ticking.
Civilizations have risen and fallen, stars have been born and extinguished, and our one method of keeping track of every event in the universe and on Earth has been comparing them to the present day with the regular passing of time. But is it really a constant? Is time really as simple as a movement from one second to the next?
Some 13.8 billion years ago, the universe was born, and since then time has flown by to the present day, overseeing the creation of galaxies and the expansion of space. But when it comes to comparing time, it’s daunting to realize just how little of it we have actually experienced.
Earth might be 4.5 billion years old, but modern humans have inhabited the planet for around 300,000 years — that’s just 0.002% the age of the universe. Feeling small and insignificant yet? It gets worse. We have experienced so little time on Earth that in astronomical terms we’re entirely negligible.
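The percentage quoted above is easy to check with the two figures given:

```python
# Quick check: what fraction of the universe's age does modern
# human history cover? (Figures as quoted in the text.)
universe_age_years = 13.8e9    # age of the universe
human_history_years = 300_000  # rough span of modern humans

fraction_pct = human_history_years / universe_age_years * 100
print(f"{fraction_pct:.3f}%")  # 0.002%
```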
In the 17th century, physicist Isaac Newton saw time as an arrow fired from a bow, traveling in a direct, straight line and never deviating from its path. To Newton, one second on Earth was the same length of time as that same second on Mars, Jupiter or in deep space. He believed that absolute motion could not be detected, which meant that nothing in the universe had a constant speed — not even light. From this it followed that if the speed of light could vary, then time had to be the constant: it must tick from one second to the next, with no difference between the length of any two seconds. It's easy to assume this is true. Every day has roughly 24 hours; you don't have one day with 26 hours and another with 23.
However, in 1905, Albert Einstein asserted that the speed of light doesn’t vary, but rather it is a constant, traveling at roughly 186,282 miles per second (299,792 kilometers per second). He postulated that time was more like a river, ebbing and flowing depending on the effects of gravity and space-time. Time would speed up and slow down around cosmological bodies with differing masses and velocities, and therefore one second on Earth was not the same length of time everywhere in the universe.
This posed a problem. If the speed of light was really a constant, then there had to be some variable that altered over large distances across the universe. With the universe expanding and planets and galaxies moving on a humongous scale, something had to give to allow for these small fluctuations. And this variable had to be time.
It was ultimately Einstein's view that prevailed — and it was proven experimentally. In October 1971, J.C. Hafele and Richard Keating set out to test it by flying four cesium atomic clocks on planes around the world, first eastward and then westward.
According to Einstein's theory, when compared with ground-based atomic clocks — in this instance at the U.S. Naval Observatory in Washington, D.C. — Hafele and Keating's airborne clocks would be about 40 nanoseconds slower after their eastward trip and about 275 nanoseconds faster after traveling west, due to the combined effects of the planes' velocity and of Earth's gravity, according to their 1972 study in the journal Science. Incredibly, the clocks did indeed register a difference: about 59 nanoseconds slower traveling east and 273 nanoseconds faster traveling west, compared with the U.S. Naval Observatory clocks. This confirmed Einstein's theory of time dilation: time really does flow at different rates throughout the universe.
Newton and Einstein did agree on one thing, though — that time moves forward. So far, there is no evidence of anything in the universe able to dodge time and move forward and backward at will. Everything ultimately moves forward in time, be it at a regular pace or slightly warped when approaching the speed of light.

But why does time tick forward? Scientists aren't certain, but they have several theories to explain time's one-track "mind." One of these invokes the laws of thermodynamics, specifically the second law, which states that everything in the universe tends to move from low to high entropy — from uniformity to disorder — beginning with simplicity at the Big Bang and ending with the almost random arrangement of galaxies and their inhabitants in the present day. This is known as the "arrow of time," or sometimes "time's arrow," a phrase likely coined by British astronomer Arthur Eddington in 1928, according to analytic philosopher Huw Price, speaking at the Séminaire Poincaré in 2006.
Eddington suggested that time was not symmetrical: "If as we follow the arrow we find more and more of the random element in the state of the world, then the arrow is pointing towards the future; if the random element decreases, the arrow points towards the past," he wrote in "The Nature of the Physical World" in 1928. For example, if you were to observe a star in near uniformity, but later saw it explode as a supernova and scatter into a nebula, you would know that time had moved forward from uniformity to chaos.
Another theory suggests that the passage of time is due to the expansion of the universe. As the universe expands, it pulls time with it, as space and time are linked as one; but this would mean that if the universe were to reach a theoretical limit of expansion and begin to contract, then time would reverse — a slight paradox for scientists and astronomers. Would time really move backward, with everything coming back to an era of simplicity and ending with a Big Crunch? It’s unlikely we will be around to find out, but scientists can postulate on what might happen.
It's incredible to think of the progress humanity has made in our understanding of time over the past century. From ancient sundials to modern atomic clocks, we can now track the passing of a second more closely than ever before. Time remains a complex topic, but thanks to scientific visionaries, we are getting closer to unlocking the secrets of this not-so-constant universal constant.
The importance of Einstein’s theory of special relativity
Einstein’s theory of special relativity relies on one key fact: The speed of light is the same no matter how you look at it. To put this into practice, imagine you are traveling in a car at 20 mph (32 km/h), and you drive past a friend who is standing still. As you pass them, you throw a ball out in front of the car at 10 mph (16 km/h).
To your friend, the ball’s speed combines with that of the car, and so appears to be traveling at 30 mph (48 km/h). Relative to you, however, the ball travels at only 10 mph, as you are already traveling at 20 mph.
Now imagine the same scenario, but this time you pass your stationary friend while traveling at half the speed of light. Through some imaginary contraption, your friend can observe you as you travel past. This time you shine a beam of light out of the car windscreen.
In our previous calculation we added together the speed of the ball and the car to find out what your friend saw, so in this instance, does your friend see the beam of light traveling at one-and-a-half times the speed of light?
According to Einstein, the answer is no. The speed of light always remains constant, and nothing can travel faster than it. On this occasion, both you and your friend observe the light traveling at its universally agreed value of roughly 186,282 miles per second (299,792 kilometers per second). This is the theory of special relativity, and it's very important when talking about time.
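The scenario above is resolved by the relativistic velocity-addition formula, u_combined = (u + v) / (1 + uv/c²), which reduces to ordinary addition at everyday speeds but never yields a result above c. A minimal sketch (the numeric values are only illustrative):

```python
# Relativistic velocity addition: speeds don't simply add.
# u_combined = (u + v) / (1 + u*v / c**2)

C = 299_792_458.0  # speed of light, m/s

def add_velocities(u: float, v: float) -> float:
    """Combine two velocities relativistically (both in m/s)."""
    return (u + v) / (1 + u * v / C**2)

# Everyday speeds: the correction is immeasurably small, so the
# car (~20 mph) and ball (~10 mph) effectively just add.
car, ball = 8.94, 4.47  # ~20 mph and ~10 mph, in m/s
print(add_velocities(car, ball))  # ~13.41 m/s, ordinary addition

# Half the speed of light plus a light beam: the result is still c,
# not 1.5c — exactly what the stationary friend observes.
print(add_velocities(0.5 * C, C))
```

At the car-and-ball scale the denominator differs from 1 by only about 4×10⁻¹⁶, which is why Newtonian addition works so well in daily life.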
It was once thought that space and time were separate, and that the universe was merely an assortment of cosmic bodies arranged in three dimensions. Einstein, however, introduced the concept of a fourth dimension — time — that meant that space and time were inextricably linked. The general theory of relativity suggests that space-time expands and contracts depending on the momentum and mass of nearby matter. The theory was sound, but all that was needed was proof.
That proof came courtesy of NASA’s Gravity Probe B, which demonstrated that space and time were indeed linked. Four gyroscopes were pointed in the direction of a distant star, and if gravity did not have an effect on space and time, they would remain locked in the same position. However, scientists clearly observed a “frame-dragging” effect due to the gravity of Earth, which meant the gyroscopes were pulled very slightly out of position. This seems to prove that the fabric of space itself can be altered, and if space and time are linked, then time itself can be stretched and contracted by gravity.
How long is a second?
There are two main ways of measuring time: dynamic and atomic time. The former relies on the motion of celestial bodies, including Earth, to keep track of time — whether it's the rotation of a distant spinning star such as a pulsar, the motion of a star across our night sky or the rotation of Earth. However, apart from the spin of a pulsar, which can be hard to observe, these methods are not always entirely accurate.
The old definition of a second was based on the rotation of Earth. As it takes the sun one day to rise in the east, set in the west and rise again, a day was almost arbitrarily divided into 24 hours, an hour into 60 minutes and a minute into 60 seconds. However, Earth doesn't rotate uniformly: its rotation slows at a rate of about 30 seconds every 10,000 years due to factors such as tidal friction. Scientists have devised ways to account for the changing speed of Earth's rotation, introducing "leap seconds," but for the most accurate time you have to go even smaller.
Atomic time relies on the energy transition within an atom of a certain element, commonly caesium. By defining a second as a fixed number of these transitions, time can be measured to an accuracy that loses only a tiny fraction of a second over a million years. A second is now defined as 9,192,631,770 transitions within a caesium atom, Scientific American reported.
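Since the second is defined by a fixed transition count, keeping atomic time is essentially an exercise in counting cycles. A small illustrative sketch of the arithmetic (not a real clock interface):

```python
# SI definition: 1 second = 9,192,631,770 cycles of the caesium-133
# hyperfine transition. Counting cycles therefore measures elapsed time.

CS_HZ = 9_192_631_770  # transition frequency in Hz (exact by definition)

def seconds_elapsed(cycles: int) -> float:
    """Elapsed time implied by a counted number of transition cycles."""
    return cycles / CS_HZ

# One full second's worth of counted cycles:
print(seconds_elapsed(9_192_631_770))  # 1.0

# A clock that loses 1 second per 400 million years has a fractional
# error of roughly 8e-17:
SECONDS_PER_YEAR = 365.25 * 24 * 3600
frac_error = 1 / (400e6 * SECONDS_PER_YEAR)
print(f"{frac_error:.1e}")
```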
Atomic clocks: The most accurate track of time
The most accurate clock in the universe would probably be a rotating star like a pulsar, but on Earth atomic clocks provide the most accurate track of time. The entire GPS system in orbit around Earth uses atomic clocks to accurately track positions and relay data to the planet, while entire scientific centers are set up to calculate the most accurate measure of time — usually by measuring transitions within a caesium atom.
While most atomic clocks rely on magnetic fields, modern clocks use lasers to track and detect energy transitions within caesium atoms and keep a more precise measure of time. Although caesium clocks currently keep time around the world, strontium clocks promise twice the accuracy, while an experimental design based on charged mercury atoms could reduce discrepancies further still, to less than 1 second lost or gained in 400 million years.
But for urban dwellers in big cities like Tokyo, Japan, smaller homes on small plots of land are the norm to start with, not the exception. In creating a new home for a couple in their 40s, Tokyo-based Unemori Architects managed to make the most of the tiny 280-square-foot plot of land by building up vertically and doing some strategic rearrangement of the home’s spatial volumes to bring in more sunlight and ventilation.
“In Tokyo, tiny plots of land are the standard. Houses in the city have to be compact and cleverly structured. With House Tokyo, we reacted to the challenge by designing the house as stacked, interlinked cubes with a very open floor plan.”
In stacking and offsetting the volumes, which are wrapped in corrugated galvanized steel, the architects made the home feel less hemmed in by the neighboring buildings. In addition, a new multipurpose outdoor terrace on top of one of the volumes helps compensate for the absence of a backyard in this densely packed urban neighborhood. The clients' busy urban lifestyle also means they are often out of the house, making the most of what this cosmopolitan city has to offer.
Most importantly, the differential stacking produces gaps that allow for the diverse placement of windows, which is beneficial in many ways, says Unemori:
“The small gap between the neighboring houses brings a view to the sky, wind circulation, and of course, sunlight.”
A large kitchen and dining area occupies the main level, with a lounging area integrated here too: a sofa suspended from the platform above faces a television screen mounted on the far wall. There is plenty of storage in the long row of cabinets, some of which stretch over the entry hall, bridging the two spaces.
Thanks to the interplay of volumes, the ceiling extends far upward, creating a greater sense of spaciousness. Heating and cooling are also made more efficient by a ventilation duct that directs warm air from the upper area back down to the living zones during the winter. Conversely, during the summer, one can flip a switch to vent warm air outside so that the air conditioner operates more efficiently.
Below the main level is the bedroom, which is tucked away in the half-basement. Here it is darker and quieter—perfect for a bedroom. As it is equipped with two sliding door entries, the space here can also be potentially divided into two separate rooms, to accommodate the clients’ wishes that they might move out someday and have their house rented out to tenants instead.
In either of the two hallways leading out of the bedroom's doorways, there is a small washroom and toilet and a separate shower room, in addition to various storage spaces and a washing machine tucked away underneath the bent metal stairs.
With so little land to work with, the architects’ intriguing design strategy has allowed them to create a series of unique spaces and interior views that are ultimately connected together enough to create a unified whole that feels big, despite its small size. Ultimately, it’ll be creative strategies like this that will help to make the small house typology more appealing and livable for a wider audience.
They can survive temperatures close to absolute zero. They can withstand heat beyond the boiling point of water. They can shrug off the vacuum of space and doses of radiation that would be lethal to humans. Now, researchers have subjected tardigrades, microscopic creatures affectionately known as water bears, to impacts as fast as a flying bullet. And the animals survive them, too—but only up to a point. The test places new limits on their ability to survive impacts in space—and potentially seed life on other planets.
The research was inspired by a 2019 Israeli mission called Beresheet, which attempted to land on the Moon. The probe infamously included tardigrades on board that mission managers had not disclosed to the public, and the lander crashed with its passengers in tow, raising concerns about contamination. “I was very curious,” says Alejandra Traspas, a Ph.D. student at Queen Mary University of London who led the study. “I wanted to know if they were alive.”
Traspas and her supervisor, Mark Burchell, a planetary scientist at the University of Kent, wanted to find out whether tardigrades could survive such an impact—and they wanted to conduct their experiment ethically. So after feeding about 20 tardigrades moss and mineral water, they put them into hibernation, a so-called “tun” state in which their metabolism decreases to 0.1% of their normal activity, by freezing them for 48 hours.
They then placed two to four tardigrades at a time in a hollow nylon bullet and fired them at increasing speeds using a two-stage light-gas gun, a tool used in physics experiments that can achieve muzzle velocities far higher than any conventional gun. Shooting the bullets into a sand target several meters away, the researchers found the creatures could survive impacts up to about 900 meters per second (roughly 3,240 kilometers per hour) and momentary shock pressures up to 1.14 gigapascals (GPa), they report this month in Astrobiology. “Above [those speeds], they just mush,” Traspas says.
The results suggest the tardigrades on Beresheet were unlikely to survive. Although the lander is thought to have crashed at a few hundred meters per second, the shock pressure its metal frame generated hitting the surface would have been “well above” 1.14 GPa, Traspas says. “We can confirm they didn’t survive.”
The research also places new limits on a theory known as panspermia, which suggests some forms of life could move between worlds, as stowaways on meteorites kicked up after an asteroid strikes a planet or moon. Eventually, the meteorite could impact another planet—along with its living cargo.
Charles Cockell, an astrobiologist at the University of Edinburgh who was not involved in the study, says the research shows how unlikely panspermia is. “What this paper is showing is that complex multicellular animals cannot be easily transferred,” he says. “In other words, Earth is a biogeographical island with respect to animals. They’re trapped, like a flightless bird on an island.”
Traspas, however, says it shows panspermia “is hard,” but not impossible. Meteorite impacts on Earth typically arrive at speeds of more than 11 kilometers per second. On Mars, they collide at least at 8 kilometers per second. These speeds are well above the threshold for tardigrades to survive. However, some parts of a meteorite impacting Earth or Mars would experience lower shock pressures that a tardigrade could live through, Traspas says.
Objects strike the Moon at still lower speeds. When impacts on Earth send bits of rock and debris hurtling toward the Moon, about 40% of that material could travel at speeds low enough for any tardigrades to survive, Traspas and Burchell say, theoretically allowing them to jump from our planet to the Moon. A similar passage, they add, could take place from Mars to its moon, Phobos. And other life might have an even better chance of surviving; compared with water bears, some microbes can survive even faster impacts of up to 5000 meters per second, according to previous research.
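The speed comparisons above reduce to a simple threshold check. A sketch using the article's rough figures (the helper and the 800 m/s debris value are purely illustrative):

```python
# Compare impact speeds against the ~900 m/s survival threshold
# reported for tardigrades by Traspas and Burchell.

TARDIGRADE_LIMIT_MS = 900  # reported survival limit, m/s

def survivable(impact_speed_ms: float) -> bool:
    """True if the impact speed is at or below the reported limit."""
    return impact_speed_ms <= TARDIGRADE_LIMIT_MS

impacts_ms = {
    "typical meteorite hitting Earth": 11_000,  # more than 11 km/s
    "typical meteorite hitting Mars": 8_000,    # at least 8 km/s
    "slow Earth-to-Moon debris": 800,           # illustrative, below limit
}

for event, speed in impacts_ms.items():
    verdict = "survivable" if survivable(speed) else "fatal"
    print(f"{event}: {speed:,} m/s -> {verdict}")
```

This is of course only the speed criterion; as the article notes, the shock pressure a given impact generates (the 1.14 GPa limit) matters as much as the raw velocity.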
The new experiment also has implications for our ability to detect life on icy moons in the outer Solar System. Saturn’s moon Enceladus, for example, ejects plumes of water into space from a subsurface ocean that could support life, as might Jupiter’s moon Europa. If the findings of the new study apply to potential life trapped in the plumes, a spacecraft orbiting Enceladus—at relatively low speeds of hundreds of meters per second—might sample and detect existing life without killing it.
No such orbiting mission is currently planned for Enceladus or Europa—upcoming NASA and European flyby missions will swoosh by the latter at high speeds of several kilometers per second. But perhaps one day far in the future an orbiter might be in the cards, with an ability to detect life at gentler speeds. “If you collect it and it died on impact, how do you know whether it’s been dead for millions of years?” asks Anna Butterworth, a planetary scientist at the University of California, Berkeley, who has studied plume impacts on spacecraft. “If you collect microscopic life and it’s moving around, you can say it’s alive.”
A brain-implant system trained to decode the neural signals for handwriting from a paralyzed man enabled a computer to type up to 90 characters per minute with 94 percent accuracy, researchers reported yesterday (May 12) in Nature. The study’s authors say this brain-computer interface (BCI) is a considerable improvement over other experimental devices aimed at facilitating communication for people who cannot speak or move, but many steps remain before it might be used clinically.
“There are so many aspects of [the study] that are great,” says Emily Oby, who works on BCIs at the University of Pittsburgh and was not involved in the work. “It’s a really good demonstration of human BCI that is working towards clinical viability,” and also contributes to understanding why the handwriting-based system seems to work better than BCIs based on translating the neural signals for more straightforward physical motions such as pointing at letters on a display.
The study came out of a long-term clinical trial called BrainGate2 in which participants who are paralyzed have sensors implanted in the motor cortex of their brains and work with researchers who aim to use the sensors’ data to develop BCIs. “Because of the animal model heritage and the history of the [BCI] field, a lot of the early stuff is focused on this point-and-click typing method where you move a cursor on a screen, and you type on keys individually,” explains Frank Willett, a member of the Neural Prosthetics Translational Laboratory (NPTL) at Stanford University and a Howard Hughes Medical Institute research specialist. “We’re interested in kind of pushing the boundaries and looking at other ways to let people communicate.”
Willett and his colleagues worked with a BrainGate2 participant nicknamed T5 who has a spinal injury, is able to talk, and has a sensor in an area of the brain known as the hand knob that is associated with hand movement. In several sessions, they asked T5 to pretend he was holding a pen and writing hundreds of sentences they showed him on a screen. They then used the activity detected by T5’s sensor to train a neural network to identify the letters T5 was writing, and tested the program’s ability to generate text in real time based on brain signals generated as he imagined writing new sentences.
An algorithm interpreted patterns of electrical signals from T5’s brain as he imagined writing letters.
The researchers report that the trained network enabled T5 to “type” at a speed of up to 90 characters per minute with 94.1 percent accuracy in deciphering the letters he wrote. That’s a considerable improvement on a previous BCI the group developed, based on having participants control a computer mouse with their brain signals and click on letters, which achieved about 40 characters per minute. In fact, the authors write, to their knowledge it’s the fastest typing rate for any BCI so far.
Speed is critical for people who need BCIs to communicate, notes Oby, because “the faster and more efficiently that they can communicate the better, in terms of increasing their quality of life, and just making interactions more easy and smooth and less stressful.”
To see what accounts for this superior performance, the authors analyzed the neural patterns corresponding to letters and to the straight reaching movements used in the point-and-click BCI. They found that the patterns for the letters are more distinct from one another, making them easier for a neural network to decipher. They also devised their own 26-letter alphabet, replete with curvy lines, that their simulations indicate would enable an even more accurate BCI by eschewing letters that are written similarly to one another.
“[It] makes a lot of sense . . . that having more complex movement dynamics can really help improve the communication rate, the accuracy of the decoding,” says Edward Chang, a neurosurgeon at the University of California, San Francisco, who has worked informally with the NPTL group but was not involved in the current study. “They’re really exploiting a new dimension of features that help make the signals more discriminable.”
There are several improvements that would be needed to make the BCI ready for clinical use. Those include tweaks to the brain implant itself, such as making it smaller and capable of wireless signal transmission, says study coauthor Jaimie Henderson, a neurosurgeon in the NPTL who consults for the BCI company Neuralink and is on the medical advisory board for Enspire, a company exploring deep-brain stimulation for stroke recovery. In addition, in the study the researchers needed to regularly calibrate the BCI to account for minute shifts in the positions of the sensors that alter what neural activity they pick up; ideally, Henderson and Willett say, this process, as well as the initial training of the neural network, would be automated.
Henderson, Willett, and senior author Krishna Shenoy, another NPTL member and a Howard Hughes Medical Institute investigator who consults for or serves on the advisory boards of several BCI-related companies, have filed a patent application for the neural decoding method they used and are talking with companies about the possibility of licensing it, Shenoy says. Ultimately, Willett and Henderson say, they’re interested in exploring neural signals for speech as a way to enable even faster communication than with handwriting. The rate of speech is about 150–200 words per minute, Henderson notes, and decoding it is an interesting scientific endeavor because it’s uniquely human and because it’s not fully understood how speech is produced in the brain. “We feel like that’s a very rich area of exploration, and so one of our big goals over the next five to ten years is to really tackle the problem of understanding speech and decoding it into both text and spoken word.”
F.R. Willett et al., “High-performance brain-to-text communication via handwriting,” Nature, 593:249–54, 2021.
There’s a “sweet spot” for the amount of sleep you should get to reduce your risk of heart attack and stroke, new research shows.
Folks who get six to seven hours of sleep a night — no more, no less — have the lowest chance of dying from a heart attack or stroke, according to the new findings.
Sleeping less or more than that ideal window increases your risk of heart-related death by about 45%, researchers found.
This trend remained true even after they accounted for other known risk factors for heart disease or stroke, including age, high blood pressure, diabetes, smoking, BMI (body mass index) and high cholesterol levels.
“Even then, sleep came out to be an independent risk factor,” said lead researcher Dr. Kartik Gupta, a resident in internal medicine at Henry Ford Hospital in Detroit.
For the study, Gupta and his colleagues analyzed data from more than 14,000 participants in the federally funded U.S. National Health and Nutrition Examination Survey between 2005 and 2010. As part of the survey, these folks were asked how long they usually slept.
Researchers tracked participants for an average of 7.5 years to see if they died from heart attack, heart failure or stroke. They also assessed their heart health risk scores as well as their blood levels of C-reactive protein (CRP), which increases when there’s inflammation in your body. High CRP levels have been associated with heart disease.
The research team found a U-shaped relationship between heart risk and sleep duration, with risk at its lowest among people who got between six and seven hours of sleep on average.
A lack of sleep already has been linked to poor heart health, said Dr. Martha Gulati, editor-in-chief of CardioSmart.org, the American College of Cardiology’s educational site for patients.
“We have a lot of data related to less sleep,” said Gulati, a cardiologist. She noted that a number of key heart risk factors — blood pressure, glucose tolerance, diabetes and inflammation — are exacerbated by too little sleep.
There’s not as much evidence regarding those who slumber too long and their heart risk, however, Gulati and Gupta said.
Gupta and his colleagues found one possible explanation in their research. Based on patients’ levels of CRP, inflammation accounted for about 14% of heart-related deaths among short sleepers and 13% among long sleepers, versus just 11% of folks who got the optimal six to seven hours of sleep.
“Patients who sleep for six to seven hours have the least CRP, so this inflammation might be driving increased cardiovascular risk,” Gupta said.
It might be that people who sleep longer than seven hours are just getting lousy sleep, and so have to doze longer, Gulati said. Poor quality sleep could be driving the increased risk among late snoozers.
“You wonder if somebody is sleeping longer because they just didn’t get a good night’s sleep,” Gulati said. “I always say there’s good sleep and there’s bad sleep. You might be in bed for eight hours, but is it good quality sleep?”
Here are some tips for improving your sleep, courtesy of Harvard Medical School:
Avoid caffeine and nicotine within four to six hours of bedtime.
Keep your bedroom dark, quiet and cool to promote better sleep.
Establish a relaxing routine an hour or so before bed.
Don’t try to force yourself to sleep — if you aren’t asleep within about 20 minutes, get up and do something relaxing for a bit until you feel sleepy.
Eat dinner several hours before bedtime and avoid foods that can upset your stomach.
Exercise earlier in the day, at least three hours before bed.
“In the medical community we know it’s important to sleep, but we still don’t treat it like something we should be asking about routinely,” Gulati said. “I wish I could say doctors were good enough at asking about sleep. I think it should be like a vital sign.”
The findings will be presented virtually May 15 at the annual meeting of the American College of Cardiology. Findings presented at medical meetings are considered preliminary until published in a peer-reviewed journal.
The U.S. Centers for Disease Control and Prevention offers more sleep basics.
SOURCES: Kartik Gupta, MD, internist, Henry Ford Hospital, Detroit; Martha Gulati, MD, editor-in-chief, CardioSmart.org; online presentation, American College of Cardiology virtual annual meeting, May 15, 2021