Scientists discover key brain cells that control eating portion size

While researching the brain’s learning and memory system, scientists at Johns Hopkins say they stumbled upon a new type of nerve cell that seems to control feeding behaviors in mice. The finding, they report, adds significant detail to the way brains tell animals when to stop eating and, if confirmed in humans, could lead to new tools for fighting obesity. Details of the study were published today in the journal Science.

“When the type of brain cell we discovered fires and sends off signals, our laboratory mice stop eating soon after,” says Richard Huganir, Ph.D., director of the Department of Neuroscience at the Johns Hopkins University School of Medicine. “The signals seem to tell the mice they’ve had enough.”

Huganir says his team’s discovery grew out of studies of the proteins that strengthen and weaken the junctions, or synapses, between brain cells. These are an important target of research because synapse strength, particularly among cells in the hippocampus and cortex of the brain, is central to learning and memory.

In a search for details about synapse strength, Huganir and graduate student Olof Lagerlöf, M.D., focused on the enzyme OGT — a biological catalyst involved in many bodily functions, including insulin use and sugar chemistry. The enzyme’s job is to add a molecule called N-acetylglucosamine (GlcNAc), a derivative of glucose, to proteins, a phenomenon first discovered in 1984 by Gerald Hart, Ph.D., director of the Johns Hopkins University School of Medicine’s Department of Biological Chemistry and co-leader of the current study. By adding GlcNAc molecules, OGT alters the proteins’ behavior.

To learn about OGT’s role in the brain, Lagerlöf deleted the gene that codes for it from the primary nerve cells of the hippocampus and cortex in adult mice. Even before he looked directly at the impact of the deletion in the rodents’ brains, Lagerlöf reports, he noticed that the mice doubled in weight in just three weeks. It turned out that fat buildup, not muscle mass, was responsible.

When the team monitored the feeding patterns of the mice, they found that those missing OGT ate the same number of meals — on average, 18 a day — as their normal littermates but lingered over their food longer and ate more calories at each meal. When their food intake was restricted to the amount normal mice ate, they no longer gained extra weight, suggesting that the absence of OGT interfered with the animals’ ability to sense when they were full.

“These mice don’t understand that they’ve had enough food, so they keep eating,” says Lagerlöf.

Because the hippocampus and cortex are not known to directly regulate feeding behaviors in rodents or other mammals, the researchers looked for changes elsewhere in the brain, particularly in the hypothalamus, which is known to control body temperature, feeding, sleep and metabolism. There, they found OGT missing from a small subset of nerve cells within a cluster of neurons called the paraventricular nucleus.

Lagerlöf says these cells already were known to send and receive multiple signals related to appetite and food intake. When he looked for changes in the levels of those factors that might be traced to the absence of OGT, he found that most of them were not affected, and the activity of the appetite signals that many other research groups have focused on didn’t seem to be causing the weight gain, he adds.

Next, the team examined the chemical and biological activity of the OGT-negative cells. By measuring the background electrical activity in nonfiring brain cells, the researchers estimated the number of incoming synapses on the cells and found that the OGT-negative cells had only about one-third as many as normal cells.
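
The logic of that estimate can be captured in a few lines. Here is a minimal sketch, assuming, as is common in such recordings, that each synapse contributes spontaneous “miniature” currents at roughly the same average rate, so that a cell’s background event frequency scales with its synapse count; all the numbers are hypothetical, chosen only to mirror the threefold difference reported above:

```python
# Hypothetical illustration of inferring relative synapse counts from
# background ("miniature") event frequencies recorded in non-firing cells.
RATE_PER_SYNAPSE = 0.01      # assumed spontaneous events/s per synapse (hypothetical)

freq_control = 12.0          # measured background event frequency, normal cell (Hz, hypothetical)
freq_knockout = 4.0          # same measurement in an OGT-deleted cell (Hz, hypothetical)

synapses_control = freq_control / RATE_PER_SYNAPSE    # -> 1200 synapses
synapses_knockout = freq_knockout / RATE_PER_SYNAPSE  # -> 400 synapses

print(synapses_control / synapses_knockout)           # -> 3.0, a threefold difference
```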

“That result suggests that, in these cells, OGT helps maintain synapses,” says Huganir. “The number of synapses on these cells was so low that they probably aren’t receiving enough input to fire. In turn, that suggests that these cells are responsible for sending the message to stop eating.”

To verify this idea, the researchers genetically manipulated the cells in the paraventricular nucleus so that they would add blue light-sensitive proteins to their membranes, an approach known as optogenetics. When they stimulated the cells with a beam of blue light, the cells fired and sent signals to other parts of the brain, and the mice decreased the amount they ate in a day by about 25 percent.

Finally, because glucose is needed to produce GlcNAc, they thought that glucose levels, which increase after meals, might affect the activity of OGT. Indeed, they found that if they added glucose to nerve cells in petri dishes, the level of proteins with the GlcNAc addition increased in proportion to the amount of glucose in the dishes. And when they looked at cells in the paraventricular nucleus of mice that hadn’t eaten in a while, they saw low levels of GlcNAc-decorated proteins.

“There are still many things about this system that we don’t know,” says Lagerlöf, “but we think that glucose works with OGT in these cells to control ‘portion size’ for the mice. We believe we have found a new receiver of information that directly affects brain activity and feeding behavior, and if our findings bear out in other animals, including people, they may advance the search for drugs or other means of controlling appetites.”

http://www.eurekalert.org/pub_releases/2016-03/jhm-pcc031416.php

Plastic-eating bacteria discovered in recycling plant

By Eva Botkin-Kowacki

Plastic is everywhere around us. We drink out of plastic cups, buy disposable water bottles, unwrap new electronics from plastic packaging, take home plastic shopping bags, and even wear plastic in polyester fabrics.

Some 311 million tons of plastic is produced across the globe annually, and just 10 percent makes it back to a recycling plant. The rest ends up in landfills, or as litter on land or in the ocean, where it remains for decades and longer.

As for the plastic that has been recycled, it has given rise to an unintended side effect: A team of scientists searching through sediments at a plastic bottle recycling plant in Osaka, Japan, has found a strain of bacteria that has evolved to consume the most common type of plastic.

Ideonella sakaiensis 201-F6 can degrade poly(ethylene terephthalate), commonly called PET or PETE, in as little as six weeks, they report in a new paper published Thursday in the journal Science.
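
For context, the Science paper traces the degradation to two secreted enzymes acting in sequence; this summary and the enzyme names come from the paper itself, not from the story below:

$$\text{PET} \xrightarrow{\ \text{PETase}\ } \text{MHET} \xrightarrow{\ \text{MHETase}\ } \text{terephthalic acid} + \text{ethylene glycol}$$

The end products, terephthalic acid and ethylene glycol, are the two building blocks from which PET is made, so the bacterium effectively runs the polymerization in reverse and uses the products as food.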

Common uses of PET include polyester fibers, disposable bottles, and food containers. The last two are typically labelled with a No. 1 inside a recycling symbol.

But this new paper doesn’t mean you should ditch your reusable water bottles in favor of a tray of disposable ones, or that we’re going to inject these bacteria into landfills tomorrow. This study simply evaluated whether the bacterium in question could degrade PET, and it was conducted under laboratory conditions.

“We hope this bacterium could be applied to solve the severe problems by the wasted PET materials in nature,” Kohei Oda, one of the study authors, tells The Christian Science Monitor in an email. But “this is just the initiation for application.” More research has to be done in order to make this a practical solution to plastic pollution.

But could this sort of fix work in theory?

“[Plastics] have been engineered for cost and for durability, or longevity,” says Giora Proskurowski, an oceanographer at the University of Washington who studies plastic debris in the ocean but was not part of this study, in a phone interview with the Monitor. But he’s hopeful that this research could yield further studies and technologies to mitigate the problem.

The durability of plastic isn’t the only challenge this potential fix faces. Microbes are like teenagers, Christopher Reddy, a senior scientist at Woods Hole Oceanographic Institution who studies environmental pollution and was not part of this study, explains in an interview with the Monitor.

“You can tell them to clean the garage over the weekend, but they’re going to do it on their own timescale, they’re going to do it when they want, they’re going to pick the easiest thing to do, and they’re likely going to leave you more frustrated than you think,” he explains. Similarly, you can’t count on microbes to break down compounds on demand. “Don’t rely on microbes to clean the environment.”

Dr. Reddy says that has a lot to do with the environment outside the lab. In the experiment, he says, the researchers controlled the situation so the bacteria ate the plastic, but in nature, they would have many options for food.

Also, if I. sakaiensis 201-F6 were to be applied, it would likely only help with plastic pollution on land. PET particles are denser than water, so they tend to sink into the sediment. The trillions of plastic particles amassing in the oceans are mostly other types of plastics, types for which this bacterium probably lacks an appetite. And, Dr. Proskurowski says, any microbe doing the job at sea would have to withstand saltwater and sunlight, conditions a sediment-dwelling organism might not have evolved to tolerate.

Still, perhaps this bacterium could be harnessed to accelerate degradation of plastics that make it to a landfill, he says.

But this study does show that “the environment is evolving and you get the microbes evolving along with that as well,” Proskurowski says. “These are evolving systems.”

Neither Proskurowski nor Reddy was surprised that the researchers found an organism that can consume PET.

“I’m surprised it’s taken this long. I’ve been waiting for results like this,” Proskurowski says.

“Nature is incredibly wily, microbes are incredibly wily,” Reddy says. “Microbes are very good eaters.”

This is not the first time researchers have found an organism that will eat trashed plastic. Last year, engineers at Stanford University found a mealworm that can eat Styrofoam. And in that case, it was not the animal’s digestion that broke down the Styrofoam, but bacteria in its gut.

http://www.csmonitor.com/Science/2016/0310/Researchers-discover-plastic-eating-bacteria-in-recycling-plant

The greatest scientist you’ve never heard of: James Clerk Maxwell

Did you know that the person who created the first color photograph was from Scotland? So was the inventor of the color triangle that forms the basis of the RGB color model we use in computing today. So was the man who proved the link between electricity and magnetism, as was the one who figured out what Saturn’s rings were made of and pioneered the model of the modern research laboratory. Not only did each of these developments originate in Scotland, but they came from the curiosity, intelligence and hard work of one man: James Clerk Maxwell.

Maxwell’s discoveries and innovations form the foundations of our current understanding of science. Without them we would not have X-rays or radio. In fact, many in the science community consider Maxwell to be as significant a figure as Einstein or Isaac Newton. His discovery of the laws of electrodynamics has been described by leading physicist Richard Feynman as “the most significant event of the 19th century.”
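
Those laws of electrodynamics are today written as Maxwell’s equations. In their modern differential form (in SI units; the compact vector notation is due to Oliver Heaviside rather than Maxwell himself), they read:

$$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$

Combined, they predict electromagnetic waves travelling at speed $c = 1/\sqrt{\mu_0 \varepsilon_0}$, which is how Maxwell identified light as an electromagnetic wave, and why both radio and X-rays trace back to his work.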

So why has Maxwell’s name been forgotten in popular history?

It is hard to say whether the cause was his death at a young age from stomach cancer, or the fact that many of his discoveries were only later commercialized into technologies like radio by figures such as Heinrich Hertz and Guglielmo Marconi. It also seems that Maxwell’s humility led him to focus on his work rather than engage in self-promotion.

http://www.clerkmaxwellfoundation.org/

Signs of Modern Astronomy Seen in Ancient Babylon

By Kenneth Chang

For people living in the ancient city of Babylon, Marduk was their patron god, and thus it is not a surprise that Babylonian astronomers took an interest in tracking the comings and goings of the planet Jupiter, which they regarded as a celestial manifestation of Marduk.

What is perhaps more surprising is the sophistication with which they tracked the planet, judging from inscriptions on a small clay tablet dating to between 350 B.C. and 50 B.C. The tablet, a couple of inches wide and a couple of inches tall, reveals that the Babylonian astronomers employed a sort of precalculus in describing Jupiter’s motion across the night sky relative to the distant background stars. Until now, credit for this kind of mathematical technique had gone to Europeans who lived some 15 centuries later.

The trapezoids inscribed on the tablets are no mere geometry exercises, according to Mathieu Ossendrijver, the astroarchaeologist at Humboldt University in Berlin who reported the findings. “It’s a figure that describes a graph of velocity against time,” he said. “That is a highly modern concept.”

Mathematical calculations on four other tablets show that the Babylonians realized that the area under the curve on such a graph represented the distance traveled.

“I think it’s quite a remarkable discovery,” said Alexander Jones, a professor at the Institute for the Study of the Ancient World at New York University, who was not involved with the research. “It’s really quite clear from the text.”

Ancient Babylon, situated in what is now Iraq, south of Baghdad, was a thriving metropolis, a center of trade and science. Early Babylonian mathematicians who lived between 1800 B.C. and 1600 B.C. had figured out, for example, how to calculate the area of a trapezoid, and even how to divide a trapezoid into two smaller trapezoids of equal area.

For the most part, Babylonians used their mathematical skills for mundane calculations, like figuring out the size of a plot of land. But on some tablets from the later Babylonian period, there appear to be some trapezoid calculations related to astronomical observations.

In the 1950s, an Austrian-American mathematician and science historian, Otto E. Neugebauer, described two of them. Dr. Ossendrijver, in his recent research, turned up two more.

But it was not clear what the Babylonian astronomers were calculating.

A year ago, a visitor showed Dr. Ossendrijver a stack of photographs of Babylonian tablets that are now held by the British Museum in London. He saw a tablet he had not seen before. This tablet, with impressions of cuneiform script pressed into clay, did not mention trapezoids, but it recorded the motion of Jupiter, and the numbers matched those on the tablets with the trapezoid calculations.

“I was certain now it was Jupiter,” Dr. Ossendrijver said.

When Jupiter first appears in the night sky, it moves at a certain velocity relative to the background stars. Because Jupiter and Earth both constantly move in their orbits, to observers on Earth, Jupiter appears to slow down, and 120 days after it becomes visible, it comes to a standstill and reverses course.

In September, Dr. Ossendrijver went to the British Museum, where the tablets were taken in the late 19th century after being excavated. A close-up look of the new tablet confirmed it: The Babylonians were calculating the distance Jupiter traveled in the sky from its appearance to its position 60 days later. Using the technique of splitting a trapezoid into two smaller ones of equal area, they then figured out how long it took Jupiter to travel half that distance.
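
In modern notation, the procedure amounts to the following (a reconstruction for clarity, not a transcription of the tablets). If Jupiter’s apparent velocity falls linearly from $v_0$ to $v_1$ over $T = 60$ days, the distance covered is the area of the trapezoid under the velocity-time graph:

$$\text{distance} = \int_0^T v(t)\,dt = \frac{v_0 + v_1}{2}\,T$$

The time $\tau$ at which half that distance has been covered is found exactly where the equal-area split puts it, at the point where the velocity equals the root mean square of the trapezoid’s two parallel sides:

$$v(\tau) = \sqrt{\frac{v_0^2 + v_1^2}{2}}$$

This is the length of the vertical line that divides a trapezoid into two pieces of equal area, which is precisely the construction the tablets describe.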

Dr. Ossendrijver said he did not know the astronomical or astrological motivation for these calculations.

It was an abstract concept not known elsewhere at the time. “Ancient Greek astronomers and mathematicians didn’t make plots of something against time,” Dr. Ossendrijver said. Until now, the earliest known examples of such calculations came from scholars in England and France in the 14th century. Those mathematicians of the Middle Ages perhaps had seen some as yet unknown texts dating to Babylonian times, or they developed the same techniques independently.

“It anticipates integral calculus,” Dr. Ossendrijver said. “This is utterly familiar to any modern physicist or mathematician.”

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Research uncovers genetic cause underlying schizophrenia

Overactivity of complement component 4 (C4) genes linked to the development of schizophrenia may explain the excessive pruning and reduced number of synapses seen in the brains of patients, according to a study published in Nature.

The study, co-funded by the Office of Genomics Research Coordination at the National Institute of Mental Health and the Stanley Center for Psychiatric Research at the Broad Institute in Cambridge, Massachusetts, analyzed various structurally diverse versions of the C4 gene.

Led by Steve McCarroll, PhD, of the Broad Institute of Harvard and MIT, researchers analyzed the genomes of 65,000 study participants and 700 postmortem brains, detecting a link between specific gene versions and the biological process that causes some cases of schizophrenia.

The team—including Beth Stevens, PhD; Michael Carroll, PhD; and Aswin Sekar, BBS— determined that C4 genes generate varying levels of C4A and C4B proteins; the more C4A found in a person, the higher his or her risk of developing schizophrenia. The researchers found that during critical periods of brain maturation, C4 identifies synapses for pruning. Overexpression of C4 results in higher amounts of C4A, which could cause excessive pruning during the late teens and early adulthood, “conspicuously corresponding to the age-of-onset of schizophrenia symptoms,” the researchers noted.

“It has been virtually impossible to model [schizophrenic] disorder in cells or animals,” said Dr McCarroll. “The human genome is providing a powerful new way into this disease. Understanding these genetic effects on risk is a way of prying open that black box, peering inside, and starting to see actual biological mechanisms.”

Research suggests that future schizophrenia treatments may be developed to target and suppress excessive levels of pruning, halting a process that has the potential to develop into psychotic illness.

Reference

Sekar A, Bialas AR, de Rivera H, et al. Schizophrenia risk from complex variation of complement component 4. Nature. 2016. doi:10.1038/nature16549.

Scientists bring back animal that resembles the quagga, which went extinct over a century ago

An animal that went extinct over 100 years ago is coming back, thanks to a group of scientists. The creature is called the quagga and while that might not sound familiar, it is a close relative of the zebra.

Like zebras, quaggas had stripes, but only on the front half of their bodies; the rear half was plain brown. A group of scientists outside Cape Town, South Africa, called The Quagga Project, has bred an animal that looks extremely similar by using DNA analysis and selective breeding.

In the past, quaggas roamed South Africa, but they went extinct around the 1880s after European settlers killed them at an alarming rate. However, CNN reports that after testing remaining quagga skins, which revealed the animal was a subspecies of the plains zebra, the scientists hypothesized that the genes which characterized the quagga would still be present in zebras and could be brought out through selective breeding.

“The progress of the project has in fact followed that prediction. And in fact we have over the course of 4, 5 generations seen a progressive reduction in striping, and lately an increase in the brown background color showing that our original idea was in fact correct,” Eric Harley, the project’s leader, told CNN.

However, not everybody thinks the project was a complete success. Several critics believe that the project was all a stunt and that all the scientists did was create a different-looking zebra.

“There are a lot of detractors who are saying you can’t possibly put back the same as what was here,” fellow project leader Mike Gregor told CNN, adding, “there might have been other genetic characteristics [and] adaptations that we haven’t taken into account.”

The researchers say there are only six of the creatures that they now call “Rau quaggas” (after the project’s originator, Reinhold Rau); once they have 50, they plan for the herd to live together on one reserve.

Harley tells CNN, “if we can retrieve the animals or retrieve at least the appearance of the quagga, then we can say we’ve righted a wrong.”

http://wtnh.com/2016/01/27/scientists-bring-back-animal-that-went-extinct-over-a-century-ago/

Antarctic Research Center Tries to Mimic Mars Conditions on Earth

Mars exists on Earth…well, at least the closest thing to Mars.

According to CNN, the Concordia research station in Antarctica sits on a plateau 3,200 meters above sea level, and for about four months every year it is engulfed in complete darkness.

Those who live at the research station live in complete isolation. In fact, CNN reports that the nearest human beings are about 372 miles away, making the place more remote than the International Space Station.

And yet, 16 dedicated scientists call the research center home for an entire year.

This is because long-term confinement, abnormal day-night cycles, extremely dry air, low oxygen levels, and limited supplies make Mars-like training possible at the research center.

And it will help people get ready for the human race’s eventual voyage to Mars.

“By watching how the human body and mind adapts in Antarctica, we can plan and predict what would happen in space,” Alex Kumar, a doctor with the National Institute for Health Research, told CNN.

http://www.ryot.org/antarctic-research-center-tries-to-mimic-mars-conditions-on-earth/947267

Scientists claim to have determined what Jesus looked like

British scientist Richard Neave used forensic facial reconstruction to reveal what he believes to be a truer depiction of the face of Jesus Christ. While the image should not be taken as a definitive likeness of Jesus, it is a historically informed representation of how a man born in Jesus’s time and place may have looked.

Of the hundreds of thousands of words written in the Bible, not one gives an accurate description of how Christ looked. And without any firsthand descriptions of Jesus’s appearance to work with either, we’ve long been forced to rely on various artists’ personal interpretations — hence the lean, long-haired Christ so many of us have become familiar with. However, experts have emphasized this depiction is an entirely inaccurate representation of what a man from Jesus’s era may have looked like.

Based on historical records, Jesus Christ was from Galilee, a northern region in what is now modern-day Israel. In order to get a better picture of Jesus’s face, Neave, a medical artist who retired from the University of Manchester in England, analyzed three skulls of Galilean Semites from Christ’s era, Popular Mechanics reported. Although the actual image was created three years ago, the picture has recently been recirculating on the Internet, no doubt just in time for Christmas.

Neave used the skulls to create computerized maps of the facial structures each man once had. These images were used to create a 3D cast of a typical Galilean skull from the era. Then, guided by data on the thickness of human facial tissue, Neave built up muscles and skin in clay over the cast of the skull. Once Neave applied simulated skin, a nose, lips, and eyelids to the model, the face began to take shape.

The image’s hairstyle and coloring were based on drawings from various archeological sites dating back to Christ’s time period. In addition, the image was given a beard, since wearing one was a common Jewish tradition at the time. Perhaps the most contested feature of the new image, however, is the hair.

Most images of Christ portray a man with long, straight hair. However, not only was this hairstyle uncommon among men at the time, it was even described in the Bible by the apostle Paul as being disrespectful, and it is highly unlikely that Paul, being such a devoted follower, would have said this about Christ. Most scholars, according to Popular Mechanics, therefore believe Christ had short, tight curls instead.

Gone are the lean features of the classic Christ. Instead, the image is more muscular and weather-beaten, traits that Neave believes are more fitting of a Jewish carpenter from the first century. What’s more, Neave suggested Christ was about 5-foot-1 and 110 pounds, the size of the average man of the time period.

While we will likely never know exactly how Christ looked, other scholars agree Neave’s depiction is more historically accurate than those found in Christian children’s books.

http://www.medicaldaily.com/was-jesus-white-forensic-facial-reconstruction-allegedly-shows-what-jesus-really-365668?rel=most_read4

Science myths that will not die


False beliefs and wishful thinking about the human experience are common. They are hurting people — and holding back science.

Megan Scudellari

In 1997, physicians in southwest Korea began to offer ultrasound screening for early detection of thyroid cancer. News of the programme spread, and soon physicians around the region began to offer the service. Eventually it went nationwide, piggybacking on a government initiative to screen for other cancers. Hundreds of thousands took the test for just US$30–50.

Across the country, detection of thyroid cancer soared, from 5 cases per 100,000 people in 1999 to 70 per 100,000 in 2011. Two-thirds of those diagnosed had their thyroid glands removed and were placed on lifelong drug regimens, both of which carry risks.

Such a costly and extensive public-health programme might be expected to save lives. But this one did not. Thyroid cancer is now the most common type of cancer diagnosed in South Korea, but the number of people who die from it has remained exactly the same — about 1 per 100,000. Even when some physicians in Korea realized this, and suggested that thyroid screening be stopped in 2014, the Korean Thyroid Association, a professional society of endocrinologists and thyroid surgeons, argued that screening and treatment were basic human rights.
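
The arithmetic behind that conclusion is stark. A back-of-envelope sketch using only the figures quoted above (rates per 100,000 people per year):

```python
# Korean thyroid-screening figures quoted in the text (per 100,000 per year).
detected_1999 = 5    # diagnoses before screening became widespread
detected_2011 = 70   # diagnoses at the height of screening
deaths = 1           # mortality, unchanged across the whole period

extra = detected_2011 - detected_1999   # 65 additional diagnoses per 100,000

# If deaths did not move, the extra tumours found by screening were
# overwhelmingly ones that would never have proved fatal: the signature
# of overdiagnosis.
print(f"{extra / detected_2011:.0%} of 2011 diagnoses exceed the pre-screening rate")  # 93%
```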

In Korea, as elsewhere, the idea that the early detection of any cancer saves lives had become an unshakeable belief.

This blind faith in cancer screening is an example of how ideas about human biology and behaviour can persist among people — including scientists — even though the scientific evidence shows the concepts to be false. “Scientists think they’re too objective to believe in something as folklore-ish as a myth,” says Nicholas Spitzer, director of the Kavli Institute for Brain and Mind at the University of California, San Diego. Yet they do.

These myths often blossom from a seed of a fact — early detection does save lives for some cancers — and thrive on human desires or anxieties, such as a fear of death. But they can do harm by, for instance, driving people to pursue unnecessary treatment or spend money on unproven products. They can also derail or forestall promising research by distracting scientists or monopolizing funding. And dispelling them is tricky.

Scientists should work to discredit myths, but they also have a responsibility to try to prevent new ones from arising, says Paul Howard-Jones, who studies neuroscience and education at the University of Bristol, UK. “We need to look deeper to understand how they come about in the first place and why they’re so prevalent and persistent.”

Some dangerous myths get plenty of air time: vaccines cause autism, HIV doesn’t cause AIDS. But many others swirl about, too, harming people, sucking up money, muddying the scientific enterprise — or simply getting on scientists’ nerves. Here, Nature looks at the origins and repercussions of five myths that refuse to die.

Myth 1: Screening saves lives for all types of cancer

Regular screening might be beneficial for some groups at risk of certain cancers, such as lung, cervical and colon, but this isn’t the case for all tests. Still, some patients and clinicians defend the ineffective ones fiercely.

The belief that early detection saves lives originated in the early twentieth century, when doctors realized that they got the best outcomes when tumours were identified and treated just after the onset of symptoms. The next logical leap was to assume that the earlier a tumour was found, the better the chance of survival. “We’ve all been taught, since we were at our mother’s knee, the way to deal with cancer is to find it early and cut it out,” says Otis Brawley, chief medical officer for the American Cancer Society.

But evidence from large randomized trials for cancers such as thyroid, prostate and breast has shown that early screening is not the lifesaver it is often advertised as. For example, a Cochrane review of five randomized controlled clinical trials totalling 341,342 participants found that screening did not significantly decrease deaths due to prostate cancer [1].

“People seem to imagine the mere fact that you found a cancer so-called early must be a benefit. But that isn’t so at all,” says Anthony Miller at the University of Toronto in Canada. Miller headed the Canadian National Breast Screening Study, a 25-year study of 89,835 women aged 40–59 [2] that found that annual mammograms did not reduce mortality from breast cancer. That’s because some tumours will lead to death irrespective of when they are detected and treated. Meanwhile, aggressive early screening has a slew of negative health effects. Many cancers grow slowly and will do no harm if left alone, so people end up having unnecessary thyroidectomies, mastectomies and prostatectomies. So on a population level, the benefits (lives saved) do not outweigh the risks (lives lost or interrupted by unnecessary treatment).

Still, individuals who have had a cancer detected and then removed are likely to feel that their lives were saved, and these personal experiences help to keep the misconception alive. And oncologists routinely debate which ages and risk groups would benefit from regular screening.

Focusing so much attention on the current screening tests comes at a cost for cancer research, says Brawley. “In breast cancer, we’ve spent so much time arguing about age 40 versus age 50 and not about the fact that we need a better test,” such as one that could detect fast-growing rather than slow-growing tumours. And existing diagnostics should be rigorously tested to prove that they actually save lives, says epidemiologist John Ioannidis of the Stanford Prevention Research Center in California, who this year reported that very few screening tests for 19 major diseases actually reduced mortality [3].

Changing behaviours will be tough. Gilbert Welch at the Dartmouth Institute for Health Policy and Clinical Practice in Lebanon, New Hampshire, says that individuals would rather be told to get a quick test every few years than be told to eat well and exercise to prevent cancer. “Screening has become an easy way for both doctor and patient to think they are doing something good for their health, but their risk of cancer hasn’t changed at all.”

Myth 2: Antioxidants are good and free radicals are bad

In December 1945, chemist Denham Harman’s wife suggested that he read an article in Ladies’ Home Journal entitled ‘Tomorrow You May Be Younger’. It sparked his interest in ageing, and years later, as a research associate at the University of California, Berkeley, Harman had a thought “out of the blue”, as he later recalled. Ageing, he proposed, is caused by free radicals, reactive molecules that build up in the body as by-products of metabolism and lead to cellular damage.

Scientists rallied around the free-radical theory of ageing, including the corollary that antioxidants, molecules that neutralize free radicals, are good for human health. By the 1990s, many people were taking antioxidant supplements, such as vitamin C and β-carotene. It is “one of the few scientific theories to have reached the public: gravity, relativity and that free radicals cause ageing, so one needs to have antioxidants”, says Siegfried Hekimi, a biologist at McGill University in Montreal, Canada.

Yet in the early 2000s, scientists trying to build on the theory encountered bewildering results: mice genetically engineered to overproduce free radicals lived just as long as normal mice [4], and those engineered to overproduce antioxidants didn’t live any longer than normal [5]. It was the first of an onslaught of negative data, which initially proved difficult to publish. The free-radical theory “was like some sort of creature we were trying to kill. We kept firing bullets into it, and it just wouldn’t die,” says David Gems at University College London, who started to publish his own negative results in 2003 [6]. Then, one study in humans [7] showed that antioxidant supplements prevent the health-promoting effects of exercise, and another associated them with higher mortality [8].

None of those results has slowed the global antioxidant market, which ranges from food and beverages to livestock feed additives. It is projected to grow from US$2.1 billion in 2013 to $3.1 billion in 2020. “It’s a massive racket,” says Gems. “The reason the notion of oxidation and ageing hangs around is because it is perpetuated by people making money out of it.”

Today, most researchers working on ageing agree that free radicals can cause cellular damage, but that this seems to be a normal part of the body’s reaction to stress. Still, the field has wasted time and resources as a result. And the idea still holds back publications on possible benefits of free radicals, says Michael Ristow, a metabolism researcher at the Swiss Federal Institute of Technology in Zurich, Switzerland. “There is a significant body of evidence sitting in drawers and hard drives that supports this concept, but people aren’t putting it out,” he says. “It’s still a major problem.”

Some researchers also question the broader assumption that molecular damage of any kind causes ageing. “There’s a question mark about whether really the whole thing should be chucked out,” says Gems. The trouble, he says, is that “people don’t know where to go now”.

Myth 3: Humans have exceptionally large brains

The human brain — with its remarkable cognition — is often considered to be the pinnacle of brain evolution. That dominance is often attributed to the brain’s exceptionally large size in comparison to the body, as well as its density of neurons and supporting cells, called glia.

None of that, however, is true. “We cherry-pick the numbers that put us on top,” says Lori Marino, a neuroscientist at Emory University in Atlanta, Georgia. Human brains are about seven times larger than one might expect relative to similarly sized animals. But mice and dolphins have about the same proportions, and some birds have a larger ratio.
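
The “seven times larger” figure corresponds to the encephalization quotient, which compares a brain’s mass to the value the mammalian trend line predicts for a given body mass. In one common formulation, due to Harry Jerison (our gloss for context; the article itself does not give the formula), with brain mass $E$ and body mass $M$ in grams:

$$\mathrm{EQ} = \frac{E}{0.12\,M^{2/3}}$$

Humans come out near 7, but as the passage notes, other measures, such as the raw brain-to-body ratio, do not single us out at all.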

“Human brains respect the rules of scaling. We have a scaled-up primate brain,” says Chet Sherwood, a biological anthropologist at George Washington University in Washington DC. Even cell counts have been inflated: articles, reviews and textbooks often state that the human brain has 100 billion neurons. More accurate measures suggest that the number is closer to 86 billion. That may sound like a rounding error, but 14 billion neurons is roughly the equivalent of two macaque brains.

Human brains are different from those of other primates in other ways: Homo sapiens evolved an expanded cerebral cortex — the part of the brain involved in functions such as thought and language — and unique changes in neural structure and function in other areas of the brain.

The myth that our brains are unique because of an exceptional number of neurons has done a disservice to neuroscience because other possible differences are rarely investigated, says Sherwood, pointing to the examples of energy metabolism, rates of brain-cell development and long-range connectivity of neurons. “These are all places where you can find human differences, and they seem to be relatively unconnected to total numbers of neurons,” he says.

The field is starting to explore these topics. Projects such as the US National Institutes of Health’s Human Connectome Project and the Swiss Federal Institute of Technology in Lausanne’s Blue Brain Project are now working to understand brain function through wiring patterns rather than size.

Myth 4: Individuals learn best when taught in their preferred learning style

People attribute other mythical qualities to their unexceptionally large brains. One such myth is that individuals learn best when they are taught in the way they prefer to learn. A verbal learner, for example, supposedly learns best through oral instructions, whereas a visual learner absorbs information most effectively through graphics and other diagrams.

There are two truths at the core of this myth: many people have a preference for how they receive information, and evidence suggests that teachers achieve the best educational outcomes when they present information in multiple sensory modes. Couple that with people’s desire to learn and be considered unique, and conditions are ripe for myth-making.

“Learning styles has got it all going for it: a seed of fact, emotional biases and wishful thinking,” says Howard-Jones. Yet just like sugar, pornography and television, “what you prefer is not always good for you or right for you,” says Paul Kirschner, an educational psychologist at the Open University of the Netherlands.

In 2008, four cognitive neuroscientists reviewed the scientific evidence for and against learning styles. Only a few studies had rigorously put the ideas to the test and most of those that did showed that teaching in a person’s preferred style had no beneficial effect on his or her learning. “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the authors of one study wrote [9].

That hasn’t stopped a lucrative industry from pumping out books and tests for some 71 proposed learning styles. Scientists, too, perpetuate the myth, citing learning styles in more than 360 papers during the past 5 years. “There are groups of researchers who still adhere to the idea, especially folks who developed questionnaires and surveys for categorizing people. They have a strong vested interest,” says Richard Mayer, an educational psychologist at the University of California, Santa Barbara.

In the past few decades, research into educational techniques has started to show that there are interventions that do improve learning, including getting students to summarize or explain concepts to themselves. And it seems almost all individuals, barring those with learning disabilities, learn best from a mixture of words and graphics, rather than either alone.

Yet the learning-styles myth makes it difficult to get these evidence-backed concepts into classrooms. When Howard-Jones speaks to teachers to dispel the learning-styles myth, for example, they often don’t like to hear what he has to say. “They have disillusioned faces. Teachers invested hope, time and effort in these ideas,” he says. “After that, they lose interest in the idea that science can support learning and teaching.”

Myth 5: The human population is growing exponentially (and we’re doomed)

Fears about overpopulation began with Reverend Thomas Malthus in 1798, who predicted that unchecked exponential population growth would lead to famine and poverty.

But the human population has not been and is not growing exponentially, and it is unlikely to do so, says Joel Cohen, a populations researcher at the Rockefeller University in New York City. The world’s population is now growing at just half the rate it was before 1965. Today there are an estimated 7.3 billion people, a number projected to reach 9.7 billion by 2050. Yet the belief that the rate of population growth will lead to some doomsday scenario has been continually perpetuated. The celebrated physicist Albert Bartlett, for example, gave a lecture on exponential human population growth and its dire consequences more than 1,742 times, starting in 1969.
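
The paragraph’s own numbers make the point. A quick sketch (the 2.1% figure for the late-1960s peak growth rate is our assumption for comparison, not from the article):

```python
# Implied average growth rate from the article's own projection.
pop_2015 = 7.3e9                 # estimated population today
pop_2050 = 9.7e9                 # projected population in 2050
years = 2050 - 2015

implied = (pop_2050 / pop_2015) ** (1 / years) - 1
print(f"Implied average growth: {implied:.2%}/yr")        # ~0.82%/yr

# For contrast, sustaining the late-1960s peak rate (~2.1%/yr, our
# assumption) would be genuine fixed-rate exponential growth:
pop_if_peak = pop_2015 * 1.021 ** years
print(f"2050 at a constant 2.1%/yr: {pop_if_peak / 1e9:.1f} billion")  # ~15.1 billion
```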

The world’s population also has enough to eat. According to the Food and Agriculture Organization of the United Nations, the rate of global food production outstrips the growth of the population. People grow enough calories in cereals alone to feed between 10 billion and 12 billion people. Yet hunger and malnutrition persist worldwide. This is because about 55% of the food grown is divided between feeding cattle, making fuel and other materials or going to waste, says Cohen. And what remains is not evenly distributed — the rich have plenty, the poor have little. Likewise, water is not scarce on a global scale, even though 1.2 billion people live in areas where it is.

“Overpopulation is really not overpopulation. It’s a question about poverty,” says Nicholas Eberstadt, a demographer at the American Enterprise Institute, a conservative think tank based in Washington DC. Yet instead of examining why poverty exists and how to sustainably support a growing population, he says, social scientists and biologists talk past each other, debating definitions and causes of overpopulation.

Cohen adds that “even people who know the facts use it as an excuse not to pay attention to the problems we have right now”, pointing to the example of economic systems that favour the wealthy.

Like others interviewed for this article, Cohen is less than optimistic about the chances of dispelling the idea of overpopulation and other ubiquitous myths (see ‘Myths that persist’), but he agrees that it is worthwhile to try to prevent future misconceptions. Many myths have emerged after one researcher extrapolated beyond the narrow conclusions of another’s work, as was the case for free radicals. That “interpretation creep”, as Spitzer calls it, can lead to misconceptions that are hard to excise. To prevent that, “we can make sure an extrapolation is justified, that we’re not going beyond the data”, suggests Spitzer. Beyond that, it comes down to communication, says Howard-Jones. Scientists need to be effective at communicating ideas and get away from simple, boiled-down messages.

Once a myth is here, it is often here to stay. Psychological studies suggest that the very act of attempting to dispel a myth leads to stronger attachment to it. In one experiment, exposure to pro-vaccination messages reduced parents’ intention to vaccinate their children in the United States. In another, correcting misleading claims from politicians increased false beliefs among those who already held them. “Myths are almost impossible to eradicate,” says Kirschner. “The more you disprove it, often the more hard core it becomes.”

http://www.nature.com/news/the-science-myths-that-will-not-die-1.19022

Nature 528, 322–325 (17 December 2015) doi:10.1038/528322a

1. Ilic, D., Neuberger, M. M., Djulbegovic, M. & Dahm, P. Cochrane Database Syst. Rev. 1, CD004720 (2013).
2. Miller, A. B. et al. Br. Med. J. 348, g366 (2014).
3. Saquib, N., Saquib, J. & Ioannidis, J. P. A. Int. J. Epidemiol. 44, 264–277 (2015).
4. Doonan, R. et al. Genes Dev. 22, 3236–3241 (2008).
5. Pérez, V. I. et al. Aging Cell 8, 73–75 (2009).
6. Keaney, M. & Gems, D. Free Radic. Biol. Med. 34, 277–282 (2003).
7. Ristow, M. et al. Proc. Natl Acad. Sci. USA 106, 8665–8670 (2009).
8. Bjelakovic, G., Nikolova, D. & Gluud, C. J. Am. Med. Assoc. 310, 1178–1179 (2013).
9. Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. Psychol. Sci. Public Interest 9, 105–119 (2008).

Women can navigate better when given testosterone, study finds

To investigate whether the differences in how men and women navigate are related to our sex or to cultural conditioning, researchers in Norway measured male and female brain activity while volunteers tried to find their way through a virtual reality maze.

Wearing 3D goggles and using a joystick to make their way through an artificial environment, the participants (18 males and 18 females) had their brain functions continuously recorded by an fMRI scanner as they carried out virtual navigation tasks.

In line with previous findings, the men performed better, using shortcuts, orienting themselves more using cardinal directions, and solving 50 percent more tasks than the women in the study.

“Men’s sense of direction was more effective,” said Carl Pintzka, a neuroscientist at the Norwegian University of Science and Technology (NTNU). “They quite simply got to their destination faster.”

One reason is the difference in how men and women use their brains when finding their way around. According to the researchers, men use the hippocampus more, whereas women place greater reliance on their brains’ frontal areas.

“That’s in sync with the fact that the hippocampus is necessary to make use of cardinal directions,” said Pintzka. “[M]en usually go in the general direction where [their destination is] located. Women usually orient themselves along a route to get there.”

Generally, the cardinal approach is more efficient, as it depends less on where you start.

But women’s brains make them better at finding objects locally, the researchers say. “In ancient times, men were hunters and women were gatherers. Therefore, our brains probably evolved differently,” said Pintzka. “In simple terms, women are faster at finding things in the house, and men are faster at finding the house.”

What was most remarkable about the study was what happened when the researchers gave women a drop of testosterone to see how it affected their ability to navigate the virtual maze. In a separate experiment, 21 women received a drop of testosterone under their tongues, while 21 got a placebo.

The researchers found that the women receiving testosterone showed improved knowledge of the layout of the maze, and relied on their hippocampus more to find their way around. Having said that, these hormone-derived benefits didn’t enable them to solve more maze tasks in the exercise.

It’s worth bearing in mind that the study used a fairly small sample size in both of the experiments carried out, so the findings need to be read in light of that. Nonetheless, the scientists believe their paper, which is published in Behavioural Brain Research, will help us to better understand the different ways male and female brains work, which could assist in the fight against diseases such as Alzheimer’s.

“Almost all brain-related diseases are different in men and women, either in the number of affected individuals or in severity,” said Pintzka. “Therefore, something is likely protecting or harming people of one sex. Since we know that twice as many women as men are diagnosed with Alzheimer’s disease, there might be something related to sex hormones that is harmful.”

http://www.sciencealert.com/women-can-navigate-better-when-given-testosterone-study-finds

Thanks to Dr. Enrique Leira for bringing this to the It’s Interesting community.