Archive for the ‘Science’ Category

By Jason Bittel

Do baboons fart? What about salamanders? Millipedes?

These questions sound like the sort Bart Simpson might have asked to derail science class. But real-life scientists are now taking to Twitter to provide answers. So far, they’ve created a hashtag — #DoesItFart — and a Google Spreadsheet that details the flatulence habits of more than 60 animals.

So, which animals cut the proverbial cheese? Tons, it turns out. Bats do, according to David Bennett, a PhD candidate at Queen Mary University of London. And the bigger they are, the harder they honk.

Rats, zebras and bearded dragons are also among Those Creatures That Fart. Birds, on the other hand, do not seem to have a biological need for passing gas, but they could let one rip, theoretically. Marine invertebrates such as oysters, mussels and crabs? Alas, they are whoopee-impaired.

The science of farts is not just about potty humor, by the way. Cattle gas, for example, is a significant source of atmospheric methane, a greenhouse gas that contributes to climate change. And fauna flatulence is also a hot topic among certain crowds — ones scientists want to engage.

“Does it fart?” is one of the most frequent questions zoologists receive from kids, said Dani Rabaiotti of the Zoological Society of London. In fact, the whole #DoesItFart adventure started when her teenage brother asked if snakes ever experience flatulence. Rabaiotti knew from her own work that the wild dogs of Africa definitely fart, as do the extremely gassy seals that reside on the Atlantic island of South Georgia. But she wasn’t sure about snakes, so she consulted snake expert David Steen.

The short answer is yes, says Steen, a wildlife ecologist at Auburn University. “Snakes sometimes discharge feces and musk as a defensive strategy, and this is often accompanied by what I would consider classic fart noises,” he said.

Steen said this is far from the first time he’s fielded this question, as it seems to be a favorite of the preteen crowd.

“I don’t know if animal flatulence questions can serve as a significant gateway to a greater appreciation of biodiversity, but it is always fun to see what captures people’s attention,” he said. “It is at least an opportunity to engage with a larger audience and bring new folks into the conversation.”

And if engagement is the goal — or at least a byproduct — does it really matter what the topic is? “Just because it’s flatulence doesn’t mean it’s inherently silly,” said Adriana Lowe, a researcher of biological anthropology at the University of Kent in the United Kingdom. “The diets and digestive systems of animals are an important and fascinating field of study, and gas is just a part of that.”

Lowe studies chimpanzees in Uganda’s Budongo forest, animals whose gas appears to vary with their diet. “Fruit is tootier than leaves, and figs seem to be the worst offenders,” she said. On occasion, these bodily functions have even aided in her research. “Several times I have been with one or two chimps and not been aware others are nearby until the farts start,” says Lowe. “Some of them have that very long, air-being-released-from-a-balloon quality, which is handy because it gives you a bit longer to pinpoint where it’s coming from.”

by Esther Inglis-Arkell

Hennig Brand discovered the element phosphorus in 1669. That sounds like quite an achievement, but Brand’s life wasn’t one that should, necessarily, be emulated. His steps to discovering this element were undignified, to say the least. His first step was marrying well; he was an officer in the army, but his wife had enough money for him to leave. She didn’t have enough money overall — at least not according to Brand — and so he used what money she had to try to make more.

Sadly, his chosen path to greater wealth was alchemy. He wanted to come up with the philosopher’s stone, which supposedly turned base metals into gold. At that stage, alchemy generally meant doing weird and dangerous things to any substance its practitioners could get their hands on. It wasn’t cheap, and Brand burned through all of his wife’s money. She was spared a life of poverty only because, having been born in the 1600s, she died young. Brand mourned for a time, and then went in search of another financially secure wife. Surprisingly, he got one.

As soon as he got his hands on her money, he started his experiments again. Alchemists tried anything, but they generally fixated on certain substances. Terribly rare and precious elements were popular, but so were human fluids. Humans were alchemical factories, turning ordinary substances like meat and grain into all kinds of things. The easiest thing to be got from the body was urine, and Brand, somehow, acquired a lot of it. About 1500 gallons of urine went into his experiment, but it paid off. After a complicated process of boiling and separating and recombining, he utterly failed to come up with gold. He did, however, come up with something he called “cold fire.” It glowed, perpetually, in the dark. It was what we now call phosphorus.

Although no direct use was found for cold fire in Brand’s lifetime, people were fascinated by it. Brand capitalized on that — probably to his wife’s great relief. He sold the secret to anyone who would pay enough, including Gottfried Wilhelm Leibniz, a co-inventor of calculus. The buyers sold the secret to others, but it remained valuable and well kept until 1737, when someone sold it to the Academy of Sciences in Paris and it was published.

How do you get phosphorus from urine? Boil the urine until it’s a “syrup.” Heat the syrup until a red oil comes out of it. Grab that oil! Let the rest cool. The substance will cool into two parts, a black upper part and a grainy lower part. Scrape off the lower part and throw it away. Mix the oil back into the black upper part. Heat that for about 16 hours. The oil will come back out, followed by phosphorus fumes. Channel the phosphorus into water to cool it down. Voila.

by Paul Ratner

Time crystals are hypothetical structures proposed by the Nobel Prize-winning theoretical physicist Frank Wilczek in 2012. What’s special about them is that they would move without using energy, breaking a fundamental law of physics known as time-translation symmetry. Such crystals would move while remaining in their ground states, that is, at their lowest possible energy.

They’ve been deemed “impossible” by most physicists, and yet, at the end of August, experimental physicists from the University of California, Santa Barbara and Microsoft’s research lab Station Q published a notable paper on how time crystals may be feasible, along with a plan for creating them. What’s also remarkable is that, if time crystals were actually created, they would redefine the nature of time itself, potentially reconciling the rather weird field of quantum mechanics with the theory of relativity.

Now comes news that scientists from the University of Maryland tried an experiment suggested by Frank Wilczek and actually made a time crystal that works. They created a ring-shaped quantum system from a group of ytterbium ions cooled to their ground state. In theory, this system should not be moving at all. But if it were to rotate periodically, that would demonstrate the existence of symmetry-breaking time crystals.

The researchers used a laser to periodically flip the spins of the ions, setting them oscillating. As reported by MIT Technology Review, they discovered that over time the spin oscillations settled into a rhythm with twice the period of the laser drive, half its rate. Since no net energy was being added to the system, the explanation was that they had created a time crystal.

As their paper undergoes peer review, the physicists are looking for others to repeat their experiment. If their discovery is confirmed, the repercussions of this groundbreaking development are only beginning to be understood. One potential application suggested by the scientists is in quantum computing, where time crystals might be used for quantum memory.

You can read the new paper “Observation of a Discrete Time Crystal” here:

by Tomas Chamorro-Premuzic

Although the scientific study of leadership is well established, its key discoveries are unfamiliar to most people, including an alarmingly large proportion of those in charge of evaluating and selecting leaders.

This science-practitioner gap explains our disappointing state of affairs. Leaders should drive employee engagement, yet only 30% of employees are engaged, costing the U.S. economy $550 billion a year in productivity loss. Moreover, a large global survey of employee attitudes toward management suggests that a whopping 82% of people don’t trust their boss. You only need to google “my boss is…” or “my manager is…” and see what the autocomplete text is to get a sense of what most people think of their leaders.

Unsurprisingly, over 50% of employees quit their job because of their managers. As the old saying goes, “people join companies, but quit their bosses.” And the rate of derailment, unethical incidents, and counterproductive work behaviors among leaders is so high that it is hard to be shocked by a leader’s dark side. Research indicates that 30%–60% of leaders act destructively, with an estimated cost of $1–$2.7 million for each failed senior manager.

Part of the problem is that many widely held beliefs about leadership are incongruent with the scientific evidence. As Mark Twain allegedly noted, “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” For example, it is quite common for people to believe that leadership is largely dependent on the situation, that it’s hard to predict whether someone will be a good (or bad) leader, and that any person can be a leader. In reality, some people have a much higher probability of becoming leaders, regardless of the context, and this probability can be precisely quantified with robust psychological tools.

What do we really know about the measurement of leadership potential? Here are some critical findings:

Who becomes a leader? Although leaders come in many shapes, a few personality characteristics consistently predict whether someone is likely to emerge as a leader. As the most widely cited meta-analysis in this area shows, people who are more adjusted, sociable, ambitious, and curious are much more likely to become leaders. (53% of the variability in leadership emergence is explained by these personality factors.) Unsurprisingly, higher levels of cognitive ability (IQ) also increase an individual’s likelihood of emerging as a leader, though by less than 5%. Of course, emergence doesn’t imply effectiveness, but one has to emerge in order to be effective.

What are the key qualities of effective leaders? The ultimate measure of leader effectiveness is the performance of the leader’s team or organization, particularly vis-à-vis competitors. Leadership is a resource for the group, and effective leaders enable a group to outperform other groups. While the same personality and ability traits described above help leaders become more effective — they are not just advantageous for emergence — the best leaders also show higher levels of integrity, which enables them to create a fair and just culture in their teams and organizations. In addition, effective leaders are generally more emotionally intelligent, which enables them to stay calm under pressure and have better people skills. Conversely, narcissistic leaders are more prone to behaving in unethical ways, which is likely to harm their teams.

How will the person lead? Not everyone leads in the same way. Leadership style is largely dependent on personality. Ambitious, thick-skinned leaders tend to be more entrepreneurial, so they are focused on growth and innovation. Curious, sociable, and sensitive leaders tend to be more charismatic, though charisma often reflects dark side traits, such as narcissism and psychopathy. Studies also highlight gender differences in leadership styles, with men being more transactional and women more transformational. However, gender roles are best understood as a psychological and normally distributed variable, as people differ in masculinity and femininity regardless of their biological sex.

Are leaders born or made? Any observable pattern of human behaviors is the byproduct of genetic and environmental influences, so the answer to this question is “both.” Estimates suggest that leadership is 30%-60% heritable, largely because the character traits that shape leadership — personality and intelligence — are heritable. While this suggests strong biological influences on leadership, it does not imply that nurture is trivial. Even more-heritable traits, such as weight (80%) and height (90%), are affected by environmental factors. Although there is no clear recipe for manipulating the environment in order to boost leadership potential, well-crafted coaching interventions boost critical leadership competencies by about 20%–30%.

What is the role of culture? Culture is key because it drives employee engagement and performance. However, culture isn’t the cause of leadership so much as the result of it. Thus leaders create the explicit and implicit rules of interaction for organizational members, and these rules affect morale and productivity levels. When people’s values are closely aligned with the values of the organization (and leadership), they will experience higher levels of fit and purpose.

How early can we predict potential? Any prediction is a measure of potential, or the probability of something happening. Because leadership is partly dependent on genetic factors and early childhood experiences, predicting it from an early age is certainly possible. Whether doing so is ethical or legal is a different question. However, most of the commonly used indicators of leadership potential — educational achievement, emotional intelligence, ambition, and IQ — can be predicted from a very early age, so it would be naïve to treat them as especially malleable. Perhaps in the future, leadership potential will be assessed at a very early age by inspecting people’s saliva.

Does gender matter? Less than we think. The fact that so many leaders are male has much more to do with social factors (people’s expectations, cultural norms, and opportunities) than actual gender differences in leadership potential, which are virtually nonexistent. In fact, some studies have shown that women are slightly more effective as leaders on the job, but this may be because the standards for appointing women to leadership positions are higher than those for appointing men, which creates a surplus of incompetent men in leadership positions. The solution is not to get women to act more like men, but to select leaders based on their actual competence.

Why do leaders derail? We cannot ignore the wide range of undesirable and toxic outcomes associated with leadership. It is not the absence of bright side qualities, but rather their coexistence with dark side tendencies, that makes leaders derail. Indeed, as Sepp Blatter, Dominique Strauss-Kahn, and Bernie Madoff demonstrate, technical brilliance often coexists with self-destructive and other destructive traits. This is just one reason why it is so important for leadership development and executive coaching interventions to highlight leaders’ weaknesses, and help them keep their toxic tendencies in check.

Although these findings have been replicated in multiple studies, a skeptic could ask, “Now that we’re (allegedly) living in an era of unprecedented technological change, could some of these findings be outdated?”

Not really.

Leadership evolved over millions of years, enabling us to function as group-living animals. It is therefore unlikely that the core foundations of leadership will change. That said, the specific skills and qualities that enable leaders and their groups to adapt to the world are certainly somewhat context dependent. For example, just as physical strength mattered more, and intellectual ability less, in the past, it is conceivable that human differentiators such as curiosity, empathy, and creativity will become more important in a world of ever-growing technological dependence and ubiquitous artificial intelligence.

In short, the science of leadership is well established. There is no real need to advance it in order to improve real-world practices. We should focus instead on applying what we already know, and ignoring what we think we know that isn’t true.

by Simon Sharwood

The annual Ig Nobel Prizes were handed out on Thursday night, as always “honoring achievements that make people laugh, then think”.

Among this year’s winners were “Charles Foster, for living in the wild as, at different times, a badger, an otter, a deer, a fox, and a bird.” Foster turned that research into a book, Being a Beast, in which he “set out to know the ultimate other: the non-humans, the beasts.” That effort saw him live “… alongside badgers for weeks, sleeping in a sett in a Welsh hillside and eating earthworms, learning to sense the landscape through his nose rather than his eyes.”

Foster’s Oxford University bio says he’s “a Fellow of Green Templeton College, a Senior Research Associate at the Uehiro Centre for Practical Ethics, a Research Associate at the Ethox and HeLEX Centres (at the University of Oxford), and a practising barrister.” Foster shared the Biology Prize with Thomas Thwaites, who created “prosthetic extensions of his limbs that allowed him to move in the manner of, and spend time roaming hills in the company of, goats.”

Volkswagen won the Chemistry prize “for solving the problem of excessive automobile pollution emissions by automatically, electromechanically producing fewer emissions whenever the cars are being tested.”

The Psychology Prize went to the authors of a paper titled “From Junior to Senior Pinocchio: A Cross-Sectional Lifespan Investigation of Deception” that the Ig Nobel committee summarised as “asking a thousand liars how often they lie, and for deciding whether to believe those answers.”

Japanese researchers won the Perception Prize for research titled “Perceived size and Perceived Distance of Targets Viewed From Between the Legs: Evidence for Proprioceptive Theory”, while a paper titled “On the Reception and Detection of Pseudo-Profound Bullshit” took out the Peace Prize.

The Ig Nobels are misunderstood as deriding rubbish science, but are actually about celebrating how even seemingly obscure science gets us thinking. As the awards’ backer, Improbable Research, points out:

Good achievements can also be odd, funny, and even absurd; so can bad achievements. A lot of good science gets attacked because of its absurdity. A lot of bad science gets revered despite its absurdity.

The full list of winners is here:

Winners reportedly took home Ten Trillion Dollars, sadly Zimbabwe dollars, or about US$0.40.

Sixty trays can contain the entire human genome as 23,040 different fragments of cloned DNA. Credit James King-Holmes/Science Source


Scientists are now contemplating the fabrication of a human genome, meaning they would use chemicals to manufacture all the DNA contained in human chromosomes.

The prospect is spurring both intrigue and concern in the life sciences community because it might be possible, such as through cloning, to use a synthetic genome to create human beings without biological parents.

The project is still in the idea phase, and it also involves efforts to improve DNA synthesis in general.

Organizers said the project could have a big scientific payoff and would be a follow-up to the original Human Genome Project, which was aimed at reading the sequence of the three billion chemical letters in the DNA blueprint of human life. The new project, by contrast, would involve not reading, but rather writing the human genome — synthesizing all three billion units from chemicals.

But such an attempt would raise numerous ethical issues. Could scientists create humans with certain kinds of traits, perhaps people born and bred to be soldiers? Or might it be possible to make copies of specific people?

“Would it be O.K., for example, to sequence and then synthesize Einstein’s genome?” Drew Endy, a bioengineer at Stanford, and Laurie Zoloth, a bioethicist at Northwestern University, wrote in an essay criticizing the proposed project. “If so how many Einstein genomes should be made and installed in cells, and who would get to make them?”

The project was initially called HGP2: The Human Genome Synthesis Project, with HGP referring to the Human Genome Project. An invitation to the meeting at Harvard said that the primary goal “would be to synthesize a complete human genome in a cell line within a period of 10 years.”

But by the time the meeting was held, the name had been changed to “HGP-Write: Testing Large Synthetic Genomes in Cells.”

The project does not yet have funding, said George Church, the Harvard geneticist who is one of its organizers, though various companies and foundations would be invited to contribute, and some have indicated interest. The federal government will also be asked. A spokeswoman for the National Institutes of Health declined to comment, saying the project was in too early a stage.

Besides Dr. Church, the organizers include Jef Boeke, director of the institute for systems genetics at NYU Langone Medical Center, and Andrew Hessel, a self-described futurist who works at the Bay Area software company Autodesk and who first proposed such a project in 2012.

Scientists and companies can now change the DNA in cells, for example, by adding foreign genes or changing the letters in the existing genes. This technique is routinely used to make drugs, such as insulin for diabetes, inside genetically modified cells, as well as to make genetically modified crops. And scientists are now debating the ethics of new technology that might allow genetic changes to be made in embryos.

But synthesizing a gene, or an entire genome, would provide the opportunity to make even more extensive changes in DNA.

For instance, companies are now using organisms like yeast to make complex chemicals, like flavorings and fragrances. That requires adding not just one gene to the yeast, as is done to make insulin, but numerous genes, in order to create an entire chemical production process within the cell. With that much tinkering needed, it can be easier to synthesize the DNA from scratch.

Right now, synthesizing DNA is difficult and error-prone. Existing techniques can reliably make strands that are only about 200 base pairs long, with the base pairs being the chemical units in DNA. A single gene can be hundreds or thousands of base pairs long. To synthesize one of those, multiple 200-unit segments have to be spliced together.
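A quick sketch makes the scale of that splicing problem concrete. The roughly 200-base-pair synthesis limit is from the article; the example gene length is illustrative:

```python
import math

# Reliable chemical synthesis tops out at strands of roughly this
# length (per the article), so anything longer must be stitched
# together from multiple fragments.
MAX_SYNTH_BP = 200

def fragments_needed(length_bp: int) -> int:
    """Minimum number of ~200-bp fragments needed to cover a sequence."""
    return math.ceil(length_bp / MAX_SYNTH_BP)

# A mid-sized gene of 1,500 base pairs needs 8 fragments.
print(fragments_needed(1_500))          # -> 8

# The full ~3-billion-letter human genome would need 15 million.
print(fragments_needed(3_000_000_000))  # -> 15000000
```

The fragment count alone understates the difficulty, since each splice is itself an error-prone step, but it shows why whole-genome synthesis is a different order of problem from making a single gene.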

But the cost and capabilities are rapidly improving. Dr. Endy of Stanford, who is a co-founder of a DNA synthesis company called Gen9, said the cost of synthesizing genes has plummeted from $4 per base pair in 2003 to 3 cents now. But even at that rate, the cost for three billion letters would be $90 million. He said if costs continued to decline at the same pace, that figure could reach $100,000 in 20 years.
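Those cost figures can be checked with a little arithmetic. In this sketch the per-base prices and the genome size come from the article; the assumption of a constant annual rate of decline, and the roughly 13-year window since 2003, are ours:

```python
# Back-of-the-envelope check of the DNA-synthesis cost figures quoted
# above. Prices and genome size are from the article; the constant
# annual decline rate is an assumption for the projection.

GENOME_BP = 3_000_000_000   # base pairs in the human genome
PRICE_2003 = 4.00           # dollars per base pair in 2003
PRICE_NOW = 0.03            # dollars per base pair today

# Cost of synthesizing a full genome at today's price.
cost_now = GENOME_BP * PRICE_NOW
print(f"Cost today: ${cost_now:,.0f}")  # -> Cost today: $90,000,000

# Implied constant annual price multiplier over ~13 years.
years_elapsed = 13
annual_factor = (PRICE_NOW / PRICE_2003) ** (1 / years_elapsed)
print(f"Implied annual multiplier: {annual_factor:.2f}")

# Projecting the same rate of decline 20 more years ahead.
cost_in_20y = cost_now * annual_factor ** 20
print(f"Projected cost in 20 years: ${cost_in_20y:,.0f}")
```

Under these assumptions the 20-year projection lands in the tens of thousands of dollars, the same ballpark as the $100,000 figure Dr. Endy cites; the exact number depends on how the rate of decline is modeled.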

J. Craig Venter, the genetic scientist, synthesized a bacterial genome consisting of about a million base pairs. The synthetic genome was inserted into a cell and took control of that cell. While his first synthetic genome was mainly a copy of an existing genome, Dr. Venter and colleagues this year synthesized a more original bacterial genome, about 500,000 base pairs long.

Dr. Boeke is leading an international consortium that is synthesizing the genome of yeast, which consists of about 12 million base pairs. The scientists are making changes, such as deleting stretches of DNA that do not have any function, in an attempt to make a more streamlined and stable genome.

But the human genome is more than 200 times as large as that of yeast and it is not clear if such a synthesis would be feasible.

Jeremy Minshull, chief executive of DNA2.0, a DNA synthesis company, questioned if the effort would be worth it.

“Our ability to understand what to build is so far behind what we can build,” said Dr. Minshull, who was invited to the meeting at Harvard but did not attend. “I just don’t think that being able to make more and more and more and cheaper and cheaper and cheaper is going to get us the understanding we need.”

A handful of scientists around the United States are trying to do something that some people find disturbing: make embryos that are part human, part animal.

The researchers hope these embryos, known as chimeras, could eventually help save the lives of people with a wide range of diseases.

One way would be to use chimera embryos to create better animal models to study how human diseases happen and how they progress.

Perhaps the boldest hope is to create farm animals that have human organs that could be transplanted into terminally ill patients.

But some scientists and bioethicists worry the creation of these interspecies embryos crosses the line. “You’re getting into unsettling ground that I think is damaging to our sense of humanity,” says Stuart Newman, a professor of cell biology and anatomy at the New York Medical College.

The experiments are so sensitive that the National Institutes of Health has imposed a moratorium on funding them while officials explore the ethical issues they raise.

Nevertheless, a small number of researchers are pursuing the work with alternative funding. They hope the results will persuade the NIH to lift the moratorium.

“We’re not trying to make a chimera just because we want to see some kind of monstrous creature,” says Pablo Ross, a reproductive biologist at the University of California, Davis. “We’re doing this for a biomedical purpose.”

The NIH is expected to announce soon how it plans to handle requests for funding.

Recently, Ross agreed to let me visit his lab for an unusual look at his research. During the visit, Ross demonstrated how he is trying to create a pancreas that theoretically could be transplanted into a patient with diabetes.

The first step involves using new gene-editing techniques to remove the gene that pig embryos need to make a pancreas.

Working under an elaborate microscope, Ross makes a small hole in the embryo’s outer membrane with a laser. Next, he injects a molecule synthesized in the laboratory to home in on and delete the pancreas gene inside. (In separate experiments, he has done this to sheep embryos, too.)

After the embryos have had their DNA edited this way, Ross creates another hole in the membrane so he can inject human induced pluripotent stem cells, or iPS for short, into the pig embryos.

Like human embryonic stem cells, iPS cells can turn into any kind of cell or tissue in the body. The researchers’ hope is that the human stem cells will take advantage of the void in the embryo to start forming a human pancreas.

Because iPS cells can be made from any adult’s skin cells, any organs they form would match the patient who needs the transplant, vastly reducing the risk that the body would reject the new organ.

But for the embryo to develop and produce an organ, Ross has to put the chimera embryos into the wombs of adult pigs. That involves a surgical procedure, which is performed in a large operating room across the street from Ross’s lab.

The day Ross opened his lab to me, a surgical team was anesthetizing an adult female pig so surgeons could make an incision to get access to its uterus.

Ross then rushed over with a special syringe filled with chimera embryos. He injected 25 embryos into each side of the animal’s uterus. The procedure took about an hour. He repeated the process on a second pig.

Every time Ross does this, he then waits a few weeks to allow the embryos to develop to their 28th day — a time when primitive structures such as organs start to form.

Ross then retrieves the chimeric embryos to dissect them so he can see what the human stem cells are doing inside. He examines whether the human stem cells have started to form a pancreas, and whether they have begun making any other types of tissues.

The uncertainty is part of what makes the work so controversial. Ross and other scientists conducting these experiments can’t know exactly where the human stem cells will go. Ross hopes they’ll only grow a human pancreas. But they could go elsewhere, such as to the brain.

“If you have pigs with partly human brains you would have animals that might actually have consciousness like a human,” Newman says. “It might have human-type needs. We don’t really know.”

That possibility raises new questions about the morality of using the animals for experimentation. Another concern is that the stem cells could form human sperm and human eggs in the chimeras.

“If a male chimeric pig mated with a female chimeric pig, the result could be a human fetus developing in the uterus of that female chimera,” Newman says. Another possibility is the animals could give birth to some kind of part-human, part-pig creature.

“One of the concerns that a lot of people have is that there’s something sacrosanct about what it means to be human expressed in our DNA,” says Jason Robert, a bioethicist at Arizona State University. “And that by inserting that into other animals and giving those other animals potentially some of the capacities of humans that this could be a kind of violation — a kind of, maybe, even a playing God.”

Ross defends his work. “I don’t consider that we’re playing God or even close to that,” Ross says. “We’re just trying to use the technologies that we have developed to improve people’s lives.”

Still, Ross acknowledges the concerns. So he’s moving very carefully, he says. For example, he’s only letting the chimera embryos develop for 28 days. At that point, he removes the embryos and dissects them.

If he discovers the stem cells are going to the wrong places in the embryos, he says he can take steps to stop that from happening. In addition, he’d make sure adult chimeras are never allowed to mate, he says.

“We’re very aware and sensitive to the ethical concerns,” he says. “One of the reasons we’re doing this research the way we’re doing it is because we want to provide scientific information to inform those concerns.”

Ross is working with Juan Carlos Izpisua Belmonte from the Salk Institute for Biological Studies in La Jolla, Calif., and Hiromitsu Nakauchi at Stanford University. Daniel Garry of the University of Minnesota and colleagues are conducting similar work. The research is funded in part by the Defense Department and the California Institute for Regenerative Medicine (CIRM).

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Did you know that the person who invented the color photograph was from Scotland? So was the inventor of the color triangle that forms the basis of the RGB color model we use in computing today. So was the man who proved the link between electricity and magnetism, as was the guy who figured out what Saturn’s rings were made of, and who pioneered the model of the modern research laboratory. Not only did each of these developments originate in Scotland, but they came from the curiosity, intelligence and hard work of one man: James Clerk Maxwell.

Maxwell’s discoveries and innovations form the foundations of our current understanding of science. Without them we would not have X-rays or radio. In fact, many in the science community consider Maxwell to be as significant a figure as Einstein or Isaac Newton. His discovery of the laws of electrodynamics has been described by leading physicist Richard Feynman as “the most significant event of the 19th century.”

So why has Maxwell’s name been forgotten in popular history?

It is hard to say whether the cause was his early death from stomach cancer, or the fact that many of his discoveries were only later commercialized into technologies like radio by figures such as Heinrich Hertz and Guglielmo Marconi. It also seems that Maxwell’s humility led him to focus on his work rather than engage in self-promotion.

Human cortical neurons in the brain. (David Scharf/Corbis)

By Jerry Adler
Smithsonian Magazine

Ken Hayworth, a neuroscientist, wants to be around in 100 years but recognizes that, at 43, he’s not likely to make it on his own. Nor does he expect to get there preserved in alcohol or a freezer; despite the claims made by advocates of cryonics, he says, the ability to revivify a frozen body “isn’t really on the horizon.” So Hayworth is hoping for what he considers the next best thing. He wishes to upload his mind—his memories, skills and personality—to a computer that can be programmed to emulate the processes of his brain, making him, or a simulacrum, effectively immortal (as long as someone keeps the power on).

Hayworth’s dream, which he is pursuing as president of the Brain Preservation Foundation, is one version of the “technological singularity.” It envisions a future of “substrate-independent minds,” in which human and machine consciousness will merge, transcending biological limits of time, space and memory. “This new substrate won’t be dependent on an oxygen atmosphere,” says Randal Koene, who works on the same problem at his own organization. “It can go on a journey of 1,000 years, it can process more information at a higher speed, it can see in the X-ray spectrum if we build it that way.” Whether Hayworth or Koene will live to see this is an open question. Their most optimistic scenarios call for at least 50 years, and uncounted billions of dollars, to implement their goal. Meanwhile, Hayworth hopes to achieve the ability to preserve an entire human brain at death—through chemicals, cryonics or both—to keep the structure intact with enough detail that it can, at some future time, be scanned into a database and emulated on a computer.

That approach presumes, of course, that all of the subtleties of a human mind and memory are contained in its anatomical structure. This is conventional wisdom among neuroscientists, but it is still a hypothesis: the brain also runs on electrochemical processes, and whether a static map of cells and synapses captures them is something, advocates argue, we won’t know until we try.

The initiatives require a big bet on the future of technology. A 3-D map of all the cells and synapses in a nervous system is called a “connectome,” and so far researchers have produced exactly one, for a roundworm called Caenorhabditis elegans, with 302 neurons and about 7,000 connections among them. A human brain, according to one reasonable estimate, has about 86 billion neurons and 100 trillion synapses. And then there’s the electrochemical activity on top of that. In 2013, announcing a federal initiative to produce a complete model of the human brain, Francis Collins, head of the National Institutes of Health, said it could generate “yottabytes” of data—a million million million megabytes. To scan an entire human brain at the scale Hayworth thinks is necessary—effectively slicing it into virtual cubes ten nanometers on a side—would require, with today’s technology, “a million electron microscopes running in parallel for ten years.” Mainstream researchers are divided between those who regard Hayworth’s quest as impossible in practice, and those, like Miguel Nicolelis of Duke University, who consider it impossible in theory. “The brain,” he says, “is not computable.”
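The scale Hayworth describes can be sanity-checked with back-of-envelope arithmetic. Here is a minimal sketch, assuming a brain volume of roughly 1,200 cm³ and one byte stored per voxel (both figures are illustrative assumptions, not stated in the article):

```python
# Back-of-envelope estimate of the data volume in a static
# full-brain scan at 10-nanometre resolution.
# Assumptions (not from the article): brain volume ~1,200 cm^3,
# one byte recorded per voxel.

brain_volume_m3 = 1.2e-3            # ~1,200 cm^3 expressed in cubic metres
voxel_side_m = 10e-9                # 10 nm voxel edge
voxel_volume_m3 = voxel_side_m ** 3

n_voxels = brain_volume_m3 / voxel_volume_m3
bytes_total = n_voxels * 1          # 1 byte per voxel (assumption)

print(f"voxels: {n_voxels:.1e}")                      # ~1.2e21
print(f"data: ~{bytes_total / 1e21:.1f} zettabytes")
```

Even under these conservative assumptions, a static anatomical map alone runs to around a zettabyte; Collins’ yottabyte figure, which also covers brain activity, sits three orders of magnitude higher still, which is consistent with the “million electron microscopes for ten years” remark.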

And what does it mean for a mind to exist outside a brain? One immediately thinks of the disembodied HAL in 2001: A Space Odyssey. But Koene sees no reason that, if computers continue to grow smaller and more powerful, an uploaded mind couldn’t have a body—a virtual one, or a robotic one. Will it sleep? Experience hunger, pain, desire? In the absence of hormones and chemical neurotransmitters, will it feel emotion? It will be you, in a sense, but will you be it?

These questions don’t trouble Hayworth. To him, the brain is the most sophisticated computer on earth, but only that, and he figures his mind could also live in one made of transistors instead. He hopes to become the first human being to live entirely in cyberspace, to send his virtual self into the far future.


False beliefs and wishful thinking about the human experience are common. They are hurting people — and holding back science.

Megan Scudellari

In 1997, physicians in southwest Korea began to offer ultrasound screening for early detection of thyroid cancer. News of the programme spread, and soon physicians around the region began to offer the service. Eventually it went nationwide, piggybacking on a government initiative to screen for other cancers. Hundreds of thousands took the test for just US$30–50.

Across the country, detection of thyroid cancer soared, from 5 cases per 100,000 people in 1999 to 70 per 100,000 in 2011. Two-thirds of those diagnosed had their thyroid glands removed and were placed on lifelong drug regimens, both of which carry risks.

Such a costly and extensive public-health programme might be expected to save lives. But this one did not. Thyroid cancer is now the most common type of cancer diagnosed in South Korea, but the number of people who die from it has remained exactly the same — about 1 per 100,000. Even when some physicians in Korea realized this, and suggested that thyroid screening be stopped in 2014, the Korean Thyroid Association, a professional society of endocrinologists and thyroid surgeons, argued that screening and treatment were basic human rights.

In Korea, as elsewhere, the idea that the early detection of any cancer saves lives had become an unshakeable belief.

This blind faith in cancer screening is an example of how ideas about human biology and behaviour can persist among people — including scientists — even though the scientific evidence shows the concepts to be false. “Scientists think they’re too objective to believe in something as folklore-ish as a myth,” says Nicholas Spitzer, director of the Kavli Institute for Brain and Mind at the University of California, San Diego. Yet they do.

These myths often blossom from a seed of a fact — early detection does save lives for some cancers — and thrive on human desires or anxieties, such as a fear of death. But they can do harm by, for instance, driving people to pursue unnecessary treatment or spend money on unproven products. They can also derail or forestall promising research by distracting scientists or monopolizing funding. And dispelling them is tricky.

Scientists should work to discredit myths, but they also have a responsibility to try to prevent new ones from arising, says Paul Howard-Jones, who studies neuroscience and education at the University of Bristol, UK. “We need to look deeper to understand how they come about in the first place and why they’re so prevalent and persistent.”

Some dangerous myths get plenty of air time: vaccines cause autism, HIV doesn’t cause AIDS. But many others swirl about, too, harming people, sucking up money, muddying the scientific enterprise — or simply getting on scientists’ nerves. Here, Nature looks at the origins and repercussions of five myths that refuse to die.

Myth 1: Screening saves lives for all types of cancer

Regular screening might be beneficial for some groups at risk of certain cancers, such as lung, cervical and colon, but this isn’t the case for all tests. Still, some patients and clinicians defend the ineffective ones fiercely.

The belief that early detection saves lives originated in the early twentieth century, when doctors realized that they got the best outcomes when tumours were identified and treated just after the onset of symptoms. The next logical leap was to assume that the earlier a tumour was found, the better the chance of survival. “We’ve all been taught, since we were at our mother’s knee, the way to deal with cancer is to find it early and cut it out,” says Otis Brawley, chief medical officer for the American Cancer Society.

But evidence from large randomized trials for cancers such as thyroid, prostate and breast has shown that early screening is not the lifesaver it is often advertised as. For example, a Cochrane review of five randomized controlled clinical trials totalling 341,342 participants found that screening did not significantly decrease deaths due to prostate cancer (ref. 1).

“People seem to imagine the mere fact that you found a cancer so-called early must be a benefit. But that isn’t so at all,” says Anthony Miller at the University of Toronto in Canada. Miller headed the Canadian National Breast Screening Study, a 25-year study of 89,835 women aged 40–59 (ref. 2) that found that annual mammograms did not reduce mortality from breast cancer. That’s because some tumours will lead to death irrespective of when they are detected and treated. Meanwhile, aggressive early screening has a slew of negative health effects. Many cancers grow slowly and will do no harm if left alone, so people end up having unnecessary thyroidectomies, mastectomies and prostatectomies. So on a population level, the benefits (lives saved) do not outweigh the risks (lives lost or interrupted by unnecessary treatment).

Still, individuals who have had a cancer detected and then removed are likely to feel that their life was saved, and these personal experiences help to keep the misconception alive. And oncologists routinely debate which age groups and risk factors warrant regular screening.

Focusing so much attention on the current screening tests comes at a cost for cancer research, says Brawley. “In breast cancer, we’ve spent so much time arguing about age 40 versus age 50 and not about the fact that we need a better test,” such as one that could detect fast-growing rather than slow-growing tumours. And existing diagnostics should be rigorously tested to prove that they actually save lives, says epidemiologist John Ioannidis of the Stanford Prevention Research Center in California, who this year reported that very few screening tests for 19 major diseases actually reduced mortality (ref. 3).

Changing behaviours will be tough. Gilbert Welch at the Dartmouth Institute for Health Policy and Clinical Practice in Lebanon, New Hampshire, says that individuals would rather be told to get a quick test every few years than be told to eat well and exercise to prevent cancer. “Screening has become an easy way for both doctor and patient to think they are doing something good for their health, but their risk of cancer hasn’t changed at all.”

Myth 2: Antioxidants are good and free radicals are bad

In December 1945, chemist Denham Harman’s wife suggested that he read an article in Ladies’ Home Journal entitled ‘Tomorrow You May Be Younger’. It sparked his interest in ageing, and years later, as a research associate at the University of California, Berkeley, Harman had a thought “out of the blue”, as he later recalled. Ageing, he proposed, is caused by free radicals, reactive molecules that build up in the body as by-products of metabolism and lead to cellular damage.

Scientists rallied around the free-radical theory of ageing, including the corollary that antioxidants, molecules that neutralize free radicals, are good for human health. By the 1990s, many people were taking antioxidant supplements, such as vitamin C and β-carotene. It is “one of the few scientific theories to have reached the public: gravity, relativity and that free radicals cause ageing, so one needs to have antioxidants”, says Siegfried Hekimi, a biologist at McGill University in Montreal, Canada.

Yet in the early 2000s, scientists trying to build on the theory encountered bewildering results: mice genetically engineered to overproduce free radicals lived just as long as normal mice (ref. 4), and those engineered to overproduce antioxidants didn’t live any longer than normal (ref. 5). It was the first of an onslaught of negative data, which initially proved difficult to publish. The free-radical theory “was like some sort of creature we were trying to kill. We kept firing bullets into it, and it just wouldn’t die,” says David Gems at University College London, who started to publish his own negative results in 2003 (ref. 6). Then, one study in humans (ref. 7) showed that antioxidant supplements prevent the health-promoting effects of exercise, and another associated them with higher mortality (ref. 8).

None of those results has slowed the global antioxidant market, which ranges from food and beverages to livestock feed additives. It is projected to grow from US$2.1 billion in 2013 to $3.1 billion in 2020. “It’s a massive racket,” says Gems. “The reason the notion of oxidation and ageing hangs around is because it is perpetuated by people making money out of it.”

Today, most researchers working on ageing agree that free radicals can cause cellular damage, but that this seems to be a normal part of the body’s reaction to stress. Still, the field has wasted time and resources as a result. And the idea still holds back publications on possible benefits of free radicals, says Michael Ristow, a metabolism researcher at the Swiss Federal Institute of Technology in Zurich, Switzerland. “There is a significant body of evidence sitting in drawers and hard drives that supports this concept, but people aren’t putting it out,” he says. “It’s still a major problem.”

Some researchers also question the broader assumption that molecular damage of any kind causes ageing. “There’s a question mark about whether really the whole thing should be chucked out,” says Gems. The trouble, he says, is that “people don’t know where to go now”.

Myth 3: Humans have exceptionally large brains

The human brain — with its remarkable cognition — is often considered to be the pinnacle of brain evolution. That dominance is often attributed to the brain’s exceptionally large size in comparison to the body, as well as its density of neurons and supporting cells, called glia.

None of that, however, is true. “We cherry-pick the numbers that put us on top,” says Lori Marino, a neuroscientist at Emory University in Atlanta, Georgia. Human brains are about seven times larger than one might expect relative to similarly sized animals. But mice and dolphins have about the same proportions, and some birds have a larger ratio.

“Human brains respect the rules of scaling. We have a scaled-up primate brain,” says Chet Sherwood, a biological anthropologist at George Washington University in Washington DC. Even cell counts have been inflated: articles, reviews and textbooks often state that the human brain has 100 billion neurons. More accurate measures suggest that the number is closer to 86 billion. That may sound like a rounding error, but 14 billion neurons is roughly the equivalent of two macaque brains.
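The “rounding error” comparison is easy to make concrete. A small sketch, assuming roughly 6.4 billion neurons per macaque brain (a commonly cited count that the article itself does not give):

```python
# How big is the gap between the folk figure for human neurons
# (100 billion) and the measured one (~86 billion)?
# Assumption (not from the article): ~6.4e9 neurons per macaque brain.

human_folk = 100e9       # the number still quoted in many textbooks
human_measured = 86e9    # the more accurate measurement
macaque = 6.4e9          # assumed macaque neuron count

gap = human_folk - human_measured        # 14 billion neurons
print(f"gap: {gap / 1e9:.0f} billion neurons")
print(f"about {gap / macaque:.1f} macaque brains' worth")
```

Under that assumption the 14-billion-neuron discrepancy works out to a little over two entire macaque brains, which is the article’s point: the error term alone is the size of a primate brain.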

Human brains are different from those of other primates in other ways: Homo sapiens evolved an expanded cerebral cortex — the part of the brain involved in functions such as thought and language — and unique changes in neural structure and function in other areas of the brain.

The myth that our brains are unique because of an exceptional number of neurons has done a disservice to neuroscience because other possible differences are rarely investigated, says Sherwood, pointing to the examples of energy metabolism, rates of brain-cell development and long-range connectivity of neurons. “These are all places where you can find human differences, and they seem to be relatively unconnected to total numbers of neurons,” he says.

The field is starting to explore these topics. Projects such as the US National Institutes of Health’s Human Connectome Project and the Swiss Federal Institute of Technology in Lausanne’s Blue Brain Project are now working to understand brain function through wiring patterns rather than size.

Myth 4: Individuals learn best when taught in their preferred learning style

People attribute other mythical qualities to their unexceptionally large brains. One such myth is that individuals learn best when they are taught in the way they prefer to learn. A verbal learner, for example, supposedly learns best through oral instructions, whereas a visual learner absorbs information most effectively through graphics and other diagrams.

There are two truths at the core of this myth: many people have a preference for how they receive information, and evidence suggests that teachers achieve the best educational outcomes when they present information in multiple sensory modes. Couple that with people’s desire to learn and be considered unique, and conditions are ripe for myth-making.

“Learning styles has got it all going for it: a seed of fact, emotional biases and wishful thinking,” says Howard-Jones. Yet just like sugar, pornography and television, “what you prefer is not always good for you or right for you,” says Paul Kirschner, an educational psychologist at the Open University of the Netherlands.

In 2008, four cognitive neuroscientists reviewed the scientific evidence for and against learning styles. Only a few studies had rigorously put the ideas to the test and most of those that did showed that teaching in a person’s preferred style had no beneficial effect on his or her learning. “The contrast between the enormous popularity of the learning-styles approach within education and the lack of credible evidence for its utility is, in our opinion, striking and disturbing,” the authors of one study wrote (ref. 9).

That hasn’t stopped a lucrative industry from pumping out books and tests for some 71 proposed learning styles. Scientists, too, perpetuate the myth, citing learning styles in more than 360 papers during the past 5 years. “There are groups of researchers who still adhere to the idea, especially folks who developed questionnaires and surveys for categorizing people. They have a strong vested interest,” says Richard Mayer, an educational psychologist at the University of California, Santa Barbara.

In the past few decades, research into educational techniques has started to show that there are interventions that do improve learning, including getting students to summarize or explain concepts to themselves. And it seems almost all individuals, barring those with learning disabilities, learn best from a mixture of words and graphics, rather than either alone.

Yet the learning-styles myth makes it difficult to get these evidence-backed concepts into classrooms. When Howard-Jones speaks to teachers to dispel the learning-styles myth, for example, they often don’t like to hear what he has to say. “They have disillusioned faces. Teachers invested hope, time and effort in these ideas,” he says. “After that, they lose interest in the idea that science can support learning and teaching.”

Myth 5: The human population is growing exponentially (and we’re doomed)

Fears about overpopulation began with the Reverend Thomas Malthus, who in 1798 predicted that unchecked exponential population growth would lead to famine and poverty.

But the human population has not been and is not growing exponentially, and it is unlikely to do so, says Joel Cohen, a populations researcher at the Rockefeller University in New York City. The world’s population is now growing at just half the rate it was before 1965. Today there are an estimated 7.3 billion people, and that is projected to reach 9.7 billion by 2050. Yet beliefs that the rate of population growth will lead to some doomsday scenario have been continually perpetuated. The celebrated physicist Albert Bartlett, for example, starting in 1969, delivered his lecture on exponential human population growth and its dire consequences more than 1,742 times.
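Cohen’s point can be sanity-checked with simple compounding. A minimal sketch, assuming a 1965 population of about 3.3 billion and the mid-1960s peak growth rate of about 2.1% per year (both figures are assumptions for illustration, not taken from the article):

```python
# If population growth had stayed exponential at its mid-1960s
# peak rate, where would we be now, and in 2050?
# Assumptions (not from the article): ~3.3 billion people in 1965,
# peak growth rate ~2.1% per year held constant.

pop_1965 = 3.3e9
rate = 0.021

pop_2015 = pop_1965 * (1 + rate) ** 50   # 50 years of compounding
pop_2050 = pop_1965 * (1 + rate) ** 85   # 85 years of compounding

print(f"2015 at a constant 2.1%: {pop_2015 / 1e9:.1f} billion "
      f"(the article cites ~7.3 actual)")
print(f"2050 at a constant 2.1%: {pop_2050 / 1e9:.1f} billion "
      f"(versus the 9.7 projected)")
```

Under these assumptions, truly exponential growth would put 2015 above 9 billion and 2050 near 19 billion, roughly double the actual projection, which is what it means for the growth rate to have halved rather than stayed constant.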

The world’s population also has enough to eat. According to the Food and Agriculture Organization of the United Nations, the rate of global food production outstrips the growth of the population. People grow enough calories in cereals alone to feed between 10 billion and 12 billion people. Yet hunger and malnutrition persist worldwide. This is because about 55% of the food grown is divided among feeding cattle, making fuel and other materials, and going to waste, says Cohen. And what remains is not evenly distributed — the rich have plenty, the poor have little. Likewise, water is not scarce on a global scale, even though 1.2 billion people live in areas where it is.

“Overpopulation is really not overpopulation. It’s a question about poverty,” says Nicholas Eberstadt, a demographer at the American Enterprise Institute, a conservative think tank based in Washington DC. Yet instead of examining why poverty exists and how to sustainably support a growing population, he says, social scientists and biologists talk past each other, debating definitions and causes of overpopulation.

Cohen adds that “even people who know the facts use it as an excuse not to pay attention to the problems we have right now”, pointing to the example of economic systems that favour the wealthy.

Like others interviewed for this article, Cohen is less than optimistic about the chances of dispelling the idea of overpopulation and other ubiquitous myths (see ‘Myths that persist’), but he agrees that it is worthwhile to try to prevent future misconceptions. Many myths have emerged after one researcher extrapolated beyond the narrow conclusions of another’s work, as was the case for free radicals. That “interpretation creep”, as Spitzer calls it, can lead to misconceptions that are hard to excise. To prevent that, “we can make sure an extrapolation is justified, that we’re not going beyond the data”, suggests Spitzer. Beyond that, it comes down to communication, says Howard-Jones. Scientists need to be effective at communicating ideas and get away from simple, boiled-down messages.

Once a myth is here, it is often here to stay. Psychological studies suggest that the very act of attempting to dispel a myth leads to stronger attachment to it. In one experiment, exposure to pro-vaccination messages reduced parents’ intention to vaccinate their children in the United States. In another, correcting misleading claims from politicians increased false beliefs among those who already held them. “Myths are almost impossible to eradicate,” says Kirschner. “The more you disprove it, often the more hard core it becomes.”

Nature 528, 322–325 (17 December 2015) doi:10.1038/528322a

1. Ilic, D., Neuberger, M. M., Djulbegovic, M. & Dahm, P. Cochrane Database Syst. Rev. 1, CD004720 (2013).
2. Miller, A. B. et al. Br. Med. J. 348, g366 (2014).
3. Saquib, N., Saquib, J. & Ioannidis, J. P. A. Int. J. Epidemiol. 44, 264–277 (2015).
4. Doonan, R. et al. Genes Dev. 22, 3236–3241 (2008).
5. Pérez, V. I. et al. Aging Cell 8, 73–75 (2009).
6. Keaney, M. & Gems, D. Free Radic. Biol. Med. 34, 277–282 (2003).
7. Ristow, M. et al. Proc. Natl Acad. Sci. USA 106, 8665–8670 (2009).
8. Bjelakovic, G., Nikolova, D. & Gluud, C. J. Am. Med. Assoc. 310, 1178–1179 (2013).
9. Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. Psychol. Sci. Public Interest 9, 105–119 (2008).