Sea Level Could Rise 5 Feet in New York City by 2100

The U.S.’s largest metropolis and the entire east coast could face frequent destruction unless the region takes previously unthinkable actions

By Mark Fischetti

By 2100 devastating flooding of the sort that Superstorm Sandy unleashed on New York City could happen every two years all along the valuable and densely populated U.S. east coast—anywhere from Boston to Miami.

And unless extreme protection measures are implemented, people could again die.

Hyperbole? Hardly. Even though Sandy’s storm surge was exceptionally high, if sea level rises as much as scientists agree is likely, even routine storms could cause similar destruction. Older, conservative estimates put the increase at two feet (0.6 meter) above the 2000 level by 2100. That number did not include any increase in ice melting from Greenland or Antarctica—yet in December new data showed that temperatures in Antarctica are rising three times faster than the rate used in the conservative models. Accelerated melting has also been reported in Greenland. Under what scientists call the rapid ice-melt scenario, global sea level would rise four feet (1.2 meters) by the 2080s, according to Klaus Jacob, a research scientist at Columbia University’s Lamont–Doherty Earth Observatory. In New York City by 2100 “it will be five feet, plus or minus one foot,” Jacob says.

Skeptics doubt that number, but the science is solid. The projection comes in part from the realization that the ocean does not rise equally around the planet. The coast from Cape Cod near Boston to Cape Hatteras in North Carolina is a hot spot—figuratively and literally. In 2012 Asbury Sallenger, a coastal hazards expert at the U.S. Geological Survey (USGS), reported that for the prior 60 years sea level along that section of the Atlantic coast had increased three to four times faster than the global average. Looking ahead to 2100, Sallenger indicated that the region would experience 12 to 24 centimeters—4.7 to 9.4 inches—of sea level rise above and beyond the average global increase.

Sallenger (who died in February) was careful to point out that the surplus was related only to ocean changes—such as expansion of water due to higher temperature as well as adjustments to the Gulf Stream running up along the coast brought about by melting Arctic ice—not changes to the land.

Unfortunately, that land is also subsiding. Since North American glaciers began retreating 20,000 years ago, the crust from New York City to North Carolina has been sinking, as the larger continent continues to adjust to the unloading. The land will continue to subside by one to 1.5 millimeters (0.04 to 0.06 inch) a year, according to S. Jeffress Williams, a coastal marine geologist with the USGS and the University of Hawaii at Mānoa. The boundary zone where rising crust to the north changes to falling crust to the south runs roughly west to east from central New York State through Massachusetts.
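That subsidence rate compounds steadily over a century. As a rough sketch (the cumulative figure is my arithmetic from the rates Williams cites, not a number given in the article, and the helper name is illustrative):

```python
# Cumulative land subsidence over a century at the rates Williams
# cites for the mid-Atlantic crust: 1.0 to 1.5 millimeters per year.

def cumulative_subsidence_cm(rate_mm_per_yr: float, years: int = 100) -> float:
    """Total subsidence in centimeters over the given span,
    assuming the annual rate holds constant."""
    return rate_mm_per_yr * years / 10.0  # 10 mm per cm

low = cumulative_subsidence_cm(1.0)   # 10 cm, roughly 4 inches
high = cumulative_subsidence_cm(1.5)  # 15 cm, roughly 6 inches
```

Even at the low end, sinking land adds several inches on top of whatever the ocean itself does.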

Certain municipalities such as Atlantic City, N.J., are sinking even faster because they are rapidly extracting groundwater. Cities around Chesapeake Bay, such as Norfolk, Va., and Virginia Beach, are subsiding faster still because sediment underneath them continues to slump into the impact crater that formed the bay 35 million years ago.

When all these factors are taken into account, experts say, sea level rise of five feet (1.5 meters) by 2100 is reasonable along the entire east coast. That’s not really a surprise: the ocean was 20 to 26 feet (six to eight meters) higher during the most recent interglacial period.

Now for the flooding: Sandy’s storm surge topped out at about 11 feet (3.4 meters) above the most recent average sea level at the lower tip of Manhattan. But flood maps just updated by the Federal Emergency Management Agency in January indicate that even an eight-foot (2.5-meter) surge would cause widespread, destructive flooding. So if sea level rises by five feet (1.5 meters), a surge of only three feet is needed to inflict considerable damage.

How frequently could that occur? Municipalities rarely plan for anything greater than the so-called one-in-100-year storm—which means that the chance of such a storm hitting during any given year is one in 100. Sandy was a one-in-500-year storm. If sea level rises by five feet, the chance in any year of a storm bringing a three-foot surge to New York City will increase to as high as one in three or even one in two, according to various projections. The 100-year height for a storm in the year 2000 would be reached by a two-year storm in 2100.
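The return-period arithmetic above can be sketched in a few lines. The helpers below are illustrative names of mine; they just convert between a storm’s return period and its annual probability under the standard definition, assuming independent years:

```python
def annual_probability(return_period_years: float) -> float:
    """Annual chance of meeting or exceeding a storm of the given
    return period: a 100-year storm has a 1-in-100 chance each year."""
    return 1.0 / return_period_years

def prob_within(return_period_years: float, horizon_years: int) -> float:
    """Chance of at least one such storm within a horizon,
    treating each year as an independent draw (the standard simplification)."""
    p = annual_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

annual_probability(100)   # 0.01: one in 100 in any given year
annual_probability(2)     # 0.5: a "two-year storm" is a coin flip each year
prob_within(100, 100)     # ~0.63: a century-scale storm is more likely than not in a century
```

This is why shifting a flood height from a 100-year event to a 2-year event is so consequential: the annual odds jump from 1 percent to 50 percent.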

With hundreds of people still homeless in Sandy’s wake, coastal cities worldwide are watching to see how New York City will fend off rising seas. Scientists and engineers have proposed solutions to pieces of the complex puzzle, and a notable subset of them on the New York City Panel on Climate Change are rushing to present options to Mayor Michael Bloomberg by the end of May. But extensive interviews with those experts lead to several controversial and expensive conclusions: Long-term, the only way to protect east coast cities against storm surges is to build massive flood barriers. The choices for protecting the long stretches of sandy coastlines between them—New Jersey, Maryland, the Carolinas, Florida—are even more limited.

As for sea level rise, retreat from low-lying shores may be the best option. Despite the gut reaction of “No, we won’t go,” climate forces already in motion may leave few options.

http://www.scientificamerican.com/article.cfm?id=fischetti-sea-level-could-rise-five-feet-new-york-city-nyc-2100

FDA Lets Drugs Approved on Fraudulent Research Stay on the Market

The FDA in 2011 announced years’ worth of studies from a major drug research lab were potentially worthless, but it has not pulled any of the compounds from the market nor identified them

By Rob Garver, Charles Seife and ProPublica

On the morning of May 3, 2010, three agents of the Food and Drug Administration descended upon the Houston office of Cetero Research, a firm that conducted research for drug companies worldwide. Lead agent Patrick Stone, now retired from the FDA, had visited the Houston lab many times over the previous decade for routine inspections. This time was different. His team was there to investigate a former employee’s allegation that the company had tampered with records and manipulated test data. When Stone explained the gravity of the inquiry to Chinna Pamidi, the testing facility’s president, the Cetero executive made a brief phone call. Moments later, employees rolled in eight flatbed carts, each double-stacked with file boxes. The documents represented five years of data from some 1,400 drug trials.

Pamidi bluntly acknowledged that much of the lab’s work was fraudulent, Stone said. “You got us,” Stone recalled him saying.

Based partly on records in the file boxes, the FDA eventually concluded that the lab’s violations were so “egregious” and pervasive that studies conducted there between April 2005 and August 2009 might be worthless.

The health threat was potentially serious: About 100 drugs, including sophisticated chemotherapy compounds and addictive prescription painkillers, had been approved for sale in the United States at least in part on the strength of Cetero Houston’s tainted tests. The vast majority, 81, were generic versions of brand-name drugs on which Cetero scientists had often run critical tests to determine whether the copies did, in fact, act the same in the body as the originals. For example, one of these generic drugs was ibuprofen, sold as gelatin capsules by one of the nation’s largest grocery-store chains for months before the FDA received assurance they were safe.

The rest were new medications that required so much research to win approval that the FDA says Cetero’s tests were rarely crucial. Stone said he expected the FDA to move swiftly to compel new testing and to publicly warn patients and doctors.

Instead, the agency decided to handle the matter quietly, evaluating the medicines with virtually no public disclosure of what it had discovered. It pulled none of the drugs from the market, even temporarily, letting consumers take the ibuprofen and other medicines it no longer knew for sure were safe and effective. To this day, some drugs remain on the market even though the FDA has no additional scientific evidence to back up their safety and efficacy.

By contrast, the FDA’s transatlantic counterpart, the European Medicines Agency, has pulled seven Cetero-tested medicines from the market.

The FDA also has moved slowly to shore up the science behind the drugs. Twice the FDA announced it was requiring drug makers to repeat, reanalyze or audit many of Cetero’s tests, and to submit their findings to the agency. Both times the agency set deadlines, yet it has allowed some companies to blow by them. Today, six months after the last of those deadlines expired and almost three years after Cetero’s misconduct was discovered, the FDA has received the required submissions for just 53 drugs. The agency says most companies met the deadlines but acknowledged that “a few have not yet submitted new studies.” Other companies, it said, have not submitted new research because they removed their drugs from the market altogether. For its part, the FDA has finished its review of just 21 of the 53 submissions it has received, raising the possibility that patients are taking medications today that the agency might pull off the market tomorrow.

To this day, the agency refuses to disclose the names of the drugs it is reassessing, on the grounds that doing so would expose “confidential commercial information.” ProPublica managed to identify five drugs (http://projects.propublica.org/graphics/cetero) that used Cetero tests to help win FDA approval.

FDA officials defended the agency’s handling of the Cetero case as prudent and scientifically sound, noting that the agency has found no discrepancies between any original drug and its generic copy and no sign that any patients have been harmed. “It is non-trivial to have to redo all this, to withdraw drugs, to alarm the public and the providers for a large range of drugs,” said Janet Woodcock, the director of the FDA’s Center for Drug Evaluation and Research. “There are consequences. To repeat the studies requires human experimentation, and that is not totally without risk.” Woodcock added that an agency risk assessment found the potential for harm from drugs tested by Cetero to be “quite low,” an assessment she said has been “confirmed” by the fact that no problems have been found in the drugs the agency has finished reviewing. She declined to release the risk assessment or detail its design. A subsequent statement from the agency described the assessment as “fluid” and “ongoing.” The FDA also has not released its 21 completed reviews, which ProPublica has requested. Some experts say that by withholding so much information in the Cetero case the FDA failed to meet its obligations to the public.

“If there are problems with the scientific studies, as there have been in this case, then the FDA’s review of those problems needs to be transparent,” said David Kessler, who headed the FDA from 1990 to 1997 and who is now a professor at the University of California at San Francisco. Putting its reviews in public view would let the medical community “understand the basis for the agency’s actions,” he said. “FDA may be right here, but if it wants public confidence, they should be transparent. Otherwise it’s just a black box.”

Another former senior FDA official, who spoke on condition of anonymity, also felt the FDA had moved too slowly and secretively. “They’re keeping it all in the dark. It’s not transparent at all,” he said.

By contrast, the European Medicines Agency has provided a public accounting of the science behind all the drugs it has reviewed. Its policy, the EMA said in response to questions, is to make public “all review procedures where the benefit-risk balance of a medicine is under scrutiny.”

Woodcock dismissed comparisons to the EMA. “Europe had a smaller handful of drugs,” she said, “and they may not have engaged in as extensive negotiation and investigations with the company as we did.” She said the FDA would have disclosed more, including the names of drugs, had it believed there was a risk to public health. “We believe that this did not rise to the level where the public should be notified,” she said. “We felt it would result in misunderstanding and inappropriate actions.”

In a written response to Kessler’s comments, the FDA said, “We’ve been as transparent as possible given the legal protections surrounding an FDA investigation of this or any type. The issue is not a lack of transparency but rather the difficulty of explaining why the problems we identified at Cetero, which on their face would appear to be highly significant in terms of patient risk, fortunately were not.” Still, the FDA’s secrecy has had other ramifications. Some of Cetero’s suspect research made its way unchallenged into the peer-reviewed scientific literature on which the medical community relies. In one case, a researcher and a journal editor told ProPublica they had no idea the Cetero tests had been called into doubt.

Cetero, in correspondence with the FDA, conceded misconduct. And in an interview, Cetero’s former attorney, Marc Scheineson, acknowledged that chemists at the Houston facility committed fraud but said the problem was limited to six people who had all been fired.

“There is still zero evidence that any of the test results…were wrong, inaccurate, or incorrect,” he said. Scheineson called the FDA’s actions “overkill” and said they led to the demise of Cetero and its successor company.

In 2012, the company filed for Chapter 11 bankruptcy and emerged with a new name, PRACS Institute. PRACS, in turn, filed for bankruptcy on March 22 of this year. A PRACS spokesperson said the company had closed the Houston facility in October 2012.

Pamidi, the Cetero executive who provided the carts of file boxes, declined to comment. As for Stone, the former FDA investigator, he said he was disturbed by the agency’s decisions.

“They could have done more,” he said. “They should have done more.”

Cross-checking U.S. and European public records, including regulatory filings, scientific studies and civil lawsuits, ProPublica was able to identify a few of the drugs that are on the U.S. market because of tests performed at Cetero’s Houston lab. There is no evidence that patients have suffered harm from these drugs; the FDA says it has detected no increase in reports of side effects or lack of efficacy among Cetero-tested medications.

To be sure, just because a crucial study is deemed potentially unreliable does not mean that a drug is unsafe or ineffective. What it does mean is that the FDA’s scientific basis for approving that drug has been undermined.

The risks are real, academic experts say, particularly for drugs such as blood thinners and anti-seizure medications that must be given at very specific doses. And generic versions of drugs have been known to act differently from name-brand products.

There is no indication the generic ibuprofen gelatin capsules hurt anyone, but their case shows how the FDA left a drug on the market for months without confirmation that the drug was equivalent to the name brand.

The capsules were manufactured by Banner Pharmacaps and carried by Supervalu, a grocery company that operates or licenses more than 2,400 stores across the United States, including Albertson’s, Jewel-Osco, Shop ‘n Save, Save-A-Lot, and Shoppers Food & Pharmacy.

Cetero had performed a key analysis to show that the capsules were equivalent to other forms of the drug. Banner, the drug’s maker, said the FDA first alerted it to the problems at Cetero in August 2011. The FDA required drug companies to redo many of Cetero’s tests, but, a spokesperson for Banner wrote in an email, “We received no directive from FDA to recall or otherwise interrupt manufacture of the product.”

Banner said it repeated the tainted Cetero tests at a different research firm, and the FDA said it received the new data in January 2012 — leaving a gap of at least five months when the FDA knew the drug was on the market without a rock-solid scientific basis.

An FDA spokesperson wrote in an email that the agency found the new studies Banner submitted “acceptable” and told Banner it had no further questions.

A spokesperson for Supervalu told ProPublica it purchased the ibuprofen from a supplier, which has assured the grocery company that “there are no issues with the product.”

According to U.S. and European records, another one of the drugs approved based on research at Cetero’s troubled Houston lab was a chemotherapy drug known as Temodar for Injection.

Temodar was originally approved in 1999 as a capsule to fight an aggressive brain cancer, glioblastoma multiforme. Some patients, however, can’t tolerate taking the medication orally, so drug maker Schering-Plough decided to make an intravenous form of the drug.

To get Temodar for Injection approved, the FDA required what it called a “pivotal” test comparing the well-established capsule form of Temodar to the form injected directly into the bloodstream.

Cetero Houston conducted that test, comparing blood samples of patients who received the capsule to samples of those who got the injection to determine if the same amount of the drug was reaching the bloodstream. This test is crucial, particularly in the case of Temodar, where there was a question about the right dosing regimen of the injectable version. If too little drug gets into the blood, the cancer could continue to grow unabated. If too much gets in, the drug’s debilitating side effects could be even worse.

Cetero performed the test between September 2006 and October 2007, according to documents from the European Medicines Agency, and FDA records indicate that same test was used to win approval in the U.S.

In 2011, the FDA notified Merck & Co., which had acquired Schering-Plough, about the problems with Cetero’s testing. In April 2012, the FDA publicly announced that analyses done by Cetero during the time when it performed the Temodar work would have to be redone. But according to Merck spokesman Ronald Rogers, the FDA has not asked Merck for any additional analyses to replace the questionable study.

The FDA declined to answer specific questions about the Temodar case, saying to do so would reveal confidential commercial information. But Woodcock said that in some cases, drug manufacturers had submitted alternative test results to the FDA that satisfied the agency that no retesting was necessary for specific drugs.

The FDA never removed Temodar for Injection from the market. The European Medicines Agency also kept the injection form of the drug on the market, but the two agencies handled their decision in sharply different ways.

The EMA has publicly laid out evidence — including studies not performed by Cetero — for why it believes the benefits of the injection drug outweigh its risks. But in the United States, the FDA has kept silent. To this day, Temodar’s label — the single most important way the FDA communicates the risks and benefits of medication — still displays data from the dubious Cetero study. (The label of at least one other drug, a powerful pain reliever marketed as Lazanda, also still displays questionable Cetero data.)

Woodcock said the agency hadn’t required manufacturers to alter their labels because, despite questions about the precision of the numbers, the FDA’s overall recommendation had not changed.

In a written response to questions, Merck said it “stands behind the data in the TEMODAR (temozolomide) label.” The company said it learned about “misconduct at a contract research organization (CRO) facility in Houston” from the FDA and that it cooperated with investigations by the FDA and its European counterpart. It said that Cetero had performed no other studies for Merck.

Even one of the researchers involved in evaluating injectable Temodar didn’t know that the FDA had flagged Cetero’s analysis as potentially unreliable until contacted by a reporter for this story.

Dr. Max Schwarz, an oncologist and clinical professor at Monash University in Melbourne, Australia, treated some brain-cancer patients with the experimental injectable form of Temodar and others with the capsule formulation. Blood from his patients was sent to Cetero’s Houston lab for analysis.

Schwarz said he still has confidence in the injectable form of the drug, but said that he was “taken aback” when a reporter told him that the FDA had raised questions about the analysis. “I think we should have been told,” he said.

Suspect research conducted by Cetero Houston was not only used to win FDA approval but was also submitted to peer-reviewed scientific journals. Aided by the FDA’s silence, those articles remain in the scientific literature with no indication that they might, in fact, be compromised. For example, based on Cetero’s work, an article in the journal Cancer Chemotherapy and Pharmacology purports to show that Temodar for Injection is equivalent to Temodar capsules.

Edward Sausville, co-editor-in-chief of the journal, said in an email that the first he heard that something might be wrong with the Cetero research was when a reporter contacted him for this story. He also said the publisher of the journal would conduct a “review of relevant records pertinent to this case.”

During his years of inspecting the Houston lab, the FDA’s Stone said he often had the sense that something wasn’t right. When he went to other contract research firms and asked for data on a trial, they generally produced an overwhelming amount of paper: records of failed tests, meticulous explanations of how the chemists had made adjustments, and more.

Cetero’s records, by contrast, showed very clean, error-free procedures. As Stone and his colleagues dug through the data, though, they often found gaps. When pressed, Cetero officials would often produce additional data — data that ought to have been in the files originally handed over to the FDA.

Stone said, “We should have looked back and said, ‘Wait a minute, there’s always something missing from the studies from here. Why?'”

One reason, the FDA would determine, was that Cetero’s chemists were taking shortcuts and other actions prohibited by the FDA’s Good Laboratory Practice guidelines, which set out such matters as how records must be kept and how tests must be performed.

Stone and his FDA colleagues might never have realized Cetero was engaging in misconduct if a whistleblower hadn’t stepped forward.

Cashton J. Briscoe operated a liquid chromatography-tandem mass spectrometry device, or “mass spec,” a sensitive machine that measures the concentration of a drug in the blood.

He took blood samples prepared by Cetero chemists and used mass specs to perform “runs” — tests to see how much of a drug is in patients’ blood — that must always be performed with control samples. Often those controls show readings that are clearly wrong, and chemists have to abort runs, document the failure, recalibrate the machines, and redo the whole process.

But Cetero paid its Houston chemists based on how many runs they completed in a day. Some chemists doubled or even tripled their income by squeezing in extra tests, according to time sheets entered as evidence in a lawsuit filed in U.S. District Court in Houston by six chemists seeking overtime payments. Briscoe thought several chemists were cutting corners — by using the control-sample readings from one run in other runs, for example.

Attorney Scheineson, who represented Cetero during the FDA’s investigation, acknowledged that the Houston lab’s compensation system was “crappy” and that a handful of “dishonest” chemists at the Houston facility committed fraud.

In April 2009, Briscoe blew the whistle in a letter to the company written by his lawyer, reporting that “many of the chemists were manipulating and falsifying data.” Soon thereafter, Briscoe told the company that he had documented the misconduct. According to Stone and documents reviewed by ProPublica, Briscoe had photographic evidence that mass spec operators had switched the quality control samples between different runs; before-and-after copies of documents with the dates and other material changed; and information about a shadow computer filing system, where data from failed runs could be stored out of sight of FDA inspectors.

On June 5, apparently frustrated with Cetero’s response, Briscoe went a step further and called the FDA’s Dallas office. He agreed to meet Stone the following Monday, but never showed. Stone called him, as did other FDA officials, but Briscoe had changed his mind and clammed up.

Still, Stone’s brief phone conversation with Briscoe reminded the agent of all those suspiciously clean records he had seen at Cetero over the years. “Now that you have a bigger picture,” Stone recalled, “you’re like, ‘Oh, some of this stuff is cooked.'”

Two days after Stone’s aborted meeting with Briscoe, Cetero informed the FDA that an employee had made allegations of misconduct and that the company had hired an outside auditor to review five years’ worth of data. That led to months of back-and-forth between the agency and Cetero that culminated when Stone and his inspectors arrived in Houston in May 2010.

Two teams of FDA investigators eventually confirmed Briscoe’s main allegations and cited the company for falsifying records and other violations of Good Laboratory Practice. The net effect of the misconduct was far-reaching, agency officials wrote in a July 2011 letter:

“The pervasiveness and egregious nature of the violative practices by your firm has led FDA to have significant concerns that the bioequivalence and bioavailability data generated at the Cetero Houston facility from April 1, 2005, to June 15, 2010 … are unreliable.”

Bioequivalence studies measure whether a generic drug acts the same in the body as the name-brand drug; bioavailability studies measure how much drug gets into a patient’s system.

The FDA’s next step was to try to determine which drugs were implicated — information the agency couldn’t glean from its own records.

“We couldn’t really tell — because most of the applications we get are in paper — which studies were actually linked to the key studies in an application without asking the application holders,” the FDA’s Woodcock said. “So we asked the application holders,” meaning the drug manufacturers.

In the interim, the FDA continued to investigate processes and procedures at Cetero.

“We put their operations under a microscope,” said Woodcock. A team of clinical pharmacologists, statisticians and IT experts conducted a risk analysis of the problems at Cetero, she said, and they “concluded that the risk of a misleading result was very low given how the studies were done, how the data were captured and so forth.”

In April 2012, nearly three years after Briscoe first alerted the FDA to problems at Cetero, and nearly two years after Cetero handed over its documentation to inspectors, the FDA entered into a final agreement with the company. Drug makers would need to redo tests conducted at the company’s Houston facility between April 1, 2005 and Feb. 28, 2008, if those studies had been part of a drug application submitted to the FDA. If stored blood samples were still usable, they could be reanalyzed. If not, the entire study would need to be repeated, the FDA said. The agency set a deadline of six months.

Cetero tests done between March 1, 2008 and Aug. 31, 2009 would be accepted only if they were accompanied by an independent data integrity audit.

Analyses done after Sept. 1, 2009 would not require retesting. The FDA said that Cetero had issued a written directive on Sept. 1, 2009, ordering one kind of misconduct to stop, which was why it did not require any action on Cetero Houston studies after that date. According to public documents, however, the agency’s inspectors “found continued deficiencies” that persisted into December 2010.

In response to questions, the FDA said the problem period “was subsequently narrowed as more information regarding Cetero’s practices became available.”

A year after concluding its final agreement with Cetero, the FDA’s review is still not finished. “Without the process being public it’s hard to know, but it seems that this has been going on for too long,” said Kessler, the former FDA chief.

“The process has been long,” the FDA said, “because of the number of products involved and our wish to be thorough and accurate in both our requests for and our review of the data.”

Cetero’s attorney Scheineson said the FDA scaled back its requirements because it finally talked with company officials. He noted that Cetero had tried repeatedly to talk with the FDA before the agency issued its strongly worded July 2011 letter, and that more than 1,000 employees have since lost their jobs.

“If you would get an honest assessment from the leaders of the agency,” he said, “I think in retrospect they would have argued that this was overkill here and that they should have had input from the company before essentially going public with that death sentence.”

“I’m not sure what is meant by ‘death sentence,'” an FDA spokesperson wrote in response, “but our first priority was and is patient safety and we proceeded to conduct the investigation toward that objective.”

The FDA’s Stone draws little satisfaction from unraveling the problems at Cetero.

There are thousands of bioequivalence studies done every year, he pointed out, with each study generating thousands of pages of paper records. “Do you really think we’re going to look at 100 percent of them? We’re going to look at maybe 5 percent if we’re lucky,” he said. “Sometimes 1 percent.”

Still, given how often he and other FDA teams had inspected the Houston lab, he thinks regulators should have spotted Cetero’s misconduct sooner.

“In hindsight I look back and I’m like, ‘Wow, should I be proud of this?'” he said. “It’s cool that I was part of it, but it’s crap that we didn’t catch it five years ago. How could we let this go so long?”

Rob Garver can be reached at rob.garver@propublica.org, and Charles Seife can be reached at cgseife@nasw.org.

Research assistance for this story was contributed by Nick Stockton, Christine Kelly, Lily Newman, Joss Fong and Sarah Jacoby of the Science, Health, and Environmental Reporting Program at NYU.

http://www.scientificamerican.com/article.cfm?id=fda-let-drugs-approved-on-fraudulent-research-stay-on-market

Placebos Work Better for Nice People

Having an agreeable personality might make you popular at work and lucky in love. It may also enhance your brain’s built-in painkilling powers, boosting the placebo effect.

Researchers at the University of Michigan, the University of North Carolina and the University of Maryland administered standard personality tests to 50 healthy volunteers, identifying general traits such as resiliency, straightforwardness, altruism and hostility. Each volunteer then received a painful injection, followed by a placebo—a sham painkiller. The volunteers who were resilient, straightforward or altruistic experienced a greater reduction in pain from the placebo compared with volunteers who had a so-called angry hostility personality trait.

http://www.scientificamerican.com/article.cfm?id=placebos-work-better-for-nice-peopl

China Resorting To Canned Air Because Pollution Is So Bad

Chinese entrepreneur Chen Guangbiao has launched a line of canned air for the Chinese market, to give people something to breathe that isn’t the smog-filled Beijing air.

Chen, a billionaire who has become known for his stunts, is selling the product to bring more attention to the problems of pollution in China:

It comes with atmospheric flavours including pristine Tibet, post-industrial Taiwan and revolutionary Yan’an, the Communist Party’s early base area.

Chen said he wanted to make a point that China’s air was turning so bad that the idea of bottled fresh air is no longer fanciful.

“If we don’t start caring for the environment then after 20 or 30 years our children and grandchildren might be wearing gas masks and carry oxygen tanks,” said Chen.

http://www.scientificamerican.com/article.cfm?id=china-resorting-to-canned-air-becau-2013-01

Origin of the myth that we only use 10% of our brains

The human brain is complex. Along with performing millions of mundane acts, it composes concertos, issues manifestos and comes up with elegant solutions to equations. It’s the wellspring of all human feelings, behaviors, experiences as well as the repository of memory and self-awareness. So it’s no surprise that the brain remains a mystery unto itself.

Adding to that mystery is the contention that humans “only” employ 10 percent of their brain. If only regular folk could tap that other 90 percent, they too could become savants who remember π to the twenty-thousandth decimal place or perhaps even have telekinetic powers.

Though an alluring idea, the “10 percent myth” is so wrong it is almost laughable, says neurologist Barry Gordon at Johns Hopkins School of Medicine in Baltimore. Although there’s no definitive culprit to pin the blame on for starting this legend, the notion has been linked to the American psychologist and author William James, who argued in The Energies of Men that “We are making use of only a small part of our possible mental and physical resources.” It’s also been associated with Albert Einstein, who supposedly used it to explain his own towering intellect.

The myth’s durability, Gordon says, stems from people’s conceptions about their own brains: they see their own shortcomings as evidence of the existence of untapped gray matter. This is a false assumption. What is correct, however, is that at certain moments in anyone’s life, such as when we are simply at rest and thinking, we may be using only 10 percent of our brains.

“It turns out though, that we use virtually every part of the brain, and that [most of] the brain is active almost all the time,” Gordon adds. “Let’s put it this way: the brain represents three percent of the body’s weight and uses 20 percent of the body’s energy.”

The average human brain weighs about three pounds and comprises the hefty cerebrum, which is the largest portion and performs all higher cognitive functions; the cerebellum, responsible for motor functions, such as the coordination of movement and balance; and the brain stem, dedicated to involuntary functions like breathing. The majority of the energy consumed by the brain powers the rapid firing of millions of neurons communicating with each other. Scientists think it is such neuronal firing and connecting that gives rise to all of the brain’s higher functions. The rest of its energy is used for controlling other activities—both unconscious activities, such as heart rate, and conscious ones, such as driving a car.

Although it’s true that not all of the brain’s regions are firing at any given moment, brain researchers using imaging technology have shown that, like the body’s muscles, most are continually active over a 24-hour period. “Evidence would show over a day you use 100 percent of the brain,” says John Henley, a neurologist at the Mayo Clinic in Rochester, Minn. Even in sleep, areas such as the frontal cortex, which controls things like higher level thinking and self-awareness, or the somatosensory areas, which help people sense their surroundings, are active, Henley explains.

Take the simple act of pouring coffee in the morning: In walking toward the coffeepot, reaching for it, pouring the brew into the mug, even leaving extra room for cream, the occipital and parietal lobes, motor sensory and sensory motor cortices, basal ganglia, cerebellum and frontal lobes all activate. A lightning storm of neuronal activity occurs almost across the entire brain in the time span of a few seconds.

“This isn’t to say that if the brain were damaged that you wouldn’t be able to perform daily duties,” Henley continues. “There are people who have injured their brains or had parts of it removed who still live fairly normal lives, but that is because the brain has a way of compensating and making sure that what’s left takes over the activity.”

Being able to map the brain’s various regions and functions is part and parcel of understanding the possible side effects should a given region begin to fail. Experts know that neurons that perform similar functions tend to cluster together. For example, neurons that control the thumb’s movement are arranged next to those that control the forefinger. Thus, when undertaking brain surgery, neurosurgeons carefully avoid neural clusters related to vision, hearing and movement, enabling the brain to retain as many of its functions as possible.

What’s not understood is how clusters of neurons from the diverse regions of the brain collaborate to form consciousness. So far, there’s no evidence that there is one site for consciousness, which leads experts to believe that it is truly a collective neural effort. Another mystery hidden within our crinkled cortices is that out of all the brain’s cells, only 10 percent are neurons; the other 90 percent are glial cells, which encapsulate and support neurons, but whose function remains largely unknown. Ultimately, it’s not that we use 10 percent of our brains, merely that we only understand about 10 percent of how it functions.

http://www.scientificamerican.com/article.cfm?id=people-only-use-10-percent-of-brain&page=2

Thanks to SRW for bringing this to the attention of the It’s Interesting community.

The myth of antioxidants and vitamins?

The hallowed notion that oxidative damage causes aging and that vitamins might preserve our youth is now in doubt.

•For decades researchers assumed that highly reactive molecules called free radicals caused aging by damaging cells and thus undermining the functioning of tissues and organs.
•Recent experiments, however, show that increases in certain free radicals in mice and worms correlate with longer life span. Indeed, in some circumstances, free radicals seem to signal cellular repair networks.
•If these results are confirmed, they may suggest that taking antioxidants in the form of vitamins or other supplements can do more harm than good in otherwise healthy individuals.

David Gems’s life was turned upside down in 2006 by a group of worms that kept on living when they were supposed to die. As assistant director of the Institute of Healthy Aging at University College London, Gems regularly runs experiments on Caenorhabditis elegans, a roundworm that is often used to study the biology of aging. In this case, he was testing the idea that a buildup of cellular damage caused by oxidation—technically, the chemical removal of electrons from a molecule by highly reactive compounds, such as free radicals—is the main mechanism behind aging. According to this theory, rampant oxidation mangles more and more lipids, proteins, snippets of DNA and other key components of cells over time, eventually compromising tissues and organs and thus the functioning of the body as a whole.

Gems genetically engineered the roundworms so they no longer produced certain enzymes that act as naturally occurring antioxidants by deactivating free radicals. Sure enough, in the absence of the antioxidants, levels of free radicals in the worms skyrocketed and triggered potentially damaging oxidative reactions throughout the worms’ bodies.

Contrary to Gems’s expectations, however, the mutant worms did not die prematurely. Instead they lived just as long as normal worms did. The researcher was mystified. “I said, ‘Come on, this can’t be right,’” he recalls. “‘Obviously something’s gone wrong here.’” He asked another investigator in his laboratory to check the results and do the experiment again. Nothing changed. The experimental worms did not produce these particular antioxidants; they accumulated free radicals as predicted, and yet they did not die young—despite suffering extreme oxidative damage.

Other scientists were finding similarly confounding results in different lab animals. In the U.S., Arlan Richardson, director of the Barshop Institute for Longevity and Aging Studies at the University of Texas Health Science Center in San Antonio, genetically engineered 18 different strains of mice, some of which produced more of certain antioxidant enzymes than normal and some of which produced fewer of them than normal. If the damage caused by free radical production and subsequent oxidation was responsible for aging, then the mice with extra antioxidants in their bodies should have lived longer than the mice missing their antioxidant enzymes. Yet “I watched those goddamn life span curves, and there was not an inch of difference between them,” Richardson says. He published his increasingly bewildering results in a series of papers between 2001 and 2009.

Meanwhile, a few doors down the hall from Richardson, physiologist Rochelle Buffenstein has spent the past 11 years trying to understand why the longest-living rodent, the naked mole rat, is able to live 25 to 30 years—around eight times longer than a similarly sized mouse. Buffenstein’s experiments have shown that naked mole rats possess lower levels of natural antioxidants than mice and accumulate more oxidative damage to their tissues at an earlier age than other rodents. Yet paradoxically, they live virtually disease-free until they die at a very old age.

To proponents of the long-standing oxidative damage theory of aging, these findings are nothing short of heretical. They are, however, becoming less the exception and more the rule. Over the course of the past decade, many experiments designed to further support the idea that free radicals and other reactive molecules drive aging have instead directly challenged it. What is more, it seems that in certain amounts and situations, these high-energy molecules may not be dangerous but useful and healthy, igniting intrinsic defense mechanisms that keep our bodies in tip-top shape. These ideas not only have drastic implications for future antiaging interventions, but they also raise questions about the common wisdom of popping high doses of antioxidant vitamins. If the oxidative-damage theory is wrong, then aging is even more complicated than researchers thought—and they may ultimately need to revise their understanding of what healthy aging looks like on the molecular level.

“The field of aging has been gliding along on this set of paradigms, ideas about what aging is, that to some extent were kind of plucked out of the air,” Gems says. “We should probably be looking at other theories as well and considering, fundamentally, that we might have to look completely differently at biology.”

The Birth of a Radical Theory
The oxidative damage, or free radical, theory of aging can be traced back to Denham Harman, who found his true calling in December 1945, thanks to the Ladies’ Home Journal. His wife, Helen, brought a copy of the magazine home and pointed out an article on the potential causes of aging, which he read. It fascinated him.

Back then, the 29-year-old chemist was working at Shell Development, the research arm of Shell Oil, and he did not have much time to ponder the issue. Yet nine years later, after graduating from medical school and completing his training, he took a job as a research associate at the University of California, Berkeley, and began contemplating the science of aging more seriously. One morning while sitting in his office, he had an epiphany—“you know just ‘out the blue,’” he recalled in a 2003 interview: aging must be driven by free radicals.

Although free radicals had never before been linked to aging, it made sense to Harman that they might be the culprit. For one thing, he knew that ionizing radiation from x-rays and radioactive bombs, which can be deadly, sparks the production of free radicals in the body. Studies at the time suggested that diets rich in food-based antioxidants muted radiation’s ill effects, suggesting—correctly, as it turned out—that the radicals were a cause of those effects. Moreover, free radicals were normal by-products of breathing and metabolism and built up in the body over time. Because both cellular damage and free radical levels increased with age, free radicals probably caused the damage that was responsible for aging, Harman thought—and antioxidants probably slowed it.

Harman started testing his hypothesis. In one of his first experiments, he fed mice antioxidants and showed that they lived longer. (At high concentrations, however, the antioxidants had deleterious effects.) Other scientists soon began testing it, too. In 1969 researchers at Duke University discovered the first antioxidant enzyme produced inside the body—superoxide dismutase—and speculated that it evolved to counter the deleterious effects of free radical accumulation. With these new data, most biologists began accepting the idea. “If you work in aging, it’s like the air you breathe is the free radical theory,” Gems says. “It’s ubiquitous, it’s in every textbook. Every paper seems to refer to it either indirectly or directly.”

Still, over time scientists had trouble replicating some of Harman’s experimental findings. By the 1970s “there wasn’t a robust demonstration that feeding animals antioxidants really had an effect on life span,” Richardson says. He assumed that the conflicting experiments—which had been done by other scientists—simply had not been controlled very well. Perhaps the animals could not absorb the antioxidants that they had been fed, and thus the overall level of free radicals in their blood had not changed. By the 1990s, however, genetic advances allowed scientists to test the effects of antioxidants in a more precise way—by directly manipulating genomes to change the amount of antioxidant enzymes animals were capable of producing. Time and again, Richardson’s experiments with genetically modified mice showed that the levels of free radical molecules circulating in the animals’ bodies—and subsequently the amount of oxidative damage they endured—had no bearing on how long they lived.

More recently, Siegfried Hekimi, a biologist at McGill University, has bred roundworms that overproduce a specific free radical known as superoxide. “I thought they were going to help us prove the theory that oxidative stress causes aging,” says Hekimi, who had predicted that the worms would die young. Instead he reported in a 2010 paper in PLOS Biology that the engineered worms did not develop high levels of oxidative damage and that they lived, on average, 32 percent longer than normal worms. Indeed, treating these genetically modified worms with the antioxidant vitamin C prevented this increase in life span. Hekimi speculates that superoxide acts not as a destructive molecule but as a protective signal in the worms’ bodies, turning up the expression of genes that help to repair cellular damage.

In a follow-up experiment, Hekimi exposed normal worms, from birth, to low levels of a common weed-controlling herbicide that initiates free radical production in animals as well as plants. In the same 2010 paper he reported the counterintuitive result: the toxin-bathed worms lived 58 percent longer than untreated worms. Again, feeding the worms antioxidants quenched the toxin’s beneficial effects. Finally, in April 2012, he and his colleagues showed that knocking out, or deactivating, all five of the genes that code for superoxide dismutase enzymes in worms has virtually no effect on worm life span.

Do these discoveries mean that the free radical theory is flat-out wrong? Simon Melov, a biochemist at the Buck Institute for Research on Aging in Novato, Calif., believes that the issue is unlikely to be so simple; free radicals may be beneficial in some contexts and dangerous in others. Large amounts of oxidative damage have indisputably been shown to cause cancer and organ damage, and plenty of evidence indicates that oxidative damage plays a role in the development of some chronic conditions, such as heart disease. In addition, researchers at the University of Washington have demonstrated that mice live longer when they are genetically engineered to produce high levels of an antioxidant known as catalase. Saying that something, like oxidative damage, contributes to aging in certain instances, however, is “a very different thing than saying that it drives the pathology,” Melov notes. Aging probably is not a monolithic entity with a single cause and a single cure, he argues, and it was wishful thinking to ever suppose it was one.

Shifting Perspective
Assuming free radicals accumulate during aging but do not necessarily cause it, what effects do they have? So far that question has led to more speculation than definitive data.

“They’re actually part of the defense mechanism,” Hekimi asserts. Free radicals might, in some cases, be produced in response to cellular damage—as a way to signal the body’s own repair mechanisms, for example. In this scenario, free radicals are a consequence of age-related damage, not a cause of it. In large amounts, however, Hekimi says, free radicals may create damage as well.

The general idea that minor insults might help the body withstand bigger ones is not new. Indeed, that is how muscles grow stronger in response to a steady increase in the amount of strain placed on them. Many occasional athletes, on the other hand, have learned from painful firsthand experience that abruptly increasing the physical demands on the body after a long week at an office desk is almost guaranteed to lead to pulled calves and hamstrings, among other injuries.

In 2002 researchers at the University of Colorado at Boulder briefly exposed worms to heat or to chemicals that induced the production of free radicals, showing that the environmental stressors each boosted the worms’ ability to survive larger insults later. The interventions also increased the worms’ life expectancy by 20 percent. It is unclear how these interventions affected overall levels of oxidative damage, however, because the investigators did not assess these changes. In 2010 researchers at the University of California, San Francisco, and Pohang University of Science and Technology in South Korea reported in Current Biology that some free radicals turn on a gene called HIF-1 that is itself responsible for activating a number of genes involved in cellular repair, including one that helps to repair mutated DNA.

Free radicals may also explain in part why exercise is beneficial. For years researchers assumed that exercise was good in spite of the fact that it produces free radicals, not because of it. Yet in a 2009 study published in the Proceedings of the National Academy of Sciences USA, Michael Ristow, a nutrition professor at the Friedrich Schiller University of Jena in Germany, and his colleagues compared the physiological profiles of exercisers who took antioxidants with exercisers who did not. Echoing Richardson’s results in mice, Ristow found that the exercisers who did not pop vitamins were healthier than those who did; among other things, the unsupplemented athletes showed fewer signs that they might develop type 2 diabetes. Research by Beth Levine, a microbiologist at the University of Texas Southwestern Medical Center, has shown that exercise also ramps up a biological process called autophagy, in which cells recycle worn-out bits of proteins and other subcellular pieces. The tool used to digest and disassemble the old molecules: free radicals. Just to complicate matters a bit, however, Levine’s research indicates that autophagy also reduces the overall level of free radicals, suggesting that the types and amounts of free radicals in different parts of the cell may play various roles, depending on the circumstances.

The Antioxidant Myth
If free radicals are not always bad, then their antidotes, antioxidants, may not always be good—a worrisome possibility given that 52 percent of Americans take considerable doses of antioxidants daily, such as vitamin E and beta-carotene, in the form of multivitamin supplements. In 2007 the Journal of the American Medical Association published a systematic review of 68 clinical trials, which concluded that antioxidant supplements do not reduce risk of death. When the authors limited their review to the trials that were least likely to be affected by bias—those in which assignment of participants to their research arms was clearly random and neither investigators nor participants knew who was getting what pill, for instance—they found that certain antioxidants were linked to an increased risk of death, in some cases by up to 16 percent.

Several U.S. organizations, including the American Heart Association and the American Diabetes Association, now advise that people should not take antioxidant supplements except to treat a diagnosed vitamin deficiency. “The literature is providing growing evidence that these supplements—in particular, at high doses—do not necessarily have the beneficial effects that they have been thought to,” says Demetrius Albanes, a senior investigator at the Nutritional Epidemiology Branch of the National Cancer Institute. Instead, he says, “we’ve become acutely aware of potential downsides.”

It is hard to imagine, however, that antioxidants will ever fall out of favor completely—or that most researchers who study aging will become truly comfortable with the idea of beneficial free radicals without a lot more proof. Yet slowly, it seems, the evidence is beginning to suggest that aging is far more intricate and complex than Harman imagined it to be nearly 60 years ago. Gems, for one, believes the evidence points to a new theory in which aging stems from the overactivity of certain biological processes involved in growth and reproduction. But no matter what idea (or ideas) scientists settle on, moving forward, “the constant drilling away of scientists at the facts is shifting the field into a slightly stranger, but a bit more real, place,” Gems says. “It’s an amazing breath of fresh air.”

http://www.nature.com/scientificamerican/journal/v308/n2/full/scientificamerican0213-62.html

In the Flesh: The Embedded Dangers of Untested Stem Cell Cosmetics

When cosmetic surgeon Allan Wu first heard the woman’s complaint, he wondered if she was imagining things or making it up. A resident of Los Angeles in her late sixties, she explained that she could not open her right eye without considerable pain and that every time she forced it open, she heard a strange click—a sharp sound, like a tiny castanet snapping shut. After examining her in person at The Morrow Institute in Rancho Mirage, Calif., Wu could see that something was wrong: Her eyelid drooped stubbornly, and the area around her eye was somewhat swollen. Six and a half hours of surgery later, he and his colleagues had dug out small chunks of bone from the woman’s eyelid and tissue surrounding her eye, which was scratched but largely intact. The clicks she heard were the bone fragments grinding against one another.

About three months earlier the woman had opted for a relatively new kind of cosmetic procedure at a different clinic in Beverly Hills—a face-lift that made use of her own adult stem cells. First, cosmetic surgeons had removed some of the woman’s abdominal fat with liposuction and isolated the adult stem cells within—a family of cells that can make many copies of themselves in an immature state and can develop into several different kinds of mature tissue. In this case the doctors extracted mesenchymal stem cells—which can turn into bone, cartilage or fat, among other tissues—and injected those cells back into her face, especially around her eyes. The procedure cost her more than $20,000, Wu recollects. Such face-lifts supposedly rejuvenate the skin because stem cells turn into brand-new tissue and release chemicals that help heal aging cells and stimulate nearby cells to proliferate.

During the face-lift her clinicians had also injected some dermal filler, which plastic surgeons have safely used for more than 20 years to reduce the appearance of wrinkles. The principal component of such fillers is calcium hydroxylapatite, a mineral with which cell biologists encourage mesenchymal stem cells to turn into bone—a fact that escaped the woman’s clinicians. Wu thinks this unanticipated interaction explains her predicament. He successfully removed the pieces of bone from her eyelid in 2009 and says she is doing well today, but some living stem cells may linger in her face. These cells could turn into bone or other out-of-place tissues once again.

Dozens, perhaps hundreds, of clinics across the country offer a variety of similar, untested stem cell treatments for both cosmetic and medical purposes. Costing between $3,000 and $30,000, the treatments promise to alleviate everything from wrinkles to joint pain to autism. The U.S. Food and Drug Administration (FDA) has not approved any of these treatments and, with a limited budget, is struggling to keep track of all the unapproved therapies on the market. At the same time, pills, oils, creams and moisturizers that allegedly contain the right combination of ingredients to mobilize the body’s resident stem cells, or contain chemicals extracted from the stem cells in plants and animals, are popping up in pharmacies and online. There’s Stem Cell 100, for example, MEGA STEM and Apple Stem Cell Cloud Cream. Few of these cosmetics have been properly tested in published experiments, yet the companies that manufacture them say they may heal damaged organs, slow or reverse natural aging, restore youthful energy and revitalize the skin. Whether such cosmetics may also produce unintended and potentially harmful effects remains largely unexamined. The increasing number of untested and unauthorized stem cell treatments threatens both the people who buy them and researchers hoping to conduct clinical trials for promising stem cell medicines.

So far, the FDA has approved only one stem cell treatment: a transplant of bone marrow stem cells for people with the blood cancer leukemia. Among the increasing number of unapproved stem cell treatments, some clearly violate the FDA’s regulations whereas others may technically be legal without its approval. In July 2012, for example, the U.S. District Court upheld an injunction brought by the FDA against Colorado-based Regenerative Sciences to regulate just one of the company’s several stem cell treatments for various joint injuries as an “unapproved biological drug product.” The decision hinged on what constitutes “minimal manipulation” of cells in the lab before they are injected into patients. In the treatment that the FDA won the right to regulate, stem cells are grown and modified in the lab for several weeks before they are returned to patients; in Regenerative Sciences’s other treatments, patients’ stem cells are extracted and injected within a day or two. Regenerative Sciences now offers the legally problematic treatment at a Cayman Islands facility.

Many stem cell cosmetics reside in a legal gray area. Unlike drugs and “biologics” made from living cells and tissues, cosmetics do not require premarket approval from the FDA. But stem cell cosmetics often satisfy the FDA’s definitions for both cosmetics and drugs. In September 2012 the FDA posted a letter on its Web site warning Lancôme, a division of L’Oréal, that the way it describes its Genifique skin care products qualifies the creams and serums as unapproved drugs: they are supposed to “boost the activity of genes,” for example, and “improve the condition of stem cells.” Other times the difference between needing or not needing FDA approval comes down to linguistic nuance—the difference between claiming that a product does something or appears to do something.

Personal Cell Sciences, in Eatontown, N.J., sells some of the more sophisticated stem cell–based cosmetics: an eye cream, moisturizer and serum infused with chemicals derived from a consumer’s own stem cells. According to its website and marketing materials, these products help “make skin more supple and radiant,” “reduce the appearance of fine lines and wrinkles around the eyes and lips,” “improve cellular renewal” and “stimulate cell turnover for renewed texture and tone.” In exchange for $3,000, Personal Cell Sciences will arrange for a participating physician to vacuum about 60 cubic centimeters (one quarter cup) of a customer’s fat from beneath his or her skin and ship it on ice to American CryoStem Corp. in Red Bank, N.J., where laboratory technicians isolate and grow the customer’s mesenchymal stem cells to around 30 million strong. Half these cells are frozen for storage; from the other half, technicians harvest hundreds of different kinds of exuded growth factors and cytokines—molecules that help heal damaged cells and encourage cells to divide, among other functions. These molecules are mixed with many other ingredients—including green tea extract, caffeine and vitamins—to create the company’s various “U Autologous” skin care products, which are then sold back to the consumer for between $400 and $800. When the customer wants a refill, technicians thaw some of the frozen cells, collect more cytokines and produce new bottles of cream.

In an unpublished safety trial sponsored by Personal Cell Sciences, Frederic Stern of the Stern Center for Aesthetic Surgery in Bellevue, Wash., and his colleagues monitored 19 patients for eight weeks as they used the U Autologous products on the left sides of their faces. A computer program meant to objectively analyze photos of the volunteers’ faces measured an average 25.6 percent reduction in the volume of wrinkles on the treated side of the face. Analysis of tissue biopsies revealed increased levels of the protein elastin, which helps keep skin taut, and no signs of unusual or cancerous cell growth.

Supposedly, the primary active ingredients in the U Autologous skin care products are the hundreds of different kinds of cytokines they contain. Cytokines are a large and diverse family of proteins that cells release to communicate with and influence one another. Cytokines can stimulate cell division or halt it; they can suppress the immune system or provoke it; they can also change a cell’s shape, modulate its metabolism and force it to migrate from one location to another like a cowboy corralling cattle. Researchers have only named and characterized some of the many cytokines that stem cells secrete. Some of these molecules certainly help repair damaged cells and promote cell survival. Others seem to be involved in the development of tumors. In fact, some recent evidence suggests that the cytokines released by mesenchymal stem cells can trigger tumors by accelerating the growth of dormant cancer cells. Personal Cell Sciences does not pick and choose among the cytokines exuded by its customers’ stem cells—instead, it dumps them all into its skin care products.

Based on the available evidence so far, topical creams containing cytokines from stem cells pose far less risk of cancer than living stem cells injected beneath the skin. But scientists do not yet know enough about stem cell cytokines to reliably predict everything they will do when rubbed into the skin; they could interact with healthy skin cells in a completely unexpected way, just as the unintended interplay between calcium hydroxylapatite and stem cells produced bones in the Los Angeles woman’s eye. Stern acknowledges that unusual tissue growth is a concern for any treatment based on stem cells and the chemicals they release. “Down the line, we want to continue watching that,” he says. Unlike many other clinics, he and his colleagues have been keeping tabs on their patients through regular follow-ups. John Arnone, CEO of American CryoStem and founder of Personal Cell Sciences, says the fact that U Autologous skin care products contain such a diversity of cytokines does not bother him: “I’ve seen worse things out there. I’ve been putting this formulation for almost a year on myself prior to the study. I’m the best guinea pig here.”

Beyond the considerable risks to consumers, unapproved stem cell treatments also threaten the progress of basic research and clinical trials needed to establish safe stem cell therapies for serious illnesses. By harvesting stem cells, subsequently nourishing them in the lab and transplanting them back inside the human body, scientists hope to improve treatment for a variety of medical conditions, including heart failure, neurodegenerative disorders like Parkinson’s, and spinal cord injuries—essentially any condition in which the body needs new cells and tissues. Researchers are investigating many stem cell therapies in ongoing, carefully controlled clinical trials. Some of the principal questions entail which of the many kinds of stem cells to use; how to safely deliver stem cells to patients without stimulating tumors or the growth of unwanted tissues; and how to prevent the immune system from attacking stem cells provided by a donor. Securing funding for such research becomes all the more difficult if shortcuts taken by private clinics and cosmetic manufacturers—and the subsequent botched procedures and unanticipated consequences—imprint a stigma on stem cells.

“Many of us are super excited about stem cells, but at the same time we have to be really careful,” says Paul Knoepfler, a cell biologist at the University of California, Davis, who regularly blogs about the regulation of stem cell treatments. “These aren’t your typical drugs. You can stop taking a pill and the chemicals go away. But if you get stem cells, most likely you will have some of those cells or their effects for the rest of your life. And we simply don’t know everything they are going to do.”

https://www.scientificamerican.com/article.cfm?id=stem-cell-cosmetics&WT.mc_id=SA_emailfriend

Thanks to Dr. Nakamura for bringing this to the attention of the It’s Interesting community.

After 30 Years, Supersymmetry Fails Test and Is Forcing Physicists to Seek New Ideas

As a young theorist in Moscow in 1982, Mikhail Shifman became enthralled with an elegant new theory called supersymmetry that attempted to incorporate the known elementary particles into a more complete inventory of the universe.

“My papers from that time really radiate enthusiasm,” said Shifman, now a 63-year-old professor at the University of Minnesota. Over the decades, he and thousands of other physicists developed the supersymmetry hypothesis, confident that experiments would confirm it. “But nature apparently doesn’t want it,” he said. “At least not in its original simple form.”

With the world’s largest supercollider unable to find any of the particles the theory says must exist, Shifman is joining a growing chorus of researchers urging their peers to change course.

In an essay posted last month on the physics website arXiv.org, Shifman called on his colleagues to abandon the path of “developing contrived baroque-like aesthetically unappealing modifications” of supersymmetry to get around the fact that more straightforward versions of the theory have failed experimental tests. The time has come, he wrote, to “start thinking and developing new ideas.”

But there is little to build on. So far, no hints of “new physics” beyond the Standard Model — the accepted set of equations describing the known elementary particles — have shown up in experiments at the Large Hadron Collider, operated by the European research laboratory CERN outside Geneva, or anywhere else. (The recently discovered Higgs boson was predicted by the Standard Model.) The latest round of proton-smashing experiments, presented earlier this month at the Hadron Collider Physics conference in Kyoto, Japan, ruled out another broad class of supersymmetry models, as well as other theories of “new physics,” by finding nothing unexpected in the rates of several particle decays.

“Of course, it is disappointing,” Shifman said. “We’re not gods. We’re not prophets. In the absence of some guidance from experimental data, how do you guess something about nature?”

Younger particle physicists now face a tough choice: follow the decades-long trail their mentors blazed, adopting ever more contrived versions of supersymmetry, or strike out on their own, without guidance from any intriguing new data.

“It’s a difficult question that most of us are trying not to answer yet,” said Adam Falkowski, a theoretical particle physicist from the University of Paris-South in Orsay, France, who is currently working at CERN. In a blog post about the recent experimental results, Falkowski joked that it was time to start applying for jobs in neuroscience.

“There’s no way you can really call it encouraging,” said Stephen Martin, a high-energy particle physicist at Northern Illinois University who works on supersymmetry, or SUSY for short. “I’m certainly not someone who believes SUSY has to be right; I just can’t think of anything better.”

Supersymmetry has dominated the particle physics landscape for decades, to the exclusion of all but a few alternative theories of physics beyond the Standard Model.

“It’s hard to overstate just how much particle physicists of the past 20 to 30 years have invested in SUSY as a hypothesis, so the failure of the idea is going to have major implications for the field,” said Peter Woit, a particle theorist and mathematician at Columbia University.

The theory is alluring for three primary reasons: It predicts the existence of particles that could constitute “dark matter,” an invisible substance that permeates the outskirts of galaxies. It unifies three of the fundamental forces at high energies. And — by far the biggest motivation for studying supersymmetry — it solves a conundrum in physics known as the hierarchy problem.

The problem arises from the disparity between gravity and the weak nuclear force, which is about 100 million trillion trillion (10^32) times stronger than gravity and acts at much smaller scales to mediate interactions inside atomic nuclei. The particles that carry the weak force, called W and Z bosons, derive their masses from the Higgs field, a field of energy saturating all space. But it is unclear why the energy of the Higgs field, and therefore the masses of the W and Z bosons, isn’t far greater. Because other particles are intertwined with the Higgs field, their energies should spill into it during events known as quantum fluctuations. This should quickly drive up the energy of the Higgs field, making the W and Z bosons much more massive and rendering the weak nuclear force about as weak as gravity.

Supersymmetry solves the hierarchy problem by theorizing the existence of a “superpartner” twin for every elementary particle. According to the theory, fermions, which constitute matter, have superpartners that are bosons, which convey forces, and existing bosons have fermion superpartners. Because particles and their superpartners are of opposite types, their energy contributions to the Higgs field have opposite signs: One dials its energy up, the other dials it down. The pair’s contributions cancel out, resulting in no catastrophic effect on the Higgs field. As a bonus, one of the undiscovered superpartners could make up dark matter.
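This cancellation can be sketched in one line. The following is a standard textbook schematic rather than anything from the article itself; the couplings λ_f and λ_S and the cutoff Λ are the usual conventions of introductory supersymmetry reviews. A fermion loop pulls the squared Higgs mass down in proportion to the cutoff, while a scalar loop pushes it up:

```latex
% One-loop corrections to the Higgs mass-squared (schematic).
% A fermion with Yukawa coupling \lambda_f contributes with one sign,
% a scalar with quartic coupling \lambda_S with the other:
\Delta m_H^2 \;\supset\;
  -\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda^2
  \;+\; \frac{\lambda_S}{16\pi^2}\,\Lambda^2 .
% Supersymmetry pairs each fermion with two complex-scalar superpartners
% and enforces \lambda_S = |\lambda_f|^2, so the dangerous
% \Lambda^2 terms cancel exactly.
```

With exact supersymmetry the cutoff-squared pieces cancel pair by pair, which is the "one dials its energy up, the other dials it down" statement above.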

“Supersymmetry is such a beautiful structure, and in physics, we allow that kind of beauty and aesthetic quality to guide where we think the truth may be,” said Brian Greene, a theoretical physicist at Columbia University.

Over time, as the superpartners failed to materialize, supersymmetry has grown less beautiful. According to mainstream models, to evade detection, superpartner particles would have to be much heavier than their twins, replacing an exact symmetry with something like a carnival mirror. Physicists have put forward a vast range of ideas for how the symmetry might have broken, spawning myriad versions of supersymmetry.

But the breaking of supersymmetry can pose a new problem. “The heavier you have to make some of the superpartners compared to the existing particles, the more that cancellation of their effects doesn’t quite work,” Martin explained.
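Martin's caveat can be put in the same schematic terms (again a textbook-style estimate, not a formula from the article): once superpartners are split from their twins by a mass scale m_soft, the cancellation leaves a finite remainder that grows with that splitting:

```latex
% Residual Higgs mass correction after supersymmetry breaking (schematic):
\Delta m_H^2 \;\sim\;
  \frac{\lambda^2}{16\pi^2}\, m_{\mathrm{soft}}^2
  \,\ln\!\left(\frac{\Lambda}{m_{\mathrm{soft}}}\right)
% The heavier the superpartners (the larger m_soft), the larger the
% leftover correction, and the more fine-tuning is needed to keep
% the Higgs, and with it the W and Z bosons, light.
```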

Most particle physicists in the 1980s thought they would detect superpartners that are only slightly heavier than the known particles. But the Tevatron, the now-retired particle accelerator at Fermilab in Batavia, Ill., found no such evidence. As the Large Hadron Collider probes increasingly higher energies without any sign of supersymmetry particles, some physicists are saying the theory is dead. “I think the LHC was a last gasp,” Woit said.

Today, most of the remaining viable versions of supersymmetry predict superpartners so heavy that they would overpower the effects of their much lighter twins if not for fine-tuned cancellations between the various superpartners. But introducing fine-tuning in order to scale back the damage and solve the hierarchy problem makes some physicists uncomfortable. “This, perhaps, shows that we should take a step back and start thinking anew on the problems for which SUSY-based phenomenology was introduced,” Shifman said.

But some theorists are forging ahead, arguing that, in contrast to the beauty of the original theory, nature could just be an ugly combination of superpartner particles with a soupçon of fine-tuning. “I think it is a mistake to focus on popular versions of supersymmetry,” said Matt Strassler, a particle physicist at Rutgers University. “Popularity contests are not reliable measures of truth.”

In some of the less popular supersymmetry models, the lightest superpartners are not the ones the Large Hadron Collider experiments have looked for. In others, the superpartners are not heavier than existing particles but merely less stable, making them more difficult to detect. These theories will continue to be tested at the Large Hadron Collider after it is upgraded to full operational power in about two years.

If nothing new turns up — an outcome casually referred to as the “nightmare scenario” — physicists will be left with the same holes that riddled their picture of the universe three decades ago, before supersymmetry neatly plugged them. And, without an even higher-energy collider to test alternative ideas, Falkowski says, the field will undergo a slow decay: “The number of jobs in particle physics will steadily decrease, and particle physicists will die out naturally.”

Greene offers a brighter outlook. “Science is this wonderfully self-correcting enterprise,” he said. “Ideas that are wrong get weeded out in time because they are not fruitful or because they are leading us to dead ends. That happens in a wonderfully internal way. People continue to work on what they find fascinating, and science meanders toward truth.”

From Simons Science News.

http://www.scientificamerican.com/article.cfm?id=supersymmetry-fails-test-forcing-physics-seek-new-idea

Mother-Child Connection: Scientists Discover Children’s Cells Living in Mothers’ Brains, Including Male Cells Living in the Female Brain for Decades

The link between a mother and child is profound, and new research suggests a physical connection even deeper than anyone thought. The psychological and physical bonds shared by mother and child begin during gestation, when the mother is everything for the developing fetus, supplying warmth and sustenance while her heartbeat provides a soothing, constant rhythm.

The physical connection between mother and fetus is provided by the placenta, an organ built of cells from both the mother and fetus, which serves as a conduit for the exchange of nutrients, gases and wastes. Cells may also migrate through the placenta between the mother and the fetus, taking up residence in many organs of the body, including the lung, thyroid, muscle, liver, heart, kidney and skin. These migrant cells may have a broad range of impacts, from tissue repair and cancer prevention to sparking immune disorders.

It is remarkable how commonly cells from one individual integrate into the tissues of another distinct person. We are accustomed to thinking of ourselves as singular, autonomous individuals, and these foreign cells seem to belie that notion, suggesting that most people carry remnants of other individuals. As remarkable as this may be, stunning results from a new study show that cells from other individuals are also found in the brain. In this study, male cells were found in the brains of women and had been living there, in some cases, for several decades. What impact they may have had is still only a guess, but the study revealed that these cells were less common in the brains of women who had Alzheimer’s disease, suggesting they may be related to the health of the brain.

We each consider our body to be uniquely our own, so the notion that we may harbor cells from other people seems strange. Even stranger is the thought that, although we certainly consider our actions and decisions to originate in the activity of our own individual brains, cells from other individuals are living and functioning in that complex structure. However, the mixing of cells from genetically distinct individuals is not at all uncommon. This condition is called chimerism, after the fire-breathing Chimera of Greek mythology, a creature that was part serpent, part lion and part goat. Naturally occurring chimeras are far less ominous, though, and include such creatures as the slime mold and corals.

Microchimerism is the persistent presence of a few genetically distinct cells in an organism. This was first noticed in humans many years ago when cells containing the male “Y” chromosome were found circulating in the blood of women after pregnancy. Since these cells are genetically male, they could not have been the women’s own, but most likely came from their babies during gestation.

In this new study, scientists observed that microchimeric cells are not only found circulating in the blood, they are also embedded in the brain. They examined the brains of deceased women for the presence of cells containing the male “Y” chromosome. They found such cells in more than 60 percent of the brains and in multiple brain regions. Since Alzheimer’s disease is more common in women who have had multiple pregnancies, they suspected that the number of fetal cells would be greater in women with AD compared to those who had no evidence for neurological disease. The results were precisely the opposite: there were fewer fetal-derived cells in women with Alzheimer’s. The reasons are unclear.

Microchimerism most commonly results from the exchange of cells across the placenta during pregnancy; however, there is also evidence that cells may be transferred from mother to infant through nursing. In addition to exchange between mother and fetus, there may be exchange of cells between twins in utero, and there is also the possibility that cells from an older sibling residing in the mother may find their way back across the placenta to a younger sibling during the latter’s gestation. Women may have microchimeric cells both from their mothers and from their own pregnancies, and there is even evidence for competition between cells from grandmother and infant within the mother.

What it is that fetal microchimeric cells do in the mother’s body is unclear, although there are some intriguing possibilities. For example, fetal microchimeric cells are similar to stem cells in that they are able to become a variety of different tissues and may aid in tissue repair. One research group investigating this possibility followed the activity of fetal microchimeric cells in a mother rat after the maternal heart was injured: they discovered that the fetal cells migrated to the maternal heart and differentiated into heart cells, helping to repair the damage. In animal studies, microchimeric cells were found in maternal brains, where they became nerve cells, suggesting they might be functionally integrated in the brain. It is possible that the same may be true of such cells in the human brain.

These microchimeric cells may also influence the immune system. A fetal microchimeric cell from a pregnancy is recognized by the mother’s immune system partly as belonging to the mother, since the fetus is genetically half identical to the mother, but partly foreign, due to the father’s genetic contribution. This may “prime” the immune system to be alert for cells that are similar to the self, but with some genetic differences. Cancer cells, which arise due to genetic mutations, are just such cells, and there are studies which suggest that microchimeric cells may stimulate the immune system to stem the growth of tumors. Many more microchimeric cells are found in the blood of healthy women compared to those with breast cancer, for example, suggesting that microchimeric cells can somehow prevent tumor formation. In other circumstances, the immune system turns against the self, causing significant damage. Microchimerism is more common in patients suffering from multiple sclerosis than in their healthy siblings, suggesting chimeric cells may play a detrimental role in this disease, perhaps by setting off an autoimmune attack.

This is a burgeoning new field of inquiry with tremendous potential for novel findings as well as for practical applications. But it is also a reminder of our interconnectedness.

http://www.scientificamerican.com/article.cfm?id=scientists-discover-childrens-cells-living-in-mothers-brain

The Death of “Near Death” Experiences?

You careen headlong into a blinding light. Around you, phantasms of people and pets lost. Clouds billow and sway, giving way to a gilded and golden entrance. You feel the air, thrust downward by delicate wings. Everything is soothing, comforting, familiar. Heaven.

It’s a paradise that some experience during an apparent demise. The surprising consistency of heavenly visions during a “near death experience” (or NDE) indicates for many that an afterlife awaits us. Religious believers interpret these similar yet varying accounts like blind men exploring an elephant—they each feel something different (the tail is a snake and the legs are tree trunks, for example); yet all touch the same underlying reality. Skeptics point to the curious tendency for Heaven to conform to human desires, or for Heaven’s fleeting visage to be so dependent on culture or time period.

Heaven, in a theological view, has some kind of entrance. When you die, this entrance is supposed to appear—a Platform 9 ¾ for those running towards the grave. Of course, the purported way to see Heaven without having to take the final run at the platform wall is the NDE. Thrust back into popular consciousness by a surgeon claiming that “Heaven is Real,” the NDE has come under both theological and scientific scrutiny for its supposed ability to preview the great gig in the sky.

But getting to see Heaven is hell—you have to die. Or do you?

This past October, neurosurgeon Dr. Eben Alexander claimed that “Heaven is Real”, making the cover of the now defunct Newsweek magazine. His account of Heaven was based on a series of visions he had while in a coma, suffering the ravages of a particularly vicious case of bacterial meningitis. Alexander claimed that because his neocortex was “inactivated” by this malady, his near death visions indicated an intellect apart from the grey matter, and therefore a part of us survives brain-death.

Alexander’s resplendent descriptions of the afterlife were intriguing and beautiful, but were also promoted as scientific proof. Because Alexander was a brain “scientist” (more accurately, a brain surgeon), his account carried apparent weight.

Scientifically, Alexander’s claims have been roundly criticized. Academic clinical neurologist Steven Novella removes the foundation of Alexander’s whole claim by noting that his assumption of cortex “inactivation” is flawed:

Alexander claims there is no scientific explanation for his experiences, but I just gave one. They occurred while his brain function was either on the way down or on the way back up, or both, not while there was little to no brain activity.

In another takedown of the popular article, neuroscientist Sam Harris (with characteristic sharpness) also points out this faulty premise, and notes that Alexander’s evidence for such inactivation is lacking:

The problem, however, is that “CT scans and neurological examinations” can’t determine neuronal inactivity—in the cortex or anywhere else. And Alexander makes no reference to functional data that might have been acquired by fMRI, PET, or EEG—nor does he seem to realize that only this sort of evidence could support his case.

Without a scientific foundation for Alexander’s claims, skeptics suggest he had a NDE later fleshed out by confirmation bias and colored by culture. Harris concludes in a follow-up post on his blog, “I am quite sure that I’ve never seen a scientist speak in a manner more suggestive of wishful thinking. If self-deception were an Olympic sport, this is how our most gifted athletes would appear when they were in peak condition.”

And these takedowns have company. Paul Raeburn in the Huffington Post, speaking of Alexander’s deathbed vision being promoted as a scientific account, wrote, “We are all demeaned, and our national conversation is demeaned, by people who promote this kind of thing as science. This is religious belief; nothing else.” We might expect this tone from skeptics, but even the faithful chime in. Greg Stier writes in the Christian Post that while he fully believes in the existence of Heaven, we should not take NDE accounts like Alexander’s as proof of it.

These criticisms of Alexander point out that what he saw was a classic NDE—the white light, the tunnel, the feelings of connectedness, etc. This is effective in dismantling his account of an “immaterial intellect” because, so far, most symptoms of an NDE are in fact scientifically explainable. [Another article on this site provides a thorough description of the evidence, as does this study.]

One might argue that the scientific description of NDE symptoms is merely the physical account of what happens as you cross over. A brain without oxygen may experience “tunnel vision,” but a brain without oxygen is also near death and approaching the afterlife, for example. This argument rests on the fact that you are indeed dying. But without the theological gymnastics, I think there is an overlooked yet critical aspect to the near death phenomenon, one that can render Platform 9 ¾ wholly solid. Studies have shown that you don’t have to be near death to have a near death experience.

“Dying”

In 1990 a study published in The Lancet examined the medical records of people who experienced NDE-like symptoms as a result of some injury or illness. Of the 58 patients who reported “unusual” experiences associated with NDEs (tunnels, light, being outside one’s own body, etc.), 30 (just over half) were not actually in any danger of dying, although they believed they were [1]. The authors of the study concluded that this finding offered support to the physical basis of NDEs as well as the “transcendental” basis.

Why would the brain react to death (or even imagined death) in such a way? Well, death is a scary thing. Scientific accounts of the NDE characterize it as the body’s psychological and physiological response mechanism to such fear, producing chemicals in the brain that calm the individual while inducing euphoric sensations to reduce trauma.

Imagine an alpine climber whose pick fails to catch the next icy outcropping as he or she plummets towards a craggy mountainside. If one truly believes the next experience he or she will have is an intimate acquainting with a boulder, similar NDE-like sensations may arise (i.e., “My life flashed before my eyes…”). We know this because these men and women have come back to us, emerging from a cushion of snow after their fall rather than becoming a mountain’s Jackson Pollock installation.

You do not have to be, in reality, dying to have a near-death experience. Even if you are dying (but survive), you probably won’t have one. What does this make of Heaven? It follows that if you aren’t even on your way to the afterlife, the scientifically explicable NDE symptoms point to neurology, not paradise.

This Must Be the Place

Explaining the near death experience in a purely physical way is not to say that people cannot have a transformative vision or intense mental journey. The experience is real and tells us quite a bit about the brain (while raising even more fascinating questions about consciousness). But emotional and experiential gravitas says nothing of Heaven, or the afterlife in general. A healthy imbibing of ketamine can induce the same feelings, but rarely do we consider this euphoric haze a glance of God’s paradise.

In this case, as in science, a theory can be shot down by experiment. As Richard Feynman said, “It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong.”

The experiment is exploring an NDE under different conditions. Can the same sensations be produced when you are in fact not dying? If so, your rapping on the Pearly Gates is an illusion, even if Heaven were real. St. Peter surely can tell the difference between a dying man and a hallucinating one.

The near death experience as a foreshadowing of Heaven is a beautiful theory perhaps, but wrong.

Barring a capricious conception of “God’s plan,” one can experience a beautiful white light at the end of a tunnel while still having a firm grasp of their mortal coil. This is the death of near death. Combine explainable symptoms with a plausible, physical theory as to why we have them and you get a description of what it is like to die, not what it is like to glimpse God.

Sitting atop clouds fluffy and white, Heaven may be waiting. We can’t prove that it is not. But rather than helping to clarify, the near death experience, not dependent on death, may only point to an ever-interesting and complex human brain, nothing more.

http://blogs.scientificamerican.com/guest-blog/2012/12/03/the-death-of-near-death-even-if-heaven-is-real-you-arent-seeing-it/