FDA Lets Drugs Approved on Fraudulent Research Stay on the Market


The FDA announced in 2011 that years’ worth of studies from a major drug research lab were potentially worthless, but it has neither pulled any of the compounds from the market nor identified them.

By Rob Garver, Charles Seife and ProPublica

On the morning of May 3, 2010, three agents of the Food and Drug Administration descended upon the Houston office of Cetero Research, a firm that conducted research for drug companies worldwide. Lead agent Patrick Stone, now retired from the FDA, had visited the Houston lab many times over the previous decade for routine inspections. This time was different. His team was there to investigate a former employee’s allegation that the company had tampered with records and manipulated test data. When Stone explained the gravity of the inquiry to Chinna Pamidi, the testing facility’s president, the Cetero executive made a brief phone call. Moments later, employees rolled in eight flatbed carts, each double-stacked with file boxes. The documents represented five years of data from some 1,400 drug trials.

Pamidi bluntly acknowledged that much of the lab’s work was fraudulent, Stone said. “You got us,” Stone recalled him saying.

Based partly on records in the file boxes, the FDA eventually concluded that the lab’s violations were so “egregious” and pervasive that studies conducted there between April 2005 and August 2009 might be worthless.

The health threat was potentially serious: About 100 drugs, including sophisticated chemotherapy compounds and addictive prescription painkillers, had been approved for sale in the United States at least in part on the strength of Cetero Houston’s tainted tests. The vast majority, 81, were generic versions of brand-name drugs on which Cetero scientists had often run critical tests to determine whether the copies did, in fact, act the same in the body as the originals. For example, one of these generic drugs was ibuprofen, sold as gelatin capsules by one of the nation’s largest grocery-store chains for months before the FDA received assurance they were safe.

The rest were new medications that required so much research to win approval that the FDA says Cetero’s tests were rarely crucial. Stone said he expected the FDA to move swiftly to compel new testing and to publicly warn patients and doctors.

Instead, the agency decided to handle the matter quietly, evaluating the medicines with virtually no public disclosure of what it had discovered. It pulled none of the drugs from the market, even temporarily, letting consumers take the ibuprofen and other medicines it no longer knew for sure were safe and effective. To this day, some drugs remain on the market despite the FDA having no additional scientific evidence to back up their safety and efficacy.

By contrast, the FDA’s transatlantic counterpart, the European Medicines Agency, has pulled seven Cetero-tested medicines from the market.

The FDA also has moved slowly to shore up the science behind the drugs. Twice the FDA announced it was requiring drug makers to repeat, reanalyze or audit many of Cetero’s tests, and to submit their findings to the agency. Both times the agency set deadlines, yet it has allowed some companies to blow by them. Today, six months after the last of those deadlines expired and almost three years after Cetero’s misconduct was discovered, the FDA has received the required submissions for just 53 drugs. The agency says most companies met the deadlines but acknowledged that “a few have not yet submitted new studies.” Other companies, it said, have not submitted new research because they removed their drugs from the market altogether. For its part, the FDA has finished its review of just 21 of the 53 submissions it has received, raising the possibility that patients are taking medications today that the agency might pull off the market tomorrow.

To this day, the agency refuses to disclose the names of the drugs it is reassessing, on the grounds that doing so would expose “confidential commercial information.” ProPublica managed to identify five drugs (http://projects.propublica.org/graphics/cetero) that used Cetero tests to help win FDA approval.

FDA officials defended the agency’s handling of the Cetero case as prudent and scientifically sound, noting that the agency has found no discrepancies between any original drug and its generic copy and no sign that any patients have been harmed. “It is non-trivial to have to redo all this, to withdraw drugs, to alarm the public and the providers for a large range of drugs,” said Janet Woodcock, the director of the FDA’s Center for Drug Evaluation and Research. “There are consequences. To repeat the studies requires human experimentation, and that is not totally without risk.”

Woodcock added that an agency risk assessment found the potential for harm from drugs tested by Cetero to be “quite low,” an assessment she said has been “confirmed” by the fact that no problems have been found in the drugs the agency has finished reviewing. She declined to release the risk assessment or detail its design. A subsequent statement from the agency described the assessment as “fluid” and “ongoing.”

The FDA also has not released its 21 completed reviews, which ProPublica has requested. Some experts say that by withholding so much information in the Cetero case the FDA failed to meet its obligations to the public.

“If there are problems with the scientific studies, as there have been in this case, then the FDA’s review of those problems needs to be transparent,” said David Kessler, who headed the FDA from 1990 to 1997 and who is now a professor at the University of California at San Francisco. Putting its reviews in public view would let the medical community “understand the basis for the agency’s actions,” he said. “FDA may be right here, but if it wants public confidence, they should be transparent. Otherwise it’s just a black box.”

Another former senior FDA official, who spoke on condition of anonymity, also felt the FDA had moved too slowly and secretively. “They’re keeping it all in the dark. It’s not transparent at all,” he said.

By contrast, the European Medicines Agency has provided a public accounting of the science behind all the drugs it has reviewed. Its policy, the EMA said in response to questions, is to make public “all review procedures where the benefit-risk balance of a medicine is under scrutiny.”

Woodcock dismissed comparisons to the EMA. “Europe had a smaller handful of drugs,” she said, “and they may not have engaged in as extensive negotiation and investigations with the company as we did.” She said the FDA would have disclosed more, including the names of drugs, had it believed there was a risk to public health. “We believe that this did not rise to the level where the public should be notified,” she said. “We felt it would result in misunderstanding and inappropriate actions.”

In a written response to Kessler’s comments, the FDA said, “We’ve been as transparent as possible given the legal protections surrounding an FDA investigation of this or any type. The issue is not a lack of transparency but rather the difficulty of explaining why the problems we identified at Cetero, which on their face would appear to be highly significant in terms of patient risk, fortunately were not.”

Still, the FDA’s secrecy has had other ramifications. Some of Cetero’s suspect research made its way unchallenged into the peer-reviewed scientific literature on which the medical community relies. In one case, a researcher and a journal editor told ProPublica they had no idea the Cetero tests had been called into doubt.

Cetero, in correspondence with the FDA, conceded misconduct. And in an interview, Cetero’s former attorney, Marc Scheineson, acknowledged that chemists at the Houston facility committed fraud but said the problem was limited to six people who had all been fired.

“There is still zero evidence that any of the test results…were wrong, inaccurate, or incorrect,” he said. Scheineson called the FDA’s actions “overkill” and said they led to the demise of Cetero and its successor company.

In 2012, the company filed for Chapter 11 bankruptcy and emerged with a new name, PRACS Institute. PRACS, in turn, filed for bankruptcy on March 22 of this year. A PRACS spokesperson said the company had closed the Houston facility in October 2012.

Pamidi, the Cetero executive who provided the carts of file boxes, declined to comment. As for Stone, the former FDA investigator, he said he was disturbed by the agency’s decisions.

“They could have done more,” he said. “They should have done more.”

Cross-checking U.S. and European public records, including regulatory filings, scientific studies and civil lawsuits, ProPublica was able to identify a few of the drugs that are on the U.S. market because of tests performed at Cetero’s Houston lab. There is no evidence that patients have suffered harm from these drugs; the FDA says it has detected no increase in reports of side effects or lack of efficacy among Cetero-tested medications.

To be sure, just because a crucial study is deemed potentially unreliable does not mean that a drug is unsafe or ineffective. What it does mean is that the FDA’s scientific basis for approving that drug has been undermined.

The risks are real, academic experts say, particularly for drugs such as blood thinners and anti-seizure medications that must be given at very specific doses. And generic versions of drugs have been known to act differently from name-brand products.

There is no indication the generic ibuprofen gelatin capsules hurt anyone, but their case shows how the FDA left a drug on the market for months without confirmation that it was equivalent to the name brand.

The capsules were manufactured by Banner Pharmacaps and carried by Supervalu, a grocery company that operates or licenses more than 2,400 stores across the United States, including Albertson’s, Jewel-Osco, Shop ‘n Save, Save-A-Lot, and Shoppers Food & Pharmacy.

Cetero had performed a key analysis to show that the capsules were equivalent to other forms of the drug. Banner, the drug’s maker, said the FDA first alerted it to the problems at Cetero in August 2011. The FDA required drug companies to redo many of Cetero’s tests, but, a spokesperson for Banner wrote in an email, “We received no directive from FDA to recall or otherwise interrupt manufacture of the product.”

Banner said it repeated the tainted Cetero tests at a different research firm, and the FDA said it received the new data in January 2012 — leaving a gap of at least five months when the FDA knew the drug was on the market without a rock-solid scientific basis.

An FDA spokesperson wrote in an email that the agency found the new studies Banner submitted “acceptable” and told Banner it had no further questions.

A spokesperson for Supervalu told ProPublica it purchased the ibuprofen from a supplier, which has assured the grocery company that “there are no issues with the product.”

According to U.S. and European records, another one of the drugs approved based on research at Cetero’s troubled Houston lab was a chemotherapy drug known as Temodar for Injection.

Temodar was originally approved in 1999 as a capsule to fight an aggressive brain cancer, glioblastoma multiforme. Some patients, however, can’t tolerate taking the medication orally, so drug maker Schering-Plough decided to make an intravenous form of the drug.

To get Temodar for Injection approved, the FDA required what it called a “pivotal” test comparing the well-established capsule form of Temodar to the form injected directly into the bloodstream.

Cetero Houston conducted that test, comparing blood samples of patients who received the capsule to samples of those who got the injection to determine if the same amount of the drug was reaching the bloodstream. This test is crucial, particularly in the case of Temodar, where there was a question about the right dosing regimen of the injectable version. If too little drug gets into the blood, the cancer could continue to grow unabated. If too much gets in, the drug’s debilitating side effects could be even worse.

Cetero performed the test between September 2006 and October 2007, according to documents from the European Medicines Agency, and FDA records indicate that same test was used to win approval in the U.S.

In 2011, the FDA notified Merck & Co., which had acquired Schering-Plough, about the problems with Cetero’s testing. In April 2012, the FDA publicly announced that analyses done by Cetero during the time when it performed the Temodar work would have to be redone. But according to Merck spokesman Ronald Rogers, the FDA has not asked Merck for any additional analyses to replace the questionable study.

The FDA declined to answer specific questions about the Temodar case, saying to do so would reveal confidential commercial information. But Woodcock said that in some cases, drug manufacturers had submitted alternative test results to the FDA that satisfied the agency that no retesting was necessary for specific drugs.

The FDA never removed Temodar for Injection from the market. The European Medicines Agency also kept the injection form of the drug on the market, but the two agencies handled their decision in sharply different ways.

The EMA has publicly laid out evidence — including studies not performed by Cetero — for why it believes the benefits of the injection drug outweigh its risks. But in the United States, the FDA has kept silent. To this day, Temodar’s label — the single most important way the FDA communicates the risks and benefits of medication — still displays data from the dubious Cetero study. (The label of at least one other drug, a powerful pain reliever marketed as Lazanda, also still displays questionable Cetero data.)

Woodcock said the agency hadn’t required manufacturers to alter their labels because, despite any questions about the precision of the numbers, the FDA’s overall recommendation had not changed.

In a written response to questions, Merck said it “stands behind the data in the TEMODAR (temozolomide) label.” The company said it learned about “misconduct at a contract research organization (CRO) facility in Houston” from the FDA and that it cooperated with investigations by the FDA and its European counterpart. It said that Cetero had performed no other studies for Merck.

Even one of the researchers involved in evaluating injectable Temodar didn’t know that the FDA had flagged Cetero’s analysis as potentially unreliable until contacted by a reporter for this story.

Dr. Max Schwarz, an oncologist and clinical professor at Monash University in Melbourne, Australia, treated some brain-cancer patients with the experimental injectable form of Temodar and others with the capsule formulation. Blood from his patients was sent to Cetero’s Houston lab for analysis.

Schwarz said he still has confidence in the injectable form of the drug, but said that he was “taken aback” when a reporter told him that the FDA had raised questions about the analysis. “I think we should have been told,” he said.

Suspect research conducted by Cetero Houston was not only used to win FDA approval but was also submitted to peer-reviewed scientific journals. Aided by the FDA’s silence, those articles remain in the scientific literature with no indication that they might, in fact, be compromised. For example, based on Cetero’s work, an article in the journal Cancer Chemotherapy and Pharmacology purports to show that Temodar for Injection is equivalent to Temodar capsules.

Edward Sausville, co-editor-in-chief of the journal, said in an email that the first he heard that something might be wrong with the Cetero research was when a reporter contacted him for this story. He also said the publisher of the journal would conduct a “review of relevant records pertinent to this case.”

During his years of inspecting the Houston lab, the FDA’s Stone said he often had the sense that something wasn’t right. When he went to other contract research firms and asked for data on a trial, they generally produced an overwhelming amount of paper: records of failed tests, meticulous explanations of how the chemists had made adjustments, and more.

Cetero’s records, by contrast, showed very clean, error-free procedures. As Stone and his colleagues dug through the data, though, they often found gaps. When pressed, Cetero officials would often produce additional data — data that ought to have been in the files originally handed over to the FDA.

Stone said, “We should have looked back and said, ‘Wait a minute, there’s always something missing from the studies from here. Why?'”

One reason, the FDA would determine, was that Cetero’s chemists were taking shortcuts and other actions prohibited by the FDA’s Good Laboratory Practice guidelines, which set out such matters as how records must be kept and how tests must be performed.

Stone and his FDA colleagues might never have realized Cetero was engaging in misconduct if a whistleblower hadn’t stepped forward.

Cashton J. Briscoe operated a liquid chromatography-tandem mass spectrometry device, or “mass spec,” a sensitive machine that measures the concentration of a drug in the blood.

He took blood samples prepared by Cetero chemists and used mass specs to perform “runs” — tests, always accompanied by control samples, to see how much of a drug is in patients’ blood. When those controls show readings that are clearly wrong, chemists have to abort the run, document the failure, recalibrate the machines, and redo the whole process.
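The control-sample check described above can be sketched in a few lines of Python. The function names and the 15 percent tolerance here are hypothetical illustrations, not Cetero’s or the FDA’s actual acceptance criteria, which are more involved:

```python
def run_passes_qc(control_readings, nominal_values, tolerance=0.15):
    """A run is valid only if every control sample reads within
    a fractional tolerance of its known concentration."""
    return all(abs(measured - nominal) / nominal <= tolerance
               for measured, nominal in zip(control_readings, nominal_values))

# Known control concentrations and two hypothetical runs.
nominal = [10.0, 50.0, 200.0]
good_run = [10.4, 48.1, 207.0]   # all controls within 15%
bad_run = [10.4, 48.1, 260.0]    # last control reads 30% high

print(run_passes_qc(good_run, nominal))  # True -> accept the run
print(run_passes_qc(bad_run, nominal))   # False -> abort, recalibrate, redo
```

Reusing control readings from one run in another, as Briscoe alleged, would defeat exactly this kind of check.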

But Cetero paid its Houston chemists based on how many runs they completed in a day. Some chemists doubled or even tripled their income by squeezing in extra tests, according to time sheets entered as evidence in a lawsuit filed in U.S. District Court in Houston by six chemists seeking overtime payments. Briscoe thought several chemists were cutting corners — by using the control-sample readings from one run in other runs, for example.

Attorney Scheineson, who represented Cetero during the FDA’s investigation, acknowledged that the Houston lab’s compensation system was “crappy” and that a handful of “dishonest” chemists there committed fraud.

In April 2009, Briscoe blew the whistle in a letter to the company written by his lawyer, reporting that “many of the chemists were manipulating and falsifying data.” Soon thereafter, Briscoe told the company that he had documented the misconduct. According to Stone and documents reviewed by ProPublica, Briscoe had photographic evidence that mass spec operators had switched the quality control samples between different runs; before-and-after copies of documents with the dates and other material changed; and information about a shadow computer filing system, where data from failed runs could be stored out of sight of FDA inspectors.

On June 5, apparently frustrated with Cetero’s response, Briscoe went a step further and called the FDA’s Dallas office. He agreed to meet Stone the following Monday, but never showed. Stone called him, as did other FDA officials, but Briscoe had changed his mind and clammed up.

Still, Stone’s brief phone conversation with Briscoe reminded the agent of all those suspiciously clean records he had seen at Cetero over the years. “Now that you have a bigger picture,” Stone recalled, “you’re like, ‘Oh, some of this stuff is cooked.'”

Two days after Stone’s aborted meeting with Briscoe, Cetero informed the FDA that an employee had made allegations of misconduct and that the company had hired an outside auditor to review five years’ worth of data. That led to months of back-and-forth between the agency and Cetero that culminated when Stone and his inspectors arrived in Houston in May 2010.

Two teams of FDA investigators eventually confirmed Briscoe’s main allegations and cited the company for falsifying records and other violations of Good Laboratory Practice. The net effect of the misconduct was far-reaching, agency officials wrote in a July 2011 letter:

“The pervasiveness and egregious nature of the violative practices by your firm has led FDA to have significant concerns that the bioequivalence and bioavailability data generated at the Cetero Houston facility from April 1, 2005, to June 15, 2010 … are unreliable.”

Bioequivalence studies measure whether a generic drug acts the same in the body as the name-brand drug; bioavailability studies measure how much drug gets into a patient’s system.
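Bioequivalence is typically judged on exposure measures such as the area under the blood concentration–time curve (AUC); a common regulatory criterion requires the test-to-reference geometric mean ratio, with its confidence interval, to fall between 80 and 125 percent. A minimal sketch of the point-estimate arithmetic, using invented numbers — this is an illustration of the concept, not the FDA’s review procedure:

```python
import math
import statistics

def geometric_mean_ratio(test_auc, ref_auc):
    """Geometric mean ratio of paired exposure measurements (test/reference),
    computed by averaging the per-subject log differences."""
    log_diffs = [math.log(t) - math.log(r) for t, r in zip(test_auc, ref_auc)]
    return math.exp(statistics.mean(log_diffs))

def within_be_limits(ratio, lo=0.80, hi=1.25):
    """Standard bioequivalence acceptance window of 80%-125%."""
    return lo <= ratio <= hi

# Hypothetical AUC values (ng*h/mL) for 6 subjects dosed with each formulation.
test = [102.0, 98.5, 110.2, 95.0, 101.3, 99.8]
ref = [100.0, 97.0, 108.0, 96.5, 100.0, 102.0]

ratio = geometric_mean_ratio(test, ref)
print(round(ratio, 3), within_be_limits(ratio))  # -> 1.005 True
```

Falsified control data corrupts exactly these AUC measurements, which is why the FDA could no longer vouch for the equivalence of the affected generics.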

The FDA’s next step was to try to determine which drugs were implicated — information the agency couldn’t glean from its own records.

“We couldn’t really tell — because most of the applications we get are in paper — which studies were actually linked to the key studies in an application without asking the application holders,” the FDA’s Woodcock said. “So we asked the application holders,” meaning the drug manufacturers.

In the interim, the FDA continued to investigate processes and procedures at Cetero.

“We put their operations under a microscope,” said Woodcock. A team of clinical pharmacologists, statisticians and IT experts conducted a risk analysis of the problems at Cetero, she said, and they “concluded that the risk of a misleading result was very low given how the studies were done, how the data were captured and so forth.”

In April 2012, nearly three years after Briscoe first alerted the FDA to problems at Cetero, and nearly two years after Cetero handed over its documentation to inspectors, the FDA entered into a final agreement with the company. Drug makers would need to redo tests conducted at the company’s Houston facility between April 1, 2005 and Feb. 28, 2008, if those studies had been part of a drug application submitted to the FDA. If stored blood samples were still usable, they could be reanalyzed. If not, the entire study would need to be repeated, the FDA said. The agency set a deadline of six months.

Cetero tests done between March 1, 2008 and Aug. 31, 2009 would be accepted only if they were accompanied by an independent data integrity audit.

Analyses done after Sept. 1, 2009 would not require retesting. The FDA said that Cetero had issued a written directive on Sept. 1, 2009, ordering one kind of misconduct to stop, which was why it did not require any action on Cetero Houston studies after that date. According to public documents, however, the agency’s inspectors “found continued deficiencies” that persisted into December 2010.

In response to questions, the FDA said the problem period “was subsequently narrowed as more information regarding Cetero’s practices became available.”

A year after the FDA concluded its final agreement with Cetero, its review is still not finished. “Without the process being public it’s hard to know, but it seems that this has been going on for too long,” said Kessler, the former FDA chief.

“The process has been long,” the FDA said, “because of the number of products involved and our wish to be thorough and accurate in both our requests for and our review of the data.”

Cetero’s attorney Scheineson said the FDA scaled back its requirements because it finally talked with company officials. He noted that Cetero had tried repeatedly to talk with the FDA before the agency issued its strongly worded July 2011 letter, and that more than 1,000 employees have since lost their jobs.

“If you would get an honest assessment from the leaders of the agency,” he said, “I think in retrospect they would have argued that this was overkill here and that they should have had input from the company before essentially going public with that death sentence.”

“I’m not sure what is meant by ‘death sentence,'” an FDA spokesperson wrote in response, “but our first priority was and is patient safety and we proceeded to conduct the investigation toward that objective.”

The FDA’s Stone draws little satisfaction from unraveling the problems at Cetero.

There are thousands of bioequivalence studies done every year, he pointed out, with each study generating thousands of pages of paper records. “Do you really think we’re going to look at 100 percent of them? We’re going to look at maybe 5 percent if we’re lucky,” he said. “Sometimes 1 percent.”

Still, given how often he and other FDA teams had inspected the Houston lab, he thinks regulators should have spotted Cetero’s misconduct sooner.

“In hindsight I look back and I’m like, ‘Wow, should I be proud of this?'” he said. “It’s cool that I was part of it, but it’s crap that we didn’t catch it five years ago. How could we let this go so long?”

Rob Garver can be reached at rob.garver@propublica.org, and Charles Seife can be reached at cgseife@nasw.org.

Research assistance for this story was contributed by Nick Stockton, Christine Kelly, Lily Newman, Joss Fong and Sarah Jacoby of the Science, Health, and Environmental Reporting Program at NYU.

http://www.scientificamerican.com/article.cfm?id=fda-let-drugs-approved-on-fraudulent-research-stay-on-market

Have scientists rendered the final word on penis size?


No man is an island, and it turns out neither is his penis. New research suggests that size does matter (sorry, guys), but the penis is only one (sometimes) small contributor to manly allure. A man’s overall attractiveness to a woman, researchers have found, depends in part on the trio of height, body shape, and penis size.

Although the assault of penis pill spam in your inbox might make you think that “bigger is better,” scientific research has returned mixed results. Some findings say that women prefer longer penises, others say they like wider ones, and still others report that size doesn’t matter at all.

Most of these studies had either asked women directly about their preferences or had them rate the attractiveness of different male figures that varied only in penis length. The penis doesn’t exist in a vacuum, though, and biologists led by Brian Mautz, who was then at the Australian National University in Acton, wondered how penis size interacts with other body traits that are usually considered attractive or manly.

Using data from a large study of Italian men, the researchers created 343 computer-generated male figures that varied in penis size, as well as in height and shoulder-to-hip ratio—traits that other research has linked to attractiveness and reproductive success. Mautz and colleagues turned the figures into short video clips and projected them, life-sized, onto a wall for viewing by 105 women. Each woman watched a random set of 53 figures and rated their attractiveness as potential sexual partners on a scale of 1 to 7.

“The first thing we found was that penis size influences male attractiveness,” Mautz says. “There’s a couple of caveats to that, and the first is that the relationship isn’t a straight line.” Rather than the attractiveness rating consistently improving with each jump in penis size, the team found what Mautz calls “an odd kink in the middle.” Attractiveness increased quickly until flaccid penis length reached 7.6 centimeters (about 3 inches); beyond that, the gains tapered off, the team reports online today in the Proceedings of the National Academy of Sciences.

The reason, Mautz says, is that penis size isn’t the only thing that matters. It interacts with other traits, and its effect depends on whether those other traits are already attractive to begin with. If one of the model men was tall and had a masculine, V-shaped torso with broad shoulders and narrower hips, for example, he was considered more attractive than his shorter, stockier counterparts, regardless of penis size.

An increase in penis size was also a bigger benefit to attractiveness, and a smaller penis was less of a detriment, to the taller, fitter figures than it was to shorter or potato-shaped ones. For example, a model that was 185 cm tall (about 6 ft) with a 7-cm-long (about 3-in-long) penis got an average score for attractiveness. To get that same score, a model that was 170 cm (about 5’6″) needed a penis of about 11 cm (about 4.5 in) in length. Boost the taller guy’s penis by just about a centimeter, and the shorter guy needs double that to keep up and get the same attractiveness score. After that, the shorter male pretty much can’t continue to compete. To really reap the benefits of a big penis, a guy needs to be attractive in the first place, Mautz says. If he isn’t, even the biggest penis in the world won’t do him that much good.

So have women been responsible for the male penis getting larger—at least over the course of evolution? That’s a distinct possibility, the researchers say. Women may have selected for larger penises because they’re linked to higher rates of female orgasm and sexual satisfaction, which may explain why the human penis is proportionally larger than those of our evolutionary cousins.

That size matters, and that it matters in the context of other traits, makes sense, because proportionate features are attractive, says Adam Jones, a biologist who studies sexual selection and mate choice at Texas A&M University in College Station and who was not involved in the work. But he cautions that projections on a wall are no substitute for real life. Just because a woman prefers a man with a large penis doesn’t mean that she’s going to find one. Outside the lab, there’s greater variation and more traits to consider, so penis size might not be as important. That’s good, Jones says, because hurdles like competition with other women and her own perceived attractiveness could place her with a man who comes up a little short.

http://news.sciencemag.org/sciencenow/2013/04/the-final-word-on-penis-size.html?ref=em

Thanks to Dr. Rajadhyaksha for bringing this to the attention of the It’s Interesting community.

Maya Blue Paint Recipe Deciphered


The ancient Maya used a vivid, remarkably durable blue paint to cover their palace walls, codices, pottery and maybe even the bodies of human sacrifices who were thrown to their deaths down sacred wells. Now a group of chemists claims to have cracked the recipe of Maya Blue.

Scientists have long known the two chief ingredients of the intense blue pigment: indigo, a plant dye that’s used today to color denim; and palygorskite, a type of clay. But how the Maya cooked up the unfading paint remained a mystery. Now Spanish researchers report that they found traces of another pigment in Maya Blue, which they say gives clues about how the color was made.

“We detected a second pigment in the samples, dehydroindigo, which must have formed through oxidation of the indigo when it underwent exposure to the heat that is required to prepare Maya Blue,” Antonio Doménech, a researcher from the University of Valencia, said in a statement.

“Indigo is blue and dehydroindigo is yellow, therefore the presence of both pigments in variable proportions would justify the more or less greenish tone of Maya Blue,” Doménech explained. “It is possible that the Maya knew how to obtain the desired hue by varying the preparation temperature, for example heating the mixture for more or less time or adding more or less wood to the fire.”

American researchers in 2008 claimed that copal resin, which was used for incense, may have been the third secret ingredient for Maya Blue. Their research was based on a study of a bowl that had traces of the pigment and was used to burn incense. But Doménech’s team didn’t buy those findings. “The bowl contained Maya Blue mixed with copal incense, so the simplified conclusion was that it was only prepared by warming incense,” Doménech said in a statement.

The Spanish researchers say they are now investigating the chemical bonds that bind the paint’s organic component (indigo) to the inorganic component (clay), which is key to Maya Blue’s resilience.

Among the more remarkable discoveries of the paint in context was a 14-foot-thick (4-meter) layer of blue mud at the bottom of a naturally formed sinkhole, called the Sacred Cenote, at the famous Pre-Columbian Maya site Chichén Itzá in the Yucatán Peninsula of Mexico. When the Sacred Cenote was first dredged in 1904, it puzzled researchers, but some scientists now believe it was probably left over from blue-coated human sacrifices thrown into the well as part of a Maya ritual.

The research was detailed this year in the journal Microporous and Mesoporous Materials.

http://www.livescience.com/28381-maya-blue-paint-recipe-discovered.html

Researchers explore connecting the brain to machines

brain

Behind a locked door in a white-walled basement in a research building in Tempe, Ariz., a monkey sits stone-still in a chair, eyes locked on a computer screen. From his head protrudes a bundle of wires; from his mouth, a plastic tube. As he stares, a picture of a green cursor on the black screen floats toward the corner of a cube. The monkey is moving it with his mind.

The monkey, a rhesus macaque named Oscar, has electrodes implanted in his motor cortex, detecting electrical impulses that indicate mental activity and translating them to the movement of the ball on the screen. The computer isn’t reading his mind, exactly — Oscar’s own brain is doing a lot of the lifting, adapting itself by trial and error to the delicate task of accurately communicating its intentions to the machine. (When Oscar succeeds in controlling the ball as instructed, the tube in his mouth rewards him with a sip of his favorite beverage, Crystal Light.) It’s not technically telekinesis, either, since that would imply that there’s something paranormal about the process. It’s called a “brain-computer interface” (BCI). And it just might represent the future of the relationship between human and machine.

Stephen Helms Tillery’s laboratory at Arizona State University is one of a growing number where researchers are racing to explore the breathtaking potential of BCIs and a related technology, neuroprosthetics. The promise is irresistible: from restoring sight to the blind, to helping the paralyzed walk again, to allowing people suffering from locked-in syndrome to communicate with the outside world. In the past few years, the pace of progress has been accelerating, delivering dazzling headlines seemingly by the week.

At Duke University in 2008, a monkey named Idoya walked on a treadmill, causing a robot in Japan to do the same. Then Miguel Nicolelis stopped the monkey’s treadmill — and the robotic legs kept walking, controlled by Idoya’s brain. At Andrew Schwartz’s lab at the University of Pittsburgh in December 2012, a quadriplegic woman named Jan Scheuermann learned to feed herself chocolate by mentally manipulating a robotic arm. Just last month, Nicolelis’ lab set up what it billed as the first brain-to-brain interface, allowing a rat in North Carolina to make a decision based on sensory data beamed via Internet from the brain of a rat in Brazil.

So far the focus has been on medical applications — restoring standard-issue human functions to people with disabilities. But it’s not hard to imagine the same technologies someday augmenting ordinary human capacities. If you can make robotic legs walk with your mind, there’s no reason you can’t also make them run faster than any sprinter. If you can control a robotic arm, you can control a robotic crane. If you can play a computer game with your mind, you can, theoretically at least, fly a drone with your mind.

It’s tempting and a bit frightening to imagine that all of this is right around the corner, given how far the field has already come in a short time. Indeed, Nicolelis — the media-savvy scientist behind the “rat telepathy” experiment — is aiming to build a robotic bodysuit that would allow a paralyzed teen to take the first kick of the 2014 World Cup. Yet the same factor that has made the explosion of progress in neuroprosthetics possible could also make future advances harder to come by: the almost unfathomable complexity of the human brain.

From I, Robot to Skynet, we’ve tended to assume that the machines of the future would be guided by artificial intelligence — that our robots would have minds of their own. Over the decades, researchers have made enormous leaps in artificial intelligence (AI), and we may be entering an age of “smart objects” that can learn, adapt to, and even shape our habits and preferences. We have planes that fly themselves, and we’ll soon have cars that do the same. Google has some of the world’s top AI minds working on making our smartphones even smarter, to the point that they can anticipate our needs. But “smart” is not the same as “sentient.” We can train devices to learn specific behaviors, and even out-think humans in certain constrained settings, like a game of Jeopardy. But we’re still nowhere close to building a machine that can pass the Turing test, the benchmark for human-like intelligence. Some experts doubt we ever will.

Philosophy aside, for the time being the smartest machines of all are those that humans can control. The challenge lies in how best to control them. From vacuum tubes to the DOS command line to the Mac to the iPhone, the history of computing has been a progression from lower to higher levels of abstraction. In other words, we’ve been moving from machines that require us to understand and directly manipulate their inner workings to machines that understand how we work and respond readily to our commands. The next step after smartphones may be voice-controlled smart glasses, which can intuit our intentions all the more readily because they see what we see and hear what we hear.

The logical endpoint of this progression would be computers that read our minds, computers we can control without any physical action on our part at all. That sounds impossible. After all, if the human brain is so hard to compute, how can a computer understand what’s going on inside it?

It can’t. But as it turns out, it doesn’t have to — not fully, anyway. What makes brain-computer interfaces possible is an amazing property of the brain called neuroplasticity: the ability of neurons to form new connections in response to fresh stimuli. Our brains are constantly rewiring themselves to allow us to adapt to our environment. So when researchers implant electrodes in a part of the brain that they expect to be active in moving, say, the right arm, it’s not essential that they know in advance exactly which neurons will fire at what rate. When the subject attempts to move the robotic arm and sees that it isn’t quite working as expected, the person — or rat or monkey — will try different configurations of brain activity. Eventually, with time and feedback and training, the brain will hit on a solution that makes use of the electrodes to move the arm.
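The trial-and-error adaptation described above can be caricatured in a few lines of code. In this sketch every number is invented: a fixed random “decoder” stands in for the unchanging wiring between the electrodes and the cursor, and simple hill-climbing stands in for the brain’s search for firing patterns that move the cursor as intended.

```python
import random

random.seed(0)

# A fixed, unknown "decoder": maps 4 neurons' firing rates to a 2-D cursor step.
# The subject never sees these weights, only the resulting cursor motion.
DECODER = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

TARGET = (5.0, -3.0)  # the cursor step the subject intends to produce

def cursor_step(rates):
    """Cursor displacement produced by one burst of neural activity."""
    return tuple(sum(w * r for w, r in zip(row, rates)) for row in DECODER)

def error(rates):
    """How far the produced step is from the intended one."""
    step = cursor_step(rates)
    return sum((s - t) ** 2 for s, t in zip(step, TARGET)) ** 0.5

# Trial and error: randomly perturb the firing pattern, keep what helps.
rates = [1.0, 1.0, 1.0, 1.0]
best = error(rates)
for _ in range(5000):
    candidate = [r + random.gauss(0, 0.1) for r in rates]
    e = error(candidate)
    if e < best:  # feedback: the cursor moved closer to the goal
        rates, best = candidate, e

print(f"final error: {best:.3f}")
```

The point of the sketch is that the decoder never changes; only the simulated firing pattern does, which is roughly the division of labor the researchers describe.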

That’s the principle behind such rapid progress in brain-computer interface and neuroprosthetics. Researchers began looking into the possibility of reading signals directly from the brain in the 1970s, and testing on rats began in the early 1990s. The first big breakthrough for humans came in Georgia in 1997, when a scientist named Philip Kennedy used brain implants to allow a “locked in” stroke victim named Johnny Ray to spell out words by moving a cursor with his thoughts. (It took him six exhausting months of training to master the process.) In 2008, when Nicolelis got his monkey at Duke to make robotic legs run a treadmill in Japan, it might have seemed like mind-controlled exoskeletons for humans were just another step or two away. If he succeeds in his plan to have a paralyzed youngster kick a soccer ball at next year’s World Cup, some will pronounce the cyborg revolution in full swing.

Schwartz, the Pittsburgh researcher who helped Jan Scheuermann feed herself chocolate in December, is optimistic that neuroprosthetics will eventually allow paralyzed people to regain some mobility. But he says that full control over an exoskeleton would require a more sophisticated way to extract nuanced information from the brain. Getting a pair of robotic legs to walk is one thing. Getting robotic limbs to do everything human limbs can do may be exponentially more complicated. “The challenge of maintaining balance and staying upright on two feet is a difficult problem, but it can be handled by robotics without a brain. But if you need to move gracefully and with skill, turn and step over obstacles, decide if it’s slippery outside — that does require a brain. If you see someone go up and kick a soccer ball, the essential thing to ask is, ‘OK, what would happen if I moved the soccer ball two inches to the right?’” The idea that simple electrodes could detect things as complex as memory or cognition, which involve the firing of billions of neurons in patterns that scientists can’t yet comprehend, is far-fetched, Schwartz adds.

That’s not the only reason that companies like Apple and Google aren’t yet working on devices that read our minds (as far as we know). Another one is that the devices aren’t portable. And then there’s the little fact that they require brain surgery.

A different class of brain-scanning technology is being touted on the consumer market and in the media as a way for computers to read people’s minds without drilling into their skulls. It’s called electroencephalography, or EEG, and it involves headsets that press electrodes against the scalp. In an impressive 2010 TED Talk, Tan Le of the consumer EEG-headset company Emotiv Lifesciences showed how someone can use her company’s EPOC headset to move objects on a computer screen.

Skeptics point out that these devices can detect only the crudest electrical signals from the brain itself, which is well-insulated by the skull and scalp. In many cases, consumer devices that claim to read people’s thoughts are in fact relying largely on physical signals like skin conductivity and tension of the scalp or eyebrow muscles.

Robert Oschler, a robotics enthusiast who develops apps for EEG headsets, believes the more sophisticated consumer headsets like the Emotiv EPOC may be the real deal in terms of filtering out the noise to detect brain waves. Still, he says, there are limits to what even the most advanced, medical-grade EEG devices can divine about our cognition. He’s fond of an analogy that he attributes to Gerwin Schalk, a pioneer in the field of invasive brain implants. The best EEG devices, he says, are “like going to a stadium with a bunch of microphones: You can’t hear what any individual is saying, but maybe you can tell if they’re doing the wave.” With some of the more basic consumer headsets, at this point, “it’s like being in a party in the parking lot outside the same game.”

It’s fairly safe to say that EEG headsets won’t be turning us into cyborgs anytime soon. But it would be a mistake to assume that we can predict today how brain-computer interface technology will evolve. Just last month, a team at Brown University unveiled a prototype of a low-power, wireless neural implant that can transmit signals to a computer over broadband. That could be a major step forward in someday making BCIs practical for everyday use. Meanwhile, researchers at Cornell last week revealed that they were able to use fMRI, a measure of brain activity, to detect which of four people a research subject was thinking about at a given time. Machines today can read our minds in only the most rudimentary ways. But such advances hint that they may one day detect and respond to far more abstract types of mental activity.

http://www.ydr.com/living/ci_22800493/researchers-explore-connecting-brain-machines

Thriving bacteria discovered at the deepest point in the ocean

dn23277-1_300

Hollywood director James Cameron found little evidence of life when he descended nearly 11,000 metres to the deepest point in the world’s oceans last year. If only he had taken a microscope and looked just a few centimetres deeper.

Ronnie Glud at the University of Southern Denmark in Odense, and his colleagues, have discovered unusually high levels of microbial activity in the sediments at the site of Cameron’s dive – Challenger Deep at the bottom of the western Pacific’s Mariana Trench.

Glud’s team dispatched autonomous sensors and sample collectors into the trench to measure microbial activity in the top 20 centimetres of sediment on the sea bed. The pressure there is almost 1100 times greater than at the surface. For anything calling the trench home, however, finding food is an even greater challenge than surviving the high pressure.

Any nourishment must come in the form of detritus falling from the surface ocean, most of which is consumed by other organisms on the way down. Only 1 per cent of the organic matter generated at the surface reaches the sea floor’s abyssal plains, 3000 to 6000 metres below sea level. So what are the chances of organic matter making it even deeper, into the trenches that form when one tectonic plate ploughs beneath another?

Surprisingly, the odds seem high. Glud’s team compared sediment samples taken from Challenger Deep and a reference site on the nearby abyssal plain. The bacteria at Challenger Deep were around 10 times as abundant as those on the abyssal plain, with every cubic centimetre of sediment containing 10 million microbes. The deep microbes were also twice as active as their shallower kin.

These figures make sense, says Glud, because ocean trenches are particularly good at capturing sediment. They are broad as well as deep, with a steep slope down to the deepest point, so any sediment falling on their flanks quickly cascades down to the bottom in muddy avalanches. Although the sediment may contain no more than 1 per cent organic matter, so much of it ends up at Challenger Deep that the level of microbial activity shoots up.

“There is much more than meets the eye at the bottom of the sea,” says Hans Røy, at Aarhus University in Denmark. Last year, he studied seafloor sediments below the north Pacific gyre – an area that, unlike Challenger Deep, is almost devoid of nutrients. Remarkably, though, even here Røy found living microbes.

“With the exception of temperatures much above boiling, bacteria seem to cope with everything this planet can throw at them,” he says.

Journal reference: Nature Geoscience, DOI: 10.1038/ngeo1773

http://www.newscientist.com/article/dn23277-deepest-point-in-the-ocean-is-teeming-with-life.html?cmpid=RSS|NSNS|2012-GLOBAL|online-news

Water in faults vaporizes during an earthquake, depositing gold

gold-ed
The tyrannosaur of the minerals, this gold nugget in quartz weighs more than 70 ounces (2 kilograms).

Earthquakes have the Midas touch, a new study claims.

Water in faults vaporizes during an earthquake, depositing gold, according to a model published in the March 17 issue of the journal Nature Geoscience. The model provides a quantitative mechanism for the link between gold and quartz seen in many of the world’s gold deposits, said Dion Weatherley, a geophysicist at the University of Queensland in Australia and lead author of the study.

When an earthquake strikes, it moves along a rupture in the ground — a fracture called a fault. Big faults can have many small fractures along their length, connected by jogs that appear as rectangular voids. Water often lubricates faults, filling in fractures and jogs.

About 6 miles (10 kilometers) below the surface, under incredible temperatures and pressures, the water carries high concentrations of carbon dioxide, silica and economically attractive elements like gold.

During an earthquake, the fault jog suddenly opens wider. It’s like pulling the lid off a pressure cooker: The water inside the void instantly vaporizes, flashing to steam and forcing silica, which forms the mineral quartz, and gold out of the fluids and onto nearby surfaces, suggest Weatherley and co-author Richard Henley, of the Australian National University in Canberra.

While scientists have long suspected that sudden pressure drops could account for the link between giant gold deposits and ancient faults, the study takes this idea to the extreme, said Jamie Wilkinson, a geochemist at Imperial College London in the United Kingdom, who was not involved in the study.

“To me, it seems pretty plausible. It’s something that people would probably want to model either experimentally or numerically in a bit more detail to see if it would actually work,” Wilkinson told OurAmazingPlanet.

Previously, scientists suspected fluids would effervesce, bubbling like an opened soda bottle, during earthquakes or other pressure changes. This would line underground pockets with gold. Others suggested minerals would simply accumulate slowly over time.

Weatherley said the amount of gold left behind after a single earthquake is tiny, because underground fluids carry at most only one part per million of the precious element. But an earthquake zone like New Zealand’s Alpine Fault, one of the world’s fastest-moving, could build a mineable deposit in 100,000 years, he said.
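A hedged back-of-envelope calculation shows how such tiny per-quake deposits could still add up. Only the one-part-per-million figure comes from the study; the fluid mass per rupture and the quake frequency below are invented, order-of-magnitude assumptions.

```python
# Back-of-envelope: cumulative gold from repeated flash-vaporization events.
GOLD_PPM = 1e-6           # at most 1 ppm gold in the fluid (from the study)
FLUID_KG_PER_QUAKE = 1e4  # assumed: ~10 cubic metres of water flashed per rupture
QUAKES_PER_YEAR = 100     # assumed: small quakes are frequent on a fast fault
YEARS = 100_000           # timescale quoted for the Alpine Fault

gold_per_quake_kg = GOLD_PPM * FLUID_KG_PER_QUAKE  # ~10 grams: tiny, as stated
gold_kg = gold_per_quake_kg * QUAKES_PER_YEAR * YEARS
print(f"deposited: {gold_kg:,.0f} kg over {YEARS:,} years")
```

Under these made-up inputs the fault accumulates on the order of 100 tonnes of gold, in the range of a mineable deposit; changing any assumption by a factor of ten changes the answer by the same factor.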

Surprisingly, the quartz doesn’t even have time to crystallize, the study indicates. Instead, the mineral comes out of the fluid in the form of nanoparticles, perhaps even making a gel-like substance on the fracture walls. The quartz nanoparticles then crystallize over time.

Even earthquakes smaller than magnitude 4.0, which may rattle nerves but rarely cause damage, can trigger flash vaporization, the study finds.

“Given that small-magnitude earthquakes are exceptionally frequent in fault systems, this process may be the primary driver for the formation of economic gold deposits,” Weatherley told OurAmazingPlanet.

Quartz-linked gold has sourced some famous deposits, such as the placer gold that sparked the 19th-century California and Klondike gold rushes. Both deposits had eroded from quartz veins upstream. Placer gold consists of particles, flakes and nuggets mixed in with sand and gravel in stream and river beds. Prospectors traced the gravels back to their sources, where hard-rock mining continues today.

But earthquakes aren’t the only cataclysmic source of gold. Volcanoes and their underground plumbing are just as prolific, if not more so, at producing the precious metal. While Weatherley and Henley suggest that a similar process could take place under volcanoes, Wilkinson, who studies volcano-linked gold, said that’s not the case.

“Beneath volcanoes, most of the gold is not precipitated in faults that are active during earthquakes,” Wilkinson said. “It’s a very different mechanism.”

Understanding how gold forms helps companies prospect for new mines. “This new knowledge on gold-deposit formation mechanisms may assist future gold exploration efforts,” Weatherley said.

In their quest for gold, humans have pulled more than 188,000 tons (171,000 metric tons) of the metal from the ground, exhausting easily accessed sources, according to the World Gold Council, an industry group.

http://www.livescience.com/27953-earthquakes-make-gold.html

Putting the Clock in ‘Cock-A-Doodle-Doo’

130318132625-large
Of course, roosters crow with the dawn. But are they simply reacting to the environment, or do they really know what time of day it is? Researchers reporting on March 18 in Current Biology, a Cell Press publication, have evidence that puts the clock in “cock-a-doodle-doo.”

“‘Cock-a-doodle-doo’ symbolizes the break of dawn in many countries,” says Takashi Yoshimura of Nagoya University. “But it wasn’t clear whether crowing is under the control of a biological clock or is simply a response to external stimuli.”

That’s because other things — a car’s headlights, for instance — will set a rooster off, too, at any time of day. To find out whether the roosters’ crowing is driven by an internal biological clock, Yoshimura and his colleague Tsuyoshi Shimmura placed birds under constant light conditions and turned on recorders to listen and watch.

Under round-the-clock dim lighting, the roosters kept right on crowing each morning just before dawn, evidence that the behavior is governed by an internal circadian clock rather than by light cues. The roosters’ reactions to external events also varied over the course of the day.

In other words, predawn crowing and the crowing that roosters do in response to other cues both depend on a circadian clock.
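One conventional way to quantify such a free-running rhythm is to regress the daily onset of activity against day number: under constant conditions, an internal clock with a period shorter than 24 hours makes each day’s first crow drift slightly earlier. The sketch below uses invented onset times, not the study’s data.

```python
# Estimate a free-running circadian period from daily crow-onset times
# recorded under constant dim light (hours after midnight, days 0-6).
onsets = [5.0, 4.7, 4.4, 4.1, 3.8, 3.5, 3.2]  # invented: drifts 0.3 h/day earlier

n = len(onsets)
days = list(range(n))
mean_d = sum(days) / n
mean_o = sum(onsets) / n

# Least-squares slope of onset time vs. day = drift per day.
slope = sum((d - mean_d) * (o - mean_o) for d, o in zip(days, onsets)) \
        / sum((d - mean_d) ** 2 for d in days)

period = 24 + slope  # a negative drift means a clock running faster than 24 h
print(f"estimated free-running period: {period:.1f} h")
```

With these numbers the estimated period is 23.7 hours; a steady drift like this, rather than crowing locked to clock time, is what distinguishes an internal oscillator from a response to external cues.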

The findings are just the start of the team’s efforts to unravel the roosters’ innate vocalizations, which aren’t learned like songbird songs or human speech, the researchers say.

“We still do not know why a dog says ‘bow-wow’ and a cat says ‘meow,’” Yoshimura says. “We are interested in the mechanism of this genetically controlled behavior and believe that chickens provide an excellent model.”

Tsuyoshi Shimmura, Takashi Yoshimura. Circadian clock determines the timing of rooster crowing. Current Biology, 2013; 23 (6): R231 DOI: 10.1016/j.cub.2013.02.015

http://www.sciencedaily.com/releases/2013/03/130318132625.htm

Rare brain condition makes woman see everything upside down

bojana2_150313_blic_630bojana_150313_blic_630

Bojana Danilovic has what you might call a unique worldview. Due to a rare condition, she sees everything upside down, all the time.

The 28-year-old Serbian council employee uses an upside down monitor at work and relaxes at home in front of an upside down television stacked on top of the normal one that the rest of her family watches.

“It may look incredible to other people but to me it’s completely normal,” Danilovic told local newspaper Blic.

“I was born that way. It’s just the way I see the world.”

Experts from Harvard University and the Massachusetts Institute of Technology have been consulted after local doctors were flummoxed by the extremely unusual condition.

They say she is suffering from a neurological syndrome called “spatial orientation phenomenon,” Blic reports.

“They say my eyes see the images the right way up but my brain changes them,” Danilovic said.

“But they don’t really seem to know exactly how it happens, just that it does and where it happens in my brain.

“They told me they’ve seen the case histories of some people who write the way I see, but never someone quite like me.”

http://au.news.yahoo.com/thewest/a/-/world/16375095/rare-brain-condition-leaves-woman-seeing-world-upside-down/

Ancient urine provides clues to Africa’s past

sn-hyrax

When it comes to peering into Africa’s climate past, the ancient homes of hyraxes are number one. Paleoclimatologists typically dig up muddy core samples and analyze their pollen content for clues to long-ago weather, but parts of southern and central Africa are too dry to preserve such evidence. Enter the rock hyrax (Procavia capensis) (inset), a furry mammal that looks like a large groundhog but is actually a distant cousin of the elephant. Brian Chase, a geographical scientist at the University of Montpellier in France, turned to urine accretions left by the animals thousands of years ago; hyrax colonies use the same rock shelters for generation after generation, depositing pollen, calcium remnants, charcoal particles, stable isotopes, and other detritus in their urine (black splotches on rock in main image). Most climate models predict arid conditions in southern Africa 12,000 years ago, but the pollen content of hyrax urine from that period indicates that the animals ate grasses, which flourish in wetter conditions. Chase, who reported his findings here today at the annual meeting of the American Association for the Advancement of Science (which publishes ScienceNOW), believes his method can be used to give researchers a wealth of data to improve their models of Africa’s paleohistory. “You can turn a 2-meter pile of pee into a very nice section which you can bring back to the lab,” he told the audience. “These are very high-resolution records.”

http://news.sciencemag.org/sciencenow/2013/02/scienceshot-ancient-pee-provides.html?ref=hp

Thanks to Dr. Rajadhyaksha for bringing this to the attention of the It’s Interesting community.

Placebos Work Better for Nice People

placebo_pill

Having an agreeable personality might make you popular at work and lucky in love. It may also enhance your brain’s built-in painkilling powers, boosting the placebo effect.

Researchers at the University of Michigan, the University of North Carolina and the University of Maryland administered standard personality tests to 50 healthy volunteers, identifying general traits such as resiliency, straightforwardness, altruism and hostility. Each volunteer then received a painful injection, followed by a placebo—a sham painkiller. The volunteers who were resilient, straightforward or altruistic experienced a greater reduction in pain from the placebo compared with volunteers who had a so-called angry hostility personality trait.
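The comparison described above is, at heart, a two-group difference in placebo analgesia. A minimal sketch with invented scores (not the study’s data) shows how such a difference can be checked with a label-shuffling permutation test:

```python
import random

random.seed(1)

# Invented pain-reduction scores (0-100) under placebo for two trait groups.
agreeable = [32, 28, 41, 35, 30, 38, 27, 33]  # resilient/straightforward/altruistic
hostile = [12, 18, 9, 15, 21, 11, 16, 14]     # "angry hostility" trait

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(agreeable) - mean(hostile)  # 18.5 points more relief

# Permutation test: shuffle the group labels and count how often a
# difference at least this large arises by chance.
pooled = agreeable + hostile
n = len(agreeable)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed:
        count += 1

p = count / trials
print(f"difference: {observed:.1f} points, permutation p = {p:.4f}")
```

With these invented scores the group difference is far larger than label-shuffling produces by chance, which is the shape of evidence the researchers’ real comparison rests on.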

http://www.scientificamerican.com/article.cfm?id=placebos-work-better-for-nice-peopl