Eyes and eardrums move in sync

Simply moving the eyes triggers the eardrums to move too, says a new study by Duke University neuroscientists.

The researchers found that keeping the head still but shifting the eyes to one side or the other sparks vibrations in the eardrums, even in the absence of any sounds.

Surprisingly, these eardrum vibrations start slightly before the eyes move, indicating that motion in the ears and the eyes is controlled by the same motor commands deep within the brain.

“It’s like the brain is saying, ‘I’m going to move the eyes, I better tell the eardrums, too,’” said Jennifer Groh, a professor in the departments of neurobiology and of psychology and neuroscience at Duke.

The findings, which were replicated in both humans and rhesus monkeys, provide new insight into how the brain coordinates what we see and what we hear. They may also lead to a new understanding of hearing disorders, such as difficulty following a conversation in a crowded room.

The paper appeared Jan. 23 in Proceedings of the National Academy of Sciences.

It’s no secret that the eyes and ears work together to make sense of the sights and sounds around us. Most people find it easier to understand somebody if they are looking at them and watching their lips move. And in a famous illusion called the McGurk Effect, videos of lip cues dubbed with mismatched audio cause people to hear the wrong sound.

But researchers are still puzzling over where and how the brain combines these two very different types of sensory information.

“Our brains would like to match up what we see and what we hear according to where these stimuli are coming from, but the visual system and the auditory system figure out where stimuli are located in two completely different ways,” Groh said. “The eyes are giving you a camera-like snapshot of the visual scene, whereas for sounds, you have to calculate where they are coming from based on differences in timing and loudness across the two ears.”
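Groh's description of auditory localization can be made concrete with the classic interaural-time-difference calculation: a sound off to one side reaches the far ear slightly later than the near ear, and the brain uses that delay to infer direction. A minimal sketch using Woodworth's approximation (the head radius and speed of sound are textbook values, not figures from this study):

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth's approximation: ITD = (r / c) * (theta + sin(theta)),
    where theta is the source azimuth in radians (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source straight ahead produces no delay; a source 90 degrees to one
# side produces a delay of roughly two-thirds of a millisecond.
print(f"{interaural_time_difference(0) * 1e3:.2f} ms")
print(f"{interaural_time_difference(90) * 1e3:.2f} ms")
```

Delays this small are why the quote stresses "differences in timing and loudness across the two ears": the brain must resolve sub-millisecond cues to place a sound in space.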

Because the eyes are usually darting about within the head, the visual and auditory worlds are constantly in flux with respect to one another, Groh added.

In an experiment designed by Kurtis Gruters, a former doctoral student in Groh’s lab and co-first author on the paper, 16 participants were asked to sit in a dark room and follow shifting LED lights with their eyes. Each participant also wore small microphones in their ear canals that were sensitive enough to pick up the slight vibrations created when the eardrum sways back and forth.

Though eardrums vibrate primarily in response to outside sounds, the brain can also control their movements using small bones in the middle ear and hair cells in the cochlea. These mechanisms help modulate the volume of sounds that ultimately reach the inner ear and brain, and produce small sounds known as otoacoustic emissions.

Gruters found that when the eyes moved, both eardrums moved in sync with one another, one side bulging inward at the same time the other side bulged outward. They continued to vibrate back and forth together until shortly after the eyes stopped moving. Eye movements in opposite directions produced opposite patterns of vibrations.

Larger eye movements also triggered bigger vibrations than smaller eye movements, the team found.

“The fact that these eardrum movements are encoding spatial information about eye movements means that they may be useful for helping our brains merge visual and auditory space,” said David Murphy, a doctoral student in Groh’s lab and co-first author on the paper. “It could also signify a marker of a healthy interaction between the auditory and visual systems.”

The team, which included Christopher Shera at the University of Southern California and David W. Smith of the University of Florida, is still investigating how these eardrum vibrations impact what we hear, and what role they may play in hearing disorders. In future experiments, they will look at whether up and down eye movements also cause unique signatures in eardrum vibrations.

“The eardrum movements literally contain information about what the eyes are doing,” Groh said. “This demonstrates that these two sensory pathways are coupled, and they are coupled at the earliest points.”

Cole Jensen, an undergraduate neuroscience major at Duke, also coauthored the new study.

CITATION: “The Eardrums Move When the Eyes Move: A Multisensory Effect on the Mechanics of Hearing,” K. G. Gruters, D. L. K. Murphy, Cole D. Jensen, D. W. Smith, C. A. Shera and J. M. Groh. Proceedings of the National Academy of Sciences, Jan. 23, 2018. DOI: 10.1073/pnas.1717948115

Meet Zhong Zhong and Hua Hua, the First Monkey Clones Produced by Method that Made Dolly

The first primate clones made by somatic cell nuclear transfer are two genetically identical long-tailed macaques born recently at the Institute of Neuroscience of the Chinese Academy of Sciences in Shanghai. Researchers named the newborns Zhong Zhong and Hua Hua, born eight and six weeks ago, respectively, after the Chinese adjective “Zhōnghuá,” which means the Chinese nation or people. The technical milestone, presented January 24 in the journal Cell, makes it a realistic possibility for labs to conduct research with customizable populations of genetically uniform monkeys.

“There are a lot of questions about primate biology that can be studied by having this additional model,” says senior author SUN Qiang, Director of the Nonhuman Primate Research Facility at the Chinese Academy of Sciences Institute of Neuroscience. “You can produce cloned monkeys with the same genetic background except the gene you manipulated. This will generate real models not just for genetically based brain diseases, but also cancer, immune or metabolic disorders, and allow us to test the efficacy of the drugs for these conditions before clinical use.”

Zhong Zhong and Hua Hua are not the first primate clones; that title goes to Tetra, a rhesus monkey made in 1999 by a simpler method called embryo splitting (Science, v. 287, no. 5451, pp. 317-319). This approach is how identical twins arise naturally, but it can generate only up to four offspring at a time. Zhong Zhong and Hua Hua are the product of somatic cell nuclear transfer (SCNT), the technique used to create Dolly the sheep more than 20 years ago, in which researchers remove the nucleus from an egg cell and replace it with a nucleus taken from a differentiated body cell. The reconstructed egg then develops into a clone of the animal that donated the replacement nucleus.

Differentiated monkey cell nuclei, compared with those of other mammals such as mice or dogs, have proven resistant to SCNT. SUN and his colleagues overcame this challenge primarily by introducing, after the nuclear transfer, epigenetic modulators that switch on or off the genes inhibiting embryo development. The researchers found their success rate increased when they transferred nuclei taken from fetal differentiated cells, such as fibroblasts, a cell type in connective tissue. Zhong Zhong and Hua Hua are clones of the same macaque fetal fibroblasts. Nuclei from adult donor cells were also used, but those babies lived only a few hours after birth.

“We tried several different methods but only one worked,” says SUN. “There was much failure before we found a way to successfully clone a monkey.”

The first author, LIU Zhen, a postdoctoral fellow, spent three years practicing and optimizing the SCNT procedure, including quickly and precisely removing the nuclear material from the egg cell and testing various methods of promoting fusion of the donor nucleus with the enucleated egg. With the additional help of epigenetic modulators that reactivate suppressed genes in the differentiated nucleus, he achieved much higher rates of normal embryo development and pregnancy in the surrogate female monkeys.

“The SCNT procedure is rather delicate, so the faster you do it the less damage to the egg you have, and Dr. LIU has a green thumb for doing this,” says Muming Poo, a co-author on the study, who directs the Institute of Neuroscience of CAS Center for Excellence in Brain Science and Intelligence Technology and helps to supervise the project. “It takes a lot of practice, not everybody can do the enucleation and cell fusion process quickly and precisely, and it is likely that the optimization of transfer procedure greatly helped us to achieve this success.”

The researchers plan to continue improving the technique, which will also benefit from future work in other labs, and to monitor Zhong Zhong and Hua Hua’s physical and intellectual development. The babies are currently bottle-fed and are growing normally compared with monkeys their age. The group also expects more macaque clones to be born over the coming months.

The lab is following strict international guidelines for animal research set by the US National Institutes of Health, but it encourages the scientific community to discuss what should or should not be acceptable practice in the cloning of non-human primates. “We are very aware that future research using non-human primates anywhere in the world depends on scientists following very strict ethical standards,” Poo says.

This work was supported by grants from Chinese Academy of Sciences, the CAS Key Technology Talent Program, the Shanghai Municipal Government Bureau of Science and Technology, the National Postdoctoral Program for Innovative Talents and the China Postdoctoral Science Foundation.

http://english.cas.cn/head/201801/t20180123_189488.shtml

Fiber-Rich Diet Fights Off Obesity by Altering Microbiota

Consumption of dietary fiber can prevent obesity, metabolic syndrome and adverse changes in the intestine by promoting growth of “good” bacteria in the colon, according to a study led by Georgia State University.

The researchers found enriching the diet of mice with the fermentable fiber inulin prevented metabolic syndrome that is induced by a high-fat diet, and they identified specifically how this occurs in the body. Metabolic syndrome is a cluster of conditions closely linked to obesity that includes increased blood pressure, high blood sugar, excess body fat around the waist and abnormal cholesterol or triglyceride levels. When these conditions occur together, they increase a person’s risk of heart disease, stroke and diabetes.

Obesity and metabolic syndrome are associated with alterations in gut microbiota, the microorganism population that lives in the intestine. Modern changes in dietary habits, particularly the consumption of processed foods lacking fiber, are believed to affect microbiota and contribute to the increase of chronic inflammatory disease, including metabolic syndrome. Studies have found a high-fat diet destroys gut microbiota, reduces the production of epithelial cells lining the intestine and causes gut bacteria to invade intestinal epithelial cells.

This study found the fermentable fiber inulin restored gut health and protected mice against metabolic syndrome induced by a high-fat diet by restoring gut microbiota levels, increasing the production of intestinal epithelial cells and restoring expression of the protein interleukin-22 (IL-22), which prevented gut microbiota from invading epithelial cells. The findings are published in the journal Cell Host & Microbe.

“We found that manipulating dietary fiber content, particularly by adding fermentable fiber, guards against metabolic syndrome,” said Dr. Andrew Gewirtz, professor in the Institute for Biomedical Sciences at Georgia State. “This study revealed the specific mechanism used to restore gut health and suppress obesity and metabolic syndrome is the induction of IL-22 expression. These results contribute to the understanding of the mechanisms that underlie diet-induced obesity and offer insight into how fermentable fibers might promote better health.”

For four weeks, the researchers fed mice either a grain-based rodent chow, a high-fat diet (high fat and low fiber content with 5 percent cellulose as a source of fiber) or a high-fat diet supplemented with fiber (either fermentable inulin fiber or insoluble cellulose fiber). The high-fat diet is linked to an increase in obesity and conditions associated with metabolic syndrome.

They discovered that a diet supplemented with inulin reduced weight gain and noticeably reduced obesity induced by a high-fat diet, accompanied by a reduction in the size of fat cells. Dietary enrichment with inulin also markedly lowered cholesterol levels and largely prevented dysglycemia (abnormal blood sugar levels). Insoluble cellulose fiber, by contrast, only modestly reduced obesity and dysglycemia.

Supplementing the high-fat diet with inulin restored gut microbiota. However, inulin didn’t restore the microbiota levels to those of mice fed a chow diet. A distinct difference in microbiota levels remained between mice fed a high-fat diet versus those fed a chow diet. Enrichment of high-fat diets with cellulose had a mild effect on microbiota levels.

In addition, the researchers found switching mice from a grain-based chow diet to a high-fat diet resulted in a loss of colon mass, which they believe contributes to low-grade inflammation and metabolic syndrome. When they switched mice back to a chow diet, the colon mass was fully restored.

https://www.technologynetworks.com/tn/news/fiber-rich-diet-fights-off-obesity-by-altering-microbiota-296642

US and Russian computer scientists develop algorithm called VarQuest that discovers over 1,000 antibiotic variants in a few hours

A team of American and Russian computer scientists has developed an algorithm that can rapidly search databases to discover novel variants of known antibiotics — a potential boon in fighting antibiotic resistance.

In just a few hours, the algorithm, called VarQuest, identified 10 times more variants of peptidic natural products, or PNPs, than all previous PNP discovery efforts combined, the researchers report in the latest issue of the journal Nature Microbiology. Previously, such a search might have taken hundreds of years of computation, said Hosein Mohimani, assistant professor in Carnegie Mellon University’s Computational Biology Department.

“Our results show that the antibiotics produced by microbes are much more diverse than had been assumed,” Mohimani said. VarQuest found more than a thousand variants of known antibiotics, he noted, providing a big picture perspective that microbiologists could not obtain while studying one antibiotic at a time.

Mohimani and Pavel A. Pevzner, professor of computer science at the University of California, San Diego, designed and directed the effort, which included colleagues at St. Petersburg State University in Russia.

PNPs have an unparalleled track record in pharmacology. Many antimicrobial and anticancer agents are PNPs, including the so-called “antibiotics of last resort,” vancomycin and daptomycin. As concerns mount regarding antibiotic drug resistance, finding more effective variants of known antibiotics is a means for preserving the clinical efficacy of antibiotic drugs in general.

The search for these novel variants received a boost in recent years with the advent of high-throughput methods that enable environmental samples to be processed in batches, rather than one at a time. Researchers also recently launched the Global Natural Products Social (GNPS) molecular network, a database of mass spectra of natural products collected by researchers worldwide. The GNPS, based at UC San Diego, already contains more than a billion mass spectra.

The GNPS represents a gold mine for drug discovery, Mohimani said. The VarQuest algorithm, which employs a smarter way of indexing the database to enhance searches, should help GNPS meet its promise, he added.

“Natural product discovery is turning into a Big Data territory, and the field has to get prepared for this transformation in terms of collecting, storing and making sense of Big Data,” Mohimani said. “VarQuest is the first step toward digesting the Big Data already collected by the community.”
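The paper's title points to the core idea: a modification-tolerant database search, in which a spectrum is matched to a known compound even when its mass is shifted by an unknown chemical modification. A toy sketch of that principle (the compound names, reference masses, and tolerances below are invented for illustration; the real algorithm indexes mass spectra far more cleverly than this brute-force scan):

```python
# Toy modification-tolerant search: match observed precursor masses against
# known compounds, allowing an unexplained mass shift (a putative variant).
KNOWN_PNPS = {            # hypothetical reference masses in daltons
    "vancomycin-like": 1449.3,
    "daptomycin-like": 1620.7,
}

def find_variants(observed_masses, max_shift=150.0, tol=0.02):
    """Return (mass, compound, shift, kind) for observed masses within
    `max_shift` Da of a known compound; a shift near zero is an exact
    match, anything larger is a candidate variant carrying a modification."""
    hits = []
    for m in observed_masses:
        for name, ref in KNOWN_PNPS.items():
            shift = m - ref
            if abs(shift) <= max_shift:
                kind = "exact" if abs(shift) <= tol else "variant"
                hits.append((m, name, round(shift, 2), kind))
    return hits

spectra = [1449.31, 1463.32, 1620.70, 1762.65, 900.0]
for hit in find_variants(spectra):
    print(hit)
```

The payoff of tolerating shifts is exactly what the article describes: known antibiotics act as anchors, and the "shift" column surfaces the previously uncounted variants.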

Reference: Gurevich, A., Mikheenko, A., Shlemov, A., Korobeynikov, A., Mohimani, H., & Pevzner, P. A. (2018). Increased diversity of peptidic natural products revealed by modification-tolerant database search of mass spectra. Nature Microbiology, 1. https://doi.org/10.1038/s41564-017-0094-2

https://www.technologynetworks.com/informatics/news/algorithm-unearths-over-1000-antibiotic-proteins-in-a-few-hours-296639

Desire and Dislike Mapped in the Amygdala

The amygdala is a tiny hub of emotions where in 2016 a team led by MIT neuroscientist Kay Tye found specific populations of neurons that assign good or bad feelings, or “valence,” to experience. Learning to associate pleasure with a tasty food, or aversion to a foul-tasting one, is a primal function and key to survival.

In a new study in Cell Reports, Tye’s team at the Picower Institute for Learning and Memory returns to the amygdala for an unprecedentedly deep dive into its inner workings. Focusing on a particular section called the basolateral amygdala, the researchers show how valence-processing circuitry is organized and how key neurons in those circuits interact with others. What they reveal is a region with distinct but diverse and dynamic neighborhoods where valence is sorted out by both connecting with other brain regions and sparking cross-talk within the basolateral amygdala itself.

“Perturbations of emotional valence processing is at the core of many mental health disorders,” says Tye, associate professor of neuroscience at the Picower Institute of Learning and Memory and the Department of Brain and Cognitive Sciences. “Anxiety and addiction, for example, may be an imbalance or a misassignment of positive or negative valence with different stimuli.”

Despite the importance of valence assignment in both healthy behavior and psychiatric disorders, neuroscientists don’t know how the process really works. The new study therefore sought to expose how the neurons and circuits are laid out and how they interact.

Bitter, sweet

To conduct the study, lead author Anna Beyeler, a former postdoc in Tye’s lab and currently a faculty member at the University of Bordeaux in France, led the group in training mice to associate appealing sucrose drops with one tone and bitter quinine drops with another. They recorded the response of different neurons in the basolateral amygdala when the tones were played to see which ones were associated with the conditioned learned valence of the different tastes. They labeled those key neurons associated with valence encoding and engineered them to become responsive to pulses of light. When the researchers then activated them, they recorded the electrical activity not only of those neurons but also of many of their neighbors to see what influence their activity had in local circuits.

They also found, labeled, and similarly measured neurons that became active when a mouse actually licked the bitter quinine. With this additional step, they could measure not only the neural activity associated with the learned valence of the bitter taste but also that associated with the innate reaction to the actual experience.

Later in the lab, they used tracing technologies to highlight three different kinds of neurons more fully, visualizing them in distinct colors depending on the region their tendril-like axons projected to. Neurons that project to a region called the nucleus accumbens are predominantly associated with positive valence, and those that connect to the central amygdala are mainly associated with negative valence. Neurons uniquely activated by the unconditioned experience of actually tasting the quinine tended to project to the ventral hippocampus.

In all, the team mapped over 1,600 neurons.

To observe the three-dimensional configuration of these distinct neuron populations, the researchers turned the surrounding brain tissues clear using a technique called CLARITY, invented by Kwanghun Chung, assistant professor of chemical engineering and neuroscience and a colleague in the Picower Institute.

Neighborhoods without fences

Beyeler, Tye, and their co-authors were able to make several novel observations about the inner workings of the basolateral amygdala’s valence circuitry.

One finding was that the different functional populations of neurons tended to cluster together in neighborhoods, or “hotspots.” For example, picturing the almond-shaped amygdala as standing upright on its fat bottom, the neurons projecting to the central amygdala tended to cluster toward the point at the top and then on the right toward the bottom. Meanwhile the neurons that projected to the nucleus accumbens tended to run down the middle, and the ones that projected to the hippocampus were clustered toward the bottom on the opposite side from the central amygdala projectors.

Despite these trends, the researchers also noted that the neighborhoods were hardly monolithic. Instead, neurons of different types frequently intermingled, creating a diversity in which the predominant neuron type was never far from at least some representatives of the other types.

Meanwhile, their electrical activity data revealed that the different types exerted different degrees of influence over their neighbors. For example, neurons projecting to the central amygdala, in keeping with their association with negative valence, had a very strong inhibitory effect on neighbors, while nucleus accumbens projectors had a smaller influence that was more balanced between excitation and inhibition.

Tye speculates that the intermingling of neurons of different types, including their propensity to influence each other with their activity, may provide a way for competing circuits to engage in cross-talk.

“Perhaps the intermingling that there is might facilitate the ability of these neurons to influence each other,” says Tye.

Notably, Tye’s research has indicated that while the projections of the different cell types may be immutable, the influence those cells have over each other is flexible. The basolateral amygdala may therefore be arranged both to assign valence and to negotiate it, for instance when a mouse spies some desirable cheese but a mean cat is also nearby.

“This helps us understand how form might give rise to function,” says Tye.

Reference:
Beyeler, A., et al. “Organization of Valence-Encoding and Projection-Defined Neurons in the Basolateral Amygdala.” Cell Reports (2018). https://doi.org/10.1016/j.celrep.2017.12.097

https://www.technologynetworks.com/neuroscience/news/valence-mapped-brain-study-reveals-roots-of-desire-and-dislike-in-the-amygdala-296668

Pupil size changes with different stages of sleep, getting smaller as sleep gets deeper, in mice

When people are awake, their pupils regularly change in size. Those changes are meaningful, reflecting shifting attention or vigilance, for example. Now, researchers reporting in Current Biology on January 18 have found in studies of mice that pupil size also fluctuates during sleep. They also show that pupil size is a reliable indicator of sleep states.

“We found that pupil size rhythmically fluctuates during sleep,” says Daniel Huber of the University of Geneva in Switzerland. “Intriguingly, these pupil fluctuations follow the sleep-related brain activity so closely that they can indicate with high accuracy the exact stage of sleep—the smaller the pupil, the deeper the sleep.”

Studying pupil size during sleep had always been a challenge for an obvious reason: people and animals generally sleep with their eyes closed. Huber says he and his colleagues were inspired to pursue the question after discovering that their laboratory mice sometimes sleep with their eyes open. They knew that pupil size varies strongly during wakefulness. What, they wondered, happens during sleep?

To investigate this question, they developed a novel optical pupil-tracking system for mice. The device includes an infrared light positioned close to the head of the animal. That invisible light travels through the skull and brain to illuminate the back of the eye. When the eyes are imaged with an infrared camera, the pupils appear as bright circles. Thanks to this new method, it was suddenly possible to track changes in pupil size accurately, particularly when the animals snoozed naturally with their eyelids open.
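The back-illumination trick effectively reduces pupil tracking to finding a bright region in each infrared frame. A minimal sketch of that measurement on a synthetic frame (the threshold, frame size, and simple pixel count are illustrative assumptions; the system described in the paper is more sophisticated):

```python
# Estimate pupil size as the area of the bright region in an infrared frame,
# here simulated as a 2D list of pixel intensities (0-255).
def pupil_area(frame, threshold=200):
    """Count pixels brighter than `threshold`; with back-illumination the
    pupil is the dominant bright region, so this count tracks pupil size."""
    return sum(1 for row in frame for px in row if px > threshold)

# Synthetic 5x5 frame: a dim background with a 2x2 bright "pupil".
frame = [[30] * 5 for _ in range(5)]
for r in (2, 3):
    for c in (2, 3):
        frame[r][c] = 240

print(pupil_area(frame))  # 4 bright pixels
```

Repeating this measurement frame by frame yields the time series of pupil size that the researchers then compared against sleep-stage recordings.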

Their images show that mouse pupils rhythmically fluctuate during sleep and that those fluctuations are not at all random; they correlate with changes in sleep states.

Further experiments showed that changes in pupil size are not just a passive phenomenon; they are actively controlled by the parasympathetic autonomic nervous system. The evidence suggests that in mice, at least, pupils narrow in deep sleep to keep a sudden flash of light from waking the animals.

“The common saying that ‘the eyes are the window to the soul’ might even hold true behind closed eyelids during sleep,” Özge Yüzgeç, the student conducting the study, says. “The pupil continues to play an important role during sleep by blocking sensory input and thereby protecting the brain in periods of deep sleep, when memories should be consolidated.”

Huber says they would like to find out whether the findings hold in humans and whether their new method can be adapted for use in the sleep clinic. “Inferring brain activity by non-invasive pupil tracking might be an interesting alternative or complement to electrode recordings,” he says.

Reference:

Yüzgeç, Ö., Prsa, M., Zimmermann, R., & Huber, D. (2018). Pupil Size Coupling to Cortical States Protects the Stability of Deep Sleep via Parasympathetic Modulation. Current Biology. doi:10.1016/j.cub.2017.12.049

https://www.technologynetworks.com/neuroscience/news/pupil-size-couples-to-cortical-states-to-protect-deep-sleep-stability-296519

This new blood test can detect early signs of 8 kinds of cancer

By DEBORAH NETBURN

Scientists have developed a noninvasive blood test that can detect signs of eight types of cancer long before any symptoms of the disease arise.

The test, which can also help doctors determine where in a person’s body the cancer is located, is called CancerSEEK. Its genesis is described in a paper published Thursday in the journal Science.

The authors said the new work represents the first noninvasive blood test that can screen for a range of cancers all at once: cancer of the ovary, liver, stomach, pancreas, esophagus, colon, lung and breast.

Together, these eight forms of cancer are responsible for more than 60% of cancer deaths in the United States, the authors said.

In addition, five of them — ovarian, liver, stomach, pancreatic and esophageal cancers — currently have no screening tests.

“The goal is to look for as many cancer types as possible in one test, and to identify cancer as early as possible,” said Nickolas Papadopoulos, a professor of oncology and pathology at Johns Hopkins who led the work. “We know from the data that when you find cancer early, it is easier to kill it by surgery or chemotherapy.”

CancerSEEK, which builds on 30 years of research, relies on two signals that a person might be harboring cancer.

First, it looks for 16 telltale genetic mutations in bits of free-floating DNA that have been deposited in the bloodstream by cancerous cells. Because these are present in such trace amounts, they can be very hard to find, Papadopoulos said. For example, one blood sample might have thousands of pieces of DNA that come from normal cells, and just two or five pieces from cancerous cells.

“We are dealing with a needle in a haystack,” he said.

To overcome this challenge, the team relied on recently developed digital technologies that allowed them to efficiently and cost-effectively sequence each individual piece of DNA one by one.

“If you take the hay in the haystack and go through it one by one, eventually you will find the needle,” Papadopoulos said.

In addition, CancerSEEK screens for eight proteins that are frequently found in higher quantities in the blood samples of people who have cancer.

By measuring these two signals in tandem, CancerSEEK was able to detect cancer in 70% of blood samples pulled from 1,005 patients who had already been diagnosed with one of eight forms of the disease.

The test appeared to be more effective at finding some types of cancer than others, the authors noted. For example, it was able to spot ovarian cancer 98% of the time, but was successful at detecting breast cancer only 33% of the time.

The authors also report that CancerSEEK was better at detecting later stage cancer compared to cancer in earlier stages. It was able to spot the disease 78% of the time in people who had been diagnosed with stage III cancer, 73% of the time in people with stage II cancer and 43% of the time in people diagnosed with stage I cancer.

“I know a lot of people will say this sensitivity is not good enough, but for the five tumor types that currently have no test, going from zero chances of detection to what we did is a very good beginning,” Papadopoulos said.

It is also worth noting that when the researchers ran the test on 812 healthy control blood samples, they saw only seven false-positive results.
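Those control numbers pin down the test's specificity, the metric that matters most for a screening tool. A quick check of the arithmetic (the counts are taken directly from this article; this is bookkeeping, not the paper's own statistical analysis):

```python
# Specificity implied by the reported control results:
# 7 false positives among 812 healthy control samples.
controls = 812
false_positives = 7
true_negatives = controls - false_positives

specificity = true_negatives / controls          # fraction of healthy
false_positive_rate = false_positives / controls  # samples flagged wrongly

print(f"specificity ≈ {specificity:.1%}")             # ≈ 99.1%
print(f"false-positive rate ≈ {false_positive_rate:.2%}")  # ≈ 0.86%
```

A specificity above 99% is what allows a test like this to be contemplated for screening broad, mostly healthy populations.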

“Very high specificity was essential because false-positive results can subject patients to unnecessary invasive follow-up tests and procedures to confirm the presence of cancer,” said Kenneth Kinzler, a professor of oncology at Johns Hopkins who also worked on the study.

Finally, the researchers used machine learning to determine how different combinations of proteins and mutations could provide clues to where in the body the cancer might be. They found they could narrow down the location of a tumor to just a few anatomic sites in 83% of patients.

CancerSEEK is not yet available to the public, and it probably won’t be for a year or longer, Papadopoulos said.

“We are still evaluating the test, and it hasn’t been commercialized yet,” he said. “I don’t want to guess when it will be available, but I hope it is soon.”

He said that eventually the test could cost less than $500 to run and could easily be administered by a primary care physician’s office.

In theory, a blood sample would be taken in a doctor’s office, and then sent to a lab that would look for the combination of mutations and proteins that would indicate that a patient has cancer. The data would then go into an algorithm that would determine whether or not the patient had the disease and where it might be.

“The idea is: You give blood, and you get results,” Papadopoulos said.

http://beta.latimes.com/science/sciencenow/la-sci-sn-blood-test-cancer-20180118-story.html

Machines Teaching Each Other Could Be the Biggest Exponential Trend in AI

By Aaron Frank

During an October 2015 press conference announcing the autopilot feature of the Tesla Model S, which allowed the car to drive semi-autonomously, Tesla CEO Elon Musk said each driver would become an “expert trainer” for every Model S. Each car could improve its own autonomous features by learning from its driver, but more significantly, when one Tesla learned from its own driver—that knowledge could then be shared with every other Tesla vehicle.

As Fred Lambert with Electrek reported shortly after, Model S owners noticed how quickly the car’s driverless features were improving. In one example, Teslas were taking incorrect early exits along highways, forcing their owners to manually steer the car along the correct route. After just a few weeks, owners noted the cars were no longer taking premature exits.

“I find it remarkable that it is improving this rapidly,” said one Tesla owner.

Intelligent systems, like those powered by the latest round of machine learning software, aren’t just getting smarter: they’re getting smarter faster. Understanding the rate at which these systems develop can be a particularly challenging part of navigating technological change.

Ray Kurzweil has written extensively on the gaps in human understanding between what he calls the “intuitive linear” view of technological change and the “exponential” rate of change now taking place. Almost two decades after writing the influential essay on what he calls “The Law of Accelerating Returns”—a theory of evolutionary change concerned with the speed at which systems improve over time—connected devices are now sharing knowledge between themselves, escalating the speed at which they improve.

“I think that this is perhaps the biggest exponential trend in AI,” said Hod Lipson, professor of mechanical engineering and data science at Columbia University, in a recent interview.

“All of the exponential technology trends have different ‘exponents,’” Lipson added. “But this one is potentially the biggest.”

According to Lipson, what we might call “machine teaching”—when devices communicate gained knowledge to one another—is a radical step up in the speed at which these systems improve.

“Sometimes it is cooperative, for example when one machine learns from another like a hive mind. But sometimes it is adversarial, like in an arms race between two systems playing chess against each other,” he said.

Lipson believes this way of developing AI is a big deal, in part, because it can bypass the need for training data.

“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”

Lipson sees the recent breakthrough from Google’s DeepMind, a project called AlphaGo Zero, as a stunning example of an AI learning without training data. Many are familiar with AlphaGo, the machine learning AI that became the world’s best Go player after studying a massive training dataset comprising millions of human Go moves. AlphaGo Zero, however, was able to beat even that Go-playing AI, simply by learning the rules of the game and playing against itself—no training data necessary. Then, just to show off, it beat the world’s best chess-playing software after starting from scratch and training for only eight hours.

Now imagine thousands or more AlphaGo Zeroes instantaneously sharing their gained knowledge.

This isn’t just about games, though. Already, we’re seeing how it will have a major impact on the speed at which businesses can improve the performance of their devices.

One example is GE’s new industrial digital twin technology—a software simulation of a machine that models what is happening with the equipment. Think of it as a machine with its own self-image—which it can also share with technicians.

A steam turbine with a digital twin, for instance, can measure steam temperatures, rotor speeds, cold starts, and other data to predict breakdowns and warn technicians to prevent expensive repairs. The digital twins make these predictions by studying their own performance, but they also rely on models every other steam turbine has developed.
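At its simplest, a digital twin of this kind is a running model of expected behavior compared against live sensor readings, with large deviations triggering a warning. The sketch below is entirely hypothetical: the sensor names, linear model, and threshold are invented for illustration and have nothing to do with GE's actual software.

```python
# Toy digital twin: predict a healthy turbine's rotor speed from steam
# temperature, compare against the measured value, and flag large
# deviations. The linear model and all numbers are invented.

def expected_rotor_rpm(steam_temp_c: float) -> float:
    """Invented healthy-machine model: rpm rises linearly with steam temp."""
    return 1500.0 + 4.0 * (steam_temp_c - 400.0)

def check_turbine(steam_temp_c: float, measured_rpm: float,
                  tolerance_rpm: float = 100.0) -> str:
    """Compare a live reading against the twin's prediction."""
    deviation = measured_rpm - expected_rotor_rpm(steam_temp_c)
    return "WARN" if abs(deviation) > tolerance_rpm else "OK"

print(check_turbine(450.0, 1710.0))  # model expects 1700 rpm; within tolerance -> OK
print(check_turbine(450.0, 1450.0))  # 250 rpm below expectation -> WARN
```

A real twin would use learned statistical models rather than a fixed line, and, as the article notes, those models can be pooled across every machine in the fleet.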

As machines begin to learn from their environments in new and powerful ways, their development is accelerated by communicating what they learn with each other. The collective intelligence of every GE turbine, spread across the planet, can accelerate each individual machine’s predictive ability. Where it may take one driverless car significant time to learn to navigate a particular city—one hundred driverless cars navigating that same city together, all sharing what they learn—can improve their algorithms in far less time.
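One common way to implement this kind of fleet-wide sharing is simple parameter averaging: each device trains on its own small local dataset, then the fleet combines the learned parameters so every device benefits from all the data. The sketch below is a generic toy version of that idea (the task, data, and numbers are invented; this is not Tesla's or GE's actual protocol).

```python
import random

# Toy fleet learning: each "car" fits a one-parameter model (slope w)
# to its own noisy local observations of y = 3*x, then the fleet
# averages the learned slopes. Any single car has too little data to
# estimate the slope precisely; the fleet average is far more accurate.

random.seed(0)
TRUE_SLOPE = 3.0

def local_training(n_samples: int, steps: int = 300, lr: float = 0.1) -> float:
    """One car: gradient descent on its own noisy data, returns slope w."""
    data = [(x, TRUE_SLOPE * x + random.gauss(0, 0.1))
            for x in [random.uniform(-1, 1) for _ in range(n_samples)]]
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# 100 cars, 5 observations each; share by averaging parameters.
fleet = [local_training(n_samples=5) for _ in range(100)]
shared_w = sum(fleet) / len(fleet)
print(round(shared_w, 2))  # close to the true slope of 3.0
```

The "exponential" flavor Lipson describes comes from the fact that every new device both consumes and contributes to this shared model.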

As other AI-powered devices begin to leverage this shared knowledge transfer, we could see an even faster pace of development. So if you think things are developing quickly today, remember we’re only just getting started.


Sea Spiders Pump Blood With Their Guts, Not Their Hearts

By Ed Yong

When an animal’s body consists almost entirely of leg, its biology gets really weird.

If sea spiders had a creation myth, it would go something like this. An inebriated deity stumbles home after a hard day’s creating, finds a bunch of leftover legs, glues them together, and zaps them to life before passing out and forgetting to add anything else. The resulting creature—all leg and little else—scuttles away to conquer the oceans.

This is fiction, of course, but it’s only slightly more fanciful than the actual biology of sea spiders. These bizarre marine creatures have four to six pairs of spindly, jointed legs that convene at a torso that barely exists. “They have to do most of their business in their legs,” says Amy Moran from the University of Hawaii at Mānoa, who studies these animals. They have, for example, no lungs, gills, or respiratory organs of any kind. Instead, they rely on oxygen diffusing passively across the large surface area provided by their legs.

Their genitals are found on their legs, too. A female will grow eggs in her thighs—“it’s as if my arms were full of ping-pong balls,” says Moran—and release them through pores. A male, clambering over her, releases sperm from similar pores to fertilize the eggs, which he scoops up and carries around. Among these animals, the dads care for the young.

The legs are also where most of sea spiders’ digestion takes place. There’s so little distance between their mouths and anuses that their guts send long branches down each leg. Put your wrists together, spread your hands out, and splay your fingers—that’s the shape of a sea spider’s gut.

For all their prominence, the legs themselves are oddly clumsy. “[Sea spiders are] very slow, they stumble around, and they fall over a lot,” says Moran. “Frankly, I don’t know how they get away with being so ineffective.” Perhaps it has to do with their choice of food. They feed on immobile prey like sea anemones or sponges, whose juices they suck with stabbing mouthparts at the end of their tiny heads.

Sea spiders, also known as pycnogonids, aren’t actual spiders. There’s a hazy consensus that they belong with the chelicerates—the group that does include true spiders—although some geneticists think that they’re more distantly related. Regardless, “they’re about as closely related to a terrestrial spider as a seahorse is to a horse,” says Moran.

They do live in the sea, though, so the Department of Naming Things got things half-right at least. There are around 1,300 known species, found in oceans all over the world. The smallest are just a millimeter long. The biggest, found in Antarctica, are the size of dinner plates. To prove this, here is a picture of one sitting on a dinner plate.

Moran and her colleague Arthur Woods started studying these creatures because they wanted to know why the giant Antarctic species got so big. Bigger animals need more oxygen. They need to get more of the gas into their bloodstreams, and they need to pump that blood around their bodies. Humans do so with our hearts, but when Woods examined the hearts of sea spiders, he discovered yet another remarkable trait about these already remarkable animals.

He injected fluorescent chemicals into their blood to see how far their hearts can push blood into their legs. Not very far, it turns out. Instead, the creatures largely pump their blood using their guts.

Each leg is a solid tube containing a branch of the sea spider’s guts and some blood vessels. The guts can contract to move food along, just as ours can. But unlike our abdomens, which are flexible, a sea spider’s leg is hard and can’t stretch or expand. So when the gut pushes digestive fluids down a leg, it also forces blood back in the other direction; when it pushes the fluids back up, the blood flows back down.

After oxygen passively diffuses into the animal’s legs, it is actively pushed into its torso by the contracting guts.

Woods confirmed this by capturing sea spiders and lowering the oxygen levels in their water. In response, the animals’ guts started contracting faster. “It’s like when you take a person up to altitude and they breathe faster and their heart rate goes up,” says Moran. Same thing, except the sea spiders “are using their legs as gills and their guts as hearts.”

The creature’s actual heart is too small and weak to push blood down the long legs. It only takes over once the blood has reached the animal’s core, circulating it around the torso and head. Finally, says Claudia Arango, a sea spider specialist who was involved in the new study, “we know how they live without having a specialized system for pumping blood.”

Nothing else in nature behaves quite like this. Sea cucumbers breathe using feathery outgrowths of their guts, and several insect larvae breathe using butt snorkels. But all of these species have changed a part of their gut to take in oxygen. The sea spiders are the only ones that use the guts to pump their blood.

Like everything else about sea spiders, the origin of this weird circulatory system is mysterious. These animals are an ancient group that first appeared around 500 million years ago, during the Cambrian period—the point in Earth’s history when most modern animal groups exploded into existence. It could be that the earliest members already had spindly legs and branching guts, and simply co-opted these into ersatz hearts. Alternatively, the double-purpose guts may have come first, allowing the sea spiders to evolve their long legs.

Whatever the route, given how widespread and persistent these animals are, the results were undeniably successful.

https://www.theatlantic.com/science/archive/2017/07/sea-spiders-pump-blood-with-their-guts-not-their-hearts/533088/

Graphene filters economically and effectively generate clean water by removing chemicals, solutes, salts and pesticides

by Jane Bird

New approaches to filtration and extracting moisture from air promise to alleviate the world’s looming water scarcity crisis.

Filtration is being transformed by thin sheets of graphene, a carbon-based material first isolated in 2004 at Manchester University. Rahul Raveendran Nair, the university’s professor of materials physics, says graphene has the potential to deliver large quantities of clean water via desalination and the removal of pollutants.

Meanwhile, improved technology for capturing water vapour from the air holds out hope for arid regions.

In April 2017, Prof Nair demonstrated that a multi-layer membrane made from graphene oxide can filter out the sodium chloride in seawater much more quickly and cleanly than existing techniques.

“The graphene filter is like a mesh or sieve with holes so small that salt molecules cannot pass through,” Prof Nair says. The filters were recently shown to be able to filter even the dye molecules of whisky, turning the liquid colourless.

The university is in talks with potential manufacturers with a view to enlarging the membranes — currently A4-sized — and demonstrating that the technology can be used in practical applications.

“We have shown it works in the laboratory, now we want to demonstrate it in realistic conditions,” says Prof Nair. He hopes full-sized desalination plants with graphene membranes will be possible within five years.

Commitments to existing desalination technology may hold back large-scale commercial development of graphene systems in the short term, Prof Nair says. However, he thinks that a small-scale version of the graphene filter can be developed for bottles and household units within two years. To explore the possibilities, the university is collaborating with Icon Lifesaver, a company based in the east of England.

The business currently makes filters that can remove microbes, bacteria and viruses. Joe Lovegrove, technical manager of Icon Lifesaver, says: “Graphene has the potential to create the ultimate filter that can also take out chemicals, solutes, salts and compounds such as pesticides.

“Its scope is absolutely massive. It would give us effectively the ultimate water filter that you could use to convert water at any source from being dangerous to safe as quickly as you can drink it.”

Icon Lifesaver hopes to demonstrate that 400ml and 750ml bottles with easy-to-clean filters are viable for mass production. “People don’t want to spend time and effort cleaning,” says Mr Lovegrove.

“Initial studies are showing that all this is absolutely feasible with the graphene membrane — it performs superbly, better than anything else.”

Graphene filters would have the further benefit that they will not let any liquid through when they come to the end of their life. “This is an advantage over many existing water filters where it is impossible to tell if they still work properly,” Mr Lovegrove says.

During the next few months, the portable graphene filters will be tested to check that they work for chemical contaminants such as cadmium, copper, arsenic, nitrites, nitrates and pesticides.

Once volume production is under way, the bottles are likely to cost £100 to £150 each. 

In arid areas, clean water production tends to focus on extracting moisture molecules from the air and cooling them down until they form droplets.

Currently this only works where humidity is at least 60 per cent. It is also very energy-intensive, says Beth Koigi, co-founder of Majik Water, a company set up to solve the problem. Majik Water uses desiccants — sponge-like materials such as silica gel — which extract water from the air. This is then released when the gel is heated. It can be re-used many times.

“Our prototype can work in humidity of just 35 per cent, and uses simple equipment and techniques to minimise energy demand,” Ms Koigi says. Each device can generate 10 litres a day and runs on solar power.

The goal is to produce clean water for a price of one cent per litre, far cheaper than bottled water or boiling water to purify it, while removing the risk of mineral contaminants. Achieving the target price will depend on reducing the cost of the solar panels, which currently account for $1,000 of the $1,400 cost of a 20 litre device.
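A quick back-of-envelope check of that one-cent target, using the figures above plus one assumption: the article gives no device lifetime, so the five years below is purely illustrative.

```python
# Amortized cost per litre for Majik Water's device, using the quoted
# figures ($1,400 per 20-litre/day device, $1,000 of it solar panels).
# The 5-year lifetime is an assumption made for this calculation only.

DEVICE_COST = 1400.0      # US dollars
DAILY_OUTPUT = 20.0       # litres per day
LIFETIME_YEARS = 5        # assumed, not stated in the article

litres = DAILY_OUTPUT * 365 * LIFETIME_YEARS
cost_per_litre = DEVICE_COST / litres
print(f"${cost_per_litre:.3f} per litre")  # ≈ $0.038, nearly four times the one-cent goal
```

Under this assumption, even eliminating the $1,000 of solar panels entirely would leave roughly $0.011 per litre, which is why panel cost is the lever the team is focused on.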

After initial research in California, development has transferred to Kenya, where the team is testing whether customers prefer household devices producing 20 litres a day or community systems generating more than 200 litres. Research might continue in Kenya or move to the Atacama Desert in Chile, the driest place on earth.

Silica gel is abundant, safe and cheap, so it will help keep costs down. The team is also watching development of metal-organic frameworks (MOFs), which can hold up to three times their own weight in water, compared with 35 per cent for silica gel. They could produce more water with less energy but are still at the stage of laboratory testing. At present they are expensive and unlikely to be widely available in the next five to 10 years.

https://www.ft.com/content/d768030e-d8ec-11e7-9504-59efdb70e12f