The Humanity Star is now orbiting Earth

The universe has some added sparkle now that a shiny, spherical satellite is traveling around our planet.

The newly launched satellite, dubbed the Humanity Star, resembles a disco ball. Its mission: to serve as a focal point for humanity and a reminder of our fragile place in the universe.
“No matter where you are in the world, or what is happening in your life, everyone will be able to see the Humanity Star in the night sky,” said Peter Beck, founder of the private company Rocket Lab, in a statement.

“My hope is that all those looking up at it will look past it to the vast expanse of the universe and think a little differently about their lives, actions and what is important for humanity.”
The satellite is made from carbon fiber and has 65 reflective panels that reflect sunlight back to Earth. The Humanity Star spins rapidly, creating a flashing effect.

https://www.cnn.com/2018/01/25/world/humanity-star-launch-trnd/index.html

Why roosters don’t go deaf from their crowing

by Noel Kirkpatrick

There’s a reason a rooster’s crow rouses the farm from a night’s slumber: It can be a very, very loud noise. It’s so loud, in fact, that you have to wonder how roosters don’t lose their hearing.

Which is exactly what researchers from the University of Antwerp and the University of Ghent in Belgium were wondering when they undertook a study published in the journal Zoology.

https://www.sciencedirect.com/science/article/pii/S0944200617301976

The secret? Roosters can’t really hear themselves cock-a-doodle-doo.


A crow for your ears only

Our ears are delicate. A sound louder than 120 decibels (roughly the volume of a chainsaw) can cause permanent hearing loss. Over prolonged exposure, the air pressure waves from such noise can harm or even kill the cells that convert sound waves into signals our brains can process. At 130 decibels, half a second of exposure is enough to cause some hearing damage.

Given that roosters can crow at least as loud as 100 decibels, the level of a jackhammer, you'd expect them to go at least partly deaf over the course of their lifetimes. Instead, they continue to hear just fine and to greet each new day with a blaring crow.

To figure out just how loud the roosters got, and how they were able to keep their hearing, researchers strapped microphones to the heads of three roosters, with the receiving end pointed at their ears. This was done to measure the sound levels that the roosters themselves would hear when they crowed. The crows were also measured from a distance away. And one other measurement was taken: the researchers performed micro-CT scans on roosters and hens so they could pick apart the geometry of how sounds bounce around in their respective ear canals.

The decibel levels were all above 100 decibels, loud enough to potentially cause damage. One rooster even hit 140 decibels, roughly the sound level on an aircraft carrier deck.
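
Because the decibel scale is logarithmic, these gaps are larger than they look. A minimal sketch of the arithmetic (illustrative Python, not part of the study):

```python
# Decibels are logarithmic: every 10 dB step multiplies sound intensity by 10.
def intensity_ratio(db_a, db_b):
    """How many times more intense a db_a sound is than a db_b sound."""
    return 10 ** ((db_a - db_b) / 10)

print(intensity_ratio(130, 120))  # 10.0    -> 130 dB carries 10x the energy of 120 dB
print(intensity_ratio(140, 100))  # 10000.0 -> the loudest rooster vs. a typical 100 dB crow
```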

It turns out that roosters keep themselves safe from their own crows with an anatomical adaptation. When they open their beaks fully, a quarter of the ear canal closes and soft tissue covers 50 percent of the eardrum. Basically, they have built-in earplugs that protect them from their own noise. Hens are also protected: their ear canals close up as well, though not as much as their male counterparts' do.

This built-in protection makes sense from an evolutionary perspective. Crowing serves as a warning to other roosters that this particular group of hens is spoken for, so louder is better: the loudest rooster ends up being seen as the most fit to mate with the hens.

https://www.mnn.com/earth-matters/animals/stories/roosters-dont-go-deaf-their-own-crowing

Eyes and eardrums move in sync

Simply moving the eyes triggers the eardrums to move too, says a new study by Duke University neuroscientists.

The researchers found that keeping the head still but shifting the eyes to one side or the other sparks vibrations in the eardrums, even in the absence of any sounds.

Surprisingly, these eardrum vibrations start slightly before the eyes move, indicating that motion in the ears and the eyes is controlled by the same motor commands deep within the brain.

“It’s like the brain is saying, ‘I’m going to move the eyes, I better tell the eardrums, too,’” said Jennifer Groh, a professor in the departments of neurobiology and of psychology and neuroscience at Duke.

The findings, which were replicated in both humans and rhesus monkeys, provide new insight into how the brain coordinates what we see and what we hear. They may also lead to a new understanding of hearing disorders, such as difficulty following a conversation in a crowded room.

The paper appeared Jan. 23 in Proceedings of the National Academy of Sciences.

It’s no secret that the eyes and ears work together to make sense of the sights and sounds around us. Most people find it easier to understand somebody if they are looking at them and watching their lips move. And in a famous illusion called the McGurk Effect, videos of lip cues dubbed with mismatched audio cause people to hear the wrong sound.

But researchers are still puzzling over where and how the brain combines these two very different types of sensory information.

“Our brains would like to match up what we see and what we hear according to where these stimuli are coming from, but the visual system and the auditory system figure out where stimuli are located in two completely different ways,” Groh said. “The eyes are giving you a camera-like snapshot of the visual scene, whereas for sounds, you have to calculate where they are coming from based on differences in timing and loudness across the two ears.”

Because the eyes are usually darting about within the head, the visual and auditory worlds are constantly in flux with respect to one another, Groh added.

In an experiment designed by Kurtis Gruters, a former doctoral student in Groh’s lab and co-first author on the paper, 16 participants were asked to sit in a dark room and follow shifting LED lights with their eyes. Each participant also wore small microphones in their ear canals that were sensitive enough to pick up the slight vibrations created when the eardrum sways back and forth.

Though eardrums vibrate primarily in response to outside sounds, the brain can also control their movements using small bones in the middle ear and hair cells in the cochlea. These mechanisms help modulate the volume of sounds that ultimately reach the inner ear and brain, and produce small sounds known as otoacoustic emissions.

Gruters found that when the eyes moved, both eardrums moved in sync with one another, one side bulging inward at the same time the other side bulged outward. They continued to vibrate back and forth together until shortly after the eyes stopped moving. Eye movements in opposite directions produced opposite patterns of vibrations.
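
One way to picture that push-pull relationship is with synthetic displacement traces; a correlation near -1 captures the anti-phase motion. (Toy sketch only; the traces and the 30 Hz frequency are invented, not the study's data or analysis.)

```python
import numpy as np

# Two synthetic eardrum-displacement traces oscillating in anti-phase,
# as described for the left and right ears during a horizontal eye movement.
t = np.linspace(0, 0.1, 1000)        # 100 ms window around a saccade
f = 30.0                             # oscillation frequency in Hz (arbitrary, for illustration)
left = np.sin(2 * np.pi * f * t)     # left drum bulging inward...
right = -np.sin(2 * np.pi * f * t)   # ...while the right drum bulges outward

# A correlation near -1 flags the anti-phase relationship between the two ears.
r = np.corrcoef(left, right)[0, 1]
print(f"left/right correlation = {r:.2f}")   # -1.00
```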

Larger eye movements also triggered bigger vibrations than smaller eye movements, the team found.

“The fact that these eardrum movements are encoding spatial information about eye movements means that they may be useful for helping our brains merge visual and auditory space,” said David Murphy, a doctoral student in Groh’s lab and co-first author on the paper. “It could also signify a marker of a healthy interaction between the auditory and visual systems.”

The team, which included Christopher Shera at the University of Southern California and David W. Smith of the University of Florida, is still investigating how these eardrum vibrations impact what we hear, and what role they may play in hearing disorders. In future experiments, they will look at whether up and down eye movements also cause unique signatures in eardrum vibrations.

“The eardrum movements literally contain information about what the eyes are doing,” Groh said. “This demonstrates that these two sensory pathways are coupled, and they are coupled at the earliest points.”

Cole Jensen, an undergraduate neuroscience major at Duke, also coauthored the new study.

CITATION: “The Eardrums Move When the Eyes Move: A Multisensory Effect on the Mechanics of Hearing,” K. G. Gruters, D. L. K. Murphy, C. D. Jensen, D. W. Smith, C. A. Shera and J. M. Groh. Proceedings of the National Academy of Sciences, Jan. 23, 2018. DOI: 10.1073/pnas.1717948115

Meet Zhong Zhong and Hua Hua, the First Monkey Clones Produced by Method that Made Dolly

The first primate clones made by somatic cell nuclear transfer are two genetically identical long-tailed macaques born recently at the Institute of Neuroscience of the Chinese Academy of Sciences in Shanghai. Researchers named the newborns Zhong Zhong and Hua Hua, born six and eight weeks ago, respectively, after the Chinese adjective “Zhōnghuá,” which means the Chinese nation or people. The technical milestone, presented January 24 in the journal Cell, makes it a realistic possibility for labs to conduct research with customizable populations of genetically uniform monkeys.

“There are a lot of questions about primate biology that can be studied by having this additional model,” says senior author SUN Qiang, Director of the Nonhuman Primate Research Facility at the Chinese Academy of Sciences Institute of Neuroscience. “You can produce cloned monkeys with the same genetic background except the gene you manipulated. This will generate real models not just for genetically based brain diseases, but also cancer, immune or metabolic disorders, and allow us to test the efficacy of the drugs for these conditions before clinical use.”

Zhong Zhong and Hua Hua are not the first primate clones; that title goes to Tetra, a rhesus monkey created in 1999 by a simpler method called embryo splitting (Science, vol. 287, no. 5451, pp. 317-319). Embryo splitting is the same process that produces identical twins, but it can generate at most four offspring at a time. Zhong Zhong and Hua Hua are instead the product of somatic cell nuclear transfer (SCNT), the technique used to create Dolly the sheep more than 20 years ago, in which researchers remove the nucleus from an egg cell and replace it with a nucleus from a differentiated body cell. The reconstructed egg then develops into a clone of whatever animal donated the replacement nucleus.

Differentiated monkey cell nuclei have proven far more resistant to SCNT than those of other mammals such as mice or dogs. SUN and his colleagues overcame this challenge primarily by introducing, after the nuclear transfer, epigenetic modulators that switch on or off the genes inhibiting embryo development. The researchers found their success rate increased when they transferred nuclei taken from fetal differentiated cells, such as fibroblasts, a cell type found in connective tissue. Zhong Zhong and Hua Hua are clones of the same macaque fetal fibroblasts. Nuclei from adult donor cells were also used, but those babies lived only a few hours after birth.

“We tried several different methods but only one worked,” says SUN. “There was much failure before we found a way to successfully clone a monkey.”

The first author, LIU Zhen, a postdoctoral fellow, spent three years practicing and optimizing the SCNT procedure, including quickly and precisely removing the nuclear material from the egg cell and testing various methods of promoting fusion between the nucleus-donor cell and the enucleated egg. With the additional help of epigenetic modulators that reactivate suppressed genes in the differentiated nucleus, he achieved much higher rates of normal embryo development and pregnancy in the surrogate female monkeys.

“The SCNT procedure is rather delicate, so the faster you do it, the less damage to the egg you have, and Dr. LIU has a green thumb for doing this,” says Muming Poo, a co-author on the study who directs the Institute of Neuroscience of the CAS Center for Excellence in Brain Science and Intelligence Technology and helped supervise the project. “It takes a lot of practice; not everybody can do the enucleation and cell fusion process quickly and precisely, and it is likely that the optimization of the transfer procedure greatly helped us to achieve this success.”

The researchers plan to continue improving the technique, which will also benefit from future work in other labs, and to monitor Zhong Zhong and Hua Hua as they develop physically and intellectually. The babies are currently bottle-fed and are growing normally compared with monkeys their age. The group also expects more macaque clones to be born over the coming months.

The lab is following strict international guidelines for animal research set by the US National Institutes of Health, but it encourages the scientific community to discuss what should or should not be acceptable practice when it comes to cloning non-human primates. “We are very aware that future research using non-human primates anywhere in the world depends on scientists following very strict ethical standards,” Poo says.

This work was supported by grants from the Chinese Academy of Sciences, the CAS Key Technology Talent Program, the Shanghai Municipal Government Bureau of Science and Technology, the National Postdoctoral Program for Innovative Talents and the China Postdoctoral Science Foundation.

http://english.cas.cn/head/201801/t20180123_189488.shtml

Fiber-Rich Diet Fights Off Obesity by Altering Microbiota

Consumption of dietary fiber can prevent obesity, metabolic syndrome and adverse changes in the intestine by promoting growth of “good” bacteria in the colon, according to a study led by Georgia State University.

The researchers found enriching the diet of mice with the fermentable fiber inulin prevented metabolic syndrome that is induced by a high-fat diet, and they identified specifically how this occurs in the body. Metabolic syndrome is a cluster of conditions closely linked to obesity that includes increased blood pressure, high blood sugar, excess body fat around the waist and abnormal cholesterol or triglyceride levels. When these conditions occur together, they increase a person’s risk of heart disease, stroke and diabetes.

Obesity and metabolic syndrome are associated with alterations in gut microbiota, the microorganism population that lives in the intestine. Modern changes in dietary habits, particularly the consumption of processed foods lacking fiber, are believed to affect microbiota and contribute to the increase of chronic inflammatory disease, including metabolic syndrome. Studies have found a high-fat diet destroys gut microbiota, reduces the production of epithelial cells lining the intestine and causes gut bacteria to invade intestinal epithelial cells.

This study found the fermentable fiber inulin restored gut health and protected mice against metabolic syndrome induced by a high-fat diet by restoring gut microbiota levels, increasing the production of intestinal epithelial cells and restoring expression of the protein interleukin-22 (IL-22), which prevented gut microbiota from invading epithelial cells. The findings are published in the journal Cell Host & Microbe.

“We found that manipulating dietary fiber content, particularly by adding fermentable fiber, guards against metabolic syndrome,” said Dr. Andrew Gewirtz, professor in the Institute for Biomedical Sciences at Georgia State. “This study revealed the specific mechanism used to restore gut health and suppress obesity and metabolic syndrome is the induction of IL-22 expression. These results contribute to the understanding of the mechanisms that underlie diet-induced obesity and offer insight into how fermentable fibers might promote better health.”

For four weeks, the researchers fed mice either a grain-based rodent chow, a high-fat diet (high fat and low fiber content with 5 percent cellulose as a source of fiber) or a high-fat diet supplemented with fiber (either fermentable inulin fiber or insoluble cellulose fiber). The high-fat diet is linked to an increase in obesity and conditions associated with metabolic syndrome.

They discovered that a diet supplemented with inulin reduced weight gain and noticeably reduced obesity induced by a high-fat diet, accompanied by a reduction in the size of fat cells. Dietary enrichment with inulin also markedly lowered cholesterol levels and largely prevented dysglycemia (abnormal blood sugar levels). The researchers found that insoluble cellulose fiber only modestly reduced obesity and dysglycemia.

Supplementing the high-fat diet with inulin restored gut microbiota. However, inulin didn’t restore the microbiota levels to those of mice fed a chow diet. A distinct difference in microbiota levels remained between mice fed a high-fat diet versus those fed a chow diet. Enrichment of high-fat diets with cellulose had a mild effect on microbiota levels.

In addition, the researchers found switching mice from a grain-based chow diet to a high-fat diet resulted in a loss of colon mass, which they believe contributes to low-grade inflammation and metabolic syndrome. When they switched mice back to a chow diet, the colon mass was fully restored.

https://www.technologynetworks.com/tn/news/fiber-rich-diet-fights-off-obesity-by-altering-microbiota-296642

US and Russian computer scientists develop algorithm called VarQuest that discovers over 1000 antibiotic proteins in a few hours

A team of American and Russian computer scientists has developed an algorithm that can rapidly search databases to discover novel variants of known antibiotics — a potential boon in fighting antibiotic resistance.

In just a few hours, the algorithm, called VarQuest, identified 10 times more variants of peptidic natural products, or PNPs, than all previous PNP discovery efforts combined, the researchers report in the latest issue of the journal Nature Microbiology. Previously, such a search might have taken hundreds of years of computation, said Hosein Mohimani, assistant professor in Carnegie Mellon University’s Computational Biology Department.

“Our results show that the antibiotics produced by microbes are much more diverse than had been assumed,” Mohimani said. VarQuest found more than a thousand variants of known antibiotics, he noted, providing a big picture perspective that microbiologists could not obtain while studying one antibiotic at a time.

Mohimani and Pavel A. Pevzner, professor of computer science at the University of California, San Diego, designed and directed the effort, which included colleagues at St. Petersburg State University in Russia.

PNPs have an unparalleled track record in pharmacology. Many antimicrobial and anticancer agents are PNPs, including the so-called “antibiotics of last resort,” vancomycin and daptomycin. As concerns mount regarding antibiotic drug resistance, finding more effective variants of known antibiotics is a means for preserving the clinical efficacy of antibiotic drugs in general.

The search for these novel variants received a boost in recent years with the advent of high-throughput methods that enable environmental samples to be processed in batches, rather than one at a time. Researchers also recently launched the Global Natural Products Social (GNPS) molecular network, a database of mass spectra of natural products collected by researchers worldwide. Already, the GNPS based at UC San Diego contains more than a billion mass spectra.

The GNPS represents a gold mine for drug discovery, Mohimani said. The VarQuest algorithm, which uses a smarter way of indexing the database to speed up searches, should help GNPS fulfill that promise, he added.
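
While the paper's actual data structures are more involved, the core idea of a modification-tolerant, index-based search can be sketched in a few lines: a variant of a known peptide still shares most fragment masses with its database entry, so bucketing fragment masses finds candidates without scanning every record. (Hypothetical toy code; the peptide name, masses, and tolerance are made up for illustration, and this is not VarQuest itself.)

```python
from collections import defaultdict

# Toy sketch of a modification-tolerant spectral lookup, not VarQuest itself.
# A variant peptide's precursor mass differs by one unknown modification,
# but most of its fragment masses still line up with the database entry.

TOL = 0.02  # fragment mass tolerance in daltons (assumed for this example)

def index_fragments(db):
    """Bucket each database fragment mass so lookups avoid a full scan.
    Simplified: real tolerancing would also check adjacent buckets."""
    index = defaultdict(set)
    for name, frags in db.items():
        for m in frags:
            index[round(m / TOL)].add(name)
    return index

def candidates(index, peaks, min_hits=3):
    """Count shared fragment buckets; true variants match on most peaks."""
    hits = defaultdict(int)
    for m in peaks:
        for name in index.get(round(m / TOL), ()):
            hits[name] += 1
    return {n: c for n, c in hits.items() if c >= min_hits}

db = {"peptide_A": [114.09, 213.16, 326.24, 439.33, 552.41]}   # invented masses
query = [114.09, 213.16, 326.24, 453.35, 566.43]               # last two peaks shifted ~+14 Da
print(candidates(index_fragments(db), query))                  # {'peptide_A': 3}
```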

“Natural product discovery is turning into a Big Data territory, and the field has to get prepared for this transformation in terms of collecting, storing and making sense of Big Data,” Mohimani said. “VarQuest is the first step toward digesting the Big Data already collected by the community.”

Reference: Gurevich, A., Mikheenko, A., Shlemov, A., Korobeynikov, A., Mohimani, H., & Pevzner, P. A. (2018). Increased diversity of peptidic natural products revealed by modification-tolerant database search of mass spectra. Nature Microbiology, 1. https://doi.org/10.1038/s41564-017-0094-2

https://www.technologynetworks.com/informatics/news/algorithm-unearths-over-1000-antibiotic-proteins-in-a-few-hours-296639

Desire and Dislike Mapped in the Amygdala

The amygdala is a tiny hub of emotions where, in 2016, a team led by MIT neuroscientist Kay Tye found specific populations of neurons that assign good or bad feelings, or “valence,” to experience. Learning to associate pleasure with a tasty food, or aversion to a foul-tasting one, is a primal function and key to survival.

In a new study in Cell Reports, Tye’s team at the Picower Institute for Learning and Memory returns to the amygdala for an unprecedentedly deep dive into its inner workings. Focusing on a particular section called the basolateral amygdala, the researchers show how valence-processing circuitry is organized and how key neurons in those circuits interact with others. What they reveal is a region with distinct but diverse and dynamic neighborhoods where valence is sorted out by both connecting with other brain regions and sparking cross-talk within the basolateral amygdala itself.

“Perturbations of emotional valence processing is at the core of many mental health disorders,” says Tye, associate professor of neuroscience at the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “Anxiety and addiction, for example, may be an imbalance or a misassignment of positive or negative valence with different stimuli.”

Despite the importance of valence assignment in both healthy behavior and psychiatric disorders, neuroscientists don’t know how the process really works. The new study therefore sought to expose how the neurons and circuits are laid out and how they interact.

Bitter, sweet

To conduct the study, lead author Anna Beyeler, a former postdoc in Tye’s lab and currently a faculty member at the University of Bordeaux in France, led the group in training mice to associate appealing sucrose drops with one tone and bitter quinine drops with another. They recorded the response of different neurons in the basolateral amygdala when the tones were played to see which ones were associated with the conditioned learned valence of the different tastes. They labeled those key neurons associated with valence encoding and engineered them to become responsive to pulses of light. When the researchers then activated them, they recorded the electrical activity not only of those neurons but also of many of their neighbors to see what influence their activity had in local circuits.

They also found, labeled, and made similar measurements among neurons that became active when a mouse actually licked the bitter quinine. With this additional step, they could measure not only the neural activity associated with the learned valence of the bitter taste but also the activity associated with the innate reaction to the actual experience.

Later in the lab, they used tracing technologies to highlight three different kinds of neurons more fully, visualizing them in distinct colors depending on which other region they projected their tendrilous axons to connect with. Neurons that project to a region called the nucleus accumbens are predominantly associated with positive valence, and those that connect to the central amygdala are mainly associated with negative valence. They found that neurons uniquely activated by the unconditioned experience of actually tasting the quinine tended to project to the ventral hippocampus.

In all, the team mapped over 1,600 neurons.

To observe the three-dimensional configuration of these distinct neuron populations, the researchers turned the surrounding brain tissues clear using a technique called CLARITY, invented by Kwanghun Chung, assistant professor of chemical engineering and neuroscience and a colleague in the Picower Institute.

Neighborhoods without fences

Beyeler, Tye, and their co-authors were able to make several novel observations about the inner workings of the basolateral amygdala’s valence circuitry.

One finding was that the different functional populations of neurons tended to cluster together in neighborhoods, or “hotspots.” For example, picturing the almond-shaped amygdala as standing upright on its fat bottom, the neurons projecting to the central amygdala tended to cluster toward the point at the top and then on the right toward the bottom. Meanwhile the neurons that projected to the nucleus accumbens tended to run down the middle, and the ones that projected to the hippocampus were clustered toward the bottom on the opposite side from the central amygdala projectors.

Despite these trends, the researchers also noted that the neighborhoods were hardly monolithic. Instead, neurons of different types frequently intermingled, creating a diversity in which the predominant neuron type was never far from at least some representatives of the other types.

Meanwhile, their electrical activity data revealed that the different types exerted different degrees of influence over their neighbors. For example, neurons projecting to the central amygdala, in keeping with their association with negative valence, had a very strong inhibitory effect on neighbors, while nucleus accumbens projectors had a smaller influence that was more balanced between excitation and inhibition.

Tye speculates that the intermingling of neurons of different types, including their propensity to influence each other with their activity, may provide a way for competing circuits to engage in cross-talk.

“Perhaps the intermingling that there is might facilitate the ability of these neurons to influence each other,” says Tye.

Notably, Tye’s research indicates that while the projections of the different cell types may be immutable, the influence those cells have over each other is flexible. The basolateral amygdala may therefore be arranged both to assign valence and to negotiate it, for instance when a mouse spies some desirable cheese but a mean cat is also nearby.

“This helps us understand how form might give rise to function,” says Tye.

Reference:
Beyeler et al. “Organization of Valence-Encoding and Projection-Defined Neurons in the Basolateral Amygdala.” Cell Reports (2018). https://doi.org/10.1016/j.celrep.2017.12.097

https://www.technologynetworks.com/neuroscience/news/valence-mapped-brain-study-reveals-roots-of-desire-and-dislike-in-the-amygdala-296668

Pupil size changes with different stages of sleep, getting smaller as sleep gets deeper, in mice

When people are awake, their pupils regularly change in size. Those changes are meaningful, reflecting shifting attention or vigilance, for example. Now, researchers reporting in Current Biology on January 18 have found in studies of mice that pupil size also fluctuates during sleep. They also show that pupil size is a reliable indicator of sleep states.

“We found that pupil size rhythmically fluctuates during sleep,” says Daniel Huber of the University of Geneva in Switzerland. “Intriguingly, these pupil fluctuations follow the sleep-related brain activity so closely that they can indicate with high accuracy the exact stage of sleep—the smaller the pupil, the deeper the sleep.”

Studying pupil size during sleep had always been a challenge for an obvious reason: people and animals generally sleep with their eyes closed. Huber says that he and his colleagues were inspired to study pupil size in sleep after discovering that their laboratory mice sometimes sleep with their eyes open. They knew that pupil size varies strongly during wakefulness. What, they wondered, happened during sleep?

To investigate this question, they developed a novel optical pupil-tracking system for mice. The device includes an infrared light positioned close to the head of the animal. That invisible light travels through the skull and brain to illuminate the back of the eye. When the eyes are imaged with an infrared camera, the pupils appear as bright circles. Thanks to this new method, it was suddenly possible to track changes in pupil size accurately, particularly when the animals snoozed naturally with their eyelids open.
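
To see why a back-lit pupil is easy to measure, note that a bright blob's size can be read straight off a thresholded frame. (A toy sketch on a synthetic image; this is not the group's actual tracking software.)

```python
import numpy as np

# With back-illuminated IR imaging, the pupil is the brightest blob in the
# frame, so its size falls out of a simple threshold. Illustrative only.

def pupil_diameter(frame, thresh=0.8):
    """Estimate pupil diameter in pixels from a single normalized IR frame."""
    bright = frame > thresh * frame.max()   # keep only the brightest pixels
    area = bright.sum()                     # blob area in pixels
    return 2.0 * np.sqrt(area / np.pi)      # diameter of the equal-area circle

# Synthetic 120x160 frame: dark background plus a bright disc of radius 12 px.
yy, xx = np.mgrid[:120, :160]
frame = (((yy - 60) ** 2 + (xx - 80) ** 2) < 12 ** 2).astype(float)
print(round(pupil_diameter(frame), 1))      # ~24.0, i.e. the disc's diameter
```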

Their images show that mouse pupils rhythmically fluctuate during sleep and that those fluctuations are not at all random; they correlate with changes in sleep states.

Further experiments showed that changes in pupil size are not just a passive phenomenon, either. They are actively controlled by the parasympathetic autonomic nervous system. The evidence suggests that in mice, at least, pupils narrow in deep sleep to protect the animals from waking up with a sudden flash of light.

“The common saying that ‘the eyes are the window to the soul’ might even hold true behind closed eyelids during sleep,” Özge Yüzgeç, the student conducting the study, says. “The pupil continues to play an important role during sleep by blocking sensory input and thereby protecting the brain in periods of deep sleep, when memories should be consolidated.”

Huber says they would like to find out whether the findings hold in humans and whether their new method can be adapted in the sleep clinic. “Inferring brain activity by non-invasive pupil tracking might be an interesting alternative or complement to electrode recordings,” he says.

Reference:

Yüzgeç, Ö., Prsa, M., Zimmermann, R., & Huber, D. (2018). Pupil Size Coupling to Cortical States Protects the Stability of Deep Sleep via Parasympathetic Modulation. Current Biology. doi:10.1016/j.cub.2017.12.049

https://www.technologynetworks.com/neuroscience/news/pupil-size-couples-to-cortical-states-to-protect-deep-sleep-stability-296519

This new blood test can detect early signs of 8 kinds of cancer

By DEBORAH NETBURN

Scientists have developed a noninvasive blood test that can detect signs of eight types of cancer long before any symptoms of the disease arise.

The test, which can also help doctors determine where in a person’s body the cancer is located, is called CancerSEEK. Its genesis is described in a paper published Thursday in the journal Science.

The authors said the new work represents the first noninvasive blood test that can screen for a range of cancers all at once: cancer of the ovary, liver, stomach, pancreas, esophagus, colon, lung and breast.

Together, these eight forms of cancer are responsible for more than 60% of cancer deaths in the United States, the authors said.

In addition, five of them — ovarian, liver, stomach, pancreatic and esophageal cancers — currently have no screening tests.

“The goal is to look for as many cancer types as possible in one test, and to identify cancer as early as possible,” said Nickolas Papadopoulos, a professor of oncology and pathology at Johns Hopkins who led the work. “We know from the data that when you find cancer early, it is easier to kill it by surgery or chemotherapy.”

CancerSEEK, which builds on 30 years of research, relies on two signals that a person might be harboring cancer.

First, it looks for 16 telltale genetic mutations in bits of free-floating DNA that have been shed into the bloodstream by cancerous cells. Because these are present in such trace amounts, they can be very hard to find, Papadopoulos said. For example, one blood sample might contain thousands of pieces of DNA that come from normal cells, and just two to five pieces from cancerous cells.

“We are dealing with a needle in a haystack,” he said.

To overcome this challenge, the team relied on recently developed digital technologies that allowed them to efficiently and cost-effectively sequence each individual piece of DNA one by one.

“If you take the hay in the haystack and go through it one by one, eventually you will find the needle,” Papadopoulos said.

In addition, CancerSEEK screens for eight proteins that are frequently found in higher quantities in the blood samples of people who have cancer.

By measuring these two signals in tandem, CancerSEEK was able to detect cancer in 70% of blood samples pulled from 1,005 patients who had already been diagnosed with one of eight forms of the disease.

The test appeared to be more effective at finding some types of cancer than others, the authors noted. For example, it was able to spot ovarian cancer 98% of the time, but was successful at detecting breast cancer only 33% of the time.

The authors also report that CancerSEEK was better at detecting later-stage cancers than earlier-stage ones. It spotted the disease 78% of the time in people who had been diagnosed with stage III cancer, 73% of the time in people with stage II cancer and 43% of the time in people with stage I cancer.

“I know a lot of people will say this sensitivity is not good enough, but for the five tumor types that currently have no test, going from zero chances of detection to what we did is a very good beginning,” Papadopoulos said.

It is also worth noting that when the researchers ran the test on 812 blood samples from healthy controls, they saw only seven false-positive results.
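
Those raw counts translate directly into the test's headline statistics. A back-of-envelope check using only the numbers quoted in this article:

```python
# Back-of-envelope arithmetic from the figures reported above.
healthy_controls = 812
false_positives = 7
specificity = (healthy_controls - false_positives) / healthy_controls
print(f"specificity: {specificity:.1%}")        # 99.1% -> very few healthy people flagged

patients = 1005
sensitivity = 0.70                               # cancer detected in 70% of known cases
print(f"cancers flagged: ~{sensitivity * patients:.0f} of {patients}")
```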

“Very high specificity was essential because false-positive results can subject patients to unnecessary invasive follow-up tests and procedures to confirm the presence of cancer,” said Kenneth Kinzler, a professor of oncology at Johns Hopkins who also worked on the study.

Finally, the researchers used machine learning to determine how different combinations of proteins and mutations could provide clues to where in the body the cancer might be. The authors found they could narrow down the location of a tumor to just a few anatomic sites in 83% of patients.

CancerSEEK is not yet available to the public, and it probably won’t be for a year or longer, Papadopoulos said.

“We are still evaluating the test, and it hasn’t been commercialized yet,” he said. “I don’t want to guess when it will be available, but I hope it is soon.”

He said that eventually the test could cost less than $500 to run and could easily be administered by a primary care physician’s office.

In theory, a blood sample would be taken in a doctor’s office, and then sent to a lab that would look for the combination of mutations and proteins that would indicate that a patient has cancer. The data would then go into an algorithm that would determine whether or not the patient had the disease and where it might be.

“The idea is: You give blood, and you get results,” Papadopoulos said.

http://beta.latimes.com/science/sciencenow/la-sci-sn-blood-test-cancer-20180118-story.html

Machines Teaching Each Other Could Be the Biggest Exponential Trend in AI

By Aaron Frank

During an October 2015 press conference announcing the autopilot feature of the Tesla Model S, which allowed the car to drive semi-autonomously, Tesla CEO Elon Musk said each driver would become an “expert trainer” for every Model S. Each car could improve its own autonomous features by learning from its driver, but more significantly, when one Tesla learned from its own driver—that knowledge could then be shared with every other Tesla vehicle.

As Fred Lambert of Electrek reported shortly after, Model S owners noticed how quickly the car’s driverless features were improving. In one example, Teslas were taking incorrect early exits along highways, forcing their owners to manually steer the car along the correct route. After just a few weeks, owners noted the cars were no longer taking premature exits.

“I find it remarkable that it is improving this rapidly,” said one Tesla owner.

Intelligent systems, like those powered by the latest round of machine learning software, aren’t just getting smarter: they’re getting smarter faster. Understanding the rate at which these systems develop can be a particularly challenging part of navigating technological change.

Ray Kurzweil has written extensively on the gaps in human understanding between what he calls the “intuitive linear” view of technological change and the “exponential” rate of change now taking place. Almost two decades after writing the influential essay on what he calls “The Law of Accelerating Returns”—a theory of evolutionary change concerned with the speed at which systems improve over time—connected devices are now sharing knowledge between themselves, escalating the speed at which they improve.

“I think that this is perhaps the biggest exponential trend in AI,” said Hod Lipson, professor of mechanical engineering and data science at Columbia University, in a recent interview.

“All of the exponential technology trends have different ‘exponents,’” Lipson added. “But this one is potentially the biggest.”

According to Lipson, what we might call “machine teaching”—when devices communicate gained knowledge to one another—is a radical step up in the speed at which these systems improve.

“Sometimes it is cooperative, for example when one machine learns from another like a hive mind. But sometimes it is adversarial, like in an arms race between two systems playing chess against each other,” he said.

Lipson believes this way of developing AI is a big deal, in part, because it can bypass the need for training data.

“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”

Lipson sees the recent breakthrough from Google’s DeepMind, a project called AlphaGo Zero, as a stunning example of an AI learning without training data. Many are familiar with AlphaGo, the machine learning AI that became the world’s best Go player after studying a massive training data set of millions of human Go moves. AlphaGo Zero, however, was able to beat even that Go-playing AI simply by learning the rules of the game and playing against itself; no training data necessary. Then, just to show off, it beat the world’s best chess-playing software after starting from scratch and training for only eight hours.
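
The self-play idea is easiest to see in a much smaller game. The toy sketch below uses regret matching in rock-paper-scissors (a stand-in illustration, not DeepMind's method): two agents start with no data at all and, purely by playing each other, converge on the game's optimal strategy.

```python
import numpy as np

# Two regret-matching agents learn rock-paper-scissors from nothing but
# self-play: no training data, loosely echoing the AlphaGo Zero idea.
rng = np.random.default_rng(1)
PAYOFF = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])  # payoff to the row player

def strategy(regret):
    """Play each action in proportion to its accumulated positive regret."""
    pos = np.maximum(regret, 0)
    return pos / pos.sum() if pos.sum() > 0 else np.ones(3) / 3

regrets = [np.zeros(3), np.zeros(3)]
avg = np.zeros(3)                      # time-averaged strategy of player 0
for _ in range(20000):
    strats = [strategy(r) for r in regrets]
    avg += strats[0]
    moves = [int(rng.choice(3, p=s)) for s in strats]
    # each player regrets not having played every alternative action
    utils = [PAYOFF[:, moves[1]], -PAYOFF[moves[0], :]]
    for i in range(2):
        regrets[i] = regrets[i] + (utils[i] - utils[i][moves[i]])

print(avg / avg.sum())  # -> roughly [0.33, 0.33, 0.33], the game's Nash equilibrium
```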

Now imagine thousands or more AlphaGo Zeroes instantaneously sharing their gained knowledge.

This isn’t just about games, though. Already, we’re seeing how machine teaching will have a major impact on the speed at which businesses can improve the performance of their devices.

One example is GE’s new industrial digital twin technology—a software simulation of a machine that models what is happening with the equipment. Think of it as a machine with its own self-image—which it can also share with technicians.

A steam turbine with a digital twin, for instance, can measure steam temperatures, rotor speeds, cold starts, and other data to predict breakdowns and warn technicians to prevent expensive repairs. The digital twins make these predictions by studying their own performance, but they also rely on models every other steam turbine has developed.

As machines begin to learn from their environments in new and powerful ways, their development is accelerated by communicating what they learn to each other. The collective intelligence of every GE turbine, spread across the planet, can accelerate each individual machine’s predictive ability. Where it may take one driverless car significant time to learn to navigate a particular city, one hundred driverless cars navigating that same city together, all sharing what they learn, can improve their algorithms in far less time.
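
A hypothetical sketch of the simplest form this sharing could take, federated-style parameter averaging on a toy regression problem (the fleet, data, and model are all invented for illustration): each device fits a model on its own small batch of data, and averaging across the fleet gives every device a better model than any could learn alone.

```python
import numpy as np

# Toy "fleet learning" sketch: 100 devices each fit a noisy local model,
# then share and average parameters. Not any vendor's actual system.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # the relationship every device is trying to learn

def local_fit(n=20):
    """Each device runs least squares on its own small batch of noisy data."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.5, size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

fleet = [local_fit() for _ in range(100)]   # 100 devices, 20 samples each
shared = np.mean(fleet, axis=0)             # the fleet's "shared" model

print("single device error:", np.linalg.norm(fleet[0] - true_w))
print("fleet-average error:", np.linalg.norm(shared - true_w))  # typically ~10x smaller
```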

As other AI-powered devices begin to leverage this shared knowledge transfer, we could see an even faster pace of development. So if you think things are developing quickly today, remember we’re only just getting started.
