Ever wish you could do a quick “breath check” before an important meeting or a big date? Now researchers, reporting in ACS’ journal Analytical Chemistry, have developed a sensor that detects tiny amounts of hydrogen sulfide gas, the compound responsible for bad breath, in human exhalations.

According to the American Dental Association, half of all adults have suffered from bad breath, or halitosis, at some point in their lives. Although in most cases bad breath is simply an annoyance, it can sometimes be a symptom of more serious medical and dental problems. However, many people aren’t aware that their breath is smelly unless somebody tells them, and doctors don’t have a convenient, objective test for diagnosing halitosis. Existing hydrogen sulfide sensors require a power source or precise calibration, or they show low sensitivity or a slow response. Il-Doo Kim and coworkers wanted to develop a sensitive, portable detector for halitosis that doctors could use to quickly and inexpensively diagnose the condition.

To develop their sensor, the team made use of lead(II) acetate – a chemical that turns brown when exposed to hydrogen sulfide gas. On its own, the chemical is not sensitive enough to detect trace amounts (2 ppm or less) of hydrogen sulfide in human breath. So the researchers anchored lead acetate to a 3D nanofiber web, providing numerous sites for lead acetate and hydrogen sulfide gas to react. By monitoring a color change from white to brown on the sensor surface, the researchers could detect as little as 400 ppb hydrogen sulfide with the naked eye in only 1 minute. In addition, the color-changing sensor detected traces of hydrogen sulfide added to breath samples from 10 healthy volunteers.

https://www.acs.org/content/acs/en/pressroom/presspacs/2018/acs-presspac-june-6-2018/sensor-detects-whiff-of-bad-breath.html



Jan Carette and his colleagues have discovered a “death code” that unleashes a type of cell death.

Dying cells generally have two options: go quietly, or go out with a bang.

The latter, while more conspicuous, is also mechanistically more mysterious. Now, scientists at the Stanford University School of Medicine have pinpointed what they believe is the molecular “code” that unleashes this more violent variety of cell death.

This particular version of cell suicide is called necroptosis, and it typically occurs as a result of some sort of infection or pathogenic invader. “Necroptosis is sort of like the cell’s version of ‘taking one for the team,’” said Jan Carette, PhD, assistant professor of microbiology and immunology. “As the cell dies, it releases its contents, including a damage signal that lets other cells know there’s a problem.”

Seen in this light, necroptosis seems almost altruistic, but the process is also a key contributor to autoimmune diseases; it’s even been implicated in the spread of cancer.

In a new study, Carette and his collaborators discovered the final step of necroptosis, the linchpin upon which the entire process depends. They call it “the death code.”

Their work, which was published online June 7 in Molecular Cell, not only clears up what happens during this type of cell death, but also opens the door to potential new treatments for diseases in which necroptosis plays a key role, such as inflammatory bowel disease and multiple sclerosis. Carette is the senior author, and postdoctoral scholar Cole Dovey, PhD, is the lead author.

Initiating detonation

When a cell’s health is threatened by an invader, such as a virus, a cascade of molecular switches and triggers readies the cell for death by necroptosis. Until recently, scientists thought they had traced the pathway down to the last step. But it turns out that the entire chain is rendered futile without one special molecule, called inositol hexakisphosphate, or IP6, which is part of a larger collection of molecules known as inositol phosphates. Carette likens IP6 to an access code; only in this case, when the code is punched in, it’s not a safe or a cellphone that’s unlocked: It’s cell death. Specifically, a protein called MLKL, which Carette has nicknamed “the executioner protein,” is unlocked.

“This was a big surprise. We didn’t know that the killer protein required a code, and now we find that it does,” Dovey said. “It’s held in check by a code, and it’s released by a code. So only when the code is correct does the killer activate, puncturing holes in the cell’s membrane as it prepares to burst the cell open.”

MLKL resides inside the cell, which may seem like an error on evolution’s part; why plant an explosive in life’s inner sanctum? But MLKL is tightly regulated, and it requires multiple green lights before it’s cleared to pulverize. Even if all other proteins and signaling molecules prepare MLKL for destruction, IP6 has the final say. If IP6 doesn’t bind, MLKL remains harmless, like a cotton ball floating inside the cell.

When it’s not killing cells, MLKL exists as multiple units, separate from one another. But when IP6 binds to one of these units, the protein gathers itself up into one functional complex. Only then, as a whole, is MLKL a full-fledged killer. It’s like a grenade split into its component parts. None of them are functional on their own. But put back together, the tiny bomb is ready to inflict damage.

“We’ve come to realize that, after the cell explodes, there are these ‘alarm’ molecules that alert the immune system,” Dovey said. “When the cell releases its contents, other cells pick up on these cautionary molecules and can either shore up defenses or prepare for necroptosis themselves.”

Screening for the Grim Reaper

In their quest to understand exactly how necroptosis occurs, Carette and Dovey performed an unbiased genetic screen, in which they scoured the entire genome for genes that seemed to be particularly critical toward the end of the pathway, where they knew MLKL took action. Before the IP6 finding, it was known that an intricate pathway impinged on MLKL. But only through this special genetic screen, in which they systematically tested the function of every gene at this end stage, were they able to see that IP6 was the key to necroptosis.

“Genetic screens are a lot of fun because you never know what you’re going to get,” Carette said. “We feel quite excited that we’ve been able to pinpoint IP6.”

Their screen revealed that IP6 binds with especially high specificity. Other, similar inositol phosphates, such as IP3, didn’t pass muster; when bound to MLKL, they had no effect. This gave Carette an interesting idea. For conditions like inflammatory bowel disease, in which erroneous necroptosis contributes to the severity of the disease, it would be desirable to prevent IP6 from binding. Perhaps blocking the binding site, or tricking MLKL into binding one of the other inositol phosphates, could do the trick. Either way, Carette and his collaborators are now digging further into the structure of IP6 bound to MLKL to better understand exactly how the killer is unleashed.

“In terms of drug discovery, inositol phosphates have been somewhat ignored, so we’re really excited to be able to look into these small molecules for potential therapeutic reasons,” Carette said.

https://www.technologynetworks.com/cell-science/news/cellular-death-code-identified-304850


Signals long thought to be “noise” appear to represent a distinct form of brain activity.

By Tanya Lewis

Every few seconds a wave of electrical activity travels through the brain, like a large swell moving through the ocean. Scientists first detected these ultraslow undulations decades ago in functional magnetic resonance imaging (fMRI) scans of people and other animals at rest—but the phenomenon was thought to be either electrical “noise” or the sum of much faster brain signals and was largely ignored.

Now a study that measured these “infraslow” (less than 0.1 hertz) brain waves in mice suggests they are a distinct type of brain activity that depends on an animal’s conscious state. But big questions remain about these waves’ origin and function.

An fMRI scan detects changes in blood flow that are assumed to be linked to neural activity. “When you put someone in a scanner, if you just look at the signal when you don’t ask the subject to do anything, it looks pretty noisy,” says Marcus Raichle, a professor of radiology and neurology at Washington University School of Medicine in St. Louis and senior author of the new study, published in April in Neuron. “All this resting-state activity brought to the forefront: What is this fMRI signal all about?”

To find out what was going on in the brain, Raichle’s team employed a combination of calcium/hemoglobin imaging, which uses fluorescent molecules to detect the activity of neurons at the cellular level, and electrophysiology, which can record signals from cells in different brain layers. They performed both measurements in awake and anesthetized mice; the awake mice were resting in tiny hammocks in a dark room.

The team found that infraslow waves traveled through the cortical layers of the awake rodents’ brains—and changed direction when the animals were anesthetized. The researchers say these waves are distinct from so-called delta waves (between 1 and 4 Hz) and other higher-frequency brain activity.
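To make the frequency bands concrete: the infraslow activity discussed here lives below 0.1 Hz, while delta waves occupy roughly 1 to 4 Hz, so the two can in principle be separated with simple band-pass filters. The sketch below illustrates that generic idea in Python; it is not the analysis pipeline from the Neuron study, and the sampling rate and signal are placeholder assumptions.

```python
# Illustrative only (not the pipeline from the Neuron study): separating an
# "infraslow" (<0.1 Hz) component from the delta band (1-4 Hz) in a recorded
# trace, using standard zero-phase Butterworth band-pass filters.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 100.0                          # assumed sampling rate, in Hz
t = np.arange(0, 600.0, 1.0 / fs)   # ten minutes of samples
trace = np.random.randn(t.size)     # placeholder for a real recording

def bandpass(x, low_hz, high_hz, fs, order=3):
    """Zero-phase Butterworth band-pass between low_hz and high_hz."""
    sos = butter(order, [low_hz, high_hz], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

infraslow = bandpass(trace, 0.01, 0.1, fs)  # the ultraslow "swells"
delta = bandpass(trace, 1.0, 4.0, fs)       # conventional delta waves
```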

These superslow waves may be critical to how the brain functions, Raichle says. “Think of, say, waves on the water of Puget Sound. You can have very rough days where you have these big groundswells and then have whitecaps sitting on top of them,” he says. These “swells” make it easier for brain areas to become active—for “whitecaps” to form, in other words.

Other researchers praised the study’s general approach but were skeptical that it shows the infraslow waves are totally distinct from other brain activity. “I would caution against jumping to a conclusion that resting-state fMRI is measuring some other property of the brain that’s got nothing to do with the higher-frequency fluctuations between areas of the cortex,” says Elizabeth Hillman, a professor of biomedical engineering at Columbia University’s Zuckerman Institute, who was not involved in the work. Hillman published a study in 2016 finding that resting-state fMRI signals represent neural activity across a range of frequencies, not just low ones.

More studies are needed to tease apart how these different types of brain signals are related. “These kinds of patterns are very new,” Hillman notes. “We haven’t got much of a clue what they are, and figuring out what they are is really, really difficult.”

https://www.scientificamerican.com/article/superslow-brain-waves-may-play-a-critical-role-in-consciousness1/


Motion-sensor “camera traps” unobtrusively take pictures of animals in their natural environment, often yielding images that could not otherwise be obtained. The artificial intelligence system automatically processes such images, here correctly identifying this one as a picture of two impala standing.

A new paper in the Proceedings of the National Academy of Sciences (PNAS) reports how a cutting-edge artificial intelligence technique called deep learning can automatically identify, count and describe animals in their natural habitats.

Photographs that are automatically collected by motion-sensor cameras can then be automatically described by deep neural networks. The result is a system that can automate animal identification for up to 99.3 percent of images while still performing at the same 96.6 percent accuracy rate as crowdsourced teams of human volunteers.
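One common way to automate most images while keeping human-level accuracy, which is plausibly what the reported 99.3 percent automation figure reflects, is confidence thresholding: the network labels only the images it is confident about and defers the rest to volunteers. The sketch below illustrates that generic idea; it is an assumption for illustration rather than the authors’ published pipeline, and the model, threshold value, and class names are hypothetical.

```python
# A generic confidence-thresholding sketch (an illustrative assumption, not
# the authors' published pipeline): let the network label the images it is
# confident about and send the rest to human volunteers.
import torch
import torch.nn.functional as F

CONFIDENCE_THRESHOLD = 0.95  # made-up value for illustration

def triage(model, images, class_names):
    """Return (machine-labeled images, indices needing human review)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)   # per-class probabilities
    confidence, predicted = probs.max(dim=1)
    auto, needs_review = {}, []
    for i, (c, p) in enumerate(zip(confidence, predicted)):
        if c.item() >= CONFIDENCE_THRESHOLD:
            auto[i] = class_names[p.item()]       # labeled automatically
        else:
            needs_review.append(i)                # deferred to volunteers
    return auto, needs_review
```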

“This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behavior into ‘big data’ sciences. This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems,” says Jeff Clune, the senior author of the paper. He is the Harris Associate Professor at the University of Wyoming and a senior research manager at Uber’s Artificial Intelligence Labs.

The paper was written by Clune; his Ph.D. student Mohammad Sadegh Norouzzadeh; his former Ph.D. student Anh Nguyen (now at Auburn University); Margaret Kosmala (Harvard University); Ali Swanson (University of Oxford); and Meredith Palmer and Craig Packer (both from the University of Minnesota).

Deep neural networks are a form of computational intelligence loosely inspired by how animal brains see and understand the world. They require vast amounts of training data to work well, and the data must be accurately labeled (e.g., each image being correctly tagged with which species of animal is present, how many there are, etc.).
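For readers curious what “accurately labeled training data” looks like in code, here is a minimal, generic sketch of training an image classifier on labeled camera-trap photos with PyTorch. It is not the code used in the PNAS study; the folder layout, network choice, and hyperparameters are assumptions made for this example.

```python
# Minimal, generic training sketch (not the code from the PNAS study).
# Assumes labeled images arranged as train/<species_name>/*.jpg.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and replace its final layer so
# it predicts one of the camera-trap species (e.g., 48 classes).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:        # one pass over the labeled data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```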

This study obtained the necessary data from Snapshot Serengeti, a citizen science project on the http://www.zooniverse.org platform. Snapshot Serengeti has deployed a large number of “camera traps” (motion-sensor cameras) in Tanzania that collect millions of images of animals in their natural habitat, such as lions, leopards, cheetahs and elephants. The information in these photographs is only useful once it has been converted into text and numbers. For years, the best method for extracting such information was to ask crowdsourced teams of human volunteers to label each image manually. The study published today harnessed 3.2 million labeled images produced in this manner by more than 50,000 human volunteers over several years.

“When I told Jeff Clune we had 3.2 million labeled images, he stopped in his tracks,” says Packer, who heads the Snapshot Serengeti project. “We wanted to test whether we could use machine learning to automate the work of human volunteers. Our citizen scientists have done phenomenal work, but we needed to speed up the process to handle ever greater amounts of data. The deep learning algorithm is amazing and far surpassed my expectations. This is a game changer for wildlife ecology.”

Swanson, who founded Snapshot Serengeti, adds: “There are hundreds of camera-trap projects in the world, and very few of them are able to recruit large armies of human volunteers to extract their data. That means that much of the knowledge in these important data sets remains untapped. Although projects are increasingly turning to citizen science for image classification, we’re starting to see it take longer and longer to label each batch of images as the demand for volunteers grows. We believe deep learning will be key in alleviating the bottleneck for camera-trap projects: the effort of converting images into usable data.”

“Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc.,” adds Kosmala, another Snapshot Serengeti leader. “We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects.”

First-author Sadegh Norouzzadeh points out that “Deep learning is still improving rapidly, and we expect that its performance will only get better in the coming years. Here, we wanted to demonstrate the value of the technology to the wildlife ecology community, but we expect that as more people research how to improve deep learning for this application and publish their datasets, the sky’s the limit. It is exciting to think of all the different ways this technology can help with our important scientific and conservation missions.”

The paper in PNAS is titled “Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning.”

http://www.uwyo.edu/uw/news/2018/06/researchers-use-artificial-intelligence-to-identify,-count,-describe-wild-animals.html


by Nicolas Scherger

Dr. Thomas Hainmüller and Prof. Dr. Marlene Bartos of the Institute of Physiology of the University of Freiburg have established a new model to explain how the brain stores memories of tangible events. The model is based on an experiment that involved mice seeking a place where they received rewards in a virtual environment. The scientific journal “Nature” has published the study.

In the mouse’s video-game world, the walls depicting a four-meter-long corridor are made up of green and blue patterned blocks. The floor is marked with turquoise dots. A short distance away, there’s a brown disc on the floor that looks like a cookie. That’s the symbol for the reward location. The mouse heads for it, gets there, and the symbol disappears. The next cookie promptly appears a bit further down the corridor. The mouse is surrounded by monitors and stands on a styrofoam ball that floats on compressed air and turns beneath the mouse when it runs. The ball makes it possible to transfer the mouse’s movements to the virtual environment. When the mouse reaches the reward symbol, a straw delivers a drop of soy milk to reward it and stimulate it to form memories of its experiences in the virtual world. The mouse learns when, and at which location, it will receive a reward. It also learns how to locate itself and discriminate between different corridors in the video game.

Viewing the brain with a special microscope

“As the mouse is getting to know its environment, we use a special microscope to look from the outside into its brain and we record the activities of its nerve cells on video,” explains Thomas Hainmüller, a physician and doctoral candidate in the MD/PhD program of the Spemann Graduate School of Biology and Medicine (SGBM) of the University of Freiburg. He says this works because, in reality, the head of the mouse remains relatively still under the microscope as it runs through the virtual world of the video game. On the recordings, the mice’s genetically manipulated nerve cells flash as soon as they become active. Hainmüller and Marlene Bartos, a professor of systemic and cellular neurobiology, are using this method to investigate how memories are stored and retrieved. “We repeatedly place the mouse in the virtual world on consecutive days,” says Hainmüller. “In that way, we can observe and compare the activity of the nerve cells in different stages of memory formation,” he explains.

Nerve cells encode places

The region of the brain called the hippocampus plays a decisive role in the formation of memory episodes, or memories of tangible experiences. Hainmüller and Bartos found that the nerve cells in the hippocampus create a map of the virtual world in which single neurons code for actual places in the video game. Earlier studies done at the Freiburg University Medical Center showed that nerve cells in the human hippocampus encode places in video games in the same way. The cells become activated and flash when the mouse is at the corresponding place; otherwise, they remain dark. “To our surprise, we found very different maps inside the hippocampus,” reports Hainmüller. Some provide an approximate overview of the mouse’s position in the corridor, yet they also incorporate time and context, and above all, information about which of the corridors the mouse is in. The maps are also updated over the days of the experiment, reflecting an ongoing learning process.

Better understanding of memory formation

The research team summarizes its observations as a model that explains how the activity of nerve cells in the hippocampus can map the space, time and context of memory episodes. The findings allow for a better understanding of the biological processes that underlie the formation of memories in the brain. Hainmüller says, “In the long term, we would like to use our results to contribute to the development of treatments to help people with neurological and psychiatric illnesses.”

Original publication
Thomas Hainmüller and Marlene Bartos (2018): Parallel emergence of stable and dynamic memory engrams in the hippocampus. In: Nature. doi: 10.1038/s41586-018-0191-2

https://www.pr.uni-freiburg.de/pm-en/online-magazine/research-and-discover/maps-made-of-nerve-cells

Honeybees can identify a piece of paper with zero dots as “less than” a paper with a few dots. Such a feat puts the insects in a select group—including the African grey parrot, nonhuman primates, and preschool children—that can understand the concept of zero, researchers report June 7 in Science.

“The fact that the bees generalized the rule ‘choose less’ to [blank paper] was consequently really surprising,” study coauthor Aurore Avarguès-Weber, a cognitive neuroscientist at the University of Toulouse, tells The Scientist in an email. “It demonstrated that bees consider ‘nothing’ as a quantity below any number.”

In past studies, researchers have shown that bees can count up to five, but whether the insects could grasp more-complex ideas, such as addition or nothingness, has been unclear. In the latest study, Avarguès-Weber and her colleagues tested the bees’ ability to comprehend the absence of a stimulus by first training the insects to consistently choose sheets of paper with either fewer or more dots, indicating their choice by landing on a tiny platform near the chosen sheet. If the bees chose correctly, they were rewarded with a sugary drink. The bees performed the task surprisingly well, Avarguès-Weber says. “The fact that they can do it while we were also controlling for potential confounding parameters confirms their capacity to discriminate numbers.”

The team then tested the bees’ ability to distinguish a blank piece of paper, or what the researchers call an empty set, from a sheet with one dot and found the insects chose correctly about 63 percent of the time. The behavior reveals “an understanding that an empty set is lower than one, which is challenging for some other animals,” the researchers write in the paper.

That bees can use the idea of “less than” to extrapolate that nothing has a quantitative nature is “very surprising,” says Andreas Nieder of the University of Tübingen in Germany who was not involved in the study. “Bees have minibrains compared with human brains—fewer than a million neurons compared with our 86 billion—yet they can understand the concept of an empty set.”

Nieder suggests honeybees, similar to humans, may have developed this ability to comprehend the absence of something as a survival advantage, to help with foraging, avoiding predation, and interacting with other bees of the same species. The absence of food or a mate is important to understand, he says.

Clint Perry, who studies bees at Queen Mary University of London and was not involved in the study, is a bit more cautious about the results. “I applaud these researchers. It is very difficult to test these types of cognitive abilities in bees,” he says. “But I don’t feel convinced that they were actually showing that the bees could understand the concept of zero or even the absence of information.” Perry suggests the bees might have selected where to land based solely on the total amount of black or white on each paper, since that choice also got rewarded, rather than by distinguishing the number of dots or the lack of them.

Avarguès-Weber and her colleagues argue, however, that the bees were always rewarded when shown dots. “In the test with zero (white paper) versus an image with a few dots, the bees chose the white picture without any previous experience with such stimulus. A choice based exclusively on learning would consist in choosing an image similar to the rewarded ones, ones presenting dots,” she says.

Perry says he’d like to see better control experiments to confirm the finding, while Nieder is interested in the underlying brain physiology that might drive how the insects comprehend nothingness. How the absence of a stimulus is represented in the human brain hasn’t been well studied, though it has been explored in individual neurons in the brains of nonhuman primates. It could be even harder to study in bees, because they have much smaller brains, Nieder says. Setting up the experiments to test behavior and record brain activity would be challenging.

Avarguès-Weber and her colleagues propose a solution to that challenge—virtual reality. “We are developing a setup in which a tethered bee could learn a cognitive task as done in free-flying conditions so we could record brain activity in parallel,” she says. The team also plans to test the bees’ potential ability to perform simple addition or subtraction.

S. Howard et al., “Numerical ordering of zero in honey bees,” Science, doi:10.1126/science.aar4975, 2018.

https://www.the-scientist.com/?articles.view/articleNo/54776/title/Bees-Appear-Able-to-Comprehend-the-Concept-of-Zero/


Zinnias such as this one were among the first flowers to be grown on the International Space Station.

Researchers on the International Space Station are growing plants in systems that may one day sustain astronauts traveling far across the solar system and beyond.

Vibrant orange flowers crown a leafy green stem. The plant is surrounded by many just like it, growing in an artificially lit greenhouse about the size of a laboratory vent hood. On Earth, these zinnias, colorful members of the daisy family, probably wouldn’t seem so extraordinary. But these blooms are literally out of this world. Housed on the International Space Station (ISS), orbiting 381 kilometers above Earth, they are among the first flowers grown in space and set the stage for the cultivation of all sorts of plants even farther from humanity’s home planet.

Coaxing this little flower to bloom wasn’t easy, Gioia Massa, a plant biologist at NASA’s Kennedy Space Center in Florida, tells The Scientist. “Microgravity changes the way we grow plants.” With limited gravitational tug on them, plants aren’t sure which way to send their roots or shoots. They can easily dry out, too. In space, air and water don’t mix the way they do on Earth—liquid droplets glom together into large blobs that float about, instead of staying at the roots.

Massa is part of a group of scientists trying to overcome those challenges with a benchtop greenhouse called the Vegetable Production System, or Veggie. The system is a prototype for much larger greenhouses that could one day sustain astronauts on journeys to explore Mars. “As we’re looking to go deeper into space, we’re going to need ways to support astronaut crews nutritionally and cut costs financially,” says Matthew Romeyn, a long-duration food production scientist at Kennedy Space Center. “It’s a lot cheaper to send seeds than prepackaged food.”

In March 2014, Massa and colleagues developed “plant pillows”—small bags with fabric surfaces that contained a bit of soil and fertilizer in which to plant seeds. The bags sat atop a reservoir designed to wick water to the plants’ roots when needed (Open Agriculture, 2:33-41, 2017). At first, the ISS’s pillow-grown zinnias were getting too much water and turning moldy. After the crew ramped up the speed of Veggie’s fans, the flowers started drying out—an issue relayed to the scientists on the ground in 2015 by astronaut Scott Kelly, who took a special interest in the zinnias. Kelly suggested the astronauts water the plants by hand, just like a gardener would on Earth. A little injection of water into the pillows here and there, and the plants perked right up, Massa says.

With the zinnias growing happily, the astronauts began cultivating other flora, including cabbage, lettuce, and microgreens—shoots of salad vegetables—that they used to wrap their burgers and even to make imitation lobster rolls. The gardening helped to boost the astronauts’ diets, and also, anecdotally, brought them joy. “We’re just starting to study the psychological benefits of plants in space,” Massa says, noting that gardening has been shown to relieve stress. “If we’re going to have this opportunity available for longer-term missions, we have to start now.”

The team is currently working to make the greenhouses less dependent on people, as tending to plants during space missions might take astronauts away from more-critical tasks, Massa says. The researchers recently developed Veggie PONDS (Passive Orbital Nutrient Delivery System) with help from Techshot and Tupperware Brands Corporation. This system still uses absorbent mats to wick water to plants’ seeds and roots, but does so more consistently by evenly distributing the moisture. As a result, the crew shouldn’t have to keep such a close eye on the vegetation, and should be able to grow hard-to-cultivate garden plants, such as tomatoes and peppers. Time will tell. NASA sent Veggie PONDS to the ISS this past March, and astronauts are just now starting to compare the new system’s capabilities to those of Veggie.

“What they are doing on the ISS is really neat,” says astronomer Ed Guinan of Villanova University. If astronauts are going to venture into deep space and be able to feed themselves, they need to know how plants grow in environments other than Earth’s, and which grow best. The projects on the ISS will help answer those questions, he says. Guinan was so inspired by the ISS greenhouses that he started his own project in 2017, studying how plants would grow in the soil of Mars, a likely future destination for manned space exploration. He ordered soil with characteristics of Martian dirt and told students in his astrobiology course, “You’re on Mars, there’s a colony there, and it’s your job to feed them.” Most of the students worked to grow nutritious plants, such as kale and other leafy greens, though one tried hops, a key ingredient in beer making. The hops, along with some of the other greens, grew well, Guinan reported at the American Astronomical Society meeting in January.

Yet, if and when astronauts go to Mars, they probably won’t be using the Red Planet’s dirt to grow food, notes Gene Giacomelli, a horticultural engineer at the University of Arizona. There are toxic chemicals called perchlorates to contend with, among other challenges, making it more probable that a Martian greenhouse will operate on hydroponics, similar to the systems being tested on the ISS. “The idea is to simplify things,” says Giacomelli, who has sought to design just such a greenhouse. “If you think about Martian dirt, we know very little about it—so do I trust it is going to be able to feed me, or do I take a system I know will feed me?”

For the past 10 years, Giacomelli has been working with others on a project, conceived by now-deceased business owner Phil Sadler, to build a self-regulating greenhouse that could support a crew of astronauts. This is not a benchtop system like you find on the space station, but a 5.5-meter-long, 2-meter-diameter cylinder that unfurls into an expansive greenhouse with tightly controlled circulation of air and water. The goal of the project, which was suspended in December due to lack of funding, was to show that the lab-size greenhouse could truly sustain astronauts. The greenhouse was only partially successful; the team calculated that a single cylinder would provide plenty of fresh drinking water, but would produce less than half the daily oxygen and calories an astronaut would need to survive a space mission. Though the project is on hold, Giacomelli says he hopes it will one day continue.

This kind of work, both here and on the ISS, is essential to someday sustaining astronauts in deep space, Giacomelli says. And, if researchers can figure out how to make such hydroponic systems efficient and waste-free, he notes, “the heck with Mars and the moon, we could bring that technology back to Earth.”

https://www.the-scientist.com/?articles.view/articleNo/54637/title/Researchers-Grow-Veggies-in-Space/