Posts Tagged ‘future’

In the age of big data, we are quickly producing far more digital information than we can possibly store. Last year, $20 billion was spent on new data centers in the US alone, doubling the capital expenditure on data center infrastructure from 2016. And even with skyrocketing investment in data storage, corporations and the public sector are falling behind.

But there’s hope.

With a nascent technology leveraging DNA for data storage, this may soon become a problem of the past. By encoding bits of data into tiny molecules of DNA, researchers and companies like Microsoft hope to fit entire data centers in a few flasks of DNA by the end of the decade.

But let’s back up.

Backdrop

Over the past several decades, we graduated from magnetic tape, floppy disks, and CDs to sophisticated semiconductor memory chips capable of holding data in billions of tiny transistors. In keeping with Moore’s Law, we’ve seen an exponential increase in the storage capacity of silicon chips. At the same time, however, the rate at which humanity produces new digital information is exploding. The size of the global datasphere is increasing exponentially, predicted to reach 160 zettabytes (160 trillion gigabytes) by 2025. As of 2016, digital users produced over 44 billion gigabytes of data per day. By 2025, the International Data Corporation (IDC) estimates this figure will surpass 460 billion. And with private sector efforts to improve global connectivity—such as OneWeb and Google’s Project Loon—we’re about to see an influx of data from billions of newly connected minds.

By 2020, three billion new minds are predicted to join the web. With private sector efforts, this number could reach five billion. While companies and services are profiting enormously from this influx, it’s extremely costly to build data centers at the rate needed. At present, about $50 million worth of new data center construction is required just to keep up, not to mention millions in furnishings, equipment, power, and cooling. Moreover, memory-grade silicon is rarely found pure in nature, and researchers predict it will run out by 2040.

Take DNA, on the other hand. At its theoretical limit, we could fit 215 million gigabytes of data in a single gram of DNA.

But how?

Crash Course

DNA is a double-helix molecule built from four nucleotide bases—adenine (A), thymine (T), cytosine (C), and guanine (G). Once formed, these chains fold tightly into extremely dense, space-saving data stores. To encode data files into these bases, we can use various algorithms that convert binary into nucleotides—0s and 1s into A, T, C, and G. “00” might be encoded as A, “01” as G, “10” as C, and “11” as T, for instance. Once encoded, the information is stored by synthesizing DNA with those specific base patterns, and the final encoded sequences are kept in vials with an extraordinary shelf life. To retrieve the data, the encoded DNA can be read using any number of sequencing technologies, such as Oxford Nanopore’s portable MinION.
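To make that mapping concrete, here is a minimal sketch of the two-bit encoding described above. The function names are our own, and the scheme is deliberately naive: real systems layer on error-correcting codes and avoid long runs of the same base, which are hard to synthesize and sequence accurately.

```python
# Hypothetical two-bit encoding, matching the example mapping in the text.
ENCODE = {"00": "A", "01": "G", "10": "C", "11": "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Turn raw bytes into a nucleotide string, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(seq: str) -> bytes:
    """Invert the mapping, recovering the original bytes."""
    bits = "".join(DECODE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert dna_to_bytes(bytes_to_dna(b"hello")) == b"hello"
print(bytes_to_dna(b"hi"))  # -> "GCCAGCCG"
```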

Still in its deceptive growth phase, DNA data storage—or NAM (nucleic acid memory)—is only beginning to approach the knee of its exponential growth curve. But while the process remains costly and slow, several players are beginning to crack its greatest challenge: retrieval. Just as you might click on a specific file or filter by a search term on your desktop, random access across large data stores has become a top priority for scientists at Microsoft Research and the University of Washington.

Storing over 400 megabytes of DNA-encoded data, the University of Washington’s storage system now offers random access across all of it with no bit errors.

Applications

Even before random access for data retrieval is guaranteed, DNA data storage has immediate market applications. According to IDC’s Data Age 2025 study (Figure 5), a huge proportion of enterprise data goes straight to an archive. Over time, the majority of stored data becomes merely “potentially critical,” making it less of a target for immediate retrieval.

Particularly for storing past legal documents, medical records, and other archive data, why waste precious computing power, infrastructure, and overhead?

Data-encoded DNA can last 10,000 years—guaranteed—in cold, dark, and dry conditions at a fraction of the storage cost.

Now that we can easily use natural enzymes to replicate DNA, companies have tons to gain (literally) by using DNA as a backup system—duplicating files for later retrieval and risk mitigation.

And as retrieval algorithms and biochemical technologies improve, random access across data-encoded DNA may become as easy as clicking a file on your desktop.

Even as you read this, researchers are investigating the potential of molecular computing, completely devoid of silicon and electronics.

Harvard professor George Church and his lab, for instance, envision capturing data directly in DNA. As Church has stated, “I’m interested in making biological cameras that don’t have any electronic or mechanical components,” whereby information “goes straight into DNA.” According to Church, DNA recorders would capture audiovisual data automatically. “You could paint it up on walls, and if anything interesting happens, just scrape a little bit off and read it—it’s not that far off.” One day, we may even be able to record biological events in the body. In pursuit of this end, Church’s lab is working to develop an in vivo DNA recorder of neural activity, skipping electrodes entirely.

Perhaps the most compact, long-lasting, and universal storage medium at our fingertips, DNA offers us unprecedented applications in data storage—perhaps even computing.

Potential

As the cost of DNA data storage plummets and its speed rises, commercial user interfaces will become both critical and wildly profitable. Once corporations, startups, and individuals alike can easily save files, images, or even neural activity to DNA, opportunities for disruption abound. Imagine uploading files to the cloud and having them travel to encrypted DNA vials rather than to massive, inefficient silicon-based data centers. Corporations could run their own warehouses, and local data networks could allow for heightened cybersecurity—particularly for archives.

And since DNA lasts millennia without maintenance, forget the need to copy databases and power digital archives. As long as we’re human, regardless of technological advances and changes, DNA will always be relevant and readable for generations to come.

But perhaps the most exciting potential of DNA is its portability. If we were to send a single exabyte of data (one billion gigabytes) to Mars using silicon binary media, it would take five Falcon Heavy rockets and cost $486 million in freight alone.

With DNA, we would need five cubic centimeters.
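A quick back-of-the-envelope check bears this out, using the 215-million-gigabyte-per-gram theoretical limit quoted earlier and assuming a bulk density of roughly one gram per cubic centimeter (the density figure is our assumption):

```python
# Sanity check of the Mars freight comparison above.
CAPACITY_GB_PER_GRAM = 215e6   # theoretical limit quoted earlier: 215 million GB/g
PAYLOAD_GB = 1e9               # one exabyte = one billion gigabytes
DENSITY_G_PER_CM3 = 1.0        # assumed bulk density of stored DNA

grams = PAYLOAD_GB / CAPACITY_GB_PER_GRAM
print(f"{grams:.1f} g of DNA, about {grams / DENSITY_G_PER_CM3:.0f} cubic centimeters")
# -> 4.7 g of DNA, about 5 cubic centimeters
```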

At scale, DNA has the true potential to dematerialize entire space colonies’ worth of data. Throughout evolution, DNA has unlocked extraordinary possibilities—from bacteria to humans. Soon hosting limitless data in almost zero space, it may one day unlock many more.

https://singularityhub.com/2018/04/26/the-answer-to-the-digital-data-tsunami-is-literally-in-our-dna/

By Brandon Specktor

Imagine your least-favorite world leader. (Take as much time as you need.)

Now, imagine if that person wasn’t a human, but a network of millions of computers around the world. This digi-dictator has instant access to every scrap of recorded information about every person who’s ever lived. It can make millions of calculations in a fraction of a second, controls the world’s economy and weapons systems with godlike autonomy and — scariest of all — can never, ever die.

This unkillable digital dictator, according to Tesla and SpaceX founder Elon Musk, is one of the darker scenarios awaiting humankind’s future if artificial-intelligence research continues without serious regulation.

“We are rapidly headed toward digital superintelligence that far exceeds any human, I think it’s pretty obvious,” Musk said in a new AI documentary called “Do You Trust This Computer?” directed by Chris Paine (who interviewed Musk previously for the documentary “Who Killed The Electric Car?”). “If one company or a small group of people manages to develop godlike digital super-intelligence, they could take over the world.”

Humans have tried to take over the world before. However, an authoritarian AI would have one terrible advantage over like-minded humans, Musk said.

“At least when there’s an evil dictator, that human is going to die,” Musk added. “But for an AI there would be no death. It would live forever, and then you’d have an immortal dictator, from which we could never escape.”

And, this hypothetical AI-dictator wouldn’t even have to be evil to pose a threat to humans, Musk added. All it has to be is determined.

“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings,” Musk said. “It’s just like, if we’re building a road, and an anthill happens to be in the way. We don’t hate ants, we’re just building a road. So, goodbye, anthill.”

Those who follow news from the Musk-verse will not be surprised by his opinions in the new documentary; the tech mogul has long been a vocal critic of unchecked artificial intelligence. In 2014, Musk called AI humanity’s “biggest existential threat,” and in 2015, he joined a handful of other tech luminaries and researchers, including Stephen Hawking, to urge the United Nations to ban killer robots. He has said unregulated AI poses “vastly more risk than North Korea” and proposed starting some sort of federal oversight program to monitor the technology’s growth.

“Public risks require public oversight,” he tweeted. “Getting rid of the FAA [wouldn’t] make flying safer. They’re there for good reason.”

https://www.livescience.com/62239-elon-musk-immortal-artificial-intelligence-dictator.html

by Antonio Regalado

The startup accelerator Y Combinator is known for supporting audacious companies in its popular three-month boot camp.

There’s never been anything quite like Nectome, though.

Next week, at YC’s “demo days,” Nectome’s cofounder, Robert McIntyre, is going to describe his technology for exquisitely preserving brains in microscopic detail using a high-tech embalming process. Then the MIT graduate will make his business pitch. As the company’s website puts it: “What if we told you we could back up your mind?”

So yeah. Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.

This story has a grisly twist, though. For Nectome’s procedure to work, it’s essential that the brain be fresh. The company says its plan is to connect people with terminal illnesses to a heart-lung machine in order to pump its mix of scientific embalming chemicals into the big carotid arteries in their necks while they are still alive (though under general anesthesia).

The company has consulted with lawyers familiar with California’s two-year-old End of Life Option Act, which permits doctor-assisted suicide for terminal patients, and believes its service will be legal. The product is “100 percent fatal,” says McIntyre. “That is why we are uniquely situated among the Y Combinator companies.”

There’s a waiting list

Brain uploading will be familiar to readers of Ray Kurzweil’s books or other futurist literature. You may already be convinced that immortality as a computer program is definitely going to be a thing. Or you may think transhumanism, the umbrella term for such ideas, is just high-tech religion preying on people’s fear of death.

Either way, you should pay attention to Nectome. The company has won a large federal grant and is collaborating with Edward Boyden, a top neuroscientist at MIT, and its technique just claimed an $80,000 science prize for preserving a pig’s brain so well that every synapse inside it could be seen with an electron microscope.

McIntyre, a computer scientist, and his cofounder Michael McCanna have been following the tech entrepreneur’s handbook with ghoulish alacrity. “The user experience will be identical to physician-assisted suicide,” he says. “Product-market fit is people believing that it works.”

Nectome’s storage service is not yet for sale and may not be for several years. Also still lacking is evidence that memories can be found in dead tissue. But the company has found a way to test the market. Following the example of electric-vehicle maker Tesla, it is sizing up demand by inviting prospective customers to join a waiting list with a $10,000 deposit, fully refundable if you change your mind.

So far, 25 people have done so. One of them is Sam Altman, a 32-year-old investor who is one of the creators of the Y Combinator program. Altman tells MIT Technology Review he’s pretty sure minds will be digitized in his lifetime. “I assume my brain will be uploaded to the cloud,” he says.

Old idea, new approach

The brain storage business is not new. In Arizona, the Alcor Life Extension Foundation holds more than 150 bodies and heads in liquid nitrogen, including those of baseball great Ted Williams. But there’s dispute over whether such cryonic techniques damage the brain, perhaps beyond repair.

So starting several years ago, McIntyre, then working with cryobiologist Greg Fahy at a company named 21st Century Medicine, developed a different method, which combines embalming with cryonics. It proved effective at preserving an entire brain to the nanometer level, including the connectome—the web of synapses that connect neurons.

A connectome map could be the basis for re-creating a particular person’s consciousness, believes Ken Hayworth, a neuroscientist who is president of the Brain Preservation Foundation—the organization that, on March 13, recognized McIntyre and Fahy’s work with the prize for preserving the pig brain.

There’s no expectation here that the preserved tissue can actually be brought back to life, as is the hope with Alcor-style cryonics. Instead, the idea is to retrieve information that’s present in the brain’s anatomical layout and molecular details.

“If the brain is dead, it’s like your computer is off, but that doesn’t mean the information isn’t there,” says Hayworth.

A brain connectome is inconceivably complex; a single neuron can connect to 8,000 others, and the brain contains tens of billions of cells. Today, imaging the connections in even a square millimeter of mouse brain is an overwhelming task. “But it may be possible in 100 years,” says Hayworth. “Speaking personally, if I were facing a terminal illness I would likely choose euthanasia by [this method].”

A human brain

The Nectome team demonstrated the seriousness of its intentions starting this January, when McIntyre, McCanna, and a pathologist they’d hired spent several weeks camped out at an Airbnb in Portland, Oregon, waiting to purchase a freshly deceased body.

In February, they obtained the corpse of an elderly woman and were able to begin preserving her brain just 2.5 hours after her death. It was the first demonstration of their technique, called aldehyde-stabilized cryopreservation, on a human brain.

Fineas Lupeiu, founder of Aeternitas, a company that arranges for people to donate their bodies to science, confirmed that he provided Nectome with the body. He did not disclose the woman’s age or cause of death, or say how much he charged.

The preservation procedure, which takes about six hours, was carried out at a mortuary. “You can think of what we do as a fancy form of embalming that preserves not just the outer details but the inner details,” says McIntyre. He says the woman’s brain is “one of the best-preserved ever,” although her being dead for even a couple of hours damaged it. Her brain is not being stored indefinitely but is being sliced into paper-thin sheets and imaged with an electron microscope.

McIntyre says the undertaking was a trial run for what the company’s preservation service could look like. He says they are seeking to try it in the near future on a person planning doctor-assisted suicide because of a terminal illness.

Hayworth told me he’s quite anxious that Nectome refrain from offering its service commercially before the planned protocol is published in a medical journal. That’s so “the medical and ethics community can have a complete round of discussion.”

“If you are like me, and think that mind uploading is going to happen, it’s not that controversial,” he says. “But it could look like you are enticing someone to commit suicide to preserve their brain.” He thinks McIntyre is walking “a very fine line” by asking people to pay to join a waiting list. Indeed, he “may have already crossed it.”

Crazy or not?

Some scientists say brain storage and reanimation is an essentially fraudulent proposition. Writing in our pages in 2015, the McGill University neuroscientist Michael Hendricks decried the “abjectly false hope” peddled by transhumanists promising resurrection in ways that technology can probably never deliver.

“Burdening future generations with our brain banks is just comically arrogant. Aren’t we leaving them with enough problems?” Hendricks told me this week after reviewing Nectome’s website. “I hope future people are appalled that in the 21st century, the richest and most comfortable people in history spent their money and resources trying to live forever on the backs of their descendants. I mean, it’s a joke, right? They are cartoon bad guys.”

Nectome has received substantial support for its technology, however. It has raised $1 million in funding so far, including the $120,000 that Y Combinator provides to all the companies it accepts. It has also won a $960,000 federal grant from the U.S. National Institute of Mental Health for “whole-brain nanoscale preservation and imaging,” the text of which foresees a “commercial opportunity in offering brain preservation” for purposes including drug research.

About a third of the grant funds are being spent in the MIT laboratory of Edward Boyden, a well-known neuroscientist. Boyden says he’s seeking to combine McIntyre’s preservation procedure with a technique MIT invented, expansion microscopy, which causes brain tissue to swell to 10 or 20 times its normal size, and which facilitates some types of measurements.

I asked Boyden what he thinks of brain preservation as a service. “I think that as long as they are up-front about what we do know and what we don’t know, the preservation of information in the brain might be a very useful thing,” he replied in an e-mail.

The unknowns, of course, are substantial. Not only does no one know what consciousness is (so it will be hard to tell if an eventual simulation has any), but it’s also unclear what brain structures and molecular details need to be retained to preserve a memory or a personality. Is it just the synapses, or is it every fleeting molecule? “Ultimately, to answer this question, data is needed,” Boyden says.

Demo day

Nectome has been honing its pitch for Y Combinator’s demo days, trying to create a sharp two-minute summary of its ideas to present to a group of elite investors. The team was leaning against showing an image of the elderly woman’s brain. Some people thought it was unpleasant. The company had also walked back its corporate slogan, changing it from “We archive your mind” to “Committed to the goal of archiving your mind,” which seemed less like an overpromise.

McIntyre sees his company in the tradition of “hard science” startups working on tough problems like quantum computing. “Those companies also can’t sell anything now, but there is a lot of interest in technologies that could be revolutionary if they are made to work,” he says. “I do think that brain preservation has amazing commercial potential.”

He also keeps in mind the dictum that entrepreneurs should develop products they want to use themselves. He sees good reasons to save a copy of himself somewhere, and copies of other people, too.

“There is a lot of philosophical debate, but to me a simulation is close enough that it’s worth something,” McIntyre told me. “And there is a much larger humanitarian aspect to the whole thing. Right now, when a generation of people die, we lose all their collective wisdom. You can transmit knowledge to the next generation, but it’s harder to transmit wisdom, which is learned. Your children have to learn from the same mistakes.”

“That was fine for a while, but we get more powerful every generation. The sheer immense potential of what we can do increases, but the wisdom does not.”

https://www.technologyreview.com/s/610456/a-startup-is-pitching-a-mind-uploading-service-that-is-100-percent-fatal/

by Emily Mullin

When David Graham wakes up in the morning, the flat white box that’s Velcroed to the wall of his room in Robbie’s Place, an assisted living facility in Marlborough, Massachusetts, begins recording his every movement.

It knows when he gets out of bed, gets dressed, walks to his window, or goes to the bathroom. It can tell if he’s sleeping or has fallen. It does this by using low-power wireless signals to map his gait speed, sleep patterns, location, and even breathing pattern. All that information gets uploaded to the cloud, where machine-learning algorithms find patterns in the thousands of movements he makes every day.

The rectangular boxes are part of an experiment to help researchers track and understand the symptoms of Alzheimer’s.

It’s not always obvious when patients are in the early stages of the disease. Alterations in the brain can cause subtle changes in behavior and sleep patterns years before people start experiencing confusion and memory loss. Researchers think artificial intelligence could recognize these changes early and identify patients at risk of developing the most severe forms of the disease.

Spotting the first indications of Alzheimer’s years before any obvious symptoms come on could help pinpoint people most likely to benefit from experimental drugs and allow family members to plan for eventual care. Devices equipped with such algorithms could be installed in people’s homes or in long-term care facilities to monitor those at risk. For patients who already have a diagnosis, such technology could help doctors make adjustments in their care.

Drug companies, too, are interested in using machine-learning algorithms, in their case to search through medical records for the patients most likely to benefit from experimental drugs. Once people are in a study, AI might be able to tell investigators whether the drug is addressing their symptoms.

Currently, there’s no easy way to diagnose Alzheimer’s. No single test exists, and brain scans alone can’t determine whether someone has the disease. Instead, physicians have to look at a variety of factors, including a patient’s medical history and observations reported by family members or health-care workers. So machine learning could pick up on patterns that otherwise would easily be missed.


David Graham, one of Vahia’s patients, has one of the AI-powered devices in his room at Robbie’s Place, an assisted living facility in Marlborough, Massachusetts.

Graham, unlike the four other patients with such devices in their rooms, hasn’t been diagnosed with Alzheimer’s. But researchers are monitoring his movements and comparing them with patterns seen in patients who doctors suspect have the disease.

Dina Katabi and her team at MIT’s Computer Science and Artificial Intelligence Laboratory initially developed the device as a fall detector for older people. But they soon realized it had far more uses. If it could pick up on a fall, they thought, it must also be able to recognize other movements, like pacing and wandering, which can be signs of Alzheimer’s.

Katabi says their intention was to monitor people without needing them to put on a wearable tracking device every day. “This is completely passive. A patient doesn’t need to put sensors on their body or do anything specific, and it’s far less intrusive than a video camera,” she says.

How it works

Graham hardly notices the white box hanging in his sunlit, tidy room. He’s most aware of it on days when Ipsit Vahia makes his rounds and tells him about the data it’s collecting. Vahia is a geriatric psychiatrist at McLean Hospital and Harvard Medical School, and he and the technology’s inventors at MIT are running a small pilot study of the device.

Graham looks forward to these visits. During a recent one, he was surprised when Vahia told him he was waking up at night. The device was able to detect it, though Graham didn’t know he was doing it.

The device’s wireless radio signal, only a thousandth as powerful as wi-fi, reflects off everything in a 30-foot radius, including human bodies. Every movement—even the slightest ones, like breathing—causes a change in the reflected signal.

Katabi and her team developed machine-learning algorithms that analyze all these minute reflections. They trained the system to recognize simple motions like walking and falling, and more complex movements like those associated with sleep disturbances. “As you teach it more and more, the machine learns, and the next time it sees a pattern, even if it’s too complex for a human to abstract that pattern, the machine recognizes that pattern,” Katabi says.

Over time, the device creates large readouts of data that show patterns of behavior. The AI is designed to pick out deviations from those patterns that might signify things like agitation, depression, and sleep disturbances. It could also pick up whether a person is repeating certain behaviors during the day. These are all classic symptoms of Alzheimer’s.
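As a rough illustration of what “picking out deviations” can mean in practice, the toy sketch below builds a personal baseline from simulated daily movement features and flags days that stray far from it. It is not the CSAIL team’s pipeline; the features, values, and threshold logic are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical daily features: [gait speed (m/s), hours slept, night wakings]
baseline = rng.normal([0.8, 7.0, 1.0], [0.05, 0.5, 0.5], size=(60, 3))
mean, std = baseline.mean(axis=0), baseline.std(axis=0)

def deviation_score(day: np.ndarray) -> float:
    """Largest absolute z-score across features for one day's readings."""
    return float(np.max(np.abs((day - mean) / std)))

typical_day = np.array([0.78, 7.2, 1.0])
restless_day = np.array([0.60, 4.5, 5.0])  # slower gait, poor sleep, wandering
print(deviation_score(typical_day))   # small: consistent with baseline
print(deviation_score(restless_day))  # large: worth a clinician's attention
```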

“If you can catch these deviations early, you will be able to anticipate them and help manage them,” Vahia says.

In a patient with an Alzheimer’s diagnosis, Vahia and Katabi were able to tell that she was waking up at 2 a.m. and wandering around her room. They also noticed that she would pace more after certain family members visited. After confirming that behavior with a nurse, Vahia adjusted the patient’s dose of a drug used to prevent agitation.


Ipsit Vahia and Dina Katabi are testing an AI-powered device that Katabi’s lab built to monitor the behaviors of people with Alzheimer’s as well as those at risk of developing the disease.

Brain changes

AI is also finding use in helping physicians detect early signs of Alzheimer’s in the brain and understand how those physical changes unfold in different people. “When a radiologist reads a scan, it’s impossible to tell whether a person will progress to Alzheimer’s disease,” says Pedro Rosa-Neto, a neurologist at McGill University in Montreal.

Rosa-Neto and his colleague Sulantha Mathotaarachchi developed an algorithm that analyzed hundreds of positron-emission tomography (PET) scans from people who had been deemed at risk of developing Alzheimer’s. From medical records, the researchers knew which of these patients had gone on to develop the disease within two years of a scan, but they wanted to see if the AI system could identify them just by picking up patterns in the images.

Sure enough, the algorithm was able to spot patterns in clumps of amyloid—a protein often associated with the disease—in certain regions of the brain. Even trained radiologists would have had trouble noticing these issues on a brain scan. From those patterns, the algorithm predicted with 84 percent accuracy which patients would go on to develop Alzheimer’s.
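The general shape of such a pipeline is straightforward, even if the real work lies in extracting meaningful imaging features. The sketch below trains a simple classifier on synthetic “regional amyloid load” features; it illustrates the approach on made-up data and is not the McGill group’s model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_regions = 200, 50  # hypothetical regional amyloid-load features
X = rng.normal(size=(n_patients, n_regions))
weights = rng.normal(size=n_regions)  # hidden "true" pattern of risky regions
y = (X @ weights + rng.normal(scale=2.0, size=n_patients) > 0).astype(int)

# Held-out accuracy of a linear classifier at predicting who progressed.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```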

Machine learning is also helping doctors predict the severity of the disease in different patients. Duke University physician and scientist P. Murali Doraiswamy is using machine learning to figure out what stage of the disease patients are in and whether their condition is likely to worsen.

“We’ve been seeing Alzheimer’s as a one-size-fits-all problem,” says Doraiswamy. But people with Alzheimer’s don’t all experience the same symptoms, and some might get worse faster than others. Doctors have no idea which patients will remain stable for a while or which will quickly get sicker. “So we thought maybe the best way to solve this problem was to let a machine do it,” he says.

He worked with Dragan Gamberger, an artificial-intelligence expert at the Rudjer Boskovic Institute in Croatia, to develop a machine-learning algorithm that sorted through brain scans and medical records from 562 patients who had mild cognitive impairment at the beginning of a five-year period.

Two distinct groups emerged: those whose cognition declined significantly and those whose symptoms changed little or not at all over the five years. The system was able to pick up changes in the loss of brain tissue over time.

A third group was somewhere in the middle, between mild cognitive impairment and advanced Alzheimer’s. “We don’t know why these clusters exist yet,” Doraiswamy says.
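In code, the unsupervised core of this idea is compact. The sketch below clusters synthetic five-year cognitive-score trajectories and recovers stable, intermediate, and declining groups without any labels; it stands in for the study’s method, which worked on real brain scans and medical records rather than simulated scores.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
years = np.arange(5)
# Three synthetic patient groups: flat, slowly declining, rapidly declining.
stable = 28 + rng.normal(0, 0.5, size=(40, 5))
middling = 28 - 1.0 * years + rng.normal(0, 0.5, size=(40, 5))
declining = 28 - 2.5 * years + rng.normal(0, 0.5, size=(40, 5))
trajectories = np.vstack([stable, middling, declining])  # yearly scores per patient

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(trajectories)
for k in range(3):
    print(f"cluster {k}: mean trajectory {trajectories[labels == k].mean(axis=0).round(1)}")
```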

Clinical trials

From 2002 to 2012, 99 percent of investigational Alzheimer’s drugs failed in clinical trials. One reason is that no one knows exactly what causes the disease. But another reason is that it is difficult to identify the patients most likely to benefit from specific drugs.

AI systems could help design better trials. “Once we have those people together with common genes, characteristics, and imaging scans, that’s going to make it much easier to test drugs,” says Marilyn Miller, who directs AI research in Alzheimer’s at the National Institute on Aging, part of the US National Institutes of Health.

Then, once patients are enrolled in a study, researchers could continuously monitor them to see if they’re benefiting from the medication.

“One of the biggest challenges in Alzheimer’s drug development is we haven’t had a good way of parsing out the right population to test the drug on,” says Vaibhav Narayan, a researcher on Johnson & Johnson’s neuroscience team.

He says machine-learning algorithms will greatly speed the process of recruiting patients for drug studies. And if AI can pick out which patients are most likely to get worse more quickly, it will be easier for investigators to tell if a drug is having any benefit.

That way, if doctors like Vahia notice signs of Alzheimer’s in a person like Graham, they can quickly get him signed up for a clinical trial in hopes of curbing the devastating effects that would otherwise come years later.

Miller thinks AI could be used to diagnose and predict Alzheimer’s in patients as soon as five years from now. But she says it’ll require a lot of data to make sure the algorithms are accurate and reliable. Graham, for one, is doing his part to help out.

https://www.technologyreview.com/s/609236/ai-can-spot-signs-of-alzheimers-before-your-family-does/


Illustration by Paweł Jońca

by Helen Thomson

In March 2015, Li-Huei Tsai set up a tiny disco for some of the mice in her laboratory. For an hour each day, she placed them in a box lit only by a flickering strobe. The mice — which had been engineered to produce plaques of the peptide amyloid-β in the brain, a hallmark of Alzheimer’s disease — crawled about curiously. When Tsai later dissected them, those that had been to the mini dance parties had significantly lower levels of plaque than mice that had spent the same time in the dark.

Tsai, a neuroscientist at Massachusetts Institute of Technology (MIT) in Cambridge, says she checked the result; then checked it again. “For the longest time, I didn’t believe it,” she says. Her team had managed to clear amyloid from part of the brain with a flickering light. The strobe was tuned to 40 hertz and was designed to manipulate the rodents’ brainwaves, triggering a host of biological effects that eliminated the plaque-forming proteins. Although promising findings in mouse models of Alzheimer’s disease have been notoriously difficult to replicate in humans, the experiment offered some tantalizing possibilities. “The result was so mind-boggling and so robust, it took a while for the idea to sink in, but we knew we needed to work out a way of trying out the same thing in humans,” Tsai says.

Scientists identified the waves of electrical activity that constantly ripple through the brain almost 100 years ago, but they have struggled to assign these oscillations a definitive role in behaviour or brain function. Studies have strongly linked brainwaves to memory consolidation during sleep, and implicated them in processing sensory inputs and even coordinating consciousness. Yet not everyone is convinced that brainwaves are all that meaningful. “Right now we really don’t know what they do,” says Michael Shadlen, a neuroscientist at Columbia University in New York City.

Now, a growing body of evidence, including Tsai’s findings, hints at a meaningful connection to neurological disorders such as Alzheimer’s and Parkinson’s diseases. The work offers the possibility of forestalling or even reversing the damage caused by such conditions without using a drug. More than two dozen clinical trials are aiming to modulate brainwaves in some way — some with flickering lights or rhythmic sounds, but most through the direct application of electrical currents to the brain or scalp. They aim to treat everything from insomnia to schizophrenia and premenstrual dysphoric disorder.

Tsai’s study was the first glimpse of a cellular response to brainwave manipulation. “Her results were a really big surprise,” says Walter Koroshetz, director of the US National Institute of Neurological Disorders and Stroke in Bethesda, Maryland. “It’s a novel observation that would be really interesting to pursue.”


A powerful wave

Brainwaves were first noticed by German psychiatrist Hans Berger. In 1929, he published a paper describing the repeating waves of current he observed when he placed electrodes on people’s scalps. It was the world’s first electroencephalogram (EEG) recording — but nobody took much notice. Berger was a controversial figure who had spent much of his career trying to identify the physiological basis of psychic phenomena. It was only after his colleagues began to confirm the results several years later that Berger’s invention was recognized as a window into brain activity.

Neurons communicate using electrical impulses created by the flow of ions into and out of each cell. Although a single firing neuron cannot be picked up through the electrodes of an EEG, when a group of neurons fires again and again in synchrony, it shows up as oscillating electrical ripples that sweep through the brain.

Those of the highest frequency are gamma waves, which range from 25 to 140 hertz. People often show a lot of this kind of activity when they are at peak concentration. At the other end of the scale are delta waves, which have the lowest frequency — around 0.5 to 4 hertz. These tend to occur in deep sleep (see ‘Rhythms of the mind’).
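In practice, researchers quantify these bands by estimating how much of an EEG signal’s power falls between each pair of frequency cutoffs. The sketch below does this for a synthetic one-channel trace; the delta and gamma ranges follow the text, while the intermediate band edges are common conventions that vary from lab to lab, and the sampling rate is an assumption.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 140)}

fs = 500  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic EEG: a 40 Hz gamma rhythm buried in noise.
signal = np.sin(2 * np.pi * 40 * t) + 0.5 * np.random.default_rng(3).normal(size=t.size)

freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
for name, (lo, hi) in BANDS.items():
    band_power = psd[(freqs >= lo) & (freqs < hi)].sum()
    print(f"{name:>5}: {band_power:.3f}")
# The 40 Hz component dominates the gamma band, as expected.
```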

At any point in time, one type of brainwave tends to dominate, although other bands are always present to some extent. Scientists have long wondered what purpose, if any, this hum of activity serves, and some clues have emerged over the past three decades. For instance, in 1994, discoveries in mice indicated that the distinct patterns of oscillatory activity during sleep mirrored those during a previous learning exercise. Scientists suggested that these waves could be helping to solidify memories.

Brainwaves also seem to influence conscious perception. Randolph Helfrich at the University of California, Berkeley, and his colleagues devised a way to enhance or reduce gamma oscillations of around 40 hertz using a non-invasive technique called transcranial alternating current stimulation (tACS). By tweaking these oscillations, they were able to influence whether a person perceived a video of moving dots as travelling vertically or horizontally.

The oscillations also provide a potential mechanism for how the brain creates a coherent experience from the chaotic symphony of stimuli hitting the senses at any one time, a puzzle known as the ‘binding problem’. By synchronizing the firing rates of neurons responding to the same event, brainwaves might ensure that the all of the relevant information relating to one object arrives at the correct area of the brain at exactly the right time. Coordinating these signals is the key to perception, says Robert Knight, a cognitive neuroscientist at the University of California, Berkeley, “You can’t just pray that they will self-organize.”


Healthy oscillations

But these oscillations can become disrupted in certain disorders. In Parkinson’s disease, for example, the brain generally starts to show an increase in beta waves in the motor regions as body movement becomes impaired. In a healthy brain, beta waves are suppressed just before a body movement. But in Parkinson’s disease, neurons seem to get stuck in a synchronized pattern of activity. This leads to rigidity and movement difficulties. Peter Brown, who studies Parkinson’s disease at the University of Oxford, UK, says that current treatments for the symptoms of the disease — deep-brain stimulation and the drug levodopa — might work by reducing beta waves.

People with Alzheimer’s disease show a reduction in gamma oscillations. So Tsai and others wondered whether gamma-wave activity could be restored, and whether this would have any effect on the disease.

They started by using optogenetics, in which brain cells are engineered to respond directly to a flash of light. In 2009, Tsai’s team, in collaboration with Christopher Moore, also at MIT at the time, demonstrated for the first time that it is possible to use the technique to drive gamma oscillations in a specific part of the mouse brain.

Tsai and her colleagues subsequently found that tinkering with the oscillations sets in motion a host of biological events. It initiates changes in gene expression that cause microglia — immune cells in the brain — to change shape. The cells essentially go into scavenger mode, enabling them to better dispose of harmful clutter in the brain, such as amyloid-β. Koroshetz says that the link to neuroimmunity is new and striking. “The role of immune cells like microglia in the brain is incredibly important and poorly understood, and is one of the hottest areas for research now,” he says.

If the technique were to have any therapeutic relevance, however, Tsai and her colleagues had to find a less invasive way of manipulating brainwaves. Flashing lights at specific frequencies has been shown to influence oscillations in some parts of the brain, so the researchers turned to strobe lights. They started by exposing young mice with a propensity for amyloid build-up to flickering LED lights for one hour. This created a drop in free-floating amyloid, but it was temporary, lasting less than 24 hours, and restricted to the visual cortex.

To achieve a longer-lasting effect on animals with amyloid plaques, they repeated the experiment for an hour a day over the course of a week, this time using older mice in which plaques had begun to form. Twenty-four hours after the end of the experiment, these animals showed a 67% reduction in plaque in the visual cortex compared with controls. The team also found that the technique reduced tau protein, another hallmark of Alzheimer’s disease.

Alzheimer’s plaques tend to have their earliest negative impacts on the hippocampus, however, not the visual cortex. To elicit oscillations where they are needed, Tsai and her colleagues are investigating other techniques. Playing rodents a 40-hertz noise, for example, seems to cause a decrease in amyloid in the hippocampus — perhaps because the hippocampus sits closer to the auditory cortex than to the visual cortex.

Tsai and her colleague Ed Boyden, a neuroscientist at MIT, have now formed a company, Cognito Therapeutics in Cambridge, to test similar treatments in humans. Last year, they started a safety trial, which involves testing a flickering light device, worn like a pair of glasses, on 12 people with Alzheimer’s.

Caveats abound. The mouse model of Alzheimer’s disease is not a perfect reflection of the disorder, and many therapies that have shown promise in rodents have failed in humans. “I used to tell people — if you’re going to get Alzheimer’s, first become a mouse,” says Thomas Insel, a neuroscientist and psychiatrist who led the US National Institute of Mental Health in Bethesda, Maryland, from 2002 until 2015.

Others are also looking to test how manipulating brainwaves might help people with Alzheimer’s disease. “We thought Tsai’s study was outstanding,” says Emiliano Santarnecchi at Harvard Medical School in Boston, Massachusetts. His team had already been using tACS to stimulate the brain, and he wondered whether it might elicit stronger effects than a flashing strobe. “This kind of stimulation can target areas of the brain more specifically than sensory stimulation can — after seeing Tsai’s results, it was a no-brainer that we should try it in Alzheimer’s patients.”

His team has begun an early clinical trial in which ten people with Alzheimer’s disease receive tACS for one hour daily for two weeks. A second trial, in collaboration with Boyden and Tsai, will look for signals of activated microglia and levels of tau protein. Results are expected from both trials by the end of the year.

Knight says that Tsai’s animal studies clearly show that oscillations have an effect on cellular metabolism — but whether the same effect will be seen in humans is another matter. “In the end, it’s data that will win out,” he says.

The studies may reveal risks, too. Gamma oscillations are the type most likely to induce seizures in people with photosensitive epilepsy, says Dora Hermes, a neuroscientist at Stanford University in California. She recalls a famous episode of a Japanese cartoon that featured flickering red and blue lights, which induced seizures in some viewers. “So many people watched that episode that there were almost 700 extra visits to the emergency department that day.”

A brain boost

Nevertheless, there is clearly a growing excitement around treating neurological diseases using neuromodulation, rather than pharmaceuticals. “There’s pretty good evidence that by changing neural-circuit activity we can get improvements in Parkinson’s, chronic pain, obsessive–compulsive disorder and depression,” says Insel. This is important, he says, because so far, pharmaceutical treatments for neurological disease have suffered from a lack of specificity. Koroshetz adds that funding institutes are eager for treatments that are innovative, non-invasive and quickly translatable to people.

Since publishing their mouse paper, Boyden says, he has had a deluge of requests from researchers wanting to use the same technique to treat other conditions. But there are a lot of details to work out. “We need to figure out what is the most effective, non-invasive way of manipulating oscillations in different parts of the brain,” he says. “Perhaps it is using light, but maybe it’s a smart pillow or a headband that could target these oscillations using electricity or sound.” One of the simplest methods that scientists have found is neurofeedback, which has shown some success in treating a range of conditions, including anxiety, depression and attention-deficit hyperactivity disorder. People who use this technique are taught to control their brainwaves by measuring them with an EEG and getting feedback in the form of visual or audio cues.
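A toy version of that feedback loop, with simulated EEG standing in for a live recording, might look like the sketch below: estimate band power over a short window, then map it to a cue the user can see or hear. Everything here is illustrative, including the sampling rate, the choice of the alpha band, and the cue scaling.

```python
import numpy as np
from scipy.signal import welch

fs = 250  # sampling rate in Hz (assumed)
rng = np.random.default_rng(4)

def alpha_power(window: np.ndarray) -> float:
    """Power in the 8-12 Hz alpha band for one window of samples."""
    freqs, psd = welch(window, fs=fs, nperseg=len(window))
    return float(psd[(freqs >= 8) & (freqs < 12)].sum())

for second in range(5):  # stand-in for a live EEG stream
    alpha = 0.2 * second * np.sin(2 * np.pi * 10 * np.arange(fs) / fs)
    window = rng.normal(size=fs) + alpha  # alpha rhythm grows over time
    power = alpha_power(window)
    # Feedback cue: a bar that lengthens as the trained rhythm strengthens.
    print(f"t={second}s  alpha={power:.3f}  cue: " + "#" * min(40, int(power * 100)))
```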

Phyllis Zee, a neurologist at Northwestern University in Chicago, Illinois, and her colleagues delivered pulses of ‘pink noise’ — audio frequencies that together sound a bit like a waterfall — to healthy older adults while they slept. They were particularly interested in eliciting the delta oscillations that characterize deep sleep. This aspect of sleep decreases with age, and is associated with a decreased ability to consolidate memories.

So far, her team has found that stimulation increased the amplitude of the slow waves, and was associated with a 25–30% improvement in recall of word pairs learnt the night before, compared with a fake treatment. Her team is midway through a clinical trial to see whether longer-term acoustic stimulation might help people with mild cognitive impairment.

Although relatively safe, these kinds of technologies do have limitations. Neurofeedback is easy to learn, for instance, but it can take time to have an effect, and the results are often short-lived. In experiments that use magnetic or acoustic stimulation, it is difficult to know precisely what area of the brain is being affected. “The field of external brain stimulation is a little weak at the moment,” says Knight. Many approaches, he says, are open loop, meaning that they don’t track the effect of the modulation using an EEG. Closed loop, he says, would be more practical. Some experiments, such as Zee’s and those involving neurofeedback, already do this. “I think the field is turning a corner,” Knight says. “It’s attracting some serious research.”

In addition to potentially leading to treatments, these studies could break open the field of neural oscillations in general, helping to link them more firmly to behaviour and how the brain works as a whole.

Shadlen says he is open to the idea that oscillations play a part in human behaviour and consciousness. But for now, he remains unconvinced that they are directly responsible for these phenomena — referring to the many roles people ascribe to them as “magical incantations”. He says he fully accepts that these brain rhythms are signatures of important brain processes, “but to posit the idea that synchronous spikes of activity are meaningful, that by suddenly wiggling inputs at a specific frequency, it suddenly elevates activity onto our conscious awareness? That requires more explanation.”

Whatever their role, Tsai mostly wants to discipline brainwaves and harness them against disease. Cognito Therapeutics has just received approval for a second, larger trial, which will look at whether the therapy has any effect on Alzheimer’s disease symptoms. Meanwhile, Tsai’s team is focusing on understanding more about the downstream biological effects and how to better target the hippocampus with non-invasive technologies.

For Tsai, the work is personal. Her grandmother, who raised her, was affected by dementia. “Her confused face made a deep imprint in my mind,” Tsai says. “This is the biggest challenge of our lifetime, and I will give it all I have.”

https://www.nature.com/articles/d41586-018-02391-6


93-year-old Mary Derr sits on her bed near the robot cat she calls “Buddy” in the home she shares with her daughter Jeanne Elliott in South Kingstown, R.I. Buddy is one of Hasbro’s “Joy for All” robotic cats, aimed at seniors and meant to act as a “companion”; it has been on the market for two years. Derr has mild dementia, and Elliott purchased the robot earlier this year to keep her mother company.

By MICHELLE R. SMITH

Imagine a cat that can keep a person company, doesn’t need a litter box and can remind an aging relative to take her medicine or help find her eyeglasses.

That’s the vision of toymaker Hasbro and scientists at Brown University, who have received a three-year, $1 million grant from the National Science Foundation to find ways to add artificial intelligence to Hasbro’s “Joy for All” robotic cat.

The cat, which has been on the market for two years, is aimed at seniors and meant to act as a “companion.” It purrs and meows, and even appears to lick its paw and roll over to ask for a belly rub. The Brown-Hasbro project is aimed at developing additional capabilities for the cats to help older adults with simple tasks.

Researchers at Brown’s Humanity-Centered Robotics Initiative are working to determine which tasks make the most sense, and which can help older adults stay in their own homes longer, such as finding lost objects, or reminding the owner to call someone or go to a doctor’s appointment.

“It’s not going to iron and wash dishes,” said Bertram Malle, a professor of cognitive, linguistic and psychological sciences at Brown. “Nobody expects them to have a conversation. Nobody expects them to move around and fetch a newspaper. They’re really good at providing comfort.”

Malle said they don’t want to make overblown promises of what the cat can do, something he and his fellow researcher — computer science professor Michael Littman — said they’ve seen in other robots on the market. They hope to make a cat that would perform a small set of tasks very well.

They also want to keep it affordable, just a few hundred dollars. The current version costs $100.

They’ve given the project a name that gets at that idea: Affordable Robotic Intelligence for Elderly Support, or ARIES. The team includes researchers from Brown’s medical school, area hospitals and a designer at the University of Cincinnati.

It’s an idea that has appeal to Jeanne Elliott, whose 93-year-old mother, Mary Derr, lives with her in South Kingstown. Derr has mild dementia and the Joy for All cat Elliott purchased this year has become a true companion for Derr, keeping her company and soothing her while Elliott is at work. Derr treats it like a real cat, even though she knows it has batteries.

“Mom has a tendency to forget things,” she said, adding that a cat reminding her “we don’t have any appointments today, take your meds, be careful when you walk, things like that, be safe, reassuring things, to have that available during the day would be awesome.”

Diane Feeney Mahoney, a professor emerita at MGH Institute of Health Professions School of Nursing, who has studied technology for older people, said the project showed promise because of its team of researchers. She hopes they involve people from the Alzheimer’s community, adding, “We just don’t want to push technology for technology’s sake.”

She called the cat a tool that could make things easier for someone caring for a person with middle-stage dementia, or to be used in nursing homes where pets are not allowed.

The scientists are embarking on surveys, focus groups and interviews to get a sense of the landscape of everyday living for an older adult. They’re also trying to figure out how the souped-up robo-cats would do those tasks, and then how they would communicate that information. They don’t think they want a talking cat, Littman said.

“Cats don’t generally talk to you,” Littman said, and it might be upsetting if it did.

They’re looking at whether the cat could move its head in a certain way to get across the message it’s trying to communicate, for example.

In the end, they hope that by creating an interaction in which the human is needed, they could even help stem feelings of loneliness, depression and anxiety.

“The cat doesn’t do things on its own. It needs the human, and the human gets something back,” Malle said. “That interaction is a huge step up. Loneliness and uselessness feelings are hugely problematic.”

http://www.njherald.com/article/20171219/AP/312199965

By Jeffrey Kluger

If you’re traveling to Mars, you’re going to have to bring a lot of essentials along — water, air, fuel, food. And, let’s be honest, you probably wouldn’t mind packing some beer too. A two-year journey — the minimum length of a Mars mission — is an awfully long time to go without one of our home planet’s signature pleasures.

Now, Anheuser-Busch InBev, the manufacturer of Budweiser, has announced that it wants to bring cosmic bar service a little closer to reality: On Dec. 4, the company plans to launch 20 barley seeds to space, aboard a SpaceX rocket making a cargo run to the International Space Station (ISS). Studying how barley — one of the basic ingredients in beer — germinates in microgravity will, the company hopes, teach scientists a lot about the practicality of building an extraterrestrial brewery.

“We want to be part of the collective dream to get to Mars,” said Budweiser vice president Ricardo Marques in an email to TIME. “While this may not be in the near future, we are starting that journey now so that when the dream of colonizing Mars becomes a reality, Budweiser will be there.”

Nice idea. But apart from inevitable issues concerning Mars rovers with designated drivers and who exactly is going to check your ID when you’re 100 million miles from home, Budweiser faces an even bigger question: Is beer brewing even possible in space? The answer: Maybe, but it wouldn’t be easy.

Start with that first step Budweiser is investigating: the business of growing the barley. In the U.S. alone, farmers harvest about 2.5 million acres of barley per year. The majority of that is used for animal feed, but about 45% of it is converted to malt, most of which is used in beer. Even the thirstiest American astronauts don’t need quite so much on tap, so start with something modest — say a 20-liter batch. That’s about 42 pints, which should get a crew of five through at least two or three Friday nights. But even that won’t be easy to make in space.

“If you want to make 20-liters of beer on Earth you’re going to need 100 to 200 square feet of land to grow the barley,” wrote Tristan Stephenson, author of The Curious Bartender series, in an email to TIME. “No doubt they would use hydroponics and probably be a bit more efficient in terms of rate of growth, but that’s a fair bit of valuable space on a space station…just for some beer.”

Still, let’s assume you’re on the station, you’ve grown the crops, and now it’s time to brew your first batch. To start with, the barley grains will have to go through the malting process, which means soaking them in water for two or three days, allowing them to germinate partway and then effectively killing them with heat. For that you need specialized equipment, which has to be carried to space and stored onboard. Every pound of orbital cargo can currently cost about $10,000, according to NASA, though competition from private industry is driving the price down. Still, shipping costs to space are never going to be cheap and it’s hard to justify any beer that winds up costing a couple hundred bucks a swallow.

The brewing process itself would present an entirely different set of problems — most involving gravity. On Earth, Stephenson says, “Brewers measure fermentation progress by assessing the ‘gravity’ (density) of the beer. The measurement is taken using a floating hydrometer. You’re not going to be doing that in space.”
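For context on why that reading matters: fermentation converts dense sugars into less-dense alcohol, so the drop in gravity tracks how much alcohol has been produced. A standard homebrewing approximation captures the relationship (the gravity values below are typical illustrative numbers, not Budweiser’s):

```python
def abv(original_gravity: float, final_gravity: float) -> float:
    """Approximate alcohol by volume from before/after hydrometer readings."""
    return (original_gravity - final_gravity) * 131.25

print(f"{abv(1.050, 1.010):.1f}% ABV")  # a typical lager-strength batch: ~5.3%
```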

The carbonation in the beer would be all wrong too, making the overall drink both unsightly and too frothy. “The bubbles won’t rise in zero-g,” says Stephenson. “Instead they’ll flocculate together into frogspawn style clumps.”

Dispersed or froggy, once the bubbles go down your gullet, they do your body no favors in space. The burp you emit after a beer on Earth seems like a bad thing, but only compared to the alternative — which happens a lot in zero-g, as gases don’t rise, but instead find their way deeper into your digestive tract.

The type of beer you could make in space is limited and pretty much excludes lagers — or cold-fermented beers. “Lager takes longer to make compared to most beers, because the yeast works at a lower temperature,” says Stephenson. “This is also the reason for the notable clarity of lager: longer fermentation means more yeast falls out of the solution, resulting in a clearer, cleaner looking beer. Emphasis on ‘falls’ — and stuff doesn’t fall in space.”

Finally, if Budweiser’s stated goal is to grow beer crops on Mars, they’re going about the experiment all wrong. Germinating your seeds in what is effectively the zero-g environment of the ISS is very different from germinating them on Mars, where the gravity is 40% that of Earth’s — weak by our standards, but still considerable for a growing plant. Budweiser and its partners acknowledge this possibility and argue that the very purpose of the experiment is to try to address the problem.

http://time.com/5039091/budweiser-beer-mars-space-station/

Thanks to Pete Cuomo for bringing this to the It’s Interesting community.