
by Emily Mullin

When David Graham wakes up in the morning, the flat white box that’s Velcroed to the wall of his room in Robbie’s Place, an assisted living facility in Marlborough, Massachusetts, begins recording his every movement.

It knows when he gets out of bed, gets dressed, walks to his window, or goes to the bathroom. It can tell if he’s sleeping or has fallen. It does this by using low-power wireless signals to map his gait speed, sleep patterns, location, and even breathing pattern. All that information gets uploaded to the cloud, where machine-learning algorithms find patterns in the thousands of movements he makes every day.

The rectangular boxes are part of an experiment to help researchers track and understand the symptoms of Alzheimer’s.

It’s not always obvious when patients are in the early stages of the disease. Alterations in the brain can cause subtle changes in behavior and sleep patterns years before people start experiencing confusion and memory loss. Researchers think artificial intelligence could recognize these changes early and identify patients at risk of developing the most severe forms of the disease.

Spotting the first indications of Alzheimer’s years before any obvious symptoms come on could help pinpoint people most likely to benefit from experimental drugs and allow family members to plan for eventual care. Devices equipped with such algorithms could be installed in people’s homes or in long-term care facilities to monitor those at risk. For patients who already have a diagnosis, such technology could help doctors make adjustments in their care.

Drug companies, too, are interested in using machine-learning algorithms, in their case to search through medical records for the patients most likely to benefit from experimental drugs. Once people are in a study, AI might be able to tell investigators whether the drug is addressing their symptoms.

Currently, there’s no easy way to diagnose Alzheimer’s. No single test exists, and brain scans alone can’t determine whether someone has the disease. Instead, physicians have to look at a variety of factors, including a patient’s medical history and observations reported by family members or health-care workers. So machine learning could pick up on patterns that otherwise would easily be missed.


Graham, unlike the four other patients with such devices in their rooms, hasn’t been diagnosed with Alzheimer’s. But researchers are monitoring his movements and comparing them with patterns seen in patients who doctors suspect have the disease.

Dina Katabi and her team at MIT’s Computer Science and Artificial Intelligence Laboratory initially developed the device as a fall detector for older people. But they soon realized it had far more uses. If it could pick up on a fall, they thought, it must also be able to recognize other movements, like pacing and wandering, which can be signs of Alzheimer’s.

Katabi says their intention was to monitor people without needing them to put on a wearable tracking device every day. “This is completely passive. A patient doesn’t need to put sensors on their body or do anything specific, and it’s far less intrusive than a video camera,” she says.

How it works

Graham hardly notices the white box hanging in his sunlit, tidy room. He’s most aware of it on days when Ipsit Vahia makes his rounds and tells him about the data it’s collecting. Vahia is a geriatric psychiatrist at McLean Hospital and Harvard Medical School, and he and the technology’s inventors at MIT are running a small pilot study of the device.

Graham looks forward to these visits. During a recent one, he was surprised when Vahia told him he was waking up at night. The device was able to detect it, though Graham didn’t know he was doing it.

The device’s wireless radio signal, only a thousandth as powerful as wi-fi, reflects off everything in a 30-foot radius, including human bodies. Every movement—even the slightest ones, like breathing—causes a change in the reflected signal.

Katabi and her team developed machine-learning algorithms that analyze all these minute reflections. They trained the system to recognize simple motions like walking and falling, and more complex movements like those associated with sleep disturbances. “As you teach it more and more, the machine learns, and the next time it sees a pattern, even if it’s too complex for a human to abstract that pattern, the machine recognizes that pattern,” Katabi says.
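
Katabi’s group hasn’t published its pipeline, but the training she describes boils down to supervised classification: slice the reflected signal into windows, compute features, and fit a model on labeled examples of each motion. Here is a minimal sketch in Python; the feature set, labels, and model choice are illustrative assumptions, not the lab’s actual code.

```python
# Illustrative sketch only: classify motion types (walking, falling, pacing)
# from windows of RF reflection data, in the spirit of the system described
# above. Features, labels, and model are assumptions, not the lab's code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, rate_hz=30, seconds=4):
    """Slice a 1-D reflection signal into fixed windows and compute simple
    per-window statistics plus the dominant frequency bin."""
    n = rate_hz * seconds
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    spectra = np.abs(np.fft.rfft(windows, axis=1))
    return np.column_stack([
        windows.mean(axis=1),
        windows.var(axis=1),
        spectra.argmax(axis=1),  # dominant frequency bin per window
    ])

rng = np.random.default_rng(0)
signal = rng.normal(size=30 * 4 * 600)   # stand-in for real reflection data
X = window_features(signal)
y = rng.integers(0, 3, size=len(X))      # stand-in labels: 0/1/2 = motions

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance on random data
```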

Over time, the device creates large readouts of data that show patterns of behavior. The AI is designed to pick out deviations from those patterns that might signify things like agitation, depression, and sleep disturbances. It could also pick up whether a person is repeating certain behaviors during the day. These are all classic symptoms of Alzheimer’s.

“If you can catch these deviations early, you will be able to anticipate them and help manage them,” Vahia says.

In a patient with an Alzheimer’s diagnosis, Vahia and Katabi were able to tell that she was waking up at 2 a.m. and wandering around her room. They also noticed that she would pace more after certain family members visited. After confirming that behavior with a nurse, Vahia adjusted the patient’s dose of a drug used to prevent agitation.


Brain changes

AI is also finding use in helping physicians detect early signs of Alzheimer’s in the brain and understand how those physical changes unfold in different people. “When a radiologist reads a scan, it’s impossible to tell whether a person will progress to Alzheimer’s disease,” says Pedro Rosa-Neto, a neurologist at McGill University in Montreal.

Rosa-Neto and his colleague Sulantha Mathotaarachchi developed an algorithm that analyzed hundreds of positron-emission tomography (PET) scans from people who had been deemed at risk of developing Alzheimer’s. From medical records, the researchers knew which of these patients had gone on to develop the disease within two years of a scan, but they wanted to see if the AI system could identify them just by picking up patterns in the images.

Sure enough, the algorithm spotted patterns in clumps of amyloid—a protein often associated with the disease—in certain regions of the brain, patterns so subtle that even trained radiologists would have had trouble noticing them on a scan. From those patterns, it predicted with 84 percent accuracy which patients would go on to develop Alzheimer’s.
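
Stripped of the imaging detail, the prediction task amounts to a binary classifier trained on regional amyloid-uptake features and scored against two-year outcomes. A toy sketch, with synthetic numbers standing in for the PET features (the feature layout and model choice are assumptions, not the study’s method):

```python
# Toy version of the prediction task described above: classify "developed
# Alzheimer's within two years of the scan" from regional amyloid-PET
# uptake values. Features, model, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_regions = 300, 60            # e.g., mean uptake per region
X = rng.normal(size=(n_patients, n_regions))
y = rng.integers(0, 2, size=n_patients)    # 1 = progressed to Alzheimer's

model = LogisticRegression(max_iter=1000)
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.2f}")  # the real study reports 84%
```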

Machine learning is also helping doctors predict the severity of the disease in different patients. Duke University physician and scientist P. Murali Doraiswamy is using machine learning to figure out what stage of the disease patients are in and whether their condition is likely to worsen.

“We’ve been seeing Alzheimer’s as a one-size-fits-all problem,” says Doraiswamy. But people with Alzheimer’s don’t all experience the same symptoms, and some might get worse faster than others. Doctors have no idea which patients will remain stable for a while or which will quickly get sicker. “So we thought maybe the best way to solve this problem was to let a machine do it,” he says.

He worked with Dragan Gamberger, an artificial-intelligence expert at the Rudjer Boskovic Institute in Croatia, to develop a machine-learning algorithm that sorted through brain scans and medical records from 562 patients who had mild cognitive impairment at the beginning of a five-year period.

Two distinct groups emerged: those whose cognition declined significantly and those whose symptoms changed little or not at all over the five years. The system was able to pick up changes in the loss of brain tissue over time.

A third group was somewhere in the middle, between mild cognitive impairment and advanced Alzheimer’s. “We don’t know why these clusters exist yet,” Doraiswamy says.
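
In machine-learning terms, what Doraiswamy and Gamberger describe is unsupervised clustering of patient trajectories. A schematic sketch, assuming a few per-patient features and three clusters to mirror the groups above; the study’s actual features and algorithm were richer:

```python
# Schematic of the clustering step: group patients by trajectory features
# such as baseline cognition, five-year score change, and brain-tissue
# loss on serial scans. k=3 mirrors the groups described in the text.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.normal(size=(562, 3))   # one row per patient, fake trajectory features

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} patients")
```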

Clinical trials

From 2002 to 2012, 99 percent of investigational Alzheimer’s drugs failed in clinical trials. One reason is that no one knows exactly what causes the disease. But another reason is that it is difficult to identify the patients most likely to benefit from specific drugs.

AI systems could help design better trials. “Once we have those people together with common genes, characteristics, and imaging scans, that’s going to make it much easier to test drugs,” says Marilyn Miller, who directs AI research in Alzheimer’s at the National Institute on Aging, part of the US National Institutes of Health.

Then, once patients are enrolled in a study, researchers could continuously monitor them to see if they’re benefiting from the medication.

“One of the biggest challenges in Alzheimer’s drug development is we haven’t had a good way of parsing out the right population to test the drug on,” says Vaibhav Narayan, a researcher on Johnson & Johnson’s neuroscience team.

He says machine-learning algorithms will greatly speed the process of recruiting patients for drug studies. And if AI can pick out which patients are most likely to get worse more quickly, it will be easier for investigators to tell if a drug is having any benefit.

That way, if doctors like Vahia notice signs of Alzheimer’s in a person like Graham, they can quickly get him signed up for a clinical trial in hopes of curbing the devastating effects that would otherwise come years later.

Miller thinks AI could be used to diagnose and predict Alzheimer’s in patients as soon as five years from now. But she says it’ll require a lot of data to make sure the algorithms are accurate and reliable. Graham, for one, is doing his part to help out.


by Helen Thomson

In March 2015, Li-Huei Tsai set up a tiny disco for some of the mice in her laboratory. For an hour each day, she placed them in a box lit only by a flickering strobe. The mice — which had been engineered to produce plaques of the peptide amyloid-β in the brain, a hallmark of Alzheimer’s disease — crawled about curiously. When Tsai later dissected them, those that had been to the mini dance parties had significantly lower levels of plaque than mice that had spent the same time in the dark.

Tsai, a neuroscientist at Massachusetts Institute of Technology (MIT) in Cambridge, says she checked the result; then checked it again. “For the longest time, I didn’t believe it,” she says. Her team had managed to clear amyloid from part of the brain with a flickering light. The strobe was tuned to 40 hertz and was designed to manipulate the rodents’ brainwaves, triggering a host of biological effects that eliminated the plaque-forming proteins. Although promising findings in mouse models of Alzheimer’s disease have been notoriously difficult to replicate in humans, the experiment offered some tantalizing possibilities. “The result was so mind-boggling and so robust, it took a while for the idea to sink in, but we knew we needed to work out a way of trying out the same thing in humans,” Tsai says.

Scientists identified the waves of electrical activity that constantly ripple through the brain almost 100 years ago, but they have struggled to assign these oscillations a definitive role in behaviour or brain function. Studies have strongly linked brainwaves to memory consolidation during sleep, and implicated them in processing sensory inputs and even coordinating consciousness. Yet not everyone is convinced that brainwaves are all that meaningful. “Right now we really don’t know what they do,” says Michael Shadlen, a neuroscientist at Columbia University in New York City.

Now, a growing body of evidence, including Tsai’s findings, hints at a meaningful connection to neurological disorders such as Alzheimer’s and Parkinson’s diseases. The work offers the possibility of forestalling or even reversing the damage caused by such conditions without using a drug. More than two dozen clinical trials are aiming to modulate brainwaves in some way — some with flickering lights or rhythmic sounds, but most through the direct application of electrical currents to the brain or scalp. They aim to treat everything from insomnia to schizophrenia and premenstrual dysphoric disorder.

Tsai’s study was the first glimpse of a cellular response to brainwave manipulation. “Her results were a really big surprise,” says Walter Koroshetz, director of the US National Institute of Neurological Disorders and Stroke in Bethesda, Maryland. “It’s a novel observation that would be really interesting to pursue.”

A powerful wave

Brainwaves were first noticed by German psychiatrist Hans Berger. In 1929, he published a paper describing the repeating waves of current he observed when he placed electrodes on people’s scalps. It was the world’s first electroencephalogram (EEG) recording — but nobody took much notice. Berger was a controversial figure who had spent much of his career trying to identify the physiological basis of psychic phenomena. It was only after his colleagues began to confirm the results several years later that Berger’s invention was recognized as a window into brain activity.

Neurons communicate using electrical impulses created by the flow of ions into and out of each cell. Although a single firing neuron cannot be picked up through the electrodes of an EEG, when a group of neurons fires again and again in synchrony, it shows up as oscillating electrical ripples that sweep through the brain.

Those of the highest frequency are gamma waves, which range from 25 to 140 hertz. People often show a lot of this kind of activity when they are at peak concentration. At the other end of the scale are delta waves, which have the lowest frequency — around 0.5 to 4 hertz. These tend to occur in deep sleep (see ‘Rhythms of the mind’).
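
For the concretely minded, these bands are usually quantified by estimating a power spectrum from an EEG trace and integrating it over each frequency range. A minimal sketch using SciPy, with a synthetic signal standing in for real EEG:

```python
# Minimal sketch: estimate how much EEG power falls in each canonical band
# (delta 0.5-4 Hz up to gamma 25-140 Hz) via Welch's power spectral density.
import numpy as np
from scipy.signal import welch

fs = 512                                  # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
# Stand-in "EEG": a 2 Hz delta rhythm plus a 40 Hz gamma rhythm plus noise.
eeg = (np.sin(2 * np.pi * 2 * t)
       + 0.3 * np.sin(2 * np.pi * 40 * t)
       + 0.5 * np.random.randn(len(t)))

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 25), "gamma": (25, 140)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band
    print(f"{name:>5}: {power:.3f}")
```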

At any point in time, one type of brainwave tends to dominate, although other bands are always present to some extent. Scientists have long wondered what purpose, if any, this hum of activity serves, and some clues have emerged over the past three decades. For instance, in 1994, discoveries in mice indicated that the distinct patterns of oscillatory activity during sleep mirrored those during a previous learning exercise. Scientists suggested that these waves could be helping to solidify memories.

Brainwaves also seem to influence conscious perception. Randolph Helfrich at the University of California, Berkeley, and his colleagues devised a way to enhance or reduce gamma oscillations of around 40 hertz using a non-invasive technique called transcranial alternating current stimulation (tACS). By tweaking these oscillations, they were able to influence whether a person perceived a video of moving dots as travelling vertically or horizontally.

The oscillations also provide a potential mechanism for how the brain creates a coherent experience from the chaotic symphony of stimuli hitting the senses at any one time, a puzzle known as the ‘binding problem’. By synchronizing the firing rates of neurons responding to the same event, brainwaves might ensure that all of the relevant information relating to one object arrives at the correct area of the brain at exactly the right time. Coordinating these signals is the key to perception, says Robert Knight, a cognitive neuroscientist at the University of California, Berkeley. “You can’t just pray that they will self-organize.”

Healthy oscillations

But these oscillations can become disrupted in certain disorders. In Parkinson’s disease, for example, the brain generally starts to show an increase in beta waves in the motor regions as body movement becomes impaired. In a healthy brain, beta waves are suppressed just before a body movement. But in Parkinson’s disease, neurons seem to get stuck in a synchronized pattern of activity. This leads to rigidity and movement difficulties. Peter Brown, who studies Parkinson’s disease at the University of Oxford, UK, says that current treatments for the symptoms of the disease — deep-brain stimulation and the drug levodopa — might work by reducing beta waves.

People with Alzheimer’s disease show a reduction in gamma oscillations. So Tsai and others wondered whether gamma-wave activity could be restored, and whether this would have any effect on the disease.

They started by using optogenetics, in which brain cells are engineered to respond directly to a flash of light. In 2009, Tsai’s team, in collaboration with Christopher Moore, also at MIT at the time, demonstrated for the first time that it is possible to use the technique to drive gamma oscillations in a specific part of the mouse brain.

Tsai and her colleagues subsequently found that tinkering with the oscillations sets in motion a host of biological events. It initiates changes in gene expression that cause microglia — immune cells in the brain — to change shape. The cells essentially go into scavenger mode, enabling them to better dispose of harmful clutter in the brain, such as amyloid-β. Koroshetz says that the link to neuroimmunity is new and striking. “The role of immune cells like microglia in the brain is incredibly important and poorly understood, and is one of the hottest areas for research now,” he says.

If the technique was to have any therapeutic relevance, however, Tsai and her colleagues had to find a less-invasive way of manipulating brainwaves. Flashing lights at specific frequencies has been shown to influence oscillations in some parts of the brain, so the researchers turned to strobe lights. They started by exposing young mice with a propensity for amyloid build-up to flickering LED lights for one hour. This created a drop in free-floating amyloid, but it was temporary, lasting less than 24 hours, and restricted to the visual cortex.

To achieve a longer-lasting effect on animals with amyloid plaques, they repeated the experiment for an hour a day over the course of a week, this time using older mice in which plaques had begun to form. Twenty-four hours after the end of the experiment, these animals showed a 67% reduction in plaque in the visual cortex compared with controls. The team also found that the technique reduced tau protein, another hallmark of Alzheimer’s disease.

Alzheimer’s plaques tend to have their earliest negative impacts on the hippocampus, however, not the visual cortex. To elicit oscillations where they are needed, Tsai and her colleagues are investigating other techniques. Playing rodents a 40-hertz noise, for example, seems to cause a decrease in amyloid in the hippocampus — perhaps because the hippocampus sits closer to the auditory cortex than to the visual cortex.

Tsai and her colleague Ed Boyden, a neuroscientist at MIT, have now formed a company, Cognito Therapeutics in Cambridge, to test similar treatments in humans. Last year, they started a safety trial, which involves testing a flickering light device, worn like a pair of glasses, on 12 people with Alzheimer’s.

Caveats abound. The mouse model of Alzheimer’s disease is not a perfect reflection of the disorder, and many therapies that have shown promise in rodents have failed in humans. “I used to tell people — if you’re going to get Alzheimer’s, first become a mouse,” says Thomas Insel, a neuroscientist and psychiatrist who led the US National Institute of Mental Health in Bethesda, Maryland, from 2002 until 2015.

Others are also looking to test how manipulating brainwaves might help people with Alzheimer’s disease. “We thought Tsai’s study was outstanding,” says Emiliano Santarnecchi at Harvard Medical School in Boston, Massachusetts. His team had already been using tACS to stimulate the brain, and he wondered whether it might elicit stronger effects than a flashing strobe. “This kind of stimulation can target areas of the brain more specifically than sensory stimulation can — after seeing Tsai’s results, it was a no-brainer that we should try it in Alzheimer’s patients.”

His team has begun an early clinical trial in which ten people with Alzheimer’s disease receive tACS for one hour daily for two weeks. A second trial, in collaboration with Boyden and Tsai, will look for signals of activated microglia and levels of tau protein. Results are expected from both trials by the end of the year.

Knight says that Tsai’s animal studies clearly show that oscillations have an effect on cellular metabolism — but whether the same effect will be seen in humans is another matter. “In the end, it’s data that will win out,” he says.

The studies may reveal risks, too. Gamma oscillations are the type most likely to induce seizures in people with photosensitive epilepsy, says Dora Hermes, a neuroscientist at Stanford University in California. She recalls a famous episode of a Japanese cartoon that featured flickering red and blue lights, which induced seizures in some viewers. “So many people watched that episode that there were almost 700 extra visits to the emergency department that day.”

A brain boost

Nevertheless, there is clearly a growing excitement around treating neurological diseases using neuromodulation, rather than pharmaceuticals. “There’s pretty good evidence that by changing neural-circuit activity we can get improvements in Parkinson’s, chronic pain, obsessive–compulsive disorder and depression,” says Insel. This is important, he says, because so far, pharmaceutical treatments for neurological disease have suffered from a lack of specificity. Koroshetz adds that funding institutes are eager for treatments that are innovative, non-invasive and quickly translatable to people.

Since publishing their mouse paper, Boyden says, he has had a deluge of requests from researchers wanting to use the same technique to treat other conditions. But there are a lot of details to work out. “We need to figure out what is the most effective, non-invasive way of manipulating oscillations in different parts of the brain,” he says. “Perhaps it is using light, but maybe it’s a smart pillow or a headband that could target these oscillations using electricity or sound.” One of the simplest methods that scientists have found is neurofeedback, which has shown some success in treating a range of conditions, including anxiety, depression and attention-deficit hyperactivity disorder. People who use this technique are taught to control their brainwaves by measuring them with an EEG and getting feedback in the form of visual or audio cues.
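
Conceptually, that feedback loop is simple: read a short window of EEG, estimate power in the band being trained, and cue the user when it crosses a target. A schematic sketch follows; the hardware read is mocked, and the band and threshold are arbitrary placeholders, not a clinical protocol:

```python
# Schematic neurofeedback loop: read a short EEG window, estimate power in
# the trained band, and cue the user when it crosses a target. The EEG
# read is mocked; the alpha band and threshold are arbitrary placeholders.
import numpy as np
from scipy.signal import welch

FS, BAND, THRESHOLD = 256, (8, 12), 1.0

def read_eeg_window(seconds=2):
    """Stand-in for an EEG-driver call returning the latest samples."""
    return np.random.randn(FS * seconds)

def band_power(x, lo, hi):
    freqs, psd = welch(x, fs=FS, nperseg=FS)
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

for _ in range(10):                        # ten feedback updates
    p = band_power(read_eeg_window(), *BAND)
    cue = "tone ON " if p > THRESHOLD else "tone off"
    print(f"{cue} (alpha power {p:.2f})")
```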

Phyllis Zee, a neurologist at Northwestern University in Chicago, Illinois, and her colleagues delivered pulses of ‘pink noise’ — audio frequencies that together sound a bit like a waterfall — to healthy older adults while they slept. They were particularly interested in eliciting the delta oscillations that characterize deep sleep. This aspect of sleep decreases with age, and is associated with a decreased ability to consolidate memories.

So far, her team has found that stimulation increased the amplitude of the slow waves, and was associated with a 25–30% improvement in recall of word pairs learnt the night before, compared with a fake treatment. Her team is midway through a clinical trial to see whether longer-term acoustic stimulation might help people with mild cognitive impairment.
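
Pink noise itself is easy to characterize: its power falls off in proportion to 1/f across frequencies. A toy sketch of synthesizing a short burst is below; timing each burst to the phase of the sleeper’s slow waves, as Zee’s protocol does, is a separate and harder problem:

```python
# Toy synthesis of a 50 ms burst of "pink" (1/f) noise: shape white noise
# in the frequency domain so power falls off as 1/f, then invert the FFT.
import numpy as np

fs, seconds = 44100, 0.05
n = int(fs * seconds)
white = np.fft.rfft(np.random.randn(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
scale = np.ones_like(freqs)
scale[1:] = 1 / np.sqrt(freqs[1:])   # amplitude ~ 1/sqrt(f), so power ~ 1/f
pink = np.fft.irfft(white * scale, n)
pink /= np.max(np.abs(pink))         # normalize to +/-1 for playback
print(f"{len(pink)} samples, peak {pink.max():.2f}")
```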

Although relatively safe, these kinds of technologies do have limitations. Neurofeedback is easy to learn, for instance, but it can take time to have an effect, and the results are often short-lived. In experiments that use magnetic or acoustic stimulation, it is difficult to know precisely what area of the brain is being affected. “The field of external brain stimulation is a little weak at the moment,” says Knight. Many approaches, he says, are open loop, meaning that they don’t track the effect of the modulation using an EEG. Closed loop, he says, would be more practical. Some experiments, such as Zee’s and those involving neurofeedback, already do this. “I think the field is turning a corner,” Knight says. “It’s attracting some serious research.”

In addition to potentially leading to treatments, these studies could break open the field of neural oscillations in general, helping to link them more firmly to behaviour and how the brain works as a whole.

Shadlen says he is open to the idea that oscillations play a part in human behaviour and consciousness. But for now, he remains unconvinced that they are directly responsible for these phenomena — referring to the many roles people ascribe to them as “magical incantations”. He says he fully accepts that these brain rhythms are signatures of important brain processes, “but to posit the idea that synchronous spikes of activity are meaningful, that by suddenly wiggling inputs at a specific frequency, it suddenly elevates activity onto our conscious awareness? That requires more explanation.”

Whatever their role, Tsai mostly wants to discipline brainwaves and harness them against disease. Cognito Therapeutics has just received approval for a second, larger trial, which will look at whether the therapy has any effect on Alzheimer’s disease symptoms. Meanwhile, Tsai’s team is focusing on understanding more about the downstream biological effects and how to better target the hippocampus with non-invasive technologies.

For Tsai, the work is personal. Her grandmother, who raised her, was affected by dementia. “Her confused face made a deep imprint in my mind,” Tsai says. “This is the biggest challenge of our lifetime, and I will give it all I have.”

Imagine a cat that can keep a person company, doesn’t need a litter box and can remind an aging relative to take her medicine or help find her eyeglasses.

That’s the vision of toymaker Hasbro and scientists at Brown University, who have received a three-year, $1 million grant from the National Science Foundation to find ways to add artificial intelligence to Hasbro’s “Joy for All” robotic cat.

The cat, which has been on the market for two years, is aimed at seniors and meant to act as a “companion.” It purrs and meows, and even appears to lick its paw and roll over to ask for a belly rub. The Brown-Hasbro project is aimed at developing additional capabilities for the cats to help older adults with simple tasks.

Researchers at Brown’s Humanity-Centered Robotics Initiative are working to determine which tasks make the most sense, and which can help older adults stay in their own homes longer, such as finding lost objects, or reminding the owner to call someone or go to a doctor’s appointment.

“It’s not going to iron and wash dishes,” said Bertram Malle, a professor of cognitive, linguistic and psychological sciences at Brown. “Nobody expects them to have a conversation. Nobody expects them to move around and fetch a newspaper. They’re really good at providing comfort.”

Malle said they don’t want to make overblown promises of what the cat can do, something he and his fellow researcher — computer science professor Michael Littman — said they’ve seen in other robots on the market. They hope to make a cat that would perform a small set of tasks very well.

They also want to keep it affordable, just a few hundred dollars. The current version costs $100.

They’ve given the project a name that gets at that idea: Affordable Robotic Intelligence for Elderly Support, or ARIES. The team includes researchers from Brown’s medical school, area hospitals and a designer at the University of Cincinnati.

It’s an idea that has appeal to Jeanne Elliott, whose 93-year-old mother, Mary Derr, lives with her in South Kingstown. Derr has mild dementia and the Joy for All cat Elliott purchased this year has become a true companion for Derr, keeping her company and soothing her while Elliott is at work. Derr treats it like a real cat, even though she knows it has batteries.

“Mom has a tendency to forget things,” she said, adding that a cat reminding her “we don’t have any appointments today, take your meds, be careful when you walk, things like that, be safe, reassuring things, to have that available during the day would be awesome.”

Diane Feeney Mahoney, a professor emerita at MGH Institute of Health Professions School of Nursing, who has studied technology for older people, said the project showed promise because of the team of researchers. She hopes they will involve people from the Alzheimer’s community, because “we just don’t want to push technology for technology’s sake.”

She called the cat a tool that could make things easier for someone caring for a person with middle-stage dementia, or to be used in nursing homes where pets are not allowed.

The scientists are embarking on surveys, focus groups and interviews to get a sense of the landscape of everyday living for an older adult. They’re also trying to figure out how the souped-up robo-cats would do those tasks, and then how they would communicate that information. They don’t think they want a talking cat, Littman said.

“Cats don’t generally talk to you,” Littman said, and it might be upsetting if it did.

They’re looking at whether the cat could move its head in a certain way to get across the message it’s trying to communicate, for example.

In the end, they hope that by creating an interaction in which the human is needed, they could even help stem feelings of loneliness, depression and anxiety.

“The cat doesn’t do things on its own. It needs the human, and the human gets something back,” Malle said. “That interaction is a huge step up. Loneliness and uselessness feelings are hugely problematic.”

by Jeffrey Kluger

If you’re traveling to Mars, you’re going to have to bring a lot of essentials along — water, air, fuel, food. And, let’s be honest, you probably wouldn’t mind packing some beer too. A two-year journey — the minimum length of a Mars mission — is an awfully long time to go without one of our home planet’s signature pleasures.

Now, Anheuser-Busch InBev, the manufacturer of Budweiser, has announced that it wants to bring cosmic bar service a little closer to reality: On Dec. 4, the company plans to launch 20 barley seeds to space, aboard a SpaceX rocket making a cargo run to the International Space Station (ISS). Studying how barley — one of the basic ingredients in beer — germinates in microgravity will, the company hopes, teach scientists a lot about the practicality of building an extraterrestrial brewery.

“We want to be part of the collective dream to get to Mars,” said Budweiser vice president Ricardo Marques in an email to TIME. “While this may not be in the near future, we are starting that journey now so that when the dream of colonizing Mars becomes a reality, Budweiser will be there.”

Nice idea. But apart from inevitable issues concerning Mars rovers with designated drivers and who exactly is going to check your ID when you’re 100 million miles from home, Budweiser faces an even bigger question: Is beer brewing even possible in space? The answer: Maybe, but it wouldn’t be easy.

Start with that first step Budweiser is investigating: the business of growing the barley. In the U.S. alone, farmers harvest about 2.5 million acres of barley per year. The majority of that is used for animal feed, but about 45% of it is converted to malt, most of which is used in beer. Even the thirstiest American astronauts don’t need quite so much on tap, so start with something modest — say a 20-liter batch. That’s about 42 pints, which should get a crew of five through at least two or three Friday nights. But even that won’t be easy to make in space.
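
The back-of-the-envelope numbers hold up, as a quick script shows (three pints per person per night is our assumption, not Budweiser’s):

```python
# Quick check of the arithmetic above: a 20-liter batch shared by a crew of
# five, assuming roughly three pints per person per "Friday night".
LITERS_PER_US_PINT = 0.473176
batch_liters, crew, pints_each = 20, 5, 3

pints = batch_liters / LITERS_PER_US_PINT
nights = pints / (crew * pints_each)
print(f"{pints:.0f} pints -> about {nights:.1f} nights")  # ~42 pints, ~2.8 nights
```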

“If you want to make 20 liters of beer on Earth you’re going to need 100 to 200 square feet of land to grow the barley,” wrote Tristan Stephenson, author of The Curious Bartender series, in an email to TIME. “No doubt they would use hydroponics and probably be a bit more efficient in terms of rate of growth, but that’s a fair bit of valuable space on a space station…just for some beer.”

Still, let’s assume you’re on the station, you’ve grown the crops, and now it’s time to brew your first batch. To start with, the barley grains will have to go through the malting process, which means soaking them in water for two or three days, allowing them to germinate partway and then effectively killing them with heat. For that you need specialized equipment, which has to be carried to space and stored onboard. Every pound of orbital cargo can currently cost about $10,000, according to NASA, though competition from private industry is driving the price down. Still, shipping costs to space are never going to be cheap and it’s hard to justify any beer that winds up costing a couple hundred bucks a swallow.

The brewing process itself would present an entirely different set of problems — most involving gravity. On Earth, Stephenson says, “Brewers measure fermentation progress by assessing the ‘gravity’ (density) of the beer. The measurement is taken using a floating hydrometer. You’re not going to be doing that in space.”
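
For reference, here is why brewers care about those gravity readings: the drop in density between the start and end of fermentation tracks how much sugar became alcohol, and a standard homebrewing approximation converts the difference straight to alcohol by volume:

```python
# Why the hydrometer matters: the fall in gravity (density) from original
# gravity (OG) to final gravity (FG) approximates alcohol by volume.
def abv(original_gravity: float, final_gravity: float) -> float:
    """Standard homebrewing approximation: ABV = (OG - FG) * 131.25."""
    return (original_gravity - final_gravity) * 131.25

print(f"{abv(1.050, 1.010):.1f}% ABV")  # a typical ale: about 5.3%
```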

The carbonation in the beer would be all wrong too, making the overall drink both unsightly and too frothy. “The bubbles won’t rise in zero-g,” says Stephenson. “Instead they’ll flocculate together into frogspawn style clumps.”

Dispersed or froggy, once the bubbles go down your gullet, they do your body no favors in space. The burp you emit after a beer on Earth seems like a bad thing, but only compared to the alternative — which happens a lot in zero-g, as gases don’t rise, but instead find their way deeper into your digestive tract.

The type of beer you could make in space is limited and pretty much excludes lagers — or cold-fermented beer. “Lager takes longer to make compared to most beers, because the yeast works at a lower temperature,” says Stephenson. “This is also the reason for the notable clarity of lager: longer fermentation means more yeast falls out of the solution, resulting in a clearer, cleaner looking beer. Emphasis on ‘falls’ — and stuff doesn’t fall in space.”

Finally, if Budweiser’s stated goal is to grow beer crops on Mars, they’re going about the experiment all wrong. Germinating your seeds in what is effectively the zero-g environment of the ISS is very different from germinating them on Mars, where the gravity is 40% that of Earth’s — weak by our standards, but still considerable for a growing plant. Budweiser and its partners acknowledge this possibility and argue that the very purpose of the experiment is to try to address the problem.

Thanks to Pete Cuomo for bringing this to the It’s Interesting community.

by John H. Richardson

In an ordinary hospital room in Los Angeles, a young woman named Lauren Dickerson waits for her chance to make history.

She’s 25 years old, a teacher’s assistant in a middle school, with warm eyes and computer cables emerging like futuristic dreadlocks from the bandages wrapped around her head. Three days earlier, a neurosurgeon drilled 11 holes through her skull, slid 11 wires the size of spaghetti into her brain, and connected the wires to a bank of computers. Now she’s caged in by bed rails, with plastic tubes snaking up her arm and medical monitors tracking her vital signs. She tries not to move.

The room is packed. As a film crew prepares to document the day’s events, two separate teams of specialists get ready to work—medical experts from an elite neuroscience center at the University of Southern California and scientists from a technology company called Kernel. The medical team is looking for a way to treat Dickerson’s seizures, which an elaborate regimen of epilepsy drugs controlled well enough until last year, when their effects began to dull. They’re going to use the wires to search Dickerson’s brain for the source of her seizures. The scientists from Kernel are there for a different reason: They work for Bryan Johnson, a 40-year-old tech entrepreneur who sold his business for $800 million and decided to pursue an insanely ambitious dream—he wants to take control of evolution and create a better human. He intends to do this by building a “neuroprosthesis,” a device that will allow us to learn faster, remember more, “coevolve” with artificial intelligence, unlock the secrets of telepathy, and maybe even connect into group minds. He’d also like to find a way to download skills such as martial arts, Matrix-style. And he wants to sell this invention at mass-market prices so it’s not an elite product for the rich.

Right now all he has is an algorithm on a hard drive. When he describes the neuroprosthesis to reporters and conference audiences, he often uses the media-friendly expression “a chip in the brain,” but he knows he’ll never sell a mass-market product that depends on drilling holes in people’s skulls. Instead, the algorithm will eventually connect to the brain through some variation of noninvasive interfaces being developed by scientists around the world, from tiny sensors that could be injected into the brain to genetically engineered neurons that can exchange data wirelessly with a hatlike receiver. All of these proposed interfaces are either pipe dreams or years in the future, so in the meantime he’s using the wires attached to Dickerson’s hippocampus to focus on an even bigger challenge: what you say to the brain once you’re connected to it.

That’s what the algorithm does. The wires embedded in Dickerson’s head will record the electrical signals that Dickerson’s neurons send to one another during a series of simple memory tests. The signals will then be uploaded onto a hard drive, where the algorithm will translate them into a digital code that can be analyzed and enhanced—or rewritten—with the goal of improving her memory. The algorithm will then translate the code back into electrical signals to be sent up into the brain. If it helps her spark a few images from the memories she was having when the data was gathered, the researchers will know the algorithm is working. Then they’ll try to do the same thing with memories that take place over a period of time, something nobody’s ever done before. If those two tests work, they’ll be on their way to deciphering the patterns and processes that create memories.
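
Kernel hasn’t published the algorithm, but the loop described above can be sketched schematically: record firing patterns during successful recall, fit a model mapping input patterns to the downstream codes that accompany strong memories, then use that model to generate stimulation patterns for weak trials. The stand-in below is deliberately simplified, with fake spike counts and a linear model in place of Berger-style nonlinear dynamics:

```python
# Deliberately simplified stand-in for the record -> translate -> write-back
# loop described above. Fake Poisson spike counts and a linear model stand
# in for real recordings and for Berger-style nonlinear dynamics.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)

# 1. Record: spike counts on input electrodes during successful recall.
X_input = rng.poisson(lam=2.0, size=(500, 32))   # 500 trials x 32 channels
# 2. The downstream "strong code" observed on those same trials (synthetic).
Y_output = X_input @ rng.normal(size=(32, 32)) + rng.normal(size=(500, 32))

# 3. Translate: fit the input -> output mapping.
mimo = Ridge(alpha=1.0).fit(X_input, Y_output)

# 4. Write back: for a new, weak input pattern, predict the output code
# that would be delivered as patterned electrical stimulation.
weak_trial = rng.poisson(lam=2.0, size=(1, 32))
stim_pattern = mimo.predict(weak_trial)
print(stim_pattern.shape)   # one 32-channel stimulation pattern
```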

Although other scientists are using similar techniques on simpler problems, Johnson is the only person trying to make a commercial neurological product that would enhance memory. In a few minutes, he’s going to conduct his first human test, which will also be the first human test of a commercial memory prosthesis. “It’s a historic day,” Johnson says. “I’m insanely excited about it.”

For the record, just in case this improbable experiment actually works, the date is January 30, 2017.

At this point, you may be wondering if Johnson’s just another fool with too much money and an impossible dream. I wondered the same thing the first time I met him. He seemed like any other California dude, dressed in the usual jeans, sneakers, and T-shirt, full of the usual boyish enthusiasms. His wild pronouncements about “reprogramming the operating system of the world” seemed downright goofy.

But you soon realize this casual style is either camouflage or wishful thinking. Like many successful people, some brilliant and some barely in touch with reality, Johnson has endless energy and the distributed intelligence of an octopus—one tentacle reaches for the phone, another for his laptop, a third scouts for the best escape route. When he starts talking about his neuroprosthesis, they team up and squeeze till you turn blue.

And there is that $800 million that PayPal shelled out for Braintree, the online-payment company Johnson started when he was 29 and sold when he was 36. And the $100 million he is investing in Kernel, the company he started to pursue this project. And the decades of animal tests to back up his sci-fi ambitions: Researchers have learned how to restore memories lost to brain damage, plant false memories, control the motions of animals through human thought, control appetite and aggression, induce sensations of pleasure and pain, even how to beam brain signals from one animal to another animal thousands of miles away.

And Johnson isn’t dreaming this dream alone—at this moment, Elon Musk and Mark Zuckerberg are weeks from announcing their own brain-hacking projects, the military research group known as Darpa already has 10 under way, and there’s no doubt that China and other countries are pursuing their own. But unlike Johnson, they’re not inviting reporters into any hospital rooms.

Here’s the gist of every public statement Musk has made about his project: (1) He wants to connect our brains to computers with a mysterious device called “neural lace.” (2) The name of the company he started to build it is Neuralink.

Thanks to a presentation at last spring’s F8 conference, we know a little more about what Zuckerberg is doing at Facebook: (1) The project was until recently overseen by Regina Dugan, a former director of Darpa and Google’s Advanced Technology group. (2) The team is working out of Building 8, Zuckerberg’s research lab for moon-shot projects. (3) They’re working on a noninvasive “brain–computer speech-to-text interface” that uses “optical imaging” to read the signals of neurons as they form words, find a way to translate those signals into code, and then send the code to a computer. (4) If it works, we’ll be able to “type” 100 words a minute just by thinking.

As for Darpa, we know that some of its projects are improvements on existing technology and some—such as an interface to make soldiers learn faster—sound just as futuristic as Johnson’s. But we don’t know much more than that. That leaves Johnson as our only guide, a job he says he’s taken on because he thinks the world needs to be prepared for what is coming.

All of these ambitious plans face the same obstacle, however: The brain has 86 billion neurons, and nobody understands how they all work. Scientists have made impressive progress uncovering, and even manipulating, the neural circuitry behind simple brain functions, but things such as imagination or creativity—and memory—are so complex that all the neuroscientists in the world may never solve them. That’s why a request for expert opinions on the viability of Johnson’s plans got this response from John Donoghue, the director of the Wyss Center for Bio and Neuroengineering in Geneva: “I’m cautious,” he said. “It’s as if I asked you to translate something from Swahili to Finnish. You’d be trying to go from one unknown language into another unknown language.” To make the challenge even more daunting, he added, all the tools used in brain research are as primitive as “a string between two paper cups.” So Johnson has no idea if 100 neurons or 100,000 or 10 billion control complex brain functions. On how most neurons work and what kind of codes they use to communicate, he’s closer to “Da-da” than “see Spot run.” And years or decades will pass before those mysteries are solved, if ever. To top it all off, he has no scientific background. Which puts his foot on the banana peel of a very old neuroscience joke: “If the brain was simple enough for us to understand, we’d be too stupid to understand it.”

I don’t need telepathy to know what you’re thinking now—there’s nothing more annoying than the big dreams of tech optimists. Their schemes for eternal life and floating libertarian nations are adolescent fantasies; their digital revolution seems to be destroying more jobs than it creates, and the fruits of their scientific fathers aren’t exactly encouraging either. “Coming soon, from the people who brought you nuclear weapons!”

But Johnson’s motives go to a deep and surprisingly tender place. Born into a devout Mormon community in Utah, he learned an elaborate set of rules that are still so vivid in his mind that he brought them up in the first minutes of our first meeting: “If you get baptized at the age of 8, point. If you get into the priesthood at the age of 12, point. If you avoid pornography, point. Avoid masturbation? Point. Go to church every Sunday? Point.” The reward for a high point score was heaven, where a dutiful Mormon would be reunited with his loved ones and gifted with endless creativity.

When he was 4, Johnson’s father left the church and divorced his mother. Johnson skips over the painful details, but his father told me his loss of faith led to a long stretch of drug and alcohol abuse, and his mother said she was so broke that she had to send Johnson to school in handmade clothes. His father remembers the letters Johnson started sending him when he was 11, a new one every week: “Always saying 100 different ways, ‘I love you, I need you.’ How he knew as a kid the one thing you don’t do with an addict or an alcoholic is tell them what a dirtbag they are, I’ll never know.”

Johnson was still a dutiful believer when he graduated from high school and went to Ecuador on his mission, the traditional Mormon rite of passage. He prayed constantly and gave hundreds of speeches about Joseph Smith, but he became more and more ashamed about trying to convert sick and hungry children with promises of a better life in heaven. Wouldn’t it be better to ease their suffering here on earth?

“Bryan came back a changed boy,” his father says.

Soon he had a new mission, self-assigned. His sister remembers his exact words: “He said he wanted to be a millionaire by the time he was 30 so he could use those resources to change the world.”

His first move was picking up a degree at Brigham Young University, selling cell phones to help pay the tuition and inhaling every book that seemed to promise a way forward. One that left a lasting impression was Endurance, the story of Ernest Shackleton’s botched journey to the South Pole—if sheer grit could get a man past so many hardships, he would put his faith in sheer grit. He married “a nice Mormon girl,” fathered three Mormon children, and took a job as a door-to-door salesman to support them. He won a prize for Salesman of the Year and started a series of businesses that went broke—which convinced him to get a business degree at the University of Chicago.

When he graduated in 2008, he stayed in Chicago and started Braintree, perfecting his image as a world-beating Mormon entrepreneur. By that time, his father was sober and openly sharing his struggles, and Johnson was the one hiding his dying faith behind a very well-protected wall. He couldn’t sleep, ate like a wolf, and suffered intense headaches, fighting back with a long series of futile cures: antidepressants, biofeedback, an energy healer, even blind obedience to the rules of his church.

In 2012, at the age of 35, Johnson hit bottom. In his misery, he remembered Shackleton and seized a final hope—maybe he could find an answer by putting himself through a painful ordeal. He planned a trip to Mount Kilimanjaro, and on the second day of the climb he got a stomach virus. On the third day he got altitude sickness. When he finally made it to the peak, he collapsed in tears and then had to be carried down on a stretcher. It was time to reprogram his operating system.

The way Johnson tells it, he started by dropping the world-beater pose that hid his weakness and doubt. And although this may all sound a bit like a dramatic motivational talk at a TED conference, especially since Johnson still projects the image of a world-beating entrepreneur, this much is certain: During the following 18 months, he divorced his wife, sold Braintree, and severed his last ties to the church. To cushion the impact on his children, he bought a house nearby and visited them almost daily. He knew he was repeating his father’s mistakes but saw no other option—he was either going to die inside or start living the life he always wanted.

He started with the pledge he made when he came back from Ecuador, experimenting first with a good-government initiative in Washington and pivoting, after its inevitable doom, to a venture fund for “quantum leap” companies inventing futuristic products such as human-organ-mimicking silicon chips. But even if all his quantum leaps landed, they wouldn’t change the operating system of the world.

Finally, the Big Idea hit: If the root problems of humanity begin in the human mind, let’s change our minds.

Fantastic things were happening in neuroscience. Some of them sounded just like miracles from the Bible—with prosthetic legs controlled by thought and microchips connected to the visual cortex, scientists were learning to help the lame walk and the blind see. At the University of Toronto, a neurosurgeon named Andres Lozano slowed, and in some cases reversed, the cognitive declines of Alzheimer’s patients using deep brain stimulation. At a hospital in upstate New York, a neurotechnologist named Gerwin Schalk asked computer engineers to record the firing patterns of the auditory neurons of people listening to Pink Floyd. When the engineers turned those patterns back into sound waves, they produced a recording that sounded almost exactly like “Another Brick in the Wall.” At the University of Washington, two professors in different buildings played a videogame together, one wearing an electroencephalography cap that read his brain signals, the other a device that delivered magnetic pulses to his brain—when the first professor thought about firing digital bullets, the second felt an impulse to push the Fire button.

Johnson also heard about a biomedical engineer named Theodore Berger. During nearly 20 years of research, Berger and his collaborators at USC and Wake Forest University developed a neuroprosthesis to improve memory in rats. It didn’t look like much when he started testing it in 2002—just a slice of rat brain and a computer chip. But the chip held an algorithm that could translate the firing patterns of neurons into a kind of Morse code that corresponded with actual memories. Nobody had ever done that before, and some people found the very idea offensive—it’s so deflating to think of our most precious thoughts reduced to ones and zeros. Prominent medical ethicists accused Berger of tampering with the essence of identity. But the implications were huge: If Berger could turn the language of the brain into code, perhaps he could figure out how to fix the part of the code associated with neurological diseases.

In rats, as in humans, firing patterns in the hippocampus generate a signal or code that, somehow, the brain recognizes as a long-term memory. Berger trained a group of rats to perform a task and studied the codes that formed. He learned that rats remembered a task better when their neurons sent “strong code,” a term he explains by comparing it to a radio signal: At low volume you don’t hear all of the words, but at high volume everything comes through clear. He then studied the difference in the codes generated by the rats when they remembered to do something correctly and when they forgot. In 2011, through a breakthrough experiment conducted on rats trained to push a lever, he demonstrated he could record the initial memory codes, feed them into an algorithm, and then send stronger codes back into the rats’ brains. When he finished, the rats that had forgotten how to push the lever suddenly remembered.

Five years later, Berger was still looking for the support he needed for human trials. That’s when Johnson showed up. In August 2016, he announced he would pledge $100 million of his fortune to create Kernel and that Berger would join the company as chief science officer. After learning about USC’s plans to implant wires in Dickerson’s brain to battle her epilepsy, Johnson approached Charles Liu, the head of the prestigious neurorestoration division at the USC School of Medicine and the lead doctor on Dickerson’s trial. Johnson asked him for permission to test the algorithm on Dickerson while she had Liu’s wires in her hippocampus—in between Liu’s own work sessions, of course. As it happened, Liu had dreamed about expanding human powers with technology ever since he got obsessed with The Six Million Dollar Man as a kid. He helped Johnson get Dickerson’s consent and convinced USC’s institutional research board to approve the experiment. At the end of 2016, Johnson got the green light. He was ready to start his first human trial.

In the hospital room, Dickerson is waiting for the experiments to begin, and I ask her how she feels about being a human lab rat.

“If I’m going to be here,” she says, “I might as well do something useful.”

Useful? This starry-eyed dream of cyborg supermen? “You know he’s trying to make humans smarter, right?”

“Isn’t that cool?” she answers.

Over by the computers, I ask one of the scientists about the multicolored grid on the screen. “Each one of these squares is an electrode that’s in her brain,” one says. Every time a neuron close to one of the wires in Dickerson’s brain fires, he explains, a pink line will jump in the relevant box.

Johnson’s team is going to start with simple memory tests. “You’re going to be shown words,” the scientist explains to her. “Then there will be some math problems to make sure you’re not rehearsing the words in your mind. Try to remember as many words as you can.”

One of the scientists hands Dickerson a computer tablet, and everyone goes quiet. Dickerson stares at the screen to take in the words. A few minutes later, after the math problem scrambles her mind, she tries to remember what she’d read. “Smoke … egg … mud … pearl.”

Next, they try something much harder, a group of memories in a sequence. As one of Kernel’s scientists explains to me, they can only gather so much data from wires connected to 30 or 40 neurons. A single face shouldn’t be too hard, but getting enough data to reproduce memories that stretch out like a scene in a movie is probably impossible.

Sitting by the side of Dickerson’s bed, a Kernel scientist takes on the challenge. “Could you tell me the last time you went to a restaurant?”

“It was probably five or six days ago,” Dickerson says. “I went to a Mexican restaurant in Mission Hills. We had a bunch of chips and salsa.”

He presses for more. As she dredges up other memories, another Kernel scientist hands me a pair of headphones connected to the computer bank. All I hear at first is a hissing sound. After 20 or 30 seconds go by I hear a pop.

“That’s a neuron firing,” he says.

As Dickerson continues, I listen to the mysterious language of the brain, the little pops that move our legs and trigger our dreams. She remembers a trip to Costco and the last time it rained, and I hear the sounds of Costco and rain.

When Dickerson’s eyelids start sinking, the medical team says she’s had enough and Johnson’s people start packing up. Over the next few days, their algorithm will turn Dickerson’s synaptic activity into code. If the codes they send back into Dickerson’s brain make her think of dipping a few chips in salsa, Johnson might be one step closer to reprogramming the operating system of the world.

But look, there’s another banana peel—after two days of frantic coding, Johnson’s team returns to the hospital to send the new code into Dickerson’s brain. Just when he gets word that they can get an early start, a message arrives: It’s over. The experiment has been placed on “administrative hold.” The only reason USC would give in the aftermath was an issue between Johnson and Berger. Berger would later tell me he had no idea the experiment was under way and that Johnson rushed into it without his permission. Johnson said he is mystified by Berger’s accusations. “I don’t know how he could not have known about it. We were working with his whole lab, with his whole team.” The one thing they both agree on is that their relationship fell apart shortly afterward, with Berger leaving the company and taking his algorithm with him. He blames the break entirely on Johnson. “Like most investors, he wanted a high rate of return as soon as possible. He didn’t realize he’d have to wait seven or eight years to get FDA approval—I would have thought he would have looked that up.” But Johnson didn’t want to slow down. He had bigger plans, and he was in a hurry.

Eight months later, I go back to California to see where Johnson has ended up. He seems a little more relaxed. On the whiteboard behind his desk at Kernel’s new offices in Los Angeles, someone’s scrawled a playlist of songs in big letters. “That was my son,” he says. “He interned here this summer.” Johnson is a year into a romance with Taryn Southern, a charismatic 31-year-old performer and film producer. And since his break with Berger, Johnson has tripled Kernel’s staff—he’s up to 36 employees now—adding experts in fields like chip design and computational neuroscience. His new science adviser is Ed Boyden, the director of MIT’s Synthetic Neurobiology Group and a superstar in the neuroscience world. Down in the basement of the new office building, there’s a Dr. Frankenstein lab where scientists build prototypes and try them out on glass heads.

When the moment seems right, I bring up the purpose of my visit. “You said you had something to show me?”

Johnson hesitates. I’ve already promised not to reveal certain sensitive details, but now I have to promise again. Then he hands me two small plastic display cases. Inside, two pairs of delicate twisty wires rest on beds of foam rubber. They look scientific but also weirdly biological, like the antennae of some futuristic bug-bot.

I’m looking at the prototypes for Johnson’s brand-new neuromodulator. On one level, it’s just a much smaller version of the deep brain stimulators and other neuromodulators currently on the market. But unlike a typical stimulator, which just fires pulses of electricity, Johnson’s is designed to read the signals that neurons send to other neurons—and not just the 100 neurons the best of the current tools can harvest, but perhaps many more. That would be a huge advance in itself, but the implications are even bigger: With Johnson’s neuromodulator, scientists could collect brain data from thousands of patients, with the goal of writing precise codes to treat a variety of neurological diseases.

In the short term, Johnson hopes his neuromodulator will help him “optimize the gold rush” in neurotechnology—financial analysts are forecasting a $27 billion market for neural devices within six years, and countries around the world are committing billions to the escalating race to decode the brain. In the long term, Johnson believes his signal-reading neuromodulator will advance his bigger plans in two ways: (1) by giving neuroscientists a vast new trove of data they can use to decode the workings of the brain and (2) by generating the huge profits Kernel needs to launch a steady stream of innovative and profitable neural tools, keeping the company both solvent and plugged into every new neuroscience breakthrough. With those two achievements in place, Johnson can watch and wait until neuroscience reaches the level of sophistication he needs to jump-start human evolution with a mind-enhancing neuroprosthesis.

Liu, the neurologist with the Six Million Dollar Man dreams, compares Johnson’s ambition to flying. “Going back to Icarus, human beings have always wanted to fly. We don’t grow wings, so we build a plane. And very often these solutions will have even greater capabilities than the ones nature created—no bird ever flew to Mars.” But now that humanity is learning how to reengineer its own capabilities, we really can choose how we evolve. “We have to wrap our minds around that. It’s the most revolutionary thing in the world.”

The crucial ingredient, in Liu’s view, is the profit motive, which drives rapid innovation in science. That’s why he thinks Johnson could be the one to give us wings. “I’ve never met anyone with his urgency to take this to market,” he says.

When will this revolution arrive? “Sooner than you think,” Liu says.

Now we’re back where we began. Is Johnson a fool? Is he just wasting his time and fortune on a crazy dream? One thing is certain: Johnson will never stop trying to optimize the world. At the pristine modern house he rents in Venice Beach, he pours out idea after idea. He even takes skepticism as helpful information—when I tell him his magic neuroprosthesis sounds like another version of the Mormon heaven, he’s delighted.

“Good point! I love it!”

He never has enough data. He even tries to suck up mine. What are my goals? My regrets? My pleasures? My doubts?

Every so often, he pauses to examine my “constraint program.”

“One, you have this biological disposition of curiosity. You want data. And when you consume that data, you apply boundaries of meaning-making.”

“Are you trying to hack me?” I ask.

Not at all, he says. He just wants us to share our algorithms. “That’s the fun in life,” he says, “this endless unraveling of the puzzle. And I think, ‘What if we could make the data transfer rate a thousand times faster? What if my consciousness is only seeing a fraction of reality? What kind of stories would we tell?’ ”

In his free time, Johnson is writing a book about taking control of human evolution and looking on the bright side of our mutant humanoid future. He brings this up every time I talk to him. For a long time I lumped this in with his dreamy ideas about reprogramming the operating system of the world: The future is coming faster than anyone thinks, our glorious digital future is calling, the singularity is so damn near that we should be cheering already—a spiel that always makes me want to hit him with a copy of the Unabomber Manifesto.

But his urgency today sounds different, so I press him on it: “How would you respond to Ted Kaczynski’s fears? The argument that technology is a cancerlike development that’s going to eat itself?”

“I would say he’s potentially on the wrong side of history.”

“Yeah? What about climate change?”

“That’s why I feel so driven,” he answers. “We’re in a race against time.”

He asks me for my opinion. I tell him I think he’ll still be working on cyborg brainiacs when the starving hordes of a ravaged planet destroy his lab looking for food—and for the first time, he reveals the distress behind his hope. The truth is, he has the same fear. The world has gotten way too complex, he says. The financial system is shaky, the population is aging, robots want our jobs, artificial intelligence is catching up, and climate change is coming fast. “It just feels out of control,” he says.

He’s invoked these dystopian ideas before, but only as a prelude to his sales pitch. This time he’s closer to pleading. “Why wouldn’t we embrace our own self-directed evolution? Why wouldn’t we just do everything we can to adapt faster?”

I turn to a more cheerful topic. If he ever does make a neuroprosthesis to revolutionize how we use our brain, which superpower would he give us first? Telepathy? Group minds? Instant kung fu?

He answers without hesitation. Because our thinking is so constrained by the familiar, he says, we can’t imagine a new world that isn’t just another version of the world we know. But we have to imagine something far better than that. So he’d try to make us more creative—that would put a new frame on everything.

Ambition like that can take you a long way. It can drive you to try to reach the South Pole when everyone says it’s impossible. It can take you up Mount Kilimanjaro when you’re close to dying and help you build an $800 million company by the time you’re 36. And Johnson’s ambitions drive straight for the heart of humanity’s most ancient dream: For operating system, substitute enlightenment.

By hacking our brains, he wants to make us one with everything.

by Ryan Browne

America’s second-highest ranking military officer, Gen. Paul Selva, advocated Tuesday for “keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.”

Selva was responding to a question from Sen. Gary Peters, a Michigan Democrat, about his views on a Department of Defense directive that requires a human operator to be kept in the decision-making process when it comes to the taking of human life by autonomous weapons systems.

Peters said the restriction was “due to expire later this year.”

“I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life,” Selva told the Senate Armed Services Committee during a confirmation hearing on his reappointment as vice chairman of the Joint Chiefs of Staff, a hearing that covered a wide range of topics, including North Korea, Iran and defense budget issues.

He predicted that “there will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action,” but added that he was “an advocate for keeping that restriction.”

Selva said humans needed to remain in the decision-making process “because we take our values to war.” He pointed to the laws of war and the need to consider issues like proportional and discriminate action against an enemy, something he suggested could only be done by a human.

His comments come as the US military has sought increasingly autonomous weapons systems.

In July 2015, a group of concerned scientists, researchers and academics, including theoretical physicist Stephen Hawking and billionaire entrepreneur Elon Musk, argued against the development of autonomous weapons systems. They warned of an artificial intelligence arms race and called for a “ban on offensive autonomous weapons beyond meaningful human control.”

But Peters warned that America’s adversaries may be less hesitant to adopt such lethal technology.

“Our adversaries often do not consider the same moral and ethical issues that we consider each and every day,” the senator told Selva.

Selva acknowledged the possibility of US adversaries developing such technology, but said the decision not to pursue it for the US military “doesn’t mean that we don’t have to address the development of those kinds of technologies and potentially find their vulnerabilities and exploit those vulnerabilities.”