Holographic brain stimulation can now fool us into thinking we are experiencing something real.

What if we could edit the sensations we feel; paste in our brain pictures that we never saw, cut out unwanted pain or insert non-existent scents into memory?

UC Berkeley neuroscientists are building the equipment to do just that, using holographic projection into the brain to activate or suppress dozens, and ultimately thousands, of neurons at once, hundreds of times each second. By copying real patterns of brain activity, they aim to fool the brain into thinking it has felt, seen or sensed something.

The goal is to read neural activity constantly and decide, based on that activity, which sets of neurons to activate to simulate the pattern and rhythm of an actual brain response. That could mean replacing lost sensations after peripheral nerve damage, for example, or controlling a prosthetic limb.

“This has great potential for neural prostheses, since it has the precision needed for the brain to interpret the pattern of activation. If you can read and write the language of the brain, you can speak to it in its own language and it can interpret the message much better,” said Alan Mardinly, a postdoctoral fellow in the UC Berkeley lab of Hillel Adesnik, an assistant professor of molecular and cell biology. “This is one of the first steps in a long road to develop a technology that could be a virtual brain implant with additional senses or enhanced senses.”

Mardinly is one of three first authors of a paper appearing online April 30 in advance of publication in the journal Nature Neuroscience that describes the holographic brain modulator, which can activate up to 50 neurons at once in a three-dimensional chunk of brain containing several thousand neurons, and repeat that up to 300 times a second with different sets of 50 neurons.
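Those figures imply a simple throughput budget, which a quick back-of-envelope calculation using the article's own numbers makes concrete:

```python
# Back-of-envelope throughput implied by the reported specifications:
# up to 50 neurons per holographic pattern, up to 300 patterns per second,
# inside a volume containing roughly 2,000-3,000 neurons.
neurons_per_pattern = 50
patterns_per_second = 300
volume_neurons = 2500          # midpoint of the 2,000-3,000 range

activations_per_second = neurons_per_pattern * patterns_per_second
fraction_per_flash = neurons_per_pattern / volume_neurons

print(activations_per_second)  # 15000 single-neuron activations per second
print(fraction_per_flash)      # 0.02, i.e. each pattern touches ~2% of the volume
```

So even at full speed, each flash addresses only a small fraction of the imaged volume, which is why different sets of 50 neurons are cycled through on successive patterns.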

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute, who was not involved in the research project. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

Holographic projection

Each of the 2,000 to 3,000 neurons in the chunk of brain was outfitted with a protein that, when hit by a flash of light, turns the cell on to create a brief spike of activity. One of the key breakthroughs was finding a way to target each cell individually without hitting its neighbors at the same time.

To focus the light onto just the cell body — a target smaller than the width of a human hair — of nearly all cells in a chunk of brain, they turned to computer-generated holography, a method of bending and focusing light to form a three-dimensional spatial pattern. The effect is as if a 3D image were floating in space.

In this case, the holographic image was projected into a thin layer of brain tissue at the surface of the cortex, about a tenth of a millimeter thick, through a clear window into the brain.

“The major advance is the ability to control neurons precisely in space and time,” said postdoc Nicolas Pégard, another first author who works both in Adesnik’s lab and the lab of co-author Laura Waller, an associate professor of electrical engineering and computer sciences. “In other words, to shoot the very specific sets of neurons you want to activate and do it at the characteristic scale and the speed at which they normally work.”

The researchers have already tested the prototype in the touch, vision and motor areas of the brains of mice as they walk on a treadmill with their heads immobilized. While they have not noted any behavior changes in the mice when their brain is stimulated, Mardinly said that their brain activity — which is measured in real-time with two-photon imaging of calcium levels in the neurons — shows patterns similar to a response to a sensory stimulus. They’re now training mice so they can detect behavior changes after stimulation.

Prosthetics and brain implants

The area of the brain covered — now a slice one-half millimeter square and one-tenth of a millimeter thick — can be scaled up to read from and write to more neurons in the brain’s outer layer, or cortex, Pégard said. And the laser holography setup could eventually be miniaturized to fit in a backpack a person could haul around.

Mardinly, Pégard and the other first author, postdoc Ian Oldenburg, constructed the holographic brain modulator by making technological advances in a number of areas. Mardinly and Oldenburg, together with Savitha Sridharan, a research associate in the lab, developed better optogenetic switches to insert into cells to turn them on and off. The switches — light-activated ion channels on the cell surface that open briefly when triggered — turn on strongly and then quickly shut off, all in about 3 milliseconds, so they’re ready to be re-stimulated up to 50 or more times per second, consistent with normal firing rates in the cortex.

Pégard developed the holographic projection system using a liquid crystal screen that acts like a holographic negative to sculpt the light from 40W lasers into the desired 3D pattern. The lasers are pulsed in 300 femtosecond-long bursts every microsecond. He, Mardinly, Oldenburg and their colleagues published a paper last year describing the device, which they call 3D-SHOT, for three-dimensional scanless holographic optogenetics with temporal focusing.

“This is the culmination of technologies that researchers have been working on for a while, but have been impossible to put together,” Mardinly said. “We solved numerous technical problems at the same time to bring it all together and finally realize the potential of this technology.”

As they improve their technology, they plan to start capturing real patterns of activity in the cortex in order to learn how to reproduce sensations and perceptions to play back through their holographic system.

Reference:
Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., . . . Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature Neuroscience. doi:10.1038/s41593-018-0139-8

https://www.technologynetworks.com/neuroscience/news/using-holography-to-activate-the-brain-300329

DNA can be used to store almost limitless amounts of data in almost no space

In the age of big data, we are quickly producing far more digital information than we can possibly store. Last year, $20 billion was spent on new data centers in the US alone, doubling the capital expenditure on data center infrastructure from 2016. And even with skyrocketing investment in data storage, corporations and the public sector are falling behind.

But there’s hope.

With a nascent technology leveraging DNA for data storage, this may soon become a problem of the past. By encoding bits of data into tiny molecules of DNA, researchers and companies like Microsoft hope to fit entire data centers in a few flasks of DNA by the end of the decade.

But let’s back up.

Backdrop

After the 20th century, we graduated from magnetic tape, floppy disks, and CDs to sophisticated semiconductor memory chips capable of holding data in countless tiny transistors. In keeping with Moore’s Law, we’ve seen an exponential increase in the storage capacity of silicon chips. At the same time, however, the rate at which humanity produces new digital information is exploding. The size of the global datasphere is increasing exponentially, predicted to reach 160 zettabytes (160 trillion gigabytes) by 2025. As of 2016, digital users produced over 44 billion gigabytes of data per day. By 2025, the International Data Corporation (IDC) estimates this figure will surpass 460 billion. And with private sector efforts to improve global connectivity—such as OneWeb and Google’s Project Loon—we’re about to see an influx of data from five billion new minds.

By 2020, three billion new minds are predicted to join the web. With private sector efforts, this number could reach five billion. While companies and services are profiting enormously from this influx, it’s extremely costly to build data centers at the rate needed. At present, about $50 million worth of new data center construction is required just to keep up, not to mention millions in furnishings, equipment, power, and cooling. Moreover, memory-grade silicon is rarely found pure in nature, and researchers predict it will run out by 2040.

Take DNA, on the other hand. At its theoretical limit, we could fit 215 million gigabytes of data in a single gram of DNA.

But how?

Crash Course

DNA is a double helix whose strands are chains of four nucleotide bases—adenine (A), thymine (T), cytosine (C), and guanine (G). Once formed, these chains fold tightly into extremely dense, space-saving data stores. To encode data files into these bases, we can use algorithms that convert binary into base nucleotides—0s and 1s into A, T, C, and G. “00” might be encoded as A, “01” as G, “10” as C, and “11” as T, for instance. Once encoded, the information is stored by synthesizing DNA with those specific base patterns, and the resulting sequences are kept in vials with an extraordinary shelf life. To retrieve the data, the encoded DNA can be read using any number of sequencing technologies, such as Oxford Nanopore’s portable MinION.
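The 2-bits-per-base mapping described above is easy to sketch in code. This toy codec is an illustration only: real DNA-storage pipelines (including Microsoft's) add error correction, addressing, and sequence constraints such as avoiding long runs of the same base, all of which are omitted here.

```python
# Toy 2-bits-per-base codec for the mapping described above:
# "00" -> A, "01" -> G, "10" -> C, "11" -> T.
# Real schemes add error-correcting codes, address tags, and biochemical
# constraints (e.g. no long homopolymer runs); this sketch omits all of that.
ENCODE = {"00": "A", "01": "G", "10": "C", "11": "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Turn raw bytes into a strand string, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna for a byte-aligned strand."""
    bits = "".join(DECODE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"DNA"
strand = bytes_to_dna(message)
print(strand)                           # 12 bases: 4 bases per byte
assert dna_to_bytes(strand) == message  # lossless round trip
```

Four bases encode one byte, so a 3-byte message becomes a 12-base strand.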

Still in its deceptive growth phase, DNA data storage—or NAM (nucleic acid memory)—is only beginning to approach the knee of its exponential growth curve. But while the process remains costly and slow, several players are beginning to crack its greatest challenge: retrieval. Just as you might click on a specific file or search for a term on your desktop, random access across large data stores has become a top priority for scientists at Microsoft Research and the University of Washington.

Storing over 400 megabytes of DNA-encoded data, U Washington’s DNA storage system now offers random access across all of it with no bit errors.

Applications

Even before we guarantee random access for data retrieval, DNA data storage has immediate market applications. According to IDC’s Age 2025 study (Figure 5 (PDF)), a huge proportion of enterprise data goes straight to an archive. Over time, the majority of stored data becomes only potentially critical, making it less of a target for immediate retrieval.

Particularly for storing past legal documents, medical records, and other archive data, why waste precious computing power, infrastructure, and overhead?

Data-encoded DNA can last 10,000 years—guaranteed—in cold, dark, and dry conditions at a fraction of the storage cost.

Now that we can easily use natural enzymes to replicate DNA, companies have tons to gain (literally) by using DNA as a backup system—duplicating files for later retrieval and risk mitigation.

And as retrieval algorithms and biochemical technologies improve, random access across data-encoded DNA may become as easy as clicking a file on your desktop.

Meanwhile, researchers are already investigating the potential of molecular computing, completely devoid of silicon and electronics.

Harvard professor George Church and his lab, for instance, envision capturing data directly in DNA. As Church has stated, “I’m interested in making biological cameras that don’t have any electronic or mechanical components,” whereby information “goes straight into DNA.” According to Church, DNA recorders would capture audiovisual data automatically. “You could paint it up on walls, and if anything interesting happens, just scrape a little bit off and read it—it’s not that far off.” One day, we may even be able to record biological events in the body. In pursuit of this end, Church’s lab is working to develop an in vivo DNA recorder of neural activity, skipping electrodes entirely.

Perhaps the most ultra-compact, long-lasting, and universal storage mechanism at our fingertips, DNA offers us unprecedented applications in data storage—perhaps even computing.

Potential

As DNA data storage plummets in tech costs and rises in speed, commercial user interfaces will become both critical and wildly profitable. Once corporations, startups, and people alike can easily save files, images or even neural activity to DNA, opportunities for disruption abound. Imagine uploading files to the cloud, which travel to encrypted DNA vials, as opposed to massive and inefficient silicon-enabled data centers. Corporations could have their own warehouses and local data networks could allow for heightened cybersecurity—particularly for archives.

And since DNA lasts millennia without maintenance, forget the need to copy databases and power digital archives. As long as we’re human, regardless of technological advances and changes, DNA will always be relevant and readable for generations to come.

But perhaps the most exciting potential of DNA is its portability. If we were to send a single exabyte of data (one billion gigabytes) to Mars using silicon binary media, it would take five Falcon Heavy rockets and cost $486 million in freight alone.

With DNA, we would need five cubic centimeters.
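That claim can be roughly sanity-checked against the 215-million-gigabytes-per-gram figure quoted earlier. The density value below is a round-number assumption (dry DNA is on the order of 1 g/cm³), not a figure from the article:

```python
# Sanity check on the "five cubic centimeters" claim, using the
# 215 million GB/gram theoretical limit quoted earlier in the article.
# The density is a rough assumption for this sketch, not a sourced figure.
exabyte_gb = 1e9             # 1 EB = one billion gigabytes
gb_per_gram = 215e6          # theoretical limit per gram of DNA
density_g_per_cm3 = 1.0      # assumed order-of-magnitude density of dry DNA

grams = exabyte_gb / gb_per_gram
volume_cm3 = grams / density_g_per_cm3

print(round(grams, 2))       # 4.65 grams of DNA for one exabyte
print(round(volume_cm3, 2))  # ~4.65 cm^3, consistent with "five cubic centimeters"
```

Under these assumptions the exabyte fits in under five grams of material, which is where the few-cubic-centimeters figure comes from.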

At scale, DNA has the true potential to dematerialize entire space colonies worth of data. Throughout evolution, DNA has unlocked extraordinary possibilities—from humans to bacteria. Soon hosting limitless data in almost zero space, it may one day unlock many more.

A Data Storage Revolution? DNA Can Store Near Limitless Data in Almost Zero Space

Small tooth sensor can track what we eat

by Vanessa Zainzinger

Wireless sensors are ubiquitous, providing a steady stream of information on anything from our physical activity to changes occurring in the world’s oceans. Now, scientists have developed a tiny form of the data-gathering tool, designed for an area that has so far escaped its reach: our teeth.

The 2-millimeter-by-2-millimeter devices are made up of a film of polymers that detects chemicals in its environment. Sandwiched between two square-shaped gold rings that act as antennas, the sensor can transmit information on what’s going on—or what’s being chewed on—in our mouth to a digital device, such as a smartphone. The type of compound the inner layer detects—salt, for example, or ethanol—determines the spectrum and intensity of the radio-frequency waves that the sensor transmits. Because the sensor uses the ambient radio-frequency signals that are already around us, it doesn’t need a power supply.

The researchers tested their invention on people drinking alcohol, gargling mouthwash, or eating soup. In each case, the sensor was able to detect what the person was consuming by picking up on nutrients.

The devices could help health care and clinical researchers find links between dietary intake and health and, in the long run, allow each of us to keep track of how what we consume is affecting our bodies.

http://www.sciencemag.org/news/2018/03/tiny-sensor-your-tooth-could-help-keep-you-healthy

Computer system transcribes words that people speak silently inside their heads, by monitoring the automatic muscular movements of subvocalization in the face.


Arnav Kapur, a researcher in the Fluid Interfaces group at the MIT Media Lab, demonstrates the AlterEgo project. Image: Lorrie Lejeune/MIT

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The researchers describe their device in a paper they presented at the Association for Computing Machinery’s ACM Intelligent User Interface conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate major in electrical engineering and computer science.

Subtle signals

The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or “subvocalization,” as it’s known.

But subvocalization as a computer interface is largely unexplored. The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.
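The article doesn't say how the seven electrode locations were scored, but one common approach is to rank electrodes by how well each one separates the word classes, for example by the ratio of between-word to within-word signal variance. The sketch below illustrates that idea on synthetic data; the criterion, sizes, and noise model are all illustrative assumptions, not the MIT team's actual method:

```python
import numpy as np

# Illustrative electrode ranking: score each of 16 electrodes by how well
# its signal separates subvocalized words (between-word variance relative
# to within-word variance across repeats), then keep the top 7.
rng = np.random.default_rng(0)
n_words, n_repeats, n_electrodes = 5, 4, 16

# Synthetic data: word-specific offsets appear on the first 7 electrodes only,
# so a good criterion should recover (mostly) electrodes 0-6.
offsets = np.zeros((n_words, n_electrodes))
offsets[:, :7] = rng.normal(scale=5.0, size=(n_words, 7))
signals = offsets[:, None, :] + rng.normal(size=(n_words, n_repeats, n_electrodes))

within = signals.var(axis=1).mean(axis=0)    # noise across repeats of a word
between = signals.mean(axis=1).var(axis=0)   # spread of word means per electrode
scores = between / within

top7 = {int(i) for i in np.argsort(scores)[-7:]}
print(sorted(top7))  # mostly the informative electrodes 0-6
```

With real recordings, the same ranking could be computed per subject and session before fixing the wearable's contact points.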

But in current experiments, the researchers are getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.

Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies — about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.

Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, and so on up the stack. The output of the final layer yields the result of some classification task.
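A minimal version of such a layered classifier, with illustrative layer sizes rather than the ones MIT used, might look like this:

```python
import numpy as np

# Minimal feedforward classifier of the kind described: data enters the
# bottom layer, each layer transforms it and passes it upward, and the top
# layer yields a class label. All sizes here are illustrative assumptions
# (e.g. 16 electrode features in, a ~20-word vocabulary out).
rng = np.random.default_rng(42)
layer_sizes = [16, 32, 32, 20]

weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass one feature vector up through the layers; return a word index."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(0.0, x @ w + b)       # hidden layers with ReLU
    logits = x @ weights[-1] + biases[-1]    # final layer: one score per word
    return int(np.argmax(logits))

signal_features = rng.normal(size=16)        # stand-in for one EMG sample
print(forward(signal_features))              # index into the 20-word vocabulary
```

An untrained net like this just outputs arbitrary labels; training adjusts the weights so that signals recorded during a given subvocalized word map to that word's index.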

The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
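That per-user customization step can be sketched as freezing a pretrained encoder and running gradient descent only on the top two weight matrices. Everything below (sizes, squared-error loss, learning rate, synthetic calibration data) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

# Sketch of per-user customization: the lower layers (a generic signal
# encoder) stay frozen, and a small amount of one user's calibration data
# adjusts only the last two layers. Toy sizes: 16 features, 5-word vocab.
rng = np.random.default_rng(1)

W_frozen = rng.normal(scale=0.3, size=(16, 32))  # pretrained, never updated
W1 = rng.normal(scale=0.3, size=(32, 32))        # retrained per user
W2 = rng.normal(scale=0.3, size=(32, 5))         # retrained per user

def features(x):
    """Frozen encoder: fixed nonlinear projection of the raw signals."""
    return np.maximum(0.0, x @ W_frozen)

X = rng.normal(size=(40, 16))                    # one user's calibration signals
teacher = rng.normal(size=(32, 5))               # synthetic labeling rule
y = np.argmax(features(X) @ teacher, axis=1)     # "which word was subvocalized"
Y = np.eye(5)[y]                                 # one-hot targets

lr, losses = 0.05, []
for _ in range(200):                             # gradient steps on W1, W2 only
    Phi = features(X)
    H = np.maximum(0.0, Phi @ W1)
    P = H @ W2
    losses.append(float(((P - Y) ** 2).mean()))
    dP = 2 * (P - Y) / Y.size
    dH = (dP @ W2.T) * (H > 0)                   # backprop stops at the frozen encoder
    W2 -= lr * H.T @ dP
    W1 -= lr * Phi.T @ dH

print(f"calibration loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because gradients never reach `W_frozen`, the few minutes of calibration data only have to fit the small top of the network, which is what makes a 15-minute personalization session plausible.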

Practical matters

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. “We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.”

“I think that they’re a little underselling what I think is a real potential for the work,” says Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”

“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent-speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”

Elon Musk Worries That AI Research Will Create an ‘Immortal Dictator’

By Brandon Specktor

Imagine your least-favorite world leader. (Take as much time as you need.)

Now, imagine if that person wasn’t a human, but a network of millions of computers around the world. This digi-dictator has instant access to every scrap of recorded information about every person who’s ever lived. It can make millions of calculations in a fraction of a second, controls the world’s economy and weapons systems with godlike autonomy and — scariest of all — can never, ever die.

This unkillable digital dictator, according to Tesla and SpaceX founder Elon Musk, is one of the darker scenarios awaiting humankind’s future if artificial-intelligence research continues without serious regulation.

“We are rapidly headed toward digital superintelligence that far exceeds any human, I think it’s pretty obvious,” Musk said in a new AI documentary called “Do You Trust This Computer?” directed by Chris Paine (who interviewed Musk previously for the documentary “Who Killed The Electric Car?”). “If one company or a small group of people manages to develop godlike digital super-intelligence, they could take over the world.”

Humans have tried to take over the world before. However, an authoritarian AI would have one terrible advantage over like-minded humans, Musk said.

“At least when there’s an evil dictator, that human is going to die,” Musk added. “But for an AI there would be no death. It would live forever, and then you’d have an immortal dictator, from which we could never escape.”

And, this hypothetical AI-dictator wouldn’t even have to be evil to pose a threat to humans, Musk added. All it has to be is determined.

“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings,” Musk said. “It’s just like, if we’re building a road, and an anthill happens to be in the way. We don’t hate ants, we’re just building a road. So, goodbye, anthill.”

Those who follow news from the Musk-verse will not be surprised by his opinions in the new documentary; the tech mogul has long been a vocal critic of unchecked artificial intelligence. In 2014, Musk called AI humanity’s “biggest existential threat,” and in 2015, he joined a handful of other tech luminaries and researchers, including Stephen Hawking, to urge the United Nations to ban killer robots. He has said unregulated AI poses “vastly more risk than North Korea” and proposed starting some sort of federal oversight program to monitor the technology’s growth.

“Public risks require public oversight,” he tweeted. “Getting rid of the FAA [wouldn’t] make flying safer. They’re there for good reason.”

https://www.livescience.com/62239-elon-musk-immortal-artificial-intelligence-dictator.html?utm_source=notification

Deep image reconstruction now allows computers to read our minds

Imagine a reality where computers can visualize what you are thinking.

Sound far out? It’s now closer to becoming a reality thanks to four scientists at Kyoto University in Kyoto, Japan. In late December, Guohua Shen, Tomoyasu Horikawa, Kei Majima and Yukiyasu Kamitani released the results of their recent research on using artificial intelligence to decode thoughts on the scientific preprint platform bioRxiv.


Machine learning has previously been used to study brain scans (MRIs, or magnetic resonance imaging) and generate visualizations of what a person is thinking when viewing simple, binary images like black and white letters or simple geometric shapes.

But the scientists from Kyoto developed new techniques of “decoding” thoughts using deep neural networks (artificial intelligence). The new technique allows the scientists to decode more sophisticated “hierarchical” images, which have multiple layers of color and structure, like a picture of a bird or a man wearing a cowboy hat, for example.

“We have been studying methods to reconstruct or recreate an image a person is seeing just by looking at the person’s brain activity,” Kamitani, one of the scientists, tells CNBC Make It. “Our previous method was to assume that an image consists of pixels or simple shapes. But it’s known that our brain processes visual information hierarchically extracting different levels of features or components of different complexities.”

And the new AI research allows computers to detect objects, not just binary pixels. “These neural networks or AI model can be used as a proxy for the hierarchical structure of the human brain,” Kamitani says.

For the research, over the course of 10 months, three subjects were shown natural images (like photographs of a bird or a person), artificial geometric shapes and alphabetical letters for varying lengths of time.

In some instances, brain activity was measured while a subject was looking at one of 25 images. In other cases, it was logged afterward, when subjects were asked to think of the image they were previously shown.

Once the brain activity was scanned, a computer reverse-engineered (or “decoded”) the information to generate visualizations of a subject’s thoughts.

The flowchart, embedded below, is made by the research team at the Kamitani Lab at Kyoto University and breaks down the science of how a visualization is “decoded.”

The two charts embedded below show the results the computer reconstructed for subjects whose activity was logged while they were looking at natural images and images of letters.

As for the subjects whose brain waves were measured based on remembering the images, the scientists had another breakthrough.

“Unlike previous methods, we were able to reconstruct visual imagery a person produced by just thinking of some remembered images,” Kamitani says.

As seen in the chart embedded below, when decoding brain signals resulting from a subject remembering images, the AI system had a harder time reconstructing. That’s because it’s more difficult for a human to remember an image of a cheetah or a fish exactly as it was seen.

“The brain is less activated” in that scenario, Kamitani explains to CNBC Make It.

As the accuracy of the technology continues to improve, the potential applications are mind-boggling. The visualization technology would allow you to draw pictures or make art simply by imagining something; your dreams could be visualized by a computer; the hallucinations of psychiatric patients could be visualized aiding in their care; and brain-machine interfaces may one day allow communication with imagery or thoughts, Kamitani tells CNBC Make It.

While the idea of computers reading your brain may sound positively Jetson-esque, the Japanese researchers aren’t alone in their futuristic work to connect the brain with computing power.

For example, former GoogleX-er Mary Lou Jepsen is working to build a hat that will make telepathy possible within the decade, and entrepreneur Bryan Johnson is working to build computer chips to implant in the brain to improve neurological functions.

https://www.cnbc.com/2018/01/08/japanese-scientists-use-artificial-intelligence-to-decode-thoughts.html

Inside the Race to Hack the Human Brain

by John H. Richardson

In an ordinary hospital room in Los Angeles, a young woman named Lauren Dickerson waits for her chance to make history.

She’s 25 years old, a teacher’s assistant in a middle school, with warm eyes and computer cables emerging like futuristic dreadlocks from the bandages wrapped around her head. Three days earlier, a neurosurgeon drilled 11 holes through her skull, slid 11 wires the size of spaghetti into her brain, and connected the wires to a bank of computers. Now she’s caged in by bed rails, with plastic tubes snaking up her arm and medical monitors tracking her vital signs. She tries not to move.

The room is packed. As a film crew prepares to document the day’s events, two separate teams of specialists get ready to work—medical experts from an elite neuroscience center at the University of Southern California and scientists from a technology company called Kernel. The medical team is looking for a way to treat Dickerson’s seizures, which an elaborate regimen of epilepsy drugs controlled well enough until last year, when their effects began to dull. They’re going to use the wires to search Dickerson’s brain for the source of her seizures. The scientists from Kernel are there for a different reason: They work for Bryan Johnson, a 40-year-old tech entrepreneur who sold his business for $800 million and decided to pursue an insanely ambitious dream—he wants to take control of evolution and create a better human. He intends to do this by building a “neuroprosthesis,” a device that will allow us to learn faster, remember more, “coevolve” with artificial intelligence, unlock the secrets of telepathy, and maybe even connect into group minds. He’d also like to find a way to download skills such as martial arts, Matrix-style. And he wants to sell this invention at mass-market prices so it’s not an elite product for the rich.

Right now all he has is an algorithm on a hard drive. When he describes the neuroprosthesis to reporters and conference audiences, he often uses the media-friendly expression “a chip in the brain,” but he knows he’ll never sell a mass-market product that depends on drilling holes in people’s skulls. Instead, the algorithm will eventually connect to the brain through some variation of noninvasive interfaces being developed by scientists around the world, from tiny sensors that could be injected into the brain to genetically engineered neurons that can exchange data wirelessly with a hatlike receiver. All of these proposed interfaces are either pipe dreams or years in the future, so in the meantime he’s using the wires attached to Dickerson’s hippocampus to focus on an even bigger challenge: what you say to the brain once you’re connected to it.

That’s what the algorithm does. The wires embedded in Dickerson’s head will record the electrical signals that Dickerson’s neurons send to one another during a series of simple memory tests. The signals will then be uploaded onto a hard drive, where the algorithm will translate them into a digital code that can be analyzed and enhanced—or rewritten—with the goal of improving her memory. The algorithm will then translate the code back into electrical signals to be sent up into the brain. If it helps her spark a few images from the memories she was having when the data was gathered, the researchers will know the algorithm is working. Then they’ll try to do the same thing with memories that take place over a period of time, something nobody’s ever done before. If those two tests work, they’ll be on their way to deciphering the patterns and processes that create memories.
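The record-translate-rewrite loop described above can be sketched, in heavily simplified form, as a pipeline. Every function name and number here is hypothetical; the real signal processing is proprietary and far more sophisticated than the per-electrode firing rates used in this toy version:

```python
import numpy as np

def record_spikes(n_electrodes=40, n_samples=1000, seed=0):
    """Stand-in for raw electrode recordings: one row per electrode."""
    rng = np.random.default_rng(seed)
    return rng.poisson(lam=0.1, size=(n_electrodes, n_samples))

def encode(spikes):
    """Translate firing patterns into a compact digital code.
    Here: simple per-electrode firing rates (a real system would
    model temporal structure, not just rates)."""
    return spikes.mean(axis=1)

def enhance(code, gain=1.5):
    """'Rewrite' the code -- e.g., strengthen a weak memory trace."""
    return code * gain

def decode_to_stimulation(code, max_rate=5.0):
    """Map the enhanced code back to per-electrode stimulation
    intensities, clipped to an arbitrary safe maximum."""
    return np.clip(code, 0.0, max_rate)

spikes = record_spikes()
stim = decode_to_stimulation(enhance(encode(spikes)))
print(stim.shape)  # one stimulation value per electrode
```

The essential idea is the round trip: neural activity in, digital code out, modified code back in as stimulation.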

Although other scientists are using similar techniques on simpler problems, Johnson is the only person trying to make a commercial neurological product that would enhance memory. In a few minutes, he’s going to conduct the first human test of a commercial memory prosthesis. “It’s a historic day,” Johnson says. “I’m insanely excited about it.”

For the record, just in case this improbable experiment actually works, the date is January 30, 2017.

At this point, you may be wondering if Johnson’s just another fool with too much money and an impossible dream. I wondered the same thing the first time I met him. He seemed like any other California dude, dressed in the usual jeans, sneakers, and T-shirt, full of the usual boyish enthusiasms. His wild pronouncements about “reprogramming the operating system of the world” seemed downright goofy.

But you soon realize this casual style is either camouflage or wishful thinking. Like many successful people, some brilliant and some barely in touch with reality, Johnson has endless energy and the distributed intelligence of an octopus—one tentacle reaches for the phone, another for his laptop, a third scouts for the best escape route. When he starts talking about his neuroprosthesis, they team up and squeeze till you turn blue.

And there is that $800 million that PayPal shelled out for Braintree, the online-payment company Johnson started when he was 29 and sold when he was 36. And the $100 million he is investing in Kernel, the company he started to pursue this project. And the decades of animal tests to back up his sci-fi ambitions: Researchers have learned how to restore memories lost to brain damage, plant false memories, control the motions of animals through human thought, control appetite and aggression, induce sensations of pleasure and pain, even how to beam brain signals from one animal to another animal thousands of miles away.

And Johnson isn’t dreaming this dream alone—at this moment, Elon Musk and Mark Zuckerberg are weeks from announcing their own brain-hacking projects, the military research group known as Darpa already has 10 under way, and there’s no doubt that China and other countries are pursuing their own. But unlike Johnson, they’re not inviting reporters into any hospital rooms.

Here’s the gist of every public statement Musk has made about his project: (1) He wants to connect our brains to computers with a mysterious device called “neural lace.” (2) The name of the company he started to build it is Neuralink.

Thanks to a presentation at last spring’s F8 conference, we know a little more about what Zuckerberg is doing at Facebook: (1) The project was until recently overseen by Regina Dugan, a former director of Darpa and Google’s Advanced Technology group. (2) The team is working out of Building 8, Zuckerberg’s research lab for moon-shot projects. (3) They’re working on a noninvasive “brain–computer speech-to-text interface” that uses “optical imaging” to read the signals of neurons as they form words, find a way to translate those signals into code, and then send the code to a computer. (4) If it works, we’ll be able to “type” 100 words a minute just by thinking.

As for Darpa, we know that some of its projects are improvements on existing technology and some—such as an interface to make soldiers learn faster—sound just as futuristic as Johnson’s. But we don’t know much more than that. That leaves Johnson as our only guide, a job he says he’s taken on because he thinks the world needs to be prepared for what is coming.

All of these ambitious plans face the same obstacle, however: The brain has 86 billion neurons, and nobody understands how they all work. Scientists have made impressive progress uncovering, and even manipulating, the neural circuitry behind simple brain functions, but things such as imagination or creativity—and memory—are so complex that all the neuroscientists in the world may never solve them. That’s why a request for expert opinions on the viability of Johnson’s plans got this response from John Donoghue, the director of the Wyss Center for Bio and Neuroengineering in Geneva: “I’m cautious,” he said. “It’s as if I asked you to translate something from Swahili to Finnish. You’d be trying to go from one unknown language into another unknown language.” To make the challenge even more daunting, he added, all the tools used in brain research are as primitive as “a string between two paper cups.” So Johnson has no idea if 100 neurons or 100,000 or 10 billion control complex brain functions. On how most neurons work and what kind of codes they use to communicate, he’s closer to “Da-da” than “see Spot run.” And years or decades will pass before those mysteries are solved, if ever. To top it all off, he has no scientific background. Which puts his foot on the banana peel of a very old neuroscience joke: “If the brain was simple enough for us to understand, we’d be too stupid to understand it.”

I don’t need telepathy to know what you’re thinking now—there’s nothing more annoying than the big dreams of tech optimists. Their schemes for eternal life and floating libertarian nations are adolescent fantasies; their digital revolution seems to be destroying more jobs than it creates, and the fruits of their scientific forefathers aren’t exactly encouraging either. “Coming soon, from the people who brought you nuclear weapons!”

But Johnson’s motives go to a deep and surprisingly tender place. Born into a devout Mormon community in Utah, he learned an elaborate set of rules that are still so vivid in his mind that he brought them up in the first minutes of our first meeting: “If you get baptized at the age of 8, point. If you get into the priesthood at the age of 12, point. If you avoid pornography, point. Avoid masturbation? Point. Go to church every Sunday? Point.” The reward for a high point score was heaven, where a dutiful Mormon would be reunited with his loved ones and gifted with endless creativity.

When he was 4, Johnson’s father left the church and divorced his mother. Johnson skips over the painful details, but his father told me his loss of faith led to a long stretch of drug and alcohol abuse, and his mother said she was so broke that she had to send Johnson to school in handmade clothes. His father remembers the letters Johnson started sending him when he was 11, a new one every week: “Always saying 100 different ways, ‘I love you, I need you.’ How he knew as a kid the one thing you don’t do with an addict or an alcoholic is tell them what a dirtbag they are, I’ll never know.”

Johnson was still a dutiful believer when he graduated from high school and went to Ecuador on his mission, the traditional Mormon rite of passage. He prayed constantly and gave hundreds of speeches about Joseph Smith, but he became more and more ashamed about trying to convert sick and hungry children with promises of a better life in heaven. Wouldn’t it be better to ease their suffering here on earth?

“Bryan came back a changed boy,” his father says.

Soon he had a new mission, self-assigned. His sister remembers his exact words: “He said he wanted to be a millionaire by the time he was 30 so he could use those resources to change the world.”

His first move was picking up a degree at Brigham Young University, selling cell phones to help pay the tuition and inhaling every book that seemed to promise a way forward. One that left a lasting impression was Endurance, the story of Ernest Shackleton’s botched journey to the South Pole—if sheer grit could get a man past so many hardships, he would put his faith in sheer grit. He married “a nice Mormon girl,” fathered three Mormon children, and took a job as a door-to-door salesman to support them. He won a prize for Salesman of the Year and started a series of businesses that went broke—which convinced him to get a business degree at the University of Chicago.

When he graduated in 2008, he stayed in Chicago and started Braintree, perfecting his image as a world-beating Mormon entrepreneur. By that time, his father was sober and openly sharing his struggles, and Johnson was the one hiding his dying faith behind a very well-protected wall. He couldn’t sleep, ate like a wolf, and suffered intense headaches, fighting back with a long series of futile cures: antidepressants, biofeedback, an energy healer, even blind obedience to the rules of his church.

In 2012, at the age of 35, Johnson hit bottom. In his misery, he remembered Shackleton and seized a final hope—maybe he could find an answer by putting himself through a painful ordeal. He planned a trip to Mount Kilimanjaro, and on the second day of the climb he got a stomach virus. On the third day he got altitude sickness. When he finally made it to the peak, he collapsed in tears and then had to be carried down on a stretcher. It was time to reprogram his operating system.

The way Johnson tells it, he started by dropping the world-beater pose that hid his weakness and doubt. And although this may all sound a bit like a dramatic motivational talk at a TED conference, especially since Johnson still projects the image of a world-beating entrepreneur, this much is certain: During the following 18 months, he divorced his wife, sold Braintree, and severed his last ties to the church. To cushion the impact on his children, he bought a house nearby and visited them almost daily. He knew he was repeating his father’s mistakes but saw no other option—he was either going to die inside or start living the life he always wanted.

He started with the pledge he made when he came back from Ecuador, experimenting first with a good-government initiative in Washington and pivoting, after its inevitable doom, to a venture fund for “quantum leap” companies inventing futuristic products such as human-organ-mimicking silicon chips. But even if all his quantum leaps landed, they wouldn’t change the operating system of the world.

Finally, the Big Idea hit: If the root problems of humanity begin in the human mind, let’s change our minds.

Fantastic things were happening in neuroscience. Some of them sounded just like miracles from the Bible—with prosthetic legs controlled by thought and microchips connected to the visual cortex, scientists were learning to help the lame walk and the blind see. At the University of Toronto, a neurosurgeon named Andres Lozano slowed, and in some cases reversed, the cognitive declines of Alzheimer’s patients using deep brain stimulation. At a hospital in upstate New York, a neurotechnologist named Gerwin Schalk asked computer engineers to record the firing patterns of the auditory neurons of people listening to Pink Floyd. When the engineers turned those patterns back into sound waves, they produced a sound that was almost exactly like “Another Brick in the Wall.” At the University of Washington, two professors in different buildings played a videogame together with the help of electroencephalography caps that fired off electrical pulses—when one professor thought about firing digital bullets, the other one felt an impulse to push the Fire button.

Johnson also heard about a biomedical engineer named Theodore Berger. During nearly 20 years of research, Berger and his collaborators at USC and Wake Forest University developed a neuroprosthesis to improve memory in rats. It didn’t look like much when he started testing it in 2002—just a slice of rat brain and a computer chip. But the chip held an algorithm that could translate the firing patterns of neurons into a kind of Morse code that corresponded with actual memories. Nobody had ever done that before, and some people found the very idea offensive—it’s so deflating to think of our most precious thoughts reduced to ones and zeros. Prominent medical ethicists accused Berger of tampering with the essence of identity. But the implications were huge: If Berger could turn the language of the brain into code, perhaps he could figure out how to fix the part of the code associated with neurological diseases.

In rats, as in humans, firing patterns in the hippocampus generate a signal or code that, somehow, the brain recognizes as a long-term memory. Berger trained a group of rats to perform a task and studied the codes that formed. He learned that rats remembered a task better when their neurons sent “strong code,” a term he explains by comparing it to a radio signal: At low volume you don’t hear all of the words, but at high volume everything comes through clear. He then studied the difference in the codes generated by the rats when they remembered to do something correctly and when they forgot. In 2011, through a breakthrough experiment conducted on rats trained to push a lever, he demonstrated he could record the initial memory codes, feed them into an algorithm, and then send stronger codes back into the rats’ brains. When he finished, the rats that had forgotten how to push the lever suddenly remembered.
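Berger’s radio-volume analogy above can be illustrated with a toy calculation. This is not his model—the actual work used multi-input multi-output models of hippocampal firing—just a generic sketch of why a “stronger” code survives background noise better. All numbers are fabricated:

```python
import numpy as np

rng = np.random.default_rng(42)

template = rng.normal(size=100)          # the "memory code" pattern
noise = rng.normal(scale=2.0, size=100)  # background neural noise

def recall_score(gain):
    """Correlation between the noisy, replayed code and the original
    template -- a crude stand-in for how recognizable the memory is."""
    replayed = gain * template + noise
    return float(np.corrcoef(replayed, template)[0, 1])

weak, strong = recall_score(0.5), recall_score(3.0)
assert strong > weak  # at higher "volume," everything comes through clear
```

Turning up the gain on the same pattern raises its correlation with the original: the same intuition behind sending “stronger codes” back into the rats’ brains.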

Five years later, Berger was still looking for the support he needed for human trials. That’s when Johnson showed up. In August 2016, he announced he would pledge $100 million of his fortune to create Kernel and that Berger would join the company as chief science officer. After learning about USC’s plans to implant wires in Dickerson’s brain to battle her epilepsy, Johnson approached Charles Liu, the head of the prestigious neurorestoration division at the USC School of Medicine and the lead doctor on Dickerson’s trial. Johnson asked him for permission to test the algorithm on Dickerson while she had Liu’s wires in her hippocampus—in between Liu’s own work sessions, of course. As it happened, Liu had dreamed about expanding human powers with technology ever since he got obsessed with The Six Million Dollar Man as a kid. He helped Johnson get Dickerson’s consent and convinced USC’s institutional research board to approve the experiment. At the end of 2016, Johnson got the green light. He was ready to start his first human trial.

In the hospital room, Dickerson is waiting for the experiments to begin, and I ask her how she feels about being a human lab rat.

“If I’m going to be here,” she says, “I might as well do something useful.”

Useful? This starry-eyed dream of cyborg supermen? “You know he’s trying to make humans smarter, right?”

“Isn’t that cool?” she answers.

Over by the computers, I ask one of the scientists about the multicolored grid on the screen. “Each one of these squares is an electrode that’s in her brain,” one says. Every time a neuron close to one of the wires in Dickerson’s brain fires, he explains, a pink line will jump in the relevant box.

Johnson’s team is going to start with simple memory tests. “You’re going to be shown words,” the scientist explains to her. “Then there will be some math problems to make sure you’re not rehearsing the words in your mind. Try to remember as many words as you can.”

One of the scientists hands Dickerson a computer tablet, and everyone goes quiet. Dickerson stares at the screen to take in the words. A few minutes later, after the math problem scrambles her mind, she tries to remember what she’d read. “Smoke … egg … mud … pearl.”

Next, they try something much harder, a group of memories in a sequence. As one of Kernel’s scientists explains to me, they can only gather so much data from wires connected to 30 or 40 neurons. A single face shouldn’t be too hard, but getting enough data to reproduce memories that stretch out like a scene in a movie is probably impossible.

Sitting by the side of Dickerson’s bed, a Kernel scientist takes on the challenge. “Could you tell me the last time you went to a restaurant?”

“It was probably five or six days ago,” Dickerson says. “I went to a Mexican restaurant in Mission Hills. We had a bunch of chips and salsa.”

He presses for more. As she dredges up other memories, another Kernel scientist hands me a pair of headphones connected to the computer bank. All I hear at first is a hissing sound. After 20 or 30 seconds go by I hear a pop.

“That’s a neuron firing,” he says.

As Dickerson continues, I listen to the mysterious language of the brain, the little pops that move our legs and trigger our dreams. She remembers a trip to Costco and the last time it rained, and I hear the sounds of Costco and rain.

When Dickerson’s eyelids start sinking, the medical team says she’s had enough and Johnson’s people start packing up. Over the next few days, their algorithm will turn Dickerson’s synaptic activity into code. If the codes they send back into Dickerson’s brain make her think of dipping a few chips in salsa, Johnson might be one step closer to reprogramming the operating system of the world.

But look, there’s another banana peel—after two days of frantic coding, Johnson’s team returns to the hospital to send the new code into Dickerson’s brain. Just when he gets word that they can get an early start, a message arrives: It’s over. The experiment has been placed on “administrative hold.” The only reason USC would give in the aftermath was an issue between Johnson and Berger. Berger would later tell me he had no idea the experiment was under way and that Johnson rushed into it without his permission. Johnson said he is mystified by Berger’s accusations. “I don’t know how he could not have known about it. We were working with his whole lab, with his whole team.” The one thing they both agree on is that their relationship fell apart shortly afterward, with Berger leaving the company and taking his algorithm with him. He blames the break entirely on Johnson. “Like most investors, he wanted a high rate of return as soon as possible. He didn’t realize he’d have to wait seven or eight years to get FDA approval—I would have thought he would have looked that up.” But Johnson didn’t want to slow down. He had bigger plans, and he was in a hurry.

Eight months later, I go back to California to see where Johnson has ended up. He seems a little more relaxed. On the whiteboard behind his desk at Kernel’s new offices in Los Angeles, someone’s scrawled a playlist of songs in big letters. “That was my son,” he says. “He interned here this summer.” Johnson is a year into a romance with Taryn Southern, a charismatic 31-year-old performer and film producer. And since his break with Berger, Johnson has tripled Kernel’s staff—he’s up to 36 employees now—adding experts in fields like chip design and computational neuroscience. His new science adviser is Ed Boyden, the director of MIT’s Synthetic Neurobiology Group and a superstar in the neuroscience world. Down in the basement of the new office building, there’s a Dr. Frankenstein lab where scientists build prototypes and try them out on glass heads.

When the moment seems right, I bring up the purpose of my visit. “You said you had something to show me?”

Johnson hesitates. I’ve already promised not to reveal certain sensitive details, but now I have to promise again. Then he hands me two small plastic display cases. Inside, two pairs of delicate twisty wires rest on beds of foam rubber. They look scientific but also weirdly biological, like the antennae of some futuristic bug-bot.

I’m looking at the prototypes for Johnson’s brand-new neuromodulator. On one level, it’s just a much smaller version of the deep brain stimulators and other neuromodulators currently on the market. But unlike a typical stimulator, which just fires pulses of electricity, Johnson’s is designed to read the signals that neurons send to other neurons—and not just the 100 neurons the best of the current tools can harvest, but perhaps many more. That would be a huge advance in itself, but the implications are even bigger: With Johnson’s neuromodulator, scientists could collect brain data from thousands of patients, with the goal of writing precise codes to treat a variety of neurological diseases.

In the short term, Johnson hopes his neuromodulator will help him “optimize the gold rush” in neurotechnology—financial analysts are forecasting a $27 billion market for neural devices within six years, and countries around the world are committing billions to the escalating race to decode the brain. In the long term, Johnson believes his signal-reading neuromodulator will advance his bigger plans in two ways: (1) by giving neuroscientists a vast new trove of data they can use to decode the workings of the brain and (2) by generating the huge profits Kernel needs to launch a steady stream of innovative and profitable neural tools, keeping the company both solvent and plugged into every new neuroscience breakthrough. With those two achievements in place, Johnson can watch and wait until neuroscience reaches the level of sophistication he needs to jump-start human evolution with a mind-enhancing neuroprosthesis.

Liu, the neurologist with the Six Million Dollar Man dreams, compares Johnson’s ambition to flying. “Going back to Icarus, human beings have always wanted to fly. We don’t grow wings, so we build a plane. And very often these solutions will have even greater capabilities than the ones nature created—no bird ever flew to Mars.” But now that humanity is learning how to reengineer its own capabilities, we really can choose how we evolve. “We have to wrap our minds around that. It’s the most revolutionary thing in the world.”

The crucial ingredient is the profit motive, which always drives rapid innovation in science. That’s why Liu thinks Johnson could be the one to give us wings. “I’ve never met anyone with his urgency to take this to market,” he says.

When will this revolution arrive? “Sooner than you think,” Liu says.

Now we’re back where we began. Is Johnson a fool? Is he just wasting his time and fortune on a crazy dream? One thing is certain: Johnson will never stop trying to optimize the world. At the pristine modern house he rents in Venice Beach, he pours out idea after idea. He even takes skepticism as helpful information—when I tell him his magic neuroprosthesis sounds like another version of the Mormon heaven, he’s delighted.

“Good point! I love it!”

He never has enough data. He even tries to suck up mine. What are my goals? My regrets? My pleasures? My doubts?

Every so often, he pauses to examine my “constraint program.”

“One, you have this biological disposition of curiosity. You want data. And when you consume that data, you apply boundaries of meaning-making.”

“Are you trying to hack me?” I ask.

Not at all, he says. He just wants us to share our algorithms. “That’s the fun in life,” he says, “this endless unraveling of the puzzle. And I think, ‘What if we could make the data transfer rate a thousand times faster? What if my consciousness is only seeing a fraction of reality? What kind of stories would we tell?’ ”

In his free time, Johnson is writing a book about taking control of human evolution and looking on the bright side of our mutant humanoid future. He brings this up every time I talk to him. For a long time I lumped this in with his dreamy ideas about reprogramming the operating system of the world: The future is coming faster than anyone thinks, our glorious digital future is calling, the singularity is so damn near that we should be cheering already—a spiel that always makes me want to hit him with a copy of the Unabomber Manifesto.

But his urgency today sounds different, so I press him on it: “How would you respond to Ted Kaczynski’s fears? The argument that technology is a cancerlike development that’s going to eat itself?”

“I would say he’s potentially on the wrong side of history.”

“Yeah? What about climate change?”

“That’s why I feel so driven,” he answers. “We’re in a race against time.”

He asks me for my opinion. I tell him I think he’ll still be working on cyborg brainiacs when the starving hordes of a ravaged planet destroy his lab looking for food—and for the first time, he reveals the distress behind his hope. The truth is, he has the same fear. The world has gotten way too complex, he says. The financial system is shaky, the population is aging, robots want our jobs, artificial intelligence is catching up, and climate change is coming fast. “It just feels out of control,” he says.

He’s invoked these dystopian ideas before, but only as a prelude to his sales pitch. This time he’s closer to pleading. “Why wouldn’t we embrace our own self-directed evolution? Why wouldn’t we just do everything we can to adapt faster?”

I turn to a more cheerful topic. If he ever does make a neuroprosthesis to revolutionize how we use our brain, which superpower would he give us first? Telepathy? Group minds? Instant kung fu?

He answers without hesitation. Because our thinking is so constrained by the familiar, he says, we can’t imagine a new world that isn’t just another version of the world we know. But we have to imagine something far better than that. So he’d try to make us more creative—that would put a new frame on everything.

Ambition like that can take you a long way. It can drive you to try to reach the South Pole when everyone says it’s impossible. It can take you up Mount Kilimanjaro when you’re close to dying and help you build an $800 million company by the time you’re 36. And Johnson’s ambitions drive straight for the heart of humanity’s most ancient dream: For operating system, substitute enlightenment.

By hacking our brains, he wants to make us one with everything.

https://www.wired.com/story/inside-the-race-to-build-a-brain-machine-interface/?mbid=nl_111717_editorsnote_list1_p1

Low-Current Brain Stimulation Improves Memory Recollection

Low-current electrical pulses delivered to a specific brain area during learning improved recollection of distinct memories, according to a study published online in eLife.

Researchers at the University of California, Los Angeles (UCLA) believe electrical stimulation offers hope for the treatment of memory disorders, such as Alzheimer’s disease.

The study involved 13 patients with epilepsy who had ultrafine wires implanted in their brains to pinpoint the origin of seizures. During a person-recognition task, researchers monitored the wires to record neuronal activity as memories were formed, and then sent a specific pattern of quick pulses to the entorhinal area of the brain, an area critical to learning and memory.

In 8 of 9 patients who received electrical pulses to the right side of the entorhinal area, the ability to recognize specific faces and disregard similar-looking ones improved significantly. However, the 4 patients who received electrical stimulation on the left side of the brain area showed no improvement in recall.

By using the ultrafine wires, researchers were able to precisely target the stimulation while using a voltage that was one-tenth to one-fifth of the strength used in previous studies.

“These results suggest that microstimulation with physiologic level currents—a radical departure from commonly used deep brain stimulation protocols—is sufficient to modulate human behavior,” researchers wrote.

The findings also point to the importance of stimulating the right entorhinal region to promote improved memory recollection.

—Jolynn Tumolo

References

Titiz AS, Hill MRH, Mankin EA, et al. Theta-burst microstimulation in the human entorhinal area improves memory specificity. eLife. 2017 October 24.

AI Can Now Predict Suicide with Remarkable Accuracy

When someone commits suicide, their family and friends can be left with the heartbreaking and answerless question of what they could have done differently. Colin Walsh, data scientist at Vanderbilt University Medical Center, hopes his work in predicting suicide risk will give people the opportunity to ask “what can I do?” while there’s still a chance to intervene.

Walsh and his colleagues have created machine-learning algorithms that predict, with unnerving accuracy, the likelihood that a patient will attempt suicide. In trials, results have been 80-90% accurate when predicting whether someone will attempt suicide within the next two years, and 92% accurate in predicting whether someone will attempt suicide within the next week.

The prediction is based on data that’s widely available from all hospital admissions, including age, gender, zip codes, medications, and prior diagnoses. Walsh and his team gathered data on 5,167 patients from Vanderbilt University Medical Center that had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts.

This set of more than 5,000 cases was used to train the machine to identify those at risk of attempted suicide compared to those who committed self-harm but showed no evidence of suicidal intent. The researchers also built algorithms to predict attempted suicide among a group of 12,695 randomly selected patients with no documented history of suicide attempts. It proved even more accurate at making suicide risk predictions within this large general population of patients admitted to the hospital.
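Walsh’s actual models and features are not public. As a generic illustration of the approach—fitting a classifier to routine admission data so it can score new patients for risk—here is a minimal logistic regression trained by gradient descent. The features, labels, and weights are all synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for admission-record features
# (e.g., age, prior diagnoses, medications, demographics).
n = 500
X = rng.normal(size=(n, 4))
true_w = np.array([1.5, -2.0, 0.5, 0.0])
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient-descent logistic regression.
w = np.zeros(4)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y)) / n

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(round(accuracy, 2))
```

Note that this toy fits one interpretable weight per feature; Walsh’s point about complexity is that real clinical models combine so many interacting factors that no single weight explains a prediction.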

Walsh’s paper, published in Clinical Psychological Science in April, is just the first stage of the work. He’s now working to establish whether his algorithm is effective with a completely different data set from another hospital. And, once confident that the model is sound, Walsh hopes to work with a larger team to establish a suitable method of intervening. He expects to have an intervention program in testing within the next two years. “I’d like to think it’ll be fairly quick, but fairly quick in health care tends to be in the order of months,” he adds.

Suicide is such an intensely personal act that it seems, from a human perspective, impossible to make such accurate predictions based on a crude set of data. Walsh says it’s natural for clinicians to ask how the predictions are made, but the algorithms are so complex that it’s impossible to pull out single risk factors. “It’s a combination of risk factors that gets us the answers,” he says.

That said, Walsh and his team were surprised to note that taking melatonin seemed to be a significant factor in calculating the risk. “I don’t think melatonin is causing people to have suicidal thinking. There’s no physiology that gets us there. But one thing that’s been really important to suicide risk is sleep disorders,” says Walsh. It’s possible that prescriptions for melatonin capture the risk of sleep disorders—though that’s currently a hypothesis that’s yet to be proved.

The research raises broader ethical questions about the role of computers in health care and how truly personal information could be used. “There’s always the risk of unintended consequences,” says Walsh. “We mean well and build a system to help people, but sometimes problems can result down the line.”

Researchers will also have to decide how much computer-based decisions will determine patient care. As a practicing primary care doctor, Walsh says it’s unnerving to recognize that he could effectively follow orders from a machine. “Is there a problem with the fact that I might get a prediction of high risk when that’s not part of my clinical picture?” he says. “Are you changing the way I have to deliver care because of something a computer’s telling me to do?”

For now, the machine-learning algorithms are based on data from hospital admissions. But Walsh recognizes that many people at risk of suicide do not spend time in hospital beforehand. “So much of our lives is spent outside of the health care setting. If we only rely on data that’s present in the health care setting to do this work, then we’re only going to get part of the way there,” he says.

And where else could researchers get data? The internet is one promising option. We spend so much time on Facebook and Twitter, says Walsh, that there may well be social media data that could be used to predict suicide risk. “But we need to do the work to show that’s actually true.”

Facebook announced earlier this year that it was using its own artificial intelligence to review posts for signs of self-harm. The AI’s results are reportedly already more accurate than the reports Facebook receives from users flagging their friends as at-risk.

Training machines to identify warning signs of suicide is far from straightforward. And, for predictions and interventions to be done successfully, Walsh believes it’s essential to destigmatize suicide. “We’re never going to help people if we’re not comfortable talking about it,” he says.

But, with suicide leading to 800,000 deaths worldwide every year, this is a public health issue that cannot be ignored. Given that most humans, including doctors, are pretty terrible at identifying suicide risk, machine learning could provide an important solution.

https://www.doximity.com/doc_news/v2/entries/8004313

How to Save Your Digital Soul


With a selfie and some audio, a startup called Oben says, it can make you an avatar that can say—or sing—anything.

by Rachel Metz

I’ve met Nikhil Jain in the flesh, and now, on the laptop screen in front of me, I’m looking at a small animated version of him from the torso up, talking in the same tone and lilting accented English—only this version of Jain is bald (hair is tricky to animate convincingly), and his voice has a robotic sound.

For the past three years, Jain has been working on Oben, the startup he cofounded and leads. It’s building technology that uses a single image and an audio clip to automate the construction of what are sort of like digital souls: avatars that look and sound a lot like anyone, and can be made to speak or sing anything.

Of course it won’t really be you—or Beyoncé, or Michael Jackson, or whomever an Oben avatar depicts—but it could be a decent, potentially fun approximation that’s useful for all kinds of things. Maybe, like Jain, you want a virtual you to read stories to your kids when you can’t be there in person. Perhaps you’re a celebrity who wants to let fans do duets with your avatar on a mobile or virtual-reality app, or the estate of a dead celebrity who wants to continue to keep that person “alive” with avatar-based performances. The opportunities are endless—and, perhaps, endlessly eerie.

Oben, based in Pasadena, California, has raised about $9 million so far. The company is planning to release an app late this year that lets people make their own personal avatar and share video clips of it with friends.

Oben is also working with some as-yet-unnamed bands in Asia to make mobile-based avatars that will be able to sing duets with fans, and last month it announced it will launch a virtual-reality-enabled version of its avatar technology with the massively popular social app WeChat, for the HTC Vive headset.

For now, producing the kind of avatar Jain showed me still takes a lot of time, and it doesn’t even include the body below the waist (Jain says the company is experimenting with animating other body parts, but mainly it’s “focusing on other things”). While the avatar can be made with just one photo and two to 20 minutes of reading from a phoneme-rich script (the more, the better), a good avatar still takes Oben’s deep-learning system about eight hours to create. This includes cleaning up the recorded audio, creating a voice print for the person that reflects qualities such as accent and timbre, and making the 3-D visual model (facial movements are predicted from the selfie and voice print, Jain says). While the avatar’s speech sounds pretty good, the singing clips I heard sounded very Auto-Tuned.

The avatars in the forthcoming app will be less focused on perfection but much faster to build, he says. Oben is also trying to figure out how to match speech and facial expressions so that the avatars can speak any language in a natural-looking way; for now, they’re limited to English and Chinese.

If digital copies like Oben’s are any good, they will raise questions about what should happen to your digital self over time. If you die, should an existing avatar be retained? Is it disturbing if others use digital breadcrumbs you left behind to, in a sense, re-create your digital self?

Jain isn’t sure what the right answer is, though he agrees that, like other companies that deal with user data, Oben does have to address death. And beyond big questions, there are potentially big business opportunities in that issue. The company’s business model is likely to be, in part, predicated on it: he says Oben has been approached by the estates of numerous celebrities, some of them long dead, some recently deceased.

https://www.technologyreview.com/s/607885/how-to-save-your-digital-soul/