Posts Tagged ‘Rachel Metz’

The smartphone app that can tell you’re depressed before you know it yourself

by Rachel Metz

There are about 45 million people in the US alone with a mental illness, and those illnesses and their courses of treatment can vary tremendously. But there is something most of those people have in common: a smartphone.

A startup founded in Palo Alto, California, by a trio of doctors, including the former director of the US National Institute of Mental Health, is trying to prove that our obsession with the technology in our pockets can help treat some of today’s most intractable medical problems: depression, schizophrenia, bipolar disorder, post-traumatic stress disorder, and substance abuse.

Mindstrong Health is using a smartphone app to collect measures of people’s cognition and emotional health as indicated by how they use their phones. Once a patient installs Mindstrong’s app, it monitors things like the way the person types, taps, and scrolls while using other apps. This data is encrypted and analyzed remotely using machine learning, and the results are shared with the patient and the patient’s medical provider.
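
Mindstrong hasn’t published the details of its feature set, but a minimal sketch can make the idea concrete. The Python below is purely illustrative (the event format and the names `TouchEvent` and `interaction_features` are assumptions, not Mindstrong’s code); it reduces one session of raw touch events to the kinds of summary signals the article describes: typing rhythm, error corrections, and scrolling speed.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class TouchEvent:
    t: float         # timestamp, in seconds
    kind: str        # "key", "backspace", or "scroll"
    dy: float = 0.0  # scroll distance in pixels, when kind == "scroll"

def interaction_features(events):
    """Reduce one session of touch events to a few scalar signals."""
    keys = [e for e in events if e.kind in ("key", "backspace")]
    scrolls = [e for e in events if e.kind == "scroll"]
    gaps = [b.t - a.t for a, b in zip(keys, keys[1:])]
    return {
        # typing rhythm: average pause between consecutive keystrokes
        "mean_interkey_gap_s": mean(gaps) if gaps else 0.0,
        # error correction: share of keystrokes that delete a character
        "backspace_rate": sum(e.kind == "backspace" for e in keys) / len(keys) if keys else 0.0,
        # how quickly the user flicks through lists such as contacts
        "mean_scroll_px": mean(abs(e.dy) for e in scrolls) if scrolls else 0.0,
    }
```

In the real system, as the article notes, signals like these would be encrypted on the device and analyzed remotely by machine-learning models rather than computed this simply.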

The seemingly mundane minutiae of how you interact with your phone offer surprisingly important clues to your mental health, according to Mindstrong’s research—revealing, for example, a relapse of depression. With details gleaned from the app, Mindstrong says, a patient’s doctor or other care manager gets an alert when something may be amiss and can then check in with the patient by sending a message through the app (patients, too, can use it to message their care provider).

For years now, countless companies have offered everything from app-based therapy to games that help with mood and anxiety to efforts to track smartphone activity or voice and speech for signs of depression. But Mindstrong is different: it looks at users’ physical interactions with their phones—not what they do, but how they do it—for signs of mental illness. That approach may lead to far more accurate ways to track these problems over time. If Mindstrong’s method works, it could be the first to turn the technology in your pocket into a key to helping patients with a wide range of chronic brain disorders—and may even lead to ways to diagnose them before they start.

Digital fingerprints
Before starting Mindstrong, Paul Dagum, its founder and CEO, paid for two Bay Area–based studies to figure out whether there might be a systemic measure of cognitive ability—or disability—hidden in how we use our phones. One hundred and fifty research subjects came into a clinic and underwent a standardized neurocognitive assessment that tested things like episodic memory (how you remember events) and executive function (mental skills that include the ability to control impulses, manage time, and focus on a task)—the kinds of high-order brain functions that are weakened in people with mental illnesses.

The assessment included neuropsychological tests that have been used for decades, like a so-called timed trail-tracing test, where you have to connect scattered letters and numbers in the proper order—a way to measure how well people can shift between tasks. People who have a brain disorder that weakens their attention may have a harder time with this.

Subjects went home with an app that measured the ways they touched their phone’s display (swipes, taps, and keyboard typing), which Dagum hoped would be an unobtrusive way to log the same kinds of behavior on a smartphone. For the next year, the app ran in the background, gathering data and sending it to a remote server. Then the subjects came back for another round of neurocognitive tests.

As it turns out, the behaviors the researchers measured can tell you a lot. “There were signals in there that were measuring, correlating—predicting, in fact, not just correlating with—the neurocognitive function measures that the neuropsychologist had taken,” Dagum says.

For instance, memory problems, which are common hallmarks of brain disorders, can be spotted by looking at things including how rapidly you type and what errors you make (such as how frequently you delete characters), as well as by how fast you scroll down a list of contacts. (Mindstrong can first determine your baseline by looking at how you use your handset and combining those characteristics with general measures.) Even when you’re just using the smartphone’s keyboard, Dagum says, you’re switching your attention from one task to another all the time—for example, when you’re inserting punctuation into a sentence.
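
The article says Mindstrong first establishes a personal baseline for each user. One simple way to flag a deviation from such a baseline, offered here only as an illustration and not as the company’s method, is a per-user z-score over a feature like the backspace rate:

```python
from statistics import mean, stdev

def baseline_deviation(history, today):
    """How far today's value sits from this user's own baseline,
    measured in standard deviations."""
    if len(history) < 2:
        return 0.0  # not enough data yet to define a baseline
    mu, sigma = mean(history), stdev(history)
    return (today - mu) / sigma if sigma else 0.0

# e.g. a month of daily backspace rates, then a sudden jump
history = [0.04, 0.05, 0.06, 0.05, 0.05] * 6
print(baseline_deviation(history, today=0.12))  # roughly 11 standard deviations up
```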

He became convinced the connections presented a new way to investigate human cognition and behavior over time, in a way that simply isn’t possible with typical treatment like regularly visiting a therapist or getting a new medication, taking it for a month, and then checking back in with a doctor. Brain-disorder treatment has stalled in part because doctors simply don’t know that someone’s having trouble until it’s well advanced; Dagum believes Mindstrong can figure it out much sooner and keep an eye on it 24 hours a day.

In 2016, Dagum visited Verily, Alphabet’s life sciences company, where he pitched his work to a group including Tom Insel, a psychiatrist who had spent 13 years as director of the National Institute of Mental Health before he joined Verily in 2015.

Verily was trying to figure out how to use phones to learn about depression or other mental health conditions. But Insel says that at first, what Dagum presented—more a concept than a show of actual data—didn’t seem like a big deal. “The bells didn’t go off about what he had done,” he says.

Over several meetings, however, Insel realized that Dagum could do something he believed nobody in the field of mental health had yet been able to accomplish. He had figured out smartphone signals that correlated strongly with a person’s cognitive performance—the kind of thing usually possible only through those lengthy lab tests. What’s more, he was collecting these signals for days, weeks, and months on end, making it possible, in essence, to look at a person’s brain function continuously and objectively. “It’s like having a continuous glucose monitor in the world of diabetes,” Insel says.

Why should anyone believe that what Mindstrong is doing can actually work? Dagum says that thousands of people are using the app, and the company now has five years of clinical study data to confirm its science and technology. It is continuing to perform numerous studies, and this past March it began working with patients and doctors in clinics.

In its current form, the Mindstrong app that patients see is fairly sparse. There’s a graph that updates daily with five different signals collected from your smartphone swipes and taps. Four of these signals are measures of cognition that are tightly tied to mood disorders (such as the ability to make goal-based decisions), and the fifth measures emotions. There’s also an option to chat with a clinician.

For now, Insel says, the company is working mainly with seriously ill people who are at risk of relapse for problems like depression, schizophrenia, and substance abuse. “This is meant for the most severely disabled people, who are really needing some innovation,” he says. “There are people who are high utilizers of health care and they’re not getting the benefits, so we’ve got to figure out some way to get them something that works better.” Actually predicting that a patient is headed toward a downward spiral is a harder task, but Dagum believes that having more people using the app over time will help cement patterns in the data.

There are thorny issues to consider, of course. Privacy, for one: while Mindstrong says it protects users’ data, collecting such data at all could be a scary prospect for many of the people it aims to help. Companies may be interested in, say, including it as part of an employee wellness plan, but most of us wouldn’t want our employers anywhere near our mental health data, no matter how well protected it may be.

Spotting problems before they start
A study in the works at the University of Michigan is looking at whether Mindstrong may be beneficial for people who do not have a mental illness but are at high risk of depression and suicide. Led by Srijan Sen, a professor of psychiatry and neuroscience, the study tracks the moods of first-year doctors across the country—a group known to experience intense stress, frequent sleep deprivation, and very high rates of depression.

Participants log their mood each day and wear a Fitbit activity tracker to log sleep, activity, and heart-rate data. About 1,500 of the 2,000 participants also let a Mindstrong keyboard app run on their smartphones to collect data about the ways they type and figure out how their cognition changes throughout the year.

Sen hypothesizes that people’s memory patterns and thinking speed change in subtle ways before they realize they’re depressed. But he says he doesn’t know how long that lag will be, or what cognitive patterns will be predictive of depression.

Insel also believes Mindstrong may lead to more precise diagnoses than today’s often broadly defined mental health disorders. Right now, for instance, two people with a diagnosis of major depressive disorder might share just one of numerous symptoms: they could both feel depressed, but one might feel like sleeping all the time, while the other is hardly sleeping at all. We don’t know how many different illnesses are in the category of depression, Insel says. But over time Mindstrong may be able to use patient data to find out. The company is exploring how learning more about these distinctions might make it possible to tailor drug prescriptions for more effective treatment.

Insel says it’s not yet known if there are specific digital markers of, say, auditory hallucinations that someone with schizophrenia might experience, and the company is still working on how to predict future problems like post-traumatic stress disorder. But he is confident that the phone will be the key to figuring it out discreetly. “We want to be able to do this in a way that just fits into somebody’s regular life,” he says.

https://www.technologyreview.com/s/612266/the-smartphone-app-that-can-tell-youre-depressed-before-you-know-it-yourself/



How to save your digital soul

With a selfie and some audio, a startup called Oben says it can make you an avatar that can say—or sing—anything.

by Rachel Metz

I’ve met Nikhil Jain in the flesh, and now, on the laptop screen in front of me, I’m looking at a small animated version of him from the torso up, talking in the same tone and lilting accented English—only this version of Jain is bald (hair is tricky to animate convincingly), and his voice has a robotic sound.

For the past three years, Jain has been working on Oben, the startup he cofounded and leads. It’s building technology that uses a single image and an audio clip to automate the construction of something like digital souls: avatars that look and sound a lot like anyone, and can be made to speak or sing anything.

Of course it won’t really be you—or Beyoncé, or Michael Jackson, or whomever an Oben avatar depicts—but it could be a decent, potentially fun approximation that’s useful for all kinds of things. Maybe, like Jain, you want a virtual you to read stories to your kids when you can’t be there in person. Perhaps you’re a celebrity who wants to let fans do duets with your avatar on a mobile or virtual-reality app, or the estate of a dead celebrity who wants to keep that person “alive” with avatar-based performances. The opportunities are endless—and, perhaps, endlessly eerie.

Oben, based in Pasadena, California, has raised about $9 million so far. The company is planning to release an app late this year that lets people make their own personal avatar and share video clips of it with friends.

Oben is also working with some as-yet-unnamed bands in Asia to make mobile-based avatars that will be able to sing duets with fans, and last month it announced it will launch a virtual-reality-enabled version of its avatar technology with the massively popular social app WeChat, for the HTC Vive headset.

For now, producing the kind of avatar Jain showed me still takes a lot of time, and it doesn’t even include the body below the waist (Jain says the company is experimenting with animating other body parts, but mainly it’s “focusing on other things”). While the avatar can be made with just one photo and two to 20 minutes of reading from a phoneme-rich script (the more, the better), a good avatar still takes Oben’s deep-learning system about eight hours to create. This includes cleaning up the recorded audio, creating a voice print for the person that reflects qualities such as accent and timbre, and making the 3-D visual model (facial movements are predicted from the selfie and voice print, Jain says). While speaking sounds pretty good, the singing clips I heard sounded very Auto-Tuned.
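
Oben hasn’t published its stack, but the stages the article lists map onto a straightforward pipeline. The skeleton below is a hypothetical sketch of those stages in Python; every type and function in it is a placeholder invented for illustration, not Oben’s API.

```python
from dataclasses import dataclass

@dataclass
class VoicePrint:
    accent: str  # stands in for learned accent/timbre parameters
    timbre: str

@dataclass
class FaceModel:
    source_photo: str  # a real system would hold 3-D mesh parameters

def clean_audio(recording_path: str) -> bytes:
    """Stub for the audio-cleanup stage (denoising, trimming silence)."""
    return b""

def fit_voice_print(audio: bytes) -> VoicePrint:
    """Stub for learning a voice print from the phoneme-rich reading."""
    return VoicePrint(accent="tbd", timbre="tbd")

def fit_face_model(photo_path: str) -> FaceModel:
    """Stub for building the 3-D head model from a single selfie;
    facial movements are later predicted from this model plus the voice print."""
    return FaceModel(source_photo=photo_path)

def build_avatar(photo_path: str, recording_path: str):
    """Run the stages the article lists, in order."""
    audio = clean_audio(recording_path)
    return fit_face_model(photo_path), fit_voice_print(audio)
```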

The avatars in the forthcoming app will be less focused on perfection but much faster to build, he says. Oben is also trying to figure out how to match speech and facial expressions so that the avatars can speak any language in a natural-looking way; for now, they’re limited to English and Chinese.

If digital copies like Oben’s are any good, they will raise questions about what should happen to your digital self over time. If you die, should an existing avatar be retained? Is it disturbing if others use digital breadcrumbs you left behind to, in a sense, re-create your digital self?

Jain isn’t sure what the right answer is, though he agrees that, like other companies that deal with user data, Oben has to address death. And beyond the big questions, that issue holds potentially big business opportunities; the company’s business model is likely to be predicated on it in part. Jain says Oben has been approached by the estates of numerous celebrities, some of them long dead, some recently deceased.

https://www.technologyreview.com/s/607885/how-to-save-your-digital-soul/