
A new study using machine learning has identified brain-based dimensions of mental health disorders, an advance towards much-needed biomarkers to more accurately diagnose and treat patients. A team at Penn Medicine led by Theodore D. Satterthwaite, MD, an assistant professor in the department of Psychiatry, mapped abnormalities in brain networks to four dimensions of psychopathology: mood, psychosis, fear, and disruptive externalizing behavior. The research is published in Nature Communications this week.

Currently, psychiatry relies on patient reporting and physician observations alone for clinical decision making, while other branches of medicine have incorporated biomarkers to aid in diagnosis, determination of prognosis, and selection of treatment for patients. While previous studies using standard clinical diagnostic categories have found evidence for brain abnormalities, the high level of diversity within disorders and comorbidity between disorders have limited how much this kind of research can improve clinical care.

“Psychiatry is behind the rest of medicine when it comes to diagnosing illness,” said Satterthwaite. “For example, when a patient comes in to see a doctor with most problems, in addition to talking to the patient, the physician will recommend lab tests and imaging studies to help diagnose their condition. Right now, that is not how things work in psychiatry. In most cases, all psychiatric diagnoses rely on just talking to the patient. One of the reasons for this is that we don’t understand how abnormalities in the brain lead to psychiatric symptoms. This research effort aims to link mental health issues and their associated brain network abnormalities to psychiatric symptoms using a data-driven approach.”

To uncover the brain networks associated with psychiatric disorders, the team studied a large sample of adolescents and young adults (999 participants, ages 8 to 22). All participants completed both functional MRI scans and a comprehensive evaluation of psychiatric symptoms as part of the Philadelphia Neurodevelopmental Cohort (PNC), an effort led by Raquel E. Gur, MD, Ph.D., professor of Psychiatry, Neurology, and Radiology, that was funded by the National Institute of Mental Health. The brain and symptom data were then jointly analyzed using a machine learning method called sparse canonical correlation analysis.

This analysis revealed patterns of changes in brain networks that were strongly related to psychiatric symptoms. In particular, the findings highlighted four distinct dimensions of psychopathology—mood, psychosis, fear, and disruptive behavior—all of which were associated with a distinct pattern of abnormal connectivity across the brain.

The researchers found that each brain-guided dimension contained symptoms from several different clinical diagnostic categories. For example, the mood dimension comprised symptoms from three categories: depression (feeling sad), mania (irritability), and obsessive-compulsive disorder (recurrent thoughts of self-harm). Similarly, the disruptive externalizing behavior dimension was driven primarily by symptoms of both Attention Deficit Hyperactivity Disorder (ADHD) and Oppositional Defiant Disorder (ODD), but also included the irritability item from the depression domain. These findings suggest that when both brain and symptom data are taken into consideration, psychiatric symptoms do not fall neatly into established categories. Instead, groups of symptoms emerge from diverse clinical domains to form dimensions that are linked to specific patterns of abnormal connectivity in the brain.

“In addition to these specific brain patterns in each dimension, we also found common brain connectivity abnormalities that are shared across dimensions,” said Cedric Xia, an MD-Ph.D. candidate and the paper’s lead author. “Specifically, a pair of brain networks, the default mode network and the frontoparietal network, whose connections usually grow apart during brain development, become abnormally integrated in all dimensions.”

These two brain networks have long intrigued psychiatrists and neuroscientists because of their crucial role in complex mental processes such as self-control, memory, and social interactions. The findings in this study support the theory that many types of psychiatric illness are related to abnormalities of brain development.

The team also examined how psychopathology differed across age and sex. They found that patterns associated with both mood and psychosis became significantly more prominent with age. Additionally, brain connectivity patterns linked to mood and fear were both stronger in female participants than males.

“This study shows that we can start to use the brain to guide our understanding of psychiatric disorders in a way that’s fundamentally different than grouping symptoms into clinical diagnostic categories. By moving away from clinical labels developed decades ago, perhaps we can let the biology speak for itself,” said Satterthwaite. “Our ultimate hope is that understanding the biology of mental illnesses will allow us to develop better treatments for our patients.”

More information: Cedric Huchuan Xia et al, Linked dimensions of psychopathology and connectivity in functional brain networks, Nature Communications (2018). DOI: 10.1038/s41467-018-05317-y

https://medicalxpress.com/news/2018-08-machine-links-dimensions-mental-illness.html


When someone commits suicide, their family and friends can be left with the heartbreaking and unanswerable question of what they could have done differently. Colin Walsh, a data scientist at Vanderbilt University Medical Center, hopes his work in predicting suicide risk will give people the opportunity to ask “what can I do?” while there’s still a chance to intervene.

Walsh and his colleagues have created machine-learning algorithms that predict, with unnerving accuracy, the likelihood that a patient will attempt suicide. In trials, results have been 80-90% accurate when predicting whether someone will attempt suicide within the next two years, and 92% accurate in predicting whether someone will attempt suicide within the next week.

The prediction is based on data that’s widely available from all hospital admissions, including age, gender, zip code, medications, and prior diagnoses. Walsh and his team gathered data on 5,167 patients from Vanderbilt University Medical Center who had been admitted with signs of self-harm or suicidal ideation. They read each of these cases to identify the 3,250 instances of suicide attempts.

This set of more than 5,000 cases was used to train the machine to distinguish those at risk of attempted suicide from those who committed self-harm but showed no evidence of suicidal intent. The researchers also built algorithms to predict attempted suicide among a group of 12,695 randomly selected patients with no documented history of suicide attempts. The model proved even more accurate at making suicide risk predictions within this large general population of patients admitted to the hospital.
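In outline, this kind of risk model is a supervised classifier trained on routine admission fields and evaluated on held-out patients. The sketch below uses a random forest on entirely synthetic data; the feature names, sample sizes, and signal strengths are invented for illustration and do not come from Walsh’s paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000  # synthetic stand-in for the patient records

# Toy features loosely mirroring the fields the article mentions:
# age, sex, number of medications, number of prior diagnoses.
age = rng.integers(18, 90, n)
sex = rng.integers(0, 2, n)
meds = rng.poisson(3, n)
dx = rng.poisson(2, n)
X = np.column_stack([age, sex, meds, dx]).astype(float)

# Synthetic outcome with a weak, invented dependence on the features,
# just so the classifier has something to learn.
logit = 0.4 * (dx - 2) + 0.3 * (meds - 3) - 3.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Hold out 30% of patients to estimate out-of-sample discrimination.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
print(round(auc, 2))
```

A real model would be evaluated far more carefully (calibration, external validation on another hospital’s data, as the article describes), but the train/hold-out pattern above is the basic shape of the experiment.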

Walsh’s paper, published in Clinical Psychological Science in April, is just the first stage of the work. He’s now working to establish whether his algorithm is effective with a completely different data set from another hospital. And, once confident that the model is sound, Walsh hopes to work with a larger team to establish a suitable method of intervening. He expects to have an intervention program in testing within the next two years. “I’d like to think it’ll be fairly quick, but fairly quick in health care tends to be in the order of months,” he adds.

Suicide is such an intensely personal act that it seems, from a human perspective, impossible to make such accurate predictions based on a crude set of data. Walsh says it’s natural for clinicians to ask how the predictions are made, but the algorithms are so complex that it’s impossible to pull out single risk factors. “It’s a combination of risk factors that gets us the answers,” he says.

That said, Walsh and his team were surprised to note that taking melatonin seemed to be a significant factor in calculating the risk. “I don’t think melatonin is causing people to have suicidal thinking. There’s no physiology that gets us there. But one thing that’s been really important to suicide risk is sleep disorders,” says Walsh. It’s possible that prescriptions for melatonin capture the risk of sleep disorders—though that’s currently a hypothesis that’s yet to be proved.

The research raises broader ethical questions about the role of computers in health care and how truly personal information could be used. “There’s always the risk of unintended consequences,” says Walsh. “We mean well and build a system to help people, but sometimes problems can result down the line.”

Researchers will also have to decide how much computer-based decisions will determine patient care. As a practicing primary care doctor, Walsh says it’s unnerving to recognize that he could effectively follow orders from a machine. “Is there a problem with the fact that I might get a prediction of high risk when that’s not part of my clinical picture?” he says. “Are you changing the way I have to deliver care because of something a computer’s telling me to do?”

For now, the machine-learning algorithms are based on data from hospital admissions. But Walsh recognizes that many people at risk of suicide do not spend time in hospital beforehand. “So much of our lives is spent outside of the health care setting. If we only rely on data that’s present in the health care setting to do this work, then we’re only going to get part of the way there,” he says.

And where else could researchers get data? The internet is one promising option. We spend so much time on Facebook and Twitter, says Walsh, that there may well be social media data that could be used to predict suicide risk. “But we need to do the work to show that’s actually true.”

Facebook announced earlier this year that it was using its own artificial intelligence to review posts for signs of self-harm. And the results are reportedly already more accurate than the reports Facebook receives from users flagging their friends as at-risk.

Training machines to identify warning signs of suicide is far from straightforward. And, for predictions and interventions to be done successfully, Walsh believes it’s essential to destigmatize suicide. “We’re never going to help people if we’re not comfortable talking about it,” he says.

But, with suicide leading to 800,000 deaths worldwide every year, this is a public health issue that cannot be ignored. Given that most humans, including doctors, are pretty terrible at identifying suicide risk, machine learning could provide an important solution.

https://www.doximity.com/doc_news/v2/entries/8004313