Posts Tagged ‘artificial intelligence’


Two-photon imaging shows neurons firing in a mouse brain. Recordings like this enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors. The image is credited to Yiyang Gong, Duke University.

Summary: Convolutional neural network model significantly outperforms previous methods and is as accurate as humans in segmenting active and overlapping neurons.

Source: Duke University

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, based on using artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals. These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process. Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is fussy and slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time — data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In this framework, the team created an algorithm that could process both spatial and timing information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving its accuracy.
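The overall shape of such a pipeline can be sketched in a few lines. The snippet below is a hypothetical stand-in, not the team's trained network: it replaces the learned spatiotemporal filters with simple temporal-variance thresholding, just to illustrate how timing information across video frames can be collapsed into a spatial segmentation mask.

```python
# Hypothetical sketch of a video-segmentation pipeline's shape (not the
# published Duke network): collapse a recording's time axis into summary
# images, then threshold to produce a binary mask of active pixels.
import numpy as np

def segment_active_pixels(video, k=2.0):
    """video: (frames, height, width) array of fluorescence intensities.
    Returns a boolean mask of pixels whose temporal variance stands out,
    a crude stand-in for learned spatiotemporal filters."""
    # Temporal feature: variance of each pixel's brightness over time.
    var_img = video.var(axis=0)
    # Active neurons flicker, so unusually high variance flags them.
    threshold = var_img.mean() + k * var_img.std()
    return var_img > threshold

# Toy recording: mostly noise, with one 'neuron' flashing in a corner.
rng = np.random.default_rng(0)
video = rng.normal(0.0, 0.01, size=(100, 16, 16))
video[::10, 2:5, 2:5] += 1.0  # periodic calcium-like transient
mask = segment_active_pixels(video)
print(mask[3, 3], mask[12, 12])  # → True False
```

A real network would learn these spatial and temporal features from human-annotated examples rather than rely on a fixed variance rule, which is what allows it to generalize across neuron sizes and densities.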

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.

https://neurosciencenews.com/artificial-intelligence-neurons-11076/



Researchers at the University of North Carolina School of Medicine applied machine learning techniques to MRI brain scans taken at birth to predict cognitive development at age 2 years with 95 percent accuracy.

“This prediction could help identify children at risk for poor cognitive development shortly after birth with high accuracy,” said senior author John H. Gilmore, MD, Thad and Alice Eure Distinguished Professor of psychiatry and director of the UNC Center for Excellence in Community Mental Health. “For these children, an early intervention in the first year or so of life – when cognitive development is happening – could help improve outcomes. For example, in premature infants who are at risk, one could use imaging to see who could have problems.”

The study, which was published online by the journal NeuroImage, used an application of artificial intelligence called machine learning to look at white matter connections in the brain at birth and the ability of these connections to predict cognitive outcomes.

Gilmore said researchers at UNC and elsewhere are working to find imaging biomarkers of risk for poor cognitive outcomes and for risk of neuropsychiatric conditions such as autism and schizophrenia. In this study, the researchers replicated the initial finding in a second sample of children who were born prematurely.

“Our study finds that the white matter network at birth is highly predictive and may be a useful imaging biomarker. The fact that we could replicate the findings in a second set of children provides strong evidence that this may be a real and generalizable finding,” he said.

Jessica B. Girault, PhD, a postdoctoral researcher at the Carolina Institute for Developmental Disabilities, is the study’s lead author. UNC co-authors are Barbara D. Goldman, PhD, of UNC’s Frank Porter Graham Child Development Institute, Juan C. Prieto, PhD, assistant professor, and Martin Styner, PhD, director of the Neuro Image Research and Analysis Laboratory in the department of psychiatry.

https://neurosciencenews.com/ai-mri-cognitive-development-10904/

by Isobel Asher Hamilton

– China’s state press agency has developed what it calls “AI news anchors,” avatars of real-life news presenters that read out news as it is typed.

– It developed the anchors with the Chinese search-engine giant Sogou.

– No details were given as to how the anchors were made, and one expert said they fell into the “uncanny valley,” in which avatars have an unsettling resemblance to humans.

China’s state-run press agency, Xinhua, has unveiled what it claims are the world’s first news anchors generated by artificial intelligence.

Xinhua revealed two virtual anchors at the World Internet Conference on Thursday. Both were modeled on real presenters; one speaks Chinese and the other English.

“AI anchors have officially become members of the Xinhua News Agency reporting team,” Xinhua told the South China Morning Post. “They will work with other anchors to bring you authoritative, timely, and accurate news information in both Chinese and English.”

In a post, Xinhua said the generated anchors could work “24 hours a day” on its website and various social-media platforms, “reducing news production costs and improving efficiency.”

Xinhua developed the virtual anchors with Sogou, China’s second-biggest search engine. No details were given about how they were made.

Though Xinhua presents the avatars as independently learning from “live broadcasting videos,” the avatars do not appear to rely on true artificial intelligence, as they simply read text written by humans.

“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted,” the English-speaking anchor says in its first video, using a synthesized voice.

The Oxford computer-science professor Michael Wooldridge told the BBC that the anchor fell into the “uncanny valley,” in which avatars or objects that closely but do not fully resemble humans make observers more uncomfortable than ones that are more obviously artificial.

https://www.businessinsider.com/ai-news-anchor-created-by-china-xinhua-news-agency-2018-11


Researchers have developed a new deep learning algorithm that can reveal your personality type, based on the Big Five personality trait model, by simply tracking eye movements.

It’s often been said that the eyes are the window to the soul, revealing what we think and how we feel. Now, new research reveals that your eyes may also be an indicator of your personality type, simply by the way they move.

Developed by the University of South Australia in partnership with the University of Stuttgart, Flinders University and the Max Planck Institute for Informatics in Germany, the research uses state-of-the-art machine-learning algorithms to demonstrate a link between personality and eye movements.

Findings show that people’s eye movements reveal whether they are sociable, conscientious or curious, with the software reliably recognising four of the Big Five personality traits: neuroticism, extroversion, agreeableness, and conscientiousness.

Researchers tracked the eye movements of 42 participants as they undertook everyday tasks around a university campus, and subsequently assessed their personality traits using well-established questionnaires.

UniSA’s Dr Tobias Loetscher says the study provides new links between previously under-investigated eye movements and personality traits and delivers important insights for emerging fields of social signal processing and social robotics.

“There’s certainly the potential for these findings to improve human-machine interactions,” Dr Loetscher says.

“People are always looking for improved, personalised services. However, today’s robots and computers are not socially aware, so they cannot adapt to non-verbal cues.

“This research provides opportunities to develop robots and computers so that they can become more natural, and better at interpreting human social signals.”

Dr Loetscher says the findings also provide an important bridge between tightly controlled laboratory studies and the study of natural eye movements in real-world environments.

“This research has tracked and measured the visual behaviour of people going about their everyday tasks, providing more natural responses than if they were in a lab.

“And thanks to our machine-learning approach, we not only validate the role of personality in explaining eye movement in everyday life, but also reveal new eye movement characteristics as predictors of personality traits.”

Original Research: Open access research for “Eye Movements During Everyday Behavior Predict Personality Traits” by Sabrina Hoppe, Tobias Loetscher, Stephanie A. Morey and Andreas Bulling in Frontiers in Human Neuroscience. Published April 14 2018.
doi:10.3389/fnhum.2018.00105

https://neurosciencenews.com/ai-personality-9621/

Futuristic cityscape maze.

By Diana Kwon

A computer program can learn to navigate through space and spontaneously mimics the electrical activity of grid cells, neurons that help animals navigate their environments, according to a study published May 9 in Nature.

“This paper came out of the blue, like a shot, and it’s very exciting,” Edvard Moser, a neuroscientist at the Kavli Institute for Systems Neuroscience in Norway who was not involved in the work, tells Nature in an accompanying news story. “It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology.” Moser shared a Nobel Prize for the discovery of grid cells with neuroscientists May-Britt Moser and John O’Keefe in 2014.

When scientists trained an artificial neural network, in the form of virtual rats, to navigate through a simulated environment, they found that the algorithm produced patterns of activity similar to those found in the grid cells of the human brain. “We wanted to see whether we could set up an artificial network with an appropriate task so that it would actually develop grid cells,” study coauthor Caswell Barry of University College London tells Quanta. “What was surprising was how well it worked.”

The team then tested the program in a more-complex, maze-like environment, and found that not only did the virtual rats make their way to the end, they were also able to outperform a human expert at the task.

“It is doing the kinds of things that animals do and that is to take direct routes wherever possible and shortcuts when they are available,” coauthor Dharshan Kumaran, a senior researcher at Google’s AI company DeepMind, tells The Guardian.

DeepMind researchers hope to use these types of artificial neural networks to study other parts of the brain, such as those involved in understanding sound and controlling limbs, according to Wired. “This has proven to be extremely hard with traditional neuroscience so, in the future, if we could improve these artificial models, we could potentially use them to understand other brain functionalities,” study coauthor Andrea Banino, a research scientist at DeepMind, tells Wired. “This would be a giant step toward the future of brain understanding.”

https://www.the-scientist.com/?articles.view/articleNo/54534/title/Artificial-Intelligence-Mimics-Navigation-Cells-in-the-Brain/

by Emily Mullin

When David Graham wakes up in the morning, the flat white box that’s Velcroed to the wall of his room in Robbie’s Place, an assisted living facility in Marlborough, Massachusetts, begins recording his every movement.

It knows when he gets out of bed, gets dressed, walks to his window, or goes to the bathroom. It can tell if he’s sleeping or has fallen. It does this by using low-power wireless signals to map his gait speed, sleep patterns, location, and even breathing pattern. All that information gets uploaded to the cloud, where machine-learning algorithms find patterns in the thousands of movements he makes every day.

The rectangular boxes are part of an experiment to help researchers track and understand the symptoms of Alzheimer’s.

It’s not always obvious when patients are in the early stages of the disease. Alterations in the brain can cause subtle changes in behavior and sleep patterns years before people start experiencing confusion and memory loss. Researchers think artificial intelligence could recognize these changes early and identify patients at risk of developing the most severe forms of the disease.

Spotting the first indications of Alzheimer’s years before any obvious symptoms come on could help pinpoint people most likely to benefit from experimental drugs and allow family members to plan for eventual care. Devices equipped with such algorithms could be installed in people’s homes or in long-term care facilities to monitor those at risk. For patients who already have a diagnosis, such technology could help doctors make adjustments in their care.

Drug companies, too, are interested in using machine-learning algorithms, in their case to search through medical records for the patients most likely to benefit from experimental drugs. Once people are in a study, AI might be able to tell investigators whether the drug is addressing their symptoms.

Currently, there’s no easy way to diagnose Alzheimer’s. No single test exists, and brain scans alone can’t determine whether someone has the disease. Instead, physicians have to look at a variety of factors, including a patient’s medical history and observations reported by family members or health-care workers. So machine learning could pick up on patterns that otherwise would easily be missed.


David Graham, one of Vahia’s patients, has one of the AI-powered devices in his room at Robbie’s Place, an assisted living facility in Marlborough, Massachusetts.

Graham, unlike the four other patients with such devices in their rooms, hasn’t been diagnosed with Alzheimer’s. But researchers are monitoring his movements and comparing them with patterns seen in patients who doctors suspect have the disease.

Dina Katabi and her team at MIT’s Computer Science and Artificial Intelligence Laboratory initially developed the device as a fall detector for older people. But they soon realized it had far more uses. If it could pick up on a fall, they thought, it must also be able to recognize other movements, like pacing and wandering, which can be signs of Alzheimer’s.

Katabi says their intention was to monitor people without needing them to put on a wearable tracking device every day. “This is completely passive. A patient doesn’t need to put sensors on their body or do anything specific, and it’s far less intrusive than a video camera,” she says.

How it works

Graham hardly notices the white box hanging in his sunlit, tidy room. He’s most aware of it on days when Ipsit Vahia makes his rounds and tells him about the data it’s collecting. Vahia is a geriatric psychiatrist at McLean Hospital and Harvard Medical School, and he and the technology’s inventors at MIT are running a small pilot study of the device.

Graham looks forward to these visits. During a recent one, he was surprised when Vahia told him he was waking up at night. The device was able to detect it, though Graham didn’t know he was doing it.

The device’s wireless radio signal, only a thousandth as powerful as wi-fi, reflects off everything in a 30-foot radius, including human bodies. Every movement—even the slightest ones, like breathing—causes a change in the reflected signal.

Katabi and her team developed machine-learning algorithms that analyze all these minute reflections. They trained the system to recognize simple motions like walking and falling, and more complex movements like those associated with sleep disturbances. “As you teach it more and more, the machine learns, and the next time it sees a pattern, even if it’s too complex for a human to abstract that pattern, the machine recognizes that pattern,” Katabi says.
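The train-then-recognize loop Katabi describes can be illustrated with a toy sketch. The example below is purely hypothetical, not the MIT system: it reduces each labeled reflection trace to two hand-picked features and classifies new traces by the nearest labeled pattern, standing in for the far richer representations a real model would learn.

```python
# Hypothetical illustration of pattern learning on signal traces (not
# Katabi's system): summarize labeled traces into features, then label
# a new trace by its nearest learned pattern.
from statistics import mean, stdev

def features(trace):
    # Summarize a reflected-signal trace: average level and variability.
    return (mean(trace), stdev(trace))

def train(examples):
    # examples: {label: [trace, ...]} -> one centroid feature per label.
    model = {}
    for label, traces in examples.items():
        feats = [features(t) for t in traces]
        model[label] = tuple(mean(f[i] for f in feats) for i in range(2))
    return model

def classify(model, trace):
    # Pick the label whose centroid is closest in feature space.
    f = features(trace)
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(model[lbl], f)))

# Toy training data: breathing is a small, steady oscillation;
# walking produces large swings in the reflected signal.
training = {
    "breathing": [[0.1, 0.12, 0.1, 0.11, 0.1, 0.12] * 5],
    "walking": [[0.2, 0.9, 0.3, 0.8, 0.25, 0.85] * 5],
}
model = train(training)
print(classify(model, [0.1, 0.11, 0.1, 0.12, 0.11, 0.1] * 5))  # → breathing
```

Feeding the system more labeled examples refines each pattern, which is the sense in which "as you teach it more and more, the machine learns."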

Over time, the device creates large readouts of data that show patterns of behavior. The AI is designed to pick out deviations from those patterns that might signify things like agitation, depression, and sleep disturbances. It could also pick up whether a person is repeating certain behaviors during the day. These are all classic symptoms of Alzheimer’s.

“If you can catch these deviations early, you will be able to anticipate them and help manage them,” Vahia says.

For one patient with an Alzheimer’s diagnosis, Vahia and Katabi could tell that she was waking up at 2 a.m. and wandering around her room. They also noticed that she would pace more after certain family members visited. After confirming that behavior with a nurse, Vahia adjusted the patient’s dose of a drug used to prevent agitation.


Ipsit Vahia and Dina Katabi are testing an AI-powered device that Katabi’s lab built to monitor the behaviors of people with Alzheimer’s as well as those at risk of developing the disease.

Brain changes

AI is also finding use in helping physicians detect early signs of Alzheimer’s in the brain and understand how those physical changes unfold in different people. “When a radiologist reads a scan, it’s impossible to tell whether a person will progress to Alzheimer’s disease,” says Pedro Rosa-Neto, a neurologist at McGill University in Montreal.

Rosa-Neto and his colleague Sulantha Mathotaarachchi developed an algorithm that analyzed hundreds of positron-emission tomography (PET) scans from people who had been deemed at risk of developing Alzheimer’s. From medical records, the researchers knew which of these patients had gone on to develop the disease within two years of a scan, but they wanted to see if the AI system could identify them just by picking up patterns in the images.

Sure enough, the algorithm was able to spot patterns in clumps of amyloid—a protein often associated with the disease—in certain regions of the brain. Even trained radiologists would have had trouble noticing these issues on a brain scan. From the patterns, it was able to detect with 84 percent accuracy which patients ended up with Alzheimer’s.

Machine learning is also helping doctors predict the severity of the disease in different patients. Duke University physician and scientist P. Murali Doraiswamy is using machine learning to figure out what stage of the disease patients are in and whether their condition is likely to worsen.

“We’ve been seeing Alzheimer’s as a one-size-fits all problem,” says Doraiswamy. But people with Alzheimer’s don’t all experience the same symptoms, and some might get worse faster than others. Doctors have no idea which patients will remain stable for a while or which will quickly get sicker. “So we thought maybe the best way to solve this problem was to let a machine do it,” he says.

He worked with Dragan Gamberger, an artificial-intelligence expert at the Rudjer Boskovic Institute in Croatia, to develop a machine-learning algorithm that sorted through brain scans and medical records from 562 patients who had mild cognitive impairment at the beginning of a five-year period.

Two distinct groups emerged: those whose cognition declined significantly and those whose symptoms changed little or not at all over the five years. The system was able to pick up changes in the loss of brain tissue over time.

A third group was somewhere in the middle, between mild cognitive impairment and advanced Alzheimer’s. “We don’t know why these clusters exist yet,” Doraiswamy says.

Clinical trials

From 2002 to 2012, 99 percent of investigational Alzheimer’s drugs failed in clinical trials. One reason is that no one knows exactly what causes the disease. But another reason is that it is difficult to identify the patients most likely to benefit from specific drugs.

AI systems could help design better trials. “Once we have those people together with common genes, characteristics, and imaging scans, that’s going to make it much easier to test drugs,” says Marilyn Miller, who directs AI research in Alzheimer’s at the National Institute on Aging, part of the US National Institutes of Health.

Then, once patients are enrolled in a study, researchers could continuously monitor them to see if they’re benefiting from the medication.

“One of the biggest challenges in Alzheimer’s drug development is we haven’t had a good way of parsing out the right population to test the drug on,” says Vaibhav Narayan, a researcher on Johnson & Johnson’s neuroscience team.

He says machine-learning algorithms will greatly speed the process of recruiting patients for drug studies. And if AI can pick out which patients are most likely to get worse more quickly, it will be easier for investigators to tell if a drug is having any benefit.

That way, if doctors like Vahia notice signs of Alzheimer’s in a person like Graham, they can quickly get him signed up for a clinical trial in hopes of curbing the devastating effects that would otherwise come years later.

Miller thinks AI could be used to diagnose and predict Alzheimer’s in patients as soon as five years from now. But she says it’ll require a lot of data to make sure the algorithms are accurate and reliable. Graham, for one, is doing his part to help out.

https://www.technologyreview.com/s/609236/ai-can-spot-signs-of-alzheimers-before-your-family-does/

by Daniel Oberhaus

Amanda Feilding used to take lysergic acid diethylamide every day to boost creativity and productivity at work before LSD, known as acid, was made illegal in 1968. During her downtime, Feilding, who now runs the Beckley Foundation for psychedelic research, would get together with her friends to play the ancient Chinese game of Go, and came to notice something curious about her winning streaks.

“I found that if I was on LSD and my opponent wasn’t, I won more games,” Feilding told me over Skype. “For me that was a very clear indication that it improves cognitive function, particularly a kind of intuitive pattern recognition.”

An interesting observation to be sure. But was LSD actually helping Feilding in creative problem solving?

A half-century ban on psychedelic research has made answering this question in a scientific manner impossible. In recent years, however, psychedelic research has been experiencing something of a “renaissance” and now Feilding wants to put her intuition to the test by running a study in which participants will “microdose” while playing Go—a strategy game that is like chess on steroids—against an artificial intelligence.

Microdosing LSD is one of the hallmarks of the so-called “Psychedelic Renaissance.” It’s a regimen that involves regularly taking doses of acid that are so low they don’t impart any of the drug’s psychedelic effects. Microdosers claim the practice results in heightened creativity, lowered depression, and even relief from chronic somatic pain.

But so far, all evidence in favor of microdosing LSD has been based on self-reports, raising the possibility that these reported positive effects could all be placebo. So the microdosing community is going to have to do some science to settle the debate. That means clinical trials with quantifiable results like the one proposed by Feilding.

As the first scientific trial to investigate the effects of microdosing, Feilding’s study will consist of 20 participants who will be given low doses—10, 20 and 50 micrograms of LSD—or a placebo on four different occasions. After taking the acid, the brains of these subjects will be imaged using MRI and MEG while they engage in a variety of cognitive tasks, such as the neuropsychology staples the Wisconsin Card Sorting test and the Tower of London test. Importantly, the participants will also be playing Go against an AI, which will assess the players’ performance during the match.

By imaging the brain while it’s under the influence of small amounts of LSD, Feilding hopes to learn how the substance changes connectivity in the brain to enhance creativity and problem solving. If the study goes forward, this will only be the second time that subjects on LSD have had their brain imaged while tripping. (That 2016 study at Imperial College London was also funded by the Beckley Foundation, which found that there was a significant uptick in neural activity in areas of the brain associated with vision during acid trips.)

Before Feilding can go ahead with her planned research, a number of obstacles remain in her way, starting with funding. She estimates she’ll need to raise about $350,000 to fund the study.

“It’s frightening how expensive this kind of research is,” Feilding said. “I’m very keen on trying to alter how drug policy categorizes these compounds because the research is much more costly simply because LSD is a controlled substance.”

To tackle this problem, Feilding has partnered with Rodrigo Niño, a New York entrepreneur who recently launched Fundamental, a platform for donations to support psychedelic research at institutions like the Beckley Foundation, Johns Hopkins University, and New York University.

The study is using smaller doses of LSD than Feilding’s previous LSD study, so she says she doesn’t anticipate problems getting ethical clearance to pursue this. A far more difficult challenge will be procuring the acid to use in her research. In 2016, she was able to use LSD that had been synthesized for research purposes by a government certified lab, but she suspects that this stash has long since been used up.

But if there’s anyone who can make the impossible possible, it would be Feilding, a psychedelic science pioneer known as much for drilling a hole in her own head (https://www.vice.com/en_us/article/drilling-a-hole-in-your-head-for-a-higher-state-of-consciousness) to explore consciousness as for the dozens of peer-reviewed scientific studies on psychedelic use she has authored in her lifetime. And according to Feilding, the potential benefits of microdosing are too great to be ignored and may even come to replace selective serotonin reuptake inhibitors, or SSRIs, as a common antidepressant.

“I think the microdose is a very delicate and sensitive way of treating people,” said Feilding. “We need to continue to research it and make it available to people.”

https://motherboard.vice.com/en_us/article/first-ever-lsd-microdosing-study-will-pit-the-human-brain-against-ai