Posts Tagged ‘AI’


Artificial intelligence (AI) can be an invaluable aid to help lung doctors interpret respiratory symptoms accurately and make a correct diagnosis, according to new research presented yesterday (Wednesday) at the European Respiratory Society International Congress.

Dr Marko Topalovic (PhD), a postdoctoral researcher at the Laboratory for Respiratory Diseases, Catholic University of Leuven (KU Leuven), Belgium, told the meeting that after training an AI computer algorithm using good quality data, it proved to be more consistent and accurate in interpreting respiratory test results and suggesting diagnoses than lung specialists.

“Pulmonary function tests provide an extensive series of numerical outputs and their patterns can be hard for the human eye to perceive and recognise; however, it is easy for computers to manage large quantities of data like these and so we thought AI could be useful for pulmonologists. We explored if this was true with 120 pulmonologists from 16 hospitals. We found that diagnosis by AI was more accurate in twice as many cases as diagnosis by pulmonologists. These results show how AI can serve as a second opinion for pulmonologists when they are assessing and diagnosing their patients,” he said.

Pulmonary function tests (PFTs) include: spirometry, which involves the patient breathing through a mouthpiece to measure the amount of air inhaled and exhaled; a body box or plethysmography test, which enables doctors to assess lung volume by measuring the pressure in a booth in which the patient is sitting and breathing through a mouthpiece; and a diffusion capacity test, which tests how well a patient’s lungs are able to transfer oxygen and carbon dioxide to and from the bloodstream by testing the efficiency of the alveoli (small air sacs in the lungs). Results from these tests give doctors important information about the functioning of the lungs, but do not tell them what is wrong with the patient. This requires interpretation of the results in order to reach a diagnosis.

In this study, the researchers used historical data from 1430 patients from 33 Belgian hospitals. The data were assessed by an expert panel of pulmonologists and interpretations were measured against gold standard guidelines from the European Respiratory Society and the American Thoracic Society. The expert panel considered patients’ medical histories, results of all PFTs and any additional tests, before agreeing on the correct interpretation and diagnosis for each patient.

“When training the AI algorithm, the use of good quality data is of utmost importance,” explained Dr Topalovic. “An expert panel examined all the results from the pulmonary function tests, and the other tests and medical information as well. They used these to reach agreement on final diagnoses that the experts were confident were correct. These were then used to develop an algorithm to train the AI, before validating it by incorporating it into real clinical practice at the University Hospital Leuven. The challenging part was making sure the algorithm recognised patterns of up to nine different diseases.”

Then, 120 pulmonologists from 16 European hospitals (from Belgium, France, The Netherlands, Germany and Luxembourg) made 6000 interpretations of PFT data from 50 randomly selected patients. The AI also examined the same data. The results from both were measured against the gold standard guidelines in the same way as during development of the algorithm.

The researchers found that the interpretation of the PFTs by the pulmonologists matched the guidelines in 74% of cases (with a range of 56-88%), but the AI-based software interpretations perfectly matched the guidelines (100%). The doctors were able to correctly diagnose the primary disease in 45% of cases (with a range of 24-62%), while the AI gave a correct diagnosis in 82% of cases.
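The headline figures here are simple agreement rates against the guideline gold standard. A minimal sketch of how such agreement is scored (the diagnoses below are invented for illustration; the study used ERS/ATS guideline-based labels agreed by an expert panel):

```python
# Score interpretations against a gold standard as a simple agreement rate.
# The diagnosis labels below are hypothetical, not the study's data.

def agreement_rate(predictions, gold):
    """Fraction of cases where a prediction matches the gold-standard label."""
    matches = sum(p == g for p, g in zip(predictions, gold))
    return matches / len(gold)

gold          = ["COPD", "asthma", "ILD", "healthy", "COPD"]
pulmonologist = ["COPD", "COPD",   "ILD", "healthy", "asthma"]
ai_model      = ["COPD", "asthma", "ILD", "healthy", "COPD"]

print(agreement_rate(pulmonologist, gold))  # 0.6
print(agreement_rate(ai_model, gold))       # 1.0
```

In the study this comparison was run over 6000 interpretations, with the AI matching the guidelines in every case.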

Dr Topalovic said: “We found that the interpretation of pulmonary function tests and the diagnosis of respiratory disease by pulmonologists is not an easy task. It takes more information and further tests to reach a satisfactory level of accuracy. On the other hand, the AI-based software has superior performance and therefore can provide a powerful decision support tool to improve current clinical practice. Feedback from doctors is very positive, particularly as it helps them to identify difficult patterns of rare diseases.”

Two large Belgian hospitals are already using the AI-based software to improve interpretations and diagnoses. “We firmly believe that we can empower doctors to make their interpretations and diagnoses easier, faster and better. AI will not replace doctors, that is certain, because doctors are able to see a broader perspective than that presented by pulmonary function tests alone. This enables them to make decisions based on a combination of many different factors. However, it is evident that AI will augment our abilities to accomplish more and decrease chances for errors and redundant work. The AI-based software has superior performance and therefore may provide a powerful decision support tool to improve current clinical practice.

“Nowadays, we trust computers to fly our planes, to drive our cars and to survey our security. We can also have confidence in computers to label medical conditions based on specific data. The beauty is that, independent of location or medical coverage, AI can provide the highest standards of PFT interpretation and patients can have the best and affordable diagnostic experience. Whether it will be widely used in future clinical applications is just a matter of time, but will be driven by the acceptance of the medical community,” said Dr Topalovic.

He said the next step would be to get more hospitals to use this technology and investigate transferring the AI technology to primary care, where the data would be captured by general practitioners (GPs) to help them make correct diagnoses and referrals.

Professor Mina Gaga is President of the European Respiratory Society, and Medical Director and Head of the Respiratory Department of Athens Chest Hospital, Greece, and was not involved in the study. She said: “This work shows the exciting possibilities that artificial intelligence offers to doctors to help them provide a better, quicker service to their patients. Over the past 20 to 30 years, the evolution in technology has led to better diagnosis and treatments: a revolution in imaging techniques, in molecular testing and in targeted treatments has made medicine easier and more effective. AI is the new addition! I think it will be invaluable in helping doctors and patients and will be an important aid to their decision-making.”

[1] Abstract no: PA5290, “Artificial intelligence improves experts in reading pulmonary function tests”, by M. Topalovic et al; Poster Discussion “The importance of the pulmonary function test in different clinical settings”, 08.30-10.30 hrs CEST, Wednesday 19 September, Room 7.2D.

The research was funded by Vlaams Agentschap Innoveren & Ondernemen (VLAIO), the Belgian government’s Agency for Innovation and Entrepreneurship.

http://www.europeanlung.org/en/news-and-events/media-centre/press-releases/artificial-intelligence-improves-doctors%E2%80%99-ability-to-correctly-interpret-tests-and-diagnose-lung-disease



A machine learning-based model using data routinely gathered in primary care identified patients with dementia in such settings, according to research recently published in BJGP Open.

“Improving dementia care through increased and timely diagnosis is a priority, yet almost half of those living with dementia do not receive a timely diagnosis,” Emmanuel A. Jammeh, PhD, of the science and engineering department at Plymouth University in the United Kingdom, and colleagues wrote.

“A cost-effective tool that can be used by [primary care providers] to identify patients likely to be living with dementia, based only on routine data, would be extremely useful. Such a tool could be used to select high-risk patients who could be invited for targeted screening,” they added.

The researchers used Read codes, a set of clinical terms used in the U.K. to summarize data for general practice, to develop a machine learning-based model to identify patients with dementia. The Read codes were selected based on their significant association with patients with dementia, and included codes for risk factors, symptoms and behaviors that are collected in primary care. To test the model, researchers collected Read-encoded data from 26,483 patients living in England aged 65 years and older.

Jammeh and colleagues found that their machine learning-based model achieved a sensitivity of 84.47% and a specificity of 86.67% for identifying dementia.
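Sensitivity and specificity are straightforward to compute from a confusion matrix. A short sketch, with counts invented for illustration (chosen to land near the paper’s reported figures; the study’s actual counts are not given here):

```python
# Sensitivity and specificity from confusion-matrix counts.
# These counts are hypothetical, picked only to reproduce numbers close to
# the reported 84.47% sensitivity and 86.67% specificity.

def sensitivity(tp, fn):
    """True positive rate: share of dementia patients correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: share of non-dementia patients correctly cleared."""
    return tn / (tn + fp)

tp, fn = 87, 16   # dementia cases: flagged vs. missed
tn, fp = 130, 20  # non-dementia cases: cleared vs. falsely flagged

print(round(sensitivity(tp, fn) * 100, 2))  # 84.47
print(round(specificity(tn, fp) * 100, 2))  # 86.67
```

The trade-off between the two is what matters for a screening tool: high sensitivity avoids missed cases, high specificity avoids needlessly referring healthy patients.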

“This is the first demonstration of a machine-learning approach to identifying dementia using routinely collected [National Health Service] data,” researchers wrote.

“With the expected growth in dementia prevalence, the number of specialist memory clinics may be insufficient to meet the expected demand for diagnosis. Furthermore, although current ‘gold standards’ in dementia diagnosis may be effective, they involve the use of expensive neuroimaging (for example, positron emission tomography scans) and time-consuming neuropsychological assessments which is not ideal for routine screening of dementia,” they continued.

The model will be evaluated on other datasets, and its validation will be tested “more extensively” at general practitioner practices in the future, Jammeh and colleagues added. – by Janel Miller

https://www.healio.com/family-medicine/geriatric-medicine/news/online/%7B62392171-6ad7-481a-9289-bd69df49d4a4%7D/machine-learning-based-model-may-identify-dementia-in-primary-care

A new study using machine learning has identified brain-based dimensions of mental health disorders, an advance towards much-needed biomarkers to more accurately diagnose and treat patients. A team at Penn Medicine led by Theodore D. Satterthwaite, MD, an assistant professor in the department of Psychiatry, mapped abnormalities in brain networks to four dimensions of psychopathology: mood, psychosis, fear, and disruptive externalizing behavior. The research is published in Nature Communications this week.

Currently, psychiatry relies on patient reporting and physician observations alone for clinical decision making, while other branches of medicine have incorporated biomarkers to aid in diagnosis, determination of prognosis, and selection of treatment for patients. While previous studies using standard clinical diagnostic categories have found evidence for brain abnormalities, the high level of diversity within disorders and comorbidity between disorders has limited how this kind of research may lead to improvements in clinical care.

“Psychiatry is behind the rest of medicine when it comes to diagnosing illness,” said Satterthwaite. “For example, when a patient comes in to see a doctor with most problems, in addition to talking to the patient, the physician will recommend lab tests and imaging studies to help diagnose their condition. Right now, that is not how things work in psychiatry. In most cases, all psychiatric diagnoses rely on just talking to the patient. One of the reasons for this is that we don’t understand how abnormalities in the brain lead to psychiatric symptoms. This research effort aims to link mental health issues and their associated brain network abnormalities to psychiatric symptoms using a data-driven approach.”

To uncover the brain networks associated with psychiatric disorders, the team studied a large sample of adolescents and young adults (999 participants, ages 8 to 22). All participants completed both functional MRI scans and a comprehensive evaluation of psychiatric symptoms as part of the Philadelphia Neurodevelopmental Cohort (PNC), an effort led by Raquel E. Gur, MD, Ph.D., professor of Psychiatry, Neurology, and Radiology, that was funded by the National Institute of Mental Health. The brain and symptom data were then jointly analyzed using a machine learning method called sparse canonical correlation analysis.

This analysis revealed patterns of changes in brain networks that were strongly related to psychiatric symptoms. In particular, the findings highlighted four distinct dimensions of psychopathology—mood, psychosis, fear, and disruptive behavior—all of which were associated with a distinct pattern of abnormal connectivity across the brain.

The researchers found that each brain-guided dimension contained symptoms from several different clinical diagnostic categories. For example, the mood dimension was composed of symptoms from three categories, e.g. depression (feeling sad), mania (irritability), and obsessive-compulsive disorder (recurrent thoughts of self-harm). Similarly, the disruptive externalizing behavior dimension was driven primarily by symptoms of both Attention Deficit Hyperactivity Disorder (ADHD) and Oppositional Defiant Disorder (ODD), but also included the irritability item from the depression domain. These findings suggest that when both brain and symptomatic data are taken into consideration, psychiatric symptoms do not neatly fall into established categories. Instead, groups of symptoms emerge from diverse clinical domains to form dimensions that are linked to specific patterns of abnormal connectivity in the brain.

“In addition to these specific brain patterns in each dimension, we also found common brain connectivity abnormalities that are shared across dimensions,” said Cedric Xia, an MD-PhD candidate and the paper’s lead author. “Specifically, a pair of brain networks called default mode network and frontal-parietal network, whose connections usually grow apart during brain development, become abnormally integrated in all dimensions.”

These two brain networks have long intrigued psychiatrists and neuroscientists because of their crucial role in complex mental processes such as self-control, memory, and social interactions. The findings in this study support the theory that many types of psychiatric illness are related to abnormalities of brain development.

The team also examined how psychopathology differed across age and sex. They found that patterns associated with both mood and psychosis became significantly more prominent with age. Additionally, brain connectivity patterns linked to mood and fear were both stronger in female participants than males.

“This study shows that we can start to use the brain to guide our understanding of psychiatric disorders in a way that’s fundamentally different than grouping symptoms into clinical diagnostic categories. By moving away from clinical labels developed decades ago, perhaps we can let the biology speak for itself,” said Satterthwaite. “Our ultimate hope is that understanding the biology of mental illnesses will allow us to develop better treatments for our patients.”

More information: Cedric Huchuan Xia et al, Linked dimensions of psychopathology and connectivity in functional brain networks, Nature Communications (2018). DOI: 10.1038/s41467-018-05317-y

https://medicalxpress.com/news/2018-08-machine-links-dimensions-mental-illness.html


By Diana Kwon

A computer program can learn to navigate through space and spontaneously mimics the electrical activity of grid cells, neurons that help animals navigate their environments, according to a study published May 9 in Nature.

“This paper came out of the blue, like a shot, and it’s very exciting,” Edvard Moser, a neuroscientist at the Kavli Institute for Systems Neuroscience in Norway who was not involved in the work, tells Nature in an accompanying news story. “It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology.” Moser shared a Nobel Prize for the discovery of grid cells with neuroscientists May-Britt Moser and John O’Keefe in 2014.

When scientists trained an artificial neural network, in the form of virtual rats, to navigate through a simulated environment, they found that the algorithm produced patterns of activity similar to those found in the grid cells of the human brain. “We wanted to see whether we could set up an artificial network with an appropriate task so that it would actually develop grid cells,” study coauthor Caswell Barry of University College London, tells Quanta. “What was surprising was how well it worked.”

The team then tested the program in a more-complex, maze-like environment, and found that not only did the virtual rats make their way to the end, they were also able to outperform a human expert at the task.

“It is doing the kinds of things that animals do and that is to take direct routes wherever possible and shortcuts when they are available,” coauthor Dharshan Kumaran, a senior researcher at Google’s AI company DeepMind, tells The Guardian.

DeepMind researchers hope to use these types of artificial neural networks to study other parts of the brain, such as those involved in understanding sound and controlling limbs, according to Wired. “This has proven to be extremely hard with traditional neuroscience so, in the future, if we could improve these artificial models, we could potentially use them to understand other brain functionalities,” study coauthor Andrea Banino, a research scientist at DeepMind, tells Wired. “This would be a giant step toward the future of brain understanding.”

https://www.the-scientist.com/?articles.view/articleNo/54534/title/Artificial-Intelligence-Mimics-Navigation-Cells-in-the-Brain/&utm_campaign=TS_DAILY%20NEWSLETTER_2018&utm_source=hs_email&utm_medium=email&utm_content=62845247&_hsenc=p2ANqtz-_1eI9gR1hZiJ5AMHakKnqqytoBx4h3r-AG5kHqEt0f3qMz5KQh5XeBQGeWxvqyvET-l70AGfikSD0n3SiVYETaAbpvtA&_hsmi=62845247

By Brandon Specktor

Imagine your least-favorite world leader. (Take as much time as you need.)

Now, imagine if that person wasn’t a human, but a network of millions of computers around the world. This digi-dictator has instant access to every scrap of recorded information about every person who’s ever lived. It can make millions of calculations in a fraction of a second, controls the world’s economy and weapons systems with godlike autonomy and — scariest of all — can never, ever die.

This unkillable digital dictator, according to Tesla and SpaceX founder Elon Musk, is one of the darker scenarios awaiting humankind’s future if artificial-intelligence research continues without serious regulation.

“We are rapidly headed toward digital superintelligence that far exceeds any human, I think it’s pretty obvious,” Musk said in a new AI documentary called “Do You Trust This Computer?” directed by Chris Paine (who interviewed Musk previously for the documentary “Who Killed The Electric Car?”). “If one company or a small group of people manages to develop godlike digital super-intelligence, they could take over the world.”

Humans have tried to take over the world before. However, an authoritarian AI would have one terrible advantage over like-minded humans, Musk said.

“At least when there’s an evil dictator, that human is going to die,” Musk added. “But for an AI there would be no death. It would live forever, and then you’d have an immortal dictator, from which we could never escape.”

And, this hypothetical AI-dictator wouldn’t even have to be evil to pose a threat to humans, Musk added. All it has to be is determined.

“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings,” Musk said. “It’s just like, if we’re building a road, and an anthill happens to be in the way. We don’t hate ants, we’re just building a road. So, goodbye, anthill.”

Those who follow news from the Musk-verse will not be surprised by his opinions in the new documentary; the tech mogul has long been a vocal critic of unchecked artificial intelligence. In 2014, Musk called AI humanity’s “biggest existential threat,” and in 2015, he joined a handful of other tech luminaries and researchers, including Stephen Hawking, to urge the United Nations to ban killer robots. He has said unregulated AI poses “vastly more risk than North Korea” and proposed starting some sort of federal oversight program to monitor the technology’s growth.

“Public risks require public oversight,” he tweeted. “Getting rid of the FAA [wouldn’t] make flying safer. They’re there for good reason.”

https://www.livescience.com/62239-elon-musk-immortal-artificial-intelligence-dictator.html?utm_source=notification

by Emily Mullin

When David Graham wakes up in the morning, the flat white box that’s Velcroed to the wall of his room in Robbie’s Place, an assisted living facility in Marlborough, Massachusetts, begins recording his every movement.

It knows when he gets out of bed, gets dressed, walks to his window, or goes to the bathroom. It can tell if he’s sleeping or has fallen. It does this by using low-power wireless signals to map his gait speed, sleep patterns, location, and even breathing pattern. All that information gets uploaded to the cloud, where machine-learning algorithms find patterns in the thousands of movements he makes every day.

The rectangular boxes are part of an experiment to help researchers track and understand the symptoms of Alzheimer’s.

It’s not always obvious when patients are in the early stages of the disease. Alterations in the brain can cause subtle changes in behavior and sleep patterns years before people start experiencing confusion and memory loss. Researchers think artificial intelligence could recognize these changes early and identify patients at risk of developing the most severe forms of the disease.

Spotting the first indications of Alzheimer’s years before any obvious symptoms come on could help pinpoint people most likely to benefit from experimental drugs and allow family members to plan for eventual care. Devices equipped with such algorithms could be installed in people’s homes or in long-term care facilities to monitor those at risk. For patients who already have a diagnosis, such technology could help doctors make adjustments in their care.

Drug companies, too, are interested in using machine-learning algorithms, in their case to search through medical records for the patients most likely to benefit from experimental drugs. Once people are in a study, AI might be able to tell investigators whether the drug is addressing their symptoms.

Currently, there’s no easy way to diagnose Alzheimer’s. No single test exists, and brain scans alone can’t determine whether someone has the disease. Instead, physicians have to look at a variety of factors, including a patient’s medical history and observations reported by family members or health-care workers. So machine learning could pick up on patterns that otherwise would easily be missed.


David Graham, one of Vahia’s patients, has one of the AI-powered devices in his room at Robbie’s Place, an assisted living facility in Marlborough, Massachusetts.

Graham, unlike the four other patients with such devices in their rooms, hasn’t been diagnosed with Alzheimer’s. But researchers are monitoring his movements and comparing them with patterns seen in patients who doctors suspect have the disease.

Dina Katabi and her team at MIT’s Computer Science and Artificial Intelligence Laboratory initially developed the device as a fall detector for older people. But they soon realized it had far more uses. If it could pick up on a fall, they thought, it must also be able to recognize other movements, like pacing and wandering, which can be signs of Alzheimer’s.

Katabi says their intention was to monitor people without needing them to put on a wearable tracking device every day. “This is completely passive. A patient doesn’t need to put sensors on their body or do anything specific, and it’s far less intrusive than a video camera,” she says.

How it works

Graham hardly notices the white box hanging in his sunlit, tidy room. He’s most aware of it on days when Ipsit Vahia makes his rounds and tells him about the data it’s collecting. Vahia is a geriatric psychiatrist at McLean Hospital and Harvard Medical School, and he and the technology’s inventors at MIT are running a small pilot study of the device.

Graham looks forward to these visits. During a recent one, he was surprised when Vahia told him he was waking up at night. The device was able to detect it, though Graham didn’t know he was doing it.

The device’s wireless radio signal, only a thousandth as powerful as wi-fi, reflects off everything in a 30-foot radius, including human bodies. Every movement—even the slightest ones, like breathing—causes a change in the reflected signal.
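The device’s core signal is change: a static room returns a steady reflection, while any movement perturbs it. A toy sketch of that idea, using a single synthetic amplitude trace (the real system analyzes rich radio reflections with machine-learned models, not one number and a threshold):

```python
# Toy motion detector: flag time steps where the reflected-signal amplitude
# jumps by more than a threshold. The samples are synthetic; this is only an
# illustration of "movement changes the reflection", not the device's method.

def detect_motion(samples, threshold=0.05):
    """Return indices where successive samples differ by more than threshold."""
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

# Steady reflection, a disturbance (someone walks past), then steady again.
signal = [1.00, 1.01, 1.00, 1.30, 0.70, 1.20, 1.00, 1.01, 1.00]
print(detect_motion(signal))  # [3, 4, 5, 6]
```

Even breathing shows up as a small periodic change in the reflection, which is why the device can track it passively.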

Katabi and her team developed machine-learning algorithms that analyze all these minute reflections. They trained the system to recognize simple motions like walking and falling, and more complex movements like those associated with sleep disturbances. “As you teach it more and more, the machine learns, and the next time it sees a pattern, even if it’s too complex for a human to abstract that pattern, the machine recognizes that pattern,” Katabi says.

Over time, the device creates large readouts of data that show patterns of behavior. The AI is designed to pick out deviations from those patterns that might signify things like agitation, depression, and sleep disturbances. It could also pick up whether a person is repeating certain behaviors during the day. These are all classic symptoms of Alzheimer’s.
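Picking out “deviations from those patterns” is, at its simplest, anomaly detection against a personal baseline. A hedged sketch using a z-score over daily activity counts (the numbers are invented, and the actual system learns far richer models of gait, sleep, and location than a single daily count):

```python
# Flag days whose activity count deviates sharply from the person's own
# baseline, using a simple z-score. Counts are hypothetical; the real system
# models many behavioral signals jointly.
import statistics

def flag_deviations(daily_counts, z_cutoff=2.0):
    """Return indices of days more than z_cutoff standard deviations from the mean."""
    mean = statistics.mean(daily_counts)
    sd = statistics.stdev(daily_counts)
    return [i for i, c in enumerate(daily_counts)
            if abs(c - mean) / sd > z_cutoff]

# Two weeks of nightly movement counts; day 9 shows unusual night-time activity.
counts = [12, 10, 11, 13, 12, 11, 10, 12, 11, 48, 12, 11, 13, 10]
print(flag_deviations(counts))  # [9]
```

A flagged night of pacing like day 9 is exactly the kind of deviation a clinician would want surfaced early.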

“If you can catch these deviations early, you will be able to anticipate them and help manage them,” Vahia says.

In a patient with an Alzheimer’s diagnosis, Vahia and Katabi were able to tell that she was waking up at 2 a.m. and wandering around her room. They also noticed that she would pace more after certain family members visited. After confirming that behavior with a nurse, Vahia adjusted the patient’s dose of a drug used to prevent agitation.


Ipsit Vahia and Dina Katabi are testing an AI-powered device that Katabi’s lab built to monitor the behaviors of people with Alzheimer’s as well as those at risk of developing the disease.

Brain changes

AI is also finding use in helping physicians detect early signs of Alzheimer’s in the brain and understand how those physical changes unfold in different people. “When a radiologist reads a scan, it’s impossible to tell whether a person will progress to Alzheimer’s disease,” says Pedro Rosa-Neto, a neurologist at McGill University in Montreal.

Rosa-Neto and his colleague Sulantha Mathotaarachchi developed an algorithm that analyzed hundreds of positron-emission tomography (PET) scans from people who had been deemed at risk of developing Alzheimer’s. From medical records, the researchers knew which of these patients had gone on to develop the disease within two years of a scan, but they wanted to see if the AI system could identify them just by picking up patterns in the images.

Sure enough, the algorithm was able to spot patterns in clumps of amyloid—a protein often associated with the disease—in certain regions of the brain. Even trained radiologists would have had trouble noticing these issues on a brain scan. From the patterns, it was able to detect with 84 percent accuracy which patients ended up with Alzheimer’s.

Machine learning is also helping doctors predict the severity of the disease in different patients. Duke University physician and scientist P. Murali Doraiswamy is using machine learning to figure out what stage of the disease patients are in and whether their condition is likely to worsen.

“We’ve been seeing Alzheimer’s as a one-size-fits-all problem,” says Doraiswamy. But people with Alzheimer’s don’t all experience the same symptoms, and some might get worse faster than others. Doctors have no idea which patients will remain stable for a while or which will quickly get sicker. “So we thought maybe the best way to solve this problem was to let a machine do it,” he says.

He worked with Dragan Gamberger, an artificial-intelligence expert at the Rudjer Boskovic Institute in Croatia, to develop a machine-learning algorithm that sorted through brain scans and medical records from 562 patients who had mild cognitive impairment at the beginning of a five-year period.

Two distinct groups emerged: those whose cognition declined significantly and those whose symptoms changed little or not at all over the five years. The system was able to pick up changes in the loss of brain tissue over time.
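Groups “emerging” from unlabeled patient data is what a clustering algorithm recovers. A minimal one-dimensional k-means sketch on invented five-year cognitive-decline scores (the study clustered high-dimensional brain-scan and medical-record data, not a single score; this only illustrates the idea):

```python
# Minimal 1-D k-means (Lloyd's algorithm) with two clusters, applied to
# hypothetical five-year cognitive-decline scores. The actual study worked
# on high-dimensional imaging and record data.

def kmeans_1d(values, centers, iterations=20):
    """Repeatedly assign points to the nearest center, then recompute centers."""
    for _ in range(iterations):
        clusters = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(v - c))
            clusters[nearest].append(v)
        centers = [sum(vs) / len(vs) for vs in clusters.values() if vs]
    return sorted(centers)

# Decline scores: near zero = stable, large = significant decline.
scores = [0.1, 0.3, 0.2, 0.0, 4.8, 5.1, 4.5, 0.4, 5.3, 4.9]
print(kmeans_1d(scores, centers=[0.0, 1.0]))  # two centers, near 0.2 and 4.92
```

With these scores the algorithm settles on one center near 0.2 (the stable group) and one near 4.9 (the declining group), mirroring the two clusters the researchers observed.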

A third group was somewhere in the middle, between mild cognitive impairment and advanced Alzheimer’s. “We don’t know why these clusters exist yet,” Doraiswamy says.

Clinical trials

From 2002 to 2012, 99 percent of investigational Alzheimer’s drugs failed in clinical trials. One reason is that no one knows exactly what causes the disease. But another reason is that it is difficult to identify the patients most likely to benefit from specific drugs.

AI systems could help design better trials. “Once we have those people together with common genes, characteristics, and imaging scans, that’s going to make it much easier to test drugs,” says Marilyn Miller, who directs AI research in Alzheimer’s at the National Institute on Aging, part of the US National Institutes of Health.

Then, once patients are enrolled in a study, researchers could continuously monitor them to see if they’re benefiting from the medication.

“One of the biggest challenges in Alzheimer’s drug development is we haven’t had a good way of parsing out the right population to test the drug on,” says Vaibhav Narayan, a researcher on Johnson & Johnson’s neuroscience team.

He says machine-learning algorithms will greatly speed the process of recruiting patients for drug studies. And if AI can pick out which patients are most likely to get worse more quickly, it will be easier for investigators to tell if a drug is having any benefit.

That way, if doctors like Vahia notice signs of Alzheimer’s in a person like Graham, they can quickly get him signed up for a clinical trial in hopes of curbing the devastating effects that would otherwise come years later.

Miller thinks AI could be used to diagnose and predict Alzheimer’s in patients as soon as five years from now. But she says it’ll require a lot of data to make sure the algorithms are accurate and reliable. Graham, for one, is doing his part to help out.

https://www.technologyreview.com/s/609236/ai-can-spot-signs-of-alzheimers-before-your-family-does/

By Aaron Frank

During an October 2015 press conference announcing the autopilot feature of the Tesla Model S, which allowed the car to drive semi-autonomously, Tesla CEO Elon Musk said each driver would become an “expert trainer” for every Model S. Each car could improve its own autonomous features by learning from its driver, but more significantly, when one Tesla learned from its own driver—that knowledge could then be shared with every other Tesla vehicle.

As Fred Lambert with Electrek reported shortly after, Model S owners noticed how quickly the car’s driverless features were improving. In one example, Teslas were taking incorrect early exits along highways, forcing their owners to manually steer the car along the correct route. After just a few weeks, owners noted the cars were no longer taking premature exits.

“I find it remarkable that it is improving this rapidly,” said one Tesla owner.

Intelligent systems, like those powered by the latest round of machine learning software, aren’t just getting smarter: they’re getting smarter faster. Understanding the rate at which these systems develop can be a particularly challenging part of navigating technological change.

Ray Kurzweil has written extensively on the gaps in human understanding between what he calls the “intuitive linear” view of technological change and the “exponential” rate of change now taking place. Almost two decades after writing the influential essay on what he calls “The Law of Accelerating Returns”—a theory of evolutionary change concerned with the speed at which systems improve over time—connected devices are now sharing knowledge between themselves, escalating the speed at which they improve.

“I think that this is perhaps the biggest exponential trend in AI,” said Hod Lipson, professor of mechanical engineering and data science at Columbia University, in a recent interview.

“All of the exponential technology trends have different ‘exponents,’” Lipson added. “But this one is potentially the biggest.”

According to Lipson, what we might call “machine teaching”—when devices communicate gained knowledge to one another—is a radical step up in the speed at which these systems improve.

“Sometimes it is cooperative, for example when one machine learns from another like a hive mind. But sometimes it is adversarial, like in an arms race between two systems playing chess against each other,” he said.

Lipson believes this way of developing AI is a big deal, in part, because it can bypass the need for training data.

“Data is the fuel of machine learning, but even for machines, some data is hard to get—it may be risky, slow, rare, or expensive. In those cases, machines can share experiences or create synthetic experiences for each other to augment or replace data. It turns out that this is not a minor effect, it actually is self-amplifying, and therefore exponential.”

Lipson sees the recent breakthrough from Google’s DeepMind, a project called AlphaGo Zero, as a stunning example of an AI learning without training data. Many are familiar with AlphaGo, the machine learning AI which became the world’s best Go player after studying a massive training data-set comprising millions of human Go moves. AlphaGo Zero, however, was able to beat even that Go-playing AI, simply by learning the rules of the game and playing by itself—no training data necessary. Then, just to show off, it beat the world’s best chess-playing software after starting from scratch and training for only eight hours.
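AlphaGo Zero’s actual method (Monte Carlo tree search guided by a deep neural network) is far more sophisticated than anything that fits in a few lines, but the core idea of self-play—learning from the rules alone, with no human training data—can be sketched as a toy. The code below is a hypothetical illustration, not DeepMind’s algorithm: a tabular learner that masters the simple game of Nim (take 1 or 2 stones; whoever takes the last stone wins) purely by playing against itself.

```python
import random

def train_self_play(episodes=20000, pile=10, alpha=0.5, eps=0.1):
    """Self-play on Nim with Monte-Carlo-style value updates.
    Both sides share one value table and improve by playing each other;
    the only input is the rules of the game, not example games."""
    Q = {}  # (stones_left, move) -> estimated value for the player to move
    for _ in range(episodes):
        n = pile
        history = []  # (state, move) for each ply of this game
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if random.random() < eps:
                m = random.choice(moves)      # explore
            else:                             # exploit current knowledge
                m = max(moves, key=lambda m: Q.get((n, m), 0.0))
            history.append((n, m))
            n -= m
        # The player who took the last stone won: +1 for their moves,
        # -1 for the loser's, alternating back through the game.
        reward = 1.0
        for state, move in reversed(history):
            q = Q.get((state, move), 0.0)
            Q[(state, move)] = q + alpha * (reward - q)
            reward = -reward
    return Q

def best_move(Q, n):
    """Greedy move from the learned value table."""
    return max((m for m in (1, 2) if m <= n),
               key=lambda m: Q.get((n, m), 0.0))
```

After training, the table recovers Nim’s known optimal strategy (always leave your opponent a multiple of three stones), discovered entirely through self-play.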

Now imagine thousands or more AlphaGo Zeroes instantaneously sharing their gained knowledge.

This isn’t just about games, though. Already, we’re seeing how it will have a major impact on the speed at which businesses can improve the performance of their devices.

One example is GE’s new industrial digital twin technology—a software simulation of a machine that models what is happening with the equipment. Think of it as a machine with its own self-image—which it can also share with technicians.

A steam turbine with a digital twin, for instance, can measure steam temperatures, rotor speeds, cold starts, and other data to predict breakdowns and warn technicians to prevent expensive repairs. The digital twins make these predictions by studying their own performance, but they also rely on models every other steam turbine has developed.
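As a rough illustration of the idea—a hypothetical sketch, not GE’s actual digital-twin software—a twin can maintain a simple self-model built from its own sensor history and warn technicians when a new reading drifts too far from what the model expects:

```python
class TurbineTwin:
    """Minimal digital-twin sketch: a software self-image of one turbine
    that learns 'normal' from its own readings and flags deviations."""

    def __init__(self):
        self.readings = []  # observed steam temperatures, in degrees C

    def record(self, temp):
        self.readings.append(temp)

    def expected(self):
        # Crude self-model: the running mean of past readings.
        # A real twin would use a physics-based or learned model.
        return sum(self.readings) / len(self.readings)

    def needs_inspection(self, temp, tolerance=15.0):
        # Warn when a new reading deviates from the self-model by more
        # than the allowed tolerance.
        return abs(temp - self.expected()) > tolerance
```

A reading far outside the turbine’s learned normal range triggers a warning before an expensive breakdown, which is the essence of the predictive-maintenance claim above.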

As machines begin to learn from their environments in new and powerful ways, their development is accelerated by communicating what they learn with each other. The collective intelligence of every GE turbine, spread across the planet, can accelerate each individual machine’s predictive ability. While it may take one driverless car significant time to learn to navigate a particular city, one hundred driverless cars navigating that same city together, all sharing what they learn, can improve their algorithms in far less time.
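One common way this kind of fleet-wide sharing is implemented in practice is federated averaging: each device fits a model on its own local data, and only the model parameters—not the raw data—are pooled and averaged across the fleet. The sketch below is a minimal illustration of that pattern (it is not any specific vendor’s system), using a simple least-squares line fit as each device’s local model.

```python
def fit_line(xs, ys):
    """One device's local model: least-squares slope and intercept
    fitted only to that device's own observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def federated_average(local_models):
    """Pool knowledge across the fleet by averaging parameters,
    so no device ever has to share its raw sensor data."""
    slopes, intercepts = zip(*local_models)
    return (sum(slopes) / len(slopes),
            sum(intercepts) / len(intercepts))
```

Each car or turbine learns from what it alone has seen, yet every device ends up with a model informed by the whole fleet’s experience.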

As other AI-powered devices begin to leverage this shared knowledge transfer, we could see an even faster pace of development. So if you think things are developing quickly today, remember we’re only just getting started.

https://singularityhub.com/2018/01/21/machines-teaching-each-other-could-be-the-biggest-exponential-trend-in-ai/