How a New AI Translated Brain Activity to Speech With 97 Percent Accuracy

By Edd Gent

The idea of a machine that can decode your thoughts might sound creepy, but for thousands of people who have lost the ability to speak due to disease or disability, it could be game-changing. Even for the able-bodied, being able to type out an email just by thinking, or to send commands to a digital assistant telepathically, could be hugely useful.

That vision may have come a step closer after researchers at the University of California, San Francisco demonstrated that they could translate brain signals into complete sentences with error rates as low as three percent, which is below the threshold for professional speech transcription.

While we’ve been able to decode parts of speech from brain signals for around a decade, so far most of the solutions have been a long way from consistently translating intelligible sentences. Last year, researchers used a novel approach that achieved some of the best results so far by using brain signals to animate a simulated vocal tract, but only 70 percent of the words were intelligible.

The key to the improved performance achieved by the authors of the new paper in Nature Neuroscience was their realization that there were strong parallels between translating brain signals to text and machine translation between languages using neural networks, which is now highly accurate for many languages.

While most efforts to decode brain signals have focused on identifying neural activity that corresponds to particular phonemes—the distinct chunks of sound that make up words—the researchers decided to mimic machine translation, where the entire sentence is translated at once. This has proven a powerful approach; as certain words are always more likely to appear close together, the system can rely on context to fill in any gaps.

The team used the same encoder-decoder approach commonly used for machine translation, in which one neural network analyzes the input signal—normally text, but in this case brain signals—to create a representation of the data, and then a second neural network translates this into the target language.
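As a rough illustration of that encoder-decoder pattern (not the authors' exact architecture, which is detailed in the paper), the sketch below uses PyTorch with invented layer sizes and channel counts: one recurrent network summarizes a sequence of neural-signal feature vectors, and a second one emits words from a small closed vocabulary.

```python
# A minimal sketch of the encoder-decoder idea, not the study's exact model.
# Channel counts, hidden sizes and the word vocabulary size are illustrative.
import torch
import torch.nn as nn

class BrainToTextSeq2Seq(nn.Module):
    def __init__(self, n_channels=256, hidden=400, vocab_size=250):
        super().__init__()
        # Encoder: consumes one neural-signal feature vector per time step.
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        # Decoder: emits one word token at a time, seeded by the encoder's state.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, signals, word_tokens):
        # signals: (batch, time_steps, n_channels); word_tokens: (batch, n_words)
        # (a real setup would shift the decoder inputs by one, with a start token)
        _, state = self.encoder(signals)           # summarize the whole recording
        dec_in = self.embed(word_tokens)           # teacher forcing during training
        dec_out, _ = self.decoder(dec_in, state)   # condition on the encoder summary
        return self.out(dec_out)                   # logits over the word vocabulary

# Toy usage: random "brain signals" and a four-word target sentence.
model = BrainToTextSeq2Seq()
logits = model(torch.randn(1, 100, 256), torch.randint(0, 250, (1, 4)))
print(logits.shape)  # torch.Size([1, 4, 250])
```

The key design choice is that the decoder predicts whole word sequences conditioned on a summary of the entire recording, which is what lets context fill in ambiguous stretches of signal.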

They trained their system on brain activity recorded from four women who had electrodes implanted in their brains to monitor seizures, as they read out a set of 50 sentences containing 250 unique words. This allowed the first network to work out which neural activity correlated with which parts of speech.

In testing, the system relied only on the neural signals and achieved error rates below eight percent for two of the four subjects, matching the accuracy of professional transcribers.
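"Error rate" here is word error rate, the standard transcription metric: the number of word substitutions, insertions and deletions needed to turn the decoded sentence into the reference, divided by the reference length. A generic illustration in Python (not the paper's evaluation code, and the sentences are invented):

```python
# Word error rate via edit distance over words; toy sentences, not study data.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    # Standard edit-distance table between the two word sequences.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete a reference word
                          d[i][j - 1] + 1,         # insert a spurious word
                          d[i - 1][j - 1] + cost)  # substitute (or match)
    return d[len(r)][len(h)] / len(r)

print(wer("the birch canoe slid on the smooth planks",
          "the birch canoe slid on smooth planks"))  # 1 error / 8 words = 0.125
```

On this measure, an eight percent error rate corresponds to roughly one wrong word in every twelve to thirteen.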

Inevitably, there are caveats. First, the system could only decode 30-50 specific sentences built from a limited vocabulary of 250 words. It also requires people to have electrodes implanted in their brains, which is currently permitted only for a small number of highly specific medical reasons. However, there are several signs that this approach holds considerable promise.

One concern was that because the system was being tested on sentences that were included in its training data, it might simply be learning to match specific sentences to specific neural signatures. That would suggest it wasn’t really learning the constituent parts of speech, which would make it harder to generalize to unfamiliar sentences.

But when the researchers added recordings of other sentences, not used in testing, to the training data, error rates dropped significantly, suggesting that the system is learning sub-sentence information such as words.

They also found that pre-training the system on data from the volunteer who achieved the highest accuracy, before training on data from one of the worst performers, significantly reduced error rates. This suggests that in practical applications, much of the training could be done before the system is given to the end user, who would then only have to fine-tune it to the quirks of their own brain signals.
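A minimal sketch of that pre-train-then-fine-tune idea, reusing the hypothetical BrainToTextSeq2Seq class from the earlier sketch (the loop, learning rate and toy data are assumptions, not the study's training recipe):

```python
# Start from weights learned on one participant, then adapt gently to another.
import torch

pretrained = BrainToTextSeq2Seq()
# ...imagine many epochs of training on the best-performing participant here...

finetuned = BrainToTextSeq2Seq()
finetuned.load_state_dict(pretrained.state_dict())   # start from the pre-trained weights

optimizer = torch.optim.Adam(finetuned.parameters(), lr=1e-4)  # small steps: adapt, don't relearn
loss_fn = torch.nn.CrossEntropyLoss()

# Toy stand-in for the new participant's (signals, word tokens) batches.
new_subject_batches = [(torch.randn(8, 100, 256), torch.randint(0, 250, (8, 4)))
                       for _ in range(3)]

for signals, targets in new_subject_batches:
    logits = finetuned(signals, targets)              # (batch, n_words, vocab)
    loss = loss_fn(logits.transpose(1, 2), targets)   # CrossEntropyLoss expects (batch, vocab, n_words)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```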

The vocabulary of such a system is likely to improve considerably as people build upon this approach—but even a limited palette of 250 words could be incredibly useful to a paraplegic, and could likely be tailored to a specific set of commands for telepathic control of other devices.

Now the ball is back in the court of the scrum of companies racing to develop the first practical neural interfaces.

AI can pick out specific odours from a combination of smells


A new AI can detect odours in a two-step process that mimics the way our noses smell

An AI can sniff out certain scents, giving us a glimpse of how our nose might work in detecting them.

Thomas Cleland at Cornell University, New York, and Nabil Imam at tech firm Intel created an AI based on the mammalian olfactory bulb (MOB), the area of the brain that is responsible for processing odours. The algorithm mimics a part of the MOB that distinguishes between different smells that are usually present as a mixture of compounds in the air.

This area of the MOB contains two key types of neuron: mitral cells, which are activated when an odour is present but don't identify it, and granule cells, which learn to specialise and pick out the chemicals in a smell. The algorithm mimics these processes, says Imam.

Cleland and Imam trained the AI to detect 10 different odours, including those of ammonia and carbon monoxide. They used data from previous work that recorded the activity of chemical sensors in a wind tunnel in response to these smells.

When fed that data, the AI learns to detect that a smell is present based on the sensors’ responses to the chemicals, and then goes on to identify it on the basis of the patterns in that data. As it does so, the AI has a spike of activity analogous to the spikes of electrical activity in the human brain, says Imam.

The AI refined its learning over five cycles of exposure, eventually showing activity spikes specific to each odour. The researchers then tested the AI’s ability to sniff out smells among others that it hadn’t been trained to detect. They considered an odour successfully identified when the AI’s fifth spike pattern matched or was similar to the pattern produced by the sensors.
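The published system is a spiking network modelled on olfactory circuitry, so the sketch below is only a loose, non-spiking illustration of the two-step logic described above: first decide that an odour is present, then identify it by comparing the sensor pattern with learned templates. The sensor count, thresholds and templates are invented.

```python
# Loose, non-spiking illustration of the two-step process: detect that an odour
# is present (the role the mitral cells play), then identify it by matching the
# sensor pattern to learned templates (the role of the specialised granule
# cells). Sensor count, thresholds and the templates themselves are invented.
import numpy as np

rng = np.random.default_rng(0)
N_SENSORS = 72

# Pretend patterns learned from clean exposures to ten odours.
templates = {f"odour_{i}": rng.random(N_SENSORS) for i in range(10)}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_and_identify(reading, presence_threshold=0.3, match_threshold=0.8):
    # Step 1: is anything there at all? (crude proxy: mean sensor activation)
    if reading.mean() < presence_threshold:
        return None
    # Step 2: which learned pattern does this reading most resemble?
    name, score = max(((n, cosine(reading, t)) for n, t in templates.items()),
                      key=lambda pair: pair[1])
    return name if score >= match_threshold else "unknown"

# A noisy reading of odour_3 is still matched to its template.
noisy = templates["odour_3"] + 0.05 * rng.standard_normal(N_SENSORS)
print(detect_and_identify(noisy))  # odour_3
```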

The AI got it almost 100 per cent correct for eight of the smells and about 90 per cent correct for the remaining two. To test how it might identify odorous contaminants in the environment, the researchers blocked 80 per cent of the smell signal to mimic more realistic scenarios. In these tests, the AI’s accuracy dipped to less than 30 per cent.

“I think the link [to the MOB] is quite strong – this algorithm might be an explanation to how it works in the human nose, to some abstraction,” says Thomas Nowotny at the University of Sussex, UK. But the AI’s ability to solve real life problems, such as detecting bombs by picking out hazardous smells associated with them, is still some way off, he says.

Read more: https://www.newscientist.com/article/2237534-ai-can-pick-out-specific-odours-from-a-combination-of-smells/

Researchers Use Artificial Intelligence to Identify, Count, Describe Wild Animals


Motion sensor “camera traps” unobtrusively take pictures of animals in their natural environment, oftentimes yielding images not otherwise observable. The artificial intelligence system automatically processes such images, here correctly reporting this as a picture of two impala standing.

A new paper in the Proceedings of the National Academy of Sciences (PNAS) reports how a cutting-edge artificial intelligence technique called deep learning can automatically identify, count and describe animals in their natural habitats.

Photographs collected automatically by motion-sensor cameras can then be described by deep neural networks. The result is a system that can automate animal identification for up to 99.3 percent of images while still matching the 96.6 percent accuracy rate of crowdsourced teams of human volunteers.
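One natural way to arrive at a figure like "automates 99.3 percent of images at human-level accuracy" is to let the network label only the images it is confident about and route the rest back to volunteers. The sketch below shows that triage logic with a placeholder model, a toy species list and an invented confidence threshold; it is not the authors' actual pipeline.

```python
# Confidence-based triage over camera-trap images: auto-accept classifications
# the network is sure about, queue the rest for human volunteers. The model,
# species list and threshold are placeholders, not the study's actual system.
import torch
import torch.nn.functional as F

SPECIES = ["impala", "lion", "elephant", "empty"]   # tiny stand-in for the 48 classes

def triage(model, images, confidence_threshold=0.95):
    auto_labeled, needs_human = [], []
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)     # (n_images, n_species)
    for idx, p in enumerate(probs):
        conf, cls = p.max(dim=0)
        if conf.item() >= confidence_threshold:
            auto_labeled.append((idx, SPECIES[cls.item()]))  # trust the network
        else:
            needs_human.append(idx)                          # send to volunteers
    return auto_labeled, needs_human

# Toy usage with an untrained linear "classifier" over flattened 64x64 RGB images.
toy_model = torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(64 * 64 * 3, len(SPECIES)))
auto, manual = triage(toy_model, torch.rand(8, 3, 64, 64))
print(f"{len(auto)} auto-labeled, {len(manual)} sent to volunteers")
```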

“This technology lets us accurately, unobtrusively and inexpensively collect wildlife data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology and animal behavior into ‘big data’ sciences. This will dramatically improve our ability to both study and conserve wildlife and precious ecosystems,” says Jeff Clune, the senior author of the paper. He is the Harris Associate Professor at the University of Wyoming and a senior research manager at Uber’s Artificial Intelligence Labs.

The paper was written by Clune; his Ph.D. student Mohammad Sadegh Norouzzadeh; his former Ph.D. student Anh Nguyen (now at Auburn University); Margaret Kosmala (Harvard University); Ali Swanson (University of Oxford); and Meredith Palmer and Craig Packer (both from the University of Minnesota).

Deep neural networks are a form of computational intelligence loosely inspired by how animal brains see and understand the world. They require vast amounts of training data to work well, and the data must be accurately labeled (e.g., each image being correctly tagged with which species of animal is present, how many there are, etc.).

This study obtained the necessary data from Snapshot Serengeti, a citizen science project on the http://www.zooniverse.org platform. Snapshot Serengeti has deployed a large number of “camera traps” (motion-sensor cameras) in Tanzania that collect millions of images of animals in their natural habitat, such as lions, leopards, cheetahs and elephants. The information in these photographs is only useful once it has been converted into text and numbers. For years, the best method for extracting such information was to ask crowdsourced teams of human volunteers to label each image manually. The study published today harnessed 3.2 million labeled images produced in this manner by more than 50,000 human volunteers over several years.

“When I told Jeff Clune we had 3.2 million labeled images, he stopped in his tracks,” says Packer, who heads the Snapshot Serengeti project. “We wanted to test whether we could use machine learning to automate the work of human volunteers. Our citizen scientists have done phenomenal work, but we needed to speed up the process to handle ever greater amounts of data. The deep learning algorithm is amazing and far surpassed my expectations. This is a game changer for wildlife ecology.”

Swanson, who founded Snapshot Serengeti, adds: “There are hundreds of camera-trap projects in the world, and very few of them are able to recruit large armies of human volunteers to extract their data. That means that much of the knowledge in these important data sets remains untapped. Although projects are increasingly turning to citizen science for image classification, we’re starting to see it take longer and longer to label each batch of images as the demand for volunteers grows. We believe deep learning will be key in alleviating the bottleneck for camera-trap projects: the effort of converting images into usable data.”

“Not only does the artificial intelligence system tell you which of 48 different species of animal is present, but it also tells you how many there are and what they are doing. It will tell you if they are eating, sleeping, if babies are present, etc.,” adds Kosmala, another Snapshot Serengeti leader. “We estimate that the deep learning technology pipeline we describe would save more than eight years of human labeling effort for each additional 3 million images. That is a lot of valuable volunteer time that can be redeployed to help other projects.”

First-author Sadegh Norouzzadeh points out that “Deep learning is still improving rapidly, and we expect that its performance will only get better in the coming years. Here, we wanted to demonstrate the value of the technology to the wildlife ecology community, but we expect that as more people research how to improve deep learning for this application and publish their datasets, the sky’s the limit. It is exciting to think of all the different ways this technology can help with our important scientific and conservation missions.”

The paper in PNAS is titled “Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning.”

http://www.uwyo.edu/uw/news/2018/06/researchers-use-artificial-intelligence-to-identify,-count,-describe-wild-animals.html