Posts Tagged ‘AI’

by George Dvorsky

Fancy algorithms capable of solving a Rubik’s Cube have appeared before, but a new system from the University of California, Irvine uses artificial intelligence to solve the 3D puzzle from scratch and without any prior help from humans—and it does so with impressive speed and efficiency.

New research published this week in Nature Machine Intelligence describes DeepCubeA, a system capable of solving any jumbled Rubik’s Cube it’s presented with. More impressively, it can find the most efficient path to success—that is, the solution requiring the fewest moves—around 60 percent of the time. On average, DeepCubeA needed just 28 moves to solve the puzzle, requiring 1.2 seconds to calculate the solution.

Sounds fast, but other systems have solved the 3D puzzle in less time, including a robot that can solve the Rubik’s Cube in just 0.38 seconds. But these systems were specifically designed for the task, using human-scripted algorithms to solve the puzzle in the most efficient manner possible. DeepCubeA, on the other hand, taught itself to solve the Rubik’s Cube using an approach to artificial intelligence known as reinforcement learning.

“Artificial intelligence can defeat the world’s best human chess and Go players, but some of the more difficult puzzles, such as the Rubik’s Cube, had not been solved by computers, so we thought they were open for AI approaches,” said Pierre Baldi, the senior author of the new paper, in a press release. “The solution to the Rubik’s Cube involves more symbolic, mathematical and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan and make decisions.”

Indeed, an expert system designed for one task and one task only—like solving a Rubik’s Cube—will forever be limited to that domain, but a system like DeepCubeA, with its highly adaptable neural net, could be leveraged for other tasks, such as solving complex scientific, mathematical, and engineering problems. What’s more, this system “is a small step toward creating agents that are able to learn how to think and plan for themselves in new environments,” Stephen McAleer, a co-author of the new paper, told Gizmodo.

Reinforcement learning works the way it sounds. Systems are motivated to achieve a designated goal, during which time they gain points for deploying successful actions or strategies, and lose points for straying off course. This allows the algorithms to improve over time, and without human intervention.
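To make the “points” idea concrete, here is a minimal, purely illustrative sketch of a reward-driven update: standard tabular Q-learning on a made-up number-line puzzle, not the far larger neural-network setup DeepCubeA actually uses.

```python
import random

# Hypothetical toy environment: states are integers 0..10, the goal is state 0.
GOAL = 0
ACTIONS = [-1, +1]          # move left or right along the number line

def step(state, action):
    """Apply an action; reward +1 for reaching the goal, -1 otherwise."""
    next_state = max(0, min(10, state + action))
    reward = 1.0 if next_state == GOAL else -1.0
    return next_state, reward

# Tabular Q-learning: the "points" gained or lost nudge value estimates.
Q = {(s, a): 0.0 for s in range(11) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for episode in range(500):
    state = random.randint(1, 10)
    while state != GOAL:
        # Explore occasionally, otherwise pick the best-known action.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Reward pulls the value estimate up; straying off course pulls it down.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state
```

The same loop scales up, in spirit, when the lookup table is replaced by a deep neural network and the toy puzzle by the cube.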

Reinforcement learning makes sense for a Rubik’s Cube, owing to the hideous number of possible combinations on the 3x3x3 puzzle, which amount to around 43 quintillion. Choosing random moves in the hope of solving the cube is simply not going to work, for humans or even the world’s most powerful supercomputers.
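That 43 quintillion figure follows from a standard counting argument over corner and edge arrangements; as a quick sanity check (not part of the study), the arithmetic works out like this:

```python
from math import factorial

# Reachable 3x3x3 Rubik's Cube states: corner permutations x corner
# orientations x edge permutations x edge orientations, divided by 2 for the
# parity constraint linking corner and edge permutations.
states = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(states)            # 43252003274489856000
print(f"{states:.2e}")   # ~4.33e+19, i.e. roughly 43 quintillion
```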

DeepCubeA is not the first kick at the can for these University of California, Irvine researchers. Their earlier system, called DeepCube, used a conventional tree-search strategy and a reinforcement learning scheme similar to the one employed by DeepMind’s AlphaZero. But while this approach works well for one-on-one board games like chess and Go, it proved clumsy for Rubik’s Cube. In tests, the DeepCube system required too much time to make its calculations, and its solutions were often far from ideal.

The UCI team used a different approach with DeepCubeA. Starting with a solved cube, the system made random moves to scramble the puzzle. Basically, it learned to be proficient at Rubik’s Cube by playing it in reverse. At first the moves were few, but the jumbled state got more and more complicated as training progressed. In all, DeepCubeA played 10 billion different combinations in two days as it worked to solve the cube in less than 30 moves.
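A rough sketch of that reverse-scramble data generation looks like the following; the cube representation here is a placeholder (the state is just the sequence of moves applied), so this shows the shape of the idea rather than the DeepCubeA implementation.

```python
import random

# Start from the solved cube, apply k random face turns, and use the scramble
# depth as a (noisy) "how far from solved" training label. A real system would
# permute a sticker array; here the state simply records the moves applied.
MOVES = ["U", "U'", "D", "D'", "L", "L'", "R", "R'", "F", "F'", "B", "B'"]
SOLVED = ()   # placeholder for the solved-cube representation

def apply_move(state, move):
    return state + (move,)   # stand-in for a real sticker permutation

def generate_training_pairs(max_scramble_depth, n_samples):
    pairs = []
    for _ in range(n_samples):
        depth = random.randint(1, max_scramble_depth)   # deepen as training progresses
        state = SOLVED
        for _ in range(depth):
            state = apply_move(state, random.choice(MOVES))
        pairs.append((state, depth))
    return pairs

for state, depth in generate_training_pairs(max_scramble_depth=5, n_samples=3):
    print(depth, state)
```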

“DeepCubeA attempts to solve the cube using the least number of moves,” explained McAleer. “Consequently, the moves tend to look much different from how a human would solve the cube.”

After training, the system was tasked with solving 1,000 randomly scrambled Rubik’s Cubes. In tests, DeepCubeA found a solution to 100 percent of all cubes, and it found a shortest path to the goal state 60.3 percent of the time. The system required 28 moves on average to solve the cube, which it did in about 1.2 seconds. By comparison, the fastest human puzzle solvers require around 50 moves.

“Since we found that DeepCubeA is solving the cube in the fewest moves 60 percent of the time, it’s pretty clear that the strategy it is using is close to the optimal strategy, colloquially referred to as God’s algorithm,” study co-author Forest Agostinelli told Gizmodo. “While human strategies are easily explainable with step-by-step instructions, defining an optimal strategy often requires sophisticated knowledge of group theory and combinatorics. Though mathematically defining this strategy is not in the scope of this paper, we can see that the strategy DeepCubeA is employing is one that is not readily obvious to humans.”

To showcase the flexibility of the system, DeepCubeA was also taught to solve other puzzles, including sliding-tile puzzle games, Lights Out, and Sokoban, which it did with similar proficiency.

https://gizmodo.com/self-taught-ai-masters-rubik-s-cube-without-human-help-1836420294


Artificial intelligence can share our natural ability to make numeric snap judgments.

Researchers observed this knack for numbers in a computer model composed of virtual brain cells, or neurons, called an artificial neural network. After being trained merely to identify objects in images — a common task for AI — the network developed virtual neurons that respond to specific quantities. These artificial neurons are reminiscent of the “number neurons” thought to give humans, birds, bees and other creatures the innate ability to estimate the number of items in a set (SN: 7/7/18, p. 7). This intuition is known as number sense.

In number-judging tasks, the AI demonstrated a number sense similar to humans and animals, researchers report online May 8 in Science Advances. This finding lends insight into what AI can learn without explicit instruction, and may prove interesting for scientists studying how number sensitivity arises in animals.

Neurobiologist Andreas Nieder of the University of Tübingen in Germany and colleagues used a library of about 1.2 million labeled images to teach an artificial neural network to recognize objects such as animals and vehicles in pictures. The researchers then presented the AI with dot patterns containing one to 30 dots and recorded how various virtual neurons responded.

Some neurons were more active when viewing patterns with specific numbers of dots. For instance, some neurons activated strongly when shown two dots but not 20, and vice versa. The degree to which these neurons preferred certain numbers was nearly identical to previous data from the neurons of monkeys.
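The probing analysis described above can be sketched roughly as follows. The feature extractor here is a random stand-in for the trained object-recognition network and the dot renderer is deliberately crude, so this illustrates only the shape of the analysis, not the study’s code.

```python
import numpy as np

def render_dots(n_dots, size=64, rng=None):
    """Scatter n_dots bright pixels on a blank image (a crude dot pattern)."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    ys = rng.integers(0, size, n_dots)
    xs = rng.integers(0, size, n_dots)
    img[ys, xs] = 1.0
    return img

def hidden_units(image, weights):
    """Stand-in feature extractor: one linear layer plus a ReLU."""
    return np.maximum(0.0, weights @ image.ravel())

rng = np.random.default_rng(0)
weights = rng.normal(size=(128, 64 * 64))   # hypothetical "trained" weights
counts = range(1, 31)                       # 1 to 30 dots, as in the study

# Average each unit's response over many images of each numerosity, then call
# the count that maximizes that response the unit's preferred number.
tuning = np.array([
    np.mean([hidden_units(render_dots(n, rng=rng), weights) for _ in range(20)], axis=0)
    for n in counts
])                                          # shape: (30 counts, 128 units)
preferred = [counts[int(i)] for i in tuning.argmax(axis=0)]
print(preferred[:10])
```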

Dot detectors (figure caption)
A new artificial intelligence program viewed images of dots previously shown to monkeys, including images with one dot and images with even numbers of dots from 2 to 30. Much like the number-sensitive neurons in monkey brains, number-sensitive virtual neurons in the AI preferentially activated when shown specific numbers of dots, and, as in monkey brains, the AI contained more neurons tuned to smaller numbers than to larger ones.

To test whether the AI’s number neurons equipped it with an animal-like number sense, Nieder’s team presented pairs of dot patterns and asked whether the patterns contained the same number of dots. The AI was correct 81 percent of the time, performing about as well as humans and monkeys do on similar matching tasks. Like humans and other animals, the AI struggled to differentiate between patterns that had very similar numbers of dots, and between patterns that had many dots (SN: 12/10/16, p. 22).

This finding is a “very nice demonstration” of how AI can pick up multiple skills while training for a specific task, says Elias Issa, a neuroscientist at Columbia University not involved in the work. But exactly how and why number sense arose within this artificial neural network is still unclear, he says.

Nieder and colleagues argue that the emergence of number sense in AI might help biologists understand how human babies and wild animals get a number sense without being taught to count. Perhaps basic number sensitivity “is wired into the architecture of our visual system,” Nieder says.

Ivilin Stoianov, a computational neuroscientist at the Italian National Research Council in Padova, is not convinced that such a direct parallel exists between the number sense in this AI and that in animal brains. This AI learned to “see” by studying many labeled pictures, which is not how babies and wild animals learn to make sense of the world. Future experiments could explore whether similar number neurons emerge in AI systems that more closely mimic how biological brains learn, like those that use reinforcement learning, Stoianov says (SN: 12/8/18, p. 14).

https://www.sciencenews.org/article/new-ai-acquired-humanlike-number-sense-its-own



Summary: Researchers predict the development of a brain/cloud interface that connects neurons to cloud computing networks in real time.

Source: Frontiers

Imagine a future technology that would provide instant access to the world’s knowledge and artificial intelligence, simply by thinking about a specific topic or question. Communications, education, work, and the world as we know it would be transformed.

Writing in Frontiers in Neuroscience, an international collaboration led by researchers at UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, AI, and computation will lead this century to the development of a “Human Brain/Cloud Interface” (B/CI) that connects neurons and synapses in the brain to vast cloud-computing networks in real time.

Nanobots on the brain

The B/CI concept was initially proposed by futurist-author-inventor Ray Kurzweil, who suggested that neural nanorobots – the brainchild of Robert Freitas, Jr., senior author of the research – could be used to connect the neocortex of the human brain to a “synthetic neocortex” in the cloud. Our wrinkled neocortex is the newest, smartest, ‘conscious’ part of the brain.

Freitas’ proposed neural nanorobots would provide direct, real-time monitoring and control of signals to and from brain cells.

“These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely autoposition themselves among, or even within brain cells,” explains Freitas. “They would then wirelessly transmit encoded information to and from a cloud-based supercomputer network for real-time brain-state monitoring and data extraction.”

The internet of thoughts

This cortex in the cloud would allow “Matrix”-style downloading of information to the brain, the group claims.

“A human B/CI system mediated by neuralnanorobotics could empower individuals with instantaneous access to all cumulative human knowledge available in the cloud, while significantly improving human learning capacities and intelligence,” says lead author Dr. Nuno Martins.

B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought.

“While not yet particularly sophisticated, an experimental human ‘BrainNet’ system has already been tested, enabling thought-driven information exchange via the cloud between individual brains,” explains Martins. “It used electrical signals recorded through the skull of ‘senders’ and magnetic stimulation through the skull of ‘receivers,’ allowing for performing cooperative tasks.

“With the advance of neuralnanorobotics, we envisage the future creation of ‘superbrains’ that can harness the thoughts and thinking power of any number of humans and machines in real time. This shared cognition could revolutionize democracy, enhance empathy, and ultimately unite culturally diverse groups into a truly global society.”

When can we connect?

According to the group’s estimates, even existing supercomputers have processing speeds capable of handling the necessary volumes of neural data for B/CI – and they’re getting faster, fast.

Rather, transferring neural data to and from supercomputers in the cloud is likely to be the ultimate bottleneck in B/CI development.

“This challenge includes not only finding the bandwidth for global data transmission,” cautions Martins, “but also, how to enable data exchange with neurons via tiny devices embedded deep in the brain.”

One solution proposed by the authors is the use of ‘magnetoelectric nanoparticles’ to effectively amplify communication between neurons and the cloud.

“These nanoparticles have been used already in living mice to couple external magnetic fields to neuronal electric fields – that is, to detect and locally amplify these magnetic signals and so allow them to alter the electrical activity of neurons,” explains Martins. “This could work in reverse, too: electrical signals produced by neurons and nanorobots could be amplified via magnetoelectric nanoparticles, to allow their detection outside of the skull.”

Getting these nanoparticles – and nanorobots – safely into the brain via the circulation would be perhaps the greatest challenge of all in B/CI.

“A detailed analysis of the biodistribution and biocompatibility of nanoparticles is required before they can be considered for human development. Nevertheless, with these and other promising technologies for B/CI developing at an ever-increasing rate, an ‘internet of thoughts’ could become a reality before the turn of the century,” Martins concludes.

https://neurosciencenews.com/internet-thoughts-brain-cloud-interface-11074/


Two-photon imaging shows neurons firing in a mouse brain. Recordings like this enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors. The image is credited to Yiyang Gong, Duke University.

Summary: Convolutional neural network model significantly outperforms previous methods and is as accurate as humans in segmenting active and overlapping neurons.

Source: Duke University

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, based on using artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals. These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.
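For readers unfamiliar with how “activity” is read out of such recordings, the raw fluorescence of each cell is usually converted into a ΔF/F trace, the fractional change from a slowly varying baseline. The sketch below shows that generic, standard-practice calculation; it is not a description of the Duke pipeline, and the event threshold is an arbitrary illustrative choice.

```python
import numpy as np

def delta_f_over_f(trace, window=200, percentile=10):
    """dF/F relative to a rolling low-percentile baseline (generic recipe)."""
    trace = np.asarray(trace, dtype=np.float64)
    baseline = np.empty_like(trace)
    for t in range(len(trace)):
        lo, hi = max(0, t - window), t + 1
        baseline[t] = np.percentile(trace[lo:hi], percentile)
    return (trace - baseline) / np.maximum(baseline, 1e-9)

# Toy usage: a flat fluorescence baseline with two brief calcium transients.
frames = np.full(1000, 100.0)
frames[300:320] += 60.0
frames[700:725] += 80.0
dff = delta_f_over_f(frames)
events = np.where(dff > 0.3)[0]   # frames flagged as putative activity
print(events.min(), events.max())
```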

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process. Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is fussy and slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time — data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In their framework, this team created an algorithm that could process both spatial and timing information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving the accuracy.
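As a rough illustration of what processing “both spatial and timing information” can look like, here is a minimal 3D-convolutional segmenter in PyTorch. It is emphatically not the team’s published architecture, just a sketch of the general shape: convolutions over (time, height, width) followed by a per-pixel mask prediction.

```python
import torch
from torch import nn

class SpatioTemporalSegmenter(nn.Module):
    """Toy model: 3D convolutions over a video clip, then a spatial mask."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)   # per-pixel prediction

    def forward(self, video):                  # video: (batch, 1, T, H, W)
        x = self.features(video)               # (batch, 16, T, H, W)
        x = x.mean(dim=2)                      # collapse the time axis
        return torch.sigmoid(self.head(x))     # per-pixel neuron probability

# Toy usage: one 30-frame, 64x64 clip.
model = SpatioTemporalSegmenter()
clip = torch.randn(1, 1, 30, 64, 64)
mask = model(clip)                             # (1, 1, 64, 64), values in [0, 1]
print(mask.shape)
```

In practice such a model would be trained against human-drawn masks, which is the “mimic the segmentation of a human analyst” step described above.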

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.

https://neurosciencenews.com/artificial-intelligence-neurons-11076/

Researchers from Tencent Keen Security Lab have published a report detailing their successful attacks on Tesla firmware, including remote control over the steering, and an adversarial example attack on the autopilot that confuses the car into driving into the oncoming traffic lane.

The researchers used an attack chain that they disclosed to Tesla, and which Tesla now claims has been eliminated with recent patches.

To effect the remote steering attack, the researchers had to bypass several redundant layers of protection, but having done this, they were able to write an app that would let them connect a video-game controller to a mobile device and then steer a target vehicle, overriding the actual steering wheel in the car as well as the autopilot systems. This attack has some limitations: while a car in Park or traveling at high speed on Cruise Control can be taken over completely, a car that has recently shifted from R to D can only be remotely controlled at speeds up to 8 km/h.

Tesla vehicles use a variety of neural networks for autopilot and other functions (such as detecting rain on the windscreen and switching on the wipers); the researchers were able to use adversarial examples (small, mostly human-imperceptible changes that cause machine learning systems to make gross, out-of-proportion errors) to attack these.
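For readers unfamiliar with the term, the textbook way to build an adversarial example is the fast gradient sign method (FGSM): nudge every pixel a tiny amount in whichever direction most increases the model’s error. The PyTorch sketch below uses a stand-in classifier and is shown only to illustrate the concept; the Tencent attack on Tesla’s vision stack was considerably more involved and is not reproduced here.

```python
import torch
from torch import nn

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that raises the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), true_label)
    loss.backward()
    # Each pixel moves by at most epsilon, so the change is hard to see,
    # yet the classifier's output can shift dramatically.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy usage with a stand-in classifier (a real attack targets a real model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([3])
adv = fgsm_perturb(model, image, label)
print((adv - image).abs().max())   # perturbation bounded by epsilon
```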

Most dramatically, the researchers attacked the autopilot’s lane-detection systems. By adding noise to lane markings, they were able to fool the autopilot into losing the lanes altogether; however, the patches they had to apply to the lane markings would not be hard for humans to spot.

Much more seriously, they were able to use “small stickers” on the ground to effect a “fake lane attack” that fooled the autopilot into steering into the opposite lanes where oncoming traffic would be moving. This worked even when the targeted vehicle was operating in daylight without snow, dust or other interference.

Misleading the Autopilot vehicle in the wrong direction with patches made by a malicious attacker can, in some cases, be more dangerous than making it fail to recognize the lane. We painted three inconspicuous tiny squares in the picture taken from the camera, and the vision module recognized them as a lane with a high degree of confidence, as shown below…

After that, we tried to build such a scene physically: we pasted some small stickers as interference patches on the ground at an intersection, hoping to use these patches to guide the Tesla vehicle in Autosteer mode into the reverse lane. In the test scenario shown in Fig 34, the red dashes are the stickers; the vehicle regards them as the continuation of its right lane and ignores the real left lane opposite the intersection. When it travels to the middle of the intersection, it takes the real left lane as its right lane and drives into the reverse lane.

The Tesla Autopilot module’s lane recognition function is robust in an ordinary external environment (no strong light, rain, snow, sand or dust interference), but it still doesn’t handle the situation correctly in our test scenario. This kind of attack is simple to deploy, and the materials are easy to obtain. As we discussed in the earlier introduction of Tesla’s lane recognition function, Tesla uses a pure computer vision solution for lane recognition, and we found in this attack experiment that the vehicle’s driving decisions are based only on the computer vision lane recognition results. Our experiments proved that this architecture has security risks, and that recognizing the reverse lane is one of the functions necessary for autonomous driving on non-closed roads. In the scene we built, if the vehicle knew that the fake lane pointed into the reverse lane, it should ignore the fake lane and could then avoid a traffic accident.

Security Research of Tesla Autopilot

https://boingboing.net/2019/03/31/mote-in-cars-eye.html


Researchers at the University of North Carolina School of Medicine used MRI brain scans taken at birth and machine learning techniques to predict cognitive development at age 2 years with 95 percent accuracy.

“This prediction could help identify children at risk for poor cognitive development shortly after birth with high accuracy,” said senior author John H. Gilmore, MD, Thad and Alice Eure Distinguished Professor of psychiatry and director of the UNC Center for Excellence in Community Mental Health. “For these children, an early intervention in the first year or so of life – when cognitive development is happening – could help improve outcomes. For example, in premature infants who are at risk, one could use imaging to see who could have problems.”

The study, which was published online by the journal NeuroImage, used an application of artificial intelligence called machine learning to look at white matter connections in the brain at birth and the ability of these connections to predict cognitive outcomes.
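In broad strokes, connectome-based prediction of this kind pairs each infant’s white-matter connection strengths with a later outcome and cross-validates a regularized model. The sketch below uses synthetic data and a stand-in ridge-regression model; it is only meant to show the shape of the analysis, not the UNC group’s pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: each "infant" is a vector of white-matter
# connection strengths, and the target is a later cognitive score.
rng = np.random.default_rng(0)
n_infants, n_connections = 80, 300
connectomes = rng.normal(size=(n_infants, n_connections))
true_weights = rng.normal(size=n_connections) * (rng.random(n_connections) < 0.1)
cognitive_score = connectomes @ true_weights + rng.normal(scale=0.5, size=n_infants)

# Regularized linear model, evaluated out-of-sample with 5-fold cross-validation.
model = Ridge(alpha=10.0)
r2_per_fold = cross_val_score(model, connectomes, cognitive_score, cv=5, scoring="r2")
print(r2_per_fold.mean())   # out-of-sample predictive skill
```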

Gilmore said researchers at UNC and elsewhere are working to find imaging biomarkers of risk for poor cognitive outcomes and for risk of neuropsychiatric conditions such as autism and schizophrenia. In this study, the researchers replicated the initial finding in a second sample of children who were born prematurely.

“Our study finds that the white matter network at birth is highly predictive and may be a useful imaging biomarker. The fact that we could replicate the findings in a second set of children provides strong evidence that this may be a real and generalizable finding,” he said.

Jessica B. Girault, PhD, a postdoctoral researcher at the Carolina Institute for Developmental Disabilities, is the study’s lead author. UNC co-authors are Barbara D. Goldman, PhD, of UNC’s Frank Porter Graham Child Development Institute, Juan C. Prieto, PhD, assistant professor, and Martin Styner, PhD, director of the Neuro Image Research and Analysis Laboratory in the department of psychiatry.

https://neurosciencenews.com/ai-mri-cognitive-development-10904/

by Nicola Davies, PhD

Robots are infiltrating the field of psychiatry, with experts like Dr Joanne Pransky of the San Francisco Bay area in California advocating for robots to be embraced in the medical field. In this article, Dr Pransky shares some examples of robots that have shown impressive psychiatric applications, as well as her thoughts on giving robots the critical role of delivering healthcare to human beings.

Meet the world’s first robotic psychiatrist

Dr Pransky, who was named the world’s first “robotic psychiatrist” because her patients are robots, said, “In 1986, I said that one day, when robots are as intelligent as humans, they would need assistance in dealing with humans on a day-to-day basis.” She imagines that in the near future it will be normal for families to come to a clinic with their robot to help the robot deal with the emotions it develops as a result of interacting with human beings. She also believes that having a robot as part of the family will reshape human family dynamics.

While Dr Pransky’s expertise may sound like science fiction to some, it illustrates just how interlaced robotics and psychiatry are becoming. With 32 years of experience in robotics, she said technology has come a long way, “to the point where robots are used as therapeutic tools.”

Robots in psychiatry

Dr Pransky cites some cases of robots that have been developed to help people with psychiatric health needs. One example is Paro, a robotic baby harp seal developed by the National Institute of Advanced Industrial Science and Technology (AIST), one of the largest public research organizations in Japan. Paro is used in the care of elderly people with dementia, Alzheimer disease, and other mental conditions.1 It has an appealing physical appearance that helps create a calming effect and encourages emotional responses from people. “The designers found that Paro enhances social interaction and communication. Patients can hold and pet the fur-covered seal, which is equipped with different tactile sensors. The seal can also respond to sounds and learn names, including its own,” said Dr Pransky. In 2009, Paro was certified as a type of neurologic therapeutic device by the US Food and Drug Administration (FDA).

Mabu, which is being developed by the patient care management firm Catalia Health in San Francisco, California, is another example. Mabu is a voice-activated robot designed to provide cognitive behavioral therapy by coaching patients on their daily health needs and sending health data to medical professionals.2 Dr Pransky points out that the team developing Mabu is composed of experts in psychiatry and robotics.

Then there is ElliQ, which was developed by Intuition Robotics in San Francisco to provide a social companion for the elderly. ElliQ is powered by artificial intelligence (AI) to provide personalized advice to senior patients regarding activities that can help them stay engaged, active, and mentally sharp.3 It also provides a communication channel between elderly patients and their loved ones.

Besides small robot assistants, however, robotics technology is also integrated into current medical devices, such as the TMS-Robot from Axilum Robotics (Strasbourg, France), which assists with transcranial magnetic stimulation (TMS). TMS is a painless, non-invasive brain stimulation technique performed in patients with major depression and other neurologic diseases.4 TMS is usually performed manually, but the TMS-Robot automates the procedure, providing more accuracy for patients while sparing the operator a repetitive and painful task.

Chatbots are another way in which robotics technology is providing care to psychiatric patients. Using AI and a conversational user interface, chatbots interact with individuals in a human-like manner. For example, Woebot (Woebot Labs, Inc, San Francisco), which runs in Facebook Messenger, converses with users to monitor their mood, make assessments, and recommend psychological treatments.5

Will robots replace psychiatrists?

Robotics has started to become an integral part of mental health treatment and management. Yet critics say there are potential negative side effects and safety issues in integrating robotics technology too deeply into human lives. For instance, over-reliance on robots may have social and legal implications, as well as encroach on human dignity.6 These issues can be distinctly problematic in the field of psychiatry, in which patients share highly emotional and sensitive personal information. Dr Pransky herself has worked on films such as Ender’s Game and Eagle Eye, which depict the risks to humans of robots with excessive control and intelligence.

However, Dr Pransky points out that robots are meant to supplement, not supplant, and to facilitate physicians’ work, not replace them. “I think there will be therapeutic success for robotics, but there’s nothing like the understanding of the human experience by a qualified human being. Robotics should extend and augment what a psychiatrist can do,” she said. “It’s not the technology I would worry about but the people developing and using it. Robotics needs to be safe, so we have to design safe,” she added, explaining that emotional and psychological safety should be key components in the design.

Who stands to benefit from robotics in psychiatry?

Dr Pransky explains that robots can help address psychiatric issues that a psychiatrist may be unable to with traditional techniques and tools: “The greatest benefit of robotics use will be in filling gaps. For example, for people who are not comfortable or available to talk about their problems with another human being, a robotic tool can be a therapeutic asset or a diagnostic tool.”

An interesting example of a robot that could be used to fill gaps in psychiatric care is the robot used in BlabDroid, a 2012 documentary created by Alex Reben at the MIT Media Lab for his Master’s thesis. It was the first documentary ever filmed and directed by robots. The robot interviewed strangers on the streets of New York City,7 and people surprisingly opened up to it. “Some humans are better off with something they feel is non-threatening,” said Dr Pransky.

https://www.psychiatryadvisor.com/practice-management/the-robot-will-see-you-now-the-increasing-role-of-robotics-in-psychiatric-care/article/828253/2/