Elon Musk’s Neuralink soon to reveal a working brain-computer chip for “human-AI symbiosis”

By Anthony Cuthbertson

Elon Musk has said he will demonstrate a functional brain-computer interface this week during a live presentation from his mysterious Neuralink startup.

The billionaire entrepreneur, who also heads SpaceX and Tesla, founded Neuralink in 2016 with the ultimate aim of merging artificial intelligence with the human brain.

Until now, there has only been one public event showing off the startup’s technology, during which Musk revealed a “sewing machine-like” device capable of stitching threads into a person’s head.

The procedure to implant the chip will eventually be similar in speed and efficiency to Lasik laser eye surgery, according to Musk, and will be performed by a robot.

The robot and the working brain chip will be unveiled during a live webcast at 3pm PT (11pm BST) on Friday, Musk tweeted on Tuesday night.

In response to a question on Twitter, he said that the comparison with laser eye surgery was still some way off. “Still far from Lasik, but could get pretty close in a few years,” he tweeted.

He also said that Friday’s demonstration would show “neurons firing in real-time… the matrix in the matrix.”

The device has already been tested on animals and human trials were originally planned for 2020, though it is not yet clear whether they have started.


A robot designed by Neuralink would insert the ‘threads’ into the brain using a needle


A fully implantable neural interface connects to the brain through tiny threads


Neuralink says learning to use the device is ‘like learning to touch type or play the piano’



In the build-up to Friday’s event, Musk has drip-fed details about Neuralink’s technology and the capabilities it could deliver to the people using it.

In a series of tweets last month, he said the chip “could extend the range of hearing beyond normal frequencies and amplitudes,” as well as allow wearers to stream music directly to their brain.

Other potential applications include regulating hormone levels and delivering “enhanced abilities” like greater reasoning and anxiety relief.

Earlier this month, scientists unconnected to Neuralink unveiled a new bio-synthetic material that they claim could be used to help integrate electronics with the human body.

The breakthrough could help achieve Musk’s ambition of augmenting human intelligence and abilities, which he claims is necessary to allow humanity to compete with advanced artificial intelligence.

He claims that humans risk being overtaken by AI within the next five years, and that AI could eventually view us in the same way we currently view house pets.

“I don’t love the idea of being a house cat, but what’s the solution?” he said in 2016, just months before he founded Neuralink. “I think one of the solutions that seems maybe the best is to add an AI layer.”

https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-neuralink-brain-computer-chip-ai-event-when-a9688966.html

How a New AI Translated Brain Activity to Speech With 97 Percent Accuracy

By Edd Gent

The idea of a machine that can decode your thoughts might sound creepy, but for thousands of people who have lost the ability to speak due to disease or disability it could be game-changing. Even for the able-bodied, being able to type out an email by just thinking or sending commands to your digital assistant telepathically could be hugely useful.

That vision may have come a step closer after researchers at the University of California, San Francisco demonstrated that they could translate brain signals into complete sentences with error rates as low as three percent, which is below the threshold for professional speech transcription.

While we’ve been able to decode parts of speech from brain signals for around a decade, so far most of the solutions have been a long way from consistently translating intelligible sentences. Last year, researchers used a novel approach that achieved some of the best results so far by using brain signals to animate a simulated vocal tract, but only 70 percent of the words were intelligible.

The key to the improved performance achieved by the authors of the new paper in Nature Neuroscience was their realization that there were strong parallels between translating brain signals to text and machine translation between languages using neural networks, which is now highly accurate for many languages.

While most efforts to decode brain signals have focused on identifying neural activity that corresponds to particular phonemes—the distinct chunks of sound that make up words—the researchers decided to mimic machine translation, where the entire sentence is translated at once. This has proven a powerful approach; as certain words are always more likely to appear close together, the system can rely on context to fill in any gaps.

The team used the same encoder-decoder approach commonly used for machine translation, in which one neural network analyzes the input signal—normally text, but in this case brain signals—to create a representation of the data, and then a second neural network translates this into the target language.
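To make the analogy concrete, here is a minimal encoder-decoder sketch in PyTorch, with the encoder consuming per-timestep neural-signal features instead of source-language tokens and the decoder emitting word tokens. The layer sizes, feature dimension, and vocabulary size are illustrative assumptions, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class BrainToTextSeq2Seq(nn.Module):
    """Minimal encoder-decoder sketch: neural-signal frames in, word tokens out.
    Dimensions below are illustrative assumptions, not the paper's values."""
    def __init__(self, n_channels=256, hidden=512, vocab_size=250):
        super().__init__()
        # Encoder reads a sequence of per-timestep electrode features.
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        # Decoder emits one word token at a time, conditioned on the encoder state.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, signals, target_tokens):
        # signals: (batch, time, n_channels); target_tokens: (batch, sentence_len)
        _, state = self.encoder(signals)          # summarize the neural recording
        dec_in = self.embed(target_tokens)        # target words fed in during training
        dec_out, _ = self.decoder(dec_in, state)  # decode the whole sentence at once
        return self.out(dec_out)                  # (batch, sentence_len, vocab_size)

# Illustrative usage with random data standing in for real recordings.
model = BrainToTextSeq2Seq()
fake_signals = torch.randn(8, 400, 256)          # 8 sentences, 400 time steps
fake_tokens = torch.randint(0, 250, (8, 12))     # 12-word target sentences
logits = model(fake_signals, fake_tokens)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 250), fake_tokens.reshape(-1))
```

The property the sketch shares with the paper’s approach is that the decoder produces a whole sentence conditioned on a summary of the recording, so sentence-level context can help fill in ambiguous signals.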

They trained their system using brain activity recorded from four women with electrodes implanted in their brains to monitor seizures as they read out a set of 50 sentences containing 250 unique words. This allowed the first network to work out which neural activity correlated with which parts of speech.

In testing, it relied only on the neural signals and was able to achieve error rates of below eight percent on two out of the four subjects, which matches the kinds of accuracy achieved by professional transcribers.

Inevitably, there are caveats. Firstly, the system was only able to decode 30-50 specific sentences using a limited vocabulary of 250 words. It also requires people to have electrodes implanted in their brains, which is currently only permitted for a limited number of highly specific medical reasons. However, there are a number of signs that this direction holds considerable promise.

One concern was that because the system was being tested on sentences that were included in its training data, it might simply be learning to match specific sentences to specific neural signatures. That would suggest it wasn’t really learning the constituent parts of speech, which would make it harder to generalize to unfamiliar sentences.

But when the researchers added another set of recordings to the training data that were not included in testing, it reduced error rates significantly, suggesting that the system is learning sub-sentence information like words.

They also found that pre-training the system on data from the volunteer that achieved the highest accuracy before training on data from one of the worst performers significantly reduced error rates. This suggests that in practical applications, much of the training could be done before the system is given to the end user, and they would only have to fine-tune it to the quirks of their brain signals.

The vocabulary of such a system is likely to improve considerably as people build upon this approach—but even a limited palette of 250 words could be incredibly useful to a paraplegic, and could likely be tailored to a specific set of commands for telepathic control of other devices.

Now the ball is back in the court of the scrum of companies racing to develop the first practical neural interfaces.

How a New AI Translated Brain Activity to Speech With 97 Percent Accuracy

Powerful antibiotics discovered using AI


Machine learning spots molecules that work even against ‘untreatable’ strains of bacteria.

by Jo Marchant

A pioneering machine-learning approach has identified powerful new types of antibiotic from a pool of more than 100 million molecules — including one that works against a wide range of bacteria, including tuberculosis and strains considered untreatable.

The researchers say the antibiotic, called halicin, is the first discovered with artificial intelligence (AI). Although AI has been used to aid parts of the antibiotic-discovery process before, they say that this is the first time it has identified completely new kinds of antibiotic from scratch, without using any previous human assumptions. The work, led by synthetic biologist Jim Collins at the Massachusetts Institute of Technology in Cambridge, is published in Cell [1].

The study is remarkable, says Jacob Durrant, a computational biologist at the University of Pittsburgh, Pennsylvania. The team didn’t just identify candidates, but also validated promising molecules in animal tests, he says. What’s more, the approach could also be applied to other types of drug, such as those used to treat cancer or neurodegenerative diseases, says Durrant.

Bacterial resistance to antibiotics is rising dramatically worldwide, and researchers predict that unless new drugs are developed urgently, resistant infections could kill ten million people per year by 2050. But over the past few decades, the discovery and regulatory approval of new antibiotics has slowed. “People keep finding the same molecules over and over,” says Collins. “We need novel chemistries with novel mechanisms of action.”

Forget your assumptions
Collins and his team developed a neural network — an AI algorithm inspired by the brain’s architecture — that learns the properties of molecules atom by atom.

The researchers trained the neural network to spot molecules that inhibit the growth of the bacterium Escherichia coli, using a collection of 2,335 molecules for which the antibacterial activity was known. This included a library of about 300 approved antibiotics, as well as 800 natural products from plant, animal and microbial sources.

The algorithm learns to predict molecular function without any assumptions about how drugs work and without chemical groups being labelled, says Regina Barzilay, an AI researcher at MIT and a co-author of the study. “As a result, the model can learn new patterns unknown to human experts.”

Once the model was trained, the researchers used it to screen a library called the Drug Repurposing Hub, which contains around 6,000 molecules under investigation for human diseases. They asked it to predict which would be effective against E. coli, and to show them only molecules that look different from conventional antibiotics.
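As a rough illustration of this train-then-screen workflow, the sketch below fits a classifier on molecules labelled by whether they inhibit E. coli growth and then ranks an unlabelled library by predicted activity. It substitutes Morgan fingerprints and a random forest for the study’s graph-based neural network, which learns atom-level representations directly, and the file and column names are assumptions.

```python
# Sketch of a train-then-screen antibiotic discovery loop.
# Morgan fingerprints + a random forest stand in for the study's graph
# neural network; file and column names are hypothetical.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles_list, n_bits=2048):
    """Convert SMILES strings into fixed-length fingerprint vectors."""
    fps = []
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
        fps.append(np.array(fp))
    return np.vstack(fps)

# 1. Train on molecules with known anti-E. coli activity (hypothetical file).
train = pd.read_csv("training_molecules.csv")      # columns: smiles, inhibits_growth
X_train = featurize(train["smiles"])
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, train["inhibits_growth"])

# 2. Screen an unlabelled library and rank by predicted activity (hypothetical file).
library = pd.read_csv("repurposing_hub.csv")       # column: smiles
scores = model.predict_proba(featurize(library["smiles"]))[:, 1]
top_hits = library.assign(score=scores).sort_values("score", ascending=False).head(100)
```

The final step mirrors the study’s selection of roughly 100 top-ranked candidates for physical testing, though the actual ranking also filtered for molecules that look structurally different from known antibiotics.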

From the resulting hits, the researchers selected about 100 candidates for physical testing. One of these — a molecule being investigated as a diabetes treatment — turned out to be a potent antibiotic, which they called halicin after HAL, the intelligent computer in the film 2001: A Space Odyssey. In tests in mice, this molecule was active against a wide spectrum of pathogens, including a strain of Clostridioides difficile and one of Acinetobacter baumannii that is ‘pan-resistant’ and against which new antibiotics are urgently required.

Proton block
Antibiotics work through a range of mechanisms, such as blocking the enzymes involved in cell-wall biosynthesis, DNA repair or protein synthesis. But halicin’s mechanism is unconventional: it disrupts the flow of protons across a cell membrane. In initial animal tests, it also seemed to have low toxicity and be robust against resistance. In experiments, resistance to other antibiotic compounds typically arises within a day or two, says Collins. “But even after 30 days of such testing we didn’t see any resistance against halicin.”

The team then screened more than 107 million molecular structures in a database called ZINC15. From a shortlist of 23, physical tests identified 8 with antibacterial activity. Two of these had potent activity against a broad range of pathogens, and could overcome even antibiotic-resistant strains of E. coli.

The study is “a great example of the growing body of work using computational methods to discover and predict properties of potential drugs”, says Bob Murphy, a computational biologist at Carnegie Mellon University in Pittsburgh. He notes that AI methods have previously been developed to mine huge databases of genes and metabolites to identify molecule types that could include new antibiotics [2,3].

But Collins and his team say that their approach is different — rather than search for specific structures or molecular classes, they’re training their network to look for molecules with a particular activity. The team is now hoping to partner with an outside group or company to get halicin into clinical trials. It also wants to broaden the approach to find more new antibiotics, and design molecules from scratch. Barzilay says their latest work is a proof of concept. “This study puts it all together and demonstrates what it can do.”

doi: 10.1038/d41586-020-00018-3
References
1. Stokes, J. M. et al. Cell https://doi.org/10.1016/j.cell.2020.01.021 (2020).

https://www.nature.com/articles/d41586-020-00018-3?utm_source=Nature+Briefing&utm_campaign=f680a1d26d-briefing-dy-20200221&utm_medium=email&utm_term=0_c9dfd39373-f680a1d26d-44039353

AI at Case Western Reserve lab predicts which pre-malignant breast lesions will progress to invasive cancer

New research at Case Western Reserve University could help better determine which patients diagnosed with the pre-malignant breast cancer commonly referred to as stage 0 are likely to progress to invasive breast cancer and therefore might benefit from additional therapy over and above surgery alone.

Once a lumpectomy of breast tissue reveals this pre-cancerous tumor, most women have surgery to remove the remainder of the affected tissue and some are given radiation therapy as well, said Anant Madabhushi, the F. Alex Nason Professor II of Biomedical Engineering at the Case School of Engineering.

“Current testing places patients in high risk, low risk and indeterminate risk—but then treats those indeterminates with radiation, anyway,” said Madabhushi, whose Center for Computational Imaging and Personalized Diagnostics (CCIPD) conducted the new research. “They err on the side of caution, but we’re saying that it appears that it should go the other way—the middle should be classified with the lower risk.

“In short, we’re probably overtreating patients,” Madabhushi continued. “That goes against prevailing wisdom, but that’s what our analysis is finding.”

The most common breast cancer

Stage 0 breast cancer is the most common type and known clinically as ductal carcinoma in situ (DCIS), indicating that the cancer cell growth starts in the milk ducts.

About 60,000 cases of DCIS are diagnosed in the United States each year, accounting for about one of every five new breast cancer cases, according to the American Cancer Society. According to the cancer society, people with this type of breast cancer, which has not spread beyond the breast tissue, live at least five years after diagnosis.

Lead researcher Haojia Li, a graduate student in the CCIPD, used a computer program to analyze the spatial architecture, texture and orientation of the individual cells and nuclei from scanned and digitized lumpectomy tissue samples from 62 DCIS patients.

The result: Both the size and orientation of the tumors characterized as “indeterminate” were actually much closer to those confirmed as low risk for recurrence by an expensive genetic test called Oncotype DX.

Li then validated these features, which distinguished the low- and high-risk Oncotype groups, by using them to predict the likelihood of progression from DCIS to invasive ductal carcinoma in an independent set of 30 patients.
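A minimal sketch of this kind of analysis, under the assumption that nuclei have already been segmented from the digitized slides, might extract per-nucleus shape and orientation statistics and feed them to a classifier. The feature set and model below are illustrative, not the pipeline published in Breast Cancer Research.

```python
# Sketch of nucleus-level feature extraction and risk classification.
# Assumes nuclei are already segmented into a labelled mask; the feature
# set and classifier are illustrative placeholders.
import numpy as np
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def slide_features(nucleus_mask):
    """Aggregate per-nucleus shape and orientation statistics for one slide."""
    props = regionprops(label(nucleus_mask))
    areas = np.array([p.area for p in props])
    orientations = np.array([p.orientation for p in props])
    eccentricities = np.array([p.eccentricity for p in props])
    return np.array([
        areas.mean(), areas.std(),
        eccentricities.mean(), eccentricities.std(),
        np.abs(orientations).std(),   # low spread = locally aligned nuclei
    ])

# masks: list of binary nucleus masks per patient; labels: 0 = low risk, 1 = high risk
# (both hypothetical inputs standing in for the digitized lumpectomy samples).
def train_risk_model(masks, labels):
    X = np.vstack([slide_features(m) for m in masks])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X, labels)
    return clf
```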

“This could be a tool for determining who really needs the radiation, or who needs the gene test, which is also very expensive,” she said.

The research led by Li was published Oct. 17 in the journal Breast Cancer Research.

Madabhushi established the CCIPD at Case Western Reserve in 2012. The lab, which now includes nearly 60 researchers, has become a global leader in the detection, diagnosis and characterization of various cancers and other diseases, including breast cancer, by meshing medical imaging, machine learning and artificial intelligence (AI).

Some of the lab’s most recent work, in collaboration with New York University and Yale University, has used AI to predict which lung cancer patients would benefit from adjuvant chemotherapy based on tissue slide images. That advancement was named by Prevention Magazine as one of the top 10 medical breakthroughs of 2018.

AI at Case Western Reserve lab predicts which pre-malignant breast lesions will progress to invasive cancer

Robot priests can bless you, advise you, and even perform your funeral

By Sigal Samuel

A new priest named Mindar is holding forth at Kodaiji, a 400-year-old Buddhist temple in Kyoto, Japan. Like other clergy members, this priest can deliver sermons and move around to interface with worshippers. But Mindar comes with some … unusual traits. A body made of aluminum and silicone, for starters.

Mindar is a robot.

Designed to look like Kannon, the Buddhist deity of mercy, the $1 million machine is an attempt to reignite people’s passion for their faith in a country where religious affiliation is on the decline.

For now, Mindar is not AI-powered. It just recites the same preprogrammed sermon about the Heart Sutra over and over. But the robot’s creators say they plan to give it machine-learning capabilities that’ll enable it to tailor feedback to worshippers’ specific spiritual and ethical problems.

“This robot will never die; it will just keep updating itself and evolving,” said Tensho Goto, the temple’s chief steward. “With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It’s changing Buddhism.”

Robots are changing other religions, too. In 2017, Indians rolled out a robot that performs the Hindu aarti ritual, which involves moving a light round and round in front of a deity. That same year, in honor of the Protestant Reformation’s 500th anniversary, Germany’s Protestant Church created a robot called BlessU-2. It gave preprogrammed blessings to over 10,000 people.

Then there’s SanTO — short for Sanctified Theomorphic Operator — a 17-inch-tall robot reminiscent of figurines of Catholic saints. If you tell it you’re worried, it’ll respond by saying something like, “From the Gospel according to Matthew, do not worry about tomorrow, for tomorrow will worry about itself. Each day has enough trouble of its own.”

Roboticist Gabriele Trovato designed SanTO to offer spiritual succor to elderly people whose mobility and social contact may be limited. Next, he wants to develop devices for Muslims, though it remains to be seen what form those might take.

As more religious communities begin to incorporate robotics — in some cases, AI-powered and in others, not — it stands to change how people experience faith. It may also alter how we engage in ethical reasoning and decision-making, which is a big part of religion.

For the devout, there’s plenty of positive potential here: Robots can get disinterested people curious about religion or allow for a ritual to be performed when a human priest is inaccessible. But robots also pose risks for religion — for example, by making it feel too mechanized or homogenized or by challenging core tenets of theology. On the whole, will the emergence of AI religion make us better or worse off? The answer depends on how we design and deploy it — and on whom you ask.

Some cultures are more open to religious robots than others
New technologies often make us uncomfortable. Which ones we ultimately accept — and which ones we reject — is determined by an array of factors, ranging from our degree of exposure to the emerging technology to our moral presuppositions.

Japanese worshippers who visit Mindar are reportedly not too bothered by questions about the risks of siliconizing spirituality. That makes sense given that robots are already so commonplace in the country, including in the religious domain.

For years now, people who can’t afford to pay a human priest to perform a funeral have had the option to pay a robot named Pepper to do it at a much cheaper rate. And in China, at Beijing’s Longquan Monastery, an android monk named Xian’er recites Buddhist mantras and offers guidance on matters of faith.

What’s more, Buddhism’s non-dualistic metaphysical notion that everything has inherent “Buddha nature” — that all beings have the potential to become enlightened — may predispose its adherents to be receptive to spiritual guidance that comes from technology.

At the temple in Kyoto, Goto put it like this: “Buddhism isn’t a belief in a God; it’s pursuing Buddha’s path. It doesn’t matter whether it’s represented by a machine, a piece of scrap metal, or a tree.”

“Mindar’s metal skeleton is exposed, and I think that’s an interesting choice — its creator, Hiroshi Ishiguro, is not trying to make something that looks totally human,” said Natasha Heller, an associate professor of Chinese religions at the University of Virginia. She told me the deity Kannon, upon whom Mindar is based, is an ideal candidate for cyborgization because the Lotus Sutra explicitly says Kannon can manifest in different forms — whatever forms will best resonate with the humans of a given time and place.

Westerners seem more disturbed by Mindar, likening it to Frankenstein’s monster. In Western economies, we don’t yet have robots enmeshed in many aspects of our lives. What we do have is a pervasive cultural narrative, reinforced by Hollywood blockbusters, about our impending enslavement at the hands of “robot overlords.”

Plus, Abrahamic religions like Islam or Judaism tend to be more metaphysically dualistic — there’s the sacred and then there’s the profane. And they have more misgivings than Buddhism about visually depicting divinity, so they may take issue with Mindar-style iconography.

They also have different ideas about what makes a religious practice effective. For example, Judaism places a strong emphasis on intentionality, something machines don’t possess. When a worshipper prays, what matters is not just that their mouth forms the right words — it’s also very important that they have the right intention.

Meanwhile, some Buddhists use prayer wheels containing scrolls printed with sacred words and believe that spinning the wheel has its own spiritual efficacy, even if nobody recites the words aloud. In hospice settings, elderly Buddhists who don’t have people on hand to recite prayers on their behalf will use devices known as nianfo ji — small machines about the size of an iPhone, which recite the name of the Buddha endlessly.

Despite such theological differences, the knee-jerk negative reaction many Westerners have to a robot like Mindar is ironic. The dream of creating artificial life goes all the way back to ancient Greece, where the ancients actually invented real animated machines, as the Stanford classicist Adrienne Mayor has documented in her book Gods and Robots. And there is a long tradition of religious robots in the West.

In the Middle Ages, Christians designed automata to perform the mysteries of Easter and Christmas. One proto-roboticist in the 16th century designed a mechanical monk that is, amazingly, performing ritual gestures to this day. With his right arm, he strikes his chest in a mea culpa; with his left, he raises a rosary to his lips.

In other words, the real novelty is not the use of robots in the religious domain but the use of AI.

How AI may change our theology and ethics
Even as our theology shapes the AI we create and embrace, AI will also shape our theology. It’s a two-way street.

Some people believe AI will force a truly momentous change in theology, because if humans create intelligent machines with free will, we’ll eventually have to ask whether they have something functionally similar to a soul.

“There will be a point in the future when these free-willed beings that we’ve made will say to us, ‘I believe in God. What do I do?’ At that point, we should have a response,” said Kevin Kelly, a Christian co-founder of Wired magazine who argues we need to develop “a catechism for robots.”

Other people believe that, rather than seeking to join a human religion, AI itself will become an object of worship. Anthony Levandowski, the Silicon Valley engineer who triggered a major Uber/Waymo lawsuit, has set up the first church of artificial intelligence, called Way of the Future. Levandowski’s new religion is dedicated to “the realization, acceptance, and worship of a Godhead based on artificial intelligence (AI) developed through computer hardware and software.”

Meanwhile, Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me AI may also force a traditional religion like Catholicism to reimagine its understanding of human priests as divinely called and consecrated — a status that grants them special authority.

“The Catholic notion would say the priest is ontologically changed upon ordination. Is that really true?” she asked. Maybe priestliness is not an esoteric essence but a programmable trait that even a “fallen” creation like a robot can embody. “We have these fixed philosophical ideas and AI challenges those ideas — it challenges Catholicism to move toward a post-human priesthood.” (For now, she joked, a robot would probably do better as a Protestant.)

Then there are questions about how robotics will change our religious experiences. Traditionally, those experiences are valuable in part because they leave room for the spontaneous and surprising, the emotional and even the mystical. That could be lost if we mechanize them.

To visualize an automated ritual, take a look at the video accompanying the original article of a robotic arm performing a Hindu aarti ceremony.

Another risk has to do with how an AI priest would handle ethical queries and decision-making. Robots whose algorithms learn from previous data may nudge us toward decisions based on what people have done in the past, incrementally homogenizing answers to our queries and narrowing the scope of our spiritual imagination.

That risk also exists with human clergy, Heller pointed out: “The clergy is bounded too — there’s already a built-in nudging or limiting factor, even without AI.”

But AI systems can be particularly problematic in that they often function as black boxes. We typically don’t know what sorts of biases are coded into them or what sorts of human nuance and context they’re failing to understand.

Let’s say you tell a robot you’re feeling depressed because you’re unemployed and broke, and the only job that’s available to you seems morally odious. Maybe the robot responds by reciting a verse from Proverbs 14: “In all toil there is profit, but mere talk tends only to poverty.” Even if it doesn’t presume to interpret the verse for you, in choosing that verse it’s already doing hidden interpretational work. It’s analyzing your situation and algorithmically determining a recommendation — in this case, one that may prompt you to take the job.

But perhaps it would’ve worked out better for you if the robot had recited a verse from Proverbs 16: “Commit your work to the Lord, and your plans will be established.” Maybe that verse would prompt you to pass on the morally dubious job, and, being a sensitive soul, you’ll later be happy you did. Or maybe your depression is severe enough that the job issue is somewhat beside the point and the crucial thing is for you to seek out mental health treatment.

A human priest who knows your broader context as a whole person may gather this and give you the right recommendation. An android priest might miss the nuances and just respond to the localized problem as you’ve expressed it.

The fact is human clergy members do so much more than provide answers. They serve as the anchor for a community, bringing people together. They offer pastoral care. And they provide human contact, which is in danger of becoming a luxury good as we create robots to more cheaply do the work of people.

On the other hand, Delio said, robots can excel in a social role in some ways that human priests might not. “Take the Catholic Church. It’s very male, very patriarchal, and we have this whole sexual abuse crisis. So would I want a robot priest? Maybe!” she said. “A robot can be gender-neutral. It might be able to transcend some of those divides and be able to enhance community in a way that’s more liberating.”

Ultimately, in religion as in other domains, robots and humans are perhaps best understood not as competitors but as collaborators. Each offers something the other lacks.

As Delio put it, “We tend to think in an either/or framework: It’s either us or the robots. But this is about partnership, not replacement. It can be a symbiotic relationship — if we approach it that way.”

https://www.vox.com/future-perfect/2019/9/9/20851753/ai-religion-robot-priest-mindar-buddhism-christianity

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Self-Taught AI Masters Rubik’s Cube Without Human Help

by George Dvorsky

Fancy algorithms capable of solving a Rubik’s Cube have appeared before, but a new system from the University of California, Irvine uses artificial intelligence to solve the 3D puzzle from scratch and without any prior help from humans—and it does so with impressive speed and efficiency.

New research published this week in Nature Machine Intelligence describes DeepCubeA, a system capable of solving any jumbled Rubik’s Cube it’s presented with. More impressively, it can find the most efficient path to success—that is, the solution requiring the fewest moves—around 60 percent of the time. On average, DeepCubeA needed just 28 moves to solve the puzzle, requiring 1.2 seconds to calculate the solution.

Sounds fast, but other systems have solved the 3D puzzle in less time, including a robot that can solve the Rubik’s cube in just 0.38 seconds. But these systems were specifically designed for the task, using human-scripted algorithms to solve the puzzle in the most efficient manner possible. DeepCubeA, on the other hand, taught itself to solve Rubik’s Cube using an approach to artificial intelligence known as reinforcement learning.

“Artificial intelligence can defeat the world’s best human chess and Go players, but some of the more difficult puzzles, such as the Rubik’s Cube, had not been solved by computers, so we thought they were open for AI approaches,” said Pierre Baldi, the senior author of the new paper, in a press release. “The solution to the Rubik’s Cube involves more symbolic, mathematical and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan and make decisions.”

Indeed, an expert system designed for one task and one task only—like solving a Rubik’s Cube—will forever be limited to that domain, but a system like DeepCubeA, with its highly adaptable neural net, could be leveraged for other tasks, such as solving complex scientific, mathematical, and engineering problems. What’s more, this system “is a small step toward creating agents that are able to learn how to think and plan for themselves in new environments,” Stephen McAleer, a co-author of the new paper, told Gizmodo.

Reinforcement learning works the way it sounds. Systems are motivated to achieve a designated goal, during which time they gain points for deploying successful actions or strategies, and lose points for straying off course. This allows the algorithms to improve over time, and without human intervention.

Reinforcement learning makes sense for a Rubik’s Cube, owing to the hideous number of possible combinations on the 3x3x3 puzzle, which amount to around 43 quintillion. Simply choosing random moves in the hope of solving the cube is not going to work, for humans or for the world’s most powerful supercomputers.

DeepCubeA is not the first kick at the can for these University of California, Irvine researchers. Their earlier system, called DeepCube, used a conventional tree-search strategy and a reinforcement learning scheme similar to the one employed by DeepMind’s AlphaZero. But while this approach works well for one-on-one board games like chess and Go, it proved clumsy for Rubik’s Cube. In tests, the DeepCube system required too much time to make its calculations, and its solutions were often far from ideal.

The UCI team used a different approach with DeepCubeA. Starting with a solved cube, the system made random moves to scramble the puzzle. Basically, it learned to be proficient at Rubik’s Cube by playing it in reverse. At first the moves were few, but the jumbled state got more and more complicated as training progressed. In all, DeepCubeA played 10 billion different combinations in two days as it worked to solve the cube in less than 30 moves.
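The sketch below illustrates that reverse-scramble training idea under simplifying assumptions: a value network learns to estimate how many moves a state is from solved, with targets bootstrapped from the cheapest neighboring state, in the spirit of approximate value iteration. The cube helper functions are hypothetical placeholders and the network size is arbitrary; this is not the authors’ exact DeepCubeA implementation.

```python
# Simplified sketch of training a cost-to-go network by scrambling backwards
# from the solved cube, in the spirit of approximate value iteration.
# The caller supplies the cube environment (scramble, neighbors, is_solved,
# encode); those helpers and the sizes below are hypothetical placeholders.
import random
import torch
import torch.nn as nn

def train_value_net(solved_state, scramble, neighbors, is_solved, encode,
                    state_dim=324, steps=100_000, max_scramble=30):
    value_net = nn.Sequential(           # estimates moves-to-solve for a state
        nn.Linear(state_dim, 1024), nn.ReLU(),
        nn.Linear(1024, 1),
    )
    optimizer = torch.optim.Adam(value_net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for _ in range(steps):
        # Generate a training state by applying k random moves to the solved cube.
        k = random.randint(1, max_scramble)
        state = scramble(solved_state, k)

        # Bootstrapped target: 0 if solved, else 1 move plus the cheapest
        # estimated cost among the state's neighbors.
        if is_solved(state):
            target = torch.zeros(1)
        else:
            with torch.no_grad():
                costs = torch.stack([value_net(encode(n)) for n in neighbors(state)])
            target = 1.0 + costs.min().reshape(1)

        loss = loss_fn(value_net(encode(state)).reshape(1), target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return value_net
```

At solve time, DeepCubeA pairs a learned cost-to-go estimate of this kind with an A*-style search to find short solution paths; the sketch covers only the training side.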

“DeepCubeA attempts to solve the cube using the least number of moves,” explained McAleer. “Consequently, the moves tend to look much different from how a human would solve the cube.”

After training, the system was tasked with solving 1,000 randomly scrambled Rubik’s Cubes. In tests, DeepCubeA found a solution to 100 percent of all cubes, and it found a shortest path to the goal state 60.3 percent of the time. The system required 28 moves on average to solve the cube, which it did in about 1.2 seconds. By comparison, the fastest human puzzle solvers require around 50 moves.

“Since we found that DeepCubeA is solving the cube in the fewest moves 60 percent of the time, it’s pretty clear that the strategy it is using is close to the optimal strategy, colloquially referred to as God’s algorithm,” study co-author Forest Agostinelli told Gizmodo. “While human strategies are easily explainable with step-by-step instructions, defining an optimal strategy often requires sophisticated knowledge of group theory and combinatorics. Though mathematically defining this strategy is not in the scope of this paper, we can see that the strategy DeepCubeA is employing is one that is not readily obvious to humans.”

To showcase the flexibility of the system, DeepCubeA was also taught to solve other puzzles, including sliding-tile puzzle games, Lights Out, and Sokoban, which it did with similar proficiency.

https://gizmodo.com/self-taught-ai-masters-rubik-s-cube-without-human-help-1836420294

Artificial intelligence singles out neurons faster than a human can


Two-photon imaging shows neurons firing in a mouse brain. Recordings like this enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors. Image credit: Yiyang Gong, Duke University.

Summary: Convolutional neural network model significantly outperforms previous methods and is as accurate as humans in segmenting active and overlapping neurons.

Source: Duke University

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, based on using artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals. These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process. Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is fussy and slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time — data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In their framework, this team created an algorithm that could process both spatial and timing information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving the accuracy.
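As a rough illustration of combining spatial and timing information, the sketch below defines a small 3D convolutional network in PyTorch that maps a block of video frames to a per-pixel probability of containing an active neuron. The architecture and sizes are illustrative assumptions, not the network described in the paper.

```python
# Minimal sketch of a spatio-temporal segmentation network: a stack of video
# frames goes in, a per-pixel probability of "active neuron" comes out.
# Layer counts and sizes are illustrative, not the published architecture.
import torch
import torch.nn as nn

class NeuronSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions mix information across x, y, and time.
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Collapse the time axis, then predict one mask value per pixel.
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, video):
        # video: (batch, 1, frames, height, width)
        x = self.features(video)
        x = x.mean(dim=2)                    # average over the time dimension
        return torch.sigmoid(self.head(x))   # (batch, 1, height, width)

# Illustrative run on random data standing in for a two-photon recording clip.
clip = torch.randn(1, 1, 16, 128, 128)   # 16 frames of 128x128 pixels
mask = NeuronSegmenter()(clip)           # values in [0, 1] per pixel
```

Training such a model to mimic human analysts would then come down to minimizing a pixel-wise loss, such as binary cross-entropy, against the human-drawn masks.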

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.

Artificial intelligence singles out neurons faster than a human can

AI and MRIs at birth can predict cognitive development at age 2


Researchers at the University of North Carolina School of Medicine used MRI brain scans taken at birth and machine learning techniques to predict cognitive development at age 2 with 95 percent accuracy.

“This prediction could help identify children at risk for poor cognitive development shortly after birth with high accuracy,” said senior author John H. Gilmore, MD, Thad and Alice Eure Distinguished Professor of psychiatry and director of the UNC Center for Excellence in Community Mental Health. “For these children, an early intervention in the first year or so of life – when cognitive development is happening – could help improve outcomes. For example, in premature infants who are at risk, one could use imaging to see who could have problems.”

The study, which was published online by the journal NeuroImage, used an application of artificial intelligence called machine learning to look at white matter connections in the brain at birth and the ability of these connections to predict cognitive outcomes.

Gilmore said researchers at UNC and elsewhere are working to find imaging biomarkers of risk for poor cognitive outcomes and for risk of neuropsychiatric conditions such as autism and schizophrenia. In this study, the researchers replicated the initial finding in a second sample of children who were born prematurely.

“Our study finds that the white matter network at birth is highly predictive and may be a useful imaging biomarker. The fact that we could replicate the findings in a second set of children provides strong evidence that this may be a real and generalizable finding,” he said.
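As a schematic of what such a prediction pipeline can look like, the sketch below flattens each subject’s white-matter connectivity matrix into a feature vector, fits a regularized regression model, and then checks generalization on an independent replication sample. The data layout and model choice are assumptions for illustration; the study’s actual methodology is described in the NeuroImage paper.

```python
# Schematic of predicting a cognitive score from newborn white-matter
# connectomes. Inputs and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def vectorize_connectome(matrix):
    """Flatten the upper triangle of a symmetric region-by-region
    connectivity matrix into a feature vector."""
    idx = np.triu_indices_from(matrix, k=1)
    return matrix[idx]

# connectomes: (n_subjects, n_regions, n_regions) connection strengths at birth;
# scores: cognitive scores at age 2. Both are hypothetical placeholders.
def fit_and_evaluate(connectomes, scores, replication_connectomes, replication_scores):
    X = np.vstack([vectorize_connectome(c) for c in connectomes])
    model = RidgeCV(alphas=np.logspace(-3, 3, 13))
    cv_r2 = cross_val_score(model, X, scores, cv=5, scoring="r2")
    model.fit(X, scores)

    # Test generalization on an independent (e.g., preterm) replication sample.
    X_rep = np.vstack([vectorize_connectome(c) for c in replication_connectomes])
    rep_r2 = model.score(X_rep, replication_scores)
    return cv_r2.mean(), rep_r2
```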

Jessica B. Girault, PhD, a postdoctoral researcher at the Carolina Institute for Developmental Disabilities, is the study’s lead author. UNC co-authors are Barbara D. Goldman, PhD, of UNC’s Frank Porter Graham Child Development Institute, Juan C. Prieto, PhD, assistant professor, and Martin Styner, PhD, director of the Neuro Image Research and Analysis Laboratory in the department of psychiatry.

AI and MRIs at birth can predict cognitive development at age 2

China created what it claims is the first AI news anchor

by Isobel Asher Hamilton

– China’s state press agency has developed what it calls “AI news anchors,” avatars of real-life news presenters that read out news as it is typed.

– It developed the anchors with the Chinese search-engine giant Sogou.

– No details were given as to how the anchors were made, and one expert said they fell into the “uncanny valley,” in which avatars have an unsettling resemblance to humans.

China’s state-run press agency, Xinhua, has unveiled what it claims are the world’s first news anchors generated by artificial intelligence.

Xinhua revealed two virtual anchors at the World Internet Conference on Thursday. Both were modeled on real presenters, one speaking Chinese and the other English.

“AI anchors have officially become members of the Xinhua News Agency reporting team,” Xinhua told the South China Morning Post. “They will work with other anchors to bring you authoritative, timely, and accurate news information in both Chinese and English.”

In a post, Xinhua said the generated anchors could work “24 hours a day” on its website and various social-media platforms, “reducing news production costs and improving efficiency.”

Xinhua developed the virtual anchors with Sogou, China’s second-biggest search engine. No details were given about how they were made.

Though Xinhua presents the avatars as independently learning from “live broadcasting videos,” the avatars do not appear to rely on true artificial intelligence, as they simply read text written by humans.

“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted,” the English-speaking anchor says in its first video, using a synthesized voice.

The Oxford computer-science professor Michael Wooldridge told the BBC that the anchor fell into the “uncanny valley,” in which avatars or objects that closely but do not fully resemble humans make observers more uncomfortable than ones that are more obviously artificial.

https://www.businessinsider.com/ai-news-anchor-created-by-china-xinhua-news-agency-2018-11

Artificial Intelligence Can Predict Your Personality By Simply Tracking Your Eyes


Researchers have developed a new deep learning algorithm that can reveal your personality type, based on the Big Five personality trait model, by simply tracking eye movements.

It’s often been said that the eyes are the window to the soul, revealing what we think and how we feel. Now, new research reveals that your eyes may also be an indicator of your personality type, simply by the way they move.

Developed by the University of South Australia in partnership with the University of Stuttgart, Flinders University and the Max Planck Institute for Informatics in Germany, the research uses state-of-the-art machine-learning algorithms to demonstrate a link between personality and eye movements.

Findings show that people’s eye movements reveal whether they are sociable, conscientious or curious, with the software reliably recognising four of the Big Five personality traits: neuroticism, extroversion, agreeableness, and conscientiousness.

Researchers tracked the eye movements of 42 participants as they undertook everyday tasks around a university campus, and subsequently assessed their personality traits using well-established questionnaires.
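A minimal sketch of this kind of analysis appears below: each participant’s gaze recording is summarized into simple statistics (fixation durations, saccade sizes, blink rate) and a classifier is cross-validated against questionnaire-derived trait labels. The feature names, inputs, and model are illustrative assumptions, not the feature set or classifier used in the published study.

```python
# Sketch of predicting a personality trait from summary eye-movement features.
# Feature names, inputs, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_features(fixation_durations, saccade_amplitudes, blink_times, duration_s):
    """Summarize one participant's recording into a fixed-length vector."""
    return np.array([
        np.mean(fixation_durations), np.std(fixation_durations),
        np.mean(saccade_amplitudes), np.std(saccade_amplitudes),
        len(blink_times) / duration_s,         # blink rate
        len(fixation_durations) / duration_s,  # fixation rate
    ])

# recordings: list of per-participant gaze summaries (hypothetical inputs);
# trait_labels: e.g., high vs. low conscientiousness from questionnaires.
def evaluate(recordings, trait_labels):
    X = np.vstack([gaze_features(*r) for r in recordings])
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    # Cross-validated accuracy against the questionnaire-derived labels.
    return cross_val_score(clf, X, trait_labels, cv=5).mean()
```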

UniSA’s Dr Tobias Loetscher says the study provides new links between previously under-investigated eye movements and personality traits and delivers important insights for emerging fields of social signal processing and social robotics.

“There’s certainly the potential for these findings to improve human-machine interactions,” Dr Loetscher says.

“People are always looking for improved, personalised services. However, today’s robots and computers are not socially aware, so they cannot adapt to non-verbal cues.

“This research provides opportunities to develop robots and computers so that they can become more natural, and better at interpreting human social signals.”

Dr Loetscher says the findings also provide an important bridge between tightly controlled laboratory studies and the study of natural eye movements in real-world environments.

“This research has tracked and measured the visual behaviour of people going about their everyday tasks, providing more natural responses than if they were in a lab.

“And thanks to our machine-learning approach, we not only validate the role of personality in explaining eye movement in everyday life, but also reveal new eye movement characteristics as predictors of personality traits.”

Original research: “Eye Movements During Everyday Behavior Predict Personality Traits” by Sabrina Hoppe, Tobias Loetscher, Stephanie A. Morey and Andreas Bulling, Frontiers in Human Neuroscience, April 14, 2018. doi:10.3389/fnhum.2018.00105

Artificial Intelligence Can Predict Your Personality By Simply Tracking Your Eyes