Elon Musk’s Neuralink soon to reveal a working brain-computer chip for “human-AI symbiosis”

By Anthony Cuthbertson

Elon Musk has said he will demonstrate a functional brain-computer interface this week during a live presentation from his mysterious Neuralink startup.

The billionaire entrepreneur, who also heads SpaceX and Tesla, founded Neuralink in 2016 with the ultimate aim of merging artificial intelligence with the human brain.

Until now, there has only been one public event showing off the startup’s technology, during which Musk revealed a “sewing machine-like” device capable of stitching threads into a person’s head.

The procedure to implant the chip will eventually be similar in speed and efficiency to Lasik laser eye surgery, according to Musk, and will be performed by a robot.

The robot and the working brain chip will be unveiled during a live webcast at 3pm PT (11pm BST) on Friday, Musk tweeted on Tuesday night.

In response to a question on Twitter, he said that the comparison with laser eye surgery was still some way off. “Still far from Lasik, but could get pretty close in a few years,” he tweeted.

He also said that Friday’s demonstration would show “neurons firing in real-time… the matrix in the matrix.”

The device has already been tested on animals, and human trials were originally planned for 2020, though it is not yet clear whether they have started.


A robot designed by Neuralink would insert the ‘threads’ into the brain using a needle


A fully implantable neural interface connects to the brain through tiny threads


Neuralink says learning to use the device is ‘like learning to touch type or play the piano’

In the build-up to Friday’s event, Musk has drip-fed details about Neuralink’s technology and the capabilities it could deliver to people using it.

In a series of tweets last month, he said the chip “could extend the range of hearing beyond normal frequencies and amplitudes,” as well as allow wearers to stream music directly to their brain.

Other potential applications include regulating hormone levels and delivering “enhanced abilities” like greater reasoning and anxiety relief.

Earlier this month, scientists unconnected to Neuralink unveiled a new bio-synthetic material that they claim could be used to help integrate electronics with the human body.

The breakthrough could help achieve Musk’s ambition of augmenting human intelligence and abilities, which he claims is necessary to allow humanity to compete with advanced artificial intelligence.

He claims that humans risk being overtaken by AI within the next five years, and that AI could eventually view us in the same way we currently view house pets.

“I don’t love the idea of being a house cat, but what’s the solution?” he said in 2016, just months before he founded Neuralink. “I think one of the solutions that seems maybe the best is to add an AI layer.”

https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-neuralink-brain-computer-chip-ai-event-when-a9688966.html

Powerful antibiotics discovered using AI


Machine learning spots molecules that work even against ‘untreatable’ strains of bacteria.

by Jo Marchant

A pioneering machine-learning approach has identified powerful new types of antibiotic from a pool of more than 100 million molecules — including one that works against a wide range of bacteria, including tuberculosis and strains considered untreatable.

The researchers say the antibiotic, called halicin, is the first discovered with artificial intelligence (AI). Although AI has been used to aid parts of the antibiotic-discovery process before, they say that this is the first time it has identified completely new kinds of antibiotic from scratch, without using any previous human assumptions. The work, led by synthetic biologist Jim Collins at the Massachusetts Institute of Technology in Cambridge, is published in Cell [1].

The study is remarkable, says Jacob Durrant, a computational biologist at the University of Pittsburgh, Pennsylvania. The team didn’t just identify candidates, but also validated promising molecules in animal tests, he says. What’s more, the approach could also be applied to other types of drug, such as those used to treat cancer or neurodegenerative diseases, says Durrant.

Bacterial resistance to antibiotics is rising dramatically worldwide, and researchers predict that unless new drugs are developed urgently, resistant infections could kill ten million people per year by 2050. But over the past few decades, the discovery and regulatory approval of new antibiotics has slowed. “People keep finding the same molecules over and over,” says Collins. “We need novel chemistries with novel mechanisms of action.”

Forget your assumptions
Collins and his team developed a neural network — an AI algorithm inspired by the brain’s architecture — that learns the properties of molecules atom by atom.

The researchers trained their neural network to spot molecules that inhibit the growth of the bacterium Escherichia coli, using a collection of 2,335 molecules for which the antibacterial activity was known. This collection included a library of about 300 approved antibiotics, as well as 800 natural products from plant, animal and microbial sources.

The algorithm learns to predict molecular function without any assumptions about how drugs work and without chemical groups being labelled, says Regina Barzilay, an AI researcher at MIT and a co-author of the study. “As a result, the model can learn new patterns unknown to human experts.”

Once the model was trained, the researchers used it to screen a library called the Drug Repurposing Hub, which contains around 6,000 molecules under investigation for human diseases. They asked it to predict which would be effective against E. coli, and to show them only molecules that look different from conventional antibiotics.
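
The study’s actual model is a graph-based deep neural network that learns directly from molecular structure; the sketch below is only a hypothetical stand-in showing the same train, screen and filter workflow with Morgan fingerprints and a random forest. The molecules, labels and the 0.4 similarity cutoff are all placeholders.

```python
# Minimal sketch of a train-then-screen antibiotic-discovery workflow.
# NOT the authors' code: a fingerprint-based classifier stands in for
# their graph-based deep neural network.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles, n_bits=2048):
    """Encode a molecule (SMILES string) as a Morgan fingerprint bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)

# Hypothetical training set: molecules with measured E. coli growth inhibition
train_smiles = ["CCO", "c1ccccc1O", "CC(=O)O"]   # placeholder molecules
train_labels = [0, 1, 0]                          # 1 = inhibits growth

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit([np.array(fingerprint(s)) for s in train_smiles], train_labels)

# Screen a candidate library and rank by predicted probability of activity
library = ["CC(=O)Nc1ccc(O)cc1"]                  # e.g. repurposing-hub entries
scores = model.predict_proba([np.array(fingerprint(s)) for s in library])[:, 1]

# Novelty filter, as described above: keep only hits that look different
# from the known antibiotics in the training collection
known_fps = [fingerprint(s) for s in train_smiles]
def is_novel(smiles, cutoff=0.4):                 # cutoff is a placeholder
    fp = fingerprint(smiles)
    return all(DataStructs.TanimotoSimilarity(fp, k) < cutoff for k in known_fps)

hits = [(s, p) for s, p in zip(library, scores) if is_novel(s)]
```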

From the resulting hits, the researchers selected about 100 candidates for physical testing. One of these — a molecule being investigated as a diabetes treatment — turned out to be a potent antibiotic, which they called halicin after HAL, the intelligent computer in the film 2001: A Space Odyssey. In tests in mice, this molecule was active against a wide spectrum of pathogens, including a strain of Clostridioides difficile and one of Acinetobacter baumannii that is ‘pan-resistant’ and against which new antibiotics are urgently required.

Proton block
Antibiotics work through a range of mechanisms, such as blocking the enzymes involved in cell-wall biosynthesis, DNA repair or protein synthesis. But halicin’s mechanism is unconventional: it disrupts the flow of protons across a cell membrane. In initial animal tests, it also seemed to have low toxicity and be robust against resistance. In experiments, resistance to other antibiotic compounds typically arises within a day or two, says Collins. “But even after 30 days of such testing we didn’t see any resistance against halicin.”

The team then screened more than 107 million molecular structures in a database called ZINC15. From a shortlist of 23, physical tests identified 8 with antibacterial activity. Two of these had potent activity against a broad range of pathogens, and could overcome even antibiotic-resistant strains of E. coli.

The study is “a great example of the growing body of work using computational methods to discover and predict properties of potential drugs”, says Bob Murphy, a computational biologist at Carnegie Mellon University in Pittsburgh. He notes that AI methods have previously been developed to mine huge databases of genes and metabolites to identify molecule types that could include new antibiotics [2,3].

But Collins and his team say that their approach is different — rather than search for specific structures or molecular classes, they’re training their network to look for molecules with a particular activity. The team is now hoping to partner with an outside group or company to get halicin into clinical trials. It also wants to broaden the approach to find more new antibiotics, and design molecules from scratch. Barzilay says their latest work is a proof of concept. “This study puts it all together and demonstrates what it can do.”

doi: 10.1038/d41586-020-00018-3
References
1. Stokes, J. M. et al. Cell https://doi.org/10.1016/j.cell.2020.01.021 (2020).

https://www.nature.com/articles/d41586-020-00018-3

AI is learning how to use brain scans to predict the right antidepressant for patients

By Jason Arunn Murugesu

An AI can predict from people’s brainwaves whether an antidepressant is likely to help them. The technique may offer a new approach to prescribing medicines for mental illnesses.

Antidepressants don’t always work, and we aren’t sure why. “We have a central problem in psychiatry because we characterise diseases by their end point, such as what behaviours they cause,” says Amit Etkin at Stanford University in California. “You tell me you’re depressed, and I don’t know any more than that. I don’t really know what’s going on in the brain and we prescribe medication on very little information.”

Etkin wanted to find out if a machine-learning algorithm could predict from the brain scans of people diagnosed with depression who was most likely to respond to treatment with the antidepressant sertraline. The drug is typically effective in only a third of the people who take it.

He and his team gathered electroencephalogram (EEG) recordings showing the brainwaves of 228 people aged between 18 and 65 with depression. These individuals had previously tried antidepressants, but weren’t on such drugs at the start of the study.

Roughly half the participants were given sertraline, while the rest got a placebo. The researchers then monitored the participants’ mood over eight weeks, measuring any changes using a depression rating scale.

Brain activity patterns
By comparing the EEG recordings of those who responded well to the drug with those who didn’t, the machine-learning algorithm was able to identify a specific pattern of brain activity linked with a higher likelihood of finding sertraline helpful.
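
The study’s model and features are more sophisticated than anything shown here. As a purely hypothetical illustration of the general recipe (featurize each patient’s EEG, then cross-validate a classifier against treatment response), consider this sketch, in which the data are random placeholders:

```python
# Toy sketch of EEG-based response prediction (hypothetical data throughout;
# this is not the study's model, features, or results).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_features = 228, 64          # e.g. one band-power value per channel
X = rng.normal(size=(n_patients, n_features))   # placeholder EEG features
y = rng.integers(0, 2, size=n_patients)         # 1 = mood improved on sertraline

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=5).mean())  # held-out accuracy estimate
```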

The team then tested the algorithm on a different group of 279 people. Although only 41 per cent of participants overall responded well to sertraline, 76 per cent of those the algorithm predicted would benefit did so.

Etkin has founded a company called Alto Neuroscience to develop the technology. He hopes it will make sertraline prescription more efficient by giving doctors “the tools to make decisions about their patients using objective tests, decisions that they’re currently making by chance”.

This AI “could have potential future relevance to patients with depression”, says Christian Gluud at the Copenhagen Trial Unit in Denmark. But the results need to be replicated by other researchers “before any transfer to clinical practice can be considered”, he says.

Journal reference: Nature Biotechnology, DOI: 10.1038/s41587-019-0397-3

Read more: https://www.newscientist.com/article/2232792-brain-scans-can-help-predict-wholl-benefit-from-an-antidepressant/

AI can determine whether you’re likely to die soon by looking at your ECG, and even cardiologists don’t understand how it does this.



Researchers found that a black-box algorithm predicted patient death better than humans.

They used ECG results to sort historical patient data into groups based on who would die within a year.

Although the algorithm performed better, scientists don’t understand how or why it did.

Albert Einstein’s famous expression “spooky action at a distance” refers to quantum entanglement, a phenomenon seen on the most micro of scales. But machine learning seems to grow more mysterious and powerful every day, and scientists don’t always understand how it works. The spookiest action yet is a new study of heart patients in which a machine-learning algorithm decided who was most likely to die within a year based on electrocardiogram (ECG) results, as reported by New Scientist.

The algorithm performed better than the traditional measures used by cardiologists. The study was done by researchers at Geisinger, a low-cost, not-for-profit regional healthcare group in Pennsylvania.

Much of machine learning involves feeding complex data into computers that can examine it far more closely than humans can. To borrow an analogy from calculus: if human reasoning is a Riemann sum, machine learning is the integral that sum approaches as the number of terms goes to infinity. Human doctors do the best they can with what they have, but whatever the ECG algorithm is finding in the data, those studying the algorithm can’t reverse-engineer it.

The most surprising finding may concern the number of people cardiologists believed were healthy based on normal ECG results: “The AI accurately predicted risk of death even in people deemed by cardiologists to have a normal ECG,” New Scientist reports.

To imitate the decision-making of individual cardiologists, the Geisinger team built a parallel algorithm from the factors cardiologists use to calculate risk in the accepted way. Recording individual doctors’ impressions of 400,000 real ECGs, rather than running the algorithm, is not practical, but that level of granularity could show that cardiologists are better at predicting poor outcomes than the algorithm suggests.

It could also show they perform worse than the algorithm—we just don’t know. Head to head, having a better algorithm could add to doctors’ human skillset and lead to even better outcomes for at-risk patients.

Machine learning experts use a metric called area under the curve (AUC) to measure how well an algorithm can sort people into different groups. In this case, the researchers programmed the algorithm to decide which people would survive and which would die within the year, and its success was measured by how many people it placed in the correct groups. This is why future action is so complicated: people can be misplaced in both directions, leading to false positives and false negatives that could affect treatment. The algorithm did show an improvement, scoring 85 percent versus the 65 to 80 percent achieved by the traditional risk measures.
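
For readers unfamiliar with the metric, here is a toy illustration of AUC using scikit-learn; the labels and risk scores below are invented and have nothing to do with the Geisinger data:

```python
# AUC measures how well predicted risk scores rank the two groups:
# 1.0 means every deceased patient was scored riskier than every survivor,
# 0.5 is no better than chance. Values below are hypothetical.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1]                     # 1 = died within the one-year window
y_score = [0.10, 0.40, 0.35, 0.80, 0.20, 0.90]  # model's predicted risk
print(roc_auc_score(y_true, y_score))           # ~0.89 for this toy example
```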

As in other studies, one flaw in this research is that the scientists used past data where the one-year window had finished. The data set is closed and scientists can directly compare their results to a certain outcome. There’s a difference—and in medicine it’s an ethical one—between studying closed data and using a mysterious, unstudied mechanism to change how we treat patients today.

Medical research faces the same ethical hurdles across the board. What if intervening based on machine learning changes outcomes and saves lives? Is it ever right to treat one group of patients better than a control group that receives less effective care? These obstacles make a big difference in how future studies will pursue the results of this study. If the phenomenon of better prediction holds up, it may be decades before patients are treated differently.

https://www.popularmechanics.com/science/health/a29762613/ai-predict-death-health/

AI is the new Grandmaster of StarCraft II

by PETER DOCKRILL

Video games were invented for humans, by humans. But that doesn’t necessarily mean we’re the best when it comes to playing them.

In a new achievement that signifies just how far artificial intelligence (AI) has progressed, scientists have developed a learning algorithm that rose to the very top echelon of the esports powerhouse StarCraft II, reaching Grandmaster level.

According to the researchers who created the AI – called AlphaStar – the accomplishment of reaching the Grandmaster League means you’re in the top 0.2 percent of StarCraft II players.

In other words, AlphaStar competes at a level in this multi-player real-time strategy game that could trounce millions of humans foolhardy enough to take it on.

In recent years, we’ve seen AI come to dominate games that represent more traditional tests of human skill, mastering the strategies of chess, poker, and Go.

For David Silver, principal research scientist at AI firm DeepMind in the UK, those kinds of milestones – many of which DeepMind pioneered – are what’s led us to this inevitable moment: a game representing even greater problems than the ancient games that have challenged human minds for centuries.

“Ever since computers cracked Go, chess, and poker, StarCraft has emerged by consensus as the next grand challenge,” Silver says.

“The game’s complexity is much greater than chess, because players control hundreds of units; more complex than Go, because there are 10^26 possible choices for every move; and players have less information about their opponents than in poker.”

Add it all together and mastering the complex real-time battles of StarCraft seems almost impossible for a machine, so how did they do it?

In a new paper published this week, the DeepMind team describes how they developed a multi-agent reinforcement learning algorithm that trained through a combination of playing against itself and playing against humans, learning to mimic successful strategies and to develop effective counter-strategies.
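
DeepMind’s actual training league is far more elaborate, but the core loop (a learner playing against a growing pool of frozen past policies, so that counter-strategies keep emerging) can be caricatured in a few lines. Everything below, from the Agent class to play_match and the snapshot schedule, is a hypothetical placeholder:

```python
# Caricature of league-style self-play training; not AlphaStar's code.
import copy
import random

class Agent:
    def update(self, outcome):
        pass  # placeholder for a reinforcement/imitation learning step

def play_match(a, b):
    """Stand-in for a full StarCraft II game; returns +1 if `a` wins."""
    return random.choice([+1, -1])

learner = Agent()
league = [copy.deepcopy(learner)]       # frozen snapshots of past policies
for step in range(10_000):
    opponent = random.choice(league)    # past selves supply counter-strategies
    learner.update(play_match(learner, opponent))
    if (step + 1) % 1_000 == 0:
        league.append(copy.deepcopy(learner))  # snapshot current policy
```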

The research team has been working towards this goal for years. An earlier version of the system made headlines back in January when it started to beat human professionals.

“I will never forget the excitement and emotion we all felt when AlphaStar first started playing real competitive matches,” says Dario “TLO” Wünsch, one of the top human StarCraft II players beaten by the algorithm.

“The system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent.”

The latest algorithm takes things even further than that preliminary incarnation, and now effectively plays under artificial constraints designed to most realistically simulate gameplay as experienced by a human (such as observing the game at a distance, through a camera, and feeling the delay of network latency).

With all the imposed limitations of a human, AlphaStar still reached Grandmaster level in real, online competitive play, representing not just a world-first, but perhaps a sunset of these kinds of gaming challenges, given what the achievement now may make possible.

“Like StarCraft, real-world domains such as personal assistants, self-driving cars, or robotics require real-time decisions, over combinatorial or structured action spaces, given imperfectly observed information,” the authors write.

“The success of AlphaStar in StarCraft II suggests that general-purpose machine learning algorithms may have a substantial effect on complex real-world problems.”

The findings are reported in Nature.

https://www.sciencealert.com/starcraft-ii-has-a-new-grandmaster-and-it-s-not-human?perpetual=yes&limitstart=1

AI at Case Western Reserve lab predicts which pre-malignant breast lesions will progress to invasive cancer

New research at Case Western Reserve University could help better determine which patients diagnosed with the pre-malignant breast cancer commonly referred to as stage 0 are likely to progress to invasive breast cancer and therefore might benefit from additional therapy over and above surgery alone.

Once a lumpectomy of breast tissue reveals this pre-cancerous tumor, most women have surgery to remove the remainder of the affected tissue and some are given radiation therapy as well, said Anant Madabhushi, the F. Alex Nason Professor II of Biomedical Engineering at the Case School of Engineering.

“Current testing places patients in high risk, low risk and indeterminate risk—but then treats those indeterminates with radiation, anyway,” said Madabhushi, whose Center for Computational Imaging and Personalized Diagnostics (CCIPD) conducted the new research. “They err on the side of caution, but we’re saying that it appears that it should go the other way—the middle should be classified with the lower risk.

“In short, we’re probably overtreating patients,” Madabhushi continued. “That goes against prevailing wisdom, but that’s what our analysis is finding.”

The most common breast cancer

Stage 0 breast cancer, the most common non-invasive type, is known clinically as ductal carcinoma in situ (DCIS), indicating that the cancer cell growth starts in the milk ducts.

About 60,000 cases of DCIS are diagnosed in the United States each year, accounting for about one of every five new breast cancer cases, according to the American Cancer Society. Nearly all people with breast cancer that has not spread beyond the breast tissue live at least five years after diagnosis, according to the cancer society.

Lead researcher Haojia Li, a graduate student in the CCIPD, used a computer program to analyze the spatial architecture, texture and orientation of the individual cells and nuclei from scanned and digitized lumpectomy tissue samples from 62 DCIS patients.
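
The CCIPD pipeline is far more extensive than anything shown here; as a rough, hypothetical sketch of how such features can be computed, the snippet below measures nuclear size and orientation from an already-segmented image using scikit-image.

```python
# Illustrative only: quantifying nuclear size and orientation from a
# binary segmentation mask (the CCIPD pipeline is far more extensive).
import numpy as np
from skimage.measure import label, regionprops

def nuclear_features(mask):
    """mask: 2D boolean array where True marks nucleus pixels."""
    props = regionprops(label(mask))
    areas = [p.area for p in props]            # nuclear size, in pixels
    angles = [p.orientation for p in props]    # long-axis angle, in radians
    # Spread of orientations as a crude disorder measure (ignoring the
    # wrap-around of angles, for simplicity).
    return np.mean(areas), np.std(angles)
```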

The result: Both the size and orientation of the tumors characterized as “indeterminate” were actually much closer to those confirmed as low risk for recurrence by an expensive genetic test called Oncotype DX.

Li then validated that the features distinguishing the low- and high-risk Oncotype groups could predict the likelihood of progression from DCIS to invasive ductal carcinoma in an independent set of 30 patients.

“This could be a tool for determining who really needs the radiation, or who needs the gene test, which is also very expensive,” she said.

The research led by Li was published Oct. 17 in the journal Breast Cancer Research.

Madabhushi established the CCIPD at Case Western Reserve in 2012. The lab now includes nearly 60 researchers. The lab has become a global leader in the detection, diagnosis and characterization of various cancers and other diseases, including breast cancer, by meshing medical imaging, machine learning and artificial intelligence (AI).

Some of the lab’s most recent work, in collaboration with New York University and Yale University, has used AI to predict which lung cancer patients would benefit from adjuvant chemotherapy based on tissue slide images. That advancement was named by Prevention Magazine as one of the top 10 medical breakthroughs of 2018.


Robot priests can bless you, advise you, and even perform your funeral

By Sigal Samuel

A new priest named Mindar is holding forth at Kodaiji, a 400-year-old Buddhist temple in Kyoto, Japan. Like other clergy members, this priest can deliver sermons and move around to interface with worshippers. But Mindar comes with some … unusual traits. A body made of aluminum and silicone, for starters.

Mindar is a robot.

Designed to look like Kannon, the Buddhist deity of mercy, the $1 million machine is an attempt to reignite people’s passion for their faith in a country where religious affiliation is on the decline.

For now, Mindar is not AI-powered. It just recites the same preprogrammed sermon about the Heart Sutra over and over. But the robot’s creators say they plan to give it machine-learning capabilities that’ll enable it to tailor feedback to worshippers’ specific spiritual and ethical problems.

“This robot will never die; it will just keep updating itself and evolving,” said Tensho Goto, the temple’s chief steward. “With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It’s changing Buddhism.”

Robots are changing other religions, too. In 2017, Indians rolled out a robot that performs the Hindu aarti ritual, which involves moving a light round and round in front of a deity. That same year, in honor of the Protestant Reformation’s 500th anniversary, Germany’s Protestant Church created a robot called BlessU-2. It gave preprogrammed blessings to over 10,000 people.

Then there’s SanTO — short for Sanctified Theomorphic Operator — a 17-inch-tall robot reminiscent of figurines of Catholic saints. If you tell it you’re worried, it’ll respond by saying something like, “From the Gospel according to Matthew, do not worry about tomorrow, for tomorrow will worry about itself. Each day has enough trouble of its own.”

Roboticist Gabriele Trovato designed SanTO to offer spiritual succor to elderly people whose mobility and social contact may be limited. Next, he wants to develop devices for Muslims, though it remains to be seen what form those might take.

As more religious communities begin to incorporate robotics — in some cases, AI-powered and in others, not — it stands to change how people experience faith. It may also alter how we engage in ethical reasoning and decision-making, which is a big part of religion.

For the devout, there’s plenty of positive potential here: Robots can get disinterested people curious about religion or allow for a ritual to be performed when a human priest is inaccessible. But robots also pose risks for religion — for example, by making it feel too mechanized or homogenized or by challenging core tenets of theology. On the whole, will the emergence of AI religion make us better or worse off? The answer depends on how we design and deploy it — and on whom you ask.

Some cultures are more open to religious robots than others
New technologies often make us uncomfortable. Which ones we ultimately accept — and which ones we reject — is determined by an array of factors, ranging from our degree of exposure to the emerging technology to our moral presuppositions.

Japanese worshippers who visit Mindar are reportedly not too bothered by questions about the risks of siliconizing spirituality. That makes sense given that robots are already so commonplace in the country, including in the religious domain.

For years now, people who can’t afford to pay a human priest to perform a funeral have had the option to pay a robot named Pepper to do it at a much cheaper rate. And in China, at Beijing’s Longquan Monastery, an android monk named Xian’er recites Buddhist mantras and offers guidance on matters of faith.

What’s more, Buddhism’s non-dualistic metaphysical notion that everything has inherent “Buddha nature” — that all beings have the potential to become enlightened — may predispose its adherents to be receptive to spiritual guidance that comes from technology.

At the temple in Kyoto, Goto put it like this: “Buddhism isn’t a belief in a God; it’s pursuing Buddha’s path. It doesn’t matter whether it’s represented by a machine, a piece of scrap metal, or a tree.”

“Mindar’s metal skeleton is exposed, and I think that’s an interesting choice — its creator, Hiroshi Ishiguro, is not trying to make something that looks totally human,” said Natasha Heller, an associate professor of Chinese religions at the University of Virginia. She told me the deity Kannon, upon whom Mindar is based, is an ideal candidate for cyborgization because the Lotus Sutra explicitly says Kannon can manifest in different forms — whatever forms will best resonate with the humans of a given time and place.

Westerners seem more disturbed by Mindar, likening it to Frankenstein’s monster. In Western economies, we don’t yet have robots enmeshed in many aspects of our lives. What we do have is a pervasive cultural narrative, reinforced by Hollywood blockbusters, about our impending enslavement at the hands of “robot overlords.”

Plus, Abrahamic religions like Islam or Judaism tend to be more metaphysically dualistic — there’s the sacred and then there’s the profane. And they have more misgivings than Buddhism about visually depicting divinity, so they may take issue with Mindar-style iconography.

They also have different ideas about what makes a religious practice effective. For example, Judaism places a strong emphasis on intentionality, something machines don’t possess. When a worshipper prays, what matters is not just that their mouth forms the right words — it’s also very important that they have the right intention.

Meanwhile, some Buddhists use prayer wheels containing scrolls printed with sacred words and believe that spinning the wheel has its own spiritual efficacy, even if nobody recites the words aloud. In hospice settings, elderly Buddhists who don’t have people on hand to recite prayers on their behalf will use devices known as nianfo ji — small machines about the size of an iPhone, which recite the name of the Buddha endlessly.

Despite such theological differences, it’s ironic that many Westerners have a knee-jerk negative reaction to a robot like Mindar. The dream of creating artificial life goes all the way back to ancient Greece, where the ancients actually built animated machines, as the Stanford classicist Adrienne Mayor has documented in her book Gods and Robots. And there is a long tradition of religious robots in the West.

In the Middle Ages, Christians designed automata to perform the mysteries of Easter and Christmas. One proto-roboticist in the 16th century designed a mechanical monk that is, amazingly, performing ritual gestures to this day. With his right arm, he strikes his chest in a mea culpa; with his left, he raises a rosary to his lips.

In other words, the real novelty is not the use of robots in the religious domain but the use of AI.

How AI may change our theology and ethics
Even as our theology shapes the AI we create and embrace, AI will also shape our theology. It’s a two-way street.

Some people believe AI will force a truly momentous change in theology, because if humans create intelligent machines with free will, we’ll eventually have to ask whether they have something functionally similar to a soul.

“There will be a point in the future when these free-willed beings that we’ve made will say to us, ‘I believe in God. What do I do?’ At that point, we should have a response,” said Kevin Kelly, a Christian co-founder of Wired magazine who argues we need to develop “a catechism for robots.”

Other people believe that, rather than seeking to join a human religion, AI itself will become an object of worship. Anthony Levandowski, the Silicon Valley engineer who triggered a major Uber/Waymo lawsuit, has set up the first church of artificial intelligence, called Way of the Future. Levandowski’s new religion is dedicated to “the realization, acceptance, and worship of a Godhead based on artificial intelligence (AI) developed through computer hardware and software.”

Meanwhile, Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me AI may also force a traditional religion like Catholicism to reimagine its understanding of human priests as divinely called and consecrated — a status that grants them special authority.

“The Catholic notion would say the priest is ontologically changed upon ordination. Is that really true?” she asked. Maybe priestliness is not an esoteric essence but a programmable trait that even a “fallen” creation like a robot can embody. “We have these fixed philosophical ideas and AI challenges those ideas — it challenges Catholicism to move toward a post-human priesthood.” (For now, she joked, a robot would probably do better as a Protestant.)

Then there are questions about how robotics will change our religious experiences. Traditionally, those experiences are valuable in part because they leave room for the spontaneous and surprising, the emotional and even the mystical. That could be lost if we mechanize them.

To visualize an automated ritual, watch the video in the source article (linked below) of a robotic arm performing a Hindu aarti ceremony.

Another risk has to do with how an AI priest would handle ethical queries and decision-making. Robots whose algorithms learn from previous data may nudge us toward decisions based on what people have done in the past, incrementally homogenizing answers to our queries and narrowing the scope of our spiritual imagination.

That risk also exists with human clergy, Heller pointed out: “The clergy is bounded too — there’s already a built-in nudging or limiting factor, even without AI.”

But AI systems can be particularly problematic in that they often function as black boxes. We typically don’t know what sorts of biases are coded into them or what sorts of human nuance and context they’re failing to understand.

Let’s say you tell a robot you’re feeling depressed because you’re unemployed and broke, and the only job that’s available to you seems morally odious. Maybe the robot responds by reciting a verse from Proverbs 14: “In all toil there is profit, but mere talk tends only to poverty.” Even if it doesn’t presume to interpret the verse for you, in choosing that verse it’s already doing hidden interpretational work. It’s analyzing your situation and algorithmically determining a recommendation — in this case, one that may prompt you to take the job.

But perhaps it would’ve worked out better for you if the robot had recited a verse from Proverbs 16: “Commit your work to the Lord, and your plans will be established.” Maybe that verse would prompt you to pass on the morally dubious job, and, being a sensitive soul, you’ll later be happy you did. Or maybe your depression is severe enough that the job issue is somewhat beside the point and the crucial thing is for you to seek out mental health treatment.

A human priest who knows your broader context as a whole person may gather this and give you the right recommendation. An android priest might miss the nuances and just respond to the localized problem as you’ve expressed it.

The fact is human clergy members do so much more than provide answers. They serve as the anchor for a community, bringing people together. They offer pastoral care. And they provide human contact, which is in danger of becoming a luxury good as we create robots to more cheaply do the work of people.

On the other hand, Delio said, robots can excel in a social role in some ways that human priests might not. “Take the Catholic Church. It’s very male, very patriarchal, and we have this whole sexual abuse crisis. So would I want a robot priest? Maybe!” she said. “A robot can be gender-neutral. It might be able to transcend some of those divides and be able to enhance community in a way that’s more liberating.”

Ultimately, in religion as in other domains, robots and humans are perhaps best understood not as competitors but as collaborators. Each offers something the other lacks.

As Delio put it, “We tend to think in an either/or framework: It’s either us or the robots. But this is about partnership, not replacement. It can be a symbiotic relationship — if we approach it that way.”

https://www.vox.com/future-perfect/2019/9/9/20851753/ai-religion-robot-priest-mindar-buddhism-christianity

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Self-Taught AI Masters Rubik’s Cube Without Human Help

by George Dvorsky

Fancy algorithms capable of solving a Rubik’s Cube have appeared before, but a new system from the University of California, Irvine uses artificial intelligence to solve the 3D puzzle from scratch and without any prior help from humans—and it does so with impressive speed and efficiency.

New research published this week in Nature Machine Intelligence describes DeepCubeA, a system capable of solving any jumbled Rubik’s Cube it’s presented with. More impressively, it can find the most efficient path to success—that is, the solution requiring the fewest moves—around 60 percent of the time. On average, DeepCubeA needed just 28 moves to solve the puzzle, taking about 1.2 seconds to calculate the solution.

Sounds fast, but other systems have solved the 3D puzzle in less time, including a robot that can solve the Rubik’s cube in just 0.38 seconds. But these systems were specifically designed for the task, using human-scripted algorithms to solve the puzzle in the most efficient manner possible. DeepCubeA, on the other hand, taught itself to solve Rubik’s Cube using an approach to artificial intelligence known as reinforcement learning.

“Artificial intelligence can defeat the world’s best human chess and Go players, but some of the more difficult puzzles, such as the Rubik’s Cube, had not been solved by computers, so we thought they were open for AI approaches,” said Pierre Baldi, the senior author of the new paper, in a press release. “The solution to the Rubik’s Cube involves more symbolic, mathematical and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan and make decisions.”

Indeed, an expert system designed for one task and one task only—like solving a Rubik’s Cube—will forever be limited to that domain, but a system like DeepCubeA, with its highly adaptable neural net, could be leveraged for other tasks, such as solving complex scientific, mathematical, and engineering problems. What’s more, this system “is a small step toward creating agents that are able to learn how to think and plan for themselves in new environments,” Stephen McAleer, a co-author of the new paper, told Gizmodo.

Reinforcement learning works the way it sounds. Systems are motivated to achieve a designated goal, during which time they gain points for deploying successful actions or strategies, and lose points for straying off course. This allows the algorithms to improve over time, and without human intervention.
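
In its textbook form, that points-based feedback is a value update like the one below. This is generic tabular Q-learning, offered only as an illustration; DeepCubeA itself replaces the lookup table with a deep network that estimates how far each state is from the solved cube.

```python
# Generic tabular Q-learning update: the "gain points / lose points" loop in
# its simplest form (illustrative; not DeepCubeA's method).
from collections import defaultdict

Q = defaultdict(float)                  # value of (state, action) pairs
alpha, gamma = 0.1, 0.99                # learning rate, discount factor

def q_update(state, action, reward, next_state, actions):
    """Nudge Q toward the reward plus the discounted best future value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```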

Reinforcement learning makes sense for a Rubik’s Cube, owing to the hideous number of possible combinations on the 3x3x3 puzzle, which amount to around 43 quintillion. Choosing random moves in the hope of solving the cube is simply not going to work, for humans or for the world’s most powerful supercomputers.

DeepCubeA is not the first kick at the can for these University of California, Irvine researchers. Their earlier system, called DeepCube, used a conventional tree-search strategy and a reinforcement learning scheme similar to the one employed by DeepMind’s AlphaZero. But while this approach works well for one-on-one board games like chess and Go, it proved clumsy for Rubik’s Cube. In tests, the DeepCube system required too much time to make its calculations, and its solutions were often far from ideal.

The UCI team used a different approach with DeepCubeA. Starting with a solved cube, the system made random moves to scramble the puzzle. Basically, it learned to be proficient at Rubik’s Cube by playing it in reverse. At first the moves were few, but the jumbled state got more and more complicated as training progressed. In all, DeepCubeA played 10 billion different combinations in two days as it worked to solve the cube in less than 30 moves.
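
A hypothetical sketch of that reverse-training idea, with the cube representation and apply_move left as placeholders, might look like this:

```python
# Sketch of training-data generation by scrambling backward from the solved
# state. `apply_move` and the cube representation are placeholders.
import random

MOVES = ["U", "U'", "D", "D'", "L", "L'", "R", "R'", "F", "F'", "B", "B'"]

def training_example(solved_state, max_depth):
    """Scramble k moves from solved; k upper-bounds the true cost-to-go."""
    k = random.randint(1, max_depth)    # curriculum: depth grows over training
    state = solved_state
    for _ in range(k):
        state = apply_move(state, random.choice(MOVES))  # placeholder function
    return state, k
```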

“DeepCubeA attempts to solve the cube using the least number of moves,” explained McAleer. “Consequently, the moves tend to look much different from how a human would solve the cube.”

After training, the system was tasked with solving 1,000 randomly scrambled Rubik’s Cubes. In tests, DeepCubeA found a solution to 100 percent of all cubes, and it found a shortest path to the goal state 60.3 percent of the time. The system required 28 moves on average to solve the cube, which it did in about 1.2 seconds. By comparison, the fastest human puzzle solvers require around 50 moves.

“Since we found that DeepCubeA is solving the cube in the fewest moves 60 percent of the time, it’s pretty clear that the strategy it is using is close to the optimal strategy, colloquially referred to as God’s algorithm,” study co-author Forest Agostinelli told Gizmodo. “While human strategies are easily explainable with step-by-step instructions, defining an optimal strategy often requires sophisticated knowledge of group theory and combinatorics. Though mathematically defining this strategy is not in the scope of this paper, we can see that the strategy DeepCubeA is employing is one that is not readily obvious to humans.”

To showcase the flexibility of the system, DeepCubeA was also taught to solve other puzzles, including sliding-tile puzzle games, Lights Out, and Sokoban, which it did with similar proficiency.

https://gizmodo.com/self-taught-ai-masters-rubik-s-cube-without-human-help-1836420294

A new AI acquired humanlike ‘number sense’ on its own

Artificial intelligence can share our natural ability to make numeric snap judgments.

Researchers observed this knack for numbers in a computer model composed of virtual brain cells, or neurons, called an artificial neural network. After being trained merely to identify objects in images — a common task for AI — the network developed virtual neurons that respond to specific quantities. These artificial neurons are reminiscent of the “number neurons” thought to give humans, birds, bees and other creatures the innate ability to estimate the number of items in a set (SN: 7/7/18, p. 7). This intuition is known as number sense.

In number-judging tasks, the AI demonstrated a number sense similar to humans and animals, researchers report online May 8 in Science Advances. This finding lends insight into what AI can learn without explicit instruction, and may prove interesting for scientists studying how number sensitivity arises in animals.

Neurobiologist Andreas Nieder of the University of Tübingen in Germany and colleagues used a library of about 1.2 million labeled images to teach an artificial neural network to recognize objects such as animals and vehicles in pictures. The researchers then presented the AI with dot patterns containing one to 30 dots and recorded how various virtual neurons responded.

Some neurons were more active when viewing patterns with specific numbers of dots. For instance, some neurons activated strongly when shown two dots but not 20, and vice versa. The degree to which these neurons preferred certain numbers was nearly identical to previous data from the neurons of monkeys.
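
The probing procedure can be sketched in a few lines: pass dot-pattern images through a pretrained object-recognition network, record per-unit activations, and look for units whose response peaks at one numerosity. The snippet below uses torchvision’s pretrained AlexNet purely as a stand-in for the network the researchers trained themselves.

```python
# Hedged sketch of probing a CNN for number-selective units; torchvision's
# pretrained AlexNet stands in for the network trained in the study.
import torch
from torchvision.models import alexnet

model = alexnet(weights="IMAGENET1K_V1").eval()
conv_stages = model.features               # convolutional layers only

def unit_responses(dot_images):
    """dot_images: float tensor (n_images, 3, 224, 224), one per dot pattern."""
    with torch.no_grad():
        acts = conv_stages(dot_images)     # (n_images, channels, h, w)
    return acts.mean(dim=(2, 3))           # mean activation per unit (channel)

# A unit counts as "number selective" if its mean response peaks at one dot
# count and falls off for neighbouring counts (e.g. tested via ANOVA).
```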

[Figure: Dot detectors. The AI viewed images of dots previously shown to monkeys, including images with one dot and with even numbers of dots from 2 to 30. Much like the number-sensitive neurons in monkey brains, number-sensitive virtual neurons in the AI preferentially activated when shown specific numbers of dots, and, as in monkey brains, the AI contained more neurons tuned to smaller numbers than to larger numbers.]

To test whether the AI’s number neurons equipped it with an animal-like number sense, Nieder’s team presented pairs of dot patterns and asked whether the patterns contained the same number of dots. The AI was correct 81 percent of the time, performing about as well as humans and monkeys do on similar matching tasks. Like humans and other animals, the AI struggled to differentiate between patterns that had very similar numbers of dots, and between patterns that had many dots (SN: 12/10/16, p. 22).

This finding is a “very nice demonstration” of how AI can pick up multiple skills while training for a specific task, says Elias Issa, a neuroscientist at Columbia University not involved in the work. But exactly how and why number sense arose within this artificial neural network is still unclear, he says.

Nieder and colleagues argue that the emergence of number sense in AI might help biologists understand how human babies and wild animals get a number sense without being taught to count. Perhaps basic number sensitivity “is wired into the architecture of our visual system,” Nieder says.

Ivilin Stoianov, a computational neuroscientist at the Italian National Research Council in Padova, is not convinced that such a direct parallel exists between the number sense in this AI and that in animal brains. This AI learned to “see” by studying many labeled pictures, which is not how babies and wild animals learn to make sense of the world. Future experiments could explore whether similar number neurons emerge in AI systems that more closely mimic how biological brains learn, like those that use reinforcement learning, Stoianov says (SN: 12/8/18, p. 14).


Heads in the cloud: Scientists predict internet of thoughts ‘within decades’



Summary: Researchers predict the development of a brain/cloud interface that connects neurons to cloud computing networks in real time.

Source: Frontiers

Imagine a future technology that would provide instant access to the world’s knowledge and artificial intelligence, simply by thinking about a specific topic or question. Communications, education, work, and the world as we know it would be transformed.

Writing in Frontiers in Neuroscience, an international collaboration led by researchers at UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, AI, and computation will lead this century to the development of a “Human Brain/Cloud Interface” (B/CI), that connects neurons and synapses in the brain to vast cloud-computing networks in real time.

Nanobots on the brain

The B/CI concept was initially proposed by futurist-author-inventor Ray Kurzweil, who suggested that neural nanorobots – the brainchild of Robert Freitas Jr., senior author of the research – could be used to connect the neocortex of the human brain to a “synthetic neocortex” in the cloud. Our wrinkled neocortex is the newest, smartest, ‘conscious’ part of the brain.

Freitas’ proposed neural nanorobots would provide direct, real-time monitoring and control of signals to and from brain cells.

“These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely autoposition themselves among, or even within brain cells,” explains Freitas. “They would then wirelessly transmit encoded information to and from a cloud-based supercomputer network for real-time brain-state monitoring and data extraction.”

The internet of thoughts

This cortex in the cloud would allow “Matrix”-style downloading of information to the brain, the group claims.

“A human B/CI system mediated by neuralnanorobotics could empower individuals with instantaneous access to all cumulative human knowledge available in the cloud, while significantly improving human learning capacities and intelligence,” says lead author Dr. Nuno Martins.

B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought.

“While not yet particularly sophisticated, an experimental human ‘BrainNet’ system has already been tested, enabling thought-driven information exchange via the cloud between individual brains,” explains Martins. “It used electrical signals recorded through the skull of ‘senders’ and magnetic stimulation through the skull of ‘receivers,’ allowing for performing cooperative tasks.

“With the advance of neuralnanorobotics, we envisage the future creation of ‘superbrains’ that can harness the thoughts and thinking power of any number of humans and machines in real time. This shared cognition could revolutionize democracy, enhance empathy, and ultimately unite culturally diverse groups into a truly global society.”

When can we connect?

According to the group’s estimates, even existing supercomputers have processing speeds capable of handling the necessary volumes of neural data for B/CI – and they’re getting faster, fast.

Rather, transferring neural data to and from supercomputers in the cloud is likely to be the ultimate bottleneck in B/CI development.

“This challenge includes not only finding the bandwidth for global data transmission,” cautions Martins, “but also, how to enable data exchange with neurons via tiny devices embedded deep in the brain.”

One solution proposed by the authors is the use of ‘magnetoelectric nanoparticles’ to effectively amplify communication between neurons and the cloud.

“These nanoparticles have been used already in living mice to couple external magnetic fields to neuronal electric fields – that is, to detect and locally amplify these magnetic signals and so allow them to alter the electrical activity of neurons,” explains Martins. “This could work in reverse, too: electrical signals produced by neurons and nanorobots could be amplified via magnetoelectric nanoparticles, to allow their detection outside of the skull.”

Getting these nanoparticles – and nanorobots – safely into the brain via the circulation would be perhaps the greatest challenge of all in B/CI.

“A detailed analysis of the biodistribution and biocompatibility of nanoparticles is required before they can be considered for human development. Nevertheless, with these and other promising technologies for B/CI developing at an ever-increasing rate, an ‘internet of thoughts’ could become a reality before the turn of the century,” Martins concludes.
