
By Sigal Samuel

A new priest named Mindar is holding forth at Kodaiji, a 400-year-old Buddhist temple in Kyoto, Japan. Like other clergy members, this priest can deliver sermons and move around to interface with worshippers. But Mindar comes with some … unusual traits. A body made of aluminum and silicone, for starters.

Mindar is a robot.

Designed to look like Kannon, the Buddhist deity of mercy, the $1 million machine is an attempt to reignite people’s passion for their faith in a country where religious affiliation is on the decline.

For now, Mindar is not AI-powered. It just recites the same preprogrammed sermon about the Heart Sutra over and over. But the robot’s creators say they plan to give it machine-learning capabilities that’ll enable it to tailor feedback to worshippers’ specific spiritual and ethical problems.

“This robot will never die; it will just keep updating itself and evolving,” said Tensho Goto, the temple’s chief steward. “With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It’s changing Buddhism.”

Robots are changing other religions, too. In 2017, Indians rolled out a robot that performs the Hindu aarti ritual, which involves moving a light round and round in front of a deity. That same year, in honor of the Protestant Reformation’s 500th anniversary, Germany’s Protestant Church created a robot called BlessU-2. It gave preprogrammed blessings to over 10,000 people.

Then there’s SanTO — short for Sanctified Theomorphic Operator — a 17-inch-tall robot reminiscent of figurines of Catholic saints. If you tell it you’re worried, it’ll respond by saying something like, “From the Gospel according to Matthew, do not worry about tomorrow, for tomorrow will worry about itself. Each day has enough trouble of its own.”

Roboticist Gabriele Trovato designed SanTO to offer spiritual succor to elderly people whose mobility and social contact may be limited. Next, he wants to develop devices for Muslims, though it remains to be seen what form those might take.

As more religious communities incorporate robotics — in some cases AI-powered, in others not — the trend stands to change how people experience faith. It may also alter how we engage in ethical reasoning and decision-making, which is a big part of religion.

For the devout, there’s plenty of positive potential here: Robots can get disinterested people curious about religion or allow for a ritual to be performed when a human priest is inaccessible. But robots also pose risks for religion — for example, by making it feel too mechanized or homogenized or by challenging core tenets of theology. On the whole, will the emergence of AI religion make us better or worse off? The answer depends on how we design and deploy it — and on whom you ask.

Some cultures are more open to religious robots than others
New technologies often make us uncomfortable. Which ones we ultimately accept — and which ones we reject — is determined by an array of factors, ranging from our degree of exposure to the emerging technology to our moral presuppositions.

Japanese worshippers who visit Mindar are reportedly not too bothered by questions about the risks of siliconizing spirituality. That makes sense given that robots are already so commonplace in the country, including in the religious domain.

For years now, people who can’t afford to pay a human priest to perform a funeral have had the option to pay a robot named Pepper to do it at a much cheaper rate. And in China, at Beijing’s Longquan Monastery, an android monk named Xian’er recites Buddhist mantras and offers guidance on matters of faith.

What’s more, Buddhism’s non-dualistic metaphysical notion that everything has inherent “Buddha nature” — that all beings have the potential to become enlightened — may predispose its adherents to be receptive to spiritual guidance that comes from technology.

At the temple in Kyoto, Goto put it like this: “Buddhism isn’t a belief in a God; it’s pursuing Buddha’s path. It doesn’t matter whether it’s represented by a machine, a piece of scrap metal, or a tree.”

“Mindar’s metal skeleton is exposed, and I think that’s an interesting choice — its creator, Hiroshi Ishiguro, is not trying to make something that looks totally human,” said Natasha Heller, an associate professor of Chinese religions at the University of Virginia. She told me the deity Kannon, upon whom Mindar is based, is an ideal candidate for cyborgization because the Lotus Sutra explicitly says Kannon can manifest in different forms — whatever forms will best resonate with the humans of a given time and place.

Westerners seem more disturbed by Mindar, likening it to Frankenstein’s monster. In Western economies, we don’t yet have robots enmeshed in many aspects of our lives. What we do have is a pervasive cultural narrative, reinforced by Hollywood blockbusters, about our impending enslavement at the hands of “robot overlords.”

Plus, Abrahamic religions like Islam or Judaism tend to be more metaphysically dualistic — there’s the sacred and then there’s the profane. And they have more misgivings than Buddhism about visually depicting divinity, so they may take issue with Mindar-style iconography.

They also have different ideas about what makes a religious practice effective. For example, Judaism places a strong emphasis on intentionality, something machines don’t possess. When a worshipper prays, what matters is not just that their mouth forms the right words — it’s also very important that they have the right intention.

Meanwhile, some Buddhists use prayer wheels containing scrolls printed with sacred words and believe that spinning the wheel has its own spiritual efficacy, even if nobody recites the words aloud. In hospice settings, elderly Buddhists who don’t have people on hand to recite prayers on their behalf will use devices known as nianfo ji — small machines about the size of an iPhone, which recite the name of the Buddha endlessly.

Theological differences aside, it’s ironic that many Westerners have a knee-jerk negative reaction to a robot like Mindar. The dream of creating artificial life goes all the way back to ancient Greece, where the ancients actually invented real animated machines, as the Stanford classicist Adrienne Mayor has documented in her book Gods and Robots. And there is a long tradition of religious robots in the West.

In the Middle Ages, Christians designed automata to perform the mysteries of Easter and Christmas. One proto-roboticist in the 16th century designed a mechanical monk that is, amazingly, performing ritual gestures to this day. With his right arm, he strikes his chest in a mea culpa; with his left, he raises a rosary to his lips.

In other words, the real novelty is not the use of robots in the religious domain but the use of AI.

How AI may change our theology and ethics
Even as our theology shapes the AI we create and embrace, AI will also shape our theology. It’s a two-way street.

Some people believe AI will force a truly momentous change in theology, because if humans create intelligent machines with free will, we’ll eventually have to ask whether they have something functionally similar to a soul.

“There will be a point in the future when these free-willed beings that we’ve made will say to us, ‘I believe in God. What do I do?’ At that point, we should have a response,” said Kevin Kelly, a Christian co-founder of Wired magazine who argues we need to develop “a catechism for robots.”

Other people believe that, rather than seeking to join a human religion, AI itself will become an object of worship. Anthony Levandowski, the Silicon Valley engineer who triggered a major Uber/Waymo lawsuit, has set up the first church of artificial intelligence, called Way of the Future. Levandowski’s new religion is dedicated to “the realization, acceptance, and worship of a Godhead based on artificial intelligence (AI) developed through computer hardware and software.”

Meanwhile, Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me AI may also force a traditional religion like Catholicism to reimagine its understanding of human priests as divinely called and consecrated — a status that grants them special authority.

“The Catholic notion would say the priest is ontologically changed upon ordination. Is that really true?” she asked. Maybe priestliness is not an esoteric essence but a programmable trait that even a “fallen” creation like a robot can embody. “We have these fixed philosophical ideas and AI challenges those ideas — it challenges Catholicism to move toward a post-human priesthood.” (For now, she joked, a robot would probably do better as a Protestant.)

Then there are questions about how robotics will change our religious experiences. Traditionally, those experiences are valuable in part because they leave room for the spontaneous and surprising, the emotional and even the mystical. That could be lost if we mechanize them.

To visualize an automated ritual, picture the robotic arm mentioned earlier performing the Hindu aarti ceremony, moving its light in the same precise circles every time.

Another risk has to do with how an AI priest would handle ethical queries and decision-making. Robots whose algorithms learn from previous data may nudge us toward decisions based on what people have done in the past, incrementally homogenizing answers to our queries and narrowing the scope of our spiritual imagination.

That risk also exists with human clergy, Heller pointed out: “The clergy is bounded too — there’s already a built-in nudging or limiting factor, even without AI.”

But AI systems can be particularly problematic in that they often function as black boxes. We typically don’t know what sorts of biases are coded into them or what sorts of human nuance and context they’re failing to understand.

Let’s say you tell a robot you’re feeling depressed because you’re unemployed and broke, and the only job that’s available to you seems morally odious. Maybe the robot responds by reciting a verse from Proverbs 14: “In all toil there is profit, but mere talk tends only to poverty.” Even if it doesn’t presume to interpret the verse for you, in choosing that verse it’s already doing hidden interpretational work. It’s analyzing your situation and algorithmically determining a recommendation — in this case, one that may prompt you to take the job.

But perhaps it would’ve worked out better for you if the robot had recited a verse from Proverbs 16: “Commit your work to the Lord, and your plans will be established.” Maybe that verse would prompt you to pass on the morally dubious job, and, being a sensitive soul, you’ll later be happy you did. Or maybe your depression is severe enough that the job issue is somewhat beside the point and the crucial thing is for you to seek out mental health treatment.
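The “hidden interpretational work” described above can be made concrete with a toy sketch. Everything here is invented for illustration — the verse lists, keywords, and weights are hypothetical, not any real system’s — but it shows that even a program that only picks a verse is already encoding a judgment about what your problem is:

```python
# A toy verse selector (entirely hypothetical). The KEYWORDS table
# below IS an interpretation of the user's problem: whoever wrote it
# decided that "unemployed" is a work problem rather than a worry problem.
VERSES = {
    "work": "In all toil there is profit, but mere talk tends only to poverty.",
    "trust": "Commit your work to the Lord, and your plans will be established.",
    "worry": "Do not worry about tomorrow, for tomorrow will worry about itself.",
}
KEYWORDS = {
    "work": ["unemployed", "broke", "job", "money"],
    "trust": ["morally", "conscience", "doubt"],
    "worry": ["depressed", "anxious", "afraid"],
}

def select_verse(confession: str) -> str:
    """Score each theme by keyword hits and return its verse."""
    words = confession.lower().split()
    scores = {theme: sum(w in words for w in kws)
              for theme, kws in KEYWORDS.items()}
    return VERSES[max(scores, key=scores.get)]

print(select_verse("I am unemployed and broke and the only job is morally odious"))
```

Because “unemployed” and “broke” outscore “morally,” this selector steers the user toward the work verse — a recommendation baked in before any interpretation is offered aloud.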

A human priest who knows your broader context as a whole person may gather this and give you the right recommendation. An android priest might miss the nuances and just respond to the localized problem as you’ve expressed it.

The fact is human clergy members do so much more than provide answers. They serve as the anchor for a community, bringing people together. They offer pastoral care. And they provide human contact, which is in danger of becoming a luxury good as we create robots to more cheaply do the work of people.

On the other hand, Delio said, robots can excel in a social role in some ways that human priests might not. “Take the Catholic Church. It’s very male, very patriarchal, and we have this whole sexual abuse crisis. So would I want a robot priest? Maybe!” she said. “A robot can be gender-neutral. It might be able to transcend some of those divides and be able to enhance community in a way that’s more liberating.”

Ultimately, in religion as in other domains, robots and humans are perhaps best understood not as competitors but as collaborators. Each offers something the other lacks.

As Delio put it, “We tend to think in an either/or framework: It’s either us or the robots. But this is about partnership, not replacement. It can be a symbiotic relationship — if we approach it that way.”

https://www.vox.com/future-perfect/2019/9/9/20851753/ai-religion-robot-priest-mindar-buddhism-christianity

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Artificial intelligence can share our natural ability to make numeric snap judgments.

Researchers observed this knack for numbers in a computer model composed of virtual brain cells, or neurons, called an artificial neural network. After being trained merely to identify objects in images — a common task for AI — the network developed virtual neurons that respond to specific quantities. These artificial neurons are reminiscent of the “number neurons” thought to give humans, birds, bees and other creatures the innate ability to estimate the number of items in a set (SN: 7/7/18, p. 7). This intuition is known as number sense.

In number-judging tasks, the AI demonstrated a number sense similar to humans and animals, researchers report online May 8 in Science Advances. This finding lends insight into what AI can learn without explicit instruction, and may prove interesting for scientists studying how number sensitivity arises in animals.

Neurobiologist Andreas Nieder of the University of Tübingen in Germany and colleagues used a library of about 1.2 million labeled images to teach an artificial neural network to recognize objects such as animals and vehicles in pictures. The researchers then presented the AI with dot patterns containing one to 30 dots and recorded how various virtual neurons responded.

Some neurons were more active when viewing patterns with specific numbers of dots. For instance, some neurons activated strongly when shown two dots but not 20, and vice versa. The degree to which these neurons preferred certain numbers was nearly identical to previous data from the neurons of monkeys.
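The probing procedure can be sketched in a few lines of Python. This is an illustrative stand-in, not the researchers’ code: the study probed a deep network trained on roughly 1.2 million labeled images, whereas the single untrained random layer below merely shows how tuning curves are measured — present dot patterns, average each unit’s response per numerosity, and read off each unit’s preferred number:

```python
import numpy as np

rng = np.random.default_rng(0)

def dot_image(n_dots, size=32):
    """Render n_dots small dots at random positions on a blank canvas."""
    img = np.zeros((size, size))
    for _ in range(n_dots):
        r, c = rng.integers(2, size - 2, 2)
        img[r - 1:r + 2, c - 1:c + 2] = 1.0  # 3x3 dot
    return img.ravel()

# Stand-in "network": one random projection with ReLU, purely to
# illustrate the probing step (the study used a trained deep net).
W = rng.normal(size=(64, 32 * 32))
def activations(x):
    return np.maximum(W @ x, 0.0)

numerosities = list(range(1, 31))
# Average each unit's response over many images per numerosity.
tuning = np.array([
    np.mean([activations(dot_image(n)) for _ in range(20)], axis=0)
    for n in numerosities
])                                      # shape: (30 numerosities, 64 units)

preferred = np.argmax(tuning, axis=0)   # peak-response numerosity per unit
print("preferred numerosity of unit 0:", numerosities[preferred[0]])
```

In the actual study, it is the tuning curves measured this way that matched previous recordings from monkey neurons.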

Dot detectors
A new artificial intelligence program viewed images of dots previously shown to monkeys, including images with one dot and images with even numbers of dots from 2 to 30 (bottom). Much like the number-sensitive neurons in monkey brains, number-sensitive virtual neurons in the AI preferentially activated when shown specific numbers of dots. As in monkey brains, the AI contained more neurons tuned to smaller numbers than larger numbers (top).

To test whether the AI’s number neurons equipped it with an animal-like number sense, Nieder’s team presented pairs of dot patterns and asked whether the patterns contained the same number of dots. The AI was correct 81 percent of the time, performing about as well as humans and monkeys do on similar matching tasks. Like humans and other animals, the AI struggled to differentiate between patterns that had very similar numbers of dots, and between patterns that had many dots (SN: 12/10/16, p. 22).
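That pattern — harder discrimination between close numerosities and between large ones — is the signature of Weber’s law. A small simulation (a sketch of the general principle, not the study’s model) shows how it arises when estimation noise scales with magnitude:

```python
import random

random.seed(1)

def perceived(n, weber=0.15):
    """Noisy numerosity estimate with scalar variability:
    estimation noise grows in proportion to n (Weber's law)."""
    return random.gauss(n, weber * n)

def same_judgment(n1, n2, trials=2000, threshold=0.2):
    """Judge 'same' when two noisy estimates differ by less than a
    fixed fraction; return accuracy against the ground truth."""
    correct = 0
    for _ in range(trials):
        e1, e2 = perceived(n1), perceived(n2)
        said_same = abs(e1 - e2) / max(e1, e2) < threshold
        correct += said_same == (n1 == n2)
    return correct / trials

print("4 vs 12:", same_judgment(4, 12))    # well separated: high accuracy
print("14 vs 16:", same_judgment(14, 16))  # close and large: much worse
```

Because the noise grows with the number of dots, 14 versus 16 is far harder to tell apart than 4 versus 12 — the same asymmetry humans, monkeys, and the AI all show.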

This finding is a “very nice demonstration” of how AI can pick up multiple skills while training for a specific task, says Elias Issa, a neuroscientist at Columbia University not involved in the work. But exactly how and why number sense arose within this artificial neural network is still unclear, he says.

Nieder and colleagues argue that the emergence of number sense in AI might help biologists understand how human babies and wild animals get a number sense without being taught to count. Perhaps basic number sensitivity “is wired into the architecture of our visual system,” Nieder says.

Ivilin Stoianov, a computational neuroscientist at the Italian National Research Council in Padova, is not convinced that such a direct parallel exists between the number sense in this AI and that in animal brains. This AI learned to “see” by studying many labeled pictures, which is not how babies and wild animals learn to make sense of the world. Future experiments could explore whether similar number neurons emerge in AI systems that more closely mimic how biological brains learn, like those that use reinforcement learning, Stoianov says (SN: 12/8/18, p. 14).

https://www.sciencenews.org/article/new-ai-acquired-humanlike-number-sense-its-own

By Greg Ip

It’s time to stop worrying that robots will take our jobs — and start worrying that they will decide who gets jobs.

Millions of low-paid workers’ lives are increasingly governed by software and algorithms. This was starkly illustrated by a report last week that Amazon.com tracks the productivity of its employees and regularly fires those who underperform, with almost no human intervention.

“Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors,” a law firm representing Amazon said in a letter to the National Labor Relations Board, as first reported by technology news site The Verge. Amazon was responding to a complaint that it had fired an employee from a Baltimore fulfillment center for federally protected activity, which could include union organizing. Amazon said the employee was fired for failing to meet productivity targets.

Perhaps it was only a matter of time before software started firing people. After all, it already screens resumes, recommends job applicants, schedules shifts and assigns projects. In the workplace, “sophisticated technology to track worker productivity on a minute-by-minute or even second-by-second basis is incredibly pervasive,” says Ian Larkin, a business professor at the University of California at Los Angeles specializing in human resources.

Industrial laundry services track how many seconds it takes to press a laundered shirt; on-board computers track truckers’ speed, gear changes and engine revolutions per minute; and checkout terminals at major discount retailers report if the cashier is scanning items quickly enough to meet a preset goal. In all these cases, results are shared in real time with the employee, and used to determine who is terminated, says Mr. Larkin.

Of course, weeding out underperforming employees is a basic function of management. General Electric Co.’s former chief executive Jack Welch regularly culled the company’s underperformers. “In banking and management consulting it is standard to exit about 20% of employees a year, even in good times, using ‘rank and yank’ systems,” says Nick Bloom, an economist at Stanford University specializing in management.

For employees of General Electric, Goldman Sachs Group Inc. and McKinsey & Co., that risk is more than compensated for by the reward of stimulating and challenging work and handsome paychecks. The risk-reward trade-off in industrial laundries, fulfillment centers and discount stores is not nearly so enticing: the work is repetitive and the pay is low. Those who aren’t weeded out one year may be the next if the company raises its productivity targets. Indeed, wage inequality doesn’t fully capture how unequal work has become: enjoyable and secure at the top, monotonous and insecure at the bottom.

At fulfillment centers, employees locate, scan and box all the items in an order. Amazon’s “Associate Development and Performance Tracker,” or Adapt, tracks how each employee performs on these steps against externally-established benchmarks and warns employees when they are falling short.
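A toy version of such a tracker might look like the following. This is purely hypothetical — the benchmark, warning count, and logic are invented for illustration and bear no relation to Amazon’s actual Adapt system:

```python
from dataclasses import dataclass, field

@dataclass
class Tracker:
    """Toy benchmark tracker (hypothetical illustration only).
    Issues a warning each period an employee's rate falls below the
    benchmark, and flags the case for review after repeated warnings."""
    benchmark: float          # required units per hour
    max_warnings: int = 3
    warnings: dict = field(default_factory=dict)

    def record(self, employee: str, rate: float) -> str:
        if rate >= self.benchmark:
            return "ok"
        n = self.warnings[employee] = self.warnings.get(employee, 0) + 1
        return "review" if n >= self.max_warnings else "warning"

t = Tracker(benchmark=100.0)
print(t.record("a1", 120.0))  # ok
print(t.record("a1", 80.0))   # warning
print(t.record("a1", 85.0))   # warning
print(t.record("a1", 90.0))   # review
```

Even in this cartoon version, the policy choices — the benchmark level, how many warnings are tolerated, whether a human reviews the flag — determine who keeps their job.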

Amazon employees have complained of being monitored continuously — even having bathroom breaks measured — and being held to ever-rising productivity benchmarks. There is no public data to determine if such complaints are more or less common at Amazon than its peers. The company says about 300 employees — roughly 10% of the Baltimore center’s employment level — were terminated for productivity reasons in the year before the law firm’s letter was sent to the NLRB.

Mr. Larkin says 10% is not unusually high. Yet, automating the discipline process, he says, “makes an already difficult job seem even more inhuman and undesirable. Dealing with these tough situations is one of the key roles of managers.”

“Managers make final decisions on all personnel matters,” an Amazon spokeswoman said. “The [Adapt system] simply tracks and ensures consistency of data and process across hundreds of employees to ensure fairness.” The number of terminations has decreased in the last two years at the Baltimore facility and across North America, she said. Termination notices can be appealed.

Companies use these systems because they work well for them.

Mr. Bloom and his co-authors find that companies that more aggressively hire, fire and monitor employees have faster productivity growth. They also have wider gaps between the highest- and lowest-paid employees.

Computers also don’t succumb to the biases managers do. Economists Mitchell Hoffman, Lisa Kahn and Danielle Li looked at how 15 firms used a job-testing technology that tested applicants on computer and technical skills, personality, cognitive skills, fit for the job and various job scenarios. Drawing on past correlations, the algorithm ranked applicants as having high, moderate or low potential. Their study found employees hired against the software’s recommendation were below-average performers: “This suggests that managers often overrule test recommendations because they are biased or mistaken, not only because they have superior private information,” they wrote.
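The “past correlations” approach can be sketched roughly as follows. This is a hypothetical illustration with invented data, not the firms’ actual algorithm: weight each test dimension by how well it has correlated with past job performance, then bin new applicants by their weighted score:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical past data: applicant test scores and later performance.
past_tests = rng.normal(size=(500, 5))            # 5 test dimensions
true_w = np.array([0.8, 0.5, 0.2, 0.0, -0.3])     # unknown to the firm
past_perf = past_tests @ true_w + rng.normal(0, 0.5, 500)

# Weight each test dimension by its correlation with past performance.
weights = np.array([np.corrcoef(past_tests[:, j], past_perf)[0, 1]
                    for j in range(5)])

def rank(applicant_scores):
    """Bin a new applicant into high / moderate / low potential
    relative to the historical score distribution."""
    s = applicant_scores @ weights
    cutoffs = np.percentile(past_tests @ weights, [33, 67])
    return "low" if s < cutoffs[0] else ("moderate" if s < cutoffs[1] else "high")

print(rank(np.array([2.0, 1.5, 0.5, 0.0, -1.0])))
```

The study’s point follows directly: when a manager overrules a “high” ranking, the hire tends to underperform, suggesting the override reflects bias rather than better information.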

Last fall Amazon raised its starting pay to $15 an hour, several dollars more than what the brick-and-mortar stores being displaced by Amazon pay. Ruthless performance tracking is how Amazon ensures employees are productive enough to merit that salary. This also means that, while employees may increasingly be supervised by technology, at least they’re not about to be replaced by it.

Write to Greg Ip at greg.ip@wsj.com

https://www.morningstar.com/news/glbnewscan/TDJNDN_201905017114/for-lowerpaid-workers-the-robot-overlords-have-arrived.html



Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right) which then could be synthesized to reconstruct the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery

A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract—an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.

The new system being developed in the laboratory of Edward Chang, MD—described April 24, 2019 in Nature—demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Brief animation illustrates how patterns of brain activity from the brain’s speech centers in somatosensory cortex (top left) were first decoded into a computer simulation of a research participant’s vocal tract movements (top right), which were then translated into a synthesized version of the participant’s voice (bottom). Credit: Chang lab / UCSF Dept. of Neurosurgery. Simulated vocal tract animation credit: Speech Graphics
Virtual Vocal Tract Improves Naturalistic Speech Synthesis

The research was led by Gopala Anumanchipalli, Ph.D., a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.

From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”

In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center—patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery—to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
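The two-stage structure can be sketched as follows. The study used trained recurrent neural networks; the random linear maps below (with illustrative, invented dimensions) only show how the decoder and synthesizer compose into one pipeline from neural signals to acoustic features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions, not the study's actual sizes.
N_ELECTRODES, N_ARTICULATORS, N_AUDIO = 256, 33, 32

# Stage 1 "decoder": neural activity -> vocal tract kinematics.
# Stage 2 "synthesizer": kinematics -> acoustic features.
W_decode = rng.normal(size=(N_ARTICULATORS, N_ELECTRODES)) * 0.05
W_synth = rng.normal(size=(N_AUDIO, N_ARTICULATORS)) * 0.1

def decode(neural):          # (time, electrodes) -> (time, articulators)
    return np.tanh(neural @ W_decode.T)

def synthesize(kinematics):  # (time, articulators) -> (time, audio features)
    return kinematics @ W_synth.T

neural = rng.normal(size=(100, N_ELECTRODES))   # 100 time steps of recordings
audio_features = synthesize(decode(neural))
print(audio_features.shape)                     # (100, 32)
```

Splitting the problem at the vocal tract is the key design choice: the decoder only has to learn movements, which the speech centers actually encode, and the synthesizer only has to learn acoustics from movements.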

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.

“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance

The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.


Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF

Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.

The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”

https://medicalxpress.com/news/2019-04-synthetic-speech-brain.html


B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought. The image is in the public domain.

Summary: Researchers predict the development of a brain/cloud interface that connects neurons to cloud computing networks in real time.

Source: Frontiers

Imagine a future technology that would provide instant access to the world’s knowledge and artificial intelligence, simply by thinking about a specific topic or question. Communications, education, work, and the world as we know it would be transformed.

Writing in Frontiers in Neuroscience, an international collaboration led by researchers at UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, AI, and computation will lead this century to the development of a “Human Brain/Cloud Interface” (B/CI) that connects neurons and synapses in the brain to vast cloud-computing networks in real time.

Nanobots on the brain

The B/CI concept was initially proposed by futurist-author-inventor Ray Kurzweil, who suggested that neural nanorobots – the brainchild of Robert Freitas, Jr., senior author of the research – could be used to connect the neocortex of the human brain to a “synthetic neocortex” in the cloud. Our wrinkled neocortex is the newest, smartest, ‘conscious’ part of the brain.

Freitas’ proposed neural nanorobots would provide direct, real-time monitoring and control of signals to and from brain cells.

“These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely autoposition themselves among, or even within brain cells,” explains Freitas. “They would then wirelessly transmit encoded information to and from a cloud-based supercomputer network for real-time brain-state monitoring and data extraction.”

The internet of thoughts

This cortex in the cloud would allow “Matrix”-style downloading of information to the brain, the group claims.

“A human B/CI system mediated by neuralnanorobotics could empower individuals with instantaneous access to all cumulative human knowledge available in the cloud, while significantly improving human learning capacities and intelligence,” says lead author Dr. Nuno Martins.

B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought.

“While not yet particularly sophisticated, an experimental human ‘BrainNet’ system has already been tested, enabling thought-driven information exchange via the cloud between individual brains,” explains Martins. “It used electrical signals recorded through the skull of ‘senders’ and magnetic stimulation through the skull of ‘receivers,’ allowing for performing cooperative tasks.

“With the advance of neuralnanorobotics, we envisage the future creation of ‘superbrains’ that can harness the thoughts and thinking power of any number of humans and machines in real time. This shared cognition could revolutionize democracy, enhance empathy, and ultimately unite culturally diverse groups into a truly global society.”
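The cooperative logic of the BrainNet experiment Martins cites (noisy per-sender signals combined by a receiver into one decision) can be caricatured in a few lines. The 80% per-sender accuracy, the two-sender agreement rule, and the trial counts below are invented for this sketch, not figures from the experiment:

```python
import random

random.seed(42)

def noisy_channel(bit, accuracy=0.8):
    """One sender's yes/no decision as decoded from EEG: correct with
    probability `accuracy` (a made-up figure for this sketch)."""
    return bit if random.random() < accuracy else 1 - bit

def receiver_decision(bits):
    """Receiver combines the stimulation signals; with two senders a
    simple rule is to act only when both agree."""
    return bits[0] if bits[0] == bits[1] else None  # None: undecided

# Simulate 1000 rounds of a cooperative yes/no task.
trials, correct, undecided = 1000, 0, 0
for _ in range(trials):
    truth = random.randint(0, 1)             # the action the task requires
    received = [noisy_channel(truth), noisy_channel(truth)]
    decision = receiver_decision(received)
    if decision is None:
        undecided += 1
    elif decision == truth:
        correct += 1

print(f"correct: {correct}, undecided: {undecided}, "
      f"wrong: {trials - correct - undecided}")
```

Even this toy rule shows why aggregating brains helps: requiring agreement trades some throughput (undecided rounds) for far fewer wrong decisions than any single noisy sender would make.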

When can we connect?

According to the group’s estimates, even existing supercomputers have processing speeds capable of handling the necessary volumes of neural data for B/CI – and they’re getting faster, fast.

Instead, the ultimate bottleneck in B/CI development is likely to be transferring neural data to and from supercomputers in the cloud.

“This challenge includes not only finding the bandwidth for global data transmission,” cautions Martins, “but also, how to enable data exchange with neurons via tiny devices embedded deep in the brain.”

One solution proposed by the authors is the use of ‘magnetoelectric nanoparticles’ to effectively amplify communication between neurons and the cloud.

“These nanoparticles have been used already in living mice to couple external magnetic fields to neuronal electric fields – that is, to detect and locally amplify these magnetic signals and so allow them to alter the electrical activity of neurons,” explains Martins. “This could work in reverse, too: electrical signals produced by neurons and nanorobots could be amplified via magnetoelectric nanoparticles, to allow their detection outside of the skull.”
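In simplified textbook form (this is the standard linear magnetoelectric relation, not an equation from the paper), the two-way coupling Martins describes can be written as:

$$\Delta P_i = \sum_j \alpha_{ij} H_j, \qquad \mu_0 \, \Delta M_i = \sum_j \alpha_{ji} E_j$$

where $\alpha$ is the material's magnetoelectric coupling tensor. An applied magnetic field $H$ induces an electric polarization $P$ that can perturb a neuron's local electric field, and, running the effect in reverse, a neuronal electric field $E$ induces a magnetization $M$ that shows up as a magnetic signal detectable outside the skull.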

Getting these nanoparticles – and nanorobots – safely into the brain via the circulation would be perhaps the greatest challenge of all in B/CI.

“A detailed analysis of the biodistribution and biocompatibility of nanoparticles is required before they can be considered for human development. Nevertheless, with these and other promising technologies for B/CI developing at an ever-increasing rate, an ‘internet of thoughts’ could become a reality before the turn of the century,” Martins concludes.

https://neurosciencenews.com/internet-thoughts-brain-cloud-interface-11074/

When Amanda Kitts’s car was hit head-on by a Ford F-350 truck in 2006, her arm was damaged beyond repair. “It looked like minced meat,” Kitts, now 50, recalls. She was immediately rushed to the hospital, where doctors amputated what remained of her mangled limb.

While still in the hospital, Kitts discovered that researchers at the Rehabilitation Institute of Chicago (now the Shirley Ryan AbilityLab) were investigating a new technique called targeted muscle reinnervation, which would enable people to control motorized prosthetics with their minds. The procedure, which involves surgically rewiring residual nerves from an amputated limb into a nearby muscle, allows movement-related electrical signals—sent from the brain to the innervated muscles—to move a prosthetic device.
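The control principle behind targeted muscle reinnervation can be sketched in a few lines: the electrical activity of the reinnervated muscle is rectified, smoothed into an envelope, and turned into a prosthesis command. The simulated signal, the 0.2 threshold, and the binary on/off scheme below are invented for illustration; clinical systems use per-user calibration and proportional, multi-channel decoding:

```python
import numpy as np

rng = np.random.default_rng(1)

def emg_envelope(signal, fs=1000, window_ms=100):
    """Rectify raw EMG and take a moving average: a standard way to
    extract a control envelope from muscle activity."""
    rectified = np.abs(signal)
    win = int(fs * window_ms / 1000)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Simulated 1 s of EMG from a reinnervated chest muscle: baseline
# noise, with a burst of activity (an intended "close hand" command)
# between 400 and 600 ms.
fs = 1000
emg = 0.05 * rng.normal(size=fs)
emg[400:600] += 0.8 * rng.normal(size=200)   # voluntary contraction

env = emg_envelope(emg, fs)

# Threshold the envelope into a binary prosthesis command.
command = env > 0.2
print("hand-close command active:", int(command.sum()), "ms")
```

The surgery's contribution is upstream of this code: rerouting the arm nerves into the chest muscle gives the electrodes a muscle whose activity still carries the brain's hand-movement intent.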

Kitts immediately enrolled in the study and had the reinnervation surgery around a year after her accident. With her new prosthetic, Kitts regained a functional limb that she could use with her thoughts alone. But something important was missing. “I was able to move a prosthetic just by thinking about it, but I still couldn’t tell if I was holding or letting go of something,” Kitts says. “Sometimes my muscle might contract, and whatever I was holding would drop—so I found myself [often] looking at my arm when I was using it.”

What Kitts’s prosthetic limb failed to provide was a sense of kinesthesia—the awareness of where one’s body parts are and how they are moving. (Kinesthesia is a form of proprioception with a more specific focus on motion than on position.) Taken for granted by most people, kinesthesia is what allows us to unconsciously grab a coffee mug off a desk or to rapidly catch a falling object before it hits the ground. “It’s how we make such nice, elegant, coordinated movements, but you don’t necessarily think about it when it happens,” explains Paul Marasco, a neuroscientist at the Cleveland Clinic in Ohio. “There’s constant and rapid communication that goes on between the muscles and the brain.” The brain sends the intent to move the muscle, the muscle moves, and the awareness of that movement is fed back to the brain.

Prosthetic technology has advanced significantly in recent years, but proprioception is one thing that many of these modern devices still cannot reproduce, Marasco says. And it’s clear that this is something that people find important, he adds, because many individuals with upper-limb amputations still prefer old-school body-powered hook prosthetics. Despite being low tech—the devices work using a bicycle brake–like cable system that’s powered by the body’s own movements—they provide an inherent sense of proprioception.

To restore this sense for amputees who use the more modern prosthetics, Marasco and his colleagues decided to create a device based on what’s known as the kinesthetic illusion: the strange phenomenon in which vibrating a person’s muscle gives her the false sense of movement. A buzz to the triceps will make you think your arm is flexing, while stimulating the biceps will make you feel that it’s extending. The best illustration of this effect is the so-called Pinocchio illusion: holding your nose while someone applies a vibrating device to your bicep will confuse your brain into thinking your nose is growing.

“Your brain doesn’t like conflict,” Marasco explains. So if it thinks “my arm’s moving and I’m holding onto my nose, that must mean my nose is extending.”

To test the device, the team applied vibrations to the reinnervated muscles on six amputee participants’ chests or upper arms and asked them to indicate how they felt their hands were moving. Each amputee reported feeling various hand, wrist, and elbow motions, or “percepts,” in their missing limbs. Kitts, who had met Marasco while taking part in the studies he was involved in at the institute in Chicago, was one of the subjects in the experiment. “The first time I felt the sense of movement was remarkable,” she says.

In total, the experimenters documented 22 different percepts from their participants. “It’s hard to get this sense reliably, so I was encouraged to see the capability of several different subjects to get a reasonable sense of hand position from this illusion,” says Dustin Tyler, a biomedical engineer at Case Western Reserve University who was not involved in the work. He adds that while this is a new, noninvasive approach to proprioception, he and others are also working on devices that restore this sense by stimulating nerves directly with implanted devices.

Marasco and his colleagues then melded the vibration with the movement-controlled prostheses, so that when participants decided to move their artificial limbs, a vibrating stimulus was applied to the muscles to provide them with proprioceptive feedback. When the subjects conducted various movement-related tasks with this new system, their performance significantly improved.
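The closed loop just described can be sketched as gating a vibration at the article's reported "sweet spot" frequency of about 90 Hz by the movement command, so the reinnervated muscle feels the motion whenever the prosthesis moves. The function names, sample rate, and on/off gating are illustrative, not the study's actual controller:

```python
import numpy as np

FS = 1000          # actuator sample rate in Hz (assumed for this sketch)
VIB_FREQ = 90.0    # vibration frequency reported in the article

def feedback_waveform(move_command, fs=FS, freq=VIB_FREQ):
    """Gate a `freq`-Hz sinusoid by the binary movement command, so
    the vibration runs only while the prosthesis is moving."""
    t = np.arange(len(move_command)) / fs
    return np.sin(2 * np.pi * freq * t) * np.asarray(move_command, dtype=float)

# One second in which the user moves the prosthetic hand from 300-700 ms.
command = np.zeros(FS)
command[300:700] = 1.0
drive = feedback_waveform(command)

print("actuator active for", int((np.abs(drive) > 0).sum()), "samples")
```

The design point is the timing: because the vibration is driven by the same command that moves the limb, the illusory sense of motion arrives in sync with the actual motion, closing the muscle-to-brain loop the prosthesis otherwise lacks.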

“This was an extremely thorough set of experiments,” says Marcia O’Malley, a biomedical engineer at Rice University who did not take part in that study. “I think it is really promising.”

Although the mechanisms behind the illusion largely remain a mystery, Marasco says, the vibrations may be activating specific muscle receptors that provide the body with a sense of movement. Interestingly, he and his colleagues have found that the “sweet spot” vibration frequency for movement perception is nearly identical in humans and rats—about 90 Hz.

For Kitts, a system that provides proprioceptive feedback means being able to use her prosthetic without constantly watching it—and feeling it instead. “It’s a whole new level of having a real part of your body,” she says.

https://www.the-scientist.com/notebook/vibrations-restore-sense-of-movement-in-prosthetics-64691

DARPA’s new research in brain-computer interfaces is allowing a pilot to control multiple simulated aircraft at once.

A person with a brain chip can now pilot a swarm of drones — or even advanced fighter jets, thanks to research funded by the U.S. military’s Defense Advanced Research Projects Agency, or DARPA.

The work builds on research from 2015, which allowed a paralyzed woman to steer a virtual F-35 Joint Strike Fighter with only a small, surgically implantable microchip. On Thursday, agency officials announced that they had scaled up the technology to allow a user to steer multiple jets at once.

“As of today, signals from the brain can be used to command and control … not just one aircraft but three simultaneous types of aircraft,” said Justin Sanchez, who directs DARPA’s biological technology office, at the Agency’s 60th-anniversary event in Maryland.

More importantly, DARPA was able to improve the interaction between the pilot and the simulated jet to allow the operator, a paralyzed man named Nathan, to not just send but also receive signals from the craft.

“The signals from those aircraft can be delivered directly back to the brain so that the brain of that user [or pilot] can also perceive the environment,” said Sanchez. “It’s taken a number of years to try and figure this out.”

In essence, it’s the difference between having a brain joystick and having a real telepathic conversation with multiple jets or drones about what’s going on, what threats might be flying over the horizon, and what to do about them. “We’ve scaled it to three [aircraft], and have full sensory [signals] coming back. So you can have those other planes out in the environment and then be detecting something and send that signal back into the brain,” said Sanchez.

The experiment occurred a “handful of months ago,” he said.

It’s another breakthrough in the rapidly advancing field of brain-computer interfaces, or BCIs, for a variety of purposes. The military has been leading interesting research in the field since at least 2007. And in 2012, DARPA issued a $4 million grant to build a non-invasive “synthetic telepathy” interface by placing sensors close to the brain’s motor centers to pick up electrical signals — non-invasively, over the skin.

But the science has advanced rapidly in recent years, allowing for breakthroughs in brain-based communication, control of prosthetic limbs, and even memory repair.

https://www.defenseone.com/technology/2018/09/its-now-possible-telepathically-communicate-drone-swarm/151068/?oref=d-channeltop