


Scientists have created artificial neurons that could potentially be implanted into patients to overcome paralysis, restore failing brain circuits, and even connect their minds to machines.

The bionic neurons can receive electrical signals from healthy nerve cells, and process them in a natural way, before sending fresh signals on to other neurons, or to muscles and organs elsewhere in the body.

One of the first applications may be a treatment for a form of heart failure that develops when a particular neural circuit at the base of the brain deteriorates through age or disease and fails to send the right signals to make the heart pump properly.

Rather than implanting directly into the brain, the artificial neurons are built into ultra-low power microchips a few millimetres wide. The chips form the basis for devices that would plug straight into the nervous system, for example by intercepting signals that pass between the brain and leg muscles.

“Any area where you have some degenerative disease, such as Alzheimer’s, or where the neurons stop firing properly because of age, disease, or injury, then in theory you could replace the faulty biocircuit with a synthetic circuit,” said Alain Nogaret, a physicist who led the project at the University of Bath.

The breakthrough came when researchers found they could model live neurons in a computer program and then recreate their firing patterns in silicon chips with more than 94% accuracy. The program allows the scientists to mimic the full variety of neurons found in the nervous system.
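The Bath team fitted detailed conductance-based equations to data recorded from real cells. As a far simpler illustration of the underlying idea, simulating a neuron's membrane voltage and reading off its firing pattern, here is a minimal leaky integrate-and-fire model in Python (the parameters are generic textbook values, not the paper's):

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=-65e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, r_m=1e8):
    """Integrate the membrane equation and return spike times in seconds."""
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(i_input):
        dv = (-(v - v_rest) + r_m * i_t) / tau   # leaky integration
        v += dv * dt
        if v >= v_thresh:                        # threshold crossing
            spike_times.append(step * dt)
            v = v_reset                          # reset after each spike
    return spike_times

current = np.full(10000, 0.2e-9)   # constant 0.2 nA input for one second
print(len(simulate_lif(current)), "spikes")
```

The real models replace this single equation with families of ion-channel currents whose parameters are tuned until the silicon output matches the recorded cell to within the reported accuracy.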

Writing in the journal Nature Communications, the researchers describe how they fed the program with data recorded from two types of rat neuron, which were stimulated in a dish. The neurons were either from the hippocampus, a region that is crucial for memory and learning, or were involved in the subconscious control of breathing.

Armed with the program, the researchers claim they can now build bionic neurons based on any of the real nerve cells found in the brain, spinal cord, or the more distant reaches of the peripheral nervous system, such as the sensory neurons in the skin.

Because the artificial neurons both receive and send signals, they can be used to make implants that respond to neural feedback signals that are constantly coursing around the body.

“The potential is endless in terms of understanding how the brain works, because we now have the fundamental understanding and insight into the functional unit of the brain, and indeed applications, which might be to improve memory, to overcome paralysis and ameliorate disease,” said Julian Paton, a co-author on the study who holds posts at the Universities of Bristol and Auckland.

“They can be used in isolation or connected together to form neuronal networks to perform brain functions,” he added.

With development, trials and regulations to satisfy, it could be many years before the artificial neurons are helping patients. But if they prove safe and effective, they could ultimately be used to circumvent nerve damage in broken spines and help paralysed people regain movement, or to connect people’s brains to robotic limbs that can send touch sensations back through the implant to the brain.

Despite the vast possibilities the artificial neurons open up, Nogaret said the team was nowhere near building a whole brain, an organ which in a human consists of 86bn neurons and at least as many supporting cells. “We are not claiming that we are building a brain, there’s absolutely no way,” he said.

The scientists’ approach differs from that taken by many other peers who hope to recreate brain activity in computers. Rather than focusing on individual neurons, they typically model brain regions or even whole brains, but with far less precision. For example, the million-processor SpiNNaker machine at the University of Manchester can model an entire mouse brain, but not to the level of individual brain cells.

“If you wanted to model a whole mouse brain using the approach in this paper you might end up designing 100 million individual, but very precise, neurons on silicon, which is clearly unfeasible within a reasonable time and budget,” said Stephen Furber, professor of computer engineering at the University of Manchester.

“Because the approach is detailed and laboriously painstaking, it can really only be applied in practice to smallish neural units, such as the respiratory neurons described above, but there are quite a few critical small neural control circuits that are vital to keeping us alive,” he added.

https://www.theguardian.com/science/2019/dec/03/bionic-neurons-could-enable-implants-to-restore-failing-brain-circuits

By Donna Lu

An artificial intelligence has debated the dangers of AI – narrowly convincing audience members that the technology will do more good than harm.

Project Debater, a robot developed by IBM, spoke on both sides of the argument, with two human teammates for each side helping it out. Talking in a female American voice to a crowd at the University of Cambridge Union on Thursday evening, the AI gave each side’s opening statements, using arguments drawn from more than 1100 human submissions made ahead of time.

On the proposition side, arguing that AI will bring more harm than good, Project Debater’s opening remarks were darkly ironic. “AI can cause a lot of harm,” it said. “AI will not be able to make a decision that is the morally correct one, because morality is unique to humans.”

“AI companies still have too little expertise on how to properly assess datasets and filter out bias,” it added. “AI will take human bias and will fixate it for generations.”

The AI used an application known as “speech by crowd” to generate its arguments, analysing submissions people had sent in online. Project Debater then sorted these into key themes, as well as identifying redundancy – submissions making the same point using different words.
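IBM has not published Project Debater's internals, but the redundancy step described here, spotting submissions that make the same point in different words, can be sketched with standard text-similarity tools. In this hypothetical example (the sentences and cutoff are invented), submissions are embedded as TF-IDF vectors and near-duplicates flagged by cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

submissions = [
    "AI will take away millions of jobs",
    "Millions of jobs will be taken over by AI",
    "AI can diagnose disease earlier than doctors can",
]

# Embed each submission as a TF-IDF vector and compare all pairs
vectors = TfidfVectorizer(stop_words="english").fit_transform(submissions)
similarity = cosine_similarity(vectors)

threshold = 0.4  # invented cutoff; pairs above it count as one point
for i in range(len(submissions)):
    for j in range(i + 1, len(submissions)):
        if similarity[i, j] > threshold:
            print("redundant:", submissions[i], "<->", submissions[j])
```

A production system would use far richer semantic representations, but the shape of the task is the same: cluster by theme, then collapse near-duplicates.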

The AI argued coherently but had a few slip-ups. Sometimes it repeated itself – while talking about the ability of AI to perform mundane and repetitive tasks, for example – and it didn’t provide detailed examples to support its claims.

While debating on the opposition side, which was advocating for the overall benefits of AI, Project Debater argued that AI would create new jobs in certain sectors and “bring a lot more efficiency to the workplace”.

But then it made a point that was counter to its argument: “AI capabilities caring for patients or robots teaching schoolchildren – there is no longer a demand for humans in those fields either.”

The pro-AI side narrowly won, gaining 51.22 per cent of the audience vote.

Project Debater argued with humans for the first time last year, and in February this year lost in a one-on-one against champion debater Harish Natarajan, who also spoke at Cambridge as the third speaker for the team arguing in favour of AI.

IBM has plans to use the speech-by-crowd AI as a tool for collecting feedback from large numbers of people. For instance, it could be used by governments seeking public opinions about policies or by companies wanting input from employees, said IBM engineer Noam Slonim.

“This technology can help to establish an interesting and effective communication channel between the decision maker and the people that are going to be impacted by the decision,” he said.

Read more: https://www.newscientist.com/article/2224585-robot-debates-humans-about-the-dangers-of-artificial-intelligence/

Doctors have placed humans in suspended animation for the first time, as part of a trial in the US that aims to make it possible to fix traumatic injuries that would otherwise cause death.

Samuel Tisherman, at the University of Maryland School of Medicine, told New Scientist that his team of medics had placed at least one patient in suspended animation, calling it “a little surreal” when they first did it. He wouldn’t reveal how many people had survived as a result.

The technique, officially called emergency preservation and resuscitation (EPR), is being carried out on people who arrive at the University of Maryland Medical Center in Baltimore with an acute trauma – such as a gunshot or stab wound – and have had a cardiac arrest. Their heart will have stopped beating and they will have lost more than half their blood. There are only minutes to operate, with a less than 5 per cent chance that they would normally survive.

EPR involves rapidly cooling a person to around 10 to 15°C by replacing all of their blood with ice-cold saline. The patient’s brain activity almost completely stops. They are then disconnected from the cooling system and their body – which would otherwise be classified as dead – is moved to the operating theatre.

A surgical team then has 2 hours to fix the person’s injuries before they are warmed up and their heart restarted. Tisherman says he hopes to be able to announce the full results of the trial by the end of 2020.

At normal body temperature – about 37°C – our cells need a constant supply of oxygen to produce energy. When our heart stops beating, blood no longer carries oxygen to cells. Without oxygen, our brain can only survive for about 5 minutes before irreversible damage occurs. However, lowering the temperature of the body and brain slows or stops all the chemical reactions in our cells, which need less oxygen as a consequence.
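The article gives no numbers for this slowdown, but physiology's standard Q10 rule of thumb gives a back-of-envelope sense of it (an illustration, not a figure from the trial):

```latex
% Q10 rule: reaction rates fall by a factor of $Q_{10}$ per 10 C drop.
\[
\frac{R_{37^\circ}}{R_{15^\circ}} \;=\; Q_{10}^{\,(37-15)/10} \;=\; Q_{10}^{\,2.2}
\]
% With typical biological values of $Q_{10} \approx 2$ to $3$, cooling
% from 37 C to 15 C slows metabolism roughly 4.6- to 11-fold,
% stretching the few-minute oxygen-free window correspondingly.
```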

Tisherman’s plan for the trial is to compare 10 people who receive EPR with 10 people who would have been eligible for the treatment but did not receive it because the correct team wasn’t in the hospital at the time of admittance.

The trial was given the go-ahead by the US Food and Drug Administration. The FDA made it exempt from needing patient consent as the participants’ injuries are likely to be fatal and there is no alternative treatment. The team had discussions with the local community and placed ads in newspapers describing the trial, pointing people to a website where they can opt out.

Tisherman’s interest in trauma research was ignited by an early incident in his career in which a young man was stabbed in the heart after an altercation over bowling shoes. “He was a healthy young man just minutes before, then suddenly he was dead. We could have saved him if we’d had enough time,” he says. This led him to start investigating ways in which cooling might allow surgeons more time to do their job.

Animal studies showed that pigs with acute trauma could be cooled for 3 hours, stitched up and resuscitated. “We felt it was time to take it to our patients,” says Tisherman. “Now we are doing it and we are learning a lot as we move forward with the trial. Once we can prove it works here, we can expand the utility of this technique to help patients survive that otherwise would not.”

“I want to make clear that we’re not trying to send people off to Saturn,” he says. “We’re trying to buy ourselves more time to save lives.”

In fact, how long you can extend the time in which someone is in suspended animation isn’t clear. When a person’s cells are warmed up, they can experience reperfusion injuries, in which a series of chemical reactions damage the cell – and the longer they are without oxygen, the more damage occurs.

It may be possible to give people a cocktail of drugs to help minimise these injuries and extend the time in which they are suspended, says Tisherman, “but we haven’t identified all the causes of reperfusion injuries yet”.

Tisherman described the team’s progress on Monday at a symposium at the New York Academy of Sciences. Ariane Lewis, director of the division of neuro-critical care at NYU Langone Health, said she thought it was important work, but that it was just first steps. “We have to see whether it works and then we can start to think about how and where we can use it.”

Read more: https://www.newscientist.com/article/2224004-exclusive-humans-placed-in-suspended-animation-for-the-first-time/

by David Hambling

Everyone’s heart is different. Like the iris or fingerprint, our unique cardiac signature can be used as a way to tell us apart. Crucially, it can be done from a distance.

It’s that last point that has intrigued US Special Forces. Other long-range biometric techniques include gait analysis, which identifies someone by the way he or she walks. This method was supposedly used to identify an infamous ISIS terrorist before a drone strike. But gaits, like faces, are not necessarily distinctive. An individual’s cardiac signature is unique, though, and unlike faces or gait, it remains constant and cannot be altered or disguised.

Long-range detection
A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. “I don’t want to say you could do it from space,” says Steward Remaly, of the Pentagon’s Combatting Terrorism Technical Support Office, “but longer ranges should be possible.”

Contact infrared sensors are often used to automatically record a patient’s pulse. They work by detecting the changes in reflection of infrared light caused by blood flow. By contrast, the new device, called Jetson, uses a technique known as laser vibrometry to detect the surface movement caused by the heartbeat. This works through typical clothing like a shirt and a jacket (though not thicker clothing such as a winter coat).
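Jetson's actual signal processing is not public, but recovering a pulse from a vibrometer trace is, in its simplest form, band-pass filtering around plausible cardiac frequencies followed by beat detection. A hypothetical Python sketch (the sample rate, filter band, and synthetic signal are all assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 500  # sample rate in Hz (assumed)

def heart_rate_from_vibration(trace):
    # Keep 0.7-3 Hz, i.e. roughly 42-180 beats per minute
    b, a = butter(4, [0.7, 3.0], btype="bandpass", fs=FS)
    filtered = filtfilt(b, a, trace)
    # Require at least 0.33 s between detected beats
    peaks, _ = find_peaks(filtered, distance=int(FS * 0.33))
    return 60.0 / np.mean(np.diff(peaks) / FS)

# Synthetic chest-surface trace: 1.2 Hz (72 bpm) motion buried in noise
t = np.arange(0, 30, 1 / FS)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + np.random.randn(t.size)
print(f"estimated heart rate: {heart_rate_from_vibration(trace):.0f} bpm")
```

Extracting an identifying signature, as opposed to a heart rate, would additionally compare the shape of the filtered waveform against stored templates.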

The most common way of carrying out remote biometric identification is by face recognition. But this needs a good, frontal view of the face, which can be hard to obtain, especially from a drone. Face recognition may also be confused by beards, sunglasses, or headscarves.

Cardiac signatures are already used for security identification. The Canadian company Nymi has developed a wrist-worn pulse sensor as an alternative to fingerprint identification. The technology has been trialed by the Halifax building society in the UK.

Jetson extends this approach by adapting an off-the-shelf device that is usually used to check vibration from a distance in structures such as wind turbines. For Jetson, a special gimbal was added so that an invisible, quarter-size laser spot could be kept on a target. It takes about 30 seconds to get a good return, so at present the device is only effective where the subject is sitting or standing.

Better than face recognition
Remaly’s team then developed algorithms capable of extracting a cardiac signature from the laser signals. He claims that Jetson can achieve over 95% accuracy under good conditions, and this might be further improved. In practice, it’s likely that Jetson would be used alongside facial recognition or other identification methods.

Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. “Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy,” he says.

One glaring limitation is the need for a database of cardiac signatures, but even without this the system has its uses. For example, an insurgent seen in a group planting an IED could later be positively identified from a cardiac signature, even if the person’s name and face are unknown. Biometric data is also routinely collected by US armed forces in Iraq and Afghanistan, so cardiac data could be added to that library.

In the longer run, this technology could find many more uses, its developers believe. For example, a doctor could scan for arrhythmias and other conditions remotely, or hospitals could monitor the condition of patients without having to wire them up to machines.

https://www.technologyreview.com/s/613891/the-pentagon-has-a-laser-that-can-identify-people-from-a-distanceby-their-heartbeat/

Artificial intelligence can share our natural ability to make numeric snap judgments.

Researchers observed this knack for numbers in a computer model composed of virtual brain cells, or neurons, called an artificial neural network. After being trained merely to identify objects in images — a common task for AI — the network developed virtual neurons that respond to specific quantities. These artificial neurons are reminiscent of the “number neurons” thought to give humans, birds, bees and other creatures the innate ability to estimate the number of items in a set (SN: 7/7/18, p. 7). This intuition is known as number sense.

In number-judging tasks, the AI demonstrated a number sense similar to humans and animals, researchers report online May 8 in Science Advances. This finding lends insight into what AI can learn without explicit instruction, and may prove interesting for scientists studying how number sensitivity arises in animals.

Neurobiologist Andreas Nieder of the University of Tübingen in Germany and colleagues used a library of about 1.2 million labeled images to teach an artificial neural network to recognize objects such as animals and vehicles in pictures. The researchers then presented the AI with dot patterns containing one to 30 dots and recorded how various virtual neurons responded.

Some neurons were more active when viewing patterns with specific numbers of dots. For instance, some neurons activated strongly when shown two dots but not 20, and vice versa. The degree to which these neurons preferred certain numbers was nearly identical to previous data from the neurons of monkeys.
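The study's own network and code are not reproduced here, but the probing procedure can be sketched: take a network pretrained for object recognition, show it dot images, and record how its units respond as the count varies. In this illustrative sketch the model choice, image rendering, and probed layer are all assumptions:

```python
import numpy as np
import torch
from torchvision import models

# A pretrained object-recognition network stands in for the study's model
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def dot_image(n_dots, px=224):
    """Render n_dots small white squares at random positions."""
    img = np.zeros((px, px), dtype=np.float32)
    for _ in range(n_dots):
        y, x = np.random.randint(8, px - 8, size=2)
        img[y - 4:y + 4, x - 4:x + 4] = 1.0
    return torch.from_numpy(img).expand(1, 3, px, px)

with torch.no_grad():
    for n in [1, 2, 4, 8, 16, 30]:
        # Average each unit's response over several random dot layouts;
        # units whose response varies systematically with n are the
        # candidate "number neurons"
        acts = torch.stack([net.features(dot_image(n)).flatten()
                            for _ in range(10)]).mean(dim=0)
        print(n, float(acts.mean()))
```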

Dot detectors
A new artificial intelligence program viewed images of dots previously shown to monkeys, including images with one dot and images with even numbers of dots from 2 to 30. Much like the number-sensitive neurons in monkey brains, number-sensitive virtual neurons in the AI preferentially activated when shown specific numbers of dots. As in monkey brains, the AI contained more neurons tuned to smaller numbers than to larger ones.

To test whether the AI’s number neurons equipped it with an animal-like number sense, Nieder’s team presented pairs of dot patterns and asked whether the patterns contained the same number of dots. The AI was correct 81 percent of the time, performing about as well as humans and monkeys do on similar matching tasks. Like humans and other animals, the AI struggled to differentiate between patterns that had very similar numbers of dots, and between patterns that had many dots (SN: 12/10/16, p. 22).

This finding is a “very nice demonstration” of how AI can pick up multiple skills while training for a specific task, says Elias Issa, a neuroscientist at Columbia University not involved in the work. But exactly how and why number sense arose within this artificial neural network is still unclear, he says.

Nieder and colleagues argue that the emergence of number sense in AI might help biologists understand how human babies and wild animals get a number sense without being taught to count. Perhaps basic number sensitivity “is wired into the architecture of our visual system,” Nieder says.

Ivilin Stoianov, a computational neuroscientist at the Italian National Research Council in Padova, is not convinced that such a direct parallel exists between the number sense in this AI and that in animal brains. This AI learned to “see” by studying many labeled pictures, which is not how babies and wild animals learn to make sense of the world. Future experiments could explore whether similar number neurons emerge in AI systems that more closely mimic how biological brains learn, like those that use reinforcement learning, Stoianov says (SN: 12/8/18, p. 14).

https://www.sciencenews.org/article/new-ai-acquired-humanlike-number-sense-its-own

By Greg Ip

It’s time to stop worrying that robots will take our jobs — and start worrying that they will decide who gets jobs.

Millions of low-paid workers’ lives are increasingly governed by software and algorithms. This was starkly illustrated by a report last week that Amazon.com tracks the productivity of its employees and regularly fires those who underperform, with almost no human intervention.

“Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors,” a law firm representing Amazon said in a letter to the National Labor Relations Board, as first reported by technology news site The Verge. Amazon was responding to a complaint that it had fired an employee from a Baltimore fulfillment center for federally protected activity, which could include union organizing. Amazon said the employee was fired for failing to meet productivity targets.

Perhaps it was only a matter of time before software started firing people. After all, it already screens resumes, recommends job applicants, schedules shifts and assigns projects. In the workplace, “sophisticated technology to track worker productivity on a minute-by-minute or even second-by-second basis is incredibly pervasive,” says Ian Larkin, a business professor at the University of California at Los Angeles specializing in human resources.

Industrial laundry services track how many seconds it takes to press a laundered shirt; on-board computers track truckers’ speed, gear changes and engine revolutions per minute; and checkout terminals at major discount retailers report if the cashier is scanning items quickly enough to meet a preset goal. In all these cases, results are shared in real time with the employee, and used to determine who is terminated, says Mr. Larkin.

Of course, weeding out underperforming employees is a basic function of management. General Electric Co.’s former chief executive Jack Welch regularly culled the company’s underperformers. “In banking and management consulting it is standard to exit about 20% of employees a year, even in good times, using ‘rank and yank’ systems,” says Nick Bloom, an economist at Stanford University specializing in management.

For employees of General Electric, Goldman Sachs Group Inc. and McKinsey & Co., that risk is more than compensated for by the reward of stimulating and challenging work and handsome paychecks. The risk-reward trade-off in industrial laundries, fulfillment centers and discount stores is not nearly so enticing: the work is repetitive and the pay is low. Those who aren’t weeded out one year may be the next if the company raises its productivity targets. Indeed, wage inequality doesn’t fully capture how unequal work has become: enjoyable and secure at the top, monotonous and insecure at the bottom.

At fulfillment centers, employees locate, scan and box all the items in an order. Amazon’s “Associate Development and Performance Tracker,” or Adapt, tracks how each employee performs on these steps against externally established benchmarks and warns employees when they are falling short.
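Amazon has not published Adapt's logic, but stripped to its essentials, automated warnings and terminations amount to comparing each associate's measured rate with a benchmark, with no supervisor in the loop. A purely hypothetical sketch (the names, benchmark, and warning count are invented):

```python
from dataclasses import dataclass

BENCHMARK = 100.0   # externally set units-per-hour target (assumed)
MAX_WARNINGS = 3    # warnings before automatic termination (assumed)

@dataclass
class Associate:
    badge_id: str
    units_per_hour: float
    warnings: int = 0

def review(worker: Associate) -> str:
    """Generate a notice from tracked productivity, no manager involved."""
    if worker.units_per_hour >= BENCHMARK:
        return "ok"
    worker.warnings += 1
    if worker.warnings >= MAX_WARNINGS:
        return "termination notice generated"
    return f"warning {worker.warnings} issued"

print(review(Associate("A123", units_per_hour=87.5)))  # warning 1 issued
```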

Amazon employees have complained of being monitored continuously — even having bathroom breaks measured — and being held to ever-rising productivity benchmarks. There is no public data to determine if such complaints are more or less common at Amazon than its peers. The company says about 300 employees — roughly 10% of the Baltimore center’s employment level — were terminated for productivity reasons in the year before the law firm’s letter was sent to the NLRB.

Mr. Larkin says 10% is not unusually high. Yet, automating the discipline process, he says, “makes an already difficult job seem even more inhuman and undesirable. Dealing with these tough situations is one of the key roles of managers.”

“Managers make final decisions on all personnel matters,” an Amazon spokeswoman said. “The [Adapt system] simply tracks and ensures consistency of data and process across hundreds of employees to ensure fairness.” The number of terminations has decreased in the last two years at the Baltimore facility and across North America, she said. Termination notices can be appealed.

Companies use these systems because they work well for them.

Mr. Bloom and his co-authors find that companies that more aggressively hire, fire and monitor employees have faster productivity growth. They also have wider gaps between the highest- and lowest-paid employees.

Computers also don’t succumb to the biases managers do. Economists Mitchell Hoffman, Lisa Kahn and Danielle Li looked at how 15 firms used a job-testing technology that tested applicants on computer and technical skills, personality, cognitive skills, fit for the job and various job scenarios. Drawing on past correlations, the algorithm ranked applicants as having high, moderate or low potential. Their study found employees hired against the software’s recommendation were below-average performers: “This suggests that managers often overrule test recommendations because they are biased or mistaken, not only because they have superior private information,” they wrote.

Last fall Amazon raised its starting pay to $15 an hour, several dollars more than what the brick-and-mortar stores being displaced by Amazon pay. Ruthless performance tracking is how Amazon ensures employees are productive enough to merit that salary. This also means that, while employees may increasingly be supervised by technology, at least they’re not about to be replaced by it.

Write to Greg Ip at greg.ip@wsj.com

https://www.morningstar.com/news/glbnewscan/TDJNDN_201905017114/for-lowerpaid-workers-the-robot-overlords-have-arrived.html

Thanks to Kebmodee for bringing this to the It’s Interesting community.



Summary: Researchers predict the development of a brain/cloud interface that connects neurons to cloud computing networks in real time.

Source: Frontiers

Imagine a future technology that would provide instant access to the world’s knowledge and artificial intelligence, simply by thinking about a specific topic or question. Communications, education, work, and the world as we know it would be transformed.

Writing in Frontiers in Neuroscience, an international collaboration led by researchers at UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, AI, and computation will lead this century to the development of a “Human Brain/Cloud Interface” (B/CI) that connects neurons and synapses in the brain to vast cloud-computing networks in real time.

Nanobots on the brain

The B/CI concept was initially proposed by futurist-author-inventor Ray Kurzweil, who suggested that neural nanorobots – the brainchild of Robert Freitas Jr., senior author of the research – could be used to connect the neocortex of the human brain to a “synthetic neocortex” in the cloud. Our wrinkled neocortex is the newest, smartest, ‘conscious’ part of the brain.

Freitas’ proposed neural nanorobots would provide direct, real-time monitoring and control of signals to and from brain cells.

“These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely autoposition themselves among, or even within brain cells,” explains Freitas. “They would then wirelessly transmit encoded information to and from a cloud-based supercomputer network for real-time brain-state monitoring and data extraction.”

The internet of thoughts

This cortex in the cloud would allow “Matrix”-style downloading of information to the brain, the group claims.

“A human B/CI system mediated by neuralnanorobotics could empower individuals with instantaneous access to all cumulative human knowledge available in the cloud, while significantly improving human learning capacities and intelligence,” says lead author Dr. Nuno Martins.

B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought.

“While not yet particularly sophisticated, an experimental human ‘BrainNet’ system has already been tested, enabling thought-driven information exchange via the cloud between individual brains,” explains Martins. “It used electrical signals recorded through the skull of ‘senders’ and magnetic stimulation through the skull of ‘receivers,’ allowing for performing cooperative tasks.

“With the advance of neuralnanorobotics, we envisage the future creation of ‘superbrains’ that can harness the thoughts and thinking power of any number of humans and machines in real time. This shared cognition could revolutionize democracy, enhance empathy, and ultimately unite culturally diverse groups into a truly global society.”

When can we connect?

According to the group’s estimates, even existing supercomputers have processing speeds capable of handling the necessary volumes of neural data for B/CI – and they’re getting faster, fast.

Rather, transferring neural data to and from supercomputers in the cloud is likely to be the ultimate bottleneck in B/CI development.

“This challenge includes not only finding the bandwidth for global data transmission,” cautions Martins, “but also, how to enable data exchange with neurons via tiny devices embedded deep in the brain.”
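For a rough sense of the scale involved, here is a back-of-envelope estimate of raw spike traffic (the numbers are illustrative assumptions, not the paper's):

```python
# Order-of-magnitude estimate of whole-brain spike traffic
neurons = 86e9        # neurons in a human brain (standard estimate)
mean_rate_hz = 1.0    # assumed average firing rate per neuron
bits_per_spike = 16   # assumed cost of encoding one spike event

gb_per_second = neurons * mean_rate_hz * bits_per_spike / 8 / 1e9
print(f"~{gb_per_second:.0f} GB/s of raw spike data")   # ~172 GB/s
```

Even with generous compression, streaming that continuously to the cloud would dwarf today's wireless links, which is why the authors single out data transfer as the bottleneck.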

One solution proposed by the authors is the use of ‘magnetoelectric nanoparticles’ to effectively amplify communication between neurons and the cloud.

“These nanoparticles have been used already in living mice to couple external magnetic fields to neuronal electric fields – that is, to detect and locally amplify these magnetic signals and so allow them to alter the electrical activity of neurons,” explains Martins. “This could work in reverse, too: electrical signals produced by neurons and nanorobots could be amplified via magnetoelectric nanoparticles, to allow their detection outside of the skull.”

Getting these nanoparticles – and nanorobots – safely into the brain via the circulation would be perhaps the greatest challenge of all in B/CI.

“A detailed analysis of the biodistribution and biocompatibility of nanoparticles is required before they can be considered for human development. Nevertheless, with these and other promising technologies for B/CI developing at an ever-increasing rate, an ‘internet of thoughts’ could become a reality before the turn of the century,” Martins concludes.

https://neurosciencenews.com/internet-thoughts-brain-cloud-interface-11074/