Posts Tagged ‘The Singularity’


Researchers at the University of Minnesota use a customized 3D printer to print electronics on a real hand. Image: McAlpine group, University of Minnesota

Soldiers are commonly thrust into situations where the danger is the unknown: Where is the enemy, how many are there, what weaponry is being used? The military already uses a mix of technology to help answer those questions quickly, and another may be on its way. Researchers at the University of Minnesota have developed a low-cost 3D printer that prints sensors and electronics directly on skin. The development could allow soldiers to directly print temporary, disposable sensors on their hands to detect such things as chemical or biological agents in the field.

The technology also could be used in medicine. The Minnesota researchers successfully used bioink with the device to print cells directly on the wounds of a mouse. Researchers believe it could eventually provide new methods of faster and more efficient treatment, or direct printing of grafts for skin wounds or conditions.

“The concept was to go beyond smart materials, to integrate them directly onto skin,” says Michael McAlpine, a professor of mechanical engineering whose research group focuses on 3D printing functional materials and devices. “It is a biological merger with electronics. We wanted to push the limits of what a 3D printer can do.”

McAlpine calls it a very simple idea, “One of those ideas so simple, it turns out no one has done it.”

Others have used 3D printers to print electronics and biological cells. But printing on skin presented a few challenges. No matter how hard a person tries to remain still, there always will be some movement during the printing process. “If you put a hand under the printer, it is going to move,” he says.

To adjust for that, the printer the Minnesota team developed uses a machine vision algorithm written by Ph.D. student Zhijie Zhu to track the motion of the hand in real time while printing. Temporary markers are placed on the skin, which is then scanned. The printer tracks the hand using the markers and adjusts in real time to any movement, allowing the printed electronics to maintain a circuit shape. The printed device can be peeled off the skin when it is no longer needed.
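The paper does not include source code, but the closed-loop idea can be sketched in a few lines: locate the skin-mounted markers in each camera frame, estimate how far the surface has drifted since the initial scan, and shift the planned toolpath by that amount before printing the next segment. The sketch below is only a minimal illustration of that idea, not the Minnesota group's implementation; the marker coordinates and the toolpath are made-up placeholders.

```python
# Minimal sketch of marker-based motion compensation (illustrative only).
import numpy as np

def estimate_shift(reference: dict, current: dict) -> np.ndarray:
    """Mean 2D displacement of markers visible in both the reference scan and the current frame."""
    common = reference.keys() & current.keys()
    if not common:
        return np.zeros(2)  # no markers found; in practice, keep the last known offset
    return np.mean([np.asarray(current[k]) - np.asarray(reference[k]) for k in common], axis=0)

def adjust_toolpath(toolpath: np.ndarray, shift: np.ndarray) -> np.ndarray:
    """Translate every planned XY point by the estimated skin movement."""
    return toolpath + shift

if __name__ == "__main__":
    reference = {0: (10.0, 12.0), 1: (40.0, 12.0), 2: (25.0, 30.0)}   # marker centroids at scan time
    current   = {0: (11.5, 13.0), 1: (41.4, 13.2), 2: (26.6, 31.1)}   # centroids after the hand drifts
    path = np.array([[20.0, 20.0], [22.0, 20.0], [24.0, 20.0]])       # planned circuit trace (mm)
    print(adjust_toolpath(path, estimate_shift(reference, current)))
```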

The team also needed to develop a special ink that is not only conductive but also prints and cures at room temperature. Standard 3D printing inks cure at high temperatures, around 212 °F (100 °C), and would burn skin.

In a paper recently published in Advanced Materials, the team identified three criteria for conductive inks: The viscosity of the ink should be tunable while maintaining self-supporting structures; the ink solvent should evaporate quickly so the device becomes functional on the same timescale as the printing process; and the printed electrodes should become highly conductive under ambient conditions.

The solution was an ink that uses silver flakes, rather than the particles or fibers more common in other applications, to provide conductivity; fibers were found to be too large and to cure only at high temperatures. The flakes align under shear forces during printing, and the addition of ethanol to the mix speeds evaporation, allowing the ink to cure quickly at room temperature.

“Printing electronics directly on skin would have been a breakthrough in itself, but when you add all of these other components, this is big,” McAlpine says.

The printer is portable, lightweight, and costs less than $400. It consists of a delta robot, monitor cameras for long-distance observation of printing states, and tracking cameras mounted for precise localization of the surface. The team added a syringe-type nozzle to squeeze and deliver the ink.

Furthering the printer’s versatility, McAlpine’s team worked with staff from the university’s medical school and hospital to print skin cells directly on a skin wound of a mouse. The mouse was anesthetized, but still moved slightly during the procedure, he says. The initial success makes the team optimistic that it could open up a new method of treating skin diseases.

“Think about what the applications could be,” McAlpine says. “A soldier in the field could take the printer out of a pack and print a solar panel. On the cellular side, you could bring a printer to the site of an accident and print cells directly on wounds, speeding the treatment. Eventually, you may be able to print biomedical devices within the body.”

In its paper, the team suggests that devices can be “autonomously fabricated without the need for microfabrication facilities in freeform geometries that are actively adaptive to target surfaces in real time, driven by advances in multifunctional 3D printing technologies.”

Besides the ability to print directly on skin, McAlpine says the work may offer advantages over other skin electronic devices. For example, soft, thin, stretchable patches fitted with off-the-shelf chip-based electronics have been used to monitor a patient’s health; they stick to skin like a temporary tattoo and send updates wirelessly to a computer.

“The advantage of our approach is that you don’t have to start with electronic wafers made in a clean room,” McAlpine says. “This is a completely new paradigm for printing electronics using 3D printing.”

http://www.asme.org/engineering-topics/articles/bioengineering/researchers-3d-print-skin-breakthrough


Futuristic cityscape maze.

By Diana Kwon

A computer program that learns to navigate through space spontaneously mimics the electrical activity of grid cells, neurons that help animals navigate their environments, according to a study published May 9 in Nature.

“This paper came out of the blue, like a shot, and it’s very exciting,” Edvard Moser, a neuroscientist at the Kavli Institute for Systems Neuroscience in Norway who was not involved in the work, tells Nature in an accompanying news story. “It is striking that the computer model, coming from a totally different perspective, ended up with the grid pattern we know from biology.” Moser shared a Nobel Prize for the discovery of grid cells with neuroscientists May-Britt Moser and John O’Keefe in 2014.

When scientists trained an artificial neural network to navigate a simulated environment in the form of virtual rats, they found that the algorithm produced patterns of activity similar to those found in the grid cells of the human brain. “We wanted to see whether we could set up an artificial network with an appropriate task so that it would actually develop grid cells,” study coauthor Caswell Barry of University College London tells Quanta. “What was surprising was how well it worked.”

The team then tested the program in a more complex, maze-like environment and found that not only did the virtual rats make their way to the end, they also outperformed a human expert at the task.

“It is doing the kinds of things that animals do and that is to take direct routes wherever possible and shortcuts when they are available,” coauthor Dharshan Kumaran, a senior researcher at Google’s AI company DeepMind, tells The Guardian.

DeepMind researchers hope to use these types of artificial neural networks to study other parts of the brain, such as those involved in understanding sound and controlling limbs, according to Wired. “This has proven to be extremely hard with traditional neuroscience so, in the future, if we could improve these artificial models, we could potentially use them to understand other brain functionalities,” study coauthor Andrea Banino, a research scientist at DeepMind, tells Wired. “This would be a giant step toward the future of brain understanding.”

https://www.the-scientist.com/?articles.view/articleNo/54534/title/Artificial-Intelligence-Mimics-Navigation-Cells-in-the-Brain/&utm_campaign=TS_DAILY%20NEWSLETTER_2018&utm_source=hs_email&utm_medium=email&utm_content=62845247&_hsenc=p2ANqtz-_1eI9gR1hZiJ5AMHakKnqqytoBx4h3r-AG5kHqEt0f3qMz5KQh5XeBQGeWxvqyvET-l70AGfikSD0n3SiVYETaAbpvtA&_hsmi=62845247

What if we could edit the sensations we feel, paste into our brains pictures that we never saw, cut out unwanted pain, or insert non-existent scents into memory?

UC Berkeley neuroscientists are building the equipment to do just that, using holographic projection into the brain to activate or suppress dozens and ultimately thousands of neurons at once, hundreds of times each second, copying real patterns of brain activity to fool the brain into thinking it has felt, seen or sensed something.

The goal is to read neural activity constantly and decide, based on the activity, which sets of neurons to activate to simulate the pattern and rhythm of an actual brain response, so as to replace lost sensations after peripheral nerve damage, for example, or control a prosthetic limb.

“This has great potential for neural prostheses, since it has the precision needed for the brain to interpret the pattern of activation. If you can read and write the language of the brain, you can speak to it in its own language and it can interpret the message much better,” said Alan Mardinly, a postdoctoral fellow in the UC Berkeley lab of Hillel Adesnik, an assistant professor of molecular and cell biology. “This is one of the first steps in a long road to develop a technology that could be a virtual brain implant with additional senses or enhanced senses.”

Mardinly is one of three first authors of a paper appearing online April 30 in advance of publication in the journal Nature Neuroscience that describes the holographic brain modulator, which can activate up to 50 neurons at once in a three-dimensional chunk of brain containing several thousand neurons, and repeat that up to 300 times a second with different sets of 50 neurons.

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute, who was not involved in the research project. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

Holographic projection

Each of the 2,000 to 3,000 neurons in the chunk of brain was outfitted with a protein that, when hit by a flash of light, turns the cell on to create a brief spike of activity. One of the key breakthroughs was finding a way to target each cell individually without hitting them all at once.

To focus the light onto just the cell body — a target smaller than the width of a human hair — of nearly all cells in a chunk of brain, they turned to computer-generated holography, a method of bending and focusing light to form a three-dimensional spatial pattern. The effect is as if a 3D image were floating in space.

In this case, the holographic image was projected into a thin layer of brain tissue at the surface of the cortex, about a tenth of a millimeter thick, through a clear window into the brain.

“The major advance is the ability to control neurons precisely in space and time,” said postdoc Nicolas Pégard, another first author who works both in Adesnik’s lab and the lab of co-author Laura Waller, an associate professor of electrical engineering and computer sciences. “In other words, to shoot the very specific sets of neurons you want to activate and do it at the characteristic scale and the speed at which they normally work.”

The researchers have already tested the prototype in the touch, vision and motor areas of the brains of mice as they walk on a treadmill with their heads immobilized. While they have not noted any behavior changes in the mice when their brains are stimulated, Mardinly said that their brain activity — which is measured in real time with two-photon imaging of calcium levels in the neurons — shows patterns similar to a response to a sensory stimulus. They’re now training mice so they can detect behavior changes after stimulation.

Prosthetics and brain implants

The area of the brain covered — now a slice one-half millimeter square and one-tenth of a millimeter thick — can be scaled up to read from and write to more neurons in the brain’s outer layer, or cortex, Pégard said. And the laser holography setup could eventually be miniaturized to fit in a backpack a person could haul around.

Mardinly, Pégard and the other first author, postdoc Ian Oldenburg, constructed the holographic brain modulator by making technological advances in a number of areas. Mardinly and Oldenburg, together with Savitha Sridharan, a research associate in the lab, developed better optogenetic switches to insert into cells to turn them on and off. The switches — light-activated ion channels on the cell surface that open briefly when triggered — turn on strongly and then quickly shut off, all in about 3 milliseconds, so they’re ready to be re-stimulated up to 50 or more times per second, consistent with normal firing rates in the cortex.

Pégard developed the holographic projection system using a liquid crystal screen that acts like a holographic negative to sculpt the light from 40W lasers into the desired 3D pattern. The lasers are pulsed in 300 femtosecond-long bursts every microsecond. He, Mardinly, Oldenburg and their colleagues published a paper last year describing the device, which they call 3D-SHOT, for three-dimensional scanless holographic optogenetics with temporal focusing.

“This is the culmination of technologies that researchers have been working on for a while, but have been impossible to put together,” Mardinly said. “We solved numerous technical problems at the same time to bring it all together and finally realize the potential of this technology.”

As they improve their technology, they plan to start capturing real patterns of activity in the cortex in order to learn how to reproduce sensations and perceptions to play back through their holographic system.

Reference:
Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., . . . Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature Neuroscience. doi:10.1038/s41593-018-0139-8

https://www.technologynetworks.com/neuroscience/news/using-holography-to-activate-the-brain-300329?utm_campaign=Newsletter_TN_BreakingScienceNews&utm_source=hs_email&utm_medium=email&utm_content=62560457&_hsenc=p2ANqtz–bJrpQXF2dp2fYgPpEKUOIkhpHxOYZR7Nx-irsQ649T-Ua02wmYTaBOkA9joFtI9BGKIAUb1NoL7-s27Rj9XMPH44XUw&_hsmi=62560457

In the age of big data, we are quickly producing far more digital information than we can possibly store. Last year, $20 billion was spent on new data centers in the US alone, doubling the capital expenditure on data center infrastructure from 2016. And even with skyrocketing investment in data storage, corporations and the public sector are falling behind.

But there’s hope.

With a nascent technology leveraging DNA for data storage, this may soon become a problem of the past. By encoding bits of data into tiny molecules of DNA, researchers and companies like Microsoft hope to fit entire data centers in a few flasks of DNA by the end of the decade.

But let’s back up.

Backdrop

Over the course of the 20th century, we graduated from magnetic tape, floppy disks, and CDs to sophisticated semiconductor memory chips capable of holding data in countless tiny transistors. In keeping with Moore’s Law, we’ve seen an exponential increase in the storage capacity of silicon chips. At the same time, however, the rate at which humanity produces new digital information is exploding. The size of the global datasphere is increasing exponentially, predicted to reach 160 zettabytes (160 trillion gigabytes) by 2025. As of 2016, digital users produced over 44 billion gigabytes of data per day. By 2025, the International Data Corporation (IDC) estimates this figure will surpass 460 billion. And with private sector efforts to improve global connectivity—such as OneWeb and Google’s Project Loon—we’re about to see an influx of data from five billion new minds.

By 2020, three billion new minds are predicted to join the web. With private sector efforts, this number could reach five billion. While companies and services are profiting enormously from this influx, it’s extremely costly to build data centers at the rate needed. At present, about $50 million worth of new data center construction is required just to keep up, not to mention millions in furnishings, equipment, power, and cooling. Moreover, memory-grade silicon is rarely found pure in nature, and researchers predict it will run out by 2040.

Take DNA, on the other hand. At its theoretical limit, we could fit 215 million gigabytes of data in a single gram of DNA.

But how?

Crash Course

DNA is built from a double helix chain of four nucleotide bases—adenine (A), thymine (T), cytosine (C), and guanine (G). Once formed, these chains fold tightly to form extremely dense, space-saving data stores. To encode data files into these bases, we can use various algorithms that convert binary to base nucleotides—0s and 1s into A, T, C, and G. “00” might be encoded as A, “01” as G, “10” as C, and “11” as T, for instance. Once encoded, information is then stored by synthesizing DNA with specific base patterns, and the final encoded sequences are stored in vials with an extraordinary shelf-life. To retrieve data, encoded DNA can then be read using any number of sequencing technologies, such as Oxford Nanopore’s portable MinION.
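As a concrete illustration of the mapping quoted above (“00” as A, “01” as G, “10” as C, “11” as T), a toy encoder and decoder might look like the sketch below. This is purely illustrative; real DNA storage pipelines add error-correcting codes, avoid long runs of identical bases, and split files into addressed fragments.

```python
# Toy 2-bit-per-base codec (00->A, 01->G, 10->C, 11->T); illustration only.
BIT_PAIR_TO_BASE = {"00": "A", "01": "G", "10": "C", "11": "T"}
BASE_TO_BIT_PAIR = {base: pair for pair, base in BIT_PAIR_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn raw bytes into a string of nucleotide bases."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BIT_PAIR_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    """Turn a base sequence back into the original bytes."""
    bits = "".join(BASE_TO_BIT_PAIR[base] for base in sequence)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

assert decode(encode(b"hello, DNA")) == b"hello, DNA"
print(encode(b"hi"))  # -> 'GCCAGCCG': 8 bases encode 2 bytes
```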

Still in its deceptive growth phase, DNA data storage—or NAM (nucleic acid memory)—is only beginning to approach the knee of its exponential growth curve. But while the process remains costly and slow, several players are beginning to crack its greatest challenge: retrieval. Just as you might click on a specific file or search for a term on your desktop, random access across large data stores has become a top priority for scientists at Microsoft Research and the University of Washington.

With over 400 megabytes of data encoded in DNA, U Washington’s storage system now offers random access across all of it with no bit errors.

Applications

Even before we guarantee random access for data retrieval, DNA data storage has immediate market applications. According to IDC’s Data Age 2025 study (Figure 5), a huge proportion of enterprise data goes straight to an archive. Over time, the majority of stored data becomes only potentially critical, making it less of a target for immediate retrieval.

Particularly for storing past legal documents, medical records, and other archive data, why waste precious computing power, infrastructure, and overhead?

Data-encoded DNA can last 10,000 years—guaranteed—in cold, dark, and dry conditions at a fraction of the storage cost.

Now that we can easily use natural enzymes to replicate DNA, companies have tons to gain (literally) by using DNA as a backup system—duplicating files for later retrieval and risk mitigation.

And as retrieval algorithms and biochemical technologies improve, random access across data-encoded DNA may become as easy as clicking a file on your desktop.

As you scroll, researchers are already investigating the potential of molecular computing, completely devoid of silicon and electronics.

Harvard professor George Church and his lab, for instance, envision capturing data directly in DNA. As Church has stated, “I’m interested in making biological cameras that don’t have any electronic or mechanical components,” whereby information “goes straight into DNA.” According to Church, DNA recorders would capture audiovisual data automatically. “You could paint it up on walls, and if anything interesting happens, just scrape a little bit off and read it—it’s not that far off.” One day, we may even be able to record biological events in the body. In pursuit of this end, Church’s lab is working to develop an in vivo DNA recorder of neural activity, skipping electrodes entirely.

Perhaps the most ultra-compact, long-lasting, and universal storage mechanism at our fingertips, DNA offers us unprecedented applications in data storage—perhaps even computing.

Potential

As DNA data storage falls in cost and rises in speed, commercial user interfaces will become both critical and wildly profitable. Once corporations, startups, and people alike can easily save files, images, or even neural activity to DNA, opportunities for disruption abound. Imagine uploading files to the cloud, which travel to encrypted DNA vials, as opposed to massive and inefficient silicon-enabled data centers. Corporations could have their own warehouses, and local data networks could allow for heightened cybersecurity—particularly for archives.

And since DNA lasts millennia without maintenance, forget the need to copy databases and power digital archives. As long as we’re human, regardless of technological advances and changes, DNA will always be relevant and readable for generations to come.

But perhaps the most exciting potential of DNA is its portability. If we were to send a single exabyte of data (one billion gigabytes) to Mars using silicon binary media, it would take five Falcon Heavy rockets and cost $486 million in freight alone.

With DNA, we would need five cubic centimeters.
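A quick back-of-envelope check, using only the figures quoted in this article (215 million gigabytes per gram at the theoretical limit, one exabyte = one billion gigabytes) plus an assumed handling density of roughly 1 g/cm³, lands in the same ballpark:

```python
# Back-of-envelope arithmetic using the article's own figures; the density is an assumption.
GB_PER_GRAM = 215e6       # theoretical limit cited above, in gigabytes per gram
EXABYTE_GB = 1e9          # one exabyte expressed in gigabytes
GRAMS_PER_CM3 = 1.0       # assumed handling density (dry DNA is somewhat denser)

grams = EXABYTE_GB / GB_PER_GRAM        # about 4.7 g of DNA for one exabyte
volume_cm3 = grams / GRAMS_PER_CM3      # on the order of the ~5 cm^3 quoted above
print(f"{grams:.1f} g, roughly {volume_cm3:.1f} cm^3")
```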

At scale, DNA has the true potential to dematerialize entire space colonies worth of data. Throughout evolution, DNA has unlocked extraordinary possibilities—from humans to bacteria. Soon hosting limitless data in almost zero space, it may one day unlock many more.

https://singularityhub.com/2018/04/26/the-answer-to-the-digital-data-tsunami-is-literally-in-our-dna/?utm_source=Singularity+Hub+Newsletter&utm_campaign=fa76321507-Hub_Daily_Newsletter&utm_medium=email&utm_term=0_f0cf60cdae-fa76321507-58158129#sm.000kbyugh140cf5sxiv1mnz7bq65u

by Vanessa Zainzinger

Wireless sensors are ubiquitous, providing a steady stream of information on anything from our physical activity to changes occurring in the world’s oceans. Now, scientists have developed a tiny form of the data-gathering tool, designed for an area that has so far escaped its reach: our teeth.

The 2-millimeter-by-2-millimeter devices are made up of a film of polymers that detects chemicals in its environment. Sandwiched between two square-shaped gold rings that act as antennas, the sensor can transmit information on what’s going on—or what’s being chewed on—in our mouth to a digital device, such as a smartphone. The type of compound the inner layer detects—salt, for example, or ethanol—determines the spectrum and intensity of the radio-frequency waves that the sensor transmits. Because the sensor uses the ambient radio-frequency signals that are already around us, it doesn’t need a power supply.

The researchers tested their invention on people drinking alcohol, gargling mouthwash, or eating soup. In each case, the sensor was able to detect what the person was consuming by picking up on nutrients.

The devices could help health care and clinical researchers find links between dietary intake and health and, in the long run, allow each of us to keep track of how what we consume is affecting our bodies.

http://www.sciencemag.org/news/2018/03/tiny-sensor-your-tooth-could-help-keep-you-healthy


Arnav Kapur, a researcher in the Fluid Interfaces group at the MIT Media Lab, demonstrates the AlterEgo project. Image: Lorrie Lejeune/MIT

MIT researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud.

The system consists of a wearable device and an associated computing system. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations — saying words “in your head” — but are undetectable to the human eye. The signals are fed to a machine-learning system that has been trained to correlate particular signals with particular words.

The device also includes a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they don’t obstruct the ear canal, the headphones enable the system to convey information to the user without interrupting conversation or otherwise interfering with the user’s auditory experience.

The device is thus part of a complete silent-computing system that lets the user undetectably pose and receive answers to difficult computational problems. In one of the researchers’ experiments, for instance, subjects used the system to silently report opponents’ moves in a chess game and just as silently receive computer-recommended responses.

“The motivation for this was to build an IA device — an intelligence-augmentation device,” says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

“We basically can’t live without our cellphones, our digital devices,” says Pattie Maes, a professor of media arts and sciences and Kapur’s thesis advisor. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself. So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present.”

The researchers describe their device in a paper they presented at the Association for Computing Machinery’s Intelligent User Interfaces (IUI) conference. Kapur is first author on the paper, Maes is the senior author, and they’re joined by Shreyas Kapur, an undergraduate majoring in electrical engineering and computer science.

Subtle signals

The idea that internal verbalizations have physical correlates has been around since the 19th century, and it was seriously investigated in the 1950s. One of the goals of the speed-reading movement of the 1960s was to eliminate internal verbalization, or “subvocalization,” as it’s known.

But subvocalization as a computer interface is largely unexplored. The researchers’ first step was to determine which locations on the face are the sources of the most reliable neuromuscular signals. So they conducted experiments in which the same subjects were asked to subvocalize the same series of words four times, with an array of 16 electrodes at different facial locations each time.

The researchers wrote code to analyze the resulting data and found that signals from seven particular electrode locations were consistently able to distinguish subvocalized words. In the conference paper, the researchers report a prototype of a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws.

But in current experiments, the researchers are getting comparable results using only four electrodes along one jaw, which should lead to a less obtrusive wearable device.
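The paper reports this selection empirically; conceptually, it amounts to scoring each electrode location by how well its signal alone distinguishes the subvocalized words and keeping the best-scoring locations. The sketch below illustrates that idea on synthetic stand-in features; it is not the MIT team's pipeline, and the feature counts are assumptions.

```python
# Sketch of per-electrode discriminability scoring on synthetic EMG-like features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_features, n_words = 200, 16, 8, 4
labels = rng.integers(0, n_words, size=n_trials)
# Shape: (trials, electrodes, features per electrode); a few electrodes carry word-related signal.
X = rng.normal(size=(n_trials, n_electrodes, n_features))
for informative in (2, 5, 7, 11):
    X[:, informative, 0] += labels  # inject word-dependent structure at these locations

scores = []
for e in range(n_electrodes):
    clf = LogisticRegression(max_iter=1000)
    scores.append(cross_val_score(clf, X[:, e, :], labels, cv=5).mean())

best = np.argsort(scores)[::-1][:7]  # keep the seven most discriminative locations
print("top electrodes:", best.tolist())
```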

Once they had selected the electrode locations, the researchers began collecting data on a few computational tasks with limited vocabularies — about 20 words each. One was arithmetic, in which the user would subvocalize large addition or multiplication problems; another was the chess application, in which the user would report moves using the standard chess numbering system.

Then, for each application, they used a neural network to find correlations between particular neuromuscular signals and particular words. Like most neural networks, the one the researchers used is arranged into layers of simple processing nodes, each of which is connected to several nodes in the layers above and below. Data are fed into the bottom layer, whose nodes process them and pass them to the next layer, which processes and passes them on in turn, and so on. The output of the final layer is the result of the classification task.

The basic configuration of the researchers’ system includes a neural network trained to identify subvocalized words from neuromuscular signals, but it can be customized to a particular user through a process that retrains just the last two layers.
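The paper does not publish the model itself, but the customization step it describes, retraining only the last two layers on a new user's data, is standard fine-tuning. A minimal sketch might look like the following, assuming a small feedforward classifier over EMG feature vectors and a 20-word vocabulary; the layer sizes and feature dimensions are assumptions, not the authors' architecture.

```python
# Minimal fine-tuning sketch: freeze all layers except the last two, then retrain on user data.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(112, 256), nn.ReLU(),   # 112 = e.g. 7 electrodes x 16 features (assumed)
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64),  nn.ReLU(),   # the last two trainable layers start here
    nn.Linear(64, 20),                # 20-word vocabulary
)

# Freeze everything, then unfreeze the last two Linear layers for per-user customization.
for p in model.parameters():
    p.requires_grad = False
for layer in (model[4], model[6]):
    for p in layer.parameters():
        p.requires_grad = True

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on synthetic per-user calibration data.
x = torch.randn(32, 112)              # a batch of EMG feature vectors
y = torch.randint(0, 20, (32,))       # word labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```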

Practical matters

Using the prototype wearable interface, the researchers conducted a usability study in which 10 subjects spent about 15 minutes each customizing the arithmetic application to their own neurophysiology, then spent another 90 minutes using it to execute computations. In that study, the system had an average transcription accuracy of about 92 percent.

But, Kapur says, the system’s performance should improve with more training data, which could be collected during its ordinary use. Although he hasn’t crunched the numbers, he estimates that the better-trained system he uses for demonstrations has an accuracy rate higher than that reported in the usability study.

In ongoing work, the researchers are collecting a wealth of data on more elaborate conversations, in the hope of building applications with much more expansive vocabularies. “We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.”

“I think that they’re a little underselling what I think is a real potential for the work,” says Thad Starner, a professor in Georgia Tech’s College of Computing. “Like, say, controlling the airplanes on the tarmac at Hartsfield Airport here in Atlanta. You’ve got jet noise all around you, you’re wearing these big ear-protection things — wouldn’t it be great to communicate with voice in an environment where you normally wouldn’t be able to? You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press. This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you’re a fighter pilot, or if you’re a firefighter, you’re already wearing these masks.”

“The other thing where this is extremely useful is special ops,” Starner adds. “There’s a lot of places where it’s not a noisy environment but a silent environment. A lot of time, special-ops folks have hand gestures, but you can’t always see those. Wouldn’t it be great to have silent speech for communication between these folks? The last one is people who have disabilities where they can’t vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because he lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?”

By Brandon Specktor

Imagine your least-favorite world leader. (Take as much time as you need.)

Now, imagine if that person wasn’t a human, but a network of millions of computers around the world. This digi-dictator has instant access to every scrap of recorded information about every person who’s ever lived. It can make millions of calculations in a fraction of a second, controls the world’s economy and weapons systems with godlike autonomy and — scariest of all — can never, ever die.

This unkillable digital dictator, according to Tesla and SpaceX founder Elon Musk, is one of the darker scenarios awaiting humankind’s future if artificial-intelligence research continues without serious regulation.

“We are rapidly headed toward digital superintelligence that far exceeds any human, I think it’s pretty obvious,” Musk said in a new AI documentary called “Do You Trust This Computer?” directed by Chris Paine (who interviewed Musk previously for the documentary “Who Killed The Electric Car?”). “If one company or a small group of people manages to develop godlike digital super-intelligence, they could take over the world.”

Humans have tried to take over the world before. However, an authoritarian AI would have one terrible advantage over like-minded humans, Musk said.

“At least when there’s an evil dictator, that human is going to die,” Musk added. “But for an AI there would be no death. It would live forever, and then you’d have an immortal dictator, from which we could never escape.”

And, this hypothetical AI-dictator wouldn’t even have to be evil to pose a threat to humans, Musk added. All it has to be is determined.

“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it. No hard feelings,” Musk said. “It’s just like, if we’re building a road, and an anthill happens to be in the way. We don’t hate ants, we’re just building a road. So, goodbye, anthill.”

Those who follow news from the Musk-verse will not be surprised by his opinions in the new documentary; the tech mogul has long been a vocal critic of unchecked artificial intelligence. In 2014, Musk called AI humanity’s “biggest existential threat,” and in 2015, he joined a handful of other tech luminaries and researchers, including Stephen Hawking, to urge the United Nations to ban killer robots. He has said unregulated AI poses “vastly more risk than North Korea” and proposed starting some sort of federal oversight program to monitor the technology’s growth.

“Public risks require public oversight,” he tweeted. “Getting rid of the FAA [wouldn’t] make flying safer. They’re there for good reason.”

https://www.livescience.com/62239-elon-musk-immortal-artificial-intelligence-dictator.html?utm_source=notification