Google’s AI translation tool seems to have invented its own secret internal language

Google’s AI computers have created their own secret language, a fascinating and existentially challenging development.

In September, Google announced that its Neural Machine Translation system had gone live. It uses deep learning to produce better, more natural translations between languages.

Following this success, GNMT’s creators were curious about something. If you teach the translation system to translate English to Korean and vice versa, and also English to Japanese and vice versa… could it translate Korean to Japanese without resorting to English as a bridge between them?

This is called zero-shot translation.

Indeed, Google’s AI has evolved to produce reasonable translations between two languages that it has not explicitly linked in any way.
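
To make the setup concrete: Google’s researchers have described training a single model on all the language pairs at once, prepending a token that names the desired target language to each input sentence. The Python sketch below only illustrates that idea; the example data is made up and hypothetical_model is a placeholder, not the real GNMT system.

    def make_example(source_sentence, target_language):
        # "<2ja>" means "translate into Japanese"; the source language is never
        # stated explicitly -- the model has to infer it from the text itself.
        return "<2" + target_language + "> " + source_sentence

    # The training data only ever pairs English with Korean or Japanese.
    training_pairs = [
        (make_example("How are you?", "ko"), "잘 지내세요?"),
        (make_example("잘 지내세요?", "en"), "How are you?"),
        (make_example("How are you?", "ja"), "お元気ですか？"),
        (make_example("お元気ですか？", "en"), "How are you?"),
    ]

    # hypothetical_model.train(training_pairs)   # placeholder, not a real library call

    # Zero-shot request: Korean to Japanese, a pairing never seen during training.
    zero_shot_input = make_example("잘 지내세요?", "ja")
    # translation = hypothetical_model.translate(zero_shot_input)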

But this raised a second question. If the computer is able to make connections between concepts and words that have not been formally linked… does that mean that the computer has formed a concept of shared meaning for those words, meaning at a deeper level than simply that one word or phrase is the equivalent of another?

In other words, has the computer developed its own internal language to represent the concepts it uses to translate between other languages? Based on how various sentences are related to one another in the memory space of the neural network, Google’s language and AI boffins think that it has.

This “interlingua” seems to exist as a deeper level of representation that captures similarities between sentences and words across all three languages. Beyond that, it’s hard to say, since the inner processes of complex neural networks are infamously difficult to describe.

It could be something sophisticated, or it could be something simple. But the fact that it exists at all — an original creation of the system’s own to aid in its understanding of concepts it has not been trained to understand — is, philosophically speaking, pretty powerful stuff.


Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

‘Brain wi-fi’ shown to reverse leg paralysis in a primate

By James Gallagher

An implant that beams instructions out of the brain has been used to restore movement in paralysed primates for the first time, say scientists.

Rhesus monkeys were paralysed in one leg due to a damaged spinal cord. The team at the Swiss Federal Institute of Technology bypassed the injury by sending the instructions straight from the brain to the nerves controlling leg movement. Experts said the technology could be ready for human trials within a decade.

Spinal-cord injuries block the flow of electrical signals from the brain to the rest of the body, resulting in paralysis. It is a wound that rarely heals, but one potential solution is to use technology to bypass the injury.

In the study, a chip was implanted into the part of the monkeys’ brains that controls movement. Its job was to read the spikes of electrical activity that are the instructions for moving the legs and send them to a nearby computer, which deciphered the messages and sent instructions to an implant in the monkey’s spine to electrically stimulate the appropriate nerves. The whole process takes place in real time. The results, published in the journal Nature, showed the monkeys regained some control of their paralysed leg within six days and could walk in a straight line on a treadmill.
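
Reduced to its bare logic, the loop described above looks something like the Python sketch below. Every name in it (read_spikes, decode_leg_command, stimulate) is a hypothetical placeholder chosen for illustration, not the Lausanne team’s actual software.

    import time

    def brain_spine_interface(brain_implant, spinal_implant, decoder):
        # Conceptual loop only: read motor-cortex spikes, decode the intended leg
        # movement, and trigger the matching stimulation pattern in the spine.
        while True:
            spikes = brain_implant.read_spikes()
            intent = decoder.decode_leg_command(spikes)   # e.g. "flex" or "extend"
            if intent is not None:
                spinal_implant.stimulate(pattern=intent)  # drive the leg nerves
            time.sleep(0.01)   # keep the loop fast enough to feel like real time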

Dr Gregoire Courtine, one of the researchers, said: “This is the first time that a neurotechnology has restored locomotion in primates.” He told the BBC News website: “The movement was close to normal for the basic walking pattern, but so far we have not been able to test the ability to steer.” The technology used to stimulate the spinal cord is the same as that used in deep brain stimulation to treat Parkinson’s disease, so it would not be a technological leap to doing the same tests in patients. “But the way we walk is different to primates, we are bipedal and this requires more sophisticated ways to stimulate the muscle,” said Dr Courtine.

Jocelyne Bloch, a neurosurgeon from Lausanne University Hospital, said: “The link between decoding of the brain and the stimulation of the spinal cord is completely new. For the first time, I can imagine a completely paralysed patient being able to move their legs through this brain-spine interface.”

Using technology to overcome paralysis is a rapidly developing field:
Brainwaves have been used to control a robotic arm
Electrical stimulation of the spinal cord has helped four paralysed people stand again
An implant has helped a paralysed man play a guitar-based computer game

Dr Mark Bacon, the director of research at the charity Spinal Research, said: “This is quite impressive work. Paralysed patients want to be able to regain real control, that is voluntary control of lost functions, like walking, and the use of implantable devices may be one way of achieving this. The current work is a clear demonstration that there is progress being made in the right direction.”

Dr Andrew Jackson, from the Institute of Neuroscience at Newcastle University, said: “It is not unreasonable to speculate that we could see the first clinical demonstrations of interfaces between the brain and spinal cord by the end of the decade.” However, he said, rhesus monkeys used all four limbs to move and only one leg had been paralysed, so it would be a greater challenge to restore the movement of both legs in people. “Useful locomotion also requires control of balance, steering and obstacle avoidance, which were not addressed,” he added.

The other approach to treating paralysis involves transplanting cells from the nasal cavity into the spinal cord to try to biologically repair the injury. Following this treatment, Darek Fidyka, who was paralysed from the chest down in a knife attack in 2010, can now walk using a frame.

Neither approach is ready for routine use.

http://www.bbc.com/news/health-37914543

Thanks to Kebmodee for bringing this to the It’s Interesting community.

US military enhancing human skills with electrical brain stimulation


Study paves way for personnel such as drone operators to have electrical pulses sent into their brains to improve effectiveness in high pressure situations.

US military scientists have used electrical brain stimulators to enhance mental skills of staff, in research that aims to boost the performance of air crews, drone operators and others in the armed forces’ most demanding roles.

The successful tests of the devices pave the way for servicemen and women to be wired up at critical times of duty, so that electrical pulses can be beamed into their brains to improve their effectiveness in high pressure situations.

The brain stimulation kits use five electrodes to send weak electric currents through the skull and into specific parts of the cortex. Previous studies have found evidence that by helping neurons to fire, these minor brain zaps can boost cognitive ability.

The technology is seen as a safer alternative to prescription drugs, such as modafinil and Ritalin, both of which have been used off-label as performance-enhancing drugs in the armed forces.

But while electrical brain stimulation appears to have no harmful side effects, some experts say its long-term safety is unknown, and raise concerns about staff being forced to use the equipment if it is approved for military operations.

Others are worried about the broader implications of the science on the general workforce because of the advance of an unregulated technology.

In a new report, scientists at Wright-Patterson Air Force Base in Ohio describe how the performance of military personnel can slump soon after they start work if the demands of the job become too intense.

“Within the air force, various operations such as remotely piloted and manned aircraft operations require a human operator to monitor and respond to multiple events simultaneously over a long period of time,” they write. “With the monotonous nature of these tasks, the operator’s performance may decline shortly after their work shift commences.”


But in a series of experiments at the air force base, the researchers found that electrical brain stimulation can improve people’s multitasking skills and stave off the drop in performance that comes with information overload. Writing in the journal Frontiers in Human Neuroscience, they say that the technology, known as transcranial direct current stimulation (tDCS), has a “profound effect”.

For the study, the scientists had men and women at the base take a test developed by Nasa to assess multitasking skills. The test requires people to keep a crosshair inside a moving circle on a computer screen, while constantly monitoring and responding to three other tasks on the screen.

To investigate whether tDCS boosted people’s scores, half of the volunteers had a constant two milliamp current beamed into the brain for the 36-minute-long test. The other half formed a control group and had only 30 seconds of stimulation at the start of the test.

According to the report, the brain stimulation group started to perform better than the control group four minutes into the test. “The findings provide new evidence that tDCS has the ability to augment and enhance multitasking capability in a human operator,” the researchers write. Larger studies must now look at whether the improvement in performance is real and, if so, how long it lasts.

The tests are not the first to claim beneficial effects from electrical brain stimulation. Last year, researchers at the same US facility found that tDCS seemed to work better than caffeine at keeping military target analysts vigilant after long hours at the desk. Brain stimulation has also been tested for its potential to help soldiers spot snipers more quickly in VR training programmes.

Neil Levy, deputy director of the Oxford Centre for Neuroethics, said that compared with prescription drugs, electrical brain stimulation could actually be a safer way to boost the performance of those in the armed forces. “I have more serious worries about the extent to which participants can give informed consent, and whether they can opt out once it is approved for use,” he said. “Even for those jobs where attention is absolutely critical, you want to be very careful about making it compulsory, or there being a strong social pressure to use it, before we are really sure about its long-term safety.”

But while the devices may be safe in the hands of experts, the technology is freely available, because the sale of brain stimulation kits is unregulated. They can be bought on the internet or assembled from simple components, which raises a greater concern, according to Levy. Young people whose brains are still developing may be tempted to experiment with the devices, and to try higher currents than those used in laboratories. “If you use high currents you can damage the brain,” he said.

In 2014 another Oxford scientist, Roi Cohen Kadosh, warned that while brain stimulation could improve performance at some tasks, it made people worse at others. In light of the work, Cohen Kadosh urged people not to use brain stimulators at home.

If the technology is proved safe in the long run though, it could help those who need it most, said Levy. “It may have a levelling-up effect, because it is cheap and enhancers tend to benefit the people that perform less well,” he said.

https://www.theguardian.com/science/2016/nov/07/us-military-successfully-tests-electrical-brain-stimulation-to-enhance-staff-skills

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Google’s AI DeepMind to get smarter by taking on video game Starcraft II

by Jeremy Kahn

Google’s DeepMind AI unit, which earlier this year achieved a breakthrough in computer intelligence by creating software that beat the world’s best human player at the strategy game Go, is turning its attention to the sci-fi video game Starcraft II.

The company said it had reached a deal with Blizzard Entertainment Inc., the Irvine, California-based division of Activision Blizzard, which makes the Starcraft game series, to create an interface to let artificial intelligence researchers connect machine-learning software to the game.

London-based DeepMind, which Google purchased in 2014, has not said it has created software that can play Starcraft expertly — at least not yet. “We’re still a long way from being able to challenge a professional human player,” DeepMind research scientist Oriol Vinyals said in a blog post Friday. But the company’s announcement shows it’s looking seriously at Starcraft as a candidate for a breakthrough in machine intelligence.

Starcraft fascinates artificial intelligence researchers because it comes closer to simulating “the messiness of the real world” than games like chess or Go, Vinyals said. “An agent that can play Starcraft will need to demonstrate effective use of memory, an ability to plan over a long time and the capacity to adapt plans to new information,” he said, adding that techniques required to create a machine-learning system that mastered these skills in order to play Starcraft “could ultimately transfer to real-world tasks.”

Virtual Mining

In the game, which is played in real time over the internet, players choose one of three character types, each of which has distinct strengths and weaknesses. Players must run an in-game economy, discovering and mining minerals and other commodities in order to conquer new territory. A successful player needs to remember large volumes of information about places they’ve scouted in the past, even when those places are not immediately observable on their screen.

The player’s view of what an opposing player is doing is limited — unlike chess or Go, where opponents can observe the whole board at one time. Furthermore, unlike in a game where players take turns, a machine-learning system has to deal with an environment that is constantly in flux. Starcraft in particular also requires an ability to both plan a long-term strategy and make very quick tactical decisions to stay ahead of an opponent — and designing software that is good at both types of decision-making is difficult.
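
As a rough illustration of why that is hard, the sketch below shows the kind of agent loop such an interface might expose: the agent only ever receives a partial observation, the world keeps moving between decisions, and any long-term plan has to live in the agent’s own memory. The names here (env, agent, act, step) are hypothetical, not Blizzard’s or DeepMind’s actual interface.

    def play_episode(env, agent):
        # The agent never sees the whole map: each observation is only what is
        # currently visible, so it must keep its own record of scouted areas.
        observation = env.reset()
        memory = {}
        done = False
        total_reward = 0
        while not done:
            action = agent.act(observation, memory)        # fast tactical choice
            observation, reward, done = env.step(action)   # the world keeps moving
            agent.update(memory, observation, reward)      # revise the long-term plan
            total_reward += reward
        return total_reward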

Facebook, Microsoft

Researchers at Facebook Inc. and Microsoft Corp. have also published papers on ways to interface artificial intelligence systems with earlier versions of Starcraft. And some Starcraft-playing bots have already been created, but so far these systems have not been able to defeat talented human players.

Microsoft Chief Executive Officer Satya Nadella has taken swipes at Google’s focus on games in its AI research, telling the audience at a company event in Atlanta in September that Microsoft was “not pursuing AI to beat humans at games” and that Microsoft wanted to build AI “to solve the most pressing problems of our society and economy.”

Games have long served as important tests and milestones for artificial intelligence research. In the mid-1990s, International Business Machines Corp.’s supercomputer Deep Blue defeated world chess champion Garry Kasparov on several occasions. IBM’s Watson artificial intelligence beat top human players in the game show Jeopardy in 2011, an achievement that showcased IBM’s strides in natural language processing. In 2015, DeepMind developed machine learning software that taught itself how to play dozens of retro Atari games, such as Breakout, as well as or better than a human. Then, in March of 2016, DeepMind’s AlphaGo program, trained in a different way, defeated Go world champion Lee Sedol.

In the nearly two decades since Starcraft debuted, the game has acquired a massive and devoted following. More than 9.5 million copies of the original game were sold within the first decade of its release, with more than half of those sold in Korea, where the game was especially popular. Starcraft II shattered sales records for a strategy game when it was released in 2010, selling 1.5 million copies within 48 hours. Pitting two players against one another in real time, Starcraft was a pioneer in professional video game competitions and remains an important game in the world of e-sports, although its prominence has since been eclipsed by other games.

http://www.detroitnews.com/story/business/2016/11/05/deepmind-master-go-takes-video-game-starcraft/93370028/

Paralyzed man’s robotic arm gains a sense of touch and shakes Obama’s hand

by Lorenzo Tanos

The mind-controlled robotic arm of Pennsylvania man Nathan Copeland hasn’t just gained a sense of touch. It also got to shake the hand of the U.S. President himself, Barack Obama.

Copeland, 30, was part of a groundbreaking research project involving researchers from the University of Pittsburgh and the University of Pittsburgh Medical Center. In this experiment, Copeland’s brain was implanted with microscopic electrodes — a report from the Washington Post describes the tiny electrodes as being “smaller than a grain of sand.” With the electrodes implanted into the cortex of his brain, they then interacted with his robotic arm. This allowed Copeland to gain some feeling in his paralyzed right hand’s fingers, as the process worked around the spinal cord damage that robbed him of the sense of touch.

More than a decade had passed since Copeland, then a college student in his teens, had suffered his injuries in a car accident. The wreck had resulted in tetraplegia, or the paralysis of both arms and legs, though it didn’t completely rob the Western Pennsylvania resident of the ability to move his shoulders. He then volunteered in 2011 for the University of Pittsburgh Medical Center project, a broader research initiative with the goal of helping paralyzed individuals feel again. The Washington Post describes this process as something “even more difficult” than helping these people move again.

For Nathan Copeland, the robotic arm experiment has proven to be a success, as he’s regained the ability to feel most of his fingers. He told the Washington Post on Wednesday that the type of feeling does differ at times, but he can “tell most of the fingers with definite precision.” Likewise, UPMC biomedical engineer Robert Gaunt told the publication that he felt “relieved” that the project allowed Copeland to feel parts of the hand that had no feeling for the past 10 years.

Prior to this experiment, mind-controlled robotic arm capabilities were already quite impressive, but lacking one key ingredient – the sense of touch. These prosthetics allowed people to move objects around, but since the individuals using the arms didn’t have working peripheral nerve systems, they couldn’t feel the sense of touch, and movements with the robotic limbs were typically mechanical in nature. But that’s not the case with Nathan Copeland, according to UPMC’s Gaunt.

“With Nathan, he can control a prosthetic arm, do a handshake, fist bump, move objects around,” Gaunt observed. “And in this (study), he can experience sensations from his own hand. Now we want to put those two things together so that when he reaches out to grasp an object, he can feel it. … He can pick something up that’s soft and not squash it or drop it.”

But it wasn’t just ordinary handshakes that Copeland was sharing on Thursday. On that day, he had exchanged a handshake and fist bump with President Barack Obama, who was in Pittsburgh for a White House Frontiers Conference. And Obama appeared to be suitably impressed with what Gaunt and his team had achieved, as it allowed Copeland’s robotic arm and hand to have “pretty impressive” precision.

“When I’m moving the hand, it is also sending signals to Nathan so he is feeling me touching or moving his arm,” said Obama.

Unfortunately, Copeland won’t be able to go home with his specialized prosthesis. In a report from the Associated Press, he said that the experiment mainly amounts to having “done some cool stuff with some cool people.” But he nonetheless remains hopeful, as he believes that his experience with the robotic arm will mark some key advances in the quest to make paralyzed people regain their natural sense of touch.

Read more at http://www.inquisitr.com/3599638/paralyzed-mans-robotic-arm-gets-to-feel-again-shakes-obamas-hand/#xVzFDHGXukJWBV05.99

Tech billionaires who think we’re living in a computer simulation run by an advanced civilization are secretly funding a way out

By Owen Hughes

Technology moguls convinced that we are all living in a Matrix-like simulation are secretly bankrolling efforts to help us break free of it, according to a new report. It’s alleged that two Silicon Valley billionaires are funding work by scientists on proving the simulation hypothesis, a theory backed by SpaceX CEO Elon Musk.

The simulation hypothesis is based on the idea that humans are not living in reality at all, and are instead a product of a simulation being run by an extremely advanced post-human civilisation. Much like in the Matrix, this simulation is so sophisticated that humans aren’t even aware they are living in it.

It seems like a far-fetched notion, but it’s one that’s held in increasing regard in the wake of recent technological leaps in computing power and artificial intelligence. According to the New Yorker, some of tech’s top minds are so convinced by this theory that they are now funding a solution – though exactly what this would look like is unclear.

“Many people in Silicon Valley have become obsessed with the simulation hypothesis, the argument that what we experience as reality is in fact fabricated in a computer,” reports the New Yorker. “Two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation.”

The comments were made by author Tad Friend within a profile piece on Sam Altman, CEO of Y Combinator. Neither of the two billionaires was named, although one prominent figure who has been vocal on the subject is Elon Musk.

Musk has previously suggested that given the rate of progress in 3D graphics, at some point in the future, video games will be indistinguishable from reality. Thus, it would be impossible to tell if we had already advanced to that point and are now living through a simulation.

In fact, Musk believes that the chance we humans are living in the “base reality” – that is, the true reality – is “one in billions”.

“The strongest argument for us probably being in a simulation is that 40 years ago we had Pong, two rectangles and a dot,” he told a Recode conference in June. “That was what games were. Now, 40 years later, we have photorealistic 3D simulations with millions of people playing simultaneously and it’s getting better every year, and soon we’ll have virtual reality.

“So given that we’re clearly on a trajectory to have games that are indistinguishable from reality… and there would probably be billions of computers, it would seem to follow that the odds we are in base reality is one in billions.”

In the New Yorker piece, Altman also touched on the threats posed by artificial intelligence, suggesting that the human race might be able to avoid a doomsday scenario by merging itself with machines.

“Any version without a merge will have conflict: we enslave the AI or it enslaves us,” said Altman. “The full-on-crazy version of the merge is we get our brains uploaded into the cloud. We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever.”

http://www.ibtimes.co.uk/take-red-pill-tech-billionaires-who-think-were-living-matrix-are-secretly-funding-way-out-1585315

2 men fall off cliff playing Pokemon Go

Two men in their early 20s fell an estimated 50 to 90 feet down a cliff in Encinitas, California, on Wednesday afternoon while playing “Pokémon Go,” San Diego County Sheriff’s Department Sgt. Rich Eaton said. The men sustained injuries, although the extent is not clear.

Pokémon Go is a free-to-play app that gets users up and moving in the real world to capture fictional “pocket monsters” known as Pokémon. The goal is to capture as many of the more than a hundred species of animated Pokémon as you can.

Apparently it wasn’t enough that the app warns users to stay aware of surroundings or that signs posted on a fence near the cliff said “No Trespassing” and “Do Not Cross.” When firefighters arrived at the scene, one of the men was at the bottom of the cliff while the other was three-quarters of the way down and had to be hoisted up, Eaton said.

Both men were transported to Scripps Memorial Hospital La Jolla. They were not charged with trespassing.

Eaton encourages players to be careful. “It’s not worth life or limb,” he said.

In parts of San Diego County, there are warning signs for gamers not to play while driving. San Diego Gas and Electric tweeted a warning to stay away from electric lines and substations when catching Pokémon.

This is the latest among many unexpected situations gamers have found themselves in, despite the game being released just more than a week ago. In one case, armed robbers lured lone players of the wildly popular augmented reality game to isolated locations. In another case, the game led a teen to discover a dead body.

http://www.cnn.com/2016/07/15/health/pokemon-go-players-fall-down-cliff/index.html

Why you should believe in the digital afterlife

by Michael Graziano

Imagine scanning your Grandma’s brain in sufficient detail to build a mental duplicate. When she passes away, the duplicate is turned on and lives in a simulated video-game universe, a digital Elysium complete with Bingo, TV soaps, and knitting needles to keep the simulacrum happy. You could talk to her by phone just like always. She could join Christmas dinner by Skype. E-Granny would think of herself as the same person that she always was, with the same memories and personality—the same consciousness—transferred to a well-regulated nursing home and able to offer her wisdom to her offspring forever after.

And why stop with Granny? You could have the same afterlife for yourself in any simulated environment you like. But even if that kind of technology is possible, and even if that digital entity thought of itself as existing in continuity with your previous self, would you really be the same person?

Is it even technically possible to duplicate yourself in a computer program? The short answer is: probably, but not for a while.

Let’s examine the question carefully by considering how information is processed in the brain, and how it might be translated to a computer.

The first person to grasp the information-processing fundamentals of the brain was the great Spanish neuroscientist Santiago Ramón y Cajal, who won the 1906 Nobel Prize in Physiology or Medicine. Before Cajal, the brain was thought to be made of microscopic strands connected in a continuous net or ‘reticulum.’ According to that theory, the brain was different from every other biological thing because it wasn’t made of separate cells. Cajal used new methods of staining brain samples to discover that the brain did have separate cells, which he called neurons. The neurons had long thin strands mixing together like spaghetti—dendrites and axons that presumably carried signals. But when he traced the strands carefully, he realized that one neuron did not grade into another. Instead, neurons contacted each other through microscopic gaps—synapses.

Cajal guessed that the synapses must regulate the flow of signals from neuron to neuron. He developed the first vision of the brain as a device that processes information, channeling signals and transforming inputs into outputs. That realization, the so-called neuron doctrine, is the foundational insight of neuroscience. The last hundred years have been dedicated more or less to working out the implications of the neuron doctrine.

It’s now possible to simulate networks of neurons on a microchip and the simulations have extraordinary computing capabilities. The principle of a neural network is that it gains complexity by combining many simple elements. One neuron takes in signals from many other neurons. Each incoming signal passes over a synapse that either excites the receiving neuron or inhibits it. The neuron’s job is to sum up the many thousands of yes and no votes that it receives every instant and compute a simple decision. If the yes votes prevail, it triggers its own signal to send on to yet other neurons. If the no votes prevail, it remains silent. That elemental computation, as trivial as it sounds, can result in organized intelligence when compounded over enough neurons connected in enough complexity.
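
That elemental computation can be written in a few lines. The sketch below is a bare-bones caricature of a single artificial neuron, not a model of a real biological one.

    def neuron_fires(inputs, weights, threshold=0.0):
        # Excitatory synapses carry positive weights, inhibitory ones negative;
        # the neuron "counts votes" by summing, and fires only if the total
        # beats its threshold.
        total = sum(signal * weight for signal, weight in zip(inputs, weights))
        return total > threshold   # True: pass a signal on; False: stay silent

    # Two excitatory inputs outvote one inhibitory input, so the neuron fires.
    print(neuron_fires(inputs=[1, 1, 1], weights=[0.6, 0.5, -0.4]))   # True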

The trick is to get the right pattern of synaptic connections between neurons. Artificial neural networks are programmed to adjust their synapses through experience. You give the network a computing task and let it try over and over. Every time it gets closer to a good performance, you give it a reward signal or an error signal that updates its synapses. Based on a few simple learning rules, each synapse changes gradually in strength. Over time, the network shapes up until it can do the task. That deep learning, as it’s sometimes called, can result in machines that develop spooky, human-like abilities such as face recognition and voice recognition. This technology is already all around us in Siri and in Google.
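
A minimal sketch of that kind of gradual adjustment is shown below. It uses a perceptron-style update rule, which is far simpler than the deep-learning methods actually running inside systems like Siri, but it captures the idea of an error signal slowly reshaping the synaptic weights until the task is solved.

    def train(examples, weights, learning_rate=0.1, epochs=50):
        # After every attempt, an error signal nudges each synaptic weight a
        # little in the direction that would have reduced the mistake.
        for _ in range(epochs):
            for inputs, target in examples:
                output = 1 if sum(x * w for x, w in zip(inputs, weights)) > 0 else 0
                error = target - output
                weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        return weights

    # Learn an AND-like task: fire only when both inputs are on (third input is a bias).
    examples = [([1, 1, 1], 1), ([1, 0, 1], 0), ([0, 1, 1], 0), ([0, 0, 1], 0)]
    print(train(examples, weights=[0.0, 0.0, 0.0]))

In real deep-learning systems the update is computed by backpropagation over millions of weights rather than this hand-rolled rule, but the principle of gradual, error-driven adjustment is the same.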

But can the technology be scaled up to preserve someone’s consciousness on a computer? The human brain has about a hundred billion neurons. The connectional complexity is staggering. By some estimates, the human brain compares to the entire content of the internet. It’s only a matter of time, however, and not very much at that, before computer scientists can simulate a hundred billion neurons. Many startups and organizations, such as the Human Brain Project in Europe, are working full-tilt toward that goal. The advent of quantum computing will speed up the process considerably. But even when we reach that threshold where we are able to create a network of a hundred billion artificial neurons, how do we copy your special pattern of connectivity?

No existing scanner can measure the pattern of connectivity among your neurons, or connectome, as it’s called. MRI machines scan at about a millimeter resolution, whereas synapses are only a few microns across. We could kill you and cut up your brain into microscopically thin sections. Then we could try to trace the spaghetti tangle of dendrites, axons, and their synapses. But even that less-than-enticing technology is not yet scalable. Scientists like Sebastian Seung have plotted the connectome in a small piece of a mouse brain, but we are decades away, at least, from technology that could capture the connectome of the human brain.

Assuming we are one day able to scan your brain and extract your complete connectome, we’ll hit the next hurdle. In an artificial neural network, all the neurons are identical. They vary only in the strength of their synaptic interconnections. That regularity is a convenient engineering approach to building a machine. In the real brain, however, every neuron is different. To give a simple example, some neurons have thick, insulated cables that send information at a fast rate. You find these neurons in parts of the brain where timing is critical. Other neurons sprout thinner cables and transmit signals at a slower rate. Some neurons don’t even fire off signals—they work by a subtler, sub-threshold change in electrical activity. All of these neurons have different temporal dynamics.

The brain also uses hundreds of different kinds of synapses. As I noted above, a synapse is a microscopic gap between neurons. When neuron A is active, the electrical signal triggers a spray of chemicals—neurotransmitters—which cross the synapse and are picked up by chemical receptors on neuron B. Different synapses use different neurotransmitters, which have wildly different effects on the receiving neuron, and are re-absorbed after use at different rates. These subtleties matter. The smallest change to the system can have profound consequences. For example, Prozac works on people’s moods because it subtly adjusts the way particular neurotransmitters are reabsorbed after being released into synapses.

Although Cajal didn’t realize it, some neurons actually do connect directly, membrane to membrane, without a synaptic space between. These connections, called gap junctions, work more quickly than the regular kind and seem to be important in synchronizing the activity across many neurons.

Other neurons act like a gland. Instead of sending a precise signal to specific target neurons, they release a chemical soup that spreads and affects a larger area of the brain over a longer time.

I could go on with the biological complexity. These are just a few examples.

A student of artificial intelligence might argue that these complexities don’t matter. You can build an intelligent machine with simpler, more standard elements, ignoring the riot of biological complexity. And that is probably true. But there is a difference between building artificial intelligence and recreating a specific person’s mind.

If you want a copy of your brain, you will need to copy its quirks and complexities, which define the specific way you think. A tiny maladjustment in any of these details can result in epilepsy, hallucinations, delusions, depression, anxiety, or just plain unconsciousness. The connectome by itself is not enough. If your scan could determine only which neurons are connected to which others, and you re-created that pattern in a computer, there’s no telling what Frankensteinian, ruined, crippled mind you would create.

To copy a person’s mind, you wouldn’t need to scan anywhere near the level of individual atoms. But you would need a scanning device that can capture what kind of neuron, what kind of synapse, how large or active of a synapse, what kind of neurotransmitter, how rapidly the neurotransmitter is being synthesized and how rapidly it can be reabsorbed. Is that impossible? No. But it starts to sound like the tech is centuries in the future rather than just around the corner.

Even if we get there quicker, there is still another hurdle. Let’s suppose we have the technology to make a simulation of your brain. Is it truly conscious, or is it merely a computer crunching numbers in imitation of your behavior?

A half-dozen major scientific theories of consciousness have been proposed. In all of them, if you could simulate a brain on a computer, the simulation would be as conscious as you are. In the Attention Schema Theory, consciousness depends on the brain computing a specific kind of self-descriptive model. Since this explanation of consciousness depends on computation and information, it would translate directly to any hardware including an artificial one.

In another approach, the Global Workspace Theory, consciousness ignites when information is combined and shared globally around the brain. Again, the process is entirely programmable. Build that kind of global processing network, and it will be conscious.

In yet another theory, the Integrated Information Theory, consciousness is a side product of information. Any computing device that has a sufficient density of information, even an artificial device, is conscious.

Many other scientific theories of consciousness have been proposed, beyond the three mentioned here. They are all different from each other and nobody yet knows which one is correct. But in every theory grounded in neuroscience, a computer-simulated brain would be conscious. In some mystical theories and theories that depend on a loose analogy to quantum mechanics, consciousness would be more difficult to create artificially. But as a neuroscientist, I am confident that if we ever could scan a person’s brain in detail and simulate that architecture on a computer, then the simulation would have a conscious experience. It would have the memories, personality, feelings, and intelligence of the original.

And yet, that doesn’t mean we’re out of the woods. Humans are not brains in vats. Our cognitive and emotional experience depends on a brain-body system embedded in a larger environment. This relationship between brain function and the surrounding world is sometimes called “embodied cognition.” The next task therefore is to simulate a realistic body and a realistic world in which to embed the simulated brain. In modern video games, the bodies are not exactly realistic. They don’t have all the right muscles, the flexibility of skin, or the fluidity of movement. Even though some of them come close, you wouldn’t want to live forever in a World of Warcraft skin. But the truth is, a body and world are the easiest components to simulate. We already have the technology. It’s just a matter of allocating enough processing power.

In my lab, a few years ago, we simulated a human arm. We included the bone structure, all the fifty or so muscles, the slow twitch and fast twitch fibers, the tendons, the viscosity, the forces and inertia. We even included the touch receptors, the stretch receptors, and the pain receptors. We had a working human arm in digital format on a computer. It took a lot of computing power, and on our tiny machines it couldn’t run in real time. But with a little more computational firepower and a lot bigger research team we could have simulated a complete human body in a realistic world.

Let’s presume that at some future time we have all the technological pieces in place. When you’re close to death we scan your details and fire up your simulation. Something wakes up with the same memories and personality as you. It finds itself in a familiar world. The rendering is not perfect, but it’s pretty good. Odors probably don’t work quite the same. The fine-grained details are missing. You live in a simulated New York City with crowds of fellow dead people but no rats or dirt. Or maybe you live in a rural setting where the grass feels like Astroturf. Or you live on the beach in the sun, and every year an upgrade makes the ocean spray seem a little less fake. There’s no disease. No aging. No injury. No death unless the operating system crashes. You can interact with the world of the living the same way you do now, on a smart phone or by email. You stay in touch with living friends and family, follow the latest elections, watch the summer blockbusters. Maybe you still have a job in the real world as a lecturer or a board director or a comedy writer. It’s like you’ve gone to another universe but still have contact with the old one.

But is it you? Did you cheat death, or merely replace yourself with a creepy copy?

I can’t pretend to have a definitive answer to this philosophical question. Maybe it’s a matter of opinion rather than anything testable or verifiable. To many people, uploading is simply not an afterlife. No matter how accurate the simulation, it wouldn’t be you. It would be a spooky fake.

My own perspective borrows from a basic concept in topology. Imagine a branching Y. You’re born at the bottom of the Y and your lifeline progresses up the stalk. The branch point is the moment your brain is scanned and the simulation has begun. Now there are two of you, a digital one (let’s say the left branch) and a biological one (the right branch). They both inherit the memories, personality, and identity of the stalk. They both think they’re you. Psychologically, they’re equally real, equally valid. Once the simulation is fired up, the branches begin to diverge. The left branch accumulates new experiences in a digital world. The right branch follows a different set of experiences in the physical world.

Is it all one person, or two people, or a real person and a fake one? All of those and none of those. It’s a Y.

The stalk of the Y, the part from before the split, gains immortality. It lives on in the digital you, just like your past self lives on in your present self. The right hand branch, the post-split biological branch, is doomed to die. That’s the part that feels gypped by the technology.

So let’s assume that those of us who live in biological bodies get over this injustice, and in a century or three we invent a digital afterlife. What could possibly go wrong?

Well, for one, there are limited resources. Simulating a brain is computationally expensive. As I noted before, by some estimates the amount of information in the entire internet at the present time is approximately the same as in a single human brain. Now imagine the resources required to simulate the brains of millions or billions of dead people. It’s possible that some future technology will allow for unlimited RAM and we’ll all get free service. The same way we’re arguing about health care now, future activists will chant, “The afterlife is a right, not a privilege!” But it’s more likely that a digital afterlife will be a gated community and somebody will have to choose who gets in. Is it the rich and politically connected who live on? Is it Trump? Is it biased toward one ethnicity? Do you get in for being a Nobel laureate, or for being a suicide bomber in somebody’s hideous war? Just think how coercive religion can be when it peddles the promise of an invisible afterlife that can’t be confirmed. Now imagine how much more coercive a demagogue would be if he could dangle the reward of an actual, verifiable afterlife. The whole thing is an ethical nightmare.

And yet I remain optimistic. Our species advances every time we develop a new way to share information. The invention of writing jump-started our advanced civilizations. The computer revolution and the internet are all about sharing information. Think about the quantum leap that might occur if instead of preserving words and pictures, we could preserve people’s actual minds for future generations. We could accumulate skill and wisdom like never before. Imagine a future in which your biological life is more like a larval stage. You grow up, learn skills and good judgment along the way, and then are inducted into an indefinite digital existence where you contribute to stability and knowledge. When all the ethical confusion settles, the benefits may be immense. No wonder people like Ray Kurzweil refer to this kind of technological advance as a singularity. We can’t even imagine how our civilization will look on the other side of that change.

http://www.theatlantic.com/science/archive/2016/07/what-a-digital-afterlife-would-be-like/491105/

Thanks to Dan Brat for bringing this to the It’s Interesting community.

Will machines one day control our decisions?

New research suggests it’s possible to detect when our brain is making a decision and nudge it to make the healthier choice.

In recording moment-to-moment deliberations by macaque monkeys over which option is likely to yield the most fruit juice, scientists have captured the dynamics of decision-making down to millisecond changes in neurons in the brain’s orbitofrontal cortex.

“If we can measure a decision in real time, we can potentially also manipulate it,” says senior author Jonathan Wallis, a neuroscientist and professor of psychology at the University of California, Berkeley. “For example, a device could be created that detects when an addict is about to choose a drug and instead bias their brain activity towards a healthier choice.”

Located behind the eyes, the orbitofrontal cortex plays a key role in decision-making and, when damaged, can lead to poor choices and impulsivity.

While previous studies have linked activity in the orbitofrontal cortex to making final decisions, this is the first to track the neural changes that occur during deliberations between different options.

“We can now see a decision unfold in real time and make predictions about choices,” Wallis says.

Measuring the signals from electrodes implanted in the monkeys’ brains, researchers tracked the primates’ neural activity as they weighed the pros and cons of images that delivered different amounts of juice.

A computational algorithm tracked the monkeys’ orbitofrontal activity as they looked from one image to another, determining which picture would yield the greater reward. The shifting brain patterns enabled researchers to predict which image the monkey would settle on.

For the experiment, they presented a monkey with a series of four different images of abstract shapes, each of which delivered to the monkey a different amount of juice. They used a pattern-recognition algorithm known as linear discriminant analysis to identify, from the pattern of neural activity, which picture the monkey was looking at.
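
To give a flavour of that decoding step, the sketch below runs scikit-learn’s LinearDiscriminantAnalysis on invented firing-rate data rather than the lab’s recordings; the numbers of neurons and trials are arbitrary, and this is not the researchers’ actual analysis pipeline.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    n_neurons, trials_per_image = 40, 50

    # Pretend each of the four images evokes its own average firing pattern, plus noise.
    image_means = rng.normal(size=(4, n_neurons))
    X = np.vstack([rng.normal(loc=image_means[i], scale=1.0,
                              size=(trials_per_image, n_neurons))
                   for i in range(4)])
    y = np.repeat(np.arange(4), trials_per_image)

    decoder = LinearDiscriminantAnalysis().fit(X, y)

    # A fresh snippet of activity can now be read out as, say, "image 2".
    new_activity = rng.normal(loc=image_means[2], scale=1.0, size=(1, n_neurons))
    print(decoder.predict(new_activity))   # most likely [2]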

Next, they presented the monkey with two of those same images, and watched the neural patterns switch back and forth to the point where the researchers could predict which image the monkey would choose based on the length of time that the monkey stared at the picture.

The more the monkey needed to think about the options, particularly when there was not much difference between the amounts of juice offered, the more the neural patterns would switch back and forth.

“Now that we can see when the brain is considering a particular choice, we could potentially use that signal to electrically stimulate the neural circuits involved in the decision and change the final choice,” Wallis says.

Erin Rich, a researcher at the Helen Wills Neuroscience Institute, is lead author of the study published in the journal Nature Neuroscience. The National Institute on Drug Abuse and the National Institute of Mental Health funded the work.

http://www.futurity.org/brains-decisions-1181542/

Bacteria can be turned into living hard drives


When scientists add code to bacterial DNA, it’s passed on to the next generation.

By Bryan Nelson

The way DNA stores genetic information is similar to the way a computer stores data. Now scientists have found a way to turn this from a metaphorical comparison into a literal one, by transforming living bacteria into hard drives, reports Popular Mechanics.

A team of Harvard scientists led by geneticists Seth Shipman and Jeff Nivala has devised a way to trick bacteria into copying computer code into the fabric of their DNA without interrupting normal cellular function. The bacteria even pass the information on to their progeny, thus ensuring that the information gets “backed up,” even when individual bacteria perish.

So far the technique can only upload about 100 bytes of data to the bacteria, but that’s enough to store a short script or perhaps a short poem — say, a haiku — into the genetics of a cell. For instance, here’s a haiku that would work:

Bacteria on your thumb
might someday become
a real thumb drive

As the method becomes more precise, it will be possible to encode longer strings of text into the fabric of life. Perhaps some day, the bacteria living all around us will also double as a sort of library that we can download.

The technique is based on manipulation of an immune response that exists in many bacteria known as the CRISPR/Cas system. How the system works is actually fairly simple: when bacteria encounter a threatening virus, they physically cut out a segment of the attacking virus’s DNA and paste it into a specific region of their own genome. The bacteria can then use this section of viral DNA to identify future virus encounters and rapidly mount a defense. Copying this immunity into their own genetic code allows the bacteria to pass it on to future generations.

To get the bacteria to copy strings of computer code instead, researchers just book-ended the information with segments that look like viral DNA. The bacteria then got to work, conveniently cutting and pasting the relevant section into their genes.

The method does have a few bugs. For instance, not all of the bacteria snip the full section, so only part of the code gets copied. But if you introduce the code into a large enough population of bacteria, it becomes easy to deduce the full message from a sufficient percentage of the colony.
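
The sketch below is a toy version of those two ideas, not the Harvard team’s actual scheme: a payload flanked by invented marker sequences that stand in for the viral-looking ends, and a majority vote across a small simulated colony that recovers the full message even though many “cells” copied only part of it.

    from collections import Counter

    BASES = "ACGT"
    MARKER = "TTTTCCCC"   # invented stand-in for the viral-looking flanking sequence

    def text_to_dna(text):
        bits = "".join(format(ord(c), "08b") for c in text)   # 8 bits per character
        return "".join(BASES[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

    def dna_to_text(dna):
        bits = "".join(format(BASES.index(b), "02b") for b in dna)
        return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

    message = "a real thumb drive"
    payload = MARKER + text_to_dna(message) + MARKER

    # A small "colony": a few cells keep the whole payload, others only a partial copy.
    colony = [payload[:len(payload) - drop] for drop in (0, 0, 5, 11, 23, 40, 47, 60)]

    # Majority vote at every position reconstructs the full sequence from partial copies.
    recovered = "".join(
        Counter(copy[i] for copy in colony if i < len(copy)).most_common(1)[0][0]
        for i in range(max(len(copy) for copy in colony)))

    print(dna_to_text(recovered[len(MARKER):-len(MARKER)]))   # -> a real thumb drive

The real system works on DNA inside living cells rather than Python strings, and the copying errors are messier than clean truncations, but the population-level redundancy is what makes the message recoverable.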

The amount of information that can be stored also depends on the bacteria doing the storing. For this experiment, researchers used E. coli, which was only efficient at storing around 100 bytes. But some bacteria, such as Sulfolobus tokodaii, are capable of storing thousands of bytes. With synthetic engineering, these numbers can be increased exponentially.

http://www.mnn.com/green-tech/research-innovations/stories/bacteria-can-now-be-turned-living-hard-drives