Tech billionaires who think we’re living in a computer simulation run by an advanced civilization are secretly funding a way out

By Owen Hughes

Technology moguls convinced that we are all living in a Matrix-like simulation are secretly bankrolling efforts to help us break free of it, according to a new report. It’s alleged that two Silicon Valley billionaires are funding work by scientists on proving the simulation hypothesis, a theory backed by SpaceX CEO Elon Musk.

The simulation hypothesis is based on the idea that humans are not living in reality at all, and are instead a product of a simulation being run by an extremely advanced post-human civilisation. Much like in the Matrix, this simulation is so sophisticated that humans aren’t even aware they are living in it.

It seems like a far-fetched notion, but it’s one that’s held in increasing regard in the wake of recent technological leaps in computing power and artificial intelligence. According to the New Yorker, some of tech’s top minds are so convinced by this theory that they are now funding a solution – though exactly what this would look like is unclear.

“Many people in Silicon Valley have become obsessed with the simulation hypothesis, the argument that what we experience as reality is in fact fabricated in a computer,” reports the New Yorker. “Two tech billionaires have gone so far as to secretly engage scientists to work on breaking us out of the simulation.”

The comments were made by author Tad Friend within a profile piece on Sam Altman, CEO of Y Combinator. Neither of the two billionaires referenced was named, although one prominent figure who has been vocal on the subject is Elon Musk.

Musk has previously suggested that given the rate of progress in 3D graphics, at some point in the future, video games will be indistinguishable from reality. Thus, it would be impossible to tell if we had already advanced to that point and are now living through a simulation.

In fact, Musk believes that the chance we humans are living in the “base reality” – that is, the true reality – is “one in billions”.

“The strongest argument for us probably being in a simulation is that 40 years ago we had Pong, two rectangles and a dot,” he told a Recode conference in June. “That was what games were. Now, 40 years later, we have photorealistic 3D simulations with millions of people playing simultaneously and it’s getting better every year, and soon we’ll have virtual reality.

“So given that we’re clearly on a trajectory to have games that are indistinguishable from reality… and there would probably be billions of computers, it would seem to follow that the odds we are in base reality is one in billions.”

In the New Yorker piece, Altman also touched on the threats posed by artificial intelligence, suggesting that the human race might be able to avoid a doomsday scenario by merging itself with machines.

“Any version without a merge will have conflict: we enslave the AI or it enslaves us,” said Altman. “The full-on-crazy version of the merge is we get our brains uploaded into the cloud. We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever.”

http://www.ibtimes.co.uk/take-red-pill-tech-billionaires-who-think-were-living-matrix-are-secretly-funding-way-out-1585315

Ford Promises Fleets of Driverless Cars Within Five Years


“Autonomous vehicles could have just as significant an impact on society as Ford’s moving assembly line did 100 years ago,” said Mark Fields, chief executive of Ford.

by NEAL E. BOUDETTE

In the race to develop driverless cars, several automakers and technology companies are already testing vehicles that pilot themselves on public roads. And others have outlined plans to expand their development fleets over the next few years.

But few have gone so far as to give a definitive date for the commercial debut of these cars of the future.

Now Ford Motor has done just that.

At a news conference on Tuesday at the company’s research center in Palo Alto, Calif., Mark Fields, Ford’s chief executive, said the company planned to mass produce driverless cars and have them in commercial operation in a ride-hailing service by 2021.

Beyond that, Mr. Fields’s announcement was short on specifics. But he said that the vehicles Ford envisioned would be radically different from those that populate American roads now.

“That means there’s going to be no steering wheel. There’s going to be no gas pedal. There’s going to be no brake pedal,’’ he said. “If someone had told you 10 years ago, or even five years ago, that the C.E.O. of a major American car company is going to be announcing the mass production of fully autonomous vehicles, they would have been called crazy or nuts or both.”

The company also said on Tuesday that as part of the effort, it planned to expand its Palo Alto center, doubling the number of employees who work there over the next year, from the current 130.

Ford also said it had acquired an Israeli start-up, Saips, that specializes in computer vision, a crucial technology for self-driving cars. And the automaker announced investments in three other companies involved in major technologies for driverless vehicles.

For several years, automakers have understood that their industry is being reshaped by the use of advanced computer chips, software and sensors to develop cars designed to drive themselves. The tech companies Google and Apple have emerged as potential future competitors to automakers, while Tesla Motors has already proved a competitive threat to luxury brands like BMW and Mercedes-Benz with driver-assistance and collision-avoidance technologies.

More recently, ride-sharing service providers like Uber have raised the competitive concerns of the conventional auto industry. The ride-hailing services aim to operate fleets of driverless cars that, in the future, might provide ready transportation to anyone, making it easier for people to get around without owning a car or even having a driver’s license.

A Barclays analyst, Brian Johnson, recently predicted that once autonomous vehicles are in widespread use, auto sales could fall as much as 40 percent as people rely on such services for transportation and choose not to own cars.

Mr. Fields said on Tuesday that the combination of driverless cars and ride-sharing services represented a “seismic shift” for the auto industry that would be greater than the advent of the moving production line was roughly a century ago.

“The world is changing, and it’s changing rapidly,” he said, adding that Ford now sees itself as not just a carmaker but a “mobility company.”

BMW and Mercedes-Benz are among the carmakers that have seized upon the concept of “transportation as a service,” as it is called, by starting ride-sharing services of their own. General Motors has teamed up with, and bought a stake in, Lyft, the main rival of Uber.

GM and Lyft plan to have driverless vehicles operating in tests within a year. Initially, at least, those tests will be conducted with a driver in the car to take control from the self-driving technology, if necessary.

Even some auto suppliers are focusing on ride-hailing services and driverless cars. This month, the components maker Delphi announced that it was working with the government of Singapore to develop a ride service to shuttle people to and from mass transit stations in the country’s business district.

Even though Ford has committed itself to a date for a commercial introduction of its driverless cars, several questions remain about how it will move forward, said Michelle Krebs, an analyst with AutoTrader.

For example, Ford does not have a ride-sharing partner as G.M. does in Lyft, Ms. Krebs said.

In a research note on Tuesday, Mr. Johnson noted that it remained unclear how auto companies would make money from ride-sharing services.

“These are a lot of promises, but we don’t yet know how they are going to evolve,” Ms. Krebs said. “There are still missing pieces.”

One of the investments Ford announced on Tuesday was a $75 million stake in Velodyne, which makes sensors that use lidar, a radar-like sensing technology based on laser beams. The Chinese internet company Baidu said it was making a comparable investment in Velodyne.

Ford also said it had made investments in Nirenberg Neuroscience, which is also developing machine vision technology, and Civil Maps, a start-up that is developing 3D digital maps for use by automated vehicles. Ford did not disclose the amount it invested in Nirenberg or Civil Maps.

http://www.nytimes.com/2016/08/17/business/ford-promises-fleets-of-driverless-cars-within-five-years.html

Can the World Sustain 9 Billion People by 2050?

By Philip Perry

The world’s population growth is exponential and uneven, and it could have disastrous consequences if we aren’t ready for it. Humanity recently hit a milestone, a population of 7.2 billion in 2013. It is expected to reach 8.5 billion by 2030, and 9.6 billion by 2050. If that weren’t enough, consider 11.2 billion in 2100. Most of the growth is expected to come from nine countries: India, Pakistan, the Democratic Republic of the Congo, Ethiopia, Tanzania, Nigeria, the United States, Indonesia, and Uganda.

It isn’t fertility that is driving growth, but rather longer lifespans. The world’s population growth rate peaked in the 1960s and has been dropping steadily since the ’70s. A decade ago, the annual growth rate was 1.24%; today it is 1.18%. Population growth in developed countries has slowed to a trickle. There, having a child has become too expensive for a large segment of the populace, particularly in the wake of the Great Recession, and young people must invest so much time in education and building a career that they spend their most fertile years in lecture halls and office cubicles. Although fertility has been dropping worldwide, the report says researchers used the “low-variant” scenario of population growth, so the actual figures could be higher.
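As a rough sanity check on those figures, compounding the article’s own growth rates forward lands near the projected 2030 number but overshoots the 2050 one, which is consistent with the U.N. assuming the rate keeps falling. A minimal sketch, using only the figures quoted above:

```python
# Back-of-the-envelope projection from the growth rates quoted above.
# Assumption: a constant annual rate, which overshoots the long-range UN
# projections because the UN expects the rate to keep declining.
population_2013 = 7.2e9  # the 2013 benchmark figure cited in the article

for rate, label in [(0.0118, "1.18% (today)"), (0.0124, "1.24% (a decade ago)")]:
    pop_2030 = population_2013 * (1 + rate) ** (2030 - 2013)
    pop_2050 = population_2013 * (1 + rate) ** (2050 - 2013)
    print(f"At {label}: {pop_2030 / 1e9:.1f}B by 2030, {pop_2050 / 1e9:.1f}B by 2050")
```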


World population growth by continent.

Meanwhile, the enormous baby boomer generation is aging, and public health officials warn that a “Silver Tsunami” is coming. Worldwide, the number of people aged 60 and over is expected to double by 2050, and triple by 2100. As workers age, fewer young people are around to replace them, and that means fewer taxpayers to fund Medicare in the U.S. and socialized medicine abroad. In Europe, a staggering 34% of the population is projected to be over 60 by 2050. What’s more, Europe’s population is forecast to plummet 14%. Europe, like Japan, is already struggling to provide for its aging population, and the birth deficit is likely to exacerbate the problem.

In the U.S., the number of Alzheimer’s patients alone is expected to bankrupt Medicare if no cure is found and the program remains as it is today. “Developed countries have largely painted themselves into a corner now,” says Carl Haub, senior demographer at the Population Reference Bureau.

According to a U.N. report, most of the growth will come from developing countries, with over half projected to take place in Africa, the world’s poorest continent, whose resources are already under pressure. In 15 high-fertility countries, mostly in sub-Saharan Africa, women are expected to average a little over five children each. Nigeria’s population will likely surpass that of the U.S. by 2050, making it the world’s third most populous country.

The population in developed countries is expected to remain unchanged, holding steady at 1.3 billion. Some developing countries such as Brazil, South Africa, Indonesia, India, and China are seeing a swift fall in the average number of children per woman, which is expected to continue. This may be due to better economic prospects. We often think of China as the world’s most populous nation, but India is set to match it by 2022, when each nation will have roughly 1.45 billion citizens. Afterward, India is predicted to surpass China: as India’s population grows, China’s will shrink.

Life expectancy is expected to increase in both developed and developing nations. Globally, it will likely average 76 years in the 2045-2050 period and reach 82 years in 2095-2100, if nothing changes. Nearing the end of the century, those in developing nations could expect to live to 81, while in developed nations 89 will be the norm. Yet there are concerns that the developing world will suffer even more than it does today because of this growth.

“The concentration of population growth in the poorest countries presents its own set of challenges, making it more difficult to eradicate poverty and inequality, to combat hunger and malnutrition, and to expand educational enrollment and health systems,” said John Wilmoth, director of the Population Division in the U.N.’s Department of Economic and Social Affairs.

Another worry is resource depletion. Minerals, fossil fuels, timber, and water may become scarce in several regions of the world. Since wars are often fought over resources, and water use is expected to increase 70-90% by mid-century, water may become the next oil in terms of driving nations into violent conflict unless farming methods improve and usage gets smarter. Water supplies in certain regions are already strained; India and China, for instance, have already fought two wars over water claims.

Climate change is also likely to eat up more arable land, contributing to fears of food scarcity and accelerating the loss of biodiversity. To help tamp down world population growth, U.N. researchers suggest investing in reproductive health and family planning, particularly in developing nations.

The report was made possible by demographic data from 233 countries and areas, as well as the 2010 round of population censuses.

http://bigthink.com/philip-perry/can-the-world-sustain-9-billion-people-by-2050

2 men fall off cliff playing Pokemon Go

Two men in their early 20s fell an estimated 50 to 90 feet down a cliff in Encinitas, California, on Wednesday afternoon while playing “Pokémon Go,” San Diego County Sheriff’s Department Sgt. Rich Eaton said. The men sustained injuries, although the extent is not clear.

Pokémon Go is a free-to-play app that gets users up and moving in the real world to capture fictional “pocket monsters” known as Pokémon. The goal is to capture as many of the more than one hundred species of animated Pokémon as you can.

Apparently it wasn’t enough that the app warns users to stay aware of surroundings or that signs posted on a fence near the cliff said “No Trespassing” and “Do Not Cross.” When firefighters arrived at the scene, one of the men was at the bottom of the cliff while the other was three-quarters of the way down and had to be hoisted up, Eaton said.

Both men were transported to Scripps Memorial Hospital La Jolla. They were not charged with trespassing.

Eaton encourages players to be careful. “It’s not worth life or limb,” he said.

In parts of San Diego County, there are warning signs for gamers not to play while driving. San Diego Gas and Electric tweeted a warning to stay away from electric lines and substations when catching Pokémon.

This is the latest among many unexpected situations gamers have found themselves in, despite the game being released just more than a week ago. In one case, armed robbers lured lone players of the wildly popular augmented reality game to isolated locations. In another case, the game led a teen to discover a dead body.

http://www.cnn.com/2016/07/15/health/pokemon-go-players-fall-down-cliff/index.html

Why you should believe in the digital afterlife

by Michael Graziano

Imagine scanning your Grandma’s brain in sufficient detail to build a mental duplicate. When she passes away, the duplicate is turned on and lives in a simulated video-game universe, a digital Elysium complete with Bingo, TV soaps, and knitting needles to keep the simulacrum happy. You could talk to her by phone just like always. She could join Christmas dinner by Skype. E-Granny would think of herself as the same person that she always was, with the same memories and personality—the same consciousness—transferred to a well regulated nursing home and able to offer her wisdom to her offspring forever after.

And why stop with Granny? You could have the same afterlife for yourself in any simulated environment you like. But even if that kind of technology is possible, and even if that digital entity thought of itself as existing in continuity with your previous self, would you really be the same person?

Is it even technically possible to duplicate yourself in a computer program? The short answer is: probably, but not for a while.

Let’s examine the question carefully by considering how information is processed in the brain, and how it might be translated to a computer.

The first person to grasp the information-processing fundamentals of the brain was the great Spanish neuroscientist Santiago Ramón y Cajal, who won the 1906 Nobel Prize in Physiology or Medicine. Before Cajal, the brain was thought to be made of microscopic strands connected in a continuous net or ‘reticulum.’ According to that theory, the brain was different from every other biological thing because it wasn’t made of separate cells. Cajal used new methods of staining brain samples to discover that the brain did have separate cells, which he called neurons. The neurons had long thin strands mixing together like spaghetti—dendrites and axons that presumably carried signals. But when he traced the strands carefully, he realized that one neuron did not grade into another. Instead, neurons contacted each other through microscopic gaps—synapses.

Cajal guessed that the synapses must regulate the flow of signals from neuron to neuron. He developed the first vision of the brain as a device that processes information, channeling signals and transforming inputs into outputs. That realization, the so-called neuron doctrine, is the foundational insight of neuroscience. The last hundred years have been dedicated more or less to working out the implications of the neuron doctrine.

It’s now possible to simulate networks of neurons on a microchip and the simulations have extraordinary computing capabilities. The principle of a neural network is that it gains complexity by combining many simple elements. One neuron takes in signals from many other neurons. Each incoming signal passes over a synapse that either excites the receiving neuron or inhibits it. The neuron’s job is to sum up the many thousands of yes and no votes that it receives every instant and compute a simple decision. If the yes votes prevail, it triggers its own signal to send on to yet other neurons. If the no votes prevail, it remains silent. That elemental computation, as trivial as it sounds, can result in organized intelligence when compounded over enough neurons connected in enough complexity.
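That elemental computation is small enough to write down directly. Here is a minimal illustrative sketch; the inputs, weights, and threshold are invented for the example, not biological measurements:

```python
# One artificial "neuron": sum the weighted yes/no votes arriving from upstream
# neurons and fire only if the total crosses a threshold.
def neuron_fires(inputs, weights, threshold=0.0):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0   # fire (1) or stay silent (0)

incoming = [1, 0, 1, 1]                    # signals from four upstream neurons
synapses = [0.6, -0.4, 0.3, -0.8]          # positive = excitatory, negative = inhibitory
print(neuron_fires(incoming, synapses))    # 0.6 + 0.3 - 0.8 = 0.1 > 0, so it fires
```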

The trick is to get the right pattern of synaptic connections between neurons. Artificial neural networks are programmed to adjust their synapses through experience. You give the network a computing task and let it try over and over. Every time it gets closer to a good performance, you give it a reward signal or an error signal that updates its synapses. Based on a few simple learning rules, each synapse changes gradually in strength. Over time, the network shapes up until it can do the task. That deep learning, as it’s sometimes called, can result in machines that develop spooky, human-like abilities such as face recognition and voice recognition. This technology is already all around us in Siri and in Google.
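The synapse-adjustment idea can be sketched in a few lines. Below, a single artificial neuron learns the logical OR of two inputs using the classic perceptron rule, a deliberately simple stand-in for the gradient-based updates that modern deep learning actually uses:

```python
# Error-driven learning in miniature: nudge each synaptic weight whenever the
# neuron's output disagrees with the desired answer (the "error signal").
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
weights, bias, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                        # repeated passes over the task
    for inputs, target in examples:
        summed = sum(x * w for x, w in zip(inputs, weights)) + bias
        output = 1 if summed > 0 else 0
        error = target - output            # reward/error signal
        weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        bias += lr * error

print(weights, bias)                       # the weights have "shaped up" to compute OR
```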

But can the technology be scaled up to preserve someone’s consciousness on a computer? The human brain has about a hundred billion neurons. The connectional complexity is staggering. By some estimates, the human brain compares to the entire content of the internet. It’s only a matter of time, however, and not very much at that, before computer scientists can simulate a hundred billion neurons. Many startups and organizations, such as the Human Brain Project in Europe, are working full-tilt toward that goal. The advent of quantum computing will speed up the process considerably. But even when we reach that threshold where we are able to create a network of a hundred billion artificial neurons, how do we copy your special pattern of connectivity?

No existing scanner can measure the pattern of connectivity among your neurons, or connectome, as it’s called. MRI machines scan at about a millimeter resolution, whereas synapses are only a few microns across. We could kill you and cut up your brain into microscopically thin sections. Then we could try to trace the spaghetti tangle of dendrites, axons, and their synapses. But even that less-than-enticing technology is not yet scalable. Scientists like Sebastian Seung have plotted the connectome in a small piece of a mouse brain, but we are decades away, at least, from technology that could capture the connectome of the human brain.

Assuming we are one day able to scan your brain and extract your complete connectome, we’ll hit the next hurdle. In an artificial neural network, all the neurons are identical. They vary only in the strength of their synaptic interconnections. That regularity is a convenient engineering approach to building a machine. In the real brain, however, every neuron is different. To give a simple example, some neurons have thick, insulated cables that send information at a fast rate. You find these neurons in parts of the brain where timing is critical. Other neurons sprout thinner cables and transmit signals at a slower rate. Some neurons don’t even fire off signals—they work by a subtler, sub-threshold change in electrical activity. All of these neurons have different temporal dynamics.

The brain also uses hundreds of different kinds of synapses. As I noted above, a synapse is a microscopic gap between neurons. When neuron A is active, the electrical signal triggers a spray of chemicals—neurotransmitters—which cross the synapse and are picked up by chemical receptors on neuron B. Different synapses use different neurotransmitters, which have wildly different effects on the receiving neuron, and are re-absorbed after use at different rates. These subtleties matter. The smallest change to the system can have profound consequences. For example, Prozac works on people’s moods because it subtly adjusts the way particular neurotransmitters are reabsorbed after being released into synapses.

Although Cajal didn’t realize it, some neurons actually do connect directly, membrane to membrane, without a synaptic space between. These connections, called gap junctions, work more quickly than the regular kind and seem to be important in synchronizing the activity across many neurons.

Other neurons act like a gland. Instead of sending a precise signal to specific target neurons, they release a chemical soup that spreads and affects a larger area of the brain over a longer time.

I could go on with the biological complexity. These are just a few examples.

A student of artificial intelligence might argue that these complexities don’t matter. You can build an intelligent machine with simpler, more standard elements, ignoring the riot of biological complexity. And that is probably true. But there is a difference between building artificial intelligence and recreating a specific person’s mind.

If you want a copy of your brain, you will need to copy its quirks and complexities, which define the specific way you think. A tiny maladjustment in any of these details can result in epilepsy, hallucinations, delusions, depression, anxiety, or just plain unconsciousness. The connectome by itself is not enough. If your scan could determine only which neurons are connected to which others, and you re-created that pattern in a computer, there’s no telling what Frankensteinian, ruined, crippled mind you would create.

To copy a person’s mind, you wouldn’t need to scan anywhere near the level of individual atoms. But you would need a scanning device that can capture what kind of neuron and what kind of synapse it is, how large and active each synapse is, what kind of neurotransmitter it uses, how rapidly that neurotransmitter is being synthesized and how rapidly it can be reabsorbed. Is that impossible? No. But it starts to sound like the tech is centuries in the future rather than just around the corner.

Even if we get there quicker, there is still another hurdle. Let’s suppose we have the technology to make a simulation of your brain. Is it truly conscious, or is it merely a computer crunching numbers in imitation of your behavior?

A half-dozen major scientific theories of consciousness have been proposed. In all of them, if you could simulate a brain on a computer, the simulation would be as conscious as you are. In the Attention Schema Theory, consciousness depends on the brain computing a specific kind of self-descriptive model. Since this explanation of consciousness depends on computation and information, it would translate directly to any hardware including an artificial one.

In another approach, the Global Workspace Theory, consciousness ignites when information is combined and shared globally around the brain. Again, the process is entirely programmable. Build that kind of global processing network, and it will be conscious.

In yet another theory, the Integrated Information Theory, consciousness is a side product of information. Any computing device that has a sufficient density of information, even an artificial device, is conscious.

Many other scientific theories of consciousness have been proposed, beyond the three mentioned here. They are all different from each other and nobody yet knows which one is correct. But in every theory grounded in neuroscience, a computer-simulated brain would be conscious. In some mystical theories and theories that depend on a loose analogy to quantum mechanics, consciousness would be more difficult to create artificially. But as a neuroscientist, I am confident that if we ever could scan a person’s brain in detail and simulate that architecture on a computer, then the simulation would have a conscious experience. It would have the memories, personality, feelings, and intelligence of the original.

And yet, that doesn’t mean we’re out of the woods. Humans are not brains in vats. Our cognitive and emotional experience depends on a brain-body system embedded in a larger environment. This relationship between brain function and the surrounding world is sometimes called “embodied cognition.” The next task therefore is to simulate a realistic body and a realistic world in which to embed the simulated brain. In modern video games, the bodies are not exactly realistic. They don’t have all the right muscles, the flexibility of skin, or the fluidity of movement. Even though some of them come close, you wouldn’t want to live forever in a World of Warcraft skin. But the truth is, a body and world are the easiest components to simulate. We already have the technology. It’s just a matter of allocating enough processing power.

In my lab, a few years ago, we simulated a human arm. We included the bone structure, all the fifty or so muscles, the slow twitch and fast twitch fibers, the tendons, the viscosity, the forces and inertia. We even included the touch receptors, the stretch receptors, and the pain receptors. We had a working human arm in digital format on a computer. It took a lot of computing power, and on our tiny machines it couldn’t run in real time. But with a little more computational firepower and a lot bigger research team we could have simulated a complete human body in a realistic world.

Let’s presume that at some future time we have all the technological pieces in place. When you’re close to death we scan your details and fire up your simulation. Something wakes up with the same memories and personality as you. It finds itself in a familiar world. The rendering is not perfect, but it’s pretty good. Odors probably don’t work quite the same. The fine-grained details are missing. You live in a simulated New York City with crowds of fellow dead people but no rats or dirt. Or maybe you live in a rural setting where the grass feels like Astroturf. Or you live on the beach in the sun, and every year an upgrade makes the ocean spray seem a little less fake. There’s no disease. No aging. No injury. No death unless the operating system crashes. You can interact with the world of the living the same way you do now, on a smart phone or by email. You stay in touch with living friends and family, follow the latest elections, watch the summer blockbusters. Maybe you still have a job in the real world as a lecturer or a board director or a comedy writer. It’s like you’ve gone to another universe but still have contact with the old one.

But is it you? Did you cheat death, or merely replace yourself with a creepy copy?

I can’t pretend to have a definitive answer to this philosophical question. Maybe it’s a matter of opinion rather than anything testable or verifiable. To many people, uploading is simply not an afterlife. No matter how accurate the simulation, it wouldn’t be you. It would be a spooky fake.

My own perspective borrows from a basic concept in topology. Imagine a branching Y. You’re born at the bottom of the Y and your lifeline progresses up the stalk. The branch point is the moment your brain is scanned and the simulation has begun. Now there are two of you, a digital one (let’s say the left branch) and a biological one (the right branch). They both inherit the memories, personality, and identity of the stalk. They both think they’re you. Psychologically, they’re equally real, equally valid. Once the simulation is fired up, the branches begin to diverge. The left branch accumulates new experiences in a digital world. The right branch follows a different set of experiences in the physical world.

Is it all one person, or two people, or a real person and a fake one? All of those and none of those. It’s a Y.

The stalk of the Y, the part from before the split, gains immortality. It lives on in the digital you, just like your past self lives on in your present self. The right hand branch, the post-split biological branch, is doomed to die. That’s the part that feels gypped by the technology.

So let’s assume that those of us who live in biological bodies get over this injustice, and in a century or three we invent a digital afterlife. What could possibly go wrong?

Well, for one, there are limited resources. Simulating a brain is computationally expensive. As I noted before, by some estimates the amount of information in the entire internet at the present time is approximately the same as in a single human brain. Now imagine the resources required to simulate the brains of millions or billions of dead people. It’s possible that some future technology will allow for unlimited RAM and we’ll all get free service. The same way we’re arguing about health care now, future activists will chant, “The afterlife is a right, not a privilege!” But it’s more likely that a digital afterlife will be a gated community and somebody will have to choose who gets in. Is it the rich and politically connected who live on? Is it Trump? Is it biased toward one ethnicity? Do you get in for being a Nobel laureate, or for being a suicide bomber in somebody’s hideous war? Just think how coercive religion can be when it peddles the promise of an invisible afterlife that can’t be confirmed. Now imagine how much more coercive a demagogue would be if he could dangle the reward of an actual, verifiable afterlife. The whole thing is an ethical nightmare.

And yet I remain optimistic. Our species advances every time we develop a new way to share information. The invention of writing jump-started our advanced civilizations. The computer revolution and the internet are all about sharing information. Think about the quantum leap that might occur if instead of preserving words and pictures, we could preserve people’s actual minds for future generations. We could accumulate skill and wisdom like never before. Imagine a future in which your biological life is more like a larval stage. You grow up, learn skills and good judgment along the way, and then are inducted into an indefinite digital existence where you contribute to stability and knowledge. When all the ethical confusion settles, the benefits may be immense. No wonder people like Ray Kurzweil refer to this kind of technological advance as a singularity. We can’t even imagine how our civilization will look on the other side of that change.

http://www.theatlantic.com/science/archive/2016/07/what-a-digital-afterlife-would-be-like/491105/

Thanks to Dan Brat for bringing this to the It’s Interesting community.

Creating a Synthetic Human Genome


Sixty trays can contain the entire human genome as 23,040 different fragments of cloned DNA. Credit James King-Holmes/Science Source

By ANDREW POLLACK

Scientists are now contemplating the fabrication of a human genome, meaning they would use chemicals to manufacture all the DNA contained in human chromosomes.

The prospect is spurring both intrigue and concern in the life sciences community because it might be possible, such as through cloning, to use a synthetic genome to create human beings without biological parents.

The project is still in the idea phase and also involves efforts to improve DNA synthesis in general.

Organizers said the project could have a big scientific payoff and would be a follow-up to the original Human Genome Project, which was aimed at reading the sequence of the three billion chemical letters in the DNA blueprint of human life. The new project, by contrast, would involve not reading, but rather writing the human genome — synthesizing all three billion units from chemicals.

But such an attempt would raise numerous ethical issues. Could scientists create humans with certain kinds of traits, perhaps people born and bred to be soldiers? Or might it be possible to make copies of specific people?

“Would it be O.K., for example, to sequence and then synthesize Einstein’s genome?” Drew Endy, a bioengineer at Stanford, and Laurie Zoloth, a bioethicist at Northwestern University, wrote in an essay criticizing the proposed project. “If so how many Einstein genomes should be made and installed in cells, and who would get to make them?”

The project was initially called HGP2: The Human Genome Synthesis Project, with HGP referring to the Human Genome Project. An invitation to the meeting at Harvard said that the primary goal “would be to synthesize a complete human genome in a cell line within a period of 10 years.”

But by the time the meeting was held, the name had been changed to “HGP-Write: Testing Large Synthetic Genomes in Cells.”

The project does not yet have funding, said George Church, the Harvard geneticist who is one of its organizers, though various companies and foundations would be invited to contribute, and some have indicated interest. The federal government will also be asked. A spokeswoman for the National Institutes of Health declined to comment, saying the project was in too early a stage.

Besides Dr. Church, the organizers include Jef Boeke, director of the institute for systems genetics at NYU Langone Medical Center, and Andrew Hessel, a self-described futurist who works at the Bay Area software company Autodesk and who first proposed such a project in 2012.

Scientists and companies can now change the DNA in cells, for example, by adding foreign genes or changing the letters in the existing genes. This technique is routinely used to make drugs, such as insulin for diabetes, inside genetically modified cells, as well as to make genetically modified crops. And scientists are now debating the ethics of new technology that might allow genetic changes to be made in embryos.

But synthesizing a gene, or an entire genome, would provide the opportunity to make even more extensive changes in DNA.

For instance, companies are now using organisms like yeast to make complex chemicals, like flavorings and fragrances. That requires adding not just one gene to the yeast, as is done to make insulin, but numerous genes in order to create an entire chemical production process within the cell. With that much tinkering needed, it can be easier to synthesize the DNA from scratch.

Right now, synthesizing DNA is difficult and error-prone. Existing techniques can reliably make strands that are only about 200 base pairs long, with the base pairs being the chemical units in DNA. A single gene can be hundreds or thousands of base pairs long. To synthesize one of those, multiple 200-unit segments have to be spliced together.

But the cost and capabilities are rapidly improving. Dr. Endy of Stanford, who is a co-founder of a DNA synthesis company called Gen9, said the cost of synthesizing genes has plummeted from $4 per base pair in 2003 to 3 cents now. But even at that rate, the cost for three billion letters would be $90 million. He said if costs continued to decline at the same pace, that figure could reach $100,000 in 20 years.
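A quick check of that arithmetic, assuming (as the article implicitly does) a steady exponential decline in cost:

```python
# Rough check of the DNA-synthesis cost figures quoted above. Assumption:
# the per-base-pair cost keeps falling at a constant exponential rate.
import math

GENOME_BP = 3_000_000_000                  # base pairs in a human genome
cost_2003, cost_2016 = 4.00, 0.03          # dollars per base pair

genome_cost_now = GENOME_BP * cost_2016
print(f"Genome at 3 cents per base pair: ${genome_cost_now:,.0f}")   # ~$90 million

annual_factor = (cost_2016 / cost_2003) ** (1 / (2016 - 2003))       # ~0.69x per year
years_to_100k = math.log(100_000 / genome_cost_now) / math.log(annual_factor)
print(f"Years until a genome costs $100,000: about {years_to_100k:.0f}")  # roughly 20
```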

J. Craig Venter, the genetic scientist, synthesized a bacterial genome consisting of about a million base pairs. The synthetic genome was inserted into a cell and took control of that cell. While his first synthetic genome was mainly a copy of an existing genome, Dr. Venter and colleagues this year synthesized a more original bacterial genome, about 500,000 base pairs long.

Dr. Boeke is leading an international consortium that is synthesizing the genome of yeast, which consists of about 12 million base pairs. The scientists are making changes, such as deleting stretches of DNA that do not have any function, in an attempt to make a more streamlined and stable genome.

But the human genome is more than 200 times as large as that of yeast and it is not clear if such a synthesis would be feasible.

Jeremy Minshull, chief executive of DNA2.0, a DNA synthesis company, questioned if the effort would be worth it.

“Our ability to understand what to build is so far behind what we can build,” said Dr. Minshull, who was invited to the meeting at Harvard but did not attend. “I just don’t think that being able to make more and more and more and cheaper and cheaper and cheaper is going to get us the understanding we need.”

Will machines one day control our decisions?

New research suggests it’s possible to detect when our brain is making a decision and nudge it to make the healthier choice.

In recording moment-to-moment deliberations by macaque monkeys over which option is likely to yield the most fruit juice, scientists have captured the dynamics of decision-making down to millisecond changes in neurons in the brain’s orbitofrontal cortex.

“If we can measure a decision in real time, we can potentially also manipulate it,” says senior author Jonathan Wallis, a neuroscientist and professor of psychology at the University of California, Berkeley. “For example, a device could be created that detects when an addict is about to choose a drug and instead bias their brain activity towards a healthier choice.”

Located behind the eyes, the orbitofrontal cortex plays a key role in decision-making and, when damaged, can lead to poor choices and impulsivity.

While previous studies have linked activity in the orbitofrontal cortex to making final decisions, this is the first to track the neural changes that occur during deliberations between different options.

“We can now see a decision unfold in real time and make predictions about choices,” Wallis says.

Measuring the signals from electrodes implanted in the monkeys’ brains, researchers tracked the primates’ neural activity as they weighed the pros and cons of images that delivered different amounts of juice.

A computational algorithm tracked the monkeys’ orbitofrontal activity as they looked from one image to another, determining which picture would yield the greater reward. The shifting brain patterns enabled researchers to predict which image the monkey would settle on.

For the experiment, they presented a monkey with a series of four different images of abstract shapes, each of which delivered to the monkey a different amount of juice. They used a pattern-recognition algorithm known as linear discriminant analysis to identify, from the pattern of neural activity, which picture the monkey was looking at.
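The study’s own analysis code isn’t reproduced here, but the decoding step it describes can be illustrated with an off-the-shelf linear discriminant classifier applied to simulated firing-rate data. Everything below (neuron counts, firing rates, trial numbers) is invented purely for the sketch:

```python
# Toy illustration of LDA decoding: classify which of four images was viewed
# from the pattern of activity across a simulated population of neurons.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_neurons, trials_per_image, n_images = 50, 200, 4

# Hypothetical data: each image evokes a slightly different mean firing pattern.
mean_patterns = rng.normal(10, 2, size=(n_images, n_neurons))
X = np.vstack([rng.normal(mean_patterns[i], 3, size=(trials_per_image, n_neurons))
               for i in range(n_images)])
y = np.repeat(np.arange(n_images), trials_per_image)

# Train on half the trials, then decode which image was viewed on the rest.
idx = rng.permutation(len(y))
train, test = idx[: len(y) // 2], idx[len(y) // 2 :]
lda = LinearDiscriminantAnalysis().fit(X[train], y[train])
print("Decoding accuracy:", lda.score(X[test], y[test]))
```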

Next, they presented the monkey with two of those same images, and watched the neural patterns switch back and forth to the point where the researchers could predict which image the monkey would choose based on the length of time that the monkey stared at the picture.

The more the monkey needed to think about the options, particularly when there was not much difference between the amounts of juice offered, the more the neural patterns would switch back and forth.

“Now that we can see when the brain is considering a particular choice, we could potentially use that signal to electrically stimulate the neural circuits involved in the decision and change the final choice,” Wallis says.

Erin Rich, a researcher at the Helen Wills Neuroscience Institute, is lead author of the study published in the journal Nature Neuroscience. The National Institute on Drug Abuse and the National Institute of Mental Health funded the work.

http://www.futurity.org/brains-decisions-1181542/

Bacteria can be turned into living hard drives


When scientists add code to bacterial DNA, it’s passed on to the next generation.

By Bryan Nelson

The way DNA stores genetic information is similar to the way a computer stores data. Now scientists have found a way to turn this from a metaphorical comparison into a literal one, by transforming living bacteria into hard drives, reports Popular Mechanics.

A team of Harvard scientists led by geneticists Seth Shipman and Jeff Nivala have devised a way to trick bacteria into copying computer code into the fabric of their DNA without interrupting normal cellular function. The bacteria even pass the information on to their progeny, thus ensuring that the information gets “backed up,” even when individual bacteria perish.

So far the technique can only upload about 100 bytes of data to the bacteria, but that’s enough to store a short script or perhaps a short poem — say, a haiku — into the genetics of a cell. For instance, here’s a haiku that would work:

Bacteria on your thumb
might someday become
a real thumb drive

As the method becomes more precise, it will be possible to encode longer strings of text into the fabric of life. Perhaps some day, the bacteria living all around us will also double as a sort of library that we can download.

The technique is based on manipulation of an immune response that exists in many bacteria known as the CRISPR/Cas system. How the system works is actually fairly simple: when bacteria encounter a threatening virus, they physically cut out a segment of the attacking virus’s DNA and paste it into a specific region of their own genome. The bacteria can then use this section of viral DNA to identify future virus encounters and rapidly mount a defense. Copying this immunity into their own genetic code allows the bacteria to pass it on to future generations.

To get the bacteria to copy strings of computer code instead, researchers just book-ended the information with segments that look like viral DNA. The bacteria then got to work, conveniently cutting and pasting the relevant section into their genes.

The method does have a few bugs. For instance, not all of the bacteria snip the full section, so only part of the code gets copied. But if you introduce the code into a large enough population of bacteria, it becomes easy to deduce the full message from a sufficient percentage of the colony.
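That population-level redundancy is easy to picture with a toy reconstruction: give each simulated “cell” only a partial copy of the message, then recover the whole thing by majority vote at each position. This is only an illustration of the idea, not the researchers’ actual decoding procedure:

```python
# Toy model: each "cell" stores only a prefix of the message; the full message
# is recovered by voting across the colony, position by position.
import random
from collections import Counter

message = "Bacteria on your thumb might someday become a real thumb drive"
random.seed(1)

# Hypothetical colony of 500 cells, each copying a random-length prefix.
colony = [message[: random.randint(10, len(message))] for _ in range(500)]

def vote(position):
    chars = [cell[position] for cell in colony if len(cell) > position]
    return Counter(chars).most_common(1)[0][0] if chars else "?"

recovered = "".join(vote(i) for i in range(len(message)))
print(recovered == message)   # almost surely True: the colony preserves the message
```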

The amount of information that can be stored also depends on the bacteria doing the storing. For this experiment, researchers used E. coli, which was only efficient at storing around 100 bytes. But some bacteria, such as Sulfolobus tokodaii, are capable of storing thousands of bytes. With synthetic engineering, these numbers can be increased exponentially.

http://www.mnn.com/green-tech/research-innovations/stories/bacteria-can-now-be-turned-living-hard-drives

French Tattoo Artist Gets World’s 1st Prosthetic Arm That Doubles as a Tattoo Machine

A French tattoo artist who lost his right arm 22 years ago recently received what has been called the world’s first tattooing prosthetic arm.

JC Sheitan Tenet, 32, told ABC News today he received and demonstrated the first prototype of the tattoo machine prosthesis earlier this month during a convention in Devezieux, France.

Though Tenet has been tattooing with his left arm and hand for years, he’s now learning how to tattoo with his right arm using the “Edward Scissorhands”-esque tool, he said.

The tattoo machine arm was created by visual artist and engineer Jean-Louis Gonzalez, who goes by “Gonzal.”

Gonzal told ABC News today that Tenet can control the prosthetic arm with his shoulder. Gonzal is still working on perfecting the prosthesis and said he hopes the next prototype will give Tenet more wrist mobility.

Tenet said that he uses the prosthesis to do a little filling but that he doesn’t rely on it to do elaborate artwork. He added that the needle is disposable and that the prosthesis can be cleaned like a regular tattoo machine.

And though the prosthesis has an oxidized metal look, it’s not rusted or unsanitary at all, Tenet said. It was painted in “steampunk style,” he explained. Steampunk is a science fiction genre and design style that typically features technology and aesthetics inspired by 19th century steam-powered machinery.