Posts Tagged ‘reality’

Back in 1961, the Nobel Prize–winning physicist Eugene Wigner outlined a thought experiment that demonstrated one of the lesser-known paradoxes of quantum mechanics. The experiment shows how the strange nature of the universe allows two observers—say, Wigner and Wigner’s friend—to experience different realities.

Since then, physicists have used the “Wigner’s Friend” thought experiment to explore the nature of measurement and to argue over whether objective facts can exist. That’s important because scientists carry out experiments to establish objective facts. But if they experience different realities, the argument goes, how can they agree on what these facts might be?

That’s provided some entertaining fodder for after-dinner conversation, but Wigner’s thought experiment has never been more than that—just a thought experiment.

Last year, however, physicists noticed that recent advances in quantum technologies have made it possible to reproduce the Wigner’s Friend test in a real experiment. In other words, it ought to be possible to create different realities and compare them in the lab to find out whether they can be reconciled.

And today, Massimiliano Proietti at Heriot-Watt University in Edinburgh and a few colleagues say they have performed this experiment for the first time: they have created different realities and compared them. Their conclusion is that Wigner was correct—these realities can be made irreconcilable so that it is impossible to agree on objective facts about an experiment.

Wigner’s original thought experiment is straightforward in principle. It begins with a single polarized photon that, when measured, can have either a horizontal polarization or a vertical polarization. But before the measurement, according to the laws of quantum mechanics, the photon exists in both polarization states at the same time—a so-called superposition.
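In the standard notation of quantum mechanics, a photon prepared in an equal superposition of horizontal (H) and vertical (V) polarization is written like this (a textbook sketch, not notation from the paper itself):

```latex
|\psi\rangle = \frac{1}{\sqrt{2}}\left(\,|H\rangle + |V\rangle\,\right)
```

Measuring the polarization yields H or V with equal probability, but before measurement the state is genuinely both at once.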

Wigner imagined a friend in a different lab measuring the state of this photon and storing the result, while Wigner observed from afar. Wigner has no information about his friend’s measurement and so is forced to assume that the photon and the measurement of it are in a superposition of all possible outcomes of the experiment.

Wigner can even perform an experiment to determine whether this superposition exists or not. This is a kind of interference experiment showing that the photon and the measurement are indeed in a superposition.
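Schematically, Wigner's outside description looks like this, where |F_H⟩ and |F_V⟩ stand for the friend (and lab) having recorded each outcome (again a standard textbook sketch, with labels chosen here for illustration):

```latex
|\Psi\rangle = \frac{1}{\sqrt{2}}\left(\,|H\rangle|F_H\rangle + |V\rangle|F_V\rangle\,\right)
```

For Wigner, photon and friend together remain in superposition, even though the friend has already seen a definite result.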

From Wigner’s point of view, this is a “fact”—the superposition exists. And this fact suggests that a measurement cannot have taken place.

But this is in stark contrast to the point of view of the friend, who has indeed measured the photon’s polarization and recorded it. The friend can even call Wigner and say the measurement has been done (provided the outcome is not revealed).

So the two realities are at odds with each other. “This calls into question the objective status of the facts established by the two observers,” say Proietti and co.

That’s the theory, but last year Caslav Brukner, at the University of Vienna in Austria, came up with a way to re-create the Wigner’s Friend experiment in the lab by means of techniques involving the entanglement of many particles at the same time.

The breakthrough that Proietti and co have made is to carry this out. “In a state-of-the-art 6-photon experiment, we realize this extended Wigner’s friend scenario,” they say.

They use these six entangled photons to create two alternate realities—one representing Wigner and one representing Wigner’s friend. Wigner’s friend measures the polarization of a photon and stores the result. Wigner then performs an interference measurement to determine if the measurement and the photon are in a superposition.

The experiment produces an unambiguous result. It turns out that both realities can coexist even though they produce irreconcilable outcomes, just as Wigner predicted.

That raises some fascinating questions that are forcing physicists to reconsider the nature of reality.

The idea that observers can ultimately reconcile their measurements of some kind of fundamental reality is based on several assumptions. The first is that universal facts actually exist and that observers can agree on them.

But there are other assumptions too. One is that observers have the freedom to make whatever observations they want. And another is that the choices one observer makes do not influence the choices other observers make—an assumption that physicists call locality.

If there is an objective reality that everyone can agree on, then these assumptions all hold.

But Proietti and co’s result suggests that objective reality does not exist. In other words, the experiment suggests that one or more of the assumptions—the idea that there is a reality we can agree on, the idea that we have freedom of choice, or the idea of locality—must be wrong.

Of course, there is another way out for those hanging on to the conventional view of reality: the experimenters may have overlooked some loophole. Indeed, physicists have spent years trying to close loopholes in similar experiments, although they concede it may never be possible to close them all.

Nevertheless, the work has important implications for the work of scientists. “The scientific method relies on facts, established through repeated measurements and agreed upon universally, independently of who observed them,” say Proietti and co. And yet in the same paper, they undermine this idea, perhaps fatally.

The next step is to go further: to construct experiments creating increasingly bizarre alternate realities that cannot be reconciled. Where this will take us is anybody’s guess. But Wigner, and his friend, would surely not be surprised.

Ref: Experimental Rejection of Observer-Independence in the Quantum World


What if we could edit the sensations we feel: paste into our brains pictures we never saw, cut out unwanted pain, or insert nonexistent scents into memory?

UC Berkeley neuroscientists are building the equipment to do just that, using holographic projection into the brain to activate or suppress dozens and ultimately thousands of neurons at once, hundreds of times each second, copying real patterns of brain activity to fool the brain into thinking it has felt, seen or sensed something.

The goal is to read neural activity constantly and decide, based on the activity, which sets of neurons to activate to simulate the pattern and rhythm of an actual brain response, so as to replace lost sensations after peripheral nerve damage, for example, or control a prosthetic limb.

“This has great potential for neural prostheses, since it has the precision needed for the brain to interpret the pattern of activation. If you can read and write the language of the brain, you can speak to it in its own language and it can interpret the message much better,” said Alan Mardinly, a postdoctoral fellow in the UC Berkeley lab of Hillel Adesnik, an assistant professor of molecular and cell biology. “This is one of the first steps in a long road to develop a technology that could be a virtual brain implant with additional senses or enhanced senses.”

Mardinly is one of three first authors of a paper appearing online April 30 in advance of publication in the journal Nature Neuroscience that describes the holographic brain modulator, which can activate up to 50 neurons at once in a three-dimensional chunk of brain containing several thousand neurons, and repeat that up to 300 times a second with different sets of 50 neurons.
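A quick back-of-the-envelope check on the figures reported above (50 neurons per holographic pattern, up to 300 patterns per second) shows the scale of control involved:

```python
# Throughput implied by the reported specs: 50 neurons addressed per
# hologram, with a new hologram projected up to 300 times per second.
neurons_per_pattern = 50
patterns_per_second = 300

activations_per_second = neurons_per_pattern * patterns_per_second
print(activations_per_second)  # 15000 targeted activations per second
```

That is tens of thousands of individually targeted activations per second, in a volume containing only a few thousand neurons.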

“The ability to talk to the brain has the incredible potential to help compensate for neurological damage caused by degenerative diseases or injury,” said Ehud Isacoff, a UC Berkeley professor of molecular and cell biology and director of the Helen Wills Neuroscience Institute, who was not involved in the research project. “By encoding perceptions into the human cortex, you could allow the blind to see or the paralyzed to feel touch.”

Holographic projection

Each of the 2,000 to 3,000 neurons in the chunk of brain was outfitted with a protein that, when hit by a flash of light, turns the cell on to create a brief spike of activity. One of the key breakthroughs was finding a way to target each cell individually without hitting them all at once.

To focus the light onto just the cell body — a target smaller than the width of a human hair — of nearly all cells in a chunk of brain, they turned to computer generated holography, a method of bending and focusing light to form a three-dimensional spatial pattern. The effect is as if a 3D image were floating in space.

In this case, the holographic image was projected into a thin layer of brain tissue at the surface of the cortex, about a tenth of a millimeter thick, through a clear window into the brain.

“The major advance is the ability to control neurons precisely in space and time,” said postdoc Nicolas Pégard, another first author who works both in Adesnik’s lab and the lab of co-author Laura Waller, an associate professor of electrical engineering and computer sciences. “In other words, to shoot the very specific sets of neurons you want to activate and do it at the characteristic scale and the speed at which they normally work.”

The researchers have already tested the prototype in the touch, vision and motor areas of the brains of mice as they walk on a treadmill with their heads immobilized. While they have not noted any behavior changes in the mice when their brain is stimulated, Mardinly said that their brain activity — which is measured in real-time with two-photon imaging of calcium levels in the neurons — shows patterns similar to a response to a sensory stimulus. They’re now training mice so they can detect behavior changes after stimulation.

Prosthetics and brain implants

The area of the brain covered — now a slice one-half millimeter square and one-tenth of a millimeter thick — can be scaled up to read from and write to more neurons in the brain’s outer layer, or cortex, Pégard said. And the laser holography setup could eventually be miniaturized to fit in a backpack a person could haul around.

Mardinly, Pégard and the other first author, postdoc Ian Oldenburg, constructed the holographic brain modulator by making technological advances in a number of areas. Mardinly and Oldenburg, together with Savitha Sridharan, a research associate in the lab, developed better optogenetic switches to insert into cells to turn them on and off. The switches — light-activated ion channels on the cell surface that open briefly when triggered — turn on strongly and then quickly shut off, all in about 3 milliseconds, so they’re ready to be re-stimulated up to 50 or more times per second, consistent with normal firing rates in the cortex.
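The switch kinetics described above can be sanity-checked with simple arithmetic: a roughly 3-millisecond on/off cycle caps how fast any single neuron can be re-stimulated, and that cap comfortably exceeds the cortical firing rates cited in the text.

```python
# A ~3 ms on-to-off cycle per optogenetic switch sets an upper bound
# on the re-stimulation rate of a single neuron.
cycle_ms = 3.0                       # reported switch on/off time
max_rate_hz = 1000.0 / cycle_ms      # ceiling of roughly 333 Hz per neuron
cortical_rate_hz = 50                # "50 or more times per second" in the text

print(max_rate_hz >= cortical_rate_hz)  # True: the switch keeps pace with cortex
```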

Pégard developed the holographic projection system using a liquid crystal screen that acts like a holographic negative to sculpt the light from 40W lasers into the desired 3D pattern. The lasers are pulsed in 300 femtosecond-long bursts every microsecond. He, Mardinly, Oldenburg and their colleagues published a paper last year describing the device, which they call 3D-SHOT, for three-dimensional scanless holographic optogenetics with temporal focusing.

“This is the culmination of technologies that researchers have been working on for a while, but have been impossible to put together,” Mardinly said. “We solved numerous technical problems at the same time to bring it all together and finally realize the potential of this technology.”

As they improve their technology, they plan to start capturing real patterns of activity in the cortex in order to learn how to reproduce sensations and perceptions to play back through their holographic system.

Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., . . . Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature Neuroscience. doi:10.1038/s41593-018-0139-8