Archive for the ‘consciousness’ Category


A scan of the man missing 90% of his brain.

by Paul Ratner

What we think we know about our brains is nothing compared to what we don’t know. This fact is brought into focus by the medical mystery of a 44-year-old French father of two who discovered one day that most of his brain was missing. Instead, his skull is mostly filled with fluid, with very little brain tissue left. He has a lifelong condition known as hydrocephalus, commonly called “water on the brain” or “water head.” It occurs when excess cerebrospinal fluid puts pressure on the brain and the brain’s cavities enlarge abnormally.

As Axel Cleeremans, a cognitive psychologist at the Université Libre de Bruxelles who has lectured on this case, told CBC:

“He was living a normal life. He has a family. He works. His IQ was tested at the time of his complaint. This came out to be 84, which is slightly below the normal range … So, this person is not bright — but perfectly, socially apt”.

The complaint Cleeremans refers to is the original reason the man sought help – he had leg pain. Imagine that – you go to your doctor with a leg cramp and get told that you’re living without most of your brain.

The man continues to live a normal life, raising a family with his wife and kids and working as a civil servant, all while three of his brain’s main cavities are filled with nothing but fluid and his brainstem and cerebellum are squeezed into the small space they share with a cyst.

What can we learn from this rare case? As Cleeremans points out:

“One of the lessons is that plasticity is probably more pervasive than we thought it was… It is truly incredible that the brain can continue to function, more or less, within the normal range — with probably many fewer neurons than in a typical brain. Second lesson perhaps, if you’re interested in consciousness — that is the manner in which the biological activity of the brain produces awareness… One idea that I’m defending is the idea that awareness depends on the brain’s ability to learn.”

The Frenchman’s story challenges the idea that consciousness arises in only one part of the brain. Current theories hold that the part of the brain called the thalamus is responsible for our self-awareness. A man living with most of his brain missing does not fit neatly into such hypotheses.

http://bigthink.com/paul-ratner/the-medical-mystery-of-a-man-living-with-90-of-his-brain-missing


by Michael Graziano

Ever since Charles Darwin published On the Origin of Species in 1859, evolution has been the grand unifying theory of biology. Yet one of our most important biological traits, consciousness, is rarely studied in the context of evolution. Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?

The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species.

Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.
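
To make the competition concrete, here is a minimal Python sketch of selective signal enhancement as a winner-take-all operation. It is my own illustration, not something from the article: the function name, the toy signal strengths, the number of winners, and the suppression factor are all invented for clarity.

import numpy as np

def selective_signal_enhancement(signals, n_winners=3, suppression=0.1):
    # Toy winner-take-all: the loudest few neurons keep their full strength,
    # everyone else is pushed down toward the noise floor.
    signals = np.asarray(signals, dtype=float)
    winners = np.argsort(signals)[-n_winners:]   # indices of the strongest signals
    enhanced = signals * suppression             # suppress everyone by default
    enhanced[winners] = signals[winners]         # let the winners through unchanged
    return enhanced

# Ten neurons "shouting" at different strengths; only a few rise above the rest.
print(selective_signal_enhancement([0.2, 0.9, 0.1, 0.4, 0.8, 0.3, 0.05, 0.7, 0.2, 0.1]))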

We can take a good guess when selective signal enhancement first evolved by comparing different species of animal, a common method in evolutionary biology. The hydra, a small relative of jellyfish, arguably has the simplest nervous system known—a nerve net. If you poke the hydra anywhere, it gives a generalized response. It shows no evidence of selectively processing some pokes while strategically ignoring others. The split between the ancestors of hydras and other animals, according to genetic analysis, may have been as early as 700 million years ago. Selective signal enhancement probably evolved after that.

The arthropod eye, on the other hand, has one of the best-studied examples of selective signal enhancement. It sharpens the signals related to visual edges and suppresses other visual signals, generating an outline sketch of the world. Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life. Selective signal enhancement is so primitive that it doesn’t even require a central brain. The eye, the network of touch sensors on the body, and the auditory system can each have their own local versions of attention focusing on a few select signals.

The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum. (“Tectum” means “roof” in Latin, and it often covers the top of the brain.) It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.

All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates. The fact that vertebrates have it and invertebrates don’t allows us to bracket its evolution. According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.

The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
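
A minimal Python sketch of that predict-and-compare loop may help. It is my own illustration under strong simplifying assumptions (a single eye movement measured in degrees, and invented function names), not a description of how the tectum actually encodes anything.

def predicted_retinal_shift(eye_movement_deg):
    # Forward model: if the eye rotates right by X degrees, the image of a
    # stationary world should slide left across the retina by about X degrees.
    return -eye_movement_deg

def prediction_error(eye_movement_deg, observed_shift_deg):
    # Compare what the internal model predicted with what the eye actually saw.
    return observed_shift_deg - predicted_retinal_shift(eye_movement_deg)

# A 10-degree rightward eye movement should shift the scene about 10 degrees left.
print(prediction_error(10.0, -9.5))  # small error: the movement is going as planned
print(prediction_error(10.0, 4.0))   # large error: something unexpected happened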

With the evolution of reptiles around 350 to 300 million years ago, a new brain structure began to emerge – the wulst. Birds inherited a wulst from their reptile ancestors. Mammals did too, but our version is usually called the cerebral cortex and has expanded enormously. It’s by far the largest structure in the human brain. Sometimes you hear people refer to the reptilian brain as the brute, automatic part that’s left over when you strip away the cortex, but this is not correct. The cortex has its origin in the reptilian wulst, and reptiles are probably smarter than we give them credit for.

The cortex is like an upgraded tectum. We still have a tectum buried under the cortex and it performs the same functions as in fish and amphibians. If you hear a sudden sound or see a movement in the corner of your eye, your tectum directs your gaze toward it quickly and accurately. The cortex also takes in sensory signals and coordinates movement, but it has a more flexible repertoire. Depending on context, you might look toward, look away, make a sound, do a dance, or simply store the sensory event in memory in case the information is useful for the future.

The most important difference between the cortex and the tectum may be the kind of attention they control. The tectum is the master of overt attention—pointing the sensory apparatus toward anything important. The cortex ups the ante with something called covert attention. You don’t need to look directly at something to covertly attend to it. Even if you’ve turned your back on an object, your cortex can still focus its processing resources on it. Scientists sometimes compare covert attention to a spotlight. (The analogy was first suggested by Francis Crick, the geneticist.) Your cortex can shift covert attention from the text in front of you to a nearby person, to the sounds in your backyard, to a thought or a memory. Covert attention is the virtual movement of deep processing from one item to another.

The cortex needs to control that virtual movement, and therefore like any efficient controller it needs an internal model. Unlike the tectum, which models concrete objects like the eyes and the head, the cortex must model something much more abstract. According to the AST, it does so by constructing an attention schema—a constantly updated set of information that describes what covert attention is doing moment-by-moment and what its consequences are.

Consider an unlikely thought experiment. If you could somehow attach an external speech mechanism to a crocodile, and the speech mechanism had access to the information in that attention schema in the crocodile’s wulst, that technology-assisted crocodile might report, “I’ve got something intangible inside me. It’s not an eyeball or a head or an arm. It exists without substance. It’s my mental possession of things. It moves around from one set of items to another. When that mysterious process in me grasps hold of something, it allows me to understand, to remember, and to respond.”

The crocodile would be wrong, of course. Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence. And this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description. Alas crocodiles can’t really talk. But in this theory, they’re likely to have at least a simple form of an attention schema.
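
As a loose, purely illustrative sketch of what “strategically vague” might mean computationally (my own toy example in Python, not the AST’s actual formalism), imagine a schema that reports what attention has grasped and how strongly, while leaving out the machinery entirely:

# The full attentional state, including the physical mechanism.
covert_attention = {
    "target": "sounds in the backyard",
    "intensity": 0.8,
    "mechanism": "competition among millions of cortical neurons and synapses",
}

def attention_schema(state):
    # The schema keeps the behaviorally useful facts and omits the physics,
    # leaving a description of attention as an intangible mental possession.
    return {
        "possessing": state["target"],
        "how strongly": state["intensity"],
        "made of": "no describable substance",
    }

print(attention_schema(covert_attention))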

When I think about evolution, I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.

When psychologists study social cognition, they often focus on something called theory of mind, the ability to understand the possible contents of someone else’s mind. Some of the more complex examples are limited to humans and apes. But experiments show that a dog can look at another dog and figure out, “Is he aware of me?” Crows also show an impressive theory of mind. If they hide food when another bird is watching, they’ll wait for the other bird’s absence and then hide the same piece of food again, as if able to compute that the other bird is aware of one hiding place but unaware of the other. If a basic ability to attribute awareness to others is present in mammals and in birds, then it may have an origin in their common ancestor, the reptiles. In the AST’s evolutionary story, social cognition begins to ramp up shortly after the reptilian wulst evolved. Crocodiles may not be the most socially complex creatures on earth, but they live in large communities, care for their young, and can make loyal if somewhat dangerous pets.

If AST is correct, 300 million years of reptilian, avian, and mammalian evolution have allowed the self-model and the social model to evolve in tandem, each influencing the other. We understand other people by projecting ourselves onto them. But we also understand ourselves by considering the way other people might see us. Data from my own lab suggests that the cortical networks in the human brain that allow us to attribute consciousness to others overlap extensively with the networks that construct our own sense of consciousness.

Language is perhaps the most recent big leap in the evolution of consciousness. Nobody knows when human language first evolved. Certainly we had it by 70 thousand years ago when people began to disperse around the world, since all dispersed groups have a sophisticated language. The relationship between language and consciousness is often debated, but we can be sure of at least this much: once we developed language, we could talk about consciousness and compare notes. We could say out loud, “I’m conscious of things. So is she. So is he. So is that damn river that just tried to wipe out my village.”

Maybe partly because of language and culture, humans have a hair-trigger tendency to attribute consciousness to everything around us. We attribute consciousness to characters in a story, puppets and dolls, storms, rivers, empty spaces, ghosts and gods. Justin Barrett called it the Hyperactive Agency Detection Device, or HADD. One speculation is that it’s better to be safe than sorry. If the wind rustles the grass and you misinterpret it as a lion, no harm done. But if you fail to detect an actual lion, you’re taken out of the gene pool. To me, however, the HADD goes way beyond detecting predators. It’s a consequence of our hyper-social nature. Evolution turned up the amplitude on our tendency to model others and now we’re supremely attuned to each other’s mind states. It gives us our adaptive edge. The inevitable side effect is the detection of false positives, or ghosts.

And so the evolutionary story brings us up to date, to human consciousness—something we ascribe to ourselves, to others, and to a rich spirit world of ghosts and gods in the empty spaces around us. The AST covers a lot of ground, from simple nervous systems to simulations of self and others. It provides a general framework for understanding consciousness, its many adaptive uses, and its gradual and continuing evolution.

http://www.theatlantic.com/science/archive/2016/06/how-consciousness-evolved/485558/

Thanks to Dan Brat for bringing this to the It’s Interesting community.

by Michael Graziano

Imagine scanning your Grandma’s brain in sufficient detail to build a mental duplicate. When she passes away, the duplicate is turned on and lives in a simulated video-game universe, a digital Elysium complete with Bingo, TV soaps, and knitting needles to keep the simulacrum happy. You could talk to her by phone just like always. She could join Christmas dinner by Skype. E-Granny would think of herself as the same person that she always was, with the same memories and personality—the same consciousness—transferred to a well-regulated nursing home and able to offer her wisdom to her offspring forever after.

And why stop with Granny? You could have the same afterlife for yourself in any simulated environment you like. But even if that kind of technology is possible, and even if that digital entity thought of itself as existing in continuity with your previous self, would you really be the same person?

Is it even technically possible to duplicate yourself in a computer program? The short answer is: probably, but not for a while.

Let’s examine the question carefully by considering how information is processed in the brain, and how it might be translated to a computer.

The first person to grasp the information-processing fundamentals of the brain was the great Spanish neuroscientist Santiago Ramón y Cajal, who won the 1906 Nobel Prize in Physiology or Medicine. Before Cajal, the brain was thought to be made of microscopic strands connected in a continuous net, or ‘reticulum.’ According to that theory, the brain was different from every other biological thing because it wasn’t made of separate cells. Cajal used new methods of staining brain samples to discover that the brain did have separate cells, which he called neurons. The neurons had long thin strands mixing together like spaghetti—dendrites and axons that presumably carried signals. But when he traced the strands carefully, he realized that one neuron did not grade into another. Instead, neurons contacted each other through microscopic gaps—synapses.

Cajal guessed that the synapses must regulate the flow of signals from neuron to neuron. He developed the first vision of the brain as a device that processes information, channeling signals and transforming inputs into outputs. That realization, the so-called neuron doctrine, is the foundational insight of neuroscience. The last hundred years have been dedicated more or less to working out the implications of the neuron doctrine.

It’s now possible to simulate networks of neurons on a microchip and the simulations have extraordinary computing capabilities. The principle of a neural network is that it gains complexity by combining many simple elements. One neuron takes in signals from many other neurons. Each incoming signal passes over a synapse that either excites the receiving neuron or inhibits it. The neuron’s job is to sum up the many thousands of yes and no votes that it receives every instant and compute a simple decision. If the yes votes prevail, it triggers its own signal to send on to yet other neurons. If the no votes prevail, it remains silent. That elemental computation, as trivial as it sounds, can result in organized intelligence when compounded over enough neurons connected in enough complexity.
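
That vote-counting can be written in a few lines. The following Python sketch is my own toy illustration with invented weights, not code from any real model: a single neuron sums its excitatory (positive-weight) and inhibitory (negative-weight) inputs and fires only if the total clears a threshold.

import numpy as np

def neuron_fires(inputs, weights, threshold=0.0):
    # Sum the yes and no votes arriving over the synapses; fire if yes prevails.
    total = np.dot(inputs, weights)
    return total > threshold

# Three incoming signals: two excitatory synapses and one inhibitory one.
print(neuron_fires(inputs=[1, 1, 1], weights=[0.6, 0.3, -0.5]))  # True: the neuron fires
print(neuron_fires(inputs=[0, 1, 1], weights=[0.6, 0.3, -0.5]))  # False: it stays silent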

The trick is to get the right pattern of synaptic connections between neurons. Artificial neural networks are programmed to adjust their synapses through experience. You give the network a computing task and let it try over and over. Every time it gets closer to a good performance, you give it a reward signal or an error signal that updates its synapses. Based on a few simple learning rules, each synapse changes gradually in strength. Over time, the network shapes up until it can do the task. That deep learning, as it’s sometimes called, can result in machines that develop spooky, human-like abilities such as face recognition and voice recognition. This technology is already all around us in Siri and in Google.
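
Here is a minimal Python sketch of that kind of learning loop, a simple perceptron-style rule standing in for the much larger deep-learning systems the author has in mind; the task (logical OR), the threshold, and the learning rate are all illustrative assumptions of mine.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=2)  # two "synapses", starting near zero
learning_rate = 0.1

# Teach the network logical OR by trial, error signal, and small synaptic updates.
examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for epoch in range(20):
    for x, target in examples:
        output = 1 if np.dot(weights, x) > 0.5 else 0
        error = target - output                          # reward/error signal
        weights += learning_rate * error * np.array(x)   # each synapse shifts gradually

print(weights)  # after training, each input synapse is strong enough to clear the threshold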

But can the technology be scaled up to preserve someone’s consciousness on a computer? The human brain has about a hundred billion neurons. The connectional complexity is staggering. By some estimates, the human brain compares to the entire content of the internet. It’s only a matter of time, however, and not very much at that, before computer scientists can simulate a hundred billion neurons. Many startups and organizations, such as the Human Brain Project in Europe, are working full-tilt toward that goal. The advent of quantum computing will speed up the process considerably. But even when we reach that threshold where we are able to create a network of a hundred billion artificial neurons, how do we copy your special pattern of connectivity?

No existing scanner can measure the pattern of connectivity among your neurons, or connectome, as it’s called. MRI machines scan at about a millimeter resolution, whereas synapses are only a few microns across. We could kill you and cut up your brain into microscopically thin sections. Then we could try to trace the spaghetti tangle of dendrites, axons, and their synapses. But even that less-than-enticing technology is not yet scalable. Scientists like Sebastian Seung have plotted the connectome in a small piece of a mouse brain, but we are decades away, at least, from technology that could capture the connectome of the human brain.

Assuming we are one day able to scan your brain and extract your complete connectome, we’ll hit the next hurdle. In an artificial neural network, all the neurons are identical. They vary only in the strength of their synaptic interconnections. That regularity is a convenient engineering approach to building a machine. In the real brain, however, every neuron is different. To give a simple example, some neurons have thick, insulated cables that send information at a fast rate. You find these neurons in parts of the brain where timing is critical. Other neurons sprout thinner cables and transmit signals at a slower rate. Some neurons don’t even fire off signals—they work by a subtler, sub-threshold change in electrical activity. All of these neurons have different temporal dynamics.

The brain also uses hundreds of different kinds of synapses. As I noted above, a synapse is a microscopic gap between neurons. When neuron A is active, the electrical signal triggers a spray of chemicals—neurotransmitters—which cross the synapse and are picked up by chemical receptors on neuron B. Different synapses use different neurotransmitters, which have wildly different effects on the receiving neuron, and are re-absorbed after use at different rates. These subtleties matter. The smallest change to the system can have profound consequences. For example, Prozac works on people’s moods because it subtly adjusts the way particular neurotransmitters are reabsorbed after being released into synapses.

Although Cajal didn’t realize it, some neurons actually do connect directly, membrane to membrane, without a synaptic space between. These connections, called gap junctions, work more quickly than the regular kind and seem to be important in synchronizing the activity across many neurons.

Other neurons act like a gland. Instead of sending a precise signal to specific target neurons, they release a chemical soup that spreads and affects a larger area of the brain over a longer time.

I could go on with the biological complexity. These are just a few examples.

A student of artificial intelligence might argue that these complexities don’t matter. You can build an intelligent machine with simpler, more standard elements, ignoring the riot of biological complexity. And that is probably true. But there is a difference between building artificial intelligence and recreating a specific person’s mind.

If you want a copy of your brain, you will need to copy its quirks and complexities, which define the specific way you think. A tiny maladjustment in any of these details can result in epilepsy, hallucinations, delusions, depression, anxiety, or just plain unconsciousness. The connectome by itself is not enough. If your scan could determine only which neurons are connected to which others, and you re-created that pattern in a computer, there’s no telling what Frankensteinian, ruined, crippled mind you would create.

To copy a person’s mind, you wouldn’t need to scan anywhere near the level of individual atoms. But you would need a scanning device that can capture what kind of neuron, what kind of synapse, how large and active each synapse is, what kind of neurotransmitter it uses, how rapidly the neurotransmitter is being synthesized, and how rapidly it can be reabsorbed. Is that impossible? No. But it starts to sound like the tech is centuries in the future rather than just around the corner.
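
To picture what such a scan would have to record, here is a hypothetical Python sketch of a per-synapse record. Every field name is invented for illustration and does not correspond to any real connectomics data format.

from dataclasses import dataclass

@dataclass
class SynapseRecord:
    # Hypothetical fields a whole-brain scan would need for every synapse.
    pre_neuron_id: int       # which neuron sends the signal
    post_neuron_id: int      # which neuron receives it
    neuron_type: str         # e.g. fast myelinated vs. slow or sub-threshold
    synapse_type: str        # chemical synapse, gap junction, gland-like release
    strength: float          # how large or active the synapse is
    neurotransmitter: str    # which chemical crosses the gap
    synthesis_rate: float    # how quickly the transmitter is replenished
    reuptake_rate: float     # how quickly it is reabsorbed after release

example = SynapseRecord(1, 2, "fast", "chemical", 0.7, "glutamate", 1.0, 0.8)
print(example)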

Even if we get there quicker, there is still another hurdle. Let’s suppose we have the technology to make a simulation of your brain. Is it truly conscious, or is it merely a computer crunching numbers in imitation of your behavior?

A half-dozen major scientific theories of consciousness have been proposed. In all of them, if you could simulate a brain on a computer, the simulation would be as conscious as you are. In the Attention Schema Theory, consciousness depends on the brain computing a specific kind of self-descriptive model. Since this explanation of consciousness depends on computation and information, it would translate directly to any hardware including an artificial one.

In another approach, the Global Workspace Theory, consciousness ignites when information is combined and shared globally around the brain. Again, the process is entirely programmable. Build that kind of global processing network, and it will be conscious.

In yet another theory, the Integrated Information Theory, consciousness is a side product of information. Any computing device that has a sufficient density of information, even an artificial device, is conscious.

Many other scientific theories of consciousness have been proposed, beyond the three mentioned here. They are all different from each other and nobody yet knows which one is correct. But in every theory grounded in neuroscience, a computer-simulated brain would be conscious. In some mystical theories and theories that depend on a loose analogy to quantum mechanics, consciousness would be more difficult to create artificially. But as a neuroscientist, I am confident that if we ever could scan a person’s brain in detail and simulate that architecture on a computer, then the simulation would have a conscious experience. It would have the memories, personality, feelings, and intelligence of the original.

And yet, that doesn’t mean we’re out of the woods. Humans are not brains in vats. Our cognitive and emotional experience depends on a brain-body system embedded in a larger environment. This relationship between brain function and the surrounding world is sometimes called “embodied cognition.” The next task therefore is to simulate a realistic body and a realistic world in which to embed the simulated brain. In modern video games, the bodies are not exactly realistic. They don’t have all the right muscles, the flexibility of skin, or the fluidity of movement. Even though some of them come close, you wouldn’t want to live forever in a World of Warcraft skin. But the truth is, a body and world are the easiest components to simulate. We already have the technology. It’s just a matter of allocating enough processing power.

In my lab, a few years ago, we simulated a human arm. We included the bone structure, all the fifty or so muscles, the slow twitch and fast twitch fibers, the tendons, the viscosity, the forces and inertia. We even included the touch receptors, the stretch receptors, and the pain receptors. We had a working human arm in digital format on a computer. It took a lot of computing power, and on our tiny machines it couldn’t run in real time. But with a little more computational firepower and a lot bigger research team we could have simulated a complete human body in a realistic world.

Let’s presume that at some future time we have all the technological pieces in place. When you’re close to death we scan your details and fire up your simulation. Something wakes up with the same memories and personality as you. It finds itself in a familiar world. The rendering is not perfect, but it’s pretty good. Odors probably don’t work quite the same. The fine-grained details are missing. You live in a simulated New York City with crowds of fellow dead people but no rats or dirt. Or maybe you live in a rural setting where the grass feels like Astroturf. Or you live on the beach in the sun, and every year an upgrade makes the ocean spray seem a little less fake. There’s no disease. No aging. No injury. No death unless the operating system crashes. You can interact with the world of the living the same way you do now, on a smart phone or by email. You stay in touch with living friends and family, follow the latest elections, watch the summer blockbusters. Maybe you still have a job in the real world as a lecturer or a board director or a comedy writer. It’s like you’ve gone to another universe but still have contact with the old one.

But is it you? Did you cheat death, or merely replace yourself with a creepy copy?

I can’t pretend to have a definitive answer to this philosophical question. Maybe it’s a matter of opinion rather than anything testable or verifiable. To many people, uploading is simply not an afterlife. No matter how accurate the simulation, it wouldn’t be you. It would be a spooky fake.

My own perspective borrows from a basic concept in topology. Imagine a branching Y. You’re born at the bottom of the Y and your lifeline progresses up the stalk. The branch point is the moment your brain is scanned and the simulation has begun. Now there are two of you, a digital one (let’s say the left branch) and a biological one (the right branch). They both inherit the memories, personality, and identity of the stalk. They both think they’re you. Psychologically, they’re equally real, equally valid. Once the simulation is fired up, the branches begin to diverge. The left branch accumulates new experiences in a digital world. The right branch follows a different set of experiences in the physical world.

Is it all one person, or two people, or a real person and a fake one? All of those and none of those. It’s a Y.

The stalk of the Y, the part from before the split, gains immortality. It lives on in the digital you, just like your past self lives on in your present self. The right-hand branch, the post-split biological branch, is doomed to die. That’s the part that feels cheated by the technology.

So let’s assume that those of us who live in biological bodies get over this injustice, and in a century or three we invent a digital afterlife. What could possibly go wrong?

Well, for one, there are limited resources. Simulating a brain is computationally expensive. As I noted before, by some estimates the amount of information in the entire internet at the present time is approximately the same as in a single human brain. Now imagine the resources required to simulate the brains of millions or billions of dead people. It’s possible that some future technology will allow for unlimited RAM and we’ll all get free service. The same way we’re arguing about health care now, future activists will chant, “The afterlife is a right, not a privilege!” But it’s more likely that a digital afterlife will be a gated community and somebody will have to choose who gets in. Is it the rich and politically connected who live on? Is it Trump? Is it biased toward one ethnicity? Do you get in for being a Nobel laureate, or for being a suicide bomber in somebody’s hideous war? Just think how coercive religion can be when it peddles the promise of an invisible afterlife that can’t be confirmed. Now imagine how much more coercive a demagogue would be if he could dangle the reward of an actual, verifiable afterlife. The whole thing is an ethical nightmare.

And yet I remain optimistic. Our species advances every time we develop a new way to share information. The invention of writing jump-started our advanced civilizations. The computer revolution and the internet are all about sharing information. Think about the quantum leap that might occur if instead of preserving words and pictures, we could preserve people’s actual minds for future generations. We could accumulate skill and wisdom like never before. Imagine a future in which your biological life is more like a larval stage. You grow up, learn skills and good judgment along the way, and then are inducted into an indefinite digital existence where you contribute to stability and knowledge. When all the ethical confusion settles, the benefits may be immense. No wonder people like Ray Kurzweil refer to this kind of technological advance as a singularity. We can’t even imagine how our civilization will look on the other side of that change.

http://www.theatlantic.com/science/archive/2016/07/what-a-digital-afterlife-would-be-like/491105/

Thanks to Dan Brat for bringing this to the It’s Interesting community.