
By Jeffrey Bennett

It has been exactly 100 years since Albert Einstein presented his theory of general relativity to an audience of scientists on November 25, 1915. While virtually everyone has heard of Einstein and his theory, very few people have any idea of what the theory actually is.

This is a shame, not only because there is a great public thirst for understanding of it, but also because relativity is important, for at least four major reasons.

General relativity provides our modern understanding of space, time and gravity — which means it’s crucial to almost everything we do in physics and astronomy. For example, you cannot understand black holes, the expansion of the universe or the Big Bang without first understanding the basic ideas of relativity. Though few people realize it, Einstein’s famous equation E = mc² is actually part of the theory of relativity, which means that relativity also explains how the sun shines and how nuclear power works.
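To get a feel for the scale that equation implies, here is a quick back-of-the-envelope calculation (my own illustration, not from the article; the one-gram figure and the TNT conversion are illustrative choices):

```python
# Back-of-the-envelope: energy released by converting mass via E = mc^2.
# Constants are standard SI values; one gram is an illustrative mass.
c = 2.99792458e8          # speed of light, m/s
mass_kg = 0.001           # one gram of matter

energy_joules = mass_kg * c**2
tnt_ton_joules = 4.184e9  # energy released by one ton of TNT

print(f"E = {energy_joules:.3e} J")
print(f"Equivalent to roughly {energy_joules / tnt_ton_joules:,.0f} tons of TNT")
```

A single gram of mass, fully converted, corresponds to about 21 kilotons of TNT — comparable to an early atomic bomb, which is why the equation underlies both stellar fusion and nuclear power.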

A second reason everyone should know about relativity lies in the way it changes our perception of reality. Relativity tells us that our ordinary perceptions of time and space are not universally valid. Instead, space and time are intertwined as four-dimensional space-time.

In our ordinary lives, we perceive only three dimensions—length, width and depth—and we assume that this perception reflects reality. However, in space-time, the four directions of possible motion are length, width, depth and time. (Note that time is not “the” fourth dimension; it is simply one of the four.)

Although we cannot picture all four dimensions of space-time at once, we can imagine what things would look like if we could. In addition to the three spatial dimensions of space-time that we ordinarily see, every object would be stretched out through time. Objects that we see as three-dimensional in our ordinary lives would appear as four-dimensional objects in space-time. If we could see in four dimensions, we could look through time just as easily as we look to our left or right. If we looked at a person, we could see every event in that person’s life. If we wondered what really happened during some historical event, we’d simply look to find the answer.

To see why this is so revolutionary, imagine that you met someone today who deeply believed that Earth is the center of the universe. You would probably feel sorry for this person, knowing that his or her entire world view is based on an idea disproven more than 400 years ago.

Now imagine that you met someone who still believed that time and space are independent and absolute — which, of course, describes almost everyone — even though we’ve known that’s not the case for a century now. Shouldn’t we feel equally sorry for all who hold this modern misconception?

It seems especially unfortunate once you realize that the ideas of relativity are not particularly difficult to understand. Indeed, I believe we could begin teaching relativity in elementary school in much the same way that we teach young children about the existence of atoms, even though few will ever study quantum mechanics.

My third reason for believing relativity is important lies in what Einstein’s discovery tells us about human potential. The science of relativity may seem disconnected from most other human endeavors, but I believe Einstein himself proved otherwise. Throughout his life, Einstein argued eloquently for human rights, human dignity and a world of peace and shared prosperity. His belief in underlying human goodness is all the more striking when you consider that he lived through both World Wars, that he was driven out of Germany by the rise of the Nazis, that he witnessed the Holocaust that wiped out more than six million of his fellow Jews, and that he saw his own discoveries put to use in atomic bombs.

No one can say for sure how he maintained his optimism in the face of such tragedies, but I see a connection to his discovery of relativity. Einstein surely recognized that a theory that so challenged our perceptions of reality might have been dismissed out of hand at other times in history, but that we now live in a time when, thanks to the process that we call science, the abundant evidence for relativity allowed for its acceptance.

This willingness to make judgments based on evidence shows that we are growing up as a species. We have not yet reached the point where we always show the same willingness in all our other endeavors, but the fact that we’ve done it for science suggests we have the potential.

Finally, on a philosophical level, relativity is profound. Only about a month before his death in 1955, Einstein wrote: “Death signifies nothing … the distinction between past, present and future is only a stubbornly persistent illusion.” As this suggests, relativity raises interesting questions about what the passage of time really means.

Because these are philosophical questions, they do not have definitive answers, and you will have to decide for yourself what these questions mean to you. But I believe that one thing is clear. Einstein showed that even though space and time can independently differ for different observers, the four-dimensional space-time reality is the same for everyone.

This implies that events in space-time have a permanence to them that cannot be taken away. Once an event occurs, in essence it becomes part of the fabric of our universe. Every human life is a series of events, and this means that when we put them all together, each of us is creating our own, indelible mark on the universe. Perhaps if everyone understood that, we might all be a little more careful to make sure that the mark we leave is one that we are proud of.

So there you have it. Relativity is necessary to comprehend the universe as we know it; it helps us understand the potential we all share when we put our brains to work for the common good; and if we all understood it, we might treat each other a little more kindly.

By Jonathan Webb

The design, published in Nature Photonics, adapts technology used in fusion research.

Several locations could now enter a race to convert photons into positrons and electrons for the very first time.

This would prove an 80-year-old theory by Breit and Wheeler, who themselves thought physical proof was impossible.

Now, according to researchers from Imperial College London, that proof is within reach.

Prof Steven Rose and his PhD student, Oliver Pike, told the BBC it could happen within a year.

“With a good experimental team, it should be quite doable,” said Mr Pike.

If the experiment comes to fruition, it will be the final piece in a puzzle that began in 1905, when Einstein accounted for the photoelectric effect with his model of light as a particle.

Several other basic interactions between matter and light have been described and subsequently proved by experiment, including Dirac’s 1930 proposal that an electron and its antimatter counterpart, a positron, could be annihilated upon collision to produce two photons.

Breit and Wheeler’s theoretical prediction of the reverse – that two photons could crash together and produce matter (a positron and an electron) – has been difficult to observe.

“The reason this is very hard to see in the lab is that you need to throw an awful lot of photons together – because the probability of any two of them interconverting is very low,” Prof Rose explained.
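For a sense of the energies involved, here is a quick calculation of the pair-production threshold (my own illustration using standard physical constants, not figures from the article): two head-on photons can only produce an electron-positron pair if their combined energy exceeds the pair’s rest energy, 2mₑc²:

```python
# Pair-production threshold for the Breit-Wheeler process: two colliding
# photons must together carry at least the rest energy of the
# electron-positron pair they would create (head-on collision assumed).
m_e = 9.1093837015e-31   # electron mass, kg (CODATA value)
c = 2.99792458e8         # speed of light, m/s
eV = 1.602176634e-19     # joules per electronvolt

rest_energy_MeV = m_e * c**2 / eV / 1e6
print(f"Electron rest energy: {rest_energy_MeV:.3f} MeV")
print(f"Threshold for two head-on photons: {2 * rest_energy_MeV:.3f} MeV")
```

For photons of unequal energy, it is the product of the two energies (together with the collision angle) that must clear the equivalent bar — which is why the scheme pairs a high-energy gamma-ray beam with the lower-energy X-ray photon bath rather than relying on either alone.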

His team proposes gathering that vast number of very high-energy photons by firing an intense beam of gamma-rays into a second cloud of photons, created within a tiny, gold-lined cylinder.

That cylinder is called a “hohlraum”, German for “hollow space”, because it contains a vacuum, and it is usually used in nuclear fusion research. The cloud of photons inside it is made from extraordinarily intense X-rays and is about as hot as the Sun.

Hitting this very dense cloud of photons with the powerful gamma-ray beam raises the probability of collisions that will make matter – and history.

“It’s pretty amazing really,” said Mr Pike. He says it took some time to realise the value of the scheme, which he and two colleagues initially jotted down on scrap paper over several cups of coffee.

“For the first 12 hours or so, we didn’t quite appreciate its magnitude.”

But their subsequent calculations showed that the design, theoretically at least, has more than enough power to crack the challenge set by Breit and Wheeler in the 1930s.

“All the ingredients are there,” agrees Sir Peter Knight, an emeritus professor at Imperial College who was not involved in the research but describes it as a “really clever idea”.

“I think people will seriously start to have a crack at this,” Prof Knight told BBC News, though he cautioned that there were a lot of things to get right when putting the design into practice.

“If it’s done in a year, then they’ve done bloody well! I think it might take a bit longer.”

Some healthy scientific competition may speed up the process.

There are at least three facilities with the necessary equipment to test out the new proposal, including the Atomic Weapons Establishment at Aldermaston.

“The race to carry out and complete the experiment is on,” said Mr Pike.

Thanks to Da Brayn for bringing this to the attention of the It’s Interesting community.

Creativity works in mysterious and often paradoxical ways. Creative thinking is a stable, defining characteristic in some personalities, but it may also change based on situation and context. Inspiration and ideas often arise seemingly out of nowhere and then fail to show up when we most need them, and creative thinking requires complex cognition yet is completely distinct from the thinking process.

Neuroscience paints a complicated picture of creativity. As scientists now understand it, creativity is far more complex than the right-left brain distinction would have us think (the theory being that left brain = rational and analytical, right brain = creative and emotional). In fact, creativity is thought to involve a number of cognitive processes, neural pathways and emotions, and we still don’t have the full picture of how the imaginative mind works.

And psychologically speaking, creative personality types are difficult to pin down, largely because they’re complex, paradoxical and tend to avoid habit or routine. And it’s not just a stereotype of the “tortured artist” — artists really may be more complicated people. Research has suggested that creativity involves the coming together of a multitude of traits, behaviors and social influences in a single person.

“It’s actually hard for creative people to know themselves because the creative self is more complex than the non-creative self,” Scott Barry Kaufman, a psychologist at New York University who has spent years researching creativity, told The Huffington Post. “The things that stand out the most are the paradoxes of the creative self … Imaginative people have messier minds.”

While there’s no “typical” creative type, there are some tell-tale characteristics and behaviors of highly creative people. Here are 18 things they do differently.

They daydream.

Creative types know, despite what their third-grade teachers may have said, that daydreaming is anything but a waste of time.

According to Kaufman and psychologist Rebecca L. McMillan, who co-authored a paper titled “Ode To Positive Constructive Daydreaming,” mind-wandering can aid in the process of “creative incubation.” And of course, many of us know from experience that our best ideas come seemingly out of the blue when our minds are elsewhere.

Although daydreaming may seem mindless, a 2012 study suggested it could actually involve a highly engaged brain state — daydreaming can lead to sudden connections and insights because it’s related to our ability to recall information in the face of distractions. Neuroscientists have also found that daydreaming involves the same brain processes associated with imagination and creativity.

They observe everything.

The world is a creative person’s oyster — they see possibilities everywhere and are constantly taking in information that becomes fodder for creative expression. As Henry James is widely quoted, a writer is someone on whom “nothing is lost.”

The writer Joan Didion kept a notebook with her at all times, and said that she wrote down observations about people and events as, ultimately, a way to better understand the complexities and contradictions of her own mind:

“However dutifully we record what we see around us, the common denominator of all we see is always, transparently, shamelessly, the implacable ‘I,'” Didion wrote in her essay “On Keeping a Notebook.” “We are talking about something private, about bits of the mind’s string too short to use, an indiscriminate and erratic assemblage with meaning only for its maker.”

They work the hours that work for them.

Many great artists have said that they do their best work either very early in the morning or late at night. Vladimir Nabokov started writing immediately after he woke up at 6 or 7 a.m., and Frank Lloyd Wright made a practice of waking up at 3 or 4 a.m. and working for several hours before heading back to bed. Whatever the hour, individuals with high creative output often figure out when their minds start firing up, and structure their days accordingly.

They take time for solitude.

“In order to be open to creativity, one must have the capacity for constructive use of solitude. One must overcome the fear of being alone,” wrote the American existential psychologist Rollo May.

Artists and creatives are often stereotyped as being loners, and while this may not actually be the case, solitude can be the key to producing their best work. For Kaufman, this links back to daydreaming — we need to give ourselves the time alone to simply allow our minds to wander.

“You need to get in touch with that inner monologue to be able to express it,” he says. “It’s hard to find that inner creative voice if you’re … not getting in touch with yourself and reflecting on yourself.”

They turn life’s obstacles around.

Many of the most iconic stories and songs of all time have been inspired by gut-wrenching pain and heartbreak — and the silver lining of these challenges is that they may have been the catalyst to create great art. An emerging field of psychology called post-traumatic growth suggests that many people are able to use their hardships and early-life trauma for substantial creative growth. Specifically, researchers have found that trauma can help people to grow in the areas of interpersonal relationships, spirituality, appreciation of life, personal strength, and — most importantly for creativity — seeing new possibilities in life.

“A lot of people are able to use that as the fuel they need to come up with a different perspective on reality,” says Kaufman. “What’s happened is that their view of the world as a safe place, or as a certain type of place, has been shattered at some point in their life, causing them to go on the periphery and see things in a new, fresh light, and that’s very conducive to creativity.”

They seek out new experiences.

Creative people love to expose themselves to new experiences, sensations and states of mind — and this openness is a significant predictor of creative output.

“Openness to experience is consistently the strongest predictor of creative achievement,” says Kaufman. “This consists of lots of different facets, but they’re all related to each other: Intellectual curiosity, thrill seeking, openness to your emotions, openness to fantasy. The thing that brings them all together is a drive for cognitive and behavioral exploration of the world, your inner world and your outer world.”

They “fail up.”

Resilience is practically a prerequisite for creative success, says Kaufman. Doing creative work is often described as a process of failing repeatedly until you find something that sticks, and creatives — at least the successful ones — learn not to take failure so personally.

“Creatives fail and the really good ones fail often,” Forbes contributor Steven Kotler wrote in a piece on Einstein’s creative genius.

They ask the big questions.

Creative people are insatiably curious — they generally opt to live the examined life, and even as they get older, maintain a sense of curiosity about life. Whether through intense conversation or solitary mind-wandering, creatives look at the world around them and want to know why, and how, it is the way it is.

They people-watch.

Observant by nature and curious about the lives of others, creative types often love to people-watch — and they may generate some of their best ideas from it.

“[Marcel] Proust spent almost his whole life people-watching, and he wrote down his observations, and it eventually came out in his books,” says Kaufman. “For a lot of writers, people-watching is very important … They’re keen observers of human nature.”

They take risks.

Part of doing creative work is taking risks, and many creative types thrive off of taking risks in various aspects of their lives.

“There is a deep and meaningful connection between risk taking and creativity and it’s one that’s often overlooked,” contributor Steven Kotler wrote in Forbes. “Creativity is the act of making something from nothing. It requires making public those bets first placed by imagination. This is not a job for the timid. Time wasted, reputation tarnished, money not well spent — these are all by-products of creativity gone awry.”

They view all of life as an opportunity for self-expression.

Nietzsche believed that one’s life and the world should be viewed as a work of art. Creative types may be more likely to see the world this way, and to constantly seek opportunities for self-expression in everyday life.

“Creative expression is self-expression,” says Kaufman. “Creativity is nothing more than an individual expression of your needs, desires and uniqueness.”

They follow their true passions.

Creative people tend to be intrinsically motivated — meaning that they’re motivated to act from some internal desire, rather than a desire for external reward or recognition. Psychologists have shown that creative people are energized by challenging activities, a sign of intrinsic motivation, and the research suggests that simply thinking of intrinsic reasons to perform an activity may be enough to boost creativity.

“Eminent creators choose and become passionately involved in challenging, risky problems that provide a powerful sense of power from the ability to use their talents,” write M.A. Collins and T.M. Amabile in The Handbook of Creativity.

They get out of their own heads.

Kaufman argues that another purpose of daydreaming is to help us to get out of our own limited perspective and explore other ways of thinking, which can be an important asset to creative work.

“Daydreaming has evolved to allow us to let go of the present,” says Kaufman. “The same brain network associated with daydreaming is the brain network associated with theory of mind — I like calling it the ‘imagination brain network’ — it allows you to imagine your future self, but it also allows you to imagine what someone else is thinking.”

Research has also suggested that inducing “psychological distance” — that is, taking another person’s perspective or thinking about a question as if it was unreal or unfamiliar — can boost creative thinking.

They lose track of the time.

Creative types may find that when they’re writing, dancing, painting or expressing themselves in another way, they get “in the zone,” or what’s known as a flow state, which can help them to create at their highest level. Flow is a mental state in which an individual transcends conscious thought to reach a heightened state of effortless concentration and calmness. When someone is in this state, they’re practically immune to any internal or external pressures and distractions that could hinder their performance.

You get into the flow state when you’re performing an activity you enjoy that you’re good at, but that also challenges you — as any good creative project does.

“[Creative people] have found the thing they love, but they’ve also built up the skill in it to be able to get into the flow state,” says Kaufman. “The flow state requires a match between your skill set and the task or activity you’re engaging in.”

They surround themselves with beauty.

Creatives tend to have excellent taste, and as a result, they enjoy being surrounded by beauty.

A study recently published in the journal Psychology of Aesthetics, Creativity, and the Arts showed that musicians — including orchestra musicians, music teachers, and soloists — exhibit a high sensitivity and responsiveness to artistic beauty.

They connect the dots.

If there’s one thing that distinguishes highly creative people from others, it’s the ability to see possibilities where others don’t — or, in other words, vision. Many great artists and writers have said that creativity is simply the ability to connect the dots that others might never think to connect.

In the words of Steve Jobs:

“Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things.”

They constantly shake things up.

Diversity of experience, more than anything else, is critical to creativity, says Kaufman. Creatives like to shake things up, experience new things, and avoid anything that makes life more monotonous or mundane.

“Creative people have more diversity of experiences, and habit is the killer of diversity of experience,” says Kaufman.

They make time for mindfulness.

Creative types understand the value of a clear and focused mind — because their work depends on it. Many artists, entrepreneurs, writers and other creative workers, such as David Lynch, have turned to meditation as a tool for tapping into their most creative state of mind.

And science backs up the idea that mindfulness really can boost your brain power in a number of ways. A 2012 Dutch study suggested that certain meditation techniques can promote creative thinking. And mindfulness practices have been linked with improved memory and focus, better emotional well-being, reduced stress and anxiety, and improved mental clarity — all of which can lead to better creative thought.

Stephen Hawking's black hole theory
Notion of an ‘event horizon’, from which nothing can escape, is incompatible with quantum theory, physicist claims.

by Zeeya Merali

Most physicists foolhardy enough to write a paper claiming that “there are no black holes” — at least not in the sense we usually imagine — would probably be dismissed as cranks. But when the call to redefine these cosmic crunchers comes from Stephen Hawking, it’s worth taking notice. In a paper posted online, the physicist, based at the University of Cambridge, UK, and one of the creators of modern black-hole theory, does away with the notion of an event horizon, the invisible boundary thought to shroud every black hole, beyond which nothing, not even light, can escape.

In its stead, Hawking’s radical proposal is a much more benign “apparent horizon”, which only temporarily holds matter and energy prisoner before eventually releasing them, albeit in a more garbled form.

“There is no escape from a black hole in classical theory,” Hawking told Nature. Quantum theory, however, “enables energy and information to escape from a black hole”. A full explanation of the process, the physicist admits, would require a theory that successfully merges gravity with the other fundamental forces of nature. But that is a goal that has eluded physicists for nearly a century. “The correct treatment,” Hawking says, “remains a mystery.”

Hawking posted his paper on the arXiv preprint server on 22 January. He titled it, whimsically, ‘Information preservation and weather forecasting for black holes’, and it has yet to pass peer review. The paper was based on a talk he gave via Skype at a meeting at the Kavli Institute for Theoretical Physics in Santa Barbara, California, in August 2013.

Hawking’s new work is an attempt to solve what is known as the black-hole firewall paradox, which has been vexing physicists for almost two years, after it was discovered by theoretical physicist Joseph Polchinski of the Kavli Institute and his colleagues.

In a thought experiment, the researchers asked what would happen to an astronaut unlucky enough to fall into a black hole. Event horizons are mathematically simple consequences of Einstein’s general theory of relativity that were first pointed out by the German astronomer Karl Schwarzschild in a letter he wrote to Einstein in late 1915, less than a month after the publication of the theory. In that picture, physicists had long assumed, the astronaut would happily pass through the event horizon, unaware of his or her impending doom, before gradually being pulled inwards — stretched out along the way, like spaghetti — and eventually crushed at the ‘singularity’, the black hole’s hypothetical infinitely dense core.

But on analysing the situation in detail, Polchinski’s team came to the startling realization that the laws of quantum mechanics, which govern particles on small scales, change the situation completely. Quantum theory, they said, dictates that the event horizon must actually be transformed into a highly energetic region, or ‘firewall’, that would burn the astronaut to a crisp.

This was alarming because, although the firewall obeyed quantum rules, it flouted Einstein’s general theory of relativity. According to that theory, someone in free fall should perceive the laws of physics as being identical everywhere in the Universe — whether they are falling into a black hole or floating in empty intergalactic space. As far as Einstein is concerned, the event horizon should be an unremarkable place.

Now Hawking proposes a third, tantalizingly simple, option. Quantum mechanics and general relativity remain intact, but black holes simply do not have an event horizon to catch fire. The key to his claim is that quantum effects around the black hole cause space-time to fluctuate too wildly for a sharp boundary surface to exist.

In place of the event horizon, Hawking invokes an “apparent horizon”, a surface along which light rays attempting to rush away from the black hole’s core will be suspended. In general relativity, for an unchanging black hole, these two horizons are identical, because light trying to escape from inside a black hole can reach only as far as the event horizon and will be held there, as though stuck on a treadmill. However, the two horizons can, in principle, be distinguished. If more matter gets swallowed by the black hole, its event horizon will swell and grow larger than the apparent horizon.

Conversely, in the 1970s, Hawking also showed that black holes can slowly shrink, spewing out ‘Hawking radiation’. In that case, the event horizon would, in theory, become smaller than the apparent horizon. Hawking’s new suggestion is that the apparent horizon is the real boundary. “The absence of event horizons means that there are no black holes — in the sense of regimes from which light can’t escape to infinity,” Hawking writes.

“The picture Hawking gives sounds reasonable,” says Don Page, a physicist and expert on black holes at the University of Alberta in Edmonton, Canada, who collaborated with Hawking in the 1970s. “You could say that it is radical to propose there’s no event horizon. But these are highly quantum conditions, and there’s ambiguity about what space-time even is, let alone whether there is a definite region that can be marked as an event horizon.”

Although Page accepts Hawking’s proposal that a black hole could exist without an event horizon, he questions whether that alone is enough to get past the firewall paradox. The presence of even an ephemeral apparent horizon, he cautions, could well cause the same problems as does an event horizon.

Unlike the event horizon, the apparent horizon can eventually dissolve. Page notes that Hawking is opening the door to a scenario so extreme “that anything in principle can get out of a black hole”. Although Hawking does not specify in his paper exactly how an apparent horizon would disappear, Page speculates that when it has shrunk to a certain size, at which the effects of both quantum mechanics and gravity combine, it is plausible that it could vanish. At that point, whatever was once trapped within the black hole would be released (although not in good shape).

If Hawking is correct, there could even be no singularity at the core of the black hole. Instead, matter would be only temporarily held behind the apparent horizon, which would gradually move inward owing to the pull of the black hole, but would never quite crunch down to the centre. Information about this matter would not be destroyed, but would be highly scrambled so that, as it is released through Hawking radiation, it would be in a vastly different form, making it almost impossible to work out what the swallowed objects once were.

“It would be worse than trying to reconstruct a book that you burned from its ashes,” says Page. In his paper, Hawking compares it to trying to forecast the weather ahead of time: in theory it is possible, but in practice it is too difficult to do with much accuracy.

Polchinski, however, is sceptical that black holes without an event horizon could exist in nature. The kind of violent fluctuations needed to erase it are too rare in the Universe, he says. “In Einstein’s gravity, the black-hole horizon is not so different from any other part of space,” says Polchinski. “We never see space-time fluctuate in our own neighbourhood: it is just too rare on large scales.”

Raphael Bousso, a theoretical physicist at the University of California, Berkeley, and a former student of Hawking’s, says that this latest contribution highlights how “abhorrent” physicists find the potential existence of firewalls. However, he is also cautious about Hawking’s solution. “The idea that there are no points from which you cannot escape a black hole is in some ways an even more radical and problematic suggestion than the existence of firewalls,” he says. “But the fact that we’re still discussing such questions 40 years after Hawking’s first papers on black holes and information is testament to their enormous significance.”

“Our hypothesis is that the inside of a black hole — it may not be there. Probably that’s the end of space itself. There’s no inside at all.”
– Joe Polchinski, physicist

It could rightly be called the most massive debate of the year: Physicists are locked in an argument over what happens if you fall into a black hole.

On one side are those who support the traditional view from Albert Einstein. On the other, backers of a radical new theory that preserves the very core of modern physics by destroying space itself.

Regardless of who’s right, the new take on black holes could lead to a better understanding of the universe, says Leonard Susskind, a physicist at Stanford University. “This is the kind of thing where progress comes from.”

Black holes are regions of space so dense that nothing, not even light, can escape.

There’s a long-standing view about what would happen if you fell into one of these holes. At first, you’re not going to notice much of anything — but the black hole’s gravity is getting stronger and stronger. And eventually you pass a point of no return.

“It’s kind of like you’re rowing on Niagara Falls, and you pass the point [where] you can’t row fast enough to escape the current,” Susskind says. “Well, you’re doomed at that point. But passing the point of no return — you wouldn’t even notice it.”

Now you can’t get out. And gravity from the black hole is starting to pull on your feet more than your head. “The gravity wants to sort of stretch you in one direction and squeeze you in another,” says Joe Polchinski, a physicist at the University of California, Santa Barbara. He says the technical term for this stretching is spaghettification.

“It’d be kind of medieval,” says Polchinski. “It’d be like something on Game of Thrones.”

In Einstein’s version of events, that’s the end. But Polchinski has a new version of things: “Our hypothesis is that the inside of a black hole — it may not be there,” he says.

So what’s inside the black hole? Nothing, Polchinski says. Actually even less than that. “Probably that’s the end of space itself; there’s no inside at all.”

This “no inside” idea may sound outrageous, but it’s actually a stab at solving an even bigger problem with black holes.

According to the dominant theory of physics — quantum mechanics — information can never disappear from the universe. Put another way, the atoms in your body are configured in a particular way. They can be rearranged (radically, if you happen to slip inside a black hole). But it should always be possible, at least in theory, to look at all those rearranged atoms and work out that they were once part of a human of your dimensions and personality.

This rule is absolutely fundamental. “Everything is built on it,” says Susskind. “If it were violated, everything falls apart.”

For a long time, black holes stretched this rule, but they didn’t break it. People thought that if you fell into a black hole, your spaghettified remains would always be in there, trapped beyond the point of no return.

That is, until the famous physicist Stephen Hawking came along. In the 1970s, Hawking showed that, according to quantum mechanics, a black hole evaporates — very slowly, it vanishes. And that breaks the fundamental rule because all that information that was once in your spaghettified remains vanishes with it.

This didn’t seem to bother Hawking. (“I’m not a psychiatrist, and I can’t psychoanalyze him,” Susskind says.) But it has bothered a lot of other physicists since.

And in the intervening years, work by another theorist — Juan Maldacena, with Princeton’s Institute for Advanced Study — seems to show that Hawking was wrong. Information has to get out of the black hole … somehow. But nobody knows how.

So Polchinski took another look. “We took Hawking’s original argument,” he says, “and very carefully ran it backwards.”

And Polchinski and his colleagues found one way to keep things from vanishing when they fall inside a black hole — they got rid of the inside. By tearing apart the fabric of space beyond the point of no return, the group was able to preserve the information rule of quantum mechanics.

In this version, anything falling into a black hole is instantly vaporized at the point of no return, in a fiery storm of quantum particles. Particles coming from the hole collectively carry away any and all information about the object that’s falling in.

So in Polchinski’s version, when you fall into a black hole, you don’t disappear. Instead, you smack into the end of the universe.

“You just come to the end of space, and there’s nothing beyond it. Terminated,” Susskind says. All the information once contained in your atoms is re-radiated in a quantum mechanical fire.

This new version seems too radical to Susskind. “I don’t think this is true,” he says. “In fact, I think almost nobody thinks this is true — that space falls apart inside a black hole.”

Even Polchinski still feels that black holes should have insides. “My gut believes that the black hole has an interior,” he says. But, he adds, nobody’s been able to disprove his hypothesis that it doesn’t.

“Every counterargument I’ve seen is flawed,” Polchinski says.

Susskind agrees: “Nobody quite knows exactly what’s wrong with their argument — and that’s what makes this so important and interesting.”

And as crazy as it sounds, this is progress. In the year ahead, Susskind hopes someone can find the flaw in Polchinski’s argument, just the way Polchinski found a flaw in Stephen Hawking’s argument. But it will be a while before we understand black holes inside and out.


The human brain is complex. Along with performing millions of mundane acts, it composes concertos, issues manifestos and comes up with elegant solutions to equations. It’s the wellspring of all human feelings, behaviors, experiences as well as the repository of memory and self-awareness. So it’s no surprise that the brain remains a mystery unto itself.

Adding to that mystery is the contention that humans “only” employ 10 percent of their brain. If only regular folk could tap that other 90 percent, they too could become savants who remember π to the twenty-thousandth decimal place or perhaps even have telekinetic powers.

Though an alluring idea, the “10 percent myth” is so wrong it is almost laughable, says neurologist Barry Gordon at Johns Hopkins School of Medicine in Baltimore. Although there’s no definitive culprit to pin the blame on for starting this legend, the notion has been linked to the American psychologist and author William James, who argued in The Energies of Men that “We are making use of only a small part of our possible mental and physical resources.” It’s also been associated with Albert Einstein, who supposedly used it to explain his towering intellect.

The myth’s durability, Gordon says, stems from people’s conceptions about their own brains: they see their own shortcomings as evidence of the existence of untapped gray matter. This is a false assumption. What is correct, however, is that at certain moments in anyone’s life, such as when we are simply at rest and thinking, we may be using only 10 percent of our brains.

“It turns out though, that we use virtually every part of the brain, and that [most of] the brain is active almost all the time,” Gordon adds. “Let’s put it this way: the brain represents three percent of the body’s weight and uses 20 percent of the body’s energy.”

The average human brain weighs about three pounds and comprises the hefty cerebrum, which is the largest portion and performs all higher cognitive functions; the cerebellum, responsible for motor functions, such as the coordination of movement and balance; and the brain stem, dedicated to involuntary functions like breathing. The majority of the energy consumed by the brain powers the rapid firing of millions of neurons communicating with each other. Scientists think it is such neuronal firing and connecting that gives rise to all of the brain’s higher functions. The rest of its energy is used for controlling other activities—both unconscious activities, such as heart rate, and conscious ones, such as driving a car.

Although it’s true that not all of the brain’s regions are firing concurrently at any given moment, brain researchers using imaging technology have shown that, like the body’s muscles, most are continually active over a 24-hour period. “Evidence would show over a day you use 100 percent of the brain,” says John Henley, a neurologist at the Mayo Clinic in Rochester, Minn. Even in sleep, areas such as the frontal cortex, which controls things like higher level thinking and self-awareness, or the somatosensory areas, which help people sense their surroundings, are active, Henley explains.

Take the simple act of pouring coffee in the morning: In walking toward the coffeepot, reaching for it, pouring the brew into the mug, even leaving extra room for cream, the occipital and parietal lobes, motor sensory and sensory motor cortices, basal ganglia, cerebellum and frontal lobes all activate. A lightning storm of neuronal activity occurs almost across the entire brain in the time span of a few seconds.

“This isn’t to say that if the brain were damaged that you wouldn’t be able to perform daily duties,” Henley continues. “There are people who have injured their brains or had parts of it removed who still live fairly normal lives, but that is because the brain has a way of compensating and making sure that what’s left takes over the activity.”

Being able to map the brain’s various regions and functions is part and parcel of understanding the possible side effects should a given region begin to fail. Experts know that neurons that perform similar functions tend to cluster together. For example, neurons that control the thumb’s movement are arranged next to those that control the forefinger. Thus, when undertaking brain surgery, neurosurgeons carefully avoid neural clusters related to vision, hearing and movement, enabling the brain to retain as many of its functions as possible.

What’s not understood is how clusters of neurons from the diverse regions of the brain collaborate to form consciousness. So far, there’s no evidence that there is one site for consciousness, which leads experts to believe that it is truly a collective neural effort. Another mystery hidden within our crinkled cortices is that out of all the brain’s cells, only 10 percent are neurons; the other 90 percent are glial cells, which encapsulate and support neurons, but whose function remains largely unknown. Ultimately, it’s not that we use 10 percent of our brains, merely that we only understand about 10 percent of how it functions.

Thanks to SRW for bringing this to the attention of the It’s Interesting community.


A new study reveals the contribution of a little-known Austrian physicist, Friedrich Hasenöhrl, to uncovering a precursor to Einstein’s famous equation.

Two American physicists outline the role played by Austrian physicist Friedrich Hasenöhrl in establishing the proportionality between the energy (E) of a quantity of matter and its mass (m) in a cavity filled with radiation. In a paper in the European Physical Journal H, Stephen Boughn from Haverford College in Pennsylvania and Tony Rothman from Princeton University in New Jersey argue that Hasenöhrl’s work, for which he now receives little credit, may have contributed to the famous equation E=mc2.

According to science philosopher Thomas Kuhn, scientific progress occurs through paradigm shifts, which depend on the cultural and historical circumstances of groups of scientists. Concurring with this idea, the authors believe the notion that mass and energy should be related did not originate solely with Hasenöhrl. Nor did it suddenly emerge in 1905, when Einstein published his paper, as popular mythology would have it.

Given the lack of recognition for Hasenöhrl’s contribution, the authors examined the Austrian physicist’s original work on blackbody radiation in a cavity with perfectly reflective walls, which sought to determine how the blackbody’s mass changes when the cavity is moving relative to the observer.

They then explored why the Austrian physicist arrived at an energy/mass correlation with the wrong factor, namely the equation E = (3/8)mc2. Hasenöhrl’s error, they believe, stems from failing to account for the mass lost by the blackbody while radiating.

Before Hasenöhrl focused on cavity radiation, other physicists, including French mathematician Henri Poincaré and German physicist Max Abraham, showed the existence of an inertial mass associated with electromagnetic energy. In 1905, Einstein gave the correct relationship between inertial mass and electromagnetic energy, E=mc2. Nevertheless, it was not until 1911 that German physicist Max von Laue generalised it to include all forms of energy.


The science fiction vision of stars flashing by as streaks when spaceships travel faster than light isn’t what the scene would actually look like, a team of physics students says.

Instead, the view out the windows of a vehicle traveling through hyperspace would be more like a centralized bright glow, calculations show.

The finding contradicts the familiar images of stretched out starlight streaking past the windows of the Millennium Falcon in “Star Wars” and the Starship Enterprise in “Star Trek.” In those films and television series, as spaceships engage warp drive or hyperdrive and approach the speed of light, stars morph from points of light to long streaks that stretch out past the ship.

But passengers on the Millennium Falcon or the Enterprise actually wouldn’t be able to see stars at all when traveling that fast, found a group of physics master’s students at England’s University of Leicester. Rather, a phenomenon called the Doppler Effect, which affects the wavelength of radiation from moving sources, would cause stars’ light to shift out of the visible spectrum and into the X-ray range, where human eyes wouldn’t be able to see it, the students found.

“The resultant effects we worked out were based on Einstein’s theory of Special Relativity, so while we may not be used to them in our daily lives, Han Solo and his crew should certainly understand its implications,” Leicester student Joshua Argyle said in a statement.

The Doppler Effect is the reason an ambulance’s siren sounds higher-pitched when it’s coming toward you than when it’s moving away: the approaching source compresses the sound waves, raising their frequency and shortening their wavelength, which raises the pitch.

The same thing would happen to the light of stars when a spaceship began to move toward them at significant speed. And other light, such as the pervasive glow of the universe called the cosmic microwave background radiation, which is left over from the Big Bang, would be shifted out of the microwave range and into the visible spectrum, the students found.
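Both effects described above follow from the standard Doppler formulas. As a rough illustration only (the specific speeds and wavelengths below are illustrative choices, not figures from the Leicester study), a short sketch in Python:

```python
import math

def siren_frequency(f_source_hz, v_source_ms, v_sound_ms=343.0):
    """Classical Doppler: frequency heard from a siren approaching at v_source_ms."""
    return f_source_hz * v_sound_ms / (v_sound_ms - v_source_ms)

def blueshifted_wavelength(lam_source_m, beta):
    """Relativistic Doppler shift for a light source approached head-on.
    beta = v/c, with 0 <= beta < 1."""
    return lam_source_m * math.sqrt((1.0 - beta) / (1.0 + beta))

# A 700 Hz siren approaching at 30 m/s sounds noticeably higher-pitched:
print(round(siren_frequency(700, 30.0)))  # ~767 Hz

# Green starlight (~500 nm) seen head-on at 99.999% of light speed lands
# near 1 nm, deep in the X-ray band and invisible to human eyes:
print(blueshifted_wavelength(500e-9, 0.99999) * 1e9, "nm")
```

The same blueshift formula, applied to millimetre-wavelength microwave background radiation, shows how the CMB could be pushed into the visible range at sufficiently high speeds.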

“If the Millennium Falcon existed and really could travel that fast, sunglasses would certainly be advisable,” said research team member Riley Connors. “On top of this, the ship would need something to protect the crew from harmful X-ray radiation.”

The increased X-ray radiation from shifted starlight would even push back on a spaceship traveling in hyperdrive, the team found, slowing down the vehicle with a pressure similar to the force felt at the bottom of the Pacific Ocean. In fact, such a spacecraft would need to carry extra energy reserves to counter this pressure and press ahead.

Whether the scientific reality of these effects will be taken into consideration on future Star Wars films is still an open question.

“Perhaps Disney should take the physical implications of such high speed travel into account in their forthcoming films,” said team member Katie Dexter.

Connors, Dexter, Argyle, and fourth team member Cameron Scoular published their findings in this year’s issue of the University of Leicester’s Journal of Physics Special Topics.


Physicist Albert Einstein’s brain had an “extraordinary” prefrontal cortex – unlike those of most people – which may have contributed to his remarkable genius, a new study has claimed.

According to the study led by Florida State University evolutionary anthropologist Dean Falk, portions of Einstein’s brain have been found to be unlike those of most people and could be related to his extraordinary cognitive abilities.

Falk and his colleagues describe for the first time the entire cerebral cortex of Einstein’s brain from an examination of 14 recently discovered photographs.

The researchers compared Einstein’s brain to 85 “normal” human brains and, in light of current functional imaging studies, interpreted its unusual features.

“Although the overall size and asymmetrical shape of Einstein’s brain were normal, the prefrontal, somatosensory, primary motor, parietal, temporal and occipital cortices were extraordinary.

“These may have provided the neurological underpinnings for some of his visuospatial and mathematical abilities, for instance,” said Falk.

The study was published in the journal Brain.

On Einstein’s death in 1955, his brain was removed and photographed from multiple angles with the permission of his family. Furthermore, it was sectioned into 240 blocks from which histological slides were prepared.

A great majority of the photographs, blocks and slides were lost from public sight for more than 55 years. The 14 photographs used by the researchers now are held by the National Museum of Health and Medicine.

The study also published the “roadmap” to Einstein’s brain prepared in 1955 by Dr Thomas Harvey to illustrate the locations within his previously whole brain of 240 dissected blocks of tissue, which provides a key to locating the origins within the brain of the newly emerged histological slides.