The mathematically-determined best way to choose a parking spot

Two strategies for choosing a parking spot save far more time than a third, according to researchers’ estimates.

Physicists have compared three typical strategies for finding a parking spot to determine which saves the most time — at least in a highly simplified parking scenario.

Paul Krapivsky at Boston University in Massachusetts and Sidney Redner at the Santa Fe Institute in New Mexico modelled an idealized car park in which the parking spots are in a single row between the car park’s entrance and the drivers’ ultimate destination, such as a building.

An ‘optimistic’ strategy, which aims to minimize the time spent walking, is to drive straight to the destination and then backtrack to find a spot. Drivers using a ‘meek’ strategy try to reduce the time spent driving by picking the spot immediately before the first parked car that they come across. An intermediate, or ‘prudent’, strategy is to park in the first encountered gap between two cars.

The authors calculated that the prudent strategy is on average slightly more efficient — in terms of time spent walking and driving — than the optimistic one; the meek strategy was a distant third. Still, even the prudent strategy left many good spots near the target empty.
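The three strategies lend themselves to a rough back-of-the-envelope simulation. The sketch below is not Krapivsky and Redner’s actual model (theirs tracks cars arriving and departing over time); it simply scores each rule once against a randomly occupied single-row lot, with made-up parameters (lot size, occupancy density, and a walking-speed penalty), so the rankings it prints depend entirely on those assumptions:

```python
import random

def trial(strategy, n=100, density=0.5, walk_weight=2.0, rng=random):
    """Score one arriving car in a single-row lot. Spot 1 abuts the
    destination; the entrance lies just beyond spot n. Cost is driving
    distance plus walking distance weighted by walk_weight. Returns
    None if the strategy finds no spot on this pass."""
    occ = [rng.random() < density for _ in range(n)]  # occ[i]: spot i+1 taken
    chosen = drive = None
    if strategy == "meek":
        # park in the spot immediately before the first parked car seen
        for s in range(n, 0, -1):
            if occ[s - 1]:
                if s < n and not occ[s]:
                    chosen, drive = s + 1, n - (s + 1)
                break
    elif strategy == "prudent":
        # park in the first empty spot that sits between two parked cars
        for s in range(n - 1, 1, -1):
            if not occ[s - 1] and occ[s] and occ[s - 2]:
                chosen, drive = s, n - s
                break
    elif strategy == "optimistic":
        # drive to the destination, then backtrack to the closest empty spot
        for s in range(1, n + 1):
            if not occ[s - 1]:
                chosen, drive = s, n + s  # n to reach the end, s to back up
                break
    if chosen is None:
        return None
    return drive + walk_weight * chosen

def average_cost(strategy, trials=10000, seed=1):
    rng = random.Random(seed)
    costs = [c for _ in range(trials)
             if (c := trial(strategy, rng=rng)) is not None]
    return sum(costs) / len(costs)

for strat in ("meek", "prudent", "optimistic"):
    print(strat, round(average_cost(strat), 1))
```

Because the occupancy here is static and uniform, the toy will not reproduce the paper’s precise rankings; it only makes the trade-off between driving and walking concrete.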

https://www.nature.com/articles/d41586-019-02903-y?utm_source=Nature+Briefing&utm_campaign=c699f7417d-briefing-dy-20190927&utm_medium=email&utm_term=0_c9dfd39373-c699f7417d-44039353

Bees Appear Able to Comprehend the Concept of Zero

Honeybees can identify a piece of paper with zero dots as “less than” a paper with a few dots. Such a feat puts the insects in a select group—including the African grey parrot, nonhuman primates, and preschool children—that can understand the concept of zero, researchers report June 7 in Science.

“The fact that the bees generalized the rule ‘choose less’ to [blank paper] was consequently really surprising,” study coauthor Aurore Avarguès-Weber, a cognitive neuroscientist at the University of Toulouse, tells The Scientist in an email. “It demonstrated that bees consider ‘nothing’ as a quantity below any number.”

In past studies, researchers have shown that bees can count up to five, but whether the insects could grasp more-complex ideas, such as addition or nothingness, has been unclear. In the latest study, Avarguès-Weber and her colleagues tested the bees’ ability to comprehend the absence of a stimulus by first training the insects to consistently choose sheets of paper either with fewer or more dots by landing on a tiny platform near the paper with the dots. If the bees chose correctly, they were rewarded with a sugary drink. The bees performed the task surprisingly well, Avarguès-Weber says. “The fact that they can do it while we were also controlling for potential confounding parameters confirms their capacity to discriminate numbers.”

The team then tested the bees’ ability to distinguish a blank piece of paper, or what the researchers call an empty set, from a sheet with one dot and found the insects chose correctly about 63 percent of the time. The behavior reveals “an understanding that an empty set is lower than one, which is challenging for some other animals,” the researchers write in the paper.

That bees can use the idea of “less than” to extrapolate that nothing has a quantitative nature is “very surprising,” says Andreas Nieder of the University of Tübingen in Germany who was not involved in the study. “Bees have minibrains compared with human brains—fewer than a million neurons compared with our 86 billion—yet they can understand the concept of an empty set.”

Nieder suggests honeybees, similar to humans, may have developed this ability to comprehend the absence of something as a survival advantage, to help with foraging, avoiding predation, and interacting with other bees of the same species. The absence of food or a mate is important to understand, he says.

Clint Perry, who studies bees at Queen Mary University of London and was not involved in the study, is a bit more cautious about the results. “I applaud these researchers. It is very difficult to test these types of cognitive abilities in bees,” he says. “But I don’t feel convinced that they were actually showing that the bees could understand the concept of zero or even the absence of information.” Perry suggests the bees might have selected where to land based solely on the total amount of black or white on each paper and that’s the choice that got rewarded, rather than distinguishing the number of dots or lack of them.

Avarguès-Weber and her colleagues argue, however, that the bees were always rewarded when shown dots. “In the test with zero (white paper) versus an image with a few dots, the bees chose the white picture without any previous experience with such stimulus. A choice based exclusively on learning would consist in choosing an image similar to the rewarded ones, ones presenting dots,” she says.

Perry says he’d like to see better control experiments to confirm the finding, while Nieder is interested in the underlying brain physiology that might drive how the insects comprehend nothingness. How the absence of a stimulus is represented in the human brain hasn’t been well studied, though it has been explored in individual neurons in the brains of nonhuman primates. It could be even harder to study in bees, because they have much smaller brains, Nieder says. Setting up the experiments to test behavior and record brain activity would be challenging.

Avarguès-Weber and her colleagues propose a solution to that challenge—virtual reality. “We are developing a setup in which a tethered bee could learn a cognitive task as done in free-flying conditions so we could record brain activity in parallel,” she says. The team also plans to test the bees’ potential ability to perform simple addition or subtraction.

S. Howard et al., “Numerical ordering of zero in honey bees,” Science, doi:10.1126/science.aar4975, 2018.

https://www.the-scientist.com/?articles.view/articleNo/54776/title/Bees-Appear-Able-to-Comprehend-the-Concept-of-Zero/

An explanation of the Standard Model of Physics

The Standard Model. What a dull name for the most accurate scientific theory known to human beings.

More than a quarter of the Nobel Prizes in physics of the last century are direct inputs to or direct results of the Standard Model. Yet its name suggests that if you can afford a few extra dollars a month you should buy the upgrade. As a theoretical physicist, I’d prefer The Absolutely Amazing Theory of Almost Everything. That’s what the Standard Model really is.

Many recall the excitement among scientists and media over the 2012 discovery of the Higgs boson. But that much-ballyhooed event didn’t come out of the blue – it capped a five-decade undefeated streak for the Standard Model. Every fundamental force but gravity is included in it. Every attempt to overturn it – to demonstrate in the laboratory that it must be substantially reworked – has failed, and there have been many such attempts over the past 50 years.

In short, the Standard Model answers this question: What is everything made of, and how does it hold together?

The smallest building blocks

You know, of course, that the world around us is made of molecules, and molecules are made of atoms. Chemist Dmitri Mendeleev figured that out in the 1860s and organized all atoms – that is, the elements – into the periodic table that you probably studied in middle school. But there are 118 different chemical elements. There’s antimony, arsenic, aluminum, selenium … and 114 more.

But these elements can be broken down further.

Physicists like things simple. We want to boil things down to their essence, a few basic building blocks. Over a hundred chemical elements is not simple. The ancients believed that everything is made of just five elements – earth, water, fire, air and aether. Five is much simpler than 118. It’s also wrong.

By 1932, scientists knew that all those atoms are made of just three particles – neutrons, protons and electrons. The neutrons and protons are bound together tightly into the nucleus. The electrons, thousands of times lighter, whirl around the nucleus at speeds approaching that of light. Physicists Planck, Bohr, Schroedinger, Heisenberg and friends had invented a new science – quantum mechanics – to explain this motion.

That would have been a satisfying place to stop. Just three particles. Three is even simpler than five. But held together how? The negatively charged electrons and positively charged protons are bound together by electromagnetism. But the protons are all huddled together in the nucleus and their positive charges should be pushing them powerfully apart. The neutral neutrons can’t help.

What binds these protons and neutrons together? “Divine intervention” a man on a Toronto street corner told me; he had a pamphlet, I could read all about it. But this scenario seemed like a lot of trouble even for a divine being – keeping tabs on every single one of the universe’s 10⁸⁰ protons and neutrons and bending them to its will.

Expanding the zoo of particles

Meanwhile, nature cruelly declined to keep its zoo of particles to just three. Really four, because we should count the photon, the particle of light that Einstein described. Four grew to five when Anderson measured electrons with positive charge – positrons – striking the Earth from outer space. At least Dirac had predicted these first anti-matter particles. Five became six when the pion, which Yukawa predicted would hold the nucleus together, was found.

Then came the muon – 200 times heavier than the electron, but otherwise a twin. “Who ordered that?” I.I. Rabi quipped. That sums it up. Number seven. Not only not simple, redundant.

By the 1960s there were hundreds of “fundamental” particles. In place of the well-organized periodic table, there were just long lists of baryons (heavy particles like protons and neutrons), mesons (like Yukawa’s pions) and leptons (light particles like the electron, and the elusive neutrinos) – with no organization and no guiding principles.

Into this breach sidled the Standard Model. It was not an overnight flash of brilliance. No Archimedes leapt out of a bathtub shouting “eureka.” Instead, there was a series of crucial insights by a few key individuals in the mid-1960s that transformed this quagmire into a simple theory, and then five decades of experimental verification and theoretical elaboration.

Quarks. They come in six varieties we call flavors. Like ice cream, except not as tasty. Instead of vanilla, chocolate and so on, we have up, down, strange, charm, bottom and top. In 1964, Gell-Mann and Zweig taught us the recipes: Mix and match any three quarks to get a baryon. Protons are two ups and a down quark bound together; neutrons are two downs and an up. Choose one quark and one antiquark to get a meson. A pion is an up or a down quark bound to an anti-up or an anti-down. All the material of our daily lives is made of just up and down quarks and anti-quarks and electrons.

The Standard Model of elementary particles provides an ingredients list for everything around us.
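The recipes are easy to check with a little arithmetic: up quarks carry charge +2/3, down quarks −1/3, and antiquarks the opposite of their quarks (these charge values are textbook physics, not something stated above). A few lines of Python confirm that the recipes reproduce the familiar charges:

```python
from fractions import Fraction

# Standard charges of the light quarks, in units of the proton charge;
# an antiquark carries the opposite charge of its quark.
CHARGE = {"up": Fraction(2, 3), "down": Fraction(-1, 3)}

def charge(*quarks):
    """Total electric charge of a bound state of quarks and antiquarks."""
    total = Fraction(0)
    for q in quarks:
        if q.startswith("anti-"):
            total -= CHARGE[q[len("anti-"):]]
        else:
            total += CHARGE[q]
    return total

print(charge("up", "up", "down"))    # proton: 1
print(charge("up", "down", "down"))  # neutron: 0
print(charge("up", "anti-down"))     # positively charged pion: 1
```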

Simple. Well, simple-ish, because keeping those quarks bound is a feat. They are tied to one another so tightly that you never ever find a quark or anti-quark on its own. The theory of that binding, and the particles called gluons (chuckle) that are responsible, is called quantum chromodynamics. It’s a vital piece of the Standard Model, but mathematically difficult, even posing an unsolved problem of basic mathematics. We physicists do our best to calculate with it, but we’re still learning how.

The other aspect of the Standard Model is “A Model of Leptons.” That’s the name of the landmark 1967 paper by Steven Weinberg that pulled together quantum mechanics with the vital pieces of knowledge of how particles interact and organized the two into a single theory. It incorporated the familiar electromagnetism, joined it with what physicists called “the weak force” that causes certain radioactive decays, and explained that they were different aspects of the same force. It incorporated the Higgs mechanism for giving mass to fundamental particles.

Since then, the Standard Model has predicted the results of experiment after experiment, including the discovery of several varieties of quarks and of the W and Z bosons – heavy particles that are for weak interactions what the photon is for electromagnetism. The possibility that neutrinos aren’t massless was overlooked in the 1960s, but slipped easily into the Standard Model in the 1990s, a few decades late to the party.

Discovering the Higgs boson in 2012, long predicted by the Standard Model and long sought after, was a thrill but not a surprise. It was yet another crucial victory for the Standard Model over the dark forces that particle physicists have repeatedly warned loomed over the horizon. Concerned that the Standard Model didn’t adequately embody their expectations of simplicity, worried about its mathematical self-consistency, or looking ahead to the eventual necessity to bring the force of gravity into the fold, physicists have made numerous proposals for theories beyond the Standard Model. These bear exciting names like Grand Unified Theories, Supersymmetry, Technicolor, and String Theory.

Sadly, at least for their proponents, beyond-the-Standard-Model theories have not yet successfully predicted any new experimental phenomenon or any experimental discrepancy with the Standard Model.

After five decades, far from requiring an upgrade, the Standard Model is worthy of celebration as the Absolutely Amazing Theory of Almost Everything.

Kristina Bigsby just solved college football’s biggest mystery. She can predict where high school players will commit.

By Jacob Bogage

There is an entire industry built up around deciphering where 16- and 17-year-olds will play college football. Websites boast “crystal ball” predictions of where top high school recruits will suit up. Companies charge for premium subscriptions with claims that they can decode the caprice and whimsy of children.

And then there’s Kristina Bigsby, a PhD candidate at the University of Iowa who is probably better than all of them.

She developed a mathematical model that predicts with 70 percent accuracy where a high school football player will go to college. And she uses nothing but their basic biographical information and Twitter accounts.

In other words, she can read the minds of some of sports’ most sought-after prospects by reading their tweets and looking up some basic biographical information. Her paper on those findings was published this month in the INFORMS journal “Decision Analysis.”

Put another way, she could completely own Nick Saban and Urban Meyer on the recruiting circuit if she really wanted to, with just a fancy spreadsheet and some half-decent computer code.

“If you really want to see where someone is committing, you shouldn’t overlook [social media] data,” she said.

Bigsby developed the model as part of her PhD program in information science. She wanted to study wrestling recruiting — Iowa consistently fields a top wrestling team — but went with football because the sport was more popular and because of the national obsession around recruiting classes.

She mined data from 573 athletes in 2016 from the 247Sports recruiting database who had at least two Division I scholarship offers and public Twitter accounts. Then she pulled their tweets, followers and accounts they followed each month and distilled the data into a model that makes it all easy to understand.

She found that if a recruit tweeted a hashtag about a school, his likelihood of committing there jumped 300 percent. For every coach the athlete followed from a given school, his likelihood of committing went up 47 percent. When a coach follows an athlete, likelihood increases 40 percent.

“The most significant actions online are the actions the athlete is doing,” Bigsby said. “Who is he following? What is he tweeting? What hashtags is he using?”

Her model crunched those numbers along with other data sets – such as a college’s location relative to the recruit’s home town, its academic ranking, its recent football performance, and more – and spit out a list of universities a recruit was likely to attend, along with each school’s odds.
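Treating the reported percentages as multiplicative boosts to a recruit’s odds gives a feel for how such a model might score a school. The sketch below is purely illustrative, with hypothetical signal names; Bigsby’s published model used many more features and a proper statistical fit:

```python
from math import prod

# The article's reported effect sizes, treated here as multiplicative
# boosts to a recruit's baseline odds for one school. The signal names
# are hypothetical, invented for this sketch.
BOOSTS = {
    "tweeted_school_hashtag": 4.00,     # +300 percent
    "coach_followed_by_athlete": 1.47,  # +47 percent, per coach
    "athlete_followed_by_coach": 1.40,  # +40 percent, per coach
}

def relative_odds(signals, baseline=1.0):
    """Multiply baseline odds by the boost for each observed signal;
    `signals` maps a signal name to how many times it was observed."""
    return baseline * prod(BOOSTS[name] ** count
                           for name, count in signals.items())

# A recruit who tweeted one Iowa hashtag and follows two Iowa coaches:
print(round(relative_odds({"tweeted_school_hashtag": 1,
                           "coach_followed_by_athlete": 2}), 2))  # 8.64
```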

The model correctly predicted a recruit’s choice 70 percent of the time. And if the model was wrong, recruits generally chose the “second-place” college, Bigsby’s paper shows.

“We can narrow most people’s choices down to two schools,” she said, “but you never know what teenagers are thinking.”

The model could provide better predictions, Bigsby said, if researchers pulled recruits’ Twitter data every week instead of every month. Plus, she’s still tweaking the model to better interpret what tweets mean.

The tweets “Just got an offer from Iowa” and “Can’t wait to visit Iowa” mean very different things, Bigsby said. The first is self-promoting, and probably doesn’t do much for the Hawkeyes’ chances of landing a commitment. The second one is “ingratiating.” The athlete is trying to join an online community conversation about Iowa. That certainly helps the Hawkeyes’ odds.

So imagine the following: Alabama beats South Carolina and then has a bye week. Crimson Tide assistant coaches fan out across the continental United States on recruiting trips equipped with weekly reports on prospects’ online activity and their current likelihood of choosing Alabama.

That’s what this model can do, Bigsby said. It can really give teams an edge in the valuable, year-round recruiting game.

Only one problem: You need an information science expert to run the numbers. Bigsby has a potential solution for that, too.

“I’m careening toward graduation,” she said. “If a football team wants to call me, I will certainly pick up the phone.”

“Beautiful Mind” John Nash’s Schizophrenia “Disappeared” as he aged

The Princeton mathematician, who along with his wife died in a car crash last month, claimed that aging, rather than medicine, helped improve his condition

Mathematician John Nash, who died May 23 in a car accident, was known for his decades-long battle with schizophrenia—a struggle famously depicted in the 2001 Oscar-winning film “A Beautiful Mind.” Nash apparently recovered from the disease later in life, a recovery he said came without medication.

But how often do people recover from schizophrenia, and how does such a destructive disease disappear?

Nash developed symptoms of schizophrenia in the late 1950s, when he was around age 30, after he made groundbreaking contributions to the field of mathematics, including the extension of game theory, or the math of decision making. He began to exhibit bizarre behavior and experience paranoia and delusions. Over the next several decades, he was hospitalized several times, and was on and off anti-psychotic medications.

But in the 1980s, when Nash was in his 50s, his condition began to improve. In an email to a colleague in the mid-1990s, Nash said, “I emerged from irrational thinking, ultimately, without medicine other than the natural hormonal changes of aging,” according to The New York Times. Nash and his wife Alicia died, at ages 86 and 82, respectively, in a crash on the New Jersey Turnpike while en route home from a trip on which Nash had received a prestigious award for his work.

Studies done in the 1930s, before medications for schizophrenia were available, found that about 20 percent of patients recovered on their own, while 80 percent did not, said Dr. Gilda Moreno, a clinical psychologist at Nicklaus Children’s Hospital in Miami. More recent studies have found that, with treatment, up to 60 percent of schizophrenia patients can achieve remission, which researchers define as having minimal symptoms for at least six months, according to a 2010 review study in the journal Advances in Psychiatric Treatment.

It’s not clear why only some schizophrenia patients get better, but researchers do know that a number of factors are linked with better outcomes. Nash appeared to have had many of these factors in his favor, Moreno said.

People who have a later onset of the disease tend to do better than those who experience their first episode of psychosis in their teens, Moreno said. (“Psychosis” refers to losing touch with reality, exhibited by symptoms like delusions.) Nash was 30 years old when he started to experience symptoms of schizophrenia, which include hallucinations and delusions.

In addition, social factors—such as having a job, a supportive community and a family that is able to help with everyday tasks—are also linked with better outcomes for schizophrenia patients, Moreno said.

Nash had supportive colleagues who helped him find jobs where people were protective of him, and a wife who cared for him and took him into her house even after the couple divorced, which may have prevented him from becoming homeless, according to an episode of the PBS show “American Experience” that focused on Nash. “He had all those protective factors,” Moreno said.

Some researchers have noted that patients with schizophrenia tend to get better as they age.

“We know, as a general rule, with exceptions, that as people with schizophrenia age, they have fewer symptoms, such as delusions and hallucinations,” Dr. E. Fuller Torrey, a psychiatrist who specializes in schizophrenia, said in an interview with “American Experience.”

However, Moreno said that many patients will get worse over time if they don’t have access to proper medical care and are not in a supportive environment.

“When you have a schizophrenic who has had the multiple psychotic breaks, there is a downward path,” Moreno said. Patients suffer financially because they can’t work, physically because they can’t take care of themselves, and socially because their bizarre behaviors distance them from others, Moreno said.

It may be that the people who have supportive environments are the ones who are able to live to an older age, and have a better outcome, Moreno said.

Still, there is no guarantee that someone will recover from schizophrenia—a patient may have all the protective factors but not recover, Moreno said. Most patients cope with their symptoms for their entire lives, but many are also able to live rewarding lives, according to the National Institute of Mental Health.

http://www.scientificamerican.com/article/beautiful-mind-john-nash-s-schizophrenia-disappeared-as-he-aged/

Baltimore Ravens Offensive Lineman John Urschel Publishes Paper In Math Journal

Some NFL players spend their offseason working out. Others travel around the world. Baltimore Ravens offensive lineman John Urschel has done both while also getting an article published in a math journal.

Urschel, the Ravens’ 2014 fifth-round pick who graduated from Penn State with a 4.0 GPA, also happens to be a brilliant mathematician. This week he and several co-authors published a piece titled “A Cascadic Multigrid Algorithm for Computing the Fiedler Vector of Graph Laplacians” in the Journal of Computational Mathematics. You can read the full piece here: http://arxiv.org/abs/1412.0565

Here’s the summary of the paper:

“In this paper, we develop a cascadic multigrid algorithm for fast computation of the Fiedler vector of a graph Laplacian, namely, the eigenvector corresponding to the second smallest eigenvalue. This vector has been found to have applications in fields such as graph partitioning and graph drawing. The algorithm is a purely algebraic approach based on a heavy edge coarsening scheme and pointwise smoothing for refinement. To gain theoretical insight, we also consider the related cascadic multigrid method in the geometric setting for elliptic eigenvalue problems and show its uniform convergence under certain assumptions. Numerical tests are presented for computing the Fiedler vector of several practical graphs, and numerical results show the efficiency and optimality of our proposed cascadic multigrid algorithm.”
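For readers curious what a Fiedler vector actually is: it is the eigenvector paired with the second-smallest eigenvalue of a graph’s Laplacian, and its sign pattern is what makes it useful for graph partitioning. The sketch below has nothing to do with the paper’s multigrid algorithm; it just verifies the known closed-form Fiedler vector of a path graph in plain Python:

```python
import math

def path_laplacian(n):
    """Graph Laplacian L = D - A of the path graph on n vertices."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):  # edge between vertices i and i+1
        L[i][i] += 1.0
        L[i + 1][i + 1] += 1.0
        L[i][i + 1] -= 1.0
        L[i + 1][i] -= 1.0
    return L

def matvec(L, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in L]

n = 8
L = path_laplacian(n)
# Known closed form for the path graph: the Fiedler vector is
# v[i] = cos(pi * (i + 0.5) / n), with eigenvalue 2 - 2*cos(pi / n),
# the second-smallest eigenvalue of L (the smallest is 0).
v = [math.cos(math.pi * (i + 0.5) / n) for i in range(n)]
lam = 2 - 2 * math.cos(math.pi / n)
Lv = matvec(L, v)
assert all(abs(Lv[i] - lam * v[i]) < 1e-9 for i in range(n))
# The sign pattern splits the path cleanly in half -- the idea behind
# spectral graph partitioning.
print([x > 0 for x in v])  # [True]*4 + [False]*4
```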

When he’s not protecting Joe Flacco, the 23-year-old Urschel enjoys digging into extremely complicated mathematical models.

“I am a mathematical researcher in my spare time, continuing to do research in the areas of numerical linear algebra, multigrid methods, spectral graph theory and machine learning. I’m also an avid chess player, and I have aspirations of eventually being a titled player one day.”

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Mathematicians Make a Major Discovery About Prime Numbers

In May 2013, the mathematician Yitang Zhang launched what has proven to be a banner year and a half for the study of prime numbers, those numbers that aren’t divisible by any smaller number except 1. Zhang, of the University of New Hampshire, showed for the first time that even though primes get increasingly rare as you go further out along the number line, you will never stop finding pairs of primes that are a bounded distance apart — within 70 million, he proved. Dozens of mathematicians then put their heads together to improve on Zhang’s 70 million bound, bringing it down to 246 — within striking range of the celebrated twin primes conjecture, which posits that there are infinitely many pairs of primes that differ by only 2.

Now, mathematicians have made the first substantial progress in 76 years on the reverse question: How far apart can consecutive primes be? The average spacing between primes approaches infinity as you travel up the number line, but in any finite list of numbers, the biggest prime gap could be much larger than the average. No one has been able to establish how large these gaps can be.

“It’s a very obvious question, one of the first you might ever ask about primes,” said Andrew Granville, a number theorist at the University of Montreal. “But the answer has been more or less stuck for almost 80 years.”

This past August, two different groups of mathematicians released papers proving a long-standing conjecture by the mathematician Paul Erdős about how large prime gaps can get. The two teams have joined forces to strengthen their result on the spacing of primes still further, and expect to release a new paper later this month.

Erdős, who was one of the most prolific mathematicians of the 20th century, came up with hundreds of mathematics problems over his lifetime, and had a penchant for offering cash prizes for their solutions. Though these prizes were typically just $25, Erdős (“somewhat rashly,” as he later wrote) offered a $10,000 prize for the solution to his prime gaps conjecture — by far the largest prize he ever offered.

Erdős’ conjecture is based on a bizarre-looking bound for large prime gaps devised in 1938 by the Scottish mathematician Robert Alexander Rankin. For big enough numbers X, Rankin showed, the largest prime gap below X is at least

(1/3) · log X · (log log X) · (log log log log X) / (log log log X)²

Number theory formulas are notorious for having many “logs” (short for the natural logarithm), said Terence Tao of the University of California, Los Angeles, who wrote one of the two new papers along with Kevin Ford of the University of Illinois, Urbana-Champaign, Ben Green of the University of Oxford and Sergei Konyagin of the Steklov Mathematical Institute in Moscow. In fact, number theorists have a favorite joke, Tao said: What does a drowning number theorist say? “Log log log log … ”

Nevertheless, Rankin’s result is “a ridiculous formula, that you would never expect to show up naturally,” Tao said. “Everyone thought it would be improved on quickly, because it’s just so weird.” But Rankin’s formula resisted all but the most minor improvements for more than seven decades.

Many mathematicians believe that the true size of large prime gaps is probably considerably larger — more on the order of (log X)², an idea first put forth by the Swedish mathematician Harald Cramér in 1936. Gaps of size (log X)² are what would occur if the prime numbers behaved like a collection of random numbers, which in many respects they appear to do. But no one can come close to proving Cramér’s conjecture, Tao said. “We just don’t understand prime numbers very well.”

Erdős made a more modest conjecture: It should be possible, he said, to replace the 1/3 in Rankin’s formula by as large a number as you like, provided you go out far enough along the number line. That would mean that prime gaps can get much larger than in Rankin’s formula, though still smaller than in Cramér’s.
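To see how far apart the two predictions are, one can plug numbers into Rankin’s bound — (1/3) · log X · (log log X) · (log log log log X) / (log log log X)², where the logs are iterated natural logarithms — and into Cramér’s conjectured (log X)². The quick comparison below uses X = 10¹⁸, an arbitrary test point chosen only so that four iterated logs are defined:

```python
import math

def iterated_log(x, k):
    """Apply the natural logarithm k times."""
    for _ in range(k):
        x = math.log(x)
    return x

def rankin_bound(x):
    """Rankin's 1938 lower bound on the largest prime gap below x,
    with his original constant 1/3."""
    l1, l2, l3, l4 = (iterated_log(x, k) for k in (1, 2, 3, 4))
    return (1.0 / 3.0) * l1 * l2 * l4 / l3 ** 2

def cramer_guess(x):
    """Cramer's conjectured order of magnitude for the largest gap."""
    return math.log(x) ** 2

x = 1e18
print(rankin_bound(x), cramer_guess(x))  # the bound is tiny next to the guess
```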

The two new proofs of Erdős’ conjecture are both based on a simple way to construct large prime gaps. A large prime gap is the same thing as a long list of non-prime, or “composite,” numbers between two prime numbers. Here’s one easy way to construct a list of, say, 100 composite numbers in a row: Start with the numbers 2, 3, 4, … , 101, and add to each of these the number 101 factorial (the product of the first 101 numbers, written 101!). The list then becomes 101! + 2, 101! + 3, 101! + 4, … , 101! + 101. Since 101! is divisible by all the numbers from 2 to 101, each of the numbers in the new list is composite: 101! + 2 is divisible by 2, 101! + 3 is divisible by 3, and so on. “All the proofs about large prime gaps use only slight variations on this high school construction,” said James Maynard of Oxford, who wrote the second of the two papers.
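The factorial construction is simple enough to run directly. This sketch builds the 100-composite run described above and checks each entry’s promised divisor:

```python
import math

def composite_run(k):
    """k consecutive composite integers via the construction above:
    (k+1)! + 2, (k+1)! + 3, ..., (k+1)! + (k+1)."""
    base = math.factorial(k + 1)
    return [base + j for j in range(2, k + 2)]

run = composite_run(100)
assert len(run) == 100 and run[-1] - run[0] == 99  # consecutive integers
# Each entry (k+1)! + j is divisible by j (2 <= j <= k+1), hence composite.
assert all((math.factorial(101) + j) % j == 0 for j in range(2, 102))
print(len(str(run[0])))  # 160 digits, matching the article
```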

The composite numbers in the above list are enormous, since 101! has 160 digits. To improve on Rankin’s formula, mathematicians had to show that lists of composite numbers appear much earlier in the number line — that it’s possible to add a much smaller number to a list such as 2, 3, … , 101 and again get only composite numbers. Both teams did this by exploiting recent results — different ones in each case — about patterns in the spacing of prime numbers. In a nice twist, Maynard’s paper used tools that he developed last year to understand small gaps between primes.

The five researchers have now joined together to refine their new bound, and plan to release a preprint within a week or two which, Tao feels, pushes Rankin’s basic method as far as possible using currently available techniques.

The new work has no immediate applications, although understanding large prime gaps could ultimately have implications for cryptography algorithms. If there turn out to be longer prime-free stretches of numbers than even Cramér’s conjecture predicts, that could, in principle, spell trouble for cryptography algorithms that depend on finding large prime numbers, Maynard said. “If they got unlucky and started testing for primes at the beginning of a huge gap, the algorithm would take a very long time to run.”

Tao has a more personal motivation for studying prime gaps. “After a while, these things taunt you,” he said. “You’re supposed to be an expert on prime numbers, but there are these basic questions you can’t answer, even though people have thought about them for centuries.”

Erdős died in 1996, but Ronald Graham, a mathematician at the University of California, San Diego, who collaborated extensively with Erdős, has offered to make good on the $10,000 prize. Tao is toying with the idea of creating a new prize for anyone who makes a big enough improvement on the latest result, he said.

In 1985, Tao, then a 10-year-old prodigy, met Erdős at a math event. “He treated me as an equal,” recalled Tao, who in 2006 won a Fields Medal, widely seen as the highest honor in mathematics. “He talked very serious mathematics to me.” This is the first Erdős prize problem Tao has been able to solve, he said. “So that’s kind of cool.”

The recent progress in understanding both small and large prime gaps has spawned a generation of number theorists who feel that anything is possible, Granville said. “Back when I was growing up mathematically, we thought there were these eternal questions that we wouldn’t see answered until another era,” he said. “But I think attitudes have changed in the last year or two. There are a lot of young people who are much more ambitious than in the past, because they’ve seen that you can make massive breakthroughs.”

http://www.wired.com/2014/12/mathematicians-make-major-discovery-prime-numbers/?mbid=social_fb