Archive for the ‘Math’ Category

A terrorist attack might seem like one of the least predictable of events. Terrorists work in small, isolated cells, often using simple weapons and striking at random. Indeed, the element of unpredictability is part of what makes terrorists so scary – you never know when or where they will strike.

However, new research shows that terror attacks may not be as unpredictable as people think. A paper by Stephen Tench and Hannah Fry, mathematicians at University College London, and Paul Gill, a security and crime expert, shows that terrorist attacks often follow a general pattern that can be modeled and predicted using math.

Predicting human behavior is obviously a difficult thing to do, and one can’t always extrapolate from past events to predict the future. As one academic discussion of the topic points out, if you made a forecast in 1864 about how many presidents would be assassinated in office based on historical data, the expected number would be zero. But over the next 40 years, four U.S. presidents were killed in office.

Yet when you put individual human acts together and look at the aggregate, they often do follow a pattern that can be represented with math. As Sir Arthur Conan Doyle writes in “The Sign of Four,” the second Sherlock Holmes novel, “. . . while the individual man is an insoluble puzzle, in the aggregate he becomes a mathematical certainty.”

The Hawkes process

The mathematical model that Tench and Fry use to look at terrorist attacks is called a “Hawkes process.” The basic idea behind Hawkes processes is that some events don’t occur independently; when a certain event happens, you’re more likely to see other events of the same kind shortly thereafter. As time elapses, however, the probability of a subsequent event occurring gradually fades away and returns to normal.
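That "jump then fade" behavior is usually written as the process's conditional intensity: a baseline event rate plus an exponentially decaying kick from each past event. Here is a minimal sketch; the parameter values (mu, alpha, beta) are illustrative, not taken from any study discussed here.

```python
import math

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    """Conditional intensity of a Hawkes process: a baseline rate mu,
    plus an exponentially decaying kick of size alpha for each past event."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in events if ti < t)

past = [1.0, 1.5]                     # two events close together
print(hawkes_intensity(0.5, past))    # before any event: baseline 0.5
print(hawkes_intensity(1.6, past))    # just after both: sharply elevated
print(hawkes_intensity(10.0, past))   # much later: back near baseline
```

Fitting mu, alpha and beta to real event times is what lets the same equation describe aftershocks, retaliatory crimes or terror campaigns.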

A mathematician named Alan Hawkes first developed the idea while searching for a mathematical model that would describe the patterns of earthquakes. Earthquake tremors aren’t independent events, either – after an earthquake hits, the area often experiences aftershocks. So Hawkes designed his equations to reflect the greater probability of experiencing a subsequent tremor shortly after the first one.

Since Hawkes developed the model in the 1970s, similar equations have been used to describe all kinds of sequences of related events, including how epidemics travel, how electrical impulses move through the brain, and how emails move through an organization. Recently, Hawkes processes have also been used to predict the locations and timings of burglaries and gang-related violence.

Why gang-related violence follows a Hawkes process is fairly easy to understand. A murder or shooting by one gang often provokes retaliation by another gang. So following the first incident, the probability of a second incident typically goes up.

It’s a little harder to understand why burglaries follow a Hawkes process – i.e., why one burglary would increase the chances of another burglary happening soon after. But having your house burglarized does increase the chances that thieves will visit again: the burglars now know the location of your valuables and the layout of your house. They also know your neighborhood, which means your neighbors are more likely to be burglarized in the future, too.

Hawkes processes so accurately describe how trends in crime vary that some security companies and law enforcement bureaus have started to use them in their work. As Fry says, companies like PredPol monitor data on past crimes to model geographic “hotspots” that can be more heavily policed or can become the focus of specific crime-prevention policies.

Predicting terrorist attacks

In their paper, Tench, Fry and Gill apply this same model to terrorism in Northern Ireland. The paper looks at more than 5,000 explosions of improvised explosive devices (IEDs) around Northern Ireland during a particularly violent period known as “the Troubles,” between 1970 and 1998, when republican paramilitary groups, drawn largely from Northern Ireland’s Catholic minority, fought to leave the United Kingdom and join Ireland. The researchers used the process to analyze when and where one group, the Provisional Irish Republican Army (IRA), launched its terror attacks, how the British Security Forces responded, and how effective those responses were.

IED explosions follow a pattern. After one incident, others follow more quickly. So you have the ordinary chance of the event, but afterward you have a “little kick,” as Fry says, that increases the probability that you’ll have another attack – but then fades away over time. Mathematicians can capture and model these patterns using a Hawkes process equation. The math can reveal patterns in past terrorist activity that weren’t seen before, or be used to test different theories about those patterns, the researchers say. It can also create predictive models, which estimate the probability of future attacks at different times and in different areas.

The researchers say that their analysis shows distinct phases in the conflict between the Irish terrorists and authorities. For example, bombings slowed down as the IRA was infiltrated by British security forces and when more of its members were imprisoned, and bombings increased when the group launched a renewed campaign of violence or tried to use incidents of terrorism as a bargaining tool in negotiations.

One of the most fascinating lessons of the research concerns the effects of counterterrorist operations. The paper shows evidence that the deaths of Catholic civilians, whom the IRA claimed to represent, would prompt the group to increase its IED attacks in retaliation.

That finding echoes previous research that looked at counterterrorism operations by the United States and its coalition partners in Iraq. That paper showed that counterinsurgency operations carried out indiscriminately – in other words, attacks that hurt or killed innocent people who were not necessarily insurgents – led to a backlash of terrorist violence. In contrast, counterinsurgency operations carried out in a discriminating, targeted way led to a lower level of violence than before.

The paper looks at events in the past, but Tench says the same technique can be used to project future trends. After one terrorist attack, and especially after civilians are killed, the likelihood of subsequent “aftershocks” increases for a specific time period, and authorities need to intervene quickly to avoid a long period of violence. They must also ensure their counterterrorism operations are targeted at the actual insurgents, to avoid provoking the destructive wave of violence that indiscriminate counterterrorism has been shown to cause.

Tench says he hopes counterterrorism officials will start using the technique as part of their portfolio. “This application of the Hawkes process is a relatively new idea, so I imagine it might take some time to filter through,” he says.

https://www.washingtonpost.com/news/wonk/wp/2016/03/01/the-eerie-math-that-could-predict-terrorist-attacks/

Some NFL players spend their offseason working out. Others travel around the world. Baltimore Ravens offensive lineman John Urschel has done both while also getting an article published in a math journal.

Urschel, the Ravens’ 2014 fifth-round pick who graduated from Penn State with a 4.0 GPA, also happens to be a brilliant mathematician. This week he and several co-authors published a piece titled “A Cascadic Multigrid Algorithm for Computing the Fiedler Vector of Graph Laplacians” in the Journal of Computational Mathematics. You can read the full piece here: http://arxiv.org/abs/1412.0565

Here’s the summary of the paper:

“In this paper, we develop a cascadic multigrid algorithm for fast computation of the Fiedler vector of a graph Laplacian, namely, the eigenvector corresponding to the second smallest eigenvalue. This vector has been found to have applications in fields such as graph partitioning and graph drawing. The algorithm is a purely algebraic approach based on a heavy edge coarsening scheme and pointwise smoothing for refinement. To gain theoretical insight, we also consider the related cascadic multigrid method in the geometric setting for elliptic eigenvalue problems and show its uniform convergence under certain assumptions. Numerical tests are presented for computing the Fiedler vector of several practical graphs, and numerical results show the efficiency and optimality of our proposed cascadic multigrid algorithm.”
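For intuition about the object the paper computes (though not about its cascadic multigrid algorithm, which is far more sophisticated), here is a pure-Python sketch that approximates the Fiedler vector of a tiny graph Laplacian. It power-iterates on the shifted matrix c*I - L while projecting out the all-ones eigenvector (eigenvalue 0), so the iteration converges to the eigenvector of the second smallest eigenvalue.

```python
import math

def fiedler_vector(L, iters=2000):
    """Approximate the Fiedler vector of a small graph Laplacian L by
    power iteration on c*I - L, projecting out the all-ones vector."""
    n = len(L)
    c = 2 * max(L[i][i] for i in range(n)) + 1.0   # shift so c - lambda > 0
    v = [math.sin(i + 1.0) for i in range(n)]      # arbitrary start vector
    for _ in range(iters):
        mean = sum(v) / n                          # remove ones-component
        v = [x - mean for x in v]
        w = [c * v[i] - sum(L[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Laplacian of the path graph 1-2-3-4.
L = [[1, -1, 0, 0],
     [-1, 2, -1, 0],
     [0, -1, 2, -1],
     [0, 0, -1, 1]]
v = fiedler_vector(L)
print(v)
```

For a path graph the Fiedler vector changes sign exactly once, splitting the path into its two natural halves, which is why this eigenvector is useful for graph partitioning.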

When he’s not protecting Joe Flacco, the 23-year-old Urschel enjoys digging into extremely complicated mathematical models.

“I am a mathematical researcher in my spare time, continuing to do research in the areas of numerical linear algebra, multigrid methods, spectral graph theory and machine learning. I’m also an avid chess player, and I have aspirations of eventually being a titled player one day.”

– See more at: http://yahoo.thepostgame.com/blog/balancing-act/201503/john-urschel-baltimore-ravens-nfl-football-math

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

In May 2013, the mathematician Yitang Zhang launched what has proven to be a banner year and a half for the study of prime numbers, those numbers that aren’t divisible by any smaller number except 1. Zhang, of the University of New Hampshire, showed for the first time that even though primes get increasingly rare as you go further out along the number line, you will never stop finding pairs of primes that are a bounded distance apart — within 70 million, he proved. Dozens of mathematicians then put their heads together to improve on Zhang’s 70 million bound, bringing it down to 246 — within striking range of the celebrated twin primes conjecture, which posits that there are infinitely many pairs of primes that differ by only 2.

Now, mathematicians have made the first substantial progress in 76 years on the reverse question: How far apart can consecutive primes be? The average spacing between primes approaches infinity as you travel up the number line, but in any finite list of numbers, the biggest prime gap could be much larger than the average. No one has been able to establish how large these gaps can be.
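Both facts are easy to see empirically. The following pure-Python check (the cutoff of 100,000 is arbitrary) sieves the primes below a bound and compares the largest gap with the average gap, which the prime number theorem puts at about log N:

```python
import math

def primes_below(n):
    """Sieve of Eratosthenes: all primes strictly below n."""
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_below(100000)
gaps = [b - a for a, b in zip(ps, ps[1:])]
print(max(gaps))           # the record gap is far larger than...
print(math.log(100000))    # ...the average gap of about log N ~ 11.5
```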

“It’s a very obvious question, one of the first you might ever ask about primes,” said Andrew Granville, a number theorist at the University of Montreal. “But the answer has been more or less stuck for almost 80 years.”

This past August, two different groups of mathematicians released papers proving a long-standing conjecture by the mathematician Paul Erdős about how large prime gaps can get. The two teams have joined forces to strengthen their result on the spacing of primes still further, and expect to release a new paper later this month.

Erdős, who was one of the most prolific mathematicians of the 20th century, came up with hundreds of mathematics problems over his lifetime, and had a penchant for offering cash prizes for their solutions. Though these prizes were typically just $25, Erdős (“somewhat rashly,” as he later wrote) offered a $10,000 prize for the solution to his prime gaps conjecture — by far the largest prize he ever offered.

Erdős’ conjecture is based on a bizarre-looking bound for large prime gaps devised in 1938 by the Scottish mathematician Robert Alexander Rankin. For big enough numbers X, Rankin showed, the largest prime gap below X is at least

(1/3) · log X · (log log X) · (log log log log X) / (log log log X)^2

Number theory formulas are notorious for having many “logs” (short for the natural logarithm), said Terence Tao of the University of California, Los Angeles, who wrote one of the two new papers along with Kevin Ford of the University of Illinois, Urbana-Champaign, Ben Green of the University of Oxford and Sergei Konyagin of the Steklov Mathematical Institute in Moscow. In fact, number theorists have a favorite joke, Tao said: What does a drowning number theorist say? “Log log log log … ”

Nevertheless, Rankin’s result is “a ridiculous formula, that you would never expect to show up naturally,” Tao said. “Everyone thought it would be improved on quickly, because it’s just so weird.” But Rankin’s formula resisted all but the most minor improvements for more than seven decades.

Many mathematicians believe that the true size of large prime gaps is probably considerably larger — more on the order of (log X)^2, an idea first put forth by the Swedish mathematician Harald Cramér in 1936. Gaps of size (log X)^2 are what would occur if the prime numbers behaved like a collection of random numbers, which in many respects they appear to do. But no one can come close to proving Cramér’s conjecture, Tao said. “We just don’t understand prime numbers very well.”

Erdős made a more modest conjecture: It should be possible, he said, to replace the 1/3 in Rankin’s formula by as large a number as you like, provided you go out far enough along the number line. That would mean that prime gaps can get much larger than in Rankin’s formula, though still smaller than in Cramér’s.

The two new proofs of Erdős’ conjecture are both based on a simple way to construct large prime gaps. A large prime gap is the same thing as a long list of non-prime, or “composite,” numbers between two prime numbers. Here’s one easy way to construct a list of, say, 100 composite numbers in a row: Start with the numbers 2, 3, 4, … , 101, and add to each of these the number 101 factorial (the product of the first 101 numbers, written 101!). The list then becomes 101! + 2, 101! + 3, 101! + 4, … , 101! + 101. Since 101! is divisible by all the numbers from 2 to 101, each of the numbers in the new list is composite: 101! + 2 is divisible by 2, 101! + 3 is divisible by 3, and so on. “All the proofs about large prime gaps use only slight variations on this high school construction,” said James Maynard of Oxford, who wrote the second of the two papers.
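The high school construction can be verified in a couple of lines:

```python
import math

# 101! + k is divisible by k for every k from 2 to 101,
# giving 100 consecutive composite numbers.
f = math.factorial(101)
run = [f + k for k in range(2, 102)]
assert all((f + k) % k == 0 for k in range(2, 102))
print(len(run))       # 100 numbers in a row, none of them prime
print(len(str(f)))    # 101! has 160 digits
```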

The composite numbers in the above list are enormous, since 101! has 160 digits. To improve on Rankin’s formula, mathematicians had to show that lists of composite numbers appear much earlier in the number line — that it’s possible to add a much smaller number to a list such as 2, 3, … , 101 and again get only composite numbers. Both teams did this by exploiting recent results — different ones in each case — about patterns in the spacing of prime numbers. In a nice twist, Maynard’s paper used tools that he developed last year to understand small gaps between primes.

The five researchers have now joined together to refine their new bound, and plan to release a preprint within a week or two which, Tao feels, pushes Rankin’s basic method as far as possible using currently available techniques.

The new work has no immediate applications, although understanding large prime gaps could ultimately have implications for cryptography algorithms. If there turn out to be longer prime-free stretches of numbers than even Cramér’s conjecture predicts, that could, in principle, spell trouble for cryptography algorithms that depend on finding large prime numbers, Maynard said. “If they got unlucky and started testing for primes at the beginning of a huge gap, the algorithm would take a very long time to run.”

Tao has a more personal motivation for studying prime gaps. “After a while, these things taunt you,” he said. “You’re supposed to be an expert on prime numbers, but there are these basic questions you can’t answer, even though people have thought about them for centuries.”

Erdős died in 1996, but Ronald Graham, a mathematician at the University of California, San Diego, who collaborated extensively with Erdős, has offered to make good on the $10,000 prize. Tao is toying with the idea of creating a new prize for anyone who makes a big enough improvement on the latest result, he said.

In 1985, Tao, then a 10-year-old prodigy, met Erdős at a math event. “He treated me as an equal,” recalled Tao, who in 2006 won a Fields Medal, widely seen as the highest honor in mathematics. “He talked very serious mathematics to me.” This is the first Erdős prize problem Tao has been able to solve, he said. “So that’s kind of cool.”

The recent progress in understanding both small and large prime gaps has spawned a generation of number theorists who feel that anything is possible, Granville said. “Back when I was growing up mathematically, we thought there were these eternal questions that we wouldn’t see answered until another era,” he said. “But I think attitudes have changed in the last year or two. There are a lot of young people who are much more ambitious than in the past, because they’ve seen that you can make massive breakthroughs.”

http://www.wired.com/2014/12/mathematicians-make-major-discovery-prime-numbers/?mbid=social_fb

Civilisation is almost inevitably doomed, a Nasa-funded study has found.

Human society is founded on a level of economic and environmental stability which almost certainly cannot be sustained, it said.

The study used simplified models of civilisation designed to experiment with the balance of resources and climate that creates stability – or not – in our world.

These theoretical models – designed to extrapolate from simple principles the future of our industrialised world – ran into almost intractable problems.

Almost any model “closely reflecting the reality of the world today… we find that collapse is difficult to avoid”, the report said.

Mathematician Safa Motesharrei begins his report by stating that “the process of rise-and-collapse is actually a recurrent cycle found throughout history” and that this is borne out by maths, as well as historiography.

“The fall of the Roman Empire, and the equally (if not more) advanced Han, Mauryan, and Gupta Empires, as well as so many advanced Mesopotamian Empires, are all testimony to the fact that advanced, sophisticated, complex, and creative civilizations can be both fragile and impermanent.”

His research – funded by Nasa’s Goddard Space Flight Center and published in the journal Ecological Economics – explored the pressures that can lead to a collapse in civilisation.

These criteria include changes in population, climate change and natural disasters. Access to water, agriculture, and energy are also factors.

Motesharrei found that problems with each of these are far more damaging when experienced in combination. When this occurs, the result is often “economic stratification” and a “stretching of resources” that drag at society’s foundations.

Under this highly simplified model, our society appears to be doomed.

In one of his simulations:

“[Ours] appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society. It is important to note that this Type-L collapse is due to an inequality-induced famine that causes a loss of workers, rather than a collapse of Nature”
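The study’s model is a four-variable system (Elites, Commoners, Nature and accumulated Wealth). As a much cruder illustration of the overshoot dynamic it exhibits, here is a two-variable caricature: a population that grows while its resource base is healthy, and a resource that regenerates logistically while being depleted. Every parameter below is invented for illustration; none comes from the paper.

```python
def simulate(steps=50000, dt=0.1):
    """Toy overshoot dynamics: a population P that grows when the
    resource N is plentiful, and a resource that regenerates
    logistically toward capacity K while being depleted by P."""
    P, N, K = 1.0, 100.0, 100.0
    birth, death, regen, deplete = 0.08, 0.03, 0.02, 0.001
    P_hist, N_hist = [P], [N]
    for _ in range(steps):
        dP = P * (birth * N / K - death)           # growth needs resources
        dN = regen * N * (1 - N / K) - deplete * P * N
        P, N = P + dt * dP, N + dt * dN            # simple Euler step
        P_hist.append(P)
        N_hist.append(N)
    return P_hist, N_hist

P_hist, N_hist = simulate()
print(max(P_hist))   # the population grows well past its starting level...
print(min(N_hist))   # ...while the resource base is drawn well down
```

Even this caricature shows the basic mechanism the study explores: growth that draws down the very stock it depends on.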

He added that elites tend to have a vested interest in sustaining the current model – however doomed – for as long as possible, regardless of the eventual negative outcome:

“While some members of society might raise the alarm that the system is moving towards an impending collapse and therefore advocate structural changes to society in order to avoid it, Elites and their supporters, who opposed making these changes, could point to the long sustainable trajectory ‘so far’ in support of doing nothing.”

There are caveats, of course. The study is a simplified model of society, not a perfect simulation, and it isn’t able to make solid predictions of the future. It’s also worth noting that Motesharrei does allow for the possibility that “collapse can be avoided” – though he thinks it will be exceptionally difficult.

Indeed, as the Guardian reports, other studies by the UK Government and KPMG have also warned of a “perfect storm” of energy scarcity and economy fragility coming within a few decades, which lends weight to his conclusion.

http://www.huffingtonpost.co.uk/2014/03/17/civilisation-doomed_n_4977387.html

In a lab in Oxford University’s experimental psychology department, researcher Roi Cohen Kadosh is testing an intriguing treatment: He is sending low-dose electric current through the brains of adults and children as young as 8 to make them better at math.

A relatively new brain-stimulation technique called transcranial electrical stimulation may help people learn and improve their understanding of math concepts.

The electrodes are placed in a tightly fitted cap and worn around the head. The device, run off a 9-volt battery commonly used in smoke detectors, induces only a gentle current and can be targeted to specific areas of the brain or applied generally. The mild current reduces the risk of side effects, which has opened up possibilities about using it, even in individuals without a disorder, as a general cognitive enhancer. Scientists also are investigating its use to treat mood disorders and other conditions.

Dr. Cohen Kadosh’s pioneering work on learning enhancement and brain stimulation is one example of the long journey faced by scientists studying brain-stimulation and cognitive-stimulation techniques. Like other researchers in the community, he has dealt with public concerns about safety and side effects, plus skepticism from other scientists about whether these findings would hold in the wider population.

There are also ethical questions about the technique. If it truly works to enhance cognitive performance, should it be accessible to anyone who can afford to buy the device—which already is available for sale in the U.S.? Should parents be able to perform such stimulation on their kids without monitoring?

“It’s early days but that hasn’t stopped some companies from selling the device and marketing it as a learning tool,” Dr. Cohen Kadosh says. “Be very careful.”

The idea of using electric current to treat various diseases of the brain has a long and fraught history, perhaps most notably with what was called electroshock therapy, developed in 1938 to treat severe mental illness and often portrayed as a medieval treatment that rendered people zombielike in movies such as “One Flew Over the Cuckoo’s Nest.”

Electroconvulsive therapy has improved dramatically over the years and is considered appropriate for use against types of major depression that don’t respond to other treatments, as well as other related, severe mood states.

A number of new brain-stimulation techniques have been developed, including deep brain stimulation, which acts like a pacemaker for the brain. With DBS, electrodes are implanted into the brain and, through a battery pack in the chest, stimulate neurons continuously. DBS devices have been approved by U.S. regulators to treat tremors in Parkinson’s disease and continue to be studied as possible treatments for chronic pain and obsessive-compulsive disorder.

Transcranial electrical stimulation, or tES, is one of the newest brain stimulation techniques. Unlike DBS, it is noninvasive.

If the technique continues to show promise, “this type of method may have a chance to be the new drug of the 21st century,” says Dr. Cohen Kadosh.

The 37-year-old father of two completed graduate school at Ben-Gurion University in Israel before coming to London to do postdoctoral work with Vincent Walsh at University College London. Now, sitting in a small, tidy office with a model brain on a shelf, the senior research fellow at Oxford speaks with cautious enthusiasm about brain stimulation and its potential to help children with math difficulties.

Up to 6% of the population is estimated to have a math-learning disability called developmental dyscalculia, similar to dyslexia but with numerals instead of letters. Many more people say they find math difficult. People with developmental dyscalculia also may have trouble with daily tasks, such as remembering phone numbers and understanding bills.

Whether transcranial electrical stimulation proves to be a useful cognitive enhancer remains to be seen. Dr. Cohen Kadosh first thought about the possibility as a university student in Israel, where he conducted an experiment using transcranial magnetic stimulation, a tool that employs magnetic coils to induce a more powerful electrical current.

He found that he could temporarily turn off regions of the brain known to be important for cognitive skills. When the parietal lobe of the brain was stimulated using that technique, he found that the basic arithmetic skills of doctoral students who were normally very good with numbers were reduced to a level similar to those with developmental dyscalculia.

That led to his next inquiry: If current could turn off regions of the brain making people temporarily math-challenged, could a different type of stimulation improve math performance? Cognitive training helps to some extent in some individuals with math difficulties. Dr. Cohen Kadosh wondered if such learning could be improved if the brain was stimulated at the same time.

But transcranial magnetic stimulation wasn’t the right tool because the current induced was too strong. Dr. Cohen Kadosh puzzled over what type of stimulation would be appropriate until a colleague who had worked with researchers in Germany returned and told him about tES, at the time a new technique. Dr. Cohen Kadosh decided tES was the way to go.

His group has since conducted a series of studies suggesting that tES appears helpful in improving learning speed on various math tasks in adults who don’t have trouble with math. Now they’ve found preliminary evidence for those who struggle with math, too.

Participants typically come for 30-minute stimulation-and-training sessions daily for a week. His team is now starting to study children between 8 and 10 who receive twice-weekly training and stimulation for a month. Studies of tES, including the ones conducted by Dr. Cohen Kadosh, tend to have small sample sizes of up to several dozen participants; replication of the findings by other researchers is important.

In a small, toasty room, participants, often Oxford students, sit in front of a computer screen and complete hundreds of trials in which they learn to associate numerical values with abstract, nonnumerical symbols, figuring out which symbols are “greater” than others, in the way that people learn to know that three is greater than two.

When neurons fire, they transfer information, which could facilitate learning. The tES technique appears to work by lowering the threshold neurons need to reach before they fire, studies have shown. In addition, the stimulation appears to cause changes in neurochemicals involved in learning and memory.

However, the results so far in the field appear to differ significantly by individual. Stimulating the wrong brain region, or stimulating at too high a current or for too long, has been known to inhibit learning. The young and the elderly, for instance, respond in exactly opposite ways to the same current in the same location, Dr. Cohen Kadosh says.

He and a colleague published a paper in January in the journal Frontiers in Human Neuroscience, in which they found that one individual with developmental dyscalculia improved her performance significantly while the other study subject didn’t.

What is clear is that anyone trying the treatment would need to train as well as to stimulate the brain. Otherwise “it’s like taking steroids but sitting on a couch,” says Dr. Cohen Kadosh.

Dr. Cohen Kadosh and Beatrix Krause, a graduate student in the lab, have been examining individual differences in response. Whether a room is dark or well-lighted, if a person smokes and even where women are in their menstrual cycle can affect the brain’s response to electrical stimulation, studies have found.

Results from his lab and others have shown that even if stimulation is stopped, those who benefited are going to maintain a higher performance level than those who weren’t stimulated, up to a year afterward. If there isn’t any follow-up training, everyone’s performance declines over time, but the stimulated group still performs better than the non-stimulated group. It remains to be seen whether reintroducing stimulation would then improve learning again, Dr. Cohen Kadosh says.

http://online.wsj.com/news/articles/SB10001424052702303650204579374951187246122?mod=WSJ_article_EditorsPicks&mg=reno64-wsj&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB10001424052702303650204579374951187246122.html%3Fmod%3DWSJ_article_EditorsPicks

By Tanya Lewis, LiveScience

Scientists have long used mathematics to describe the physical properties of the universe. But what if the universe itself is math? That’s what cosmologist Max Tegmark believes.

In Tegmark’s view, everything in the universe — humans included — is part of a mathematical structure. All matter is made up of particles, which have properties such as charge and spin, but these properties are purely mathematical, he says. And space itself has properties such as dimensions, but is still ultimately a mathematical structure.

“If you accept the idea that both space itself, and all the stuff in space, have no properties at all except mathematical properties,” then the idea that everything is mathematical “starts to sound a little bit less insane,” Tegmark said in a talk given Jan. 15 here at The Bell House. The talk was based on his book “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality” (Knopf, 2014).

“If my idea is wrong, physics is ultimately doomed,” Tegmark said. But if the universe really is mathematics, he added, “There’s nothing we can’t, in principle, understand.”

The idea follows from the observation that nature is full of patterns, such as the Fibonacci sequence, a series of numbers in which each number is the sum of the previous two. The flowering of an artichoke follows this sequence, for example, with the distance between each petal and the next matching the ratio of the numbers in the sequence.
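The sequence takes only a couple of lines to generate, and the ratio of each term to the previous one settles toward the golden ratio, about 1.618:

```python
# Each Fibonacci number is the sum of the previous two.
fib = [1, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])
print(fib)                # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
print(fib[-1] / fib[-2])  # 144/89, already close to the golden ratio
```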

The nonliving world also behaves in a mathematical way. If you throw a baseball in the air, it follows a roughly parabolic trajectory. Planets and other astrophysical bodies follow elliptical orbits.

“There’s an elegant simplicity and beauty in nature revealed by mathematical patterns and shapes, which our minds have been able to figure out,” said Tegmark, who loves math so much he has framed pictures of famous equations in his living room.

One consequence of the mathematical nature of the universe is that scientists could in theory predict every observation or measurement in physics. Tegmark pointed out that mathematics predicted the existence of the planet Neptune, radio waves and the Higgs boson, the particle thought to explain how other particles get their mass.

Some people argue that math is just a tool invented by scientists to explain the natural world. But Tegmark contends the mathematical structure found in the natural world shows that math exists in reality, not just in the human mind.

And speaking of the human mind, could we use math to explain the brain?

Some have described the human brain as the most complex structure in the universe. Indeed, the human mind has made possible all of the great leaps in understanding our world.

Someday, Tegmark said, scientists will probably be able to describe even consciousness using math. (Carl Sagan is quoted as having said, “the brain is a very big place, in a very small space.”)

“Consciousness is probably the way information feels when it’s being processed in certain, very complicated ways,” Tegmark said. He pointed out that many great breakthroughs in physics have involved unifying two things once thought to be separate: energy and matter, space and time, electricity and magnetism. He said he suspects the mind, which is the feeling of a conscious self, will ultimately be unified with the body, which is a collection of moving particles.

But if the brain is just math, does that mean free will doesn’t exist, because the movements of particles could be calculated using equations? Not necessarily, he said.

One way to think of it is, if a computer tried to simulate what a person will do, the computation would take at least the same amount of time as performing the action. So some people have suggested defining free will as an inability to predict what one is going to do before the event occurs.

But that doesn’t mean humans are powerless. Tegmark concluded his talk with a call to action: “Humans have the power not only to understand our world, but to shape and improve it.”

http://www.mnn.com/earth-matters/space/stories/whats-the-universe-made-of-math

The dodgy arithmetic of one employee at the New York Times resulted in inaccurate issue numbers on the front page of every edition for more than 100 years.

It seems that back in 1898 someone added one to 14,499 and got 15,000, with the huge jump in issues not being noticed until 2000.

How exactly the mistake was made isn’t clear, but how it was spotted was cleared up in the mother of all correction notes published in the Times on the first day of the new millennium, and rediscovered by The Atlantic this week.

It read:

“The error came to light recently when Aaron Donovan, a news assistant, became curious about the numbering, which he updates nightly when working at the news desk. He wondered about the potential for self-perpetuating error. Using a spreadsheet program, he calculated the number of days since The Times’s founding, on Sept. 18, 1851.

Through the newspaper’s archives, he learned that in its first 500 weeks, The Times published no Sunday issue. Then, for 2,296 weeks from April 1861 to April 1905, the Sunday issue was treated as an extension of the Saturday paper, bearing its number. In the early days, the paper skipped publication on a few holidays. No issues were published for 88 days during a strike in 1978. (During five earlier labor disputes, unpublished issues were assigned numbers, sometimes because catch-up editions were later produced for the archives.)

Finally, by scanning books of historic front pages and reels of microfilm, Mr. Donovan zeroed in on the date of the 500-issue gap.”

The issue number was eventually adjusted, but not before the publication congratulated itself on its 50,000th issue, which was actually number 49,500.
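Both numbers are easy to check: the 1898 slip inflated the count by exactly 500, and a calendar calculation like the one Donovan did in his spreadsheet bounds how many issues there could have been.

```python
from datetime import date

# The 1898 slip: 14,499 + 1 was recorded as 15,000,
# silently inflating every later issue number by 500.
inflation = 15000 - (14499 + 1)
print(inflation)  # 500

# Days from the founding (Sept. 18, 1851) to the correction (Jan. 1, 2000).
# The paper skipped many days, so the true issue number must be smaller.
days = (date(2000, 1, 1) - date(1851, 9, 18)).days
print(days)
```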

http://www.independent.co.uk/news/world/the-new-york-times-had-mistake-on-front-page-every-day-for-over-a-century-9063702.html