Benny the mystery philanthropist hides $100 bills. So far, he’s given away more than $55,000.

It happened two summers ago, as Joe Robinson was marking down the prices on the pots he was selling at a fine arts festival in Oregon.

“I pick pots up by the rim, flip them upside down, see if the price looks right, maybe cross it out, put something else,” Robinson told The Washington Post on Friday.

Regular, everyday stuff, you know? On that day in 2014, though, he picked up a pot that had an imprint of a fern on it.

He flipped it over.

And as he did, out dropped a $100 bill, upon which a name was written: “Benny.”

“It was a brand-new, crisp $100 bill that had obviously never been in circulation,” Robinson said. “So that mark was pretty obvious on it.”

Robinson figured it was some kind of mistake. Maybe someone dropped it by accident when they were making a purchase earlier in the day.

Because you don’t just find a hundred bucks, right?

But remember that name on the bill? It indicates the money came from a person known simply as Benny — a mysterious philanthropist who has anonymously hidden hundreds of $100 bills over the past few years.

And the people Robinson spoke with after finding the cash knew all about it.

“Everyone had some kind of a story,” Robinson said. “And so I guess it’s his thing to do crisp, brand-new bills.”

“Benny” hides those $100 bills all over the place in Salem, Ore., and the surrounding area, reports have indicated. They have been discovered in the pockets of clothing, in diapers, in baby wipes and in candy, Capi Lynn, a columnist for the Statesman Journal, said in an email.

“As of today, he has given away more than $55,000, and that’s only what has been reported to me,” Lynn said in an email to The Post. “I have a feeling Benny will be at it until his identity is revealed, or he can no longer do it for some reason.”

Lynn’s newspaper, in Salem, Ore., tracks the Benny finds; she gave him the nickname (Get it? As in Benjamin Franklin?), and first wrote about him in 2013 after some Cub Scouts reported finding folded $100 bills.

In the past few years, the newspaper has been able to document all sorts of stories about Benny and his gifts.

There was the girl who found a Benjamin in a pink bank purchased by her mother.

“I shook it, and it popped out of the hole,” the girl said. “My mom thought it was fake, but it was real.”

And the woman who found one with a package of cereal, right when she truly needed it.

“It just made my day,” said the woman, Tammy Tompkins. “I cried happy tears for about an hour and a half.”

The Statesman Journal reported that Tompkins’s husband had been struggling with health issues for some time when she found the money left by Benny.

“I’d just like to tell him — oh, gosh, I’m going to cry — just how much that it touched us, how much we appreciated it,” she told the Statesman Journal. “We’ve been through so much.”

In another instance, Benny’s gift was discovered by an 8-year-old boy who found the cash in a store’s toy bin. The newspaper reported that the boy and a friend who was with him would use the cash to buy toys.

But not for themselves. The plan was to donate the haul to a children’s group. What’s more: Their parents were expected to match Benny’s gift, according to the Statesman Journal.

This is a thing that happens a lot, said Lynn.

Benny, she wrote, has “launched a pay-it-forward spirit in the community.” By her estimation, more than half of those who find his $100 bills end up “paying it forward” — either to a charity of their choice, a cause dear to their heart or just to a person or family needing it.

“The people who need the money are spending it on things like groceries, gas and prescriptions,” Lynn wrote. “Those who pay it forward are spreading the cheer to a variety of local nonprofits and organizations, with food banks, animal-rescue groups and schools the top three pay-it-forward recipients. Benny finders also can be very creative with how they pay it forward. One woman keeps a box of sack lunches in her car to hand out to panhandlers, and she used her Benny to fund her mission for a time.”

Robinson had once hiked the Pacific Crest Trail and remembered one spot in particular, in Oregon.

He thought of what he wanted there during his hike — taco salad, beer, ice cream, cake. So he used the money he found to make that happen for a few lucky people on the trail.

“I just thought it would be cool to do something unexpected that would be super thrilling,” he said.

Robinson hauled slow-cooked pulled pork tacos, sheet cake, ice cream, soda, beer and whiskey out to a campground. “I think I also got some fruits and vegetables,” he said. “Some people actually want healthy food.”

Then, he said, he waited for hikers to happen by.

“It really made a few people’s day,” he said.

About nine hikers came through and benefited from Robinson’s generosity that day. By extension, they benefited from Benny’s, too.

“A lot of agape mouths,” said Robinson, who said the gesture “seemed like a fitting way to make people happy unexpectedly, like Benny did for me.”

When asked about Benny’s anonymity, Robinson said he believes that some people have figured out the mystery. They’ve seen him make the drops, he said — but they’re keeping their mouths shut.

“I think that the people that have found out have probably been touched by his motivations,” Robinson said. “It’s like, ‘Hey, I believe in this, and your secret’s safe with me.’ ”

https://www.washingtonpost.com/news/inspired-life/wp/2016/07/16/benny-the-mystery-philanthropist-hides-100-bills-so-far-hes-given-away-more-than-55000/?tid=hybrid_collaborative_1_na

Smallest hard disk to date writes information atom by atom

Every day, modern society creates more than a billion gigabytes of new data. To store all this data, it is increasingly important that each single bit occupies as little space as possible. A team of scientists at the Kavli Institute of Nanoscience at Delft University of Technology managed to bring this reduction to the ultimate limit: they built a memory of 1 kilobyte (8,000 bits), where each bit is represented by the position of one single chlorine atom.

“In theory, this storage density would allow all books ever created by humans to be written on a single postage stamp”, says lead scientist Sander Otte.

The team reached a storage density of 500 terabits per square inch (Tbpsi), 500 times better than the best commercial hard disk currently available, and reports on the memory in Nature Nanotechnology on Monday, July 18.

Feynman

In 1959, physicist Richard Feynman challenged his colleagues to engineer the world at the smallest possible scale. In his famous lecture There’s Plenty of Room at the Bottom, he speculated that if we had a platform allowing us to arrange individual atoms in an exact orderly pattern, it would be possible to store one piece of information per atom. To honor the visionary Feynman, Otte and his team now coded a section of Feynman’s lecture on an area 100 nanometers wide.


Sliding puzzle

The team used a scanning tunneling microscope (STM), in which a sharp needle probes the atoms of a surface, one by one. With these probes, scientists can not only see the atoms but also use them to push the atoms around. “You could compare it to a sliding puzzle”, Otte explains. “Every bit consists of two positions on a surface of copper atoms, and one chlorine atom that we can slide back and forth between these two positions. If the chlorine atom is in the top position, there is a hole beneath it — we call this a 1. If the hole is in the top position and the chlorine atom is therefore on the bottom, then the bit is a 0.” Because the chlorine atoms are surrounded by other chlorine atoms, except near the holes, they keep each other in place. That is why this method with holes is much more stable than methods with loose atoms and more suitable for data storage.
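
As a concrete picture of the encoding, here is a minimal sketch in Python. It is purely illustrative (the class is my own analogy, not the Delft team's control software):

```python
# Illustrative model of one chlorine-atom bit: an atom and a hole
# sharing two lattice positions. Not the actual Delft control software.

class AtomBit:
    """Atom on top, hole below encodes a 1; hole on top, atom below encodes a 0."""

    def __init__(self, atom_on_top: bool = True):
        self.atom_on_top = atom_on_top

    @property
    def value(self) -> int:
        return 1 if self.atom_on_top else 0

    def flip(self) -> None:
        """Slide the atom between the two positions, as the STM needle does."""
        self.atom_on_top = not self.atom_on_top


# A 1-kilobyte memory is 8,000 such bits.
memory = [AtomBit() for _ in range(8000)]
memory[0].flip()
print(memory[0].value, memory[1].value)  # prints: 0 1
```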

Codes

The researchers from Delft organized their memory in blocks of 8 bytes (64 bits). Each block has a marker, made of the same type of ‘holes’ as the raster of chlorine atoms. Inspired by the pixelated square barcodes (QR codes) often used to scan tickets for airplanes and concerts, these markers work like miniature QR codes that carry information about the precise location of the block on the copper layer. The code will also indicate if a block is damaged, for instance due to some local contaminant or an error in the surface. This allows the memory to be scaled up easily to very big sizes, even if the copper surface is not entirely perfect.
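
In software terms, the organisation might be pictured as follows. The field names are assumptions for illustration; on the real device all of this information is encoded in patterns of holes on the surface:

```python
# Illustrative analogy for the 8-byte blocks and their QR-code-like markers.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Block:
    row: int                # location of the block on the copper layer
    col: int
    damaged: bool = False   # flagged if a contaminant or surface error ruins it
    bits: List[int] = field(default_factory=lambda: [0] * 64)  # 8-byte payload

def usable_blocks(blocks: List[Block]) -> List[Block]:
    """Skip damaged blocks, so an imperfect copper surface can still be used."""
    return [b for b in blocks if not b.damaged]

grid = [Block(r, c) for r in range(4) for c in range(4)]
grid[5].damaged = True
print(len(usable_blocks(grid)))  # prints: 15
```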

Datacenters

The new approach offers excellent prospects in terms of stability and scalability. Still, this type of memory should not be expected in datacenters soon. Otte: “In its current form the memory can operate only in very clean vacuum conditions and at liquid nitrogen temperature (77 K), so the actual storage of data on an atomic scale is still some way off. But through this achievement we have certainly come a big step closer”.

This research was made possible through support from the Netherlands Organisation for Scientific Research (NWO/FOM). Scientists of the International Iberian Nanotechnology Laboratory (INL) in Portugal performed calculations on the behavior of the chlorine atoms.

For more information, please contact Dr. Sander Otte, Kavli Institute of Nanoscience, TU Delft: A.F.Otte@tudelft.nl, +31 15 278 8998

http://www.tudelft.nl/en/current/latest-news/article/detail/kleinste-harddisk-ooit-schrijft-informatie-atoom-voor-atoom/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Mystery of what sleep does to our brains may finally be solved

By Clare Wilson

It is one of life’s great enigmas: why do we sleep? Now we have the best evidence yet of what sleep is for – allowing housekeeping processes to take place that stop our brains becoming overloaded with new memories.

All animals studied so far have been found to sleep, but the reason for their slumber has eluded us. When lab rats are deprived of sleep, they die within a month, and when people go for a few days without sleeping, they start to hallucinate and may have epileptic seizures.

One idea is that sleep helps us consolidate new memories, as people do better in tests if they get a chance to sleep after learning. We know that, while awake, fresh memories are recorded by reinforcing connections between brain cells, but the memory processes that take place while we sleep have remained unclear.

Support is growing for a theory that sleep evolved so that connections in the brain can be pruned down during slumber, making room for fresh memories to form the next day. “Sleep is the price we pay for learning,” says Giulio Tononi of the University of Wisconsin-Madison, who developed the idea.

Now we have the most direct evidence yet that he’s right. Tononi’s team measured the size of these connections, or synapses, in brain slices taken from mice. The synapses in samples taken at the end of a period of sleep were 18 per cent smaller than those in samples taken before sleep, showing that the synapses between neurons are weakened during slumber.

A good night’s sleep

Tononi announced these findings at the Federation of European Neuroscience Societies meeting in Copenhagen, Denmark, last week. “The data was very solid and well documented,” says Maiken Nedergaard of the University of Rochester, who attended the conference.

“It’s an extremely elegant idea,” says Vladyslav Vyazovskiy of the University of Oxford.

If the housekeeping theory is right, it would explain why, when we miss a night’s sleep, the next day we find it harder to concentrate and learn new information – we may have less capacity to encode new experiences. The finding suggests that, as well as it being important to get a good night’s sleep after learning something, we should also try to sleep well the night before.

It could also explain why, if our sleep is interrupted, we feel less refreshed the next day. There is some indirect evidence that deep, slow-wave sleep is best for pruning back synapses, and it takes time for our brains to reach this level of unconsciousness.

Waking refreshed

Previous evidence has also supported the housekeeping theory. For instance, EEG recordings show that the human brain is less electrically responsive at the start of the day – after a good night’s sleep – than at the end, suggesting that the connections may be weaker. And in rats, the levels of a molecule called the AMPA receptor – which is involved in the functioning of synapses – are lower at the start of their wake periods.

The latest brain-slice finding that synapses get smaller is the most direct evidence yet that the housekeeping theory is right, says Vyazovskiy. “Structural evidence is very important,” he says. “That’s much less affected by other confounding factors.”

Protecting what matters

Getting this data was a Herculean task, says Tononi. They collected tiny chunks of brain tissue, sliced them into ultrathin sections and used these to create 3D models of the brain tissue to identify the synapses. As there were nearly 7000 synapses, it took seven researchers four years.

The team did not know which mouse was which until last month, says Tononi, when they broke the identification code, and found their theory stood up.

“People had been working for years to count these things. You start having stress about whether it’s really possible for all these synapses to start getting fatter and then thin again,” says Tononi.

The team also discovered that some synapses seem to be protected – the biggest fifth stayed the same size. It’s as if the brain is preserving its most important memories, says Tononi. “You keep what matters.”

https://www.newscientist.com/article/2096921-mystery-of-what-sleep-does-to-our-brains-may-finally-be-solved/

The Japanese art of (not) sleeping

By Brigitte Steger

The Japanese don’t sleep. This is what everyone – the Japanese above all – say. It’s not true, of course. But as a cultural and sociological statement, it is very interesting.

I first encountered these intriguing attitudes to sleep during my first stay in Japan in the late 1980s. At that time Japan was at the peak of what became known as the Bubble Economy, a phase of extraordinary speculative boom. Daily life was correspondingly hectic. People filled their schedules with work and leisure appointments, and had hardly any time to sleep. The lifestyle of this era is aptly summed up by a wildly popular advertising slogan of the time, extolling the benefits of an energy drink. “Can you battle through 24 hours? / Businessman! Businessman! Japanese businessman!”

Many voiced the complaint: “We Japanese are crazy to work so much!” But in these complaints one detected a sense of pride at being more diligent and therefore morally superior to the rest of humanity. Yet, at the same time, I observed countless people dozing on underground trains during my daily commute. Some even slept while standing up, and no one appeared to be at all surprised by this.

I found this attitude contradictory. The positive image of the worker bee, who cuts back on sleep at night and frowns on sleeping late in the morning, seemed to be accompanied by an extensive tolerance of so-called ‘inemuri’ – napping on public transportation and during work meetings, classes and lectures. Women, men and children apparently had little inhibition about falling asleep when and wherever they felt like doing so.

If sleeping in a bed or a futon was considered a sign of laziness, then why wasn’t sleeping during an event or even at work considered an even greater expression of indolence? What sense did it make to allow children to stay up late at night to study if it meant that they would fall asleep during class the next day? These impressions and apparent contradictions led to my more intensive involvement with the theme of sleep for my PhD project several years later.

Initially, I had to fight against prejudice as people were reluctant to consider sleep a serious topic for academic enquiry. Of course, it was precisely such attitudes that had originally caught my attention. Sleep can be loaded with a variety of meanings and ideologies; analysing sleep arrangements and the discourse on it reveals attitudes and values embedded in the contexts in which sleep is organised and discussed. In my experience, it is the everyday and seemingly natural events upon which people generally do not reflect that reveal essential structures and values of a society.

We often assume that our ancestors went to bed ‘naturally’ when darkness fell and rose with the Sun. However, sleep times have never been such a simple matter, whether in Japan or elsewhere. Even before the invention of electric light, the documentary evidence shows that people were scolded for staying up late at night for chatting, drinking and other forms of pleasure. However, scholars – particularly young samurai – were considered highly virtuous if they interrupted their sleep to study, even though this practice may not have been very efficient as it required oil for their lamps and often resulted in them falling asleep during lectures.

Napping is hardly ever discussed in historical sources and seems to have been widely taken for granted. Falling asleep in public tends to be mentioned only when the nap is the source of a funny anecdote, such as when someone joins in with the wrong song at a ceremony, unaware that they have slept through most of it. People also seem to have enjoyed playing tricks on friends who had involuntarily dozed off.

Early rising, on the other hand, has clearly been promoted as a virtue, at least since the introduction of Confucianism and Buddhism. In antiquity, sources show a special concern for the work schedule of civil servants, but from the Middle Ages onwards, early rising was applied to all strata of society, with “going to bed late and rising early” used as a metaphor to describe a virtuous person.

Another interesting issue is co-sleeping. In Britain, parents are often told they should provide even babies with a separate room so that they can learn to be independent sleepers, thus establishing a regular sleep schedule. In Japan, by contrast, parents and doctors are adamant that co-sleeping with children until they are at least at school age will reassure them and help them develop into independent and socially stable adults.

Maybe this cultural norm helps Japanese people to sleep in the presence of others, even when they are adults – many Japanese say they often sleep better in company than alone. Such an effect could be observed in spring 2011 after the huge tsunami disaster destroyed several coastal towns. Survivors had to stay in evacuation shelters, where dozens or even hundreds of people shared the same living and sleeping space. Notwithstanding various conflicts and problems, survivors described how sharing a communal sleeping space provided some comfort and helped them to relax and regain their sleep rhythm.

However, this experience of sleeping in the presence of others as children is not sufficient on its own to explain the widespread tolerance of inemuri, especially at school and in the workplace. After some years of investigating this subject, I finally realised that on a certain level, inemuri is not considered sleep at all. Not only is it seen as being different from night-time sleep in bed, it is also viewed differently from taking an afternoon nap or power nap.

How can we make sense of this? The clue lies in the term itself, which is composed of two Chinese characters: ‘i’, which means ‘to be present’ in a situation that is not sleep, and ‘nemuri’, which means ‘sleep’. Erving Goffman’s concept of “involvement within social situations” is useful, I think, in helping us grasp the social significance of inemuri and the rules surrounding it. Through our body language and verbal expressions we are involved to some extent in every situation in which we are present. We do, however, have the capacity to divide our attention into dominant and subordinate involvement.

In this context, inemuri can be seen as a subordinate involvement which can be indulged in as long as it does not disturb the social situation at hand – similar to daydreaming. Even though the sleeper might be mentally ‘away’, they have to be able to return to the social situation at hand when active contribution is required. They also have to maintain the impression of fitting in with the dominant involvement by means of body posture, body language, dress code and the like.

Inemuri in the workplace is a case in point. In principle, attentiveness and active participation are expected at work, and falling asleep creates the impression of lethargy and that a person is shirking their duties. However, it is also viewed as the result of work-related exhaustion. It may be excused by the fact that meetings are usually long and often involve simply listening to the chair’s reports. The effort made to attend is often valued more than what is actually achieved. As one informant told me: “We Japanese have the Olympic spirit – participating is what counts.”

Diligence, which is expressed by working long hours and giving one’s all, is highly valued as a positive moral trait in Japan. Someone who makes the effort to participate in a meeting despite being exhausted or ill demonstrates diligence, a sense of responsibility and their willingness to make a sacrifice. By overcoming physical weaknesses and needs, a person becomes morally and mentally fortified and is filled with positive energy. Such a person is considered reliable and will be promoted. If, in the end, they succumb to sleep due to exhaustion or a cold or another health problem, they can be excused and an “attack of the sleep demon” can be held responsible.

Moreover, modesty is also a highly valued virtue. Therefore, it is not possible to boast about one’s own diligence – and this creates the need for subtle methods to achieve social recognition. Since tiredness and illness are often viewed as the result of previous work efforts and diligence, inemuri – or even feigning inemuri by closing one’s eyes – can be employed as a sign that a person has been working hard but still has the strength and moral virtue necessary to keep themselves and their feelings under control.

Thus, the Japanese habit of inemuri does not necessarily reveal a tendency towards laziness. Instead, it is an informal feature of Japanese social life intended to ensure the performance of regular duties by offering a way of being temporarily ‘away’ within these duties. And so it is clear: the Japanese don’t sleep. They don’t nap. They do inemuri. It could not be more different.

http://www.bbc.com/future/story/20160506-the-japanese-art-of-not-sleeping

New drug for postpartum depression succeeds in mid-stage study


By Natalie Grover

Sage Therapeutics Inc said its drug alleviated symptoms of severe postpartum depression, meeting the main goal of a small mid-stage study and sending the company’s shares soaring.

About one in seven women experiences postpartum depression that eventually interferes with her ability to take care of her baby and handle daily tasks, according to the American Psychological Association. There are no specific therapies for PPD. Existing options include standard antidepressants and psychotherapy.

Data on 21 patients showed that the drug, SAGE-547, achieved a statistically significant reduction in symptoms at 60 hours, compared to placebo, on a standard depression scale, Sage said in a news release reporting topline results from the study. (http://bit.ly/29KtPBI)

“This represented a greater than 20 point mean reduction in the depression scores of the SAGE-547 group at the primary endpoint of 60 hours through trial completion, with a greater than 12 point difference from placebo. The statistically significant difference in treatment effect began at 24 hours (p=0.006), with an effect that was maintained at similar magnitude through to the 30-day follow-up (p=0.01),” the company reported.

Typical antidepressants take about four to six weeks to take effect, trial investigator Samantha Meltzer-Brody told Reuters. “So the rapid onset of response of this drug is unlike anything else available in the field,” she said.

A woman with PPD can suffer a whirlwind of emotions, including severe anxiety, panic attacks, thoughts of harming herself or the baby, and feelings of worthlessness, shame, guilt or inadequacy.

Cambridge, Massachusetts-based Sage said it had initiated an expansion of the mid-stage study to determine optimal dosing for the injectable drug.

Sage is also evaluating the drug for use in super refractory status epilepticus (SRSE), a life-threatening seizure disorder, as well as essential tremor.

http://www.psychcongress.com/article/drug-postpartum-depression-succeeds-mid-stage-study-27946

New discovery on brain chemistry of patients with schizophrenia and their relatives


People with schizophrenia have different levels of the neurotransmitters glutamate and gamma-aminobutyric acid (GABA) than healthy people do, and their relatives also have lower glutamate levels, according to a study published online in Biological Psychiatry.

Using magnetic resonance spectroscopy, researchers discovered reduced levels of glutamate — which promotes the firing of brain cells — in both patients with schizophrenia and healthy relatives. Patients also showed reduced levels of GABA, which inhibits neural firing. Healthy relatives, however, did not.

Researchers are unsure why healthy relatives with altered glutamate do not show symptoms of schizophrenia or how they maintain normal GABA levels despite a predisposition to the illness.

“This finding is what’s most exciting about our study,” said lead investigator Katharine Thakkar, PhD, assistant professor of clinical psychology at Michigan State University, East Lansing. “It hints at what kinds of things have to go wrong for someone to express this vulnerability toward schizophrenia. The study gives us more specific clues into what kinds of systems we want to tackle when we’re developing new treatments for this very devastating illness.”

The study included 21 patients with chronic schizophrenia, 23 healthy relatives of other people with schizophrenia not involved in the study, and 24 healthy nonrelatives who served as controls.

Many experts believe there are multiple risk factors for schizophrenia, including dopamine and glutamate-GABA imbalance. Drugs that regulate dopamine do not work for all patients with schizophrenia. Dr. Thakkar believes magnetic resonance spectroscopy may help clinicians target effective treatments for specific patients.

“There are likely different causes of the different symptoms and possibly different mechanisms of the illness across individuals,” said Dr. Thakkar.

“In the future, as this imaging technique becomes more refined, it could conceivably be used to guide individual treatment recommendations. That is, this technique might indicate that one individual would benefit more from treatment A and another individual would benefit more from treatment B, when these different treatments have different mechanisms of action.”

—Jolynn Tumolo

References

Thakkar KN, Rösler L, Wijnen JP, et al. 7T proton magnetic resonance spectroscopy of GABA, glutamate, and glutamine reveals altered concentrations in schizophrenia patients and healthy siblings [published online ahead of print April 19, 2016]. Biological Psychiatry.
Study uncovers clue to deciphering schizophrenia [press release]. Washington, DC: EurekAlert!; June 7, 2016.

The names that break computer systems

By Chris Baraniuk

Jennifer Null’s husband had warned her before they got married that taking his name could lead to occasional frustrations in everyday life. She knew the sort of thing to expect – his family joked about it now and again, after all. And sure enough, right after the wedding, problems began.

“We moved almost immediately after we got married so it came up practically as soon as I changed my name, buying plane tickets,” she says. When Jennifer Null tries to buy a plane ticket, she gets an error message on most websites. The site will say she has left the surname field blank and ask her to try again.

Instead, she has to call the airline company by phone to book a ticket – but that’s not the end of the process.

“I’ve been asked why I’m calling and when I try to explain the situation, I’ve been told, ‘there’s no way that’s true’,” she says.

But to any programmer, it’s painfully easy to see why “Null” could cause problems for software interacting with a database. This is because the word ‘null’ can be produced by a system to indicate an empty name field. Now and again, system administrators have to try and fix the problem for people who are actually named “Null” – but the issue is rare and sometimes surprisingly difficult to solve.
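
A minimal sketch of the failure mode, assuming (as happens in many layered systems) that a missing value has somewhere been serialised to the string "null"; the function is hypothetical, not any airline's actual code:

```python
# Hypothetical sketch of the "Null" bug: once a missing value has been
# serialised to the string "null", a real surname of "Null" becomes
# indistinguishable from an empty field.

def validate_surname(raw_value):
    # Buggy but common pattern: treat the string "null" as "no value given".
    if raw_value is None or raw_value.strip().lower() == "null":
        raise ValueError("surname field is blank - please try again")
    return raw_value

for name in ("Smith", "Null"):
    try:
        print(validate_surname(name), "accepted")
    except ValueError as err:
        print(repr(name), "rejected:", err)

# Smith accepted
# 'Null' rejected: surname field is blank - please try again
```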

For Null, a full-time mum who lives in southern Virginia in the US, frustrations don’t end with booking plane tickets. She’s also had trouble entering her details into a government tax website, for instance. And when she and her husband tried to get settled in a new city, there were difficulties getting a utility bill set up, too.

Generally, the more important the website or service, the stricter the controls on what name she can enter – but that means that problems chiefly occur on systems where it really matters.

Before the birth of her child, Null was working as an on-call substitute teacher. In that role she could be notified of work through an online service or via phone. But the website would never work for Null – she always had to arrange a shift by phone.

“I feel like I still have to do things the old-fashioned way,” she says.

“On one hand it’s frustrating for the times that we need it, but for the most part it’s like a fun anecdote to tell people,” she adds. “We joke about it a lot. It’s good for stories.”

“Null” isn’t the only example of a name that is troublesome for computers to process. There are many others. In a world that relies increasingly on databases to function, the issues for people with problematic names only get more severe.

Some individuals only have a single name, not a forename and surname. Others have surnames that are just one letter. Problems with such names have been reported before. Consider also the experiences of Janice Keihanaikukauakahihulihe’ekahaunaele, a Hawaiian woman who complained that state ID cards should allow citizens to display surnames even as long as hers – which is 36 characters in total. In the end, government computer systems were updated to have greater flexibility in this area.

Incidents like this are known, in computing terminology, as “edge cases” – that is, unexpected and problematic cases for which the system was not designed.

“Every couple of years computer systems are upgraded or changed and they’re tested with a variety of data – names that are well represented in society,” explains programmer Patrick McKenzie. “They don’t necessarily test for the edge cases.”

McKenzie has developed a pet interest in the failings of many modern computer systems to process less common names. He has compiled a list of the pitfalls that programmers often fail to foresee when designing databases intended to store personal names.
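
In the spirit of that list, here is a hypothetical naive validator. Every rule below looks harmless, and every one of them wrongly rejects a real person mentioned in this article:

```python
import re

# Hypothetical naive name validator; each check encodes a pitfall
# of the kind McKenzie's list warns about.

def is_valid_name(first: str, last: str) -> bool:
    if not first or not last:               # fails people with a single name
        return False
    if len(last) < 2:                       # fails one-letter surnames
        return False
    if len(first) + len(last) > 20:         # fails a 36-character Hawaiian surname
        return False
    if not re.fullmatch(r"[A-Za-z' -]+", first + " " + last):
        return False                        # fails katakana and accented names
    if last.lower() == "null":              # fails Jennifer Null
        return False
    return True

print(is_valid_name("Jennifer", "Null"))  # prints: False
```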

But McKenzie is living proof of the fact that name headaches are a relative problem. To many English-speaking westerners, the name “Patrick McKenzie” might not seem primed to cause errors, but where McKenzie lives – Japan – it has created all kinds of issues for him.

“Four characters in a Japanese name is very rare. McKenzie is eight, so for printed forms it’ll often be the case that there’s literally not enough space to put my name,” he says.

“Computer systems are often designed with these forms in mind. Every year when I go to file my taxes, I file them as ‘McKenzie P’ because that’s the amount of space they have.”

McKenzie had tried his best to fit in. He even converted his name into katakana – a Japanese syllabary that allows for the phonetic spelling of foreign words. But when his bank’s computer systems were updated, support for katakana was removed. This wouldn’t have presented an issue for Japanese customers, but for McKenzie, it meant he was temporarily unable to use the bank’s website.

“Eventually they had to send a paper request from my bank branch to the corporate IT department to have someone basically edit the database manually,” he says, “before I could use any of their applications.”

McKenzie points out that as computer systems have gone global, there have been serious discussions among programmers to improve support for “edge case” names and names written in foreign languages or with unusual characters. Indeed, he explains that the World Wide Web Consortium, an internet standards body, has dedicated some discussion to the issue specifically.

“I think the situation is getting better, partly as a result of increased awareness within the community,” he comments.

For people like Null, though, it’s likely that they will encounter headaches for a long time to come. Some might argue that those with troublesome names might think about changing them to save time and frustration.

But Null won’t be among them. For one thing, she already changed her name – when she got married.

“It’s very frustrating when it does come up,” she admits, but adds, “I’ve just kind of accepted it. I’m used to it now.”

http://www.bbc.com/future/story/20160325-the-names-that-break-computer-systems

New theory to explain how consciousness evolved

by Michael Graziano

Ever since Charles Darwin published On the Origin of Species in 1859, evolution has been the grand unifying theory of biology. Yet one of our most important biological traits, consciousness, is rarely studied in the context of evolution. Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?

The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species.

Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.

We can take a good guess at when selective signal enhancement first evolved by comparing different species of animal, a common method in evolutionary biology. The hydra, a small relative of jellyfish, arguably has the simplest nervous system known—a nerve net. If you poke the hydra anywhere, it gives a generalized response. It shows no evidence of selectively processing some pokes while strategically ignoring others. The split between the ancestors of hydras and other animals, according to genetic analysis, may have been as early as 700 million years ago. Selective signal enhancement probably evolved after that.

The arthropod eye, on the other hand, provides one of the best-studied examples of selective signal enhancement. It sharpens the signals related to visual edges and suppresses other visual signals, generating an outline sketch of the world. Selective enhancement therefore probably evolved sometime between hydras and arthropods—between about 700 and 600 million years ago, close to the beginning of complex, multicellular life. Selective signal enhancement is so primitive that it doesn’t even require a central brain. The eye, the network of touch sensors on the body, and the auditory system can each have their own local versions of attention focusing on a few select signals.

The next evolutionary advance was a centralized controller for attention that could coordinate among all senses. In many animals, that central controller is a brain area called the tectum. (“Tectum” means “roof” in Latin, and it often covers the top of the brain.) It coordinates something called overt attention – aiming the satellite dishes of the eyes, ears, and nose toward anything important.

All vertebrates—fish, reptiles, birds, and mammals—have a tectum. Even lampreys have one, and they appeared so early in evolution that they don’t even have a lower jaw. But as far as anyone knows, the tectum is absent from all invertebrates. The fact that vertebrates have it and invertebrates don’t allows us to bracket its evolution. According to fossil and genetic evidence, vertebrates evolved around 520 million years ago. The tectum and the central control of attention probably evolved around then, during the so-called Cambrian Explosion when vertebrates were tiny wriggling creatures competing with a vast range of invertebrates in the sea.

The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
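
To make “internal model” concrete, here is a toy sketch of the eye-movement example. It is purely illustrative, a simplification of my own rather than a model from the neuroscience literature:

```python
# Toy forward model in the spirit of the tectum example: if the eyes move
# x degrees to the right, the scene should shift x degrees to the left
# across the retina.

def predicted_scene_shift(eye_movement_deg: float) -> float:
    """Internal model: the visual world shifts opposite to the eyes."""
    return -eye_movement_deg

def prediction_error(eye_movement_deg: float, observed_shift_deg: float) -> float:
    """Mismatch between predicted and actual input, used to correct movement."""
    return observed_shift_deg - predicted_scene_shift(eye_movement_deg)

# Eyes move 5 degrees right; the scene is seen to shift 4.8 degrees left.
print(f"error: {prediction_error(5.0, -4.8):+.1f} deg")  # prints: error: +0.2 deg
```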

With the evolution of reptiles around 350 to 300 million years ago, a new brain structure began to emerge – the wulst. Birds inherited a wulst from their reptile ancestors. Mammals did too, but our version is usually called the cerebral cortex and has expanded enormously. It’s by far the largest structure in the human brain. Sometimes you hear people refer to the reptilian brain as the brute, automatic part that’s left over when you strip away the cortex, but this is not correct. The cortex has its origin in the reptilian wulst, and reptiles are probably smarter than we give them credit for.

The cortex is like an upgraded tectum. We still have a tectum buried under the cortex and it performs the same functions as in fish and amphibians. If you hear a sudden sound or see a movement in the corner of your eye, your tectum directs your gaze toward it quickly and accurately. The cortex also takes in sensory signals and coordinates movement, but it has a more flexible repertoire. Depending on context, you might look toward, look away, make a sound, do a dance, or simply store the sensory event in memory in case the information is useful for the future.

The most important difference between the cortex and the tectum may be the kind of attention they control. The tectum is the master of overt attention—pointing the sensory apparatus toward anything important. The cortex ups the ante with something called covert attention. You don’t need to look directly at something to covertly attend to it. Even if you’ve turned your back on an object, your cortex can still focus its processing resources on it. Scientists sometimes compare covert attention to a spotlight. (The analogy was first suggested by Francis Crick, the geneticist.) Your cortex can shift covert attention from the text in front of you to a nearby person, to the sounds in your backyard, to a thought or a memory. Covert attention is the virtual movement of deep processing from one item to another.

The cortex needs to control that virtual movement, and therefore like any efficient controller it needs an internal model. Unlike the tectum, which models concrete objects like the eyes and the head, the cortex must model something much more abstract. According to the AST, it does so by constructing an attention schema—a constantly updated set of information that describes what covert attention is doing moment-by-moment and what its consequences are.

Consider an unlikely thought experiment. If you could somehow attach an external speech mechanism to a crocodile, and the speech mechanism had access to the information in that attention schema in the crocodile’s wulst, that technology-assisted crocodile might report, “I’ve got something intangible inside me. It’s not an eyeball or a head or an arm. It exists without substance. It’s my mental possession of things. It moves around from one set of items to another. When that mysterious process in me grasps hold of something, it allows me to understand, to remember, and to respond.”

The crocodile would be wrong, of course. Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence. And this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description. Alas crocodiles can’t really talk. But in this theory, they’re likely to have at least a simple form of an attention schema.

When I think about evolution, I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.

When psychologists study social cognition, they often focus on something called theory of mind, the ability to understand the possible contents of someone else’s mind. Some of the more complex examples are limited to humans and apes. But experiments show that a dog can look at another dog and figure out, “Is he aware of me?” Crows also show an impressive theory of mind. If they hide food when another bird is watching, they’ll wait for the other bird’s absence and then hide the same piece of food again, as if able to compute that the other bird is aware of one hiding place but unaware of the other. If a basic ability to attribute awareness to others is present in mammals and in birds, then it may have an origin in their common ancestor, the reptiles. In the AST’s evolutionary story, social cognition begins to ramp up shortly after the reptilian wulst evolved. Crocodiles may not be the most socially complex creatures on earth, but they live in large communities, care for their young, and can make loyal if somewhat dangerous pets.

If AST is correct, 300 million years of reptilian, avian, and mammalian evolution have allowed the self-model and the social model to evolve in tandem, each influencing the other. We understand other people by projecting ourselves onto them. But we also understand ourselves by considering the way other people might see us. Data from my own lab suggests that the cortical networks in the human brain that allow us to attribute consciousness to others overlap extensively with the networks that construct our own sense of consciousness.

Language is perhaps the most recent big leap in the evolution of consciousness. Nobody knows when human language first evolved. Certainly we had it by 70 thousand years ago when people began to disperse around the world, since all dispersed groups have a sophisticated language. The relationship between language and consciousness is often debated, but we can be sure of at least this much: once we developed language, we could talk about consciousness and compare notes. We could say out loud, “I’m conscious of things. So is she. So is he. So is that damn river that just tried to wipe out my village.”

Maybe partly because of language and culture, humans have a hair-trigger tendency to attribute consciousness to everything around us. We attribute consciousness to characters in a story, puppets and dolls, storms, rivers, empty spaces, ghosts and gods. Justin Barrett called it the Hyperactive Agency Detection Device, or HADD. One speculation is that it’s better to be safe than sorry. If the wind rustles the grass and you misinterpret it as a lion, no harm done. But if you fail to detect an actual lion, you’re taken out of the gene pool. To me, however, the HADD goes way beyond detecting predators. It’s a consequence of our hyper-social nature. Evolution turned up the amplitude on our tendency to model others and now we’re supremely attuned to each other’s mind states. It gives us our adaptive edge. The inevitable side effect is the detection of false positives, or ghosts.

And so the evolutionary story brings us up to date, to human consciousness—something we ascribe to ourselves, to others, and to a rich spirit world of ghosts and gods in the empty spaces around us. The AST covers a lot of ground, from simple nervous systems to simulations of self and others. It provides a general framework for understanding consciousness, its many adaptive uses, and its gradual and continuing evolution.

http://www.theatlantic.com/science/archive/2016/06/how-consciousness-evolved/485558/

Thanks to Dan Brat for bringing this to the It’s Interesting community.

2 men fall off cliff playing Pokemon Go

Two men in their early 20s fell an estimated 50 to 90 feet down a cliff in Encinitas, California, on Wednesday afternoon while playing “Pokémon Go,” San Diego County Sheriff’s Department Sgt. Rich Eaton said. The men sustained injuries, although the extent is not clear.

Pokémon Go is a free-to-play app that gets users up and moving in the real world to capture fictional “pocket monsters” known as Pokémon. The goal is to capture as many of the more than one hundred species of animated Pokémon as you can.

Apparently it wasn’t enough that the app warns users to stay aware of surroundings or that signs posted on a fence near the cliff said “No Trespassing” and “Do Not Cross.” When firefighters arrived at the scene, one of the men was at the bottom of the cliff while the other was three-quarters of the way down and had to be hoisted up, Eaton said.

Both men were transported to Scripps Memorial Hospital La Jolla. They were not charged with trespassing.

Eaton encourages players to be careful. “It’s not worth life or limb,” he said.

In parts of San Diego County, there are warning signs for gamers not to play while driving. San Diego Gas and Electric tweeted a warning to stay away from electric lines and substations when catching Pokémon.

This is the latest among many unexpected situations gamers have found themselves in, despite the game being released just more than a week ago. In one case, armed robbers lured lone players of the wildly popular augmented reality game to isolated locations. In another case, the game led a teen to discover a dead body.

http://www.cnn.com/2016/07/15/health/pokemon-go-players-fall-down-cliff/index.html

You are surprisingly likely to have a living doppelganger

By Zaria Gorvett

It’s on your passport. It’s how criminals are identified in a line-up. It’s how you’re recognised by old friends on the street, even after years apart. Your face: it’s so tangled up with your identity, soon it may be all you need to unlock your smartphone, access your office or buy a house.

Underpinning it all is the assurance that your looks are unique. And then, one day your illusions are smashed.

“I was the last one on the plane and there was someone in my seat, so I asked the guy to move. He turned around and he had my face,” says Neil Douglas, who was on his way to a wedding in Ireland when it happened.

“The whole plane looked at us and laughed. And that’s when I took the selfie.” The uncanny events continued when Douglas arrived at his hotel, only to find the same double at the check-in desk. Later their paths crossed again at a bar and they accepted that the universe wanted them to have a drink. He woke up the next morning with a hangover and an Argentinian radio show on the phone – the picture had gone viral.

Folk wisdom has it that everyone has a doppelganger; somewhere out there there’s a perfect duplicate of you, with your mother’s eyes, your father’s nose and that annoying mole you’ve always meant to have removed. The notion has gripped the popular imagination for millennia – it was the subject of one of the oldest known works of literature – inspiring the work of poets and scaring queens to death.

But is there any truth in it? We live on a planet of over seven billion people, so surely someone else is bound to have been born with your face? It’s a silly question with serious implications – and the answer is more complicated than you might think.

In fact until recently no one had ever even tried to find out. Then last year Teghan Lucas set out to test the risk of mistaking an innocent double for a killer.

Armed with a public collection of photographs of U.S. military personnel and the help of colleagues from the University of Adelaide, Teghan painstakingly analysed the faces of nearly four thousand individuals, measuring the distances between key features such as the eyes and ears. Next she calculated the probability that two people’s faces would match.

What she found was good news for the criminal justice system, but likely to disappoint anyone pining for their long-lost double: the chances of sharing just eight dimensions with someone else are less than one in a trillion. Even with 7.4 billion people on the planet, that’s only a one in 135 chance that there’s a single pair of doppelgangers. “Before you could always be questioned in a court of law, saying ‘well what if someone else just looks like him?’ Now we can say it’s extremely unlikely,” says Teghan.

The results can be explained by the famed infinite monkey problem: sit a monkey in front of a typewriter for long enough and eventually it will surely write the Complete Works of William Shakespeare by randomly hitting, biting and jumping up and down on the keys.

It’s a mathematical certainty, but reversing the problem reveals just how staggeringly long the monkey would have to toil. Ignoring grammar, the monkey has a one in 26 chance of correctly typing the first letter of Macbeth. So far, so good. But already by the second letter the chance has shrunk to one in 676 (26 x 26), and by the end of the fourth line (22 letters) it’s about one in 13 nonillion (26^22, or roughly 1.3 x 10^31). When you multiply probabilities together, the chances of something actually happening disappear very, very quickly.
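
The collapse is easy to reproduce in a few lines of Python (integers in Python have arbitrary precision, so the exact odds are simple to print):

```python
# Probability of a random typist getting the first n letters right,
# ignoring case, spaces and punctuation: (1/26) ** n.

for n in (1, 2, 4, 22):
    print(f"{n:>2} letters: 1 in {26 ** n:.3e}")

# Output:
#  1 letters: 1 in 2.600e+01
#  2 letters: 1 in 6.760e+02
#  4 letters: 1 in 4.570e+05
# 22 letters: 1 in 1.347e+31
```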

Besides, the wide array of human guises is undoubtedly down to more than eight traits. Far from everyone having a long-lost “twin”, in Teghan’s view it’s more likely that no one does.

But that’s not quite the end of the story. The study relied on exact measurements; if your doppelganger’s ears are 59 mm but yours are 60, your likeness wouldn’t count. In any case, you probably won’t remember the last time you clocked an uncanny resemblance based on the length of someone’s ears.

There may be another way – and it all comes down to what you mean by a doppelganger. “It depends whether we mean ‘lookalike to a human’ or ‘lookalike to facial recognition software’,” says David Aldous, a statistician at U.C. Berkeley.

Francois Brunelle, who has photographed over 200 pairs of doubles for his project I’m not a look-alike, agrees. “For me it’s when you see someone and you think it’s the other person. It’s the way of being, the sum of the parts.” When seen apart, his subjects looked like perfect clones. “When you get them together and you see them side by side, sometimes you feel that they are not the same at all.”

If fine details aren’t important, suddenly the possibility of having a lookalike looks a lot more realistic. But is this really true? To find out, first we need to get to grips with what’s going on when we recognise a familiar face.

Take the illusion of Bill Clinton and Al Gore that circulated on the internet before their re-election in 1996. It features a seemingly unremarkable picture of the two men standing side by side. On closer inspection, you can see that Gore’s “internal” facial features – his eyes, nose and mouth – have been replaced by Clinton’s. Even without these traits, with his underlying facial structure intact Al Gore looks completely normal.

It’s a striking demonstration of the way faces are stored in the brain: more like a map than an image. When you bump into a friend on the street, the brain immediately sets to work recognising their features – such as hairline and skin tone – individually, like recognising Italy by its shape alone. But what if they’ve just had a haircut? Or they’re wearing makeup?

To ensure they can be recognised in any context, the brain employs an area known as the fusiform gyrus to tie all the pieces together. If you compare it to finding a country on a map, this is like checking it has a border with France and a coast. This holistic ‘sum of the parts’ perception is thought to make recognising friends a lot more accurate than it would be if their features were assessed in isolation. Crucially, it also fudges the importance of some of the subtler details.

“Most people concentrate on superficial characteristics such as hair-line, hair style, eyebrows,” says Nick Fieller, a statistician involved in The Computer-Aided Facial Recognition Project. Other research has shown we look to the eyes, mouth and nose, in that order.

Then it’s just a matter of working out the probability that someone else will have all the same versions as you. “There are only so many genes in the world which specify the shape of the face and millions of people, so it’s bound to happen,” says Winrich Freiwald, who studies face perception at Rockefeller University. “For somebody with an ‘average’ face it’s comparatively easy to find good matches,” says Fieller.

Let’s assume our man has short blonde hair, brown eyes, a fleshy nose (like Prince Philip, the Duke of Edinburgh), a round face and a full beard. Research into the prevalence of these features is hard to come by, but he’s off to a promising start: 55% of the global population has brown eyes.

Meanwhile more than one in ten people have round faces, according to research funded by a cosmetics company. Then there’s his nose. A study of photographs taken in Europe and Israel identified the ‘fleshy’ type as the most prevalent (24.2%). In the author’s view these are also the least attractive.

Finally – how much hair is there out there? If you thought this was too frivolous for serious investigation, you’d be wrong: among 24,300 people surveyed at a Florida theme park, 82% of men had hair shorter than shoulder-length. Natural blondes, however, constitute just 2%. As the ‘beard capital’ of the world, in the UK most men have some form of facial hair and nearly one in six have a full beard.

A simple calculation (male x brown eyes x blonde x round face x fleshy nose x short hair x full beard) reveals that the probability of a person possessing all these features is just over one in 100,000 (about 0.0000102, or 0.00102%).
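
For illustration, here is that multiplication spelled out, using the rough prevalence figures quoted above. Treating the traits as independent, and guessing the male share of the population at one half, are simplifications of my own, which is why this version lands nearer one in 55,000 than the article's one in 100,000; the point is how quickly the product shrinks:

```python
# Back-of-the-envelope version of the trait multiplication. The prevalence
# figures are the approximate ones quoted in the article; independence and
# the 50% male share are assumptions, so the result is only indicative.

traits = {
    "male": 0.50,
    "brown eyes": 0.55,
    "natural blonde": 0.02,
    "round face": 0.10,
    "fleshy nose": 0.242,
    "short hair": 0.82,
    "full beard": 1 / 6,
}

p = 1.0
for share in traits.values():
    p *= share

print(f"combined probability: {p:.2e} (about 1 in {1 / p:,.0f})")
print(f"expected matches among 7.4 billion people: {7.4e9 * p:,.0f}")
```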

That would give our guy no less than 74,000 potential doppelgangers. Of course many of these prevalence rates aren’t global, so this is very imprecise. But judging by the number of celebrity look-alikes out there, it might not be far off. “After the picture went viral I think there was a small army of us at some point,” says Douglas.

So what’s the probability that everyone has a duplicate roaming the earth? The simplest way to guess would be to estimate the number of possible faces and compare it to the number of people alive today.

You might expect that even if there are 7.4 billion different faces out there, with 7.4 billion people on the planet there’s clearly one for everyone. But there’s a catch. You’d actually need close to 150 billion people for that to be statistically likely. The discrepancy is down to a statistical quirk known as the coupon collector’s problem. Let’s say there are 50 coupons in a jar and each time you draw one it’s put back in. How many would you need to draw before it’s likely you’ve chosen each coupon at least once?

It takes very little time to collect the first few coupons. The trouble is finding the last few: on average drawing the last one takes about 50 draws on its own, so to collect all 50 you need about 225. It’s possible that most people have a doppelganger – but everyone? “There’s a big difference between being lucky sometimes and being lucky always,” says Aldous.
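
Both figures check out against the standard result: for n coupons, the expected number of draws is n times the n-th harmonic number. A quick sketch (the 150 billion above is the same order of magnitude the formula gives):

```python
import math

# Coupon collector's problem: the expected number of draws needed to see
# all n coupons is n * H_n, where H_n = 1 + 1/2 + ... + 1/n.

def expected_draws(n: int) -> float:
    return n * sum(1 / k for k in range(1, n + 1))

print(round(expected_draws(50)))  # prints: 225, matching the jar example

# For large n, H_n is roughly ln(n) + 0.5772 (the Euler-Mascheroni constant).
# Treating every distinct face as a coupon, 7.4 billion faces would need:
n = 7_400_000_000
print(f"{n * (math.log(n) + 0.5772):.2e} draws")  # prints: 1.72e+11 draws
```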

No one has any good idea what the first number is. Indeed, it may never be possible to say definitively, since the perception of facial resemblance is subjective. Some people have trouble recognising themselves in photos, while others rarely forget a face. And how we perceive similarity is heavily influenced by familiarity. “Some doubles when they get together, they say ‘No I don’t see it. Really, I don’t.’ It’s so obvious to everyone else; it’s a little crazy to hear that,” says Brunelle.

Even so, Fieller thinks there’s a good chance. “I think most people have somebody who is a facial lookalike unless they have a truly exceptional and unusual face,” he says. Freiwald agrees. “I think in the digital age which we are entering, at some point we will know because there will be pictures of almost everyone online,” he says.

Why are we so interested anyway? “If you meet someone that looks like you, you have an instant bond because you share something.” Brunelle has received interest from thousands of people searching for their lookalikes, especially from China – a fact he puts down to the one-child policy. Research has shown we judge similar-looking people to be more trustworthy and attractive – a factor thought to contribute to our voting choices.

It may stem back to our deep evolutionary past, when facial resemblance was a useful indicator of kinship. In today’s globalised world, this is misguided. “It is entirely possible for two people with similar facial features to have DNA that is no more similar than that of two random people,” says Lavinia Paternoster, a geneticist at the University of Bristol.

And before you go fantasising about doing a temporary life-swap with your ‘twin’, there’s no guarantee you’ll have anything in common physically either. “Well I’m 5’7 and he’s 6’3… so it’s mainly in the face,” says Douglas.

http://www.bbc.com/future/story/20160712-you-are-surprisingly-likely-to-have-a-living-doppelganger