How To Make Your Face (Digitally) Unforgettable


Thanks to new research out of MIT, you might one day be able to subtly manipulate your picture to make it more memorable — meaning that people should be more likely to remember your face.

According to the research article: “One ubiquitous fact about people is that we cannot avoid evaluating the faces we see in daily life … In this flash judgment of a face, an underlying decision is happening in the brain — should I remember this face or not? Even after seeing a picture for only half a second we can often remember it.”

There are subjective factors affecting how a face sticks in your memory — for example, if you know someone else who looks similar, you might find a new face more familiar. But researchers found that there is also a strong universal component to memorability. Some faces are just consistently more easily remembered.

Researchers found that certain associations help make a face memorable: familiarity, kindness, trustworthiness, uniqueness.
“The basic idea is that if there is someone you have never seen [before] and … this person looks familiar — then, if this person looks kind, trustworthy and distinct, then it will be easier to remember them,” says Aude Oliva, a principal research scientist at MIT’s Computer Science and Artificial Intelligence Lab.

But, she says, there’s no “recipe” for exactly how to make facial features more memorable; it differs from face to face. Instead, the researchers are working toward creating an app or demo that would analyze thousands of versions of any face, each with tiny modifications, and figure out which is the most memorable, without changing other key aspects like attractiveness, age or expression.

“Manipulating faces is a very tricky process,” Oliva says. “The changes must be subtle and keep the original features of the portrait.”
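
The search described above lends itself to a simple hill-climbing loop. Below is a minimal Python sketch of that idea, not the team's actual system: score_memorability, perturb and constraints_ok are hypothetical stand-ins for a learned memorability predictor, a function that applies one tiny random modification, and a check that attractiveness, age and expression are preserved.

```python
def optimize_face(image, score_memorability, perturb, constraints_ok,
                  n_candidates=1000, n_rounds=10):
    """Toy hill-climbing search of the kind the MIT work describes:
    generate many subtly modified versions of a face, keep the version
    a memorability model scores highest, and repeat.

    All three callables are hypothetical stand-ins, not the MIT code.
    """
    best, best_score = image, score_memorability(image)
    for _ in range(n_rounds):
        for _ in range(n_candidates):
            candidate = perturb(best)
            # Reject edits that alter key attributes of the portrait.
            if not constraints_ok(image, candidate):
                continue
            score = score_memorability(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best
```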

What’s the point of capitalizing on that? The researchers suggest that social network users could upload more memorable profile pictures, or that job applicants could include a digitally remastered portrait to “more readily stick in the minds of potential employers,” according to the MIT press release (although, take note, job applicants: Business Insider says including your photo with a resume is a no-no anyway).

It could also be used in movies to make the lead characters stand out while the extras fade into the background.

At first glance, the project could seem deceptive or disparaging, as if it’s exploiting our memory or telling us our natural faces aren’t good enough for LinkedIn. But Oliva stresses that the changes are very subtle. And, we wonder, is it any different than using Photoshop to touch up a profile picture or using makeup to make an anchor’s face look more striking on TV?

http://www.npr.org/blogs/alltechconsidered/2014/01/09/261064231/how-to-make-your-face-digitally-unforgettable

Iron Man suit being developed by U.S. Army


Researchers at the Massachusetts Institute of Technology, the U.S. Army Research, Development and Engineering Command (RDECOM) and other groups from business and academia are joining forces to create a Tactical Assault Light Operator Suit, or TALOS, that “promises to provide superhuman strength with greater ballistic protection,” according to a statement released by the U.S. Army.

The most amazing features of the suit include integrated 360-degree cameras not unlike Google Glass (but with night vision capabilities), sensors that can detect injuries and apply a wound-sealing foam, and — get ready for this — a bulletproof exoskeleton made of magnetorheological fluids that can change from liquid to solid in milliseconds when a magnetic field or electrical current is applied.

If it all reminds you of the liquid-metal shapeshifter T-1000 from “Terminator 2” or some other sci-fi character, you’re not alone. “It sounds exactly like ‘Iron Man,’” Gareth McKinley, a professor at MIT, told NPR. “The other kind of things that you see in the movies I think that would be more realistic at the moment would be the kind of external suit that Sigourney Weaver wears in ‘Aliens,’ where it’s a large robot that amplifies the motions and lifting capability of a human.”

The developers from RDECOM, MIT and elsewhere are researching “every aspect making up this combat armor suit,” Lt. Col. Karl Borjes, an RDECOM science adviser, said in the U.S. Army statement. “It’s advanced armor. It’s communications, antennas. It’s cognitive performance. It’s sensors, miniature-type circuits. That’s all going to fit in here, too.”

Not everyone, however, is enamored with the super-advanced gizmos being proposed for the soldiers of tomorrow. “My sense is it is an up-armored Pinocchio,” Scott Neil, a retired special forces master sergeant and Silver Star recipient, told the Tampa Tribune. “Now the commander can shove a monkey in a suit and ask us to survive a machine gun, IED [improvised explosive device] and poor intelligence all on the same objective. And when you die in it, as it melds to your body, you can bury them in it.”

Even believers in the TALOS suit acknowledge its limitations. “The acronym TALOS was chosen deliberately,” McKinley said. “It’s the name of the bronze armored giant from ‘Jason and the Argonauts.’ Like all good superheroes, Talos has one weakness. For the Army’s TALOS, the weak spot is either the need to carry around a heavy pump for a hydraulic system, or lots of heavy batteries. We don’t have Iron Man’s power source yet.”

For would-be sci-fi superheroes who are ready for their very own TALOS, the wait may prove excruciating: Though various components of the suit are currently in development, the Army hopes to have a prototype ready next year, and an advanced model won’t be developed until at least two years after that.

http://www.livescience.com/40325-army-iron-man-suit-talos.html

Scientists create never-before-seen form of matter


Harvard and MIT scientists are challenging the conventional wisdom about light, and they didn’t need to go to a galaxy far, far away to do it.

Working with colleagues at the Harvard-MIT Center for Ultracold Atoms, a group led by Harvard Professor of Physics Mikhail Lukin and MIT Professor of Physics Vladan Vuletic has managed to coax photons into binding together to form molecules – a state of matter that, until recently, had been purely theoretical. The work is described in a September 25 paper in Nature.

The discovery, Lukin said, runs contrary to decades of accepted wisdom about the nature of light. Photons have long been described as massless particles which don’t interact with each other – shine two laser beams at each other, he said, and they simply pass through one another.

“Photonic molecules,” however, behave less like traditional lasers and more like something you might find in science fiction – the light saber.

“Most of the properties of light we know about originate from the fact that photons are massless, and that they do not interact with each other,” Lukin said. “What we have done is create a special type of medium in which photons interact with each other so strongly that they begin to act as though they have mass, and they bind together to form molecules. This type of photonic bound state has been discussed theoretically for quite a while, but until now it hadn’t been observed.

“It’s not an in-apt analogy to compare this to light sabers,” Lukin added. “When these photons interact with each other, they’re pushing against and deflecting each other. The physics of what’s happening in these molecules is similar to what we see in the movies.”

To get the normally massless photons to bind to each other, Lukin and colleagues, including Harvard postdoctoral fellow Ofer Firstenberg, former Harvard doctoral student Alexey Gorshkov and MIT graduate students Thibault Peyronel and Qi-Yu Liang, couldn’t rely on something like the Force – they instead turned to a set of more extreme conditions.

Researchers began by pumping rubidium atoms into a vacuum chamber, then used lasers to cool the cloud of atoms to just a few degrees above absolute zero. Using extremely weak laser pulses, they then fired single photons into the cloud of atoms.

As a photon enters the cloud of cold atoms, Lukin said, its energy excites atoms along its path, causing the photon to slow dramatically. As the photon moves through the cloud, that energy is handed off from atom to atom, and eventually exits the cloud together with the photon.

“When the photon exits the medium, its identity is preserved,” Lukin said. “It’s the same effect we see with refraction of light in a water glass. The light enters the water, it hands off part of its energy to the medium, and inside it exists as light and matter coupled together, but when it exits, it’s still light. The process that takes place is the same; it’s just a bit more extreme – the light is slowed considerably, and a lot more energy is given away than during refraction.”

When Lukin and colleagues fired two photons into the cloud, they were surprised to see them exit together, as a single molecule.

The reason they form the never-before-seen molecules?

An effect called the Rydberg blockade, Lukin said, whereby an excited atom prevents nearby atoms from being excited to the same degree. In practice, this means that as two photons enter the atomic cloud, the first excites an atom but must move forward before the second photon can excite nearby atoms.

The result, he said, is that the two photons push and pull each other through the cloud as their energy is handed off from one atom to the next.

“It’s a photonic interaction that’s mediated by the atomic interaction,” Lukin said. “That makes these two photons behave like a molecule, and when they exit the medium they’re much more likely to do so together than as single photons.”
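
A crude way to see why the blockade binds two photons is a toy hopping model. The Python sketch below is an illustrative caricature with invented parameters, not the actual quantum dynamics: two excitations hop forward along a chain of atoms, and any hop that would bring them closer than the blockade radius is suppressed, so they cross the medium locked together.

```python
import random

def propagate_photons(n_sites=200, blockade_radius=5, steps=2000, seed=1):
    """Cartoon of two photons crossing a Rydberg medium: each photon's
    energy sits on one atom at a time and hops toward the far side, but
    the blockade forbids two excitations within blockade_radius sites,
    so the trailing photon stalls until the leading one moves on."""
    rng = random.Random(seed)
    pos = [blockade_radius, 0]  # leading and trailing excitation sites
    for _ in range(steps):
        i = rng.randrange(2)          # pick a photon to attempt a hop
        new, other = pos[i] + 1, pos[1 - i]
        if new < n_sites and abs(new - other) >= blockade_radius:
            pos[i] = new              # hop allowed outside the blockade
    return pos

# The two excitations emerge a fixed blockade radius apart: correlated,
# as if bound into a single "molecule".
print(propagate_photons())
```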

While the effect is unusual, it does have some practical applications as well.

“We do this for fun, and because we’re pushing the frontiers of science,” Lukin said. “But it feeds into the bigger picture of what we’re doing because photons remain the best possible means to carry quantum information. The handicap, though, has been that photons don’t interact with each other.”

To build a quantum computer, he explained, researchers need to build a system that can preserve quantum information, and process it using quantum logic operations. The challenge, however, is that quantum logic requires interactions between individual quanta so that quantum systems can be switched to perform information processing.

“What we demonstrate with this process allows us to do that,” Lukin said. “Before we make a useful, practical quantum switch or photonic logic gate we have to improve the performance, so it’s still at the proof-of-concept level, but this is an important step. The physical principles we’ve established here are important.”

The system could even be useful in classical computing, Lukin said, considering the power-dissipation challenges chip-makers now face. A number of companies – including IBM – have worked to develop systems that rely on optical routers that convert light signals into electrical signals, but those systems face their own hurdles.

Lukin also suggested that the system might one day even be used to create complex three-dimensional structures – such as crystals – wholly out of light.

“What it will be useful for we don’t know yet, but it’s a new state of matter, so we are hopeful that new applications may emerge as we continue to investigate these photonic molecules’ properties,” he said.

http://phys.org/news/2013-09-scientists-never-before-seen.html

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

New theory on why some people may be better than others at getting inside people’s heads


Humans have an impressive ability to take on other viewpoints – it’s crucial for a social species like ours. So why are some of us better at it than others?

Picture two friends, Sally and Anne, having a drink in a bar. While Sally is in the bathroom, Anne decides to buy another round, but she notices that Sally has left her phone on the table. So no one can steal it, Anne puts the phone into her friend’s bag before heading to the bar. When Sally returns, where will she expect to see her phone?

If you said she would look at the table where she left it, congratulations! You have a theory of mind – the ability to understand that another person may have knowledge, ideas and beliefs that differ from your own, or from reality.

If that sounds like nothing out of the ordinary, perhaps it’s because we usually take it for granted. Yet it involves doing something no other animal can do to the same extent: temporarily setting aside our own ideas and beliefs about the world – that the phone is in the bag, in this case – in order to take on an alternative world view.

This process, also known as “mentalising”, not only lets us see that someone else can believe something that isn’t true, but also lets us predict other people’s behaviour, tell lies, and spot deceit by others. Theory of mind is a necessary ingredient in the arts and religion – after all, a belief in the spirit world requires us to conceive of minds that aren’t present – and it may even determine the number of friends we have.

Yet our understanding of this crucial aspect of our social intelligence is in flux. New ways of investigating and analysing it are challenging some long-held beliefs. As the dust settles, we are getting glimpses of how this ability develops, and why some of us are better at it than others. Theory of mind has “enormous cultural implications”, says Robin Dunbar, an evolutionary anthropologist at the University of Oxford. “It allows you to look beyond the world as we physically see it, and imagine how it might be different.”

The first ideas about theory of mind emerged in the 1970s, when it was discovered that at around the age of 4, children make a dramatic cognitive leap. The standard way to test a child’s theory of mind is called the Sally-Anne test, and it involves acting out the chain of events described earlier, only with puppets and a missing ball.

When asked, “When Sally returns, where will she look for the ball?”, most 3-year-olds say with confidence that she’ll look in the new spot, where Anne has placed it. The child knows the ball’s location, so they cannot conceive that Sally would think it was anywhere else.

Baby change
But around the age of 4, that changes. Most 4- and 5-year-olds realise that Sally will expect the ball to be just where she left it.

For over two decades that was the dogma, but more recently those ideas have been shaken. The first challenge came in 2005, when it was reported in Science (vol 308, p 255) that theory of mind seemed to be present in babies just 15 months old.

Such young children cannot answer questions about where they expect Sally to look for the ball, but you can tell what they’re thinking by having Sally look in different places and noting how long they stare: babies look for longer at things they find surprising.

When Sally searched for a toy in a place she should not have expected to find it, the babies did stare for longer. In other words, babies barely past their first birthdays seemed to understand that people can have false beliefs. More remarkable still, similar findings were reported in 2010 for 7-month-old infants (Science, vol 330, p 1830).
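
The looking-time logic boils down to a comparison of staring durations between an expected and an unexpected event. Here is a minimal sketch, using entirely invented numbers and a hand-rolled Welch's t statistic purely to illustrate the analysis; real studies use many infants and more careful statistics.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(b) - mean(a)) / sqrt(va / len(a) + vb / len(b))

# Invented looking times (seconds). Longer stares when Sally searches
# somewhere she should not expect the toy suggest the infants noticed
# a violated expectation.
expected_search   = [8.1, 7.4, 9.0, 6.8, 8.5]
unexpected_search = [12.3, 11.1, 13.4, 10.8, 12.9]
print(f"Welch t = {welch_t(expected_search, unexpected_search):.2f}")
```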

Some say that since theory of mind seems to be present in infants, it must be present in young children as well. Something about the design of the classic Sally-Anne test, these critics argue, must be confusing 3-year-olds.

Yet there’s another possibility: perhaps we gain theory of mind twice. From a very young age we possess a basic, or implicit, form of mentalising, so this theory goes, and then around age 4, we develop a more sophisticated version. The implicit system is automatic but limited in its scope; the explicit system, which allows for a more refined understanding of other people’s mental states, is what you need to pass the Sally-Anne test.

If you think that explanation sounds complicated, you’re not alone. “The key problem is explaining why you would bother acquiring the same concept twice,” says Rebecca Saxe, a cognitive scientist at Massachusetts Institute of Technology.

Yet there are other mental skills that develop twice. Take our sense of number. Long before they can count, infants have an ability to gauge rough quantities; they can distinguish, for instance, between a general sense of “threeness” and “fourness”. Eventually, though, they do learn to count and multiply and so on, although the innate system still hums beneath the surface. Our decision-making ability, too, may develop twice. We seem to have an automatic and intuitive system for making gut decisions, and a second system that is slower and more explicit.

Double-think
So perhaps we also have a dual system for thinking about thoughts, says Ian Apperly, a cognitive scientist at the University of Birmingham, UK. “There might be two kinds of processes, on the one hand for speed and efficiency, and on the other hand for flexibility,” he argues (Psychological Review, vol 116, p 953).

Apperly has found evidence that we still possess the fast implicit system as adults. People were asked to study pictures showing a man looking at dots on a wall; sometimes the man could see all the dots, sometimes not. When asked how many dots there were, volunteers were slower and less accurate if the man could see fewer dots than they could. Even when trying not to take the man’s perspective into account, they couldn’t help but do so, says Apperly. “That’s a strong indication of an automatic process,” he says – in other words, an implicit system working at an unconscious level.
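
The signature of that automatic process is a reaction-time cost on trials where the man's view conflicts with the participant's own. A toy calculation of this congruency cost, on invented trial data, might look like the following.

```python
from statistics import mean

# Hypothetical trials from a dot-perspective task: response time in
# milliseconds, plus whether the on-screen man could see all the dots
# the participant saw ("congruent") or only some of them.
trials = [
    {"congruent": True,  "rt_ms": 612}, {"congruent": True,  "rt_ms": 598},
    {"congruent": True,  "rt_ms": 631}, {"congruent": False, "rt_ms": 689},
    {"congruent": False, "rt_ms": 702}, {"congruent": False, "rt_ms": 676},
]

congruent   = [t["rt_ms"] for t in trials if t["congruent"]]
incongruent = [t["rt_ms"] for t in trials if not t["congruent"]]

# A positive cost means the other person's irrelevant viewpoint slowed
# the participant down: the mark of automatic perspective-taking.
print(f"congruency cost: {mean(incongruent) - mean(congruent):.0f} ms")
```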

If this theory is true, it suggests we should pay attention to our gut feelings about people’s state of mind, says Apperly. Imagine surprising an intruder in your home. The implicit system might help you make fast decisions about what they see and know, while the explicit system could help you to make more calculated judgments about their motives. “Which system is better depends on whether you have time to make the more sophisticated judgement,” says Apperly.

The idea that we have a two-tier theory of mind is gaining ground. Further support comes from a study of people with autism, a group known to have difficulty with social skills, who are often said to lack theory of mind. In fact, tests on a group of high-functioning people with Asperger’s syndrome, a form of autism, showed they had the explicit system, yet they failed at non-verbal tests of the kind that reveal implicit theory of mind in babies (Science, vol 325, p 883). So people with autism can learn explicit mentalising skills, even without the implicit system, although the process remains “a little bit cumbersome,” says Uta Frith, a cognitive scientist at University College London, who led the work. The finding suggests that the capacity to understand others should not be so easily written off in those with autism. “They can handle it when they have time to think about it,” says Frith.

If theory of mind is not an all-or-nothing quality, does that help explain why some of us seem to be better than others at putting ourselves into other people’s shoes? “Clearly people vary,” points out Apperly. “If you think of all your colleagues and friends, some are socially more or less capable.”

Unfortunately, that is not reflected in the Sally-Anne test, the mainstay of theory of mind research for the past four decades. Nearly everyone over the age of 5 can pass it standing on their head.

To get the measure of the variation in people’s abilities, different approaches are needed. One is called the director task; based on a similar idea to Apperly’s dot pictures, this involves people moving objects around on a grid while taking into account the viewpoint of an observer. This test reveals how children and adolescents improve progressively as they mature, only reaching a plateau in their 20s.

How does that timing square with the fact that the implicit system – which the director test hinges on – is supposed to emerge in early infancy? Sarah-Jayne Blakemore, a cognitive neuroscientist at University College London who works with Apperly, has an answer. What improves, she reckons, is not theory of mind per se but how we apply it in social situations using cognitive skills such as planning, attention and problem-solving, which keep developing during adolescence. “It’s the way we use that information when we make decisions,” she says.

So teenagers can blame their reputation for being self-centred on the fact they are still developing their theory of mind. The good news for parents is that most adolescents will learn how to put themselves in others’ shoes eventually. “You improve your skills by experiencing social scenarios,” says Frith.

It is also possible to test people’s explicit mentalising abilities by asking them convoluted “who-thought-what-about-whom” questions. After all, we can do better than realising that our friend mistakenly thinks her phone will be on the table. If such a construct represents “second-order” theory of mind, most of us can understand a fourth-order sentence like: “John said that Michael thinks that Anne knows that Sally thinks her phone will be on the table.”

In fact, Dunbar’s team has shown that fourth order is the limit for about 20 per cent of the general population (British Journal of Psychology, vol 89, p 191). Sixty per cent of us can manage fifth-order theory of mind and the top 20 per cent can reach the heights of sixth order.
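
The “order” of such a sentence is simply the depth of nesting of mental states. As a sketch, nested attributions can be modelled as a small recursive data structure whose order is read off by counting levels; the structure and names below are illustrative, not taken from the study.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Belief:
    """One mental-state attribution: agent, verb, and a content that is
    either a plain proposition or another Belief."""
    agent: str
    verb: str
    content: Union["Belief", str]

def order(node: Union[Belief, str]) -> int:
    """Nesting depth = the order of theory of mind required."""
    return 0 if isinstance(node, str) else 1 + order(node.content)

# The fourth-order sentence from the text, built inside-out.
sentence = Belief("John", "said",
    Belief("Michael", "thinks",
        Belief("Anne", "knows",
            Belief("Sally", "thinks", "her phone is on the table"))))
print(order(sentence))  # -> 4
```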

As well as letting us keep track of our complex social lives, this kind of mentalising is crucial for our appreciation of works of fiction. Shakespeare’s genius, according to Dunbar, was to make his audience work at the edge of their ability, tracking multiple mind states. In Othello, for instance, the audience has to understand that Iago wants jealous Othello to mistakenly think that his wife Desdemona loves Cassio. “He’s able to lift the audience to his limits,” says Dunbar.

So why do some of us operate at the Bard’s level while others are less socially capable? Dunbar argues it’s all down to the size of our brains.

According to one theory, during human evolution the prime driver of our expanding brains was the growing size of our social groups, with the resulting need to keep track of all those relatives, rivals and allies. Dunbar’s team has shown that among monkeys and apes, those living in bigger groups have a larger prefrontal cortex. This is the outermost section of the brain covering roughly the front third of our heads, where a lot of higher thought processes go on.

Last year, Dunbar applied that theory to a single primate species: us. His team got 40 people to fill in a questionnaire about the number of friends they had, and then imaged their brains in an MRI scanner. Those with the biggest social networks had a larger region of the prefrontal cortex tucked behind the eye sockets. They also scored better on theory of mind tests (Proceedings of the Royal Society B, vol 279, p 2157). “The size of the bits of prefrontal cortex involved in mentalising determine your mentalising competencies,” says Dunbar. “And your mentalising competencies then determine the number of friends you have.” It’s a bold claim, and one that has not convinced everyone in the field. After all, correlation does not prove causation. Perhaps having lots of friends makes this part of the brain grow bigger, rather than the other way round, or perhaps a large social network is a sign of more general intelligence.
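
That statistical caveat is easy to make concrete. A correlation coefficient, computed below on invented numbers standing in for prefrontal volume and friend count, measures only how tightly two quantities move together; it says nothing about which, if either, causes the other.

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Invented stand-ins for Dunbar-style measurements: relative orbital
# prefrontal cortex volume and reported number of friends.
pfc_volume   = [0.92, 1.01, 1.10, 0.85, 1.20, 0.95, 1.05]
friend_count = [9, 12, 15, 7, 18, 10, 14]

# A large r is consistent with brain size driving network size, with
# the reverse, or with a third factor driving both.
print(f"r = {pearson_r(pfc_volume, friend_count):.2f}")
```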

Lying robots
What’s more, there seem to be several parts of the brain involved in mentalising – perhaps unsurprisingly for such a complex ability. In fact, so many brain areas have been implicated that scientists now talk about the theory of mind “network” rather than a single region.

A type of imaging called fMRI scanning, which can reveal which parts of the brain “light up” for specific mental functions, strongly implicates a region called the right temporoparietal junction, located towards the rear of the brain, as being crucial for theory of mind. In addition, people with damage to this region tend to fail the Sally-Anne test.

Other evidence has emerged for the involvement of the right temporoparietal junction. When Rebecca Saxe temporarily disabled that part of the brain in healthy volunteers, by holding a magnet above the skull, they did worse at tests that involved considering others’ beliefs while making moral judgments (PNAS, vol 107, p 6753).

Despite the explosion of research in this area in recent years, there is still lots to learn about this nifty piece of mental machinery. As our understanding grows, it is not just our own skills that stand to improve. If we can figure out how to give mentalising powers to computers and robots, they could become a lot more sophisticated. “Part of the process of socialising robots might draw upon things we’re learning from how people think about people,” Apperly says.

For instance, programmers at the Georgia Institute of Technology in Atlanta have developed robots that can deceive each other and leave behind false clues in a high-tech game of hide-and-seek. Such projects may ultimately lead to robots that can figure out the thoughts and intentions of people.

For now, though, the remarkable ability to thoroughly worm our way into someone else’s head exists only in the greatest computer of all – the human brain.

(Article by Kirsten Weir, who is a science writer based in Minneapolis).

http://beyondmusing.wordpress.com/2013/06/07/mind-reading-how-we-get-inside-other-peoples-heads/

Never Scrape Again: Windshield Coating Repels Frost


A fogged-up camera lens can ruin a perfect shot, and a frosty car window can lead to potentially deadly accidents. To help keep glass clear in harsh weather, scientists are developing an advanced new coating that resists both fogging and frosting.

Glass fogs up and frosts because of water. So you might assume so-called hydrophobic materials, which repel water, provide the best method of fighting such moisture. However, these solutions tend only to make water bead up, scattering light and obscuring views.

Researchers have also experimented with the opposite tactic, attempting to prevent fogging and frosting using hydrophilic materials, which attract water. Here, researchers hope to smear water across the glass surfaces in uniform sheets, to keep the moisture from distorting light. Although these materials work against fog, they can’t prevent frosting. When cold glass encounters humid air, the layer of water that develops simply freezes.

However, the new coating possesses both water-repelling and water-attracting properties, so it works against both fog and frost. The material contains organic compounds with both hydrophilic and hydrophobic components. The hydrophilic ingredients love water so much they absorb moisture, trapping it and keeping it from easily forming ice crystals. This lowers water’s usual freezing temperature and dramatically reduces frosting.

Meanwhile, the material’s hydrophobic components help repel contaminants that might spoil the hydrophilic effect.

“We have no freezing of water, even at low temperatures. It remains completely clear,” researcher Michael Rubner, a materials scientist at MIT, told TechNewsDaily.

When the new coating warms up from the freezing cold, it releases the water, “which just evaporates,” Rubner added.

The new coating does have its limits. “If it’s overwhelmed with water, any excess water can freeze,” Rubner said. “You wouldn’t want this on an airplane wing that constantly gets water on it, but an application like eyeglasses or windshields, it can be amazing.”

The researchers are now seeking to enhance the material’s durability to mechanical stresses. They detailed their findings online Jan. 29 in the journal ACS Nano.

http://www.livescience.com/27611-never-scrape-again-windshield-coating-repels-frost.html

Next Great Depression? MIT researchers predict ‘global economic collapse’ by 2030


A new study from researchers at Jay W. Forrester’s institute at MIT says that the world could suffer from “global economic collapse” and “precipitous population decline” if people continue to consume the world’s resources at the current pace.

Smithsonian Magazine writes that Australian physicist Graham Turner says “the world is on track for disaster” and that current evidence coincides with a famous (and in some quarters infamous) academic report from 1972 entitled “The Limits to Growth.”

The report was produced for a group called the Club of Rome. Its researchers created a computer model to forecast different scenarios based on trends in population growth and global resource consumption, and also took into account different levels of agricultural productivity, birth control and environmental protection efforts. Twelve million copies of the report were distributed in 37 languages.

Most of the computer scenarios found population and economic growth continuing at a steady rate until about 2030. But without “drastic measures for environmental protection,” the scenarios predicted a likely population and economic crash.
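
The scenarios in “The Limits to Growth” came from a system-dynamics model known as World3, and the overshoot-and-collapse shape they describe can be reproduced by far simpler machinery. The sketch below is a drastically simplified caricature with invented coefficients, not the actual model: population grows while nonrenewable resources hold out, then declines as scarcity bites.

```python
def run_scenario(years=200, pop=1.0, resources=1.0,
                 growth=0.02, use_rate=0.004, crash=0.05):
    """Toy overshoot-and-collapse loop in the spirit of World3.
    All units and coefficients are invented for illustration."""
    history = []
    for year in range(years):
        scarcity = 1.0 - resources  # 0 = plentiful, 1 = exhausted
        pop += pop * (growth * resources - crash * scarcity)
        resources = max(0.0, resources - use_rate * pop)
        history.append((year, pop, resources))
    return history

# Population rises steadily, peaks as resources thin out, then crashes.
for year, pop, res in run_scenario()[::25]:
    print(f"year {year:3d}  population {pop:6.3f}  resources {res:5.3f}")
```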

However, the study said “unlimited economic growth” is still possible if world governments enact policies and invest in green technologies that help limit the expansion of our ecological footprint.


The Smithsonian notes that several experts strongly objected to “The Limits to Growth’s” findings, including the late Yale economist Henry Wallich, who for 12 years served as a governor of the Federal Reserve Board and was its chief international economics expert. At the time, Wallich said attempting to regulate economic growth would be equal to “consigning billions to permanent poverty.”

Turner says that perhaps the most startling finding is that the results of his recent computer scenarios are nearly identical to those of the scenarios that formed the basis of “The Limits to Growth.”

“There is a very clear warning bell being rung here,” Turner said. “We are not on a sustainable trajectory.”

http://news.yahoo.com/blogs/sideshow/next-great-depression-mit-researchers-predict-global-economic-190352944.html

Thanks to Ray Gaudette for bringing this to the attention of the It’s Interesting community.