Scientists Debunk the IQ Myth: Notion of Measuring One’s Intelligence Quotient by Singular, Standardized Test Is Highly Misleading

After conducting the largest online intelligence study on record, a Western University-led research team has concluded that the notion of measuring one’s intelligence quotient or IQ by a singular, standardized test is highly misleading.

The findings from the landmark study, which included more than 100,000 participants, were published Dec. 19 in the journal Neuron. The article, “Fractionating human intelligence,” was written by Adrian M. Owen and Adam Hampshire from Western’s Brain and Mind Institute (London, Canada) and Roger Highfield, Director of External Affairs, Science Museum Group (London, U.K.).

Utilizing an online study open to anyone, anywhere in the world, the researchers asked respondents to complete 12 cognitive tests tapping memory, reasoning, attention and planning abilities, as well as a survey about their background and lifestyle habits.

“The uptake was astonishing,” says Owen, the Canada Excellence Research Chair in Cognitive Neuroscience and Imaging and senior investigator on the project. “We expected a few hundred responses, but thousands and thousands of people took part, including people of all ages, cultures and creeds from every corner of the world.”

The results showed that when a wide range of cognitive abilities are explored, the observed variations in performance can only be explained by at least three distinct components: short-term memory, reasoning and a verbal component.

No one component, or IQ, explained everything. Furthermore, the scientists used a brain-scanning technique known as functional magnetic resonance imaging (fMRI) to show that these differences in cognitive ability map onto distinct circuits in the brain.
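
For readers curious what “distinct components” means in practice, the idea can be illustrated with a toy factor analysis. The sketch below uses made-up data and the scikit-learn library, not the study’s dataset or methods; it simply shows how one asks whether a single latent factor is enough to explain a battery of test scores.

    # Illustrative sketch only: random stand-in data, not the study's data or analysis.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)

    # Pretend scores of 1,000 participants on 12 cognitive tests, generated
    # from 3 hidden abilities (say memory, reasoning, verbal) plus noise.
    abilities = rng.normal(size=(1000, 3))
    loadings = rng.normal(size=(3, 12))
    scores = abilities @ loadings + rng.normal(scale=0.5, size=(1000, 12))

    # Compare how well models with 1, 2 and 3 latent factors fit the scores.
    for k in (1, 2, 3):
        fa = FactorAnalysis(n_components=k).fit(scores)
        print(k, "factor(s), mean log-likelihood:", round(fa.score(scores), 2))
    # A single factor (a lone 'IQ') fits this simulated data markedly worse than three.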

With so many respondents, the results also provided a wealth of new information about how factors such as age, gender and the tendency to play computer games influence our brain function.

“Regular brain training didn’t help people’s cognitive performance at all, yet aging had a profound negative effect on both memory and reasoning abilities,” says Owen.

Hampshire adds, “Intriguingly, people who regularly played computer games did perform significantly better in terms of both reasoning and short-term memory. And smokers performed poorly on the short-term memory and the verbal factors, while people who frequently suffer from anxiety performed badly on the short-term memory factor in particular.”

1. Adam Hampshire, Roger R. Highfield, Beth L. Parkin, Adrian M. Owen. Fractionating Human Intelligence. Neuron, 2012; 76(6): 1225. DOI: 10.1016/j.neuron.2012.06.022

http://www.sciencedaily.com/releases/2012/12/121219133334.htm

Scientists create artificial brain with 2.3 million simulated neurons

Another computer is setting its wits to perform human tasks. But this computer is different. Instead of the tour de force processing of Deep Blue or Watson’s four terabytes of facts of questionable utility, Spaun attempts to play by the same rules as the human brain to figure things out. Instead of the logical elegance of a CPU, Spaun’s computations are performed by 2.3 million simulated neurons configured in networks that resemble some of the brain’s own networks. It was given a series of tasks and performed pretty well, taking a significant step toward the creation of a simulated brain.

Spaun stands for Semantic Pointer Architecture: Unified Network. It was given 6 different tasks that tested its ability to recognize digits, recall from memory, add numbers and complete patterns. Its cognitive network simulated the prefrontal cortex to handle working memory and the basal ganglia and thalamus to control movements. Like a human, Spaun can view an image and then give a motor response; that is, it is presented images that it sees through a camera and then gives a response by drawing with a robotic arm.

And its performance was similar to that of a human brain. For example, in the simplest task, image recognition, Spaun was shown various numbers and asked to draw what it saw. It got 94 percent of the numbers correct. In a working memory task, however, it didn’t do as well. It was shown a series of random numbers and then asked to draw them in order. Like humans, Spaun found the pattern recognition task easy and the working memory task not quite as easy.

The important thing here is not how well Spaun performed on the tasks – your average computer could find ways to perform much better than Spaun. But what’s important is that, in Spaun’s case, the task computations were carried out solely by the 2.3 million artificial neurons spiking in the way real neurons spike to carry information from one neuron to another. The visual image, for example, was processed hierarchically, with multiple levels of neurons successively extracting more complex information, just as the brain’s visual system does. Similarly, the motor response mimicked the brain’s strategy of combining many simple movements to produce an optimal, single movement while drawing.
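
To give a flavour of what “simulated neurons spiking” means, here is a minimal leaky integrate-and-fire neuron, a standard textbook model. It is a sketch only; Spaun’s neurons, built with the researchers’ Nengo framework, are considerably more elaborate.

    # Minimal leaky integrate-and-fire (LIF) neuron: a textbook model,
    # not Spaun's actual implementation.
    dt = 0.001            # time step, in seconds
    tau = 0.02            # membrane time constant, in seconds
    v_thresh = 1.0        # spike threshold (arbitrary units)
    v = 0.0               # membrane potential
    input_current = 1.2   # constant driving input

    spike_times = []
    for step in range(1000):                      # simulate one second
        v += (input_current - v) * (dt / tau)     # leak toward the input level
        if v >= v_thresh:                         # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = 0.0                               # reset after the spike

    print(len(spike_times), "spikes in 1 s of simulated time")   # roughly 28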

Chris Eliasmith, from the University of Waterloo in Ontario, Canada, and lead author of the study, is happy with his cognitive creation. “It’s not as smart as monkeys when it comes to categorization,” he told CNN, “but it’s actually smarter than monkeys when it comes to recognizing syntactic patterns, structured patterns in the input, that monkeys won’t recognize.”

Watch Spaun work through its tasks in the following video.

One thing Spaun can’t do is perform tasks in real time. Every second you saw Spaun performing tasks in the video actually requires 2.5 hours of number crunching by its artificial brain – a slowdown factor of roughly 9,000. The researchers hope to one day have it perform in real time.

It’s important to note that Spaun isn’t actually learning anything by performing these tasks. Its neural nets are hardwired and are incapable of the modifications that real neurons undergo when we learn. But producing complex behavior from a simulated neuronal network still represents an important initial step toward building an artificial brain. Christian Machens, a neuroscientist at the Champalimaud Neuroscience Programme in Lisbon who was not involved in the study, writes in Science that the strategy for building a simulated brain is “to not simply incorporate the largest number of neurons or the greatest amount of detail, but to reproduce the largest amount of functionality and behavior.”

We’re still a long way from artificial intelligence that is sentient and self-aware. And there’s no telling if the robots of the future will have brains that look like ours or if entirely different solutions will be used to produce complex behavior. Whatever it looks like, Spaun is a noble step in the right direction.

Scientists Create Artificial Brain With 2.3 Million Simulated Neurons

Ray Kurzweil joins Google

The best-known advocate of the Singularity is Ray Kurzweil, whom Bill Gates has called one of the best thinkers on the future of technology.

Ray Kurzweil confirmed today that he will be joining Google to work on new projects involving machine learning and language processing.

“I’m excited to share that I’ll be joining Google as Director of Engineering this Monday, December 17,” said Kurzweil.

“I’ve been interested in technology, and machine learning in particular, for a long time: when I was 14, I designed software that wrote original music, and later went on to invent the first print-to-speech reading machine for the blind, among other inventions. I’ve always worked to create practical systems that will make a difference in people’s lives, which is what excites me as an inventor.

“In 1999, I said that in about a decade we would see technologies such as self-driving cars and mobile phones that could answer your questions, and people criticized these predictions as unrealistic. Fast forward a decade — Google has demonstrated self-driving cars, and people are indeed asking questions of their Android phones. It’s easy to shrug our collective shoulders as if these technologies have always been around, but we’re really on a remarkable trajectory of quickening innovation, and Google is at the forefront of much of this development.

“I’m thrilled to be teaming up with Google to work on some of the hardest problems in computer science so we can turn the next decade’s ‘unrealistic’ visions into reality.”

http://www.kurzweilai.net/kurzweil-joins-google-to-work-on-new-projects-involving-machine-learning-and-language-processing

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Breakthrough in Augmented Reality Contact Lens

The Centre of Microsystems Technology (CMST), imec’s associated laboratory at Ghent University (Belgium), has developed an innovative spherical curved LCD display, which can be embedded in contact lenses. The first step toward fully pixelated contact lens displays, this achievement has potential widespread applications in medical and cosmetic domains.

Unlike LED-based contact lens displays, which are limited to a few small pixels, imec’s innovative LCD-based technology permits the use of the entire display surface. By adapting the patterning process of the conductive layer, this technology enables applications with a broad range of pixel numbers and sizes, such as a one-pixel, fully covered contact lens acting as adaptable sunglasses, or a highly pixelated contact lens display.

The first prototype, presented December 5, contains a patterned dollar sign, a nod to the many cartoons that feature people or figures with dollars in their eyes. It can only display rudimentary patterns, similar to an electronic pocket calculator. In the future, the researchers envision fully autonomous electronic contact lenses embedded with this display. These next-generation solutions could be used for medical purposes, for example to control the light transmission toward the retina in case of a damaged iris, or for cosmetic purposes such as an iris with a tunable color. Eventually, the display could also function as a head-up display, superimposing an image onto the user’s normal view. However, there are still hurdles to overcome for broader consumer and civilian implementation.

“Normally, flexible displays using liquid crystal cells are not designed to be formed into a new shape, especially not a spherical one. Thus, the main challenge was to create a very thin, spherically curved substrate with active layers that could withstand the extreme molding processes,” said Jelle De Smet, the main researcher on the project. “Moreover, since we had to use very thin polymer films, their influence on the smoothness of the display had to be studied in detail. By using new kinds of conductive polymers and integrating them into a smooth spherical cell, we were able to fabricate a new LCD-based contact lens display.”

Video: http://www.youtube.com/watch?v=-btRUzoKYEA

http://www.sciencedaily.com/releases/2012/12/121205090931.htm

Scientists at Cornell create Terminator-like organic metamaterial that flows like liquid and remembers its shape

A bit reminiscent of the Terminator T-1000, a new material created by Cornell researchers is so soft that it can flow like a liquid and then, strangely, return to its original shape.

Rather than liquid metal, it is a hydrogel, a mesh of organic molecules with many small empty spaces that can absorb water like a sponge. It qualifies as a “metamaterial” with properties not found in nature and may be the first organic metamaterial with mechanical meta-properties.

Hydrogels have already been considered for use in drug delivery — the spaces can be filled with drugs that release slowly as the gel biodegrades — and as frameworks for tissue rebuilding. The ability to form a gel into a desired shape further expands the possibilities. For example, a drug-infused gel could be formed to exactly fit the space inside a wound.

Dan Luo, professor of biological and environmental engineering, and colleagues describe their creation in the Dec. 2 issue of the journal Nature Nanotechnology.

The new hydrogel is made of synthetic DNA. In addition to being the stuff genes are made of, DNA can serve as a building block for self-assembling materials. Single strands of DNA will lock onto other single strands that have complementary coding, like tiny organic Legos. By synthesizing DNA with carefully arranged complementary sections, Luo’s research team previously created short strands that link into shapes such as crosses or Y’s, which in turn join at the ends to form meshlike structures – the first successful all-DNA hydrogel. Trying a new approach, they mixed synthetic DNA with enzymes that cause DNA to self-replicate and to extend itself into long chains, to make a hydrogel without DNA linkages.
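
The “organic Lego” analogy comes down to Watson-Crick base pairing: A binds T and G binds C, so a single strand will only lock onto its reverse complement. A trivial sketch, with a made-up sequence unrelated to the synthetic DNA used in the study:

    # Watson-Crick complementarity; the sequence below is invented for illustration.
    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def reverse_complement(strand):
        """Return the sequence that base-pairs with `strand` (read 5'->3')."""
        return "".join(COMPLEMENT[base] for base in reversed(strand))

    sticky_end = "ATCGGTA"                    # hypothetical single-stranded overhang
    partner = reverse_complement(sticky_end)  # the only sequence it will lock onto
    print(sticky_end, "pairs with", partner)  # ATCGGTA pairs with TACCGAT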

“During this process they entangle, and the entanglement produces a 3-D network,” Luo explained. But the result was not what they expected: The hydrogel they made flows like a liquid, but when placed in water returns to the shape of the container in which it was formed.

“This was not by design,” Luo said.

Examination under an electron microscope shows that the material is made up of a mass of tiny spherical “bird’s nests” of tangled DNA, about 1 micron (millionth of a meter) in diameter, further entangled to one another by longer DNA chains. It behaves something like a mass of rubber bands glued together: It has an inherent shape, but can be stretched and deformed.

Exactly how this works is “still being investigated,” the researchers said, but they theorize that the elastic forces holding the shape are so weak that a combination of surface tension and gravity overcomes them; the gel just sags into a loose blob. But when it is immersed in water, surface tension is nearly zero — there’s water inside and out — and buoyancy cancels gravity.

To demonstrate the effect, the researchers created hydrogels in molds shaped like the letters D, N and A. Poured out of the molds, the gels became amorphous liquids, but in water they morphed back into the letters. As a possible application, the team created a water-actuated switch. They made a short cylindrical gel infused with metal particles and placed it in an insulated tube between two electrical contacts. In liquid form the gel reaches both ends of the tube and completes a circuit. When water is added, the gel reverts to its shorter form, which will not reach both ends. (The experiment is done with distilled water, which does not conduct electricity.)

The DNA used in this work has a random sequence, and only occasional cross-linking was observed, Luo said. By designing the DNA to link in particular ways he hopes to be able to tune the properties of the new hydrogel.

The research has been partially supported by the U.S. Department of Agriculture and the Department of Defense.

http://www.news.cornell.edu/stories/Dec12/ShapeGel.html

Thanks to Dr. Rajadhyaksha for bringing this to the attention of the It’s Interesting community.

Future technology from Apple

All the way back in February of this year, Apple’s iPhone business alone surpassed the size of Microsoft’s entire business, reaching nearly $25 billion in quarterly revenue versus Microsoft’s ~$20 billion.

Since February, Apple’s iPhone business has only grown, widening this gap.

Here’s the outdated chart from February:

Chart: iPhone vs. Microsoft

Remarkable, isn’t it?

Here’s what’s more remarkable yet: At this very moment, Apple is working on technology that, if successfully developed, will cannibalize and ultimately destroy that iPhone business.

We have two pieces of evidence.

The first is that Apple has established a pattern.

Unlike most companies, Apple has a remarkable ability to predict the kinds of gadgets that will undercut the gadgets it sells, and then build these new gadgets better than anyone else could.

The best example of this is the iPad, which is actively disrupting Apple’s own Mac business.

During Business Insider’s Ignition Conference last week, top Apple analyst Gene Munster of Piper Jaffray talked about Apple’s tendency to cannibalize its own businesses and predicted that it would continue to do so.

He speculated that Apple is working on consumer robotics, wearable computers, 3D printing, consumable computers, and automated technology.

He showed everyone this chart, which visualizes Apple’s pattern:

Chart: Munster on Apple

Here’s the other reason it’s safe to assume Apple is quietly working on the destruction of its most massive business, the iPhone.

Just like Google and Microsoft, Apple is working on computerized glasses. 

Computerized glasses are, at the moment, the technology most likely to bring the smartphone era to an end.

They fit into an obvious pattern, where computers have been getting smaller and closer to our faces since their very beginning. 

First they were in big rooms, then they sat on desktops, then they sat on our laps, and now they’re in our palms. Next they’ll be on our faces. 

We have the rough schematics of Apple’s project.

They’ve been publicly available on the US Patent Office’s website since this summer, when they were noticed by several Apple-watching websites.

In the patent filing, Apple calls the gadget a “head-mounted display” or “HMD.”

The filing is authored by Tony Fadell, designer of the iPod, and John Tang. Fadell is no longer at Apple, but Tang is.

Some highlights from the description:

  • An HMD is “a display device that a person wears on the head in order to have video information directly displayed in front of the eyes.”
  • “The optics are typically embedded in a helmet, glasses, or a visor, which a user can wear.”
  • “HMDs can be used to view a see-through image imposed upon a real world view, thereby creating what is typically referred to as an augmented reality.”
Apple says HMDs can be used…
  • To “display relevant tactical information, such as maps or thermal imaging data.”
  • To “provide stereoscopic views of CAD schematics, simulations or remote sensing applications.”
  • For “gaming and entertainment applications.”
A gadget that features applications for maps, games, and a million other uses? Sounds familiar.

Here’s an illustration from the patent filing:

Illustration from the Apple patent filing

Read more: http://www.businessinsider.com/apple-is-quietly-working-to-destroy-the-iphone-2012-12#ixzz2E7XVXabt

British company claims biggest engine advance since the jet: the SABRE engine

A Skylon in flight with a cutaway of the SABRE engine

 

A small British company with a dream of building a re-usable space plane has won an important endorsement from the European Space Agency (ESA) after completing key tests on its novel engine technology.

Reaction Engines Ltd believes its Sabre engine, which would operate like a jet engine in the atmosphere and a rocket in space, could displace rockets for space access and transform air travel by bringing any destination on Earth to no more than four hours away.

That ambition was given a boost on Wednesday by ESA, which has acted as an independent auditor on the Sabre test programme.

“ESA are satisfied that the tests demonstrate the technology required for the Sabre engine development,” the agency’s head of propulsion engineering Mark Ford told a news conference.

“One of the major obstacles to a re-usable vehicle has been removed,” he said. “The gateway is now open to move beyond the jet age.”

The space plane, dubbed Skylon, only exists on paper. What the company has right now is a remarkable heat exchanger that is able to cool air sucked into the engine at high speed from 1,000 degrees Celsius to minus 150 degrees in one hundredth of a second.

This core piece of technology solves one of the constraints that limit jet engines to a top speed of about 2.5 times the speed of sound, which Reaction Engines believes it could double.

With the Sabre engine in jet mode, the air has to be compressed before being injected into the engine’s combustion chambers. Without pre-cooling, the heat generated by compression would make the air hot enough to melt the engine.
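
A back-of-the-envelope calculation, using textbook properties of air and an assumed mass flow rather than Reaction Engines’ own figures, gives a sense of the cooling power involved:

    # Rough estimate only: standard air properties and an assumed mass flow,
    # not Reaction Engines' design numbers.
    cp_air = 1005.0          # J/(kg*K), approximate specific heat of air
    t_in = 1000.0 + 273.15   # intake air heated by compression, in kelvin
    t_out = -150.0 + 273.15  # target temperature after the pre-cooler, in kelvin
    mass_flow = 100.0        # kg/s, purely an assumed figure for illustration

    heat_per_kg = cp_air * (t_in - t_out)     # joules removed from each kg of air
    cooling_power = heat_per_kg * mass_flow   # watts

    print(round(heat_per_kg / 1e6, 2), "MJ removed per kg of air")       # ~1.16 MJ/kg
    print(round(cooling_power / 1e6), "MW of cooling at", mass_flow, "kg/s")

On those assumptions the pre-cooler has to shed heat at a rate on the order of a hundred megawatts, in a device light enough to fly.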

The challenge for the engineers was to find a way to cool the air quickly without frost forming on the heat exchanger, which would clog it up and stop it working.

Using a nest of fine pipes that resemble a large wire coil, the engineers have managed to get round this fatal problem that would normally follow from such rapid cooling of the moisture in atmospheric air.

They are tight-lipped on exactly how they managed to do it.

“We are not going to tell you how this works,” said the company’s chief designer Richard Varvill, who started his career at the military engine division of Rolls-Royce. “It is our most closely guarded secret.”

The company has deliberately avoided filing patents on its heat exchanger technology to avoid details of how it works – particularly the method for preventing the build-up of frost – becoming public.

The Sabre engine could take a plane to five times the speed of sound and an altitude of 25 km, about 20 percent of the speed and altitude needed to reach orbit. For space access, the engines would then switch to rocket mode to do the remaining 80 percent.
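
That “20 percent” figure is easy to sanity-check with round numbers; the speed of sound and orbital velocity used below are approximations.

    # Rough sanity check with approximate values.
    speed_of_sound = 340.0        # m/s (approximate; it varies with altitude)
    mach_5 = 5 * speed_of_sound   # top speed in air-breathing mode, ~1,700 m/s
    orbital_velocity = 7800.0     # m/s, typical low-Earth-orbit speed
    print(round(mach_5 / orbital_velocity * 100), "percent of orbital velocity")  # ~22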

Reaction Engines believes Sabre is the only engine of its kind in development and the company now needs to raise about 250 million pounds ($400 million) to fund the next three-year development phase in which it plans to build a small-scale version of the complete engine.

Chief executive Tim Hayter believes the company could have an operational engine ready for sale within 10 years if it can raise the development funding.

The company reckons the engine technology could win a healthy chunk of four key markets together worth $112 billion (69 billion pounds) a year, including space access, hypersonic air travel, and modified jet engines that use the heat exchanger to save fuel.

The fourth market is unrelated to aerospace. Reaction Engines believes the technology could also be used to raise the efficiency of so-called multistage flash desalination plants by 15 percent. These plants, largely in the Middle East, use heat exchangers to distil water by flash heating sea water into steam in multiple stages.

The firm has so far received 90 percent of its funding from private sources, mainly rich individuals including chairman Nigel McNair Scott, the former mining industry executive who also chairs property developer Helical Bar.

Chief executive Tim Hayter told Reuters he would welcome government investment in the company, mainly because of the credibility that would add to the project.

But the focus will be on raising the majority of the 250 million pounds it needs now from a mix of institutional investors, high net worth individuals and possibly potential partners in the aerospace industry.

Sabre produces thrust by burning hydrogen and oxygen, but inside the atmosphere it would take that oxygen from the air, reducing the amount it would have to carry in fuel tanks for rocket mode, cutting weight and allowing Skylon to go into orbit in one stage.

Scramjets on test vehicles like the U.S. Air Force Waverider also use atmospheric air to create thrust but they have to be accelerated to their operating speed by normal jet engines or rockets before they kick in. The Sabre engine can operate from a standing start.

If the developers are successful, Sabre would be the first engine in history to send a vehicle into space without using disposable, multi-stage rockets.

Skylon is years away, but in the meantime the technology is attracting interest from the global aerospace industry and governments because it effectively doubles the technical limits of current jet engines and could cut the cost of space access.

The heat exchanger technology could also be incorporated into a new jet engine design that could cut 5 to 10 percent – or $10-20 billion (6.25-12.5 billion pounds) – off airline fuel bills.

That would be significant in an industry where incremental efficiency gains of one percent or so, from improvements in wing design for instance, are big news.

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://uk.reuters.com/article/2012/11/28/uk-science-spaceplane-idUKBRE8AR0R520121128

DARPA project suggests a mix of man and machine may be the most efficient way to spot danger: the Cognitive Technology Threat Warning System

Sentry duty is a tough assignment. Most of the time there’s nothing to see, and when a threat does pop up, it can be hard to spot. In some military studies, humans are shown to detect only 47 percent of visible dangers.

A project run by the Defense Advanced Research Projects Agency (DARPA) suggests that combining the abilities of human sentries with those of machine-vision systems could be a better way to identify danger. It also uses electroencephalography to identify spikes in brain activity that can correspond to subconscious recognition of an object.

An experimental system developed by DARPA sandwiches a human observer between layers of computer vision and has been shown to outperform either machines or humans used in isolation.

The so-called Cognitive Technology Threat Warning System consists of a wide-angle camera and radar, which collects imagery for humans to review on a screen, and a wearable electroencephalogram device that measures the reviewer’s brain activity. This allows the system to detect unconscious recognition of changes in a scene—called a P300 event.

In experiments, a participant was asked to review test footage shot at military test sites in the desert and rain forest. The system caught 91 percent of incidents (such as humans on foot or approaching vehicles) in the simulation. It also widened the field of view that could effectively be monitored. False alarms were raised only 0.2 percent of the time, down from 35 percent when a computer vision system was used on its own. When combined with radar, which detects things invisible to the naked eye, the accuracy of the system was close to 100 percent, DARPA says.
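
As a toy illustration of why adding a second, largely independent check on top of a sensitive but noisy detector collapses the false-alarm rate, consider the simulation below. The probabilities are invented and the real CT2WS pipeline is far more sophisticated; the point is only that two imperfect stages in series reject most spurious alerts while keeping most real ones.

    # Toy two-stage detection cascade with invented probabilities;
    # not DARPA's actual numbers or architecture.
    import random

    random.seed(0)
    n_frames = 100_000
    threat_rate = 0.01      # fraction of frames that contain a real threat

    hits = false_alarms = threats = 0
    for _ in range(n_frames):
        is_threat = random.random() < threat_rate
        threats += is_threat

        # Stage 1: computer vision flags candidates (sensitive but noisy).
        flagged = random.random() < (0.95 if is_threat else 0.35)
        if not flagged:
            continue

        # Stage 2: human review of flagged frames, confirmed via the P300 response.
        confirmed = random.random() < (0.96 if is_threat else 0.01)
        if confirmed and is_threat:
            hits += 1
        elif confirmed:
            false_alarms += 1

    print("hit rate:", round(hits / threats, 2))
    print("false-alarm rate:", round(false_alarms / (n_frames - threats), 4))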

“The DARPA project is different from other ‘human-in-the-loop’ projects because it takes advantage of the human visual system without having the humans do any ‘work,’ ” says computer scientist Devi Parikh of the Toyota Technological Institute at Chicago. Parikh researches vision systems that combine human and machine expertise.

While electroencephalogram-measuring caps are commercially available for a few hundred dollars, Parikh warns that the technology is still in its infancy. Furthermore, she notes, the P300 signals may vary enough to require training or personalized processing, which could make it harder to scale up such a system for widespread use.

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://www.technologyreview.com/news/507826/sentry-system-combines-a-human-brain-with-computer-vision/

The mannequin that spies on you

Mannequins in fashion boutiques are now being fitted with secret cameras to ‘spy’ on shoppers’ buying habits.

Benetton is among the High Street fashion chains to have deployed the dummies equipped with technology adapted from security systems used to identify criminals at airports.

From the outside, the $3,200 (£2,009) EyeSee dummy looks like any other mannequin, but behind its blank gaze it hides a camera feeding images into facial recognition software that logs the age, gender and race of shoppers.

This information is fed into a computer and is ‘aggregated’ to offer retailers using the system statistical and contextual information they can use to develop their marketing strategies.
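
The “aggregation” amounts to reducing each detection to an anonymous tally by time slot and demographic category. A minimal sketch of that kind of bookkeeping follows; the field names and categories are illustrative, not Almax’s actual schema.

    # Minimal sketch of turning anonymous detections into per-hour tallies.
    # Field names and categories are illustrative, not Almax's schema.
    from collections import Counter

    # Each detection: (hour of day, inferred gender, inferred age band)
    detections = [
        (16, "male", "adult"),
        (16, "female", "adult"),
        (17, "female", "child"),
        (17, "male", "adult"),
    ]

    footfall_by_hour = Counter(hour for hour, _, _ in detections)
    footfall_by_profile = Counter((gender, age) for _, gender, age in detections)

    print("by hour:", dict(footfall_by_hour))
    print("by profile:", dict(footfall_by_profile))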

Its makers boast: ‘From now on you can know how many people enter the store, record what time there is a greater influx of customers (and which type) and see if some areas risk to be overcrowded.’

However, privacy campaigners have denounced the system as ‘creepy’ and said that such surveillance is an instance of profit trumping privacy.

The device is marketed by Italian mannequin maker Almax and has already spurred shops into adjusting window displays, floor layouts and promotions, Bloomberg reported.

With growth slowing in the luxury goods industry, the technology taps into retailers’ desperation to personalise their offers to reach increasingly picky customers.

Although video profiling of customers is not new, Almax claims its offering is better at providing data because it stands at eye level with customers, who are more likely to look directly at the mannequins.

The video surveillance mannequins have been on sale for almost a year, and are already being used in three European countries and in the U.S.

Almax claims information from the devices led one outlet to adjust window displays after they found that men shopping in the first two days of a sale spent more than women, while another introduced a children’s line after the dummy showed youngsters made up more than half its afternoon traffic.

A third retailer placed Chinese-speaking staff by a particular entrance after it found a third of visitors using that door after 4pm were Asian.

Almax chief executive Max Catanese refused to name which retailers were using the new technology, telling Bloomberg that confidentiality agreements meant he could not disclose the names of clients.

But he did reveal that five companies – among them leading fashion brands – are using ‘a few dozen’ of the mannequins, with orders for at least that many more.

Almax is now hoping to update the technology to allow the mannequins – and by extension the retailers who operate them – to listen in on what customers are saying about the clothes on display.

Mr Catanese told Bloomberg the company also plans to add screens next to the dummies to prompt passers-by about products that fit their profile, similar to the way online retailers use cookies to personalise web browsing.

Almax insists that its system does not invade the privacy of shoppers since the camera inside the mannequin is ‘blind’, meaning that it does not record the images of passers-by, instead merely collecting data about them.

In an emailed statement, Mr Catanese told MailOnline: ‘Let’s say I pass in front of the mannequin. Nobody will know that “Max Catanese” passed in front of it.

‘The retailer will have the information that a male adult Caucasian passed in front of the mannequin at 6:25pm and spent 3 minutes in front of it. No sensible/private data, nor image is collected.

‘Different is the case if a place (shop, department store, etc.) is already covered by security cameras (by the way, basically almost every retailer in the world today).

‘In those cases we could even provide the regular camera as the data and customers images are already collected in the store which are authorised to do so.

‘In any case, just to avoid questions, so far we only offer the version with blind camera.’

Nevertheless, privacy groups are concerned about the roll-out of the technology. Emma Carr, deputy director of civil liberties campaign group Big Brother Watch, said: ‘Keeping cameras hidden in a mannequin is nothing short of creepy.

‘The use of covert surveillance technology by shops, in order to provide a personalised service, seems totally disproportionate.

‘The fact that the cameras are hidden suggests that shops are fully aware that many customers would object to this kind of monitoring.

‘It is not only essential that customers are fully informed that they are being watched, but that they also have real choice of service and on what terms it is offered.

‘Without this transparency, shops cannot be completely sure that their customers even want this level of personalised service.

‘This is another example of how the public are increasingly being monitored by retailers without ever being asked for their permission. Profit trumps privacy yet again.’

Read more: http://www.dailymail.co.uk/sciencetech/article-2235848/The-creepy-mannequin-stares-Fashion-retailers-adapt-airport-security-technology-profile-customers.html#ixzz2CsSISqiB

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

World’s First 3D Printing Photo Booth Dispenses A 3D Figure Of You

 

Imagine this: you go to a photo booth in a mall or a movie theater, put in a dollar, pull the curtain, and sit down to get your photo taken by the machine. But instead of spitting out the usual black-and-white strip of four or five photos, it dispenses a small mini-me figure of you. This actually exists (except it costs more than a dollar right now). The world’s first 3D printing photo booth is taking reservations and will operate in Japan from now until mid-January. You can read World’s First 3D Printing Photo Booth To Open In Japan for more details. If more than one person gets in the photo booth, it will create more than one figure. The cost right now is between $260 and $530 per figure, depending on what size you want it to create.

http://www.bitrebels.com/technology/worlds-first-3d-printing-photo-booth/