Archive for the ‘Technology’ Category

You don’t have to be a superhero like Spider-Man to climb on walls. Researchers have developed “Gecko Gloves” that can help humans climb on glass walls.

The Gecko Gloves were created by Elliot Hawkes, a mechanical engineering student at Stanford University. The gloves rely on scientific principles similar to those at work in the sticky toes of geckos.

Hawkes reveals that he is working with a group of engineers developing reusable, controllable adhesive materials that can bond with smooth surfaces such as glass, yet release with minimal effort. Using the synthetic adhesive, Hawkes and his team created a device that enables a person to climb glass walls.

“It’s a lot of fun, but also a little weird, because it doesn’t feel like you should be gripping glass,” says Hawkes. “You keep expecting to slip off, and when you don’t, it surprises you. It’s pretty exhilarating.”

Hawkes explains that each handheld pad is covered with 24 adhesive tiles. Each tile is coated with sawtooth-shaped polymer structures about 100 micrometers long, roughly the width of a human hair.

The handheld pads are also connected to degressive springs, which become less stiff as they are stretched. When the springs are pulled, they apply a nearly equal force to each adhesive tile, causing the sawtooth-like structures to flatten and grip. When the load tension is released, the grip relaxes.

Some experts suggest that the Gecko Gloves could be applied in many fields, such as manufacturing robots that carry glass panels. Mark Cutkosky, the senior author of the paper, says the team is also working on a project with the U.S. National Aeronautics and Space Administration (NASA) to apply the adhesive technology to the robotic arms of a spacecraft. With its help, a robotic arm would be able to grab hold of space debris, such as discarded solar panels and fuel tanks, and move it.

The researchers note that previous work on gecko-inspired synthetic adhesives showed adhesive strength falling as size increased. In the Gecko Gloves, however, the springs make it possible to sustain the same adhesive power at every size, from a square millimeter to the area of a human hand.

The latest version of the Gecko Gloves can support around 200 pounds (about 90 kilograms). Scaled up by a factor of ten, the system could support about 2,000 pounds (900 kg).
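The scaling claim can be sanity-checked with a line of arithmetic. A minimal sketch, assuming load capacity grows linearly with pad size because the degressive springs share the load evenly across the tiles (the function name is illustrative):

```python
# Supported load scales linearly with adhesive size when springs share the
# load evenly across tiles. Figures from the article: ~90 kg for the current
# hand-sized pads; ten times the size -> ~900 kg.

def supported_load_kg(base_load_kg: float, scale_factor: float) -> float:
    """Load capacity under uniform load-sharing grows linearly with scale."""
    return base_load_kg * scale_factor

hand_pad = 90.0  # kg, capacity of the current Gecko Gloves
print(supported_load_kg(hand_pad, 10))  # -> 900.0
```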

The research has been published in the Journal of the Royal Society Interface.

http://www.techtimes.com/articles/22769/20141224/gecko-gloves-by-stanford-students-will-let-you-scale-glass-walls-want-to-be-spider-man.htm

When International Space Station commander Barry Wilmore needed a wrench, Nasa knew just what to do. They “emailed” him one. This is the first time an object has been designed on Earth and then transmitted to space for manufacture.

Made In Space, the California company that designed the 3D printer aboard the ISS, overheard Wilmore mentioning the need for a ratcheting socket wrench and decided to create one. Previously, if an astronaut needed a specific tool it would have to be flown up on the next mission to the ISS, which could take months.

This isn’t the first 3D printed object made in space, but it is the first created to meet the needs of an astronaut. In November astronauts aboard the ISS printed a replacement part for the recently installed 3D printer. A total of 21 objects have now been printed in space, all of which will be brought back to Earth for testing.

“We will use them to characterise the effects of long-term microgravity on our 3D-printing process, so that we can model and predict the performance of objects that we manufacture in space in the future,” explained Mike Chen from Made in Space.

Chen also explained the process of sending hardware to space. First, Made In Space designs the part in CAD software and converts it into a file format the 3D printer can read. The file is sent to Nasa, which transmits it to the ISS, where the printer receives the code and starts manufacturing.
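The workflow Chen describes can be sketched as a simple pipeline. Every function and stage name below is hypothetical, standing in for Made In Space's and Nasa's actual systems:

```python
# Hypothetical sketch of the uplink pipeline described above: CAD design ->
# printer-readable file -> Nasa ground station -> ISS printer. All names
# here are illustrative, not Made In Space's actual software.

def design_part(name: str) -> dict:
    """Stand-in for the CAD step: produce a part description."""
    return {"part": name, "format": "CAD"}

def convert_for_printer(cad: dict) -> dict:
    """Convert the CAD model into a file format the 3D printer can read."""
    return {**cad, "format": "printer-ready"}

def uplink_to_iss(build_file: dict) -> list[str]:
    """Trace the transmission path from the ground to the printer."""
    return [
        f"Made In Space: designed {build_file['part']}",
        "Nasa: received build file",
        "ISS printer: manufacturing started",
    ]

for step in uplink_to_iss(convert_for_printer(design_part("ratchet wrench"))):
    print(step)
```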

“On the ISS this type of technology translates to lower costs for experiments, faster design iteration, and a safer, better experience for the crew members, who can use it to replace broken parts or create new tools on demand,” Chen said.

http://www.wired.co.uk/news/archive/2014-12/19/3d-printed-space-wrench

Pizza Hut is incorporating eye tracking into the ordering process, in what it calls the first Subconscious Menu.

Powered by Tobii, a Swedish company that specializes in eye-tracking technology, Pizza Hut’s new system presents customers with images of ingredients on a screen. Based on how long a customer’s eyes remain on different items, the system generates an order meant to represent what he or she subconsciously wants.
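A dwell-time ranking in this spirit might look like the following sketch. The ingredient list, dwell times, and "top four toppings" rule are all invented for illustration; this is not Tobii's or Pizza Hut's actual algorithm:

```python
# Toy dwell-time ranking in the spirit of the Subconscious Menu: the longer a
# customer's gaze rests on an ingredient, the more likely it lands on the
# order. All data and the top-4 rule are illustrative assumptions.

gaze_ms = {
    "pepperoni": 2400,
    "mushroom": 310,
    "olives": 1750,
    "ham": 2900,
    "pineapple": 120,
    "peppers": 980,
}

def suggest_order(dwell: dict[str, int], n_toppings: int = 4) -> list[str]:
    """Return the n ingredients the eyes lingered on longest."""
    return sorted(dwell, key=dwell.get, reverse=True)[:n_toppings]

print(suggest_order(gaze_ms))  # -> ['ham', 'pepperoni', 'olives', 'peppers']
```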

The Subconscious Menu, which has been under development for about six months and is currently being piloted in the U.K., was selected by Pizza Hut as the method that best leverages technology to improve the experience for customers.

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

http://time.com/3613220/pizza-huts-subconscious-menu/

by Ashley Dove-Jay

The aircraft industry is expecting a seven-fold increase in air traffic by 2050, and a four-fold increase in greenhouse gas emissions unless fundamental changes are made. But just how “fundamental” will those changes need to be and what will be their effect on the aircraft we use?

The crucial next step towards a greener aircraft industry is the full electrification of commercial aircraft: zero CO2 and NOx emissions, with energy sourced from power stations that are themselves sustainably fuelled. The main technological barrier is the energy density of batteries, a measure of how much energy a battery of a given weight can store.

Tesla CEO Elon Musk has said that once batteries are capable of 400 watt-hours per kilogram, with a ratio of power cell to overall mass of between 0.7 and 0.8, an electric transcontinental aircraft becomes "compelling".

Given that practical lithium-ion batteries achieved energy densities of 113 Wh/kg in 1994 and 202 Wh/kg in 2004, and are now capable of approximately 300 Wh/kg, it is reasonable to assume they will hit 400 Wh/kg in the coming decade.
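Those three data points support a quick back-of-envelope extrapolation, assuming growth continues at the historical compound rate:

```python
# Back-of-envelope check of the battery trend using the article's figures:
# 113 Wh/kg (1994), ~300 Wh/kg (2014). If growth continues at the historical
# compound rate, how long until 400 Wh/kg?

import math

def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two data points."""
    return (end / start) ** (1 / years) - 1

rate = cagr(113, 300, 2014 - 1994)  # ~5.0% per year over two decades
years_to_400 = math.log(400 / 300) / math.log(1 + rate)

print(f"historical rate: {rate:.1%}/yr")
print(f"years from 300 to 400 Wh/kg: {years_to_400:.1f}")
```

At roughly 5% a year, 400 Wh/kg arrives about six years out, which is consistent with the "coming decade" estimate.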

Another factor is the exponential fall in the cost of solar panels, which have already become the cheapest form of power in most US states. The expected 70% reduction in the cost of lithium-ion batteries by 2025, together with the rising cost of kerosene-based jet fuel, means there will be a large and growing cost disparity that greatly favours electrification. As is often the case, what will slow the transition is not technology but the economic and political inertia against overturning the status quo.

Biofuels while we wait

Considering that the average service life of passenger and freight aircraft is around 21 and 33 years respectively, even if every new aircraft manufactured from tomorrow were fully electric, the transition away from fossil-fuelled aircraft would take two to three decades.

In the meantime, biofuel offers carbon emissions reductions of between 36% and 85%, with the variability depending on the type of land used to grow the fuel crops. As switching from one fuel to another is relatively straightforward, this is a low-hanging fruit worth pursuing before completely phasing out combustion engines.

Even though a biofuel-kerosene jet fuel blend was certified in 2009, the aircraft industry is in no hurry to implement change. There are minor technological hurdles and issues around scaling up biofuel production to industrial levels, but the main constraint is price – parity with fossil fuels is still ten years away.

The adoption of any new aircraft technology – from research, to design sketches, to testing and full integration – is typically a decade-long process. Given that the combustion engine will be phased out by mid-century, it would seem to make more economic and environmental sense to innovate in other areas: airframe design, materials research, electric propulsion design and air traffic control.

Bringing aircraft to life

In terms of the cost of computational power, computer technology is advancing more each hour today than it did in its entire first 90 years. With this in mind we can project that the equivalent of a US$1,000 computer today will by 2023 be more powerful than the potential brainpower of a human and, by 2045, will surpass the brainpower equivalent to all human brains combined.

The miniaturisation of digital electronics over the past half-century has followed a similar exponential trend, with the size of transistor gates reducing from approximately 1,000 nanometres in 1970 to 23 nanometres today. With the advent of transistors made of graphene showing great promise, this is expected to fall further to about 7 nanometres by 2025. By comparison, a human red blood cell is approximately 6,200-8,200 nanometres wide.

Putting together this increase in computational power and decrease in circuit size, and adding in the progress made with 3D-printing, at some point in the next decade we will be able to produce integrated computers powerful enough to control an aircraft at the equivalent of the cellular level in near real-time – wireless interlinking of nano-scale digital devices.

Using a biologically-inspired digital “nervous system” with receptors arranged over the aircraft sensing forces, temperatures, and airflow states could drastically improve the energy efficiency of aircraft, when coupled to software and hardware mechanisms to control or even change the shape of the aircraft in response.

Chopping the tail

Once electric aircraft are established, the next step will be to integrate a gimballed propulsion system, one that can provide thrust in any direction. This will remove the need for the elevators, rudders, and tailplane control surfaces that current designs require, but which add significant mass and drag.

The wings we are already designing are near their peak in aerodynamic efficiency, but they still do no justice to what nature has achieved in birds. Aircraft design templates are a century old, constrained by the limitations of their day, but technology has since moved on. We no longer need to build wings as rigid structures with discrete control surfaces; we can turn to the natural world for inspiration. As Richard Feynman said:

I think nature’s imagination is so much greater than man’s, she’s never going to let us relax.

http://www.iflscience.com/technology/what-commercial-aircraft-will-look-2050


Talking to yourself used to be a strictly private pastime. That’s no longer the case – researchers have eavesdropped on our internal monologue for the first time. The achievement is a step towards helping people who cannot physically speak communicate with the outside world.

“If you’re reading text in a newspaper or a book, you hear a voice in your own head,” says Brian Pasley at the University of California, Berkeley. “We’re trying to decode the brain activity related to that voice to create a medical prosthesis that can allow someone who is paralysed or locked in to speak.”

When you hear someone speak, sound waves activate sensory neurons in your inner ear. These neurons pass information to areas of the brain where different aspects of the sound are extracted and interpreted as words.

In a previous study, Pasley and his colleagues recorded brain activity while people listened to speech; the participants already had electrodes implanted in their brains to treat epilepsy. The team found that certain neurons in the brain’s temporal lobe were active only in response to particular aspects of sound, such as a specific frequency. One set of neurons might react only to sound waves at a frequency of 1000 hertz, for example, while another set responds only to those at 2000 hertz. Armed with this knowledge, the team built an algorithm that could decode the words heard based on neural activity alone (PLoS Biology, doi.org/fzv269).

The team hypothesised that hearing speech and thinking to oneself might spark some of the same neural signatures in the brain. They supposed that an algorithm trained to identify speech heard out loud might also be able to identify words that are thought.

Mind-reading

To test the idea, they recorded brain activity in another seven people undergoing epilepsy surgery, while they looked at a screen that displayed text from either the Gettysburg Address, John F. Kennedy’s inaugural address or the nursery rhyme Humpty Dumpty.

Each participant was asked to read the text aloud, read it silently in their head and then do nothing. While they read the text out loud, the team worked out which neurons were reacting to what aspects of speech and generated a personalised decoder to interpret this information. The decoder was used to create a spectrogram – a visual representation of the different frequencies of sound waves heard over time. As each frequency correlates to specific sounds in each word spoken, the spectrogram can be used to recreate what had been said. They then applied the decoder to the brain activity that occurred while the participants read the passages silently to themselves.
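At its core, the decoder described above is a learned mapping from neural activity to spectrogram bins. Here is a minimal sketch of that idea as ridge regression on synthetic data; the study's actual features and model are more sophisticated, and everything below is illustrative:

```python
# Minimal sketch of the decoding idea: learn a linear map from neural activity
# (frequency-tuned channels) to spectrogram bins, then apply it to held-out
# activity. Data are synthetic; the study's real pipeline is more complex.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons, n_freq_bins = 500, 40, 16

# Synthetic "ground truth": each neuron linearly mixes into spectrogram bins.
true_weights = rng.normal(size=(n_neurons, n_freq_bins))
neural = rng.normal(size=(n_samples, n_neurons))  # recorded activity
spectrogram = neural @ true_weights + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

# Train a ridge-regression decoder on "heard speech" trials.
lam = 1.0
W = np.linalg.solve(neural.T @ neural + lam * np.eye(n_neurons),
                    neural.T @ spectrogram)

# Apply the decoder to new activity (standing in for silently read text).
test_neural = rng.normal(size=(100, n_neurons))
predicted = test_neural @ W
actual = test_neural @ true_weights
corr = np.corrcoef(predicted.ravel(), actual.ravel())[0, 1]
print(f"decoded/actual correlation: {corr:.2f}")
```

The reconstructed spectrogram can then be mapped back to sounds, which is how a decoder of this kind recreates what was heard or imagined.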

Despite the neural activity from imagined or actual speech differing slightly, the decoder was able to reconstruct which words several of the volunteers were thinking, using neural activity alone (Frontiers in Neuroengineering, doi.org/whb).

The algorithm isn’t perfect, says Stephanie Martin, who worked on the study with Pasley. “We got significant results but it’s not good enough yet to build a device.”

In practice, if the decoder is to be used by people who are unable to speak it would have to be trained on what they hear rather than their own speech. “We don’t think it would be an issue to train the decoder on heard speech because they share overlapping brain areas,” says Martin.

The team is now fine-tuning its algorithms by looking at the neural activity associated with speaking rate and different pronunciations of the same word, for example. “The bar is very high,” says Pasley. “It’s preliminary data, and we’re still working on making it better.”

The team have also turned their hand to predicting what songs a person is listening to by playing lots of Pink Floyd to volunteers, and then working out which neurons respond to what aspects of the music. “Sound is sound,” says Pasley. “It all helps us understand different aspects of how the brain processes it.”

“Ultimately, if we understand covert speech well enough, we’ll be able to create a medical prosthesis that could help someone who is paralysed, or locked in and can’t speak,” he says.

Several other researchers are also investigating ways to read the human mind. Some can tell what pictures a person is looking at, others have worked out what neural activity represents certain concepts in the brain, and one team has even produced crude reproductions of movie clips that someone is watching just by analysing their brain activity. So is it possible to put it all together to create one multisensory mind-reading device?

In theory, yes, says Martin, but it would be extraordinarily complicated. She says you would need a huge amount of data for each thing you are trying to predict. “It would be really interesting to look into. It would allow us to predict what people are doing or thinking,” she says. “But we need individual decoders that work really well before combining different senses.”

http://www.newscientist.com/article/mg22429934.000-brain-decoder-can-eavesdrop-on-your-inner-voice.html


One Barcelona comedy club is experimenting with using facial recognition technology to charge patrons by the laugh.

The comedy club, Teatreneu, partnered with the advertising firm Cyranos McCann to implement the new technology after the government hiked taxes on theater tickets, according to a BBC report. In 2012, the Spanish government raised the tax on theatrical shows from 8 to 21 percent.

Cyranos McCann installed tablets on the back of each seat that used facial recognition tech to measure how much a person enjoyed the show by tracking when each patron laughed or smiled.

Each giggle costs approximately 30 euro cents ($0.38). However, once a patron hits the 24-euro cap, which works out to 80 laughs, the rest of their laughs are free of charge.
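The pricing rule works out to a simple capped tariff:

```python
# The pricing rule described: 30 euro cents per laugh, capped at 24 euros,
# so every laugh after the 80th is free.

PRICE_PER_LAUGH = 0.30  # euros
CAP = 24.00             # euros

def ticket_price(laughs: int) -> float:
    """Charge per laugh up to the cap, after which laughs are free."""
    return round(min(laughs * PRICE_PER_LAUGH, CAP), 2)

print(ticket_price(10))   # -> 3.0
print(ticket_price(80))   # -> 24.0
print(ticket_price(150))  # -> 24.0 (laughs beyond 80 are free)
```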

There is also a social element: at the end of the show, patrons can check their laughter account and share the tally on social networks. The club and its advertising partner even created a mobile app to be used as a payment system.

While law enforcement has been developing and using facial recognition technology for quite some time, more industries are beginning to experiment with it.

Some retailers, for example, are considering using the technology to gauge how people might feel while shopping in a certain section of a store.

The U.K. company NEC IT Solutions is even working on technology that would help retailers identify VIP patrons, such as celebrities or preferred customers.

According to a recent report on EssentialRetail.com, the premium department store Harrods has been testing facial recognition for the last two years, although primarily for security reasons.

Facebook also uses facial recognition technology to suggest tags of people who are in images posted on its site.

http://www.cnbc.com/id/102078398


Doctoral student Joseph Choi demonstrates a multidirectional ‘perfect paraxial’ cloak using 4 lenses.


Choi uses his hand to further demonstrate his device.


A laser shows the paths that light rays travel through the system, showing regions that can be used for cloaking an object.

Scientists at the University of Rochester have discovered a way to hide large objects from sight using inexpensive and readily available lenses.

Cloaking is the process by which an object becomes hidden from view, while everything else around the cloaked object appears undisturbed.

“A lot of people have worked on a lot of different aspects of optical cloaking for years,” John Howell, a professor of physics at the upstate New York school, said on Friday.

The so-called Rochester Cloak is not really a tangible cloak at all. Rather, the device looks like equipment used by an optometrist. When an object is placed behind the layered lenses, it seems to disappear.

Previous cloaking methods have been complicated, expensive, and not able to hide objects in three dimensions when viewed at varying angles, they say.

“From what we know, this is the first cloaking device that provides three-dimensional, continuously multidirectional cloaking,” said Joseph Choi, a graduate student who helped develop the method at Rochester, which is renowned for its optical research.

In their tests, the researchers have cloaked a hand, a face, and a ruler – making each object appear “invisible” while the image behind the hidden object remains in view. The implications for the discovery are endless, they say.

“I imagine this could be used to cloak a trailer on the back of a semi-truck so the driver can see directly behind him,” Choi said. “It can be used for surgery, in the military, in interior design, art.”

Howell said the Rochester Cloak, like the fictitious cloak described in the pages of the Harry Potter series, causes no distortion of the background object.

Building the device does not break the bank either. It cost Howell and Choi a little over $US1000 ($1140) in materials to create it and they believe it can be done even cheaper.

Although a patent is pending, they have released simple instructions on how to create a Rochester Cloak at home for under $US100 ($114).
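The design places four lenses (focal lengths f1, f2, f2, f1) at specific separations so that rays are routed around a hidden region. A sketch of those separations using the paraxial formulas from the Rochester group's paper; the example focal lengths below are assumptions for illustration, not necessarily the exact parts used in the demo:

```python
# Paraxial four-lens cloak spacings (Rochester design): lenses of focal
# lengths f1, f2, f2, f1, with t1 = f1 + f2 separating each outer pair and
# t2 = 2*f2*(f1 + f2)/(f1 - f2) separating the middle pair. The focal
# lengths below are example values, not necessarily the demo's exact parts.

def cloak_spacings(f1_mm: float, f2_mm: float) -> tuple[float, float]:
    """Return (t1, t2): outer-pair and middle-pair lens separations in mm."""
    t1 = f1_mm + f2_mm
    t2 = 2 * f2_mm * (f1_mm + f2_mm) / (f1_mm - f2_mm)
    return t1, t2

t1, t2 = cloak_spacings(200.0, 75.0)
print(f"t1 = {t1:.0f} mm, t2 = {t2:.0f} mm")  # -> t1 = 275 mm, t2 = 330 mm
```

The cloaked regions sit between the lenses, off the optical axis, which is why the background stays visible while an object placed there disappears.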

There is also a one-minute video about the project on YouTube.

http://www.smh.com.au/technology/sci-tech/scientists-unveil-invisibility-cloak-to-rival-harry-potters-20140927-10n1dp.html