Artificial intelligence replaces physicists


Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.

The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.

“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”

Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.

They could be used for mineral exploration or navigation systems as they are extremely sensitive to external disturbances, which allows them to make very precise measurements such as tiny changes in the Earth’s magnetic field or gravity.

The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.

“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.

“It’s cheaper than taking a physicist everywhere with you.”

The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin.
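
For readers curious what “handing control to the artificial intelligence” might look like, the sketch below (in Python) shows the general shape of an online optimisation loop: propose a ramp, run the experiment, keep whatever cools better. The parameters, the simple hill-climbing strategy and the run_cooling_ramp stand-in are invented for illustration only; the team’s actual machine-learning algorithm is more sophisticated.

import random

def run_cooling_ramp(powers):
    """Hypothetical stand-in for one experimental run: takes the final powers
    of the three trapping lasers and returns a score (for example, the number
    of condensed atoms measured afterwards). Faked here with a smooth function
    plus noise so the sketch is runnable."""
    target = [0.2, 0.5, 0.35]  # pretend optimum ramp endpoint
    error = sum((p - t) ** 2 for p, t in zip(powers, target))
    return max(0.0, 1.0 - error) + random.gauss(0, 0.01)

best_powers = [1.0, 1.0, 1.0]  # start with the lasers at full power
best_score = run_cooling_ramp(best_powers)

for trial in range(50):  # each trial corresponds to one experimental run
    candidate = [min(1.0, max(0.0, p + random.gauss(0, 0.1))) for p in best_powers]
    score = run_cooling_ramp(candidate)
    if score > best_score:  # keep whichever ramp cools better
        best_powers, best_score = candidate, score

print(f"best laser powers found: {best_powers}, score: {best_score:.3f}")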

Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.

“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”

The new technique will lead to bigger and better experiments, said Dr Hush.

“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve ever seen before,” he said.

The research is published in the Nature group journal Scientific Reports.

https://www.sciencedaily.com/releases/2016/05/160516091544.htm

Elon Musk says we’re all cyborgs almost certainly living within a computer simulation

Elon Musk has said that there is only a “one in billions” chance that we’re not living in a computer simulation.

Our lives are almost certainly being conducted within an artificial world powered by AI and immensely powerful computers, like in The Matrix, the Tesla and SpaceX CEO suggested at a tech conference in California.

Mr Musk, who has donated huge amounts of money to research into the dangers of artificial intelligence, said that he hopes his prediction is true because otherwise it means the world will end.

“The strongest argument for us probably being in a simulation I think is the following,” he told the Code Conference. “40 years ago we had Pong – two rectangles and a dot. That’s where we were.

“Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality.

“If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable.”

He said that even if the speed of those advancements dropped by a factor of 1,000, we would still be moving forward at an intense speed relative to the age of life.

Since that would lead to games that would be indistinguishable from reality that could be played anywhere, “it would seem to follow that the odds that we’re in ‘base reality’ is one in billions”, Mr Musk said.
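
To spell out the arithmetic behind that “one in billions” figure (our framing, not Musk’s exact words): if simulated realities that are indistinguishable from the real thing vastly outnumber the single base reality, the chance of happening to live in the base one is roughly one divided by the total number of realities.

simulated_realities = 2_000_000_000  # "billions" of indistinguishable simulated worlds
base_reality = 1
p_base = base_reality / (simulated_realities + base_reality)
print(f"chance of being in base reality: about 1 in {round(1 / p_base):,}")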

Asked whether he was saying that the answer to the question of whether we are in a simulated computer game was “yes”, he said the answer is “probably”.

He said that arguably we should hope that it’s true that we live in a simulation. “Otherwise, if civilisation stops advancing, then that may be due to some calamitous event that stops civilisation.”

He said that either we will make simulations that we can’t tell apart from the real world, “or civilisation will cease to exist”.

Mr Musk said that he has had “so many simulation discussions it’s crazy”, and that it got to the point where “every conversation [he had] was the AI/simulation conversation”.

The question of whether what we see is real or simulated has perplexed humans since at least the ancient philosophers. But it has been given a new and different edge in recent years with the development of powerful computers and artificial intelligence, which some have argued shows how easily such a simulation could be created.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-artificial-intelligence-computer-simulation-gaming-virtual-reality-a7060941.html

DESIGNING AI WITH A HEART: THE CHALLENGE OF GIVING MACHINES EMOTIONAL AWARENESS


ADVANCES IN EMOTIONAL TECHNOLOGIES ARE WARMING UP HUMAN-ROBOT RELATIONSHIPS, BUT CAN AI EVER FULFILL OUR EMOTIONAL NEEDS?

Science fiction has terrified and entertained us with countless dystopian futures where weak human creators are annihilated by heartless super-intelligences. The solution seems easy enough: give them hearts.

Artificial emotional intelligence (AEI) development is gathering momentum, and the number of social media companies buying start-ups in the field indicates either true faith in the concept or a reckless enthusiasm. The case for AEI is simple: machines will work better if they understand us. Rather than merely complying with commands, they would be able to anticipate our needs and carry out delicate tasks autonomously, such as home help, counselling or simply being a friend.

Dr Adam Waytz, an assistant professor at Northwestern University’s Kellogg School of Management, and Harvard Business School professor Dr Michael Norton explain in the Wall Street Journal: “When emotional jobs such as social workers and pre-school teachers must be ‘botsourced’, people actually prefer robots that seem capable of conveying at least some degree of human emotion.”

A plethora of intelligent machines already exist but to get them working in our offices and homes we need them to understand and share our feelings. So where do we start?

TEACHING EMOTION

“Building an empathy module is a matter of identifying those characteristics of human communication that machines can use to recognize emotion and then training algorithms to spot them,” says Pascale Fung in Scientific American magazine. According to Fung, creating this empathy module requires three components that can analyse “facial cues, acoustic markers in speech and the content of speech itself to read human emotion and tell the robot how to respond.”
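
In code, that three-component empathy module maps naturally onto a fusion of separate classifiers, one per signal. The Python sketch below shows only that structure; the three classifier functions are hypothetical placeholders, not Fung’s algorithms.

def emotion_from_face(image):
    """Hypothetical facial-cue classifier: returns an emotion label and a confidence."""
    return "neutral", 0.4

def emotion_from_voice(audio):
    """Hypothetical acoustic-marker classifier (tone, pitch, speaking speed)."""
    return "stressed", 0.8

def emotion_from_words(transcript):
    """Hypothetical content classifier for what was actually said."""
    return "stressed", 0.6

def read_emotion(image, audio, transcript):
    # Combine the three modalities; the strongest overall signal drives the response.
    votes = {}
    for label, confidence in (emotion_from_face(image),
                              emotion_from_voice(audio),
                              emotion_from_words(transcript)):
        votes[label] = votes.get(label, 0.0) + confidence
    return max(votes, key=votes.get)

print(read_emotion(image=None, audio=None, transcript="I'm fine, really."))  # -> stressed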

Although generally haphazard, facial scanners will become increasingly specialised and able to spot mood signals, such as a tilting of the head, widening of the eyes, and mouth position. But the really interesting area of development is speech cognition. Fung, a professor of electronic and computer engineering at the Hong Kong University of Science and Technology, has commercialised part of her research by setting up a company called Ivo Technologies that used these principles to produce Moodbox, a ‘robot speaker with a heart’.

Unlike humans, who learn through instinct and experience, AIs use machine learning – a process in which the algorithms are constantly revised. The more you interact with the Moodbox, the more examples it has of your behaviour, and the better it can respond in the appropriate way.

To create the Moodbox, Fung’s team set up a series of 14 ‘classifiers’ to analyse musical pieces. The classifiers were subjected to thousands of examples of ambient sound so that each one became adept at recognising music in its assigned mood category. Then, algorithms were written to spot non-verbal cues in speech, such as speed and tone of voice, which indicate the level of stress. The two stages are matched up to predict what you want to listen to. It may seem a vast amount of research to produce a souped-up speaker system, but the underlying software is highly sophisticated and indicates the level of progress being made.
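
Stripped of the signal processing, the pipeline described above comes down to two steps: estimate the listener’s stress from how they speak, then pick music from the best-matching mood category. Here is a minimal sketch of that matching step, with invented mood categories and a placeholder stress estimator rather than the Moodbox’s trained classifiers.

# Invented mood categories standing in for the Moodbox's 14 trained classifiers.
MOOD_PLAYLISTS = {
    "calm": ["ambient piano", "slow strings"],
    "upbeat": ["funk", "pop"],
    "energetic": ["drum and bass", "rock"],
}

def stress_from_speech(speech_rate_wps, pitch_variation):
    """Placeholder for the non-verbal-cue analysis: fast, highly varied speech
    is treated as a rough proxy for stress, on a 0-1 scale."""
    return min(1.0, 0.2 * speech_rate_wps + 0.5 * pitch_variation)

def choose_playlist(speech_rate_wps, pitch_variation):
    stress = stress_from_speech(speech_rate_wps, pitch_variation)
    if stress > 0.7:
        mood = "calm"  # wind a stressed listener down
    elif stress > 0.4:
        mood = "upbeat"
    else:
        mood = "energetic"
    return MOOD_PLAYLISTS[mood]

print(choose_playlist(speech_rate_wps=4.0, pitch_variation=0.6))  # -> the calm playlist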

Using similar principles is Emoshape’s EmoSPARK infotainment cube – an all-in-one home control system that not only links to your media devices, but keeps you up to date with news and weather, can control the lights and security, and also hold a conversation. To create its eerily named ‘human in a box’, Emoshape says the cube devises an emotional profile graph (EPG) on each user, and claims it is capable of “measuring the emotional responses of multiple people simultaneously”. The housekeeper-entertainer-companion comes with face recognition technology too, so if you are unhappy with its choice of TV show or search results, it will ‘see’ this, recalibrate its responses, and come back to you with a revised response.

According to Emoshape, this EPG data enables the AI to “virtually ‘feel’ senses such as pleasure and pain, and [it] ‘expresses’ those desires according to the user.”

PUTTING LANGUAGE INTO CONTEXT

We don’t always say what we mean, so comprehension is essential to enable AEIs to converse with us. “Once a machine can understand the content of speech, it can compare that content with the way it is delivered,” says Fung. “If a person sighs and says, ‘I’m so glad I have to work all weekend,’ an algorithm can detect the mismatch between the emotion cues and the content of the statement and calculate the probability that the speaker is being sarcastic.”
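
Fung’s sarcasm example boils down to comparing two signals, the sentiment of the words and the emotion carried by the delivery, and flagging a mismatch. Below is a toy sketch of that check; the scoring functions are deliberately crude placeholders, not her detector.

POSITIVE_WORDS = {"glad", "great", "love", "happy"}
NEGATIVE_WORDS = {"hate", "awful", "tired", "annoyed"}

def text_sentiment(sentence):
    """Crude word-count sentiment: positive words add one, negative words subtract one."""
    words = [w.strip(",.!").lower() for w in sentence.split()]
    return sum((w in POSITIVE_WORDS) - (w in NEGATIVE_WORDS) for w in words)

def delivery_sentiment(sighed):
    """Placeholder acoustic cue: a sigh counts as negative delivery."""
    return -1 if sighed else 1

def sarcasm_likely(sentence, sighed):
    # A positive sentence delivered negatively (or the reverse) suggests sarcasm.
    return text_sentiment(sentence) * delivery_sentiment(sighed) < 0

print(sarcasm_likely("I'm so glad I have to work all weekend", sighed=True))  # True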

A great example of language comprehension technology is IBM’s Watson platform. Watson is a cognitive computing tool that mimics how human brains process data. As IBM says, its systems “understand the world in the way that humans do: through senses, learning, and experience.”

To deduce meaning, Watson is first trained to understand a subject, in this case speech, and given a huge breadth of examples to form a knowledge base. Then, with algorithms written to recognise natural speech – including humour, puns and slang – the programme is trained to work with the material it has so it can be recalibrated and refined. Watson can sift through its database, rank the results, and choose the answer according to the greatest likelihood in just seconds.
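
At a very high level, that retrieve-and-rank behaviour can be pictured as: gather candidate answers from a knowledge base, score each against the question, and return the top-ranked one. The sketch below illustrates only that shape with a toy word-overlap score; it does not reflect Watson’s real internals or APIs.

KNOWLEDGE_BASE = {
    "Why won't my laptop turn on?": "Check that the battery is charged and the power cable is seated.",
    "How do I reset my password?": "Use the 'forgot password' link on the sign-in page.",
    "What's the weather like?": "I can't see outside, but I can look up a forecast for you.",
}

def score(question, candidate):
    """Toy evidence score: fraction of words shared between the two questions."""
    q, c = set(question.lower().split()), set(candidate.lower().split())
    return len(q & c) / len(q | c)

def answer(question):
    # Rank every stored question against the query and return the best-scoring answer.
    best = max(KNOWLEDGE_BASE, key=lambda candidate: score(question, candidate))
    return KNOWLEDGE_BASE[best]

print(answer("my laptop will not turn on"))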

EMOTIONAL AI

As the expression goes, the whole is greater than the sum of its parts, and this rings true for emotional intelligence technology. For instance, the world’s most famous robot, Pepper, is claimed to be the first android with emotions.

Pepper is a humanoid AI designed by Aldebaran Robotics to be a ‘kind’ companion. The diminutive and non-threatening robot’s eyes are high-tech camera scanners that examine facial expressions and cross-reference the results with his voice recognition software to identify human emotions. Once he knows how you feel, Pepper will tailor a conversation to you, and the more you interact, the more he gets to know what you enjoy. He may change the topic to dispel bad feeling and lighten your mood, play a game, or tell you a joke. Just like a friend.
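
The tailoring step, deciding what to do once a mood has been detected, is what makes Pepper feel like a companion rather than a sensor. Here is a deliberately simple illustration of such a response policy; the rules are invented for the example, not Aldebaran’s software.

def choose_action(detected_mood, favourite_topics):
    """Toy response policy: map a detected mood to a companion-style action."""
    if detected_mood == "sad":
        return "tell a joke to lighten the mood"
    if detected_mood == "bored":
        return "suggest a game"
    if detected_mood == "happy" and favourite_topics:
        return f"chat about {favourite_topics[0]}"
    return "ask an open question to learn more about the user"

print(choose_action("sad", favourite_topics=["football"]))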

Peppers are currently employed as customer support assistants for Japan’s telecoms company SoftBank, so that the public get accustomed to the friendly bots and Pepper learns in an immersive environment. In the spirit of evolution, IBM recently announced that its Watson technology has been integrated into the latest versions, and that Pepper is learning to speak Japanese at SoftBank. This technological partnership presents a tour de force of AEI, and IBM hopes Pepper will soon be ready for more challenging roles, “from an in-class teaching assistant to a nursing aide – taking Pepper’s unique physical characteristics, complemented by Watson’s cognitive capabilities, to deliver an enhanced experience.”

“In terms of hands-on interaction, when cognitive capabilities are embedded in robotics, you see people engage and benefit from this technology in new and exciting ways,” says IBM Watson senior vice president Mike Rhodin.

HUMANS AND ROBOTS

Paranoia tempts us into thinking that giving machines emotions is starting the countdown to chaos, but realistically it will make them more effective and versatile. For instance, while EmoSPARK is purely for entertainment and Pepper’s strength is in conversation, one of Aldebaran’s NAO robots has been programmed to act like a diabetic toddler by researchers Lola Cañamero and Matthew Lewis at the University of Hertfordshire. Switching the roles of carer and cared-for, children look after the bumbling robot Robin in order to understand more about their diabetes and how to manage it.

While the uncanny valley hypothesis says that people are uncomfortable with robots that closely resemble humans, it is now considered somewhat “overstated”, as our relationship with technology has changed dramatically since the theory was put forward in 1970 – after all, we’re unlikely to connect as strongly with a disembodied cube as with a humanoid robot.

This was clearly visible at a demonstration of Robin, where he tottered in a playpen surrounded by cooing adults. Lewis cradled the robot, stroked his head and said: “It’s impossible not to empathise with him. I wrote the code and I still empathise with him.” Humanisation will be an important aspect of the wider adoption of AEI, and developers are designing these machines to mimic our thinking patterns and behaviours, which fires our innate drive to bond.

Our relationship with artificial intelligence has always been a fascinating one, and it is only going to get more entangled, and perhaps weirder too, as AEIs may one day be our co-workers, friends or even, dare I say it, lovers. “It would be premature to say that the age of friendly robots has arrived,” Fung says. “The important thing is that our machines become more human, even if they are flawed. After all, that is how humans work.”

http://factor-tech.com/

New way to generate electric power from seawater


Scientists have successfully developed a method of producing electricity from seawater, with help from the Sun. Instead of harvesting hydrogen, the new photoelectrochemical cell produces hydrogen peroxide for electricity.

Researchers at Osaka University found a way to turn seawater—one of the most abundant resources on Earth—into hydrogen peroxide (H2O2) using sunlight, which can then be used to generate electricity in fuel cells. This adds to the ever-growing number of alternative energy options as the world continues to move towards green energy.

“Utilization of solar energy as a primary energy source has been strongly demanded to reduce emissions of harmful and/or greenhouse gases produced by burning fossil fuels. However, large fluctuation of solar energy depending on the length of the daytime is a serious problem. To utilize solar energy in the night time, solar energy should be stored in the form of chemical energy and used as a fuel to produce electricity,” the researchers wrote in their paper.

Previous technologies focused on splitting the molecules of pure water to harvest hydrogen.

Instead of harvesting hydrogen from pure water, the new research turns seawater into hydrogen peroxide. Producing hydrogen gas from pure water has a lower solar energy conversion efficiency, and the gas is much harder to store, whereas, the team notes, “H2O2 can be produced as an aqueous solution from water and O2 in the air.”

It is also much easier and safer to store and transport in higher densities, compared to highly compressed hydrogen gas.

There are other methods of producing H2O2, but they are impractical in that the processes themselves require a lot of energy, essentially defeating the purpose. This is the first photocatalytic method efficient enough to make H2O2 viable for use in fuel cells.

The new photoelectrochemical cell produces H2O2 when sunlight illuminates the photocatalyst: the catalyst absorbs photons and uses their energy to drive the chemical reactions that yield H2O2.
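
For readers who want the chemistry, the usual two-electron route in cells of this kind is: water is oxidised at the illuminated photoelectrode, and oxygen is reduced to hydrogen peroxide at the other electrode. Written as the textbook half-reactions (for orientation only; the paper’s exact scheme may differ in detail):

photoanode: 2 H2O → O2 + 4 H+ + 4 e−
cathode: O2 + 2 H+ + 2 e− → H2O2

Doubling the cathode reaction and adding the two gives an overall reaction of 2 H2O + O2 → 2 H2O2, driven by sunlight; feeding the stored H2O2 into a fuel cell then releases that chemical energy as electricity.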

A test conducted over 24 hours showed that the H2O2 concentration in seawater reached about 48 mM (millimolar), compared to 2 mM in pure water. The researchers found that this was made possible by the negatively charged chloride ions in seawater enhancing the photocatalysis.

This method isn’t yet as efficient as other solar energy processes, but it’s a start. The researchers aim to improve efficiency with better materials and lower costs.

“In the future, we plan to work on developing a method for the low-cost, large-scale production of H2O2 from seawater,” said Osaka University researcher Shunichi Fukuzumi. “This may replace the current high-cost production of H2O2 from H2 (from mainly natural gas) and O2.”

http://futurism.com/theres-a-new-way-to-generate-power-using-seawater/

New Real-Time In-Ear Device Translator By Waverly Labs To Be Released Soon

The language barrier may soon be much less of a problem around the world, and an in-ear device could be the answer. The device translates a foreign language into the wearer’s native language, and it works in real time.

A company called Waverly Labs has developed a device called “The Pilot” that performs real-time translation while sitting in the wearer’s ear.

A smartphone app also lets the user choose between languages, currently Spanish, French, Italian and English. Additional languages will be available soon after, including East Asian, Hindi, Semitic, Arabic, Slavic, African and more. The device only works with an always-on data connection to the wearer’s smartphone.

To use the device, two people share the pair of earpieces. As they talk in different languages, the earpieces act as translators so the wearers can understand each other.
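
Waverly Labs has not published its internals, but real-time speech translation of this sort is typically a three-stage pipeline running over that smartphone data connection: speech recognition, machine translation, then speech synthesis into the earpiece. Below is a schematic Python sketch under that assumption, with placeholder functions standing in for the real services.

def speech_to_text(audio, language):
    """Placeholder: transcribe the other speaker's audio in their own language."""
    return "¿Dónde está la estación?"

def translate(text, source_lang, target_lang):
    """Placeholder: machine-translate the transcript."""
    return "Where is the station?"

def text_to_speech(text, language):
    """Placeholder: synthesise audio to play in the wearer's earpiece."""
    return f"<audio:{language}:{text}>"

def pilot_pipeline(audio, speaker_lang="es", wearer_lang="en"):
    # The whole loop has to run fast enough over the phone's data connection
    # to feel like normal conversation.
    transcript = speech_to_text(audio, speaker_lang)
    translated = translate(transcript, speaker_lang, wearer_lang)
    return text_to_speech(translated, wearer_lang)

print(pilot_pipeline(audio=b"..."))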

The device will cost $129.

Robot outperforms highly-skilled human surgeons on pig GI surgery

A robot surgeon has been taught to perform a delicate procedure—stitching soft tissue together with a needle and thread—more precisely and reliably than even the best human doctor.

The Smart Tissue Autonomous Robot (STAR), developed by researchers at Children’s National Health System in Washington, D.C., uses an advanced 3-D imaging system and very precise force sensing to apply stitches with submillimeter precision. The system was designed to copy state-of-the-art surgical practice, but in tests involving living pigs, it proved capable of outperforming its teachers.

Currently, most surgical robots are controlled remotely, and no automated surgical system has been used to manipulate soft tissue. So the work, described today in the journal Science Translational Medicine, shows the potential for automated surgical tools to improve patient outcomes. More than 45 million soft-tissue surgeries are performed in the U.S. each year. Examples include hernia operations and repairs of torn muscles.

“Imagine that you need a surgery, or your loved one needs a surgery,” says Peter Kim, a pediatric surgeon at Children’s National, who led the work. “Wouldn’t it be critical to have the best surgeon and the best surgical techniques available?”

Kim does not see the technology replacing human surgeons. He explains that a surgeon still oversees the robot’s work and will take over in an emergency, such as unexpected bleeding.

“Even though we take pride in our craft of doing surgical procedures, to have a machine or tool that works with us in ensuring better outcome safety and reducing complications—[there] would be a tremendous benefit,” Kim says. The new system is an impressive example of a robot performing delicate manipulation. If robots can master human-level dexterity, they could conceivably take on many more tasks and jobs.

STAR consists of an industrial robot equipped with several custom-made components. The researchers developed a force-sensitive device for suturing and, most important, a near-infrared camera capable of imaging soft tissue in detail when fluorescent markers are injected.
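
Reduced to its control loop, the system described here repeatedly images the tissue, plans the next stitch site, and places the stitch while watching the force on the needle, handing back to the surgeon if anything looks wrong. The sketch below is a schematic of that loop only; the thresholds and hardware calls are made up for illustration, not the STAR software.

STITCH_SPACING_MM = 3.0  # assumed spacing between stitches
MAX_FORCE_N = 2.0        # assumed safe pulling force on the thread

def image_tissue():
    """Placeholder for the near-infrared camera: returns planned stitch sites (mm)."""
    return [(i * STITCH_SPACING_MM, 0.0) for i in range(5)]

def place_stitch(site, force_limit):
    """Placeholder for the robot arm: drive the needle and check the measured force."""
    measured_force = 1.2  # pretend force-sensor reading
    return measured_force <= force_limit

def suture_run():
    sites = image_tissue()  # plan stitch sites from the 3-D image
    for site in sites:
        if not place_stitch(site, MAX_FORCE_N):
            # Hand control back to the supervising surgeon.
            print(f"force limit exceeded at {site}; pausing for supervision")
            return False
    print(f"placed {len(sites)} stitches")
    return True

suture_run()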

“It’s an important result,” says Ken Goldberg, a professor at UC Berkeley who is also developing robotic surgical systems. “The innovation in 3-D sensing is particularly interesting.”

Goldberg’s team is developing surgical robots that could be more flexible than STAR because, instead of being manually programmed, they can learn automatically by observing expert surgeons. “Copying the skill of experts is really the next step here,” he says.

https://www.technologyreview.com/s/601378/nimble-fingered-robot-outperforms-the-best-human-surgeons/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Google invents cyborg lenses for our eyes

by David Goldman

Google has patented a new technology that would let the company inject a computerized lens directly into your eyeball.

The company has been developing smart glasses and even smart contact lenses for years. But Google’s newest patented technology would go even further — and deeper.

In its patent application, which the U.S. Patent and Trademark Office approved last week, Google says it could remove the lens of your eye, inject fluid into your empty lens capsule and then place an electronic lens in the fluid.

Once equipped with your cyborg lenses, you would never need glasses or contacts again. In fact, you might not even need a telescope or a microscope again. And who needs a camera when your eyes can capture photos and videos?

The artificial, computerized lenses could automatically adjust to help you see objects at a distance or very close by. The lenses could be powered by the movement of your eyeball, and they could even connect to a nearby wireless device.

Google says that its patented lenses could be used to cure presbyopia, an age-related condition in which people’s eyes stiffen and their ability to focus is diminished or lost. They could also correct common eye problems such as myopia, hyperopia and astigmatism.

Today, we cure blurry vision with eyeglasses or contact lenses. But sometimes vision is not correctable.

And there are clear advantages to being a cyborg with mechanical eyes.

Yet Google noted that privacy could become a concern. If your computerized eyes are transmitting data all the time, that signal could allow law enforcement or hackers to identify you or track your movements. Google said that it could make the mechanical lenses strip out personally identifying information so that your information stays secure.

Before you sign up for cyborg eyes, it’s important to note that Google and many other tech companies patent technologies all the time. Many of those patented items never get made into actual products. So it’s unclear whether Google will ever be implanting computers into your eyes.

http://money.cnn.com/2016/05/04/technology/google-lenses/index.html

Massive sculpture relocated because people kept walking into it while texting


The statue by Sophie Ryder had to be moved because people on their phones were bumping into it.

By Sophie Jamieson

A massive 20ft statue of two clasped hands had to be relocated after people texting on their mobile phones kept walking into it.

The sculpture, called ‘The Kiss’, was only put in place last weekend, but within days those in charge of the exhibition noticed walkers on the path were bumping their heads as they walked through the archway underneath.

Artist Sophie Ryder, who designed the sculpture, posted a video of it being moved by a crane on her Facebook page.

The artwork was positioned on a path leading up to Salisbury Cathedral in Wiltshire.

Made from galvanised steel wire, The Kiss had a 6ft 4in gap underneath the two hands that pedestrians could walk through.

But Ms Ryder said people glued to their phones had not seen it coming.

She said on social media: “We had to move ‘the kiss’ because people were walking through texting and said they bumped their heads! Oh well!!”

Her fans voiced their surprise that people could fail to notice the “ginormous” sculpture.

Cindy Billingsley commented: “Oh good grief- they should be looking at the beautiful art instead of texting- so they deserve what they get if they are not watching where they are going.”

Patricia Cunningham said: “If [sic] may have knocked some sense into their heads! We can but hope.”

Another fan, Lisa Wallis-Adams, wrote: “We saw your art in Salisbury at the weekend. We absolutely loved your rabbits and didn’t walk into any of them! Sorry some people are complete numpties.”

Sculptor Sophie Ryder studied at the Royal Academy of Arts and is known for creations of giant mythical figures, like minotaurs.

The sculpture is part of an exhibition that also features Ryder’s large “lady hares” and minotaurs, positioned on the lawn outside the cathedral. The exhibition runs until 3 July.

http://www.telegraph.co.uk/news/uknews/12164922/Massive-sculpture-relocated-because-people-busy-texting-kept-walking-into-it.html

Virtual Reality Therapy Shows Promise Against Depression

An immersive virtual reality therapy could help people with depression to be less critical and more compassionate towards themselves, reducing depressive symptoms, finds a new study from UCL (University College London) and ICREA-University of Barcelona.

The therapy, previously tested by healthy volunteers, was used by 15 depression patients aged 23-61. Nine reported reduced depressive symptoms a month after the therapy, of whom four experienced a clinically significant drop in depression severity. The study is published in the British Journal of Psychiatry Open and was funded by the Medical Research Council.

Patients in the study wore a virtual reality headset to see from the perspective of a life-size ‘avatar’ or virtual body. Seeing this virtual body in a mirror moving in the same way as their own body typically produces the illusion that this is their own body. This is called ’embodiment’.

While embodied in an adult avatar, participants were trained to express compassion towards a distressed virtual child. As they talked to the child it appeared to gradually stop crying and respond positively to the compassion. After a few minutes the patients were embodied in the virtual child and saw the adult avatar deliver their own compassionate words and gestures to them. This brief 8-minute scenario was repeated three times at weekly intervals, and patients were followed up a month later.

“People who struggle with anxiety and depression can be excessively self-critical when things go wrong in their lives,” explains study lead Professor Chris Brewin (UCL Clinical, Educational & Health Psychology). “In this study, by comforting the child and then hearing their own words back, patients are indirectly giving themselves compassion. The aim was to teach patients to be more compassionate towards themselves and less self-critical, and we saw promising results. A month after the study, several patients described how their experience had changed their response to real-life situations in which they would previously have been self-critical.”

The study offers a promising proof-of-concept, but as a small trial without a control group it cannot show whether the intervention is responsible for the clinical improvement in patients.

“We now hope to develop the technique further to conduct a larger controlled trial, so that we can confidently determine any clinical benefit,” says co-author Professor Mel Slater (ICREA-University of Barcelona and UCL Computer Science). “If a substantial benefit is seen, then this therapy could have huge potential. The recent marketing of low-cost home virtual reality systems means that methods such as this could potentially be part of every home and be used on a widespread basis.”

Publication: Embodying self-compassion within virtual reality and its effects on patients with depression. Falconer, CJ et al. British Journal of Psychiatry Open (February, 2016)