Bacteria can be turned into living hard drives


When scientists add code to bacterial DNA, it’s passed on to the next generation.

By Bryan Nelson

The way DNA stores genetic information is similar to the way a computer stores data. Now scientists have found a way to turn this from a metaphorical comparison into a literal one, by transforming living bacteria into hard drives, reports Popular Mechanics.

A team of Harvard scientists led by geneticists Seth Shipman and Jeff Nivala has devised a way to trick bacteria into copying computer code into the fabric of their DNA without interrupting normal cellular function. The bacteria even pass the information on to their progeny, thus ensuring that the information gets “backed up,” even when individual bacteria perish.

So far the technique can only upload about 100 bytes of data to the bacteria, but that’s enough to store a short script or perhaps a short poem — say, a haiku — in the genetics of a cell. For instance, here’s a haiku that would work:

Bacteria on your thumb
might someday become
a real thumb drive

As the method becomes more precise, it will be possible to encode longer strings of text into the fabric of life. Perhaps some day, the bacteria living all around us will also double as a sort of library that we can download.

The technique is based on manipulating an immune system found in many bacteria, known as the CRISPR/Cas system. How the system works is actually fairly simple: when bacteria encounter a threatening virus, they physically cut out a segment of the attacking virus’s DNA and paste it into a specific region of their own genome. The bacteria can then use this section of viral DNA to identify future encounters with the virus and rapidly mount a defense. Copying this immunity into their own genetic code allows the bacteria to pass it on to future generations.

To get the bacteria to copy strings of computer code instead, researchers just book-ended the information with segments that look like viral DNA. The bacteria then got to work, conveniently cutting and pasting the relevant section into their genes.

The method does have a few bugs. For instance, not all of the bacteria snip the full section, so only part of the code gets copied. But if you introduce the code into a large enough population of bacteria, it becomes easy to deduce the full message from a sufficient percentage of the colony.
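To see why colony-level redundancy helps, here is a toy sketch in Python (not the Harvard team’s actual pipeline; the per-position copy model and all of its parameters are invented purely for illustration). If each cell retains each character of the message only with some probability, a simple majority vote at every position across many cells recovers the original text:

    from collections import Counter
    import random

    def simulate_colony(message, n_cells=1000, copy_prob=0.7, seed=0):
        """Hypothetical model: each cell copies each character of the
        message independently with probability copy_prob."""
        rng = random.Random(seed)
        return [[c if rng.random() < copy_prob else None for c in message]
                for _ in range(n_cells)]

    def consensus(reads):
        """Recover the message by majority vote at each position."""
        out = []
        for i in range(len(reads[0])):
            votes = Counter(r[i] for r in reads if r[i] is not None)
            out.append(votes.most_common(1)[0][0] if votes else "?")
        return "".join(out)

    haiku = "Bacteria on your thumb / might someday become / a real thumb drive"
    print(consensus(simulate_colony(haiku)))  # prints the full haiku

With enough cells, every position of the message is covered by at least some of the partial copies, so the full text can be read back even though no single bacterium may carry all of it.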

The amount of information that can be stored also depends on the bacteria doing the storing. For this experiment, researchers used E. coli, which was only efficient at storing around 100 bytes. But some bacteria, such as Sulfolobus tokodaii, are capable of storing thousands of bytes. With synthetic engineering, these capacities could be pushed far higher.

http://www.mnn.com/green-tech/research-innovations/stories/bacteria-can-now-be-turned-living-hard-drives

Man reveals the truth of his two year ‘relationship’ with a sex robot

David Mills has opened up about his two year ‘relationship’ with a doll.

The 57-year-old has just celebrated his second anniversary with Taffy, his £5000 “RealDoll2”, with silicone skin and steel joints.

He has revealed that some women are turned on by the doll and he’s even shared a threesome with one woman.

The twice-divorced dad says he still dates, and gets differing reactions when he tells women about his sex doll; some would “freak out”.

He told Men’s Health: “They’ll be like, ‘Don’t call me anymore, I’m unfriending you on Facebook, stay away from me and my children,’ that sort of thing.

“But I’ve met some women who were into me because of the doll. I’ve had sexual experiences that I never would’ve had without Taffy.”

The American bought the sex robot from a Californian company two years ago and paid an extra £300 for added freckles, to make her more realistic.


The robots come with a £5000 price tag and the latest versions will even come with a pulse.

According to the website of sex doll suppliers Abyss, Taffy has an “ultra-realistic labia,” “stretchy lips,” and a hinged jaw that “opens and closes very realistically.”

In the first few months, he revealed, he would often come home, see the frozen figure sitting on a chair, and let out a blood-curdling scream.

David recalls one occasion when he brought a woman back to his house after a date, without telling her about his silicone companion.

He added: “I didn’t want my date to walk into the room and suddenly see Taffy, because if you’re not expecting her, she’s kind of terrifying.”

“So I say to this girl, ‘Give me a minute.’ And I run into the bedroom and quickly throw a sheet over Taffy.

“That was a close one.”

David laughs as he recalls one particular act with Taffy which would be impossible with a real woman.

He said: “Sometimes, when I just don’t feel like looking at her, I’ll take out her vagina.
“She stays in the bedroom, and I just walk around with her p***y. Isn’t modern technology wonderful?”

But David is keen to point out that his ownership of a sex robot doesn’t mean he is crazy.

He said: “I wouldn’t exactly call this a relationship.

“I think one of the misconceptions about sex robots is that owners view their dolls as alive, or that my doll is in love with me, or that I sit around and talk to her about whether I should buy Apple stock.”


Sex robots are big business in the States and are becoming more advanced all the time.

He also revealed that his 20-year-old daughter is aware of Taffy’s existence.

“We don’t really talk about it,” he added. “Just like we don’t talk about my television set or washing machine.”

Sex robots have become much more sophisticated in recent years and experts say walking, talking dolls won’t be too far away.

The “RoxxxyGold” robot from True Companion — with a base price, before the extras, of £4,800 — offers options including “a heartbeat and a circulatory system” and the ability to “talk to you about soccer.”

https://www.thesun.co.uk/living/1198869/ive-met-some-women-who-were-into-me-because-of-the-doll-man-reveals-the-truth-of-his-two-year-relationship-with-a-sex-robot/

First robot designed to cause human pain and make us bleed

By Jasper Hamill

Experts fear it’s only a matter of time before robots declare war on humans.

Now the tech world has taken one small step toward making this nightmare scenario a reality.

An American engineer has built the world’s first robot that is entirely designed to hurt human beings.

The pain machine breaks the first rule in science fiction writer Isaac Asimov’s famous “laws of robotics,” which states that machines should never hurt humans.

“No one’s actually made a robot that was built to intentionally hurt and injure someone,” robot designer and artist Alexander Reben told Fast Company.

“I wanted to make a robot that does this that actually exists.

“[It was] important to take it out of the thought experiment realm into reality, because once something exists in the world, you have to confront it. It becomes more urgent. You can’t just pontificate about it.”

Luckily for us humans, the pain-bot is not quite the shotgun-wielding death machine depicted in the “Terminator” films.

Its only weapon is a small needle attached to a long arm, which is used to inflict a small amount of pain on a human victim.

The robot randomly decides whether to attack people who are brave enough to put their hands beneath its arm, although it’s not strong enough to cause major injury.

Reben said the aim of the project wasn’t to hasten the end of humanity. Instead, he wants to encourage people to start discussing the prospect that robots could soon have some terrifying powers.

“I want people to start confronting the physicality of it,” Reben says. “It will raise a bit more awareness outside the philosophical realm.”

“There’s always going to be situations where the unforeseen is going to happen, and how to deal with that is going to be an important thing to think about.”

Last year, world-famous British physicist Professor Stephen Hawking claimed robots and artificial intelligence could wipe humans off the face of the planet.

Billionaire Elon Musk agrees, having spent much of the past few years warning about the apocalyptic scenario of a war between man and machine.

Both Hawking and Musk signed a letter last year urging world leaders to avoid a military robotics arms race.

It is likely that the battles of the future will involve machines capable of killing without needing to be directed by a human controller.

“[Robotic] weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group,” the letter said.

“We therefore believe that a military AI arms race would not be beneficial for humanity.”

http://nypost.com/2016/06/13/this-is-the-first-robot-designed-to-cause-human-pain/?utm_source=applenews&utm_medium=inline&utm_campaign=applenews

Computers can now accurately predict future development of schizophrenia based on how a person talks


A new study finds that an algorithmic analysis of speech was able to determine, with perfect accuracy in a small group of at-risk youths, who would go on to have a psychotic episode.

by ADRIENNE LAFRANCE

Although the language of thinking is deliberate—let me think, I have to do some thinking—the actual experience of having thoughts is often passive. Ideas pop up like dandelions; thoughts occur suddenly and escape without warning. People swim in and out of pools of thought in a way that can feel, paradoxically, mindless.

Most of the time, people don’t actively track the way one thought flows into the next. But in psychiatry, much attention is paid to such intricacies of thinking. For instance, disorganized thought, evidenced by disjointed patterns in speech, is considered a hallmark characteristic of schizophrenia. Several studies of at-risk youths have found that doctors are able to guess with impressive accuracy—the best predictive models hover around 79 percent—whether a person will develop psychosis based on tracking that person’s speech patterns in interviews.

A computer, it seems, can do better.

That’s according to researchers at Columbia University, the New York State Psychiatric Institute, and the IBM T. J. Watson Research Center. They used an automated speech-analysis program to correctly differentiate—with 100-percent accuracy—between at-risk young people who developed psychosis over a two-and-a-half-year period and those who did not. The computer model also outperformed other advanced screening technologies, like biomarkers from neuroimaging and EEG recordings of brain activity.

“In our study, we found that minimal semantic coherence—the flow of meaning from one sentence to the next—was characteristic of those young people at risk who later developed psychosis,” said Guillermo Cecchi, a biometaphorical-computing researcher for IBM Research, in an email. “It was not the average. What this means is that over 45 minutes of interviewing, these young people had at least one occasion of a jarring disruption in meaning from one sentence to the next. As an interviewer, if my mind wandered briefly, I might miss it. But a computer would pick it up.”

Researchers used an algorithm to root out such “jarring disruptions” in otherwise ordinary speech. Their semantic analysis measured coherence along with two syntactic markers of speech complexity: the length of a sentence and how many clauses it contained. “When people speak, they can speak in short, simple sentences. Or they can speak in longer, more complex sentences, that have clauses added that further elaborate and describe the main idea,” Cecchi said. “The measures of complexity and coherence are separate and are not correlated with one another. However, simple syntax and semantic incoherence do tend to aggregate together in schizophrenia.”

Here’s an example of a sentence, provided by Cecchi and revised for patient confidentiality, from one of the study’s participants who later developed psychosis:

I was always into video games. I mean, I don’t feel the urge to do that with this, but it would be fun. You know, so the one block thing is okay. I kind of lied though and I’m nervous about going back.
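The published model is more sophisticated than anything shown here, but the core idea of “minimal semantic coherence” can be sketched in a few lines of Python: represent consecutive sentences as vectors, measure how similar each sentence is to the next, and keep the minimum similarity rather than the average. The bag-of-words vectors below are only a crude stand-in for the study’s actual semantic representation, and the study’s additional syntactic measures (sentence length, clause counts) are omitted here.

    import math
    from collections import Counter

    def bow(sentence):
        """Crude stand-in for a semantic embedding: a word-count vector."""
        return Counter(sentence.lower().split())

    def cosine(a, b):
        dot = sum(count * b.get(word, 0) for word, count in a.items())
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def minimal_coherence(sentences):
        """Similarity between each sentence and the next; the minimum is the
        'jarring disruption' signal described in the article."""
        sims = [cosine(bow(s), bow(t)) for s, t in zip(sentences, sentences[1:])]
        return min(sims)

    transcript = [
        "I was always into video games.",
        "I mean, I don't feel the urge to do that with this, but it would be fun.",
        "You know, so the one block thing is okay.",
        "I kind of lied though and I'm nervous about going back.",
    ]
    print(minimal_coherence(transcript))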

While the researchers conclude that language processing appears to reveal “subtle, clinically relevant mental-state changes in emergent psychosis,” their work poses several outstanding questions. For one thing, their sample size of 34 patients was tiny. Researchers are planning to attempt to replicate their findings using transcripts from a larger cohort of at-risk youths.

They’re also working to contextualize what their findings might mean more broadly. “We know that thought disorder is an early core feature of schizophrenia evident before psychosis onset,” said Cheryl Corcoran, an assistant professor of clinical psychiatry at Columbia University. “The main question then is: What are the brain mechanisms underlying this abnormality in language? And how might we intervene to address it and possibly improve prognosis? Could we improve the concurrent language problems and function of children and teenagers at risk, and either prevent psychosis or at least modify its course?”

Intervention has long been the goal. And so far it has been an elusive one. Clinicians are already quite good at identifying people who are at increased risk of developing schizophrenia, but taking that one step farther and determining which of those people will actually end up having the illness remains a huge challenge.

“Better characterizing a behavioral component of schizophrenia may lead to a clearer understanding of the alterations to neural circuitry underlying the development of these symptoms,” said Gillinder Bedi, an assistant professor of clinical psychology at Columbia University. “If speech analyses could identify those people most likely to develop schizophrenia, this could allow for more targeted preventive treatment before the onset of psychosis, potentially delaying onset or reducing the severity of the symptoms which do develop.”

All this raises another question about the nature of human language. If the way a person speaks can be a window into how that person is thinking, and further, a means of assessing how they’re doing, which mechanisms of language are really most meaningful? It isn’t what you say, the aphorism goes, it’s how you say it. Actually, though, it’s both.

As Cecchi points out, the computer analysis at the center of the study didn’t include any acoustic features like intonation, cadence, volume—all characteristics which could be meaningful in interpreting a person’s pattern of speaking and, by extension, thinking. “There is a deeper limitation, related to our current understanding of language and how to measure the full extent of what is being expressed and communicated when people speak to each other, or write,” Cecchi said. “The discriminative features that we identified are still a very simplified description of language. Finally, while language provides a unique window into the mind, it is still just one aspect of human behavior and cannot fully substitute for a close observation and interaction with the patient.”

http://www.theatlantic.com/technology/archive/2015/08/speech-analysis-schizophrenia-algorithm/402265/

Artificial intelligence replaces physicists



Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.

The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.

“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”

Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.

They could be used for mineral exploration or navigation systems, as they are extremely sensitive to external disturbances, which allows them to make very precise measurements of tiny changes in the Earth’s magnetic field or gravity.

The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.

“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.

“It’s cheaper than taking a physicist everywhere with you.”

The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin.
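The release doesn’t spell out the learning algorithm, so the sketch below is only a generic illustration of experiment-in-the-loop optimisation, written as a plain random search over ramp settings rather than the team’s actual machine learner; run_one_cycle is a made-up stand-in for the real apparatus.

    import random

    def run_one_cycle(ramp):
        """Stand-in for the apparatus: apply the laser-power ramp, make a
        condensate, return a measured quality score. A toy surrogate is used
        here so the sketch runs."""
        return -sum((p - 0.3) ** 2 for p in ramp)

    def optimise_ramp(n_trials=50, n_params=9, seed=1):
        """Experiment-in-the-loop optimisation: propose ramp settings for the
        three beams, run a cycle, keep the best-scoring settings."""
        rng = random.Random(seed)
        best_ramp, best_score = None, float("-inf")
        for _ in range(n_trials):
            ramp = [rng.uniform(0.0, 1.0) for _ in range(n_params)]
            score = run_one_cycle(ramp)
            if score > best_score:
                best_ramp, best_score = ramp, score
        return best_ramp, best_score

    print(optimise_ramp())

The actual system learned to run the experiment from scratch in under an hour, far more efficiently than a blind search like this one.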

Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.

“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”

The new technique will lead to bigger and better experiments, said Dr Hush.

“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve ever seen before,” he said.

The research is published in the Nature group journal Scientific Reports.

https://www.sciencedaily.com/releases/2016/05/160516091544.htm

DESIGNING AI WITH A HEART: THE CHALLENGE OF GIVING MACHINES EMOTIONAL AWARENESS


ADVANCES IN EMOTIONAL TECHNOLOGIES ARE WARMING UP HUMAN-ROBOT RELATIONSHIPS, BUT CAN AI EVER FULFILL OUR EMOTIONAL NEEDS?

Science fiction has terrified and entertained us with countless dystopian futures where weak human creators are annihilated by heartless super-intelligences. The solution seems easy enough: give them hearts.

Artificial emotional intelligence (AEI) development is gathering momentum, and the number of social media companies buying start-ups in the field indicates either true faith in the concept or a reckless enthusiasm. The case for AEI is simple: machines will work better if they understand us. Rather than only complying with commands, this would enable them to anticipate our needs, and so be able to carry out delicate tasks autonomously, such as home help, counselling or simply being a friend.

Dr Adam Waytz, an assistant professor at Northwestern University’s Kellogg School of Management, and Dr Michael Norton, a professor at Harvard Business School, explain in the Wall Street Journal: “When emotional jobs such as social workers and pre-school teachers must be ‘botsourced’, people actually prefer robots that seem capable of conveying at least some degree of human emotion.”

A plethora of intelligent machines already exist but to get them working in our offices and homes we need them to understand and share our feelings. So where do we start?

TEACHING EMOTION

“Building an empathy module is a matter of identifying those characteristics of human communication that machines can use to recognize emotion and then training algorithms to spot them,” says Pascale Fung in Scientific American magazine. According to Fung, creating this empathy module requires three components that can analyse “facial cues, acoustic markers in speech and the content of speech itself to read human emotion and tell the robot how to respond.”

Although still fairly crude, facial scanners will become increasingly specialised and able to spot mood signals, such as a tilting of the head, widening of the eyes, and mouth position. But the really interesting area of development is speech cognition. Fung, a professor of electronic and computer engineering at the Hong Kong University of Science and Technology, has commercialised part of her research by setting up a company called Ivo Technologies that used these principles to produce Moodbox, a ‘robot speaker with a heart’.

Unlike humans who learn through instinct and experience, AIs use machine learning – a process where the algorithms are constantly revised. The more you interact with the Moodbox, the more examples it has of your behaviour, and the better it can respond in the appropriate way.

To create the Moodbox, Fung’s team set up a series of 14 ‘classifiers’ to analyse musical pieces. The classifiers were subjected to thousands of examples of ambient sound so that each one became adept at recognising music in its assigned mood category. Then, algorithms were written to spot non-verbal cues in speech such as speed and tone of voice, which indicate the level of stress. The two stages are matched up to predict what you want to listen to. This uses a vast amount of research to produce a souped-up speaker system, but the underlying software is highly sophisticated and indicates the level of progress being made.
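As a purely schematic sketch of that matching step (Ivo Technologies’ real features, categories and thresholds are not public, so everything below is invented for illustration): estimate a stress level from non-verbal cues such as speaking rate and pitch, then map it onto one of the pre-trained music-mood categories.

    def estimate_stress(words_per_minute, mean_pitch_hz):
        """Toy non-verbal cue model: faster, higher-pitched speech scores as
        more stressed. The thresholds are illustrative only."""
        rate = min(max((words_per_minute - 120) / 80.0, 0.0), 1.0)
        pitch = min(max((mean_pitch_hz - 150) / 150.0, 0.0), 1.0)
        return (rate + pitch) / 2.0  # 0 = relaxed, 1 = highly stressed

    def pick_music_mood(stress):
        """Map the estimated stress level onto a music-mood category."""
        if stress > 0.66:
            return "calming"
        if stress > 0.33:
            return "neutral"
        return "upbeat"

    # A fast, high-pitched speaker is judged stressed and gets calming music.
    print(pick_music_mood(estimate_stress(words_per_minute=190, mean_pitch_hz=260)))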

Using similar principles is Emoshape’s EmoSPARK infotainment cube – an all-in-one home control system that not only links to your media devices, but keeps you up to date with news and weather, can control the lights and security, and also hold a conversation. To create its eerily named ‘human in a box’, Emoshape says the cube devises an emotional profile graph (EPG) on each user, and claims it is capable of “measuring the emotional responses of multiple people simultaneously”. The housekeeper-entertainer-companion comes with face recognition technology too, so if you are unhappy with its choice of TV show or search results, it will ‘see’ this, recalibrate its responses, and come back to you with a revised response.

According to Emoshape, this EPG data enables the AI to “virtually ‘feel’ senses such as pleasure and pain, and [it] ‘expresses’ those desires according to the user.”

PUTTING LANGUAGE INTO CONTEXT

We don’t always say what we mean, so comprehension is essential to enable AEIs to converse with us. “Once a machine can understand the content of speech, it can compare that content with the way it is delivered,” says Fung. “If a person sighs and says, ‘I’m so glad I have to work all weekend,’ an algorithm can detect the mismatch between the emotion cues and the content of the statement and calculate the probability that the speaker is being sarcastic.”
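A minimal Python sketch of the mismatch idea Fung describes (not her group’s implementation; the word lists and cue labels are invented for illustration): score the sentiment of the words, score the sentiment implied by the delivery, and flag likely sarcasm when the two point in opposite directions.

    POSITIVE_WORDS = {"glad", "great", "love", "happy", "wonderful"}
    NEGATIVE_WORDS = {"hate", "awful", "sad", "terrible", "angry"}

    def text_sentiment(utterance):
        """Toy lexicon-based sentiment: +1 positive, -1 negative, 0 neutral."""
        words = set(utterance.lower().replace(",", "").replace(".", "").split())
        score = len(words & POSITIVE_WORDS) - len(words & NEGATIVE_WORDS)
        return (score > 0) - (score < 0)

    def delivery_sentiment(cue):
        """Toy mapping from an acoustic or visual cue label to a sentiment sign."""
        return {"sigh": -1, "flat_tone": -1, "laugh": 1, "bright_tone": 1}.get(cue, 0)

    def sarcasm_probability(utterance, cue):
        """High when the words and the delivery disagree."""
        t, d = text_sentiment(utterance), delivery_sentiment(cue)
        return 0.9 if t * d < 0 else 0.1

    print(sarcasm_probability("I'm so glad I have to work all weekend", "sigh"))  # 0.9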

A great example of language comprehension technology is IBM’s Watson platform. Watson is a cognitive computing tool that mimics how human brains process data. As IBM says, its systems “understand the world in the way that humans do: through senses, learning, and experience.”

To deduce meaning, Watson is first trained to understand a subject, in this case speech, and given a huge breadth of examples to form a knowledge base. Then, with algorithms written to recognise natural speech – including humour, puns and slang – the programme is trained to work with the material it has so it can be recalibrated and refined. Watson can sift through its database, rank the results, and choose the answer according to the greatest likelihood in just seconds.

EMOTIONAL AI

As the expression goes, the whole is greater than the sum of its parts, and this rings true for emotional intelligence technology. For instance, the world’s most famous robot, Pepper, is claimed to be the first android with emotions.

Pepper is a humanoid AI designed by Aldebaran Robotics to be a ‘kind’ companion. The diminutive and non-threatening robot’s eyes are high-tech camera scanners that examine facial expressions and cross-reference the results with his voice recognition software to identify human emotions. Once he knows how you feel, Pepper will tailor a conversation to you and the more you interact, the more he gets to know what you enjoy. He may change the topic to dispel bad feeling and lighten your mood, play a game, or tell you a joke. Just like a friend.

Peppers are currently employed as customer support assistants for Japan’s telecoms company Softbank so that the public get accustomed to the friendly bots and Pepper learns in an immersive environment. In the spirit of evolution, IBM recently announced that its Watson technology has been integrated into the latest versions, and that Pepper is learning to speak Japanese at Softbank. This technological partnership presents a tour de force of AEI, and IBM hopes Pepper will soon be ready for more challenging roles, “from an in-class teaching assistant to a nursing aide – taking Pepper’s unique physical characteristics, complemented by Watson’s cognitive capabilities, to deliver an enhanced experience.”

“In terms of hands-on interaction, when cognitive capabilities are embedded in robotics, you see people engage and benefit from this technology in new and exciting ways,” says IBM Watson senior vice president Mike Rhodin.

HUMANS AND ROBOTS

Paranoia tempts us into thinking that giving machines emotions is starting the countdown to chaos, but realistically it will make them more effective and versatile. For instance, while EmoSPARK is purely for entertainment and Pepper’s strength is in conversation, one of Aldebaran’s NAO robots has been programmed to act like a diabetic toddler by researchers Lola Cañamero and Matthew Lewis at the University of Hertfordshire. Switching the roles of carer and cared-for, children look after the bumbling robot Robin in order to help them understand more about their diabetes and how to manage it.

While the uncanny valley hypothesis says that people are uncomfortable with robots that look almost, but not quite, human, it is now considered somewhat “overstated”, as our relationship with technology has dramatically changed since the theory was put forward in 1970. After all, we’re unlikely to connect as strongly with a disembodied cube as with a humanoid robot.

This was clearly visible at a demonstration of Robin, where he tottered in a playpen surrounded by cooing adults. Lewis cradled the robot, stroked his head and said: “It’s impossible not to empathise with him. I wrote the code and I still empathise with him.” Humanisation will be an important aspect of the wider adoption of AEI, and developers are designing these machines to mimic our thinking patterns and behaviours, which fires our innate drive to bond.

Our relationship with artificial intelligence has always been a fascinating one, and it is only going to get more entangled, and perhaps weirder too, as AEIs may one day be our co-workers, friends or even, dare I say it, lovers. “It would be premature to say that the age of friendly robots has arrived,” Fung says. “The important thing is that our machines become more human, even if they are flawed. After all, that is how humans work.”

http://factor-tech.com/

New way to generate electric power from seawater


Scientists have successfully developed a method of producing electricity from seawater, with help from the Sun. Instead of harvesting hydrogen, the new photoelectrochemical cell produces hydrogen peroxide for electricity.

Researchers at Osaka University found a way to turn seawater—one of the most abundant resources on Earth—into hydrogen peroxide (H2O2) using sunlight, which can then be used to generate electricity in fuel cells. This adds to the ever growing number of existing alternative energy options as the world continues to move towards green energy.

“Utilization of solar energy as a primary energy source has been strongly demanded to reduce emissions of harmful and/or greenhouse gases produced by burning fossil fuels. However, large fluctuation of solar energy depending on the length of the daytime is a serious problem. To utilize solar energy in the night time, solar energy should be stored in the form of chemical energy and used as a fuel to produce electricity,” the researchers wrote in their paper.

Previous technologies focused on splitting the molecules of pure water to harvest hydrogen.

As previously mentioned, instead of harvesting hydrogen from pure water, the new research produces hydrogen peroxide from seawater. Producing hydrogen gas from pure water has a lower solar energy conversion efficiency, and the gas is much harder to store, whereas, the team notes, “H2O2 can be produced as an aqueous solution from water and O2 in the air.”

It is also much easier and safer to store and transport in higher densities, compared to highly compressed hydrogen gas.

There are other methods of producing H2O2, but they are impractical because the processes themselves require a lot of energy, essentially defeating the purpose. This is the first time anyone has developed a photocatalytic method efficient enough to make using H2O2 in fuel cells viable.

The process involves a new photoelectrochemical cell developed to produce H2O2 when sunlight illuminates the photocatalyst, which then absorbs photons and initiates chemical reactions with the energy, resulting in H2O2.

A test conducted over 24 hours showed that the H2O2 concentration in seawater reached about 48 mM (millimolar), compared with 2 mM in pure water. Researchers found that this was made possible by seawater’s negatively charged chloride ions enhancing the photocatalysis.

That said, this method isn’t yet as good as other solar power processes, but it’s a start. Researchers aim to improve efficiency with better materials and lower costs.

“In the future, we plan to work on developing a method for the low-cost, large-scale production of H2O2 from seawater,” said study author Shunichi Fukuzumi. “This may replace the current high-cost production of H2O2 from H2 (from mainly natural gas) and O2.”

http://futurism.com/theres-a-new-way-to-generate-power-using-seawater/

New Real-Time In-Ear Device Translator By Waverly Labs To Be Released Soon

The language barrier may soon no longer be a problem around the world, as a new in-ear device aims to remove it. The device translates a foreign language into the wearer’s native language, and it works in real time.

A company called Waverly Labs has developed a device called “The Pilot” that performs real-time translation while sitting in the wearer’s ear.

A smartphone app will also let the user choose between different languages, currently Spanish, French, Italian and English. Additional languages will be available soon after, including East Asian, Hindi, Semitic, Arabic, Slavic and African languages. The device also works only with an always-on data connection from the wearer’s smartphone.

To use the device, the earpieces can be shared by two people. While they talk in different languages, the in-ear devices serve as translators, letting the wearers understand each other.

The device will cost $129.

Virtual Reality Therapy Shows Promise Against Depression

An immersive virtual reality therapy could help people with depression to be less critical and more compassionate towards themselves, reducing depressive symptoms, finds a new study from UCL (University College London) and ICREA-University of Barcelona.

The therapy, previously tested by healthy volunteers, was used by 15 depression patients aged 23-61. Nine reported reduced depressive symptoms a month after the therapy, of whom four experienced a clinically significant drop in depression severity. The study is published in the British Journal of Psychiatry Open and was funded by the Medical Research Council.

Patients in the study wore a virtual reality headset to see from the perspective of a life-size ‘avatar’ or virtual body. Seeing this virtual body in a mirror moving in the same way as their own body typically produces the illusion that this is their own body. This is called ’embodiment’.

While embodied in an adult avatar, participants were trained to express compassion towards a distressed virtual child. As they talked to the child it appeared to gradually stop crying and respond positively to the compassion. After a few minutes the patients were embodied in the virtual child and saw the adult avatar deliver their own compassionate words and gestures to them. This brief 8-minute scenario was repeated three times at weekly intervals, and patients were followed up a month later.

“People who struggle with anxiety and depression can be excessively self-critical when things go wrong in their lives,” explains study lead Professor Chris Brewin (UCL Clinical, Educational & Health Psychology). “In this study, by comforting the child and then hearing their own words back, patients are indirectly giving themselves compassion. The aim was to teach patients to be more compassionate towards themselves and less self-critical, and we saw promising results. A month after the study, several patients described how their experience had changed their response to real-life situations in which they would previously have been self-critical.”

The study offers a promising proof-of-concept, but as a small trial without a control group it cannot show whether the intervention is responsible for the clinical improvement in patients.

“We now hope to develop the technique further to conduct a larger controlled trial, so that we can confidently determine any clinical benefit,” says co-author Professor Mel Slater (ICREA-University of Barcelona and UCL Computer Science). “If a substantial benefit is seen, then this therapy could have huge potential. The recent marketing of low-cost home virtual reality systems means that methods such as this could potentially be part of every home and be used on a widespread basis.”

Publication: Embodying self-compassion within virtual reality and its effects on patients with depression. Falconer, CJ et al. British Journal of Psychiatry Open (February, 2016)