Scientists encode memories in a way that bypasses damaged brain tissue

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have developed a brain prosthesis designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If there’s damage at any region that prevents this translation, then there is the possibility that long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from a long time ago – things that were already translated into long-term memories before the brain damage occurred – but have difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed the signals recorded in the first region of the hippocampus into the model and checked how well its output matched the signals actually generated by the second region.

In hundreds of trials conducted with nine patients, the algorithm predicted how the signals would be translated with about 90 percent accuracy.
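
To make the translation idea concrete, here is a minimal sketch of the underlying input-output problem: given activity recorded in one hippocampal region, predict the activity the next region should produce. Song and Berger's actual prosthesis uses a nonlinear multi-input multi-output (MIMO) model fitted to patients' neural data; the linear least-squares stand-in below runs on synthetic spike counts, and every number in it is invented for illustration.

```python
# Toy version of the "translation" problem: predict downstream hippocampal
# activity from upstream activity. NOT the USC MIMO model; this is a linear
# stand-in on synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recordings": binned spike counts on 32 upstream channels.
upstream = rng.poisson(lam=2.0, size=(2000, 32)).astype(float)

def lagged(x, lags=3):
    # Use the current time bin plus the two previous bins as features.
    return np.hstack([np.roll(x, k, axis=0) for k in range(lags)])

# Pretend the downstream region applies an unknown lagged transform + noise.
true_weights = rng.normal(size=(32 * 3, 8))
downstream = lagged(upstream) @ true_weights + rng.normal(scale=0.5, size=(2000, 8))

# Fit the "translation" on the first half of the data, test on the second.
X, Y = lagged(upstream), downstream
W, *_ = np.linalg.lstsq(X[:1000], Y[:1000], rcond=None)
pred = X[1000:] @ W

# Score: how well predicted downstream activity matches the real thing.
r = np.corrcoef(pred.ravel(), Y[1000:].ravel())[0, 1]
print(f"prediction correlation: {r:.2f}")
```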

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions, to bypass the damage and enable the formation of an accurate long-term memory.

http://medicalxpress.com/news/2015-09-scientists-bypass-brain-re-encoding-memories.html#nRlv

Paralyzed man walks again, using only his mind.


Paraplegic Adam Fritz works out with Kristen Johnson, a spinal cord injury recovery specialist, at the Project Walk facility in Claremont, California on September 24. A brain-to-computer technology that can translate thoughts into leg movements has enabled Fritz, paralyzed from the waist down by a spinal cord injury, to become the first such patient to walk without the use of robotics.

It’s a technology that sounds lifted from the latest Marvel movie—a brain-computer interface functional electrical stimulation (BCI-FES) system that enables paralyzed users to walk again. But thanks to neurologists, biomedical engineers and other scientists at the University of California, Irvine, it’s very much a reality, though admittedly with only one successful test subject so far.

The team, led by Zoran Nenadic and An H. Do, built a device that translates brain waves into electrical signals that can bypass the damaged region of a paraplegic’s spine and go directly to the muscles, stimulating them to move. To test it, they recruited 28-year-old Adam Fritz, who had lost the use of his legs five years earlier in a motorcycle accident.

Fritz first had to learn how exactly he’d been telling his legs to move for all those years before his accident. The research team fitted him with an electroencephalogram (EEG) cap that read his brain waves as he visualized moving an avatar in a virtual reality environment. After hours of training on the video game, he eventually figured out how to signal “walk.”

The next step was to transfer that newfound skill to his legs. The scientists wired up the EEG device so that it would send electrical signals to the muscles in Fritz’s leg. And then, along with physical therapy to strengthen his legs, he would practice walking—his legs suspended a few inches off the ground—using only his brain (and, of course, the device). On his 20th visit, Fritz was finally able to walk using a harness that supported his body weight and prevented him from falling. After a little more practice, he walked using just the BCI-FES system. After 30 trials run over a period of 19 weeks, he could successfully walk through a 12-foot-long course.
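
As described, the system is a loop: read a window of EEG, decode a binary walk-or-stop intent, and gate electrical stimulation of the leg muscles accordingly. The sketch below shows that loop in schematic form; the mu-band feature, the threshold, and the stimulator interface are all assumptions for illustration, not UC Irvine's implementation.

```python
# Schematic of the BCI-FES control loop described above (illustrative only).
import numpy as np

def band_power(eeg_window, fs=256, band=(8.0, 12.0)):
    """Mean power of an EEG window in a frequency band (mu rhythm here)."""
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def decode_intent(eeg_window, threshold=50.0):
    """Binary decoder: suppressed mu power is read as 'walk' intent."""
    return "walk" if band_power(eeg_window) < threshold else "stop"

class DummyStimulator:
    """Stand-in for the functional electrical stimulation hardware."""
    def stimulate_legs(self): print("FES on: stepping")
    def idle(self): print("FES off")

def control_loop(eeg_stream, stimulator):
    for window in eeg_stream:              # e.g., one-second EEG windows
        if decode_intent(window) == "walk":
            stimulator.stimulate_legs()    # drive the leg muscles
        else:
            stimulator.idle()

# Demo on random noise standing in for recorded EEG.
control_loop(np.random.default_rng(1).normal(size=(3, 256)), DummyStimulator())
```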

As encouraging as the trial sounds, there are experts who suggest the design has limitations. “It appears that the brain EEG signal only contributed a walk or stop command,” says Dr. Chet Moritz, an associate professor of rehab medicine, physiology and biophysics at the University of Washington. “This binary signal could easily be provided by the user using a sip-puff straw, eye-blink device or many other more reliable means of communicating a simple ‘switch.’”

Moritz believes it’s unlikely that an EEG alone would be reliable enough to extract any more specific input from the brain while the test subject is walking. In other words, it might not be able to do much beyond beginning and ending a simple motion like moving your legs forward—not so helpful in stepping over curbs or turning a corner in a hallway.

The UC Irvine team hopes to improve the capability of its technology. A simplified version of the system has the potential to work as a means of noninvasive rehabilitation for a wide range of paralytic conditions, from less severe spinal cord injuries to stroke and multiple sclerosis.

“Once we’ve confirmed the usability of this noninvasive system, we can look into invasive means, such as brain implants,” said Nenadic in a statement announcing the project’s success. “We hope that an implant could achieve an even greater level of prosthesis control because brain waves are recorded with higher quality. In addition, such an implant could deliver sensation back to the brain, enabling the user to feel their legs.”

http://www.newsweek.com/paralyzed-man-walks-again-using-only-his-mind-379531

Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years

Bill Gates calls Ray “the best person I know at predicting the future of artificial intelligence.” Ray is also amazing at predicting a lot more than just AI.

This post looks at his incredible predictions for the next 20+ years.

So who is Ray Kurzweil?

He has received 20 honorary doctorates, has been awarded honors from three U.S. presidents, and has authored 7 books (5 of which have been national bestsellers).

He is the principal inventor of many technologies ranging from the first CCD flatbed scanner to the first print-to-speech reading machine for the blind. He is also the chancellor and co-founder of Singularity University, and the guy tagged by Larry Page to direct artificial intelligence development at Google.

In short, Ray’s pretty smart… and his predictions are amazing, mind-boggling, and important reminders that we are living in the most exciting time in human history.

But first, let’s look back at some of the predictions Ray got right.

Predictions Ray has gotten right over the last 25 years

In 1990 (twenty-five years ago), he predicted…

…that a computer would defeat a world chess champion by 1998. Then in 1997, IBM’s Deep Blue defeated Garry Kasparov.

… that PCs would be capable of answering queries by accessing information wirelessly via the Internet by 2010. He was right, to say the least.

… that by the early 2000s, exoskeletal limbs would let the disabled walk. Companies like Ekso Bionics and others now have technology that does just this, and much more.

In 1999, he predicted…

… that people would be able to talk to their computer to give commands by 2009. While natural language interfaces were still in their early days in 2009, tools like Apple’s Siri and Google Now have since come a long way. I rarely use my keyboard anymore; instead I dictate texts and emails.

… that computer displays would be built into eyeglasses for augmented reality by 2009. Labs and teams were building head-mounted displays well before 2009, but Google started experimenting with Google Glass prototypes in 2011. Now we are seeing an explosion of augmented and virtual reality solutions and HMDs. Microsoft just released the HoloLens, and Magic Leap is working on some amazing technology, to name two.

In 2005, he predicted…

… that by the 2010s, virtual solutions would be able to do real-time language translation in which words spoken in a foreign language would be translated into text that would appear as subtitles to a user wearing the glasses. Well, Microsoft (via Skype Translate), Google (Translate), and others have done this and beyond. One app called Word Lens actually uses your camera to find and translate text imagery in real time.

Ray’s predictions for the next 25 years

The above represent only a few of the predictions Ray has made.

While he hasn’t always been right to the exact year, his track record is stunningly good.

Here are some of Ray’s predictions for the next 25+ years.

By the late 2010s, glasses will beam images directly onto the retina. Ten terabytes of computing power (roughly the same as the human brain) will cost about $1,000.

By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.

By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.

By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world on a whim.

By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

Ray’s predictions are a byproduct of his understanding of the power of Moore’s Law, and more specifically of his own “Law of Accelerating Returns” and of exponential technologies.

These technologies follow an exponential growth curve based on the principle that the computing power that enables them doubles every two years.
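
The arithmetic behind that curve is worth making explicit: a quantity that doubles every two years grows by a factor of nearly 6,000 over a 25-year prediction horizon, which is why linear intuition so badly underestimates these forecasts.

```python
# Growth implied by "doubles every two years" over a 25-year horizon.
years = 25
doublings = years / 2                 # one doubling per two-year period
factor = 2 ** doublings
print(f"{doublings:.1f} doublings -> about {factor:,.0f}x growth")
# 12.5 doublings -> about 5,793x growth
```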


Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Deep brain stimulation treatment for patients with obsessive-compulsive disorder (OCD)

It seems simple: Walk to the refrigerator and grab a drink.

But Brett Larsen, 37, opens the door gingerly — peeks in — closes it, opens it, closes it and opens it again. This goes on for several minutes.

When he finally gets out a bottle of soda, he places his thumb and index finger on the cap, just so. Twists it open. Twists it closed. Twists it open.

“Just think about any movement that you have during the course of a day — closing a door or flushing the toilet — over and over and over,” said Michele Larsen, Brett’s mother.

“I cannot tell you the number of things we’ve had to replace for being broken because they’ve been used so many times.”

At 12, Larsen was diagnosed with obsessive-compulsive disorder, or OCD. It causes anxiety, which grips him so tightly that his only relief is repetition. It manifests in the smallest of tasks: taking a shower, putting on his shoes, walking through a doorway.

There are days when Larsen cannot leave the house.

“I can only imagine how difficult that is to live with that every single living waking moment of your life,” said Dr. Gerald Maguire, Larsen’s psychiatrist.

In a last-ditch effort to relieve his symptoms, Larsen decided to undergo deep brain stimulation. Electrodes were implanted in his brain, nestled near the striatum, an area thought to be responsible for deep, primitive emotions such as anxiety and fear.

Brett’s OCD trigger

Brett says his obsessions and compulsions began when he was 10, after his father died.

“I started worrying a lot about my family and loved ones dying or something bad happening to them,” he said. “I just got the thought in my head that if I switch the light off a certain amount of times, maybe I could control it somehow.

“Then I just kept doing it, and it got worse and worse.”

“Being OCD” has become a cultural catchphrase, but for people with the actual disorder, life can feel like a broken record. With OCD, the normal impulse to go back and check if you turned off the stove, or whether you left the lights on, becomes part of a crippling ritual.

The disease hijacked Larsen’s life (he cannot hold down a job and rarely sees friends); his personality (he can be stone-faced, with only glimpses of a slight smile); and his speech (a stuttering-like condition makes it halting and labored).

He spent the past two decades trying everything: multiple medication combinations, cognitive behavioral therapy, cross-country visits to specialists, even hospitalization.

Nothing could quell the anxiety churning inside him.

“This is not something that you consider first line for patients because this is invasive,” said Maguire, chair of psychiatry and neuroscience at the University of California Riverside medical school, and part of the team evaluating whether Larsen was a good candidate for deep brain stimulation. “It’s reserved for those patients when the standard therapies, the talk therapies, the medication therapies have failed.”

Deep brain stimulation is an experimental intervention, most commonly used among patients with nervous system disorders such as essential tremor, dystonia or Parkinson’s disease. In rare cases, it has been used for patients with intractable depression and OCD.

The electrodes alter the electrical field around regions of the brain thought to influence disease — in some cases amplifying it, in others dampening it — in hopes of relieving symptoms, said Dr. Frank Hsu, professor and chair of the department of neurosurgery at University of California, Irvine.

Hsu says stimulating the brain has worked with several OCD patients, but that the precise mechanism is not well understood.

The procedure is not innocuous: It involves a small risk of bleeding in the brain, stroke and infection. A battery pack embedded under the skin keeps the electrical current coursing to the brain, but each time the batteries run out, another surgical procedure is required.

‘I feel like laughing’

As doctors navigated Larsen’s brain tissue in the operating room — stimulating different areas to determine where to focus the electrical current — Larsen began to feel his fear fade.

At one point he began beaming, then giggling. It was an uncharacteristic light moment for someone usually gripped by anxiety.

In response to Larsen’s laughter, a staff member in the operating room asked him what he was feeling. Larsen said, “I don’t know why, but I feel happy. I feel like laughing.”

Doctors continued probing his brain for hours, figuring out what areas — and what level of stimulation — might work weeks later, when Larsen would have his device turned on for good.

In the weeks after surgery, the residual swelling in his brain kept those good feelings going. For the first time in years, Larsen and his mother had hope for normalcy.

“I know that Brett has a lot of normal in him, even though this disease eats him up at times,” said Michele Larsen. “There are moments when he’s free enough of anxiety that he can express that. But it’s only moments. It’s not days. It’s not hours. It’s not enough.”

Turning it on

In January, Larsen had his device activated. Almost immediately, he felt a swell of happiness reminiscent of what he had felt in the OR weeks earlier.

But that feeling would be fleeting — the process for getting him to an optimal level would take months. Every few weeks doctors increased the electrical current.

“Each time I go back it feels better,” Larsen said. “I’m more calm every time they turn it up.”

With time, some of his compulsive behaviors became less pronounced. In May, several weeks after his device was activated, he could put on his shoes with ease. He no longer spun them around in an incessant circle to allay his anxiety.

But other behaviors — such as turning on and shutting off the faucet — continued. Today, things are better, but not completely normal.

Normal, by society’s definition, is not the outcome Larsen should expect, experts say. Patients with an intractable disease who undergo deep brain stimulation should expect to have manageable OCD.

Lately, Larsen feels less trapped by his mind. He is able to make the once interminable trek outside his home within minutes, not hours. He has been to Disneyland with friends twice. He takes long rides along the beach to relax.

In his mind, the future looks bright.

“I feel like I’m getting better every day,” said Larsen, adding that things like going back to school or working now feel within his grasp. “I feel like I’m more able to achieve the things I want to do since I had the surgery.”

Thanks to Da Brayn for bringing this to the attention of the It’s Interesting community.

http://www.cnn.com/2014/06/24/health/brain-stimulation-ocd/?c=&page=0

New research suggests that a third of patients diagnosed as vegetative may be conscious with a chance for recovery

Imagine being confined to a bed, diagnosed as “vegetative“—the doctors think you’re completely unresponsive and unaware, but they’re wrong. As many as one-third of vegetative patients are misdiagnosed, according to a new study in The Lancet. Using brain imaging techniques, researchers found signs of minimal consciousness in 13 of 41 patients who were considered vegetative. “The consequences are huge,” lead author Dr. Steven Laureys, of the Coma Science Group at the Université de Liège, tells Maclean’s. “These patients have emotions; they may feel pain; studies have shown they have a better outcome [than vegetative patients]. Distinguishing between unconscious, and a little bit conscious, is very important.”

Detecting human consciousness following brain injury remains exceedingly difficult. Vegetative patients are typically diagnosed by a bedside clinical exam, and remain “neglected” in the health care system, Laureys says. Once diagnosed, “they might not be [re-examined] for years. Nobody questions whether or not there could be something more going on.” That’s about to change.

Laureys has collaborated previously with British neuroscientist Adrian Owen, based at Western University in London, Ont., who holds the Canada Excellence Research Chair in Cognitive Neuroscience and Imaging. (Owen’s work was featured in Maclean’s in October 2013.) Together they co-authored a now-famous paper in the journal Science, in 2006, in which a 23-year-old vegetative patient was instructed to either imagine playing tennis, or moving around her house. Using functional magnetic resonance imaging, or fMRI, they saw that the patient was activating two different parts of her brain, just like healthy volunteers did. Laureys and Owen also worked together on a 2010 follow-up study, in the New England Journal of Medicine, where the same technique was used to ask a patient to answer “yes” or “no” to various questions, presenting the stunning possibility that some vegetative patients might be able to communicate.
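
The logic of that yes/no protocol is simple enough to caricature in a few lines. In the toy sketch below (illustrative only, not the study's analysis pipeline), motor imagery such as tennis drives one brain region, spatial imagery drives another, and the answer is read off by comparing the two activations; the parameter names and numbers are assumptions.

```python
def decode_answer(motor_roi, spatial_roi):
    """Patient imagines tennis for 'yes' (motor imagery) or walking through
    the house for 'no' (spatial imagery); compare the fMRI activation of
    the regions each kind of imagery engages."""
    return "yes" if motor_roi > spatial_roi else "no"

# Made-up activation values (e.g., z-scored BOLD responses) for one question.
print(decode_answer(motor_roi=2.3, spatial_roi=0.7))  # -> yes
```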

In the new Lancet paper, Laureys used two functional brain imaging techniques, fMRI and positron emission tomography (PET), to examine 126 patients with severe brain injury: 41 of them vegetative, four locked-in (a rare condition in which patients are fully conscious and aware, yet completely paralyzed from head-to-toe), and another 81 who were minimally conscious. After finding that 13 of the 41 vegetative patients showed brain activity indicating minimal consciousness, they re-examined them a year later. By then, nine of the 13 had improved, and progressed into a minimally conscious state or higher.

The mounting evidence that some vegetative patients are conscious, even minimally so, carries ethical and legal implications. Just last year, Canada’s Supreme Court ruled that doctors couldn’t unilaterally pull the plug on Hassan Rasouli, a man in a vegetative state. This work raises the possibility that one day, some patients may be able to communicate through some kind of brain-machine interface, and maybe even weigh in on their own medical treatment. For now, doctors could make better use of functional brain imaging tests to diagnose these patients, Laureys believes. Kate Bainbridge, who was one of the first vegetative patients examined by Owen, was given a scan that showed her brain lighting up in response to images of her family. Her health later improved. “I can’t say how lucky I was to have the scan,” she said in an email to Maclean’s last year. “[It] really scares me to think what would have happened if I hadn’t had it.”

https://ca.news.yahoo.com/one-third-of-vegetative-patients-may-be-conscious–study-195412300.html

Mild electric current to the brain can improve math skills


In a lab in Oxford University’s experimental psychology department, researcher Roi Cohen Kadosh is testing an intriguing treatment: He is sending low-dose electric current through the brains of adults and children as young as 8 to make them better at math.

A relatively new brain-stimulation technique called transcranial electrical stimulation may help people learn and improve their understanding of math concepts.

The electrodes are placed in a tightly fitted cap and worn around the head. The device, run off a 9-volt battery commonly used in smoke detectors, induces only a gentle current and can be targeted to specific areas of the brain or applied generally. The mild current reduces the risk of side effects, which has opened up possibilities about using it, even in individuals without a disorder, as a general cognitive enhancer. Scientists also are investigating its use to treat mood disorders and other conditions.

Dr. Cohen Kadosh’s pioneering work on learning enhancement and brain stimulation is one example of the long journey faced by scientists studying brain-stimulation and cognitive-stimulation techniques. Like other researchers in the community, he has dealt with public concerns about safety and side effects, plus skepticism from other scientists about whether these findings would hold in the wider population.

There are also ethical questions about the technique. If it truly works to enhance cognitive performance, should it be accessible to anyone who can afford to buy the device—which already is available for sale in the U.S.? Should parents be able to perform such stimulation on their kids without monitoring?

“It’s early days but that hasn’t stopped some companies from selling the device and marketing it as a learning tool,” Dr. Cohen Kadosh says. “Be very careful.”

The idea of using electric current to treat diseases of the brain has a long and fraught history, perhaps most notably with what was called electroshock therapy, developed in 1938 to treat severe mental illness and often portrayed in movies such as “One Flew Over the Cuckoo’s Nest” as a medieval treatment that rendered people zombielike.

Electroconvulsive therapy has improved dramatically over the years and is considered appropriate for use against types of major depression that don’t respond to other treatments, as well as other related, severe mood states.

A number of new brain-stimulation techniques have been developed, including deep brain stimulation, which acts like a pacemaker for the brain. With DBS, electrodes are implanted into the brain and, through a battery pack in the chest, stimulate neurons continuously. DBS devices have been approved by U.S. regulators to treat tremors in Parkinson’s disease and continue to be studied as possible treatments for chronic pain and obsessive-compulsive disorder.

Transcranial electrical stimulation, or tES, is one of the newest brain stimulation techniques. Unlike DBS, it is noninvasive.

If the technique continues to show promise, “this type of method may have a chance to be the new drug of the 21st century,” says Dr. Cohen Kadosh.

The 37-year-old father of two completed graduate school at Ben-Gurion University in Israel before coming to London to do postdoctoral work with Vincent Walsh at University College London. Now, sitting in a small, tidy office with a model brain on a shelf, the senior research fellow at Oxford speaks with cautious enthusiasm about brain stimulation and its potential to help children with math difficulties.

Up to 6% of the population is estimated to have a math-learning disability called developmental dyscalculia, similar to dyslexia but with numerals instead of letters. Many more people say they find math difficult. People with developmental dyscalculia also may have trouble with daily tasks, such as remembering phone numbers and understanding bills.

Whether transcranial electrical stimulation proves to be a useful cognitive enhancer remains to be seen. Dr. Cohen Kadosh first thought about the possibility as a university student in Israel, where he conducted an experiment using transcranial magnetic stimulation, a tool that employs magnetic coils to induce a more powerful electrical current.

He found that he could temporarily turn off regions of the brain known to be important for cognitive skills. When the parietal lobe of the brain was stimulated using that technique, he found that the basic arithmetic skills of doctoral students who were normally very good with numbers were reduced to a level similar to those with developmental dyscalculia.

That led to his next inquiry: If current could turn off regions of the brain making people temporarily math-challenged, could a different type of stimulation improve math performance? Cognitive training helps to some extent in some individuals with math difficulties. Dr. Cohen Kadosh wondered if such learning could be improved if the brain was stimulated at the same time.

But transcranial magnetic stimulation wasn’t the right tool because the current induced was too strong. Dr. Cohen Kadosh puzzled over what type of stimulation would be appropriate until a colleague who had worked with researchers in Germany returned and told him about tES, at the time a new technique. Dr. Cohen Kadosh decided tES was the way to go.

His group has since conducted a series of studies suggesting that tES appears helpful in improving learning speed on various math tasks in adults who don’t have trouble with math. Now they’ve found preliminary evidence for those who struggle with math, too.

Participants typically come for 30-minute stimulation-and-training sessions daily for a week. His team is now starting to study children between 8 and 10 who receive twice-weekly training and stimulation for a month. Studies of tES, including the ones conducted by Dr. Cohen Kadosh, tend to have small sample sizes of up to several dozen participants; replication of the findings by other researchers is important.

In a small, toasty room, participants, often Oxford students, sit in front of a computer screen and complete hundreds of trials in which they learn to associate numerical values with abstract, nonnumerical symbols, figuring out which symbols are “greater” than others, in the way that people learn to know that three is greater than two.

When neurons fire, they transfer information, which could facilitate learning. The tES technique appears to work by lowering the threshold neurons need to reach before they fire, studies have shown. In addition, the stimulation appears to cause changes in neurochemicals involved in learning and memory.
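
A leaky integrate-and-fire toy model makes the threshold point concrete: in the sketch below, a tiny constant bias current stands in for the stimulation, and with identical input the biased neuron reaches threshold sooner and fires more often. All parameters are invented for illustration, not physiological measurements.

```python
# Toy leaky integrate-and-fire neuron: a small constant bias (standing in
# for tES) pushes the membrane potential closer to threshold, so the same
# input produces more spikes.
def spike_count(input_current, bias=0.0, threshold=1.0, leak=0.01, steps=1000):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current + bias - leak * v   # integrate input minus leak
        if v >= threshold:                     # threshold crossing -> spike
            spikes += 1
            v = 0.0                            # reset after each spike
    return spikes

print("without stimulation:", spike_count(0.011))                # 4 spikes
print("with mild stimulation:", spike_count(0.011, bias=0.002))  # 6 spikes
```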

However, the results so far in the field appear to differ significantly by individual. Stimulating the wrong brain region, at too high a current, or for too long has been known to inhibit learning. The young and the elderly, for instance, respond in exactly opposite ways to the same current in the same location, Dr. Cohen Kadosh says.

He and a colleague published a paper in January in the journal Frontiers in Human Neuroscience, in which they found that one individual with developmental dyscalculia improved her performance significantly while the other study subject didn’t.

What is clear is that anyone trying the treatment would need to train as well as to stimulate the brain. Otherwise “it’s like taking steroids but sitting on a couch,” says Dr. Cohen Kadosh.

Dr. Cohen Kadosh and Beatrix Krause, a graduate student in the lab, have been examining individual differences in response. Whether a room is dark or well-lighted, if a person smokes and even where women are in their menstrual cycle can affect the brain’s response to electrical stimulation, studies have found.

Results from his lab and others have shown that even after stimulation stops, those who benefited maintain a higher performance level than those who weren’t stimulated, for up to a year afterward. If there isn’t any follow-up training, everyone’s performance declines over time, but the stimulated group still performs better than the non-stimulated group. It remains to be seen whether reintroducing stimulation would then improve learning again, Dr. Cohen Kadosh says.

http://online.wsj.com/news/articles/SB10001424052702303650204579374951187246122?mod=WSJ_article_EditorsPicks&mg=reno64-wsj&url=http%3A%2F%2Fonline.wsj.com%2Farticle%2FSB10001424052702303650204579374951187246122.html%3Fmod%3DWSJ_article_EditorsPicks

To create a robot with common sense, mimic a toddler


Artificial intelligence researcher Ben Goertzel wants to create robots far more intelligent than humans


Why will your robot, Adam Z1, be a toddler?

We are not trying to make a robot exactly like a 3-year-old. There is no toilet training involved! Our main goal is for him to engage in creative play like a young child. For example, if you ask him to “build me something I haven’t seen before” using foam blocks, he would remember what he’d seen you build and then build something different. A smart 3-year-old can do this but no robot today can.

Where will that lead?
What I want to do is make thinking machines that are far smarter than humans. Step one is to make an AI program that understands the world, and itself, in a basic common-sense manner. I think the best way to get there is to build a robot toddler.

How will you get from toddler-level smarts to super-intelligence?
We have specialised algorithms that can predict the stock market and genetic causes of disease. Once we get an AI with basic common sense, you can hybridise it with existing narrow software. By putting the two together, you are going to get a whole new kind of artificial general intelligence expert – good at solving specialised problems, but in a way that uses contextual understanding.

Many have tried to create human-like AI and failed. What will be different about yours?
Our open source AI project OpenCog has an architecture for general intelligence that incorporates all the different aspects of what the mind does. No one else seems to have that. Most computer scientists focus on one algorithm – for search or for pattern-recognition, perhaps. The human mind is more heterogeneous; it integrates a bunch of different algorithms. We have tried to encompass that complexity in a family of learning and memory algorithms that all work together.

Will you teach the robot or program it?
It will be a mix. The robot will watch people in the lab and experiment and fiddle with things, and we will also have a programming team improving the algorithms all the time. But there won’t be “build stairs” or “build a wall” programs that we write. It will have to learn these things from higher-level goals – like pleasing people, or getting gold stars.
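
As a flavor of what learning from higher-level goals can mean, here is a toy value-learning loop in which the agent is never given a "build stairs" routine, only a reward signal (gold stars), and gradually discovers which play actions earn it. Everything here is invented for illustration; OpenCog's actual architecture integrates far richer learning and memory algorithms.

```python
# Toy goal-driven learning: no hand-coded task programs, just a reward
# signal ("gold stars") and an incremental value estimate per action.
import random

actions = ["stack_block", "line_up", "knock_over", "do_nothing"]
value = {a: 0.0 for a in actions}

def gold_stars(action):
    # Hypothetical teacher that rewards constructive play.
    return 1.0 if action in ("stack_block", "line_up") else 0.0

random.seed(0)
for episode in range(500):
    # Epsilon-greedy: mostly exploit what earns stars, sometimes explore.
    if random.random() < 0.1:
        a = random.choice(actions)
    else:
        a = max(value, key=value.get)
    value[a] += 0.1 * (gold_stars(a) - value[a])   # incremental update

print(max(value, key=value.get))  # converges on a constructive action
```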

Adam Z1’s body will be a highly lifelike Hanson robot – why is that important?
The main thing with the Hanson robot is that the face is highly expressive. In terms of social interactions, it is valuable to have a robot that can convey emotions and desires. He needs to learn from people: the more engaged they are, the better data they will give to power his learning.

You are crowdfunding Adam Z1. So far you have only $5000 of the $300,000 target…
Raising research money via crowdfunding is a very speculative thing. We viewed it as a kind of experiment, not only to gain money but also to learn how people react; what they say, what pushback they give. If we succeed, that would be awesome and will accelerate our progress. Fortunately we already have some funding, so the project is going forward one way or another.

http://www.newscientist.com/article/mg21929260.300-to-create-a-robot-with-common-sense-mimic-a-toddler.html#.Uez9QNK-pH8

Advanced ‘artificial skin’ senses touch, humidity, and temperature


Technion-Israel Institute of Technology scientists have discovered how to make a new kind of flexible sensor that one day could be integrated into “electronic skin” (e-skin) — a covering for prosthetic limbs that would allow patients to feel touch, humidity, and temperature.

Current kinds of e-skin detect only touch, but the Technion team’s invention “can simultaneously sense touch (pressure), humidity, and temperature, as real skin can do,” says research team leader Professor Hossam Haick.

Additionally, the new system “is at least 10 times more sensitive in touch than the currently existing touch-based e-skin systems.”

Researchers have long been interested in flexible sensors but have had trouble adapting them for real-world use, Haick says. A flexible sensor would have to run on low voltage (so it would be compatible with the batteries in today’s portable devices), measure a wide range of pressures, and make more than one measurement at a time, including humidity, temperature, pressure, and the presence of chemicals. Such sensors would also have to be quick, easy, and cheap to manufacture.

The Technion team’s sensor has all of these qualities, Haick says. The secret: monolayer-capped gold nanoparticles that are only 5–8 nanometers in diameter, surrounded by connector molecules called ligands.

“Monolayer-capped nanoparticles can be thought of as flowers, where the center of the flower is the gold or metal nanoparticle and the petals are the monolayer of organic ligands that generally protect it,” says Haick.

The team discovered that when these nanoparticles are laid on top of a substrate — in this case, made of PET (flexible polyethylene terephthalate), the same plastic found in soda bottles — the resulting compound conducted electricity differently depending on how the substrate was bent.

The bending motion brings some particles closer to others, increasing how quickly electrons can pass between them. This electrical property means that the sensor can detect a large range of pressures, from tens of milligrams to tens of grams.
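
Reading out such a sensor could look like the sketch below: an invented calibration table maps a measured resistance back to an applied pressure, interpolating in log-pressure space on the assumption that resistance varies smoothly over the several decades of load the article mentions. The numbers are hypothetical, not Technion data.

```python
# Hypothetical readout for a resistive e-skin sensor: resistance falls as
# pressure rises; invert a calibration curve to recover the pressure.
import math

calib = [(0.01, 9.8e5), (0.1, 7.5e5), (1.0, 5.1e5), (10.0, 2.9e5)]  # (g, ohm)

def pressure_from_resistance(r):
    """Piecewise-linear interpolation in (resistance, log10 pressure)."""
    pts = sorted(calib, key=lambda p: p[1])          # ascending resistance
    for (p_lo, r_lo), (p_hi, r_hi) in zip(pts, pts[1:]):
        if r_lo <= r <= r_hi:
            frac = (r - r_lo) / (r_hi - r_lo)
            logp = math.log10(p_lo) + frac * (math.log10(p_hi) - math.log10(p_lo))
            return 10 ** logp
    raise ValueError("resistance outside calibrated range")

print(f"{pressure_from_resistance(6.0e5):.2f} g")    # ~0.42 g
```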

And by varying how thick the substrate is, as well as what it is made of, scientists can modify how sensitive the sensor is. Because these sensors can be customized, they could in the future perform a variety of other tasks, including monitoring strain on bridges and detecting cracks in engines.

“The sensor is very stable and can be attached to any surface shape while keeping the function stable,” says Dr. Nir Peled, Head of the Thoracic Cancer Research and Detection Center at Israel’s Sheba Medical Center, who was not involved in the research.

Meital Segev-Bar et al., Tunable Touch Sensor and Combined Sensing Platform: Toward Nanoparticle-based Electronic Skin, ACS Applied Materials & Interfaces, 2013, DOI: 10.1021/am400757q

http://www.kurzweilai.net/advanced-artificial-skin-senses-touch-humidity-and-temperature

Thanks to SRW for bringing this to the attention of the It’s Interesting community.

How technology may change the human face over the next 100,000 years


Designer Lamm’s depiction of how the human face might look in 100,000 years

We’ve come a long way, looks-wise, from our early Homo sapiens ancestors. Between 800,000 and 200,000 years ago, for instance, rapid changes in Earth’s climate coincided with a tripling in the size of the human brain and skull, leading to a flattening of the face. But how might the physiological features of human beings change in the future, especially as new, wearable technology like Google Glass changes the way we use our bodies and faces? Artist and researcher Nickolay Lamm has partnered with a computational geneticist to research and illustrate what we might look like 20,000 years in the future, as well as 60,000 years and 100,000 years out. His full, eye-popping illustrations are at the bottom of this post.

Lamm says this is “one possible timeline,” where, thanks to zygotic genome engineering technology, our future selves would have the ability to control human biology and human evolution in much the same way we control electrons today.

Lamm speaks of “wresting control” of the human form from natural evolution and bending human biology to suit our needs. The illustrations were inspired by conversations with Dr. Alan Kwan, who holds a PhD in computational genomics from Washington University.

Kwan based his predictions on what living environments, climate, and technological advancements might look like in the future. One of the big changes will be a larger forehead, Kwan predicts – a feature that has already been expanding since the 14th and 16th centuries. Scientists writing in the British Dental Journal have suggested that skull-measurement comparisons from that period show modern-day people have less prominent facial features but higher foreheads, and Kwan expects the human head to trend larger to accommodate a larger brain.

Kwan says that 60,000 years from now, our ability to control the human genome will also make the effect of evolution on our facial features moot. As genetic engineering becomes the norm, “the fate of the human face will be increasingly determined by human tastes,” he says in a research document. Eyes will meanwhile get larger, as attempts to colonize Earth’s solar system and beyond see people living in the dimmer environments of colonies further away from the Sun than Earth. Similarly, skin will become more pigmented to lessen the damage from harmful UV radiation outside of the Earth’s protective ozone layer. Kwan expects people to have thicker eyelids and a more pronounced superciliary arch (the smooth, frontal bone of the skull under the brow) to deal with the effects of low gravity.

Another 40,000 years on, or 100,000 years from now, Kwan believes the human face will reflect “total mastery over human morphological genetics. This human face will be heavily biased towards features that humans find fundamentally appealing: strong, regal lines, straight nose, intense eyes, and placement of facial features that adhere to the golden ratio and left/right perfect symmetry,” he says.

Eyes will seem “unnervingly large” — at least from our viewpoint today — and will feature eye-shine and even a sideways blink from the re-introduced plica semilunaris to further protect from cosmic ray effects.

There will be other functional necessities: larger nostrils for easier breathing in off-planet environments, denser hair to contain heat loss from a larger head — features which people may have to weigh up against their tastes for what’s genetically trendy at the time. Instead of just debating what to name a child as new parents do today, they might also have to decide if they want their children to carry the most natural expression of a couple’s DNA, such as their eye-color, teeth and other features they can genetically alter.

Excessive Borg-like technological implants would start to become untrendy, though, as people start to increasingly value that which makes us look naturally human. That “will be ever more important to us in an age where we have the ability to determine any feature,” Kwan says.

Wearable technology will still be around, but in far more subtle forms. Instead of Google Glass and iWatch, people will seek discreet implants that preserve the natural human look – think communication lenses (a technologically souped-up version of today’s contacts) and miniature bone-conduction devices implanted above the ear. These might have embedded nanochips that communicate with another separate device to chat with others or for entertainment.

The bird’s eye view of human beings in 100,000 years will be people who want to be wirelessly plugged in, Kwan says, but with minimal disruption to what may then be perceived as the “perfect” human face.

His Predictions:

In 20,000 years: Humans have a larger head with a forehead that is subtly too large. A future “communications lens” manifests as a yellow ring around the eyes. These lenses will be the ‘Google Glass’ of the future.

In 60,000 years: Human beings have even larger heads, larger eyes and pigmented skin. A pronounced superciliary arch makes for a darker area below eyebrows. Miniature bone-conduction devices may be implanted above the ear now to work with communications lenses.

In 100,000 years: The human face is proportioned to the ‘golden ratio,’ though it features unnervingly large eyes. There is green “eye shine” from the tapetum lucidum, and a more pronounced superciliary arch. A sideways blink of the reintroduced plica semilunaris is seen in the light gray areas of the eyes, while miniature bone-conduction devices implanted above the ear work with the communications lenses on the eyes.

Thanks to Ray Gaudette for bringing this to the attention of the It’s Interesting community.

http://news.yahoo.com/human-face-might-look-100-171207969.html