Elon Musk says we’re all cyborgs almost certainly living within a computer simulation

Elon Musk has said that there is only a “one in billions” chance that we’re not living in a computer simulation.

Our lives are almost certainly being conducted within an artificial world powered by AI and high-powered computers, like in The Matrix, the Tesla and SpaceX CEO suggested at a tech conference in California.

Mr Musk, who has donated huge amounts of money to research into the dangers of artificial intelligence, said that he hopes his prediction is true because otherwise it means the world will end.

“The strongest argument for us probably being in a simulation I think is the following,” he told the Code Conference. “40 years ago we had Pong – two rectangles and a dot. That’s where we were.

“Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality.

“If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable.”

He said that even if the speed of those advancements dropped by a factor of 1,000, we would still be moving forward at an intense speed relative to the age of life.

Since that would lead to games that would be indistinguishable from reality that could be played anywhere, “it would seem to follow that the odds that we’re in ‘base reality’ is one in billions”, Mr Musk said.

Asked whether he was saying that the answer to the question “are we in a simulated computer game?” is yes, he said the answer is “probably”.

He said that arguably we should hope that it’s true that we live in a simulation. “Otherwise, if civilisation stops advancing, then that may be due to some calamitous event that stops civilisation.”

He said that either we will make simulations that we can’t tell apart from the real world, “or civilisation will cease to exist”.

Mr Musk said that he has had “so many simulation discussions it’s crazy”, and that it got to the point where “every conversation [he had] was the AI/simulation conversation”.

The question of whether what we see is real or simulated has perplexed humans since at least the ancient philosophers. But it has been given a new and different edge in recent years with the development of powerful computers and artificial intelligence, which some have argued shows how easily such a simulation could be created.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-artificial-intelligence-computer-simulation-gaming-virtual-reality-a7060941.html

Robot outperforms highly skilled human surgeons in pig GI surgery

A robot surgeon has been taught to perform a delicate procedure—stitching soft tissue together with a needle and thread—more precisely and reliably than even the best human doctor.

The Smart Tissue Autonomous Robot (STAR), developed by researchers at Children’s National Health System in Washington, D.C., uses an advanced 3-D imaging system and very precise force sensing to apply stitches with submillimeter precision. The system was designed to copy state-of-the-art surgical practice, but in tests involving living pigs, it proved capable of outperforming its teachers.

Currently, most surgical robots are controlled remotely, and until now no automated surgical system had been used to manipulate soft tissue. So the work, described today in the journal Science Translational Medicine, shows the potential for automated surgical tools to improve patient outcomes. More than 45 million soft-tissue surgeries are performed in the U.S. each year. Examples include hernia operations and repairs of torn muscles.

“Imagine that you need a surgery, or your loved one needs a surgery,” says Peter Kim, a pediatric surgeon at Children’s National, who led the work. “Wouldn’t it be critical to have the best surgeon and the best surgical techniques available?”

Kim does not see the technology replacing human surgeons. He explains that a surgeon still oversees the robot’s work and will take over in an emergency, such as unexpected bleeding.

“Even though we take pride in our craft of doing surgical procedures, to have a machine or tool that works with us in ensuring better outcome safety and reducing complications—[there] would be a tremendous benefit,” Kim says. The new system is an impressive example of a robot performing delicate manipulation. If robots can master human-level dexterity, they could conceivably take on many more tasks and jobs.

STAR consists of an industrial robot equipped with several custom-made components. The researchers developed a force-sensitive device for suturing and, most important, a near-infrared camera capable of imaging soft tissue in detail when fluorescent markers are injected.

“It’s an important result,” says Ken Goldberg, a professor at UC Berkeley who is also developing robotic surgical systems. “The innovation in 3-D sensing is particularly interesting.”

Goldberg’s team is developing surgical robots that could be more flexible than STAR because, instead of being manually programmed, they can learn automatically by observing expert surgeons. “Copying the skill of experts is really the next step here,” he says.

https://www.technologyreview.com/s/601378/nimble-fingered-robot-outperforms-the-best-human-surgeons/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Massive sculpture relocated because people kept walking into it while texting


The statue by Sophie Ryder had to be moved because people on their phones were bumping into it.

By Sophie Jamieson

A massive 20ft statue of two clasped hands had to be relocated after people texting on their mobile phones kept walking into it.

The sculpture, called ‘The Kiss’, was only put in place last weekend, but within days those in charge of the exhibition noticed walkers on the path were bumping their heads as they walked through the archway underneath.

Artist Sophie Ryder, who designed the sculpture, posted a video of it being moved by a crane on her Facebook page.

The artwork was positioned on a path leading up to Salisbury Cathedral in Wiltshire.

Made from galvanised steel wire, The Kiss had a 6ft 4in gap underneath the two hands that pedestrians could walk through.

But Ms Ryder said people glued to their phones had not seen it coming.

She said on social media: “We had to move ‘the kiss’ because people were walking through texting and said they bumped their heads! Oh well!!”

Her fans voiced their surprise that people could fail to notice the “ginormous” sculpture.

Cindy Billingsley commented: “Oh good grief- they should be looking at the beautiful art instead of texting- so they deserve what they get if they are not watching where they are going.”

Patricia Cunningham said: “If [sic] may have knocked some sense into their heads! We can but hope.”

Another fan, Lisa Wallis-Adams, wrote: “We saw your art in Salisbury at the weekend. We absolutely loved your rabbits and didn’t walk into any of them! Sorry some people are complete numpties.”

Sculptor Sophie Ryder studied at the Royal Academy of Arts and is known for creations of giant mythical figures, like minotaurs.

The sculpture is part of an exhibition that also features Ryder’s large “lady hares” and minotaurs, positioned on the lawn outside the cathedral. The exhibition runs until 3 July.

http://www.telegraph.co.uk/news/uknews/12164922/Massive-sculpture-relocated-because-people-busy-texting-kept-walking-into-it.html

Graphene successfully interfaced with neurons in the brain

Scientists have long been on a quest to find a way to implant electrodes that interface with neurons into the human brain. If successful, the idea could have huge implications for the treatment of Parkinson’s disease and other neurological disorders. Last month, a team of researchers from Italy and the UK made a huge step forward by showing that the world’s favorite wonder-material, graphene, can successfully interface with neurons.

Previous efforts by other groups using treated graphene had created an interface with a very low signal-to-noise ratio. But an interdisciplinary collaboration between the University of Trieste and the Cambridge Graphene Centre has developed a significantly improved electrode by working with untreated graphene.

“For the first time we interfaced graphene to neurons directly,” said Professor Laura Ballerini of the University of Trieste in Italy. “We then tested the ability of neurons to generate electrical signals known to represent brain activities, and found that the neurons retained their neuronal signaling properties unaltered. This is the first functional study of neuronal synaptic activity using uncoated graphene based materials.”

Prior to experimenting with graphene-based substrates (GBS), scientists implanted microelectrodes based on tungsten and silicon. Proof-of-concept experiments were successful, but these materials seemed to suffer from the same fatal flaws. The body reacts to the insertion trauma by forming scar tissue, which inhibits clear electrical signals. The structures were also prone to disconnecting, because the stiffness of the materials made them unsuitable for a semi-fluid organic environment.

Pure graphene is promising because it is flexible, non-toxic, and does not impair other cellular activity.

The team’s experiments on rat brain cell cultures showed that the untreated graphene electrodes interfaced well with neurons, transmitting electrical impulses normally with none of the adverse reactions seen previously.

The biocompatibility of graphene could allow it to be used to make graphene microelectrodes that could help measure, harness and control an impaired brain’s functions. It could be used to restore lost sensory functions to treat paralysis, control prosthetic devices such as robotic limbs for amputees, and even control or diminish the impact of the out-of-control electrical impulses that cause motor disorders such as Parkinson’s and epilepsy.

“We are currently involved in frontline research in graphene technology towards biomedical applications,” said Professor Maurizio Prato from the University of Trieste. “In this scenario, the development and translation in neurology of graphene-based high-performance bio-devices requires the exploration of the interactions between graphene nano and micro-sheets with the sophisticated signaling machinery of nerve cells. Our work is only a first step in that direction.”

The results of this research were recently published in the journal ACS Nano. The research was funded by the Graphene Flagship, a European initiative that aims to connect theoretical and practical fields and reduce the time that graphene products spend in laboratories before being brought to market.

http://www.cam.ac.uk/research/news/graphene-shown-to-safely-interact-with-neurons-in-the-brain

DARPA program aims to develop an implantable neural interface capable of connecting with one million neurons

A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.

Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.

Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.

To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.

To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting that runs Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-16/listing.html. More details about the Industry Group that will support NESD are available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-17/listing.html. A Broad Agency Announcement describing the specific capabilities sought will be forthcoming on http://www.fbo.gov.

NESD is part of a broader portfolio of programs within DARPA that support President Obama’s brain initiative. For more information about DARPA’s work in that domain, please visit: http://www.darpa.mil/program/our-research/darpa-and-the-brain-initiative.

http://www.darpa.mil/news-events/2015-01-19

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Astronomer Sir Martin Rees believes aliens have transitioned from organic forms to machines – and that humans will do the same.

British astrophysicist and cosmologist Sir Martin Rees believes that if we manage to detect aliens, it will not be by stumbling across organic life, but by picking up a signal made by machines.

It’s likely these machines will have evolved from organic alien beings, and that humans will also make the transition from biological to mechanical in the future.

Sir Martin said that while the way we think has led to all culture and science on Earth, it will be a brief precursor to more powerful machine ‘brains’.

He thinks that life away from Earth has probably already gone through this transition from organic to machine.

On a planet orbiting a star far older than the sun, life ‘may have evolved much of the way toward a dominant machine intelligence,’ he writes.

Sir Martin believes it could be one or two more centuries before humans are overtaken by machine intelligence, which will then evolve over billions of years, either with us, or replacing us.

‘This suggests that if we were to detect ET, it would be far more likely to be inorganic: We would be most unlikely to “catch” alien intelligence in the brief sliver of time when it was still in organic form,’ he writes.

Despite this, the astronomer said Seti searches are worthwhile, because the stakes are so high.

Seti seeks out electromagnetic transmissions thought to be made artificially, but even if it did hit the jackpot and detect a possible message sent by aliens, Sir Martin says it is unlikely we would be able to decode it.

He thinks such a signal would probably be a byproduct or malfunction of a complex machine far beyond our understanding that could trace its lineage back to organic alien beings, which may still exist on a planet, or have died out.

He also points out that even if intelligence is widespread across the cosmos, we may only ever recognise a fraction of it because ‘brains’ may take a form unrecognisable to humans.

For example, instead of being an alien civilisation, ET may be a single integrated intelligence.

He mused that the galaxy may already teem with advanced life and that our descendants could ‘plug in’ to a galactic community.

Read more: http://www.dailymail.co.uk/sciencetech/article-3285966/Is-ET-ROBOT-Astronomer-Royal-believes-aliens-transitioned-organic-forms-machines-humans-same.html#ixzz3pOiCcJY8

Scientists encode memories in a way that bypasses damaged brain tissue

Researchers at the University of Southern California (USC) and Wake Forest Baptist Medical Center have developed a brain prosthesis that is designed to help individuals suffering from memory loss.

The prosthesis, which includes a small array of electrodes implanted into the brain, has performed well in laboratory testing in animals and is currently being evaluated in human patients.

Designed originally at USC and tested at Wake Forest Baptist, the device builds on decades of research by Ted Berger and relies on a new algorithm created by Dong Song, both of the USC Viterbi School of Engineering. The development also builds on more than a decade of collaboration with Sam Deadwyler and Robert Hampson of the Department of Physiology & Pharmacology of Wake Forest Baptist who have collected the neural data used to construct the models and algorithms.

When your brain receives sensory input, it creates a memory in the form of a complex electrical signal that travels through multiple regions of the hippocampus, the memory center of the brain. At each region, the signal is re-encoded until it reaches the final region as a wholly different signal that is sent off for long-term storage.

If there’s damage at any region that prevents this translation, then there is the possibility that long-term memory will not be formed. That’s why an individual with hippocampal damage (for example, due to Alzheimer’s disease) can recall events from long ago – things that were already translated into long-term memories before the brain damage occurred – but has difficulty forming new long-term memories.

Song and Berger found a way to accurately mimic how a memory is translated from short-term memory into long-term memory, using data obtained by Deadwyler and Hampson, first from animals, and then from humans. Their prosthesis is designed to bypass a damaged hippocampal section and provide the next region with the correctly translated memory.

That’s despite the fact that there is currently no way of “reading” a memory just by looking at its electrical signal.

“It’s like being able to translate from Spanish to French without being able to understand either language,” Berger said.

Their research was presented at the 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society in Milan on August 27, 2015.

The effectiveness of the model was tested by the USC and Wake Forest Baptist teams. With the permission of patients who had electrodes implanted in their hippocampi to treat chronic seizures, Hampson and Deadwyler read the electrical signals created during memory formation at two regions of the hippocampus, then sent that information to Song and Berger to construct the model. The team then fed those signals into the model and read how the signals generated from the first region of the hippocampus were translated into signals generated by the second region of the hippocampus.

In hundreds of trials conducted with nine patients, the algorithm predicted how the signals would be translated with about 90 percent accuracy.

“Being able to predict neural signals with the USC model suggests that it can be used to design a device to support or replace the function of a damaged part of the brain,” Hampson said.

Next, the team will attempt to send the translated signal back into the brain of a patient with damage at one of the regions in order to try to bypass the damage and enable the formation of an accurate long-term memory.

http://medicalxpress.com/news/2015-09-scientists-bypass-brain-re-encoding-memories.html#nRlv

Paralyzed man walks again, using only his mind.


Paraplegic Adam Fritz works out with Kristen Johnson, a spinal cord injury recovery specialist, at the Project Walk facility in Claremont, California on September 24. A brain-to-computer technology that can translate thoughts into leg movements has enabled Fritz, paralyzed from the waist down by a spinal cord injury, to become the first such patient to walk without the use of robotics.

It’s a technology that sounds lifted from the latest Marvel movie—a brain-computer interface functional electrical stimulation (BCI-FES) system that enables paralyzed users to walk again. But thanks to neurologists, biomedical engineers and other scientists at the University of California, Irvine, it’s very much a reality, though admittedly with only one successful test subject so far.

The team, led by Zoran Nenadic and An H. Do, built a device that translates brain waves into electrical signals that can bypass the damaged region of a paraplegic’s spine and go directly to the muscles, stimulating them to move. To test it, they recruited 28-year-old Adam Fritz, who had lost the use of his legs five years earlier in a motorcycle accident.

Fritz first had to learn how exactly he’d been telling his legs to move for all those years before his accident. The research team fitted him with an electroencephalogram (EEG) cap that read his brain waves as he visualized moving an avatar in a virtual reality environment. After hours training on the video game, he eventually figured out how to signal “walk.”

The next step was to transfer that newfound skill to his legs. The scientists wired up the EEG device so that it would send electrical signals to the muscles in Fritz’s leg. And then, along with physical therapy to strengthen his legs, he would practice walking—his legs suspended a few inches off the ground—using only his brain (and, of course, the device). On his 20th visit, Fritz was finally able to walk using a harness that supported his body weight and prevented him from falling. After a little more practice, he walked using just the BCI-FES system. After 30 trials run over a period of 19 weeks, he could successfully walk through a 12-foot-long course.
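The walk/stop decoding at the heart of this setup can be sketched in a few lines. This is a minimal illustration only: the threshold, the feature values, and the function names are invented, not the UC Irvine team's actual code.

```python
WALK_THRESHOLD = 0.6  # hypothetical cutoff on a normalised EEG feature


def decode_command(eeg_feature: float) -> str:
    """Map a normalised EEG feature (e.g. band power) to a binary command."""
    return "walk" if eeg_feature >= WALK_THRESHOLD else "stop"


def stimulation_on(command: str) -> bool:
    """Gate the functional electrical stimulation by the decoded command."""
    return command == "walk"


# Toy stream of feature values while the user imagines walking, then rests.
features = [0.2, 0.4, 0.7, 0.9, 0.8, 0.3]
commands = [decode_command(f) for f in features]
print(commands)
```

The point of the sketch is that the decoder's output is binary, which is exactly the limitation Moritz raises below: a thresholded signal carries only a start/stop command, not fine-grained control.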

As encouraging as the trial sounds, there are experts who suggest the design has limitations. “It appears that the brain EEG signal only contributed a walk or stop command,” says Dr. Chet Moritz, an associate professor of rehab medicine, physiology and biophysics at the University of Washington. “This binary signal could easily be provided by the user using a sip-puff straw, eye-blink device or many other more reliable means of communicating a simple ‘switch.’”

Moritz believes it’s unlikely that an EEG alone would be reliable enough to extract any more specific input from the brain while the test subject is walking. In other words, it might not be able to do much more beyond beginning and ending a simple motion like moving your legs forward—not so helpful in stepping over curbs or turning a corner in a hallway.

The UC Irvine team hopes to improve the capability of its technology. A simplified version of the system has the potential to work as a means of noninvasive rehabilitation for a wide range of paralytic conditions, from less severe spinal cord injuries to stroke and multiple sclerosis.

“Once we’ve confirmed the usability of this noninvasive system, we can look into invasive means, such as brain implants,” said Nenadic in a statement announcing the project’s success. “We hope that an implant could achieve an even greater level of prosthesis control because brain waves are recorded with higher quality. In addition, such an implant could deliver sensation back to the brain, enabling the user to feel their legs.”

http://www.newsweek.com/paralyzed-man-walks-again-using-only-his-mind-379531

Scientists achieve implantation of memory into the brains of mice while they sleep

Sleeping minds: prepare to be hacked. For the first time, conscious memories have been implanted into the minds of mice while they sleep. The same technique could one day be used to alter memories in people who have undergone traumatic events.

When we sleep, our brain replays the day’s activities. The pattern of brain activity exhibited by mice when they explore a new area during the day, for example, will reappear, speeded up, while the animal sleeps. This is thought to be the brain practising an activity – an essential part of learning. People who miss out on sleep do not learn as well as those who get a good night’s rest, and when the replay process is disrupted in mice, so too is their ability to remember what they learned the previous day.

Karim Benchenane and his colleagues at the Industrial Physics and Chemistry Higher Educational Institution in Paris, France, hijacked this process to create new memories in sleeping mice. The team targeted the rodents’ place cells – neurons that fire in response to being in or thinking about a specific place. These cells are thought to help us form internal maps, and their discoverers won a Nobel prize last year.

Benchenane’s team used electrodes to monitor the activity of mice’s place cells as the animals explored an enclosed arena, and in each mouse they identified a cell that fired only in a certain arena location. Later, when the mice were sleeping, the researchers monitored the animals’ brain activity as they replayed the day’s experiences. A computer recognised when the specific place cell fired; each time it did, a separate electrode would stimulate brain areas associated with reward.
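The closed loop described above (a computer watches for the chosen place cell to fire during sleep, then triggers reward stimulation) can be sketched as a simple event loop. Everything here is a stand-in for illustration: the cell IDs, the random spike stream, and the stimulation function are invented, not the researchers' recording software.

```python
import random

TARGET_CELL = 7  # the place cell tied to one arena location

reward_pulses = 0


def stimulate_reward_pathway():
    """Stand-in for the electrode pulse to reward-associated brain areas."""
    global reward_pulses
    reward_pulses += 1


# Fake stream of spiking cell IDs "recorded" while the animal sleeps.
random.seed(0)
spike_stream = [random.randrange(20) for _ in range(1000)]

# The closed loop: whenever the target place cell fires, deliver reward.
for cell_id in spike_stream:
    if cell_id == TARGET_CELL:
        stimulate_reward_pathway()

print(f"reward pulses delivered: {reward_pulses}")
```

The real system adds the hard parts this sketch omits: spike sorting to identify which cell fired, and millisecond-scale latency so the stimulation coincides with the replayed memory.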

When the mice awoke, they made a beeline for the location represented by the place cell that had been linked to a rewarding feeling in their sleep. A brand new memory – linking a place with reward – had been formed.

It is the first time a conscious memory has been created in animals during sleep. In recent years, researchers have been able to form subconscious associations in sleeping minds – smokers keen to quit can learn to associate cigarettes with the smells of rotten eggs and fish in their sleep, for example.

Previous work suggested that if this kind of subconscious learning had occurred in Benchenane’s mice, they would have explored the arena in a random manner, perhaps stopping at the reward-associated location. But these mice headed straight for the location, suggesting a conscious memory. “The mouse develops a goal-directed behaviour to go towards the place,” says Benchenane. “It proves that it’s not an automatic behaviour. What we create is an association between a particular place and a reward that can be consciously accessed by the mouse.”

“The mouse is remembering enough abstract information to think ‘I want to go to a certain place’, and go there when it wakes up,” says neuroscientist Neil Burgess at University College London. “It’s a bigger breakthrough [than previous studies] because it really does show what the man in the street would call a memory – the ability to bring to mind abstract knowledge which can guide behaviour in a directed way.”

Benchenane doesn’t think the technique can be used to implant many other types of memories, such as skills – at least for the time being. Spatial memories are easier to modify because they are among the best understood.

His team’s findings also provide some of the strongest evidence for the way in which place cells work. It is almost impossible to test whether place cells function as an internal map while animals are awake, says Benchenane, because these animals also use external cues, such as landmarks, to navigate. By specifically targeting place cells while the mouse is asleep, the team were able to directly test theories that specific cells represent specific places.

“Even when those place cells fire in sleep, they still convey spatial information,” says Benchenane. “That provides evidence that when you’ve got activation of place cells during the consolidation of memories in sleep, you’ve got consolidation of the spatial information.”

Benchenane hopes that his technique could be developed to help alter people’s memories, perhaps of traumatic events.

Loren Frank at the University of California, San Francisco, agrees. “I think this is a really important step towards helping people with memory impairments or depression,” he says. “It is surprising to me how many neurological and psychiatric illnesses have something to do with memory, including schizophrenia and obsessive compulsive disorder.”

“In principle, you could selectively change brain processing during sleep to soften memories or change their emotional content,” he adds.

Journal reference: Nature Neuroscience, doi:10.1038/nn.3970

http://www.newscientist.com/article/dn27115-new-memories-implanted-in-mice-while-they-sleep.html#.VP_L9uOVquD

Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.

Ray Kurzweil’s Mind-Boggling Predictions for the Next 25 Years

Bill Gates calls Ray “the best person I know at predicting the future of artificial intelligence.” Ray is also amazing at predicting a lot more beyond just AI.

This post looks at his incredible predictions for the next 25+ years.

So who is Ray Kurzweil?

He has received 20 honorary doctorates, has been awarded honors from three U.S. presidents, and has authored 7 books (5 of which have been national bestsellers).

He is the principal inventor of many technologies ranging from the first CCD flatbed scanner to the first print-to-speech reading machine for the blind. He is also the chancellor and co-founder of Singularity University, and the guy tagged by Larry Page to direct artificial intelligence development at Google.

In short, Ray’s pretty smart… and his predictions are amazing, mind-boggling, and important reminders that we are living in the most exciting time in human history.

But, first let’s look back at some of the predictions Ray got right.

Predictions Ray has gotten right over the last 25 years

In 1990 (twenty-five years ago), he predicted…

…that a computer would defeat a world chess champion by 1998. Then in 1997, IBM’s Deep Blue defeated Garry Kasparov.

… that PCs would be capable of answering queries by accessing information wirelessly via the Internet by 2010. He was right, to say the least.

… that by the early 2000s, exoskeletal limbs would let the disabled walk. Companies like Ekso Bionics and others now have technology that does just this, and much more.

In 1999, he predicted…

… that people would be able to talk to their computer to give commands by 2009. While still in the early days in 2009, natural language interfaces like Apple’s Siri and Google Now have come a long way. I rarely use my keyboard anymore; instead I dictate texts and emails.

… that computer displays would be built into eyeglasses for augmented reality by 2009. Labs and teams were building head-mounted displays well before 2009, but Google started experimenting with Google Glass prototypes in 2011. Now, we are seeing an explosion of augmented and virtual reality solutions and HMDs. Microsoft just released the HoloLens, and Magic Leap is working on some amazing technology, to name two.

In 2005, he predicted…

… that by the 2010s, virtual solutions would be able to do real-time language translation in which words spoken in a foreign language would be translated into text that would appear as subtitles to a user wearing the glasses. Well, Microsoft (via Skype Translate), Google (Translate), and others have done this and beyond. One app called Word Lens actually uses your camera to find and translate text imagery in real time.

Ray’s predictions for the next 25 years

The above represent only a few of the predictions Ray has made.

While he hasn’t been precisely right, to the exact year, his track record is stunningly good.

Here are some of Ray’s predictions for the next 25+ years.

By the late 2010s, glasses will beam images directly onto the retina. Ten terabytes of computing power (roughly the same as the human brain) will cost about $1,000.

By the 2020s, most diseases will go away as nanobots become smarter than current medical technology. Normal human eating can be replaced by nanosystems. The Turing test begins to be passable. Self-driving cars begin to take over the roads, and people won’t be allowed to drive on highways.

By the 2030s, virtual reality will begin to feel 100% real. We will be able to upload our mind/consciousness by the end of the decade.

By the 2040s, non-biological intelligence will be a billion times more capable than biological intelligence (a.k.a. us). Nanotech foglets will be able to make food out of thin air and create any object in the physical world at a whim.

By 2045, we will multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud.

Ray’s predictions are a byproduct of his understanding of the power of Moore’s Law, more specifically Ray’s “Law of Accelerating Returns” and of exponential technologies.

These technologies follow an exponential growth curve based on the principle that the computing power that enables them doubles every two years.
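The arithmetic behind that framing is easy to check. A minimal sketch (the function name is my own, not Kurzweil's): a fixed two-year doubling period compounds to roughly a 5,800-fold increase over 25 years.

```python
# Compound growth under a fixed doubling period, as in the
# exponential-growth framing described above.
def growth_factor(years, doubling_period_years=2.0):
    """Return how many times capacity multiplies over `years`."""
    return 2.0 ** (years / doubling_period_years)


# Ten years of doubling every two years is 2**5 = 32x;
# twenty-five years is 2**12.5, just under 5,800x.
for years in (10, 25):
    print(f"{years} years -> ~{growth_factor(years):,.0f}x")
```

This compounding is why near-term predictions from the list above look modest while the 2040s ones look fantastical: the same doubling rule produces both.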


Thanks to Kebmodee for bringing this to the attention of the It’s Interesting community.