French Tattoo Artist Gets World’s 1st Prosthetic Arm That Doubles as a Tattoo Machine

A French tattoo artist who lost his right arm 22 years ago recently received what has been called the world’s first tattooing prosthetic arm.

JC Sheitan Tenet, 32, told ABC News today he received and demonstrated the first prototype of the tattoo machine prosthesis earlier this month during a convention in Devezieux, France.

Though Tenet has been tattooing with his left arm and hand for years, he’s now learning how to tattoo with his right arm using the “Edward Scissorhands”-esque tool, he said.

The tattoo machine arm was created by visual artist and engineer Jean-Louis Gonzalez, who goes by “Gonzal.”

Gonzal told ABC News today that Tenet can control the prosthetic arm with his shoulder. Gonzal is still working on perfecting the prosthesis and said he hopes the next prototype will give Tenet more wrist mobility.

Tenet said that he uses the prosthesis to do a little filling but that he doesn’t rely on it to do elaborate artwork. He added that the needle is disposable and that the prosthesis can be cleaned like a regular tattoo machine.

And though the prosthesis has an oxidized metal look, it’s not rusted or unsanitary at all, Tenet said. It was painted in “steampunk style,” he explained. Steampunk is a science fiction genre and design style that typically features technology and aesthetics inspired by 19th century steam-powered machinery.

Artificial intelligence replaces physicists

Physicists are putting themselves out of a job, using artificial intelligence to run a complex experiment.

The experiment, developed by physicists from The Australian National University (ANU) and UNSW ADFA, created an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate, replicating the experiment that won the 2001 Nobel Prize.

“I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour,” said co-lead researcher Paul Wigley from the ANU Research School of Physics and Engineering.

“A simple computer program would have taken longer than the age of the Universe to run through all the combinations and work this out.”

Bose-Einstein condensates are some of the coldest places in the Universe, far colder than outer space, typically less than a billionth of a degree above absolute zero.

They could be used for mineral exploration or navigation systems because they are extremely sensitive to external disturbances, which allows them to make very precise measurements of tiny changes in the Earth’s magnetic field or gravity.

The artificial intelligence system’s ability to set itself up quickly every morning and compensate for any overnight fluctuations would make this fragile technology much more useful for field measurements, said co-lead researcher Dr Michael Hush from UNSW ADFA.

“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” he said.

“It’s cheaper than taking a physicist everywhere with you.”

The team cooled the gas to around 1 microkelvin, then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to nanokelvin temperatures.

Researchers were surprised by the methods the system came up with to ramp down the power of the lasers.

“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said Mr Wigley.

“It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”
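
The loop the researchers describe is closed-loop machine-learning optimisation: the algorithm proposes laser settings, the experiment runs them and reports how good the resulting condensate is, and the optimiser uses that feedback to refine its next proposal. Below is a minimal sketch of that feedback structure in Python, using a simple random-search optimiser and a placeholder run_experiment function standing in for the apparatus; the actual work used a far more sophisticated learner, so treat this as an illustration only.

```python
import random

def run_experiment(ramp):
    """Stand-in for the real apparatus: run one cooling cycle with the
    given laser-power ramp and return a quality score for the condensate.
    Faked here with a smooth function so the sketch runs end to end."""
    return -sum((p - 0.3 * i) ** 2 for i, p in enumerate(ramp))

def optimise(n_params=3, n_trials=60):
    """Closed-loop optimisation: propose a ramp, measure, keep the best."""
    best = [random.random() for _ in range(n_params)]
    best_score = run_experiment(best)
    for _ in range(n_trials):
        # Perturb the best ramp found so far (one setting per laser beam).
        candidate = [max(0.0, p + random.gauss(0, 0.1)) for p in best]
        score = run_experiment(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

ramp, score = optimise()
print("best ramp:", [round(p, 2) for p in ramp], "score:", round(score, 4))
```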

The new technique will lead to bigger and better experiments, said Dr Hush.

“Next we plan to employ the artificial intelligence to build an even larger Bose-Einstein condensate faster than we’ve ever seen before,” he said.

The research is published in the Nature group journal Scientific Reports.

https://www.sciencedaily.com/releases/2016/05/160516091544.htm

Elon Musk says we’re all cyborgs almost certainly living within a computer simulation

Elon Musk has said that there is only a “one in billions” chance that we’re not living in a computer simulation.

Our lives are almost certainly being conducted within an artificial world powered by AI and high-powered computers, like in The Matrix, the Tesla and SpaceX CEO suggested at a tech conference in California.

Mr Musk, who has donated huge amounts of money to research into the dangers of artificial intelligence, said that he hopes his prediction is true because otherwise it means the world will end.

“The strongest argument for us probably being in a simulation I think is the following,” he told the Code Conference. “40 years ago we had Pong – two rectangles and a dot. That’s where we were.

“Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality.

“If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable.”

He said that even if the speed of those advancements dropped by a factor of 1,000, we would still be moving forward at an intense speed relative to the age of life.

Since that would lead to games that would be indistinguishable from reality that could be played anywhere, “it would seem to follow that the odds that we’re in ‘base reality’ is one in billions”, Mr Musk said.
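
The “one in billions” figure rests on a counting argument: if advanced civilisations run vast numbers of indistinguishable simulations, simulated minds far outnumber minds in base reality, so the chance that any given observer is in base reality is roughly one over the number of simulated worlds. A back-of-the-envelope sketch, where the number of simulations is purely an illustrative assumption:

```python
# Counting argument behind the "one in billions" claim.
# The number of simulated worlds is an illustrative assumption,
# not a figure from the source.
simulated_worlds = 10**10   # assume ten billion indistinguishable simulations
base_worlds = 1             # one base reality

p_base = base_worlds / (base_worlds + simulated_worlds)
print(f"P(we are in base reality) = {p_base:.1e}")   # ~1e-10
```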

Asked whether he meant that the answer to the question of whether we are living in a simulated computer game is “yes”, he said the answer is “probably”.

He said that arguably we should hope that it’s true that we live in a simulation. “Otherwise, if civilisation stops advancing, then that may be due to some calamitous event that stops civilisation.”

He said that either we will make simulations that we can’t tell apart from the real world, “or civilisation will cease to exist”.

Mr Musk said that he has had “so many simulation discussions it’s crazy”, and that it got to the point where “every conversation [he had] was the AI/simulation conversation”.

The question of whether what we see is real or simulated has perplexed humans since at least the ancient philosophers. But it has been given a new and different edge in recent years by the development of powerful computers and artificial intelligence, which some have argued shows how easily such a simulation could be created.

http://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-artificial-intelligence-computer-simulation-gaming-virtual-reality-a7060941.html

New Real-Time In-Ear Translator by Waverly Labs To Be Released Soon

The language barrier may become much less of a problem around the world: an in-ear device can translate a foreign language into the wearer’s native language, and it works in real time.

A company called Waverly Labs has developed a device called “The Pilot” that performs real-time translation while sitting in the wearer’s ear.

A smartphone app also lets the user choose among languages, currently Spanish, French, Italian and English. Additional languages, including East Asian, Hindi, Semitic, Arabic, Slavic and African languages, will be available soon after. The device works only with an always-on data connection on the wearer’s smartphone.

The earpieces can be shared by two people speaking different languages: each earpiece translates the other person’s speech, so the wearers can understand each other.
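
Waverly Labs has not published how the Pilot works internally, but the description implies the standard three-stage pipeline running over the phone’s data connection: speech recognition, machine translation, then speech synthesis into the listener’s ear. The sketch below is hypothetical; every function is a placeholder, not Waverly’s actual API.

```python
def recognise_speech(audio, language):
    """Placeholder for cloud speech-to-text in the speaker's language."""
    return "bonjour, comment allez-vous ?"        # pretend transcript

def translate(text, source, target):
    """Placeholder for cloud machine translation."""
    return "hello, how are you?"                  # pretend translation

def synthesise_speech(text, language):
    """Placeholder for text-to-speech; would return audio for the earpiece."""
    return f"<{language} audio: {text}>"

def pilot_pipeline(audio, speaker_lang, listener_lang):
    text = recognise_speech(audio, speaker_lang)               # 1. transcribe
    translated = translate(text, speaker_lang, listener_lang)  # 2. translate
    return synthesise_speech(translated, listener_lang)        # 3. play back

print(pilot_pipeline(b"<mic audio>", "fr", "en"))
```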

The device will cost $129.

Robot outperforms highly skilled human surgeons in pig GI surgery

A robot surgeon has been taught to perform a delicate procedure—stitching soft tissue together with a needle and thread—more precisely and reliably than even the best human doctor.

The Smart Tissue Autonomous Robot (STAR), developed by researchers at Children’s National Health System in Washington, D.C., uses an advanced 3-D imaging system and very precise force sensing to apply stitches with submillimeter precision. The system was designed to copy state-of-the-art surgical practice, but in tests involving living pigs, it proved capable of outperforming its teachers.

Currently, most surgical robots are controlled remotely, and no automated surgical system has been used to manipulate soft tissue. So the work, described today in the journal Science Translational Medicine, shows the potential for automated surgical tools to improve patient outcomes. More than 45 million soft-tissue surgeries are performed in the U.S. each year. Examples include hernia operations and repairs of torn muscles.

“Imagine that you need a surgery, or your loved one needs a surgery,” says Peter Kim, a pediatric surgeon at Children’s National, who led the work. “Wouldn’t it be critical to have the best surgeon and the best surgical techniques available?”

Kim does not see the technology replacing human surgeons. He explains that a surgeon still oversees the robot’s work and will take over in an emergency, such as unexpected bleeding.

“Even though we take pride in our craft of doing surgical procedures, to have a machine or tool that works with us in ensuring better outcome safety and reducing complications—[there] would be a tremendous benefit,” Kim says. The new system is an impressive example of a robot performing delicate manipulation. If robots can master human-level dexterity, they could conceivably take on many more tasks and jobs.

STAR consists of an industrial robot equipped with several custom-made components. The researchers developed a force-sensitive device for suturing and, most important, a near-infrared camera capable of imaging soft tissue in detail when fluorescent markers are injected.
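
The article does not disclose STAR’s control algorithm, but the combination it describes (near-infrared tracking of injected fluorescent markers plus force sensing) maps naturally onto a visual-servoing loop: image the tissue, plan the next stitch relative to the tracked markers, and hand control back to the supervising surgeon if forces go out of range. The sketch below is a hypothetical simplification; all functions, positions and thresholds are illustrative, not the published system.

```python
FORCE_LIMIT_N = 2.0   # illustrative safety threshold, not a published spec

def track_markers():
    """Placeholder: locate injected fluorescent markers with the NIR camera.
    Returns fake 3-D marker positions in millimetres."""
    return [(10.0, 5.0, 2.0), (25.0, 5.2, 2.1)]

def plan_stitch(markers, i, spacing_mm=3.0):
    """Place stitch i at a fixed spacing along the seam between markers."""
    (x0, y0, z0), _ = markers
    return (x0 + i * spacing_mm, y0, z0)

def drive_needle(target_mm):
    """Placeholder: move the arm, apply the stitch, return measured force (N)."""
    return 1.2

def suture(n_stitches=5):
    for i in range(n_stitches):
        target = plan_stitch(track_markers(), i)   # re-image before each stitch
        force = drive_needle(target)
        if force > FORCE_LIMIT_N:                  # emergency handover
            raise RuntimeError("force limit exceeded: surgeon takes over")

suture()
```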

“It’s an important result,” says Ken Goldberg, a professor at UC Berkeley who is also developing robotic surgical systems. “The innovation in 3-D sensing is particularly interesting.”

Goldberg’s team is developing surgical robots that could be more flexible than STAR because, instead of being manually programmed, they learn automatically by observing expert surgeons. “Copying the skill of experts is really the next step here,” he says.

https://www.technologyreview.com/s/601378/nimble-fingered-robot-outperforms-the-best-human-surgeons/

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Google invents cyborg lenses for our eyes

by David Goldman

Google has patented a new technology that would let the company inject a computerized lens directly into your eyeball.

The company has been developing smart glasses and even smart contact lenses for years. But Google’s newest patented technology would go even further — and deeper.

In its patent application, which the U.S. Patent and Trademark Office approved last week, Google says it could remove the lens of your eye, inject fluid into your empty lens capsule and then place an electronic lens in the fluid.

Once equipped with your cyborg lenses, you would never need glasses or contacts again. In fact, you might not even need a telescope or a microscope again. And who needs a camera when your eyes can capture photos and videos?

The artificial, computerized lenses could automatically adjust to help you see objects at a distance or very close by. The lenses could be powered by the movement of your eyeball, and they could even connect to a nearby wireless device.
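
The patent does not spell out the focusing math, but “automatically adjusting” a lens is textbook optics rather than anything Google-specific: for an eye of fixed length, the thin-lens relation gives the total optical power needed to focus an object at a given distance, P = 1/d_object + 1/d_image (distances in metres, power in dioptres). A small sketch of the arithmetic such a lens would have to perform, with an illustrative lens-to-retina distance:

```python
EYE_LENGTH_M = 0.017   # typical lens-to-retina distance; illustrative value

def required_power_dioptres(object_distance_m):
    """Thin-lens equation: total power needed to focus an object at the
    given distance onto the retina of a fixed-length eye."""
    return 1.0 / object_distance_m + 1.0 / EYE_LENGTH_M

for d in (float("inf"), 1.0, 0.25):   # far away, 1 m, reading distance
    print(f"object at {d:>5} m -> {required_power_dioptres(d):.1f} D")
```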

Google says that its patented lenses could be used to cure presbyopia, an age-related condition in which people’s eyes stiffen and their ability to focus is diminished or lost. The lenses could also correct common eye problems such as myopia, hyperopia and astigmatism.

Today, we correct blurry vision with eyeglasses or contact lenses. But sometimes vision is not correctable.

And there are clear advantages to being a cyborg with mechanical eyes.

Yet Google noted that privacy could become a concern. If your computerized eyes are transmitting data all the time, that signal could allow law enforcement or hackers to identify you or track your movements. Google said that it could make the mechanical lenses strip out personally identifying information so that your information stays secure.

Before you sign up for cyborg eyes, it’s important to note that Google and many other tech companies patent technologies all the time, and many of those patents never become actual products. So it’s unclear whether Google will ever be implanting computers into your eyes.

http://money.cnn.com/2016/05/04/technology/google-lenses/index.html

Massive sculpture relocated because people kept walking into it while texting

By Sophie Jamieson

A massive 20ft statue of two clasped hands had to be relocated after people texting on their mobile phones kept walking into it.

The sculpture, called ‘The Kiss’, was only put in place last weekend, but within days those in charge of the exhibition noticed walkers on the path were bumping their heads as they walked through the archway underneath.

Artist Sophie Ryder, who designed the sculpture, posted a video of it being moved by a crane on her Facebook page.

The artwork was positioned on a path leading up to Salisbury Cathedral in Wiltshire.

Made from galvanised steel wire, The Kiss had a 6ft 4in gap underneath the two hands that pedestrians could walk through.

But Ms Ryder said people glued to their phones had not seen it coming.

She said on social media: “We had to move ‘the kiss’ because people were walking through texting and said they bumped their heads! Oh well!!”

Her fans voiced their surprise that people could fail to notice the “ginormous” sculpture.

Cindy Billingsley commented: “Oh good grief- they should be looking at the beautiful art instead of texting- so they deserve what they get if they are not watching where they are going.”

Patricia Cunningham said: “If [sic] may have knocked some sense into their heads! We can but hope.”

Another fan, Lisa Wallis-Adams, wrote: “We saw your art in Salisbury at the weekend. We absolutely loved your rabbits and didn’t walk into any of them! Sorry some people are complete numpties.”

Sculptor Sophie Ryder studied at the Royal Academy of Arts and is known for creating giant mythical figures, such as minotaurs.

The sculpture is part of an exhibition that also features Ryder’s large “lady hares” and minotaurs, positioned on the lawn outside the cathedral. The exhibition runs until 3 July.

http://www.telegraph.co.uk/news/uknews/12164922/Massive-sculpture-relocated-because-people-busy-texting-kept-walking-into-it.html

Graphene successfully interfaced with neurons in the brain

Scientists have long sought a way to implant electrodes that interface with neurons in the human brain; success could have huge implications for the treatment of Parkinson’s disease and other neurological disorders. Last month, a team of researchers from Italy and the UK took a huge step forward by showing that the world’s favorite wonder material, graphene, can successfully interface with neurons.

Previous efforts by other groups using treated graphene had produced interfaces with a very low signal-to-noise ratio. But an interdisciplinary collaboration between the University of Trieste and the Cambridge Graphene Centre has developed a significantly improved electrode by working with untreated graphene.
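
Signal-to-noise ratio is the figure of merit here: it compares the amplitude of the neural spikes an electrode picks up against the noise the electrode and its coating add. A minimal sketch of how that ratio is typically expressed, with illustrative amplitudes (the source gives no numbers):

```python
import math

def snr_db(spike_peak_uv, noise_rms_uv):
    """SNR in decibels from spike peak amplitude and RMS noise, in microvolts."""
    return 20 * math.log10(spike_peak_uv / noise_rms_uv)

# Illustrative: a 200 uV spike over 10 uV of noise is a clean recording;
# the same spike buried in 150 uV of noise is barely usable.
print(f"low-noise electrode:  {snr_db(200, 10):.1f} dB")
print(f"high-noise electrode: {snr_db(200, 150):.1f} dB")
```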

“For the first time we interfaced graphene to neurons directly,” said Professor Laura Ballerini of the University of Trieste in Italy. “We then tested the ability of neurons to generate electrical signals known to represent brain activities, and found that the neurons retained their neuronal signaling properties unaltered. This is the first functional study of neuronal synaptic activity using uncoated graphene based materials.”

Prior to experimenting with graphene-based substrates (GBS), scientists implanted microelectrodes based on tungsten and silicon. Proof-of-concept experiments were successful, but these materials suffered from fatal flaws. The body reacts to the insertion trauma by forming scar tissue, which inhibits clear electrical signals. The structures were also prone to disconnecting because of the stiffness of the materials, which is unsuitable for a semi-fluid organic environment.

Pure graphene is promising because it is flexible, non-toxic, and does not impair other cellular activity.

The team’s experiments on rat brain cell cultures showed that the untreated graphene electrodes interfaced well with neurons, transmitting electrical impulses normally with none of the adverse reactions seen previously.

The biocompatibility of graphene could allow it to be used to make graphene microelectrodes that could help measure, harness and control an impaired brain’s functions. It could be used to restore lost sensory functions, to treat paralysis, to control prosthetic devices such as robotic limbs for amputees, and even to control or diminish the impact of the out-of-control electrical impulses that cause motor disorders such as Parkinson’s and epilepsy.

“We are currently involved in frontline research in graphene technology towards biomedical applications,” said Professor Maurizio Prato from the University of Trieste. “In this scenario, the development and translation in neurology of graphene-based high-performance bio-devices requires the exploration of the interactions between graphene nano and micro-sheets with the sophisticated signaling machinery of nerve cells. Our work is only a first step in that direction.”

The results of this research were recently published in the journal ACS Nano. The research was funded by the Graphene Flagship, a European initiative that aims to connect theoretical and practical fields and reduce the time that graphene products spend in laboratories before being brought to market.

http://www.cam.ac.uk/research/news/graphene-shown-to-safely-interact-with-neurons-in-the-brain

DARPA program aims to develop an implantable neural interface capable of connecting with one million neurons

A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.

The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.

Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.
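
Those channel counts translate directly into a bandwidth problem, which is why the program pairs the hardware goals with work on compression. A back-of-the-envelope comparison, assuming (illustratively) a 30 kHz sampling rate and 16 bits per sample per channel:

```python
SAMPLE_RATE_HZ = 30_000   # typical extracellular sampling rate (assumption)
BITS_PER_SAMPLE = 16      # assumption

def raw_rate_gbps(channels):
    """Uncompressed data rate in gigabits per second."""
    return channels * SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1e9

print(f"100 channels (today):      {raw_rate_gbps(100):.3f} Gbit/s")
print(f"1,000,000 channels (NESD): {raw_rate_gbps(1_000_000):,.0f} Gbit/s")
```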

Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.

To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.

To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting on Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-16/listing.html. More details about the Industry Group that will support NESD are available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-17/listing.html. A Broad Agency Announcement describing the specific capabilities sought will be forthcoming on http://www.fbo.gov.

NESD is part of a broader portfolio of programs within DARPA that support President Obama’s brain initiative. For more information about DARPA’s work in that domain, please visit: http://www.darpa.mil/program/our-research/darpa-and-the-brain-initiative.

http://www.darpa.mil/news-events/2015-01-19

Thanks to Kebmodee for bringing this to the It’s Interesting community.

Uploading Our Minds into Digital Space


Human cortical neurons in the brain. (David Scharf/Corbis)

By Jerry Adler
Smithsonian Magazine

Ken Hayworth, a neuroscientist, wants to be around in 100 years but recognizes that, at 43, he’s not likely to make it on his own. Nor does he expect to get there preserved in alcohol or a freezer; despite the claims made by advocates of cryonics, he says, the ability to revivify a frozen body “isn’t really on the horizon.” So Hayworth is hoping for what he considers the next best thing. He wishes to upload his mind—his memories, skills and personality—to a computer that can be programmed to emulate the processes of his brain, making him, or a simulacrum, effectively immortal (as long as someone keeps the power on).

Hayworth’s dream, which he is pursuing as president of the Brain Preservation Foundation, is one version of the “technological singularity.” It envisions a future of “substrate-independent minds,” in which human and machine consciousness will merge, transcending biological limits of time, space and memory. “This new substrate won’t be dependent on an oxygen atmosphere,” says Randal Koene, who works on the same problem at his organization, Carboncopies.org. “It can go on a journey of 1,000 years, it can process more information at a higher speed, it can see in the X-ray spectrum if we build it that way.” Whether Hayworth or Koene will live to see this is an open question. Their most optimistic scenarios call for at least 50 years, and uncounted billions of dollars, to implement their goal. Meanwhile, Hayworth hopes to achieve the ability to preserve an entire human brain at death—through chemicals, cryonics or both—to keep the structure intact with enough detail that it can, at some future time, be scanned into a database and emulated on a computer.

That approach presumes, of course, that all of the subtleties of a human mind and memory are contained in its anatomical structure—conventional wisdom among neuroscientists, but it’s still a hypothesis. There are electrochemical processes at work. Are they captured by a static map of cells and synapses? We won’t know, advocates argue, until we try to do it.

The initiatives require a big bet on the future of technology. A 3-D map of all the cells and synapses in a nervous system is called a “connectome,” and so far researchers have produced exactly one, for a roundworm called Caenorhabditis elegans, with 302 neurons and about 7,000 connections among them. A human brain, according to one reasonable estimate, has about 86 billion neurons and 100 trillion synapses. And then there’s the electrochemical activity on top of that. In 2013, announcing a federal initiative to produce a complete model of the human brain, Francis Collins, head of the National Institutes of Health, said it could generate “yottabytes” of data—a million million million megabytes. To scan an entire human brain at the scale Hayworth thinks is necessary—effectively slicing it into virtual cubes ten nanometers on a side—would require, with today’s technology, “a million electron microscopes running in parallel for ten years.” Mainstream researchers are divided between those who regard Hayworth’s quest as impossible in practice, and those, like Miguel Nicolelis of Duke University, who consider it impossible in theory. “The brain,” he says, “is not computable.”
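
Hayworth’s scale is easy to sanity-check. A human brain occupies roughly 1.2 litres; slicing that into the 10-nanometre cubes he describes yields about 10^21 voxels, so even a single byte per voxel puts the raw imagery around a zettabyte, and richer per-voxel structural and activity data push the total toward the “yottabytes” Collins cited. A quick check of the arithmetic (brain volume and bytes per voxel are illustrative assumptions):

```python
BRAIN_VOLUME_M3 = 1.2e-3   # ~1.2 litres; illustrative
VOXEL_EDGE_M = 10e-9       # the 10 nm scale Hayworth cites
BYTES_PER_VOXEL = 1        # illustrative lower bound

voxels = BRAIN_VOLUME_M3 / VOXEL_EDGE_M**3
print(f"voxels:    {voxels:.1e}")                    # ~1.2e+21
print(f"raw bytes: {voxels * BYTES_PER_VOXEL:.1e}")  # ~1.2e+21 B, about a zettabyte
```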

And what does it mean for a mind to exist outside a brain? One immediately thinks of the disembodied HAL in 2001: A Space Odyssey. But Koene sees no reason that, if computers continue to grow smaller and more powerful, an uploaded mind couldn’t have a body—a virtual one, or a robotic one. Will it sleep? Experience hunger, pain, desire? In the absence of hormones and chemical neurotransmitters, will it feel emotion? It will be you, in a sense, but will you be it?

These questions don’t trouble Hayworth. To him, the brain is the most sophisticated computer on earth, but only that, and he figures his mind could also live in one made of transistors instead. He hopes to become the first human being to live entirely in cyberspace, to send his virtual self into the far future.

Read more: http://www.smithsonianmag.com/innovation/quest-upload-mind-into-digital-space-180954946/