
By Jason Dorrier

It’s been over a decade since artificial retinas first began helping the blind see. But for many people, whose blindness originates beyond the retina, the technology falls short. Which is why new research out of Spain skips the eye entirely, instead sending signals straight to the brain’s visual cortex.

Amazingly, 15 years after losing her sight, Bernardeta Gómez, who suffers from toxic optic neuropathy, used the experimental technology to recognize lights, letters, shapes, people—and even to play a basic video game sent directly to her brain via an implant.

According to MIT Technology Review, Gómez first began working with researchers in late 2018. Over the next six months, she spent four days a week dialing in the technology’s settings and testing its limits.

The system, developed by Eduardo Fernandez, director of neuroengineering at Miguel Hernandez University, works like this.

A camera embedded in a pair of thick, black-rimmed glasses records Gómez’s field of view and sends it to a computer. The computer translates the data into electrical impulses the brain can read and forwards them to a brain implant by way of a cable plugged into a port in the skull. The implant stimulates neurons in Gómez’s visual cortex, which her brain interprets as incoming sensory information. Gómez perceives a low-resolution depiction of her surroundings in the form of yellow dots and shapes called phosphenes, which she’s learned to interpret as objects in the world around her.
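In spirit, the encoding step is a drastic downsampling. Here is a minimal sketch (not the team’s actual software; the 10-by-10 grid, the threshold, and the function name are illustrative assumptions) of how a camera frame might be reduced to a coarse on/off phosphene pattern:

```python
import numpy as np

def frame_to_phosphenes(frame, grid=10, threshold=0.5):
    """Downsample a grayscale frame to a coarse grid of on/off
    'phosphene' dots -- a simplification of a real encoder, which
    would map brightness to per-electrode stimulation parameters."""
    h, w = frame.shape
    gh, gw = h // grid, w // grid
    # Average brightness inside each grid cell
    cells = frame[:gh * grid, :gw * grid].reshape(grid, gh, grid, gw).mean(axis=(1, 3))
    return cells > threshold  # True = electrode fires, perceived as a dot

# Synthetic 100x100 frame: a bright vertical bar on a dark background
frame = np.zeros((100, 100))
frame[:, 40:60] = 1.0
pattern = frame_to_phosphenes(frame)
print(pattern.astype(int))
```

A real encoder must also decide which electrode corresponds to which grid cell and how strongly to stimulate it, which is presumably part of what Gómez spent months dialing in.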

The technology itself is still very much in the early stages—Gómez is the first to test it—but the team aims to work with five more patients in the next few years. Eventually, Fernandez hopes their efforts can help return sight to many more of the world’s blind people.

A Brief History of Artificial Eyes

This isn’t the first time researchers have used technology to help the blind see again.

Roughly two decades ago, the Artificial Retina Project brought together a number of research institutions to develop a device for those suffering retina-destroying diseases. The work resulted in the Argus systems, which, like Fernandez’s system, use a camera mounted on glasses, a computer to translate sensory data, and an implant with an array of electrodes embedded in the retina (instead of the brain).

Over the course of about a decade, researchers developed the Argus I and Argus II systems, ran them through human trials, and gained approval in Europe (2011) and the US (2013) to sell their bionic eyes to eligible individuals.

According to MIT Technology Review, around 350 people use Argus II today, but the company marketing the devices, Second Sight, has pivoted from artificial retinas to the brain itself because far more people, like Gómez, suffer from damage to the neural pathways between eyes and brain.

Just last year, Second Sight was involved in research, along with UCLA and Baylor, testing a system that also skips the retina and sends visual information straight to the brain.

The system, called Orion, is similar to Argus II. A feed from a video camera mounted on dark glasses is converted to electric pulses sent to an implant that stimulates the brain. The device is wireless and includes a belt with a button to amplify dark objects in the sun or light objects in the dark. Like Fernandez’s system, the user sees a low-resolution pattern of phosphenes they interpret as objects.

“I’ll see little white dots on a black background, like looking up at the stars at night,” said Jason Esterhuizen, who was the second research subject to receive the device. “As a person walks toward me, I might see three little dots. As they move closer to me, more and more dots light up.”

Though the research is promising—it’s designated an FDA Breakthrough Device and is being trialed with six patients—Dr. Daniel Yoshor, study leader and neurosurgeon, cautioned the Guardian last year that it’s “still a long way from what we hope to achieve.”

The Road Ahead

Brain implants are far riskier than eye implants, and if the original Argus system is any indication, it may be years before these new devices are used widely beyond research.

Still, brain-machine interfaces (BMIs) are quickly advancing on a number of fronts.

The implant used in Fernandez’s research is a fairly common device called a Utah array. The square array is a few millimeters wide and contains 100 electrode spikes which are inserted into the brain. Each spike stimulates a few neurons. Similar implants have helped paralyzed folks control robotic arms and type messages with just their thoughts.

Though they’ve been the source of several BMI breakthroughs, the arrays aren’t perfect.

The electrodes damage surrounding brain tissue, scarring renders them useless all too quickly, and they only interact with a handful of neurons. The ideal device would be wireless, last decades in the brain—limiting the number of surgeries needed—and offer greater precision and resolution.

Fernandez believes his implant can be modified to last decades, and while the current maximum resolution is 10 by 10 pixels, he envisions one day implanting as many as six on each side of the brain to deliver a resolution of at least 60 by 60 pixels.

In addition, new technologies are in the works. Famously, Elon Musk’s company Neuralink is developing soft, thread-like electrodes that are deftly laced into brain tissue by a robot. Neuralink is aiming to include 3,000 electrodes on its device to chat up far more neurons than is currently possible (though it’s not clear whether there’s a limit to how many more neurons actually add value). Still other approaches, likely further out, do away with electrodes altogether, using light or chemicals to control gene-edited neurons.

Fernandez’s process also relies on more than just the hardware. The team used machine learning, for example, to write the software that translates visual information into neural code. This can be further refined, and in the coming years, as they work on the system as a whole, the components will no doubt improve in parallel.

But how quickly it all comes together in a product for wider use isn’t clear.

Fernandez is quick to dial back expectations—pointing out that these are still early experiments, and he doesn’t want to get anyone’s hopes up. Still, given the choice, Gómez said she’d have elected to keep the implant and wouldn’t think twice about installing version two.

“This is an exciting time in neuroscience and neurotechnology, and I feel that within my lifetime we can restore functional sight to the blind,” Yoshor said last year.

Blind Woman Sees With New Implant, Plays Video Game Sent Straight to Her Brain

By David Freeman

No one is ditching the night-vision goggles just yet, but scientists working in the United States and China have developed a technique that they say could one day give humans the ability to see in the dark.

The technique involves injecting the eyes with particles that act like tiny antennae that take infrared light — wavelengths that are invisible to humans and other mammals — and convert it to visible wavelengths. Mammals can see wavelengths in just a sliver of the electromagnetic spectrum, and the new technique is designed to widen that sliver.

The nanoparticle injections haven’t been tried on humans, but experiments on mice show that they confer the ability to see infrared light without interfering with the perception of light in the visible range. The effect worked during the day and at night and lasted for several weeks. The rodents were left unharmed once it wore off.

Gang Han, a chemist at the University of Massachusetts Medical School and a co-author of a new paper describing the research, said in a statement that the technique could lead to a better understanding of visual perception and possibly lead to new ways to treat color blindness.

But those are far from the only possible applications if the technique can be made to work safely in other mammals, including humans. In an email to NBC News MACH, Han said it might be possible to use nanoparticle injections to create “superdogs” that could make it easier to apprehend lawbreakers in darkness.

“For ordinary people,” he added, “we may also see our sky in a completely different way” both at night and during the day because many celestial objects give off infrared light.

The technique doesn’t confer the ability to see the longer-wavelength infrared light given off by living bodies and other warm objects, Tian Xue, a neuroscientist at the University of Science and Technology of China and a co-author of the paper, said in an email. But at least theoretically, it could give humans the ability to see bodies and objects in darkness without the use of night-vision gear — though an infrared light would still be needed.

For their research, Han, Xue and their collaborators injected the rodents’ eyes with nanoparticles treated with proteins that helped “glue” the particles to light-sensitive cells in the animals’ retinas. Once the tiny antennae were in place, the scientists hypothesized, the nanoparticles would convert infrared light into shorter wavelengths, which the animals would then perceive as green light.

To make sure the mice were actually seeing the converted infrared light, the scientists subjected the animals to a number of tests, including one in which they were given a choice of entering a totally dark box or one illuminated only with infrared light. (Mice are nocturnal, and ordinarily they prefer darkness.) Control animals showed no preference — because both boxes appeared dark to them — while treated mice showed a distinct preference for the dark box.

Other scientists praised the research while expressing doubts about trying the technique in humans.

Harvard neuroscientist Michael Do said in an email that the experiments were “sophisticated” and that the technique was likely to work in humans as well as in mice. But he said it was unclear just how sharp the infrared vision would be in humans, and he cautioned that the injections might damage delicate structures in the eye.

Glen Jeffery, a neuroscientist at University College London, expressed similar praise for the research — but even graver doubts. “Injecting any material under the retina is risky and should never be done unless there is a clear and justifiable clinical reason…” he said in an email. “I have no idea how you could use this technology to human advantage and would never support its application on healthy humans.”

But the researchers are moving ahead. Han said the team planned to test the technique in bigger animals — possibly dogs.

https://www.nbcnews.com/mach/science/scientists-create-super-mice-can-see-dark-here-s-what-ncna977966



by DAVID NIELD

New research suggests the human eye and brain are capable of seeing ghosted images, a new type of visual phenomenon that scientists previously thought could only be detected by a computer. It turns out our eyes are more powerful than we thought.

The discovery could teach us more about the inner workings of the eye and brain and how they process information, as well as change our thinking about what we human beings can truly see of the world around us.

Developed as a low-cost way of capturing light outside the visible spectrum, the patterns produced by these ghosted images are usually processed by software algorithms – but, surprisingly, our eyes have the same capabilities.

“Ghost-imaging with the eye opens up a number of completely novel applications such as extending human vision into invisible wavelength regimes in real-time, bypassing intermediary screens or computational steps,” write the researchers.

“Perhaps even more interesting are the opportunities that ghost imaging offers for exploring neurological processes.”

Ghost imaging works using a camera with a single pixel, rather than the millions of pixels used by the sensors inside today’s digital cameras and smartphones. When it comes to capturing light beyond the visible spectrum, it’s an even more cost-effective method.

These single-pixel cameras capture light as it reflects from an object – by watching different random patterns of bouncing light and crunching through some calculations, the camera can gradually build up a picture of something even with just one pixel.

In some setups, the single-pixel camera is used in combination with a second light, modulated in response to the first and beamed back onto the original random patterns. The advantage is that fewer patterns are needed to produce an image.

In this case a second camera using some smart algorithms can pick up the image without having looked at the object at all – just by looking at the patterns being cast and the light being produced from them.

That’s the ghosted image that was previously thought to be visible only to computers running specialist software. However, the new study shows that human visual perception can make sense of these patterns, called Hadamard patterns.
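The reconstruction arithmetic behind ghost imaging is simple enough to sketch. In this toy simulation (a minimal sketch, not the study’s code; the 4×4 scene and the Sylvester-ordered Hadamard patterns are illustrative assumptions), a single “bucket” value is recorded for each pattern, and summing the patterns weighted by those values recovers the scene:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# A 4x4 test "scene", flattened to a 16-pixel vector
scene = np.zeros((4, 4))
scene[1:3, 1:3] = 1.0
x = scene.ravel()

H = hadamard(16)             # each row is one +/-1 Hadamard pattern
bucket = H @ x               # single-pixel detector: one number per pattern
recon = (H.T @ bucket) / 16  # correlate bucket values with the patterns

print(np.allclose(recon.reshape(4, 4), scene))  # exact recovery: True
```

What the study suggests is that, when the weighted patterns are flashed rapidly enough, the eye and brain perform an analogue of this summation on their own.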

A diagram in the research paper (reproduced in the original post as “ghosted-images-2”) gives an idea of what’s happening.

It’s a little bit like when our eyes and brains look at a series of still images and treat them as a moving picture – the same sort of subconscious processing seems to be going on.

Of the four volunteers who took part in the study, all four could make out an image of Albert Einstein sticking out his tongue from the Hadamard patterns. Interestingly, though, the illusion only appeared when the patterns were projected quickly enough.

If the rate dropped below 200 patterns per 20 milliseconds, the image couldn’t be seen by the study participants.

As the researchers point out, this is potentially hugely exciting – it means we might be able to devise simple systems to see light outside the visible spectrum, with no computer processing required in the middle.

That’s all to come – and this is really preliminary stuff, so we can’t get too carried away. For now, the team of researchers is using the findings to explore more about how our visual systems work, and whether our eyes and brains have yet-undiscovered superpowers for looking at the world around us.

The research has yet to be peer-reviewed, but you can read it on the pre-print resource arXiv.

https://www.sciencealert.com/human-eye-sees-ghosted-images-reflected-light


An array of semitransparent organic pixels on top of an ultrathin sheet of gold. Both the organic islands and the underlying gold are more than one hundred times thinner than a single neuron.

SUMMARY: A simple retinal prosthesis is under development. Fabricated using cheap and widely-available organic pigments used in printing inks and cosmetics, it consists of tiny pixels like a digital camera sensor on a nanometric scale. Researchers hope that it can restore sight to blind people.

Researchers led by Eric Glowacki, principal investigator of the organic nanocrystals subgroup in the Laboratory of Organic Electronics, Linköping University, have developed a tiny, simple photoactive film that converts light impulses into electrical signals. These signals in turn stimulate neurons (nerve cells). The research group has chosen to focus on a particularly pressing application, artificial retinas that may in the future restore sight to blind people. The Swedish team, specializing in nanomaterials and electronic devices, worked together with researchers in Israel, Italy and Austria to optimise the technology. Experiments in vision restoration were carried out by the group of Yael Hanein at Tel Aviv University in Israel. Yael Hanein’s group is a world-leader in the interface between electronics and the nervous system.

The results have recently been published in the scientific journal Advanced Materials.

The retina consists of several thin layers of cells. Light-sensitive neurons in the back of the eye convert incident light to electric signals, while other cells process the nerve impulses and transmit them onwards along the optic nerve to an area of the brain known as the “visual cortex.” An artificial retina may be surgically implanted into the eye if a person’s sight has been lost as a consequence of the light-sensitive cells becoming degraded, thus failing to convert light into electric pulses.

The artificial retina consists of a thin circular film of photoactive material, and is similar to an individual pixel in a digital camera sensor. Each pixel is truly microscopic — it is about 100 times thinner than a single cell and has a diameter smaller than the diameter of a human hair. It consists of a pigment of semi-conducting nanocrystals. Such pigments are cheap and non-toxic, and are commonly used in commercial cosmetics and tattooing ink.

“We have optimised the photoactive film for near-infrared light, since biological tissues, such as bone, blood and skin, are most transparent at these wavelengths. This raises the possibility of other applications in humans in the future,” says Eric Glowacki.

He describes the artificial retina as a microscopic doughnut, with the crystal-containing pigment in the middle and a tiny metal ring around it. It acts without any external connectors, and the nerve cells are activated without a delay.

“The response time must be short if we are to gain control of the stimulation of nerve cells,” says David Rand, postdoctoral researcher at Tel Aviv University. “Here, the nerve cells are activated directly. We have shown that our device can be used to stimulate not only neurons in the brain but also neurons in non-functioning retinas.”

https://www.sciencedaily.com/releases/2018/05/180502104043.htm

by Lacy Cook

This praying mantis isn’t just wearing minuscule 3D glasses for the cute factor, but to help scientists learn more about 3D vision. A Newcastle University team discovered a novel form of 3D vision, or stereo vision, in the insects – and compared human and insect stereo vision for the very first time. Their findings could have implications for visual processing in robots.

Humans aren’t the only creatures with stereo vision, which “helps us work out the distances to the things we see,” according to the university. Cats, horses, monkeys, toads, and owls have it too – but the only insect we know about with 3D vision is the praying mantis. Six Newcastle University researchers obtained new insight into their robust stereo vision with the help of small 3D glasses temporarily attached to the insects with beeswax.

The researchers designed an insect 3D cinema, showing a praying mantis a film of prey. The insects would actually try to catch the prey because the illusion was so convincing. And the scientists were able to take their work to the next level, showing the mantises “complex dot-patterns used to investigate human 3D vision” so they could compare our 3D vision with an insect’s for the first time.

According to the university, humans see 3D in still images by matching details of the image each eye sees. “But mantises only attack moving prey so their 3D doesn’t need to work in still images. The team found mantises don’t bother about the details of the picture but just look for places where the picture is changing…Even if the scientists made the two eyes’ images completely different, mantises can still match up the places where things are changing. They did so even when humans couldn’t.”
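A minimal sketch of that idea (illustrative only, not the researchers’ model): compute where each eye’s image is changing between frames, then find the horizontal shift that best aligns the two change maps.

```python
import numpy as np

def change_map(prev, curr):
    """Where the image is changing -- ignore static luminance details."""
    return np.abs(curr - prev)

def disparity_from_change(left_pair, right_pair, max_shift=5):
    """Return the shift that best aligns the two eyes' change maps."""
    cl = change_map(*left_pair)
    cr = change_map(*right_pair)
    scores = [np.sum(cl * np.roll(cr, s)) for s in range(-max_shift, max_shift + 1)]
    return int(np.argmax(scores)) - max_shift

# Toy 1-D views: a target that moves one pixel between frames,
# seen at x=10 by the left eye and x=7 by the right (disparity 3)
def view(pos):
    v = np.zeros(30); v[pos] = 1.0; return v

left  = (view(10), view(11))   # frames t-1, t as seen by the left eye
right = (view(7),  view(8))    # same motion, shifted by the disparity
print(disparity_from_change(left, right))  # -> 3
```

Because only the change maps are compared, the match can succeed even when the two eyes see completely different static details, as in the experiments described above.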

The journal Current Biology published their work online last week. Lead author Vivek Nityananda, a behavioral ecologist, described the praying mantis’ stereo vision as “a completely new form of 3D vision.”

Future robots could benefit from these findings: instead of 3D vision based on complex human stereo vision, researchers might be able to take some tips from praying mantis stereo vision, which team member Ghaith Tarawneh said probably doesn’t require a lot of computer processing since insect brains are so small.

https://inhabitat.com/praying-mantises-wearing-tiny-glasses-help-researchers-discover-new-type-of-3d-vision/

By Helen Thomson

“When the tide came in, these kids started swimming. But not like I had seen before. They were more underwater than above water, they had their eyes wide open – they were like little dolphins.”

Deep in the island archipelagos of the Andaman Sea, along the west coast of Thailand, live small tribes called the Moken people, also known as sea-nomads. Their children spend much of their day in the sea, diving for food. They are uniquely adapted to this job – because they can see underwater. And it turns out that with a little practice, their unique vision might be accessible to any young person.

In 1999, Anna Gislen at the University of Lund in Sweden was investigating different aspects of vision when a colleague suggested that she might be interested in studying the unique characteristics of the Moken tribe. “I’d been sitting in a dark lab for three months, so I thought, ‘yeah, why not go to Asia instead’,” says Gislen.

Gislen and her six-year-old daughter travelled to Thailand and integrated themselves within the Moken communities, who mostly lived in houses set upon poles. When the tide came in, the Moken children splashed around in the water, diving down to pick up food that lay metres below what Gislen or her daughter could see. “They had their eyes wide open, fishing for clams, shells and sea cucumbers, with no problem at all,” she says.

Gislen set up an experiment to test just how good the children’s underwater vision really was. The kids were excited about joining in, says Gislen, “they thought it was just a fun game.”

The kids had to dive underwater and place their heads onto a panel. From there they could see a card displaying either vertical or horizontal lines. Once they had stared at the card, they came back to the surface to report which direction the lines travelled. Each time they dived down, the lines would get thinner, making the task harder. It turned out that the Moken children were able to see twice as well as European children who performed the same experiment at a later date.

What was going on? To see clearly above land, you need to be able to refract light that enters the eye onto the retina. The retina sits at the back of the eye and contains specialised cells, which convert the light signals into electrical signals that the brain interprets as images.

Light is refracted when it enters the human eye because the outer cornea contains water, which makes it slightly denser than the air outside the eye. An internal lens refracts the light even further.

When the eye is immersed in water, which has about the same density as the cornea, we lose the refractive power of the cornea, which is why the image becomes severely blurred.
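The size of that loss can be estimated with the standard single-surface refraction formula, P = (n₂ − n₁)/R. This back-of-the-envelope calculation uses typical textbook values (the indices and corneal radius are assumptions; real eyes vary):

```python
# Power of a single refracting surface: P = (n2 - n1) / R, in diopters (R in meters)
N_AIR, N_WATER, N_CORNEA = 1.000, 1.333, 1.376  # typical refractive indices
R_CORNEA = 0.0078  # ~7.8 mm anterior corneal radius (assumed textbook value)

def surface_power(n_outside, n_inside, radius):
    return (n_inside - n_outside) / radius

in_air = surface_power(N_AIR, N_CORNEA, R_CORNEA)
in_water = surface_power(N_WATER, N_CORNEA, R_CORNEA)
print(f"cornea in air:   {in_air:.1f} D")    # roughly 48 D
print(f"cornea in water: {in_water:.1f} D")  # roughly 5.5 D
```

On these numbers the cornea loses around ninety percent of its refractive power underwater, which is why everything blurs – and why the Moken children’s compensation is so striking.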

Gislen figured that in order for the Moken children to see clearly underwater, they must have either picked up some adaption that fundamentally changed the way their eyes worked, or they had learned to use their eyes differently under water.

She thought the first theory was unlikely, because a fundamental change to the eye would probably mean the kids wouldn’t be able to see well above water. A simple eye test proved this to be true – the Moken children could see just as well above water as European children of a similar age.

It had to be some kind of manipulation of the eye itself, thought Gislen. There are two ways in which you can theoretically improve your vision underwater. You can change the shape of the lens – which is called accommodation – or you can make the pupil smaller, thereby increasing the depth of field.

Their pupil size was easy to measure – and revealed that they can constrict their pupils to the maximum known limit of human performance. But this alone couldn’t fully explain the degree to which their sight improved. This led Gislen to believe that accommodation of the lens was also involved.

“We had to make a mathematical calculation to work out how much the lens was accommodating in order for them to see as far as they could,” says Gislen. This showed that the children had to be able to accommodate to a far greater degree than you would expect to see underwater.

“Normally when you go underwater, everything is so blurry that the eye doesn’t even try to accommodate, it’s not a normal reflex,” says Gislen. “But the Moken children are able to do both – they can make their pupils smaller and change their lens shape. Seals and dolphins have a similar adaptation.”

Gislen was able to test a few Moken adults in the same way. They showed no unusual underwater vision or accommodation – perhaps explaining why the adults in the tribe caught most of their food by spear fishing above the surface. “When we age, our lenses become less flexible, so it makes sense that the adults lose the ability to accommodate underwater,” says Gislen.

Gislen wondered whether the Moken children had a genetic anomaly to thank for their ability to see underwater or whether it was just down to practice. To find out, she asked a group of European children on holiday in Thailand, and a group of children in Sweden to take part in training sessions, in which they dived underwater and tried to work out the direction of lines on a card. After 11 sessions across one month, both groups had attained the same underwater acuity as the Moken children.

“It was different for each child, but at some point their vision would just suddenly improve,” says Gislen. “I asked them whether they were doing anything different and they said, ‘No, I can just see better now’.”

She did notice, however, that the European kids would experience red eyes, irritated by the salt in the water, whereas the Moken children appeared to have no such problem. “So perhaps there is some adaptation there that allows them to dive down 30 times without any irritation,” she says.

Gislen recently returned to Thailand to visit the Moken tribes, but things had changed dramatically. In 2004, a tsunami created by a giant earthquake within the Indian Ocean destroyed much of the Moken’s homeland. Since then, the Thai government has worked hard to move them onto the land, building homes that are further inland and employing members of the tribe to work in the National Park. “It’s difficult,” says Gislen. “You want to help keep people safe and give them the best parts of modern culture, but in doing so they lose their own culture.”

In unpublished work, Gislen tested the same kids that were in her original experiment. The Moken children, now in their late teens, were still able to see clearly underwater. She wasn’t able to test many adults as they were too shy, but she is certain that they would have lost the ability to see underwater as they got older. “The adult eye just isn’t capable of that amount of accommodation,” she says.

Unfortunately, the children in Gislen’s experiments may be the last of the tribe to possess the ability to see so clearly underwater. “They just don’t spend as much time in the sea anymore,” she says, “so I doubt that any of the children that grow up these days in the tribe have this extraordinary vision.”

http://www.bbc.com/future/story/20160229-the-sea-nomad-children-who-see-like-dolphins

by David Goldman

Google has patented a new technology that would let the company inject a computerized lens directly into your eyeball.

The company has been developing smart glasses and even smart contact lenses for years. But Google’s newest patented technology would go even further — and deeper.

In its patent application, which the U.S. Patent and Trademark Office approved last week, Google says it could remove the lens of your eye, inject fluid into your empty lens capsule and then place an electronic lens in the fluid.

Once equipped with your cyborg lenses, you would never need glasses or contacts again. In fact, you might not even need a telescope or a microscope again. And who needs a camera when your eyes can capture photos and videos?

The artificial, computerized lenses could automatically adjust to help you see objects at a distance or very close by. The lenses could be powered by the movement of your eyeball, and they could even connect to a nearby wireless device.

Google says that its patented lenses could be used to cure presbyopia, an age-related condition in which people’s eyes stiffen and their ability to focus is diminished or lost. It could also correct common eye problems, such as myopia, hyperopia, and astigmatism.

Today, we cure blurry vision with eyeglasses or contact lenses. But sometimes vision is not correctable.

And there are clear advantages to being a cyborg with mechanical eyes.

Yet Google noted that privacy could become a concern. If your computerized eyes are transmitting data all the time, that signal could allow law enforcement or hackers to identify you or track your movements. Google said that it could make the mechanical lenses strip out personally identifying information so that your information stays secure.

Before you sign up for cyborg eyes, it’s important to note that Google and many other tech companies patent technologies all the time. Many of those patented items don’t end up getting made into actual products. So it’s unclear if Google will ever be implanting computers into your eyes — soon or ever.

http://money.cnn.com/2016/05/04/technology/google-lenses/index.html