Posts Tagged ‘The Singularity’

Artificial intelligence can share our natural ability to make numeric snap judgments.

Researchers observed this knack for numbers in a computer model composed of virtual brain cells, or neurons, called an artificial neural network. After being trained merely to identify objects in images — a common task for AI — the network developed virtual neurons that respond to specific quantities. These artificial neurons are reminiscent of the “number neurons” thought to give humans, birds, bees and other creatures the innate ability to estimate the number of items in a set (SN: 7/7/18, p. 7). This intuition is known as number sense.

In number-judging tasks, the AI demonstrated a number sense similar to humans and animals, researchers report online May 8 in Science Advances. This finding lends insight into what AI can learn without explicit instruction, and may prove interesting for scientists studying how number sensitivity arises in animals.

Neurobiologist Andreas Nieder of the University of Tübingen in Germany and colleagues used a library of about 1.2 million labeled images to teach an artificial neural network to recognize objects such as animals and vehicles in pictures. The researchers then presented the AI with dot patterns containing one to 30 dots and recorded how various virtual neurons responded.
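For readers curious about the mechanics, here is a minimal sketch of that kind of probe. It assumes PyTorch and a generic pretrained AlexNet from torchvision standing in for the authors' network, and simply records hidden-unit activity while the model views synthetic dot images; it is an illustration, not the study's code.

```python
# Minimal sketch (not the authors' code): probe a pretrained image classifier
# with dot patterns and record unit activations. Assumes PyTorch/torchvision.
import random
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image, ImageDraw

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def dot_image(n_dots, size=224, radius=6):
    """Draw n_dots non-overlapping white dots on a black canvas."""
    img = Image.new("RGB", (size, size), "black")
    draw = ImageDraw.Draw(img)
    placed = []
    while len(placed) < n_dots:
        x = random.randint(radius, size - radius)
        y = random.randint(radius, size - radius)
        if all((x - px) ** 2 + (y - py) ** 2 > (3 * radius) ** 2 for px, py in placed):
            draw.ellipse([x - radius, y - radius, x + radius, y + radius], fill="white")
            placed.append((x, y))
    return img

preprocess = T.Compose([T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

activations = {}
def hook(_, __, output):
    activations["penultimate"] = output.detach()

# Record the units after the penultimate fully connected layer,
# analogous to the "virtual neurons" examined in the study.
model.classifier[5].register_forward_hook(hook)

responses = {}
for n in range(1, 31):
    x = preprocess(dot_image(n)).unsqueeze(0)
    with torch.no_grad():
        model(x)
    responses[n] = activations["penultimate"].squeeze(0)  # one activity vector per numerosity
```

Plotting each unit's response across the 30 numerosities would then reveal which units, if any, are tuned to particular quantities.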

Some neurons were more active when viewing patterns with specific numbers of dots. For instance, some neurons activated strongly when shown two dots but not 20, and vice versa. The degree to which these neurons preferred certain numbers was nearly identical to previous data from the neurons of monkeys.

Dot detectors
A new artificial intelligence program viewed images of dots previously shown to monkeys, including images with one dot and images with even numbers of dots from 2 to 30 (bottom). Much like the number-sensitive neurons in monkey brains, number-sensitive virtual neurons in the AI preferentially activated when shown specific numbers of dots. As in monkey brains, the AI contained more neurons tuned to smaller numbers than larger numbers (top).

To test whether the AI’s number neurons equipped it with an animal-like number sense, Nieder’s team presented pairs of dot patterns and asked whether the patterns contained the same number of dots. The AI was correct 81 percent of the time, performing about as well as humans and monkeys do on similar matching tasks. Like humans and other animals, the AI struggled to differentiate between patterns that had very similar numbers of dots, and between patterns that had many dots (SN: 12/10/16, p. 22).
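That confusion pattern can be illustrated with a toy model: if units are tuned to numerosity on a roughly logarithmic scale, as number neurons are thought to be, then close or large numerosities produce nearly identical population responses and get mistaken for one another. The snippet below is an assumed, simplified simulation in Python (NumPy), not the study's analysis.

```python
# Toy simulation (an assumption, not the study's model): Gaussian tuning on a
# log-number axis makes "same or different?" judgments unreliable for close or
# large numerosities, the Weber-like behavior described above.
import numpy as np

rng = np.random.default_rng(0)
preferred = np.log(np.arange(1, 31))          # one unit tuned to each numerosity

def population_response(n, sigma=0.35, noise=0.1):
    clean = np.exp(-(np.log(n) - preferred) ** 2 / (2 * sigma ** 2))
    return clean + rng.normal(0, noise, clean.shape)

def judged_same(n1, n2, threshold=0.25):
    r1, r2 = population_response(n1), population_response(n2)
    distance = np.linalg.norm(r1 - r2) / np.linalg.norm(r1 + r2)
    return distance < threshold

# An easy pair (2 vs. 20) is almost never confused;
# a close, large pair (24 vs. 26) is mistaken for "same" most of the time.
print(np.mean([judged_same(2, 20) for _ in range(1000)]))
print(np.mean([judged_same(24, 26) for _ in range(1000)]))
```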

This finding is a “very nice demonstration” of how AI can pick up multiple skills while training for a specific task, says Elias Issa, a neuroscientist at Columbia University not involved in the work. But exactly how and why number sense arose within this artificial neural network is still unclear, he says.

Nieder and colleagues argue that the emergence of number sense in AI might help biologists understand how human babies and wild animals get a number sense without being taught to count. Perhaps basic number sensitivity “is wired into the architecture of our visual system,” Nieder says.

Ivilin Stoianov, a computational neuroscientist at the Italian National Research Council in Padova, is not convinced that such a direct parallel exists between the number sense in this AI and that in animal brains. This AI learned to “see” by studying many labeled pictures, which is not how babies and wild animals learn to make sense of the world. Future experiments could explore whether similar number neurons emerge in AI systems that more closely mimic how biological brains learn, like those that use reinforcement learning, Stoianov says (SN: 12/8/18, p. 14).

https://www.sciencenews.org/article/new-ai-acquired-humanlike-number-sense-its-own

By Greg Ip

It’s time to stop worrying that robots will take our jobs — and start worrying that they will decide who gets jobs.

Millions of low-paid workers’ lives are increasingly governed by software and algorithms. This was starkly illustrated by a report last week that Amazon.com tracks the productivity of its employees and regularly fires those who underperform, with almost no human intervention.

“Amazon’s system tracks the rates of each individual associate’s productivity and automatically generates any warnings or terminations regarding quality or productivity without input from supervisors,” a law firm representing Amazon said in a letter to the National Labor Relations Board, as first reported by technology news site The Verge. Amazon was responding to a complaint that it had fired an employee from a Baltimore fulfillment center for federally protected activity, which could include union organizing. Amazon said the employee was fired for failing to meet productivity targets.

Perhaps it was only a matter of time before software started firing people. After all, it already screens resumes, recommends job applicants, schedules shifts and assigns projects. In the workplace, “sophisticated technology to track worker productivity on a minute-by-minute or even second-by-second basis is incredibly pervasive,” says Ian Larkin, a business professor at the University of California at Los Angeles specializing in human resources.

Industrial laundry services track how many seconds it takes to press a laundered shirt; on-board computers track truckers’ speed, gear changes and engine revolutions per minute; and checkout terminals at major discount retailers report if the cashier is scanning items quickly enough to meet a preset goal. In all these cases, results are shared in real time with the employee, and used to determine who is terminated, says Mr. Larkin.

Of course, weeding out underperforming employees is a basic function of management. General Electric Co.’s former chief executive Jack Welch regularly culled the company’s underperformers. “In banking and management consulting it is standard to exit about 20% of employees a year, even in good times, using ‘rank and yank’ systems,” says Nick Bloom, an economist at Stanford University specializing in management.

For employees of General Electric, Goldman Sachs Group Inc. and McKinsey & Co., that risk is more than compensated for by the reward of stimulating and challenging work and handsome paychecks. The risk-reward trade-off in industrial laundries, fulfillment centers and discount stores is not nearly so enticing: the work is repetitive and the pay is low. Those who aren’t weeded out one year may be the next if the company raises its productivity targets. Indeed, wage inequality doesn’t fully capture how unequal work has become: enjoyable and secure at the top, monotonous and insecure at the bottom.

At fulfillment centers, employees locate, scan and box all the items in an order. Amazon’s “Associate Development and Performance Tracker,” or Adapt, tracks how each employee performs on these steps against externally established benchmarks and warns employees when they are falling short.

Amazon employees have complained of being monitored continuously — even having bathroom breaks measured — and being held to ever-rising productivity benchmarks. There is no public data to determine if such complaints are more or less common at Amazon than its peers. The company says about 300 employees — roughly 10% of the Baltimore center’s employment level — were terminated for productivity reasons in the year before the law firm’s letter was sent to the NLRB.

Mr. Larkin says 10% is not unusually high. Yet, automating the discipline process, he says, “makes an already difficult job seem even more inhuman and undesirable. Dealing with these tough situations is one of the key roles of managers.”

“Managers make final decisions on all personnel matters,” an Amazon spokeswoman said. “The [Adapt system] simply tracks and ensures consistency of data and process across hundreds of employees to ensure fairness.” The number of terminations has decreased in the last two years at the Baltimore facility and across North America, she said. Termination notices can be appealed.

Companies use these systems because they work well for them.

Mr. Bloom and his co-authors find that companies that more aggressively hire, fire and monitor employees have faster productivity growth. They also have wider gaps between the highest- and lowest-paid employees.

Computers also don’t succumb to the biases managers do. Economists Mitchell Hoffman, Lisa Kahn and Danielle Li looked at how 15 firms used a job-testing technology that tested applicants on computer and technical skills, personality, cognitive skills, fit for the job and various job scenarios. Drawing on past correlations, the algorithm ranked applicants as having high, moderate or low potential. Their study found employees hired against the software’s recommendation were below-average performers: “This suggests that managers often overrule test recommendations because they are biased or mistaken, not only because they have superior private information,” they wrote.
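In outline, such a tool is a supervised model fit to past applicants' test scores and subsequent job outcomes, with its predictions then bucketed into the three tiers. The sketch below is a generic, assumed illustration using scikit-learn with made-up features and outcomes; it is not the vendors' product and the feature names are hypothetical.

```python
# Hedged sketch of the general approach (features and outcomes are simulated):
# fit a model on past hires' test answers and job outcomes, then bucket new
# applicants into low / moderate / high potential.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X_past = rng.normal(size=(500, 6))  # e.g., technical skills, personality, cognition, fit, scenarios
tenure = X_past @ np.array([0.5, 0.3, 0.4, 0.2, 0.1, 0.0]) + rng.normal(0, 1, 500)

model = GradientBoostingRegressor().fit(X_past, tenure)
cutoffs = np.quantile(model.predict(X_past), [0.33, 0.66])

def potential(applicant_scores):
    pred = model.predict(applicant_scores.reshape(1, -1))[0]
    return "low" if pred < cutoffs[0] else "moderate" if pred < cutoffs[1] else "high"

print(potential(rng.normal(size=6)))
```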

Last fall Amazon raised its starting pay to $15 an hour, several dollars more than what the brick-and-mortar stores being displaced by Amazon pay. Ruthless performance tracking is how Amazon ensures employees are productive enough to merit that salary. This also means that, while employees may increasingly be supervised by technology, at least they’re not about to be replaced by it.

Write to Greg Ip at greg.ip@wsj.com

https://www.morningstar.com/news/glbnewscan/TDJNDN_201905017114/for-lowerpaid-workers-the-robot-overlords-have-arrived.html

Thanks to Kebmodee for bringing this to the It’s Interesting community.


Illustrations of electrode placements on the research participants’ neural speech centers, from which activity patterns recorded during speech (colored dots) were translated into a computer simulation of the participant’s vocal tract (model, right) which then could be synthesized to reconstruct the sentence that had been spoken (sound wave & sentence, below). Credit: Chang lab / UCSF Dept. of Neurosurgery

A state-of-the-art brain-machine interface created by UC San Francisco neuroscientists can generate natural-sounding synthetic speech by using brain activity to control a virtual vocal tract—an anatomically detailed computer simulation including the lips, jaw, tongue, and larynx. The study was conducted in research participants with intact speech, but the technology could one day restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage.

Stroke, traumatic brain injury, and neurodegenerative diseases such as Parkinson’s disease, multiple sclerosis, and amyotrophic lateral sclerosis (ALS, or Lou Gehrig’s disease) often result in an irreversible loss of the ability to speak. Some people with severe speech disabilities learn to spell out their thoughts letter-by-letter using assistive devices that track very small eye or facial muscle movements. However, producing text or synthesized speech with such devices is laborious, error-prone, and painfully slow, typically permitting a maximum of 10 words per minute, compared to the 100-150 words per minute of natural speech.

The new system being developed in the laboratory of Edward Chang, MD—described April 24, 2019 in Nature—demonstrates that it is possible to create a synthesized version of a person’s voice that can be controlled by the activity of their brain’s speech centers. In the future, this approach could not only restore fluent communication to individuals with severe speech disability, the authors say, but could also reproduce some of the musicality of the human voice that conveys the speaker’s emotions and personality.

“For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity,” said Chang, a professor of neurological surgery and member of the UCSF Weill Institute for Neuroscience. “This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

Brief animation illustrates how patterns of brain activity from the brain’s speech centers in somatosensory cortex (top left) were first decoded into a computer simulation of a research participant’s vocal tract movements (top right), which were then translated into a synthesized version of the participant’s voice (bottom). Credit: Chang lab / UCSF Dept. of Neurosurgery. Simulated Vocal Tract Animation Credit: Speech Graphics
Virtual Vocal Tract Improves Naturalistic Speech Synthesis

The research was led by Gopala Anumanchipalli, Ph.D., a speech scientist, and Josh Chartier, a bioengineering graduate student in the Chang lab. It builds on a recent study in which the pair described for the first time how the human brain’s speech centers choreograph the movements of the lips, jaw, tongue, and other vocal tract components to produce fluent speech.

From that work, Anumanchipalli and Chartier realized that previous attempts to directly decode speech from brain activity might have met with limited success because these brain regions do not directly represent the acoustic properties of speech sounds, but rather the instructions needed to coordinate the movements of the mouth and throat during speech.

“The relationship between the movements of the vocal tract and the speech sounds that are produced is a complicated one,” Anumanchipalli said. “We reasoned that if these speech centers in the brain are encoding movements rather than sounds, we should try to do the same in decoding those signals.”

In their new study, Anumanchipalli and Chartier asked five volunteers being treated at the UCSF Epilepsy Center—patients with intact speech who had electrodes temporarily implanted in their brains to map the source of their seizures in preparation for neurosurgery—to read several hundred sentences aloud while the researchers recorded activity from a brain region known to be involved in language production.

Based on the audio recordings of participants’ voices, the researchers used linguistic principles to reverse engineer the vocal tract movements needed to produce those sounds: pressing the lips together here, tightening vocal cords there, shifting the tip of the tongue to the roof of the mouth, then relaxing it, and so on.

This detailed mapping of sound to anatomy allowed the scientists to create a realistic virtual vocal tract for each participant that could be controlled by their brain activity. This comprised two “neural network” machine learning algorithms: a decoder that transforms brain activity patterns produced during speech into movements of the virtual vocal tract, and a synthesizer that converts these vocal tract movements into a synthetic approximation of the participant’s voice.
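As a rough illustration of that two-stage design, the sketch below chains two recurrent networks in PyTorch: one mapping neural features to articulator trajectories, the other mapping those trajectories to acoustic features. All layer sizes and feature counts are assumptions for illustration; the published system's architecture and features differ in detail.

```python
# Schematic of the two-stage idea (sizes are illustrative assumptions).
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Stage 1: neural activity (ECoG features over time) -> vocal tract kinematics."""
    def __init__(self, n_electrodes=256, n_articulators=33):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, 128, num_layers=2, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * 128, n_articulators)
    def forward(self, ecog):                 # (batch, time, electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                   # (batch, time, articulators)

class Synthesizer(nn.Module):
    """Stage 2: vocal tract kinematics -> acoustic features to be rendered as audio."""
    def __init__(self, n_articulators=33, n_acoustic=32):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, 128, num_layers=2, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * 128, n_acoustic)
    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

ecog = torch.randn(1, 200, 256)              # one trial, 200 time steps of neural features
speech_features = Synthesizer()(Decoder()(ecog))
print(speech_features.shape)                 # torch.Size([1, 200, 32])
```

Splitting the problem this way means the first network only has to learn movements, which the brain regions in question appear to encode, while the second learns the better-understood mapping from movements to sound.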

The synthetic speech produced by these algorithms was significantly better than synthetic speech directly decoded from participants’ brain activity without the inclusion of simulations of the speakers’ vocal tracts, the researchers found. The algorithms produced sentences that were understandable to hundreds of human listeners in crowdsourced transcription tests conducted on the Amazon Mechanical Turk platform.

As is the case with natural speech, the transcribers were more successful when they were given shorter lists of words to choose from, as would be the case with caregivers who are primed to the kinds of phrases or requests patients might utter. The transcribers accurately identified 69 percent of synthesized words from lists of 25 alternatives and transcribed 43 percent of sentences with perfect accuracy. With a more challenging 50 words to choose from, transcribers’ overall accuracy dropped to 47 percent, though they were still able to understand 21 percent of synthesized sentences perfectly.

“We still have a ways to go to perfectly mimic spoken language,” Chartier acknowledged. “We’re quite good at synthesizing slower speech sounds like ‘sh’ and ‘z’ as well as maintaining the rhythms and intonations of speech and the speaker’s gender and identity, but some of the more abrupt sounds like ‘b’s and ‘p’s get a bit fuzzy. Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

Artificial Intelligence, Linguistics, and Neuroscience Fueled Advance

The researchers are currently experimenting with higher-density electrode arrays and more advanced machine learning algorithms that they hope will improve the synthesized speech even further. The next major test for the technology is to determine whether someone who can’t speak could learn to use the system without being able to train it on their own voice and to make it generalize to anything they wish to say.


Image of an example array of intracranial electrodes of the type used to record brain activity in the current study. Credit: UCSF

Preliminary results from one of the team’s research participants suggest that the researchers’ anatomically based system can decode and synthesize novel sentences from participants’ brain activity nearly as well as the sentences the algorithm was trained on. Even when the researchers provided the algorithm with brain activity data recorded while one participant merely mouthed sentences without sound, the system was still able to produce intelligible synthetic versions of the mimed sentences in the speaker’s voice.

The researchers also found that the neural code for vocal movements partially overlapped across participants, and that one research subject’s vocal tract simulation could be adapted to respond to the neural instructions recorded from another participant’s brain. Together, these findings suggest that individuals with speech loss due to neurological impairment may be able to learn to control a speech prosthesis modeled on the voice of someone with intact speech.

“People who can’t move their arms and legs have learned to control robotic limbs with their brains,” Chartier said. “We are hopeful that one day people with speech disabilities will be able to learn to speak again using this brain-controlled artificial vocal tract.”

Added Anumanchipalli, “I’m proud that we’ve been able to bring together expertise from neuroscience, linguistics, and machine learning as part of this major milestone towards helping neurologically disabled patients.”

https://medicalxpress.com/news/2019-04-synthetic-speech-brain.html


B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought. The image is in the public domain.

Summary: Researchers predict the development of a brain/cloud interface that connects neurons to cloud computing networks in real time.

Source: Frontiers

Imagine a future technology that would provide instant access to the world’s knowledge and artificial intelligence, simply by thinking about a specific topic or question. Communications, education, work, and the world as we know it would be transformed.

Writing in Frontiers in Neuroscience, an international collaboration led by researchers at UC Berkeley and the US Institute for Molecular Manufacturing predicts that exponential progress in nanotechnology, nanomedicine, AI, and computation will lead this century to the development of a “Human Brain/Cloud Interface” (B/CI) that connects neurons and synapses in the brain to vast cloud-computing networks in real time.

Nanobots on the brain

The B/CI concept was initially proposed by futurist-author-inventor Ray Kurzweil, who suggested that neural nanorobots – the brainchild of Robert Freitas, Jr., senior author of the research – could be used to connect the neocortex of the human brain to a “synthetic neocortex” in the cloud. Our wrinkled neocortex is the newest, smartest, ‘conscious’ part of the brain.

Freitas’ proposed neural nanorobots would provide direct, real-time monitoring and control of signals to and from brain cells.

“These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely autoposition themselves among, or even within brain cells,” explains Freitas. “They would then wirelessly transmit encoded information to and from a cloud-based supercomputer network for real-time brain-state monitoring and data extraction.”

The internet of thoughts

This cortex in the cloud would allow “Matrix”-style downloading of information to the brain, the group claims.

“A human B/CI system mediated by neural nanorobotics could empower individuals with instantaneous access to all cumulative human knowledge available in the cloud, while significantly improving human learning capacities and intelligence,” says lead author Dr. Nuno Martins.

B/CI technology might also allow us to create a future “global superbrain” that would connect networks of individual human brains and AIs to enable collective thought.

“While not yet particularly sophisticated, an experimental human ‘BrainNet’ system has already been tested, enabling thought-driven information exchange via the cloud between individual brains,” explains Martins. “It used electrical signals recorded through the skull of ‘senders’ and magnetic stimulation through the skull of ‘receivers,’ allowing for performing cooperative tasks.

“With the advance of neural nanorobotics, we envisage the future creation of ‘superbrains’ that can harness the thoughts and thinking power of any number of humans and machines in real time. This shared cognition could revolutionize democracy, enhance empathy, and ultimately unite culturally diverse groups into a truly global society.”

When can we connect?

According to the group’s estimates, even existing supercomputers have processing speeds capable of handling the necessary volumes of neural data for B/CI – and they’re getting faster, fast.

Rather, transferring neural data to and from supercomputers in the cloud is likely to be the ultimate bottleneck in B/CI development.

“This challenge includes not only finding the bandwidth for global data transmission,” cautions Martins, “but also, how to enable data exchange with neurons via tiny devices embedded deep in the brain.”

One solution proposed by the authors is the use of ‘magnetoelectric nanoparticles’ to effectively amplify communication between neurons and the cloud.

“These nanoparticles have been used already in living mice to couple external magnetic fields to neuronal electric fields – that is, to detect and locally amplify these magnetic signals and so allow them to alter the electrical activity of neurons,” explains Martins. “This could work in reverse, too: electrical signals produced by neurons and nanorobots could be amplified via magnetoelectric nanoparticles, to allow their detection outside of the skull.”

Getting these nanoparticles – and nanorobots – safely into the brain via the circulation would be perhaps the greatest challenge of all in B/CI.

“A detailed analysis of the biodistribution and biocompatibility of nanoparticles is required before they can be considered for human development. Nevertheless, with these and other promising technologies for B/CI developing at an ever-increasing rate, an ‘internet of thoughts’ could become a reality before the turn of the century,” Martins concludes.

https://neurosciencenews.com/internet-thoughts-brain-cloud-interface-11074/

When Amanda Kitts’s car was hit head-on by a Ford F-350 truck in 2006, her arm was damaged beyond repair. “It looked like minced meat,” Kitts, now 50, recalls. She was immediately rushed to the hospital, where doctors amputated what remained of her mangled limb.

While still in the hospital, Kitts discovered that researchers at the Rehabilitation Institute of Chicago (now the Shirley Ryan AbilityLab) were investigating a new technique called targeted muscle reinnervation, which would enable people to control motorized prosthetics with their minds. The procedure, which involves surgically rewiring residual nerves from an amputated limb into a nearby muscle, allows movement-related electrical signals—sent from the brain to the innervated muscles—to move a prosthetic device.
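In engineering terms, the reinnervated muscle acts as a biological amplifier: surface electrodes over it pick up electromyographic (EMG) activity when the user intends a movement, and the prosthesis controller turns that signal into motor commands. The snippet below is a deliberately simplified, assumed illustration of that signal path (rectify, smooth, threshold) using simulated data; it is not the clinical system.

```python
# Illustrative sketch only (assumed signal processing, not the clinic's system):
# rectify, smooth, and threshold EMG from a reinnervated muscle to drive a hand command.
import numpy as np

fs = 1000                                      # sample rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
emg = 0.05 * np.random.randn(t.size)           # baseline noise
emg[500:1200] += 0.4 * np.random.randn(700)    # burst while the user intends "close hand"

def envelope(signal, window_ms=150):
    """Moving-average envelope of the rectified EMG."""
    w = int(window_ms * fs / 1000)
    return np.convolve(np.abs(signal), np.ones(w) / w, mode="same")

env = envelope(emg)
command = np.where(env > 0.15, "close", "open")  # simple threshold controller
print(command[250], command[800])                # 'open' at rest, 'close' during the burst
```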

Kitts immediately enrolled in the study and had the reinnervation surgery around a year after her accident. With her new prosthetic, Kitts regained a functional limb that she could use with her thoughts alone. But something important was missing. “I was able to move a prosthetic just by thinking about it, but I still couldn’t tell if I was holding or letting go of something,” Kitts says. “Sometimes my muscle might contract, and whatever I was holding would drop—so I found myself [often] looking at my arm when I was using it.”

What Kitts’s prosthetic limb failed to provide was a sense of kinesthesia—the awareness of where one’s body parts are and how they are moving. (Kinesthesia is a form of proprioception with a more specific focus on motion than on position.) Taken for granted by most people, kinesthesia is what allows us to unconsciously grab a coffee mug off a desk or to rapidly catch a falling object before it hits the ground. “It’s how we make such nice, elegant, coordinated movements, but you don’t necessarily think about it when it happens,” explains Paul Marasco, a neuroscientist at the Cleveland Clinic in Ohio. “There’s constant and rapid communication that goes on between the muscles and the brain.” The brain sends the intent to move the muscle, the muscle moves, and the awareness of that movement is fed back to the brain.

Prosthetic technology has advanced significantly in recent years, but proprioception is one thing that many of these modern devices still cannot reproduce, Marasco says. And it’s clear that this is something that people find important, he adds, because many individuals with upper-limb amputations still prefer old-school body-powered hook prosthetics. Despite being low tech—the devices work using a bicycle brake–like cable system that’s powered by the body’s own movements—they provide an inherent sense of proprioception.

To restore this sense for amputees who use the more modern prosthetics, Marasco and his colleagues decided to create a device based on what’s known as the kinesthetic illusion: the strange phenomenon in which vibrating a person’s muscle gives her the false sense of movement. A buzz to the triceps will make you think your arm is flexing, while stimulating the biceps will make you feel that it’s extending. The best illustration of this effect is the so-called Pinocchio illusion: holding your nose while someone applies a vibrating device to your bicep will confuse your brain into thinking your nose is growing.

“Your brain doesn’t like conflict,” Marasco explains. So if it thinks “my arm’s moving and I’m holding onto my nose, that must mean my nose is extending.”

To test the device, the team applied vibrations to the reinnervated muscles on six amputee participants’ chests or upper arms and asked them to indicate how they felt their hands were moving. Each amputee reported feeling various hand, wrist, and elbow motions, or “percepts,” in their missing limbs. Kitts, who had met Marasco while taking part in the studies he was involved in at the institute in Chicago, was one of the subjects in the experiment. “The first time I felt the sense of movement was remarkable,” she says.

In total, the experimenters documented 22 different percepts from their participants. “It’s hard to get this sense reliably, so I was encouraged to see the capability of several different subjects to get a reasonable sense of hand position from this illusion,” says Dustin Tyler, a biomedical engineer at Case Western Reserve University who was not involved in the work. He adds that while this is a new, noninvasive approach to proprioception, he and others are also working on devices that restore this sense by stimulating nerves directly with implanted devices.

Marasco and his colleagues then melded the vibration with the movement-controlled prostheses, so that when participants decided to move their artificial limbs, a vibrating stimulus was applied to the muscles to provide them with proprioceptive feedback. When the subjects conducted various movement-related tasks with this new system, their performance significantly improved.

“This was an extremely thorough set of experiments,” says Marcia O’Malley, a biomedical engineer at Rice University who did not take part in that study. “I think it is really promising.”

Although the mechanisms behind the illusion largely remain a mystery, Marasco says, the vibrations may be activating specific muscle receptors that provide the body with a sense of movement. Interestingly, he and his colleagues have found that the “sweet spot” vibration frequency for movement perception is nearly identical in humans and rats—about 90 Hz.

For Kitts, a system that provides proprioceptive feedback means being able to use her prosthetic without constantly watching it—and feeling it instead. “It’s a whole new level of having a real part of your body,” she says.

https://www.the-scientist.com/notebook/vibrations-restore-sense-of-movement-in-prosthetics-64691

DARPA’s new research in brain-computer interfaces is allowing a pilot to control multiple simulated aircraft at once.

A person with a brain chip can now pilot a swarm of drones — or even advanced fighter jets, thanks to research funded by the U.S. military’s Defense Advanced Research Projects Agency, or DARPA.

The work builds on research from 2015, which allowed a paralyzed woman to steer a virtual F-35 Joint Strike Fighter with only a small, surgically implantable microchip. On Thursday, agency officials announced that they had scaled up the technology to allow a user to steer multiple jets at once.

“As of today, signals from the brain can be used to command and control … not just one aircraft but three simultaneous types of aircraft,” said Justin Sanchez, who directs DARPA’s biological technology office, at the Agency’s 60th-anniversary event in Maryland.

More importantly, DARPA was able to improve the interaction between pilot and the simulated jet to allow the operator, a paralyzed man named Nathan, to not just send but receive signals from the craft.

“The signals from those aircraft can be delivered directly back to the brain so that the brain of that user [or pilot] can also perceive the environment,” said Sanchez. “It’s taken a number of years to try and figure this out.”

In essence, it’s the difference between having a brain joystick and having a real telepathic conversation with multiple jets or drones about what’s going on, what threats might be flying over the horizon, and what to do about them. “We’ve scaled it to three [aircraft], and have full sensory [signals] coming back. So you can have those other planes out in the environment and then be detecting something and send that signal back into the brain,” said Sanchez.

The experiment occurred a “handful of months ago,” he said.

It’s another breakthrough in the rapidly advancing field of brain-computer interfaces, or BCIs, for a variety of purposes. The military has been leading interesting research in the field since at least 2007. And in 2012, DARPA issued a $4 million grant to build a non-invasive “synthetic telepathy” interface by placing sensors close to the brain’s motor centers to pick up electrical signals — non-invasively, over the skin.

But the science has advanced rapidly in recent years, allowing for breakthroughs in brain-based communication, control of prosthetic limbs, and even memory repair.

https://www.defenseone.com/technology/2018/09/its-now-possible-telepathically-communicate-drone-swarm/151068/?oref=d-channeltop


Researchers have developed a new deep learning algorithm that can reveal your personality type, based on the Big Five personality trait model, by simply tracking eye movements.

It’s often been said that the eyes are the window to the soul, revealing what we think and how we feel. Now, new research reveals that your eyes may also be an indicator of your personality type, simply by the way they move.

Developed by the University of South Australia in partnership with the University of Stuttgart, Flinders University and the Max Planck Institute for Informatics in Germany, the research uses state-of-the-art machine-learning algorithms to demonstrate a link between personality and eye movements.

Findings show that people’s eye movements reveal whether they are sociable, conscientious or curious, with the algorithm software reliably recognising four of the Big Five personality traits: neuroticism, extroversion, agreeableness, and conscientiousness.

Researchers tracked the eye movements of 42 participants as they undertook everyday tasks around a university campus, and subsequently assessed their personality traits using well-established questionnaires.
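Conceptually, such a pipeline reduces each person's gaze recording to summary statistics and then fits a classifier that maps those statistics to questionnaire-derived trait levels. The snippet below is a schematic, assumed version using scikit-learn with simulated data and hypothetical feature names; it is not the study's implementation.

```python
# Schematic sketch (simulated data, assumed features): gaze statistics -> trait level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_people = 42

# Per-person gaze statistics, e.g. mean fixation duration, saccade amplitude,
# blink rate, pupil diameter variability (all simulated here).
X = rng.normal(size=(n_people, 8))
# Questionnaire-derived trait level, e.g. low/medium/high neuroticism (simulated).
y = rng.integers(0, 3, size=n_people)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level here; real gaze data carries signal
```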

UniSA’s Dr Tobias Loetscher says the study provides new links between previously under-investigated eye movements and personality traits and delivers important insights for emerging fields of social signal processing and social robotics.

“There’s certainly the potential for these findings to improve human-machine interactions,” Dr Loetscher says.

“People are always looking for improved, personalised services. However, today’s robots and computers are not socially aware, so they cannot adapt to non-verbal cues.

“This research provides opportunities to develop robots and computers so that they can become more natural, and better at interpreting human social signals.”

Dr Loetscher says the findings also provide an important bridge between tightly controlled laboratory studies and the study of natural eye movements in real-world environments.

“This research has tracked and measured the visual behaviour of people going about their everyday tasks, providing more natural responses than if they were in a lab.

“And thanks to our machine-learning approach, we not only validate the role of personality in explaining eye movement in everyday life, but also reveal new eye movement characteristics as predictors of personality traits.”

Original Research: Open access research for “Eye Movements During Everyday Behavior Predict Personality Traits” by Sabrina Hoppe, Tobias Loetscher, Stephanie A. Morey and Andreas Bulling in Frontiers in Human Neuroscience. Published April 14 2018.
doi:10.3389/fnhum.2018.00105

https://neurosciencenews.com/ai-personality-9621/