Posts Tagged ‘artificial intelligence’

New research at Case Western Reserve University could help better determine which patients diagnosed with the pre-malignant breast cancer commonly referred to as stage 0 are likely to progress to invasive breast cancer and therefore might benefit from additional therapy over and above surgery alone.

Once a lumpectomy of breast tissue reveals this pre-cancerous tumor, most women have surgery to remove the remainder of the affected tissue and some are given radiation therapy as well, said Anant Madabhushi, the F. Alex Nason Professor II of Biomedical Engineering at the Case School of Engineering.

“Current testing places patients in high risk, low risk and indeterminate risk—but then treats those indeterminates with radiation, anyway,” said Madabhushi, whose Center for Computational Imaging and Personalized Diagnostics (CCIPD) conducted the new research. “They err on the side of caution, but we’re saying that it appears that it should go the other way—the middle should be classified with the lower risk.

“In short, we’re probably overtreating patients,” Madabhushi continued. “That goes against prevailing wisdom, but that’s what our analysis is finding.”

The most common non-invasive breast cancer

Stage 0 breast cancer, known clinically as ductal carcinoma in situ (DCIS), is the most common type of non-invasive breast cancer; the name indicates that the cancer cell growth starts in the milk ducts.

About 60,000 cases of DCIS are diagnosed in the United States each year, accounting for about one of every five new breast cancer cases, according to the American Cancer Society. Nearly all people with breast cancer that has not spread beyond the breast tissue live at least five years after diagnosis, according to the cancer society.

Lead researcher Haojia Li, a graduate student in the CCIPD, used a computer program to analyze the spatial architecture, texture and orientation of the individual cells and nuclei from scanned and digitized lumpectomy tissue samples from 62 DCIS patients.
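
To make the idea concrete, here is a minimal sketch of what extracting size, orientation and texture features from a digitized tissue slide can look like in Python with scikit-image. The thresholds and the handful of features below are illustrative stand-ins, not the study's actual pipeline.

```python
# Illustrative nuclear-feature extraction from a digitized slide image.
# Thresholds and features are placeholders, not the CCIPD pipeline.
import numpy as np
from skimage import io, color, filters, measure

def nuclear_features(image_path):
    """Segment nuclei crudely, then summarize size and orientation."""
    gray = color.rgb2gray(io.imread(image_path))

    # Nuclei stain darker than surrounding tissue; Otsu gives a rough mask.
    mask = gray < filters.threshold_otsu(gray)
    labels = measure.label(mask)

    regions = [r for r in measure.regionprops(labels, intensity_image=gray)
               if r.area >= 50]                     # drop specks of debris

    angles = np.array([r.orientation for r in regions])
    return {
        "n_nuclei": len(regions),
        "mean_area": float(np.mean([r.area for r in regions])),
        "mean_intensity": float(np.mean([r.mean_intensity for r in regions])),
        # Low angular spread means neighboring nuclei point the same way,
        # the kind of architectural regularity the study quantified.
        "orientation_disorder": float(np.std(angles)),
    }
```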

The result: Both the size and orientation of the tumors characterized as “indeterminate” were actually much closer to those confirmed as low risk for recurrence by an expensive genetic test called Oncotype DX.

Li then validated the features that distinguished the low- and high-risk Oncotype groups by testing their ability to predict the likelihood of progression from DCIS to invasive ductal carcinoma in an independent set of 30 patients.

“This could be a tool for determining who really needs the radiation, or who needs the gene test, which is also very expensive,” she said.

The research led by Li was published Oct. 17 in the journal Breast Cancer Research.

Madabhushi established the CCIPD at Case Western Reserve in 2012. The lab, which now includes nearly 60 researchers, has become a global leader in the detection, diagnosis and characterization of various cancers and other diseases, including breast cancer, by meshing medical imaging, machine learning and artificial intelligence (AI).

Some of the lab’s most recent work, in collaboration with New York University and Yale University, has used AI to predict which lung cancer patients would benefit from adjuvant chemotherapy based on tissue slide images. That advancement was named by Prevention Magazine as one of the top 10 medical breakthroughs of 2018.

AI at Case Western Reserve lab predicts which pre-malignant breast lesions will progress to invasive cancer

By Sigal Samuel

A new priest named Mindar is holding forth at Kodaiji, a 400-year-old Buddhist temple in Kyoto, Japan. Like other clergy members, this priest can deliver sermons and move around to interface with worshippers. But Mindar comes with some … unusual traits. A body made of aluminum and silicone, for starters.

Mindar is a robot.

Designed to look like Kannon, the Buddhist deity of mercy, the $1 million machine is an attempt to reignite people’s passion for their faith in a country where religious affiliation is on the decline.

For now, Mindar is not AI-powered. It just recites the same preprogrammed sermon about the Heart Sutra over and over. But the robot’s creators say they plan to give it machine-learning capabilities that’ll enable it to tailor feedback to worshippers’ specific spiritual and ethical problems.

“This robot will never die; it will just keep updating itself and evolving,” said Tensho Goto, the temple’s chief steward. “With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It’s changing Buddhism.”

Robots are changing other religions, too. In 2017, Indians rolled out a robot that performs the Hindu aarti ritual, which involves moving a light round and round in front of a deity. That same year, in honor of the Protestant Reformation’s 500th anniversary, Germany’s Protestant Church created a robot called BlessU-2. It gave preprogrammed blessings to over 10,000 people.

Then there’s SanTO — short for Sanctified Theomorphic Operator — a 17-inch-tall robot reminiscent of figurines of Catholic saints. If you tell it you’re worried, it’ll respond by saying something like, “From the Gospel according to Matthew, do not worry about tomorrow, for tomorrow will worry about itself. Each day has enough trouble of its own.”

Roboticist Gabriele Trovato designed SanTO to offer spiritual succor to elderly people whose mobility and social contact may be limited. Next, he wants to develop devices for Muslims, though it remains to be seen what form those might take.

As more religious communities begin to incorporate robotics — in some cases, AI-powered and in others, not — it stands to change how people experience faith. It may also alter how we engage in ethical reasoning and decision-making, which is a big part of religion.

For the devout, there’s plenty of positive potential here: Robots can get disinterested people curious about religion or allow for a ritual to be performed when a human priest is inaccessible. But robots also pose risks for religion — for example, by making it feel too mechanized or homogenized or by challenging core tenets of theology. On the whole, will the emergence of AI religion make us better or worse off? The answer depends on how we design and deploy it — and on whom you ask.

Some cultures are more open to religious robots than others
New technologies often make us uncomfortable. Which ones we ultimately accept — and which ones we reject — is determined by an array of factors, ranging from our degree of exposure to the emerging technology to our moral presuppositions.

Japanese worshippers who visit Mindar are reportedly not too bothered by questions about the risks of siliconizing spirituality. That makes sense given that robots are already so commonplace in the country, including in the religious domain.

For years now, people who can’t afford to pay a human priest to perform a funeral have had the option to pay a robot named Pepper to do it at a much cheaper rate. And in China, at Beijing’s Longquan Monastery, an android monk named Xian’er recites Buddhist mantras and offers guidance on matters of faith.

What’s more, Buddhism’s non-dualistic metaphysical notion that everything has inherent “Buddha nature” — that all beings have the potential to become enlightened — may predispose its adherents to be receptive to spiritual guidance that comes from technology.

At the temple in Kyoto, Goto put it like this: “Buddhism isn’t a belief in a God; it’s pursuing Buddha’s path. It doesn’t matter whether it’s represented by a machine, a piece of scrap metal, or a tree.”

“Mindar’s metal skeleton is exposed, and I think that’s an interesting choice — its creator, Hiroshi Ishiguro, is not trying to make something that looks totally human,” said Natasha Heller, an associate professor of Chinese religions at the University of Virginia. She told me the deity Kannon, upon whom Mindar is based, is an ideal candidate for cyborgization because the Lotus Sutra explicitly says Kannon can manifest in different forms — whatever forms will best resonate with the humans of a given time and place.

Westerners seem more disturbed by Mindar, likening it to Frankenstein’s monster. In Western economies, we don’t yet have robots enmeshed in many aspects of our lives. What we do have is a pervasive cultural narrative, reinforced by Hollywood blockbusters, about our impending enslavement at the hands of “robot overlords.”

Plus, Abrahamic religions like Islam or Judaism tend to be more metaphysically dualistic — there’s the sacred and then there’s the profane. And they have more misgivings than Buddhism about visually depicting divinity, so they may take issue with Mindar-style iconography.

They also have different ideas about what makes a religious practice effective. For example, Judaism places a strong emphasis on intentionality, something machines don’t possess. When a worshipper prays, what matters is not just that their mouth forms the right words — it’s also very important that they have the right intention.

Meanwhile, some Buddhists use prayer wheels containing scrolls printed with sacred words and believe that spinning the wheel has its own spiritual efficacy, even if nobody recites the words aloud. In hospice settings, elderly Buddhists who don’t have people on hand to recite prayers on their behalf will use devices known as nianfo ji — small machines about the size of an iPhone, which recite the name of the Buddha endlessly.

Despite such theological differences, it’s ironic that many Westerners have a knee-jerk negative reaction to a robot like Mindar. The dream of creating artificial life goes all the way back to ancient Greece, where the ancients actually invented real animated machines, as the Stanford classicist Adrienne Mayor has documented in her book Gods and Robots. And there is a long tradition of religious robots in the West.

In the Middle Ages, Christians designed automata to perform the mysteries of Easter and Christmas. One proto-roboticist in the 16th century designed a mechanical monk that is, amazingly, performing ritual gestures to this day. With his right arm, he strikes his chest in a mea culpa; with his left, he raises a rosary to his lips.

In other words, the real novelty is not the use of robots in the religious domain but the use of AI.

How AI may change our theology and ethics
Even as our theology shapes the AI we create and embrace, AI will also shape our theology. It’s a two-way street.

Some people believe AI will force a truly momentous change in theology, because if humans create intelligent machines with free will, we’ll eventually have to ask whether they have something functionally similar to a soul.

“There will be a point in the future when these free-willed beings that we’ve made will say to us, ‘I believe in God. What do I do?’ At that point, we should have a response,” said Kevin Kelly, a Christian co-founder of Wired magazine who argues we need to develop “a catechism for robots.”

Other people believe that, rather than seeking to join a human religion, AI itself will become an object of worship. Anthony Levandowski, the Silicon Valley engineer who triggered a major Uber/Waymo lawsuit, has set up the first church of artificial intelligence, called Way of the Future. Levandowski’s new religion is dedicated to “the realization, acceptance, and worship of a Godhead based on artificial intelligence (AI) developed through computer hardware and software.”

Meanwhile, Ilia Delio, a Franciscan sister who holds two PhDs and a chair in theology at Villanova University, told me AI may also force a traditional religion like Catholicism to reimagine its understanding of human priests as divinely called and consecrated — a status that grants them special authority.

“The Catholic notion would say the priest is ontologically changed upon ordination. Is that really true?” she asked. Maybe priestliness is not an esoteric essence but a programmable trait that even a “fallen” creation like a robot can embody. “We have these fixed philosophical ideas and AI challenges those ideas — it challenges Catholicism to move toward a post-human priesthood.” (For now, she joked, a robot would probably do better as a Protestant.)

Then there are questions about how robotics will change our religious experiences. Traditionally, those experiences are valuable in part because they leave room for the spontaneous and surprising, the emotional and even the mystical. That could be lost if we mechanize them.

To visualize an automated ritual, take a look at the video, linked in the original article, of a robotic arm performing a Hindu aarti ceremony.

Another risk has to do with how an AI priest would handle ethical queries and decision-making. Robots whose algorithms learn from previous data may nudge us toward decisions based on what people have done in the past, incrementally homogenizing answers to our queries and narrowing the scope of our spiritual imagination.

That risk also exists with human clergy, Heller pointed out: “The clergy is bounded too — there’s already a built-in nudging or limiting factor, even without AI.”

But AI systems can be particularly problematic in that they often function as black boxes. We typically don’t know what sorts of biases are coded into them or what sorts of human nuance and context they’re failing to understand.

Let’s say you tell a robot you’re feeling depressed because you’re unemployed and broke, and the only job that’s available to you seems morally odious. Maybe the robot responds by reciting a verse from Proverbs 14: “In all toil there is profit, but mere talk tends only to poverty.” Even if it doesn’t presume to interpret the verse for you, in choosing that verse it’s already doing hidden interpretational work. It’s analyzing your situation and algorithmically determining a recommendation — in this case, one that may prompt you to take the job.

But perhaps it would’ve worked out better for you if the robot had recited a verse from Proverbs 16: “Commit your work to the Lord, and your plans will be established.” Maybe that verse would prompt you to pass on the morally dubious job, and, being a sensitive soul, you’ll later be happy you did. Or maybe your depression is severe enough that the job issue is somewhat beside the point and the crucial thing is for you to seek out mental health treatment.

A human priest who knows your broader context as a whole person may gather this and give you the right recommendation. An android priest might miss the nuances and just respond to the localized problem as you’ve expressed it.

The fact is human clergy members do so much more than provide answers. They serve as the anchor for a community, bringing people together. They offer pastoral care. And they provide human contact, which is in danger of becoming a luxury good as we create robots to more cheaply do the work of people.

On the other hand, Delio said, robots can excel in a social role in some ways that human priests might not. “Take the Catholic Church. It’s very male, very patriarchal, and we have this whole sexual abuse crisis. So would I want a robot priest? Maybe!” she said. “A robot can be gender-neutral. It might be able to transcend some of those divides and be able to enhance community in a way that’s more liberating.”

Ultimately, in religion as in other domains, robots and humans are perhaps best understood not as competitors but as collaborators. Each offers something the other lacks.

As Delio put it, “We tend to think in an either/or framework: It’s either us or the robots. But this is about partnership, not replacement. It can be a symbiotic relationship — if we approach it that way.”

https://www.vox.com/future-perfect/2019/9/9/20851753/ai-religion-robot-priest-mindar-buddhism-christianity

Thanks to Kebmodee for bringing this to the It’s Interesting community.

by George Dvorsky

Fancy algorithms capable of solving a Rubik’s Cube have appeared before, but a new system from the University of California, Irvine uses artificial intelligence to solve the 3D puzzle from scratch and without any prior help from humans—and it does so with impressive speed and efficiency.

New research published this week in Nature Machine Intelligence describes DeepCubeA, a system capable of solving any jumbled Rubik’s Cube it’s presented with. More impressively, it can find the most efficient path to success—that is, the solution requiring the fewest moves—around 60 percent of the time. On average, DeepCubeA needed just 28 moves to solve the puzzle, requiring 1.2 seconds to calculate the solution.

Sounds fast, but other systems have solved the 3D puzzle in less time, including a robot that can solve the Rubik’s Cube in just 0.38 seconds. But these systems were specifically designed for the task, using human-scripted algorithms to solve the puzzle in the most efficient manner possible. DeepCubeA, on the other hand, taught itself to solve the Rubik’s Cube using an approach to artificial intelligence known as reinforcement learning.

“Artificial intelligence can defeat the world’s best human chess and Go players, but some of the more difficult puzzles, such as the Rubik’s Cube, had not been solved by computers, so we thought they were open for AI approaches,” said Pierre Baldi, the senior author of the new paper, in a press release. “The solution to the Rubik’s Cube involves more symbolic, mathematical and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan and make decisions.”

Indeed, an expert system designed for one task and one task only—like solving a Rubik’s Cube—will forever be limited to that domain, but a system like DeepCubeA, with its highly adaptable neural net, could be leveraged for other tasks, such as solving complex scientific, mathematical, and engineering problems. What’s more, this system “is a small step toward creating agents that are able to learn how to think and plan for themselves in new environments,” Stephen McAleer, a co-author of the new paper, told Gizmodo.

Reinforcement learning works the way it sounds. Systems are motivated to achieve a designated goal, during which time they gain points for deploying successful actions or strategies, and lose points for straying off course. This allows the algorithms to improve over time, and without human intervention.
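
To make that loop concrete, here is a toy tabular Q-learning example in Python: an agent on a five-state chain earns a reward for reaching the goal and a small penalty for every other step. DeepCubeA itself uses deep neural networks and search rather than a lookup table; only the incentive structure is the same.

```python
# Toy illustration of the reward loop described above: tabular Q-learning
# on a tiny chain environment.
import random

N_STATES, GOAL = 5, 4          # states 0..4, goal at the right end
ACTIONS = (-1, +1)             # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else -0.01   # reward success, penalize wandering
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the learned policy heads straight for the goal.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)})
```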

Reinforcement learning makes sense for the Rubik’s Cube, owing to the hideous number of possible combinations on the 3x3x3 puzzle, which amount to around 43 quintillion. Choosing random moves in the hope of solving the cube is simply not going to work, for humans or for the world’s most powerful supercomputers.

DeepCubeA is not the first kick at the can for these University of California, Irvine researchers. Their earlier system, called DeepCube, used a conventional tree-search strategy and a reinforcement learning scheme similar to the one employed by DeepMind’s AlphaZero. But while this approach works well for one-on-one board games like chess and Go, it proved clumsy for Rubik’s Cube. In tests, the DeepCube system required too much time to make its calculations, and its solutions were often far from ideal.

The UCI team used a different approach with DeepCubeA. Starting with a solved cube, the system made random moves to scramble the puzzle. Basically, it learned to be proficient at Rubik’s Cube by playing it in reverse. At first the moves were few, but the jumbled state got more and more complicated as training progressed. In all, DeepCubeA played 10 billion different combinations in two days as it worked to solve the cube in less than 30 moves.
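
A rough sketch of that reverse-training recipe is below, with a hypothetical `Cube` stand-in; the authors’ actual code and state representation are not reproduced here.

```python
# Sketch of the reverse-training recipe: generate practice positions by
# scrambling a solved cube, scrambling deeper as training progresses.
# `Cube` is a hypothetical stand-in; a real one would permute stickers.
import random

class Cube:
    MOVES = [face + turn for face in "UDLRFB" for turn in ("", "'")]

    def __init__(self):
        self.history = []          # stand-in for real sticker state

    def apply(self, move):
        self.history.append(move)  # a real implementation mutates stickers

def training_batch(scramble_depth, batch_size=32):
    """Return (state, k) pairs; k crudely upper-bounds the cost-to-go."""
    batch = []
    for _ in range(batch_size):
        cube = Cube()
        k = random.randint(1, scramble_depth)
        for _ in range(k):
            cube.apply(random.choice(Cube.MOVES))
        batch.append((tuple(cube.history), k))
    return batch

# Curriculum: light scrambles first, near-fully-mixed states by the end.
for depth in range(1, 31):
    batch = training_batch(depth)
    # value_net.fit(...)  # hypothetical: train a net to estimate cost-to-go
```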

“DeepCubeA attempts to solve the cube using the least number of moves,” explained McAleer. “Consequently, the moves tend to look much different from how a human would solve the cube.”

After training, the system was tasked with solving 1,000 randomly scrambled Rubik’s Cubes. In tests, DeepCubeA found a solution to 100 percent of all cubes, and it found a shortest path to the goal state 60.3 percent of the time. The system required 28 moves on average to solve the cube, which it did in about 1.2 seconds. By comparison, the fastest human puzzle solvers require around 50 moves.

“Since we found that DeepCubeA is solving the cube in the fewest moves 60 percent of the time, it’s pretty clear that the strategy it is using is close to the optimal strategy, colloquially referred to as God’s algorithm,” study co-author Forest Agostinelli told Gizmodo. “While human strategies are easily explainable with step-by-step instructions, defining an optimal strategy often requires sophisticated knowledge of group theory and combinatorics. Though mathematically defining this strategy is not in the scope of this paper, we can see that the strategy DeepCubeA is employing is one that is not readily obvious to humans.”

To showcase the flexibility of the system, DeepCubeA was also taught to solve other puzzles, including sliding-tile puzzle games, Lights Out, and Sokoban, which it did with similar proficiency.

https://gizmodo.com/self-taught-ai-masters-rubik-s-cube-without-human-help-1836420294


Two-photon imaging shows neurons firing in a mouse brain. Recordings like this enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors. The image is credited to Yiyang Gong, Duke University.

Summary: Convolutional neural network model significantly outperforms previous methods and is as accurate as humans in segmenting active and overlapping neurons.

Source: Duke University

Biomedical engineers at Duke University have developed an automated process that can trace the shapes of active neurons as accurately as human researchers can, but in a fraction of the time.

This new technique, based on using artificial intelligence to interpret video images, addresses a critical roadblock in neuron analysis, allowing researchers to rapidly gather and process neuronal signals for real-time behavioral studies.

The research appeared this week in the Proceedings of the National Academy of Sciences.

To measure neural activity, researchers typically use a process known as two-photon calcium imaging, which allows them to record the activity of individual neurons in the brains of live animals. These recordings enable researchers to track which neurons are firing, and how they potentially correspond to different behaviors.

While these measurements are useful for behavioral studies, identifying individual neurons in the recordings is a painstaking process. Currently, the most accurate method requires a human analyst to circle every ‘spark’ they see in the recording, often requiring them to stop and rewind the video until the targeted neurons are identified and saved. To further complicate the process, investigators are often interested in identifying only a small subset of active neurons that overlap in different layers within the thousands of neurons that are imaged.

This process, called segmentation, is fussy and slow. A researcher can spend anywhere from four to 24 hours segmenting neurons in a 30-minute video recording, and that’s assuming they’re fully focused for the duration and don’t take breaks to sleep, eat or use the bathroom.

In contrast, a new open source automated algorithm developed by image processing and neuroscience researchers in Duke’s Department of Biomedical Engineering can accurately identify and segment neurons in minutes.

“As a critical step towards complete mapping of brain activity, we were tasked with the formidable challenge of developing a fast automated algorithm that is as accurate as humans for segmenting a variety of active neurons imaged under different experimental settings,” said Sina Farsiu, the Paul Ruffin Scarborough Associate Professor of Engineering in Duke BME.

“The data analysis bottleneck has existed in neuroscience for a long time — data analysts have spent hours and hours processing minutes of data, but this algorithm can process a 30-minute video in 20 to 30 minutes,” said Yiyang Gong, an assistant professor in Duke BME. “We were also able to generalize its performance, so it can operate equally well if we need to segment neurons from another layer of the brain with different neuron size or densities.”

“Our deep learning-based algorithm is fast, and is demonstrated to be as accurate as (if not better than) human experts in segmenting active and overlapping neurons from two-photon microscopy recordings,” said Somayyeh Soltanian-Zadeh, a PhD student in Duke BME and first author on the paper.

Deep-learning algorithms allow researchers to quickly process large amounts of data by sending it through multiple layers of nonlinear processing units, which can be trained to identify different parts of a complex image. In their framework, this team created an algorithm that could process both spatial and timing information in the input videos. They then ‘trained’ the algorithm to mimic the segmentation of a human analyst while improving the accuracy.
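
As an illustration of what processing “both spatial and timing information” can mean, here is a minimal PyTorch sketch in which a short video clip passes through 3D convolutions and is collapsed over time into a single per-pixel mask of active neurons. The published network is different; this only shows the spatiotemporal idea.

```python
# Minimal sketch: a clip goes through 3D convolutions (space + time) and
# comes out as a per-pixel probability mask of active neurons.
import torch
import torch.nn as nn

class SpatioTemporalSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # input: (B, 1, T, H, W)
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Collapse the time axis so the output is a single 2D mask.
        self.head = nn.Conv2d(32, 1, kernel_size=1)

    def forward(self, clip):                      # clip: (B, 1, T, H, W)
        feats = self.encode(clip)                 # (B, 32, T, H, W)
        pooled = feats.max(dim=2).values          # max over time -> (B, 32, H, W)
        return torch.sigmoid(self.head(pooled))   # (B, 1, H, W) in [0, 1]

model = SpatioTemporalSegmenter()
clip = torch.randn(1, 1, 8, 64, 64)               # 8-frame, 64x64 toy recording
mask = model(clip)                                # per-pixel neuron probability
print(mask.shape)                                 # torch.Size([1, 1, 64, 64])
```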

The advance is a critical step towards allowing neuroscientists to track neural activity in real time. Because of their tool’s widespread usefulness, the team has made their software and annotated dataset available online.

Gong is already using the new method to more closely study the neural activity associated with different behaviors in mice. By better understanding which neurons fire for different activities, Gong hopes to learn how researchers can manipulate brain activity to modify behavior.

“This improved performance in active neuron detection should provide more information about the neural network and behavioral states, and open the door for accelerated progress in neuroscience experiments,” said Soltanian-Zadeh.

https://neurosciencenews.com/artificial-intelligence-neurons-11076/


Researchers at the University of North Carolina School of Medicine used machine learning techniques on MRI brain scans taken at birth to predict cognitive development at age 2 years with 95 percent accuracy.

“This prediction could help identify children at risk for poor cognitive development shortly after birth with high accuracy,” said senior author John H. Gilmore, MD, Thad and Alice Eure Distinguished Professor of psychiatry and director of the UNC Center for Excellence in Community Mental Health. “For these children, an early intervention in the first year or so of life – when cognitive development is happening – could help improve outcomes. For example, in premature infants who are at risk, one could use imaging to see who could have problems.”

The study, which was published online by the journal NeuroImage, used an application of artificial intelligence called machine learning to look at white matter connections in the brain at birth and the ability of these connections to predict cognitive outcomes.
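
In schematic terms, an analysis of this kind treats each infant as a vector of white matter connection strengths and fits a model that maps those features to a later cognitive score. The sketch below uses synthetic data and an off-the-shelf scikit-learn regressor purely for illustration; it is not the study’s pipeline.

```python
# Hedged sketch: connectome features per infant -> later cognitive score.
# All data here is synthetic; only the general recipe is illustrated.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_infants, n_connections = 80, 300                 # illustrative sizes only
X = rng.normal(size=(n_infants, n_connections))    # white matter connection strengths
true_w = rng.normal(size=n_connections) * (rng.random(n_connections) < 0.05)
y = X @ true_w + rng.normal(scale=0.5, size=n_infants)  # cognitive score at age 2

model = RidgeCV(alphas=np.logspace(-2, 3, 20))     # regularized linear predictor
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```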

Gilmore said researchers at UNC and elsewhere are working to find imaging biomarkers of risk for poor cognitive outcomes and for risk of neuropsychiatric conditions such as autism and schizophrenia. In this study, the researchers replicated the initial finding in a second sample of children who were born prematurely.

“Our study finds that the white matter network at birth is highly predictive and may be a useful imaging biomarker. The fact that we could replicate the findings in a second set of children provides strong evidence that this may be a real and generalizable finding,” he said.

Jessica B. Girault, PhD, a postdoctoral researcher at the Carolina Institute for Developmental Disabilities, is the study’s lead author. UNC co-authors are Barbara D. Goldman, PhD, of UNC’s Frank Porter Graham Child Development Institute, Juan C. Prieto, PhD, assistant professor, and Martin Styner, PhD, director of the Neuro Image Research and Analysis Laboratory in the department of psychiatry.

https://neurosciencenews.com/ai-mri-cognitive-development-10904/

by Isobel Asher Hamilton

– China’s state press agency has developed what it calls “AI news anchors,” avatars of real-life news presenters that read out news as it is typed.

– It developed the anchors with the Chinese search-engine giant Sogou.

– No details were given as to how the anchors were made, and one expert said they fell into the “uncanny valley,” in which avatars have an unsettling resemblance to humans.

China’s state-run press agency, Xinhua, has unveiled what it claims are the world’s first news anchors generated by artificial intelligence.

Xinhua revealed two virtual anchors at the World Internet Conference on Thursday. Both were modeled on real presenters; one speaks Chinese and the other speaks English.

“AI anchors have officially become members of the Xinhua News Agency reporting team,” Xinhua told the South China Morning Post. “They will work with other anchors to bring you authoritative, timely, and accurate news information in both Chinese and English.”

In a post, Xinhua said the generated anchors could work “24 hours a day” on its website and various social-media platforms, “reducing news production costs and improving efficiency.”

Xinhua developed the virtual anchors with Sogou, China’s second-biggest search engine. No details were given about how they were made.

Though Xinhua presents the avatars as independently learning from “live broadcasting videos,” the avatars do not appear to rely on true artificial intelligence, as they simply read text written by humans.

“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted,” the English-speaking anchor says in its first video, using a synthesized voice.

The Oxford computer-science professor Michael Wooldridge told the BBC that the anchor fell into the “uncanny valley,” in which avatars or objects that closely but do not fully resemble humans make observers more uncomfortable than ones that are more obviously artificial.

https://www.businessinsider.com/ai-news-anchor-created-by-china-xinhua-news-agency-2018-11


Researchers have developed a new deep learning algorithm that can reveal your personality type, based on the Big Five personality trait model, by simply tracking eye movements.

It’s often been said that the eyes are the window to the soul, revealing what we think and how we feel. Now, new research reveals that your eyes may also be an indicator of your personality type, simply by the way they move.

Developed by the University of South Australia in partnership with the University of Stuttgart, Flinders University and the Max Planck Institute for Informatics in Germany, the research uses state-of-the-art machine-learning algorithms to demonstrate a link between personality and eye movements.

Findings show that people’s eye movements reveal whether they are sociable, conscientious or curious, with the algorithm software reliably recognising four of the Big Five personality traits: neuroticism, extroversion, agreeableness, and conscientiousness.

Researchers tracked the eye movements of 42 participants as they undertook everyday tasks around a university campus, and subsequently assessed their personality traits using well-established questionnaires.
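
The overall recipe can be sketched in a few lines: condense each participant’s gaze recording into summary statistics, then train a classifier against the questionnaire-derived trait labels. Everything below (the features, the data, the classifier choice) is invented for illustration; only the general approach mirrors the study.

```python
# Illustrative sketch: gaze summary statistics -> personality trait level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_features(fixation_ms, saccade_amp_deg, blinks_per_min):
    """Summary statistics from one participant's eye-tracking record."""
    return [
        np.mean(fixation_ms), np.std(fixation_ms),          # how long gaze dwells
        np.mean(saccade_amp_deg), np.std(saccade_amp_deg),  # how far it jumps
        blinks_per_min,
    ]

rng = np.random.default_rng(1)
X = np.array([gaze_features(rng.gamma(2, 120, 200),   # synthetic recordings
                            rng.rayleigh(4, 199),
                            rng.uniform(5, 25))
              for _ in range(42)])                    # 42 participants, as in study
y = rng.integers(0, 2, size=42)    # e.g. low vs. high on one trait (questionnaire)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```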

UniSA’s Dr Tobias Loetscher says the study provides new links between previously under-investigated eye movements and personality traits and delivers important insights for emerging fields of social signal processing and social robotics.

“There’s certainly the potential for these findings to improve human-machine interactions,” Dr Loetscher says.

“People are always looking for improved, personalised services. However, today’s robots and computers are not socially aware, so they cannot adapt to non-verbal cues.

“This research provides opportunities to develop robots and computers so that they can become more natural, and better at interpreting human social signals.”

Dr Loetscher says the findings also provide an important bridge between tightly controlled laboratory studies and the study of natural eye movements in real-world environments.

“This research has tracked and measured the visual behaviour of people going about their everyday tasks, providing more natural responses than if they were in a lab.

“And thanks to our machine-learning approach, we not only validate the role of personality in explaining eye movement in everyday life, but also reveal new eye movement characteristics as predictors of personality traits.”

Original Research: Open access research for “Eye Movements During Everyday Behavior Predict Personality Traits” by Sabrina Hoppe, Tobias Loetscher, Stephanie A. Morey and Andreas Bulling in Frontiers in Human Neuroscience. Published April 14, 2018.
doi:10.3389/fnhum.2018.00105

https://neurosciencenews.com/ai-personality-9621/