

Aaron Elkins, a professor at San Diego State University, is working on a kiosk system that can ask travelers questions at airports or border crossings and capture behavioral cues to detect whether someone is lying.

International travelers could find themselves in the near future talking to a lie-detecting kiosk when they’re going through customs at an airport or border crossing.

The same technology could be used to provide initial screening of refugees and asylum seekers at busy border crossings.

The U.S. Department of Homeland Security funded research into the virtual border agent technology, known as the Automated Virtual Agent for Truth Assessments in Real-Time, or AVATAR, about six years ago and allowed it to be tested at the U.S.-Mexico border on travelers who volunteered to participate. Since then, Canada and the European Union have tested the robot-like kiosk, which uses a virtual agent to ask travelers a series of questions.

Last month, a caravan of migrants from Central America made it to the U.S.-Mexico border, where they sought asylum but were delayed several days because the port of entry near San Diego had reached full capacity. It’s possible that a system such as AVATAR could provide initial screening of asylum seekers and others to help U.S. agents at busy border crossings such as San Diego’s San Ysidro.

“The technology has much broader applications potentially,” said Aaron Elkins, one of the developers of the system and an assistant professor at San Diego State University, where he directs the Artificial Intelligence Lab, even though most of the funding for the original work came from the Defense or Homeland Security departments a decade ago. He added that AVATAR is not yet a commercial product but could also be used in human resources for screening.

The U.S.-Mexico border trials with the advanced kiosk took place in Nogales, Arizona, and focused on low-risk travelers. The research team behind the system issued a report after the 2011-12 trials stating that the AVATAR technology had potential uses in processing applications for citizenship, asylum and refugee status and in reducing backlogs.

High levels of accuracy
President Donald Trump’s fiscal 2019 budget request for Homeland Security includes $223 million for “high-priority infrastructure, border security technology improvements,” as well as another $210.5 million for hiring new border agents. Last year, federal workers interviewed or screened more than 46,000 refugee applicants and processed nearly 80,000 “credible fear cases.”

The AVATAR combines artificial intelligence with various sensors and biometrics to flag individuals who are untruthful or a potential risk, based on eye movements or changes in voice, posture and facial gestures.

“We’re always consistently above human accuracy,” said Elkins, who worked on the technology with a research team that included the University of Arizona.

According to Elkins, the AVATAR as a deception-detection judge has a success rate of 60 to 75 percent and sometimes up to 80 percent.

“Generally, the accuracy of humans as judges is about 54 to 60 percent at the most,” he said. “And that’s at our best days. We’re not consistent.”

The human element
Regardless, Homeland Security appears to be sticking with human agents for the moment and not embracing the virtual technology that the EU and Canadian border agencies are still researching. Another advanced border technology, known as iBorderCtrl, is an EU-funded project that aims to increase speed but also reduce “the workload and subjective errors caused by human agents.”

A Homeland Security official, who declined to be named, told CNBC the concept for the AVATAR system “was envisioned by researchers to assist human screeners by flagging people exhibiting suspicious or anomalous behavior.”

“As the research effort matured, the system was evaluated and tested by the DHS Science and Technology Directorate and DHS operational components in 2012,” the official added. “Although the concept was appealing at the time, the research did not mature enough for further consideration or further development.”

Another DHS official familiar with the technology said it didn’t work at a high enough speed to be practical. “We have to screen people within seconds, and we can’t take minutes to do it,” said the official.

Elkins, meanwhile, said the funding for the AVATAR system hasn’t come from Homeland Security in recent years “because they sort of felt that this is in a different category now and needs to transition.”

The technology, which relies on advanced statistics and machine learning, was tested a year and a half ago with the Canadian Border Services Agency, or CBSA, to help agents determine whether a traveler has ulterior motives entering the country and should be questioned further or denied entry.

A report from the CBSA on the AVATAR technology is said to be imminent, but it’s unclear whether the agency will take the technology beyond the testing phase.

“The CBSA has been following developments in AVATAR technology since 2011 and is continuing to monitor developments in this field,” said Barre Campbell, a senior spokesman for the Canadian agency. He said the work carried out in March 2016 was “an internal-only experiment of AVATAR” and that “analysis for this technology is ongoing.”

Prior to that, the EU border agency known as Frontex helped coordinate and sponsor a field test of the AVATAR system in 2014 at the international arrivals section of an airport in Bucharest, Romania.

People and machines working together
Once the system detects deception, it alerts the human agents to do follow-up interviews.

AVATAR doesn’t use a standard polygraph instrument. Instead, people face a kiosk screen and talk to a virtual agent; the kiosk is fitted with various sensors and biometrics that seek to flag individuals who are untruthful or signal a potential risk, based on eye movements or changes in voice, posture and facial gestures.

“Artificial intelligence has allowed us to use sensors that are noncontact that we can then process the signal in really advanced ways,” Elkins said. “We’re able to teach computers to learn from some data and actually act intelligently. The science is very mature over the last five or six years.”
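To make the idea concrete, here is a minimal sketch of how noncontact sensor readings might be turned into a deception classifier with machine learning. AVATAR’s actual features, data and model have not been published, so the feature names, synthetic data and classifier choice below are assumptions for illustration only.

```python
# Illustrative sketch only: AVATAR's real features and model are not public.
# The feature names, synthetic data, and classifier below are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-interview features derived from noncontact sensors:
# pupil-dilation change, vocal-pitch variance, posture-shift count, response latency.
n_interviews = 500
X = rng.normal(size=(n_interviews, 4))
y = rng.integers(0, 2, size=n_interviews)  # 1 = deceptive in labeled training data

clf = GradientBoostingClassifier()
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy on synthetic data: {scores.mean():.2f}")
```

In a real deployment the training labels would come from controlled experiments with known ground truth, and flagged travelers would be routed to a human agent for follow-up questioning, as the article describes.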

But the researcher insists the AVATAR technology wasn’t developed as a replacement for people.

“We wanted to let people focus on what they do best,” he said. “Let the systems do what they do best and kind of try to merge them into the process.”

Still, future advances in artificial intelligence may allow the technology to supplant some human jobs someday, because the robot-like machines may be seen as more productive and cost-effective, particularly in screening people.

Elkins believes the AVATAR could potentially get used one day at security checkpoints at airports “to make the screening process faster but also to improve the accuracy.”

“It’s just a matter of finding the right implementation of where it will be and how it will be used,” he said. “There’s also a process that would need to occur, because you can’t just drop the AVATAR into an airport as it exists now; all that would do is add an extra step.”

https://www.cnbc.com/2018/05/15/lie-detectors-with-artificial-intelligence-are-future-of-border-security.html

Thanks to Kebmodee for bringing this to the It’s Interesting community.


By Jessica Hamzelou

Lies have a tendency to snowball, because the more we lie, the more our brains become desensitised to the act of lying. Could this discovery help prevent dishonesty spiralling out of control? It isn’t difficult to think of someone who has ended up in a tangled web of their own lies. In many cases, the lies start small, but escalate.

Tali Sharot at University College London and her colleagues wondered if a person’s brain might get desensitised to lying, in the same way we get used to the horror of a violent image if we see it enough times. Most people feel guilty when they intentionally deceive someone else, but could this feeling ebb away with practice?

To find out, Sharot and her colleagues set up an experiment that encouraged volunteers to lie. In the task, each person was shown jars of pennies, full to varying degrees. While in a brain scanner, each person had to send their estimate to a partner in another room.

The partner was only shown a blurry low-resolution image of the jar, and so relied on the volunteer’s estimate. In some rounds, a correct answer would mean a financial reward for both the volunteer and their partner. But in others, the volunteer was told that a wrong answer from the partner would result in a higher reward for them, but a lower reward for their partner – and the more incorrect the answer, the greater the personal reward. In other rounds, incorrect answers benefited the partner, but not the volunteer.

Sharot found that her volunteers seemed happy to lie if it meant that their partner would benefit. On each of these rounds, the volunteer lied to the same degree. But when it came to self-serving lies, the volunteer’s dishonesty escalated over time – each lie was greater than the one before. For example, a person might start with a lie that earned them £1, but end up telling untruths worth £8.

Brain scans showed that the first lie was associated with a burst of activity in the amygdalae, areas involved in emotional responses. But this activity lessened as the lies progressed. The effect was so strong that the team could use a person’s amygdala activity while they were lying to predict how big their next lie would be.
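To illustrate the kind of prediction described above, here is a toy sketch that fits a linear model mapping the decline in amygdala signal on the current lie to the size of the next lie. The data, variable names and effect sizes are invented for demonstration and are not taken from the study.

```python
# Toy illustration with synthetic data; not the study's actual analysis or data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical per-trial measurements: drop in amygdala response on the current
# self-serving lie, and the size (in pounds) of the participant's next lie.
amygdala_drop = rng.uniform(0, 1, size=(200, 1))
next_lie_size = 1 + 7 * amygdala_drop.ravel() + rng.normal(0, 0.5, size=200)

model = LinearRegression().fit(amygdala_drop, next_lie_size)
print(f"Predicted next lie after a large signal drop: £{model.predict([[0.9]])[0]:.2f}")
```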

“When you lie or cheat for your own benefit, it makes you feel bad,” says Sophie van der Zee at the Free University of Amsterdam in the Netherlands. “But when you keep doing it, that feeling goes away, so you’re more likely to do it again.”

“This highlights the danger of engaging in small acts of dishonesty,” says Sharot. Frequent liars are also likely to be better at lying, and harder to catch out, she says. That’s because the amygdala is responsible for general emotional arousal, and all the clues we would normally look for in a liar, such as nervous sweating.

Sharot hopes that her research will help us avoid the spiralling of lies. “If you can understand the mechanism, you might be able to nudge people away from dishonesty,” she says.

One way could be by playing on a person’s emotions to boost the level of activity in the amygdala, says Sharot. “For example, if a government wants people to pay their taxes, they might want to make an emotional case for doing so,” she says.

Van der Zee is working with insurance companies to encourage their customers to file honest claims. In her own research, she has found that people are more likely to lie if they feel they have been rejected, so she is working on ways to reduce the number of failed claims. She has also found that people are more likely to fill in claims forms honestly if they sign their name at the top of the page, before they start filling it in, rather than at the end.

Journal reference: Nature Neuroscience, DOI: 10.1038/nn.4426

https://www.newscientist.com/article/2110130-lying-feels-bad-at-first-but-our-brains-soon-adapt-to-deceiving/

David Cameron’s full-bladder technique really does work – but perhaps not in a way that the UK prime minister intends. Before important speeches or negotiations, Cameron keeps his mind focused by refraining from micturating. The technique may be effective – but it also appears to help people to lie more convincingly.

Iris Blandón-Gitlin of California State University in Fullerton and her colleagues asked 22 students to complete a questionnaire on controversial social or moral issues. They were then interviewed by a panel, but instructed to lie about their opinions on two issues they felt strongly about. After completing the questionnaire, and 45 minutes before the interview, in what they were told was an unrelated task, half drank 700 ml of water and the other half 50 ml.

The interviewers detected lies less accurately among those with a full bladder. Subjects who needed to urinate showed fewer signs that they were lying and gave longer, more detailed answers than those who drank less.

The findings build on work by Mirjam Tuk of Imperial College London, whose study in 2011 found that people with full bladders were better able to resist short-term impulses and make decisions that led to bigger rewards in the long run. These findings hinted that different activities requiring self-control share common mechanisms in the brain, and engaging in one type of control could enhance another.

Other research has suggested that we have a natural instinct to tell the truth which must be inhibited when we lie. Blandón-Gitlin was therefore interested to see whether the “inhibitory spillover effect” identified by Tuk would apply to deception.

Although we think of bladder control and other forms of impulse control as different, they involve common neural resources, says Blandón-Gitlin. “They’re subjectively different but in the brain they’re not. They’re not domain-specific. When you activate the inhibitory control network in one domain, the benefits spill over to other tasks.”

Blandón-Gitlin stresses that her study does not suggest that David Cameron would be more deceitful as a consequence of his full bladder technique. But she says that deception might be made easier using the approach – as long as the desire to urinate isn’t overwhelming. “If it’s just enough to keep you on edge, you might be able to focus and be a better liar,” she says.

https://www.newscientist.com/article/dn28199-the-lies-we-tell-are-more-convincing-when-we-need-to-pee/