

Aaron Elkins, a professor at San Diego State University, is working on a kiosk system that can ask travelers questions at airports or border crossings and capture their behavior to detect whether someone is lying.

International travelers could find themselves in the near future talking to a lie-detecting kiosk when they’re going through customs at an airport or border crossing.

The same technology could be used to provide initial screening of refugees and asylum seekers at busy border crossings.

The U.S. Department of Homeland Security funded research on the virtual border agent technology, known as the Automated Virtual Agent for Truth Assessments in Real-Time, or AVATAR, about six years ago and allowed it to be tested at the U.S.-Mexico border on travelers who volunteered to participate. Since then, Canada and the European Union have tested the robot-like kiosk, which uses a virtual agent to ask travelers a series of questions.

Last month, a caravan of migrants from Central America made it to the U.S.-Mexico border, where they sought asylum but were delayed several days because the port of entry near San Diego had reached full capacity. It’s possible that a system such as AVATAR could provide initial screening of asylum seekers and others to help U.S. agents at busy border crossings such as San Diego’s San Ysidro.

“The technology has much broader applications potentially,” said Aaron Elkins, one of the developers of the system and an assistant professor at San Diego State University, where he directs its Artificial Intelligence Lab, even though most of the funding for the original work came from the Defense and Homeland Security departments a decade ago. He added that AVATAR is not yet a commercial product but could also be used in human resources for screening.

The U.S.-Mexico border trials with the advanced kiosk took place in Nogales, Arizona, and focused on low-risk travelers. The research team behind the system issued a report after the 2011-12 trials stating that the AVATAR technology had potential uses in processing applications for citizenship, asylum and refugee status and in reducing backlogs.

High levels of accuracy
President Donald Trump’s fiscal 2019 budget request for Homeland Security includes $223 million for “high-priority infrastructure, border security technology improvements,” as well as another $210.5 million for hiring new border agents. Last year, federal workers interviewed or screened more than 46,000 refugee applicants and processed nearly 80,000 “credible fear cases.”

The AVATAR combines artificial intelligence with various sensors and biometrics to flag individuals who are untruthful or pose a potential risk, based on eye movements or changes in voice, posture and facial gestures.

“We’re always consistently above human accuracy,” said Elkins, who worked on the technology with a team of researchers that included the University of Arizona.

According to Elkins, the AVATAR as a deception-detection judge has a success rate of 60 to 75 percent and sometimes up to 80 percent.

“Generally, the accuracy of humans as judges is about 54 to 60 percent at the most,” he said. “And that’s at our best days. We’re not consistent.”

The human element
Regardless, Homeland Security appears to be sticking with human agents for the moment rather than embracing the virtual technology that the EU and Canadian border agencies are still researching. Another advanced border technology, known as iBorderCtrl, is an EU-funded project that aims to increase speed but also reduce “the workload and subjective errors caused by human agents.”

A Homeland Security official, who declined to be named, told CNBC the concept for the AVATAR system “was envisioned by researchers to assist human screeners by flagging people exhibiting suspicious or anomalous behavior.”

“As the research effort matured, the system was evaluated and tested by the DHS Science and Technology Directorate and DHS operational components in 2012,” the official added. “Although the concept was appealing at the time, the research did not mature enough for further consideration or further development.”

Another DHS official familiar with the technology said it didn’t work at a high enough rate of speed to be practical. “We have to screen people within seconds, and we can’t take minutes to do it,” said the official.

Elkins, meanwhile, said the funding for the AVATAR system hasn’t come from Homeland Security in recent years “because they sort of felt that this is in a different category now and needs to transition.”

The technology, which relies on advanced statistics and machine learning, was tested a year and a half ago with the Canadian Border Services Agency, or CBSA, to help agents determine whether a traveler has ulterior motives entering the country and should be questioned further or denied entry.

A report from the CBSA on the AVATAR technology is said to be imminent, but it’s unclear whether the agency will take the technology beyond the testing phase.

“The CBSA has been following developments in AVATAR technology since 2011 and is continuing to monitor developments in this field,” said Barre Campbell, a senior spokesman for the Canadian agency. He said the work carried out in March 2016 was “an internal-only experiment of AVATAR” and that “analysis for this technology is ongoing.”

Prior to that, the EU border agency known as Frontex helped coordinate and sponsor a field test of the AVATAR system in 2014 at the international arrivals section of an airport in Bucharest, Romania.

People and machines working together
Once the system detects deception, it alerts human agents to conduct follow-up interviews.

AVATAR doesn’t use your standard polygraph instrument. Instead, people face a kiosk screen and talk to a virtual agent, while sensors and biometrics built into the kiosk seek to flag individuals who are untruthful or signal a potential risk, based on eye movements or changes in voice, posture and facial gestures.

“Artificial intelligence has allowed us to use sensors that are noncontact that we can then process the signal in really advanced ways,” Elkins said. “We’re able to teach computers to learn from some data and actually act intelligently. The science is very mature over the last five or six years.”
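The published accounts don’t spell out AVATAR’s internals, but the description above implies a familiar pipeline: noncontact sensor readings are reduced to numeric features and fed to a statistical classifier whose risk score decides who gets a follow-up interview. The sketch below is only an illustration of that idea under those assumptions; the feature names and data are invented, and it is not the AVATAR implementation.

```python
# Hypothetical sketch: multimodal sensor features -> risk classifier.
# Feature names and data are invented; NOT the AVATAR system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [pupil_dilation, gaze_shift_rate, vocal_pitch_var, posture_shift, response_delay]
X = rng.normal(size=(500, 5))
# Synthetic labels: 1 = deceptive interview, 0 = truthful interview.
y = (X @ np.array([0.8, 0.5, 0.9, 0.3, 0.6]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)

# In the workflow the article describes, high scores would only trigger
# a follow-up interview by a human agent, not an automatic decision.
risk_scores = clf.predict_proba(X_test)[:, 1]
print("held-out accuracy:", clf.score(X_test, y_test))
print("travelers flagged for follow-up:", int((risk_scores > 0.7).sum()))
```

The point of the sketch is the division of labor described in the article: the model produces a risk score, and a human agent handles the follow-up.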

But the researcher insists the AVATAR technology wasn’t developed as a replacement for people.

“We wanted to let people focus on what they do best,” he said. “Let the systems do what they do best and kind of try to merge them into the process.”

Still, future advances in artificial intelligence may allow the technology to someday supplant various human jobs, because the robot-like machines may be seen as more productive and cost-effective, particularly in screening people.

Elkins believes the AVATAR could one day be used at security checkpoints at airports “to make the screening process faster but also to improve the accuracy.”

“It’s just a matter of finding the right implementation of where it will be and how it will be used,” he said. “There’s also a process that would need to occur because you can’t just drop the AVATAR into an airport as it exists now because all that would be using an extra step.”

https://www.cnbc.com/2018/05/15/lie-detectors-with-artificial-intelligence-are-future-of-border-security.html

Thanks to Kebmodee for bringing this to the It’s Interesting community.


MICROSOFT IS BUYING a deep learning startup based in Montreal, a global hub for deep learning research. But two years ago, this startup wasn’t based in Montreal, and it had nothing to do with deep learning. Which just goes to show: striking it big in the world of tech is all about being in the right place at the right time with the right idea.

Sam Pasupalak and Kaheer Suleman founded Maluuba in 2011 as students at the University of Waterloo, about 400 miles from Montreal. The company’s name is an insider’s nod to one of their undergraduate computer science classes. From an office in Waterloo, they started building something like Siri, the digital assistant that would soon arrive on the iPhone, and they built it in much the same way Apple built the original, using techniques that had driven the development of conversational computing for years—techniques that require extremely slow and meticulous work, where engineers construct AI one tiny piece at a time. But as they toiled away in Waterloo, companies like Google and Facebook embraced deep neural networks, and this technology reinvented everything from image recognition to machine translations, rapidly learning these tasks by analyzing vast amounts of data. Soon, Pasupalak and Suleman realized they should change tack.

In December 2015, the two founders opened a lab in Montreal, and they started recruiting deep learning specialists from places like McGill University and the University of Montreal. Just thirteen months later, after growing to a mere 50 employees, the company sold itself to Microsoft. And that’s not an unusual story. The giants of tech are buying up deep learning startups almost as quickly as they’re created. At the end of December, Uber acquired Geometric Intelligence, a two-year-old AI startup spanning fifteen academic researchers that offered no product and no published research. The previous summer, Twitter paid a reported $150 million for Magic Pony, a two-year-old deep learning startup based in the UK. And in recent months, similarly small, similarly young deep learning companies have disappeared into the likes of General Electric, Salesforce, and Apple.

Microsoft did not disclose how much it paid for Maluuba, but some of these deep learning acquisitions have reached hefty sums, including Intel’s $400 million purchase of Nervana and Google’s $650 million acquisition of DeepMind, the British AI lab that made headlines last spring when it cracked the ancient game of Go, a feat experts didn’t expect for another decade.

At the same time, Microsoft’s buy is a little different than the rest. Maluuba is a deep learning company that focuses on natural language understanding, the ability to not just recognize the words that come out of our mouths but actually understand them and respond in kind—the breed of AI needed to build a good chatbot. Now that deep learning has proven so effective with speech recognition, image recognition, and translation, natural language is the next frontier. “In the past, people had to build large lexicons, dictionaries, ontologies,” Suleman says. “But with neural nets, we no longer need to do that. A neural net can learn from raw data.”

The acquisition is part of an industry-wide race towards digital assistants and chatbots that can converse like a human. Yes, we already have digital assistants like Microsoft Cortana, the Google Search Assistant, Facebook M, and Amazon Alexa. And chatbots are everywhere. But none of these services know how to chat (a particular problem for the chatbots). So, Microsoft, Google, Facebook, and Amazon are now looking at deep learning as a way of improving the state of the art.

Two summers ago, Google published a research paper describing a chatbot underpinned by deep learning that could debate the meaning of life (in a way). Around the same time, Facebook described an experimental system that could read a shortened form of The Lord of the Rings and answer questions about the Tolkien trilogy. Amazon is gathering data for similar work. And, none too surprisingly, Microsoft is gobbling up a startup that only just moved into the same field.

Winning the Game
Deep neural networks are complex mathematical systems that learn to perform discrete tasks by recognizing patterns in vast amounts of digital data. Feed millions of photos into a neural network, for instance, and it can learn to identify objects and people in photos. Pairing these systems with the enormous amounts of computing power inside their data centers, companies like Google, Facebook, and Microsoft have pushed artificial intelligence far further, far more quickly, than they ever could in the past.
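As a scaled-down illustration of that recipe (and not any company’s production system), the sketch below trains a tiny network on synthetic labeled “images” with PyTorch. The data and the network are toy stand-ins; the point is only the loop of feeding labeled examples through a network and adjusting it until it recognizes the patterns.

```python
# Toy illustration of supervised training on labeled "images".
# Synthetic data stands in for the millions of real photos the article mentions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1,000 fake 16x16 grayscale images, 10 object classes.
images = torch.randn(1000, 1, 16, 16)
labels = torch.randint(0, 10, (1000,))

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),              # 16x16 feature maps -> 8x8
    nn.Flatten(),
    nn.Linear(8 * 8 * 8, 10),     # logits for 10 classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # compare predictions to labels
    loss.backward()                        # compute how to adjust the weights
    optimizer.step()                       # adjust them
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```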

Now, these companies hope to reinvent natural language understanding in much the same way. But there are big caveats: It’s a much harder task, and the work has only just begun. “Natural language is an area where more research needs to be done in terms of research, even basic research,” says University of Montreal professor Yoshua Bengio, one of the founding fathers of the deep learning movement and an advisor to Maluuba.

Part of the problem is that researchers don’t yet have the data needed to train neural networks for true conversation, and Maluuba is among those working to fill the void. Like Facebook and Amazon, it’s building brand-new datasets for training natural language models: One involves questions and answers, and the other focuses on conversational dialogue. What’s more, the company is sharing this data with the larger community of researchers and encouraging them to share their own—a common strategy that seeks to accelerate the progress of AI research.

But even with adequate data, the task is quite different from image recognition or translation. Natural language isn’t necessarily something that neural networks can solve on their own. Dialogue isn’t a single task. It’s a series of tasks, each building on the one before. A neural network can’t just identify a pattern in a single piece of data. It must somehow identify patterns across an endless stream of data—and keep a “memory” of this stream. That’s why Maluuba is exploring AI beyond neural networks, including a technique called reinforcement learning.
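Before turning to reinforcement learning, it helps to see what “keeping a memory of the stream” looks like in the simplest textbook setup: a recurrent network that folds each new token into a fixed-size hidden state. The sketch below is generic PyTorch over random tokens, not Maluuba’s architecture.

```python
# Generic sketch of a recurrent network carrying a "memory" across a stream
# of tokens; a textbook pattern, not Maluuba's system.
import torch
import torch.nn as nn

torch.manual_seed(0)

vocab_size, embed_dim, hidden_dim = 100, 32, 64
embed = nn.Embedding(vocab_size, embed_dim)
gru = nn.GRUCell(embed_dim, hidden_dim)

tokens = torch.randint(0, vocab_size, (12,))   # one short (random) utterance
state = torch.zeros(1, hidden_dim)             # the network's "memory"

for t in tokens:
    # Each step folds the new token into a running summary of everything seen so far.
    state = gru(embed(t).unsqueeze(0), state)

print(state.shape)  # torch.Size([1, 64]) -- a fixed-size memory of the whole stream
```

That final state is what a dialogue system could pass to the next stage, such as choosing a reply, which is where reinforcement learning comes in.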

With reinforcement learning, a system repeats the same task over and over again, while carefully keeping tabs on what works and what doesn’t. Engineers at Google’s DeepMind lab used this method in building AlphaGo, the system that topped Korean grandmaster Lee Sedol at the ancient game of Go. In essence, the machine learned to play Go at a higher level than any human by playing game after game against itself, tracking which moves won the most territory on the board. In similar fashion, reinforcement learning can help machines learn to carry on a conversation. Like a game, Bengio says, dialogue is interactive. It’s a back and forth.
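In its simplest tabular form, that loop looks like the sketch below: a toy agent walks a five-cell corridor, repeats the task hundreds of times, and keeps a table of which action works best in each cell. It is nothing like AlphaGo’s scale and is not DeepMind’s code; it only illustrates the repeat-and-keep-score idea described above.

```python
# Minimal tabular Q-learning on a toy task: reach the rightmost cell of a
# 5-cell corridor. Same learn-by-repetition loop, vastly scaled down.
import random

random.seed(0)
n_states, actions = 5, [-1, +1]            # move left or move right
goal = n_states - 1
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma = 0.5, 0.9                    # learning rate, discount factor

for episode in range(200):                 # repeat the same task over and over
    s = 0
    while s != goal:
        a = random.choice(actions)         # explore by acting randomly
        s_next = min(max(s + a, 0), goal)
        reward = 1.0 if s_next == goal else 0.0
        # Keep tabs on what works: nudge the value of (state, action)
        # toward the reward plus the discounted value of what follows.
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: in every cell, moving right (+1) now looks best.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(goal)])
```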

For Microsoft, winning the game of conversation means winning an enormous market. Natural language could streamline practically any computer interface. With this in mind, the company is already building an army of chatbots, but so far, the results are mixed. In China, the company says, its Xiaoice chatbot has been used by 40 million people. But when it first unleashed a similar bot in the US, the service was coaxed into spewing racism, and the replacement is flawed in so many other ways. That’s why Microsoft acquired Maluuba. The startup was in the right place at the right time. And it may carry the right idea.

https://www.wired.com/2017/01/microsoft-thinks-machines-can-learn-converse-chats-become-game/