DHS Embraces Its AI Chatbot's Hallucinations to Simulate Real-World Inconsistencies
A Department of Homeland Security (DHS) pilot aimed at training officers to conduct interviews with individuals seeking refugee status intentionally preserves inconsistencies in the AI's output to simulate real-world interviews more effectively.
At the ATO and Cloud Security Summit, Michael Boyce, director of the DHS AI Corps, said that U.S. Citizenship and Immigration Services (USCIS), a component of the department, is using generative AI tools for this purpose, running simulated interviews that closely resemble actual conversations with asylum seekers.
“Generative AI will pretend to be a refugee applicant and give them answers, new answers, to practice the three-hour-long interview with an automated system. When we think about bias, well, in this particular case, I actually want this generative AI system to pretend to talk about very toxic potential events,” Boyce said.
“I also want them to hallucinate and I want them to be a little inaccurate, because you're often, in real life, working with an interpreter, and there's a lot of confusion and a lot of sort of dropped things, or things that don’t quite line up or make perfect sense,” he added.
DHS first unveiled the pilot in March of this year as one of three new use cases exploring the potential benefits of AI.
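USCIS has not published its actual tooling, but the roleplay setup Boyce outlines could be built on any chat-style generative AI API. The minimal Python sketch below is purely illustrative: the client library, model name, prompt wording, and sampling temperature are all assumptions, not details from the pilot.

```python
# Illustrative sketch only: DHS has not published its implementation, and
# the model name, prompt wording, and temperature below are assumptions.
# The OpenAI Python client stands in for whatever service USCIS uses.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A system prompt that asks the model to roleplay an asylum applicant and
# deliberately tolerates small inconsistencies, mirroring the confusion of
# interpreter-mediated interviews that Boyce describes.
PERSONA_PROMPT = (
    "You are roleplaying an asylum applicant in a training simulation for "
    "USCIS officers. Answer each question in character. Occasionally give "
    "vague or slightly inconsistent details, as can happen when answers "
    "pass through an interpreter. Never break character."
)

history = [{"role": "system", "content": PERSONA_PROMPT}]

def applicant_reply(officer_question: str) -> str:
    """Send the officer's question and return the simulated applicant's answer."""
    history.append({"role": "user", "content": officer_question})
    response = client.chat.completions.create(
        model="gpt-4o",     # hypothetical model choice
        messages=history,
        temperature=1.0,    # a higher temperature keeps answers varied
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(applicant_reply("When and why did you leave your home country?"))
```

Notably, a setup like this would leave the sampling temperature high rather than suppress the model's variability, since the imperfect, shifting answers are exactly what the training is meant to exercise.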
Earlier this year, DHS also introduced the inaugural cohort of its "AI Corps," comprising the department's first 10 AI technology experts. These new team members will play crucial roles in responsibly deploying AI across key mission areas, including combating fentanyl trafficking, addressing online child sexual exploitation and abuse, improving immigration services, securing critical infrastructure, and strengthening cybersecurity.