As billions of dollars are poured into federal agencies to purchase and use artificial intelligence, immigrant rights advocates are urging the Department of Homeland Security to be more transparent as it ramps up development of AI technology, not just for enforcement along the border but also for processing asylum, visa, and naturalization applications.
Sub-agencies under the Department of Homeland Security, such as United States Citizenship and Immigration Services, are exploring tools like the “I-539 Approval Prediction,” which attempts to train a machine-learning model to determine whether USCIS should approve applications for visa extensions for students, visitors, or temporary workers.
With “Predicted to Naturalize,” an AI tool that apparently was never fully developed, USCIS explored a model that could predict when lawful permanent residents would be eligible to naturalize.
Through “Asylum Text Analytics,” USCIS uses AI to identify “plagiarism-based-fraud” in asylum and withholding-of-removal applications.
These tools are listed in an AI inventory that DHS is required to publish. This tech is also the focus of “Automating Deportation,” a report spearheaded by the advocacy group Just Futures Law and the immigrant rights organization Mijente, which charges DHS and its sub-agencies with a lack of transparency and a failure to implement “basic safeguards against AI abuse.” The report calls for the agencies to, among other things, consult with the public or impacted communities before releasing an AI product, and to release the code.
Julie Mao, co-founder and deputy director of Just Futures Law, points to research showing a pattern of DHS disproportionately denying asylum to Black, Latino, and Muslim immigrants, and fears that this “bias” could be amplified if an agency like USCIS is developing and training AI bots “to basically replicate existing human decisions based on discriminatory data.”
“It’s not like a human adjudicator that can go through maybe 1,000 applications a year,” Mao told URL Media, adding concern that AI tools could run through many more applications at a faster rate.
“It’s another completely different impact when a machine is deciding whether to deport someone or to keep the family together,” she added. “That really needs to be done in a very transparent way … where there’s actual human individuals reviewing the accuracy and the biases in the data.”
For years, Just Futures Law and Mijente have sounded the alarm over tech and data driving deportation raids and over the use of facial recognition and location-tracking tools for surveillance along the border.
Now, with this report, they’re calling attention to the “pervasive” AI elements being used not just by DHS agencies charged with detaining and deporting immigrants but also by USCIS, which processes millions of applications a year dealing with work permits, green cards, and naturalization.
A DHS spokesperson, in an email, told URL Media that the agency is committed to ensuring that its use of AI “fully respects privacy, civil liberties, and civil rights,” and that the technology is rigorously tested to avoid bias and privacy harms.
Earlier this year, DHS released a roadmap detailing how USCIS will use generative AI to generate training materials and provide language translation for asylum and refugee officers. The spokesperson said AI will provide smarter and timelier information to help agents and officers make decisions, as well as free them from routine tasks to “focus on higher value work.”
The agency also pointed to its publicly available inventory, which is required under an executive order the Biden Administration issued in 2023. The spokesperson said DHS is working to implement requirements outlined in a memo from the Office of Management and Budget that provides a framework to reduce “automation bias” and protect civil rights from potential AI harm.
These protocols are not enough, according to the report.
Mao and the report’s authors note that through months of reviewing DHS’ AI website, “we observed that DHS adds, deletes, and modifies AI programs with no explanation.”
The inventory, which can be downloaded, appears to have been modified around mid-August, after URL Media inquired about the AI tools listed in the Automating Deportation report. The updated inventory listed “Predicted to Naturalize” among the “inactive use cases.” An April version of the inventory, however, listed the tool as under “implementation.”
Even with its AI inventory, the report’s authors said they found little information about how DHS identifies or manages errors or conducts oversight. “In the meantime, DHS continues to purchase and use these powerful technologies on immigrant communities,” according to the report.
Mao noted a predictive algorithm used by ICE that generates a weekly “Hurricane Score” to inform decisions about someone’s conditions of supervision under a program that subjects immigrants to location surveillance. This tool was not listed in the AI inventory but was uncovered through a Freedom of Information Act lawsuit that Just Futures Law and Mijente filed, according to the report.
A longtime immigration attorney, Mao knows the time it can take USCIS or other agencies to adjudicate immigration applications.
“People have to wait for years for their visas to be approved, but these types of tools, where we do not know how it’s making the decision, this is not the efficiency that we’re asking for,” she said.
Jacinta González, a public policy director for Mijente, points to the amount of work it took “to even be able to find out the little bit that we do know” about AI tech in immigration.
“What is very clear is that they’re not following standards for how to review how this technology is being used,” said González, adding that the report is part of the larger “No Tech for ICE” campaign that started in 2017.
González thinks back to about 10 years ago when she and other organizers began seeing ICE agents in community spaces, like grocery stores, armed with mobile fingerprinting devices. “They would be fingerprinting people on the spot,” she said.
Just Futures Law and Mijente, in a previous report, highlighted how the mobile fingerprinting app known as EDDIE helped ICE agents increase deportations of migrants who were not intentionally targeted for removal, according to the Associated Press.
“You can really see the progression of it,” González said of the evolution of tech in immigration enforcement.
To González, the findings from the latest report are striking because “it has implications for everyone, from recent arrivals and asylum seekers to folks who have been here for a long time.”
“It’s not just affecting one group of immigrants, but actually, everyone,” González said.