More than 140 organizations advocating for immigrants, racial justice, and privacy are calling on the Department of Homeland Security to end its deployment of artificial intelligence by December if it cannot comply with federal requirements for responsible use of the technology.
The letter, drafted by the advocacy group Just Futures Law and addressed to Secretary Alejandro Mayorkas, claims that DHS is in violation of federal policies set by the Biden administration for the responsible use of AI, and must therefore cancel or suspend its use of the technology by Dec. 1, as required by the Office of Management and Budget.
“The stakes are high,” the letter states. “DHS’s latest AI tools impact millions of people in the U.S. Given the historical discrimination, inaccuracies, and complexities of the immigration system, we have serious concerns that DHS’s AI products could exacerbate existing biases or be abused in the future to supercharge detention and deportation.”
Among the organizations that signed the letter are the League of United Latin American Citizens, MALDEF, Surveillance Resistance Lab, Service Employees International Union, Kino Border Initiative, Muslim Advocates, Bread for the World, American Jewish Committee, Coalition for Humane Immigrant Rights, Asian Americans Advancing Justice, and UndocuBlack Network.
DHS did not immediately respond to a request for comment, but a spokesperson previously told URL Media that the agency is committed to ensuring that its use of AI “fully respects privacy, civil liberties, and civil rights,” and that the technology is rigorously tested to avoid bias and privacy harms.
The letter comes two months after Just Futures Law and the immigrant rights group Mijente released the “Automating Deportation” report that charges the DHS and its sub-agencies with a lack of transparency and a failure to implement “basic safeguards against AI abuse.”
The report sheds light on how sub-agencies under the DHS — such as the United States Citizenship and Immigration Services — have amped up development of AI technology, not just for enforcement along the border, but to process asylum, visa, and naturalization applications.
It details tools like the “I-539 Approval Prediction,” a machine-learning model trained to determine when USCIS should approve an application for a visa extension for students, visitors, or temporary workers.
With “Predicted to Naturalize,” an AI tool that apparently was never fully developed, USCIS explored a model that could predict when lawful permanent residents would be eligible to naturalize. Through “Asylum Text Analytics,” USCIS uses AI to identify “plagiarism-based-fraud” in asylum and withholding applications.
These tools are listed in an AI inventory that the DHS is required to publish.
Earlier this year, DHS released a roadmap detailing how USCIS will use generative AI to produce training materials and language translation for asylum and refugee officers. The DHS spokesperson previously told URL Media that AI will provide smarter and timelier information to help agents and officers make decisions, as well as free them from routine tasks to “focus on higher value work.”
The letter notes that federal rules and policies require DHS to consult with impacted communities before using AI tools, monitor the tech for errors and civil rights violations on an ongoing basis, and provide notification and an opt-out process for those impacted by AI.
“We have serious concerns that DHS has fast-tracked the deployment of AI technologies in contravention of these federal policies, executive orders, and agency memoranda,” the letter states.