By Global News
A new report is warning about the federal government’s interest in using artificial intelligence to screen and process immigrant files, saying it could lead to discrimination, as well as privacy and human rights breaches.
The research, conducted by the University of Toronto’s Citizen Lab, outlines the impacts of automated decision-making on immigration applications and how errors and assumptions within the technology could lead to “life-and-death ramifications” for immigrants and refugees.
The authors of the report issue a list of seven recommendations calling for greater transparency, public reporting and oversight of the government’s use of artificial intelligence and predictive analytics to automate certain activities involving immigrant and visitor applications.
“We know that the government is experimenting with the use of these technologies … but it’s clear that without appropriate safeguards and oversight mechanisms, using A.I. in immigration and refugee determinations is very risky because the impact on people’s lives are quite real,” said Petra Molnar, one of the authors of the report.
Earlier this year, federal officials launched two pilot projects to have an A.I. system sort through temporary resident visa applications from China and India. Mathieu Genest, a spokesman for Immigration Minister Ahmed Hussen, says the analytics program helps officers triage online visa applications to “process routine cases more efficiently.”
He says the technology is being used exclusively as a “sorting mechanism” to help immigration officers deal with an ever-growing number of visitor visa applications from these countries by quickly identifying standard applications and flagging more complex files for review.
Immigration officers always make final decisions about whether to deny a visa, Genest says.
But this isn’t the only foray into artificial intelligence being spearheaded by the Immigration Department.
In April, the department started gauging interest from the private sector in developing other pilot projects involving A.I., or “machine learning,” for certain areas of immigration law, including in humanitarian and compassionate applications, as well as pre-removal risk assessments.
These two streams of Canada’s immigration system are often used as a last resort by vulnerable people fleeing violence and war who are seeking to remain in Canada, the Citizen Lab report notes.
“Because immigration law is discretionary, this group is really the last group that should be subject to technological experiments without oversight,” Molnar says.
She notes that A.I. has a “problematic track record” when it comes to gender and race, specifically in predictive policing that has seen certain groups over-policed.
“What we are worried about is these types of biases are going to be imported into this high risk laboratory of immigration decision-making.”