University of Toronto Magazine
Petra Molnar. Photo by Narisa Ladak
Alumni

Computer Says “No”

U of T’s Petra Molnar warns that the use of AI in immigration decisions could infringe on the human rights of migrants

Petra Molnar (JD 2016), interim director of the Faculty of Law’s International Human Rights Program, recently investigated how AI is being used in Canada’s immigration decisions. In a report she co-wrote with Lex Gill for the human rights program and U of T’s Citizen Lab, Molnar warns that governments’ use of AI in immigration threatens to exacerbate the vulnerability of migrants.

How is Canada using AI in immigration decisions?
There are two pilot projects underway: one uses AI to help assess temporary residence visa applications from India and China, and the other uses AI to triage simple applications. The Canada Border Services Agency has also been testing a potential AI lie-detector airport kiosk, which would be used instead of an officer to determine whether someone is telling the truth before they are processed for further screening. Similar AI lie detectors are currently in use in the European Union. We do not yet have data on exactly how the pilot projects are working, but we hope to monitor them to make sure they comply with Canada’s human rights obligations.

Do you see benefits to using AI in immigration?
It could allow simple applications to be automated, leaving more time and human resources for complex applications.

What are the biggest risks?
When states experiment with new technologies, they can infringe on fundamental human rights such as the right to life, liberty and security of the person — as when an algorithm determines whether someone should be placed in immigration detention, as has been done at the U.S.-Mexico border. Data breaches are also a significant concern: what if repressive governments obtain sensitive data about people seeking asylum in another country? Bias is a risk as well. AI has a record of discriminating on the grounds of race and gender.

Where are you taking this research from here?
I have been splitting my time between Toronto and the University of Cambridge, conducting a global assessment of migration-management technology and trying to understand how international human rights law could regulate it. Almost every day there is a new story about the use of technology to increase security or surveillance, or about the ways particular groups are being targeted. I hope to continue this work and spark further conversations about the importance of putting human rights at the centre of technological development.

