What happens when artificial intelligence comes to Ottawa

A new report calls for caution as the federal government has been ‘experimenting’ with the use of AI in immigration and refugee cases.
An asylum seeker is confronted by an RCMP officer as he crosses the border into Canada from the United States Monday, Aug. 21, 2017 near Champlain, N.Y. (THE CANADIAN PRESS/Paul Chiasson)

There is a notion that the choices a computer algorithm makes on our behalf are neutral and somehow more reliable than our notoriously faulty human decision-making.

But, as a new report presented on Parliament Hill Wednesday points out, artificial intelligence isn’t pristine, absolute wisdom downloaded from the clouds. Rather, it’s shaped by the ideas and priorities of the human beings who build it and by the database of examples those architects feed into the machine’s “brain” to help it “learn” and build rules on which to operate.

Much like a child is a product of her family environment—what her parents teach her, what they read to her and show her of the world—artificial intelligence sees the world through the lens we provide for it. This new report, entitled “Bots at the Gate,” contemplates how decisions rendered by artificial intelligence (AI) in Canada’s immigration and refugee systems could impact the human rights, safety and privacy of people who are by definition among the most vulnerable and least able to advocate for themselves.

The report says the federal government has been “experimenting” with AI in limited immigration and refugee applications since at least 2014, including with “predictive analytics” meant to automate certain activities normally conducted by immigration officials. “The nuanced and complex nature of many refugee and immigration claims may be lost on these technologies, leading to serious breaches of internationally and domestically protected human rights, in the form of bias, discrimination, privacy breaches, due process and procedural fairness issues, among others,” the document warns. “These systems will have life-and-death ramifications for ordinary people, many of whom are fleeing for their lives.”

Citing ample evidence of how biased and confused—how human—artificial intelligence can be, the report from the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy makes the case for a very deliberate sort of caution.

The authors mention how a search engine coughs up ads for criminal record checks when presented with names it associates with Black people. A woman searching for jobs sees lower-paying opportunities than a man running the same search. Image recognition software matches a photo of a woman with a photo of a kitchen. An app store suggests a sex offender search app as related to a dating app for gay men.

“You have this huge dataset, you just feed it into the algorithm and trust it to pick out the patterns,” says Cynthia Khoo, a research fellow at the Citizen Lab and a lawyer specializing in technology. “If that dataset is based on a pre-existing set of human decisions, and human decisions are also faulty and biased—if humans have been traditionally racist, for example, or biased in other ways—then that pattern will simply get embedded into the algorithm and it will say, ‘This is the pattern. This is what they want, so I’m going to keep replicating that.’”
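
To make the mechanism concrete, consider a deliberately crude, hypothetical sketch: the applicant groups, records and frequency-count “model” below are invented for illustration and have nothing to do with any system the government actually runs. A program “trained” only on past human decisions ends up recommending exactly the disparity it was shown.

```python
# Hypothetical illustration of the dynamic Khoo describes: a naive "model"
# trained on skewed historical decisions learns nothing except to reproduce them.
from collections import defaultdict

# Made-up historical records of (applicant group, human decision).
# The past decisions are skewed: group_a is usually approved, group_b usually rejected.
history = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "approve"), ("group_a", "reject"),
    ("group_b", "approve"), ("group_b", "reject"), ("group_b", "reject"), ("group_b", "reject"),
]

# "Training": tally how often each group was approved in the past.
counts = defaultdict(lambda: {"approve": 0, "total": 0})
for group, decision in history:
    counts[group]["total"] += 1
    if decision == "approve":
        counts[group]["approve"] += 1

def predict(group: str) -> str:
    """Recommend whatever the majority historical outcome was for this group."""
    stats = counts[group]
    approval_rate = stats["approve"] / stats["total"]
    return "approve" if approval_rate >= 0.5 else "reject"

# The "pattern" the system has learned is simply the old disparity, replayed.
print(predict("group_a"))  # approve (75 per cent historical approval rate)
print(predict("group_b"))  # reject  (25 per cent historical approval rate)
```

Real systems are far more sophisticated than this, but the underlying risk is the same: without deliberate correction, “learning from the data” can mean learning the bias in the data.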

Immigration, Refugees and Citizenship Canada says the department launched two pilot projects in 2018 using computer analytics to identify straightforward and routine Temporary Resident Visa applications from China and India for faster processing. “The use of computer analytics is not intended to replace people,” the department said. “It is another tool to support officers and others in managing our ever-increasing volume of applications. Officers will always remain central to IRCC’s processing.”

This week, the report’s authors made the rounds on the Hill, presenting their findings and concerns to policy-makers. “It does now sound like it’s a measured approach,” says Petra Molnar, a lawyer and technology and human rights researcher with the IHRP. “Which is great.”

Other countries offer cautionary tales rather than best practices. “The algorithm that was used [to determine] whether or not someone was detained at the U.S.-Mexico border was actually set to detain everyone and used as a corroboration for the extension of the detention practices of the Trump administration,” says Molnar.

And in 2016, the U.K. government revoked the visas of 36,000 foreign students after automated voice analysis of their English language equivalency exams suggested they may have cheated and sent someone else to the exam in their place. When the automated voice analysis was compared to human analysis, however, it was found to be wrong over 20 per cent of the time—meaning the U.K. may have ejected 7,000 foreign students who had done nothing wrong.

By contrast, the European Union’s General Data Protection Regulation, which took effect in May 2018, is held up as the gold standard, enshrining concepts such as “the right to an explanation”: the legal guarantee that if your data was processed by an automated tool, you have the right to know how it was done.

Immigration and refugee decisions are both opaque and highly discretionary even when rendered by human beings, argues Molnar, pointing out that two different immigration officers may look at the same file and reach different decisions. The report argues that lack of transparency reaches a different level when you introduce AI into the equation, outlining three distinct reasons.

First, automated decision-making systems are often created by outside entities that sell them to government agencies, so the source code, training data and other details are typically proprietary and hidden from public view.

Second, full disclosure of the guts of these programs might be a bad idea anyway because it could allow people to “game” the system.

“Third, as these systems become more sophisticated (and as they begin to learn, iterate, and improve upon themselves in unpredictable or otherwise unintelligible ways), their logic often becomes less intuitive to human onlookers,” the authors explain. “In these cases, even when all aspects of a system are reviewable and superficially ‘transparent,’ the precise rationale for a given output may remain uninterpretable and unexplainable.” Many of these systems end up as inscrutable black boxes that could spit out determinations on the futures of vulnerable people, the report argues.

Her group aims to use a “carrot-and-stick approach,” Khoo says, urging the federal government to make Canada a world leader on this issue, in both human rights and high-tech terms. It’s a message that may find a receptive audience with a government that has been eager to make both halves of that equation central to its brand at home and abroad.

But they’ll have to move quickly: AI’s role in policy decisions that shape real people’s lives may still be nascent, but it is growing fast and won’t stay that way for long.

“This is happening everywhere,” Khoo says.