
Experiments in Automating Immigration Systems

By Tatiana Kazim, Public Law Project and Equal Education Law Centre (South Africa)

Governments around the world are embracing automated decision making (ADM). The potential benefits are well-rehearsed: faster, cheaper, more accurate, more consistent decision-making. Equally, the dangers posed by government ADM systems have been exposed by several high-profile scandals, including the A-level grades debacle in the summer of 2020. But automated government is a relatively new phenomenon, and our understanding of the different types of ADM systems and their attendant problems is still evolving. This is not helped by the fact that new systems are, for the most part, developed and deployed behind closed doors. As Jack Maxwell and Dr Joe Tomlinson put it, there is a “giant”, and mostly secret, “experiment in automation” going on where “people’s lives are at stake” – especially the lives of marginalised people.

In the UK, the immigration system is one of the key testing grounds for automation. In their new book, Experiments in Automating Immigration Systems, Maxwell and Tomlinson offer a glimpse into the Home Office “laboratory”.

The automated state

The book explores the impact of flawed ADM systems through three main case studies: 

  1. A voice recognition system used to detect fraud in English-language testing.
  2. An automated data matching system, used to process the large volume of applications to the EU Settlement Scheme.
  3. An algorithm used to categorise visa applications according to risk, where risk was assessed by reference to nationality, amongst other factors.

These three case studies are, as Maxwell and Tomlinson acknowledge, just “the tip of the iceberg”. A full survey of all ADM systems deployed by the Home Office – let alone by government more generally – would be impossible given the “mildly astonishing” opacity in this area. Indeed, much of what we do know has been the hard-won result of investigatory research, freedom of information requests, and litigation. In the absence of full information, these case studies offer real insight into the problems arising from automation in the immigration space and beyond.

The first case study concerns English-language tests sat by over 58,000 people between 2011 and 2014, in order to meet their visa conditions. Relying on an automated voice recognition system, the Home Office accused tens of thousands of students of cheating on their tests and revoked or curtailed their visas. Some, like 22-year-old Raja Noman Hussain, from Pakistan, had their homes raided and were arrested and sent to detention centres. Hussain spent six years and £30,000 before he eventually cleared his name. The case study centres on the experiences of Hussain and his fellow students, and this focus is representative of the book as a whole: one of its strengths lies in highlighting the tangible, and devastating, consequences of flawed ADM systems on the lives of real people.

Maxwell and Tomlinson go on to show how the Home Office’s deployment of the voice recognition system was “riddled with errors and oversights.” The Home Office failed to take meaningful steps to check the reliability of the voice recognition system before deploying it for the first time in a novel context, relying uncritically on assurances of the private developer; failed to appreciate the low quality of the audio recordings fed into the system and the limited capacity of the system to correctly identify a particular voice from given recordings; and failed to consider the extent to which the outputs of the voice recognition system were an appropriate basis for decisions to penalise students. The result was that thousands of innocent students were accused of fraud and suffered grave consequences: “damage to their reputations, livelihoods, and future in the UK.”

Compounding these problems was a lack of public information about the system, which hampered students in bringing effective challenges, as well as limited avenues for redress.

Automation and administrative (in)justice

Maxwell and Tomlinson approach the case studies through the lens of administrative justice. Their analysis has three dimensions: they look at the law and guidance underpinning ADM systems; the interface between people and the state, particularly the way the state makes decisions about people; and the redress mechanisms available to challenge decisions. They draw out three key points from this analysis:

  1. Rules – Government automation is at risk of becoming a ‘law-free zone’. The UN Special Rapporteur on Extreme Poverty reached this conclusion, and it is endorsed by Maxwell and Tomlinson in the book. There is a pressing need to consider how existing rules can be adapted to regulate ADM, and what new rules might be required.
  2. Grievances – ADM is likely to make certain kinds of grievance in administration more common. Chief among these are discrimination and a lack of communication between those making the decisions and those subject to them. The experiences of those processed in this way are likely to be distinctive. When a computer makes an adverse decision about you, without any human input, this is likely to give rise to a unique sense of indignity or injustice.
  3. Redress – The distinctive nature of the problems with ADM calls for distinctive forms of redress. Too often, redress is an afterthought. Instead, proper avenues for challenging a decision and obtaining a remedy should be considered from the outset.

Perhaps a neat way to summarise the problem is as follows: new government ADM systems are being developed far more quickly than the regulation and redress mechanisms needed to govern them. This can mean that new and widespread algorithmic harms go unremedied.

As Maxwell and Tomlinson point out, laws underpinning ADM systems tend to be general and “do not foresee automation in any specific way”. Very few laws address automation directly. The work of identifying whether and how legal doctrines could be adapted to accommodate algorithmic harms has begun. Sandra Wachter, for example, has recently written about how anti-discrimination law could be adapted to cover new algorithmic groups: groups like “dog owners, sad teens, video gamers, single parents, gamblers, or the poor” which are “routinely used to allocate resources” and “incomprehensible groups defined by parameters that defy human understanding such as pixels in a picture, clicking behavior, electronic signals, or web traffic.” But there is still much more to be done. Equally urgent is the work of considering whether entirely new laws and regulations are required. The proposed EU AI Regulation is a good example. The UK has yet to develop similar legislative proposals.

Proceeding with caution?

Maxwell and Tomlinson give a fair hearing to the benefits that have arisen from the use of ADM systems. In the context of the EU Settlement Scheme, automation “allowed millions of people to get their settled status quicker than would have otherwise been possible, reducing delay and associated anxiety.” But set against these benefits are the failed experiments in automation, which have had “disastrous effects for individuals and their families, as well as wider society and the economy.”

In light of these failures, and in the face of uncertainty about further and future risks, Maxwell and Tomlinson advocate for the ‘precautionary principle’: “given the range of risks associated with automated decision-making in immigration systems, until there is further public evidence and clear data on the impact of such systems, they should be incrementally developed and clear safeguards, including public redress processes and monitoring systems, should be in effect.”

It remains to be seen whether government will adopt this approach and slow the proliferation of new and different systems. Unfortunately, the brakes have not yet been applied. Public Law Project, for example, continues to identify new ADM systems in the immigration context and beyond. Currently in progress is the roll-out of ‘Atlas’, which will replace the old immigration casework database. This is a major project, implicating the entire immigration system. And it appears to involve numerous new ADM and data-matching systems, the details of which have not, at present, been disclosed. The Atlas roll-out suggests that the Home Office is not proceeding with caution; rather, the trend of automating immigration is continuing apace.

On the horizon, there are opportunities to ensure that government ADM systems operate lawfully, fairly, and accountably. These include new iterations of the Cabinet Office’s ‘Algorithmic Transparency Standard’ and the forthcoming AI White Paper. It will be crucial for civil society to engage fully with these opportunities, in order to ensure that new legal standards protect against the risks of ADM so vividly depicted in Maxwell and Tomlinson’s important book.

A launch event for the book was hosted by Public Law Project and University of York on 27 January. You can read a write-up of the event here.
