Administrative law and the digital welfare state in the UK and Australia
Jack Maxwell (Research Fellow in Public Law and Technology, Public Law Project)
Technology plays a central role in the modern welfare state. Governments are increasingly using technology to confirm identities, assess eligibility, calculate and make payments, and detect fraud. This new mode of governance – the digital welfare state – raises at least two questions for public lawyers. The first concerns harm: how does the digital welfare state affect people’s basic rights and interests, particularly those of members of vulnerable or marginalised groups? The second concerns law: how can our existing laws be used to prevent or address such harms?
This post focuses on one aspect of the digital welfare state: automated social security systems. Earlier this week, in Secretary of State for Work and Pensions v Johnson, the England and Wales Court of Appeal invalidated an aspect of the UK Government’s automated system for calculating universal credit payments. Similar issues came before the Federal Court of Australia late last year in Amato v Commonwealth, a successful challenge to the Australian Government’s ‘robodebt’ programme.
I discuss each case below, before considering what they tell us about the digital welfare state, its consequences for people’s rights and interests, and the potential for public law to constrain some of its worst excesses.
The digital welfare state in the UK: Johnson
Johnson concerned the UK Government’s automated system for calculating universal credit payments. Universal credit is a social security payment for people who are out of work or on low incomes. Universal credit entitlements depend on, among other things, a person’s level of income, assessed by reference to a fixed monthly assessment period. If a person has earned income during that period, then the system will usually deduct a proportion of that income from their universal credit payment. Some people are also entitled to earn a certain amount of income in each assessment period without any reduction in their payment; this amount is known as the work allowance.
The government’s automated system for assessing and making universal credit payments creates a range of problems for certain groups of people. The focus in Johnson was people who earn income on a monthly basis. If a person’s pay day coincides with the end of their assessment period, they might from time to time receive two months’ salary in one period (‘the pay day problem’). For example, this might occur if their usual pay day falls on a bank holiday or a weekend, thus pushing their salary payment into an earlier or later assessment period. This causes dramatic fluctuations in the person’s universal credit payments, because the system thinks that they have received substantial income in one period and no income in the next. It also means that the person loses money overall, because they cannot take advantage of their work allowance for the assessment period when they appear to have no income.
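The arithmetic behind the pay day problem can be sketched as follows. The allowance, taper rate, maximum award and salary below are hypothetical figures chosen for illustration, not the actual universal credit rates:

```python
# Hypothetical sketch of the pay day problem; the allowance, taper
# rate and maximum award are illustrative, not the statutory figures.

WORK_ALLOWANCE = 300   # income disregarded each assessment period (assumed)
TAPER_RATE = 0.5       # share of excess income deducted (assumed)
MAX_AWARD = 1500       # award before any income deduction (assumed)

def monthly_award(earned_income: float) -> float:
    """Deduct a share of income above the work allowance from the award."""
    excess = max(0, earned_income - WORK_ALLOWANCE)
    return max(0, MAX_AWARD - TAPER_RATE * excess)

# Normal months: one salary of 1,200 falls in each assessment period.
two_normal_months = 2 * monthly_award(1200)              # 2100.0

# Pay day problem: a bank holiday pushes one salary into the adjacent
# period, so the system sees two salaries in one month and none in the next.
bunched_months = monthly_award(2400) + monthly_award(0)  # 450.0 + 1500.0

# The claimant loses the benefit of one month's work allowance:
# TAPER_RATE * WORK_ALLOWANCE = 150.
print(two_normal_months - bunched_months)  # 150.0
```

Under these assumed figures, the award swings from 450.0 in the double-salary month to 1500.0 in the apparently income-free month, and the overall loss equals the taper rate applied to the unused work allowance. Repeated across several pay days, that is the loss of hundreds of pounds per year that the Court identified.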
The claimants in Johnson argued that this arrangement was unlawful. They succeeded at first instance, on the ground that the system’s operation was inconsistent with the relevant regulations. The Court of Appeal overturned this finding. The lower court’s approach would require the government to make an ‘evaluative determination’ in many different types of case about whether a person’s income was earned in respect of that period (at ). This ran contrary to one of the purposes of the regulations: ‘ensuring that the calculation of awards each month could be an automated process’ (at ).
But the Court of Appeal held that the regulations themselves were irrational and thus unlawful. First, the pay day problem, discussed above, had a dramatic impact on ‘many tens of thousands’ of people (at ). People had real difficulties covering basic expenses (e.g. rent, council tax, food) during months when their universal credit payments were reduced. This led to additional stress and financial pressures (e.g. enforcement action for non-payment of tax). And people lost a work allowance each time the pay day problem arose, which could add up to hundreds of pounds per year.
Second, these effects were perverse. They bore no connection to any change in people’s circumstances. Nor did they encourage ‘any of the other kinds of behavioural change considered desirable’ by the government (at ). On the contrary, at least one of the claimants declined work so as to avoid the pay day problem.
Third, the Court rejected the government’s claim that it was impracticable to adjust the system to address this problem. The government had refined and tailored universal credit in other ways ‘without fatally upsetting the computer’ (at ). It had to take the same approach in this case.
For these reasons, the Secretary’s failure to tailor the regulations to address the pay day problem was irrational. The Court left it to the Secretary ‘and those advising her to consider the best way to solve the problem’ (at ). The government has announced that it does not intend to appeal the ruling.
The digital welfare state in Australia: Amato
There is a striking parallel between the issues in Johnson and those raised by Australia’s robodebt programme. Robodebt also involved the automated assessment of a person’s income, for the purpose of reducing their social security entitlements. In Australia, as in the UK, some social security entitlements depend on a person’s level of income. Recipients report their income to the government every fortnight, and the government then calculates their eligibility and payment amount for that fortnight based on that information.
Robodebt was an automated system for checking whether people had correctly reported their income in the past and chasing down apparent welfare overpayments. It had two key elements. First, the system compared a recipient’s self-reported income against their annual income reported by their employer for tax purposes. The two datasets were not directly comparable, because the self-reported data was fortnightly, while the tax data was annual. To address this problem, the system averaged the tax data, dividing the annual income figure by about 26 to produce an average, fortnightly income.
Second, if the system identified a discrepancy between the two figures for a particular fortnight, this triggered an automated, digital verification process. The system sent the person a letter asking them to confirm or update their income. If the person failed to respond to the letter adequately (e.g. by providing the government with old pay slips confirming their actual income), the government raised a debt against them based on their averaged income.
The key problem was that averaged income is a poor guide to actual income in any particular fortnight. Averaging obscures variations in a person’s actual earnings from fortnight to fortnight, which are essential for an accurate assessment of their entitlements. The government nevertheless insisted on using averaging alone to claw back social security payments from a vast number of people. Since mid-2016, when the robodebt programme was fully rolled out, the government has collected at least A$720 million from over 370,000 people. Those subjected to it have described its profound financial, material and mental health consequences.
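The averaging flaw can be illustrated with a short sketch. The earnings pattern below is hypothetical, not real data, and the real entitlement rules were more complex:

```python
# Hypothetical sketch of robodebt's income averaging; the earnings
# pattern below is illustrative, not real data.

FORTNIGHTS_PER_YEAR = 26

# A casual worker's actual fortnightly earnings: long stretches with no
# work, then six fortnights of intensive work.
actual = [0] * 20 + [2600] * 6

annual_income = sum(actual)                      # employer-reported: 15600
averaged = annual_income / FORTNIGHTS_PER_YEAR   # 600.0 per fortnight

# Compare the averaged figure against each self-reported fortnight
# (assuming the person reported their income accurately).
discrepant = sum(1 for earned in actual if earned != averaged)
print(discrepant)  # 26 -- the average matches no actual fortnight
```

Under this assumed pattern, the average is wrong for every single fortnight: the twenty fortnights in which the person earned nothing, and was fully entitled to payment, now appear to carry 600.0 of undisclosed income. That is how debts came to be raised against people who owed nothing.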
In late 2019, in Amato, the Federal Court of Australia declared that the robodebt programme was unlawful. The declaration was made by consent just before the hearing was to commence: the government conceded that the programme was unlawful, but the court still had to be satisfied of the basis on which the declaration was to be made.
As with Johnson, the outcome in Amato turned on administrative law principles of rationality. The relevant statute impliedly required the government to have a rational basis for determining that a person owed a welfare debt, before it could demand repayment. Robodebt’s flaws meant that it could not provide a rational basis for the government’s demands. In May 2020, in the face of a looming class action, the government agreed to pay back the unlawful debts.
Reflections on Johnson and Amato
There are at least five important points to be taken from Johnson and Amato.
First, these cases attest to the potential harms of the digital welfare state. Some of the most vulnerable and marginalised people in the UK and Australia have been denied essential money to which they were legally entitled, or pursued by government for debts they did not owe. This has had a significant impact on many people’s lives: falling into debt, being deprived of basic living standards, dealing with the stress and emotional turmoil of debt collection processes, and so on. And this has occurred on a vast scale. One of the key attractions of automation is scale. It gives government the ability to make a large number of decisions, in less time and using fewer resources than an equivalent human decision-maker. But this also means that when automated decision-making goes wrong, it can harm many more people than a comparable manual process.
Second, these cases show that even relatively simple technology can threaten people’s rights and interests. Much debate over ‘artificial intelligence’ has focused on the use of technology to perform novel and complex functions: e.g. predicting human behaviour or making broad judgments about right and wrong. But the systems in Johnson and Amato were tasked with functions that computers have performed for some time: drawing together large datasets and making numerical calculations based on rules coded by humans. A key problem in each case was that the government relied on the system alone, rather than pairing it with human decision-makers who could, for example, adjust a person’s assessment period to avoid the pay day problem, or follow up discrepancies in a person’s reported income with their employer.
Third, these cases demonstrate how administrative law can be used to curtail some of the harms of automated government. Other recent challenges to automated government, such as the digital welfare state in the Netherlands and automated facial recognition in Wales, have relied on human rights law, equality law and data protection law. But Amato and Johnson turned on administrative law principles of reasonableness and rationality, albeit in slightly different ways. In Amato, the challenge was procedural. The government’s decision-making process was flawed, because a person’s averaged income data alone could not reasonably support a finding that they had been overpaid. In Johnson, the challenge was substantive. The regulations were flawed because they had arbitrary and perverse consequences.
Fourth, Johnson illustrates the importance of evidence in technology-related judicial reviews. To meet the high threshold for irrationality, the claimants had to adduce detailed evidence of the system’s financial impact on people and how it might be adjusted to address the problem. If courts increasingly require such detailed evidence to assess whether automated systems are lawful, this will raise significant questions for existing judicial review processes.
Finally, Johnson provides an interesting example of the courts treating the executive’s technological claims with a degree of scepticism. The government’s witnesses said that tinkering with the universal credit system to solve the pay day problem would cost millions of pounds and significantly delay roll-out. Rose LJ, delivering the lead judgment, was unpersuaded: ‘Taking full account of all the SSWP’s evidence … I cannot accept that the programme cannot be modified’ (at ).
Johnson and Amato provide important insights into the digital welfare state. They illustrate how digitisation in this area can go wrong, and how administrative law can be used to hold the government to account in such cases. These cases also raise a range of broader questions. How can such problems be prevented before they arise? What conditions are necessary for our administrative justice system to function as it ultimately did in Johnson and Amato? How could it work better? As governments around the world continue to embrace automation, these issues will only become more pressing for public lawyers and policymakers.
My thanks to Alexandra Sinclair and Claire Hall for their very helpful comments on a draft of this post.