The UN special rapporteur on extreme poverty and human rights has raised concerns about the UK’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale, warning in a statement today that the impact of a digital welfare state on vulnerable people will be “immense”.

He has also called for stronger laws and enforcement of a rights-based legal framework to ensure that the use of technologies like AI for public service provision does not end up harming people.

“There are few places in government where these developments are more tangible than in the benefit system,” writes Professor Philip Alston. “We are witnessing the gradual disappearance of the postwar British welfare state behind a webpage and an algorithm. In its place, a digital welfare state is emerging. The impact on the human rights of the most vulnerable in the UK will be immense.”

Alston’s statement also warns that the push towards automating public service delivery — including through increasing use of AI technologies — is worryingly opaque.

“A major issue with the development of new technologies by the UK government is a lack of transparency. The existence, purpose and basic functioning of these automated government systems remains a mystery in many cases, fuelling misconceptions and anxiety about them,” he writes, adding: “Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts.”

So, much like tech giants in their unseemly disruptive rush, UK government departments are presenting shiny new systems as sealed boxes — and that’s also a blocker to accountability.

“Central and local government departments typically claim that revealing more information on automation projects would prejudice its commercial interests or those of the IT consultancies it contracts to, would breach intellectual property protections, or would allow individuals to ‘game the system’,” writes Alston. “But it is clear that more public knowledge about the development and operation of automated systems is necessary.”

Radical social re-engineering

He argues that the “rubric of austerity” framing of domestic policies put in place since 2010 is misleading — saying the government’s intent, using the trigger of the global financial crisis, has rather been to transform society via a digital takeover of state service provision.

Or, as he puts it: “In the area of poverty-related policy, the evidence points to the conclusion that the driving force has not been economic but rather a commitment to achieving radical social re-engineering.”

Alston’s assessment follows a two-week visit to the UK during which he spoke to people across British society, touring public service and community institutions such as job centers and food banks; meeting with ministers and officials across all levels of government, as well as opposition politicians; and talking to representatives from civil society institutions, including frontline workers.

His statement discusses in detail the much-criticized overhaul of the UK’s benefits system, in which the government has sought to combine multiple benefits into a single so-called Universal Credit, zooming in on the “highly controversial” use of “digital-by-default” service provision here, and wondering why “some of the most vulnerable and those with poor digital literacy had to go first in what amounts to a nationwide digital experiment”.

“Universal Credit has built a digital barrier that effectively obstructs many individuals’ access to their entitlements,” he warns, pointing to big gaps in digital skills and literacy for those on low incomes and also detailing how civil society has been forced into a lifeline support role — despite its own austerity-enforced budget constraints.

“The reality is that digital assistance has been outsourced to public libraries and civil society organizations,” he writes, suggesting that for the most vulnerable in society, a shiny digital portal is operating more like a firewall.

“Public libraries are on the frontline of helping the digitally excluded and digitally illiterate who wish to claim their right to Universal Credit,” he notes. “While library budgets have been severely cut across the country, they still have to deal with an influx of Universal Credit claimants who arrive at the library, often in a panic, to get help claiming benefits online.”

Alston also suggests that digital-by-default is — in practice — “much closer to digital only”, with alternative contact routes, such as a telephone helpline, being actively discouraged by government — leading to “long waiting times” and frustrating interactions with “often poorly trained” staff.

Human cost of automated errors

His assessment highlights how automation can deliver errors at scale too — saying he was told by various experts and civil society organizations of problems with the Real Time Information (RTI) system that underpins Universal Credit.

The RTI system is supposed to take data on earnings submitted by employers to one government department (HMRC) and share it with the DWP to automatically calculate monthly benefits. But if incorrect (or late) earnings data is passed, there is a knock-on impact on the payout, with Alston saying the government has chosen to give the automated system the “benefit of the doubt” over and above the claimant.

Yet here a ‘computer says no’ response can literally mean a vulnerable person not having enough money to eat or properly heat their house that month.

“According to DWP, a team of 50 civil servants work full-time on dealing with the 2% of the millions of monthly transactions that are incorrect,” he writes. “Because the default position of DWP is to give the automated system the benefit of the doubt, claimants often have to wait for weeks to get paid the proper amount, even when they have written proof that the system was wrong. An old-fashioned pay slip is deemed irrelevant when the information on the computer is different.”
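The flow Alston describes can be sketched roughly as follows. This is a purely illustrative toy model, not the actual HMRC/DWP implementation: the allowance, taper rate, and function names are all invented, and the point is only to show how an automated feed that wins over a claimant's own payslip produces the "benefit of the doubt" problem he criticizes.

```python
# Hypothetical sketch of an RTI-style flow: employer earnings reported
# to one department feed an automated benefit calculation in another,
# and the system's figure is trusted over the claimant's own evidence.
# All names and numbers below are illustrative, not real DWP rules.

STANDARD_ALLOWANCE = 317.82   # illustrative monthly allowance (GBP)
TAPER_RATE = 0.63             # illustrative: award reduced 63p per GBP earned

def monthly_award(earnings: float) -> float:
    """Compute a Universal-Credit-style award from reported earnings."""
    return max(0.0, STANDARD_ALLOWANCE - TAPER_RATE * earnings)

def resolve_earnings(rti_earnings: float, payslip_earnings: float) -> float:
    """The 'benefit of the doubt' default: the automated feed wins,
    even when the claimant's payslip disagrees with it."""
    return rti_earnings  # claimant evidence is ignored by default

# If an employer wrongly reports GBP 500 when the payslip shows GBP 300,
# the award drops automatically and the claimant must dispute it.
earnings = resolve_earnings(rti_earnings=500.0, payslip_earnings=300.0)
award = monthly_award(earnings)
```

The design choice embodied in `resolve_earnings` is exactly what Alston objects to: the dispute burden falls on the claimant, while the erroneous automated figure takes effect immediately.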

Another automated feature of the benefits system he discusses segments claimants into low, medium and high risk, in contexts such as ‘risk-based verification’.

This is also problematic: Alston points out that people flagged as ‘higher risk’ are subjected to “more intense scrutiny and investigation, often without even being aware of this fact”.

“The presumption of innocence is turned on its head when everyone applying for a benefit is screened for potential wrongdoing in a system of total surveillance,” he warns. “And in the absence of transparency about the existence and workings of automated systems, the rights to contest an adverse decision, and to seek a meaningful remedy, are illusory.”
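The kind of screening Alston objects to can be sketched as below. The scoring thresholds and routing labels are entirely hypothetical, invented for illustration; the actual risk-based verification models used by government are, as his statement notes, undisclosed.

```python
# Hypothetical sketch of risk-based verification as Alston describes it:
# every claim is scored and bucketed, and 'high' risk claims are routed
# to extra scrutiny without the claimant being told. The thresholds and
# routing rules here are invented purely for illustration.

def risk_band(score: float) -> str:
    """Bucket a claim's risk score into low / medium / high."""
    if score < 0.3:
        return "low"
    if score < 0.7:
        return "medium"
    return "high"

def route_claim(score: float) -> str:
    """High-risk claims receive 'more intense scrutiny and
    investigation', typically without the claimant's knowledge."""
    band = risk_band(score)
    if band == "high":
        return "manual investigation"
    if band == "medium":
        return "additional document checks"
    return "automated processing"
```

Note that every claimant passes through the scorer, which is what inverts the presumption of innocence in Alston's framing: screening for wrongdoing is the default, not the exception.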

Summing up his concerns he argues that for automation to have positive political — and democratic — outcomes it must be accompanied by adequate levels of transparency so that systems can be properly assessed.

Rule of law, not ‘ethics-washing’

“There is nothing inherent in Artificial Intelligence and other technologies that enable automation that threatens human rights and the rule of law. The reality is that governments simply seek to operationalize their political preferences through technology; the outcomes may be good or bad. But without more transparency about the development and use of automated systems, it is impossible to make such an assessment. And by excluding citizens from decision-making in this area we may set the stage for a future based on an artificial democracy,” he writes.

“Transparency about the existence, purpose, and use of new technologies in government and participation of the public in these debates will go a long way toward demystifying technology and clarifying distributive impacts. New technologies certainly have great potential to do good. But more knowledge may also lead to more realism about the limits of technology. A machine learning system may be able to beat a human at chess, but it may be less adept at solving complicated social ills such as poverty.”

His statement also raises concerns about new institutions that are currently being set up by the UK government in the area of big data and AI, which are intended to guide and steer developments — but which he notes “focus heavily on ethics”.

“While their establishment is certainly a positive development, we should not lose sight of the limits of an ethics frame,” he warns. “Ethical concepts such as fairness are without agreed upon definitions, unlike human rights which are law. Government use of automation, with its potential to severely restrict the rights of individuals, needs to be bound by the rule of law and not just an ethical code.”

Calling for existing laws to be strengthened to properly regulate the use of digital technologies in the public sector, Alston also raises an additional worry, warning over a rights carve-out the government baked into updated privacy laws for public sector data (a concern we flagged at the start of this year).

On this he notes: “While the EU General Data Protection Regulation includes promising provisions related to automated decision-making and Data Protection Impact Assessments, it is worrying that the Data Protection Act 2018 creates a quite significant loophole to the GDPR for government data use and sharing in the context of the Framework for Data Processing by Government.”