Media Coverage

Kingsley Hayes discusses the government’s plans to publish a White Paper on regulating Artificial Intelligence

Partner and Head of Data and Privacy Litigation Kingsley Hayes discusses the Department for Work and Pensions, algorithms, and Universal Credit in relation to the UK government’s plan to publish a White Paper on regulating Artificial Intelligence.

Kingsley’s article was published by the British Computer Society on 26 October 2022 and can be found here.

There are currently no UK laws in place to police AI. Instead, regulations built for other purposes attempt to cover these evolving technologies, with limited success. In the ministerial foreword to the government’s policy paper on AI regulation, the then Secretary of State for Digital, Culture, Media and Sport, Nadine Dorries, states that a ‘pro-innovation’ regulatory approach is key to translating AI’s potential into societal benefits.

The government also believes that a ‘light touch approach’ will give businesses the clarity and confidence they need to grow whilst ensuring a boost in public trust in AI. The development of such policies is imperative as AI continues to rapidly advance. Whilst there are many benefits to its use, there are also many risks which need to be managed.

The need for regulation and transparency in AI decisions

The UK Digital Regulation Cooperation Forum (DRCF) published two discussion papers earlier this year, highlighting areas that call for regulation. In particular, the DRCF underlined the need for the development of algorithmic assessment and auditing practices to help identify inadvertent harms and improve transparency. As algorithms become more commonly used by organisations, stringent and rigorous oversight is necessary, especially when AI is used to make decisions which could have a serious impact on the health and wellbeing of individuals.

This necessity is highlighted by the Department for Work and Pensions’ (DWP) use of an algorithmically controlled ‘risk-based verification’ (RBV) system. The system scores an individual’s benefit claim based on their data and assigns the claimant to a low-, medium-, or high-risk group. Individuals placed in the high-risk category are subject to deeper scrutiny, which may include interviews or credit checks. Ultimately, the category allocated by the algorithm significantly affects the outcome of an individual’s application.

A lack of understanding of the process – or the outcome

Notably, once placed in a category, an individual cannot be moved to a lower-risk group, meaning their categorisation is entirely subject to algorithmic control. There is also a marked lack of transparency, with the government’s own guidance to local authorities stating that, whilst it is necessary for an RBV policy to be put in place, this ‘should not be made public due to the sensitivity of its contents’. It is unclear how individuals can challenge the RBV’s assessment outcome if they are not aware that it has taken place, or how the algorithm has arrived at a decision.
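
To make the mechanics concrete, below is a purely hypothetical sketch of how such a risk-based categorisation might operate. Since the DWP does not publish how the RBV works, every field, threshold, and rule here is an invented assumption; the sketch simply illustrates a score-to-category mapping and the one-way movement between risk groups described above.

```python
# Purely hypothetical sketch of a risk-based verification (RBV) flow.
# The DWP's actual criteria are not public; every field, threshold,
# and rule below is an illustrative assumption, not the real system.
from dataclasses import dataclass

LOW, MEDIUM, HIGH = "low", "medium", "high"
RANK = {LOW: 0, MEDIUM: 1, HIGH: 2}

@dataclass
class Claim:
    claimant_id: str
    risk_score: float       # produced by an opaque scoring model
    category: str = LOW

def categorise(claim: Claim) -> str:
    """Map a score to a risk group (thresholds are invented)."""
    if claim.risk_score >= 0.7:
        new = HIGH
    elif claim.risk_score >= 0.4:
        new = MEDIUM
    else:
        new = LOW
    # One-way movement: as described above, a claimant can never be
    # moved to a lower-risk group than the one already assigned.
    if RANK[new] > RANK[claim.category]:
        claim.category = new
    return claim.category

claim = Claim("C-001", risk_score=0.82)
print(categorise(claim))   # 'high' -> deeper scrutiny (interviews, credit checks)
claim.risk_score = 0.1
print(categorise(claim))   # still 'high': the category cannot be lowered
```

The one-way rule in the sketch is precisely what makes independent review and a right of challenge so important: once rated high-risk, no later evidence can lower a claimant’s category.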

Transparency of algorithmic processing is fundamental to ensuring that individuals can exercise their rights. Under Article 22 of the General Data Protection Regulation (GDPR), data subjects have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning them or similarly significantly affects them. If individuals are unaware that the DWP uses algorithms in its decision making, their ability to exercise this right is diminished.

Human bias vs algorithmic bias

The need for transparency is especially important as the risk of discriminatory decisions is evidently high. Earlier this year, campaign group Foxglove supported the Greater Manchester Coalition of Disabled People (GMCDP) in sending a formal letter to the DWP asking for details of how its algorithm works.

The organisation believes that the DWP is selecting a disproportionate number of disabled people for benefit fraud investigations. Whilst under investigation, these individuals often have their benefit payments stopped, leaving them without money for essentials. To date, the DWP has refused to answer the questions put to it, and the case may yet end up in court. Similarly, the DWP has not responded to Privacy International’s questions about how it uses algorithms to flag alleged cases of benefit fraud.

A high risk of discriminatory decisions

The DWP’s opacity is particularly concerning given the multiple documented instances of algorithms replicating human biases. Bias within algorithms may stem from data which is incomplete or unreliable. These biases can then lead to groups of individuals being subject to decisions based on flawed information, which can have a damaging impact.

Where discriminatory decisions occur, algorithms do not just raise data protection concerns. They also potentially breach Article 14 of the European Convention on Human Rights, which provides for the enjoyment of rights and freedoms without discrimination.

Algorithmic bias needs urgent consideration. In addition, inaccurate data exists within many organisational systems, and in circumstances like the DWP’s, flawed data is leading to incorrect decisions. This harms the quality of decision making and, in turn, the livelihoods of individuals.

Where automated decisions are present, human oversight is essential to ensure accountability and to make sure that errors are identified and corrected quickly. Indeed, strict oversight of algorithmic processes and how decisions are made is essential, as the individuals affected are often already vulnerable. And, of course, individuals must be able to challenge any decisions made (as per Article 22 of the GDPR).

Incorrect decisions challenging public trust in AI

Unfortunately, it is evident that the DWP does not have the best track record when implementing algorithms. A flawed algorithm used to automate adjustments to Universal Credit payments caused unwarranted losses to benefit claimants. In this instance, the Court of Appeal found that the algorithm ‘led to significant variations not only in the benefit award but in the income for the household from benefits and salary’. Human Rights Watch noted that over 85,000 claimants were affected by incorrect decisions.

Without new legislation, the uncertainties, inadequacies, and risks associated with using AI will continue, so there is no doubt that regulation is needed. But while the government might favour a light-touch approach, any new regulation must address the many implications of this rapidly developing technology and ensure the necessary transparency and accountability of all algorithmic processes.

When developing future laws, the government must engage with privacy groups and other stakeholders to ensure that current systemic inadequacies are accounted for. Only by doing this will individuals affected by algorithmic processes be able to exercise their rights, and only then will we see the societal benefits and public trust in AI that the government desires.
