Media Coverage

Lucy Burrows discusses the ICO’s warning to firms over generative AI risks in Data Centre Review

Writing in Data Centre Review, Associate Lucy Burrows has examined the ICO’s warning to businesses over the data privacy risks posed by generative AI.

Lucy’s article was published in Data Centre Review on 3 August 2023 and can be found here.

The UK’s privacy watchdog, the Information Commissioner’s Office (ICO), has warned businesses that it will take action where privacy risks have not been tackled before generative AI is introduced. In spite of this muscular statement of intent from the ICO, it is clear that the current regulatory framework in the UK does not yet match up to the scale of the challenge posed by AI.

Although the development of this technology brings exciting opportunities for consumers, there are also concerns surrounding misinformation and discrimination, amongst other potential risks. As these systems become increasingly sophisticated, it is essential to establish legal guardrails to ensure responsible and fair practices. Earlier this year, the ICO updated its guidance on AI and data protection to include a focus on fairness considerations and transparency principles as they apply to AI.

The ICO has warned businesses that there will be tougher compliance checks, yet Big Tech companies are notorious for violating consumer privacy. This is most evident in their use of AI to sharpen the targeting of consumers with ever more bespoke ad content, a process underpinned by their relentless harvesting of customer data. Although the ICO has condemned this in the past, there has been little enforcement action in this area by the watchdog, and companies are seemingly evading penalties for their harmful practices.

Within the last few months, the Irish Data Protection Commission (“DPC”) published its decision concerning the transfer of Facebook users’ data from the EU to the United States by Meta Platforms Ireland Limited (“Meta”). The decision saw Meta fined a hefty €1.2 billion and ordered to cease processing EU Facebook users’ personal data in the US. It does not appear that the ICO will take any similar enforcement action, underlining the regulator’s unwillingness to prioritise such investigations.

This calls into question whether British people are adequately protected when the risk of action being taken against organisations is seemingly low. Similarly, the civil courts have so far provided little protection for consumers, and we have seen the failure of several high-profile actions against Big Tech for wholesale breaches of data protection laws.

The UK GDPR stipulates that data subjects have the right not to be subject to decisions producing legal effects based solely on automated processing without appropriate human oversight. In addition to placing limitations on automated individual decision-making, the UK GDPR also mandates that individuals are provided with specific details about the processing activities and that measures are taken to prevent errors, bias, and discrimination. Whilst this provides a useful outline for addressing data protection concerns related to algorithmic systems, there is currently no explicit UK regulation of the technical specifics of algorithmic design and implementation.

Rather than exercising caution, the UK has adopted what it terms a ‘pro-innovation approach’ to policing AI. The UK AI white paper is based on principles such as transparency, accountability, and fairness; however, it sets out no concrete plans for regulatory control and states that there will be no statutory regulation of AI in the near future.

Contrast this with the European Union, which has opted for a much stricter approach. On 14 June 2023, the European Parliament voted to approve its draft of the Artificial Intelligence Act (“AI Act”), establishing guidelines for AI usage. The legislation takes a risk-based approach, imposing a tiered system of regulatory obligations on specific applications.

For example, the AI Act proposes to explicitly prohibit some uses of AI where the risk is deemed ‘unacceptable’; examples of prohibited practices include social scoring and ‘real-time’ remote biometric identification systems. Applications categorised as high risk, such as those relating to education, employment, and welfare, will need to undergo a conformity assessment as well as meet numerous additional requirements. Limited and minimal risk applications are also subject to certain obligations, including labelling AI-generated content.

Whilst the UK and EU clearly hold diverging views on how best to regulate AI, there has undoubtedly been significant progress in the ongoing mission to ensure the ethical handling of personal data and of the AI systems that process and act upon it. Yet with the ICO favouring a hands-off strategy, and while some consider the UK’s pro-innovation approach a positive step for the AI landscape, the task of policing AI and protecting consumers in the UK now falls to novel civil lawsuits as they attempt to rein in this disruptive and revolutionary technology.

