Navigating the cross-over between data protection and AI regulation

As stringent rules governing data protection and artificial intelligence collide, how can businesses comply?

15 April 2025

Publication


Data is the fuel that powers artificial intelligence (AI). Without data, there's nothing much for AI systems to be intelligent about.

The UK and EU versions of the General Data Protection Regulation (GDPR) apply across sectors. And they potentially extend far beyond the UK and EU too. Organisations that operate any AI system to process personal data will likely need to comply with the GDPR, or other applicable data protection laws. Violations can attract huge fines. In 2023, Italy's data protection watchdog opened an investigation into ChatGPT creator OpenAI. It resulted in fines of €15m for breaching GDPR obligations around the use of personal data to train its model.

Data protection rules: On the regulatory watchlist

To help companies comply, the UK's Information Commissioner's Office (ICO) and the European Data Protection Board have issued a raft of guidance materials. Constantly evolving, these offer companies some support in interpreting existing law and preparing for incoming legislation.

The ICO guidance addresses Data Protection Impact Assessments (DPIAs), which allow for analysis of data flows and logic to identify "allocative" and "representational" harms, such as an AI recruitment tool that discriminates on the basis of gender.

"Organisations can't just step back from the AI black box and say they have 'no idea what's going on in there'," says Simmons & Simmons partner Lawrence Brown. "They must be able to explain, in a transparent way, how AI decisions and outputs are generated; the inferences made by the AI system; how new personal data is created, and the individual's rights over modified data. That can be really difficult to do."

Izzy Tennyson, a leading Supervising Associate at Simmons & Simmons, adds: "Understanding AI data flows, and ensuring transparency, are crucial for compliance and innovation. Businesses must be proactive in addressing these challenges to build trust and meet their regulatory requirements."

Supportive new concepts are emerging. The GDPR already identifies "legitimate interests" as a lawful basis under which a business may use personal data. Any business that relies on this basis must assess and balance the benefits of using personal data versus the individual's rights and freedoms. This can be both complex and uncertain. 

Now, the concept of recognised legitimate interests is proposed in the UK's Data (Use and Access) Bill, which is currently making its way through parliament. It could bring helpful clarity in identifying those situations where businesses can rely on the legitimate interests ground for data processing. Among the proposed examples is the use of AI in helping to safeguard vulnerable people.

"However, the further removed the data controller is from the original data source, the less likely it will be able to rely on the legitimate interests ground. In other words, it is most likely to be available in relation to personal data collected directly from the individual, and least likely where the personal data are obtained from public sources," explains Lawrence.

EU AI Act: a global gamechanger

AI, just like the Internet, the cloud and other tech innovations before it, is bound by rules and regulations that are technology neutral and aimed at all facets of society. They include data privacy laws, discrimination legislation, intellectual property rules and liability obligations.

However, the influence of AI is so monumental that it has its own technology-specific legislation. The EU AI Act came into effect in August 2024, with a potential impact outside the EU. It is the world's first comprehensive body of AI-specific law. It has cross-sector application, a clear risk-based approach, and hefty fines, of up to €35m or 7 per cent of global turnover, for breaches.

Helen Dixon is the former commissioner at Ireland's telecoms regulator, ComReg, and previously Ireland's Commissioner for Data Protection. She says: "The EU, in drawing up the EU AI Act, sought to balance what we know are the great benefits of AI, with its significant risks."

With its background in product safety legislation, the EU AI Act categorises AI according to risk. It prohibits AI used for practices such as social scoring or subliminal manipulation. It imposes onerous obligations on high-risk AI systems, such as those used in biometrics or law enforcement.

A last-minute addition to the legislation, prompted by the irrepressible rise of ChatGPT following its launch in November 2022, is the regime for so-called general-purpose AI (GPAI) models, which comes into effect this August.

Partner Andrew Joint specialises in technology at Simmons & Simmons. He acknowledges the challenges of "trying to legislate for a technology which is developing at a pace that seems faster than many types of tech that have come before." He points to a 140-page post-application guidance note on the definition of prohibited AI systems, which he says is "indicative of the challenges and the complex environment we're trying to work through."

In the AI space, Andrew observes that the current contracting environment is AI-supplier friendly. "Suppliers are restricting their liabilities to the narrowest of contractual obligations. As far as possible, they are attempting to shift legal and financial risks onto buyers, and will provide minimal commitments, if any, in relation to the EU AI Act or GDPR."

This "at your own risk" approach is fairly typical in the lifecycle of new technologies. Inevitably, the risks, liabilities and responsibilities become more nuanced and balanced as the market grows. For now, however, we are in a period of "buyer beware".

Compliance priorities for AI users

Businesses and procurement functions are becoming accustomed to accommodating GDPR obligations where their AI uses or interacts with personal data. Now, however, they must not only meet their GDPR obligations in relation to new AI technology but comply with the EU AI Act too. 

How to prepare for compliance:

  • Plan for appropriate due diligence on vendors that use AI to process personal data to understand the data flows and logic, the way inputs and outputs are generated, and inferences made.
  • In contract negotiations, seek appropriate contractual protections from AI system providers that deal with how the AI operates and compliance with legislative obligations.
  • Demand appropriate human involvement and oversight in how the AI operates and makes decisions.
  • Practise privacy by design. Incorporate data protection considerations and mechanisms at the earliest stage of AI system development, rather than retrofitting.
  • Conduct regular DPIAs to identify risks and ways to mitigate them as AI systems evolve.
  • Keep up to speed with evolving data protection rules and AI laws; engage with regulators and industry or legal experts; prepare to adapt.

Businesses are only just beginning to experiment with AI and are up against a massive convergence of legislation, rules, responsibilities and guidance, often drafted at speed. It creates a challenging compliance environment. But as Helen Dixon puts it: "The fog will lift a little and the rationale will become clearer as the EU AI Act is rolled out over the next 18 months. Crucially, GDPR principles are directly relevant to AI regulation and, when adhered to, should neither stymie innovation nor obstruct the use of AI."

Izzy Tennyson observes: "As AI technologies evolve, businesses must remain agile and proactive in their compliance strategies to ensure they are ready to adapt to new regulations and technological advances. Collaboration with stakeholders, including legal experts, technology providers, and regulators, is essential for navigating the complexities of AI compliance. By working together, businesses can ensure they are not only compliant but also at the forefront of ethical AI development."

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.