The General Data Protection Regulation (GDPR) and the AI Act are not miles apart. They serve complementary purposes: protecting consumers' fundamental rights and promoting trustworthy technologies.
Michael Will helped to negotiate the GDPR, which came into force in 2018. He believes it is possible to build on the synergies between the GDPR and the AI Act to create a stronger and more comprehensive regulatory framework, one that reduces the need to duplicate rules while enhancing efficiency and easing the compliance burden.
He sets the scene by pointing to Article 25 of the GDPR, which mostly imposes rules on tech users (data controllers and data processors) rather than on tech makers (software developers and system providers). When the AI Act entered into force in 2024, it regulated the AI products and technologies themselves, based on the level of risk they present. It therefore filled the gap the GDPR had left for AI systems and models. Together, the GDPR and AI Act create a more complete framework for digital accountability, with a greater impact than either could deliver alone.
GDPR and AI Act synergies
The AI Act is being rolled out in phases, which means the requirements for high-risk AI systems, like those used to screen job or loan applications, will not take effect until mid-2027. As organisations prepare to comply, Will says there are opportunities to leverage six synergies between the GDPR and the AI Act:
Check your prerequisites: The GDPR and the AI Act both require organisations to determine the conditions under which they can lawfully process personal data and use AI products.
Transparency: Under the GDPR, organisations must tell individuals how their personal data will be processed and used. Similarly, the AI Act requires developers and suppliers to inform users if they are interacting with an AI system, like a chatbot, or where content is AI-generated. Transparency helps to build consumer trust in tech-driven decision-making environments, like finance, recruitment and healthcare.
Governance: The GDPR requires certain organisations to appoint a data protection officer, who is responsible for GDPR training. While the AI Act contains no explicit provision requiring the appointment of an AI officer, it does establish the concept of AI literacy. By appointing an AI officer, organisations can demonstrate that someone is responsible for ensuring that anyone using AI in their role is trained and AI literate. Combining GDPR and AI training within the organisation can deliver shared efficiencies and promote accountability.
Assess your risks: Both regimes promote proportionate and risk-conscious compliance.
Article 35 of the GDPR requires Data Protection Impact Assessments (DPIAs) for high-risk data processing, like the use of facial recognition technology in public spaces. DPIAs assess the nature of the risks, how likely and severe they may be, and the safeguards needed to mitigate them.
Under the AI Act, AI systems are classified according to risk. For high-risk AI systems, risk assessments must demonstrate that the system meets safety and legal standards and does not produce bias, discrimination or unfair rejections.
Documentation: Evidence-based compliance and detailed records support transparency. A core GDPR obligation is to proactively protect personal data by implementing "appropriate technical and organisational measures". This means embedding data protection into the design, deployment and operation of AI systems from the outset.
Article 26 of the AI Act sets out obligations for deployers of high-risk AI systems, including the need to keep automatically generated system logs for at least six months. Deployers rely on AI-system developers and suppliers to make compliance with the Act technically possible, enabling them to evidence the ways in which personal data is processed.
Human oversight: The GDPR (Article 22) gives individuals the right not to be subject to decisions based solely on automated processing of their personal data. A similar principle in the AI Act (Article 14) demands human oversight for high-risk AI systems.
Building on GDPR foundations
Seven years on from first implementing the GDPR, organisations have reached a level of maturity and enhanced understanding. Will believes unlocking these synergies will make compliance with the AI Act easier to achieve.
"You've focused on minimising your data protection risks and working out how best to meet your data protection obligations. You've become used to documenting what you do; and educating your people to be vigilant. And because Article 25 of the GDPR requires you to build data privacy into systems from the start - by design - and ensure that only necessary personal data is processed - by default - it should not be too difficult to become AI-Act-ready. You re-use elements that already exist in the GDPR and transpose them into your AI world, and you keep on building from there."
AI Act compliance is not about duplicating what you already do. Rather, it's about identifying crossovers with the GDPR and ways to leverage existing competencies to strengthen the organisational response. "We shouldn't see it as a burden or bureaucracy, but as a step towards proper control and transparency on digital issues," concludes Will.