On 19 June 2025, the Irish Data Protection Commission (DPC) released its 2024 Annual Report ("Report"), offering critical insights into its regulatory activities and priorities. Among the most significant themes in this year's Report is the regulation of Artificial Intelligence (AI). As AI continues to transform industries, the DPC's focus on data privacy compliance as it relates to AI provides essential guidance for organisations involved in the development, procurement, or deployment of AI systems.
Key Highlights on AI from the DPC's 2024 Annual Report
Increased Regulatory Scrutiny on AI
The DPC has intensified its focus on AI, particularly in the context of data protection compliance. In 2024, the DPC engaged extensively with companies developing Large Language Models (LLMs) and other AI systems. The Report highlights that the DPC identified deficiencies in plans to train AI models using the personal data of individuals in the EU/EEA, which posed significant risks to individuals' rights and freedoms. These deficiencies included inadequate transparency, insufficient legal bases for processing, and a lack of safeguards to mitigate risks to data subjects. The DPC intervened in several cases to address these issues, and imposed over €652 million in administrative fines in 2024, demonstrating its commitment to ensuring compliance with the GDPR.
Harmonised EU Position on AI
Recognising the need for regulatory consistency, the DPC initiated a formal request for a statutory opinion from the European Data Protection Board (EDPB) under Article 64(2) of the GDPR. This marked the first time the DPC sought such an opinion, reflecting the strategic importance of achieving a harmonised approach to data privacy regulation in relation to AI across the EU/EEA. The EDPB's opinion, adopted in December 2024, clarified key issues, including whether personal data used to train AI models remains personal data in the AI model and how GDPR rights can be exercised in this context. The opinion also addressed the suitability of legitimate interests as a legal basis for processing personal data in AI systems.
The Report also underscores the importance of inter-regulatory cooperation in light of the EU's new digital legislative package, which includes the Data Governance Act, Digital Services Act, and Artificial Intelligence Act. Recognising the interconnected nature of these frameworks, the DPC established a new "Head of Inter-Regulatory Affairs" function to coordinate with other regulators, including Ireland's Digital Regulators Group. This collaboration ensures coherence between data protection and other regulatory obligations, particularly as personal data remains central to the digital economy. The DPC plans to prioritise inter-regulatory engagement in 2025 to strengthen relationships and address overlapping regulatory challenges effectively.
AI and Children's Data
The Report highlights the importance of protecting children and vulnerable individuals from harm arising from AI technologies. The DPC's enforcement actions against major technology platforms led to significant improvements in safeguards for children's data. For example, default privacy settings for minors were introduced, ensuring that children's personal data is set to private rather than public by default. These measures aim to protect children from risks such as identity theft, impersonation, and harmful interactions online.
Proactive Engagement with AI Developers
The DPC engaged with several major organisations to address compliance issues proactively. The Report highlights the DPC's proactive engagement with global platform providers to address compliance issues in the development and deployment of AI systems. For example, one provider voluntarily paused its plans to train Large Language Models (LLMs) using public content shared by users following DPC intervention. Similarly, the DPC launched an inquiry into another provider's foundational AI model to assess compliance with GDPR requirements, including the necessity of conducting a Data Protection Impact Assessment (DPIA). Additionally, the DPC worked with a gaming platform provider to enhance transparency and implement opt-out mechanisms for its voice evaluation AI system.
First Use of Section 134 Powers
In a landmark move, the DPC invoked Section 134 of the Data Protection Act 2018 to seek a High Court order prohibiting X (formerly Twitter) from processing personal data for training its AI tool, "Grok." This was the first time the DPC used its Section 134 powers, which allow it to act urgently to protect the rights and freedoms of data subjects. The DPC's intervention led to X agreeing to permanently cease processing EU/EEA user data for this purpose and to delete the datasets involved.
What Should Organisations Involved in AI Be Doing?
The Report provides a clear roadmap for organisations to align their AI activities with data protection laws. Below are practical steps for businesses to consider:
1. Conduct Robust Data Protection Impact Assessments (DPIAs): Guidance from regulators generally emphasises the importance of DPIAs for AI systems, particularly where processing involves high-risk activities such as profiling or the use of special category data. Guidance shared by the EDPB clarifies that DPIAs should be conducted early in the AI development lifecycle and include:
a systematic description of the AI system, including its purpose, data flows, and logic.
identification of risks such as allocative harms (e.g., bias in job opportunities) and representational harms (e.g., reinforcement of stereotypes).
documentation of trade-offs, such as balancing data minimisation with statistical accuracy.
measures to ensure meaningful human involvement in decision-making processes.
2. Enhance Transparency/Training: Transparency is a cornerstone of GDPR compliance. Organisations will need to provide clear and accessible information about how personal data is used in AI systems, both to the individuals concerned and to their own employees. As such, organisations should:
ensure organisational policies reflect the unique challenges posed by AI, including automated decision-making and the use of personal data in training AI models.
provide targeted training for employees on AI literacy and data protection, emphasising the importance of transparency, accountability, and ethical considerations.
update privacy notices to clearly explain how personal data is used in AI systems, including any automated decision-making processes.
tailor explanations to specific audiences, ensuring that data subjects understand the rationale behind AI-driven decisions.
3. Strengthen Procurement Processes: Organisations procuring third-party AI systems will need to build data protection considerations into their procurement processes. Organisations will need to:
conduct due diligence on third-party AI systems, focusing on:
the logic and data flows involved in the AI system.
the quality and accuracy of input data to mitigate risks of bias and inaccuracies.
ensure clear instructions for processors, specifying the scope of data collection and the form of outputs to avoid ambiguity.
4. Address Legal Basis for Processing: Guidance from regulators and engagement with AI developers underscores the need to carefully evaluate the legal basis for processing personal data in AI systems. This covers both the personal data contained in input data and any newly created or inferred personal data in outputs. If relying on legitimate interests, organisations should ensure that:
the interest is lawful and clearly articulated to data subjects.
the processing is necessary and not overridden by data subject rights.
5. Monitor AI Outputs: The DPC's enforcement actions highlight the importance of ongoing monitoring to ensure compliance. Organisations should establish processes to:
regularly review AI outputs to ensure they do not create new personal data or lead to unintended consequences for individuals.
assess the impact of AI-generated outputs on data subjects and take corrective actions where necessary.
6. Engage with Regulators: The DPC's proactive engagement with AI developers demonstrates the value of early dialogue with regulators. Organisations should consider consulting with the DPC or other relevant supervisory authorities during the development of AI systems. This can help identify and address compliance issues before they escalate.
Conclusion
The DPC's Report underscores the critical importance of embedding data protection principles into the development and deployment of AI systems. By taking proactive steps to address compliance challenges, organisations can not only mitigate regulatory risks but also build trust with stakeholders and unlock the full potential of AI in a responsible and ethical manner.