Exploring Data Use by AI: Automated Decision-Making and Biometric Data in a Changing Regulatory Landscape

08 December 2025

Publication

Introduction

Artificial intelligence (AI) is rapidly moving from the realm of experimentation to real-world implementation across industries. As businesses deploy AI, they face a complex and evolving regulatory environment, particularly in relation to data protection, where the rules on automated decision-making (ADM) and the use of biometric data are key areas of focus. Drawing on recent insights from a panel featuring experts from Simmons & Simmons, the Information Commissioner’s Office (ICO), and EU specialists, this article explores the key challenges and practical strategies for organisations navigating this landscape.

AI and Data Protection: The Regulatory Backdrop

The substance of data protection law in the UK and EU has not changed (with the UK exception summarised below). However, regulators in the UK and EU are quickly adapting their guidance to address new scenarios, particularly as AI systems increasingly rely on personal data for training and operation. Enforcement action against major AI players has already demonstrated that regulators are willing to act where data protection rules are breached, especially in relation to the use of publicly available personal data to train AI models.

In the UK, the staggered implementation of the Data (Use and Access) Act 2025 aims to balance business innovation with robust data protection, making the UK an attractive environment for AI development. The ICO is actively shaping statutory codes of practice on AI and ADM, with a focus on safeguarding individual rights and promoting public trust.

Automated Decision-Making: Transparency and Human Involvement

Automated decision-making is a particular area of regulatory focus in an AI context as, in many cases, AI can be deployed to take decisions without human involvement. Organisations are expected to “look inside the AI black box” and be able to explain the logic and data flows underpinning automated decisions. This often requires them to conduct Data Protection Impact Assessments (DPIAs), update records of processing activities, and ensure privacy notices are clear about how data is used.

Where no exception applies, the way to avoid the restrictions on ADM is to ensure meaningful human involvement in decision-making. Human oversight cannot be a mere formality: the individuals involved must have genuine authority to review and, if necessary, change automated decisions, based on all relevant data. This is especially relevant in contexts such as recruitment, where ADM can have significant effects on individuals' rights and opportunities.

Biometric Data: Benefits, Risks, and Compliance

Biometric data (personal data resulting from specific technical processing of the characteristics of a natural person that allows their unique identification, such as facial images, fingerprint or voice data) can enable enhanced security and convenience, from unlocking smartphones to streamlining border control. However, it also raises significant privacy risks, including the potential for mass surveillance and identity theft. Under the GDPR, biometric data is classified as special category data when it is processed for the purpose of uniquely identifying someone, and is therefore subject to stricter processing conditions and more robust security requirements than regular personal data.

Practical recommendations for organisations deploying biometric systems include:

  • Implementing strong security measures (eg, encryption, local storage)
  • Applying data minimisation principles—collecting only what is strictly necessary
  • Ensuring transparency and obtaining explicit consent where required
  • Conducting DPIAs before deployment to assess and mitigate risks

Regulatory Developments: UK, EU, and Beyond

The ICO’s AI and biometrics strategy, launched in June 2025, focuses on four high-risk areas: the development of a statutory code of practice for AI and ADM, the development of generative AI foundation models, ADM in recruitment and public services, and the use of facial recognition technology (FRT) by law enforcement. The strategy is evidence-based, drawing on consumer research and ongoing engagement with stakeholders.

In the EU, data protection authorities (DPAs) are increasingly sophisticated in their approach to AI, scrutinising the full lifecycle of AI systems—from data collection and training to deployment and deletion. However, there are notable differences in enforcement and interpretation across Member States. For example, the use of biometric technology in football stadiums for public safety purposes has been approved in Denmark but rejected in France.

The EU AI Act, whose requirements are being phased in, will introduce new compliance obligations, but the role of DPAs under the Act remains under discussion. In Germany, for instance, a single federal authority is expected to oversee AI compliance, with DPAs retaining a key role under the GDPR.

Key Takeaways for Businesses

1. End-to-End Compliance: Assess whether your AI and ADM activities fall within the scope of data protection rules, and consider the entire lifecycle—from data collection to decision-making and beyond.
2. Business Enablement: Treat compliance as a business enabler, building products and services that customers trust.
3. Clarity on Issues: Distinguish between legal, technical, governance, and political challenges, and ensure your teams have the necessary technical understanding.
4. Management Engagement: Senior management should embrace AI, understand its benefits and risks, and ensure appropriate governance and oversight are in place.

Conclusion

As AI continues to transform business operations, organisations must stay abreast of evolving regulatory expectations around data use, automated decision-making, and biometrics. By adopting a proactive, transparent, and technically informed approach, businesses can harness the benefits of AI while safeguarding individuals’ rights and maintaining public trust.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.