In today's interconnected world, where data flows effortlessly across borders, ensuring its security and compliance is paramount. Our updates are your trusted source for the latest news, expert analyses, and practical tips on navigating the complex landscape of data protection.
Key trends
AI continues to be a key area of focus for data protection authorities. For example, authorities including the UK’s ICO, France’s CNIL and Hong Kong’s PCPD have issued updated guidance on the application of data protection rules to AI.
Long story short
UK:
- The Information Commissioner, John Edwards, publishes a speech on the challenges and opportunities of regulating generative AI, highlighting the Information Commissioner's Office’s (ICO) commitment to balancing data protection with innovation.
- The ICO issues a report titled "Learning from the Mistakes of Others," urging organisations to enhance cyber security in response to an increase in cyber breaches, particularly in the finance, retail, and education sectors.
- The ICO and Ofcom issue a joint statement committing to enhanced collaboration to protect online users, introducing "Collaboration Themes" for cooperation in online safety and data protection.
- In Harrison v Cameron & anr [2024] EWHC 1377 (KB), the court rules on the extent to which data controllers must disclose specific recipients of personal data in response to Data Subject Access Requests (DSARs), considering the rights and freedoms of the recipients.
- The ICO's Freedom of Information and Transparency Director, Warren Seddon, reviews the past year, noting an all-time high in complaints and detailing the ICO's strategies for managing the increased workload in Freedom of Information (FOI) requests.
EU:
- The French data protection authority (CNIL) publishes a Q&A on the EU AI Act, detailing its interplay with GDPR, clarifying application scenarios, compliance benefits, and emphasising transparency and documentation obligations.
- Noyb files complaints against Microsoft for GDPR breaches in its “365 Education” services.
- The European Data Protection Board (EDPB) cautions against the use of facial recognition technologies at airports, advocating for less intrusive methods and individual control over biometric data.
- The Italian data protection authority (Garante) updates guidelines on email management and metadata processing in the workplace, setting retention limits and requiring employer compliance measures.
- The German Data Protection Conference (DSK) releases an updated Standard Data Protection Model (SDM) to assist in GDPR compliance through technical and organisational measures and DPIA structuring.
- German data protection authorities prepare for the AI Act’s enforcement, highlighting the role of data protection authorities in overseeing high-risk AI systems.
- The National Commission for Data Protection (CNPD) in Luxembourg issues a warning for GDPR purpose limitation breaches after investigating a complaint about the misuse of video surveillance data for employee dismissal.
MIDDLE EAST:
- The Saudi Data & AI Authority (SDAIA) publishes draft guidance for public consultation on the criteria and responsibilities for appointing data protection officers in Saudi Arabia.
- The Kingdom of Saudi Arabia’s National Data Governance Platform has been enhanced ahead of the KSA Personal Data Protection Law (PDPL) enforcement date, including tools for DPO appointments and privacy impact assessments, now also open to private entities.
ASIA:
- China’s National Technical Committee 260 on Cybersecurity (TC260) introduces a draft guide for public consultation to aid personal information processors in identifying sensitive personal information, emphasising the need for higher protection and stringent processing requirements.
- Singapore's AI Verify Foundation and the Infocomm Media Development Authority release the Model AI Governance Framework for Generative AI, aiming to encourage a trusted ecosystem through a balanced approach that addresses generative AI concerns while promoting innovation.
- Hong Kong's Privacy Commissioner for Personal Data issues the Artificial Intelligence: Model Personal Data Protection Framework to guide organisations in AI procurement and use in compliance with the Personal Data (Privacy) Ordinance (PDPO), covering strategy, risk assessment, customisation, and stakeholder engagement, recommended as best practice.
Find the latest news regarding contentious risk relating to data and privacy on our blog Updata.
Regional updates
UK
The Information Commissioner’s Views on Emerging Technology
In a recent speech at New Scientist’s emerging tech summit, John Edwards, the UK's Information Commissioner, addressed the challenges of regulating the use of personal data by fast-growing technologies, specifically generative AI. Edwards highlighted the concerns and opportunities this growth presents, particularly regarding data protection and the use of personal information by such technologies.
The speech outlined the ICO's commitment to balancing the protection of personal information with the encouragement of innovation. Edwards introduced the ICO's Innovation Advice Service and Regulatory Sandbox as resources for organisations to ensure compliance with data protection laws while developing new technologies. He also referenced the ICO’s ongoing consultations on generative AI, the most recent of which focuses on individual rights when training and deploying generative AI models. He also called on technology companies to embrace the principles of "data protection by design and default" and integrate data protection from the outset of product development.
Of course, the ICO is not the only regulator focusing on the implications of advances in AI and Edwards pointed to the ICO's collaboration with other regulators through the Digital Regulation Cooperation Forum (DRCF) which is intended to provide a unified regulatory approach, benefiting companies navigating the complexities of digital product development. One interesting facility that the DRCF offers is an AI and Digital Hub which provides informal guidance on regulatory questions related to AI and other digital technology.
The ICO issues a call to action amid rising cyber attacks
The ICO has recently issued a statement urging all organisations to enhance their cyber security in order to protect the personal information they hold. This call follows an increase in the number of cyber breaches reported to the ICO, with over 3,000 reported in 2023, particularly in the finance, retail, and education sectors.
The ICO has also shared a new report, "Learning from the Mistakes of Others", which provides insights into common security mistakes and practical advice for enhancing security measures. It identifies five main causes of cyber security breaches:
- Phishing: where fraudulent messages trick users into sharing passwords or downloading harmful software.
- Brute Force Attacks: where criminals use trial and error to guess login details or encryption keys.
- Denial of Service: where criminals overload a website or network to disrupt its normal functioning.
- Errors: where security settings are misconfigured due to poor implementation, lack of maintenance, or unchanged default settings.
- Supply Chain Attacks: where criminals compromise an organisation's products, services, or technology to infiltrate their systems.
The ICO emphasises that organisations must report any data breach from a cyber attack within 72 hours of becoming aware of it, unless it is unlikely to pose a risk to affected individuals.
To support you in managing such incidents, our Cyber Response+ service is available 24/7 worldwide, ready to assist you in navigating and mitigating the effects of a data breach. Please feel free to reach out to Lawrence Brown or Robert Allen if you would like to sign up.
The ICO and Ofcom issue a joint statement on a collaboration to regulate online services
The ICO and Ofcom, as the UK’s primary regulators for online safety and data protection, have issued a joint statement in which they commit to enhancing their collaboration to protect online users more effectively. The statement builds on an earlier version from 2022 and outlines an approach for cooperation across various areas.
The statement introduces an initial set of “Collaboration Themes” as areas of shared interest between the ICO and Ofcom, focusing on topics relevant to both online safety and data protection. These themes, which will be regularly reviewed and updated, include age assurance, profiling and other “proactive tech”, and privacy settings for children, among others.
The statement also details the circumstances in which the regulators may exchange information about relevant issues and companies of mutual interest. It includes a warning that, subject to compliance with the regulators’ statutory duties, information may be shared between them without prior notification to service providers. The collaboration aims to promote compliance, support innovation, and ensure regulatory clarity, thereby fostering the continued growth of online services by removing unnecessary regulatory burdens.
Data subject access requests (DSARs) and the requirement to disclose specific recipients of personal data
Harrison v Cameron & anr [2024] EWHC 1377 (KB) is as important to DSARs as the facts are colourful. A property investor (Harrison) made a number of lurid threats on the telephone to his landscape gardener (Cameron), who recorded the conversations and disseminated the recordings amongst friends and contacts. On discovering this, Harrison issued a DSAR against Cameron and his company, Alasdair Cameron Ltd (‘ACL’).
The DSARs were rejected by the defendants on the basis that (1) Cameron was processing the data in his personal capacity (so the UK GDPR Art 2(2)(c) ‘personal and household exemption’ applied); (2) Cameron was not a controller in any event; and (3) further to UK GDPR Art 15(4) and Schedule 2, paragraph 16 DPA 2018, ACL was exempt from disclosing information identifying a third party who did not consent to the disclosure (or whom it was not reasonable to ask).
The court found that Mr. Cameron, in recording and sharing conversations with Mr. Harrison, was acting in his capacity as a director of ACL, not in a personal capacity. This defeated the ‘personal and household exemption’, but it also meant that he was not a “controller” of the data under the UK GDPR. Consequently, the claim against Mr. Cameron was dismissed.
The more interesting decision concerns the data subject’s entitlement to know “the recipients or categories of recipient” (Article 15(1)(c) UK GDPR) to whom the personal data has been disclosed. It is common practice for data controllers, in responding to DSARs, to withhold anything that might identify third parties, including the third parties who have received the data subject’s data. This practice was challenged in the EU in the Österreichische Post decision – but data controllers in the UK were unaffected – until now. The judge applied Österreichische Post approvingly, and ruled that it is for the data subject, rather than the data controller, to elect whether they wish to receive information concerning the specific recipients of the personal data, rather than merely the categories of recipient.
However – there is an important moderating element to this ruling. The judge held that, despite the requirement to disclose specific recipients if the data subject so demands, there may be circumstances where that disclosure can be avoided because it would adversely affect the rights and freedoms of the recipients involved. Here, given the nature of Harrison’s threats, disclosing the recipients' identities could adversely affect their rights and freedoms, considering Mr. Harrison's “sustained and menacing behaviour”. It was also relevant to consider threats of hostile litigation against recipients of the recordings. Consequently, ACL's assessment, which led to the decision not to disclose the recipients' identities to Mr. Harrison, fell within its discretion as the data controller and was reasonable in the circumstances. Therefore, the claim against ACL was also dismissed.
The important question – and takeaway – from this decision is the extent to which data controllers can be justified in withholding the identity of specific recipients of the data. In Harrison v Cameron there were clear, explicit threats. But what about less extreme potential effects on rights and freedoms? For example, a less explicit, inferred threat of reprisals? What about embarrassment or difficulty that might ensue amongst work colleagues that could emerge if specific recipients were to be identified? Or knock-on effects to personal and family life? Or making inferences purely from the motive (or assumed motive) of the data subject? Ultimately, these boundaries will only be drawn by the courts (if at all) and until that time data controllers will retain a good deal of latitude in considering whether to disclose specific recipients or not.
The ICO’s Freedom of Information and Transparency Director provides an update – the FOI year in review
Alongside its data protection regulatory responsibilities, the ICO regulates compliance with the UK Freedom of Information Act 2000 and the Environmental Information Regulations 2004, each requiring public authorities to provide information to the public on request, subject to certain exemptions. The ICO’s director of Freedom of Information and Transparency, Warren Seddon, provided an overview of what he describes as a “heck of a year for the FOI team”. Over the past year, the ICO received an all-time high of over 8,000 complaints (higher than the previous high of 6,418 in 2018/19) which reflects an increase in the overall volume of requests received by public authorities.
To help close down cases more quickly, the ICO will now provide public authorities who are in breach of the FOI legislation with 30 days to comply with a decision, bringing the period into line with that used for information notices. Additionally, for the past 12 months, the ICO has been applying a new process for prioritising cases based on whether they are in the public interest. While generally positive, this has not made a significant difference to the speed at which cases are closed.
The ICO noted its innovative use of statutory information notices to explore the use of public interest extensions, its effective engagement with a university to clear a backlog of requests without needing to take further enforcement action, and its development of a process for using its power to refer a public authority for contempt of court if it fails to comply with a statutory decision or notice.
What can organisations take from the above? Over 20 years after it first came into force, the FOI legislation is as well-used as it has ever been, and people are ready to complain if they consider their requests have been mishandled. In the face of a growing caseload, the ICO is ready and able to use all of the statutory powers at its disposal (as well as more collaborative approaches) to clear cases quickly.
EUROPE
The CNIL publishes a Q&A on the EU AI Act
On 12 July 2024, the CNIL published a first Q&A on the newly introduced AI Act, which aims to regulate AI systems within the EU. In it, the CNIL offers guidance on the interplay between the AI Act and GDPR requirements.
First, it details the cases in which each regulation applies: the AI Act alone where, for example, a high-risk AI system provider requires no personal data, either for its development or its deployment; the GDPR alone where personal data is processed to develop or use an AI system that is not subject to the AI Act; or both where a high-risk AI system requires personal data for its development or deployment.
The CNIL then details how compliance with the AI Act can facilitate, or even prepare for, GDPR compliance by addressing specific tensions that previously existed between its requirements and those of the GDPR, such as replacing certain GDPR rules for biometric identification in public spaces. The AI Act now allows the processing of sensitive data to detect biases and permits the reuse of personal data in “regulatory sandboxes” aimed at public interest projects.
Finally, it elaborates on the common obligations to both regulations in terms of transparency and, above all, documentation, notably in terms of impact analysis. For example, the GDPR includes transparency obligations, particularly to inform individuals whose data is being processed and how it is being processed. The AI Act provides for very similar transparency measures.
It concludes by explaining that the AI Act and the GDPR present strong similarities and complementarity, but their objectives and approaches differ.
Noyb files complaints against Microsoft alleging violations of data protection rights
A Vienna-based non-profit organisation, Noyb, has filed complaints with the Austrian data protection authority against Microsoft for breaches of the GDPR. The complaints target the “365 Education” services widely used by Austrian schools, alleging that they violate children’s data protection rights. Noyb points out that Microsoft obstructed the exercise of individuals' rights on the grounds that the schools are the data controllers, even though the schools are unable to comply with such requests because they have no control over the systems.
The EDPB adopts an opinion on airports’ use of facial recognition technologies
The EDPB has adopted an opinion cautioning against the use of facial recognition technologies by airport operators and airlines, highlighting the risks and advocating for less intrusive methods to ensure individuals' control over their biometric data. The EDPB also addressed the processing of biometric data at airports, recommending storage solutions that maximise individual control.
The Italian Garante updates guidelines on IT programs and services for electronic email management in the workplace and the processing of metadata
The Italian data protection authority, the Garante, has adopted an updated version of the guidelines entitled "IT programs and services for electronic email management in the workplace and the processing of metadata".
The stance taken by the Garante allows employers to retain, for a short period typically not exceeding 21 days, the metadata necessary for the email system's functionality (i.e. logs generated by email management and sorting servers, derived from data outside the email body, such as the subject, the sender and recipient, as well as other information related to the data in transit, like IP addresses and the size of attachments). Retention for a longer period is permissible only under "special conditions" that necessitate it for business operations or network security, in adherence to the protections outlined in Article 4, paragraph 1 of the Workers' Statute, which requires agreement with the workers' council or, in its absence, authorisation from the Labour Office. These protections apply because email metadata, as distinct from the content of the emails themselves, can enable employers to monitor employees' work activities remotely.
Given this context, it is essential for the employer and data controller to implement the following measures:
- ensure the default settings of mail systems allow the employer to manage basic settings, including limiting the retention period of metadata;
- update the data retention policy to reflect the retention of information within the body of email messages (not just their metadata) and align with company policies on the use of work tools, restricting the use of company email addresses to work-related purposes;
- update the privacy policy directed at employees to specify the applicable data retention period for processing email metadata;
- if retaining metadata for more than 21 days is necessary - which must be adequately justified under the principle of accountability - the following steps should be taken:
- enter into an agreement with the workers' council or, if absent, with the competent Labour Office;
- conduct a Data Protection Impact Assessment ("DPIA");
- perform a Legitimate Interest Assessment ("LIA") if data processing is based on the employer's legitimate interest.
Furthermore, the Guidelines emphasise that email service providers must also help ensure that employers can meet their data protection obligations. This involves balancing the need for large-scale marketing of their products with compliance with applicable principles, including improving the product offered in terms of its conformity to relevant regulations.
The German DSK publishes an updated version of its Standard Data Protection Model (SDM)
The SDM is a tool designed to assist in the selection and continuous evaluation of technical and organisational measures to ensure and demonstrate compliance with GDPR requirements for personal data processing. Now updated to SDM 3.1, the model systematises these measures based on protection goals, supporting the selection of appropriate measures exclusively for data protection-compliant design of processing activities. Companies can refer to the SDM for their planning and development of appropriate technical and organisational measures.
The new SDM is part of an iterative process involving legal assessment, design of processing operations, and selection and implementation of technical and organisational measures. It provides a transformation aid between law and technology, supporting ongoing dialogue between specialists in legal and technical-organisational fields. The process runs throughout the entire lifecycle of processing, supporting the GDPR's requirement for regular assessment and evaluation of technical and organisational measures. Finally, the SDM can be used to structure a Data Protection Impact Assessment (“DPIA”) as required by GDPR Article 35 for high-risk processing operations. It is aimed at both supervisory authorities and data controllers, helping the latter systematically plan, implement, and continuously monitor required functions and measures.
German data protection authorities prepare to implement the AI Act
The data protection authorities in Germany are preparing to act as integral supervisory authorities with respect to the AI Act, as suggested by the DSK in its position statement of 3 May 2024. This follows the publication of the AI Act in the Official Journal of the European Union; the Act will come into force on 1 August 2024. This also marks the start of the implementation deadlines, meaning companies using or developing AI in its various forms should review their compliance with the AI Act as soon as possible or risk significant fines. The key dates and relevant updates which member states and other authorities must be aware of include:
- 2 February 2025: Prohibitions on certain AI practices take effect (Article 5 AI Act). This includes a general ban on real-time biometric surveillance systems in public spaces for law enforcement and the prohibition of social scoring, which uses AI to evaluate behaviour and link it to social disadvantages, such as exclusion from public services.
- 2 August 2025: Member States must enact a law designating general market surveillance authorities to enforce the AI Act. These authorities must be independent, impartial, and adequately resourced to ensure effective implementation.
- Market Surveillance and Enforcement: Data protection authorities will oversee the market for high-risk AI systems in sectors like law enforcement, judicial administration, migration control, and AI influencing elections (Article 74 (8) AI Act). This oversight extends to software companies, cloud services, and security firms offering AI systems to these sectors.
Luxembourg’s data protection authority issues a warning on the use of data resulting from video surveillance in the workplace
The National Commission for Data Protection (CNPD) has investigated a complaint regarding the use of a video surveillance system by an employer, which led to the immediate dismissal of an employee. The investigation found that the video surveillance images, captured by a system originally installed for security purposes, were used to justify the dismissal, thereby violating the GDPR's principle of purpose limitation. The CNPD concluded that the municipal administration breached Article 5(1)(b) of the GDPR, which requires data to be collected for specific and legitimate purposes, and issued a warning to the administration for this violation.
MIDDLE EAST
The SDAIA publishes draft guidance for DPO appointments
On 8 July 2024, as part of a public consultation, the SDAIA released draft rules for appointing a data protection officer in the Kingdom of Saudi Arabia (KSA). These rules set out additional details and clarity on topics such as appointment thresholds and criteria, roles and responsibilities, and competency standards. The public consultation ends on 6 August 2024.
The KSA National Data Governance Platform shapes up
In our previous newsletter, we introduced the KSA's National Data Governance Platform and the Rules for the National Register of Controllers, designed to streamline registration ahead of the KSA Personal Data Protection Law (PDPL) enforcement date on 14 September 2024. Since then, the platform has been significantly upgraded with new PDPL compliance resources, FAQs, e-services, and tools for DPO appointments and privacy impact assessments. It now also opens registration to KSA private entities, extending beyond the initial public entity focus. While guidance for foreign-based controllers is still pending, we anticipate receiving this and further PDPL-related guidance from SDAIA before the enforcement deadline.
ASIA
China’s TC260 releases a draft guide on the identification of sensitive personal information
On 11 June 2024, the TC260 released the draft Cybersecurity Standard Practice Guide – Guide on the Identification of Sensitive Personal Information (the Guide) for public consultation.
The Guide is intended to serve as practical guidance for personal information processors in identifying the specific data types that fall within the scope of “sensitive personal information”. Under the PIPL and associated rules, sensitive personal information is subject to a higher level of protection, and its processing and transfer are subject to more stringent requirements (for example, the obligation to obtain separate consent and to perform a personal information protection impact assessment). It is therefore important for personal information processors to accurately identify and classify sensitive personal information in their data pools and apply appropriate compliance measures.
The definition of “sensitive personal information” under the PIPL is rather broad and general, and the Guide provides useful references on the common categories and examples of sensitive personal information. Although the Guide is released as a technical standard of no mandatory effect, such standard documents are widely adopted by market players to direct their data compliance practice. The Guide is open to public comments until 24 June 2024.
Singapore’s IMDA releases their Model AI Governance Framework for Generative AI
On 30 May 2024, IMDA released their Model AI Governance Framework for Generative AI (the Framework). The Framework aims to provide a systematic and balanced approach for addressing concerns related to generative AI, while also promoting innovation. It calls for a collective effort from policymakers, the industry, the research community, and the public to address these challenges. The Framework outlines nine key dimensions for creating a trusted ecosystem, which it proposes should be considered in totality, including accountability, trusted data for AI training, and content provenance, to enhance the governance of generative AI.
Developed in consultation with approximately 70 organisations, the Framework by the AI Verify Foundation and IMDA balances the need to address generative AI risks with the need to encourage innovation. It builds upon a previous framework from 2019, which focused solely on traditional AI. Unlike traditional AI, which analyses given data, generative AI can create new content from extensive data sources. This advancement underscores the necessity for regulators and publishers to implement technical solutions mentioned in the Framework, such as digital watermarking and cryptographic provenance, to track and verify the origin of digital content and identify AI-generated or modified content.
Hong Kong’s PCPD issues Model Personal Data Protection Framework on AI
On 11 June 2024, the PCPD issued its “Artificial Intelligence: Model Personal Data Protection Framework” (Model Framework). The Model Framework is intended to assist organisations in procuring, implementing and using AI (including predictive AI and generative AI) in compliance with the relevant requirements under the Personal Data (Privacy) Ordinance. This aligns with the current trend, where many small and medium-sized enterprises purchase off-the-shelf solutions rather than building and developing AI systems from scratch. In sum, the Model Framework covers recommended measures in the following four areas: establishing AI strategy and governance; conducting risk assessment and human oversight; customisation of AI models and implementation and management of AI systems; and communication and engagement with stakeholders. Whilst the Model Framework is not legally binding, organisations are recommended to adhere to its principles as a matter of best practice.

















