Data Protection Update May 2024

Our updates are your trusted source for the latest news, expert analyses, and practical tips on navigating the complex landscape of data protection.

02 May 2024

Publication

In today's interconnected world, where data flows effortlessly across borders, ensuring its security and compliance is paramount.

AI continues to be a key area of focus for the data protection authorities. For example:

  • The UK’s ICO continues to build on its consultation series on generative AI.
  • Singapore’s PDPC has recently published Advisory Guidelines on AI.
  • The French CNIL has shared its set of recommendations and good practices regarding the use of AI.

Long story short

UK:

  • The UK’s Information Commissioner’s Office (ICO) outlines its 2024-2025 priorities for children’s online data protection, focusing on default privacy settings, targeted advertising, recommender systems, and consent procedures for children under 13.
  • The ICO's new data protection fining guidance, published on 18 March 2024, provides clarity on its fining power, outlining a five-step fine calculation methodology, criteria for penalty issuance, and emphasising the importance of organisations prioritising data protection and compliance efforts.
  • The ICO initiates a consultation series on generative AI, so far focusing on lawful bases for web scraping, purpose limitation, and accuracy, aiming to provide detailed guidance on the application of UK data protection law to the development and use of generative AI technology.
  • The ICO joins the Global Cooperation Arrangement for Privacy Enforcement (Global CAPE), a platform established for international cooperation on cross-border data privacy enforcement matters, facilitating information sharing and promoting cooperation on global data protection and privacy laws.
  • The ICO issues guidance for health and social care organisations on maintaining transparency in handling sensitive personal data, advising on informing individuals about new technologies and processing activities, and encouraging the development of engaging materials to provide information to their audience.

EU:

  • The CJEU ruling C-340/21 states that data controllers must demonstrate adequate security measures during a data breach and allows data subjects to claim for distress over potential future data misuse.
  • The German Data Protection Conference (DSK) supports institutionalising the DSK in the draft German Federal Data Protection Act (BDSG-E), recommends further consultations for credit scoring and suggests removing the ban on fines against public authorities.
  • The European Data Protection Board (EDPB) states that "consent or pay" models in behavioural advertising, while not illegal, must meet strict compliance requirements, including offering a free alternative and specific, granular consent.
  • The Italian Data Protection Authority sanctions employers using facial recognition for attendance monitoring, stating that Italian law does not permit processing employee biometric data for this purpose.
  • The CJEU rules that IAB Europe's Transparency and Consent String (TC String) is personal data under the GDPR, and IAB Europe is a joint controller due to its influence over data processing.
  • The CJEU states that "non-material damage" under Article 82 of the GDPR requires actual damage from a regulation infringement for compensation claims, with damages determined by each Member State's rules.
  • The French Data Protection Authority (CNIL) issues recommendations for AI development in compliance with the GDPR, outlining seven steps including defining a purpose, legal qualifications, a legal basis, and conducting an impact analysis.

MIDDLE EAST

  • The Saudi Data & Artificial Intelligence Authority (SDAIA) launches a public consultation on its draft Data Sovereignty Public Policy, which outlines four principles aimed at preserving Saudi Arabia's data sovereignty, focusing on treating data as a national asset, data protection, data availability, and encouraging local and foreign investment.
  • The Saudi Data & Artificial Intelligence Authority (SDAIA) releases draft rules governing the National Registry of Controllers, outlining registration obligations for certain entities, including public entities and controllers dealing with personal and sensitive data, and detailing the registration process on the National Data Governance Platform.
  • The Saudi Data & Artificial Intelligence Authority (SDAIA) releases draft amendments to the PDPL Data Transfer Regulations for public consultation, proposing changes including expanding permitted cross-border data transfer purposes, introducing a new transfer mechanism of "Appropriate Safeguards", and outlining circumstances for conducting a transfer risk assessment.

ASIA

  • The Personal Data Protection Commission (PDPC) in Singapore issues guidelines on the use of personal data in AI recommendation and decision systems, clarifying the application of the Personal Data Protection Act 2012 (PDPA) to AI systems and outlining potential exceptions for AI developers and obligations for service providers.
  • The Cyberspace Administration of China (CAC) issues the Provisions to Promote and Regulate Cross-Border Data Flows, introducing exemptions to certain compliance requirements under China's cross-border data transfer regime, including small-scale transfer, necessity to perform a contract, human resource management, emergency situations, and negative lists in free trade zones.

Must reads / must listens

Find the latest news regarding contentious risk relating to data and privacy on our blog, Updata.

Regional updates

UK

The ICO outlines its 2024-2025 priorities for children’s online safety.

The UK's Information Commissioner's Office (ICO) has outlined its priorities for 2024-2025 to further protect children's personal data online, and has said that it will take a closer look at the following key areas in particular:

  • Default settings for privacy and geolocation: The ICO mandates that children's geolocation settings should be turned off by default and their profiles set to private to prevent them from being tracked or targeted.

  • Targeted advertisements for children based on profiling: The ICO suggests that targeted advertising should be off by default to prevent children from losing control over their information and to avoid potential harms such as financial losses.

  • Recommender systems' use of children's information: The ICO warns that recommender systems, which suggest content based on data analysis, could lead children to harmful content and encourage increased data sharing.

  • Children aged 12 years and younger: The ICO emphasises the need for services to consider how they gain consent and use age assurance technologies to protect children under 13, who cannot legally give their own consent for data processing.

The ICO urges organisations to take practical steps in line with these objectives. Organisations processing children's personal data online should carefully consider the four key areas of focus and take real, practical steps to minimise the risks the ICO has identified.

You can read our more detailed update here.

The ICO issues new fining guidance

The ICO’s new data protection fining guidance, published on 18 March 2024, may sound familiar to veterans of other regulatory fining guidelines. While the guidance does not appear to represent a change of approach by the ICO, it is designed to offer transparency and clarity about how the ICO uses its fining power, and it contains some important clarifications worth noting. The guidance is the result of a consultation process and replaces the previous sections on penalty notices in the ICO Regulatory Action Policy from November 2018.

Reading the guidance in full will be valuable, but until you get around to doing that, our main observations are:

  • Penalty issuance criteria: The ICO will assess the seriousness of an infringement based on its nature, gravity, duration, whether intentional or due to negligence, and the categories of personal data affected. Other considerations include any mitigating actions taken, previous infringements, cooperation with the ICO, and the effectiveness, proportionality, and dissuasiveness of a fine.

  • Fine calculation methodology: A five-step approach will be used, starting with assessing the infringement’s seriousness, considering the organisation’s turnover (especially for larger entities), determining a starting point for the fine based on seriousness and turnover, adjusting for aggravating or mitigating factors, and ensuring the fine meets the objectives of effectiveness, proportionality, and dissuasiveness (a short illustrative sketch of these five steps follows this list).

  • Seriousness and categories criteria: When assessing the seriousness of an infringement and the categories of personal data affected, the ICO will pay particular attention not only to special category and criminal offence data under the UK GDPR, but also to data included in private communications, state-issued ID, location data and financial data (amongst other things).

  • Fines: As well as fines arising out of data breaches, the ICO may also fine where a business has not responded adequately to an information request or has not complied with an enforcement notice. The level of fine may also be affected by other types of interaction with the ICO: if a business takes action to address and ameliorate any damage done as a result of a data breach before the ICO starts its investigation, this may reduce the level of the fine. Conversely, it is likely to be an aggravating factor if the ICO finds out about a breach from a source other than the data controller / processor.
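For readers who find a worked process easier to follow, the sketch below walks through the five-step methodology in Python. It is a minimal illustration under stated assumptions: apart from the higher-tier UK GDPR maximum (the greater of £17.5 million and 4% of worldwide annual turnover), every scale, weight and adjustment figure here is a placeholder, not a number taken from the guidance.

```python
# Illustrative walk-through of the ICO's five-step fine methodology.
# All numeric bands and adjustments below are placeholder assumptions;
# the actual figures are set out in the ICO's guidance itself.

HIGHER_TIER_MAX_GBP = 17_500_000  # statutory maximum for higher-tier infringements


def calculate_fine(seriousness: float, annual_turnover_gbp: float,
                   adjustment: float) -> float:
    """seriousness: 0.0 (least) to 1.0 (most serious)  -> step 1.
    adjustment: net effect of aggravating (+) / mitigating (-) factors -> step 4."""
    # Step 2: account for turnover; for large entities the applicable maximum
    # is the higher of the statutory figure and 4% of worldwide turnover.
    applicable_max = max(HIGHER_TIER_MAX_GBP, 0.04 * annual_turnover_gbp)

    # Step 3: derive a starting point from seriousness and the maximum.
    starting_point = seriousness * applicable_max

    # Step 4: adjust for aggravating or mitigating factors.
    adjusted = starting_point * (1 + adjustment)

    # Step 5: check the result is effective, proportionate and dissuasive;
    # capping at the applicable maximum stands in for that check here.
    return min(adjusted, applicable_max)


# Hypothetical example: moderately serious breach, large turnover, some mitigation.
print(f"£{calculate_fine(0.3, 500_000_000, -0.1):,.0f}")
```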

Also in the guidance, the ICO emphasises the importance of organisations prioritising data protection, staying informed about regulation updates, conducting regular risk assessments, investing in employee training, implementing robust security measures, engaging legal counsel when necessary, and maintaining thorough documentation of compliance efforts.

Given recent challenges to fines (the most high-profile, though by no means the only, example being the First-tier Tribunal’s quashing of the ICO’s fine against Clearview AI), it is not surprising that the ICO has refreshed and clarified its fining procedure in such detail.

The ICO continues to build on its consultation series on generative AI, now covering three main areas of focus

The ICO has launched a consultation series on generative AI, examining how aspects of UK data protection law should apply to the development and use of generative AI technology. The ICO is seeking views from a range of stakeholders, including developers and users of generative AI, legal advisors, consultants, civil society groups, and other public bodies with an interest in generative AI. The output of the consultation series is likely to be a set of detailed guidance related to the development and deployment of generative AI and therefore it is likely to have broad application in the market.

The consultation series covers three main areas so far:

  • Lawful basis for web scraping (consultation closed 1 March 2024): Generative AI models are trained on large amounts of data, typically obtained from publicly accessible sources, either via direct web scraping or through third-party organisations. Insofar as the data used contains personal data, generative AI model developers must ensure their data collection complies with data protection laws and, in particular, has a lawful basis. Five of the six lawful bases are unlikely to be available for training generative AI on web-scraped data, and the ICO has therefore focused this consultation on the legitimate interests basis.

To meet this basis, developers must satisfy the three-stage legitimate interests test, demonstrating that there is a legitimate purpose, that the processing is necessary for that purpose, and that the purpose is not overridden by the individual's interests. In relation to defining the purpose, the ICO states that the developer must frame the interest in a specific, rather than open-ended, way, based on the information they can access at the time of collecting the training data; the ICO also asks how developers of general-purpose AI models can sufficiently articulate how their model is going to be used. In relation to the necessity test, the ICO helpfully acknowledges that, currently, most generative AI training requires large-scale data scraping. The balancing test assesses whether the interests, rights, and freedoms of individuals override those pursued by the controller or third parties, and developers must consider risk mitigations to pass this third stage. These considerations may vary depending on whether the model is deployed by the initial developer, by a third party through an API, or provided to third parties. In all cases, developers must identify and evidence a valid interest, work through the balancing test, and demonstrate how the identified interest will be realised and the risks to individuals mitigated.
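Since the three stages build on one another, it can help to see the test as a structured checklist. The sketch below is our own illustrative framing, not an ICO artefact; the class, its fields and the deliberately crude balancing rule are all assumptions.

```python
from dataclasses import dataclass, field

# Our own illustrative framing of the three-stage legitimate interests test;
# nothing here is taken from the ICO's consultation text.


@dataclass
class LegitimateInterestsAssessment:
    purpose: str                     # stage 1: the interest pursued
    purpose_is_specific: bool        # framed specifically, not open-ended
    processing_is_necessary: bool    # stage 2: no less intrusive alternative
    risks_to_individuals: list = field(default_factory=list)  # stage 3 inputs
    mitigations: list = field(default_factory=list)

    def passes(self) -> bool:
        # Stage 1: a legitimate, specifically framed purpose.
        if not (self.purpose and self.purpose_is_specific):
            return False
        # Stage 2: necessity for that purpose.
        if not self.processing_is_necessary:
            return False
        # Stage 3: balancing; a crude stand-in requiring a mitigation
        # for every identified risk.
        return len(self.mitigations) >= len(self.risks_to_individuals)


assessment = LegitimateInterestsAssessment(
    purpose="train a generative model for a defined product feature",
    purpose_is_specific=True,
    processing_is_necessary=True,  # large-scale scraping currently required
    risks_to_individuals=["loss of control over publicly posted data"],
    mitigations=["honour opt-outs", "filter special category data"],
)
print(assessment.passes())
```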

  • Purpose limitation (consultation closed 12 April 2024): The second chapter covers how the purpose limitation principle should be applied at different stages in the generative AI lifecycle. Purpose limitation requires organisations to have a clear, legitimate purpose for processing personal data, which must be specified and explicit. The generative AI model lifecycle involves several stages (e.g. training, fine-tuning and deployment), each with different types of personal data and purposes. Having a specified purpose at each stage allows an organisation to understand the scope of each processing activity and evaluate its compliance with data protection rules.

Developers may want to use the same training data to train multiple models, but they must ensure the purpose of training a new model is compatible with the original purpose of collecting the training data. A key part of determining that will be the reasonable expectations of the individuals whose data is involved in the training of the models. Developers without a direct relationship with the individual whose data is being used may use public messaging campaigns and privacy information to increase awareness, along with safeguards to mitigate potential negative consequences.

Similarly, one generative AI model can give rise to many different applications, meaning the developer may have specific applications in mind when training the initial model, or these may be specified afterwards and / or developed by a third party. The ICO considers that developing a generative AI model and developing an application based on such a model are different purposes under data protection law.

The purpose behind any data processing must be detailed and specific enough for all relevant parties to understand why and how the personal data is used, and the ICO regards defining a specific and clear purpose for each distinct processing activity as key to a data protection by design and by default approach.

  • Accuracy of training data and model outputs: The third chapter covers the accuracy principle, which requires organisations to ensure the personal data they process is accurate and up to date. However, the ICO helpfully clarifies that personal data does not always have to be accurate, depending on the purpose of processing. For instance, historical records or opinions may not be factually accurate. Moreover, the ICO states that the accuracy principle does not mean that generative AI models must be 100% statistically accurate. The required level of statistical accuracy depends on the model's use, with higher accuracy needed for models making decisions about individuals or which could otherwise have a material impact on individuals.

The ICO emphasises the importance of accuracy in preventing false information dissemination and ensuring decisions about individuals are not based on incorrect information. Developers and deployers of generative AI models need to consider the impact of training data on the outputs and how the outputs will be used. If inaccurate training data leads to inaccurate outputs with consequences for individuals, it likely violates the accuracy principle.

The ICO also highlights the importance of clear communication between developers, deployers, and end-users of models to ensure the model's application aligns with its level of accuracy. The ICO’s view is that developers should put in place controls to prevent the model's use for purposes requiring accuracy if the model is not sufficiently statistically accurate. They should also assess and communicate the risk of incorrect and unexpected outputs.
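The control the ICO describes in that last point can be pictured as a simple deployment gate. The sketch below is hypothetical: the use-case labels and accuracy thresholds are invented for illustration, and a real control would rest on a documented assessment rather than a lookup table.

```python
# Hypothetical gating control: block uses that demand high statistical
# accuracy when the model does not meet an assumed threshold. The labels
# and thresholds are invented for illustration.

REQUIRED_ACCURACY = {
    "creative_drafting": 0.0,       # outputs not relied on as fact
    "customer_triage": 0.90,        # outputs affect individuals
    "eligibility_decisions": 0.99,  # material impact on individuals
}


def permit_use(use_case: str, measured_accuracy: float) -> bool:
    threshold = REQUIRED_ACCURACY.get(use_case)
    if threshold is None:
        return False  # unknown purpose: refuse by default and escalate
    return measured_accuracy >= threshold


print(permit_use("customer_triage", 0.85))  # False: communicate the risk instead
```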

The ICO signs up to new cooperation framework

The ICO has joined the Global Cooperation Arrangement for Privacy Enforcement (Global CAPE), an arrangement created in October 2023 to enable national authorities to cooperate on cross-border data privacy enforcement matters.

Global CAPE was created by the Global Cross-Border Privacy Rules (CBPR) Forum, which in turn was established by participants in the APEC CBPR System. As we reported at the time, the UK became an Associate member of the Global CBPR Forum in June 2023.

Global CAPE members include major economies such as the United States, Australia, Canada, Mexico, Japan, the Republic of Korea, the Philippines, Singapore, and Chinese Taipei. The terms of reference describe the goals of Global CAPE as facilitating information sharing on a voluntary basis and promoting cooperation on the enforcement of global data protection and privacy laws, including but not limited to the Global CBPR Framework. Participating authorities may refer individuals’ complaints to other authorities and may notify other authorities of a “possible breach” of data protection laws in their jurisdiction. Participants are encouraged to share relevant information, including surveys of public attitudes, experiences of investigation techniques and regulatory strategies, and trends and developments in the types and numbers of complaints and disputes handled.

Organisations should note the creation of Global CAPE and the ICO’s accession to it, as it provides an additional channel for pursuing complaints and investigations, overlaying existing frameworks such as agreements linked to the US CLOUD Act and mutual legal assistance treaties. Many will welcome the increased harmonisation of enforcement practices and approaches that should arise from information sharing between participating authorities.

The ICO publishes new guidance on transparency in health and social care

The ICO has published guidance for health and social care organisations (including private and third sector organisations) on the steps they should take to keep people properly informed about how their personal data is being used. The rationale for the guidance is that organisations in the health and social care sector collect large volumes of highly sensitive personal data (including data relating to individuals’ health) and must handle it transparently in order to maintain trust and confidence in the services they provide and to avoid physical, material and non-material harm to affected individuals. Among other things, the ICO encourages organisations to inform individuals about the new technologies that they use (citing as an example Secure Data Environments that provide remote access to anonymised health information) and about new processing activities (such as setting up a new personal health record app or sharing a care record).

The concept of “transparency” in a data protection context means informing people how their personal data is being used. Under UK data protection law the main obligations are set out in Article 5(1)(a) (which sets out the transparency principle) and Articles 12-14 of the UK GDPR (the information required under those articles is referred to in the guidance as the “privacy information”). The ICO also refers to “transparency information”, meaning the “total range of information you should provide to comply with the transparency principle” (which may go into more detail about, for example, the approach taken to sharing data with third parties), and encourages organisations to provide this in addition to the specific information required under Articles 12-14.

On the practical steps that organisations can take to improve transparency, the ICO encourages organisations to understand the profile of their audience and develop engaging and tailored materials to provide information to them. It also emphasises the various ways in which transparency obligations can be complied with, both in digital and hard copy forms.

EUROPE

The CJEU rules on the burden of proof regarding adequate technical and organisational measures (TOMs) and on non-material damage in the light of a cyber attack.

The Court of Justice of the European Union (CJEU) ruling C-340/21 provides guidance on controller liability and non-material damage for data breaches under the EU General Data Protection Regulation (GDPR). It places the onus on controllers to prove that their security measures were adequate and allows data subjects potentially to recover for distress over future data misuse:

  • The case involved a cyberattack on Bulgaria’s National Revenue Agency (NAP) that resulted in personal data of over 6 million people being published online. One affected individual sued the NAP for non-material damages, claiming the NAP failed to implement appropriate security measures.

  • The CJEU ruled that the mere fact that unauthorised disclosure of or access to personal data occurred is not sufficient by itself to conclude the controller’s security measures were inappropriate under Articles 24 and 32 of the GDPR. Courts must concretely assess if the measures were appropriate for the processing risks.

  • The controller bears the burden of proof regarding the appropriateness of its security measures under Article 32 of the GDPR.

  • A controller is not automatically exempt from liability just because a data breach was caused by a third party like cybercriminals. The controller must prove it was not responsible for the event that led to the damage.

  • The CJEU held that a data subject’s fear about possible future misuse of their personal data resulting from a GDPR infringement can constitute compensable “non-material damage” under Article 82(1) of the GDPR, even without evidence of actual misuse yet.

The DSK publishes its opinion on the draft law amending the German Federal Data Protection Act (Bundesdatenschutzgesetz)

The Federal Government Cabinet (Bundeskabinett) introduced the draft German Federal Data Protection Act (BDSG-E) on 9 August 2023. In response to a CJEU ruling from 7 December 2023, the Bundeskabinett passed an updated draft on 7 February 2024. On 12 April 2024, the German Data Protection Conference (DSK), which comprises Germany's federal and state data protection authorities, issued an additional opinion on key aspects of the BDSG-E:

  • Institutionalisation of the DSK: The DSK supports institutionalising the DSK in the new Section 16a BDSG-E, but recommends additional provisions to strengthen its role in promoting consistent application of data protection law. This institutionalisation and strengthening of the DSK could lead to more unified supervisory decisions within Germany.

  • Scoring: Section 37a BDSG-E creates an exception to the prohibition in Article 22 of the GDPR on fully automated decisions, in cases where probability values are created or used in relation to a natural person concerning:

    • A specific future behaviour of the person for the purpose of deciding on the establishment, performance or termination of a contractual relationship with that person; or
    • The person's ability and willingness to pay, where the probability values are created or used by credit agencies, including information on claims.
  • The DSK cautions that it must be examined whether Section 37a of the BDSG-E complies with the requirements of Article 23 of the GDPR for restricting data subject rights.

  • The DSK recommends additional expert consultations to ensure credit scoring complies with the CJEU's ruling from 7 December 2023 that Article 22(1) of the GDPR generally prohibits subjecting individuals to solely automated decisions.

  • Possibility of fines against public authorities: The DSK suggests deleting Section 43(3) of the BDSG-E, which prohibits fines against public authorities. In practice, a need for fines in the public sector has emerged to sufficiently convey the severity of a violation to the supervised authority and incentivise actively preventing data protection violations.

EDPB issues Opinion on valid consent in the context of consent or pay models implemented by large online platforms

The Dutch, Norwegian and German (Hamburg) supervisory authorities requested that the European Data Protection Board (EDPB) issue an opinion on the circumstances and conditions under which “consent or pay” models relating to behavioural advertising can be implemented by large online platforms in a way that constitutes valid and, in particular, freely given consent. The opinion follows the CJEU judgment in Meta Platforms Inc. v Bundeskartellamt (Case C-252/21).

The EDPB does not prohibit the “consent or pay” model or consider it unlawful, but subjects the model to stringent compliance requirements:

  • Offering only a paid alternative to the service that includes processing for behavioural advertising purposes is in most cases not permissible;

  • A further, free-of-charge alternative should also be offered without behavioural advertising, e.g. with a form of advertising involving the processing of a lesser (or zero) amount of personal data;

  • Any fees imposed must not be such as to effectively inhibit data subjects from making a free choice or lead to prejudice, for example where data subjects who do not consent and do not pay a fee risk being excluded from services that are important or decisive for participation in social life;

  • Data controllers must assess, on a case-by-case basis, a number of factors, such as whether there is an imbalance of power with the data subject, the position of the large online platform in the marketplace, the existence of lock-in or network effects, the extent to which the data subject relies on the service, the main audience of the service, whether consent is required to access goods or services, and whether the processing is not necessary for the performance of the contract;

  • Data controllers must evaluate offering an alternative version of the service that does not involve consent to the processing of personal data for behavioural advertising purposes; and

  • When presented with a “consent or pay” model, the data subject must be free to choose which purpose of processing to accept (consent must be specific and granular), rather than being faced with a request for consent that bundles several purposes. Consent must then be reconfirmed at intervals.

The Italian data protection authority issues sanctions in relation to facial recognition used to verify employees’ attendance

The Italian data protection authority (Italian Authority) has recently issued a number of different sanctions against employers that implemented facial recognition systems for attendance monitoring in the workplace (provisions No. 9995680, 9995701, 9995741, 9995762, 9995785).

The Italian Authority’s analysis of the lawfulness of the processing is of particular practical interest: it points out that although, in the employment context, the purposes of verifying employees’ attendance and compliance with working time may rely on Article 9(2)(b) of the GDPR (i.e. necessity for carrying out the obligations and exercising the specific rights of the controller or of the data subject in the field of employment and social security and social protection law), the processing of biometric data is permitted only “in so far as it is authorised by Union or Member State law [...] subject to appropriate safeguards for the fundamental rights and interests of the data subject”. Currently, Italian national law does not provide for the processing of employees’ biometric data for the purpose of recording attendance. For that reason, the Italian Authority held that the controllers’ justification for using a biometric identification system in the ordinary management of the employment relationship could not be accepted. The Italian Authority also considered that less invasive measures could have been adopted, such as automatic controls by means of badges, direct checks, etc.

These provisions do not consider the possibility of relying on the employee's consent under Article 9(2)(a), even where the employee is given the option of a traditional alternative to biometric verification.

CJEU rules that the TC String constitutes personal data under the GDPR, as it can be linked to an identifier like the user's IP address

  • IAB Europe is a non-profit association based in Belgium, representing companies in the digital advertising and marketing industry at the European level. It has developed a Transparency & Consent Framework (TCF) which is a set of rules that allow the provider of a website or application, as well as data brokers or advertising platforms, to process personal data related to user profiles on a large scale, in order to facilitate the dissemination of targeted advertisements by these operators.

  • Once a user’s consent to the processing of personal data relating to their user profile is obtained, their preferences are encoded and stored in a string composed of a combination of letters and characters, designated by IAB Europe as the Transparency and Consent String (TC String), which is shared with personal data brokers and advertising platforms (a simplified illustration of this kind of encoding follows this list).

  • The CJEU has ruled that the TC String constitutes personal data under the GDPR, as it can be linked to an identifier like the user's IP address.

  • The CJEU also ruled that, by establishing the set of rules defining the methods of personal data processing, IAB Europe should be considered a joint controller if, given the particular circumstances of the case, it influences, for its own purposes, the processing of the personal data concerned and thus determines, jointly with its members, the purposes and means of such processing.
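To see why a consent string behaves as personal data once it can be tied to an identifier, a deliberately simplified toy encoding may help. The snippet below is not the real TCF TC String format, which is considerably richer; it simply packs per-purpose consent bits into a compact, shareable token.

```python
import base64

# Toy illustration only; NOT the real IAB TCF TC String format. It shows the
# general idea: per-purpose consent choices packed into a compact string.


def encode_consent(purpose_consents: dict, num_purposes: int = 10) -> str:
    bits = 0
    for purpose_id, consented in purpose_consents.items():
        if consented:
            bits |= 1 << (purpose_id - 1)
    raw = bits.to_bytes((num_purposes + 7) // 8, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()


def decode_consent(token: str, num_purposes: int = 10) -> dict:
    padded = token + "=" * (-len(token) % 4)
    bits = int.from_bytes(base64.urlsafe_b64decode(padded), "big")
    return {p: bool(bits >> (p - 1) & 1) for p in range(1, num_purposes + 1)}


token = encode_consent({1: True, 3: True, 4: False})
print(token, decode_consent(token))
```

Once such a token travels to brokers and platforms alongside an identifier such as an IP address, the preferences it records remain linkable to the individual, which is the crux of the CJEU's finding.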

CJEU ruling brings clarifications to the concept of compensable damage

  • On 11 April 2024, the CJEU handed down a decision on the interpretation of the right to compensation under Article 82 of the GDPR. A natural person complained that juris GmbH, a German company, had caused him damage through the various ways in which his personal data was used for marketing purposes despite his numerous objections.

  • The CJEU explained that “non-material damage” under Article 82(1) must be sufficiently established. The Court stated that an infringement of provisions which confer rights on the data subject is not, by itself, sufficient to establish that “non-material damage” has occurred. As a result, the claimant must establish actual damage resulting from an infringement of the regulation in order to claim compensation.

  • To determine the amount of damages a person is entitled to under Article 82 of the GDPR, the Court outlines that each Member State is to apply its own domestic rules on compensation. The Court also highlights that Article 82 has a compensatory, not punitive, function. Only the damage actually suffered is to be taken into consideration when determining compensation; the fact that several infringements have occurred is irrelevant.

The CNIL publishes a set of recommendations and good practices regarding the use of AI

The French Data Protection Authority (CNIL) issued its recommendations on the development of Artificial Intelligence in compliance with the GDPR on 8 April 2024.

The recommendations aim to assist actors in the AI sector with their compliance with data privacy laws. They are the result of a public consultation and provide clarification and guidance to address the stakeholders’ concerns about the development of datasets for AI, especially considering the challenges posed by the emergence of generative AI technologies in relation to GDPR compliance. The recommendations have also been designed to ensure alignment with the new requirements in the AI Act.

The recommendations concern (i) the development phase (i.e. the AI system conception, database set-up and learning), excluding the phase of deployment; and (ii) systems that involve the processing of personal data subject to the GDPR.

The recommendations include seven steps for the responsible development of AI systems, offering guidance in particular on the following aspects:

  • Definition of a purpose for the AI system, depending on whether the operational use could be identified at the development stage, or whether the AI system is developed for general use.

  • Determination of the legal qualification and responsibility of the actors involved in the development of an AI system and the processing of personal data, whether as a controller, joint controller, processor or subcontractor.

  • Definition of a legal basis, selected from the six legal bases provided for by the GDPR, that allows personal data to be processed through the AI system.

  • Carrying out tests and verifications to ensure that the processing is authorised by law in the event of data reuse, whether the data was originally collected for another purpose or collected from open sources on the internet.

  • Respect for the data minimisation principle when making AI system design choices, by selecting only relevant data for training and cleaning non-relevant data from the database, and also when using data collection techniques such as web scraping.

  • Definition of a retention period for the development phase of the AI system, but also for the maintenance and improvement of the AI system, in accordance with GDPR rules regarding data retention.

  • Carrying out an impact analysis (DPIA) when necessary, considering the new specific risks associated with AI systems, such as data misuse, data breaches, or processing that may lead to discrimination caused by bias in the AI system, as well as the other risk criteria introduced in the AI Act.

Further guidance is expected in the coming months. In particular, topics such as the legitimate interests legal basis, rights management, information provided to individuals and the risks involved in using such systems will be addressed in later guidance.

MIDDLE EAST

Saudi publishes draft Data Sovereignty Public Policy

On 10 March 2024, the Saudi Data & Artificial Intelligence Authority (SDAIA) started a month-long public consultation on its draft Data Sovereignty Public Policy.

The policy sets out four principles by which the Kingdom of Saudi Arabia (KSA) aims to preserve its “data sovereignty”:

  • Data as a National Asset

  • Data Protection

  • Data Availability

  • Encouragement of Local and Foreign Investment

The current draft is unclear on how its principles and requirements sit alongside SDAIA’s wider regulatory agenda, notably the Personal Data Protection Law, which came into force on 14 September 2023, recent developments in the KSA cyber rules, and guidance followed by certain sovereign players. However, the policy focuses on:

  • Data as a national asset for economic growth;

  • Protection of data held by entities which are part of the critical national infrastructure;

  • Regulating international transfer of data;

  • The use and availability of data by government entities; and

  • Attracting foreign investment and developing local companies working in the digital economy.

Saudi publishes draft rules for the National Register of Controllers

In April 2024, SDAIA released its draft Rules Governing the National Registry of Controllers within the Kingdom (Rules). The purpose of the Rules is to inform KSA controllers of their registration obligations on the National Data Governance Platform and to monitor their compliance. The introductory text of the Rules states that SDAIA will publish separate registration rules for controllers located outside the KSA in due course.

Most notably, Article 2 mandates that the following controllers must register in the National Register of Controllers (a simple sketch of this test follows the list):

  • Public entities;

  • Controllers whose main activity is based on the collection and processing of personal data; and

  • Controllers that collect and process sensitive personal data (e.g. criminal data, genetic data, or data concerning an individual’s racial or ethnic origin), where the processing is likely to entail a high risk to the rights and freedoms of data subjects.
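Taken together, Article 2 reduces to a simple disjunctive test. The sketch below is a minimal illustration; the function and parameter names are our own assumptions rather than terms from the draft Rules.

```python
# Minimal sketch of the Article 2 registration test under the draft Rules;
# parameter names are our own assumptions.


def must_register(is_public_entity: bool,
                  main_activity_is_data_processing: bool,
                  processes_sensitive_data: bool,
                  high_risk_to_data_subjects: bool) -> bool:
    return (
        is_public_entity
        or main_activity_is_data_processing
        or (processes_sensitive_data and high_risk_to_data_subjects)
    )


print(must_register(False, False, True, True))  # True: register on the platform
```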

The Rules also contain instructions on how to register on the National Data Governance Platform, as well as a requirement for the controller to appoint a “delegate” to undertake the registration on its behalf and other compliance tasks.

Saudi proposes amendments to Data Transfer Regulations

On 19 March 2024, SDAIA released draft amendments (Amendments) to the PDPL Data Transfer Regulations for a month-long public consultation.

Key changes in the Amendments include:

  • Widening the number of permitted cross-border data transfer purposes in accordance with Article 29(1)(d) of the PDPL;

  • Procedures and standards for assessing the level of personal data protection outside the KSA;

  • A new transfer mechanism of “Appropriate Safeguards”. This is defined as “requirements imposed by SDAIA on the controllers, ensuring compliance with provisions of the PDPL and its implementing regulations, when transferring or disclosing personal data to entities outside of the KSA in any case of exemption from conditions for availability of appropriate protection level of personal data or the minimum limit of personal data, as the case may be, with the aim of guaranteeing appropriate level of personal data protection outside KSA that shall not go below the level stipulated by the PDPL and its implementing regulations.”; and

  • Circumstances in which controllers must conduct a transfer risk assessment.

ASIA

PDPC issues advisory guidelines on AI in Singapore

The anticipated guidelines on the use of personal data in AI recommendation and decision systems were issued by the Personal Data Protection Commission (PDPC) on 1 March 2024. They are intended to provide clarity on how the Personal Data Protection Act 2012 (PDPA) applies to the use of personal data for developing and deploying AI systems that embed machine learning models. Although not legally binding, the guidelines are likely to be highly persuasive, as the PDPC can be expected to refer to them.

Under the guidelines, organisations that are AI developers may benefit from one of the exceptions that removes the need for consent under the PDPA, such as the business improvement exception and the research exception. Service providers designing bespoke AI systems, although acting only as data intermediaries, must still comply with the applicable obligations under the PDPA. The guidelines also include recommendations encouraging service providers, as technical experts, to support organisations in meeting their notification, consent and accountability obligations.

China releases new CBDT provisions to relax cross-border data transfer regime

On 22 March 2024, the Cyberspace Administration of China (CAC) issued the Provisions to Promote and Regulate Cross-Border Data Flows (Provisions), which took effect on the same day and introduced exemptions and relaxations to certain compliance requirements under China's cross-border data transfer regime.

Amongst other things, the following transfers of personal information can be exempted from the obligation to adopt any of the safeguard mechanisms required under the Personal Information Protection Law of China (PIPL):

  • Small-scale transfer: pre-conditions for this exemption include that: (i) the Data Exporter is not a Critical Information Infrastructure Operator as identified by the competent regulators; and (ii) counting from 1 January of the current year, the Data Exporter has transferred personal information of fewer than 100,000 individuals and no sensitive personal information outside of China (these cumulative conditions are sketched in code after this list). This exemption is particularly relevant to small and medium enterprises as well as larger companies conducting business-to-business activities, where the data transfer usually involves a relatively small number of data subjects.

  • Necessity to perform a contract with the individual: where the transfer of personal information is truly necessary to perform a contract to which the individual is a party. Typical examples mentioned in the Provisions include cross-border e-commerce, postal and delivery services, payment and remittance, account opening, overseas travel booking, and visa application and examination services. This exemption may provide particular relief for organisations that conduct cross-border retail and travel businesses.

  • Human resource management: where the transfer of personal information is truly necessary to conduct cross-border human resource management in accordance with employment rules and collective employment agreements established according to law. Multinational organisations that adopt collective employment agreements in China, for example in manufacturing and retail sectors, will no doubt benefit from this exemption. It is still unclear whether this exemption can be broadly interpreted to cover multinational companies using one-on-one employment contracts and performing “cross-border human resource management” based on internal HR policies.

  • Emergency: where the transfer of personal information is truly necessary under emergency situations for protecting the life, health and property of a natural person.

  • Negative lists in free trade zones: the Provisions also grant power to free trade zones (FTZs) to publish their own “negative lists” (subject to approval by the provincial cybersecurity authority and filing with the CAC and the national data management authority). Where a Data Exporter incorporated within an FTZ transfers personal information not on the FTZ’s negative list, the transfer will be exempted from adopting the safeguard mechanisms.
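As a practical illustration, the small-scale exemption in the first bullet reduces to three cumulative conditions. The sketch below encodes them; the parameter names are assumptions, and a real assessment would turn on the regulators' interpretation of each condition.

```python
# Illustrative eligibility check for the "small-scale transfer" exemption
# under the Provisions. Parameter names are assumptions; the conditions
# (non-CIIO, fewer than 100,000 individuals since 1 January of the current
# year, no sensitive personal information) come from the text above.


def small_scale_exemption(is_ciio: bool,
                          individuals_transferred_ytd: int,
                          includes_sensitive_pi: bool) -> bool:
    return (
        not is_ciio
        and individuals_transferred_ytd < 100_000
        and not includes_sensitive_pi
    )


print(small_scale_exemption(False, 42_000, False))  # True: exemption available
```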

You can read our more detailed commentary here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.