In today's interconnected world, where data flows effortlessly across borders, ensuring its security and compliance is paramount. Our updates are your trusted source for the latest news, expert analyses, and practical tips on navigating the complex landscape of data protection.
Key trends
The development of AI continues to be a significant area of focus for regulators, with the latest updates from the UK's Information Commissioner's Office (ICO), the European Data Protection Board (EDPB) and Hong Kong's Securities and Futures Commission (SFC).
Long story short
UK
- The Information Commissioner's Office (ICO) has published a response to its consultation series on generative AI, emphasising the need for greater transparency in the use of web-scraped data and the integration of individual rights in generative AI.
- The ICO has shared its guidance on key data protection considerations for organisations using AI to assist in recruitment.
- Global privacy authorities (including the ICO) have issued a joint follow-up statement on data scraping following industry engagement.
- A joint statement has been issued by the Financial Conduct Authority (FCA), the ICO and The Pensions Regulator (TPR) for retail investment firms and pension providers on the interaction between the communication requirements under TPR’s Code of Practice and guidance and the FCA’s Consumer Duty, and the requirements under UK data protection laws on direct marketing communications.
- The ICO has published a statement welcoming the proposed reforms to current data protection legislation set out in the Data (Use and Access) Bill.
- The ICO has issued new guidance for organisations and the public on the potential human impact of personal data breaches.
- The ICO and the Department for Science, Innovation and Technology (DSIT) have launched a new cost-benefit assessment tool for Privacy Enhancing Technologies (PETs).
EU
- The European Data Protection Board (EDPB) has adopted its first reports on the review of the EU-U.S. Data Privacy Framework.
- The EDPB has published guidelines on data transfers to third-country authorities and approved a new EU Data Protection Seal.
- The EDPB has published guidelines on “tracking” under the ePrivacy Directive.
- The EDPB has announced its 2024-2025 work programme.
- The EDPB has published its opinion on the processing of personal data in AI development and deployment.
- The French Data Protection Authority has issued a EUR 50 million fine against Orange for unlawful ad banners in users’ email inboxes and non-compliant use of cookies.
- The Italian Data Protection Authority has issued a decision against the transfer of personal data contained in newspaper archives to OpenAI.
- The Dutch Data Protection Authority has issued a EUR 4.75 million fine against Netflix for lack of transparency.
Asia
- Chinese regulators launched a campaign to enhance supervision over algorithm governance of network platforms.
- Singapore’s Ministry of Law and Ministry of Digital Development and Information launched a public consultation on proposed reforms to tackle online harms.
- The Protection of Critical Infrastructures (Computer Systems) Bill has been published in Hong Kong.
- Guidance for licensed corporations on adopting generative AI language models has been published in Hong Kong.
Middle East
- The Saudi Data & AI Authority has published guidance on handling personal data breaches.
Introducing CtrlTransfer
CtrlTransfer is an award-winning tool designed for cross-border data transfers, ensuring GDPR compliance. Created by Simmons & Simmons experts, it offers comprehensive country analysis, a simple scoring methodology, and regularly updated information. With its customisable Transfer Risk Assessment (TRA), clients can perform risk analysis, add mitigating factors, and optimise resource allocation, reducing unnecessary Data Protection Officer (DPO) input.
Here's what one of our clients has to say:
“For the past three years, we have been subscribed to CtrlTransfer. The tool is intuitive, provides reliable results validated by legal experts in the field and is perfect for risk assessing new data transfers for our business.” Close Brothers
If you want to find out more, contact your client partner or our products team: products@simmons-simmons.com
MUST READS
- Mobile Applications Under the Scrutiny of the CNIL, by Sarah Bailey and Emilie Danglades-Perez
- Game on: Navigating rules on global gaming, by Elia Kim
- Cybersecurity in Ireland: navigating NIS2, by Derek Lawlor and Izzy Tennyson
Regional Updates
UK
ICO publishes its outcomes report detailing its policy positions on generative AI
On 13 December 2024, the UK’s ICO published a response to its consultation series on generative AI.
The review identified issues in two main areas: the use of web-scraped data for training generative AI models and the integration of individual rights into these models. It found a lack of transparency within the industry, particularly concerning the origins of training data. The report suggests that generative AI developers need to be more open about how they use personal information, providing clear and accessible explanations to individuals and publishers. This transparency is essential for individuals to understand their rights and for developers to establish a lawful basis for using personal data.
Furthermore, the report highlights the importance of developing generative AI technology in a way that respects data protection laws and prioritises responsibility. It mentions the efforts made in consulting with stakeholders, including the government and international partners, to promote ethical AI development. The document also points to resources like the Regulatory Sandbox and Innovation Advice service for firms aiming to innovate responsibly. The focus will shift towards organisations that are not meeting these expectations, aiming to create a conducive environment for responsible AI development.
ICO sets out guidance on key data protection considerations when using AI to assist in recruitment
On 6 November 2024, the ICO shared some guidance on the key questions which organisations should ask when procuring AI tools to help with their employee recruitment.
In particular, the ICO emphasises that organisations considering AI tools for recruitment should carefully evaluate their compliance with data protection laws to prevent potential harm to jobseekers, such as unfair exclusion or privacy breaches. A recent audit of AI recruitment tool providers by the ICO apparently highlighted the need for improvements in fair and minimal processing of personal information and transparency in how candidate data is used. The regulator issued nearly 300 recommendations, all accepted or partially accepted, aiming to enhance compliance with data protection law. The ICO Director of Assurance, Ian Hulme, emphasised the importance of lawful and fair use of AI in recruitment, advising organisations to seek clear assurances from providers about their legal compliance.
Additionally, before procuring AI recruitment tools, the ICO states that organisations should conduct a Data Protection Impact Assessment (DPIA), determine a lawful basis for processing personal data, document responsibilities and processing instructions clearly, ensure bias mitigation, maintain transparency with candidates and limit unnecessary data processing. An upcoming webinar on 22 January 2025 will offer further insights into the audit findings and their application for AI developers and recruiters. You can register for that webinar here.
For more information: AI in Recruitment Outcomes Report
Global privacy authorities issue follow-up joint statement on data scraping after industry engagement
Following a Joint Statement from an association of global data protection authorities (including the ICO) in August 2023 on data scraping, the ICO and its counterparts from 15 other data protection authorities have engaged with a number of the largest global social media companies (SMCs) and issued a Follow-up Joint Statement that highlights further takeaway actions for organisations.
“Data scraping” involves copying publicly accessible data from websites. The August 2023 Joint Statement covered, in brief summary, the considerations for organisations scraping data, for SMCs, which owe responsibilities to individuals whose data is displayed on their websites (i.e. to prevent scraping by others, such as by detecting bots and blocking IP addresses), and for individuals. These considerations include both regulatory guidance on best practice and, in some cases, a reminder of legal obligations.
The Follow-Up Joint Statement highlights a range of key takeaway actions in addition to those in the August 2023 Joint Statement, including:
- There’s no silver bullet: Protecting data from unlawful scraping requires a combination of measures. These must be reviewed and kept up to date, in line with organisations’ obligations in relation to data security under data protection laws more generally.
- AI can be part of the problem and part of the solution: AI can be used by unlawful scrapers to avoid detection. It can also be used to detect and prevent unlawful scraping. Organisations therefore need to familiarise themselves on an ongoing basis with the threats and (protective) opportunities that AI presents.
- Contractual permissions aren’t enough on their own: Certain SMCs (and others) offer contractual rights for others to scrape. The Follow-Up Joint Statement makes clear that this is not the end of the story. Rather, obligations under data protection law to ensure that personal data are processed fairly (based on privacy notices) and lawfully (based on available lawful bases) and that appropriate security measures are in place must be complied with.
- Use of harvested data to train AI systems, such as Large Language Models (LLMs) must be compliant: Organisations must consider and comply with data protection, AI and other laws when using personal data gathered through scraping to train AI systems. Therefore, as a first step, they need to understand both how harvested data is being used and the laws that apply to them and then implement measures designed to ensure compliance.
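The bot detection and IP blocking mentioned among SMCs’ anti-scraping responsibilities often begin with simple rate-based heuristics. The sketch below is purely illustrative — the class name, thresholds and window size are our own assumptions, not drawn from either Joint Statement — and shows one common first layer: flagging an IP address that exceeds a request threshold within a sliding time window.

```python
from collections import defaultdict, deque
import time


class ScraperDetector:
    """Toy sliding-window rate limiter of the kind used as one layer
    of anti-scraping defence. Thresholds are illustrative assumptions."""

    def __init__(self, max_requests=100, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests
        self.blocked = set()

    def record(self, ip, now=None):
        """Record a request from `ip`; return True if the IP is now blocked."""
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        q.append(now)
        # Drop requests that have fallen outside the sliding window.
        while q and q[0] < now - self.window:
            q.popleft()
        if len(q) > self.max_requests:
            self.blocked.add(ip)
        return ip in self.blocked
```

Real deployments layer such heuristics with bot-fingerprinting, CAPTCHAs and behavioural signals; as the Follow-Up Joint Statement stresses, no single measure is a silver bullet.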
The FCA, ICO and TPR issue a joint statement for retail investment firms and pension providers
In response to requests from the retail investment and pension provider industries, the FCA, TPR and the ICO have published a Joint Statement on the interaction between the communication requirements under TPR’s Code of Practice and guidance and the FCA’s Consumer Duty, and the requirements under UK data protection laws on direct marketing communications.
The UK GDPR, the Data Protection Act 2018 and the Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR) set out requirements relating to direct marketing (defined as “the communication (by whatever means) of advertising or marketing material which is directed to particular individuals”), including the requirements to process personal data fairly and lawfully and to obtain opt-ins (unless defined exceptions apply) to communications and allow people to opt-out from communications. The ICO has also issued Guidance on Direct Marketing and Regulatory Communications to help firms to determine if communications that they are required to send based on the regulatory requirements that apply to them count as direct marketing.
By way of examples of when firms are required to communicate with their customers, the FCA’s Consumer Duty (and related requirements under the PRIN rules) require firms to equip their retail customers to make effective, timely and properly informed decisions. Firms are also required to communicate before the purchase of a product and at suitable points throughout its lifecycle. Importantly the Consumer Duty does not interfere with or replace the direct marketing requirements under UK data protection law. In a similar vein, the TPR’s Code of Practice and Guidance require pension providers to help their customers to make good decisions about their pension, including by informing them about the impact that their contributions will have on their overall benefits and by providing related modelling tools.
The FCA, TPR and ICO’s joint position is that firms should follow the ICO’s Guidance on Direct Marketing and Regulatory Communications to ensure that they avoid unlawful direct marketing. Using a neutral tone and avoiding active promotion when presenting facts is a helpful start. The content and context of communications must also be considered. “Service messages” that provide need-to-know information without related promotion can be sent. The joint position also includes the following helpful list of examples of communications that can be drafted in a way that is unlikely to constitute direct marketing:
- messages warning customers of risk of harm from having inadequate pension income in retirement due to existing contribution (or drawdown) rates;
- jargon-busting communications;
- factually describing the impact of different decumulation options;
- highlighting unused ISA allowance; and
- telling customers at the end of a term deal what their options are.
ICO publishes statement on the Data (Use and Access) Bill (DUA)
The ICO has published a statement welcoming the proposed reforms to current UK data protection legislation set out in the DUA. The ICO sees the DUA reforms as enabling further innovation and the development of data-driven businesses. In summary:
- Smart data initiatives: The DUA aims to encourage the use of smart data schemes to enhance personal data access and control, fostering innovation and economic growth and this is seen as a positive initiative by the ICO. The ICO does, however, highlight that businesses involved in smart data projects must adopt a privacy-by-design approach, identifying and mitigating data protection risks early.
- Digital verification services: The Government’s aim is to create a digital verification trust framework that can provide an alternative to traditional identity verification options. The ICO is, understandably, focused on ensuring that the systems are designed in a manner that engenders public trust and confidence, with appropriate protections for individuals’ information rights.
- Data protection reform: The current Information Commissioner has tended to adopt a more pragmatic and risk-based approach which is designed to enable responsible businesses to navigate data protection compliance more easily. Aligned with that ethos, the ICO welcomes changes in the DUA designed to simplify compliance and encourage innovation in the following areas:
- the proposed adjustments to automated decision-making (ADM) regulations which will allow businesses to rely on legitimate interests for ADM involving non-special category data;
- the inclusion of “recognised legitimate interests” in the DUA where no balancing test is required which will simplify the analysis that has to be done by business in these areas and also give greater assurance that they are handling personal data for legitimate purposes;
- the setting out of types of further processing that businesses can assume are compatible with existing processing activity;
- reforms that aim to facilitate smoother data transfers internationally, providing clarity on adequacy decisions and alternative transfer mechanisms; and
- changes to consent requirements for cookies aim to reduce consent fatigue, allowing easier collection of data for statistical purposes and website improvement.
- Enhanced ICO powers: One of the key features of the DUA is the strengthening of the ICO’s powers across the GDPR and the ePrivacy regulatory regime under PECR, such that the ICO will be able to apply equivalent penalties across both regimes. Helpfully (for those subject to it), the PECR data breach reporting regime is also to be aligned with the GDPR breach reporting regime, and the ICO is pleased to see that greater alignment, enabling efficiency for businesses as well as the ICO.
- Governance and accountability: The ICO's governance will transition to a Board and chief executive model (with the CEO being appointed by the Chair and the Board, not the Secretary of State), enhancing, in the ICO’s view, decision-making and maintaining regulatory independence.
The ICO issues new guidance for organisations and the public on the potential human impact of personal data breaches
The ICO has published new guidance reminding organisations of the potential human impact of personal data breaches, in particular for vulnerable groups, and calling on organisations to be open about breaches that occur and communicate “with empathy”. This is supported by new guidance for the public on the steps to take when an individual becomes aware of a personal data breach and the possible outcomes of a complaint to the ICO. Much of the new guidance is sensible and based on some concerning statistics on the number of UK individuals affected by breaches and the lack of support received from organisations.
On the other hand, some may see risk in the way the ICO encourages individuals to pro-actively contact an organisation for more information when they are “made aware” of a breach, given the possibility of receiving a significant number of queries from individuals where a breach has not in fact occurred, or from individuals who are only minimally impacted or not affected at all by a breach. Businesses will also need to think carefully about finding the right balance between the ICO’s call for openness about what has happened and the risk of follow-on claims.
For more information: ICO update
The ICO and Department for Science, Innovation and Technology (DSIT) launch new cost-benefit assessment tool for Privacy Enhancing Technologies
The ICO and Department for Science, Innovation and Technology (DSIT) have launched a new tool and associated checklist to help organisations assess the costs and benefits of deploying privacy enhancing technologies (PETs). The tool is centred on the costs and benefits of implementing a “federated” approach to machine learning and considers the benefits of “stacking” different PETs and specific considerations for the protection of data inputs and outputs.
The tool should be of interest to data protection and privacy counsel and other privacy professionals who want to know more about which PETs could be of most use to their organisation and the potential downsides in terms of data storage requirements, computing power, data utility and other factors. It will provide further food for thought for those looking to comply with the EDPB’s recent Opinion 28/2024 on processing personal data in the context of AI models, where PETs will be an important means of enabling organisations to ensure proper anonymisation of data and to rely on the “legitimate interests” legal basis for processing. The tool should also help in bridging gaps in understanding and language between IT professionals and other stakeholders.
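For readers unfamiliar with the “federated” approach the tool centres on, the idea is that each data holder trains a model locally and only model parameters — never the raw personal data — are shared and averaged centrally. The toy sketch below illustrates the pattern with a 1-D linear model; every name and number in it is an illustrative assumption and not part of the ICO/DSIT tool itself.

```python
import random


def local_update(weights, local_data, lr=0.1):
    """One pass of gradient descent on a holder's private data
    (toy 1-D linear model: predict y = w * x)."""
    w = weights
    for x, y in local_data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w


def federated_average(global_w, datasets, rounds=20):
    """Each round: every holder refines the global model on its own
    data, then only the resulting weights are pooled and averaged.
    Raw data never leaves the holders."""
    w = global_w
    for _ in range(rounds):
        local_ws = [local_update(w, d) for d in datasets]
        w = sum(local_ws) / len(local_ws)
    return w


random.seed(0)
# Three hypothetical organisations, each privately holding noisy samples of y ≈ 3x.
datasets = [[(x, 3 * x + random.gauss(0, 0.01)) for x in (0.1, 0.5, 1.0)]
            for _ in range(3)]
w = federated_average(0.0, datasets)
print(w)  # converges towards the shared underlying slope
```

In practice, federated learning is typically “stacked” with other PETs (e.g. differential privacy or secure aggregation on the shared weights), which is exactly the trade-off analysis — utility, computing power, storage — the ICO/DSIT tool is designed to support.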
For more information: ICO update
EUROPEAN UNION
The EDPB issues a report on the first review of the European Commission Implementing Decision on the Adequate Protection of Personal Data under the EU-US Data Privacy Framework
On 5 December 2024, the European Data Protection Board (EDPB) adopted a report on the first review of the EU-U.S. Data Privacy Framework (DPF). The EDPB welcomes the efforts by the U.S. authorities and the European Commission to implement the DPF and acknowledges several developments that have occurred since the adoption of the adequacy decision in July 2023.
In terms of commercial aspects, specifically the application and enforcement of requirements for companies self-certified under this framework, the EDPB notes that the U.S. Department of Commerce has taken all necessary steps to implement the certification process. This includes the development of a new website, updating procedures, engaging with companies and conducting awareness-raising activities.
Moreover, the redress mechanism for EU individuals has been implemented, with comprehensive complaint-handling guidance published on both sides of the Atlantic. While the EDPB views the recommendation for establishing a level playing field on data retention positively, it is concerned that a broad and general obligation to retain data in electronic form by all service providers could significantly interfere with individuals' rights. Therefore, the EDPB questions whether this would meet the necessity and proportionality requirements of the Charter of Fundamental Rights of the EU and CJEU jurisprudence. In its statement, the EDPB also emphasises that recommendations regarding encryption should not hinder its use or diminish the effectiveness of the protection it provides.
For more information: EDPB report
EDPB clarifies rules for data sharing with third-country authorities and approves EU Data Protection Seal Certification
On 3 December 2024, the EDPB published guidelines on Art. 48 GDPR regarding data transfers to third-country authorities and approved a new European Data Protection Seal.
The guidelines focus on Art. 48 GDPR and clarify the conditions under which organisations can lawfully respond to requests from third-country authorities for personal data. Judgments or decisions from third-country authorities cannot automatically be recognised or enforced in Europe. If an organisation responds to a request for personal data from a third-country authority, this constitutes a transfer and the GDPR applies. An international agreement may provide a legal basis and a ground for transfer. If there is no international agreement, or if the agreement does not provide an appropriate legal basis or safeguards, other legal bases or grounds for transfer could be considered, in exceptional circumstances and on a case-by-case basis.
The guidelines are open for public consultation until 27 January 2025. During the plenary meeting, the Board also adopted an opinion approving the Brand Compliance certification criteria concerning processing activities by controllers or processors.
For more information: EDPB guidelines
EDPB finalises guidelines to clarify 'tracking' under ePrivacy Directive
On 7 October 2024, the European Data Protection Board adopted the final Guidelines 02/2023, which clarify the technical scope of the term "tracking" under Article 5(3) of the ePrivacy Directive. These guidelines address the increasing use of alternative tracking methods, such as tracking pixels and unique identifiers, ensuring they comply with privacy protection principles. The EDPB emphasises that any access to information stored on users’ devices requires explicit consent unless necessary for service provision.
The guidelines refine some key definitions related to information access and storage, providing use cases to help organisations adhere to data protection standards. Additionally, the EDPB clarified that consent is not always required where access is technically necessary for communication transmission or service delivery. This development follows extensive public consultation and aims to harmonise data protection practices across Europe.
For more information: EDPB’s announcement
EDPB announces 2024-2025 work programme on key changes ahead
On 3 December 2024, the European Data Protection Board presented its work programme for 2024-2025. This aligns with the larger 2024-2027 strategy and introduces measures aimed at enhancing personal data protection across the EU.
One focus area is on new guidelines for controllers using processors. These will address secure data transfers abroad and reinforce contractual clauses, helping organisations meet GDPR standards in outsourcing arrangements.
The EDPB will also issue guidance on legitimate interests under Article 6(1)(f) of the GDPR. This aims to clarify the balance between business interests and individuals’ fundamental rights, offering practical advice for compliance.
Another key initiative is the European Data Protection Seal. This certification will allow organisations to demonstrate GDPR compliance, building user trust in their data handling practices. To support SMEs, the EDPB will develop practical tools and resources to simplify compliance with GDPR obligations. These materials aim to address challenges smaller businesses often face.
Finally, the programme emphasises the need for greater collaboration between national data protection authorities to help ensure a harmonised and effective application of GDPR rules across member states.
For more information: Official EDPB Work Programme 2024-2025
EDPB issues opinion on AI models
The EDPB has issued its first harmonised guidance on the processing of personal data in AI development and deployment. It clarifies the conditions under which AI models can be considered anonymous, outlines how legitimate interest may serve as a valid legal basis and addresses the implications of developing models with unlawfully processed data.
The opinion stresses a case-by-case approach, providing criteria for evaluating user expectations and offering technical and organisational measures to mitigate risks. Requested by the Irish authority, this guidance aims to foster responsible AI innovation while ensuring compliance with the GDPR. The EDPB continues to work on more specific guidelines, including issues related to web scraping and alignment with the upcoming EU AI Regulation.
For more information: EDPB’s opinion
FRANCE
CNIL imposes EUR 50 million fine on Orange for unauthorised email ads and cookie misuse
On 14 November 2024, the French Data Protection Authority (CNIL) sanctioned Orange, France’s leading telecommunications operator, with a EUR 50 million fine for significant data protection breaches.
The fine stems from Orange inserting advertising messages within users’ email inboxes without obtaining their explicit consent, affecting over 7.8 million individuals. This practice contravened Article L.34-5 of the French Postal and Electronic Communications Code, as the ads were disguised to appear as legitimate emails. Additionally, the CNIL identified that Orange continued to read cookies after users had withdrawn their consent, violating Article 82 of the French Data Protection Act.
For more information: CNIL’s official announcement
THE NETHERLANDS
Dutch data protection authority fines Netflix for lack of transparency
The Dutch Data Protection Authority imposed a EUR 4.75 million fine on Netflix for failing to meet GDPR transparency obligations between 2018 and 2020. According to the Dutch DPA, Netflix did not adequately inform users about the purposes and legal bases of its data processing, nor did it clarify which third parties received personal data.
Further deficiencies included insufficient detail regarding data retention periods and inadequate assurances for safeguarding personal data when transferred outside the European Economic Area. These findings indicate a breach of the GDPR’s fundamental principles on fairness and transparency.
Netflix has since updated its privacy documentation but objected to the fine.
For more information: Dutch DPA’s announcement
ITALY
Italian data protection authority issues a decision against the transfer of personal data contained in newspaper archives to OpenAI
On 12 November 2024, the Italian Data Protection Authority issued a decision notifying the publishing group "Gedi Editoriale Spa" regarding an agreement with OpenAI to transfer Italian-language editorial content to ChatGPT users.
The Authority expressed concerns that the processing of personal data could likely violate the GDPR, specifically Articles 9, 10, 13, 14, and Chapter III. According to the information received, the Authority believes the processing activities would involve a significant volume of personal data, including special and judicial data.
The impact assessment conducted by the company and submitted to the Authority does not adequately analyse the legal basis under which the publisher could transfer or license the use of personal data in its archives to OpenAI for algorithm training.
The decision also highlights that the obligations of information and transparency towards the data subjects are not sufficiently met and that the group is not in a position to guarantee data subjects the rights they are entitled to under European privacy regulations, particularly the right to object. The Italian Data Protection Authority notes that digital archives of newspapers, which contain news stories with personal data, including special categories and judicial data, should not be licensed for use by third parties to train artificial intelligence.
For more information: Garante’s decision
MIDDLE EAST
Saudi Data & AI Authority publishes guide on handling personal data breaches
In late October 2024, the Saudi Data & AI Authority (SDAIA) published a guide on handling personal data breaches. This guide supplements the notification requirements of the Saudi Personal Data Protection Law (PDPL), offering organisations detailed practical steps for managing data breach incidents. It clarifies the PDPL's reporting thresholds and deadlines, notification contents and submission process, and containment strategies.
ASIA
China launches campaign to enhance supervision over network platform algorithm issues
On 24 November 2024, the Chinese regulators responsible for cyberspace administration, telecommunications, public security and market regulation jointly announced a Qinglang campaign to enhance supervision of the algorithm governance of network platforms (the Campaign). The Campaign started immediately and will run until 14 February 2025.
Core issues the Campaign aims to address include “information cocoons”, manipulation of rankings, abuse of complaints mechanisms and big-data enabled pricing discrimination, with the ultimate goal of promoting healthy, fair, transparent, controllable and accountable algorithms:
- As the first phase of the Campaign, the regulators have urged companies to perform self-reviews and rectifications by 31 December 2024.
- From 1 January 2025 to 31 January 2025, local competent regulators will inspect the relevant companies for any compliance gaps that have not been promptly addressed.
- In the last phase of the Campaign, the regulators will summarise the key findings and achievements of the Campaign to inform future law-making and enforcement actions.
A reporting and complaint channel will be in place for the duration of the Campaign to receive and verify tip-offs provided by the general public.
Such special campaigns are a common method of law enforcement in China and often provide helpful indications of industry best practice and enforcement trends.
Singapore launches public consultation on proposed reforms to tackle online harms
On 22 November 2024, the Ministry of Law and the Ministry of Digital Development and Information launched a public consultation on enhancing online safety, seeking public feedback on proposed reforms to tackle online harms and enable victims to seek remedies for them. The reforms also aim to shape norms of acceptable online conduct and improve accountability. The proposed reforms include the following:
- Introduction of a new complaints mechanism: Establishing a dedicated agency to support victims of online harms, enabling them to submit complaints against specified types of online harms, which include among other proposed categories, online harassment, intimate image abuse, impersonation, misuse of inauthentic material (e.g. deepfakes) and misuse of personal information.
- Introduction of new statutory torts: Introducing statutory torts for specified online harms, allowing victims to hold responsible parties accountable through legal claims for compensation.
- Increasing accountability through improved user information disclosure: Proposals include making perpetrators' user information available to victims, addressing the challenges posed by online anonymity and enabling victims to seek redress more effectively.
Hong Kong publishes its Protection of Critical Infrastructures (Computer Systems) Bill
The Protection of Critical Infrastructures (Computer Systems) Bill (the Bill) was published in the Gazette on 6 December 2024. It aims to protect the security of the computer systems of Hong Kong’s critical infrastructures. The Bill imposes obligations on critical infrastructure operators (each a CI Operator), being organisations designated by written notice by the Commissioner of Critical Infrastructure (Computer-system Security) (the Commissioner) or a designated authority. Specific computer systems operated by a CI Operator may likewise be designated as Critical Computer Systems (each a CCS) by written notice.
The obligations of the CI Operator include:
- Organisational obligations:
- Maintaining a local office in Hong Kong.
- Notifying the regulating authority of ownership and operatorship changes.
- Maintaining a computer-system security management unit.
- Prevention of threats and incidents obligations:
- Notifying the regulating authority of any material change to the CCS.
- Submitting and implementing computer-system security management plans.
  - Conducting computer-system security risk assessments regularly.
  - Carrying out computer-system security audits regularly.
- Incident reporting and response obligations:
- Participating in computer system security drills organised by the Commissioner.
- Submitting and implementing emergency response plans.
- Notifying the Commissioner of security incidents regarding CCSs within specified timeframes.
For more information: Legislative Framework Paper and the Bill
Hong Kong issues guidance for licensed corporations on adopting generative AI language models
On 12 November 2024, the Securities and Futures Commission (SFC) issued a circular guiding licensed corporations (LCs) on responsibly adopting generative artificial intelligence language models (GenAI LMs). The guidance highlights the potential of GenAI LMs to improve operational efficiency, as well as the associated risks, such as inaccurate outputs, cybersecurity and privacy concerns and dependency on third-party providers. The circular's requirements apply to LCs that offer services or functionalities provided by GenAI LMs in relation to their regulated activity, whether using internally developed solutions or off-the-shelf products from external providers.
The circular sets out four core principles LCs are expected to comply with when using GenAI LMs.
- Senior management responsibilities: LCs must ensure that their senior management establishes effective governance, oversight, and policies regarding the use of GenAI LMs. This includes the recruitment of qualified staff to oversee the adoption and operational integration of these technologies.
- AI model risk management: LCs are required to implement thorough testing, validation, and continuous monitoring processes for their GenAI LMs. This is to mitigate risks such as the generation of inaccurate or misleading information and to ensure the reliability of outputs.
- Cybersecurity and data risk management: LCs must uphold stringent cybersecurity and data privacy measures. This includes regular adversarial testing, data encryption, and the safeguarding of confidential information to prevent unauthorised access and data breaches.
- Third-party provider risk management: LCs using third-party GenAI solutions must conduct diligent initial and ongoing evaluations of these providers. This is to assess and mitigate the risks associated with dependency on external technologies, ensuring operational continuity and resilience.
The circular took effect on its publication date (i.e., 12 November 2024). While the SFC has committed to a pragmatic approach in assessing compliance, taking into account the time needed for implementation, LCs are urged to review their existing policies and procedures and proactively align their practices with these guidelines. LCs are also reminded of their obligation to notify the SFC of significant business changes involving GenAI LMs and are encouraged to engage in early discussions with the regulator.
For more information: Circular to licensed corporation – Use of generative AI language models