AI View - November 2024

Our fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

04 November 2024


Welcome to AI View, Simmons & Simmons’ fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

EU AI Act: less than 3 months for Prohibitions and AI Literacy

There are now less than three months to go until the provisions prohibiting certain AI practices (Article 5) and requiring AI literacy (Article 4) apply to organisations.

We are advising extensively on both of these aspects, including the interpretation and application of the prohibitions (which are not straightforward to apply, and for which guidance is not yet available) and the implementation of AI literacy measures, for which we have launched a dedicated three-tiered AI Literacy Programme to ensure compliance with the Act (see here for further details).

Please get in touch if you’d like our help with the EU AI Act.

AI View

This edition brings you:

  1. Australian Government publishes further guidance on AI voluntary standard

  2. US Department of Labor issues guidelines on AI in the workplace

  3. Poland issues consultation on implementing law for EU AI Act

  4. White House issues national security memo on AI

  5. Australian data protection authority publishes guidance on AI

  6. Hong Kong Government issues policy statement on responsible use of AI in financial markets

Australian Government publishes further guidance on AI voluntary standard

Since the Australian Government published its voluntary standard on the safe and responsible use of AI in September 2024, further guidance has been issued on how firms can comply with the standard.

The AI and ESG guidance, published on 21 October 2024, emphasises the ways in which AI can help accelerate ESG goals by increasing resource efficiency, monitoring environmental impacts and improving labour conditions by streamlining mundane tasks. However, it also warns of, and provides guidance on, the ESG risks inherent in AI, such as bias, privacy loss, fraud and job displacement.

To assist businesses in complying with the voluntary standard, the Australian Government has introduced the AI Impact Navigator, which allows companies to measure the impact of their AI use. This tool sets out a framework with four main dimensions, each comprising five indicators, allowing companies to evaluate the ESG impact of their AI applications on a scale from ‘poor’ to ‘excellent’.

Read the AI and ESG guidance in full here and access the AI Impact Navigator here.

US Department of Labor issues guidelines on AI in the workplace

The US Department of Labor has released guidelines which aim to ensure that AI is used for the benefit of workers and to mitigate risks such as worker displacement and discrimination.

Key principles include:

  1. Centring worker empowerment – workers should be informed of and involved in developments in the use of AI in the workplace.

  2. Ethically developing AI – systems should be designed to protect workers’ rights, meet performance requirements and avoid discrimination based on inherent biases.

  3. Establishing AI governance and human oversight – workers should be appropriately trained on AI systems, and employers should ensure that AI is not used to make important decisions, particularly employment decisions, without human oversight.

  4. Transparency in AI use – employers should be transparent about how AI is used in the workplace. This should include clear and accessible information on how an employee’s data is collected and used.

  5. Using AI to enable workers – AI should be used to assist work and improve job quality. Enhancing job quality for workers should be a key consideration of businesses when deciding whether to implement AI systems.

  6. Supporting workers impacted by AI – employers are encouraged to support and upskill workers who will use AI, and to retrain workers displaced by AI where feasible.

Read the full guidelines here.

Poland issues consultation on implementing law for EU AI Act

Following the entry into force of the EU AI Act in August 2024, the Polish Ministry of Digital Affairs has commenced a consultation on a draft law to implement the Act.

The Ministry has stated that the legislation will aim to be flexible and transparent, creating the conditions to foster innovation by businesses. The draft law also includes rules on the transparency of algorithms and labelling of AI-generated content.

The consultation is due to close on 15 November 2024.

Read the announcement (in Polish) here.

White House issues national security memo on AI

On 24 October 2024, the White House issued a memorandum outlining the US Government's approach to AI in the context of national security, while emphasising the need for the US to lead in the responsible, secure, and trustworthy development and application of AI technologies. The memo covers the following:

  1. Promoting AI Development – The US aims to foster innovation, competition, and talent in the AI sector, ensuring the US remains a global leader in AI technology.

  2. Protecting US AI Assets – Measures will be taken to secure the US AI ecosystem from foreign intelligence threats and to manage risks related to AI safety, security, and trustworthiness.

  3. Harnessing AI for National Security – The US Government plans to utilise AI in safeguarding national security, requiring significant changes in strategies, capabilities, and governance.

  4. Strengthening AI Governance – The memo calls for robust AI governance and risk management practices to ensure AI innovation aligns with democratic values and international norms.

  5. International AI Governance – The US aims to facilitate the global adoption of emerging technologies, such as AI, in a manner that is safe, secure, and reliable, while also safeguarding democratic principles.

  6. Effective Coordination and Reporting – Agencies are tasked with coordinating their AI-related activities and reporting on their progress to ensure the effective execution of the policy.

Read the full text of the memorandum here.

Australian data protection authority publishes guidance on AI

The Office of the Australian Information Commissioner has published two pieces of guidance to assist organisations in using AI tools in a manner compliant with the Privacy Act.

The ‘Guidance on privacy and developing and training generative AI models’ suggests that:

  1. developers should take reasonable steps to ensure accuracy in generative AI models;

  2. the legality of using publicly available data for AI model training should not be assumed;

  3. developers should take care not to collect sensitive information without obtaining the necessary consent; and

  4. developers should ensure personal data is not used to train AI models where such training was not the primary purpose for which the data was collected.

The ‘Guidance on privacy and the use of commercially available AI products’ reminds companies that:

  1. privacy obligations apply to any personal data inputted into an AI system, as well as the output generated by the AI, where it contains personal data;

  2. they should update their privacy policies to include information about their use of AI;

  3. where AI generates or infers personal information, the Australian Privacy Principles must be complied with;

  4. personal information inputted into AI must only be used for the primary purpose for which it is disclosed; and

  5. entering personal or sensitive information into publicly available generative AI tools is not advisable.

Read the guidance on privacy and developing and training generative AI models here and the guidance on privacy and the use of commercially available AI products here.

Hong Kong Government issues policy statement on responsible use of AI in financial markets

On 28 October 2024, the Hong Kong Government issued a policy statement setting out its stance on the use of AI in the financial markets. The statement sets out a ‘dual-track’ approach to promote AI adoption whilst addressing risks including cybersecurity, data privacy and protection of IP rights.

The statement directs financial institutions to implement AI governance strategies, taking a risk-based approach to the use of AI by ensuring safeguards such as human oversight are in place.

As part of the policy, the Hong Kong University of Science and Technology will make its self-developed AI model available to the financial services industry, offering training to institutions wishing to deploy it.

Read the policy statement here.

If you have any questions (or feedback) or would like to discuss any of these updates further, please contact Minesh Tanna, Partner and Global AI Lead at Simmons & Simmons.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.