AI View - June 2024

14 June 2024

Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory and policy updates from around the world.

This edition brings you:

  1. EU AI Act to come into force in August 2024

  2. Governance of AI report published by House of Commons Committee

  3. Austria's DSB publishes statement on data protection in the EU AI Act

  4. ESMA's statement on the use of AI in the provision of retail investment services

  5. US Treasury Department requests information on uses of AI in financial services sector

  6. Dubai appoints 22 Chief AI Officers

EU AI Act to come into force in August 2024

The EU AI Act is set to enter into force in August this year, following publication in the EU Official Journal in the second half of July.

There is no fixed date yet for publication, which has been slightly delayed due to a backlog of EU legislation. However, we expect publication in late July, with entry into force 20 days later.

Governance of AI report published by House of Commons Committee

On 28 May, the House of Commons Science, Innovation and Technology Committee published a report on the governance of AI in the UK, building on the 12 challenges of AI governance previously set out in its 2023 interim report.

Notable challenges include:

  • The Bias Challenge: Developers and deployers of AI models must take steps to mitigate any inherent bias in datasets.

  • The Black Box Challenge: We should accept that the workings of some AI models are and will remain unexplainable and focus instead on interrogating and verifying their outputs.

  • The Open-Source Challenge: The question should not be 'open' or 'closed', but rather whether there is a sufficiently diverse and competitive market to support the growing demand for AI models.

  • The Intellectual Property and Copyright Challenge: The Government should broker a fair, sustainable solution based around a licensing framework governing the use of copyrighted material to train AI models.

  • The Liability Challenge: Determining liability for AI-related harms is not just a matter for the courts - Government and regulators can play a role too.

Read the report here.

Austria's DSB publishes statement on data protection in the EU AI Act

On 1 June, the Austrian Data Protection Authority (DSB) published a statement on data protection and the EU AI Act (the Statement).

The Statement emphasises that the EU GDPR remains applicable even after the EU AI Act comes into effect. This means that:

  • Obligations of providers and operators of AI systems (in their roles as controllers and/or processors under the EU GDPR) remain unaffected.

  • Personal data processed in the context of the use of AI systems must comply with the EU GDPR's data protection principles.

  • Private and public sector entities need a valid basis for processing personal data and additional safeguards for special category data. If there is no suitable justification, this can lead to the inadmissibility of the data processing (and thus to the impermissible use of the respective AI system).

The DSB acknowledged that AI can significantly increase the efficiency of administration in the public sector and the judiciary, but stressed that data protection laws must be strictly complied with.

Read the Statement here.

ESMA's statement on the use of AI in the provision of retail investment services

On 30 May, the European Securities and Markets Authority (ESMA) issued a public statement providing guidance to firms using AI when providing investment services to retail clients.

ESMA expects firms to comply with relevant MiFID II requirements when using or planning to use AI technologies, particularly when it comes to the following aspects:

  • Clients' best interest and information to clients: Investment firms should be transparent about how AI is involved in their investment decision-making processes. If AI is used for client interactions, such as through chatbots or other automated systems, firms should openly disclose the use of such technology during these interactions.

  • Organisational requirements: Firms should conduct regular AI model testing and monitor AI systems to mitigate potential risks and biases. Firms should also ensure adequate training on AI for staff. Staff should be equipped with the knowledge to identify and address issues such as data integrity, algorithmic bias, and unintended consequences of AI decision-making.

  • Conduct of business requirements: Investment firms should implement rigorous quality assurance processes and conduct periodic stress tests for their AI tools.

  • Record-keeping: Investment firms should keep comprehensive records of AI deployment, including decision-making processes, data sources used, algorithms implemented, and any modifications made over time.

Read the public statement here.

US Treasury Department requests information on uses of AI in financial services sector

On 6 June, the US Department of the Treasury (US Treasury) published a request for information on the uses, opportunities and risks of AI in the financial services sector (RFI).

The US Treasury is interested in the opportunities and risks that AI developments and applications present to the financial services sector. It is also examining potential barriers that could hinder the responsible use of AI within financial institutions.

The RFI queries the extent of AI's impact on various stakeholders, including consumers, investors, financial institutions, businesses, regulators, and end-users.

Lastly, the US Treasury is seeking recommendations to improve legislative, regulatory, and supervisory frameworks related to AI in financial services.

Members of the public are encouraged to submit comments within 60 days.

Read the RFI here. Comments on the RFI can be submitted here.

Dubai appoints 22 Chief AI Officers

On 9 June, H.H. Sheikh Hamdan bin Mohammed bin Rashid Al Maktoum, Crown Prince of Dubai and Chairman of the Executive Council of Dubai, appointed 22 Chief AI Officers to spearhead specialised plans and programmes in the field of AI and advanced technology.

The newly appointed Chief AI Officers represent a number of government entities across Dubai, such as the Community Development Authority in Dubai, the Dubai Government Human Resources Department, Dubai Customs, Dubai Police, and the Judicial Council.

Read more here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.