In case you missed it, this edition includes a link to access our quick-fire webinar on the EU AI Act.
This edition also brings you:
- UK and US develop new global guidelines for AI security
- California proposes automated decision-making technology regulations
- Italian Data Protection Authority investigates the online collection of personal data to train algorithms
- Courts and Tribunals Judiciary releases guidance for responsible use of AI in UK Courts and Tribunals
Webinar on EU AI Act (on demand)
On Thursday 14 December, we hosted a live webinar on the newly agreed deal on the EU AI Act, delivered by Minesh Tanna (Global AI Lead) and Christopher Götz (Head of Digital Business, Germany) at Simmons & Simmons.
This webinar provides a summary of the deal and explains what the Act looks like now, what will happen next, and the impact on organisations developing or using AI.
Watch on demand here.
UK and US develop new global guidelines for AI security
On 27 November 2023, the UK published new global guidelines for secure AI development which 17 other countries, including the US, have confirmed they will endorse and co-sign. Developed by the UK’s National Cyber Security Centre and the US’s Cybersecurity and Infrastructure Security Agency, in collaboration with industry experts, the guidelines are the first of their kind to be agreed globally, and address cybersecurity challenges in the rapidly advancing field of AI.
The guidelines are focussed on the following four key areas.
- Secure design: Guidelines that apply to the design stage of the AI system development lifecycle, covering understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design.
- Secure development: Guidelines that apply to the development stage of the AI system development lifecycle, including supply chain security, documenting operation and lifecycle processes, and asset and technical debt management.
- Secure deployment: Guidelines that apply to the deployment stage of the AI system development lifecycle, addressing the protection of infrastructure and models from compromise, threats, malicious use or loss as well as developing incident management processes and responsible release.
- Secure operation and maintenance: Guidelines that apply to the secure operation and maintenance stage of the AI system development lifecycle, covering logging and monitoring, update management and information sharing.
The guidelines are aimed primarily at providers of AI systems; however, they can help all AI stakeholders make informed decisions about the design, deployment and operation of AI systems.
Read the guidelines here.
California proposes automated decision-making technology regulations
On 27 November 2023, the California Privacy Protection Agency released draft regulations on automated decision-making technology (ADMT). The regulations propose to implement consumers’ rights to opt out of, and access information regarding, businesses’ use of ADMT, as provided for by the California Consumer Privacy Act. The draft regulations propose requirements for businesses using ADMT in areas of significant impact, such as employment decisions and the profiling of employees, contractors, applicants, students and consumers in various contexts. The draft also proposes consumer protection measures such as pre-use notices informing consumers how the business intends to use ADMT, opt-out notices, and access to information about ADMT use. The draft regulations would work in tandem with risk assessment requirements that the California Privacy Protection Agency Board is also considering.
On 8 December 2023, the California Privacy Protection Agency Board voted unanimously to advance a legislative proposal that would require browser vendors to include a feature allowing users to exercise their privacy rights through opt-out preference signals. If the proposal is adopted, California would be the first US state to require browser vendors to offer consumers the option to enable these opt-out signals.
Read more here.
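For illustration only: one widely deployed implementation of opt-out preference signals is the Global Privacy Control (GPC) specification, under which a participating browser sends a "Sec-GPC: 1" request header when the user has enabled the signal. The sketch below shows how a website might detect that header server-side; it is not part of the CPPA proposal described above, and the route and response fields are hypothetical choices for the example.

```python
# Illustrative sketch only (not part of the CPPA proposal described above).
# Shows how a site could detect a Global Privacy Control (GPC) opt-out
# preference signal, which GPC-enabled browsers send as "Sec-GPC: 1".
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/page")
def page():
    # True when the visitor's browser has the opt-out preference signal enabled.
    gpc_opt_out = request.headers.get("Sec-GPC") == "1"
    return jsonify({
        "gpc_detected": gpc_opt_out,
        # How the signal must be honoured (e.g. as an opt-out of the sale or
        # sharing of personal information) depends on the applicable law.
        "personalised_ads": not gpc_opt_out,
    })

if __name__ == "__main__":
    app.run()
```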
Italian Data Protection Authority investigates the online collection of personal data to train algorithms
On 22 November 2023, the Italian Data Protection Authority (IDPA) launched a “fact-finding” investigation into the collection of personal data online for training AI algorithms. The investigation aims to evaluate whether online platforms implement adequate security measures to prevent unwarranted data scraping for AI purposes. Academics, AI experts, and consumer groups have been invited to participate in the fact-finding process and can share their views or comments within a 60-day period.
The IDPA is one of the most proactive national data protection authorities in assessing AI platform compliance with the GDPR. Earlier this year, the IDPA temporarily suspended ChatGPT from processing personal data relating to Italian users based on concerns that ChatGPT may violate several GDPR obligations including transparency, legal basis, and accuracy.
The IDPA has reserved the right to take necessary steps following the fact-finding investigation.
Read the IDPA´s press release here.
Courts and Tribunals Judiciary releases guidance for responsible use of AI in UK Courts and Tribunals
On 12 December 2023, the Courts and Tribunals Judiciary released guidance to assist judicial office holders in relation to the use of AI in courts and tribunals. In summary, the guidance emphasises:
- The need for a basic understanding of AI capabilities and potential limitations, such as possible inaccuracies in output.
- Upholding confidentiality and privacy by exercising caution when using public AI chatbots and by not entering private or confidential information into them.
- Ensuring accountability by verifying the accuracy of information provided by AI tools before relying on it, and recognising that bias in AI tools may result in misleading or incorrect information.
- Following best practices for maintaining security, such as using work devices rather than personal devices, and taking responsibility for material produced using AI.
- Being aware that other court users, such as legal professionals or unrepresented litigants, may have used AI tools, and that information presented in courts or tribunals may therefore contain errors or inaccuracies. The guidance gives examples of warning signs, such as references to unfamiliar cases or citations from other jurisdictions such as the US.
Additionally, the guidance discusses potential uses and risks of generative AI (GenAI) in courts and tribunals, offering examples and recommendations on which tasks are and are not suitable for AI involvement. It suggests that summarising text, writing presentations and performing administrative duties are potential uses for GenAI, whereas tasks such as legal research or legal analysis are not recommended to be performed by GenAI.
You can read the guidance here.