This edition brings you:
- President Biden's Executive Order on Safe, Secure and Trustworthy AI
- G7 leaders agree on Guiding Principles and a Code of Conduct on AI
- A discussion paper on the capabilities and risks of frontier AI, published in support of the AI Safety Summit
- The UK Government's report on emerging processes for frontier AI safety
- Feedback from the Bank of England, PRA and FCA on their AI regulation discussion paper
- 28 countries agree to the Bletchley Declaration on AI safety
President Biden issues Executive Order on Safe, Secure and Trustworthy AI
Structured around eight pillars, President Biden's landmark Executive Order on Safe, Secure and Trustworthy AI (the "Order") was published last week, setting out new requirements for US federal government agencies and the private sector. Key impacts for the private sector include:
New Standards for AI Safety and Security:
- Developers of the most powerful AI systems are required to share their safety test results and other critical information with the US government.
- The National Institute of Standards and Technology is expected to develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy, to be applied to critical infrastructure sectors, including the financial services sector.
- The Department of Commerce will also develop standards for detecting AI-generated content and authenticating content for federal government agencies, which the private sector can use as best practice guidance.
Advancing Equity and Civil Rights:
- The Attorney General is expected to address algorithmic discrimination through the Department of Justice and federal civil rights offices by developing best practices for investigating and prosecuting civil rights violations related to AI.
Supporting Workers:
- The Secretary of Labor will develop principles and best practices to mitigate the harms and maximise the benefits of AI for workers, providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers' ability to organise.
Protecting Privacy:
- The Director of the Office of Management and Budget will develop guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques, including those used in AI systems, to advance agency efforts to protect individuals' data.
The Order forms part of the Administration's agenda to build a strong international framework for AI governance, and supports global initiatives, including the UK AI Safety Summit.
The White House statement is here, and the full Executive Order is here.
G7 countries agree International Guiding Principles on AI and a voluntary Code of Conduct for AI
G7 leaders have agreed International Guiding Principles on AI and a voluntary Code of Conduct for AI developers (the "Code"). The Code is an outcome of the Hiroshima AI process, which was launched in May 2023.
The Code aims to promote safe, secure, and trustworthy AI, providing voluntary guidance through 11 recommended actions, including:
- Risk management: Take appropriate measures in the development of advanced AI systems, including prior to and throughout deployment, to identify, evaluate, and mitigate risks across the AI lifecycle.
- Corrective actions: Identify and mitigate vulnerabilities, and incidents and patterns of misuse, after deployment.
- Public reporting: Publicly report AI systems' capabilities, limitations and domains of appropriate and inappropriate use, ensuring accountability and transparency.
- Information sharing: Work towards responsible information sharing and reporting of incidents with industry, governments, civil society, and academia.
- Policy architecture: Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach, including privacy policies and mitigation measures.
- Security: Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.
The Code will be reviewed and updated as necessary (including through ongoing stakeholder consultations).
The Code and further information are available here.
UK AI Safety Summit - Capabilities and risks from frontier AI
The UK AI Safety Summit took place last week, with discussions focusing on narrow AI with dangerous capabilities (e.g., AI for bioengineering) and frontier AI (i.e., cutting-edge LLMs).
Ahead of the event, the Department for Science, Innovation & Technology ("DSIT") published a discussion paper on capabilities and risks from frontier AI. Key takeaways include:
- Frontier AI today primarily includes LLMs like ChatGPT and Bard, but in the future frontier AI could be underpinned by another technology.
- Frontier AI can perform a wide variety of tasks, and is being augmented with better prompts, tools, data and integration with other AI models.
- Limitations include hallucinations, difficulty maintaining coherence over extended interactions, and a lack of detailed context.
- Advanced general-purpose AI agents are likely to be the next wave of technology.
The paper also sets out three broad categories of risk:
- Evaluating the safety of frontier AI: it may be difficult to track how frontier AI systems are deployed and used.
- Misuse risks: e.g., use in the development of biological or chemical weapons, increased cybersecurity risks, and the spread of disinformation to create disruption.
- Loss of control risks: e.g., where AI systems might actively reduce human control through manipulation, cyber attacks or autonomous replication and adaptation.
Read the discussion paper here.
UK AI Safety Summit - Emerging Processes for Frontier AI Safety
DSIT also published a report based on leading frontier AI organisations' safety policies, aimed at guiding organisations on good AI policy and governance.
The report sets out 9 'emerging processes' around AI safety, including:
- Responsible capability scaling: This provides a framework for managing risk as organisations scale frontier AI systems. It enables companies to prepare for potential future, more dangerous AI risks before they occur, as well as manage the risks associated with current systems.
- Model Reporting and Information Sharing: This increases government visibility into frontier AI development and deployment. Information sharing also enables users to make well-informed choices about whether and how to use AI systems.
- Security Controls Including Securing Model Weights: These are key underpinnings of AI system safety. If AI models are not developed and deployed securely, they risk being stolen or leaking secret or sensitive data, potentially before important safeguards have been applied.
- Reporting Structure for Vulnerabilities: This enables outsiders to identify safety and security issues in an AI system. This is analogous to how organisations often set up 'bug bounty programs' for vulnerabilities in software and IT infrastructure.
See the report here.
Bank of England, PRA and FCA provide feedback on regulation of AI in financial services
The Bank of England, PRA and FCA have published a feedback statement summarising responses to Discussion Paper 5/22 ("DP5/22"), which the regulators initially published to inform their understanding of how AI may affect their objectives and their approach to the supervision of financial services firms.
The feedback statement identifies key themes in the responses to the questions raised in DP5/22. It does not include any policy proposals, but it does point towards the regulatory direction of travel and flags how firms are approaching AI governance and risk.
Headline points from the responses include:
- A regulatory definition of AI would not be 'useful': a number of respondents pointed to alternative approaches, focusing on specific characteristics of AI or the risks it poses.
- 'Live' guidance, i.e., periodically updated guidance and examples of best practice, would be most appropriate for rapidly changing technology.
- Existing firm governance structures and regulatory frameworks (e.g., the Senior Managers and Certification Regime (SMCR)) are sufficient to address AI risks. However, the regulatory landscape as a whole is complex and fragmented when it comes to AI (particularly data regulation).
- Key risks for firms are data risks (fairness, bias, and management of protected characteristics), third-party models and data, and models with AI characteristics.
- Mitigating AI risks requires a joined-up approach across business units and functions, in particular closer collaboration between data management and model risk management teams.
The full statement is here.
28 countries agree to the Bletchley Declaration at the AI Safety Summit
For the first time, leading AI nations have agreed on a shared understanding of the opportunities and risks of frontier AI and the need for international action. The Bletchley Declaration (the "Declaration") was signed by 28 countries at the UK AI Safety Summit, and aims to drive forward key summit objectives on the responsible design and development of AI technologies across the globe. Countries endorsing the Declaration include the US and China, along with the European Union.
Key aspects of the Declaration include:
- Countries agreed that substantial risks may arise from potential intentional misuse of frontier AI or from unintended issues of control (noting cybersecurity, biotechnology and disinformation risks).
- Countries have also agreed to work together to support a network of scientific research on frontier AI safety.
- The Declaration places emphasis on international cooperation - the Republic of Korea will co-host a mini virtual summit on AI in the next 6 months, and France will host an in-person Summit next year.
The UK Government's press release can be read here, and the Declaration here.