Welcome to AI View, Simmons & Simmons' fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.
This edition brings you:
US TRAIN Act introduced in House of Representatives to expand transparency protections for copyright holders
Singapore launches Model AI Governance Framework for Agentic AI
California Senate approves SB 813 to establish AI safety commission
Ireland introduces Online Safety (Recommender Algorithms) Bill 2026 to curb the use of recommender algorithms on social media
International Regulatory Strategy Group publishes review of emerging global approaches to AI in financial services
UK FCA launches review into the long-term implications of AI for retail financial services
UK House of Commons Treasury Committee publishes report on AI in financial services
1. US TRAIN Act introduced in House of Representatives to expand transparency protections for copyright holders
On 22 January 2026, the Transparency and Responsibility for Artificial Intelligence Networks (TRAIN) Act (the Bill) was first introduced to the US House of Representatives. The Bill aims to provide musicians, artists, writers, and other creators with the means to determine whether their copyrighted works have been used to train generative AI models without their consent.
The Bill, modelled after legal processes for internet piracy, would grant copyright holders access to AI model training records, thereby enabling them to identify unauthorised use of their works.
The Bill has received broad support from industry and creative sector organisations, including, among others, the Recording Industry Association of America (RIAA), the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), the American Federation of Musicians, and the International Association of Scientific, Technical and Medical Publishers (STM).
The aim of the Bill is to establish a critical mechanism for transparency, accountability, and IP protection in the context of AI development. The Bill also aims to ensure that AI innovation proceeds with respect for creators' rights, proper attribution, and fair compensation.
If passed by the House of Representatives, the Bill will proceed to the Senate for further consideration.
Read the Bill here.
2. Singapore launches Model AI Governance Framework for Agentic AI
On 22 January 2026, Singapore's Minister for Digital Development and Information announced the launch of the Model AI Governance Framework for Agentic AI (the Framework). The Framework addresses the reliable and safe deployment of agentic AI, building on the foundations of Singapore's 2020 Model Governance Framework for AI (see here).
The Framework emphasises the importance of maintaining meaningful human control and oversight to ensure responsible use and provides structured guidance for organisations deploying agentic AI, whether developed in-house or sourced from third parties. It sets out best practices and risk management measures in four key areas:
Risk assessment: Assessing and bounding risks by selecting appropriate use cases and limiting agents' autonomy and access to tools and data;
Accountability: Ensuring meaningful human accountability by defining checkpoints requiring human approval;
Controls: Implementing technical controls and processes throughout the agent lifecycle, including baseline testing and access controls; and
Transparency: Enabling end-user responsibility through transparency and education.
The Framework was developed by the Infocomm Media Development Authority (IMDA) with input from both government and private sector stakeholders. It is intended to be a living document, open to feedback and case studies to support its ongoing refinement.
Read the Framework here.
3. California Senate approves SB 813 to establish AI safety commission
On 27 January 2026, the California Senate approved SB 813, the Voluntary AI Standards Act (the Bill). The Bill aims to establish robust safety standards for AI and promote responsible AI development.
The Bill proposes the creation of the California Artificial Intelligence Standards and Safety Commission, designed to oversee the designation of Independent Verification Organizations (IVOs). These IVOs, comprising AI industry experts, academics, and government officials, will be responsible for developing sector-specific safety standards for AI, covering industries such as healthcare, energy, and education. AI developers and vendors may seek voluntary certification from these IVOs, with ongoing monitoring and the possibility of certification revocation if standards are not maintained. The Commission also retains the authority to revoke the approval of an IVO at any time.
The Bill is designed to address a range of risks associated with AI, including psychological harm to children, bias, cyber fraud, election tampering, misinformation, deepfakes, job displacement and concerns about AI hallucinations.
The Bill will now proceed to the California State Assembly for further consideration.
Read the Bill here.
4. Ireland introduces Online Safety (Recommender Algorithms) Bill 2026 to curb the use of recommender algorithms on social media
On 22 January 2026, the Online Safety (Recommender Algorithms) Bill 2026 (the Bill) was introduced in the Irish Parliament. The Bill, which seeks to amend the Online Safety and Media Regulation Act 2022, aims to regulate the use of recommender algorithms by video-sharing platform services, to ban the targeting of children with recommender algorithms, and to require that such algorithms are switched off by default for users over 18.
The proposal has received support from Government Ministers and from the public, with a recent poll indicating that 68% of respondents favour disabling these algorithms.
The Bill responds to growing concerns about the impact of recommender algorithms on children and adults, including the promotion of harmful content relating to self-harm, eating disorders, misogyny and the spread of AI-generated abusive imagery. The Bill is positioned as an urgent measure to protect both children and adults from such harms.
The Bill will now proceed to the Second Stage in the lower house of the Irish Parliament, at which its general principles are debated before more detailed examination at the Committee Stage.
Read the Bill here.
5. International Regulatory Strategy Group publishes review of emerging global approaches to AI in financial services
On 12 January 2026, the International Regulatory Strategy Group (IRSG) published a report mapping how jurisdictions are approaching AI in financial services. It identifies areas where international alignment is beginning to emerge and outlines practical steps for policymakers, regulators, and international standard setters to promote safe and responsible innovation in AI.
The IRSG reported that there is agreement on the high-level principles that should govern AI. Most frameworks draw on the OECD and G20/G7-endorsed principles of human-centricity, transparency and explainability, robustness and safety, and accountability. However, there are disagreements about how these principles should be translated into national regulatory approaches.
A key recommendation of the report is that countries should work together to set clear and consistent expectations, use common principles and language, and develop similar ways to monitor and supervise AI, all within the rules that are already in place. The IRSG advocates for coordinated, interoperable, and principle-based supervision, reinforced through collaboration among national authorities and international standard setting bodies including the International Organisation of Securities Commissions, the Basel Committee on Banking Supervision and the Financial Stability Board, amongst others.
The IRSG concludes that the most effective near-term strategy is to leverage and align existing frameworks, rather than creating AI-specific global rulebooks. Coherence can be promoted through shared principles and interoperable supervision, while the main drivers of fragmentation, such as data localisation, competition, security, and extraterritorial reach, should be managed through collaborative, principle-based solutions.
Read the report here.
6. UK FCA launches review into the long-term implications of AI for retail financial services
On 27 January 2026, the UK Financial Conduct Authority (FCA) announced the commencement of a comprehensive review into the long-term implications of AI for retail financial services (the Mills review). The review builds upon the FCA's existing initiatives, including its AI Discussion Paper, AI Sprint, and AI Lab, and relies on existing, principles-based frameworks rather than introducing additional AI-specific regulation.
The review will focus on four interrelated themes:
Future evolution of AI technology: Examining how AI may evolve, including the emergence of more powerful, autonomous, and agentic systems, and assessing the entire AI value chain.
Future impact of AI on markets and firms: Considering the potential effects on market structure, competition, and the UK's global competitiveness.
Future consumer trends: Assessing the impact on consumers, including opportunities for improved outcomes, emerging risks, behavioural changes, and shifts in the demand for and provision of financial services.
Future regulatory approach: Exploring how financial regulators may need to adapt to ensure that retail financial markets continue to function effectively.
The review will consider retail financial services, consumer outcomes, consumer protection, and the competitiveness and growth of UK financial services. The findings and recommendations of the review will be reported to the FCA Board, culminating in an external publication to support informed debate and ensure the FCA remains prepared, adaptive, and able to support a thriving, innovative UK financial services sector.
Read more about the FCA's review here.
7. UK House of Commons Treasury Committee publishes report on AI in financial services
On 22 January 2026, the UK Treasury Committee published a report on AI in financial services. This follows the inquiry launched on 3 February 2025, which aimed to examine the opportunities and risks presented by AI for the UK financial services sector.
The Committee concluded that AI could bring considerable benefits to consumers and encourages firms and the FCA to work collaboratively to ensure these opportunities are realised. However, it expressed concern that the FCA, Bank of England, and HM Treasury are not doing enough to manage the risks associated with AI. The current "wait-and-see" approach was criticised for exposing consumers and the financial system to potentially serious harm.
The Committee recommended that the FCA provide the financial services sector with greater clarity regarding the application of existing rules to AI. By the end of 2026, the FCA should publish guidance for firms on:
a) the application of existing consumer protection rules to AI use, and
b) accountability and the level of assurance expected from senior managers under the Senior Managers and Certification Regime for harm caused through the use of AI.
To enhance firms' preparedness for AI-driven market shocks, the Committee also recommended that the Bank of England and the FCA conduct AI-specific stress testing.
The Committee further recommended that, by the end of 2026, HM Treasury should designate major AI and cloud providers as critical third parties under the Critical Third Parties Regime, a UK regulatory regime which applies to bodies whose service disruptions could threaten UK financial stability. The Committee expressed concern at the slow pace of implementation and called on the Bank of England's Financial Policy Committee to monitor progress and, if necessary, use its power of recommendation to HM Treasury to ensure swift action.
Read the report here.
