What’s the latest in global AI regulation?

Fast-paced AI innovation demands equally agile AI regulation. But can ethical safeguards keep up, without slowing adoption?

20 November 2024

Publication


As artificial intelligence (AI) continues to shape our world, governments are racing to create frameworks that balance innovation with ethical considerations, data protection and public safety.

Strategies for regulating AI are influenced by cultural, social, political and economic values.

Jingyuan Shi is Simmons & Simmons’ head of Technology, Media and Telecommunications for the Greater China region. She describes Europe’s approach as holistic and precautionary. In August 2024, the EU AI Act, the world’s first comprehensive AI law, entered into force as part of a broader strategy to make Europe fit for the digital age. Prescriptive and comprehensive, it prioritises citizens’ protections and categorises AI risks from minimal to unacceptable, with corresponding obligations and sanctions. But could this risk-based, safety-first focus slow the deployment of AI technologies?

Canada, Turkey, Brazil, South Korea and Indonesia are following the EU’s lead.

Mainland China, on the other hand, is taking an agile-governance approach to AI. Instead of a single comprehensive AI law, it has a mix of national strategies, ethical guidelines and sector-specific regulations. Flexible and adaptive, and designed to foster innovation and rapid technology development, this approach fits China’s aspiration to become a global AI leader. But will this accelerated rollout come with suitably robust ethical and societal safeguards?

Commonalities in global AI regulations

  • Ethical AI: Emphasis on transparency, accountability and human oversight.
  • Data protection and privacy: AI legislation intersects with data protection and privacy laws, impacting how AI systems process personal information.
  • Regulation of sensitive sectors: Financial services, healthcare and life sciences are among the sectors singled out for specific guidance on ethical AI, use cases and/or consumer protections.

AI rulemaking in Asia

China began to roll out its sector-specific AI regulations in 2021. They target online information services, including algorithm-based recommendations, deep synthesis and generative AI capabilities.

The regulations govern AI system developers and deployers that provide services to the public in mainland China. AI intended purely for internal use is not caught.

The rules set out basic requirements on transparency, accountability, information security, human oversight, data privacy, content moderation and fair treatment of users.

In addition, AI services that can influence public opinion or mobilise social activity, such as public forums or social media platforms, must be filed with the regulator. By August 2024, almost 1,000 algorithms and 190 large language models had already been registered with the Chinese authority.

The regulator is also exercising its extraterritorial powers under the Interim Administrative Measures on Generative AI Services to block non-compliant foreign GenAI services from being accessed by China-based users.

Other mandatory AI-related rules include China’s laws on cybersecurity, data security and personal information protection, as well as regulations relating to internet-based services and scientific activities.

Non-mandatory ethical and technical standards for AI development in China sit alongside the mandatory rules. Some target sensitive sectors, like healthcare and financial services, addressing AI-based medical devices and financial applications.

In Hong Kong and Singapore, guidelines and tools promote ethical and responsible AI. But no mandatory AI regulation exists in either jurisdiction. At least not yet.

AI regulation: what it means for businesses

The global legal landscape for AI is evolving rapidly and, according to Jingyuan Shi, it comes with commercial and legal opportunities and challenges.

Compliance with diverse regulatory frameworks: Businesses must prepare to navigate a patchwork of AI laws and rules that vary across jurisdictions.

Data protection and privacy: Data use by AI systems must comply with data protection and privacy laws, even in jurisdictions with no AI-specific regulations.

Intellectual property: Intellectual property issues arise where AI is used to create works, including whether AI can be recognised as the inventor or author of content. Businesses must navigate the complexities of AI law to ensure that AI-generated content is both protected and compliant.

Liability and accountability: Who is accountable if AI systems cause harm or make errors? Contracts must clearly define liabilities and outline measures to mitigate risks.

Ethics and public trust: Ethical AI practices build trust and can create competitive advantage. Businesses must develop AI governance strategies and ethics frameworks to evidence their compliance with both mandatory laws and non-mandatory guidelines.

Emerging areas of law: Businesses must keep on top of emerging AI laws to avoid compliance failures and penalties. Prepare for new legal categories and requirements, like AI personhood, robot rights and specific regulations for autonomous vehicles and drones.

International collaboration: In AI together

The fast pace of AI’s evolution, and its impact across borders, is driving a strong international push to collaborate. Forums such as the G7 and G20, along with organisations like UNESCO and the OECD, are promoting dialogue to harmonise AI regulations and share best practices.

Meanwhile, we’re seeing major economies join forces on AI governance.

In May 2024, the presidents of France and China agreed to work together on addressing AI risks and promoting secure, reliable and trustworthy AI. And, in September 2024, the EU, the UK and the US signed the first legally binding international treaty to implement safeguards against threats presented by AI to human rights, democracy and the rule of law.

Compliance and innovation in a shifting regulatory landscape

Although it is still early days for AI, the pace of evolution and adoption continues to surprise. Global collaboration on AI regulation is a positive step forward but businesses face a complex landscape of requirements — some mandatory; some optional; some designed to safeguard citizens and reduce risk; others more flexible to support innovation. Businesses that prioritise regulatory awareness will be best placed to balance compliance and responsible innovation and remain resilient in an AI-shaped future.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.