The evolution of the AI Act
Since the initial draft was published in April 2021, the AI Act has undergone several revisions. It is now in the trilogue phase of the legislative process, in which the Council of the EU, the European Commission and the European Parliament aim to reach a common position with a view to adopting the Act into law by the end of 2023. The following statements may therefore still be subject to change.
Within the EU, the Act will cover AI system providers, including developers, suppliers and third-party product manufacturers that incorporate AI software into their own systems. Users of high-risk AI systems also fall under the Act's scope and may face sanctions for non-compliance.
However, the Act will have both extraterritorial reach and global validity, similar to the GDPR. Under the “market location principle”, providers based outside the EU that directly market and operate their systems within the EU will be required to comply with the Act. Moreover, providers of AI systems that do not physically enter the EU market but whose results are used within EU territory will be held accountable.
Once the AI Act comes into force in the EU, it seems likely that other nations will follow suit, as they did with GDPR.
Risk classifications
The AI Act adopts a risk-based approach to permissible AI systems.

Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Unacceptable-risk AI systems, such as those employing subliminal techniques to negatively influence behaviour or which engage in social scoring or deliberate manipulation, are strictly prohibited. These AI systems pose a threat to safety, livelihoods and the rights of individuals.
High-risk AI systems are the primary focus of the AI Act. They include critical infrastructure, such as transport systems, where AI failures could endanger the lives and health of citizens, as well as law-enforcement activities that may infringe upon fundamental rights. Users and providers of high-risk AI systems must adhere to various obligations, including risk management, data quality to prevent discrimination, data governance, human oversight measures, cybersecurity and traceability. High-risk AI systems will require ongoing monitoring to ensure compliance throughout their lifecycle.
Limited-risk AI systems, such as chatbots and deepfakes that are not designed to manipulate, are permitted subject to transparency obligations: users must be made aware that they are interacting with, or viewing content generated by, an AI system.
Minimal or no-risk AI systems, such as spam filters and AI-enabled video games, are permissible. Currently, most systems used in the EU are deemed minimal risk.
Sanctions imposed
The AI Act provides a three-level sanction model, with fines based on the severity of the infringement.
Non-compliance with the AI Act is likely to lead to substantial penalties. The maximum fine is €30m or six per cent of a company’s worldwide annual turnover, whichever is higher. This applies where a prohibited AI system is used, or where the quality criteria for high-risk AI systems are not met. However, for small and medium-sized enterprises, including start-ups, the cap is set at three per cent.
Distributors and importers are responsible for ensuring that the provider has accurately completed the conformity procedures, that the AI system is accompanied by instructions for use and that the user is informed about technical modalities. Violations, including failure to establish and document a risk management system, carry penalties of up to €20m or four per cent of worldwide annual turnover.
Providing false, incomplete or misleading information to competent authorities can lead to fines of up to €10m or two per cent of worldwide annual turnover.
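The three-level model above follows a simple rule: at each tier, the applicable ceiling is the higher of a fixed amount and a percentage of worldwide annual turnover. A brief illustrative sketch (the tier labels and figures are taken from the text above; how the SME cap interacts with the fixed amounts is not specified there, so it is omitted here):

```python
def max_fine(fixed_eur: float, turnover_pct: float, turnover_eur: float) -> float:
    """Return the applicable ceiling: the higher of the fixed amount
    and the given percentage of worldwide annual turnover."""
    return max(fixed_eur, turnover_pct * turnover_eur)

# The three tiers described above, applied to a company with EUR 1bn turnover:
TIERS = {
    "prohibited AI / high-risk quality criteria": (30e6, 0.06),
    "other obligations (e.g. risk management)":   (20e6, 0.04),
    "false or misleading information":            (10e6, 0.02),
}
for infringement, (fixed, pct) in TIERS.items():
    print(f"{infringement}: up to EUR {max_fine(fixed, pct, 1e9):,.0f}")
```

For a company of that size the percentage dominates at every tier (for example, six per cent of €1bn is €60m, exceeding the €30m floor); for smaller companies the fixed amounts apply instead.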
Large language models
Under the Act, providers of generative foundation models, such as ChatGPT, will need to disclose when content has been created by an AI system rather than a human. This may include obligations relating to copyright and the appropriate identification of sources. The foundation model must be registered in an EU database. The aim is to mitigate potential risks and ensure greater collaboration between downstream and upstream providers of content.
AI Liability Directive
In parallel with the sanctions-based AI Act, the AI Liability Directive, proposed by the European Commission, is progressing through the EU legislative process. The new rules will ensure that individuals harmed by AI systems receive the same level of protection as those harmed by other technologies in the EU.
According to the Commission, it would otherwise be challenging for an injured party to prove that damage was caused by the providers or users of high-risk AI systems. The AI Liability Directive will therefore establish a rebuttable “presumption of causality” to ease the burden of proof for victims seeking compensation. As a result, the burden of proof will shift to providers, which is causing understandable concern.
Be prepared
The proposed AI Act and AI Liability Directive are still going through EU regulatory channels and further revisions may occur along the way. However, it is likely that the final versions will have broad and far-reaching implications.
The Act expands the EU’s jurisdiction to third-country providers and users, and establishes risk classifications and obligations for different stakeholders. The Directive, meanwhile, complements the Act by addressing liability issues.
As these regulations move closer to adoption, perhaps as early as the end of 2023, companies involved in developing and operating AI systems must ensure that their governance and risk management structures are robust and that they stay on top of the evolving regulatory landscape.