AI View - March 2024

Our fortnightly round-up of key AI legislative, regulatory, and policy updates from around the world.

12 March 2024

Publication

This edition brings you:

  • OECD Explanatory Memorandum on Updated Definition of AI System
  • India asks tech firms to seek approval from Government before releasing "unreliable" AI tools
  • PDPC Singapore: Final Advisory Guidelines on use of Personal Data in AI Recommendation and Decision Systems
  • Deputy Prime Minister Oliver Dowden delivers speech that sets out UK Government plans for AI
  • CAIDP Provides Recommendations on AI Policy for G7 Digital Ministers

OECD Explanatory Memorandum on the Updated Definition of an AI System

On 8 November 2023, member countries of the Organisation for Economic Co-operation and Development (OECD) approved a revised version of the OECD definition of an "AI System". The OECD has now published an Explanatory Memorandum (EM) to complement the revised definition, providing further technical detail and explaining the reasoning behind the amendments.

The updated definition of an AI System is:

"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment".

As detailed in the EM, the primary motivation behind the revision was to reflect the current consensus that AI system objectives can be explicit, implicit, or both.

Other key amendments to the new definition of AI Systems include:

  • the stated emphasis on the importance of "input" and "output" in the operation and development of AI systems;
  • the addition of "content" as an output, which clarifies that the definition also applies to generative AI systems;
  • the substitution of "real" environments with "physical" environments, a distinction that better reflects the contrast to be drawn with the virtual world;
  • the inclusion of "adaptiveness after deployment", which reflects the fact that some AI systems can continue to develop and evolve after their design and deployment.

The EU AI Act definition of "AI System" follows this OECD definition.

Read the OECD EM here. A supplementary article from the OECD that explains the updates can also be found here.

India asks tech firms to seek approval from the Government before releasing "unreliable" AI tools

On 1 March, India's Ministry of Electronics and Information Technology (MEITY) issued an advisory that requests that tech firms seek approval from the Government before the public release of any AI tools that are either still under trial or deemed "unreliable". The advisory also requests that tech firms "appropriately" recognise and label the "possible and inherent fallibility or unreliability" of their AI tools and models. In practice, this means that platforms must provide sufficient disclaimers to indicate that their AI tools are under testing.

However, India's Deputy Minister Rajeev Chandrasekhar has indicated on X (formerly Twitter) that the restrictions will apply only to "significant platforms", and not to startups.

Chandrasekhar went on to state that the advisory is "signalling that this is the future of regulation" in India, a message that indicates a marked reversal of the country's previously stated hands-off approach to AI regulation. Though the advisory is not currently legally binding, there is a suggestion that non-compliance with its proposals may prompt future legislation to make them compulsory.

See an article from Reuters explaining the advisory here and an additional article from Business Standard here.

PDPC Singapore: Advisory Guidelines on the use of Personal Data in AI Recommendation and Decision Systems

On 1 March, the Personal Data Protection Commission of Singapore (PDPC) issued its final Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems, following a public consultation that took place from 18 July 2023 to 31 August 2023. The Guidelines set out the recommended data handling and accountability measures that should be followed by any organisation that collects or uses personal data to develop and deploy AI systems.

The Guidelines consider the applicability of exceptions under the Personal Data Protection Act (PDPA) to the requirement to obtain consent from data subjects when their personal data is used for AI training purposes. These exceptions are set out below.

The Business Improvement Exception (part 5, schedule 1 PDPA). This exception may be relevant when an organisation's use of personal data is intended for the purposes of offering new products or improving operational efficiency. This would include the following:

  • Developing AI systems to provide targeted content to social media users based on their browsing history.
  • Testing AI systems to assess their accuracy.
  • Matching potential job candidates to applicable job vacancies.

Note, however, that the Business Improvement Exception is only relevant to the sharing of data between related companies.

The Research Exception (part 2, schedule 2 PDPA). This exception is relevant when an organisation uses personal data to conduct commercial research that advances science and engineering but has no immediate application to that organisation's products and services. It may apply in the following cases:

  • Where the AI system being developed can increase innovation in products or services to improve quality of life.
  • Where the use of personal data assists in the development of industry practices and/or standards for the future development and deployment of AI systems.

The Guidelines also encourage organisations to be more transparent about their data use and to provide sufficient information to consumers at the point of data collection so that they can give informed consent.

Read the new Guidelines here.

Deputy Prime Minister Oliver Dowden delivers speech that sets out government plans for AI

On 1 March, Oliver Dowden, Deputy Prime Minister of the UK, delivered a speech at Imperial College London concerning the government's plans regarding AI. The speech relayed a desire "to ensure [the UK is] leading the world in the adoption of AI across [its] public sector".

Dowden stated that this goal will involve an effort to apply AI tools to various areas of public sector work, including healthcare, education, crime prevention and casework.

Dowden additionally announced a scaling-up of the government's AI incubator, i.AI, which he explained will "more than double" in size. He also announced that i.AI is in the process of signing a "collaboration charter" with the NHS, under which i.AI will support the NHS in identifying and deploying AI solutions to improve NHS services.

Read the speech here.

CAIDP Provides Recommendations on AI Policy for G7 Digital Ministers

In advance of the 50th G7 summit, to be held in Italy later this year, the Centre for AI and Digital Policy (CAIDP) has provided input and recommendations for the G7 AI Policy.

Key recommendations include:

  • Implement the Hiroshima Guiding Principles and Code of Conduct. The CAIDP particularly lauds the Guiding Principles' focus on mitigating harmful AI biases to ensure that "data quality, training data and data collection comply with applicable legal frameworks". The CAIDP also supports the G7 inclusion of transparency measures in the Hiroshima Code of Conduct to ensure accountability regarding AI systems' capabilities and limitations.

  • Require impact assessments for all high-risk AI systems and ensure that bias is assessed and mitigated prior to deployment of AI systems. The CAIDP notes that there are "high-risk societal impacts of algorithmic bias evident in areas such as credit, housing, employment, and the criminal justice system". They accordingly recommend impact assessments for all high-risk AI systems to ensure such bias can be mitigated before systems are deployed.

  • Ensure that the Council of Europe AI Treaty does not exclude private sector AI systems. The CAIDP notes that there remains contention as to whether AI governance should cover private sector AI systems (see the Council of Europe AI Treaty). The CAIDP recommends that G7 member states ensure that private sector AI systems ("the most consequential" of AI systems) are not excluded from the scope of the Council of Europe Treaty.

  • Support the establishment of an International Panel on Artificial Intelligence. The CAIDP proposes that this panel be modelled on the Intergovernmental Panel on Climate Change (IPCC) and that it adopt an approach aligned with the Universal Declaration of Human Rights and the Sustainable Development Goals.

  • Endorse the Termination Obligation set out in the Universal Guidelines for AI. The Termination Obligation states: "An institution that has established an AI system has an affirmative obligation to terminate the system if human control of the system is no longer possible". The CAIDP refers to this obligation as the "ultimate statement of accountability for an AI system".

Read the recommendations here.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.