Download the full visual bulletin here, text below.
EU competition regulators have launched an antitrust investigation into Internet of Things providers
16 July 2020
Who is affected?
Developers and users of voice assistants and related technology.
What is it?
EU competition regulators have launched an antitrust investigation into Apple's Siri, Amazon's Alexa and other voice assistants. The regulators will analyse whether the voice assistants help their parent companies stifle market competition by restricting services and collecting user data.
For example, the probe will look into whether companies hurt competition when their voice assistants only let users stream music from a single service (eg Apple Music) or send them to their own shopping websites (eg Amazon.com).
It will also examine if the collection of user data by smart speakers puts the parent companies at an unfair advantage.
This isn't simply a European issue. The US House of Representatives also recently presented a report on the antitrust practices of Big Tech, calling for significant changes to antitrust laws, and Google was served with monopoly-related charges on 20 October 2020.
Read the announcement here.
What should I do?
If you are operating in this sector and interacting with the EU you should monitor developments as the investigations by EU competition regulators progress. So far, these investigations are at a very early stage and the first results are not expected until spring 2021.
The Court of Appeal has ruled it unlawful for South Wales Police to use automatic facial recognition (‘AFR’) to search crowds
11 August 2020
Who is affected?
Data Protection Officers and Compliance teams in the UK.
What is it?
The Court of Appeal has ruled that:
The AFR use breached Article 8 of the European Convention on Human Rights (respect for private life) due to its broad use "without apparent limits";
The DPIA conducted by the police was deficient, in breach of the DPA 2018; and
The police did not do all it could reasonably do to fulfil its public sector equality duties, particularly "to make sure that the software used does not have a racial or gender bias".
Interestingly, the judges did not rule out lawful use of AFR, stating that the benefits of the tech are "potentially great" and the intrusion into people's privacy "minor". But the message was clear: there is a need for greater care and clearer guidance.
To find out more, please read our Insights article here.
What should I do?
Businesses making use of AFR or similar technology in all jurisdictions should ensure that they pursue and maintain the highest possible level of compliance with privacy and data protection laws.
Those operating in the UK should carry out adequate and thorough DPIAs before processing any data, taking into account any potential engagement with human rights, equality laws and privacy laws.
LRC of SAL has established a subcommittee on Robotics and Artificial Intelligence
18 August 2020
Who is affected?
All developers and users of AI in Singapore.
What is it?
The Law Reform Committee (LRC) of the Singapore Academy of Law (SAL) established a Subcommittee on Robotics and Artificial Intelligence to consider and make recommendations regarding the application of the law to AI systems.
The LRC is considering whether existing systems of law, regulation and wider public policy remain 'fit for purpose', given the pace and constancy of change in the field of AI.
The LRC has already published three reports in 2020 on:
Applying Ethical Principles for Artificial Intelligence in Regulatory Reform;
Rethinking Database Rights and Data Ownership in an AI World; and
The Attribution of Civil Liability for Accidents Involving Autonomous Cars.
Read the reports here.
What should I do?
If you are using AI and are currently based in Singapore, you should follow LRC updates closely. The reports are intended to encourage critical analysis of the options available and to guide key stakeholders in the development of AI-related laws. We will be publishing a multi-jurisdictional article on the LRC in early 2021 – watch this space.
China has tightened its export control on AI
28 August 2020
Who is affected?
AI developers or users with significant Chinese contracts.
What is it?
China has announced a tightening of its export control rules on AI for the first time since 2008.
This move has made AI technologies (such as text and speech recognition and data analysis) a matter of national security. This means that government approval is needed before transferring or exporting the technology.
According to statistics from the Ministry of Commerce of the People's Republic of China, the value of China's technology export contracts in 2019 was $32.1 billion.
Read a summary statement in English here and the full statement in Chinese here.
What should I do?
If you are exporting AI from China, be sure to factor this significant delay into any timelines going forward (30 working days for the foreign trade department to examine the initial application to export and 15 working days for the State Council to examine the contract).
The IPO has announced a consultation on AI and IP
7 September 2020
Who is affected?
All those in the IP sector.
What is it?
The Intellectual Property Office is seeking to understand the implications AI might have for IP policy. It is gathering views from industry experts, academics and research organisations, as well as those in the IP sector and government departments.
The call for views was split into five sections: patents; copyright and related rights; designs; trademarks; and trade secrets.
Read the consultation description here.
What should I do?
The consultation closed on 30 November 2020 and we expect the response in mid-2021.
For a general overview of our thoughts on how AI is impacting IP law, please see a recent article authored by members of our IP team and published in the Intellectual Property magazine here.
Uber back-up driver for self-driving car charged with negligent homicide
15 September 2020
Who is affected?
Developers of autonomous technology.
What is it?
The Uber back-up safety driver in one of the company's self-driving cars that hit and killed a pedestrian in Arizona in 2018 was charged with negligent homicide.
The courts heard that the individual responsible for monitoring the car was distracted and that the fatality could have been avoided.
This follows the decision in March 2019 that Uber would not be held criminally liable by prosecutors. This was especially noteworthy as it was acknowledged that a series of development decisions made by Uber contributed to the crash. For instance, the modified software failed to identify the pedestrian and did not flag the operator's complacency.
The trial is due to begin in February 2021.
What should I do?
This will prove a key case in the development of AI regulation and is definitely worth monitoring, regardless of your current jurisdiction.
US and UK have signed an agreement to jointly advance trustworthy AI
25 September 2020
Who is affected?
All developers and users of AI in the US and UK.
What is it?
The US and the UK have signed an agreement to jointly advance trustworthy AI. The partnership, known as the Declaration on Cooperation in Artificial Intelligence Research and Development, will focus on AI R&D, innovations, and workforce development.
This agreement stems from a meeting last year between US President Donald Trump and British Prime Minister Boris Johnson to promote the countries' economic growth. Together, the governments will prioritise areas where they have a "strong common interest" and will coordinate collaboration among their AI experts.
Read the declaration here.
What should I do?
Your compliance team should already be aware of the importance of ethical AI. It is unlikely that this agreement will directly result in increased regulation in either jurisdiction, but it may make international collaboration easier if legislation is aligned.
European framework on ethical aspects of artificial intelligence, robotics and related technologies
28 September 2020
Who is affected?
Policy-makers and those interested in the politicisation of AI.
What is it?
The European Parliamentary Research Service has published an Added Value Assessment pushing for timely, common EU legislation on the ethical use of artificial intelligence, robotics and related technologies. The report details the strength of current EU laws in this area and the significant economic value of being a 'first mover' in AI-specific legislation.
Read the report here.
What should I do?
Watch this space. The EU is likely to confirm synchronised legislation on the use of AI in early 2021, which will have a significant impact on the shaping of worldwide AI legislation.
MEPs have proposed new AI regulations
20 October 2020
Who is affected?
Policy-makers and those interested in the politicisation of AI.
What is it?
The European Parliament has proposed new recommendations on AI regulation. The recommendations focus on three main areas: ethics, liability and intellectual property rights.
These recommendations appear to incorporate some of the ideas suggested by the European Parliamentary Research Service in September, including the need to ensure transparency and accountability and to protect against bias and discrimination.
The recommendations also include a requirement for an effective IPR system in relation to AI, one that protects innovative inventors without harming human creators' interests or the EU's ethical principles.
Read the press release here.
What should I do?
Watch this space. The EU Commission legislative proposal is expected in early 2021 and likely to have a global impact.
Portland passed the toughest facial recognition ban in the US
4 November 2020
Who is affected?
Developers and users of facial recognition software.
What is it?
Voters have passed a ballot measure banning the use of facial recognition by police and city agencies in Portland. This follows an earlier vote by Portland City Council that blocked both public and private use of the technology. The new measure strengthens the ordinance and cannot be revoked for at least five years.
The first part of the law (which comes into force immediately) bars all city agencies from using facial recognition (with a few exceptions for smartphones and other personal verification). The second part (which takes effect on 1 January 2021) makes it illegal for private entities to use the software to identify individuals in public places (e.g. shops or banks). Companies that violate the ban could face a daily fine of up to $1,000.
Read the ordinance here.
What should I do?
There are an increasing number of such laws in other US states, indicating an increased legislative focus on the use of facial recognition.
If your organisation is based in the US, you should continue to be careful when using any form of facial recognition and ensure you fully understand the extent to which any associates are doing so.
UK government announced increased AI spending
19 November 2020
Who is affected?
All developers and users of AI in the UK.
What is it?
The UK Prime Minister announced a £16.5 billion funding boost to the defence budget over four years. This increase in spending will facilitate the opening of a National Cyber Force and a new agency dedicated to artificial intelligence.
The funding is expected to facilitate AI-led defence innovation such as drone technology and automated weaponry.
Read the press release here.
What should I do?
If you’re using AI in the UK, you should monitor developments and announcements from the new organisations. We expect the increased funding to be matched with increased regulation.
Executive Order promoting the use of trustworthy AI in Government
3 December 2020
Who is affected?
Public bodies using or interested in the use of AI.
What is it?
The executive order provides further guidance for the use of artificial intelligence in government decision-making.
The guidelines contain familiar principles of, amongst other things, transparency, lawfulness and accuracy, which are intended to act as a foundation for future regulations and laws.
The order directs agencies to prepare inventories of AI-use throughout their departments and directs the White House to develop a road map for policy guidance for administrative use.
Read the executive order here.
What should I do?
Regardless of your jurisdiction, but especially if you are currently using AI in the US, you should take this as a prompt to similarly audit your use of AI, especially if it’s being used to make decisions that affect individuals.