AI and Ethics in the UAE - A synergistic relationship?
With the appointment of the world's first Minister of State for Artificial Intelligence, can government drive the growth of this industry?
The UAE’s impressive list of “firsts” was further enhanced by the appointment of the world’s first Minister of State for Artificial Intelligence. Youthful (appointed at the age of 27), but with a mature focus, both HE Omar Bin Sultan Al Olama and his mandate are undoubtedly exciting and progressive. However, while inherently commendable, can the UAE and wider region establish themselves as something more than “ambitious” and “visionary”? What can show the world a depth of credibility which goes beyond focussed intent? The Minister, to his credit, has certainly not dodged the issue of hard, tangible results: he is on record with target dates for certain milestones, for example around the number of expert coders emerging from domestic talent and, within that subset, talent with narrow expertise in AI. We will also undoubtedly see further state-led proofs of concept around AI which will each, deservedly, be applauded for being a “first” and “the most advanced”.
Is that enough?
Law, policy and regulation are thorny areas where the region cannot afford to lag behind in the development of its AI vision and, if properly embraced, can be catalysts. In developed markets, where new technology raises moral and ethical concerns, development often takes place in an environment where the principal actors are isolated from the policy-making arena. However, by having a government-driven (rather than purely industry-driven) mandate, the door to take the issue into the policy-making arena is wide open. It’s in this arena (rather than just in the R&D huddles) where some of the most critical AI-related decisions will be made. The philosophically rich questions around machine learning, such as bot liability and technology- and data-enabled precognition, are actual roadblocks to progress rather than merely intellectual debates or Hollywood blockbuster storylines. So, where can law and policy makers in the region start to grapple with them?
Legal headwinds
Some of the toughest questions which will shape laws and policies on AI are already being asked in various parts of the world and in a range of sectors. For example, the GDPR has brought about specific rules in relation to AI-enabled gathering and processing of personal data. In summary, individuals should not be subject to a decision based solely on automated processing that is legally binding or which has a significant effect on them, unless:
- the processing is necessary to enter into or perform a contract
- the individual consents, or
- the processing is allowed by a national law with suitable safeguards to protect the individual’s rights, freedoms and legitimate interests.
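The position summarised above can be sketched as a simple compliance check. The function and parameter names below are hypothetical, for illustration only; they are not drawn from any real compliance tooling.

```python
# A sketch encoding the GDPR automated-decision conditions summarised
# above as a simple check. All names here are hypothetical.

def solely_automated_decision_permitted(
    necessary_for_contract: bool,
    individual_consents: bool,
    allowed_by_national_law_with_safeguards: bool,
) -> bool:
    """A solely automated decision that is legally binding or has a
    significant effect on the individual is permitted only if one of
    the exceptions summarised above applies."""
    return (
        necessary_for_contract
        or individual_consents
        or allowed_by_national_law_with_safeguards
    )

# No exception applies: a human must be involved in the decision.
print(solely_automated_decision_permitted(False, False, False))  # False
```

The point of expressing the rule this way is that each exception must be separately evidenced; a firm that cannot answer any of the three questions affirmatively cannot rely on fully automated processing.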
As a result of this, various sectors (for example, banking) are carefully considering their automated processing and decision-making. Although the GDPR will provide a level of harmonisation within EU member states, it is not yet clear how legal questions of data protection will be addressed at a transnational level to enable firms to create AI systems that are fit for purpose in a global economy. A short-term and ill-advised view amongst actors to whom the GDPR does not apply could be that they have an opportunity to wantonly exploit algorithm-based decision-making as part of their data processing activities. A more mature approach would be to confront the issues at play and to consider and vocalise (if not enact) legal and policy-based positions. Before embarking on a grand legal or policy overhaul, the starting point must be to define clearly the actual legal issues that arise in the deployment and use of AI. In this context, we can categorise the legal risk areas into two buckets: “Causation” and “Big Data”.
Legal risk: Causation
AI biases
The use of AI can open up novel legal risks through bias. Some harbour concerns that algorithms and the data used to train them may introduce new biases or perpetuate and institutionalise existing social and procedural biases. These biases are often a reflection of input from the creators of such algorithms themselves. Human decision makers might, for example, be prone to giving extra weight to their personal experiences. This is a form of bias known as anchoring, one of many that can affect business decisions. Within a financial context, AI approaches to credit scoring could systemise unfairness and discrimination; AI has the potential to repeat, and even develop, human biases where it ‘trains’ on an historical transaction dataset. This could result in people with certain characteristics being generally offered better loans from banks than people without those characteristics. A related bias is the tendency to discount the possibility of significant change - for example, through substitution effects created by innovation. The severity of this bias can be magnified by machine-learning algorithms, which must assume things will more or less continue as before in order to operate.
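The mechanism by which training on a biased historical dataset reproduces that bias can be seen in a minimal sketch. The numbers and the naive majority-vote “model” below are invented purely for illustration and do not reflect any real lending data or any real credit-scoring system.

```python
import random

random.seed(0)

# Invented historical lending data: in the past, group "B" applicants
# needed a higher creditworthiness score than group "A" to be approved.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    score = random.uniform(0, 1)               # true creditworthiness
    threshold = 0.5 if group == "A" else 0.7   # the historical bias
    history.append((group, score, score > threshold))

def learned_decision(group, score):
    """A naive 'model' that copies the majority past outcome for
    applicants in the same (group, score-band) cell - and, with it,
    copies the historical bias."""
    cell = [approved for g, s, approved in history
            if g == group and abs(s - score) < 0.1]
    return sum(cell) > len(cell) / 2

# Two applicants with identical creditworthiness, different groups:
print(learned_decision("A", 0.6))   # approved
print(learned_decision("B", 0.6))   # declined
```

No group label needs to appear in the model’s objective for this to happen; simply fitting past outcomes faithfully is enough, which is why removing a protected attribute from a dataset does not, by itself, remove the bias.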
In the UAE, such uses of AI for social profiling have been deployed in the field of education, and therefore issues of AI bias must be borne in mind. The Dubai Knowledge and Human Development Authority (KHDA) is working with a start-up called Nexquare in using AI and machine learning to predict students who are at risk of dropping out, the employability of graduates, and a teacher’s chances of success in a school. The machines use a variety of data, such as a student’s socioeconomic background, behavioural issues, scores and attendance, in forming patterns that will be used to make predictions. The dangers of AI bias in predicting the employability of graduates, for instance, are clear, not least in the case of social mobility.
Besides producing biased results, there is an ethical question here. Should governments use machine learning and AI methods to determine students who are at risk of dropping out, the employability of graduates and a teacher’s chances of success in a school? More fundamentally, should the use of algorithmic systems that are known to have biases be applied to benefit some while material prejudice is caused to others?
AI decision-making
Legal issues arise where the interpretability of AI decision-making, based on complex algorithms and machine learning, is unclear. Methods are therefore needed to provide legally required explanations without significantly hampering performance - for example, using proxy or simplified models or rule extraction. In a financial crime and AML context, firms typically need the ability to explain why and how a particular decision was made (for example, what was the basis for a suspicion of money laundering, or lack thereof). This is important for a firm’s internal systems and controls and could also be necessary in the context of regulatory enforcement action.
This issue is sometimes referred to as the “black box” complexity of deep learning techniques; even if the inputs and outputs are known, the algorithms used to arrive at a decision are often proprietary or not easily understood, even when the inner workings of the program are open source and freely available. This issue is particularly pronounced in applications where trust matters and predictions carry societal implications, as in financial lending. Some nascent approaches, including Explainable AI (XAI), aim to increase model transparency. XAI is artificial intelligence that is programmed to describe its purpose, rationale and decision-making process in a way that can be understood by the average person. XAI is often discussed in relation to deep learning and plays an important role in the FAT ML model (fairness, accountability and transparency in machine learning).
XAI provides general information about how an AI program makes a decision by disclosing:
- the program's strengths and weaknesses
- the specific criteria the program uses to arrive at a decision
- why a program makes a particular decision as opposed to alternatives
- the level of trust that's appropriate for various types of decisions
- what types of errors the program is prone to, and
- how errors can be corrected.
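A toy version of such disclosure might look like the following: a linear scorer that reports each feature’s signed contribution alongside its decision, so the dominant criterion is visible rather than buried in a black box. The feature names, weights and threshold are hypothetical, for illustration only.

```python
# A toy "explainable" scorer: a linear model that returns, with every
# decision, the features ranked by how strongly they pushed it.
# Weights, feature names and the threshold are all hypothetical.

WEIGHTS = {"income": 0.4, "years_employed": 0.3, "missed_payments": -0.6}
THRESHOLD = 0.5

def decide_with_explanation(applicant):
    """Return (decision, explanation). The explanation lists each
    feature with its signed contribution, largest magnitude first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    decision = sum(contributions.values()) >= THRESHOLD
    explanation = sorted(contributions.items(),
                         key=lambda kv: abs(kv[1]), reverse=True)
    return decision, explanation

decision, explanation = decide_with_explanation(
    {"income": 0.9, "years_employed": 1.0, "missed_payments": 0.0}
)
print(decision)           # True
print(explanation[0][0])  # "income" - the dominant criterion
```

A simple linear model is inherently interpretable in this way; the harder research problem, which XAI addresses, is producing comparably faithful explanations for deep models whose decision surface cannot be read off from a handful of weights.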
As AI initiatives become increasingly prevalent, it will become more important than ever to disclose how bias and the question of trust are being addressed. Employing AI systems on a black box basis, without corresponding pressure to understand the methods driving their success (or failure), is unlikely to be acceptable to regulators. Indeed, Article 22 of the GDPR, read together with Recital 71, is widely interpreted as giving data subjects a right to an explanation. Regulatory standards need to be developed to set system- and context-dependent accountability requirements based on the potential for biased and discriminatory decision-making and the risks to safety, fairness and privacy.
Ironically though, the reluctance to remove humans entirely from the loop in AI decision-making may create a penumbral space in which the ability to hold AI systems accountable is impaired. Taking the GDPR as an example, this legislation contains safeguards against automated decision-making. The safeguards only apply to decisions “based solely on automated processing,” which may exclude many robotic systems that involve some form of human decision-making; the outcome may be that such decisions do not qualify as “solely” automated. In a UAE context, AI system accountability may in itself become part of the skilled-employee growth objective.
Legal risk: Big Data
Data monopolies
Numerous reports have emphasised concerns with the current development of AI, warning against the role of big tech data monopolies. The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks. When firms know more about consumers, with an ever better ability to fine-tune the consumer experience, they are able to influence consumers at a personal level and to exploit consumer vulnerabilities in their marketing. For instance, Big Data allows a corporate not only to identify that a consumer is pregnant, but also to engineer food cravings in her through subtle triggers.
In this regard, it will be interesting to see how corporates tackle the monopolisation of data themselves - put cynically, to fend off the threat of government intervention. Tim Berners-Lee (the man credited with inventing the World Wide Web) through his World Wide Web Foundation has started a campaign to draw up a new “Contract for the Web” - a 21st Century Magna Carta - designed to make the web a safer, more equitable environment. The countermeasures suggested by the World Wide Web Foundation include putting “into place comprehensive data protection laws and strong operational frameworks” and ensuring “automated decisions are explainable and accountable to the people they are meant to serve”. Early signatories to this Magna Carta include Facebook and Google. Of course, the question remains: how much of this is merely public relations, intended to reverse the growing opprobrium toward Silicon Valley?
Big Data and third parties
The UAE has adopted an open policy as regards the use of Big Data. Indeed, according to the World Competitiveness Yearbook 2018, the UAE ranked fifth globally in the use of big data and analytics. The UAE is the most connected country in the region, with one of the highest Internet penetration rates in the world. Similarly, Smart Dubai has outlined its commitment to data sharing. In an address by Younus Al Nasser, Assistant Director General of the Smart Dubai Office and CEO of the Dubai Data Establishment, it was outlined that “data is the bedrock upon which a smart city is built. We are on a mission to build a robust data sharing economy in Dubai, and with that in mind, we’ve launched a bundle of data initiatives, including Dubai Pulse, Digital Wealth, Dubai Data Compliance Courses and many other advanced innovations. Supporting events such as the Smart Data Summit helps advance our plans and projects to build a data-sharing economy in Dubai.”
However, the harnessing of Big Data from third parties poses ethical problems in the application of such data. So-called data supply chains increase the risk of secondary misuse of data and a gradual degradation of an individual’s rights vis-à-vis their data as it passes along the supply chain. More obvious problems include adverse consequences such as being denied credit. In insurance markets, these problems have been felt keenly; more accurate pricing of risk may lead to higher premiums for riskier consumers (such as in health insurance for individuals with a genetic predisposition to certain diseases) and could even price some individuals out of the market. A subtler but more egregious consequence is digital market manipulation.
Even if innovative insurance pricing models are based on large data sets from various third parties and numerous variables, as mentioned earlier, algorithms can entail biases that lead to undesirable discrimination and even reinforce human prejudices. The ethics of data sharing are therefore even more pronounced where such biases can be promulgated further by data-sharing initiatives amongst parties. As a minimum, this warrants a societal discussion on the desired extent of risk sharing, how the algorithms are conceived, and which information is admissible.
Big Data and categorisation
Big Data necessarily relies on the pooling of individuals into quantifiable categories. Categorising individuals under certain headings can be disrespectful to them, particularly where the category overwhelms in significance - such as being the victim of a crime, struggling with an addiction or coping with a death - because the exercise objectifies individuals as a mere category. Big Data aggregators have been known to list individuals by classifications such as alcoholics, erectile dysfunction sufferers and even as “daughter killed in car crash.” Returning to our example of Nexquare and its application of AI systems to the UAE education sector, the system is designed to recognise certain patterns - such as a student’s socioeconomic background, reported behavioural issues, scores, attendance, assignments, curriculum and extra-curricular history - and to form categories of student from this data.
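The reductive effect of such categorisation can be sketched in a few lines: a priority-ordered rule set collapses a rich record into a single label. The rules and the record below are invented, not drawn from any real aggregator.

```python
# A sketch of how an aggregator flattens an individual record into one
# marketable category label. All rules and fields here are invented.

def categorise(record):
    # Priority-ordered rules: the first match becomes the individual's
    # label, however incomplete a description of the person it is.
    if record.get("recent_bereavement"):
        return "coping with a death"
    if record.get("missed_payments", 0) > 3:
        return "credit risk"
    if record.get("weekly_alcohol_spend", 0) > 100:
        return "problem drinker"
    return "general consumer"

record = {
    "weekly_alcohol_spend": 150,   # one data point among many
    "missed_payments": 1,
    "recent_bereavement": False,
}
print(categorise(record))  # "problem drinker" - the whole person, flattened
```

Everything in the record that does not match a rule is simply discarded, which is precisely the objectification concern: the label, not the person, is what travels down the data supply chain.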
AI, automation and happiness
In the UAE, ambitions are for AI to be used as a source of happiness, too. Smart Dubai, a government team helping to turn the city into the happiest on earth through technology, is working on an ethical constitution to make sure it and its private sector partners uphold the UAE’s values. The ‘AI Lab’, an initiative within Smart Dubai, has as its mission a desire to “transform the way [the government] engages with its citizens and visitors, and disrupting traditional business processes.” While this mission is admirable, how will this drive to disrupt traditional business processes affect unemployment, which has a proven toll on self-worth, social status, social relations, daily structure, and happiness? In essence, is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans?
Indeed, a recent study by McKinsey on the impact of automation and AI on work found that physical activities in highly predictable and structured environments, as well as data collection and data processing - which together account for roughly half of the activities that people do across all sectors in most economies - are easily automatable.
Smart Dubai is attuned to the ethical issues surrounding AI. Dr Aisha bin Bishr, director general of Smart Dubai, has outlined that the organisation has established a “specific team developing what we call AI principles and ethics.” This team would be taught to “use these principles and ethics in their work.” These principles, according to Dr bin Bishr, are “much aligned” with the seven principles recently set out by Google.
The role of Google’s principles in shaping Smart Dubai’s principles speaks to a broader trend in the AI and ethics space, namely the role of corporates and non-governmental organisations in shaping this very space. To be clear, when we speak of AI principles and ethics, we are talking specifically about an AI code of ethics (also called an AI value platform), which is a policy statement that formally defines the role of AI as it applies to the continued development of the human race. The purpose of an AI code of ethics is to provide stakeholders with guidance when faced with an ethical decision regarding the use of AI. This code of ethics is largely being driven by institutions such as The Future of Life Institute (FLI), the non-profit organisation that led the creation of the Asilomar AI Principles, a set of 23 principles intended to promote the safe and beneficial development of artificial intelligence. The principles - which include research issues, ethics and values, and longer-term issues - emerged from a collaboration between AI researchers, economists, legal scholars, ethicists and philosophers in Asilomar, California in January 2017. For all the potential benefits of private sector engagement in developing an AI code of ethics, the primary responsibility of private companies is to their shareholders, while the primary duty of government is to its citizens. These priorities can clash - a problem that is not new, but one that may be magnified. The state can and should play a leading role in the future of AI. For instance, and fundamentally, who will protect and control citizens’ data? The answer to this question is crucial to government legitimacy.
What’s next?
While many great thinkers continue to prophesy on the long-term impacts of AI (Stephen Hawking famously proclaimed that ‘the development of full artificial intelligence could spell the end of the human race’), what is clear is that AI poses some very immediate and real legal and ethical challenges which will need to be addressed through regulation. Such challenges present the UAE with an opportunity to be a thought leader on the implementation of AI and its regulation. Such an opportunity is commensurate with, and indeed fundamental to, other ambitions the state has for itself, such as being the leading technology hub in the region. The success of the UAE’s laws and regulations in helping to scale the Middle East technology sector is attributed to a number of factors: the government’s provision of funding for start-ups and incubators offering discounted operating costs; its encouragement of SMEs to bid for public contracts; high demand for the region’s main digital business, e-commerce; and the willingness of experienced career executives to take up the entrepreneurial challenge. Clearly, a mature legal and regulatory environment for AI will need to move up this list if the UAE is to build on its success.