AI v AI – Fuelling claims and creating opportunities in the HLS sector
Introduction
"With artificial intelligence we're summoning the demon", warned Elon Musk at an MIT Symposium in 2014. In 2019 Bill Gates said that the power of artificial intelligence is "so incredible, it will change society in some very deep ways". AI is already being applied in a multitude of ways, while legislators urgently seek to devise a catch-all approach for a governance and regulatory regime. Even then, the EU's draft AI Act, undergoing significant consultation, is not expected to become binding law until late 2023 or 2024.
In tandem, the EU Product Liability Directive is undergoing review at European Commission level, with a public expectation that strict liability will be imposed for intangible products such as software, including algorithmic and AI driven programmes. The Collective Redress Directive, which ensures minimum standards of access to class actions across EU Member States, requires implementation by Member States by June 2023.
With significant adoption of AI technology in the healthcare and life sciences ("HLS") sector, coupled with a likely widening of the product liability regime, and easier access to collective redress on the horizon, it is no surprise that many operating in the HLS sector expect an increase in a variety of dispute types, including product liability and class actions.
This article considers how AI is being used to assist claimants and funders in 'building books' of claims, and how AI is being deployed in the HLS sector. Put another way, AI technology adopted by claimants is likely to be deployed to fuel claims against products using AI technology in the healthcare and life sciences sector, a sign of how pervasive AI will become.
The power of AI to bind
Traditional product liability claims in the HLS sector, typically relating to off-the-shelf medicines and vaccines, have often failed to establish medical causation (that the product was capable of causing, and did cause, the condition(s) complained of). The collapse of claims in England and Wales, for example in group litigation involving MMR vaccines, benzodiazepines, and Seroxat, has previously dented the confidence of claimant lawyers and funders in pursuing 'traditional' product liability claims in the HLS sector.
However, the application of AI to build cohorts of potential claims more efficiently and cost-effectively, combined with growing enthusiasm for class actions as access to justice is promoted across the EU and UK, has revived interest in pursuing, or preparing to pursue, large numbers of claims.
AI is being used by claimant lawyers, funders and other stakeholders in a layered way:
Reaching potential claimants by effective targeted marketing, across widely consumed social media platforms such as Meta (Facebook). AI helps to more effectively identify those who demonstrate an interest in particular medical products that might be considered defective, for example where they have visited particular pages or used certain keywords. AI also helps to make targeted adverts more appealing, analysing particular colours and written content to determine what most effectively gains attention and results in click-throughs. The primary challenge faced by such adverts at present is credibility; many are perceived as scams. Inevitably AI will assist in determining how to improve the presentation of targeted marketing, and boost credibility.
With a cohort of potential claimants identified, AI can greatly facilitate the gathering, processing and navigation of large quantities of key data such as names, dates of birth and relevant medical information such as the nature and date of onset of symptoms. That allows sub-groups to be identified that might be particularly viable for class actions (e.g. those who experienced symptoms shortly after taking the allegedly defective product, and without pre-existing co-morbidities); the first sketch after this list illustrates that kind of filtering. Improved efficiency of these activities reduces cost, which increases the potential profitability for claimant lawyers and funders wishing to advance such claims. The greater the reward, the greater the appetite for risk.
AI can analyse historical court judgments to predict the likely outcome of future claims. This will allow claimant lawyers and funders to target and more vigorously pursue the types of claim with a greater prospect of success, identify distinguishing factors that improve prospects, and build case strategies accordingly; the second sketch after this list gives a flavour of this kind of outcome prediction. In 2017 Daniel Katz used AI to analyse US Supreme Court case data back to 1791, with the model correctly predicting approximately 71% of individual justices' votes and around 70% of case outcomes. The viability of this technology is such that concern around analysing data relating to judges resulted in the activity becoming a criminal offence in France in 2019, under Article 33 of the Justice Reform Act.
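By way of illustration only, the sketch below (in Python, using the pandas library, with invented column names, invented data and a hypothetical 30-day onset window) shows the kind of filtering involved in carving a higher-viability sub-group out of a larger cohort:

```python
# Illustrative only: filter a hypothetical claimant cohort into a sub-group
# whose symptoms began within 30 days of taking the product and who have no
# recorded co-morbidities. Column names, data and thresholds are assumptions.
import pandas as pd

cohort = pd.DataFrame({
    "claimant_id": [1, 2, 3, 4],
    "product_taken_on": pd.to_datetime(["2022-01-10", "2022-02-01", "2022-03-05", "2022-04-20"]),
    "symptom_onset_on": pd.to_datetime(["2022-01-20", "2022-06-15", "2022-03-12", "2022-04-25"]),
    "co_morbidities": [0, 2, 0, 1],  # count of pre-existing conditions
})

# Days between taking the product and onset of symptoms
cohort["days_to_onset"] = (cohort["symptom_onset_on"] - cohort["product_taken_on"]).dt.days

# Sub-group: onset within 30 days and no pre-existing co-morbidities
viable = cohort[(cohort["days_to_onset"] <= 30) & (cohort["co_morbidities"] == 0)]
print(viable[["claimant_id", "days_to_onset"]])
```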
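The second sketch, again purely illustrative and not a description of the model used in the 2017 study, trains a simple text classifier (scikit-learn, with invented judgment summaries and outcomes) to estimate whether a new claim is likely to succeed:

```python
# Illustrative only: a toy classifier trained on invented summaries of past
# judgments, used to estimate the probability that a new claim succeeds.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_judgments = [
    "causation established by epidemiological evidence, claim upheld",
    "no reliable evidence linking the product to the injury, claim dismissed",
    "manufacturer failed to warn of known side effects, claim upheld",
    "symptoms pre-dated use of the product, claim dismissed",
]
outcomes = [1, 0, 1, 0]  # 1 = claimant succeeded, 0 = claim failed

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_judgments, outcomes)

new_claim = ["expert evidence supports causation and the warning was inadequate"]
print(model.predict_proba(new_claim))  # estimated probability of each outcome
```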
These applications only briefly illustrate the range of ways AI is being used to 'book build'. However, the combination of machine-driven activity, human expertise, and a healthy risk/reward balance for pursuing claims is likely to fuel an increase in claimant activity.
Unleashing AI in the HLS Sector
The adoption of AI in the HLS sector is already leading to more targeted and effective healthcare, and development of medicines and vaccines. It is important to recognise that the Product Liability Directive was not intended to stifle innovation that carried the potential to benefit society. It instead sought to fairly allocate the risk of harm from defective products across consumers, producers, and others in the supply chain.
Uncertainty exists as to how liability will be determined for products applying AI; the deemed tolerance of consumers in the UK and across different Member States to accept the risk of harm from the application of AI (whether in isolation from, or as part of, a physical product) remains to be determined. Furthermore, establishing a causative link between an AI decision and a loss, allocating responsibility among parties in the supply chain, and determining the scope of disclosure necessary to prove whether or not an AI product is safe, are likely to be further areas of uncertainty.
Three examples of AI in the HLS sector, which create opportunities but also bear risks, are:
Increased efficiency in analysing viable candidates for clinical trials. The expensive, time-consuming and often unsuccessful clinical trial process is benefitting from early application of AI to analyse vast quantities of medical records, identifying relevant factors such as prior medicine use, symptoms, and genetic features, in order to match potential subjects against clinical trial criteria more efficiently; the first sketch following these examples illustrates this kind of screening. However, such extensive AI analysis, which may be time-consuming to scrutinise and unpick manually, could result in unintended bias and discrimination; for example, selecting disproportionately few candidates based on age, gender or race, because particular characteristics common to certain classes are considered not to match the trial criteria. In turn, the product might be developed such that it is more effective for the class selected for clinical trials and/or there is greater uncertainty around adverse side effects for classes not selected.
Healthcare at home, by an automated health assistant. Such technology is capable of reading and analysing a patient's clinical information in real time (e.g. heart rate and rhythm, blood oxygen levels), and providing output to facilitate clinical decisions on patient health and treatment; the second sketch following these examples shows the kind of flagging rule involved. Robotic assistants are even helping to ease patient loneliness. However, defective AI analysis risks undervaluing and masking markers that a clinician might otherwise decide to investigate further (e.g. by carrying out a home visit). Alternatively, the clinician might prescribe different treatment in reliance on defective AI analysis, resulting in potential harm through over-medicating, or a loss of chance to prevent or reduce the risk of certain symptoms through under-medicating or not medicating at all.
Personalised medicines, bespoke to patients exhibiting certain characteristics. Off-the-shelf medicines and traditional treatment pathways broadly assume that individuals and their conditions will react in a similar way to a particular medicine. However, the appropriate selection and dose of medicines, and their effectiveness, depend in part on the individual characteristics of the patient. AI is being used to analyse vast amounts of personal data and the outcomes of prior treatment, to help separate patients into distinct clusters, each warranting a different, i.e. personalised, treatment plan; the third sketch following these examples illustrates this kind of clustering. Going further, the medicine itself can be produced bespoke to each cluster. The potential benefit - more effective treatment - is obvious. However, the process is assumptive, and a risk arises that attributes not properly considered make a patient unsuitable for a particular personalised treatment (for example if it is wrongly assumed that the patient can tolerate a more concentrated dose).
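The first sketch is a purely illustrative example (Python/pandas, with invented patient records and hypothetical inclusion criteria) of rule-based screening against trial criteria, followed by a crude check of how the selected group is distributed by age band, the kind of review that can surface unintended skew before it is baked into a trial population:

```python
# Illustrative only: rule-based eligibility screening against hypothetical
# trial criteria, then a simple check of which age bands survive screening.
# All fields, data and thresholds are assumptions for illustration.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [101, 102, 103, 104, 105],
    "age": [34, 71, 45, 29, 66],
    "prior_medicine_x": [True, False, True, True, False],
    "genetic_marker_y": [True, True, False, True, True],
})

# Hypothetical inclusion criteria: aged 18-65, prior use of medicine X,
# and carrying genetic marker Y
eligible = records[
    records["age"].between(18, 65)
    & records["prior_medicine_x"]
    & records["genetic_marker_y"]
]

# Distribution check: what share of each age band was selected?
records["selected"] = records["patient_id"].isin(eligible["patient_id"])
records["age_band"] = pd.cut(records["age"], bins=[18, 40, 65, 90])
print(records.groupby("age_band", observed=True)["selected"].mean())
```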
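The second sketch, again entirely illustrative and with invented thresholds, shows the kind of rule a home health assistant might apply to streamed readings, flagging values for clinician review rather than acting on them:

```python
# Illustrative only: flag vital-sign readings for clinician review.
# Thresholds and readings are invented for illustration.
from dataclasses import dataclass

@dataclass
class Reading:
    heart_rate_bpm: int
    spo2_percent: float

def flag_for_review(reading: Reading) -> list[str]:
    """Return human-readable flags; an empty list means nothing was raised."""
    flags = []
    if reading.heart_rate_bpm < 50 or reading.heart_rate_bpm > 110:
        flags.append(f"heart rate {reading.heart_rate_bpm} bpm outside expected range")
    if reading.spo2_percent < 94:
        flags.append(f"blood oxygen {reading.spo2_percent}% below expected range")
    return flags

for reading in [Reading(72, 97.0), Reading(118, 92.5)]:
    print(flag_for_review(reading) or "no flags raised")
```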
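The third sketch is a minimal illustration (scikit-learn, with invented patient features and an assumed number of clusters) of grouping patients into clusters that could each be mapped to a different treatment plan:

```python
# Illustrative only: cluster patients on a handful of hypothetical features
# so that each cluster can be mapped to a different treatment plan.
# Feature choice, data and the number of clusters are all assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# columns: age, weight (kg), baseline biomarker level (invented data)
patients = np.array([
    [34, 62, 1.2],
    [68, 90, 3.4],
    [45, 75, 1.9],
    [71, 85, 3.1],
    [29, 58, 1.1],
])

features = StandardScaler().fit_transform(patients)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(clusters)  # cluster label per patient; each label maps to a treatment plan
```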
These applications are merely indicative of the vast investment in, and use of, AI in the HLS sector.
Conclusion
It is evident that AI is fuelling both the bringing of claims and the effectiveness of the HLS sector. The adoption of a range of AI technology by the HLS sector undoubtedly creates incredible opportunities for the benefit of humanity, but at the same time risks claims when AI causes harm. The next three years in particular will be critical in shaping the nature of future risk in the HLS sector.