As GenAI adoption accelerates, legal teams face dual challenges: guiding organisational compliance and integrating AI into legal services. This Q&A between Simmons Adaptive and industry expert Minesh Tanna explores the impact of GenAI on the future of legal teams.
How are in-house legal teams responding to AI?
Generative AI (GenAI) has prompted intense professional interest since the launch of ChatGPT in late 2022. Organisations are racing to explore its adoption to realise anticipated cost efficiencies and to achieve competitive advantage. This is affecting in-house legal teams in two ways. The first is legal teams' role in guiding their organisations through the legal, regulatory and governance challenges of adopting GenAI in the wider business. Legal teams are typically experiencing an influx of proposed use cases as organisations look to adopt GenAI across their business. The second is how GenAI can be used internally to deliver legal services. This is likely to have a major impact on legal services, which are data- and information-heavy and have to date been manual and labour-intensive. So it makes sense that legal teams are also looking at how they can realise the benefits of GenAI, especially given the relatively high costs of obtaining legal services.
What do you see as the key legal risks?
Sophisticated forms of GenAI have the capacity to act essentially autonomously. This is particularly so as we enter the world of 'agentic AI', in which more responsibility is delegated to connected AI systems to undertake a multitude of tasks autonomously. As a result, AI increasingly creates the risk of unpredictability, which in turn raises legal concerns relating to, for example, the accuracy of output, and bias and discrimination. GenAI systems are often powered by foundation models (for example, large language models (LLMs)), which are trained on extensive datasets. This introduces other legal issues, such as intellectual property (IP) risks (if, for example, the training datasets contain copyright-protected material), as well as data privacy issues. Transactional and contractual aspects of AI are also increasingly posing legal challenges, because contracts dealing with AI as the subject matter need to reflect the various risks that AI presents.
What advice would you give to in-house legal teams in terms of compliance?
Start now. Our proposed AI governance model comprises a knowledge framework, a risk assessment, and a compliance strategy. A useful first step is to map AI tools and uses within the organisation (e.g. in an inventory), recording where and what type of AI is being used. The knowledge framework also entails ensuring that individuals working with AI within the organisation understand the risks. The EU AI Act contains a specific requirement around "AI literacy", which already applies (as from 2 February 2025) and requires organisations to ensure that staff dealing with AI have a sufficient level of AI literacy. An inventory of AI tools and uses can then form the basis for risk assessment, which allows the organisation to assess legal, reputational and practical risk, including, for example, how the EU AI Act will affect current and planned AI use cases. With knowledge and risk assessment in place, the focus can shift to managing those risks and implementing compliance measures. Doing this work retrospectively is not ideal, so we encourage teams to start early and lay the groundwork for the inevitable rise in AI adoption over the next few years.
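The inventory-then-prioritise workflow described above can be sketched in code. The following is a minimal, illustrative Python sketch only: the field names, the risk tiers, and the example entries are assumptions chosen to mirror the knowledge-framework/risk-assessment steps, not a real classification under the EU AI Act, which requires legal analysis of each use case.

```python
from dataclasses import dataclass

# Hypothetical risk tiers, loosely inspired by the EU AI Act's
# risk-based approach; real classification needs legal analysis.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AIUseCase:
    """One entry in an organisation's AI inventory (illustrative fields)."""
    name: str            # e.g. "contract-review assistant"
    business_unit: str   # where in the organisation the tool is used
    ai_type: str         # e.g. "LLM-based GenAI", "predictive model"
    risk_tier: str       # provisional tier assigned during risk assessment

    def __post_init__(self):
        if self.risk_tier not in RISK_TIERS:
            raise ValueError(f"unknown risk tier: {self.risk_tier}")

def priority_entries(inventory):
    """Filter the inventory to entries needing priority compliance review."""
    return [u for u in inventory if u.risk_tier in ("prohibited", "high")]

# Hypothetical inventory entries for illustration.
inventory = [
    AIUseCase("contract-review assistant", "Legal", "LLM-based GenAI", "limited"),
    AIUseCase("CV-screening tool", "HR", "predictive model", "high"),
]
priority = priority_entries(inventory)
```

The point of the sketch is the sequencing: the inventory (knowledge) comes first, the provisional tier (risk assessment) is attached to each entry, and only then are compliance resources directed at the highest-risk uses.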
How will AI shape the future of work for legal teams?
The legal industry is ripe for AI adoption, especially given the time and cost efficiencies that can be achieved. Research has already shown that the legal industry will be among those most affected by AI. That said, change will not occur overnight. Take eDiscovery, for example. Ten years ago, the review and analysis of large datasets in litigation was an almost entirely manual task. Today, most review is undertaken using eDiscovery technology, including AI. But that shift has happened gradually, over many years. A really important question for me concerns future generations of lawyers. Junior lawyers currently perform document review and analysis, conduct legal research, and create first drafts. If AI does these tasks, how will junior lawyers acquire the practical know-how and experience that senior lawyers possess today? I can see AI bringing about systemic change in the make-up of legal teams, including new roles in a legal context.
How should in-house legal teams be upskilling in anticipation?
Lawyers won't necessarily be replaced by AI, but those who can better utilise it will have an advantage. You don't need to be an expert to have a solid understanding of AI, but you do need to engage with these new technologies. Lawyers are specialists, so it can be daunting to get to know other technical areas. All I can say is, don't be intimidated. AI is complex, but it's essentially the next generation of IT and, as with earlier IT developments, training and familiarity can help get you up to speed. We have various resources to help with this, including a tailored AI literacy programme (with an eLearning module for foundational AI training) and other resources dedicated to AI governance and compliance with AI regulations such as the EU AI Act.
An industry-recognised leader in AI law, Minesh Tanna is Simmons & Simmons' Global AI Lead. He is a published author and regular speaker, lecturer and media commentator on AI law. Minesh is currently Chair of the AI Committee of the City of London Law Society (CLLS) and Chair of the AI Group of the Society for Computers and Law (SCL).













