We’re talking about the tech. Are we talking enough about the people?

Addressing the human side of AI adoption in legal – a personal perspective from Sarah James, Partner and Global Head of Adaptive

28 April 2026


There’s no shortage of conversation about AI in legal. Conferences are full. Case studies are compelling. Efficiency gains are real. And the use cases are genuinely impressive.

Yet, in virtually every conversation I have with clients now about AI, we end up in the same place: the people.
Not “what can this technology do?” but: how are our people doing with it?

Many teams now have the tools. The challenge is adoption and, for lawyers and legal operations, doing it safely. Model risk is real. But there is no value in a tool that isn’t used, and no learning without experimentation. So alongside governance and controls, we have to take seriously another risk: what happens when capable, conscientious professionals feel they must either keep up quietly or opt out silently - and whether leaders are investing deliberately in their people to prevent that.

A moment many leaders will recognise

Here’s a familiar pattern (shared here as an anonymised composite, not a single client story):

A legal leader introduces an AI tool and does what good leaders do: they encourage curiosity, experimentation, and pace. The team nods along. Some people engage quickly. Others stay quiet.

Weeks later, in a smaller conversation, a senior lawyer admits they are using the tool - but only at night, privately. They don’t want to be seen “needing it”. They worry that asking basic questions will signal they are behind, or that their judgement will be questioned. So they learn in silence, double-check everything, and carry the anxiety alone.

This is the people story inside many AI programmes. And it’s why adoption is not primarily a technical rollout. It’s a trust and culture transition.

The question beneath the question

A client recently referred to this in an article as ‘the missing conversation’. As a profession, we’ve focused almost all our attention on what AI can do, how it performs, and what it saves. Far less on the human response to its arrival.

That lands for me. And it’s something leaders should take seriously.

If we don’t, the impact isn’t just “soft”. It shows up in the work: slower and more uneven adoption; people learning privately rather than sharing what they’re finding; duplicated effort as teams solve the same problems in parallel; and inconsistent practice that increases rework and risk. And when confidence is low, the safest choice becomes doing things the old way, which means the value case takes longer to realise, if it lands at all.

Change like this is hard. Not because people lack capability, but because it unsettles established ways of working, thinking, and judging quality - often in already demanding, fast-paced environments where expectations remain high. It asks people to adapt while continuing to deliver precision and rigour, at a time when job and career certainty feels harder to access and margin for error feels smaller.

This is not a problem with the people. It’s a predictable response to change, arriving in the place where confidence, expertise, and reputation have traditionally been built: the work itself.

What we’re hearing

In the conversations I’m having with legal leaders, a few themes recur.

Public enthusiasm for AI can sit alongside private, more complicated feelings: anxiety about relevance, uncertainty about how expertise is valued, and a sense that the ground is shifting in ways that are hard to name, but easy to feel.

What determines whether these reactions surface or stay hidden is culture: whether people feel safe to question, experiment and admit what they don’t yet know - and whether leaders model that openness themselves. It’s whether the people dimension is treated as a core priority, not a footnote. At Simmons & Simmons this is modelled very clearly by our leaders, and it’s been a game changer.

An adaptive leadership challenge, not a technical one

The technology is moving quickly, implications are still unfolding, and we’re all - leaders included - navigating something genuinely new. That makes this an adaptive challenge - not a problem to be solved with the right tool or policy, but one that requires us to learn our way through it together, in real time.

That is a different kind of leadership ask. Less certainty, more presence. Less direction, more listening. The ability to say, genuinely: I don’t have all the answers, but I’m committed to working through this with you.

It also asks something of leaders personally. Before we focus on how our teams are responding, it’s worth reflecting on how we’re responding to those same pressures ourselves. Naming that pressure, calmly and without panic, creates permission for others to speak honestly too.

What good looks like in practice

So what does it actually mean to “support people through this transition” in a way that’s practical, not performative?

Here are a few moves I see working well:

  • Name the hidden tension. Acknowledge openly that AI can feel like both opportunity and threat, especially where professional identity is tied to expertise and precision. Naming it reduces shame and increases candour.
  • Make learning visible and safe. Create structured spaces to share prompts, failures, near-misses, and lessons learned. Treat experimentation as a professional discipline, not an extracurricular activity.
  • Redesign what “good” looks like. If the implicit standard is “fast, confident, and error-free”, people will hide uncertainty. If the standard is “rigorous judgement, thoughtful oversight, and good escalation”, you make room for learning without lowering quality.
  • Separate competence from tool fluency. Make it explicit that being a brilliant lawyer is not the same as being an early adopter, and that both matter. This reduces defensive behaviour.
  • Build governance that enables, not just controls. Guardrails matter. But governance should create confidence around what is allowed, what isn’t, what needs escalation, and how to use tools responsibly without fear.
  • Invest in the social side of adoption. Pair lawyers with technologists and operations/legal business colleagues to build shared language and shared ownership. Adoption accelerates when people feel part of a team.

None of this is complicated. But it is often under-designed.

Where we’re leaning in: applied AI and the human journey

One thing that stands out to me is the impact of combining applied AI with deliberate attention to the human journey.
At Simmons & Simmons, we’re doing genuinely inspiring work on applied AI. Not as theatre or speculation, but AI that’s built, tested, governed and used in the real world. We’re seeing what becomes possible when legal expertise, technology and operational design come together with intent.

At the same time, we’re spending equal energy on the people dimension: the leadership, the culture, the confidence-building, the operating model shifts, and the psychological safety required to learn in public.

Each strand matters on its own. But together, they’re powerful. Because real AI adoption happens when the technology and the humans evolve together - when capability and trust grow side by side.

The opportunity I find genuinely exciting

Handled thoughtfully, this moment can be an invitation to bring different people together in new ways. Lawyers alongside technologists. Operational teams alongside subject matter experts. People with different mindsets and ways of seeing the same challenge, working towards a shared goal with a stake in getting it right.

That kind of multidisciplinary collaboration doesn’t happen by default in legal environments. But when the conditions are right - clarity of purpose, openness to collaboration, and a culture that supports learning - it tends to produce something better than any one group could have reached alone. It’s something I’ve seen first-hand. It’s the work we’re engaged in now – both with clients and in our own AI adoption journey – and it’s exciting.

We don’t have it fully figured out. Neither do our clients. But we’re in it together, learning in real time, and that’s exactly the point. And I can’t lie: I’m finding it enriching, exciting and full of opportunities to drive change.

Are we doing enough to support our people through this transition?

It’s a question worth asking, openly, repeatedly, and with genuine curiosity. And then worth designing for, with the same seriousness we bring to the technology itself.

This document (and any information accessed through links in this document) is provided for information purposes only and does not constitute legal advice. Professional legal advice should be obtained before taking or refraining from any action as a result of the contents of this document.