Law Society conference: 3 insurer demands force firms to put AI on their risk agenda
Unexpectedly blunt scrutiny from insurers has pushed AI onto the frontline of legal risk management, the Law Society's risk and compliance sessions made clear. As firms completed professional indemnity insurance renewals this year, underwriters sought explicit detail on AI policies, controls and staff behaviour — a line of inquiry that many firms were not prepared to answer fully.

Why this matters now

The timing is critical: insurers and regulators have noticed instances where generative systems were used without rigorous oversight. There have been reports of lawyers citing AI-generated cases and of solicitors placing client material into open generative tools; one practitioner admitted pasting Home Office emails containing client details into ChatGPT. Those episodes have prompted regulators and the courts to scrutinise conduct, and that scrutiny is feeding directly into insurers' renewal questions.

Law Society risk conference exposes insurer demands

At the Law Society’s risk and compliance conference, delegates were asked whether AI use in firms is controlled. In a poll, 14% of attendees agreed that AI was “allowed but largely unmanaged,” and almost half assigned responsibility for day-to-day management to individual fee-earners rather than to supervising partners. Only 24% said managing partners should carry the accountability. Brokers and underwriters are reacting to those patterns: they want clear, accountable governance rather than vague terms such as “experimenting” with the technology.

Deep analysis: what lies beneath the headline

Insurers are focused on three practical dimensions when assessing a firm’s risk profile: the accuracy of AI-produced work, the security of client data, and the human verification that accompanies automated outputs. Marc Rowson, partner at insurance broker Lockton, explained that underwriters are not trying to penalise the use of AI but want evidence that firms can show how outputs are checked and how data is protected. Arjun Rohilla, senior vice president at broker Paragon, warned that insurers are seeking clear answers rather than vague statements about experimentation.

The courts and regulators have already signalled intolerance for misuse. Last year, High Court judge Mr Justice Ritchie characterised the deployment of fake case citations by some practitioners as “appalling professional misbehaviour.” Separately, two immigration solicitors have been referred to the Solicitors Regulation Authority for apparently using generative AI to create irrelevant or false cases. These developments create a chain reaction: regulatory scrutiny informs insurer questions, which in turn shape renewal terms and the market’s willingness to underwrite.

Expert perspectives and regional implications

Olivier Roth of the Solicitors Regulation Authority emphasised the role of professional judgement in the adoption of generative tools, stating: “Generative AI should be seen as a tool to support professional judgement, not a replacement for it.” That framing shapes the expectations insurers express on renewal forms: not a ban on the technology, but documentation of supervision and verification.

One firm owner said renewal forms contained an unprecedented number of questions about AI policies, risk plans and staff practices. Insurers are still in a fact-finding phase and will scrutinise where the human element of verification sits within workflow. For firms, the immediate consequence is practical: renewals now require demonstrable policies addressing accuracy, data security and staff behaviour — and an audit trail showing those policies are applied.

The ripple effects extend beyond individual firms. As underwriters incorporate AI governance into their underwriting frameworks, market capacity and premiums could shift against firms that lack clear controls. Solicitor discipline, court processes and the professional indemnity market are now examining the same problem from different angles, raising the stakes on internal compliance work and governance.

What remains unsettled is how granular insurers will expect firms to be when describing controls, and whether the profession has yet built the supervisory structures needed to satisfy both regulators and underwriters. Will firms invest in governance now to secure renewals on favourable terms, or will gaps in oversight translate into higher costs and disciplinary exposure down the line?

As this agenda tightens around firms, the Law Society community faces a choice: treat AI as an operational detail to be left to fee-earners, or embed it in formal risk policies that demonstrate oversight, verification and data security. Which path will the profession take next — and who will be held accountable when an AI-assisted error reaches a court or a claims adjuster?