Innovation is not new to the legal sector – lawyers have always adapted to new tools and working practices as they have emerged. Over recent decades we have seen changes in regulation, including a degree of liberalisation in how businesses offering legal services can be owned and operated, the rise of the legal operations model, and a shift from traditional advisory approaches toward a broader range of knowledge services and products. The pandemic accelerated hybrid and remote working, and innovation is being supported by systemic initiatives from legal system stakeholders, such as the Ministry of Justice’s AI Action Plan for Justice. Generative AI (GenAI) is now reshaping the legal landscape at a rapid pace.
A mainstream technology – but unevenly spread
Legal technology company Clio suggests that “AI adoption is near universal” in the UK and Ireland legal market (UK & Ireland Legal Insights Report 2026), but research from the Solicitors Regulation Authority (‘Sole practitioners’ and small firms’ use of technology and innovation’) and the Bar Standards Board (‘Technology and Innovation at the Bar’) reports a more nuanced pattern of adoption across the legal sector. There is also a competitive dimension to adoption: in September 2025, The Law Society Gazette reported on research from Thomson Reuters showing that 78% of the UK’s top 40 law firms now advertise their use of AI, up from 60% in 2024. For these firms, AI is not just a productivity tool; it has become part of their brand and competitive positioning.
From a consumer perspective, there has been a sharp increase in the percentage of the public using GenAI to access legal information instead of using a search engine (Ofcom’s Online Nation Report 2025). This is significant when we know that GenAI output does not always provide legally accurate responses. Consumer use of GenAI for legal queries is having real-world impact; for example, AI summaries now appearing at the top of some search results are reported to have a ‘disintermediating effect’ by diverting traffic away from trusted sources of legal information (Select Committee Inquiry into Access to Justice, February 2026). There is also an interesting dynamic emerging around who is using AI and when: clients are increasingly obtaining AI-generated legal information and then asking lawyers to verify it, just as lawyers themselves may be using AI and then reviewing the output.
Trust, transparency and the ‘disclosure disconnect’

Whatever the extent of current AI adoption, trust remains central to the lawyer–client relationship, and it is here that some of the most pressing questions arise.
Not all users of legal services want, or are able, to engage digitally – the Legal Services Consumer Panel Tracker Survey 2025 reported that 56% of respondents would trust legal services less if they could only be accessed digitally. Approximately 1.6 million people in the UK still have no internet connection, and around 23% of the population have very low digital capability (UK Government, Digital Inclusion Action Plan).
Even for those users who are able and willing to engage with digitally delivered legal services, a 2025 poll by Robin AI found that respondents’ views on AI use in law differed according to the nature of their legal matter. Respondents drew a “hard line between low-stakes paperwork and more personal matters”, with support for AI use dropping sharply for “more emotionally charged or legally complex matters such as divorce, redundancy and criminal defence”. Only 4% of respondents in the Robin AI poll said they would trust AI on its own for legal advice, with most preferring to rely on either a traditional lawyer (69%) or a lawyer using AI as a support tool (27%).
The 2026 Clio report identifies what it terms a “disclosure disconnect”: 81% of law firms surveyed said they disclose their AI use to clients, but only 7% of clients surveyed recalled their lawyer actively sharing that information. This raises real questions about governance, transparency, and professional conduct obligations. Regulatory oversight of AI use in the delivery of legal services continues to develop: one of the policy priorities in the Legal Services Board’s Business Plan for 2026/27 is “harnessing technology and innovation to support access to justice and efficiencies in the legal sector”.
Roles are changing, not disappearing

In a recent speech entitled ‘Lawyers and Education in the Machine Age’ at the Association of Law Teachers Conference (April 2026), Master of the Rolls Sir Geoffrey Vos predicted that in the future, individuals and businesses:
“will no longer value lawyers in the same way because they will have their own access to the previously forbidden land of laws and legal precedents… instead, they will value the guidance and insight that trained lawyers can give to explain what the machines have advised, and perhaps also, determined”.
So, as we face a future where AI becomes more integrated into legal workflows, and where consumers have greater access to increasingly robust legal information sourced from AI, the instinctive assumption might be that demand for legal services will fall. But economic history suggests the opposite. Jevons Paradox – the nineteenth-century observation that greater efficiency in coal consumption increased rather than reduced overall demand for coal – may apply here. As AI lowers the cost and friction of accessing basic legal information, more people will be able to engage with their legal questions. Some queries might be effectively resolved by AI alone, but many will surface a level of complexity or risk that only a trained lawyer can address, and many consumers will continue to seek out the insight and human support that lawyers can provide. AI may not reduce the need for lawyers; it may end up expanding the pool of people who discover they need one!
