AI is coming for your practice. Is it secured?

27/04/2026

A plain-English briefing for partners and owners of UK legal firms

Artificial intelligence is no longer a distant prospect for the legal sector. According to Thomson Reuters research, 95% of professional services firms expect AI to be central to their core workflows within five years. Whether you are already exploring AI tools or watching cautiously from the sidelines, the question of security is one you cannot afford to leave to your IT provider.

Drawing on CyberSolver's technical paper on AI security for CISOs, this is a partner-level briefing on the risks you need to understand before AI becomes embedded in how your firm operates.

Where law firms are starting

Most UK firms using AI today are on the first rung of the ladder: tools like Microsoft Copilot or ChatGPT helping fee earners draft documents, summarise bundles, research precedents, and respond to routine correspondence. Productivity goes up. Non-billable time comes down.

The next rung is AI that can act autonomously, managing workflows, processing data, and connecting with your practice management systems without human sign-off on each step. This is arriving faster than most firms appreciate.

Each rung up the AI ladder increases the potential benefits. It also increases the risk.

The risks partners need to know about

Client Confidentiality

Your professional obligations under the SRA Code of Conduct do not change because an AI is involved. But AI changes the scale at which a confidentiality breach can happen.

If your AI tool has access to your document management system, email, or case files, and your internal access controls are not properly maintained (shared folders, old permissions never revoked, files accessible to more people than they should be) the AI will surface that content. It will find the salary spreadsheet that was left in a shared drive four years ago. It will retrieve the board paper that was never properly restricted. A prompt can inadvertently expose what a human would never have found manually.

Tightening of your document permissions is tedious work. Along with a good governance framework, it is also the single most effective step you can take before deploying any AI tool with access to your business data.

The jailbreak problem - and why you cannot fully "guardrail" your way out

You may have heard that AI tools often come with "guardrails" safety filters that prevent the model from behaving badly. The reality, which major vendors are open about, is that guardrails can typically be bypassed in minutes by a determined attacker. They are a necessary layer of protection but are not a panacea.

Prompt injection

The dominant AI-specific attack at present is called prompt injection: crafting an input that tricks the AI into ignoring its instructions and following the attackers instead. No technical expertise is required. A cleverly worded email in your inbox or a document uploaded by an opposing party could, in a poorly secured deployment, cause your AI to disclose information or take actions you never authorised.

This is not a reason to avoid AI. It is a reason to think carefully about what you give it access to, and to ensure there is always human review of high-stakes outputs before they reach clients or courts.

Open-source and third-party tools: a hidden risk

Development teams and AI-savvy fee earners will understandably use free, open-source AI models to help in drafting documentation. The risk is that many publicly available models have uncertain origins, unverified training data, and known vulnerabilities. Security researchers found that one well-publicised AI model failed to block a single harmful prompt when tested against fifty standard attack scenarios. While most will check the AI output before it is sent, things may be missed – even judges have been caught out in their use of AI.

Any AI model your firm uses, whether built in-house, sourced from a vendor, or adopted by a member of staff, should be subject to proper risk assessment. Shadow AI adoption (staff using personal AI accounts for client work) is a particular concern for legal practices, where the data handling implications are serious.

Cloud providers do not take full responsibility

If your firm uses a cloud-based AI service, it is essential understanding the shared responsibility model for security. AWS, Microsoft, and Google are explicit: while they secure the underlying infrastructure, the responsibility for how you deploy and configure AI, and for preventing vulnerabilities, lies with you. Buying a cloud AI service does not transfer the security obligation. Your firm still owns it.

The regulatory picture

The UK's National Cyber Security Centre has published guidelines for secure AI development, and the UK Government has released a Code of Practice for AI cyber security. The SRA and Bar Standards Board are actively developing guidance on AI use in legal practice, particularly around supervision of AI-generated work product and disclosure obligations.

In parallel, the EU AI Act which has extraterritorial reach for firms advising clients or operating across EU borders, imposes obligations on those who deploy AI systems commercially.

What good looks like: five priorities for legal firms

You do not need to become a security expert. You need to ask the right questions of the people who are.

  1. Governance: treat AI security as a partner-level governance issue, not just an IT question. The decisions about what AI can access, and what it can do autonomously, have the same significance as decisions about who in the firm has access to client files.
  2. AI agent identities and access controls: are access controls defined and managed? AI agents have identities that need to be subject to access control. Audit your data access controls before deploying any AI tool with access to client or business data. What can it see? Should it be able to see all of that?
  3. Security risk assessment: establish a policy on AI tool adoption including security risk assessment of all open source and third-party tools and agents. Ensure all usage is subject to approval and what client data (if any) may be used with them. Unapproved tools used for client work are a risk you cannot manage if you cannot see them.
  4. Quality of output: insist on human review of all AI-assisted work product before it goes to clients, courts, or counterparties. Professional responsibility does not transfer to the AI tool or agent.
  5. Responsibility for technical security controls: understand what security controls your vendors are responsible for and what they are not (the shared responsibility model). For example, ask specifically about short and long-term data residency (where is your data stored and processed), whether is it used to train the model, and what happens in the event of a breach.

The Bottom Line

AI will make well-run legal practices more efficient and more competitive. The firms that benefit most will be those that adopt it thoughtfully, with proper governance in place from the outset. The firms that struggle will be those that bolt AI onto existing systems without addressing the underlying security and access control questions, or who discover after a breach that their vendor's responsibility ended at the infrastructure layer.

Get ahead of this is now. The conversation should be happening at partner level, not just in the IT team.

For a copy of the full paper, a more detailed technical briefing or a security review of your firm's AI posture, please contact michael@cybersolver.co.uk or 07508 506931.