
Law firms ‘juiciest target’ for AI-enhanced hacks

11 Dec 2025 | Technology


Generative AI is a significant evolution from traditional predictive models, but it is not a database, the Law Society Technology Committee 2025 conference has heard.

Speaking at AI for Today’s Lawyer (November 26), Professor John Kelleher explained that LLMs (large language models) reproduce linguistic patterns rather than retrieve authoritative or up-to-date information. 

They are, therefore, only suitable for factual tasks if paired with retrieval-augmented generation (RAG), which “combines [the] language model with a database”.
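The idea can be sketched in a few lines: retrieve a relevant source document first, then ground the model's prompt in that source so it answers from the text rather than from its training data. This is a minimal toy illustration, not any specific product; the documents, functions, and keyword-overlap retrieval below are all hypothetical (real systems use vector embeddings).

```python
# Toy sketch of retrieval-augmented generation (RAG):
# look up the most relevant document, then build a prompt
# that instructs the model to answer from that document only.

DOCUMENTS = {
    "statute": "The Statute of Limitations 1957 sets time limits for bringing claims.",
    "gdpr": "The GDPR governs the processing of personal data in the EU.",
}

def retrieve(query: str) -> str:
    """Pick the document sharing the most words with the query
    (a stand-in for the embedding search real systems use)."""
    q = set(query.lower().split())
    return max(DOCUMENTS.values(),
               key=lambda doc: len(q & set(doc.lower().split())))

def build_prompt(query: str) -> str:
    """Ground the prompt in the retrieved text so the model
    answers from the source rather than from memory."""
    source = retrieve(query)
    return (f"Answer using ONLY this source:\n{source}\n\n"
            f"Question: {query}")

print(build_prompt("When does the GDPR apply to personal data?"))
```

The point of the sketch is the second step: without the retrieved source in the prompt, the model can only reproduce linguistic patterns; with it, the answer is anchored to an authoritative text.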

Bias

The chair of computer science at TCD added that AI had been shown to display gender and racial bias and to produce plausible but false content known as hallucinations.

Dr Andrew Hines extended this, framing the risk as “confabulation”: the more niche or complex the query, the higher the chance of incorrect output.

“Mixing up ideas or getting facts slightly wrong is where the real problem lies,” the assistant professor at the UCD School of Computer Science said.

This is of particular relevance in a small jurisdiction like Ireland, according to Paula Fearon (McCann FitzGerald).

Kelleher said that careful prompting was key to guiding AI safely and effectively. Good prompts:

  • Provide clear context,
  • Assign a specific role (‘you are a legal assistant’),
  • Define a particular task,
  • Specify a desired output format.
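The four elements above can be assembled into a reusable template, which also makes the “repeated consistently” point below concrete. This is an illustrative sketch only; the function name and the example wording are hypothetical, not guidance from the conference.

```python
# Illustrative structured prompt covering the four elements:
# context, role, task, and output format.

def structured_prompt(context: str, role: str, task: str,
                      output_format: str) -> str:
    """Assemble the four elements into one prompt string,
    so the same structure can be reused across queries."""
    return (
        f"Context: {context}\n"
        f"Role: You are {role}.\n"
        f"Task: {task}\n"
        f"Output format: {output_format}"
    )

prompt = structured_prompt(
    context="A client asks about time limits for a contract claim in Ireland.",
    role="a legal assistant",
    task="Draft three clarifying questions to ask the client.",
    output_format="A numbered list.",
)
print(prompt)
```

Keeping the template fixed and varying only the slot values is what makes the prompting repeatable from one matter to the next.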

Expert oversight

Structured prompts, repeated consistently, improve reliability, but expert oversight remains crucial.

Brian McElligott (MHC) highlighted the issue of confidentiality, warning that using free LLMs was effectively “putting information out to the public”.

Even within ringfenced AI platforms, practitioners must ask: “Where is my data going? How is it being processed?” 

The head of artificial intelligence at MHC added: “Just because you can use something doesn’t mean your clients are happy with you using it.”

The EU AI Act structurally resembled medical-device regulation more than GDPR, McElligott said. It regulates uses, not tools, through the pyramid of banned, high-risk, limited-risk, and low-risk systems. 

Although legal services are not themselves regulated by the act, lawyers’ use of AI – including enterprise tools – must still satisfy General Standards Guidelines produced by the Law Society regarding confidentiality, privilege, disclosures, data protection, and client expectations.

The Law Society has also issued guidelines on AI.

The act’s high-risk compliance deadlines have been extended due to complexity and because member states are not ready, but AI-literacy programmes are already expected in practice.

Dr Andrew Hines emphasised that AI literacy was now a ‘lifetime journey’ and that lawyers must distinguish between tools that augment their work and those that risk outsourcing professional judgment.

AI governance is “socio-technical”, affecting trust and accountability, according to Labhaoise Ní Fhaoláin, PhD Researcher in AI law and regulation.

“AI governance is through internal mechanisms within your organisation,” the solicitor continued – including senior accountability, input from across an organisation, and cultural norms that encouraged staff to raise concerns.

Kate Colleary of Pembroke Privacy reiterated the importance of firms building AI governance frameworks that worked for them, and of all staff being familiar with the framework.

“It’s a risk mitigation issue,” she said.

Risk mitigation is also key in cybersecurity.

AI has “changed the volume, speed and quality” of cyber-attacks, according to cyber security expert Paul Delahunty (Stryve), particularly in regard to impersonation and automation.

Law firms, which held client money and sensitive data, were “the juiciest targets,” Delahunty said, particularly for ransomware attacks.

Eimear Lane (Brown & Brown Insurance Brokers) highlighted the shift in ransomware economics: even less sophisticated attackers could now feed stolen documents into models that instantly identified valuable content.

Describing it as “mind-boggling” from an insurance perspective, Lane said that insurance companies, although so far silent on AI, were watching.

“You might find in a couple of years they'll start excluding areas of cover, similar to what they did with the cyber.”

Lane said that, although it was not publicised, “firms are paying large sums of money [for the return of information] across different sectors”.

Delahunty agreed that it was often a “silent crime” that victims did not wish to advertise.

Solicitor Elizabeth Fitzgerald pointed out that the cyber-security section of long-form insurance proposals acted as “a great checklist for things to implement in the firm”. 

Eimear Lane added that modern policies provided training, incident response and ransomware cover and, while risk could not be eliminated, it could be diminished.

Gazette Desk
Gazette.ie is the daily legal news site of the Law Society of Ireland

Copyright © 2025 Law Society Gazette.