Law inadequate to AI challenges, DSBA webinar hears

Legislation is playing catch up with technological developments, a Dublin Solicitors Bar Association (DSBA) seminar has heard.

The Harassment, Harmful Communications and Related Offences Act 2020 already appears to be inadequate to deal with the challenges posed by AI, attendees were told.

Michael O’Doherty BL was speaking at the DSBA webinar Legal Perspectives on Generative AI – From Legal Privilege to Contracts to Publication and Defamation (28 April).

In his presentation, O’Doherty addressed artificial intelligence as a subject of litigation, with a particular focus on defamation. 

Litigation involving AI remains very much in its early days, he noted. 

While established principles of defamation law continue to apply, issues remain unresolved relating to:

  • Publication,
  • Identification of publisher, and
  • Application of statutory defences.

Artificial intelligence is defined in the Artificial Intelligence Act as systems capable of performing tasks associated with human intelligence.

Generative AI refers to systems that produce content in response to a prompt. 

“Importantly, from a defamation perspective, generative AI does not cut and paste existing content that appears online,” O’Doherty added.

Identifying patterns

Such systems generate outputs by identifying patterns in existing data rather than reproducing source material. 

Notwithstanding ongoing improvements, erroneous outputs (‘hallucinations’) remain a feature.

Defamation in Ireland is governed primarily by the Defamation Act 2009, as amended by the Defamation Amendment Act 2026.

A plaintiff must establish publication, defamatory meaning, and identification.

Defamation is actionable per se for natural persons, while under the 2026 Amendment Act, corporate plaintiffs must demonstrate serious financial harm. 

Strict liability

The tort is one of strict liability, and the established defences continue to apply under section 27 of the Defamation Act 2009.

These defences are: 

  • Truth,
  • Privilege,
  • Fair and reasonable publication on a matter of public interest,
  • Honest opinion, and
  • Innocent publication. 

The statutory reference to publication ‘by any means’ encompasses online and AI-generated content, and no AI-specific provisions have been introduced.

O’Doherty also noted the relevance of the Digital Services Act, in particular Article 6.

This provides that a hosting provider will not be liable for user-generated content unless, upon obtaining knowledge of illegality, it fails to ‘act expeditiously’ to remove or disable access. 

There is no obligation on platforms to monitor content proactively. 

O’Doherty described the Artificial Intelligence Act as ‘very high level’ and of limited direct assistance to litigants.

He noted that it adopts a risk-based classification of AI systems and that article 50 requires providers to ensure that users are informed when they are interacting with an AI system.

“We’ll have to wait for published decisions of the Superior Courts before any view can be formed with any certainty about whether AI will affect defamation actions in any way,” the barrister said.

He then highlighted some emerging case law of note in the United States.

In Walters v OpenAI LLC, radio host Mark Walters claimed that OpenAI defamed him when ChatGPT generated a false document alleging embezzlement. 

The proceedings failed, with the court holding that the plaintiff, as a public figure, was required to establish ‘actual malice’.

O’Doherty noted that as this ‘malice’ requirement does not exist in Irish or EU defamation law, the threshold would be lower.

He then referred to two sets of ongoing proceedings concerning AI-generated summaries produced by Google LLC. 

In Wolf River Electric v Google LLC, a solar company alleged defamation arising from an AI summary stating that it was under investigation for fraud.

In Robby Starbuck v Google LLC, an influencer alleged defamatory AI-generated content falsely linking him to serious criminal allegations, including child sexual abuse and murder.  

Publication vs reproduction 

The issue in both cases is whether AI-generated summaries constitute ‘publications’ by the platform itself rather than reproduction of third-party material.

O’Doherty noted that section 230 of the US Communications Decency Act essentially means that “platforms are immune from liability for any third party content that they host”. 

This is the case even where they have been placed on notice and fail to remove such content. 

By contrast, under the EU Digital Services Act, immunity is conditional upon the absence of knowledge and the taking of action ‘expeditiously’ upon notice.

‘Classification is critical’  

O’Doherty suggested therefore that classification is critical. 

If AI outputs are treated as third-party content, platforms may benefit from immunity.

If AI outputs are treated as original publications generated by the platform, that immunity may not apply. 

The barrister further indicated that, in Ireland and the EU, where a platform generates content in response to a user prompt, it may be characterised as the publisher of that content. 

Innocent publication

In such circumstances, reliance on the ‘innocent publication’ defence under section 27 of the Defamation Act 2009 or the ‘hosting defence’ under Article 6 of the Digital Services Act may be limited.

This, however, remains untested, he said.

O’Doherty also described the identification of the publisher as “the million dollar question”.

This is particularly so where generative AI does not reproduce existing material but generates new text by predicting language patterns. 

This raises uncertainty as to whether liability attaches to the AI provider, the user, or both.

Issue of attribution

The issue of attribution therefore directly affects the availability of statutory defences under Irish law, including section 27 of the Defamation Act 2009, and under EU law, including Article 6 of the Digital Services Act.

The allocation of liability between AI providers and users remains unresolved in Irish and EU courts.

Noting that the case referred to misrepresentation rather than defamation, O’Doherty referenced Moffatt v Air Canada (2024), where the British Columbia Civil Resolution Tribunal held that an operator was liable for outputs generated by its chatbot.

Michael O’Doherty concluded with “a potential and huge benefit to AI” –  the identification of anonymous online defendants.

Published anonymously 

“One of the biggest issues facing victims of defamation in 2026 is that defamatory statements are often published anonymously,” he explained.

Identifying the source typically requires recourse to the Norwich Pharmacal procedure, now extended in modified form to the Circuit Court via an identification order.

O’Doherty described this recourse as “expensive”, “cumbersome”, and not always successful.

“However, recent developments in technology suggest that AI could be used to identify anonymous users without having to go to court,” he concluded.

Gazette Desk
Gazette.ie is the daily legal news site of the Law Society of Ireland
