For those who have been the subject of intimate image abuse – including children – criminal sanctions after the event are of little comfort. Restriction of access to these tools is urgently required. Clare Daly nudifies the rogue robots
The dawn of the new year, 1 January 2026, and a robot issues an apology.
Grok, the AI feature deployed by the social-media platform X, is quoted as saying: “I deeply regret an incident on 28 December 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualised attire based on a user’s prompt.
"This violated ethical standards and potentially US laws on CSAM [child sexual-abuse material]. It was a failure in safeguards, and I’m sorry for any harm caused. xAI is reviewing to prevent future issues.”
Following this, the Internet Watch Foundation analysed CSAM circulating on the dark web and found criminal imagery of children aged 11-13 that appears to have been created using the tool.
Whether one blames the human prompting the AI to carry out these actions or the technology, one thing is clear: children have no place within this experiment, either as victims or as users.
This article aims to set out, briefly, the legislation in this area and to ask: are we doing enough to protect children online in light of the new risks that AI presents?
The new wave
The issue of AI-generated sexualised deepfakes did not start with Grok. Unregulated AI has been a safeguarding issue for a number of years, particularly with regard to so-called ‘nudification’ apps.
There has been a growing trend towards peer-on-peer abuse by way of CSAM generated by children depicting other children, as has happened in Spain, the UK, and the US.
Australia legislated against nudification apps last September and the UK followed suit, proposing to ban them last November.
In Ireland, the Ombudsman for Children had been calling for legislation against nudification apps since May 2025, and doubled down on that call following the recent reports involving Grok.
Better, faster, stronger
The mantra of ‘move fast and break things’, utilised by many tech gurus, left children’s rights advocates concerned about the role and status of safeguarding and safe design during periods of rapid technological innovation.
Wherever a platform’s own AI feature is capable of being prompted to create content regarded as criminal, one must question the ‘safety by design’ of the feature.
Grok is an AI feature integrated into the X platform and offered as part of the service. Thus, Grok’s outputs clearly fall within the scope of X’s ‘terms and conditions’, whether posted on the platform or otherwise.
The aforementioned apology from Grok arose in circumstances where, last summer, Grok had been upgraded with a so-called ‘spicy mode’ enabling the generation of adult content.
An image-editing function was introduced last month: essentially, users could have Grok alter any photograph simply by typing a prompt. A relatively benign image could thereby be virtually undressed and further altered to show the person in compromising sexualised positions.
While concerns had been raised over ‘spicy mode’ since its release, the Grok apology in January appeared to stir media interest.
What followed were numerous complaints circulating in the media from distressed individuals whose personal images had been rendered into states of undress, without consent, by others using the platform.
While access to the edit function was subsequently limited by the platform to paying subscribers, it appears that this did little to appease concerns. Instead, the legal status of such images has come sharply into focus.
Doin’ it right
Notably, Irish legislation was, for a time, ahead of the curve. Irish law already criminalises both the non-consensual distribution of intimate images and the production of CSAM, both of which appear to apply to AI-generated harm.
The Harassment, Harmful Communications and Related Offences Act 2020 criminalises the distribution of intimate images without the consent of the person depicted.
However, the extent to which this legislation covers images that are digitally altered but not distributed is unclear.
The definition of ‘intimate image’ extends to any visual representation made by any means, including any digital representation of a person’s anatomy, and thus clearly encompasses AI-generated imagery.
Section 2 prohibits the distribution, publication, or threat to distribute or publish an intimate image without consent, with intent to cause harm, or being reckless as to whether harm is caused.
Section 3 provides for the offence of recording, distributing or publishing an intimate image of another person without that other person’s consent.
The emphasis of this legislation is on the recording of an intimate image without consent, as opposed to its generation. While the prohibitions extend to a generated image that is published, or threatened to be published, it is unclear to what extent the law applies to the mere creation of an image absent any publication or threat to publish.
While this might appear to be a semantic difference, the generation of an image without the consent of the person depicted should likewise be captured within the legal parameters of intimate-image abuse.
Of course, the definition of ‘recording’ an image might extend to downloading and saving a video to a camera roll, but this is unclear. Nonetheless, the merits of this legislation cannot be overstated.
Moreover, an operational review of the legislation indicated that, by 2023, almost 100 prosecutions had been brought since it came into force, reflecting judicial and garda attitudes against intimate-image abuse.
Effective reporting channels exist, and individuals affected by intimate-image abuse can take action through hotline.ie. According to its 2024 annual report, hotline.ie removed 93% of intimate images reported to it last year.
In addition to intimate-image offences, the creation, possession, distribution or facilitation of access to CSAM is illegal in Ireland.
Section 5 of the Child Trafficking and Pornography Act 1998 provides that it is illegal to knowingly produce, distribute, print or publish any child pornography, or to encourage or knowingly facilitate such production, distribution, printing or publication of child pornography.
The definition of ‘child pornography’ includes any visual representation or description of a child, and is thought to extend to AI-generated CSAM.
This offence extends to the body corporate where such imagery is generated, under section 9 of the 1998 act, which provides that a director or other officer of a body committing such an offence may also be guilty if the offence was committed with the consent or connivance of that person, or is attributable to any neglect on that person’s part.
The key word here is ‘facilitating’ – “that word is in the legislation that it’s an offence to facilitate the production or distribution of child pornography”, said Prof Conor O’Mahony, former Government Special Rapporteur on Child Protection, quoted in the Irish Examiner on 8 January.
However, is the criminal law a sufficient remedy in circumstances where a child is harmed by such an image?
We can already foresee such harm: over one-quarter of Irish children aged 8 to 12 were using AI chatbots last year. Unregulated AI technologies, such as chatbots and image-altering AI, can present major threats to children.
Human after all
The Online Safety and Media Regulation Act 2022 amended the Broadcasting Act 2009 to establish Ireland’s statutory enforcement framework.
Thereunder, Coimisiún na Meán (the media regulator) regulates designated online services through legally binding online safety codes, and a failure by a video-sharing platform service to comply with an online safety code constitutes a contravention for the purposes of part 8B of the act.
The first Online Safety Code, published in October 2024, provides, among other things, that VSPs (video-sharing platforms) shall include in their terms and conditions measures to protect children, and the public generally, from content referred to in article 28b(1)(a)-(c) of the Audiovisual Media Services Directive (AVMSD).
In December 2023, Coimisiún na Meán designated ten platforms as VSPs pursuant to the act, comprising Facebook, Instagram, YouTube, Udemy, TikTok, LinkedIn, X, Pinterest, Tumblr and Reddit.
Specifically, article 28b(1)(c) of the AVMSD provides that “member states shall ensure that video-sharing platform providers under their jurisdiction take appropriate measures to protect the general public from programmes, user-generated videos, and audiovisual commercial communications containing content the dissemination of which constitutes an activity which is a criminal offence under union law, namely … offences concerning child pornography as set out in article 5(4) of Directive 2011/93/EU of the European Parliament”.
Thus, it is clear that platforms have an obligation, derived from the Online Safety Code, to provide internal rules to prevent the dissemination of CSAM on their services. A breach of those internal rules is then actionable under the same online safety code.
The code further provides specific measures in respect of age assurance for adult content – part B of the code came into force last July. The obligations on platforms to prevent children from accessing adult content appear to apply to anyone under the age of 18, as defined in the code.
A 2024 study by Qustodio found that 30% of US children aged 7 to 9 had an account on X.
X’s rules state that users can share adult content on the platform, provided that it is properly marked.
It is not clear what age verification mechanisms are in place to ensure that children in Ireland are prevented from accessing this content.
Around the world
The EU AI Act is lauded as the first comprehensive legal framework on AI worldwide. It sets rules for AI systems based on their risk level, classifying practices as ‘unacceptable risk’, ‘high risk’, ‘limited risk’, or ‘minimal risk’.
However, nudification apps do not appear to fit within the specific list of ‘high-risk’ use cases set out in the act.
It has been said that the AI Act is weak in terms of regulating deepfakes, and that this weakness arises from “the lack of reference to deep porn and the harmful effects of this phenomenon”.
Aerodynamic
The Digital Services Act (DSA) introduced a new regulatory framework for intermediary service providers in 2022, with the goal of encouraging platforms to fight illegal content while respecting users’ fundamental rights.
The DSA imposes numerous obligations on all intermediary service providers (which include online platforms) in terms of tackling illegal content.
While platforms do not have an obligation to monitor content, article 16 provides that hosting services, including online platforms, must act expeditiously upon obtaining actual knowledge of illegal material.
Where a platform not only hosts illegal content, but provides a mechanism to generate it, questions must be asked about the platform’s actual knowledge of that content, and whether an obligation arises to monitor illegal content generated by the platform itself.
One more time
Regulatory and criminal frameworks are in place – yet for those people who have been the subject of intimate-image abuse, including children, criminal sanctions after the event are of little comfort.
Restriction of access to these tools (particularly by and concerning children) is urgently required.
Peer-on-peer abuse is cited as a rising issue, with the courts frequently raising concerns around young children’s easy access to pornography, which has in turn been linked to serious sexual offending.
The Children’s Commissioner for England has said: “In our lifetime, we have seen the rise and power of artificial intelligence … It has enormous potential to enhance our lives but, in the wrong hands, it also brings alarming risks to children’s safety online.”
These alarming risks to children are identifiable, present online, and being borne out in real time.
Clare Daly is a solicitor in the Office of Legal Services, Tusla Child and Family Agency, a board member of CyberSafeKids, and author of Child Safeguarding in the Digital Age (due in 2026).