Online-safety regulator Coimisiún na Meán (CnaM) has said that it is engaging with the European Commission on social-media platform X’s AI tool Grok.
There have been concerns that the tool is being used to create sexually explicit images of adults and children.
Niamh Smyth (Minister of State with responsibility for AI) told RTÉ radio this morning (8 January) that both Irish and EU laws have been broken in relation to AI-generated images of children.
CnaM said that, under the Digital Services Act, the European Commission was responsible for overseeing the compliance of very large online platforms with their requirements to assess and mitigate risks that their services may create in relation to illegal content and the protection of fundamental rights, including protection for minors.
The regulator said that it was also continuing to engage with An Garda Síochána on the issue.
“The sharing of non-consensual intimate images is illegal, and the generation of child sexual-abuse material is illegal,” its statement said.
CnaM urged members of the public who were concerned about images shared online to report them to gardaí, adding that reports could also be made to the Irish national reporting centre, Hotline.ie.
The watchdog said that users who had problems reporting illegal content to online platforms could also contact CnaM.
The online-safety campaign group CyberSafeKids has called for an “urgent and total ban” on so-called ‘nudify’ and ‘pornify’ applications, and any other AI-based tools capable of generating deepfake sexual images of both children and adults.
Its chief executive Alex Cooney has called on Ireland to follow Britain and Australia by introducing legislation explicitly prohibiting AI tools from generating or manipulating sexualised images of minors.
“The current AI Act fails to adequately address the unique and heightened risks that AI systems pose to children,” she said.