GDPR remains ‘fundamental’ to AI regulation

22 Apr 2026

Solicitors have heard that GDPR obligations remain “fundamental” when considering data-protection issues linked to artificial-intelligence (AI) tools.

The annual conference of the Law Society’s Intellectual Property and Data Protection Committee (20 April) heard comprehensive reviews of recent legislative developments and case law in the area, as well as an exploration of the use of AI tools in the workplace.

Olivia Mullooly (partner, Arthur Cox) told the event that regulation in the area was very much a “moving feast” in the light of continuing negotiations on the EU Digital Omnibus, a package aimed at cutting some of the regulatory burden on businesses.

She added that, until the EU’s AI Act came fully into force, the Data Protection Commission (DPC) would remain a central regulator, and that GDPR had so far been an effective tool in regulating new and novel activities by AI companies.

The Arthur Cox partner told the conference that the GDPR continued to “overlap and interweave” with many other regulations.

Firms urged not to wait

In a panel discussion on AI compliance moderated by committee chair Elaine Morrissey, Bird & Bird partner Deirdre Kilroy said that firms should not ignore the fundamental principles of GDPR in the use of AI.

She also urged firms, amid what she described as “shifting” regulatory ground, not to wait to take compliance action, even if they might have to come back and refine their processes later.

Mullooly also warned that some of the exemptions in the proposed EU package often came with conditions attached.

She urged organisations to review the final details carefully to find out how useful the changes would be. “You’ll find that you can easily get tripped up by thinking that it does something that it doesn’t,” she said.

Uncertainty

Richard Greene, who works in-house for global software company Autodesk, told the event that it was somewhat challenging to keep ahead of regulatory developments, particularly in the EU.

“It’s not so much that the new laws are strict; it’s sometimes more the uncertainty that it creates around compliance,” he said, citing the Digital Omnibus as an example.

“I think a lot of that uncertainty certainly adds complexity to our prioritisation recommendations,” Greene stated.

Asked about purchasing decisions on AI tools, the in-house lawyer said that there was “an emerging conflict between the business wanting to accelerate in the area of AI and then being obviously cautious about onboarding vendors”.

He added, however, that his company had put in place a rigorous process for vendors who processed personal data, and had expanded that process for AI tools.

Privacy statements

The panel discussed the difficulties of drafting privacy statements in the light of evolving regulation, particularly for employees.

Greene told the event there was a potential conflict between, on the one hand, trying to provide enhanced transparency by adding more content to statements, and on the other, making it easier for users to understand the privacy statement.

Mullooly acknowledged that privacy statements were difficult to get right, as the operation of AI tools could be “quite opaque”.

She added that there was also a conflict between an employer’s legitimate interest in using data to develop an AI system and an unconditional right of objection proposed by the Digital Omnibus.

‘Scary’ threat landscape

Asked about cyber breaches that involved AI, Carlo Salizzo (partner, Dentons) told the event that AI was making the threat landscape “quite scary”.

“While we get these wonderful efficiency gains from these AI tools, bad actors get the same efficiency gains,” he said.

Salizzo said that the first step for organisations should be to “always plan, test, come up with a process to follow, road-test that, and then update it over time”.

The Dentons partner said that there were also new risks from agentic AI – tools that could act with autonomy. These could breach data rules directly, not only by following instructions incorrectly, but also by acting of their own volition.

“For example, an agent given the task to improve the marketing plan might decide to go off and scrape publicly available personal data from the internet, or subscribe to a not publicly available database, pull in that data, and use it to build a marketing plan,” he said.

“Agents, just by bringing that level of autonomy, just create uncertainty in the system – much in the same way that hiring an untrained, unvetted employee might do the same.”

Salizzo also urged solicitors advising on data and AI issues to take some time to investigate and explore the tools themselves.

‘Meaningful’ assessments

Deirdre Kilroy urged organisations to carry out “meaningful” DPIAs (data-protection impact assessments), adding that sometimes this was done “almost as a matter of rote”.

Organisations should ask themselves whether they had done a proper DPIA exercise for AI tools, she stated.

“It’s what the regulator will look for when there’s a problem,” Kilroy added.

Earlier, Mullooly had told the event that she was seeing a drop-off in claims for non-material damages for data breaches under GDPR after recent court rulings – in particular a Supreme Court judgment that warned plaintiffs to expect only “very, very modest damages” in such cases.

DPC sees AI increase

The conference also heard from the DPC’s director of legal affairs Deirdre O’Donovan, who told Elaine Morrissey that the watchdog had seen a sharp increase in AI-related engagements in recent years – these accounted for one in four engagements last year, compared with one in 35 in 2021.

She added that the need for further increases in the DPC’s staff numbers was “clear”, as the scale and complexity of cases had also increased.

O’Donovan outlined the new responsibilities – including a market-surveillance role – that the DPC would have under the AI Act once implementing legislation had been enacted, adding that the distributed model of regulation proposed would lead to more co-operation with other sectoral regulators.

She told the conference that the regulator was seeing AI providers engaging with it before deploying tools and was encouraging such engagement.

Gazette Desk
Gazette.ie is the daily legal news site of the Law Society of Ireland

Copyright © 2026 Law Society Gazette.