Many practitioners may be using AI-enhanced features without realising they are engaging in data processing that could have GDPR implications. Louis Masterson mans the barriers
Irish lawyers are increasingly turning to artificial intelligence (AI) tools like ChatGPT, Claude, and specialist legal AI platforms (such as Harvey AI and Legora) to enhance efficiency, research case law, draft documents, and analyse contracts.
This adoption brings genuine benefits: faster document review, improved research capabilities, and the potential for significant cost savings for clients.
But the integration of AI extends beyond standalone platforms.
For example, Microsoft has embedded AI functionality directly into Word, Excel, and Outlook (through Copilot), while Adobe has introduced AI features in Acrobat and other applications.
Many practitioners may be using these AI-enhanced features without realising they are engaging in data processing that could have compliance implications under the General Data Protection Regulation (GDPR).
A simple prompt to ‘summarise this document’ in Microsoft Word or ‘edit this PDF’ in Adobe Acrobat may involve client data being processed by AI systems.
AI, IOU
The intersection of AI and legal practice raises fundamental questions about data protection that go far beyond simple confidentiality concerns.
When a solicitor or barrister inputs client information into an AI system, they are engaging in ‘processing’ that triggers specific, non-negotiable legal obligations under the GDPR.
The Data Protection Commission’s guidance on AI emphasises that organisations remain accountable for all processing activities.
DPC penalties for non-compliance can be severe but, for legal professionals, the reputational damage of a data breach involving client confidences could be even more costly than the fines.
The central question is: is using AI a breach?
Many lawyers ask whether uploading client personal data to an AI platform automatically breaches GDPR.
The answer is ‘no’ – provided you comply with all the relevant requirements first.
Processing client data through AI isn’t prohibited, but it will constitute a breach if you lack the necessary compliance framework.
This framework is not merely a box-ticking exercise: it requires a granular analysis of how the AI tool functions.
Does it learn from your data? Where is the server located? Who has access?
To use these tools lawfully, practitioners must navigate six key areas of the GDPR:
Lawful basis
Under article 6 of the GDPR, you cannot process personal data without a lawful basis.
When using AI for client work, which basis applies?
Practitioners typically rely on ‘contractual necessity’ or ‘legitimate interests’ (article 6(1)(b) and 6(1)(f) respectively).
If the AI is used strictly to deliver legal advice the client has paid for – for example, using an AI tool to review a discovery/disclosure folder – contractual necessity may be a strong argument.
Large language models (LLMs) work by analysing vast amounts of text to learn statistical patterns about which words and phrases typically follow others, and then using those patterns to predict and generate the most probable next words in response to the input they receive.
Crucially, not all AI tools use client inputs to train or improve their models.
Many enterprise-grade and legal-specific AI platforms explicitly guarantee that user data is not incorporated into model training, processing queries in isolated sessions without retention.
However, if the AI tool does use your client data to ‘train’ its model for the benefit of other users, the ‘necessity’ argument falls away.
You do not need to train the vendor’s AI to fulfil your client contract.
In such cases, you might be forced to rely on legitimate interests, which require a careful balancing test to ensure the client’s privacy rights do not override your efficiency goals.
If the tool retains client data for its own purposes, you may find you have no lawful basis at all, making the processing illegal ab initio.
The transparency trap
Transparency is a cornerstone of the GDPR.
Articles 13 and 14 require controllers to inform data subjects about how their data is processed. To ensure compliance, review your current privacy notices and engagement letters.
Do they mention AI? Most likely, they state that data is shared with ‘IT service providers’. In the context of generative AI, however, this generic description may no longer be sufficient.
If you are uploading a client’s sensitive family-law file into a cloud-based LLM, the client arguably has a right to know.
Practitioners must update their client-care documentation to explicitly state that AI tools are used to assist in legal-service delivery.
This disclosure should explain, in plain English, the nature of the processing and the safeguards in place.
Failing to disclose this usage could be deemed a breach of the transparency principle, rendering the processing unlawful, even if it is technically secure.
The vendor gap
Perhaps the most common compliance gap is the lack of a data-processing agreement (DPA).
Under article 28, whenever a controller (the law firm) engages a processor (the AI provider), there must be a written contract ensuring that the processor acts only on instructions and maintains security.
When you sign up for a standard ChatGPT account, or a similar ‘off-the-shelf’ service, you are likely accepting the vendor’s standard terms of service.
These standard terms may not satisfy article 28. They might allow the vendor to use your inputs to improve its services, essentially granting it rights over your client’s data.
For legal professionals, this is unacceptable.
You must seek ‘enterprise’ or legal-specific tiers, like those offered by Harvey AI or Legora, that offer robust article 28-compliant DPAs.
These agreements should explicitly state that input data is not used to train the model, that data is deleted after the session or a set period, and that sub-processors are strictly managed.
If you cannot secure an article 28-compliant agreement, you cannot lawfully use that tool for personal client data.
Security of processing
Article 32 mandates that controllers implement “appropriate technical and organisational measures” to secure personal data. In the context of AI, ‘appropriate’ is a high bar.
Technical measures include encryption and access controls, but organisational measures are equally critical.
Have you trained your staff on how to use AI? Do you have a policy forbidding the input of names, addresses, or financial details into public AI prompts?
Two significant risks with LLMs are ‘leakage’ – where the AI reveals your data to another user – and ‘hallucination’, where it generates plausible but false output.
While data leakage is rare in enterprise models, it is a risk that must be mitigated.
Anonymisation is the best defence.
Before pasting text into an AI tool, practitioners should sanitise the data, removing names, dates, locations, and amounts. If the data is truly anonymous, the GDPR does not apply.
However, true anonymisation is difficult to achieve – simply removing a name is often insufficient if the remaining context (for example, “a CEO of a large Dublin tech firm involved in a merger on ‘X’ date”) allows for identification.
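As a rough illustration of the sanitisation step – not a complete anonymisation solution – the process can be sketched as a substitution routine: known identifiers are swapped for neutral tokens before the text leaves the firm, and restored afterwards from a locally held map. The function names and token format below are illustrative assumptions, and, as the code comments note, this is pseudonymisation rather than true anonymisation: because the map allows re-identification, the data remains personal data under the GDPR.

```python
import re


def pseudonymise(text: str, identifiers: list[str]) -> tuple[str, dict[str, str]]:
    """Replace known identifiers with neutral tokens before sending text
    to an external AI tool; return the sanitised text and a reversal map.

    NB: this is pseudonymisation, not anonymisation. The reversal map
    permits re-identification, so the output remains personal data
    under the GDPR and the compliance obligations still apply.
    """
    mapping: dict[str, str] = {}
    for i, identifier in enumerate(identifiers, start=1):
        token = f"[PARTY_{i}]"
        mapping[token] = identifier
        text = re.sub(re.escape(identifier), token, text)
    return text, mapping


def restore(text: str, mapping: dict[str, str]) -> str:
    """Re-insert the original identifiers into the AI tool's output."""
    for token, identifier in mapping.items():
        text = text.replace(token, identifier)
    return text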
The ‘high-risk’ bar
Article 35 requires a data-protection impact assessment (DPIA) for processing that is “likely to result in a high risk” to rights and freedoms.
Using AI for legal profiling or reviewing sensitive data (article 9 data, such as medical records in personal-injury cases) almost certainly triggers this requirement.
A DPIA is a written process where you identify risks and mitigation strategies before you start using the tool. For many law firms, the DPIA process will reveal that the risks of using open/public AI tools for case files are too high to mitigate.
This reinforces the need to move toward ring-fenced, private, or legal-specific AI instances where the firm retains control.
International transfers
Finally, the location of the AI server matters.
Many leading AI models are hosted in the United States. Transferring client data to US servers constitutes an international transfer under Chapter V of the GDPR. Since the Schrems II judgment, transfers to the US require careful assessment.
While the new EU-US Data Privacy Framework provides some relief, it only applies if the US vendor is self-certified under that framework.
If your AI provider is not certified, or if they store data in a jurisdiction without an adequacy decision, you must rely on standard contractual clauses (SCCs) and conduct a transfer impact assessment (TIA).
If you cannot verify where the data sits, you should assume it is leaving the EEA.
For Irish practitioners, the safest route is to select vendors that guarantee EU data residency, so that client data never leaves the European Economic Area.
Innovation with integrity
The legal profession cannot afford to ignore AI, but neither can it afford to ignore the GDPR. The two must go hand in hand.
The path forward requires a shift in mindset. We must stop viewing AI tools as simple search engines and start viewing them as third-party service providers that process our most sensitive assets.
To remain compliant, practitioners should take immediate steps to audit their current position. This means identifying which AI tools are currently being used by staff, whether sanctioned or unsanctioned, and understanding the data flows involved.
Firms should then implement an ‘AI acceptable-use policy’ that clearly prohibits the uploading of personal data to public models, and sets out the circumstances in which AI tools may be used. Investment in enterprise-grade tools that offer article 28-compliant data-processing agreements and zero-retention policies is essential, as is ensuring transparency with clients by updating engagement letters to reflect the use of AI technologies in service delivery.
By adhering to these principles, the legal profession can harness the transformative power of AI without compromising the privacy rights that we are sworn to uphold.
The integration of AI into legal practice is not a question of if, but when and how.
Those who act now to build robust compliance frameworks will be best positioned to benefit from these technologies while maintaining the trust that is fundamental to the solicitor/client relationship.
Louis Masterson is a barrister practising in data-protection law, technology law, and regulatory compliance. He was a contributor to Benedict Ó Floinn SC’s Practice and Procedure in the Superior Courts, and regularly advises law firms on GDPR compliance.