Let me put it to you

20 May 2025 | Technology

When ChatGPT burst onto the scene, it was greeted like digital sliced bread. But, as the saying goes, ‘garbage in, garbage out’. So what do you need to know to use it effectively? David Cowan scrapes the mould off 

For the legal profession, new tech like ChatGPT and its ilk can seem to cut too close to the bone, with the usual crop of headlines announcing that law-bots and digi-judges are on their way. The end of lawyers!

In reality, tools such as ChatGPT, and now DeepSeek, are a long way from this. It is not lawyers that are being replaced, but rather certain tasks, such as drafting.

The pen, the typewriter, and the computer have all been increasingly powerful aids to legal drafting. ChatGPT is the latest tool that can help – if properly used.

But it is an aid, not a replacement, and where lawyers have overly relied on ChatGPT, they have come a cropper in court.

A cautionary tale

In an American personal-injury case, Roberto Mata sued the airline Avianca.

His lawyers submitted a ten-page brief, authored by Steven A Schwartz of the firm Levidow, Levidow & Oberman, which cited over half a dozen court decisions, including Martinez v Delta Airlines, Zicherman v Korean Airlines, and Varghese v China Southern Airlines.

Unfortunately for Mr Schwartz, a practitioner of 30 years in New York, these cases were ‘hallucinated’ by ChatGPT. Schwartz admitted in an affidavit to the court that he had used “a source that has revealed itself to be unreliable”.

In a two-hour grilling in court, he explained to the judge that he had never used ChatGPT before, so “therefore was unaware of the possibility that its content could be false”.

When Schwartz prompted ChatGPT further, the program doubled down and confirmed that the cases were real.

The judge read aloud a few lines of the draft and asked Schwartz: “Can we agree that’s legal gibberish?”

This now-infamous example illustrates two things: first, the fallacy that this technology will soon replace lawyers; second, the need for practitioners to develop legal-prompting skills for use with AI and other technology.

Lawyers have always been trained to ask questions, to be sure – but, in the digital space, how this is done requires a specific approach that goes by the name of ‘legal prompting’.

Using good legal prompting techniques allows lawyers to use ChatGPT as a tool, just like a laptop or smartphone is a tool.

Practitioners need to migrate towards understanding such tools as part of what I call the ‘augmented lawyer’ approach. This combines human legal ability with the power and reach of big-data technology.

How do these tools work? ChatGPT is a large language model (LLM), meaning that it generates human-like language responses to questions, or prompts, provided by humans.
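For readers who want to peek under the bonnet, using an LLM programmatically comes down to sending a prompt and receiving generated text back. Below is a minimal sketch using OpenAI’s official Python client; the model name and the wording of the prompt are illustrative only, not a recommendation.

    # Minimal sketch using OpenAI's Python client (openai>=1.0).
    # Assumes an OPENAI_API_KEY is set in the environment; the model
    # name and the prompt wording are illustrative only.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # The 'system' message frames the model's role.
            {"role": "system", "content": "You are a legal research assistant."},
            # The 'user' message carries the prompt itself.
            {"role": "user", "content": "Can the gardai access the mobile-phone "
                                        "location data of a suspect?"},
        ],
    )

    print(response.choices[0].message.content)

Everything that comes back is generated text: there is no guarantee that any authority the model names actually exists, which is why verification remains the lawyer’s job.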

LLMs offer a new form of technical support for different types of intellectual work undertaken by lawyers.

By skilfully using the right commands, or prompts, lawyers can save time on numerous legal tasks – but doing so still requires much effort from the human lawyer.

Short cuts, as Mr Schwartz found out to his cost, remain a precarious route for legal practitioners to take.

These are all tools that are pre-trained on a large amount of data. The data used includes text, images, videos, and speech. The more rules and data provided to the LLM, the more accurate and efficient the outcomes.

Indeed, where there is big data and clear rules, the use of LLMs can make more sense, as the issue is essentially one of volume handling.

However, they demand a lot of data, energy, and resources, and come at considerable cost. The critical aspect is the pre-training phase, which brings us back to a maxim of computing that should never be forgotten: ‘garbage in, garbage out’ (GIGO).

LLMs use guesswork and, while this mimics many human thought processes, they can fall down dramatically compared with how humans think and value information.
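What does that guesswork look like? The toy calculation below, with entirely invented numbers, shows the mechanics: the model scores candidate next words and favours the most probable one, with no sense of whether the winner is true.

    # Toy illustration with invented numbers: an LLM scores every
    # candidate next word and favours the most probable one - it has
    # no notion of whether the winning word is actually true.
    import math

    # Hypothetical scores (logits) for the word following
    # "The leading authority on this point is ..."
    logits = {"Varghese": 2.1, "Zicherman": 1.8, "[no such case]": 0.4}

    # Softmax turns raw scores into probabilities.
    total = sum(math.exp(v) for v in logits.values())
    probs = {word: math.exp(v) / total for word, v in logits.items()}

    for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        print(f"{word}: {p:.0%}")

    # A plausible-sounding citation can easily outscore an honest
    # "no such case" - which is exactly how hallucinations arise.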

One of the unresolved issues of how LLMs work is the extent to which they infringe intellectual-property rights in the course of ‘scraping’ as much content as they can from the internet as the pool of data for responding to a prompt. There is a case to be made that this is IP theft on a grand scale.

Hence, for instance, if a judicial assistant uses ChatGPT to draft material for a judge, then there is an element of IP theft involved.

If a solicitor uploads a question relating to a client, this may become part of the recycled content that the LLM uses to ‘learn’ from for prompts given by other users, raising client-privacy concerns.

Limitations

Having such a set of tools makes legal prompting, or legal-prompt engineering, an important emerging skill. Approaching the use of LLMs as an augmented skill helps to tackle some of the limitations.

The risk of using these tools is that they can:

  • Reflect bias: this can result from bias in the training data or in the algorithm used. Flaws in the data may be obvious to a human user, but the system takes the data at face value and may accept them.
  • Make false statements: this can occur due to errors and bias in the instructions or training data, as we still have the dynamic of GIGO. LLMs deal with probabilities. The results can be random and lead to the system ‘guessing’ what the answer may be to the enquiry.
  • Hallucinate: this occurs when the response generated by the model is made up in order to provide an answer, having chosen what appears to be the most probable response. It may make up a case citation, because it ‘knows’ there should be a case but cannot find one. The model then produces this without necessarily explaining this to the user.
  • Ignore basic logic: the text generated by the model may appear very sensible and logically laid out, but it does not replicate human knowledge. The answer may not be the logical one; instead, it is what the model considers probable.

It is essential to have these limitations in mind as we conduct legal work. This does not mean that such tools fail to offer great value. It means seeing LLMs as tools, not replacements.

The ‘WHICH’ approach (see panel below) helps to focus your enquiry and create a holistic output.

Legal-prompting skills

When we reason as humans, we think of questions to help us reach a conclusion. What an LLM needs is a prompt. We may formulate the enquiry as a natural-language question, but the LLM discerns the prompts within the question, which are the keywords and the blocks of text.

The more focused and distinct the keywords, the more likely the LLM is to return a useful response. Further precise prompts can then help to develop the initial response.

Legal training already involves this kind of questioning – think of cross-examination – and working with an LLM is akin to undertaking a digital cross-examination to get the quality of response required.

However, if this is done lazily or hurriedly, the LLM is more likely to make errors, because it will miss data points and have less definition on which to base its probability reasoning.
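In programming terms, that digital cross-examination is simply a multi-turn conversation: each follow-up prompt is appended to the running message history, so the model answers in the light of the whole exchange. A sketch, again using OpenAI’s Python client with an illustrative model name:

    # Sketch of iterative refinement ("digital cross-examination"):
    # each follow-up is appended to the message history, so the model
    # sees the whole exchange so far. The model name is illustrative.
    from openai import OpenAI

    client = OpenAI()
    messages = [
        {"role": "user", "content": "Can the gardai access the mobile-phone "
                                    "location data of a suspect?"},
    ]

    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = first.choices[0].message.content

    # Keep the model's answer in the history, then press it for sources.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": "Do you have a citation for "
                                                "that authority? If not, say so."})

    second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(second.choices[0].message.content)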

Building on the WHICH approach, there are seven steps you can take to refine your questions and develop your legal-prompting skills (see panel below). They help you to determine how to frame your requests in order to get successful outcomes.

This comes down to clarity in what the user wants, and effective communication in translating wants into actions.

Gremlins

A well-honed enquiry is the antidote to GIGO. Put in a good enquiry, and you should have a good result to work with. However, it will – and indeed should – always be lacking in something.

This is because you, as the lawyer, should be adding more to the output to make it value-added for the client, even where you have done a lot of work to refine the output.

Of course, the less complex the enquiry, the more likely the input is to produce a satisfactory output that needs little additional effort.

That said, all output still needs to be checked over – we don’t want any of those expensive little hallucination gremlins creeping into the final work to be presented to clients or in court!

Dr David Cowan is assistant professor at Maynooth University and the author of Law and Technology (Bloomsbury Professional Ireland, forthcoming in June 2025). 

SEVEN-STEP PROGRAMME

1) Definition: tell the LLM what your role is in asking the question. Are you the prosecutor, the defence counsel, or the judge? Be precise about the area of your enquiry.

For instance, you might ask about the retention of mobile-phone data in respect of a suspect. The question: “Can the gardaí access the mobile-phone location data of a suspect?” will give a more precise answer than: “What are the privacy rights of a suspect to have their data protected?”

The LLM is picking up the keywords of ‘mobile’, ‘phone’, ‘location’, ‘data’, and ‘suspect’ in the first enquiry, and the keywords ‘privacy’, ‘rights’, ‘data’, and ‘protected’ in the second.

2) Audience: tell the LLM to whom your research is directed. Are you drafting a court submission, writing an email advising a client, or presenting ideas for an internal team on a client matter? The audience determines the relevant facts, language, and ideas required by the LLM to narrow the enquiry.

3) Question: the prompt needs to be a precise component of an overall questioning strategy. Do you want a general idea, research pointers, specific cases and citations, strategic options, or a template?

4) Style: do you need the LLM output to provide the response in a particular style; for instance, a formal document with a recommended layout?

5) Context: specify the context for your enquiry by providing locations, timings, people involved, and other details to help make your prompt more specific. These details act as keywords, too. They can be highlighted to the LLM by placing the contextual material within three hashtags or dashes (see the sketch after this panel).

For instance, you could refer to: “The suspect was visiting his family in Dublin ###a busy time because there had been some riots around the time### he was always angry with his family and strangers alike”.

6) Clarity: you can review the response and ask further questions to refine the output or undertake some initial verification. For instance, you could ask: “Do you have a citation for that case?” or even tell the LLM: “You are incorrect”, and ask a question that invites the LLM to modify the answer.

7) Verify: having been through the process, it is essential to verify the response. Refinement (the previous step) helps, but even with that input you should verify the responses independently. The mode here should be closer to ‘distrust, and verify’ than to ‘trust, but verify’.

Source: Cowan, Law and Technology (Bloomsbury Professional Ireland, 2025).
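Pulling steps one to five together, a structured prompt might be assembled along the following lines. This is a sketch only: the wording, the field names, and the helper function are illustrations of the approach, not a prescribed format.

    # Sketch of assembling steps 1-5 into a single structured prompt.
    # Everything here - wording, field names, delimiters - is illustrative.

    def build_prompt(role, audience, question, style, context):
        # Step 5: set contextual detail apart with ### delimiters so the
        # model can distinguish background facts from the question itself.
        return (
            f"You are acting as {role}. "        # 1) Definition
            f"Your output is for {audience}. "   # 2) Audience
            f"{question} "                       # 3) Question
            f"Respond as {style}. "              # 4) Style
            f"Context: ###{context}###"          # 5) Context
        )

    prompt = build_prompt(
        role="defence counsel in an Irish criminal matter",
        audience="an internal team briefing, not a client",
        question=("Can the gardai access the mobile-phone location data "
                  "of a suspect?"),
        style="a short memo with numbered points and citations to check",
        context=("The suspect was visiting his family in Dublin; there had "
                 "been riots around that time."),
    )
    print(prompt)

    # Steps 6 and 7 - clarity and verification - remain follow-up work
    # for the human lawyer, not something the prompt can automate.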

