I, robot

16 Jul 2019 / Innovation

If you visit a website called willrobotstakemyjob.com, you can discover the probability that automation will, at some point in the future, render your job obsolete. Since the Gazette is for lawyers, I can save you the bother – it is 4%.

This compares with a mere 0.8% for priests, 11% for journalists, 67% for bus drivers, and 94% for accountants and auditors, which perhaps goes a long way towards explaining why the ‘Big Four’ auditing and accountancy firms are making inroads into the legal space!

The calculation is based on a methodology developed by Oxford University researchers looking at the future of employment. While 4% appears reassuring, the website adds a rider that “our poll suggests a higher chance of automation: a 39% chance of automation within the next two decades”.

Beyond the fear

The area of automation that causes the most consternation is artificial intelligence (AI), but we need to look beyond the fear and hype. The real issue with AI is not whether your job as a solicitor will be replaced, but two questions that are currently very much in flux, and not easily answered.

First, what impact will AI have on how you do your job? Second, what are the legal and ethical problems arising that will affect the development of the law and regulation?

The problems of AI are not really technological, though there is much nuance to this point. The idea of rampaging robots or ‘Big Brother’ AI remains at the level of fantasy, but we are certainly on the threshold, and now is the time for lawyers to be thinking about how these technologies are, and will be, used by the profession, and what the law needs to do about them in society. The deeper issue is that AI, which includes autonomous machines and machine-learning technologies, raises fundamental problems for humanity.

2030 transformation

Richard Susskind, an author and guru on the subject of technology’s impact on the law, wrote his doctorate on AI and law at Oxford in the early to mid-’80s. He says: “I believe that most of the short-term predictions about the impact of AI on law are overstated. At the same time, I think that the long-term predictions understate its likely impact. The legal world will not change fundamentally in the next couple of years. But, by 2030, I expect many aspects of legal service and court service will have been transformed.”

By 2030, according to a report by PwC, AI could be contributing $15.7 trillion to the global economy. A Thomson Reuters report, Ready or Not: Artificial Intelligence and Corporate Legal Departments, looked ahead to 2025. It found that corporate counsel believe they are tech savvy, but acknowledge that their comfort level and confidence with technology have limitations, specifically around artificial intelligence. Fewer than 15% of survey respondents believed their legal departments were effectively using big data to deliver legal services.

Significant impact?

The intersection of big data, business, risk and delivering better legal services is where AI can make a significant impact. The Thomson Reuters report highlighted the potential of using AI to automate invoice review and complete contracts, with one attorney specifying: “Hopefully, AI will be able to take over record-keeping roles like entity and document management. I could see some significant AI document preparation as well.” This is all more mundane than the realms of AI fantasy.

Larger legal departments are the most receptive to adopting AI tools, with only 26% of respondents in departments with more than 11 attorneys stating that their departments were not interested in AI. However, 67% of respondents who work in legal departments with six to ten attorneys reported their departments were not interested in AI technologies, while 62% of respondents in legal departments with fewer than six attorneys indicated that their departments weren’t ready. It would not be a great leap of the imagination to suggest that this is where Irish client legal departments are situated on the issue. More notably, only 4% of respondents, overall, indicated their departments were seriously considering purchasing technology tools with AI.

Key benefits

The key benefits of AI are currently perceived as reducing costs and saving time, and these benefits are not seen as being tangible enough as yet. This is where we find the end of the ‘hype road’.

There are three barriers that need to be overcome before we reach the transformative effects of AI that Susskind predicts for 2030: cost, ethical concerns, and fear of – and resistance to – change. Cost can be dealt with rather quickly – it is arguably the most straightforward. There is a lot of funding in research and development just now, and this will result in new products and adoption of AI in the fullness of time. As with all technologies, costs will come down, and 2030 seems a realistic timeframe for this process.

Transformative force

The other two factors can be taken in tandem. The European Commission has already started looking at ethics through the High Level Expert Group on Artificial Intelligence. In December 2018, the group issued Draft Ethics Guidelines for Trustworthy AI, stating: “Artificial intelligence is one of the most transformative forces of our time, and is bound to alter the fabric of society.”

We can expect to see continued discussion in reports and on conference platforms. These discussions will be interdisciplinary, bringing lawyers together in dialogue with technologists, futurologists, business leaders, policymakers, civil servants, neurologists, psychologists, ethicists, philosophers, theologians and other specialists.

Automation v autonomy

The real nexus of the discussion is the distinction between automation and autonomy in relation to AI. To understand this critical point, we can reflect on how AI is used by lawyers, and see that AI’s disruption of the legal business will increasingly put a premium on the human lawyer. Let’s start with so-called ‘smart contracts’.

Automating tedious and repetitive tasks is a good thing, AI proponents believe. These are tasks undertaken by juniors, and many argue this work is indispensable to training new lawyers, but AI advocates believe it is not what young lawyers have in mind when they walk through the doors of a law firm. Aside from any training value, this work has traditionally been billable and profitable.

Such work is indeed being automated, but – like a lot of what is labelled ‘AI’ – it is not really AI at all. It is merely fast automation that can handle big data. Thus, ‘smart’ contracts are not as smart as the name implies: they are smart in the sense of handling the volume of work, but not in the sense of understanding the data.
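To make the distinction concrete, much of this ‘smart’ automation reduces to pattern matching at scale. The sketch below (a hypothetical example in Python, with invented clause names and patterns) pulls a couple of contract clauses out by regular expression – fast and useful on large volumes, but with no understanding of the text it scans.

```python
import re

# Hypothetical clause patterns: rule-based review is pattern
# matching, not comprehension of the contract's meaning.
CLAUSE_PATTERNS = {
    "governing_law": re.compile(
        r"governed by the laws? of ([A-Z][\w ]+)", re.IGNORECASE),
    "termination_notice": re.compile(
        r"(\d+)\s+days?'? (?:written )?notice", re.IGNORECASE),
}

def extract_clauses(document: str) -> dict:
    """Scan a contract for known clause patterns; return what matched."""
    findings = {}
    for name, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(document)
        if match:
            findings[name] = match.group(1)
    return findings

contract = (
    "This Agreement shall be governed by the laws of Ireland. "
    "Either party may terminate on 30 days' written notice."
)
print(extract_clauses(contract))
# {'governing_law': 'Ireland', 'termination_notice': '30'}
```

A system like this can process thousands of documents in seconds, which is exactly the kind of speed gain the automation claims rest on – and exactly why it falls short of ‘intelligence’: a clause phrased in an unanticipated way is simply missed.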

Contract Intelligence

A good example is JP Morgan, which deployed software called ‘Contract Intelligence’, or COIN. It dramatically reduced the time taken to review legal documents, to the extent that work which once took 360,000 human hours now takes seconds. That is smart, but it is simply automation. It is also a good example of clients doing work they previously paid law firms to undertake.

This raises the question of threat, but this is not a threat from AI – it is a threat from automation. Autonomy is really what AI is about. When we talk about the hype of AI, what we are really seeing is imagination driving people’s perception about what AI is and what it might do.

Cutting through the hype

Josh Hogan, partner at McCann FitzGerald, notes: “Cutting through the hype, there is a lot of talk, interest and potential, but the tangible issues are still being worked through.” One way that law firms are working through the issues is by incubating a range of AI applications, working with various disruptive technologies and partners.

This does not mean lawyers have to write their own code or become tech providers. Hogan explains that he had built an app himself, but doesn’t believe this is the way to go. He says that, while it’s a good app, and he is proud of the achievement, “our experience was that, although the client liked it, what they really want is the advice. That was an important lesson for me. Building apps is not what lawyers need to do. Clients want good tools, but what they really want from us is good advice and communication.”

He continues: “There are two questions to ask. First, how does technology improve the client experience? Second, what does it do for us as a firm – how can we improve?” Hogan argues that the lens through which this can be answered is collaboration: within the firm, with clients, and with other partners.

Augmented intelligence

AI will develop through collaboration, and we know we have a long way to go when participants in the debate are questioning the very term itself, with both words – artificial and intelligence – under challenge. What do we mean by ‘artificial’? Is AI really that intelligent?

Others advocate a change to the term, with candidates being ‘augmented intelligence’ and ‘extended intelligence’. However, Jacob Turner, a barrister and author of Robot Rules, says: “We are probably stuck with the term – it is in common use. There might be better ways of phrasing it, but that would be a bit of a fool’s errand.”

The autonomy question

It is the ‘intelligence’ part of the term that brings us back to the important distinction to be made between automatic and autonomous machines. Turner says in respect to AI: “Autonomous is the sense in which I use ‘AI’, rather than ‘automated’. This is the legally interesting area.”

To understand this distinction, it is useful to go back to the barriers to adoption. The first point of cost and efficiency savings will be achieved by ‘smart’ automation and, to a certain extent, will override resistance to change.


There are significant data protection issues, including concerns about surveillance, and the GDPR has been a major step forward in addressing them. The collection of data, which may or may not be used by big-data companies without our permission, is the kind of novel development that lawyers have historically proved capable of dealing with. Automation still leaves us with problems, but they fit more closely within existing legal paradigms.

However, the autonomy question is the bigger and longer-term focus of legal questions to be asked today. It also raises the issue of other aspects of change, which morph into questions of ethics.

Jacob explains: “There are two ethical issues involved. First, how AI takes these decisions, which encompasses moral dilemmas, how they are decided upon, what trade-offs are being made. If the algorithm makes a recommendation, that recommendation may shape actions going forward or encourage certain behaviours. We also have questions around bias and explainability. Second, we need to ask whether there are any decisions AI should never be able to take. For instance, interest groups like Big Brother Watch want facial recognition banned. Other areas include autonomous weapons, and life and death medical decisions.”

Ireland’s ‘AI moment’

These are long-term debates, and it is around the issue of autonomy that we will discover the way to 2030. This means we have an ‘AI moment’ in Ireland. There is a wealth of legal, technological and academic talent on the island, and there are a lot of technology companies – big and small – that can contribute to this discovery.

Ireland also has the benefit of size, where bringing everyone together to debate the issues can be done. This is on the agenda of a Dublin-based meeting on 1 August 2019, which is part of a virtual World Legal Summit 2019 (see https://worldlegalsummit.org/dublin-host-page).

The AI debate swings from the mundane to the fanciful, leading some to say it’s all hype, and others raising the spectre of today’s sci-fi being tomorrow’s must-have consumer product. It is a debate where the legal profession can offer reasoned intellectual and practical input, both for the sake of the profession and the island.

In 2017, Oxford Insights created the world’s first Government AI Readiness Index, produced with the support of the International Development Research Centre (IDRC). They asked how well placed national governments were to take advantage of the benefits of AI in their operations and delivery of public services.

Ireland’s place

In 2017, the index rated Ireland in 17th place, with a score of 6.697. In the expanded 2019 index of 194 countries and territories, Ireland is in 34th place, scoring 6.542, which is a move in the wrong direction. Countries above Ireland are investing. Last year, Emmanuel Macron announced a €1.5 billion investment in AI in France, and Angela Merkel announced a €3 billion investment in AI in Germany.

Taoiseach Leo Varadkar recently warned that robots and AI posed a risk to people’s jobs, saying that most jobs were “vulnerable to digitalisation or automatisation” and people would be replaced. However, he added: “The important thing now is that we think ahead.”

He then joked: “I’m not sure if we’ll have artificial intelligence to replace TDs and senators, or robot ministers – who knows? You get accused of being robotic sometimes.”

Perhaps now is the time to move beyond the joke, and for Ireland’s lawyers to seize their ‘AI moment’.

David Cowan
David Cowan is an author, journalist and trainer