The Law Society of England and Wales has urged the British Government to ensure that its approach to regulating artificial intelligence (AI) systems does not diverge from the EU's nascent regulatory regime.
The call came in a response to a consultation on the British Government's white paper on AI regulation.
Divergence from EU and US principles-based regimes “adds complexity for law firms when determining which ethical guidelines apply and in which jurisdictions,” the submission states.
The solicitors’ body also highlights the “urgent need” for explicit regulations on liability across the lifecycle of an AI-based system.
The Law Society Gazette of England and Wales said that concerns about the regulation of AI technology had rocketed up the political agenda since the emergence of large language models such as ChatGPT.
Last month, British Prime Minister Rishi Sunak said that he wanted Britain to become a global centre for AI under “safe and secure” rules.
Meanwhile, the EU is in the process of drawing up an Artificial Intelligence Act designed to “ensure that AI developed and used in Europe is fully in line with EU rights and values”.
Call for AI officers
In its 48-page response to a white paper published in March by the Department for Science, Innovation and Technology, the Law Society calls for a “nuanced, balanced approach” to regulation, with a blend of adaptable regulation and firm legislation.
The organisation says that the issue of liability requires strong regulation.
Current routes for contestability and redress for AI-related “harms” are not adequate, mainly due to the lack of clear definitions in the current legal framework for terms such as “meaningful human intervention”, the society states.
It recommends that the Law Commission or the British Government review crimes and civil wrongs involving an element of subjective mental state or intention, to determine whether liability for such harm-creating activities should also extend to AI.
Entities above a certain size, or working in high-risk areas, should be required to appoint an AI officer, the submission states.
Other recommendations cover the need for organisations to be transparent in their use of AI, and for decisions made by such systems to be “interpretable”.