

Lack of AI definition gives lawmakers difficult task

21 Aug 2024 / technology

The European Union's AI Act attempts to establish a comprehensive regulatory framework for artificial intelligence within its member states, write James Egleston and Leo Twiggs.

The act will harmonise AI regulation across the EU and attempts to ensure both the safety of users of AI systems and the protection of fundamental rights by pursuing a risk-based approach to classifying AI.

Nevertheless, the act also aims to promote innovation and to remain agile in how it defines AI.

Definition of ‘AI system’

The definition of an AI system contained in article 3(1) states that:

‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments

This is compatible with recital 12 of the AI Act, which states that the definition of AI systems in the act should be clearly defined and closely aligned with the work of international organisations to ensure legal certainty, while remaining flexible enough to accommodate the rapid technological development of AI demonstrated in recent years.

This definition, primarily inspired by the OECD's 2019 Recommendation of the Council on Artificial Intelligence, is designed to include a wide array of emerging technologies, thereby future-proofing the regulation and allowing the act to keep abreast of a rapidly developing field.

The AI Act, in article 3(63) and (66), also defines general-purpose AI (GPAI) models and systems as a separate category.

GPAI is defined as AI models and systems that are trained on large amounts of data and are capable of competently performing a wide variety of tasks, whether used directly or integrated into other systems.

The definition of GPAI is open-ended and would include powerful large language models such as OpenAI's GPT-4 and its successors, the capabilities of which continue to advance rapidly year on year.

Questions arise as to which types of AI system this definition captures, whether it casts too wide a regulatory net, and whether certain types of AI system may evade regulation in the future.

Issues with defining AI

Establishing a reliable definition of AI is an exceptionally difficult task for legislators, given that the term is highly ambiguous.

There is no generally accepted definition of AI.

Google's search autocomplete and basic targeted advertising both use some form of AI, but it would be reductionist to compare either of these to a sophisticated large language model such as GPT-4, even though all are technically 'AI systems'.

Regulations aim to establish legal certainty about what is permitted and what is forbidden.

In the AI Act, the EU attempts to balance this need for legal certainty against the need for flexibility, leaving leeway for unforeseen further developments in the technology.

This approach has led to some anomalies.

For example, article 2(3) excludes AI systems used 'exclusively for military purposes' from the ambit of the act, but AI systems that might feasibly be used for mixed military and law-enforcement purposes will be treated as 'high-risk systems' (the most extensively regulated tier of AI systems, set out in article 6 and annex 3 of the AI Act).

The act's risk-based tiers range from AI systems that are prohibited outright due to unacceptable levels of risk (such as AI that deploys subliminal techniques, AI 'social scoring' mechanisms, or AI systems that categorise people based on protected characteristics) to minimal-risk AI systems that are left largely unregulated (such as spam filters).

Future-proofing

Despite this, the European Commission is empowered (under article 7) to adopt delegated acts adding to the list of high-risk systems in annex 3, ensuring that the AI Act keeps pace with future technological developments.

For example, the European Commission might adopt delegated acts that further regulate the use of AI systems in autonomous vehicles, as these are not directly referred to in annex 3.

The AI Act is a foundational framework for continued regulation at EU level.

Accordingly, it has been drafted in a manner that allows it to be regularly reviewed and updated, and the definition of an AI system used in the act is broad enough to include a wide variety of future iterations of the technology.

Despite this, the act as written might not foresee future revolutionary changes in AI, such as the development of artificial general intelligence (AGI), an as-yet undeveloped evolution of AI that would near-perfectly emulate or surpass human intelligence across numerous cognitive measures.

OpenAI has made the development of AGI part of its charter, although the timeline for its development is hard to pin down.

Realm of science fiction

The development of AI of this level (or even super-intelligent AI systems – something that is currently within the realm of science fiction) would pose a substantial challenge to legislators at EU level.

Despite this, the AI Act is well placed to be changed, tweaked, or even replaced should the AI landscape change this dramatically in the future.

The AI Act attempts to pre-empt these developments by:

  • Remaining broad enough to cover whatever technologies might be developed in the future, while
  • Directly addressing what currently exists, and
  • Maintaining a level of regulatory adaptability allowing it to be changed as needed.

Whether the commission will be able to actively maintain the AI Act’s continued relevance, given the rapid pace of these technologies, remains to be seen.

Leo Twiggs and James Egleston
Leo Twiggs is a US-qualified attorney and policy advisor at the Law Society, with a focus on the digital divide and access to justice. James Egleston LLB, MA (NUI) is a former Law Reform Commission and Department of Justice legal researcher. He now works as a policy development executive in the Law Society.