

Knowing me, knowing you

10 Apr 2024 / Technology

The conclusion of negotiations between the EU Council and Parliament on the content of the Artificial Intelligence Act means that AI has hit the mainstream. Labhaoise Ní Fhaoláin and Dr Andrew Hines ask what’s the name of the game?

It seems that artificial intelligence has been constantly in the headlines over the last 12 months. Between the arrival of ChatGPT and the conclusion of the negotiations between the EU Council and Parliament on the content of the Artificial Intelligence Act, AI has hit the mainstream.

Similar to the GDPR, the AI Act's effect will be felt by every industry and have worldwide impact, so solicitors of every speciality and firm size will need to become familiar with the concepts in the agreed text.

Work on the AI Act began over three years ago, with the European Commission proposing it in April 2021. It is the culmination of years of drafting, negotiation, and discussion between subject-matter experts, civil society, industry, and regulators.

The goal of the act is to promote the uptake of human-centric and trustworthy AI while ensuring that people’s health, safety and fundamental rights are protected against harmful effects of AI systems in the European Union.

These fundamental rights are found in the Charter of Fundamental Rights of the European Union, and the act's protective aims extend to democracy, the rule of law, and environmental protection and sustainability. The act also seeks to support innovation and to improve the functioning of the internal market through competitiveness.

Both the practicality of implementing the act and its impact on competitiveness have been debated.

Take a chance on me

Across academia and industry – and now in public conversation – exactly what constitutes artificial intelligence has proved difficult to pin down.

Despite the drafters' efforts to constrain its scope, the act's definition has been criticised as too broad, with the potential to apply to non-AI technology.

Furthermore, where AI begins and ends within broader technology systems is a challenge the act tries to address. The commission has confirmed that guidelines will be issued to provide more information on how the definition should be applied in practice.

Under the act, an AI system is “a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts”.

To deliver regulations for AI and AI systems with a trustworthy and human-rights-first approach, the act:

  • Sets out rules for placing AI systems on the market or putting them into use in the union,
  • Prohibits certain AI practices,
  • Provides specific requirements for high-risk AI and sets out obligations for operators of such systems,
  • Provides transparency rules for certain AI systems,
  • Provides rules for general-purpose AI models,
  • Sets out rules for market monitoring, market-surveillance governance and enforcement (at national and EU level),
  • Provides measures to support innovation, with a particular focus on SMEs, including start-ups.

A risk-based approach is adopted in the AI Act, where ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm. A higher risk of impact on health, safety, and rights entails more onerous compliance requirements.

There are four risk categories defined within the act:

1) Prohibited – the risk of detrimental impact by some use-cases is so great that they are prohibited entirely. Examples of prohibited practices include using biometric categorisation to infer sensitive characteristics; using social behaviour or characteristics to create a social score that could result in unfavourable outcomes for a person or groups; untargeted scraping of facial images from the internet or CCTV footage to create or expand facial-recognition databases; and the use of emotion recognition in the workplace or education institutions. While real-time remote biometric identification systems are prohibited, there are narrow exceptions for law enforcement in public spaces, subject to prior authorisation.

2) High risk – Annex III sets out areas that are considered to be high-risk applications. Examples include critical infrastructure (AI systems used as safety components in critical digital infrastructure); the use of AI systems to determine access or admission to educational and vocational training; the use of AI systems in recruitment or selection to analyse candidates, and in worker management; the use of AI systems to evaluate the reliability of evidence in the investigation or prosecution of criminal offences; the use of AI systems to assess applications for migration and asylum; and, in the administration of justice and democratic processes, use by a judicial authority in researching and interpreting facts and in applying the law to a set of facts (this also applies to alternative dispute resolution).

However, even if a use appears in Annex III, it will not be considered high risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, and the act sets out the criteria that can be used to assess whether that is the case. For example, if the AI system only performs a narrow procedural task, it may not be deemed high risk.

There is a further large tranche of cases that are automatically deemed to be high risk: where an AI system is intended to be used as a safety component in a product that is already governed by EU regulations (for example, machinery, toys, or aircraft), or is itself such a product. If a use is deemed high risk, it must undergo a conformity assessment (relating to data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness) before the product is placed on the market, along with other obligations, such as registration.

3) Limited risk – under this heading fall technologies such as synthetic audio (for example, deep-fakes) and AI-powered direct interactions (for example, chatbots). There are transparency requirements for this level of risk: the user must be informed that an AI system is behind the application.

4) Minimal risk – all other applications not falling into the other three categories will be deemed to have minimal risk.
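Viewed as a data structure, the four tiers amount to a simple ordered taxonomy. The short Python sketch below is purely illustrative – the identifiers and the example mappings are our own shorthand, not terms defined by the act – but it shows how a firm's compliance register might tag systems by tier.

    from enum import Enum

    class RiskTier(Enum):
        """Illustrative shorthand for the act's four risk tiers."""
        PROHIBITED = 1  # banned outright (e.g. social scoring)
        HIGH = 2        # Annex III areas; conformity assessment and registration
        LIMITED = 3     # transparency duties (e.g. chatbots, deep-fakes)
        MINIMAL = 4     # everything else; no specific obligations

    # A hypothetical internal register tagging systems by tier:
    register = {
        "cv-screening-tool": RiskTier.HIGH,         # recruitment is an Annex III area
        "client-facing-chatbot": RiskTier.LIMITED,  # must disclose that AI is used
        "spell-checker": RiskTier.MINIMAL,
    }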

Don’t shut me down

There has been concentrated negotiation and lobbying around the high-risk categorisation, which is likely to remain under constant review.

Further categorisation complicates the act, with a separate ‘general purpose’ AI category added late in the drafting stages. This deals with technologies such as ‘large language models’ (LLMs), used, for example, in ChatGPT.

This category provides for further obligations, independent of the application risk classification.

Certain domains are not subject to the act, including national security, military, and defence. Also excluded are systems and models specifically developed and put into service for the sole purpose of scientific research and development.

There is also an exemption for research, testing, and development of AI systems and models prior to being placed on the market or put into service (though testing in the real world is not covered by this exemption). Purely personal, non-professional use also falls outside the scope of the act.

The AI Act applies to both public and private organisations located within or outside the EU that place an AI system on the EU market. It also extends to where an AI system’s use has an impact on people located in the EU.

Organisations developing an AI system (providers), along with organisations that acquire and implement AI systems (deployers), are subject to the act, as are importers and distributors.

Voulez-vous?

While the definition and boundaries of being a provider are clear, deployer status is less obvious. Similar to the GDPR – where you must establish whether you are a data controller or a data processor – under the AI Act you may be a deployer and not realise it.

Establishing whether you are a provider or a deployer is important, as they have different obligations. Indeed, a deployer may need to comply with the provider obligations in the event that the AI system is adapted or changed.

As a solicitor in practice who does not advise in this area, what do you need to know? Does a solicitor's practice need to concern itself with the AI Act at all?

Similar to the GDPR, the answer is likely ‘yes’, where law firms become deployers of AI systems. Solicitors should bear in mind that they may not even be aware that AI has been integrated into the systems they use.

When assessing your firm’s obligations, the first question to ask is whether the use-case is prohibited. If so, then that is the end of the matter, as the AI system cannot be used.

Next, does your AI system fall within the high-risk category? If yes, you must pause and ask whether what is being done also poses a significant risk of harm to health, safety, or fundamental rights. The area of employment is specifically referred to in Annex III (high risk), along with particular use-cases.

For example, the use of AI in employment-related systems or tools, whether for filtering CVs at recruitment or analysing billable-hour trends for promotion, could fall within the scope of the act.
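Read as a decision procedure, the assessment steps above might be sketched as follows (in Python). This is a deliberate simplification under our own assumptions: each boolean input stands in for a legal judgment that in reality requires careful analysis, and the names are invented for illustration.

    def triage(prohibited_use: bool,
               annex_iii_area: bool,
               significant_risk_of_harm: bool,
               interacts_with_people: bool) -> str:
        """Illustrative ordering of the questions a deployer might ask."""
        if prohibited_use:
            return "prohibited: the AI system cannot be used"
        if annex_iii_area and significant_risk_of_harm:
            return "high risk: conformity assessment, registration, oversight"
        if interacts_with_people:
            return "limited risk: transparency duties (disclose that AI is used)"
        return "minimal risk: no specific obligations under the act"

    # A CV-filtering tool sits in an Annex III area (employment); whether it
    # is ultimately high risk turns on the significant-risk criteria.
    print(triage(False, True, True, True))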

If AI systems are being deployed, there is an obligation to ensure that staff and others dealing with the operation and use of AI on their behalf have a sufficient level of AI literacy.

Of course, legal professionals will be aware that, while their use of AI in legal practice may comply with the AI Act, it may nonetheless breach the Solicitors' Guide to Professional Conduct or give rise to professional negligence.

I have a dream

The European AI Office was established in February 2024 within the European Commission, and it will oversee the enforcement and implementation of the AI Act in conjunction with the member states.

Member states must designate national competent authorities under the act and, in Ireland, it is likely that the National Standards Authority of Ireland (NSAI) and the Competition and Consumer Protection Commission (CCPC) will take roles under the act. However, other authorities may also be involved in areas such as conformity assessments.

There will also be a mechanism for complaints to be made. Non-compliance with the act can result in fines ranging from €7.5 million or 1.5% of turnover, to €35 million or 7% of turnover.
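For a sense of scale, each penalty tier pairs a flat cap with a percentage of worldwide annual turnover. A minimal sketch, assuming – on the GDPR model – that the higher of the two figures applies; the amounts are the endpoints quoted above.

    def fine_ceiling(flat_cap_eur: float, turnover_pct: float,
                     annual_turnover_eur: float) -> float:
        """Upper bound of a penalty tier: the greater of the flat cap and
        the percentage of turnover (the 'whichever is higher' rule is an
        assumption modelled on the GDPR)."""
        return max(flat_cap_eur, turnover_pct * annual_turnover_eur)

    # Top tier quoted above: EUR 35m or 7% of turnover. For a firm with
    # EUR 1bn in annual turnover, the 7% figure (EUR 70m) governs.
    print(fine_ceiling(35_000_000, 0.07, 1_000_000_000))  # 70000000.0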

There has been a lot of coverage in recent months, since the European Parliament agreed to the act in principle in December 2023.

Once the final text has been voted on by the Parliament and published in the Official Journal, the prohibitions will take effect six months after the act enters into force, the general-purpose AI provisions after 12 months, and the high-risk provisions after 24 months.
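By way of illustration only, those staggered application dates can be computed from the entry-into-force date. A sketch, assuming a placeholder entry-into-force date (the act had not yet been published when this was written):

    from datetime import date

    def add_months(d: date, months: int) -> date:
        """Shift a date forward by whole calendar months (day-of-month
        clamped to 28 to keep the sketch simple)."""
        years, month_index = divmod(d.month - 1 + months, 12)
        return date(d.year + years, month_index + 1, min(d.day, 28))

    entry_into_force = date(2024, 8, 1)  # placeholder: assumed, not confirmed
    print("Prohibitions apply from:      ", add_months(entry_into_force, 6))
    print("General-purpose AI rules from:", add_months(entry_into_force, 12))
    print("High-risk provisions from:    ", add_months(entry_into_force, 24))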

However, even after enactment, the operational details will evolve over the coming years.

The commission will issue guidelines that will provide further information and details, including on the high-risk categories. There will be further consultations with stakeholders before the guidelines are published, and we can expect a significant amount of lobbying within that process.

While the commission cannot add new areas to those referred to in Annex III (high risk), it can add or modify use-cases within those areas without the need for a parliamentary vote.

For example, it could add use-cases in the area of the administration of justice and democratic processes, or in other areas, where they are deemed to pose a risk of harm to health and safety, or an adverse impact on fundamental rights, that is equivalent to or greater than that posed by the existing use-cases.

Standards will play a central role in compliance, and a draft standardisation request was sent to the standards bodies even before the final text of the act had been agreed. All member states can feed into these standards through their national standards bodies (the NSAI in Ireland, for example).

It is not unreasonable to assume that the AI Act could be as consequential a regulatory development as the GDPR, and it will take a decade to operationalise the guidelines and clearly establish the scope of the high-risk and deployer categories.

In the meantime, it is advisable to be aware of the framework, obligations, stakeholders and enforcement – and the potential of being an accidental deployer of high-risk AI.


LOOK IT UP

LEGISLATION

  • Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (21 April 2021, Document 52021PC0206)
Labhaoise Ní Fhaoláin and Andrew Hines
Labhaoise Ní Fhaoláin is a member of the Law Society’s Technology Committee. She is completing a PhD in the governance of artificial intelligence, funded by Science Foundation Ireland at the School of Computer Science, University College Dublin. Dr Andrew Hines is an assistant professor in the School of Computer Science, University College Dublin. He is an investigator at the SFI Insight Centre for Data Analytics, a senior member of the IEEE, and a member of the RIA Engineering and Computer Sciences Committee.