On 14 June, the European Parliament voted to adopt a negotiating position calling for a full ban on the use of artificial intelligence (AI) for biometric surveillance, emotion recognition, and predictive policing.
A statement said that the rules aim to promote the uptake of “human-centric and trustworthy” AI and to protect health, safety, fundamental rights and democracy from its harmful effects.
The negotiating position on the Artificial Intelligence Act was adopted with 499 votes in favour, 28 against and 93 abstentions, ahead of talks with EU member states on the final shape of the law.
Generative AI systems, such as ChatGPT, must disclose that content was AI-generated, and systems used to influence voters in elections are to be considered high-risk.
“The rules would ensure that AI developed and used in Europe is fully in line with EU rights and values, including human oversight, safety, privacy, transparency, non-discrimination and social and environmental wellbeing,” said a statement after the vote.
Prohibited AI practices
The rules establish obligations for providers and those deploying AI systems depending on the level of risk the AI can generate.
AI systems with an unacceptable level of risk to people’s safety will be prohibited, such as those used for social scoring (classifying people based on their social behaviour or personal characteristics).
MEPs expanded the list to include bans on intrusive and discriminatory uses of AI, such as:
- Real-time remote biometric identification systems in publicly accessible spaces,
- ‘Post’ remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes, and only after judicial authorisation,
- Biometric categorisation systems using sensitive characteristics (gender, race, ethnicity, citizenship status, religion, political orientation),
- Predictive policing systems (based on profiling, location or past criminal behaviour),
- Emotion-recognition systems in law enforcement, border management, the workplace, and educational institutions, and
- Untargeted scraping of facial images from the internet or CCTV footage to create facial-recognition databases (violating human rights and the right to privacy).
The classification of high-risk applications will now include AI systems that pose significant harm to people’s health, safety, fundamental rights or the environment.
AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social-media platforms with more than 45 million users, were added to the high-risk list.
Obligations for general-purpose AI
Providers will have to assess and mitigate possible risks to health, safety, fundamental rights, the environment, democracy and rule of law, and register their models in the EU database, before their release on the EU market.
Generative AI systems based on such models, such as ChatGPT, would have to comply with transparency requirements (disclosing that content was AI-generated, which would also help distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content.
Detailed summaries of the copyrighted data used for their training will also have to be made publicly available.
MEPs added exemptions for research activities and AI components provided under open-source licenses.
The new law promotes so-called regulatory sandboxes, or real-life environments, established by public authorities to test AI before it is deployed.
MEPs also want to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly affect their fundamental rights.
MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.
Co-rapporteur Brando Benifei (S&D, Italy) said: “While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose.
“We want AI’s positive potential for creativity and productivity to be harnessed, but we will also fight to protect our position and counter dangers to our democracies and freedoms during the negotiations with the Council.”
Negotiations with the Council of the European Union on the final form of the law will now begin.
Addleshaw Goddard partner Claire Edwards said that the topic had become politicised, with opposing views in the European Parliament.
“Some see strict regulation as something that will be harmful to the bloc, whereas others believe that, without it, there is a risk of technologies being used for nefarious reasons, including mass surveillance and social scoring,” she pointed out.
Britain is taking a more hands-off approach to the regulation of AI, she added, but the AI Act will be pertinent for those who want to license solutions into the EU.
Lawmakers in Britain, the US, and China will pay close attention to the draft EU act and assess its strengths and weaknesses, Edwards suggested.
“I think the EU recognises both the major potential and major risks associated with AI. The bloc has chosen to take a coordinated approach to avoid member states creating domestic legislation that is inconsistent with that of neighbouring countries.”
Heavier regulation in the EU than in Britain could stifle innovation, critics believe.
“Potentially, as AI develops further, the EU may not be given access to various platforms as a result of this legislation,” Edwards points out.
“This act will not become law until its final form has been negotiated and agreed by the European Parliament, Council and Commission.
“This could be a lengthy process, which isn't ideal, as technologies are developing extremely quickly. It may be difficult for the law to keep up with the technology.
“The EU is currently working with the US on a joint initiative to put in place an emergency global AI code of conduct, so this is likely to move more quickly,” she said.
Meanwhile, the CCBE will hold a webinar on AI for lawyers on 26 June. Registration is now open.