

EU to ban unacceptable ‘subliminal’ AI systems
Margrethe Vestager (executive vice-president in charge of competition policy)

22 Apr 2021 / technology


The European Commission has launched its first legal framework on artificial intelligence (AI), which includes a proposal to ban uses of the technology that it deems unacceptable.

Commission vice-president Margrethe Vestager (pictured) said the ban was aimed at AI systems that use subliminal techniques to cause physical or psychological harm to someone. She cited the example of a toy that uses voice assistance to manipulate a child into doing something dangerous.

She added that any type of scoring system that would rank people based on their social behaviour would also be prohibited.

Trustworthy

The use of biometric identification, such as facial-recognition technology, in public places will also be prohibited, though exceptions will be strictly defined, limited and regulated, according to Vestager.

The commissioner said the overall aim was to develop “a secure, trustworthy and human-centred” framework for the use of the technology, one focused not on the AI technology itself, but on how it is used and to what end.

The framework divides AI use into four categories of risk, with the riskiest facing a complete ban.

The proposals focus mainly on what are categorised as ‘high-risk’ uses of AI, which include systems that sift through candidates’ CVs for education and job applications, or assess whether someone qualifies for a mortgage from a bank.

Software used in self-driving cars or medical devices is also included in this category.

National authorities responsible

Under the framework, these 'high-risk' systems will be subject to a new set of five obligations:

1) AI providers must feed their systems with high-quality data to ensure the results are not biased or discriminatory,

2) Providers must give detailed documentation about how their AI systems work, for authorities to assess their compliance,

3) Providers must share substantial information with users to help them understand and properly use AI systems,

4) They also have to ensure an appropriate level of human oversight both in the design and implementation of AI, and

5) They must respect the highest standards of cyber-security and accuracy.

Under the framework, national authorities will be responsible for assessing whether AI systems meet their obligations, and it will be for member states to identify which national authority is best placed to supervise.

AI activities in the lower two categories of risk, such as filters which block spam from inboxes, will be allowed, either without restrictions beyond existing consumer protections, or subject to transparency obligations.

European internet industry association Eco welcomed the EU’s focus on regulating high-risk AI applications, adding that the framework had avoided “a blanket over-regulation” of the sector.

Gazette Desk
Gazette.ie is the daily legal news site of the Law Society of Ireland