The European Commission is inviting feedback on draft guidelines on transparency obligations for AI providers.
From 2 August 2026, people in the EU will have to be informed when they are interacting with AI systems or exposed to certain AI-generated or manipulated content.
The commission published the draft guidelines on the obligations under the AI Act on Friday (8 May).
Under the act, AI providers will have to inform people when they are interacting with an AI system and add machine-readable marks to enable the detection of AI-generated or manipulated content.
Deployers will also have to inform people when they are exposed to deepfakes or AI-generated publications on matters of public interest, and when they are subject to emotion-recognition or biometric-categorisation systems.
The commission says that its draft guidelines take feedback from previous consultations into account and aim to clarify the scope of the obligations.
A voluntary code of practice drafted by independent experts, expected in June, will complement the guidelines.
Interested parties – including providers and developers of AI systems, businesses and public authorities, academia, research institutions, and citizens – are invited to share their views on the guidelines by 3 June.
Last week, the EU Council and European Parliament reached a provisional agreement on a European Commission proposal, known as the Digital Omnibus on AI, aimed at streamlining some of the AI Act's rules on the technology.