Business-law firm Mason Hayes & Curran (MHC) has said that the window for organisations to assess their readiness for the EU’s AI Act is narrowing, as a shift from policy to compliance ramps up.
Its latest report on the technology highlights that Irish businesses developing or using AI face approaching deadlines and obligations under the EU legislation.
The firm’s Artificial Intelligence Mid-Year Review 2025 points out that new legal obligations for providers of general-purpose AI models will apply from August.
The EU act’s penalty regime takes effect at the same time, with fines of up to €35 million or 7% of global annual turnover, whichever is higher.
MHC says that organisations already face strict new standards, including a ban on specific AI practices such as facial- and emotion-recognition and social-scoring tools.
Businesses must also provide mandatory AI literacy training to ensure that staff understand both the benefits and risks of using or deploying AI technologies.
Brian McElligott, MHC’s head of AI, said that the firm was advising clients daily on what the AI Act means in practice: where their obligations fall, which systems and models are in scope, and how to document compliance.
“Regulators expect to see evidence, and delaying now could leave organisations exposed once enforcement begins next year,” he stated.
The MHC review also examines guidance from the European Data Protection Board on managing privacy risks in large language models. It outlines how organisations should assess these systems under GDPR and ensure that personal data is handled lawfully.
The firm notes that the European Commission has put a proposal for an AI Liability Directive (AILD) “on the chopping block”.
The AILD was aimed at making it easier to bring claims for damage caused by AI systems, but critics argued that existing legal frameworks were sufficient.
“The decision to drop the AILD was largely political, signalling a broader EU move toward regulatory simplification and maintaining tech competitiveness,” the MHC review says.
The firm also examines ‘agentic AI’ – autonomous tools built on generative AI that can plan, act, and learn with minimal human input.
MHC says that these systems are expected to be capable of autonomously managing everyday tasks, such as booking restaurants or football-match tickets, as well as complex ones, such as personalised customer service, software development and healthcare interactions.
The firm says that the AI Act may classify certain agentic AI systems as high-risk, or even prohibited, depending on their use and potential impact.