The key workplace decisions of recruitment, selection, performance evaluation, promotion and termination are considered high-risk, and therefore subject to stricter rules, under the new EU AI Act, writes Síobhra Rush, employment law partner at Lewis Silkin.
The act, the world’s first comprehensive AI legal framework, has been formally adopted by the Council of the EU, with the key compliance obligations staggered over the next two years. The Department of Enterprise, Trade and Employment has sought submissions from interested parties regarding the implementation of the act.
The general expectation is that the act will become the international default, much like the GDPR, which became a model for many other laws on a global scale.
The potential for AI systems to ‘bake in’ discrimination and bias is well recognised.
Hiring decisions using AI could therefore result in outcomes that are open to legal challenge.
Detecting and addressing the risk of discriminatory outcomes is a multi-stakeholder issue.
Provider assurances will be a key part of the procurement process and deployers must ensure that input data is representative and relevant.
Many employers will go further, putting in place bias audits and performance testing to mitigate these risks.
Similarly, ensuring that AI-supported decisions can be adequately explained is critical to maintaining trust in AI systems and enabling individuals to effectively contest decisions based on AI profiling.
Novel AI tools are proliferating in areas such as recruitment, performance evaluation, and monitoring and surveillance.
The act categorises these common use cases as automatically high risk.
Lower-risk scenarios are those where the AI performs a narrow procedural task or improves the result of a previously completed human activity.
This breadth reflects the wide range of AI systems already in use as workplace tools, which is worth bearing in mind when assessing the potential reach of the act.
Each stage of the recruitment process can now be supported by AI: generative AI drafts job descriptions, algorithms determine ad targeting, and candidates may interact with a chatbot when submitting their applications.
Selection, screening and shortlisting supported by AI systems present legal and ethical risks. Assessments and even interviews may now have less human input.
The ability to collect and objectively analyse employee data means that AI is already widely used as a performance management tool.
Monitoring technology has the potential to provide a safer workplace (for example, tracking delivery drivers’ use of seatbelts and speed) but could also be overly intrusive and erode trust by monitoring keystrokes and work rate.
Employment AI use cases are very likely to fall into the more rigorous end of the act’s requirements, but the consequent obligations will hinge on whether the employer is a ‘provider’ or a ‘deployer’ of the AI system.
Most employers will be considered deployers, with the developer of the AI system being the provider.
Providers have extensive obligations:
The requirements for deployers are somewhat less onerous (and less costly) but will still require significant planning:
Deployers may also be required to comply with a data request for an explanation of the role of AI in a decision which has impacted an affected person’s health, safety or fundamental rights.
However, the lack of settled practice as to what such an explanation should look like may make compliance more difficult.
Employers should be aware that a deployer may be deemed a provider, with more onerous obligations, if it:
Lighter transparency obligations apply to use cases deemed limited risk; for example, users must be informed that they are interacting with an AI tool.
Certain uses are banned outright, such as emotion recognition in the workplace and biometric categorisation that infers race, political views, religious beliefs or sexual orientation from personal physical characteristics.
Deployers should: