Developers and users of artificial-intelligence (AI) systems will have to identify a legal person to be held responsible for any problems, under proposals unveiled by the British government.
The Law Society Gazette of England and Wales quotes the British government as saying that the proposed “pro-innovation” regime will be operated by existing regulators, rather than by a dedicated central body like the one being created by the EU.
The proposals were published as the Data Protection and Digital Information Bill, which sets out an independent data-protection regime, was introduced to parliament.
The core principles of AI regulation will oblige developers and users to:
- Ensure that AI is used safely,
- Ensure that AI is technically secure and functions as designed,
- Make sure that AI is appropriately transparent and explainable,
- Consider fairness,
- Identify a legal person to be responsible for AI,
- Clarify routes to redress or contestability.
The Gazette says that regulators – including Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency – will be asked to interpret and implement the principles.
They will be encouraged to consider “lighter-touch” options that could include guidance and voluntary measures, or creating “sandboxes” – trial environments where businesses can check the safety and reliability of AI technology before bringing it to market.