A survey of business leaders across the world has found that they have concerns over the decisions and omissions made by artificial-intelligence (AI) systems – even though 60% of companies now use them.
The study, carried out by global law firm Dentons, showed that businesses saw many benefits in AI – including saving time by automating processes, generating data-driven business information for decision-making, and reducing human error in processing.
There were also significant areas of concern, however, with just over 80% of firms expressing worries about the protection of personal data – though Dentons pointed out that only 55% of businesses had policies for the protection of personal and non-personal data in place.
Uncertainty about liability
The law firm also found that less than 20% of businesses had a strategy or roadmap for AI. It added that this meant that AI systems were being implemented “without proper consideration of the risks, the relevant legislation, or the internal controls required to ensure it is well-governed”.
Other findings included:
- 80% of business leaders reported uncertainty about where liability lay for the decisions, as well as the omissions, made by AI systems;
- Almost 60% had concerns about the potential for discrimination arising from the actions of AI systems;
- Depending on the area of law, between 55% and 75% of firms were unaware of relevant AI legislation in their country;
- Just over 60% of businesses wanted regulators to provide protection mechanisms on the use of AI in relation to privacy. Around half wanted similar mechanisms in the areas of consumer protection, criminal liability, and intellectual property.
Giangiacomo Olivi of Dentons said that global business leaders were beginning to ask “serious questions” about where the responsibility for good governance, regulation and compliance sat.
“We urgently need to start a dialogue on the controls needed to protect businesses, customers, shareholders and communities,” he added.
Dentons is calling for a system of ‘algor-ethics’, so that checks and balances can be put in place. It argues that moral considerations need to become an integral part of the development of AI technologies.
In Ireland, the Government published an AI strategy document last year.
Dentons points out, however, that in the current absence of specific AI regulation, it falls to existing legal and regulatory frameworks to determine liability for the consequences of using such systems.
“From a practical point of view, many rules are largely untested when it comes to liability for AI systems,” it adds.
The survey was conducted online among more than 200 global business leaders in September 2021.