The application of AI, as a part of any insurer’s innovative IT infrastructure, is not new. After more than a decade of intelligent systems deployment, the 2018 report by the House of Lords Select Committee on Artificial Intelligence — then chaired by Lord Clement-Jones — titled AI in the UK: Ready, Willing and Able? discussed many regulatory options concerning AI’s increasing commercial use, competitive value and possible risks.

It is in that context that the Bermuda Monetary Authority’s recent report on the insurance sector’s use of “artificial intelligence” and “machine-learning” technology in Bermuda is particularly informative.

That report, based on the BMA’s late-2021 survey of Bermuda insurers, provides many valuable insights, both into the sector’s use of AI and into the BMA’s indications as to the future of AI regulation in Bermuda.

As for the current and future use of AI by Bermuda’s insurers, 38 per cent of all respondents reported that they currently use some form of AI in their operations. A further 23 per cent reported that they plan to adopt AI solutions within the next five years, indicating that within five years a majority of Bermuda insurers will use AI.

The AI use numbers are even more impressive among the larger insurance enterprises, defined in the report as “insurance groups”.

Sixty-eight per cent of those respondents, all of whom have international affiliates, reported that they use AI.

Of particular interest to boards of directors, insurance executives and their legal advisers are four categories of survey response that the BMA reviews in the report.

First, six of the eight top concerns insurers expressed about managing the risk and operational quality of AI solutions were AI’s explainability, auditability, modelling challenges, security, consistency of output and execution challenges.

However, insurers should remember that where AI is deployed, all of those legitimate operational concerns are routinely addressed in the course of commercially contracting for those projects.

To manage those risks successfully, well-drafted AI development and service contracts stipulate the operational specifications, security features, functional service levels and acceptance-testing requirements that must be satisfied before any intelligent system is permitted to go live.

Second, most respondents indicated that their AI systems are either provided by third-party service providers or procured as third-party, off-the-shelf systems.

In that regard, the usual commercial, risk-allocation and other contract terms that apply to procuring IT goods and services from third parties, including outsourcing transactions, apply equally to contracts for intelligent systems and services, including AI.

Third, a few revealing responses arose concerning AI corporate governance when the BMA asked “what governance and control measures are currently in place”.

Although 26 out of 30 respondents indicated “senior management” has accountability at the business unit level, only ten respondents (33 per cent) indicated that AI systems were governed at the board level.

Although the clear trend in governance best practice is to ensure that all material IT systems and services (including the related contracts) receive direct board oversight, I expect board governance of AI use to increase organically as AI systems become more material to insurers.

Fourth, when the BMA asked insurers what their concerns are when considering the adoption of AI systems, both “regulatory compliance” (seventh) and “legal liability” (eighth) were among the top nine answers.

However, most of the same challenges that insurers address at present to govern their existing IT infrastructure, including security, are applicable to AI systems. Where AI solutions are secured as third-party services, regulatory compliance and legal liability can be addressed in well-drafted AI service contracts, including all privacy, outsourcing and cyber security requirements.

Remember, the use of AI systems can also reduce legal liabilities. For example, many AI systems are being used for internal analytics, modelling, decision tree formation, outcome predictability and risk management, and are not used for business-to-business or consumer applications, which tend to carry the greatest third-party liability risk.

As well, where “big data” AI applications are used for complex modelling and advanced analytics, the aggregated data used is often anonymised, thus avoiding any privacy liability risk.

Where bespoke AI solutions provide distinct competitive advantages, enterprises often use those AI systems “within the castle walls” and possibly without web access, thus reducing operational cyber risk.

The BMA’s AI survey report also provides a generous indication of how the BMA may regulate the use of AI by Bermuda’s insurers in the future, including:

  • The governance of AI use should now be proportionally considered within the existing frameworks of governance, risk management and business conduct
  • The BMA will likely expand its Operational Cyber Security Code of Conduct to include specific guidelines for the use of AI
  • The BMA will likely strengthen its oversight of outsourced services where those third-party service providers use AI

I expect that the BMA’s very thoughtful approach to assessing the unique capabilities, risks and governance requirements associated with AI’s increasing use among Bermuda’s insurers will be well received.

When I spoke at an AI Business & Law conference with Lord Clement-Jones in 2019, he reiterated one of his Select Committee’s AI recommendations, which the BMA is living up to: “We believe that existing sector-specific regulators are best to consider the impact on their sectors of any (AI regulation) which may be needed.”

First published in The Royal Gazette, Legally Speaking, December 2022
