The application of AI as part of any insurer’s innovative IT infrastructure is not new. After more than a decade of intelligent systems deployment, the 2018 report of the House of Lords Select Committee on Artificial Intelligence, then chaired by Lord Clement-Jones, titled AI in the UK: Ready, Willing and Able?, discussed many regulatory options concerning AI’s increasing commercial use, competitive value and possible risks.

It is in that context that the Bermuda Monetary Authority’s recent report concerning the insurance sector’s use of “artificial intelligence” and “machine-learning” technology in Bermuda is particularly informative.

That report, based on the BMA’s late-2021 survey of Bermuda insurers, provides many valuable insights, both into the sector’s use of AI and into the BMA’s indications as to the future of AI regulation in Bermuda.

As for the current and future use of AI by Bermuda’s insurers, 38 per cent of all respondents reported that they currently use some form of AI in their operations. A further 23 per cent of insurers reported that they plan to adopt AI solutions within the next five years, indicating that within five years a majority of Bermuda insurers (a combined 61 per cent) will use AI.

The AI use numbers are even more impressive among the larger insurance enterprises, defined in the report as “insurance groups”.

Sixty-eight per cent of those respondents, all of whom have international affiliates, reported that they use AI.

Of particular interest to boards of directors, insurance executives and their legal advisers are four categories of survey response that the BMA reviews in the report.

First, six of the eight top concerns insurers expressed about managing the risk and operational quality of AI solutions related to AI’s explainability, auditability, modelling challenges, security, consistency of output and execution challenges.

However, insurers must remember that where AI is deployed, all of those legitimate operational concerns are routinely addressed in the course of commercially contracting for those projects.

To manage those risks successfully, well-drafted AI development and service contracts expressly stipulate the operational specifications, security features, functional service levels and acceptance testing that must be satisfied before any intelligent system is permitted to go live.

Second, most respondents indicated that their AI systems are either provided by third-party service providers or procured as third-party, off-the-shelf systems.

In that regard, the usual commercial, risk-allocation and other contract terms that apply to the procurement of IT goods and services from third parties, including outsourcing transactions, are equally applicable to contracts for intelligent systems and services, including AI.

Third, a few revealing responses arose concerning AI corporate governance when the BMA asked “what governance and control measures are currently in place”.

Although 26 of the 30 respondents indicated that “senior management” has accountability at the business unit level, only ten respondents (33 per cent) indicated that AI systems were governed at the board level.

Although the clear trend in governance best practice is to ensure that all material IT systems and services (including related contracts) receive direct board oversight, I expect that board governance of AI use will increase organically as AI systems become more material to insurers.

Fourth, when the BMA asked insurers what their concerns were when considering the adoption of AI systems, both “regulatory compliance” (seventh) and “legal liability” (eighth) were among the top nine answers.

However, most of the same challenges that insurers address at present to govern their existing IT infrastructure, including security, are applicable to AI systems. Where AI solutions are secured as third-party services, regulatory compliance and legal liability can be addressed in well-drafted AI service contracts, including all privacy, outsourcing and cyber security requirements.

Remember, the use of AI systems can also reduce legal liabilities. For example, many AI systems are being used for internal analytics, modelling, decision tree formation, outcome predictability and risk management, and are not used for business-to-business or consumer applications, which tend to carry the greatest third-party liability risk.

As well, where “big data” AI applications are used for complex modelling and advanced analytics, the aggregated data used is often anonymised, thus avoiding any privacy liability risk.

Where bespoke AI solutions provide distinct competitive advantages, enterprises often use those AI systems “within the castle walls” and possibly without web access, thus reducing operational cyber risk.

The BMA’s AI survey report also provides a helpful indication of how the BMA may regulate the use of AI by Bermuda’s insurers in the future, including:

  • The governance of AI use should now be proportionally considered within the existing frameworks of governance, risk management and business conduct
  • The BMA will likely expand its Operational Cyber Security Code of Conduct to include specific guidelines for the use of AI
  • The BMA will likely strengthen its oversight of outsourced services where those third-party service providers use AI

I expect that the BMA’s very thoughtful approach to assessing the unique capabilities, risks and governance requirements associated with AI’s increasing use among Bermuda’s insurers will be well received.

When I spoke at an AI Business & Law conference with Lord Clement-Jones in 2019, he reiterated one of his Select Committee’s AI recommendations, which the BMA is living up to: “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any (AI regulation) which may be needed.”

First published in The Royal Gazette, Legally Speaking, December 2022
