Contracts to Manage AI Risk: Part Two (Bermuda)

Published: 24 Jul 2025
Type: Insight

In part one of this two-part series about artificial intelligence contracts, I discussed the ways that contracts can mitigate, if not avoid, many of the risks associated with the development and use of transformative technology like AI.

In addition to the intellectual property infringement risks I have described, AI’s current use is raising concerns about the veracity, reliability and completeness of AI’s output.

Therefore, in addition to acceptance testing provisions, AI contracts should include covenants that address the quality of the AI’s output, usually expressed as service level specifications.

As well, many AI contracts now include “adult supervision” clauses that require ongoing human oversight, verification and quality assurance of the AI solution and its output.

The model AI contract published by the Digital Transformation Agency (DTA) of the Australian Government, “Artificial Intelligence (AI) Model Clauses, v. 2.0”, recommends more than a dozen human oversight provisions for consideration.

A dimension of risk that AI and cybersecurity governance share is that both are subject to a fast-moving legal and regulatory landscape.

Since evolving AI laws and regulations may directly and materially affect how AI is developed and used, AI contracts should include change management provisions allowing the parties to revisit the contract’s terms and conditions in response to any such law reform, including how the contract may need to be amended to address those unknown, but expected, future legal developments.

As risk managers know, AI operations are not highly transparent.

Therefore, for reasons related to potential litigation, service performance monitoring and regulatory compliance, the UK’s Society for Computers and Law (SCL), in a white paper titled “Artificial Intelligence Contractual Clauses”, devotes considerable attention to recommending that all AI contracts require the AI solution to produce a transparent, reliable, complete and accurate record of the AI’s operations and activities.

Such AI operational record transparency is often referred to as “logging by design”, and AI contracts often stipulate the precise types of AI operations that must be tracked and recorded, including when the AI fails to operate in compliance with the governing contract.

Another potential risk for enterprises that use AI to gain important competitive advantages is that they may not own the results of what the AI has created, learnt or compiled.

Given the creative and self-improvement abilities of AI, unless the enterprise owns the AI that it is using, the contract needs to address who owns the AI-created works, including any advanced data analytics or software improvements that AI may create for itself.

For the most part, AI product or service vendors insist on owning those “sweat of the software brow” labour results.

However, where an AI solution or application has been created or customised to a customer’s bespoke operational specifications and contains important competitive commercial advantages, the ownership of those works may be negotiated otherwise.

Even where the customer does not contractually own the results of the AI solution’s endeavours — for example, the advanced data analytics that the AI created — the customer should contractually stipulate that:

  • Such works constitute the commercially confidential information of the customer despite the vendor’s ownership of same
  • The customer shall have the sole and exclusive, perpetual, royalty-free, personal, non-transferable and non-sublicensable right to use same for the purposes of its business without any territorial or other restriction

Customers of service providers that rely on AI to perform their services should consider that most of the model contract provisions recommended by the SCL and the DTA are entirely applicable to the agreements governing those AI-enabled services.

The supply-chain use of AI presents almost as many risks to customers as the direct use of AI does, except that in the latter case, the customer arguably has more control over the terms and conditions of the governing AI solution agreement.

Given the fast-moving regulation of AI applications worldwide, there is a growing risk that some of the features and functions of the AI that customers are using have been banned or otherwise prohibited in parts of the world.

Consequently, the DTA recommends that all AI contracts include a representation and warranty that no part or aspect of the AI solution contains any operations constituting practices, AI products, applications, software code or web services that have been banned, prohibited or otherwise restricted from use in a manner that would have a detrimental impact on the user.

A simple schedule to the relevant contract can disclose any exceptions that are acceptable to the parties.

One of the fastest developing imperatives for companies to critically review their AI contracts arises where AI is being used for job application automation.

Numerous human rights cases have alleged that some AI solutions have been programmed with inherent discriminatory biases that skew their operations for applicant evaluation, decisions on candidate scoring and ranking, and other qualitative judgments, in contravention of certain candidates’ human rights protections.

Hopefully, the prescriptions offered in this two-part series will help organisations to manage, if not avoid, such material risks during their adoption and reliance upon transformative technology like AI.

First Published in The Royal Gazette, Legally Speaking column, July 2025
