In addition to the intellectual property infringement risks I have described, the current use of AI is raising concerns about the veracity, reliability and completeness of its output.

Therefore, in addition to acceptance testing provisions, AI contracts should include covenants that address the quality of the AI’s output, usually expressed as service level specifications.

As well, many AI contracts now include “adult supervision” clauses that require ongoing human oversight, verification and quality assurance of the results that the AI solution promises to deliver.

The model AI contract published by the Digital Transformation Agency of the Australian Government, “Artificial Intelligence (AI) Model Clauses, v. 2.0”, recommends more than a dozen human oversight provisions for consideration.

A dimension of risk that AI and cybersecurity governance share is that both are subject to a fast-moving legal and regulatory landscape.

Since evolving AI laws and regulations may directly and materially affect how AI is developed and used, AI contracts should include change management provisions that allow the parties to revisit the contract’s terms and conditions in response to any such law reform, including how the contract may have to be amended to address those as-yet-unknown, but expected, legal developments.

As risk managers know, AI operations are not highly transparent.

Therefore, for reasons related to potential litigation, service performance monitoring and regulatory compliance, the UK’s Society for Computers and Law, in a white paper titled “Artificial Intelligence Contractual Clauses”, devotes considerable attention to recommending that all AI contracts require the AI solution to produce a transparent, reliable, complete and accurate record of its operations and activities.

Such AI operational record transparency is often referred to as “logging by design”, and AI contracts often stipulate the precise types of AI operations that must be tracked and recorded, including when the AI fails to operate in compliance with the governing contract.
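To make the “logging by design” concept concrete, the sketch below shows, in illustrative Python, what a contractually mandated operational record might capture for each AI operation. The field names and the `log_ai_operation` helper are hypothetical assumptions for illustration only; they are not drawn from the SCL white paper or the DTA model clauses, which leave the precise record contents to the parties.

```python
import time
import uuid

def log_ai_operation(log_store, operation, inputs, outputs, model_version):
    """Append a record of a single AI operation to an audit log.

    A minimal illustration of 'logging by design': each material AI
    operation is recorded with enough detail to support later audit,
    litigation discovery or regulatory review. Field names here are
    illustrative assumptions, not contract-mandated terms.
    """
    record = {
        "record_id": str(uuid.uuid4()),       # unique, citable identifier
        "timestamp": time.time(),             # when the operation occurred
        "operation": operation,               # e.g. "inference", "retraining"
        "model_version": model_version,       # which model produced the output
        "inputs_summary": str(inputs)[:200],  # truncated copy of the inputs
        "outputs_summary": str(outputs)[:200],
    }
    log_store.append(record)
    return record

# A contract schedule might enumerate which operations must be logged;
# here a single inference call is recorded.
audit_log = []
log_ai_operation(audit_log, "inference", {"prompt": "..."},
                 {"answer": "..."}, model_version="1.2.0")
```

In practice, a contract would also specify retention periods, tamper-evidence requirements and which party may access the log, none of which this sketch attempts to capture.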

Another potential risk for enterprises that use AI to gain important competitive advantages is that they may not own the results of what the AI has created, learnt or compiled.

Given the creative and self-improvement abilities of AI, unless the enterprise owns the AI that it is using, the contract needs to address who owns the AI-created works, including any advanced data analytics or software improvements that AI may create for itself.

For the most part, AI product or service vendors insist on owning those “sweat of the software brow” labour results.

However, where an AI solution or application has been created or customised to a customer’s bespoke operational specifications and contains important competitive commercial advantages, the ownership of those works may be negotiated otherwise.

Even where the customer does not contractually own the results of the AI solution’s endeavours, such as the advanced data analytics that the AI created, the customer should contractually stipulate that:

  • Such works constitute the commercially confidential information of the customer despite the vendor’s ownership of same
  • The customer shall have the sole and exclusive, perpetual, royalty-free, personal, non-transferable and non-sublicensable right to use same for the purposes of its business without any territorial or other restriction

Customers of service providers that rely on AI to perform their services should consider that most of the model contract provisions recommended by the SCL and the DTA are entirely applicable to the agreements governing those AI-enabled services.

The supply-chain use of AI presents almost as many risks to customers as the direct use of AI does, except that with direct use, the customer arguably has more control over the terms and conditions of the governing AI solution agreement.

Given the fast-moving regulation of AI applications worldwide, there is a growing risk that some of the features and functions of the AI that customers are using have been banned or otherwise prohibited in parts of the world.

Consequently, the DTA recommends that all AI contracts include a representation and warranty that no part or aspect of the AI solution involves any practices, AI products, applications, software code or web services that have been banned, prohibited or otherwise restricted from use in a way that would have a detrimental impact on the user.

A simple schedule to the relevant contract can disclose any exceptions that are acceptable to the parties.

One of the fastest-developing imperatives for companies to critically review their AI contracts arises where AI is being used to automate the processing of job applications.

Numerous human rights cases have alleged that some AI solutions have been programmed with inherent discriminatory biases that skew their operations for applicant evaluation, candidate scoring and ranking, and other qualitative judgments, in contravention of certain candidates’ human rights protections.

Hopefully, the prescriptions offered in this two-part series will help organisations to manage, if not avoid, such material risks during their adoption and reliance upon transformative technology like AI.

First Published in The Royal Gazette, Legally Speaking column, July 2025
