The growing use of artificial intelligence in construction projects is beginning to reshape how contractual obligations, liability, and evidence are assessed in construction disputes. Across jurisdictions, AI raises common yet differently answered questions about responsibility, reliance, and legal certainty.
In this article, a joint publication by Law Exchange International, practitioners from four jurisdictions around the globe examine the legal implications of AI in the context of construction contracts.
—
English Law Perspective
by Colin Jones, Partner, Head of Construction and Engineering, HCR Law
Artificial intelligence (AI) is increasingly used across construction projects, including design optimisation, programme planning, site monitoring, safety management and predictive maintenance. While these technologies offer efficiencies, they also expose a growing tension between AI-driven construction practices and traditional contractual frameworks.
Most standard form construction contracts used in the UK, including JCT and NEC, were drafted on the assumption that project decisions are made by human professionals exercising judgment. Currently, they do not address the use of AI systems or allocate risk where project outcomes are influenced by algorithmic tools. As AI adoption increases, this omission creates material legal and commercial risk.
The Contractual Blind Spot
UK standard forms make no express provision for AI. They do not regulate its use, define the extent to which AI‑generated outputs may be relied upon, or allocate responsibility where AI contributes to error. Parties are therefore left to argue whether existing provisions can be adapted to accommodate AI‑driven decision‑making.
UK standard forms are not “AI‑ready”. Where AI plays a substantive role on a project, bespoke amendments are required to avoid uncertainty and unintended risk transfer.
Liability and Responsibility
A central challenge posed by AI is the allocation of responsibility when failures occur.
Where an AI tool produces a defective design, unsafe methodology or inaccurate programme, liability may be disputed between multiple parties.
Potentially responsible parties include the contractor or consultant deploying the AI, those providing the underlying data, the technology provider, and those responsible for reviewing or approving the output. Standard forms provide no guidance on liability allocation in these circumstances, increasing the likelihood of dispute.
Clear contractual provisions governing AI deployment, validation and decision‑making are therefore essential.
Errors, Hallucinations and Reliance
AI systems may generate outputs that appear credible but are materially incorrect. In a construction context, this may result in flawed designs, unrealistic programmes or defective methodologies. Where such errors cause delay or loss, disputes may arise as to whether reliance on AI outputs was reasonable and whether appropriate verification was undertaken.
Contracts will need to address the extent to which AI outputs may be relied upon, the level of human review required, and the allocation of risk where AI‑generated information proves inaccurate.
Intellectual Property
AI also raises complex intellectual property issues. Where AI generates designs or project materials, ownership may be uncertain. Employers, contractors and technology providers may each assert rights, particularly where proprietary systems or datasets are used.
There is also a risk that AI systems trained on historic data may reproduce confidential or protected material. Standard IP clauses, which assume human authorship, may not adequately address these risks.
Data Protection and Confidentiality
The use of AI can involve extensive data collection through sensors, wearables and drones, increasing data protection and confidentiality risk, particularly where personal data is processed.
Organisations must ensure lawful processing, carry out appropriate impact assessments and maintain transparency with the workforce. Contracts should clearly allocate responsibility for compliance and regulate data storage, sharing and security.
Insurance
Traditional professional indemnity and construction insurance policies may not respond to AI‑related losses, particularly where failures do not arise from conventional human negligence. AI‑driven errors may therefore expose parties to uninsured risk.
Insurance requirements and liability caps should be reviewed to ensure they remain appropriate in light of AI‑related exposure.
Human Oversight
Despite advances in automation, human oversight remains critical. Unsupervised reliance on AI can result in error, bias and unsafe outcomes, particularly in safety‑critical or design assurance contexts.
Contracts should reflect the importance of “human‑in‑the‑loop” oversight by requiring professional review, recording decision-making processes and clearly identifying where ultimate responsibility lies.
Governance and Future‑Proofing
Effective governance is essential. Organisations using AI should maintain clear policies, registers of AI systems, records of data flows and model versions, and documentation capable of evidential use.
Contracts can support this by requiring transparency around AI use, preserving access to AI‑generated information for operational and legal purposes, and accommodating technological change over the project lifecycle.
Conclusion: AI has the potential to transform construction delivery but exposes the limitations of existing contractual frameworks. Standard form contracts were not drafted with AI in mind and, without amendment, may leave parties exposed to uncertainty and unallocated risk.
As AI becomes embedded in construction projects, contractual approaches must evolve. Clear risk allocation, robust governance and meaningful human oversight will be essential to realising the benefits of AI without undermining legal certainty.
—
Danish Law Perspective
by Stine Kalsmose Jakobsen, Partner, Holst Law
Artificial intelligence (AI) is increasingly being integrated into Danish construction projects. Used for design optimisation, scheduling, site monitoring, and project planning, AI offers significant potential benefits, including improved efficiency, early risk identification, enhanced decision-making, and strengthened project control.
However, alongside these benefits, the use of AI introduces severe legal and contractual risks. These include uncertainty regarding liability allocation, potential gaps in insurance coverage, and reduced transparency in decision-making processes. As AI adoption accelerates, construction companies need to ensure that contractual frameworks and risk management practices evolve accordingly.
The Contractual Blind Spot
Traditionally, Danish standard construction contracts are drafted on the assumption that decisions are made by identifiable human professionals. They do not reflect algorithm-driven workflows and do not address the legal implications of AI-assisted decision-making.
Danish standard forms contain no express provisions governing the use of AI. They do not clarify whether AI-generated outputs may be relied upon, nor do they allocate responsibility where algorithmic tools cause or contribute to error.
For example, if an AI system generates an incorrect quantity estimate that results in additional project costs, the contractual framework may not clearly determine whether responsibility lies with the consultant, contractor, technology provider, or another party.
Where AI plays a substantive role in project delivery, bespoke contractual provisions are increasingly necessary. Without such provisions, risk allocation may become uncertain, and liability may unintentionally be imposed on parties that did not anticipate it.
Liability and Responsibility
One central legal challenge posed by AI is the allocation of responsibility when failures occur.
Liability may arise across multiple actors, e.g. the consultant deploying the AI tool, the contractor relying on its output, the party supplying project data, or the AI provider itself.
Importantly, under Danish law, the use of AI does not automatically shift liability to the software provider. AI is treated as a tool, analogous to other technical instruments or software. The professional deploying the tool remains responsible for their work and must exercise independent professional judgment.
Professional liability is assessed based on established standards of reasonableness and professional diligence. In the event of a dispute, tribunals will assess whether the professional acted prudently, understood the system’s limitations, and appropriately verified AI-generated outputs.
Accordingly, clear contractual provisions governing the use of AI, validation requirements, and decision-making authority are essential to ensure predictable risk allocation.
Errors, Hallucinations and Reliance
AI systems may produce outputs that appear reliable but nonetheless contain material inaccuracies, such as incorrect quantity estimates, unrealistic schedules, and impractical design solutions. Over-reliance on such outputs presents a significant legal risk.
Contractual frameworks should therefore define the required level of human oversight and allocate responsibility where AI-generated information proves inaccurate. While AI may enhance efficiency, it does not reduce the professional standard of care required under Danish construction law.
Intellectual Property
Existing intellectual property provisions in Danish law are premised on human authorship and do not fully address AI-generated material.
Where AI systems are used to generate design drawings, technical solutions, or other project material, uncertainty may arise regarding ownership, usage rights, and the ability to reuse such material in future projects.
Clear contractual provisions addressing ownership, licensing, and permitted use of AI-generated material are therefore necessary to avoid disputes and ensure legal certainty.
Data Protection and Regulatory Risk
AI systems in construction frequently rely on extensive data collection and processing. Where personal data is involved, compliance with the General Data Protection Regulation (GDPR) is mandatory.
This may arise, for example, where AI systems process camera footage from construction sites, worker-related data, or project communications. Danish regulatory practice demonstrates that authorities are willing to impose fines where adequate organisational and technical safeguards are lacking.
In addition, the EU Artificial Intelligence Act, expected to apply fully by 2027, will introduce risk-based regulatory obligations for AI systems across the European Union. Certain construction applications, particularly those affecting safety or fundamental rights, may fall within the scope of high-risk AI systems subject to stricter compliance requirements.
Insurance
Traditional construction insurance policies are structured around human negligence. AI-related failures may fall between existing insurance categories, including professional liability, product liability, and cyber risk.
Where contractual risk allocation is not aligned with insurance coverage, parties may face unintended insurance gaps. This creates significant financial exposure, as a party may be contractually liable for an AI-related error without having corresponding insurance protection.
Construction companies should therefore confirm with their insurers whether the use of AI tools is covered under existing policies.
Human Oversight and Governance
Unsupervised reliance on AI significantly increases legal risk. Human oversight, professional review, and transparent decision-making remain essential.
AI-generated designs, schedules, and cost estimates should be reviewed and approved by qualified professionals prior to implementation. Contracts should clearly identify who retains ultimate responsibility and require appropriate validation procedures.
At an organisational level, companies deploying AI should implement internal governance frameworks, maintain records of systems in use, and preserve audit trails capable of evidential use in arbitration or litigation.
Practical Recommendations
Construction companies using AI should:
- review and amend contracts to address AI use explicitly
- require human verification of AI-generated outputs
- confirm insurance coverage for AI-related risks
- implement internal AI governance policies
- clearly define responsibility for AI use within project teams
Conclusion: AI is transforming how construction projects are designed, managed, and delivered in Denmark. It enables faster design processes, improved planning, and enhanced project control. However, it also exposes the limitations of contractual frameworks designed for human decision-making.
Without clear contractual provisions, structured governance, aligned insurance coverage, and documented human oversight, AI may introduce significant legal and financial risks. AI should be viewed as a professional tool that supports rather than replaces human expertise. Legal responsibility remains with the parties involved in the project.
For Danish and international construction projects, the challenge is not whether to adopt AI, but how to integrate it responsibly while preserving legal certainty. Properly managed, AI can improve efficiency, safety, and predictability. Without proper contractual safeguards and risk management, however, it is likely to create uncertainty, liability exposure, and disputes.
—
Mexican Law Perspective
by Ana Galina Híjar Soltero, Partner, DíazIgareda
Under Mexican law, construction and development contracts are based on strong principles of contractor liability for results and professional duty of care. The use of artificial intelligence does not displace such responsibility: AI is treated as a tool rather than an autonomous decision-maker.
Accordingly, absent express contractual provisions, Mexican courts are likely to apply traditional regimes on breach, latent defects and professional negligence, generally placing risk on the party deploying the technology. Parties using AI in construction projects should therefore expressly regulate permitted uses, mandatory human oversight and allocation of liability for AI-driven errors.
From an intellectual property and data perspective, Mexican courts have recently held that works created exclusively by AI systems cannot be registered as copyrighted works, reaffirming that authorship is limited to natural persons.
This creates uncertainty regarding ownership and protection of AI-generated designs or technical deliverables, making contractual allocation essential. In addition, strict personal data protection obligations apply where AI processes workforce, biometric or site-monitoring data, and the party determining such processing bears direct statutory responsibility. Construction contracts should therefore also address ownership of AI-assisted outputs and allocation of data protection responsibilities.
—
Canadian Law Perspective
by Christophe Shammas, Partner, Loopstra Nixon LLP
The construction industry has long been defined by its physical outputs, but the next decade of litigation will likely be defined by its digital inputs.
As artificial intelligence moves from a back-office curiosity to a site-based reality, the Canadian legal landscape faces a series of questions that have yet to be tested in our courts. From the interpretation of standard contracts to the fundamental definition of professional negligence, the integration of AI is creating a new category of risk that is currently unaddressed by existing case law.
Procurement and the Duty of Fairness
The integration of AI into the procurement process introduces new risks to the established duty of fairness in Canada. If an owner in a binding procurement process utilizes an AI tool to score or rank proposals, and that tool applies hidden weightings not found in the tender documents, a disappointed bidder could argue that the owner has breached its duty by relying on an undisclosed evaluation methodology. Conversely, contractors who rely on AI to generate complex bids risk incorporating errors or hallucinations regarding their actual capacity or site constraints. As the industry moves toward automated procurement, the traditional emphasis on careful human review of bid documents remains the most effective way to avoid a binding but unprofitable contract.
The Standard of Care and Means and Methods
A central question for contractors in an AI-enabled industry is whether the technology falls within the traditional definition of means and methods. Generally, the contractor maintains total control over how the work is executed. If a contractor utilizes an AI tool to optimize the sequence of a complex structural pour and that sequence leads to a failure, the legal responsibility likely remains with the contractor under the existing framework.
However, the more difficult question involves the evolving standard of care. We may soon reach a point where a reasonably prudent contractor is expected to use AI for risk detection or schedule optimization.
If the industry adopts these tools as a standard, a contractor who relies solely on manual processes might find themselves defending a claim for failing to utilize the best available technology to prevent an avoidable loss. On the other hand, AI software companies may find themselves being brought into construction disputes if their software was relied upon in the execution of the work.
Contract Administration and Strict Notice Regimes
The intersection of AI-assisted administration and Canada’s strict approach to notice provisions is particularly fraught with risk. Many firms are now exploring AI tools to summarize complex contracts or track mandatory deadlines. While these tools offer efficiency, they also introduce a new point of failure. If an algorithm misinterprets a notice period or fails to flag a condition precedent, the contractor remains legally bound by the missed deadline.
Given the consistent history of Canadian courts strictly enforcing notice requirements, it seems unlikely that a party could successfully argue that reliance on an incorrect AI summary should excuse noncompliance. The machine is a tool of the user, and the legal consequences of its errors will almost certainly be borne by the party that deployed it.
Ownership of Data and Intellectual Property
Current CCDC contracts are largely silent on the ownership and use of project data. This silence creates a significant gap in respect of insights derived from AI training. While a consultant may own the copyright to a specific set of drawings, the question of who owns the metadata generated during a project remains open.
If an owner uses a contractor’s historical performance data to train an AI that then optimizes future tenders, the contractor may argue that their proprietary methods have been misappropriated. Similarly, as AI begins to generate design options based on a set of parameters, the traditional licensing clauses in standard forms may need to be entirely reconsidered to account for non-human authorship.
The Insurance Gap and the CCDC 41 Framework
The integration of AI into the construction lifecycle also introduces a significant disconnect with current insurance frameworks. While standard form documents, like CCDC 41, have recently been updated to include modern technology such as drones, they remain largely silent on the specific risks created by autonomous software and algorithmic decision making. Traditional Commercial General Liability and Wrap-Up Liability policies are typically triggered by physical property damage or bodily injury. Many AI-driven errors, such as a scheduling algorithm that causes a significant project delay, result in economic losses that may fall into a coverage gap. Furthermore, Professional Liability insurance assumes a level of human oversight. If an AI tool operates as a “black box” that the human professional cannot fully explain, insurers may challenge whether the error fits within the scope of covered professional services.
The Evidentiary Challenge and Record Keeping
The gold standard of construction evidence has traditionally been the contemporaneous record, including meeting minutes and site diaries. As these records become AI-generated or assisted, their reliability will inevitably be challenged in litigation or arbitration. Verifying the accuracy of an automated site report requires a level of transparency that many proprietary AI platforms do not yet provide.
Conclusion: These issues are currently theoretical, but the pace of adoption suggests they will be litigated sooner than the industry expects. Standard form contracts and insurance policies were not drafted with the unique risks of automated decision making in mind. Until the courts provide clarity, the best defence is a proactive approach to contract negotiation.
Parties should be intentional about how they allocate the risks of digital failure and who retains the rights to the data generated on site. The use of AI on construction projects must be expressly provided for in contracts. Further, records, designs and other AI-generated work will still require human review and verification.
—
While the legal treatment of AI differs across jurisdictions, a common theme emerges: AI is generally regarded as a tool, not a substitute for human judgment or responsibility.
The absence of AI-specific provisions in standard construction contracts underscores the need for clearer contractual allocation of risk and documented human oversight, a concern that will only become more urgent as AI becomes more deeply embedded in construction practice.