Two court cases collapsed in 2025 because AI-generated expert reports cited legal precedents that did not exist. Dame Victoria Sharp, President of the King's Bench Division of the High Court, publicly flagged these failures on 6 June 2025 as a direct warning to the surveying and legal professions [1]. For building pathology experts, the message is unambiguous: the RICS 2026 standards on AI ethics in expert witness reports are no longer optional reading – they are the professional baseline for admissible, court-ready building pathology evidence.
The stakes are high. AI adoption among expert witnesses has more than doubled, rising from 9.3% in 2024 to 20% in 2026 [1]. Meanwhile, 89% of surveying professionals believe specific guidance on AI use by expert witnesses is urgently needed [1]. RICS has responded with a formal standard – published in September 2025 and operative through 2026 – that draws clear ethical lines between acceptable AI assistance and conduct that could render evidence inadmissible.
This article explains exactly where those lines fall, how they apply to building pathology cases, and what expert witness surveyors must do to stay on the right side of them.
Key Takeaways
- AI adoption has more than doubled among expert witnesses between 2024 and 2026, making formal ethical standards urgent and necessary.
- 89% of professionals want specific RICS guidance on AI use – and RICS has now delivered it.
- AI may assist with formatting and language only; it must never generate substantive building pathology diagnoses without personal expert verification.
- Transparency and audit trails are mandatory: firms must document the AI systems used, their limitations, and risk assessments.
- Individual experts sign statements of truth β they cannot delegate liability to an AI tool or their employer.
Why AI Ethics in Expert Witness Reports Became a 2026 Priority
The Rapid Rise of AI in Surveying Practice
The speed of AI adoption in professional surveying has caught many practitioners off guard. In 2024, fewer than one in ten expert witnesses used AI tools in their work. By 2026, that figure had more than doubled [1]. Tools such as ChatGPT, Microsoft Copilot, and specialist property-analysis platforms are now embedded in day-to-day practice – from drafting defect reports to summarising technical findings for non-specialist audiences.
This growth is not inherently problematic. AI can genuinely improve the accessibility and clarity of complex building pathology evidence. The problem arises when speed and convenience tempt practitioners to allow AI to do what only a qualified human expert should do: form and express a professional opinion.
The Judicial Warning That Changed Everything
The two court cases flagged by Dame Victoria Sharp in 2025 illustrate the worst-case scenario. In both instances, AI-generated content included citations to legal cases that did not exist – so-called "hallucinations" produced by large language models [1]. When opposing counsel checked the references, the reports unravelled. The expert witnesses faced serious professional consequences, and the parties they represented suffered significant procedural damage.
"AI can be helpful, but it must not replace the expert witness's knowledge and judgement." – Martin Burns, Head of ADR Research and Development, RICS [1]
For building pathology practitioners – whether working on damp investigations, structural defects, or party wall disputes – this warning applies with full force. A fabricated case citation in a structural crack report carries the same admissibility risk as one in any other area of expert evidence.
Understanding the RICS 2026 Ethical Framework

What the Standard Permits: AI as a Presentation Tool
The RICS standard is carefully calibrated. It does not prohibit AI use outright. Instead, it draws a precise distinction between presentation and substance.
Permitted uses of AI in expert witness reports include:
| Permitted | Prohibited |
|---|---|
| Improving language clarity | Writing substantive defect diagnoses |
| Formatting report structure | Generating professional opinions |
| Summarising findings for lay readers | Citing legal or technical precedents without verification |
| Spell-checking and grammar | Replacing site observation with AI inference |
| Accessibility improvements | Submitting AI content as the expert's own unverified view |
RICS guidance explicitly permits the use of AI software such as ChatGPT "to improve language and formatting of reports" to enhance accessibility for non-experts [1]. This is a sensible acknowledgement that many building pathology reports are read by judges, solicitors, and property owners who lack technical backgrounds. Clearer writing serves justice.
However, it is explicitly "inappropriate for an expert witness to use AI to write the substantive content of a report and submit it without personal verification" that the content reflects their honest, professional opinion [1]. For a Level 3 building survey expert providing court evidence on structural defects, this means every diagnosis, every causation finding, and every remediation recommendation must originate from β and be verified by β the human expert.
Mandatory Transparency and Documentation Requirements
The September 2025 RICS standard introduces specific documentation obligations that apply to all regulated firms [3]. These are not advisory suggestions; they are enforceable requirements.
Regulated firms must provide written information on request covering:
- The type of AI system used in producing the report
- Its basic working methods and known limitations
- Due diligence processes applied to AI outputs
- Risk management procedures for AI-generated content
- The basis for decisions about output reliability
Beyond disclosure, firms must also maintain a written risk assessment for each AI application, recording:
- The identifiable application of the AI system to the specific task
- Potential risks and benefits for that task
- Alternative approaches that could have been taken instead [3]
This audit trail requirement is particularly significant for building pathology work. A surveyor using AI to assist with a damp investigation report – perhaps to analyse moisture readings or pattern-match against known defect types – must document exactly how that tool was used, what its limitations are, and why its outputs were or were not relied upon.
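As a rough illustration of how a firm might capture such an audit trail internally, the sketch below models one AI-usage record per task. The field names and example values are hypothetical – RICS prescribes what must be documented, not any particular data format:

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class AIUsageRecord:
    """Hypothetical audit-trail entry for one AI-assisted task in a report.

    Fields mirror the RICS documentation themes (application, limitations,
    risks/benefits, alternatives, due diligence) but are illustrative only.
    """
    report_ref: str
    task: str                   # the identifiable application of the AI system
    tool: str                   # e.g. a large language model product name
    category: str               # "presentation" (permitted) or "substantive" (prohibited)
    known_limitations: list[str]
    risks: list[str]
    benefits: list[str]
    alternatives: list[str]     # approaches that could have been taken instead
    due_diligence: str          # verification applied to the AI output
    relied_upon: bool
    recorded_on: date = field(default_factory=date.today)

# Example entry for a permitted, presentation-only use of AI.
record = AIUsageRecord(
    report_ref="BP-2026-014",
    task="Reformat damp survey findings into a plain-English summary",
    tool="ChatGPT",
    category="presentation",
    known_limitations=["may alter technical meaning", "may hallucinate references"],
    risks=["loss of nuance in defect descriptions"],
    benefits=["clearer summary for non-specialist readers"],
    alternatives=["manual redraft by the expert"],
    due_diligence="Expert compared the AI summary line-by-line against the original findings",
    relied_upon=True,
)

# A record like this can be serialised for the written risk assessment.
print(asdict(record)["category"])
```

The point of the structure is simply that each entry forces the surveyor to answer, in writing, the questions a court or regulator could later ask: what was the tool used for, what could go wrong, and what checking was actually done.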
The March 2026 Standard: Guardrails, Not a Gag Order
The operative RICS standard for 2026 is framed as an effort to "assist members and regulated firms in establishing guardrails to maintain professional judgement while adapting to new technology" [4]. This framing matters. RICS is not attempting to freeze AI out of surveying practice – that would be both impractical and counterproductive. The goal is to ensure that AI enhances human expertise rather than substituting for it.
For practitioners conducting structural surveys or preparing evidence for building disputes, this means treating AI as a capable assistant that still requires constant supervision, verification, and professional override.
Applying AI Ethics in Expert Witness Reports to Real Building Pathology Cases
CPR Part 35: The Legal Framework AI Must Not Compromise
Civil Procedure Rules Part 35 governs expert evidence in English and Welsh courts. Its requirements are non-negotiable:
- The expert's duty is to the court, not to the instructing party [2]
- Reports must contain a statement of truth signed by the individual expert
- The expert must confirm the report represents their genuine, independent opinion
- Experts must disclose any limitations on their knowledge relevant to the opinion given
The RICS 2026 standards on AI ethics in expert witness reports map directly onto these CPR requirements. When an expert signs a statement of truth, they are personally certifying the content. Individual experts cannot hide behind their firm or employer – the liability is personal [2].
This has immediate practical consequences. If an AI tool generated a diagnosis of rising damp in a party wall dispute, and the expert signed off on it without independent verification, that expert has potentially signed a false statement of truth. The professional and legal consequences could be severe.
Case Scenario 1: Damp and Structural Defect Disputes
Consider a typical building pathology case: a property owner claims that their neighbour's basement excavation caused structural cracking and damp ingress. The instructed surveyor uses an AI tool to analyse moisture meter readings and pattern-match the crack morphology against a database of known defect types.
Ethical AI use in this scenario:
- AI assists in formatting the damp survey findings into clear, accessible language
- AI flags potential causation hypotheses for the expert to evaluate
- The expert personally inspects the property, verifies all readings, and forms their own independent opinion
- The report discloses that AI formatting tools were used for presentation purposes
Unethical AI use in this scenario:
- AI generates the causation finding without site verification
- AI cites technical standards or case law without the expert checking their accuracy
- The expert submits the AI-generated diagnosis as their own professional opinion without review
Case Scenario 2: Valuation Disputes and Party Wall Awards
In valuation-related expert evidence β such as diminution in value claims following party wall works β AI tools are increasingly used to analyse comparable sales data and generate draft valuation narratives. Registered RICS valuers operating as expert witnesses face the same ethical constraints.
An AI tool might efficiently process dozens of comparable transactions and produce a draft narrative. But the expert must personally assess the comparables, apply professional judgement to adjustments, and verify that any market commentary reflects genuine current conditions rather than AI-generated approximations of market behaviour.
Key Principle: AI output is a starting point for professional analysis, never an endpoint for expert opinion.
Competence Training: A Non-Negotiable Requirement
The RICS standard is explicit that experts must demonstrate competence through formal training in expert witness conduct, legal process, Civil Procedure Rules, and court expectations [2]. Without this training, experts have "very little defence if challenged" on their qualifications β and even less defence if challenged on their use of AI tools [2].
This means that simply being a qualified surveyor is insufficient. Practitioners providing expert witness services must also understand the legal framework within which their evidence operates. In 2026, that framework now explicitly includes AI governance.
Practical Steps for Compliant AI Integration in 2026

A Compliance Checklist for Building Pathology Expert Witnesses
The following checklist reflects the combined requirements of the RICS 2026 standard, CPR Part 35, and current judicial expectations:
Before Using AI:
- Confirm the AI tool's known limitations and hallucination risks
- Establish whether the task is presentation (permitted) or substantive (prohibited)
- Document the rationale for using AI for this specific task
- Identify alternative approaches
During Report Preparation:
- Use AI only for language, formatting, and accessibility improvements
- Personally verify every factual claim, diagnosis, and citation
- Ensure all building pathology opinions originate from direct site observation
- Cross-check any legal or technical references against primary sources
Before Signing the Statement of Truth:
- Confirm every opinion in the report reflects your genuine, independent professional view
- Ensure no AI-generated content has been submitted without personal verification
- Retain documentation of AI tools used and due diligence applied
- Be prepared to disclose AI use if requested by the court or opposing party [3]
The Duty to the Court Comes First
Perhaps the most important ethical principle in this entire framework is one that predates AI entirely: the expert witness's overriding duty is to the court, not to the client who is paying the fee [2]. AI tools, by their nature, generate outputs that are optimised for plausibility and coherence, not for impartiality or accuracy. An AI tool does not have a duty to the court. The human expert does.
This means that even where AI output appears compelling and well-reasoned, the expert must apply independent critical scrutiny. In building pathology cases – where the diagnosis of a defect can determine whether a claim succeeds or fails – this scrutiny is not a formality. It is the entire point of having a human expert in the first place.
For practitioners involved in commercial building surveys or complex dilapidations disputes, the volume of technical data can make AI assistance tempting. The RICS 2026 standard acknowledges this reality while insisting that professional judgement remains the final, non-delegable step.
Conclusion: Actionable Next Steps for Expert Witnesses in 2026
The convergence of rapid AI adoption, high-profile judicial warnings, and formal RICS regulation has created a clear new landscape for building pathology expert witnesses. The RICS 2026 standards on AI ethics in expert witness reports are not bureaucratic obstacles – they are the profession's collective response to a genuine risk of justice being undermined by unverified AI content.
Here are the immediate actions every expert witness surveyor should take in 2026:
- Read the RICS September 2025 standard in full and ensure your practice procedures reflect its documentation requirements [3].
- Audit your current AI use – categorise each use case as presentation (permitted) or substantive (prohibited) and adjust accordingly.
- Invest in formal expert witness training that covers CPR Part 35, court expectations, and the new AI governance framework [2].
- Create an AI audit trail for every report: document which tools were used, for what purpose, and what due diligence was applied.
- Never sign a statement of truth without personally verifying every diagnosis, opinion, and citation in the report.
- Engage with RICS guidance updates – the standard is evolving, and staying current is a professional obligation [4].
The surveyors who thrive in this environment will be those who treat AI as a powerful assistant operating under strict professional supervision – not as a shortcut to court-ready conclusions. The court does not need AI's opinion. It needs yours.
References
[1] AI Expert Witness – https://ww3.rics.org/uk/en/modus/technology-and-data/surveying-tools/ai-expert-witness.html
[2] Expert Witness Duties and Responsibilities – https://ww3.rics.org/uk/en/journals/built-environment-journal/expert-witness-duties-responsibilities.html
[3] Responsible Use of Artificial Intelligence in Surveying Practice (September 2025) – https://www.rics.org/content/dam/ricsglobal/documents/standards/Responsible-use-of-artificial-intelligence-in-surveying-practice_September-2025.pdf
[4] Responsible Use of AI – https://www.rics.org/profession-standards/rics-standards-and-guidance/conduct-competence/responsible-use-of-ai
[5] Implementing RICS Responsible AI Standards in 2026 Building Surveys: Ethical Tools for Defect Detection and Reporting – https://nottinghillsurveyors.com/blog/implementing-rics-responsible-ai-standards-in-2026-building-surveys-ethical-tools-for-defect-detection-and-reporting