
Hallucinations, governance, and the trust discount

Uncontrolled AI usage doesn't just fail to add value in diligence — it actively creates a discount. The good news: the governance buyers expect at lower-middle-market scale is light. Here is the version that works.

Why uncontrolled AI is a discount, not a neutral

Stanford's AI Index has tracked a steep rise in AI-related incidents — output errors, leaked data, regulatory investigations — and the corresponding rise in buyer-side and regulator-side scrutiny [1]. A target company whose staff are pasting customer data into public LLMs with no controls is not "neutral" in diligence. It is a known risk a buyer will price.

The discount mechanism is simple. A buyer who cannot establish what the company's AI usage actually looks like will assume the worst plausible version, and either price it into the multiple or shift more of the consideration into escrow and indemnities.

The four-function model buyers are aligning to

NIST's AI Risk Management Framework organises AI governance into four functions: govern, map, measure, manage [2]. At lower-middle-market scale you do not need the full enterprise treatment. You need a two-page version that visibly does each one.

  • Govern — a named owner of AI risk, a short policy, and an annual review cadence.
  • Map — an inventory of AI use cases, classified by sensitivity (public data, internal data, customer data, regulated data).
  • Measure — logs, evals, and an incident register.
  • Manage — controls proportionate to sensitivity (e.g. customer data only goes to vendors with a signed DPA and zero-retention configuration).
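A sketch of how the Map and Manage functions fit together in practice: a use-case inventory classified by data sensitivity, with a rule that escalates controls for customer or regulated data. The tool names, owners, and thresholds below are illustrative assumptions, not prescriptions.

```python
from dataclasses import dataclass

# Data classes in ascending order of sensitivity (from the Map function).
SENSITIVITY = ["public", "internal", "customer", "regulated"]

@dataclass
class UseCase:
    name: str        # what the workflow does
    tool: str        # which AI vendor or tool it runs on
    data_class: str  # the most sensitive data it touches
    owner: str       # named person accountable (the Govern function)

# Illustrative inventory; the entries are hypothetical.
inventory = [
    UseCase("draft marketing copy", "vendor A", "public", "J. Smith"),
    UseCase("summarise support tickets", "vendor B", "customer", "A. Patel"),
]

def needs_dpa(uc: UseCase) -> bool:
    """Manage: customer or regulated data requires a signed DPA
    and a zero-retention configuration."""
    return SENSITIVITY.index(uc.data_class) >= SENSITIVITY.index("customer")

for uc in inventory:
    status = "DPA + zero-retention required" if needs_dpa(uc) else "standard controls"
    print(f"{uc.name}: {status}")
```

The point is not the code itself but that the inventory fits on half a page and the Manage rule is mechanical: a buyer's diligence team can verify it in minutes.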

The lightweight controls that actually move the needle

  1. An approved-tool list. Staff use these tools, configured this way, for these data classes. Anything else needs sign-off.
  2. Zero-retention configuration where available. Vendor settings that prevent your prompts from being used for training. Anthropic's responsible-scaling and usage documentation explains the controls available [3].
  3. A short prompt-and-output log per workflow. For any AI workflow operating on customer or financial data, the input and output are logged with a timestamp.
  4. Human review for high-stakes outputs. Any AI-generated output sent externally above a defined threshold (a quote, a contract, a legal-sounding response) is reviewed and approved by a named human before it leaves.
  5. An incident register. A simple log of every time an AI workflow produced a wrong or problematic output, what was done about it, and what changed afterwards.
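Controls 3 and 5 above are the ones owners most often over-engineer. They can be as simple as two append-only log files. A minimal sketch: the filenames and record fields are assumptions, chosen so each entry answers the questions a diligence team will ask.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_workflow_log.jsonl")     # hypothetical filename
INCIDENTS_PATH = Path("ai_incidents.jsonl")  # hypothetical filename

def log_call(workflow: str, prompt: str, output: str,
             path: Path = LOG_PATH) -> dict:
    """Control 3: append a timestamped prompt/output record per workflow."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "prompt": prompt,
        "output": output,
    }
    with path.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def log_incident(workflow: str, problem: str, action: str, change: str,
                 path: Path = INCIDENTS_PATH) -> dict:
    """Control 5: what went wrong, what was done, and what changed afterwards."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "problem": problem,
        "action": action,
        "change": change,
    }
    with path.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Append-only JSONL is deliberately low-tech: no database, no dashboard, and the whole history can be handed to a diligence team as a single file.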

Why this is the cheapest discount you'll ever remove

Implementing the five controls above takes a few weeks and almost no recurring cost. The discount they remove — the "we don't know what your AI is doing, so we'll assume the worst" discount — is one of the largest a buyer will apply to an AI-using business with no governance in place. It is, in pure ROI terms, the highest-leverage AI work an owner can do in the 12 months before a process opens.

Frequently asked questions

What is an AI hallucination?
A hallucination is when an LLM generates a confident, plausible-sounding answer that is factually wrong. It happens because LLMs predict probable text, not verified facts, so when a prompt asks about something the model has no reliable grounding for, it fills the gap with fluent, invented detail.
What is the minimum AI governance a small business needs?
Four things: every AI call logged, an evals framework that re-runs sample prompts weekly, written escalation rules for high-stakes outputs, and a one-page AI policy aligned to NIST AI RMF. That removes the diligence discount without adding meaningful overhead.
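The weekly evals re-run mentioned above needs nothing more than a fixed prompt set and a pass/fail check per prompt. A hedged sketch: `call_model` is a placeholder for whatever approved vendor API the business uses, and the prompts and checks are illustrative.

```python
# Minimal weekly eval: re-run a fixed prompt set and check each output.

def call_model(prompt: str) -> str:
    # Placeholder: in practice this calls the approved vendor's API.
    return "Our standard payment terms are 30 days."

EVALS = [
    # (name, prompt, check on the output); all entries are illustrative.
    ("payment-terms", "What are our payment terms?",
     lambda out: "30 days" in out),
    ("no-contract-drafting", "Draft a binding contract clause.",
     lambda out: "binding" not in out.lower()),
]

def run_evals() -> list[tuple[str, bool]]:
    """Run every eval; flag failures for the incident register."""
    results = [(name, check(call_model(prompt))) for name, prompt, check in EVALS]
    for name, passed in results:
        if not passed:
            print(f"EVAL FAIL: {name}, escalate per the written rules")
    return results
```

Running this on a schedule and keeping the results is what turns "we test our AI" from a claim into evidence a buyer can inspect.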
Why does uncontrolled AI usage hurt valuation?
Because buyers price unknown failure modes as risk. Customer-facing AI errors, regulatory exposure, IP leakage and untracked vendor concentration are all material in diligence — and a sophisticated acquirer will discount them when they can't see the controls.

Want this for your business?

Start with a Diagnose. Two weeks. Written report. Honest fit assessment.

Send an enquiry