When a bank makes a credit decision using an AI model, it can — if it has good records — tell you which version of the model was running, when the decision was made, and what input it received. What most banks cannot do is prove that record is accurate. The log could have been altered. The model version on file might not match what was actually deployed. The inputs might be post-hoc reconstructions.

This is the problem that AI model attestation solves. Not documentation — proof.

Definition

AI model attestation is a cryptographically verifiable record binding a model version, inference inputs, and output together into a tamper-evident artifact. It makes the claim "this model, with these inputs, produced this output" independently verifiable — not just asserted.
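As a concrete sketch, such a record can be represented as a digest-bound structure. Everything here is illustrative, not a standard schema: the field names are hypothetical, and a production system would sign with an asymmetric key held by the attestation service (e.g. Ed25519) rather than just hashing, so third parties could verify without shared secrets.

```python
# Illustrative attestation record using only the standard library.
import hashlib
import json
from datetime import datetime, timezone

def make_attestation_record(model_id: str, model_fingerprint: str,
                            inputs: dict, output: dict) -> dict:
    """Bind model version, inputs, and output into one tamper-evident record."""
    record = {
        "model_id": model_id,                    # e.g. "credit-scoring-v3.2"
        "model_fingerprint": model_fingerprint,  # SHA-256 of the deployed artifact
        "inputs": inputs,
        "output": output,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Canonical serialisation, then a digest over the whole record: changing
    # any field changes the digest, making tampering evident.
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_digest"] = hashlib.sha256(payload).hexdigest()
    return record

rec = make_attestation_record(
    "credit-scoring-v3.2",
    hashlib.sha256(b"model-weights").hexdigest(),
    {"income": 52000, "dti": 0.31},
    {"decision": "approve", "score": 0.87},
)
```

Because the digest covers the model fingerprint, inputs, and output together, the claim "this model, with these inputs, produced this output" is checkable by recomputing one hash.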

Documentation vs. Attestation: The Critical Difference

Most AI governance frameworks today produce documentation: model cards, validation reports, performance dashboards. Documentation is valuable, but it's retrospective, centralised, and mutable. Attestation is different in every dimension that matters for regulatory accountability.

Documentation only
  • Lives in a database or document store — can be edited or deleted
  • Captures what should have happened, not what did happen
  • Model version in the log may not match deployed model
  • Input/output pairs are stored separately, linkage is implicit
  • Requires trusting the operator's infrastructure integrity
  • Can be produced retroactively for a specific decision
Attestation
  • Cryptographic hash links model version to output — hash cannot be forged
  • Generated at inference time — cannot be created retroactively
  • Model identity is verified against a registered fingerprint
  • Input, output, model ID, and timestamp are bound in a single signed record
  • Can be verified independently — no need to trust the operator
  • Immutable once issued — modification breaks the signature
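The last three properties can be demonstrated in a few lines. This sketch assumes an HMAC-signed record with a key held by the attestation service; a real deployment would use asymmetric signatures so that verifiers need no shared key, and the field names are illustrative.

```python
import hashlib
import hmac
import json

KEY = b"attestation-signing-key"  # hypothetical; held by the attestation service

def sign(record: dict) -> dict:
    """Issue a signed record; the signature covers every field."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "signature": hmac.new(KEY, payload, hashlib.sha256).hexdigest()}

def verify(signed: dict) -> bool:
    """Recompute the signature over the record body; any modification breaks it."""
    body = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

signed = sign({"model_id": "credit-v3", "output": {"decision": "approve"}})
assert verify(signed)                      # untouched record verifies

tampered = dict(signed)
tampered["output"] = {"decision": "deny"}  # post-hoc edit
assert not verify(tampered)                # modification breaks the signature
```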

The practical implication: in an examination or litigation context, documentation tells a story. Attestation provides evidence. Regulators are increasingly interested in evidence.

How AI Model Attestation Works

Attestation at inference time follows a four-step process: the attestation layer intercepts the inference call and captures the inputs, verifies the running model against its registered fingerprint, captures the output, and binds input, output, model ID, and timestamp into a single signed record. The complexity is handled by the attestation layer; from a developer or compliance perspective, it looks like a standard API call with a richer response object.
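The four-step flow can be sketched as a single function. The registry, signing key, and all names here are illustrative assumptions, not a real API, and HMAC stands in for a proper asymmetric signature.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

REGISTRY = {}  # model_id -> registered fingerprint, populated at deployment
KEY = b"attestation-signing-key"  # hypothetical service-held key

def attested_predict(model_id: str, model_bytes: bytes, model_fn, inputs: dict):
    # Step 1: intercept the inference call and capture the inputs.
    # Step 2: verify the running model against its registered fingerprint.
    fingerprint = hashlib.sha256(model_bytes).hexdigest()
    if REGISTRY.get(model_id) != fingerprint:
        raise RuntimeError(f"model {model_id} does not match registered fingerprint")
    # Step 3: run inference and capture the output.
    output = model_fn(inputs)
    # Step 4: bind input, output, model ID, and timestamp into one signed record.
    record = {
        "model_id": model_id,
        "model_fingerprint": fingerprint,
        "inputs": inputs,
        "output": output,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return output, record  # caller sees a normal response plus the attestation

# Usage with a toy model:
weights = b"toy-model-weights"
REGISTRY["credit-v3"] = hashlib.sha256(weights).hexdigest()
score = lambda x: {"decision": "approve" if x["dti"] < 0.4 else "refer"}
output, attestation = attested_predict("credit-v3", weights, score, {"dti": 0.31})
```

Note the caller's view: a normal prediction comes back, with the attestation record alongside it as the richer response object.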

Why Regulators Are Moving Toward Requiring Attestation

Three regulatory frameworks are independently converging on attestation requirements, each from a different angle but arriving at the same place: AI decisions must be independently verifiable, not just documented.

EU AI Act (Article 12)
  • Requirement: High-risk AI systems must maintain logging of "all events during the lifetime of the system that are relevant for identifying risks."
  • Attestation relevance: Provides the tamper-evidence that makes Article 12 logs defensible to the AI Office and national authorities.

SR 11-7 (Federal Reserve model risk management)
  • Requirement: Models must have documentation sufficient for independent validation and post-hoc review of decisions.
  • Attestation relevance: Provides the inference-time linkage that makes post-hoc review credible rather than a reconstruction from logs.

NIST AI RMF (Govern 1.7)
  • Requirement: AI actors should maintain "end-to-end traceability" and "human review of AI system operations."
  • Attestation relevance: The technical mechanism that makes the AI RMF's traceability requirements operational rather than aspirational.

OSFI E-23 (Canadian model risk management)
  • Requirement: Model risk management must include controls for model change events and deployment version tracking.
  • Attestation relevance: The model identity verification component directly addresses OSFI's version-control and change-management requirements.

None of these frameworks uses the word "attestation" explicitly — regulatory language moves slowly. But the underlying requirement — that AI decisions be linkable to a specific model version with verified inputs and tamper-evident outputs — is precisely what attestation provides.

The timing question

EU AI Act's logging requirements for high-risk AI apply from August 2026. SR 11-7 expectations for AI governance are already active — examiners began asking for AI model inventories in 2024 and are now asking about monitoring and change management. Banks that start building attestation infrastructure in 2026 will be ahead; those that wait for explicit regulatory guidance will be retrofitting.

What Attestation Is Not

A few important clarifications, since "attestation" is used loosely in the industry:

  • Attestation is not explainability. It proves which model, with which inputs, produced an output; it does not explain why the model produced it.
  • Attestation is not hardware (TEE) remote attestation, although the two can be combined. Hardware attestation proves what code ran on what machine; model attestation binds a specific inference to a specific model version.
  • Attestation is not a guarantee of model quality. A valid signed record proves provenance and integrity, not that the decision was accurate or fair.

Who Needs Attestation Infrastructure Now

Not every AI system requires attestation. The key filter is whether the AI is making, or materially informing, decisions with regulatory accountability or material financial consequence. In a banking context, the obvious candidates include:

  • Credit underwriting and pricing decisions
  • Fraud detection and AML alert adjudication
  • Any model subject to formal model risk management review (SR 11-7, OSFI E-23)

For these use cases, the question is no longer "should we think about attestation?" It's "how quickly can we implement it before an examiner asks to see it?"

Implementation path

Attestation doesn't require replacing existing model infrastructure. It sits as a layer between model execution and decision logging — intercepting inference calls, capturing inputs and outputs, verifying model identity, and issuing signed records. Existing models typically don't need to be modified.
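That layering can be sketched as a thin wrapper around an existing predict function. The decorator, sink, and names here are illustrative assumptions, and a real layer would sign records rather than only hash them; the point is that the wrapped model's code is untouched.

```python
import functools
import hashlib
import json
from datetime import datetime, timezone

def with_attestation(model_id: str, model_fingerprint: str, sink):
    """Wrap an existing predict function without modifying the model itself."""
    def decorator(predict):
        @functools.wraps(predict)
        def wrapper(inputs: dict):
            output = predict(inputs)  # existing inference call, untouched
            record = {
                "model_id": model_id,
                "model_fingerprint": model_fingerprint,
                "inputs": inputs,
                "output": output,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }
            payload = json.dumps(record, sort_keys=True).encode()
            record["record_digest"] = hashlib.sha256(payload).hexdigest()
            sink(record)   # hand off to the signing/logging service
            return output  # callers see exactly what they saw before
        return wrapper
    return decorator

records = []

@with_attestation("fraud-v1", hashlib.sha256(b"weights").hexdigest(), records.append)
def predict(inputs: dict) -> dict:
    return {"flag": inputs["amount"] > 10_000}

result = predict({"amount": 12_500})
```

Callers and the model are unchanged; only the record sink is new, which is why existing models typically need no modification.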

The QLabs Attestation Layer

The QLabs AI Governance module includes a production attestation layer designed for financial services compliance environments. It handles model identity verification against registered fingerprints, inference-time capture of inputs and outputs, and issuance of signed, independently verifiable attestation records.

The module is currently available for design partnerships. If you're building out an AI governance programme or preparing for regulatory scrutiny of AI systems, we'd welcome a technical conversation about your specific use case.