When a bank makes a credit decision using an AI model, it can — if it has good records — tell you which version of the model was running, when the decision was made, and what input it received. What most banks cannot do is prove that record is accurate. The log could have been altered. The model version on file might not match what was actually deployed. The inputs might be post-hoc reconstructions.
This is the problem that AI model attestation solves. Not documentation — proof.
AI model attestation is a cryptographically verifiable record binding a model version, inference inputs, and output together into a tamper-evident artifact. It makes the claim "this model, with these inputs, produced this output" independently verifiable — not just asserted.
Documentation vs. Attestation: The Critical Difference
Most AI governance frameworks today produce documentation: model cards, validation reports, performance dashboards. Documentation is valuable, but it's retrospective, centralised, and mutable. Attestation is different in every dimension that matters for regulatory accountability.
Documentation:

- Lives in a database or document store; can be edited or deleted
- Captures what should have happened, not what did happen
- Model version in the log may not match the deployed model
- Input/output pairs are stored separately; the linkage is implicit
- Requires trusting the operator's infrastructure integrity
- Can be produced retroactively for a specific decision

Attestation:

- A cryptographic hash links the model version to the output and cannot be forged
- Generated at inference time; cannot be created retroactively
- Model identity is verified against a registered fingerprint
- Input, output, model ID, and timestamp are bound in a single signed record
- Can be verified independently, with no need to trust the operator
- Immutable once issued; any modification breaks the signature
The practical implication: in an examination or litigation context, documentation tells a story. Attestation provides evidence. Regulators are increasingly interested in evidence.
How AI Model Attestation Works
Attestation at inference time follows a four-step process. The complexity is handled by the attestation layer — from a developer or compliance perspective, it looks like a standard API call with a richer response object.
1. Verify model identity. Before inference, the deployed model's cryptographic fingerprint is checked against the registered hash in the model registry. If they don't match (wrong model version, tampered binary), the inference is blocked.

   model_id: risk-scorer-v2.4.1 | hash: sha256:a9f4b2…

2. Capture the inputs. The inference inputs (features, embeddings, context) are hashed at the point of call. This creates an immutable fingerprint of what the model actually received, preventing retroactive reconstruction.

   input_hash: sha256:3c7d91… | timestamp: 2026-02-18T14:23:07Z

3. Capture the output. The model output (score, classification, text, recommendation) is captured alongside confidence scores, feature attributions, and human-readable rationale.

   risk_score: 84 | decision: REVIEW | confidence: 0.91

4. Issue the attestation. The attestation layer combines model ID, input hash, output, and timestamp into a signed record. The signature makes it independently verifiable: anyone with the public key can confirm the record hasn't been altered.

   attestation_id: attest-2026021814-x7q2 | sig: RS256…
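The four steps above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a production implementation: the registry is a plain dict, every name is hypothetical, and an HMAC over a shared secret stands in for the asymmetric signature (RS256, EdDSA) a real attestation layer would use so that third parties can verify with only a public key.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-secret"  # stand-in: production would use an asymmetric private key

# Hypothetical registry mapping model IDs to fingerprints of their deployed binaries.
REGISTRY = {"risk-scorer-v2.4.1": hashlib.sha256(b"model-bytes-v2.4.1").hexdigest()}

def canonical_hash(obj) -> str:
    """Hash a JSON-serialisable object with stable key ordering."""
    blob = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def attest_inference(model_id, model_bytes, inputs, run_model):
    # Step 1: verify the deployed binary against the registered fingerprint.
    if hashlib.sha256(model_bytes).hexdigest() != REGISTRY.get(model_id):
        raise RuntimeError("model fingerprint mismatch: inference blocked")
    # Step 2: hash the inputs at the point of call.
    input_hash = canonical_hash(inputs)
    # Step 3: run the model and capture its output.
    output = run_model(inputs)
    # Step 4: bind model ID, input hash, output, and timestamp into one signed record.
    record = {
        "model_id": model_id,
        "input_hash": input_hash,
        "output": output,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

att = attest_inference(
    "risk-scorer-v2.4.1",
    b"model-bytes-v2.4.1",
    {"income": 72000, "utilisation": 0.41},
    lambda x: {"risk_score": 84, "decision": "REVIEW"},
)
```

Serialising with sorted keys before hashing and signing matters: two semantically identical records must produce byte-identical payloads, or verification becomes flaky.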
Why Regulators Are Moving Toward Requiring Attestation
Three regulatory frameworks are independently converging on attestation requirements, each from a different angle but arriving at the same place: AI decisions must be independently verifiable, not just documented.
| Framework | Relevant requirement | Attestation relevance |
|---|---|---|
| EU AI Act Art. 12 | High-risk AI systems must maintain logging of "all events during the lifetime of the system that are relevant for identifying risks." | Attestation provides the tamper-evidence that makes Article 12 logs defensible to the AI Office and national authorities. |
| SR 11-7 Fed MRM | Models must have documentation sufficient for independent validation and post-hoc review of decisions. | Attestation provides the inference-time linkage that makes post-hoc review credible — not just a reconstruction from logs. |
| NIST AI RMF Govern 1.7 | AI actors should maintain "end-to-end traceability" and "human review of AI system operations." | Attestation is the technical mechanism that makes AI RMF's traceability requirements operational rather than aspirational. |
| OSFI E-23 Canada MRM | Model risk management must include controls for model change events and deployment version tracking. | Attestation's model identity verification component directly addresses OSFI's version-control and change-management requirements. |
None of these frameworks uses the word "attestation" explicitly — regulatory language moves slowly. But the underlying requirement — that AI decisions be linkable to a specific model version with verified inputs and tamper-evident outputs — is precisely what attestation provides.
EU AI Act's logging requirements for high-risk AI apply from August 2026. SR 11-7 expectations for AI governance are already active — examiners began asking for AI model inventories in 2024 and are now asking about monitoring and change management. Banks that start building attestation infrastructure in 2026 will be ahead; those that wait for explicit regulatory guidance will be retrofitting.
What Attestation Is Not
A few important clarifications, since "attestation" is used loosely in the industry:
- Attestation is not explainability. Attestation proves that a model version produced an output. Explainability methods (SHAP, LIME, attention maps) tell you why. Both are required for high-risk AI; they're complementary, not substitutes.
- Attestation is not model validation. Validation assesses whether a model is conceptually sound and performs as intended. Attestation is the runtime mechanism that makes validated decisions traceable. You need validation first.
- Attestation is not audit logging. Traditional audit logs are operator-maintained and mutable. Attestation records are cryptographically bound — a tampered attestation is detectable. This is the core difference.
- Attestation is not a blockchain requirement. Attestation can be implemented with standard cryptographic signatures (RS256, EdDSA) without distributed ledger infrastructure. Immutability comes from the signing mechanism, not the storage layer.
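The last point can be shown directly: tamper-evidence needs only ordinary signature verification, no ledger. A sketch, again using a stdlib HMAC as a stand-in for the asymmetric RS256/EdDSA signature a real system would use:

```python
import hashlib
import hmac
import json

KEY = b"demo-signing-key"  # stand-in for an asymmetric key pair

def sign(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify(record: dict, sig: str) -> bool:
    return hmac.compare_digest(sign(record), sig)

record = {"model_id": "risk-scorer-v2.4.1", "risk_score": 84, "decision": "REVIEW"}
sig = sign(record)

ok_intact = verify(record, sig)                # intact record verifies
tampered = {**record, "decision": "APPROVE"}   # any modification...
ok_tampered = verify(tampered, sig)            # ...breaks the signature
```

Immutability here comes entirely from the signing mechanism: flipping a single field changes the payload bytes, so the stored signature no longer matches, regardless of where the record is stored.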
Who Needs Attestation Infrastructure Now
Not every AI system requires attestation. The key filter is whether the AI is making — or materially informing — decisions with regulatory accountability or material financial consequence:
- Transaction monitoring and AML alert scoring
- Credit origination and pricing decisions
- Fraud detection and case escalation
- Sanctions screening and PEP matching
- Customer risk classification for KYC/CDD
- AI-assisted SAR narrative generation
For these use cases, the question is no longer "should we think about attestation?" It's "how quickly can we implement it before an examiner asks to see it?"
Attestation doesn't require replacing existing model infrastructure. It sits as a layer between model execution and decision logging — intercepting inference calls, capturing inputs and outputs, verifying model identity, and issuing signed records. Existing models typically don't need to be modified.
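That layering can be sketched as a thin wrapper around an existing inference function, leaving the model code untouched. Illustrative only: `attested` is a hypothetical decorator standing in for the attestation layer's interception point, and the record is left unsigned for brevity.

```python
import functools
import hashlib
import json
from datetime import datetime, timezone

def attested(model_id: str):
    """Decorator that intercepts inference calls and issues an attestation
    record alongside the result, without modifying the model itself."""
    def wrap(predict):
        @functools.wraps(predict)
        def inner(features: dict):
            output = predict(features)  # existing model runs unchanged
            record = {
                "model_id": model_id,
                "input_hash": hashlib.sha256(
                    json.dumps(features, sort_keys=True).encode()
                ).hexdigest(),
                "output": output,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }
            # A real layer would verify the model fingerprint first, sign
            # `record`, and append it to the tamper-evident audit log.
            return output, record
        return inner
    return wrap

@attested("risk-scorer-v2.4.1")
def score(features):
    # Stand-in for the existing, unmodified model.
    return {"risk_score": 84, "decision": "REVIEW"}

output, record = score({"income": 72000})
```

Callers that only want the decision ignore the record; the decision-logging path picks it up. This is one way the "layer between model execution and decision logging" shape can be realised.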
The QLabs Attestation Layer
The QLabs AI Governance module includes a production attestation layer designed for financial services compliance environments. It handles:
- Model registry with cryptographic fingerprinting — every registered model version gets a hash. Deployment verifies the hash before any inference runs.
- Inference-time attestation issuance — signed records are generated at each inference call, binding model ID, inputs, outputs, and timestamp.
- Tamper-evident audit log — attestation records are stored with append-only controls. Modification attempts break the signature chain.
- Compliance export — attestation records can be packaged into examiner-ready reports pre-mapped to EU AI Act, SR 11-7, and OSFI requirements.
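One common way to build the append-only, tamper-evident property described above is hash chaining: each log entry commits to the hash of its predecessor, so rewriting any earlier entry invalidates every entry after it. A minimal sketch of the idea (an illustration of the general technique, not the QLabs implementation):

```python
import hashlib
import json

class ChainedLog:
    """Append-only log where each entry commits to the previous entry's
    hash; rewriting any entry breaks every hash after it."""

    def __init__(self):
        self.entries = []

    def _hash(self, body: dict) -> str:
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def append(self, attestation: dict):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"attestation": attestation, "prev": prev}
        self.entries.append({**body, "hash": self._hash(body)})

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            body = {"attestation": e["attestation"], "prev": e["prev"]}
            if e["prev"] != prev or e["hash"] != self._hash(body):
                return False
            prev = e["hash"]
        return True

log = ChainedLog()
log.append({"attestation_id": "a1", "decision": "REVIEW"})
log.append({"attestation_id": "a2", "decision": "CLEAR"})
ok_before = log.verify()                              # True
log.entries[0]["attestation"]["decision"] = "CLEAR"   # tamper with entry 0
ok_after = log.verify()                               # False
```

In production the chain head would itself be signed and periodically anchored externally, so even wholesale truncation of the log is detectable.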
The module is currently available for design partnerships. If you're building out an AI governance programme or preparing for regulatory scrutiny of AI systems, we'd welcome a technical conversation about your specific use case.