Available for pilots

AI systems are making high-stakes decisions. Most can't prove why.

AI Governance & Assurance closes that gap — a single layer for model oversight, inference logging, and regulator-ready reporting across every AI system in your stack.

The accountability gap

Regulators are asking hard questions about AI in finance. Most firms can't answer them.

SR 11-7, OSFI E-23, and the EU AI Act all demand that AI systems used in high-risk financial decisions be documented, monitored, and auditable. But most AI deployments — even sophisticated ones — have no systematic way to prove what model made a decision, with what inputs, at what version, and whether that model has drifted since it was validated.

QLabs AI Governance & Assurance is the layer that closes that gap — sitting between your AI systems and your regulator-facing outcomes, with no changes required to the underlying models.

73%
of financial institutions report they cannot fully explain AI-driven decisions to examiners on demand.
$4.5M
average regulatory fine for model risk management deficiencies at US banks (2023–2025).
Aug 2026
EU AI Act high-risk AI obligations become fully enforceable. Compliance programmes need to be running now.

What it does

Six capabilities. One governance layer.

Each capability maps directly to a regulatory requirement. No generic MLOps tooling — built specifically for high-risk financial AI.

Model Registry

Version-controlled record of every model in production — architecture, training provenance, validation status, and deployment history. No undocumented models in the stack.

SR 11-7 · OSFI E-23

Inference Logging

Tamper-evident log of every prediction: model version, input features, output score, decision, and timestamp. Linked to the model registry entry active at the time of inference.

EU AI Act Art. 12
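
A single logged inference, sketched as a Python record. Field names here are illustrative, not a fixed QLabs schema:

log_entry = {
    "model_version": "fraud-scorer:4.2.1",  # illustrative model ID
    "registry_id": "sha256:a3f8c9d…",       # registry entry active at inference time
    "input_hash": "sha256:7b2e1f…",         # digest of the captured input features
    "score": 78,
    "decision": "ESCALATE",
    "timestamp": "2026-02-11T14:32:07Z",
}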

Drift Monitoring

Continuous statistical monitoring of model performance against validation baselines. Alerts when a model deviates beyond defined thresholds, before a regulator spots it.

SR 11-7 · Ongoing Validation
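
As an illustration of how threshold-based drift detection works, here is a population stability index (PSI) check in Python. The statistic and the cut-off are common industry practice, not necessarily the exact monitors QLabs runs:

import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index of live scores vs. the validation baseline."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover the full live range
    expected = np.histogram(baseline, edges)[0] / len(baseline)
    actual = np.histogram(live, edges)[0] / len(live)
    expected = np.clip(expected, 1e-6, None)       # avoid log(0) on empty bins
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Rule of thumb: PSI above 0.25 is commonly treated as material drift.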

Reproducibility

Re-run any past inference with the exact same inputs, model version, and feature state. Litigation-ready and examiner-ready: you can show precisely why a decision was made.

Audit Readiness · NIST AI RMF
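
What replay means in practice: rebuild the exact context of a past decision and confirm the output matches. In this sketch, load_model_by_hash and fetch_features are hypothetical helpers backed by the model registry and inference log:

def replay(entry: dict):
    """Re-run a logged inference and check it reproduces the recorded decision."""
    model = load_model_by_hash(entry["registry_id"])  # hypothetical: exact artefact by content hash
    features = fetch_features(entry["input_hash"])    # hypothetical: feature state at inference time
    score = model.predict(features)
    assert score == entry["score"], "logged decision does not reproduce"
    return score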

Compliance Exports

Pre-formatted reports for SR 11-7 annual reviews, OSFI submissions, and EU AI Act technical documentation. Structured for examiners, not just internal engineering teams.

SR 11-7 · OSFI E-23 · EU AI Act

Attestation Layer

Cryptographic proof of model identity and the integrity of every inference chain. Each log entry is signed and linked to the previous, making any tampering detectable.

NIST AI RMF · Composable Audit

Attestation architecture

Every decision leaves a verifiable trail

The QLabs attestation layer is not a database of logs. It's a cryptographically linked chain — each entry signed and bound to the previous one, making the record tamper-evident by construction.

When an examiner asks "what model made this decision, and has it changed since?" — you pull the chain, verify the signature, and show the answer. No manual reconstruction. No gap in the record.

Composable by design. The attestation layer integrates at the inference boundary — no changes to your models, no vendor lock-in. Works with your existing ML serving infrastructure via a thin SDK or API hook.
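
For a sense of how thin the hook can be, an inference-boundary integration can be a single decorator. The qlabs_sdk module and log_inference call below are hypothetical stand-ins, not the actual client API:

import functools
import qlabs_sdk  # hypothetical client library

def governed(model_id: str):
    """Log every call to a predict function at the inference boundary."""
    def wrap(predict):
        @functools.wraps(predict)
        def inner(features):
            decision = predict(features)          # the model itself runs unmodified
            qlabs_sdk.log_inference(              # hypothetical call: writes the log
                model_id=model_id,                # entry pinned to the registry entry
                inputs=features, output=decision) # active for this request
            return decision
        return inner
    return wrap

# Usage (hypothetical): decorate your existing predict entrypoint
# @governed("fraud-scorer:4.2.1")
# def predict(features): ...
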
01 · Model registered
On deployment, the model artefact is hashed and registered. Training provenance, validation report, and version metadata are sealed into the registry entry.
registry_id: sha256:a3f8c9d…

02 · Inference requested
Inputs are captured at the boundary before the model runs. The active registry entry is pinned to this inference request.
input_hash: sha256:7b2e1f…

03 · Decision recorded
The model output, score, decision label, and latency are captured and written to the inference log as an immutable entry.
decision: ESCALATE · score: 78

04 · Log entry signed
The entry is signed with an ed25519 key and chained to the previous log entry. Any post-hoc modification breaks the chain's signature verification.
sig: ed25519:7f3a2b… · prev: #00141

05 · Attestation issued & export ready
A signed attestation bundle is generated: human-readable for examiners and machine-readable for automated compliance pipelines.
status: tamper-evident ✓ · export: PDF / JSON
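
Steps 02–04 condense to a few lines of code. Below is a minimal sketch of a hash-linked, ed25519-signed log using Python's cryptography library; it illustrates the construction, not QLabs production internals:

import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()

def append_entry(chain: list, record: dict) -> None:
    """Sign a record and link it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"record": record, "prev": prev}, sort_keys=True).encode()
    chain.append({"record": record, "prev": prev,
                  "hash": hashlib.sha256(body).hexdigest(),  # binds entry to its predecessor
                  "sig": key.sign(body).hex()})              # ed25519 signature over the body

def verify_chain(chain: list, public_key) -> None:
    """Recompute every link; any edited entry fails here."""
    prev = "genesis"
    for e in chain:
        body = json.dumps({"record": e["record"], "prev": e["prev"]}, sort_keys=True).encode()
        assert e["prev"] == prev and e["hash"] == hashlib.sha256(body).hexdigest()
        public_key.verify(bytes.fromhex(e["sig"]), body)     # raises InvalidSignature if forged
        prev = e["hash"]

Verification needs only the public key, which is what lets an examiner check the record independently.
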
Regulatory alignment

Built around the frameworks that matter

Not a generic governance tool retrofitted to compliance after the fact. Each capability was designed against specific regulatory requirements.

SR 11-7
US · Federal Reserve
Supervisory Guidance on Model Risk Management
Requires
  • Model inventory with documentation of purpose, design, and limitations
  • Independent model validation before deployment
  • Ongoing performance monitoring and periodic revalidation
  • Governance policies with clear ownership and escalation paths
QLabs addresses this with
Model Registry · Drift Monitoring · Compliance Exports

OSFI E-23
Canada · OSFI
Model Risk Management Guideline
Requires
  • Comprehensive model lifecycle management from development to retirement
  • Model tiering based on materiality and complexity
  • Validation standards commensurate with model risk
  • Audit trail for model changes and decisions
QLabs addresses this with
Model Registry · Inference Logging · Reproducibility

EU AI Act
EU · Enforceable Aug 2026
Regulation on Artificial Intelligence — High-Risk AI Systems
Requires
  • Automatic logging of events for high-risk AI systems (Art. 12)
  • Technical documentation covering design, testing, and performance
  • Human oversight mechanisms and transparency to affected persons
  • Post-market monitoring plan with incident reporting
QLabs addresses this with
Inference Logging · Attestation Layer · Compliance Exports

NIST AI RMF
US · NIST · Voluntary
AI Risk Management Framework 1.0
Requires
  • GOVERN: Policies, accountability, and culture for AI risk
  • MAP: Identify and categorise AI risks in context
  • MEASURE: Analyse and assess risk with appropriate metrics
  • MANAGE: Prioritise, respond to, and monitor risks over time
QLabs addresses this with
Model Registry · Drift Monitoring · Attestation Layer

Pilot programme

We're onboarding design partners now

AI Governance & Assurance is in active rollout with early partners. Pilot partners co-design the compliance export templates and framework coverage with our team — and lock in early-access pricing.

Best fit for
  • Compliance or model risk teams at banks, fintechs, or VASPs
  • Organisations already using AI for AML, credit, or fraud decisions
  • Teams facing an upcoming SR 11-7 review, OSFI examination, or EU AI Act deadline
  • Risk officers who need to answer "what model made this decision?" today

Direct product team access

Weekly sessions with the QLabs team. Your compliance requirements directly shape the roadmap.

Co-designed export templates

We build the compliance export formats around your actual regulatory submissions — not generic templates.

No-change model integration

The attestation layer integrates at the inference boundary. No modifications to your existing models or serving infrastructure.

Early-access pricing locked in

Pilot partners receive perpetual early-access rates. Pricing will not increase when the product reaches GA.