SMCR and AI: Who Is Personally Accountable When Your Model Fails?
AI governance is often framed as an institutional obligation. Under SMCR, it is also a personal one. When a model fails and the FCA investigates, the question becomes: who is accountable? Named. Documented. Provably so.
Most AI governance frameworks talk about accountability in institutional terms — policies, committees, escalation procedures, board oversight. The Senior Managers and Certification Regime makes accountability personal. Under SMCR, a named individual accepts formal responsibility for specific business areas and activities. When those activities include AI systems, the personal accountability of the named Senior Manager Function holder is directly in scope.
This is not an abstract risk. The FCA has made clear, in its published work on AI and in supervisory letters, that it expects firms to be able to identify, by name, the senior individual accountable for each AI system that makes or informs material decisions. Where that individual cannot be identified, or where the accountability cannot be supported by contemporaneous evidence, the firm's SMCR compliance is in question.
The challenge for most regulated firms is not that they lack an accountable person — it is that they cannot prove the accountability chain existed at the time relevant decisions were made, rather than being constructed after the fact in response to regulatory scrutiny. That distinction matters enormously in enforcement.
What SMCR Says About AI
SMCR does not mention AI systems explicitly — the regime predates the current generation of AI adoption in financial services. But the Senior Manager Function designations create accountability structures that apply to AI as they do to any other material business activity.
SMF Holders with AI Accountability Exposure
SMF4 — Chief Risk Officer
The CRO bears accountability for the firm's overall risk management framework. AI model risk sits within that framework, so an AI system that generates material losses or regulatory exposure falls squarely within the CRO's Statement of Responsibilities.
SMF7 — Group Entity Senior Manager
Where AI systems operate across multiple group entities — a common structure in banking groups — the Group Entity Senior Manager may carry accountability for AI governance at group level, depending on how the accountability map is structured.
SMF24 — Chief Operations Officer
The COO's accountability typically covers operational systems, technology infrastructure, and third-party operational dependencies. AI systems embedded in operational processes — claims processing, customer service automation, fraud monitoring — fall naturally within this scope.
The specific accountability assignment depends on how the firm has drawn its accountability map. But across most retail and investment firms, at least three SMF function codes carry meaningful AI accountability exposure. In a firm with a broad AI deployment footprint, more may be implicated.
The Scenario: 3,000 False Positives and an FCA Investigation
A retail bank’s AML monitoring model undergoes a threshold change in December 2025. The change is intended to reduce false negatives — missed suspicious transactions — following a regulatory recommendation. The configuration update is applied by the model operations team without a formal change management record. No risk sign-off is obtained. No SMCR accountability entry is updated to reflect the change.
In the two weeks following the change, the model generates 3,000 false positive transaction blocks — legitimate customer payments flagged as suspicious and held pending review. The backlog creates significant customer harm. Dozens of vulnerable customers are unable to access funds for essential payments. The firm’s complaints function is overwhelmed.
The FCA opens an investigation. The supervisory team requests the following documentation: the approval record for the December threshold change; the risk assessment conducted before the change; the name and SMF function code of the Senior Manager who authorised the change; the monitoring logs showing how the firm identified the false positive spike; the incident response record; and the timeline of remediation actions.
The firm can produce none of these documents in contemporaneous form. The change was undocumented. The risk sign-off did not happen. The accountability was not recorded. The monitoring logs exist but were not reviewed until after the customer complaints arrived. The firm must now attempt to reconstruct the governance record, in response to a regulatory investigation, from memory, email chains, and informal notes. It is an exceptionally difficult position to defend.
What Evidence Is Required
For SMCR compliance to hold under regulatory scrutiny, the accountability record must meet a specific evidential standard. It must be contemporaneous — created at the time the decision was made, not reconstructed afterwards. It must identify a specific named individual and their SMF function code. And it must be tamper-evident — the FCA must be satisfied that the records reflect what actually happened, not what the firm wishes had happened.
Required Evidence Categories
- Deployment Approval Records: who approved the model for production, when, under what authority, and with what conditions attached.
- Risk Sign-Off: the documented assessment of model risk, signed by the accountable SMF holder before deployment or change.
- Monitoring Logs: continuous performance monitoring records showing the model was watched throughout its operational life, not just at deployment.
- Change Management Records: every material change to the model (threshold updates, retraining, scope changes), with approval and accountability recorded at the time.
- Incident Response Records: when an incident occurs, who was notified, who took what action, and when; the accountability chain through the incident lifecycle.
Each of these evidence categories must carry a reliable timestamp. Not the timestamp on a Word document that can be edited — a cryptographic timestamp that can be independently verified by the regulator against a trusted timestamp authority. RFC 3161 timestamps, issued by a recognised timestamping authority, provide exactly this. They prove that a document existed in its current form at a specific point in time, before the regulatory investigation began.
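The tamper-evidence requirement can be illustrated with a short sketch using only Python's standard `hashlib`. The function names here are illustrative, not Audital's API, and a real RFC 3161 exchange would additionally encode the digest in a TimeStampReq and store the TSA's signed token alongside the record:

```python
import hashlib

def evidence_digest(data: bytes) -> str:
    """SHA-256 digest of an evidence document's raw bytes.

    Under RFC 3161, only this digest is sent to the timestamping
    authority (TSA); the TSA signs the digest together with its own
    clock time, so the document itself never leaves the firm.
    """
    return hashlib.sha256(data).hexdigest()

def is_unaltered(data: bytes, recorded_digest: str) -> bool:
    """True if a document still matches the digest recorded at signing time."""
    return evidence_digest(data) == recorded_digest
```

Any later edit to the document, however small, produces a different digest, which is what allows a regulator to distinguish a contemporaneous record from one amended after the fact.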
What Audital Generates
Audital’s accountability map is built continuously as governance events occur. When a model is registered, the system prompts for an SMF holder assignment — a named individual, their SMF function code, and their role in relation to this specific model. That assignment is recorded as an event in the audit chain, timestamped and hashed.
As the model moves through its lifecycle — deployment approval, risk sign-off, performance review, incident response, configuration change — each event is attributed to a named individual. The accountability map at any point in time shows, for every AI system in the firm’s inventory, who is accountable for what, supported by the underlying evidence events from which the map is derived.
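A minimal sketch of how such a hash-linked event log might work, with illustrative class and field names rather than Audital's actual implementation: each event records the named individual, their SMF code, and the hash of the previous event, and the accountability map is derived by replaying the chain.

```python
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class GovernanceEvent:
    model_id: str
    event_type: str   # e.g. "deployment", "risk_sign_off"
    actor: str        # named individual
    smf_code: str     # e.g. "SMF4", "SMF24"
    timestamp: float
    prev_hash: str
    hash: str = ""

class AuditChain:
    """Append-only, hash-linked governance event log (illustrative sketch)."""

    def __init__(self) -> None:
        self.events: list[GovernanceEvent] = []

    def record(self, model_id: str, event_type: str,
               actor: str, smf_code: str) -> GovernanceEvent:
        # Link each event to its predecessor so past entries cannot be
        # altered without invalidating every subsequent hash.
        prev = self.events[-1].hash if self.events else "0" * 64
        ev = GovernanceEvent(model_id, event_type, actor, smf_code,
                             time.time(), prev)
        payload = json.dumps([ev.model_id, ev.event_type, ev.actor,
                              ev.smf_code, ev.timestamp, ev.prev_hash])
        ev.hash = hashlib.sha256(payload.encode()).hexdigest()
        self.events.append(ev)
        return ev

    def accountability_map(self) -> dict:
        """Replay the chain: for each model, who is accountable for what."""
        out: dict = {}
        for ev in self.events:  # later events supersede earlier ones
            out.setdefault(ev.model_id, {})[ev.event_type] = (ev.actor, ev.smf_code)
        return out
```

Deriving the map from events, rather than storing it directly, means the map can always be re-verified against the underlying evidence, which is the property the article describes.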
Five Decision Types Tracked
- Deployment: the SMF holder who approved the production release and accepted ongoing accountability.
- Approval: the person who granted model sign-off following validation and risk assessment.
- Incident Response: the named individual accountable for managing and remediating the incident.
- Risk Sign-Off: the person who completed and signed the pre-deployment risk assessment.
- Monitoring: the holder of ongoing performance monitoring responsibility and the review schedule.
The Pre-Flight Certificate
Before any model reaches production, Audital runs six automated governance checks against the audit trail. Each check verifies that a specific accountability or documentation requirement has been met: that an SMF holder is assigned; that an approval event is on record; that a risk framework has been applied; that a change management ticket is linked; that the version history is documented; and that there are no unresolved incidents blocking deployment.
A model that passes all six checks receives a Pre-Flight Certificate — a cryptographically signed document that records the exact state of the audit trail at the moment of deployment approval. The certificate includes the SHA-256 hash of the evidence set and an RFC 3161 timestamp, meaning the certificate can be independently verified by a regulator or auditor without relying on Audital’s systems to confirm its authenticity.
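As a rough illustration of the six-check gate, assuming hypothetical event-type names rather than Audital's actual schema: each check is a predicate over the audit trail, and a certificate is issued only when all six pass.

```python
import hashlib
import json

# The six checks mirror those described above; event-type names are
# illustrative assumptions, not Audital's schema.
CHECKS = {
    "smf_holder_assigned":    lambda evs: any(e["type"] == "smf_assignment" for e in evs),
    "approval_on_record":     lambda evs: any(e["type"] == "approval" for e in evs),
    "risk_framework_applied": lambda evs: any(e["type"] == "risk_sign_off" for e in evs),
    "change_ticket_linked":   lambda evs: any(e.get("ticket") for e in evs),
    "version_history":        lambda evs: any(e["type"] == "version" for e in evs),
    "no_open_incidents":      lambda evs: not any(
        e["type"] == "incident" and not e.get("resolved") for e in evs),
}

def preflight(events: list) -> dict:
    """Run all six checks; issue a certificate only if every check passes."""
    results = {name: check(events) for name, check in CHECKS.items()}
    certificate = None
    if all(results.values()):
        evidence = json.dumps(events, sort_keys=True).encode()
        certificate = {
            # Hash of the exact evidence set at the moment of approval.
            # In production this digest would also be sent to an RFC 3161
            # TSA and the signed timestamp token stored with the certificate.
            "evidence_sha256": hashlib.sha256(evidence).hexdigest(),
            "checks": results,
        }
    return {"passed": all(results.values()), "results": results,
            "certificate": certificate}
```

Because the certificate binds the digest of the evidence set, a verifier can recompute the hash from the preserved events and confirm it independently of the issuing system.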
The Pre-Flight Certificate is the answer to the FCA's first question in any AI governance review: can you prove that governance was completed before this model went live? With a certificate, the answer is yes, and the proof is cryptographically verifiable. Without it, the answer is a narrative, which may or may not be believed.
Download the Board Briefing
A concise briefing for boards and senior leadership on SMCR accountability obligations for AI systems, the evidence standard required, and how Audital’s accountability map and Pre-Flight Certificate support your SMCR compliance posture.
RegRadar Briefing
Monthly Regulatory Intelligence
Monthly: the regulatory changes that matter, the enforcement actions to learn from, and the deadlines coming up. Read by compliance professionals at regulated firms across the UK and EU.
Audital Compliance Team
audital.ai