19 February 2026 · 7 min read

Shadow AI: Why 74% of Enterprise AI Operates Outside Governance Frameworks

Most enterprise AI isn’t rogue — it’s invisible. Across regulated firms, the majority of AI usage happens outside any formal governance programme, and the organisations running it often don’t know it is there.

Research consistently finds that the majority of enterprise AI usage occurs outside IT-sanctioned governance frameworks. In recent enterprise AI adoption surveys, more than seven in ten AI deployments were found to lack formal documentation, risk classification, or governance oversight at the point of use. This is not a fringe problem affecting a handful of experimental teams — it is the default state of AI adoption in most large organisations.

Shadow AI is the enterprise equivalent of shadow IT, but with a more acute regulatory dimension. When an employee installs unsanctioned software, the risks are relatively bounded: data security, licence compliance, support overhead. When an employee uses an AI system to inform consequential decisions — credit assessments, fraud flags, customer triage — without that system appearing in any governance register, the firm is operating a material risk function without documentation, oversight, or accountability. Under the EU AI Act, that is not a process gap. It is a regulatory breach.

The question for compliance teams at regulated firms is not whether shadow AI exists in their organisation. It almost certainly does. The question is whether they can account for it — and what happens when a regulator asks them to.

Three Routes into Your Organisation

Shadow AI does not arrive through a single vector. In regulated financial institutions, it enters through at least three distinct pathways, each with different governance implications.

Route 1: Developer Convenience

The most common pathway is a developer adding an AI library dependency to a project without formal approval. npm install openai, pip install anthropic, pip install langchain — each is a single command that can put an AI system into production within an afternoon. The developer may not think of this as deploying AI. They are solving a problem. The governance register never sees it.
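As an illustration, the following is the kind of helper a developer might write in an afternoon. The function and model names are hypothetical, but nothing in code like this signals "AI system" to a governance register:

```python
# Hypothetical helper written in an afternoon. One dependency, a dozen
# lines of glue code: nothing here signals "AI system" to a governance
# register, yet it can end up informing production decisions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_complaint(text: str) -> str:
    """Summarise a customer complaint narrative for downstream triage."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarise this complaint in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content
```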

In a team of forty engineers working across ten repositories, this happens frequently. By the time a compliance officer becomes aware of a tool that has been in daily use for three months, there is no audit trail of its introduction, no risk assessment, and no approved intended purpose documented anywhere.

Route 2: Departmental Procurement

The second pathway is a business team purchasing a SaaS product that includes AI capabilities without routing the acquisition through IT or compliance review. A credit risk team buys a data analytics platform. A compliance team subscribes to an AI-powered document review tool. A customer success team deploys a CRM with embedded predictive scoring. In each case, the AI capability is bundled inside a product that was procured through a normal purchase order — and the AI component is never separately identified, assessed, or registered.

Third-party AI systems embedded in procured software carry their own EU AI Act obligations. If a vendor’s tool performs a function that falls under Annex III — for example, AI-assisted credit risk assessment — the deploying firm bears obligations as a deployer even if the model itself was built by the vendor. Most firms have no mechanism to identify these embedded AI systems at the point of procurement.

Route 3: Open-Source Experimentation

The third pathway is research and data science teams downloading and running models locally. A quant analyst pulls a HuggingFace model to experiment with NLP-based document classification. A risk researcher fine-tunes an open-source language model on internal data to generate regulatory summaries. The work begins as an experiment, produces useful outputs, and quietly becomes operational — all without the model ever being registered, validated, or reviewed by anyone outside the team.

Open-source models running on internal infrastructure are particularly difficult to detect because they generate no external API traffic. There is no vendor invoice, no network call to an external endpoint, no record in the firm’s third-party supplier register. The only evidence of their existence is a Python script in a private repository.
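For illustration, a script like the following (the model id is a stand-in) downloads the model once, caches it on local disk, and thereafter runs inference entirely on internal infrastructure:

```python
# Sketch of an experiment that quietly becomes operational. The model id
# is an illustrative stand-in. It is fetched once from the Hugging Face
# Hub and cached locally; every run after that performs inference on
# internal infrastructure, with no outbound API traffic to observe.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

result = classifier("The customer disputes the arrears balance on their account.")
print(result)  # e.g. [{'label': 'NEGATIVE', 'score': 0.99}]
```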

What an FCA Examiner Would Ask

FCA supervisors conducting an AI-related review will ask a small number of focused questions. Firms that cannot answer them face significant supervisory risk.

The questions are deceptively simple: Where are all your AI systems? Who approved them? Where is the documentation? What performance monitoring is in place? Who is accountable under SM&CR?

For the registered, governed AI systems — the ones that went through formal approval — most firms can answer these questions, albeit with effort. For the shadow AI systems — the ones that entered through the three routes above — the answer is typically some version of: we don’t know they exist, and therefore we have no documentation, no approval, and no accountability chain.

The UK supervisory approach to model risk management, codified in the PRA’s SS1/23, does not distinguish between AI systems the firm chose to govern and AI systems it did not know about. The obligation attaches to all models that make or inform material decisions. Ignorance of a model’s existence is not a mitigating factor — it is an aggravating one.

A Scenario: Credit Risk and the Invisible LLM

Consider a realistic scenario. A credit risk analyst at a mid-sized lender begins using a large language model — accessed through the OpenAI API — to summarise customer complaint narratives and extract relevant credit signals. The tool is useful. It saves hours per week. The analyst shares it with the team. Within six weeks, eight analysts are using it daily, and the summaries are feeding into underwriting decisions as a matter of routine.

No one has registered this as an AI system. No one has assessed whether it falls under EU AI Act Annex III (it does — it is informing credit decisions). No one has documented its intended purpose, its performance characteristics, or its human oversight arrangements. The OpenAI API key is in the analyst’s personal account, billed to a corporate card.

The FCA opens a thematic review of AI use in consumer credit. The firm is asked to provide a complete inventory of AI systems used in credit decisioning. The inventory does not include this tool, because compliance does not know about it. The examiner asks the analysts directly. The tool is disclosed. The examiner then asks for the Annex IV technical documentation, the risk assessment, the approval record, the validation results, and the name of the accountable SMF holder.

The answer to all of them is: we do not have it. The firm is now explaining, to a regulator, why an AI system that has been informing retail credit decisions for six months has no documentation, no approval, and no accountability chain. That is a difficult conversation.

Technical Detection: How Audital Finds Shadow AI

Audital’s Shadow AI detection engine operates across three surfaces: repository scanning, API log analysis, and package manifest inspection. Each surface catches a different category of shadow AI usage.

Libraries and Endpoints Detected

OpenAI SDK: GPT-4, GPT-4o, DALL-E, Whisper, Embeddings
Anthropic SDK: Claude 3, Claude Haiku, Claude Sonnet
LangChain: Agent frameworks, chain definitions, tool use
HuggingFace Transformers: Pipeline imports, model.from_pretrained()
PyTorch / TensorFlow: torch.nn, tf.keras model definitions
AWS Bedrock: boto3 bedrock-runtime invocations
Azure AI / Cognitive Services: azure-ai-* SDK imports, OpenAI via Azure
Google Vertex AI: vertexai.generative_models, PaLM API calls

Package manifest inspection parses package manifests (package.json, requirements.txt, pyproject.toml, Cargo.toml) for known AI library dependencies. Repository scanning performs pattern matching on source code for characteristic import statements and API call signatures, catching cases where a dependency has been vendored or is loaded dynamically.
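A minimal sketch of both surfaces, assuming a simple repository layout. The package names, regex, and file handling are illustrative, not Audital's actual signature set:

```python
# Minimal sketch of the two surfaces described above: manifest parsing and
# source-level import matching. Package names, regexes, and file handling
# are illustrative; a real scanner carries a far larger signature set.
import json
import re
from pathlib import Path

AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers",
               "torch", "tensorflow", "boto3"}
IMPORT_PATTERN = re.compile(
    r"^\s*(?:import|from)\s+"
    r"(openai|anthropic|langchain|transformers|torch|tensorflow)\b",
    re.MULTILINE,
)

def scan_repository(root: str) -> list[str]:
    findings = []
    repo = Path(root)

    # Surface 1: dependencies declared in package manifests.
    req = repo / "requirements.txt"
    if req.exists():
        for line in req.read_text().splitlines():
            name = re.split(r"[=<>\[\s]", line.strip(), maxsplit=1)[0].lower()
            if name in AI_PACKAGES:
                findings.append(f"manifest: {name} in requirements.txt")
    pkg = repo / "package.json"
    if pkg.exists():
        for name in json.loads(pkg.read_text()).get("dependencies", {}):
            if name in AI_PACKAGES:
                findings.append(f"manifest: {name} in package.json")

    # Surface 2: import statements in source, which catches vendored or
    # dynamically loaded dependencies that never appear in a manifest.
    for src in repo.rglob("*.py"):
        for match in IMPORT_PATTERN.finditer(src.read_text(errors="ignore")):
            findings.append(f"import: {match.group(1)} in {src}")

    return findings
```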

API log analysis scans outbound network traffic records for calls to known AI provider endpoints — the OpenAI completions API, Anthropic Messages API, HuggingFace Inference API, and provider-specific gateway URLs. This catches AI usage where the library itself is not present in the repository, because calls are being made from a script, a notebook, or a curl command.
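A simplified illustration of endpoint matching; the log line format and the endpoint list are assumptions made for this sketch:

```python
# Simplified endpoint matching against outbound traffic records. The log
# line format and the endpoint list are assumptions for this sketch.
AI_ENDPOINTS = {
    "api.openai.com": "OpenAI",
    "api.anthropic.com": "Anthropic",
    "api-inference.huggingface.co": "HuggingFace Inference API",
    "bedrock-runtime.us-east-1.amazonaws.com": "AWS Bedrock",
}

def flag_ai_traffic(log_lines: list[str]) -> list[tuple[str, str]]:
    """Return (provider, log line) pairs for calls to known AI endpoints."""
    return [
        (provider, line)
        for line in log_lines
        for host, provider in AI_ENDPOINTS.items()
        if host in line
    ]

# Catches usage with no library in any repository: a curl command or an
# ad hoc notebook still has to reach the provider's endpoint.
print(flag_ai_traffic([
    "2026-02-19T09:14:02Z 10.0.4.17 -> api.openai.com:443 POST /v1/chat/completions",
]))
```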

The Exposure Heatmap

Audital visualises shadow AI exposure as a heatmap across two dimensions: AI provider and repository (or team). The horizontal axis lists providers — OpenAI, Anthropic, HuggingFace, AWS Bedrock, Azure, Google Vertex. The vertical axis lists connected repositories or organisational units. Each cell shows the count of detected AI invocations or library imports in that combination.

The heatmap makes concentration risk immediately visible. A single repository with 47 OpenAI API calls and no registered AI system is a clear signal. A team with multiple providers and no governance documentation shows up as a block of amber and red cells against a background of compliant green. Compliance teams can see the exposure profile of the entire engineering organisation at a glance, without needing to audit each repository individually.
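Conceptually, the heatmap is a count matrix over (repository, provider) pairs. A minimal sketch of that aggregation, with made-up detection tuples standing in for scanner output:

```python
# Sketch of the aggregation behind the heatmap: detections rolled up into
# a (repository, provider) count matrix. The tuples are made-up stand-ins
# for scanner output.
from collections import Counter

detections = [
    ("payments-api", "OpenAI"), ("payments-api", "OpenAI"),
    ("risk-models", "HuggingFace"), ("risk-models", "Anthropic"),
]

cells = Counter(detections)  # one heatmap cell per (repository, provider)

for (repo, provider), count in sorted(cells.items()):
    print(f"{repo:<14} {provider:<12} {count}")
```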

Governance in One Click: Detect, Acknowledge, Register

Detection is only the first step. The value of Shadow AI detection is not in producing a list of problems — it is in enabling a structured remediation workflow that converts unregistered AI systems into governed, documented assets.

Audital’s remediation flow has four stages. A detected shadow AI system appears as an OPEN detection. A compliance officer or model owner acknowledges the detection, confirming they are aware of it and taking ownership. The system is then registered in the model inventory — name, intended purpose, owner, risk tier, and framework classification all captured at this point. From that moment, the audit trail begins: every subsequent event in the model’s lifecycle is captured, chained, and timestamped. The system moves from invisible to fully governed in a single workflow.
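The chaining idea can be illustrated in a few lines of Python: each lifecycle event embeds the SHA-256 hash of its predecessor, so no earlier event can be altered without breaking every hash that follows. This is a conceptual sketch, not Audital's implementation:

```python
# Conceptual sketch of a hash-chained audit trail (not Audital's actual
# implementation): each lifecycle event embeds the SHA-256 hash of its
# predecessor, so no earlier event can be altered without breaking every
# hash that follows it.
import hashlib
import json
from datetime import datetime, timezone

def append_event(chain: list[dict], event_type: str, detail: dict) -> None:
    event = {
        "type": event_type,
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    chain.append(event)

chain: list[dict] = []
append_event(chain, "DETECTED", {"repo": "payments-api", "provider": "OpenAI"})
append_event(chain, "ACKNOWLEDGED", {"owner": "model-risk-team"})
append_event(chain, "REGISTERED", {"risk_tier": "high",
                                   "classification": "EU AI Act Annex III"})
```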

The act of acknowledging and registering a previously undetected AI system is itself recorded as an event in the audit chain. Firms can demonstrate to regulators not just that they have a governed AI inventory, but that they actively identified and brought into governance AI systems that were previously operating outside it. That is a materially better position than one in which a regulator discovers the same shadow AI systems through their own review.

See Shadow AI Detection in Action

[Demo video, 0:58: a Shadow AI scan finds 2 unregistered endpoints in under 60 seconds.]

The Audital sandbox contains a simulated repository environment with pre-seeded shadow AI detections. Run a scan, see the heatmap, and step through the acknowledge → register → govern workflow — without connecting your production infrastructure.

Run a Shadow AI scan in the Sandbox →


Audital Compliance Team

audital.ai