AI Governance Technology Market Map
The vendor landscape across the seven governance layers, plus the discovery prerequisite below and the cross-layer platforms that span them. Click any layer label to read the corresponding article.
- Audit trails & explainability → Decision interpretability on demand
- Agent gateways → Single control plane for traffic
- Guardian agents → AI overseeing AI
- Agent posture management → Built secure, deployed dangerous
- Guardrails → Policy at runtime, not in a doc
- Identity & authorization → Every agent needs a badge
- Agent registry → Inventory everything — foundation
- Discovery → Find what you don't know is running
Frequently Asked Questions
A quick reference for the seven-layer governance stack and the AI governance market it maps.
About the framework
What is the seven-layer governance stack?
The seven-layer governance stack is a framework for managing enterprise AI agents, organized by accountability layer rather than by product category. It comprises a Layer 0 prerequisite, discovery, and seven layers above it: agent registry, identity and authorization, guardrails, posture management, guardian agents, agent gateways, and audit trails. Each layer addresses a distinct accountability question, and each is a prerequisite for the layer above it. The framework was developed across the Managing the Digital Workforce series and exists to give operators a way to identify which vendors solve which problem, which layers are competitive, and where the structural gaps in the market still sit.
What does Digital Workforce Management mean?
Digital Workforce Management is the discipline of governing AI agents the same way enterprises govern human employees. It treats agents as workers with identities, scopes, owners, performance records, and accountability chains rather than as features inside applications. The framing matters because it changes the operating question from "how do we manage AI risk" to "how do we manage our digital workers," which lands differently with operations leaders, maps to existing organizational structures (HR, audit, supervision), and acknowledges that the problem is operational before it is technical. Deloitte's 2026 Tech Trends calls this the "silicon-based workforce." McKinsey describes "the agentic organization." The framing the Managing the Digital Workforce series advances is one of several converging on the same idea.
How is agent governance different from model governance?
Agent governance and model governance address fundamentally different problems. Model governance covers what was built into the model weights: training data lineage, bias evaluation, model documentation, output validation. Agent governance covers everything in the harness around the model: the tools the agent can call, the MCP servers it connects to, the data it accesses, the orchestration logic that decides when to spawn sub-agents, and the human oversight workflow that catches problems before they reach production. Most "AI governance" platforms govern models. Enterprise risk lives in the harness. The seven-layer governance stack is a harness governance framework, which is why it surfaces gaps that model-centric analyst reports do not.
Why should AI agents be managed like workers?
AI agents do work, take actions with consequences, access enterprise data, make decisions, and require oversight. The organizations that treat them as features inside applications discover too late that they have no inventory, no accountability chain, no audit trail, and no operating model for managing them at scale. Treating agents as workers means giving each one an owner, a scope, an identity, performance criteria, and a documented chain of accountability when something goes wrong. It is the operating model that converts AI investment into AI outcomes. Read the full argument in the Managing the Digital Workforce series capstone.
The governance stack: Layer 0 through Layer 7
What is Layer 0 — Discovery?
Layer 0 is the foundation of the governance stack: finding the AI agents, MCP servers, embedded AI features, and shadow AI usage running inside an organization. It addresses the problem that 80% of workers use unapproved AI tools and 73% of workplace ChatGPT usage happens through personal accounts invisible to IT. Vendors include Knostic, Harmonic Security, Netskope, Snyk Agent Scan, AppOmni, and LayerX. Read more in The Agent You Don't Know About, part of the Managing the Digital Workforce series.
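One common Layer 0 heuristic is comparing egress traffic against the sanctioned-tool list. A minimal sketch of that check, assuming illustrative domain lists and a hypothetical `shadow_ai_hits` helper (not drawn from any vendor named above):

```python
# Illustrative domain lists -- real deployments would pull these from a
# curated GenAI-domain feed and the organization's sanctioned-tool catalog.
SANCTIONED = {"copilot.microsoft.com"}
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com",
                 "copilot.microsoft.com"}

def shadow_ai_hits(proxy_log_domains: list) -> set:
    """GenAI destinations seen in egress traffic but not sanctioned."""
    seen = set(proxy_log_domains)
    return (seen & GENAI_DOMAINS) - SANCTIONED
```

Running it over a day of proxy logs surfaces the unsanctioned GenAI destinations that Layer 0 tooling would flag for triage.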
What is Layer 1 — Agent Registry?
Layer 1 is the system of record for the digital workforce: every agent, sanctioned and shadow, with its owner, scope, data access, risk tier, and deployment status. It addresses the problem that organizations cannot govern what they have not inventoried, and most enterprises have no central catalog of the AI agents running across their environment. Vendors include ModelOp, Trustible, Credo AI, Saidot, FairNow (AuditBoard), and the major platform registries from IBM, Microsoft, Google, and AWS. Read more in Your AI Strategy Is Only as Strong as Your Inventory, part of the Managing the Digital Workforce series.
What is Layer 2 — Identity and Authorization?
Layer 2 is the identity layer for AI agents: distinct credentials, scoped access, time-bound tokens, and a revocation path for every agent. It addresses the non-human identity problem and is built on a five-protocol stack (PKCE, DPoP, OAuth OBO, Token Exchange, and CAEP) that answers the governance question "who is this agent and who authorized it to act?" Vendors include Microsoft Entra Agent ID, Okta, Auth0, HashiCorp Vault, Permit.io, ConductorOne, and the open-source SPIFFE/SPIRE and OPA projects. Read more in Every Agent Needs a Badge, part of the Managing the Digital Workforce series.
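To make one leg of that protocol stack concrete, here is a sketch of the form parameters an agent would send for an OAuth Token Exchange (RFC 8693). The audience URL, token value, and scope string are hypothetical; the `grant_type` and token-type URNs are the ones the RFC defines.

```python
def build_token_exchange_request(subject_token: str, audience: str,
                                 scope: str) -> dict:
    """Form parameters for an OAuth 2.0 Token Exchange (RFC 8693) request."""
    return {
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # The user's token the agent is acting on behalf of:
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # Downstream API the agent will call, with a narrowed scope:
        "audience": audience,
        "scope": scope,
    }

# Hypothetical values for illustration only:
params = build_token_exchange_request("eyJ...user-token",
                                      "https://api.example.com/crm",
                                      "crm:read")
```

The governance point is in the narrowing: the agent never inherits the user's full scope, only the slice the exchange grants for this task.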
What is Layer 3 — Guardrails?
Layer 3 is policy enforcement at runtime: the framework for deciding which agent actions run autonomously, which require human approval before anything happens, and which never run at all. It addresses the gap between governance documents and production behavior, mapping agent actions to risk tiers using frameworks like Cynefin (simple to chaotic). Vendors include Holistic AI, Prodago, Aporia, CalypsoAI, Permit.io, AWS Bedrock Guardrails, and the open-source Guardrails AI, NVIDIA NeMo, Llama Guard, and LangGraph interrupt projects. Read more in Not Every Decision Needs a Human, part of the Managing the Digital Workforce series.
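The runtime routing this layer describes reduces to a policy table mapping actions to dispositions. A minimal sketch, where the action names and the fail-closed default are illustrative assumptions rather than any vendor's implementation:

```python
from enum import Enum

class Disposition(Enum):
    AUTONOMOUS = "run without review"
    HUMAN_APPROVAL = "pause until a human approves"
    BLOCKED = "never run"

# Illustrative risk-tiered policy table:
POLICY = {
    "read_internal_doc": Disposition.AUTONOMOUS,
    "send_customer_email": Disposition.HUMAN_APPROVAL,
    "delete_production_data": Disposition.BLOCKED,
}

def route(action: str) -> Disposition:
    # Unknown actions fail closed: escalate to a human rather than guess.
    return POLICY.get(action, Disposition.HUMAN_APPROVAL)
```

The fail-closed default is the design choice that distinguishes runtime enforcement from a governance document: an action the policy has never seen pauses for approval instead of running.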
What is Layer 4 — Agent Posture Management?
Layer 4 is continuous monitoring of how agents are actually deployed and behaving in production: configuration drift, scope creep, behavioral anomalies, and the gap between an agent's pre-deployment risk profile and its actual runtime behavior. It addresses the "built secure, deployed dangerous" problem, which is distinct from Layer 0 discovery (finding agents you didn't know about) because Layer 4 monitors the agents you registered. Vendors include Microsoft Defender, Wiz, Palo Alto AI-SPM, Reco AI, Securiti, Teramind, DTEX, and Knostic. Read more in Built Secure, Deployed Dangerous, part of the Managing the Digital Workforce series.
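One concrete posture check is scope creep: diffing the scopes an agent was granted in the registry against the scopes it actually exercised at runtime. A minimal sketch with illustrative scope names:

```python
def scope_creep(registered: set, observed: set) -> set:
    """Scopes the agent exercised that were never granted in the registry."""
    return observed - registered

# Illustrative example: the agent was registered read-only but wrote to CRM.
registered = {"crm:read", "calendar:read"}
observed = {"crm:read", "calendar:read", "crm:write"}
drift = scope_creep(registered, observed)   # -> {"crm:write"}
```

A non-empty result is exactly the "built secure, deployed dangerous" gap: the pre-deployment profile said one thing, production behavior says another.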
What is Layer 5 — Guardian Agents?
Layer 5 is AI oversight of AI: dedicated agents that watch for behavioral drift, anomalous access patterns, and policy violations across the agent fleet. As agent populations scale, humans cannot monitor every transaction, so the answer becomes algorithmic supervision running continuously. Gartner published its inaugural Market Guide for Guardian Agents in February 2026, formally recognizing the category. Vendors include Wayfound, Opsin, Holistic AI Guardian Agents, Lakera, Protect AI, Robust Intelligence (Cisco), Mindgard, and Apiiro Guardian Agent. Read more in AI Watching AI, part of the Managing the Digital Workforce series.
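Algorithmic supervision can start as simply as a guardian loop comparing each agent's behavior to its own baseline. A toy sketch, where the metric (tool-call rate) and the deviation threshold are illustrative assumptions, not drawn from any product above:

```python
def flag_anomalies(baselines: dict, observed: dict,
                   factor: float = 3.0) -> list:
    """Agents whose observed call rate exceeds factor x their own baseline."""
    return [agent for agent, rate in observed.items()
            if rate > factor * baselines.get(agent, 0.0)]

# Illustrative fleet: agent "b" suddenly makes 5x its usual call volume.
baselines = {"a": 10.0, "b": 10.0}
observed = {"a": 12.0, "b": 50.0}
flagged = flag_anomalies(baselines, observed)   # -> ["b"]
```

Production guardian agents apply far richer behavioral models, but the shape is the same: a supervisor process that never sleeps, watching every agent against a learned baseline.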
What is Layer 6 — Agent Gateways?
Layer 6 is the single control plane for agent traffic: a centralized chokepoint where agent-to-tool calls, agent-to-agent communications, and MCP server connections are inspected, enforced, logged, and routed. It addresses the visibility and governance gap that emerges when agent traffic spreads across providers, harnesses, and integrations without a unified control point. Vendors include Bifrost (Maxim AI), Kong AI Gateway, Zenity, Cloudflare AI Gateway, AWS Bedrock, Databricks AI Gateway, CalypsoAI, Prompt Security, and the open-source Portkey and LiteLLM projects. Read more in The Single Chokepoint, part of the Managing the Digital Workforce series.
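The chokepoint idea reduces to one function every call must pass through, which enforces policy and writes an audit line before forwarding. A toy sketch; the allowlist, log format, and `gateway_call` helper are illustrative, not any gateway's API:

```python
import json
import time

AUDIT_LOG = []                              # in production: an append-only store
ALLOWED_TOOLS = {"search", "crm_lookup"}    # illustrative allowlist

def gateway_call(agent_id: str, tool: str, args: dict) -> dict:
    """Inspect, log, and either forward or refuse an agent-to-tool call."""
    allowed = tool in ALLOWED_TOOLS
    AUDIT_LOG.append(json.dumps({
        "ts": time.time(), "agent": agent_id,
        "tool": tool, "args": args, "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{tool} is not permitted for {agent_id}")
    # ... forward to the real tool or MCP server here ...
    return {"status": "forwarded"}
```

Note that the refused call is logged before the exception is raised: the audit trail records what was attempted, not just what succeeded.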
What is Layer 7 — Audit Trails and Explainability?
Layer 7 is decision interpretability on demand: a reconstructable chain of reasoning showing what an agent did, what data it accessed, what policy governed its action, who approved it, and why. It addresses the EU AI Act's August 2026 enforcement deadline, which makes audit trail capabilities non-negotiable for high-risk AI systems. Vendors include Seekr, Cobbai, PolicyLayer, AuditBoard, Modus, Datadog LLM Observability, Maxim AI, IBM watsonx.gov, Microsoft Purview, and the open-source Langfuse and Arize Phoenix projects. Read more in Proving What Happened, part of the Managing the Digital Workforce series.
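A reconstructable chain of reasoning implies an audit record with specific fields. A sketch of what a single log line would need to carry to answer the questions above; the field names are illustrative, not a standard schema:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AuditRecord:
    agent_id: str                       # who acted
    action: str                         # what the agent did
    data_accessed: list = field(default_factory=list)  # what data it touched
    policy_id: str = ""                 # which policy governed the action
    approved_by: str = "autonomous"     # who approved it, if anyone
    rationale: str = ""                 # why, from the reasoning trace

# Illustrative record for a hypothetical refund action:
record = AuditRecord("agent-042", "refund_issued", ["order-9912"],
                     "policy/refunds-v3", "j.doe",
                     "amount under auto-approve limit")
line = json.dumps(asdict(record))       # one append-only log line
```

Each field maps to one of the layer's questions; an audit trail that cannot populate every one of them cannot reconstruct the decision on demand.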
Sources & Further Research
The research that anchored this market map, plus pointers for readers who want to go deeper.
Sources cited
Analyst reports
- Gartner Market Guide for AI Governance Platforms (November 2025) — the AIGP market definition that this map both uses and critiques
- Gartner Market Guide for Guardian Agents (February 2026) — the inaugural market guide that formalized Layer 5
- Gartner Market Guide for AI TRiSM (February 2025) — security-focused governance category
- Gartner Hype Cycle for Agentic AI 2026 — including governance, security, and cost-focused profiles
- IDC MarketScape: Worldwide Unified AI Governance Platforms 2025-2026 — leader and major-player positioning
- IAPP AI Governance Vendor Report 2026 v1.2 — service-type taxonomy of the governance vendor landscape
- Forrester Wave: AI Governance Platforms (2026) — competitive evaluation of cross-layer platforms
Industry research and surveys
- IBM X-Force Threat Intelligence Index 2026 — source for the 82:1 machine-to-human identity ratio
- Gravitee State of AI Agent Security 2026 — source for the 14.4% of agents going live with full security approval and the 47% monitoring rate
- Grant Thornton 2026 AI Governance Audit Readiness Survey — source for 78% audit-readiness gap among senior leaders
- Salesforce 2026 Connectivity Report — enterprise integration data
- Databricks 2026 State of AI Agents — adoption and deployment patterns
- Harmonic Security: 22 Million Enterprise AI Prompts Report (2026) — shadow AI usage data
- UpGuard 2026 Shadow AI Report — source for 80%+ unsanctioned AI tool usage and 665+ distinct GenAI apps in enterprise environments
- Deloitte 2026 Tech Trends — "silicon-based workforce" framing
- Forrester 2026 Predictions — digital employee management forecast for HCM platforms
Regulatory and standards frameworks
- EU AI Act — high-risk AI provisions, August 2026 enforcement deadline
- NIST AI Agent Standards Initiative — launched February 2026, comment period closed April 2026
- NSA AI/ML Supply Chain Risks and Mitigations Guidance (March 2026) — AIBOM recommendations
- OWASP AIBOM Project — AI Bill of Materials specification
- Linux Foundation AI-BOM with SPDX 3.0 — open standard for AI dependency mapping
- Kubernetes AI Gateway Working Group (announced March 2026) — network-layer AI governance standards
- OpenTelemetry GenAI Semantic Conventions v1.37 — observability standard for AI agents
Technical research
- LangChain — The Anatomy of an Agent Harness (March 2026) — Agent = Model + Harness framework
- Daily Dose of DS — The Anatomy of an Agent Harness (April 2026) — harness simplification design constraint
- Anthropic Model Context Protocol (donated to Linux Foundation, December 2025) — agent-to-tool standard, 97M monthly SDK downloads
The Managing the Digital Workforce series — The CTO's Edge
The full nine-part framework that anchors this market map:
- Article 1: Your AI Strategy Is Only as Strong as Your Inventory — Layer 1: Agent Registry
- Article 2: Every Agent Needs a Badge — Layer 2: Identity and Authorization
- Article 3: Not Every Decision Needs a Human — Layer 3: Guardrails
- Article 4: Built Secure, Deployed Dangerous — Layer 4: Agent Posture Management
- Article 5 — series mid-point synthesis
- Article 6: AI Watching AI — Layer 5: Guardian Agents
- Article 7: The Single Chokepoint — Layer 6: Agent Gateways
- Article 8: Proving What Happened — Layer 7: Audit Trails and Explainability
- Article 9: The Agent You Don't Know About — Layer 0: Discovery
- Capstone: AI Governance Market Landscape — series synthesis and the framework essay that accompanies this market map
For further research
For readers who want to go deeper than this market map, the following resources are worth tracking.
Regulatory text
- EU AI Act full text — the actual statute, with article-level navigation
- NIST AI Risk Management Framework — the foundational US framework
- NIST AI Agent Standards Initiative — newer, agent-specific
- The White House AI Bill of Rights — principles framework underlying US policy
Open-source projects worth tracking
- Model Context Protocol (MCP) — the agent-to-tool standard, now Linux Foundation
- OpenTelemetry GenAI Semantic Conventions — observability standard
- Cordum — open-source AI agent governance control plane (January 2026)
- VerifyWise — open-source AI registry
- Guardrails AI — open-source guardrails framework
- LangGraph — interrupt/resume pattern for HITL workflows
- HumanLayer — SDK for human approval workflows
- Temporal — durable execution for long-running agent workflows
- Bifrost — open-source MCP gateway and LLM router (Maxim AI)
- Langfuse — open-source LLM observability
Analyst content on AI governance
- Gartner research portal — search "AI governance," "AI TRiSM," "guardian agents," "agentic AI"
- Forrester research — search "AI governance platforms," "digital workforce"
- IDC — "AI governance," "responsible AI"
- IAPP (International Association of Privacy Professionals) — vendor reports and member research
Industry reports
- IBM X-Force Threat Intelligence Index — annual security landscape including AI agent threats
- Anthropic's Responsible Scaling Policy — model provider perspective on AI safety
- OpenAI's Preparedness Framework — same, from a different provider
- AI Incident Database — documented cases of AI failures and harms
Vendors mentioned in the map
The map names roughly 130 vendors across nine categories. For procurement research, vendor websites are the canonical source for current product capabilities, pricing, and integrations. Gartner Peer Insights, G2, and TrustRadius offer practitioner reviews. The cross-layer platforms (IBM watsonx, Microsoft, Google Vertex AI, AWS, DataRobot, Databricks, SAS, Holistic AI) are best evaluated against specific layer requirements rather than as comprehensive solutions.
Related frameworks worth knowing
- NIST AI RMF — risk-based framework, complements the layer-based approach in this map
- ISO/IEC 42001 — first international AI management system standard (December 2023)
- OECD AI Principles — international policy framework
- Cynefin Framework — referenced in Layer 3 for risk-tiered decision-making
- Zero Trust Architecture (NIST SP 800-207) — foundational for Layer 2 identity work
Read the capstone essay
The AI Governance Market Landscape
The reasoning behind the map. What the existing analyst taxonomies miss, why model governance and agent governance are different problems, and the structural whitespace nobody has filled.
Read the essay →