AI agents built with Agent Builder in Microsoft 365 Copilot, Copilot Studio, and Microsoft Foundry must operate within enterprise‑grade security, governance, and compliance boundaries. As agents gain autonomy, access business data, and take action across systems, organizations must ensure they remain secure by design, governed throughout their lifecycle, and aligned with corporate risk and compliance requirements.
This pillar covers how organizations use Microsoft's integrated security and governance stack to protect AI agents, manage risk, and maintain trust. It also covers how organizations establish the guardrails, controls, and operating discipline required to ensure agents operate securely, compliantly, and predictably without slowing innovation.
Note
Operations and lifecycle management and Responsible AI and trust are cross‑cutting capabilities. To support clearer maturity assessment, they're separate pillars in this maturity model, even though they're embedded across security, technology, and process execution in practice.
Why security and governance matter for AI agents
Agents amplify human intent by acting within the context of identity, data, and permissions. Without strong governance, this same capability can introduce risk through unintended data exposure, inconsistent behavior, or unclear accountability.
Strong security and governance provide the foundation that allows agent adoption to scale safely. They ensure that agent behavior is observable, controlled, and auditable, and that increasing autonomy is matched with clear decision rights, lifecycle oversight, and risk management. This foundation helps innovation progress without compromising safety or reliability.
What high maturity looks like
At high maturity, security and governance are embedded, scalable, and enabling.
Characteristics include:
- Organizations govern agents using consistent, enterprise‑wide standards.
- Identity, data access, and compliance controls are enforced by default.
- Organizations make agent behavior observable through logs, telemetry, and review mechanisms.
- Human oversight and escalation paths are clearly defined for each agent class.
- Governance enables faster adoption rather than slowing it down.
How to read the maturity table
The table describes how security and governance capabilities evolve across five maturity levels.
For each level, notice:
- State of security and governance: Observable characteristics at that level.
- Opportunity to progress: Practical focus areas that enable the next level of maturity.
Organizations often operate at different levels depending on agent criticality. For example, internal productivity agents might require lighter controls than customer-facing or decision-making agents.
Security and governance maturity
| Level | State of security and governance | Opportunity to progress |
|---|---|---|
| 100: Initial | | |
| 200: Repeatable | | |
| 300: Defined | | |
| 400: Capable | | |
| 500: Efficient | | |
Common anti-patterns
No inventory and no ownership. Teams create and share agents without a reliable registry, lifecycle status, or accountable owner, which makes audits and incident response slow and inconsistent.
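A minimal registry addresses this anti-pattern. The sketch below is illustrative only: the record fields (owner, lifecycle status, last-reviewed date) are assumptions for the example, not a Microsoft schema or API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRecord:
    """One entry in a central agent inventory (hypothetical fields)."""
    agent_id: str
    name: str
    owner: str              # accountable person or team; empty means unowned
    lifecycle_status: str   # e.g. "draft", "published", "retired"
    last_reviewed: date

registry: dict[str, AgentRecord] = {}

def register(record: AgentRecord) -> None:
    """Add or update an agent in the inventory."""
    registry[record.agent_id] = record

def unowned_agents() -> list[AgentRecord]:
    """Agents with no accountable owner -- an audit red flag."""
    return [r for r in registry.values() if not r.owner]

register(AgentRecord("a-001", "Expense Helper", "finance-ops", "published", date(2025, 1, 15)))
register(AgentRecord("a-002", "Legacy Bot", "", "published", date(2023, 6, 1)))
print([r.agent_id for r in unowned_agents()])  # flags the agent with no owner
```

Even a simple inventory like this makes audits and incident response tractable: every agent resolves to an owner and a lifecycle state.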
Controls are "guidance-only" instead of enforceable. Teams document policies but don't translate them into enforceable technical controls (for example, data governance, data policy, and sensitivity constraints), so compliance depends on individual behavior.
Missing or ignored environment strategy. Makers build and publish in the same environment without clear separation or guardrails, which increases the risk of accidental exposure and weakens change control.
Treating all agents as the same (no tiered approach by risk and criticality). Organizations apply one set of controls to every agent. This approach either over‑restricts low‑risk personal productivity agents (driving shadow AI), or under‑governs departmental and mission‑critical agents (creating security and compliance gaps). A tiered approach is needed because risk and governance requirements increase as you move from personal productivity to department and team collaboration to enterprise and mission‑critical workloads.
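A tiered control model can be expressed as a simple mapping from risk tier to required controls. The tier names and control names below are assumptions chosen for illustration, not a prescribed taxonomy.

```python
# Hypothetical tiers and controls illustrating a risk-tiered governance model.
TIER_CONTROLS: dict[str, set[str]] = {
    "personal": {"dlp_policy", "audit_logging"},
    "departmental": {"dlp_policy", "audit_logging",
                     "owner_review", "environment_separation"},
    "mission_critical": {"dlp_policy", "audit_logging",
                         "owner_review", "environment_separation",
                         "adversarial_testing", "human_escalation_path"},
}

def missing_controls(tier: str, applied: set[str]) -> set[str]:
    """Controls still required before an agent in this tier may ship."""
    return TIER_CONTROLS[tier] - applied

# A departmental agent with only baseline controls still has gaps.
gaps = missing_controls("departmental", {"dlp_policy", "audit_logging"})
print(sorted(gaps))
```

The point of the tiering is that the required control set grows monotonically with risk, so low-risk personal agents aren't over-restricted while mission-critical agents can't ship with baseline controls alone.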
Data policy and connector governance aren't treated as an "agent safety boundary." Teams allow agents to connect broadly (connectors, actions, HTTP) without consistent policy constraints, which increases data exfiltration and unintended action risk.
Audit and monitoring are afterthoughts. Teams don't centralize logs, create dashboards, or connect security operations center (SOC) workflows with agent data. Teams only learn about risky behavior after incidents escalate.
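Centralized monitoring starts with structured, machine-readable audit records. The event shape below is a sketch under assumed field names, not a SOC or Microsoft log standard.

```python
import json
from datetime import datetime, timezone

def audit_event(agent_id: str, action: str, actor: str, outcome: str) -> str:
    """Emit one timestamped audit record as a JSON line so SOC tooling
    (dashboards, SIEM ingestion) can consume agent activity centrally."""
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "actor": actor,
        "outcome": outcome,  # e.g. "allowed", "blocked", "escalated"
    })

line = audit_event("a-001", "send_email", "user@contoso.com", "allowed")
print(line)
```

Emitting events in a consistent schema at the time an agent acts, rather than reconstructing activity after an incident, is what makes risky behavior visible before it escalates.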
Security posture isn't continuously validated. Teams don't check runtime protection status, run automatic security scans (where available), or perform systematic adversarial testing before release and major updates.
Cost and usage governance is unmanaged. Teams don't allocate or monitor token, usage, and capacity costs, so spend grows without visibility and governance can't prioritize what to scale or retire.
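Usage governance can begin with a simple per-agent ledger. The token counts and budget threshold below are illustrative assumptions, not real pricing or capacity units.

```python
from collections import defaultdict

# Illustrative usage ledger: cumulative token consumption per agent.
usage_tokens: dict[str, int] = defaultdict(int)

def record_usage(agent_id: str, tokens: int) -> None:
    """Accumulate token usage attributed to one agent."""
    usage_tokens[agent_id] += tokens

def over_budget(budget: int) -> list[str]:
    """Agents whose cumulative usage exceeds the budget -- candidates
    for review, throttling, or retirement."""
    return sorted(a for a, t in usage_tokens.items() if t > budget)

record_usage("a-001", 120_000)
record_usage("a-002", 15_000)
print(over_budget(100_000))
```

With even this level of visibility, governance teams can see where spend concentrates and prioritize which agents to scale, optimize, or retire.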
Using this pillar in practice
Security and governance should scale with agent autonomy and impact.
As you progress:
- Make controls more automated and embedded.
- Keep decision rights explicit and review them regularly.
- Consider governance metrics alongside value metrics.
Strong security and governance provide the trust and clarity needed to scale AI agents responsibly, so organizations can move faster.
Next step
Next, explore how value realization and outcomes ensure that well-governed agents deliver measurable and sustained business impact.
Related information
- Administering and Governing Agents
- Copilot Control System security and governance
- Microsoft Agent 365 documentation
- Microsoft 365 Copilot adoption site
- AI Agents adoption site
- Data, Privacy, and Security for Microsoft 365 Copilot
- Copilot Studio security and governance
- Manage your Copilot Studio projects
- Governance and security for AI agents across the organization