This article walks you through migrating hosted agents from the initial public preview to the refreshed public preview of Foundry Agent Service. The refreshed preview introduces a new hosting backend, protocol libraries, identity model, and management APIs.
Important
The initial public preview hosting backend is being retired. You must redeploy your agents using the new model described in this article. Existing agent deployments on the old backend won't be migrated automatically and will be supported only until May 22, 2026.
This guide applies to you if you deployed a hosted agent before April 2026 using the azure-ai-agentserver-agentframework or azure-ai-agentserver-langgraph packages, or any custom code that used the initial preview hosting APIs.
What changed
The refreshed preview updates the existing platform with a session-based sandbox model. Key changes:
- Automatic compute lifecycle — No manual start, stop, or replica management. The platform provisions compute when a request arrives and deprovisions it after 15 minutes of inactivity. See CLI command mapping.
- Session-based isolation — Each session gets its own sandbox with persistent `$HOME` and `/files` storage across turns and idle periods.
- Protocol libraries replace framework adapters — The framework-specific adapter packages (`azure-ai-agentserver-agentframework`, `azure-ai-agentserver-langgraph`) are replaced by protocol-specific libraries (`azure-ai-agentserver-responses`, `azure-ai-agentserver-invocations`). See Protocol library and framework migration.
- Dedicated agent identity from deploy time — Every agent gets its own Entra identity at creation, replacing the shared project managed identity model. See Identity and RBAC changes.
- Dedicated agent endpoint — Each agent gets its own endpoint URL (for example, `{project_endpoint}/agents/{name}/endpoint/protocols/openai/v1/responses`). You no longer route through a shared project endpoint with `agent_reference` in the request body. See Agent invocation changes.
- New protocols — Invocations, Activity, and A2A protocols join the existing Responses protocol. A single agent can expose multiple protocols simultaneously.
- REST API for full lifecycle — Complete REST coverage for agent, version, session, and file operations. See SDK method changes.
- Capability host creation removed — The platform handles infrastructure provisioning automatically. You no longer need to create an account-level capability host. See Removed APIs.
Prerequisites
- Azure AI Projects SDK version 2.1.0 or later (was 2.0.0).
- Azure Developer CLI version 1.23.0 or later with the updated Foundry agents extension: `azd ext install azure.ai.agents`
Migration steps at a glance
The following steps summarize the end-to-end migration. Each links to the detailed section.
- Update protocol libraries and agent code — Replace framework adapters with the new protocol libraries and update your agent entry point. Choose your path: Agent Framework, LangGraph, or custom/BYO.
- Update API, CLI, and SDK calls — Remove retired CLI commands, update SDK methods, and switch to the dedicated agent endpoint. See Removed APIs, CLI command mapping, SDK method changes, and Agent invocation changes.
- Update identity and RBAC — Grant downstream resource access to the agent's dedicated Entra identity. See Identity and RBAC changes.
- Update Azure Developer CLI tooling — Install the latest `azd` Foundry agents extension and update `agent.yaml`. See Azure Developer CLI changes.
- Redeploy and verify — Build your container image, deploy using `azd up` or the SDK, and confirm the version reaches `active` status.
For a task-by-task summary, see the Migration checklist at the end of this article.
Protocol library and framework migration
The initial preview used framework-specific adapter packages (azure-ai-agentserver-agentframework, azure-ai-agentserver-langgraph) that wrapped your agent code. The refreshed preview replaces these with protocol-specific libraries and updated framework integration packages.
Your migration path depends on which framework you use:
- Microsoft Agent Framework — Use the updated Agent Framework packages with the `ResponsesHostServer` bridge.
- LangGraph — Use the `azure-ai-agentserver-responses` protocol library directly with `ResponsesAgentServerHost`.
- CrewAI, Semantic Kernel, or custom code — Use the protocol libraries directly (`azure-ai-agentserver-responses` or `azure-ai-agentserver-invocations`).
Package changes
Protocol libraries (all users)
| Initial preview package | Refreshed preview replacement |
|---|---|
| `azure-ai-agentserver-core` | `azure-ai-agentserver-core` 2.0.0b1 — still required, now installed automatically as a dependency of the protocol packages |
| `azure-ai-agentserver-agentframework` | Removed — see Agent Framework or protocol library paths below |
| `azure-ai-agentserver-langgraph` | Removed — use `azure-ai-agentserver-responses` or `azure-ai-agentserver-invocations` directly |
| `Azure.AI.AgentServer.Core` (.NET) | `Azure.AI.AgentServer.Core` 1.0.0-beta.21 — still required as a dependency |
| `Azure.AI.AgentServer.AgentFramework` (.NET) | `Azure.AI.AgentServer.Responses` 1.0.0-beta.1 or `Azure.AI.AgentServer.Invocations` 1.0.0-beta.1 |
Agent Framework packages (Agent Framework users only)
The Agent Framework packages were also updated for the refreshed preview:
| Initial preview | Refreshed preview |
|---|---|
| `agent-framework` (single package) | `agent-framework-core`, `agent-framework-openai`, `agent-framework-foundry`, `agent-framework-orchestrations` |
| `AzureAIAgentClient` | `FoundryChatClient` (from `agent_framework.foundry`) |
| `ChatAgent` | `Agent` (from `agent_framework`) |
| `@ai_function` decorator | `@tool` decorator with `approval_mode` parameter |
| Not available | `agent-framework-foundry-hosting` — bridge between Agent Framework and the protocol library |
Migrate Agent Framework agents
If your agent uses the Microsoft Agent Framework, use the ResponsesHostServer bridge from agent-framework-foundry-hosting. This approach keeps your Agent Framework code (agent definition, tools, instructions) intact while using the new protocol library under the hood.
Initial preview:
from azure.ai.agentserver.agentframework import from_agent_framework
from agent_framework import ai_function, ChatAgent
from agent_framework.azure import AzureAIAgentClient
client = AzureAIAgentClient(
project_endpoint=PROJECT_ENDPOINT,
model_deployment_name="gpt-4.1",
credential=DefaultAzureCredential(),
)
@ai_function
def get_weather(location: str) -> str:
"""Get the weather for a location."""
return f"The weather in {location} is sunny."
agent = ChatAgent(
chat_client=client,
instructions="You are a helpful assistant.",
tools=[get_weather],
)
if __name__ == "__main__":
from_agent_framework(agent).run()
Refreshed preview:
import os
from agent_framework import Agent, tool
from agent_framework.foundry import FoundryChatClient
from agent_framework_foundry_hosting import ResponsesHostServer
from azure.identity import DefaultAzureCredential
from pydantic import Field
from typing_extensions import Annotated
client = FoundryChatClient(
project_endpoint=os.environ["FOUNDRY_PROJECT_ENDPOINT"],
model=os.environ["MODEL_DEPLOYMENT_NAME"],
credential=DefaultAzureCredential(),
)
@tool(approval_mode="never_require")
def get_weather(
location: Annotated[str, Field(description="The location to get the weather for.")],
) -> str:
"""Get the weather for a location."""
return f"The weather in {location} is sunny."
agent = Agent(
client=client,
instructions="You are a helpful assistant.",
tools=[get_weather],
default_options={"store": False},
)
server = ResponsesHostServer(agent)
server.run()
Key differences:
- `AzureAIAgentClient` → `FoundryChatClient` (from `agent_framework.foundry`).
- `ChatAgent` → `Agent` (from `agent_framework`).
- `@ai_function` → `@tool(approval_mode="never_require")` with `Annotated` type hints for parameter descriptions.
- `from_agent_framework(agent).run()` → `ResponsesHostServer(agent).run()`.
- Add `default_options={"store": False}` because conversation history is managed by the hosting platform.
For MCP tools, use `client.get_mcp_tool()` instead of defining tools in the `create_version` API:
mcp_tool = client.get_mcp_tool(
name="GitHub",
url="https://api.githubcopilot.com/mcp/",
headers={"Authorization": f"Bearer {github_pat}"},
approval_mode="never_require",
)
agent = Agent(client=client, tools=[mcp_tool], ...)
For samples, see the Agent Framework hosted agent samples.
Note
For .NET (C#) Agent Framework migration, the pattern uses the `AddFoundryResponses` and `MapFoundryResponses` ASP.NET extensions instead of `ResponsesHostServer`. See the .NET Agent Framework hosted agent samples for complete examples.
Migrate LangGraph agents
If your agent uses LangGraph, replace the azure-ai-agentserver-langgraph adapter with the azure-ai-agentserver-responses protocol library. Your LangGraph agent logic (graph definition, tools, LLM configuration) stays the same — only the hosting entry point changes.
Initial preview:
from azure.ai.agentserver.langgraph import from_langgraph
from langchain_openai import AzureChatOpenAI
from langgraph.prebuilt import create_react_agent
llm = AzureChatOpenAI(azure_endpoint=ENDPOINT, azure_deployment="gpt-4o", ...)
tools = [my_tool_a, my_tool_b]
graph = create_react_agent(llm, tools=tools, prompt=SYSTEM_PROMPT)
if __name__ == "__main__":
from_langgraph(graph).run()
Refreshed preview:
import asyncio
import os
import httpx
from azure.ai.agentserver.responses import (
CreateResponse,
ResponseContext,
ResponsesAgentServerHost,
ResponsesServerOptions,
TextResponse,
)
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from langchain_core.messages import AIMessage, HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent
FOUNDRY_PROJECT_ENDPOINT = os.environ["FOUNDRY_PROJECT_ENDPOINT"]
MODEL = os.environ.get("AZURE_AI_MODEL_DEPLOYMENT_NAME", "gpt-4.1")
_token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://ai.azure.com/.default"
)
# httpx auth hook that injects a fresh Microsoft Entra token on every request.
class _AzureTokenAuth(httpx.Auth):
def auth_flow(self, request):
request.headers["Authorization"] = f"Bearer {_token_provider()}"
yield request
llm = ChatOpenAI(
base_url=f"{FOUNDRY_PROJECT_ENDPOINT}/openai/v1",
api_key="placeholder", # overridden by _AzureTokenAuth
model=MODEL,
use_responses_api=True,
http_client=httpx.Client(auth=_AzureTokenAuth()),
)
tools = [my_tool_a, my_tool_b]
graph = create_react_agent(llm, tools=tools, prompt=SYSTEM_PROMPT)
app = ResponsesAgentServerHost(
options=ResponsesServerOptions(default_fetch_history_count=20)
)
@app.response_handler
async def handle(
request: CreateResponse,
context: ResponseContext,
cancellation_signal: asyncio.Event,
):
async def run_graph():
try:
history = await context.get_history()
except Exception:
history = []
user_input = await context.get_input_text() or ""
# Convert platform history to LangChain messages
lc_messages = []
for item in history:
if hasattr(item, "content"):
for c in item.content:
if hasattr(c, "text") and c.text:
if item.role == "user":
lc_messages.append(HumanMessage(content=c.text))
else:
lc_messages.append(AIMessage(content=c.text))
lc_messages.append(HumanMessage(content=user_input))
result = await graph.ainvoke({"messages": lc_messages})
raw = result["messages"][-1].content
if isinstance(raw, list):
yield "".join(
block.get("text", "") if isinstance(block, dict) else str(block)
for block in raw
)
else:
yield raw or ""
return TextResponse(context, request, text=run_graph())
if __name__ == "__main__":
app.run()
Key differences:
- `azure-ai-agentserver-langgraph` → `azure-ai-agentserver-responses`. The LangGraph-specific adapter is removed.
- `from_langgraph(graph).run()` → Explicit `ResponsesAgentServerHost` with a `@app.response_handler` that returns a `TextResponse`.
- Uses `ChatOpenAI` with `base_url=f"{FOUNDRY_PROJECT_ENDPOINT}/openai/v1"` instead of `AzureChatOpenAI`. This uses the project-scoped endpoint, which requires only project-level permissions.
- Conversation history is fetched via `context.get_history()` and converted to LangChain message types for multi-turn support.
- LangGraph agent logic (tools, graph creation) is unchanged. For fine-grained control over function calls, reasoning items, or multiple output types, use `ResponseEventStream` instead of `TextResponse`.
MCP Toolbox integration
To connect your LangGraph agent to tools in the Foundry Toolbox via MCP, use langchain-mcp-adapters inside your handler. Load tools dynamically from the MCP endpoint:
from langchain_mcp_adapters.tools import load_mcp_tools
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
@app.response_handler
async def handle(request, context, cancellation_signal):
user_input = await context.get_input_text()
endpoint = os.environ["TOOLBOX_ENDPOINT"]
token = DefaultAzureCredential().get_token("https://ai.azure.com/.default").token
headers = {
"Authorization": f"Bearer {token}",
"Foundry-Features": "Toolsets=V1Preview",
}
async with streamablehttp_client(endpoint, headers=headers) as (r, w, _):
async with ClientSession(r, w) as session:
await session.initialize()
tools = await load_mcp_tools(session)
graph = create_react_agent(llm, tools=tools, prompt=SYSTEM_PROMPT)
result = await graph.ainvoke(
{"messages": [{"role": "user", "content": user_input}]},
)
# ... extract answer and return TextResponse
Add these packages to your requirements.txt:
langchain-mcp-adapters>=0.1.0
mcp>=1.0.0
For complete samples, see the LangGraph hosted agent samples.
Migrate custom or BYO agents
If you use CrewAI, Semantic Kernel, or other custom code, use the protocol library directly. The protocol libraries are framework-agnostic — you handle orchestration, tools, and memory in your own code.
Responses protocol — Use ResponsesAgentServerHost for conversational agents. Register your handler with the @app.response_handler decorator:
import asyncio
from azure.ai.agentserver.responses import (
CreateResponse,
ResponseContext,
ResponsesAgentServerHost,
TextResponse,
)
app = ResponsesAgentServerHost()
@app.response_handler
async def handler(
request: CreateResponse,
context: ResponseContext,
cancellation_signal: asyncio.Event,
):
text = await context.get_input_text()
return TextResponse(context, request, text=f"Echo: {text}")
app.run()
For streaming responses, pass an async iterable to TextResponse. For fine-grained control over function calls, reasoning items, or multiple output types, use ResponseEventStream instead of TextResponse.
Invocations protocol — Use InvocationAgentServerHost for agents that need arbitrary JSON payloads (webhooks, non-conversational processing). The handler uses Starlette Request/Response types directly:
from azure.ai.agentserver.invocations import InvocationAgentServerHost
from starlette.requests import Request
from starlette.responses import JSONResponse, Response
app = InvocationAgentServerHost()
@app.invoke_handler
async def handle(request: Request) -> Response:
data = await request.json()
return JSONResponse({"greeting": f"Hello, {data['name']}!"})
app.run()
The Invocations protocol also supports long-running operations with @app.get_invocation_handler and @app.cancel_invocation_handler for polling and cancellation.
Choose your protocol based on your agent's interaction pattern. See What are hosted agents — Protocols for guidance on which protocol to use.
Protocol version format change
The protocol version format changed from "v1" to semver "1.0.0":
# Initial preview
ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")
# Refreshed preview
ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="1.0.0")
Removed APIs
The following APIs from the initial preview aren't available in the refreshed preview:
| Removed API | Reason |
|---|---|
| `az cognitiveservices agent start` | Compute lifecycle is automatic — no manual start needed |
| `az cognitiveservices agent stop` | Compute deprovisions automatically after 15 minutes of inactivity |
| `az cognitiveservices agent update` | Replaced by `PATCH /agents/{name}` for endpoint routing; create a new version for runtime changes |
| `az cognitiveservices agent delete-deployment` | Delete the version directly instead |
| `az cognitiveservices agent list-versions` | Use `az rest --method GET` against the REST API |
| `az cognitiveservices agent show` | Use `az rest --method GET` or `azd ai agent show` |
| Capability host creation (`PUT .../capabilityHosts/accountcaphost`) | Platform handles infrastructure automatically |
| `tools` parameter in `create_version` | Tools are accessed via the Foundry Toolbox MCP endpoint at runtime |
CLI command mapping
| Initial preview CLI | Refreshed preview equivalent |
|---|---|
| `az cognitiveservices agent start --name X --agent-version 1` | Removed — compute starts automatically on first request |
| `az cognitiveservices agent stop --name X --agent-version 1` | Removed — compute stops automatically after idle timeout |
| `az cognitiveservices agent update --min-replicas N --max-replicas M` | Removed — no replica management |
| `az cognitiveservices agent show --name X` | `az rest --method GET --url "$BASE_URL/agents/X" --resource "https://ai.azure.com"` |
| `az cognitiveservices agent list-versions --name X` | `az rest --method GET --url "$BASE_URL/agents/X/versions" --resource "https://ai.azure.com"` |
| `az cognitiveservices agent delete --name X` | `az rest --method DELETE --url "$BASE_URL/agents/X" --resource "https://ai.azure.com"` |
| `az cognitiveservices agent delete --name X --agent-version 1` | `az rest --method DELETE --url "$BASE_URL/agents/X/versions/1" --resource "https://ai.azure.com"` |
| `az cognitiveservices agent delete-deployment --name X --agent-version 1` | Removed — delete the version instead |
Where `BASE_URL` is `https://{account}.services.ai.azure.com/api/projects/{project}`.
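The `az rest` commands in the table all share the same URL shapes. As a sanity check when scripting, a small helper like the following can assemble them. This is an illustrative sketch based only on the URL patterns shown in this article; the functions are not part of any SDK.

```python
from typing import Optional

# Illustrative helpers that assemble the management REST URLs used with
# `az rest`. URL shapes are taken from the table above; nothing here is
# part of an Azure SDK.

def project_base_url(account: str, project: str) -> str:
    """Build BASE_URL for a Foundry project."""
    return f"https://{account}.services.ai.azure.com/api/projects/{project}"

def agent_url(base_url: str, agent: str, version: Optional[str] = None) -> str:
    """Build the URL for an agent, or for one of its versions."""
    url = f"{base_url}/agents/{agent}"
    if version is not None:
        url += f"/versions/{version}"
    return url

base = project_base_url("contoso-account", "my-project")
print(agent_url(base, "my-agent"))
print(agent_url(base, "my-agent", "1"))
```

You would pass the resulting URL to `az rest --url` together with `--resource "https://ai.azure.com"`, as in the table above.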
SDK method changes
| Initial preview | Refreshed preview |
|---|---|
| `pip install "azure-ai-projects>=2.0.0"` | `pip install "azure-ai-projects>=2.1.0"` |
| `project.get_openai_client()` with `extra_body={"agent_reference": {"name": ..., "type": "agent_reference"}}` | `project.get_openai_client(agent_name="my-agent")` — client is pre-bound, no `extra_body` needed |
| `ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")` | `ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="1.0.0")` |
| `tools=[...]` in `HostedAgentDefinition` | Removed — use the Foundry Toolbox MCP endpoint instead |
| Not available | `project.beta.agents.create_session(agent_name, isolation_key=..., version_indicator=...)`, `.get_session()`, `.list_sessions()`, `.delete_session(isolation_key=...)` |
| Not available | `project.beta.agents.download_session_file(path=...)`, `.get_session_files(path=...)`, `.delete_session_file(path=...)` |
| Not available | `project.beta.agents.patch_agent_details()` for endpoint routing and traffic splitting |
| Not available | `metadata={"enableVnextExperience": "true"}` parameter on `client.agents.create_version()` |
Agent invocation changes
In the initial preview, you routed to agents through a shared project endpoint by passing an agent_reference in the request body. In the refreshed preview, each agent gets a dedicated endpoint and the SDK binds to it automatically.
Initial preview:
openai_client = project.get_openai_client()
response = openai_client.responses.create(
input=[{"role": "user", "content": "Hello!"}],
extra_body={"agent_reference": {"name": "my-agent", "type": "agent_reference"}}
)
Refreshed preview:
openai_client = project.get_openai_client(agent_name="my-agent")
response = openai_client.responses.create(
input="Hello!",
)
print(response.output_text)
Note
Using `agent_name` requires `allow_preview=True` when constructing the `AIProjectClient`:
project = AIProjectClient(
credential=DefaultAzureCredential(),
endpoint=PROJECT_ENDPOINT,
allow_preview=True,
)
The agent_name parameter tells the SDK to target the agent's dedicated endpoint. For REST calls, use the agent endpoint directly:
curl -X POST "$BASE_URL/agents/my-agent/endpoint/protocols/openai/v1/responses?api-version=$API_VERSION" \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-H "Foundry-Features: HostedAgents=V1Preview" \
-d '{"input": "Hello!", "model": "gpt-4.1", "stream": false}'
Important
REST calls to hosted agent endpoints require the Foundry-Features: HostedAgents=V1Preview header during preview. Without it, the request returns a preview_feature_required error. The SDK sets this header automatically.
Active endpoints depend on the protocols you declare in your agent version definition. The Responses and Conversations routes live under the OpenAI-compatible namespace at {project_endpoint}/agents/{name}/endpoint/protocols/openai/v1/{responses|conversations}, while Invocations, Activity, and A2A route directly at {project_endpoint}/agents/{name}/endpoint/protocols/{invocations|activityprotocol|a2a}.
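The routing rules above can be summarized in a small lookup. This is an illustrative sketch: the route strings come from this article, and the function is not part of any SDK.

```python
# Illustrative sketch of the dedicated-endpoint routing rules described above.
# Responses and Conversations live under the OpenAI-compatible namespace;
# Invocations, Activity (route name "activityprotocol"), and A2A route
# directly under /protocols.

OPENAI_ROUTES = {"responses", "conversations"}

def protocol_endpoint(project_endpoint: str, agent: str, protocol: str) -> str:
    """Return the dedicated endpoint URL for one of an agent's protocols."""
    base = f"{project_endpoint}/agents/{agent}/endpoint/protocols"
    if protocol in OPENAI_ROUTES:
        return f"{base}/openai/v1/{protocol}"
    return f"{base}/{protocol}"

ep = "https://contoso.services.ai.azure.com/api/projects/my-project"
print(protocol_endpoint(ep, "my-agent", "responses"))
print(protocol_endpoint(ep, "my-agent", "a2a"))
```

Which of these routes is actually live depends on the protocols declared in your agent version definition.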
Version status changes
The agent lifecycle states changed from a manual state machine to automatic provisioning statuses:
| Initial preview state | Refreshed preview status |
|---|---|
| `Stopped` (initial) | Not applicable — no stopped state |
| `Starting` → `Started` | `creating` → `active` |
| `Failed` | `failed` |
| `Running` → `Stopping` → `Stopped` | Not applicable — compute deprovisions automatically |
| Not available | `deleting` → `deleted` |
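After redeploying, you typically wait for the new version to leave `creating` and reach `active` before sending traffic. A minimal, SDK-agnostic polling sketch follows; `get_status` is any callable you supply (for example, a wrapper around the REST `GET` for the version) and is an assumption of this sketch, not an SDK API.

```python
import time

# SDK-agnostic polling sketch. `get_status` is a callable you supply that
# returns the version's current status string; the status names come from
# the table above.

def wait_until_active(get_status, timeout_s=600, interval_s=5.0):
    """Poll until the version reaches 'active'; raise on 'failed' or timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status()
        if status == "active":
            return status
        if status == "failed":
            raise RuntimeError("agent version deployment failed")
        time.sleep(interval_s)
    raise TimeoutError("agent version did not become active in time")

# Example with a fake status source that becomes active on the third poll.
statuses = iter(["creating", "creating", "active"])
print(wait_until_active(lambda: next(statuses), interval_s=0.0))
```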
Identity and RBAC changes
The identity model changed significantly:
| Aspect | Initial preview | Refreshed preview |
|---|---|---|
| Unpublished agent runtime identity | Project managed identity (shared) | Dedicated Entra agent identity (per agent) |
| When dedicated identity is created | At publish time only | At deploy time (every agent) |
| Project managed identity role | Runtime identity for all unpublished agents | Infrastructure only — used for container image pulls |
| Required deployment role | Azure AI Owner (new project), AI Owner + Contributor (new resources), or Reader + Azure AI User (existing project) | Azure AI Project Manager at project scope |
| Post-publish RBAC reconfiguration | Required — project MI permissions don't transfer to agent identity | Not required — agent has its own identity from the start |
Action required
- Update RBAC assignments: The project managed identity is no longer the runtime identity. Grant RBAC roles for any downstream Azure resources directly to the agent's Entra identity instead.
- Simplify deployment roles: You need Azure AI Project Manager at project scope to create and deploy hosted agents.
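As one example of granting downstream access, you might assign an Azure RBAC role to the agent identity's service principal with the Azure CLI. This is an illustrative template: the role, scope, and object ID are placeholders you substitute, and the way you look up the agent identity's object ID is not covered in this article.

```shell
# Illustrative only. Replace the placeholders with your agent identity's
# object (principal) ID and the resource scope the agent needs to reach.
AGENT_PRINCIPAL_ID="<agent-identity-object-id>"
SCOPE="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"

az role assignment create \
  --assignee-object-id "$AGENT_PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Blob Data Reader" \
  --scope "$SCOPE"
```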
Azure Developer CLI changes
Updated commands
| Initial preview | Refreshed preview |
|---|---|
| `azd init -t https://github.com/Azure-Samples/azd-ai-starter-basic` | `azd ai agent init` (interactive template selection) |
| `azd ai agent init --project-id /subscriptions/.../projects/...` | Same syntax, still supported |
| `azd up` | Same — provisions, builds, pushes, creates version |
| `azd down` | Same — cleans up resources |
| Not available | `azd ai agent show` — view agent status |
| Not available | `azd ai agent monitor` — real-time logs and status |
| Not available | `azd ai agent invoke --input "..."` — invoke the agent |
| Not available | `azd ai agent files upload/list/download/remove` — session file management |
Action required
- Update the Foundry agents extension: `azd ext install azure.ai.agents`
- If your `agent.yaml` specifies `version: "v1"` for protocol versions, change it to `version: "1.0.0"`.
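If it helps to visualize the edit, a hypothetical `agent.yaml` fragment might look like this. Only the protocol version string is taken from this article; the surrounding field names are assumptions and may differ in your file.

```yaml
# Hypothetical fragment: the version string change is the only point here.
protocols:
  - protocol: responses
    version: "1.0.0"   # was version: "v1" in the initial preview
```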
Log streaming changes
| Aspect | Initial preview | Refreshed preview |
|---|---|---|
| Endpoint | `.../versions/{v}/containers/default:logstream` | `.../versions/{v}/sessions/{sessionId}:logstream` |
| Response format | Plain text (chunked) | Server-Sent Events (SSE) with JSON payloads |
| Query parameters | `kind=console\|system`, `tail=20`, `replica_name` | Simplified — no query parameters needed |
| Max connection | 10 minutes | 30 minutes |
| Idle timeout | 1 minute | 2 minutes |
| `azd` access | Not available | `azd ai agent monitor` |
Known gaps
The following capabilities from the initial preview aren't yet available in the refreshed preview:
| Feature | Status | Workaround |
|---|---|---|
| `az cognitiveservices agent` CLI extension | Removed — no first-party CLI commands | Use `az rest` for REST API calls or `azd ai agent` for developer workflows |
| Non-versioned metadata updates (description, tags) | Not yet available via SDK | Use `az rest --method PATCH` against the REST API |
| Explicit replica scaling (min/max replicas) | Replaced by session-based auto-scaling | Sessions scale automatically; no configuration needed |
| Delete deployment without deleting version | Not available | Delete the version directly; create a new version when needed |
Migration checklist
Use this checklist to track your migration:
- Update `azure-ai-projects` SDK to version 2.1.0 or later.
- Agent Framework users: Update Agent Framework packages (`agent-framework-core`, `agent-framework-foundry`, `agent-framework-foundry-hosting`, etc.). Replace `from_agent_framework(agent).run()` with `ResponsesHostServer(agent).run()`. Update `AzureAIAgentClient` → `FoundryChatClient`, `ChatAgent` → `Agent`, and `@ai_function` → `@tool`.
- LangGraph users: Replace `azure-ai-agentserver-langgraph` with `azure-ai-agentserver-responses`. Replace `from_langgraph(graph).run()` with a `ResponsesAgentServerHost` handler that returns a `TextResponse`. Use `ChatOpenAI` with the project-scoped endpoint instead of `AzureChatOpenAI`. Add `langchain-mcp-adapters` and `mcp` if using Foundry Toolbox.
- Custom/BYO users: Replace framework adapter packages with protocol libraries (`azure-ai-agentserver-responses` or `azure-ai-agentserver-invocations`). Rewrite agent entry points using `ResponsesAgentServerHost` or `InvocationAgentServerHost`.
- Update protocol version strings from `"v1"` to `"1.0.0"` in code and `agent.yaml`.
- Update `agent.yaml` if using `azd` (protocol version format, remove any `tools` definitions from the agent definition).
- Remove `az cognitiveservices agent` CLI calls from scripts and CI/CD pipelines; replace with `az rest` or `azd ai agent` commands.
- Remove capability host creation steps from provisioning scripts.
- Update agent invocation code — use `project.get_openai_client(agent_name=...)` instead of `extra_body` with `agent_reference`.
- Review RBAC — grant downstream resource access to the agent's dedicated Entra identity, not the project managed identity.
- Update the `azd` Foundry agents extension to the latest version.
- Build your container image with `--platform linux/amd64` (if not already).
- Redeploy your agent using `azd up` or the SDK `create_version` method.
- Verify the new version reaches `active` status before sending traffic.