Python code interpreter agent tool

Azure Databricks provides system.ai.python_exec, a built-in Unity Catalog function that lets AI agents dynamically execute Python code, whether written by the agent, provided by a user, or retrieved from a codebase. It is available by default and can be used directly in a SQL query:

SELECT system.ai.python_exec('
import random
numbers = [random.random() for _ in range(10)]
print(numbers)
')
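
Conceptually, python_exec runs the supplied code and returns its captured stdout. A minimal local sketch of that contract (a toy stand-in for illustration only, not the actual Databricks implementation, which executes code in a secured sandbox):

```python
import contextlib
import io


def python_exec_sketch(code: str) -> str:
    """Toy stand-in for system.ai.python_exec: run code, return captured stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # fresh, empty globals for each call
    return buf.getvalue()


output = python_exec_sketch("""
import random
numbers = [random.random() for _ in range(10)]
print(numbers)
""")
print(output)
```

This mirrors why agents print results rather than return them: the tool's output is whatever the code writes to stdout.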

For more information about AI agent tools, see .

Add the code interpreter to your agent

To add python_exec to your agent, connect to the managed MCP server for the system.ai Unity Catalog schema. The code interpreter is available as a preconfigured MCP tool at https://<workspace-hostname>/api/2.0/mcp/functions/system/ai/python_exec.
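
The managed MCP endpoint URL follows a fixed pattern built from the workspace host and the function's three-level Unity Catalog name. A small helper (hypothetical, mirroring the URL shown above) makes the pattern explicit:

```python
def mcp_function_url(host: str, catalog: str, schema: str, function_name: str) -> str:
    """Build the managed MCP endpoint URL for a Unity Catalog function."""
    return f"{host}/api/2.0/mcp/functions/{catalog}/{schema}/{function_name}"


url = mcp_function_url(
    "https://example.cloud.databricks.com", "system", "ai", "python_exec"
)
print(url)
```

The same pattern applies to any UC function exposed through the managed MCP server, not just python_exec.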

OpenAI Agents SDK (Apps)

from agents import Agent, Runner
from databricks.sdk import WorkspaceClient
from databricks_openai.agents import McpServer

# WorkspaceClient picks up credentials from the environment (Databricks Apps, notebook, CLI)
workspace_client = WorkspaceClient()
host = workspace_client.config.host

# The context manager manages the MCP connection lifecycle and ensures cleanup on exit.
# from_uc_function constructs the endpoint URL from UC identifiers and wires in auth
# from workspace_client, avoiding hardcoded URLs and manual token handling.
async with McpServer.from_uc_function(
    catalog="system",
    schema="ai",
    function_name="python_exec",
    workspace_client=workspace_client,
    name="code-interpreter",
) as code_interpreter:
    agent = Agent(
        name="Coding agent",
        instructions="You are a helpful coding assistant. Use the python_exec tool to run code.",
        model="databricks-claude-sonnet-4-5",
        mcp_servers=[code_interpreter],
    )
    result = await Runner.run(agent, "Calculate the first 10 Fibonacci numbers")
    print(result.final_output)

Grant the app access to the function in databricks.yml:

resources:
  apps:
    my_agent_app:
      resources:
        - name: 'python_exec'
          uc_securable:
            securable_full_name: 'system.ai.python_exec'
            securable_type: 'FUNCTION'
            permission: 'EXECUTE'

LangGraph (Apps)

from databricks.sdk import WorkspaceClient
from databricks_langchain import ChatDatabricks, DatabricksMCPServer, DatabricksMultiServerMCPClient
from langgraph.prebuilt import create_react_agent

workspace_client = WorkspaceClient()
host = workspace_client.config.host

# DatabricksMultiServerMCPClient provides a unified get_tools() interface across
# multiple MCP servers, making it easy to add more servers later without refactoring.
mcp_client = DatabricksMultiServerMCPClient([
    DatabricksMCPServer(
        name="code-interpreter",
        url=f"{host}/api/2.0/mcp/functions/system/ai/python_exec",
        workspace_client=workspace_client,
    ),
])

async with mcp_client:
    tools = await mcp_client.get_tools()
    agent = create_react_agent(
        ChatDatabricks(endpoint="databricks-claude-sonnet-4-5"),
        tools=tools,
    )
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Calculate the first 10 Fibonacci numbers"}]}
    )
    # LangGraph returns the full conversation history; the last message is the agent's final response
    print(result["messages"][-1].content)

Grant the app access to the function in databricks.yml:

resources:
  apps:
    my_agent_app:
      resources:
        - name: 'python_exec'
          uc_securable:
            securable_full_name: 'system.ai.python_exec'
            securable_type: 'FUNCTION'
            permission: 'EXECUTE'

Model Serving

from databricks.sdk import WorkspaceClient
from databricks_mcp import DatabricksMCPClient
import mlflow

workspace_client = WorkspaceClient()
host = workspace_client.config.host

mcp_client = DatabricksMCPClient(
    server_url=f"{host}/api/2.0/mcp/functions/system/ai/python_exec",
    workspace_client=workspace_client,
)

tools = mcp_client.list_tools()

# get_databricks_resources() extracts the UC permissions the agent needs at runtime.
# Passing these to log_model lets Model Serving grant access automatically at deployment,
# without requiring manual permission configuration.
mlflow.pyfunc.log_model(
    "agent",
    python_model=my_agent,
    resources=mcp_client.get_databricks_resources(),
)

To deploy the agent, see Deploy an agent for generative AI applications (Model Serving). For more information about logging agents with MCP resources, see Use Databricks managed MCP servers.

Next steps