

Python code interpreter agent tool

Azure Databricks provides system.ai.python_exec, a built-in Unity Catalog function that lets AI agents dynamically execute Python code written by the agent, supplied by a user, or retrieved from a codebase. It is available by default and can be used directly in a SQL query:

SELECT python_exec("""
import random
numbers = [random.random() for _ in range(10)]
print(numbers)
""")
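Conceptually, python_exec runs the supplied snippet and returns whatever it prints to stdout. A minimal local sketch of that contract (illustrative only; the real function executes server-side in a secure, stateless sandbox):

```python
import contextlib
import io


def python_exec_sketch(code: str) -> str:
    """Run a Python snippet and return its captured stdout,
    mimicking the output contract of system.ai.python_exec."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, {})  # fresh namespace each call: the real tool is also stateless
    return buf.getvalue()


print(python_exec_sketch("print(sum(range(10)))"))  # → 45
```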

To learn more about agent tools, see AI agent tools.

Add the code interpreter to your agent

To add python_exec to your agent, connect to the managed MCP server for the Unity Catalog schema. The code interpreter is available as a preconfigured MCP tool at https://<workspace-hostname>/api/2.0/mcp/functions/system/ai/python_exec.
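The endpoint path follows the pattern /api/2.0/mcp/functions/<catalog>/<schema>/<function>. A small helper (hypothetical, not part of any SDK) that assembles the URL from Unity Catalog identifiers:

```python
def mcp_function_url(host: str, catalog: str, schema: str, function_name: str) -> str:
    """Build the managed MCP server URL for a Unity Catalog function."""
    # Strip a trailing slash so hosts like "https://.../" produce a clean path
    return f"{host.rstrip('/')}/api/2.0/mcp/functions/{catalog}/{schema}/{function_name}"


url = mcp_function_url(
    "https://example.cloud.databricks.com", "system", "ai", "python_exec"
)
print(url)  # → https://example.cloud.databricks.com/api/2.0/mcp/functions/system/ai/python_exec
```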

OpenAI Agents SDK (Apps)

from agents import Agent, Runner
from databricks.sdk import WorkspaceClient
from databricks_openai.agents import McpServer

# WorkspaceClient picks up credentials from the environment (Databricks Apps, notebook, CLI)
workspace_client = WorkspaceClient()

# The context manager manages the MCP connection lifecycle and ensures cleanup on exit.
# from_uc_function constructs the endpoint URL from UC identifiers and wires in auth
# from workspace_client, avoiding hardcoded URLs and manual token handling.
async with McpServer.from_uc_function(
    catalog="system",
    schema="ai",
    function_name="python_exec",
    workspace_client=workspace_client,
    name="code-interpreter",
) as code_interpreter:
    agent = Agent(
        name="Coding agent",
        instructions="You are a helpful coding assistant. Use the python_exec tool to run code.",
        model="databricks-claude-sonnet-4-5",
        mcp_servers=[code_interpreter],
    )
    result = await Runner.run(agent, "Calculate the first 10 Fibonacci numbers")
    print(result.final_output)

Grant the app access to the function in databricks.yml:

resources:
  apps:
    my_agent_app:
      resources:
        - name: 'python_exec'
          uc_securable:
            securable_full_name: 'system.ai.python_exec'
            securable_type: 'FUNCTION'
            permission: 'EXECUTE'

LangGraph (Apps)

from databricks.sdk import WorkspaceClient
from databricks_langchain import ChatDatabricks, DatabricksMCPServer, DatabricksMultiServerMCPClient
from langgraph.prebuilt import create_react_agent

workspace_client = WorkspaceClient()
host = workspace_client.config.host

# DatabricksMultiServerMCPClient provides a unified get_tools() interface across
# multiple MCP servers, making it easy to add more servers later without refactoring.
mcp_client = DatabricksMultiServerMCPClient([
    DatabricksMCPServer(
        name="code-interpreter",
        url=f"{host}/api/2.0/mcp/functions/system/ai/python_exec",
        workspace_client=workspace_client,
    ),
])

async with mcp_client:
    tools = await mcp_client.get_tools()
    agent = create_react_agent(
        ChatDatabricks(endpoint="databricks-claude-sonnet-4-5"),
        tools=tools,
    )
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Calculate the first 10 Fibonacci numbers"}]}
    )
    # LangGraph returns the full conversation history; the last message is the agent's final response
    print(result["messages"][-1].content)

Grant the app access to the function in databricks.yml:

resources:
  apps:
    my_agent_app:
      resources:
        - name: 'python_exec'
          uc_securable:
            securable_full_name: 'system.ai.python_exec'
            securable_type: 'FUNCTION'
            permission: 'EXECUTE'

Model Serving

from databricks.sdk import WorkspaceClient
from databricks_mcp import DatabricksMCPClient
import mlflow

workspace_client = WorkspaceClient()
host = workspace_client.config.host

mcp_client = DatabricksMCPClient(
    server_url=f"{host}/api/2.0/mcp/functions/system/ai/python_exec",
    workspace_client=workspace_client,
)

tools = mcp_client.list_tools()

# get_databricks_resources() extracts the UC permissions the agent needs at runtime.
# Passing these to log_model lets Model Serving grant access automatically at deployment,
# without requiring manual permission configuration.
# my_agent is your agent implementation, defined elsewhere (e.g., a PyFunc model)
mlflow.pyfunc.log_model(
    "agent",
    python_model=my_agent,
    resources=mcp_client.get_databricks_resources(),
)

To deploy the agent, see Deploy an agent for generative AI applications (Model Serving). For details on logging agents with MCP resources, see Use Databricks-managed MCP servers.

Next steps