This guide helps you get started with Terraform to manage Lakebase resources using the Azure Databricks Terraform provider. You'll create a project, add a development branch and endpoint, and then delete them when finished. This is a typical workflow for managing development and testing environments.
Tip
This guide covers a subset of available Terraform commands. For the complete resource reference and all available configuration options, see the Azure Databricks provider documentation on the Terraform Registry.
Prerequisites
Before you begin, you need:
- Terraform installed (version 1.0 or higher). See Install Terraform.
- A service principal configured for OAuth machine-to-machine (M2M) authentication with CAN MANAGE permission on the Lakebase project. This guide requires CAN MANAGE (CAN USE is insufficient because it does not allow creating or updating resources). See Authorize service principal access to Azure Databricks with OAuth and Manage project permissions.
Lakebase Autoscaling Terraform semantics
Lakebase Autoscaling resources use Terraform semantics with spec/status fields for declarative state management. The spec field defines your desired state, while the status field shows the current state.
Important
Drift detection and changes outside of Terraform
Changes made to Lakebase resources outside of Terraform (using the UI, CLI, or API) are not detected by Terraform's standard drift detection.
For complete details on how spec/status fields work, drift detection behavior, and state management requirements, see the databricks_postgres_project resource documentation.
Resource hierarchy
Lakebase resources follow a parent-child hierarchy: you create parent resources before children, and delete children before parents. For the full resource model (projects, branches, computes, databases, and more), see How projects are organized.
Order of operations for this guide: Project → Branch → Endpoint
Quickstart: Manage a Lakebase project with Terraform
Follow these steps to create a complete working project with a development branch and compute endpoint:
1. Set up authentication
Configure the Azure Databricks provider to authenticate using the service principal you configured in the prerequisites. Lakebase resources require OAuth authentication, so you set environment variables for your service principal's OAuth credentials:
```shell
export DATABRICKS_HOST="https://adb-<workspace-id>.<random-number>.azuredatabricks.net"
export DATABRICKS_CLIENT_ID="your-service-principal-client-id"
export DATABRICKS_CLIENT_SECRET="your-service-principal-secret"
```
Then configure your provider to use these environment variables:
```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "~> 1.0"
    }
  }
}

provider "databricks" {
  # Automatically uses DATABRICKS_HOST, DATABRICKS_CLIENT_ID,
  # and DATABRICKS_CLIENT_SECRET from environment variables
}
```
For more authentication options and details about OAuth configuration, see Authorize service principal access to Azure Databricks with OAuth and Databricks Terraform provider.
2. Create a project
A project is the top-level resource that contains branches, endpoints, databases, and roles.
Note
When you create a project, Azure Databricks automatically provisions a default branch named production with a read-write compute endpoint named primary. To configure either resource (for example, to enable high availability on the endpoint), declare a matching databricks_postgres_branch or databricks_postgres_endpoint with replace_existing = true. Terraform takes ownership of the existing resource by matching on those known IDs.
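For example, a minimal sketch of taking ownership of both default resources. This assumes the same attributes used for the dev branch and endpoint later in this guide; check the provider reference for the exact field names and any additional options (such as high availability settings):

```hcl
# Take ownership of the automatically created production branch
resource "databricks_postgres_branch" "production" {
  branch_id        = "production"
  parent           = databricks_postgres_project.app.name
  replace_existing = true
}

# Take ownership of its implicit read-write endpoint named primary
resource "databricks_postgres_endpoint" "production_primary" {
  endpoint_id = "primary"
  parent      = databricks_postgres_branch.production.name

  spec = {
    endpoint_type = "ENDPOINT_TYPE_READ_WRITE"
  }

  replace_existing = true
}
```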
Create a basic project:
```hcl
resource "databricks_postgres_project" "app" {
  project_id = "my-app"

  spec = {
    pg_version   = 17
    display_name = "My Application"
  }
}
```
Run these commands to format your configuration and create the project:
```shell
terraform fmt
terraform apply
```
3. Get a project
Get information about the project you just created using a data source:
```hcl
data "databricks_postgres_project" "this" {
  name = databricks_postgres_project.app.name
}

output "project_name" {
  value = data.databricks_postgres_project.this.name
}

output "project_pg_version" {
  value = try(data.databricks_postgres_project.this.status.pg_version, null)
}

output "project_display_name" {
  value = try(data.databricks_postgres_project.this.status.display_name, null)
}
```
Tip
Data sources return values in the status field. Use try() to safely access fields that might not be available in all provider versions.
Run these commands to apply the configuration and view the project details:
```shell
terraform apply
terraform output
```
4. Create a branch
Branches provide isolated database environments within a project.
Note
A default production branch is created automatically when you create a project, and includes an implicit read-write endpoint named primary. When you create additional branches like the dev branch below, each new branch also gets its own implicit primary read-write endpoint. Step 5 shows how to bring that endpoint under Terraform management.
In this example, you create a development branch:
```hcl
resource "databricks_postgres_branch" "dev" {
  branch_id = "dev"
  parent    = databricks_postgres_project.app.name

  spec = {
    no_expiry = true
  }
}

output "dev_branch_name" {
  value = databricks_postgres_branch.dev.name
}
```
Run these commands to create the branch and view its name:
```shell
terraform apply
terraform output dev_branch_name
```
5. Create an endpoint
Endpoints provide compute resources for executing queries against a branch.
Note
Every branch you create includes an implicitly created read-write endpoint named primary. To bring it under Terraform management and apply your own configuration to it, declare a databricks_postgres_endpoint resource with endpoint_id = "primary" and set replace_existing = true. This tells Terraform to take ownership of the existing endpoint instead of trying to create a new one. Without replace_existing, the apply fails with a conflicting-operations error.
Take ownership of the dev branch's primary endpoint and apply your configuration to it:
```hcl
resource "databricks_postgres_endpoint" "dev_primary" {
  endpoint_id = "primary"
  parent      = databricks_postgres_branch.dev.name

  spec = {
    endpoint_type = "ENDPOINT_TYPE_READ_WRITE"
  }

  replace_existing = true
}

output "dev_endpoint_name" {
  value = databricks_postgres_endpoint.dev_primary.name
}
```
Run these commands to apply the configuration and view the endpoint name:
```shell
terraform apply
terraform output dev_endpoint_name
```
For other endpoint patterns, including read-only replicas and custom autoscaling, see the databricks_postgres_endpoint reference.
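As a starting point, a hedged sketch of a read replica for the dev branch. The enum value ENDPOINT_TYPE_READ_ONLY is assumed by analogy with ENDPOINT_TYPE_READ_WRITE used above; confirm the exact value and any autoscaling fields in the provider reference before using it:

```hcl
# Hypothetical read-only replica on the dev branch
resource "databricks_postgres_endpoint" "dev_reader" {
  endpoint_id = "reader"
  parent      = databricks_postgres_branch.dev.name

  spec = {
    endpoint_type = "ENDPOINT_TYPE_READ_ONLY" # assumed enum value
  }
}
```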
6. List endpoints
List the endpoints in your development branch to view details about the read-write endpoint you created:
```hcl
data "databricks_postgres_endpoints" "dev" {
  parent = databricks_postgres_branch.dev.name
}

output "dev_endpoint_names" {
  value = [for e in data.databricks_postgres_endpoints.dev.endpoints : e.name]
}

output "dev_endpoint_types" {
  value = [
    for e in data.databricks_postgres_endpoints.dev.endpoints :
    try(e.status.endpoint_type, null)
  ]
}
```
Run these commands to apply the configuration and view the endpoint details:
```shell
terraform apply
terraform output dev_endpoint_names
terraform output dev_endpoint_types
```
Tip
When you run terraform apply and only outputs change (no infrastructure changes), Terraform shows "Changes to Outputs" and updates the state without modifying resources.
7. List branches
List all branches in your project. This returns two branches: the production branch that was created automatically with your project, and the development branch you created in a preceding step:
```hcl
data "databricks_postgres_branches" "all" {
  parent = databricks_postgres_project.app.name
}

output "branch_names" {
  value = [for b in data.databricks_postgres_branches.all.branches : b.name]
}
```
Run these commands to apply the configuration and view the branch names:
```shell
terraform apply
terraform output branch_names
```
8. Delete a branch
Now delete the development branch you created earlier. This is a typical workflow: create a branch for development or testing, and delete it when you're finished.
When deleting a branch, destroy any associated endpoints first, and then destroy the branch itself.
8.1 Destroy the endpoint
Destroy the endpoint for the development branch:
```shell
terraform destroy -target=databricks_postgres_endpoint.dev_primary
```
8.2 Destroy the branch
Destroy the development branch:
```shell
terraform destroy -target=databricks_postgres_branch.dev
```
8.3 Remove from configuration
After targeted destroy, remove or comment out the resource blocks from your configuration files to prevent Terraform from recreating them:
- Remove databricks_postgres_branch.dev and its outputs
- Remove databricks_postgres_endpoint.dev_primary and its outputs
- Update any data sources that reference the deleted branch (e.g., list_endpoints.tf)
Then reconcile the state:
```shell
terraform apply
```
Tip
Alternative: Remove all at once
You can also remove the resource blocks from your configuration first, then run terraform apply. Terraform will plan to destroy the resources. This approach shows you the full destruction plan before executing.
Serialize sibling resources with depends_on
Lakebase processes only one role, database, or endpoint operation at a time within a single branch. If you declare two sibling resources of these kinds in the same branch, and Terraform doesn't already have a dependency edge between them (for example, a database referencing a role through spec.role), Terraform tries to create them in parallel and one fails with a conflicting-operations error.
The fix is to add an explicit depends_on so Terraform serializes the apply:
```hcl
resource "databricks_postgres_role" "schema_owner" {
  role_id = "schemamigrator"
  parent  = databricks_postgres_branch.main.name

  spec = {
    postgres_role    = "schemamigrator"
    membership_roles = ["DATABRICKS_SUPERUSER"]
  }
}

resource "databricks_postgres_role" "application" {
  role_id = "application"
  parent  = databricks_postgres_branch.main.name

  spec = {
    postgres_role = "application"
  }

  depends_on = [databricks_postgres_role.schema_owner]
}
```
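By contrast, when one sibling already references the other, Terraform infers the dependency and no explicit depends_on is needed. A sketch of the database-to-role case mentioned above, assuming a databricks_postgres_database resource whose database_id field follows the same pattern as branch_id and role_id (both field names are assumptions; check the provider reference):

```hcl
# Hypothetical database that references a role through spec.role.
# The attribute reference gives Terraform an implicit dependency edge,
# so it creates the role before the database without depends_on.
resource "databricks_postgres_database" "app_db" {
  database_id = "appdb" # assumed field name, by analogy with branch_id/role_id
  parent      = databricks_postgres_branch.main.name

  spec = {
    role = databricks_postgres_role.application.spec.postgres_role
  }
}
```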
Next steps
- databricks_postgres_project on the Terraform Registry. Entry point for Lakebase resources. The Registry sidebar links from here to postgres_branch, postgres_endpoint, postgres_role, and postgres_database.
- Azure Databricks Terraform provider
- Manage Lakebase projects
- Branch-based development tutorial