Standalone pipelines

Lakeflow Spark Declarative Pipelines is the most common way to build data pipelines on Azure Databricks. You can also define standalone materialized views and streaming tables outside of Lakeflow Spark Declarative Pipelines using simple query syntax, and Azure Databricks manages the underlying pipelines for you. Standalone tables can be created and refreshed from a Databricks SQL warehouse or from a notebook running on serverless general compute.
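For example, a standalone streaming table and materialized view can be created with plain SQL statements like the following sketch. The catalog, schema, table names, and volume path are illustrative placeholders, not names from this documentation:

```sql
-- Create a standalone streaming table that incrementally ingests new JSON
-- files from a volume path (all names and the path are illustrative).
CREATE OR REFRESH STREAMING TABLE my_catalog.my_schema.raw_orders
AS SELECT *
FROM STREAM read_files(
  '/Volumes/my_catalog/my_schema/landing/orders',
  format => 'json'
);

-- Create a standalone materialized view that precomputes an aggregate over
-- the streaming table; Azure Databricks manages the refresh pipeline for you.
CREATE OR REPLACE MATERIALIZED VIEW my_catalog.my_schema.daily_order_totals
AS SELECT
  order_date,
  SUM(amount) AS total_amount
FROM my_catalog.my_schema.raw_orders
GROUP BY order_date;

-- Trigger a manual refresh of the materialized view.
REFRESH MATERIALIZED VIEW my_catalog.my_schema.daily_order_totals;
```

These statements can be run from a Databricks SQL warehouse or from a notebook attached to serverless compute; no pipeline definition is written by hand, because the managed pipeline is created for you when the table or view is created.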

Note

This section was previously called "Pipelines for Databricks SQL." It was renamed to "Standalone pipelines" to reflect new support for creating standalone materialized views and streaming tables from a notebook running on serverless general compute, in addition to a Databricks SQL warehouse.

This section describes how to use standalone pipelines and covers the following topics.

Requirements for standalone pipelines: Learn about the compute options for standalone materialized views and streaming tables, including feature support and regional availability.
Use standalone streaming tables: Create, refresh, configure, and monitor streaming tables.
Use standalone materialized views: Create, refresh, and query materialized views.

Additional resources