The managed Confluence connector in Lakeflow Connect allows you to ingest data from Confluence into Azure Databricks.
## What to know before you start
| Topic | Why it matters |
|---|---|
| Azure Databricks user persona | The workflow depends on your Azure Databricks user persona: admins create the Confluence connection, while non-admins create pipelines from an existing connection. |
| Authentication method | The steps to create a connection depend on the authentication method you choose. For supported methods, see Authentication methods. |
| Interface | The steps to create a pipeline depend on the interface. |
| Ingestion frequency | The pipeline schedule depends on your latency and cost requirements. |
| Common patterns | Depending on your ingestion needs, the pipeline might use configurations like history tracking, column selection, and row filtering. Supported configurations vary by connector. See Feature availability. |
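To make the ingestion-frequency trade-off concrete, the sketch below maps a coarse latency requirement onto a cron schedule of the kind you can attach to a pipeline trigger. The field names follow the Databricks Jobs API `CronSchedule` object (`quartz_cron_expression`, `timezone_id`); the specific cadences are illustrative examples, not recommendations.

```python
# Sketch: expressing an ingestion frequency as a cron schedule.
# Field names follow the Databricks Jobs API CronSchedule object;
# the cadence values below are illustrative, not recommendations.
import json


def make_schedule(cadence: str) -> dict:
    """Map a coarse latency requirement to an example Quartz cron schedule."""
    cadences = {
        "hourly": "0 0 * * * ?",    # lower latency, higher cost
        "daily": "0 0 6 * * ?",     # 06:00 UTC every day
        "weekly": "0 0 6 ? * MON",  # 06:00 UTC every Monday
    }
    return {
        "quartz_cron_expression": cadences[cadence],
        "timezone_id": "UTC",
        "pause_status": "UNPAUSED",
    }


print(json.dumps(make_schedule("daily"), indent=2))
```

A lower-latency schedule reruns the pipeline more often and therefore costs more; pick the coarsest cadence that still meets your freshness requirement.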
## Start ingesting from Confluence
The following table provides an overview of the end-to-end Confluence ingestion flow, based on user type:
| User | Steps |
|---|---|
| Admin | Create a Confluence connection, then create an ingestion pipeline that uses it (or let non-admin users create pipelines from the connection). |
| Non-admin | Use any supported interface to create a pipeline from an existing connection. See Ingest data from Confluence. |
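As a rough illustration of the flow above, the snippet below assembles the kind of pipeline spec a creation call might carry once an admin has created the connection. The overall shape is modeled on the `ingestion_definition` object in the Databricks Pipelines API, but the connection name, destination catalog/schema, and source object names are all hypothetical placeholders, and this builds only the request body rather than calling any API.

```python
# Sketch: assembling a managed-connector pipeline spec.
# The shape is modeled on the Pipelines API ingestion_definition;
# all connection, catalog, schema, and table names are hypothetical.
import json


def confluence_pipeline_spec(connection_name: str, dest_catalog: str,
                             dest_schema: str, objects: list[str]) -> dict:
    """Build a pipeline spec that ingests the given source objects."""
    return {
        "name": f"ingest-{connection_name}",
        "ingestion_definition": {
            "connection_name": connection_name,  # created by an admin
            "objects": [
                {
                    "table": {
                        "source_table": obj,
                        "destination_catalog": dest_catalog,
                        "destination_schema": dest_schema,
                    }
                }
                for obj in objects
            ],
        },
    }


spec = confluence_pipeline_spec(
    "my_confluence_connection",  # hypothetical connection name
    "main", "confluence_raw", ["pages", "spaces"],
)
print(json.dumps(spec, indent=2))
```

The split in the table falls out of this shape: the `connection_name` references an object only admins can create, while the rest of the spec is what a non-admin supplies when creating a pipeline from an existing connection.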