
Use SQL database as the source for translytical applications

Applies to: SQL database in Microsoft Fabric

This article describes how to use SQL database in Microsoft Fabric as the operational backbone for translytical apps: apps that require both transactional and analytical access to the same data.

The goal is to enable transactional workloads, operational APIs, and immediate analytical data in a single database, within the governed, secure Microsoft Fabric platform.

SQL analytics endpoint

When you create a SQL database in Fabric, its data is automatically replicated in near real time into OneLake, creating a read-only copy in Delta tables that is accessible via T-SQL through the SQL analytics endpoint. This architecture means that data generated by the transactional system is available for analytical queries without extra processing or complex ETL pipelines. By using the SQL analytics endpoint, you can analyze data in near real time without impacting your transactional workload. This data can be used for Power BI reporting, machine learning models, or other analytics within Fabric. The SQL analytics endpoint is also accessible to external applications that need analytical access to the data.
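The sketch below shows one way an external application might query the SQL analytics endpoint from Python. It is a minimal, hypothetical example: the server and database names are placeholders (copy the real connection string from the endpoint's settings in the Fabric portal), and it assumes the third-party `pyodbc` package and the Microsoft ODBC Driver 18 for SQL Server are installed.

```python
def endpoint_connection_string(server: str, database: str) -> str:
    """Build an ODBC connection string for the read-only SQL analytics endpoint.

    Microsoft Entra interactive authentication and encryption are used here;
    your environment may require a different Authentication value.
    """
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={server};Database={database};"
        "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
    )


def query_endpoint(server: str, database: str, sql: str):
    """Run an analytical T-SQL query against the near-real-time replica."""
    import pyodbc  # third-party; requires the ODBC driver on the machine
    with pyodbc.connect(endpoint_connection_string(server, database)) as conn:
        return conn.cursor().execute(sql).fetchall()
```

Because the endpoint is read-only, such queries never contend with the transactional writes happening in the SQL database itself.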

Examples include:

  • Healthcare: Monitor health metrics to provide real-time insights for healthcare professionals.
  • Finance: Analyze market data to optimize trading strategies in real time.
  • Retail: Analyze customer behavior to provide personalized shopping experiences.
  • Manufacturing: Monitor production processes to enhance product quality.

Translytical task flows

When you use Power BI to analyze data, there might be times when you need to take action on the data, such as updating a record, adding annotations, or triggering an action in another system based on the data. You can accomplish these tasks by using translytical task flows in Fabric.

User data functions enable scenarios such as:

  • Add data: Add a record of data to a table in your database and see it reflected in your report. For example, adding a new customer record.
  • Edit data: Edit an existing record of data in a table in your database and see it reflected in your report, without custom coding. For example, updating a status field or an annotation.
  • Delete data: Delete an existing record of data from a table in your database and see it removed from your report. For example, removing a customer record that's no longer needed.
  • Call an external API: Make an API request that's accessible through a network request. For example, making a request to the REST endpoint of a public API that updates the underlying data based on the end user's input, or that takes action in a different system.
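To make the edit scenario concrete, the sketch below builds the parameterized T-SQL statement such a writeback function might execute. The table and column names (`dbo.SalesOrder`, `Status`, `OrderId`) are hypothetical, not part of any Fabric API.

```python
def build_status_update(order_id: int, new_status: str):
    """Return a parameterized T-SQL UPDATE plus its parameter tuple.

    Using ? placeholders (the convention pyodbc uses) keeps the report
    user's input out of the SQL text, which guards against injection in
    writeback scenarios.
    """
    sql = "UPDATE dbo.SalesOrder SET [Status] = ? WHERE OrderId = ?;"
    return sql, (new_status, order_id)


sql, params = build_status_update(1042, "Shipped")
# A user data function would then run cursor.execute(sql, params) on its
# Fabric SQL database connection.
```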

For data writeback scenarios, user data functions provide connection management for SQL databases in Fabric, warehouses, and lakehouses. Here's how a translytical task flow can work:

  • User data functions in Fabric invoke functions on the underlying Fabric data sources. User data functions work with the current report context (filters, slicers, and selections) passed as parameters.
  • Because the SQL analytics endpoint maintains a near real-time replica in OneLake, updates written by task flows are available for immediate analytics and Power BI visuals without additional ETL.
  • Actions captured via user data functions can be routed to Fabric notebooks in ELT pipelines for AI-assisted processing (for example, classification or enrichment) before final persistence in the SQL database.
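As an illustration of the first bullet, here is one way (with hypothetical names) a function could turn the report's current selections into a parameterized filter:

```python
def selection_filter(column: str, selected_ids):
    """Turn report selections (for example, from a slicer) into a
    parameterized IN (...) predicate, one ? placeholder per value."""
    ids = list(selected_ids)
    if not ids:
        # Surface a clear error instead of silently updating nothing.
        raise ValueError("No rows selected in the report")
    placeholders = ", ".join("?" for _ in ids)
    return f"{column} IN ({placeholders})", tuple(ids)


clause, params = selection_filter("OrderId", [11, 17, 23])
# clause -> "OrderId IN (?, ?, ?)", params -> (11, 17, 23)
```

The function receives only the values the report passes as parameters, so the same code works whether the user filtered one row or hundreds.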

Building a translytical task flow involves these main tasks:

  1. Storing your data in a Fabric data source.
  2. Developing a user data function to handle the action.
  3. Creating a Power BI semantic model to use this data.
  4. Building a Power BI report with interactive elements to capture the user's input and call the function.
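Step 2 can be sketched as a plain function that validates the report's input before building the INSERT a user data function would run against the database. The table `dbo.Customer` and its columns are illustrative assumptions.

```python
def build_customer_insert(name: str, region: str):
    """Validate user input from the report, then return a parameterized
    INSERT for the "Add data" scenario.

    Rejecting bad input here surfaces a clear error at the report button
    instead of a failed database write.
    """
    name, region = name.strip(), region.strip()
    if not name or not region:
        raise ValueError("Name and region are required")
    sql = "INSERT INTO dbo.Customer (Name, Region) VALUES (?, ?);"
    return sql, (name, region)
```

Once the write commits, the near-real-time replication into OneLake makes the new record visible to the report's semantic model without any additional pipeline work.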