
Building a Simple Decision-Support Workflow Using Microsoft Fabric

By Syed Hussnain Sherazi | 2026-05-07 | Microsoft Fabric | Business Intelligence | Technical Guides


A practical walkthrough: from raw data to a dashboard your team will actually use

Most teams I talk to have heard of Microsoft Fabric but have not yet worked out how it fits into their day-to-day work. They have seen the demos. They know it involves a Lakehouse and OneLake and something called Shortcuts. But they have not seen a real, end-to-end workflow that connects the dots.

This post is my attempt to fix that.

I am going to walk through building a simple but realistic decision-support workflow using Microsoft Fabric: from raw data all the way to a dashboard that helps someone make a better decision. No jargon for the sake of it. Just a practical flow that you can adapt for your own situation.

What We Are Building

Let us say you work for a retail company. Your team needs to track sales performance across regions, understand which products are underperforming, and get early warnings when a store's revenue is trending below target.

Currently, the data lives in a SQL Server database in a datacentre. Someone exports it to Excel every Monday morning. The Excel file gets emailed around. Three people have different versions open simultaneously. No one is confident in the numbers.

We are going to replace that workflow with something that runs automatically, uses a single source of truth, and gives decision-makers what they need without manual effort.

Microsoft Fabric: A Quick Context

Before we dive in, a quick note on what Fabric actually is.

Microsoft Fabric is an all-in-one analytics platform. It combines data engineering, data warehousing, data science, real-time analytics, and business intelligence inside a single environment. Crucially, all of it sits on top of OneLake: a single, unified storage layer that means data does not need to be copied between tools.

Think of it as Microsoft taking all of their data tools (Azure Data Factory, Azure Synapse, Power BI) and weaving them together under one roof, with a much simpler experience.

For our retail use case, we will use:

  • Dataflow Gen2 for ingestion
  • Lakehouse for storage
  • Notebooks (Spark SQL) for transformation, with the SQL Analytics Endpoint for querying
  • Semantic Model for business logic
  • Power BI for the dashboard

The Workflow

Visual summary of the workflow:

  1. 🗄️ SQL Server (on-premises): sales database
  2. 🏠 Fabric Lakehouse, Bronze layer: raw sales data
  3. 🏠 Fabric Lakehouse, Gold layer: fact_sales · dim_product · dim_store
  4. 📐 Power BI semantic model: Revenue · Margin · Trend
  5. 📊 Power BI dashboard: regional performance · product alerts · trend lines
  6. 👤 Decision maker: regional manager, store director
  7. ✅ Business decision: promotion · reorder · intervention

Let me walk through each step.

Step 1: Create a Workspace and Lakehouse

Everything in Fabric lives inside a Workspace. This is your project space. Create one, give it a meaningful name (something like "Retail Analytics - Sales Performance"), and assign the right people to it.

Inside the workspace, create a Lakehouse. This will be your central storage and processing environment. Name it clearly: I usually go with something like lh_sales_retail.

When you create the Lakehouse, Fabric automatically gives you:

  • A Files section for raw files
  • A Tables section for structured Delta tables
  • A SQL Analytics Endpoint for querying with SQL

You do not need to configure any of this manually. It is all set up for you.

Step 2: Ingest Data Using Dataflow Gen2

Now we need to get the data out of SQL Server and into the Lakehouse.

Dataflow Gen2 is Fabric's visual data transformation tool: think Power Query but designed for pipelines that run on a schedule and write to a Lakehouse.

Inside your workspace, create a new Dataflow Gen2. Connect it to your SQL Server data source. For our scenario, we are pulling three tables: sales_transactions, products, and stores.

Key things to configure:

  • Incremental refresh: Do not pull the full table every time. Set up the dataflow to only pull rows where the transaction date is newer than the last run. This keeps things fast and cost-efficient.
  • Destination: Point each query to a Delta table in your Lakehouse. This becomes your Bronze layer: raw data, as it came from the source, with no transformations applied.

Once the dataflow is configured, schedule it to run every night at 2am. From this point on, your Lakehouse receives fresh data every morning without anyone touching a spreadsheet.
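
Incremental refresh is configured in the Dataflow Gen2 settings rather than in code, but conceptually it boils down to filtering the source query on a watermark column. A rough sketch of the query the dataflow effectively issues against SQL Server (the @last_refresh_date parameter and the dbo schema are illustrative; Fabric tracks the actual watermark for you):

-- Only pull rows newer than the last successful refresh
-- (@last_refresh_date stands in for the watermark that incremental refresh tracks)
SELECT
    transaction_id,
    transaction_date,
    store_id,
    product_id,
    quantity_sold,
    unit_price
FROM dbo.sales_transactions
WHERE transaction_date > @last_refresh_date;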

Step 3: Transform the Data Into the Gold Layer

Raw data is not business-ready. We need to clean it, join the tables together, and create the structures that the dashboard will actually use.

There are two ways to do this in Fabric: Notebooks (Python or Spark SQL) or T-SQL in a Warehouse. For a reporting use case like ours, SQL is usually cleaner and easier to maintain.

Using Spark SQL in a notebook attached to your Lakehouse (the SQL Analytics Endpoint is read-only, so it is for querying the results rather than creating them), you can write transformations like this:

-- Create a clean sales fact table
CREATE TABLE gold.fact_sales AS
SELECT
    t.transaction_id,
    t.transaction_date,
    t.store_id,
    t.product_id,
    t.quantity_sold,
    t.unit_price,
    t.quantity_sold * t.unit_price AS revenue,
    p.product_name,
    p.category,
    p.cost_price,
    (t.unit_price - p.cost_price) * t.quantity_sold AS gross_margin,
    s.store_name,
    s.region
FROM bronze.sales_transactions t
LEFT JOIN bronze.products p ON t.product_id = p.product_id
LEFT JOIN bronze.stores s ON t.store_id = s.store_id
WHERE t.transaction_date IS NOT NULL
AND t.unit_price > 0;

This creates a single, clean table that has everything the dashboard needs: no joins required at query time, no ambiguity about which version of a product name is correct.
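
As a quick sanity check that the table really is self-contained, a regional revenue query against the SQL Analytics Endpoint needs nothing but fact_sales:

-- Regional revenue straight from the fact table, no joins needed
SELECT
    region,
    SUM(revenue) AS total_revenue,
    SUM(gross_margin) AS total_margin
FROM gold.fact_sales
GROUP BY region;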

You can also add a few pre-calculated aggregations here:

-- Daily regional summary for trend analysis
CREATE TABLE gold.daily_regional_summary AS
SELECT
    transaction_date,
    region,
    store_name,
    COUNT(DISTINCT transaction_id) AS transaction_count,
    SUM(revenue) AS total_revenue,
    SUM(gross_margin) AS total_margin,
    SUM(gross_margin) / NULLIF(SUM(revenue), 0) AS margin_pct
FROM gold.fact_sales
GROUP BY transaction_date, region, store_name;
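
The workflow summary earlier also lists dim_product and dim_store in the gold layer. They are optional for a dashboard this simple, but if you want proper dimension tables alongside the fact table, a minimal sketch (reusing the bronze column names from above) looks like this:

-- Slim dimension tables built from the bronze source tables
CREATE TABLE gold.dim_product AS
SELECT DISTINCT
    product_id,
    product_name,
    category,
    cost_price
FROM bronze.products;

CREATE TABLE gold.dim_store AS
SELECT DISTINCT
    store_id,
    store_name,
    region
FROM bronze.stores;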

Wrap these transformations inside a Notebook or a Data Pipeline and schedule them to run after your Dataflow Gen2 completes. Now every morning, your gold layer is freshly updated with yesterday's data already processed.

Step 4: Build the Semantic Model

The Semantic Model is the bridge between your data and your business. It is where you define measures, KPIs, and business logic in a way that is consistent for every person who uses the dashboard.

In Power BI Desktop (or directly in Fabric), connect to your Lakehouse gold tables and define your key measures using DAX:

Total Revenue = SUM(fact_sales[revenue])

Revenue vs Target % = 
DIVIDE([Total Revenue], [Monthly Revenue Target], 0) * 100

Revenue Trend (7D) = 
CALCULATE([Total Revenue], DATESINPERIOD('Date'[Date], LASTDATE('Date'[Date]), -7, DAY))

Stores Below Target = 
CALCULATE(
    DISTINCTCOUNT(fact_sales[store_id]),
    FILTER(
        VALUES(fact_sales[store_id]),
        [Revenue vs Target %] < 90
    )
)
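
The [Monthly Revenue Target] measure assumes targets exist somewhere in the model. One simple option, sketched here with illustrative table and column names, is a small gold table loaded from wherever targets are maintained, which the measure can then read:

-- Hypothetical targets table; the source, table and column names are illustrative
CREATE TABLE gold.store_monthly_targets AS
SELECT
    store_id,
    target_month,
    target_revenue
FROM bronze.store_targets;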

These measures are defined once, centrally, and reused everywhere. No more analysts each building their own version of "revenue" in a slightly different way.

Step 5: Build the Dashboard

With the semantic model published to Fabric, building the actual dashboard in Power BI is relatively fast.

For our regional sales performance dashboard, I would include:

  • KPI cards at the top: Total Revenue, Margin %, Transaction Count, Stores Below Target
  • Map visual: Revenue by region, colour-coded by performance vs target
  • Line chart: Daily revenue trend for the last 30 days, with a target line overlay
  • Table: Store-level breakdown sorted by performance vs target, so it is easy to spot which stores need attention
  • Conditional formatting: Red highlights for any store below 90% of target

The dashboard refreshes automatically every morning. No emails, no exports, no "which file is the right one?"

What This Workflow Achieves

Let me be direct about what we have built here and what it does for decision-making.

Before: A regional manager waits until Monday morning, opens an Excel file that may or may not be the latest version, and tries to figure out if last week was good or bad. By the time they make a decision, the data is already a week old.

After: The same manager opens a dashboard on their phone or laptop, sees exactly how each store performed yesterday, identifies which stores are trending below target, and can make an intervention call before the end of the week.

That is the difference between reactive decision-making and proactive decision-making. And it is made possible by a workflow that, once set up, runs without any human involvement.

Common Mistakes to Avoid

In my experience, these are the mistakes that trip people up when building this kind of workflow:

Skipping the bronze layer. It is tempting to transform data as you ingest it. Do not. Always land raw data first. When something breaks (and it will), you need to be able to reprocess without going back to the source system.

Putting business logic in the dashboard. Calculated columns in a Power BI report are hard to find, easy to duplicate, and impossible to reuse. Keep business logic in the semantic model.

Not setting up monitoring. Schedule a simple data quality check that sends an alert if the daily row count drops unexpectedly. You want to know if the pipeline breaks before the business users do.
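
A minimal version of that check, run against the SQL Analytics Endpoint after the morning load (the eight-day window and the trailing-average comparison are just one way to do it):

-- Compare yesterday's row count with the trailing average; alert if it drops sharply
WITH daily_counts AS (
    SELECT
        transaction_date,
        COUNT(*) AS row_count
    FROM gold.fact_sales
    WHERE transaction_date >= DATEADD(day, -8, CAST(GETDATE() AS date))
      AND transaction_date < CAST(GETDATE() AS date)
    GROUP BY transaction_date
)
SELECT
    MAX(CASE WHEN transaction_date = DATEADD(day, -1, CAST(GETDATE() AS date))
             THEN row_count END) AS yesterday_rows,
    AVG(1.0 * row_count) AS trailing_avg_rows
FROM daily_counts;

If yesterday_rows comes back NULL or well below trailing_avg_rows, have the pipeline send the alert.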

Over-engineering from day one. Start with the simplest thing that works. Three tables, one gold view, one dashboard. Prove the value, then add complexity.

Closing Thought

Microsoft Fabric genuinely makes this kind of end-to-end workflow much more accessible than it used to be. You do not need a team of six people across three disciplines to build something like this. A single capable data person can wire this up in a few days.

The real value is not the technology itself. It is what happens when decision-makers stop fighting over spreadsheets and start looking at the same, trusted, up-to-date picture of the business.

That is when the decisions start getting better.

Next in this series: A step-by-step guide to planning a migration from on-premises SQL Server to Microsoft Fabric: covering assessment, planning, and going live without disrupting the business.
