Microsoft Fabric
Step-by-Step Guide to Planning a Migration from On-Premises to Microsoft Fabric
A practical field guide: from the first conversation to go-live, without the chaos
Migrations are where data projects go to die.
Not because the technology is bad, but because the planning is. Teams underestimate the complexity of their existing environment, skip the discovery phase, and try to lift and shift everything at once. Six months later they are over budget, behind schedule, and still running the old system in parallel because nobody is confident enough to switch off.
I have seen this pattern play out more times than I care to count.
The good news is that migrating to Microsoft Fabric does not have to go this way. If you plan it properly (and this post will show you exactly how), you can get through it without disrupting the business, without burning out your team, and without losing the trust of your stakeholders.
Let me walk you through the approach I would follow, phase by phase.
Before Anything Else: Understand What You Are Migrating From
The biggest mistake organisations make is treating this as a technology project from day one. It is not. It is a business change project that happens to involve technology.
Before you write a single line of migration code, you need to understand your current state in detail. This is the discovery phase, and it is non-negotiable.
Phase 0: Discovery and Assessment
Timeline: 2-4 weeks
The goal of this phase is to answer one question honestly: what exactly are we migrating?
Inventory your current data assets. Create a catalogue of every database, every data warehouse, every reporting system, every ETL pipeline, and every scheduled job that touches data. Include the owner, the purpose, the upstream dependencies, and the downstream consumers. This is tedious. Do it anyway.
Classify your workloads. Not everything needs to move to Fabric. Some workloads might be better decommissioned. Others might be better served by a different cloud service. Group workloads by:
- Complexity (simple table sync vs. complex multi-stage transformation pipeline)
- Business criticality (core financial reporting vs. rarely-used operational report)
- Data volume and frequency (10MB daily batch vs. 5TB incremental warehouse)
- Regulatory sensitivity (does this data contain PII? Is it subject to GDPR or SOX?)
Identify dependencies. A single reporting database might feed 47 reports, 3 downstream applications, and an external data feed to a partner. Understanding dependencies prevents you from cutting off something important mid-migration.
Assess technical debt. Be honest about the state of what you are migrating. If your existing ETL pipelines are undocumented, untested, and understood only by one person who left two years ago, that is a risk. Document it and factor it into your plan.
Deliverable: A migration inventory spreadsheet and a risk register.
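The inventory does not need special tooling; even a simple structured record per workload is enough to make the classification above systematic. A minimal sketch in plain Python, where all field names and the example entry are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    """One row of the migration inventory (field names are illustrative)."""
    name: str
    owner: str
    purpose: str
    criticality: str          # "core" | "standard" | "rarely-used"
    complexity: str           # "simple" | "moderate" | "complex"
    daily_volume_gb: float
    contains_pii: bool
    upstream: list = field(default_factory=list)    # systems this reads from
    downstream: list = field(default_factory=list)  # reports/apps that consume it

# Hypothetical example entry: a finance reporting database
finance_dw = Workload(
    name="FinanceDW",
    owner="Finance BI team",
    purpose="Core financial reporting",
    criticality="core",
    complexity="complex",
    daily_volume_gb=120.0,
    contains_pii=True,
    downstream=["P&L dashboard", "Statutory reports"],
)
print(finance_dw.name, finance_dw.criticality)
```

Whether this lives in code, a spreadsheet, or a catalogue tool matters less than capturing the same fields consistently for every workload.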
Phase 1: Planning and Design
Timeline: 2-3 weeks
Now that you know what you have, design what you are building.
Define your target architecture. For most organisations moving to Fabric, the target architecture will follow the Medallion pattern: Bronze (raw), Silver (cleaned), Gold (business-ready). Agree on this structure with your team before you start building anything.
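To make the three layers concrete, here is a deliberately tiny illustration of the Medallion flow using plain Python records. In Fabric this would typically be Spark tables in a Lakehouse; the sample data is invented, and the point is the layering, not the engine:

```python
# Bronze: raw, as-landed from the source (duplicates, strings and all)
bronze = [
    {"order_id": 1, "region": "UK ", "amount": "100.50"},
    {"order_id": 1, "region": "UK ", "amount": "100.50"},  # duplicate row
    {"order_id": 2, "region": "DE",  "amount": "250.00"},
]

# Silver: cleaned and conformed (dedupe, trim whitespace, cast types)
seen, silver = set(), []
for row in bronze:
    if row["order_id"] in seen:
        continue
    seen.add(row["order_id"])
    silver.append({"order_id": row["order_id"],
                   "region": row["region"].strip(),
                   "amount": float(row["amount"])})

# Gold: business-ready aggregate (revenue by region)
gold = {}
for row in silver:
    gold[row["region"]] = gold.get(row["region"], 0.0) + row["amount"]

print(gold)  # {'UK': 100.5, 'DE': 250.0}
```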
Design your naming conventions. This sounds boring but it will save you enormous pain later. Agree on how you will name workspaces, Lakehouses, tables, pipelines, and semantic models. Write it down. Enforce it.
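A convention that is written down can also be enforced mechanically. A small sketch, assuming a hypothetical `<env>_<domain>_<layer>` pattern (your own convention will differ; the point is to encode it once rather than rely on memory):

```python
import re

# Hypothetical convention: <env>_<domain>_<layer>, e.g. "prd_finance_gold"
NAME_PATTERN = re.compile(r"^(dev|tst|prd)_[a-z0-9]+_(bronze|silver|gold)$")

def is_valid_name(name: str) -> bool:
    """Check an artefact name against the agreed convention."""
    return bool(NAME_PATTERN.match(name))

print(is_valid_name("prd_finance_gold"))   # True
print(is_valid_name("FinanceProdGold"))    # False
```

A check like this can run in code review or CI so drift is caught before it spreads.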
Choose your migration approach per workload. There are three common strategies:
| Approach | When to Use | Risk |
|---|---|---|
| Lift and Shift | Simple pipelines, straightforward data models | Low; fastest to migrate |
| Refactor | Complex legacy pipelines worth improving | Medium; more effort, better outcome |
| Re-engineer | Old systems that are fundamentally broken | High; most effort, cleanest result |
Do not apply the same approach to everything. Use Lift and Shift for simple workloads to build momentum early. Reserve Re-engineering for the complex, valuable, genuinely broken pieces.
Plan your migration waves. Do not migrate everything at once. Group workloads into waves based on risk and dependency. A good wave structure:
- Wave 1 (Pilot): 1-2 low-risk, non-critical workloads. Prove the process works before you touch anything important.
- Wave 2 (Core): Your most commonly used reporting and analytics workloads, moving in small batches.
- Wave 3 (Complex): Large, complex, or business-critical workloads with more testing time built in.
- Wave 4 (Tail): Legacy or rarely-used workloads. Some of these you may choose to decommission rather than migrate.
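One sanity check worth automating is that no workload is scheduled in an earlier wave than something it depends on. The workload names, wave numbers, and dependencies below are hypothetical, just to show the shape of the check:

```python
# Planned wave per workload (hypothetical)
planned_waves = {
    "SalesReports": 1,   # pilot
    "SalesDW": 2,
    "FinanceDW": 3,
    "PartnerFeed": 3,
}
# Each workload mapped to the systems it reads from (hypothetical)
dependencies = {
    "SalesReports": ["SalesDW"],
    "PartnerFeed": ["FinanceDW"],
}

def wave_conflicts(waves, deps):
    """Return (workload, dependency) pairs scheduled out of order."""
    return [(w, d) for w, ds in deps.items() for d in ds
            if waves[w] < waves[d]]

print(wave_conflicts(planned_waves, dependencies))
# [('SalesReports', 'SalesDW')] -- the pilot reads from a Wave 2 system
```

Catching an ordering conflict like this on paper costs minutes; discovering it mid-migration costs a wave.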
Define your success criteria. For each workload, define what "done" looks like. Row counts match. Column definitions are equivalent. Reports produce the same output. These criteria become your validation tests.
Deliverable: Migration design document, wave plan, and validation criteria per workload.
Phase 2: Environment Setup
Timeline: 1-2 weeks
Before you migrate data, set up the environment properly. Cutting corners here causes headaches throughout the rest of the project.
Provision Fabric capacity. Work with your Microsoft account team or purchase an F-SKU capacity that meets your expected workload. Do not size for today; size for where you will be in 12 months.
Set up workspaces. Create separate workspaces for Development, Testing, and Production. This is not optional. You should never develop directly in production. Build a workspace governance model: who can publish to Production, who approves changes, and how permissions are managed.
Configure networking and security. If you are migrating from an on-premises environment, you need to set up either a VNet Data Gateway or a self-hosted On-Premises Data Gateway to securely connect Fabric to your existing systems. This is often where projects get delayed, so do not leave it until the last minute.
Set up monitoring. Before data starts moving, put in place the monitoring infrastructure you will use throughout the migration. Fabric has built-in capacity monitoring, but you should also set up pipeline alerting and data quality checks.
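Data quality checks do not need to be elaborate to be useful. A minimal sketch of post-load checks in plain Python (row volume and completeness; a real pipeline would alert or log rather than return strings, and the sample rows are invented):

```python
def quality_checks(rows, expected_min_rows, required_fields):
    """Minimal post-load checks: row volume and null rates on key fields."""
    issues = []
    if len(rows) < expected_min_rows:
        issues.append(f"row count {len(rows)} below minimum {expected_min_rows}")
    for field in required_fields:
        nulls = sum(1 for r in rows if r.get(field) in (None, ""))
        if nulls:
            issues.append(f"{nulls} null value(s) in required field '{field}'")
    return issues

rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": None}]
print(quality_checks(rows, expected_min_rows=2, required_fields=["id", "amount"]))
# ["1 null value(s) in required field 'amount'"]
```

Wiring checks like this into every pipeline from day one means every migrated workload inherits them for free.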
Deliverable: Fabric environments provisioned, access controls configured, connectivity tested.
Phase 3: Pilot Migration
Timeline: 2-3 weeks
The pilot is where you test your process, not just your technology.
Pick a workload from Wave 1: something low risk, reasonably well understood, and end-to-end testable. Migrate it fully:
1. Ingest the source data into the Fabric Lakehouse Bronze layer using Dataflow Gen2 or a Data Pipeline
2. Apply transformations to create the Silver and Gold layers
3. Build or recreate the Semantic Model
4. Validate outputs against the source system
Validation is the most important step. Run your pre-defined success criteria. Compare row counts. Compare aggregated totals. If your legacy report shows £4,231,450 in revenue for March, your new Fabric dashboard needs to show the same number.
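Those comparisons are easy to script once you can pull summary figures from both platforms. A sketch, where the summary dicts are hypothetical inputs you would compute from each system, and the tolerance exists only to absorb floating-point noise, not real discrepancies:

```python
import math

def validate_workload(legacy, fabric, tolerance=0.01):
    """Compare row counts and key aggregates between old and new systems."""
    failures = []
    if legacy["row_count"] != fabric["row_count"]:
        failures.append(f"row count: {legacy['row_count']} vs {fabric['row_count']}")
    for metric, expected in legacy["totals"].items():
        actual = fabric["totals"].get(metric)
        if actual is None or not math.isclose(expected, actual, abs_tol=tolerance):
            failures.append(f"{metric}: {expected} vs {actual}")
    return failures

legacy = {"row_count": 10452, "totals": {"march_revenue": 4_231_450.00}}
fabric = {"row_count": 10452, "totals": {"march_revenue": 4_231_450.00}}
print(validate_workload(legacy, fabric))  # [] -- everything matches, sign off
```

An empty failure list becomes your sign-off evidence; anything else goes back to the team before the workload moves on.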
The pilot will almost certainly surface issues. That is the point. Document them, fix them, update your process documentation, and run it again until it passes.
Deliverable: Pilot completed and signed off, process documentation updated, learnings captured for future waves.
Phase 4: Full Migration, Wave by Wave
Timeline: 4-12 weeks depending on volume
Now you run the waves you planned. Each wave follows the same process as the pilot, but with increasing confidence and speed as your team gets into the rhythm.
A few things to keep in mind during this phase:
Run old and new in parallel. Do not switch off the source system until the new system has been validated and users are confident. Parallel running creates overhead but prevents disasters.
Communicate with business users. People who rely on existing reports and dashboards need to know what is happening and when. Surprises are the enemy of trust. Send regular updates: "Wave 2 completes on 15th May. These reports will move to the new platform. Here is what will change."
Freeze changes on the source system. During migration of a specific workload, put a temporary change freeze on the source. Nothing is more frustrating than migrating a pipeline, validating it, and then discovering the source system changed while you were testing.
Track progress visually. A simple migration tracker (even a spreadsheet) showing each workload, its current wave, its status, and its sign-off date is invaluable for stakeholder communication.
Phase 5: Validation and User Acceptance Testing
Timeline: 2-3 weeks
Before you declare migration complete for any workload, it needs to be validated by the people who actually use it.
Data validation: Row counts, aggregates, and key KPIs match between old and new systems. Run this for at least two weeks of data to catch any edge cases.
Report validation: Every report and dashboard that was recreated in Fabric should be reviewed by a business user who is familiar with the original. Not just by the data team, but by the person who actually uses it.
Performance testing: Does the new system perform adequately? Reports that took 10 seconds to load in the old system should not take 2 minutes in the new one. If they do, optimise before cutover.
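Even a crude timing harness is enough to catch a regression of that size before users do. A sketch using only the standard library, where `render` stands in for whatever refreshes or loads the report, and the 2x-baseline budget is an arbitrary illustrative threshold:

```python
import time

def worst_run_seconds(render, runs=5):
    """Time a report-rendering callable and return the slowest run.

    Taking the worst of several runs is deliberate: users remember the
    slow loads, not the average ones.
    """
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        render()
        worst = max(worst, time.perf_counter() - start)
    return worst

# Dummy workload standing in for a report refresh
legacy_baseline = 10.0  # seconds the old system took (hypothetical)
worst = worst_run_seconds(lambda: sum(range(100_000)))
print(f"worst run: {worst:.3f}s (budget: {legacy_baseline * 2:.0f}s)")
```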
Deliverable: Signed UAT sign-off from business stakeholders for each migrated workload.
Phase 6: Cutover and Decommission
Timeline: 1-2 weeks
This is the final step, and it is worth doing carefully.
Plan your cutover weekend. For critical systems, cutover should happen over a low-traffic period, typically a weekend. Have a clear runbook: what happens at what time, who is responsible, and what the rollback plan is if something goes wrong.
Communicate the switch. Tell users the exact date and time when the old system goes offline and the new system becomes the source of truth. Make sure they know where to find the new dashboards and reports.
Decommission deliberately. Do not rush to turn off the old systems. Keep them available in read-only mode for 30-60 days after cutover as a safety net. Once you are confident everything is working, decommission them formally and release the compute/licences.
Document the final state. Update your data catalogue, your architecture documentation, and your runbooks to reflect the new environment. Future-you will thank present-you.
What Most Plans Get Wrong
After going through migrations at several organisations, here are the common failure modes:
Underestimating data quality issues. Source systems always have more mess than anyone admits. Budget time for it.
Insufficient business involvement. Data teams cannot validate business logic. Business users must be involved from Phase 1, not just brought in for UAT at the end.
No rollback plan. Hope is not a strategy. Always know how you will recover if a cutover goes wrong.
Migrating everything before proving anything. Pilots exist for a reason. Run them seriously.
Closing Thought
A migration to Microsoft Fabric, done right, is a genuine opportunity to improve not just your infrastructure but your entire data capability. It forces you to clean up technical debt, agree on definitions, and rethink how data flows through your organisation.
That is hard work. But the organisations that do it well come out the other side with something they did not have before: a data platform they can build on, rather than one they are constantly fighting against.
Plan carefully. Move deliberately. Validate everything. And never, ever skip the discovery phase.
Next in this series: Top 5 AI Tools for Video Content Generation, and which one actually fits your use case.