Microsoft Fabric is a serious platform. If you work in data engineering or analytics, you’ve likely evaluated it, deployed parts of it, or at least sat through a demo. With over 28,000 organisations now on board, the momentum is real. OneLake delivers on the promise of unified storage. The data pipeline orchestration works. And the integration across the Microsoft analytics stack, from Synapse to Power BI, tightens the entire ecosystem in ways that genuinely improve workflows for teams already invested in Excel, Teams, and Azure.
All of this is genuine progress. Fabric solves real problems.
However, Fabric solves a specific kind of problem. Understanding which kind is the difference between a smooth deployment and an expensive mistake.
The Distinction That Matters
At its core, Fabric is a data engineering and analytics platform. Its job is to move data from source systems, reliably transform it, store it efficiently, and make it available to downstream tools. The lakehouse architecture removes the old false choice between data lakes and warehouses. Data lineage and governance features give you visibility into what happened to your data and when. Meanwhile, pipeline orchestration handles scheduling, retry logic, and dependency management, so your team doesn’t have to.
In short, Fabric solves the data plumbing problem. Think of it as the pipes and pumps: it reliably moves data from A to B at scale, with audit trails and change control.
But moving data is not the same as modelling what it means.
Where the Conversation Breaks Down
People often ask: “Does Fabric replace our BI platform?” or “Can it replace our data warehouse?” The answer seems obvious: Fabric ships a warehouse and a BI tool, so of course it does. Yet that question assumes that moving data and modelling data are the same thing. They’re not.
Consider a simple manufacturing metric: cost per unit produced. This number feeds into dashboards, reports, and business reviews. Take the total cost of making the widgets and divide by the number made. One number. It looks simple.
That simplicity is an illusion. Cost per unit is a ratio metric, meaning it cannot be averaged across organisational levels. If Plant A produced 1,000 units at $200 per unit and Plant B produced 500 units at $300 per unit, the company’s cost per unit is not the $250 a simple average gives you. It’s $233: $350,000 in total cost divided by 1,500 total units. You have to reaggregate the base inputs (total cost and total units) at each level and recompute the ratio.
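The arithmetic takes a few lines to verify. A minimal sketch in plain Python, using the two plants above (no Fabric dependencies assumed):

```python
# Two plants reporting cost per unit; figures match the example above.
plants = [
    {"name": "Plant A", "total_cost": 200_000, "units": 1_000},  # $200/unit
    {"name": "Plant B", "total_cost": 150_000, "units": 500},    # $300/unit
]

# Wrong: average the pre-computed ratios. Each plant gets equal weight,
# regardless of volume.
naive = sum(p["total_cost"] / p["units"] for p in plants) / len(plants)

# Right: reaggregate the base inputs, then recompute the ratio.
total_cost = sum(p["total_cost"] for p in plants)
total_units = sum(p["units"] for p in plants)
correct = total_cost / total_units

print(f"Naive average: ${naive:,.2f}")    # $250.00
print(f"Correct ratio: ${correct:,.2f}")  # $233.33
```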
Fabric can move the cost-per-unit numbers into OneLake without any trouble. What it doesn’t know is that averaging those pre-computed ratios on the way up introduces systematic errors into higher-level reporting.
That’s a modelling problem, not a plumbing problem.
What About Fabric’s Semantic Layer?
To be fair, Fabric has evolved beyond pure data engineering. Microsoft introduced semantic models and metric sets that centralise measures, KPIs, and calculations in a shared layer. Every analyst can reference the same revenue or margin definition rather than building their own. This is a meaningful step forward for BI consistency.
Still, a semantic layer is not an operational intelligence layer. Semantic models define how data is consumed by reports. They don’t enforce how metrics are computed, validated, or aggregated across organisational hierarchies. Specifically, Fabric’s semantic layer does not distinguish between additive metrics and ratio metrics when rolling up across org levels. It does not maintain a formula dependency graph where changes to one metric cascade correctly through every calculated field that depends on it. Nor does it implement multi-level validation workflows with data locking, or replace spreadsheet-based data capture with purpose-built screens that check input types and ranges in real time.
These capabilities sit in a different layer entirely. Fabric wasn’t designed for them, and it shouldn’t be expected to handle them.
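To make the distinction concrete, here is a minimal sketch, in illustrative Python rather than Fabric’s or Capstone’s actual APIs, of what a metric definition that carries its own aggregation rule might look like:

```python
from dataclasses import dataclass

@dataclass
class AdditiveMetric:
    """Safe to sum at any organisational level (e.g. total cost)."""
    name: str

    def roll_up(self, values: list[float]) -> float:
        return sum(values)

@dataclass
class RatioMetric:
    """Must be recomputed from base inputs at every level (e.g. cost per unit)."""
    name: str
    numerator: str
    denominator: str

    def roll_up(self, rows: list[dict]) -> float:
        # Roll up the base inputs, never the ratio itself.
        num = sum(r[self.numerator] for r in rows)
        den = sum(r[self.denominator] for r in rows)
        return num / den

cost_per_unit = RatioMetric("cost_per_unit", "total_cost", "units_produced")
plants = [
    {"total_cost": 200_000, "units_produced": 1_000},
    {"total_cost": 150_000, "units_produced": 500},
]
print(cost_per_unit.roll_up(plants))  # ~233.33, not 250
```

The point is not the code; it is that the aggregation rule travels with the metric, so no downstream consumer can average the ratio by accident.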
The Aggregation Problem in Practice
Our “Why Spreadsheets Lie About Cost Per Unit” analysis covers this in depth, but the core issue deserves mention here.
Organisations routinely build reporting pipelines that move numbers from source systems into their data platform without revisiting how those numbers were computed. Cost per unit, efficiency ratios, and yield percentages all get piped through. The assumption is that the maths is correct and moving clean numbers is all that matters.
Here’s the catch: when you aggregate ratio metrics incorrectly, you don’t move the correct numbers. You move biased ones. A plant with low production volume gets the same weight in a simple average as one producing ten times as much, skewing the company-wide figure toward its numbers regardless of how little it contributes. Budget variance percentages computed at the leaf level will not roll up correctly either, as the sketch below shows.
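Here is that variance failure with made-up department figures (illustrative only):

```python
# Illustrative figures. Variance % = (actual - budget) / budget.
departments = [
    {"budget": 100_000, "actual": 110_000},  # +10% over budget
    {"budget": 400_000, "actual": 380_000},  # -5% under budget
]

# Wrong: average the leaf-level percentages.
naive = sum(
    (d["actual"] - d["budget"]) / d["budget"] for d in departments
) / len(departments)

# Right: recompute the ratio from aggregated totals.
total_budget = sum(d["budget"] for d in departments)
total_actual = sum(d["actual"] for d in departments)
correct = (total_actual - total_budget) / total_budget

print(f"Naive roll-up: {naive:+.1%}")    # +2.5%: looks over budget
print(f"Correct:       {correct:+.1%}")  # -2.0%: actually under budget
```

The naive roll-up doesn’t just miss by a little; it flips the sign.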
These errors are systematic, repeatable, and easy to miss. The dashboard still calculates. Your SQL still runs. Every number still looks clean. Nothing throws an error.
Fabric will execute these incorrect calculations reliably and consistently. That’s the problem.
Designed for Different Layers
The mental model that resolves this confusion is straightforward. Think of the data architecture as three layers.
The operational intelligence layer computes the metric, enforces the aggregation logic, validates the inputs, and governs what changes and when. It answers the question: “What do we actually mean by cost per unit, and how does it flow through the organisation?”
The data engineering layer takes that layer’s output (the validated, correctly modelled metrics) and moves it reliably. Fabric excels here. It orchestrates pipelines, maintains lineage, scales storage, and integrates with downstream tools.
The visualisation layer renders the data in ways that humans can understand. Whether that’s Power BI, Tableau, or Looker depends on your stack; the job is the same.
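One way to picture the separation, as an illustrative sketch rather than any product’s actual interface: each layer exposes a narrow contract and stays out of its neighbours’ responsibilities.

```python
from typing import Protocol

class OperationalIntelligence(Protocol):
    """Owns metric definitions, aggregation rules, and validation."""
    def validated_metrics(self, period: str) -> list[dict]: ...

class DataEngineering(Protocol):
    """Owns movement, lineage, and storage (the Fabric layer)."""
    def publish(self, rows: list[dict], dataset: str) -> None: ...

class Visualisation(Protocol):
    """Owns presentation only; consumes what the pipeline publishes."""
    def render(self, dataset: str) -> None: ...

def monthly_close(oi: OperationalIntelligence,
                  de: DataEngineering,
                  viz: Visualisation) -> None:
    # Metrics are modelled and validated before they enter any pipeline.
    rows = oi.validated_metrics(period="2025-01")
    de.publish(rows, dataset="ops_metrics")
    viz.render(dataset="ops_metrics")
```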
Why Conflating Layers Creates Problems
Each layer does what it’s designed for. Problems emerge when responsibilities blur. Building your operational intelligence logic inside your data platform forces your data engineers to become metric modellers. Your pipelines fill up with business logic instead of plumbing.
Pushing it downstream into Power BI is no better. Your calculations end up in DAX formulas instead of a governed model. Every report author has to know how to reaggregate ratio metrics correctly, and most won’t.
Where Capstone Fits: Two Integration Patterns
Platforms like NxGN Capstone sit at the operational intelligence layer. They enforce aggregation rules, manage metric dependencies, control validation workflows, and maintain the structural taxonomies that let data roll up correctly.
What makes this especially practical for Fabric users is that the two platforms connect in both directions.
Pattern 1: Fabric Feeds Capstone
In this scenario, Fabric serves as the data source. Raw operational data (production counts, cost figures, downtime records) flows from OneLake into Capstone. Once inside Capstone, the platform applies the business rules: per-metric aggregation logic, formula dependencies, validation workflows, and organisational roll-ups. The result is a set of validated, correctly modelled metrics ready for consumption.
This pattern works well when your source data already lives in Fabric and you need a governed modelling layer on top.
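A hedged sketch of the flow, assuming OneLake is reached through its ADLS-compatible endpoint and that Capstone exposes an HTTP ingest API; the workspace path and the endpoint below are placeholders, not documented interfaces:

```python
import pandas as pd
import requests

# Placeholder OneLake URI; real workspaces follow the pattern
# abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>/Tables/<table>,
# with credentials supplied via storage_options (adlfs/fsspec).
raw = pd.read_parquet(
    "abfss://ops@onelake.dfs.fabric.microsoft.com/plant.Lakehouse/Tables/production"
)

# Hypothetical Capstone ingest endpoint, shown only to illustrate the direction
# of flow; the real integration surface is Capstone's to define.
resp = requests.post(
    "https://capstone.example.com/api/ingest/production",
    json=raw.to_dict(orient="records"),
    timeout=30,
)
resp.raise_for_status()
```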
Pattern 2: Capstone Feeds Fabric
Here, the flow reverses. Capstone handles the data capture, validation, and metric computation. It then exports the validated results back to Fabric, either to OneLake or via a direct integration. From there, Fabric distributes the data across the enterprise via its pipeline orchestration, lineage tracking, and Power BI connectivity.
This pattern suits organisations that want Capstone to own the “single version of truth” for operational metrics, while Fabric handles the scale and distribution that the rest of the enterprise depends on.
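The reverse flow, again as a sketch only: a placeholder OneLake path, illustrative figures, and an assumed export already pulled from Capstone.

```python
import pandas as pd

# Illustrative output of Capstone's validation and roll-up step; in practice
# this would come from whatever export mechanism Capstone provides.
validated = pd.DataFrame([
    {"org_level": "company", "metric": "cost_per_unit", "value": 233.33},
    {"org_level": "plant_a", "metric": "cost_per_unit", "value": 200.00},
    {"org_level": "plant_b", "metric": "cost_per_unit", "value": 300.00},
])

# Write to a placeholder OneLake path; Fabric pipelines, lineage, and Power BI
# take over from here. Credential setup (storage_options) is omitted.
validated.to_parquet(
    "abfss://ops@onelake.dfs.fabric.microsoft.com/metrics.Lakehouse/"
    "Files/validated_metrics.parquet"
)
```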
The Common Thread
In both patterns, the principle is the same. Capstone handles the modelling. Fabric handles the distribution. The output is format-agnostic. Whether you move it via Fabric pipelines, SQL integration, or a direct connector doesn’t change the fact that each platform does what it was designed for.
The Right Question
Organisations evaluating Fabric often frame it as a replacement question: “Does this replace our existing data platform?” A better framing is a layer question: “What modelling decisions need to be made, and where do they happen?”
Fabric is an excellent choice for the data engineering layer. For the operational intelligence layer, you need something purpose-built. If you use Fabric without it, you get fast, reliable, wrong answers. There’s no speed advantage to being reliably wrong.
The reverse also holds. Operational intelligence without a robust data engineering layer is a bottleneck. You need both. The distinguishing factor is knowing which tool solves which problem.
Microsoft’s engineering is strong, and the Fabric product roadmap is ambitious. With recent additions such as AI-powered data integration and the Osmos acquisition for autonomous data engineering, the platform will only become more capable. But ambition doesn’t erase the fundamental difference between moving data and modelling it.
Both layers are necessary. Neither is sufficient alone.
What Comes Next
If your organisation is evaluating data platforms and the conversation gravitates toward “will this replace everything else?”, it may be time for a different discussion. The question isn’t whether one tool can do everything. The question is whether the layers are correct and the responsibilities are clear.
Want to explore whether operational intelligence should sit alongside your current data platform? Visit our products to see how Capstone works at this layer, or reach out to discuss your specific architecture.
Continue Reading
- Why Spreadsheets Lie About Cost Per Unit (And What It’s Costing You)
- Power BI and Capstone: Why the Best Operations Teams Use Both
- From OEE Percentage to Dollar Impact: The Bridge Your CFO Is Missing
