Someone decided what questions mattered. A developer built a report. The organisation moved on, hoping those were the right questions.
What if you could ask your data a question directly, in plain language, and get an answer in seconds?
In most enterprises, that’s not how it works. The cycle repeats every quarter: one dashboard for production, another for finance, a third for safety, a fourth for the board, and a handful of ad hoc requests in between. Each one takes weeks to build, requires a specialist to configure, and answers exactly one set of questions. The moment someone asks something the dashboard wasn’t designed for, the cycle starts again.
That queue is the real cost of traditional business intelligence. Not the licence fee. That’s visible. The real cost is the time between having a question and getting an answer. And it imposes a second, less visible cost: it silently discourages questions. If every question requires a build request, people stop asking. The most expensive question in any organisation is the one that never gets asked because there’s no dashboard for it.
Why not solve this with better self-service tools, or more analysts, or a data mesh? Because those approaches speed up the build queue. They don’t eliminate it. Even the best self-service BI tool still requires someone to understand the data model, write the right query, and build a visualisation. The fundamental constraint remains: every new question requires a new artefact. What changes the economics is a system where questions don’t require building at all.
Every Dashboard Is a Frozen Hypothesis
In our experience across mining and manufacturing clients, a new operational dashboard takes three to six weeks from request to delivery. Longer when the BI team’s backlog is deep, which it usually is. The realistic wait for a non-trivial report is often two to three months.
The numbers get worse when you look at who’s actually asking. Forrester’s research estimates that roughly 80 per cent of business users still rely on centralised BI teams rather than building their own reports. Self-service was supposed to eliminate the queue. It hasn’t. In the manufacturing environments we work in, most operational users don’t have the technical fluency to query data models directly. They submit requests and wait.
So the queue persists. And a queue doesn’t just slow answers down. It shapes which questions get asked in the first place. When every inquiry requires a two-month turnaround, people learn to ask only what they can justify waiting for. The exploratory questions (“what would happen if,” “why did that change,” “how does this connect to that”) die in the gap between curiosity and capacity.
Some Questions Can’t Be Dashboarded
Not every gap is a dashboard that hasn’t been built yet. Some questions are ones dashboards structurally cannot answer.
A mining group asked us to analyse their Visible Felt Leadership programme: thousands of VFL interaction records, written in natural language by supervisors after safety conversations on the floor. The starting question was deceptively simple: “What insights can we get from this unstructured data?”
That’s not a dashboard request. There’s no KPI to plot. But the questions that followed were specific and operationally urgent:
- Do people talk about feeling safe in their VFLs? Show me the trend over time.
- What themes emerge across business units? Where do we see patterns completely different from the rest of the organisation?
- What’s our VFL coverage by operation, by functional location, employees versus contractors?
- How does VFL activity compare to actual safety events (high-potential hazards, high-potential incidents, recorded events)? Do the locations match?
- When are VFLs captured versus when the observation actually happened? Show the analysis by weekday, by quarter, by operation.
Each of those questions combines structured data (headcounts, event logs, dates, org hierarchy) with unstructured text (the free-form VFL notes). Before large language models, answering even one required a dedicated NLP project: keyword extraction, pattern matching, and custom pipelines built for mining safety terminology. NxGN has delivered exactly this kind of work in mining: predictive root-cause classification on incident records, and composite risk models combining safety, HR, and production data across enterprise systems. The models were valuable. They were also slow to build, narrow in scope, and brittle when the questions changed. Every follow-up (“What about fatigue?” or “Break it down by contractor versus employee?”) meant another round of development.
With a conversational interface connected to a governed operational model, the entire list becomes a working session. Ask the first question. Follow up. Pivot. Drill into the business unit that looks different. Compare it to the one that doesn’t. No build cycle. No queue.
The underlying model already holds the VFL data, the organisational hierarchy, the event records, and the relationships between them. The question activates a pathway that’s already there. No amount of dashboard building will solve this class of question. It requires reasoning across unstructured and structured data simultaneously.
The Model, Not the AI, Is the Breakthrough
Most “conversational analytics” products bolt a natural language interface onto a database. Ask a question, get a SQL query, receive a number. That works for simple lookups. It fails the moment you need context, because the relationships between metrics aren’t encoded anywhere the AI can reach.
The VFL example worked because a pre-built model already existed. Dependencies were mapped. Aggregation logic was mathematically correct. Governance was in place.
NxGN Capstone takes this approach. Instead of building reports that anticipate what someone might ask, Capstone lets users ask their data questions directly and get answers in seconds. When you ask Capstone a question, it traverses the entire dependency graph and delivers a contextualised answer. Change one input, and the entire model updates instantly. No manual reconciliation. No waiting for a refresh cycle. The AI connects to Capstone’s model via the Model Context Protocol (MCP), an open standard that gives language models structured access to tools and data. For the technical architecture (formula chain tracing, dependency graphs, diagnostic walkthrough), see When Your Dashboard Talks Back.
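To make the “change one input, the entire model updates” idea concrete, here is a minimal sketch of dependency-graph recomputation. This is illustrative only, not Capstone’s implementation: the metric names, formulas, and figures are invented, and the sketch uses Python’s standard-library topological sort.

```python
# Illustrative dependency-graph recomputation (NOT Capstone's actual engine).
# Each derived metric declares its inputs and a formula; changing one input
# recomputes every downstream metric in dependency order.
from graphlib import TopologicalSorter

# Hypothetical mini-model: ore tonnes and grade feed metal output,
# which (with price) feeds revenue.
formulas = {
    "metal_t": (["ore_t", "grade"], lambda ore_t, grade: ore_t * grade),
    "revenue": (["metal_t", "price"], lambda metal_t, price: metal_t * price),
}

values = {"ore_t": 10_000, "grade": 0.02, "price": 9_500}

def recompute(values, formulas):
    # Map each derived metric to its predecessors, then evaluate in
    # topological order so inputs are always computed before outputs.
    deps = {name: set(inputs) for name, (inputs, _) in formulas.items()}
    for name in TopologicalSorter(deps).static_order():
        if name in formulas:
            inputs, fn = formulas[name]
            values[name] = fn(*(values[i] for i in inputs))
    return values

recompute(values, formulas)
print(values["revenue"])   # 10_000 * 0.02 * 9_500 = 1_900_000

# Change one input; the whole chain updates on the next pass.
values["grade"] = 0.025
recompute(values, formulas)
print(values["revenue"])   # 10_000 * 0.025 * 9_500 = 2_375_000
```

The point of the sketch is the shape of the system: because every metric’s inputs are declared, a question (or a changed assumption) can be traced through the graph rather than reconciled by hand.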
One caveat: conversational intelligence is only as good as the model it connects to. If the formulas are stale, the data connections broken, or the organisational hierarchy misconfigured, the AI will return confident-sounding answers that are wrong. That’s why Capstone’s governance layer exists: validation workflows, data locking, change request processes, and full audit trails. When an answer is wrong, the dependency graph makes the error traceable. You can see which formula, which data source, or which assumption failed.
What Happens When You Ask Your Data a Question
The difference is clearest through the lens of the people who use it.
| Role | Traditional BI Approach | Conversational Approach |
|---|---|---|
| Safety Executive | Receives a monthly compliance report from a standalone system. VFL records sit in a database, unread at scale. To understand what supervisors are actually discussing requires a dedicated analyst or NLP project. | Asks: “Do people talk about feeling safe in their VFLs? Show me the trend by quarter. Now compare VFL locations to where our actual safety events are happening.” A working session, not a report request. |
| CEO (Retail/Distribution) | Reviews inventory reports by branch, each built separately. Dead stock is a known problem but quantifying it across 16 branches requires a finance team exercise. | Asks: “Where is my dead stock concentrated? Which three branches should I act on first, and what’s the working capital recovery if I clear them?” The model traverses inventory age, velocity, and branch performance to answer in seconds. |
| Mine Manager | Waits for a weekly report showing production against plan. If something looks off, requests an ad hoc analysis from the planning team. Two to five day turnaround depending on backlog. | Asks: “Show me yesterday’s production variance at Shaft 4 and tell me what drove it. Now model bringing the Section 3 maintenance forward a week — what does it do to monthly output and unit cost?” One session, two answers. |
Across mining operations running on Capstone, this is the shift that operational leaders describe as the most valuable. Not faster answers to known questions. The ability to ask questions that previously required a dedicated analyst, a custom model, and days of turnaround.
Before You Even Ask Your Data a Question
The most valuable question is sometimes the one you didn’t think to ask. A conversational interface handles the questions you bring. The model behind it can also flag what you missed.
A retail group put Capstone onto its stock data. Out of R90 million in inventory across its branches, the model flagged R15 million in dead stock: 15,366 SKUs with zero sales history, 16.7% of the portfolio, with three branches holding 43% of the exposure.
A finance team could reach those numbers by pulling a set of ERP reports. Harder to replicate is what Capstone did next: it modelled what clearance would recover. Discount 30% of the dead stock, recover 65% of cost value, and R2.9 million returns to working capital. Change the discount depth or narrow the scope to the three worst branches, and the number updates in seconds.
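The arithmetic behind that scenario can be sketched in a few lines. Note the hedge: reading “30%” as the share of dead stock cleared and “65%” as the cost value recovered on that tranche is our interpretation of the figures above, not a statement of Capstone’s internal logic.

```python
# Back-of-envelope version of the clearance scenario (figures from the text;
# the reading of the percentages is an assumption, not Capstone's model).
dead_stock_value = 15_000_000   # R15m in flagged dead stock
clear_fraction   = 0.30         # share of dead stock cleared at a discount
recovery_rate    = 0.65         # cost value recovered on the cleared tranche

recovered = dead_stock_value * clear_fraction * recovery_rate
print(f"R{recovered:,.0f}")     # roughly the R2.9m quoted above
```

The calculation itself is trivial; the value of a governed model is that changing `clear_fraction` or narrowing the scope re-runs the whole chain consistently, in seconds.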
The more interesting finding was the one nobody had asked about. In the same session, Capstone flagged 3,098 critical items where open sales orders exceeded available stock. Lost sales hiding inside undersupply. The model had noticed.
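The rule behind that flag is simple; what mattered is that the model ran it without being asked. A sketch of the check, with hypothetical field names and invented sample rows (Capstone’s actual schema will differ):

```python
# Flag items where committed demand exceeds supply (field names and
# sample data are hypothetical, for illustration only).
items = [
    {"sku": "A100", "on_hand": 40,  "open_orders": 55},
    {"sku": "B200", "on_hand": 120, "open_orders": 80},
    {"sku": "C300", "on_hand": 0,   "open_orders": 12},
]

undersupplied = [i for i in items if i["open_orders"] > i["on_hand"]]
for i in undersupplied:
    shortfall = i["open_orders"] - i["on_hand"]
    print(f"{i['sku']}: short {shortfall} units")
```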
That’s a different kind of value from asking a question and getting an answer. It’s the model doing the asking on your behalf, because the relationships between the data points are already mapped. Both capabilities depend on the same foundation: a governed operational model where the dependencies between metrics are explicitly defined.
We’re Not Against Dashboards
Dashboards still have a place. A well-designed operational dashboard that tracks a known set of KPIs in real time remains useful. Capstone produces these, too.
But dashboards should be the starting point of an investigation, not the endpoint. What happens when a number on the dashboard looks wrong? In most organisations, you escalate, request an analysis, and wait. With a conversational interface, you click into the number, ask why, and get an answer. The conversation continues until you have what you need. That’s the difference between a display and a dialogue.
One Architecture, Every Sector
Capstone was built in mining, one of the most operationally complex and data-fragmented environments in South Africa. The architecture is sector-agnostic: the cascade logic, dependency graphs, and conversational interface stay the same across industries. What changes is the content of the model: the formulas and hierarchies configured for each domain. A mine manager asking about production variance and a retail head asking about inventory ageing are using the same engine on different models.
Three Questions to Ask Your Data Team
Before evaluating any conversational analytics platform, ask your BI or data team these three questions:
- How long is the current dashboard request queue? If the answer is more than two weeks, your organisation is making decisions with stale or no answers.
- What percentage of our users build their own reports? If it’s under 30%, self-service BI hasn’t delivered on its promise. Most of your people are still waiting in line.
- What was the last question someone wanted to ask but couldn’t because there was no report for it? That’s the question worth the most. And it’s the one your current tools can’t answer.
Frequently Asked Questions
What does it mean to ask your data a question?
Conversational analytics lets users ask operational and financial questions in plain language and receive contextualised answers from a pre-built data model. Unlike traditional dashboards that display pre-configured reports, a conversational interface traverses the relationships between metrics, formulas, and organisational structures to answer questions that were never anticipated when the system was built.
How is this different from adding a chatbot to a BI tool?
A chatbot on top of a database retrieves numbers. It can tell you what the revenue was last quarter. It can’t tell you why the margin dropped, what would happen if an input changes, or how a metric at one site connects to a financial outcome at the enterprise level. That requires a model in which the dependencies between metrics are explicitly defined, not just a natural-language layer on top of SQL.
Does this replace our existing BI tools?
No. Dashboards and BI tools remain useful for monitoring known KPIs. Conversational analytics complements them by handling questions dashboards weren’t designed for: scenario modelling, root cause tracing, cross-disciplinary analysis, and the exploration of unstructured data. For a detailed comparison, see Power BI and Capstone: Why You Need Both.
If your organisation’s data strategy still revolves around building dashboards and waiting for reports, you’re paying a tax on every decision. That tax is measured in the time between question and answer, in the questions that never get asked, and in the insights that arrive too late to act on.
See how it works in practice. Read how Anglo American cut sustainability data validation from two months to two weeks using the governed model that powers conversational analytics, or explore the technical architecture in When Your Dashboard Talks Back.
Ready to try it with your data? Contact NxGN Solutions to run a live session with your operational data. Your questions. Real answers.
