Every enterprise AI conversation eventually reaches the same question: which model should we use?

It is, almost always, the wrong place to start.

The 2025 DORA State of AI-Assisted Software Development report surveyed nearly 5,000 technology professionals and arrived at a finding that cuts across every industry and every technology stack. AI does not create organisational capability. It amplifies what is already there. Strong foundations get stronger. Weak ones get exposed faster.

As Nathen Harvey, the lead author of the DORA research, put it: “In well-organised organisations with strong practices, AI amplifies that flow and accelerates value delivery. And in fragmented organisations with brittle processes, AI will expose those pain points and bottlenecks. You will feel the pain of those bottlenecks more acutely.”

The model barely features in that equation.


What this looks like in practice

Across the enterprise data environments I work in, a consistent situation has emerged over the past 18 months. An AI initiative arrives with genuine executive support, a credible use case, and a delivery team with the technical skills to execute. The model work proceeds. A proof of concept gets built. It works. Leadership approves the next phase.

And then the programme stalls.

Not because the model underperforms, but because the data feeding it turns out to be inconsistent across source systems. Or the pipeline delivering that data is brittle and fails under load. Or the process the AI is supposed to improve was never well-defined to begin with, and the model faithfully automates the ambiguity.

The DORA research quantifies exactly this. Organisations with loosely coupled architectures and fast feedback loops see meaningful, measurable gains from AI. Organisations constrained by tightly coupled legacy systems (the kind where changing one component risks breaking three others) see little or no benefit, and in some cases see delivery instability increase as AI accelerates the volume of changes their underlying infrastructure cannot safely handle.


Five foundations that determine which side of that equation you land on

1. Data quality: the constraint that surfaces first

The BARC Data, BI and Analytics Trend Monitor 2026, drawing on 1,579 respondents, found that data quality reclaimed the number one position among all barriers to AI and analytics success. That finding will not surprise most data engineers. What has changed is the consequence.

A data warehouse with known quality issues can still produce useful reports. A BI dashboard can work around inconsistent definitions with careful calculation logic. AI models have no equivalent tolerance. They consume whatever you feed them, learn from it, and produce outputs that reflect it. An inconsistency that was a minor nuisance in a reporting environment becomes a systematic error in a production model, one that compounds with every prediction.

The specific failure mode worth watching for is quality problems that go undiscovered until after deployment. A model trained on six months of data with a silent quality issue in one source system can take weeks to degrade visibly enough for users to lose trust. By that point, the damage to adoption is often irreversible, regardless of whether the underlying data issue gets fixed.
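The pre-deployment defence this implies is straightforward to sketch: profile each source system's null rate against a known baseline and block the training run when a source drifts. A minimal illustration; the column name, the tolerance, and the dict-of-rows shape are assumptions for the example, not a prescribed interface.

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_sources(batches, column, baseline, tolerance=0.05):
    """Flag any source whose null rate drifts beyond `tolerance` from baseline.

    batches:  {source_name: list of row dicts from that system}
    baseline: {source_name: expected null rate from profiling}
    Returns the sources that should block the training run.
    """
    flagged = []
    for source, rows in batches.items():
        rate = null_rate(rows, column)
        if abs(rate - baseline.get(source, 0.0)) > tolerance:
            flagged.append(source)
    return flagged
```

The check is deliberately per-source: the silent issue described above typically lives in one system, and an aggregate null rate across all sources can average it away.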

IBM’s 2025 CDO Study: The AI Multiplier Effect found that organisations which had redesigned their data practices specifically for AI were realising substantially stronger business outcomes than those deploying AI onto existing infrastructure. The differentiator was data modernisation, not model selection.

2. Architecture: where the real constraint usually lives

Most enterprise data architectures were designed for batch analytics at a particular scale. They were not designed for the access patterns, latency requirements, and compute profiles that AI workloads demand. That mismatch only becomes visible when you try to run something in production.

The DORA report is clear on the mechanism: AI amplifies the quality of the engineering system it operates within. Organisations with mature platforms and well-defined pipelines see gains scale across the organisation. Organisations with fragmented tooling, inconsistent data models, and manual integration processes find that AI accelerates the creation of technical debt rather than reducing it.

In practice, a data platform that can support enterprise AI needs several things working together. Consistent, governed pipelines that deliver reliable data to models without manual intervention. Observability tooling that detects data drift or pipeline failures before they produce incorrect model outputs at scale. A compute architecture that can handle training workloads without competing for resources with live serving pipelines. Enough decoupling between components that expanding to a new use case does not require re-engineering the entire platform.
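The drift-detection capability above can be illustrated with a Population Stability Index comparison between a training-time sample and a recent production sample. A minimal sketch, not tied to any platform; the bin count and the conventional 0.1/0.25 alert thresholds are rule-of-thumb assumptions.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Bins are derived from the `expected` (training-time) distribution.
    Rule-of-thumb interpretation: < 0.1 stable, 0.1-0.25 moderate shift,
    > 0.25 significant drift -- conventions, not guarantees.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant sample

    def frac(sample, i):
        left = lo + i * width
        right = left + width
        n = sum(1 for x in sample
                if left <= x < right or (i == bins - 1 and x >= hi))
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))
```

Run against each model input on a schedule, a check like this surfaces a shifting feature distribution before it shows up as degraded predictions.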

Microsoft Fabric’s unified architecture addresses several of these directly. A single OneLake store eliminates the replication overhead that causes consistency problems in multi-platform environments. Shared governance and a single security model mean access controls do not fragment as new workloads are added. The integration between Fabric’s Data Engineering, Data Science, and Real-Time Intelligence workloads means the same governed data estate supports both analytics and AI without duplication. The fragmented alternative (separate platforms for ingestion, transformation, warehousing, and model serving, each with its own governance model) creates exactly the tight coupling the DORA research identifies as a blocker to AI amplification.

3. Semantic consistency: the failure that hides until it is expensive

Semantic debt is the architectural problem most likely to be underestimated at the start of an AI programme and most costly to discover after models are in production.

The symptoms are familiar. “Customer” means something different in the CRM than in the billing system: one counts legal entities, the other counts active contracts. “Revenue” is calculated differently across divisions because each has its own P&L logic built into their source system. Date fields use different conventions. Currency handling is inconsistent. None of this is unusual in an enterprise that has grown through acquisitions or organic expansion over a decade.

In a reporting environment, these inconsistencies can be managed with transformation logic and careful labelling. The BI developer knows which definition applies to which report. Users learn the nuances over time. The system is imperfect but functional.

In an AI environment, the model has no equivalent knowledge. It trains on whatever definitions exist in the data, learns the inconsistencies as if they were legitimate signals, and produces outputs that reflect them. The errors are not random noise. They are systematic, because the model has learned a systematically incorrect representation of the business. And because the outputs often look plausible, the problem can persist for months before it surfaces clearly enough to investigate.
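The CRM-versus-billing mismatch can be made concrete in a few lines. The field names are invented for illustration; the point is that the two systems count "customer" at different grains, and a canonical definition has to resolve the grain before any model trains on it.

```python
# Illustrative data: the CRM counts legal entities, billing counts contracts.
crm = [
    {"entity_id": "E1"},
    {"entity_id": "E2"},
]
billing = [
    {"contract_id": "C1", "entity_id": "E1"},
    {"contract_id": "C2", "entity_id": "E1"},  # same entity, second contract
    {"contract_id": "C3", "entity_id": "E2"},
]

crm_customers = len(crm)          # 2 "customers" by legal entity
billing_customers = len(billing)  # 3 "customers" by active contract

# A shared semantic definition picks one grain and maps the other onto it:
canonical_customers = len({row["entity_id"] for row in billing})  # back to 2
```

A model joining these two sources without the canonical mapping would learn a customer count that is systematically inflated wherever entities hold multiple contracts, and the skew would look like legitimate signal.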

IBM’s CDO study identified semantically rich data as one of the foundational capabilities that separates AI-ready organisations from those still operating on organic data growth assumptions. Building a consistent semantic layer across the data estate is not glamorous work. It requires alignment between teams with competing priorities and different definitions of success. It rarely features in AI programme business cases because it produces no direct deliverable. And it is the work that determines whether the models your programme builds can be trusted or merely tolerated.

4. Pipeline reliability: the operational risk nobody prices in

Legacy ETL processes were built to move data from A to B on a schedule. Most of them were not built with the assumption that a production AI system would depend on them for real-time or near-real-time data.

The failure mode is predictable. A schema change in a source system breaks a downstream pipeline. The failure is silent: the pipeline does not error loudly, it simply stops delivering rows. The model continues serving predictions based on data that stopped updating 48 hours ago. Users notice that the AI’s recommendations have become increasingly strange. By the time the root cause is identified, the trust damage is significant.

The DORA research found that 90% of high-performing AI organisations have adopted platform engineering, with dedicated teams responsible for the reliability and observability of the infrastructure AI depends on. A data pipeline feeding a production AI model is operational infrastructure, not a batch job. That means monitoring with alerting, defined SLAs for data freshness, and automated recovery where possible. It means pipelines built for resilience rather than just throughput.
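The freshness side of that monitoring can be sketched as a simple SLA check. Pipeline names and SLA values here are illustrative assumptions; the point is that a pipeline which stops delivering rows without erroring still trips the alert once its data goes stale.

```python
from datetime import datetime, timedelta, timezone

def freshness_breaches(last_loaded, sla_hours, now=None):
    """Return pipelines whose most recent load is older than their SLA.

    last_loaded: {pipeline: datetime of last successful load (tz-aware)}
    sla_hours:   {pipeline: allowed staleness in hours}
    Catches the silent failure mode: no error is raised, but the data
    simply stops arriving, and staleness is the only observable symptom.
    """
    now = now or datetime.now(timezone.utc)
    return [p for p, ts in last_loaded.items()
            if now - ts > timedelta(hours=sla_hours.get(p, 24))]
```

In practice the output feeds an alerting channel with a defined escalation path, so a stale pipeline is a paged incident rather than a user complaint 48 hours later.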

5. Governance as an operational capability

Governance in most organisations is treated as a compliance activity. Policies get written, access controls get configured, a data catalogue gets populated to satisfy an audit requirement. And then the AI programme gets built, new data assets get created, new pipelines get added, and the governance infrastructure from the compliance exercise does not keep pace with what is actually running in production.

The consequence is that six months into a live deployment, you have models consuming data through pipelines that are not catalogued, producing outputs that cannot be audited, with access controls that have drifted from the original design. When something goes wrong, and something will, the ability to diagnose the problem and demonstrate responsible data handling to stakeholders depends on governance infrastructure that has been allowed to slip.

The DORA report found that the organisations seeing sustained AI performance are those that treat platform governance as an ongoing operational capability rather than a project phase. Models drift as the data they consume changes. Access patterns evolve as new use cases are added. Cost profiles shift as usage scales. Governance that was appropriate at go-live needs active maintenance to remain appropriate at scale.

This is precisely why our methodology sequences Design, Build, Manage as a continuous cycle. The Manage phase is not clean-up after the build. It is the operational discipline that determines whether the investment compounds in value over time or degrades quietly until something breaks.


Use the readiness check below to see where your own foundations stand across these five areas before you deploy.

Foundation Readiness Check

Is AI about to amplify your strengths — or your gaps?

Rate your organisation across five dimensions. Takes about two minutes.

  1. Data Quality: How would you describe the data your AI will consume?
  2. Data Architecture: How would you describe your current data platform?
  3. Processes: How well-defined are the business processes AI will operate within?
  4. Strategic Clarity: How clearly is your AI initiative tied to specific business outcomes?
  5. Platform Governance: How mature is the governance around the platform your AI will depend on?

Choosing the right problem to accelerate

The conversation in most enterprises has shifted from “should we do AI?” to “why is our AI not delivering?” The DORA research, IBM’s CDO study, and the BARC trend data point to the same answer: the constraint is almost never the model.

The organisations extracting measurable value from AI right now invested in foundation work before they deployed. They addressed data quality before they built pipelines. They modernised their architecture before they built on top of it. They defined the semantic layer before they trained models on it. They built governance into the platform before the platform scaled beyond their ability to govern it retrospectively.

The organisations still cycling through proofs of concept that never reach production typically started with the model and assumed the foundations would follow.

AI accelerates whatever direction you are already heading. The question worth settling before you deploy is whether that direction is the one you actually want to go faster in.


Ready to identify where your AI programme’s primary constraint sits? Take our AI Maturity Assessment to get a personalised diagnostic across Design, Build, and Manage. In under three minutes, you will receive specific recommendations for your situation.

Take the Assessment →


Sources

  1. DORA / Google Cloud. “2025 DORA Report: State of AI-Assisted Software Development.” September 2025.
  2. DORA / Google Cloud. “2025 DORA AI Capabilities Model.” September 2025.
  3. BARC. “The Data, BI and Analytics Trend Monitor 2026.” Summer 2025 (1,579 respondents).
  4. IBM Institute for Business Value. “The 2025 CDO Study: The AI Multiplier Effect.” 2025.
  5. McKinsey & Company. “The State of AI: Agents, Innovation, and Transformation.” 2025.
  6. Jellyfish / Nathen Harvey. “AI as Amplifier: Taking a Closer Look at the 2025 DORA Report.” November 2025.