The demo went well. Genuinely well. The model performed, the use case resonated, and the room was engaged. Leadership approved the next phase. A delivery team was assembled.
Six months later, the technology is sitting largely unused.
This pattern is not rare. McKinsey’s 2025 State of AI survey found that 88 percent of organisations now use AI in at least one business function, but nearly two-thirds have not yet begun scaling it across the enterprise. Only around one in three has genuinely moved from pilots to something that functions as core business infrastructure. The technology adoption curve has been climbed. The organisational adoption curve largely hasn’t.
What happens in that gap is almost never a technical failure. It is a human systems failure, and addressing it requires a fundamentally different set of skills than the ones that delivered the pilot.
The trust gap nobody budgets for
Prosci’s research, drawing on 1,107 professionals across frontline employees, team leaders, and executives, found that 63 percent of AI implementation challenges stem from human factors rather than technical limitations. User proficiency alone accounts for 38 percent of all reported AI implementation difficulties, dramatically outpacing technical issues, which account for just 16 percent.
The same research reveals something more uncomfortable: trust in AI is not evenly distributed across organisational levels. Executives express strong confidence in AI capabilities. Team leaders show moderate, cautious optimism. Frontline workers demonstrate minimal trust: they remain sceptical about AI’s value and have very little say in which tools they are expected to use.
This disparity creates a feedback loop that is hard to see from the top. Senior leaders look at positive metrics from the pilot and assume adoption is progressing. Meanwhile, the people whose daily work depends on actually using the system are quietly ignoring it or working around it. The numbers that reach the boardroom remain encouraging. The reality on the ground does not match them.
Completing the Microsoft AI Transformation Leader certification, which I took in its beta phase, reinforced something that the research consistently bears out: the skills required to lead AI adoption across an organisation (evaluating opportunities, aligning AI investments with business goals, championing responsible practices) are distinct from the skills required to build the technology. Most organisations resource the latter and underinvest in the former. Deloitte’s 2026 State of AI in the Enterprise report frames this as the gap between AI ambition and AI activation. The ambition is largely there. The organisational capability to activate it is not.
Why this is harder than other technology change programmes
Having held a Change Management Foundation qualification alongside PRINCE2, Agile PM, and CSM for over a decade, and having delivered programmes across Government, Oil & Gas, Manufacturing, and Automotive, one thing has become clear to me: AI transformation is categorically different from most technology change.
Standard change management frameworks are built around discrete, defined transitions. You move from state A to state B. You plan the communication, you train the users, you measure adoption, you close the project. The change has an end date.
AI change does not work this way. Prosci’s analysis of AI change practitioners across hundreds of organisations identified this as the defining characteristic: AI adoption is a continuously shifting target. The technology evolves between planning and deployment. New capabilities emerge during rollout. The model that was defined in the design phase may behave differently in production than it did in testing. Reinforcement is not a finite phase; it becomes an ongoing operational requirement.
This matters because most programme governance structures are not built for it. Projects have closure criteria. AI adoption programmes need operating model criteria. The question is not “did we deploy the tool?” but “has the tool changed how decisions are made?” Those are very different questions, and they require very different accountability structures.
BCG’s research found that approximately 70 percent of AI implementation challenges relate to people and processes rather than technical failures. A separate Kyndryl survey found that while 95 percent of senior executives reported investing in AI, only 14 percent felt they had successfully aligned their workforce, technology, and business goals. Nearly half of the CEOs surveyed said most of their employees were resistant or openly hostile to AI-driven changes. The top obstacles were not technical: they were lack of effective change management, low employee trust in AI, and workforce skills gaps.
The five failure modes between demo and adoption
These are the specific points where well-resourced, technically sound AI programmes break down between a successful pilot and meaningful organisational adoption.
1. The business owner is missing
In programme and change management, the “business owner” is the senior person on the business side who is accountable for the outcomes the AI is supposed to deliver: a Head of Operations, a Finance Director, a Commercial Lead, whoever owns the process or function the AI is being deployed into. This is a distinct role from the technical project sponsor or the delivery team.
Every successful AI programme needs this person. In practice, the role is often conflated with the executive who approved the budget, attended the demo, and then returned to their other responsibilities.
A business owner in the change management sense has a different mandate. They are responsible for ensuring the organisation can absorb the change. They translate model outputs into operational decisions. They hold the relevant teams accountable for adoption, not just deployment. They resolve the inevitable conflicts between the way things have always been done and the new operating model the AI requires.
Without this person, teams default to their prior workflows the moment the pilot team moves on. The technology sits available. The working practice does not change.
2. Success metrics were defined by the data team
A pilot that achieves 94 percent model accuracy is a technical success. Whether it is a business success depends entirely on whether anyone changed their behaviour as a result of what the model produced.
This distinction seems obvious when stated plainly. In practice, the metrics tracked and reported during the pilot phase are almost always the ones the data team can measure: model performance, prediction accuracy, system latency, pipeline reliability. These are the right metrics for assessing the technology. They are the wrong metrics for assessing adoption.
Business KPIs for AI fall into three tiers, and all three need to be defined before the pilot closes.
Adoption KPIs tell you whether people are actually using the system: active user rate, time-to-proficiency from first use to consistent use, manual override rate (how often users ignore the AI’s recommendation, one of the clearest signals of trust failure), and workflow integration rate. A short sketch of how two of these might be computed appears below.
Operational KPIs tell you whether the process has changed: process cycle time in the affected area, error or exception rate, decisions made per person in scope, and the proportion of in-scope decisions that were informed by the model rather than made without it.
Financial KPIs tell you whether the business outcome has shifted: cost per unit of output in the affected process, revenue impact where the capability is customer-facing, and time reclaimed and redeployed to higher-value work. BCG data suggests AI can reclaim between 26 and 36 percent of time in roles that are routine and data-heavy.
The critical point the research makes consistently: financial KPIs take twelve to eighteen months to show up clearly. Adoption and operational KPIs are the leading indicators. Organisations that only track financial impact in the first six months will conclude the programme is underperforming, because the financial signal has not materialised yet, even when adoption is progressing well.
McKinsey identifies tracking well-defined KPIs from the start as the single management practice with the highest correlation to AI value realisation. The difference between high-performing organisations and the majority is that they treat adoption measurement as infrastructure, not an afterthought.
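To make the adoption tier concrete, here is a minimal sketch of how two of those metrics, active user rate and manual override rate, might be computed from a raw usage log. The UsageEvent structure, its field names, and the per-interaction logging are illustrative assumptions rather than a prescribed implementation; most teams will pull these figures from an existing analytics platform, but the underlying definitions are the same.

```python
# Minimal illustration (assumed, simplified data shape): two adoption KPIs
# computed from a per-interaction usage log.
from dataclasses import dataclass


@dataclass
class UsageEvent:
    user_id: str
    followed_recommendation: bool  # did the user act on the AI's output?


def adoption_kpis(events: list[UsageEvent], in_scope_users: int) -> dict[str, float]:
    active_users = {e.user_id for e in events}
    overrides = sum(1 for e in events if not e.followed_recommendation)
    return {
        # Share of the in-scope population that has used the system at all
        "active_user_rate": len(active_users) / in_scope_users,
        # Share of recommendations users ignored: a leading signal of trust failure
        "manual_override_rate": overrides / len(events) if events else 0.0,
    }


# Example: three interactions from two users, out of a ten-person in-scope team
events = [
    UsageEvent("user_a", True),
    UsageEvent("user_a", False),
    UsageEvent("user_b", True),
]
print(adoption_kpis(events, in_scope_users=10))
# => active_user_rate 0.2, manual_override_rate roughly 0.33
```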
Download the reference card: Get the AI Initiative KPI Reference Card — a one-page framework covering all three KPI tiers with example metrics, timelines, and how to use them at each stage of your programme.
3. Workflows were not redesigned around the AI
This is the most expensive failure mode because it is often invisible until significant investment has already been made. The AI capability is deployed on top of existing processes rather than integrated into redesigned ones. People are expected to consult the model alongside the things they were already doing, rather than having their workflow fundamentally restructured to make the AI’s output the primary input.
McKinsey’s 2025 analysis is unambiguous on this point: AI high performers are nearly three times more likely to fundamentally redesign their workflows than other organisations. Half of those high performers use AI to transform how the business operates. The majority of organisations layer AI onto what already exists, which captures a fraction of the available value and places the burden of adoption on individual users rather than embedding it in the process.
From a programme delivery perspective, workflow redesign is a pre-implementation activity, not a post-deployment one. By the time the system is live, the window for redesigning the process around it has largely closed. The people who need to operate differently are already anchored to how they worked before the AI arrived.
4. Communication stopped at go-live
Programme communications in most organisations peak around two moments: the announcement of the initiative and the launch. The period that determines actual adoption (the weeks and months after go-live, when users are forming new habits, encountering friction, and deciding whether the tool is worth the effort) is typically the quietest.
The research on what separates successful AI rollouts from unsuccessful ones returns to communication consistently. Not the launch communication, but the ongoing narrative: what is working, what has changed, who is using it well, what problems have been resolved, what is being learned. This is the internal evidence-building that creates credibility across the organisation. Without it, sceptical users have no signal that adoption is worthwhile, and the tool fades.
PRINCE2’s principle of continued business justification applies directly here. The business case for the AI capability does not become self-evidencing at go-live. It needs to be actively demonstrated throughout adoption, and that requires structured communication cadences well beyond the project closure date.
5. The change was positioned as a technology project
AI adoption fails at the highest rate in organisations that frame it as an IT initiative. The framing matters because it determines who is accountable, what budget gets allocated, what governance structure oversees it, and what the measure of success is.
An IT initiative succeeds when the system is deployed, stable, and performing to specification. An organisational change initiative succeeds when people work differently as a result. These are not the same thing, and conflating them means the investment in the technology is measured on the wrong dimension.
The organisations achieving sustained value from AI have reframed the initiative at leadership level. AI is a business strategy challenge that requires technology to execute. That framing changes the sponsorship model, the change management investment, and the accountability structure. It moves the conversation from “did we deploy it?” to “did the business change?”
Use the diagnostic below to check your own programme against each of these five failure modes and get a risk profile with recommended next steps.
AI Adoption Readiness Diagnostic
Five questions mapping to the five failure modes between a successful AI pilot and meaningful organisational adoption. Answer each to get your risk profile and recommended actions.
What good change management for AI actually looks like
Change management for AI does not follow a linear implementation plan. Given the pace at which the technology evolves, the governance structure needs to be genuinely adaptive, capable of absorbing new capabilities, updated models, and shifting use cases without requiring a full programme restart each time.
Establish the business owner before the technical team is assembled. The person accountable for adoption and outcomes needs to be in the room for design decisions, not introduced at handover. Their job is to represent the operational reality that the technology will need to function within, and to hold the receiving organisation accountable through the adoption period.
Design the adoption metrics before the pilot closes. What does “the AI is working” look like in business terms? How will you measure whether frontline users are acting on model outputs? These questions need answers agreed before go-live, because the data needed to answer them retrospectively often does not exist.
Redesign the workflow before the technology is deployed. The integration point between the AI’s output and the human decision needs to be defined in the process design phase. If the workflow redesign is left until after deployment, you are asking users to change how they work at the same time as they are learning a new tool. The cognitive load and the resistance compound each other.
Build a communication programme that extends well beyond launch. The adoption period needs a structured cadence of internal evidence: case studies from early users, data on decisions influenced, problems resolved, improvements made. This is the mechanism by which the organisation builds confidence in the technology, which is the prerequisite for sustained adoption.
Treat the change as continuous, not discrete. AI capabilities will change during the programme and after it. The governance model needs to accommodate this. A dedicated adoption function, whether internal or supported externally, provides the continuity that a time-limited project cannot.
The role of programme delivery discipline
Agile project management and PRINCE2 address different dimensions of AI programmes, and both are needed. PRINCE2’s emphasis on defined business cases, stage gates with continued justification, and exception-based reporting gives AI programmes the governance structure they need to remain accountable. Agile’s iterative approach accommodates the uncertainty inherent in AI delivery, where requirements shift as capabilities become clearer and user feedback refines the use case.
The programmes that deliver sustained AI adoption tend to use both: a structured outer governance framework that maintains business case accountability, and an iterative inner delivery approach that accommodates learning. The PRINCE2 principle of managing by stages is particularly relevant. Each adoption stage should have defined success criteria in business terms, not just technical completion criteria, before the programme proceeds.
Scrum’s retrospective practice has a useful analogue in AI adoption: the regular review of what is working, what is not, and what needs to change. In technology delivery this is a development team practice. In AI adoption it needs to be an organisational practice, involving the business owner, the change lead, and the operational teams who are living with the technology daily.
The design phase is where adoption is won or lost
Our methodology at Orion is Design First, Build Smart, Manage Continuously. The change management argument for that sequencing is that the decisions made in the Design phase determine whether the Build phase produces something the organisation can actually absorb.
Getting the design right means defining the business outcomes before the technical requirements, establishing the ownership model before the delivery team is assembled, and confirming the adoption pathway before the first line of code is written. This is not slowing down the programme. It is the difference between delivering a capability and delivering an outcome.
The demo that went well produced a technical capability. Turning that capability into a business outcome is a change management challenge, and it requires the same rigour, the same resourcing, and the same leadership attention as the technical delivery that preceded it.
Ready to identify where your AI programme’s primary constraint sits? Take our AI Maturity Assessment to get a personalised diagnostic across Design, Build, and Manage. In under three minutes, you will receive specific recommendations for your situation.
Sources
- McKinsey & Company. “The State of AI: Agents, Innovation, and Transformation.” 2025.
- McKinsey & Company. “Are Your People Ready for AI at Scale?” March 2026.
- Prosci. “Why AI Transformation Fails: Keys to Unlocking AI Adoption.” September 2025.
- Prosci. “8 Ways AI-Driven Change is Different.” June 2025.
- BCG. “AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value.” October 2024.
- California Management Review. “Bridging the Gaps in AI Transformation: An Evidence-Based Framework for Scalable Adoption.” November 2025.
- Deloitte. “State of AI in the Enterprise 2026.” January 2026.
- Google Cloud. “KPIs for Gen AI: Measuring Your AI Success.” November 2024.