Why AI Adoption Keeps Stalling, And What to Do About It
Many organisations investing in AI are not seeing the returns they expected. The problem is that they treat AI adoption as a technology problem when it’s in fact a human behaviour problem.
Adoption is not a yes/no question
AI adoption is a behavioural process, not a switch. It runs on a continuum from occasional, individual use at one end to AI embedded in core operations and strategy at the other. High usage numbers can look encouraging while meaningful change in how work actually gets done remains out of reach. Understanding where your organisation sits on that continuum — and what you are actually trying to achieve — is the starting point for any adoption work worth doing.
The five levels of adoption
Based on the Wharton AI Adoption Report and the MIT NANDA State of AI in Business 2025, I use five levels to map where organisations typically are: Experimenting, Productivity, Functional, Operational, and Strategic.
Most organisations today are concentrated at levels two and three: individuals using AI to work faster, and some teams building it into specific workflows. Levels four and five, where AI reshapes how work is structured and what the organisation can do, remain rare.
The barriers are different at every level
This is where most adoption programmes go wrong.
The barriers at the individual level are primarily about motivation and relevance — people have not yet had a moment where AI does their specific job noticeably better.
At team and functional level, the barriers shift to workflows, social norms, and what managers are modelling day to day.
At operational level, the barrier becomes an organisational design problem. Training does not touch it.
Applying the same intervention regardless of where people are is a common and costly mistake in AI adoption work.
A three-stage cycle that actually works
Once you are clear about what level of adoption you are targeting and for whom, the work becomes a three-stage behaviour change cycle.
First, diagnose the binding constraint: is it motivation, capability, or opportunity? Second, design interventions that match the barrier rather than defaulting to comms and training. Third, measure whether both the behaviour and the relevant business outcome have actually changed, not just whether activity has happened.
The diagnosis stage also clarifies the business outcome and sets up your scorecard, so measurement is straightforward rather than retrofitted. This cycle repeats as your ambition evolves and context changes.
What this means for people leaders
The Wharton report found that nearly three-quarters of business leaders are now tracking structured, business-linked ROI metrics. Budget discipline and ROI rigour are becoming the operating model for AI investment. That raises the stakes for getting the approach right. The organisations seeing real impact are not necessarily the ones with the biggest budgets or the most sophisticated tools. They are the ones that are clear about what they are targeting, honest about what is in the way, and disciplined about measuring what actually matters.