Heuristic

Measure adoption, not just implementation

Deploying an AI tool and reporting success are not the same thing; track active use rather than availability, because the gap between the two is where unvoiced resistance hides and where the investment fails to earn its return.

Last updated 24 April 2026 · First captured 24 April 2026

ai-adoption · organisational-readiness · strategic-framing

The natural metrics for an AI initiative are implementation metrics. Licences purchased. Staff trained. Systems integrated. Rollouts completed. They are easy to count, easy to report and easy to chart upwards. They are also almost entirely disconnected from whether the initiative is working.

The working rule is to track active use rather than availability. Daily active users, number of queries, types of tasks attempted, the proportion of licensed staff who return to the tool after the first week. Low adoption on any of these is information: it signals where unvoiced resistance is surfacing in the data (see Unvoiced staff resistance is the primary failure mode of AI initiatives), where infrastructure or knowledge gaps are blocking use, or where the tool was oversold relative to what staff actually need.
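
A minimal sketch of what that tracking can look like, assuming a usage log with one record per query. The field names, user identifiers and the three-week framing are illustrative, not taken from any real system:

```python
from collections import defaultdict
from datetime import date, timedelta

# Hypothetical usage log: one record per query made against the tool.
usage_log = [
    {"user": "a.khan", "day": date(2026, 4, 1), "task": "drafting"},
    {"user": "a.khan", "day": date(2026, 4, 12), "task": "summarising"},
    {"user": "b.osei", "day": date(2026, 4, 2), "task": "drafting"},
]
licensed_users = {"a.khan", "b.osei", "c.lim"}  # everyone with a seat

def daily_active_users(log):
    """Distinct users per day: the basic 'is anyone actually using this?' signal."""
    dau = defaultdict(set)
    for rec in log:
        dau[rec["day"]].add(rec["user"])
    return {day: len(users) for day, users in sorted(dau.items())}

def task_mix(log):
    """What kinds of work the tool is being asked to do."""
    counts = defaultdict(int)
    for rec in log:
        counts[rec["task"]] += 1
    return dict(counts)

def week_one_retention(log, licensed):
    """Proportion of licensed staff who come back after their own first week."""
    first, last = {}, {}
    for rec in log:
        u, d = rec["user"], rec["day"]
        first[u] = min(d, first.get(u, d))
        last[u] = max(d, last.get(u, d))
    returned = {u for u in licensed
                if u in first and last[u] > first[u] + timedelta(days=7)}
    return len(returned) / len(licensed)
```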

Why implementation metrics mislead

Implementation metrics are built for the programme management layer. They answer “did we do the thing?” rather than “is the thing working?”. For a conventional software rollout those two questions usually converge — if the system is installed and staff are trained, it gets used. For AI rollouts they diverge, because AI is fragile enough, counterintuitive enough and politically sensitive enough that installation does not produce use.

Reports built on implementation metrics tell leadership that the programme is succeeding at precisely the moments it is not. The gap between rollout completion and actual adoption can run for months, and the longer leadership operates on the implementation numbers, the less time there is to correct before budgets and patience run out.

What adoption metrics reveal

Three categories of signal emerge from adoption tracking.

Use that is actually happening, which identifies where AI is earning its keep. These are the cases to protect, resource and build on.

Use that started but stopped, which identifies where a first impression was good but sustained engagement did not hold. Usually the tool is fine and the supporting infrastructure — context, training, workflow fit — is not. See Useful AI is a context problem for why this is the usual diagnosis.

Use that never started, which identifies where leadership assumed a use case that staff did not share. These are the cases to de-scope or re-explain rather than push.
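
One way to pull those three categories out of the same usage log is a simple classification over each user's last recorded activity. This is a sketch under the assumption that "stopped" means no activity in the last three weeks; the threshold is arbitrary and worth tuning to the workflow:

```python
from datetime import timedelta

def classify_adoption(usage_log, licensed_users, today, lapse_after_days=21):
    """Split licensed staff into the three signals above: 'active' (use that
    is happening), 'lapsed' (use that started but stopped) and
    'never_started' (use that never began)."""
    last_seen = {}
    for rec in usage_log:
        user, day = rec["user"], rec["day"]
        if user not in last_seen or day > last_seen[user]:
            last_seen[user] = day

    cutoff = today - timedelta(days=lapse_after_days)
    groups = {"active": set(), "lapsed": set(), "never_started": set()}
    for user in licensed_users:
        if user not in last_seen:
            groups["never_started"].add(user)
        elif last_seen[user] >= cutoff:
            groups["active"].add(user)
        else:
            groups["lapsed"].add(user)
    return groups
```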

Also worth tracking: where could AI be used but is not? That question often highlights more important issues than the adoption data itself. It connects to Channel shadow AI use as signal, not risk to suppress — gaps between official and unofficial use are usually pointing at infrastructure that needs work, and the map of that gap is one of the most useful artefacts an adoption-measurement programme produces.
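
If any record of unofficial tool use exists at all (surveys, expense lines, rough self-reports), that gap map can start as little more than a set difference grouped by team. Everything here, from the inputs to the team lookup, is an assumption about what data happens to be available:

```python
from collections import defaultdict

def gap_by_team(official_users, shadow_users, team_of):
    """Staff seen using unsanctioned AI tools but not the official one,
    grouped by team. `team_of` maps a user identifier to a team name;
    how shadow use is detected is deliberately left open."""
    gap = defaultdict(set)
    for user in shadow_users - official_users:
        gap[team_of.get(user, "unknown")].add(user)
    return {team: sorted(users) for team, users in gap.items()}
```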