In 2022, a major European bank invested forty million euros in an AI-powered document drafting assistant. Eighteen months later, teams were still primarily using the old Word templates. The tool wasn't failing. By every benchmark, it was superior in quality and three times faster. Staff had completed the training. They'd attended the adoption workshops. And yet they systematically reverted to old practices the moment pressure mounted.
This case is not isolated. It is representative of a pattern we observe across virtually every large-scale AI deployment: adoption rates collapse not in the first weeks of enthusiasm, but in the first moments of real pressure.
What resistance actually measures
Change management literature has long interpreted this as a problem of competence or acceptance: more training, better communication, stronger manager buy-in. This reading is partly correct. Above all, it is insufficient.
What resistance actually measures is the coherence of the incentive system. When a senior consultant bypasses the AI analysis tool to manually produce a presentation — because their director still evaluates visual sophistication as a marker of personal investment — they are not resisting change. They are adapting perfectly to a system that hasn't changed.
AI deployed onto an untransformed organisation doesn't produce an augmented organisation. It produces an organisation in conflict with itself.
The three architectures of resistance
Our observations identify three structural configurations that predict adoption failure, regardless of tool quality.
The phantom metric. Organisations continue measuring pre-AI behaviours: documents produced, time in meetings, volume of requests handled. These metrics intrinsically reward the old workflows. Employees who let AI automate precisely those flows therefore score worse on their own performance indicators, and using the tool becomes individually irrational.
The middle manager in no-man's land. Middle management is the transmission belt of any transformation. It is also the most exposed to the contradiction between institutional discourse ("use AI") and evaluative reality ("show me your added value"). When these two demands conflict, the middle manager chooses the one that preserves their position.
The visible effort ritual. Some organisations have developed a culture of visible work as a signal of seriousness and commitment. In these contexts, AI-enabled speed becomes suspicious. "It only took two hours?" often implies "is this really work?" Perceptible effort remains the true measure of value.
What this means for leaders
AI transformation requires simultaneous intervention at three levels: the tool, of course, but also performance metrics and managerial rituals. These last two are the blind spots of virtually every deployment programme.
Concretely, this means revising evaluation frameworks to measure decision quality rather than activity volume; training managers to value AI-assisted outputs without erasing the human judgement that guided the tool; and restructuring reporting rituals so they reward the quality of the decisions made rather than the quantity of work displayed.
Without these adjustments, forty million euros of platform investment ends up in the drawer, shelved by perfectly rational resistance.