
organisational-readiness

13 notes tagged with this theme.


Pattern · The mid-tier AI adoption threshold
In mid-tier organisations, the daily pressure of business-as-usual sets a payoff threshold that typical AI gains do not clear, so adoption stalls even when tools and training are in place.
Updated 24 Apr 2026

Pattern · AI removes the practical ceiling on workplace surveillance
Comprehensive workplace monitoring was always theoretically possible but practically capped by human review capacity; AI removes that cap, and the capability itself reshapes behaviour whether or not it is used.
Updated 24 Apr 2026

Heuristic · Expect AI to surface authenticity gaps between stated and actual values
An AI system that takes an organisation's stated values seriously will quickly surface where stated and actual behaviour diverge; leadership should expect and plan for these findings before commissioning the work, because surfacing them without being prepared to respond is worse than not surfacing them at all.
Updated 24 Apr 2026

Heuristic · Audit client agreements for AI silence
Most firms' client agreements were drafted before AI became a live question and are silent on both the firm's AI use in delivering work and the client's permitted AI use on the firm's output; that silence inherits defaults by omission and leaves the firm exposed under privacy regulation and professional guidance.
Updated 24 Apr 2026

Heuristic · Channel shadow AI use as signal, not risk to suppress
In most organisations, staff are already using AI in ways leadership has not sanctioned; treating that shadow use as evidence of real work-in-context, rather than as compliance risk, reveals use cases, knowledge gaps and adoption blockers that top-down planning will not find.
Updated 24 Apr 2026

Heuristic · Hire for durable AI judgement, not transient AI mechanics
AI skills split into durable judgement (when to use AI, how to structure problems for it, how to verify output, where not to use it) and transient mechanics (specialist prompt engineering, bespoke pipelines that platforms will absorb). Hire and train for the first; be sceptical of the second.
Updated 24 Apr 2026

Heuristic · Involve sceptics early in AI initiatives
Sceptics are more valuable than advocates during the design of an AI initiative because they see the failure modes most clearly; involve them early in roles that protect against the failures they fear, rather than sidelining them as resistant to change.
Updated 24 Apr 2026

Heuristic · Leadership team AI fluency must be collective, not individual
A single AI-fluent leader in an otherwise unfluent team creates strategic blind spots rather than an advantage; fluency has to be built across the leadership team together, because uneven adoption at the top propagates as inconsistent AI strategy below.
Updated 24 Apr 2026

Heuristic · Use a frontier LLM as a personal AI mentor
Use a frontier LLM as a conversational partner for learning about AI itself: ask it about its capabilities, limitations and appropriate use cases while doing real work with it. The self-directed, contextualised learning this produces outperforms the structured training programmes it replaces.
Updated 24 Apr 2026

Heuristic · Make tacit knowledge explicit, or AI cannot use it
AI cannot interpret the unwritten assumptions that shape how an organisation actually works; operational self-description is a precondition, not polish.
Updated 24 Apr 2026

Heuristic · Measure adoption, not just implementation
Deploying an AI tool and reporting success are not the same thing; track active use rather than availability, because the gap between the two is where unvoiced resistance hides and where the investment fails to earn its return.
Updated 24 Apr 2026

Case study · An ongoing AI advisory engagement with a growing firm
An abstracted single-engagement case study showing how a growing firm used an ongoing AI-strategy advisory relationship, covering market scanning, implementation oversight and staff coaching, to navigate AI adoption without diverting internal attention from operational delivery.
Updated 24 Apr 2026

Pattern · Unvoiced staff resistance is the primary failure mode of AI initiatives
The most insidious threat to AI adoption is not technical or budgetary but behavioural: staff publicly support the initiative while privately declining to adopt it, expressing resistance through plausible non-compliance rather than open challenge.
Updated 24 Apr 2026