Passive AI adoption is an implicit policy choice
Where an organisation has not made explicit decisions about how AI will be used, the defaults of the tools and vendors become policy by inheritance; “we haven’t decided yet” functions as “we have accepted whatever happens”.
Most organisations arrive at AI adoption through a series of small procurement decisions rather than a single strategic choice. A meeting transcriber is licensed because the partners liked it. A document copilot is bundled into the productivity suite renewal. A vendor-specific AI appears inside the CRM. Each of these is a low-stakes decision on its own terms. Collectively, they determine how AI is used across the firm — and the aggregate is usually not something anyone chose.
The working rule is that “we haven’t decided yet” is not a neutral state. Where explicit decisions are absent, the defaults of the tools and the vendors become operational policy by inheritance. The vendor’s position on data retention becomes the firm’s retention policy. The tool’s default on who can see transcripts becomes the firm’s access policy. The product’s idea of what makes a useful summary becomes the firm’s idea of what gets captured from meetings. None of these are things the firm would necessarily have chosen deliberately; all of them are things the firm has accepted by not deciding.
Where the heuristic bites
Three contexts are worth calling out.
Surveillance and monitoring posture. See AI removes the practical ceiling on workplace surveillance. The capability gets installed through ordinary productivity tools; the policy gets inherited from the vendors unless the firm makes its own decision.
Data retention and sharing. Meeting transcripts, prompt histories, document drafts sent to AI assistants — each of these is retained somewhere, usually for longer than staff assume, and used for something, often including model training. “We didn’t think about it” means the retention and use defaults have become policy.
Adoption pattern and quality ceiling. The story in A tools-first AI rollout that plateaued is partly a story of passive adoption: tools were deployed without the foundations decisions that would have made them useful, because nobody decided not to deploy them, and the plateau followed. The heuristic applies in the foundations layer as much as in the governance layer.
How to apply
The move is small but consistent. For any AI tool being considered for adoption, ask what the defaults are, what they imply for behaviour, data and observation, and whether those implications are ones the firm would have chosen deliberately. Where the answer is no, the decision to accept the defaults should be made explicitly rather than by omission. “We decided to accept the vendor’s position on X” is a defensible choice; “we did not examine what the vendor’s position on X was” is not.
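The review move above can be sketched as a simple checklist structure. This is an illustrative sketch only: the tool name, default settings and implications below are hypothetical examples, not real vendor positions, and the point is the distinction between defaults that were examined and defaults that were merely inherited.

```python
# Sketch of the default-review move: for each vendor default, record what
# it implies and whether the firm has explicitly decided to accept it.
# All tool names, settings and implications here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Default:
    setting: str                      # the vendor default under review
    implication: str                  # what it implies for behaviour, data or observation
    accepted: Optional[bool] = None   # None = never examined: policy by inheritance

def unexamined(tool: str, defaults: list) -> list:
    """Return the defaults that became policy by omission rather than decision."""
    return [
        f"{tool}: '{d.setting}' not examined ({d.implication})"
        for d in defaults
        if d.accepted is None
    ]

# Hypothetical meeting-transcriber defaults.
transcriber = [
    Default("transcripts retained 24 months", "retention policy by inheritance"),
    Default("transcripts visible to all licence admins", "access policy by inheritance",
            accepted=True),  # examined and explicitly accepted: a defensible choice
]

for gap in unexamined("meeting transcriber", transcriber):
    print(gap)
```

The asymmetry in the closing sentence is what the `accepted` field captures: `True` records “we decided to accept the vendor’s position”, while `None` flags “we did not examine it”, which is the state the heuristic says to eliminate.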