Make tacit knowledge explicit, or AI cannot use it
AI cannot interpret the unwritten assumptions that shape how an organisation actually works; operational self-description is a precondition, not polish.
Most organisational knowledge is implicit. It lives in the heads of experienced staff, in the assumptions behind processes, in the unwritten rules of how things get done, in the things people know they know but have never had reason to say out loud. A human joining the organisation learns this by being there — sitting in on meetings, being corrected, noticing patterns, asking questions. An AI does not have that option. It needs explicit, written, structured inputs; and what is not written down is, for its purposes, not there.
Why the implicit layer is the hard part
The implicit layer is hard to extract precisely because the people who hold it often do not realise they do. A senior consultant asked “how do you approach an engagement like this?” answers in generalities, because the specific judgements that shape their approach have become automatic. The same consultant in the middle of an engagement is making those specific judgements continuously, drawing on assumptions nobody has ever articulated.
Extracting that layer is interview work, observation work, and writing work. It is slow. It involves sitting with senior staff and asking questions that feel naive. It involves watching how decisions actually get made, rather than reading the documented process. It produces documents that did not previously exist, because the knowledge they encode had never been written down.
The output is often surprising to the people whose knowledge it captures. “I didn’t realise we did it that way” is a common reaction, because the practice had become invisible to them.
What follows
An AI that has access only to the codified layer of an organisation — policies, procedures, contracts, templates — is working with the shallow end of what the organisation knows, and its outputs will be correspondingly generic. The firms that invest in making the implicit layer explicit are building a context asset that AI can actually use, and that asset is one of the durable forms of advantage available.
Ethan Mollick recently suggested a practical starting point: describe what your organisation actually does, in writing, for AI. Not marketing copy, not documented process, but the real operational description — how decisions are made, what the implicit rules are, what the organisation actually values as distinct from what it says it values. That kind of institutional self-knowledge is what AI needs to be useful, and most organisations lack it.
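As an illustrative sketch only — the structure, field names, and every example rule below are hypothetical, not a standard format and not Mollick’s — such a description might be kept as a structured context file that can be loaded alongside an AI’s prompt:

```yaml
# Hypothetical operational self-description for a fictional advisory firm.
# All field names and rules are illustrative; the candour is the point.
organisation: Example Advisory Ltd
what_we_actually_do: >
  Mid-market restructuring advice. Most engagements are won on
  referral, not proposal, so relationship history matters more
  than the pitch deck suggests.
how_decisions_are_made:
  - Engagement pricing is set by the lead partner, not the rate card.
  - Anything touching a long-standing client goes past the founder,
    even where the org chart says otherwise.
implicit_rules:
  - Never put a preliminary estimate in writing before the data room opens.
  - Draft reports circulate internally for 48 hours before any client sees them.
stated_vs_actual_values:
  stated: innovation
  actual: reliability of delivery dates over novelty of approach
```

The format matters far less than the honesty: the value is that decision rules and unwritten norms now exist in writing, where an AI can read them.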
The work is not a weekend project. Done properly it takes months, involves interviews and observation, and produces documents that have to be maintained. But it is foundational in the sense that “Start with knowledge management, not tools” points at: the tools on top only work as well as the context they can draw on, and the context has to include the part that has never been written down.