Users assume AI has access to information it does not have
Users routinely overestimate how much information AI has access to, treating it as if it were working from a complete picture; the overestimate compounds with the fluency of AI output to produce misplaced trust.
When a person asks an AI tool a question about their work, they usually imagine the AI drawing on “everything the organisation knows”. In practice the AI is drawing on whatever happens to be in its accessible context window, which is almost always a small, uneven fraction of what the organisation actually knows. The user’s mental model of the tool’s reach and its actual reach can be dramatically different, and the user rarely has a way to notice the difference.
This is distinct from the failure modes already described in “AI’s most dangerous failure mode is confident wrongness” and “Useful AI is a context problem”. Confident wrongness is a property of the AI’s output. Context inadequacy is a property of the inputs. Hidden knowledge gaps are a property of the user: a mental-model mismatch that causes the user to trust output that would not warrant trust if they understood how narrow the input was.
Why the mismatch persists
Three things entrench it. The AI’s interface hides the context boundary: the user sees a prompt and an answer, not the set of documents the model actually had access to. The organisation’s framing of AI as “our AI” suggests comprehensive knowledge, when in practice the access is piecemeal and frequently broken. And nothing in the AI’s own output signals what it didn’t have — the tool does not know what it is missing, so it cannot report the gap even if that would be useful.
Compound the mismatch with the failure mode in “AI’s most dangerous failure mode is confident wrongness” and the result is a well-shaped failure: the user asks a question assuming comprehensive input; the AI answers fluently from partial input; the output looks authoritative; and the user knows neither that the input was partial nor that the answer is wrong as a result.
How to use the heuristic
The practical move is to close the gap from both ends. On the AI side, make the context boundary visible in the interface where possible: show what was retrieved, what was available but unused, and what was out of scope. On the user side, teach the assumption of partial context as the default mental model, the opposite of the “everything the organisation knows” framing, and make it a habit, for any consequential use, to verify that the specific context the task required was actually present.
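One way to make the boundary concrete is to return a coverage report alongside every answer. The following is a minimal sketch, assuming a retrieval-style integration over a known corpus; `Document`, `ContextReport`, and `build_report` are hypothetical names, and the point is the three-way partition, not the retrieval logic.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    accessible: bool   # does this AI integration have read access?
    relevance: float   # score from whatever retriever is in use

@dataclass
class ContextReport:
    retrieved: list[str] = field(default_factory=list)           # reached the model
    available_not_used: list[str] = field(default_factory=list)  # reachable, did not fit
    out_of_scope: list[str] = field(default_factory=list)        # exists, no access

def build_report(corpus: list[Document], budget: int) -> ContextReport:
    """Partition the known corpus by what actually reached the model."""
    report = ContextReport()
    reachable = sorted(
        (d for d in corpus if d.accessible),
        key=lambda d: d.relevance,
        reverse=True,
    )
    for i, doc in enumerate(reachable):
        if i < budget:
            report.retrieved.append(doc.doc_id)
        else:
            report.available_not_used.append(doc.doc_id)
    report.out_of_scope = [d.doc_id for d in corpus if not d.accessible]
    return report

# Illustrative corpus: the most relevant document is the one the tool
# cannot see, which is exactly the situation the user never notices.
corpus = [
    Document("handbook/leave-policy", accessible=True, relevance=0.91),
    Document("wiki/old-leave-policy", accessible=True, relevance=0.44),
    Document("hr-drive/leave-exceptions", accessible=False, relevance=0.88),
]
report = build_report(corpus, budget=1)
print("Answered from:", report.retrieved)
print("Available but not used:", report.available_not_used)
print("Out of scope for this tool:", report.out_of_scope)
```

Even a crude report like this changes the user’s mental model: the answer arrives with an explicit statement of what it was and was not built from.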
The heuristic is also a useful diagnostic in adoption conversations. When a user reports that “the AI is good at most things but it sometimes gives me nonsense”, the question to ask is often not “what did it say” but “what did it have access to”. The answer is usually less than the user expected.
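Answering that diagnostic question reliably requires recording the context set per query, not reconstructing it from memory. A small sketch of what that logging might look like, with illustrative field names:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_context(query: str, retrieved_ids: list[str],
                sources_searched: list[str]) -> None:
    # One structured record per query: where the tool could look,
    # and what actually reached the model.
    logging.info(json.dumps({
        "query": query,
        "sources_searched": sources_searched,
        "retrieved_ids": retrieved_ids,
    }))

log_context(
    "What is our parental leave policy?",
    retrieved_ids=["handbook/leave-policy"],
    sources_searched=["handbook", "public-wiki"],  # note: HR drive absent
)
```

With records like these, “it sometimes gives me nonsense” becomes an answerable question about access rather than a vague complaint about the model.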