Heuristic

Polish and volume no longer signal effort

The signals that used to tell reviewers about work quality — volume, polish, comprehensiveness — correlated with effort because effort was scarce; with AI the correlation breaks, and the questions that still discriminate are about process.

Last updated: 24 April 2026 · First captured: 24 April 2026

ai-literacy · professional-services · ai-adoption

Partners, managers and editors have traditionally used a small set of surface signals to assess work: the length of the document, the polish of the prose, the comprehensiveness of the analysis, the breadth of the options considered. These signals worked because producing them took effort, and effort correlated with care. A twenty-page memo with clean sub-headings indicated that someone had spent the hours to produce it.

With AI assistance, the effort-to-surface ratio breaks. Producing a long, polished, comprehensive piece of work now tells a reviewer only that someone knew how to brief an AI and had enough taste to edit the result. Possibly less than that. The memo looks the same as it did five years ago; it is no longer the same signal.

What questions still discriminate

The questions that still produce useful information about quality are about process, not appearance. What alternatives were considered, and why were they set aside? Where did the AI help, and where did the author push back on what it suggested? What was checked that might not otherwise have been checked, because AI made the check cheap enough to do? What does the author think is weakest about the piece?

None of these are answered by reading the deliverable. They require a conversation between the reviewer and the author, and the conversation takes time. That is part of the point: the surface of AI-produced work passes the old tests too easily, so the slower, costlier questions are the only ones that still produce information.

How to apply the heuristic

For a reviewer, the working change is to treat the deliverable as the artefact of a process and ask about the process. For an author, the working change is to be ready to answer — to hold on to the alternatives considered, the disagreements with the AI, the checks performed, the weaknesses known but not yet resolved.

There is a second-order effect for client-facing firms. Forward-leaning clients will be asking similar process questions about the firm’s own work before long. A firm whose people are used to having those conversations internally will find them easier to have with clients than a firm whose internal review still relies on the old surface signals. The cost of the shift is paid at review time either way; the firms that pay it early get the information sooner.

The related failure mode — that AI’s fluent output can be wrong in ways the surface does not reveal — is set out in AI’s most dangerous failure mode is confident wrongness.