Hire for durable AI judgement, not transient AI mechanics
AI skills split into durable judgement — when to use AI, how to structure problems for it, how to verify output, where not to use it — and transient mechanics — specialist prompt engineering, bespoke pipelines platforms will absorb. Hire and train for the first; be sceptical of the second.
When organisations set out to build AI capability, the most visible option is usually mechanical: hire specialist prompt engineers, stand up bespoke agent frameworks, build custom retrieval pipelines. These roles are concrete, the skills are nameable, the outputs are easy to demonstrate. The trouble is that most of what they do is exactly what the platform vendors are absorbing each quarter into the tools everyone already has.
A durable AI skill is different. It is a skill of judgement — knowing when AI is the right tool for a task, how to structure a problem so AI can help, how to verify output that sounds plausible but may be wrong, where not to use AI at all. These skills do not show up in job titles easily, they are harder to demonstrate in an interview, and they take time to develop. They are also the skills that do not become obsolete when the next platform release ships.
A simple test
A useful filter when a new AI role or training programme is proposed is the twelve-month vendor test. Would this skill still matter if the platform vendor shipped the capability natively in twelve months? For judgement skills the answer is yes: knowing when to use AI, verifying its output, structuring problems for it — these are unaffected by what the platform does. For most mechanical skills the answer is no: the prompt library gets absorbed, the pipeline gets commoditised, the bespoke framework gets replaced by the vendor’s default.
The test is not a prohibition on mechanical skills. Some are genuinely worth building, usually because the specific context is unusual enough that vendor coverage will lag. But the default posture toward mechanical AI roles and training should be sceptical, and the default posture toward judgement-shaped roles and training should be generous.
How to apply the heuristic
In hiring, the test applies to role descriptions before interviews. A role specified as “senior prompt engineer” reads as a mechanical-skills hire and is worth pausing to reconsider. A role specified as “AI-assisted [substantive professional function]” reads as a judgement-skills hire and is probably the shape the firm actually needs. The former is likely to date fast; the latter is likely to hold.
In training, the pattern is similar. A programme that teaches the current prompt-engineering techniques will date with the techniques. A programme that develops the judgement to use AI well — including judgement about when not to use it, how to know when it is wrong, and how to teach others the same — is a durable investment. The underlying principle is that AI literacy is not a one-off training problem: judgement-shaped capability has to be developed over time rather than installed in a workshop.
The parallel with architecture is structural: the same reasoning is set out in “Architect AI around principles, not vendors” for the technology layer. The HR layer and the architecture layer make the same choice for the same reason.