Use a frontier LLM as a personal AI mentor
Use a frontier LLM as a conversational partner for learning about AI itself — ask it about its capabilities, limitations and appropriate use cases while doing real work with it. The self-directed, contextualised learning this produces outperforms the structured training programmes it replaces.
The default approaches to building AI literacy — training courses, certification programmes, formal learning tracks — assume that AI is another tool someone teaches you how to use. A different approach has been working better in practice: use the AI itself as the teacher.
The tactic is to have staff use a frontier LLM as a personal AI mentor. Not just as a tool for doing work, but as a conversational partner for building their own understanding of what AI can and cannot do. Ask the LLM about its own capabilities. Ask it what it struggles with. Ask it how to structure a problem for its own best performance. Ask it to show you how to verify its output. The conversation doubles as the learning.
Why this works better than training programmes
Two mechanisms make the LLM-as-mentor tactic effective in a way that structured training often is not.
First, the learning is contextualised to each user’s actual work. The mentor is always available and can be asked about the specific task at hand, in the specific language of that task. A training programme delivers general capability once; the LLM-as-mentor delivers specific capability on demand.
Second, the learning is self-directed. The user controls the pace, the depth and the sequencing. They can ask the same question three different ways if the first answer did not land. They can pause when busy and resume when there is time. That flexibility is exactly what “AI literacy is not a training problem” argues is missing from traditional approaches.
What the conversation looks like
The productive pattern is specific. Not “explain how AI works” (too abstract) but “I am trying to write this kind of document — show me three different ways you could help me with it, and then show me where each approach’s output would be weak.” Not “what can you do” but “I want to use you to prepare for this meeting; what do you need from me to be useful?”
Those are conversations in which the user is simultaneously doing the work and learning the tool. The learning is attached to the task; the task is attached to the learning. Neither is happening in isolation.
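The prompt pattern above can be sketched in code. This is a minimal illustration, not anything from the source: `build_mentor_messages` is a hypothetical helper, and the returned list simply follows the role/content chat-message shape most LLM chat APIs accept.

```python
def build_mentor_messages(task_description: str) -> list[dict]:
    """Compose a task-anchored 'mentor' conversation opener.

    Instead of the abstract 'explain how AI works', the prompt ties
    the question to the user's actual task and asks the model to
    surface its own weaknesses up front.
    """
    # System message sets the dual role: assistant and mentor.
    system = (
        "You are acting as a mentor as well as an assistant. "
        "Whenever you propose help, also explain where your output "
        "is likely to be weak and how I should verify it."
    )
    # User message anchors the learning to the task at hand.
    user = (
        f"I am working on this task: {task_description}\n"
        "Show me three different ways you could help me with it, "
        "and for each one, where your output would need checking."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]


# The resulting list would be passed to whichever chat API the firm uses.
messages = build_mentor_messages("drafting a board briefing on vendor risk")
```

The point of the sketch is the shape of the request, not the wrapper: the task and the learning travel in the same message.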
How to encourage the tactic
The tactic does not require a formal programme. What it does require is a leadership signal that it is acceptable (and even expected) for staff to spend time in exploratory conversation with the LLM, rather than treating every interaction as billable productive output. Where firms measure AI use strictly by productivity metrics, the exploratory conversations that build durable literacy tend to be squeezed out; firms that invest in that literacy get it back, over time, through better-directed use on the productivity side.
The tactic also connects to “Leadership team AI fluency must be collective, not individual”. Senior people who use the LLM-as-mentor approach themselves are able to model it for others; senior people who do not cannot, and the adoption pattern stalls at whatever level the most AI-fluent leader has reached.