We spent 2025 helping mid-tier organisations adopt AI. Not the big end of town: organisations with limited resources, competing priorities, and staff who are usually just trying to get through the day.
A sizeable chunk of our "pure" AI work was with professional services firms this year, so we'll use that sector for most of our examples. But the principles we're describing applied in similar ways to all our clients. Any organisation facing AI disruption (which is to say, every organisation) will recognise the dynamics we're about to describe.
We expected the work to be about helping clients climb the learning curve. Choose the right tools, train the staff, redesign some workflows, start capturing value. The standard digital transformation playbook, updated for AI.
Some of that happened, but the real learnings lay elsewhere.
What we found in practice was a set of structural barriers that made meaningful adoption uncommon. AI tools remained flaky: impressive in demos, unreliable in practice, prone to confident errors that required expert-level skill to catch. Staff were wary, often engaging in quiet resistance rather than open objection. The hype cycle around agents and automation had raised expectations to levels that current technology couldn't meet. And perhaps most importantly, the payoff from adopting AI just wasn't high enough to clear the hurdle of business-as-usual demands. When you're firefighting every day, a tool that might save you 15% on some tasks isn't worth the disruption.
We began the year believing that AI adoption was difficult but achievable. We ended it with a harder conclusion: AI adoption into existing organisations is far harder than it looks, not because AI isn't capable, but because the conditions for success almost never exist.
This isn't a reason to dismiss AI. The technology is real and its trajectory is clear. Our early stumbles aren't signs that AI doesn't work. But they should temper expectations. AI isn't a plug-and-play upgrade for the modern office. Using it effectively requires a level of structural readiness that most organisations don't have and can't build quickly.
Why This Is Harder Than It Looks
So what's going wrong? Why aren't the standard approaches working?
The core problem is that AI is not like us. It's narrow, brittle, has no common sense, and only a limited kind of memory. It can't read a room. It doesn't know when it's out of its depth. And its most dangerous failure mode is that it sounds authoritative even when it's wrong.
This means AI fails in ways that are hard to anticipate. It doesn't know what it doesn't know. It can't tell when context has shifted, when an assumption no longer holds, or when a question requires judgment rather than pattern matching. And the more fluent it sounds, the harder it is for users to spot the errors.
This is a literacy problem, not a training problem. You can't fix it with a two-hour workshop. The mental-model shift required to use AI well (understanding what it actually does, when to trust it, and how to work with its limitations) takes time and sustained effort.
The missing piece lies in the realm of context management, or in older language, knowledge management. AI's limitations stem largely from context. When the AI has the right context (the right documents, the right framing, the right constraints), its outputs are often impressive. When it doesn't, the outputs range from generic to actively misleading. The difference between useful AI and dangerous AI is almost entirely about context.
What AI Is Actually Doing
While most organisations struggle with these adoption challenges, something more significant is happening in the background.
Specifically: AI is commoditising general expertise at speed.
Consider what it means to be a good accountant, or lawyer, or consultant. Traditionally, the value was in the knowledge itself: knowing the tax code, understanding precedent, being able to diagnose an organisational problem. That knowledge was hard-won and scarce.
AI is making it abundant. Not perfect, not autonomous, but good enough that the gap between an expert and a competent person with AI access is narrowing fast. That gap is where professional fees live.
What retains value is different. It's the privileged knowledge, the things you know because a client trusted you with their specific situation. It's the relationship that gives you access to context no AI has. It's the institutional memory that lets an organisation learn from its own history rather than starting fresh each time.
The implication is uncomfortable for professional services firms: the relationship is the product. The accounting, or legal analysis, or strategic advice is becoming commoditised. What can't be commoditised is the trust, the context, and the continuity.
And there's a further complication. As AI-generated work trends toward free, anything with humans in the loop will look expensive by comparison. The value of human involvement needs to be visibly higher than the AI alternative, and that bar is rising fast.
The Displacement Threat
If relationships and privileged knowledge are what matter, the obvious question is: how much time do we have to build them before AI renders the rest of the business redundant?
One reason for cautious optimism has been friction. Our economy and society are complex, slow-moving systems. Even genuinely transformative technologies take decades to reshape industries. The assumption has been that AI adoption will follow the same pattern, giving organisations time to adapt.
That assumption may be wrong.
Part of the problem is that we're still in the horseless carriage phase of AI adoption. The first motor vehicles were designed to look and work like horse-drawn carriages. It took time for the technology to evolve into something fundamentally different, and for the infrastructure around it to catch up. We're at a similar stage with AI: we're trying to fit AI into existing workflows and business models, which creates friction and limits value.
One recently emerging model hints at what this might look like. Instead of buying software seats or building internal capability, vendors may offer "AI as a labour service". The vendor provides the AI, the prompts, the quality assurance, and the domain expertise. The client provides the data. The vendor delivers finished work product: reports, analyses, regulatory filings, client communications.
Think of it as SaaS companies becoming temp agencies, where the temps happen to be AI. The vendor underwrites the work, absorbing the risk. The client gets a result, not a tool. This model eliminates the adoption friction that has been slowing things down. No training required. No workflow redesign. No change management.
The friction we've been counting on (skill gaps, organisational inertia, the difficulty of internal transformation) only protects against disruption from within. It offers no protection against disruption from without.
If something like this gains traction, the timeline for displacement gets shorter. Perhaps much shorter.
What Survives
Given all this, what's left to defend?
Not knowledge in the abstract. Anything that can be learned from public sources (how to do accounting, how to structure a contract, how to build a strategy) will be automated. Not immediately, not perfectly, but inexorably.
What survives is what AI can't access.
Privileged knowledge: the specific details of your clients' situations, accumulated over years of working together. Their priorities, their politics, their unwritten rules, their actual tolerance for risk (as opposed to what they say in meetings). This knowledge is yours because they trusted you with it. AI doesn't have it and can't get it without your participation.
Trust itself: AI can be capable, but it can't be accountable, not in the way humans are. When something goes wrong, when the stakes are high, when judgment is required, clients need a human they trust. That trust takes time to build and is inherently scarce.
Institutional memory: the continuity that allows an organisation to remember what it learned, maintain relationships through staff turnover, and avoid repeating mistakes. Most organisations are terrible at this; the ones that get good at it will have a genuine competitive advantage.
These three (privileged knowledge, trust, and institutional memory) are the foundations of a defensible position. They're not immune to disruption, but they're the last things to fall.
Know What You Know
If trust and privileged knowledge are what survive, the practical question becomes: how do you maintain and leverage them?
A trust-based relationship depends on genuinely knowing your client, not in the abstract, but in operational detail. Their history, their preferences, the things that have worked and failed in the past. Lose that knowledge (because a partner retires, or a document gets buried in a shared drive, or nobody thought to write it down) and you're starting over. Every time you start over, you're vulnerable to a competitor (or a service) that doesn't need to.
This is why knowledge management sits at the heart of the strategy, even though it sounds unglamorous. It's not a separate IT project; it's the foundation on which everything else rests.
AI's performance is almost entirely dependent on the context it's working with. Ask an AI a question without the right background, and you get a generic, often wrong, answer. Give it the relevant client files, the history, the specific constraints, and the output can be genuinely useful. The difference between those two scenarios is knowledge management.
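The difference can be sketched in a few lines. The function and document strings below are invented for illustration; a real system would retrieve context from the knowledge base rather than pass it in by hand.

```python
def build_prompt(question: str, context_docs: list[str]) -> str:
    """Assemble a prompt that grounds a question in the firm's own knowledge.

    The document strings stand in for the retrieval step a real
    system would perform against a structured knowledge base."""
    context = "\n\n".join(f"[Context document]\n{doc}" for doc in context_docs)
    return f"{context}\n\n[Question]\n{question}"

# Without context, the model sees only the bare question and can offer
# nothing better than a generic answer.
bare = build_prompt("Should we revisit the trust structure?", [])

# With context, it sees the client-specific constraints that make the
# answer non-generic. The details here are invented.
grounded = build_prompt(
    "Should we revisit the trust structure?",
    ["2023 advice: client declined restructure pending family settlement.",
     "Client constraint: no changes before the FY25 audit closes."],
)
```

Same question, same model; the only variable is whether the organisation can supply its own knowledge at the moment of asking.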
This is why knowledge management has become critical in ways it never was before. Historically, poor knowledge management meant wasted time and occasional embarrassment. When a client asked "didn't we discuss this last year?" and no one could find the record, the damage was reputational but manageable.
With AI in the mix, the stakes are higher. If your knowledge isn't organised, structured, and accessible in the right way, your AI tools won't work. It's that simple. And if your AI tools don't work, you're losing ground to competitors whose AI tools do work, because their knowledge foundations are better.
Knowing what you know matters far more than using the latest AI technology. The intelligence layer will keep improving. It'll get cheaper, faster, more capable. But it will always depend on context. And context comes from knowledge management.
The first step toward a defensible position is simple: understand what you actually know.
Most organisations believe they already do. They have SharePoint repositories, document management systems, CRMs. From a distance, it looks like the knowledge is managed.
Look closer and the picture changes. Repositories fill up with outdated documents that no one maintains. Version control is nominal; there are multiple copies of everything, and no one knows which is current. Critical knowledge lives in email threads, in the heads of senior staff, in informal practices that have never been written down. The "knowledge management" system is really a document storage system, and not a very good one at that.
This false confidence is dangerous. It's easier to address a problem you know you have than one you believe you've already solved. The first step is admitting that "we have SharePoint" is not the same as "we know what we know".
Start with an honest audit, genuinely honest, not a box-ticking exercise. Where is your client-specific knowledge? Is it current? Is it accessible? Could you reconstruct what you know about a major client if the partner responsible left tomorrow? Most organisations, if they're honest, will find the answer uncomfortable.
Then consider format. The documents that contain your institutional knowledge (policies, procedures, client files, historical analyses) were written for humans. They use complex formatting, embedded images, headers and footers, proprietary file formats. AI can read these, but inefficiently: the formatting gets in the way. Structured, plain-text formats (particularly Markdown) are far more effective for AI consumption. This matters because AI access to your knowledge base will increasingly determine its utility.
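To make the point concrete, here is a toy comparison (the document fragment is invented): the same sentence as a typical rich-format export and as Markdown. Most of the export's characters, a rough proxy for tokens, are presentation rather than content.

```python
from html.parser import HTMLParser

# A hypothetical fragment of a "rich" document export: one sentence of
# content wrapped in headers, styling, and layout the AI must wade through.
RICH = (
    '<div class="header"><img src="logo.png"/><span style="font:8pt">'
    'Acme Pty Ltd | Commercial in Confidence | Page 1 of 14</span></div>'
    '<p style="margin:0;font-family:Calibri"><b>Engagement summary:</b> '
    'FY24 audit completed; two control gaps flagged for follow-up.</p>'
)

# The same content as Markdown: nothing but the words and minimal structure.
MARKDOWN = ("**Engagement summary:** FY24 audit completed; "
            "two control gaps flagged for follow-up.\n")

class TextExtractor(HTMLParser):
    """Collects only the visible text from an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

extractor = TextExtractor()
extractor.feed(RICH)
visible_text = "".join(extractor.parts)

# The markup-heavy version spends most of its length on presentation.
print(f"rich export: {len(RICH)} chars, visible text: {len(visible_text)} chars")
print(f"markdown:    {len(MARKDOWN)} chars")
```

Multiply that overhead across thousands of documents and a limited context window, and the case for plain, structured formats makes itself.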
Finally, consider capture. What systems exist to ensure that what your organisation learns is usefully retained? When a senior consultant finishes a complex engagement, what happens to the knowledge they gained? In most organisations, it walks out the door with them. Building systems that capture, structure, and preserve organisational learning is foundational, not optional.
"Know what you know" sounds obvious. In practice, almost no one does it at the level AI demands.
Building Defensible Positions
If the diagnosis sounds grim, what can we do about it?
We're not offering a solution to AI disruption. No one has one. The technology is moving too fast and the second-order effects are too unpredictable for anyone to offer guarantees. Anyone who says otherwise is selling something.
What we're offering is a strategy for buying time.
The goal is to build foundations that remain stable and valuable as the intelligence layer keeps improving. Not to bet on specific tools or vendors, but to ensure that whatever AI looks like in two years, your organisation has the assets (knowledge, relationships, structure) to benefit from it rather than be displaced by it.
This is defensive positioning. It won't guarantee survival. But it creates a moat, however modest, that gives you room to adapt.
This means:
Don't blindly chase agentic AI. The current hype suggests that building custom AI agents is where the value lies. As exciting as agents can be, the reality is they're expensive, fragile, and require significant expertise to build and maintain. They also have a habit of failing in unexpected and sometimes spectacular ways. For most organisations, the foundations need to come first.
Leverage what already works. While organisations chase agentic AI, they're overlooking capabilities that are already here and working well. AI-powered search across your own document base can be transformative, if your documents are in the right shape. AI-assisted drafting, summarisation, and analysis are genuinely useful today, when provided with the right context. These aren't headline-grabbing capabilities, but they deliver real, immediate value.
Structure your knowledge for AI access. Convert critical documents from format-heavy files into structured, token-efficient formats. Ensure your document management reflects what you actually know, not just what you've stored. This is unglamorous work, but it's the single highest-leverage investment most organisations can make right now.
Build systems for institutional memory. Don't rely on individuals to remember what matters. Create processes that capture decisions, context, and client-specific knowledge as it's generated, not after the fact. Make it easy to record and hard to forget.
AI itself can help here. Meeting transcription and summarisation are now standard practice; there's no reason any significant conversation should go unrecorded. But capture is only valuable if what's captured is accessible and structured. A pile of transcripts is no more useful than a pile of documents.
One practical starting point was recently suggested by Ethan Mollick: organisations should describe what they do, in writing, for AI. Not marketing copy, not process documentation, but genuine operational descriptions: how decisions are made, what the implicit rules are, what the organisation actually values (as opposed to what it says it values). This kind of institutional self-knowledge is precisely what AI needs to be useful, and precisely what most organisations lack.
Most organisational knowledge is implicit. It lives in the heads of experienced staff, in the assumptions behind processes, in the unwritten rules of how things get done. Making it explicit is difficult, time-consuming work, but it's essential. You can't give AI context you don't have written down.
AI systems can't draw on years of context to interpret ambiguous instructions. They need explicit, written, structured inputs. Markdown (a lightweight, plain-text formatting standard) is emerging as the lingua franca for AI-ready documentation, and for good reason.
Markdown matters because it's plain text with minimal formatting: no proprietary file formats, no complex styling, no wasted tokens on headers, footers, and logos. It's readable by both humans and AI, easy to version-control, and trivially searchable. Converting your key documents into Markdown (or similarly structured formats) is one of the most practical steps you can take to make your knowledge AI-accessible.
This isn't a weekend project. Doing it properly requires genuine effort: interviewing staff, observing workflows, capturing tacit knowledge that people don't even realise they have. But the payoff is significant. An organisation that truly knows what it knows, and has that knowledge in an AI-accessible form, has an asset that's both valuable today and increasingly valuable tomorrow.
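"Trivially searchable" is not a figure of speech. When knowledge lives in plain Markdown files, search is just reading text, with no proprietary index or vendor API in the way. A minimal sketch (the client names and notes are invented):

```python
import tempfile
from pathlib import Path

# Hypothetical client notes, stored as plain Markdown files.
NOTES = {
    "acme.md": ("# Acme Pty Ltd\n\n"
                "- Risk appetite: low, despite what board papers say\n"
                "- Key contact: CFO, prefers email\n"),
    "globex.md": ("# Globex Ltd\n\n"
                  "- FY23 restructure still contentious internally\n"
                  "- Risk appetite: high\n"),
}

def search(base: Path, needle: str) -> list[str]:
    """Return the names of Markdown files whose text mentions `needle`."""
    return sorted(
        p.name for p in base.glob("*.md")
        if needle.lower() in p.read_text().lower()
    )

# Write the notes to a temporary folder and query them.
base = Path(tempfile.mkdtemp())
for name, text in NOTES.items():
    (base / name).write_text(text)

hits = search(base, "risk appetite")
print(hits)
```

The same property that makes this work for a human with a search box is what makes the knowledge legible to an AI tool.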
Partition for security. Not all knowledge should be accessible to all AI systems. Define what's sensitive, segment your knowledge base accordingly, and ensure your AI tools respect those boundaries. This is governance, and it matters.
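What "segment and enforce" might look like in miniature: documents carry a sensitivity label, and an AI tool only ever sees material at or below its clearance. The labels, file names, and ordering rule here are assumptions; real governance policies will be richer.

```python
from dataclasses import dataclass

@dataclass
class Document:
    name: str
    sensitivity: str  # "public", "internal", or "restricted"

# Least to most sensitive; a tool's clearance caps what it may read.
CLEARANCE_ORDER = ["public", "internal", "restricted"]

def visible_to(tool_clearance: str, docs: list[Document]) -> list[Document]:
    """Return only the documents a tool with the given clearance may see."""
    limit = CLEARANCE_ORDER.index(tool_clearance)
    return [d for d in docs if CLEARANCE_ORDER.index(d.sensitivity) <= limit]

knowledge_base = [
    Document("firm-style-guide.md", "public"),
    Document("engagement-playbook.md", "internal"),
    Document("client-litigation-notes.md", "restricted"),
]

# A general-purpose drafting assistant cleared only for internal material
# never receives the restricted file.
allowed = visible_to("internal", knowledge_base)
print([d.name for d in allowed])
```

The point is where the filter sits: in the pipeline that feeds the AI, not in a policy document that relies on people remembering it.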
Stay platform-agnostic. The tools will keep changing. Build your architecture around principles, not products. If your knowledge management depends on a specific vendor's ecosystem, you're creating a dependency that may not age well.
None of this is glamorous. It won't feature in keynote speeches about the AI revolution. But it's the work that matters now.
What We Do
This is where Shepherd Thomas comes in. We help organisations build these foundations.
We're AI realists. We can see that AI is going to change everything, but not all at once, and not in easily predictable ways. We don't sell AI hype. We don't promise transformation by Tuesday.
What we offer is practical, grounded work:
Diagnosis. We help you understand what you truly know: where your knowledge assets are, what state they're in, how accessible they are, and what gaps exist. This isn't a technology audit; it's a knowledge audit.
Strategy. Based on what we find, we help you decide what to prioritise. What knowledge is genuinely valuable and worth investing in? What can be let go? What needs to be captured before it's lost? How should your knowledge architecture be structured to support AI access while maintaining security?
Implementation support. We help you select and deploy appropriate tools, but we're not wedded to any particular vendor or platform. Our focus is on getting the fundamentals right: structured knowledge, good processes, sustainable practices.
We can't promise you'll survive the AI transition. No one can promise that. What we can offer is a clearer view of where you stand, a practical plan for strengthening your position, and the hands-on support to make it happen.