
Welcome to the Shepherd Thomas wiki

By Barry Thomas • 26 April 2026 • 6 min read

The wiki on this site is a working notebook — around sixty entries and growing — where we record the patterns, heuristics and case studies we keep encountering across our AI-adoption advisory work. It is not a book and it is not a methodology. It is a deliberately incomplete set of working observations, each updatable, each replaceable, and several of them likely to be superseded or quietly deprecated within the next year.

It is also a kind of demonstration. The wiki itself is built using the same AI-assisted approach we recommend to clients, and it follows the same disposability principle we apply to every artefact in this category. Two threads run through this introduction: why we built it, and why we built it the way we did. Both reflect the same broader shift in knowledge work that the wiki is itself trying to capture.

A note on length before going further. This wiki contains rather more text than current fashion allows for. That is deliberate. We do not expect many human readers to work through the wiki end-to-end, and we are not optimising for that experience. Our default expectation is that readers will hand the material to their own AI and have it summarised, queried, or recontextualised against whatever they actually want to know. If your reaction is that there are too many words here, you are probably not the wrong reader; you may simply be the right reader using the wrong tool. And if the suggestion to get your own AI to digest all this sounds like too much of a technical hurdle in itself, you definitely need to give us a call. We can help!

Why a public wiki

The reason starts with a thesis we hold quite firmly. AI is collapsing the value of general expertise faster than most professional services firms are adjusting to. The body of formerly-scarce knowledge that experienced practitioners used to charge for — codified standards, regulatory detail, methodology frameworks, cross-industry pattern libraries — is now available for the marginal cost of a Claude or ChatGPT query, in plain language, with the answer adapted to the asker’s specific situation. The economics of that change are working through the sector now, and they are not subtle. We’ve written about the structural threat to compliance revenue and the commoditisation of general expertise elsewhere in the wiki itself; the short version is that the floor under generic expertise is moving up, and any consultancy whose offer sits at or below that floor is in trouble.

What survives the collapse, in our view, is what AI structurally cannot access: the privileged context of specific client relationships, the durable judgement that comes from many engagements rather than from any one of them, and the institutional memory of decisions made and not made over time. None of those things are propositional knowledge that can be packaged and sold. They are tacit, accumulated, and inseparable from the people doing the work.

The wiki is our attempt to make our own accumulated judgement as visible as possible — to clients who are evaluating us, to ourselves as we work across engagements, and to the AI tools we increasingly use to do that work. It is both a public artefact (the case for our positioning) and an internal asset (the context our own AI tools draw on when we use them). One artefact, two functions. We don’t think this dual purpose is incidental; we think it is the natural shape of public-facing knowledge artefacts in an era when AI usefulness is fundamentally a context problem.

There is a second motivation worth naming. The wiki is also an experiment. We are interested, professionally, in how far AI can now be pushed for practical knowledge work — not as a chatbot, not as a productivity tool, but as a structured collaborator in a sustained intellectual project. The wiki is one place we are testing that, and some of what we have learned in the building is captured in this post.

How it was built

The technology stack is unremarkable: Astro for the static site, Netlify for hosting, GitHub for version control, Obsidian as the editing surface, a small custom plugin to resolve Obsidian-style wikilinks at build time, and a maintenance script that catches broken links, schema issues and stale notes. None of those choices is novel, and the custom code is small; a sketch of the wikilink step follows for the curious.
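
The sketch is an illustration of the idea rather than the plugin itself: the /wiki/ route and the slug convention are assumptions made for the example, not a description of how our plugin actually behaves.

```ts
// Illustrative sketch of build-time wikilink resolution. Converts
// Obsidian-style [[Note Title]] and [[Note Title|display text]] into
// ordinary markdown links. The /wiki/ route and slug rules are assumptions.

// Turn a note title into a URL slug (assumed convention, not a documented one).
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Replace every wikilink in a markdown string with a [text](url) link.
export function resolveWikilinks(markdown: string): string {
  return markdown.replace(
    /\[\[([^\]|]+)(?:\|([^\]]+))?\]\]/g,
    (_match, target: string, display?: string) =>
      `[${display ?? target}](/wiki/${slugify(target)})`
  );
}

// Example:
// resolveWikilinks("See [[Context Problem|why context matters]].")
// => "See [why context matters](/wiki/context-problem)."
```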

What is genuinely new is the workflow shape. Each new wiki entry begins as an extraction pass over an engagement folder: meeting summaries, strategy documents, deliverables, working drafts. AI drafts a structured proposal listing candidate new notes, candidate refinements to existing notes, and evidence updates for notes that the engagement strengthens. We — the human authors — review the proposal, accept, modify or reject each item, and the AI then renders the accepted set into the wiki. The maintenance script runs against the result to catch broken links and schema problems; everything ships as draft until a human curator promotes it. Client material is anonymised by contract; identifying details live only in internal frontmatter that the site templates never render.
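
To make the proposal stage concrete, here is a rough sketch of its shape in code. The field names and decision statuses are illustrative assumptions rather than the schema we actually run. The point is the structure: typed candidate items, an explicit human decision on each, and only the accepted set flowing onward.

```ts
// Illustrative only: a plausible shape for an extraction proposal,
// not the schema we actually run.

type ProposalItem =
  | { kind: "new-note"; title: string; summary: string; body: string }
  | { kind: "refinement"; note: string; change: string }
  | { kind: "evidence-update"; note: string; evidence: string };

interface ReviewedItem {
  item: ProposalItem;
  decision: "accepted" | "modified" | "rejected" | "pending";
  reviewer?: string; // the human who made the call
}

interface ExtractionProposal {
  engagement: string; // anonymised engagement identifier
  items: ReviewedItem[];
}

// Only accepted or modified items are rendered into the wiki, and even those
// land as drafts until a curator promotes them.
export function itemsToRender(proposal: ExtractionProposal): ProposalItem[] {
  return proposal.items
    .filter((r) => r.decision === "accepted" || r.decision === "modified")
    .map((r) => r.item);
}
```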

It is worth being explicit about how recent the underlying capability is. The extraction passes lean heavily on Claude Opus’s ability to read across tens of thousands of words of meeting notes, working drafts and prior writings simultaneously, reason coherently across them, and surface insights worth promoting into the wiki. That kind of sustained reasoning across a large, structured context within a single long-running project thread was not reliably available even a few months ago. We have repeatedly attempted things during the building of this wiki that we were ninety-five per cent confident would fail, and have been surprised again and again. The workflow described here would not have been viable on the models of late 2025; it is barely viable now; we expect it to feel ordinary within a year. Some of the design decisions in the wiki — note granularity, link density, the structure of extraction proposals — are calibrated to the current state of that capability and will need re-calibrating as it advances.

The interesting thing is what happens when the capabilities compose. AI by itself produces drafts that read plausibly but reflect no specific judgement; humans by themselves produce judgement at a pace that does not keep up with the rate at which evidence accumulates. The composition produces both, at a pace neither could individually sustain. We are not claiming this workflow is unique to us — only that the practical accessibility of it is recent enough that most knowledge-work organisations have not yet built around it, and that the productivity differential is substantial.

Two safeguards are worth naming explicitly because they make the rest workable. Every client-derived observation defaults to draft until a human review confirms anonymisation is adequate; nothing reaches the public site automatically. Every note is written with an explicit awareness that AI will read it — short, structured, with summary metadata, formatted for retrieval rather than display. The wiki is, in a literal sense, written in the form it argues for.
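
As a concrete illustration of that form, a note's metadata might look something like the sketch below. The exact field names are assumptions made for the example rather than our production schema; what matters is the shape, a summary written for retrieval and a draft flag that starts as true.

```ts
// Illustrative sketch of a note's frontmatter, written as a TypeScript shape.
// Field names are assumptions for the example, not our production schema.

interface WikiNoteFrontmatter {
  title: string;
  summary: string;  // a sentence or two written for retrieval, not display
  tags: string[];
  draft: boolean;   // true by default; a human curator flips it to publish
  updated: string;  // ISO date of the last substantive revision
}

// A freshly extracted note arrives looking roughly like this.
const example: WikiNoteFrontmatter = {
  title: "General expertise is commoditising",
  summary:
    "Codified knowledge that used to be billable is now a model query away; durable value sits in client-specific context.",
  tags: ["economics", "positioning"],
  draft: true,
  updated: "2026-04-20",
};
```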

Why we are honest about the experiment

The wiki is unfinished, and it will remain unfinished as long as the practice it describes is alive. We will be wrong about parts of it. Some notes will age out within months as the underlying technology shifts. Some will be superseded by their own counter-evidence as we encounter engagements that test rather than confirm the pattern. We have built the structure deliberately to make that supersession cheap — every note carries a deprecated flag and a supersededBy link, and the build pipeline knows what to do with both. Public deprecation is a feature.
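
Mechanically, honouring those two fields at build time is a small job. The sketch below shows the principle, with deprecated notes dropped from listings but kept as pages that point at their successors; it is an illustration of the idea, not a description of our actual pipeline.

```ts
// Sketch of how a build step might honour the deprecated / supersededBy
// fields. An illustration of the principle, not our actual pipeline.

interface NoteMeta {
  slug: string;
  deprecated?: boolean;
  supersededBy?: string; // slug of the note that replaces this one
}

// Indexes and listings only surface live notes.
export function listableNotes(notes: NoteMeta[]): NoteMeta[] {
  return notes.filter((note) => !note.deprecated);
}

// A deprecated note's own page can still render, with a pointer at its
// successor, so old links degrade gracefully rather than breaking.
export function successorSlug(note: NoteMeta): string | undefined {
  return note.deprecated ? note.supersededBy : undefined;
}
```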

We mention this because the alternative — shipping a wiki that implies more durability than it actually has — would be worse than honest experimentation. AI treats documentation as authoritative in a way that human readers historically have not, and a wiki that overstates its own confidence is now actively misleading in a way it would not have been ten years ago. The honest framing is the only defensible one.

Speed is now a discipline, not a flex

The hardest thing to convey about working with AI on substantive intellectual work is what it does to planning horizons. The point is worth arguing carefully because it gets dismissed as either obvious (it isn’t) or naïve (it isn’t that either).

The standard professional posture toward intellectual artefacts — methodologies, frameworks, position papers, internal references — is to invest the time to build something durable, on the reasoning that the longer something lasts, the more value it delivers per hour invested. That reasoning held for as long as the underlying knowledge environment was stable enough to sustain a five-year horizon. That stability is gone. The right time horizon for an AI-adjacent artefact in 2026 is closer to twelve to eighteen months than to five years, and the next twelve months will probably tighten that further.

Two consequences follow. The first is that building for permanence is now actively harmful. An artefact that took six months to build and is overtaken within twelve has consumed six months of time and produced six months of value. An artefact that took two weeks to build and lasts twelve has consumed two weeks and produced twelve months of value. Even if the two-week version is rougher than the six-month one — and it will be — the differential is large enough (roughly a month of value per month of effort on one side, more than twenty on the other) that any aesthetic argument about polish is being made at the wrong layer. Speed is not a compromise on quality. It is the new floor of quality, because the alternative is artefacts that are obsolete on arrival.

The second consequence is that disposability needs to be designed in. Building fast and discarding fast is efficient; building fast and discarding slowly because retirement is painful gives the speed advantage back at the end of the cycle. The wiki is built to be deprecated cleanly precisely so the speed economics actually work.

The natural objection is that disposable artefacts cannot accumulate the trust and authority that knowledge work depends on. We disagree, and this is the part most worth being clear about. Trust in knowledge work has never come from artefact persistence; it has always come from track record. A consultancy whose 2024 advice has been demonstrably superseded by 2026 evidence and acknowledges the supersession openly is more trustworthy than one defending its 2024 positions. Authority accumulates in the practice, not in the page.

We would rather you find a note here that you think is wrong, and tell us so we can revise or retire it, than read one that pretends to be more durable than it is. The wiki will keep growing, contracting and being refactored. So will our practice. We think both are signs of health, and the inverse — a wiki that doesn’t change, by a consultancy whose positions don’t move — would be the warning sign.

Welcome.