<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel><title>Shepherd Thomas Wiki</title><description>Patterns, heuristics and abstracted case studies from our AI consulting work with Australian mid-tier organisations.</description><link>https://www.shepherdthomas.com/</link><language>en-au</language><item><title>The mid-tier AI adoption threshold</title><link>https://www.shepherdthomas.com/wiki/adoption-threshold-mid-tier/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/adoption-threshold-mid-tier/</guid><description>In mid-tier organisations, the daily pressure of business-as-usual sets a payoff threshold that typical AI gains do not clear, so adoption stalls even when tools and training are in place.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>AI as a labour service bypasses the adoption problem</title><link>https://www.shepherdthomas.com/wiki/ai-as-labour-service/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/ai-as-labour-service/</guid><description>A delivery model in which vendors sell finished work product, not AI tools, removes internal-adoption friction from the buyer side and accelerates displacement timelines.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>AI commoditises general expertise</title><link>https://www.shepherdthomas.com/wiki/ai-commoditises-general-expertise/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/ai-commoditises-general-expertise/</guid><description>AI is making publicly codified expertise abundant; the gap between an expert and a competent AI-equipped generalist is narrowing, and that gap is where professional fees live.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>AI literacy is not a training problem</title><link>https://www.shepherdthomas.com/wiki/ai-literacy-not-training/</link><guid 
isPermaLink="true">https://www.shepherdthomas.com/wiki/ai-literacy-not-training/</guid><description>Treat AI literacy as a durable mental-model shift, not an event — the judgement required to use AI well cannot be installed through a workshop.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>AI as an operational interpreter of purpose, vision and values</title><link>https://www.shepherdthomas.com/wiki/ai-operationalises-values/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/ai-operationalises-values/</guid><description>AI may offer a different mechanism for translating stated purpose, vision and values into daily operational decisions — continuous rather than episodic, contextual rather than general, and individually available rather than programme-delivered. Whether the mechanism proves durable in practice is an open question.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>AI removes the practical ceiling on workplace surveillance</title><link>https://www.shepherdthomas.com/wiki/ai-removes-surveillance-ceiling/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/ai-removes-surveillance-ceiling/</guid><description>Comprehensive workplace monitoring was always theoretically possible but practically capped by human review capacity; AI removes that cap, and the capability itself reshapes behaviour whether or not it is used.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Expect AI to surface authenticity gaps between stated and actual values</title><link>https://www.shepherdthomas.com/wiki/ai-surfaces-authenticity-gaps/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/ai-surfaces-authenticity-gaps/</guid><description>An AI system that takes an organisation&apos;s stated values seriously will quickly surface where stated and actual behaviour diverge; leadership should expect and plan for these findings before 
commissioning the work, because surfacing them without being prepared to respond is worse than not surfacing them at all.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Architect AI around principles, not vendors</title><link>https://www.shepherdthomas.com/wiki/architect-around-principles-not-vendors/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/architect-around-principles-not-vendors/</guid><description>Tools will keep changing; architectures tied to a specific vendor ecosystem age poorly and limit the organisation&apos;s ability to adopt what comes next.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Audit client agreements for AI silence</title><link>https://www.shepherdthomas.com/wiki/audit-agreements-for-ai-silence/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/audit-agreements-for-ai-silence/</guid><description>Most firms&apos; client agreements were drafted before AI became a live question and are silent on both the firm&apos;s AI use in delivering work and the client&apos;s permitted AI use on the firm&apos;s output; that silence means the firm inherits defaults by omission and is left exposed under privacy regulation and professional guidance.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Channel shadow AI use as signal, not risk to suppress</title><link>https://www.shepherdthomas.com/wiki/channel-shadow-ai-use/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/channel-shadow-ai-use/</guid><description>In most organisations, staff are already using AI in ways leadership has not sanctioned; treating that shadow use as evidence of real work-in-context rather than as compliance risk reveals use cases, knowledge gaps and adoption blockers that top-down planning will not find.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Compliance revenue is structurally
threatened</title><link>https://www.shepherdthomas.com/wiki/compliance-revenue-threatened/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/compliance-revenue-threatened/</guid><description>Professional services firms that depend on recurring compliance revenue face structural margin compression as AI commoditises the underlying work.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>AI&apos;s most dangerous failure mode is confident wrongness</title><link>https://www.shepherdthomas.com/wiki/confident-wrongness/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/confident-wrongness/</guid><description>AI&apos;s most dangerous failure is not silence but fluent, authoritative output that is wrong — making error detection a skilled, human task that cannot be deferred to the tool.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Context rot</title><link>https://www.shepherdthomas.com/wiki/context-rot/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/context-rot/</guid><description>As AI-generated content feeds back into the organisation&apos;s context — documents, transcripts, summaries — today&apos;s hallucinations become tomorrow&apos;s training data, and the quality of the context degrades over time unless the cycle is actively broken.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Declining AI engineering commits you to content discipline</title><link>https://www.shepherdthomas.com/wiki/declining-engineering-commits-to-content/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/declining-engineering-commits-to-content/</guid><description>The argument for deferring a custom AI build — pipeline, integration, evaluation harness — because content quality is the real leverage point only holds while someone is actively doing the content work; declining the engineering is a commitment to the discipline, not a 
free deferral.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Defensibility lives in what AI can&apos;t access</title><link>https://www.shepherdthomas.com/wiki/defensibility-what-ai-cant-access/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/defensibility-what-ai-cant-access/</guid><description>What survives AI disruption sits in three categories AI cannot access without human participation — privileged client knowledge, trust, and institutional memory.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>A document store is not a knowledge management system</title><link>https://www.shepherdthomas.com/wiki/document-store-is-not-kms/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/document-store-is-not-kms/</guid><description>Shelving documents in a repository is storage, not knowledge management; the presence of the repository often produces false confidence that the problem is solved.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>The first reader is an AI</title><link>https://www.shepherdthomas.com/wiki/first-reader-is-an-ai/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/first-reader-is-an-ai/</guid><description>A growing share of inbound material at mid-tier firms is first read by an AI before a human sees it; the human who engages does so through the AI&apos;s rendering, changing what the deliverable has to carry and how the sending firm should produce it.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Users assume AI has access to information it does not have</title><link>https://www.shepherdthomas.com/wiki/hidden-knowledge-gaps/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/hidden-knowledge-gaps/</guid><description>Users routinely overestimate the information AI has access to, treating it as if it were working from a complete picture; this overestimate 
compounds with AI fluency to produce misplaced trust.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Hire for durable AI judgement, not transient AI mechanics</title><link>https://www.shepherdthomas.com/wiki/hire-for-durable-ai-judgement/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/hire-for-durable-ai-judgement/</guid><description>AI skills split into durable judgement — when to use AI, how to structure problems for it, how to verify output, where not to use it — and transient mechanics — specialist prompt engineering, bespoke pipelines that platforms will absorb. Hire and train for the first; be sceptical of the second.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Expect current AI deployments to look primitive in retrospect</title><link>https://www.shepherdthomas.com/wiki/horseless-carriage-ai/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/horseless-carriage-ai/</guid><description>Current AI deployments mostly fit the technology into existing workflows; treat today&apos;s designs as transitional and expect later shapes to differ fundamentally.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Human work becomes relatively expensive as AI trends to free</title><link>https://www.shepherdthomas.com/wiki/human-work-becomes-expensive/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/human-work-becomes-expensive/</guid><description>As AI-generated work trends toward zero marginal cost, the relative price of human involvement rises; the value delivered by humans must visibly exceed the AI alternative for the premium to hold.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>AI interfaces are generated on demand rather than fixed by design</title><link>https://www.shepherdthomas.com/wiki/interfaces-generated-on-demand/</link><guid
isPermaLink="true">https://www.shepherdthomas.com/wiki/interfaces-generated-on-demand/</guid><description>The user interface layer, built historically as fixed buttons and menus that bridge human intent and machine execution, is being replaced piecemeal by AI-generated surfaces built at runtime in response to specific requests; wrappers that sit between user and base model are increasingly a liability rather than an aid.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Internal-adoption friction is no protection against external disruption</title><link>https://www.shepherdthomas.com/wiki/internal-friction-no-external-shield/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/internal-friction-no-external-shield/</guid><description>The organisational inertia that slows internal AI adoption offers no defence against vendors who have already absorbed the technology and deliver finished outcomes.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Involve sceptics early in AI initiatives</title><link>https://www.shepherdthomas.com/wiki/involve-sceptics-early/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/involve-sceptics-early/</guid><description>Sceptics are more valuable than advocates during the design of an AI initiative — they see the failures most clearly; involve them early in roles that protect against the failures they fear, rather than sidelining them as resistant to change.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Knowledge management becomes an M&amp;A and partnership signal</title><link>https://www.shepherdthomas.com/wiki/km-as-partnership-signal/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/km-as-partnership-signal/</guid><description>As AI pervades professional services, acquirers and partners are likely to treat the target&apos;s knowledge management as a due-diligence signal because poor KM 
implies unreliable AI-assisted work product downstream.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Start with knowledge management, not tools</title><link>https://www.shepherdthomas.com/wiki/knowledge-management-before-tools/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/knowledge-management-before-tools/</guid><description>Audit and structure what the organisation knows before selecting AI tools; the limits of AI output are set by the limits of its input context.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Leadership team AI fluency must be collective, not individual</title><link>https://www.shepherdthomas.com/wiki/leadership-fluency-must-be-collective/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/leadership-fluency-must-be-collective/</guid><description>A single AI-fluent leader in an otherwise non-fluent team creates strategic blind spots rather than an advantage; fluency has to be built across the leadership team together, because uneven adoption at the top propagates as inconsistent AI strategy below.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Use a frontier LLM as a personal AI mentor</title><link>https://www.shepherdthomas.com/wiki/llm-as-personal-ai-mentor/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/llm-as-personal-ai-mentor/</guid><description>Use a frontier LLM as a conversational partner for learning about AI itself — ask it about its capabilities, limitations and appropriate use cases while doing real work with it.
The self-directed, contextualised learning this produces outperforms the structured training programmes it replaces.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Make tacit knowledge explicit, or AI cannot use it</title><link>https://www.shepherdthomas.com/wiki/make-tacit-knowledge-explicit/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/make-tacit-knowledge-explicit/</guid><description>AI cannot interpret the unwritten assumptions that shape how an organisation actually works; operational self-description is a precondition, not polish.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Measure adoption, not just implementation</title><link>https://www.shepherdthomas.com/wiki/measure-adoption-not-implementation/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/measure-adoption-not-implementation/</guid><description>Deploying an AI tool and reporting success are not the same thing; track active use rather than availability, because the gap between the two is where unvoiced resistance hides and where the investment fails to earn its return.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Retrieval middleware is being absorbed into platforms at mid-tier scale</title><link>https://www.shepherdthomas.com/wiki/middleware-absorbed-into-platforms/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/middleware-absorbed-into-platforms/</guid><description>The middleware layer that vendors and consultants propose to build around frontier models — retrieval pipelines, evaluation harnesses, observability — is being absorbed into the platforms themselves at mid-tier scale, on timelines short enough to wait out; work commissioned to build it now is liable to be stranded by the vendor&apos;s own roadmap.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>An ongoing AI advisory engagement with a growing
firm</title><link>https://www.shepherdthomas.com/wiki/ongoing-ai-advisory-engagement/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/ongoing-ai-advisory-engagement/</guid><description>An abstracted single-engagement case study showing how a growing firm used an ongoing AI-strategy advisory relationship — covering market scanning, implementation oversight and staff coaching — to navigate AI adoption without diverting internal attention from operational delivery.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Passive AI adoption is an implicit policy choice</title><link>https://www.shepherdthomas.com/wiki/passive-adoption-is-a-choice/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/passive-adoption-is-a-choice/</guid><description>Where an organisation has not made explicit decisions about how AI will be used, the defaults of the tools and vendors become policy by inheritance; &quot;we haven&apos;t decided yet&quot; functions as &quot;we have accepted whatever happens&quot;.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Polish and volume no longer signal effort</title><link>https://www.shepherdthomas.com/wiki/polish-no-longer-signals-effort/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/polish-no-longer-signals-effort/</guid><description>The signals that used to tell reviewers about work quality — volume, polish, comprehensiveness — correlated with effort because effort was scarce; with AI the correlation breaks, and the questions that still discriminate are about process.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>The relationship is the product</title><link>https://www.shepherdthomas.com/wiki/relationship-is-the-product/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/relationship-is-the-product/</guid><description>When the codifiable layer of professional work commoditises, the 
enduring product of a services firm is the relationship itself — the privileged context and the trust attached to it.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Restructure pricing for work where AI compresses hours</title><link>https://www.shepherdthomas.com/wiki/restructure-pricing-where-ai-compresses-hours/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/restructure-pricing-where-ai-compresses-hours/</guid><description>Where AI compresses delivery hours, hour-based pricing compresses firm revenue proportionally; the only response that extends past the current year is to restructure engagements so price is no longer tied to hours, which is a governance project entwined with how people are compensated for their time.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>A regional bank&apos;s core banking selection delivered by an AI-amplified solo engagement</title><link>https://www.shepherdthomas.com/wiki/solo-ai-consultant-engagement/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/solo-ai-consultant-engagement/</guid><description>An abstracted single-engagement case study showing how a solo Shepherd Thomas consultant, AI-amplified, delivered a regional bank&apos;s core banking system selection on a compressed timeline, at lower cost than and with quality comparable to a major consulting team.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Sort clients by AI posture and serve each segment deliberately</title><link>https://www.shepherdthomas.com/wiki/sort-clients-by-ai-posture/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/sort-clients-by-ai-posture/</guid><description>Client bases are splitting along AI-forward, slow-moving and AI-averse lines; firms that run a single operating mode for everyone will produce the wrong shape of work for a growing share of their book, and need to classify and serve the segments
deliberately.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Start AI governance imperfect; iterate rather than wait</title><link>https://www.shepherdthomas.com/wiki/start-governance-imperfect/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/start-governance-imperfect/</guid><description>AI governance should follow the same experimental posture as AI adoption — start imperfect, gather evidence, iterate — because waiting for clarity guarantees the technology gets ahead of the policy.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Structure documents for AI consumption, not just human reading</title><link>https://www.shepherdthomas.com/wiki/structure-documents-for-ai/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/structure-documents-for-ai/</guid><description>Human-formatted documents obstruct AI consumption; plain-text formats such as Markdown let AI work with the underlying knowledge efficiently.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Surveillance-chilled collaboration degrades knowledge work</title><link>https://www.shepherdthomas.com/wiki/surveillance-chills-collaboration/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/surveillance-chills-collaboration/</guid><description>The collaborative behaviours that produce good knowledge work — thinking aloud, proposing imperfect ideas, showing uncertainty, offering dissent — depend on low-observation conditions that AI-enabled monitoring degrades.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>A tools-first AI rollout that plateaued</title><link>https://www.shepherdthomas.com/wiki/tools-first-rollout-plateau/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/tools-first-rollout-plateau/</guid><description>An abstracted composite showing what happens when a mid-tier firm buys AI tools without putting its 
information in order first.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Unvoiced staff resistance is the primary failure mode of AI initiatives</title><link>https://www.shepherdthomas.com/wiki/unvoiced-staff-resistance/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/unvoiced-staff-resistance/</guid><description>The most insidious threat to AI adoption is not technical or budgetary but behavioural — staff publicly support the initiative while privately declining to adopt it, expressing resistance through plausible non-compliance rather than open challenge.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item><item><title>Useful AI is a context problem</title><link>https://www.shepherdthomas.com/wiki/useful-ai-is-a-context-problem/</link><guid isPermaLink="true">https://www.shepherdthomas.com/wiki/useful-ai-is-a-context-problem/</guid><description>The difference between useful AI and dangerous AI is almost entirely about the context it has; output quality is bounded above by input quality.</description><pubDate>Fri, 24 Apr 2026 00:00:00 GMT</pubDate></item></channel></rss>