The first reader is an AI
By Barry Thomas • 19 April 2026 • 8 min read
One of us had a conversation recently with a software vendor about the quality of their customer support. Somewhere in that conversation we realised that the support quality was beside the point. What mattered was whether the vendor’s documentation was structured well enough that our AI could answer our questions from it directly. The human support had become a fallback. The primary interface was machine-to-machine.
That kind of interaction still accounts for a minority of the work flowing through most firms and their clients. In our experience, only a small share of people at either end have moved beyond using AI for email drafting and meeting summaries. That makes the shift easy to underestimate. It is also the reason to start adjusting now, while the cost of the adjustment is low.
This piece is about six practical changes we think partners, owners and managers at mid-tier firms should be making over the coming year, and what you can look at today to see why each one matters.
What has actually changed
AI productivity gains show up as content. More reports, more memos, more decks, more options. A minority of people inside firms and inside their clients’ organisations are using AI for much more than routine drafting. A smaller minority again have set up consumption tooling that reads inbound material through an AI filter before a human sees it.
That minority is growing, and the direction is consistent enough to plan for. Firms and clients that are further along already experience the change clearly: the first reader of most of their inbound material is an AI, and the human reader who eventually engages is doing so through that AI’s rendering. The rest of you will be there over the coming year or two.
The changes below all follow from that trajectory. They are about getting ready for a state of affairs that is already true for some and becoming true for more.
Stop trusting the signals that used to tell you about quality
The signals that partners, owners and managers have traditionally used to assess work — volume, polish, comprehensiveness, five recommended options rather than two — correlated with effort because effort was scarce. Effort is no longer scarce.
Check this against last week. The confident twenty-page memo with clean sub-headings and a measured executive summary now tells you only that someone knew how to brief an AI and had enough taste to edit the result. Possibly less than that. The memo looks the same as it did five years ago. It is not the same signal.
The move in how you review work is from “does this look like good work?” to “can I see the work behind the work?”. The questions that still discriminate are about process. What alternatives were considered? Where did the AI help, and where did the author push back on what it suggested? What was checked that might not otherwise have been checked? These slow the review down. They are also the only questions that produce useful information about the underlying quality of thought. The more forward-leaning of your clients will be asking you the same questions about your own work before the year is out.
Produce for the AI that will read your work first
A growing share of your deliverables will be consumed by a client’s AI before the client sees them. It is still only a share, and still only for a subset of clients. Both are growing.
The response is unglamorous. Cleaner document structure. Explicit headings. Content that does not rely on visual design to carry meaning. A machine-readable version alongside the formatted one where the work justifies the effort. None of this requires new tools or new skills. It requires treating the document as something an AI will have to parse, not only something a human will read.
There is a larger shift underneath. For a growing subset of clients, your firm is becoming source material for their AI rather than a human-to-human advisor. Your work has to hold up under that consumption pattern, which is a different test from holding up under a senior human reader. Look at the last significant deliverable you sent. If the key caveats are buried in footnotes, or the reasoning is spread across paragraphs an AI will abstract away from, the AI will render it badly and you will not be in the room to correct the rendering.
Sort your clients by AI posture and serve both groups deliberately
Your client base is splitting and the split is widening. A minority of your clients are moving hard on AI: thinner deliverables, AI-ready formats, faster cycles, fewer options papers. Most are moving slowly or not at all. A meaningful subset — regulated entities, government bodies, confidentiality-sensitive clients — are actively asking that AI be kept out of their engagements entirely, with attestation.
You can do this sort from memory this week. Go through the top twenty or thirty accounts and classify each one as AI-forward, moving slowly, or AI-averse. The output tells you two things. It tells you which clients need a different form of deliverable from you than they used to get. And it tells you whether your firm currently has one operating mode for everyone, or whether it can deliberately run two. Firms that drift — assuming a single position and applying it everywhere — will be producing the wrong shape of work for a growing share of their clients.
Audit your client-facing agreements for AI silence
Most firms’ client agreements were drafted before AI became a live question, and they address neither (a) whether and how the firm uses AI in delivering the work, nor (b) whether and how the client is permitted to use AI on the firm’s output.
Test this in your own firm in two questions. Ask your senior operations lead whether any client has raised AI use in the last six months. Ask whether any of your own people have been found feeding confidential client material into a consumer-grade AI tool. In our experience, most firms that look find at least one of those on their recent record.
None of this is addressed by the current agreement template. The Australian Privacy Principles apply in ways the templates were not drafted to handle — APP 6 on use and disclosure, APP 8 on cross-border disclosure (most frontier AI services are US-hosted), APP 11 on the security of personal information. For firms operating under professional regulation (legal, accounting, financial advice), additional guidance from industry bodies is already starting to land. Revise the templates. Audit where your own people are using AI on client work. Publish a position your clients can rely on. This is the most immediately actionable item on the list, and it is the one most firms have been quietly putting off.
Look hard at the pricing on work where AI is already saving you hours
If AI compresses the hours required to complete a piece of work, any pricing model tied to hours compresses the firm’s revenue on that work. The billable-hour model is the most obvious case, and the pressure on it is not subtle. You can see it in your own data.
Pick the five engagements you closed in the last quarter that came in furthest under their budgeted hours. Ask what drove the underrun. Some part of the answer, in most cases, is that AI assistance — tools the firm has formally adopted or tools your people are using unofficially — collapsed a task that used to take a full morning into twenty minutes. If the same work in 2026 takes half the hours it did in 2024, the firm has three options: reduce the bill, hold the bill and absorb the value conversation when the client notices, or restructure the engagement so price is no longer tied to hours.
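The arithmetic behind that pressure is simple enough to sketch. The figures below are hypothetical — an assumed rate, an assumed fixed fee, an assumed halving of hours — chosen only to make the compression concrete:

```python
# Hypothetical engagement: the same piece of work delivered in 2024 and 2026,
# priced under hourly billing versus a fixed fee. All numbers are illustrative.
RATE = 400            # assumed hourly rate, AUD
FIXED_FEE = 16_000    # assumed fixed price for the engagement, AUD

hours_2024 = 40       # effort before AI assistance
hours_2026 = 20       # same work with AI assistance: half the hours

# Under hourly billing, revenue tracks hours, so it compresses with them.
hourly_revenue_2024 = RATE * hours_2024   # 400 * 40 = 16,000
hourly_revenue_2026 = RATE * hours_2026   # 400 * 20 = 8,000

# Under a fixed fee, the efficiency gain stays with the firm.
fixed_revenue_2026 = FIXED_FEE            # 16,000, regardless of hours

print(hourly_revenue_2026, fixed_revenue_2026)
```

On these assumed numbers, hourly billing hands the entire efficiency gain to the client; a fixed fee keeps it with the firm, which is why the third option below is the only one that survives past the current year.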
The third option is the only one that extends past the current year, and it has to be renegotiated alongside how people in the firm are compensated for their time. This is a governance project, not a pricing adjustment. The firms that work through it deliberately this year will be in better shape than the firms that drift into it as margins compress.
Be careful which AI skills you hire and train for
Some AI skills are durable and some are transient. The durable ones are skills of judgment: when to use AI, how to structure a problem so AI can help, how to verify the output, which parts of a piece of work should not be AI-touched at all. Those are worth hiring for and worth training into your existing people.
The transient ones are skills of mechanics. Specialist prompt engineering. Hand-rolled agent frameworks. Bespoke pipelines that duplicate capabilities the platform vendors are absorbing every quarter. These are worth approaching with scepticism. The test we apply when such a role or training programme is proposed is a simple one: would the skill still matter if the platform vendor shipped the capability natively in twelve months? For the durable judgment skills the answer is yes. For most of the mechanical skills it is no, which is worth knowing before the role is approved.
What this amounts to
None of the changes above are large on their own. Several of them are things a leader can do in an afternoon. All of them are responses to a shift you can already see at the edges of your own firm’s work this month, even if the centre of it has not moved yet.
The reason we are writing about this is that most of the partners, owners and managers we work with are aware of the shift and under-acting on it. The temptation is to wait for things to stabilise before adjusting. They will not stabilise in time. The firms that are in good shape a year from now will be the ones that made these adjustments while the evidence was still merely uncomfortable, rather than when it became costly.
If this reads as less dramatic than the prevailing AI commentary, that is the intention. The practical content of the AI transition, for mid-tier Australian firms in 2026, is mostly made of small adjustments that compound. The dramatic content is for people writing about AI.