AI interfaces are generated on demand rather than fixed by design
The user interface layer, historically a fixed arrangement of buttons and menus bridging human intent and machine execution, is being replaced piecemeal by AI-generated surfaces composed at runtime in response to specific requests; wrappers that sit between user and base model are increasingly a liability rather than an aid.
For decades our relationship with software has been defined by the user interface: buttons, menus, dashboards, forms. The interface was the way we told the machine what to do. Its fixed shape was a consequence of the underlying software being unable to interpret intent directly. Designers built the signposts; users learned to follow them.
The assumption behind that shape is starting to fail. When the model behind the interface can understand intent from natural language and can render the specific thing the user is asking for at the moment of asking, the fixed interface is doing less of the bridging work than it used to, and a wrapper between the two can become more hindrance than help. Visible examples are accumulating: staff bypass a purpose-built AI document-generation tool in favour of direct conversation with a frontier model; legal firms that paid for bespoke workflow wrappers find their users preferring the unwrapped tool.
What the shift replaces
The traditional interface had two jobs. It translated human intent into machine-parseable commands, and it structured the surface so users knew what was possible. Both jobs assumed the machine could not do them itself.
The emerging shape is different. The model does the intent translation directly from natural language. The surface is composed at runtime — a table rendered when the conversation needs a table, a chart when the conversation needs a chart, a form when the conversation needs a form. The user does not navigate a pre-designed structure; they ask for what they want, and the rendering follows. That does not mean visual interaction disappears. It means the fixed menu of visual options is replaced by a much larger set of compositions that are created as needed.
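A minimal sketch of the pattern, assuming a model that returns structured output rather than prose: the type names, the stubbed model call, and the text renderer below are illustrative assumptions, not any vendor's actual API.

```typescript
// A minimal sketch, assuming the model emits a declarative UI spec and a thin
// generic renderer composes it. All names here are illustrative assumptions.

type UISpec =
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "chart"; title: string; series: { label: string; value: number }[] }
  | { kind: "form"; fields: { name: string; label: string; required: boolean }[] };

// Stand-in for a model call; a real system would validate the model's JSON
// against this schema before rendering anything.
async function composeSurface(intent: string): Promise<UISpec> {
  if (intent.includes("compare")) {
    return {
      kind: "table",
      columns: ["Option", "Cost"],
      rows: [["A", "10"], ["B", "12"]],
    };
  }
  return {
    kind: "form",
    fields: [{ name: "query", label: "What do you need?", required: true }],
  };
}

// The renderer knows only primitives; which primitives appear, and in what
// arrangement, is decided per request rather than at design time.
function render(spec: UISpec): string {
  switch (spec.kind) {
    case "table":
      return [spec.columns.join(" | "), ...spec.rows.map((r) => r.join(" | "))].join("\n");
    case "chart":
      return spec.title + "\n" + spec.series.map((s) => `${s.label}: ${"#".repeat(s.value)}`).join("\n");
    case "form":
      return spec.fields.map((f) => `${f.label}${f.required ? " *" : ""}: ____`).join("\n");
  }
}

composeSurface("compare options A and B").then((spec) => console.log(render(spec)));
```

The design point is that the renderer stays small and generic; the variety lives in what the model chooses to compose, which is the inversion described above.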
Why wrappers lose value in this shape
A wrapper sits between user and base model, offering a task-specific interface. That specialisation was genuinely useful when base models were less capable, because the wrapper constrained the interaction into a shape the underlying model could handle reliably. As base models absorb more capability (see Retrieval middleware is being absorbed into platforms at mid-tier scale for the middleware analogue, and Architect AI around principles, not vendors for the broader architecture argument), the wrapper’s usefulness narrows. The features that once distinguished it — hallucination controls, workflow templates, output formatting — are increasingly handled natively by the model. What remains is a constraint on direct access to the model’s full capabilities, and users can feel that constraint.
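To make the constraint concrete, here is a hypothetical sketch, not any real product's code: a task wrapper whose whole value is a prompt template, alongside a governance layer that wraps the transport without narrowing what can be asked. Every function and name below is an assumption for illustration.

```typescript
// Hypothetical sketch. The wrapper's value was its task template and output
// cleanup; it also forecloses every request the template cannot express.

type ModelCall = (prompt: string) => Promise<string>;

// Stand-in for direct access to a base model.
const baseModel: ModelCall = async (prompt) =>
  `model response to: ${prompt.slice(0, 40)}...`;

// The wrapper: one task shape over a general capability. As the base model
// handles templating and formatting natively, what remains is the narrowing.
function makeContractSummariser(callModel: ModelCall) {
  return (contractText: string): Promise<string> =>
    callModel(`Summarise this contract in five bullet points:\n${contractText}`);
}

// Governance without narrowing: wrap the transport (logging here; redaction
// or policy checks in practice) while leaving the full prompt space open.
function withAuditLog(callModel: ModelCall): ModelCall {
  return async (prompt) => {
    console.log(`[audit] prompt length: ${prompt.length}`);
    return callModel(prompt);
  };
}

const governed = withAuditLog(baseModel);
const summarise = makeContractSummariser(baseModel);

// The wrapped tool answers one kind of question; governed direct access answers any.
governed("Draft a clause comparison table for these two leases.").then(console.log);
summarise("AGREEMENT dated ...").then(console.log);
```

In this shape the governance layer earns its keep because it constrains transport, not capability; the task wrapper is the part that competes with the base model and loses.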
This sits alongside Expect current AI deployments to look primitive in retrospect: the interface layer as currently designed is a transitional form, not a steady state. It also has a consumption-side mirror in The first reader is an AI: the shift in how work is read and the shift in how work is commissioned are both parts of the same restructuring of the layer between human intent and machine execution.
What follows
For organisations choosing AI tools, the practical implication is to be sceptical of wrappers whose value proposition is “interface over a base model you already have access to”, and to invest in direct access with the minimum constraint needed for governance. For roles whose substance is execution rather than judgement, the implication is sharper — see Hire for durable AI judgement, not transient AI mechanics for the specific HR-layer version of the argument. The interface being generated rather than designed is only the most visible part of a restructuring that is absorbing substantial classes of software work into the model itself.