Transparency label: AI-assisted
In a recent post, I observed that LLMs are coming to be seen as analogous to computer operating systems, with prompts as the new application programs and context as the new user interface.
Support for that way of thinking comes from the way we use Anapoly's precision content prompt pack to apply structure to a source document. The pack contains a set of prompts that are applied in sequence, with human validation of the AI's output at each step. In effect, it is a program governing the end-to-end interaction between AI and human.
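To make the "prompts as a program" idea concrete, here is a minimal sketch of what running such a pack could look like: prompts applied in sequence, with a human validation gate after each step. The prompt texts and the `call_model` / `ask_human` helpers are hypothetical placeholders, not the contents of Anapoly's actual pack.

```python
def call_model(prompt: str, context: str) -> str:
    """Hypothetical stand-in for an LLM call."""
    return f"[model output for: {prompt}]"

def ask_human(output: str) -> bool:
    """Hypothetical stand-in for the human validation gate."""
    print(f"Approve this step? -> {output}")
    return True  # in practice, pause here for a real yes/no

# Illustrative prompts only -- not the real pack.
PROMPT_PACK = [
    "Identify the document's audience and purpose.",
    "Break the content into labelled information blocks.",
    "Rewrite each block to precision-content standards.",
]

def run_pack(source_document: str) -> list[str]:
    context = source_document
    approved = []
    for prompt in PROMPT_PACK:
        output = call_model(prompt, context)
        if not ask_human(output):   # human gate at every step
            break                   # halt the "program" on rejection
        approved.append(output)
        context = output            # validated output feeds the next step
    return approved
```

The key move is that the loop, not the human's memory, carries the sequence: each step only proceeds once its predecessor has been validated.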
Contract-first prompting takes the idea further. It formalises the interaction between human and AI into a negotiated agreement, locking in intent, scope, constraints, and deliverables before any output is generated. This structured handshake resembles a statement of work in engineering or a project brief in consulting. It ensures both sides – human and AI – share the same understanding of the task. Compliance mechanisms to be used during the conduct of work – such as summarisation, clarifying loops, and self-testing – are also built into the agreement. Thus, the contract becomes both compass and checklist.
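A contract of this kind can itself be treated as a data structure that is agreed before any output is generated. The sketch below is one possible shape under assumed field names; it is an illustration of the idea, not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class PromptContract:
    """Illustrative 'contract-first' agreement, locked in before work begins."""
    intent: str
    scope: str
    constraints: list[str]
    deliverables: list[str]
    # Compliance mechanisms used during the work itself.
    compliance: list[str] = field(default_factory=lambda: [
        "summarise the brief back before starting",
        "ask clarifying questions when uncertain",
        "self-test output against the deliverables",
    ])

    def render(self) -> str:
        """Render the contract as a preamble for the AI to acknowledge."""
        return "\n".join([
            f"Intent: {self.intent}",
            f"Scope: {self.scope}",
            "Constraints: " + "; ".join(self.constraints),
            "Deliverables: " + "; ".join(self.deliverables),
            "Compliance: " + "; ".join(self.compliance),
        ])
```

Rendered as a preamble, the same contract serves as the compass (what the work is for) and the checklist (what must be true when it is done).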
Our precision content prompt pack and the contract-first idea transform prompting into programmable context design.
This reframes the interaction between human and AI: not as issuing commands, but as engineering the conditions under which the AI will perform. When we treat prompts, inputs, roles, and structure as modular components of a working system, we begin to move from improvisation toward disciplined practice. Not just better prompting – but contextual systems engineering.
Transparency label justification: This diary post was drafted by Alec, with ChatGPT used to suggest edits, refine wording, and test the logic of specific formulations. Alec initiated the framing, decided the sequence of ideas, and approved the final structure and terminology.