An emerging discipline?

Transparency label: AI-assisted

In a recent post, I observed that LLMs are coming to be seen as akin to computer operating systems, with prompts as the new application programs and context as the new user interface.

Our precision content prompt pack is a good example of that thinking. The pack contains a set of prompts designed to take an unstructured document (notes of a meeting, perhaps) and apply structure (precision content) to the information it contains. We do this because AIs perform better if we give them structured information.

We apply the prompts in sequence, checking the AI’s output at each step in the sequence. In effect, it is a program that we run on the AI. 
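That sequential, check-at-each-step process can be sketched in code. This is a minimal illustration, not the actual prompt pack: `call_model` is a stand-in for a real LLM call, and the prompts and check are placeholders.

```python
def call_model(prompt: str, document: str) -> str:
    # Placeholder: a real implementation would send the prompt and
    # working document to an LLM and return its response.
    return f"[{prompt}] applied to: {document}"

def run_pipeline(document: str, prompts: list[str], check) -> str:
    """Apply each prompt in sequence, checking the output at every step."""
    result = document
    for step, prompt in enumerate(prompts, start=1):
        result = call_model(prompt, result)
        if not check(step, result):
            # In practice this is where the human reviews and corrects
            # the output before the next step runs.
            raise ValueError(f"Output failed review at step {step}")
    return result

# Illustrative structuring steps, with a trivial non-empty check.
prompts = [
    "Identify the information types in this document",
    "Split the content into labelled blocks",
    "Rewrite each block in precision-content style",
]
structured = run_pipeline("raw meeting notes", prompts, lambda step, out: bool(out))
```

The point of the sketch is the shape: the prompts are the program, and the check between steps is where the human stays in the loop.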

Contract-first prompting takes the idea further. It formalises the interaction between human and AI into a negotiated agreement about purpose, scope, constraints, and deliverables before any output is generated. This ensures that both sides – human and AI – share the same understanding of the work to be done. The agreement also contains compliance mechanisms (e.g. summarisation, clarifying loops, and self-testing) for quality control.
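One way to make such an agreement concrete is to give it an explicit shape. The sketch below is purely illustrative; the field names simply mirror the elements named above and don't represent any standard or existing tool.

```python
from dataclasses import dataclass, field

@dataclass
class PromptContract:
    """An illustrative shape for a negotiated human-AI agreement."""
    purpose: str
    scope: str
    constraints: list[str]
    deliverables: list[str]
    # Compliance mechanisms for quality control, e.g. summarisation,
    # clarifying loops, self-testing.
    compliance: list[str] = field(default_factory=list)

    def render(self) -> str:
        # Render the agreed contract as a preamble to prepend to the
        # working prompt, so both sides operate from the same terms.
        return "\n".join([
            f"Purpose: {self.purpose}",
            f"Scope: {self.scope}",
            "Constraints: " + "; ".join(self.constraints),
            "Deliverables: " + "; ".join(self.deliverables),
            "Compliance: " + "; ".join(self.compliance),
        ])

contract = PromptContract(
    purpose="Structure unstructured meeting notes",
    scope="This document only",
    constraints=["No new facts", "Keep original terminology"],
    deliverables=["Precision-content blocks"],
    compliance=["Summarise the brief back", "Ask before assuming"],
)
```

Writing the agreement down before generation is the essence of the idea: the output is judged against the contract, not against a vague intention.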

These ideas reframe the interaction between human and AI: not as simply issuing prompts, but as engineering the conditions under which the AI will perform. Not just prompting, more like contextual systems engineering.

Transparency label justification: This diary post was drafted by Alec, with ChatGPT used to suggest edits, refine wording, and test the logic of specific formulations. Alec initiated the framing, decided the sequence of ideas, and approved the final structure and terminology.