Tag: structured prompting

  • Contextual Scaffolding Framework updated for ChatGPT-5

    I put our Contextual Scaffolding Framework and OpenAI’s GPT-5 Prompting Cookbook into NotebookLM and asked it “What aspects of the gpt-5 prompting cookbook are most important to know in order to apply the contextual scaffolding framework most effectively?”

    It gave me a sensible set of prompting strategies, so I told it to integrate them into an updated Contextual Scaffolding Framework for ChatGPT-5.

    I’ve given the updates a cursory review; they look good, apart from a reference to using an API (Application Programming Interface), which is probably outside our scope. But it’s late; a proper review will have to wait for another day. Perhaps a job to do in collaboration with ChatGPT-5.

    Is contextual scaffolding a worthwhile concept? The findings from some research with Perplexity suggest it is:

    Contextual scaffolding is not only still applicable to ChatGPT-5, it is more effective and occasionally more necessary, due to ChatGPT-5’s increased context window, steerability, and the complexity of its reasoning capabilities. The consensus among thought leaders is that scaffolding remains best practice for directing AI behavior, ensuring relevance, and achieving quality outcomes in collaborative tasks. Leveraging both new features (custom instructions, preset personas, automatic reasoning mode selection) and established scaffolding techniques is recommended to get the best results. The trend is towards combining sophisticated context guidance with the model’s own adaptive reasoning for “human+AI” workflows.

  • An emerging discipline?

    Transparency label: AI-assisted

    In a recent post, I observed that LLMs are coming to be seen as analogous to computer operating systems, with prompts as the new application programs and context as the new user interface.

    Our precision content prompt pack is a good example of that thinking. The pack contains a set of prompts designed to take an unstructured document (notes of a meeting, perhaps) and apply structure (precision content) to the information it contains. We do this because AIs perform better if we give them structured information.

    We apply the prompts in sequence, checking the AI’s output at each step in the sequence. In effect, it is a program that we run on the AI. 
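    The “program we run on the AI” idea can be sketched in a few lines of Python. This is a minimal illustration, not the real prompt pack: the prompt texts and the stand-in apply_prompt function are placeholders, and the review hook stands in for the human check between steps.

    ```python
    # Illustrative prompt sequence; the real precision content pack differs.
    PROMPT_PACK = [
        "Identify the information types in this document",
        "Rewrite each block as a precision content unit",
        "Label and structure each unit",
    ]

    def apply_prompt(prompt, document):
        """Stand-in for sending (prompt + document) to the AI.
        Here it just appends the prompt so the pipeline shape is visible."""
        return document + [prompt]

    def run_pack(document, prompts, review=lambda step, output: True):
        """Apply each prompt in sequence, checking the output at each step."""
        output = document
        for step, prompt in enumerate(prompts, start=1):
            output = apply_prompt(prompt, output)
            if not review(step, output):
                raise RuntimeError(f"Step {step} rejected; revise and re-run")
        return output

    result = run_pack([], PROMPT_PACK)
    ```

    The point of the sketch is the shape, not the content: a fixed sequence of prompts, run in order, with a checkpoint after every step.
    
    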

    Contract-first prompting takes the idea further. It formalises the interaction between human and AI into a negotiated agreement about purpose, scope, constraints, and deliverables before any output is generated. This ensures that both sides – human and AI – share the same understanding of the work to be done. The agreement also contains compliance mechanisms (eg summarisation, clarifying loops, and self-testing) for quality control.
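    One way to picture such an agreement is as a structured record agreed before generation begins. The field names and checks below are illustrative assumptions, not a fixed schema:

    ```python
    # A hypothetical contract for one piece of work. Every field name here
    # is an example, not part of any standard.
    contract = {
        "purpose": "Summarise meeting notes into action items",
        "scope": ["decisions", "action items", "owners"],
        "constraints": ["no speculation", "keep original terminology"],
        "deliverables": ["bullet list of actions", "one-line summary"],
        "compliance": {
            "summarise_back": True,   # AI restates the task before starting
            "clarifying_loop": True,  # AI asks questions until scope is clear
            "self_test": True,        # AI checks output against deliverables
        },
    }

    def contract_agreed(c):
        """The contract holds only when every part of the agreement is
        present and every compliance mechanism is switched on."""
        required = ("purpose", "scope", "constraints", "deliverables", "compliance")
        return all(k in c for k in required) and all(c["compliance"].values())
    ```

    Nothing here is executable against an AI; it simply makes the negotiated agreement explicit and checkable before any prompt is issued.
    
    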

    These ideas reframe the interaction between human and AI: not as simply issuing prompts, but as engineering the conditions under which the AI will perform. Not just prompting, more like contextual systems engineering.

    Transparency label justification: This diary post was drafted by Alec, with ChatGPT used to suggest edits, refine wording, and test the logic of specific formulations. Alec initiated the framing, decided the sequence of ideas, and approved the final structure and terminology.