Tag: contextual scaffolding

  • The art of goal-directed context management

    with acknowledgement to Erling S. Andersen, whose book Goal Directed Project Management inspired me in 1988

    Transparency label: human only


    How do we control an AI?

To use an AI effectively, we have to configure its behaviour to suit the purpose of our work. We do this by giving it information in the form of instructions, guidance, questions, and reference material. We put this information into the AI’s context to govern how it behaves. See What is context? for a more detailed explanation; context is important to understand well.

    To apply a standard behaviour from the AI across all chats, we use custom instructions.

    During individual chats, we control the AI through prompts, the canvas, and file uploads (prompts can include instructions, guidance and questions).

To control the AI’s behaviour for a group of chats organised as a project, we use project instructions.

    To make information available to all chats in a project, we upload project files.

    Finally, if we need the services of an AI specialist, we create a form of agent (called a GPT). We configure the agent with bespoke instructions to make its behaviour appropriate to its specialist role, and we upload reference files to give it a knowledge base. Our agent can then be called in as necessary to provide specialist advice through chats.

    This method of controlling the AI by putting information into its context is similar to the way we control a computer by giving it a program to run. But there is an important difference: a computer executes the instructions in its program precisely; an AI (large language model) interprets the information we give it. We should aim, therefore, to give the AI all the information it needs to avoid losing focus. A vague prompt gets a vague answer. A precise prompt with the right background information gets us something much more useful.

    There are similarities with the way we manage a human. The human has a store of learned knowledge to call upon, can be given instructions on how to behave during a task, can use reference material, can converse with others, and can co-edit documents with them. Like an AI, the human interprets the information obtained by these means.

    Managing scale & complexity

    As the scale or complexity of a task grows, it becomes increasingly difficult to coordinate the work of humans to achieve a common goal. To overcome this problem, we use project and programme methodologies to structure the work; these provide a form of scaffolding to let the work be carried out safely and effectively. The scaffolding is made strong and stable through a plan which – amongst other things – divides the work into manageable packages, defines who or what will do the work, when and how it will be done, and what work outputs are required.

    Work whose purpose is well defined and carries an acceptable risk of failure can be organised as a project. The scaffolding for this takes the form of the project plan and related documentation, including external standards. Members of the project team are given roles, which can vary during the project, and assigned tasks. The detailed information they need to carry out their work successfully is held in the project scaffolding. When the work has greater scope and/or complexity, it can be organised as a programme. In this case the scaffolding is provided by the programme management plan and related documentation.

    AI as a team member

When AI is used by a member of a project team, its characteristics make it more like an additional member of the team than a software tool. The AI, like the human, must be given roles and assigned tasks. It, too, must obtain detailed information from the project scaffolding. Unlike a human, however, the AI cannot yet go and fetch the information it needs; the human user must provide it. The human does so by putting relevant information into the AI’s context.

    Information required for input to the AI’s context over the course of the project is held in a part of the project scaffolding called the contextual scaffolding. Although closely integrated with other parts of the project scaffolding, the contextual scaffolding has a distinct purpose: to hold the information – AI artefacts – needed to manage the behaviour and effectiveness of the AI over the course of the project. The contextual scaffolding is the responsibility of the project’s AI manager.

    AI artefacts

    The nature, required content, and production timetable for AI artefacts are governed by and tied to the project plan. Artefacts in the contextual scaffolding include:

    • AI policy
    • AI management plan
    • an AI startup pack to configure the AI at the outset of the project, when its objective will be to outline the contextual scaffolding and help to produce an initial set of AI artefacts;
    • stage packs tailored to the varying uses intended for the AI over the course of the project (as with other project artefacts, these begin life in outline form and gain content as the project progresses);
    • GPT packs to create the specialist AI agents that are needed.

    The art of goal-directed context management

    The art of goal-directed context management is to align the AI’s evolving context with the purpose and structure of the project or programme it is associated with, ensuring the AI always has the right information, at the right level of detail, to contribute effectively to the current stage of work.

  • Creating a video from reference documents

    Transparency label: this post is Human-only; the video is AI-only

In an earlier post, I explained how I used NotebookLM to update our Contextual Scaffolding Framework to allow for new capabilities in ChatGPT-5. In that same session, I accidentally clicked on the “Video overview” function, and this is what NotebookLM produced. I did not give it any instructions about focus or audience, and it was working only with the then current version of my Contextual Scaffolding Framework and the ChatGPT-5 Prompting Guide. I think it’s remarkably good. See what you think.

  • Contextual Scaffolding Framework updated for ChatGPT-5

    I put our Contextual Scaffolding Framework and OpenAI’s GPT-5 Prompting Cookbook into NotebookLM and asked it “What aspects of the gpt-5 prompting cookbook are most important to know in order to apply the contextual scaffolding framework most effectively?”

    It gave me a sensible set of prompting strategies, so I told it to integrate them into an updated Contextual Scaffolding Framework for ChatGPT-5.

    I’ve given the updates a cursory review; they look good apart from a reference to use of an API (Application Programming Interface) which is probably outside our scope. But it’s late; a proper review will have to wait for another day. Perhaps a job to do in collaboration with ChatGPT-5.

    Is contextual scaffolding a worthwhile concept? The findings from some research with Perplexity suggest it is:

    Contextual scaffolding is not only still applicable to ChatGPT-5, it is more effective and occasionally more necessary, due to ChatGPT-5’s increased context window, steerability, and the complexity of its reasoning capabilities. The consensus among thought leaders is that scaffolding remains best practice for directing AI behavior, ensuring relevance, and achieving quality outcomes in collaborative tasks. Leveraging both new features (custom instructions, preset personas, automatic reasoning mode selection) and established scaffolding techniques is recommended to get the best results. The trend is towards combining sophisticated context guidance with the model’s own adaptive reasoning for “human+AI” workflows.

  • Contextual Scaffolding Framework for ChatGPT-5


    Contextual Scaffolding Framework for ChatGPT-5

    Note: The guidance shown in bold was extracted from the GPT-5 Prompting Cookbook by NotebookLM without human involvement. NotebookLM then integrated the guidance into the earlier version of the Contextual Scaffolding Framework to produce this document.

    Transparency label: AI-assisted


    The development of a knowledge-based product in collaboration with an AI requires structured contextual scaffolding. This scaffolding takes the form of two complementary models: a phase model to structure the overall lifecycle, and a project model to guide collaborative work within each phase.

    The phase and project models are flexible and should be applied with as light a touch as possible. The AI will assist in mapping out this programme of work — a key part of its contextual scaffolding. To ensure optimal performance and continuous improvement of the scaffolding itself, GPT-5 should be leveraged as a meta-prompter to refine the instructions and context provided, thereby allowing for iterative enhancement of the prompts that guide the AI.


    Phase Model (Cradle-to-Grave Structure)

    The phase model provides end-to-end structure across the full lifecycle of the product. When collaborating with GPT-5, the AI’s “agentic eagerness”—its balance between proactivity and awaiting explicit guidance—should be carefully calibrated for each phase to maximise efficiency and relevance.

    1. Concept — Explore potential benefits, clarify purpose, and draft a business case. During this exploratory phase, prompt GPT-5 for more eagerness by increasing its reasoning_effort parameter. This encourages persistence and allows the AI to autonomously research and deduce without frequent clarification questions, promoting proactivity in ambiguous situations.
    2. Definition — Specify the intended final deliverable in sufficient detail to support structured development. In this phase, as clarity increases, prompt for less eagerness. Define clear criteria in your prompt for how GPT-5 should explore the problem space, including specific goals, methods, early stop criteria, and depth. This reduces tangential tool-calling and focuses the AI on precise definition.
    3. Development — Develop and test the final product. Similar to the Definition phase, prompt for less eagerness and potentially lower reasoning_effort to focus GPT-5 on the defined development objectives. Encourage the AI to proactively propose changes and proceed with plans for user approval/rejection, rather than constantly asking for confirmation. This aligns with a “management by exception” approach, where the human intervenes only when necessary.
    4. Acceptance — Validate the deliverable in the hands of its intended users, ensure it meets defined needs, and bring it into use. For validation tasks, explicitly define detailed evaluation criteria in the prompt, guiding GPT-5 to assess outputs against specified needs and acceptance criteria.
    5. Operation and Maintenance — Use, monitor, and refine the product until it is retired. In this ongoing phase, configure GPT-5’s prompts to support continuous monitoring and refinement, potentially adjusting eagerness based on the nature of maintenance tasks (e.g., more eagerness for problem diagnosis, less for routine updates).
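By way of illustration, the phase-by-phase eagerness settings above could be held as a small lookup in the contextual scaffolding. This sketch assumes the reasoning_effort and verbosity parameter names discussed in this document; the values chosen for each phase are indicative only, not a prescription:

```python
# Illustrative sketch only: map each lifecycle phase of the phase model
# to suggested GPT-5 request settings. The reasoning_effort and
# verbosity parameter names follow the controls discussed above; the
# specific value for each phase is an assumption.

PHASE_SETTINGS = {
    "concept":     {"reasoning_effort": "high",   "verbosity": "medium"},
    "definition":  {"reasoning_effort": "medium", "verbosity": "low"},
    "development": {"reasoning_effort": "low",    "verbosity": "low"},
    "acceptance":  {"reasoning_effort": "medium", "verbosity": "high"},
    "operation":   {"reasoning_effort": "medium", "verbosity": "low"},
}

def request_params(phase: str) -> dict:
    """Return the suggested request settings for a lifecycle phase."""
    if phase not in PHASE_SETTINGS:
        raise ValueError(f"unknown phase: {phase}")
    return PHASE_SETTINGS[phase]
```

A stage pack could carry such a table so that whoever opens a chat for a given phase applies consistent settings.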

    Project Model (Within-Phase Scaffolding)

    Within each phase, a project-based approach provides structure for collaborative delivery.

    5. Human-AI Collaboration

    The AI acts as a collaborator.

    • The human is the decision-maker.
    • The AI adapts to the nature of work at hand.
    • Because contextual scaffolding keeps pace with the progress of work, the AI remains grounded and its responses stay relevant and precise.
    • To maximise this effectiveness, leverage GPT-5’s advanced capabilities as follows:
      • Utilise the Responses API for Statefulness: Always use the Responses API to pass previous reasoning_items back into subsequent requests. This allows GPT-5 to refer to its prior reasoning traces, which eliminates the need for the AI to reconstruct a plan from scratch after each tool call, leading to improved agentic flows, lower costs, and more efficient token usage.
      • Ensure Surgical Instruction Adherence: GPT-5 follows prompt instructions with “surgical precision”. Therefore, all contextual scaffolding prompts—including product descriptions, acceptance criteria, and stage objectives—must be thoroughly reviewed for ambiguities and contradictions. Poorly-constructed prompts can cause GPT-5 to waste reasoning tokens trying to reconcile conflicts, significantly impairing performance. Structured XML specifications (e.g., <context_gathering>, <code_editing_rules>) are highly effective for defining context, rules, and expectations explicitly.
      • Manage Communication with Tool Preamble Messages: GPT-5 is trained to provide clear upfront plans and consistent progress updates via “tool preamble” messages. Prompts should steer the frequency, style, and content of these preambles (e.g., using <tool_preambles> tags) to enhance the human user’s ability to follow the AI’s thinking and progress, facilitating effective feedback loops and improving the interactive user experience.
      • Control Verbosity Strategically: Use the verbosity API parameter and natural-language overrides within prompts to tailor the length of GPT-5’s final answers. For “managing progress” products or status updates, prompt for concise status updates. For “technical” interim products or final deliverables, prompt for higher verbosity where detailed explanations or more comprehensive content (e.g., heavily commented code, detailed report sections) are required.
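As a sketch of how such a structured XML specification might be assembled in practice: the helper name and the wording inside the tags below are illustrative, not taken from the Cookbook.

```python
# Illustrative sketch only: assemble a prompt fragment using the
# XML-style tags mentioned above (<context_gathering>, <tool_preambles>).
# The helper name and the text inside the tags are hypothetical.

def build_scaffold_prompt(goal: str, stop_criteria: str, preamble_style: str) -> str:
    """Build a structured prompt fragment for a single stage of work."""
    return (
        "<context_gathering>\n"
        f"Goal: {goal}\n"
        f"Early stop criteria: {stop_criteria}\n"
        "</context_gathering>\n"
        "<tool_preambles>\n"
        f"Before each tool call, give a one-line plan update in a {preamble_style} style.\n"
        "</tool_preambles>"
    )

prompt = build_scaffold_prompt(
    goal="map the tender requirements to outline responses",
    stop_criteria="stop once every mandatory requirement is covered",
    preamble_style="concise",
)
```

Because GPT-5 follows instructions with such precision, keeping these fragments in the contextual scaffolding, rather than retyping them ad hoc, helps avoid the ambiguities and contradictions warned about above.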

    Quality Management

    A simplified quality management process ensures control:

    • Each product has a defined description or acceptance criteria. For knowledge-based products, prompt GPT-5 to iteratively execute against self-constructed excellence rubrics to elevate output quality. The AI can internalise and apply high-quality standards based on criteria defined in the prompt, thereby ensuring deliverables meet the highest standards.
    • Quality is assessed at stage gates.
    • Feedback loops enable iteration and correction. When developing knowledge products, provide GPT-5 with explicit instructions on design principles, structure, style guides, and best practices (e.g., citation formats, required sections, tone of voice) within the prompt. This ensures the AI’s contributions adhere to established standards and “blend in” seamlessly with existing work, maintaining consistency and professionalism.
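To show what prompting against a self-constructed rubric might look like, here is a minimal sketch; the instruction wording is an assumption, not text from the Prompting Cookbook.

```python
# Illustrative sketch only: wrap a product's acceptance criteria in a
# rubric-style instruction, as described above. The instruction wording
# is hypothetical.

def rubric_prompt(product: str, criteria: list[str]) -> str:
    """Turn acceptance criteria into a self-assessed excellence rubric."""
    rubric = "\n".join(f"- {c}" for c in criteria)
    return (
        f"Before drafting the {product}, construct an excellence rubric "
        "from the acceptance criteria below, then iterate on your draft "
        "until it meets every item.\n"
        f"Acceptance criteria:\n{rubric}"
    )
```

The acceptance criteria themselves would come from the product description held in the project scaffolding, so the rubric stays tied to the stage-gate assessment.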

    Example: Preparing a Competitive Tender

    When preparing a competitive tender, the required knowledge product is a complete, high-quality bid submission. However, at the outset, the scope, fulfilment strategy, resource requirements, and pricing model will not be clear.

    Phase Model in Action
    The work progresses through phases – starting with a concept phase to assess feasibility and value, followed by definition (specifying scope and win themes), development (producing the actual bid), acceptance (review and sign-off), and finally operation and maintenance (post-submission follow-up or revision).

    Project Model in Each Phase
    Within each phase, a lightweight project structure is applied to guide collaboration with the AI. For example, in the definition phase, the AI helps analyse requirements, map obligations, and develop outline responses. In the development phase, it helps draft, refine, and format bid content.

    Contextual Scaffolding
    At every stage, contextual scaffolding ensures the AI is working with the right background, priorities, and current materials. Thus it can focus on what matters most, and contribute in ways that are coherent, precise, and aligned with both the tender requirements and internal strategy.

    Transparency label justification. This document was developed through structured collaboration between Alec Fearon, ChatGPT and NotebookLM. Alec provided the core ideas, framing, and sequence. ChatGPT contributed to the organisation, refinement, and drafting of the text, under Alec’s direction. NotebookLM provided GPT-5 updates. The content reflects a co-developed understanding, with human oversight and final decisions throughout.


  • Contextual Scaffolding for AI Work

    Transparency label: AI-assisted

    In an earlier post, I introduced the idea of “contextual systems engineering”. Building on that idea, we are developing a way to manage collaborative work with AI — especially where the goal of the collaboration is a knowledge-based product: for example a report, a competitive tender, an academic paper, or a policy framework.

    What we have come up with is the idea of a Contextual Scaffolding Framework. The framework combines two models:

    • A phase model to provide an overall structure for the work of producing the product, from its concept through to operational use.
    • A project model to structure the detailed work within each phase.

    The principle is simple: if we want AI to stay helpful and relevant, we need to give it the right information. The framework helps us decide what information to provide and when.

    The information we provide is put into the AI’s “context” — and the context must evolve to keep pace with the work. Like a satnav updating in real time, the contextual scaffolding keeps the AI aware of where you are, what matters most, and how best to move forward.

    🧱 Read the full framework here


    Transparency label justification. The post was drafted by ChatGPT based on Alec’s prior work in a long chat, with Alec guiding tone and purpose. ChatGPT proposed structure and language, but all content reflects and links back to a human-authored framework.