Tag: context

  • The art of goal-directed context management

    with acknowledgement to Erling S. Andersen, whose book Goal Directed Project Management inspired me in 1988

    Transparency label: human only


    How do we control an AI?

    To use an AI effectively, we have to configure its behaviour to suit the purpose of our work. We do this by giving it information in the form of instructions, guidance, questions, and reference material. We put this information into the AI’s context to govern how it behaves. See What is context? for a more detailed explanation; it is important to understand context well.

    To apply a standard behaviour from the AI across all chats, we use custom instructions.

    During individual chats, we control the AI through prompts, the canvas, and file uploads (prompts can include instructions, guidance and questions).

    To control the AI’s behaviour for a group of chats organised as a project, we use project instructions.

    To make information available to all chats in a project, we upload project files.

    Finally, if we need the services of an AI specialist, we create a form of agent (called a GPT). We configure the agent with bespoke instructions to make its behaviour appropriate to its specialist role, and we upload reference files to give it a knowledge base. Our agent can then be called in as necessary to provide specialist advice through chats.

    This method of controlling the AI by putting information into its context is similar to the way we control a computer by giving it a program to run. But there is an important difference: a computer executes the instructions in its program precisely; an AI (large language model) interprets the information we give it. We should aim, therefore, to give the AI all the information it needs to avoid losing focus. A vague prompt gets a vague answer. A precise prompt with the right background information gets us something much more useful.

    There are similarities with the way we manage a human. The human has a store of learned knowledge to call upon, can be given instructions on how to behave during a task, can use reference material, can converse with others, and can co-edit documents with them. Like an AI, the human interprets the information obtained by these means.

    Managing scale & complexity

    As the scale or complexity of a task grows, it becomes increasingly difficult to coordinate the work of humans to achieve a common goal. To overcome this problem, we use project and programme methodologies to structure the work; these provide a form of scaffolding to let the work be carried out safely and effectively. The scaffolding is made strong and stable through a plan which – amongst other things – divides the work into manageable packages, defines who or what will do the work, when and how it will be done, and what work outputs are required.

    Work whose purpose is well defined and whose risk of failure is acceptable can be organised as a project. The scaffolding for this takes the form of the project plan and related documentation, including external standards. Members of the project team are given roles, which can vary during the project, and assigned tasks. The detailed information they need to carry out their work successfully is held in the project scaffolding. When the work has greater scope and/or complexity, it can be organised as a programme. In this case the scaffolding is provided by the programme management plan and related documentation.

    AI as a team member

    When AI is used by a member of a project team, its characteristics make it more like an additional member of the team than a software tool. The AI, like the human, must be given roles and assigned tasks. It, too, must obtain detailed information from the project scaffolding. Unlike a human, however, the AI cannot yet go and fetch the information it needs; the human user must provide it. The human does so by putting relevant information into the AI’s context.

    Information required for input to the AI’s context over the course of the project is held in a part of the project scaffolding called the contextual scaffolding. Although closely integrated with other parts of the project scaffolding, the contextual scaffolding has a distinct purpose: to hold the information – AI artefacts – needed to manage the behaviour and effectiveness of the AI over the course of the project. The contextual scaffolding is the responsibility of the project’s AI manager.

    AI artefacts

    The nature, required content, and production timetable for AI artefacts are governed by and tied to the project plan. Artefacts in the contextual scaffolding include:

    • an AI policy;
    • an AI management plan;
    • an AI startup pack to configure the AI at the outset of the project, when its objective will be to outline the contextual scaffolding and help to produce an initial set of AI artefacts;
    • stage packs tailored to the varying uses intended for the AI over the course of the project (as with other project artefacts, these begin life in outline form and gain content as the project progresses);
    • GPT packs to create the specialist AI agents that are needed.

    The art of goal-directed context management

    The art of goal-directed context management is to align the AI’s evolving context with the purpose and structure of the project or programme it is associated with, ensuring the AI always has the right information, at the right level of detail, to contribute effectively to the current stage of work.

  • An emerging discipline?

    Transparency label: AI-assisted

    In a recent post, I observed that LLMs are coming to be seen as something like computer operating systems, with prompts as the new application programs and context as the new user interface.

    Our precision content prompt pack is a good example of that thinking. The pack contains a set of prompts designed to take an unstructured document (notes of a meeting, perhaps) and apply structure (precision content) to the information it contains. We do this because AIs perform better if we give them structured information.

    We apply the prompts in sequence, checking the AI’s output at each step in the sequence. In effect, it is a program that we run on the AI. 
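
    To make the analogy concrete, here is a minimal sketch of such a program, written in Python against the openai client. The prompt texts, file name, and model choice are placeholders, and the pack itself is designed to be run interactively in ChatGPT rather than through code; the sketch only illustrates the “prompts as a program” idea.

    from openai import OpenAI

    # A minimal sketch, assuming the openai Python client and an API key in the
    # environment. Each prompt in the pack is applied in sequence, and the run
    # pauses for human review after every step, mirroring the way the pack is
    # used in a chat. Prompt texts, file name, and model choice are placeholders.
    client = OpenAI()

    prompts = [
        "Clean the following document for structured analysis...",  # Step 0, abridged
        "Break this cleaned document into numbered segments...",    # Step 1, abridged
        # ...remaining prompts from the pack...
    ]

    with open("source.txt") as f:  # hypothetical source file
        working_text = f.read()

    for step, prompt in enumerate(prompts):
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[
                {"role": "system", "content": prompt},
                {"role": "user", "content": working_text},
            ],
        )
        working_text = response.choices[0].message.content
        print(f"--- Step {step} output ---\n{working_text}\n")
        input("Review the output, then press Enter to run the next step...")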

    Contract-first prompting takes the idea further. It formalises the interaction between human and AI into a negotiated agreement about purpose, scope, constraints, and deliverables before any output is generated. This ensures that both sides – human and AI – share the same understanding of the work to be done. The agreement also contains compliance mechanisms (e.g. summarisation, clarifying loops, and self-testing) for quality control.
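
    As an illustration only, such an agreement might be captured as a simple structure and prepended to every working prompt. The sketch below is hypothetical: the field names and example values are invented for the example, not a defined standard or an exact record of the method.

    # Hypothetical sketch of a contract-first agreement; the fields and values
    # are illustrative, not a defined standard.
    contract = {
        "purpose": "Convert meeting notes into precision content modules",
        "scope": "The uploaded notes only; no external sources",
        "constraints": "UK English; do not interpret beyond the source text",
        "deliverables": "Typed segments, structured output bundle, glossary",
        "compliance": "Summarise the brief back, ask clarifying questions, "
                      "self-test the output against the deliverables",
    }

    def contract_preamble(agreement: dict) -> str:
        """Render the agreed contract as a preamble for every working prompt."""
        lines = [f"{field.capitalize()}: {value}" for field, value in agreement.items()]
        return "Work under this agreed contract:\n" + "\n".join(lines)

    print(contract_preamble(contract))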

    These ideas reframe the interaction between human and AI: not as simply issuing prompts, but as engineering the conditions under which the AI will perform. Not just prompting, more like contextual systems engineering.

    Transparency label justification: This diary post was drafted by Alec, with ChatGPT used to suggest edits, refine wording, and test the logic of specific formulations. Alec initiated the framing, decided the sequence of ideas, and approved the final structure and terminology.

  • Precision Content Prompt Pack

    Precision Content Prompt Pack

    Version: 01, 1 August 2025
    Authors: Alec Fearon and ChatGPT-4o.

    Transparency label: AI-assisted


    Purpose

    This is a six-step process, preceded by a pre-processing step (Step 0), for converting a source document into a structured format that is easy for an LLM to understand. The format is based on the Darwin Information Typing Architecture (DITA) and ideas developed by Precision Content. It has the following content types:

    • Reference (what something is)
    • Concept (how to think about it)
    • Principle (why it works)
    • Process (how it unfolds)
    • Task (how to do it)

    The steps are carried out in sequence, one at a time: Step 0 cleans the source material, and Steps 1 to 6 then segment, type, rewrite, group, and re-package it, finishing with a glossary. There is human review at the end of each step.
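
    For readers who like to see the sequence written out as data, the sketch below summarises the steps and what each produces. It paraphrases the step descriptions given later in this pack and is illustrative only; it is not itself part of the pack.

    # Illustrative summary of the pack's steps; names and outputs are
    # paraphrased from the step descriptions later in this document.
    STEPS = [
        ("Step 0", "Pre-Processing", "cleaned source text"),
        ("Step 1", "Segmenting the Document", "numbered list of segments"),
        ("Step 2", "Typing the Segments", "a type and reason for each segment"),
        ("Step 3", "Rewriting for Precision", "plain-language modules"),
        ("Step 4", "Grouping by Type", "segments sorted under type headings"),
        ("Step 5", "Structured Output Bundle", "packaged output with metadata"),
        ("Step 6", "Glossary Generation", "terms defined from the document"),
    ]

    for number, name, output in STEPS:
        print(f"{number} – {name}: produces {output}; review before moving on")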

    To use

    First, open a chat in ChatGPT and upload this file into the chat; it is in Markdown (.md) because that is easy for LLMs to read.
    Note: you can drag & drop the file into the chat or use the upload button.

    Tell ChatGPT: “Please create a new canvas from this markdown file so we can work together using the precision content prompt pack.” ChatGPT will:

    • Read the file
    • Create a canvas
    • Use each ## heading to split the file into separate cards
    • Preserve formatting and headings

    Then upload the source file into the chat. Tell ChatGPT: “Please convert the uploaded file [filename] into precision content using the steps defined in the canvas Anapoly AI Labs Precision Content Prompt Pack. Begin Step 0.”

    ChatGPT will extract the content of the file and clean it as per Step 0 – Pre-Processing. It will paste the cleaned material into the “📄 Source Document” card for you to review. That sets you up to proceed with the following steps. The output of each step is put into the “Work Area – Output by Step” card in the canvas. Edit the output of each step as necessary before proceeding to the next step.

    The final output is put into the card “Review Notes / Final Output / Glossary”. You can tell ChatGPT to export it from there as a file for download. If it is to be used as reference material, filetype .md is recommended.


    Step 0 – Pre-Processing

    Purpose: Clean the raw input before analysis.

    Prompt:

    Clean the following document for structured analysis. Remove:

    • Repeated headers/footers
    • Navigation links, timestamps, metadata
    • Formatting glitches (e.g. broken paragraphs)

    Retain all meaningful content exactly as written. Do not summarise, interpret, or reword.


    Step 1 – Segmenting the Document

    Purpose: Divide into discrete, meaningful segments.

    Prompt:

    Break this cleaned document into a numbered list of coherent segments. Each segment should reflect a single topic, paragraph, or unit of meaning.

    Format:
    [1] [text]
    [2] [text]


    Step 2 – Typing the Segments

    Purpose: Label each segment by information type.

    Types:

    • Reference – what something is
    • Concept – how to think about it
    • Principle – why it works
    • Process – how it unfolds
    • Task – how to do it

    Prompt:

    For each segment, assign the most relevant type. Include a short justification.

    Format:
    [1] Type: [type] – [reason]


    Step 3 – Rewriting for Precision

    Purpose: Convert to structured, plain-language modules.

    Prompt:

    Rewrite each segment according to its type:

    • Use short declarative sentences
    • Use bullet points for steps or lists
    • Avoid vagueness and repetition

    Step 4 – Grouping by Type

    Purpose: Reorganise output by information type.

    Prompt:

    Sort all rewritten segments under clear headings:

    • 🗂 Reference
    • 🧠 Concept
    • ⚖️ Principle
    • 🔄 Process
    • 🔧 Task

    Preserve segment numbers.


    Step 5 – Structured Output Bundle

    Purpose: Package the content for reuse.

    Prompt:

    Format output with markdown or minimal HTML.
    Include metadata at the top:

    Title: [your title]
    Source: [file name or link]
    Date: [today's date]
    Content type: Precision Content

    Step 6 – Glossary Generation

    Purpose: Extract and define key terms.

    Prompt:

    Identify important terms in the text and define each using only information in the document.

    Format:
    Term: [definition]


    📄 Source Document

    [Paste the cleaned or raw source text here after Step 0.]


    Work Area – Output by Step

    Use this section to draft segmented content, types, rewrites, and grouped outputs.


    Review Notes / Final Output / Glossary

    Use this area for human commentary, final outputs, or glossary results.