Tag: context

  • The Master Prompt in a changing scenario

    The Master Prompt in a changing scenario

    A discussion with NotebookLM
    21 September 2025


    References

    1. The Future of AI Prompting: 5 Context Levels
    2. The Master Prompt Method: Unlock AI’s Full Potential (Part 1)
    3. The Master Prompt Method (Part 2)
    4. Goal-Directed Context Management

    Contents

    What is the Master Prompt?

    Its relationship with Goal-Directed Context Management

    Application to a programme-like endeavour

    What is the Master Prompt?

    Give me a concise explanation of the purpose, scope and content of the master prompt advocated by Tiago Forte and Hayden Miyamoto in the selected sources.

    Based on the sources, the master prompt is a comprehensive document that provides an AI with detailed context about a person, company, or organisation to dramatically improve the quality, relevance, and speed of its responses. Its purpose is to transform the AI from a generic tool into a strategic partner that understands your specific business, goals, and operational style, thereby unlocking its full potential.

    Purpose

    The primary goal of the master prompt is to provide the AI with deep, persistent context, eliminating the need to explain your business details in every new conversation. By doing this, you can:

    Get more done, better and faster: The quality and productivity of your work can improve drastically, enabling you to accomplish in under an hour what might have previously taken weeks.

    Democratise execution: It allows small businesses and even one-person companies to perform complex tasks, like strategic planning or creating sophisticated hiring processes, that were once only feasible for large corporations.

    Enhance strategic thinking: The AI, armed with your business context, can act as a “thought partner,” helping you think through the strategic implications of decisions.

    Automate repetitive tasks: It can be used to systematise and automate annoying, repetitive business tasks, such as creating Standard Operating Procedures (SOPs) or generating marketing materials.

    Scope

    The master prompt’s scope is broad, intended to be a “second brain for an organization” that contains a wide range of information. It can be applied across all major business functions, including marketing, sales, finance, and operations.

    The methodology involves several layers of context:

    Level 3 (User Preferences): Basic settings on how you want the AI to communicate, such as format (e.g., bullet points), tone (e.g., conservative), and requests (e.g., stating its level of certainty). In Claude, this is called ‘Personal Preferences’; in ChatGPT, it’s ‘Custom Instructions’.

    Level 4 (Project Knowledge): For more specific tasks, you can upload project-specific documents into dedicated workspaces. The AI will draw on these files for any conversation within that project, acting like a fully onboarded team member.

    The master prompt itself is the most comprehensive layer, a single document containing detailed information about the entire organisation.

    Content

    Hayden Miyamoto suggests creating the master prompt in a document (like a Google Doc) with several key sections. The AI can even help you generate the content for each section by asking you a series of questions.

    The recommended content includes the sections below (a sketch of how they come together follows the list):

    Personal Info: Your name, role, strengths, weaknesses, and how you want to use AI.

    Company Info: When the company was established, number of employees, reporting structure, markets served, and ideal customer profile.

    Products and Services: Descriptions of what you offer, including pricing, features, and benefits.

    Market Information: A description of your differentiators and key competitors.

    Team / People: Who is on your team, their roles, and their key performance indicators (KPIs).

    Culture: The organisation’s core values, mission, and long-term goals (or “Big Hairy Audacious Goal”).

    Custom Prompts/Protocols: You can also embed specific frameworks or “mini-programs” that the AI can execute when given a trigger word, such as “do AI SOP” or “do AI hiring”.
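    A minimal sketch of how those sections might be gathered into a single document, assuming a simple helper of my own devising rather than anything prescribed in the sources:

    ```python
    # Sketch only: assemble a master prompt document from the sections above.
    # The headings follow Miyamoto's list; the placeholder content and the
    # helper function are assumptions for illustration.

    MASTER_PROMPT_SECTIONS = {
        "Personal Info": "Name, role, strengths, weaknesses, and how you want to use AI.",
        "Company Info": "Founding date, headcount, reporting structure, markets served, ideal customer profile.",
        "Products and Services": "What you offer, with pricing, features, and benefits.",
        "Market Information": "Differentiators and key competitors.",
        "Team / People": "Team members, their roles, and their KPIs.",
        "Culture": "Core values, mission, and long-term goals (BHAG).",
        "Custom Prompts/Protocols": "Mini-programs triggered by commands such as 'do AI SOP' or 'do AI hiring'.",
    }

    def build_master_prompt(sections: dict[str, str]) -> str:
        """Join the sections into one document that can be given to the AI as context."""
        parts = ["# Master Prompt"]
        for heading, body in sections.items():
            parts.append(f"## {heading}\n{body}")
        return "\n\n".join(parts)

    if __name__ == "__main__":
        print(build_master_prompt(MASTER_PROMPT_SECTIONS))  # paste into a Google Doc or project knowledge
    ```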

    The master prompt also contains what might be described as functions, enabling the AI to be instructed with a command such as “do AI hiring”. Give me a concise explanation of what these functions are, how they fit into the master prompt, and how they can be used.

    Based on the sources, the “functions” you’re referring to are essentially pre-defined protocols or “mini-programs” embedded within the master prompt document. They are triggered by a specific command, such as “do AI hiring” or “do AI SOP”, which instructs the AI to execute a complex, multi-step task according to a pre-written framework.

    What They Are

    These functions are not literal programming functions but rather detailed sets of instructions, frameworks, and protocols that you write directly into your master prompt. They typically sit after the context about the organisation, its products, and its culture has been laid out. For instance, Hayden Miyamoto mentions having specific prompts like “AI hiring”, “AI CMO”, and “AI SOP” as distinct sections at the end of his master prompt.

    By being part of the master prompt, these protocols have access to all the other context within the document. When a function is triggered:

    1. The AI recognises the trigger word (e.g., “AI hiring”).

    2. It accesses the detailed instructions for that specific protocol from the master prompt and applies them using the organisational context already in the document.

    How They Can Be Used

    One example is the “do AI hiring” function:

    Trigger: The user gives a command such as “do AI hiring for marketing director”.

    Execution: The AI, following the “AI hiring” protocol in the master prompt, automatically generates a comprehensive set of hiring documents. This includes:

        ◦ A detailed job description with responsibilities and performance metrics.

        ◦ A scorecard defining A-player vs. B-player performance.

        ◦ Recruitment materials like screening interview criteria and sample questions.

        ◦ Proposals for working interviews and homework assignments for candidates.

    Impact: This process, which would typically take an HR director the better part of a week involving multiple meetings, can be accomplished in minutes. It democratises sophisticated business practices like “top grading,” making them accessible even to small companies that wouldn’t otherwise have the resources.

    Another example is the “do AI SOP” function:

    Trigger: The user provides a screenshot of a process flowchart and types “do AI SOP”.

    Execution: The AI analyses the flowchart and writes out a detailed Standard Operating Procedure for every single step in the process.

    Impact: This replaces what could be months of tedious documentation work, creating accountability and consistency within the organisation.

    In essence, these embedded functions allow you to leverage a one-line command to unleash a powerful, context-aware workflow that dramatically improves productivity and the quality of the output.
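    To make the mechanism concrete, here is a minimal, hypothetical sketch of how such trigger words could be resolved if the master prompt were handled programmatically. The trigger names come from the sources; the function and the way protocol text is stored are assumptions for illustration, not a description of how Miyamoto works.

    ```python
    # Hypothetical sketch: resolve a "do ..." trigger word to its protocol text
    # and combine it with the master prompt so the AI receives both as context.

    PROTOCOLS = {
        "do ai hiring": "Hiring protocol: job description, scorecard, screening "
                        "criteria, working interview, and homework assignment.",
        "do ai sop": "SOP protocol: write a detailed standard operating procedure "
                     "for every step in the supplied flowchart.",
    }

    def build_context(master_prompt: str, user_command: str) -> str:
        """Return the full context to send to the AI for a triggered protocol."""
        command = user_command.strip()
        for trigger, protocol in PROTOCOLS.items():
            if command.lower().startswith(trigger):
                detail = command[len(trigger):].strip()  # e.g. "for marketing director"
                return (f"{master_prompt}\n\n"
                        f"Protocol to execute: {protocol}\n"
                        f"Specific request: {detail or 'none given'}")
        # No trigger matched: treat the command as an ordinary prompt.
        return f"{master_prompt}\n\nRequest: {command}"

    # Example usage (the master prompt text itself is a placeholder here):
    context = build_context("...master prompt text...", "do AI hiring for marketing director")
    ```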

    Relationship with Goal-Directed Context Management

    How do the ideas in goal-directed-context-management.md relate to those put forward by Tiago Forte and Hayden Miyamoto, in particular their levels of context and the master prompt?

    Excellent question. The ideas in goal-directed-context-management.md and those put forward by Tiago Forte and Hayden Miyamoto represent two distinct yet highly compatible approaches to managing an AI’s context. Both frameworks recognise that providing context is the key to unlocking an AI’s full potential, moving it from a generic tool to a specialised, effective partner.

    The core difference is one of perspective and application:

    • The Forte/Miyamoto Master Prompt method is an organisation-centric or user-centric framework. It focuses on creating a persistent, holistic “second brain” for a person or an entire organisation that the AI can access at all times.

    • The Goal-Directed Context Management (GDCM) framework is a project-centric or task-centric framework. It provides a formal methodology for managing the AI’s context dynamically throughout the lifecycle of a specific project or programme, ensuring the AI’s focus aligns with the current stage of work.

    Here is a more detailed comparison of how their concepts relate, especially concerning the levels of context and the master prompt.

    Relating the Levels of Context

    The five levels of context described by Tiago Forte can be mapped directly onto the concepts within the GDCM framework.

    Level 1: Training Data
    • Forte’s description: The AI’s vast, pre-existing knowledge from its training on billions of documents.
    • GDCM equivalent and relation: This is the AI’s inherent “learned knowledge,” which both frameworks treat as a universal baseline that needs to be shaped and directed.

    Level 2: System Prompt
    • Forte’s description: The hidden, uneditable instructions from the AI’s creators that shape its core behaviour. Users can leverage “trigger words” to activate deeper thinking.
    • GDCM equivalent and relation: The GDCM framework does not explicitly name this layer but acknowledges that an AI’s behaviour is configured through instructions. The idea of using specific instructions to elicit desired behaviour is fundamental to GDCM’s concept of creating “AI artefacts” like bootstrap prompts and stage packs.

    Level 3: User Preferences
    • Forte’s description: Persistent settings (like ChatGPT’s ‘Custom Instructions’ or Claude’s ‘Personal Preferences’) that define a user’s preferred communication style (e.g., tone, format) across all chats.
    • GDCM equivalent and relation: This aligns with the GDCM concept of applying “a standard behaviour from the AI across all chats” using custom instructions. It represents a basic, global layer of context management.

    Level 4: Project Knowledge
    • Forte’s description: Uploading project-specific files into a dedicated workspace (like a Claude ‘Project’) so the AI acts like a “fully onboarded team member” for any conversation within that project.
    • GDCM equivalent and relation: This is directly analogous to the GDCM concepts of project instructions and project files. The GDCM framework formalises this by placing these files within a “project scaffolding,” which holds all the information the AI needs for its role. Both approaches use this level to silo context for specific workstreams.

    Level 5: The Actual Prompt
    • Forte’s description: The immediate, in-the-moment instruction you give the AI in the chat box. Forte notes that this prompt becomes “vastly more powerful” when the other four layers of context are in place.
    • GDCM equivalent and relation: This is the same as a “prompt” in the GDCM framework, which is used for controlling the AI during individual chats. The GDCM author agrees that a “precise prompt with the right background information gets us something much more useful”.

    Relating the Master Prompt to GDCM Artefacts

    The Master Prompt as advocated by Forte and Miyamoto is a comprehensive, single-document compendium containing all critical information about an organisation: its people, products, culture, processes, and more.

    Within the GDCM framework, the Master Prompt serves as a perfect real-world example of the persistent context within the “contextual scaffolding”. It’s the enduring reference material that anchors the AI’s understanding of the organisation across all tasks.

    However, the GDCM framework further organises and refines this idea by dividing the context into more granular components for managing large, complex projects:

    Persistent Context: This is the GDCM equivalent of the Master Prompt. It holds the enduring, high-level project information like objectives, scope, quality standards, and references to core documents like HR manuals. An organisation’s Master Prompt would be a primary source for this persistent context.

    Current Context: This is a key concept in GDCM that Forte and Miyamoto’s model doesn’t explicitly separate. The GDCM framework emphasises the need to load only the stage-specific working information into the AI’s active memory to prevent “distraction or drift”. For example, during a project’s “Design & Planning” stage, the current context would include the content outline and latest draft, but exclude exploratory materials from the earlier “Initiation” stage. This prevents the AI from getting confused by outdated or irrelevant information.

    AI Artefacts: The GDCM framework introduces a formal set of documents, or “AI artefacts,” designed to manage the AI’s role throughout a project’s lifecycle. The Master Prompt could be seen as a foundational “AI Startup Pack”. The GDCM model then adds stage-specific “stage packs” that dynamically reconfigure the AI for different phases of work, ensuring it always has the right information at the right time.

    Conclusion: Complementary Frameworks

    The Forte/Miyamoto Master Prompt method and the Goal-Directed Context Management framework are not competing ideas; they are complementary.

    • The Master Prompt provides the “what”: a rich, persistent body of organisational knowledge that gives the AI a deep, foundational understanding. It is excellent for ongoing, daily use and for democratising execution across a business.

    • The GDCM framework provides the “how”: a structured methodology for deploying that knowledge strategically over the lifecycle of a complex, goal-directed project. It introduces the discipline of dynamically managing the AI’s focus with “persistent” versus “current” context, ensuring the AI remains an effective team member from project initiation to closure.

    In practice, one could create a Master Prompt using the Forte/Miyamoto method and then use the principles of GDCM to extract and structure subsets of that information into “stage packs” and “current context” to manage the AI’s collaboration on a specific, multi-stage project.

    Application to a programme-like endeavour

    As presented, the Forte/Miyamoto Master Prompt is being used within a well-established enterprise. Consider instead an early-stage startup enterprise that is working to turn a business idea into a prototype business model. The Master Prompt method needs to be adapted for this scenario, which has programme-like (rather than steady-state, business-as-usual) characteristics. The Goal-Directed Context Management framework can guide that adaptation. Suggest how to combine the two approaches for this purpose.

    Of course. This is an excellent scenario for combining the strengths of these two frameworks. The Forte/Miyamoto Master Prompt method provides a superb structure for capturing an organisation’s identity, while the Goal-Directed Context Management (GDCM) framework offers the perfect methodology for managing an evolving, programme-like endeavour such as building a startup.

    For an early-stage startup, the “Master Prompt” isn’t a static document describing a well-established business; it’s a dynamic, living document that captures the current best understanding of the business as it evolves. The GDCM framework provides the structure for managing this evolution phase by phase.

    Here is a suggested approach for combining the two frameworks for this purpose.

    1. Create a “Proto Master Prompt” Using the Miyamoto Template

    Start by creating the initial version of your Master Prompt in a Google Doc, following the structure Hayden Miyamoto advocates. However, treat it not as a final record but as a “Proto Master Prompt” — a collection of hypotheses and initial decisions. Use the AI as a “thought partner” and “business coach” to help you flesh out each section, just as Tiago Forte did in the demonstration.

    The initial content will be aspirational and subject to change:

    Personal Info: Founder’s strengths, weaknesses, and goals for using AI. This grounds the project in the founder’s vision.

    Company Info (Hypothesised): Aspirational launch date, target markets, and a deeply considered Ideal Customer Profile (ICP). Getting specific on the ICP is crucial for differentiation and profitability, even if it evolves over time.

    Products/Services (Proposed): Descriptions of the initial product idea, its features, and the benefits it’s intended to deliver.

    Market Information (Initial Analysis): Hypotheses about differentiators and a first-pass analysis of potential competitors.

    Culture (Foundational): The mission, core values, and the “Big Hairy Audacious Goal” (BHAG) that will guide the startup’s development.

    This “Proto Master Prompt” becomes the foundational document—the persistent context in GDCM terminology. It represents the startup’s single source of truth at any given moment.

    2. Adopt a Programme Approach to Business Model Development

    Because turning an idea into a business model involves high uncertainty, it is best managed as a programme, not a single project. The GDCM framework outlines a typical programme lifecycle for a knowledge-based product (like a business model) which can be adapted for your startup:

    1. Concept: Frame the initial idea.

    2. Feasibility: Evaluate the idea’s viability and select a preferred approach.

    3. Definition: Specify the requirements for a Minimum Viable Product (MVP) or prototype.

    4. Development: Create the prototype.

    5. Acceptance: Validate the prototype with the target market.

    6. Operation & Maintenance: Launch the business model and begin iterating.

    Each of these phases will be treated as a distinct project.

    3. Use GDCM Artefacts to Manage the AI Through Each Phase

    For each phase of the programme, you will use “AI artefacts” as described in the GDCM framework to manage the AI’s focus. This is where the two models integrate powerfully.

    Phase Example: Feasibility

    Let’s say you are in the Feasibility phase. Your goal is to validate your ICP and initial product idea.

    Project Scaffolding: Create a dedicated workspace for this phase (like a Claude ‘Project’ or a custom GPT).

    Persistent Context: The “Proto Master Prompt” is your primary reference file. It ensures the AI always has the overarching vision in mind.

    Current Context / Project Knowledge: This is where you load the stage-specific information into the AI’s active context to prevent it from getting distracted by irrelevant details. For the Feasibility phase, this would include:

        ◦ Market research data.

        ◦ Summaries of interviews with potential customers.

        ◦ Analysis of competitor products you’ve reviewed.

        ◦ The “stage pack” for this phase.

    AI Stage Pack: This is a set of instructions, specific to the Feasibility phase, telling the AI its role. The prompt might be: “You are a market validation expert for an early-stage startup. Your goal is to rigorously test the assumptions in our Proto Master Prompt against the market data provided in the current context. Be critical and conservative in your analysis. For each assumption about our ICP and product, state your level of confidence and identify the weakest points in our logic.”
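    As a sketch of how this phase setup could be assembled, the fragment below gathers the stage pack, the persistent context, and the current (stage-specific) material into one payload. The file names and helper are hypothetical; in Claude or ChatGPT the same effect is achieved by uploading the files to the project and pasting the stage pack into the project instructions.

    ```python
    # Sketch only: assemble the Feasibility-phase context from GDCM components.
    # File names and layout are hypothetical; the point is that the stage pack,
    # persistent context, and current context are loaded together, and nothing else.

    from pathlib import Path

    STAGE_PACK = "stage_pack_feasibility.md"         # the AI's role and instructions for this phase
    PERSISTENT_CONTEXT = ["proto_master_prompt.md"]  # the enduring reference
    CURRENT_CONTEXT = [                              # stage-specific working material only
        "market_research.md",
        "customer_interview_summaries.md",
        "competitor_analysis.md",
    ]

    def load(names: list[str], base: Path = Path("contextual_scaffolding")) -> str:
        """Concatenate the named files from the contextual scaffolding."""
        return "\n\n".join((base / name).read_text() for name in names)

    def feasibility_context() -> str:
        """Everything the AI needs for the Feasibility phase, and nothing more."""
        return "\n\n".join([
            load([STAGE_PACK]),
            load(PERSISTENT_CONTEXT),
            load(CURRENT_CONTEXT),
        ])
    ```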

    4. Iterate on the Master Prompt After Each Phase

    The output of each programme phase will be new knowledge. For example, the Feasibility phase might reveal that your hypothesised ICP was wrong, but a different, more promising segment emerged from your research.

    Your final step in each phase is to update the “Proto Master Prompt” with these validated learnings. This is the critical feedback loop:

    • The ICP section is rewritten to reflect the new target audience.

    • The “Products/Services” section is updated to describe features that resonate with this new audience.

    • The “Market Information” section is refined with a clearer understanding of your true competitors and differentiators.

    By doing this, the Master Prompt evolves from a set of hypotheses into a strategic plan grounded in real-world evidence. The AI, having access to this ever-improving persistent context, becomes an increasingly powerful and knowledgeable strategic partner, ready to help you tackle the next phase of development with a deeper understanding of your business.

    This combined approach allows a startup to leverage the Master Prompt’s power for deep contextual understanding while using the GDCM’s structured, phased methodology to navigate the uncertainty inherent in building a new business.

  • The art of goal-directed context management

    with acknowledgement to Erling S. Andersen, whose book Goal Directed Project Management inspired me in 1988

    Transparency label: human only


    How do we control an AI?

    To use an AI effectively, we have to configure its behaviour to suit the purpose of our work. We do this by giving it information in the form of instructions, guidance, questions, and reference material. We put this information into the AI’s context to govern how it behaves. See What is context? for a more detailed explanation; it is important to understand context well.

    To apply a standard behaviour from the AI across all chats, we use custom instructions.

    During individual chats, we control the AI through prompts, the canvas, and file uploads (prompts can include instructions, guidance and questions).

    To control the AI’s behaviour for a group of chats organised as a project, we do it through project instructions.

    To make information available to all chats in a project, we upload project files.

    Finally, if we need the services of an AI specialist, we create a form of agent (called a GPT). We configure the agent with bespoke instructions to make its behaviour appropriate to its specialist role, and we upload reference files to give it a knowledge base. Our agent can then be called in as necessary to provide specialist advice through chats.

    This method of controlling the AI by putting information into its context is similar to the way we control a computer by giving it a program to run. But there is an important difference: a computer executes the instructions in its program precisely; an AI (large language model) interprets the information we give it. We should aim, therefore, to give the AI all the information it needs to avoid losing focus. A vague prompt gets a vague answer. A precise prompt with the right background information gets us something much more useful.
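    To show the layering in a more concrete form, here is a minimal sketch that expresses the controls described above as the message structure most chat APIs accept. It is illustrative only; the instruction text and file names are placeholders, and the field names mirror common practice rather than any particular product.

    ```python
    # Illustrative sketch: the layers of control described above, expressed as a
    # message list of the kind a typical chat API accepts. Content is placeholder.

    CUSTOM_INSTRUCTIONS = "Answer concisely, in British English, and state your level of certainty."
    PROJECT_INSTRUCTIONS = "You are supporting the content project; follow the style guide."

    def build_messages(prompt: str, file_contents: list[str]) -> list[dict]:
        """Layer standing instructions, project material, and the prompt into one request."""
        system = "\n\n".join([CUSTOM_INSTRUCTIONS, PROJECT_INSTRUCTIONS, *file_contents])
        return [
            {"role": "system", "content": system},  # governs behaviour for the chat
            {"role": "user", "content": prompt},    # the specific, precise request
        ]

    messages = build_messages(
        prompt="Summarise the attached meeting notes as a numbered action list.",
        file_contents=["(contents of style_guide.md)", "(contents of glossary.md)"],
    )
    ```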

    There are similarities with the way we manage a human. The human has a store of learned knowledge to call upon, can be given instructions on how to behave during a task, can use reference material, can converse with others, and can co-edit documents with them. Like an AI, the human interprets the information obtained by these means.

    Managing scale & complexity

    As the scale or complexity of a task grows, it becomes increasingly difficult to coordinate the work of humans to achieve a common goal. To overcome this problem, we use project and programme methodologies to structure the work; these provide a form of scaffolding to let the work be carried out safely and effectively. The scaffolding is made strong and stable through a plan which – amongst other things – divides the work into manageable packages, defines who or what will do the work, when and how it will be done, and what work outputs are required.

    Work whose purpose is well defined and carries an acceptable risk of failure can be organised as a project. The scaffolding for this takes the form of the project plan and related documentation, including external standards. Members of the project team are given roles, which can vary during the project, and assigned tasks. The detailed information they need to carry out their work successfully is held in the project scaffolding. When the work has greater scope and/or complexity, it can be organised as a programme. In this case the scaffolding is provided by the programme management plan and related documentation.

    AI as a team member

    When AI is used by a member of a project team, its characteristics make it more like an additional member of the team than a software tool. The AI, like the human, must be given roles and assigned tasks. It, too, must obtain detailed information from the project scaffolding. Unlike a human, however, the AI cannot yet go and fetch the information it needs; the human user must provide it. The human does so by putting relevant information into the AI’s context.

    Information required for input to the AI’s context over the course of the project is held in a part of the project scaffolding called the contextual scaffolding. Although closely integrated with other parts of the project scaffolding, the contextual scaffolding has a distinct purpose: to hold the information – AI artefacts – needed to manage the behaviour and effectiveness of the AI over the course of the project. The contextual scaffolding is the responsibility of the project’s AI manager.

    AI artefacts

    The nature, required content, and production timetable for AI artefacts are governed by and tied to the project plan. Artefacts in the contextual scaffolding include the following (a sketch of one possible layout follows the list):

    • AI policy
    • AI management plan
    • an AI startup pack to configure the AI at the outset of the project, when its objective will be to outline the contextual scaffolding and help to produce an initial set of AI artefacts;
    • stage packs tailored to the varying uses intended for the AI over the course of the project (as with other project artefacts, these begin life in outline form and gain content as the project progresses);
    • GPT packs to create the specialist AI agents that are needed.
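    One way to picture the contextual scaffolding is as a simple folder structure kept alongside the rest of the project documentation. The layout below is a hypothetical sketch using the artefact names from the list above; it is not a prescription.

    ```python
    # Hypothetical sketch: the contextual scaffolding as a folder of placeholders
    # that the AI manager fills in as the project plan matures.

    from pathlib import Path

    CONTEXTUAL_SCAFFOLDING = {
        "ai_policy.md": "Rules governing AI use on the project.",
        "ai_management_plan.md": "How the AI will be managed over the project lifecycle.",
        "ai_startup_pack.md": "Configures the AI at the outset of the project.",
        "stage_packs/": "One pack per stage, outlined early and completed as the project progresses.",
        "gpt_packs/": "Definitions for the specialist AI agents (GPTs) that are needed.",
    }

    def outline_scaffolding(root: Path = Path("contextual_scaffolding")) -> None:
        """Create empty placeholders for each artefact."""
        for name, purpose in CONTEXTUAL_SCAFFOLDING.items():
            path = root / name
            if name.endswith("/"):
                path.mkdir(parents=True, exist_ok=True)
            else:
                path.parent.mkdir(parents=True, exist_ok=True)
                path.write_text(f"# {name}\n\nPurpose: {purpose}\n")

    outline_scaffolding()
    ```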

    The art of goal-directed context management

    The art of goal-directed context management is to align the AI’s evolving context with the purpose and structure of the project or programme it is associated with, ensuring the AI always has the right information, at the right level of detail, to contribute effectively to the current stage of work.

  • An emerging discipline?

    Transparency label: AI-assisted

    In a recent post, I observed that LLMs are coming to be seen as like computer operating systems, with prompts being the new application programs and context the new user interface.

    Our precision content prompt pack is a good example of that thinking. The pack contains a set of prompts designed to take an unstructured document (notes of a meeting, perhaps) and apply structure (precision content) to the information it contains. We do this because AIs perform better if we give them structured information.

    We apply the prompts in sequence, checking the AI’s output at each step in the sequence. In effect, it is a program that we run on the AI. 

    Contract-first prompting takes the idea further. It formalises the interaction between human and AI into a negotiated agreement about purpose, scope, constraints, and deliverables before any output is generated. This ensures that both sides – human and AI – share the same understanding of the work to be done. The agreement also contains compliance mechanisms (eg summarisation, clarifying loops, and self-testing) for quality control.
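    A sketch of what such an agreement might look like if captured as data, assuming a simple structure of my own devising rather than any published specification:

    ```python
    # Illustrative sketch only: one possible shape for a contract-first prompting
    # agreement. Field names and defaults are assumptions, not a specification.

    from dataclasses import dataclass, field

    @dataclass
    class PromptContract:
        purpose: str                 # why the work is being done
        scope: str                   # what is in and out of bounds
        constraints: list[str]       # tone, length, sources, exclusions
        deliverables: list[str]      # what the AI must hand back
        compliance: list[str] = field(default_factory=lambda: [
            "Summarise the agreement back before starting",
            "Ask clarifying questions until the scope is unambiguous",
            "Self-test the output against the deliverables before presenting it",
        ])

    contract = PromptContract(
        purpose="Turn meeting notes into a structured briefing",
        scope="Use only the uploaded notes; add no outside information",
        constraints=["British English", "No more than two pages"],
        deliverables=["Briefing document", "List of open questions"],
    )
    ```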

    These ideas reframe the interaction between human and AI: not as simply issuing prompts, but as engineering the conditions under which the AI will perform. Not just prompting, more like contextual systems engineering.

    Transparency label justification: This diary post was drafted by Alec, with ChatGPT used to suggest edits, refine wording, and test the logic of specific formulations. Alec initiated the framing, decided the sequence of ideas, and approved the final structure and terminology.

  • Precision Content Prompt Pack

    Precision Content Prompt Pack

    Version: 01, 1 August 2025
    Authors: Alec Fearon and ChatGPT-4o.

    Transparency label: AI-assisted


    Purpose

    This is a six-step process for converting a source document into a structured format that is easy for an LLM to understand. The format is based on the Darwin Information Typing Architecture (DITA) and ideas developed by Precision Content. It has the following content types:

    • Reference (what something is)
    • Concept (how to think about it)
    • Principle (why it works)
    • Process (how it unfolds)
    • Task (how to do it)

    The six steps are carried out in sequence, one at a time. They clean, segment, type, clarify, and re-package the original source material. There is human review at the end of each step.
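    The pack is designed to be run interactively in a chat, as described in the next section. Purely to illustrate the idea of a program run on the AI, here is a hypothetical sketch of the same sequence driven from code, with a pause for human review after each step; send_to_llm is a stand-in for whichever chat interface or API you use.

    ```python
    # Hypothetical sketch: the step sequence as a loop with a human review pause.
    # send_to_llm() is a placeholder for your chat interface or API call.
    # The prompt texts are abbreviated; the full prompts are given step by step below.

    STEP_PROMPTS = [
        ("Step 0 - Pre-Processing", "Clean the following document for structured analysis..."),
        ("Step 1 - Segmenting", "Break this cleaned document into numbered segments..."),
        ("Step 2 - Typing", "Assign each segment an information type with a short justification..."),
        ("Step 3 - Rewriting", "Rewrite each segment according to its type..."),
        ("Step 4 - Grouping", "Sort the rewritten segments under headings by type..."),
        ("Step 5 - Output Bundle", "Format the output with metadata at the top..."),
        ("Step 6 - Glossary", "Define key terms using only information in the document..."),
    ]

    def send_to_llm(prompt: str, material: str) -> str:
        """Placeholder: replace with a real call to your model of choice."""
        return material  # echoes the input so the loop structure can be exercised

    def run_pack(source_text: str) -> str:
        material = source_text
        for name, prompt in STEP_PROMPTS:
            material = send_to_llm(prompt, material)
            print(f"--- {name} ---\n{material}\n")
            input("Review and edit the output, then press Enter to continue...")
        return material
    ```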

    To use

    First, open a chat in ChatGPT and upload this file into the chat; it is in Markdown (.md) because that is easy for LLMs to read.
    Note: you can drag & drop the file into the chat or use the upload button.

    Tell ChatGPT: “Please create a new canvas from this markdown file so we can work together using the precision content prompt pack.” ChatGPT will:

    • Read the file
    • Create a canvas
    • Use each ## heading to split the file into separate cards
    • Preserve formatting and headings

    Then upload the source file into the chat. Tell ChatGPT: “Please convert the uploaded file [filename] into precision content using the steps defined in the canvas Anapoly AI Labs Precision Content Prompt Pack. Begin Step 0.”

    ChatGPT will extract the content of the file and clean it as per Step 0 – Pre-Processing. It will paste the cleaned material into the “📄 Source Document” card for you to review. That sets you up to proceed with the following steps. The output of each step is put into the “Work Area – Output by Step” card in the canvas. Edit the output of each step as necessary before proceeding to the next step.

    The final output is put into the card “Review Notes / Final Output / Glossary”. You can tell ChatGPT to export it from there as a file for download. If it is to be used as reference material, filetype .md is recommended.


    Step 0 – Pre-Processing

    Purpose: Clean the raw input before analysis.

    Prompt:

    Clean the following document for structured analysis. Remove:

    • Repeated headers/footers
    • Navigation links, timestamps, metadata
    • Formatting glitches (e.g. broken paragraphs)

    Retain all meaningful content exactly as written. Do not summarise, interpret, or reword.


    Step 1 – Segmenting the Document

    Purpose: Divide into discrete, meaningful segments.

    Prompt:

    Break this cleaned document into a numbered list of coherent segments. Each segment should reflect a single topic, paragraph, or unit of meaning.

    Format:
    [1] [text]
    [2] [text]


    Step 2 – Typing the Segments

    Purpose: Label each segment by information type.

    Types:

    • Reference – what something is
    • Concept – how to think about it
    • Principle – why it works
    • Process – how it unfolds
    • Task – how to do it

    Prompt:

    For each segment, assign the most relevant type. Include a short justification.

    Format:
    [1] Type: [type] – [reason]


    Step 3 – Rewriting for Precision

    Purpose: Convert to structured, plain-language modules.

    Prompt:

    Rewrite each segment according to its type:

    • Use short declarative sentences
    • Bullet points for steps or lists
    • Avoid vagueness or repetition

    Step 4 – Grouping by Type

    Purpose: Reorganise output by information type.

    Prompt:

    Sort all rewritten segments under clear headings:

    • 🗂 Reference
    • 🧠 Concept
    • ⚖️ Principle
    • 🔄 Process
    • 🔧 Task

    Preserve segment numbers.


    Step 5 – Structured Output Bundle

    Purpose: Package the content for reuse.

    Prompt:

    Format output with markdown or minimal HTML.
    Include metadata at the top:

    Title: [your title]
    Source: [file name or link]
    Date: [today's date]
    Content type: Precision Content

    Step 6 – Glossary Generation

    Purpose: Extract and define key terms.

    Prompt:

    Identify important terms in the text and define each using only information in the document.

    Format:
    Term: [definition]


    📄 Source Document

    [Paste the cleaned or raw source text here after Step 0.]


    Work Area – Output by Step

    Use this section to draft segmented content, types, rewrites, and grouped outputs.


    Review Notes / Final Output / Glossary

    Use this area for human commentary, final outputs, or glossary results.