  • First lab note published

    We’ve just posted our first lab note.

    It documents an internal experiment to refine the custom instructions we use with ChatGPT – what we expect from it, how it should respond, and how to keep it useful across different tasks. The aim is to define a more consistent persona for the AI assistant, one that can adapt to the different types of assistance we ask of it.

    It’s a good example of how we’re using the Labs: not to explain AI, but to find out what it’s actually good for.

    Read the lab note → Custom Instructions for ChatGPT

  • Lab Note: Custom Instructions for ChatGPT

    Purpose of Experiment

    To improve the clarity, coverage, and strategic value of the custom instructions used to guide ChatGPT, so that it supports Anapoly AI Labs in a consistent, credible, and context-sensitive manner. This involved shaping an adaptable persona for the assistant: a set of behavioural expectations that define how it should think, respond, and collaborate in different contexts.

    Author and Date

    Alec Fearon, 17 June 2025

    Participants

    Alec Fearon (experiment lead), ChatGPT (Document Collaborator mode)

    Lab Configuration and Setup

    This was a document-focused lab. The session took place entirely within ChatGPT’s Document Workbench, using file uploads and canvas tools. Key source files included:

    • My current custom instructions (baseline input)
    • An alternative set of instructions (for contrast) by Matthew, an expert AI user
    • The Anapoly AI Labs project instructions (evolving draft)
    • Recent Anapoly Online diary posts
    • Email exchanges amongst Anapoly team members

    ChatGPT acted in what we later formalised as Document Collaborator mode: assisting with drafting, editing, and structural critique in line with the evolving instruction set. I provided direction in natural language; the AI edited and reorganised accordingly.

    Preamble

    This note includes a short glossary at the end to explain terms that may be unfamiliar to non-technical readers.

    Procedure

    1. Reviewed and critiqued my current instructions.
    2. Analysed strengths and gaps in Matthew’s approach.
    3. Combined the best of both into a new, structured format (see the sketch after this list).
    4. Iteratively improved wording, structure, and tone.
    5. Added meta-guidance, clarified interaction modes, and ensured adaptability across different settings.
    6. Produced markdown, plain text, and PDF versions for upload to the project files.
    7. Created a lighter version suitable for general ChatGPT use.
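
    For illustration only (the headings below are indicative, not the exact wording adopted), the combined structured format grouped its guidance under sections along these lines:

    • Role and mode: which of the assistant's roles applies, and how to switch between them
    • Tone and voice: when to draft in my voice and when to write for an external audience
    • Meta-guidance: what to do when a prompt is ambiguous or under-specified
    • File handling: how to interpret uploaded text, images, PDFs, and spreadsheets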

    Findings

    Matthew’s structure was modular and well-scoped, but lacked tone guidance and broader role adaptability.

    My original was strong on tone and intent but less clear on scope and edge-case handling.

    Combining both required trimming redundancy and strengthening interaction rules.

    The distinction between projects, documents, and informal chats is useful and worth making explicit.

    File handling (multimodal interpretation) and ambiguity management were under-specified in the earlier instructions.

    Discussion of Findings

    The lab assumed that small adjustments to instruction style could yield meaningful improvements in assistant behaviour, and the resulting draft reflects that working hypothesis.

    Defining five roles for ChatGPT (Thinking Partner, Document Collaborator, Research Assistant, Use-Case Designer, Multimodal Interpreter) provides a useful mental model for both human and AI. The role can be specified at the beginning of a chat, and changed during the chat as necessary. 
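
    For example, a session might open with a line such as "Act as Document Collaborator for this chat; we are revising the lab note", and later shift with "Switch to Thinking Partner mode: I want to test the argument before we edit further". These prompts are illustrative; they are not the exact wording used in the lab.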

    Meta-guidance (what to do when a prompt is ambiguous or under-specified) should be especially valuable.
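
    As a purely illustrative example of such a clause (not the wording in our instructions): "If a prompt is ambiguous or under-specified, ask up to three clarifying questions, ranked by importance, before drafting a response."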

    Clarifying when ChatGPT should adopt my personal tone versus when it should adjust to suit an external audience turned out to be important. That distinction will help the assistant match its style to the task – whether drafting in my voice or producing something outward-facing and more formal.

    Including the Socratic method and ranked questions makes the assistant a sharper tool for thought, not just a better rewriter.

    Conclusions

    We now have a robust set of project instructions aligned with Anapoly’s style, goals, and workflow.

    The same principles can be adapted to other roles or collaborators as Anapoly AI Labs grows.

    Future labs could focus on refining persona prompts, exploring AI transparency, or adapting the instructions for group sessions.

    Recommendations

    Use the new instructions consistently in all project spaces.

    Encourage collaborators to create variants suited to their own use cases.

    Monitor edge cases where the assistant behaves inconsistently; these can inform future labs.

    Continue exploring how to balance tone, clarity, and adaptability when writing for different audiences.

    Tags: lab-notes, instructions, ai-tools, prompt-engineering

    Glossary

    Edge case: A situation that occurs at an extreme (such as rare inputs or unusual usage patterns) where a system might fail or behave unpredictably.

    Meta-guidance: Instructions that tell the assistant how to handle ambiguity or uncertainty in the user’s prompt.

    Multimodal interpretation: The ability to interpret and work with different types of input (e.g. text, images, PDFs, spreadsheets).

    Markdown: A lightweight text formatting language used to create structured documents with headings, bullet points, and emphasis.

    Prompt: A user’s input or question to an AI system; often a single sentence or instruction.

    Socratic method: A questioning technique used to clarify thinking by challenging assumptions and exploring implications.

    Document Collaborator mode: A role adopted by the AI to help with drafting, editing, and improving written content through structured feedback.

    Document Workbench: The interface in ChatGPT where documents are edited interactively.

    Canvas: A specific document within the Document Workbench where collaborative editing takes place.