Category: lab note

A report of activity in the labs

  • Lab Note: building a three-layer instruction set for Anapoly’s work in ChatGPT Projects

    Transparency label: AI-assisted
    This post was developed collaboratively. Alec set the purpose and structure; ChatGPT drafted, critiqued, and refined the narrative under strict governance. Alec reviewed and accepted each stage.


    The three-layer instruction set took shape through a sequence of decisions, clarifications, and course‑corrections within a ChatGPT Project. What follows is a reflective account of how it emerged. 

    1. Recognising a structural gap

    We began with scattered ingredients: a mission statement, the value proposition, and the Lab Framework. Each document covered something important, but none told the AI what world it was working in. The Three‑Layer Model of Context made the gap obvious. We had talked for months about business context as a top layer, but there was no single, authoritative statement that the AI could rely on.

    The realisation was that, without a coherent top layer, the AI would continue to drift between voices, assumptions, and roles. The need for a stable business‑context layer became unavoidable.

    2. Using the extended mission document to surface the essentials

    To understand what the top layer must contain, we drafted an extended mission document. Writing it forced us to specify Anapoly’s identity, boundaries, ethos, and tone in operational rather than literary terms.

    Amongst other things, we clarified:

    • that Anapoly is exploratory, not consultative;
    • that we work only with synthetic data;
    • that our tone is plain and grounded;
    • that we are not selling AI or performing expert evaluation;
    • that transparency is a defining value.

    The exercise exposed the core elements the AI would need if it were to behave as a consistent Anapoly collaborator. Those insights quickly became the skeleton of the business‑context layer.

    3. Asking the decisive question: What else does the AI need?

    The next turning point came when Alec asked: given the mission, the value proposition, and the Lab Framework, what else does the AI still lack? The answer was longer than expected. Beyond the mission and methods, the AI needed:

    • explicit organisational identity;
    • a clear audience model;
    • non‑negotiable values;
    • boundaries on what Anapoly does not do;
    • tone and communication standards;
    • risk posture;
    • definitions of quality;
    • strategic intent.

    This list turned a loose idea into a concrete specification.

    4. Consolidating into one canonical business‑context block

    At this point, we faced a structural choice: leave the business context scattered across multiple documents, or merge them into a single canonical block. Alec chose consolidation. That removed ambiguity and ensured that every project would begin with the same fixed identity, values, and constraints. Once the consolidated block was drafted, the top layer of the instruction set effectively snapped into place.

    5. Rebuilding the behavioural governance layer from first principles

    Anapoly’s existing governance notes had grown organically and were no longer fully aligned with the clearer business context. Alec asked for a complete rewrite. We replaced fragmented instructions with a behavioural layer defining:

    • tone (plain, dry, concise);
    • stance (critical, truth‑first, no flattery);
    • interaction rules (ask when unclear, challenge lazy assumptions);
    • risk handling (flag operational, ethical, or data‑protection issues);
    • constraints (no hype, no verbosity, no softening of justified critique).

    The most important decision was to adopt Strong Governance as the default. The assistant’s behaviour is now predictable, sceptical, and aligned with Anapoly’s ethos.

    6. Adding a deliberate escape clause

    Strong governance is effective but inflexible. To avoid it becoming a straitjacket, we added a controlled override mechanism: a mandatory keyword (OVERRIDE:) followed by natural‑language instructions. The override lasts exactly one turn.
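
    To make the one-turn mechanism concrete, here is a minimal sketch in Python. It is illustrative only: ChatGPT applies the override directly from the instruction text, and the function name, message format, and governance summary below are our own assumptions.

    ```python
    # Illustrative model of the one-turn override. Nothing is stored between
    # calls, so strong governance resumes automatically on the next turn.

    GOVERNANCE = "Strong Governance: plain tone, critical stance, no flattery."

    def instructions_for_turn(message: str) -> tuple[str, str]:
        """Return (governance, task) for a single turn.

        A message beginning with the OVERRIDE: keyword replaces the governance
        layer with the natural-language instructions that follow it, for this
        turn only.
        """
        if message.startswith("OVERRIDE:"):
            first_line, _, task = message.partition("\n")
            return first_line.removeprefix("OVERRIDE:").strip(), task.strip()
        return GOVERNANCE, message

    # One relaxed turn, then normal service resumes.
    print(instructions_for_turn("OVERRIDE: allow a warmer tone.\nDraft a welcome note."))
    print(instructions_for_turn("Critique this draft."))
    ```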

    7. Sharpening the task‑prompt layer

    With the first two layers established, the task‑prompt layer became straightforward. It defines how immediate instructions are handled:

    • follow the task prompt as written;
    • interpret it inside the constraints of the business context and governance layer;
    • ask for clarification when needed;
    • use project files only when explicitly referenced.

    This aligns directly with the Micro‑Enterprise Setup Blueprint, which treats task prompts as the active layer atop stable configuration.

    8. Assembling the final three‑layer instruction set

    Once the components were complete, we assembled them in order:

    1. Business Context — Anapoly’s identity, values, tone, boundaries, risk posture, and strategic intent.
    2. Behavioural Governance Layer — strict rules for tone, reasoning, interaction, critique, and risk.
    3. Task‑Prompt Layer — guidance for interpreting immediate instructions.

    We added a short explanatory note to clarify how the layers fit together and how overrides work.
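
    As a rough model of the final shape, the sketch below assembles placeholder layers in Python. The layer texts are stand-ins; the real content is the set of documents described above.

    ```python
    # Placeholder texts; the real layers are the consolidated Anapoly documents.
    BUSINESS_CONTEXT = "Layer 1: identity, values, tone, boundaries, risk posture."
    GOVERNANCE_LAYER = "Layer 2: rules for tone, reasoning, interaction, critique, risk."
    TASK_PROMPT_LAYER = "Layer 3: how to interpret immediate instructions."
    EXPLANATORY_NOTE = (
        "Layers apply top-down: each layer operates inside the constraints of "
        "the ones above it. OVERRIDE: suspends strong governance for one turn."
    )

    def assemble_instruction_set() -> str:
        """Join the layers, top first, into the single block given to a Project."""
        return "\n\n".join(
            [BUSINESS_CONTEXT, GOVERNANCE_LAYER, TASK_PROMPT_LAYER, EXPLANATORY_NOTE]
        )

    print(assemble_instruction_set())
    ```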

    The final result ensures the AI behaves like an informed, grounded collaborator who understands Anapoly’s mission, values, and constraints.


    The outcome

    This process created a durable operating profile for all future Anapoly projects. The instruction set now:

    • anchors the model in Anapoly’s identity,
    • constrains drift through strict governance,
    • ensures tasks are interpreted consistently,
    • and provides a clean override path when needed.

    We now have a dependable foundation to build on — and a clear method for adapting it when Anapoly evolves.

  • Drafting Anapoly’s first Lab Setup with ChatGPT

    Transparency label: AI‑heavy (ChatGPT model o3 produced primary content; Alec Fearon curated and lightly edited)


    Purpose of experiment

    Use ChatGPT as a thinking partner to create a first draft of an acclimatisation lab setup for Anapoly AI Labs and to compare output quality between two models (o3 and 4o).

    Author and date

    Alec Fearon, 24 June 2025

    Participants

    • Alec Fearon – experiment lead
    • ChatGPT model o3 – reasoning model
    • ChatGPT model 4o – comparison model

    Lab configuration and setup

    Interaction took place entirely in the ChatGPT Document Workbench. The first prompt was duplicated into two branches, each tied to a specific model. All files needed for reference (conceptual framework, lab note structure) were pre‑uploaded in the project space.

    Preamble

    Alec wanted a concise, critical first draft to stimulate team discussion. The exercise also served as a live test of whether o3’s “reasoning” advantage produced materially better drafts than the 4o model.

  • Lab Note: modelling our own use of AI tools

    Transparency label: AI-heavy


    Purpose of experiment

    To identify and configure an AI toolkit for Anapoly AI Labs that credibly models the use of general-purpose AI tools in a small consultancy setting.

    Author and date

    Alec Fearon, 24 June 2025

    Participants

    Alec Fearon, with Ray Holland and Dennis Silverwood in email consultation
    ChatGPT-4o

    Lab configuration and setup

    This setup models a real-world micro-consultancy with three collaborators. It assumes limited budget, modest technical support, and a practical orientation. We aim to reflect the toolkit choices we might recommend to participants in Anapoly AI Labs sessions.

    Preamble

    If Anapoly AI Labs is to be a credible venture, we believe it must model the behaviour it explores. That means our own internal work should demonstrate how small teams or sole traders might use AI tools in everyday tasks – writing, research, analysis, and communication – not just talk about it. This lab note outlines our proposed working configuration.

    Procedure

    We identified common functions we ourselves perform (and expect others will want to model), for example:

    • Writing, summarising, and critiquing text
    • Researching topics and checking facts
    • Extracting and organising information from documents
    • Sharing and collaborating on files
    • Managing project knowledge

    We then selected tools that:

    • Are available off the shelf
    • Require no specialist training
    • Are affordable on a small-business budget
    • Can be configured and used transparently

    Findings

    Core Tools Selected

    • Writing & prompting: ChatGPT Team (£25–30/month per user) – main workspace for drafting, reasoning, editing
    • Search & fact-checking: Perplexity Pro ($20/month per user) – fast, source-aware, good for validating facts
    • Document interrogation: NotebookLM (free, for now) – project libraries, good with PDFs and notes
    • Office apps: MS 365 or Google (£5–15/month per user) – matches common small business setups
    • Visual inputs: ChatGPT Vision (included with ChatGPT) – used for images, scans, and screenshots

    Discussion of findings

    This configuration balances affordability, realism, and capability. We expect participants in Anapoly AI Labs to have similar access to these tools, or to be able to get it. By using these tools ourselves in Anapoly’s day-to-day running, we:

    • Gain first-hand experience to share
    • Create reusable examples from real work
    • Expose gaps, workarounds, and lessons worth documenting

    We considered whether personal licences could be shared during lab sessions. Technically, they can’t: individual ChatGPT and Perplexity licences are for single-user use. While enforcement is unlikely, we’ve chosen to adopt the position that participants should bring their own AI tools – free or paid – to lab sessions as part of the learning experience. This avoids ambiguity about licensing and sets the ethical standard we want to maintain.

    Conclusions

    This toolkit would enable us to model our own small-business operations, treating Anapoly itself as one of the lab setups. That would reinforce our stance: we don’t claim to be AI experts; we’re practitioners asking the questions small businesses wish they had time to ask, and showing what happens when you do.

    Recommendations

    • Configure project workspaces in ChatGPT Team to reflect different lab contexts
    • Maintain prompt libraries and reasoning trails
    • Make costs, configurations, and limitations explicit in diary and lab notes
    • Evaluate whether to add AI-enhanced spreadsheet or knowledge tools (e.g. Notion, Obsidian) in future iterations

    Tags

    ai tools, toolkit, configuration, modelling, small business, chatgpt, perplexity, notebooklm, office software, credibility

    Glossary

    ChatGPT Team – OpenAI’s paid workspace version of ChatGPT, allowing collaboration, custom GPTs, and project folders.
    NotebookLM – A Google tool for working with uploaded documents using AI, currently free.
    Perplexity Pro – A subscription AI assistant known for showing sources.
    Vision input – The ability to upload images (photos, scans) and have the AI interpret them.

  • Assessing AI support for defining a value proposition

    Transparency label: AI-assisted

    Purpose of Experiment

    To test whether a general-purpose AI (NotebookLM) can generate a credible, insightful articulation of Anapoly AI Labs’ value proposition when prompted appropriately.

    Author and Date

    Alec Fearon, 22 June 2025

    Participants

    Alec Fearon, Ray Holland, NotebookLM (AI), Perplexity (AI)

    Lab Configuration and Setup

    This experiment was run using NotebookLM, in a notebook containing background files on Anapoly AI Labs. To explore how we might play to the strengths of different AIs, a follow-up was performed using Perplexity with the NotebookLM output as input.

    Procedure

    NotebookLM was given this prompt:

    “What is Anapoly AI Labs’ core value proposition and how does it differentiate itself?”

    NotebookLM’s response was then given to Perplexity, with this prompt:

    “How does Anapoly’s hands-on approach help me understand AI better in my work?”
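
    The procedure amounts to a two-step prompt chain. In the sketch below, ask_notebooklm and ask_perplexity are hypothetical stand-ins for manual copy-and-paste between the two web interfaces; neither tool was driven through an API in this lab.

    ```python
    # Hypothetical stand-ins: each "call" was really a manual copy-and-paste
    # into the tool's web interface.
    def ask_notebooklm(prompt: str) -> str:
        return "(NotebookLM's answer, pasted by hand)"

    def ask_perplexity(prompt: str) -> str:
        return "(Perplexity's answer, pasted by hand)"

    # Step 1: ask NotebookLM for the internal formulation.
    positioning = ask_notebooklm(
        "What is Anapoly AI Labs' core value proposition and how does it "
        "differentiate itself?"
    )

    # Step 2: pass that output to Perplexity with the user-oriented question.
    summary = ask_perplexity(
        "How does Anapoly's hands-on approach help me understand AI better "
        "in my work?\n\n" + positioning
    )
    ```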

    Findings

    NotebookLM responded with a detailed analysis of Anapoly’s strategic positioning:

    Anapoly AI Labs’ core value proposition lies in bridging the gap between AI curiosity and practical application for non-technical professionals and small businesses. They address the common issue where individuals are interested in AI but are unsure how to use it effectively in their daily work. Their purpose is to help these professionals explore how general-purpose AI can assist with everyday tasks, such as improving productivity, creating better quality output, or saving money.

    Anapoly AI Labs differentiates itself through a distinctive “modeling exploration” approach and its stance on credibility:

    • Hands-on, Practical Exploration over Theory or Expertise: Unlike traditional courses, product demos, or sales pitches, Anapoly AI Labs focuses on hands-on exploration of general-purpose AI tools in simulated real-world contexts. The emphasis is on demonstrating “what AI tools can and can’t do when pointed at everyday work,” rather than teaching complex AI theory or coding. This involves setting up specific “labs” for different market segments, such as a Small Consultancy Office or Independent Author, to apply AI to real problems.
    • Credibility Through Transparency and Shared Journey: A central aspect of their differentiation is their approach to credibility. They explicitly state they “don’t claim to be AI experts”. Instead, their credibility is built upon a “shared journey of discovery and documented practical insights”. As practitioners “in the early stages of building with the tools”, they aim to show that they are serious, curious, and hands-on. The AI itself helped refine this stance, suggesting that their value comes from “modeling exploration” and “asking the questions small businesses wish they had time to ask, and showing what happens when you do”. This is presented as more credible than a static CV or pre-existing expert status.
    • Public Documentation and Transparency: Anapoly AI Labs maintains an online presence called “Anapoly Online,” which functions as a “public notebook”. This platform is used to “document real tests, learning, missteps, and insights,” sharing what works and what doesn’t. It is designed to be “lean, understated, and quietly credible”, building a “body of work others can trust” bit by bit through openness and demonstrated effort. They also have a transparent system for disclosing AI involvement in all published material, using classifications like “Human-led,” “AI-assisted,” or “AI-heavy”.
    • Focus on the “Capability-Absorption Gap”: The labs directly address the “capability-absorption gap,” which refers to why businesses struggle to adopt AI despite its capabilities. By providing a low-barrier-to-entry, informal, and social environment for peer learning, they aim to make AI adoption more accessible and less confusing for non-technical users.

    Perplexity then provided the following simplified summary:

    “Anapoly’s hands-on approach helps you understand AI better in your work by letting you actually try out AI tools on real tasks that matter to you, instead of just reading about them or watching demos. This practical experience shows you what AI can and can’t do in your daily work, making it easier to see how it fits into your routine and where it can save you time or improve your results.

    You get to experiment in a safe, informal setting, learn from real examples, and see both successes and mistakes. This helps you build real confidence with AI, instead of just theory, and makes it much easier to spot where AI could help you most in your job.”

    Discussion of findings

    The NotebookLM output demonstrated that a general-purpose AI can synthesise a coherent and plausible value proposition from structured background material. Perplexity, by contrast, generated a shorter and more audience-friendly version in response to a focused, user-oriented question.

    Together, these outputs show complementary strengths: NotebookLM delivers a comprehensive internal formulation; Perplexity extracts an accessible, benefit-led summary. Used in tandem, these tools help refine messaging for different audiences: internal strategists vs. prospective participants.

    Of particular interest is NotebookLM’s identification of the “capability-absorption gap”, a concise and useful term for a key problem that Anapoly AI Labs addresses. While the founders had recognised this issue in practical terms, the AI’s phrasing sharpens it into a strategic talking point. Framing Anapoly’s purpose in terms of reducing this gap may prove valuable in both internal planning and external communication.

    This experiment also highlights the value of re-prompting and testing across different AI models to triangulate clarity and tone.

    Recommendations

    1. Use AI tools like NotebookLM to draft key positioning statements, especially when materials are already well developed.
    2. Always review AI-generated value propositions critically. Look for overfitting, vagueness, or unearned claims.
    3. Use simpler AI prompts with tools like Perplexity to test how propositions land with a non-specialist audience.
    4. Consider publishing selected AI outputs as-is, but with clear disclosure and context-setting.
    5. Repeat this exercise periodically to test whether the value proposition evolves or ossifies.

    Tags
    value, lab-setup, worked, use-cases, prompting, ai-only, positioning, credibility, communication, capability-absorption-gap

    Glossary

    • Modeling Exploration: A term used to describe Anapoly AI Labs’ approach—testing and demonstrating AI use in practical contexts without claiming expertise.
    • Capability-Absorption Gap: The space between what AI tools can do and what users actually manage to adopt in real settings. First coined (in this context) by NotebookLM.
    • Public Notebook: Anapoly Online’s role as a transparent log of what was tried, what worked, and what didn’t.
    • General-purpose AI Tools: Tools like ChatGPT or NotebookLM that are not tailored to a specific domain but can assist with a wide range of tasks.
    • AI-only: A transparency label denoting that the content was fully generated by AI without human rewriting or editorial shaping.
    • Overfitting: In this context, an AI response that sticks too closely to the language or structure of source material, potentially limiting originality or insight.
    • Vagueness: A tendency in AI outputs to use safe, abstract phrases that lack specificity or actionable detail.
    • Unearned Claims: Assertions made by AI that sound impressive but are not substantiated by evidence or experience in the given context.

    Transparency Label Justification. The experimental outputs (NotebookLM and Perplexity responses) were AI-generated and included unedited. However, the lab note itself – its framing, interpretation, and derived recommendations – was co-written by a human and ChatGPT in structured dialogue.
    ChatGPT’s role included drafting the findings and recommendations, articulating the reasoning behind terms like “capability-absorption gap”, and refining the explanatory framing, tags, and glossary.

  • Lab Note: custom instructions for ChatGPT

    Purpose of Experiment

    To improve the clarity, coverage, and strategic value of the custom instructions used to guide ChatGPT. This involved shaping an adaptable persona for the assistant: a set of behavioural expectations defining how it should think, respond, and collaborate in different contexts. The goal was to ensure that ChatGPT supports Anapoly AI Labs in a consistent, credible, and context-sensitive manner.

    Author and Date

    Alec Fearon, 17 June 2025

    Participants

    Alec Fearon (experiment lead), ChatGPT (Document Collaborator mode)


    Lab Configuration and Setup

    This was a document-focused lab. The session took place entirely within ChatGPT’s Document Workbench, using file uploads and canvas tools. Key source files included:

    • My current custom instructions (baseline input)
    • An alternative set of instructions (for contrast) by Matthew, an expert AI user
    • The Anapoly AI Labs project instructions (evolving draft)
    • Recent Anapoly Online diary posts
    • Email exchanges amongst Anapoly team members.

    ChatGPT acted in what we later formalised as Document Collaborator mode: assisting with drafting, editing, and structural critique in line with the evolving instruction set. I provided direction in natural language; the AI edited and reorganised accordingly.

    Preamble

    This note includes a short glossary at the end to explain terms that may be unfamiliar to non-technical readers.

    Procedure

    1. Reviewed and critiqued my current instructions.
    2. Analysed strengths and gaps in Matthew’s approach.
    3. Combined the best of both into a new, structured format.
    4. Iteratively improved wording, structure, and tone.
    5. Added meta-guidance, clarified interaction modes, and ensured adaptability across different settings.
    6. Produced markdown, plain text, and PDF versions for upload to the project files.
    7. Created a lighter version suitable for general ChatGPT use.

    Findings

    • Matthew’s structure was modular and well-scoped, but lacked tone guidance and broader role adaptability.
    • My original was strong on tone and intent but less clear on scope and edge-case handling.
    • Combining both required trimming redundancy and strengthening interaction rules.
    • The distinction between projects, documents, and informal chats is useful and worth making explicit.
    • File handling (multimodal interpretation) and ambiguity management were previously under-specified.

    Discussion of Findings

    The lab assumed that small adjustments to instruction style could yield meaningful improvements in assistant behaviour, and the resulting draft reflects that working hypothesis.

    Defining five roles for ChatGPT (Thinking Partner, Document Collaborator, Research Assistant, Use-Case Designer, Multimodal Interpreter) provides a useful mental model for both human and AI. The role can be specified at the beginning of a chat, and changed during the chat as necessary. 

    Meta-guidance (what to do when a prompt is ambiguous or under-specified) should be especially valuable.

    Clarifying when ChatGPT should adopt my personal tone versus when it should adjust to suit an external audience turned out to be important. That distinction will help the assistant match its style to the task – whether drafting in my voice or producing something outward-facing and more formal.

    Including Socratic method and ranked questions makes the assistant a sharper tool for thought, not just a better rewriter.

    Conclusions

    We now have a robust set of project instructions aligned with Anapoly’s style, goals, and workflow.

    The same principles can be adapted to other roles or collaborators as Anapoly Labs grows.

    Future labs could focus on refining persona prompts, exploring AI transparency, or adapting the instructions for group sessions.

    Recommendations

    Use the new instructions consistently in all project spaces.

    Encourage collaborators to create variants suited to their own use cases.

    Monitor edge cases where the assistant behaves inconsistently—these can inform future labs.

    Continue exploring how to balance tone, clarity, and adaptability when writing for different audiences.

    Tags: lab-notes, instructions, ai-tools, prompt-engineering

    Glossary

    Edge case: A situation that occurs at an extreme—such as rare inputs or unusual usage patterns—where a system might fail or behave unpredictably.

    Meta-guidance: Instructions that tell the assistant how to handle ambiguity or uncertainty in the user’s prompt.

    Multimodal interpretation: The ability to interpret and work with different types of input (e.g. text, images, PDFs, spreadsheets).

    Markdown: A lightweight text formatting language used to create structured documents with headings, bullet points, and emphasis.

    Prompt: A user’s input or question to an AI system; often a single sentence or instruction.

    Socratic method: A questioning technique used to clarify thinking by challenging assumptions and exploring implications.

    Document Collaborator mode: A role adopted by the AI to help with drafting, editing, and improving written content through structured feedback.

    Document Workbench: The interface in ChatGPT where documents are edited interactively.

    Canvas: A specific document within the Document Workbench where collaborative editing takes place.