Tag: ai tools

  • Drafting Anapoly’s first Lab Setup with ChatGPT

    Transparency label: AI‑heavy (ChatGPT model o3 produced primary content; Alec Fearon curated and lightly edited)


    Purpose of experiment

    Use ChatGPT as a thinking partner to create a first draft of an acclimatisation lab setup for Anapoly AI Labs and to compare output quality between two models (o3 and 4o).

    Author and date

    Alec Fearon, 24 June 2025

    Participants

    • Alec Fearon – experiment lead
    • ChatGPT model o3 – reasoning model
    • ChatGPT model 4o – comparison model

    Lab configuration and setup

    Interaction took place entirely in the ChatGPT Document Workbench. The first prompt was duplicated into two branches, each tied to a specific model. All files needed for reference (conceptual framework, lab note structure) were pre‑uploaded in the project space.

    Preamble

    Alec wanted a concise, critical first draft to stimulate team discussion. The exercise also served as a live test of whether o3’s “reasoning” advantage produced materially better drafts than the 4o model.

  • That was the moment …

    Transparency label: Human only

    It hit me that generative AI is the first kind of technology that can tell you how to use itself. You ask it what to do, and it explains the tool, the technique, the reasoning—it teaches you. And that flipped something for me. It stopped being a support tool and became more like a co-founder.

    A quote from BRXND Dispatch, a Substack newsletter by Noah Brier, which featured Craig Hepburn, former Chief Digital Officer at Art Basel.

  • Lab Note: modelling our own use of AI tools

    Transparency label: AI-heavy


    Purpose of experiment

    To identify and configure an AI toolkit for Anapoly AI Labs that credibly models the use of general-purpose AI tools in a small consultancy setting.

    Author and date

    Alec Fearon, 24 June 2025

    Participants

    Alec Fearon, with Ray Holland and Dennis Silverwood in email consultation
    ChatGPT-4o

    Lab configuration and setup

    This setup models a real-world micro-consultancy with three collaborators. It assumes limited budget, modest technical support, and a practical orientation. We aim to reflect the toolkit choices we might recommend to participants in Anapoly AI Labs sessions.

    Preamble

    If Anapoly AI Labs is to be a credible venture, we believe it must model the behaviour it explores. That means our own internal work should demonstrate how small teams or sole traders might use AI tools in everyday tasks – writing, research, analysis, and communication – not just talk about it. This lab note outlines our proposed working configuration.

    Procedure

    We identified common functions we ourselves perform (and expect others will want to model), for example:

    • Writing, summarising, and critiquing text
    • Researching topics and checking facts
    • Extracting and organising information from documents
    • Sharing and collaborating on files
    • Managing project knowledge

    We then selected tools that:

    • Are available off the shelf
    • Require no specialist training
    • Are affordable on a small-business budget
    • Can be configured and used transparently

    Findings

    Core Tools Selected

    Function | Tool | Licence | Notes
    Writing & prompting | ChatGPT Team | £25–30/user/month | Main workspace for drafting, reasoning, editing
    Search & fact-checking | Perplexity Pro | $20/user/month | Fast, source-aware, good for validating facts
    Document interrogation | NotebookLM | Free (for now) | Project libraries, good with PDFs and notes
    Office apps | MS 365 or Google | £5–15/user/month | Matches common small-business setups
    Visual inputs | ChatGPT Vision | Included with ChatGPT | Used for images, scans, and screenshots

    Discussion of findings

    This configuration balances affordability, realism, and capability. We expect participants in Anapoly AI Labs to have similar access to these tools, or to be able to get it. By using these tools ourselves in Anapoly’s day-to-day running, we:

    • Gain first-hand experience to share
    • Create reusable examples from real work
    • Expose gaps, workarounds, and lessons worth documenting

    We considered whether personal licences could be shared during lab sessions. Technically, they can’t: individual ChatGPT and Perplexity licences are for single-user use. While enforcement is unlikely, we’ve chosen to adopt the position that participants should bring their own AI tools – free or paid – to lab sessions as part of the learning experience. This avoids ambiguity about licensing and sets the ethical standard we want to maintain.

    Conclusions

    This toolkit would enable us to model our own small-business operations, treating Anapoly itself as one of the lab setups. That would reinforce our stance: we don’t claim to be AI experts; we’re practitioners asking the questions small businesses wish they had time to ask, and showing what happens when you do.

    Recommendations

    • Configure project workspaces in ChatGPT Team to reflect different lab contexts
    • Maintain prompt libraries and reasoning trails
    • Make costs, configurations, and limitations explicit in diary and lab notes
    • Evaluate whether to add AI-enhanced spreadsheet or knowledge tools (e.g. Notion, Obsidian) in future iterations

    Tags

    ai tools, toolkit, configuration, modelling, small business, chatgpt, perplexity, notebooklm, office software, credibility

    Glossary

    ChatGPT Team – OpenAI’s paid workspace version of ChatGPT, allowing collaboration, custom GPTs, and project folders.
    NotebookLM – A Google tool for working with uploaded documents using AI, currently free.
    Perplexity Pro – A subscription AI assistant known for showing sources.
    Vision input – The ability to upload images (photos, scans) and have the AI interpret them.

  • Working Towards a Strategy

    In the last few weeks, we’ve been thinking more clearly about how to describe what Anapoly AI Labs is for and how we intend to run it.

    The idea, from the start, was not to teach AI or to promote it, but to work out what it’s actually good for. Not in theory, but in the real day-to-day work of people like us. We’d do this by setting up simulated work environments (labs) and running real tasks with current AI tools to see what helps, what doesn’t, and what’s worth doing differently.

    To sharpen that thinking, I used ChatGPT as a sounding board. I gave it access to our core documents, including the notes from a deep-dive discussion in NotebookLM about what we’re trying to build. Then I asked it to act as a thinking partner and help me write a strategy.

    The result is a good working draft that sets out our purpose, stance, methods, and measures of success. It’s a helpful starting point for discussion. One thing we’ve been clear about from the start: we don’t claim to be AI experts. We’re practitioners, working things out in public. That’s our stance, and the strategy captures it well.

    We’ve published the draft as a PDF. It explains how Anapoly AI Labs will work: how the labs are set up, what kind of people they’re for, how we plan to run sessions, and what success would look like. We now want to shift focus from shaping the idea to working out how to make it happen.

    Download the full document if you’d like to see more. We’d welcome thoughts, questions, or constructive criticism – we’re still working it out.

  • First lab note published

    We’ve just posted our first lab note.

    It documents an internal experiment to refine the custom instructions we use with ChatGPT – what we expect from it, how it should respond, and how to keep it useful across different tasks. The aim is to give the AI assistant a more consistent persona, one that can adapt to the different types of assistance required of it.

    It’s a good example of how we’re using the Labs: not to explain AI, but to find out what it’s actually good for.

    Read the lab note → Custom Instructions for ChatGPT

  • Lab Note: custom instructions for ChatGPT

    Purpose of Experiment

    To improve the clarity, coverage, and strategic value of the custom instructions used to guide ChatGPT. This involved shaping an adaptable persona for the assistant: a set of behavioural expectations defining how it should think, respond, and collaborate in different contexts. The goal was to ensure that ChatGPT supports Anapoly AI Labs in a consistent, credible, and context-sensitive manner.

    Author and Date

    Alec Fearon, 17 June 2025

    Participants

    Alec Fearon (experiment lead), ChatGPT (Document Collaborator mode)

    Lab Configuration and Setup

    This was a document-focused lab. The session took place entirely within ChatGPT’s Document Workbench, using file uploads and canvas tools. Key source files included:

    • My current custom instructions (baseline input)
    • An alternative set of instructions (for contrast) by Matthew, an expert AI user
    • The Anapoly AI Labs project instructions (evolving draft)
    • Recent Anapoly Online diary posts
    • Email exchanges amongst Anapoly team members

    ChatGPT acted in what we later formalised as Document Collaborator mode: assisting with drafting, editing, and structural critique in line with the evolving instruction set. I provided direction in natural language; the AI edited and reorganised accordingly.
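    To give a flavour of that direction, a typical instruction might be: “Act as Document Collaborator. Tighten this section, keep my tone, and flag anything that needs a supporting source.” (An illustrative prompt, not a quotation from the session.)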

    Preamble

    This note includes a short glossary at the end to explain terms that may be unfamiliar to non-technical readers.

    Procedure

    1. Reviewed and critiqued my current instructions.
    2. Analysed strengths and gaps in Matthew’s approach.
    3. Combined the best of both into a new, structured format.
    4. Iteratively improved wording, structure, and tone.
    5. Added meta-guidance, clarified interaction modes, and ensured adaptability across different settings.
    6. Produced markdown, plain text, and PDF versions for upload to the project files.
    7. Created a lighter version suitable for general ChatGPT use.

    Findings

    Matthew’s structure was modular and well-scoped, but lacked tone guidance and broader role adaptability.

    My original was strong on tone and intent but less clear on scope and edge-case handling.

    Combining both required trimming redundancy and strengthening interaction rules.

    The distinction between projects, documents, and informal chats is useful and worth making explicit.

    File handling (multimodal interpretation) and ambiguity management were under-specified previously.

    Discussion of Findings

    The lab assumed that small adjustments to instruction style could yield meaningful improvements in assistant behaviour, and the resulting draft reflects that working hypothesis.

    Defining five roles for ChatGPT (Thinking Partner, Document Collaborator, Research Assistant, Use-Case Designer, Multimodal Interpreter) provides a useful mental model for both human and AI. The role can be specified at the beginning of a chat, and changed during the chat as necessary. 
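    In practice, that might mean opening a chat with something like “Act as Thinking Partner while we scope this lab note” and later saying “Switch to Document Collaborator and edit the draft” – illustrative wording rather than extracts from the instruction set itself.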

    Meta-guidance (what to do when a prompt is ambiguous or under-specified) should be especially valuable.
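    As an illustration (not a quotation from the final instructions), such guidance might read: “If a request is ambiguous or under-specified, ask one clarifying question before drafting, and state any assumptions you have made.”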

    Clarifying when ChatGPT should adopt my personal tone versus when it should adjust to suit an external audience turned out to be important. That distinction will help the assistant match its style to the task – whether drafting in my voice or producing something outward-facing and more formal.

    Including the Socratic method and ranked questions makes the assistant a sharper tool for thought, not just a better rewriter.

    Conclusions

    We now have a robust set of project instructions aligned with Anapoly’s style, goals, and workflow.

    The same principles can be adapted to other roles or collaborators as Anapoly Labs grows.

    Future labs could focus on refining persona prompts, exploring AI transparency, or adapting the instructions for group sessions.

    Recommendations

    Use the new instructions consistently in all project spaces.

    Encourage collaborators to create variants suited to their own use cases.

    Monitor edge cases where the assistant behaves inconsistently—these can inform future labs.

    Continue exploring how to balance tone, clarity, and adaptability when writing for different audiences.

    Tags: lab-notes, instructions, ai-tools, prompt-engineering

    Glossary

    Edge case: A situation that occurs at an extreme—such as rare inputs or unusual usage patterns—where a system might fail or behave unpredictably.

    Meta-guidance: Instructions that tell the assistant how to handle ambiguity or uncertainty in the user’s prompt.

    Multimodal interpretation: The ability to interpret and work with different types of input (e.g. text, images, PDFs, spreadsheets).

    Markdown: A lightweight text formatting language used to create structured documents with headings, bullet points, and emphasis.

    Prompt: A user’s input or question to an AI system; often a single sentence or instruction.

    Socratic method: A questioning technique used to clarify thinking by challenging assumptions and exploring implications.

    Document Collaborator mode: A role adopted by the AI to help with drafting, editing, and improving written content through structured feedback.

    Document Workbench: The interface in ChatGPT where documents are edited interactively.

    Canvas: A specific document within the Document Workbench where collaborative editing takes place.