Author: Alec Fearon

  • Coping with newsletters

    transparency label: human-led

    I subscribe to quite a lot of newsletters, covering topics that interest me. But it’s difficult to find the time to work out which ones merit a closer look. I remembered reading that Azeem Azhar solves this problem by telling an AI what kinds of things he is looking for, giving it a week’s worth of newsletters, and asking it to give him a concise summary of what might interest him. So I followed his example. I gave NotebookLM background information about Anapoly AI Labs, and asked it for a detailed prompt that would flag anything in my newsletters that might suit a diary post.
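
    Out of curiosity about what that workflow would look like if it were automated, here is a hypothetical sketch using the OpenAI Python library rather than NotebookLM. The file locations, model choice, and the wording of the brief are all invented for illustration.

      # Hypothetical sketch only: the same filtering idea done programmatically.
      from pathlib import Path
      from openai import OpenAI

      client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

      # A week's worth of newsletters, saved as plain-text files (illustrative path).
      newsletters = "\n\n---\n\n".join(
          p.read_text() for p in sorted(Path("newsletters/this-week").glob("*.txt"))
      )

      brief = (
          "I help run Anapoly AI Labs, which explores practical uses of "
          "general-purpose AI for non-technical professionals. Flag anything in "
          "these newsletters that might suit a short diary post, with a one-line "
          "reason for each item."
      )

      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system", "content": brief},
              {"role": "user", "content": newsletters},
          ],
      )
      print(response.choices[0].message.content)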

    When I used that prompt, one of the points it picked up was that our transparency framework ties in well with the EU AI Act, which has a section on transparency requirements. That prompted me to look into the UK’s approach. I learned that, instead of regulations, we have regulatory principles to guide existing regulatory bodies such as the Information Commissioner’s Office and Ofcom. The principles cover:

    • Safety, security and robustness
    • Appropriate transparency and explainability
    • Fairness
    • Accountability and governance
    • Contestability and redress

    It occurred to me that some of the experimentation in our labs could focus on these aspects, with individual labs set up to specialise in one aspect, for example. Equally, we might explore how the regulatory principles could form the basis for a quality assurance framework applicable to business outputs created with AI involvement. This will become an important consideration for small businesses and consultancies.

    A final thought: any enterprise doing business in the EU must comply with the EU AI Act. That alone could justify a focused lab setup. We might simulate how a small consultancy could meet the Act’s transparency and accountability requirements when using general-purpose AI tools, modelling practical compliance, not just reading the rules. This, too, might merit some experimentation by Anapoly AI Labs. 

    Transparency Label Justification. This post was drafted by Alec Fearon. Newsletter filtering was supported by NotebookLM as a separate exercise. ChatGPT was used to revise wording and clarify structure. All reflections and framing are human-authored.

  • How we flag AI involvement in what we publish

    transparency label: AI-assisted

    Most of what we publish here is written with the help of AI. That’s part of the point. Anapoly AI Labs is about trying these tools out on real work and seeing what they’re good for.

    To keep things transparent, we label every post to show how much AI was involved. The label links to our transparency framework, which doesn’t try to assign percentages. Instead, we use a straightforward five-level scale:

    • Human-only: Entirely human-authored. No AI involvement at any stage of development.
    • Human-led: Human-authored, with AI input limited to suggestions, edits, or fact-checking.
    • AI-assisted: AI was used to draft, edit, or refine content. Human authors directed the process.
    • AI-heavy: AI played a large role in drafting or synthesis. Human authors curated and finalised the piece.
    • AI-only: Fully generated by AI without human input beyond the original prompt. No editing or revision.

    We sometimes add the following:

    • Justification: A brief note explaining why we chose a particular label.
    • Chat Summary: A short summary of what the AI contributed to the piece.
    • Full Transcript: A link to the full chat behind a piece, lightly edited for clarity and privacy, when it contains something worth reading.
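
    The scale above is descriptive rather than mechanical, but if we ever wanted to record a post’s label as structured metadata, a minimal sketch in Python might look like this. The field names are our own illustration, not part of the framework.

      # Minimal sketch: recording a post's transparency label as metadata.
      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional

      class TransparencyLabel(Enum):
          HUMAN_ONLY = "human-only"
          HUMAN_LED = "human-led"
          AI_ASSISTED = "AI-assisted"
          AI_HEAVY = "AI-heavy"
          AI_ONLY = "AI-only"

      @dataclass
      class PostTransparency:
          label: TransparencyLabel
          justification: Optional[str] = None   # why we chose this label
          chat_summary: Optional[str] = None    # what the AI contributed
          transcript_url: Optional[str] = None  # link to the edited full chat

      # Example: the record for this post.
      this_post = PostTransparency(
          label=TransparencyLabel.AI_ASSISTED,
          justification="Human-led editorial process with substantial AI input.",
      )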

    Transparency Label Justification. This post was developed collaboratively. The human author defined the purpose, structure, and tone; the AI assisted with drafting, tightening prose, applying style conventions, and rewording for clarity. The final version reflects a human-led editorial process with substantial AI input at multiple stages.

  • Working Towards a Strategy

    In the last few weeks, we’ve been thinking more clearly about how to describe what Anapoly AI Labs is for and how we intend to run it.

    The idea, from the start, was not to teach AI or to promote it, but to work out what it’s actually good for. Not in theory, but in the real day-to-day work of people like us. We’d do this by setting up simulated work environments (labs) and running real tasks with current AI tools to see what helps, what doesn’t, and what’s worth doing differently.

    To sharpen that thinking, I used ChatGPT as a sounding board. I gave it access to our core documents, including the notes from a deep dive discussion in NotebookLM about what we’re trying to build. Then I asked it to act as a thinking partner and help me write a strategy.

    The result is a good working draft that sets out our purpose, stance, methods, and measures of success. It’s a helpful starting point for discussion. One thing we’ve been clear about from the start: we don’t claim to be AI experts. We’re practitioners, working things out in public. That’s our stance, and the strategy captures it well.

    We’ve published the draft as a PDF. It explains how Anapoly AI Labs will work: how the labs are set up, what kind of people they’re for, how we plan to run sessions, and what success would look like. We now want to shift focus from shaping the idea to working out how to make it happen.

    Download the full document if you’d like to see more. We’d welcome thoughts, questions, or constructive criticism – we’re still working it out.

  • First lab note published

    We’ve just posted our first lab note.

    It documents an internal experiment to refine the custom instructions we use with ChatGPT – what we expect from it, how it should respond, and how to keep it useful across different tasks. The aim is to define a persona for the AI assistant that behaves consistently while adapting to the different types of assistance required of it.

    It’s a good example of how we’re using the Labs: not to explain AI, but to find out what it’s actually good for.

    Read the lab note → Custom Instructions for ChatGPT

  • Lab Note: custom instructions for ChatGPT

    Purpose of Experiment

    To improve the clarity, coverage, and strategic value of the custom instructions used to guide ChatGPT, ensuring that it supports Anapoly AI Labs in a consistent, credible, and context-sensitive manner. This involved shaping an adaptable persona for the assistant: a set of behavioural expectations defining how it should think, respond, and collaborate in different contexts.

    Author and Date

    Alec Fearon, 17 June 2025

    Participants

    Alec Fearon (experiment lead), ChatGPT (Document Collaborator mode)


    Lab Configuration and Setup

    This was a document-focused lab. The session took place entirely within ChatGPT’s Document Workbench, using file uploads and canvas tools. Key source files included:

    • My current custom instructions (baseline input)
    • An alternative set of instructions (for contrast) by Matthew, an expert AI user
    • The Anapoly AI Labs project instructions (evolving draft)
    • Recent Anapoly Online diary posts
    • Email exchanges amongst Anapoly team members.

    ChatGPT acted in what we later formalised as Document Collaborator mode: assisting with drafting, editing, and structural critique in line with the evolving instruction set. I provided direction in natural language; the AI edited and reorganised accordingly.

    Preamble

    This note includes a short glossary at the end to explain terms that may be unfamiliar to non-technical readers.

    Procedure

    1. Reviewed and critiqued my current instructions.
    2. Analysed strengths and gaps in Matthew’s approach.
    3. Combined the best of both into a new, structured format.
    4. Iteratively improved wording, structure, and tone.
    5. Added meta-guidance, clarified interaction modes, and ensured adaptability across different settings.
    6. Produced markdown, plain text, and PDF versions for upload to the project files.
    7. Created a lighter version suitable for general ChatGPT use.

    Findings

    Matthew’s structure was modular and well-scoped, but lacked tone guidance and broader role adaptability.

    My original was strong on tone and intent but less clear on scope and edge-case handling.

    Combining both required trimming redundancy and strengthening interaction rules.

    The distinction between projects, documents, and informal chats is useful and worth making explicit.

    File handling (multimodal interpretation) and ambiguity management were under-specified previously.

    Discussion of Findings

    The lab assumed that small adjustments to instruction style could yield meaningful improvements in assistant behaviour, and the resulting draft reflects that working hypothesis.

    Defining five roles for ChatGPT (Thinking Partner, Document Collaborator, Research Assistant, Use-Case Designer, Multimodal Interpreter) provides a useful mental model for both human and AI. The role can be specified at the beginning of a chat, and changed during the chat as necessary. 
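
    To make that concrete, here is a hypothetical illustration of how the roles might be declared at the start of a chat. The one-line briefs are invented for this sketch and are not the wording of our actual instructions.

      # Hypothetical illustration of the five-role idea; briefs are invented.
      ROLES = {
          "Thinking Partner": "Question assumptions and help develop ideas.",
          "Document Collaborator": "Draft, edit, and critique written material.",
          "Research Assistant": "Gather, summarise, and compare sources.",
          "Use-Case Designer": "Shape practical experiments and lab scenarios.",
          "Multimodal Interpreter": "Work with files, images, and other inputs.",
      }

      def opening_message(role: str, task: str) -> str:
          """Compose the first message of a chat, naming the role up front."""
          return f"Act as my {role}. {ROLES[role]} Today's task: {task}"

      print(opening_message("Document Collaborator", "tighten the draft strategy"))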

    Meta-guidance (what to do when a prompt is ambiguous or under-specified) should be especially valuable.

    Clarifying when ChatGPT should adopt my personal tone versus when it should adjust to suit an external audience turned out to be important. That distinction will help the assistant match its style to the task – whether drafting in my voice or producing something outward-facing and more formal.

    Including Socratic method and ranked questions makes the assistant a sharper tool for thought, not just a better rewriter.

    Conclusions

    We now have a robust set of project instructions aligned with Anapoly’s style, goals, and workflow.

    The same principles can be adapted to other roles or collaborators as Anapoly AI Labs grows.

    Future labs could focus on refining persona prompts, exploring AI transparency, or adapting the instructions for group sessions.

    Recommendations

    Use the new instructions consistently in all project spaces.

    Encourage collaborators to create variants suited to their own use cases.

    Monitor edge cases where the assistant behaves inconsistently – these can inform future labs.

    Continue exploring how to balance tone, clarity, and adaptability when writing for different audiences.

    Tags: lab-notes, instructions, ai-tools, prompt-engineering

    Glossary

    Edge case: A situation that occurs at an extreme – such as rare inputs or unusual usage patterns – where a system might fail or behave unpredictably.

    Meta-guidance: Instructions that tell the assistant how to handle ambiguity or uncertainty in the user’s prompt.

    Multimodal interpretation: The ability to interpret and work with different types of input (e.g. text, images, PDFs, spreadsheets).

    Markdown: A lightweight text formatting language used to create structured documents with headings, bullet points, and emphasis.

    Prompt: A user’s input or question to an AI system; often a single sentence or instruction.

    Socratic method: A questioning technique used to clarify thinking by challenging assumptions and exploring implications.

    Document Collaborator mode: A role adopted by the AI to help with drafting, editing, and improving written content through structured feedback.

    Document Workbench: The interface in ChatGPT where documents are edited interactively.

    Canvas: A specific document within the Document Workbench where collaborative editing takes place.

  • Sense from confusion

    Early in the process of developing Anapoly Online, our website, I asked ChatGPT to help me create a diary dashboard: a page acting as a central point for diary posts. Amongst other things, I wanted the page to let us select a tag and see only the posts thus tagged. I was unsure how to implement the filtering control this needed, so I asked ChatGPT for a step-by-step guide. The AI confidently produced a procedure, and I put it into action. It soon became clear, however, that the procedure did not reflect the reality of the software I was using. ChatGPT tried to correct course in response to my complaints, but matters simply became more confused.

    When all else fails, I said to myself, read the documentation. 

    The software’s documentation proved to be like the curate’s egg: good in parts. Soon, I was as muddled as ChatGPT had been, and it took me some trial and much error to work out the correct procedure for what I wanted to do. 

    Conclusion: current AI can’t create sense out of confusion. That’s still a task for humans.

  • The concept

    Purpose: To model and investigate how non-technical people can make good use of general-purpose AI in their work, using experimentation to understand the strengths and limitations of current AI tools.

    Why does this matter? AI is now widely available, but there’s a credibility gap between hype and reality. Many people are unsure how to use AI effectively.

    What is Anapoly AI Labs? Not a research lab, nor a tech incubator. A collection of small, hands-on labs simulating real-world contexts to explore the practical use of general-purpose AI tools.

    How it works

    A lab is a simulated workspace: a model of an office or home environment, set up to reflect the tasks and tools typical of a real working situation. It is equipped with one or more PCs and other internet-connected devices.

    For some labs, the devices are physically co-located in one office, together with a large touchscreen display. This setup is for sessions where we want the better interaction that comes from face-to-face contact and shared viewing of experiments. In other labs, the devices may be distributed over two or more locations for remote working.

    For all labs, digital files are held in cloud storage. Standard software such as Microsoft Office is used to create and edit documents, manage data, communicate by email, and support typical workflows. General-purpose AI tools like ChatGPT, Perplexity, and NotebookLM are accessed online.

    The participants in a lab carry out realistic tasks in a simulated working context – researching a topic, drafting a proposal, analysing correspondence, writing a report – just as they might in their professional life.

    To create a lab, we configure the physical and digital parts to suit its purpose. This involves connecting the equipment to a dedicated area of file storage whose content is tailored to the work context being modelled by that lab. Thus all documents, data, and outputs in a lab are context-specific and separate from those in other labs.

    What Makes It Different? This isn’t a course, a product demo, or a sales pitch. It’s a testbed. The emphasis is practical: hands-on exploration of what general-purpose AI tools can and can’t do when pointed at everyday work.

    Intended audience: curious professionals, small business owners, writers, and community actors – anyone who works with words, data, or decisions.

    Mode of Operation: Small, hands-on sessions. Sometimes co-located in person, otherwise working remotely.

    Outcomes: Better understanding of what AI can and cannot do in everyday contexts. A growing library of real examples and honest reflections. A trusted local presence in the AI literacy landscape.

    Founders’ position: Experienced, local professionals not selling AI services but exploring their use. Not trying to be experts, but honest, curious testers of what’s actually useful. Hoping to pass on the baton to a younger team.

  • A pivot

    Our initial idea, prompted by Kamil Banc’s writing on practical AI use, was to run a small, local club. Somewhere people like us could meet in person, experiment with ChatGPT, and see what we could actually do with it. A “non-threatening, friendly environment,” we called it at the time.

    But the concept developed, and the name seemed too cosy. A reference to Google Labs brought up the idea of a lab as a place to experiment with tools and ideas. This resonated, so we pivoted to thinking of ourselves not as conveners of a club but as facilitators of a sandbox: a safe space to try things out and see what works.

    Our sandbox would be friendly and exploratory, but with a clear purpose: to model the use of general-purpose AI tools in everyday working environments. It would enable a number of labs, each modelling a different working situation, where we could try things out, see what helps and what doesn’t, and work out how to get better results.

    Hence Anapoly AI Labs: one sandbox, many lab setups.


    sandbox: a safe play area where computer programs can be used without affecting the operational system; useful for experimenting with or testing new software.

  • Our stance

    Stance: a way of thinking about something, especially expressed in a publicly stated opinion.

    We don’t claim to be AI experts. We’re practitioners, exploring how AI can help with the real problems faced by professionals like us. We’re testing, documenting, and improving – in public. That’s our value.

  • Initial assumptions

    Having decided that the idea of an AI Club was worth pursuing, Dennis and I co-opted Ray into the initiative and set out its underlying assumptions. These are listed below.

    Assumption: there is a market for an AI Club amongst the local population of active and retired professionals, small business owners, and the like. These people are aware of AI’s potential, curious about it, but unsure how best to make use of it.

    Assumption: the social aspect of our club will make it a suitable, informal setting for people to learn how AI tools can improve the quality and productivity of their work.

    Assumption: Although different people will find different aspects of AI useful, there are some common purposes. These include:

    • supporting personal or professional development;
    • getting more done in less time;
    • creating better quality output;
    • improving the quality of service offered to others; and
    • saving money.

    Assumption: a face-to-face, small-group format with peer interaction, real-time demonstrations, and a narrative focus will be more appealing than virtual courses or corporate-style workshops.

    Assumption: people will be willing to attend in-person sessions and contribute a modest fee once value is evident. They will think that the experience is more useful, trustworthy, and rewarding than online alternatives.

    Assumption: at present, no equivalent offering exists locally that blends live demonstration, peer learning, and practical AI support with such a low barrier to entry.

    Assumption: Anapoly can deliver sessions using current, general-purpose AI tools such as ChatGPT, Perplexity and NotebookLM without a costly technical infrastructure, relying on existing facilities and minimal setup.

    Assumption: Anapoly, as a local consultancy run by experienced professionals, will be trusted by the audience and seen as non-threatening, practical, and thoughtful.