Category: diary

A public record of the development of our ideas

  • ChatGPT models: which to use when?

    transparency label: Human-only

    • ChatGPT-4o – fast, for brainstorming, quick questions, general chat
    • o3 – powerful, for serious work (analysis, writing, research, coding)
    • o3-pro – ultra-powerful, for the hardest problems

    Source: One Useful Thing, Substack newsletter by Ethan Mollick, 23 June 2025

  • That was the moment …

    transparency label: Human-only

    It hit me that generative AI is the first kind of technology that can tell you how to use itself. You ask it what to do, and it explains the tool, the technique, the reasoning—it teaches you. And that flipped something for me. It stopped being a support tool and became more like a co-founder.

    A quote from BRXND Dispatch, a Substack newsletter by Noah Brier, which featured Craig Hepburn, former Chief Digital Officer at Art Basel.

  • Mapping the territory: a conceptual framework for our labs


    Transparency label: AI-assisted

    As Anapoly AI Labs begins to take clearer shape, we’ve stepped back to ask: what exactly is a lab, and how should we think about the different types we’re running or planning?

    We now have an answer in the form of a framework that describes what labs are for, how they vary, and what kind of value they generate. It will help us give lab participants a sense of where they are and what comes next as they learn to make use of general-purpose AI tools, and it will help us design better labs. The framework will evolve in line with our thinking.

    The framework defines four key functions a lab can serve:

    • Acclimatisation – helping people get comfortable with AI tools
    • Experimentation – trying out tasks to see what works and what doesn’t
    • Proof of concept – asking whether AI could handle a specific challenge
    • Iteration – going back to improve on an earlier result

    It also distinguishes between domains (like consultancy or authorship) and contexts (like “a solo consultant writing a project bid”). Labs are set up to reflect domain + context.

    The framework defines a simple set of participant roles – observer, explorer, and facilitator – and outlines the kinds of outcomes we’re hoping for: confidence, insight, and learning.

    The full conceptual framework is here, and we’ll continue to refine it as our practice develops.

    This diary post is part of our public notebook. It helps document not just what we’ve tried, but how we’re thinking and rethinking as we go.


    Transparency Label Justification: This post was developed in dialogue with ChatGPT. Alec directed the content, structure, and tone, while ChatGPT contributed drafts, edits, and structural suggestions. All decisions about framing and language were reviewed and approved by Alec. This collaborative process fits the “AI-assisted” classification under the Anapoly Online transparency framework.


  • Sandboxes

    transparency label: Human-led

    The EU AI Act establishes a risk-based classification system for AI systems. Compliance requirements depend on the risk a system poses to users, with risk levels ranging from unacceptable and high down to limited and minimal. General-purpose AI systems like ChatGPT are not classified as high-risk but are subject to specific transparency requirements and must comply with EU copyright law.

    Also, the Act “aims to support AI innovation and start-ups in Europe, allowing companies to develop and test general-purpose AI models before public release. That is why it requires that national authorities provide companies with a testing environment for AI that simulates conditions close to the real world. This will help small and medium-sized enterprises (SMEs) compete in the growing EU artificial intelligence market.”

    Unlike the EU, the UK has no general-purpose AI sandbox. The UK takes a sector-led approach to AI oversight, relying on existing regulators to operate their own sandbox initiatives under the government’s pro-innovation framework. Each sandbox is designed around the compliance needs and risk profiles of its domain. Existing sandboxes are focused on sector-specific or compliance-heavy contexts, for example:

    • FCA AI Sandbox (2025) – Financial services only; supports firms developing or integrating AI tools into fintech workflows.
    • ICO Regulatory Sandbox – Suitable for testing AI applications involving personal data, especially where GDPR-like safeguards are needed.
    • MHRA AI Airlock – For AI used in medical devices.

    These UK sandboxes are geared toward testing purpose-built AI tools or integrations into regulated industry IT systems. There is no current sandbox designed for SMEs exploring general-purpose AI like ChatGPT in everyday, low-risk tasks. While tools like ChatGPT could feature in sandbox trials (e.g. via the ICO for data-privacy concerns or the FCA for financial services), those environments are structured around defined projects with specific compliance aims, not routine, everyday experimentation.

    A search by ChatGPT found no evidence of any current provision for open-ended, exploratory use of general-purpose AI by SMEs in unregulated or lightly regulated workflows. Anapoly AI Labs can occupy that space, modelling practical, credible AI use outside formal regulation.

    In that context, Anapoly’s modelling approach could include a cycle of experimentation followed by proof of concept (POC). A proof of concept is a small-scale test to check whether an idea works before investing in full implementation.

    Transparency Label Justification. This piece was authored by Alec Fearon. ChatGPT was used to assist with regulatory research (e.g. the EU AI Act and UK sandbox landscape), structure the argument, and refine language. The ideas, framing, and conclusions are Alec’s. AI contributed supporting detail and wording options, but did not generate or structure the content independently. 

  • Coping with newsletters

    transparency label: Human-led

    I subscribe to quite a lot of newsletters, covering topics that interest me, but it’s hard to find the time to work out which ones merit a closer look. I remembered reading that Azeem Azhar solves this problem by telling an AI what kinds of things he is looking for, giving it a week’s worth of newsletters, and asking it for a concise summary of what might interest him. So I followed his example: I gave NotebookLM background information about Anapoly AI Labs and asked it for a detailed prompt that would flag anything in my newsletters that might suit a diary post.

    When I used that prompt, one of the points it picked up was that our transparency framework ties in well with the EU AI Act, which has a section on transparency requirements. That prompted me to look into the UK’s approach. I learned that, instead of regulations, we have regulatory principles for the guidance of existing regulatory bodies such as the Information Commissioner’s Office or Ofcom. The principles cover:

    • Safety, security & robustness
    • Appropriate transparency and explainability
    • Fairness
    • Accountability and governance
    • Contestability and redress

    It occurred to me that some of the experimentation in our labs could focus on these aspects, with individual labs set up to specialise in one aspect, for example. Equally, we might explore how the regulatory principles could form the basis for a quality assurance framework applicable to business outputs created with AI involvement. This will become an important consideration for small businesses and consultancies.

    A final thought: any enterprise doing business in the EU must comply with the EU AI Act. That alone could justify a focused lab setup. We might simulate how a small consultancy could meet the Act’s transparency and accountability requirements when using general-purpose AI tools, modelling practical compliance, not just reading the rules. This, too, might merit some experimentation by Anapoly AI Labs. 

    Transparency Label Justification. This post was drafted by Alec Fearon. Newsletter filtering was supported by NotebookLM as a separate exercise. ChatGPT was used to revise wording and clarify structure. All reflections and framing are human-authored.

  • How we flag AI involvement in what we publish

    transparency label: AI-assisted

    Most of what we publish here is written with the help of AI. That’s part of the point. Anapoly AI Labs is about trying these tools out on real work and seeing what they’re good for.

    To keep things transparent, we label every post to show how much AI was involved. The label links to our transparency framework, which doesn’t try to assign percentages. Instead, it uses a straightforward five-level scale:

    • Human-only: Entirely human-authored. No AI involvement at any stage of development.
    • Human-led: Human-authored, with AI input limited to suggestions, edits, or fact-checking.
    • AI-assisted: AI was used to draft, edit, or refine content. Human authors directed the process.
    • AI-heavy: AI played a large role in drafting or synthesis. Human authors curated and finalised the piece.
    • AI-only: Fully generated by AI without human input beyond the original prompt. No editing or revision.

    We sometimes add the following:

    • Justification: A brief note explaining why we chose a particular label.
    • Chat Summary: A short summary of what the AI contributed to the piece.
    • Full Transcript: A link to the full chat behind a piece, lightly edited for clarity and privacy, when it contains something worth reading.

    Transparency Label Justification. This post was developed collaboratively. The human author defined the purpose, structure, and tone; the AI assisted with drafting, tightening prose, applying style conventions, and rewording for clarity. The final version reflects a human-led editorial process with substantial AI input at multiple stages.

  • Working Towards a Strategy

    In the last few weeks, we’ve been thinking more clearly about how to describe what Anapoly AI Labs is for and how we intend to run it.

    The idea, from the start, was not to teach AI or to promote it, but to work out what it’s actually good for. Not in theory, but in the real day-to-day work of people like us. We’d do this by setting up simulated work environments (labs) and running real tasks with current AI tools to see what helps, what doesn’t, and what’s worth doing differently.

    To sharpen that thinking, I used ChatGPT as a sounding board. I gave it access to our core documents, including the notes from a deep dive discussion in NotebookLM about what we’re trying to build. Then I asked it to act as a thinking partner and help me write a strategy.

    The result is a good working draft that sets out our purpose, stance, methods, and measures of success. It’s a helpful starting point for discussion. One thing we’ve been clear about from the start: we don’t claim to be AI experts. We’re practitioners, working things out in public. That’s our stance, and the strategy captures it well.

    We’ve published the draft as a PDF. It explains how Anapoly AI Labs will work: how the labs are set up, what kind of people they’re for, how we plan to run sessions, and what success would look like. We now want to shift focus from shaping the idea to working out how to make it happen.

    Download the full document if you’d like to see more. We’d welcome thoughts, questions, or constructive criticism – we’re still working it out.

  • First lab note published

    We’ve just posted our first lab note.

    It documents an internal experiment to refine the custom instructions we use with ChatGPT – what we expect from it, how it should respond, and how to keep it useful across different tasks. The aim is to define a persona for the AI assistant that behaves more consistently while adapting to the different kinds of assistance we ask of it.

    It’s a good example of how we’re using the Labs: not to explain AI, but to find out what it’s actually good for.

    Read the lab note → Custom Instructions for ChatGPT

  • Sense from confusion

    Early in the process of developing Anapoly Online, our website, I asked ChatGPT to help me create a diary dashboard: a page acting as a central point for diary posts. Amongst other things, I wanted the page to let us select a tag and see only the posts thus tagged. I was unsure how to implement the filtering control this needed, so I asked ChatGPT for a step-by-step guide. The AI confidently produced a procedure, and I put it into action. It soon became clear, however, that the procedure did not reflect the reality of the software I was using. ChatGPT tried to correct things in response to my complaints, but matters simply became more confused.

    When all else fails, I said to myself, read the documentation. 

    The software’s documentation proved to be like the curate’s egg: good in parts. Soon, I was as muddled as ChatGPT had been, and it took me some trial and much error to work out the correct procedure for what I wanted to do. 

    Conclusion: current AI can’t create sense out of confusion. That’s still a task for humans.

  • The concept

    Purpose: To model and investigate how non-technical people can make good use of general-purpose AI in their work, using experimentation to understand the strengths and limitations of current AI tools.

    Why does this matter? AI is now widely available, but there’s a credibility gap between hype and reality. Many people are unsure how to use AI effectively.

    What is Anapoly AI Labs? Not a research lab, nor a tech incubator. A collection of small, hands-on labs simulating real-world contexts to explore the practical use of general-purpose AI tools.

    How it works

    A lab is a simulated workspace: a model of an office or home environment, set up to reflect the tasks and tools typical of a real working situation. It is equipped with one or more PCs and other internet-connected devices.

    For some labs, the devices are physically co-located in one office, together with a large touchscreen display. This setup is for sessions where we want better interaction through face-to-face contact and shared viewing of experiments. In other labs, the devices may be distributed over two or more locations for remote working.

    For all labs, digital files are held in cloud storage. Standard software such as Microsoft Office is used to create and edit documents, manage data, communicate by email, and support typical workflows. General-purpose AI tools like ChatGPT, Perplexity, and NotebookLM are accessed online.

    The participants in a lab carry out realistic tasks in a simulated working context – researching a topic, drafting a proposal, analysing correspondence, writing a report – just as they might in their professional life.

    To create a lab, we configure the physical and digital parts to suit its purpose. This involves connecting the equipment to a dedicated area of file storage whose content is tailored to the work context being modelled by that lab. Thus all documents, data, and outputs in a lab are context-specific and separate from those in other labs.

    What Makes It Different? This isn’t a course, a product demo, or a sales pitch. It’s a testbed. The emphasis is practical: hands-on exploration of what general-purpose AI tools can and can’t do when pointed at everyday work.

    Intended audience: curious professionals, small business owners, writers, and community actors – anyone who works with words, data, or decisions.

    Mode of Operation: Small, hands-on sessions – sometimes co-located in person, otherwise working remotely.

    Outcomes: Better understanding of what AI can and cannot do in everyday contexts. A growing library of real examples and honest reflections. A trusted local presence in the AI literacy landscape.

    Founders’ position: Experienced, local professionals not selling AI services but exploring the practical use of AI tools. Not trying to be experts, but honest, curious testers of what’s actually useful. Hoping to pass the baton to a younger team.