Category: diary

A public record of the development of our ideas

  • Use cases for NotebookLM

    Posting in his Substack Adjacent Possible, Steven Johnson discusses how “language models are opening new avenues for inquiry in historical research and writing”. He suggests they can act as collaborative tools rather than replacements for the writer’s engagement with primary sources.

    Johnson argues that NotebookLM is designed to facilitate rather than replace the reading of original sources. It does so by making the entire source readable within the app and by providing inline citations linked directly to the original material.

    He identifies some interesting use cases.

    The AI can be a tool for collaborative brainstorming by allowing users to explore different hypotheses and see patterns within personally curated sources.

    NotebookLM can be used for targeted information retrieval.

    • It can help “fill in blank spots” or remind users of forgotten details from their readings.
    • The tool is valuable for fact-checking against uploaded source material.
    • For specific information, like in a car manual, it can provide direct answers to questions through a conversational Q&A format.

    It can enhance serendipitous discovery by suggesting surprising, less obvious connections amongst the sources.

    It can create mind maps from the sources, in effect indexing them on the fly.

    Finally, he speculates on a future where e-books could come with a NotebookLM-like interface. This would bundle the main work together with all the original sources used by the author, enabling “timelines, mind maps, and explanations of key themes, anything you can think to ask”.

  • How ChatGPT helped draft our first acclimatisation lab setup

    Date: 24 June 2025

    transparency label: AI-heavy

    Our latest Lab Note records a quick experiment where I asked two ChatGPT models to draft the outline for an “acclimatisation” session – the starter lab we plan to run with newcomers to AI.

    Highlights:

    • Model face‑off: I ran the same prompt in parallel through o3 and 4o. The reasoning‑focused o3 delivered a tight nine‑part outline; 4o wandered off‑piste.
    • Time cost: The branch test took three minutes and gave us a clear winner.
    • Transparency: The Lab Note carries an AI‑heavy label because most of the prose came straight from o3. I trimmed, corrected one hallucination, and signed off.

    If you are curious about our process or want to see how structured prompting keeps the bot on track, read the full note here: First Acclimatisation Session Lab Note →

  • No substitute for reading the paper

    transparency label: Human-only

    … what I can say is that a theme throughout this self-analysis is this: I find ChatGPT to be a really useful tool when I already have some idea of what I want to do and when I’m actually engaged with the issue. I find it much less reliable or useful for completely automating parts of the process. There’s no substitute for reading the paper.

    Source: Sean Trott in his newsletter How I use (and don’t use) ChatGPT on 24 June 2025

  • ChatGPT models: which to use when?

    transparency label: Human-only

    • ChatGPT-4o – fast; for brainstorming, quick questions, general chat
    • o3 – powerful; for serious work (analysis, writing, research, coding)
    • o3-pro – ultra-powerful; for the hardest problems

    Source: One Useful Thing, Substack newsletter by Ethan Mollick, 23 June 2025

  • That was the moment …

    transparency label: Human-only

    It hit me that generative AI is the first kind of technology that can tell you how to use itself. You ask it what to do, and it explains the tool, the technique, the reasoning—it teaches you. And that flipped something for me. It stopped being a support tool and became more like a co-founder.

    A quote from BRXND Dispatch, a Substack newsletter by Noah Brier, which featured Craig Hepburn, former Chief Digital Officer at Art Basel.

  • Mapping the territory: a conceptual framework for our labs


    transparency label: AI-assisted

    As Anapoly AI Labs begins to take clearer shape, we’ve stepped back to ask: what exactly is a lab, and how should we think about the different types we’re running or planning?

    We now have an answer in the form of a framework that describes what labs are for, how they vary, and what kind of value they generate. It will help us design better labs and give participants a sense of where they are and what comes next as they learn to make use of general-purpose AI tools. The framework will evolve in line with our thinking.

    The framework defines four key functions a lab can serve:

    • Acclimatisation – helping people get comfortable with AI tools
    • Experimentation – trying out tasks to see what works and what doesn’t
    • Proof of concept – asking whether AI could handle a specific challenge
    • Iteration – going back to improve on an earlier result

    It also distinguishes between domains (like consultancy or authorship) and contexts (like “a solo consultant writing a project bid”). Labs are set up to reflect domain + context.

    The framework defines a simple set of participant roles – observer, explorer, and facilitator – and outlines the kinds of outcomes we’re hoping for: confidence, insight, and learning.
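
    To make the shape of the framework easier to see at a glance, here is a minimal, purely illustrative sketch of its pieces expressed as Python data structures. The class and field names are shorthand invented for this post; they are not part of the framework document or of any Anapoly tooling.

      from dataclasses import dataclass, field
      from enum import Enum

      class Function(Enum):
          # The four functions a lab can serve
          ACCLIMATISATION = "acclimatisation"    # getting comfortable with AI tools
          EXPERIMENTATION = "experimentation"    # trying tasks to see what works and what doesn't
          PROOF_OF_CONCEPT = "proof of concept"  # could AI handle a specific challenge?
          ITERATION = "iteration"                # going back to improve on an earlier result

      class Role(Enum):
          OBSERVER = "observer"
          EXPLORER = "explorer"
          FACILITATOR = "facilitator"

      @dataclass
      class Lab:
          # A lab reflects domain + context, serves one or more functions,
          # involves participants in defined roles, and aims at named outcomes.
          domain: str                    # e.g. "consultancy"
          context: str                   # e.g. "a solo consultant writing a project bid"
          functions: list                # one or more Function values
          roles: list = field(default_factory=lambda: [Role.FACILITATOR, Role.EXPLORER])
          outcomes: list = field(default_factory=lambda: ["confidence", "insight", "learning"])

      # Example: a starter lab for newcomers
      starter_lab = Lab(
          domain="consultancy",
          context="a solo consultant writing a project bid",
          functions=[Function.ACCLIMATISATION],
      )

    The point of the sketch is simply that every lab can be described by the same small set of attributes, which makes labs easy to compare and to plan.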

    The full conceptual framework is here, and we’ll continue to refine it as our practice develops.

    This diary post is part of our public notebook. It helps document not just what we’ve tried, but how we’re thinking and rethinking as we go.


    Transparency Label Justification: This post was developed in dialogue with ChatGPT. Alec directed the content, structure, and tone, while ChatGPT contributed drafts, edits, and structural suggestions. All decisions about framing and language were reviewed and approved by Alec. This collaborative process fits the “AI-assisted” classification under the Anapoly Online transparency framework.


  • Sandboxes

    transparency label: Human-led

    The EU AI Act establishes a risk-based classification system for AI systems: compliance requirements depend on the risk a system poses to users, with the heaviest obligations falling on systems classed as unacceptable (prohibited) or high-risk. General-purpose AI systems like ChatGPT are not classified as high-risk but are subject to specific transparency requirements and must comply with EU copyright law.

    Also, the Act “aims to support AI innovation and start-ups in Europe, allowing companies to develop and test general-purpose AI models before public release. That is why it requires that national authorities provide companies with a testing environment for AI that simulates conditions close to the real world. This will help small and medium-sized enterprises (SMEs) compete in the growing EU artificial intelligence market.”

    Unlike the EU, the UK has no general-purpose AI sandbox. The UK takes a sector-led approach to AI oversight, relying on existing regulators to operate their own sandbox initiatives under the government’s pro-innovation framework. Each sandbox is designed around the compliance needs and risk profiles of its domain. Existing sandboxes are focused on sector-specific or compliance-heavy contexts, for example:

    • FCA AI Sandbox (2025) – Financial services only; supports firms developing or integrating AI tools into fintech workflows.
    • ICO Regulatory Sandbox – Suitable for testing AI applications involving personal data, especially where GDPR-like safeguards are needed.
    • MHRA AI Airlock – For AI used in medical devices.

    These UK sandboxes are geared toward testing purpose-built AI tools or their integration into regulated industry IT systems. There is no current sandbox designed for SMEs exploring general-purpose AI like ChatGPT in everyday, low-risk tasks. While tools like ChatGPT could feature in sandbox trials (e.g. via the ICO for data privacy concerns or the FCA for financial services), these environments are structured around defined compliance challenges rather than routine experimentation, and they require a specific project with clear compliance aims.

    A search by ChatGPT found no evidence of any current provision for open-ended, exploratory use of general-purpose AI by SMEs in unregulated or lightly regulated workflows. Anapoly AI Labs can occupy that space, modelling practical, credible AI use outside formal regulation.

    In that context, Anapoly’s modelling approach could include a cycle of experimentation followed by proof of concept (POC). A proof of concept is a small-scale test to check whether an idea works before investing in full implementation.

    Transparency Label Justification. This piece was authored by Alec Fearon. ChatGPT was used to assist with regulatory research (e.g. the EU AI Act and UK sandbox landscape), structure the argument, and refine language. The ideas, framing, and conclusions are Alec’s. AI contributed supporting detail and wording options, but did not generate or structure the content independently. 

  • Coping with newsletters

    transparency label: Human-led

    I subscribe to quite a lot of newsletters, covering topics that interest me, but it’s difficult to find the time to work out which issues merit a closer look. I remembered reading that Azeem Azhar solves this problem by telling an AI what kinds of things he is looking for, giving it a week’s worth of newsletters, and asking it for a concise summary of what might interest him. So I followed his example. I gave NotebookLM background information about Anapoly AI Labs and asked it for a detailed prompt that would flag anything in my newsletters that might suit a diary post.

    When I used that prompt, one of the points it picked up was that our transparency framework ties in well with the EU AI Act, which has a section on transparency requirements. That prompted me to look into the UK’s approach. I learned that, instead of regulations, we have regulatory principles for the guidance of existing regulatory bodies such as the Information Commissioner’s Office or Ofcom. The principles cover:

    • Safety, security & robustness
    • Appropriate transparency and explainability
    • Fairness
    • Accountability and governance
    • Contestability and redress

    It occurred to me that some of the experimentation in our labs could focus on these aspects, with individual labs set up to specialise in one of them, for example. Equally, we might explore how the regulatory principles could form the basis of a quality assurance framework for business outputs created with AI involvement. This will become an important consideration for small businesses and consultancies.

    A final thought: any enterprise doing business in the EU must comply with the EU AI Act. That alone could justify a focused lab setup. We might simulate how a small consultancy could meet the Act’s transparency and accountability requirements when using general-purpose AI tools, modelling practical compliance, not just reading the rules. This, too, might merit some experimentation by Anapoly AI Labs. 

    Transparency Label Justification. This post was drafted by Alec Fearon. Newsletter filtering was supported by NotebookLM as a separate exercise. ChatGPT was used to revise wording and clarify structure. All reflections and framing are human-authored.

  • How we flag AI involvement in what we publish

    transparency label: AI-assisted

    Most of what we publish here is written with the help of AI. That’s part of the point. Anapoly AI Labs is about trying these tools out on real work and seeing what they’re good for.

    To keep things transparent, we label every post to show how much AI was involved. The label links to our transparency framework, which doesn’t try to assign percentages. Instead, we use a straightforward five-level scale:

    • Human-only: Entirely human-authored. No AI involvement at any stage of development.
    • Human-led: Human-authored, with AI input limited to suggestions, edits, or fact-checking.
    • AI-assisted: AI was used to draft, edit, or refine content. Human authors directed the process.
    • AI-heavy: AI played a large role in drafting or synthesis. Human authors curated and finalised the piece.
    • AI-only: Fully generated by AI without human input beyond the original prompt. No editing or revision.

    We sometimes add the following:

    • Justification: A brief note explaining why we chose a particular label.
    • Chat Summary: A short summary of what the AI contributed to the piece.
    • Full Transcript: A link to the full chat behind a piece, lightly edited for clarity and privacy, when it contains something worth reading.
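
    For anyone who wants to wire these labels into a publishing workflow, here is a small illustrative sketch, in Python, of how the scale and the optional extras might be represented. The class and field names are hypothetical; they are not part of any Anapoly tooling.

      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional

      class TransparencyLabel(Enum):
          # The five-level scale described above
          HUMAN_ONLY = "Human-only"
          HUMAN_LED = "Human-led"
          AI_ASSISTED = "AI-assisted"
          AI_HEAVY = "AI-heavy"
          AI_ONLY = "AI-only"

      @dataclass
      class PostTransparency:
          # Transparency metadata attached to a published post
          label: TransparencyLabel
          justification: Optional[str] = None   # why we chose this label
          chat_summary: Optional[str] = None    # what the AI contributed
          transcript_url: Optional[str] = None  # link to the full chat, when worth reading

      # Example: how this post itself might be tagged
      this_post = PostTransparency(
          label=TransparencyLabel.AI_ASSISTED,
          justification="Human-defined purpose and structure; AI drafted and tightened prose.",
      )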

    Transparency Label Justification. This post was developed collaboratively. The human author defined the purpose, structure, and tone; the AI assisted with drafting, tightening prose, applying style conventions, and rewording for clarity. The final version reflects a human-led editorial process with substantial AI input at multiple stages.

  • Working Towards a Strategy

    In the last few weeks, we’ve been thinking more clearly about how to describe what Anapoly AI Labs is for and how we intend to run it.

    The idea, from the start, was not to teach AI or to promote it, but to work out what it’s actually good for. Not in theory, but in the real day-to-day work of people like us. We’d do this by setting up simulated work environments (labs) and running real tasks with current AI tools to see what helps, what doesn’t, and what’s worth doing differently.

    To sharpen that thinking, I used ChatGPT as a sounding board. I gave it access to our core documents, including the notes from a deep dive discussion in NotebookLM about what we’re trying to build. Then I asked it to act as a thinking partner and help me write a strategy.

    The result is a good working draft that sets out our purpose, stance, methods, and measures of success. It’s a helpful starting point for discussion. One thing we’ve been clear about from the start: we don’t claim to be AI experts. We’re practitioners, working things out in public. That’s our stance, and the strategy captures it well.

    We’ve published the draft as a PDF. It explains how Anapoly AI Labs will work: how the labs are set up, what kind of people they’re for, how we plan to run sessions, and what success would look like. We now want to shift focus from shaping the idea to working out how to make it happen.

    Download the full document if you’d like to see more. We’d welcome thoughts, questions, or constructive criticism – we’re still working it out.