Diary Posts

  • Use cases for NotebookLM

    Posting in his Substack Adjacent Possible, Steven Johnson discusses how “language models are opening new avenues for inquiry in historical research and writing”. He suggests they can act as collaborative tools rather than replacements for the writer’s engagement with primary sources. Johnson argues that NotebookLM is designed to facilitate rather than replace the reading of…

    Read more >>>

  • How ChatGPT helped draft our first acclimatisation lab setup

    Date: 24 June 2025. Transparency label: AI-heavy. Our latest Lab Note records a quick experiment in which I asked two ChatGPT models to draft the outline for an “acclimatisation” session – the starter lab we plan to run with newcomers to AI. Highlights: … If you are curious about our process or want to see how structured prompting keeps…

    Read more >>>

  • No substitute for reading the paper

    Transparency label: Human-only. … what I can say is that a theme throughout this self-analysis is this: I find ChatGPT to be a really useful tool when I already have some idea of what I want to do and when I’m actually engaged with the issue. I find it much less reliable or useful for completely automating…

    Read more >>>

  • ChatGPT models: which to use when?

    Transparency label: Human-only.
    ChatGPT-4o: fast; for brainstorming, quick questions, general chat.
    o3: powerful; for serious work (analysis, writing, research, coding).
    o3-pro: ultra-powerful; for the hardest problems.
    Source: One Useful Thing, Substack newsletter by Ethan Mollick, 23 June 2025.

    Read more >>>

  • That was the moment …

    Transparency label: Human-only. It hit me that generative AI is the first kind of technology that can tell you how to use itself. You ask it what to do, and it explains the tool, the technique, the reasoning: it teaches you. And that flipped something for me. It stopped being a support tool and became…

    Read more >>>

  • Mapping the territory: a conceptual framework for our labs

    Transparency label: AI-assisted. As Anapoly AI Labs begins to take clearer shape, we’ve stepped back to ask: what exactly is a lab, and how should we think about the different types we’re running or planning? We now have an answer in the form of a framework that describes what labs are for, how they vary, and…

    Read more >>>

  • Sandboxes

    Transparency label: Human-led. The EU AI Act establishes a risk-based classification system for AI systems. Compliance requirements depend on the risk a system poses to users, with tiers ranging from unacceptable and high risk down to limited and minimal risk. General-purpose AI systems like ChatGPT are not classified as high-risk but are subject to specific transparency requirements and must comply with EU copyright…

    Read more >>>

  • Coping with newsletters

    Transparency label: Human-led. I subscribe to quite a lot of newsletters covering topics that interest me, but it’s difficult to find the time to work out which issues merit a closer look. I remembered reading that Azeem Azhar solves this problem by telling an AI what kinds of things he is looking for, giving… (A sketch of how such a filter might look follows below.)

    Read more >>>
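    The post doesn’t show the actual setup, but the idea is easy to sketch. Below is a minimal, hypothetical Python example of the approach: send each newsletter issue to a language model along with a statement of interests, and keep the issues it flags as relevant. The interests text, prompt wording, and model choice are illustrative assumptions, not details from the post.

    ```python
    # Hypothetical sketch of AI-assisted newsletter triage (assumptions noted inline).
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Illustrative statement of interests; the actual criteria are not in the post.
    INTERESTS = "practical uses of generative AI for small organisations"

    def worth_a_closer_look(newsletter_text: str) -> bool:
        """Ask the model whether one newsletter issue matches the stated interests."""
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model choice
            messages=[
                {
                    "role": "system",
                    "content": (
                        f"You triage newsletters. The reader cares about: {INTERESTS}. "
                        "Answer YES or NO: does this issue merit a closer look?"
                    ),
                },
                {"role": "user", "content": newsletter_text[:8000]},  # cap very long issues
            ],
        )
        return response.choices[0].message.content.strip().upper().startswith("YES")

    # Usage: given a list of fetched newsletter bodies,
    # to_read = [n for n in newsletters if worth_a_closer_look(n)]
    ```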

  • How we flag AI involvement in what we publish

    Transparency label: AI-assisted. Most of what we publish here is written with the help of AI. That’s part of the point. Anapoly AI Labs is about trying these tools out on real work and seeing what they’re good for. To keep things transparent, we label every post to show how much AI was involved. The label… (A hypothetical sketch of how these labels might be checked appears below.)

    Read more >>>
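    The four labels that appear across this diary (Human-only, Human-led, AI-assisted, AI-heavy) form a small fixed vocabulary, so a publishing step could check them mechanically. The sketch below is hypothetical: the front-matter field name and the validation function are assumptions for illustration, not part of our actual tooling.

    ```python
    # Hypothetical validation of a post's transparency label at publish time.
    from enum import Enum

    class TransparencyLabel(Enum):
        # Values as they appear across the diary posts.
        HUMAN_ONLY = "Human-only"
        HUMAN_LED = "Human-led"
        AI_ASSISTED = "AI-assisted"
        AI_HEAVY = "AI-heavy"

    def validate_label(front_matter: dict) -> TransparencyLabel:
        """Raise if a post is missing its label or uses an unknown value."""
        raw = front_matter.get("transparency_label")  # assumed field name
        if raw is None:
            raise ValueError("post has no transparency label")
        return TransparencyLabel(raw)  # raises ValueError on unknown label values

    # Usage:
    # validate_label({"transparency_label": "AI-assisted"})  # -> TransparencyLabel.AI_ASSISTED
    ```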