Tag: participant roles

  • First thoughts on a lab framework

    Transparency label: Human-only

    A few hours spent with ChatGPT-o3 resulted in a good first draft of a framework for thinking about our labs. It covers:

    • types of lab
    • the roles of people involved with the labs
    • the core technical configuration of a lab
    • assets needed to launch, operate, and archive a lab
    • a naming convention for these assets

    No doubt the framework will need to be tweaked and added to as our ideas mature.

    The chat with o3 was a valuable mind-clearing exercise for me, and I was impressed by how much more “intellectual” it is compared to the 4o model. Like many intellectuals, it also displayed a lack of common sense on occasion, especially when I asked for simple formatting corrections to the canvas we were editing together. The 4o model is much more agile in that respect.

    During the chat, when the flow with ChatGPT didn’t feel right, I hopped back and forth to consult with Perplexity and NotebookLM. Their outputs provided interestingly and usefully different perspectives that helped to clear the logjam.

    A decision arising from my joint AI consultation was the choice of Google Workspace as the office productivity suite within our labs. It will allow much better collaboration when using office tools on personal licences than would be the case with Microsoft Office 365. Given the ad hoc nature of labs and the cost constraints we have, this is an important consideration.

  • Mapping the territory: a conceptual framework for our labs

    Transparency label: AI-assisted

    As Anapoly AI Labs begins to take clearer shape, we’ve stepped back to ask: what exactly is a lab, and how should we think about the different types we’re running or planning?

    We now have an answer in the form of a framework that describes what labs are for, how they vary, and what kind of value they generate. It will help lab participants see where they are and what comes next as they learn to make use of general-purpose AI tools, and it will help us design better labs. The framework will evolve in line with our thinking about all of this.

    The framework defines four key functions a lab can serve:

    • Acclimatisation – helping people get comfortable with AI tools
    • Experimentation – trying out tasks to see what works and what doesn’t
    • Proof of concept – asking whether AI could handle a specific challenge
    • Iteration – going back to improve on an earlier result

    It also distinguishes between domains (like consultancy or authorship) and contexts (like “a solo consultant writing a project bid”). Labs are set up to reflect domain + context.

    The framework defines a simple set of participant roles – observer, explorer, and facilitator – and outlines the kinds of outcomes we’re hoping for: confidence, insight, and learning.

    The full conceptual framework is here, and we’ll continue to refine it as our practice develops.

    This diary post is part of our public notebook. It helps document not just what we’ve tried, but how we’re thinking and rethinking as we go.

    Transparency Label Justification: This post was developed in dialogue with ChatGPT. Alec directed the content, structure, and tone, while ChatGPT contributed drafts, edits, and structural suggestions. All decisions about framing and language were reviewed and approved by Alec. This collaborative process fits the “AI-assisted” classification under the Anapoly Online transparency framework.