Tag: lab design

  • Levi Swims


    The Levi Swims lab brief describes:

    • the scenario
    • purpose of the lab
    • opportunities for the use of AI
    • strengths of AI in this context
    • mapping and analysis of AI strengths across high-level processes
    • the approach to lab work.

    Principles of the three-layer model of AI context are applied to this work.

    The Booking Admin Copilot comprises several components.

    Preliminary tests with the Booking Admin Copilot running on ChatGPT were successful. In these tests, the copilot was given booking request emails and tasked with:

    • extracting structured data from these emails,
    • identifying any missing information, and
    • drafting clear responses.

    When more comprehensive testing of the Booking Admin Copilot functions has been completed, the focus will shift to exploring how to deploy the copilot in a sandboxed, local-only AI configuration: that is, running it on a local computer with no internet connection. This conceptual architecture (for a local-only AI system suitable for a micro-enterprise) illustrates the general approach we propose to experiment with.

  • Mapping the territory: a conceptual framework for our labs


    Transparency label: AI-assisted

    As Anapoly AI Labs begins to take clearer shape, we’ve stepped back to ask: what exactly is a lab, and how should we think about the different types we’re running or planning?

    We now have an answer in the form of a framework that describes what labs are for, how they vary, and what kind of value they generate. It will help us give lab participants a sense of where they are and what comes next as they work out how to make use of general-purpose AI tools. It will also help us design better labs, and it will evolve in line with our thinking on all of these things.

    The framework defines four key functions a lab can serve:

    • Acclimatisation – helping people get comfortable with AI tools
    • Experimentation – trying out tasks to see what works and what doesn’t
    • Proof of concept – asking whether AI could handle a specific challenge
    • Iteration – going back to improve on an earlier result

    It also distinguishes between domains (like consultancy or authorship) and contexts (like “a solo consultant writing a project bid”). Labs are set up to reflect domain + context.

    The framework defines a simple set of participant roles – observer, explorer, and facilitator – and outlines the kinds of outcomes we’re hoping for: confidence, insight, and learning.

    The full conceptual framework is here, and we’ll continue to refine it as our practice develops.

    This diary post is part of our public notebook. It helps document not just what we’ve tried, but how we’re thinking and rethinking as we go.


    Transparency Label Justification: This post was developed in dialogue with ChatGPT. Alec directed the content, structure, and tone, while ChatGPT contributed drafts, edits, and structural suggestions. All decisions about framing and language were reviewed and approved by Alec. This collaborative process fits the “AI-assisted” classification under the Anapoly Online transparency framework.