Conceptual Framework for Anapoly AI Labs
Transparency label: AI-assisted
Note: This document is an early draft, currently under review. Some of the terminology, in particular, is under debate.
Our stance
We don’t claim to be AI experts. We’re practitioners exploring how AI can be applied to real problems faced by professionals like us. We’re testing, documenting, and improving – in public. That’s our value.
The offer
How does Anapoly’s hands-on approach help me understand AI better in my work?
Anapoly’s hands-on approach lets you try AI tools on real tasks that matter to you, instead of just reading about them or watching demos. That practical experience shows you what AI can and can’t do in your daily work, making it easier to see where it fits into your routine and where it can save time or improve your results.
You experiment in a safe, informal setting, learn from real examples, and see both successes and mistakes. This builds genuine confidence with AI rather than just theory, and makes it much easier to spot where AI could help you most in your job.
Core definition
An Anapoly lab is a simulated working environment set up to test how AI tools perform in real-world tasks. Labs don’t teach AI; they reveal its behaviour through use. The purpose is not just to explore what AI can do, but also to help participants build the confidence and judgment needed to use it well.
Labs enable:
- Acclimatisation – first exposure, orientation, and familiarisation with AI tools and language
- Experimentation – practical trials to understand capabilities and limits
- Proof of concept – focused tests to determine whether AI tools can support specific tasks, processes and workflows
- Iteration – refining approaches and outputs through repeated trials, informed by feedback and earlier attempts
Types of lab
Each lab simulates a work domain. Within a domain, there may be multiple working contexts or scenarios, each providing a different lens for exploration.
Examples of domains and contexts:
- Consultancy
Contexts: small consultancy team members, freelance bid writer
- Creative Authorship
Contexts: memoir writing, family storytelling, publishing workflow
- Community Non-Profit
Contexts: campaign group, volunteer-run organisation, community media
- Retail and Services
Contexts: small shop, local tradesperson, independent therapist
- Academic or Research Straddler
Contexts: semi-retired lecturer, MSc student, club organiser
Modes of engagement
Labs support different modes and depths of engagement:
Acclimatisation
- For newcomers to AI
- Gentle exposure to prompts, outputs, and concepts
- Encourages questioning and scepticism before application
- May involve guided demonstrations, tool walkthroughs, or annotated examples
Experimentation
- Structured or open-ended trials using realistic tasks
- Focus on how to prompt, what works, what fails
- May include prompt tuning, error analysis, or process re-runs
- Experiment with domain-specific use cases and workflows
- Develop concepts for applying AI in domain-specific processes
Proof of concept
- Goal-driven tests for specific problems or use cases
- Seeks to answer: “Could this actually work for me?”
Iteration
- Reworking previous attempts using feedback, insights, or new tools
- Begins to build reusability and generalisation
Participant roles
We use “participant” as a general term for anyone involved in a lab. Within that, three main roles are distinguished:
- Observer – Watches AI in use, reflects, and asks questions without interacting directly
- Explorer – Actively tries prompts, evaluates responses, and engages with the tools
- Facilitator – Sets up and runs the lab, provides guidance, and curates outputs
Lab setup and structure
Each lab setup includes:
- Name and Purpose – A short title and clear reason for the lab
- Mode of Engagement – Whether the session is for acclimatisation, experimentation, proof of concept, or iteration
- Simulated Context – What kind of real-world work is being modelled
- Participants and Roles – Who’s involved and how
- Task Set – The specific activities or challenges being explored
- Tool Configuration – Which AI and non-AI tools are used, and how they are set up
- Resources – A file library, sample documents, and any relevant data inputs
- Lab Outputs – The types of output that matter for this lab, and the facilities needed to produce them (e.g. templates, folder structure)
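As a purely illustrative aid, the sketch below shows how a lab setup with these elements might be recorded as a simple structured record. The field names and sample values (a bid-writing lab in the consultancy domain) are hypothetical; this is one possible way of capturing a setup, not part of any current Anapoly tooling.

```python
from dataclasses import dataclass, field

@dataclass
class LabSetup:
    """Hypothetical record of the lab setup elements listed above."""
    name: str                       # Name and Purpose: short title...
    purpose: str                    # ...and the clear reason for the lab
    mode: str                       # Mode of Engagement: acclimatisation, experimentation, proof of concept, or iteration
    simulated_context: str          # Simulated Context: the real-world work being modelled
    participants: dict[str, str]    # Participants and Roles: who is involved, mapped to observer/explorer/facilitator
    task_set: list[str]             # Task Set: the activities or challenges being explored
    tool_configuration: list[str]   # Tool Configuration: AI and non-AI tools, and how they are set up
    resources: list[str]            # Resources: file library, sample documents, data inputs
    lab_outputs: list[str] = field(default_factory=list)  # Lab Outputs: output types that matter, plus templates/folders

# Illustrative example only: a proof-of-concept lab in the consultancy domain
example = LabSetup(
    name="Bid-writing trial",
    purpose="Test whether an AI assistant can draft usable first-pass bid text",
    mode="proof of concept",
    simulated_context="freelance bid writer",
    participants={"Alec": "facilitator", "Guest": "explorer"},
    task_set=["summarise an invitation to tender", "draft a response outline"],
    tool_configuration=["participant's own AI tool (free or paid licence)"],
    resources=["sample tender documents"],
    lab_outputs=["prompt-output pairs", "lab notes"],
)
```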
Our position is that people take part in a lab session to explore, alongside fellow explorers, what they can do with their own AI tools. They can begin with a free licence, if that’s all they have. Then, as we and others demonstrate the greater capability of the paid licences, free users may choose to upgrade in order to continue their AI learning curve. The point is that we are not delivering a training course for which participants pay; were that so, we would be expected to provide the necessary training materials. In our model, participants bring along the tools they want to experiment with, to mutual benefit.
Outcomes
Labs generate value in multiple forms: not just outputs, but understanding, confidence, and practical insight. These outcomes may arise during the session or emerge through later reflection and reuse.
Types of value include:
- Practical insight – What AI tools help with, where they fall short, and how to work with their quirks
- Confidence and capability – Participants gain familiarity, reduce anxiety, and become more willing to explore
- Use-case refinement – A vague idea becomes a better-shaped task, with clearer expectations and improved prompts
- Reusable components – Prompt-output pairs, templates, or notes that can feed future labs or be reused by others
- Narrative material – Raw experience captured as diary posts, lab notes, or reflections on what worked and what didn’t
- Inputs for iteration – Insights that shape future experiments or point to promising next steps
- Trust-building – Sharing missteps and progress openly builds credibility, especially for those unsure where to begin
These outcomes matter even if the AI doesn’t perform well. Failures are often the most instructive results.
Transparency Label Justification. This canvas was developed through a structured and iterative collaboration between Alec Fearon and ChatGPT. Alec directed the purpose, structure, and terminology of the document, prompted each section’s development, and critically reviewed all contributions. ChatGPT generated draft content, proposed structural changes, and refined language under Alec’s supervision. The ideas and framing reflect shared authorship, with Alec clearly in charge of direction and tone.