Tag: lab setup

  • Drafting Anapoly’s first Lab Setup with ChatGPT

    Transparency label: AI‑heavy (ChatGPT model o3 produced primary content; Alec Fearon curated and lightly edited)


    Purpose of experiment

    Use ChatGPT as a thinking partner to create a first draft of an acclimatisation lab setup for Anapoly AI Labs and to compare output quality between two models (o3 and 4o).

    Author and date

    Alec Fearon, 24 June 2025

    Participants

    • Alec Fearon – experiment lead
    • ChatGPT model o3 – reasoning model
    • ChatGPT model 4o – comparison model

    Lab configuration and setup

    Interaction took place entirely in the ChatGPT Document Workbench. The first prompt was duplicated into two branches, each tied to a specific model. All files needed for reference (conceptual framework, lab note structure) were pre‑uploaded in the project space.

    Preamble

    Alec wanted a concise, critical first draft to stimulate team discussion. The exercise also served as a live test of whether o3’s “reasoning” advantage produced materially better drafts than the 4o model.

  • Assessing AI support for defining a value proposition

    Transparency label: AI-assisted

    Purpose of experiment

    To test whether a general-purpose AI (NotebookLM) can generate a credible, insightful articulation of Anapoly AI Labs’ value proposition when prompted appropriately.

    Author and date

    Alec Fearon, 22 June 2025

    Participants

    Alec Fearon, Ray Holland, NotebookLM (AI), Perplexity (AI)

    Lab configuration and setup

    This experiment was run using NotebookLM, in a notebook containing background files on Anapoly AI Labs. To explore how we might play to the strengths of different AIs, a follow-up was performed using Perplexity with the NotebookLM output as input.

    Procedure

    NotebookLM was given this prompt:

    “What is Anapoly AI Labs’ core value proposition and how does it differentiate itself?”

    NotebookLM’s response was then given to Perplexity, with this prompt:

    “How does Anapoly’s hands-on approach help me understand AI better in my work?”

    Findings

    NotebookLM responded with a detailed analysis of Anapoly’s strategic positioning:

    Anapoly AI Labs’ core value proposition lies in bridging the gap between AI curiosity and practical application for non-technical professionals and small businesses. They address the common issue where individuals are interested in AI but are unsure how to use it effectively in their daily work. Their purpose is to help these professionals explore how general-purpose AI can assist with everyday tasks, such as improving productivity, creating better quality output, or saving money.

    Anapoly AI Labs differentiates itself through a distinctive “modeling exploration” approach and its stance on credibility:

    • Hands-on, Practical Exploration over Theory or Expertise: Unlike traditional courses, product demos, or sales pitches, Anapoly AI Labs focuses on hands-on exploration of general-purpose AI tools in simulated real-world contexts. The emphasis is on demonstrating “what AI tools can and can’t do when pointed at everyday work,” rather than teaching complex AI theory or coding. This involves setting up specific “labs” for different market segments, such as a Small Consultancy Office or Independent Author, to apply AI to real problems.
    • Credibility Through Transparency and Shared Journey: A central aspect of their differentiation is their approach to credibility. They explicitly state they “don’t claim to be AI experts”. Instead, their credibility is built upon a “shared journey of discovery and documented practical insights”. As practitioners “in the early stages of building with the tools”, they aim to show that they are serious, curious, and hands-on. The AI itself helped refine this stance, suggesting that their value comes from “modeling exploration” and “asking the questions small businesses wish they had time to ask, and showing what happens when you do”. This is presented as more credible than a static CV or pre-existing expert status.
    • Public Documentation and Transparency: Anapoly AI Labs maintains an online presence called “Anapoly Online,” which functions as a “public notebook”. This platform is used to “document real tests, learning, missteps, and insights,” sharing what works and what doesn’t. It is designed to be “lean, understated, and quietly credible”, building a “body of work others can trust” bit by bit through openness and demonstrated effort. They also have a transparent system for disclosing AI involvement in all published material, using classifications like “Human-led,” “AI-assisted,” or “AI-heavy”.
    • Focus on the “Capability-Absorption Gap”: The labs directly address the “capability-absorption gap,” which refers to why businesses struggle to adopt AI despite its capabilities. By providing a low-barrier-to-entry, informal, and social environment for peer learning, they aim to make AI adoption more accessible and less confusing for non-technical users.

    Perplexity then provided the following simplified summary:

    “Anapoly’s hands-on approach helps you understand AI better in your work by letting you actually try out AI tools on real tasks that matter to you, instead of just reading about them or watching demos. This practical experience shows you what AI can and can’t do in your daily work, making it easier to see how it fits into your routine and where it can save you time or improve your results.

    You get to experiment in a safe, informal setting, learn from real examples, and see both successes and mistakes. This helps you build real confidence with AI, instead of just theory, and makes it much easier to spot where AI could help you most in your job.”

    Discussion of findings

    The NotebookLM output demonstrated that a general-purpose AI can synthesise a coherent and plausible value proposition from structured background material. Perplexity, by contrast, generated a shorter and more audience-friendly version in response to a focused, user-oriented question.

    Together, these outputs show complementary strengths: NotebookLM delivers a comprehensive internal formulation, while Perplexity extracts an accessible, benefit-led summary. Used in tandem, the tools help refine messaging for different audiences: internal strategists on the one hand, prospective participants on the other.

    Of particular interest is NotebookLM’s identification of the “capability-absorption gap”, a concise and useful term for a key problem that Anapoly AI Labs addresses. While the founders had recognised this issue in practical terms, the AI’s phrasing sharpens it into a strategic talking point. Framing Anapoly’s purpose in terms of reducing this gap may prove valuable in both internal planning and external communication.

    This experiment also highlights the value of re-prompting and testing across different AI models to triangulate clarity and tone.

    Recommendations

    1. Use AI tools like NotebookLM to draft key positioning statements, especially when materials are already well developed.
    2. Always review AI-generated value propositions critically. Look for overfitting, vagueness, or unearned claims.
    3. Use simpler AI prompts with tools like Perplexity to test how propositions land with a non-specialist audience.
    4. Consider publishing selected AI outputs as-is, but with clear disclosure and context-setting.
    5. Repeat this exercise periodically to test whether the value proposition evolves or ossifies.

    Tags
    value, lab-setup, worked, use-cases, prompting, ai-only, positioning, credibility, communication, capability-absorption-gap

    Glossary

    • Modeling Exploration: A term used to describe Anapoly AI Labs’ approach—testing and demonstrating AI use in practical contexts without claiming expertise.
    • Capability-Absorption Gap: The space between what AI tools can do and what users actually manage to adopt in real settings. First coined (in this context) by NotebookLM.
    • Public Notebook: Anapoly Online’s role as a transparent log of what was tried, what worked, and what didn’t.
    • General-purpose AI Tools: Tools like ChatGPT or NotebookLM that are not tailored to a specific domain but can assist with a wide range of tasks.
    • AI-only: A transparency label denoting that the content was fully generated by AI without human rewriting or editorial shaping.
    • Overfitting: In this context, an AI response that sticks too closely to the language or structure of source material, potentially limiting originality or insight.
    • Vagueness: A tendency in AI outputs to use safe, abstract phrases that lack specificity or actionable detail.
    • Unearned Claims: Assertions made by AI that sound impressive but are not substantiated by evidence or experience in the given context.

    Transparency Label Justification. The experimental outputs (NotebookLM and Perplexity responses) were AI-generated and included unedited. However, the lab note itself – its framing, interpretation, and derived recommendations – was co-written by a human and ChatGPT in structured dialogue.
    ChatGPT’s role included drafting the findings and recommendations, articulating the reasoning behind terms like “capability-absorption gap”, and refining the explanatory framing, tags, and glossary.

  • Working Towards a Strategy

    In the last few weeks, we’ve been thinking more clearly about how to describe what Anapoly AI Labs is for and how we intend to run it.

    The idea, from the start, was not to teach AI or to promote it, but to work out what it’s actually good for. Not in theory, but in the real day-to-day work of people like us. We’d do this by setting up simulated work environments (labs) and running real tasks with current AI tools to see what helps, what doesn’t, and what’s worth doing differently.

    To sharpen that thinking, I used ChatGPT as a sounding board. I gave it access to our core documents, including the notes from a deep dive discussion in NotebookLM about what we’re trying to build. Then I asked it to act as a thinking partner and help me write a strategy.

    The result is a good working draft that sets out our purpose, stance, methods, and measures of success. It’s a helpful starting point for discussion. One thing we’ve been clear about from the start: we don’t claim to be AI experts. We’re practitioners, working things out in public. That’s our stance, and the strategy captures it well.

    We’ve published the draft as a PDF. It explains how Anapoly AI Labs will work: how the labs are set up, what kind of people they’re for, how we plan to run sessions, and what success would look like. We now want to shift focus from shaping the idea to working out how to make it happen.

    Download the full document if you’d like to see more. We’d welcome thoughts, questions, or constructive criticism – we’re still working it out.