Tag: use cases

  • Use cases for NotebookLM

    Posting in his Substack Adjacent Possible, Steven Johnson discusses how “language models are opening new avenues for inquiry in historical research and writing”. He suggests they can act as collaborative tools, rather than replacements for the writer’s engagement with primary sources.

    Johnson argues that NotebookLM is designed to facilitate rather than replace the reading of original sources. It does so by making the entire source readable within the app, and by providing inline citations linked directly to the original material.

    He identifies some interesting use cases.

    The AI can be a tool for collaborative brainstorming by allowing users to explore different hypotheses and see patterns within personally curated sources.

    NotebookLM can be used for targeted information retrieval.

    • It can help “fill in blank spots” or remind users of forgotten details from their readings.
    • The tool is valuable for fact-checking against uploaded source material.
    • For specific information, like in a car manual, it can provide direct answers to questions through a conversational Q&A format.

    It can enhance serendipitous discovery by suggesting surprising, less obvious connections amongst the sources.

    It can create mind maps from the sources, in effect indexing them on the fly.

    Finally, he speculates on a future in which e-books could come with a NotebookLM-like interface. This would bundle the main work together with all the original sources used by the author, enabling “timelines, mind maps, and explanations of key themes, anything you can think to ask”.

  • Mapping the territory: a conceptual framework for our labs


    Transparency label: AI-assisted

    As Anapoly AI Labs begins to take clearer shape, we’ve stepped back to ask: what exactly is a lab, and how should we think about the different types we’re running or planning?

    We now have an answer in the form of a framework that describes what labs are for, how they vary, and what kind of value they generate. It will help us give lab participants a sense of where they are and what comes next as they learn to make use of general-purpose AI tools. It will also help us design better labs, and it will evolve in line with our thinking about all these things.

    The framework defines four key functions a lab can serve:

    • Acclimatisation – helping people get comfortable with AI tools
    • Experimentation – trying out tasks to see what works and what doesn’t
    • Proof of concept – asking whether AI could handle a specific challenge
    • Iteration – going back to improve on an earlier result

    It also distinguishes between domains (like consultancy or authorship) and contexts (like “a solo consultant writing a project bid”). Labs are set up to reflect domain + context.

    The framework defines a simple set of participant roles – observer, explorer, and facilitator – and outlines the kinds of outcomes we’re hoping for: confidence, insight, and learning.
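
    To make the framework’s moving parts concrete, here is a minimal sketch of how a lab might be described in code. It is purely illustrative: the field names and example values are assumptions, not part of the framework itself.

      from dataclasses import dataclass

      # Illustrative sketch only: the framework is conceptual, not software.
      @dataclass
      class Lab:
          function: str   # acclimatisation, experimentation, proof of concept, or iteration
          domain: str     # e.g. "consultancy" or "authorship"
          context: str    # e.g. "a solo consultant writing a project bid"
          roles: tuple = ("observer", "explorer", "facilitator")
          outcomes: tuple = ("confidence", "insight", "learning")

      # A lab reflects domain + context and serves one of the four functions.
      bid_lab = Lab(
          function="experimentation",
          domain="consultancy",
          context="a solo consultant writing a project bid",
      )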

    The full conceptual framework is here, and we’ll continue to refine it as our practice develops.

    This diary post is part of our public notebook. It helps document not just what we’ve tried, but how we’re thinking and rethinking as we go.


    Transparency Label Justification: This post was developed in dialogue with ChatGPT. Alec directed the content, structure, and tone, while ChatGPT contributed drafts, edits, and structural suggestions. All decisions about framing and language were reviewed and approved by Alec. This collaborative process fits the “AI-assisted” classification under the Anapoly Online transparency framework.


  • Assessing AI support for defining a value proposition

    Transparency label: AI-assisted

    Purpose of Experiment

    To test whether a general-purpose AI (NotebookLM) can generate a credible, insightful articulation of Anapoly AI Labs’ value proposition when prompted appropriately.

    Author and Date

    Alec Fearon, 22 June 2025

    Participants

    Alec Fearon, Ray Holland, NotebookLM (AI), Perplexity (AI)

    Lab Configuration and Setup

    This experiment was run using NotebookLM, in a notebook containing background files on Anapoly AI Labs. To explore how we might play to the strengths of different AIs, a follow-up was performed using Perplexity with the NotebookLM output as input.

    Procedure

    NotebookLM was given this prompt:

    “What is Anapoly AI Labs’ core value proposition and how does it differentiate itself?”

    NotebookLM’s response was then given to Perplexity, with this prompt:

    “How does Anapoly’s hands-on approach help me understand AI better in my work?”
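
    As an aside for anyone wanting to automate this kind of relay, the pattern is simply to embed the first model’s answer in the prompt sent to the second. Below is a minimal sketch; the ask() helper is hypothetical (NotebookLM has no public API), and in this lab the hand-off was done by copy and paste.

      # Minimal sketch of the two-step relay used in this experiment.
      # ask() is a hypothetical placeholder; in practice the hand-off
      # between NotebookLM and Perplexity was done manually.
      def ask(model: str, prompt: str) -> str:
          """Send a prompt to the named model and return its reply."""
          raise NotImplementedError("wire this up to your AI service, or do it by hand")

      analysis = ask(
          "notebooklm",
          "What is Anapoly AI Labs’ core value proposition and how does it "
          "differentiate itself?",
      )

      summary = ask(
          "perplexity",
          "Based on the analysis below, how does Anapoly’s hands-on approach "
          "help me understand AI better in my work?\n\n" + analysis,
      )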

    Findings

    NotebookLM responded with a detailed analysis of Anapoly’s strategic positioning:

    Anapoly AI Labs’ core value proposition lies in bridging the gap between AI curiosity and practical application for non-technical professionals and small businesses. They address the common issue where individuals are interested in AI but are unsure how to use it effectively in their daily work. Their purpose is to help these professionals explore how general-purpose AI can assist with everyday tasks, such as improving productivity, creating better quality output, or saving money.

    Anapoly AI Labs differentiates itself through a distinctive “modeling exploration” approach and its stance on credibility:

    • Hands-on, Practical Exploration over Theory or Expertise: Unlike traditional courses, product demos, or sales pitches, Anapoly AI Labs focuses on hands-on exploration of general-purpose AI tools in simulated real-world contexts. The emphasis is on demonstrating “what AI tools can and can’t do when pointed at everyday work,” rather than teaching complex AI theory or coding. This involves setting up specific “labs” for different market segments, such as a Small Consultancy Office or Independent Author, to apply AI to real problems.
    • Credibility Through Transparency and Shared Journey: A central aspect of their differentiation is their approach to credibility. They explicitly state they “don’t claim to be AI experts”. Instead, their credibility is built upon a “shared journey of discovery and documented practical insights”. As practitioners “in the early stages of building with the tools”, they aim to show that they are serious, curious, and hands-on. The AI itself helped refine this stance, suggesting that their value comes from “modeling exploration” and “asking the questions small businesses wish they had time to ask, and showing what happens when you do”. This is presented as more credible than a static CV or pre-existing expert status.
    • Public Documentation and Transparency: Anapoly AI Labs maintains an online presence called “Anapoly Online,” which functions as a “public notebook”. This platform is used to “document real tests, learning, missteps, and insights,” sharing what works and what doesn’t. It is designed to be “lean, understated, and quietly credible”, building a “body of work others can trust” bit by bit through openness and demonstrated effort. They also have a transparent system for disclosing AI involvement in all published material, using classifications like “Human-led,” “AI-assisted,” or “AI-heavy”.
    • Focus on the “Capability-Absorption Gap”: The labs directly address the “capability-absorption gap,” which refers to why businesses struggle to adopt AI despite its capabilities. By providing a low-barrier-to-entry, informal, and social environment for peer learning, they aim to make AI adoption more accessible and less confusing for non-technical users.

    Perplexity then provided the following simplified summary:

    “Anapoly’s hands-on approach helps you understand AI better in your work by letting you actually try out AI tools on real tasks that matter to you, instead of just reading about them or watching demos. This practical experience shows you what AI can and can’t do in your daily work, making it easier to see how it fits into your routine and where it can save you time or improve your results.

    You get to experiment in a safe, informal setting, learn from real examples, and see both successes and mistakes. This helps you build real confidence with AI, instead of just theory, and makes it much easier to spot where AI could help you most in your job.”

    Discussion of Findings

    The NotebookLM output demonstrated that a general-purpose AI can synthesise a coherent and plausible value proposition from structured background material. Perplexity, by contrast, generated a shorter and more audience-friendly version in response to a focused, user-oriented question.

    Together, these outputs show complementary strengths: NotebookLM delivers a comprehensive internal formulation; Perplexity extracts an accessible, benefit-led summary. Used in tandem, these tools help refine messaging for different audiences: internal strategists vs. prospective participants.

    Of particular interest is NotebookLM’s identification of the “capability-absorption gap”, a concise and useful term for a key problem that Anapoly AI Labs addresses. While the founders had recognised this issue in practical terms, the AI’s phrasing sharpens it into a strategic talking point. Framing Anapoly’s purpose in terms of reducing this gap may prove valuable in both internal planning and external communication.

    This experiment also highlights the value of re-prompting and testing across different AI models to triangulate clarity and tone.

    Recommendations

    1. Use AI tools like NotebookLM to draft key positioning statements, especially when materials are already well developed.
    2. Always review AI-generated value propositions critically. Look for overfitting, vagueness, or unearned claims.
    3. Use simpler AI prompts with tools like Perplexity to test how propositions land with a non-specialist audience.
    4. Consider publishing selected AI outputs as-is, but with clear disclosure and context-setting.
    5. Repeat this exercise periodically to test whether the value proposition evolves or ossifies.

    Tags
    value, lab-setup, worked, use-cases, prompting, ai-only, positioning, credibility, communication, capability-absorption-gap

    Glossary

    • Modeling Exploration: A term used to describe Anapoly AI Labs’ approach of testing and demonstrating AI use in practical contexts without claiming expertise.
    • Capability-Absorption Gap: The space between what AI tools can do and what users actually manage to adopt in real settings. First coined (in this context) by NotebookLM.
    • Public Notebook: Anapoly Online’s role as a transparent log of what was tried, what worked, and what didn’t.
    • General-purpose AI Tools: Tools like ChatGPT or NotebookLM that are not tailored to a specific domain but can assist with a wide range of tasks.
    • AI-only: A transparency label denoting that the content was fully generated by AI without human rewriting or editorial shaping.
    • Overfitting: In this context, an AI response that sticks too closely to the language or structure of source material, potentially limiting originality or insight.
    • Vagueness: A tendency in AI outputs to use safe, abstract phrases that lack specificity or actionable detail.
    • Unearned Claims: Assertions made by AI that sound impressive but are not substantiated by evidence or experience in the given context.

    Transparency Label Justification. The experimental outputs (NotebookLM and Perplexity responses) were AI-generated and included unedited. However, the lab note itself – its framing, interpretation, and derived recommendations – was co-written by a human and ChatGPT in structured dialogue.
    ChatGPT’s role included drafting the findings and recommendations, articulating the reasoning behind terms like “capability-absorption gap”, and refining the explanatory framing, tags, and glossary.

  • Sandboxes

    Transparency label: human-led

    The EU AI Act establishes a risk-based classification system for AI systems: compliance requirements depend on the risk a system poses to users, with levels ranging from unacceptable and high down to limited and minimal. General-purpose AI systems like ChatGPT are not classified as high-risk but are subject to specific transparency requirements and must comply with EU copyright law.

    Also, the Act “aims to support AI innovation and start-ups in Europe, allowing companies to develop and test general-purpose AI models before public release. That is why it requires that national authorities provide companies with a testing environment for AI that simulates conditions close to the real world. This will help small and medium-sized enterprises (SMEs) compete in the growing EU artificial intelligence market.”

    Unlike the EU, the UK has no general-purpose AI sandbox. The UK takes a sector-led approach to AI oversight, relying on existing regulators to operate their own sandbox initiatives under the government’s pro-innovation framework. Each sandbox is designed around the compliance needs and risk profiles of its domain. Existing sandboxes are focused on sector-specific or compliance-heavy contexts, for example:

    • FCA AI Sandbox (2025) – Financial services only; supports firms developing or integrating AI tools into fintech workflows.
    • ICO Regulatory Sandbox – Suitable for testing AI applications involving personal data, especially where GDPR-like safeguards are needed.
    • MHRA AI Airlock – For AI used in medical devices.

    These UK sandboxes are geared toward testing purpose-built AI tools or integrations into regulated industry IT systems. There is no current sandbox designed for SMEs exploring general-purpose AI like ChatGPT in everyday, low-risk tasks. While tools like ChatGPT could be included in sandbox trials (e.g. via the ICO for data privacy concerns, or the FCA for financial services), these environments are structured for defined compliance challenges, not routine experimentation, and they require a defined project with specific compliance aims.

    A search by ChatGPT found no evidence of any current provision for open-ended, exploratory use of general-purpose AI by SMEs in unregulated or lightly regulated workflows. Anapoly AI Labs can occupy that space, modelling practical, credible AI use outside formal regulation.

    In that context, Anapoly’s modelling approach could include a cycle of experimentation followed by proof of concept (POC). A proof of concept is a small-scale test to check whether an idea works before investing in full implementation.

    Transparency Label Justification. This piece was authored by Alec Fearon. ChatGPT was used to assist with regulatory research (e.g. the EU AI Act and UK sandbox landscape), structure the argument, and refine language. The ideas, framing, and conclusions are Alec’s. AI contributed supporting detail and wording options, but did not generate or structure the content independently. 

  • Sense from confusion

    Early in the process of developing Anapoly Online, our website, I asked ChatGPT to help me create a diary dashboard: a page acting as a central point for diary posts. Amongst other things, I wanted the page to let us select a tag and see only the posts thus tagged. I was unsure how to implement the filtering control this needed, so I asked ChatGPT for a step-by-step guide. The AI confidently produced a procedure, and I put it into action. It soon became clear, however, that the procedure did not reflect the reality of the software I was using. ChatGPT tried to correct course in response to my complaints, but matters simply became more confused.
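
    For the record, the filtering logic itself is trivial. A sketch of the idea, with made-up post data, might look like this; the difficulty lay not in the logic but in finding where the website software exposed it.

      # Hypothetical post data; the real posts live inside the website software.
      posts = [
          {"title": "Use cases for NotebookLM", "tags": ["use cases"]},
          {"title": "Sandboxes", "tags": ["regulation"]},
      ]

      def filter_by_tag(posts, tag):
          """Return only the posts carrying the selected tag."""
          return [post for post in posts if tag in post["tags"]]

      print(filter_by_tag(posts, "use cases"))  # shows only the NotebookLM post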

    When all else fails, I said to myself, read the documentation. 

    The software’s documentation proved to be like the curate’s egg: good in parts. Soon, I was as muddled as ChatGPT had been, and it took me some trial and much error to work out the correct procedure for what I wanted to do. 

    Conclusion: current AI can’t create sense out of confusion. That’s still a task for humans.

  • Exploring the idea with ChatGPT

    Once Dennis and I had come up with the idea of a small, local AI club, I used ChatGPT to help me explore its possibilities. I explained the idea to the AI and began the chat by asking: Is there a market for such a club in Plymouth? The answer was a helpful analysis of the main issues and a qualified yes, but the pitch would need to be “pragmatic rather than evangelical”. We would need to keep things down to earth.

    From there, we explored aspects ranging from possible niche markets to session formats. We talked about why people might want to use AI. We looked at different types of users and what each might want from a club like this.

    The conversation broadened my thinking about the viability of the idea and brought the issues into better focus. What started as a vague idea of “a club for people like us” grew into something I began to think we could actually try.

    As an aside, this research with AI was also research about AI: a simple use case, a small step on my path to AI proficiency.