Tag: chatgpt

  • No substitute for reading the paper

    transparency label: Human-only

    … what I can say is that a theme throughout this self-analysis is this: I find ChatGPT to be a really useful tool when I already have some idea of what I want to do and when I’m actually engaged with the issue. I find it much less reliable or useful for completely automating parts of the process. There’s no substitute for reading the paper.

    Source: Sean Trott in his newsletter How I use (and don’t use) ChatGPT on 24 June 2025

  • ChatGPT models: which to use when?

    transparency label: Human-only

    • ChatGPT-4o – fast; for brainstorming, quick questions, general chat
    • o3 – powerful; for serious work (analysis, writing, research, coding)
    • o3-pro – ultra-powerful; for the hardest problems

    Source: One Useful Thing, Substack newsletter by Ethan Mollick, 23 June 2025

  • Lab Note: modelling our own use of AI tools

    Transparency label: AI-heavy


    Purpose of experiment

    To identify and configure an AI toolkit for Anapoly AI Labs that credibly models the use of general-purpose AI tools in a small consultancy setting.

    Author and date

    Alec Fearon, 24 June 2025

    Participants

    Alec Fearon, with Ray Holland and Dennis Silverwood in email consultation
    ChatGPT-4o

    Lab configuration and setup

    This setup models a real-world micro-consultancy with three collaborators. It assumes limited budget, modest technical support, and a practical orientation. We aim to reflect the toolkit choices we might recommend to participants in Anapoly AI Labs sessions.

    Preamble

    If Anapoly AI Labs is to be a credible venture, we believe it must model the behaviour it explores. That means our own internal work should demonstrate how small teams or sole traders might use AI tools in everyday tasks – writing, research, analysis, and communication – not just talk about it. This lab note outlines our proposed working configuration.

    Procedure

    We identified common functions we ourselves perform (and expect others will want to model), for example:

    • Writing, summarising, and critiquing text
    • Researching topics and checking facts
    • Extracting and organising information from documents
    • Sharing and collaborating on files
    • Managing project knowledge

    We then selected tools that:

    • Are available off the shelf
    • Require no specialist training
    • Are affordable on a small-business budget
    • Can be configured and used transparently

    Findings

    Core Tools Selected

    • Writing & prompting: ChatGPT Team (£25–30/m/user) – main workspace for drafting, reasoning, editing
    • Search & fact-checking: Perplexity Pro ($20/m/user) – fast, source-aware, good for validating facts
    • Document interrogation: NotebookLM (free, for now) – project libraries, good with PDFs and notes
    • Office apps: MS 365 or Google Workspace (£5–15/m/user) – matches common small-business setups
    • Visual inputs: ChatGPT Vision (included with ChatGPT) – used for images, scans, and screenshots

    Discussion of findings

    This configuration balances affordability, realism, and capability. We expect participants in Anapoly AI Labs to have similar access to these tools, or to be able to get it. By using these tools ourselves in Anapoly’s day-to-day running, we:

    • Gain first-hand experience to share
    • Create reusable examples from real work
    • Expose gaps, workarounds, and lessons worth documenting

    We considered whether personal licences could be shared during lab sessions. They can’t: individual ChatGPT and Perplexity licences are for single-user use. While enforcement is unlikely, we’ve chosen to adopt the position that participants should bring their own AI tools – free or paid – to lab sessions as part of the learning experience. This avoids ambiguity about licensing and sets the ethical standard we want to maintain.

    Conclusions

    This toolkit would enable us to model our own small-business operations, treating Anapoly itself as one of the lab setups. That would reinforce our stance: we don’t claim to be AI experts; we’re practitioners asking the questions small businesses wish they had time to ask, and showing what happens when you do.

    Recommendations

    • Configure project workspaces in ChatGPT Team to reflect different lab contexts
    • Maintain prompt libraries and reasoning trails
    • Make costs, configurations, and limitations explicit in diary and lab notes
    • Evaluate whether to add AI-enhanced spreadsheet or knowledge tools (e.g. Notion, Obsidian) in future iterations

    Tags

    ai tools, toolkit, configuration, modelling, small business, chatgpt, perplexity, notebooklm, office software, credibility

    Glossary

    ChatGPT Team – OpenAI’s paid workspace version of ChatGPT, allowing collaboration, custom GPTs, and project folders.
    NotebookLM – A Google tool for working with uploaded documents using AI, currently free.
    Perplexity Pro – A subscription AI assistant known for showing sources.
    Vision input – The ability to upload images (photos, scans) and have the AI interpret them.

  • Sandboxes

    transparency label: Human-led

    The EU AI Act establishes a risk-based classification system for AI systems. Compliance requirements depend on the risk a system poses to users, with four risk levels: unacceptable, high, limited, and minimal. General-purpose AI systems like ChatGPT are not classified as high-risk, but they are subject to specific transparency requirements and must comply with EU copyright law.

    Also, the Act “aims to support AI innovation and start-ups in Europe, allowing companies to develop and test general-purpose AI models before public release. That is why it requires that national authorities provide companies with a testing environment for AI that simulates conditions close to the real world. This will help small and medium-sized enterprises (SMEs) compete in the growing EU artificial intelligence market.”

    Unlike the EU, the UK has no general-purpose AI sandbox. The UK takes a sector-led approach to AI oversight, relying on existing regulators to operate their own sandbox initiatives under the government’s pro-innovation framework. Each sandbox is designed around the compliance needs and risk profiles of its domain. Existing sandboxes are focused on sector-specific or compliance-heavy contexts, for example:

    • FCA AI Sandbox (2025) – Financial services only; supports firms developing or integrating AI tools into fintech workflows.
    • ICO Regulatory Sandbox – Suitable for testing AI applications involving personal data, especially where GDPR-like safeguards are needed.
    • MHRA AI Airlock – For AI used in medical devices.

    These UK sandboxes are geared toward testing purpose-built AI tools or integrations into regulated industry IT systems. No current sandbox is designed for SMEs exploring general-purpose AI like ChatGPT in everyday, low-risk tasks. While tools like ChatGPT could feature in sandbox trials (e.g. via the ICO for data privacy concerns or the FCA for financial services), these environments are structured around defined compliance challenges, not routine experimentation, and each requires a defined project with specific compliance aims.

    A search by ChatGPT found no evidence of any current provision for open-ended, exploratory use of general-purpose AI by SMEs in unregulated or lightly regulated workflows. Anapoly AI Labs can occupy that space, modelling practical, credible AI use outside formal regulation.

    In that context, Anapoly’s modelling approach could include a cycle of experimentation followed by proof of concept (POC). A proof of concept is a small-scale test to check whether an idea works before investing in full implementation.

    Transparency Label Justification. This piece was authored by Alec Fearon. ChatGPT was used to assist with regulatory research (e.g. the EU AI Act and UK sandbox landscape), structure the argument, and refine language. The ideas, framing, and conclusions are Alec’s. AI contributed supporting detail and wording options, but did not generate or structure the content independently. 

  • First lab note published

    We’ve just posted our first lab note.

    It documents an internal experiment to refine the custom instructions we use with ChatGPT – what we expect from it, how it should respond, and how to keep it useful across different tasks. The aim is to define a more consistent persona for the AI assistant, one that can adapt to the different types of assistance required of it.

    It’s a good example of how we’re using the Labs: not to explain AI, but to find out what it’s actually good for.

    Read the lab note → Custom Instructions for ChatGPT