Tag: transparency

  • A new way of working

    transparency label: AI-assisted (see justification below)

    A reflection on where I have got to after using ChatGPT and other AIs intensively for the past six weeks.

    In earlier times, my digital workspace was built around the usual office applications – Microsoft Office or Google Workspace, and their ilk. Now, it’s different. 

    Microsoft is going down the path of embedding AI (its Copilot, built on OpenAI models) into its productivity suite. I have taken a different path, choosing to use ChatGPT, Perplexity, and NotebookLM independently. My workspace is now populated by a team of AI assistants, researchers, analysts, editors, and more, sitting alongside the standard office productivity apps.

    Where I would previously have started a piece of work in Word, I now start in a ChatGPT project canvas. In the course of the work, I switch between AI team members. ChatGPT is the main thought partner – good at shaping ideas, helping structure prose, and pushing back on woolly thinking. The 4o model is an excellent collaborator for many tasks, but when deeper thinking is needed I switch to the o3 model; the difference is remarkable. When the flow falters, I’ll hop over to Perplexity: fast, well cited, and useful for breaking through with a different angle or clarifying a half-formed idea. NotebookLM, meanwhile, knows my files; it acts like a personal librarian, drawing references and insight from the sources I’ve given it.

    It’s not seamless yet. But this is a distinctly new way of working, shaped by interaction with multiple AI agents. Even when ostensibly alone at my desk, the feeling is less that of a solitary engineer and more that of the leader of a team – a quiet, tireless, and surprisingly helpful team. The skill is no longer choosing the best tool; it’s knowing which kind of tool to use, and when – treating each one as a specialist on call.

    This hopping back and forth isn’t distraction; it’s coordination. Each assistant brings a slightly different lens, and triangulating between them often clears a logjam in my thinking or surfaces a new insight. An intriguing thought is whether – or perhaps when – they will gain awareness of each other’s contributions, turning coordination into collaboration.

    For the record, I drafted this post in a ChatGPT project canvas, helped by the AI in a chat alongside the canvas. When the draft was complete, I told the AI to write the “transparency label justification” below. It was spot on!

    Transparency Label Justification: This diary post was drafted collaboratively with ChatGPT using the GPT-4o model. Alec described his working patterns in detail and directed the structure and tone throughout. ChatGPT proposed phrasing, clarified distinctions between tools, and refined transitions, but the observations, reflections, and examples are all drawn from Alec’s own practice. The post captures a real and evolving style of AI-supported work, shaped and narrated by its practitioner.

  • How ChatGPT helped draft our first acclimatisation lab setup

    Date: 24 June 2025

    transparency label: AI-heavy

    Our latest Lab Note records a quick experiment where I asked two ChatGPT models to draft the outline for an “acclimatisation” session – the starter lab we plan to run with newcomers to AI.

    Highlights:

    • Model face‑off: I ran the same prompt in parallel on model o3 and model 4o. The reasoning‑focused o3 delivered a tight nine‑part outline. 4o wandered off‑piste.
    • Time cost: The branch test took three minutes and gave us a clear winner.
    • Transparency: The Lab Note carries an AI‑heavy label because most of the prose came straight from o3. I trimmed, corrected one hallucination, and signed off.

    If you are curious about our process or want to see how structured prompting keeps the bot on track, read the full note here: First Acclimatisation Session Lab Note →

  • Drafting Anapoly’s first Lab Setup with ChatGPT

    transparency label: AI‑heavy (ChatGPT model o3 produced primary content; Alec Fearon curated and lightly edited)

    Purpose of experiment

    Use ChatGPT as a thinking partner to create a first draft of an acclimatisation lab setup for Anapoly AI Labs and to compare output quality between two models (o3 and 4o).

    Author and date

    Alec Fearon, 24 June 2025

    Participants

    • Alec Fearon – experiment lead
    • ChatGPT model o3 – reasoning model
    • ChatGPT model 4o – comparison model

    Lab configuration and setup

    Interaction took place entirely in the ChatGPT Document Workbench. The first prompt was duplicated into two branches, each tied to a specific model. All files needed for reference (conceptual framework, lab note structure) were pre‑uploaded in the project space.

    Preamble

    Alec wanted a concise, critical first draft to stimulate team discussion. The exercise also served as a live test of whether o3’s “reasoning” advantage produced materially better drafts than the general-purpose 4o model.

  • Sandboxes

    transparency label: human-led

    The EU AI Act establishes a risk-based classification system for AI: compliance requirements depend on the risk a system poses to users. The Act defines four risk levels – unacceptable, high, limited, and minimal – banning unacceptable-risk practices outright and placing the heaviest obligations on high-risk systems. General-purpose AI systems like ChatGPT are not classified as high-risk but are subject to specific transparency requirements and must comply with EU copyright law.

    Also, the Act “aims to support AI innovation and start-ups in Europe, allowing companies to develop and test general-purpose AI models before public release. That is why it requires that national authorities provide companies with a testing environment for AI that simulates conditions close to the real world. This will help small and medium-sized enterprises (SMEs) compete in the growing EU artificial intelligence market.”

    Unlike the EU, the UK has no general-purpose AI sandbox. The UK takes a sector-led approach to AI oversight, relying on existing regulators to operate their own sandbox initiatives under the government’s pro-innovation framework. Each sandbox is designed around the compliance needs and risk profiles of its domain. Existing sandboxes are focused on sector-specific or compliance-heavy contexts, for example:

    • FCA AI Sandbox (2025) – Financial services only; supports firms developing or integrating AI tools into fintech workflows.
    • ICO Regulatory Sandbox – Suitable for testing AI applications involving personal data, especially where GDPR-like safeguards are needed.
    • MHRA AI Airlock – For AI used in medical devices.

    These UK sandboxes are geared toward testing purpose-built AI tools or integrations into regulated industry IT systems. No current sandbox is designed for SMEs exploring general-purpose AI like ChatGPT in everyday, low-risk tasks. While tools like ChatGPT could be included in sandbox trials (e.g. via the ICO for data-privacy concerns or the FCA for financial services), these environments are built around defined projects with specific compliance aims, not routine, open-ended experimentation.

    A search by ChatGPT found no evidence of any current provision for open-ended, exploratory use of general-purpose AI by SMEs in unregulated or lightly regulated workflows. Anapoly AI Labs can occupy that space, modelling practical, credible AI use outside formal regulation.

    In that context, Anapoly’s modelling approach could include a cycle of experimentation followed by proof of concept (POC). A proof of concept is a small-scale test to check whether an idea works before investing in full implementation.

    Transparency Label Justification. This piece was authored by Alec Fearon. ChatGPT was used to assist with regulatory research (e.g. the EU AI Act and UK sandbox landscape), structure the argument, and refine language. The ideas, framing, and conclusions are Alec’s. AI contributed supporting detail and wording options, but did not generate or structure the content independently. 

  • How we flag AI involvement in what we publish

    transparency label: AI-assisted

    Most of what we publish here is written with the help of AI. That’s part of the point. Anapoly AI Labs is about trying these tools out on real work and seeing what they’re good for.

    To keep things transparent, we label every post to show how much AI was involved. The label links to our transparency framework, which doesn’t try to assign percentages. Instead, it uses a straightforward five-level scale:

    • Human-only: Entirely human-authored. No AI involvement at any stage of development.
    • Human-led: Human-authored, with AI input limited to suggestions, edits, or fact-checking.
    • AI-assisted: AI was used to draft, edit, or refine content. Human authors directed the process.
    • AI-heavy: AI played a large role in drafting or synthesis. Human authors curated and finalised the piece.
    • AI-only: Fully generated by AI without human input beyond the original prompt. No editing or revision.

    We sometimes add the following:

    • Justification: A brief note explaining why we chose a particular label.
    • Chat Summary: A short summary of what the AI contributed to the piece.
    • Full Transcript: A link to the full chat behind a piece, lightly edited for clarity and privacy, when it contains something worth reading.

    Transparency Label Justification. This post was developed collaboratively. The human author defined the purpose, structure, and tone; the AI assisted with drafting, tightening prose, applying style conventions, and rewording for clarity. The final version reflects a human-led editorial process with substantial AI input at multiple stages.

  • transparency framework

    transparency label: AI-assisted

    Anapoly AI Labs exists to explore how general-purpose AI tools can be used in real work. Therefore, most of the content on this site will have been developed with the help of such tools. To ensure transparency and maintain trust, we disclose AI involvement in all published material using a standardised system, as follows.

    Item-Level Disclosure. Every item of content on Anapoly Online includes a clear, consistent classification of AI involvement. This is a transparency label placed at or near the top of an item, linked to this page.

    Classification System. To avoid false precision, we use a narrative typology rather than percentages.

    • Human-only: Entirely human-authored. No AI involvement at any stage of development.
    • Human-led: Human-authored, with AI input limited to suggestions, edits, or fact-checking.
    • AI-assisted: AI was used to draft, edit, or refine content. Human authors directed the process.
    • AI-heavy: AI played a large role in drafting or synthesis. Human authors curated and finalised the piece.
    • AI-only: Fully generated by AI without human input beyond the original prompt. No editing or revision.
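
    The scale is deliberately small and fixed, which means it can be treated as a controlled vocabulary: structured metadata that a publishing tool could validate and render automatically. The Python sketch below is purely illustrative – the label names and descriptions are taken verbatim from the table above, but the TransparencyLabel enum, the render_label helper, and the example URL are hypothetical, not part of any system Anapoly actually runs.

    from enum import Enum

    # Hypothetical encoding of the five-level scale. Label names and
    # descriptions are verbatim from the framework; everything else is
    # illustrative.
    class TransparencyLabel(Enum):
        HUMAN_ONLY = ("Human-only",
                      "Entirely human-authored. No AI involvement at any stage of development.")
        HUMAN_LED = ("Human-led",
                     "Human-authored, with AI input limited to suggestions, edits, or fact-checking.")
        AI_ASSISTED = ("AI-assisted",
                       "AI was used to draft, edit, or refine content. Human authors directed the process.")
        AI_HEAVY = ("AI-heavy",
                    "AI played a large role in drafting or synthesis. Human authors curated and finalised the piece.")
        AI_ONLY = ("AI-only",
                   "Fully generated by AI without human input beyond the original prompt. No editing or revision.")

        def __init__(self, label, description):
            self.label = label
            self.description = description

    def render_label(label, framework_url):
        """Render the one-line disclosure placed at the top of an item."""
        return f"transparency label: {label.label} ({framework_url})"

    # Example, with a hypothetical framework URL:
    # render_label(TransparencyLabel.AI_ASSISTED, "https://anapoly.example/transparency-framework")
    # -> 'transparency label: AI-assisted (https://anapoly.example/transparency-framework)'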

    Justification (Optional). Where appropriate, a brief explanation may be added to clarify the classification chosen for a piece.

    Example of a Justification
    This conversation is classified as AI-assisted. Alec initiated the topic, made key decisions about tone, structure, and terminology, and reviewed every suggestion. ChatGPT drafted language, proposed classification options, and revised the canvas under direction. The content was shaped collaboratively, but with Alec clearly in charge.

    Chat Summary (Optional). Where AI involvement brought particular value to the development of a piece, we may include a short summary of the interaction:

    Example of an AI Involvement Summary
    This piece of work was developed through a dialogue between Alec and ChatGPT.

    What Alec did
    Alec initiated the topic, framed the questions, and reviewed all drafts. He also decided which suggestions to accept or reject. For example, he asked whether AI involvement should be disclosed at all and explored the idea of linking to full transcripts.

    What ChatGPT did
    ChatGPT proposed ways to classify content by level of AI involvement, suggested terminology such as “AI-assisted” and “Hybrid,” and drafted several options for post labels and summaries. It also created the first version of the site-wide disclosure policy.

    How the piece developed
    The policy took shape through iterative exchange: Alec pushed for clarity, consistency, and minimalism, while ChatGPT adapted and expanded the document to match. The result is a practical and credible policy that reflects the ethos of Anapoly AI Labs.

    Full Transcript (Optional). Where useful, we may link to the full chat transcript behind the content. These links will be included sparingly, only when the transcript adds genuine insight for readers. All transcripts will be lightly cleaned (if needed) to protect privacy and ensure readability.