Tag: regulation

  • Sandboxes

    transparency label: human-led

    The EU AI Act establishes a risk-based classification system for AI: systems are classified according to the risk they pose to users, and compliance obligations scale with that risk. The tiers run from unacceptable risk (prohibited) and high risk (strictly regulated) down to limited and minimal risk. General-purpose AI systems like ChatGPT are not classified as high-risk, but they are subject to specific transparency requirements and must comply with EU copyright law.
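
    To make the tiered structure concrete, here is a minimal Python sketch of the four tiers with commonly cited example systems. The tier names follow the Act; the example systems and their mapping are illustrative assumptions, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers named in the EU AI Act (labels only; see the Act for the legal definitions)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "strict obligations before and after market entry"
    LIMITED = "transparency duties, e.g. disclosing AI involvement"
    MINIMAL = "no specific obligations"

# Illustrative examples only; a real classification needs the Act's annexes and legal advice.
EXAMPLE_SYSTEMS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in recruitment": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name.lower()} risk ({tier.value})")
```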

    Also, the Act aims to support AI innovation and start-ups in Europe by allowing companies to develop and test general-purpose AI models before public release. To that end, it requires national authorities to provide companies with a testing environment, a regulatory sandbox, that simulates conditions close to the real world. This is intended to help small and medium-sized enterprises (SMEs) compete in the growing EU artificial intelligence market.

    Unlike the EU, the UK has no general-purpose AI sandbox. The UK takes a sector-led approach to AI oversight, relying on existing regulators to operate their own sandbox initiatives under the government’s pro-innovation framework. Each sandbox is designed around the compliance needs and risk profiles of its domain. Existing sandboxes are focused on sector-specific or compliance-heavy contexts, for example:

    • FCA AI Sandbox (2025) – Financial services only; supports firms developing or integrating AI tools into fintech workflows.
    • ICO Regulatory Sandbox – Suitable for testing AI applications involving personal data, especially where GDPR-like safeguards are needed.
    • MHRA AI Airlock – For AI used in medical devices.

    These UK sandboxes are geared toward testing purpose-built AI tools or integrations into regulated industry IT systems. There is currently no sandbox designed for SMEs exploring general-purpose AI like ChatGPT in everyday, low-risk tasks. While tools like ChatGPT could feature in sandbox trials (e.g. via the ICO for data-privacy concerns or the FCA for financial services), these environments are not designed for routine, open-ended use: they are structured around defined compliance challenges and require a specific project with stated compliance aims.

    A search by ChatGPT found no evidence of any current provision for open-ended, exploratory use of general-purpose AI by SMEs in unregulated or lightly regulated workflows. Anapoly AI Labs can occupy that space, modelling practical, credible AI use outside formal regulation.

    In that context, Anapoly’s modelling approach could include a cycle of experimentation followed by a proof of concept (POC). A proof of concept is a small-scale test to check whether an idea works before investing in full implementation.

    Transparency Label Justification. This piece was authored by Alec Fearon. ChatGPT was used to assist with regulatory research (e.g. the EU AI Act and UK sandbox landscape), structure the argument, and refine language. The ideas, framing, and conclusions are Alec’s. AI contributed supporting detail and wording options, but did not generate or structure the content independently. 

  • Coping with newsletters

    transparency label: human-led

    I subscribe to quite a lot of newsletters, covering topics that interest me. But it’s difficult to find the time to work out which ones merit a closer look. I remembered reading that Azeem Azhar solves this problem by telling an AI what kinds of things he is looking for, giving it a week’s worth of newsletters, and asking it for a concise summary of what might interest him. So I followed his example. I gave NotebookLM background information about Anapoly AI Labs and asked it for a detailed prompt that would flag anything in my newsletters that might suit a diary post.

    When I used that prompt, one of the points it picked up was that our transparency framework ties in well with the EU AI Act, which has a section on transparency requirements. That prompted me to look into the UK’s approach. I learned that, instead of regulations, we have regulatory principles for the guidance of existing regulatory bodies such as the Information Commissioner’s Office or Ofcom. The principles cover:

    • Safety, security and robustness
    • Appropriate transparency and explainability
    • Fairness
    • Accountability and governance
    • Contestability and redress

    It occurred to me that some of the experimentation in our labs could focus on these aspects, with individual labs set up to specialise in one of them, for example. Equally, we might explore how the regulatory principles could form the basis of a quality assurance framework for business outputs created with AI involvement. This will become an important consideration for small businesses and consultancies.
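
    As a rough illustration of what such a framework might look like, the sketch below recasts each of the five principles as a checklist question to ask of any AI-assisted deliverable. The questions and the simple pass/fail review are illustrative assumptions, not an established standard.

```python
# Thought-experiment: the UK's five regulatory principles recast as a
# quality-assurance checklist for AI-assisted business outputs.
# The questions are illustrative assumptions, not an established standard.

QA_CHECKLIST = {
    "Safety, security and robustness":
        "Has the output been checked for errors, and is sensitive data protected?",
    "Appropriate transparency and explainability":
        "Is the AI's involvement declared, and can we explain how the output was produced?",
    "Fairness":
        "Could the output unfairly disadvantage any group or individual?",
    "Accountability and governance":
        "Is a named person responsible for reviewing and approving the output?",
    "Contestability and redress":
        "Can a client challenge the output, and is there a route to correct it?",
}

def outstanding_checks(answers: dict) -> list:
    """Return the principles whose checklist question has not been answered 'yes'."""
    return [principle for principle in QA_CHECKLIST if not answers.get(principle, False)]

# Example: a draft report where the fairness check has not yet been done.
answers = {principle: True for principle in QA_CHECKLIST}
answers["Fairness"] = False
print("Outstanding checks:", outstanding_checks(answers))
```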

    A final thought: any enterprise doing business in the EU must comply with the EU AI Act. That alone could justify a focused lab setup. We might simulate how a small consultancy could meet the Act’s transparency and accountability requirements when using general-purpose AI tools, modelling practical compliance, not just reading the rules. This, too, might merit some experimentation by Anapoly AI Labs. 
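
    One way to model that could be to have every deliverable carry a small, structured record of how AI was used, much like the transparency labels on these diary posts. The sketch below is hypothetical; the field names and label values are assumptions for illustration, not requirements taken from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseRecord:
    """Hypothetical per-deliverable record of AI involvement.

    The field names and label values are illustrative assumptions,
    not terms taken from the EU AI Act.
    """
    deliverable: str
    author: str                # the human accountable for the output
    label: str                 # e.g. "human-led" or "ai-assisted"
    tools_used: list = field(default_factory=list)
    ai_contribution: str = ""  # what the AI actually did
    reviewed_by: str = ""      # who checked the output before release

record = AIUseRecord(
    deliverable="Market summary for a client",
    author="Consultant A",
    label="human-led",
    tools_used=["general-purpose chatbot"],
    ai_contribution="drafted section summaries; all figures verified by the author",
    reviewed_by="Consultant B",
)
print(record)
```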

    Transparency Label Justification. This post was drafted by Alec Fearon. Newsletter filtering was supported by NotebookLM as a separate exercise. ChatGPT was used to revise wording and clarify structure. All reflections and framing are human-authored.