Diary Posts
-
Sandboxes
transparency label: human-led
The EU AI Act establishes a risk-based classification system for AI systems. Compliance requirements depend on the risk the system poses to users. Risk levels range from unacceptable and high down to limited and minimal. General-Purpose AI systems like ChatGPT are not classified as high-risk but are subject to specific transparency requirements and must comply with EU copyright…
-
Coping with newsletters
transparency label: human-led
I subscribe to quite a lot of newsletters, covering topics that interest me. But it’s difficult to find the time to work out which newsletters merit a closer look. I remembered reading that Azeem Azhar solves this problem by telling an AI what kinds of things he is looking for, giving…
-
How we flag AI involvement in what we publish
transparency label: AI-assisted
Most of what we publish here is written with the help of AI. That’s part of the point. Anapoly AI Labs is about trying these tools out on real work and seeing what they’re good for. To keep things transparent, we label every post to show how much AI was involved. The label…
-
Working Towards a Strategy
In the last few weeks, we’ve been thinking more clearly about how to describe what Anapoly AI Labs is for and how we intend to run it. The idea, from the start, was not to teach AI or to promote it, but to work out what it’s actually good for. Not in theory, but in…
-
First lab note published
We’ve just posted our first lab note. It documents an internal experiment to refine the custom instructions we use with ChatGPT – what we expect from it, how it should respond, and how to keep it useful across different tasks. The aim is to define a persona for the AI assistant that is more consistent…
-
Sense from confusion
Early in the process of developing Anapoly Online, our website, I asked ChatGPT to help me create a diary dashboard: a page acting as a central point for diary posts. Amongst other things, I wanted the page to let us select a tag and see only the posts thus tagged. I was unsure how to…
-
The concept
Purpose: To model and investigate how non-technical people can make good use of general-purpose AI in their work, using experimentation to understand the strengths and limitations of current AI tools. Why does this matter? AI is now widely available, but there’s a credibility gap between hype and reality. Many people are unsure how to use…
-
A pivot
Our initial idea, prompted by Kamil Banc’s writing on practical AI use, was to run a small, local club. Somewhere people like us could meet in person, experiment with ChatGPT, and see what we could actually do with it. A “non-threatening, friendly environment,” we called it at the time. But the concept developed, and the…
-
Our stance
Stance: a way of thinking about something, especially expressed in a publicly stated opinion. We don’t claim to be AI experts. We’re practitioners exploring how AI can help with real problems faced by professionals like us. We’re testing, documenting, and improving – in public. That’s our value.