Transparency label: Human-only
We ran our first acclimatisation lab yesterday, a trial session with two external participants. There were aspects that could be improved, but overall the feedback was very positive.
This was the session plan.
Transparency label: AI-assisted
In an earlier post, I introduced the idea of “contextual systems engineering”. Building on that idea, we are developing a way to manage collaborative work with AI — especially where the goal of the collaboration is a knowledge-based product: for example a report, a competitive tender, an academic paper, or a policy framework.
What we have come up with is the idea of a Contextual Scaffolding Framework. The framework combines two models:
The principle is simple: if we want AI to stay helpful and relevant, we need to give it the right information. The framework helps us decide what information to provide and when.
The information we provide is put into the AI’s “context” — and the context must evolve to keep pace with the work. Like a satnav updating in real time, the contextual scaffolding keeps the AI aware of where you are, what matters most, and how best to move forward.
🧱 Read the full framework here
Transparency label justification: The post was drafted by ChatGPT based on Alec’s prior work in a long chat, with Alec guiding tone and purpose. ChatGPT proposed structure and language, but all content reflects and links back to a human-authored framework.
Transparency label: AI-assisted
In a recent post, I observed that LLMs are coming to be seen as analogous to computer operating systems, with prompts as the new application programs and context as the new user interface.
Our precision content prompt pack is a good example of that thinking. The pack contains a set of prompts designed to take an unstructured document (notes of a meeting, perhaps) and apply structure (precision content) to the information it contains. We do this because AIs perform better if we give them structured information.
We apply the prompts in sequence, checking the AI’s output at each step in the sequence. In effect, it is a program that we run on the AI.
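To make that “program” idea concrete, here is a minimal sketch in Python of running a prompt pack in sequence, pausing for a human check after each step. The prompts and the ask_ai() function are illustrative placeholders, not the actual precision content prompt pack or our tooling.

```python
# Minimal sketch: run a prompt pack as a "program" on the AI. Each prompt is
# applied to the output of the previous step, with a human check in between.
# The prompts and ask_ai() are placeholders, not the real prompt pack.

PROMPT_PACK = [
    "Identify the distinct topics covered in the material below.",
    "For each topic, restate the key information as short, labelled statements.",
    "Organise the labelled statements into a structured outline.",
]

def ask_ai(prompt: str, material: str) -> str:
    """Stand-in for a call to the AI (via the chat window or an API)."""
    return f"[AI output for: {prompt}]\n{material}"

def run_prompt_pack(document: str) -> str:
    material = document
    for step, prompt in enumerate(PROMPT_PACK, start=1):
        material = ask_ai(prompt, material)
        print(f"--- Step {step} output ---\n{material}\n")
        # Check the AI's output at each step before letting the "program" continue.
        if input("Accept this step and continue? (y/n) ").strip().lower() != "y":
            break
    return material
```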
Contract-first prompting takes the idea further. It formalises the interaction between human and AI into a negotiated agreement about purpose, scope, constraints, and deliverables before any output is generated. This ensures that both sides – human and AI – share the same understanding of the work to be done. The agreement also contains compliance mechanisms (e.g. summarisation, clarifying loops, and self-testing) for quality control.
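As a rough illustration, such a contract could be written down as data before any output is requested. The field names and compliance checks below are assumptions for the sketch, not a prescribed schema.

```python
# Illustrative sketch of a contract-first prompt: the negotiated agreement is
# captured as data, then rendered as the opening message of the collaboration.
# Field names and default checks are assumptions, not a fixed format.
from dataclasses import dataclass, field

@dataclass
class PromptContract:
    purpose: str                # why the work is being done
    scope: str                  # what is in and out of bounds
    constraints: list[str]      # style, length, sources, and similar limits
    deliverables: list[str]     # what the AI must produce
    compliance: list[str] = field(default_factory=lambda: [
        "Summarise the contract back before starting",
        "Ask clarifying questions if any requirement is ambiguous",
        "Self-test the draft against the deliverables before returning it",
    ])

    def to_prompt(self) -> str:
        """Render the agreed contract as the first message to the AI."""
        return "\n".join([
            f"Purpose: {self.purpose}",
            f"Scope: {self.scope}",
            "Constraints: " + "; ".join(self.constraints),
            "Deliverables: " + "; ".join(self.deliverables),
            "Before producing any output: " + "; ".join(self.compliance),
        ])
```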
These ideas reframe the interaction between human and AI: not as simply issuing prompts, but as engineering the conditions under which the AI will perform. Not just prompting, more like contextual systems engineering.
Transparency label justification: This diary post was drafted by Alec, with ChatGPT used to suggest edits, refine wording, and test the logic of specific formulations. Alec initiated the framing, decided the sequence of ideas, and approved the final structure and terminology.
How we guide AI not with buttons, but with words and context.
When you use an AI like ChatGPT, there are no menus or toolbars. Instead, the way you interact – what you say, how you say it, and what you show it – is the interface.
That’s what we mean by context. It’s the information you give the AI to help it understand what you want. A vague prompt gives you a vague answer. A clear prompt, with the right background, gets you something much more useful.
This new kind of interface has several parts:
You don’t click buttons. You build context. And the better the context, the better the outcome.
Backlink: An LLM is like an operating system
This post was written by Alec with support from ChatGPT. Alec directed the concept, structure, and final wording. ChatGPT contributed phrasings and refinements, which were reviewed and approved by Alec.
Transparency label: Human-only
Mike Caulfield has released what he calls a Deep Background GPT. It is an AI fact-checking tool, available to all as a completely free GPT. Mike says:
I just released a (largely) non-hallucinating rigorous AI-based fact-checker that anyone can use for free. And I don’t say that lightly: I literally co-wrote the book on using the internet to verify things. All you do is log into ChatGPT, click the link below, and put in a sentence or paragraph for it to fact check.
I have experimented, and it seems a very useful tool.
Before going to the link, make sure you have selected the o3 model of ChatGPT.
Here is the link to his Substack with all the details.
Transparency label: Human-only
Terminology was a vexed topic in our recent discussion. It dawned on me that smarter people than us must have faced this problem too, and sure enough Perplexity took me to the National Institute of Standards and Technology (NIST) in the US Department of Commerce.
Their paper AI Use Taxonomy: A Human-Centered Approach offers a good way forward. We have adopted it and updated our Lab Framework accordingly.
NIST also floated some terms that could help us think about the value of the outcomes from our work.
Usability: the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.
Human-centred quality: the extent to which requirements for usability, accessibility, user experience and avoidance of harm from use are met.
Transparency label: Human-only
A nice little use case for AI.
The Anapoly team met this morning to talk about progress, harmonise our thinking, and firm up the plan for the next few weeks. It brought home the value of face-to-face discussion, because two interesting new ideas popped out of some animated debate.
When discussion crystallised something worth recording, we paused for a few moments to dictate a note into my mobile phone. The app transcribed it in real time, but with poor punctuation and various transcription errors (I need to get a better app, perhaps!).
Later, I gave the transcript to ChatGPT with the instruction: “The file 2025-06-28 meeting notes contains dictated notes from a meeting earlier today. Please give them a sensible format and correct obvious faults. If in doubt about anything, ask me to clarify.”
ChatGPT did not need clarification. In the blink of an eye it produced a set of well-written, clearly organised notes that would have taken me twenty or thirty minutes to produce to the same standard.
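For anyone who would rather script the same tidy-up than paste into the chat window, a minimal sketch using the OpenAI Python client might look like the following. The file name, model choice, and API usage are examples of one way to do it, not the method described above.

```python
# Sketch of the same tidy-up done through the OpenAI Python client rather than
# the ChatGPT window. File name and model are illustrative examples.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

# Assumed file name for the dictated transcript.
notes = Path("2025-06-28 meeting notes.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                "The text below contains dictated notes from a meeting earlier today. "
                "Please give them a sensible format and correct obvious faults. "
                "If in doubt about anything, ask me to clarify.\n\n" + notes
            ),
        },
    ],
)

print(response.choices[0].message.content)
```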
Substantial update to the Lab Framework, clarifying how we actually do labs!
Transparency label: AI-assisted (see justification below)
A reflection on where I have got to after using ChatGPT and other AIs intensively for the past six weeks.
In earlier times, my digital workspace was built around the usual office applications – Microsoft Office or Google Workspace, and their ilk. Now, it’s different.
Microsoft is going down the path of embedding AI (mainly OpenAI’s models) into its productivity suite. I have taken a different path, choosing to use ChatGPT, Perplexity, and NotebookLM independently. My workspace is now populated by a team of AI assistants, researchers, analysts, editors, and more, sitting alongside the standard office productivity apps.
Where I would previously have started a piece of work in Word, now I start in a ChatGPT project canvas. During the course of work, I switch between AI team members. ChatGPT is the main thought partner – good at shaping ideas, helping structure prose, pushing back on woolly thinking. The 4o model is an excellent collaborator for many tasks, but if deeper thinking is needed I switch to ChatGPT-o3; the difference is remarkable. When the flow falters, I’ll hop over to Perplexity: fast, well-cited, useful for breaking through with a different angle or clarifying a half-formed idea. NotebookLM, meanwhile, knows my files; it acts like a personal librarian, drawing references and insight from the sources I’ve given it.
It’s not seamless yet. But this is a distinctly new way of working, shaped by interaction between multiple AI agents. Even when ostensibly alone at my desk, the feeling is less that of a solitary engineer and more that of the leader of a team: a quiet, tireless, and always surprisingly helpful one. The skill here is no longer about choosing the best tool. It’s about understanding which kind of tool to use and when – treating each one as if it were a specialist on call.
This hopping back and forth isn’t distraction; it’s coordination. Each assistant brings a slightly different lens. Triangulating between them often clears a logjam in my thinking or brings up a new insight. An intriguing thought is whether – or perhaps when – they will gain an awareness of each other’s contributions, turning coordination into collaboration.
For the record, I drafted this post in a ChatGPT project canvas, helped by the AI in a chat alongside the canvas. When complete, I told the AI to write the “transparency label justification” below. It was spot on!
Transparency Label Justification: This diary post was drafted collaboratively with ChatGPT using the GPT-4o model. Alec described his working patterns in detail and directed the structure and tone throughout. ChatGPT proposed phrasing, clarified distinctions between tools, and refined transitions, but the observations, reflections, and examples are all drawn from Alec’s own practice. The post captures a real and evolving style of AI-supported work, shaped and narrated by its practitioner.