Category: diary

A public record of the development of our ideas

  • An emerging discipline?

    Transparency label: AI-assisted

    In a recent post, I observed that LLMs are coming to be seen as like computer operating systems, with prompts as the new application programs and context as the new user interface.

    Support for that way of thinking comes from how we can use Anapoly’s precision content prompt pack to apply structure to a source document. The pack contains a set of prompts which are applied in sequence, with human validation of the AI’s output at each step in the sequence. In effect, it is a program governing the end-to-end interaction between AI and human. 

    Contract-first prompting takes the idea further. It formalises the interaction between human and AI into a negotiated agreement, locking in intent, scope, constraints, and deliverables before any output is generated. This structured handshake resembles a statement of work in engineering or a project brief in consulting. It ensures both sides – human and AI – share the same understanding of the task. Compliance mechanisms to be used during the conduct of work – such as summarisation, clarifying loops, and self-testing – are also built into the agreement. Thus, the contract becomes both compass and checklist.
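    The stepwise structure described above can be sketched in code. This is a hypothetical illustration only, not Anapoly's actual tooling: `ask_model` is a stub standing in for a real LLM call, and the function names are invented for the sketch. Each prompt in the pack runs in order, and a human validates the output before the next step proceeds.

```python
# Hypothetical sketch: a prompt pack applied in sequence, with
# human validation of the AI's output at each step.

def ask_model(prompt: str, context: str) -> str:
    """Stub LLM call; a real implementation would query an LLM API."""
    return f"[model output for {prompt!r} given {len(context)} chars of context]"

def run_prompt_pack(prompts, source: str, validate=input):
    """Apply each prompt in order, pausing for human sign-off.

    validate is the human-in-the-loop hook: it is shown each output
    and must answer 'y' for the sequence to continue.
    """
    outputs = []
    context = source
    for step, prompt in enumerate(prompts, start=1):
        output = ask_model(prompt, context)
        verdict = validate(f"Step {step}: accept this output? (y/n)\n{output}\n> ")
        if verdict.strip().lower() != "y":
            raise RuntimeError(f"Human rejected output at step {step}")
        outputs.append(output)
        context = output  # each validated output becomes context for the next step
    return outputs
```

    In this sense the prompt list is the "program" and the validation callback is the human in the loop; a contract-first variant would add a negotiation step before the loop begins.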

    Our precision content prompt pack and the contract-first idea transform prompting into programmable context design.

    This reframes the interaction between human and AI: not as issuing commands, but as engineering the conditions under which the AI will perform. When we treat prompts, inputs, roles, and structure as modular components of a working system, we begin to move from improvisation toward disciplined practice. Not just better prompting – but contextual systems engineering.

    Transparency label justification: This diary post was drafted by Alec, with ChatGPT used to suggest edits, refine wording, and test the logic of specific formulations. Alec initiated the framing, decided the sequence of ideas, and approved the final structure and terminology.

  • Context is the new user interface

    Transparency label: Human-led

    How we guide AI not with buttons, but with words and context.

    When you use an AI like ChatGPT, there are no menus or toolbars. Instead, the way you interact – what you say, how you say it, and what you show it – is the interface.

    That’s what we mean by context. It’s the information you give the AI to help it understand what you want. A vague prompt gives you a vague answer. A clear prompt, with the right background, gets you something much more useful.

    This new kind of interface has several parts:

    • Prompt window – where you ask questions and give tasks
    • Custom instructions – where you shape the AI’s behaviour
    • File uploads – where you provide background material
    • Project spaces – where you store persistent context for a task
    • Canvas – where you co-develop ideas with the AI

    You don’t click buttons. You build context. And the better the context, the better the outcome.


    Backlink: An LLM is like an operating system

    This post was written by Alec with support from ChatGPT. Alec directed the concept, structure, and final wording. ChatGPT contributed phrasings and refinements, which were reviewed and approved by Alec.

  • An LLM is like an operating system

    Transparency label: Human-led

    Reference: Andrej Karpathy’s keynote address on 17 June 2025 at AI Startup School in San Francisco

    In the 1960s and ’70s:

    • a computer filled a large room in a central location, far from most of its users
    • it had an operating system which provided basic functions
    • application programs used operating system functions to do useful things
    • we accessed the computer remotely over telephone lines
    • we communicated with the computer using only text, from a terminal

    In 2010:

    • computers were personal, portable, and mostly ran local software
    • the operating system managed files, programs, and user settings
    • applications were installed and launched directly by the user
    • internet access became always-on, but most processing still happened locally
    • we communicated with the computer through keyboard, mouse, and graphical interface

    In 2025:

    • an AI runs in a cloud-based data centre, far from the user
    • it has an LLM which provides core functions for language, reasoning, and knowledge work
    • prompts use the core functions to do useful things
    • we access the AI remotely over the internet
    • we communicate with it mainly using text (voice and images are also possible now), and still mostly from a terminal

    In 2050:

    • an AI will be personal, portable, and …

    LLMs are the new operating systems.

    Prompts are the new application programs.

    Context is the new user interface.

    Transparency: This post was conceived, structured, and written by a human author. ChatGPT was used to suggest phrasing and refine analogies, but all key ideas, narrative framing, and editorial decisions were made by the human. AI input was limited to supporting the writing process.

  • ChatGPT can check facts

    Transparency label: Human-only

    Mike Caulfield has released what he calls a Deep Background GPT. It is an AI fact-checking tool, available to all as a completely free GPT. Mike says:

    I just released a (largely) non-hallucinating rigorous AI-based fact-checker that anyone can use for free. And I don’t say that lightly: I literally co-wrote the book on using the internet to verify things. All you do is log into ChatGPT, click the link below, and put in a sentence or paragraph for it to fact check.

    https://chatgpt.com/g/g-684fa334fb0c8191910d50a70baad796-deep-background-fact-checks-and-context?model=o3

    I have experimented, and it seems a very useful tool.

    Before following the link, make sure you have selected the o3 model in ChatGPT.

    Here is the link to his Substack with all the details.

  • Terminology

    Transparency label: Human-only

    Terminology was a vexed topic in our recent discussion. It dawned on me that smarter people than us must have faced this problem too, and sure enough Perplexity took me to the National Institute of Standards and Technology (NIST) in the US Department of Commerce. 

    Their paper AI Use Taxonomy: A Human-Centered Approach offers a good way forward. We have adopted it and updated our Lab Framework accordingly.

    NIST also floated some terms that could help us think about the value of the outcomes from our work. 

    Usability: the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.

    Human-centred quality: the extent to which requirements for usability, accessibility, user experience and avoidance of harm from use are met.

  • Voice to meeting notes in 30 seconds

    Transparency label: Human-only

    A nice little use case for AI.

    The Anapoly team met this morning to talk about progress, harmonise our thinking, and firm up the plan for the next few weeks. It brought home the value of face-to-face discussion, because two interesting new ideas popped out of some animated debate.

    When discussion crystallised something worth recording, we paused for a few moments to dictate a note into my mobile phone. The app transcribed it in real time, but with poor punctuation and various transcription errors (I need to get a better app, perhaps!).

    Later, I gave the transcript to ChatGPT with the instruction: “The file 2025-06-28 meeting notes contains dictated notes from a meeting earlier today. Please give them a sensible format and correct obvious faults. If in doubt about anything, ask me to clarify.”

    ChatGPT did not need clarification. In the blink of an eye it produced a set of well written, clearly organised notes that would have taken me twenty or thirty minutes to produce to the same standard.

  • Lab Framework updated

    Substantial update to the Lab Framework, clarifying how we actually do labs!

  • A new way of working

    Transparency label: AI-assisted (see justification below)

    A reflection on where I have got to after using ChatGPT and other AIs intensively for the past six weeks.

    In earlier times, my digital workspace was built around the usual office applications – Microsoft Office or Google Workspace, and their ilk. Now, it’s different. 

    Microsoft is going down the path of embedding AI (mainly ChatGPT) into its productivity suite. I have taken a different path, choosing to use ChatGPT, Perplexity, and NotebookLM independently. My workspace is now populated by a team of AI assistants, researchers, analysts, editors, and more, sitting alongside the standard office productivity apps.

    Where I would previously have started a piece of work in Word, now I start in a ChatGPT project canvas. During the course of work, I switch between AI team members. ChatGPT is the main thought partner – good at shaping ideas, helping structure prose, pushing back on woolly thinking. The 4o model is an excellent collaborator for many tasks, but if deeper thinking is needed I switch to ChatGPT-o3; the difference is remarkable. When the flow falters, I’ll hop over to Perplexity: fast, well-cited, useful for breaking through with a different angle or clarifying a half-formed idea. NotebookLM, meanwhile, knows my files; it acts like a personal librarian, drawing references and insight from the sources I’ve given it.

    It’s not seamless yet. But this is a distinctly new way of working, shaped by interaction between multiple AI agents. Even when ostensibly alone at my desk, the feeling is less that of a solitary engineer and more that of leading a team. A quiet, tireless, and always surprisingly helpful team. The skill here is no longer about choosing the best tool. It’s about understanding which kind of tool to use and when – treating each one as if it were a specialist on call.

    This hopping back and forth isn’t distraction; it’s coordination. Each assistant brings a slightly different lens. Triangulating between them often clears a logjam in my thinking or brings up a new insight. An intriguing thought is whether – or perhaps when – they will gain an awareness of each other’s contribution, turning coordination into collaboration.

    For the record, I drafted this post in a ChatGPT project canvas, helped by the AI in a chat alongside the canvas. When complete, I told the AI to write the “transparency label justification” below. It was spot on!

    Transparency Label Justification: This diary post was drafted collaboratively with ChatGPT using the GPT-4o model. Alec described his working patterns in detail and directed the structure and tone throughout. ChatGPT proposed phrasing, clarified distinctions between tools, and refined transitions, but the observations, reflections, and examples are all drawn from Alec’s own practice. The post captures a real and evolving style of AI-supported work, shaped and narrated by its practitioner.

  • Collaboration in ChatGPT?

    Transparency label: Human-only

    There are reports that:

    OpenAI has been quietly developing collaboration features for ChatGPT that would let multiple users work together on documents and chat about projects, a direct assault on Microsoft’s core productivity business. The designs have been in development for nearly a year, with OpenAI’s Canvas feature serving as a first step toward full document collaboration tools.

    Source: The Information via BRXND Dispatch

    This would move ChatGPT towards something like the simultaneous collaboration on a shared document that Microsoft Office offers. At present, a ChatGPT Team account allows more than one person to work in a project space and take part in the chats within that project, but only one person at a time, as I understand it.

  • First thoughts on a lab framework

    Transparency label: Human-only

    A few hours spent with ChatGPT-o3 resulted in a good first draft of a framework for thinking about our labs. It covers:

    • types of lab
    • the roles of people involved with the labs
    • the core technical configuration of a lab
    • assets needed to launch, operate, and archive a lab
    • a naming convention for these assets

    No doubt the framework will need to be tweaked and added to as our ideas mature.

    The chat with o3 was a valuable mind-clearing exercise for me, and I was impressed by how much more “intellectual” it is compared to the 4o model. Like many intellectuals, it also displayed a lack of common sense on occasion, especially when I asked for simple formatting corrections to the canvas we were editing together. The 4o model is much more agile in that respect.

    During the chat, when the flow with ChatGPT didn’t feel right, I hopped back and forth to consult with Perplexity and NotebookLM. Their outputs provided interestingly and usefully different perspectives that helped to clear the logjam.

    A decision arising from my joint AI consultation process was the choice of Google Workspace as the office productivity suite within our labs. This will allow for much better collaboration when using office tools with personal licences than would be the case with Microsoft Office 365. Given the ad hoc nature of labs and the cost constraints we have, this is an important consideration.