Category: diary

A public record of the development of our ideas

  • Methodology & tool-kits

    Transparency label: human-only

    Yesterday’s discussion crystallised some ideas that I have been teasing out recently: namely, that we could develop our labs as a place where we:

    (a) develop a methodology and its associated tool-kits for the implementation of AI in an enterprise; and

    (b) use these in a synthetic environment to help business people gain confidence in moving forward with AI.

    Our value to clients would derive from the relevance and usefulness of the methodology and tool-kits.

    Methodology: a comprehensive system of principles, processes, and rules that governs a specific discipline to ensure consistency and repeatability. It serves as a structural framework defining what must be done and who is responsible. It is applied through specialised tool-kits consisting of templates, software, and techniques that help practitioners bring the theoretical standards to bear on real-world tasks.

    To pick up on the point Dennis emphasised, an early part of the methodology might be analysis of how an enterprise actually works (rather than what the non-existent, incomplete or out of date documentation says). The process for this would involve eliciting information from people in the enterprise. A tool-kit would enable AI to be configured for that purpose within the specific context of that enterprise. 

  • Do AI models have a prompt appetite?

    About once a week, I use ChatGPT to pull out the main themes from the stack of newsletters I receive. This week I was surprised to be told that different AI models now have different prompt appetites. So I got the AI to explain what this means and the implications for Anapoly.


    ChatGPT’s response to the question “What is meant by ‘different models now have different prompt appetites’?”

    It means the era of “one prompting style works everywhere” is over.

    Models used to behave broadly the same: structure your instructions, specify roles, delimit sections, and you’d get predictable results. Now the frontier models diverge. They respond differently not just in quality but in preference for instruction style.

    A “prompt appetite” is the set of conditions in which a model performs best: how much structure it wants, how explicit you need to be, how conversational you can be, and how sensitive it is to examples, delimiters, or verbosity. In other words, each model now has its own diet.

    Gemini 3, for example, performs better with example-rich, lightly structured, conversational prompts. Too much rigid scaffolding and it feels choked. GPT-5.1, by contrast, still thrives on formal instruction and well-defined control layers. Claude is noticeably sensitive to tone and framing. Even small changes in wording can push these models into markedly different modes of reasoning.

    For Anapoly, that has two implications.

    First, your governance and context layers can’t be one-size-fits-all. If you’re orchestrating across multiple models—internal RAG, foundation models, domain-specific SLMs—you need a prompting strategy that adapts to the quirks of each one. Your orchestration layer becomes, in part, a “dietician”: matching tasks to the right model and the right prompt genre.

    Second, this strengthens your case for standardised, model-agnostic knowledge units. If models vary in appetite, the only way to keep outputs consistent is to control the inputs: modular content blocks, transparent context, and structured reasoning artefacts that can be rephrased per model without degrading meaning.

    In short: prompt appetite is emerging as a genuine design constraint. Ignore it and you get erratic behaviour; work with it and you get reliability.
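
    Taking the “dietician” point seriously, I sketched what that adaptation might look like in an orchestration layer. This is a rough Python sketch, not a tested implementation: the model names and their preferences are lifted from the discussion above purely for illustration.

      # Illustrative sketch only: the model names and preferences below are
      # assumptions taken from the discussion above, not measured behaviour.
      PROMPT_APPETITES = {
          "gemini-3": {"structure": "light", "wants_examples": True},
          "gpt-5.1": {"structure": "formal", "wants_examples": False},
          "claude": {"structure": "moderate", "wants_examples": True},
      }

      def build_prompt(task, model, examples=None):
          """Re-express the same task in the prompt genre a given model prefers."""
          appetite = PROMPT_APPETITES[model]
          if appetite["structure"] == "formal":
              prompt = "## Role\nYou are a careful analyst.\n\n## Task\n" + task
          elif appetite["structure"] == "light":
              prompt = "Here is what I need, in plain terms: " + task
          else:
              prompt = "Task: " + task + "\nTake care with tone; reason step by step."
          if appetite["wants_examples"] and examples:
              prompt += "\n\nExamples:\n" + "\n".join("- " + e for e in examples)
          return prompt

      # The same task, re-expressed three ways:
      task = "Summarise the main themes in this week's newsletters."
      for model in PROMPT_APPETITES:
          print("---", model, "---")
          print(build_prompt(task, model, examples=["Theme: prompt appetites"]))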

  • A three-layer instruction set for use in ChatGPT Projects

    Transparency label: human-only


    Building on recent work around the three-layer model of AI context, I spent some time today working with ChatGPT to write a full three-layer instruction set for use in Anapoly’s ChatGPT Projects.

    If it works as hoped, strong governance becomes the default. The AI will challenge weak logic, ask for clarification when the ground is shaky, and keep its tone plain and grounded. For the occasions when I want it to bend the rules baked into it, I added an escape hatch: a temporary override, clearly marked, that lasts exactly one turn.
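
    For illustration only, the overall shape is something like the sketch below. The wording is placeholder text, not the instruction set itself; the point is the layering and the clearly marked, single-turn override.

      # Placeholder layer texts: only the structure matters here.
      CONTEXT_LAYER = "Business context: what Anapoly is, what this Project is for."
      GOVERNANCE_LAYER = ("Governance: challenge weak logic, ask for clarification "
                          "when the ground is shaky, keep the tone plain and grounded.")
      TASK_LAYER = "Task instructions for the current piece of work."

      def assemble_instructions(override=None):
          """Compose the three layers; any override is clearly marked and lasts one turn."""
          layers = [CONTEXT_LAYER, GOVERNANCE_LAYER, TASK_LAYER]
          if override:
              layers.append("TEMPORARY OVERRIDE (this turn only): " + override)
          return "\n\n".join(layers)

      print(assemble_instructions())                      # a normal, governed turn
      print(assemble_instructions("Brainstorm freely."))  # one turn through the escape hatch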

    It will be interesting to see how well it works.

  • Using local-only AI in a micro-enterprise

    Transparency label: AI-assisted

    I’ve added a briefing note in the Resources area that sets out an approach for a micro-enterprise to run AI entirely on local machines. This creates a system that is private, predictable, comparatively inexpensive, and easy to expand in small steps.

    The note explains how a three-layer model for controlling the behaviour of an AI (business context at the top, governance rules in the middle, and task instructions at the bottom) can still apply even when everything sits on a laptop, and how a small software component (the orchestrator) can keep the whole arrangement predictable and safe.

    If you’re curious about “local-only AI” for a micro-enterprise, or wondering what it might look like in practice, you should find this a useful starting point.

    Read the note: Using Local-Only AI in a Micro-Enterprise

  • Make ChatGPT mark its own homework

    Transparency label: human-only

    This prompt results in a marked improvement in the quality of a document being drafted collaboratively with ChatGPT.

    Review the content of the canvas for completeness, correctness, consistency and the quality of its line of reasoning, and make improvements as necessary. Before outputting the improved version, repeat the review and make further improvements as appropriate. Only then output the improved canvas.

  • Content and context are key

    … to successful use of AI. The distinction between the two matters now because many teams only notice the problem once their AI systems start giving confident but contradictory answers.

    Transparency label: AI-assisted. AI was used to draft, edit, or refine content. Alec Fearon directed the process.

    With acknowledgment to Scott Abel and Michael Iantosca, whose writing provided the source material for this post.


    In an earlier post, I defined an AI-embedded business as one in which AI systems are deeply integrated into its operations. For this to be successful, I suggested that we needed contextual scaffolding to define the AI’s working environment for a given task, context engineering to manage that environment as part of the business infrastructure, and the disciplined management of knowledge. We can call the latter, the disciplined management of knowledge, content engineering.

    Content engineering governs what goes into the contextual scaffolding (the AI’s knowledge environment).
    Context engineering governs how the model uses it at inference time.

    Between them, they define the only two levers humans actually have over AI behaviour today, and crucially, they sit entirely outside the model. If either discipline is missing or under-performing, the system degrades:

    • Without content engineering, you get knowledge collapse (defined below).
      The sources of truth fall out of date, fragment, contradict one another, and mislead the model.
    • Without context engineering, you get context rot (defined below).
      Even good content becomes unusable because it’s handed to the model in ways that overwhelm its attention budget.

    Together, these two disciplines enable a coherent means of control:

    • Content engineering → the quality, structure, governance, and lifecycle of the organisation’s knowledge.
    • Context engineering → the orchestration of instructions, persona, reference materials, scope, constraints, and retrieval so the model actually behaves as intended.
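
    To make the division of labour concrete, here is a rough sketch of a model-agnostic knowledge unit and the context-assembly step that consumes it. The field names and the selection rule are mine, chosen for illustration rather than prescribed by anyone.

      # Illustrative only: field names and the selection rule are assumptions.
      from dataclasses import dataclass
      from datetime import date

      @dataclass
      class KnowledgeUnit:
          # Content engineering lives here: ownership, review dates, and status
          # travel with the content itself.
          unit_id: str
          body: str
          owner: str
          last_reviewed: date
          status: str  # e.g. "authoritative", "draft", "superseded"

      def assemble_context(task, units, budget=2000):
          """Context engineering lives here: pick only current, authoritative units
          and respect a rough size budget so the model's attention is not swamped."""
          chosen, used = [], 0
          for unit in sorted(units, key=lambda u: u.last_reviewed, reverse=True):
              if unit.status != "authoritative" or used + len(unit.body) > budget:
                  continue
              chosen.append(unit.body)
              used += len(unit.body)
          return "Task: " + task + "\n\nReference material:\n" + "\n---\n".join(chosen)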

    Definitions

    Knowledge collapse

    A systemic failure in an organisation’s knowledge environment where incorrect, outdated, conflicting, or poorly structured content overwhelms the reliable material, causing both humans and AI systems to lose the ability to determine what is authoritative.

    In plainer terms:

    The knowledge base stops being a source of truth and becomes a source of error.

    It happens when:

    • Content ages faster than it’s maintained.
    • There is no lifecycle governance.
    • Tools ingest everything without curation.
    • Retrieval yields contradictions rather than clarity.
    • AI amplifies the mess until nobody can tell what’s accurate.

    The collapse is not sudden; it is cumulative and invisible until a critical threshold is crossed, for example when a small business relies on an outdated onboarding manual and the AI dutifully repeats obsolete steps that no longer match how the company actually works.

    Context rot

    The degradation of an LLM’s reasoning as the context window grows.
    The model becomes distracted by the sheer number of tokens it must attend to, because:

    1. Attention is a finite resource.
      Each new token drains the model’s “attention budget.”
    2. Transformers force every token to attend to every other token.
      As the number of tokens rises, the pairwise attention load grows quadratically.
    3. Signal-to-noise collapses.
      Useful tokens become diluted by irrelevant ones, so the model fixates on the wrong cues or loses the thread entirely.

    Anthropic’s researchers summarise it neatly:

    Tokens accumulate beyond the model’s ability to meaningfully attend to them, so the context becomes increasingly noisy and less relevant.

    In short: the more you give the model, the worse it thinks, which is why practitioners often need to reset sessions or prune earlier inputs to keep the model focused on the task at hand.

    This is a structural limitation of today’s transformer architecture, not a parameter-tuning issue. It sets a ceiling on long-context performance unless a new architecture replaces or supplements attention.
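
    A back-of-envelope illustration of point 2 above: the number of token pairs the attention mechanism must consider grows with the square of the context length, so a ten-fold longer context means roughly a hundred-fold more pairs.

      # Back-of-envelope only: counts token pairs, ignoring model width, layers,
      # and the optimisations real systems use.
      for tokens in (1_000, 10_000, 100_000):
          pairs = tokens * tokens  # every token attends to every other token
          print(f"{tokens:>7} tokens -> {pairs:>16,} attention pairs")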

  • An AI-embedded business

    Some thoughts about what it means to be an AI-embedded business. Also, small language models are coming into favour for focused tasks, where they can be more accurate and more efficient than general-purpose large models. The approach is to combine them with large language models in hybrid AI systems. That seems to be the direction small businesses will take, and therefore something we need to keep abreast of.

  • A more personalised way to learn with NotebookLM

    I came across an interesting piece by AI Maker explaining how he uses NotebookLM to learn in a more personalised way. He suggests that learning improves dramatically when we control our sources, shape the content into formats that suit us, and then test ourselves deliberately.

    This approach fits neatly with our existing work and is worth experimenting with. Participants in our labs might appreciate being able to gain confidence with NotebookLM whilst, as a by-product, learning about something else of value to them.

    The approach is outlined below.


    Find high‑quality sources

    Learning needs good source material. NotebookLM’s Discover feature helps by scanning the web for material relevant to our topic and filtering out noise. The goal is a clean, reliable starting set.

    We can reinforce that by using Perplexity AI:

    1. Use Perplexity’s Deep Research to gather a solid set of articles, videos, documents, and case studies.
    2. Export the citations as raw links.
    3. Import those links directly into NotebookLM as our base sources.
    4. Use NotebookLM’s Discover feature to expand, refine, and diversify the set.

    We should aim for varied perspectives: Reddit for beginner intuition, YouTube for demonstrations, official documentation for depth, enterprise case studies for realism.

    Build sources into multiple formats

    Once our sources are loaded into NotebookLM, we can shape their content into formats to suit how we like to learn.

    A focused and well-structured report helps us understand faster. Here are three effective techniques:

    1. Anchor new ideas to familiar systems. Ask NotebookLM to explain the concept by contrasting it with something you already know.
    2. Layer complexity progressively. Tell NotebookLM to start with a plain‑language explanation, then add the underlying processes, then technical detail.
    3. Use a structured four‑pass model. Request versions for beginner, intermediate, advanced, and expert levels so you can climb the ladder rather than jump in halfway.

    Audio is ideal for learning on the move and for reinforcement. NotebookLM’s podcast generator can be shaped deliberately:

    • Beginner interviewing expert for clear explanations and basic intuition.
    • Expert debate to highlight competing approaches and trade‑offs.
    • Expert critique of the source material to expose over-simplifications or gaps in your understanding.

    Short structured video explainers are helpful for visual learners. We can prompt NotebookLM to create comparison tables, workflows, or mistake‑prevention checklists that would be tedious to build ourselves.

    Test to expose the gaps

    NotebookLM’s flashcards and quizzes can help consolidate what has been learnt.


  • Testing a local AI

    I am using Obsidian to build not a second brain, but a workspace for my brain: a space in which to think. The workspace is intended to become an ideas factory, designed as a knowledge network: a network of ideas at different stages of development.

    There is scope for AI technology to enhance my thinking in that space. For example, ideas in the knowledge network can be embedded into a vector database, roughly similar to how an AI organises knowledge. This allows similarities among widely scattered notes — the semantic connections — to be highlighted, and lets me search not just for specific words but also for ideas with related meaning. The Smart Connections plugin implements these useful capabilities.
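
    For the curious, a minimal sketch of the underlying idea is below. It uses the sentence-transformers library and invented note contents; it is not how Smart Connections actually does it, just the general pattern of embedding notes and ranking them by semantic closeness to a query.

      # Minimal sketch: embed a few notes, then rank them against a query.
      from sentence_transformers import SentenceTransformer, util

      notes = {
          "ideas/context-management.md": "Goal-directed context management keeps an AI collaborator focused.",
          "ideas/second-brain.md": "Obsidian as a workspace for thinking, not an archive.",
          "daily/monday.md": "Reminder: prune the apple trees this weekend.",
      }

      model = SentenceTransformer("all-MiniLM-L6-v2")   # a small local embedding model
      note_embeddings = model.encode(list(notes.values()), convert_to_tensor=True)

      query = "How do I keep the AI focused on the task at hand?"
      query_embedding = model.encode(query, convert_to_tensor=True)
      scores = util.cos_sim(query_embedding, note_embeddings)[0]

      # Notes ranked by semantic closeness to the query, regardless of shared words.
      for name, score in sorted(zip(notes, scores), key=lambda p: float(p[1]), reverse=True):
          print(f"{float(score):.2f}  {name}")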

    I can take this a step further by inviting an AI into my thinking space. Ollama lets me run a large language model on my laptop and connect it to my Obsidian vault through the Smart Connections plugin, so the AI can query, summarise, and respond directly from my notes. I downloaded the Mistral 7B Q4_K_M model for this purpose. Put simply, it’s a quantised (compressed) version of the seven-billion-parameter model that runs faster and uses less memory, at the cost of some accuracy.

    I ran two tests. First, I pointed it to a short daily note and asked, “What is Claire’s new job?” After a pause of about half a minute, it gave me the correct answer.
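
    For the record, the bare mechanics of that first test look roughly like the sketch below when you bypass Obsidian and talk to Ollama’s local HTTP API directly. The model tag and the note text are stand-ins; "ollama list" shows the exact tag of whatever has been pulled.

      # Direct call to the local Ollama server; model tag and note text are stand-ins.
      import requests

      note_text = "(contents of the daily note go here)"

      payload = {
          "model": "mistral",   # stand-in for the 7B Q4_K_M variant
          "prompt": ("Using only the note below, answer the question.\n\n"
                     "Note:\n" + note_text + "\n\nQuestion: What is Claire's new job?"),
          "stream": False,      # return the whole answer as one JSON object
      }

      response = requests.post("http://localhost:11434/api/generate", json=payload, timeout=600)
      response.raise_for_status()
      print(response.json()["response"])   # the model's answer, drawn from the note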

    For the second test, I pointed it to a more substantial document and asked, “What is the purpose of Goal-Directed Context Management?” The reply took eight minutes, but the answer was very good:

    The purpose of Goal-Directed Context Management, as outlined in the provided document, is to manage a large language model (such as ChatGPT) effectively when it’s being used as a collaborator during the production or editing of knowledge-based products.

    This framework aims to configure and adapt the behavior of the AI to suit different roles over the course of work, while also providing scaffolding in the form of project and program management techniques to keep the AI focused and effective over the lifecycle of an information-based product.

    By aligning the AI’s evolving context with the purpose and structure of a project or program, Goal-Directed Context Management ensures that the AI always has the right information to contribute effectively to the current stage of work.

    The test confirmed three things: I could run an AI locally to keep my data private, the model gave accurate responses, and, although performance was slow on my three-year-old Dell XPS, it still worked. Watching my own laptop pull a thoughtful answer from my notes was oddly satisfying, even if I had time to make tea while it thought.

  • Mind Maps, Podcasts, and a Pocket Brain

    Lately I’ve been testing a simple idea: can AI turn Obsidian into a genuinely useful second brain for me? The answer was a complicated but resounding “Yes”.

    My first step was to find an experienced Obsidian user whose ideas resonated with mine. This turned out to be a YouTuber whose channel has over fifteen videos relating his personal experience of building a second brain and offering advice on all aspects of Obsidian for that purpose.

    I satisfied myself that this approach would be a good basis from which to develop my own, but didn’t have the time to watch every video in order to benefit from their content. I needed a quick and efficient way to fast-track that process. Step forward NotebookLM.

    One of the great things about NotebookLM is that you can give it fifteen YouTube videos and then have a conversation about their content. The discussion can encompass the content of one, several, or all of the videos. To help you structure the conversation, NotebookLM can produce a mind map setting out all the concepts or ideas contained in the videos. 

    On top of that, to help you reflect on these ideas while strolling round the park after work, the AI can produce an audio overview. This takes the form of a podcast-style discussion between two hosts, and you can set the ground rules for their discussion, for example the focus points, audience, and technical level. Listen in for yourself.

    Intriguingly, the discussion is interactive when you listen while connected to the AI: you can join in to ask questions or steer the discussion in a particular direction.

    With the big picture in place, the next step was the hands-on work of shaping Obsidian to fit my needs. That will be the subject of my next post, where I’ll dig into the practicalities of building it and explore how a local AI might give my second brain extra intelligence without compromising its privacy.