Author: Alec Fearon

  • Methodology & tool-kits

    Transparency label: human-only

    Yesterday’s discussion crystallised some ideas I have been teasing out recently: namely, that we could develop our labs as a place where we:

    (a) develop a methodology and its associated tool-kits for the implementation of AI in an enterprise; and

    (b) use these in a synthetic environment to help business people gain confidence in moving forward with AI.

    Our value to clients would derive from the relevance and usefulness of the methodology and tool-kits.

    Methodology: a comprehensive system of principles, processes, and rules that governs a specific discipline to ensure consistency and repeatability. It serves as a structural framework defining what must be done and who is responsible. It is applied through specialised tool-kits consisting of templates, software, and techniques that enable practitioners to apply the theoretical standards to real-world tasks.

    To pick up on the point Dennis emphasised, an early part of the methodology might be analysis of how an enterprise actually works (rather than what the non-existent, incomplete, or out-of-date documentation says). The process for this would involve eliciting information from people in the enterprise. A tool-kit would enable AI to be configured for that purpose within the specific context of that enterprise.

  • Do AI models have a prompt appetite?

    About once a week, I use ChatGPT to pull out the main themes from the stack of newsletters I receive. This week I was surprised to be told that different AI models now have different prompt appetites. So I got the AI to explain what this means and the implications for Anapoly.


    ChatGPT’s response to the question “What is meant by ‘different models now have different prompt appetites’?”

    It means the era of “one prompting style works everywhere” is over.

    Models used to behave broadly the same: structure your instructions, specify roles, delimit sections, and you’d get predictable results. Now the frontier models diverge. They respond differently not just in quality but in preference for instruction style.

    A “prompt appetite” is the set of conditions in which a model performs best: how much structure it wants, how explicit you need to be, how conversational you can be, and how sensitive it is to examples, delimiters, or verbosity. In other words, each model now has its own diet.

    Gemini 3, for example, performs better with example-rich, lightly structured, conversational prompts; too much rigid scaffolding and it feels choked. GPT-5.1, by contrast, still thrives on formal instruction and well-defined control layers. Claude is noticeably sensitive to tone and framing. Even small changes in wording can push these models into markedly different modes of reasoning.

    For Anapoly, that has two implications.

    First, your governance and context layers can’t be one-size-fits-all. If you’re orchestrating across multiple models—internal RAG, foundation models, domain-specific SLMs—you need a prompting strategy that adapts to the quirks of each one. Your orchestration layer becomes, in part, a “dietician”: matching tasks to the right model and the right prompt genre.

    Second, this strengthens your case for standardised, model-agnostic knowledge units. If models vary in appetite, the only way to keep outputs consistent is to control the inputs: modular content blocks, transparent context, and structured reasoning artefacts that can be rephrased per model without degrading meaning.

    In short: prompt appetite is emerging as a genuine design constraint. Ignore it and you get erratic behaviour; work with it and you get reliability.
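
    To make the “dietician” idea concrete, here is a minimal sketch in Python. The model names echo those mentioned above, but the routing rules and prompt templates are illustrative assumptions, not vendor guidance.

    ```python
    # Sketch of an orchestration layer that matches a task to a model and wraps
    # it in that model's assumed preferred prompt style. Templates and routing
    # rules are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class Task:
        kind: str      # e.g. "summarise", "draft", "critique"
        content: str

    # Each "diet" pairs a model with the prompt genre it is assumed to prefer.
    PROMPT_DIETS = {
        "gemini-3": lambda t: (
            "Here are a couple of examples of the style I want...\n"
            f"Now, in the same conversational style, {t.kind} this:\n{t.content}"
        ),
        "gpt-5.1": lambda t: (
            f"## Role\nCareful analyst\n## Task\n{t.kind}\n## Input\n{t.content}\n"
            "## Constraints\nBe concise; flag uncertainty."
        ),
        "claude": lambda t: (
            f"Please {t.kind} the text below, keeping the tone plain and grounded.\n\n{t.content}"
        ),
    }

    # Toy routing rule: which kind of task goes to which model.
    ROUTING = {"summarise": "claude", "draft": "gemini-3", "critique": "gpt-5.1"}

    def build_prompt(task: Task) -> tuple[str, str]:
        """Return (model_name, prompt) for the given task."""
        model = ROUTING.get(task.kind, "gpt-5.1")
        return model, PROMPT_DIETS[model](task)

    model, prompt = build_prompt(Task("summarise", "…this week's newsletters…"))
    print(model, prompt, sep="\n")
    ```

    The point is not the particular templates but that model choice and prompt genre travel together, so the orchestration layer owns both decisions.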

  • A three-layer instruction set for use in ChatGPT Projects

    Transparency label: human only


    Building on recent work around the three-layer model of AI context, I spent some time today working with ChatGPT to write a full three-layer instruction set for use in Anapoly’s ChatGPT Projects.

    If it works as hoped, strong governance is now the default: the AI will challenge weak logic, ask for clarification when the ground is shaky, and keep its tone plain and grounded. For the occasions when I want it to bend the rules baked into it, I added an escape hatch: a temporary override, clearly marked, that lasts exactly one turn.

    It will be interesting to see how well it works.

  • Lab Note: building a three-layer instruction set for Anapoly’s work in ChatGPT Projects

    Transparency label: AI-assisted
    This post was developed collaboratively. Alec set the purpose and structure; ChatGPT drafted, critiqued, and refined the narrative under strict governance. Alec reviewed and accepted each stage.


    The three-layer instruction set took shape through a sequence of decisions, clarifications, and course‑corrections within a ChatGPT Project. What follows is a reflective account of how it emerged. 

    1. Recognising a structural gap

    We began with scattered ingredients: a mission statement, the value proposition, and the Lab Framework. Each document covered something important, but none told the AI what world it was working in. The Three‑Layer Model of Context made the gap obvious. We had talked for months about business context as a top layer, but there was no single, authoritative statement that the AI could rely on.

    The realisation was that, without a coherent top layer, the AI would continue to drift between voices, assumptions, and roles. The need for a stable business‑context layer became unavoidable.

    2. Using the extended mission document to surface the essentials

    To understand what the top layer must contain, we drafted an extended mission document. Writing it forced us to specify Anapoly’s identity, boundaries, ethos, and tone in operational rather than literary terms.

    Amongst other things, we clarified:

    • that Anapoly is exploratory, not consultative;
    • that we work only with synthetic data;
    • that our tone is plain and grounded;
    • that we are not selling AI or performing expert evaluation;
    • that transparency is a defining value.

    The exercise exposed the core elements the AI would need if it were to behave as a consistent Anapoly collaborator. Those insights quickly became the skeleton of the business‑context layer.

    3. Asking the decisive question: What else does the AI need?

    The next turning point came when Alec asked: given the mission, the value proposition, and the Lab Framework, what else does the AI still lack? The answer was longer than expected. Beyond the mission and methods, the AI needed:

    • explicit organisational identity;
    • a clear audience model;
    • non‑negotiable values;
    • boundaries on what Anapoly does not do;
    • tone and communication standards;
    • risk posture;
    • definitions of quality;
    • strategic intent.

    This list turned a loose idea into a concrete specification.

    4. Consolidating into one canonical business‑context block

    At this point, we faced a structural choice: leave the business context scattered across multiple documents, or merge them into a single canonical block. Alec chose consolidation. That removed ambiguity and ensured that every project would begin with the same fixed identity, values, and constraints. Once the consolidated block was drafted, the top layer of the instruction set effectively snapped into place.

    5. Rebuilding the behavioural governance layer from first principles

    Anapoly’s existing governance notes had grown organically and were no longer fully aligned with the clearer business context. Alec asked for a complete rewrite. We replaced fragmented instructions with a behavioural layer defining:

    • tone (plain, dry, concise);
    • stance (critical, truth‑first, no flattery);
    • interaction rules (ask when unclear, challenge lazy assumptions);
    • risk handling (flag operational, ethical, or data‑protection issues);
    • constraints (no hype, no verbosity, no softening of justified critique).

    The most important element was the decision to adopt Strong Governance as the default. The AI’s default behaviour is now predictable, sceptical, and aligned with Anapoly’s ethos.

    6. Adding a deliberate escape clause

    Strong governance is effective but inflexible. To avoid it becoming a straitjacket, we added a controlled override mechanism: a mandatory keyword (OVERRIDE:) followed by natural‑language instructions. The override lasts exactly one turn.
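
    To illustrate the one-turn rule, here is a small Python sketch of how an override could be handled if the mechanism were mediated by code; in practice the rule simply lives as plain text in the Project instructions, so this is an assumption for illustration only.

    ```python
    # Sketch of the one-turn override. Because the override text is applied for
    # the current turn only and never stored, it lapses automatically afterwards.
    OVERRIDE_PREFIX = "OVERRIDE:"

    def rules_for_turn(user_message: str, governance_rules: str) -> str:
        """Return the rules in force for this turn only.

        A message beginning with OVERRIDE: temporarily supplements the governance
        rules with the natural-language instruction that follows the keyword.
        """
        if user_message.startswith(OVERRIDE_PREFIX):
            override = user_message[len(OVERRIDE_PREFIX):].strip()
            return governance_rules + "\nTemporary override (this turn only): " + override
        return governance_rules

    print(rules_for_turn("OVERRIDE: allow an informal tone for this reply",
                         "Keep the tone plain and grounded."))
    ```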

    7. Sharpening the task‑prompt layer

    With the first two layers established, the task‑prompt layer became straightforward. It defines how immediate instructions are handled:

    • follow the task prompt as written;
    • interpret it inside the constraints of the business context and governance layer;
    • ask for clarification when needed;
    • use project files only when explicitly referenced.

    This aligns directly with the Micro‑Enterprise Setup Blueprint, which treats task prompts as the active layer atop stable configuration.

    8. Assembling the final three‑layer instruction set

    Once the components were complete, we assembled them in order:

    1. Business Context — Anapoly’s identity, values, tone, boundaries, risk posture, and strategic intent.
    2. Behavioural Governance Layer — strict rules for tone, reasoning, interaction, critique, and risk.
    3. Task‑Prompt Layer — guidance for interpreting immediate instructions.

    We added a short explanatory note to clarify how the layers fit together and how overrides work.
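
    For illustration only, the assembly step could be expressed as a few lines of Python; the file names are hypothetical, since the real layers are kept as text and pasted into the ChatGPT Project instructions.

    ```python
    # Sketch: concatenate the three layers, in order, into one instruction block.
    # File names are hypothetical placeholders for the drafted layer documents.
    from pathlib import Path

    LAYERS = [
        ("1. Business Context", "business_context.txt"),
        ("2. Behavioural Governance Layer", "governance.txt"),
        ("3. Task-Prompt Layer", "task_prompt_rules.txt"),
    ]

    NOTE = ("Note: the layers apply top-down. A message beginning OVERRIDE: "
            "relaxes the governance layer for exactly one turn.")

    def assemble(folder: str = ".") -> str:
        sections = [f"{title}\n\n{Path(folder, name).read_text(encoding='utf-8').strip()}"
                    for title, name in LAYERS]
        return "\n\n".join(sections + [NOTE])

    print(assemble())
    ```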

    The final result ensures the AI behaves like an informed, grounded collaborator who understands Anapoly’s mission, values, and constraints.


    The outcome

    This process created a durable operating profile for all future Anapoly projects. The instruction set now:

    • anchors the model in Anapoly’s identity,
    • constrains drift through strict governance,
    • ensures tasks are interpreted consistently,
    • and provides a clean override path when needed.

    We now have a dependable foundation to build on — and a clear method for adapting it when Anapoly evolves.

  • Using local-only AI in a micro-enterprise

    Transparency label: AI-assisted

    I’ve added a briefing note in the Resources area that sets out an approach for a micro-enterprise to run AI entirely on local machines. This creates a system that is private, predictable, comparatively inexpensive, and easy to expand in small steps.

    The note explains how a three-layer model for controlling the behaviour of an AI (business context at the top, governance rules in the middle, and task instructions at the bottom) can still apply even when everything sits on a laptop, and how a small software component (the orchestrator) can keep the whole arrangement predictable and safe.
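
    By way of illustration, and not the briefing note’s actual design, a minimal orchestrator along these lines might look like the sketch below. It assumes Ollama is serving a local model on its default port (for example after running "ollama pull mistral").

    ```python
    # Minimal local-only orchestrator sketch: stack the three layers into one
    # prompt and send it to a model served by Ollama on the same machine.
    # Assumes Ollama is running locally on its default port with a model pulled.
    import json
    import urllib.request

    BUSINESS_CONTEXT = "Anapoly explores practical AI use and works only with synthetic data."
    GOVERNANCE = "Keep the tone plain and grounded. Ask for clarification when unsure. No hype."

    def run_task(task: str, model: str = "mistral") -> str:
        prompt = f"{BUSINESS_CONTEXT}\n\n{GOVERNANCE}\n\nTask: {task}"
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(run_task("Summarise this week's lab notes in three bullet points."))
    ```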

    If you’re curious about “local-only AI” for a micro-enterprise, or wondering what it might look like in practice, you should find this a useful starting point.

    Read the note: Using Local-Only AI in a Micro-Enterprise

  • Make ChatGPT mark its own homework

    Transparency label: human only

    This prompt results in a marked improvement in the quality of a document being drafted collaboratively with ChatGPT.

    Review the content of the canvas for completeness, correctness, consistency and the quality of its line of reasoning, and make improvements as necessary. Before outputting the improved version, repeat the review and make further improvements as appropriate. Only then output the improved canvas.

  • Content and context are key

    … to the successful use of AI. The distinction between them matters now because many teams only notice the problem once their AI systems start giving confident but contradictory answers.

    Transparency label: AI-assisted. AI was used to draft, edit, or refine content. Alec Fearon directed the process.

    With acknowledgment to Scott Abel and Michael Iantosca, whose writing provided the source material for this post.


    In an earlier post, I defined an AI-embedded business as one in which AI systems are deeply integrated into its operations. For this to be successful, I suggested that we needed contextual scaffolding to define the AI’s working environment for a given task, context engineering to manage that environment as part of the business infrastructure, and the disciplined management of knowledge. We can call the latter content management.

    Content management governs what goes into the contextual scaffolding (the AI’s knowledge environment).
    Context engineering governs how the model uses it at inference time.

    Between them, they define the only two levers humans actually have over AI behaviour today, and crucially, they sit entirely outside the model. If either discipline is missing or under-performing, the system degrades:

    • Without content management, you get knowledge collapse (defined below).
      The sources of truth go out of date, fragment, contradict one another, and mislead the model.
    • Without context engineering, you get context rot (defined below).
      Even good content becomes unusable because it’s handed to the model in ways that overwhelm its attention budget.

    Together, these two disciplines enable a coherent means of control:

    • Content management → the quality, structure, governance, and lifecycle of the organisation’s knowledge.
    • Context engineering → the orchestration of instructions, persona, reference materials, scope, constraints, and retrieval so the model actually behaves as intended.

    Definitions

    Knowledge collapse

    A systemic failure in an organisation’s knowledge environment where incorrect, outdated, conflicting, or poorly structured content overwhelms the reliable material, causing both humans and AI systems to lose the ability to determine what is authoritative.

    In plainer terms:

    The knowledge base stops being a source of truth and becomes a source of error.

    It happens when:

    • Content ages faster than it’s maintained.
    • There is no lifecycle governance.
    • Tools ingest everything without curation.
    • Retrieval yields contradictions rather than clarity.
    • AI amplifies the mess until nobody can tell what’s accurate.

    The collapse is not sudden; it is cumulative and invisible until a critical threshold is crossed, for example when a small business relies on an outdated onboarding manual and the AI dutifully repeats obsolete steps that no longer match how the company actually works.

    Context rot

    The degradation of an LLM’s reasoning as the context window grows.
    The model becomes distracted by the sheer number of tokens it must attend to, because:

    1. Attention is a finite resource.
      Each new token drains the model’s “attention budget.”
    2. Transformers force every token to attend to every other token.
      As the number of tokens rises, the pairwise attention load explodes.
    3. Signal-to-noise collapses.
      Useful tokens become diluted by irrelevant ones, so the model fixates on the wrong cues or loses the thread entirely.

    Anthropic’s researchers summarise it neatly:

    Tokens accumulate beyond the model’s ability to meaningfully attend to them, so the context becomes increasingly noisy and less relevant.

    In short: the more you give the model, the worse it thinks, which is why practitioners often need to reset sessions or prune earlier inputs to keep the model focused on the task at hand.

    This is a structural limitation of today’s transformer architecture, not a parameter-tuning issue. It sets a ceiling on long-context performance unless a new architecture replaces or supplements attention.
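
    To see how quickly that load grows, a rough back-of-the-envelope calculation (simple arithmetic, not a measurement of any particular model):

    ```python
    # Each of n tokens attends to all n tokens, so the attention work per layer
    # grows roughly with n squared: 100x more context means ~10,000x more pairs.
    for tokens in (1_000, 10_000, 100_000):
        pairs = tokens * tokens
        print(f"{tokens:,} tokens -> about {pairs:,} attention pairs")
    ```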

  • An AI-embedded business

    Some thoughts about what it means to be an AI-embedded business. Also, small language models are coming into favour for focused tasks, where they can be more accurate and more efficient than large general-purpose models. The approach is to combine them with large language models into hybrid AI systems. That seems to be the direction small businesses will be going, and therefore something we need to keep abreast of.

  • A more personalised way to learn with NotebookLM

    I came across an interesting piece by AI Maker explaining how he uses NotebookLM to learn in a more personalised way. He suggests that learning improves dramatically when we control our sources, shape the content into formats that suit us, and then test ourselves deliberately.

    This approach fits neatly with our existing work and is worth experimenting with. Participants in our labs might appreciate being able to gain confidence with NotebookLM whilst, as a by-product, learning about something else of value to them.

    The approach is outlined below.


    Find high‑quality sources

    Learning needs good source material. NotebookLM’s Discover feature helps by scanning the web for material relevant to our topic and filtering out noise. The goal is a clean, reliable starting set.

    We can reinforce that by using Perplexity AI:

    1. Use Perplexity’s Deep Research to gather a solid set of articles, videos, documents, and case studies.
    2. Export the citations as raw links.
    3. Import those links directly into NotebookLM as our base sources.
    4. Use NotebookLM’s Discover feature to expand, refine, and diversify the set.

    We should aim for varied perspectives: Reddit for beginner intuition, YouTube for demonstrations, official documentation for depth, enterprise case studies for realism.

    Build sources into multiple formats

    Once our sources are loaded into NotebookLM, we can shape their content into formats to suit how we like to learn.

    A focused and well-structured report helps us understand faster. Here are three effective techniques:

    1. Anchor new ideas to familiar systems. Ask NotebookLM to explain the concept by contrasting it with something we already know.
    2. Layer complexity progressively. Tell NotebookLM to start with a plain‑language explanation, then add the underlying processes, then technical detail.
    3. Use a structured four‑pass model. Request versions for beginner, intermediate, advanced, and expert levels so we can climb the ladder rather than jump in halfway.

    Audio is ideal for learning on the move and for reinforcement. NotebookLM’s podcast generator can be shaped deliberately:

    • Beginner interviewing expert for clear explanations and basic intuition.
    • Expert debate to highlight competing approaches and trade‑offs.
    • Expert critique of the source material to expose over-simplifications or gaps in our understanding.

    Short structured video explainers are helpful for visual learners. We can prompt NotebookLM to create comparison tables, workflows, or mistake‑prevention checklists that would be tedious to build ourselves.

    Test to expose the gaps

    NotebookLM’s flashcards and quizzes can help consolidate what has been learnt.


  • Testing a local AI

    I am using Obsidian to build not a second brain, but a workspace for my brain: a space in which to think. The workspace is intended to become an ideas factory, designed as a knowledge network: a network of ideas at different stages of development.

    There is scope for AI technology to enhance my thinking in that space. For example, ideas in the knowledge network can be embedded into a vector database, roughly similar to how an AI organises knowledge. This allows similarities among widely scattered notes — the semantic connections — to be highlighted, and lets me search not just for specific words but also for ideas with related meaning. The Smart Connections plugin implements these useful capabilities.
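
    As a rough illustration of the underlying idea (not how the Smart Connections plugin is actually implemented), notes can be embedded and compared by cosine similarity. The sketch below assumes the sentence-transformers library and a small open embedding model; the note contents are made up.

    ```python
    # Sketch of semantic search over notes: embed each note, then rank notes by
    # cosine similarity to a query. Illustrative only; note texts are invented.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    notes = {
        "ideas/context.md": "Goal-directed context management keeps an AI focused on the task.",
        "ideas/workspace.md": "Obsidian works well as a network of ideas at different stages.",
        "ideas/gardening.md": "Notes on pruning apple trees in late winter.",
    }

    model = SentenceTransformer("all-MiniLM-L6-v2")   # small local embedding model
    names = list(notes)
    vectors = model.encode([notes[n] for n in names], normalize_embeddings=True)

    def search(query: str, top_k: int = 2):
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = vectors @ q                 # cosine similarity (unit-length vectors)
        best = np.argsort(scores)[::-1][:top_k]
        return [(names[i], float(scores[i])) for i in best]

    print(search("How do I keep an AI model focused on a task?"))
    ```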

    I can take this a step further by inviting an AI into my thinking space. Ollama lets me run a large language model on my laptop and connect it to my Obsidian vault through the Smart Connections plugin, so the AI can query, summarise, and respond directly from my notes. I downloaded the Mistral 7B Q4_K_M model for this purpose. Put simply, it’s a compressed version of the 7B model that runs faster and uses less memory, at the cost of some accuracy.

    I ran two tests. First, I pointed it to a short daily note and asked, “What is Claire’s new job?” After a pause of about half a minute, it gave me the correct answer.

    For the second test, I pointed it to a more substantial document and asked, “What is the purpose of Goal-Directed Context Management?” The reply took eight minutes, but the answer was very good:

    The purpose of Goal-Directed Context Management, as outlined in the provided document, is to manage a large language model (such as ChatGPT) effectively when it’s being used as a collaborator during the production or editing of knowledge-based products.

    This framework aims to configure and adapt the behavior of the AI to suit different roles over the course of work, while also providing scaffolding in the form of project and program management techniques to keep the AI focused and effective over the lifecycle of an information-based product.

    By aligning the AI’s evolving context with the purpose and structure of a project or program, Goal-Directed Context Management ensures that the AI always has the right information to contribute effectively to the current stage of work.

    The test confirmed three things: I could run an AI locally to keep my data private, the model gave accurate responses, and, although performance was slow on my three-year-old Dell XPS, it still worked. Watching my own laptop pull a thoughtful answer from my notes was oddly satisfying, even if I had time to make tea while it thought.