Author: Alec Fearon

  • Using local-only AI in a micro-enterprise

    Transparency label: AI-assisted

    I’ve added a briefing note in the Resources area that sets out an approach for a micro-enterprise to run AI entirely on local machines. This creates a system that is private, predictable, comparatively inexpensive, and easy to expand in small steps.

    The note explains how a three-layer model for controlling the behaviour of an AI (business context at the top, governance rules in the middle, and task instructions at the bottom) can still apply even when everything sits on a laptop, and how a small software component (the orchestrator) can keep the whole arrangement predictable and safe.

    If you’re curious about “local-only AI” for a micro-enterprise, or wondering what it might look like in practice, you should find this a useful starting point.

    Read the note: Using Local-Only AI in a Micro-Enterprise

  • Make ChatGPT mark its own homework

    Transparency label: human only

    This prompt results in a marked improvement in the quality of a document being drafted collaboratively with ChatGPT.

    Review the content of the canvas for completeness, correctness, consistency, and the quality of its line of reasoning, and make improvements as necessary. Before outputting the improved version, repeat the review and make further improvements as appropriate. Only then output the improved canvas.

  • Content and context are key

    … to successful use of AI. This is a distinction that matters now because many teams only notice the problem once their AI systems start giving confident but contradictory answers.

    Transparency label: AI-assisted. AI was used to draft, edit, or refine content. Alec Fearon directed the process.

    With acknowledgment to Scott Abel and Michael Iantosca, whose writing provided the source material for this post.


    In an earlier post, I defined an AI-embedded business as one in which AI systems are deeply integrated into its operations. For this to be successful, I suggested that we needed contextual scaffolding to define the AI’s working environment for a given task, context engineering to manage that environment as part of the business infrastructure, and the disciplined management of knowledge. We can call the latter – the disciplined management of knowledge – content management. 

    Content management governs what goes into the contextual scaffolding (the AI’s knowledge environment).
    Context engineering governs how the model uses it at inference time.

    Between them, they define the only two levers humans actually have over AI behaviour today, and crucially, they sit entirely outside the model. If either discipline is missing or under-performing, the system degrades:

    • Without content management, you get knowledge collapse (defined below).
      The sources of truth become out of date, fragment, contradict, and mislead the model.
    • Without context engineering, you get context rot (defined below).
      Even good content becomes unusable because it’s handed to the model in ways that overwhelm its attention budget.

    Together, these two disciplines enable a coherent means of control:

    • Content management → the quality, structure, governance, and lifecycle of the organisation’s knowledge.
    • Context engineering → the orchestration of instructions, persona, reference materials, scope, constraints, and retrieval so the model actually behaves as intended.
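
    To make the second lever concrete, context engineering can be pictured as the deliberate assembly of the model's working context. The sketch below is illustrative only: the function name is hypothetical, and a word count stands in for a real token budget.

    ```python
    def build_context(persona: str, instructions: str, references: list,
                      task: str, budget_words: int = 3000) -> str:
        """Assemble the model's working context in priority order. If the
        budget is exceeded, drop the lowest-priority reference material
        first; persona, instructions, and task are always kept."""
        refs = list(references)

        def total_words(parts):
            return sum(len(p.split()) for p in parts)

        while refs and total_words([persona, instructions, *refs, task]) > budget_words:
            refs.pop()  # discard the least important reference first
        return "\n\n".join([persona, instructions, *refs, task])
    ```

    The design point is that the orchestration is explicit and inspectable: what the model sees, and in what order, is a decision rather than an accident.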

    Definitions

    Knowledge collapse

    A systemic failure in an organisation’s knowledge environment where incorrect, outdated, conflicting, or poorly structured content overwhelms the reliable material, causing both humans and AI systems to lose the ability to determine what is authoritative.

    In plainer terms:

    The knowledge base stops being a source of truth and becomes a source of error.

    It happens when:

    • Content ages faster than it’s maintained.
    • There is no lifecycle governance.
    • Tools ingest everything without curation.
    • Retrieval yields contradictions rather than clarity.
    • AI amplifies the mess until nobody can tell what’s accurate.

    The collapse is not sudden; it is cumulative and invisible until a critical threshold is crossed, for example when a small business relies on an outdated onboarding manual and the AI dutifully repeats obsolete steps that no longer match how the company actually works.

    Context rot

    The degradation of an LLM’s reasoning as the context window grows.
    The model becomes distracted by the sheer number of tokens it must attend to, because:

    1. Attention is a finite resource.
      Each new token drains the model’s “attention budget.”
    2. Transformers force every token to attend to every other token.
      As the number of tokens rises, the pairwise attention load grows quadratically.
    3. Signal-to-noise collapses.
      Useful tokens become diluted by irrelevant ones, so the model fixates on the wrong cues or loses the thread entirely.
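
    The quadratic load in point 2 is easy to quantify: with full self-attention, n tokens imply n × n pairwise comparisons, so a tenfold increase in context length means roughly a hundredfold increase in attention work. A toy calculation makes the scaling visible:

    ```python
    # The quadratic cost of full self-attention: every one of n tokens
    # attends to every other token, giving n * n pairwise comparisons.
    def attention_pairs(n_tokens: int) -> int:
        return n_tokens * n_tokens

    # Growing the context tenfold multiplies the attention work a hundredfold.
    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} tokens -> {attention_pairs(n):,} pairwise comparisons")
    ```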

    Anthropic’s researchers summarise it neatly:

    Tokens accumulate beyond the model’s ability to meaningfully attend to them, so the context becomes increasingly noisy and less relevant.

    In short: the more you give the model, the worse it thinks, which is why practitioners often need to reset sessions or prune earlier inputs to keep the model focused on the task at hand.

    This is a structural limitation of today’s transformer architecture, not a parameter-tuning issue. It sets a ceiling on long-context performance unless a new architecture replaces or supplements attention.

  • An AI-embedded business

    Some thoughts about what it means to be an AI-embedded business. Also, small language models are coming into favour for focused tasks, where they can be more accurate and more efficient than their larger counterparts. The approach is to combine them with large language models into hybrid AI systems. That seems to be the direction small businesses will be going, and therefore something we need to keep abreast of.

  • A more personalised way to learn with NotebookLM

    I came across an interesting piece by AI Maker explaining how he uses NotebookLM to learn in a more personalised way. He suggests that learning improves dramatically when we control our sources, shape the content into formats that suit us, and then test ourselves deliberately.

    This approach fits neatly with our existing work and is worth experimenting with. Participants in our labs might appreciate being able to gain confidence with NotebookLM whilst, as a by-product, learning about something else of value to them.

    The approach is outlined below.


    Find high‑quality sources

    Learning needs good source material. NotebookLM’s Discover feature helps by scanning the web for material relevant to our topic and filtering out noise. The goal is a clean, reliable starting set.

    We can reinforce that by using Perplexity AI:

    1. Use Perplexity’s Deep Research to gather a solid set of articles, videos, documents, and case studies.
    2. Export the citations as raw links.
    3. Import those links directly into NotebookLM as our base sources.
    4. Use NotebookLM’s Discover feature to expand, refine, and diversify the set.

    We should aim for varied perspectives: Reddit for beginner intuition, YouTube for demonstrations, official documentation for depth, enterprise case studies for realism.

    Build sources into multiple formats

    Once our sources are loaded into NotebookLM, we can shape their content into formats to suit how we like to learn.

    A focused and well-structured report helps us understand faster. Here are three effective techniques:

    1. Anchor new ideas to familiar systems. Ask NotebookLM to explain the concept by contrasting it with something you already know.
    2. Layer complexity progressively. Tell NotebookLM to start with a plain‑language explanation, then add the underlying processes, then technical detail.
    3. Use a structured four‑pass model. Request versions for beginner, intermediate, advanced, and expert levels so you can climb the ladder rather than jump in halfway.

    Audio is ideal for learning on the move and for reinforcement. NotebookLM’s podcast generator can be shaped deliberately:

    • Beginner interviewing expert for clear explanations and basic intuition.
    • Expert debate to highlight competing approaches and trade‑offs.
    • Expert critique of the source material to expose over-simplifications or gaps in your understanding.

    Short structured video explainers are helpful for visual learners. We can prompt NotebookLM to create comparison tables, workflows, or mistake‑prevention checklists that would be tedious to build ourselves.

    Test to expose the gaps

    NotebookLM’s flashcards and quizzes can help consolidate what has been learnt.


  • Testing a local AI

    I am using Obsidian to build not a second brain, but a workspace for my brain: a space in which to think. The workspace is intended to become an ideas factory, designed as a knowledge network: a network of ideas at different stages of development.

    There is scope for AI technology to enhance my thinking in that space. For example, ideas in the knowledge network can be embedded into a vector database, roughly similar to how an AI organises knowledge. This allows similarities among widely scattered notes — the semantic connections — to be highlighted, and lets me search not just for specific words but also for ideas with related meaning. The Smart Connections plugin implements these useful capabilities.
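
    As an illustration of the mechanism (not the plugin's actual implementation), semantic search over notes reduces to comparing embedding vectors by cosine similarity; the note titles and vectors below are made up:

    ```python
    import math

    def cosine(a, b):
        """Cosine similarity between two equal-length embedding vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    def nearest_notes(query_vec, note_vecs, k=3):
        """Return the titles of the k notes whose embeddings are most
        similar in meaning to the query embedding."""
        ranked = sorted(note_vecs,
                        key=lambda title: cosine(query_vec, note_vecs[title]),
                        reverse=True)
        return ranked[:k]
    ```

    In practice the vectors come from an embedding model; the point here is only that "related meaning" reduces to "nearby vectors".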

    I can take this a step further by inviting an AI into my thinking space. Ollama lets me run a large language model on my laptop and connect it to my Obsidian vault through the Smart Connections plugin, so the AI can query, summarise, and respond directly from my notes. I downloaded the Mistral 7B Q4_K_M model for this purpose. Put simply, it’s a compressed version of the 7B model that runs faster and uses less memory, at the cost of some accuracy.
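
    For readers curious about the plumbing, a model served by Ollama can also be queried directly over its local HTTP API. The helper below is a sketch: it assumes `ollama serve` is running on the default port with the Mistral model pulled.

    ```python
    import json
    import urllib.request

    def build_payload(prompt: str, model: str = "mistral") -> dict:
        """Request body for Ollama's /api/generate endpoint."""
        return {"model": model, "prompt": prompt, "stream": False}

    def ask_local_model(prompt: str, model: str = "mistral") -> str:
        """Send a prompt to a locally running Ollama server and return its reply.
        Assumes `ollama serve` is running and the model has been pulled."""
        data = json.dumps(build_payload(prompt, model)).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=data,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]
    ```

    The Smart Connections plugin handles this wiring for you; the sketch only shows that nothing leaves the machine.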

    I ran two tests. First, I pointed it to a short daily note and asked, “What is Claire’s new job?” After a pause of about half a minute, it gave me the correct answer.

    For the second test, I pointed it to a more substantial document and asked, “What is the purpose of Goal-Directed Context Management?” The reply took eight minutes, but the answer was very good:

    The purpose of Goal-Directed Context Management, as outlined in the provided document, is to manage a large language model (such as ChatGPT) effectively when it’s being used as a collaborator during the production or editing of knowledge-based products.

    This framework aims to configure and adapt the behavior of the AI to suit different roles over the course of work, while also providing scaffolding in the form of project and program management techniques to keep the AI focused and effective over the lifecycle of an information-based product.

    By aligning the AI’s evolving context with the purpose and structure of a project or program, Goal-Directed Context Management ensures that the AI always has the right information to contribute effectively to the current stage of work.

    The test confirmed three things: I could run an AI locally to keep my data private, the model gave accurate responses, and, although performance was slow on my three-year-old Dell XPS, it still worked. Watching my own laptop pull a thoughtful answer from my notes was oddly satisfying, even if I had time to make tea while it thought.

  • Mind Maps, Podcasts, and a Pocket Brain

    Lately I’ve been testing a simple idea: can AI turn Obsidian into a genuinely useful second brain for me? The answer was a complicated but resounding “Yes”.

    My first step was to find an experienced Obsidian user whose ideas resonated with mine. This turned out to be a YouTuber whose channel has over fifteen videos recounting his personal experience of building a second brain, and offering advice about all aspects of Obsidian for that purpose.

    I satisfied myself that this approach would be a good basis from which to develop my own, but didn’t have the time to watch every video in order to benefit from their content. I needed a quick and efficient way to fast-track that process. Step forward NotebookLM.

    One of the great things about NotebookLM is that you can give it fifteen YouTube videos and then have a conversation about their content. The discussion can encompass the content of one, several, or all of the videos. To help you structure the conversation, NotebookLM can produce a mind map setting out all the concepts or ideas contained in the videos. 

    On top of that, to help you reflect on these ideas while strolling round the park after work, the AI can produce an audio overview. This takes the form of a podcast-style discussion between two hosts, and you can set the ground rules for their discussion, for example the focus points, audience, technical level. Listen in for yourself.

    Intriguingly, the discussion is interactive when you listen while connected to the AI: you can join in to ask questions or steer the discussion in a particular direction.

    With the big picture in place, the next step was the hands-on work of shaping Obsidian to fit my needs. That will be the subject of my next post, where I’ll dig into the practicalities of building it and explore how a local AI might give my second brain extra intelligence without compromising its privacy.

  • The art of goal-directed context management

    with acknowledgement to Erling S. Andersen, whose book Goal Directed Project Management inspired me in 1988

    Transparency label: human only


    How do we control an AI?

    To use an AI effectively, we have to configure its behaviour to suit the purpose of our work. We do this by giving it information in the form of instructions, guidance, questions, and reference material. We put this information into the AI’s context to govern how it behaves. See What is context? for a more detailed explanation; it is a concept worth understanding well.

    To apply a standard behaviour from the AI across all chats, we use custom instructions.

    During individual chats, we control the AI through prompts, the canvas, and file uploads (prompts can include instructions, guidance and questions).

    To control the AI’s behaviour for a group of chats organised as a project, we do it through project instructions.

    To make information available to all chats in a project, we upload project files.

    Finally, if we need the services of an AI specialist, we create a form of agent (called a GPT). We configure the agent with bespoke instructions to make its behaviour appropriate to its specialist role, and we upload reference files to give it a knowledge base. Our agent can then be called in as necessary to provide specialist advice through chats.

    This method of controlling the AI by putting information into its context is similar to the way we control a computer by giving it a program to run. But there is an important difference: a computer executes the instructions in its program precisely; an AI (large language model) interprets the information we give it. We should aim, therefore, to give the AI all the information it needs to avoid losing focus. A vague prompt gets a vague answer. A precise prompt with the right background information gets us something much more useful.

    There are similarities with the way we manage a human. The human has a store of learned knowledge to call upon, can be given instructions on how to behave during a task, can use reference material, can converse with others, and can co-edit documents with them. Like an AI, the human interprets the information obtained by these means.

    Managing scale & complexity

    As the scale or complexity of a task grows, it becomes increasingly difficult to coordinate the work of humans to achieve a common goal. To overcome this problem, we use project and programme methodologies to structure the work; these provide a form of scaffolding to let the work be carried out safely and effectively. The scaffolding is made strong and stable through a plan which – amongst other things – divides the work into manageable packages, defines who or what will do the work, when and how it will be done, and what work outputs are required.

    Work whose purpose is well defined and carries an acceptable risk of failure can be organised as a project. The scaffolding for this takes the form of the project plan and related documentation, including external standards. Members of the project team are given roles, which can vary during the project, and assigned tasks. The detailed information they need to carry out their work successfully is held in the project scaffolding. When the work has greater scope and/or complexity, it can be organised as a programme. In this case the scaffolding is provided by the programme management plan and related documentation.

    AI as a team member

    When AI is used by a member of a project team, its characteristics make it more like an additional member of the team than a software tool. The AI, like the human, must be given roles and assigned tasks. It, too, must obtain detailed information from the project scaffolding. Unlike a human, however, the AI cannot yet go and fetch the information it needs; the human user must provide it. The human does so by putting relevant information into the AI’s context.

    Information required for input to the AI’s context over the course of the project is held in a part of the project scaffolding called the contextual scaffolding. Although closely integrated with other parts of the project scaffolding, the contextual scaffolding has a distinct purpose: to hold the information – AI artefacts – needed to manage the behaviour and effectiveness of the AI over the course of the project. The contextual scaffolding is the responsibility of the project’s AI manager.

    AI artefacts

    The nature, required content, and production timetable for AI artefacts are governed by and tied to the project plan. Artefacts in the contextual scaffolding include:

    • AI policy
    • AI management plan
    • an AI startup pack to configure the AI at the outset of the project, when its objective will be to outline the contextual scaffolding and help to produce an initial set of AI artefacts;
    • stage packs tailored to the varying uses intended for the AI over the course of the project (as with other project artefacts, these begin life in outline form and gain content as the project progresses);
    • GPT packs to create the specialist AI agents that are needed.

    The art of goal-directed context management

    The art of goal-directed context management is to align the AI’s evolving context with the purpose and structure of the project or programme it is associated with, ensuring the AI always has the right information, at the right level of detail, to contribute effectively to the current stage of work.

  • ChatGPT-5 Availability and Features


    Ray passed on this useful summary of the new features in ChatGPT-5 and their availability to the various tiers of user.

  • Creating a video from reference documents

    Transparency label: this post is Human-only; the video is AI-only

    In an earlier post, I explained how I used NotebookLM to update our Contextual Scaffolding Framework to allow for new capabilities in ChatGPT-5. In that same session, I accidentally clicked on the “Video overview” function, and this is what NotebookLM produced. I did not give it any instructions about focus or audience, and it was working only with the then-current version of my Contextual Scaffolding Framework and the ChatGPT-5 Prompting Guide. I think it’s remarkably good. See what you think.