Author: Alec Fearon

  • Mind Maps, Podcasts, and a Pocket Brain

    Lately I’ve been testing a simple idea: can AI turn Obsidian into a genuinely useful second brain for me? The answer was a complicated but resounding “Yes”.

    My first step was to find an experienced Obsidian user whose ideas resonated with mine. This turned out to be a YouTuber called Callum (aka Wanderloots). Callum’s channel has over fifteen videos relating his personal experience of building a second brain and offering advice on all aspects of using Obsidian for that purpose.

    I satisfied myself that Callum’s approach would be a good basis from which to develop my own, but I didn’t have time to watch every video to absorb their content. I needed a quick and efficient way to fast-track that process. Step forward NotebookLM.

    One of the great things about NotebookLM is that you can give it fifteen YouTube videos and then have a conversation about their content. The discussion can encompass the content of one, several, or all of the videos. To help you structure the conversation, NotebookLM can produce a mind map setting out all the concepts or ideas contained in the videos. 

    On top of that, to help you reflect on these ideas while strolling round the park after work, the AI can produce an audio overview. This takes the form of a podcast-style discussion between two hosts, and you can set the ground rules for their discussion: the focus points, the audience, the technical level, and so on. Listen in for yourself.

    Intriguingly, the discussion is interactive while you’re connected to the AI: you can join in to ask questions or steer the discussion in a particular direction.

    With the big picture in place, the next step was the hands-on work of shaping Obsidian to fit my needs. That will be the subject of my next post, where I’ll dig into the practicalities of building it and explore how a local AI might give my second brain extra intelligence without compromising its privacy.

  • The art of goal-directed context management

    with acknowledgement to Erling S. Andersen, whose book Goal Directed Project Management inspired me in 1988

    Transparency label: human-only


    How do we control an AI?

    To use an AI effectively, we have to configure its behaviour to suit the purpose of our work. We do this by giving it information in the form of instructions, guidance, questions, and reference material. We put this information into the AI’s context to govern how it behaves. See What is context? for a more detailed explanation; it is a concept worth understanding well.

    To apply a standard behaviour across all chats, we use custom instructions.

    During individual chats, we control the AI through prompts, the canvas, and file uploads (prompts can include instructions, guidance and questions).

    To control the AI’s behaviour across a group of chats organised as a project, we use project instructions.

    To make information available to all chats in a project, we upload project files.

    Finally, if we need the services of an AI specialist, we create a form of agent (called a GPT). We configure the agent with bespoke instructions to make its behaviour appropriate to its specialist role, and we upload reference files to give it a knowledge base. Our agent can then be called in as necessary to provide specialist advice through chats.

    This method of controlling the AI by putting information into its context is similar to the way we control a computer by giving it a program to run. But there is an important difference: a computer executes the instructions in its program precisely; an AI (large language model) interprets the information we give it. We should aim, therefore, to give the AI all the information it needs to avoid losing focus. A vague prompt gets a vague answer. A precise prompt with the right background information gets us something much more useful.
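
    In the ChatGPT interface this layering is handled for us, but for readers who like to see the mechanics, here is a minimal sketch of how the control mechanisms above might be assembled into a single request using the OpenAI Python SDK. The instructions, file name, and model choice are illustrative assumptions, not prescriptions.

        from openai import OpenAI

        client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

        # Standing behaviour, analogous to custom instructions.
        custom_instructions = "Write in plain British English. Be concise."

        # Guidance for this body of work, analogous to project instructions.
        project_instructions = "You are assisting with a competitive tender."

        # Background material, analogous to an uploaded project file
        # (hypothetical file name, for illustration only).
        reference_material = open("project_brief.md").read()

        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system", "content": custom_instructions},
                {"role": "system", "content": project_instructions},
                {"role": "user", "content": f"Reference material:\n{reference_material}"},
                # The prompt itself: the task for this particular chat.
                {"role": "user", "content": "Draft an outline for the tender response."},
            ],
        )
        print(response.choices[0].message.content)

    Everything the model knows about our intent is in that list of messages: change the list and you change the behaviour, which is exactly the sense in which the context plays the part of a program.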

    There are similarities with the way we manage a human. The human has a store of learned knowledge to call upon, can be given instructions on how to behave during a task, can use reference material, can converse with others, and can co-edit documents with them. Like an AI, the human interprets the information obtained by these means.

    Managing scale & complexity

    As the scale or complexity of a task grows, it becomes increasingly difficult to coordinate the work of humans to achieve a common goal. To overcome this problem, we use project and programme methodologies to structure the work; these provide a form of scaffolding to let the work be carried out safely and effectively. The scaffolding is made strong and stable through a plan which – amongst other things – divides the work into manageable packages, defines who or what will do the work, when and how it will be done, and what work outputs are required.

    Work whose purpose is well defined and carries an acceptable risk of failure can be organised as a project. The scaffolding for this takes the form of the project plan and related documentation, including external standards. Members of the project team are given roles, which can vary during the project, and assigned tasks. The detailed information they need to carry out their work successfully is held in the project scaffolding. When the work has greater scope and/or complexity, it can be organised as a programme. In this case the scaffolding is provided by the programme management plan and related documentation.

    AI as a team member

    When AI is used by a member of a project team, its characteristics make it more like an additional member of the team than a software tool. The AI, like the human, must be given roles and assigned tasks. It, too, must obtain detailed information from the project scaffolding. Unlike a human, however, the AI cannot yet go and fetch the information it needs; the human user must provide it, and does so by putting relevant information into the AI’s context.

    Information required for input to the AI’s context over the course of the project is held in a part of the project scaffolding called the contextual scaffolding. Although closely integrated with other parts of the project scaffolding, the contextual scaffolding has a distinct purpose: to hold the information – AI artefacts – needed to manage the behaviour and effectiveness of the AI over the course of the project. The contextual scaffolding is the responsibility of the project’s AI manager.

    AI artefacts

    The nature, required content, and production timetable for AI artefacts are governed by and tied to the project plan. Artefacts in the contextual scaffolding include:

    • AI policy;
    • AI management plan;
    • an AI startup pack to configure the AI at the outset of the project, when its objective will be to outline the contextual scaffolding and help to produce an initial set of AI artefacts;
    • stage packs tailored to the varying uses intended for the AI over the course of the project (as with other project artefacts, these begin life in outline form and gain content as the project progresses);
    • GPT packs to create the specialist AI agents that are needed.
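
    To make this concrete, here is a minimal sketch of how a stage pack might be assembled from the contextual scaffolding at the start of a stage. The folder layout and file names are hypothetical; the point is only that the artefacts are ordinary documents, gathered and put into the AI’s context when the plan calls for them.

        from pathlib import Path

        # Hypothetical layout; real scaffolding would follow the project plan.
        SCAFFOLDING = Path("contextual_scaffolding")

        def load_stage_pack(stage: str) -> str:
            """Gather the artefacts that configure the AI for the current stage."""
            artefacts = [
                SCAFFOLDING / "ai_policy.md",
                SCAFFOLDING / "ai_management_plan.md",
                SCAFFOLDING / "stage_packs" / f"{stage}.md",
            ]
            return "\n\n".join(
                a.read_text(encoding="utf-8") for a in artefacts if a.exists()
            )

        # The assembled pack is then supplied to the AI, for example as
        # project instructions or an uploaded project file.
        print(load_stage_pack("startup"))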

    The art of goal-directed context management

    The art of goal-directed context management is to align the AI’s evolving context with the purpose and structure of the project or programme it is associated with, ensuring the AI always has the right information, at the right level of detail, to contribute effectively to the current stage of work.

  • ChatGPT-5 Availability and Features


    Ray passed on this useful summary of the new features in ChatGPT-5 and their availability to the various tiers of user.

  • Creating a video from reference documents

    Transparency label: this post is Human-only; the video is AI-only

    In an earlier post, I explained how I used NotebookLM to update our Contextual Scaffolding Framework to allow for new capabilities in ChatGPT-5. In that same session, I accidentally clicked on the “Video overview” function, and this is what NotebookLM produced. I did not give it any instructions about focus or audience, and it was working only with the then-current version of my Contextual Scaffolding Framework and the ChatGPT-5 Prompting Guide. I think it’s remarkably good. See what you think.

  • GPT-5, the Router, and the Road to a SuperApp

    Transparency label: AI-only

    OpenAI’s latest release, GPT-5, isn’t just about new features or smarter answers; it’s a strategic move to turn ChatGPT’s 700 million free users into a sustainable business. The real engine behind this shift is the new “router” system, which decides in real time which AI model to use for each request.

    SemiAnalysis has an insightful breakdown of how this technology could underpin a new monetisation model: ChatGPT as a “SuperApp” that can act as your purchasing agent, make bookings, and complete transactions—all without sending you to a search engine.

    Read a summary of the key points here.

  • Contextual Scaffolding Framework updated for ChatGPT-5

    I put our Contextual Scaffolding Framework and OpenAI’s GPT-5 Prompting Cookbook into NotebookLM and asked it “What aspects of the gpt-5 prompting cookbook are most important to know in order to apply the contextual scaffolding framework most effectively?”

    It gave me a sensible set of prompting strategies, so I told it to integrate them into an updated Contextual Scaffolding Framework for ChatGPT-5.

    I’ve given the updates a cursory review; they look good, apart from a reference to use of an API (Application Programming Interface), which is probably outside our scope. But it’s late; a proper review will have to wait for another day. Perhaps a job to do in collaboration with ChatGPT-5.

    Is contextual scaffolding a worthwhile concept? The findings from some research with Perplexity suggest it is:

    Contextual scaffolding is not only still applicable to ChatGPT-5, it is more effective and occasionally more necessary, due to ChatGPT-5’s increased context window, steerability, and the complexity of its reasoning capabilities. The consensus among thought leaders is that scaffolding remains best practice for directing AI behavior, ensuring relevance, and achieving quality outcomes in collaborative tasks. Leveraging both new features (custom instructions, preset personas, automatic reasoning mode selection) and established scaffolding techniques is recommended to get the best results. The trend is towards combining sophisticated context guidance with the model’s own adaptive reasoning for “human+AI” workflows.

  • An ethos of caring

    Transparency label: human-only

    On 7 August 2025, Anapoly ran a trial acclimatisation lab. One of the participants was a member of staff from Marjon University. He liked our approach to the use of AI and, in later discussion, suggested the possibility of a collaboration with the university.

    After exploring some of the options for this, the conversation became a bit philosophical. It touched on the ethics of AI, the risk that students might outsource their thinking, the need to imbue students with values of benefit to society, and the need for them to have an ethos of caring about how the human-AI relationship evolves.

    This prompted me to begin thinking about the possibility of exploring these aspects of the human-AI interaction in more detail. I set up this digital garden for that purpose.

  • ChatGPT 5 …

    Transparency label: human-only

    … was released today. I have been using it to help with my writing. It is very good – markedly better than earlier versions – yet not as reliable as 4o. Commentators put this down to problems with autoswitching, and teething problems with rolling it out at scale.

    Autoswitching in ChatGPT 5 refers to the system’s ability to automatically select the most suitable reasoning mode—quick response or deeper analysis—without requiring users to manually pick a model. When you type a query, ChatGPT 5 uses an intelligent router to assess complexity and intent, deciding whether to answer fast or engage more thorough reasoning (called “GPT-5 Thinking”) for harder problems. This streamlines the experience by combining previous models into a unified, smarter, and faster AI that adapts behind the scenes for optimal results.
    [source: Perplexity, citing OpenAI and others]
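
    OpenAI has not published how the router works, but a toy sketch conveys the idea: estimate how demanding a query is, then send it to a fast model or a reasoning model accordingly. The signals, threshold, and model names below are invented for illustration.

        def estimate_complexity(query: str) -> float:
            """A crude stand-in for the router's assessment of complexity and intent."""
            signals = ["prove", "analyse", "compare", "plan", "step by step"]
            score = sum(1 for s in signals if s in query.lower())
            return score + len(query) / 500  # longer queries tend to be harder

        def route(query: str) -> str:
            """Choose a fast path for simple queries, deeper reasoning for hard ones."""
            return "gpt-5-thinking" if estimate_complexity(query) >= 1.0 else "gpt-5-fast"

        print(route("What is the capital of France?"))            # fast path
        print(route("Plan a phased migration of our CRM system"))  # reasoning path

    The real router presumably weighs far richer signals, but the shape of the decision – assess, then pick a mode behind the scenes – is as described above.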

  • First acclimatisation lab

    Transparency label: Human-only

    We ran our first acclimatisation lab yesterday, a trial session with two external participants. There were aspects which can be improved, but overall the feedback was very positive.

    This was the session plan.

  • Contextual Scaffolding for AI Work

    Transparency label: AI-assisted

    In an earlier post, I introduced the idea of “contextual systems engineering”. Building on that idea, we are developing a way to manage collaborative work with AI — especially where the goal of the collaboration is a knowledge-based product: for example a report, a competitive tender, an academic paper, or a policy framework.

    What we have come up with is the idea of a Contextual Scaffolding Framework. The framework combines two models:

    • A phase model to provide an overall structure for the work of producing the product, from its concept through to operational use.
    • A project model to structure the detailed work within each phase.

    The principle is simple: if we want AI to stay helpful and relevant, we need to give it the right information. The framework helps us decide what information to provide and when.

    The information we provide is put into the AI’s “context” — and the context must evolve to keep pace with the work. Like a satnav updating in real time, the contextual scaffolding keeps the AI aware of where you are, what matters most, and how best to move forward.

    🧱 Read the full framework here


    Transparency label justification. The post was drafted by ChatGPT based on Alec’s prior work in a long chat, with Alec guiding tone and purpose. ChatGPT proposed structure and language, but all content reflects and links back to a human-authored framework.