{"id":877,"date":"2025-11-18T17:56:33","date_gmt":"2025-11-18T17:56:33","guid":{"rendered":"https:\/\/anapoly.co.uk\/labs\/?p=877"},"modified":"2025-11-18T18:20:20","modified_gmt":"2025-11-18T18:20:20","slug":"content-and-context-are-key-to-successful-use-of-ai","status":"publish","type":"post","link":"https:\/\/anapoly.co.uk\/labs\/content-and-context-are-key-to-successful-use-of-ai\/","title":{"rendered":"Content and context are key"},"content":{"rendered":"\n<p>&#8230; to successful use of AI. This is a distinction that matters now because many teams only notice the problem once their AI systems start giving confident but contradictory answers.<\/p>\n\n\n\n<p class=\"is-style-text-annotation is-style-text-annotation--1\">Transparency label: AI-assisted. AI was used to draft, edit, or refine content. Alec Fearon directed the process.<\/p>\n\n\n\n<p>With acknowledgment to <a href=\"https:\/\/anapoly.co.uk\/labs\/thought-leaders\/#scott-abel\">Scott Abel<\/a> and&nbsp;<a href=\"https:\/\/anapoly.co.uk\/labs\/thought-leaders\/#michael-iantosca\">Michael Iantosca<\/a>, whose writing provided the source material for this post.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p><a href=\"https:\/\/anapoly.co.uk\/labs\/an-ai-embedded-business\/\" data-type=\"post\" data-id=\"874\">In an earlier post<\/a>, I defined an AI-embedded business as one in which AI systems are deeply integrated into its operations. For this to be successful, I suggested that we needed contextual scaffolding to define the AI\u2019s working environment for a given task, context engineering to manage that environment as part of the business infrastructure, and the disciplined management of knowledge. 
We can call the latter &#8211; the disciplined management of knowledge &#8211; content engineering.&nbsp;<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>Content engineering<\/em>&nbsp;governs <em>what goes into<\/em> the contextual scaffolding (the AI&#8217;s knowledge environment).<br><em>Context engineering<\/em> governs <em>how the model uses it<\/em> at inference time.<\/p>\n<\/blockquote>\n\n\n\n<p>Between them, they are the only two levers humans actually have over AI behaviour today, and crucially, they sit entirely outside the model.&nbsp;If either discipline is missing or under-performing, the system degrades:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Without <strong>content engineering<\/strong>, you get <strong>knowledge collapse<\/strong>&nbsp;(defined below).<br>The sources of truth become out of date, fragment, contradict, and mislead the model.<\/li>\n\n\n\n<li>Without <strong>context engineering<\/strong>, you get <strong>context rot<\/strong>&nbsp;(defined below).<br>Even good content becomes unusable because it\u2019s handed to the model in ways that overwhelm its attention budget.<\/li>\n<\/ul>\n\n\n\n<p>Together, these two disciplines enable a coherent means of control:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Content engineering \u2192 the quality, structure, governance, and lifecycle of the organisation\u2019s knowledge.<\/li>\n\n\n\n<li>Context engineering \u2192 the orchestration of instructions, persona, reference materials, scope, constraints, and retrieval so the model actually behaves as intended.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Definitions<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">Knowledge collapse<\/h3>\n\n\n\n<p><em>A systemic failure in an organisation\u2019s knowledge environment where incorrect, outdated, conflicting, or poorly structured content overwhelms the reliable 
material, causing both humans and AI systems to lose the ability to determine what is authoritative.<\/em><\/p>\n\n\n\n<p>In plainer terms:<\/p>\n\n\n\n<p>The knowledge base stops being a source of truth and becomes a source of error.<\/p>\n\n\n\n<p>It happens when:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Content ages faster than it\u2019s maintained.<\/li>\n\n\n\n<li>There is no lifecycle governance.<\/li>\n\n\n\n<li>Tools ingest everything without curation.<\/li>\n\n\n\n<li>Retrieval yields contradictions rather than clarity.<\/li>\n\n\n\n<li>AI amplifies the mess until nobody can tell what\u2019s accurate.<\/li>\n<\/ul>\n\n\n\n<p>The collapse is not sudden; it is <strong>cumulative and invisible<\/strong> until a critical threshold is crossed, for example when a small business relies on an outdated onboarding manual and the AI dutifully repeats obsolete steps that no longer match how the company actually works.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Context rot<\/h3>\n\n\n\n<p>The degradation of an LLM\u2019s reasoning as the context window grows.<br>The model becomes <strong>distracted<\/strong> by the sheer number of tokens it must attend to, because:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Attention is a finite resource.<\/strong><br>Each new token drains the model\u2019s \u201cattention budget.\u201d<\/li>\n\n\n\n<li><strong>Transformers force every token to attend to every other token.<\/strong><br>As the number of tokens rises, the pairwise attention load grows quadratically.<\/li>\n\n\n\n<li><strong>Signal-to-noise collapses.<\/strong><br>Useful tokens become diluted by irrelevant ones, so the model fixates on the wrong cues or loses the thread entirely.<\/li>\n<\/ol>\n\n\n\n<p>Anthropic\u2019s researchers summarise it neatly:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p><em>Tokens accumulate beyond the model\u2019s ability to meaningfully attend to them, so the context becomes 
increasingly noisy and less relevant.<\/em><\/p>\n<\/blockquote>\n\n\n\n<p>In short: <strong>the more you give the model, the worse it thinks<\/strong>, which is why practitioners often need to reset sessions or prune earlier inputs to keep the model focused on the task at hand.<\/p>\n\n\n\n<p>This is a structural limitation of today\u2019s transformer architecture, not a parameter-tuning issue. It sets a ceiling on long-context performance unless a new architecture replaces or supplements attention.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8230; to successful use of AI. This is a distinction that matters now because many teams only notice the problem once their AI systems start giving confident but contradictory answers. Transparency label: AI-assisted. AI was used to draft, edit, or refine content. Alec Fearon directed the process. With acknowledgment to Scott Abel and&nbsp;Michael Iantosca, whose [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[476,477,253],"class_list":["post-877","post","type-post","status-publish","format-standard","hentry","category-diary","tag-content-engineering","tag-context-engineering","tag-contextual-scaffolding"],"_links":{"self":[{"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/posts\/877","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/comments?post=877"}],"version-history":[{"count":4,"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/posts\/877\/revisions"}],"predecessor-version":[{"id":884,"href":"https:\/\/anapoly.co.uk\/l
abs\/wp-json\/wp\/v2\/posts\/877\/revisions\/884"}],"wp:attachment":[{"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/media?parent=877"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/categories?post=877"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/anapoly.co.uk\/labs\/wp-json\/wp\/v2\/tags?post=877"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}