Tag: human-AI interaction

  • The art of goal-directed context management

    with acknowledgement to Erling S. Andersen, whose book Goal Directed Project Management inspired me in 1988

    Transparency label: human only


    How do we control an AI?

    To use an AI effectively, we have to configure its behaviour to suit the purpose of our work. We do this by giving it information in the form of instructions, guidance, questions, and reference material. We put this information into the AI’s context to govern how it behaves. See What is context? for a more detailed explanation; it is a concept worth understanding well.

    To apply a standard behaviour from the AI across all chats, we use custom instructions.

    During individual chats, we control the AI through prompts, the canvas, and file uploads (prompts can include instructions, guidance and questions).

    To control the AI’s behaviour for a group of chats organised as a project, we use project instructions.

    To make information available to all chats in a project, we upload project files.

    Finally, if we need the services of an AI specialist, we create a form of agent (called a GPT). We configure the agent with bespoke instructions to make its behaviour appropriate to its specialist role, and we upload reference files to give it a knowledge base. Our agent can then be called in as necessary to provide specialist advice through chats.

    This method of controlling the AI by putting information into its context is similar to the way we control a computer by giving it a program to run. But there is an important difference: a computer executes the instructions in its program precisely; an AI (large language model) interprets the information we give it. We should aim, therefore, to give the AI all the information it needs to avoid losing focus. A vague prompt gets a vague answer. A precise prompt with the right background information gets us something much more useful.

    There are similarities with the way we manage a human. The human has a store of learned knowledge to call upon, can be given instructions on how to behave during a task, can use reference material, can converse with others, and can co-edit documents with them. Like an AI, the human interprets the information obtained by these means.

    Managing scale & complexity

    As the scale or complexity of a task grows, it becomes increasingly difficult to coordinate the work of humans to achieve a common goal. To overcome this problem, we use project and programme methodologies to structure the work; these provide a form of scaffolding to let the work be carried out safely and effectively. The scaffolding is made strong and stable through a plan which – amongst other things – divides the work into manageable packages, defines who or what will do the work, when and how it will be done, and what work outputs are required.

    Work whose purpose is well defined and carries an acceptable risk of failure can be organised as a project. The scaffolding for this takes the form of the project plan and related documentation, including external standards. Members of the project team are given roles, which can vary during the project, and assigned tasks. The detailed information they need to carry out their work successfully is held in the project scaffolding. When the work has greater scope and/or complexity, it can be organised as a programme. In this case the scaffolding is provided by the programme management plan and related documentation.

    AI as a team member

    When AI is used by a member of a project team, its characteristics make it more like an additional member of the team than a software tool. The AI, like the human, must be given roles and assigned tasks. It, too, must obtain detailed information from the project scaffolding. Unlike a human, however, the AI cannot yet go and fetch the information it needs; the human user must provide it. The human does so by putting relevant information into the AI’s context.

    Information required for input to the AI’s context over the course of the project is held in a part of the project scaffolding called the contextual scaffolding. Although closely integrated with other parts of the project scaffolding, the contextual scaffolding has a distinct purpose: to hold the information – AI artefacts – needed to manage the behaviour and effectiveness of the AI over the course of the project. The contextual scaffolding is the responsibility of the project’s AI manager.

    AI artefacts

    The nature, required content, and production timetable for AI artefacts are governed by and tied to the project plan. Artefacts in the contextual scaffolding include:

    • AI policy
    • AI management plan
    • an AI startup pack to configure the AI at the outset of the project, when its objective will be to outline the contextual scaffolding and help to produce an initial set of AI artefacts;
    • stage packs tailored to the varying uses intended for the AI over the course of the project (as with other project artefacts, these begin life in outline form and gain content as the project progresses);
    • GPT packs to create the specialist AI agents that are needed.

    The art of goal-directed context management

    The art of goal-directed context management is to align the AI’s evolving context with the purpose and structure of the project or programme it is associated with, ensuring the AI always has the right information, at the right level of detail, to contribute effectively to the current stage of work.

  • An ethos of caring

    Transparency label: human only

    On 7 August 2025, Anapoly ran a trial acclimatisation lab. One of the participants was a member of staff from Marjon University. He liked our approach to the use of AI and, in later discussion, suggested the possibility of a collaboration with the university.

    After exploring some of the options for this, the conversation became a bit philosophical. It touched on the ethics of AI, the risk that students might outsource their thinking, the need to imbue students with values of benefit to society, and the need for them to have an ethos of caring about how the human-AI relationship evolves.

    This prompted me to begin thinking about the possibility of exploring these aspects of human-AI interaction in more detail. I set up this digital garden for that purpose.

  • chat 2025-08-08

    A collaborative project?

    Transcript of a chat between Alec Fearon and ChatGPT 5 on 9 August 2025


    Alec said:

    <reference>Marjon University is interested in a collaboration with Anapoly. The collaboration promises to extend the capability of both organisations for the social good. <reference>Students are among those most at risk from the misuse of AI. <reference>There is a risk that students might outsource their thinking to AI and lose or not acquire the ability for critical thinking. <principle>There is a need to imbue students with values of benefit to society. <principle>Students need to have an ethos of caring about how the human-AI relationship evolves. <process>An LLM can be configured to support students to become wiser. <concept>Anapoly Notebook will have an area in which philosophical ideas can be explored. <task>Discuss

    ChatGPT said:

    Here’s how the elements you’ve listed fit together, and where the “LLM to support wisdom” question becomes central.


    1. Collaboration premise

    Marjon University and Anapoly are both interested in AI for social good.

    • Marjon brings access to students, an at-risk group in terms of AI misuse.
    • Anapoly brings structured experimentation methods and public transparency.
    • The shared value space is about enhancing capability without eroding human judgement.

    2. Risk frame

    • Students may “outsource” their thinking to AI, which risks weakening critical faculties.
    • The misuse risk is not just about plagiarism or misinformation — it’s about habits of mind.
    • In your own frameworks, this parallels the acclimatisation → experimentation → proof-of-concept → iteration lab stages (Lab Framework v03), but with a moral and developmental overlay.

    3. Guiding principles

    From your notes:

    • Value formation: embed values that benefit society, not just skills for employability.
    • Ethos of stewardship: care about the human–AI relationship as something to be shaped, not passively accepted.
      These align with NIST’s idea that human-AI interaction should be goal- and outcome-oriented and that context of use (technical, social, cultural) is part of the design space (taxonomy, NIST.AI.200-1).

    4. Process implication — “LLM to support wisdom”

    If the LLM is to support students in becoming wise rather than just becoming efficient, configuration needs to be deliberate. That could mean:

    • Prompt scaffolds that force reflection before answer consumption (e.g., “What reasoning steps did you take before consulting me?”).
    • Bias and value checks embedded in interactions.
    • Structured content that guides the AI to emphasise principles, trade-offs, and ethical consequences, not just factual answers (Structured Content (Not…).
    • Review stages (your Lab Framework’s “Review” and “Wrap-up”) where human participants articulate learning and value implications, not just outcomes (Lab Framework v03).

    5. Conceptual integration — Anapoly Notebook’s philosophical area

    • The philosophical space could act as a sandbox for wisdom-support configurations.
    • It would host experiments in value-aware prompting, critical-thinking patterns, and ethics-in-context scenarios.
    • Publishing these experiments — with your transparency labelling (transparency framework …) and structured lab notes (lab note structure – An…) – could both demystify the work and invite peer contribution.

    6. Discussion points for you and Marjon

    1. Scope of “wisdom” — Is it moral reasoning, long-term perspective, integration of knowledge, or something else?
    2. Measurement — How will we know if an LLM interaction genuinely promotes wiser thinking?
    3. Configuration approach — Do we embed guardrails and prompts in the LLM, or do we teach students to use general-purpose models in wise-making ways?
    4. Risk mitigation — How do we ensure “wisdom-support” does not become disguised persuasion or value-imposition?
    5. Sustainability — How will the philosophical area in Anapoly Notebook be maintained, curated, and used in teaching and research?

    If you want, I can map these into a pilot lab design showing how to test “LLM-for-wisdom” with students under your existing Lab Framework, including setup, roles, and asset flow. That would make it operational rather than aspirational.

    You said:

    To begin with at least, I think I’d like the philosophical space to focus on the guiding principles of value formation and ethos. Indeed, we might perhaps call that space values and ethos. Please comment on that.

    ChatGPT said:

    Calling the space Values and Ethos is sharper and easier to grasp than “philosophical space.”

    Advantages:

    • Plain English — avoids sounding like an abstract academic corner.
    • Signals purpose — makes it clear this is about normative grounding, not idle speculation.
    • Invites participation — “values” and “ethos” are terms students and practitioners can relate to, even if they wouldn’t enter a “philosophy” section.
    • Maps to your stated principles — it directly reflects the two you’ve prioritised: values of benefit to society and caring about the human–AI relationship.

    Potential refinements:

    • Decide whether this space is exploratory (asking “what values should guide us?”) or declarative (asserting “these are our values”). The tone and format will differ.
    • Consider whether to frame it as Values & Ethos in Human–AI Interaction — that ties it directly to your mission and helps prevent it from drifting into generic moral commentary.
    • If it becomes a standing area in Anapoly Notebook, align it with your lab note structure and transparency framework so that each entry is both open-ended and methodically documented.

    If you want, I can outline a content template for Values & Ethos posts that keeps them grounded, repeatable, and linked back to your lab process. That would stop it from becoming a loose collection of opinion pieces.

    You said:

    Yes, it will be the values and ethos in human-AI interaction. The end product will be a long-form essay, and the philosophical space, the values and ethos space, will be the home for a project to develop that long-form essay. My feeling is that this might fit the academic approach that the university might like. It could be something that students would benefit from participating in. What are your thoughts about all this?

    ChatGPT said:

    I think you’re right — framing it as a project in the Values & Ethos space with the goal of producing a long-form essay does several useful things.

    Why it works for Marjon and students:

    1. Academic compatibility – A defined research-to-publication pathway will feel familiar in a university setting. It echoes dissertation or capstone work, but in a collaborative and experimental format.
    2. Student engagement – Gives them a stake in shaping the final product, rather than just reading about it. They’d contribute insights, literature reviews, use-case critiques, and possibly their own “values tests” of LLM interactions.
    3. Structured rigour – If you wrap it in your Lab Framework (Lab Framework v03), you get planning, operation, review, and wrap-up phases. That produces an audit trail and deliverables Marjon can reference.
    4. Ethical literacy – Students don’t just hear “AI has risks”; they practise articulating what values matter and why, and apply them to concrete human–AI interaction scenarios.

    How I’d suggest shaping it:

    • Working Title – Values & Ethos in Human–AI Interaction: A Collaborative Inquiry.
    • Method – Treat it as a series of mini-labs inside the project, each exploring one principle (e.g. transparency, reciprocity, autonomy).
    • Roles – Facilitator (Anapoly), Explorers (students), Quality & Trust Steward (faculty or Anapoly lead to ensure coherence and academic integrity).
    • Assets – Each mini-lab produces a lab note (lab note structure – An…), feeding into sections of the final essay.
    • Transparency – Apply your existing classification labels (transparency framework …) so readers see where AI contributed to the thinking or drafting.

    Benefits:

    • For Marjon: a tangible, citable output demonstrating responsible AI pedagogy.
    • For students: co-authorship or acknowledgment in a public, high-quality piece of work.
    • For Anapoly: a tested format for running “wisdom-support” labs in other contexts.

    If you like, I can map this into a Values & Ethos project plan using your Lab Framework so you can present it to Marjon as both academically respectable and operationally concrete.

    You said:

    We must remember that the long-form essay, the end result, is important in its own right and is a key purpose for the work. Equally important is the fact that we’re using that end product as a vehicle for learning how to make use of AI in a real-life setting, which was, of course, the original purpose of Anapoly AI Labs.

    ChatGPT said:

    Yes — that dual purpose is the strength of it, and it’s worth making explicit from the outset:

    1. The essay as a scholarly contribution
      • Stands alone as a thoughtful, well-researched piece on Values & Ethos in Human–AI Interaction.
      • Has lasting value for Marjon, Anapoly, and a wider audience.
    2. The essay as a vehicle for applied AI learning
      • Every step — research, drafting, review — becomes a live case study in how to work productively and critically with AI.
      • Students and faculty experience the Lab Framework in action, not just as a concept.
      • Transparency labels and lab notes document how AI contributed, creating a meta-layer of insight.

    If you frame it this way when pitching to Marjon, you avoid the trap of the essay being seen as just an “output” or the process being seen as just “teaching AI skills.” You’re showing how a real intellectual task can be both:

    • a product worth publishing, and
    • a process worth replicating in other academic or professional contexts.

    I’d suggest we write the project brief with two explicit objectives:

    1. Produce a long-form essay that is a credible academic/public contribution.
    2. Demonstrate and document the effective, ethical use of AI in a collaborative research and writing process.

    If you want, I can draft that two-objective project brief in your Lab Framework style so it’s ready to drop into a Marjon proposal. That way, they see the scholarly goal and the applied-learning goal as equal, deliberate parts of the design.

    You said:

    What is the overarching aim, goal or purpose under which these two objectives sit?

    ChatGPT said:

    The overarching purpose could be framed like this:

    To explore and model how collaborative, transparent use of AI can produce work of lasting intellectual and social value.

    That umbrella is broad enough to cover:

    • the content aim — producing a serious, credible essay on Values & Ethos in Human–AI Interaction, and
    • the process aim — learning and demonstrating effective AI-assisted research and writing in a real-world, academically relevant project.

    It also ties straight back to Anapoly AI Labs’ founding purpose: using genuine, meaningful tasks to learn how to work with AI in ways that are both competent and principled.

    If you’d like, I can suggest two or three alternative phrasings so you can choose one that works both for an academic audience and for your own internal framing.

    You said:

    Yes, that will be helpful. Please do suggest some alternative phrasings.

    ChatGPT said:

    Here are three variants, each with a slightly different emphasis but covering both the essay and the applied-learning dimension:

    1. Exploratory emphasis
      To investigate how values and ethos can shape human–AI interaction, while demonstrating in practice how AI can be used collaboratively and transparently to create work of lasting academic and social worth.
    2. Applied-learning emphasis
      To use the creation of a substantive essay on values and ethos in human–AI interaction as a live, documented case study in effective, ethical, and collaborative use of AI tools.
    3. Mission-aligned emphasis
      To model the Anapoly AI Labs approach — learning through real work — by producing a long-form essay on values and ethos in human–AI interaction that stands as both a credible contribution in its own right and a demonstration of best practice in applied AI collaboration.

    Would you like me to also align your two objectives under the one you choose, so the whole structure reads as a coherent project statement?

    You said:

    Let me add this reference information into your context. Peer Assisted Learning (PALS), an initiative providing peer to peer support for students, will create an AI Support Team whose members will take part in the values and ethos project. Their wider remit will be to build a community of interest around ethical AI use.

    ChatGPT said:

    Updated saved memory

    Noted — that gives the project an immediate operational base and a built-in community element. It also means the learning objective can extend beyond the essay team into PALS’ broader peer-to-peer network.

    You said:

    Turning back to purpose, the item three, the mission-aligned emphasis, is not appropriate. One and two are both good in their separate ways, but I would like to see if we can’t merge them in some fashion. What can you suggest?

    ChatGPT said:

    Here’s a merged version that keeps the intellectual focus of (1) and the applied-learning clarity of (2):

    Proposed merged purpose
    To investigate how values and ethos can shape human–AI interaction, using the creation of a substantive long-form essay as both a credible academic contribution and a live, documented case study in the effective, ethical, and collaborative use of AI tools.

    It keeps:

    • The why — exploring values and ethos in human–AI interaction.
    • The what — producing a substantial essay with academic credibility.
    • The how — framing the work as a real-world, documented AI collaboration that others can learn from.

    Would you like me to also condense this into a one-sentence “headline” form for use at the very top of the project brief? That can help keep everything anchored.

    You said:

    Rather than simply investigate how values and ethos can shape human-AI interaction, I want us to propose a framework of values and ethos which can do that.

    ChatGPT said:

    Then the merged purpose could be tightened to reflect that generative intent:

    To propose a framework of values and ethos to guide human–AI interaction, using the creation of a substantive long-form essay as both a credible academic contribution and a live, documented case study in the effective, ethical, and collaborative use of AI tools.

    That signals:

    • Not just exploration, but delivery of a structured framework.
    • A tangible scholarly output (the essay) as the vehicle.
    • The process-as-learning dimension stays intact.

    If you want, I can also work up a short variant — something you could put in a slide or pitch to Marjon — that keeps the same meaning but is under 20 words.

    You said:

    No, I don’t like that. Go back to the previous purpose and simply replace the word investigate with the word propose.

    ChatGPT said:

    Understood. That gives us:

    To propose how values and ethos can shape human–AI interaction, using the creation of a substantive long-form essay as both a credible academic contribution and a live, documented case study in the effective, ethical, and collaborative use of AI tools.

    It keeps the original structure you liked, but makes the intent to offer something constructive — not just analyse — explicit.

    You said:

    It’s good. Please put it into a canvas with the title Purpose.

    ChatGPT said:

    Purpose
    To propose how values and ethos can shape human–AI interaction, using the creation of a substantive long-form essay as both a credible academic contribution and a live, documented case study in the effective, ethical, and collaborative use of AI tools.

  • Contextual Scaffolding Framework

    Contextual Scaffolding

    a framework


    Transparency label: AI-assisted

    Creating a valuable knowledge-based product with AI depends on two things: the AI must clearly understand what’s required, and it must stay aligned with that requirement as the work progresses. Our solution is contextual scaffolding — a structured yet lightweight method that keeps the AI grounded and relevant from start to finish.

    A key feature of this approach is that the AI collaborates in building the scaffolding itself, starting with the initial programme of work. This ensures the AI is involved from the outset, shaping the context it will work within and adapting it as the project evolves.

    Contextual scaffolding works through two complementary structures. The phase model shapes the entire lifecycle, from initial concept to operational use. The project model organises collaborative work within each phase, ensuring the AI works with current priorities, materials, and deliverables.

    Both models are deliberately flexible, applied only as firmly as the situation demands, so that structure supports progress without stifling it.


    Phase Model (Cradle-to-Grave Structure)

    The phase model provides end-to-end structure across the full lifecycle of the product.

    1. Concept — Explore potential benefits, clarify purpose, and draft a business case.
    2. Definition — Specify the intended final deliverable in sufficient detail to support structured development.
    3. Development — Develop and test the final product.
    4. Acceptance — Validate the deliverable in the hands of its intended users, ensure it meets defined needs, and bring it into use.
    5. Operation and Maintenance — Use, monitor, and refine the product until it is retired.

    Not every phase will require a full project structure, but where structured collaboration is needed, the project model applies.


    Project Model (Within-Phase Scaffolding)

    Within each phase, a project-based approach provides structure for collaborative delivery.

    1. Defined Goal

    The project is established to deliver a clear, knowledge-based product. Typically, this will be the output required from a phase.

    2. Project-Based Structure

    Work within the phase is organised into discrete stages, each with defined objectives and outputs.

    3. Stage Gates

    Progression from one stage to the next requires a quality review of outputs. Only satisfactory products allow forward movement.

    4. Product-Focused

    Each stage produces one or more interim products. Most are “technical” in nature and build towards the final deliverable; others serve to manage progress.

    5. Human-AI Collaboration

    The AI acts as a collaborator.

    • The human is the decision-maker.
    • The AI adapts to the nature of the work at hand.
    • Because contextual scaffolding keeps pace with the progress of work, the AI remains grounded and its responses stay relevant and precise.

    6. Adapted PRINCE2 Principles

    Draws selectively from PRINCE2, including:

    • Product-based planning
    • Management by exception (e.g. gates)
    • Clear roles and responsibilities
    • Embedded quality control

    7. Quality Management

    A simplified quality management process ensures control (a minimal sketch of these mechanics follows this list):

    • Each product has a defined description or acceptance criteria
    • Quality is assessed at stage gates
    • Feedback loops enable iteration and correction
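
    The sketch below is purely illustrative: it shows one way the project model’s products, acceptance criteria, and stage gates might be represented as data. The names used (Product, Stage, gate_review) are hypothetical and not part of the framework itself.

      from dataclasses import dataclass, field

      @dataclass
      class Product:
          name: str
          acceptance_criteria: list[str]          # each product has a defined description
          criteria_met: list[bool] = field(default_factory=list)

          def is_acceptable(self) -> bool:
              # a product passes only when every acceptance criterion is satisfied
              return (len(self.criteria_met) == len(self.acceptance_criteria)
                      and all(self.criteria_met))

      @dataclass
      class Stage:
          objective: str
          products: list[Product]

          def gate_review(self) -> bool:
              # stage gate: forward movement requires all products to pass quality review
              return all(p.is_acceptable() for p in self.products)

      # hypothetical example: one interim product from a definition-phase stage
      requirements_map = Product(
          name="Requirements map",
          acceptance_criteria=["All mandatory requirements listed",
                               "Obligations traced to their sources"],
          criteria_met=[True, True],
      )
      definition_stage = Stage(objective="Specify the deliverable",
                               products=[requirements_map])
      print(definition_stage.gate_review())   # True, so the next stage may begin

    The value of the sketch is the discipline it makes explicit: a stage gate passes only when every product of the stage meets its defined acceptance criteria.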

    Example: Preparing a Competitive Tender

    When preparing a competitive tender, the required knowledge product is a complete, high-quality bid submission. However, at the outset, the scope, fulfilment strategy, resource requirements, and pricing model will not be clear.

    Phase Model in Action
    The work progresses through phases – starting with a concept phase to assess feasibility and value, followed by definition (specifying scope and win themes), development (producing the actual bid), acceptance (review and sign-off), and finally operation and maintenance (post-submission follow-up or revision).

    Project Model in Each Phase
    Within each phase, a lightweight project structure is applied to guide collaboration with the AI. For example, in the definition phase, the AI helps analyse requirements, map obligations, and develop outline responses. In the development phase, it helps draft, refine, and format bid content.

    Contextual Scaffolding
    At every stage, contextual scaffolding ensures the AI is working with the right background, priorities, and current materials. Thus it can focus on what matters most, and contribute in ways that are coherent, precise, and aligned with both the tender requirements and internal strategy.

    Transparency label justification. This document was developed through structured collaboration between Alec Fearon and ChatGPT. Alec provided the core ideas, framing, and sequence. ChatGPT contributed to the organisation, refinement, and drafting of the text, under Alec’s direction. The content reflects a co-developed understanding, with human oversight and final decisions throughout.

  • An emerging discipline?

    Transparency label: AI-assisted

    In a recent post, I observed that LLMs are coming to be seen as analogous to computer operating systems, with prompts as the new application programs and context as the new user interface.

    Our precision content prompt pack is a good example of that thinking. The pack contains a set of prompts designed to take an unstructured document (notes of a meeting, perhaps) and apply structure (precision content) to the information it contains. We do this because AIs perform better if we give them structured information.

    We apply the prompts in sequence, checking the AI’s output at each step. In effect, it is a program that we run on the AI.
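
    As a minimal sketch of that idea, the code below runs a sequence of prompts through a stand-in ask_ai() helper, with a human checkpoint after each step. The helper and the prompt texts are placeholders, not the actual precision content prompt pack.

      def ask_ai(prompt: str, context: str) -> str:
          # stand-in for whichever chat interface or API is being used
          return f"[AI response to: {prompt}]"

      def run_prompt_pack(document: str, prompts: list[str]) -> str:
          working_text = document
          for step, prompt in enumerate(prompts, start=1):
              working_text = ask_ai(prompt, context=working_text)
              # human checkpoint: inspect the output before the next prompt runs
              print(f"--- step {step} ---\n{working_text}\n")
          return working_text

      # placeholder sequence, loosely echoing the idea of applying structure step by step
      pack = [
          "Identify the distinct topics in the unstructured text.",
          "Restate each topic as a labelled block of structured statements.",
          "Check the structured blocks for gaps or contradictions.",
      ]
      structured_notes = run_prompt_pack("rough meeting notes...", pack)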

    Contract-first prompting takes the idea further. It formalises the interaction between human and AI into a negotiated agreement about purpose, scope, constraints, and deliverables before any output is generated. This ensures that both sides – human and AI – share the same understanding of the work to be done. The agreement also contains compliance mechanisms (e.g. summarisation, clarifying loops, and self-testing) for quality control.
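
    A sketch of what such an agreement might contain is shown below. The field names and compliance checks are assumptions made for illustration, not a fixed specification.

      # illustrative contract, agreed before any output is generated
      contract = {
          "purpose": "Draft a one-page briefing note on contextual scaffolding.",
          "scope": "Cover the phase model and project model; exclude the tender example.",
          "constraints": ["plain English", "no more than 500 words", "British spelling"],
          "deliverables": ["briefing note", "one-paragraph summary"],
          "compliance": [
              "Summarise this contract back to me before drafting.",
              "Ask clarifying questions wherever a requirement is ambiguous.",
              "Self-test the draft against the constraints and report the result.",
          ],
      }

      # the agreed contract becomes the opening block of the AI's context
      opening_prompt = "\n".join(f"{key.upper()}: {value}" for key, value in contract.items())
      print(opening_prompt)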

    These ideas reframe the interaction between human and AI: not as simply issuing prompts, but as engineering the conditions under which the AI will perform. Not just prompting, more like contextual systems engineering.

    Transparency label justification: This diary post was drafted by Alec, with ChatGPT used to suggest edits, refine wording, and test the logic of specific formulations. Alec initiated the framing, decided the sequence of ideas, and approved the final structure and terminology.