Tag: collaboration

  • An ethos of caring

    transparency label: human only

    On 7 August 2025, Anapoly ran a trial acclimatisation lab. One of the participants was a member of staff from Marjon University. He liked our approach to the use of AI and, in later discussion, suggested the possibility of a collaboration with the university.

    After we explored some of the options, the conversation became a bit philosophical. It touched on the ethics of AI, the risk that students might outsource their thinking, the need to imbue students with values of benefit to society, and the need for them to have an ethos of caring about how the human–AI relationship evolves.

    This prompted me to begin thinking about the possibility of exploring these aspects of human–AI interaction in more detail. I set up this digital garden for that purpose.

  • chat 2025-08-08

    A collaborative project?

    Transcript of a chat between Alec Fearon and ChatGPT 5 on 9 August 2025


    Alec said:

    <reference>Marjon University is interested in a collaboration with Anapoly. The collaboration promises to extend the capability of both organisations for the social good. <reference>Students are among those most at risk from the misuse of AI. <reference>There is a risk that students might outsource their thinking to AI and lose or not acquire the ability for critical thinking. <principle>There is a need to imbue students with values of benefit to society. <principle>Students need to have an ethos of caring about how the human-AI relationship evolves. <process>An LLM can be configured to support students to become wiser. <concept>Anapoly Notebook will have an area in which philosophical ideas can be explored. <task>Discuss

    ChatGPT said:

    Here’s how the elements you’ve listed fit together, and where the “LLM to support wisdom” question becomes central.


    1. Collaboration premise

    Marjon University and Anapoly are both interested in AI for social good.

    • Marjon brings access to students, an at-risk group in terms of AI misuse.
    • Anapoly brings structured experimentation methods and public transparency.
    • The shared value space is about enhancing capability without eroding human judgement.

    2. Risk frame

    • Students may “outsource” their thinking to AI, which risks weakening critical faculties.
    • The misuse risk is not just about plagiarism or misinformation — it’s about habits of mind.
    • In your own frameworks, this parallels the acclimatisation → experimentation → proof-of-concept → iteration lab stages (Lab Framework v03), but with a moral and developmental overlay.

    3. Guiding principles

    From your notes:

    • Value formation: embed values that benefit society, not just skills for employability.
    • Ethos of stewardship: care about the human–AI relationship as something to be shaped, not passively accepted.
      These align with NIST’s idea that human–AI interaction should be goal- and outcome-oriented and that context of use (technical, social, cultural) is part of the design space (taxonomy, NIST.AI.200-1).

    4. Process implication — “LLM to support wisdom”

    If the LLM is to support students in becoming wise rather than just becoming efficient, configuration needs to be deliberate. That could mean:

    • Prompt scaffolds that force reflection before answer consumption (e.g., “What reasoning steps did you take before consulting me?”).
    • Bias and value checks embedded in interactions.
    • Structured content that guides the AI to emphasise principles, trade-offs, and ethical consequences, not just factual answers (Structured Content (Not…).
    • Review stages (your Lab Framework’s “Review” and “Wrap-up”) where human participants articulate learning and value implications, not just outcomes (Lab Framework v03).

    5. Conceptual integration — Anapoly Notebook’s philosophical area

    • The philosophical space could act as a sandbox for wisdom-support configurations.
    • It would host experiments in value-aware prompting, critical-thinking patterns, and ethics-in-context scenarios.
    • Publishing these experiments — with your transparency labelling (transparency framework …) and structured lab notes (lab note structure – An…) — could both demystify the work and invite peer contribution.

    6. Discussion points for you and Marjon

    1. Scope of “wisdom” — Is it moral reasoning, long-term perspective, integration of knowledge, or something else?
    2. Measurement — How will we know if an LLM interaction genuinely promotes wiser thinking?
    3. Configuration approach — Do we embed guardrails and prompts in the LLM, or do we teach students to use general-purpose models in wise-making ways?
    4. Risk mitigation — How do we ensure “wisdom-support” does not become disguised persuasion or value-imposition?
    5. Sustainability — How will the philosophical area in Anapoly Notebook be maintained, curated, and used in teaching and research?

    If you want, I can map these into a pilot lab design showing how to test “LLM-for-wisdom” with students under your existing Lab Framework, including setup, roles, and asset flow. That would make it operational rather than aspirational.

    You said:

    To begin with at least, I think I’d like the philosophical space to focus on the guiding principles of value formation and ethos. Indeed, we might perhaps call that space values and ethos. Please comment on that.

    ChatGPT said:

    Calling the space Values and Ethos is sharper and easier to grasp than “philosophical space.”

    Advantages:

    • Plain English — avoids sounding like an abstract academic corner.
    • Signals purpose — makes it clear this is about normative grounding, not idle speculation.
    • Invites participation — “values” and “ethos” are terms students and practitioners can relate to, even if they wouldn’t enter a “philosophy” section.
    • Maps to your stated principles — it directly reflects the two you’ve prioritised: values of benefit to society and caring about the human–AI relationship.

    Potential refinements:

    • Decide whether this space is exploratory (asking “what values should guide us?”) or declarative (asserting “these are our values”). The tone and format will differ.
    • Consider whether to frame it as Values & Ethos in Human–AI Interaction — that ties it directly to your mission and helps prevent it from drifting into generic moral commentary.
    • If it becomes a standing area in Anapoly Notebook, align it with your lab note structure and transparency framework so that each entry is both open-ended and methodically documented.

    If you want, I can outline a content template for Values & Ethos posts that keeps them grounded, repeatable, and linked back to your lab process. That would stop it from becoming a loose collection of opinion pieces.

    You said:

    Yes, it will be the values and ethos in human-AI interaction. The end product will be a long-form essay, and the philosophical space, the values and ethos space, will be the home for a project to develop that long-form essay. My feeling is that this might fit the academic approach that the university might like. It could be something that students would benefit from participating in. What are your thoughts about all this?

    ChatGPT said:

    I think you’re right — framing it as a project in the Values & Ethos space with the goal of producing a long-form essay does several useful things.

    Why it works for Marjon and students:

    1. Academic compatibility – A defined research-to-publication pathway will feel familiar in a university setting. It echoes dissertation or capstone work, but in a collaborative and experimental format.
    2. Student engagement – Gives them a stake in shaping the final product, rather than just reading about it. They’d contribute insights, literature reviews, use-case critiques, and possibly their own “values tests” of LLM interactions.
    3. Structured rigour – If you wrap it in your Lab Framework (Lab Framework v03), you get planning, operation, review, and wrap-up phases. That produces an audit trail and deliverables Marjon can reference.
    4. Ethical literacy – Students don’t just hear “AI has risks”; they practise articulating what values matter and why, and apply them to concrete human–AI interaction scenarios.

    How I’d suggest shaping it:

    • Working Title – Values & Ethos in Human–AI Interaction: A Collaborative Inquiry.
    • Method – Treat it as a series of mini-labs inside the project, each exploring one principle (e.g. transparency, reciprocity, autonomy).
    • Roles – Facilitator (Anapoly), Explorers (students), Quality & Trust Steward (faculty or Anapoly lead to ensure coherence and academic integrity).
    • Assets – Each mini-lab produces a lab note (lab note structure – An…), feeding into sections of the final essay.
    • Transparency – Apply your existing classification labels (transparency framework …) so readers see where AI contributed to the thinking or drafting.

    Benefits:

    • For Marjon: a tangible, citable output demonstrating responsible AI pedagogy.
    • For students: co-authorship or acknowledgment in a public, high-quality piece of work.
    • For Anapoly: a tested format for running “wisdom-support” labs in other contexts.

    If you like, I can map this into a Values & Ethos project plan using your Lab Framework so you can present it to Marjon as both academically respectable and operationally concrete.

    You said:

    We must remember that the long-form essay, the end result, is important in its own right and is a key purpose for the work. Equally important is the fact that we’re using that end product as a vehicle for learning how to make use of AI in a real-life setting, which was, of course, the original purpose of Anapoly AI Labs.

    ChatGPT said:

    Yes — that dual purpose is the strength of it, and it’s worth making explicit from the outset:

    1. The essay as a scholarly contribution
      • Stands alone as a thoughtful, well-researched piece on Values & Ethos in Human–AI Interaction.
      • Has lasting value for Marjon, Anapoly, and a wider audience.
    2. The essay as a vehicle for applied AI learning
      • Every step — research, drafting, review — becomes a live case study in how to work productively and critically with AI.
      • Students and faculty experience the Lab Framework in action, not just as a concept.
      • Transparency labels and lab notes document how AI contributed, creating a meta-layer of insight.

    If you frame it this way when pitching to Marjon, you avoid the trap of the essay being seen as just an “output” or the process being seen as just “teaching AI skills.” You’re showing how a real intellectual task can be both:

    • a product worth publishing, and
    • a process worth replicating in other academic or professional contexts.

    I’d suggest we write the project brief with two explicit objectives:

    1. Produce a long-form essay that is a credible academic/public contribution.
    2. Demonstrate and document the effective, ethical use of AI in a collaborative research and writing process.

    If you want, I can draft that two-objective project brief in your Lab Framework style so it’s ready to drop into a Marjon proposal. That way, they see the scholarly goal and the applied-learning goal as equal, deliberate parts of the design.

    You said:

    What is the overarching aim, goal or purpose under which these two objectives sit?

    ChatGPT said:

    The overarching purpose could be framed like this:

    To explore and model how collaborative, transparent use of AI can produce work of lasting intellectual and social value.

    That umbrella is broad enough to cover:

    • the content aim — producing a serious, credible essay on Values & Ethos in Human–AI Interaction, and
    • the process aim — learning and demonstrating effective AI-assisted research and writing in a real-world, academically relevant project.

    It also ties straight back to Anapoly AI Labs’ founding purpose: using genuine, meaningful tasks to learn how to work with AI in ways that are both competent and principled.

    If you’d like, I can suggest two or three alternative phrasings so you can choose one that works both for an academic audience and for your own internal framing.

    You said:

    Yes, that will be helpful. Please do suggest some alternative phrasings.

    ChatGPT said:

    Here are three variants, each with a slightly different emphasis but covering both the essay and the applied-learning dimension:

    1. Exploratory emphasis
      To investigate how values and ethos can shape human–AI interaction, while demonstrating in practice how AI can be used collaboratively and transparently to create work of lasting academic and social worth.
    2. Applied-learning emphasis
      To use the creation of a substantive essay on values and ethos in human–AI interaction as a live, documented case study in effective, ethical, and collaborative use of AI tools.
    3. Mission-aligned emphasis
      To model the Anapoly AI Labs approach — learning through real work — by producing a long-form essay on values and ethos in human–AI interaction that stands as both a credible contribution in its own right and a demonstration of best practice in applied AI collaboration.

    Would you like me to also align your two objectives under the one you choose, so the whole structure reads as a coherent project statement?

    You said:

    Let me add this reference information into your context. The Peer Assisted Learning Scheme (PALS), an initiative providing peer-to-peer support for students, will create an AI Support Team whose members will take part in the values and ethos project. Their wider remit will be to build a community of interest around ethical AI use.

    ChatGPT said:

    Updated saved memory

    Noted — that gives the project an immediate operational base and a built-in community element. It also means the learning objective can extend beyond the essay team into PALS’ broader peer-to-peer network.

    You said:

    Turning back to purpose, item three, the mission-aligned emphasis, is not appropriate. One and two are both good in their separate ways, but I would like to see if we can’t merge them in some fashion. What can you suggest?

    ChatGPT said:

    Here’s a merged version that keeps the intellectual focus of (1) and the applied-learning clarity of (2):

    Proposed merged purpose
    To investigate how values and ethos can shape human–AI interaction, using the creation of a substantive long-form essay as both a credible academic contribution and a live, documented case study in the effective, ethical, and collaborative use of AI tools.

    It keeps:

    • The why — exploring values and ethos in human–AI interaction.
    • The what — producing a substantial essay with academic credibility.
    • The how — framing the work as a real-world, documented AI collaboration that others can learn from.

    Would you like me to also condense this into a one-sentence “headline” form for use at the very top of the project brief? That can help keep everything anchored.

    You said:

    Rather than simply investigate how values and ethos can shape human-AI interaction, I want us to propose a framework of values and ethos which can do that.

    ChatGPT said:

    Then the merged purpose could be tightened to reflect that generative intent:

    To propose a framework of values and ethos to guide human–AI interaction, using the creation of a substantive long-form essay as both a credible academic contribution and a live, documented case study in the effective, ethical, and collaborative use of AI tools.

    That signals:

    • Not just exploration, but delivery of a structured framework.
    • A tangible scholarly output (the essay) as the vehicle.
    • The process-as-learning dimension stays intact.

    If you want, I can also work up a short variant — something you could put in a slide or pitch to Marjon — that keeps the same meaning but is under 20 words.

    You said:

    No, I don’t like that. Go back to the previous purpose and simply replace the word investigate with the word propose.

    ChatGPT said:

    Understood. That gives us:

    To propose how values and ethos can shape human–AI interaction, using the creation of a substantive long-form essay as both a credible academic contribution and a live, documented case study in the effective, ethical, and collaborative use of AI tools.

    It keeps the original structure you liked, but makes the intent to offer something constructive — not just analyse — explicit.

    You said:

    It’s good. Please put it into a canvas with the title Purpose.

    ChatGPT said:

    Purpose
    To propose how values and ethos can shape human–AI interaction, using the creation of a substantive long-form essay as both a credible academic contribution and a live, documented case study in the effective, ethical, and collaborative use of AI tools.

  • A new way of working

    transparency label: AI-assisted (see justification below)

    A reflection on where I have got to after using ChatGPT and other AIs intensively for the past six weeks.

    In earlier times, my digital workspace was built around the usual office applications – Microsoft Office or Google Workspace, and their ilk. Now, it’s different. 

    Microsoft is going down the path of embedding AI (OpenAI’s models, delivered as Copilot) into its productivity suite. I have taken a different path, choosing to use ChatGPT, Perplexity, and NotebookLM independently. My workspace is now populated by a team of AI assistants – researchers, analysts, editors, and more – sitting alongside the standard office productivity apps.

    Where I would previously have started a piece of work in Word, now I start in a ChatGPT project canvas. During the course of work, I switch between AI team members. ChatGPT is the main thought partner – good at shaping ideas, helping structure prose, pushing back on woolly thinking. The 4o model is an excellent collaborator for many tasks, but if deeper thinking is needed I switch to ChatGPT o3; the difference is remarkable. When the flow falters, I’ll hop over to Perplexity: fast, well-cited, useful for breaking through with a different angle or clarifying a half-formed idea. NotebookLM, meanwhile, knows my files; it acts like a personal librarian, drawing references and insight from the sources I’ve given it.

    It’s not seamless yet. But this is a distinctly new way of working, shaped by interaction between multiple AI agents. Even when ostensibly alone at my desk, the feeling is less that of working as a solitary engineer and more that of leading a team. A quiet, tireless, and always surprisingly helpful team. The skill here is no longer about choosing the best tool. It’s about understanding which kind of tool to use and when – treating each one as if it were a specialist on call.

    This hopping back and forth isn’t distraction; it’s coordination. Each assistant brings a slightly different lens. Triangulating between them often clears a logjam in my thinking or surfaces a new insight. An intriguing thought is whether – or perhaps when – they will gain an awareness of each other’s contributions, turning coordination into collaboration.

    For the record, I drafted this post in a ChatGPT project canvas, helped by the AI in a chat alongside the canvas. When the draft was complete, I told the AI to write the “transparency label justification” below. It was spot on!

    Transparency Label Justification: This diary post was drafted collaboratively with ChatGPT using the GPT-4o model. Alec described his working patterns in detail and directed the structure and tone throughout. ChatGPT proposed phrasing, clarified distinctions between tools, and refined transitions, but the observations, reflections, and examples are all drawn from Alec’s own practice. The post captures a real and evolving style of AI-supported work, shaped and narrated by its practitioner.

  • Collaboration in ChatGPT?

    transparency label: human only

    There are reports that:

    OpenAI has been quietly developing collaboration features for ChatGPT that would let multiple users work together on documents and chat about projects, a direct assault on Microsoft’s core productivity business. The designs have been in development for nearly a year, with OpenAI’s Canvas feature serving as a first step toward full document collaboration tools.

    Source: The Information via BRXND Dispatch

    This would be a move towards something like the simultaneous collaboration on a shared document that colleagues have in Microsoft Office. At present, as I understand it, a ChatGPT Team account allows several members to work in a project space and take part in the chats within that project, but only one person at a time.