LLMs and the openweb

The debate about so-called #AI and large language models inside the #openweb paths is not, at its core, a technical argument. It is a question of relationship. Not "is this tool good or bad?" but how is it used, who controls it, and whose interests it serves.

This tension is not new; every wave of open communication technology has arrived carrying the same anxiety: printing presses, telephones, email, the web itself. Each was accused – often correctly – of flattening culture, centralising power and, when enclosed, of eroding human connection. And yet each was also reclaimed, repurposed, and bent toward collective use when embedded in humanistic social structures. The #openweb path was obviously never about rejecting technology; it was about refusing enclosure.

On the #FOSS and #openweb paths, we have always understood that tools are political. Not only because they contain ideology in their code, but because they embody power relations in how they are built, owned, governed, and deployed. The #OMN project grew from this understanding; it isn't an anti-tech project, it is a re-grounding of technology in social process: trust-based publishing, local autonomy, messy collaboration, and human-scale governance. On this path we constantly have to balance the #geekproblem: servers matter less than relationships, code matters less than continuity.

#LLMs arrive into this tradition not as something unprecedented, but as something familiar: a tool emerging inside systems that are deeply broken. The danger is not that LLMs exist; the danger is that they are being normalised inside closed, extractive #dotcons infrastructure.

What makes LLMs unsettling is not intelligence – they have none – it's proximity. They sit close to language, meaning, memory, synthesis: things humans associate with thought, culture, and identity. When an LLM speaks fluently without being fed lived experience, then yes, it can feel hollow, verbose, even uncanny. This is the "paid-by-the-word" reaction many people have: form without presence, articulation without accountability. This discomfort is valid.

But confusing discomfort with real danger leads to the wrong response. #LLMs do not have agency, consciousness, or ethics. They don't take responsibility; they cannot sit in a meeting, be accountable to a community, or live with the consequences of what they produce. Which means the responsibility is entirely ours, just like with publishing tools, encryption, or federated protocols.

Much of the current backlash against “AI” is not about facts. It’s about vibe. People aren’t only disputing accuracy or pointing to errors. They’re saying: “This feels wrong.” That instinct is worth listening to, but it’s not enough. The #openweb tradition asks harder questions:

  • Who controls the infrastructure?
  • Can this tool be used without enclosure?
  • Can its outputs be traced, contextualised, and contested?
  • Does it strengthen collective capacity, or replace it?
  • Does it help people build, remember, translate, and connect, or does it manufacture authority?

An LLM used to simulate “wisdom”, speak for communities, and replace lived participation is rightly rejected. That is automation of voice, not amplification of agency. But an LLM used as:

  • an archive index
  • a translation layer
  • a research assistant
  • a memory prosthetic
  • a bridge between fragmented histories

…can work within a humanistic path if it is embedded in transparent, accountable, human governance (a minimal sketch of what that could look like follows below). The #openweb lesson has always been the same: you don't wait for systems to fail – you build alongside them until they are no longer needed. On this path #LLMs will become infrastructure; the real question is whether they are integrated into closed corporate stacks, surveillance capitalism, and narrative control, or into federated, inspectable, collectively governed knowledge commons.
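To make that concrete, here is a minimal sketch – in Python, and not from any existing #OMN codebase – of an LLM used as an archive index under human governance: the model only drafts, every answer carries its sources, and nothing is published without a named person signing off. The summarise() hook is a hypothetical stand-in for a locally run, inspectable model, and the keyword match stands in for a real index.

```python
# Minimal sketch: LLM as archive index, with provenance and a
# human-in-the-loop. All names here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ArchiveItem:
    source: str   # where the text came from – kept for provenance
    text: str


@dataclass
class Draft:
    answer: str
    sources: list          # every claim stays traceable to archive items
    approved: bool = False


def summarise(items, query):
    """Hypothetical hook where a locally run, inspectable model would
    condense the matched items. Stubbed so the sketch stays runnable."""
    return " / ".join(i.text for i in items) or "no matches"


def query_archive(archive, query):
    # naive keyword match standing in for a real index
    hits = [i for i in archive if query.lower() in i.text.lower()]
    return Draft(answer=summarise(hits, query),
                 sources=[i.source for i in hits])


def human_review(draft, moderator):
    # the tool drafts; a named, accountable person decides
    print(f"[{moderator}] reviewing: {draft.answer} (sources: {draft.sources})")
    draft.approved = True   # in practice: an explicit, logged decision
    return draft


if __name__ == "__main__":
    archive = [
        ArchiveItem("omn/wiki/trust.md", "Notes on trust-based publishing"),
        ArchiveItem("indymedia/2003/mayday.txt", "Mayday street reports"),
    ]
    draft = query_archive(archive, "trust")
    published = human_review(draft, moderator="editor@collective")
    print("published:", published.approved)
```

The point is the structure, not the code: provenance and approval are part of the data flow itself, so the model can be swapped out but the accountability can't be.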

If the open web does not claim this space, authoritarian systems will. This is not about fetishising this so-called AI, nor about rejecting it on moral grounds. Both are forms of avoidance. The #OMN path is pragmatic:

  • build parallel systems
  • insist on open processes
  • embed tools in social trust
  • keep humans in the loop
  • keep power contestable

#LLMs can't and don't need to understand spirit, culture, or community; humans do. What matters is whether we remain grounded while using tools – or whether we outsource judgment, memory, and meaning to systems that cannot be accountable.

Every generation of open tech faces this moment, and every time the answer needs to be not purity, but practice. Not withdrawal, but responsibility. Not fear, but composting the mess and planting something better. #LLMs are just the latest shovel; the question is whether we use them to deepen the enclosure, or to help dig our way out.

On the #OMN and #openweb paths, the answer has never been abstract. It has always been: build, govern, and care – together.

Governance, the mess of AI tech-fix paths

Seminar Reflection: Philosophy, AI, and Innovation – Week 6
Topic: AI Deliberation at Scale
Speakers: Chris Summerfield (Oxford & AI Safety Institute), MH Tessler (Google DeepMind)
Key texts: Jürgen Habermas, The Structural Transformation of the Public Sphere (excerpt) and Summerfield et al., “AI Can Help Humans Find Common Ground in Democratic Deliberation”

This seminar focuses on scaling democratic deliberation via AI. The example proposal is the #HabermasMachine, a test project to facilitate large-scale consensus using #LLMs (Large Language Models). The framing, unsurprisingly, is drawn from the elitist tech sector – Google DeepMind and Oxford – with a focus on "safety" and "moderation" over human messiness and agency.

The problem we face is that this #techshit path might work – but the question is for whom: what kind of "public sphere" is this #AI recreating, and who holds the power to shape it? These are strongly top-down, technocratic proposals, rooted in a narrow utilitarian logic. The underlying assumption is that human decision-making is flawed and must be mediated, and ultimately managed, by algorithmic systems. Consensus is determined not through lived human-to-human dialogue or, as I like to say, mess, but through an AI that quietly nudges discussions toward a centrist consensus.

There is no meaningful eye-to-eye group interaction in this project, no room for DIY, #bottomup agency. Participants become data points in a system that claims to "listen," but acts through elitist mediation. It is consensus without community, and safety without solidarity. What's missing is the power of mess; the presenter ignores this central question: can we build messy, human-scale deliberation that doesn't rely on top-down interventions?

Projects like this are not grassroots governance; rather, they are governance-by-black-box, mainstreaming by design. The incentive model is telling: ideas that align with the status quo or dominant narratives are rewarded with more money. Consensus is guided not by grassroots engagement or dissenting voices, but by what the algorithm (and its funders) consider "productive." This is the quiet, suffocating hand of #mainstreaming, cloaked in neutral code.

#TechFix paths like this are about stability at all costs, yet we live in a time when stability is the problem. With #ClimateChaos threatening billions, the demand is for transformation, not moderation.

This is AI as intermediary, not as a facilitator of the commons paths we need. Transparency? Not here: no one knows how the #AI reaches consensus. The models are proprietary, the tweaks are political, and the outcomes are mediated by those already in power. The system becomes an unaccountable broker, not of truth, but of what power is willing to hear.

We need to be wary of any system that claims to represent us without us being meaningfully involved. This is a curated spectacle of consensus, delivered by machines, funded by corporations, and mediated by invisible hands. What we need is human-to-human projects like the #OGB, not tech-managed consensus. This #mainstreaming path isn't compost; it's simply more #techshit to be composted. Mess is a feature, not a bug.

In the #OMN (Open Media Network), we explore paths rooted in trust, openness, and peer-to-peer process. Not asking for power to listen, but taking space to act. We compost the mess; we don’t pretend it can be sanitized by top-down coding.

#Oxford #AI #techshit #dotcons