
The debate about so-called #AI and large language models inside the #openweb paths is not, at its core, a technical argument. It is a question of relationship. Not “is this tool good or bad?” but “how is it used, who controls it, and whose interests does it serve?”
This tension is not new; every wave of open communication technology has arrived carrying the same anxiety: printing presses, telephones, email, the web itself. Each was accused – often correctly – of flattening culture, centralising power and, once enclosed, eroding human connection. And yet each was also reclaimed, repurposed, and bent toward collective use when embedded in humanistic social structures. The #openweb path was obviously never about rejecting technology; it was about refusing enclosure.
On the #FOSS and #openweb paths, we have always understood that tools are political. Not only because they contain ideology in their code, but because they embody power relations in how they are built, owned, governed, and deployed. The #OMN project grew from this understanding; it isn’t an anti-tech project, it is a re-grounding of technology in social process: trust-based publishing, local autonomy, messy collaboration, and human-scale governance. On this path we have to constantly balance the #geekproblem with the understanding that servers matter less than relationships, and code matters less than continuity.
#LLMs arrive into this tradition not as something unprecedented, but as something familiar: a tool emerging inside systems that are deeply broken. The danger is not that LLMs exist; the danger is that they are being normalised inside closed, extractive #dotcons infrastructure.
What makes LLMs unsettling is not intelligence – they have none – it’s proximity. They sit close to language, meaning, memory, synthesis: things humans associate with thought, culture, and identity. When an LLM speaks fluently without being fed lived experience, then yes, it can feel hollow, verbose, even uncanny. This is the “paid-by-the-word” reaction many people have: form without presence, articulation without accountability. This discomfort is valid.
But confusing discomfort with real danger leads to the wrong response. #LLMs do not have agency, consciousness, or ethics. They don’t take responsibility; they cannot sit in a meeting, be accountable to a community, or live with the consequences of what they produce. Which means the responsibility is entirely ours, just like with publishing tools, encryption, or federated protocols.
Much of the current backlash against “AI” is not about facts. It’s about vibe. People aren’t only disputing accuracy or pointing to errors. They’re saying: “This feels wrong.” That instinct is worth listening to, but it’s not enough. The #openweb tradition asks harder questions:
- Who controls the infrastructure?
- Can this tool be used without enclosure?
- Can its outputs be traced, contextualised, and contested?
- Does it strengthen collective capacity, or replace it?
- Does it help people build, remember, translate, and connect, or does it manufacture authority?
An LLM used to simulate “wisdom”, speak for communities, or replace lived participation is rightly rejected. That is automation of voice, not amplification of agency. But an LLM used as:
- an archive index
- a translation layer
- a research assistant
- a memory prosthetic
- a bridge between fragmented histories
…can work within a humanistic path if it is embedded in transparent, accountable, human governance. The #openweb lesson has always been the same: you don’t wait for systems to fail – you build alongside them until they are no longer needed. On this path, #LLMs will become infrastructure; the real question is whether they are integrated into closed corporate stacks, surveillance capitalism, and narrative control, or into federated, inspectable, collectively governed knowledge commons.
If the open web does not claim this space, authoritarian systems will. This is not about fetishising this so-called AI, nor about rejecting it on moral grounds. Both are forms of avoidance. The #OMN path is pragmatic:
- build parallel systems
- insist on open processes
- embed tools in social trust
- keep humans in the loop
- keep power contestable
#LLMs can’t and don’t need to understand spirit, culture, or community; humans do. What matters is whether we remain grounded while using tools – or whether we outsource judgment, memory, and meaning to systems that cannot be accountable.
Every generation of open tech faces this moment, and every time the answer needs to be not purity, but practice. Not withdrawal, but responsibility. Not fear, but composting the mess and planting something better. #LLMs are just the latest shovel; the question is whether we use them to deepen the enclosure, or to help dig our way out.
On the #OMN and #openweb paths, the answer has never been abstract. It has always been: build, govern, and care – together.