A Policy Case for Commons-Based Moderation in the Fediverse
The problem with the current approach
The normal response to harmful content and behaviour on federated social platforms today is the block. Instance administrators block other instances. Users block other users. Communities build blocklists and share them. This is understandable – it is the tool available – but it is not a solution. It is, at best, a temporary containment strategy.
Blocking is the digital equivalent of closing the curtains. The problem does not go away. The harmful actor does not change. The tension – between open participation and community safety, between freedom of expression and protection from harm – is not resolved. It is deferred, and at a cost to the openness that makes the Fediverse worth defending in the first place.
When entire instances are blocked, legitimate users on those instances lose access to communities they value. When blocklists are the primary moderation infrastructure, the communities that maintain them acquire disproportionate power over what the network sees. The default is isolation, and the Fediverse fragments – not because of any external threat, but because of its own defensive reflexes.
This matters beyond the technical community. The Fediverse represents the largest functioning alternative to corporate social media. It is, in the most literal sense, public digital infrastructure owned by nobody and available to everyone. How it handles the tension between openness and safety determines whether it can scale to serve democratic societies, or whether it remains a technically interesting experiment for a self-selecting community.
The #4opens principle and why it matters for policy
The Fediverse is built on a set of principles called the #4opens: open data, open source, open standards, and open process. These are not just technical preferences; they are a statement about what public digital infrastructure should look like – transparent, accountable, forkable, improvable by anyone.
The fourth open – open process – is the most politically significant and the most underdeveloped. It means that the governance of our communities, including how we handle conflict and harm, should be visible, contestable, and collectively grown – not handed down by a platform's trust and safety team, not enforced by an opaque algorithm, and not dependent on the goodwill of an instance administrator.
The current state of Fediverse moderation largely fails this test. Moderation decisions are made locally, inconsistently, and without shared infrastructure for collective reasoning. The result is not more freedom but a patchwork of micro-kingdoms, each with its own rules, each enforced by blocking the kingdoms whose rules it disagrees with. This is not a stable foundation for the kind of digital public sphere that European democratic values require.
The commercial platforms are not the solution – but they are in the room talking loudly
Commercial social media platforms – what we call the #dotcons, shorthand for the dot-com era corporations that monetised public digital space – are present in or adjacent to the Fediverse. Meta's Threads now implements ActivityPub, the protocol underlying Fediverse federation. This means that the same open standard that allows community-run instances to talk to each other also allows a platform with three billion users and an advertising-driven engagement model to participate in the same network.
The response in parts of the Fediverse community has been, predictably, to block Threads at the instance level. This is coherent as a local decision. As a strategy for the #openweb, it is largely self-defeating. Blocking Meta does not make Meta go away, and it does not change Meta's incentives. It does not protect users who remain on Meta from the harms of algorithmic amplification. And it does little to build the alternative infrastructure that would give those users somewhere better to go.
The principled response to commercial platform encroachment on the #openweb is not isolation; it is to build commons infrastructure so robust, so trustworthy, and so genuinely useful that the value proposition of centralised platforms diminishes. That means solving the moderation problem properly, not routing around it.
What trust-based flows offer that blocking cannot
The research and development work at projects like the Open Media Network (#OMN) points toward a different model: moderation not as exclusion but as flow management. In a trust-based flow architecture, content does not move through the network based on algorithms optimising for engagement, nor is it blocked at the border by administrators making binary decisions. Instead, it flows – or slows, or stops – based on trust relationships that communities build and maintain themselves. Trust is local and composable: different communities can apply different trust filters to the same content without requiring global consensus or any centralised authority.
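To make the flow-management idea concrete, here is a minimal sketch of how a node might route incoming content by locally maintained trust scores. All names, thresholds, and classes here are hypothetical illustrations, not part of any actual OMN or ActivityPub implementation.

```python
# Illustrative sketch only: a node decides whether to let an item flow,
# slow it down for review, or stop propagating it, based on a locally
# maintained trust score for the item's source. Thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Item:
    source: str    # the instance or actor the content arrived from
    payload: str   # the content itself

class TrustNode:
    def __init__(self, threshold_flow=0.6, threshold_slow=0.3):
        self.trust = {}                      # source -> score in [0.0, 1.0]
        self.threshold_flow = threshold_flow
        self.threshold_slow = threshold_slow

    def set_trust(self, source, score):
        # Trust is local: each node keeps its own table, clamped to [0, 1].
        self.trust[source] = max(0.0, min(1.0, score))

    def route(self, item):
        """Return 'flow', 'slow', or 'stop' for an incoming item."""
        score = self.trust.get(item.source, 0.0)  # unknown sources start low
        if score >= self.threshold_flow:
            return "flow"    # propagate onward immediately
        if score >= self.threshold_slow:
            return "slow"    # hold for review or rate-limit
        return "stop"        # do not propagate (nothing is deleted)
```

Because each node holds its own table, two communities can route the same item differently without any global consensus – which is the composability the paragraph above describes.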
This model has several properties that should interest European policymakers directly:
Accountability without centralisation. Trust relationships are explicit and auditable. When a community decides not to propagate certain content, that decision is visible and contestable within the community. This is categorically different from both corporate content moderation (opaque, unaccountable) and simple blocking (binary, irreversible).
Resilience against capture. Because trust is distributed and local, there is no single chokepoint that a bad actor – commercial, state, or otherwise – can capture to control information flows across the network. This is critical infrastructure resilience in the same sense that distributed energy grids are resilient against single points of failure.
Reversibility. The rollback function – the ability to re-evaluate historical content visibility based on updated trust relationships – is something no current platform offers at scale. It means that moderation decisions can evolve as communities learn, rather than being permanently encoded in block lists that few people maintain.
Scalability without hierarchy. Top-down moderation breaks down as communities grow. Moderators experience burnout and trauma. Rule-based decision-making becomes inconsistent. The trust-based model scales horizontally – as the network grows, the trust infrastructure grows with it, because it is built into the relationships between nodes rather than concentrated in any central authority.
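The reversibility property above can also be sketched in a few lines: if a feed's visible view is always recomputed from current trust rather than baked into a blocklist, then updating a trust relationship automatically re-evaluates historical content. This is a hypothetical illustration of the rollback idea, not a description of any existing codebase.

```python
# Hedged sketch of "rollback": past visibility decisions are never
# permanent. Every item is retained, and the visible view is derived
# from the current trust table each time it is asked for.
class RollbackFeed:
    def __init__(self, threshold=0.5):
        self.trust = {}       # source -> trust score (illustrative)
        self.items = []       # every item ever received is kept
        self.threshold = threshold

    def receive(self, source, payload):
        self.items.append((source, payload))

    def update_trust(self, source, score):
        # Changing trust is all that is needed; no blocklist to edit.
        self.trust[source] = score

    def visible(self):
        """Current view, recomputed from trust on every call."""
        return [payload for (source, payload) in self.items
                if self.trust.get(source, 0.0) >= self.threshold]
```

If a source's trust is lowered, its historical items disappear from the view; if trust is later rebuilt, they reappear – a moderation decision that evolves with the community rather than being permanently encoded.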
The culture question is not separate from the technical question
It is a constant mistake to read this as a purely technical problem. You cannot build a healthy online culture without infrastructure – and you cannot build working infrastructure without a clear vision of what that culture looks like.
The Fediverse community is currently navigating this without adequate tools. The result is a recurring cycle: a wave of new users arrives, often fleeing a crisis on commercial platforms. The existing community debates how to handle them. Blocking becomes the instrument of cultural negotiation. Fragmentation follows. The cycle repeats.
What is needed is not better blocklists. We need infrastructure that makes constructive engagement the path of least resistance – where trust can be extended incrementally, withdrawn proportionally, and rebuilt over time, and where communities are not forced to choose between openness and safety, because the tools exist to manage both simultaneously.
This is a social and political problem that has a technical component. The Open Media Network project is one concrete path to solving it, building on mature existing infrastructure, proven open standards, and a decade of practical experience in grassroots media, both on the ground and in federated online networks.
What European public investment can achieve here
European public funding for digital commons infrastructure has a strong track record. The NGI Zero programme has supported foundational work on everything from secure routing protocols to private messaging to federated video platforms. This investment compounds: open source outputs are reused, extended, and built upon by developers and institutions across the continent and beyond.
The case for investing in trust-based moderation infrastructure for the Fediverse is straightforward: the problem is real, well-documented, and getting worse. The Fediverse is growing, but without better tools for managing harmful content and building coherent information flows, its growth will hit a ceiling defined by the limits of volunteer moderator capacity and the inadequacy of binary blocking as a governance tool.
The solution is technically tractable: the components exist, the protocol exists, the codebases exist, the community exists. What is missing is the focused R&D investment to implement trust-based flows as working, deployable, open infrastructure.
The alternative is worse. If the Fediverse fails to solve this problem, the vacuum will be filled either by commercial platforms extending their reach into the federated space on their own terms, or by the continued fragmentation of the #openweb into isolated communities talking only to themselves. Neither outcome serves European democratic values or European digital sovereignty.
The investment required is modest. The upside – a functioning commons layer for federated media distribution, owned by nobody, available to everyone, accountable to the communities it serves – is large. The time to build it is now, before the structural problems of the current moment ossify into the permanent architecture of the next-generation internet.