Governance, the mess of AI tech-fix paths

Seminar Reflection: Philosophy, AI – Week 6
Speakers: Chris Summerfield (Oxford & AI Safety Institute), MH Tessler (Google DeepMind)
Key texts: Jürgen Habermas, The Structural Transformation of the Public Sphere (excerpt) and Summerfield et al., “AI Can Help Humans Find Common Ground in Democratic Deliberation”

This seminar's focus is on scaling democratic deliberation via AI. The example proposal is the #HabermasMachine, a test project to facilitate large-scale consensus using #LLMs (Large Language Models). The framing, unsurprisingly, is drawn from the elitist tech sector – Google DeepMind and Oxford – with a focus on “safety” and “moderation” over human messiness and agency.

The problem we face is that this #techshit path might work – but for whom? What kind of “public sphere” is this #AI recreating, and who holds the power to shape it? These are strongly top-down, technocratic proposals, rooted in a narrow utilitarian logic. The underlying assumption is that human decision-making is flawed and must be mediated, and ultimately managed, by algorithmic systems. Consensus is reached not through lived human-to-human dialogue or, as I like to say, mess, but through an AI that quietly nudges discussions toward a centrist consensus.

There is no meaningful eye-to-eye group interaction in this project, no room for DIY, #bottom-up agency. Participants become data points in a system that claims to “listen” but acts through elitist mediation. It is consensus without community, and safety without solidarity. What’s missing is the power of mess; the presenters ignore this central question: can we build messy, human-scale deliberation that doesn’t rely on top-down interventions?

Projects like this are not grassroots governance; they are governance-by-black-box, mainstreaming by design. The incentive model is telling: ideas that align with the status quo or dominant narratives are rewarded with more money. Consensus is guided not by grassroots engagement or dissenting voices, but by what the algorithm (and its funders) consider “productive”. This is the quiet, suffocating hand of #mainstreaming, cloaked in neutral code.

#TechFixes paths like this are about stability at all costs, yet we live in a time when stability is the problem. With #ClimateChaos threatening billions, the demand is for transformation, not moderation.

This is AI as intermediary, not a facilitator of the commons paths we need. Transparency? Not here: no one knows how the #AI reaches consensus. The models are proprietary, the tweaks are political, and the outcomes are mediated by those already in power. The system becomes an unaccountable broker, not of truth, but of what power is willing to hear.

We need to be wary of any system that claims to represent us without us being meaningfully involved. This is a curated spectacle of consensus, delivered by machines, funded by corporations, and mediated by invisible hands. What we need is human-to-human projects like the #OGB, not tech-managed consensus. This #mainstreaming path isn’t compost. It’s simply more #techshit to be composted; mess is a feature, not a bug.

In the #OMN (Open Media Network), we explore paths rooted in trust, openness, and peer-to-peer process. Not asking for power to listen, but taking space to act. We compost the mess; we don’t pretend it can be sanitized by top-down coding.

#Oxford #AI #techshit #dotcons


Discover more from Hamish Campbell
