Pushing back against AI hype and building better futures

This week, Dr. Emily M. Bender (University of Washington), co-author of The AI Con, delivered a much-needed reality check in Oxford, cutting through the fog of #AI PR myths and techno-dystopian smoke. In the Q&A, led by Professor Catherine Pope (Nuffield Dept. of Primary Care), the conversation explored how AI is being used not to elevate us, but to devalue human creativity, justify surveillance, and concentrate wealth and power in the hands of the #nastyfew.

This wasn’t the usual breathless “future of work” keynote. It was a call to arms about the AI con – what are we really being sold? Dr. Bender, known for coining the term stochastic parrot, highlighted how AI hype isn’t just noise – it’s a strategy to push unregulated, underperforming, resource-hungry technologies into every part of society. It turns complex problems into opportunities to extract data, deskill workers, and justify more austerity.

We’re not being sold intelligence; we’re being sold plagiarism machines that mimic but don’t understand – synthetic text extruders trained to sound right but too often hallucinating. Mathy-maths cloaked in prestige, built on broken benchmarks like the Turing Test – long since reduced to a measure of gullibility.

Anthropomorphism by design, responsibility by no one. The insidious part is that AI systems are designed to mimic humanity. They pull users in through anthropomorphism, but when something goes wrong, no one is held responsible – not the engineers, not the companies, not the funders. Just the user, caught in the middle. As Dr. Bender and others have pointed out, there’s no “intelligence” in AI – just statistics, training data, and the motives of those who built it.

What’s Lost in the Hype?

“We used to do language translation better with fewer resources.”
“Cloud computing is a lie, it’s just someone else’s server burning through energy and water.”

These are the quiet truths ignored by AI boosterism. Dr. Bender laid bare the ecological, cognitive, and political costs:

Ecological waste driven by corruption: AI training and cloud infrastructure depend on water, energy, and mining – routed not to where they’re sustainable, but to where regulation is weak.

Erosion of trust: Models trained to sound authoritative spread confident falsehoods, degrading public discourse.

Security risks: Code-generation tools are notoriously careless, producing output riddled with hallucinations and vulnerabilities.

Dehumanisation of labour: AI doesn't replace bad jobs with good ones; it turns good work into mechanical “oversight” roles, where humans are paid to babysit broken systems.

And in health and care, where these technologies are increasingly being pushed, the stakes are life, dignity, and wellbeing.

What I have personally found is that Oxford is feeding its brightest minds into AI. As institutions bend to corporate funding and hype cycles, critique becomes harder, not easier. But critique is essential. This is a fight about who benefits, and who bears the cost.

Like the Luddites of the 19th century, we’re not against machines, we’re against machines used against us. The Luddites knew that the issue wasn’t the loom, it was who owned the loom. That’s why we need more conversations like this. Not just about what AI is, but about what kind of society we want. And more importantly, who gets to decide.

What could work on these tech paths:

  • Smaller, dumber, domain-specific models where needed.
  • Open standards, not closed corporate APIs.
  • Tech built with consent, accountability, and ecological limits.
  • A refusal to let “innovation” be an excuse to undermine public infrastructure.

Above all, we need to centre people, not profit; humility, not hype. And it’s very important not to be a prat about this.

#Oxford

This is what the #dotcons, in their drive for control, are doing to us with the #AI mess.
