There is no intelligence in AI – and no path to any

Despite the constant #mainstreaming hype, the branding, and the trillions of dollars being poured into it, there is a simple reality that needs to be stated plainly: There is no intelligence in current “AI”, and there is no working path from today’s Large Language Models (#LLM) and Machine Learning (#ML) systems to anything resembling real, general intelligence.

What we are living through is not an intelligence revolution, it is a bubble – one we’ve seen many times before. The problem with this recurring mess is social, as a functioning democracy depends on the free flow of information. At its core, democracy is an information system, a shared agreement that knowledge flows outward to inform debate, shape collective decisions, and enable dissent. The wisdom of the many is meant to constrain the power of the few.

Over recent decades, we have done the opposite. We built ever more legal and digital locks to consolidate power in the hands of gatekeepers. Academic research, public data, scientific knowledge, and cultural memory have been locked behind paywalls and proprietary #dotcons platforms. The raw materials of our shared understanding, often created with public funding, have been enclosed, monetised, and sold back to the public for profit.

Now comes the next inversion. Under the banner of so-called #AI “training”, that same locked up knowledge has been handed wholesale to machines owned by a small number of corporations. These firms harvest, recombine, and extract value from it, while returning nothing to the commons. This is not a path to liberal “innovation”. It is the construction of anti-democratic, authoritarian power – and we do need to say this plainly.

A democracy that defers its knowledge to privately controlled algorithms becomes a spectator to its own already shaky governance. Knowledge is a public good, or democracy fails even harder than it already is failing.

Instead of knowledge flowing to the people, it flows upward into opaque black boxes. These closed custodians decide what is visible, what is profitable, and increasingly, what is treated as “truth”. This enclosure stacks neatly on top of twenty years of #dotcons social-control technologies, adding yet more layers of #techshit that we now need to compost.

Like the #dotcons before it, this was never really about copyright or efficiency. It is about whether knowledge is governed by openness or corporate capture, and therefore who knowledge is for. Knowledge is a #KISS prerequisite for any democratic path. A society cannot meaningfully debate science, policy, or justice if information is hidden behind paywalls and filtered through proprietary systems.

If we allow AI corporations to profit from mass appropriation of public knowledge while claiming immunity from accountability, we are choosing a future where access to understanding is governed by corporate power rather than democratic values.

How we treat knowledge – who can access it, who can build on it, and who is punished for sharing it – has become a direct test of our democratic commitments. We should be honest about what our current choices say about us in this ongoing mess.

The uncomfortable technical truth is this: general #AI is not going to emerge from current #LLM and ML systems – regardless of scale, compute, or investment. This has serious consequences. There is no coming step-change toward the “innovation” promised to investors, politicians, and corporate strategists, now or in any foreseeable future. The economic bubble beneath the hype matters because AI is currently propping up a fragile, fantasy economic reality. The return-on-investment investors are desperate for simply is not there.

So-called “AI agents”, beyond trivial and tightly constrained tasks, will default to being just more #dotcons tools of algorithmic control. Beyond that, thanks to the #geekproblem, they represent an escalating security nightmare, one in which attackers will always have the advantage over defenders; this #mainstreaming arms race will be endless and structurally unwinnable.

Yes, current #LLM systems do have useful applications, but they are narrow, specific, and limited. They do not justify the scale of capital being burned. There are no general-purpose deliverables coming to support the hype. At some point, the bubble will end – by explosion, implosion, or slow deflation.

What we can already predict, especially in the era of #climatechaos, is the lost opportunity cost. Vast financial, human, and institutional resources are being misallocated. When this collapses, the tech sector will be even more misshapen, and history suggests it will not be kind to workers, let alone the environment. This is the same old #deathcult pattern: speculation, enclosure, damage, and denial.

This moment is not about being “pro” or “anti” technology. It is about recognising that intelligence is social, contextual, embodied, and collective – and that no amount of #geekproblem statistical pattern-matching can replace that. It is about understanding that democracy cannot survive when knowledge is enclosed and mediated by #dotcons corporate capture beyond meaningful public control.

To recap: There is no intelligence in current #AI. There is no path to real AI from here. Pretending otherwise is not innovation – it is denial, producing yet more #techshit that we will eventually have to compost. Any sophists who argue otherwise need to be sacked if they aren’t doing anything practical.

The only question is whether we use this moment to rebuild knowledge as a public good – or allow one more enclosure to harden around us. History – if it continues – will not be neutral about the answer.

What Did We Learn from Web3, Crypto?

Looking back from the mid-2020s, the arc of #web03, #NFTs, and blockchain culture is very clear. What once promised (lied about) decentralisation, liberation, and a break from corporate capture now looks like the same messy #techchurn pattern repeating itself. Yes, it had new language and new branding, but it was easy to see the same underlying mess-making dynamics.

As these #geekproblem projects hollowed out, the signs became hard to ignore. The technical optimism faded, the user bases thinned, and the economic logic exposed itself. What followed was totally predictable: spin. Makeup and perfume slapped onto decaying projects to hide the smell of rot and exploitation. Rebrands. New narratives. New demographics. Same extraction. This was the outcome of building “liberation tech” on foundations that still centred vertical money, speculation, and power concentration.

With these projects we are now in the zombie phase: projects kept moving, kept talking, kept selling – long after the animating ideas had died. Influencers and promoters continued to perform belief, even as any substance drained away.

These were the years when #fashionista culture met #encryptionist ideology – aesthetics and technical absolutism snogging the undead remnants of a failed #deathcult vision. The result wasn’t in any way decentralisation; it was simply a new enclosure. People weren’t being freed, they were being financialised – the money problem #KISS

At the core was a simple structural truth: #dotcons feed on money. Put money in, influence comes out. That logic doesn’t disappear just because you wrap it in cryptography or decentralised rhetoric. “Bad actors” weren’t anomalies – they were following the incentives as designed. Any social good becomes just collateral damage. This is why the lie collapsed in the end.

The deeper harm with #techchurn is that each wave claims to have fixed the problems of the last. But each wave reproduces them, because this is what works when worshipping a #deathcult. This isn’t just a failed tech trend; the #techchurn disparity, driven by extraction systems, causes enormous human harm, displacing livelihoods, concentrating power, and amplifying inequality at planetary scale.

These systems don’t fail harmlessly, they fail onto people. That’s why the call isn’t just to “be critical,” but to step away – and help others step away too. Not through purity exits or individual moralising, but through collective paths back to technologies built for people rather than profit – life over zombies.

There has always been another path: the #openweb. Messy, imperfect, slower, less glamorous, but grounded in shared infrastructure, social trust, and human-scale governance. The #OMN approach doesn’t promise salvation. It offers compost instead of speculation. Process instead of hype. People over tokens.

A note on hashtags: yes, the hashtags matter. Click them, search for them. They cut sideways through algorithms – small back doors into less mediated, less controlled ways of seeing. Not a solution, but a crack in the wall.

The current #AI hype bubble is repeating this mess with a little more useful #LLM functionality, but on top of this is a huge mess of #techchurn, which will need composting.


Observation: some people go into news to speak truth from power – using institutions to legitimise the status quo and defend the worship of the #deathcult.

Others speak truth to power – using journalism to expose, question, and challenge unequal power and its consequences.

Only one of those serves the public interest #KISS

Telegram messaging app is dying

Telegram partnering with Elon’s #AI to distribute #Grok inside chats is a clear line crossed. This matters because private data ≠ training fodder, bringing Grok (or any #LLM) into messaging apps opens the door to pervasive data harvesting and normalization of surveillance.

This is an example of platform drift: Telegram was always sketchy (proprietary, central control, opaque funding), but this is active betrayal of its user base, especially those in repressive regions who relied on it.

Any #LLM like Grok in chats = always-on observer: Even if “optional,” it becomes a trojan horse for ambient monitoring and a normalization vector for AI-injected communication.

“Would be better if we had not spent 20 years building our lives and societies around them first.”

That’s the #openweb lesson in a sentence: the #dotcons will kill themselves. This is what we mean by “use and abuse” of these platforms, which, driven by centralisation, adtech, and data extraction, inevitably destroy the trust that made them popular. It’s entropy baked into their #DNA. Doctorow calls this #enshittification; the tragedy is how much time, emotion, and culture we invested in them – only to have to scramble for alternatives once they inevitably betray us.

What to do now? First step: remove your data from your account, then delete the Telegram app – not just on principle, but for your own safety. Move to alternatives – #Signal for encrypted, centralised messaging (trusted but closed server). There are other, more #geekproblem options in the #FOSS world – #XMPP, #RetroShare, or good old email+GPG can work too – but they can be isolating, so stick to #Signal if you’re at all #mainstreaming.

Then the second step: build parallel #4opens paths by supporting and developing alt infrastructure like the #Fediverse (Mastodon, Lemmy, etc.), #OMN (Open Media Network – decentralised media), XMPP and #p2p-first protocols, #DAT/#Hypercore, #IPFS, or #Nostr etc.

Yeah, things will get worse before they get better. What we’re seeing now is the terminal phase of the #dotcons era. These companies are devouring themselves and will eventually collapse under the weight of their contradictions. The question is, will we have built anything to replace them?

If not, authoritarian tech (like Elon’s empire) fills the void. That’s why we rebuild the “native” #openweb, even if it’s slow, messy, and underground. That’s why projects like #OMN and the #Fediverse matter. If you’re reading this, you’re early to the rebuild – welcome, let’s do better this time.