Why the pushing of #AI is more #techshit

The #stupidindividualism of Silicon Valley’s ideology, built around tech-driven libertarianism and what our chattering classes call “hyper-individualism”, is spreading social mess and #techshit that we need shovels to compost. It’s now clear that these anti-#mainstreaming ‘solutions’ create more problems than they attempt to solve, particularly in terms of social breakdown and environmental damage. The utopian nightmares of tech billionaires collapse under the weight of onrushing real-world challenges. This should make the #geekproblem, the limits of technocratic fixes, visible to more of us. The once-promised technologically mediated future of freedom and innovation has been shown to be a lie, delivering control and chaos instead; this should make it obvious that we need to take different paths, away from Silicon Valley’s delusion.

A podcast from one of our weak liberals on the subject of #AI https://flex.acast.com/audio.guim.co.uk/2024/07/15-61610-gnl.sci.20240715.eb.ai_climate.mp3 gives a #mainstreaming view of the mess we are making on this path. The big issue is not the actual “nature” of AI, though that is not without its problems. What I am covering here is that #AI reinforces existing power structures and socioeconomic realities, #neoliberal ideology and historical bias. This is driven by the goals of enhancing efficiency, reducing costs, and maximizing profits through increased surveillance, which in itself should raise ethical concerns about privacy and freedoms, concerns that the #geekproblem so often justifies away under the guise of security.

We need to think about this: AI systems trained on data from the past 40 years are inherently biased by the socio-political context of that period, perpetuating beliefs that are now outdated and obsolete. This historical bias locks in narrow ideological paths, particularly those associated with #neoliberalism and our 40 years of worshipping at this #deathcult. This is not only a problem with AI, it’s a wider issue: we continue to prioritize economic growth over social and environmental paths. With the recent election victory in the UK, the Labour Party is pushing the normal #mainstreaming established during the #Thatcher era. In this we see past ideologies continuing to shape current #mainstreaming political paths; the tech simply reinforces this.

It’s hard to know what path to take with this mess. Ethical frameworks like the #4opens and regulatory oversight to guide the responsible use of AI might help. By addressing the current mess and challenges, we might be able to work towards an AI path that reflects diverse perspectives and serves a more common good, rather than reinforcing the narrow #deathcult litany and the hard-right ideological paths it grows, which is the current default. Recognizing and addressing the challenges in AI development is the first step towards the change we need: to challenge us, to compost this social mess and the heaps of #techshit we have created, which in turn shape us.

UPDATE: An academic paper on this subject has just come out https://arxiv.org/pdf/2410.18417

The Mess of Web3: Why #openweb natives question the Blockchain Narrative

In the ongoing discourse surrounding #openweb and its relation to failing technologies like #web3 and #blockchain, a critical question emerges: why do we readily accept solutions without first defining the problem at hand?

“… it’s not secure, it’s not safe, it’s not reliable, it’s not trustworthy, it’s not even decentralized, it’s not anonymous, it’s helping destroy the planet. I haven’t found one positive use for blockchain. It has nothing that couldn’t be done better without it.”

— Bruce Schneier, *Bruce Schneier on the Crypto/Blockchain Disaster*

The allure of decentralized autonomous organizations (DAOs) and blockchain technology for the last ten years has overshadowed the necessity of understanding the fundamental issues within our communities. Instead of exploring how we want to govern, decide, and interact within our communities, we find ourselves seduced by the promises of #DAO pitches.

The core of the matter lies in the conflation of culture with technology. Every time a DAO or blockchain solution is proposed, the culture and organization of communities become intertwined with the #geekproblem tools being offered. This bundling tactic obscures the essence of the technology and stifles meaningful discourse. By presenting technology as a fait accompli, we are robbed of the opportunity to critically assess its implications.

In the realm of the #openweb, technology is envisioned as a manifestation of communal decisions and conscious choices. It is the crystallization of community values, traditions, and needs. Blockchain and DAOs represent an antithesis to this vision: they dictate choices rather than empower communities to determine their own paths.

One of the most concerning aspects of blockchain technology is its enforced financialization within communities. The implementation of ledger systems and tokens mirrors the #dotcons capitalist market traditions, where wealth equates to power. In stark contrast to the principles of “native” gift economies and communalism, blockchain perpetuates a system where those with the most resources wield influence.

In this, even in #mainstreaming dialogue, these ten years of the blinkered move to blockchain threaten to undermine centuries of liberal evolution by replacing established legal systems with #web3 engineers acting as arbiters of justice. This shift from #mainstreaming transparent and “equitable” legal frameworks to opaque and centralized technological solutions is deeply troubling.

As proponents of #4opens ideals, we should question the last ten years’ narrative of blockchains and DAOs. We must resist the allure of #geekproblem technological solutions that obscure the essence of community governance and autonomy. Instead, let’s engage in meaningful dialogue, grounded in a clear understanding of the problems we address and the values we hold, to forge a “native” #openweb path.

We now face another wasted ten years of #AI hype with the same issues and agenda. We have to stop feeding this mess.

#OGB #OMN #makeinghistory

Algorithms of War: The Use of AI in Armed Conflict



Joel H. Rosenthal (Carnegie Council for Ethics in International Affairs), Janina Dill (University of Oxford), Professor Ciaran Martin (Blavatnik School of Government, Oxford), Tom Simpson (Blavatnik School of Government), Brianna Rosen (Blavatnik School of Government, University of Oxford)


Arriving early, the panel and audience are ugly broken people, priests and worshippers of the #deathcult

Near the start, the young and energetic flood in, eager and chatty, yet to be broken by service to the dark side of #mainstreaming.

The ritual of making killing “humane” and “responsible”, ticking the boxes on this new use of technology in war, repression and death.

Touching on the “privatisation” that this technology pushes to shift traditional military command.

The acceptable rate of collateral damage: 15 to 1 in the case of the IDF Gaza conflict.

Whether to introduce human “friction” into the process, the means to the end, is the question. Public confidence and trust are key to this shift; policy is in part about managing this process.

The establishment policy response to AI in war: the technology is already live, so these people are catching up. They are at the “definition” stage of the academic flow.

The issue again is that none of this technology actually works as claimed. We wasted ten years on blockchain and cryptocurrency, which had little value and did a lot of harm; we are now going to spend ten years on #AI. Yes, this will affect society, but is there anything positive in it? Or is it another wasted ten years of #fashernista thinking, in this case of death?


Integrating artificial intelligence (#AI) into warfare raises ethical, practical, and strategic considerations.

Technological Advancements and Warfare: The use of AI in war introduces new algorithms and technologies that potentially reshape military strategies and tactics. AI is used for tasks like autonomous targeting, decision-making, or logistics optimization.

Ethical Concerns: There are ethical dilemmas associated with AI-driven warfare. Making killing more “humane” and “responsible” through technological advancement can lead to a perception of sanitized violence.

Privatization of Military Command: The shift towards AI in warfare leads to a privatization of military functions, as technology companies play a role in developing and implementing AI systems.

Collateral Damage and Public Perception: Collateral damage ratios like 15 to 1 raise questions about the acceptability of casualties in conflicts where AI is employed. Public confidence and trust in AI-driven warfare become critical issues.

Policy and Governance: Establishing policies and regulations around AI in warfare is crucial. This means defining the roles of humans in decision-making processes involving AI and ensuring accountability for actions taken by autonomous systems.

Challenges and Risks: The effectiveness of AI technology in warfare draws parallels with previous tech trends like blockchain and cryptocurrency. There’s concern that investing heavily in AI for military purposes will yield little value while causing harm.

Broader Societal Impact: Using AI in warfare will have broader societal implications beyond the battlefield. It will influence public attitudes towards technology, privacy concerns, and the militarization of AI in civilian contexts.

Balance of Innovation and Responsibility: The question is whether the pursuit of AI in warfare represents progress, or merely another trend driven by superficial, misguided #fashernista thinking with potentially dire consequences.

In summary, the integration of AI into warfare demands attention to its ethical, legal, and societal implications. The goal should be to leverage technological advancements responsibly, ensuring that human values and principles guide the development and deployment of AI systems in any context.

#Oxford

Cambridge Analytica, 5 years on

I think we face the usual problem of working on and implementing policy for yesterday’s issues.

* We are coming out of ten years of Blockchain mess

* Now we are in the #AI mess; there is no intelligence in the current round, only artificial writing.

Let’s look at what actually matters

The original #openweb was built on #opendata, which in this context is the issue we are talking about.

We then had 20 years of the #dotcons with #closeddata, which you have talked about.

Coming out of this, we have an active #openweb reboot happening with federation and #opendata.

For example, #Mastodon, the #Fediverse, #bluesky and #Nostr have grown from half a million to 10–15 million users over the last year, #WordPress is building #ActivityPub support for a quarter of the internet, and there is #Failbook‘s #threads.

You are seeing a different world, back to #opendata: if you run a Mastodon instance, you will have a large part of the content of the Fediverse sitting in your database in plain text…
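As a rough illustration of what that means in practice, here is a sketch of the kind of query an instance admin could run against their Mastodon Postgres database. The table and column names (`statuses`, `accounts`, `local`, `domain`) reflect Mastodon’s schema as commonly documented, so treat this as a hedged example to verify against your own version, not a definitive recipe:

```sql
-- Sketch: count the federated (remote) posts stored in plain text
-- on a Mastodon instance, grouped by the instance they came from.
-- Assumes Mastodon's standard schema: statuses.local is FALSE for
-- posts federated in, and accounts.domain names the remote server.
SELECT a.domain, count(*) AS posts
FROM statuses s
JOIN accounts a ON a.id = s.account_id
WHERE s.local = FALSE
GROUP BY a.domain
ORDER BY posts DESC
LIMIT 10;
```

The point is not the query itself but what it shows: federated content is open, readable data sitting on thousands of independently run servers, not locked in a single #dotcons silo.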

Please take this into account with policy and regulation.

#Oxford