
Algorithms of War: The Use of AI in Armed Conflict

Joel H. Rosenthal (Carnegie Council for Ethics in International Affairs), Janina Dill (University of Oxford), Professor Ciaran Martin (Blavatnik School of Government, Oxford), Tom Simpson (Blavatnik School of Government), Brianna Rosen (Blavatnik School of Government, University of Oxford)

Algorithms of war

Arriving early, I find the panel and audience are ugly broken people, priests and worshippers of the #deathcult

Near the start time the young and energetic flood in, eager and chatty, not yet broken by service to the dark side of #mainstreaming

The ritual of making killing “humane” and “responsible”, ticking the boxes on this new use of technology in war, repression and death.

Touching on the “privatisation” this technology pushes, shifting traditional military command.

The “acceptable” rate of collateral damage: 15 to 1 in the case of the IDF’s Gaza conflict.

Introducing human “friction” into the process, the means to the end, is the question. Public confidence and trust are key to this shift, and policy is in part about managing this process.

The establishment policy response to AI in war: this is already live, so these people are catching up. They are at the “definition” stage of the academic flow.

The issue, again, is that none of this technology actually works. We wasted ten years on blockchain and cryptocurrency, which had little value and did a lot of harm. We are now going to spend ten years on #AI, and yes, this will affect society, but is there anything positive in this? Or is it another wasted ten years of #fashernista thinking, in this case about death.

Bringing artificial intelligence (#AI) into warfare raises ethical, practical, and strategic considerations.

Technological Advancements and Warfare: The use of AI in war introduces new algorithms and technologies that could reshape military strategies and tactics. AI is used for tasks like autonomous targeting, decision-making, and logistics optimization.

Ethical Concerns: AI-driven warfare poses ethical dilemmas. Making killing appear more “humane” and “responsible” through technological advancements can create a perception of sanitized violence.

Privatization of Military Command: The shift towards AI in warfare leads to a privatization of military functions, as technology companies take on a growing role in developing and implementing AI systems.

Collateral Damage and Public Perception: Collateral damage ratios like 15 to 1 raise questions about the acceptability of casualties in conflicts where AI is employed. Public confidence and trust in AI-driven warfare become critical issues.

Policy and Governance: Establishing policies and regulations around AI in warfare is crucial. Policy must define the roles of humans in decision-making processes involving AI and ensure accountability for actions taken by autonomous systems.

Challenges and Risks: The questionable effectiveness of AI technology in warfare draws parallels with previous tech trends like blockchain and cryptocurrency. There’s concern that investing heavily in AI for military purposes will yield little value while causing harm.

Broader Societal Impact: Using AI in warfare will have broader societal implications beyond the battlefield. It will influence public attitudes towards technology, privacy concerns, and the militarization of AI in civilian contexts.

Balance of Innovation and Responsibility: The open question is whether the pursuit of AI in warfare represents progress or merely another trend driven by superficial, misguided #fashernista thinking, with potentially dire consequences.

In summary, the integration of AI into warfare demands attention to its ethical, legal, and societal implications. The goal should be to use technological advancements responsibly, ensuring that human values and principles guide the development and deployment of AI systems in any context.
