The debate over AI’s energy consumption is one piece of a larger mess about technological choices in the face of current existential risks. Yes, #AI’s energy demands are a huge #dotcons waste, but focusing only on this distracts us from a wider discussion about the underlying ideology and assumptions driving the #geekproblem technological paths. One example is the ideas of #longtermism, let's look at this:

#Longtermism is a philosophy that prioritizes the far future, arguing that we should make decisions today that benefit humanity hundreds or thousands of years from now. Proponents of longtermism advocate for technological advancements like AI and space colonization, pushing the claim that these will ultimately secure humanity’s future, that is, after many of us have been killed and displaced by #climatechaos and the resulting social breakdown of mass migration. The outcome of the last 40 years of worshipping the #deathcult is this sleight of hand of changing the subject, yes, it's a mess.

This mindset is a ridiculous and obviously stupid path that we should not take. Some of the issues:

  • Overconfidence in predicting the future: Longtermists assume that we can reliably predict the long-term outcomes of our actions. History has shown that even short-term predictions are fraught with uncertainty. The idea that we can accurately forecast the impact of technologies like AI or space colonization centuries from now is, at best, speculative and, at worst, dangerously hubristic.
  • The danger of the #geekproblem mentality: the idea that we should “tech harder” to solve our problems, that is, invest more heavily in advanced technologies in the hope that they will eventually pull us out of our current crises, mirrors longtermist thinking. It assumes that the resource consumption, environmental degradation, and social upheaval caused by these technologies will be justified by the benefits they might bring in the future.

This path is the current mess and is flawed for many reasons:

  • Resource Consumption: The development of AI, space technologies, and other technological “solutions” requires vast amounts of energy and resources. If these technologies do not deliver the expected returns, the initial resource consumption itself exacerbates the crises we are trying to solve, such as the onrushing catastrophe of climate change.
  • Opportunity Costs: By focusing on speculative technologies, we neglect immediate and practical solutions, like transitioning away from fossil fuels, which would mitigate some of the worst effects of climate change. These simpler, more grounded paths may not be as glamorous as AI or space travel, but they cannot backfire catastrophically.
  • Moral and Ethical Implications: There is the question of whether it is right to invest heavily in speculative technologies when there are pressing issues today that need addressing, issues that affect billions of lives. The idea that a few future lives might be more valuable than current ones is a dangerous and ethically questionable stance.

There is always a strong case for caution and pragmatism in technology. Instead of betting our future on high-stakes #geekproblem technological gambles, a pragmatic approach that focuses on solutions offering benefits today while reducing the risks of tomorrow is almost always a good path. For example, changing our social relations and economic systems away from the current #deathcult, by using social tools to invest in renewable energy, rethink urban planning, and restore ecosystems, would all be actions that have immediate positive effects while also contributing to a humanistic future. This #KISS path carries far fewer risks if it turns out to be less impactful than hoped. The worst-case scenario with renewable energy is that it doesn’t solve every problem, but it won’t make them worse. In contrast, if AI or space colonization doesn’t deliver on its pie-in-the-sky promises, the consequences are simply disastrous.

A #mainstreaming view of this mess

A call for grounded action: the challenge of our time is not to “tech harder” in the hope that advanced technologies will save us, but to consider the balance between “native” humanistic innovation and #dotcons caution. The example here, #longtermism, with its emphasis on far-off futures, leads us down a dangerous path by neglecting the immediate, tangible actions we can take now, not in a thousand years. We need to focus on paths that address our most pressing problems without risking everything on pie-in-the-sky, self-serving mess making. This means actions like reducing fossil fuel dependence, preserving biodiversity, and creating more change-and-challenge social systems like the #OMN and #OGB, steps that will help us build a resilient and humanist world for both the present and the future #KISS

The media noise about the current #AI is mostly hype https://www.bath.ac.uk/announcements/ai-poses-no-existential-threat-to-humanity-new-study-finds/ and a money mess, it’s the normal #deathcult with a bit of kinda working tech.