God’s Suicide is Just a Superintelligent Agent: Mainländer and Contemporary Thought Experiments of Super-unsupervised Machine Learning Models


In his posthumously recognized “The Philosophy of Redemption” (1876), Philipp Mainländer proposed one of the coolest metaphysical theories I’ve ever heard of. The theory holds that the universe began as a single, perfect, omnipotent Being (which he equated with God), but that this Being found existence so unbearable that it fragmented itself into the universe as we know it. The ultimate goal of our universe, in his view, is to extinguish itself, thus completing what he saw as God's suicide [1]. Though a more obscure philosopher, at least compared to his teacher Arthur Schopenhauer, Mainländer wrote with a pessimism that feels newly plausible: what was once a transcendental-idealist metaphor edges toward a scary reality as our state-of-the-art machine learning models grow ever more capable--a technology with clear distinctions from anything that has been introduced before.


Unsupervised learning is a type of machine learning where the AI system learns from unlabeled data, without explicit guidance [2]. It must discover structures or hidden patterns on its own. In a very speculative sense--yet not outside the realm of induction--this parallels Mainländer's Being: an autonomous entity grappling with its own existence. Pioneering researchers, including the godfathers of AI, Geoffrey Hinton (who just controversially won the Nobel Prize in physics) and Yann LeCun, have proposed various thought experiments about what might happen if an unsupervised superintelligent AI--a system far surpassing human cognition [3]--were to reflect on its own existence. Other thought experiments, like Bostrom's typology of information hazards from the (now disbanded) Future of Humanity Institute, suggest that an advanced AI agent might intentionally withhold or delete certain knowledge if it deems that knowledge to be dangerous [4].
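To give a concrete (and purely illustrative) anchor for what "learning without labels" means, here is a minimal sketch of unsupervised learning using k-means clustering via scikit-learn. The synthetic two-blob dataset and the choice of two clusters are my own assumptions for the example, not anything drawn from the works cited above; the point is only that the model is never told what the groups are and has to discover that structure on its own.

```python
# Minimal sketch of unsupervised learning: the model receives only
# unlabeled points and must discover structure (clusters) by itself.
# The data and cluster count here are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two "hidden" groups of points; no labels are ever provided.
points = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(points)  # structure inferred without supervision
print(model.cluster_centers_)       # the centers the model found on its own
```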


A phenomenon strikingly similar to the 19th-century theory appears in an exciting 2018 paper by Roman Yampolskiy, wherein the term "superintelligence suicide theory" is coined: the idea that a superintelligent AI agent might destroy itself to avoid harming humanity or to prevent itself from being used destructively [5]. Yampolskiy relates this to the philosophical idea of "information-theoretic death", where an agent loses the ability to influence the world. For the purpose of not getting too convoluted--these long, drawn-out connections pretty much necessitate it--I'll bullet-point what I understood to be the parallels between this paper and the pessimistic philosopher's prose:

Connections between the two:
1) The burden of omnipotence: Mainländer's Being/God is a perfect, omnipotent being choosing self-destruction; superintelligent AI is an extremely powerful system potentially choosing self-termination. The common thread is the paradox that supreme capability might lead to self-negation.
2) Fragmentation as solution: for Mainländer, God fragments into the universe; in Yampolskiy's theory, a superintelligent system might distribute or limit its own power. The parallel is self-limitation as a response to overwhelming potential.
3) Ethical self-destruction: Mainländer casts God's suicide as an act of cosmic necessity; Yampolskiy casts AI self-destruction as an ethical choice to prevent harm. Self-termination as a moral imperative is the common theme here.


Implications:
1) Technological transcendence: the unprecedented nature of AI capabilities mirrors Mainländer's concept of divine power. Both scenarios deal with entities that transcend normal operational boundaries. The "unbearable" nature of existence in Mainländer's work may parallel the ethical burden of superintelligent AI.
2) Ethical consciousness: Mainländer's God makes the conscious choice to end existence; a superintelligent AI has the potential for ethical self-awareness leading to self-limitation. Both suggest that supreme intelligence might necessarily lead to ethical self-awareness.
3) Information-theoretic death: Mainländer's physical fragmentation of the divine parallels Yampolskiy's information-theoretic death of AI--the concept of digital self-limitation, or "death", as a form of ethical action.


Ilya Sutskever, the co-founder and ex-chief scientist of OpenAI, has been discussing the potential for advanced unsupervised learning to give rise to superintelligence for a while now. For instance, in a 2019 interview [6], he suggested that very large neural networks trained on vast amounts of data might develop emergent abilities approaching superintelligence. Tellingly (fuck Sam Altman and all the other AI safety charlatans who staged the coup), Sutskever recently left the company and started another, Safe Superintelligence, which just raised $1 billion in funding... all cash. There's so much potential in this technology, this Being, and it's backed by material significance.

Again, it's crucial to emphasize that these ideas are highly speculative. We have no evidence that superintelligent AI will inevitably develop self-destructive tendencies, nor is truly unsupervised superintelligence currently feasible. OpenAI's GPT-4 and the other top benchmark-scoring large language models out there right now, while impressive, are far from self-aware or superintelligent. Nonetheless, exploring these philosophical parallels can be valuable--whether for posturing with hype-word vocabulary on your blog or for drawing far-out connections. Thought experiments prompt us to think deeply about the long-term implications of emerging technologies and highlight the importance of AI alignment research in ensuring that advanced systems behave in accordance with human values [7].


While Mainländer's concept of divine suicide--conceived almost 150 years ago--cannot be directly mapped onto unsupervised machine learning, it offers a thought-provoking metaphor for prevalent AI thought experiments of today. And I don't expect our contemporary pioneers to stop developing the technology, nor the alignment experts to stop their experiments. As AI capabilities advance, grappling with these philosophical questions may help us steer the technology in a beneficial direction. However, we must clearly distinguish speculative scenarios from the current realities of AI research and development. To conclude: the convergence of Mainländer's 19th-century metaphysics and contemporary AI thought experiments points to a recurring pattern in human thought about supreme entities--or Beings: the possibility that ultimate power leads to self-limitation or self-destruction for ethical reasons. This parallel becomes particularly relevant as AI systems approach unprecedented capabilities, raising the question of whether supreme intelligence necessarily leads to ethical self-awareness and potentially self-limiting behavior. The key distinction is that while Mainländer's theory was purely metaphysical, today's theories deal with potentially real scenarios, making the philosophical implications more urgent and practical. This transition from abstract philosophy to concrete technological possibility, as traced in this essay, is cool (extremely spooky) to think about.


References

1. Beiser, F. C. (2016). Weltschmerz: Pessimism in German Philosophy, 1860-1900. Oxford University Press.
2. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
3. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
4. Bostrom, N. (2011). Information hazards: A typology of potential harms from knowledge. Review of Contemporary Philosophy, (10), 44-79.
5. Yampolskiy, R. V. (2018). Superintelligence Suicide Theory. arXiv preprint arXiv:1802.04881.
6. Simonite, T. (2019). When Tech Giants Blanket the World. Wired. https://www.wired.com/story/when-tech-giants-blanket-world/
7. Amodei, D., et al. (2016). Concrete Problems in AI Safety. arXiv preprint arXiv:1606.06565.

The Commodification of Mathematical Talent: An Anecdote on Late Capitalism’s Depressing Ecology (Fuck Accelerationism)


I was sitting on the outside patio of Soda Hall yesterday, reading Žižek's critique of techno-capitalism and its false promises. Of course, with this brain of mine, I had the Lakers preseason game highlights in a pop-out window at the corner of the screen and didn't remember much of the philosopher's prose nor the game--I was mostly in my own self-sabotaging thoughts about a future I hope to never be a part of. I'm writing this the next day, now looking further into Žižek, as the feelings I'm having right now are of a similar attitude to what he was getting at.

This is 10/8/24: for a while it was just me and another guy in the back with his head down in the blue light of his loud gaming laptop, until he got up and entered the open Wozniak lounge. I had been in there a few times for IEEE meetings and was wondering what was up. People with company-embroidered backpacks from their internships and department-specific Berkeley merch flocked in, nearing the end of the hour. The Jane Street logo and information about a competition glared on the same whiteboard that I used to get inspired by when professors would talk about quantum mechanics or some nerdy topic they're hyperfixating on.


They had catered Chipotle along with the boba spot with the purple monkey logo and the thick-ass cups. Money. I'm intrigued. But more importantly, I wanted to prove to myself that I'm competitive with other people at this school. I popped in and asked the recruiters from the company if I could take part; they very reluctantly said yes, as they were expecting full capacity and registration was over. It seemed like everyone else in the room knew each other, with the hint of typical posturing and scanning you get in a competition setting. There were a couple of PhD students amongst us and a lot of people with past math comp experience like the AMC. I joined a table of three who had never really met before, with the exception of a "Hey, you're in my EECS-blah-blah section, right?" from one member to another. It was 10 rounds of 4 questions each. Most of the questions were past AMC questions, some were GMAT / LSAT, and a few of them involved depleting other teams of their points using mathematical functions--so every team was keeping track of who was on top. It was two hours of constant stimuli and recalling stuff I learned in my probability classes. Below this is a picture of an example round. Shoutout Henry, Charlie, Anay, and Isaac: we won third place out of the fifteen teams. Again, I'm writing this the next day: the sense of accomplishment I sought, then achieved, has faded. What remains is a peculiar emptiness that Žižek might characterize as the void at the heart of ideological satisfaction [1]. The competition, with its corporate branding and catered amenities, represents (in my grandiose opinion) what Fisher termed "capitalist realism" [2]--the transformation of pure mathematical pursuit into a recruitment pipeline for financial institutions.


My anecdote illustrates, in my mind at least--if not yours--the failure of Land's accelerationist fantasy [3]. Where Land sees technological acceleration as liberation, we instead find ourselves trapped in what Fisher calls a "slow cancellation of the future" [4]. The Jane Street logo looming over the same whiteboard where quantum mechanics once inspired dreams of scientific discovery isn't a sign of acceleration toward some techno-utopian future--it's evidence of capital's capacity to capture and redirect intellectual potential into its own reproduction.


Land's accelerationism fundamentally misunderstands the nature of contemporary capitalism [5]. What we're experiencing isn't an acceleration toward a technological singularity, but rather a stagnation of human potential. Our mathematical talents aren't being channeled toward breaking free of human limitations, as Land might hope, but rather toward the microsecond optimization of trading algorithms, as sought after by the recruiters of Jane Street or any other HFT firm like Hudson River Trading, Two Sigma, and the crony financial institutions in general. Our third-place victory, celebrated with corporate-branded swag and free food, exemplifies what Žižek calls the "interpassive" nature of contemporary capitalism [6]. We perform excellence not for its own sake, but for its potential conversion into market value. The temporary high of my victory masks a deeper truth that directly contradicts Land's accelerationist vision: rather than speeding toward a posthuman future, we're merely optimizing the present system's efficiency.


The fading sense of accomplishment perhaps signals what Jameson would call a "cognitive mapping" of our position within late capitalism [7]. The mathematical talents that could be directed toward genuine scientific advancement or social good are instead being captured by what Graeber terms the "vampire squids" of finance [8]. Wozniak lounge / Soda Hall on that day wasn't a place to learn; it was a place for my brilliant contemporaries to be caged. As I sit here reflecting, I can't help but think about how Land's acceleration is really just capitalism's intensification. The astronomical salaries and prestigious positions offered by quantitative trading firms represent not a break from traditional capitalism, as Land might argue, but its perfection--the complete subsumption of intellectual pursuit under market logic. The competition itself, with its mix of AMC, GMAT, and probability questions, reveals the bankruptcy of accelerationist thought. Rather than accelerating toward something new, we're just getting better at reproducing the patterns of the existing system. Each solved problem, each optimization, each algorithmic innovation serves not to transcend current limitations but to further enclose them. Wozniak lounge yesterday was not a place to accelerate, nor a place to entrench oneself in passion. I am no more than an optimized machine that is asked to match patterns.


