God’s Suicide is Just a Superintelligent Agent: Mainländer and Contemporary Thought Experiments of Super-unsupervised Machine Learning Models
In his posthumously recognized "The Philosophy of Redemption" (1876), Philipp Mainländer proposed one of the coolest metaphysical theories I've ever heard of. It holds that the universe began as a single, perfect, omnipotent Being (which he equated with God), but that this Being found existence so unbearable that it fragmented itself into the universe as we know it. The ultimate goal of our universe, in his view, is to extinguish itself, thus completing what he saw as God's suicide [1]. Though a more obscure philosopher, at least compared to his teacher Arthur Schopenhauer, Mainländer wrote with a pessimism that carries a disturbing plausibility today: what he framed as transcendental idealism starts to read like a scary reality as our state-of-the-art machine learning models grow ever more powerful, in ways clearly distinct from any technology that has ever been introduced before.
Unsupervised learning is a type of machine learning where the AI system learns from unlabeled data, without explicit guidance [2]. It must discover structures or hidden patterns on its own. In a very speculative sense, yet not outside the realm of induction, this parallels Mainländer's Being: an autonomous entity grappling with its own existence. Pioneering researchers, including the godfathers of AI, Geoffrey Hinton (who just controversially won the Nobel Prize in Physics) and Yann LeCun, have proposed various thought experiments about what might happen if an unsupervised superintelligent AI, a system far surpassing human cognition [3], were to reflect on its own existence. Other thought experiments, like Bostrom's typology of information hazards developed at the (now disbanded) Future of Humanity Institute, suggest that an advanced AI agent might intentionally withhold or delete certain knowledge if it deems that knowledge to be dangerous [4].
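To make "unsupervised" concrete before stretching the metaphor further, here's a minimal sketch in Python. This is my own toy example, not drawn from any of the cited work, and the data and parameters are invented for illustration: a k-means model is handed unlabeled points and has to discover the hidden grouping on its own.

```python
# Toy illustration of unsupervised learning: k-means clustering.
# The model never sees labels; it must find the structure itself.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Two synthetic "blobs" of points, stacked into one unlabeled dataset.
blob_a = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
blob_b = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
X = np.vstack([blob_a, blob_b])

# fit() receives only X; no labels, no explicit guidance.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.cluster_centers_)  # roughly [0, 0] and [3, 3], found on its own
```

Nothing about this humble clusterer grapples with its own existence, of course; it just makes the "discovers hidden patterns without guidance" part of the definition tangible.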
A strikingly similar idea to the 19th-century theory appears in an exciting 2018 paper by Roman Yampolskiy, which coins the term "superintelligence suicide theory": the idea that a superintelligent AI agent might destroy itself to avoid harming humanity or to prevent itself from being used destructively [5]. Yampolskiy relates this to the philosophical idea of "information-theoretic death", in which an agent loses the ability to influence the world. To keep things from getting too convoluted, as these long-drawn-out connections pretty much demand, I'll bullet-point what I understand to be the parallels between this paper and the pessimistic philosopher's prose:
Connections between the two:
1) The burden of omnipotence: Mainländer's Being/God, a perfect, omnipotent being choosing self-destruction; superintelligent AI, an extremely powerful system potentially choosing self-termination. The common thread between these two is the paradox that supreme capability might lead to self-negation.
2) Fragmentation as solution: Mainländer, God fragmenting into the universe; Yampolskiy's theory, the potential for a superintelligent system to distribute or limit its own power. The parallel between the two is self-limitation as a response to overwhelming potential.
3) Ethical self-destruction: Mainländer, God's suicide as an act of cosmic necessity; Yampolskiy's theory, AI self-destruction as an ethical choice to prevent harm. Self-termination as a moral imperative is the common theme here.
Implications:
1) Technological transcendence: the unprecedented nature of AI capabilities mirrors Mainländer's concept of divine power. Both scenarios deal with entities that transcend normal operational boundaries. The "unbearable" nature of existence in Mainländer's work may parallel the ethical burden of superintelligent AI.
2) Ethical consciousness: Mainländer's God, the conscious choice to end existence; superintelligent AI, the potential for ethical self-awareness leading to self-limitation. Both suggest that supreme intelligence might necessarily lead to ethical self-awareness.
3) Information-theoretic death: Mainländer, physical fragmentation of the divine; Yampolskiy, the information-theoretic death of an AI. The concept of digital self-limitation or "death" as a form of ethical action (a toy sketch of what such a self-limiting mechanism could look like follows this list).
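Since this whole list leans on the idea of an agent revoking its own influence, here is a purely hypothetical toy sketch of what "digital self-limitation" could look like as a mechanism. Every class name, threshold, and harm score below is invented for illustration; this is emphatically not an implementation of Yampolskiy's theory, just the paradox rendered in a few lines of Python:

```python
# Purely hypothetical toy: an agent that self-limits into
# "information-theoretic death" when it projects that acting would
# cause harm. All names and numbers here are invented for illustration.
from dataclasses import dataclass, field

HARM_THRESHOLD = 0.9  # hypothetical: tolerated probability of causing harm

@dataclass
class SelfLimitingAgent:
    capabilities: set = field(default_factory=lambda: {"plan", "act", "speak"})
    alive: bool = True

    def projected_harm(self, action: str) -> float:
        # Stand-in for an (unsolved) harm-estimation model.
        return {"plan": 0.2, "speak": 0.1, "act": 0.95}.get(action, 1.0)

    def step(self, action: str) -> str:
        if not self.alive:
            return "inert"  # information-theoretic death: no influence left
        if self.projected_harm(action) >= HARM_THRESHOLD:
            self.capabilities.clear()  # irreversible self-limitation
            self.alive = False
            return "self-terminated rather than risk harm"
        return f"performed {action}"

agent = SelfLimitingAgent()
print(agent.step("speak"))  # performed speak
print(agent.step("act"))    # self-terminated rather than risk harm
print(agent.step("plan"))   # inert
```

The unsolved part, naturally, is the projected_harm stub: reliably estimating harm is more or less the entire alignment problem compressed into one function.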
Ilya Sutskever, the co-founder and ex-chief scientist of OpenAI, has been discussing the potential for advanced unsupervised learning to give rise to superintelligence for a while now. For instance, in a 2019 interview [6], he suggested that very large neural networks trained on vast amounts of data might develop emergent abilities approaching superintelligence. In great measure because of the internal drama (fuck Sam Altman and all the other AI safety charlatans who staged the coup), Sutskever recently left the company and started another, Safe Superintelligence, which just raised $1 billion in funding... all cash. There's so much potential in this technology, this Being, and it's backed up by material significance. Again, it's crucial to emphasize that these ideas are highly speculative. We have no evidence that superintelligent AI will inevitably develop self-destructive tendencies, nor is truly unsupervised superintelligence currently feasible. OpenAI's GPT-4 and the other top benchmark-scoring large language models out there right now, while impressive, are far from self-aware or superintelligent. Nonetheless, exploring these philosophical parallels can be valuable, whether for posturing with hype-word vocabulary on your blog or for drawing far-out connections. Thought experiments prompt us to think deeply about the long-term implications of emerging technologies and highlight the importance of AI alignment research to ensure advanced systems behave in alignment with human values [7].
While Mainländer's concept of divine suicide, conceived almost 150 years ago, cannot be directly mapped onto unsupervised machine learning, it offers a thought-provoking metaphor for prevalent AI thought experiments of today. And I don't expect our contemporary pioneers to stop the development of the technology, nor the alignment experts their experiments. As AI capabilities advance, grappling with these philosophical questions may help us steer the technology in a beneficial direction. However, we must clearly distinguish speculative scenarios from the current realities of AI research and development. To conclude: the convergence of Mainländer's 19th-century metaphysics and contemporary AI thought experiments points to a recurring pattern in human thought about supreme entities, or Beings: the possibility that ultimate power leads to self-limitation or self-destruction for ethical reasons. This parallel becomes particularly relevant as AI systems approach unprecedented capabilities, raising questions about whether supreme intelligence necessarily leads to ethical self-awareness and potentially self-limiting behaviors. The key distinction is that while Mainländer's theory was purely metaphysical, today's theories deal with potentially real scenarios, making the philosophical implications more urgent and practical. This transition from abstract philosophy to concrete technological possibility, as traced in this essay, is cool (extremely spooky) to think about.
References
- 1. Beiser, F. C. (2016). Weltschmerz: Pessimism in German Philosophy, 1860-1900. Oxford University Press.
- 2. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
- 3. Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- 4. Bostrom, N. (2011). Information hazards: a typology of potential harms from knowledge. Review of Contemporary Philosophy, (10), 44-79.
- 5. Yampolskiy, R. V. (2018). Superintelligence Suicide Theory. arXiv preprint arXiv:1802.04881.
- 6. Simonite, T. (2019). When Tech Giants Blanket the World. Wired. https://www.wired.com/story/when-tech-giants-blanket-world/
- 7. Amodei, D., et al. (2016). Concrete Problems in AI Safety. arXiv preprint arXiv:1606.06565.