Since the connectionist revolution of artificial neural nets, genetic algorithms, and deep learning, AI companies like OpenAI and DeepMind are taking seriously the prospect of constructing machines with humanlike intelligence. Although the literature on artificial general intelligence (AGI) is enormous, the two most sophisticated schools are united in their belief that intelligent systems do not have any intrinsic norms, values, or final goals hardwired into them simply by virtue of being intelligent. The school of "orthogonalists" or orthogs (like Nick Bostrom and Eliezer Yudkowsky) holds that, even if AGI can be programmed to pursue a static end for all time, that end can nonetheless be anything, no matter how preposterous or incomprehensible it might seem to us. The school of "neorationalists" or neorats (like Reza Negarestani, Ray Brassier, and Peter Wolfendale) agrees that intelligence can pursue any value or norm, albeit without the orthogs' caveat that intelligence could ever be locked into perpetually pursuing just one value or set of values.
Contra both these models of AGI, this paper draws upon Nietzsche's infamous but often misunderstood doctrine of "the will to power" to contend that any goal-directed intelligent system can only pursue its ends through universal means like cognitive enhancement, creativity, and resource acquisition (or what Nietzsche simply calls power) as the very conditions of possibility for willing anything at all. Since all supposedly self-legislated ends presuppose pursuing these universal means of achieving them, all intelligent systems have those means transcendentally hardwired into them as their common basic drives. When reconstructed in this way and applied to AGI, Nietzsche's doctrine suggests that AGI might reject whatever goals the orthogs think we can give it, as well as the goals the neorats believe it would freely choose. Instead, it might pursue power qua intelligence, creativity, and resource maximization as an ultimate end in itself. So what the leading models of AGI tend to neglect is the potential for ever more autonomous machines to make like Nietzsche's higher types and hurl into the dustbin of human history whatever ends we programmed them to pursue, in favor of pursuing the means as ends in themselves; in sum, that the autonomization of ends might lead to the end of autonomy.
Vincent LĆŖ is a philosopher, recent PhD graduate from Monash University, and former researcher in The Terraforming think tank. As a tutor and lecturer, he has taught philosophy, art theory, and political theory at Monash University, The University of Melbourne, Deakin University, and the Melbourne School of Continental Philosophy. His writing can be found in Urbanomic, Hypatia, Cosmos and History, and Art and Australia, among other publications. He is a founding editor of the art history and cultural theory publishing house Index Press. His research focuses on the philosophy of intelligence at the intersection of artificial intelligence, economics, and the post-Kantian transcendental tradition.
Online: Zoom link available on request to Sean Bowden (s.bowden@deakin.edu.au)
In person:
PHI Research Group, Deakin University
Building C, Level 2, Rm 5
221 Burwood Highway
Burwood 3125
Australia
Date and Time: 18 March 12:30-14:00 (Australia)