
Artificial Will

Deakin Philosophy Seminar Series
Date/deadline: Tuesday, 18 March 2025

Since the connectionist revolution of artificial neural nets, genetic algorithms, and deep learning, AI companies like OpenAI and DeepMind have taken seriously the prospect of constructing machines with humanlike intelligence. Although the literature on artificial general intelligence (AGI) is enormous, its two most sophisticated schools are united in the belief that intelligent systems do not have any intrinsic norms, values, or final goals hardwired into them simply by virtue of being intelligent. The school of "orthogonalists" or orthogs (like Nick Bostrom and Eliezer Yudkowsky) holds that, even if AGI can be programmed to pursue a static end for all time, that end can nonetheless be anything, no matter how preposterous or incomprehensible it might seem to us. The school of "neorationalists" or neorats (like Reza Negarestani, Ray Brassier, and Peter Wolfendale) agrees that intelligence can pursue any value or norm, albeit without the orthogs' caveat that intelligence could ever be locked into perpetually pursuing just one value or set of values.

Contra both these models of AGI, this paper draws upon Nietzsche's infamous but often misunderstood doctrine of "the will to power" to contend that any goal-directed intelligent system can only pursue its ends through universal means like cognitive enhancement, creativity, and resource acquisition, or what Nietzsche simply calls power, as the very conditions of possibility for willing anything at all. Since all supposedly self-legislated ends presuppose pursuing these universal means of achieving them, all intelligent systems have those means transcendentally hardwired into them as their common basic drives. When reconstructed in this way and applied to AGI, Nietzsche's doctrine suggests that AGI might reject whatever goals the orthogs think we can give it, as well as the goals the neorats believe it would freely choose. Instead, it might pursue power qua intelligence, creativity, and resource maximization as an ultimate end in itself. So what the leading models of AGI tend to neglect is the potential for ever more autonomous machines to make like Nietzsche's higher types and hurl into the dustbin of human history whatever ends we programmed them to pursue, in favor of pursuing the means as ends in themselves; in sum, the autonomization of ends might lead to the end of autonomy.

Vincent LĆŖ is a philosopher, recent PhD graduate from Monash University, and former researcher in The Terraforming think tank. As a tutor and lecturer, he has taught philosophy, art theory, and political theory at Monash University, The University of Melbourne, Deakin University, and the Melbourne School of Continental Philosophy. His writing can be found in Urbanomic, Hypatia, Cosmos and History, and Art and Australia, among other publications. He is a founding editor of the art history and cultural theory publishing house Index Press. His research focuses on the philosophy of intelligence at the intersection of artificial intelligence, economics, and the post-Kantian transcendental tradition.

Online: Zoom link available on request to Sean Bowden (s.bowden@deakin.edu.au)
In person:
PHI Research Group, Deakin University
Building C, Level 2, Rm 5
221 Burwood Highway
Burwood 3125
Australia

Date and time: Tuesday, 18 March 2025, 12:30-14:00 (Australia)