Guest author: Sebastian Levar Spivey
The world proliferates with apocalyptic speculation: the high priests of AI warn against the extinction they themselves threaten to usher in; specialists quantify the climate's many impending disasters with formulae and flourish; and scientists and diplomats set their clocks to nuclear Armageddon. Alongside such dramatized speculation, covens of experts seek to manage these uncertain futures by assessment and analysis, ascribing probabilities and forecasting futures. Risk assessments such as these are usually positioned as an advanced technique of modernity, in a lineage of Greek rationality, Enlightenment mathematics, and contemporary science that is contraposed to all things mystical and magical.
However, this conception of risk assessments is inaccurate, or at least incomplete. As an attempt to reckon with and make decisions into unknowable futures, risk assessments are as kin to the magico-religious as they are to the scientific (itself a dubious dichotomy). Indeed, they can be seen as a form of modern technoscientific divination. Considering risk assessments as divination does more than add a dash of magickal flavor to a dry technique. Rather, it shows how risk assessments make the future as much as they mathematize it. This identification also renders the present as strange enough to imagine possible alternative futures, should the probable one prove productive of harm.
Divination and Risk, in Brief
Risk assessments are the practice of assigning probabilities to possible futures, an attempt to disclose, manage, and make choices into the unknown. This practice is both epistemic and ethical, as it seeks to make knowable and actionable that which exists outside the bounds of human finitude, be that finitude temporal, perceptual, or moral. There is immediate resonance here with divination. Religious studies scholars Jesper Sørensen and Anders Klostergaard Petersen describe divination as "practices that manipulate the non-mundane, supernatural, or supra-human domain … with the purpose of extracting otherwise hidden, hard to come by information" (p. 11). Other scholars push the idea of practice further, naming divination explicitly as a technology of decision-making. As a technology, divination releases the "anguish of an uncertain future" by making it something that can be addressed in human terms (Boutinet in Zeitlyn, pp. 145-46). Historian Kim Beerden explicitly names risk assessment as divination, drawing a parallel between it and the divinatory techniques of ancient Greece. She argues that both approaches to divination follow a pattern of perception, identification, and signification. The only true difference is whence the source of the sign is deemed to have arisen (p. 3). There is resonance also between ancient and modern divination in the role that experts play. In both, they serve two purposes: first, to legitimize the identification and interpretation in question, and second, to defer decisional responsibility from both themselves and the decision maker they consult (Beerden p. 3, Zeitlyn pp. 146-47).
Making the Future
Alright, this is a pretty idea, but why does naming risk assessment as divination matter? There are two reasons. The first is that, in its attempts to know the future, risk assessment also makes the future. Outcomes which are hypothesized and queried as options "sketch the bounds of possibility … the alternatives they propose are accepted as viable and are literally unquestioned" (Zeitlyn, p. 152). This means that both the questions that are being asked and the questions that are left unformed matter. "We make our futures," writes anthropologist David Zeitlyn, "not only by the choices we make but, before then, by the outcomes we contemplate, by the patterns of our multiple anticipations" (p. 152). We can only act towards that which we conceive as possible, and only in a manner that corresponds to the rationality of how that possibility is conceived. There is potential here, both for building a more just world and for reifying and reimposing harmful structures of the past and present.
AI Armageddon
By way of illustration, let's return to one of the scenarios with which I began: the AIpocalypse. Numerous studies have attempted to identify, forecast, and quantify possible AI catastrophes, especially those that portend "existential risk." Industry leaders also have a habit of proclaiming their own fears, signing dramatic open letters and testifying to them before the US Congress.
As these experts seek to divine the future here, they have also enabled the reality in which that future could emerge. AI's risky potential, much hyped by the industry leaders, has been a means of accruing the attention, money, and power necessary for that potential to be realized. Positing AI as existential risk is, in the words of Intelligencer columnist John Herrman, "a clever and effective way to make an almost cartoonishly brazen proposal to investors — we are the best investment of all time, with infinite upside — in the disarming passive voice, as concerned observers with inside knowledge of an unstoppable trend and an ability to accept capital." The horizon of possibility here obscures the already existing harms of AI, such as bias enforcement, misinformation, and surveillance.
Notice here the characteristics of divination that I described above. Legitimization occurs through both expert mediation and the seeming objectivity of the risk assessment itself, which, as Herrman described, makes technologists themselves passive before AI's Armageddon, perpetually deferring responsibility from developer to legislator to user and back again. More crucially, the risk that is assessed depends on what is put forth as possible; that is, what is perceived, identified, and signified. That then becomes the congealing point around which the situation is understood, priorities are set, attention is focused, and decisions are made. Forecasting the AIpocalypse as superintelligence or nuclear meltdown also leaves other harms unsignified, such as the material and embodied violence of rare earth mining and data labeling. This obscures the many localized world-endings already generated, and perpetuates the inequalities of the system as it is. So to say risk assessments set the possibilities of the future is not to say that they determine the full outcome of probabilities, but that they shape the world into which that reality does or doesn't emerge, setting the conditions in which it is imagined, experienced, and comes to be. Whatever future they divine, they do so by constructing the conditions, material and imaginal, of the present.
The Unknown
The second reason that this identification as divination is important is for the sake of opening up the "horizon of expectation" in which risk assessments exist (Zeitlyn, p. 151). To situate them as such is to make them strange, giving just enough critical distance to unsettle the assumption that they proffer something objective, or that they transform humans into creatures any less finite than our ancestors in the face of the unknown. This doesn't imply risk assessments are useless or must be tossed out. Rather, naming them as divination discloses their limits and opens up space for other possibilities of worlding the unknown.
References
Aven, Terje. 2016. "Risk assessment and risk management: Review of recent advances on their foundation." European Journal of Operational Research 253:1–13. https://doi.org/10.1016/j.ejor.2015.12.023.
Bartholomew, Jem. 2023. "Q&A: Uncovering the labor exploitation that powers AI." Columbia Journalism Review, August 29. https://www.cjr.org/tow_center/qa-uncovering-the-labor-exploitation-that-powers-ai.php.
Beerden, Kim. 2014. "Ancient Greek futures: Diminishing uncertainties by means of divination." Futures. http://dx.doi.org/10.1016/j.futures.2014.03.002.
Bernstein, Peter L. 1996. Against the Gods: The Remarkable Story of Risk. New York: Wiley & Sons.
Bucknall, Benjamin S. & Dori-Hacohen, Shiri. 2022. "Current and Near-Term AI as a Potential Existential Risk Factor." AIES '22, August 1–3. Oxford. https://doi.org/10.1145/3514094.3534146.
Center for A.I. Safety. Statement on AI Risk. Open Letter. https://www.safe.ai/work/statement-on-ai-risk.
Covello, Vincent T. & Mumpower, Jeryl. 1985. "Risk Analysis and Risk Management: An Historical Perspective." Risk Analysis 5(2):103–120. https://doi.org/10.1111/j.1539-6924.1985.tb00159.x.
Crawford, Kate. 2021. The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t.
Einhorn, Gill. 2024. "These are the top 3 climate risks we face – and what to do about them." World Economic Forum, Jan 11. https://www.weforum.org/stories/2024/01/climate-risks-are-finally-front-and-centre-of-the-global-consciousness/.
Future of Life Institute. 2023. Pause Giant AI Experiments: An Open Letter. March 22. https://archive.ph/OrZK9#selection-815.0-815.42.
Hendrycks, Dan, Thomas Woodside & Mantas Mazeika. 2023. "An Overview of Catastrophic AI Risks." arXiv:2306.12001. https://doi.org/10.48550/arXiv.2306.12001.
Herrman, John. 2024. "What Ever Happened to the AI Apocalypse? Out: building God. In: partnering with Apple." Intelligencer, June 4. https://nymag.com/intelligencer/article/what-ever-happened-to-the-ai-apocalypse.html.
Kak, Amba & West, Sarah Myers. 2023. "The AI Debate Is Happening in a Cocoon." The Atlantic, November 9. www.theatlantic.com/ideas/archive/2023/11/focus-problems-artificial-intelligence-causing-today/675941/.
Lempert, Robert J. 2021. "Measuring global climate risk." Nature Climate Change 11:805–806. https://doi.org/10.1038/s41558-021-01165-9.
Marantz, Andrew. 2024. "Among the A.I. Doomsayers." The New Yorker, March 11. https://www.newyorker.com/magazine/2024/03/18/among-the-ai-doomsayers.
Mecklin, John (Ed.). 2024. "A moment of historic danger: It is still 90 seconds to midnight; 2024 Doomsday Clock Statement." Bulletin of the Atomic Scientists, Jan 23. https://thebulletin.org/doomsday-clock/current-time/nuclear-risk/.
Metz, Cade & Schmidt, Gregory. 2023. "Elon Musk and Others Call for Pause on A.I., Citing 'Profound Risks to Society.'" New York Times, March 29. https://archive.ph/aIvpy#selection-4609.0-4609.79.
Roose, Kevin. 2023. "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn." New York Times, May 30. https://archive.ph/zIfFb#selection-461.0-461.54.
Sørensen, Jesper & Petersen, Anders Klostergaard (Eds.). 2021. Theoretical and Empirical Investigations of Divination and Magic. Brill.
UNSC. 2024. "Nuclear Warfare Risk at Highest Point in Decades, Secretary-General Warns Security Council, Urging Largest Arsenal Holders to Find Way Back to Negotiating Table." Press Release, March 18. https://press.un.org/en/2024/sc15630.doc.htm.
Vold, Karina & Harris, Daniel R. Forthcoming. "How Does Artificial Intelligence Pose an Existential Risk?" in Oxford Handbook of Digital Ethics, C. Véliz, ed. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780198857815.001.0001.
Zeitlyn, David. 2021. "Divination and Ontologies: A Reflection." The International Journal of Anthropology 65(2):139–1.