Metaphysics, futures studies, and artificial intelligence (AI) are usually regarded as rather distant, non-intersecting fields. There are, however, interesting points of contact which might highlight some potentially risky aspects of advanced computing technologies. While Nick Bostrom's original simulation argument was formulated without reference to the enabling AI technologies and their accompanying existential risks, I argue that there is an important generic link between the two, whose net effect under a range of plausible scenarios is to reduce the likelihood of our living in a simulation. This has several consequences for risk analysis and risk management, the most important being the assignment of greater priority to confronting "traditional" existential risks, such as those arising from the misuse of biotechnology, nuclear winter, or supervolcanism. In addition, the present argument demonstrates how, rather counterintuitively, seemingly abstract ontological speculations could, in principle, influence practical decisions on risk mitigation policies.
Journal: Futures - Volume 72, September 2015, Pages 27–31