📆 April 22, 2023 | ⏱️ 4 minute read

On Nick Bostrom

For those who don’t know Nick Bostrom, here’s a snippet of the bio from his website:

“Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, and artificial intelligence, as well as philosophy. He is the most-cited professional philosopher in the world under the age of 50.

He is a Professor at Oxford University, where he heads the Future of Humanity Institute as its founding director. He is the author of some 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about the future of AI. He has also published a series of influential papers, including ones that introduced the simulation argument (2003) and the concept of existential risk (2002).

Bostrom’s academic work has been translated into more than 30 languages. He is a repeat main TED speaker and has been interviewed more than 1,000 times by various media. He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15. As a graduate student he dabbled in stand-up comedy on the London circuit, but he has since reconnected with the heavy gloom of his Swedish roots.”

Bostrom is obviously a very accomplished guy. I believe I first discovered him through his oft-misunderstood paper on the simulation argument. I studied the argument and its criticisms closely, and after many hours of working through the objections and his replies, I concluded that the argument is sound, and even intuitive once you shift to thinking in four dimensions. He adequately addressed some of the criticisms himself, and for the ones he didn’t, I was able to come up with my own responses.
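For anyone who hasn’t worked through the paper, the core of the argument can be sketched roughly as a single fraction (my loose paraphrase, not Bostrom’s exact notation from the 2003 paper):

    f_sim = (f_p × N) / (f_p × N + 1)

where f_p is the fraction of human-level civilizations that survive to reach a technologically mature (“posthuman”) stage and N is the average number of ancestor-simulations such a civilization runs. Unless f_p or N is very close to zero, f_sim ends up very close to one. That gives the familiar trilemma: either almost no civilizations reach technological maturity, or almost none of those that do run ancestor-simulations, or almost all observers with experiences like ours are simulated.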

In my opinion, publishing the simulation argument alone should be enough to make a name for oneself in philosophy. But Bostrom, like me, is a polymath; he has contributed ideas to many other disciplines. Instead of listing them out, I’ll include another snippet from his website:

“Aside from my work related to the AI control problem, I have also originated or contributed to the development of ideas such as the simulation argument, existential risk, transhumanism, information hazards, superintelligence strategy, astronomical waste, crucial considerations, observation selection effects in cosmology and other contexts of self-locating belief, anthropic shadow, the unilateralist’s curse, the parliamentary model of decision-making under normative uncertainty, the notion of a singleton, the vulnerable world hypothesis, along with a number of analyses of future technological capabilities and concomitant ethical issues, risks, and opportunities. Most recently, I’ve been doing some work on the moral and political status of digital minds, and on some issues in metaethics.”

The common thread running through almost all of Bostrom’s work is what he calls “macrostrategy”: thinking about how our present behavior and long-term plans affect the future prosperity of our species. He’s not just another academic writing about some obscure topic that doesn’t really matter. With every paper I read from him, I learn something. So if that’s the kind of thing that interests you, as it does me, definitely take a look at his work.

I’d love to see more people doing work on strategies for humanity’s long-term future. It’s desperately needed, and there aren’t enough academics doing philosophy in this area. Bostrom’s papers on AI in particular have become increasingly relevant as the field has made leaps and bounds over the past decade.

I admit I haven’t read all of his published papers, but I’ve read and understood a good number of them. There are many more that, based on a skim, I’d love to read, but I don’t currently have the time. I’ll leave you with a few favorites from what I’ve read so far, in no particular order: