Planning for an Uncertain Future
Foreword
I just want to make clear that this entry does not constitute a return to writing; I’m still taking a step back from it. I’m only writing this entry now because it’s important that people get this message now rather than later.
Lead-In
The seeds of this journal entry were planted in my mind many years ago, when I was in my mid-teens, and I’ve hinted at these sorts of thoughts in other journal entries. Here are a few quotes from my response to the movie “Don’t Look Up” that might help set the tone:
“[…] why do they [people] still talk about retirement or making long term investments as if they’re certain civilization will still be here in the distant future?”
“Shouldn’t young people at least consider the possibilities that humanity will have destroyed itself or civilization will have collapsed by the time they would have retired so that they can enjoy that money now rather than wasting it on a future they won’t have?”
“Many young would-be mothers have decided against having children because they’re afraid of the quality of life their child would have given climate change. But other would-be mothers, the vast majority, aren’t even considering that possibility. Shouldn’t they?”
There are also seeds of the thoughts I’m about to express in my entry “Antinatalism”:
“[…] there’s also the legitimate concern about what kind of world children born today will live in. Given the current trajectory of climate change and the failure of nations to address the problem, children born today will be destined to live in a world where large regions are uninhabitable and there’s constant conflict and war over resources unless drastic action is taken to prevent disaster. Is it moral to put another being into a world like that?”
All these quotes follow a similar line of argument, which I’ll generalize into one cohesive argument that I’ll name “The Argument For Futurist Planning”.
The Argument For Futurist Planning
We are at a unique moment in human history. It’s more likely than ever that one of the following will occur:
- A global catastrophic event or existential risk to the human species is realized
- New technology radically transforms society
Given the combined probability of all the events in these two categories, certain plans and courses of action that were historically considered good common sense might no longer apply in the modern world.
Let’s work through an example.
Applying Futurist Planning
We will use saving for retirement as an example since, historically, it has always been considered financial common sense. But let’s think about which potential future events challenge that common sense and how probable they are.
Everything in the first category potentially undermines the logic of saving for retirement. If a global catastrophic event or an existential risk to the human species is realized before I retire, then saving for retirement will have been a waste, because I could have enjoyed that money while I still had the chance. Let’s make some assumptions to make this more concrete:
I am 25 years old. If I retire at age 60, I will retire in 35 years. Suppose there is a 1% probability of global nuclear war in any given year.
The chance that no war breaks out in a given year is 99%, and since we’re treating the years as independent, the chance of making it through all 35 years war-free is 0.99^35 ≈ 70%. That leaves a 1-(1-0.01)^35 ≈ 30% chance that global nuclear war happens before I retire, meaning I can’t enjoy any of the money I saved and would’ve been better off spending it instead.
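As a sanity check, here’s that arithmetic in a couple of lines of Python (the 1% annual rate and the 35-year horizon are, again, just this entry’s assumed figures, not real estimates):

```python
# Probability of at least one global nuclear war within 35 years,
# assuming an independent 1% chance each year (an illustrative guess).
p_war_by_retirement = 1 - (1 - 0.01) ** 35
print(round(p_war_by_retirement, 2))  # 0.3
```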
In this hypothetical, there’s a 70% chance that global nuclear war doesn’t happen and I still need the retirement money. So I’d probably save. But what if we factor in other risks? Let’s pick one from the second category: new technology.
Assume that, in addition to the nuclear risk, there’s also a 3% chance every year that automation takes us into a post-scarcity economy, in which those who saved for retirement live no better than those who never saved a dime. To keep the calculation simple, we’ll also assume that the probability of post-scarcity is independent of the nuclear risk.
That works out to a 1-((1-0.01)^35*(1-0.03)^35) ≈ 76% chance that, before I retire, either global nuclear war occurs or automation undermines my financial prudence. Under those assumptions, it’s quite probable that saving for retirement would be a waste. So maybe I would choose to enjoy the money now instead.
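The same compounding logic extends to any number of risks. Here’s a minimal sketch in Python; the function name p_any_event is my own, and the 1% and 3% annual rates are the illustrative guesses from above, not real estimates:

```python
# Probability that at least one of several independent risks, each with
# a fixed annual probability, occurs within `years` years.
def p_any_event(annual_rates, years):
    p_none = 1.0  # probability that none of the risks ever occurs
    for rate in annual_rates:
        p_none *= (1.0 - rate) ** years
    return 1.0 - p_none

print(p_any_event([0.01], 35))        # ~0.30: nuclear war alone
print(p_any_event([0.01, 0.03], 35))  # ~0.76: nuclear war or post-scarcity
```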
If I save, I eliminate the risk of being left penniless in old age, but accept the risk that it might all be for nothing. If I spend, I eliminate the risk of years of wasted labor, but accept the risk of being penniless. Now imagine that we added in more risks and the probability of realizing any retirement-nullifying risk shot up from 76% to 90%. Would you still save for retirement?
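To put a rough number on that hypothetical: using the sketch above, adding just one more made-up independent risk of about 2.5% per year is enough to get there, since 1-(0.99*0.97*0.975)^35 ≈ 90%.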
To make the calculation more accurate, we would also have to include all the high-probability future risks that could threaten the logic of saving for retirement, base our risk probabilities on something other than guesses, and change the math to reflect that the risks aren’t independent of one another. I have no interest in doing any of that, so don’t base your decision on whether to save for retirement on any calculations in this entry.
To be clear, I’m merely demonstrating how one can perform calculations like this and end up with conclusions that contradict common sense as presented to us by mainstream society and culture. I am not prescribing a specific course of action.
You may have noticed that I did not mention individual risks, such as getting into a fatal car accident. It would also be valid to include them in the calculation, but I left them out because they’re not as broadly applicable as systemic risks, and I find them less interesting because people already understand them intuitively, unlike these more abstract risks.
Other Applications of Futurist Planning
What are some other potential applications of futurist planning? In principle, it can apply to any long-term plans you may have, where “long term” means the time horizon beyond which future risks in the categories I outlined become probable enough that it makes sense to modify your plans.
If I had to give a concrete estimate, I would say that “long term” in this context is probably anything longer than a few years.
Higher Education
We talked about retirement already. What about pursuing a degree in higher education? That’s a long-term plan.
If you like studying just to study, then the reward is immediately realized and Futurist Planning doesn’t apply. But if you’re thinking of going back to school because it’ll land you a higher-paying career afterwards, I would strongly suggest that you consider whether that well-paying career will be automated by the time you complete your education. I’m not necessarily telling you not to study; I’m just advocating that you consider automation before making a final decision either way.
Large Projects
If you have a project that you enjoy working on just for the sake of doing it, then knock yourself out. But if you’re working on something that’ll take a long time to complete, doesn’t need to be done immediately, and matters to you only for its end result, it might make more sense to wait until machine intelligence gains the capability to do it for you in a fraction of the time. That way, you can spend this time on something else that you find more fulfilling.
Imagine spending years compiling and organizing a large dataset. You finally finish it. Then, a week later, a new powerful open-source AI model is released which could’ve accomplished the same task in minutes, and done a better job.
Crime
If you commit crimes that could land you in jail or prison for years, you should consider stopping if you can. A few years of incarceration starting in 2024 could amount to a de facto life sentence, given the current level of global catastrophic risk.
Another reason to avoid long prison sentences right now is the rate of technological progress. Even if you spend just a few years in prison, the society you’re eventually released back into may be radically different from the one you left, perhaps even unrecognizable. So unless you have a very good reason, now is a good time to avoid committing any major crimes.
Writing
I’d like to share a personal example now.
I’m in a situation where it’s critical for me to advance career-wise, and any extra time or energy spent writing is time not spent on my career. When I stopped writing prolifically a year ago, I planned to come back to writing later, after I’d made enough progress in my career and other personal life goals that I felt ready to return.
However, the rapid progress of machine learning, along with other global catastrophic risks, has made me second-guess waiting until later to return to writing. I’m not so sure there will be a later anymore. It has also caused me to consider what I really want out of my writing. Am I just writing to help myself think through certain topics? Am I trying to leave a legacy? Do I care whether many people read it?
Once machine intelligence surpasses us, it’ll be able to write more compellingly than we can, which raises questions like “What will be the value in a human writing a blog? Will people still read blogs written by other humans?”
I think certain types of blogging may fit into Category 1 work, as defined in my entry “Automation and the Meaning of Work”:
“The first category of work is where the human prefers doing the work and the beneficiary of the work prefers a human doing it.”
I’m not sure whether the writing I do on this journal is the kind that people would appreciate a human doing instead of an AI. Perhaps people would still like to read personal entries that relate specifically to my experience, such as my entries about autism and how I live with it. Machine intelligence can’t produce those even in principle, because it’s not me and doesn’t know my experience. But the more ideological writings could potentially be automated, so they may fall into Category 3:
“The third category of work is where the human prefers doing the work but the beneficiary of the work prefers AI doing it.”
If I decide that having an audience in the future is important, I may focus my future writing more on real life experiences and less on concepts.
Anyhow, those are just a few examples of futurist planning to get you thinking. Consider how this kind of reasoning could apply to aspects of your own life.
A Historical Perspective
Now I want to take a moment to go back and focus on something I wrote nearer to the beginning of this entry:
“We are at a unique moment in human history.”
By that I mean that, for the vast majority of human history up until the Industrial Revolution, you couldn’t have made The Argument For Futurist Planning. Technological progress happened too slowly for individuals to have to plan for it, and the global catastrophes of the day, like the Black Death, couldn’t be planned for anyway. There was either nothing to prepare for or nothing you could do to prepare, so our ancestors never had to invoke Futurist Planning.
The situation is very different now, though. Technology advances from year to year, and people are starting to understand that they have to prepare for and even anticipate it. We also have much more information than we used to, which lets us better plan our individual lives around global catastrophic risks. I think it’s now the responsibility of all of us to use Futurist Planning and adjust our long-term plans accordingly.
Cultural Lag
Unfortunately, evolution did not prepare us to plan for the abstract, long-term risks that Futurist Planning is concerned with, and culture is only just beginning to catch up. The only semi-common example of Futurist Planning I’ve seen is when educated young people say they’re not having kids because of climate change, and even they aren’t aware that such thinking can be generalized and applied to other things.
For instance, I’ve never heard anyone make the point I made earlier about how it’s worse to get incarcerated nowadays because historically short sentences might now be de facto life sentences. In the judicial system, there’s still no recognition of this at all. We still have sentences like “25 years to life”, which I find absurd given my estimate of the chances of either human extinction or some prison-sentence-nullifying technology being invented within the next 25 years, but I digress.
Returning to my original example for evidence of this cultural lag, I challenge you to find a single online finance article that even hints at any of the potential risks to saving for retirement that I’ve written about here. You won’t find any. They all just go on about how much interest you’ll have earned 40 years from now, by which time we might all be living in a post-scarcity economy where money is meaningless.
What this means for you, dear reader, is that just by reading this journal entry, you’re now years ahead of mainstream culture. You’ll be able to make better long-term life decisions based on relevant factors that no one else is even considering, because their cultural programming never taught them to. Congratulations.
General Advice
To wrap up, I’d like to offer some general advice beyond the specific future scenarios I’ve already discussed.
Understand that, when it comes to global catastrophic risk, only a very small number of people will determine whether those risks are realized. Most of us aren’t in a position to change the outcome. So just do what you can, and don’t panic or worry excessively about it. That won’t do any good.
If you’re intelligent, technically-minded, and motivated to do so, work on solving AI safety. It’s probably the most pressing existential risk right now and every other good deed you could conceivably do would amount to a rounding error in comparison. If you’re involved in politics, government, or activism, work on getting money poured into AI safety, reducing existential risk, and passing an international treaty banning AI models more powerful than GPT-4 until AI safety is solved. If you create content, talk about these issues to raise public awareness.
Whether or not you’re in a position to affect the outcome, try to live in the moment and enjoy life now. Do what matters now. Don’t put important things off for later. Have fun. Don’t be afraid to make mistakes or look like a fool. Be open-minded. If there are experiences you haven’t had yet, you better have them soon. If there are things you’ve left unsaid, you better say them. If there are places you want to see, you better go see them. And finally, try to embrace (positive) change, because there’s going to be a lot of it.