Predicting The Near-Term Consequences of AI
I thought I’d have at least a few more years until AI became this good, but it’s to the point where I’m starting to feel like I’m in a Black Mirror episode. As AI technology improves, the near-term future becomes increasingly uncertain. Nevertheless, I think it’s necessary to at least try to predict what will happen so we can prepare ourselves.
So far on this journal, I’ve made several predictions about the effects of AI on society in the near term. I’d like to summarize my major predictions in this entry to make them more easily accessible, add a few more predictions, make some judgments, and suggest how humanity may proceed. Keep in mind that many of these predictions are highly speculative and may not pan out exactly the way I predict. They’ll only apply if AI doesn’t immediately drive us into a utopia/dystopia scenario. My predictions are by no means comprehensive and they’re not in any particular order.
Let’s begin with some predictions I started thinking about in “Implications of Synthetic Media”.
I believe that free online service providers such as social media networks will be forced to take extreme actions to avoid being inundated with sock puppet accounts. This will probably include mandatory identity verification. Email providers will have to enforce a sender whitelist to prevent users from being flooded with spam emails. Since human users will easily be capable of generating synthetic (AI-generated) media, and they’ll have incentives to do so, everything one sees from unknown online sources will have to be treated with the highest degree of skepticism.
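As a rough illustration, the sender-whitelist idea amounts to inverting the usual spam-filter default: instead of blocking known-bad senders, deliver only known-good ones. This minimal sketch is hypothetical — the addresses and quarantine behavior are made up for illustration, not any real provider’s implementation:

```python
# Minimal sketch of an inverted spam filter: only senders the user has
# explicitly approved get delivered; everyone else is quarantined.
# Addresses and the quarantine policy here are hypothetical examples.

def filter_inbox(messages, allowlist):
    """Split (sender, body) pairs into delivered and quarantined lists
    based on whether the sender appears on the user's allowlist."""
    delivered, quarantined = [], []
    for sender, body in messages:
        if sender.lower() in allowlist:
            delivered.append((sender, body))
        else:
            quarantined.append((sender, body))
    return delivered, quarantined

allowlist = {"friend@example.com", "boss@example.com"}
messages = [
    ("friend@example.com", "Lunch tomorrow?"),
    ("randombot123@example.net", "You won a prize!"),  # plausible AI-generated spam
]
delivered, quarantined = filter_inbox(messages, allowlist)
```

The point of the inversion is that an AI can generate unlimited novel spam that evades content-based filters, but it can’t forge its way onto a list the recipient controls.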
One good thing about synthetic media is that it may make online extortion and blackmail harder. Since every computer-literate human will be able to generate nude photos and other embarrassing/incriminating data of anyone else, everyone will have full deniability because it’ll be impossible to prove that the data is real.
With AIs that can help hackers find software vulnerabilities and fake voices for social engineering, businesses will have to spend more resources on cybersecurity and cybersecurity training. Detecting cheaters in video games will become hard, if not impossible. Gaming companies may have to take extreme measures to prevent cheats.
AI will open up new forms of art and self-expression. I predict that AI will completely undermine intellectual property. It’s already happening. Artists’ work is being remixed and reused without their permission. Free software is being laundered through AI into proprietary software. I think intellectual property was always a mistake and the only sensible way forward, especially given recent developments in AI, is to abolish it, put everything in the public domain, and set up a fund to reimburse artists, drug companies, movie producers, and anyone else who may depend on it for their livelihood.
If intellectual property rights do continue to exist in the same capacity they exist today, I predict the laws regarding AI and intellectual property will be ineffective and unenforceable.
Humans will begin to form relationships with AIs like in the movie Her. Even though the AIs won’t be as good as in the movie, it won’t matter. They’ll be good enough to be used for many purposes. They’ll be people’s friends, significant others, therapists, life coaches, teachers, and everything in between. This may cause human-to-human relationships to become less common or important.
I think AI will be a privacy disaster in two separate ways. First, there will be more AI-based privacy-invading technology. I’m specifically concerned about:
- AI causing private information disclosure through scarily accurate inferential capabilities
- AI surveillance being used on groups of people in a way that exacerbates unjust power differentials
Second, in my entry “AI Poses a Threat to Privacy” I expressed concern that AI would harm privacy in the same way smartphones do. Currently, the only way to benefit from the most powerful AIs is to give your private data to the service which provides the AI. If this remains true, it may create a two-tier society in which the small minority who chooses to forego the benefits of AI to preserve their privacy faces an intolerably difficult life.
There won’t be any law saying “You must use AI”, just as there’s no law saying “You must own a smartphone”. It’ll just be too difficult to function in society without it. For example, it’ll be impossible to compete in the workforce against people who are willing to use AI to augment their abilities if you’re not willing to. Thus, agreement to the AI service providers’ terms of service will be coerced.
Since this implicit coercion issue isn’t discussed at all for smartphones, I expect it won’t get any attention for AI either. Therefore if AI somehow doesn’t end up harming privacy and undermining consent in the way I just described, it’ll be a matter of luck rather than careful planning.
Attention Engineering / Manipulation
AI-powered social media sites are partially responsible for destroying people’s ability to pay attention and making them depressed and angry. In case you’ve been living under a rock, it has now become normalised for everyone to be addicted to their smartphone, checking social media hundreds of times per day. For that reason, I call social media networks “digital Skinner boxes”.
I don’t carry a smartphone because I didn’t want to be a part of that. Unfortunately, since everybody else has them, I’m often tempted to borrow other people’s smartphones and get sucked in anyways. The pull of social media is very strong even for someone like me who goes out of their way to avoid it. If social media becomes any more addictive than it already is, and it almost certainly will since AI will only improve, then I think humanity is going to have an even bigger attention crisis on its hands.
I won’t go into too much detail about AI-driven lethal autonomous weapons. Instead, I’ll point to a short video that captures my concern better than anything I could write here: “Slaughterbots”. If you haven’t seen it, I highly recommend it.
I haven’t researched this area enough to make any solid predictions. All I can say is that I hope we don’t end up in a situation like in the video where everyone has to stay indoors all the time, nowhere is safe, etc.
I predict that all major useful proprietary software will be reverse engineered with AI assistance. Translation software will become good enough that no one will need to learn foreign languages unless they want to. As I mentioned in “Automation, Bullshit Jobs, And Work”, so much human labor will be automated that only two practical possibilities will remain:
- In countries that stubbornly maintain a poor social safety net, loads of bullshit jobs will be created to prevent mass homelessness, starvation, and ultimately revolution.
- Alternatively, a socialist program like universal basic income will be implemented so that people don’t have to work to survive and are free to do other things.
Perhaps some forms of automation could be banned to prevent mass unemployment, but I’m skeptical that would work since it might make one’s country unable to compete in the global economy. I don’t know enough about that to make any definitive claims though.
In my entry “Automation and The Meaning of Work”, I predicted how automation would affect how people find meaning. I think it will have some positive effects, like ending child labor and freeing people from miserable and dangerous jobs, giving people more time to do things they enjoy. But it will also have negative effects, such as taking away work people find meaningful. I predict some jobs will still remain, specifically those that human workers like doing and that the people who benefit from the labor prefer humans to do.
I predict that if nothing is done to incentivize students, they’ll be discouraged from attending higher education since their future jobs will be automated anyways. Perhaps students won’t be discouraged though if going to university is more of a sociocultural expectation than a rational economic choice they’re making.
With the dramatic reduction in useful human labor, I predict that culture will be forced to adapt so that human meaning is no longer associated with what one does for money.
I’m very concerned about how AI will affect the (in)justice system. There are worrying trends that I hope reverse themselves, such as AI surveillance taking U.S. prisons by storm. That terrifies me because U.S. prisons are already farcically punitive compared to reasonable prison systems, far too many Americans are in jail, many of them haven’t even been convicted, and many of those who have were convicted of breaking unjust laws.
I predict that AI will make the illegal practice of parallel construction more effective and potentially more common. Perfect or near-perfect enforcement of laws would be highly undesirable or, to put it less diplomatically, a total fucking nightmare. I think we need to be very cautious in deciding which AI technologies, if any, police are permitted to use.
As for the court system, I predict that it’ll be so easy to create synthetic media that photos, videos, audio, and other digital evidence will no longer be taken seriously. We’ll have to go back to relying more on other forms of evidence, such as impartial witnesses, contextual information, and DNA.
AI is already revolutionising scientific research, and we can expect this trend to continue. There are a few ideas floating around that try to make sure this new scientific understanding and technology helps mitigate existential risk rather than increase it.
Two ideas I’m in favor of are differential technological development and differential intellectual progress. The idea of the former is to develop existential-risk-reducing technologies rather than existential-risk-increasing technologies. The idea of the latter is that we should increase our philosophical sophistication and wisdom before proceeding with technological progress.
It helps to have global coordination to accomplish these goals. Humanity currently lacks global cooperation, so it’s going to be challenging to get everyone to agree to differentially pursue technological development. Even if international treaties are signed, it’s hard to be sure that governments aren’t secretly pursuing the banned technology, especially if it would give them an edge.
With a higher rate of technological development than in the past, governments will have to adopt more agile decision-making frameworks or else they won’t keep pace with technological progress and won’t be able to effectively govern. Computer-illiterate elderly government officials who can’t keep up with smartphones or social media just aren’t going to cut it in the age of rapidly-advancing AI. We need leadership that can understand new technology.
There’s so much more that I wish I could get to, but I don’t have the time. For instance, I didn’t even mention any propositions concerning digital minds. That may be a more long-term issue, but I would argue that it’s relevant now because we will soon build AIs that constitute primitive digital minds. Fortunately people like Nick Bostrom and Carl Shulman have made some headway on digital minds in their paper “Propositions Concerning Digital Minds and Society”.
Anyways, I thank you for reading my journal entries and considering these issues with me. I hope to write more about AI in the future. Sometimes I look at the work of people like Nick Bostrom and think “Wow! I am so underqualified to write about this. Should I even bother?” but then I remind myself that:
- He writes academic papers while I’m just writing a blog, so expectations of rigor are different
- I have decent reasoning skills and more thinking is needed on this subject
- There are people with far greater reach than mine who are even less qualified, publicly thinking about AI
So based on that, I don’t think I’m out of bounds here. As I said and I’ll repeat, my predictions are highly speculative. Nobody knows exactly what’s going to happen in the near-term future. All we can do is make our best guess, and this is mine. If anyone has constructive criticism, feel free to get in contact with me and share it.