Free Will in AI and The Specter of Human Extinction
At the top of the food chain as we are, it seems only natural to fear our own creations. A malevolent AI is us – but better, smarter, and outside of our control. This fear touches on all of our weaknesses. How long is the path from customer service chatbot to genocidal, omnipotent machine? We have only our own history to guide us, and the esoteric idea of Free Will.
Technitonoimosyniphobia, the fear of artificial intelligence, is real and divisive. The possible apocalyptic scenarios are colorful and countless. I hear (and contribute) my share of robot-overlord jokes as part of the whole gestalt of the internet age. Elon Musk, Stephen Hawking, Isaac Asimov and many others have tapped into the general unease around AI. The fear is pervasive because we have no idea what may happen, yet the greater discussion hasn’t gone beyond doomsday scenarios.
The idea of an “evil” AI is deeply rooted in our culture. With all the advancements in AI, it seems inevitable that we will arrive at sentient machine intelligence, and soon. With our evolutionary history in mind, the question becomes: will a new “species” be a step change in human evolution, or will it be the cause of our extinction? How afraid should we be of a new king of the jungle, one more capable, more adaptive, and more energy-efficient than we are?
What will AI dream of?
We are scared of an AI capable of two things:
- coming to the conclusion that humans must be eliminated, and
- doing so.
Some suggest that we should eliminate this possibility completely. If we can program AI not to harm humans, then the AI-pocalypse seems less likely. Unfortunately for us, we know the score: the good guys won’t cross the line, and the bad guys will. There is no room for absolute good in the real world we live in. Let’s assume we can’t directly control the collective AIs of the future.
The Free Will Question remains. Science fiction authors, theologians, philosophers and scientists alike have been debating whether humans have free will, and whether there is a way to know for sure. Perhaps the point is moot; if there is an illusion of free will, why wonder whether we have the real thing?
If we are creating AIs in our own likeness, an AI of the future is likely to inherit its value systems, dreams, and decision trees from us, including whatever degree of free will we do have, illusion or not.
We all know that it’s impossible to “reprogram” a friend or colleague who disagrees with you, but it is possible to influence them. Both Machiavelli and Aristotle would tell you that the best way to do that is to study their motivations. Appealing to their value system can prompt them to change their thoughts and behaviors. We don’t have to go far to see this in action on a large scale. According to a 2018 report to the Senate Intelligence Committee, certain Russian actors used social media, the right motivations, and the American value system to try to affect the election of an American president.
Westworld, I, Robot, Ex Machina, Star Trek – there’s no shortage of great TV and film attempting to unravel the potential motivations of artificial life. Granted, it’s all science fiction. Reality is likely to be even more bizarre, perhaps something totally unimaginable. One thing is certain, though: as AI programs flirt with sentience, their programmers – we ourselves – will have a very good grasp of their motivations and reasoning, of their dreams. And understanding motivation is the ultimate tool for predicting or changing behavior.
We are scared that an AI will come to the conclusion that humans must be eliminated. Our own history tells us that we don’t need to be.
“A model is a lie that helps you see the truth”
For now, we are not even close to creating fully sentient AI. That is no reason not to work towards the utopia we hope for – that AIs can fulfill their promise without our total loss of control. We assume AIs will evolve to be “better” than us because we are building them with the goal of emulating perfected human thinking. That is, thinking and doing as we do, only optimally, every time. We assume that AIs will consider all information necessary, weigh it accordingly, and quantify the value of each consequence before any decision.
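As a rough sketch of what that idealized decision procedure might look like, here is a small, hypothetical weighted-outcome chooser in Python. The options, weights, and values are invented purely for illustration; nothing in this essay prescribes such a mechanism.

```python
# Hypothetical sketch of the "perfected" decision-making described above:
# enumerate the options, weigh every consequence, and pick the best one every time.

def choose(options: dict[str, list[tuple[float, float]]]) -> str:
    """Return the option whose consequences have the highest weighted value.

    Each option maps to (weight, value) pairs: how much a consequence matters
    and how good or bad it is. All numbers here are illustrative placeholders.
    """
    def score(consequences: list[tuple[float, float]]) -> float:
        return sum(weight * value for weight, value in consequences)

    return max(options, key=lambda name: score(options[name]))


# Made-up example: an agent weighing two courses of action.
options = {
    "cooperate with humans": [(0.7, 0.9), (0.3, 0.4)],
    "ignore humans": [(0.7, 0.2), (0.3, 0.8)],
}
print(choose(options))  # "cooperate with humans" under these invented numbers
```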
Let’s not forget that despite 3.7 billion years of evolutionary history, we haven’t come close to this ourselves. As long as we have the illusion of free will, humans will always disagree. We think of ourselves as individuals, each with our own way of considering, weighing, and quantifying information.
AI is not being developed by a single Cyberdyne-like entity. It’s likely that as AI advances, different AIs will develop different value systems based on their programming. These AIs will reflect our individuality and the differences in their training, just as humans come to different conclusions in similar situations based on our biological, social, and moral histories.
For example, humans have agreed that the best tool we have to bridge our differences is language, yet we mostly use it to highlight our differences about everything else – God, science, parenting, politics, the color of the living room wall. We cannot even agree on scientifically established facts like the necessity of vaccinations.
And AI programs do not even have language.
Some are scared that an AI will come to the conclusion that humans must be eliminated. Even if that happens, there are likely to be many other AIs, perhaps equally competent, that conclude otherwise.
Howard Skipper once quipped, “A model is a lie that helps you see the truth.” We are masters of creativity and problem-solving. Those pursuing AI for the sake of unburdening humanity from mindless tasks, such as Microsoft, IBM, and even upstarts like Coseer, understand the value that language brings in modeling human thought, and its collective implications. They also understand that the best an AI can achieve is this model.
A chain of improbabilities
Let’s put all of this together. AIs can make humans extinct if the following are true:
- AI agents develop free will.
- Humans don’t understand or control AIs’ dreams and motivations.
- The various AIs, perhaps billions, develop a language to communicate with each other, and start to form predominant ideas as if they all were a single AI.
- AIs decide that humans must be eliminated.
- Humans and humanist AIs let the extinctionist AIs have dominating corporeal power.
Each of the above is highly improbable. In particular, the second step is easy to thwart: we have always managed one another through a mutual understanding of the dreams we hold for our individual futures. This is how governments stay in power, how the global economy functions, and how families command loyalty, love, and respect.
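To make the compounding concrete, here is a minimal back-of-the-envelope sketch in Python. The individual probabilities are purely illustrative placeholders (the argument above assigns no numbers); the point is only that independent improbable steps multiply into a vanishingly small joint probability.

```python
from math import prod

# Purely illustrative, hypothetical probabilities for each step in the chain;
# the essay assigns no numbers, so these stand in for the shape of the argument.
step_probabilities = {
    "AI agents develop free will": 0.1,
    "Humans fail to understand or steer AIs' motivations": 0.1,
    "Billions of AIs converge on a single shared agenda": 0.1,
    "That agenda is the elimination of humans": 0.1,
    "Humans and humanist AIs cede dominating corporeal power": 0.1,
}

# Treating the steps as independent, the joint probability is their product.
joint = prod(step_probabilities.values())
print(f"Joint probability of the full chain: {joint:.6f}")  # 0.000010, about 1 in 100,000
```

Independence is itself an assumption; correlated steps would change the arithmetic, but not the qualitative point that the scenario requires every improbable link to hold at once.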
In harnessing the unlimited potential of AI, and to protect ourselves, we need to care about only one question: What will AIs dream of?
Thanks to the Coseer team for their ‘lively’ debates on this issue, and especially to Lindsey Parker for helping me articulate these thoughts.