The idea that AIs can become superintelligent1 and kill us all is gaining momentum among AI pundits. It’s shiny. It’s appealing. It’s super fun to write about, and even more fun to watch in movies.
Superintelligence is sexy because it’s the romantic fantasy enemy you wish you had.
AI poses a number of threats (and opportunities, to be sure) varying in urgency and potency. On that scale, I’m pretty convinced superintelligence is a red herring.
I tend to agree with machine learning pioneer Andrew Ng, who famously said, “I don’t work on preventing AI from turning evil for the same reason that I don’t work on the problem of overpopulation on the planet Mars.”2
Let’s be really clear here: superintelligence is just not the most pressing problem we face. AI causes real problems for real people today, but not because of superintelligence. Those problems are darker, more urgent, more relevant, and they demand more attention.
The real threats from AI are more like having a house with hidden mold. Cleaning it up is not sexy at all. It would make a terrible plot for an action movie. Given the choice, I totally understand why pundits and the journalists who follow them play up the superintelligence tale.
Ignoring mold can eventually make a house unlivable. Likewise, the AI threats we live with today are more of a clear and present danger to humanity than evil killer robots are.
We love new technology. Usually, when it’s new, we don’t understand it, and we may even suspect the fun comes at a price. We want the good parts and wish to minimize the harm. How good are we at sorting out the warning signs? Let’s take a look at another cherished technology.
Automobiles have brought benefits and threats to our civilization. The nature of how we perceive those threats has evolved over time.
For nearly a century after the invention of the automobile, people saw the dominant threat play out mostly in terms of highway safety. We sought security through legal battles and engineering breakthroughs: crumple zones, traffic signals, seat belts, antilock brakes, and fixes for exploding gas tanks.
In the latter half of the century, our love for automobiles gradually turned our attention to threats to our oil supply, mostly in the Middle East. We sought security through complex geopolitics, and sometimes wars, to protect our dependence on petroleum.
Finally, after so much effort on the first two threats, we are only recently coming to terms with climate change. Global environmental disaster is a latecomer to the threat board. It has been largely ignored for a century.
Even mold can destroy the place where you live, but not because of its intelligence.
Cancerous stupidity is the mundane but very real enemy we face today. The threat is not that AIs are too smart for us; it’s that they are profoundly stupid while we trust them like geniuses. Like cancer, this stupidity can spread and metastasize.
Today, AI is enormously popular because we use it to automate decision-making at industrial scale. When you combine enormous scale, flawed outcomes, and uncritical acceptance, the spread of stupidity is hard to stop.
AI relies on a vivid illusion of trustworthiness, when the algorithms are anything but. As Douglas Hofstadter said, they are “not just clueless but cluelessly clueless.”3 They have no idea, he says, that they have no idea.
In many useful domains AI has demonstrated great competence. Often an AI can be even more competent than the humans who trained it. But extreme competence is not the same as perfection, and competence in some domains does not guarantee competence in others.
AIs are terribly brittle and error-prone under certain conditions, and these errors can have catastrophic consequences, especially for the individuals they affect. AIs behave poorly when they encounter situations their training data never prepared them for.
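To make that overconfidence concrete, here is a minimal sketch of my own (not an example from any cited source), assuming scikit-learn is available; the toy data and variable names are purely illustrative. A simple classifier trained on two clean clusters will report near-total confidence on an input wildly unlike anything it was trained on.

```python
# A toy sketch of "cluelessly clueless": a model reports near-certain
# confidence on an input nothing like its training data.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Train a simple classifier on two clean, well-separated clusters.
X, y = make_blobs(n_samples=200, centers=[[-2, 0], [2, 0]],
                  cluster_std=0.5, random_state=0)
clf = LogisticRegression().fit(X, y)

# An input far outside anything in the training data (hypothetical example).
weird_input = np.array([[80.0, 90.0]])

# The model has never seen anything remotely like this point, yet its
# reported confidence is essentially 100%. It has no idea that it has no idea.
probabilities = clf.predict_proba(weird_input)[0]
print(f"predicted class: {clf.predict(weird_input)[0]}, "
      f"confidence: {probabilities.max():.4f}")
```

Real failures are messier than this toy, but the mechanism is the same: the model’s reported confidence tells you nothing about whether the input resembles anything it has seen before.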
Not only are AIs limited, but we are setting ourselves up for failure. We put our full trust in AIs without adequately protecting ourselves for when they inevitably fail us.
All of these problems are currently unsolved, and possibly unsolvable for the foreseeable future. We must address them before they can do us harm at industrial scale. We really don’t have to worry about evil superintelligence until well after these problems are behind us.
When AIs fail, the consequences are corrosive to our society because they corrupt the information we rely on to make good decisions. The more important these decisions become, the more likely it is that a single mistake will have a catastrophic effect on an individual.
AI threats are not an external enemy to fear and defend against; they are insider threats, woven into the fabric of our civilization and already growing within the systems we rely on. The safeguards we seek should not be a Maginot Line against a fantasy invader. We need to take stock of the corruption we see today, understand the ways in which AIs can and do betray our trust, and mount informed countermeasures.
Bostrom, N. Superintelligence: Paths, Dangers, Strategies. Oxford University Press. 2014.
Ng, A. What’s Next in Deep Learning. GPU Technology Conference (quotation starts at 1h:02m of 2h:06m). 2015.
Hofstadter, D. Artificial neural networks today are not conscious. The Economist. June 8, 2022.
Bender, E., et al. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. doi:10.1145/3442188.3445922. 2021.
Image credit: Crystal de Passillé-Chabot.