I Think You Should Be More Worried About A.I.
I've become a "single-issue voter."
Which is wild, because there's a lot going on in the world right now - SO many issues that drive me to anger or despair.
But the next time I vote - at any level - I will be voting primarily based on one issue, and one issue only:
AI safety.
And I think you should, too.
If that seems silly to you, I get it. I used to think that AI safety concerns were mostly overblown, a sci-fi fantasy.
I no longer believe that.
I don't expect everyone to vote solely based on AI safety issues, but I would like AI safety to become a major concern of yours the next time you go to the polls.
So - here are the arguments that convinced me.
---
THE TECH ALREADY EXISTS
I used to think that AI safety was a future thing - something that required not-yet-existent tech to be a worry.
But it isn't. In fact, all the most important parts of AI disaster scenarios already exist.
Right now, we have AI agents (semi-autonomous, reasoning AI) becoming increasingly capable at longer programming tasks, as well as AI agents capable of using software tools. Not only do these agents exist, but their rate of improvement continues to accelerate. They are not just getting better, they are getting better, faster.
This acceleration is critical to most AI disaster scenarios. Why?
AI agents getting very good at coding means that AI has the tools to improve itself. This unlocks an intelligence acceleration loop: AI engineers - who never sleep, never stop, and, though not as skilled as the very best human engineers, can be copied without limit - accelerate AI advancement. That faster advancement, in turn, produces better AI engineers, which produces even faster advancement. This is a direct path to what Nick Bostrom popularized as "Superintelligence": AI that is not just smarter than any individual human, but smarter than every human on earth put together.
If that sounds far-fetched, consider that AI models we already possess are more intelligent than humans in many areas.
For instance, a study recently published in Nature showed that AI models not only surpassed human doctors at disease diagnosis, they did so by a wide margin. The models also significantly outperformed doctors who were working with AI assistance. This is superhuman performance.
Increasingly, wherever AI is specifically trained on a task, it outperforms humans at that task. The primary limitation for AI right now is cognitive flexibility, or the ability to shift strategies beyond its trained domain. Significant evidence suggests that flexibility is developing, as seen in AI agents becoming better at troubleshooting their own problems.
In short:
AI is already better than humans at anything you train it on. Once AI gets good at accelerating its own progress, we could see a quantum leap in intelligence for which there is no precedent.
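To make that feedback loop concrete, here is a toy sketch in Python. It is purely illustrative: the growth numbers are invented and nothing here is a forecast - the point is only the difference in shape between progress that arrives at a fixed rate and progress that feeds back into itself.

```python
# Toy sketch of a self-improvement feedback loop. Illustrative only: the
# numbers are invented and carry no predictive weight.

def fixed_progress(generations=10, step=0.5):
    """Progress arrives at a constant rate: better, but not better-faster."""
    capability, history = 1.0, []
    for _ in range(generations):
        capability += step
        history.append(round(capability, 2))
    return history

def feedback_progress(generations=10, feedback=0.5):
    """Each gain feeds the next one: better, faster."""
    capability, history = 1.0, []
    for _ in range(generations):
        capability += feedback * capability   # improvement scales with current capability
        history.append(round(capability, 2))
    return history

print(fixed_progress())     # 1.5, 2.0, 2.5, ... 6.0   (linear)
print(feedback_progress())  # 1.5, 2.25, 3.38, ... ~57.67 (compounding)
```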
HUMAN BEINGS HAVE A VERY BAD TRACK RECORD AT THIS KIND OF THING
The second reason for concern comes from Game Theory.
History has shown that when enormous power goes to the winner of an arms race, safety considerations often become secondary.
We saw this with nuclear weapons: everyone, including the people building the nuclear weapons, knew they were building something bad. They all knew they were ushering in a new age of world history, and not one that made everyone better off.
So why did they do it? To beat the other guys. The only thing worse than inventing nuclear weapons was being the second country to do it.
And being first did pay off. We won the Second World War. We ushered in an age of American hegemony. Our primary enemy for decades? The only other country with nuclear weapons. None of this was coincidence.
AI - and the superintelligence that may emerge from AI development - is subject to the same "arms race" dynamics.
The U.S. and China are locked in a race to develop increasingly powerful models, ultimately aiming for artificial general intelligence (AGI), which each side hopes will give it a lasting (potentially permanent) advantage. The winner of the AI arms race could use it to develop better weapons, sabotage enemy computer systems, make better strategic decisions, massively expand its economy, and so on. Hitting AGI even a few months ahead of everyone else could confer a massive political, military and economic advantage. To the victor go the spoils.
Many wish that the US and China could come to some agreement to slow or even halt AI development below a certain threshold. The problem is that the dynamics of that situation make such an agreement very difficult. If either side defects or breaks the rules, that player gains significantly. This makes cooperative agreements hard to achieve and even harder to maintain: each party has every incentive to break the rules and keep developing in secret.
In game theory exercises like the Prisoner's Dilemma, trust emerges only when you keep playing the game over and over with the same people. That isn't the case here: someone will be first, and someone will be last, and then the game will be over.
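Here is a minimal sketch of that one-shot logic in Python, using standard textbook Prisoner's Dilemma payoffs rather than anything measured about the actual US-China situation. Whatever the other side does, defecting scores higher in a single round - which is exactly why a one-shot race resists cooperation.

```python
# One-shot Prisoner's Dilemma with standard textbook payoffs (illustrative values).
# Key: (my move, their move) -> my payoff. Higher is better for me.
PAYOFF = {
    ("cooperate", "cooperate"): 3,   # both honor the agreement
    ("cooperate", "defect"):    0,   # I hold back, they race ahead
    ("defect",    "cooperate"): 5,   # I race ahead in secret
    ("defect",    "defect"):    1,   # both race, both worse off
}

def best_response(their_move):
    """Whatever the other side does, defecting pays more in a single round."""
    return max(("cooperate", "defect"), key=lambda mine: PAYOFF[(mine, their_move)])

for their_move in ("cooperate", "defect"):
    print(their_move, "->", best_response(their_move))  # both print 'defect'
```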
When such dynamics are at play, safety concerns get brushed aside as "slowing progress" and "ceding territory to the enemy."
THE FUTURE DOESN'T NEED TO BE EVIL, IT JUST NEEDS TO BE WEIRD
Another critical point is that we don't need an evil AI overlord bent on destruction to experience massive disruption or damage from AI. In fact, the more likely scenario is far more banal, but just as dangerous: shit gets weird.
The study of complex systems shows that the more interconnected components a system has, the more unpredictable it becomes. As AI systems increasingly interact with one another on their own, that unpredictability will rise. Predicting - or even explaining - the weirdness that emerges from those interactions will be effectively impossible.
AI operates essentially inside a black box - no one fully understands what is happening inside it, or what leads to a specific outcome. We built the mechanisms that produce those outcomes, but how they actually work is a mystery - much the same way we can dissect a human brain and still not understand why any particular "thought" appears in your mind (or even what a "mind" is!).
This lack of transparency guarantees bizarre and potentially damaging outcomes with no real recourse.
One example of this was the 2010 Flash Crash on Wall Street, which occurred because automated trading systems interacted in unexpected ways.
From Wikipedia:
Stock indices, such as the S&P 500, Dow Jones Industrial Average and Nasdaq Composite, collapsed and rebounded very rapidly. The Dow Jones Industrial Average had its second biggest intraday point decline (from the opening) up to that point, plunging 998.5 points (about 9%), most within minutes, only to recover a large part of the loss. It was also the second-largest intraday point swing (difference between intraday high and intraday low) up to that point, at 1,010.14 points. The prices of stocks, stock index futures, options and exchange-traded funds (ETFs) were volatile, thus trading volume spiked. A CFTC 2014 report described it as one of the most turbulent periods in the history of financial markets.
And that wasn't even driven by AI! The systems behind this disruption were simple in comparison.
Humans do a poor job of understanding complexity and consistently underestimate how unpredictable things can become, and how quickly. The classical example of this is the three-body problem:
In physics, specifically classical mechanics, the three-body problem is to take the initial positions and velocities (or momenta) of three point masses that orbit each other in space and calculate their subsequent trajectories using Newton's laws of motion and Newton's law of universal gravitation.
Unlike the two-body problem, the three-body problem has no general closed-form solution, meaning there is no equation that always solves it. When three bodies orbit each other, the resulting dynamical system is chaotic for most initial conditions. Because there are no solvable equations for most three-body systems, the only way to predict the motions of the bodies is to estimate them using numerical methods.
In other words, two bodies interacting is predictable. Three bodies? Chaotic and essentially unpredictable. That's how fast complexity takes hold.
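Here is a rough numerical sketch of that sensitivity, with toy units and made-up starting positions (nothing about it is physically calibrated). It runs the same three-body system twice, the second time with one coordinate nudged by a billionth, and then measures how far apart the two runs end up.

```python
# Rough three-body sketch (toy units, invented starting positions). Two runs
# that differ by a one-in-a-billion nudge to a single coordinate are compared
# at the end; for most starting conditions the gap grows far beyond the nudge.
G, DT, STEPS = 1.0, 0.001, 20000

def simulate(nudge=0.0):
    # Each body: [x, y, vx, vy, mass]
    bodies = [
        [ 1.0 + nudge, 0.0,  0.0,  0.5, 1.0],
        [-1.0,         0.0,  0.0, -0.5, 1.0],
        [ 0.0,         1.0, -0.5,  0.0, 1.0],
    ]
    for _ in range(STEPS):
        forces = []
        for i, (xi, yi, _, _, mi) in enumerate(bodies):
            fx = fy = 0.0
            for j, (xj, yj, _, _, mj) in enumerate(bodies):
                if i == j:
                    continue
                dx, dy = xj - xi, yj - yi
                r = (dx * dx + dy * dy) ** 0.5
                f = G * mi * mj / (r * r + 0.01)   # softened so close passes don't blow up
                fx += f * dx / r
                fy += f * dy / r
            forces.append((fx, fy))
        for body, (fx, fy) in zip(bodies, forces):
            body[2] += fx / body[4] * DT   # update velocity
            body[3] += fy / body[4] * DT
            body[0] += body[2] * DT        # update position
            body[1] += body[3] * DT
    return bodies[0][0], bodies[0][1]      # final position of the first body

ax, ay = simulate()
bx, by = simulate(nudge=1e-9)
print(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5)   # compare this gap to the 1e-9 nudge
```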
I don't need to look that far afield to find examples of this in our current tech ecosystem, however.
I run a Google Ads agency. A few years ago, Google began automatically suspending all our new client accounts, citing "suspicious payment" errors. However, no actual suspicious payments existed, and many accounts hadn't even processed any transactions yet.
Despite contacting Google repeatedly, we never got an explanation, because the suspensions were algorithmically driven and no human could say why they were happening. Even speaking directly with the people who designed the algorithm might not have provided answers: systems like these, much like modern AI models, cannot simply be opened up and read. You can set up all the rules, know every single variable that goes into the box, and still be unsure of the reasons behind a given output.
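As a toy illustration of that point - a made-up miniature neural network standing in for a real decision system, not anything resembling Google's actual code - here is a model whose every parameter can be printed in full, and still none of those numbers reads as a "reason" for its output.

```python
import numpy as np

# Toy illustration only: a tiny network with random, invented weights standing
# in for a real decision system. Every number behind the decision is visible,
# yet none of them is a human-readable "reason".
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # first layer weights
W2 = rng.normal(size=8)        # second layer weights

def decide(features):
    """Turn four input numbers into a single approve/deny score."""
    hidden = np.maximum(0, features @ W1)   # ReLU hidden layer
    return float(hidden @ W2)

applicant = np.array([0.2, 1.5, -0.3, 0.8])   # hypothetical feature values
print("score:", decide(applicant))
print("every parameter that produced it:")
print(W1)
print(W2)
```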
When countless AI "black boxes" interact, the uncertainty and volatility in the world increase substantially. Decisions like getting flagged for a Google Ads suspension, having a mortgage denied or getting a job interview become completely unintelligible. These outcomes are unpredictable, uncontrollable, and incomprehensible, resembling the whims of ancient gods more than logical, modern systems. Nietzsche wept.
WHAT'S THE HARM?
"OK Dan, I get it," you may be saying. "AI has problems. But so does every other technology. Economies get disrupted by all kinds of things, and every step forward technologically also increases the amount of complexity in our lives. But you weren't up in arms about those, so what makes this different?"
Currently, we are in a race to automate as much of our economy as possible, and the trend is accelerating. We're increasingly entrusting critical business and personal functions to unpredictable AI systems. People use AI as a therapist and as a romantic partner. There is even speculation that some of the math behind Trump's recent tariff plans was produced by ChatGPT.
This is just the beginning. AI is increasingly making or informing real-world business decisions. It's writing code - soon, its own code. Startups are racing to bring AI into the world of manual labor and manufacturing. AI is perhaps 1-2 generations away from being able to help a regular person produce a biological weapon.
This is all happening right now. Not "in the future." And we haven't even touched on what superintelligence might mean (you can see that scenario spelled out in the harrowing yet research-backed https://ai-2027.com/).
Right here, right now, increasingly large swaths of our economy and lives are being placed into a gigantic accelerating black box.
Where could that end up? Anywhere from total personal liberation, soaring GDP and cancer cures to the end of all human life.
The takeaway here isn't that we must halt AI development or fear it.
Instead, we must take AI safety issues seriously. This is something every politician needs to understand thoroughly and address proactively. It requires significant investments of time, resources, and attention.
Please give it some of yours.
Dan