In the 1970s, I was in my 20s and locked in a long-running play-by-mail war game with a group of friends. It was one of those sprawling, imagination-fueled strategy games—light on rules, heavy on consequences. My fictitious country wasn’t the biggest or the boldest, but it was crafty.
So I developed a system. A weapon, really. One that could destabilize a rival country’s environment—wreck its ecosystem, disrupt its weather, throw its food supply into chaos. In game terms, it was brilliant. In real terms, it was terrifying.
There was only one catch: the game required me to deploy this system via intercontinental ballistic missiles. In other words, everyone knew when you pulled the trigger. There was no sneaking around with ICBMs. Launch one, and you were automatically the villain.
But I didn’t want to go loud. I wanted deniability. I wanted covert disruption—the kind that left your opponent blaming nature while their crops died and their rivers turned to sludge.
Back then, that was pure sci-fi.
Now? That’s Tuesday.
The New Battlefield Isn’t on a Map
Fast forward to today, and the rules of engagement have changed—but not in our favor. We’ve traded maps for servers, missiles for algorithms, and battlefield victories for silent disruptions that leave no fingerprints. The kind of covert environmental manipulation I once imagined in a game is now an emerging reality, thanks to artificial intelligence.
This isn’t speculation. It’s where the technology is heading, and where national leaders are steering it.
What AI Can Do That Missiles Never Could
Let’s start with cyber-physical attacks. AI systems can now coordinate intrusions into water treatment plants, dam control systems, or electric grids. Not to cause explosions—but to nudge systems toward failure at just the right moment. A minor flood becomes a major disaster. A brownout becomes an agricultural collapse.
Then there’s bioengineering, now turbocharged by machine learning. Need to target a specific wheat strain in a rival’s breadbasket? AI can help you design a virus, mold, or pest that does just that. Quietly. Discreetly. Without triggering NORAD.
Want plausible deniability? Enter the age of synthetic weather: geoengineering tools (still on the fringe, for now) enhanced by AI-driven climate models. It’s one thing to study how to steer rainfall. It’s another to quietly weaponize it.
And if you’d rather not get your hands dirty at all, there’s always AI-powered disinformation. Flood a population with fake reports of contaminated soil, poisoned water, or radiation leaks. Watch markets crash, communities evacuate, and supply chains seize—without ever firing a shot.
The Real Enemy May Look Like Mother Nature
This is the part where people scoff. Surely AI can’t do all that yet. But here’s the truth: it doesn’t need to do it all today. It just needs to be on that trajectory. And it is.
What I imagined as part of a war game strategy in my twenties has evolved into an actual strategic doctrine. The idea of environmental warfare is no longer science fiction—it’s science reality, quietly researched and carefully couched in terms like “non-kinetic effect,” “infrastructure disruption,” and “gray zone conflict.”
Generative AI can create. Advanced generative AI can coordinate, manipulate, and act. That’s what makes quiet, deniable disruption not just possible but operational. And AGI is almost here.
But I remember the old war game. I remember how hard it was to control the blowback once you went down that road. The longer you play, the more blurred the lines get between clever and catastrophic.
Ethics and Guardrails Aren’t Optional
Here’s the kicker: now that the technology exists, it’s not just a matter of what’s possible—it’s a matter of what’s allowed. And right now, there’s precious little standing between ambition and action. No Geneva Convention for AI. No global guardrails for covert climate disruption. Just a patchwork of vague policy, corporate promises, and hope.
Ethics can’t be an afterthought. Not when algorithms can destabilize nations without anyone pulling a trigger. Not when weaponized code can alter environments, perceptions, and economies with a few lines of well-placed instruction. We need rules. We need oversight. And we need it now, not after the damage is done.
Because if we don’t build in the guardrails now, someone else will test the boundaries for us—and they won’t be using a game board.
The scary thing isn’t that AI might be used this way. It’s that we may not even realize it’s being used this way—until it’s too late. The dam bursts, the crops fail, the power flickers out—and no one knows who did it, or even if anyone did.
And that’s the genius—and danger—of it.
So here we are, in 2025, staring down a future that was once just a game on my dining room table. Only this time, the pieces move themselves.
And they don’t always ask permission.