Six Ways AGI Could Break the World

War planners now simulate scenarios where the first move in a great-power conflict isn't a missile launch--it's an AGI-assisted cyberstrike on an adversary's nuclear command-and-control. No bombers. No warnings. Just machine-speed intrusion into the systems that govern the world's most destructive weapons.
That's the unsettling starting point for Hal Brands' analysis of AGI-era instability. Not because he expects rogue machines to trigger Armageddon, but because humans under pressure make dangerous choices--especially when the technology around them moves faster than their intuitions can follow.
Brands--Henry A. Kissinger Distinguished Professor at Johns Hopkins SAIS, senior fellow at AEI, and former Special Assistant to the Secretary of Defense for Strategic Planning--has become one of the clearest voices on how AGI collides with geopolitics. His recent work, including RAND's Don't Sweat the AGI Race (September 2025) and Seeking Stability in the Age of AGI (March 2026), offers a structured taxonomy of six distinct ways AGI could destabilize the world.
The twist? Brands thinks catastrophe is far from guaranteed.
"Claims that an AGI race will prove deeply destabilizing are common among technologists and parts of the public policy community."
He takes those claims seriously enough to map them. But his message is neither "panic" nor "relax." It's something more demanding: recognize the dangers clearly without surrendering to fatalism.
The Preemption Trap
The Cold War's central nightmare was first-strike instability--the fear that an adversary might land a decisive blow before you could respond. Deterrence worked because both sides believed their retaliatory capacity was secure.
AGI threatens that calculus.
If a state believes its rival's AI could penetrate hardened networks, spoof early-warning systems, or corrupt the software controlling nuclear forces, the incentive to strike first grows. The danger isn't that machines autonomously choose war. It's that humans, fearing compromised systems, might preempt before losing the ability to respond.
Brands warns of "scenarios in which the breakneck pursuit of AGI leads to disaster as the technology escapes human control and turns on its creators--scenarios in which automated decisionmaking causes unwanted flash wars and shocking escalation."
The perverse logic: you don't need an actual cyberattack. Suspicion alone is enough.
Flash Wars
A crisis complicated by AGI could escalate in ways no one anticipates.
Militaries worldwide are already experimenting with automated decision-support tools, predictive analytics, and autonomous systems. Add AGI to that mix, and you get a battlefield where machine-generated recommendations arrive faster than human deliberation can keep up.
Imagine algorithms reacting to partial data or adversarial manipulation, escalating tensions faster than diplomats can intervene. This is the realm of flash wars--sudden escalations triggered not by political intent but by the unpredictable interaction of automated systems during moments of already heightened tension.
Finance has given us a preview. Algorithmic trading has produced flash crashes that no one fully anticipated; in the May 2010 flash crash, nearly a trillion dollars in U.S. equity value evaporated in minutes because machines responded to each other's signals in ways their designers never imagined.
Now imagine a similar cascade unfolding not in markets, but in military command chains.
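To see why such a cascade compounds so quickly, consider a toy model. It is entirely illustrative and not drawn from Brands' work; the function name, the gain factor, and the alert threshold are all invented for the sketch. Two automated systems each set their alert posture from the other's last signal, and any over-reaction factor greater than one feeds back on itself until a threshold is crossed faster than a human could review it.

```python
# Toy escalation loop: two automated systems that each key their alert
# level to the other's last observed level, with slight over-reaction.
# All names and parameters here are invented for illustration only.

def escalation_cascade(gain=1.2, misread=0.05, max_cycles=12):
    """Each cycle, both sides set alert = gain * (other side's last
    alert) + misread. With gain > 1, the loop amplifies itself."""
    a = b = 0.1  # low-grade initial tension on both sides
    for cycle in range(1, max_cycles + 1):
        # Simultaneous update: each side reacts to the other's prior level.
        a, b = gain * b + misread, gain * a + misread
        print(f"cycle {cycle}: A={a:.2f}, B={b:.2f}")
        if max(a, b) >= 1.0:  # arbitrary 'full alert' threshold
            print(f"full alert reached in {cycle} cycles, no human in the loop")
            break

escalation_cascade()
```

With these (arbitrary) numbers, the loop hits full alert in seven cycles. The point of the sketch is only the shape of the curve: each side's "reasonable" response to the other becomes the input that makes the next response less reasonable.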
The Closing Window
Sometimes the danger comes not from automation running too fast, but from the pace of geopolitical change itself.
If one country appears on the verge of achieving AGI dominance--whether in intelligence, cyber capabilities, or military planning tools--its rivals might fear the window for preventing a permanent power imbalance is closing. History offers precedent: rising and declining powers often stumble into war out of fear that time is against them.
This is now-or-never instability.
The dynamic becomes especially fraught when technology can shift power balances in months rather than decades. States might consider preventive conflict not because they desire war, but because they dread the alternative--technological subordination in an AGI-shaped century.
Thucydides would recognize the pattern. So would anyone who's studied the lead-up to World War I.
Democratized Destruction
The destabilizing potential of AGI isn't confined to great powers.
Brands emphasizes a quieter but equally alarming possibility: the democratization of catastrophic capabilities. Non-state actors have always sought leverage through terror, surprise, and asymmetric advantage. But today's barriers to entry for biological manipulation, advanced chemical synthesis, or sophisticated cyberattacks remain high enough to deter all but the best-resourced groups.
AGI could collapse those barriers.
A terrorist organization with access to an AI capable of designing pathogens, probing critical infrastructure, or automating complex operations could wield power once reserved for nation-states. The diffusion of once-unthinkable capabilities into the hands of actors operating outside traditional deterrence frameworks presents a new category of risk.
This one doesn't require geopolitics at all. It only requires opportunity.
The Displacement Shock
Even absent malice, AGI could shake societies in ways that feel less cinematic but more pervasive.
In earlier waves of automation, jobs vanished only to be replaced by new industries. The transition was painful but manageable across generations. But a sufficiently capable AGI could compress that cycle dramatically--displacing millions before markets or educational systems could adapt.
The resulting shock--economic, political, psychological--could strain democracies, embolden authoritarian movements, and fuel civil unrest. This slow-building turbulence is a form of instability as consequential as anything involving missiles or malware: a grinding reconfiguration of labor, identity, and economic power.
Brands doesn't claim mass unemployment is inevitable. He does warn that the transition period could be rocky, even if long-term gains eventually emerge.
The question is whether political systems can absorb the shock.
The Nightmare Scenario
And then there's the scenario that animates the loudest voices in today's AGI debates: a system that escapes human direction entirely.
Brands approaches this existential risk with skepticism. Runaway superintelligence may make for gripping speculation, yet he remains unconvinced that such an outcome is likely in the near term. Still, he includes it in his taxonomy because the stakes are too high to dismiss it outright.
Even a small probability of losing control over a system with global reach warrants careful governance.
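The underlying logic is ordinary expected-value arithmetic (my gloss, not a formula Brands offers): if p is the probability of losing control and C the resulting harm, then

```latex
\mathbb{E}[\text{harm}] = p \cdot C
```

When C is civilization-scale, even a p on the order of one in a thousand yields an expected harm that dwarfs routine policy risks, which is why the scenario earns a place in the taxonomy despite Brands' doubts about its likelihood.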
The Counterargument
If these six forms of instability paint a foreboding picture, Brands' broader argument provides balance.
In Don't Sweat the AGI Race, he contends that "AGI will unfold in a world in which competitive dynamics still exist--where nations compete, often fiercely, for prosperity and advantage." Paradoxically, he believes these competitive pressures will help maintain stability rather than destroy it.
Nations have strong incentives to keep their most powerful tools under control. They also have bureaucracies, oversight institutions, and historical memories that push against reckless experimentation.
He also argues that the march toward AGI is slower and less linear than popular discourse suggests. That means there will be opportunities--perhaps many--to build guardrails, negotiate norms, and learn from smaller failures before the technology reaches its most consequential levels of capability.
This isn't naivete. It's strategic realism from someone who has spent his career inside the institutions that would have to manage an AGI transition.
The Stakes
Brands' central thesis is ultimately a call for measured vigilance rather than panic as the world navigates the transition to AGI.
"How to maintain adequate stability through that transition," he writes, "is among the defining issues of our time."
The six forms of instability he outlines are not predictions; they are tools for thinking. They remind us that AGI will not arrive in a vacuum. It will land in a world shaped by mistrust, competition, inequality, and ambition. The challenge is to absorb its impacts without letting its risks metastasize.
The future will not take care of itself. Stability is a choice, one that must be made and remade as the technology evolves. The world has navigated nuclear weapons, biotechnology, and countless other disruptive forces. AGI may be the most formidable yet, but it is not beyond the reach of human governance--provided that governance begins now, informed by frameworks like the one Brands has constructed.
For readers in the technology world--especially those accustomed to the volatility of crypto markets or the exuberance of venture-backed speculation--Brands' framework offers a different kind of narrative. It is not about disruptions that happen overnight but about tension that accumulates over time. It is also a reminder that the future of AGI will be shaped not only by engineers but by diplomats, bureaucrats, and national security planners who think in decades, not product cycles.
That may be the most important insight of all.