Concern is growing about a race to build powerful AI weapons. There is much speculation about how soon we might create artificial general intelligence (AGI) — a type of AI that could be smarter than humans and solve new problems on its own, not just the ones it was trained for. Many are writing about AI’s growing abilities, but research on its impact on global strategy is still lacking. A recent paper by Eric Schmidt and others adds to the debate, though some of its analysis falls short.
The AGI Debate & Strategic Preparation
Importance of AI Non-Proliferation
Questionable Comparisons: AI vs. Nuclear Weapons
Flawed Analogy: MAIM vs. MAD
| Concept | Explanation | Concerns |
|---|---|---|
| MAIM (Mutual Assured AI Malfunction) | Strategy to deter AI misuse, inspired by nuclear logic (MAD) | Misleading comparison; AI does not carry the same destructive certainty as nuclear weapons |
| MAD (Mutual Assured Destruction) | Cold War doctrine: a nuclear attack by one state ensures a devastating counterattack | Applies to physical weapons; ill-suited to decentralized technologies like AI |
| Destroying Rogue AI Projects | Proposal to sabotage terrorist or rogue AI initiatives | High risk of error, escalation, and unintended consequences |
| AI’s Decentralized Nature | AI is built by global teams across borders | Hard to pinpoint and attack without harming innocent or unintended targets |
| Sabotage as Strategic Deterrence | Authors support preemptive action against enemy AI | Could justify aggressive military action and increase global instability |
Key Risks of the MAIM Approach
Controlling AI Chips Like Nuclear Material: A Flawed Proposal
Key Differences Between Nuclear Materials and AI Chips
| Aspect | Nuclear Technology | AI Technology |
|---|---|---|
| Physical Resource | Needs an ongoing supply of enriched uranium | Needs powerful chips only for training, not for use |
| Centralization | Tightly controlled by states | Spread across companies, labs, and individuals worldwide |
| Traceability | Easier to monitor due to physical properties | Harder to track digital models and chip distribution |
| Control Feasibility | Relatively feasible with treaties and inspections | Very difficult due to the open and global nature of AI |
Questionable Assumptions in the Paper
Limits of Using Historical Analogies for AI Strategy
Takeaway for Policymakers
Need for More Scholarship
Conclusion
Countries can prepare for a future with superintelligent AI only by deepening research into how AI reshapes global strategy. Yet the key questions remain whether such AI will ever exist and when it might appear: at present we have no way of knowing what it could actually do, and that uncertainty will shape how policies are made.