
 Editorial 2: A closer look at strategic affairs and the AI factor

Context

Research on how AI affects global strategy remains very limited, and we currently have no way of knowing what a superintelligent AI might actually be capable of.


Introduction

Concern is growing about a race to build powerful AI weapons. There is much speculation about how soon we might create artificial general intelligence (AGI), a type of AI that could be smarter than humans and solve new problems on its own, not just the ones it was trained for. Much is being written about AI's growing abilities, but research on its impact on global strategy is still lacking. A recent paper by Eric Schmidt and others adds to the debate, though some of its analysis falls short.


The AGI Debate & Strategic Preparation

  • Whether AGI (Artificial General Intelligence) is near remains uncertain and hotly debated.
  • Schmidt, Hendrycks, and Wang argue that states must be ready to handle the risks of AGI if it becomes a reality.
  • This includes preparing for security threats and global competition tied to advanced AI.


Importance of AI Non-Proliferation

  • A RAND commentary agrees that AI non-proliferation — keeping powerful AI away from bad actors — is crucial.
  • It highlights the global risk if dangerous AI tools fall into the wrong hands.
  • The idea draws inspiration from past nuclear arms control efforts.


Questionable Comparisons: AI vs. Nuclear Weapons

  • The authors compare AI risks to nuclear weapons, especially through the concept of MAIM (Mutual Assured AI Malfunction).
  • This comparison is flawed, as AI differs greatly in how it’s built, used, and spread.
  • Unlike nuclear arms, AI is decentralized and collaborative, not confined to national labs.


Flawed Analogy: MAIM vs. MAD

| Concept | Explanation | Concerns |
| --- | --- | --- |
| MAIM (Mutual Assured AI Malfunction) | Strategy to deter AI misuse, inspired by nuclear logic (MAD) | Misleading comparison; AI lacks the destructive certainty of nuclear weapons |
| MAD (Mutual Assured Destruction) | Cold War doctrine: a nuclear attack by one state ensures a devastating counterattack | Applies to physical weapons; not suitable for decentralized technologies like AI |
| Destroying rogue AI projects | Proposal to sabotage terrorist or rogue AI initiatives | High risk of error, escalation, and unintended consequences |
| AI's decentralized nature | AI is built by global teams across borders | Hard to pinpoint and attack without harming innocent or unintended targets |
| Sabotage as strategic deterrence | Authors support preemptive action against enemy AI | Could justify aggressive military action and increase global instability |

Key Risks of the MAIM Approach

  • Oversimplifying AI as a weapon may lead to poor strategic decisions.
  • Encouraging sabotage or preemptive strikes based on imperfect intelligence could worsen conflicts.
  • Policies based on flawed analogies like MAIM risk promoting militarized responses to complex, tech-driven threats.


Controlling AI Chips Like Nuclear Material: A Flawed Proposal

  • The authors suggest controlling the distribution of AI chips in the same way enriched uranium is regulated for nuclear weapons.
  • But this analogy doesn't work well because:
    • Once trained, an AI model no longer needs continued access to specialized chips, whereas a nuclear program depends on an ongoing supply of enriched uranium.
    • AI chip supply chains are harder to track and control, making enforcement difficult.


Key Differences Between Nuclear Materials and AI Chips

| Aspect | Nuclear Technology | AI Technology |
| --- | --- | --- |
| Physical resource | Needs an ongoing supply of enriched uranium | Needs powerful chips only for training, not for use |
| Centralization | Tightly controlled by states | Spread across companies, labs, and individuals worldwide |
| Traceability | Easier to monitor due to physical properties | Harder to track digital models and chip distribution |
| Control feasibility | Relatively feasible with treaties and checks | Very difficult due to the open and global nature of AI |

Questionable Assumptions in the Paper

  • The authors assume AI-based bioweapons and cyberattacks are inevitable without early state intervention.
    • This is a worst-case scenario without clear supporting evidence.
    • While AI could lower the barriers to cyberattacks, there is as yet no evidence that it warrants treatment as a weapon of mass destruction.
  • Another assumption: AI development will be led by states.
    • In reality, the private sector currently leads AI innovation.
    • Governments often adopt AI after it is developed by private firms, especially in defense or security.


Limits of Using Historical Analogies for AI Strategy

  • Comparing AI to nuclear weapons can be misleading for policy planning.
  • Though drawing from history is useful, AI operates differently:
    • It is developed, distributed, and deployed in ways that don’t resemble nuclear tech.
  • Assuming deterrence strategies used in the nuclear era will work for AI may lead to wrong policy choices.


Takeaway for Policymakers

  • AI is dynamic, decentralized, and evolving rapidly — unlike nuclear weapons.
  • Policymakers need to build new frameworks for AI governance rather than rely on outdated models.
  • Historical analogies may help guide thinking but shouldn’t shape full strategies for handling future AI threats.


Need for More Scholarship

  • We need better examples and models to understand how AI fits into global strategy.
  • One possible model is the General Purpose Technology (GPT) framework, an economics concept (distinct from the GPT family of language models) that explains how powerful technologies spread across different sectors and become key to a country's strength.
  • AI could be seen through this lens, but it doesn’t fully fit the GPT model right now.
  • This is because current AI tools like large language models (LLMs) still have big limitations.
  • These models are not yet advanced enough to spread and impact all sectors the way true GPTs do.


Conclusion

The only way countries can prepare to deal with superintelligent AI in the future is by doing more research on how AI affects global strategy. The key questions, however, are whether such AI will ever exist and when it might appear: right now we have no way of knowing what it could actually do, and that uncertainty will shape how policy is made.