Facing Catastrophic AI Risk: Lessons from the Nuclear Age
- Yuna Noh
- Jul 12
- 1 min read
AI poses catastrophic, even existential, risks. These risks can manifest in various ways: bad actors jailbreaking or stealing powerful AI models for harmful purposes; the development and proliferation of AI-enabled weapons of mass destruction; catastrophic outcomes from losing control over autonomous weapons; and the breakdown of critical infrastructure systems powered by AI. It may be worth learning from the nuclear playbook. However, the nuclear analogy captures some of these dynamics and misses others.
International governance was paramount in the nuclear age. Nuclear weapons were contained through treaties and direct negotiations between great powers. Catastrophic risks were managed, although not eliminated, through mutual deterrence and safety protocols.
Comparable principles should govern AI, with adaptations: dialogue between the great AI powers, the United States and China, is critical for establishing international standards and implementing safety-enhancing mechanisms; multi-stakeholder governance is essential; and universal safety measures, such as cryptographic controls, must be adopted. The nuclear industry’s liability model, which assigns liability to operators, sets liability caps, creates multi-tiered insurance pools, and requires mandatory insurance with a government backstop, offers important insights for managing advanced AI risks.
However, while the nuclear analogy is instructive, it is not complete. AI is built from computer code, not uranium, and its proliferation is far harder to stem. Models can be duplicated millions of times and disseminated globally at the speed of the internet, with far less centralized control. Unlike nuclear weapons, AI is already deeply integrated into civilian life. The AI arms race therefore differs fundamentally from the nuclear race and poses unique challenges. Mitigating catastrophic risk from AI will require innovative regulatory frameworks and much broader coalitions.