Published 2026-01-15 | Version v1.0
Working Paper · Open Access · Published

When Decapitation No Longer Matters

AI-Delegated Execution and the Potential Failure of Preemptive Strike Logic

Description

This working paper examines how AI-enabled delegated execution can undermine the risk-reduction logic of preemptive strike. It argues that preemptive strike depends on a disruptable human decision bottleneck, and that when retaliatory execution is pre-authorized, institutionally insulated, and no longer contingent on leadership survival, decapitation loses strategic leverage. The paper develops the concept of decapitation irrelevance and reframes deterrence stability around pre-crisis institutional commitment rather than crisis-time leader discretion.

Abstract

Preemptive and preventive strike doctrines in international relations are commonly understood as strategies of risk reduction, premised on the belief that striking first can suppress or prevent future retaliation. This logic presumes that diverse target sets—military forces, leadership nodes, and critical infrastructure—are strategically substitutable within a unified framework of early violence. This article argues that such substitutability rests on an underexamined structural condition: the existence of a disruptable human decision bottleneck whose removal can meaningfully alter retaliatory execution. Decapitation functions as a necessary enabling condition within preemptive strike logic even when it is not the sole objective. When leadership disruption no longer affects the probability, scale, or certainty of retaliation, preemption forfeits its defining function as risk reduction and collapses into reciprocal destruction. This condition is increasingly undermined by AI-enabled delegated execution. When retaliatory execution is pre-authorized and institutionally insulated from real-time human intervention, killing leaders no longer alters strategic outcomes—a condition this article terms decapitation irrelevance. Contrary to prevailing concerns in AI governance scholarship, this transformation does not entail automated decision-making but rather a reconfiguration of commitment structures. The article concludes that deterrence stability in AI-integrated environments depends less on crisis-time restraint than on institutional architectures of pre-commitment established before conflict begins.

Keywords

  • Preemptive strike
  • Preventive strike
  • Decapitation
  • Decapitation irrelevance
  • AI-delegated execution
  • Delegated execution
  • Deterrence theory
  • AI and strategic stability
  • Nuclear deterrence
  • Pre-commitment
  • Decision bottleneck
  • Leadership decapitation
  • Retaliatory execution
  • Crisis stability
  • International security
  • Strategic risk
  • AI governance
  • Human-in-the-loop
  • Institutional commitment
  • Preemptive strike logic

Subjects

  • Strategic Studies
  • International Security
  • Deterrence Theory
  • AI-Enabled Conflict
  • Nuclear Strategy
  • Crisis Stability
  • Preemptive and Preventive Strike
  • AI Governance
  • International Relations Theory
  • Security Studies

Recommended citation

Wu, Shaoyuan. (2026). When decapitation no longer matters: AI-delegated execution and the potential failure of preemptive strike logic (EPINOVA Working Paper No. EPINOVA–WP–F–2026–01). Global AI Governance and Policy Research Center, EPINOVA LLC. https://doi.org/10.5281/zenodo.18252768 (Crossref DOI to be assigned after Crossref membership approval).

Alternate identifiers

  • DOI: 10.5281/zenodo.18252768 (Zenodo/DataCite DOI shown in the PDF recommended citation)
  • DOI: 10.5281/zenodo.18252767 (DOI recorded in the early ORCID-derived metadata; retained as a discrepancy note for reconciliation)
  • ORCID put-code: 202494938 (ORCID Public API record identifier from early metadata)
  • EPINOVA working paper number: EPINOVA–WP–F–2026–01 (working paper number shown on the PDF title page and in the running header)
  • File name: When Decapitation No Longer Matters AI-Delegated Execution and the Potential Failure of Preemptive Strike Logic(EPINOVA–WP–F–2026–01).pdf (source PDF file name)
  • Short title: When Decapitation No Longer Matters (short form of the working paper title)

Related works

  • Related EPINOVA work on machine-speed OODA, human control, and strategic stability in AI-enabled warfare (DOI: 10.5281/zenodo.18089642)
  • Related EPINOVA work on algorithmic warfare, human role migration, IHL, and accountability (DOI: 10.5281/zenodo.18088850)
  • Related EPINOVA work on uncertainty, strategic stability, and unmanned systems under adversarial inference (DOI: 10.5281/zenodo.18081107)

References

  1. Acton, J. M. (2020, April 9). Is it a nuke? Pre-launch ambiguity and inadvertent escalation. Carnegie Endowment for International Peace. https://carnegieendowment.org/2020/04/09/is-it-a-nuke-pre-launch-ambiguity-and-inadvertent-escalation-pub-81446
  2. Betts, R. K. (2003). Striking first: A history of thankfully lost opportunities. Ethics & International Affairs, 17(1), 17–24.
  3. Blair, B. G. (1993). The logic of accidental nuclear war. Brookings Institution.
  4. Chiozza, G., & Goemans, H. E. (2011). Leaders and international conflict. Cambridge University Press.
  5. Depp, M., & Scharre, P. (2024, January 16). Artificial intelligence and nuclear stability. War on the Rocks. https://warontherocks.com/2024/01/artificial-intelligence-and-nuclear-stability/
  6. Fearon, J. D. (1995). Rationalist explanations for war. International Organization, 49(3), 379–414.
  7. Fearon, J. D. (1997). Signaling foreign policy interests: Tying hands versus sinking costs. Journal of Conflict Resolution, 41(1), 68–90.
  8. Feaver, P. D. (1992). Command and control in emerging nuclear nations. International Security, 17(3), 160–187.
  9. Finnemore, M., & Sikkink, K. (1998). International norm dynamics and political change. International Organization, 52(4), 887–917.
  10. Horowitz, M. C. (2018). Artificial intelligence, international competition, and the balance of power. Texas National Security Review, 1(3). https://tnsr.org/2018/05/artificial-intelligence-international-competition-and-the-balance-of-power/
  11. Horowitz, M. C., & Scharre, P. (2021, January 12). AI and international stability: Risks and confidence-building measures. Center for a New American Security. https://www.cnas.org/publications/commentary/ai-and-international-stability-risks-and-confidence-building-measures
  12. Horowitz, M. C., Stam, A. C., & Ellis, C. D. (2015). Why leaders fight. Cambridge University Press.
  13. Jervis, R. (1978). Cooperation under the security dilemma. World Politics, 30(2), 167–214.
  14. Johnson, J. (2024). Revisiting the ‘stability–instability paradox’ in AI-enabled warfare: A modern-day Promethean tragedy under the nuclear shadow? Review of International Studies. https://doi.org/10.1017/S0260210524000767
  15. Levy, J. S. (2008). Preventive war and democratic politics. International Studies Quarterly, 52(1), 1–24.
  16. Payne, K. (2021). Artificial intelligence: A revolution in strategic affairs? Survival, 63(2), 7–32.
  17. Powell, R. (1990). Nuclear deterrence theory: The search for credibility. Cambridge University Press.
  18. Sagan, S. D. (1993). The limits of safety: Organizations, accidents, and nuclear weapons. Princeton University Press.
  19. Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. W. W. Norton & Company.
  20. Scharre, P. (2023). Four battlegrounds: Power in the age of artificial intelligence. W. W. Norton & Company.
  21. Schelling, T. C. (1966). Arms and influence. Yale University Press.
  22. Snyder, G. H. (1961). Deterrence and defense: Toward a theory of national security. Princeton University Press.