Military AI and Ethical Challenges

Jul 23, 2025

Overview

The session featured a discussion with Lieutenant General (Ret.) Shanahan and moderator Gary Korn on military applications of artificial intelligence (AI), legal and ethical considerations, accountability, and the evolving international landscape.

Introduction and Speaker Background

  • General Shanahan led major DoD AI efforts, including Project Maven and the Joint Artificial Intelligence Center (JAIC).
  • Gary Korn, a former U.S. Cyber Command legal advisor, introduced the speaker and moderated the session.
  • The session focused on emerging military AI, law, and the balance between innovation and responsible use.

Defining Artificial Intelligence and Autonomy

  • AI was defined as machines performing tasks at or above human level; current systems remain focused on narrow, well-defined tasks.
  • Autonomy refers to systems acting independently based on programming, with or without AI.
  • AI-enabled autonomy could result in unexpected actions beyond direct human control.
  • The distinction between automation, autonomy, and AI is critical for military applications.

The Digital Revolution in Defense

  • Shanahan argues that AI marks a new digital revolution, fundamentally different from earlier shifts because of the integration of humans, machines, and data.
  • The rise of AI is compared to major historic technological shifts.
  • Rapid data accumulation in military operations created a need for AI, exemplified by Project Maven addressing intelligence analysis overload.

Project Maven and Department of Defense AI Adoption

  • Project Maven was built to solve practical data analysis problems using computer vision.
  • Success in Project Maven demonstrated the viability of startup culture within Pentagon bureaucracy.
  • Following Maven, Shanahan was tasked with leading the JAIC to expand AI across military operations.
  • The key was aligning AI solutions with real needs, not forcing technology where unnecessary.

Legal and Ethical Considerations

  • AI in warfare raises issues of accountability, human control, and risk management.
  • The U.S. process includes testing, evaluation, and legal review at multiple stages, guided by DoD Directive 3000.09.
  • Humans remain accountable for decisions and outcomes in military AI systems.
  • Law, ethics, and morality frameworks are applied to AI as with past technologies, but new risks must be recognized.

Accountability, Bias, and Human-Machine Interaction

  • Accountability resides with humans, not machines or developers.
  • Machine bias originates from human bias in training data and coding.
  • Human-machine interaction—how users design, monitor, and trust AI systems—requires legal and operational input.
  • Continuous test, evaluation, and risk mitigation are necessary as systems evolve.
  • The pace and depth of AI diffusion challenge traditional decision-making cycles and could risk automation bias or abdication of responsibility.

International Competition and Collaboration

  • There is aggressive competition, if not a race, with China in AI development, with concerns over escalation and risk of unsafe, untested systems.
  • Collaboration with allies is essential; U.S. efforts include Five Eyes and broader partnerships to ensure interoperability and shared standards.
  • The DoD's responsible AI principles (responsible, equitable, traceable, reliable, governable) have been institutionalized in the U.S., but adoption varies globally.

Recommendations and Further Learning

  • Officers and legal advisors are encouraged to build AI literacy through books (e.g., Garry Kasparov's "Deep Thinking"), films (e.g., "AlphaGo"), Substack posts, and key texts by Corin Stone, Charlie Baker, and Charlie Dunlap.
  • There is a critical need for more legal professionals with technology expertise in national security.

Questions and Follow-Ups

  • Address disparities in language data for large language models and the risk of missing cultural context in military AI applications.
  • Encourage ongoing evaluation of emerging risks, especially as AI capabilities enter new domains and scenarios.