
Military AI and Autonomy Overview

Jul 23, 2025

Overview

This session featured a keynote and discussion with Lt. Gen. (Ret.) Shanahan on the intersection of artificial intelligence (AI), military operations, and law, focusing on AI implementation, risk, accountability, and international considerations.

Key Concepts in Military AI and Autonomy

  • AI is defined as machines performing tasks at or above human capability, with current systems focused on narrow, single-domain use.
  • Autonomy and AI-enabled autonomy are distinct, with the latter allowing machines to act with greater independence and unpredictability.
  • The rise of AI is characterized as a digital revolution, changing the nature of power and cognition at scale.

Project Maven and DoD AI Initiatives

  • Project Maven addressed the challenge of analyzing vast drone-derived video data beyond human capacity, using computer vision AI.
  • Success in Project Maven demonstrated that a startup-like culture is possible within the DoD and led to the establishment of the Joint Artificial Intelligence Center (JAIC).
  • The approach emphasized enabling rather than replacing human analysts and highlighted the importance of matching AI solutions to actual problems.

Intersection of AI, Law, and Ethics

  • AI should not fundamentally alter the legal principles governing the initiation and conduct of warfare, but speed and autonomy introduce new challenges.
  • Accountability, control, and the "black box" problem are core legal and ethical issues as decision-making is increasingly delegated to machines.
  • Human responsibility and legal frameworks must remain central, with early and ongoing legal involvement in AI system design and deployment.

Risk, Testing, and Evaluation

  • All weapon systems entail risk; risk management and robust test/evaluation processes are essential, not just rapid software development.
  • DoD Directive 3000.09 provides a framework for developing and fielding autonomous weapon systems.
  • Risk in AI use spans from negligible (administrative tasks) to excessive (nuclear command and control), with clear human-in-the-loop requirements for the latter.

International Competition and Collaboration

  • The U.S.–China AI competition is characterized as a race, with concerns about an "AI race to the bottom" if proper safeguards are not observed.
  • Collaboration with allies (e.g., Five Eyes, NATO) is critical for interoperability and shared ethical standards, though policy hurdles persist.

AI Bias and Data Limitations

  • Machine bias is inherently human bias reflected in training data and outputs; transparency and legal oversight are necessary.
  • Large language models are limited by the breadth and quality of multilingual and culturally nuanced data, impacting effectiveness in non-English contexts.

Guidance and Resources for AI Understanding

  • Recommended starting points: Kasparov’s book on Deep Blue, the AlphaGo documentary, the New York Times feature on Google Translate, and expert writings such as Corin Stone’s work.
  • Continual learning is required due to rapid AI advancements; legal, ethical, and technical literacy are all necessary.

Decisions

  • Stand up a responsible AI division within the JAIC to establish and implement ethical principles for AI use.
  • Provide full-time legal support for AI programs to manage complex legal and privacy considerations.

Action Items

  • TBD – AI and Legal Teams: Increase interdisciplinary training in AI, law, and ethics for DoD staff.
  • TBD – Project Leaders: Continue expanding international AI collaboration and interoperability initiatives.

Recommendations / Advice

  • Embed legal and ethical review early in AI system design and throughout the lifecycle.
  • Maintain robust risk management and test/evaluation for all AI-enabled military systems.
  • Focus on human-machine interaction research to optimize collaboration and accountability.

Questions / Follow-Ups

  • How can multilingual and culturally contextual data gaps in AI models be effectively addressed?
  • What safeguards are needed as AI approaches higher levels of autonomy in critical systems?