Artificial Intelligence in the Australian Defence Forces: Strengthening Denial, Managing Escalation

Australia’s defence transformation is being driven by the integration of artificial intelligence (AI) across surveillance, targeting, command, and logistics systems. In this policy brief, Aina Turillazzi examines how AI is reshaping the Australian Defence Force under the 2024 National Defence Strategy and Integrated Investment Program, where AI is largely financed through the same machinery that funds connectivity, data infrastructure, and command and control (C2). AI could strengthen decision advantage and deterrence, but it also raises new escalation and perception risks in crisis settings. To mitigate these risks, Turillazzi argues that the policy task is not to slow adoption but to ensure that AI holds up under crisis pressure. She outlines four strategic shifts to build a more controllable AI-enabled force:

  1. Build “decision advantage with brakes” into AI-enabled decision support. Any AI-enabled decision support system (DSS) used for intelligence, surveillance, and reconnaissance (ISR), planning, or targeting should be designed for crises as well as routine operations. The Department of Defence (DoD) should mandate a minimum assurance package: an explicit “slow mode” that raises evidentiary thresholds and forces uncertainty to be surfaced, auditable provenance for key outputs (e.g., model version, assumptions), and structured red teaming built into operational cycles (a minimal illustrative sketch of such a package follows this list). The objective is to preserve AI-enabled speed where it is useful, without allowing tempo to become a bias that narrows interpretation and compresses political control during escalation.
  2. Make OneDefence a gate for higher-risk AI, and fund delivery like operational infrastructure. The unified data layer should be treated as a prerequisite, not an aspiration. Where data remains fragmented, AI outputs become inconsistent across units and harder to verify in joint settings; these are precisely the conditions under which socio-technical overreliance takes hold. Policy should therefore link higher-risk AI applications to explicit OneDefence readiness thresholds, covering data standards and interoperability, before operational deployment is authorised.
  3. Signal restraint through selective transparency on how AI is used. Ambiguity in perception and signalling is an under-addressed stability risk of military AI. Australia should publish a short set of principles on how AI is used in decision support and intelligence processing, including what Australia will not automate and what oversight requirements apply. The purpose is not virtue signalling but practical signalling: it reduces worst-case inference and clarifies that decision advantage is being pursued with controls designed for crisis stability.
  4. Fix the transition to capability: fast software acquisition, operational pull-through, and the people pipeline. Australia’s late-delivery problem is not only a budget issue but also a stability issue, because it increases incentives for hurried integration and premature reliance on prototypes. The DoD should create a fast acquisition pathway for software and models that supports rapid iteration and modular upgrades, while reducing the ad hoc modifications that delay delivery and fracture interoperability. The Advanced Strategic Capabilities Accelerator (ASCA) should be directed to prioritise projects that integrate with OneDefence, have an explicit sustainment model covering updates and cyber hardening, and are adopted by operational units with dedicated integration staff. Finally, the people pipeline should be treated as an enabler of assurance, not an afterthought: without sufficient in-house technical competence, auditability becomes aspirational rather than real. The DoD should therefore treat clearances, retention, and technical career pathways as foundational to the governance architecture, ensuring that the humans responsible for overseeing AI systems have the expertise to do so meaningfully.
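
To make the first recommendation concrete, the following minimal Python sketch shows one hypothetical way the elements of a “minimum assurance package” could be expressed in software: provenance (model version, assumptions) travels with every output by construction, and an explicit slow mode raises the evidentiary threshold an assessment must clear before it is released as a finding rather than an uncertain lead. All identifiers and threshold values here (AssuredOutput, release_to_operator, EVIDENTIARY_THRESHOLDS, 0.70/0.90) are illustrative assumptions, not an existing DoD or ADF specification.

    # Illustrative sketch only: a hypothetical "minimum assurance package" for
    # an AI-enabled decision support system. All names and thresholds invented.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical evidentiary thresholds: "slow mode" demands stronger
    # evidence before an AI-derived assessment is surfaced as a finding.
    EVIDENTIARY_THRESHOLDS = {"routine": 0.70, "slow_mode": 0.90}

    @dataclass
    class AssuredOutput:
        """An AI-derived output carrying auditable provenance."""
        assessment: str    # the model's judgement, e.g. a track classification
        confidence: float  # model-reported confidence in [0, 1]
        model_version: str # provenance: which model produced this output
        assumptions: list = field(default_factory=list)  # provenance: key assumptions
        produced_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def release_to_operator(output: AssuredOutput, slow_mode: bool) -> dict:
        """Gate an output on the active mode's evidentiary threshold.

        In slow mode the threshold rises, so a below-threshold output is
        surfaced as an uncertain lead rather than presented as a finding.
        """
        mode = "slow_mode" if slow_mode else "routine"
        meets_threshold = output.confidence >= EVIDENTIARY_THRESHOLDS[mode]
        return {
            "assessment": output.assessment,
            "released_as": "finding" if meets_threshold else "uncertain_lead",
            "mode": mode,
            # Auditable provenance travels with every release.
            "provenance": {
                "model_version": output.model_version,
                "assumptions": output.assumptions,
                "confidence": output.confidence,
                "produced_at": output.produced_at,
            },
        }

    # The same output is releasable as a finding in routine operations but is
    # flagged as uncertain once a crisis triggers slow mode.
    out = AssuredOutput("probable maritime contact", 0.82, "isr-fusion-v3",
                        assumptions=["AIS feed current", "single-sensor track"])
    print(release_to_operator(out, slow_mode=False)["released_as"])  # finding
    print(release_to_operator(out, slow_mode=True)["released_as"])   # uncertain_lead

The design point is that provenance is attached when the output is created rather than reconstructed after the fact, and that switching modes changes what the system is willing to assert, not merely how quickly it runs.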

 

About the Author

Aina Turillazzi is a PhD candidate at the Strategic and Defence Studies Centre, Australian National University. Her research examines AI-enabled autonomy in weapons systems and its implications for crisis escalation, with a particular focus on grey-zone dynamics in the Indo-Pacific.

The opinions articulated above represent the views of the author and do not necessarily reflect the position of the Asia-Pacific Leadership Network or any of its members. The APLN website is a source of authoritative research and analysis and serves as a platform for debate and discussion among our senior network members, experts and practitioners, as well as the next generation of policymakers, analysts and advocates. Comments and responses can be emailed to apln@apln.network.

Image: A Boeing MQ-28 Ghost Bat on display at the Avalon International Airshow in Avalon, Australia, March 2025. (Alexander Bogatyrev/SOPA Images/LightRocket via Getty Images)
