Automating the OODA Loop in the Age of AI

Because of the confluence of several cognitive, geopolitical, and organizational factors, the line between machines that analyze and synthesize data (i.e., prediction) and the humans who make decisions (i.e., judgment) will blur into a human-machine decision-making continuum.


In a recently published article in Defence Studies, I argue that artificial intelligence (AI)-enabled capabilities cannot effectively, reliably, or safely complement – let alone replace – humans in understanding and apprehending the strategic environment to make the predictions and judgments that inform and shape command-and-control (C2) decision-making – the authority and direction assigned to a commander. Moreover, the rapid diffusion of, and growing dependency on, AI technology (especially machine learning) to augment human decision-making at all levels of warfare portends strategic consequences that counterintuitively increase the importance of human involvement in these tasks across the entire chain of command.

Because of the confluence of several cognitive, geopolitical, and organizational factors, the line between machines that analyze and synthesize data (i.e., prediction) and the humans who make decisions (i.e., judgment) will blur into a human-machine decision-making continuum. As the handoff between machines and humans becomes incongruous, this slippery slope will make efforts to impose boundaries on, or contain the strategic effects of, AI-supported tactical decisions inherently problematic and unintended strategic consequences more likely.

While the diffusion and adoption of “narrow” AI systems have had some success in non-military domains in making predictions and supporting largely linear decision-making (e.g., in the commercial sector, healthcare, and education), AI in a military context is much more problematic. Military decision-making in non-linear, complex, and uncertain environments entails much more than copious, cheap datasets and inductive machine logic. In command-and-control decision-making, commanders’ intent, the rule of law and rules of engagement, and ethical and moral leadership are critical to the effective and safe application of military force.

Because machines cannot replicate these intrinsically human traits, the role of human agents will become even more critical in future AI-enabled warfare. Moreover, as geostrategic and technologically deterministic forces spur militaries to embrace AI systems in the quest for first-mover advantage and to reduce their perceived vulnerabilities in the digital age, commanders’ intuition, latitude, and flexibility will be in demand to mitigate and manage the unintended consequences, organizational friction, strategic surprise, and dashed expectations associated with the implementation of military innovation.

The ‘Real’ OODA loop is more than just speed 

John Boyd’s OODA loop has become a firmly established trope in strategic, business, and military discourse. Several scholars have criticized the concept as overly simplistic, too abstract, and over-emphasizing speed and information dominance in warfare. Critics argue, for instance, that beyond granular tactical considerations (i.e., air-to-air combat), the OODA loop has minimal novelty or utility at the strategic level – in managing nuclear brinkmanship, civil wars, or insurgencies, for example. But the OODA concept was not designed as a comprehensive means of explaining a theory of victory at the strategic level. Instead, it should be viewed as part of a broader canon of conceptualizations Boyd developed to elucidate the complex, unpredictable, and uncertain dynamics of warfare and strategic interaction. The influence of this scientific and scholarly Zeitgeist manifests most clearly in the OODA loop’s vastly overlooked “orientation” element.

Figure 1: The “simple” OODA loop 
Figure 2: The “real” OODA loop 

Thus, a broader interpretation of the OODA loop analogy is best viewed less as a general theory of war than as an attempt to depict the strategic behavior of decision-makers within the broader framework of complex adaptive organizational systems in a dynamic, non-linear environment. According to Clausewitz, general (linearized) theoretical laws used as a lens to explain the non-linear dynamics of war must be heuristic; each war is a “series of actions obeying its own peculiar law.” Colin Gray opined that Boyd’s OODA loop is a grand theory because the concept has an elegant simplicity, a vast domain of application, and valuable insights about strategic essentials.

Intelligent machines in non-linear chaotic warfare 

Nonlinearity, chaos theory, complexity theory, and systems theory have been broadly applied to understand organizational behavior and the intra- and inter-organizational dynamics associated with competition and war. During conflict and crisis, where information quality is poor (i.e., information asymmetry) and both judgment and prediction are challenging, decision-makers must balance these competing demands.

Automation is generally an asset when high information quality can be combined with precise and relatively predictable judgments. Clausewitz highlighted the unpredictability of war – caused by interaction, friction, and chance – as both a manifestation of and a contributor to nonlinearity. “In war,” he wrote, “the will is directed at an animate object that reacts” – and thus the outcome of an action cannot be predicted.

Inanimate intelligent machines operating and reacting in human-centric (“animate”) environments characterized by interaction, friction, and chance cannot predict outcomes and thus cannot control them. While human decision-making under these circumstances is far from perfect, the unique interaction of factors – psychological, social, cultural (individual and organizational), ideological, emotional, and experiential (i.e., Boyd’s “orientation” concept) – together with luck gives humans a sporting chance to make clear-eyed political and ethical judgments amid the chaos (“fog”) and nonlinearity (“friction”) of warfare.

The computer revolution and recent AI-ML approaches have made militaries increasingly reliant on statistical probabilities and self-organizing algorithmic rules to solve complex problems in a non-linear world. AI-ML techniques (e.g., image recognition, pattern recognition, and natural language processing) inductively infer general rules from data to fill gaps in missing information and identify patterns and trends, thereby increasing the speed and accuracy of certain standardized military operations, including open-source intelligence collation, satellite navigation, and logistics.

However, because these quantitative models are isolated from the broader external strategic environment characterized by Boyd’s “orientation” – a world of probabilities rather than axiomatic certainties – human intervention remains critical to avoid distant analytical abstraction and deterministic causal predictions in non-linear, chaotic war. Several scholars assume that the perceived first-mover benefits of AI-augmented war machines will create self-fulfilling spirals of security-dilemma dynamics that upend deterrence.

AI exacerbating the “noise” and “friction” of war 

AI-ML predictions and judgments tend to deteriorate where data is sparse (e.g., nuclear war) or of low quality (e.g., biased, politicized intelligence, or data poisoned or manipulated by mis- and disinformation). Thus, military strategy requires Clausewitzian human “genius” to navigate battlefield “fog” and the political, organizational, and informational (“noise” in the system) “friction” of war. For example, the lack of training data in the nuclear domain means that AI would depend on synthetic simulations to predict how adversaries might react during brinkmanship between two or more nuclear-armed states. Nuclear deterrence is a nuanced perceptual dance of competition and manipulation between adversaries – “keeping the enemy guessing” by leaving something to chance.

To cope with novel strategic situations and mitigate unintended consequences, human “genius” – the contextual understanding afforded by Boyd’s “orientation” – is needed to finesse multiple flexible, sequential, and resilient policy responses. Robert Jervis noted that “good generals not only construct fine war plans but also understand that events will not conform to them.” Unlike machines, humans use abductive reasoning (or inference to the best explanation) and introspection (or “metacognition”) to think laterally and adapt and innovate in novel situations.  

Faced with uncertainty or a lack of knowledge and information, people adopt a heuristic approach – cognitive short-cuts or rules of thumb derived from experience, learning, and experimentation – combining intuition and reasoning to solve complex problems. AI systems use heuristics derived from vast training datasets to make inferences that inform predictions; they lack, however, the human intuition that depends on experience and memory. While human intuitive heuristics often produce biases and cognitive blind spots, they also provide an effective means of making quick judgments and decisions under stress in unfamiliar situations.

Furthermore, human intervention is critical in deciding when and how the algorithm’s configuration (e.g., the tasks it is charged with, the division of labor, and the data it is trained on) must change in response to shifts in the strategic environment. In other words, rather than complementing human operators, linear algorithms trained on static datasets will exacerbate the “noise” in non-linear, contingent, and dynamic scenarios such as tracking insurgents and terrorists or providing targeting information for armed drones and missile guidance systems. Moreover, some argue that AI systems designed to “lift the fog of war” might instead compound friction within organizations, with unintended consequences, particularly when disagreement, bureaucratic inertia, or controversy exists over war aims, procurement, civil-military relations, or the chain of command amongst allies.

Rapid looping and dehumanizing war 

AI-ML systems excel at routine and narrow tasks and games (e.g., DeepMind’s StarCraft II agent and DARPA’s Alpha Dogfight) with clearly defined, pre-determined parameters in relatively controlled, static, and isolated (i.e., feedback-free) linear environments such as logistics, finance, economics, and data collation. They are found wanting, however, when addressing politically and morally charged strategic questions in the non-linear world of C2 decision-making.

For what national security interests are we prepared to sacrifice soldiers’ lives? At what stage on the escalation ladder should a state sue for peace rather than escalate? When do the advantages of empathy and restraint trump coercion and the pursuit of power? At what point should actors step back from the brink in crisis bargaining? How should states respond to deterrence failure, and what if allies view things differently?

In high-intensity and dynamic combat environments such as densely populated urban warfare – even where well-specified goals and standard operating procedures exist – the latitude and adaptability of “mission command” remain critical, and the functional utility of AI-ML tools for even routine “task orders” (i.e., the opposite of “mission command”) is problematic. Routine task orders – standard operating procedures, doctrinal templates, explicit protocols, and logistics – performed in dynamic combat settings still carry the potential for accidents and risk to life; commanders exhibiting initiative, flexibility, empathy, and creativity are therefore still needed.

War is not a game; it is intrinsically, structurally unstable. An adversary rarely plays by the same rules and, to achieve victory, often attempts to change the rules that do exist or to invent new ones. The diffusion of AI-ML is unlikely to assuage this ambiguity in what many have noted is a myopic and likely ephemeral quest to speed up and compress the command-and-control OODA decision cycle – Boyd’s “rapid looping.” Instead, policymakers risk being blinded by the potential tactical utility of AI-augmented capabilities – where speed, scale, precision, and lethality coalesce to improve situational awareness – without sufficient regard for the strategic implications of artificially imposing non-human agents on the fundamentally human endeavor of warfare.

Moreover, the appeal of “rapid looping” may persuade soldiers operating in high-stress, data-saturated environments to offload cognitive work onto AI tools, placing undue confidence and trust in machines – a phenomenon known as “automation bias.” Recent studies demonstrate that the more cognitively demanding, time-pressured, and stressful a situation, the more likely humans are to defer to machine judgments.

Butterfly effects and unintended consequences 

Even a well-running, optimized algorithm is vulnerable to adversarial attacks that corrupt its data, embed biases, or exploit blind spots in a system’s architecture (by “going beyond the training set”) with novel tactics the AI cannot predict and thus cannot effectively counter. Moreover, when algorithms optimized to fulfill a specific goal are used in unfamiliar domains (e.g., nuclear war) and contexts – or are deployed inappropriately – false positives become possible that may inadvertently spark escalatory spirals.

In war, much as in economics and politics, there is a new problem for every solution that AI (or social scientists) can conceive. Thus, algorithmic recommendations that look technically correct and inductively sound may have unintended consequences unless they are accompanied by novel strategies authored by policymakers who are (in theory) psychologically and politically prepared to cope with those consequences with flexibility, resilience, and creativity – the “genius” of mission command.

Conceptually, barriers could be placed between AI analyzing and synthesizing the data that informs decisions (prediction) and humans making decisions (judgment) – for example, through recruitment, the use of simulations and wargaming exercises, and training combatants, contractors, algorithm engineers, and policymakers in human-machine teaming. However, several factors (cognitive bias, intelligence politicization, bureaucratic inertia, unstable civil-military relations, geopolitical first-mover pressures, etc.) will likely blur these boundaries along the human-machine decision-making continuum.

 
Figure 3: The human-machine decision-making continuum 

AI algorithms isolated from the broader external strategic environment (i.e., the political, ethical, cultural, and organizational contexts depicted in Boyd’s “real” OODA loop) are no substitute for human judgment and decisions in chaotic and non-linear situations. Even when algorithms are functionally aligned with human decision-makers – with knowledge of crucial human decision-making attributes – human-machine teaming risks diminishing the role of human “genius” precisely where it is in high demand, leaving commanders less psychologically, ethically, and politically prepared to respond to nonlinearity, uncertainty, and chaos with flexibility, creativity, and adaptability.

Because of the non-binary nature of tactical and strategic decision-making – tactical decisions are not made in a vacuum and invariably have strategic effects – using AI-enabled digital devices to complement human decisions will have strategic consequences that increase the importance of human involvement in these tasks. According to the US-led Multinational Capability Development Campaign (MCDC): “Whatever our C2 models, systems and behaviours of the future will look like, they must not be linear, deterministic and static. They must be agile, autonomously self-adaptive and self-regulating.” 

AI-empowered ‘strategic corporals’ vs. ‘tactical generals’ 

U.S. Marine Corps Gen. Charles Krulak coined the term “strategic corporal” to describe the strategic implications that flow from the increasing responsibilities and pressures placed on small-unit tactical leaders by rapid technological diffusion and the operational complexity of modern warfare that followed the information revolution and the associated revolution in military affairs of the late 1990s. Krulak argued that recruitment, training, and mentorship would empower junior officers to exercise the judgment, leadership, and restraint needed to become effective “strategic corporals.”

On the digitized battlefield, tactical leaders will need to judge the reliability of AI-ML predictions, assess the ethical and moral veracity of algorithmic outputs, and determine in real time whether, why, and to what degree AI systems should be recalibrated to reflect changes in human-machine teaming and the broader strategic environment. In other words, “strategic corporals” will need to become military, political, and technological “geniuses.”

While junior officers have displayed practical bottom-up creativity and innovation in using technology in the past, the new multi-directional pressures imposed by AI systems are unlikely to be resolved by training and recruiting practices alone. Instead, pressures to make decisions in high-intensity, fast-moving, data-centric, multi-domain human-machine teaming environments might undermine the critical role of “mission command,” which connects tactical leaders with the political-strategic leadership – namely, the decentralized, lateral, and two-way (explicit and implicit) communication between senior command and tactical units.

Under tactical pressure to compress decision-making, reduce “friction,” and speed up the OODA loop, tactical leaders may make unauthorized modifications to algorithms (e.g., reconfiguring human-machine teaming or ignoring AI-ML recommendations) or launch cross-domain countermeasures in response to adversarial attacks that put them in direct conflict with other parts of the organization or contradict the strategic objectives of political leaders.

According to Boyd, the breakdown of the implicit communication and bonds of trust that define “mission command” will produce “confusion and disorder, which impedes vigorous or directed activity, hence, magnifies friction or entropy” – precisely the outcome that the empowerment of small-unit tactical leaders on the twenty-first-century battlefield was intended to prevent.

This breakdown may also be precipitated by adversarial attacks on the electronic communications (e.g., electronic-warfare jamming and cyber-attacks) that advanced militaries rely on, forcing tactical leaders to fend for themselves. In 2009, for example, Shiite militia fighters used off-the-shelf software to hack into unsecured video feeds from U.S. Predator drones. To avoid this outcome, Boyd advocated a highly decentralized command hierarchy that affords tactical commanders initiative while insisting that senior command resist the temptation to over-invest cognitively in, and interfere with, tactical decisions.

On the other side of the command spectrum, AI-ML-augmented ISR, autonomous weapons, and real-time situational awareness might produce a juxtaposed yet contingent phenomenon: the rise of “tactical generals.” As senior commanders gain unprecedented access to tactical information, the temptation to micro-manage and intervene directly in tactical decisions from afar will rise. After all, who understands the commander’s intent better than the generals themselves?

While AI-ML enhancements can help senior commanders become better informed and take personal responsibility for high-intensity situations as they unfold, the line between timely intervention and obsessive micro-management is thin. This dynamic may increase uncertainty and confusion and amplify friction and entropy. By centralizing the decision-making process – contrary to both Boyd’s guidance and the notion of “strategic corporals” – and creating a new breed of “tactical generals” who micro-manage theater commanders from afar, AI-ML enhancements might compound the pressures on tactical unit leaders to speed up the OODA loop and to become military, political, and technological “geniuses.”

Micromanaging or taking control of tactical decision-making could also leave young officers lacking the experience of making complex tactical decisions in the field, which might cause confusion or misperception if communications are compromised and the “genius” of “strategic corporals” is suddenly demanded. Absent verification and supervision of machine decisions by tactical units (e.g., tracking the movement of friendly and enemy forces), an AI green-lighting an operation in a fast-moving combat scenario on the basis of false positives from automated warning systems, mis- and disinformation, or an adversarial attack may have dire consequences.

Moreover, tactical units executing orders received from brigade headquarters – assuming a concomitant erosion of two-way communication flows – may not only diminish the provenance and fidelity of the information received by senior commanders deliberating from their ivory towers but also result in unit leaders following orders blindly and eschewing moral, ethical, or even legal concerns.

Whether the rise of “tactical generals” complements, subsumes, or conflicts with Krulak’s vision of the twenty-first-century “strategic corporal” – and the impact of this interaction on battlefield decision-making – are open questions. Psychological research has found that humans tend to perform poorly at setting appropriate objectives and are predisposed to harm others when ordered to do so by an “expert” (or to “trust and forget”). As a corollary, human operators may begin to view AI systems as agents of authority (i.e., more intelligent and more authoritative than humans) and thus be more inclined to follow their recommendations blindly, even in the face of information (e.g., that debunks mis- and disinformation) indicating they would be wiser not to.

* * * 

Whether or when AI will continue to trail, match, or surpass human intelligence – and, in turn, complement or replace humans in decision-making – is a necessarily speculative question, but a vital one to analyze deductively nonetheless. Specifically, efforts must be directed toward developing robust theory-driven hypotheses to guide analysis of empirical data on the impact of the diffusion and synthesis of “narrow” AI technology in command-and-control decision-making processes and structures. This endeavor has a clear pedagogical benefit, guiding future military professional education and training in the need to balance data and intuition as human-machine teaming matures.

Its policy-centric utility lies in adapting militaries (doctrine, civil-military relations, and innovation) to the changing character of war and the likely effects of AI-enabled capabilities – which already exist or are being developed – on war’s enduring chaotic, non-linear, and uncertain nature. Absent fundamental changes to our understanding of the impact (cognitive, organizational, and technical) of AI systems on the human-machine relationship, we risk not only failing to harness AI’s transformative potential but, more dangerously, misaligning AI capabilities with human values, ethics, and norms of warfare in ways that spark unintended strategic consequences.

In imagined future wars between rival AIs that define their own objectives and possess a sense of existential threat to their survival, the role of humans in warfare – aside from suffering the physical and virtual consequences of dehumanized autonomous hyper-war – is unclear. In this scenario, “strategic corporals” and “tactical generals” would become obsolete, and machine “genius” – whatever that might look like – would fundamentally change Clausewitz’s nature of war.

James Johnson is a Lecturer in Strategic Studies at the University of Aberdeen. He is also an Honorary Fellow at the University of Leicester, a Non-Resident Associate on the ERC-funded Towards a Third Nuclear Age Project, and a Mid-Career Cadre member of the Center for Strategic and International Studies (CSIS) Project on Nuclear Issues. He is the author of Artificial Intelligence and the Future of Warfare: USA, China & Strategic Stability. His latest book project with Oxford University Press is AI & the Bomb: Nuclear Strategy and Risk in the Digital Age. @James_SJohnson
