The Algo Wars Are Real: Inside The Pentagon’s “Algorithmic-Warfare Team”

Via Dustin Lewis, Naz Modirzadeh, and Gabriella Blum for Lawfare

In April 2017, the Pentagon created an “Algorithmic Warfare Cross-Functional Team,” pending a transfer of $70 million from Congress. The premise of this initiative is that maintaining a qualitative edge in war will increasingly require harnessing algorithmic systems that underpin artificial intelligence (AI) and machine learning (ML). This realization is not unique to the United States: while the Pentagon’s algorithmic-warfare team gets up and running, other countries are also seeking to integrate AI and ML into various military functions. As armed forces race to secure technological innovations in these fields, it is imperative to match those developments with sound regulatory responses.

The broad remit of this new Department of Defense (DoD) team–to consolidate “existing algorithm-based technology initiatives related to mission areas of the Defense Intelligence Enterprise”–underscores that it is not just weapons that are of interest; far from it. Think logistics, communications, situational awareness, and intelligence collection management, among many other possibilities. And a May 2017 report from the Hague Centre for Strategic Studies explains that other countries–including China and Russia, as well as several traditional U.S. partner forces–are also pursuing an edge through diverse algorithmically derived functions related to war.

Notwithstanding the breadth and possibilities of AI and ML and their military applications, much of the legal debate around advanced military technology still revolves around “autonomous weapon systems” (AWS). For instance, in December 2016, states parties to the Convention on Certain Conventional Weapons established a Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems. Irrespective of the GGE’s possible success or failure, however, the focus on weapons is too constrictive a framework to offer an adequate response to the challenges ahead.

Last fall, we proposed the concept of a “war algorithm” and an accompanying framework for “war-algorithm accountability.” (Prof. Anderson reviewed our report on Lawfare.) The background idea is that in war, as in so many other areas of modern life, authority and power are increasingly expressed algorithmically. (We defined a “war algorithm” as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict.) The DoD’s new algorithmic-warfare team embodies the core concept, which is apparently gaining steam elsewhere as well, as demonstrated in a recent Breaking Defense series on “The War Algorithm.”

Many war algorithms may pose challenges to key concepts–including attribution, control, foreseeability, and reconstructability–underpinning legal frameworks that regulate conduct in armed conflict. The underlying concern about potential gaps in regulation and accountability is not a wholly new phenomenon. But it is becoming more pressing due in part to rapid advancements in AI and ML. Of particular concern are technologies whose “choices” may be difficult–or even impossible–for humans to anticipate or unpack or whose “decisions” are seen as “replacing” human judgment. Across diverse domains, such advancements are already occurring in the U.S. as well as in many other technologically advanced countries. Moreover, the war-algorithm pipeline often includes, beyond governments, many universities and tech companies not traditionally associated (at least in the U.S.) with military research and development.

China, for example, raises many of these concerns for the U.S. government. In February 2017, the New York Times reported that, increasingly, the “smartest guys” in AI are not only in the U.S. but are “also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.” In March 2017, the New York Times highlighted how some investments in high-tech American start-ups, made by Chinese firms that are owned by state-owned companies or have connections to Chinese leaders, are ringing alarm bells in Washington. And, in May 2017, the New York Times noted connections in China between industry, academia, and government, with two professors who worked on a government award for a “military-use intelligence ground robot” now slated to head a government lab that will cooperate with Baidu on AI research.

Against this broader backdrop, it is increasingly urgent to consider and pursue an array of accountability approaches to war algorithms, including frameworks that extend beyond the U.S. Of the various possible frameworks, international law is the only normative regime that purports to be both universal and uniform. Relevant international legal fields that would need to help inform future regulation include international humanitarian law/law of armed conflict (IHL/LOAC), international criminal law, international human rights law, the law of state responsibility, and outer-space law. (Other normative and legal regimes–at the domestic and transnational levels, conventional and unconventional alike–should be considered as well.)

Though these international legal doctrines offer an initial path towards regulation, in key respects they are neither exhaustive nor necessarily directly applicable to this new strategic reality–at least not yet. These doctrines do not, for instance, easily translate into rules of engagement (ROEs) that would help guide military operations that utilize war algorithms.

Consider the initial tasks of the DoD’s new algorithmic-warfare team. Its first assignment is a relatively modest one: to field technology that will help reduce burdens on human operators in analyzing video feeds of Iraq and Syria captured by unmanned aerial systems (a.k.a. drones). Through an initial three-phase effort, with each phase lasting 90 days, the Team aims to “increase actionable intelligence” and “enhance military decision-making.” The first phase envisages the Team “organiz[ing] a data-labeling effort, and develop[ing], acquir[ing], and/or modify[ing] algorithms to accomplish key tasks.” In the second phase, the Team is slated to “identify required computational resources and identify a path to fielding that infrastructure.” And in the third phase, the Team will seek to “integrate algorithmic-based technology” with existing intelligence projects in 90-day sprints.
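To make the first phase’s “data-labeling effort” a bit more concrete, here is a minimal sketch of what a per-frame label record and its export might look like. It is purely illustrative: the field names, categories, and JSON format are assumptions made for this post, not anything drawn from DoD documents.

```python
# Illustrative sketch only: a hypothetical schema for the kind of
# "data-labeling effort" described above. All field names and categories
# are assumptions for illustration.
from dataclasses import dataclass, asdict
import json


@dataclass
class FrameLabel:
    video_id: str      # identifier of the source full-motion-video feed
    frame_index: int   # frame number within that feed
    bbox: tuple        # (x, y, width, height) of the labeled region
    category: str      # human-assigned label, e.g. "vehicle" or "structure"
    labeler_id: str    # which analyst produced the label (for auditability)
    confidence: float  # labeler's (or a model's) confidence in the label


def export_labels(labels, path):
    """Serialize labels to JSON so they can later train or evaluate a model."""
    with open(path, "w") as f:
        json.dump([asdict(label) for label in labels], f, indent=2)


# Example: one hand-labeled frame from a hypothetical feed.
labels = [FrameLabel("feed-001", 4212, (118, 64, 32, 18), "vehicle", "analyst-07", 0.9)]
export_labels(labels, "labels.json")
```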

Without context, a mere “data-labeling effort” might sound benign. But the setting for this Pentagon Team’s first assignment is reportedly U.S. operations directed against ISIS (and others) in Iraq and Syria. “Labeling” such data may implicate an array of IHL/LOAC concerns, such as the status of the individual under scrutiny: Does he or she qualify as a combatant, as a civilian, as a member of an organized armed group, as a civilian directly participating in hostilities, as religious personnel, as medical personnel, or as something else? The stakes are extremely high as, under IHL/LOAC, status is a key determinant for whether an individual may be subject to targeting in direct attack. In some cases, the determination of status is relatively straightforward. In many others, however, it can be very difficult.

The labeling of such video feeds may bear on various other IHL/LOAC-based assessments as well. Consider, for instance, whether such “data-labeling effort[s]” would include the notion of “hostile intent” that is relevant to U.S. forces’ ROEs on opening fire. In addition, it is not currently clear whether war algorithms will be capable of formulating and implementing certain IHL/LOAC-based evaluative decisions and value judgments (an illustrative sketch follows the list), such as:

  • The presumption of civilian status in case of “doubt”;
  • The assessment of “excessiveness” of expected incidental harm in relation to anticipated military advantage;
  • The betrayal of “confidence” in relation to the prohibition of perfidy; or
  • The prohibition of destruction of civilian property except where “imperatively” demanded by the necessities of war.
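To see why such judgments resist straightforward encoding, consider a toy sketch of how the first item, a “presumption of civilian status in case of doubt,” might be reduced to a rule over model outputs. The status categories, the threshold, and the very idea of scoring are assumptions for illustration, not a claim that such judgments can or should be reduced to a threshold.

```python
# Illustrative sketch only: encoding a "presumption of civilian status in
# case of doubt" as a rule over hypothetical model scores. The categories,
# threshold value, and scoring scheme are all assumptions for illustration.
STATUSES = [
    "combatant",
    "organized_armed_group_member",
    "civilian_directly_participating",
    "civilian",
    "medical_personnel",
    "religious_personnel",
]

DOUBT_THRESHOLD = 0.90  # arbitrary; choosing it is itself a legal/policy judgment


def presumptive_status(scores: dict) -> str:
    """Return the highest-scoring status, unless doubt remains, in which
    case default to 'civilian' (the protective presumption)."""
    best = max(scores, key=scores.get)
    if scores[best] < DOUBT_THRESHOLD:
        return "civilian"
    return best


# Example: the model leans one way, but not confidently enough -> presumed civilian.
print(presumptive_status({"combatant": 0.8, "civilian": 0.2}))  # -> "civilian"
```

Even in this toy form, the protective default and the doubt threshold are value judgments of exactly the kind the list above flags, and nothing in the code explains how its confidence scores would be generated, validated, or audited.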

The challenge of making these kinds of judgments is not unique to algorithmic systems; such judgments are hard for human analysts as well. Yet, for human decision-makers, we have prospective training programs and after-the-fact accountability mechanisms to help ensure compliance with the law. Training and accountability, though imperfect, allow us to guide and examine the decisions humans make. It is not clear at the moment what “training” or “accountability” means when it comes to war algorithms.

Meanwhile, the Algorithmic Warfare Cross-Functional Team’s overall and long-term ambitions are much broader than lessening human burdens in monitoring video feeds. As noted above, the Team has been assigned to consolidate “existing algorithm-based technology initiatives related to mission areas of the Defense Intelligence Enterprise.” That remit expressly includes “all initiatives that develop, employ, or field artificial intelligence, automation, machine learning, deep learning, and computer vision algorithms.” Down the road, according to Air Force Lt. Gen. John N.T. “Jack” Shanahan, the Director for Defense Intelligence (Warfighter Support) who has been tasked with finding the Team’s new technology, “We [at DoD] see all sorts of things for intelligence, for targeting, for collection management, for sensor fusion. For the department … logistics, command and control, communications.” Indeed, “[e]verything that industry is working on has some applicability throughout the entire department,” Lt. Gen. Shanahan told Defense One. That wide-ranging vision of possible warfighting benefits from AI and ML strongly aligns with the overall approach of the June 2016 Defense Science Board’s Summer Study on Autonomy.

It is clear that algorithmic warfare is developing now. Governments, industry, academia, and civil society should all be pursuing ways to secure war-algorithm accountability.

3 thoughts on “The Algo Wars Are Real: Inside The Pentagon’s ‘Algorithmic-Warfare Team’”

  1. And yet, relatively poorly funded groups such as ISIS, al-Qaeda, and many others that seldom make the 10-minute news cycle cause the vastly better-funded governments of developed countries and their militaries (aggressively and extra-legally “marketed” to by a self-serving military-industrial complex) to spend resources many orders of magnitude greater than those available to these far smaller terrorist groups, or than the direct cost of their attacks.

    From a resource-expenditure standpoint, these smaller groups will continue to “win” against developed nations’ tech programs, advanced AI algorithms or whatever, on an economic basis if nothing else. The most successful model to date in fighting terrorism has been just the opposite of high tech: that of the Israelis, which is comparatively low tech, far less costly in resources, and very direct in eliminating the terrorists themselves.

    In the end, neither the governments nor the terrorists will “win.” The pressures of overpopulation and competition for critical finite resources will only increase until resources and population come into balance. Unfortunately, the circumstances necessary to bring this about will likely be universally far worse than those that cause the current conflicts.
