
Excitement about generative artificial intelligence has reached fever pitch since the launch of ChatGPT and image engines like MidJourney in the past year. Yet both elation and anxiety about the new technology’s present applications soon give way to bigger, existential projections about the future. With the exception of the “singularity” and AI replacing humanity outright, no outcome provokes quite as much anxiety as the effect AI could have on warfare. Could autonomous weaponry and strategy bots oppress civilians, defy human operators—or even ignite World War III?


The world’s leaders are making a show of action. Sixty states met this February to hammer out goals for the “responsible use of AI” by militaries. The State Department followed with a political declaration of the United States’ principles, which include keeping senior officers involved in the design and operation of all autonomous weapons, designing them for “explicit, well-defined uses,” monitoring all safety features throughout the weapons’ lifespans, and seeking to avoid bias in their use. Human Rights Watch noted, however, that none of these goals is yet legally binding, nor do they slow the development of sophisticated systems that their operators do not fully understand.

For all their novelty, these recent advances are merely the latest steps in the automation of state violence, a process that took its first big leap with the formation of standing militaries in Europe in the eighteenth century. Although the machine gun, tank, and fighter jet are more familiar symbols of this process, it was the drilling of infantry, which stripped out individual human emotion and increased their speed and reliability to commanders, that gave such armies their first major advantage over unprepared rivals.

States have feared falling behind on any automation advantage ever since, which is why there’s a parallel discourse in the news media spurring the US to adopt military AI more quickly. Breathless features on the ways generative AI makes the analysis of secret-level data far more efficient run side by side with stern warnings that any restrictions will hand an advantage to the Chinese military. The New York Times gives free op-ed space to AI military contractors. Even gaffes that seem accidental, such as the colonel who claimed that an AI drone killed its operator in a virtual exercise in order to achieve its goal, or the general who said the US military treats AI weapons more ethically because of its “Judeo-Christian values” (i.e., unlike China), reinforce a message to the world that military AI development is moving full steam ahead.

The applications of AI, particularly in intelligence analysis, remain largely hypothetical. At present, it seems the hype itself is a weapon, meant to warn off pretenders to geopolitical supremacy. But would military commanders really go so far as to try to automate wartime decision-making itself?

The Worst-Case Scenario

It makes sense that actual military officers often call for more caution than war industry contractors and think tank-adjacent media. Three US officers get straight to the point in a 2018 article in PRISM, proposing that Chinese and American automated strategy systems operating in 2024 could escalate a conflict to a “limited nuclear exchange” in only two hours.

In their scenario, the crisis begins with an accidental collision between a Vietnamese fishing vessel and a Chinese ship in the South China Sea. This event sets off a series of seemingly unrelated chain reactions in cyberspace, including falling stock market indices and a surge of unfavorable commentary about each side on social media. These signals trip AI warning systems on both sides, which interpret them as attacks on IT infrastructure and set actual military actions in motion. Although China strikes first in their scenario, the authors hold the interaction of both sides’ systems responsible for the potential murder of millions.

This scenario highlights the frustrating unknowns in the newest generation of AI. The foremost is the nature of the neural networks that drive the technology. Essentially, generative AI “teaches itself” to write, paint, play video games, etc., by iterating its task millions of times and assigning success rates or probability outcomes, carefully honing its output to produce a result its programmers deem optimal. But it’s unable by its very nature to provide a full account of the “decision-making” process that led to that result—because the algorithmic machinery actually made no decisions.
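To make that abstraction concrete, here is a minimal, purely illustrative sketch in Python of the kind of trial-and-error loop described above: a toy four-option task with made-up reward numbers, standing in for no real system, military or otherwise. The program repeats the task thousands of times, scores each attempt, and nudges action probabilities toward higher-scoring choices; the trained parameters encode what “worked,” but nowhere do they record a reason.

```python
# Toy illustration only: a self-teaching loop over a four-option task,
# with arbitrary made-up reward numbers. Nothing here models a real system.
import math
import random

random.seed(0)

N_ACTIONS = 4
hidden_payoffs = [0.1, 0.3, 0.8, 0.5]  # the "world" the agent can only sample, never inspect
preferences = [0.0] * N_ACTIONS        # learned parameters, shaped purely by trial and error
LEARNING_RATE = 0.05
avg_reward = 0.0                       # running baseline for the update

def softmax(prefs):
    """Convert raw preferences into a probability distribution over actions."""
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

for step in range(20_000):             # iterate the task thousands of times
    probs = softmax(preferences)
    action = random.choices(range(N_ACTIONS), weights=probs)[0]
    reward = hidden_payoffs[action] + random.gauss(0, 0.1)  # noisy success score
    avg_reward += 0.01 * (reward - avg_reward)
    # REINFORCE-style update: raise the probability of better-than-average
    # actions, lower the rest. No reasons are recorded; only numbers shift.
    for a in range(N_ACTIONS):
        grad = (1.0 if a == action else 0.0) - probs[a]
        preferences[a] += LEARNING_RATE * (reward - avg_reward) * grad

print([round(p, 2) for p in softmax(preferences)])
# Most probability mass ends up on the best-paying action, yet the final
# parameters contain no account of why it was chosen.
```

The sketch is orders of magnitude simpler than a modern neural network, but the underlying dynamic is the same: probabilities adjusted by feedback, with no ledger of reasons left behind for anyone auditing the result.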

Moreover, AI systems are only as capable as the data to which they are linked, which redoubles concerns about the reliability and ethics of the mass data collection ongoing in every field of life today. Most crucially, by shoveling data analysis in front of human users ever more quickly, AI shrinks the window of time humans have to make an informed decision, even when they are nominally in “control.”

The negative outcomes of these cumulative flaws in AI have been evident in other applications for some time, although our society has taken strides to normalize them. Securities trading algorithms have caused “flash crashes” by amplifying bad data and poor operator assumptions. Human drivers blindly obey their AI map assistants, causing crashes every day—and the technology for fully autonomous vehicles seems seriously stalled. Facial recognition software in public security continues to spread, despite serious biases.

The problem of this technology, the authors argue, is that “strategic AI systems may reduce the friction of war while increasing the fog.” Ultimately, the capabilities of AI won’t matter as much as the way humans interact with it—will it produce blind faith or healthy skepticism in its operators?

AI Rivals?

If the current hype over military AI is meant in part to send a message to the United States’ geopolitical rivals, it should be instructive to analyze the activities and claims of those rivals. In 2019, Elsa Kania summarized the available information about China’s military strategy for AI, and she found no shortage of hype there, too.

The People’s Liberation Army is seeking to integrate AI throughout every branch of its operations as part of a broader strategy of what it calls “intelligentization.” It calls for creating a “system of systems that consists of not only intelligent weapons but…involves human-machine integration with (artificial) intelligence in a ‘leading’ or dominant position.”

The PLA’s goals reflect the fears behind the State Department’s February declaration: machine-learning techniques that can function with “limited computing,” AI for political work and psychological operations, increased autonomy for “unmanned” systems, and counter-AI data manipulation. Kania cites an impressive list of strategic initiatives in the army, navy, and air force, as well as billions of dollars invested in private and military/private “fusion” industrial projects. And yet the actual weapons in use (as of 2019) were nothing more than remote-controlled drone planes, boats, and robots.

Kania’s analysis of China’s AI “weaknesses” is more telling, inasmuch as those weaknesses could apply to the difficulties facing any large military establishment—in particular, the United States. Existing bureaucratic structures threatened with replacement will hinder the PLA’s adoption of radically new technology. The PLA faces challenges recruiting and training talented computer scientists. It has made little progress toward integrating its many different types of military data with the cloud computing that AI would need in order to utilize them. Pouring so much capital into industrial development all at once will lead to waste and corruption.

Since Kania is writing for the hawkish, Democratic-aligned think tank the Center for a New American Security, she concludes her report with policy points intended to confront the threat it describes: reinvestment in American STEM education and more security-state surveillance of industrial links with Chinese companies. Notably, she doesn’t weigh the possibility of AI-assisted mutually assured destruction. This raises one final imponderable. Which is more dangerous for the future of humanity: a military AI that hums with seamless efficiency, or one that is riddled with errors and wielded by a resentful officer corps?



Resources

JSTOR is a digital library for scholars, researchers, and students. JSTOR Daily readers can access the original research behind our articles for free on JSTOR.

PRISM, Vol. 7, No. 4 (2018), pp. 92–105. Institute for National Strategic Security, National Defense University.
Center for a New American Security, 2019.