And so concludes another year in the twenty-first century’s digital revolution. The technological advancements of the past two decades have ensured that artificial intelligence (AI) will be a contending force in the future of state activities, particularly in policymaking. Infusing technology into policy, however, is not just a matter of algorithms and machine learning, but of security, strategy, and ethics. The question of whether optimization—one of AI’s main aims—can be reconciled with the welfare of nations and peoples remains unanswered.
Popular works have long forewarned us of the risks of a world governed by technocratic machine reliance. Scholar Schoni Song considers this phenomenon by juxtaposing the popular bestseller Dune with contemporary case studies of AI developments on the global stage. Song points out that the regulation of AI today echoes the concerns raised by Frank Herbert’s epic novel, in which AI machines and cyborgs were outlawed after an insurrection. For Song, “the prohibition of ‘thinking’ robots, and a much more draconian moral code that stated humans would no longer build machines ‘in likeness of a human mind’” reverberates through current national and supranational discussions of AI.
The new year brings a number of states one step closer to fulfilling their policy objectives—some of them time-sensitive—around AI technologies. Argentina wants to implement an AI national plan by 2029; China wants to enact Law360—which integrates AI into judicial proceedings—by 2025; and the EU announced its open-ended plan to be a pioneering force in AI ethics back in 2018, conceding that the plan would evolve alongside new technologies. Even the UN wants to fold AI-regulatory plans into its Sustainable Development Goals, targeting a 2030 finish line.
According to Song, such proposals have fed an ongoing international race of sorts, but not all states have successfully implemented their plans. For instance, in 2018, French President Emmanuel Macron proposed the glamorous Joint European Disruptive Initiative (JEDI), but it has since quietly faded into irrelevance.
Algorithmic decision-making is a daunting reality: at worst, it entrenches pre-existing biases; at best, it borrows the strongest traits of a workforce. Optimization in and of itself can be polarizing, especially in international relations. As Song explains, from the Alfie Evans case in the UK to a new social credit score initiative in China, global concerns over AI in politics are mounting. While many of these concerns might be dismissed as rooted in a Dune-like hysterical groupthink, some point to legitimate problems.
For instance, AI application at the global level is still evolving, and there is still no central governance body with a meaningful ethics framework for regulating the use and integration of technology in public policy. Instead, the focus has been on setting principled precedents on a case-by-case basis. This method, however, comes with foreseeable flaws: what works in one nation’s Big Tech industry might not easily work in another. This unevenness in implementation could allow technological hegemonies to call the shots, with significant consequences for slower adopters.
Song forecasts that, at both the national and global levels, “the competition for AI supremacy will come down to whoever has the best and biggest data.” At present, “autocratic regimes clearly carry a competitive edge in this area of data acquisition and control.” Nations forced to navigate “civil liberty and privacy concerns with respect to the idea of Big Brother collecting and using their own data through various applications of artificial intelligence,” including the United States, could find themselves in a precarious, non-competitive position.
“The future of a free and prosperous world may be on the line,” Song contends, “depending on who articulates the next global vision of AI and what values they represent.”