Time to get serious about AI risk

In November 2024, US President Joe Biden and Chinese President Xi Jinping made their first substantive joint statement about the national security risks posed by artificial intelligence. Specifically, they noted that both the US and China believe in “the need to maintain human control over the decision to use nuclear weapons.”

That may sound like diplomatic low-hanging fruit, since it would be hard to find a reasonable person willing to argue that we should hand control of nuclear weapons to AI. But in this domain there is no such thing as low-hanging fruit, especially on weighty security matters. The Chinese are inherently skeptical of US risk-reduction proposals, and Russia has opposed similar language in multilateral bodies. Because bilateral talks with the US on AI and nuclear security would open daylight between Russia and China, progress on this front was not a foregone conclusion.

In the end, it took more than a year of negotiation to make that simple joint statement happen. Yet, simple as it seems, the result was significant because it demonstrated that the two AI superpowers can engage in constructive risk management even as they compete vigorously for AI leadership.

Moreover, diplomats and experts from our two countries had met earlier in 2024, in Geneva, for an extended session dedicated to AI risks. It was the first meeting of its kind. Though it produced no significant results, the very fact that it occurred was an important step, and both sides did manage to identify critical areas of risk that required further work.

Now, as the momentum behind AI development and deployment, both civil and military, gathers pace, the US and China need to build on this foundation by pursuing sustained, senior-level diplomacy on AI risks, even as each strives for the lead in the AI race. They must do so because the risks of AI are real and only growing.

For example, as AI capabilities advance and diffuse, nonstate actors like terrorist organizations could leverage them to threaten both the US and China — as well as the rest of the world. Such threats could take many forms, including cyberattacks that paralyze critical infrastructure, novel bioweapons that evade detection or treatment, disinformation campaigns that destabilize governments and societies, and AI-enabled lethal drones that can strike anywhere with little or no notice.

Nor do the risks stop there. As the US and Chinese militaries increase their use of AI — shortening decision loops and altering deterrence frameworks in the process — the risk of AI-powered systems inadvertently triggering a conflict or catastrophic escalation will grow. As AI becomes increasingly central to the global banking system, AI-powered trading could cause a market crash in the absence of adequate firewalling. And looking further ahead, one can imagine a powerful, misaligned AI system (one pursuing aims other than what its creators intended) threatening grievous harm to humanity. As the world’s only AI superpowers, the US and China need to engage one another directly to address these and other dangers.

Engagement does not mean that China and the US will stop competing vigorously. This autumn, China showed how sharp that competition had become when it issued extreme new export controls on rare earths that are critical to the production of microchips and other components of AI systems.

As national security adviser during the Biden administration, I worked hard to ensure that the US maintained its lead on AI so that the technology would work for us rather than against us. I saw that the race for leadership in military, intelligence and commercial applications, and for the adoption of American and Chinese AI models and applications by countries around the world, would only continue to heat up.

But it is precisely because of this intense competition that diplomacy is essential, even in this period of heightened tensions. It would be deeply irresponsible for the US and China to race ahead without engaging each other on the risks or without talking about the immense opportunities that AI presents to address transnational challenges — from the climate crisis to public health.

To be sure, leading thinkers in both countries have participated in “Track II” diplomatic efforts — talks outside of government, usually involving universities, business leaders and civil society groups. Such discussions are valuable and they should continue. Ultimately, though, there is no substitute for direct government-to-government engagement, even if it is quite modest at first.

Nor can engagement wait, given the breathtaking speed of technological advancement and the foreseeable difficulties of reaching diplomatic breakthroughs that are equal to the moment. Managing AI risks is uncharted territory, so progress will be neither swift nor easy. The US and China need to get started.

Many commentators have drawn parallels to nuclear arms control as it developed over the decades, and that analogy has some merit. Superpowers bear responsibility for managing the risks associated with powerful technologies, and we have successfully discharged that responsibility in the past through arms control agreements — including at the very height of the Cold War. But AI also presents different challenges, requiring more innovative approaches than arms control.

There are several reasons for this. First, verification is more vexing. Counting missiles and warheads, with their detectable signatures, is one thing; counting algorithms — let alone discerning all the capabilities and applications of a given algorithm — is quite another.

Second, the challenge of dual use presents itself differently in the case of AI. Yes, splitting the atom has both civilian and military uses, but there is a fairly straightforward line between peaceful nuclear power and nuclear weapons, and the International Atomic Energy Agency has a lot of experience policing it. By contrast, the same AI model that can help advance scientific research and generate economic growth can also help deliver terrifying lethal effects. This makes the competitive dynamics between the US and China much harder to manage and the line between opportunity and risk much harder to discern.

Third, arms control discussions have focused chiefly on states threatening other states, whereas AI risk involves state-to-state threats but also nonstate threats and the risks associated with AI misalignment. This presents different challenges and opportunities for diplomacy.

Fourth, at least in the US, AI development is driven not by the government but by the private sector — and not by one company but by many competing against each other. That means a wider range of actors must be involved in discussions aimed at mitigating the risks that the technology poses.

Lastly, there is a wide spectrum of views on how fast and how far AI capabilities will go. Some see it as a “normal” technology whose full adoption will take decades, while others argue that a superintelligence explosion is just a few years away. With nuclear weapons, you could get a slightly bigger or smaller detonation, or a faster or more maneuverable delivery vehicle, but you basically knew what you were dealing with. The evolution and impact of AI capabilities is much less clear.

When I served as national security adviser, I worked to make certain that the US government was ready for every scenario along the uncertainty spectrum. Doing so requires an additional level of flexibility, subtlety and steadiness. The nuclear arms control framework did not emerge overnight. It took years to devise the relevant export controls, testing schemas, verification protocols and guardrails — and it took decades of diplomacy to maintain them. With AI, we are at the opening stages of something similar in ambition but different in substance and complexity. That makes it all the more important to proceed with risk-reduction efforts immediately.

Jake Sullivan, US national security adviser (2021-25) to President Joe Biden, is Professor of the Practice of Statecraft and World Order at Harvard Kennedy School.

Disclaimer: Views expressed by writers in this section are their own and do not necessarily reflect The Times Union's point of view.