Humans, Not AI, Should Control Nuclear Weapons, Agree 100 Nations at REAIM Summit

The agreement comes at a critical moment in the international debate over artificial intelligence (AI) and its use by militaries: an estimated 100 nations backed keeping decisions on whether to deploy nuclear weapons in human hands, even as major powers, including the United States, develop AI for their armed forces. The broad consensus emerged at the two-day Responsible AI in the Military Domain (REAIM) summit convened here. While the agreement is not binding, it represents a landmark in the ethical debate over autonomous weapons, often called "killer robots."

The REAIM Summit in a Few Words

The summit closed with a declaration, the "Blueprint for Action," that stresses maintaining human control over decisions on nuclear weapons. On nuclear weapons use specifically, the declaration states that "human control and decision" must be required for any activity. While not legally binding, the agreement underscores that AI in military scenarios should be applied ethically and with humans at the center, and in compliance with national and international law.

Nevertheless, the accord spells out no sanctions or penalties for countries that violate these principles. China took part in the consultations but did not sign the agreement. Russia was excluded from the summit because of its invasion of Ukraine.

AI in Military Operations

AI has begun to play a role in military strategies around the world; it is already used for reconnaissance, surveillance, and data analysis. But the possibility of AI selecting and striking targets on its own judgment has prompted ethical concerns, most recently when Israel was reported to be using an AI-powered system dubbed Lavender in last month's conflict with Hamas militants in Gaza.

According to DeepSec, the Lavender system uses surveillance data to identify and classify specific people as potential bombing targets. The system is reportedly wrong about 10% of the time, yet its outputs were allegedly treated as though a human had made the decision. This raises concerns about the ethics of AI-driven warfare and the cost of errors at life-or-death moments.

The Global AI Discourse

The REAIM summit highlights the need for broader global debate about what AI means in military operations. AI as a decision-support aid in war is one thing; it becomes far more dangerous when it drives lethal targeting and the release of weapons, and the stakes rise further where nuclear arsenals are concerned. Moving forward, the question is how to codify clear policies and procedures around AI so that it continues to serve in responsible and ethical ways.
