Stop Catastrophic AI
We condemn the deployment of artificial intelligence (AI) where it indiscriminately kills civilians and risks crimes against humanity. We call for an urgent review of public and private sector AI in conflicts and of the deployment of autonomous and semi-autonomous AI for state violence.
We must act now to prohibit AI deployed for the use of force even where humans retain control.
As we have witnessed, Israel's devastating military campaign, made possible by AI, is indefensible: it has killed tens of thousands of Gazans, including nearly a hundred journalists, struck over a hundred health facilities, and destroyed tens of thousands of dwellings. Destruction at this speed and scale is only possible with AI targeting. Even with human control, the humanitarian crisis plainly contradicts the Department of Defense (DoD) claim that "artificial intelligence (AI) technologies enable leaders to make better decisions faster, from the boardroom to the battlefield."^1
As technology workers, we are appalled and alarmed by the reporting^2 on the role of AI in scaling this war and the slaughter of Palestinian civilians it has wrought. "Operation Iron Swords," which followed the Hamas-led assault in southern Israel on October 7th, has seen a significant expansion in the targeting of facilities that are not military in nature, including historic sites, apartment blocks, residences, schools, humanitarian buildings, and public infrastructure, with the intent of shocking the civilian population into submission. Intelligence units using AI know in advance approximately how many civilians will be killed, in strikes where "civilian-to-combatant deaths is at least five to one and probably much greater."^3 Indiscriminate AI targeting and collateral damage estimations far exceed the DoD's standard of lawful and ethical behavior, suggesting that any use of AI in armed conflicts cannot be regulated.
Since October 7th, Israel has struck tens of thousands of targets, an unprecedented figure compared to previous operations, many of them of little to no military value. In a statement, the IDF confirmed that the AI system Habsora "enables the use of automatic tools to produce targets at a fast pace, and works by improving accurate and high-quality intelligence material according to operational needs … causing great damage to the enemy and minimal damage to non-combatants."^1 It generates bombing sites in real time that "tens of thousands of intelligence officers could not process," with an "emphasis on quantity and not on quality," producing targets faster than strikes can be conducted. This reliance on AI has led to indiscriminate bombings and collateral damage far exceeding other modern conflicts, and likely illegal under international humanitarian law.
We are the first to witness AI warfare and mass slaughter deployed against a blockaded population, half of whom are children. What targets are AI decision support systems surfacing that call for heavy bombing among residential blocks? How does AI weigh vulnerable populations like children, the elderly, and the injured in its collateral damage estimations? Does AI consider humanitarian operations in the area? Gaza is the first wartime deployment of AI at this magnitude, over 40,000 tonnes of munitions, the equivalent of three nuclear bombs, and it will not be the last unless action is taken. This AI-enabled bombing campaign continues unabated because of US-supplied munitions.^4 Semi-autonomous AI warfare has already been shown to:
- Have no limits in its collateral damage
- Maximize civilian harm in its campaigns
- Operate with little human interaction and oversight, and
- Conduct destruction at a tempo and totality unseen before
Stop Catastrophic AI
Catastrophic AI operates well beyond compliance with international humanitarian law (IHL), and it can be safely assumed that civilian casualties from Israel's campaign will near 20 times those from Russia's targeting of Ukrainian civilians,^5 while casualties among children dwarf those of previously recorded conflicts. This campaign is an urgent call to address the harms of semi-autonomous and autonomous uses of AI and for new international laws governing the deployment of AI in armed conflicts. This leap in AI warfare represents a grave violation of international humanitarian law, risks setting a new precedent for civilian harm in ongoing conflicts, and incentivizes an AI arms race driving humanity toward complete catastrophe not unlike what we have witnessed in Gaza.
We call on private and public sector leaders to determine the extent to which AI services and technologies have been deployed to the battlefield and to act in accordance with the United States Conventional Arms Transfer Policy where it is “more likely than not” that the weapons will be used to commit or “aggravate the risks” that the recipient will commit “genocide; crimes against humanity … including attacks intentionally directed against civilian objects or civilians protected as such; or other serious violations of international humanitarian or human rights law, including … serious acts of violence against children.”^6
We stand with the UN Secretary-General and the International Committee of the Red Cross in calling for a legal instrument on artificial intelligence in the military before catastrophic AI proliferates.