New Policies For AI In War

Amy Ishlan
April 06, 2024

Using AI, the Israeli army has marked tens of thousands of Gazans for assassination.

Inputs that matter: +972 Magazine reports that the Israeli army relied on the output of an AI system named "Lavender" with little human review, essentially treating the machine's target selections "as if it were a human decision."

  • "Formally, the Lavender system is designed to mark all suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ), including low-ranking ones, as potential bombing targets."
  • "Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present."
  • China is producing AI tech for the battlefield with an unmanned unit that can fire a 12.7 mm machine gun or rockets to support ground troops, operated as a swarm or individually.
  • The U.S. and NATO also have AI-based battlefield technology.

The opportunity: Foreign Policy details "The United Nations (UN) adopted a U.S.-led resolution on artificial intelligence, marking what Washington says is a major step toward establishing a global baseline to regulate the rapidly developing technology."

  • A UN press release on the measure warns, "An Algorithm Must Not Be in Full Control of Decisions Involving Killing."
  • The resolution approved by the UN's first committee in November of 2023 "expresses concern about the possible negative consequences and impact of autonomous weapons systems on global security and regional and international stability, including the risk of an emerging arms race, and lowering the threshold for conflict and proliferation, including to non-State actors."

Zoom in: The Week details that the Israeli military has vehemently denied the claims. "The IDF [Israel Defense Forces] outright rejects the claim regarding any policy to kill tens of thousands of people in their homes," it said in response to the allegations.

  • According to Dr. Elke Schwarz, a lecturer in political theory at Queen Mary University of London, the deployment of AI-enabled weapon systems has profound implications for the future of warfare.
  • "Technological innovation has always changed warcraft," Andreas Kluth wrote at Bloomberg in March.

Between the lines: For now, cybersecurity may be a more pressing defense challenge than lethal robots and AI targeting tools.

  • Ars Technica reports, "A federal Cyber Safety Review Board (CSRB) has issued its report on what led to last summer's capture of hundreds of thousands of emails by Chinese hackers from cloud customers, including federal agencies."
  • "The Cyber Safety Review Board (CSRB), formed two years ago, comprises government and industry officials from entities including the Departments of Homeland Security, Justice, and Defense, the NSA, FBI, and others."

Follow the money: Axios reports, "Autonomous weapons are no longer science fiction - and they're becoming a top priority for major military powers."

  • Anna Hehir of the Future of Life Institute says we need an international treaty to ban some of the most dangerous autonomous weapons and that we have a unique window now to do just that.
  • However, treaties have limits: the United States, Russia, and over 100 other states signed the UN Outer Space Treaty (OST) in 1967, yet in 2019 the Pentagon still created the Space Force, an entirely new military branch "focused solely on pursuing superiority in the space domain."
  • Moreover, weapons development is driven largely by private companies, not governments, which complicates enforcement.

Go deeper: Subscribe to the free newsletter to learn more.

Read More

  1. https://www.972mag.com/lavender-ai-israeli-army-gaza/
  2. https://aibusiness.com/responsible-ai/flying-dragons-and-sharp-claws-china-s-ai-powered-military
  3. https://arstechnica.com/information-technology/2024/04/microsoft-blamed-for-a-cascade-of-security-failures-in-exchange-breach-report/
  4. https://foreignpolicy.com/2024/03/21/un-ai-regulation-vote-resolution-artifical-intelligence-human-rights
  5. https://press.un.org/en/2023/gadis3731.doc.htm
  6. https://theweek.com/politics/the-age-of-ai-warfare
  7. https://www.axios.com/2024/04/04/ai-weapons-war-autonomous-regulation-ban