Two recent investigations have revealed the extent of artificial intelligence (AI) use in the genocide being waged on Gaza, as well as in the surveillance of populations in the occupied Palestinian territories. Startups, often portrayed as symbols of Israel’s modernity, are in fact an integral part of the country’s military-industrial complex. A third investigation shows how the American giant Microsoft is complicit in the strategy of death against Palestinians.

“Once automation takes over, the target identification process goes beyond any control.” This statement—exposing a deadly acceleration in the Israeli army’s methods—was made by an officer involved in Operation Iron Swords in Gaza, which followed the October 7th attack. It underscores the growing role of AI in Tel Aviv’s extermination strategy.

The Israeli military had already used AI in previous operations to refine target identification. But for the first time after October 7th, the army delegated to AI the task of designating targets for assassination, expanding the scope of acceptable “collateral damage.” This was revealed in an investigation by journalist Yuval Abraham of +972 Magazine, based in part on documents obtained by Drop Site News and on anonymous military sources.

37,000 Potential Targets

According to Yuval Abraham, the use of AI, previously just a tool for indexing documents and aiding decision-making, has intensified while human intervention has diminished. In 2021, the current commander of Israel’s elite intelligence Unit 8200 argued, in a book titled “The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World,” for a program that would automate data collection and decision-making using AI. By 2024, such a project, named “Lavender,” was operational.

Once Lavender identifies someone as a Hamas militant, that designation becomes a death sentence, with no verification of the data that led the machine to its conclusion. Ignoring rank and military role, the AI marked 37,000 Palestinians as targets on the basis of “shared characteristics” with known Hamas and Islamic Jihad members. The military accepted the risk that this logic would misidentify police officers, civil defense personnel, relatives of militants, or people with similar names as targets, which is exactly what happened.

Previously, assassinating a single individual required an identification process ensuring the target was indeed a high-ranking Hamas military official—the only ones eligible for home bombings. With AI in charge, verification was reduced to mere seconds before a strike.

Lavender played a crucial role in the intensified bombing, especially in the war’s early stages, where officers—shielded by their command—were not required to verify the AI’s selections (despite a 10% error rate). They only had to confirm the target was male, as Hamas and Islamic Jihad do not use female fighters.

This approach led to the killing of 15,000 Palestinians in the first six weeks of Israel’s Gaza offensive. The massacre did not stop for ethical reasons but due to fears of running out of bombs—a baseless concern, as U.S. aid continued uninterrupted.

Exterminating 2% of Gaza’s Population in Ten Months

An anonymous source told Yuval Abraham:
“In war, there’s no time to verify every target. So we accept AI’s margin of error, risking collateral damage, civilian deaths, and mistaken strikes—and live with it.”
The source explained:
“Automation was driven by the constant demand for more assassination targets. On days when no targets’ profiles met the criteria for a strike, we attacked less. They pressured us, screaming, ‘Bring us more targets!’ In the end, we had to kill targets extremely quickly.”

This resulted in the extermination of 2% of Gaza’s population in ten months, per official figures. With no limits to this deadly ambition, another automated system tracked the marked men and located them the moment they entered their family homes. This data-linked program, mockingly nicknamed “Where’s Daddy?”, allowed strikes at dawn that often wiped out entire families, while a companion AI system, “The Gospel”, generated targets among buildings and infrastructure.

According to UN figures, in the war’s first month, over half the casualties (6,120 people) were from 1,340 families killed inside their homes. In this genocide, it’s no surprise most victims were women and children.

The Israeli military also preferred unguided “dumb bombs” with unreliable precision, dubbed “silent bombs” for their undetectability. An anonymous source described their protocol:
“The only question was: Can we strike the building while limiting collateral damage? In reality, we mostly used dumb bombs—literally wiping out entire homes and their inhabitants. Even if an attack was avoided, we didn’t care and moved to the next target. Thanks to this system, targets never run out. There are still 36,000 waiting.”

Collateral damage was treated differently depending on the war phase. In the week after October 7th, Israel ignored it entirely, as confirmed by the source:
“We didn’t even count people in bombed homes because we didn’t know if they were inside.” Later, limits were imposed—first 15, then five civilians per strike—making attacks harder. But these restrictions were later lifted.

This allowed the military, in October 2023, to assassinate Ayman Nofal, commander of Hamas’ Central Gaza Brigade, in airstrikes on the Al-Bureij refugee camp that destroyed multiple buildings and for which the killing of roughly 300 civilians had been authorized.

An Unprecedented 21st-Century Kill Rate

On April 5, 2024, when asked by the Canadian public broadcaster Radio-Canada about the credibility of the Israeli media’s revelations, Brianna Rosen, who has studied the militarization of new technologies, including AI, for 15 years, confirmed the core findings of the investigation. In an email, she wrote:
“There’s much we don’t know about Israel’s AI use in Gaza. Israel has not been transparent about these classified systems.”
Rosen, a researcher at Oxford University and Just Security, added:
“What we do know: Israel uses AI-dependent systems to accelerate the pace and scale of war.”
Her counterpart Amélie Férey, of the Defense Research Laboratory at the French Institute of International Relations (IFRI), echoed this in a July 18, 2024 interview with Mediapart:
“What we’re seeing in Gaza is a kill rate almost unheard of in the 21st century.”

Knowing What Every Palestinian Does

Another investigation, by +972 Magazine and The Guardian, revealed that Unit 8200, part of Israeli military intelligence, was tasked with developing a surveillance system combining facial recognition with a ChatGPT-like chatbot fed with data on Palestinians’ daily lives under occupation.

An intelligence source told Yuval Abraham:
“AI amplifies power. It uses data from more people to execute operations, enabling population control. It’s not just about preventing shootings—I can track human rights activists or even monitor Palestinian construction in Area C (West Bank). I have more tools to know what everyone in the West Bank is doing.”

Israeli military intelligence realized existing language models only understood formal Arabic, not the “dialects that hate us” (in one officer’s words). Reserve soldiers working in tech companies bridged this linguistic gap.

Ori Goshen, co-founder of one such company, AI21 Labs, said clients could now “ask questions and get answers” from the chatbot, for instance to determine whether two people had ever met or whether someone had committed a crime. He nonetheless admitted:
“These are probabilistic models. You ask a question, and they generate something that seems like magic. But often, the answer is meaningless. We call this ‘hallucination.’”

These advanced language models, combined with heightened surveillance across the occupied territories, have tightened Israel’s grip on Palestinians and driven up arrests, with as many Palestinians jailed as were released under the hostage deal with Hamas.

Microsoft: Committed to Tel Aviv

Since October 7th, the Israeli military has deepened its collaboration with Microsoft’s cloud and AI services and with Microsoft’s partner OpenAI, and Microsoft employees have been embedded in military units to deploy these technologies. Microsoft’s own website admits:
“Microsoft experts become an integral part of (the military’s) team.”

Documents reviewed by Yuval Abraham show dozens of military units purchased services from Microsoft Azure. Microsoft also provided expanded access to OpenAI’s GPT-4, the engine behind ChatGPT.

Until early 2024, OpenAI’s usage policies included a clause banning “military and warfare” applications of its technology. But in January 2024, as Israel’s reliance on GPT-4 grew during the bombing of Gaza, the company quietly removed this clause and opened the way to partnerships with militaries and intelligence agencies, from which Israel benefited.

Israeli military units using Azure include:

  • Ofek Unit (Air Force): Manages bombing target databases.
  • Matzpen Unit: Develops combat support systems.
  • Saber Unit: Maintains military intelligence IT infrastructure.
  • Unit 8200’s tech branch: Produces surveillance equipment.
  • Military Advocate General: Prosecutes Palestinians—and theoretically, soldiers violating the army’s “ethical code.”

The same documents indicate that “Rolling Stone,” the system used to track Palestinian movements in the West Bank and Gaza, relies on Azure. Microsoft maintains close ties with Israel’s Defense Ministry, handling “sensitive workloads” that no other cloud company touches.
