Israel’s military uses AI systems to identify Hamas targets in Gaza. Programs nicknamed “The Gospel,” “Lavender,” and “Where’s Daddy?” analyze data, track phones, and scan buildings. After the October 7 attacks, these tools shortened the creation of target lists from months to about a week. While officials defend the technology as legal, critics worry about civilian casualties and privacy violations. The global military community watches closely, as this approach could reshape future warfare.
While Israel’s military has long been considered technologically advanced, its use of artificial intelligence in the Gaza conflict has transformed modern warfare into something previously unseen. Following the October 7, 2023 attacks, the Israel Defense Forces (IDF) rapidly deployed AI systems to identify and eliminate Hamas targets with unprecedented speed.
Programs like “The Gospel” scan for buildings that might house Hamas operations, while “Lavender” identifies suspected militants for targeting. These AI systems have dramatically shortened the time needed to generate target lists from months to just one week.
Another tool, called “Where’s Daddy?”, tracks phone movements to confirm a target’s identity before a strike, though this has sometimes led to attacks on family homes. The assassination of Ismail Haniyeh in Tehran reportedly involved a high-tech, remotely detonated bomb with AI capabilities.
These technologies emerged from collaboration between Unit 8200, Israel’s signals-intelligence unit, and reservists employed at major tech companies such as Google and Microsoft. The systems combine civilian tech innovations with military applications, though the companies themselves are not directly involved in their military use.
The AI systems include facial recognition that can identify partially obscured faces, as well as audio surveillance tools that analyze calls and background noise to locate both Hamas fighters and hostages. These tools helped track high-profile Hamas leaders, including Ibrahim Biari, who was killed along with some 50 other militants in a November 2023 operation.
Despite their effectiveness, these systems have raised serious ethical concerns. Critics point to increased civilian casualties in targeted strikes and question whether AI recommendations lead to wrongful targeting, concerns that former NSC director Hadas Lorber has also highlighted.
While the IDF maintains that it uses these technologies legally and responsibly, specific details remain classified. Many experts are concerned about privacy violations, as these AI systems collect vast amounts of personal data, often beyond their intended use. Pentagon officials and international observers have called for greater transparency in how strike decisions are made.
The debate continues about how much human oversight should exist when AI helps make life-or-death decisions. Israel’s AI warfare approach has created a new model that militaries worldwide are watching closely.