
Weapons of Moral Dilution: How AI Is Reprogramming Warfare, Surveillance, and Repression in the Middle East

Updated: Nov 30

Written by Fiza Rizvi, MSc Applied Social Data Science


The character of warfare is shifting toward an algorithmic battlefield, increasingly governed by the firepower of data. Though far from the science-fiction image of robots wielding heavy artillery, this once-speculative prospect has turned the Middle East into an experimental ground for artificial intelligence (AI).


With the MENA region’s IT spending projected to reach nearly $169 billion in 2026, fueled in part by investment in AI, today’s technological military advances may soon be a familiar sight. National security has become the region’s most coveted, and scarcest, resource as proxy wars intensify and Iranian-Israeli hostilities escalate amidst the horrors in Gaza. Yet as countries turn to AI spending for defense, the future of warfare grows increasingly uncertain. Is AI merely a tool for augmenting human decision-making, or are we witnessing the quiet surrender of human control in warfare? The Middle East’s security dilemma reflects a sharp resurgence of realpolitik, in which security is defined more by the brutish projection of power and the preemption of threats than by the diplomatic pursuit of peace. For the purpose of this analysis, I will focus on three main regional players: Saudi Arabia, Iran, and Israel.


The AI Security Race


Saudi Arabia


Spearheaded by Saudi Arabia’s Vision 2030, the Middle East is witnessing an aggressive push toward the integration of artificial intelligence into military planning and infrastructure. In pursuit of strategic self-sufficiency, Riyadh has prioritized the development of autonomous defense systems for objectives ranging from logistics optimization to securing its southern border with Yemen. The 1,307 km border between Saudi Arabia and Yemen stands as the frontline of the proxy war with the Iranian-backed Yemeni Houthis. Between 2016 and 2024, the Saudi government spent $1.5 billion on integrating AI into surveillance, deploying night pulser cameras and thermal imaging along the border.


In 2023, the Kingdom signed a landmark $3 billion defense deal with Turkey to acquire Bayraktar AKINCI combat drones, marking one of the largest defense exports in Ankara’s history. These unmanned aerial vehicles (UAVs) feature AI-assisted flight control systems and autonomous terrain navigation, allowing for highly precise strikes without constant human input. AKINCI drones can also independently jam enemy communications and radar, offering a strategic edge in controlling contested airspace. Across conflict zones like Yemen, the Red Sea, and the Strait of Hormuz, AI now functions as a force multiplier for Saudi Arabia, amplifying its capabilities against adversaries who still largely rely on conventional systems.


Iran


AI deployments that appear preemptive or deterrent encourage aggressive countermeasures from opponents. Saudi acquisition of the AKINCI poses a direct challenge to Iran’s regional edge in unmanned drones. Iran has fielded loitering munitions such as the Shahed-131 and Shahed-136: one-way attack UAVs designed to strike and self-destruct on impact. These systems were used in Iran’s April 2024 retaliation against Israel, Operation True Promise, and have been exported to Russia for use in Ukraine. A persistent danger is the modifiability of these platforms: Ukrainian examinations reportedly found Russian modifications to Shahed-136s that included upgraded guidance and AI enhancements. The advancement of UAVs suggests growing potential for AI augmentation, the two working in tandem to reduce human oversight.


General Hossein Salami, commander of the Islamic Revolutionary Guard Corps (IRGC), has claimed that Iran is integrating AI features into its drone fleets. Even if such statements serve partly as deterrence rhetoric, the implication is clear: AI-enabled UAVs offer Iran asymmetric capabilities to signal power, especially across maritime zones, compensating for Iran’s relatively modest conventional navy. As AI adoption accelerates, the Middle East’s fragile balance of power risks increasing volatility; Gulf states’ counter-efforts and Iran’s responsive build-up threaten to fuel a broader arms race for regional supremacy.


Israel


AI’s use in UAVs, surveillance, and logistics is visible across regional investments and arms trade, perhaps nowhere more controversially than in Israeli targeting systems used in Gaza. The IDF reportedly employs a suite of AI tools, including a system referred to as “Gospel,” which analysts say identifies potential infrastructural targets where Hamas or Palestinian Islamic Jihad operatives may be positioned. Gospel is described as analyzing drone footage, satellite imagery, intercepted communications, and other datasets to generate target suggestions that are then evaluated by human intelligence analysts. Another tool, reportedly called “Lavender,” is said to assist in querying a large database of individuals to help identify suspects linked to militant groups. This data is collected through the mass surveillance of the over 2 million residents of Gaza. Together, these systems surface intelligence for human decision-makers about where, and in some cases whom, to strike.


Unlike conventional weapons, AI systems are black boxes: their decision-making is often abstruse even to their operators. This creates a dangerous paradox: as states race to gain strategic power through automation, they may also be ceding command to systems they do not fully understand. The question facing the Middle East is no longer if AI will dominate the battlefield, but who, or what, will ultimately be in charge when it does.


Close Precision or Distanced Morality?


The power of artificial intelligence in warfare and security is seductive: expansive targeting, smarter surveillance, and fewer boots on the ground. Precision is its selling point, but at what expense? As algorithms are elevated from supplementary tools to deadly decision-makers, the delineation between human and algorithmic control becomes blurred. Where machines cannot fathom complex ethical quandaries, the reduction of human oversight to a formality underscores the prioritization of speed over careful review.


This section probes the ethical stakes of AI’s use and potential across the three regional actors, each converging toward a shared risk: in the pursuit of precision, they are building systems that operate at a fatal distance from moral judgment.


Israel


A former Israeli intelligence officer described the IDF’s use of the Lavender and Gospel systems as operating a “mass assassination factory”. It is a characterization aimed not at the automation itself, but at the absence of meaningful human oversight. +972 Magazine was able to confirm and verify the identities of seven former IDF whistleblowers, mainly in the air force and military intelligence, although their claims are disputed by the IDF. According to one officer, Lavender identified approximately 37,000 individuals as potential targets within the first few weeks of the conflict, while human review of these recommendations was reportedly limited to a cursory “20-second rubber stamp” rather than a substantive evaluation process.


While the full accuracy of these claims remains contested, they underscore a broader and well-documented risk in the integration of AI into military decision-making: automation bias. This phenomenon refers to the tendency of human operators to over-rely on the outputs of automated systems, often trusting machine-generated assessments over their own judgment. In high-pressure environments such as armed conflict, where time determines outcomes, this bias weakens ethical scrutiny: efficiency overshadows prudent deliberation. A so-called “human-in-the-loop” framework is often cited as a safeguard, but in practice it can prove to be nothing more than a pseudo-check.


Iran


The Islamic Republic of Iran has long relied on clandestine smuggling networks to supply weaponry to its regional proxies, including Hezbollah in Lebanon, Hamas in Gaza, and various Shia militias in Iraq and Syria. These groups have increasingly integrated unmanned aerial vehicles (UAVs) into their arsenals, marking a shift from rudimentary drone use to more advanced, semi-autonomous systems. Hezbollah and Hamas have both deployed drones in combat operations, with confirmed use in surveillance, kamikaze-style attacks, and cross-border provocations. Notably, Hezbollah has launched Iranian-origin drones equipped with AI-powered image recognition systems near the Israeli-Lebanese border, which are reportedly capable of identifying and tracking ground assets.


This trend reveals a haunting threat: the impending widespread diffusion of AI-enabled weapons into the arsenals of non-state actors. Unlike states, proxies like Hezbollah and Hamas answer to no international organizations and operate largely beyond the reach of human rights law. As Iran advances its development and export of loitering munitions and AI-enhanced drones, the risk lies not just in weaponization by states, but in how easily these systems, or their core technologies, can be repurposed, modified, and redeployed by actors operating outside any legal accountability. Existing arms control frameworks are ill-equipped to address the challenges posed by dual-use technologies like AI, whose proliferation is harder to trace and even harder to regulate.


Saudi Arabia


While Saudi investment in AI surveillance has improved its capacity to monitor border activity and detect militant threats, particularly from the Houthi movement, it also raises concerns about the extension of domestic repression. In a state characterized as an absolute monarchy, the use of AI to process and analyze vast volumes of personal data poses significant risks to freedom of expression and civil liberties. For regimes like Saudi Arabia’s, AI can serve not only as a security asset but as a powerful tool for consolidating control.


Saudi authorities have reportedly used AI-enabled monitoring systems to detect and remove online dissent, limiting the spread of critical content and preventing opposition narratives from gaining traction. The Kingdom’s substantial investment in drone surveillance, mass data collection, and AI infrastructure encroaches on what limited space remains for political expression. With data-processing power far beyond human capability, AI surveillance risks institutionalizing repression at an unprecedented scale.


Whether in the form of mass civilian casualties, the consolidation of autocracy, or the haze of untraceable militants wielding advanced weaponry, what binds these cases is the same unsettling truth: liability becomes intangible. AI promises close precision, but without regulation, delivers distance from accountability, security, and morality. Left unregulated, AI will not just change how violence is justified and executed, but how easily it is forgotten.


Governing the Algorithmic Battlefield


AI’s proliferation does not require uranium enrichment plants or intercontinental launch systems; rather, it requires lines of code, cloud storage, and strategic intent. In AI, the code humans write does not fully determine a machine’s output. As AI becomes more cognitively advanced through training on vast datasets, it grows more capable but also less predictable. This makes regulation far more difficult. No legally binding, comprehensive framework currently governs the use of AI in military operations.


Existing international humanitarian law, most notably the Geneva Conventions, rests on a foundational assumption: that humans make decisions in war. The Geneva Conventions demand proportionality, distinction, and precaution in armed conflict. But can an algorithm truly assess the proportionality of a retaliatory strike when a militant is hiding in a residential building? Can an autonomous drone be held legally accountable if a civilian is killed in pursuit of a target? Can a dataset reliably distinguish between a high-value operative and a political dissident? The legal architecture meant to preserve humanity in the most inhumane of activities risks becoming obsolete.


Even the “human-in-the-loop” model, often cited as a safeguard, crumbles with advancement: the more data an AI parses, the less meaningful human oversight becomes. When humans cannot fully understand, much less audit, the algorithmic pathways behind weaponry, accountability and morality become intangible artifacts in the cloud. Recommendations disproportionately focus on human oversight, yet major powers still lack enforceable regulatory bodies for private AI enterprises. This is a critical gap, and perhaps the starting point. Companies like the UAE’s EDGE and the United States’ Palantir, which develop high-risk AI products, must be held to rigorous standards. A comprehensive, enforceable multinational framework is urgently needed: one that not only mandates operational norms for military AI but also demands transparency between machines and humans. Where possible, AI systems must be supplemented with explainable AI techniques, offering robust insight into how decisions are made. Only then can we begin to demystify the black box and encode human morality into AI.
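To make “explainability” concrete, the sketch below is a toy, hypothetical illustration on purely synthetic data (it does not reflect any real military or commercial system): a classifier is trained on made-up inputs, and permutation importance is used to reveal which signals actually drive its recommendations, the kind of minimal insight an auditor would need before trusting, or challenging, an automated output.

```python
# Toy, hypothetical illustration of explainability: which inputs drive a model's output?
# Synthetic data only; nothing here reflects any real system or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=0)
feature_names = ["signal_a", "signal_b", "signal_c", "background_noise"]

# Synthetic inputs; by construction, the label depends mostly on signal_a and signal_b.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Permutation importance: shuffle each feature and measure how much accuracy drops.
# A large drop means the model's decisions genuinely depend on that feature.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: mean accuracy drop = {score:.3f}")
```

A simple audit like this would not resolve the accountability questions above, but it illustrates the baseline level of transparency an enforceable framework could demand of vendors: evidence of which inputs a system relies on, not just the outputs it produces.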



Sources


① « Gartner Forecasts MENA IT Spending to Reach $169 Billion in 2026 », Gartner, 4 August 2025.


② Center for Preventive Action, « Conflict in Yemen and the Red Sea », Council on Foreign Relations, 12 November 2025.


③ Noor Omer, « AI As a Force Multiplier in the Middle East Defense Systems », Innov8, 18 March 2025.


④ « How Saudi Arabia Uses AI to Monitor Yemen », StarNavi, 23 February 2025.


⑤ Girish Linganna, « Inside Saudi Arabia's $3 billion Turkish drone gamble », 15 October 2025.


⑥ Sebastian Clapp, « Defence and artificial intelligence », European Parliamentary Research Service, April 2025.


⑦ Joe Emmett, Trevor Ball, N.R. Jenzen-Jones, « Shahed-131 & -136 UAVs: a visual guide », Open Source Munitions Portal, 2025.


⑧ Vlad Litnarovych, « Ukraine Reportedly Shoots Down Russia’s New AI-Powered Shahed-136 Drone. Here’s What We Know », United24 Media, 19 June 2025.


⑨ Michael Rubin, « Iran Announces Integration of Artificial Intelligence Into Drone Fleet », American Enterprise Institute (AEI), 1 October 2023.


⑩ Khaldoun Khelil, « AI and Israel’s Dystopian Promise of War without Responsibility », Center for International Policy, 8 April 2024.


⑪ Michael N. Schmitt, « The Gospel, Lavender, and the Law of Armed Conflict », Lieber Institute, West Point, 28 June 2024.


⑫ Elke Schwarz, « Gaza War: Israel Using AI to Identify Human Targets, Raising Fears That Innocents Are Being Caught in the Net », Queen Mary University of London, 17 April 2024.


⑬ Yuval Abraham, « Inside “Lavender”: AI and Israel’s Targeting Algorithms », +972 Magazine, 3 April 2024.


⑭ Lauren Kahn, Emelia Probasco and Ronnie Kinoshita, « AI Safety and Automation Bias: The Downside of Human-in-the-Loop », Center for Security and Emerging Technology (CSET), November 2024.


⑮ « Captured Documents reveal how Iran smuggles weapons via Syria and Jordan », The Meir Amit Intelligence and Terrorism Information Center, 19 December 2024.


⑯ Clarisa Nelu, « Exploitation of Generative AI by Terrorist Groups », International Centre for Counter-Terrorism (ICCT), 10 June 2024.


⑰ Amir Bohbot, « Hamas and Hezbollah’s drone warfare poses new threats to Israel's security - analysis », The Jerusalem Post, 15 June 2024.


⑱ « OSCE Arms Control Portal », Organization for Security and Co-operation in Europe (OSCE), 2025.


⑲ « Saudi Arabia has big AI ambitions. They could come at the cost of human rights », The Conversation, 16 May 2025.


⑳ Albert Cevallos, « How Autocrats Weaponize AI — and How to Fight Back », Journal of Democracy, March 2024.


㉑ Brian Judge, Mark Nitzberg, Stuart Russell, « When code isn’t law: rethinking regulation for artificial intelligence », Policy and Society, Center for Human-Compatible AI, University of California, Berkeley, 19 May 2024.


㉒ « What is International Humanitarian Law? », International Committee of the Red Cross (ICRC), July 2004.


㉓ Kristian Humble, « War, Artificial Intelligence, and the Future of Conflict », Georgetown Journal of International Affairs, 12 July 2024.


㉔ Theresa Adie, MS, « Harnessing Technology to Safeguard Human Rights: AI, Big Data, and Accountability », Human Rights Research Center, 8 April 2025.



