[...]

Developed for Gaza, Sold Abroad

These AI-powered systems—trained, tested, and refined during the war on Gaza—are not developed in a vacuum. Gaza has functioned as a live laboratory for these technologies, allowing Israel and its corporate partners to demonstrate the effectiveness and ‘efficiency’ of AI-enhanced warfare in real time. As with previous military technologies (e.g., drones), what is developed and tested in Gaza is often marketed globally as a ‘battle-tested’ solution, fueling a profitable security industry that benefits from war and repression. Indeed, Israel is already one of the world’s largest arms exporters relative to its size, and its AI technologies are likely to become core components of its growing defense export portfolio.

As these AI systems mature and demonstrate their ‘effectiveness’ in high-casualty environments, there is an increasing risk that they will be sold to regimes with long histories of human rights abuses. Governments seeking to consolidate power, suppress dissent, or control marginalized populations will find in these AI technologies an attractive toolset. Surveillance platforms such as facial recognition software and automated target selection systems, especially when paired with biometric databases or predictive policing algorithms, can become instruments of mass control and political persecution.

For instance, authoritarian governments could purchase and deploy variants of Lavender or facial recognition systems similar to those used in Gaza, repurposed to monitor and neutralize political opposition, ethnic minorities, or protest movements. Such systems, powered by partnerships with U.S. firms or trained on data from U.S.-linked cloud infrastructure, would be difficult to regulate once exported.
Without enforceable international regulations, tech companies face few legal or financial consequences for supplying repressive regimes with tools of digital authoritarianism.

Furthermore, the revolving door between Silicon Valley, the Pentagon, and foreign militaries such as the Israel Defense Forces facilitates the rapid international spread of these technologies. With the proliferation of AI-enabled surveillance and targeting tools, the distinction between ‘defense technology’ and tools of domestic repression becomes increasingly blurred.

As Matt Mahmoudi of Amnesty International warns, the opacity of these partnerships means that “U.S. technology companies contracting with Israeli defense authorities have had little insight or control over how their products are used by the Israeli government”—a dynamic that is likely to be replicated in other jurisdictions where authoritarianism is on the rise.

In this context, the Gaza war may represent not just a humanitarian catastrophe but also a pivotal moment in the globalization of AI-enabled warfare. If left unregulated, the collaboration between private tech firms and military powers risks accelerating the spread of surveillance, repression, and high-casualty targeting strategies around the globe, placing civilians in authoritarian regimes—and even democratic ones—at unprecedented risk.