Microsoft has officially confirmed it supplies AI and cloud computing services to the Israeli military during the ongoing Gaza war. However, the tech giant denies that its technology has been used to harm civilians or facilitate violence.
Microsoft’s Admission and Denial
On May 15, 2025, Microsoft issued a statement responding to public and employee concerns about its role in the Israel-Gaza conflict. The company acknowledged providing the Israel Ministry of Defense with software, professional services, Azure cloud computing, and AI capabilities, including language translation tools. Microsoft also supports Israel’s cybersecurity efforts against external threats.
Despite these admissions, Microsoft insists its technology has not been used to target or harm individuals during the Gaza conflict. The company cited internal and external investigations—including interviews with employees and document reviews—that found no evidence of misuse of its Azure or AI platforms for violent purposes.
Scope of Services Provided
Reports indicate Microsoft’s Azure cloud services have been integrated into various branches of Israel’s defense forces, including air, ground, naval, and intelligence units. Some technology use appears administrative, while other elements reportedly support combat and intelligence operations.
Microsoft has also provided the Israeli military with access to OpenAI’s GPT-4 model since OpenAI lifted its ban on military and intelligence clients in January 2024. This development raises further ethical questions about AI’s role in modern warfare.
Employee Backlash and Ethical Concerns
Microsoft has faced internal protests over its contracts with the Israeli military. In February 2025, five employees were removed from a CEO meeting after demonstrating against these contracts. Earlier, two employees were dismissed for holding a vigil for Palestinian refugees. This indicates a growing divide within the company on the ethical implications of its technology partnerships.
Microsoft’s Responsibility and Limitations
While Microsoft emphasizes its commitment to responsible AI use—including human oversight and prohibitions on harm—it also admits it lacks direct visibility into how customers deploy its software once delivered. This gap raises critical concerns about accountability when technology is potentially used in conflict zones.
Industry Context and Similar Moves
Microsoft’s confirmation follows similar reports about Google providing AI and cloud services to the Israel Defense Forces. Google revised its AI principles in early 2025, removing explicit pledges not to supply AI for weapons or surveillance systems, aligning with a broader tech industry trend toward military collaboration.
Human Cost of the Conflict
The war in Gaza has killed more than 50,000 people, many of them civilians, including women and children, according to the Associated Press. The involvement of major tech companies in providing military-grade AI and cloud infrastructure intensifies debates over corporate ethics and complicity in wartime actions.
Microsoft’s public acknowledgment of its role in supplying AI and cloud services to the Israeli military amid a devastating conflict marks a critical moment. Despite claims of no misuse, the company’s lack of control over how its technology is applied in combat situations raises urgent questions about corporate responsibility, ethical AI deployment, and the need for transparent oversight in warzones.