Tech companies must face up to accusations of complicity in conflict

29 April 2025

At a recent event marking tech giant Microsoft’s 50th anniversary, its chief executive for artificial intelligence, Mustafa Suleyman, was speaking about new AI features the company is introducing when he was interrupted by Ibtihal Aboussad, a software engineer at the company. Aboussad accused Microsoft of having blood on its hands because its cloud computing product, Azure, is used by the Israel Defense Forces (IDF) in Gaza.

Another event, which included Microsoft’s top leadership, was interrupted by employee Vaniya Agrawal, who made similar accusations of complicity in ‘surveillance, apartheid, and genocide.’

Both Aboussad and Agrawal were fired immediately. This is not the first time Microsoft has faced such internal protests.

These cases raise two interlinked questions: should employees have the right to protest against the actions of their employer, and should companies develop technologies that can be misused?

Permitting protests by staff

Protests of the kind that occurred at Microsoft are not unique to the company. Employees at Google have staged sit-ins protesting the company’s ties with Israel, and over 50 employees were fired as a consequence. Employees have long spoken out against company policies: when companies traded with apartheid-era South Africa, or with Myanmar in the 1990s, when companies prevented the formation of unions, or when employees chose to become whistleblowers.

To be sure, freedom of expression is not a clear-cut issue, and governments around the world do not agree with one another on what is protected speech and what is not. Rules that restrain the state from restricting speech do not apply to the private sector: companies can frame their own rules. The question is whether those rules are reasonable. For example, some non-disclosure agreements (such as those concerning trade secrets and confidential matters) make sense; blanket gag orders do not. As Microsoft is an American company operating in the US context, the First Amendment, which binds only the state, does not constrain it: private companies can restrict speech without legal challenge (which is why social media companies, for instance, can set policies on the kind of speech permitted on their platforms). Employees, however, do have protections from arbitrary restrictions by companies on work-related issues, and in the US can invoke the National Labor Relations Act to protect such rights. Besides, scholars suggest that Article 19 of the International Covenant on Civil and Political Rights can be read broadly to mean that private entities, too, should allow employees to hold and express opinions freely.

Companies should permit free expression of opinion within their organisations, so that all ideas can be voiced freely – provided they are expressed peacefully and do not harass or intimidate anyone, or incite violence – without fear that employment will be terminated. Companies can take disciplinary action against incitement to violence, harassment, intimidation, threatening or abusive behaviour, or other unruly conduct that disrupts an event so much that it cannot continue. But unpleasant protests that do not cross such a threshold should be permissible.

Employees and other concerned individuals and groups are right to ask companies whether their business conduct is consistent with the principles those companies have themselves developed on using AI responsibly. Microsoft’s AI principles, for example, deal with transparency, accountability, fairness, reliability, safety, privacy, security, and inclusiveness.

The question of complicity for tech companies

The second issue is more complex. The Geneva Conventions lay down the rules of armed conflict, and international humanitarian law is intended to protect civilians during conflict. While combatants are legitimate targets for one another, they cannot use force indiscriminately against civilians, and they have an obligation to protect civilian lives and property.

Armies often rely on private defence companies to manufacture equipment, armaments, and systems, and to develop more lethal and accurate weapons. Armies also rely on other companies for products and services that support their operations.

In Israel’s case, Microsoft’s Azure is used to gather and rapidly process data from mass surveillance, including monitoring of phone conversations, texts, and audio messages. The accumulated data is then verified against other sources of information. The whole point of efficient data management is to process vast amounts of data quickly to identify patterns. A platform like Azure enables rapid analysis of such information, and an army can then select specific individuals or property as targets. As Israel’s track record in the current war shows, mistakes occur often, and many times civilians and civilian establishments – aid agencies, UN staff, charity kitchens, news organisations, hospitals, and schools – have been targeted. Dismissing this as collateral damage is wrong.

Similarly, Amazon provides cloud computing services to Israel under ‘Project Nimbus’, which has caused concern. Israel has also used Cisco and Dell data centres. An IBM subsidiary has worked with the Israeli government, and Palantir – a company controversial for data-harvesting technology that enables surveillance, and a partner of Microsoft in defence contracts – has done business with Israeli authorities as well.

But by making liberal use of AI during ongoing, large-scale violence, Israel and many other countries have put tech companies in the spotlight.

Commercial AI models are now being used in warfare, according to Heidy Khlaaf of the AI Now Institute. Investigations by the Associated Press show a two-hundred-fold increase in the IDF’s use of AI since the 7 October attacks on Israel by Hamas. Israel says it uses AI to identify bombing targets, but has them cross-checked against other sources to ensure accuracy. While OpenAI (in which Microsoft has invested) has a policy stating that customers should not use its technology to develop deadly weapons, destroy property, or harm people, it has amended that policy to permit uses that serve national security goals. Unless military manuals have clear systems and procedures in place to ensure reasoned human supervision (and not merely approval of what the data says) and the use of intelligence to determine specific targets, mistakes are bound to happen. Ignoring those as the necessary cost of war exposes the army to future charges of disproportionate use of force and potential war crimes.

Weapons manufacturers have to be concerned about disproportionate or illegal use of their weapons. The challenge for companies not engaged in making weapons is how to avoid their products or services being used in direct warfare in ways that could expose them to charges of complicity in human rights violations.

A company can no longer argue that it is merely fulfilling contractual obligations, nor can its executives simply say they are only following orders. That defence proved unsustainable during the Nuremberg trials after World War II, where cases involving German banks showed that some (not all) financial institutions were complicit in war crimes. In more recent conflicts in Syria and Sudan, a case in Sweden and another in France are examining the conduct of companies during those armed conflicts, precisely investigating complicity.

Guidance on avoiding business complicity during conflict

The UNDP and the UN Working Group on Business and Human Rights have together produced helpful guidance on the need for heightened due diligence by companies in conflict-affected contexts. The burden of decision-making remains with the company, which must assess the risks and impacts of its actions and business decisions in the context of war. Mere association or trade with a government or its military forces does not imply complicity in specific conduct, but assisting an armed force raises the risks.

The difficult question is how a company can do business with an army that has a record of violating international norms and laws, that has used disproportionate force leading to large civilian casualties, and whose conduct has led the International Criminal Court to issue arrest warrants and the International Court of Justice to warn of genocide risks. Those are red flags.

A conflict zone is not a law-free zone. At the same time, it is not necessarily unlawful (unless proven otherwise) for companies to do business in conflict zones. The question they must examine is whether their presence protects civilians and helps them lead normal lives, or whether the company’s conduct enables widespread human rights abuses. Conducting heightened due diligence is a requirement. The questions such due diligence asks need to be detailed, examining all possible consequences. They include:

  • Does the company have leverage to ensure that the warring party acts consistently with international humanitarian law?
  • If it does not, do the company’s links with that warring party expose it to potential prosecution?
  • If the company has claimed to adhere to international standards and norms, is its conduct in line with those standards, regardless of what rivals might do if it were to withdraw?
  • Is it possible for the company to continue to do normal business – such as providing cloud computing to schools, universities, and hospitals, but not to defence forces – without getting enmeshed in the conflict?
  • Is it possible for the company to prevent misuse of its technology by, for example, not upgrading software or disabling certain functions of the product or service sold?
  • Can the company adequately assure itself, its regulators, and other appropriate authorities, that it has taken all possible measures to prevent abuse of its technology?

The approach the Microsoft employees took – raining on the company’s parade on its 50th anniversary – may have been deeply unpleasant for the company’s senior management, but the employees were right to raise such concerns, given the devastating toll the Gaza war has taken on civilian life. The company could have offered to hold an urgent conversation with colleagues who feel strongly about the issues involved. Principle 19 of the UN Guiding Principles on Business and Human Rights is not prescriptive about what a company should do in each possible scenario, but it does raise the prospect of a company ending a business relationship when it has no leverage to change the behaviour of a partner who is causing violations. The commentary to the Principle states: “There are situations in which the enterprise lacks the leverage to prevent or mitigate adverse impacts and is unable to increase its leverage. Here, the enterprise should consider ending the relationship, taking into account credible assessments of potential adverse human rights impacts of doing so.”

This may not be a simple or easy decision to make. Some relationships are crucial for companies, especially where they provide a product or service for which alternative sources do not exist. But it should still be possible to service civilian sectors of the economy while cutting off supplies to the military. Oil companies operating in Sudan in the 2000s were taken to task by human rights groups because they provided aviation fuel to the Sudanese air force, which was bombing civilian targets in what is now South Sudan. When the companies protested that they also provided aviation fuel to aircraft of Operation Lifeline Sudan (which delivered food relief in famine-affected areas), the human rights community rightly argued that the oil companies could continue to supply OLS; it was the air force they had to cut off. (One company eventually complied with the call from human rights groups.)

In making these decisions, the key issue is the severity of the abuse: the more severe it is, the quicker the company must respond. If it continues to operate, the onus is on the company to demonstrate that it is making efforts to mitigate adverse impacts. And if it persists in operating regardless, it will have to face the consequences: reputational, but also financial and legal.

These are not rhetorical campaigning issues, nor are the questions posed earlier the only ones. But these are the questions tech companies need to explore, unpack, and assess if they care about the principles they have signed up to, their reputation, the risk of complicity, product boycotts and protests, and how they wish to be remembered.