Unpacking the human rights consequences of social media moderation backsliding

28 March 2025 | 7 minute read

In 2017, hate speech online led to violent attacks on the Rohingya in Myanmar

Since Donald Trump’s inauguration as US president in January, many leading US companies have made every effort to accommodate the administration in what they perceive as a changing legal and political landscape. Some corporate executives have joined the administration, the most notable being Elon Musk, the world’s richest man.

The effects of such steps have been far-reaching. Given the pivotal importance of free speech on the Internet, one area of particular concern is what happens to the promise of freedom of expression that the Internet once offered, and how vulnerable voices can be protected in the era of the free-for-all Internet that tech companies' founders now espouse. The journey of one company in particular, Meta, offers a cautionary tale about the risks to human rights.
  

The need for robust moderation policies and editors

The irresponsible use of tech platforms to spread misinformation and disinformation has had real-world consequences that harm human rights. Disinformation during elections can affect outcomes. Misinformation can cause panic; propaganda and attacks on ethnic, racial, or religious minorities have led to violence; privacy has been invaded and violated; groups whose rights are under attack, including women, minorities, and children, have been further undermined; and dangerous misinformation has sown skepticism on matters of public health and during natural disasters, such as the wildfires that caused wide-scale destruction in southern California.

It was not long ago that tech companies, and social media platforms in particular, adopted progressive policies to manage content on their platforms responsibly. Stung by cases from Myanmar, Sri Lanka, India, and elsewhere that showed how hate speech disseminated on their platforms had led to real violence on the ground, companies adopted a combination of technology and human moderation to monitor adherence to their community standards.

Aware of these developments and of allegations over its own past role, Meta strengthened its policies on safety and hate speech. The company developed a sophisticated content moderation policy to mitigate harm, set up an autonomous Oversight Board to which aggrieved individuals or groups could appeal, and hired human rights practitioners and experts to help it make responsible decisions. While not perfect, content moderation became more rigorous, and fact-checking was sub-contracted to credible organisations. Red flags highlighted content that spread hatred, incited violence, vilified minorities and other vulnerable groups, or made fraudulent claims.

Now, with Trump in power, corporate backsliding is a real concern. In a move that raises serious doubts about the nature and future of free speech on the Internet, Meta disbanded its fact-checking programme in the United States. Some content moderation infrastructure (in particular the Oversight Board) remains in place, but fact-checking is now entrusted to crowd-sourced 'community notes.' While crowd-sourcing can be useful, agenda-driven organisations and individuals can game the process to spread misinformation and disinformation. The effectiveness of such crowd-sourced fact-checking is open to question, and research indicates declining trust in fact-checking more broadly.

Without robust moderation policies and editors, greater risks from misinformation and disinformation are likely, affecting in particular ethnic, racial, and religious minorities, immigrants (lawful or undocumented), LGBTQI+ communities, and women. Trained editors and content moderators are expected to be guided by facts and to pursue accuracy by providing contextual information, while being mindful of human rights consequences. But when activists, whether of the right or the left, a vaccine skeptic or a vaccine enthusiast, an election denier or a political partisan, make a concerted effort to dominate 'community notes,' the consequences can be devastating, denigrating or crowding out factual and granular information about a contentious topic (such as the effectiveness of the measles (MMR) vaccine).

Self-governing mechanisms are needed, but they are not perfect. Meta has made missteps, such as taking down pages belonging to dissidents in some countries and permitting government propaganda; during global crises such as Israel's war in Gaza, credible entities and affected groups have accused it of bias. Conservative and right-wing critics have long resented content moderation by tech platforms. But the conclusion that Trump's victory reflects a changed mood in the US about content moderation remains an unproven assertion.

What's more, Meta's decision has significant impact because in many parts of the world, Facebook, Meta's flagship, is the de facto Internet. Meta's content moderation has ensured greater space for minority voices in countries where they struggle to be heard. How would Meta counter anti-immigrant sentiment or anti-LGBTQI+ propaganda in societies where xenophobia is rife and takes violent forms, or where same-sex relationships are criminalised? What if governments insist on removing LGBTQI+ or pro-immigrant content? Should Meta refuse to comply, wouldn't those governments point out that the company has accommodated the changed political landscape in the US, and insist that it must do the same in their countries?
 

Unpacking ‘free speech’ 

Meta seems to have justified this policy shift by drawing on libertarian free speech ideals. But in reality, poor content moderation contaminates the public sphere, leading to counter-abuses or to people withdrawing entirely. Meta's commitment to freedom of expression becomes rhetorical, since it is intended for a specific audience, and to that extent it lacks credibility.

Meta CEO Mark Zuckerberg's motive may be to please an administration with which he has previously crossed swords (by deplatforming Trump towards the end of his first term). The aim is to avoid being dragged into ideological and cultural battles with the administration and its loud supporters, and to focus on developing consumer-focused technology. To be sure, Meta was never able to please 'both' sides of any debate: pro-Israel activists felt it was too lenient towards Palestinian voices, for example, while pro-Palestinian activists complained that their posts were routinely taken down. Policing cyberspace is never easy: ask a traffic officer in a city with blinking traffic lights and every driver keen to reach their destination before everyone else.

Can Meta remain committed to freedom of expression while removing contentious pro-government pages that incite violence or fake accounts that indulge in hate speech? Can it defy other governments' takedown requests, rely on international law, and resist calls for greater governmental regulation? Can it refuse to comply with blatantly partisan orders from other governments when it complies with a divisive US administration?

Friction between US standards and the EU

Crucially, companies like Meta must balance Trump administration edicts with the requirement to comply with regulations elsewhere in the world, including in the European Union, Australia, and Brazil (where, after much chest-thumping, Elon Musk blinked, complied with Brazilian court orders, and paid fines).

The consequence? The Internet becomes Splinternet.

While Meta will continue to act against the most severe violations of its policies, in the interest of promoting free speech it will allow less severe forms of awful speech to remain online, if such speech is 'lawful.' That takes Meta into tricky terrain: while the First Amendment in the US prohibits the government from suppressing freedom of speech or of the press, it does not bind private actors, who can frame their own rules about what can and cannot be said, as most tech platforms around the world do. In countries with stricter government controls, some topics are impossible to discuss, being deemed against public morality, decency, or the national interest; in some countries specific forms of speech, such as antisemitism or the glorification of the Nazis, are prohibited. Even countries with more libertarian standards have rules preventing speech that poses an imminent threat of violence.

There is a silver lining: Meta says it will allow more political content online, which ought to be good news for writers, journalists, dissidents, human rights activists, and others with strong views critical of governments or the establishment. Removing fewer contentious posts can do good in the long run, since it could give human rights groups wider reach. The test will come if there is an orchestrated backlash of hate speech against such groups: would Meta protect their presence online? And while there is no evidence of this yet, could algorithms drive traffic away from human rights groups' posts and divert it towards conspiracy theorists?

One reason many individuals and organisations left X, formerly Twitter, is that once it abandoned content moderation and permitted a Wild West-style free-for-all, the quality of discourse deteriorated significantly and the noise-to-signal ratio rose drastically. Facebook and other platforms run a similar risk.

To be sure, the biggest threat to free speech often comes from governments, which can pass draconian laws or harass the media in more insidious ways, such as suing outlets, preventing them from raising funds abroad, or restricting their operations on their territory. By standing up to governments that demanded specific material be taken down, or by challenging governments that shut down the Internet, companies in the information technology sector burnished their reputations as champions of liberties and free speech. They could afford to do so because, unlike manufacturing companies, they had fewer employees on the ground and often no physical assets that could be seized. As governments began to assert control by invoking national security laws or raiding company premises, civil society groups warmed to the companies, seeing in them kindred allies.

As keepers of the so-called town hall or public square (which they are not), platforms have devised and enforced rules to protect rights. Now those rules are being diluted, if not cast aside. Meta is sanguine; it believes the content of human rights defenders and dissenting writers will now get greater protection from those who seek to ban such views. But crowd-sourced content moderation may 'crowd out' those voices, since whoever is loudest, whoever controls the microphone, gets heard. You can't have a conversation in a raucous town hall where hecklers prevail. Meta and other platforms need to find ways to promote freedom of expression, protect the vulnerable, and prevent the spread of lies and speech that might incite violence. In other words, to do what they previously set out to do, and had put infrastructure in place to achieve. Abandoning that infrastructure is not an option; and if Meta demonstrates that it can do this right, it will find that it is not alone; it has allies.