
Photo: India-Pakistan Borderlands. Source: NASA Johnson, used under CC BY-NC 2.0 licence (https://flic.kr/p/ajRveE)

The Association for Progressive Communications (APC) expresses deep concern over the failure of major digital platforms to respond effectively to the surge in disinformation, hate speech and censorship during the recent India-Pakistan conflict. These harms are not incidental, but the result of engagement-driven business models that amplify polarising content and deprioritise user safety, especially in crisis contexts. Urgent structural reforms are needed to align platform practices with human rights obligations.

Alongside increased state censorship of the internet and heightened disinformation in mainstream media aimed at inciting extreme emotions and chaos among people on both sides of the border, hate speech, misinformation and disinformation on social media platforms have escalated sharply as tensions between the two countries have risen in recent weeks, particularly targeting people's religious and gender identities through dehumanising and abusive language. Of particular concern is the spread of anti-Muslim rhetoric by right-wing accounts in India, part of a broader global surge in Islamophobia. Such content harms not only those directly targeted, like Kashmiris and Indian Muslims, but also others who share similar identities. For already marginalised communities, this rhetoric poses real threats to their physical, psychological and economic security.

There has also been an amplification of gendered abuse by social media users in both countries, through slurs and insults as well as dehumanising language that seeks to frame women as disposable in the conflict. 

The rapid spread of hate and disinformation across media platforms fuels a self-perpetuating cycle of anger and misinformation, embedding falsehoods in public memory and undermining efforts toward peace. Social media platforms have had little to no response to this situation. While X issued a statement calling out the Indian government’s order to block accounts, the company has undertaken no firm steps to curb the spread of misinformation and disinformation, with its community notes feature proving to be ineffective and easily manipulated. Other platforms, including Facebook, YouTube and TikTok, have similarly failed to take meaningful action against the surge of hate speech and disinformation.

X’s gutting of its trust and safety programme in recent years, along with Meta’s rollback of protections for vulnerable groups, including women, its weakening of fact checking and its removal of diversity, equity and inclusion (DEI) policies, shed a harsh light on social media companies’ commitment to providing a safe platform for their users, and on the precarious situations those users find themselves in during times of conflict.

The current crisis has highlighted the systemic failures of tech platforms, whose business models prioritise engagement and profit over human rights and safety. Opaque algorithms and monetisation policies designed to maximise user engagement have amplified divisive and inflammatory content, contributing to a climate of hate, fear and hostility. The architecture of these platforms, optimised for engagement and profit, systemically rewards outrage, polarisation and sensationalism, making harmful content not a glitch but a feature of the design. This crisis is not an exception, but a stark example of how platform logics continue to undermine safety and accountability, reinforcing the urgent need for structural reform. Similar negligence has been documented during other conflicts, such as the role of Facebook in the spread of anti-Rohingya propaganda in Myanmar, which contributed to mass violence and a genocide.

In the India-Pakistan conflict, platforms have amplified unverified, sensationalist content that fuels nationalism and deepens divisions, while also enabling state-sponsored censorship and suppression of dissent. The Indian government's directive to block over 8,000 accounts on X, including those of Indian and international news organisations, Pakistani news outlets, Kashmiri voices and independent Indian accounts involved in fact checking, represents a sweeping crackdown on online expression. This action, executed under threat of substantial fines and imprisonment for local staff, further stifles the free flow of information during an already volatile situation. It came a day after the Pakistani government lifted its own ban on X, in what it called an effort to participate in the “narrative war” against India; X had been banned in Pakistan since February 2024. Meanwhile, the Pakistani government blocked 16 Indian YouTube channels, 31 video links and 32 websites, citing the spread of false information.

In moments of heightened conflict like these, social media companies must be held to a higher standard of urgency and responsibility. Given the profound impact of information shared on these platforms, it is imperative that they implement emergency protocols (digital equivalents of disaster response measures) that can trigger rapid, rights-based action to mitigate harm. These frameworks should include enhanced moderation of harmful content, expedited human rights due diligence, and stringent neutrality safeguards to prevent radicalisation, whether intentional or driven by algorithmic bias.

To ensure that platforms are engaging responsibly in conflict and crisis situations, APC calls on platforms to:

  • Establish crisis protocols and safeguards against extremism: Platforms must adopt emergency frameworks for conflict situations, including rapid moderation escalation, temporary algorithmic adjustments, and independent oversight to prevent the spread of radical and inflammatory content. These protocols must be guided by transparent, consistent and publicly available criteria that are co-developed with civil society and informed by local contexts, with independent oversight to ensure they are not exploited to expand platform control or suppress legitimate expression.
  • Protect free expression: Platforms must resist compliance with state censorship orders that target dissenting voices, independent media and human rights defenders. At a minimum, they should publicly document and justify all takedown actions taken at the request of governments.
  • Conduct mandatory human rights due diligence: Social media platforms must conduct and publish regular, independent human rights impact assessments, particularly in conflict zones, and take concrete steps to mitigate risks to vulnerable communities. This also includes restoring and strengthening trust and safety mechanisms, and reversing recent layoffs and policy rollbacks that have disproportionately affected the safety of vulnerable groups.
  • Be transparent in algorithmic practices: Companies must disclose how their algorithms rank and recommend content, including whether they prioritise engagement over accuracy, and provide independent oversight mechanisms to audit these systems.
  • Adopt equitable content moderation: Platforms must invest significantly in multilingual, context-aware moderation that is not reliant on AI, including hiring regional experts, to ensure equal protection and enforcement across all geographies and languages, contextualised to the specific needs of crisis and conflict.
  • Proactively demonetise harmful content: Ensure effective implementation of platform policies focused on demonetisation of disinformation and hate content, and proactively disable ad revenue, influencer payments and algorithmic boosting for accounts and pages repeatedly flagged for violations.