As a Palestinian, I feel that Meta has failed me once again. In yet another example of dehumanisation, a WhatsApp feature that generates AI images in response to users' search terms has revealed blatantly racist portrayals of Palestinians.
A recent Guardian report showed that a search for “Muslim boy Palestinian” generated a cartoon of a boy wielding a gun, while “Israeli boy” showed smiling children at play.
This is only the latest iteration of problematic trends within Meta, WhatsApp’s parent company. In my seven years running the Palestinian digital rights organisation 7amleh, I have watched these trends intensify.
Relying on biased generative AI, whether for emojis or content moderation, dehumanises Palestinians. It is also insulting.
Throughout the current crisis, Meta has systematically silenced and censored Palestinian voices, muffling one of the only unfiltered avenues for the world to hear from Palestinians directly.
As genocide has unfolded before the world over the past month, people have been relying on social media to share their voices and report facts on the ground. But Meta has been accused of primarily targeting and removing pro-Palestinian content.
The ongoing conflict has served as a crucial test for Meta, and the company has unequivocally failed. Censorship of Palestinian voices has occurred on both the individual and organisational levels.
Silencing Palestinians
Last month, Meta disabled the Facebook page of Quds News Network, a prominent Palestinian outlet with around 10 million followers, along with a number of other Palestinian media pages. The state of Israel, which routinely pressures social media companies in an effort to control the narrative, quickly took to X (formerly Twitter) to post a seemingly innocuous “thank you”.
This is not the first time Meta has failed Palestinians during a crisis. In May 2021, amid mass protests over the forced eviction of Palestinian families from the occupied East Jerusalem neighbourhood of Sheikh Jarrah, Palestinians took to social media en masse to share their perspectives – and they were met with widespread censorship.
A subsequent report commissioned by Meta, published in 2022, cited evidence of bias against Palestinians – the exact issues I have been speaking out about for years. Meta said it was committed to changing, and I felt as though we were finally making progress.
But this feeling was short-lived. The recent proliferation of one-sided censorship, limited visibility and other forms of silencing Palestinians has been explained away as “technical glitches”, but these glitches present unacceptable barriers to sharing perspectives online.
In the past, limits on the reach of stories have led some users to accuse Meta of deliberate censorship. The latest “technical” issues appear even more egregious.
The Palestinian flag emoji, for instance, has been flagged as “potentially offensive” by Instagram, resulting in it being hidden. Another “technical error” encountered by Palestinian users on Facebook and Instagram prevented the display of images of Palestinian victims in hospitals, because they were deemed to be “nude” pictures.
Among the most significant and disturbing examples was Meta’s inexplicable mistranslation of innocuous Arabic phrases in the bios of some Palestinian Instagram users to add the word “terrorist”. The company later attributed this to an interpretive error.
There is no accountability in a technical error – and importantly, nothing to stop it from happening again.
Public scepticism
Why do all these technical errors seem to exclusively affect Palestinians, and why is this pattern repeated with every escalation?
The use of the word “terrorist”, rather than a more neutral term, has only added to public scepticism over the company’s credibility and capacity to treat Palestinian and Arab users fairly. There appears to be deep bias in Meta’s datasets and machine-learning systems, notwithstanding the company’s expressed regrets.
There has been no indication that Meta intends to conduct a rigorous internal investigation into these matters. As in many previous escalations, inflammatory Hebrew-language content and racism directed against Palestinians have not faced the same restrictions or censorship.
Content moderation policies often reflect the dynamics of international power. Like so many other corporate strategies, they are driven by a blend of commercial and political concerns that filter the world through the lens of American global interests.
This raises concerns about the equitable and universal application of content moderation policies, and the extent to which social media platforms prioritise political and economic interests over human rights. Israelis are not currently experiencing the same sort of censorship as Palestinians.
Content moderation policies must be impartial. We must uphold the right to free expression, without regard for political or economic power and its proximity to the interests of the US or Israeli governments. Policies should be guided by human rights and international humanitarian law; any standard that falls short of this should be unequivocally rejected.
The views expressed in this article belong to the author and do not necessarily reflect the editorial policy of Sunna Files Website