InfoLab

What the Online Harassment of Iranian Journalists Teaches Us About Safety and Gender-Based Violence on X

By

Abdullah Ahmad

While social media platforms are often positioned as civic spaces, the online information ecosystem is in reality marked by coordinated efforts by threat actors to manipulate political narratives. On the front lines of these asymmetric attacks are the journalists and activists who expose misleading stories and false consensus, safeguarding the integrity of informed public dialogue online. Yet these individuals are disproportionately targeted, particularly in non-Western contexts.

Our research investigates specific instances of targeted campaigns against communities underrepresented in mainstream trust and safety conversations. We examine a novel harassment campaign targeting different subgroups of journalists and activists reporting on the 2022 Iranian protests. In short, the investigation found that female journalists of the Iranian diaspora experienced significantly higher levels of harassment than other groups. The campaign was executed primarily through bot accounts and used a new strategy to evade detection: purposefully avoiding words flagged by abusive-language filters.

What Happened

During the Iranian protests of 2022, journalists and activists, particularly female journalists of the diaspora, were at the forefront of informing the world about the events unfolding in Iran. This role was especially critical because journalists inside Iran were routinely imprisoned during heightened periods of protest, putting the onus of amplifying information on those not physically present in the country. During the protests, female journalists from the diaspora reported experiencing higher levels of online harassment. To test these claims, we used X (formerly Twitter) data to examine whether the attacks were gendered, what strategies bad actors deployed, and what this means for designing safer social media platforms for underrepresented communities.

Methodology

We analyzed how different subgroups of activists and journalists faced harassment on X during the protests by examining the language of tweets directed at them. The three subgroups explored are as follows (a minimal sketch of how tweets can be bucketed by subgroup appears after the list):

  • Female journalists of the Iranian diaspora
  • Non-Iranian journalists
  • Male journalists of the Iranian diaspora
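
As a concrete illustration, the snippet below sketches how tweets might be bucketed by target subgroup. It is a minimal sketch in Python: the handles, schema, and toy data are hypothetical stand-ins, not the journalist lists or pipeline used in the study.

```python
import pandas as pd

# Hypothetical handle lists for each subgroup; the study's actual
# journalist lists are not reproduced here.
SUBGROUPS = {
    "female_iranian_diaspora": {"@journalist_a", "@journalist_b"},
    "non_iranian": {"@journalist_c"},
    "male_iranian_diaspora": {"@journalist_d"},
}

def subgroup_of(mentions: set) -> str:
    """Map a tweet's @-mentions to the subgroup of the journalist it targets."""
    for group, handles in SUBGROUPS.items():
        if mentions & handles:
            return group
    return "other"

# Toy rows standing in for tweets collected during the protest period.
tweets = pd.DataFrame({
    "text": ["@journalist_a ...", "@journalist_c ...", "@journalist_d ..."],
    "mentions": [{"@journalist_a"}, {"@journalist_c"}, {"@journalist_d"}],
})
tweets["target_group"] = tweets["mentions"].map(subgroup_of)
print(tweets["target_group"].value_counts())
```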

What We Found

The first investigation used the traditional method of analyzing the abusive language directed at journalists. We found that female journalists of the Iranian diaspora received substantially more tweets containing curses, crass language, and insults. Between September 15 and 30, female journalists of the diaspora received 8,975 abusive tweets, 65 times more than non-Iranian journalists and 8 times more than male Iranian journalists of the diaspora. The abusive language against the female Iranian journalists was particularly associated with their gender and their occupation.
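
The lexicon-based flagging behind this first pass can be sketched as follows. The term list is a mild placeholder, not the study's actual abusive-word lexicon, and the data rows are invented for illustration.

```python
import re
import pandas as pd

# Placeholder lexicon; the study's abusive-word list is far larger.
ABUSIVE_TERMS = ["idiot", "trash", "pathetic"]
ABUSE_RE = re.compile(r"\b(" + "|".join(map(re.escape, ABUSIVE_TERMS)) + r")\b",
                      re.IGNORECASE)

def is_abusive(text: str) -> bool:
    """True if the tweet contains any term from the abuse lexicon."""
    return bool(ABUSE_RE.search(text))

# Toy tweets already tagged with the target's subgroup (see earlier sketch).
tweets = pd.DataFrame({
    "text": ["you are pathetic", "great reporting", "trash take as usual"],
    "target_group": ["female_iranian_diaspora", "non_iranian",
                     "female_iranian_diaspora"],
})
abusive = tweets[tweets["text"].map(is_abusive)]
print(abusive.groupby("target_group").size())  # volumes behind ratios like 65x and 8x
```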

The second investigation introduced a novel methodology for identifying a coordinated campaign of credibility-attacking tweets that systematically tarnished the reputations of journalists by portraying them as biased proponents of the Iranian government. These tweets contained no explicitly abusive words but still harassed the journalists by calling them mouthpieces, apologists, and puppets of the establishment. We found that female journalists of the Iranian diaspora received 7,039 such tweets, compared to the 10 and 418 received by non-Iranian and male Iranian journalists respectively. The analysis suggested that female journalists were subjected to 895 more credibility-attacking tweets on average than their non-female counterparts. Additionally, 78% of the tweets attacking female journalists were created by bot accounts.
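
These attacks evade lexicon-based detection precisely because they contain no curse words. A rough way to surface them is to match discrediting labels instead, as in the toy sketch below; the study itself used a model-based approach, so this pattern list is only an illustrative assumption, not its implementation.

```python
import re

# Discrediting labels drawn from the campaign described above.
DISCREDITING_TERMS = ["mouthpiece", "apologist", "puppet"]
CRED_RE = re.compile(r"\b(" + "|".join(DISCREDITING_TERMS) + r")\b",
                     re.IGNORECASE)

def is_credibility_attack(text: str) -> bool:
    """Flag tweets that discredit a journalist without explicit abuse."""
    return bool(CRED_RE.search(text))

print(is_credibility_attack("She is just a mouthpiece for the regime."))  # True
print(is_credibility_attack("Thorough reporting on the protests."))       # False
```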

The analysis identified three categories of credibility-attacking, bot-generated tweets targeting female Iranian journalists. A single tweet can employ more than one tactic, so the percentages below overlap (a short sketch of how these shares are computed appears after the list):

  • Dismiss Tweets: Over 80 percent of the bot-generated tweets that attacked credibility were labeled as “Dismiss” by our model. “Dismiss” tactics aim to nullify the target’s message by discrediting their associations, e.g. branding them as apologists or spokespeople for the Iranian government.
  • Dismay Tweets: 66 percent of the tweets aimed to provoke negative feelings towards the targeted individual by questioning the sincerity of their work.
  • Nuke Tweets: 65 percent of the tweets aimed to alter the community structure around a specific topic. These tweets discouraged readers from regarding the journalists as movement representatives and called out the institutions employing them, such as the NY Times and Human Rights Watch, suggesting they should regret hiring these journalists.
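
The sketch below shows how such shares are computed from multi-label output, and why they sum to more than 100 percent. The example rows are invented; the real labels come from the study's model.

```python
from collections import Counter

# Invented multi-label output standing in for the model's tactic labels.
labeled_tweets = [
    {"dismiss"},             # brands the journalist a regime spokesperson
    {"dismiss", "dismay"},   # discredits her and questions her sincerity
    {"dismay", "nuke"},      # attacks both the person and her employer
    {"dismiss", "nuke"},
]

counts = Counter(tactic for labels in labeled_tweets for tactic in labels)
total = len(labeled_tweets)
for tactic, count in sorted(counts.items()):
    print(f"{tactic}: {100 * count / total:.0f}% of credibility-attacking tweets")
```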

Way Forward

  • Differential Targeting: Online threats are not uniformly distributed across occupations, gender identities, and regions. In our investigation, female Iranian diaspora journalists faced unique, credibility-based attacks at a much higher level than their non-Iranian or male counterparts. A one-size-fits-all solution clearly does not address the nuanced vulnerabilities of different communities, and this disproportionately impacts underrepresented groups. Platforms therefore need to consider differential power dynamics within protected classes when developing their policies. To further understand differential exposure to harmful content and its impact, platforms need to be transparent about the prevalence of harmful exposure by region and protected class.
  • Adaptive Inauthentic Behavior: The perpetrators of these campaigns are increasingly using subtle, sophisticated tactics, deliberately eschewing overtly derogatory language to undermine their targets, thereby defeating conventional detection methods based on sentiment analysis. The absence of abusive language and accompanying hashtags allowed these harassment tweets to go undetected by abuse-measurement models and made them difficult to track. This evolution necessitates equally advanced counterstrategies to maintain online safety and integrity, potentially by focusing more on the detection of inauthentic actors (bots) and tracking their actions over time to understand the new strategies being employed (see the sketch below).
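
As a starting point, the sketch below shows what such longitudinal tracking might look like: filter to accounts scored as likely bots (e.g. by a detector such as Botometer) and aggregate their credibility-attack volume per day. The schema, scores, and 0.8 threshold are all assumptions for illustration, not the study's pipeline.

```python
import pandas as pd

# Toy activity log; in practice bot_score would come from a bot detector
# and credibility_attack from a classifier like the one sketched earlier.
activity = pd.DataFrame({
    "author": ["acct1", "acct1", "acct2", "acct3"],
    "bot_score": [0.91, 0.91, 0.87, 0.12],
    "timestamp": pd.to_datetime(
        ["2022-09-16", "2022-09-17", "2022-09-17", "2022-09-18"]),
    "credibility_attack": [True, True, True, False],
})

# Keep likely bots (assumed threshold) and count their attacks per day.
likely_bots = activity[activity["bot_score"] >= 0.8]
daily_attacks = (likely_bots[likely_bots["credibility_attack"]]
                 .set_index("timestamp")
                 .resample("D")
                 .size())
print(daily_attacks)  # shifts in this series can reveal new tactics
```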

Abdullah Ahmad

Data researcher at Carnegie Mellon University, conducting investigations into disinformation networks, bot clusters, and toxicity on social media platforms. Specializing in understanding and forecasting cyber-mediated changes in political and social outcomes using quantitative methods. You can read more of his work at bluechroma.substack.com.

This investigation is part of an academic research paper from Carnegie Mellon University that has recently been accepted for publication. The findings of the investigation were also presented at TrustCon 2023. You can learn more about the research and methodology here.