Section: 3.4 Technology and online platforms

Platform responses: existing measures

This section provides a synopsis of platform responses to harmful content, with a particular focus on manifestations of online misogyny such as the online abuse and harassment of women and gendered hate speech. Each subsection below outlines examples of responses to harmful misogynistic content or behaviours on a given platform.

This list is not comprehensive, but it provides a glimpse of the responses available by the end of this project.

Many online platforms do not specifically mention misogyny or gender-based violence in their community guidelines and policies.


Meta

  • Meta’s policies, which apply across Facebook and Instagram, address hate speech1 and define it as “a direct attack against people – rather than concepts or institutions – on the basis of what we [Meta] call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease”. While gender identity is a protected characteristic, Meta’s hate speech policy does not use the terms ‘online misogyny’ or ‘gender-based violence’, nor do these terms appear in its wider policies, community guidelines or reporting mechanisms.
  • Meta has an independent Oversight Board that helps Facebook and Instagram decide what content to leave up and what to take down. The board provides external accountability for decisions about policy-violating content on Meta’s platforms.
  • Meta’s Dangerous Organisations and Individuals Policy2 prohibits dangerous organisations or individuals that proclaim a violent mission or are engaged in violence from having a presence on Meta’s platforms.
  • Meta’s platforms take part in a programme designed to prevent the spread of image- and video-based abuse using ‘hashes’, that is, images converted into ‘digital fingerprints’.3 As of December 2022, more than 12,000 people had turned more than 40,000 photos and videos into hashes on their own devices, then shared those hashes with Stop Non-Consensual Intimate Image Abuse (StopNCII), a cross-industry partnership to combat intimate image abuse. This stops anyone from uploading the original photos across Meta’s platforms (a sketch of this hash-matching approach follows this list). This response is an active preventative initiative that explicitly addresses gender-based violence. The tool can be accessed on StopNCII’s website (stopncii.org).
  • Instagram and Facebook are also founding members of Take It Down, a platform hosted by the National Center for Missing & Exploited Children in the United States. Take It Down is a free service that can help young people under 18 to remove, or stop the online sharing of, nude, partially nude or sexually explicit images or videos of them, including those generated by AI.4
  • At the end of 2021, in response to incidents of digital harassment in the Metaverse,5 Meta took several measures. These included the introduction of a ‘Personal Boundary’ feature, ‘hand harassment measures’ and a ‘safe zone’ or ‘Pause’ feature.6,7 The three responses are integrated features designed to address unsolicited touching in virtual reality; however, the safe zone feature must be manually activated by users.7
  • Facebook and Instagram use artificial intelligence to identify and remove violating content before users report it.8,9 This is a general-purpose tool applied to all forms of content violation, which may include misogynistic content and online gender-based violence. The platforms also have built-in user reporting options and send flagged content to human reviewers for evaluation.8,9,10
  • Instagram utilises artificial intelligence to warn users if what they have typed may be offensive and/or harmful to others. The platform allows users to filter out comments containing offensive words, phrases or emojis using the ‘Hidden Words’ tool (a sketch of this kind of keyword filter follows this list), and to disable tags from unfamiliar accounts.11,12 Instagram has also committed to working with law enforcement and responding to valid legal information requests.11
  • Meta is working on a ‘nudity protection’ feature for Instagram to protect users from unsolicited nude photos. The feature will work in a way similar to its ‘Hidden Words’ feature, which lets users automatically filter DM requests containing offensive content. The tool will be optional and will cover photos that may contain nudity in chats.13
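The hash-matching approach used by StopNCII can be illustrated with a short sketch. The code below is a minimal illustration of the general technique rather than StopNCII’s or Meta’s actual implementation: it assumes the open-source Python libraries Pillow and imagehash, along with a hypothetical shared hash list, and shows how a ‘digital fingerprint’ computed on a person’s own device can be checked at upload time without the image itself ever leaving that device.

```python
# Minimal sketch of hash-based image blocking, assuming the open-source
# 'Pillow' and 'imagehash' libraries. This illustrates the general
# technique only; it is not StopNCII's or Meta's actual implementation.
from PIL import Image
import imagehash

# Hypothetical list of fingerprints previously submitted by victims.
# Only these hashes are ever shared -- never the images themselves.
shared_hashes = [
    imagehash.hex_to_hash("fe958087c7cccf0d"),  # illustrative value
]

def hash_on_device(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash ('digital fingerprint') locally."""
    return imagehash.phash(Image.open(path))

def upload_blocked(path: str, max_distance: int = 4) -> bool:
    """At upload time, compare the image's hash against the shared list.

    Perceptual hashes are compared by Hamming distance, so resized or
    recompressed copies of a reported image can still be caught.
    """
    candidate = hash_on_device(path)
    return any(candidate - known <= max_distance for known in shared_hashes)

if __name__ == "__main__":
    if upload_blocked("upload.jpg"):
        print("Upload rejected: image matches a reported fingerprint.")
```

The key property of this design is that matching happens on fingerprints alone, so the intimate images themselves never need to be collected or stored centrally.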
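The ‘Hidden Words’ style of filtering mentioned above is, at its core, a user-configurable term filter. The sketch below is a rough approximation under that assumption: the term list and matching rules are illustrative, and Instagram’s actual filter is more sophisticated (for example, catching deliberate misspellings and character substitutions).

```python
# Rough sketch of a user-configurable 'hidden words' comment filter.
# The terms and matching rules are illustrative assumptions; this is
# not Instagram's actual implementation.
import re

def build_filter(hidden_terms: list[str]) -> re.Pattern:
    """Compile the user's hidden words, phrases and emojis into one pattern."""
    escaped = (re.escape(term) for term in hidden_terms)
    return re.compile("|".join(escaped), re.IGNORECASE)

def should_hide(comment: str, pattern: re.Pattern) -> bool:
    """Hide the comment if it contains any of the user's hidden terms."""
    return bool(pattern.search(comment))

# Example: a user chooses to hide two phrases and an emoji.
pattern = build_filter(["ugly", "shut up", "🤮"])
print(should_hide("Just shut up already", pattern))  # True
print(should_hide("Great photo!", pattern))          # False
```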

X (formerly Twitter)

  • X doesn’t explicitly mention misogyny or gender-based violence in its policies or community guidelines. However, it prohibits hateful conduct based on protected traits, including gender, gender identity, sexual orientation and other identities.14
  • X has enacted several responses to curb harmful content on its platform, including internal policies such as its Hateful Conduct Policy and integrated platform features such as user reporting. Content that violates the Hateful Conduct Policy includes hateful references, incitement, slurs and tropes, dehumanisation of a group of people on the basis of protected characteristics (including gender), hateful imagery, and hateful profiles.14 X’s enforcement responses include limiting visibility, restricting discoverability, downranking, requiring removal and suspending accounts.14
  • X uses artificial intelligence, alongside human content moderation, to identify and review harmful content.15
  • X disbanded its Trust and Safety Council, which was established in 2016 as a volunteer advisory group to combat hate speech, child exploitation, suicide, self-harm, and other harmful content on the platform.16

Most online platforms rely on both artificial intelligence and human moderators to review potentially harmful content.


YouTube

  • YouTube’s policies and community guidelines don’t explicitly mention misogyny or gender-based violence as prohibited content. However, YouTube doesn’t allow content promoting violence or hatred based on sex/gender, gender identity and expression, sexual orientation and other identities.17
  • YouTube removes content that violates its policies and notifies uploaders by email. YouTube applies a strikes system, and repeated violations result in the termination of channels or accounts (a sketch of such a strikes system follows this list).18
  • YouTube limits features on content that comes close to hate speech without crossing the line, thereby limiting that content’s reach.19
  • YouTube age-restricts content such as nudity, sexually suggestive material, and violent or graphic content.20
  • YouTube’s policies prohibit cyberbullying, harassment and harmful behaviours including threats, swatting, non-consensual sharing of intimate images, stalking, blackmailing, directing malicious abuse via raiding and doxing.21
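As a rough illustration of how a strikes system operates, the sketch below models the pattern YouTube documents publicly: a warning for the first violation, strikes that expire after 90 days, and channel termination at three active strikes. The class and method names are illustrative assumptions, not YouTube’s implementation.

```python
# Simplified model of a community-guidelines strikes system. The
# warning-then-three-strikes-in-90-days pattern follows YouTube's
# published policy; everything else is an illustrative assumption.
from datetime import datetime, timedelta

STRIKE_WINDOW = timedelta(days=90)
TERMINATION_THRESHOLD = 3

class Channel:
    def __init__(self) -> None:
        self.warned = False
        self.strikes: list[datetime] = []
        self.terminated = False

    def record_violation(self, when: datetime) -> str:
        """Apply the warning-then-strikes escalation for one violation."""
        if self.terminated:
            return "channel already terminated"
        if not self.warned:
            self.warned = True  # first violation earns a warning only
            return "warning issued"
        # Expire strikes older than the 90-day window, then add the new one.
        self.strikes = [t for t in self.strikes if when - t < STRIKE_WINDOW]
        self.strikes.append(when)
        if len(self.strikes) >= TERMINATION_THRESHOLD:
            self.terminated = True
            return "channel terminated"
        return f"strike {len(self.strikes)} issued"

channel = Channel()
start = datetime(2024, 1, 1)
for day in (0, 10, 20, 30):
    print(channel.record_violation(start + timedelta(days=day)))
# -> warning issued, strike 1 issued, strike 2 issued, channel terminated
```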

Fringe platforms typically lack content moderation, allowing unchecked posting of any content, including content that supports or promotes violence, misogyny, terrorism and violent extremism.


Reddit

  • Reddit doesn’t explicitly mention misogyny or gender-based violence in its policies. Reddit’s Content Policy includes overarching rules for all communities prohibiting, for example, harassment or threats based on identity, sexual or suggestive content involving minors, and illegal content.22
  • Reddit requires content to be appropriately labelled, especially if graphic, offensive or sexual.22
  • The platform ‘quarantines’ specific communities whose content the average user might find highly offensive or upsetting. The quarantine prevents this content from being seen by users who have not explicitly chosen to access it.23
  • Reddit relies on content moderators, machine learning and automation to identify and remove content that users report as hateful.24

Telegram

  • Telegram prefers not to remove content from its platform25 and states that it will not apply local restrictions on freedom of speech. The main exception is that it blocks bots and channels related to Daesh.26 None of its responses appear to address other forms of violent extremism, online misogyny or gender-based violence.25
  • The platform also allows users to report pornographic and Daesh-related content, but nothing else. Telegram can act against public channels but does not intervene in private chats.25
  • Telegram provides some form of content moderation, primarily through group administrators.25

4chan, 8kun and Gab

  • All three platforms are considered hubs for extreme misogyny.27,28 4chan and 8kun are largely unmoderated and allow their users to remain entirely anonymous.27,29 8kun was originally known as 8chan but rebranded after Cloudflare ended their working relationship.27
  • Gab was founded in 2016 as a way to escape Twitter’s content moderation policies; it promotes itself as an avenue for free speech and welcomes users who have been blocked from other platforms.30 After the platform was used by the Pittsburgh synagogue shooter in 2018, Gab committed to curbing murder and death threats; however, it still planned to remain a site where hate speech and other extreme posts were allowed.31,32