Section: 3.0 Technology and online platforms

Key insights

  • Online platforms play a pivotal role in amplifying and perpetuating misogynistic attitudes and behaviours, significantly influenced by their design, algorithms, and business models.
  • Emerging categories of content and technology pose new challenges for platforms and regulators.

Features that contribute to amplifying harmful content

Evidence indicates that the design, algorithms and policies of online platforms can contribute to the spread of misogynistic attitudes and online gendered violence.

Academic literature observes that these platforms often operate under what Harvard professor Shoshana Zuboff describes as a model of “surveillance capitalism”: offering free services to users while monetising their data for targeted advertising.[1] This model creates a commercial imperative to keep users engaged, even when that means amplifying harmful content such as misinformation, conspiracy theories and hate speech.[1][2]

At the same time, while many people use online platforms for benign purposes, these platforms can also become breeding grounds for antisocial activity, from spreading hate to interfering in elections.

Siva Vaidhyanathan, a media scholar at the University of Virginia, observes in his book Antisocial Media: How Facebook Disconnects Us and Undermines Democracy that the combination of these business models and user behaviours creates a troubling synergy, making societal problems an inherent feature of these platforms rather than an unintended consequence.[2]

The World Health Organization pointed out that a surge in social media and internet use during the COVID-19 pandemic led to an infodemic: too much information was shared, including false and misleading information, in digital and physical environments.[3][4]

Social and interactive features on social media platforms, such as the ability to like and share posts and videos, are also significant. Each interaction boosts a piece of content's engagement metrics, making recommendation algorithms more likely to surface it to other users and further increase its visibility.
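To make that feedback loop concrete, the sketch below shows a toy engagement-weighted ranking function in Python. It illustrates the general dynamic described above rather than any platform's actual algorithm: the Post fields, the weights and the scoring formula are all illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        likes: int
        shares: int
        comments: int
        age_hours: float

    # Illustrative weights only: real platforms tune many more signals,
    # and none of these values reflect any specific platform's behaviour.
    LIKE_WEIGHT = 1.0
    SHARE_WEIGHT = 4.0    # shares push content to new audiences
    COMMENT_WEIGHT = 2.0  # comments signal strong (even hostile) engagement

    def engagement_score(post: Post) -> float:
        """Score a post by raw engagement, decayed by age.

        The formula is agnostic about *why* users engaged: outrage and
        approval count identically, which is one way inflammatory
        content can be amplified.
        """
        raw = (LIKE_WEIGHT * post.likes
               + SHARE_WEIGHT * post.shares
               + COMMENT_WEIGHT * post.comments)
        return raw / (1.0 + post.age_hours)

    def rank_feed(posts: list[Post]) -> list[Post]:
        """Order posts by engagement score, highest first."""
        return sorted(posts, key=engagement_score, reverse=True)

    if __name__ == "__main__":
        feed = [
            Post("calm-explainer", likes=120, shares=5, comments=10, age_hours=3),
            Post("outrage-bait", likes=90, shares=60, comments=150, age_hours=3),
        ]
        for post in rank_feed(feed):
            print(post.post_id, round(engagement_score(post), 1))

Because the score counts engagement regardless of sentiment, the post that provokes the most reactions, including hostile ones, rises to the top, which is one way engagement-driven ranking can amplify harmful content.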

In her 2023 paper[5] on technology-facilitated gender-based violence, hate speech and terrorism, Esli Chan highlights that these features can contribute to the formation of groups of like-minded people, enabling users to encourage and disseminate misogynistic and hateful speech online and thereby creating networks that promote online hate. The researcher points out that this “networked” quality can diffuse individual responsibility and allow groups to launch coordinated online attacks against individuals and groups, barraging them with hateful messages, images and posts.

While the platforms themselves are grappling with these issues (see platform responses), the literature review revealed examples of how online platforms' business models, architecture and algorithms have contributed to the spread of harmful misogynistic content in recent years.

A report by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs[6] that looked into cyber violence and hate speech online against women highlighted four key aspects of present-day online spaces that can contribute to the amplification of online violence and misogyny against women and girls: privacy, anonymity, mob mentality and the permanence of data, all of which can lead to the repeated re-victimisation of women and girls online.

Mob mentality (also known as herd or pack mentality, or groupthink) in online contexts refers to the way individuals' opinions and behaviours shift to align with the prevalent attitudes of an online community, often resulting in more extreme or polarised actions than those individuals would take on their own. The phenomenon is amplified by the anonymity and distance the internet provides, which reduce personal accountability and inhibitions. Driven by the collective sentiment of the group, people may engage in aggressive, bullying or harassing behaviour without considering the personal consequences. This can produce a snowball effect in which more users join in, further intensifying the group's actions. Social media platforms, with their ability to spread information quickly and rally large groups, are particularly conducive to the development of mob mentality.

Adding to the complexity is the issue of the permanence of data online, which has a significant impact on victims of behaviours like ‘revenge porn’. This permanence means that once something is posted online, it can essentially last forever, accessible even after attempts to delete it. This can lead to ongoing victimisation, where victims feel they have no control over their own story. This sense of continuous re-victimisation strips victims of their agency and power, making image-based sexual abuse a particularly effective form of abuse from the perspective of perpetrators. The idea that anything posted can be permanently saved and potentially used against someone adds another layer of danger to the consequences of online mob mentality.


Anonymity online can amplify online violence and hate speech because it creates a perception that there are no rules and no accountability. When individuals cannot be easily identified, they are often emboldened to act without restraint.


Jesse Fox, Carlos Cruz and Ji Young Lee published a report[7] investigating whether users’ anonymity and level of interactivity with sexist content on social media influenced sexist attitudes and offline behaviour. They focused on two Twitter (now X) features: the lack of a real-name clause and the use of hashtags to link posts thematically. They noted that hashtags on the platform have helped some sexist topics go viral, such as #LiesToldByFemales, #IHateFemalesWho and #ThatsWhatSlutsDo, encouraging wider participation in sexist discourse. They found that interacting with sexist content anonymously promotes greater hostile sexism than interacting with it from an identified account: anonymity allows for increased aggression and rude or antisocial behaviour, accelerating the dissemination of misogynistic and sexist messaging.[7]


Other factors that might contribute to the amplification of toxic and misogynistic content include a lack of moderation, platform design choices, algorithmic bias, online disinhibition, and filter bubbles or echo chambers.


Further reading:

Studies: Features that contribute to amplifying harmful content

Harmful content on online platforms

This section highlights some of the research on the features and prevalence of harmful content on online platforms such as YouTube, TikTok, X (formerly Twitter) and Facebook.

YouTube: A 2022 qualitative study by the Institute for Strategic Dialogue[8] indicated that the YouTube algorithm, especially through its ‘YouTube Shorts’ feature, tends to recommend misogynistic and extremist content to Australian boys and young men, without distinguishing between underage and adult accounts in the content served.

Facebook and YouTube: A 2020 study by Luke Munn[9] from the New Zealand Digital Cultures Institute highlights how the design of platforms like Facebook and YouTube amplifies toxic content, with Facebook prioritising incendiary comments and YouTube nudging users towards more extreme content based on engagement.

TikTok: A study[10] conducted by the Center for Countering Digital Hate in 2022 found that TikTok’s algorithm promoted a variety of harmful content to young girls and boys, including misogynistic content and content relating to suicide, body image and eating disorders.

An investigation[11] by The Observer in 2022 found that TikTok was promoting misogynistic content to young people despite claiming to ban it, revealing a gap between the platform’s claims and its actual content moderation practices.

X: Recent changes in ownership and policy have had a significant impact on the type and amount of harmful content circulating on X.[12] Reports from the Anti-Defamation League and the Center for Countering Digital Hate highlighted a surge in the daily rate of tweets containing misogynistic, racist, homophobic and transphobic terminology in the last few years.[13][14]

Similarly, a 2023 study by the BBC and the Institute for Strategic Dialogue found that the change of leadership and direction at X led to a rise in the creation of new accounts following known misogynistic and abusive channels, with the average number of such new accounts increasing by 69% after the change. The study suggested that the change in leadership created a perception of a more permissive environment for hate speech and misogyny.[15]

Further reading:

Studies: Harmful content on online platforms

Helplines

We understand that this research could be confronting or upsetting for some readers. If you or someone you know needs to talk:

  • Visit Netsafe to complete an online form to report any online safety issues or free call 0508 638 723 for support.
  • Free call or text 1737 anytime for support from a trained counsellor.
  • Free call Youthline 0800 376 633 or text 234 to talk with someone from a safe and youth-centred organisation.