Section: 6.1 Government responses

Government Responses to Online Misogyny

This section reviews how different governments respond to harmful online content, including intimate image abuse. The overview outlines some of the measures taken by each country to address the spread of harmful and illegal content online, including the adoption of online safety regulation or policies. It also highlights where countries have enacted or are in the process of enacting laws specifically to address online manifestations of misogyny, with a focus on intimate image abuse.

Please note that this information is not all-inclusive and is accurate as of the date it was published.

What is online misogyny?

In this report, we define online misogyny as any form of misogyny that occurs online, whether through social media platforms, messaging apps, or other digital platforms. It includes behaviours and discourse that target women and girls because of their gender and that reinforce gender stereotypes or inequalities. Online misogyny can take many forms, such as threats, stalking, harassment and hate speech, and it impacts women and girls disproportionately.

The United Kingdom

The UK's Online Safety Act was passed on 26 October 2023 and the majority of its provisions were implemented on 10 January 2024. The groundwork for this legislation began with the Online Harms White Paper published in April 2019. The Act was catalysed by high-profile incidents, including the tragic suicide of 14-year-old Molly Russell, which spotlighted the risks of children accessing harmful online content, and the Cambridge Analytica scandal, both of which underscored the urgent need for regulatory oversight of online platforms.1

The UK Government published a draft of the Online Safety Bill in May 2021. While the draft Bill did not directly address misogyny, it tackled content that is shaped by and reinforces misogyny. The Bill required platforms to remove content that violates their policies, remove all content that is illegal under UK law, and ensure children do not see harmful content. Illegal content relevant to misogyny might include controlling or coercive behaviour, extreme sexual violence, hate crimes, people smuggling, revenge porn, sexual exploitation, terrorism, or inciting violence. Harmful content that platforms will need to protect children from encountering online includes pornography, online abuse, cyber-bullying and online harassment, and non-criminal content promoting eating disorders, self-harm or suicide. Ofcom has regulatory powers and the ability to impose penalties for non-compliance.2

In June 2023, amendments to the Online Safety Bill were tabled at Lords report stage with the intention of refining the legislative framework established by the Bill. We discuss below two key amendments that were included in the final Act:

The first key change addresses violence against women and girls across all UK regions – England, Wales, Scotland and Northern Ireland. It requires social media firms to address online misogyny and sexist abuse on their platforms. Additionally, it mandates Ofcom to create guidance that includes measures to mitigate harm targeting women and girls online, alongside best practices, as outlined in section 54 of the Online Safety Act 2023.3

The second major change focuses on abuse involving intimate images and applies only to England and Wales. It repeals sections 33 to 35 of the Criminal Justice and Courts Act 2015, which dealt with disclosing or threatening to disclose private sexual photographs and films with intent to cause distress, commonly known as ‘revenge porn’. Instead, it introduces a more comprehensive framework consisting of four new offences under section 188 of the Online Safety Act 2023. These include:

  • A base offence of sharing intimate images without consent (inserted as section 66B(1) into the Sexual Offences Act 2003), removing the requirement to prove intent to cause distress.
  • Two more serious offences involving sharing intimate images either with the intent to cause alarm, distress, or humiliation, or for sexual gratification (inserted as sections 66B(2) & 66B(3) into the Sexual Offences Act 2003).
  • An offence of threatening to share intimate images (inserted as section 66B(4) into the Sexual Offences Act 2003), which does not require that an actual image exists, thus addressing threats involving non-existent images (inserted as section 66B(7) into the Sexual Offences Act 2003).

Furthermore, ‘cyberflashing’ is criminalised under section 187 of the Online Safety Act 2023 (applicable only in England and Wales). This section, inserted as section 66A into the Sexual Offences Act 2003, addresses the intentional sending of images of genitals with intent to cause alarm, distress or humiliation, or for sexual gratification while being reckless as to whether the recipient is caused alarm, distress or humiliation.

These offences cover images created by AI, such as ‘deepfakes’, which were not covered by the now-repealed ‘revenge porn’ offence under section 33 of the Criminal Justice and Courts Act 2015 or other related offences.

Victims of these offences are now granted automatic lifelong anonymity across the UK, under section 2(1)(da) of the Sexual Offences (Amendment) Act 1992, which ensures enhanced protection for victims by maintaining their confidentiality permanently.

The new offences are included under the provisions for Sexual Harm Prevention Orders, which permit courts to restrict individuals' behaviour to prevent sexual harm to the public. Moreover, these offences carry sex offender registration requirements, dependent on factors such as the severity of the sentence and the perpetrator's age. As a result, certain offenders are listed on the sex offenders register, in accordance with section 191(3) of the Online Safety Act 2023 and section 80 of, and Schedule 3 to, the Sexual Offences Act 2003. This provision applies to England, Wales, Scotland and Northern Ireland.

The UK is also introducing several new criminal offences to address the unauthorised taking or recording of intimate images, as well as the installation of equipment to facilitate such offences. These initiatives, which follow recommendations from the Law Commission’s 2022 report on intimate image abuse, are part of a broader effort to build on the existing offences in the Sexual Offences Act 2003 and those created through the Online Safety Act 2023.4

The Criminal Justice Bill: Intimate images5 intends to repeal two existing voyeurism offences (sections 67(3) and 67A(2) of the Sexual Offences Act 2003), which relate to recording a person doing a private act and recording an image beneath a person’s clothing (e.g. up-skirting or down-blousing), and to replace them with three new offences: a base offence of intentionally taking or recording an intimate image without consent, and two more serious offences of taking or recording an intimate image or film with intent to cause alarm, distress or humiliation, or for sexual gratification, with the latter potentially placing offenders on the sex offenders register.

These new offences are designed to align with and complement the existing regulations against sharing or threatening to share intimate images, introduced by the Online Safety Act 2023. They also adopt the Law Commission’s definitions of ‘intimate,’ which include nude, partially nude, sexual, or toileting behaviour.4

Moreover, new offences will be introduced to criminalise the installation or modification of equipment intended to facilitate the unauthorised taking of intimate images. This is in addition to the current offence of installing equipment for observation under section 67(4) of the Sexual Offences Act 2003. Victims of these offences will also qualify for anonymity and special measures, enhancing their protection under the law.

See the UK Government Response to: Gendered Hate.

See the UK Government Response to: Misogynistic Extremism.

Scotland and Northern Ireland

Section 239 of the Online Safety Act outlines its territorial extent, stating that while the Act extends to the entire United Kingdom, its practical application varies by region. Specific provisions of the Act do not apply uniformly across all areas. For instance, the communications offences in sections 179 to 183 and the repeal in section 189(1) do not apply in Scotland. The provisions listed in section 239(3) extend to England and Wales only, section 214(4) to (6) applies only to Scotland, and sections 189(3) and 214(7) to (9) apply only to Northern Ireland. Consequently, laws regarding intimate image abuse differ throughout the UK, with Northern Ireland and Scotland maintaining separate legislation.

In Scotland, it is illegal to share or threaten to share intimate images if the intent is to cause fear, alarm, or distress, or if there is recklessness in doing so. An intimate image is legally defined in Scotland as any image of a sexual act or showing exposed genitals, buttocks, or breasts, either bare or covered only by underwear, including manipulated images such as deepfakes. Intimate image abuse offences fall under the Abusive Behaviour and Sexual Harm (Scotland) Act 2016 while voyeurism offences are covered under the Sexual Offences (Scotland) Act 2009.

In Northern Ireland, the law also prohibits the sharing or threatening to share intimate images with the intent to cause distress. The definition of an intimate image here includes any nude image that displays all or part of the genitals or pubic area but does not cover deepfakes. Intimate image abuse offences fall under the Justice Act (Northern Ireland) 2016 while voyeurism offences are covered under The Sexual Offences (Northern Ireland) Order 2008.

See Scotland and Northern Ireland Response to: Gendered Hate.

See Scotland and Northern Ireland Response to: Misogynistic Extremism.

Further Reading

Ireland

In December 2022, Ireland enacted the Online Safety and Media Regulation Act (OSMR Act), significantly enhancing the regulatory framework for managing online content.6,7 This Act establishes robust rules to combat harmful online content by building on existing data protection and criminal justice measures. Key provisions of the Act include the establishment of the new Media Commission and the appointment of an online safety commissioner. These authorities are empowered to enforce online safety codes, issue fines, sanction and block platforms, and initiate criminal proceedings against senior management of non-compliant platforms.6

The Act also aims to complement the Hate Crime Bill, which recognises misogyny as a hate crime, extending its application to online speech (See Government Responses to Gendered Hate). On 22 February 2023, Catherine Martin, Ireland's Minister for Tourism, Culture, Arts, Gaeltacht, Sport & Media, officially established the Coimisiún na Meán (Media Commission) effective from 15 March 2023. This body, which replaces the Broadcasting Authority of Ireland, is tasked with overseeing Ireland's regulatory framework for online safety, broadcasting services, and video-on-demand services, aligning with the EU's Audiovisual Media Services Directive.

Coimisiún na Meán also acts as Ireland’s digital services coordinator under the Digital Services Act (DSA), responsible for enforcing its provisions within Ireland. Since 15 March 2023, the commission has managed a comprehensive online safety regime, focusing on regulating harmful content and safeguarding online users, particularly children.6,7

Under Part 11 of the OSMR Act, online safety codes are developed for video-sharing platforms and other relevant online services hosting user-generated content. These codes are designed to minimise the availability of harmful content and protect users through stringent service standards, content moderation, risk assessment, and user complaint handling. Organisations (media service providers and online services) in breach of these codes face severe penalties, including fines of up to €20 million or 10% of their annual turnover.

Online services must now assess whether they fall under the "relevant online services" category, review their content for potential harms, and adjust their complaint mechanisms and safety measures to comply with the new regulations. This preparation is crucial as the obligations under the OSMR Act may overlap with those of the DSA, necessitating a comprehensive approach to compliance.

In Irish Law, it is illegal to share or threaten to share intimate images with the intent to cause distress. This is stipulated under the Harassment, Harmful Communications and Related Offences Act (2020), which defines an intimate image as any nude image displaying all or part of a person's exposed genitals or pubic area, including deepfakes. The Act specifically addresses the unauthorised sharing of intimate images (section 2), threats to share such images (section 4), and the unauthorised recording of intimate images (section 3), providing clear legal recourse against these violations. Additionally, section 5 ensures anonymity for the victims. These offences are further encompassed under the OSMR Act 2022 in Schedule 3, which lists offence-specific categories of harmful online content.8

See the Republic of Ireland Response to: Gendered Hate

The European Union

The EU Digital Services Act (DSA) was adopted by the Council and the European Parliament in 2022.9 The DSA holds social media platforms accountable for content posted on their sites in an effort to protect users. The DSA provisions include taking action against risks such as disinformation and cyber-violence towards women. Very large platforms and search engines will have to analyse the systemic risks they create and how to mitigate them.

The DSA outlines four categories of systemic risks (recitals 80 to 83) that should be assessed in depth by the online platforms. These are:

  • Risks associated with the dissemination of illegal content, such as child sexual abuse material or illegal hate speech or other types of misuse of their services for criminal offences.
  • The actual or foreseeable impact of the service on the exercise of fundamental rights, including but not limited to human dignity, freedom of expression and of information, including media freedom.
  • The actual or foreseeable negative effects on democratic processes, civic discourse and electoral processes, as well as public security.
  • Risks stemming from concerns relating to the design, functioning or use, including through manipulation, of very large online platforms and of very large online search engines with an actual or foreseeable negative effect on the protection of public health, minors and serious negative consequences to a person's physical and mental well-being, or on gender-based violence (GBV).9,10

The inclusion of GBV as a systemic risk is in line with the EU’s aim to criminalise certain types of violence against women, as seen in the March 2022 proposal for a Directive of the European Parliament and of the Council on combating violence against women and domestic violence. The directive seeks to establish minimum criminal standards for online stalking, non-consensual sharing of intimate or manipulated content, and promotion of violence or hatred online.11

See the European Union Response to: Gendered Hate.

See the European Union Response to: Misogynistic Extremism.

The United States of America

The US has taken several steps to address online misogyny and gender-based violence. These include establishing the White House Task Force to Address Online Harassment and Abuse through a presidential memorandum signed by President Biden on 16 June 2022.12 The Task Force aims to prevent and address technology-facilitated gender-based violence, including online harassment and abuse, which disproportionately impacts women, girls, people of colour and LGBTQI+ individuals.

On 15 March 2022, the President signed the Consolidated Appropriations Act of 2022 (Public Law 117-103, 136 Stat. 49), which encompasses the Violence Against Women Act Reauthorization Act of 2022 (VAWA 2022). This legislation renews the original Violence Against Women Act of 1994 and introduces amendments to enhance protections and support for victims of domestic violence, sexual assault, stalking and sex trafficking.13,14 It addresses technological abuse, defined broadly as any abusive behaviour using technology that is intended to harm or control another person, encompassing acts such as harassment, impersonation, and exploitation using digital means.

At the state level, almost all US states (plus the District of Columbia, Puerto Rico, and Guam) have laws criminalising nonconsensual pornography, which covers intimate image abuse. In the majority of these jurisdictions, distributing such content is a criminal offence, particularly when the perpetrator disseminates the images with specific harmful intent—such as to harass or intimidate—or with some level of awareness, whether direct or inferred through recklessness or negligence, that the depicted person did not consent to the disclosure.15

Despite the absence of specific federal criminal offences for distributing nonconsensual pornography, such actions can still breach various federal laws. For example, distributing nonconsensual pornography over the internet or across state lines might contravene laws against child sexual exploitation if it involves individuals under the age of 18. Other related offences may include threats, extortion, or harassment.15

To address these legal gaps, section 1309 of VAWA 2022, effective from 1 October 2022, introduced a private right of action, allowing individuals whose intimate images are disclosed without their consent to sue the perpetrator.16 Under this provision, to win such a case a plaintiff must demonstrate that the defendant disclosed the images knowingly without consent or with reckless disregard for the plaintiff's consent, which typically involves ignoring a significant risk.15,16 Successful plaintiffs under section 1309 may receive monetary damages and injunctions preventing further disclosures of the images.15,16

One multinational effort to address online harassment and abuse of women is the Global Partnership for Action on Gender-Based Online Harassment and Abuse, which was launched during the 2022 meeting of the United Nations Commission on the Status of Women.17,18 Current members of the Global Partnership include Australia, Canada, Chile, Denmark, Iceland, Kenya, Mexico, New Zealand, the Republic of Korea, Sweden, the United Kingdom, and the United States.19

See the United States Government Response to: Gendered Hate.

See the United States Government Response to: Misogynistic Extremism.

Further reading

Canada

The Canadian Government committed to addressing online safety issues and to implementing a regulatory framework for online safety in Canada. The framework aims to address harmful content on online platforms and create a safe online space.

In March 2022, the Government established an expert advisory group on online safety, tasked with providing recommendations on how to address harmful content on online platforms. The Citizens’ Assembly on Democratic Expression released its report in September 2022 with recommendations for reducing online harms and safeguarding human rights in Canada.20

The Assembly unanimously emphasised the need for immediate and comprehensive regulations to address the harmful actions of individuals who exploit, harass, and harm Canadians online.20 Another important initiative by the Canadian Government is the Digital Citizen Initiative, which aims to equip Canadians with the tools and skills needed to critically assess online information, including misinformation and harmful and extremist content.21

Section 162.1 of the Criminal Code, as amended by the Protecting Canadians from Online Crime Act (S.C. 2014, c. 31), makes it illegal (a criminal offence) for a person to knowingly publish, distribute, transmit, sell, make available or advertise an intimate image of a person where the person in that image did not give their consent.

When such cases go to trial, both the Criminal Code and the Sex Offender Information Registration Act (SOIRA) are applicable. If convicted under this section of the Criminal Code, the offender may be subject to a SOIRA order, although such orders are not mandatory. The decision to impose a SOIRA order is at the discretion of the court, based on a request from the Crown.22

Bill S-12, which proposed amendments to SOIRA, included changes to expand the scope of offences related to the non-consensual distribution of intimate images. Specifically, the bill sought to classify the non-consensual distribution of intimate images as a primary designated offence. It also proposed amendments to the Criminal Code concerning publication bans, adding the offence of publishing an intimate image without consent to the list of offences for which a publication ban can be ordered.

However, Bill S-12 did not specifically address the issue of deepfake image-based sexual abuse. This means that under the proposed amendments, creating a pornographic deepfake of another adult, without other criminal elements, remains in a legal grey area under Canadian law. Nonetheless, any creation or consumption involving child pornography deepfakes would still likely lead to charges under child pornography offences.

See the Government of Canada’s Response to: Gendered Hate.

See the Government of Canada’s Response to: Misogynistic Extremism.

Australia

In Australia, there are two primary penalty regimes for handling intimate image abuse. The first is a civil penalty regime managed by the eSafety Commissioner through the Image-Based Abuse and Adult Cyber Bullying schemes of the Online Safety Act 2021. The second is a criminal penalty regime, governed by the Criminal Code, specifically involving an aggravated offence under section 474.17A, which addresses the use of carriage services to menace, harass, or cause offence with private sexual material.23

Australia’s Online Safety Act 2021 came into effect on 23 January 2022. The Act makes platforms more accountable for user safety and requires providers to remove seriously harmful content.24 The eSafety Commissioner, an independent regulator, oversees several schemes to improve online user safety and content regulation and to address online abuse and harmful behaviour, including schemes for Adult Cyber Bullying, Image-Based Abuse, and Illegal and Restricted Materials (see Australia’s response to misogynistic extremism).24

Part 6 of the Act, the Image-Based Abuse Scheme, allows the eSafety Commissioner to help when intimate images of someone are shared without their consent, or where a person initially consented to their images being shared online. The scheme is designed to help with images showing any of the following: private body parts; a person during a private activity, such as undressing, showering, bathing, using the toilet, or sexual activity; and a person without their normally worn religiously or culturally significant clothing. This also includes images that have been altered, for example with editing software or deepfake technology, and images that a perpetrator claims are of the victim but are not.

The eSafety Commissioner has the authority to issue a removal notice to platforms hosting such content, demanding the removal of the intimate image within 24 hours, although this timeframe can be extended at the Commissioner's discretion. The Image-Based Abuse Scheme also imposes significant penalties on anyone who posts or threatens to post intimate images without the consent of the depicted person.

Part 7 of the Act, the Adult Cyber Abuse Scheme, gives the eSafety Commissioner the authority to require online service providers to remove online abuse that targets Australian adults with the intention of causing serious harm. For the scheme to apply, the abuse must reach a high threshold of ‘serious harm’: it must be intended to cause serious physical or psychological harm, such as threats going beyond ordinary fear, and it must be menacing, harassing or offensive in all the circumstances. The scheme therefore covers only abuse that endangers or could endanger a person’s life or could have some lasting effect on them, excluding offensive and harassing content that does not meet these criteria.

Criminal offences for image-based abuse can be found under section 474.17A of the Criminal Code Act 1995. Since 1 September 2018, specific aggravated offences have been included in the Criminal Code under sections 474.17A(1) and (4).23

These provisions are invoked if an online offence involves menacing, harassing, or causing offence through the transmission or sharing of private sexual material, which could result in a standard aggravated offence being established. If an individual has been subject to three or more civil penalty orders, a special aggravated offence may also be proven.

It is important to note that these offences specifically apply to material depicting adults. Separate provisions under the Criminal Code address ‘child abuse material’.

With the implementation of the new civil penalty regime for adult cyber-abuse by the eSafety Commissioner on 24 January 2022, which extends beyond the sharing of private sexual material, individuals who have received three or more civil penalty orders for adult cyber-abuse may now face the special aggravated offence under Section 474.17A(4) of the Criminal Code.

Victims of image-based abuse can also seek protection orders through the police or legal services to prevent further abuse. These orders can prohibit actions like sharing intimate images or videos, approaching or contacting the victim, or monitoring the victim's movements. Breaching a protection order is a criminal offence.

See the Government of Australia’s Response to: Gendered Hate.

See the Government of Australia’s Response to: Misogynistic Extremism.

Further reading

New Zealand

The Harmful Digital Communications Act 2015 (HDCA) addresses online harms including bullying, harassment, abuse and the non-consensual sharing of intimate images and videos. Netsafe, an independent non-profit organisation, is responsible for administering the HDCA in New Zealand. New Zealanders can report harmful content such as sextortion, image-based abuse, bullying and harassment to Netsafe.

The HDCA was amended through the Harmful Digital Communications (Unauthorised Posting of Intimate Visual Recording) Amendment Act 2022, which took effect in March 2022. This amendment inserted two new sections into the Act (sections 22A and 22B).

Section 22A establishes it as an offence to share an intimate visual recording without the express consent of the person depicted. Under this section, posting such a recording without a reasonable excuse, knowing the victim has not consented, or being reckless about the victim's consent, constitutes an offence. The law specifies that individuals aged under 16 years cannot consent to the posting of such recordings. Violations can lead to severe penalties, including imprisonment for up to two years or fines up to $50,000 for natural persons, and fines up to $200,000 for corporate entities.

Section 22B allows the courts to issue civil orders during proceedings to remove or disable intimate recordings that have been shared without consent (for offences under section 22A). This includes orders to take down material, prevent further similar communications, and stop the continuation of the offending conduct.

Before this amendment, proving an offence required showing intent to cause harm and actual harm. The amendment removed this requirement for cases involving intimate visual recordings, making the unauthorised sharing itself an offence regardless of intended or actual harm.

Currently, New Zealand's legal landscape lacks explicit provisions that specifically address abuse facilitated by AI technology such as deepfakes. Under the Films, Videos, and Publications Classification Act 1993, the Chief Censor can classify deepfake publications, but only if classification criteria are met, for example where they depict a minor engaged in sexual activity or being sexually abused.

Sections 216G to 216N of the Crimes Act 1961 deal with the offences of creating, possessing and distributing an intimate visual recording of another person.

Section 216G of the Act defines an intimate visual recording as one made with any device, without the knowledge or consent of the person depicted; examples include photographs, videotapes and digital images. The definition covers recordings of individuals in private settings where privacy is expected and they may be naked, partially clothed, or engaged in intimate activities such as sexual acts, showering, or changing.

It also covers recordings made from beneath or through a person’s clothing inappropriately (commonly known as up-skirting and down-blousing). The definition extends to recordings transmitted in real-time without being stored, either physically or electronically, yet capable of being reproduced.

The Aotearoa New Zealand Code of Practice for Online Safety and Harms is a voluntary code of accountability for platforms. Signatories to the code are accountable to the Code’s administrator and multi-stakeholder sub-committee. Signatories’ commitments include managing bullying and harassment, hate speech, incitement of violence, and violent or graphic content.25

See the Government of New Zealand’s Response to: Gendered Hate.

See the Government of New Zealand’s Response to: Misogynistic Extremism.

Further reading

Helplines

We understand that this research could be confronting or upsetting for some readers. If you or someone you know needs to talk:

  • Free call Women’s Refuge 0800 733 843 for support for women and children experiencing family violence.
  • Visit Netsafe to complete an online form to report any online safety issues or free call 0508 638 723 for support.
  • Free call or text 1737 any time for support from a trained counsellor.
  • Free call Youthline 0800 376 633 or text 234 to talk with someone from a safe and youth-centred organisation.
  • Free call Safe to Talk 0800 044 334 or text 4334 anytime for support about sexual harm.
  • Free call OutLine Aotearoa 0800 688 5463 any evening to talk to trained volunteers from Aotearoa's rainbow communities.