Section: 3.1 Technology and online platforms

AI-powered tools and the creation of deepfakes

Deepfakes can re-victimise women and girls by circulating manipulated, non-consensual content, leading to public harassment and significant emotional harm.

Deepfakes are manipulated or synthesised media: convincing images, audio and videos that purport to depict individuals, often public figures, saying or doing things they never actually did. Deepfakes are typically created using AI algorithms that draw on datasets of real images and videos to generate realistic-looking media. For example, the technology can be used to superimpose a person’s face onto another person’s body.[1]

Deepfakes have the potential to spread misinformation, manipulate public opinion, and target and discredit individuals. They have been deployed in the political sphere, in traditional cyber threats, and in the dissemination of non-consensual deepfake pornography.[2]

What is deepfake pornography?

AI-generated pornography, also known as deepfake porn, refers to the use of advanced machine learning algorithms to manipulate images and videos of real individuals, superimposing their likenesses onto pornographic content.

A 2019 study[2] by cybersecurity firm Deeptrace found that between 2017 and 2019 there were approximately 15,000 deepfake videos online, 96% of which contained pornographic content in which women’s faces were overlaid onto naked or sexual images and videos. The consumers of this content were predominantly male, while the targets were overwhelmingly female, and the content amassed millions of views.[3][4]

Deepfakes are significantly more common in porn than in politics.

Sensity AI, a research company that has tracked online deepfake videos since 2018, found that by the end of 2020 roughly 85,000 deepfake videos were circulating online. Between 90% and 95% of them were non-consensual porn, and about 90% featured women.[5][6][7]

Deepfakes are much more prevalent in porn than in politics.[2][8][9] While women in public roles, including politicians, are targets, the majority of deepfakes do not focus on them; in fact, only 5% target politicians.[2] Initially, deepfakes were primarily used to superimpose images of celebrities onto explicit content. However, as the technology has become more accessible, everyday individuals, mainly women and girls, are becoming victims of image-based sexual abuse through deepfakes.[9][10]

For example, a Telegram bot was used to create simulated nude images of more than 680,000 women without their knowledge. The bot, discovered by Sensity AI, could produce a deepfake from just one photo. Most of these images, found in a Telegram group with over 101,000 members, were sourced from social media, and some of the victims were underage.[11]

The BBC[12] reported on the Sensity AI study, highlighting that at least 104,852 women were victims of publicly shared fake nude images between July 2019 and 2020. The investigation found that some images appeared to feature underage girls, suggesting the bot was being used to create and disseminate content of a paedophilic nature.

New AI-powered tools (including nudifying and pornifying tools) are making deepfakes more difficult to detect. While platforms such as Reddit[13] and Pornhub[14] have actively tried to ban these types of videos, the content is quick to resurface, making its way onto major discussion forums such as 4chan and 8kun, where moderation is absent.[10]

Further reading

Studies and articles: AI-powered tools and the creation of deepfakes

Helplines

We understand that this research could be confronting or upsetting for some readers. If you or someone you know needs to talk:

  • Visit Netsafe to complete an online form to report any online safety issues or free call 0508 638 723 for support.
  • Free call or text 1737 any time for support from a trained counsellor.
  • Free call Youthline 0800 376 633 or text 234 to talk with someone from a safe and youth-centred organisation.