AI-generated image-based sexual abuse: prevalence, attitudes, and impacts

This project explores the prevalence, nature, and impacts of AI-generated image-based sexual abuse (AI-IBSA), including non-consensual “deepfake pornography” and other intimate images (photos or videos) created using generative AI.

The research also examines the ethical, legal and policy implications of AI-generated image-based sexual abuse, and people’s attitudes, opinions, and perspectives on this topic.

Description

This research project on artificial intelligence and image-based sexual abuse (AI-IBSA) is a collaboration between RMIT University, Deakin University, Queensland University of Technology, and the Google Trust & Safety Research team. The project aims to:

  1. Investigate the general public’s attitudes towards AI-IBSA;
  2. Empirically measure and examine the prevalence of AI-IBSA perpetration and victimisation;
  3. Investigate how different forms of AI (e.g., intimate image generators) are being used, or could be used, to perpetrate AI-IBSA; and
  4. Explore the digital tools or approaches that societal actors are using, or could use, to detect, prevent and respond to AI-IBSA.

Through a desk-based review, interviews with stakeholders, and a multi-country online survey, this project will generate much-needed empirical evidence on AI-IBSA. Expected benefits include an increased understanding of how different types of AI technologies are being, or could be, used to perpetrate AI-IBSA; an understanding of how digital tools are being, or could be, used to detect, prevent and respond to this type of abuse more effectively in practice; enhanced industry and scholarly collaborations; research outputs; and recommendations for research, policy, and practice.

Please note:

Image-based sexual abuse (IBSA) refers to the non-consensual taking, creating, or sharing of intimate (nude or sexual) images, including threats to share intimate images.

‘Generative AI’ is a form of AI in which algorithms are trained on existing data (e.g., books, web text, photos, videos) to create new, high-quality content, such as text, photos, or videos.

Deepfakes are realistic-looking but fake photos or videos created using AI to depict a person doing or saying something that they have not done or said. These images are created by superimposing or ‘stitching’ a person’s face onto another person’s body, or by altering a person’s voice, facial expressions, or body movements in a video.

SERC researchers

  • Nicola Henry
  • Lisa Given
  • Alexa Ridgway
  • Gemma Beard

Project dates

2024

Funding body

Google


Acknowledgement of Country

RMIT University acknowledges the people of the Woi wurrung and Boon wurrung language groups of the eastern Kulin Nation on whose unceded lands we conduct the business of the University. RMIT University respectfully acknowledges their Ancestors and Elders, past and present. RMIT also acknowledges the Traditional Custodians and their Ancestors of the lands and waters across Australia where we conduct our business.

Artwork: ‘Sentient’ by Hollie Johnson, Gunaikurnai and Monero Ngarigo.
