Image-based abuse – when someone takes, shares or threatens to share nude, semi-nude or sexual images or videos without consent – is a growing problem, experienced by 1 in 3 Australians surveyed in 2019.
Professor Nicola Henry from RMIT's Social and Global Studies Centre, lead researcher behind ‘Umibot’, said image-based abuse also includes ‘deepfake’ content (fake videos or images generated using AI), incidents where people are pressured into creating sexual content, and the sending of unsolicited sexual images or videos.
“It's a huge violation of trust that’s designed to shame, punish or humiliate. It’s often a way for perpetrators to exert power and control over others,” said Henry, who is an Australian Research Council Future Fellow.
“A lot of victim-survivors we talked to just want the issue to go away and the content to be taken down or removed but often they don’t know where to go for help.”
That gap is what the pilot chatbot is designed to address.
The idea came to Henry after conducting interviews with victim-survivors about their experiences of image-based abuse.
While the people she spoke to had diverse experiences, Henry said they often did not know where to go for help and some did not know that what had happened to them was a crime.
“The victim-survivors we interviewed said they were often blamed by friends, family members and others and made to feel ashamed, which made them even more reluctant to seek help,” Henry said.
Dr Alice Witt, an RMIT Research Fellow working on the project with Henry, said Umibot is not a replacement for human support, but is designed to help people navigate complex pathways, giving them options for reporting along with tips on collecting evidence and keeping safe online.
“It is not just for victim-survivors,” Witt said.
“Umibot is also designed to help bystanders, and even perpetrators, as a potential tool for preventing this abuse from happening.”