Banning generative AI use not the answer
The study found five of the surveyed outlets barred staff from using AI to generate images, though three of those barred only photorealistic images. Others allowed AI-generated images if the story itself was about AI.
“Many of the policies I’ve seen from media organisations about generative AI are general and abstract. If a media outlet creates an AI policy, it needs to consider all forms of communication, including images and videos, and provide more concrete guidance,” Thomson said.
“Banning generative AI outright would likely be a competitive disadvantage and almost impossible to enforce.
“It would also deprive media workers of the technology’s benefits, such as using AI to recognise faces or objects in visuals to enrich metadata and to help with captioning.”
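To make the benefit Thomson describes concrete, the sketch below shows how a newsroom tool might use off-the-shelf vision models to enrich photo metadata and draft captions. It is an illustration only, not the study's tooling; the specific models and the `enrich_metadata` helper are assumptions chosen for demonstration.

```python
# Illustrative sketch (not from the study): enriching photo metadata with
# off-the-shelf vision models from the Hugging Face transformers library.
# The model choices below are assumptions for demonstration purposes.
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def enrich_metadata(image_path: str) -> dict:
    """Return candidate keywords and a draft caption for one image file."""
    detections = detector(image_path)
    # Keep confidently detected object labels as candidate metadata keywords.
    keywords = sorted({d["label"] for d in detections if d["score"] > 0.8})
    # Produce a draft caption for a human editor to review and correct.
    draft = captioner(image_path)[0]["generated_text"]
    return {"keywords": keywords, "draft_caption": draft}

print(enrich_metadata("newsroom_photo.jpg"))
```

In practice the output would feed an editor-facing review step rather than being published directly, keeping a human in the loop for accuracy and context.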
Thomson said Australia was still at “the back of the pack” when it came to AI regulation, with the US and the EU leading.
“Australia’s population is much smaller, so our resources limit our ability to be flexible and adaptive,” he said.
“However, there is also a wait-and-see attitude where we are watching what other countries are doing so we can improve or emulate their approaches.
“I think it’s good to be proactive, whether that’s from government or a media organisation. If we can show we are being proactive to make the internet a safer place, it shows leadership and can shape conversations around AI.”
Algorithmic bias affecting trust
The study found journalists were concerned about how algorithmic bias could perpetuate stereotypes around gender, race, sexuality and ability, leading to reputational risk and distrust of media.
“We had a photo editor in our study type a detailed prompt into a text-to-image generator to show a South Asian woman wearing a top and pants,” Thomson said.
“Despite detailing the woman’s clothing, the generator persisted in creating an image of a South Asian woman wearing a sari.
“Problems like this stem from a lack of diversity in the training data, and it leads us to question how representative our training data are, and to consider who is being represented in our news and stock photos, but also in cinema and video games, all of which can be used to train these algorithms.”
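A test like the photo editor’s is straightforward to reproduce. The sketch below shows one way to probe a text-to-image model with a detailed clothing prompt and inspect the samples for stereotyped defaults. The study does not name the generator used, so this example assumes an open model run through the diffusers library; the prompt wording is likewise an assumption.

```python
# Minimal bias-probe sketch. Assumption: the study's actual generator is
# not named, so an open Stable Diffusion model is used for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A detailed prompt specifying the clothing, as in the editor's test.
prompt = "photo of a South Asian woman wearing a plain top and pants"

# Generate several samples; reviewing them shows whether the model
# defaults to stereotyped clothing despite the explicit description.
images = pipe(prompt, num_images_per_prompt=4).images
for i, img in enumerate(images):
    img.save(f"sample_{i}.png")
```

Running such probes across demographic descriptors is one low-cost way for a newsroom to audit a generator before relying on it.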
Copyright was also a concern for photo editors as many text-to-image generators were not transparent about where their source materials came from.
While generative AI copyright cases, such as The New York Times’ lawsuit against OpenAI, are making their way through the courts, Thomson said it is still an evolving area of law.
“Being more conservative and only using third-party AI generators that are trained on proprietary data or only using them for brainstorming or research rather than publication can lessen the legal risk while the courts settle the copyright question,” he said.
“Another option is to train models on an organisation's own content; that way, the organisation can be confident it owns the copyright to the resulting generations.”