Generative visual AI in newsrooms: Considerations related to production, presentation, and audience interpretation and impact
By T. J. Thomson and Ryan J. Thomas | AI services that provide responses to prompts, such as ChatGPT, have ignited passionate discussions over the future of learning, work, and creativity. AI-enabled text-to-image generators, such as Midjourney, pose profound questions about the purpose, meaning, and value of images, yet they have received considerably less research attention, despite the implications they raise for both the production and consumption of images. This essay explores key considerations that journalists and news organizations should be aware of when conceiving, sourcing, presenting, or seeking to fact-check AI-generated images. Specifically, it addresses transparency around how algorithms work, discusses provenance and algorithmic bias, touches on labor ethics and the displacement of traditional lens-based workers, explores copyright implications, identifies potential impacts on the accuracy and representativeness of the images audiences see in their news, and reflects on the lack of regulation and policy development governing the use of AI-generated images in news. We explore these themes through the insights of eight photo editors, or those in equivalent roles, at leading news organizations in Australia and the United States.