By Kim Björn Becker | The introduction of the language model ChatGPT created plenty of hype around the use of artificial intelligence – not least in journalism. In a profession based on language, the new technology has a wide range of applications. Yet these new possibilities also raise questions about how the media deal with artificial intelligence (AI). Some editorial offices have begun to respond to this challenge by publishing their own AI guidelines, aiming to clarify the principles on which their use of algorithms is based. This paper conducts a comparative examination of the documents issued by seven international media organizations in order to gain a fundamental understanding of where editorial offices see opportunities and which pitfalls they address. The investigation covers two organizations each from Germany and the USA, as well as one each from the Netherlands, the United Kingdom, and Canada. The analysis shows that news agencies tend to have more concise rules, while public service broadcasters are subject to more comprehensive regulatory standards. Each editorial office sets its own focus: while almost all the guidelines cover human control of AI and questions of transparency, requirements for trustworthy algorithms receive less attention. The investigation shows that, although media are already addressing fundamental questions raised by the new technology, newsrooms still have blind spots when it comes to dealing with AI.