By Kim Björn Becker
Abstract: The introduction of the language model ChatGPT created plenty of hype around the use of artificial intelligence – not least in journalism. In a profession based around language, the new technology has a wide range of applications. Yet these new possibilities also give rise to questions about how the media deal with artificial intelligence (AI). Some editorial offices have now begun to react to the challenge by publishing their own AI guidelines, aiming to clarify the principles on which their use of algorithms is based. This paper conducts a comparative examination of the documents issued by seven international media in order to gain a fundamental understanding of where the editorial offices see opportunities and the pitfalls they address. The investigation looks at two organizations each from Germany and the USA, as well as one each from the Netherlands, the United Kingdom, and Canada. The analysis shows that news agencies tend to have more concise rules, while public service broadcasters are subjected to more comprehensive regulatory standards. Each editorial office sets its own focus: While almost all the media’s guidelines cover human control of AI and questions of transparency, there is less focus on requirements for trustworthy algorithms. The investigation shows that, although media are already looking at fundamental questions thrown up by the new technology, newsrooms still have blind spots when it comes to dealing with AI.
1. Introduction: Why editorial offices are not simply doing nothing
Editorial offices are already using artificial intelligence (AI) at practically every stage of the journalistic process (cf. Diakopoulos 2019: 76f.; Buxmann/Schmidt 2022). The new technical possibilities in research, production, and distribution present new, urgent questions for media organizations. How should they deal responsibly with language models like ChatGPT? Which goals should guide their use of AI in the newsroom? And what hazards do they need to steer clear of if at all possible? Given that distortions of content and factual errors could have a direct impact on the perceived credibility of reporting, editorial offices and publishers need clarity quickly.
Professional ethics provides a good basis for finding initial answers to these questions. Yet the journalistic principles set out by specialist organizations like the Deutscher Presserat [German Press Council] and the USA’s Society of Professional Journalists tend to be very general and too abstract for these challenging issues. In Brussels, the Raad voor de Journalistiek [Press Council] has published guidelines that supplement the Belgian Press Code and focus on transparency in the way journalists work. They state that users need to know when an algorithm has been used to help write a story (cf. Raad voor de Journalistiek n.y.). In Spain, the Catalan press council issued recommendations to editorial offices in December 2021, setting out eight rules that can be used to avoid potential pitfalls in data usage, transparency, and algorithmic distortion (cf. Ventura Pocino 2021). And in Germany, the Deutscher Journalisten-Verband [German Federation of Journalists, DJV] published a position paper in late April 2023 in which it made clear that editorial offices cannot »steal away« from responsibility for content created with the involvement of AI (Deutscher Journalisten-Verband 2023: 1). It states that the actions of AI applications are »far removed from ethics and a value system« and that such applications are therefore »not able to take on the watchdog function that journalists have always held« (ibid.). In nine content-related points, the Federation then sets out the boundaries for the use of AI in journalism, intended to ensure that the new technology is handled responsibly and transparently (cf. Deutscher Journalisten-Verband 2023: 2f.).
Yet, looking at the specific way such guidelines are implemented in the newsroom, editorial offices are often on their own when it comes to dealing with AI – not least because implementing algorithms in media organizations can be seen as a challenging communicative task (cf. Skrubbeltrang Mahnke/Karlin 2023) for which little training material is available. A few organizations have begun to react to this by issuing their own guidelines for dealing with AI in the newsroom. Yet the responses from the editors responsible vary widely. This paper looks into seven sets of guidelines from international media in terms of both form and content, with the aim of gaining a fundamental understanding of early approaches to self-regulation in the media. How detailed and binding are the rules that media houses set themselves? At which objectives and journalistic values is their commitment to AI aimed? And does a human need to check and approve every item of news in which a large language model was involved? By conducting a comparative investigation into seven regulatory documents, this paper intends to create a fundamental understanding of the editorial fields of application that international media generally consider suitable for algorithmic applications, the requirements they set for responsible AI, and the situation regarding transparency for the user.
2. Recommendations, legal codes, and guiding questions: Formal aspects of editorial AI guidelines
Each set of guidelines on dealing with AI is different, in both form and content. Some boundaries are set out in the form of a powerful journalistic text; others are formal and matter-of-fact like an ordinance from a ministry. Some documents link each rule to ethical self-assurance, others are like algorithms in themselves – work instructions intended to train interdisciplinary project teams. And while one document sets out laws that seem unbreakable, another goes no further than offering careful guiding questions.
Before analyzing the content, this paper first takes a look at how the relevant documents were created in the first place. Who published them and when? What standards did the authors want to achieve? And in how much detail are the specifications for dealing with AI set out?
Which media have issued their own guidelines?
In liberal democracies, media are usually free to report as they choose, their internal processes not subject to any monitoring by state authorities. Media houses are therefore under no obligation to set out guidelines for dealing with AI – nor are there central bodies where such guidelines need to be recorded. That results in a problem for this investigation: It is all but impossible to state the number of editorial offices that had set out their own AI guidelines by the end of April 2023. As a result, it is only possible to investigate those documents that editorial offices have themselves made public or specifically made available for research.
The study looks at seven sets of guidelines. Six of them were published by the respective organizations themselves; the guidelines of the Dutch news agency Algemeen Nederlands Persbureau, ANP, became public via social media (cf. ANP 2023). The investigation covers documents from media organizations from five Western countries: two each from Germany and the United States, and one each from Canada, the United Kingdom, and the Netherlands. Making up four of the seven organizations, news agencies dominate the investigation. They include ANP, based in The Hague, and the dpa or Deutsche Presse-Agentur (cf. dpa 2023), based in Hamburg. Both of these report predominantly from their respective countries. They are joined by two agencies with an international focus: the New York City-based Associated Press (cf. AP n.y.), AP for short, and the internationally active agency Thomson Reuters (cf. Thomson Reuters n.y.), based in Toronto. There are also documents from two public service broadcasters: Munich-based Bayerischer Rundfunk (BR) (cf. BR 2020) and the British Broadcasting Corporation (BBC), based in London (cf. BBC 2021). Finally, the American tech magazine Wired has also issued guidelines; the editorial office of the magazine, which is published by Condé Nast, is in San Francisco. The organizations included in the investigation thus work on the basis of national rules on AI which differ in some areas, such as in relation to data protection.
Table 1
Overview of guidelines investigated
Organization | Headquarters | Category | Date on which AI guidelines were first published |
Algemeen Nederlands Persbureau (ANP) | The Hague, Netherlands | News agency | March/April 2023 |
Associated Press (AP) | New York City, USA | News agency | No date |
Bayerischer Rundfunk (BR) | Munich, Germany | Public service broadcaster | November 30, 2020 |
British Broadcasting Corporation (BBC) | London, UK | Public service broadcaster | May 2021 |
Deutsche Presse-Agentur (dpa) | Hamburg, Germany | News agency | April 3, 2023 |
Thomson Reuters | Toronto, Canada | News agency | No date |
Wired | San Francisco, USA | Technology magazine | No date |
Table compiled by author
Four of the seven documents include the date on which they were first published. As Table 1 shows, the precise date is known in two cases, while for two others the period can be narrowed down to one or two months. It is therefore difficult to be entirely certain about the chronological order in which the guidelines were published. At most, there are some possible connections within the media genres. BR, for example, published its guidelines around six months earlier than the BBC. It is also striking that the two news agencies dpa and ANP both published their own guidelines for the first time within weeks of each other in spring 2023 – around four months after the American provider OpenAI launched its GPT-3 language model in an adapted form as ChatGPT on November 30, 2022 (cf. OpenAI 2022). There is therefore much to suggest that this event and the »hype« (Menn 2023) that went with it may have accelerated the development of editorial guidelines. Analyses by the American search engine Google show that global interest in AI has grown significantly since the start of December 2022 (cf. Google Trends 2023).
Specificity, structure and binding nature
The first way in which the guidelines differ is in their form, i.e., in terms of their specificity, structure and how binding they are. ›Specificity‹ describes the question of whether AI guidelines are explicitly labelled as such. The term ›guidelines‹ is used comparatively broadly here to include any text in a central location, such as an editorial, a blog post or a separate document, that targets the fundamental way in which the respective media organization deals with AI – even if the text in question is not explicitly described as such. ›Structure‹ refers to the form and construction of the document. Some editorial offices embed their recommendations in journalistic-style prose, while others structure their papers strictly in multiple chapters or points. Finally, the binding nature defines the extent to which the rules set out in each case are to be applied.
The specificity of the guidelines differs widely. The American news agency Associated Press presents its rules on an overview page in which it sets out its AI activities (cf. AP n.y.). Recommendations on handling AI are therefore not explicitly marked as such, but are covered under the term »strategy around the technology« (AP n.y.). The other three news agencies are clearer in setting out their aim of placing limits on dealing with AI in their documents. In its guidelines, the ANP also refers to the main editorial office in order to underscore the binding nature of the regulations (cf. ANP 2023). The BBC, too, is clear about the crux of the matter, albeit using not the term AI, but instead machine learning, or ML (cf. BBC 2021). The American magazine Wired is clearest about the binding force that readers can expect the document to have: »How WIRED Will Use Generative AI Tools« (Wired 2023, capitals in original). Instead of soberly setting out the principles, those responsible wrap their guidelines in powerful language in a declaration of intent. The Dutch news agency takes a similar approach (cf. ANP 2023).
There are also differences in the way the documents are structured. As a general rule, the more structured a document, the more specific the individual elements can be – and the more precisely guidelines can be directed at specific application cases in an editorial context. The ANP is the only one with unstructured guidelines (cf. ANP 2023). The AP’s recommendations are also relatively general, but the agency does refer to four levels in its strategy: three reflecting the journalistic process and the fourth covering collaboration with other actors (cf. AP n.y.). The makers of Wired relate their remarks to two levels of content – text and images – subdivided into five and three points respectively (cf. Wired 2023). Two news agencies have decided to break their guidelines down into five points (cf. dpa 2023; Thomson Reuters n.y.). Accordingly, the content of their guidelines focuses predominantly on the core area of news production. The dpa even provides an accompanying text explaining how the scope of the guidelines was chosen, stating that they are intended to help »guide the way AI is handled without becoming lost in a jungle of rules« (dpa 2023). BR differentiates its guidelines in more detail and structures its document in ten points (cf. BR 2020). The most structured guidelines are those of the BBC, which does not limit itself to presenting general criteria, but has instead created a document with 47 individual points as a recommendation for interdisciplinary teams (cf. BBC 2021).
Hardly any of the guidelines are detailed and specific about how binding they are. We can therefore assume that each of the criteria set out is intended to apply in full at all times, regardless of how innovation progresses. Only BR uses a concept that staggers the guidelines’ binding nature, stating that the closer an application comes to implementation, the more criteria need to be met (cf. BR 2020).
3. The art of self-limitation: Editorial guidelines between bans and opportunities
AI promises editorial offices almost unlimited opportunities for handling texts and images. Early on, researchers spotted that the new opportunities for personalizing journalism bring with them dangers for society – and that it is often possible to reduce distortions in public discourse when media houses subject themselves to guidelines (cf. Marconi 2020: 46). Their voluntary guidelines are the organizations’ attempt to account for this concern.
Each of the seven documents sets out different guidelines on different levels. Below, this paper examines the extent to which the rules are related to corporate goals and journalistic values, whether and which fields of application and limits are set, the extent to which the guidelines set out requirements for responsible AI, the situation when it comes to transparency and human control, the requirements to be set for journalistic collaboration, and the extent to which the documents are to be seen as amendable sets of rules.
Corporate goals and journalistic values
Any attempt to draft editorial guidelines builds on the question of what the authors want to be guided by. In many, but not all, cases, the media houses have made defining strategic goals or journalistic values a key feature of their guidelines. AI guidelines thus often provide a deep insight into the journalistic self-image of an editorial office or organization.
Some of the guidelines underscore the role of trust, with Thomson Reuters describing it as »one of our most important values« (Thomson Reuters n.y.). The principles that follow are thus intended »to promote trustworthiness in our continuous design, development, and deployment of AI« (Thomson Reuters n.y.). And there is good reason for those responsible in Toronto to underscore the importance of trustworthiness. By its own account, the agency feels obligated to pursue the »Trust Principles« – a set of rules intended to ensure that the agency’s reporting is free and reliable (cf. Thomson Reuters 2018). The BBC takes a similar approach, although for it the concept of trustworthiness does not go far enough. The corporation’s guidelines set out its values as follows: »upholding trust, putting audiences at the heart of everything we do, celebrating diversity, delivering quality and value for money and boosting creativity« (BBC 2021: 5). The foundation on which the BBC bases its AI guidelines is thus much broader. The requirement to provide the best value possible given the fact that the BBC is funded by license fee payers is included in the document for good reason – its inclusion must be viewed in the context of the ongoing political debate on the future funding of the BBC (cf. Waterson 2022). The guidelines state that AI in an editorial context should make a central contribution to the audience’s citizenship education: »We will also seek to broaden, rather than narrow, our audience’s horizons« (BBC 2021: 6).
Yet using efficiency and responsibility to society as arguments to justify the use of AI is not unique to public service broadcasting in the United Kingdom. BR published its guidelines six months earlier, with the concept of »added value« (BR 2020) being the key focus for those responsible. In its own words, BR uses AI »to make our work more efficient and to handle the resources that license fee payers entrust to us responsibly« (BR 2020). AI should also be used »to generate new content, develop new methods for investigative research, and make services for our users more attractive« (BR 2020). The future of license fee-funded broadcasting is the subject of regular debate in Germany, too.
Actors in the private sector associate the use of AI with different goals from public service broadcasters. The Deutsche Presse-Agentur cites journalistic competitive advantage as the reason behind its use of AI, stating that AI will »help to make our work better and faster – always for the benefit of our customers and our products« (dpa 2023). ANP is more cautious, emphasizing its »pursuit of quality and reliability« (ANP 2023[1]) and noting its regulations on editorial status. These rule out external influences on the agency’s journalism, while also emphasizing the values of impartiality and due diligence (cf. ANP 2023).
Fields of application and limits of AI
Around half of the media organizations that have subjected themselves to guidelines for handling AI also outline potential fields of application for the new technology. On the one hand, this means editorial offices defining the areas of the journalistic process in which they see the use of AI as particularly useful, thus creating a kind of positive list for AI in the newsroom. On the other, companies use the guidelines to define a negative list of applications in which, they believe, algorithms should never be allowed to compromise journalistic integrity.
BR and the Deutsche Presse-Agentur limit themselves to abstract principles in their guidelines. The Associated Press is not much clearer in outlining possible fields of application: finding topics (»to break news and dig deeper«, AP n.y.), production (»to streamline workflows«, AP n.y.) and distribution. These examples remain comparably unspecific and are in line with journalistic common sense (cf. Beckett 2019).
Wired’s guidelines, on the other hand, are written in a very journalistic style and are not limited to listing positive examples of fields of application. Instead, for both texts and images, the editorial office differentiates between cases in which it might want to use AI, cases in which it definitely does not want to use AI, and cases in which it might experiment. Wired makes it clear that the magazine will not publish any stories whose text, or any part thereof, has been generated by AI (cf. Wired 2023). This is justified by the limits of current text generators, which appear irreconcilable with the editorial office’s journalistic standards in both content and style: »The current AI tools are prone to both errors and bias, and often produce dull, unoriginal writing« (Wired 2023). Ultimately, the editorial office would see publishing a text based on an algorithm as an insult to their journalistic honor: »we think someone who writes for a living needs to constantly be thinking about the best way to express complex ideas in their own words« (Wired 2023). For the same reason, the magazine also rejects the idea of journalistic texts being edited by AI. Conceivable fields of application are instead limited to attempts to allow AI to produce headlines or short texts for social media, as well as to generate ideas for possible topics. Tools like ChatGPT could also help the editorial office during the research phase, albeit with limits. For example, language models could be used to read through large quantities of documents, in a role similar to that of the Google search engine or Wikipedia. Wired intends to take the same approach to images produced by AI tools such as Dall-E, developed by OpenAI.
ANP in The Hague goes even further, stating that its journalists see »many opportunities« (ANP 2023) to use AI and be inspired by algorithms – for example when writing headlines and background information. The authors do not define clear limits on the use of text generators and, as a result, the use of artificial intelligence in the newsroom »is up to the editors« (ANP 2023).
Features of responsible AI
The more AI instruments are used, the greater the influence of these systems on the life of humans. In order to prevent the increasing influence of algorithms from leading to »dependencies […] or pressure to adapt« (Deutscher Ethikrat 2023) among people, AI needs to be used in a socially responsible way. ›Responsible AI‹ generally relates to three dimensions of content: accountability, responsibility, and transparency (cf. Dignum 2019: 52f.). There is an international consensus in the Western world that people should be at the heart of such activities, as the German Ethics Council recently argued in its statement on AI (cf. Deutscher Ethikrat 2023). IT experts note that good technical design of AI applications makes it possible to combine a high level of human control with working towards a high degree of automation (cf. Shneiderman 2022: 79). Given the ongoing debate about the criteria for and implementation of responsible AI, some media have extended their guidelines to include aspects of data protection and the quality of algorithms and of training and other data.
When it comes to data protection, the laws applicable in each case set the framework for all kinds of AI application, with the ANP among others referring directly to this (cf. ANP 2023). Thomson Reuters merely states that they want to »prioritize safety, security, and privacy throughout the design, development and deployment of our AI products and services« (Thomson Reuters n.y.). In Germany, where the European General Data Protection Regulation sets out a large portion of the legal limits, BR emphasizes the concept of data economy. The rule is, they explain, to collect »as little data as possible and as much as necessary in order to fulfil our role« (BR 2020). The BBC in the UK starts with a very general promise: »we will ensure that data is handled securely« (BBC 2021: 6), before moving on to describe more detailed points in the guidance for AI project teams. When it comes to the use of data, the principles refer to another of the Corporation’s documents, the »privacy promise« last updated in 2023 (BBC 2023). This covers questions of transparency, selection options for the audience, and data use by the BBC. In addition, it requires that project teams in London document all data use and modification, correct potential errors in the data sets, and examine the legal permissibility of the data used, including its compliance with the principles of the EU General Data Protection Regulation (cf. BBC 2021: 15f.).
When it comes to the quality of the algorithms, the guidelines are very general in their statements. The Deutsche Presse-Agentur, for example, merely promises to use »lawful AI« (dpa 2023) »that adheres to applicable law and legal provisions and that is in line with our ethical principles, such as human autonomy, fairness and democratic values« (dpa 2023). Thomson Reuters is even more general in its principles, formulating them as a declaration of intent that the agency is striving for a human-centered approach. The aim is to develop and use tools »that treat people fairly« (Thomson Reuters n.y.). This is similar to the standard that the BBC sets itself, namely to »serve our audiences equally & fairly« (BBC 2021: 6).
In practice, the criterion of fairness describes an algorithm that has been trained using balanced training data, which prevents it from producing overly distorted results. Both BR and the ANP address the problem of algorithmic distortion in their guidelines, although the latter does no more than name the risk and take it into account when assessing tools (cf. ANP 2023). In general, it relies on »reliability« as a criterion (ANP 2023). BR takes a more sophisticated view of the topic, demanding that service providers deliver »reliable information on the data sources« (BR 2020) and discussing the »integrity and quality of the training data« (BR 2020) as a matter of principle, even for internal developments. According to them, minimizing algorithmic distortion helps to »reflect the diversity of society« that the broadcaster highlights in its guidelines (BR 2020). The BBC, too, instructs its project teams to examine the underlying training data for possible »bias« (BBC 2021: 19) and to correct this if necessary. BR is the only organization that goes beyond the quality of the training data, also looking at the quality of the other data with which the model works. The Munich-based organization undertakes to maintain employees’ »awareness of the value of data and consistent data maintenance« (BR 2020), since reliable AI applications can only be developed with »reliable data« (BR 2020).
Transparency in the journalistic end product
Whenever AI plays a significant role in creating a journalistic text, a crucial question arises: »Who should get the byline?« (Marconi 2020: 97). While language models like ChatGPT or Bard have not commonly been included in the byline until now, many editorial offices – including AP, British daily newspaper The Guardian and America’s Wall Street Journal – have at least begun, in various ways, to identify the contribution that AI has made to the creation of a piece of journalistic work (cf. Marconi 2020: 97f.). It thus comes as no surprise that the question of transparency features prominently in the guidelines, always in the context of arguments for the greatest possible visibility. However, the guidelines rarely state how exactly AI’s contribution should be flagged.
Three of the four news agencies investigated undertake to label the use of AI. Thomson Reuters promises to make its use of AI »explainable« (Thomson Reuters n.y.). The dpa expresses its self-obligation in similar terms: »Where content is generated exclusively by AI, we make this transparent and explainable« (dpa 2023 [translation: SC]). The Netherlands’ ANP, too, vows to be »open to our customers about our methods and the use of AI or other technical systems« (ANP 2023). However, ANP’s promise to make efforts to achieve »transparency« (ANP 2023) is diluted later on, with the guidelines stating that »we mention where we as editors deem appropriate the extent of AI use« (ANP 2023).
The magazine Wired obligates its authors to flag the contribution AI has made to creating the respective piece – failing to do so is considered equivalent to plagiarism (cf. Wired 2023). The magazine also intends, where possible, to disclose the sources of the AI. Bayerischer Rundfunk undertakes to be transparent about »which technologies we use, which data we collect and which editorial offices or partners bear responsibility for this« (BR 2020). And those responsible in Munich even go one step further, stating that, where problems occur in dealing with AI, the intention is to make this »the topic of self-reflective reporting« (BR 2020). Although the BBC does not go as far as this in its recommendations, it shines elsewhere when it comes to transparency: The London-based organization instructs AI teams to enter their project in a special register for internal AI applications (cf. BBC 2021: 21). Furthermore, the way the application works is to be made clear not only to BBC employees, but also to the audience »in plain English« (BBC 2021: 21), i.e., straightforwardly and without jargon.
It is worth noting that academia, rather than journalism, has recently paid more attention to the question of whether AI justifies a mention in the byline of a piece. At the start of the year, the journals Nature and Proceedings of the National Academy of Sciences of the United States of America (PNAS) published editorials making it clear that language models like ChatGPT do not qualify for being named as an author, as they cannot be held accountable for the results (cf. Nature 2023; PNAS 2023).
Human control
Many considerations currently center around the question of the extent to which the use of AI in the production of journalistic content should be subject to human control. From its early stages, AI enabled editorial offices to create automated texts based on structured data (cf. Diakopoulos 2019: 96f.). Large language models have now expanded these possibilities enormously, at least in theory. In practice, problems often arise from content errors in the texts generated. Journalism research is therefore not the only field to have warned of the importance of human control in the use of text blocks written by AI (cf. Marconi 2020: 49) – editorial offices are also working intensively on this issue (cf. Wolfnagel 2023).
It therefore comes as no surprise that six of the seven sets of guidelines address the question of human control of AI contributions to final journalistic products – albeit to varying degrees and categorizing the content in different ways. Wired is once again the most restrictive, its guidelines stating that absolutely no texts are published in which any part has been written or edited by AI (cf. Wired 2023). This restrictive attitude makes the American magazine an outlier. Five of the seven sets of guidelines stipulate that, under certain circumstances, a person exercises editorial control over text blocks created by AI. Some make inspection of the respective content mandatory at all times, others only under certain conditions.
The guidelines of the two news agencies dpa and ANP stipulate that a person has to inspect all journalistic content created by or with the support of AI before publication. The Deutsche Presse-Agentur states that, »The dpa uses AI only under human supervision« (dpa 2023), with the editorial office emphasizing that a person makes the »final decision« (dpa 2023) on the use of AI. The ANP has chosen similar conditions: »We can use AI or similar systems to support final editing, provided a human does a final check afterwards.« (ANP 2023). They go on to specify that AI can be used predominantly in intermediate editing steps: »In our production chain, we stick to the line already in place man-machine-human« (ANP 2023). Content generated by AI, they continue, is not used »without checking this information by a human being« (ANP 2023).
The news agency Thomson Reuters and the two broadcasters in this investigation are not quite as strict. Their guidelines stipulate mandatory control of journalistic end products only under certain conditions, not always, as is the case at the two agencies from Germany and the Netherlands. Thomson Reuters not only gives a less binding definition of the role of humans, but also dresses it in the weaker language of a declaration of intent: »Thomson Reuters will strive to maintain a human-centric approach« (Thomson Reuters n.y.). The document does not spell out exactly what this approach comprises. The same goes for the comparably vague guideline that the organization bears responsibility for the products and services in which AI is used: »Thomson Reuters will maintain appropriate accountability measures for our AI products and services« (Thomson Reuters n.y.).
The BBC describes the role of desk editors under the heading »Human in the loop« (BBC 2021: 6) – although this does not come with a clear definition of the fields in which a human needs to check and approve the work of AI. With development continuing, the BBC wants to experiment. »Algorithms form only part of the content discovery process for our audiences, and sit alongside (human) editorial curation« (BBC 2021: 6). This statement can be read as fundamentally involving the editorial office in the use of AI, but is in need of more precise definition.
BR succeeds in formulating a position on the role of human control that is as clear as it is sophisticated. The broadcaster first makes editorial AI content subject to a general inspection requirement, before adding a dynamic escape clause: »Even in the case of automated journalism and data journalism, the journalistic responsibility lies with the editorial offices. The principle of approval thus remains in place for content created automatically« (BR 2020). »But development is ongoing: The principle of individual inspection becomes a plausibility check of causal relationships in the data structure and a rigorous integrity test of the data source« (BR 2020). Instead of editors individually approving each piece in whose genesis AI was involved, BR opens the door for editors to limit their checks to the technical function of the AI instrument in question.
Requirements for editorial collaboration
When AI is used in the newsroom, it is usually the result of collaboration between different specialist departments: journalists and IT experts, data specialists and product managers working hand in hand. It is also common for them to be joined by external service providers. After all, not every editorial office has all the expertise needed to roll out an AI application. Some organizations’ guidelines for dealing with AI therefore include not only journalistic issues but also aspects of interdisciplinary collaboration both within the organization and with external actors.
When it comes to the distribution of roles within the organization, none of the guidelines investigated are more detailed than the BBC’s. Firstly, the document names various roles that might be involved in each case, including product managers and employees in the specialist Quality, Risk & Assurance department (cf. BBC 2021: 21). Above all, however, the document is aimed at employees outside the newsroom, with detailed guidance questions clearly targeted at technical and documentary processes (cf. BBC 2021: 17f.). BR is less clear on this point, stating that AI projects should be made possible by »the most diverse teams possible« (BR 2020). The dpa is similarly unspecific in setting out its requirements for collaboration. Its guidelines state that »all employees« (dpa 2023) are encouraged to be open to the topic of AI – this presumably means especially, but by no means exclusively, reporters and editors.
Media organizations whose guidelines govern the extent to which they collaborate with external actors on AI projects are the exception. The AP reports that it works with start-ups in order to benefit from »external innovation« (AP n.y.) at comparably low cost. The New York-based agency also claims to be forming partnerships with further institutions, including the investment companies Social Starts and Matter Ventures and the NYC Media Lab, a collaboration between various universities and companies in the city (cf. AP n.y.). Collaboration with universities is also covered by Bayerischer Rundfunk’s guidelines, which state that »exchange with academic institutions and AI ethics experts« (BR 2020) is intended to define the interdisciplinary approach to the topic of AI.
Dynamic nature of the guidelines
The emergence of new text generators and other tools has rapidly altered the possibilities for using AI in an editorial context. To ensure that their guidelines continue to provide appropriate orientation for the foreseeable future, media organizations can make their content as broad as possible, as Thomson Reuters, BR and others have done. The other option – often used in addition – is to deliberately describe the guidelines as a provisional set of rules.
Some of the editorial offices have integrated a dynamic component into their AI rules. Wired addresses the ongoing transformation of AI and predicts that it »may modify our perspective over time« (Wired 2023). Any changes would be shown transparently in the document, they continue. The news agencies are also aware that the current rules on dealing with AI cannot be set in stone. »These AI principles will evolve as the field of AI and its applications matures« (Thomson Reuters n.y.), writes Thomson Reuters. Similarly, the ANP makes it clear that its guidelines are a »living document« (ANP 2023) »that can be modified by the chief editors if the developments call for it« (ANP 2023).
BR notes that the guidelines may need to be amended to keep pace with the generally dynamic nature of journalistic AI applications. »Experiments are part of the process,« write the authors (BR 2020). In addition to this potential amendment of the guidelines, BR is the only organization among those examined to have designed a graded model for the binding nature of its guidelines. According to this model, the closer an AI application is to the public, the more of the requirements set out in the guidelines it needs to meet – the maximum being all. The guidelines also account for the fact that the application of AI can lead to »ethical borderline situations« (BR 2020). »We evaluate the experiences from the perspective of the State Media Treaty and the guidelines set out here« (BR 2020).
In its guidelines on machine learning, the BBC also addresses the fact that the use of AI changes over time. »ML is an evolving set of technologies, where the BBC continues to innovate and experiment« (BBC 2021: 6), the document states, offering the opportunity to revisit the issues using the BBC’s checklist. One of the guiding questions is: »What important changes (or revisioning / redeployment of the model) would trigger a MLEP checklist review?« (BBC 2021: 26).
4. Discussion of results: Everyone sets their own regulations
How does one approach something that is new and unknown? This investigation into the guidelines that international media organizations have subjected themselves to as initial rules for dealing with AI has shown that, in the first instance, every editorial office makes its own decisions. The guidelines differ widely from one another in both form and content. A kind of standard model for the guidelines is yet to emerge.
The AI rules that international media set themselves
When it comes to form, the organizations investigated – one magazine, four news agencies and two broadcasters – have largely chosen concise, matter-of-fact guidelines in which they briefly set out, in a defined number of points, how they want to deal with AI. A few of the guidelines have instead been written as prose in the style of magazine journalism, or developed into a tool that uses guiding questions to direct interdisciplinary project teams towards key points in dealing with algorithms. Some links emerged between the journalistic style of the organization in question and the form chosen for the guidelines: The news agencies present their guidelines briefly in a news-like style, while the magazine chose a more narrative form, and a British broadcaster known for its structural complexity selected the form of detailed guidelines. Almost all of the guidelines were recognizable as such; in only one case was the approach embedded in a general representation of the organization’s AI activities. Six of the seven organizations also chose not to make the rules more or less binding in different situations; only one German broadcaster linked the level of fulfillment of the guidelines to the development progress of an AI project. All in all, the investigation included two media organizations each from Germany and the United States, and one each from the Netherlands, the United Kingdom and Canada.
When it comes to content, comparing the seven international guidelines throws up significant differences. The content produced by the editorial offices can be roughly divided into seven categories. Table 2 below shows an overview of which media organizations set out rules of any kind for which aspects in their guidelines.
Table 2
Content of regulations in AI guidelines of international media
Organization | Journalistic goals and values | Fields of application | Responsible AI | Transparency | Human control | Forms of collaboration | Dynamic nature of the rules |
ANP | + | + | + | + | + | – | + |
AP | – | – | – | – | – | + | – |
BR | + | + | + | + | + | + | + |
BBC | + | – | + | + | + | + | + |
dpa | + | + | + | + | + | + | – |
Thomson Reuters | + | – | + | + | + | – | – |
Wired | – | + | – | + | + | – | + |
Table compiled by author
As an arithmetic mean, the organizations investigated took into account 4.9 of seven content dimensions in their AI guidelines. The broadest were the guidelines from Bayerischer Rundfunk, which cover all seven points, followed by the documents from the news agencies Algemeen Nederlands Persbureau and Deutsche Presse-Agentur, with six dimensions each. The British broadcaster BBC achieved the same number. With four of the content points each, the rules of the Canadian news agency Thomson Reuters and the American magazine Wired are slightly below average. And covering just one of the seven dimensions, the American news agency Associated Press has chosen a comparatively low level of self-regulation. That may be linked to the particular character of the guidelines, which are the only ones not described explicitly as such.
Taking a closer look at the individual fields quickly reveals focus areas in the content. As an arithmetic mean, each individual sphere of self-regulation is covered by 4.9 organizations in their guidelines. Nonetheless, almost all of the editorial offices investigated – six out of seven – say something about the role of human control when AI is used for journalistic end products. This would indicate that most media houses are currently examining the question of whether and when a journalist is to check or approve the contribution of AI from an editorial point of view. Just as often, the guidelines have something to say on the extent to which the contribution of AI should be made transparent to the audience. Five of the seven organizations indicate strategic corporate objectives and journalistic values as reasons behind their AI activities. Just as many place requirements on trustworthy AI. Less frequently, the guidelines examined address the definition of areas of application, forms of interdisciplinary collaboration, and potential updating of the guidelines. Each of these aspects is covered by just four of the seven editorial offices, in various combinations.
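Since Table 2 contains 34 plus signs in total, both averages derive from the same sum and therefore coincide. Using the per-organization totals (7, 6, 6, 6, 4, 4, 1) and the per-dimension totals (6, 6, 5, 5, 4, 4, 4) reported above, the calculation is simply:

\[
\bar{x}_{\text{per organization}} = \frac{7+6+6+6+4+4+1}{7} = \frac{34}{7} \approx 4.9,
\qquad
\bar{x}_{\text{per dimension}} = \frac{6+6+5+5+4+4+4}{7} = \frac{34}{7} \approx 4.9
\]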
There is a general consensus across all of the guidelines examined that, even in an age of AI-supported journalism, people should still be at the heart of the profession. Among the six guidelines with specific commitments on this, the BBC and Thomson Reuters remain vague. The other four guidelines prescribe a comparably strong position for editors. Wired forbids the use of AI text blocks and AI-supported editing completely, while the guidelines from the European news agencies dpa and ANP provide no exceptions to the obligation for a human to check the relevant texts. Only BR notes that the principle of control can be relaxed if it is replaced by strict examination of the way the algorithms work. Most of the organizations included in the investigation thus completely rule out the idea that text material created or significantly shaped by AI can be used without reflection or critique.
Three of the four news agencies undertake to label the use of AI and thus to make it transparent to users. These agencies are Thomson Reuters, dpa and ANP. Wired, which generally takes a restrictive approach to AI, also has rules on this. The two European broadcasters investigated – BR and the BBC – also commit to transparency. This clearly demonstrates that transparency is an almost universally uncontroversial criterion, at least among the media examined.
Around half of the organizations include possible fields of application for AI in their guidelines. BR and the dpa keep their representations on this comparably abstract, while AP names selected examples from throughout the journalistic production process. American magazine Wired is the only organization to address the use of AI-generated images specifically, rejecting the publication of such results in the same way as it does for texts.
It is notable that just five of the seven editorial offices examine in their guidelines the question of which requirements need to be set for responsible AI and what that means for the editorial use of the new technologies. This question is particularly significant given media organizations’ potential dependence on the providers of high-performance AI engines, which has already been addressed by researchers (cf. Simon 2022). The key content-related dimension can be further broken down into individual elements. Four media – ANP, Thomson Reuters, BR and the BBC – set out data protection requirements. The quality of algorithms is explicitly an issue for the dpa, BBC, BR and Thomson Reuters. Only the two broadcasters discuss in any depth the potential problems that can arise from algorithmic distortion, while ANP does no more than name the risk.
Outlook
The investigation into seven sets of guidelines for dealing with AI shows that international media organizations are already focusing intensively on key questions thrown up by the new technology. While the documents’ authors have focused their attention mainly on human control and transparency for the audience, analysis of the content highlights omissions, some of them major. By the end of April 2023, only a minority of the media addressed problems in connection with a possible ›algorithmic bias‹. And it is not always clear from the guidelines whether all the editorial offices investigated require a critical examination of the quality of an algorithm’s training data, especially for journalistic use. The analysis clearly shows that further research is needed as the use of AI continues to spread in journalism. If more media organizations develop AI guidelines and publish them or make them accessible to researchers, further points could be addressed in a larger sample. These might include the extent to which the breadth and depth of the rules differ between different types of organization, or the extent to which professional and cultural differences between editorial offices in different countries are reflected in the way the guidelines are set out.
About the author:
Kim Björn Becker, Dr. (*1986) has been Political Editor at the Frankfurter Allgemeine Zeitung since 2018. Before this, he worked at the Süddeutsche Zeitung in Munich. He taught practical journalism at the Universities of Trier and Mainz, and at Darmstadt University of Applied Sciences. His research focuses on the application of artificial intelligence in the modern newsroom. Contact: kbb@kimbjoernbecker.com
Translation: Sophie Costella
References
ANP (2023): Leidraad: zo gaat de ANP-redactie om met AI. https://twitter.com/CoolsHannes/status/1646115524235886592 (21 April 2023)
AP (n.d.): Leveraging AI to advance the power of facts. Artificial intelligence at The Associated Press. https://www.ap.org/discover/artificial-intelligence (21 April 2023)
Beckett, Charlie (2019): New powers, new responsibilities. A global survey of journalism and artificial intelligence. https://drive.google.com/file/d/1utmAMCmd4rfJHrUfLLfSJ-clpFTjyef1/view (21 April 2023)
BR (2020): Unsere KI-Richtlinien im Bayerischen Rundfunk. https://www.br.de/extra/ai-automation-lab/ki-ethik-100.html (21 April 2023)
BBC (2023): The BBC privacy promise. https://www.bbc.co.uk/usingthebbc/privacy/privacy-promise/ (22 April 2023)
BBC (2021): Responsible AI at the BBC: Our Machine Learning Engine Principles. https://downloads.bbc.co.uk/rd/pubs/MLEP_Doc_2.1.pdf (21 April 2023)
Buxmann, Peter; Schmidt, Holger (2022): KI im Journalismus: Unterschätzter Helfer im Hintergrund. In: Frankfurter Allgemeine Zeitung. https://www.faz.net/podcasts/f-a-z-kuenstliche-intelligenz-podcast/ki-im-journalismus-unterschaetzter-helfer-im-hintergrund-18218234.html (21 April 2023)
Deutscher Ethikrat (2023): Mensch und Maschine – Herausforderungen durch Künstliche Intelligenz, Stellungnahme. https://www.ethikrat.org/mitteilungen/mitteilungen/2023/ethikrat-kuenstliche-intelligenz-darf-menschliche-entfaltung-nicht-vermindern/?cookieLevel=not-set#:~:text=pdf%20%7C%20104%20KB)-,Stellungnahme,-(pdf%20%7C%203%20MB (22 April 2023)
Deutscher Journalisten-Verband (2023): Positionspapier bezüglich des Einsatzes Künstlicher Intelligenz im Journalismus. https://www.djv.de/fileadmin/user_upload/INFOS/Themen/Medienpolitik/DJV-Positionspapier_KI_2023-04.pdf (28 April 2023)
Diakopoulos, Nicholas (2019): Automating the News. How Algorithms are rewriting the Media. Cambridge/London: Harvard University Press.
Dignum, Virginia (2019): Responsible Artificial Intelligence. How to Develop and Use AI in a Responsible Way. Cham: Springer Nature.
dpa (2023): Offen, verantwortungsvoll und transparent – Die Guidelines der dpa für Künstliche Intelligenz. https://innovation.dpa.com/2023/04/03/kuenstliche-intelligenz-fuenf-guidelines-der-dpa/ (21 April 2023)
Google Trends (2023): Künstliche Intelligenz. https://trends.google.com/trends/explore?q=k%C3%BCnstliche%20intelligenz (21 April 2023)
Marconi, Francesco (2020): Newsmakers. Artificial Intelligence and the Future of Journalism. New York/Chichester: Columbia University Press.
Menn, Andreas (2023): 100 Millionen Nutzer in zwei Monaten: Diese Grafiken zeigen den Hype um ChatGPT. In: Wirtschaftswoche, dated 7 March 2023. https://www.wiwo.de/my/technologie/digitale-welt/kuenstliche-intelligenz-100-millionen-nutzer-in-zwei-monaten-diese-grafiken-zeigen-den-hype-um-chatgpt/29019970.html?ticket=ST-295551-wFHWZOjp97E2u9eukgqW-cas01.example.org (21 April 2023)
Nature (2023): Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. In: Nature 613, 612 (2023), DOI https://doi.org/10.1038/d41586-023-00191-1
PNAS (2023): The PNAS Journals Outline Their Policies for ChatGPT and Generative AI, https://www.pnas.org/post/update/pnas-policy-for-chatgpt-generative-ai (21 April 2023)
Shneiderman, Ben (2022): Human-Centered AI. Oxford: Oxford University Press.
Simon, Felix (2022): Uneasy Bedfellows: AI in the News, Platform Companies and the Issue of Journalistic Autonomy. In: Digital Journalism, 10(10), pp. 1832-1854. DOI: 10.1080/21670811.2022.2063150
Thomson Reuters (2018): Thomson Reuters Founders Share Company Limited. https://www.thomsonreuters.com/content/dam/ewp-m/documents/thomsonreuters/en/pdf/corporate-responsibility/thomson-reuters-founders-share-company-limited-1218.pdf (21 April 2023)
Thomson Reuters (n.d.): Our AI principles. https://www.thomsonreuters.com/en/artificial-intelligence/ai-principles.html (21 April 2023)
OpenAI (2022): Introducing ChatGPT. https://openai.com/blog/chatgpt (21 April 2023)
Raad voor de Journalistiek (n.d.): Nieuwe richtlijn over het gebruik van artificiële intelligentie in de journalistiek. https://www.rvdj.be/nieuws/nieuwe-richtlijn-over-het-gebruik-van-artificiele-intelligentie-de-journalistiek (22 April 2023)
Skrubbeltrang Mahnke, Martina; Karlin, Simon (2023): »Dieser Artikel könnte Sie auch interessieren« Entwicklung demokratisch verantwortungsvoller Algorithmen in Nachrichtenmedien. Eine dänische Fallstudie. In: Communicatio Socialis, 56(1), pp.49-62. DOI: 10.5771/0010-3497-2023-1-49.
Ventura Pocino, Patrícia (2021): Algorithms in the newsrooms. Challenges and recommendations for artificial intelligence with the ethical values of journalism. Publication of the Catalan Press Council. https://fcic.periodistes.cat/wp-content/uploads/2022/03/venglishDIGITAL_ALGORITMES-A-LES-REDACCIONS_ENG-1.pdf (22 April 2023)
Waterson, Jim (2022): BBC licence fee to be abolished in 2027 and funding frozen. In: The Guardian, dated 16 January 2022. https://www.theguardian.com/media/2022/jan/16/bbc-licence-fee-to-be-abolished-in-2027-and-funding-frozen (21 April 2023)
Wired (2023): How WIRED Will Use Generative AI Tools. https://www.wired.com/about/generative-ai-policy/ (21 April 2023)
Wolfnagel, Eva (2023): Wenn die KI lügt: Grenzen von ChatGPT für den Journalismus. In: Deutschlandfunk, dated 4 January 2023, 3:55 pm. https://www.deutschlandfunk.de/wenn-die-ki-luegt-grenzen-von-chatgpt-fuer-den-journalismus-dlf-ce94895e-100.html (22 April 2023)
Footnote
1 This and the following quotes from Dutch were translated with DeepL.
About this article
Copyright
This article is distributed under Creative Commons Attribution 4.0 International (CC BY 4.0). You are free to share and redistribute the material in any medium or format. The licensor cannot revoke these freedoms as long as you follow the license terms. You must however give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. More information at https://creativecommons.org/licenses/by/4.0/deed.en.
Citation
Kim Björn Becker: New game, new rules. An investigation into editorial guidelines for dealing with artificial intelligence in the newsroom. In: Journalism Research, Vol. 6 (2), 2023, pp. 133-152. DOI: 10.1453/2569-152X-22023-13404-en
ISSN
2569-152X
DOI
https://doi.org/10.1453/2569-152X-22023-13404-en
First published online
July 2023