by Michael Graßl, Jonas Schützeneder, and Klaus Meier
Abstract: Artificial intelligence has become a buzzword in business and society, denoting any automated, cooperative, and corrective forms of interaction between humans and machines. There is a need for information, discussion, and systematization – despite or rather because of the wealth of publications on the topic that crop up on an almost daily basis. This article is an attempt to bring some (conceptual) order to this field. At the core of this classification endeavor is a qualitative survey of experts from academia and practice. We combine the perspectives of software production, newsroom organization, and media ethics, trying to create a basis for terminology and for exploring the challenges and potentials of this technological development. As our evaluation of the interviews shows, the industry has recognized the importance of AI for journalism. Its potential lies primarily in research, distribution, workflow optimization, and verification of third-party content. From this, we conclude: For current and future journalism, AI should be understood as a tool that can provide (technological, definitional, and editorial) assistance; practice and research should discuss the topic of AI in an ongoing discourse about opportunities and risks, while also creating awareness and offering solutions in the media ethics debate (especially along the lines of responsibility for content and audience).
1 The ambivalence of assistance
Help and support is always welcome, sometimes lacking, at times expensive, or a distraction. Individuals obviously depend on help and support, and the same applies at the meso (company) and macro (state) levels – and to journalism, its actors, and organizational structures. As a rule, though, increasing levels of support also mean increasing dependency, of which journalists and editors are generally wary because in a democracy, independence is one of the central tenets of their trade (Meier 2018: 17). While political or economic support and/or dependence is usually problematic, technological support or dependence bears no such stigma, at least at first glance. Technological development has always been journalism’s constant companion (Altmeppen/Arnold 2013: 47). Recently, for example, it has been offering new distribution channels for journalistic content, analytics tools for more precise insights into the audience, intermediary structures to integrate external platforms (Graßl et al. 2020), or even new and more efficient forms of newsroom organization (García-Avilés et al. 2014; Lischka 2018: 237). And yet, every single step of the way, processes of technological adaptation are a major challenge for journalism, also in terms of dependencies. Advances are almost always accompanied by risks, reservations, skepticism, and rejection. Gillmor (2013: 187) poignantly expresses what many actors in newsrooms are thinking: »New equals danger. Technological equals non-journalistic.« There is a certain ambivalence to assistance – while technological innovation as a form of assistance holds great promise, it does not always deliver, it is riddled with risk, and therefore, the involved actors usually approach it with a mixture of curiosity and criticism. Almost simultaneously, we have seen discussions about the value, dimensions, opportunities, and problems of artificial intelligence (AI) in journalism. 
The potentials of AI assistance for editorial tasks are being discussed ambivalently, both in science and practice. Those who view AI with curiosity and skepticism may lack sufficient knowledge about what AI actually is and what it does. Many things remain unclear at this point, from the concept to concrete designs and a clear vision of the future.
Currently, the term »artificial intelligence« covers a huge variety of software, tools, and other computer systems. Although we do not have a uniform, international definition of the term, the topic is, unsurprisingly, generating ever greater research interest everywhere (Stray 2019: 1077 f.). In German journalism research, however, the topic has received little attention so far. The number of studies and articles is scant (see Dreyer/Schulz 2019; Loosen/Solbach 2020; Porlezza 2020). Empirical research results are few or have already been rendered obsolete by the rapid technological progress of recent years (cf. Meier et al. 2021). We see a need for action both in science and in practice.
Our contribution is not meant to be a »literature review«, but an impetus to close the empirical gap in Germany: We want to provide more scientific and practical detail and background
- on the concept of AI and its context,
- on current and concrete opportunities to leverage AI in the production and organization of journalistic content,
- on medium- and long-term challenges and problems,
- on the international, comparative dimension of the inventory we are drawing in this paper.
The goal is to offer a basic systematization of the field with the help of qualitative guided interviews we conducted with experts from science and practice in Germany. These results are supplemented by an international comparison, which is based on a DFG-funded research project in five countries, allowing us to assess the innovative power of AI in journalism in a broader context. First, however, we need to sort the field a little, theoretically and conceptually.
2 The diversity of AI: A definitional basis
For years, the term AI has been used in various fields, situations, and with different intentions, sometimes causing more confusion than clarification. Our first point of departure is a somewhat dated definition of AI by McCarthy (2007: 2): »It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.« This technological approach considers the machine (i.e. the computing power) as the central subject of AI. In general, this currently refers to computer systems, i.e. primarily software or algorithms whose purpose is to solve certain, clearly defined problems (cf. e.g. Buxmann/Schmidt 2021: 7). They run automated processes drawing on large data sets, especially in digital media applications. Their hallmark feature is their ability to learn, meaning that these systems are able to improve continuously. When neural networks are used, we speak of machine learning or deep learning. AI applications learn from sets of training data, recognize patterns, and then apply them to new data sets once they are trained.
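The learning loop just described – train on labeled examples, extract a pattern, apply it to unseen data – can be illustrated with a deliberately tiny sketch: a nearest-centroid classifier in plain Python. All data, labels, and feature choices are invented for illustration; real systems use far richer models.

```python
# Minimal illustration of supervised learning: the "pattern" learned
# here is simply the average feature vector (centroid) of each class.

def train(samples):
    """samples: list of (features, label) pairs -> centroid per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy training data: word counts (e.g. "goal", "election") per article.
training = [([5, 0], "sports"), ([4, 1], "sports"),
            ([0, 6], "politics"), ([1, 5], "politics")]
model = train(training)
print(classify(model, [3, 1]))   # -> sports
print(classify(model, [0, 4]))   # -> politics
```

The point of the sketch is the division of labor: the rule that separates the classes is not written by hand but derived from the training data, which is exactly what distinguishes learning systems from the rule-based automation discussed below.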
Today, we differentiate further: Not every form of »computational journalism« (Thurman 2020) or »automated journalism« (Dörr 2016; Dörr/Hollnbuchner 2017; Caswell/Dörr 2017) is now classified as an AI application. In journalism, there is a persistent notion that the mere rule-based automated composition of text modules – often described as »robot journalism« – already falls under artificial intelligence (as, for example, automated texts on weather forecasts, sports results, or stock market prices). »This has nothing to do with artificial intelligence, by the way: The texts are generated based on rules,« as Spiegel pointed out quite correctly when they published automated texts on election results in March 2021 (Pauly 2021): »All decisions are defined manually: If, for example, local results differ from state results by a specified threshold, they will be classified as a newsworthy deviation, and a text module to that effect will be inserted.« In an earlier publication, we also critically classified the frequently used term »robot journalism«: »[AI in journalism] offers application options and dimensions of debate in academia and practice that far exceed the superficial understanding of ›robot journalism‹. Both the term itself and its narrow focus on automatic text production fall short« (Meier et al. 2021).
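The rule-based procedure Spiegel describes – insert a text module when the local result deviates from the state result by more than a manually specified threshold – contains no learning component at all, which a few lines of code make obvious. The party names, district names, and threshold below are invented for illustration:

```python
# Rule-based text automation: every decision is a hard-coded rule.
# There is no training data and nothing is learned.

THRESHOLD = 5.0  # percentage points; defined manually, not learned

def election_text(district, party, local_pct, state_pct):
    text = (f"In {district}, {party} received {local_pct:.1f} percent "
            f"of the vote.")
    deviation = local_pct - state_pct
    if abs(deviation) > THRESHOLD:  # manually defined rule
        direction = "above" if deviation > 0 else "below"
        text += (f" That is {abs(deviation):.1f} points {direction} "
                 f"the statewide result and counts as a newsworthy "
                 f"deviation.")
    return text

print(election_text("Musterstadt", "Party A", 41.2, 33.0))
print(election_text("Beispieldorf", "Party B", 18.0, 17.5))
```

Changing the system's behavior means editing the rules by hand; nothing improves with more data, which is why such text automation falls outside most working definitions of AI.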
This is why we defined the relationship between AI and journalism as a form of technological assistance. AI is not an autonomous system; rather, assistance is provided reciprocally:
- Artificial Intelligence as a form of technological assistance for editorial activities, decision-making, and boosting human intelligence.
- Human intelligence to make corrections and ensure that the artificial intelligence will continue to evolve in a circular fashion.
Under this notion, AI is a technology-based, customized pool, trained by permanent feedback, and constantly evolving. Due to this continuous learning effect, journalism can leverage these tools on various levels throughout the entire journalistic production process:
- Research and topic setting: Research as the heart of journalistic activity has often received intense stimuli from technological advancements (mobility, telephone, online databases). AI continues this development. Specifically, the impact on research can be described along (at least) the following categories (Stray 2019: 1080; Diakopoulos 2019): analyses of large document and data sets (mining), language analysis and translation programs, data cleansing, identifying breaking news or prevailing topics in specific communities (e.g., through social media monitoring).
- Presentation and preparation of journalistic content: Artificial intelligence in the sense of machine learning can assist in content preparation. In this context, there are various AI applications (see Beckett 2019: 10; Rech/Meyer 2021: 21): It can supply text, image, or video elements from data sets, such as the media’s own archives, or propose hyper-textuality, i.e. make specific cross-references to relevant and related content on certain topics, persons, or databases. It is also possible to automate the editing of text (language), audio, and video (cutting), or to automate translations for multilingual portals.
- Distribution support: Artificial intelligence is able to scan huge amounts of data from digital usage traces in real time and thus, for example, enables publishers to better address target groups, personalize contents, or use recommendation systems. Ideally, this creates a »system [that] ›knows‹ the end-users’ preferences very well« (Vergeer 2020: 375). One central question in this context is monetization, i.e. the audience’s willingness to pay for certain content.
- Editorial organization and workflows: In terms of staff coordination, artificial intelligence serves as a tool to assist human cooperation. Especially larger forms of cooperation (cross-border journalism) rely on good coordination, on tools that facilitate collaboration and support organizational tasks (Beckett 2019: 75 ff., 156). The oft-cited path »from tool to team mate« or the notion of »Machines as Team Mates (MaT)« (e.g., Bienefeld 2020; Seeber et al. 2020) involves a combination of human and tech-enabled decision-making: The practical implementation of this is still in its very early stages. However, we already have AI tools that use NLP (Natural Language Processing) to automatically refer (external) queries to the right contact or department, or to send documents to the right places in the correct order based on predefined workflows. At the same time, we are observing strategic adjustments in the newsrooms in terms of organizing innovation management (and the question of integrating AI into in-house processes is definitely one of them): New innovation units, often referred to as media labs (Hogh-Janovsky/Meier 2021), have sprung up around AI and are developing digital formats to address precisely these trends and challenges independently of the constraints of the daily editorial business.
- Verification/correction of non-journalistic content: Artificial intelligence is able to verify the exclusivity and originality of text, image, and video content. This is a key function that journalism provides to its audience (factuality, transparency), one that can no longer be ensured by human perception alone, especially when it comes to deep fakes (Mattke 2018; Godulla et al. 2021).
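Of the assistance functions listed above, the workflow example – automatically referring incoming queries to the right contact or department – is perhaps the easiest to sketch. Production systems would use trained NLP models; the toy version below uses plain keyword matching, and all department names and keywords are invented:

```python
# Toy query router: scores each department by keyword hits and
# returns the best match (a stand-in for a trained NLP classifier).

ROUTES = {
    "sports desk":   {"match", "league", "coach", "goal"},
    "politics desk": {"election", "parliament", "minister", "party"},
    "archive":       {"footage", "archive", "recording", "photo"},
}

def route(query, default="editor on duty"):
    words = set(query.lower().replace("?", "").split())
    scores = {dept: len(words & keywords)
              for dept, keywords in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(route("Who won the league match yesterday?"))       # -> sports desk
print(route("Do we have archive footage of the event?"))  # -> archive
print(route("Lunch plans?"))                              # -> editor on duty
```

The fallback to a human (»editor on duty«) mirrors the reciprocal-assistance idea from chapter 2: the system handles the routine cases and hands ambiguous ones back to people.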
How are these extensive capabilities currently being used? A survey on journalism and AI, based on 71 media organizations from 32 countries (Beckett 2019: 7 ff., 156), has shown that the main motives for integrating AI into journalism are a desire for more efficient newsroom organization (68%), a more targeted content offering for the audience (45%), and generally more economical personnel and content management (18%). This great potential comes with a number of major challenges: financing (27%), training and knowledge management (24%), and creating a more AI-friendly corporate culture (24%) in editorial offices where the prevailing sentiment is often AI skepticism.
For Germany, Rech and Meyer (2021) evaluated 385 surveys of journalists, revealing that although most editorial offices are educating themselves and talking about AI, only a few of them have come into direct contact with AI tools. According to the survey, more than 60% of respondents had never used this technology. Just under 20% said they had at least come into contact with it, albeit rarely. Nevertheless, the fundamental importance of AI is often described as a central challenge for the future of journalism and society: In an international survey of 227 decision-makers in media companies, conducted by Oxford University in December 2020, 69% considered artificial intelligence the major technological driver of journalistic innovation in the coming years, far ahead of 5G technology, which came in a distant second at 18% (see Newman 2021: 30).
In addition to the aforementioned opportunities and future scenarios for AI applications in journalism, a complete inventory and potential analysis must also consider risks and ethical concepts. We have identified four dimensions (cf. Meier et al. 2021) of overarching issues (cf. e.g. Giessmann et al. 2018; Montal/Reich 2017; Linden 2017; Dörr/Hollnbuchner 2017; Filipovic 2020) as the basis of our empirical investigation:
- Automated and learning text production: functionality, strengths, and weaknesses;
- AI as a tool of assistance or, in some cases, even a determining element of the production process – from topic selection and research, to processing and enrichment, to the distribution and use of journalistic products;
- Changing journalistic role models and skill sets: new tasks in the context of algorithms and automation, also in cooperation with technical staff;
- (New) media ethics at the interface between media ethics, journalistic ethics, and machine ethics – in other words, developing ethical concepts for the above-mentioned dimensions.
3 Empirical approach
As a basis, we used partially standardized guidelines to conduct interviews of approximately 60 minutes with experts. In our selection of respondents, we strove for a balance of expertise in software/AI development, journalistic practice, and journalism/media research. The interviews were conducted as part of two teaching projects in the Master’s program »journalism with a focus on innovation and management« at the Catholic University of Eichstätt-Ingolstadt. Over two time periods (November 2020 to January 2021 and May 2021 to June 2021), we conducted a total of 18 interviews. Specifically, we were able to draw on the expertise of the following experts:
Sample of respondents in the empirical survey

| Name | Position | Background |
|---|---|---|
| Susanne Merkle | Head of Treffpunkt Trimedialität, Labor für Innovation und Vernetzung, Bayerischer Rundfunk (BR) | Practice |
| Steffen Kühne | Head of Development, Research/Data and AI + Automation Lab, BR | Practice |
| Robert Kaiser | Head of IT Business Systems and Solutions, BR | Practice |
| Jens Radü | Chief Editor, Multimedia, SPIEGEL | Practice |
| Jan Georg Plavec | Editor, Stuttgarter Zeitung | Practice |
| Gabriele Wenger-Glemser | Head of Documentation and Research, BR | Practice |
| Cécile Schneider | Product Lead AI + Automation Lab, BR | Practice |
| Norbert Lossau | Science journalist, physicist, and advisory board member of the Science Press Conference | Science/Practice |
| Jonas Schreiber | Scientific Documentarist, BR archives | Science/Practice |
| Philipp Mayer | Dual program student, BR/LMU | Science/Practice |
| Jessica Heesen | Head of Research, Media Ethics and Information Technology, University of Tübingen | Science |
| Karla Markert | Cognitive Security Technologies, Fraunhofer AISEC | Science |
| Alexander Waldmann | Senior Technical Product Manager for AI and ML, Amazon | Science |
| Oliver Zöllner | Professor of Media Research, International Communication, and Digital Ethics, Stuttgart Media University | Science |
| Oliver Wiesener | Professor of Technology and Innovation Management, Stuttgart Media University | Science |
| Rolf Fricke | Head of Research and Development, Condat AG | Software and IT |
| Johannes Sommer | CEO, Retresco GmbH | Software development |
| Stefan Grill | Innovation and Products Team, 3pc | Agency |
These interviews and the associated transcripts yielded a pool of over 226 pages of experiences with AI in journalism. These were evaluated with the help of MAXQDA and a qualitative category system that is essentially based on the aforementioned dimensions. In total, 860 individual codes were set along 66 categories across the 18 interviews. In the evaluation below, which is organized along a set of main categories (e.g. »conceptualizations«, »AI in the newsroom«, »fields of application«, »skill sets«, »opportunities and challenges«, and »ethics and responsibility«), we assigned each quotation and indirect attribution to the person and their activity.
4 Findings
Our evaluation of the interviews shows, first of all, that the terminological difficulties which we addressed in the theoretical introduction to our work also make themselves felt in practice. Almost all respondents stated that there is no uniform working definition of AI within their department, editorial office, or media company. Often, people merely agree on a working definition for certain projects, as SPIEGEL and Bayerischer Rundfunk (BR) did for a collaboration: »We agreed, for this one project […], that we define AI as something that means ›computer learning‹.« (Jens Radü, SPIEGEL) The term is usually only defined more precisely via its differentiation from other buzzwords in the same context. Respondents from practical backgrounds tend to distinguish the term machine learning, which is considered only a subfield of AI, but which is currently the most established practice in journalism. Another important aspect of defining the term is the distinction from data journalism. While working with data is also a fundamental element of working with AI applications, AI and data journalism should nonetheless be understood as two separate approaches: »In a nutshell, AI is a technology and data journalism is a process, which may also harness AI methods.« (Steffen Kühne, BR)
It also becomes relatively clear that the term »robot journalism«, which is already questionable in scientific terms (cf. chapter 2), is of little practical use in the context of artificial intelligence:
»I don’t like the term robot journalism at all, because it evokes entirely wrong connotations. Journalism encompasses a great many activities. Putting ›robot‹ in front of it implies that a robot could do all the jobs that journalists do today. And that’s just not right, and it shouldn’t be. In this context, we prefer to speak of text automation, because that is a clearly delineated function that tells us what the so-called ›robot‹ really does.« (Cécile Schneider, BR)
Overall, there is a clear need to clarify and specify the term AI across editorial departments and companies. Exchange and debate between science and practice (cf. Meier/Schützeneder 2019), such as our podcast project on »AI and Journalism«, could support this process (Meier/Graßl 2021).
4.1 Fields of application
Respondents mentioned many fields of application across the entire spectrum of the journalistic production process. In the following, we will systematize these numerous mentions as practical examples of real-life AI use to complement the potential AI applications mentioned in literature, which we summarized above.
- Research: Newsrooms and investigative teams use AI applications for research purposes. This is also where we find the greatest proximity to data journalism. AI is deployed as one of many research tools, enabling journalists to sift through large amounts of data that would be impossible for humans to process. AI applications are now also being developed and used for smaller research projects (e.g., social media searches).
- Verification: AI applications help journalists and editors verify material or other content they receive from third parties. AI is particularly helpful to detect fakes and deep fakes, identifying fake or manipulated image and video material, or at least to make editors aware of possible fakes.
- Production: Respondents mentioned various tools for (automatic) text generation, summarizing, proofreading, and transcription (mostly speech-to-text), deep fake technology (to clean up video, or for comedic purposes), recommendation systems, and speech-to-text applications (e.g., for live subtitling).
- Documentation and archiving: AI applications assist with the time-consuming and resource-intensive (and cross-departmental) daily routine of documenting and archiving new content. For example, facial recognition eliminates the need to manually create metadata for video contributions (including the names of all persons who are featured). Such keywording tools are also being tested or used for text (e.g., press releases), images, and audio.
- Audience interaction: AI applications are assuming support functions for community management in online and social media, for example, by pre-selecting or clustering reader comments for the editorial offices.
- Usage analysis and monitoring: AI applications help analyze usage of digital offerings, for example by measuring reach or scanning social networks for particular anomalies (trends, atypical behavior, certain words). Especially the latter aspect brings the editorial production process full circle, as AI generates new topic ideas, offering starting points for new research.
This shows that artificial intelligence is already integrated, or can be integrated, into the entire journalistic production process, from topic identification and research to usage analysis and monitoring.
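The monitoring function mentioned last – scanning networks for trends and atypical behavior – often boils down to flagging terms whose current frequency is far above their historical baseline. A minimal sketch of such a spike detector follows; the terms, counts, and threshold are invented, and real systems would work on streaming data with more robust statistics:

```python
import statistics

# Flag terms whose count today exceeds the historical mean by more
# than k standard deviations -- a crude spike detector.

def trending(history, today, k=3.0):
    """history: term -> list of daily counts; today: term -> count."""
    spikes = []
    for term, counts in history.items():
        mean = statistics.mean(counts)
        sd = statistics.pstdev(counts) or 1.0  # avoid zero threshold
        if today.get(term, 0) > mean + k * sd:
            spikes.append(term)
    return spikes

history = {
    "flood":   [2, 3, 1, 2, 2],
    "stadium": [10, 12, 9, 11, 10],
}
today = {"flood": 40, "stadium": 11}
print(trending(history, today))  # -> ['flood']
```

A flagged term is not a story; it is a starting point handed to a journalist, which is the circular research loop described in the bullet above.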
4.2 Editorial impact
These varied possibilities for application have been impacting the organizational structure and workflow of editorial departments for some time already. However, especially outside of large media companies or public broadcasters, respondents reported initial organizational challenges in dealing with AI applications:
»As a rule, editorial offices have little or no experience with such systems. It is especially difficult when there are no models and workflows in place. Then, of course, you have to free up budget and personnel capacities, which is always a thorny issue.« (Jan Georg Plavec, Stuttgarter Zeitung)
According to the respondents, training or restructuring editorial teams as well as developing AI workflows would require an AI strategy, which German editorial offices and media companies generally don’t have. It is therefore no surprise that the interviews yielded little concrete information regarding the organizational impact of AI on newsrooms (as of yet). Rather, respondents reported that the topic of AI is usually entrusted to a few interested parties, or that dedicated AI work groups or special taskforces are created (e.g. the »AI + Automation Lab« at BR). These interdisciplinary teams, consisting of journalists/editors, data/technical staff, designers, and product developers, assist the editorial teams in all matters AI.
In addition to a lack of experience and workflows, the main challenge for editorial teams is technical infrastructure. While public broadcasters, for example, have the means to dedicate staff, time, and money towards developing and building their own AI technology for specific projects, most of the others cover their needs exclusively through collaborations, which has both advantages and drawbacks for the newsrooms and the media companies:
»People often rely on Google technologies because it would be insanely expensive to set up your own unit of 50 AI experts. We cannot afford that. That means you enjoy Google-level capacities from the outset […], but the price for this technological leap is independence. What do we do if we become aware of shady goings-on at Google that need to be investigated?« (Jens Radü, SPIEGEL)
The impact of AI on newsroom structures and operations also has consequences for journalists, whose traditional general skillset (see Meier 2018: 233 ff.) has already been changed by other developments (e.g., social media, see Dernbach 2022). Experts agree that AI will bring technological skills even more to the fore. For the respondents, this means mathematical and statistical skills and, in some scenarios, the ability to program. Artificial intelligence will not render journalistic skills obsolete, however: »I wouldn’t say that all journalists have to be able to program all of a sudden, because you can cover that need very well with interdisciplinary teams.« (Cécile Schneider, BR) The respondents see the core tasks on the »creative« side and in making sense of information (Johannes Sommer, Retresco), or in »telling stories« (Susanne Merkle, BR). Journalists in the feature section, for example, do not have to be able to program an algorithm, but they do have to handle the data and results of AI applications.
4.3 Opportunities and risks
As with any new technology, the pros and cons of AI need to be weighed. In the interviews, the respondents initially conveyed the impression that they see AI primarily as a support function and a problem-solving tool, which is why they generally have a positive attitude toward it. The risks and challenges of AI revealed themselves during the implementation of specific projects. The issue of diversity, which kept coming up in the discussions, illustrates the proximity between opportunity and risk: On the one hand, editorial offices can now run their journalistic texts through an AI program and, for example, check their expert quotes for aspects such as gender balance. On the other hand, the AI application they use may end up being discriminatory if it was trained or developed with biased data. This is what happened to Amazon when the AI tool they used to pre-select job applications demonstrably discriminated against women (see Holland 2018).
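The gender-balance check described above can be reduced to counting quoted sources against a lookup table, and the same sketch shows where bias creeps in: if the lookup (standing in for a model trained on name data) is skewed or incomplete, the »check« inherits that skew. All names and the lookup itself are invented for illustration:

```python
from collections import Counter

# Hypothetical name-gender lookup, a stand-in for a trained model.
# Any name missing here is silently counted as "unknown" -- exactly
# the kind of gap through which bias enters real systems.
NAME_GENDER = {"anna": "f", "maria": "f", "peter": "m", "jonas": "m"}

def quote_balance(quoted_sources):
    """Count quoted sources per (inferred) gender category."""
    counts = Counter(NAME_GENDER.get(name.split()[0].lower(), "unknown")
                     for name in quoted_sources)
    return dict(counts)

sources = ["Peter Muster", "Jonas Beispiel", "Anna Demo", "Xin Example"]
print(quote_balance(sources))  # e.g. {'m': 2, 'f': 1, 'unknown': 1}
```

The »unknown« bucket is the honest part of the output: a tool that silently dropped such cases would report a balance it cannot actually verify.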
AI can therefore certainly be seen as a »field of tension« (Stefan Grill, 3pc). The following table is a concise overview of the opportunities and challenges that were often mentioned in connection with AI in journalism.
Opportunities and challenges of AI in journalism
We cannot discuss all aspects of this in detail in this paper. Nevertheless, we would like to point out a few highlights. Deep fakes illustrate the aforementioned tension, as respondents mentioned them several times, both as an opportunity and a risk for journalism. According to the respondents, the risks reside in the difficulty of exposing deep fakes as well as in the loss of credibility suffered by editorial offices and media which disseminate deep fakes. But the technology offers a wealth of possibilities, as well: It enables editorial offices to pre-produce image material or smooth it out in post-production, for example, to eliminate non-lexical filler words, to insert missing words, or to better synchronize the footage. In particular, respondents feel that pre-productions »in the studio« hold an economic advantage for the future.
This economic aspect makes AI and its opportunities for journalism a corporate and entrepreneurial matter. Stefan Grill (3pc) even thinks that therein lies the main opportunity: »The real benefit of AI actually resides less in the journalistic product and more in the business model, such as selling advertising or other services (keyword: personalization).« A recommendation system for personalized articles can be an important starting point for newsrooms (cf. Elmer 2021). Personalization is particularly important for distribution and sales. Using AI-supported target group and/or audience analysis, »subscribers or premium customers can be targeted in an entirely new way« (Jens Radü, SPIEGEL). From an economic perspective, this also means that advertising can be sold more effectively because of a more targeted delivery to users. In the context of personalization, respondents also identified opportunities for local journalism, »because with AI and automation, I can use regional data to create micro-local offerings that I would never be able to attain with human workers« (Cécile Schneider, BR).
While available data holds opportunity for AI and journalism, it also harbors risk. Strict privacy laws are another complicating factor. The greatest risk, however, is the incorrect handling of data. The challenges reveal themselves at three points in the production process:
- Data preparation (1): If data is adopted unchecked, bias or erroneous data can be fed into AI applications, leading to biased or incorrect results. Especially large amounts of data are difficult and costly to clean up.
- Data application (2): Is the specific AI application running correct and current data? In sports journalism, for example, changes in coaching staff or rescheduled game days can lead to errors in automatically produced texts. The maintenance of datasets and AI applications requires additional resources.
- Data interpretation (3): Humans should be the ones placing AI-assisted results in the right context. Limitations in the data material and results should be pointed out to avoid misinterpretation by the audience.
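The three checkpoints above suggest simple guardrails that can run before any automated text is generated. Below is a sketch of checkpoint (2), data application: verifying that a record is complete and fresh enough to use. The field names and the freshness window are assumptions for illustration, not a real system's schema:

```python
from datetime import date, timedelta

REQUIRED = ("home_team", "away_team", "coach_home", "updated")
MAX_AGE = timedelta(days=2)  # assumed freshness window

def validate(record, today):
    """Return a list of problems; an empty list means safe to use."""
    problems = [f"missing field: {f}" for f in REQUIRED
                if record.get(f) in (None, "")]
    updated = record.get("updated")
    if updated and today - updated > MAX_AGE:
        problems.append(f"stale data: last updated {updated}")
    return problems

match = {"home_team": "FC Muster", "away_team": "SV Beispiel",
         "coach_home": "", "updated": date(2021, 3, 1)}
print(validate(match, today=date(2021, 3, 10)))
# -> ['missing field: coach_home', 'stale data: last updated 2021-03-01']
```

A failed check would block automatic publication and route the record to a human, which keeps checkpoint (3), interpretation, where the respondents say it belongs.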
All of these cases show that human journalists cannot be completely replaced by an AI application. However, fears and concerns are widespread in many editorial departments and media companies: »First, there is clearly a certain sense of unease in the industry. But I don’t think it’s a fear that AI will completely replace our work, but rather an uncertainty about what’s coming.« (Susanne Merkle, BR) Overall, however, the long-prevailing mood of total rejection of AI seems to be changing. Johannes Sommer, whose company Retresco offers AI software for editorial offices, concurs: »Five years ago, we would get booted, for example, when the sports director of a publisher said: ›As long as I’m head of the sports section here, we’ll never have automated soccer texts.‹ Today, almost all major media companies are working with us.« As the industry warms to the technology, the challenges of AI, which have received little priority so far, are coming into focus. These include, above all, ethical issues.
4.4 Ethics and responsibility
Respondents feel that ethical issues and accountability are critical for the long-term implementation of AI applications. This debate is not confined to journalism, but rather, is a larger discourse across society (cf. Weber-Guskar 2021). At the moment, however, editorial departments and media companies are more or less operating in a regulatory vacuum: »We need legal foundations for where AI can and cannot be used, which is something we do not have at this point.« (Oliver Zöllner, Stuttgart Media University). Some media companies are trying to fill this gap by establishing their own »ethics guidelines« (Steffen Kühne, BR). According to the respondents, journalistic responsibility in dealing with AI primarily resides on three levels:
- Responsibility towards data: This dimension of responsibility pertains primarily to the above-mentioned area of data preparation (1). It demands accountability for the »correctness« (Johannes Sommer, Retresco) of the used data. Accordingly, it is the responsibility of the editorial offices to prepare existing or old data material in such a way that any pre-existing bias or distortion is corrected and prejudices (gender, race, etc.) are not perpetuated.
- Responsibility towards the editorial team: The editorial team members should remain in charge, not the machine, in order to keep responsibility in the newsroom (and thus towards the editorial team) with one or more specific individuals. The respondents therefore advocate, for example, fixed rules for approvals and releases by humans so »responsibility cannot just be pushed onto an AI« (Cécile Schneider, BR).
- Responsibility towards the audience: There is also a responsibility to transparently label AI-generated content and, where appropriate, to explain how and using which data the AI application came to these results. Ideally, this information should be combined with the name of a contact person. In addition, there needs to be transparency about human responsibility to ensure compliance with privacy and personality rights (e.g., facial recognition).
Across all interviews, respondents agreed that the overriding responsibility should remain in the hands of humans, in other words, the journalists. Responsibility should not be pushed »on the machine«. Instead, respondents believe that better and higher-level (macro-level) regulation and implementation of control mechanisms into AI workflows (meso-level) are solutions to avoid (ethical) failures. This view strongly correlates with the notion of AI as a tool of assistance for journalism.
4.5 AI as a tool of assistance: A tool with limits
The respondents have a rather clear and uniform definition of the role of AI in journalism. They refer to AI as a »tool« (Steffen Kühne, BR), an »aid« (Cécile Schneider, BR), a »means for scaling« (Jens Radü, SPIEGEL), or »assistance« (Oliver Zöllner, Stuttgart Media University), among others. The role of AI should not transcend that of an assisting function; »the sovereignty of the relevance decision, the sovereignty of the last word«, as Jens Radü (SPIEGEL) puts it, should remain in the hands of the journalists. This is also associated with the core tasks of journalism:
»Journalists’ contribution to society will not really change. We will still investigate, we will still tell stories, we will still fact-check, and we will still try to explain complex issues. But we’re going to do precisely these things with the help of artificial intelligence, and artificial intelligence is going to help us in many ways.« (Susanne Merkle, BR)
Consequently, the respondents draw different lines regarding the limits of AI in the newsroom. Overall, we noted the following aspects as the current limitations of AI in journalism:
- recognizing, processing, and rendering atmosphere and emotions,
- establishing context,
- reliability (e.g., in research),
- making moral/ethical decisions,
- the availability of data,
- the content of existing data.
In addition to technical constraints, some limitations are also due to journalists’ understanding of their own professional role. As already indicated in Radü’s quote, journalists do not feel that AI is able to decide ethical questions according to human moral concepts, or to make choices in keeping with journalistic quality standards and selection criteria. Rather, AI is meant to assist journalism by …
- providing input for journalists and editorial offices,
- taking over simple routine tasks,
- freeing up more time for other activities (e.g. research),
- increasing production efficiency (making it easier and faster),
- offering cues (e.g., identify anomalies in data sets, support social media monitoring, support target audience analysis),
- completing proofing tasks (e.g., spelling or other forms of error-proofing).
This may shift the focus of journalism even more towards research, creativity, context interpretation, and (final) decision-making. Respondents consider the main function of AI to be assisting journalism to »do journalism better« (Cécile Schneider, BR). Respondents feel that this assisting function is already met when AI takes over unwelcome routine tasks, thus freeing up time for more research. From this perspective, AI furthers the quality of journalistic products by assuming certain tasks or performing them more efficiently.
However, this assisting role has not yet been exhausted even within the aforementioned limitations. The respondents feel that this development is still in its infancy and that there is great potential for growth. They expect future opportunities primarily in terms of personalization and the targeted distribution of journalistic content. According to the respondents, this starts with greater precision in evaluating user feedback data and analyzing target groups, and leads to an ultimate scenario of a fully »personalized« publication, compiled and produced for each recipient based on their reading habits. From an economic perspective, the respondents expect technological advances to result in lower acquisition costs for AI tools and thus opportunities for more newsrooms to harness AI technology.
On the other hand, however, journalism will also need to invest in itself, for in addition to the aforementioned technical constraints of AI, the journalists themselves are also a limitation. The respondents believe that there is a lot of catching up to do across all levels, including the decision-making level: »A big problem is that knowledge is underdeveloped among media companies. They have to make software purchasing decisions which they can’t even evaluate because they simply don’t possess the know-how.« (Johannes Sommer, Retresco) Almost all respondents felt that the only way to prepare the industry for the future is to train existing staff and adapt the way they train future staff. Journalism is being restructured, and the changes are here to stay: »In the past, a television crew consisted of a journalist, a camera operator, and a sound supervisor. In the future, there will be a journalist, a graphic designer, a data manager, and a producer« (Susanne Merkle, BR). The respondents agree that newsroom organization will continue to change with AI. However, as international comparison shows, journalism in Germany is only just beginning to adopt AI.
4.6 International comparison
Internationally, Germany in general and German journalism in particular are lagging behind in AI adoption, as the survey respondents agree. While they consider AI in Germany to be more or less in its »infancy« (Gabriele Wenger-Glemser, BR), the US and China, in particular, have surpassed Germany (and Europe): »Germany is by no means at the spearhead of development. At this point, we are lagging far behind. As an overall industry, we are certainly not innovators, but followers.« (Johannes Sommer, Retresco) The Washington Post, the Financial Times, and the BBC are considered international pioneers in the media industry. Some consider the international player Axel Springer to be a positive German example.
The fact that AI has not yet become an innovation driver in German journalism is also evident from the results of our international DFG-funded research project »Journalism Innovation in Democratic Societies: Index, Impact, and Prerequisites in International Comparison«, which involved a total of 19 researchers from Germany, Austria, Switzerland, Spain, and the UK. The purpose of this three-year research project is to identify the most successful innovations in journalism in the countries in question in the period from 2010 to 2020, which are then examined in greater detail in subsequent case studies (cf. Meier 2020). To identify these innovations, 20 experts from each country were interviewed in guided interviews between December 2020 and April 2021. The respondents in each country were selected from three areas: media professionals (practice), media observers (academia), and media evaluators (e.g., from media labs or juries). They were asked to name what they considered the most important innovations (approximately ten each). The German respondents named a total of 273 innovations.
The interviews show that so far, artificial intelligence is hardly perceived as an innovation in or for journalism in Germany at all. Only four of 20 German respondents named any innovation even pertaining to the field of AI and automation. Artificial intelligence was mentioned as the major innovation only a single time; otherwise, the topic was mainly mentioned in the context of automation. For national context, it is helpful to compare AI with other areas of innovation: Twelve German respondents named audio (including podcasts), twelve mentioned digital storytelling, ten named paywalls, and 15 mentioned news delivery via social media. Mentions of »radio« as an innovation were as frequent (four responses) as mentions of the general field of AI and automation.
While in Germany only one-fifth of respondents consider AI/automation to be one of the major innovations in journalism in the decade from 2010 to 2020, the proportion is significantly higher in the other countries (see Table 3): in Austria and Switzerland, almost twice as many experts considered AI/automation a significant innovation; in the UK, twice as many; and in Spain, more than twice as many. This puts Germany at the bottom of the table.
Table 3: Number of respondents per country versus the number who consider AI and automation one of the major innovations in journalism in the years 2010-2020

| Country | Number of respondents | Number of respondents who mentioned AI as an innovation |
The reasons for the higher values in other countries mainly relate to the area of content production. In Spain alone, the production of automated and AI-based news is mentioned nine times. In addition, increased use of AI to optimize subscription models (e.g., algorithmic paywalls) and advertising is also cited as an important innovation for journalism. Across all countries, however, the automated production of journalistic content using AI is considered the key innovative element in the arena of AI/automation.
5 Conclusion and outlook
Artificial intelligence is of vital and increasing importance for digital journalism. Drawing on different conceptual approaches, we first established our working definition: AI is a technology-based, customized set of tools, trained by permanent feedback and constantly evolving. These tools often leverage large data sets and are able to support the full range of the journalistic production process.
As a rule, the role of AI in journalism is currently a form of assistance. Our qualitative survey of 18 selected experts from academia and practice (survey period November 2020 – June 2021) provides more concrete details. From a scientific and practical perspective, we can state that the topic of AI and automation in journalism is of undisputed importance, but there is still a great need for basic information (even at the definitional level), discussion, and cooperation. The interviews also shed some light on how AI could be used: currently, primarily in the areas of research/topic identification, content presentation/preparation, support for distribution and editorial workflows, and verification of third-party content.
At the same time, this development shifts the focus onto new skillsets: Current and future journalism must recognize AI as a tool that can provide assistance (technological, definitional, and editorial); it must discuss the topic in an ongoing discourse about opportunities and risks, while also creating awareness and offering solutions in the media ethics debate (especially along the lines of responsibility for content and audience).
This is all the more urgent because the results from our international research project indicate that Germany is merely a follower in this key area of current and future relevance. In science policy, at least, there seems to be a heightened awareness of the issue, as numerous AI professorships will be created in the next few years. It will be crucial to work on solutions in both interdisciplinary (technology, humanities, and social science) and transdisciplinary fashion, in close exchange between science and practice.
Part of the research for this paper was funded by the German Research Foundation (DFG), project number 438677067, as part of the project »Journalism Innovation in Democratic Societies: Index, Impact and Prerequisites in International Comparison« (JoIn-DemoS).
About the authors
Michael Graßl is a research associate at the Chair of Journalism I at the Catholic University of Eichstätt-Ingolstadt. His research focuses on journalism and social media, innovations in journalism, and AI-related change in journalism. Among other things, he developed teaching modules for the AI campus. Contact: Michael.Grassl@ku.de
Jonas Schützeneder, Dr., is a substitute professor for journalism and digital innovation at Magdeburg-Stendal University of Applied Sciences. His research focuses primarily on innovations in journalism, local journalism, and digital communication. He and Klaus Meier served as Fellows at the AI Campus in 2020/21. Contact: Jonas.Schuetzeneder@h2.de
Klaus Meier, Prof. Dr., holds the Chair of Journalism I at the Catholic University of Eichstätt-Ingolstadt and has also served as its Vice President for Studies and Teaching since 2021. His central research topics include innovation and transformation in journalism, quality and ethics, transfer, and journalism education. He and Jonas Schützeneder served as Fellows at the AI Campus in 2020/21. Since 2020, he has been leading the international DFG project »Innovations in Journalism«. Contact: Klaus.Meier@ku.de
Translation: Kerstin Trimble
Altmeppen, Klaus; Arnold, Klaus (2013): Journalistik: Grundlagen eines organisationalen Handlungsfelds. Munich: Oldenbourg.
Beckett, Charlie (2019): New powers, new responsibilities. A global survey of journalism and artificial intelligence. London: LSE/Journalism AI/Google News Initiative.
Bienefeld, Nadine (2020): From Tools to Teammates: Human-AI Teaming Success Factors in High-risk Industries. Projekt an der ETH Zürich (2020-2024); accessible at https://p3.snf.ch/project-187331
Buxmann, Peter; Schmidt, Holger (2021): Grundlagen der Künstlichen Intelligenz und des Maschinellen Lernens. In: Buxmann, Peter; Schmidt, Holger (eds.): Künstliche Intelligenz. Berlin, Heidelberg: Springer Gabler, pp. 3-25. https://doi.org/10.1007/978-3-662-61794-6_1
Caswell, David; Dörr, Konstantin (2017): Automated Journalism 2.0: Event-driven narratives. From simple descriptions to real stories. In: Journalism Practice, 12(4), pp. 477-496. https://doi.org/10.1080/17512786.2017.1320773
Dernbach, Beatrice (2022): Lernen und Lehren: (Social-Media-) Kompetenzen in der journalistischen Ausbildung. In: Schützeneder, Jonas; Graßl, Michael (eds.): Journalismus und Instagram. Analysen, Strategien, Perspektiven aus Wissenschaft und Praxis. Wiesbaden: Springer VS, pp. 89-104.
Diakopoulos, Nicholas (2019): Automating the News. Cambridge: Harvard University Press.
Dörr, Konstantin (2016): Mapping the field of Algorithmic Journalism. In: Digital Journalism, 4(6), pp. 700-722. https://doi.org/10.1080/21670811.2015.1096748
Dörr, Konstantin; Hollnbuchner, Katharina (2017): Ethical Challenges of Algorithmic Journalism. In: Digital Journalism, 5(4), pp. 404-419. https://doi.org/10.1080/21670811.2016.1167612
Dreyer, Stephan; Schulz, Wolfgang (2019): Künstliche Intelligenz, Intermediäre und Öffentlichkeit. Bericht an das BAKOM, accessible at https://www.bakom.admin.ch/bakom/de/home/elektronische-medien/studien/einzelstudien.html
Elmer, Christina (2021): KI und Journalismus: Die erstaunlichsten und nützlichsten KI-Tools, die Journalismus besser machen können. Und Mythen, die es aufzuklären gilt. Präsentation im Rahmen des SPIEGEL Story Days am 29. Oktober in Hamburg.
Filipovic, Alexander (2020): Ethik als Akteurin für die Entwicklung einer digitalen Kultur. Das Verhältnis zu Wirtschaft und Politik am Beispiel des Diskurses um »Künstliche Intelligenz«. In: Prinzing, Marlis; Debatin, Bernhard; Köberer, Nina (eds.): Kommunikations- und Medienethik reloaded? Baden-Baden: Nomos.
García-Avilés, José; Kaltenbrunner, Andy; Meier, Klaus (2014): Media Convergence Revisited. Lessons learned on newsroom integration in Austria, Germany and Spain. In: Journalism Practice, 8(5), pp. 573-584. https://doi.org/10.1080/17512786.2014.885678
Giessmann, Marius; Goutrie, Christine; Herzog, Michael (2018): Unsichtbar und unverständlich: Kennzeichnungen von Roboterjournalismus. In: Dachselt, Raimund; Weber, Gerhard (eds.): Mensch und Computer 2018 – Sammelband zur Tagung in Dresden. Dresden: GfI. https://doi.org/10.18420/muc2018-mci-0294
Gillmor, Dan (2013): Unternehmer werden Journalismus retten (und Sie können einer von ihnen sein). In: Kramp, Leif; Novy, Leonard; Ballwieser, Dennis; Wenzlaff, Karsten (eds.): Journalismus in der digitalen Moderne. Einsichten – Ansichten – Aussichten. Wiesbaden: Springer VS.
Godulla, Alexander; Hoffmann, Christian P.; Seibert, Daniel (2021): Dealing with deepfakes – an interdisciplinary examination of the state of research and implications for communication studies. In: SCM Studies in Communication and Media, 10(1), pp. 72-96. https://doi.org/10.5771/2192-4007-2021-1-72
Graßl, Michael; Schützeneder, Jonas; Klinghardt, Korbinian (2020): Intermediäre Strukturen und Neu-Organisation bekannter Aufgaben: Instagram im Lokaljournalismus. In: Medienwirtschaft: Zeitschrift für Medienmanagement und Medienökonomie, Heft 2-3, pp. 18-27. https://doi.org/10.15358/1613-0669-2020-2-3-18
Hogh-Janovsky, Isabell; Meier, Klaus (2021): Journalism Innovation Labs 2.0 in Media Organisations: A Motor for Transformation and Constant Learning. In: Journalism and Media, 2(3), pp. 361-378. https://doi.org/10.3390/journalmedia2030022
Holland, Martin (2018): Amazon: KI zur Bewerbungsprüfung benachteiligte Frauen, accessible at https://www.heise.de/newsticker/meldung/Amazon-KI-zur-Bewerbungspruefung-benachteiligte-Frauen-4189356.html
Linden, Carl-Gustav (2017): Decades of Automation in the Newsroom. Why are there still so many jobs in journalism? In: Digital Journalism, 5(2), pp. 123-140. https://doi.org/10.1080/21670811.2016.1160791
Lischka, Juliane (2018): Nachrichtenorganisation. Umbrüche durch Konvergenz, Crossmedialität, Multikanal- und Innovationsfähigkeit. In: Nuernbergk, Christoph; Neuberger, Christian (eds.): Journalismus im Internet. Profession – Partizipation – Technisierung. Wiesbaden: Springer VS, 2. Auflage.
Loosen, Wiebke; Solbach, Paul (2020): Künstliche Intelligenz im Journalismus? Was bedeutet Automatisierung für journalistisches Arbeiten? In: Köhler, Tanja (ed.): Fake News, Framing, Fact-Checking: Nachrichten im digitalen Zeitalter: Ein Handbuch. https://doi.org/10.14361/9783839450253-010
Mattke, Sascha (2018): KI gegen KI: Wettbewerb zu Fälschung von Video-Inhalten, accessible at https://www.heise.de/newsticker/meldung/DARPA-veranstaltet-Wettbewerb-zu-Faelschung-von-Video-Inhalten-4074467.html
McCarthy, John (2007): What is Artificial Intelligence?, accessible at http://jmc.stanford.edu/articles/whatisai/whatisai.pdf
Meier, Klaus (2020): Start des Projekts, accessible at: https://innovations-in-journalism.com/2020/innovationen-journalismus-medien-projekt-deutschland-spanien-medieninnovationen
Meier, Klaus (2018): Journalistik. 4. Auflage. Konstanz: UTB.
Meier, Klaus; Schützeneder, Jonas; Graßl, Michael (2021): KI als Anwendung im Journalismus: zwischen Misstrauen und Aufklärung. Berlin: KI-Campus, accessible at https://ki-campus.org/blog/ki-im-journalismus
Meier, Klaus; Schützeneder, Jonas (2019): Bridging the Gaps: Transfer Between Scholarly Research and Newsrooms in Journalism Education – Toward an Evidence-Based Practice in an Age of Post-Truth and State of Flux. In: Journalism and Mass Communication Educator, 74(2), pp. 199-211. https://doi.org/10.1177/1077695819830021
Meier, Klaus; Graßl, Michael (2021): Podcast »KI im Journalismus«, accessible at: https://ki-campus.org/podcasts/ki-im-journalismus
Montal, Tal; Reich, Zvi (2017): I, Robot. You, Journalist. Who is the Author?: Authorship, bylines and full disclosure in automated journalism. In: Digital Journalism, 5(7), pp. 829-849.
Newman, Nic (2021): Journalism, Media, and Technology Trends and Predictions 2021, accessible at https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2021-01/Newman_Predictions_2021_FINAL.pdf
Pauly, Marcel (2021): So lassen wir Dutzende Wahlkreisanalysen automatisiert schreiben, accessible at https://www.spiegel.de/backstage/ltw-in-baden-wuerttemberg-und-rheinland-pfalz-2021-automatisierte-wahlkreis-analysen-a-701c2ffb-c0a9-4ffb-8575-ac27c2184a0e
Porlezza, Colin (2020): Ethische Herausforderungen des automatisierten Journalismus. Zwischen Dataismus, Bias und fehlender Transparenz. In: Prinzing, Marlis; Debatin, Bernhard; Köberer, Nina (eds.): Kommunikations- und Medienethik reloaded? Baden-Baden: Nomos. https://doi.org/10.5771/9783748905158-143
Rech, Benjamin; Meyer, Matthias (2021): Plattformen und neue Technologien im Journalismus: Ergebnisse einer Online-Befragung von Journalistinnen und Journalisten in Deutschland. ArXiv, abs/2105.07881.
Seeber, Isabella; Bittner, Eva; Briggs, Robert O.; de Vreede, Triparna; de Vreede, Gert-Jan; Elkins, Aaron; Maier, Ronald; Merz, Alexander B.; Oeste-Reiß, Sarah; Randrup, Nils; Schwabe, Gerhard; Söllner, Matthias (2020): Machines as teammates: A research agenda on AI in team collaboration. In: Information & Management, 57(2). https://doi.org/10.1016/j.im.2019.103174
Stray, Jonathan (2019): Making Artificial Intelligence Work for Investigative Journalism. In: Digital Journalism, 7(8), pp. 1076-1094. https://doi.org/10.1080/21670811.2019.1630289
Thurman, Neil (2020): Computational Journalism. In: Wahl-Jorgensen, Karin; Hanitzsch, Thomas (eds.): The Handbook of Journalism Studies. New York: Routledge.
Vergeer, Maurice (2020): Artificial Intelligence in the Dutch Press: An Analysis of Topics and Trends. In: Communication Studies, 71(3), pp. 373-392. https://doi.org/10.1080/10510974.2020.1733038
Weber-Guskar, Eva (2021): Drohen mit dem Einsatz von Künstlicher Intelligenz neue Menschenwürdeverletzungen? In: Kipke, Roland; Röttger, Nele; Wagner, Johanna; v. Wedelstaedt, Almut Kristine (eds.): ZusammenDenken. Wiesbaden: Springer VS. https://doi.org/10.1007/978-3-658-33464-2_12
1 We thank the respondents for their valuable input and support of the project. The following students in the Master’s program »Journalism with a focus on innovation and management« were involved: Konstantin Holtkamp, Felix Melzer, Verena Müller, Morgana Pfeiffer, Amelie Ries, Leonie Bednorz, Paulina Skrobanek, Leonie Heinrichs, Hannah Marquardt, Tamara Ruf, Florian Enslein, Jana Rudolf, Laura Danner, and Katharina Harbach.
2 At the beginning of this article, we made a theoretical distinction between the term automation and artificial intelligence (cf. ch. 2 as well as Dörr 2016; Montal/Reich 2017); for international comparison, we combined AI and automation as one innovation cluster in our current international research project due to numerous overlaps and similarities in the interviews.
About this article
This article is distributed under the Creative Commons Attribution 4.0 International license (CC BY 4.0). You are free to share and redistribute the material in any medium or format. The licensor cannot revoke these freedoms as long as you follow the license terms. You must, however, give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits. More information at https://creativecommons.org/licenses/by/4.0/deed.en.
Michael Graßl; Jonas Schützeneder; Klaus Meier: Artificial intelligence as a tool of assistance. A scientific and practical perspective on AI in journalism. In: Journalism Research, Vol. 5 (1), 2022, pp. 3-24. DOI: 10.1453/2569-152X-12022-12049-en
First published online