In early March, a job advertisement circulated among sports journalists, offering a position as an “AI-assisted sports reporter” at Gannett, the publisher of USA Today. The role was pitched as the forefront of a new era in journalism but came with the stipulation that it would not involve traditional beat reporting, travel, or in-person interviews. The posting prompted some dark humor within the industry, epitomized by football commentator Gary Taphouse’s remark: “It was fun while it lasted.”
As artificial intelligence makes its way into newsrooms, media professionals are weighing the technology’s risks against its advantages. One outlet recently faced scrutiny over a project that appeared to downplay the history of the Ku Klux Klan, while some British journalists, aided by AI, have racked up well over 100 bylines in a single day, a measure of the technology’s double-edged nature. Despite the apprehension, a consensus is emerging about what AI in journalism can and cannot yet do.
However, media companies must confront a significant concern: the potential for users to bypass traditional news sources altogether by turning to AI assistants for information. A UK media executive highlighted the necessity of establishing clear guidelines in the coming years to preserve credible journalism, stating, “I think good quality information can rise in an age of AI, but we need to set the terms in the right way, or we are all screwed.”
The rapid emergence of AI technology has resulted in several cautionary tales regarding its application in journalism. For instance, when the LA Times introduced an AI tool designed to provide alternative viewpoints on opinion pieces, it raised alarms by framing the Ku Klux Klan in a way that seemed to minimize its hateful ideology. This incident underscored the risks of assigning evaluative tasks to AI systems that lack the necessary contextual understanding. A media executive remarked on the inherent dangers, saying, “It was given a task of making judgments it can’t possibly be expected to make.”
Even established tech companies like Apple have stumbled, suspending a feature that produced inaccurate summaries of BBC News headlines, a testament to how difficult it is to guarantee the accuracy of generative AI output.
In practice, journalism teams and technology developers have been collaborating for years to identify the most effective uses of AI. Currently, many publishers use AI to generate small text segments, such as headlines and story summaries, which are then checked by human editors. The Independent has announced plans to publish AI-generated condensed versions of its articles, joining a wave of publishers exploring similar initiatives.
Additionally, several major organizations have been experimenting with custom AI chatbots that let audiences engage with content from their archives. The responses these bots generate, however, are not always checked by editors before reaching readers. The Washington Post, for instance, attaches a disclaimer to its chatbot feature, cautioning users that “this is an experiment … Because AI can make mistakes, please verify the response by consulting these articles.”
The extent to which AI-generated content can be effectively overseen by human editors remains contentious. Reach, the publisher of the Daily Mirror and numerous local titles, uses its “Guten” tool to repackage its existing journalism for different audiences. The practice has produced some strikingly high byline counts, including one regional reporter credited with 150 bylines in a single day. The reporter did not use Guten directly; rather, the tool repackaged his work for other platforms.
Concerns about the implications of such technology are prevalent among journalists. Nonetheless, a spokesperson for Reach emphasized that Guten serves merely as a tool, one that requires careful use by editorial staff. They noted, “We’re encouraged by the progress we’ve made in reducing errors and supporting our everyday work, which has allowed journalists to focus on stories that might otherwise remain underreported.”
The USA Today Network defended its AI-assisted reporter position in similar terms: “By leveraging AI, we are able to expand coverage and enable our journalists to focus on more in-depth sports reporting,” a spokesperson said.
Critics, however, question whether the time saved through AI will actually be reinvested into original journalism. Chris Blackhurst, a former editor at the Independent, expressed skepticism about the potential benefits, suggesting that it might instead “free people up to work elsewhere.”
While the impact of AI-assisted journalism remains debated, the technology is already delivering tangible improvements in newsrooms, particularly in the analysis of vast datasets. Publications such as the Financial Times, the New York Times, and the Guardian have used AI for tasks including sifting through extensive hospital documentation to identify cases of severe neglect. AI is also being applied to transcription and translation, further extending journalistic capabilities.
Some media organizations are even employing AI for “social listening,” utilizing tools to track trending discussions among younger audiences on social media platforms. Dion Bailey, the chief product and technology officer of The News Movement, shared that this technology helps them grasp current conversations and topics relevant to their audience. Despite ongoing fears about AI inaccuracies, some outlets, including Der Spiegel, are exploring the use of AI for fact-checking their content.
Looking ahead, research points to a shift toward “audience-facing format transformations”: adapting stories into whatever format a user prefers, whether summary, audio, or video. A significant share of the media leaders surveyed by the Reuters Institute for the Study of Journalism expressed interest in experimenting with converting text articles into video, and current technology can already distill lengthy footage into concise, shareable pieces.
Yet overshadowing these innovations is the fear that personal AI chatbots could displace traditional media companies as users’ primary source of news. “What keeps me up at night is AI simply inserting itself between us and the user,” one industry insider said, a concern sharpened by the recent introduction of Google’s “AI Mode,” which aggregates information and presents it in a chatbot format. Many believe regulatory intervention may be necessary to address these challenges.
In response to AI’s growing influence, several major media organizations have signed licensing agreements with prominent AI model developers, allowing these systems to be trained on their original content with proper attribution. Approaches diverge, however: The Guardian has established a partnership with OpenAI, the maker of ChatGPT, while the New York Times is pursuing legal action against OpenAI over the use of its material.
Bailey acknowledges the existing concerns but remains optimistic about the media’s ability to adapt. He remarked, “If the power goes to two or three big tech companies, then we have some real, significant issues. We need to adapt in terms of how people are able to get to us. That’s just a fact.”
Source: www.theguardian.com