Rapid AI Adoption, but Guidelines Still Lacking
Generative AI tools, led by ChatGPT from OpenAI, have made significant inroads into global newsrooms, according to a recent survey by the World Association of News Publishers (WAN-IFRA).
Despite being in its early developmental stages, AI has already found a place in roughly half of the newsrooms surveyed.
However, the survey also indicated that only 20% of newsrooms have policies in place to guide the use of these AI tools, revealing a tension in the industry’s response to AI: embracing its potential while wrestling with its ethical and practical implications.
The survey, conducted in collaboration with Schickler Consulting, polled a global cohort of 101 journalists, editors, and other news professionals.
A remarkable 49% of respondents confirmed their newsrooms’ active engagement with GenAI tools such as ChatGPT, a testament to the rapid adoption of these technologies.
Despite this eager adoption, the survey uncovered a significant gap in formal guidelines regulating AI usage in newsrooms: 49% of respondents said journalists had the freedom to use AI tools as they saw fit, without formal policies.
Conversely, only 20% of those polled confirmed the existence of management guidelines on when and how to use these tools.
A further 3% said AI use was entirely forbidden in their publications.
“Given that these tools only became available a few months ago, the pace of adoption is remarkable, but there’s clearly work to be done in formulating guidelines,” commented one anonymous participant in the survey. “The industry must face these complex questions surrounding GenAI sooner rather than later.”
GenAI Tools: A Mixed Bag of Promise and Concerns
Contrary to alarmist reactions about AI’s potential to usurp journalists, the survey found that GenAI tools were primarily being used to enhance workflow efficiency and content summarisation.
This reveals a broader industry trend of harnessing AI as a valuable support tool rather than a replacement for human journalism.
However, concerns about the technology persist.
Chief among these are worries about the potential for misinformation and poor content quality generated by AI, with 85% of respondents identifying this as a major issue.
Plagiarism, copyright infringement, data protection, and privacy also emerged as key concerns.
Such worries are not entirely unexpected, considering instances where news outlets published AI-assisted content that was later found to contain inaccuracies.
“The lack of clear guidelines only amplifies these uncertainties. Addressing these issues with AI-specific policies and staff training could alleviate concerns, as could open dialogue about responsible GenAI usage,” suggested another survey participant.
The Future of GenAI in Newsrooms
Despite the concerns, the outlook on GenAI amongst news professionals remains broadly positive.
Seventy per cent of survey participants saw GenAI as a beneficial tool for their newsrooms. This positivity, however, was tempered by a realistic acknowledgement that these tools need further development to reach their full potential.
The survey also revealed that editorial management and data and tech teams primarily drive the push for AI integration.
While some resistance was reported amongst journalists, editors appeared more receptive: 37% of respondents reported no resistance among their editorial staff.
Interestingly, a considerable 45% of respondents anticipated that the increased adoption of AI tools would bring about “significant” changes to roles and responsibilities within the newsroom.
The concern about AI affecting job security, while present, was not the dominant worry, reflecting a trend towards seeing AI as a supplement to, rather than a replacement for, human journalism.
As AI continues its steady march into newsrooms, these findings highlight the crucial need for open dialogue and concrete guidelines to ensure the responsible use of GenAI tools.