Many media executives are betting on artificial intelligence as the future of the industry, some even replacing journalists to cut costs and capitalize on the hype. So far, these efforts have been underwhelming: numerous publications have inadvertently published garbled AI content, frustrating readers and journalists alike.
AI’s tendency to “hallucinate” is degrading many aspects of our online experience, from Google’s inaccurate AI summaries to gambling content in newspapers to AI-driven content farms that plagiarize journalists’ work. Google’s embrace of the technology is also hurting publications financially, diverting readers and ad revenue away from the very outlets whose content its AI monetizes.
Journalists are discovering that AI is inadequate for effectively assisting with their everyday tasks. An investigation led by NYU journalism professor Hilke Schellmann, published in the Columbia Journalism Review, found AI to be poor at summarizing documents and scientific research for reporters. While AI models like Google’s Gemini 2.5 Pro and OpenAI’s GPT-4o could create short summaries with few errors, they struggled to produce accurate long summaries, omitting around half of the facts. The shortcomings were even more pronounced in generating lists of related scientific papers, with AI tools failing to match the benchmark’s citations.
AI’s inconsistency in scientific research may result in journalists misunderstanding new studies or missing crucial critiques. Despite promises that AI can reduce workloads, the technology often fails even in basic tasks like summarization and research, forcing journalists to double-check its output.
The paradox of AI tools is that, although they are meant to assist, journalists must constantly verify their output, potentially increasing the workload instead. As AI-generated content increasingly pollutes the internet, the future of journalism is at risk. This is not mere alarmism: the industry faces a real existential threat, with widespread layoffs, even as media companies chase AI-driven results and lucrative licensing deals with companies like OpenAI.
Despite growing anti-AI sentiment among the public, little has changed since reports of AI-generated content in the industry first emerged. Axel Springer, owner of Politico, prompted outrage by requiring journalists to publish AI content, while The Washington Post is developing an AI tool for unqualified writers. Springer Nature now offers AI-generated summaries of research papers. Meanwhile, a study published last year found that mentioning AI in bylines reduces readers’ trust in both the source and the author.