Facepalm: AI algorithms are well known for being unreliable and untrustworthy when it comes to news reporting or factual documentation. And yet, media companies are extremely eager to jump on the AI bandwagon anyway. They can always retreat from their “experiments” and ask for forgiveness later, after all.
Gannett was recently at the center of yet another case of bad journalism powered by a generative AI algorithm. The media holding company is the largest US newspaper publisher by total daily circulation, owning USA Today and many local newspapers in Florida, Tennessee, Kentucky, Arizona, New York, and Montana.
In August, some of the articles published in Gannett’s local newspapers went viral on social media because they were quite clearly written by an AI. The articles, mostly brief high school sports dispatches, were mocked for being repetitive, lacking essential details, and using strange wording. The articles’ “authors” had no apparent knowledge of the sports they were covering, or any semblance of human intelligence for that matter.
Some of the botched articles are preserved on the Internet Archive for anyone to see, offering eye-opening details about close encounters “of the athletic kind” and a “[[WINNING_TEAM_MASCOT]]” overcoming the brutes of the “[[LOSING_TEAM_MASCOT]].” In other cases, CNN reports, the sports dispatches recycled identical phrasing such as “high school football action” and teams that “took victory away from” their opponents.
Gannett later confirmed that its weird local articles were being written by LedeAI, a company that promises to give newsrooms “superpowers” and create “reliable, readable, accurate local reporting” for topics that readers want but newspapers seemingly cannot afford to provide anymore. The “experiment” with AI-based reporting has now been put on hold, Gannett confirmed.
The media giant stated that it is still adding “hundreds” of reporting jobs across the US while also experimenting with AI-based automation to provide additional content to readers and tools to journalists. The evaluation process is ongoing, as the company wants to ensure the information it provides meets the “highest journalistic standards.”
LedeAI CEO Jay Allred acknowledged that its generative algorithm made “some errors” and used unwanted repetition or awkward phrasing. The company immediately launched an “around-the-clock effort” to correct the problems displayed by its AI, even though machine learning models aren’t that easy to “correct” once they have gone through their training phase. Allred still believes that automation is a fundamental part of the future of local newsrooms, however, as LedeAI can provide information that communities would not otherwise have.