The journalism industry has long expressed concern about losing jobs to AI, and with good reason: much of the content circulating online today appears to be machine-generated. But surely we can still rely on major publications to deliver high-quality, human-authored reporting, right? If the Chicago Sun-Times is any indication, even traditional newsrooms are beginning to embrace generative AI. And, like many others using these tools without adequate oversight, they’re getting burned: in May 2025, the paper ran a syndicated summer reading list in which most of the recommended titles, attributed to real authors, were AI-fabricated books that don’t exist.
There may be a silver lining, though. For now, at least, AI appears unlikely to replace one traditional role in journalism: the fact checker.
Fact-checking wasn’t always a fixture in newsrooms; major American magazines began building dedicated fact-checking departments in the 1920s, and the practice was standard by the 1950s. Despite shrinking budgets and new technology streamlining editorial workflows, fact-checking still demands human judgment. And AI hasn’t changed that . . . yet.
The situation mirrors what’s unfolding in the legal profession. In both journalism and law, accuracy is not optional. Errors can lead to reputational damage, legal exposure, or the spread of misinformation with real-world consequences. Just as no responsible attorney would submit a court filing without verifying the precedents it cites, no responsible journalist should publish a story without confirming its claims.
And yet, in both fields, professionals are tempted by AI’s promise of speed and scale. Newsrooms and law firms alike are being asked to do more with less. AI can help—but only if it’s used carefully. Generative models can hallucinate, fabricating plausible-sounding but false quotes, citations, or case law. Legal hallucinations get lawyers sanctioned. Journalistic ones erode public trust, sometimes irreparably.
We’ve already seen real-world consequences in both professions. In Mata v. Avianca (2023), lawyers became the first to be sanctioned for citing fictitious, AI-generated case law. In journalism, publishing AI-generated articles without proper oversight has led to retractions and public embarrassment; and if the misinformation causes harm, the publication exposes itself to potential liability. In both law and journalism, the temptation to skip fact-checking in the name of efficiency is strong, but the risk is greater still.
There’s also a growing need for transparency. Just as some courts now require disclosure of AI use in legal filings, newsrooms may soon face similar expectations. In both arenas, authorship and accountability matter. If AI generates a falsehood, the human author and the publisher can still be held liable.
Generative AI offers journalists and attorneys alike powerful capabilities. But if anything is clear, it is that its outputs can’t be trusted blindly. Just as with human-generated content, the work still needs to be fact-checked by a human. And lawyers, given their own track record with AI, should be the last ones laughing at the Sun-Times’ mistake.