
How should journalists govern the use of AI?

Associated Press

Like so many sectors of the economy, the news industry is hurtling toward a future where artificial intelligence (AI) plays a major role, grappling with questions about how much the technology should be used, what consumers should be told about it, and whether anything can be done for the journalists who will be left behind.

These issues were on the minds of reporters for the independent outlet ProPublica as they walked picket lines earlier this month.

They’re inching toward a potential strike, in what is believed to be the first such job action in the news business where the chief sticking point is how to deal with AI. Few expect this dispute to be the last.

Vetting AI data

AI has undeniably helped journalists, simplifying complex tasks and saving time, particularly with data-focused stories. News organizations are using it to help sift through the Jeffrey Epstein files. AI suggests headlines, summarizes stories. Transcription technology has largely eliminated the need for a human to type up interviews. These days, even a simple Google search frequently involves AI.

Yet rushing to see how AI can help a financially troubled industry has resulted in several cases of publications owning up to errors.

Within the past year, Bloomberg issued several corrections for mistakes in AI-generated news summaries. Business Insider and Wired were forced to remove articles by a fake author named Margaux Blanchard. The Los Angeles Times also had trouble with AI and opinion pieces. Ars Technica said AI fabricated quotes; the publication, which has frequently reported on the risks of overreliance on AI tools, embarrassed itself further by failing to follow its own policy of telling readers when such tools are used.

‘Unrecognizable’ trajectory

The ProPublica dispute is noteworthy for how it touches on issues that are frequent causes of debate. The union representing ProPublica’s journalists, negotiating its first contract with the outlet known for investigative reporting, says it wants commitments that mirror those sought elsewhere in the industry about disclosure and the role of humans in the use of AI.

Along with holding informational pickets, union members pledged they would be willing to strike without a satisfactory agreement, said Jen Sheehan, spokesperson for the New York Guild, the union that represents many journalists in the city.

“It feels to me pretty monumental when we think about the trajectory of AI and journalism,” said Alex Mahadevan, an expert on the topic at the journalism think tank Poynter Institute.

ProPublica has rejected those requests, the union said. Insight into why can be found in an essay, “Something Big Is Happening,” which circulated widely this month.

Author and investor Matt Shumer, who said he had spent six years building an AI startup, wrote that the technology is advancing so quickly that “if you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.”

Small wonder, then, that news executives are reluctant to put guarantees in writing that could quickly become outdated.

‘Greedy side, journalism side’

Rather than make promises that can’t be kept, ProPublica is exploring how technology can create more space for investigative reporting, company spokesperson Tyson Evans said.

In the “unlikely event” of AI-related layoffs, ProPublica is proposing expanded severance packages for those affected, he said.

“We’re approaching AI with both curiosity and skepticism,” Evans said. “It would be a mistake to freeze editorial decisions in a contract that will last years.”

Fifty-seven of 283 contracts at US news organizations negotiated by the NewsGuild-USA contain language related to artificial intelligence, said Jon Schleuss, president of the union that represents more journalists than any other in the country. The first such deals happened in 2023, and The Associated Press was one pioneer. He wants provisions in more contracts.

It won’t be easy, judging by the reluctance of many outlets to be tied down. The organization Trusting News, which encourages news organizations to develop and make public their policies on AI use, estimates that less than half of US outlets have done so.

“I think it is becoming harder,” Schleuss said, “because too many newsrooms are being run by the greedy side of the organization and not by the journalism side of the organization.”

The guild is pushing for contracts that guarantee AI won’t eliminate jobs. That’s no surprise—unions exist to protect jobs. Schleuss characterized a proposal that ensures an actual journalist is involved when AI is used as a way to prevent errors and help an outlet build trust with its readers.

Disclosure, readers’ trust

“Humans are actually so much better at going out, finding the story, interviewing sources, bringing back the relevant pieces, asking the hard follow-up questions and putting that in a way that people can understand and see, whether it’s a news story or a video,” he said. “Humans are way better at doing that than AI ever will be.”


Apparently, not everyone in journalism agrees. Chris Quinn, editor of The Plain Dealer in Cleveland, Ohio, wrote this month of his disgust with a recent college graduate who turned down a job offer because the person had been taught that AI was bad for journalism.

Quinn’s newspaper has been sending some of its journalists out to cover stories by interviewing people, collecting quotes and information, then feeding the material to a computer to write. While a human will edit what the computer spits out, an integral part of the process—a reporter using his or her judgment about how to tell a story—has been stripped away. Quinn defended it as the best use of limited resources.

Research shows that a vast majority of American consumers believe it’s very important that newsrooms tell the public when AI is used to write stories or edit photographs, said Benjamin Toff, director of the Minnesota Journalism Center at the University of Minnesota.

But here’s the rub: Such disclosure makes them trust the outlet’s stories less, not more.

A significant minority—30 percent in a study Toff conducted last year—doesn’t want AI used in journalism at all.

Telling a reader that AI was used is not as simple as it sounds. “There are just so many, many uses of AI in journalism, from the very beginning of the reporting process to when you hit publish, that just broadly declaring that when AI is used in the news gathering process that you have to disclose it, just seems like it is actually a disservice to the reader in some cases,” Poynter’s Mahadevan said.

‘Important conversations’

Two lawmakers in New York state—the nation’s publishing capital—introduced legislation this month requiring clear disclaimers when AI is used in published content.

There’s no immediate word on its chances of enactment, but both sponsors are Democrats in a legislature controlled by that party.

Mahadevan believes it’s fair to have policies that require human involvement—editing to prevent slip-ups, for example. But even these declarations are open to interpretation, he said. If an outlet uses chatbots to answer reader questions, are those responses being edited by a human being?

“Speaking realistically, the newsroom of the future is going to look completely different than it does today,” he said. “Which means people will lose jobs. There will be new jobs. So I think it’s important that we are having these conversations right now because audiences do not want a newsroom completely taken over by AI.”


© 2025 Inquirer Interactive, Inc.
All Rights Reserved.
