The Associated Press’s guidance on generative AI is careful in a way that feels almost unfashionable. It says AI may support the values of accuracy, fairness, and speed, but that the central role of AP journalists will not change.

More specifically, AP says staff may experiment with ChatGPT cautiously, but must not use it to create publishable content. Any generative AI output should be treated as unvetted source material and filtered through editorial judgment and sourcing standards.

That may sound conservative. It is also a remarkably clear read on where trust actually lives.


The byline is a promise

Newsrooms are not trusted because every sentence is handmade in some romantic candlelit archive. They are trusted, when they are trusted, because there is accountability behind the work: reporting, verification, correction, standards, and a visible institution willing to stand behind the result.

Generative AI strains that compact because it can produce fluent text without having done any of the civic labor that makes journalism valuable. Fluency is not reporting. A paragraph that sounds finished may still be epistemically unemployed.

AP’s standards do not pretend AI has no place in the newsroom. They propose a better hierarchy: AI can help with tasks, but editorial responsibility cannot be outsourced to a tool that cannot be held accountable in public.

That is the right distinction. The danger is not that journalists use software. They always have. The danger is that audiences lose the ability to tell when a piece of media has passed through a human institution with obligations.

The most durable AI policies in media will likely sound a lot like this: practical, limited, and explicit about what must remain human.

Trust is not only built by disclosure labels. It is built by preserving the parts of the process that make disclosure meaningful in the first place.

In short

The Associated Press says generative AI output should be treated as unvetted source material and not used to create publishable content. That is less anti-AI than pro-accountability.