
AI 'Invading' Journalism? 9% of Articles Are AI-Made, But That's Not the Real Alarm

Introduction: A Futile War for “Purity”

Recently, a research paper titled “AI use in American newspapers is widespread, uneven, and rarely disclosed” (published on arXiv) caused quite a stir in AI and media circles.

Its core conclusion: in summer 2025, roughly 9% of American newspaper articles contained at least some AI-generated content. Of 100 AI-flagged articles sampled manually, only 5 openly disclosed it: “Hey, we used AI here.”

Immediately, cries of “journalism’s fall,” “public right-to-know crisis,” and “AI fake news flood” erupted everywhere. People seemed to have found a new target, ready to launch a massive “AI content purge campaign.”

But as a Builder who has been navigating AI waters for several years, my reaction was completely different. Shocked? Maybe slightly surprised that the number (9%) arrived so fast, given traditional journalism’s existing industry standards.

But panicked? Not at all. Because in my view, what this paper truly reveals isn’t how scary AI is, but how “stuck in the past” many people’s attitudes toward AI are. The alarm it sounds shouldn’t be “how to contain AI content,” but “how do we face a future where AI is everywhere?”

1. What Did the Paper Say? — Cold Data on AI “Occupying” Journalism

Let’s set aside opinions and objectively examine what facts this paper (arXiv:2510.18774v1) reveals. Researchers deployed a high-precision AI detector called Pangram (claimed false positive rate of only 0.001%), “examining” approximately 250,000 articles from over 1,500 American newspapers in summer 2025.

Core Conclusions Are Striking:

  • AI Is Now Normal: About 9.1% of news articles contain AI-generated content (5.2% fully AI-generated, 3.9% human-AI hybrid).

  • “Small Towns” Rely More: Local small papers’ AI usage far exceeds national major media. AI is most “rampant” in Maryland (16.5%), Alabama (13.9%), and Tennessee (13.6%).

  • Certain Domains Hit Hardest: Weather news AI usage reaches 27.7%, tech and health topics also significantly higher.

  • “Opinion” More “Watered Down” Than “Fact”: AI usage in commentary and columns is 6.4 times that of news reports.

  • “Stealth Mode” Is Mainstream: Of 100 manually sampled AI-flagged articles, only 5% disclosed. The vast majority of AI writes “invisibly.”

  • Language Differences: Non-English articles (especially Spanish) reach 31% AI, far exceeding English articles (8%).

Credibility of Evidence and Methods:

The data scale is large (~250,000 articles), the detection tools are high-precision (Pangram cross-validated with GPTZero), and the logic is relatively rigorous. But limitations exist: the definition of “human-AI hybrid” is ambiguous, non-English news may be misjudged due to translation, and the disclosure-verification sample is small. Overall, the report’s data and conclusions have high reference value.
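A quick back-of-envelope check, using only the figures cited above (the exact corpus size and rates are approximate), shows why the detector’s claimed error rate cannot plausibly explain away the findings:

```python
# Sanity check on the paper's numbers (illustrative only, not the
# authors' methodology): compare the false positives implied by the
# detector's claimed error rate against the volume of articles flagged.

articles = 250_000          # approximate corpus size from the study
claimed_fpr = 0.001 / 100   # Pangram's claimed false positive rate (0.001%)
flagged_fully_ai = 0.052    # share flagged as fully AI-generated (5.2%)

expected_false_positives = articles * claimed_fpr
observed_flags = articles * flagged_fully_ai

print(f"Expected false positives at claimed FPR: {expected_false_positives:.1f}")
print(f"Articles flagged as fully AI-generated:  {observed_flags:.0f}")
# Even if the true false positive rate were 100x the claim (~250 articles),
# it would be a rounding error next to ~13,000 flagged articles.
```

In other words, the headline 9.1% figure would survive even a substantial error in the detector’s self-reported precision.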

2. My “Contrarian” View — Why “Anti-AI Detection” Is Meaningless

OK, the facts are laid out. Now how do we interpret them? The mainstream response will likely be panic and containment; many content platforms are already doing this, using restrictions, mandatory labeling, and traffic throttling to hinder the spread of AI content. My view may disappoint or even anger many: all this “containment” and “panic” is meaningless, even harmful. Yes, I clearly oppose investing major effort in “anti-AI detection.” The reason is simple:

1. AI Content “Invasion” Is Irreversible — The Future Internet Will Be AI Everywhere.

9% is just the beginning; this number will only grow exponentially. In the future, completely “pure” human-created content will become rare, and trying to filter out every “AI ingredient” will be like picking needles out of a haystack.

2. The Obsession with “Pure Human Data” Is Leading AI Training Into a Dead End.

High-quality human data has long been “squeezed dry,” and new content is full of AI traces. Clinging to “purity” will only make models outdated. The correct path is researching how to leverage (not reject) synthetic data for more effective training.

3. Content Evaluation Standards Should Be “Quality” and “Value,” Not “Origin.”

We care whether news is true, accurate, and valuable, not who (or what) wrote it. Would you dismiss an entire movie because the special effects were AI-made? The same logic applies to content creation. AI is just a tool; final value depends on the goals and capabilities of the human-machine collaboration system behind it.

AI can be a tool for editors writing articles, but it can’t replace journalists investigating on the ground. What we should care about is the authenticity of the information source, the journalist, not whether the final writing was done by AI or a human. If a news article is a briefing sent back by a real journalist from the frontlines, following journalism industry standards, then polished using AI and reviewed by humans, discussing whether AI participated in the writing is truly meaningless.

It’s like asking: is writing a manuscript by hand with pen and paper good, while typing on a computer is bad?

4. AI Detection Technology Itself Is a “Cat and Mouse Game” That Can Never Completely Win.

As models evolve, AI-generated content will increasingly resemble human writing, and detection will only get harder. Investing heavily in detection tools is like reinforcing a dam destined to burst.

Therefore, my core view: Rather than wasting resources fighting this doomed-to-fail war, shift focus to truly important questions: How do we ensure all content’s truthfulness, accuracy, and ethics? How do we establish new norms and evaluation systems suited for human-machine collaboration? How do we improve public media literacy?

3. The Paper’s Real Value

So where’s this paper’s actual value? In my view, it sounds several deeper alarms:

  • Transparency Crisis Is Far Scarier Than AI Itself: 9% AI content isn’t scary — the scary part is only 5% disclosure rate. This “sneakiness” is the cancer eroding public trust. Industry must quickly establish clear AI usage disclosure standards, with respecting right-to-know and taking responsibility for authenticity at the core.

  • Structural Differences Widen Information Gaps: Local small papers’ AI dependence far exceeds major media, reflecting resource gaps. Long-term, will this lead to a new “information feudalism” — rich media doing deep investigations, poor media only producing shallow information via AI?

  • “Opinion” AI-ification May Be More Dangerous: Commentary AI usage at 6.4x news reports deserves particular vigilance. Compared to facts, shaping opinions is more subtle, more easily manipulated. When AI is used to batch “optimize” or even “generate” opinions, its potential impact on public discourse is far greater than AI writing weather forecasts.

Embrace the Flood, Learn to Surf — Not Build Dams

AI’s penetration into content creation is an irreversible flood. What we can do isn’t futilely build dams to block it, but quickly learn how to swim in the flood, even ride it to surf.

This paper on AI usage in American journalism is less evidence for the “AI threat theory” than an “industry transformation notice.” It reminds us:

  • Stop fantasizing about “pure human creation” — that’s unrealistic and unnecessary.
  • Shift focus from “detecting AI” to “ensuring quality” and “improving literacy.”
  • Quickly establish transparency norms and ethical guidelines for AI usage.

The future is here. Rather than panic and resist, face reality and think about how to establish new trust mechanisms and value standards in this new era of human-machine co-creation. This is far more important and urgent than developing the next “AI detection miracle tool.”

Finally, this article was co-created by Gemini 2.5 Pro and myself.

Fear comes from the unknown. Understanding how AI confirms and evaluates source authenticity and accuracy, along with its advantages and limitations, helps reduce that fear.

Recommended reading:

GEO (AIO) Underlying Principles: Deconstructing AI’s Credibility and Authority Assessment in Information Retrieval

Interrogating AI: How I Made ChatGPT “Confess” GEO’s Underlying Logic

Found Mr. Guo’s analysis insightful? Drop a 👍 and share with more friends who need it!

Follow my channel to explore AI, going global, and digital marketing’s infinite possibilities together.

🌌 Content’s value lies in true insight, not carbon-based or silicon-based labels.


© 2026 Mr'Guo
