Reuters publishes over 3,000 AI-assisted stories per month. The Associated Press has been generating earnings reports automatically since 2014. The Washington Post's in-house AI tool, Heliograf, covered more than 850 stories during the 2016 US elections without a human writing a single sentence.
This is not a pilot program or a future roadmap. It is the current operating reality for some of the largest newsrooms on earth. The question for smaller publishers is not whether AI belongs in editorial workflows. It is which parts of the workflow to change first, and what that change actually costs.
What publishing workflows is AI changing right now?
The clearest gains have come in structured content, the kind that follows a template and pulls from data. Sports scores, financial results, weather summaries, election returns, and real estate listings are all candidates. These stories have a fixed shape: number goes in, sentence comes out. AI does not need to understand the subject. It needs to fill the template correctly.
Beyond templated content, publishers are using AI in four areas that used to eat significant editorial time:

- Transcription of interviews and press conferences: one to two hours per recording by hand, under five minutes with tools like Whisper and Otter.ai.
- Headline A/B testing: once a manual process requiring several days of traffic analysis, now run automatically, with the better-performing headline picked within hours.
- Image tagging and metadata: work that contract staff used to do manually across photo archives now happens automatically.
- SEO optimization: previously a separate review pass by a specialist, now flagged in real time as writers type.
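The logic behind automated headline testing is simple enough to sketch. Below is a minimal, illustrative version: serve each variant, count impressions and clicks, and promote the variant with the higher click-through rate once every variant has enough traffic. The headlines and numbers are invented; production systems add statistical significance checks on top of this.

```python
# Minimal sketch of automated headline A/B testing. All headline text and
# traffic numbers here are illustrative, not from any real publisher.

def pick_winner(variants, min_impressions=1000):
    """Return the variant with the best CTR, or None if data is too thin."""
    if any(v["impressions"] < min_impressions for v in variants):
        return None  # keep the test running until every variant has traffic
    return max(variants, key=lambda v: v["clicks"] / v["impressions"])

variants = [
    {"headline": "Rates Rise Again: What It Means for Buyers",
     "impressions": 5200, "clicks": 182},   # CTR ~3.5%
    {"headline": "Fed Hikes Rates for Third Straight Quarter",
     "impressions": 5100, "clicks": 229},   # CTR ~4.5%
]

winner = pick_winner(variants)
print(winner["headline"])  # the higher-CTR variant wins
```

The `min_impressions` guard is the part editors care about: promoting a headline too early, on thin traffic, is how tests pick noise instead of signal.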
Nieman Lab's 2023 survey of 150 newsrooms found 75% had adopted at least one AI tool in the prior 12 months. That number would have been under 20% in 2021. The adoption curve has been steep, though most of what was deployed through 2023 fell into assistance rather than generation.
How does AI-assisted content generation work for publishers?
The phrase "AI-generated content" covers a wide range of actual workflows, and the distinction matters for anyone making a budget decision.
At the automation end, a publisher connects a data source (a live sports API, a company's earnings filing, a government database) to a generation tool. The tool pulls the data, maps it to a template the editorial team designed, and publishes. No human writes anything. A human designed the template, a human decides which data feeds to connect, and a human reviews the output periodically. AP produces roughly 4,400 earnings reports per quarter this way. Each one would have taken a junior reporter 20–30 minutes to write. At scale, that is thousands of hours of labor redirected to work that requires actual reporting.
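The core of that workflow (structured data in, templated sentence out) can be sketched in a few lines. The template, field names, and company below are illustrative; real systems such as AP's Automated Insights pipeline use far richer templates and editorial rules, but the mechanism is the same.

```python
# Minimal sketch of template-driven data-story automation.
# Field names and the template itself are illustrative assumptions.

EARNINGS_TEMPLATE = (
    "{company} reported quarterly revenue of ${revenue_m}M, "
    "{direction} {pct_change}% from the same quarter last year, "
    "with earnings per share of ${eps}."
)

def render_earnings_story(filing: dict) -> str:
    """Map one structured filing onto the editorial template."""
    direction = "up" if filing["pct_change"] >= 0 else "down"
    return EARNINGS_TEMPLATE.format(
        company=filing["company"],
        revenue_m=filing["revenue_m"],
        direction=direction,
        pct_change=abs(filing["pct_change"]),
        eps=filing["eps"],
    )

story = render_earnings_story(
    {"company": "Acme Corp", "revenue_m": 412, "pct_change": -3.2, "eps": 1.87}
)
print(story)
```

Once the template and data connection exist, every new filing becomes a story at near-zero marginal cost, which is the whole economic argument for this end of the spectrum.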
At the assistance end, a reporter or editor works with an AI tool that drafts, suggests, or expands rather than publishes independently. The writer describes an angle, the tool produces a first draft, and the writer revises. The final article is human-written in the sense that a human shaped every sentence. But the blank-page problem, which is genuinely one of the slowest parts of writing, gets solved in minutes instead of an hour.
Between those two points sits a workflow many publishers are building now: AI drafts the body of a routine story, a human editor reads it, changes what needs changing, and publishes. The Washington Post used this approach for its Olympics coverage. One editor oversaw output that would have required four reporters under a traditional workflow.
McKinsey's 2023 State of AI report found generative AI tools reduced content production time by 40–60% in marketing and media contexts. For a publisher with a 20-person editorial team, that is the equivalent of adding eight to twelve people without the payroll.
Can AI personalize what each reader sees?
Personalization in publishing means two different things, and they require different investments.
Content recommendation shows each reader articles they are more likely to read based on their history. This has been standard at large publishers since at least 2015. The New York Times, the Financial Times, and the Guardian all run recommendation engines. Spotify's Discover Weekly, which uses the same underlying approach, increased user engagement by 30% when it launched. Publishers who have implemented similar systems report 20–35% increases in pages per session (Piano, 2023).
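The idea underneath most recommendation engines is straightforward: score candidate articles by how often they were read by people whose history overlaps the current reader's. Here is a deliberately tiny sketch of that scoring; the reading histories are invented, and production tools like Piano or Arc XP layer recency, popularity, and editorial weighting on top of this.

```python
# Minimal sketch of history-overlap content recommendation.
# Reader histories and article slugs are invented for illustration.
from collections import Counter

histories = {
    "reader_a": {"rates-explainer", "housing-q3", "fed-preview"},
    "reader_b": {"rates-explainer", "fed-preview", "bank-earnings"},
    "reader_c": {"housing-q3", "local-sports"},
}

def recommend(current_history: set, all_histories: dict, k: int = 2) -> list:
    """Suggest up to k unread articles, weighted by shared reading."""
    scores = Counter()
    for other in all_histories.values():
        overlap = len(current_history & other)
        if overlap == 0:
            continue
        for article in other - current_history:
            scores[article] += overlap  # more shared reads, more weight
    return [article for article, _ in scores.most_common(k)]

print(recommend({"rates-explainer", "fed-preview"}, histories))
```

Even this naive version captures why recommendation works without any content analysis at all: behavior alone carries the signal.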
Content adaptation goes further, adjusting the actual text of an article based on what is known about the reader. This is newer and harder. A financial publication might show a simplified explainer to a reader who has never opened a markets article, and a technical breakdown to a reader who reads earnings analysis every day. The New York Times has experimented with this. Most publishers have not gone this far yet, because it requires editorial policy decisions about how much the text can vary and still count as the same article.
The infrastructure for recommendation is available without custom engineering. Tools like Piano, Arc XP, and Sailthru offer recommendation as a service for publishers at various scales. A mid-sized digital publisher with 500,000 monthly readers can implement a recommendation layer for $2,000–$8,000 per month depending on the platform, compared to $40,000–$80,000 to build a custom system with a Western technology agency.
| Personalization Approach | What It Does | Setup Cost (AI-Native Team) | Western Agency Cost |
|---|---|---|---|
| Content recommendation engine | Suggests articles based on reader history | $6,000–$10,000 | $30,000–$50,000 |
| Email personalization | Sends each subscriber articles matched to their interests | $4,000–$7,000 | $20,000–$35,000 |
| Dynamic homepage layout | Reorders sections and stories per reader segment | $8,000–$14,000 | $40,000–$65,000 |
| Full adaptive content | Changes article text based on reader profile | $25,000–$40,000 | $100,000–$160,000 |
For most publishers, content recommendation is the right starting point. It produces measurable engagement gains within weeks and does not require changing how journalists write.
Is AI-generated content expensive to produce at scale?
The cost equation depends entirely on what the publisher is generating and how.
For structured data stories (earnings, sports, weather), the marginal cost per story approaches zero once the initial template and data connection are in place. AP's automation investment, which it built with Automated Insights starting in 2014, now produces output that would cost millions in reporter salaries if written by hand. The upfront build cost for a comparable system with a modern AI-native team runs $15,000–$25,000. A traditional Western agency would quote $60,000–$100,000 for the same capability.
For AI-assisted drafting at the article level, the cost model shifts to per-token API pricing. OpenAI's GPT-3.5 API costs approximately $0.002 per 1,000 tokens as of late 2023, which works out to roughly $0.005 per 500-word article draft. A publisher generating 1,000 AI-assisted drafts per month pays under $10 in API costs. The human time spent reviewing and editing those drafts is the real cost, not the AI.
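The arithmetic behind those figures is worth making explicit. The sketch below uses the cited $0.002-per-1,000-token price and a rough heuristic of about 1.3 tokens per English word; the prompt size is an assumption, and real bills vary with prompt length, but even generous inputs keep 1,000 monthly drafts comfortably under the $10 figure above.

```python
# Back-of-envelope API cost check. The tokens-per-word ratio and prompt
# size are rough assumptions, not measured values.
PRICE_PER_1K_TOKENS = 0.002   # cited late-2023 GPT-3.5 pricing
TOKENS_PER_WORD = 1.3         # rough heuristic for English prose

def draft_cost(words: int, prompt_tokens: int = 500) -> float:
    """Estimated cost of one AI-assisted draft, prompt plus completion."""
    completion_tokens = words * TOKENS_PER_WORD
    return (prompt_tokens + completion_tokens) / 1000 * PRICE_PER_1K_TOKENS

per_draft = draft_cost(500)   # one 500-word draft
monthly = 1000 * per_draft    # 1,000 drafts per month
print(f"per draft: ${per_draft:.4f}, monthly: ${monthly:.2f}")
```

The takeaway matches the paragraph above: at these prices, API spend is a rounding error next to the editor time spent reviewing the drafts.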
The table below compares what a publisher typically pays to produce different content volumes under three models.
| Content Volume | Traditional Newsroom Cost | AI-Assisted Team Cost | Automated (Data Stories) |
|---|---|---|---|
| 500 articles/month | $40,000–$60,000/mo | $18,000–$28,000/mo | N/A (requires structured data) |
| 2,000 articles/month | $140,000–$200,000/mo | $55,000–$80,000/mo | $15,000–$25,000/mo |
| 10,000 articles/month | Not feasible at this volume | $180,000–$250,000/mo | $20,000–$35,000/mo |
The economics shift dramatically at high volume. A data-driven publisher producing 10,000 structured stories per month could not staff that with human reporters at any reasonable budget. With an automated pipeline, the same volume costs less than what a single US senior reporter earns in a year.
What does not get cheaper with AI is original reporting. Investigative pieces, interviews, and stories that require judgment about what matters cannot be automated in any meaningful way. The publishers who will get the most from AI are those who use it to remove the low-judgment work so that their reporters can do more high-judgment work.
Timespade builds content automation systems, recommendation engines, and AI-assisted editorial tools for media companies across Generative AI, Predictive AI, Product Engineering, and Data & Infrastructure. A publisher that needs an automated data story pipeline, a personalized reader app, and a dashboard to track which content drives subscriptions can get all three from one team, on one contract. Most technology vendors cannot cover that range.
If you are evaluating what an AI content system would cost your publication, the first step is a scoping call. Book a free discovery call.
