What the 2026 General Election Revealed About AI in Campaigns

The honest retrospective. What worked, what overpromised, what surprised, and what the campaigns that are already thinking about 2028 should be building now.

#campaign-operations #ai-campaign-operations #election-retrospective #political-campaign-automation #2028-cycle

{/* MAVEN NOTE: This post requires Eric's real post-election observations to be publishable. Structure and general framing are complete. Eric fills in the [ERIC FILL] sections below after results are in. Publish November 10-14. If TBCA case study was approved for publication, this is the place for it. Eric should draft his observations by November 8 so they can be formatted and filed quickly. */}

The 2026 general election is over. The results are in. The AI operations that campaigns ran through August, September, and October either worked or they didn't, and now we have actual data instead of projections.

Here's the honest retrospective.


What Worked

{/* ERIC FILL: 2-3 specific observations about campaign AI operations that produced results in the 2026 general election cycle. Can reference AutomatedTeams clients directly (with their permission) or use scenario-based framing ("the campaigns that ran this correctly..."). What specific operations produced visible results? What surprised you? Format: 2-3 short paragraphs, specific details, Eric's voice. No fabricated metrics. */}

[DRAFT PLACEHOLDER — Eric adds post-election observations here]


What Overpromised

{/* ERIC FILL: 1-2 honest observations about where campaign AI underperformed expectations, or where the vendor landscape oversold what the technology actually delivered. This section is important for credibility — being honest about limitations is what separates this from vendor marketing. Eric's actual assessment, not diplomatic hedging. Format: 2 paragraphs max. */}

[DRAFT PLACEHOLDER — Eric adds honest assessment of limitations here]


What Surprised

{/* ERIC FILL: 1-2 things that genuinely surprised you about how AI performed in campaigns this cycle — either positively or negatively. The most credible version of this section is something you didn't predict. If you didn't have any surprises, skip this section. Format: 1-2 paragraphs. */}

[DRAFT PLACEHOLDER — Eric adds genuine surprises here (optional)]


The Operational Pattern That Separated Winners from Runners-Up

Every election cycle reveals something about how the technology actually performs under real conditions. Social media followed this arc. Email followed this arc. The pattern is always the same: the campaigns that treated the technology as infrastructure outperformed the ones that treated it as a feature.

{/* ERIC FILL: 1 paragraph on the central operational insight from watching AI in general election campaigns. This is the thesis of the retrospective — the single most important lesson. Make it specific to what you actually observed in 2026. The generic version is not publishable. */}

[DRAFT PLACEHOLDER — Eric adds central operational insight here]


The Case Study That Tells the Story

{/* ERIC FILL CONDITIONAL: If TBCA case study was approved for publication, this is the section. The full story: 72 hours from zero to 35,000 people, the Slavet dropout response, the primary results. Use only verified numbers from the March 5, 2026 verification in the content bank. Eric reviews for accuracy before publish. If TBCA case is NOT approved, delete this section entirely. */}

[TBCA CASE STUDY SECTION — include only if Eric confirms publication approval]


What the Campaigns Already Thinking About 2028 Should Build

The post-election window (November through February) is the best time to build campaign infrastructure. No active race pressure. Cycles ahead to iterate. The campaigns already in conversations about 2028, 2027 special elections, or off-cycle races put themselves in a better position if the building starts now.

Here's what matters heading into the next cycle:

Voice models built early, not under pressure. The campaigns that had the best AI output this cycle trained their voice models before the primary, not during it. Starting the training process in November or December, before any race timeline exists, means the model has time to calibrate correctly before it's needed.

Approval workflows that are built for sprint conditions. The campaigns that ran their approval workflows correctly in October designed them for October conditions in August. The process decision (who approves what, on what timeline, without the candidate's direct involvement) has to be made before the sprint. Make it in January.

Data infrastructure from the current cycle. Donor response data, list engagement patterns, which sends worked and which didn't: this is more valuable in the next cycle than any tool. Before the data gets archived or lost, structure it in a way that's usable. The 2028 campaign that builds on 2026 data has a different starting point than one building from scratch.
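As a concrete illustration of that last point, here is a minimal sketch of structuring a cycle's send data before it gets archived. Every field name and the CSV layout are invented for illustration, not a real vendor export; the point is the shape: normalize raw counts into records with derived rates so the next cycle's analysis doesn't start from loose spreadsheets.

```python
import csv
import io
from dataclasses import dataclass

# Hypothetical raw export -- field names are assumptions, not a real
# ESP/CRM schema. Adapt to whatever your email tool actually exports.
RAW_EXPORT = """send_id,subject,sent_at,recipients,opens,clicks,donations,raised_usd
s-101,Final FEC deadline,2026-09-30,12000,3100,420,85,4250.00
s-102,Debate night recap,2026-10-08,11800,2500,310,40,1600.00
"""

@dataclass
class SendRecord:
    send_id: str
    subject: str
    sent_at: str
    recipients: int
    open_rate: float   # opens / recipients
    click_rate: float  # clicks / recipients
    donations: int
    raised_usd: float

def structure_sends(raw_csv: str) -> list[SendRecord]:
    """Normalize a raw send log into typed records with derived rates."""
    records = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        recipients = int(row["recipients"])
        records.append(SendRecord(
            send_id=row["send_id"],
            subject=row["subject"],
            sent_at=row["sent_at"],
            recipients=recipients,
            open_rate=round(int(row["opens"]) / recipients, 4),
            click_rate=round(int(row["clicks"]) / recipients, 4),
            donations=int(row["donations"]),
            raised_usd=float(row["raised_usd"]),
        ))
    return records

records = structure_sends(RAW_EXPORT)
for r in records:
    print(r.send_id, r.open_rate, r.click_rate)
```

Nothing sophisticated, and that's the point: a flat, typed record per send is enough for the 2028 team to query "what subject lines converted in the final month" without reverse-engineering a retired vendor dashboard.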


The Honest Assessment of Where Campaign AI Is in 2026

{/* ERIC FILL: 1-2 paragraphs on the honest state of AI in political campaigns as of November 2026. Not a prediction. Not marketing copy. Eric's actual assessment: what it can do well right now, what it still can't do, and what will change in the next cycle. This is the anchor of the piece and what people will quote. Make it honest and specific. */}

[DRAFT PLACEHOLDER — Eric adds honest 2026 assessment here]


The post-election period is a reset. The races are done. The operations that ran well are documented. The ones that didn't are instructive.

The 2027-2028 cycle is already starting for some campaigns. The window to build correctly (before there's a primary date on the calendar) is now.


Eric Linder is a former California State Assemblyman (2012-2016) and founder of AutomatedTeams, an AI operations consultancy for political campaigns and advocacy organizations.

Eric Linder

Former California Assemblyman. Now building AI operations for political campaigns.

ericlinder.com →

Ready to build an operation that never sleeps?