What the 2026 Primary Season Revealed About AI in Campaigns

The primary season created a natural experiment: some campaigns deployed AI operations well, some deployed them poorly, and some didn't deploy at all. Here's what actually showed up in how campaigns ran.

#campaign-operations #ai-campaign-tools #primary-season #political-campaign-automation

{/* MAVEN NOTE: This post requires Eric's real primary season observations to be publishable. Structure and framing are complete. Eric should fill in the [ERIC FILL] sections below with real, specific observations — no names required, scenario-based is fine. The post goes out June 9. Eric should review and fill by June 6. */}

The 2026 primary season is over. Depending on which races you were watching, it was an instructive six months.

Some campaigns ran AI operations and ran them well. Some deployed systems that were poorly configured and got limited value from them. Some didn't deploy anything and relied on the same manual operations they've always used. All three groups are now in general election mode, and the gap between them is visible.

Here's what the primary season actually revealed about AI in campaigns: not the vendor narrative, not the predictions, but what showed up when the systems ran under real campaign conditions.


What Worked: The Operations That Ran Well

{/* ERIC FILL: 2-3 real observations from campaigns that used AI correctly during the primary. No client names needed — scenario-based is fine. Examples of what to address: - What specific operations produced visible results? (follow-up velocity, email cadence, event coordination) - What did campaigns that got it right have in common? (workflow discipline, voice model quality, approval step) - What surprised you about how the technology performed under real campaign pressure? Format: 2-3 short paragraphs, Eric's voice, specific details over general claims. */}

[DRAFT PLACEHOLDER — Eric adds primary season observations here]


What Didn't Work: The Mistakes That Showed Up

{/* ERIC FILL: 2-3 real observations from campaigns that deployed AI poorly. The most valuable version of this section: specific failure modes you observed, not generic warnings. Examples: - Campaigns that skipped the approval step and had quality problems - Campaigns that deployed without voice model training and got generic-sounding output - Campaigns that configured once and never updated — voice drift, stale monitoring - Campaigns that used AI for the wrong operations (the Swerve-for-fundraising mistake) No names. Scenarios. "One campaign I was watching..." is fine. */}

[DRAFT PLACEHOLDER — Eric adds primary season observations here]


The Pattern That Kept Showing Up

Every primary season reveals something about the state of campaign technology. In the first presidential cycle after social media went mainstream, campaigns that treated it as a broadcast channel lost to the ones that used it for organizing. The pattern repeats.

This primary season's pattern:

{/* ERIC FILL: 1-2 paragraphs on the overarching lesson from watching AI in primary campaigns. What's the single most important thing the primary season revealed about how campaigns should be thinking about AI? This is the post's thesis — the insight that earns the link and gets remembered. Make it specific. The generic version is not publishable. */}

[DRAFT PLACEHOLDER — Eric adds the central observation here]


What the General Election Campaigns Should Learn

The primary season created a natural experiment. The general election campaigns that can read the results correctly have an advantage over the ones that are building their AI strategy from first principles without watching what just happened.

The most important things to take from the primary:

Workflow discipline matters more than tool selection. The campaigns that got the most from AI weren't running the most sophisticated tools. They were running simpler tools with tighter approval processes. The technology works when the human workflow around it is built correctly.

Configuration lag is real and costly. The campaigns that configured AI operations in January were running a meaningfully deeper operation by May than the ones that configured in March. The difference isn't just time; it's the calibration depth that accumulates from real campaign data. General election campaigns that start now will be ahead of the ones that start in August.

The voice model is the foundation. Everything in a campaign AI operation runs on the candidate's trained voice. Campaigns that invested in voice model quality early (real samples, regular updates, calibration checks) got better output from every other part of the system. The ones that rushed the voice model setup struggled throughout.


For the General Election Cycle

The primary season is done. Races are set. General election campaigns have five months to build, configure, and run the operations that determine what their program looks like in October.

The campaigns that are already operational from the primary have an advantage. The ones starting now can close the gap. The ones that wait until August can't.

{/* ERIC FILL: Optional — if there's a specific thing you're watching heading into the general that you want to flag for readers, add it here. One paragraph. Could be about a specific race dynamic, a technology development, or something you're testing in Q3. Optional. */}


Eric Linder is a former California State Assemblyman (2012-2016) and founder of AutomatedTeams, an AI operations consultancy for political campaigns and advocacy organizations.
