What Campaign Managers Get Wrong About AI (From Someone Who's Built It)
I've been building technology for political campaigns since before most of the vendors pitching you AI were in high school. I've seen every version of this conversation: the initial skepticism, the overreaction, the overreach, the inevitable disappointment when the technology doesn't do what the pitch promised.
AI in campaigns is different from previous technology transitions in one important way: the gap between what it can actually do and what vendors claim it can do is wider than it's ever been. That gap creates predictable mistakes. Most of them stem from having the wrong mental model before you even start.
Here are the five I see most often.
Mistake 1: Treating AI as a Staffing Replacement
The pitch is usually framed as "AI handles the work so you don't need as many people." Campaign managers hear that and start doing the math: if AI writes the emails, maybe I don't need a full-time comms director.
That's the wrong math.
AI reduces the time humans spend on specific tasks. It doesn't eliminate the judgment calls that make campaigns win or lose. The comms director who was spending 4 hours per email now spends 2. She uses the other 2 hours on the strategy decisions that actually matter — what to say, when to say it, when to stay quiet, how to frame an issue in a way that lands for your specific district.
Campaigns that deploy AI to replace headcount end up with less capacity for judgment at exactly the moments when they need it most. You haven't freed up human capital. You've just removed the humans and hoped the system covers for them.
The right mental model: AI multiplies the people you have. It doesn't replace them.
Mistake 2: Thinking the Approval Step Is Optional
"If I have to review every email, what am I actually automating?"
I hear this from campaign managers who want the system to run without oversight. Send the follow-ups automatically. Let the emails go out without someone checking each one. Trust the AI.
This is the mistake that creates scandals.
AI systems make errors. They misread the tone of a call note. They write an ask amount that doesn't match the donor's history. They occasionally produce output that is technically accurate but politically tone-deaf. None of these are catastrophic in isolation. They become catastrophic when they go out to a donor before anyone catches them.
More fundamentally: your campaign is legally and reputationally accountable for every communication that leaves your operation. The AI generates; a human approves. That's not a limitation of the technology. It's the only safe architecture.
The approval step, done well, takes 30-60 seconds per email. For a 30-email batch, that's 15 to 30 minutes. That's the actual cost. It's worth it.
Mistake 3: Deploying Before the Voice Is Trained
This one is subtle and expensive. Campaign managers buy an AI follow-up system, configure the trigger workflow, and start using it in the first week without investing in the voice model.
The output sounds like campaign fundraising language. Generic. Professional. Fine.
Fine is not the same as the candidate's voice. And donors notice.
Your major donors have been getting emails from your candidate for years. They know how Eric writes. They know how Amy writes. When the AI produces "Dear Friend, I wanted to reach out personally to express my gratitude for your continued support" — they feel the difference even if they can't articulate it.
Voice training takes time and real examples. It requires feeding the system emails the candidate has actually written, not templates. Speeches, if the writing style matches the email register. Text messages, if the campaign's communication style is more casual. The training period is usually 1-2 weeks of real usage with corrections and calibration.
Campaigns that skip this step get passable output. Campaigns that invest in it get output that sounds like it came from the candidate. That gap is the difference between AI that donors don't notice and AI that donors appreciate.
Mistake 4: Expecting the System to Know What It Doesn't Know
Campaign managers often expect AI to fill in information it doesn't have. "Just draft the follow-up" — without providing what was discussed in the call. "Write the fundraising email about the water bill" — without explaining the campaign's position on it or why it matters to their district.
The AI produces something. It always produces something. And because it's grammatically correct and sounds confident, it's easy to assume it's right.
It may not be.
An AI that doesn't have your candidate's position on an issue will produce a plausible-sounding position. An AI that doesn't have call notes will produce a follow-up that sounds personal but references nothing real. An AI that doesn't know your district will produce messaging that could apply to any race in the country.
The quality of the output is bounded by the quality of the input. "Garbage in, garbage out" is as old as computing itself, and it still applies.
The practical rule: any time you want the AI to produce content that requires specific knowledge, give it that knowledge explicitly. Don't assume it knows the candidate's position. Don't assume it remembers what was discussed in the call. Tell it.
Mistake 5: Buying a Product When the Field Is Moving Every 90 Days
This is the one that costs the most money and generates the most frustration.
Campaign managers evaluate AI tools, sign a contract, and assume the evaluation is done. They have a product. It does what it does.
What they've actually bought is a snapshot of what AI could do at the moment the vendor built their system. And AI improves every 90 days. Not incrementally — sometimes fundamentally. The model that was state-of-the-art when the vendor built their platform may be three generations behind by the time you're in the general election.
This is why the consulting model exists. Not as a luxury, but as the only architecture that makes sense when the underlying technology changes faster than product development cycles.
You don't want a campaign AI product that was locked in 18 months ago. You want an operation that evolves when the technology evolves. Those are different things, and only one of them is available as a fixed-price SaaS tool.
The campaigns that run the best AI operations aren't the ones with the most sophisticated software licenses. They're the ones that have someone who understands the technology and keeps the operation current. The tools change. The expertise compounds.
None of these mistakes are new. They're the same mistakes campaigns made with CRMs, with email platforms, with digital advertising. The technology changes; the human tendency to misuse it doesn't.
The campaigns that run AI well are the ones that go in with realistic expectations: AI handles the volume so humans can focus on the judgment. The approval step is non-negotiable. Voice training takes time. The system only knows what you tell it. And the field is moving whether or not your vendor is keeping up.
That's the mental model. The tools work when you work them correctly.
Eric Linder is a former California State Assemblyman (2012-2016) and founder of AutomatedTeams, an AI operations consultancy for political campaigns and advocacy organizations.
