Mid-Cycle AI Audit: 5 Questions to Ask About Your Campaign's AI Operation Right Now

If you deployed AI in Q1 or Q2, six months of drift has probably accumulated. Voice models that haven't been updated. Approval workflows running slower than they should. Here are five questions to run against your operation before the sprint starts.

#campaign-operations #ai-campaign-operations #campaign-audit #campaign-fundraising #donor-follow-up

If you deployed AI in Q1 or Q2, you're not running the same operation you built in January. You're running what's left after six months of drift.

Voice models that haven't been updated since the primary. Approval workflows that have accumulated informal workarounds nobody documented. Email programs running at whatever frequency bandwidth allows, not the frequency the program needs. The "we'll fix that later" items that are still unfixed because the primary got in the way.

Drift is normal. Every campaign operation accumulates it. The question is whether you catch it before the sprint or during it.

Here are five questions to run against your AI operation before August.


1. Is the voice model still calibrated correctly after six months?

This is the most common gap in deployed AI campaigns and the hardest to notice from inside the operation, because the drift is gradual.

The voice model you trained in January or February reflected the candidate's communication style at that moment. The stump speech has evolved since then. The primary campaign sharpened certain issue positions and softened others. The donor conversation has a different rhythm in a general election than it did in a contested primary.

A voice model running on six-month-old calibration is generating donor emails that sound slightly off. Donors who've been in the relationship long enough to recognize the candidate's real voice sometimes notice. The drafts require more editing than they did when the model was fresh, which slows the approval cycle, which slows follow-up velocity.

The test: Pull five recent AI-generated drafts and have someone who knows the candidate well, not the person who reviews them every day, read them cold. Do they sound like the candidate talks now, or like the candidate talked at the start of the year? If there's a meaningful difference, recalibration is due.


2. Is the approval workflow actually being used the way it was designed?

Approval workflows accumulate informal workarounds. The candidate was traveling for two weeks in April and the finance director got in the habit of approving everything herself. Now that workflow step that was supposed to have two eyes on it only has one. Or the opposite: a second approval step got added during a sensitive period and nobody removed it, so every email is taking two days to get through review instead of four hours.

Informal changes to the approval workflow are invisible unless you check for them. And they matter: a workflow that's running slower than it should is the root cause of follow-up velocity problems more often than anything in the AI layer itself.

The test: Document how a standard donor follow-up email actually gets approved right now: who touches it, in what order, and how long each step takes. Compare that to what the designed workflow says. Any gap between designed and actual is something to fix before August.
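If your email platform or project tracker logs a timestamp for each handoff, a short script can surface the gap between designed and actual. A minimal sketch, assuming a hypothetical approval_log.csv export with one row per step per email; the column names and the designed-step targets are illustrative, not from any specific tool:

```python
import csv
from collections import defaultdict
from datetime import datetime

# Hypothetical designed workflow: step name -> target turnaround in hours.
# Replace with the steps and targets your campaign actually designed.
DESIGNED_HOURS = {"draft_review": 2, "finance_signoff": 1, "final_approval": 1}

actual = defaultdict(list)

# Assumed columns: email_id, step, started_at, completed_at (ISO timestamps)
with open("approval_log.csv") as f:
    for row in csv.DictReader(f):
        start = datetime.fromisoformat(row["started_at"])
        done = datetime.fromisoformat(row["completed_at"])
        actual[row["step"]].append((done - start).total_seconds() / 3600)

for step, target in DESIGNED_HOURS.items():
    times = actual.get(step)
    if not times:
        print(f"{step}: no log entries -- is this step being skipped?")
        continue
    avg = sum(times) / len(times)
    flag = "  <- slower than designed" if avg > target else ""
    print(f"{step}: avg {avg:.1f}h vs designed {target}h{flag}")
```

A step with no log entries is the "one set of eyes instead of two" problem; a step averaging well above its target is the hidden bottleneck.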


3. Has the email program frequency actually improved?

The business case for AI-assisted fundraising email is that AI handling the drafting lets the email program run at the frequency the campaign needs, not the frequency staff bandwidth allows.

After six months, the question is whether that actually happened. Is the program running at 2x per week? Or is it still running at 1x because the approval cycle is slower than expected, or because the content feed hasn't been keeping up with demand, or because someone decided to pull back frequency after a low-performing send in March and never ramped it back up?

The test: Count the emails that went to the house file in May and in June. Is the cadence at 2x per week or above? If not, identify specifically why — approval bottleneck, draft quality, content gap, or deliberate choice — and decide whether it's fixable before the sprint.
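If your email platform can export send dates, the count is a few lines of scripting. A minimal sketch, assuming a hypothetical sends.csv export with one row per house-file send and a sent_at column in YYYY-MM-DD format (adjust to whatever your platform actually exports):

```python
import csv
from collections import Counter
from datetime import date

sends_per_week = Counter()

# Assumed column: sent_at (date of each house-file send, YYYY-MM-DD)
with open("sends.csv") as f:
    for row in csv.DictReader(f):
        d = date.fromisoformat(row["sent_at"])
        if d.month in (5, 6):  # May and June only
            sends_per_week[d.isocalendar()[1]] += 1  # bucket by ISO week

for week, count in sorted(sends_per_week.items()):
    flag = "" if count >= 2 else "  <- below 2x/week"
    print(f"week {week}: {count} send(s){flag}")
```

A week-by-week view beats a monthly total here, because a month that averages 2x per week can hide a two-week gap around a bad send.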


4. Is donor follow-up velocity above 90%?

The target is 24-hour follow-up velocity for at least 90% of contacts from each call session. That means: 9 out of 10 contacts from a call session receive a personalized follow-up email within 24 hours of the session ending.

Most campaigns that deployed follow-up AI in Q1 started strong on this metric and saw it drift as the primary got intense. The velocity drops when the approval workflow gets overloaded, when the candidate's personal review is required on standard emails, or when follow-up triggers aren't running on weekends.

The test: Review the last five call sessions. Calculate actual follow-up velocity: what percentage of contacts received a follow-up within 24 hours, and what was the average time from session end to send? If velocity is below 90%, something in the trigger or approval path needs adjustment before August.
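The arithmetic is simple enough to run against a CRM export. A minimal sketch, assuming a hypothetical contacts.csv with one row per call-session contact, session_end and followup_sent_at as ISO timestamps, and followup_sent_at left blank when no email went out (all column names are illustrative):

```python
import csv
from datetime import datetime

lags_hours = []
total_contacts = 0

# Assumed columns: contact_id, session_end, followup_sent_at (ISO timestamps)
with open("contacts.csv") as f:
    for row in csv.DictReader(f):
        total_contacts += 1
        if not row["followup_sent_at"]:
            continue  # this contact never received a follow-up
        end = datetime.fromisoformat(row["session_end"])
        sent = datetime.fromisoformat(row["followup_sent_at"])
        lags_hours.append((sent - end).total_seconds() / 3600)

within_24h = sum(1 for lag in lags_hours if lag <= 24)
velocity = 100 * within_24h / total_contacts if total_contacts else 0.0
avg_lag = sum(lags_hours) / len(lags_hours) if lags_hours else 0.0

print(f"24-hour follow-up velocity: {velocity:.0f}% ({within_24h}/{total_contacts})")
print(f"average session-end-to-send time: {avg_lag:.1f}h")
```

Note that contacts with no follow-up at all still count against the denominator; a campaign that sends fast but only to half its contacts is failing the metric, not passing it.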


5. Is the system keeping up with changes in the race?

The general election race is different from the primary. Different opponents, different issue landscape, different media market. Your AI operation was configured for the primary. The question is whether it's been updated for the general.

This applies specifically to: the news monitoring source list (is it tracking the right opponents and issues?), the content feed (is it generating material relevant to the current race or still producing primary-era content?), and the voice model (addressed above).

Campaigns that win the general election sprint are running general election operations, not primary operations with a general election date on the calendar.

The test: Review one week of news monitoring output. Is what's being surfaced relevant to the current race? Review five recent AI-generated emails. Do they reflect the current race dynamics or the issues that were dominant in April? Any mismatch is a configuration update that should happen before August.



What Good Looks Like

A campaign that passes this audit looks like this: the voice model was updated within the last 60 days and the drafts require light editing. The approval workflow runs on a predictable daily schedule. The email program is at 2x per week or above with active segmentation. Follow-up velocity is above 90% for the last five sessions. The news feed is surfacing general election-relevant content.

That's the operation that goes into the final sprint. It's not complicated. But getting there requires catching the drift before it compounds through August and September.

The campaigns that find these problems in July fix them before they matter. The campaigns that find them in October are fixing them during the sprint.


Eric Linder is a former California State Assemblyman (2012-2016) and founder of AutomatedTeams, an AI operations consultancy for political campaigns and advocacy organizations.
