Chinese AI Models Are Free and Powerful. Should Your Campaign Use Them?

DeepSeek and other Chinese AI models are genuinely capable and cost nothing to run. Campaign managers are starting to ask whether they should use them. Here's the honest answer.

#ai-tools #campaign-security #political-technology #deepseek

DeepSeek dropped in January 2025 and political Twitter had a brief meltdown about it. A Chinese AI model performing at or near GPT-4 level, open-source, free to run, no API costs. Campaign managers started asking almost immediately: should we be using this?

It's a fair question. AI API costs are real. DeepSeek and its successors are legitimately capable. And campaigns are always looking for ways to do more with less.

Here's the honest answer, without the tech panic and without the dismissal.


What These Models Actually Are

DeepSeek R1 and its successors are large language models developed by a Chinese AI company. They're open-source — meaning anyone can download the weights and run them on their own hardware. They perform comparably to leading American models on most standard benchmarks.

The "free" part is true but requires clarification. The model itself is free. Running it requires compute: either renting cloud servers or owning hardware with sufficient GPU capacity. For a small operation running occasional drafts, there are APIs that offer cheap or free access. For running it at campaign scale (thousands of emails per week), you'd need to either pay for cloud compute or invest in hardware.

So the actual cost comparison isn't zero versus $X. It's "what does it cost to run this ourselves" versus "what does a subscription to an American AI provider cost." For most campaign operations, the American providers are simpler and the cost difference is not significant.
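That comparison is simple arithmetic. The figures below are illustrative placeholders, not current quotes from any provider, but they show why self-hosting rarely wins at typical campaign volumes:

```python
# All figures are illustrative assumptions, not current prices.
emails_per_week = 2_000
tokens_per_email = 1_500                 # prompt + completion, rough guess
api_price_per_million_tokens = 5.00      # hypothetical commercial API rate, USD

weekly_api_cost = (
    emails_per_week * tokens_per_email / 1_000_000 * api_price_per_million_tokens
)

gpu_rental_per_hour = 2.00               # hypothetical cloud GPU rate, USD
hours_per_week = 40                      # server only runs during drafting hours
weekly_self_host_cost = gpu_rental_per_hour * hours_per_week

print(f"API:       ${weekly_api_cost:.2f}/week")    # $15.00/week
print(f"Self-host: ${weekly_self_host_cost:.2f}/week")  # $80.00/week
```

Even if the placeholder prices are off by a factor of two in either direction, the API bill at this volume stays in "rounding error" territory, while self-hosting adds a fixed cost plus the staff time to keep a server running.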


The Legitimate Concerns

There are two real concerns with using Chinese AI models for political campaign operations, and neither of them is hysteria.

Data sovereignty. When you use a commercial API — including Chinese AI APIs — your data travels to their servers. For campaigns, that data includes donor names, email drafts, call notes, strategy documents, and anything else you feed into the system. Whether data from American political campaigns is interesting to Chinese intelligence services is not a hypothetical question. It's an established fact of how state-sponsored intelligence operates.

Using the open-source weights and running the model locally eliminates this concern. If you run DeepSeek on your own server, nothing leaves your infrastructure. The concern is API usage specifically, not the model itself.

Alignment and subtle influence. This one is harder to measure and easier to dismiss as paranoia, but it's worth understanding. AI models are trained with specific values and constraints embedded into them. American models have their own biases; that's well-documented. Chinese models are trained in a regulatory environment with explicit restrictions on political content that the Chinese government finds problematic.

For generating fundraising emails about a water district fight in a California congressional race, this probably doesn't matter. The model doesn't care about your race. But for content that touches issues the Chinese government monitors — trade policy, Taiwan, certain human rights issues — the model's training constraints may produce outputs that are subtly different from what you'd get from an American model. Not fabrication; just different framing at the margin.

For most campaign operations, this is not a significant concern. For a congressional candidate running on trade policy or national security, it's worth knowing.



The Part Nobody Talks About: American Providers Aren't Risk-Free Either

Here's where the conversation usually goes off the rails. People respond to concerns about Chinese AI with "just use OpenAI" as if American providers are a neutral option.

They're not. OpenAI's terms of service allow them to use your data to improve their models (though they offer an opt-out for API customers). Microsoft, which owns a significant stake in OpenAI, is subject to government data requests. Google's AI products are subject to their privacy terms, which are not written for the security requirements of political campaigns.

The honest answer is that any AI model where your data leaves your infrastructure carries some version of the data sovereignty question. The Chinese AI version is more vivid because the counterparty is an explicitly adversarial foreign government. But campaigns should be asking the data question of every AI provider they use, not just the Chinese ones.


What Actually Matters for Campaign AI Security

The data security question that campaigns should be focused on isn't "Chinese vs. American model." It's: for what categories of information are you willing to use external APIs at all?

A practical framework:

Safe to send through any API: Public-facing content drafts (fundraising emails, social posts, press releases). The content will be public anyway; the draft stage doesn't add meaningful exposure.

Think before sending: Donor names combined with giving history, call notes with personal details, internal strategy documents, opposition research. This data has value and shouldn't leave your control by default.

Don't send through any external API: Major donor lists with full contact information and giving history, internal polling data, vulnerability assessments, anything you'd be embarrassed to see in a news story about a campaign data breach.
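The three tiers above can be enforced mechanically before anything reaches an external API. The sketch below is a minimal illustration, not a vetted PII filter; the keywords and regex patterns are assumptions that a real campaign would replace with its own data policy:

```python
import re

# Illustrative pre-send gate. The patterns and keywords are assumptions,
# not an exhaustive detector for sensitive campaign data.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # SSN-like number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # card-number-like digit run
]
BLOCKED_KEYWORDS = {"donor list", "polling data", "vulnerability assessment"}

def safe_to_send(text: str) -> bool:
    """Return False if a draft looks like it contains restricted data."""
    lowered = text.lower()
    if any(kw in lowered for kw in BLOCKED_KEYWORDS):
        return False
    return not any(p.search(text) for p in BLOCKED_PATTERNS)
```

A gate like this sits in front of every API call, regardless of which provider is on the other end; the point is that the check happens on your infrastructure, before the data leaves it.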

The model's country of origin matters less than this categorization. A campaign using an American AI provider to process its entire donor database through external APIs is taking on more risk than a campaign using a Chinese model to draft a press release on its own infrastructure.


The Practical Answer for Most Campaigns

For most campaigns, the answer is: use the American providers (OpenAI, Anthropic, Google) through reputable APIs, apply basic data hygiene about what you send through external systems, and don't feed your donor database into any AI tool you don't control.

The cost savings from running Chinese models on your own infrastructure are not worth the operational complexity for a campaign without a technical team to manage it. The API bill for a normal campaign AI operation (donor follow-up drafts, fundraising email generation) is rarely the line item that makes or breaks the budget.

If you have a technical team, are running at scale, and want to control your own AI infrastructure: running open-source models locally is a legitimate option and the security profile is actually better than using external APIs. The model origin becomes irrelevant when nothing leaves your servers.

But "it's free and it works" isn't sufficient reason to route campaign data through a foreign AI provider. The question isn't whether the model is capable. It's where your data goes.


Eric Linder is a former California State Assemblyman (2012-2016) and founder of AutomatedTeams, an AI operations consultancy for political campaigns and advocacy organizations.
