Most SQR automation writing focuses on classification. How fast the AI can categorize queries. How many it can process per hour. How clever the prompt is.
Classification is the glamour work. It’s fast, it’s visible, and it produces a clean list at the end.
Upload is the part nobody talks about. And upload is where every SQR pipeline I’ve watched other people build quietly falls apart.
I think about search query management in three steps: download, processing, upload. The download pulls raw queries from the Google Ads API. The processing turns them into an approved list of negatives. The upload writes those negatives back into each account so they actually block traffic.
Skip the upload, and the pipeline is a research tool. A very nice research tool. But the queries still run, the wasted spend still happens, and the spreadsheet of “approved negatives” sits there doing nothing.
The problem is that uploading thousands of negatives across hundreds of accounts is extremely painful. Every account has its own shared negative list. Every negative has to land in the right list with the right match type. And the Google Ads UI is designed for one account at a time — which is fine for a single campaign, and a nightmare when you’re staring down 300 accounts and a list of 3,000 queries to add.
How do you finish the last mile of an SQR pipeline without spending a week copy-pasting negatives account by account?
That question is what the sqr-upload skill exists to answer. It’s open-sourced as part of my PPC AI Skills repo, and it’s the skill that turned my search query work from “glamorous classification, then manual drudgery” into a genuinely automated pipeline that runs overnight with no babysitting.
Here’s what it does, why the upload step matters more than the classification step, and why most SQR automation stops one step short of being useful.
Get the SQR Upload skill → github.com/fourteenwm/ppc-ai-skills/sqr-upload
Free and open-sourced. Drop the SKILL.md into any Claude Code project in under a minute. No configuration required.
The Core Problem: Classification Is Research. Upload Is the Work.
The classification step is fascinating. Ask a language model to judge whether a query belongs to the account’s offering, watch it produce structured output, measure its agreement rate against a human reviewer. It’s fast, it’s interesting, it feels like progress.
The upload step is boring. You have a list of approved negatives and a list of accounts. You open account one, navigate to the shared negative keyword list, paste the keywords, choose the match type, save. You open account two. You do it again. Three hundred times.
If the classification is the only step you automate, the pipeline produces a spreadsheet. A spreadsheet is not a negative keyword. Google Ads doesn’t consume spreadsheets. The spreadsheet is only valuable after someone has written its contents into the right shared lists in the right accounts with the right match type.
For a solo operator or a small agency managing 10 accounts, manual upload is annoying but survivable. For anyone managing 50+ accounts, it’s the single biggest reason the SQR workflow ends up neglected. The classification piece gets built, gets demoed, gets praised — and then nobody runs it regularly because the downstream upload is too painful to sustain.
Automating the upload is what makes the whole pipeline worth building.
Rule 1: The Sheet Is the Queue
The skill’s architecture uses a Google Sheet as the persistent queue between classification and upload. That sounds mundane. It’s the most important design choice in the whole pipeline.
The queue sheet has a simple schema:
| Column | Purpose |
|---|---|
| CID | Full-format customer ID (e.g., 123-456-7890) |
| Query | Search term to add as a negative |
| Neg List ID | Shared negative keyword list ID |
| Trunc CID | Numeric customer ID |
| Uploaded? | Empty = pending, “X” = done |
Classification writes rows into this sheet. Upload reads rows from this sheet. Approval happens in between — a human can look at the sheet, remove anything that shouldn’t get uploaded, and leave the rest alone.
Using a sheet instead of a direct API hand-off means three things work that wouldn’t otherwise. Approval is async — the person approving doesn’t have to be at the keyboard when classification finishes. State is persistent — if the upload fails halfway through, the sheet remembers what was done. And the workflow is inspectable — anyone with sheet access can see what’s pending and what’s done.
This pattern shows up in most of my serious automation. The sheet is not the output. The sheet is the queue.
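The queue contract is small enough to sketch. This is an illustrative Python sketch using in-memory dicts in place of the real Google Sheet; `append_classified` and `pending` are hypothetical helper names, not the skill's actual API:

```python
QUEUE_COLUMNS = ["CID", "Query", "Neg List ID", "Trunc CID", "Uploaded?"]

def append_classified(rows, cid, query, neg_list_id):
    """Classification writes a pending row: Uploaded? starts empty."""
    rows.append({
        "CID": cid,
        "Query": query,
        "Neg List ID": neg_list_id,
        "Trunc CID": cid.replace("-", ""),  # numeric form of the same ID
        "Uploaded?": "",
    })

def pending(rows):
    """Upload reads only rows that have not been stamped."""
    return [r for r in rows if not r["Uploaded?"].strip()]

rows = []
append_classified(rows, "123-456-7890", "free download", "987654")
append_classified(rows, "123-456-7890", "jobs near me", "987654")

# Human approval step: anything deleted from the sheet never uploads.
rows = [r for r in rows if r["Query"] != "jobs near me"]

print([r["Query"] for r in pending(rows)])  # ['free download']
```

The point of the sketch is the division of labor: classification only appends, upload only reads unstamped rows, and the human edits the rows in between without touching either script.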
Rule 2: Batch Mutations by Account, Not Per Keyword
Three thousand pending keywords across one hundred accounts is not three thousand API calls. It’s one hundred — one per account, batched.
The skill groups pending keywords by Trunc CID before constructing mutations. Each account gets a single SharedCriterionService mutate call that adds all its pending negatives in one operation. This matters for three reasons.
First, rate limits. Google Ads API operations are rate-limited per account and per credential. A per-keyword approach burns through the rate limit fast. A per-account approach stays well under it even with thousands of total keywords.
Second, atomicity. If the mutation for an account fails, all its keywords fail together. That’s actually a feature — it means the sheet’s “Uploaded” stamps stay consistent with reality. No half-uploaded accounts with unclear state.
Third, speed. The network round-trip dominates per-call latency. Batching turns a multi-hour operation into something that finishes in minutes.
The batching is invisible in the final result — the sheet sees the same “X” stamps either way. But the difference between a pipeline that runs overnight unattended and a pipeline that times out halfway through is mostly this rule.
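A minimal sketch of the grouping step, using plain dicts for rows. In the real skill each batch would become a single SharedCriterionService mutate request; the request construction is elided here and the `operations` shape is illustrative, not the API's actual payload:

```python
from collections import defaultdict

def batch_by_account(pending_rows):
    """Group pending rows so each account gets exactly one mutate call."""
    batches = defaultdict(list)
    for row in pending_rows:
        batches[row["Trunc CID"]].append(row)
    return batches

pending_rows = [
    {"Trunc CID": "1234567890", "Query": "free download", "Neg List ID": "111"},
    {"Trunc CID": "1234567890", "Query": "jobs near me",  "Neg List ID": "111"},
    {"Trunc CID": "9876543210", "Query": "diy tutorial",  "Neg List ID": "222"},
]

batches = batch_by_account(pending_rows)
print(len(batches))  # 2 -- three keywords become two calls, one per account

for cid, rows in batches.items():
    # Placeholder for one batched mutate request carrying every
    # operation for this account in a single round-trip.
    operations = [{"negative": r["Query"], "list": r["Neg List ID"]} for r in rows]
```

Scaled up, the same grouping turns 3,000 pending keywords across 100 accounts into 100 requests instead of 3,000.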
Rule 3: Every Upload Goes Through Mutation Safety
The upload is a mutation. It writes to production accounts. So it goes through mutation-safety — the two-step approval pattern I apply to every write operation across my system.
The flow is always:
- Dry run. Run the script with no approval code. It reads the sheet, groups pending keywords, and prints exactly what will change: account name, CID, number of negatives, the shared list ID each one will be written to. It does not modify anything.
- Human review. I look at the preview. If the counts look right, if the accounts look right, if the sample keywords look right, I grab the approval code the dry run printed.
- Execute. Run the script again with the approval code. It validates the code, then executes the batched mutations account by account.
The approval code is the forcing function. It’s a short random string the dry run emits; it’s valid for one execution; and it’s never something the AI can supply on its own. If I’m not there to type it, nothing happens.
This matters at scale because the failure mode of an unattended upload is not “nothing happens.” The failure mode is “something wrong happens, to a lot of accounts, while nobody is watching.” The approval code means the uploading can be automated, but the approving can’t.
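The two-step flow can be sketched like this. Persistence of the code is simplified to a module-level variable, and `dry_run`/`execute` are illustrative names, not the skill's real entry points:

```python
import secrets

_PENDING_CODE = {"value": None}  # stand-in for wherever the code is persisted

def dry_run(batches):
    """Read-only preview of every change, plus a fresh one-shot approval code."""
    for cid, keywords in batches.items():
        print(f"{cid}: {len(keywords)} negatives")
    code = secrets.token_hex(4)
    _PENDING_CODE["value"] = code
    return code  # the human reads this from the printed preview

def execute(batches, supplied_code):
    """Mutations run only if the human typed the code the dry run printed."""
    if supplied_code is None or supplied_code != _PENDING_CODE["value"]:
        raise PermissionError("Invalid or missing approval code; nothing mutated.")
    _PENDING_CODE["value"] = None  # single use: a second execute needs a new dry run
    total = sum(len(k) for k in batches.values())
    return f"uploaded {total} negatives"

batches = {"1234567890": ["free download", "jobs near me"]}
code = dry_run(batches)
print(execute(batches, code))  # succeeds once
# Calling execute(batches, code) again would raise: the code was consumed.
```

The structure guarantees the property the prose describes: the script can do everything except approve itself.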
Rule 4: Idempotent Re-runs via the “Uploaded” Stamp
Every successful upload stamps “X” in the Uploaded? column of the sheet. The next run of the script filters out any row where that stamp is present.
That sounds trivial. It is the reason I can run this skill without worrying.
Automation that isn’t idempotent produces a very specific kind of stress. You finish a run, you don’t remember exactly which accounts got written to, and when something looks off later you don’t know if the script touched it or not. You end up scared to re-run. Scared to schedule. Scared to hand the tool to anyone else because the state is in someone’s head.
The stamp fixes this. Every row knows whether it’s been uploaded. The script is safe to run at any time — if nothing’s pending, nothing happens. If half a batch uploaded before something crashed, re-running picks up exactly where it left off. If someone approved new queries today, re-running grabs only the new ones.
This is what “runs overnight with no babysitting” actually requires. Not more reliability. Just the discipline to make every step idempotent.
Rule 5: PHRASE Match Is a Deliberate Choice
The skill hard-codes negatives to PHRASE match, not EXACT and not BROAD.
This is a principled decision, not a default. EXACT match would miss common variations of the negative query, which means wasted spend keeps happening on near-duplicates. BROAD match would over-block traffic that looks similar but actually converts. PHRASE sits in the middle: it blocks the query and its variations that contain the same phrase, without expanding too aggressively.
For a large portfolio where the cost of over-blocking is real conversions lost and the cost of under-blocking is real spend wasted, PHRASE match is the sweet spot. The skill makes that choice explicit rather than letting the match type default quietly to whatever the API picks.
If you want a different match type for a specific account, the skill’s script is easy to modify. But the default has been chosen, documented, and applied consistently across every account — which is more than I can say for most of the manually-uploaded negative lists I inherit on client handoffs.
What This Has Actually Prevented
Since adding sqr-upload as the final step of my SQR pipeline:
- Manual copy-paste sessions across hundreds of accounts. Before this skill, “add approved negatives to the portfolio” meant a day of tabbing between accounts. Now it means approving the sheet, running the dry run, typing the approval code, and letting the script finish while I’m asleep.
- Half-completed uploads with unclear state. The “Uploaded” stamp turned “did I already run this?” into a question with a definitive answer. Re-running is safe. Restarting after a crash is safe. Handing the tool to someone else is safe.
- Missed negatives from workflow fatigue. The biggest risk of a manual upload workflow is that nobody actually does it. Classification happens, approvals happen, and then the list sits in a spreadsheet for a month because nobody has six hours free to post them account by account. Automating the upload turns the pipeline from “we should really run that” into “it ran last night.”
- Inconsistent match-type decisions. Every negative across every account gets the same match type, applied the same way, by the same script. No account gets PHRASE while another accidentally gets BROAD because a team member picked the wrong dropdown.
None of these are dramatic saves. That’s the point. Most of the value of automation is small consistent wins that compound across hundreds of accounts over dozens of runs.
Get the SQR Upload Skill
Install in 30 seconds
Copy the SKILL.md file into your Claude Code project:

```
mkdir -p .claude/skills/sqr-upload
curl -o .claude/skills/sqr-upload/SKILL.md \
  https://raw.githubusercontent.com/fourteenwm/ppc-ai-skills/main/sqr-upload/SKILL.md
```

Claude Code auto-loads the skill when any agent needs to upload negatives to a Google Ads shared list. No configuration required beyond a sheet with the correct column schema. Works with any AI harness that respects skill files.
Free. Open-sourced. MIT licensed.
The full repo has nine other PPC AI skills I use in production every day — SQR classifier, mutation-safety, PMax asset automation, impression share diagnostics, and more. All at github.com/fourteenwm/ppc-ai-skills.
The Bigger Point
Most AI-for-PPC content obsesses over the classification problem. That’s the part that benchmarks well, demos well, and makes for good screenshots. It’s also the easy part.
The hard part of any SQR workflow is the last mile. The upload. The thing that takes a list of approved negatives and makes them real in hundreds of live accounts without a human having to click through the same dropdown three hundred times.
If you build a classifier and stop there, you’ve built a research tool. The wasted spend keeps happening. The queries keep running. The spreadsheet of “approved negatives” keeps growing and never actually blocks a single impression.
If you automate the whole pipeline — download, process, upload — you’ve built something that changes the economics of your account management. The classification and the upload together are what turn SQR from “a thing I should do eventually” into “a thing that happens every week whether I’m paying attention or not.”
Automate the boring part. Batch by account. Stamp every success. Run through mutation safety. Close the loop.