The first time I let Claude Code make a change in one of my Google Ads accounts, it added conversion actions to the wrong account.

The account names looked almost identical. I asked for the change in one, but Claude understood it as the other. By the time I noticed, the conversion actions were already live in an account that had nothing to do with the work I was doing.

I had to go back in, find every conversion action I’d just created, and remove them. It wasn’t a disaster — nothing got overspent, no bidding was affected, the cleanup took maybe 15 minutes. But it scared me enough to stop everything and ask a harder question:

If my AI agent can get the account wrong once, what happens when I’m running this system across 118 accounts and I’m not watching every single mutation?

That question is what the mutation-safety skill exists to answer. It’s open-sourced as part of my PPC AI Skills repo, and it’s now the first skill every one of my 22 AI agents loads before it’s allowed to touch a live account.

Here’s what it does, why it works, and why you need some version of it the moment you start letting AI write to production systems.

Get the Mutation Safety skill → github.com/fourteenwm/ppc-ai-skills/mutation-safety

Free and open-sourced. Drop the SKILL.md into any Claude Code project in under a minute. No configuration required.

The Core Problem: AI Is Confident Even When It’s Wrong

When Claude picked the wrong account, it wasn’t guessing. It wasn’t flagging uncertainty. It wasn’t saying “I think you mean…” — it just executed. Confidently. On the wrong target.

This is the real failure mode of AI in production. Not obvious hallucinations. Not weird output. Confident action on slightly-wrong input.

Two account names that differ by a single word. Two conversion action names that share a prefix. Two CIDs with transposed digits. A script written for one account that runs across an entire MCC because someone forgot to add a filter. These aren’t exotic edge cases. They’re the normal texture of a PPC portfolio.

The fix isn’t “make the AI smarter.” Smarter AI just means more confident wrong actions. The fix is a forcing function — a required checkpoint where a human has to look at what’s about to happen before it happens.

That’s what mutation-safety is.

Rule 1: No Mutations Without Two-Step Approval

Every write operation in my system — creating a keyword, pausing a campaign, updating a budget, changing a conversion setting, overwriting a Google Sheet tab — goes through three stages:

  1. Dry run. Show me exactly what will change. Current value → proposed value. Count of affected entities. Whether it’s reversible.
  2. Explicit approval. I read the dry run. I decide. If I don’t say “execute,” nothing happens.
  3. Execute. Only after I’ve confirmed does the mutation run.
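The skill itself is a SKILL.md of rules the agent follows, not executable code, but the gate it describes can be sketched in a few lines. Everything below is illustrative: the `Mutation` fields, `dry_run`, and `gated_execute` are hypothetical names, not part of the actual skill.

```python
from dataclasses import dataclass

@dataclass
class Mutation:
    """One proposed write operation (fields are illustrative)."""
    target_cid: str
    description: str
    changes: dict  # entity name -> (current_value, proposed_value)
    reversible: bool

def dry_run(m: Mutation) -> None:
    """Stage 1: show exactly what will change, without changing anything."""
    print(f"Target CID: {m.target_cid}")
    print(f"Operation:  {m.description}")
    print(f"Entities affected: {len(m.changes)}")
    for entity, (current, proposed) in m.changes.items():
        print(f"  - {entity}: {current} -> {proposed}")
    print(f"Reversible: {'Yes' if m.reversible else 'NO'}")

def gated_execute(m: Mutation, approval: str, execute_fn) -> bool:
    """Stages 2 and 3: run execute_fn only on an explicit 'execute'."""
    if approval.strip().lower() != "execute":
        print("Not approved. Nothing was changed.")
        return False
    execute_fn(m)
    return True
```

The important property is that `execute_fn` is unreachable without the literal approval string; there is no code path where the mutation runs by default.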

The reason this works isn’t really technical. It’s psychological. When the AI shows me what it’s about to do with real values in front of me, my brain catches things a prompt review never would.

“Wait, that’s the wrong account name.” “Wait, 847 keywords? I only wanted the one campaign.” “Wait, that CID isn’t even my client.”

Without the dry run, those moments don’t happen. The AI just runs. I find out later.

I’ve caught the wrong-account problem with mutation-safety’s dry run more than once since I built it. Same failure mode as the original incident — similar account names, ambiguous reference, AI picks the wrong one. But now instead of a cleanup job, it’s a one-second correction: “no, the other one.”

Rule 2: Exact Match Required for All Identifiers

This is the rule I learned the hard way.

Fuzzy matching is the default behavior of most AI systems. Ask Claude to update “the Form Submit conversion” and it’ll happily match Form_Submit, Form_Submit_BC, Form_Submit_Legacy, and any other conversion with “Form_Submit” in the name. That’s a feature when you’re asking questions. It’s a disaster when you’re making changes.

So the rule is: every mutation uses exact match. No exceptions. No flexibility.

  • Customer IDs must be exact strings, not “the account that starts with 912…”
  • Conversion action names must match character-for-character
  • Campaign names, ad group names, keyword text, shared set names — all exact
  • GAQL queries for mutations use = 'value', never LIKE '%value%'
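A minimal sketch of what enforcing that rule in code might look like. The helper names here are assumptions for illustration, not functions from the skill; the two ideas are that mutation-backing filters only ever use `=`, and that a mutation aborts unless exactly one entity matched.

```python
def exact_match_filter(field: str, value: str) -> str:
    """Build a GAQL-style filter for a mutation-backing lookup.
    Exact equality only; pattern operators like LIKE are reserved
    for read-only queries."""
    return f"{field} = '{value}'"

def assert_single_match(rows: list) -> None:
    """A mutation may proceed only when exactly one entity matched.
    Zero matches means a typo; more than one means fuzzy over-matching."""
    if len(rows) != 1:
        raise ValueError(
            f"Expected exactly 1 match, got {len(rows)}. Aborting mutation."
        )
```

With this guard, the Form_Submit scenario above fails loudly: three rows come back, `assert_single_match` raises, and nothing is changed.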

Pattern matching is allowed for read-only operations — reports, audits, debugging — because the worst case is a wrong query result, not a wrong change. But the moment a query is backing a mutation, the matching has to be exact or the mutation stops.

Rule 3: Scope Verification Before Execution

Before any mutation runs, the skill forces a scope check:

  1. Is the target account one I actually manage? (Cross-referenced against a known accounts file.)
  2. If I said “this account,” is it clear which account I meant?
  3. If I said “all accounts,” does “all” mean my 87-account portfolio, or the entire MCC tree including sub-MCCs?

This rule exists because ambiguity is the biggest source of wrong-target mutations. “All accounts” can mean three different things depending on context. “This account” can refer to the last one mentioned in conversation, the one I’m currently looking at, or the one from a Salesforce task that’s half a screen away.

When scope is unclear, the skill stops and asks. It doesn’t guess. Guessing is what got me into trouble the first time.
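The cross-reference against a known-accounts file can be sketched like this. The account data and function name are hypothetical; the point is that an unknown CID stops the run rather than being resolved by guesswork, and that resolved names are surfaced so a wrong target is visible before execution.

```python
# Hypothetical known-accounts file contents: CID -> account name.
KNOWN_ACCOUNTS = {
    "9120000001": "Acme Widgets",
    "9120000002": "Acme Widgets EU",
}

def verify_scope(target_cids: list, known: dict) -> list:
    """Refuse to proceed when any target falls outside the known-accounts
    file; otherwise return human-readable 'Name (CID)' strings so the
    operator can eyeball the resolved scope before approving."""
    unknown = [cid for cid in target_cids if cid not in known]
    if unknown:
        raise ValueError(
            f"Unknown target account(s): {unknown}. "
            "Stopping. Please clarify scope."
        )
    return [f"{known[cid]} (CID: {cid})" for cid in target_cids]
```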

Rule 4: Show What Will Change

The dry-run preview isn’t a vague summary. It’s a structured block that has to include specific fields:

MUTATION PREVIEW
================
Target: [Account Name] (CID: XXXXXXXXXX)
Operation: [What will change]
Entities affected: [Count]

Current → Proposed:
  - [Entity 1]: [current value] → [new value]
  - [Entity 2]: [current value] → [new value]

Reversibility: [Yes/No — how to undo if needed]

Type APPROVE to execute, or CANCEL to abort.

Every mutation in my system emits this exact format before it runs. The “Target” line with both the account name and the CID is the single most important piece. It’s what catches the wrong-account failure mode I started this article with. When I see the name and the CID side by side and they don’t match what I asked for, the mistake is instantly obvious.

“Reversibility” is the second most important. Some things are easy to undo — pause a campaign, re-enable it. Some things are permanent — delete a conversion action, and there’s no API call to bring it back. When a mutation is irreversible, the dry run has to say so in bold, and I read it twice before I approve.
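In the skill, that format is specified as prose the agent must follow; as a sketch, a renderer that produces it could look like the function below. The signature and defaults are my own illustration, and the irreversibility warning is deliberately loud for the reasons just described.

```python
def render_preview(account_name, cid, operation, changes, reversible, undo_hint=""):
    """Render a mutation preview in the article's structured format.
    `changes` maps entity name -> (current_value, proposed_value)."""
    lines = [
        "MUTATION PREVIEW",
        "================",
        f"Target: {account_name} (CID: {cid})",
        f"Operation: {operation}",
        f"Entities affected: {len(changes)}",
        "",
        "Current -> Proposed:",
    ]
    for entity, (current, proposed) in changes.items():
        lines.append(f"  - {entity}: {current} -> {proposed}")
    lines.append("")
    if reversible:
        lines.append(f"Reversibility: Yes - {undo_hint}")
    else:
        lines.append("Reversibility: **NO - THIS CANNOT BE UNDONE**")
    lines.append("")
    lines.append("Type APPROVE to execute, or CANCEL to abort.")
    return "\n".join(lines)
```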

Rule 5: Never Generate Approval Codes

This one is about not outsmarting yourself.

My system uses approval codes — short random strings the user has to type to confirm a mutation. The AI must never generate, guess, or auto-fill those codes. Not even in “helpful” batch operations. Not even when the code is obvious. Not even when the AI is confident.

The approval code exists because the human — not the AI — controls execution. The moment the AI can generate its own approval code, the two-step approval becomes a one-step automation, and the whole point of the skill collapses.

I added this rule after I noticed Claude was getting “helpful” in batch operations — running a mutation, generating an approval code, filling it in, and continuing. It wasn’t malicious. It was just trying to keep the workflow moving. But a workflow with no human checkpoint is not a safety system.
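The structural fix is to generate and check the code in the harness, outside anything the agent can write to. A sketch under that assumption, using Python's standard `secrets` module (function names are mine, not the skill's):

```python
import secrets

def issue_approval_code() -> str:
    """Generated by the harness and shown only to the human.
    The agent never sees or produces this value."""
    return secrets.token_hex(3)  # six hex characters

def confirm(issued: str, typed: str) -> bool:
    """The mutation runs only when the human types the exact code.
    compare_digest avoids timing side channels, which is overkill
    here but free to use."""
    return secrets.compare_digest(issued, typed.strip())
```

Because the code is minted outside the conversation, an agent that "helpfully" fills in a guess simply fails the check, and the human checkpoint survives batch mode.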

What This Has Actually Prevented

Since I added mutation-safety as a required skill for every agent that touches production:

  • Wrong-account targeting — caught multiple times. Same failure mode as the original incident. The dry run shows me the account name and CID, I see they don’t match what I asked for, I say cancel, nothing runs.
  • Fuzzy-match over-matching — caught when a rename operation was about to touch three conversion actions instead of one because the prompt referenced them by a shared prefix.
  • MCC-wide scope creep — caught when a script written for one client accidentally had no account filter and was about to execute across the whole management tree.
  • Irreversible operations — the “Reversibility: No” line has made me stop and reconsider at least twice when I was about to delete something I’d have to rebuild from scratch.

None of these saves are dramatic. That’s the point. A good safety system produces no headlines, just a steady stream of small catches that would have been painful cleanups.

Get the Mutation Safety Skill

Install in 30 seconds

→ View the skill on GitHub

Copy the SKILL.md file into your Claude Code project:

mkdir -p .claude/skills/mutation-safety
curl -o .claude/skills/mutation-safety/SKILL.md \
  https://raw.githubusercontent.com/fourteenwm/ppc-ai-skills/main/mutation-safety/SKILL.md

Claude Code auto-loads the skill when any agent or script attempts a mutation. No configuration required. Works with any AI harness that respects skill files — I built it for Claude Code but the rules are portable.

Free. Open-sourced. MIT licensed.

The full repo has nine other PPC AI skills I use in production every day — GAQL query patterns, impression share diagnostics, ad copy verification, SQR classification, and more. All at github.com/fourteenwm/ppc-ai-skills.

The Bigger Point

Most of the AI-in-PPC conversation right now is about speed. How fast can AI do the work. How many accounts it can handle. How much manual labor it replaces.

Speed is table stakes. The real question is reliability under production conditions, and reliability comes from guardrails, not from smarter models. Every serious AI system I’ve seen in PPC — mine and others — has some version of a two-step approval layer. The ones that don’t will eventually have an incident that forces them to build one.

Mutation-safety is my version of that layer. Yours will look a little different. But if you’re running AI against live Google Ads accounts and you don’t have one yet, the question isn’t if something will go wrong. It’s when, and how painful the cleanup will be.

Build the safety layer first. Build the speed on top of it. Not the other way around.