I manage Performance Max campaigns for a client with extremely tight brand control. Every headline, every image, every description goes through approval before it runs. It’s the standard that large organizations with real brand equity have always held — and it’s completely incompatible with how Google ships Performance Max by default.
The first time I audited this client’s PMax campaigns, I found auto-generated headlines nobody had approved. Auto-cropped images in aspect ratios the brand team had never signed off on. Images pulled from landing pages the brand team specifically did not want represented in ads. All of it generated and served silently, because Google’s asset automation settings ship ON by default and there is no per-asset approval step.
The fix wasn’t complicated for a single campaign. Open the campaign. Expand Asset Automation. Flip all five toggles off. Save. Done.
But this wasn’t one campaign. It was 300+ accounts across the portfolio, every one of them running PMax, every one of them with those five settings on.
How do you enforce a brand standard across hundreds of campaigns when Google ships asset automation on by default — and silently turns it back on after certain edits?
That question is what the pmax-asset-automation skill exists to answer. It’s open-sourced as part of my PPC AI Skills repo, and it’s now the skill I run on every new client handoff, every month as a compliance check, and anytime a PMax campaign has been edited by anyone on the team.
Here’s what it does, why it matters, and why brand-sensitive advertisers cannot rely on Google’s defaults.
Get the PMax Asset Automation skill → github.com/fourteenwm/ppc-ai-skills/pmax-asset-automation
Free and open-sourced. Drop the SKILL.md into any Claude Code project in under a minute. No configuration required.
The Core Problem: Asset Automation Is Opt-Out, Not Opt-In
Performance Max ships with five asset automation settings turned ON by default. Google uses them to fill gaps in your asset library — generating new headlines, expanding final URLs to additional landing pages, enhancing YouTube videos, auto-cropping images, and extracting new images from the landing page.
For advertisers with weak creative assets, these defaults are often an upgrade. Google’s generated content fills empty slots the advertiser would have left empty.
For brand-sensitive advertisers, these defaults are a liability. There is no per-asset approval. No notification when a new asset is generated. No queue to review before the asset serves. Google decides what looks good, writes it, and runs it.
The problem is not that the automation produces bad output. Sometimes it produces good output. The problem is that the output is unverified — and for a brand that has spent years training its team to hold a specific voice, image style, or messaging standard, unverified automation is the single biggest source of drift.
The underlying principle I keep returning to is one I apply across every skill I write for ad copy: empty beats inaccurate. An empty asset slot is better than a plausible-sounding headline nobody wrote. PMax’s defaults invert that principle, and the skill’s job is to flip it back.
Rule 1: Treat Asset Automation as Opt-Out for Every Brand-Sensitive Account
The five settings the skill audits:
| Setting | What Google Does When ON |
|---|---|
| `TEXT_ASSET_AUTOMATION` | Generates new headlines and descriptions it predicts will perform |
| `FINAL_URL_EXPANSION_TEXT_ASSET_AUTOMATION` | Expands final URLs to other pages on the domain and writes matching copy |
| `GENERATE_ENHANCED_YOUTUBE_VIDEOS` | Edits, trims, or remixes your uploaded YouTube videos |
| `GENERATE_IMAGE_ENHANCEMENT` | Auto-crops your images to new aspect ratios |
| `GENERATE_IMAGE_EXTRACTION` | Extracts new images from any page on your landing domain |
The standard for brand-sensitive accounts is all five OPTED_OUT, with manual asset management and explicit approval for every asset that runs. This is not a configuration preference. It’s a brand-safety posture.
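That standard is mechanical enough to express in code. Here is a minimal sketch of the per-campaign compliance check, using the five setting names from the table above. Representing a campaign's settings as a plain mapping of setting name to status string is my assumption for illustration, not the skill's actual schema.

```python
# Hypothetical sketch of the brand-sensitive compliance check.
# The five setting names come from the table above; the dict shape
# is an illustrative assumption, not the skill's real data model.

BRAND_SENSITIVE_SETTINGS = (
    "TEXT_ASSET_AUTOMATION",
    "FINAL_URL_EXPANSION_TEXT_ASSET_AUTOMATION",
    "GENERATE_ENHANCED_YOUTUBE_VIDEOS",
    "GENERATE_IMAGE_ENHANCEMENT",
    "GENERATE_IMAGE_EXTRACTION",
)

def non_compliant_settings(settings: dict[str, str]) -> list[str]:
    """Return the settings that are not OPTED_OUT for this campaign.

    A setting missing from the mapping counts as non-compliant,
    because Google's default for these automations is ON.
    """
    return [
        name for name in BRAND_SENSITIVE_SETTINGS
        if settings.get(name) != "OPTED_OUT"
    ]

# Example: a campaign where only image extraction was ever turned off.
drifted = non_compliant_settings({"GENERATE_IMAGE_EXTRACTION": "OPTED_OUT"})
```

Note the treatment of missing settings: because the defaults are opt-out, absence of evidence is non-compliance, not compliance.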
Some advertisers benefit from text asset automation specifically. High-volume ecommerce accounts with weak creative teams sometimes see real performance lift when Google fills headline gaps. The skill supports that: opt out as the default, leave specific automations on only when you have data proving they help for that account.
The audit-first workflow makes this explicit. You don’t flip settings blind. You audit, see the current state, and choose. For a client where I don’t yet have performance data, the default choice is opt out.
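The read step of that audit can be sketched as a GAQL query. The field name `campaign.asset_automation_settings` is my assumption about the Google Ads API surface and may differ by API version; verify it against your client library before relying on it.

```python
# A read-only GAQL query sketch for the audit step. The field
# campaign.asset_automation_settings is an assumption about the
# Google Ads API and may vary by version -- verify before use.

AUDIT_QUERY = """
    SELECT
      campaign.id,
      campaign.name,
      campaign.asset_automation_settings
    FROM campaign
    WHERE campaign.advertising_channel_type = 'PERFORMANCE_MAX'
"""

def is_read_only(query: str) -> bool:
    """Audits must never mutate: allow only SELECT statements."""
    return query.strip().upper().startswith("SELECT")
```

Guarding the audit path with a check like `is_read_only` is one way to keep the "audit first, choose second" ordering enforced in code rather than by convention.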
Rule 2: Image Extraction Is the Setting That Will Burn You
Of the five automations, GENERATE_IMAGE_EXTRACTION is the one to opt out of first, even if you leave others on.
The behavior is straightforward: Google pulls images from any page on your landing domain and adds them to the ad creative pool. Any page. About pages, blog posts, career sections, old product pages, press releases — anything Google finds while crawling.
For the brand-sensitive client I started with, this was the hardest conversation. Their brand team had spent months building an approved creative library. Image extraction bypasses that library entirely. A product page screenshot from 2019, a team photo that predates the current brand refresh, a stock image used in a blog post — any of them can end up in a live ad without anyone on the brand team ever seeing it.
The fix is the single toggle. But the reason this toggle matters more than the others is that the failure mode is invisible. A bad generated headline is something you notice the first time you review your assets. A landing page image that shouldn’t be in ads can serve for weeks before someone catches it in a creative review.
Opt out of image extraction first. Then worry about everything else.
Rule 3: Text Generation Conflicts With Any Verified Ad Copy Standard
The underlying principle of my ad copy work is verification. Every headline, every description, every claim traces back to something explicit on the landing page or brand guidelines. No “common industry practices” as justification. No assumed services. Empty beats inaccurate.
TEXT_ASSET_AUTOMATION breaks this principle structurally. Google’s model doesn’t know which claims are verified and which aren’t. It writes plausible copy based on patterns from your landing page text and other ads in the account. “Plausible” is not the same as “verified.”
For an account where nobody has been holding the copy to a verification standard, auto-generated text might be fine. The baseline is already inconsistent; Google’s additions aren’t obviously worse.
For an account where ad copy has been carefully built and approved, auto-generation injects inconsistency directly into the brand voice. You can end up with a mix of verified and auto-generated assets in the same ad group, and you’d never know from the Google Ads UI which are which.
Opt out of text automation for any account where copy has been deliberately verified. Keep it on only when you’ve measured that the auto-generated assets are outperforming and you’re comfortable with the trade-off.
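The UI may not flag which assets came from where, but the Google Ads API does report an asset source (ADVERTISER vs AUTOMATICALLY_CREATED), which makes the verified/generated split auditable. The dict shape below is an illustration of that idea, not the API's actual response format.

```python
# Sketch: separating advertiser-uploaded text assets from auto-generated
# ones using a source field. ADVERTISER / AUTOMATICALLY_CREATED mirror the
# Google Ads API's asset source enum; the dict records are mock data.

def split_by_source(assets: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition assets into (advertiser-created, automatically-created)."""
    verified = [a for a in assets if a["source"] == "ADVERTISER"]
    generated = [a for a in assets if a["source"] == "AUTOMATICALLY_CREATED"]
    return verified, generated

assets = [
    {"text": "Free Shipping On Orders $50+", "source": "ADVERTISER"},
    {"text": "Shop The Best Deals Today", "source": "AUTOMATICALLY_CREATED"},
]
verified, generated = split_by_source(assets)
```

A non-empty `generated` list in an account that is supposed to be fully opted out is itself an audit finding: either the opt-out drifted, or legacy generated assets were never cleaned up.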
Rule 4: Settings Reset Silently — Audit Monthly
The hardest lesson with asset automation is that fixing it once is not enough.
Google has been observed flipping automations back on after certain campaign edits — budget changes, bid strategy changes, asset group modifications. The mechanism is not well documented. What is well documented is that “I turned this off three months ago” is not a guarantee it’s still off today.
The skill’s audit workflow is designed to be re-run. Monthly, at minimum. More often after any batch of campaign edits. The audit output is a per-campaign report: campaign name, CID, which settings are currently at OPTED_OUT and which are not.
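The report described above can be sketched as a small function: one line per campaign with name, CID, and any settings that have drifted from OPTED_OUT. The campaign records here are mock data and the field names are my assumptions.

```python
# Sketch of the per-campaign audit report: name, CID, and drift status.
# Campaign records are mock data; field names are illustrative assumptions.

REQUIRED = (
    "TEXT_ASSET_AUTOMATION",
    "FINAL_URL_EXPANSION_TEXT_ASSET_AUTOMATION",
    "GENERATE_ENHANCED_YOUTUBE_VIDEOS",
    "GENERATE_IMAGE_ENHANCEMENT",
    "GENERATE_IMAGE_EXTRACTION",
)

def audit_report(campaigns: list[dict]) -> list[str]:
    """One report line per campaign; a missing setting counts as drifted."""
    lines = []
    for c in campaigns:
        drifted = [s for s in REQUIRED if c["settings"].get(s) != "OPTED_OUT"]
        status = "COMPLIANT" if not drifted else "DRIFTED: " + ", ".join(drifted)
        lines.append(f"{c['name']} (CID {c['cid']}): {status}")
    return lines

report = audit_report([
    {"name": "Brand - PMax - US", "cid": "123-456-7890",
     "settings": {s: "OPTED_OUT" for s in REQUIRED}},
    {"name": "Brand - PMax - CA", "cid": "123-456-7891",
     "settings": {"GENERATE_IMAGE_EXTRACTION": "OPTED_OUT"}},
])
```

Because the output is plain text keyed by CID, diffing this month's report against last month's is enough to spot silent re-activation without opening a single campaign in the UI.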
For the 300+ account portfolio I mentioned, the monthly audit is now a standing automation. It surfaces any campaign that has drifted back to default without a human clicking through each settings panel. That alone is the value: the Google Ads UI only exposes these settings one campaign at a time, which is fine for five campaigns and impossible for three hundred.
Rule 5: Any Fix at Portfolio Scale Goes Through Mutation Safety
The audit is a read operation. It’s safe. Run it anytime.
The fix — actually setting those OPTED_OUT values via the API — is a mutation. And PMax settings affect serving behavior directly. A bad mutation across a portfolio is expensive to clean up.
The skill is explicit: any fix goes through mutation-safety. No exceptions. The flow is always:
- Audit surfaces non-compliant campaigns
- Dry run shows exactly which campaigns will be mutated and which specific settings will change
- Human approves
- Execute
Do not batch across accounts without a per-account dry run. It’s tempting, especially when you’ve just audited 300 campaigns and the fix looks identical for each one. But “identical” across 300 campaigns is also 300 opportunities for a wrong mutation to propagate. The dry run keeps the scale honest.
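The audit / dry run / approve / execute flow above can be sketched as two functions with a hard approval gate between them. Nothing here touches the Google Ads API; the plan structure and the gate are illustrative assumptions about how mutation safety can be enforced.

```python
# Sketch of the mutation-safety flow: audit findings become an explicit
# dry-run plan, and execution refuses to proceed without human approval.
# No real API calls; plan shape and gate are illustrative assumptions.

def build_dry_run_plan(non_compliant: dict[str, list[str]]) -> list[dict]:
    """Turn audit findings into a per-campaign mutation plan (read-only)."""
    return [
        {"campaign": name, "set_to_opted_out": sorted(settings)}
        for name, settings in non_compliant.items()
    ]

def execute_plan(plan: list[dict], approved: bool) -> list[str]:
    """Refuse to mutate anything unless a human approved the dry run."""
    if not approved:
        return []  # hard stop: no approval, no mutation
    return [f"mutated {step['campaign']}" for step in plan]

plan = build_dry_run_plan({"Brand - PMax - CA": ["TEXT_ASSET_AUTOMATION"]})
unexecuted = execute_plan(plan, approved=False)  # dry run only: empty result
```

The key design choice is that `approved` is a parameter, not a default: batching 300 accounts through this flow still means 300 explicit plans a human can read before anything mutates.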
What This Has Actually Prevented
Since adding pmax-asset-automation as a standing audit across the portfolio:
- Off-brand image assets caught before they scaled — the image extraction setting was ON across dozens of campaigns for a brand-sensitive client. The audit surfaced every one in a single pass. The fix took an afternoon instead of a multi-week creative review.
- Silent re-activation after campaign edits — monthly re-audit catches campaigns where settings drifted back to default after unrelated changes. The fix is a ten-second mutation instead of a forensic investigation weeks later.
- Unverified auto-generated headlines — opting out of text automation in advance means the ad library stays verified. Headlines that didn’t come from the approval workflow don’t sneak in and dilute the voice.
- Portfolio compliance in client handoffs — any new PMax campaign added to the book gets audited on day one. Settings are opted out before the campaign has time to accumulate auto-generated assets.
None of these are dramatic saves. That’s the point. A brand standard is maintained by a thousand small catches, not by one big incident.
Get the PMax Asset Automation Skill
Install in 30 seconds
Copy the SKILL.md file into your Claude Code project:
```shell
mkdir -p .claude/skills/pmax-asset-automation
curl -o .claude/skills/pmax-asset-automation/SKILL.md \
  https://raw.githubusercontent.com/fourteenwm/ppc-ai-skills/main/pmax-asset-automation/SKILL.md
```

Claude Code auto-loads the skill when any agent runs a PMax audit, asset automation check, or compliance sweep. No configuration required. Works with any AI harness that respects skill files.
Free. Open-sourced. MIT licensed.
The full repo has nine other PPC AI skills I use in production every day — mutation-safety, SQR classification, RSA refresh, impression share diagnostics, and more. All at github.com/fourteenwm/ppc-ai-skills.
The Bigger Point
Google’s defaults reflect Google’s incentives. More automation means more surface area for Google’s models to make decisions, which means more campaigns that perform well enough without advertisers having to do creative work. That’s a reasonable default for advertisers who weren’t going to do creative work anyway.
For advertisers who are doing creative work — brand teams that have spent years on voice, agencies running approved-copy workflows, accounts where every asset has a verification trail — Google’s defaults are actively hostile to the brand standard.
The fix is not to argue with Google about whose defaults are right. The fix is to audit ruthlessly, enforce the standard the client actually wants, and re-audit on a cadence because settings drift.
Audit first. Opt out by default. Re-audit monthly. Fix through mutation safety. Treat brand control as operational discipline, not a one-time setting.