Two modes. You choose. Analysis + recommendations — OpenClaw pulls live data, diagnoses issues, and tells you exactly what to do. Or full execution — it makes the changes directly in your account, with the spend limits and approval gates you set.
This guide covers the 4 analysis skills, what write operations OpenClaw can execute, how to configure guardrails so you're never blindsided, and how to schedule it to run autonomously.
Free / open source
Download the skills
4 installable skills on GitHub. Connect your own Google Ads MCP, run analysis from the command line. Free, self-hosted, full control.
Download on GitHub
Managed / no setup
Connect via Ryze AI
Read + write access, guardrails configured, scheduling included. Your Google Ads account talks directly to Claude — analysis, changes, and autonomous runs out of the box.
What OpenClaw Can Read from Google Ads
Once connected via MCP, OpenClaw has full read access across every level of your account. It queries using GAQL (Google Ads Query Language) directly — no pre-built report templates, no fixed dashboards. You ask, it pulls.
Campaign level
Status, bidding strategy, impression share lost to budget vs. rank, budget pacing, cost, conversions, CPA, ROAS, conversion value, auction insights
Ad group level
CPC bids, quality score distribution, ad strength, CPM/CPC/CTR/CVR trends, audience bid modifiers (device, location, demographic), ad rotation status
Keyword level
Quality Score (ad relevance, expected CTR, landing page experience), first-page and top-of-page bid estimates, search term match coverage, close variants triggering
Ad level
RSA asset-level performance (pinned vs. unpinned headlines, serving frequency, individual headline/description CTR), ad strength score, approval status
Audiences & targeting
In-market and affinity segment performance, custom intent audiences, RLSA overlap, audience observation data vs. targeted data, demographic breakdowns (age, gender, income)
Conversion tracking
Google Ads vs. GA4 conversion count discrepancies, attribution model comparison (last-click vs. data-driven), view-through conversion ratios, GCLID match rate
No fixed report format. You write the query in plain English. OpenClaw translates to GAQL, pulls the data, and reasons over it. You can cross-reference metrics that normally require 3 separate reports in the UI.
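For instance, a plain-English ask like "which campaigns lost impression share to budget last month, and what did it cost?" might translate to a single GAQL query along these lines (the field names are real Google Ads API fields; the exact query OpenClaw generates will vary):

```sql
SELECT
  campaign.name,
  campaign.status,
  metrics.cost_micros,
  metrics.conversions,
  metrics.search_budget_lost_impression_share,
  metrics.search_rank_lost_impression_share
FROM campaign
WHERE segments.date DURING LAST_30_DAYS
  AND campaign.status = 'ENABLED'
ORDER BY metrics.search_budget_lost_impression_share DESC
```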
What OpenClaw Can Change in Your Account
When connected via Ryze AI (or a write-capable MCP server), OpenClaw moves beyond analysis. It can execute changes directly. Here's what's available:
Bids & Budgets
- Adjust CPC bids at campaign and ad group level by specified % or absolute amount
- Update Target CPA and Target ROAS on Smart Bidding campaigns
- Modify daily and shared campaign budgets
- Shift budget between campaigns based on efficiency ranking
- Apply bid modifiers for device, location, and demographic segments
Campaign & Ad Group Management
- Pause and resume campaigns and ad groups based on performance thresholds
- Switch bidding strategies (e.g., Manual CPC → Target CPA) when data thresholds are met
- Update campaign settings, ad scheduling, and targeting criteria
- Create new campaigns with specified budget, bid strategy, and targeting
Keyword Management
- Add keywords to ad groups (exact, phrase, broad match)
- Add negative keywords at campaign and ad group level from search term analysis
- Add campaign-level negative keywords to Performance Max campaigns
- Pause keywords exceeding CPA threshold or with Quality Score below target
- Bulk update keyword bids based on Quality Score and conversion data
Ads & Creative
- Create new RSAs with headlines and descriptions based on top-performing asset analysis
- Pause underperforming ads based on CTR, CVR, or ad strength thresholds
- Update Performance Max asset groups with new assets
- Upload and attach creative assets via Asset Service
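Under the hood, each of these resolves to a Google Ads API mutate call. Here is a minimal sketch of a daily-budget update using the official Python client library, with placeholder IDs; a write-capable MCP server issues the equivalent call on your behalf:

```python
from google.ads.googleads.client import GoogleAdsClient
from google.api_core import protobuf_helpers

client = GoogleAdsClient.load_from_storage("google-ads.yaml")  # your API credentials
budget_service = client.get_service("CampaignBudgetService")

# Placeholder IDs: substitute your own account and budget.
customer_id, budget_id = "1234567890", "9876543210"

operation = client.get_type("CampaignBudgetOperation")
budget = operation.update
budget.resource_name = budget_service.campaign_budget_path(customer_id, budget_id)
budget.amount_micros = 75_000_000  # new daily budget: $75 (micros = dollars x 1,000,000)

# Send only the field that changed.
client.copy_from(operation.update_mask, protobuf_helpers.field_mask(None, budget._pb))

response = budget_service.mutate_campaign_budgets(
    customer_id=customer_id, operations=[operation]
)
print(response.results[0].resource_name)
```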
Execution modes
Every write action can run in three modes: Recommend-only (returns a list of changes for you to execute), Approve-then-execute (queues changes and waits for your sign-off before applying), or Autonomous (executes within the limits you define, logs every action). See Guardrails below.
The 4 Analysis Skills
Each skill is a deep-analysis workflow. Install once, run with a single prompt.
1. ads-bid-budget
Budget allocation, bid efficiency, scaling decisions
What it diagnoses
Cost-per-conversion vs. target across every campaign. Impression share lost to budget vs. rank (different problems, different fixes). Diminishing returns curves. Wasted spend by segment. Budget pacing at day/week level.
Output
| What you get | Specificity |
|---|---|
| Efficiency ranking | Every campaign ranked by CPA vs. target, with delta |
| Bid recommendations | Specific % changes per campaign/ad group with rationale |
| Budget reallocation | Exact dollar amounts to shift, with projected impact |
| Pause candidates | What to kill, estimated monthly savings, alternative |
| Scale headroom | Campaigns with IS headroom + healthy CPA = safe to push |
Prompts for experienced managers
2. ads-creative-analyst
RSA asset performance, fatigue detection, test direction
What it diagnoses
RSA asset-level serving frequency and CTR for individual headlines and descriptions. Ad strength vs. actual conversion rate (they often diverge). CTR decay curves over 7/14/30 days for fatigue detection. Performance Max asset group health. Search vs. Display vs. PMax format comparison.
Output
| What you get | Specificity |
|---|---|
| Winner breakdown | Which headline/description combinations Google favors, why |
| Fatigue list | Ads with >15% CTR decline — days since peak and rate of decline |
| Ad strength vs. CVR | Flags "Poor" ads beating "Excellent" ads on conversion rate |
| Test brief | Specific angles and headlines to test based on conversion data |
| Format comparison | Search vs. PMax CVR and CPA — where budget should sit |
Prompts for experienced managers
3. ads-audience-architect
Audience efficiency, overlap, expansion opportunities
What it diagnoses
In-market and custom intent audience performance in observation mode — who's actually converting. Demographic CPA/ROAS by age, gender, and household income bracket. Geographic performance at region and city level. Device gaps. RLSA overlap between campaigns. Observation vs. targeting mode effectiveness.
Output
| What you get | Specificity |
|---|---|
| Segment ranking | Every audience sorted by CPA/ROAS, observation → targeting recommendations |
| Bid modifier map | Specific % adjustments by device, location, demographic |
| Overlap conflicts | RLSA campaigns competing for the same converters |
| Expansion brief | New in-market/custom intent segments, segments to exclude |
Prompts for experienced managers
4. ads-performance-auditor
Full account health: structure, tracking, trends, anomalies
What it diagnoses
Account structure health. Quality Score distribution by campaign (not just averages). Conversion tracking gaps: Google Ads vs. GA4 count discrepancies, GCLID match rate, attribution window comparison (last-click vs. data-driven). Search term coverage and negative keyword gaps. Bidding strategy performance vs. account average. 30/60/90 day trend analysis on CPA, CTR, IS, CVR.
Output
| What you get | Specificity |
|---|---|
| Health score | Account rating by category (structure, tracking, bidding, creative) |
| Anomaly list | Metric changes >20% with probable root causes — not just flags |
| Hidden winners | Strong unit economics, IS headroom, under-budgeted |
| Hidden losers | Good campaign-level CPA masking bleeding ad groups/keywords |
| Tracking audit | GA4 vs. Google Ads conversion gap, GCLID rate, attribution model impact |
| Priority fixes | Ranked by estimated $ impact — highest leverage actions first |
Prompts for experienced managers
Guardrails & Safety
You're connecting an AI agent to a live ad account. The controls you put in place determine how much autonomy it has. Here's how experienced managers structure this.
Three operating modes
Recommend-only
OpenClaw analyzes and returns a prioritized action list. You execute every change manually in the Google Ads interface. No write access granted. Appropriate for new deployments or high-spend accounts where you want full control.
Approve-then-execute
OpenClaw queues a change list and delivers it to Slack or email. You review and approve. Once approved, it applies the changes. Good for teams that want speed without giving up oversight. Approval is per-batch, not per-change.
Autonomous
OpenClaw executes within limits you define. Changes are logged before and after application. Appropriate once you've run Mode 2 for several weeks and trust the output quality. Set hard limits on change magnitude.
Spend & change limits (autonomous mode)
Set these as constraints in your skill configuration before running autonomous sessions.
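The exact schema depends on your setup; the sketch below is illustrative only (field names and values are assumptions, not a documented format), but it shows the kinds of limits that matter:

```yaml
# Illustrative autonomous-mode limits. Field names are assumptions, not a documented schema.
autonomous_limits:
  max_bid_change_pct: 20           # no single bid moves more than +/- 20%
  max_budget_change_pct: 20        # daily budget changes capped at +/- 20%
  max_changes_per_run: 25          # hard cap on mutations per session
  never_touch: ["Brand - Exact"]   # campaigns the agent may not modify
  require_approval_above_usd: 100  # changes affecting more than $100/day need sign-off
```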
API token scope — the hard backstop
The most reliable guardrail is the API token permission scope. What the token can't do, the agent can't do — regardless of what you prompt it to try.
Audit log
Every action OpenClaw takes is logged: timestamp, campaign/ad group/keyword affected, previous value, new value, rationale, and which skill/prompt triggered it. For autonomous sessions, the log is delivered to your configured channel (Slack, email, or webhook) after each run. Review it weekly until you're confident in the behavior.
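A single entry might look like this (structure is illustrative; the fields mirror the list above):

```json
{
  "timestamp": "2025-06-02T09:14:31Z",
  "skill": "ads-bid-budget",
  "action": "update_campaign_budget",
  "target": "Campaign: Generic - Search - US",
  "previous_value": "$60.00/day",
  "new_value": "$72.00/day",
  "rationale": "CPA 18% under target with 22% IS lost to budget; within the +/-20% limit",
  "mode": "autonomous"
}
```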
Scheduling & Autonomous Mode
OpenClaw has a built-in cron scheduler. You define the schedule, the skill, and the operating mode. It runs, does the work, and delivers results — with or without waiting for you.
Recommended schedule for experienced managers
| When | Task | Mode |
|---|---|---|
| Daily 7am | Anomaly detection — CPA spike >20%, CTR drop >15%, IS drop >10%. Alert only, no changes. | Recommend |
| Mon 8am | Full performance audit. Week-over-week comparison. Priority fixes ranked by estimated impact. Delivered to Slack. | Recommend |
| Mon 9am | Bid and budget rebalancing. Queue changes within ±20% limit. Hold for your approval before applying. | Approve |
| Wed 9am | Search term review. Pull all terms with spend >$30 and zero conversions. Add as negatives — queue for approval. | Approve |
| Fri 9am | Creative fatigue check. Flag ads with >15% CTR decline. Recommend test briefs. No creative changes without approval. | Recommend |
| 1st Mon/month | Full account audit. Structure, tracking health, 30/60/90 trends, bidding strategy review. Full report delivered. | Recommend |
~15 minutes of your time per week. OpenClaw does the analysis loop. You make the calls and approve the changes.
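Expressed as scheduler config, the first two rows of the table above might look like the sketch below (the syntax and skill assignments are illustrative; adapt them to your actual schedule format):

```yaml
# Illustrative schedule config. Adapt to your scheduler's actual syntax.
jobs:
  - name: daily-anomaly-check
    cron: "0 7 * * *"              # every day at 7am
    skill: ads-performance-auditor
    mode: recommend
    deliver_to: "slack:#ppc-alerts"
  - name: monday-performance-audit
    cron: "0 8 * * 1"              # Mondays at 8am
    skill: ads-performance-auditor
    mode: recommend
    deliver_to: "slack:#ppc-alerts"
```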
Starting recommendation
Run the daily anomaly check and Monday audit in recommend-only mode for 3–4 weeks. Learn the output format, calibrate your trust, spot any misdiagnoses. Then move bid rebalancing to approve-then-execute. Only move anything to fully autonomous after you've reviewed 10+ sessions worth of output and found it reliable.
Option 2: Manual OpenClaw Setup
Requires some technical setup — IT help may be needed.
Time: 20–30 minutes, one time.
Prerequisites: OpenClaw installed. Google Ads API access (Manager Account recommended). Google Cloud project with the Ads API enabled. Developer token.
Install skills
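Clone the skills repository and copy the four skill folders into your OpenClaw skills directory. The repository URL and skills path below are placeholders; use the locations from your own install:

```bash
# Placeholder repo URL and skills path: substitute your own.
git clone <skills-repo-url>
cd <skills-repo>
cp -r ads-bid-budget ads-creative-analyst ads-audience-architect ads-performance-auditor \
  ~/.openclaw/skills/
```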
Connect Google Ads MCP
Add your Google Ads MCP server config. For read-only analysis, use the official Google Ads MCP. For write operations, configure a write-capable MCP server with your Google Ads API credentials and developer token. Test with: "List my Google Ads campaigns" — if it returns campaign names, you're connected.
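An MCP server entry typically looks like the sketch below. The server name, package, and environment variable names are placeholders; use whatever your chosen Google Ads MCP server documents (the variable names shown follow the Google Ads client library convention):

```json
{
  "mcpServers": {
    "google-ads": {
      "command": "npx",
      "args": ["-y", "<google-ads-mcp-package>"],
      "env": {
        "GOOGLE_ADS_DEVELOPER_TOKEN": "<developer-token>",
        "GOOGLE_ADS_CLIENT_ID": "<oauth-client-id>",
        "GOOGLE_ADS_CLIENT_SECRET": "<oauth-client-secret>",
        "GOOGLE_ADS_REFRESH_TOKEN": "<refresh-token>",
        "GOOGLE_ADS_LOGIN_CUSTOMER_ID": "<manager-account-id>"
      }
    }
  }
}
```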
Weekly Workflow
Five prompts. Once a week. Covers 90% of ongoing management analysis — the rest runs on schedule.
| Day | Prompt |
|---|---|
| Monday | "Full performance audit. Target ROAS [X]. Pull 30-day trends on CPA, IS, CTR, CVR. Flag anything moving in the wrong direction." |
| Monday | "Budget and bid rebalancing. Target CPA $[X]. Reallocation within existing total spend — no new budget. Flag any campaign you'd pause." |
| Wednesday | "Search terms last 14 days. Spend >$20, zero conversions — add as negatives (show me the list first). Converting terms not yet added as keywords — flag those too." |
| Wednesday | "RSA asset performance. Which headlines are being suppressed? Any ads with declining CTR this week?" |
| Friday | "Audience observation data — any in-market segments converting 20%+ above account average? Flag them. Are any audience bid modifiers obviously wrong given the data?" |
~20 minutes interactive. Everything else runs on schedule and lands in your Slack before you open your laptop.
Advanced Prompts Worth Saving
These go beyond basic analysis — they cross-reference data the UI doesn't connect for you:
- Cross-campaign diagnosis
- Attribution reality check
- PMax audit
- Bidding strategy readiness
- Negative keyword gap
- Autonomous change with confirmation
Limitations
- Official Google Ads MCP is read-only. Write capabilities require a write-capable MCP server (manual setup) or Ryze AI, which handles this for you.
- No built-in spend limit enforcement. You configure limits in your skill settings. The API token scope is the hard backstop — don't grant more access than needed.
- Quality of output depends on quality of tracking. Broken conversion tracking, GCLID gaps, or mismatched attribution windows will produce unreliable recommendations. Fix tracking first.
- Won't replace strategic judgment. It doesn't know your seasonality, upcoming promotions, or business context unless you tell it. Include context in your prompts.
- Won't produce creative assets. It identifies which angles to test and which headlines to retire. Writing and designing are still yours.