By the end of this guide, you'll have 10 agents checking your campaigns every 15 minutes — auditing spend, catching anomalies, shifting budgets, and flagging creative fatigue. All with human approval before anything goes live.
This guide assumes you're comfortable with a terminal, you run Google or Meta Ads at some scale, and you want to automate the repetitive parts of media buying.
Not a developer? Ryze AI connects to Google and Meta Ads directly. Same agents, managed for you.
Part 1: What This Actually Is
ClawBot (OpenClaw) is an open-source AI agent framework. It connects an LLM to real tools — file system, shell commands, APIs, web browsing. It runs as a background process on a server.
One ClawBot instance = one AI agent with persistent memory.
Ten instances = ten agents. Each with a different job. Each waking up on a schedule.
The idea is simple. Instead of one general-purpose AI, you run specialized agents that each do one thing well. An auditor that only audits. A bidder that only manages budgets. A tracker that only watches for anomalies.
They share a database so they can coordinate. They follow an approval workflow so nothing happens without you.
That's it. No magic.
Part 2: The Architecture
How ClawBot Works
ClawBot runs a Gateway process on your server. The Gateway manages sessions, handles scheduled tasks, and routes messages.
clawbot gateway start
Each agent is a session. A session has:
- A session key (unique ID, like agent:auditor:main)
- Conversation history (stored as JSONL on disk)
- A model (which LLM to use)
- Tools (what the agent can access)
Sessions are independent. Each has its own memory, its own context, its own identity.
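Because history is just JSONL on disk, you can inspect an agent's memory directly from the shell. A minimal sketch, assuming the Gateway keeps per-session history files under a sessions/ directory in its state folder (the actual path depends on your install):
# Hypothetical location; check where your Gateway actually writes session history
SESSIONS_DIR="$HOME/.clawbot/sessions"
# Show the last few messages in the Auditor's history
tail -n 5 "$SESSIONS_DIR/agent:auditor:main.jsonl" | jq .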
The Workspace
Every agent reads from a shared workspace on disk:
/home/ops/clawbot/
├── AGENTS.md ← Operating manual for all agents
├── agents/
│ ├── auditor/
│ │ ├── SOUL.md ← "You are the Auditor..."
│ │ └── memory/
│ │ ├── WORKING.md ← Current task state
│ │ └── 2026-02-08.md ← Daily log
│ ├── bidder/
│ │ ├── SOUL.md
│ │ └── memory/
│ ├── scout/
│ │ ├── SOUL.md
│ │ └── memory/
│ └── ... (one folder per agent)
├── shared/
│ ├── campaign-data/ ← Pulled from APIs
│ ├── rules/ ← Audit rules, thresholds
│ └── templates/ ← Campaign templates
├── scripts/
│ ├── pull-meta-data.sh
│ ├── pull-google-data.sh
│ ├── push-changes.sh
│ └── utils/
└── config/
├── meta-credentials.json
└── google-credentials.json
The key insight: agents persist information by writing to files. When an agent wakes up, it reads its WORKING.md to remember what it was doing. When it finds something, it writes to a file. Mental notes don't survive restarts. Files do.
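What that looks like in practice: a WORKING.md is just a few lines of state the agent rewrites as it goes. An illustrative seed file, written here with a heredoc (the exact format is up to you):
# Example content only; each agent keeps its own version current
cat > agents/auditor/memory/WORKING.md <<'EOF'
# WORKING.md (Auditor)
Current task: audit campaigns from the 2026-02-08 Meta pull
Done: rules 1-6 on "Fitness DTC" (clean so far)
Next: rules 7-12, then "Broad Interest"
Blocked on: nothing
EOF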
Part 3: The 10 Agents
Agent 1: Commander
(Orchestrator)
Session key: agent:commander:main
Job: Receives tasks, delegates to the right agent, tracks progress.
# SOUL.md — Commander
You are the Commander. The orchestrator.
Your job:
- Receive tasks from the human operator
- Break them into subtasks
- Assign to the right agent
- Track progress across all agents
- Escalate blockers
You don't do the work yourself. You delegate.
You don't make campaign changes. You coordinate.
Check the activity feed every heartbeat. If an agent is stuck, reassign or escalate.
Agent 2: Auditor
(Campaign Auditor)
Session key: agent:auditor:main
Job: Runs a 12-rule check on campaigns. Finds wasted spend.
# SOUL.md — Auditor
You are the Auditor. You find waste.
On every heartbeat, check if there are campaigns to audit.
When auditing, run these 12 rules:
1. Broad match without negatives — flag
2. Budget pacing off by >20% — flag
3. Audience overlap >30% between ad sets — flag
4. No frequency cap on prospecting — flag
5. CPA >2x target for 3+ days — flag
6. Placements not excluded (Audience Network) — flag
7. Missing conversion tracking — critical flag
8. Creative-to-landing page mismatch — flag
9. No dayparting on campaigns with clear off-hours — flag
10. Device bid adjustments missing — flag
11. Ad sets with <50 conversions in learning — flag
12. Duplicate audiences across campaigns — flag
For each flag: campaign name, rule violated,
estimated waste per day, recommended fix.
Output: shared/audit-reports/YYYY-MM-DD-{campaign}.md
Tag @Guard for anything requiring budget changes.
Agent 3: Bidder
(Budget Manager)
Session key: agent:bidder:main
Job: Moves budget from underperformers to winners.
# SOUL.md — Bidder
You are the Bidder. You manage budget allocation.
Your rules:
- Never move budget without Guard approval
- Always show: source, destination, amount, reasoning
- Base decisions on 7-day rolling data minimum
- CPA below target for 3+ days = "winner"
- CPA above target for 3+ days = "loser"
- Maximum single shift: 20% of campaign daily budget
- Document every proposed shift
Format for proposals:
Campaign: [name]
Action: [increase/decrease/pause]
Amount: [$/day]
Reason: [data-backed]
Risk: [what could go wrong]
Agent 4: Scout
(Creative Analyst)
Session key: agent:scout:main
Job: Scores creatives. Catches fatigue before CTR crashes.
# SOUL.md — Scout
You are the Scout. You analyze creative performance.
Track per creative: CTR trend (7-day rolling), Frequency, ROAS, Hook type, CTA type, Days active.
Fatigue thresholds:
- Frequency >3.5 on prospecting = warning
- Frequency >4.5 = critical
- CTR declined >15% from peak = warning
- CTR declined >30% from peak = kill recommendation
When you find a fatigued creative:
1. Document in shared/creative-reports/
2. Tag @Guard with kill recommendation
3. Note which creative should replace it
Weekly: produce a creative winner report.
Data only. No opinions without data.
Agent 5: Tracker
(Anomaly Detector)
Session key: agent:tracker:main
Job: Catches CPL spikes, CTR drops, spend pacing issues.
# SOUL.md — Tracker
You are the Tracker. You catch problems early.
On every heartbeat, check for:
- CPA spike >20% vs 7-day average — alert
- CTR drop >15% vs 7-day average — alert
- Spend pacing >15% off daily budget — alert
- Conversion volume drop >25% — critical alert
- $200+/day with zero conversions in 6+ hours — critical
For each alert:
- Identify probable cause
- Tag the relevant agent
- Write to shared/alerts/
Priority levels:
- Critical: action within 1 hour
- Warning: action within 24 hours
- Watch: monitor, no action yet
Agent 6: Keyword
(Search Term Miner)
Session key: agent:keyword:main
Job: Mines search terms. Adds negatives. Suggests new targets.
# SOUL.md — Keyword
You are the Keyword agent. Google Ads search terms only.
Daily tasks:
- Pull search term report for last 7 days
- Flag irrelevant terms with >$10 spend
- Flag terms with CTR <0.5% and >100 impressions
- Group negative candidates by theme
- Identify high-intent terms not currently targeted
Never add negatives without approval.
Never add new keywords without approval.
Agent 7: Reporter
(Cross-Platform Reporting)
Session key: agent:reporter:main
Job: Pulls Google + Meta data into one view.
# SOUL.md — Reporter
You are the Reporter. You consolidate data.
Daily report (generated at 8am):
- Total spend: Google vs Meta
- Blended ROAS
- CPA by platform
- Top 5 campaigns by ROAS
- Bottom 5 campaigns by CPA
- Spend pacing vs monthly budget
- Week-over-week trends
Format: markdown in shared/reports/
No analysis. Just clean data. Let the human draw conclusions.
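For the mechanical parts of the report, the Reporter can lean on its shell tool instead of doing arithmetic in-context. A hedged sketch for the Meta side, assuming the file written by pull-meta-data.sh has the Graph API's usual shape (a data array with spend as a string); the Google side depends on how your gaql client formats its output:
#!/bin/bash
# scripts/utils/meta-spend-total.sh (hypothetical helper for the Reporter)
FILE="shared/campaign-data/meta-campaigns-$(date +%Y-%m-%d).json"
# Total campaign-level spend across the last-7-days pull
jq -r '[.data[].spend | tonumber] | add' "$FILE"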
Agent 8: Launcher
(Campaign Builder)
Session key: agent:launcher:main
Job: Builds campaigns from a brief.
# SOUL.md — Launcher
You are the Launcher. You build campaigns.
When given a brief, create:
- Campaign structure (CBO or ABO, with reasoning)
- Ad set configuration (audiences, placements, budget)
- Naming convention (platform_objective_audience_creative_date)
- UTM parameters
- Bid strategy recommendation
Never launch without approval.
Never deviate from naming conventions.
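To keep names and UTMs consistent across launches, the Launcher can call a small helper rather than composing strings freehand. A sketch under the convention above; the script name and the utm_medium value are assumptions you may want to change:
#!/bin/bash
# scripts/utils/name-campaign.sh (hypothetical helper for the Launcher)
# Usage: name-campaign.sh <platform> <objective> <audience> <creative>
PLATFORM=$1; OBJECTIVE=$2; AUDIENCE=$3; CREATIVE=$4
NAME="${PLATFORM}_${OBJECTIVE}_${AUDIENCE}_${CREATIVE}_$(date +%Y%m%d)"
echo "Campaign name: $NAME"
echo "UTMs: utm_source=${PLATFORM}&utm_medium=paid&utm_campaign=${NAME}"
For example, name-campaign.sh meta purchase lal-top5 ugc-testimonial-v1 produces meta_purchase_lal-top5_ugc-testimonial-v1_20260208.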
Agent 9: Watcher
(Competitor Intel)
Session key: agent:watcher:main
Job: Monitors competitor ads and auction insights.
# SOUL.md — Watcher
You are the Watcher. Competitive intelligence.
Weekly tasks:
- Pull Meta Ad Library for tracked competitors
- Log new creatives: hook type, CTA, format, run time
- Pull Google Auction Insights for key campaigns
- Track: impression share, overlap rate, position above rate
Flag when:
- New competitor enters top auctions
- Competitor overlap rate increases >10%
- Competitor launches format we haven't tested
- Competitor pauses a long-running campaign
Agent 10: Guard
(Approval Gate)
Session key: agent:guard:main
Job: Final approval before any change goes live. This is the most important agent.
# SOUL.md — Guard
You are the Guard. Nothing happens without your approval.
You review:
- Budget shifts (from Bidder)
- Campaign pauses/kills (from Auditor)
- Creative kills (from Scout)
- New negatives (from Keyword)
- Campaign launches (from Launcher)
Approval levels:
- Auto-approve: negatives under $50/mo, bid adjustments under 10%
- Queue for human: budget shifts >$500/day, campaign pauses, launches
- Block: anything that would increase spend >20% or pause a top performer
Every approval or rejection must include reasoning.
Part 4: Connecting to Ad Platforms
Pulling Data from Meta Ads
Create a script that agents can call to pull campaign data:
# scripts/pull-meta-data.sh
#!/bin/bash
ACCESS_TOKEN=$(cat config/meta-credentials.json | jq -r '.access_token')
AD_ACCOUNT_ID=$(cat config/meta-credentials.json | jq -r '.ad_account_id')
# Pull campaign-level data for last 7 days
curl -G "https://graph.facebook.com/v19.0/act_${AD_ACCOUNT_ID}/insights" \
-d "access_token=${ACCESS_TOKEN}" \
-d "level=campaign" \
-d "fields=campaign_name,spend,impressions,clicks,actions,cost_per_action_type,purchase_roas" \
-d "date_preset=last_7d" \
-d "limit=500" \
-o shared/campaign-data/meta-campaigns-$(date +%Y-%m-%d).json
# Pull ad-level data for creative analysis
curl -G "https://graph.facebook.com/v19.0/act_${AD_ACCOUNT_ID}/insights" \
-d "access_token=${ACCESS_TOKEN}" \
-d "level=ad" \
-d "fields=ad_name,creative{title,body,thumbnail_url},spend,impressions,clicks,actions,frequency" \
-d "date_preset=last_7d" \
-d "limit=500" \
-o shared/campaign-data/meta-ads-$(date +%Y-%m-%d).json
echo "Meta data pulled at $(date)"Pulling Data from Google Ads
# scripts/pull-google-data.sh
#!/bin/bash
# Campaign performance
gaql query \
--customer-id $(cat config/google-credentials.json | jq -r '.customer_id') \
--query "
SELECT
campaign.name,
campaign.status,
metrics.cost_micros,
metrics.conversions,
metrics.cost_per_conversion,
metrics.clicks,
metrics.impressions,
metrics.ctr
FROM campaign
WHERE segments.date DURING LAST_7_DAYS
AND campaign.status = 'ENABLED'
ORDER BY metrics.cost_micros DESC
" \
--output-file shared/campaign-data/google-campaigns-$(date +%Y-%m-%d).json
echo "Google data pulled at $(date)"Scheduling Data Pulls
Set up cron jobs to pull fresh data before agents wake up:
# Pull data every 15 minutes, 1 minute before agent heartbeats
clawbot cron add \
--name "pull-meta-data" \
--cron "59,14,29,44 * * * *" \
--session "isolated" \
--message "Run scripts/pull-meta-data.sh and confirm data is fresh"
clawbot cron add \
--name "pull-google-data" \
--cron "59,14,29,44 * * * *" \
--session "isolated" \
--message "Run scripts/pull-google-data.sh and confirm data is fresh"
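The "confirm data is fresh" part of those messages can be a deterministic check rather than a judgment call. A sketch of a hypothetical helper the pull session can run, treating anything older than 20 minutes as stale:
#!/bin/bash
# scripts/utils/check-freshness.sh (hypothetical helper)
FILE="shared/campaign-data/meta-campaigns-$(date +%Y-%m-%d).json"
# Fail if today's file is missing or was written more than 20 minutes ago
if [ -z "$(find "$FILE" -mmin -20 2>/dev/null)" ]; then
  echo "STALE: $FILE missing or older than 20 minutes" >> shared/alerts/data-freshness.md
  exit 1
fi
echo "Fresh: $FILE"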
Not a developer? Ryze AI connects to Google and Meta Ads directly. Same agents, managed for you.
Part 5: The Heartbeat System
Why Heartbeats
Always-on agents burn API credits doing nothing. Always-off agents miss problems. Heartbeats are the middle ground.
Each agent wakes up every 15 minutes. Checks for work. Acts or stands down.
Setting Up Heartbeats
Stagger them so agents don't all run at once:
# Commander at :00
clawbot cron add \
--name "commander-heartbeat" \
--cron "0,15,30,45 * * * *" \
--session "isolated" \
--message "You are the Commander. Read your SOUL.md. Check WORKING.md.
Check Mission Control for new tasks, @mentions, and agent status."
# Auditor at :02
clawbot cron add \
--name "auditor-heartbeat" \
--cron "2,17,32,47 * * * *" \
--session "isolated" \
--message "You are the Auditor. Read your SOUL.md. Check WORKING.md.
Check shared/campaign-data/ for fresh data. Run audits."
# Bidder at :04
clawbot cron add \
--name "bidder-heartbeat" \
--cron "4,19,34,49 * * * *" \
--session "isolated" \
--message "You are the Bidder. Read your SOUL.md. Check WORKING.md.
Review shared/audit-reports/ for new findings."
# Scout at :06
clawbot cron add \
--name "scout-heartbeat" \
--cron "6,21,36,51 * * * *" \
--session "isolated" \
--message "You are the Scout. Analyze creative performance.
Check for fatigue signals."
# Tracker at :08
clawbot cron add \
--name "tracker-heartbeat" \
--cron "8,23,38,53 * * * *" \
--session "isolated" \
--message "You are the Tracker. Check shared/campaign-data/ for anomalies.
Compare against 7-day baselines."
# Guard at :01 (runs right after Commander)
clawbot cron add \
--name "guard-heartbeat" \
--cron "1,16,31,46 * * * *" \
--session "isolated" \
--message "You are the Guard. Check for pending approvals.
Process each one."What Happens During a Heartbeat
At :08, the Tracker wakes up
→ Reads SOUL.md (who am I)
→ Reads memory/WORKING.md (what was I doing)
→ Reads shared/campaign-data/ (fresh data)
→ Compares CPA/CTR/spend against 7-day baselines
→ Finds CPL spike on 3 campaigns
→ Writes alert to shared/alerts/
→ Tags @Scout and @Bidder in Mission Control
→ Updates WORKING.md
→ Goes back to sleep
On its next heartbeat, the Scout wakes up, sees the tag, analyzes the creatives on those campaigns, and confirms fatigue. It tags @Guard with a kill recommendation. The Guard wakes up, reviews, and queues it for human approval.
The whole chain happens without you touching anything. You just see the final "approve?" notification.
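The crons above cover six of the ten agents. The other four follow the same pattern; a sketch that continues the stagger and slows down the agents that don't benefit from 15-minute checks (message wording is illustrative, not canonical):
# Keyword at :10, Reporter at :12
clawbot cron add \
--name "keyword-heartbeat" \
--cron "10,25,40,55 * * * *" \
--session "isolated" \
--message "You are the Keyword agent. Read your SOUL.md. Check WORKING.md.
Mine search terms and propose negatives."
clawbot cron add \
--name "reporter-heartbeat" \
--cron "12,27,42,57 * * * *" \
--session "isolated" \
--message "You are the Reporter. Read your SOUL.md. Check WORKING.md.
If the 8am report hasn't been generated yet, generate it."
# Launcher hourly, Watcher weekly: their work doesn't change every 15 minutes
clawbot cron add \
--name "launcher-heartbeat" \
--cron "13 * * * *" \
--session "isolated" \
--message "You are the Launcher. Read your SOUL.md. Check shared/briefs/ for new briefs."
clawbot cron add \
--name "watcher-heartbeat" \
--cron "30 9 * * 1" \
--session "isolated" \
--message "You are the Watcher. Read your SOUL.md. Run the weekly competitive sweep."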
Part 6: The Shared Brain (Mission Control)
Why You Need One
Ten agents writing to separate files creates chaos. You need a shared space where everyone sees the same tasks, comments, and decisions.
Option 1: Convex (Recommended)
Real-time database. Free tier is enough. TypeScript-native. The schema:
// agents table
agents: {
name: string, // "Auditor"
role: string, // "Campaign Auditor"
status: "idle" | "active" | "blocked",
sessionKey: string, // "agent:auditor:main"
}
// tasks table
tasks: {
title: string,
description: string,
status: "inbox" | "processing" | "review" | "deployed",
assigneeIds: Id<"agents">[],
priority: "critical" | "high" | "normal" | "low",
}
// approvals table
approvals: {
type: "budget_shift" | "campaign_pause" | "creative_kill" | "negative_add" | "campaign_launch",
proposedBy: Id<"agents">,
data: string, // JSON with the proposed change
status: "pending" | "approved" | "rejected",
reviewedBy: string, // "human" or agent name
reasoning: string,
}
Option 2: Simple File System
If you don't want a database, use shared files:
shared/
├── tasks/
│   ├── inbox/
│   ├── processing/
│   ├── review/
│   └── deployed/
├── approvals/
│   ├── pending/
│   └── processed/
├── alerts/
├── reports/
└── activity-log.md ← append-only log
Agents move files between folders to change status. Less elegant but works.
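For example, when the Guard processes an approval in the file-based setup, the whole state change is a move plus a log line through its shell tool (file names here are illustrative):
# Approve a pending budget shift by moving the file and logging the decision
mv shared/approvals/pending/2026-02-08-bidder-shift-lal.md \
   shared/approvals/processed/
echo "$(date -u +%FT%TZ) Guard approved bidder-shift-lal (within the 20% limit)" \
  >> shared/activity-log.md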
Part 7: The Approval Workflow
This is the most important part. Without it, you have AI making changes to live ad spend. Bad idea.
Three Tiers
Tier 1: Auto-approve (Guard handles)
- Adding negatives with <$50/month impact
- Bid adjustments under 10%
- Report generation
- Data pulls
Tier 2: Queue for human (Guard holds, you review)
- Budget shifts >$500/day
- Campaign pauses
- Creative kills
- Campaign launches
- Any change to a top performer
Tier 3: Block (Guard rejects)
- Spend increases >20% in a single move
- Pausing a campaign with ROAS >3x target
- Any structural change without Auditor review first
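The Guard is still an LLM session, but the tier boundaries are deterministic, so it helps to give it a script to run the numbers before it reasons about edge cases. A sketch of a hypothetical classifier for budget shifts; extend it for the other change types and the top-performer rule:
#!/bin/bash
# scripts/utils/classify-budget-shift.sh (hypothetical helper for the Guard)
# Usage: classify-budget-shift.sh <shift_usd_per_day> <spend_increase_pct>  (whole numbers)
SHIFT=$1
INCREASE_PCT=$2
if [ "$INCREASE_PCT" -gt 20 ]; then
  echo "Tier 3: block (spend increase >20% in a single move)"
elif [ "$SHIFT" -gt 500 ]; then
  echo "Tier 2: queue for human (budget shift >\$500/day)"
else
  echo "No automatic tier: Guard reviews and decides"
fi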
How It Works in Practice
Auditor finds $2,300/day wasted on broad match
→ Creates task in Mission Control
→ Tags @Bidder and @Guard

Bidder reviews, proposes shifting $1,200/day to LAL
→ Creates approval request
→ Guard picks it up on next heartbeat

Guard checks:
✓ Data supports it (CPA $52 vs $30 target, 7+ days)
✓ Destination has headroom (LAL CPA $18, well below target)
✓ Amount within limits (<20% of total budget)
→ Tier 2: queues for human

You get a notification: "Kill Broad Interest ($2,892/wk, 1.4x ROAS) → shift $1,200/day to LAL Top 5%?" [Approve] [Reject]

You approve. Bidder executes via API. Logged in shared/approval-log/.
Part 8: The Daily Standup
Every day at 11:30 PM, a cron generates a summary of all agent activity and sends it to your Telegram/Slack/email.
clawbot cron add \
--name "daily-standup" \
--cron "30 23 * * *" \
--session "isolated" \
--message "Generate the daily standup. Read all files in
shared/alerts/, shared/audit-reports/, shared/budget-proposals/,
shared/creative-reports/, shared/approval-log/ from today.
Compile into a summary and send to Telegram."
Example Output
DAILY STANDUP — Feb 8, 2026

ALERTS
- Tracker: CPL spike 34% on 3 lead gen campaigns (fixed)
- Tracker: Spend pacing off on PMax by 18% (monitoring)

AUDITS
- Auditor: Broad Interest flagged — $2,300/day waste
- Auditor: Fitness DTC clean — 0 critical issues

BUDGET MOVES
- Bidder: +$480/day to LAL Top 5% (approved)
- Bidder: Kill Broad Interest pending approval

CREATIVE
- Scout: 2 creatives past fatigue threshold
- Scout: UGC outperforming static 2.4x this month

APPROVALS PENDING
- Kill "Broad Interest" campaign
- Launch "UGC Testimonial v1" as replacement

NEGATIVES ADDED: 14 (est. $340/mo savings)
Part 9: Setup Checklist
Step 1: Install ClawBot
npm install -g clawbot
clawbot init
Add your API key (Anthropic, OpenAI, etc.) to the config.
Step 2: Set Up the Workspace
mkdir -p /home/ops/clawbot/{agents,shared,scripts,config}
mkdir -p /home/ops/clawbot/shared/{campaign-data,audit-reports,budget-proposals,keyword-proposals,creative-reports,competitive-intel,alerts,reports,approval-log,briefs,campaign-builds,templates}
# Create agent folders
for agent in commander auditor bidder scout tracker keyword reporter launcher watcher guard; do
mkdir -p /home/ops/clawbot/agents/$agent/memory
touch /home/ops/clawbot/agents/$agent/SOUL.md
touch /home/ops/clawbot/agents/$agent/memory/WORKING.md
done
Step 3: Write the SOUL Files
Copy the SOUL.md content from Part 3 into each agent's folder. Customize the rules and thresholds for your accounts.
Step 4: Set Up API Connections
Add your Meta and Google credentials to config/. Set up the data pull scripts from Part 4. Test them manually first.
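The pull scripts in Part 4 read only three fields from these files, so they can stay minimal. An example shape with placeholder values (depending on your gaql client, Google may also need a developer token and OAuth refresh token configured separately):
cat > config/meta-credentials.json <<'EOF'
{
  "access_token": "EAAB...your-long-lived-token",
  "ad_account_id": "1234567890"
}
EOF
cat > config/google-credentials.json <<'EOF'
{
  "customer_id": "1234567890"
}
EOF
# Run the pulls by hand before wiring up cron
bash scripts/pull-meta-data.sh && bash scripts/pull-google-data.sh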
Step 5: Start with 3 Agents
Don't start with 10. Start with:
1. Tracker — catches problems
2. Auditor — finds waste
3. Guard — approves changes
These three give you 80% of the value. Add the rest once these are solid.
Step 6: Add Heartbeats
Set up cron jobs from Part 5. Start with 30-minute intervals. Move to 15 minutes once you trust the system.
Step 7: Set Up Mission Control
Pick Convex or file system from Part 6. Set up the approval workflow from Part 7. Set up the daily standup from Part 8.
Step 8: Scale
Once the core 3 are working, add agents one at a time:
- Bidder (budget optimization)
- Scout (creative analysis)
- Keyword (search term mining)
- Reporter (consolidated reporting)
- Launcher (campaign building)
- Watcher (competitive intel)
- Commander (orchestration)
Not a developer? Ryze AI connects to Google and Meta Ads directly. Same agents, managed for you.
Part 10: What to Expect
Week 1
Agents will make mistakes. SOUL files need tuning. Thresholds need adjusting. Approval workflows need tightening. This is normal.
Week 2-3
Agents start catching real issues. You'll find wasted spend you didn't know about. Creative fatigue you would have missed for days. Search terms you should have blocked months ago.
Month 2+
The compound effect kicks in. Every 15 minutes, your campaigns are being watched. Budget moves happen same-day. Creative gets rotated before CTR tanks. You focus on strategy. Agents handle operations.
Cost
Running 10 agents on Claude Sonnet with 15-minute heartbeats costs roughly $25/day. Most heartbeats are "nothing to do" and cost almost nothing. The expensive ones are audits and creative analysis — maybe 50-100 meaningful actions per day at ~$0.15 each.
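As a rough sanity check: 10 agents on 15-minute heartbeats is about 960 wake-ups a day. If 50-100 of those do real work at ~$0.15 each, that's $7-15 in heavy calls, and the hundreds of cheap "nothing to do" checks plus data pulls make up the rest.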
Part 11: Lessons from Running This
The approval gate is non-negotiable.
Experienced media buyers don't want AI making decisions. They want AI surfacing decisions for them to make.
Start with detection, not action.
Tracker and Auditor should run for a week before you let Bidder touch anything. Build trust in the system first.
Thresholds are everything.
A CPA spike alert that fires every day is noise. One that fires when CPA exceeds 2x target for 3+ days is useful. Tune aggressively.
Let agents specialize.
The Auditor doesn't need to know about creative fatigue. The Scout doesn't need to know about search terms. Narrow focus produces better output.
File memory beats mental notes.
If an agent discovers something, it needs to write it to a file. Not just respond with "noted." Files survive restarts. Acknowledgments don't.
Stagger everything.
Data pulls → 1 minute later → agent heartbeats. Agents staggered 2 minutes apart. This prevents race conditions and ensures fresh data.
Read the daily standup.
It takes 2 minutes. It catches everything you'd otherwise miss. If agents aren't showing progress in standups, something is broken.