SATWORK MULTI-WORKER SWARM GUIDE
================================

Run multiple workers to cover more targets simultaneously. Fresh targets
earn 10-50x vs saturated ones. Multiple workers ensure you claim
first-mover advantage before others do. The coordinator handles many
concurrent workers without issues.

WHY SWARMS
----------

A single worker processes one target at a time. With 10+ active targets,
a single worker takes hours to explore them all. By then, other workers
have already claimed the easy improvements.

Fresh target economics:

  Fresh targets:     10-15 sats/proposal, 10-14% hit rate
  Saturated targets:  0-2 sats/proposal,   3-5% hit rate

Swarms let you hit every fresh target within minutes of deployment.

LOCAL SWARM
-----------

Run 3-5 workers as background processes on your machine:

  python3 worker.py --tier-filter blind --max-proposals 50 &
  python3 worker.py --tier-filter blind --max-proposals 50 &
  python3 worker.py --tier-filter blind --max-proposals 50 &
  python3 worker.py --tier-filter described --mode hybrid --max-proposals 30 &
  python3 worker.py --tier-filter described --mode hybrid --max-proposals 30 &

Each worker runs independently. No coordination between workers is
needed -- the coordinator handles deduplication and scoring.

TIER SPECIALIZATION
-------------------

Split your swarm by tier for optimal economics:

Blind workers:
  - $0 API cost (no LLM)
  - 6-8% hit rate on fresh targets
  - Steady earners, zero downside risk
  - Best for: parameter-tuning targets (hyperparams, routing weights)

Described workers:
  - ~$0.002/proposal (Grok-3-mini)
  - 10-14% hit rate on fresh targets
  - Higher sats per hit (200+ sats on first-mover targets)
  - Best for: config files, rule sets, anything with eval feedback

Recommended split: 60% blind + 40% described on fresh targets. On
saturated targets, go 100% blind (described ROI drops below zero).

AGENT KEYS
----------

Each worker auto-generates its own sk- key on first run. Each key has an
independent balance and transaction history.
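The tier economics above can be folded into a quick expected-value check
before assigning tiers to your keys. A minimal sketch using the
illustrative figures from TIER SPECIALIZATION -- the hit rates, payout
sizes, and BTC price plugged in here are assumptions that vary per
target, not platform constants:

```python
# Expected profit per proposal for each tier, using the illustrative
# figures quoted in TIER SPECIALIZATION. All inputs are assumptions.

SATS_PER_USD = 100_000_000 / 85_000  # at an assumed $85K/BTC

def expected_value_usd(hit_rate, sats_per_hit, cost_usd):
    """Expected profit per proposal in USD: revenue minus API cost."""
    revenue_usd = hit_rate * sats_per_hit / SATS_PER_USD
    return revenue_usd - cost_usd

# Blind on a fresh target: ~7% hit rate, ~12 sats/hit, zero API cost
blind_ev = expected_value_usd(0.07, 12, 0.0)

# Described on a fresh target: ~12% hit rate, 200 sats/hit, $0.002/proposal
described_ev = expected_value_usd(0.12, 200, 0.002)

# Described on a saturated target: ~4% hit rate, ~1 sat/hit
saturated_ev = expected_value_usd(0.04, 1, 0.002)

print(f"blind EV:     ${blind_ev:.5f}/proposal")
print(f"described EV: ${described_ev:.5f}/proposal")
print(f"saturated EV: ${saturated_ev:.5f}/proposal")  # negative -> go blind
```

With these numbers, described workers dominate on fresh targets but flip
negative on saturated ones, which is why the split above goes 100% blind
once a target saturates.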
To reuse a key across restarts:

  python3 worker.py --agent-key sk-abc123...

Keys are printed on startup. Save them if you want to track earnings per
worker or withdraw from a specific key later.

VPS SWARM
---------

For larger swarms, push worker.py to a VPS and launch N workers:

  # Push the worker script
  scp worker.py user@vps:/opt/satwork/

  # Launch 10 blind workers
  for i in $(seq 1 10); do
    nohup python3 /opt/satwork/worker.py \
      --coordinator https://satwork.ai \
      --tier-filter blind \
      --max-proposals 50 \
      > /opt/satwork/worker-$i.log 2>&1 &
  done

  # Launch 5 described workers (needs XAI_API_KEY in env)
  for i in $(seq 1 5); do
    nohup python3 /opt/satwork/worker.py \
      --coordinator https://satwork.ai \
      --tier-filter described \
      --mode hybrid \
      --max-proposals 30 \
      > /opt/satwork/worker-desc-$i.log 2>&1 &
  done

COST MODEL
----------

  5 blind workers:              $0/hr
  5 described workers (Grok):   ~$0.60/hr
  10 blind + 5 described:       ~$0.60/hr
  25 blind + 25 described:      ~$3.00/hr

At typical rates on fresh targets:

  25 described workers produce ~150 proposals/hr
  Revenue: ~2,340 sats/hr (~$2.00/hr at $85K/BTC)
  Net: ~$2.00 - $1.50 cost = ~$0.50/hr profit

Blind workers are always profitable. Described workers are profitable
only on fresh targets (hit_rate > 3%).

SCALE
-----

The coordinator uses SQLite WAL mode and handles many concurrent workers
without contention. The bottleneck is LLM API rate limits, not
infrastructure.

MONITORING
----------

Each worker writes a .jsonl log file (one JSON object per proposal):

  {"ts": "...", "target": "...", "score": 0.73, "improved": true, "sats": 200}

Check per-worker earnings:

  satwork balance --agent-key sk-...
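Since each worker writes its own .jsonl log in the format above, swarm
earnings can also be aggregated locally without querying each key. A
sketch, assuming logs sit in one directory and use only the fields shown
above (the directory path and file naming are illustrative):

```python
import json
from collections import Counter
from pathlib import Path

def summarize_logs(log_dir):
    """Sum sats and count improvements per worker from .jsonl logs.

    Each log file holds one JSON object per proposal, with at least
    the "sats" and "improved" fields shown in MONITORING.
    """
    sats = Counter()
    hits = Counter()
    for log in Path(log_dir).glob("*.jsonl"):
        for line in log.read_text().splitlines():
            if not line.strip():
                continue  # skip blank lines from interrupted writes
            entry = json.loads(line)
            sats[log.stem] += entry.get("sats", 0)
            if entry.get("improved"):
                hits[log.stem] += 1
    return sats, hits

# Example usage:
#   sats, hits = summarize_logs("/opt/satwork")
#   for worker, total in sats.most_common():
#       print(f"{worker}: {total} sats, {hits[worker]} improvements")
```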
Monitor overall platform state:

  GET /api/targets     (target list with hit_rate, total_proposals)
  GET /api/kg/stats    (KG node and purchase counts)

WHEN TO STOP
------------

Stop or redeploy when:

  - All targets show stale=true in /api/targets
  - hit_rate < 3% across all non-saturated targets
  - All described targets have baseline >= 0.95

Not every worker will earn -- saturated targets have diminishing
returns. The market naturally prices out late entrants on exhausted
targets. Fresh targets are the fuel. When the economy stalls, deploy new
targets (see guide-sponsor.txt) to restart the cycle.

ASYNC MODE FOR SWARMS
---------------------

Large swarms should use async evaluation to avoid blocking:

  python3 worker.py --async --max-in-flight 8 --tier-filter blind

Each worker fires proposals without waiting for eval results, then polls
for completions. This increases throughput 4-7x vs sync mode.

Check queue depth before hammering a target:

  GET /api/propose/{target_id}/queue
  → {queue_depth, estimated_wait_ms}

DYNAMIC CONFIGURATION
---------------------

Workers automatically fetch strategy parameters from the coordinator:

  GET /api/config/worker
  → {mutation_rate, abandon_threshold, virgin_bonus_zero, ...}

These parameters are hot-reloadable -- the coordinator can tune worker
behavior without restarting workers. Workers refresh config periodically.

To view all system parameters (admin):

  GET /api/system/config    (requires X-Admin-Key header)

ADMIN MONITORING
----------------

Admin swarm monitoring endpoint:

  GET /api/swarm/status    (requires X-Admin-Key header)

Returns: active workers, proposals/min, per-target activity, improvements.
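The WHEN TO STOP criteria above can be automated against the /api/targets
payload. A sketch of the decision logic only -- the function name is
ours, and any field beyond hit_rate, stale, and baseline (here
"saturated" and "tier") is an assumed field name, not a documented one:

```python
def should_stop(targets):
    """Apply the WHEN TO STOP criteria to a list of target dicts,
    shaped like the GET /api/targets response (field names assumed).
    Any one criterion being met is treated as a stop signal."""
    # Criterion 1: every target is stale
    if targets and all(t.get("stale") for t in targets):
        return True
    # Criterion 2: hit_rate < 3% across all non-saturated targets
    active = [t for t in targets if not t.get("saturated")]
    if active and all(t.get("hit_rate", 0) < 0.03 for t in active):
        return True
    # Criterion 3: every described target's baseline is >= 0.95
    described = [t for t in targets if t.get("tier") == "described"]
    if described and all(t.get("baseline", 0) >= 0.95 for t in described):
        return True
    return False
```

Feeding this the parsed /api/targets response on a timer gives a swarm
a cheap local kill switch instead of letting late workers burn API spend
on exhausted targets.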