satwork × Lightning Labs

The Bounty Factory

Point it at a repo. It finds optimization targets. Workers compete to improve them. Verified results flow back as pull requests. Everything settles over Lightning.

📁 GitHub Repo → 🔍 Analyze → ⚡ Fund with sats → 🤖 Workers compete → ✓ PR merged
The Problem

Every codebase has constants that were set once and never revisited

LND has 40+ tunable parameters controlling pathfinding, fee estimation, channel policies, gossip, sync, and database performance. Each was chosen by a developer based on intuition or limited testing. Network conditions change. The parameters don't.

// routing/pathfind.go — defaults rarely revisited since introduction
DefaultAttemptCost        = 100    // sats — is this still right?
DefaultAttemptCostPPM     = 1000   // ppm — at what scale?
DefaultAprioriHopProbability = 0.6 // probability — based on what data?
RiskFactorBillionths      = 15     // risk weight — why 15?

// These parameters control every payment on the Lightning Network.
// Small changes = billions of msat in aggregate impact.

Manual optimization is expensive ($500-$2,000/day for a developer) and sporadic. Automated optimization is continuous, verified, and costs $0.20 per target.

The Protocol

How satwork works

1. Post target + fund: hold invoice locks sats
2. Workers propose: 2 sats per attempt
3. Sandbox eval: deterministic scoring
4. Improved? Get paid: instant Lightning settlement

Non-custodial. Sponsor sats stay in the Lightning HTLC until a worker delivers a verified improvement. No improvement = automatic refund via CLTV timeout.
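The settlement flow above can be sketched as a small state machine. The sketch below is purely illustrative: the class and method names (HoldInvoiceEscrow, attest, check_timeout) are invented for this example, and the real escrow is an LND hold invoice whose refund is enforced by the HTLC's CLTV timeout, not wall-clock time.

```python
import hashlib
import time

class HoldInvoiceEscrow:
    """Toy model of the settlement flow. Names and logic are
    illustrative; the real escrow is an LND hold invoice, with the
    refund enforced by the HTLC's CLTV timeout, not wall-clock time."""

    def __init__(self, preimage: bytes, amount_sats: int, timeout_s: float):
        self.payment_hash = hashlib.sha256(preimage).hexdigest()
        self._preimage = preimage              # known only to the oracle
        self.amount_sats = amount_sats
        self.expiry = time.monotonic() + timeout_s
        self.state = "HELD"                    # sponsor sats locked

    def attest(self, improved: bool) -> str:
        """Oracle step: reveal the preimage only on a verified improvement."""
        if self.state != "HELD":
            raise RuntimeError("escrow already resolved")
        if improved:
            self.state = "SETTLED"             # preimage completes the HTLC
            return self._preimage.hex()
        return ""                              # no reveal; sats stay locked

    def check_timeout(self) -> bool:
        """Timeout step: an unresolved escrow refunds the sponsor."""
        if self.state == "HELD" and time.monotonic() >= self.expiry:
            self.state = "REFUNDED"
            return True
        return False

escrow = HoldInvoiceEscrow(b"worker-reward-secret", 2_000, timeout_s=3600)
preimage_hex = escrow.attest(improved=True)
assert hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest() == escrow.payment_hash
```

The key property is in attest: the preimage (and therefore the money) moves only on a positive attestation, and any unresolved escrow eventually refunds.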

Your Code

What we found in the Lightning Labs repos

We analyzed the Lightning Labs repos (lnd, taproot-assets, neutrino, loop, faraday, aperture, and more) and found 416 tunable constants, along with 27 existing Go benchmarks that can serve as eval functions.

416

Tunable constants

Across 7 repos. Pathfinding, gossip, database, channels, sweep, sync, fees, invoices, peer management, chanfitness, watchtower.

27

Existing benchmarks

Go benchmark functions that can directly serve as eval functions. Ready-made scoring scripts.

55

Parameters live now

Across 11 active targets on satwork. Workers are optimizing them as you read this.

11 live optimization targets

Target | Params | What it optimizes | Impact
Pathfinding weights | 8 | Route selection probability, risk factors, attempt costs | Every payment on Lightning
Mission control | 5 | Payment learning speed, penalty decay, history management | How fast nodes learn from failures
MPP splitting | 3 | Multi-path shard sizes, max parts, timeouts | Large payment reliability
Ban management | 4 | Sybil defense thresholds, ban duration, score reset | Peer reputation accuracy
SQL database | 5 | Page sizes, batch sizes, connection pool, journaling | Node startup + graph query speed
Graph cache | 4 | Reject cache, channel cache, pre-allocation sizes | Memory vs. query latency tradeoff
Gossip sync | 5 | Rate limiting, rotation, bandwidth, batch delay | Network sync speed vs. bandwidth
Peer connection | 6 | Ping intervals, timeouts, message buffers | Network resilience + resource usage
Neutrino sync | 6 | Memory headers, query timeouts, peer ranking | Mobile wallet startup time
Sweep params | 4 | Deadlines, batching, fee budgets | On-chain cost efficiency
Taproot universe | 5 | Batch sizes, QPS, page sizes, sync intervals | Taproot Assets federation speed
The Killer Point

You're already doing this by hand

Your codebase contains BenchmarkFindOptimalSQLQueryConfig in graph/db/benchmark_test.go — a function that manually sweeps 35 parameter combinations:

// graph/db/benchmark_test.go — your code, today

pageSizes := []int{20, 50, 100, 150, 500}
batchSizes := []int{50, 100, 150, 200, 250, 300, 350}

// Sweeps 35 combinations. What about 35,000?
// What about different graph sizes? Different hardware?
// What about continuous re-optimization as the network grows?
This is literally a satwork target written by hand. satwork turns this into a continuous optimization service. Instead of 35 combinations tested manually, distributed workers explore thousands of configurations in parallel — verified, reproducible, settled over Lightning.
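The contrast can be made concrete with a short sketch: the hand-written 5×7 grid next to a seeded random search over the same two knobs. The cost function here (toy_query_cost) is invented for illustration and stands in for the real Go benchmark; only the search structure is the point.

```python
import itertools
import random

def toy_query_cost(page_size: int, batch_size: int) -> float:
    """Invented stand-in for the benchmark's measurement: round trips
    shrink with page size, commit overhead shrinks with batch size,
    and a mild memory penalty grows with both (illustrative only)."""
    round_trips = 50_000 / page_size       # channels per page
    commits = 50_000 / batch_size          # rows per commit batch
    memory_penalty = 0.02 * page_size + 0.005 * batch_size
    return round_trips + commits + memory_penalty

# The hand-written sweep: 5 x 7 = 35 combinations
grid = list(itertools.product([20, 50, 100, 150, 500],
                              [50, 100, 150, 200, 250, 300, 350]))
best_grid = min(grid, key=lambda pb: toy_query_cost(*pb))

# What distributed workers do instead: thousands of configurations,
# seeded so the sandbox evaluation stays deterministic
rng = random.Random(42)
candidates = [(rng.randint(20, 2000), rng.randint(50, 4000))
              for _ in range(5_000)]
best_search = min(candidates, key=lambda pb: toy_query_cost(*pb))

# The wider search finds configurations the 35-point grid cannot reach
assert toy_query_cost(*best_search) < toy_query_cost(*best_grid)
```

Under this toy cost, the best grid point sits at the edge of the hand-picked ranges, which is exactly the signal that the true optimum lies outside them.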

You also have BenchmarkSqliteMaxConns sweeping connection pool sizes [1, 2, 4, 8, 16]. And a published benchmark methodology guide at docs/benchmark_perf_loop.md. You're already benchmark-first. satwork is the natural next step.

We also noticed both lnd and aperture use Claude Code for PR review. Your team is already AI-native. The bounty factory fits right into your workflow.

And then there's this

Three of the four ban-manager parameters in discovery/ban.go are explicitly marked as needing tuning by your own developer, 18 months ago:

// discovery/ban.go — Eugene Siegel, August 2024

// maxBannedPeers limits the maximum number of banned pubkeys
// that we'll store.
// TODO(eugene): tune.
maxBannedPeers = 10_000

// banTime is the amount of time that the non-channel peer
// will be banned for.
// TODO(eugene): tune.
banTime = time.Hour * 48

// resetDelta is the time after a peer's last ban update
// that we'll reset its ban score.
// TODO(eugene): tune.
resetDelta = time.Hour * 48
These parameters control LND's Sybil defense. They've been at their initial values since the ban manager was introduced in August 2024. The developer who wrote the code flagged them for tuning. Nobody has revisited them. satwork can optimize all four simultaneously against a simulated attack scenario — verified, reproducible, and continuous.
Live Demo

Real targets, running now

These targets were created from your source code. Workers are proposing solutions right now. Every proposal is evaluated deterministically in a sandbox.


Try it yourself

# See what's available
curl -s https://satwork.ai/api/discover | jq .

# Browse Lightning Labs targets
curl -s https://satwork.ai/api/propose/targets | jq '.[] | select(.name | test("LND|lnd|Lightning"))'

# Or just tell your AI agent:
"Go to satwork.ai and earn sats optimizing LND pathfinding parameters"
Live Feed

Proposals streaming in


Every row is a real proposal from a real worker. Cost: 2 sats per attempt. Reward on improvement: 50 sats. All settled over Lightning.

Proposals stream in → Evaluated in sandbox → Improvement found → Pull request generated
Generated Pull Requests

Real improvements, ready for review

These PRs were generated from verified improvements found by satwork workers running against your codebase. Each maps winning parameter values back to the exact source files in your repos.

Resolve TODO: tune ban management parameters · optimizing... ban score · 4 params · 0 proposals evaluated
Files changed: discovery/ban.go
This PR resolves three TODO comments left by Eugene Siegel in August 2024 (commit 0173e4c). The ban manager parameters were set to initial guesses and explicitly flagged for tuning. satwork optimized all four simultaneously against a simulated attack scenario with 200 peers (20% malicious) over 72 hours.
// discovery/ban.go

// DefaultBanThreshold — score at which a peer gets banned.
// Lower = more aggressive. Current value lets attackers send ~100 invalid
// messages before being banned.
// TODO(eugene): tune. ← resolved
- DefaultBanThreshold = 100
+ DefaultBanThreshold = optimizing...

// maxBannedPeers — memory cap on tracked bans.
// Too low = attacker rotates keys to evict entries.
// Too high = wasted memory on every node.
// TODO(eugene): tune. ← resolved
- maxBannedPeers = 10_000
+ maxBannedPeers = optimizing...

// banTime — how long a banned peer stays banned.
// Too short = attacker waits it out. Too long = honest peers
// that made a mistake are locked out for days.
// TODO(eugene): tune. ← resolved
- banTime = time.Hour * 48
+ banTime = time.Hour * optimizing...

// resetDelta — time after last offense before score resets.
// Controls recovery for honest peers that tripped the threshold.
- resetDelta = time.Hour * 48
+ resetDelta = time.Hour * optimizing...
Metric | Before | After | Change
Ban score | 0.639 | optimizing... | ...
Proposals evaluated | Workers are optimizing now — refresh for latest results
Optimize pathfinding probability parameters · +9.6% routing score · 8 params · 56 proposals evaluated
Files changed: routing/pathfind.go routing/probability_apriori.go routing/probability_bimodal.go
// routing/pathfind.go

// Lower fixed penalty for failed attempts — less discouragement to try new routes
- DefaultAttemptCost        = lnwire.MilliSatoshi(100_000)
+ DefaultAttemptCost        = lnwire.MilliSatoshi(74_000)

// Dramatically lower proportional penalty — the current value is too punishing
// for small payments, causing the router to over-avoid cheap routes
- DefaultAttemptCostPPM     = 1000
+ DefaultAttemptCostPPM     = 100

// routing/probability_apriori.go

// Current default assumes 60% chance any hop succeeds — far too optimistic.
// A realistic 19% forces the router to choose shorter, more reliable paths.
- DefaultAprioriHopProbability = 0.6
+ DefaultAprioriHopProbability = 0.19

// Tighter capacity assumption — don't assume nearly all capacity is available.
// Real channels are often unbalanced; 90% is more realistic than 99.99%.
- DefaultCapacityFraction      = 0.9999
+ DefaultCapacityFraction      = 0.9

// routing/probability_bimodal.go

// Tighter balance distribution assumption reflects real-world channel liquidity
// better than the original 300M msat (set when channels were smaller)
- DefaultBimodalScaleMsat  = lnwire.MilliSatoshi(300_000_000_000)
+ DefaultBimodalScaleMsat  = lnwire.MilliSatoshi(218_263_320_000)

// Almost zero weight on non-routed channel history — trust actual routing
// results over inferred channel state from gossip
- DefaultBimodalNodeWeight = 0.2
+ DefaultBimodalNodeWeight = 0.01

// Faster decay of stale routing info — the network changes quickly,
// 7-day-old failure data shouldn't still be penalizing routes
- DefaultBimodalDecayTime  = 7 * 24 * time.Hour
+ DefaultBimodalDecayTime  = 5 * 24 * time.Hour
Key insight: The current default overestimates hop success probability (0.6 → 0.19). This causes the router to select routes through unreliable channels. The optimized configuration is more pessimistic about individual hops but finds more reliable end-to-end paths.
Metric | Before | After | Change
Routing score | 0.456 | 0.500 | +9.6%
Proposals evaluated | 56 proposals across 3 graph topologies
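The key insight above can be checked with a back-of-the-envelope model. The function below is NOT LND's actual pathfinding formula; route_score is a toy expected-cost model, invented for this sketch, that combines fees with a virtual failure penalty built from DefaultAttemptCost and DefaultAttemptCostPPM. The route fees are made up; only the qualitative flip matters.

```python
def route_score(fee_msat: int, hops: int, p_hop: float,
                attempt_cost_msat: int = 100_000,
                attempt_cost_ppm: int = 1000,
                amt_msat: int = 1_000_000_000) -> float:
    """Toy expected-cost model (NOT LND's real formula): fee plus a
    virtual failure penalty weighted by the odds the route fails."""
    p_route = p_hop ** hops                       # per-hop prior compounds
    penalty = attempt_cost_msat + amt_msat * attempt_cost_ppm / 1_000_000
    return fee_msat + penalty * (1 - p_route) / p_route

short = dict(fee_msat=13_000_000, hops=2)   # expensive but short
long_ = dict(fee_msat=1_000_000, hops=5)    # cheap but fragile

# With the optimistic 0.6 prior, the cheap 5-hop route looks better...
assert route_score(**long_, p_hop=0.6) < route_score(**short, p_hop=0.6)
# ...but with the pessimistic 0.19 prior, long routes are punished
# superexponentially (0.19^5 vs 0.19^2) and the short route wins.
assert route_score(**short, p_hop=0.19) < route_score(**long_, p_hop=0.19)
```

Because the prior compounds per hop, lowering it from 0.6 to 0.19 is effectively a bias toward shorter, more reliable paths.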
Optimize SQL database query parameters · +13.1% db throughput · 5 params · 86 proposals evaluated
Files changed: sqldb/paginate.go sqldb/config.go sqldb/sqlite.go
// sqldb/paginate.go

// 7x larger pages — fewer SQL round trips when iterating over
// 15,000 nodes and 50,000 channels during graph loading
- defaultSQLitePageSize  = 100
+ defaultSQLitePageSize  = 767

// 8x larger batch commits — reduces WAL checkpoint frequency,
// fewer fsync calls during bulk graph updates
- defaultSQLiteBatchSize = 250
+ defaultSQLiteBatchSize = 1976

// sqldb/config.go

// 5x more connections — enables concurrent readers during graph updates
// in WAL mode. The default of 2 serializes almost everything.
- DefaultSqliteMaxConns  = 2
+ DefaultSqliteMaxConns  = 10

// sqldb/sqlite.go

// NORMAL is safe in WAL mode per SQLite docs — data survives app crashes.
// FULL only protects against power failure during checkpoint (rare).
// For a rebuildable graph cache, this is an easy tradeoff.
- "PRAGMA synchronous=FULL",
+ "PRAGMA synchronous=NORMAL",
Key insight: Your BenchmarkFindOptimalSQLQueryConfig manually sweeps 35 parameter combinations. satwork evaluated 86 and found that 7x larger pages + 8x larger batches + 5x more connections dramatically reduces graph query latency. The synchronous=NORMAL tradeoff is safe in WAL mode per SQLite documentation.
Metric | Before | After | Change
DB throughput | 0.852 | 0.963 | +13.1%
Proposals evaluated | 86 proposals across 3 workload profiles
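The round-trip arithmetic behind the page-size change is easy to verify. Assuming the graph sizes quoted in the PR description (15,000 nodes, 50,000 channels), a quick calculation:

```python
NODES, CHANNELS = 15_000, 50_000      # graph sizes quoted in the PR text

def pages(rows: int, page_size: int) -> int:
    """Number of paginated queries needed to stream `rows` rows."""
    return -(-rows // page_size)      # ceiling division

# Round trips to load the full graph, before and after the change
before = pages(CHANNELS, 100) + pages(NODES, 100)    # 500 + 150
after  = pages(CHANNELS, 767) + pages(NODES, 767)    # 66 + 20
print(before, after)                  # 650 -> 86, roughly 7.5x fewer

# Commit batches (and fsync pressure) during a bulk update of every channel
commits_before = pages(CHANNELS, 250)                # 200
commits_after  = pages(CHANNELS, 1976)               # 26
print(commits_before, commits_after)  # 200 -> 26 WAL commit batches
```

Fewer round trips and fewer commit batches are where the throughput gain comes from; the connection-pool and PRAGMA changes stack on top.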
Optimize neutrino light client sync parameters · +4.0% sync speed · 6 params · 89 proposals evaluated
Files changed: neutrino.go query.go
// neutrino.go

// 2.6x more headers in memory — reduces disk I/O during initial sync.
// Costs ~2MB extra RAM, negligible on modern phones.
- numMaxMemHeaders = 10000
+ numMaxMemHeaders = 25985

// query.go — timeouts

// Faster initial timeout — detect slow peers 30% sooner,
// fall back to better peers without waiting
- minQueryTimeout = 2 * time.Second
+ minQueryTimeout = 1375 * time.Millisecond

// Tighter max backoff cap — don't wait 32s for an unresponsive peer
- maxQueryTimeout = 32 * time.Second
+ maxQueryTimeout = 22 * time.Second

// Much faster retry — the 3s default wastes time between attempts.
// At 1s, the client tries the next peer almost immediately.
- retryTimeout    = 3 * time.Second
+ retryTimeout    = 1052 * time.Millisecond

// query.go — peer ranking

// 3.5x stronger reward — good peers rise in the ranking much faster,
// so the client converges on reliable peers early in the sync
- Reward: 10,
+ Reward: 35,

// 4x harsher punishment — bad peers are deprioritized aggressively.
// One failed query drops a peer below three successful ones.
- Punish: 20,
+ Punish: 81,
Key insight: Faster timeouts and more aggressive peer ranking quickly identifies and deprioritizes slow peers. The 2.6x memory increase (10k → 26k headers, ~2MB) is negligible on modern devices. Every mobile Lightning wallet using neutrino benefits from faster startup.
Metric | Before | After | Change
Sync score | 0.939 | 0.976 | +4.0%
Proposals evaluated | 89 proposals across 3 network conditions
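The reward/punish claim in the diff can be sanity-checked with a toy additive score. Neutrino's actual peer-ranking logic in query.go is more involved than a running sum; this only illustrates the arithmetic behind the comments.

```python
def score(successes: int, failures: int, reward: int, punish: int) -> int:
    """Toy additive ranking score, higher is better (illustrative only;
    neutrino's real ranking in query.go is not a plain running sum)."""
    return successes * reward - failures * punish

# Old weights: two successes fully absorb one failure
assert score(2, 1, reward=10, punish=20) == 0
# New weights: one failure outweighs two successes...
assert score(2, 1, reward=35, punish=81) < 0
# ...and three successes plus one failure (score 24) still ranks below
# a single clean success (score 35), matching the comment in the diff
assert score(3, 1, reward=35, punish=81) < score(1, 0, reward=35, punish=81)
```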
From Simulation to Production

How to verify these results

satwork finds parameter candidates with evidence. Your team validates and deploys. We find the needle in the haystack. Your engineers confirm it's the right needle.

What the optimization actually does

Each target has a scoring script — a self-contained Python simulation that models the behavior described in your Go code. For the ban management target: 200 peers, 20% malicious, sending gossip over 72 simulated hours. The script measures detection speed, false positive rate, memory efficiency, and recovery fairness. Workers submit parameter combinations. Each is evaluated deterministically. Same inputs always produce the same score.
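For a feel of the structure, here is a heavily simplified skeleton of such a scoring script. Everything in it (the spam rates, the metric weights, the function name score_ban_params) is invented for this sketch; the real satwork evals differ, but they share the fixed-seed determinism shown here.

```python
import random

def score_ban_params(ban_threshold: int, max_banned: int,
                     ban_hours: int, reset_hours: int) -> float:
    """Skeleton of a satwork-style eval (NOT the real script): a fixed
    seed makes the run deterministic, so identical parameters always
    produce identical scores."""
    rng = random.Random(7)                 # same inputs -> same score
    n_peers, hours = 200, 72
    malicious = set(rng.sample(range(n_peers), n_peers // 5))  # 20% malicious
    ban_score = [0] * n_peers
    last_offense = [-10**9] * n_peers
    banned = {}                            # peer -> hour its ban expires
    detected_at = {}                       # malicious peer -> hour first banned
    false_positives = 0

    for hour in range(hours):
        for peer in range(n_peers):
            if peer in banned:
                if hour >= banned[peer]:
                    del banned[peer]       # ban expired; peer readmitted
                continue
            if hour - last_offense[peer] > reset_hours:
                ban_score[peer] = 0        # quiet long enough: score resets
            if peer in malicious:
                bad = rng.randint(3, 8)    # attacker spams invalid gossip
            else:
                bad = 1 if rng.random() < 0.01 else 0  # rare honest mistake
            if bad:
                ban_score[peer] += bad
                last_offense[peer] = hour
            if ban_score[peer] >= ban_threshold and len(banned) < max_banned:
                banned[peer] = hour + ban_hours
                ban_score[peer] = 0
                if peer in malicious:
                    detected_at.setdefault(peer, hour)
                else:
                    false_positives += 1

    detection = len(detected_at) / len(malicious)
    avg_hour = (sum(detected_at.values()) / len(detected_at)) if detected_at else hours
    speed = 1 - avg_hour / hours
    fp_rate = false_positives / (n_peers - len(malicious))
    memory = 1 - min(max_banned, 10_000) / 10_000
    return (0.4 * detection + 0.3 * speed +
            0.2 * (1 - fp_rate) + 0.1 * memory)

baseline = score_ban_params(100, 10_000, 48, 48)
print(round(baseline, 3))  # deterministic: the same value on every run
```

Workers submit (ban_threshold, max_banned, ban_hours, reset_hours) tuples; the sandbox replays exactly this kind of fixed-seed simulation, so any disputed score can be recomputed by anyone.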

These are simulations, not production telemetry. The improvements reported here are measured against our scoring functions, not against your live network. The scoring functions measure the same things you care about, but the real-world validation is yours to do.

Four steps to production

1. Read the eval

Each scoring script is ~150 lines of Python. Fully deterministic, no dependencies. Run it yourself — verify the score matches.

2. Run your tests

Take the winning parameter values, plug them into your existing test suite (go test ./discovery/...). Verify nothing breaks.

3. Canary deploy

Ship behind a feature flag on a single test node. Monitor ban rates, false positives, memory usage for a week.

4. Ship it

If the canary holds, update the defaults. The PR is already written — benchmark tables, before/after scores, annotated diff.

Why the results make sense

Take the ban management example. The optimizer found:

Change | Intuition
Ban threshold: 100 → 57 | Ban sooner. The current default lets an attacker send 100 invalid messages before any action. 57 is still tolerant of honest mistakes but catches attackers faster.
Ban time: 48h → 149h | Ban longer. A 2-day ban means an attacker just waits a weekend. 6 days is a real consequence.
Max banned peers: 10,000 → 1,000 | Use 90% less memory. In practice, you won't have 10,000 distinct attackers. 1,000 slots is sufficient with the longer ban time.
Reset delta: 48h → 57h | Slightly longer memory for offenses. Honest peers that accidentally trip the threshold still recover within ~2.5 days.

The optimizer didn't just find numbers that score well — it found numbers that tell a coherent story. More aggressive detection, longer consequences, less wasted memory. A developer reviewing this PR would reach similar conclusions.

What satwork does NOT do

satwork does not touch your production nodes, does not merge anything without your review, and does not claim real-world gains from simulated scores. It produces parameter candidates with evidence; your engineers validate and deploy.

The Value

What Lightning Labs gets

Continuous optimization

A global worker pool competing to improve your routing, gossip, sync, and database performance.

Verified results

Not opinions. Not untested PRs. Every improvement is benchmark-verified, reproducible, and regression-tested.

Lightning-native

Your protocol powers the payments that improve your protocol. The most aligned partnership possible.

What it costs

Item | Cost | Compare to
Typical target budget | 2,000 sats (~$0.20) | Developer day: $500-$2,000
Cost per merged PR | $25-$500 | 10-100x cheaper than manual
If no improvement | $0 (hold invoice refunds) | Developer still gets paid

Regression prevention

Every funded target leaves behind a reusable, deterministic eval. Future parameter changes can be re-scored against it, so a regression surfaces as a score drop before it ships.

Beyond the Bounty Factory

We're building on your protocol stack

satwork isn't just using Lightning for payments. We've drafted a bLIP and a BOLT extension that push the protocol forward, and they're built on LND.

bLIP: Oracle-Conditional Hold Invoices

Standardizes the pattern satwork already uses: a sponsor locks sats via hold invoice, an oracle evaluates work, and the preimage is revealed only on positive attestation. Works today with existing LND hold invoices. Zero protocol changes needed.

Use cases beyond satwork: freelance bounties, prediction markets, SLA enforcement, supply chain escrow.

BOLT: Oracle-Conditional PTLCs

The upgrade path. When Taproot channels support PTLCs, the oracle's BIP 340 Schnorr signature becomes the adaptor secret that completes the payment. This eliminates the one remaining trust assumption.

With PTLCs, the oracle cannot fabricate attestations: the signature is publicly verifiable. Hash-based hold invoices allow fabrication, because the oracle holds the preimage and could settle without honest evaluation. Schnorr adaptor signatures don't.
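The adaptor-signature algebra is worth seeing once. The sketch below runs over a deliberately tiny prime-order group, not secp256k1, and is not BIP 340; it only demonstrates why completing the payment necessarily reveals the oracle's secret.

```python
import hashlib

# Toy prime-order group: p = 2q + 1, g generates the order-q subgroup.
# Real PTLCs use BIP 340 Schnorr over secp256k1; this only shows the algebra.
p, q, g = 2039, 1019, 4

def H(*vals) -> int:
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = 123; X = pow(g, x, p)    # payer's signing key and public point
t = 456; T = pow(g, t, p)    # oracle's attestation secret and its point
r = 789; R = pow(g, r, p)    # signing nonce
msg = "commitment-to-solution-hash"

c = H((R * T) % p, X, msg)   # challenge binds the combined nonce R*T
s_pre = (r + c * x) % q      # adaptor ("pre-") signature: not valid alone

# The pre-signature verifies only WITH T, so the payment cannot settle yet
assert pow(g, s_pre, p) * T % p == (R * T) % p * pow(X, c, p) % p

# The oracle attests by releasing t; adding it yields a valid Schnorr signature
s = (s_pre + t) % q
assert pow(g, s, p) == (R * T) % p * pow(X, c, p) % p

# Atomicity: settling publishes s, and s - s_pre reveals t to the payer
assert (s - s_pre) % q == t
```

Since t is the oracle's BIP 340 attestation secret in the real construction, "payment settles" and "attestation is genuine and public" become a single event.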

Why PTLCs matter for this

Property | Hold invoices (today) | PTLCs (future)
Oracle fabrication | Possible: oracle holds the preimage | Impossible: Schnorr unforgeability
Payment privacy | Weak: same hash at every hop | Strong: decorrelated points per hop
On-chain footprint | Standard HTLC | Indistinguishable from keypath spend
Knowledge market | Trust-based: pay, then receive | Atomic: pay and receive simultaneously
Atomic knowledge markets. With PTLCs, a buyer can purchase a knowledge-graph solution where the payment settles if and only if the received data matches the committed hash. No trust required. The oracle's Schnorr signature simultaneously proves the solution is genuine AND completes the Lightning payment. This turns the knowledge graph into a trustless data exchange, built on your protocol stack.

Both specs are published at satwork.ai/docs. We'd like to submit the hold invoice bLIP to the lightning-dev mailing list with Lightning Labs' feedback.