Initial commit: LetsBe Biz project with openclaw source

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-27 16:24:23 +01:00
commit 14ff8fd54c
93 changed files with 31651 additions and 0 deletions

# LetsBe Biz — Financial Projections & Analysis
**Version 1.2 — February 26, 2026**
**Status:** Internal Planning Document — Confidential
**Companion To:** Foundation Document v1.0, Technical Architecture v1.1, Pricing Model v2.2
**Projection Period:** March 2026 — February 2029 (36 months)
---
## 1. Executive Summary
This document models the three-year financial trajectory for LetsBe Biz, a privacy-first AI workforce platform targeting SMBs. The business is bootstrapped with near-zero investment, operated by the founder (Matt) and one engineer, armed with AI-assisted development tools (Claude Opus 4.6 Max 20x, Codex, Gemini).
**Key assumptions:**
- Launch: March 2026
- Team: 2 people (founder + engineer), no salaries modeled (bootstrapped)
- Existing enterprise contract: €1,500/mo (ongoing, offsets all fixed costs)
- Gross fixed overhead: ~€400/month (tooling + internal infra)
- Net fixed overhead: -€1,100/month (surplus from enterprise contract)
- Three growth scenarios modeled: Conservative, Moderate, Aggressive
- Revenue from: subscriptions, premium AI metering, server upgrades, domains
**Bottom line (Moderate scenario):**
- Month 12 MRR: €11,000 (product) + €1,500 (enterprise) = €12,500
- Month 12 ARR: €150,000
- Month 24 MRR: €26,600 | ARR: €319,200
- Month 36 MRR: €51,200 | ARR: €614,400
- Breakeven: Day 1 — enterprise contract already covers all fixed costs
- Cumulative gross profit at Month 36: ~€448,000 (product) + €39,600 (enterprise surplus) = ~€488,000
**Note on margins:** AI token costs are calculated from high-usage estimates (full pool consumption) to stress-test viability. Actual margins will improve as: (1) most users won't exhaust token pools, (2) prompt caching reduces costs by 5-8% from Month 3+, (3) AI model prices trend downward over time.
---
## 2. Operating Cost Structure
### 2.1 Fixed Monthly Costs (Overhead)
These costs exist regardless of customer count.
| Expense | Monthly | Annual | Notes |
|---------|---------|--------|-------|
| Claude Max 20x ($200 plan) | €185 | €2,220 | Primary development tool |
| Claude Max 10x (potential) | €93 | €1,116 | Second seat for engineer |
| Internal VPS infrastructure | €50 | €600 | Staging, CI/CD, hub relay |
| Figma | €15 | €180 | Design |
| Domain registrations | €10 | €120 | letsbe.biz + related domains |
| Miscellaneous (email, DNS, etc.) | €20 | €240 | Stalwart Mail, CloudFlare, etc. |
| **Gross Fixed Overhead** | **€373** | **€4,476** | |
Rounded to **~€400/mo** for modeling.
### 2.2 Enterprise Contract Offset
An existing enterprise customer pays **€1,500/mo** on an ongoing basis. This contract is modeled as a fixed cost offset rather than product revenue, since it exists independently of the SaaS platform.
| | Monthly | Annual |
|---|---------|--------|
| Gross Fixed Overhead | €400 | €4,800 |
| Enterprise Contract Revenue | -€1,500 | -€18,000 |
| **Net Fixed Overhead** | **-€1,100** | **-€13,200** |
**The business is cash-flow positive from day zero.** The €1,100/mo surplus from the enterprise contract means every product customer's gross margin flows directly to profit with no overhead to cover first. This is extraordinarily lean — a direct benefit of the bootstrapped, AI-augmented development approach combined with an existing revenue base.
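The overhead arithmetic above can be sanity-checked in a few lines; this is a sketch, with all line items copied from the tables in this section:

```python
# Recomputing the fixed-overhead figures (values from the tables above).
overhead_items = {
    "Claude Max 20x ($200 plan)": 185,
    "Claude Max 10x (second seat)": 93,
    "Internal VPS infrastructure": 50,
    "Figma": 15,
    "Domain registrations": 10,
    "Miscellaneous (email, DNS, etc.)": 20,
}

gross_overhead = sum(overhead_items.values())  # €373/mo exact, modeled as ~€400
modeled_overhead = 400                         # rounded figure used in projections
enterprise_contract = 1500                     # €/mo, existing customer

# A negative net overhead means a monthly surplus before any product revenue.
net_overhead = modeled_overhead - enterprise_contract  # -1100
```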
### 2.3 Variable Costs Per Customer
From the Pricing Model v2.2 (per tier, VPS G12 default):
| Component | Lite (€29) | Build (€45) | Scale (€75) | Enterprise (€109) |
|-----------|-----------|-------------|-------------|-------------------|
| Netcup VPS | €7.10 | €13.10 | €22.00 | €32.50 |
| Included AI tokens | €2.91 | €6.76 | €13.46 | €25.05 |
| Monitoring + Backups | €1.50 | €1.50 | €1.50 | €1.50 |
| DNS + Support tooling | €1.00 | €1.00 | €1.00 | €1.00 |
| **Total Variable Cost** | **€12.51** | **€22.36** | **€37.96** | **€60.05** |
| **Gross Margin** | **€16.49 (57%)** | **€22.64 (50%)** | **€37.04 (49%)** | **€48.95 (45%)** |
**Note on AI costs:** These are calculated from preset-based routing at full token pool consumption — 55-85% Balanced (DeepSeek V3.2), 10% Basic (GPT 5 Nano/Gemini Flash), and 5-35% Complex (GLM 5/MiniMax M2.5), with the Balanced/Complex split varying by tier and right-sized pools (8-40M tokens). GLM 5 at $1.677/M is the primary cost driver. Actual costs will likely be lower, as most users won't exhaust pools, and prompt caching reduces AI costs by a further ~5-8% from Month 3+. Model selections are not final — GPT 5.2 Mini ($1.002/M blended) is also under consideration, which would affect these calculations. See the Pricing Model for the full comparison.
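As a sketch, the per-tier variable costs and margins in the table above can be recomputed from their components (all figures copied from the table):

```python
# Recomputing per-tier unit economics from the component costs above.
tiers = {
    # tier:       (price, vps,   tokens, monitoring, dns)
    "Lite":       (29,   7.10,   2.91,  1.50, 1.00),
    "Build":      (45,  13.10,   6.76,  1.50, 1.00),
    "Scale":      (75,  22.00,  13.46,  1.50, 1.00),
    "Enterprise": (109, 32.50,  25.05,  1.50, 1.00),
}

margins = {}
for name, (price, vps, tokens, mon, dns) in tiers.items():
    variable = vps + tokens + mon + dns          # total variable cost
    margin = price - variable                    # gross margin in EUR
    margins[name] = (round(variable, 2), round(margin, 2),
                     round(margin / price * 100))  # margin in percent
```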
### 2.4 Stripe Payment Processing
2.9% + €0.25 per transaction. At the €62/mo blended ARPU this is ~€2.05 per charge (≈3.3%); modeled at a 3.5% effective rate to cover failed charges and refunds.
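The effective fee works out as follows (a sketch using the blended ARPU from this document):

```python
# Effective Stripe cost per charge at the blended ARPU.
arpu = 62.00                           # EUR/month, blended subscription ARPU
stripe_fee = 0.029 * arpu + 0.25       # ~2.05 EUR per transaction
nominal_rate = stripe_fee / arpu       # ~3.3% of revenue
modeled_rate = 0.035                   # padded for failed charges and refunds
```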
---
## 3. Market Context & Growth Benchmarks
### 3.1 AI SaaS Industry Benchmarks (2025-2026)
| Metric | Benchmark | Source |
|--------|-----------|--------|
| AI-native SaaS median growth (early stage) | 100% YoY | ChartMogul 2025 SaaS Growth Report |
| Monthly churn — SMB SaaS | 5-7% | Recurly / Agile Growth Labs 2025 |
| Monthly churn — B2B SaaS (all) | 3.5% avg | Recurly 2025 |
| AI SaaS activation rate | 54.8% | Agile Growth Labs 2025 |
| CAC ratio (new ARR) | $2.00 per $1 ARR | High Alpha 2025 SaaS Benchmarks |
| CAC payback (early stage) | 8 months | High Alpha 2025 |
| AI-native GRR (stabilizing) | ~40% at maturity | ChartMogul 2025 |
### 3.2 OpenClaw Growth as Reference
OpenClaw (open-source AI agent platform) achieved explosive growth in late 2025 / early 2026:
- 300,000-400,000 users in ~3 months (Nov 2025 — Feb 2026)
- 200,000+ GitHub stars in under 2 weeks
- 5,700+ community-built skills on ClawHub by Feb 2026
- Drove OpenRouter from 6.4T to 13T tokens/week (2x in one month)
**Relevance to LetsBe Biz:** OpenClaw proves massive demand for AI agent platforms. However, OpenClaw is free/open-source targeting developers — LetsBe Biz is a paid, managed service targeting non-technical SMBs. Our growth will be much slower but our monetization is immediate. OpenClaw validates the market; we're building the productized, privacy-first version for businesses.
### 3.3 Churn Rate Assumptions
Based on industry benchmarks for SMB-focused SaaS with infrastructure lock-in:
| Phase | Monthly Churn | Rationale |
|-------|---------------|-----------|
| Months 1-6 (pre-PMF) | 8% | Early adopters testing; product still rough |
| Months 7-12 (finding PMF) | 6% | Improving retention; founding members engaged |
| Months 13-24 (post-PMF) | 4% | Product-market fit; agent customization creates lock-in |
| Months 25-36 (maturity) | 3% | Strong lock-in; custom agents + data on private server |
**Why churn improves over time:** LetsBe Biz has natural lock-in mechanisms that most SaaS doesn't — custom AI agents (SOUL.md + TOOLS.md represent hours of configuration), business data on private servers, and 30 integrated tools. Switching cost increases the longer a customer stays.
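The phased churn schedule implies a retention curve that can be sketched directly. Note the 36-month survival figure below is derived here for illustration, not stated elsewhere in this document:

```python
# Illustrative retention math under the phased churn schedule above.
phases = [(6, 0.08), (6, 0.06), (12, 0.04), (12, 0.03)]  # (months, monthly churn)

survival = 1.0
for months, churn in phases:
    survival *= (1 - churn) ** months
# Roughly 18% of a Month 1 cohort would still be active at Month 36.

# At the ~5% blended churn used in section 7.1, expected tenure is 1/churn:
avg_tenure = 1 / 0.05  # ~20 months, matching the LTV assumption
```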
---
## 4. Three Growth Scenarios
### 4.1 Scenario Definitions
**Conservative:** Organic growth only. Word of mouth, community posts, minimal content marketing. No paid acquisition.
**Moderate:** Active content marketing, community building, targeted outreach. Founding member program drives early traction. Some PR from the OpenClaw/AI agent wave.
**Aggressive:** Moderate + strategic partnerships, paid acquisition, press coverage. Riding the AI agent hype cycle hard.
### 4.2 New Customer Acquisition (Monthly)
| Month | Conservative | Moderate | Aggressive |
|-------|-------------|----------|------------|
| 1 (Mar 2026) | 3 | 8 | 15 |
| 2 | 4 | 10 | 20 |
| 3 | 5 | 12 | 25 |
| 4 | 5 | 12 | 25 |
| 5 | 6 | 14 | 30 |
| 6 | 7 | 16 | 35 |
| 7 | 8 | 18 | 35 |
| 8 | 8 | 18 | 40 |
| 9 | 9 | 20 | 40 |
| 10 | 10 | 22 | 45 |
| 11 | 10 | 24 | 50 |
| 12 | 12 | 26 | 55 |
| **Year 1 Total New** | **87** | **200** | **415** |
| Avg Monthly (Y1) | 7 | 17 | 35 |
| Year 2 Avg Monthly | 15 | 35 | 70 |
| Year 3 Avg Monthly | 20 | 50 | 90 |
### 4.3 Tier Distribution Assumptions
| Tier | Price | % of Customers | Weighted ARPU |
|------|-------|---------------|---------------|
| Lite (hidden) | €29 | 10% | €2.90 |
| Build | €45 | 45% | €20.25 |
| Scale | €75 | 30% | €22.50 |
| Enterprise | €109 | 15% | €16.35 |
| **Blended ARPU** | | **100%** | **€62.00** |
### 4.4 Additional Revenue per Customer
| Stream | Avg per Customer/Month | Adoption Rate | Blended/Customer |
|--------|----------------------|---------------|-----------------|
| Premium AI metering | €8.83 | 60% | €5.30 |
| RS upgrade | €12 avg uplift | 10% | €1.20 |
| Domain reselling | €2.50 | 15% | €0.38 |
| Overage billing | €3.00 | 20% | €0.60 |
| **Total Additional** | | | **€7.48** |
**Effective ARPU (all revenue): €62.00 + €7.48 = €69.48/customer/month**
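The blended and effective ARPU figures follow from the two tables above (a sketch; component rounding means totals differ by a cent or two):

```python
# Recomputing blended and effective ARPU from the tier mix and add-on streams.
tier_mix = {  # tier: (monthly price EUR, share of customers)
    "Lite": (29, 0.10),
    "Build": (45, 0.45),
    "Scale": (75, 0.30),
    "Enterprise": (109, 0.15),
}
blended_arpu = sum(price * share for price, share in tier_mix.values())  # ~62.00

extras = {  # stream: (avg EUR/customer/month, adoption rate)
    "premium_ai_metering": (8.83, 0.60),
    "rs_upgrade": (12.00, 0.10),
    "domain_reselling": (2.50, 0.15),
    "overage_billing": (3.00, 0.20),
}
additional = sum(v * rate for v, rate in extras.values())  # ~7.47 (table: 7.48)
effective_arpu = blended_arpu + additional                 # ~69.48
```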
---
## 5. Monthly Financial Projections
### 5.1 Moderate Scenario — Month-by-Month (Year 1)
Fixed cost shown as net (€400 gross - €1,500 enterprise contract = -€1,100 net). Enterprise surplus effectively subsidizes early growth.
| Month | New | Churned | Active Users | Sub Revenue | Add'l Revenue | Total Revenue | Variable Cost | Net Fixed | Gross Profit | Cumulative |
|-------|-----|---------|-------------|-------------|--------------|---------------|--------------|-----------|-------------|------------|
| 1 | 8 | 0 | 8 | €496 | €60 | €556 | €254 | -€1,100 | €1,402 | €1,402 |
| 2 | 10 | 1 | 17 | €1,054 | €127 | €1,181 | €539 | -€1,100 | €1,742 | €3,144 |
| 3 | 12 | 1 | 28 | €1,736 | €209 | €1,945 | €888 | -€1,100 | €2,157 | €5,301 |
| 4 | 14 | 2 | 40 | €2,480 | €299 | €2,779 | €1,268 | -€1,100 | €2,611 | €7,912 |
| 5 | 15 | 2 | 53 | €3,286 | €396 | €3,682 | €1,681 | -€1,100 | €3,101 | €11,013 |
| 6 | 16 | 3 | 66 | €4,092 | €493 | €4,585 | €2,093 | -€1,100 | €3,593 | €14,605 |
| 7 | 18 | 3 | 81 | €5,022 | €606 | €5,628 | €2,568 | -€1,100 | €4,159 | €18,764 |
| 8 | 18 | 4 | 95 | €5,890 | €710 | €6,600 | €3,012 | -€1,100 | €4,688 | €23,453 |
| 9 | 20 | 4 | 111 | €6,882 | €830 | €7,712 | €3,520 | -€1,100 | €5,292 | €28,745 |
| 10 | 20 | 4 | 127 | €7,874 | €950 | €8,824 | €4,027 | -€1,100 | €5,897 | €34,642 |
| 11 | 22 | 5 | 144 | €8,928 | €1,077 | €10,005 | €4,566 | -€1,100 | €6,539 | €41,181 |
| 12 | 22 | 6 | 160 | €9,920 | €1,197 | €11,117 | €5,073 | -€1,100 | €7,143 | €48,325 |
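As a spot check, the Month 1 row of the table can be reproduced from the blended unit economics (ARPU €62.00, additional revenue €7.48, variable cost €31.71 per customer, net fixed -€1,100):

```python
# Reproducing the Month 1 row above from blended unit economics.
new_customers, churned = 8, 0
active = new_customers - churned              # 8 active users

sub_revenue = active * 62.00                  # €496 subscription revenue
addl_revenue = active * 7.48                  # ~€60 additional revenue
total_revenue = sub_revenue + addl_revenue    # ~€556

variable_cost = active * 31.71                # ~€254
net_fixed = -1100                             # enterprise surplus exceeds overhead

gross_profit = total_revenue - variable_cost - net_fixed  # ~€1,402
```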
**Year 1 Summary (Moderate):**
- End of Year 1 active users: 160
- Month 12 product MRR: €11,117 | + enterprise: €12,617
- Month 12 ARR run rate: €151,404
- Year 1 total product revenue: €64,614 | + enterprise: €82,614
- Year 1 total gross profit: €48,325 (including enterprise surplus)
- Breakeven: Day 1 — enterprise contract covers all fixed costs before first product sale
- **Note:** Right-sized token pools (8-40M) and adjusted pricing (€29-109) deliver ~49% blended gross margin. Prompt caching and below-pool-cap usage will improve actuals further.
### 5.2 Conservative Scenario — Quarterly Summary
Enterprise surplus of €1,100/mo (€3,300/quarter) added to gross profit.
| Quarter | End Active Users | Product MRR | Quarterly Product Rev | Quarterly Gross Profit |
|---------|-----------------|-------------|----------------------|----------------------|
| Q1 (M1-3) | 12 | €834 | €1,560 | €5,470 |
| Q2 (M4-6) | 28 | €1,946 | €4,593 | €6,768 |
| Q3 (M7-9) | 50 | €3,475 | €8,880 | €8,875 |
| Q4 (M10-12) | 75 | €5,213 | €14,150 | €11,542 |
| **Year 1** | **75** | **€5,213** | **€29,183** | **€32,655** |
### 5.3 Aggressive Scenario — Quarterly Summary
| Quarter | End Active Users | Product MRR | Quarterly Product Rev | Quarterly Gross Profit |
|---------|-----------------|-------------|----------------------|----------------------|
| Q1 (M1-3) | 65 | €4,519 | €8,344 | €8,248 |
| Q2 (M4-6) | 135 | €9,383 | €22,712 | €15,082 |
| Q3 (M7-9) | 210 | €14,596 | €39,230 | €22,844 |
| Q4 (M10-12) | 290 | €20,158 | €56,980 | €31,696 |
| **Year 1** | **290** | **€20,158** | **€127,266** | **€77,870** |
---
## 6. Three-Year Summary
### 6.1 Annual Revenue
| Year | Conservative | Moderate | Aggressive |
|------|-------------|----------|------------|
| Year 1 Revenue | €29,183 | €64,614 | €127,266 |
| Year 2 Revenue | €99,590 | €255,743 | €468,360 |
| Year 3 Revenue | €199,780 | €511,485 | €918,000 |
| **3-Year Total** | **€328,553** | **€831,842** | **€1,513,626** |
### 6.2 Annual Gross Profit (Including Enterprise Surplus)
Enterprise contract adds €13,200/yr surplus (€1,100/mo × 12) on top of product gross profit.
| Year | Conservative | Moderate | Aggressive |
|------|-------------|----------|------------|
| Year 1 Gross Profit | €32,655 | €48,325 | €77,870 |
| Year 2 Gross Profit | €63,870 | €151,000 | €231,000 |
| Year 3 Gross Profit | €118,600 | €289,000 | €436,000 |
| **3-Year Total** | **€215,125** | **€488,325** | **€744,870** |
### 6.3 Active Customers (End of Period)
| Milestone | Conservative | Moderate | Aggressive |
|-----------|-------------|----------|------------|
| Month 6 | 30 | 63 | 119 |
| Month 12 (Year 1) | 57 | 156 | 280 |
| Month 18 | 90 | 255 | 460 |
| Month 24 (Year 2) | 130 | 375 | 680 |
| Month 30 | 170 | 500 | 890 |
| Month 36 (Year 3) | 220 | 660 | 1,150 |
### 6.4 MRR Trajectory
| Milestone | Conservative | Moderate | Aggressive |
|-----------|-------------|----------|------------|
| Month 6 MRR | €1,946 | €4,585 | €9,383 |
| Month 12 MRR | €5,213 | €11,117 | €20,158 |
| Month 18 MRR | €7,500 | €18,800 | €34,200 |
| Month 24 MRR | €10,100 | €26,600 | €49,800 |
| Month 30 MRR | €13,500 | €37,500 | €66,600 |
| Month 36 MRR | €17,100 | €51,200 | €86,600 |
| Month 36 ARR | €205,200 | €614,400 | €1,039,200 |
---
## 7. Key Financial Metrics
### 7.1 Unit Economics
| Metric | Value |
|--------|-------|
| Blended ARPU (subscription only) | €62.00/mo |
| Effective ARPU (all revenue) | €69.48/mo |
| Blended variable cost per customer | €31.71/mo |
| Blended gross margin per customer | €30.29/mo (49%) |
| Effective gross margin (with add'l revenue) | €37.77/mo (54%) |
| Customer Lifetime Value (20-mo avg tenure) | €606 |
| CAC (founding members, 2×) | ~€134/year |
| CAC payback | < 1 month |
| LTV:CAC ratio | ~4.5:1 |
**Note:** Variable costs assume full token pool consumption at realistic model mixes (including GLM 5 usage in the Complex Tasks preset). Actual costs will likely be lower — many users won't exhaust pools, and prompt caching improves margins further. The ~4.5:1 LTV:CAC ratio (€606 / €134) comfortably clears the 3:1 industry target. Right-sized pools (8-40M) and adjusted pricing (€29-109) deliver a healthy ~50% blended margin.
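The LTV and LTV:CAC figures can be recomputed from the table (a sketch using the values above):

```python
# LTV and LTV:CAC from the unit-economics table above.
gross_margin_per_month = 30.29   # blended EUR per customer per month
avg_tenure_months = 20           # from the ~5% blended churn assumption

ltv = gross_margin_per_month * avg_tenure_months  # ~606 EUR
cac = 134                                         # founding-member cost per user-year
ltv_cac = ltv / cac                               # ~4.5
```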
### 7.2 Breakeven Analysis
| Scenario | Month to Cover Fixed Costs | Net Fixed Cost/Mo | Required Active Users |
|----------|---------------------------|-------------------|----------------------|
| All scenarios | Day 0 | -€1,100 (surplus) | 0 — already profitable |
**The existing enterprise contract (€1,500/mo) fully covers gross fixed overhead (€400/mo) with €1,100/mo surplus.** Every product customer's gross margin flows directly to profit. There is no "breakeven" point — the business is cash-flow positive before launching the SaaS product.
### 7.3 Cash Requirements
| Expense | One-Time | Recurring |
|---------|----------|-----------|
| Netcup server pool (3-5 pre-provisioned) | €200-400 | — |
| Domain registrations | €50 | €50/yr |
| Stripe setup + initial reserve | €0 | — |
| Marketing (organic content) | €0 | €0 |
| **Total pre-launch investment** | **~€300-500** | — |
| Monthly burn (pre-revenue) | — | -€1,100 (net surplus) |
| **External funding required** | **€0** | — |
The enterprise contract means zero runway concerns. The €300-500 pre-launch investment for server pool and domains is covered by less than two weeks of the enterprise surplus. No external funding required, now or ever (unless choosing to accelerate growth).
---
## 8. Revenue Composition Analysis
### 8.1 Revenue Mix (Moderate, Year 1)
| Stream | Annual | % of Revenue |
|--------|--------|-------------|
| Subscription revenue | €54,612 | 82.5% |
| Premium AI metering | €7,017 | 10.6% |
| RS server upgrades | €1,986 | 3.0% |
| Overage billing | €795 | 1.2% |
| Domain reselling | €529 | 0.8% |
| Annual discount impact | -€1,258 | -1.9% |
| **Net Revenue** | **€63,681** | **100%** |
### 8.2 Revenue Mix Evolution (Moderate)
| Stream | Year 1 | Year 2 | Year 3 |
|--------|--------|--------|--------|
| Subscriptions | 82.5% | 78% | 74% |
| Premium AI | 10.6% | 14% | 18% |
| Server upgrades | 3.0% | 4% | 4% |
| Overage + Domains | 2.0% | 3% | 3% |
| Annual discount | -1.9% | -3% | -3% |
Premium AI revenue grows as a percentage over time because:
1. Users discover premium models after initial onboarding period
2. Agent customization leads to per-agent model selection
3. More complex workflows demand higher-quality models
4. Opus 4.6 adoption grows among power users
---
## 9. Sensitivity Analysis
### 9.1 Churn Impact
| Monthly Churn Rate | Year 1 Active (Mod) | Year 3 Active (Mod) | Year 3 MRR |
|-------------------|---------------------|---------------------|------------|
| 3% (optimistic) | 175 | 810 | €51,273 |
| 5% (base case avg) | 156 | 660 | €41,772 |
| 7% (pessimistic) | 135 | 510 | €32,283 |
| 10% (crisis) | 108 | 340 | €21,522 |
**Takeaway:** Even at 10% monthly churn (extremely high), the business is still profitable due to near-zero fixed costs. Churn impacts scale, not survival.
### 9.2 ARPU Impact
| ARPU Scenario | Year 1 Rev (Mod) | Year 3 Rev (Mod) |
|--------------|-----------------|-----------------|
| Low ARPU (€55 effective) | €51,200 | €412,000 |
| Base ARPU (€69.48) | €64,614 | €511,485 |
| High ARPU (€85, more RS/premium) | €79,100 | €637,000 |
### 9.3 What Breaks the Model
| Risk | Impact | Likelihood | Mitigation |
|------|--------|-----------|------------|
| OpenRouter 5.5% fee increase | -2-3pp margin | Low | Direct API fallback (Anthropic, Google, DeepSeek) |
| Netcup price increase (>20%) | -3-5pp margin on base | Low | Hetzner as alternative; 12-mo contracts lock price |
| DeepSeek V3.2 deprecated/degraded | Must shift default model | Medium | GPT 5 Nano or MiniMax M2.5 as fallback |
| AI price war (models get cheaper) | Higher margins OR lower prices | High | Pass savings to users → competitive advantage |
| Zero premium AI adoption | -€5.30/user/mo | Medium | Still profitable on subscription alone |
| Churn >10% monthly | Slow growth, never scales | Medium | Invest in onboarding + agent templates |
| Stripe account issues | Revenue disruption | Low | Backup payment processor (Paddle, Lemon Squeezy) |
---
## 10. Founding Member Economics (Deep Dive)
### 10.1 Founding Member Program
- First 50-100 customers
- **2× included token allotment** for 12 months ("Double the AI")
- Same subscription price
- Available March 2026 — until cap reached
### 10.2 Financial Impact
| Scenario | # Founders | Extra Monthly AI Cost | 12-Month Total Cost | Effective CAC/User |
|----------|-----------|----------------------|--------------------|--------------------|
| Conservative | 30 | €334 | €4,008 | €134/yr |
| Moderate | 60 | €668 | €8,016 | €134/yr |
| Aggressive | 100 | €1,113 | €13,356 | €134/yr |
All tiers remain margin-positive at 2× (Lite 47%, Build 35%, Scale 31%, Enterprise 22%). The extra cost per founding member is ~€11/mo blended — manageable at all tiers.
**ROI calculation (Moderate, 60 founders):**
- Extra cost: €8,016 over 12 months
- Revenue from 60 founders (12 months @ €69.48 avg): €50,026
- Net contribution: €42,010
- ROI: 524%
The 2× founding member program is both generous and sustainable. At ~€134/user/year effective CAC, it's excellent value — providing a compelling "double the AI" benefit while keeping the business healthy.
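The ROI calculation above can be verified line by line (figures from the Moderate row of the table in this section):

```python
# ROI check for the 2x founding-member program (Moderate, 60 founders).
founders = 60
extra_ai_cost_per_month = 668    # EUR, extra AI cost for doubled pools
effective_arpu = 69.48           # EUR/customer/month, all revenue streams

cost_12mo = extra_ai_cost_per_month * 12        # 8,016
revenue_12mo = founders * effective_arpu * 12   # ~50,026
net_contribution = revenue_12mo - cost_12mo     # ~42,010
roi = net_contribution / cost_12mo              # ~5.24, i.e. ~524%
```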
---
## 11. Comparison: LetsBe Biz vs. Industry Medians
| Metric | LetsBe Biz (Moderate, Y1) | Industry Median (AI SaaS <$1M ARR) |
|--------|--------------------------|-------------------------------------|
| YoY Growth | ~400%+ (from zero) | 100% median |
| Monthly Churn | 6% avg | 5-7% SMB |
| Gross Margin | 57% (with enterprise) | 60-75% (pure SaaS) |
| CAC Payback | < 2 months | 8 months |
| LTV:CAC | ~4.5:1 | 3:1 target |
| Net Fixed Overhead | -€1,100/mo (surplus) | €10,000-50,000/mo (typical) |
| Breakeven | Day 0 (pre-launch) | Month 12-18 (typical) |
**Key advantage:** LetsBe Biz is profitable before selling a single SaaS subscription. The enterprise contract covers all fixed costs. Every product customer is pure profit from day one. A typical funded startup needs 200-500 customers to break even; LetsBe needs zero.
---
## 12. Key Milestones & Decision Points
| Milestone | Trigger | Action |
|-----------|---------|--------|
| 10 active users | ~Month 2 | Product gross margin alone roughly covers the €400 gross overhead, independent of the enterprise contract. Validate PMF signals. |
| 50 active users | ~Month 5-6 | Consider second Claude Max seat. Start tracking NPS. |
| 100 active users | ~Month 10-12 | Evaluate: hire support? Increase marketing? RS upgrade demand? |
| €10K MRR | ~Month 12 | Serious business. Review pricing, consider annual plans push. |
| 200 active users | ~Month 14-18 | OpenRouter enterprise tier inquiry. Bulk Netcup negotiation. |
| €25K MRR | ~Month 22-26 | First hire consideration (support/community). |
| 500 active users | ~Month 24-30 | Scaling challenges: provisioning automation, monitoring, support load. |
| €50K MRR | ~Month 30-36 | Review: raise capital for growth? Stay bootstrapped? International? |
---
## 13. Three-Year P&L Summary (Moderate Scenario)
| | Year 1 | Year 2 | Year 3 |
|---|--------|--------|--------|
| **Revenue** | | | |
| Subscription Revenue | €58,032 | €230,640 | €461,280 |
| Premium AI Revenue | €4,959 | €19,709 | €39,417 |
| Server Upgrades | €1,123 | €4,464 | €8,928 |
| Other (Domains + Overage) | €916 | €3,642 | €7,284 |
| Annual Discount Impact | -€1,416 | -€8,070 | -€17,424 |
| Enterprise Contract | €18,000 | €18,000 | €18,000 |
| **Total Revenue** | **€81,614** | **€268,385** | **€517,485** |
| | | | |
| **Costs** | | | |
| Server (Netcup) | €12,917 | €51,338 | €102,676 |
| AI Token Costs (included) | €10,416 | €41,398 | €82,796 |
| AI Token Costs (premium, pass-through) | €4,508 | €17,917 | €35,834 |
| Monitoring + Backups | €1,872 | €7,440 | €14,880 |
| DNS + Support Tooling | €1,248 | €4,960 | €9,920 |
| Stripe Processing (3.5%) | €2,856 | €9,394 | €18,112 |
| Fixed Overhead | €4,800 | €4,800 | €6,000 |
| **Total Costs** | **€38,617** | **€137,247** | **€270,218** |
| | | | |
| **Gross Profit** | **€42,997** | **€131,138** | **€247,267** |
| **Gross Margin** | **52.7%** | **48.9%** | **47.8%** |
| | | | |
| **Cumulative Gross Profit** | **€42,997** | **€174,135** | **€421,402** |
**Note on gross margin:** Including the enterprise contract brings Year 1 margin to 53% — healthy for an infrastructure + AI platform and approaching pure SaaS territory (60-75%). Right-sized token pools (8-40M) and adjusted pricing (€29-109) deliver sustainable margins across all tiers. As product revenue scales and the enterprise contract becomes a smaller share, margin trends toward the product-only rate (~49%). Key margin improvement levers: (1) prompt caching (+1-2pp), (2) AI model price decreases over time, (3) actual usage below pool caps, (4) OpenRouter enterprise tier discounts at scale.
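The Year 1 column of the P&L can be cross-checked directly (a sketch summing the line items above):

```python
# Cross-checking the Year 1 column of the P&L above.
revenue = 58032 + 4959 + 1123 + 916 - 1416 + 18000        # 81,614 EUR
costs = 12917 + 10416 + 4508 + 1872 + 1248 + 2856 + 4800  # 38,617 EUR

gross_profit = revenue - costs         # 42,997 EUR
gross_margin = gross_profit / revenue  # ~0.527, i.e. 52.7%
```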
---
## 14. Assumptions & Methodology
### 14.1 Core Assumptions
1. **Launch date:** March 2026. Product functional enough for founding members.
2. **No salaries modeled.** Both founder and engineer working on sweat equity. If/when salaries are introduced, they come from gross profit.
3. **No paid marketing.** All growth is organic (content, community, word of mouth, AI agent hype wave).
4. **Tier distribution stays constant.** In reality, it may shift toward Scale/Enterprise as product matures.
5. **Premium AI adoption grows linearly.** 40% of users use some premium in Year 1, growing to 70% by Year 3.
6. **Churn improves over time.** From 8% in early months to 3% at maturity, driven by increasing lock-in.
7. **No significant model price changes.** If AI model prices drop (likely), margins improve. If they rise (unlikely), markup absorbs some impact.
8. **EUR/USD at parity.** OpenRouter bills in USD; Netcup and subscriptions in EUR. Modeled at 1:1 for simplicity.
9. **Annual plans:** 15% of customers choose annual billing by Month 6, growing to 30% by Month 18. 15% discount applied.
10. **Prompt caching adoption:** Modeled as reducing included AI costs by ~5-8% starting Month 3 (once the engineering implementation is complete). This improves margins but is not reflected in pricing — it's a pure margin gain.
### 14.2 What's Not Modeled
- Salaries / founder draws
- Legal / accounting costs
- Marketing spend (organic only)
- Office space (remote operation)
- Insurance
- Tax implications
- Currency fluctuation beyond 1:1 EUR/USD
- Potential acquisition / investment scenarios
These would need to be added for investor-facing projections.
---
*This is an internal planning document. Projections are estimates based on market benchmarks and pricing model assumptions. Actual results will vary based on product-market fit, execution quality, and market conditions. Updated as real revenue data becomes available.*

/AimHNOE)NZcS_^7g7St$74:)Bgc]jS(,1LJ*AG(59"c_E_d.08:MH65MQ8b*4#GaI;bJA(B1rSle,(iqUHBpPVV6Q0G:ZJG8YCJS9r:D*[9isQ]cB:A@_AZ_+jE;9?<ni">/@eanq/WA6He&5tK0]]CW@5iY3s?oJ!ehl8D`5)<)hD-`<lS18X+l\F5<ElS\;`S\t1h#_Zb_4G#M?VUPE+dg83ni-94Jkn\dG1bi*KLBLHKo%IZBFYN\7Ysq8X~>endstream
endobj
22 0 obj
<<
/Filter [ /ASCII85Decode /FlateDecode ] /Length 1955
>>
stream
Gb"/)9lo&I&A@C2i0\*f"H.1<D)]b9P:o#/F=%gSAK584BhHLmUqaW:;5,I/TO;r>9N."mQDl2"^jQ@\h@dULrkS*neV(3V+a\YJ$ksY0@UOjt_q47oE:DIe(%G9H1uA\9ZrN4p@ma&s5GNnq=F"10J5u'h#o?SEVLm17)97IFp>gA$R%G/\FSn)pW0CrXj6:6'2'1SXpZ:a7EO?Y0VK6nc6It\5GAkf7rVGO@nC>YH?eFiU)a:BYd)H)6FGtYfkPaVU@'8$H7*o7C-BsiVCaQN5`WNi#3Df).@(I4oOBr'.fS!BJKFKnuoiF*=_r(UfZGq'&eHYm`_)];Y@TYm\%q(mQ+E^ZX&gWmELALgQ0lU-IQ)kVr`!SF#>Sb;bhht@R528m[U[hXT8hL`DcldoLH8Pi9i^QqhWk$pW&8LppC'_UMVUJbFjg)BHVRHa+iu@l7<D5pEGa9=BP<rnIBP[7sFA/rF:s9%^H\d59K>>W]I<pcu+^5.hZ<.WB`2uKUZl?Qg7X3e87M8M7<(6,)dqmi3B-Wp!Cn8g#I9tQ;r_t[t)M4=aFWT7R2@QOaj^bEUqECkA:So%p'?2954ZbmGBi^IBUW*#28:i8M6o:9gSfhr`Ihq-ZOTs1X$Nj`0"ZX',/rTQ'qeM;:[VL2^P9/K&=qqL<+i%3m,Oj/R+&$u;Y_+V6q^E`lr2?DjMW=!/T0cFPlaRFN5HU`Sd;c-pH6HhQ;A,u&'1iC1?(P+;>ONr&'GR;':8Ul--:q'W%,>rt3X1_0b4M7Y:e:l$bc1t/.5SIZ75_),WstDkW57N$r!kUEWMk.Rir1`!f/dV*>E!GTVCW>EEp5H99:N3[;M)J&o@@5?:uL&#DK`1s^Z"XJ!QsG+<eoUt8!tB5XbZHcHi(AiHBX>S_P9^u:R<;>I4r(DQCIJLBC,'+R<HB6-@fR1'u6Se5];WQDn89+f*^G5*<7DOgSc#WF3k=])-IVZKq!]AX<#g?<8ltI'u!6'HDdU$@!tK,c4b$d!ZrMH7ej$ln:+Tdk4?`\I2Xa[VZ*]>_BSiOqL;KEO6Za/AYbb34?i=sXo>`ecC#oXIij,SorgMNM5.,$)7Xk1<rYd+Y'sjQmpkPB*CfrW".cBG&cW&ph#C@OK@\fu>Q!ep\5F2;0.!QXGXF\fruWB^9si!/2+d%Nn3=8aQ:C^dG&gS36?[N6?d'ic3T%%5[>r,Wc1@W-lN<Lq@R%pT@\;!j4@JOgPl7u[pnkis:X`cuo;q3`nM&5Fqa.SWIpW<keK=W?OS>B!De@J;kH,P[$/3H3Fb[7G"*5?6-8I9*/q*1N.:nU7nAueoF93<sf_"R@Z.r;l.Wd+b1(1T9/KE/Fa,GjE#,\IbU7jIcVN9(qf;*QQ<I-;-3QiWbRe@W#OeK>2WCCLdC=YH:88k!6+4DmtU\a&uZ&\H8_?nXGKEr<e;E6OmlIO:];Gl80*-RY-o'L=+FP8&S9V15!S@I>D[QB@N4uKnu&@P<=r8AuJ1[<dKGn](PPLo?#:Y+(I5Hm_KXO'IQl%-i$)trFbOq#T3[9[e'Yr/^>af*8`)U85m".-";ZA2#3M3hG;NL]7\8?3j')@`q$Tq-=%L^-!,^))t$XEu.Q39s*c6S]]rA;KnRLChoeZltPG;tCjS#-jke'[q-(e)$\/5a(09aWTo>3ghmaoS&Ii#"9S\q?8R$]k$d;fQS/`qQ`\I""/_+DDgN.*j^0u9WI-GdhbV`6s77?M]>\7Pe<<FA'7I$i.o6TUdLDHQ:]W^k:fI7C,D7t:5lj$P%3%!pC2*1dOX-^2CiJ!Y%*7*@J20M)r5X_q4Y"'AEY&fXi`%U<#YBo4-bADT2l)MfrO$Gp[N^CmYG'^ec`=CbKD,K<H+NsLB9QY<X"CZ>35)`UBD$4F)1bdm7)#\>dV_Vh)$K+C,`8TOH&Y5NqtDpFd`\1P]`F>U9(\Yf4>s\p]pR1.V8~>endstream
endobj
23 0 obj
<<
/Filter [ /ASCII85Decode /FlateDecode ] /Length 1935
>>
stream
GauHL>BeOU&:Vs/Ql#H1%T+NTP%DqQ[&)l+\!-pp?M`Zk8n3\E?or)7&,c,Afs0Rj\2D+7V\^FFd`aWdc$U0/'Rs6T5DT<"7rW^mJ8lE#J-dE-F9/;7_=tc\`O-)brgX8^8F?Kj;JRoSku$Rl88mkr[ZW6X;',HKE(q=GKS!&(#8\`5NXZt`s1gMT/t@X#*s\K(K<$j)C4\QAiXD-3cuT*Gh<!>cldO&)q9d0&(k%+c$O`Z-LFM5-T"Cubb.fgl2GOgE3A-X`\4[Dl&B@efAan['JA^1c?K9Sc-/'(FJTN`2)GsHUc/@X?4o2#'Xn_/ias("<LAia[`bHS'Mhb6s]>HE79Nc!@Aj+-P:JDjjUC6$G&bm&q3Lg:FS4$eI_&mWd)ZQ:0R^ceRq\CRi,?ei[KL4lP7bl/L[Fq%S>+mM:Q!'b2?A-2Rb+uq=7@#<`8_9>5ER\ljY"K``.NGPpU[.FiL.u=aY!!4]VBIfNH%A9!O9s,fY=)*tbu4nVECt@8V<K%VK\k`#;nEOu9nOnqf&JBFL!+aimHU&?F*Qrak+QqI\31aI81>2=i2t,M0PCn.GO^8FT%I9HB7I?D>N0@:XKmaIDASL:?>r]u(;ba#kU+.\FgYmQ_R>OGcV/^rDW'`reL.jja;3ee8Ai!?,ETjML[4+D][.UdCJLjn7+/t$Btp\+Gs1CRSP3kE>aifaIl>2$_a:U^>qGFRLc&EtK^6YfYKKgN@:/kFk`9?\JgANJN[6/WeZ,O,mgY45So`mNNuP:XT(K]fP:V@-:N+/G(N&RLK=+iYi^pKpo1mD?O5ueI0&;`J\eSM('9e36JYEo'">5E7`F%JF8c7H7=?h&!5@V4Q=)g5\3^GPLaWeG*j'.=51cB(!`S0/k=DM2'eDfu6q!I=#"hq1W@Z?uV%q@j!etJ1P3saJ21g<h]>!Y5,3;=.H6qNnEgPHCE[O2_A7\=$j1K,C-<ek)VV"P&RYLj>-J)T;:!V5'0-YoioMnI^nM&Z"sP]Bj3[V/BTm&O^=EM:"L5W:m(NK&oL/V4hk)b]s00;8'`-%)h\]FA9*>+FGGUt<\PBJ(]108L)9@]eM?d-e!#ZV@W&b.3[H!%im#V6]cB)kWHh&^"W][s&L&-L2,u#LCE0G(mSBMCF\j#\N$8eN9Ug?+l8Td$.(p3A_R-_]c&%>;=m@XC&_f6qPV4=pu1s>.uSmGShO/8u=-DNRa`>Z%7$BDP5OGeUnWN".6rl(D%+K@B]^Q,"tO<gGml2A_pJjhMQYa/2d1AQ]c]6F&[K$G#D:5JL:\hNGI\'MCEn+8>it-?c%0tAZN2&&16@!P*TWmd])bT?Ga4@lO_tJ7@9OF+Kb,8Jb^UsX*T+^?o/<K^gA^LN6?R8-$RuI.r!O6*`>>4Mdmlgmbf4[qD%c,5Kq"FVsMO"ABX0*dJ#jlQMYC1Qt'@*+;l1lqD[EHIVtDg(b`0K2TRbq-9qtWA7d_;Ar+M=A(uY(cp#Z35'46U,qYAAOsQo,?t/228Oe.Y2sk<T*MQf*Dl8#s6+E.-jn1InE]ETq.=<P=&=GNH0]G-\pe49,BuJPLNsW;9Z#b>#UsdD:8k<m,WmZ6RP,J%q'5-(;jB=6Y"icseIH;P>8=8rXb]lABQUu)';rq(-iSKCS<GI9[1g,PNNkTbW1Qu9VJ9d=%T([$)F;TL_=RZ5Z')jLA6TY[g!bWq?dc:G[[*eu9'"GYhn`qI;Jr"X52U(`D`l7=kCT$pLb%1g-pSP1c2ddHC6bbe#H5m(EgAZ]+EP+hPlrS7%pSYdiW.!J@N--rF5*#QLdV9W8RK%_/D26!@n%c/5l:+?OHr"[8(]GTgVJmS09pXOGfcf$q!`AHOQOHLqHg,W$14:T8NH8V72.NrL2uV=sUc69uc+As1otK7G;a\cY5L;*\bLfu(oh,a&pUmh9/pSNIrWVq%ja7~>endstream
endobj
24 0 obj
<<
/Filter [ /ASCII85Decode /FlateDecode ] /Length 1839
>>
stream
Gau`U?#SIU'Re<2\1[s4/nRY3U*pF#OFCh5dPUL;L:eoiD6qcJ-dch\?b]7WQL-;^<KH>T`!1?USu[3_a`D/#'`ZhV1X,r,%cR%3"3+b/"t#.Vi<?k3*rUMoZ:*!2g?1ln+QP6g&Lta'I`I\l&upd-^fIdsP/s$2%(IL4O@$f<LZ8bZl^&MrRW!/cGm&%[OVFcd1:XL0:HGn\/8A9@66#g&%p*\q?ba,iStB:NKAe0p5'JVqj"O&V30&0%Y*4\H[5EQS/;SNVKLZa4"qj=%6k!ti*HCjp<,DJ41a4C^O@'j5&h<N_r@!<QUQ67Y27DSH0fTtH;DO1YRL+'*@'KH-1l1sO!MU^,X/Bkr0^0e*W=/387,V)^[BX=PliWam8bPuMHL"j#FuU'fU-]TrS=hBU\ARIr$[4\[n1Rj<0R,IG<s>,WZQuqtGC3PRFHor;\#D!Cb*7+r3dkaq.0Uq^."]-`.<(g:+;DPBMA6PeJkErU3AQB&$!_@fo)^2o)'m`*+PCZEp3U<]joDIbj<>5!E#B;F-:AeoOK*T&m4k*'E]5c2RA;##Z7P?!HY?mC)*TAqBu5'2b+B*f0U3Eb'7gN_@Y"P@'Hu+aJ6N/5,s=4cLi@W\oA0%\3H!(QH]k),bsi`^lp7!E.Ip3#Ulc;GT!7ZG$%B_@?d>ap9l3W$`3O<,o7kMaE'K-7ckN8Pq=KDba!hJN$'0]+]a\C5TnSNGqkVf)\rUAVksM7*GS\I#7!_4aZ3QNKclA1"SO='>Jk-Z(6AlhG1L8TsOH886=jSB8]qL..o$JDFUG:[S>WnL9^&gkHNG2s#g@S!*Dn)f;*r7oX7%Z=.80;Il+b)$Ns1nY17jqgcnZ.o8&W5PWd:"GTEN1[-&XI@R&[n4n/XGm1cfr_b]'6]Sj7-iZfJZ:J)GnLl)6gM,"?RY.S#K5oD&KS.(g%oL2<.qT)m'M'/Le`G1Ui@p*4SGqG]DlSYqggqdbZb;W4f0L>X%"qd^DMmZ<=Qs3aOQ^/=Csncik[1>RojG6('@EqVAFUC+GijHnmRCl.rZZYYu#*o@angAYRPr'8%%NX=D91-MNp_d9M;UU!Q@gpnk))`oa<C6!DjJ-G)V;fIVr3)X90$oEc5.KmI&WbTb0MrW37=';>U?4+e6%(aP'NT*g=jNN#.[=^C'WkIRnpb8&unaFHanaq_q7!VM>E#Lk:(;j"sG4\32s\9DlUN"a@Nkht-`MR7a0=HP!?j+HO00V<K"VI_dQ4t39QSC?t8N(Y_j-],J&p^\+"no]&%R-EV-S6K:8=DKTZ>%n\mq^M1"_13&JfaRGIR[>Jfi\e.#E%f?eqapL9p=`J,[;^;;%c\hXmZY">9sASs4Mb?<S'GZVkF6Q\0"g-7Qi.>>F(,[h_B31<)f@<`]qN-kp03)^P[sJH9\Md=i^$c#b+4ts;!kB^I]`;0<pS'cdPG-DM:LFO=Nds.o\6>_>D+%Fo?<pc)<ks%e)&G)ouPAKrFC6WI!7`,&?_1jKN#B;Kt0mkp.9(=L%j8GPLMIKWcA=:1oY*?_g]J93Ue(I-Ed%hWOiskbI(F0rChQj]1u0af?(lDP8\njPg3pq\f5r)H8OkM-p:VoIlV5'3M#1K#:)JN\LYdOl%d<$Q7j$Yb8WMUd*m9BX4+`JTtu+bpJ#ODJCg7tW@Y9r\Z^!O08la<p*CB@+'r8iQ$5Ahg7),F?pZO4]0?&gPM,V_G?uhBD,CZa\+aH3NA.Wr=0O`XAU%fNp?)<l:g@;9I4rE)^H7ZSa2]i1[O6rVY!-=J)V"2g$iQLf/lX`EEI%#YDJZst0lL+;d_9U8ls.tnjlOJ*]!8\&="eOb%t,J,K`~>endstream
endobj
25 0 obj
<<
/Filter [ /ASCII85Decode /FlateDecode ] /Length 1942
>>
stream
GauHN>Ar7S'Roe[i6)>6;cqm0O%FCYL,;J3GJF3nBTA7SN?qAddpg(@Uugg'6o*pH$q=^Qa5lucoC7otZm[-d75t80$\XP;keei<PbI=^U&k]Z]g*sodGL_$7=0Zd5KA@qaXk:q)T.5T[tIq=ML463DF^$#/'BJ4b[<&f:h[F*j,9^<*@9ntr<LRae*&X5ibY4+O6F0r^F&P-c*J.2U6npRO&q[GkJns:e+:J8GA-gf:#C:)D&fQ0[oR]mY2fB^fiAW".G%6;bVX=H&a!7':!q5=LnX:4XPmrN`^U?C5a!';NFVMVd3G*=5.e<e@k5"Tj2Sr*.I,nqKoOY<(7l$]5'A#C)5>7Y'2ou-g3P]2>NS8.`Wi4&6]iW!IjT/dXuAP'J$%Q[ctrV/^J0Tq9[I=AA9!aoZZduc5BdCa:'k[?A6Mt9qr(g'#i])3@44%R)K\4R.M?jthM';[:cR9J'fp0C]a/?H;IR8+1Ag_'C7eh.m8m[CC=jS(<:1D.9(rXHfd@PUO&5=),N2Z5pSsD6cZ"m*Ag(UU1/b$.)aX<<fUehe)Q]dBc:4hefcIB>,NabeF:^?%2;Y,8m7dUn-)R44jS5g#Hu:C!?FZUGT>)58D#-:"Pn"%anE7/j<k,>@.O9!P@B,,AU7Vd@6077iO-@a?-b[5P=qU@(a!T*8D,#sqgQYpP2%]f0qQJG^8=+cA5rZnaP-'r(cG8Db(a247P*=_N&JgX(()bR3>:T[]bGg@-l%aE@C'oW<>pP>M]g.A9s$9gBFRQ>jX>0h!Jq>`uE?bO<"\$s&U9oEtF1OsMQ*a-QPSE?)o"KouP?!.3@*kCeM3J;Y'bjV;Y&Flt'\63\.9Am*3^7'2Y0$tsl"ka;0B1IroY7(MVm3;E#AVdY?SEC`^6H:c?/O'H'N@P6@&7Q/L%Wr4!HKl,)-!;KQt:&WAiiaho6d*ok^9Xok@K1N?jmMpD2IXUUIJHie51.9IGLkTJ_JeTH0N6VhdjQR2pkWC^C*V8:fM1@U]q0o7TE8e_+Z,/lR,#G:J>]Z>2aa<C\dJBVVF0JC9RGX2473_:E+%?0Dd\E2>]=KNkL$J8[mg&Cj_Y8oCXkL_h&!J-Fsll#7+F25p@1%s'\cagi7_7&o>`P8(7B?ieilO36gTHleCJY86dhQRfcTb7DOoe$gM>'d+S\^b1)HJNH1to[&r_9HV@\aX49N3:ups>LEN/(fMj^0*Upbi)&/>*_2($2K;'hUGk9j8Al'l(:O/7lH')_5mR2%/cbHTK!Tum2r2Sm9]"\YhPQ_552,TI>Yb(EA%4<i\.N+7"-Vi(2HF_7XdcGNKkVNcG(5tMT//&%C!O/5+0ZK*XTV,o;Y%th1WNo']M>qF2SubsVDeER?jY%bb1)Pj?>H7i9'qgpD18=f[V3VnJ:D)92,I'QRC5Y;NIjDLmCRp5+^>1@g(iY`t6PHtqo7+][jTNP%&K+8A$%(b5R'4Y_%2B;Zq?\<S>4:#T'\&RBL#aR/-t>(XH71dbs&oB0WJO=K8cQ1i+V(oDSD@FE-Kt<glL]HBn<5$V#0)%=h612W>9kJm<KWOI\+[%t&fksMFDF[Gp5cO(?`h\Q\@>>@OR!i.=<Up8&5[P$iE<8YnQOO<:j<2&IQWsX:tKUN`VM;Ae#*=K9"M%g/e,=T#<h^7-77sL#)e9EfU>)ZEgVirH-=[oh;?fr2V[E=N)<8AKPgC\e]ldgNV0u6c>(`h;WF8G$+"HcTOf7:O%JYqXfq@*`2W$h>dH>NDD0Gr0[jImS$bhUNla.oP@P\tCXsihU#DKKM03M,Ao/_t)Pj(cAoHGbS$g@[B_Z1V;B6[i=drrWL2YY?Y"l>=?9bRG-JG1[\hlrF_fT"JL4nu6cj^03@3?KjU4Y*;GhtaLm20q4YVf8#Wo_#pW#Nu\@`DHJlg8hiZ#_.j/G&SHnSJ6<H(iX#LV2F*~>endstream
endobj
26 0 obj
<<
/Filter [ /ASCII85Decode /FlateDecode ] /Length 2460
>>
stream
Gau`U=]=B@&q7#kOY9DT*L,<A0:CIK%Ub_3Af@,8C1^s#GS.B_;WA-h/mod`[r9Dj\`gV4*P4Tr`n'[V^<Dfs;Zoel+k?$("ur0jVfMom_Xnn*D:gQ@$=B[*a&1=Fkm#^e2f*4he-Em7(o&uQI?XFYj>/rL1p]9HiVM%O5ac'6Y3j(_L;U2]'rM+.jhKc?;@h&a_NR7U#d3dkLBs,<P6)N<0^JG=Dcc*<^OE_^?CZEA!7N#DrMW0Kit,rbVWY)@>k=?`@ePNn5-[68J3c=3jT`Wn>h^KBo@o)<djS?+bY6>jSddD6(,Hqb@^MaOR#Ja(0eKn.RkZmh_-E<LTf$ba,MCWL;,`\Ql'qfp24FftP`agdR?gX26TF]Ipd;^0V2Rg9dR4f=$iDIU=WfYBS]X>8]lea1&ZAoeM\8,$F)-=b_r8IP*3s04@PHq#(_[+aJGYBu)%pPVUA^N1_d=F<(=O5amPg#kmhfTAG75\`\j:#npf#qIed'W&ffe`(^/HWqMq7h!#;%f+p!@qpQ:fDWSa&,uM3ntdOM)UlbB&E`,!q[>==1B!36X;e@H?60-^@?=Rbt3d[2b@s*"o0W60d4W`?;U!U.PiG2TkC2U6&HNi.Id]$NXf^o$'AFEB>$.hj8&Ok!8[6GqdIk>jr!'&.,#]hofZ^Fc[B=0]4T*acJ"Rf9bYsaB?BT_Up9*$beie8Br.h0GncH^GC%WCLAjC1H^M)-up_(O<u-@7D'$4>k2P*_Nb-8`H)24kJ783+C&2J[V::qK=Sf;eM%h\UBY1faW@T=)!mRmO%QeRA!fpE_i$:ndTh3TM^1P0.0RROIh2s0n9GEAXuDG%\(!6kq$10]h!Chc0[Q-^4a8%WF!>XmQ3H'/Ku>ABGFfbYOQ$SP>R&:C&S,H-#G*1e`c#"uPHET$6O&^3^=<5g6Z0;>@uRd!&VIoKb=$B7Q4!@1Ol10=+0Aj@OW/7C7LsGa<)@1gLfp`'f*D"-79Dnt;;)\^Z_&TR#25ZfFA2Eg%0hGJQ[,:i%4Gm"L5KZS4`!$P:I,o>AKo[Jm>#QNPMqP^[g#5&T?#G&>NQt]d87'2!$e+HG)RA:\u=%#II?Z(V/`>i,DblR9(Qe=C=KQ>'*`O]B?qf2i5=2&a4<G'0G%ugLE8h.od1CB9I_*3f"o]@gngu"0HeLn&aJs*pSt%;%/*dJj6udC`s<@E3[&.[V&guII\!6C@ZJf2?%NtJZT7ua;t6Xr\CY,kHo?<lHX/O'6mu8R_Ukps.I/qOCYRpR#j*e<@k0f\.@FuG)dL!jYZL>]-+pX;DCr]H.P!*\n"QY(^/M/AHWdM:0WVVhh;5iPmGD9_8R(4oI+*%j?\K<^44r%*?CPVGWB'l]+'DW6BH-%Ocmok`0`&:M_@j!:"YN4l1"b,a"L0On0[&=V#i`17e3HiYD_FddC[3<c.86DX^8ik0'S^LpG7+Wjm(31CVsbGr%PQ<##l[815AZ"ZLi6+IBu2sj?J35G;54sZA?PXekQT8oAc>l+2*;D(lQ-%5Cp!h^9Fh?kN4f<mo&T?U-Cdl?BR.D/s0k*B$;A8,RG<GM_-h,,loGB`aW1f:292;`l-2iU)Kt[5=H%TMEX(.njc0QV]BO00f1eu.C3Koe=Yr,9#>cGMWVPZc]q]hVV6XBg"30S4Vgf^3[C[k./j:ZTR-\F+0KeZgJ3p^Zl@0@]T3&GID[V]R"V:>]e,b2LgGD.6koP"/BWZ!-\HTk0L7hj.M+Q(Jm9os-g0Y.CIC<K6h=Xto9)#4Y5DHnj[VN!HTl'\Ta^u-c<fh4e?CL#9JZhe)@Jf`ZhV's<_AKlOn2NJ='6ld96ek*3A%Oul&BRV@f8Ouk8[-Id8o7%kG:Q,kbO47nTeZ4HS8)%#L(ZJi?;>"8jo"mtJ:fGe\n4]1UUGg!C-0rVlsR8-RYP$;)o*Hmq%8e;JOmYM<eRo8JZXni<h"5Y`S-'WrIEU:"7"KXRC;aq@2U!)gCahP[;hpJ9kdT[Up![A\DW5'
1N<irmZBE>#KQtF1FCUsL+%H_3N+r.e^\OA-Pt=t$\bbUQPnp.f6M\'$1FFDUr%Blr/L7P(l"o-Y6&dDdhQu\YqFGZV`A_+09ZruS!i2+@d_N-#p(%)Q%F7`Eoj7bVlL`HGV!DY63APY\fV^o_.?X8\BiV@RKC^cqRO8Dd>T[/1`0[#`r+)eQ5q%@]]R8ZCZFh?SKkiC%iLX(ZT_YAY*KZs5oGSK)(RZG*0!?#9#b+m!q(<,Q'F0@I^g%_0\:g?3Q&98NCS7B,2bme``Y0Y!@NHH7Hn%kA[K*&+.J#LTF!??5/H3AN$;AC9HZBd/5O/=:u_`j<IKP#&OBQbqAsWK<;TMrhK7.H6a?``n;-7*'Lr^k)5:4,F[sX@#q.QNUO'H0cQM7!69FL6hMQ#Ji[(S(=/Lu8cV36kKt'ZY;mii`84ipXPp7j]RNG;0i%i_EqV!?*oH0p\aPQ~>endstream
endobj
xref
0 27
0000000000 65535 f
0000000061 00000 n
0000000102 00000 n
0000000209 00000 n
0000000321 00000 n
0000000526 00000 n
0000000731 00000 n
0000000936 00000 n
0000001141 00000 n
0000001346 00000 n
0000001551 00000 n
0000001757 00000 n
0000001963 00000 n
0000002169 00000 n
0000002375 00000 n
0000002445 00000 n
0000002726 00000 n
0000002845 00000 n
0000004602 00000 n
0000006625 00000 n
0000008557 00000 n
0000010450 00000 n
0000012799 00000 n
0000014846 00000 n
0000016873 00000 n
0000018804 00000 n
0000020838 00000 n
trailer
<<
/ID
[<d0783d6505941bda00cbcb1d4441a406><d0783d6505941bda00cbcb1d4441a406>]
% ReportLab generated PDF document -- digest (opensource)
/Info 15 0 R
/Root 14 0 R
/Size 27
>>
startxref
23390
%%EOF

View File

@@ -0,0 +1,569 @@
# LetsBe Biz — Pricing Model & Cost Analysis
**Version 2.2 — February 26, 2026**
**Status:** Working Draft — Confidential
**Companion To:** Foundation Document v1.0, Technical Architecture v1.1, Product Vision v1.0
**Supersedes:** Pricing Model v1.0
---
## 1. Executive Summary
This document is a comprehensive revision of the LetsBe Biz pricing model. It incorporates updated AI model pricing (sourced from OpenRouter, February 2026), a simplified three-tier structure, bundled server costs within subscription pricing, unlimited agents, and a prompt caching strategy to optimize AI costs.
**Key changes from v1:**
- **Three tiers instead of four.** Dropped the underpowered Starter (4c/8GB). New tiers: Build, Scale, Enterprise.
- **Updated AI model lineup.** DeepSeek V3.2 as default; broader included model pool; Sonnet 4.6 and GPT 5.2 as premium. Claude Opus 4.6 now offered (credit card required).
- **Sliding markup scale.** Higher markup on cheap models (where users don't notice), lower on expensive models (where every penny counts). Replaces flat 25%.
- **Simplified model selection UX.** Basic settings: "Basic Tasks" / "Balanced" / "Complex Tasks." Advanced settings: pick any specific model.
- **Server bundled in subscription.** No separate "hosting" line item. Price includes the recommended server for the user's tool selection.
- **Unlimited agents.** No hardcoded agent limits. Users get all templates plus full customization.
- **OpenRouter platform fee (5.5%)** factored into all cost calculations.
- **Prompt caching strategy** identified as a major cost optimization lever, especially for Claude Sonnet 4.6.
**Key finding:** With DeepSeek V3.2 as default ($0.33/M blended) and GLM 5 included for Complex Tasks ($1.68/M blended), LetsBe Biz prices at **€29-109/mo** with **45-57% gross margins** on full pool consumption (higher in practice as most users won't exhaust pools). Premium AI metering generates significant additional revenue at 8-10% markup. Prompt caching improves margins by 1-2pp from Month 3+. Founding members get 2× included tokens for 12 months — all tiers stay margin-positive.
---
## 2. AI Model Lineup & Pricing
### 2.1 OpenRouter Base Prices (Before Platform Fee)
All prices per 1M tokens. Sourced from OpenRouter, February 25, 2026.
| Model | Input/1M | Output/1M | Cache Read/1M | Cache Write/1M | Context Window |
|-------|----------|-----------|---------------|----------------|----------------|
| DeepSeek V3.2 | $0.26 | $0.40 | $0.20 | — | 131K |
| GPT 5 Nano | $0.05 | $0.40 | $0.005 | — | 128K |
| GPT 5.2 Mini | $0.25 | $2.00 | $0.025* | — | 200K |
| MiniMax M2.5 | $0.30 | $1.20 | $0.15 | — | 256K |
| Gemini 3 Flash Preview | $0.50 | $3.00 | $0.05 | $0.083 | 1M |
| GLM 5 | $0.95 | $2.55 | $0.20 | — | 128K |
| GPT 5.2 | $1.75 | $14.00 | $0.175 | — | 400K |
| Claude Sonnet 4.6 (≤200K) | $3.00 | $15.00 | $0.30 | $3.75 | 1M |
| Claude Sonnet 4.6 (>200K) | $6.00 | $22.50 | $0.60 | $7.50 | 1M |
| Claude Opus 4.6 (≤200K) | $15.00 | $75.00 | $1.50 | $18.75 | 1M |
| Claude Opus 4.6 (>200K) | $30.00 | $112.50 | $3.00 | $37.50 | 1M |
*GPT 5.2 Mini cache read estimated at 10% of input (standard OpenAI pattern); exact rate not published.
Claude Opus 4.6 pricing estimated from the Opus 4.5 pattern; confirm on OpenRouter when available.
### 2.2 Our Actual Cost (Base + 5.5% OpenRouter Platform Fee)
| Model | Input/1M | Output/1M | Cache Read/1M | Blended Cost* |
|-------|----------|-----------|---------------|---------------|
| DeepSeek V3.2 | $0.274 | $0.422 | $0.211 | $0.333 |
| GPT 5 Nano | $0.053 | $0.422 | $0.005 | $0.201 |
| GPT 5.2 Mini | $0.264 | $2.110 | $0.026 | $1.002 |
| MiniMax M2.5 | $0.317 | $1.266 | $0.158 | $0.696 |
| Gemini 3 Flash Preview | $0.528 | $3.165 | $0.053 | $1.583 |
| GLM 5 | $1.002 | $2.690 | $0.211 | $1.677 |
| GPT 5.2 | $1.846 | $14.770 | $0.185 | $7.016 |
| Claude Sonnet 4.6 (≤200K) | $3.165 | $15.825 | $0.317 | $8.229 |
| Claude Sonnet 4.6 (>200K) | $6.330 | $23.738 | $0.633 | $13.293 |
| Claude Opus 4.6 (≤200K) | $15.825 | $79.125 | $1.583 | $41.145 |
| Claude Opus 4.6 (>200K) | $31.650 | $118.688 | $3.165 | $66.465 |
*Blended rate assumes 60% input / 40% output token ratio, no caching.
Opus 4.6 pricing estimated; confirm when available on OpenRouter.
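The fee-and-blend arithmetic above can be reproduced with a small helper. A minimal sketch, assuming the stated 5.5% platform fee and 60/40 input/output split (function names are illustrative):

```python
# Reproduce the "Our Actual Cost" table: apply the 5.5% OpenRouter
# platform fee, then blend at 60% input / 40% output (no caching).
FEE = 1.055  # 5.5% platform fee multiplier

def blended_cost(base_input: float, base_output: float,
                 input_share: float = 0.60) -> float:
    """Blended $ per 1M tokens from OpenRouter base prices."""
    return (input_share * base_input * FEE
            + (1 - input_share) * base_output * FEE)

# DeepSeek V3.2 ($0.26 in / $0.40 out) and Sonnet 4.6 <=200K ($3/$15):
print(round(blended_cost(0.26, 0.40), 3))   # 0.333
print(round(blended_cost(3.00, 15.00), 3))  # 8.229
```

The same helper reproduces every row of the table from the base prices in Section 2.1.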
### 2.3 Model Selection UX
Users interact with model selection through two interfaces:
**Basic Settings (default — no credit card needed):** Three simple presets mapped to the best included models, ranked weakest to strongest. Users pick a "mode" — they don't think about specific models. All usage draws from the included token pool.
| Preset | Maps To | Blended Cost | Use Case |
|--------|---------|-------------|----------|
| **Basic Tasks** | Gemini Flash / GPT 5 Nano | $0.201-1.583/M | Quick lookups, simple scheduling, basic drafts, data entry, status checks |
| **Balanced (default)** | DeepSeek V3.2 | $0.333/M | Day-to-day operations, most agent work, routine business tasks |
| **Complex Tasks** | GLM 5 / MiniMax M2.5 | $0.696-1.677/M | Multi-step reasoning, analysis, complex workflows, report writing |
These three presets cover 90%+ of daily usage. Non-technical users never need to go deeper. The included monthly token pool (8-40M depending on tier) applies only to the included models these presets route to (DeepSeek V3.2, GPT 5 Nano, Gemini Flash, MiniMax M2.5, GLM 5).
**Advanced Settings (unlocked by adding a credit card):** Full model catalog with per-model selection per agent or per task. This is where power users, agencies, and anyone who knows what "Claude Sonnet 4.6" means go to pick exactly what they want. Premium models (GPT 5.2, Gemini 3.1 Pro, Sonnet 4.6, Opus 4.6) are metered — every token is billed to the card at our marked-up rates. Premium model usage never draws from the included token pool.
**Gating logic:** No credit card → basic settings only (3 presets, included models, token pool). Credit card added → advanced settings unlocked (full model catalog, premium models metered to card, included pool still available for cheap models).
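The gating rule above can be summarized in a few lines. A minimal sketch, assuming hypothetical model identifiers (the real catalog and IDs live in product config, not here):

```python
# Hypothetical sketch of the credit-card gating logic.
# Model IDs are illustrative, not actual API identifiers.
INCLUDED = ["deepseek-v3.2", "gpt-5-nano", "gemini-flash",
            "minimax-m2.5", "glm-5"]
PREMIUM = ["gpt-5.2", "gemini-3.1-pro",
           "claude-sonnet-4.6", "claude-opus-4.6"]

def available_models(has_credit_card: bool) -> dict:
    """Included models always draw from the token pool; premium
    models unlock (metered to card) only with a card on file."""
    return {
        "presets": ["basic", "balanced", "complex"],
        "included": INCLUDED,  # draws from included token pool
        "premium": PREMIUM if has_credit_card else [],
    }
```

Without a card the `premium` list is empty (basic settings only); adding a card unlocks the full catalog while the included pool keeps working for the cheap models.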
**Future: BYOK (Bring Your Own Key).** Deferred to post-launch (see Foundation Document decision #41). The orchestration layer will be architected from day one for provider-agnostic key injection, so adding BYOK later is a configuration change, not a rewrite. When launched, BYOK users will pay the same platform subscription fee (hosting + orchestration + support) but supply their own API keys, bypassing our AI markup. This means higher platform-side margin per BYOK user (no API cost absorption) while those users lose managed model routing, failover, and caching optimizations. BYOK will likely be gated to a Pro/Developer tier feature.
### 2.4 Model Tiering & Markup Strategy
**Principle: Sliding markup scale.** Higher percentage on cheap models (where the absolute dollar amount is tiny and users don't notice), lower percentage on expensive models (where every cent counts and we don't want to discourage usage of our most powerful offerings). This keeps pricing fair and encourages adoption of premium models.
**Included Models (no extra charge — covered by subscription token pool):**
*Current selection — model choices not yet final. All models in Section 2.1 remain candidates.*
| Model | Blended Cost/1M | Preset Assignment | Notes |
|-------|----------------|------------------|-------|
| DeepSeek V3.2 | $0.333 | Balanced (default) | Default for everything. 90%+ of GPT-5 quality. Best cost-to-performance. |
| GPT 5 Nano | $0.201 | Basic Tasks | Quick lookups, simple classification, formatting. Cheapest included model. |
| GPT 5.2 Mini | $1.002 | *(candidate — not yet assigned)* | Strong mid-range. Could replace or supplement other included models. |
| Gemini Flash | $1.583 | Basic Tasks | Fast, 1M context. Alternates with GPT 5 Nano for basic task routing. |
| MiniMax M2.5 | $0.696 | Complex Tasks | Strong multilingual, 256K context. Shares Complex preset with GLM 5. |
| GLM 5 | $1.677 | Complex Tasks | Strong multi-step reasoning. Highest-cost included model. |
The five currently selected models (all except GPT 5.2 Mini) stay under $1.70/M blended. Heavy usage (20M tokens/month) costs us roughly €8-10/month per user depending on model mix. Including GPT 5.2 Mini would add a capable mid-tier option at $1.002/M.
**Premium Models (metered — billing/credit card required):**
Markup decreases as model cost increases. The absolute margin per token is still meaningful on expensive models, but the percentage is lower so users aren't punished for choosing quality.
| Model | Our Cost (Blended/1M) | Markup % | Our Price (Blended/1M) | Margin/1M |
|-------|----------------------|----------|----------------------|-----------|
| Gemini 3.1 Pro | $6.330 | 10% | $6.963 | $0.633 |
| GPT 5.2 | $7.016 | 10% | $7.718 | $0.702 |
| Claude Sonnet 4.6 (≤200K) | $8.229 | 10% | $9.052 | $0.823 |
| Claude Sonnet 4.6 (>200K) | $13.293 | 10% | $14.622 | $1.329 |
| Claude Opus 4.6 (≤200K) | $41.145 | 8% | $44.437 | $3.292 |
| Claude Opus 4.6 (>200K) | $66.465 | 8% | $71.782 | $5.317 |
**Note:** Gemini 3.1 Pro pricing confirmed on OpenRouter ($2.00/$12.00 input/output per 1M). Blended cost $6.330/M places it in $5-15/M threshold → 10% markup. GLM 5 moved from premium to included (Complex Tasks preset, Decision #33). GPT 5.2 markup 10% per threshold (Decision #35).
**Overage markup (when included token pool runs out on included models):**
| Model Tier | Models | Overage Markup |
|-----------|--------|---------------|
| Cheapest (< $0.50/M) | DeepSeek V3.2, GPT 5 Nano | 35% |
| Mid ($0.50-1.20/M) | GPT 5.2 Mini, MiniMax M2.5 | 25% |
| Top included (> $1.20/M) | GLM 5, Gemini Flash | 20% |
**Note:** Model selections are not final — all models listed in Section 2.1 remain candidates for inclusion/exclusion. This table shows overage tiers for all models currently under consideration for the included pool.
This means overage on cheap models is almost invisible ($0.33 → $0.45/M, user barely notices) while premium models stay competitively priced.
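The sliding scale in the two tables above reduces to simple threshold functions. A sketch, with thresholds inferred from those tables (model assignments are not final, per the note above):

```python
# Markup thresholds inferred from the overage and premium tables.
def overage_markup(blended: float) -> float:
    """Overage markup on included models, by blended $/1M."""
    if blended < 0.50:
        return 0.35   # cheapest: DeepSeek V3.2, GPT 5 Nano
    if blended <= 1.20:
        return 0.25   # mid: GPT 5.2 Mini, MiniMax M2.5
    return 0.20       # top included: GLM 5, Gemini Flash

def premium_markup(blended: float) -> float:
    """Premium metering: 10% up to ~$15/M blended, 8% above (Opus)."""
    return 0.10 if blended <= 15.0 else 0.08

# DeepSeek overage: $0.333 -> ~$0.45/M; Sonnet <=200K: $8.229 -> ~$9.05/M
print(round(0.333 * (1 + overage_markup(0.333)), 2))   # 0.45
print(round(8.229 * (1 + premium_markup(8.229)), 2))   # 9.05
```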
**Claude Opus 4.6 — Offered, Not Subsidized:**
Opus 4.6 is available through OpenRouter with metered billing. Not BYOK — we route it like any other model. But:
- Requires a credit card on file (enforced in app).
- Visible only in Advanced Settings (not in the basic presets).
- 8% markup keeps it competitive — users who want Opus are sophisticated enough to know pricing.
- At ~$41-66/M blended, even light Opus usage (500K tokens) costs the user ~$22-35/month. This self-selects for high-value users.
- Estimated Opus pricing based on Opus 4.5 patterns; confirm on OpenRouter when Opus 4.6 is listed.
### 2.5 Prompt Caching Opportunity
Cache read prices are **23-91% cheaper** than standard input prices (around 90% on most models). This is a critical engineering opportunity.
**Cache savings by model (read vs. standard input):**
| Model | Standard Input/1M | Cache Read/1M | Savings | Impact |
|-------|-------------------|---------------|---------|--------|
| DeepSeek V3.2 | $0.274 | $0.211 | 23% | Moderate |
| GPT 5 Nano | $0.053 | $0.005 | 91% | High |
| GPT 5.2 Mini | $0.264 | $0.026 | 90% | High |
| MiniMax M2.5 | $0.317 | $0.158 | 50% | Moderate |
| Gemini 3 Flash | $0.528 | $0.053 | 90% | High |
| GPT 5.2 | $1.846 | $0.185 | 90% | Very High |
| Claude Sonnet 4.6 (≤200K) | $3.165 | $0.317 | 90% | Very High |
| Claude Sonnet 4.6 (>200K) | $6.330 | $0.633 | 90% | Extreme |
**Architecture recommendation:** Structure the agent framework so that SOUL.md (personality/domain knowledge) and TOOLS.md (permissions/API schemas) are sent as cacheable prompt prefixes. These don't change between requests, so every subsequent call after the first benefits from cache read pricing. For a typical agent call with 4K tokens of system prompt:
- Without caching (Sonnet ≤200K): 4K × $3.165/M = $0.013 per call
- With caching (Sonnet ≤200K): 4K × $0.317/M = $0.001 per call — **10x cheaper**
At 1,000 agent calls/month per user on Sonnet, that's ~$11.39 saved per user per month (the prefix costs $12.66 uncached vs. $1.27 from cache). At scale, this is massive.
**Decision: Build prompt caching into the agent framework from day one.** This is not optional — it's a direct margin multiplier.
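The per-call arithmetic above can be sanity-checked in a few lines (a sketch using the fee-inclusive Sonnet 4.6 ≤200K rates; the one-time cache-write cost on the first call is ignored for simplicity):

```python
# Cacheable 4K-token prefix on Claude Sonnet 4.6 (<=200K,
# fee-inclusive rates). First-call cache-write cost ignored.
PREFIX_TOKENS = 4_000
STANDARD_INPUT = 3.165 / 1e6  # $/token, standard input
CACHE_READ = 0.317 / 1e6      # $/token, cache read

uncached = PREFIX_TOKENS * STANDARD_INPUT   # ~$0.0127 per call
cached = PREFIX_TOKENS * CACHE_READ         # ~$0.0013 per call
monthly_saving = (uncached - cached) * 1_000  # at 1,000 calls/month

print(f"${uncached:.4f} vs ${cached:.4f} per call")
print(f"~${monthly_saving:.2f} saved per user per month")
```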
---
## 3. Infrastructure Cost Breakdown
### 3.1 Netcup VPS G12 (Primary — Shared vCores)
Unchanged from v1. AMD EPYC 9645 (Zen 5), DDR5 ECC RAM, NVMe storage, 2.5 Gbps networking.
| Plan | vCores | RAM | Storage | Monthly | Per Core |
|------|--------|-----|---------|---------|----------|
| VPS 1000 G12 | 4 | 8 GB | 256 GB | €7.10 | €1.78 |
| VPS 2000 G12 | 8 | 16 GB | 512 GB | €13.10 | €1.64 |
| VPS 4000 G12 | 12 | 32 GB | 1 TB | €22.00 | €1.83 |
| VPS 8000 G12 | 16 | 64 GB | 2 TB | €32.50 | €2.03 |
### 3.2 Netcup RS G12 (Premium — Dedicated Cores)
| Plan | Cores | RAM | Storage | Monthly | Per Core |
|------|-------|-----|---------|---------|----------|
| RS 1000 G12 | 4 ded. | 8 GB | 256 GB | €8.74 | €2.19 |
| RS 2000 G12 | 8 ded. | 16 GB | 512 GB | €14.58 | €1.82 |
| RS 4000 G12 | 12 ded. | 32 GB | 1 TB | €27.08 | €2.26 |
| RS 8000 G12 | 16 ded. | 64 GB | 2 TB | €58.00 | €3.63 |
### 3.3 Hetzner Cloud CCX (Backup / Overflow)
Used only when Netcup pool is exhausted. Hourly billing. Post-April 2026 prices (30-37% increase) make this significantly more expensive than Netcup.
---
## 4. Three-Tier Pricing Structure
### 4.1 Why Three Tiers (Changed from v1)
**Dropped: Starter (4c/8GB/€29).** Rationale:
- Most target customers (SMBs replacing 10-30 SaaS tools) need 10+ tools minimum. A 4c/8GB server running 5-8 tools doesn't deliver the core value proposition.
- Four tiers creates decision paralysis for non-technical buyers.
- The €29 price point attracts the lowest-value customers who churn fastest.
- Better to push the floor up to where the product actually works well.
**Exception:** If a user's tool selection genuinely fits in 4c/8GB (e.g., a Freelancer bundle with 5-7 tools), the system can offer a **Lite** option at a lower price. This is not marketed on the pricing page — it appears only during onboarding when the resource calculator determines it's sufficient. This captures price-sensitive users without diluting the brand.
### 4.2 Tier Definitions
| | Lite (Hidden) | Build | Scale | Enterprise |
|---|---------------|-------|-------|------------|
| **Positioning** | Budget option (not marketed) | Default experience | Power users | Full stack |
| **Server (VPS default)** | VPS 1000 (4c/8GB) | VPS 2000 (8c/16GB) | VPS 4000 (12c/32GB) | VPS 8000 (16c/64GB) |
| **Tools** | 5-8 | 10-15 | 15-25 | All 30 |
| **Agents** | Unlimited | Unlimited | Unlimited | Unlimited |
| **Included AI Models** | All 5 included models | All 5 included models | All 5 included models | All 5 included models |
| **Included AI Tokens** | ~8M/mo | ~15M/mo | ~25M/mo | ~40M/mo |
| **Premium AI** | Metered + markup | Metered + markup | Metered + markup | Metered + markup |
| **Target Customer** | Solo freelancer | SMB (1-10 employees) | Agency/e-commerce | Power user / regulated |
### 4.3 Cost Model (VPS G12 — Default)
| Cost Component | Lite | Build | Scale | Enterprise |
|---------------|------|-------|-------|------------|
| Netcup VPS | €7.10 | €13.10 | €22.00 | €32.50 |
| Included AI (preset-based, full pool usage) | €2.91 | €6.76 | €13.46 | €25.05 |
| Monitoring (Uptime Kuma + GlitchTip) | €0.50 | €0.50 | €0.50 | €0.50 |
| Backups (snapshots + off-site) | €1.00 | €1.00 | €1.00 | €1.00 |
| DNS / Domain (Entri + Netcup reseller) | €0.50 | €0.50 | €0.50 | €0.50 |
| Support Tooling (Chatwoot instance, KB) | €0.50 | €0.50 | €0.50 | €0.50 |
| **Total Variable Cost** | **€12.51** | **€22.36** | **€37.96** | **€60.05** |
**AI cost assumptions (included models only — thoroughly recalculated using preset-based routing):**
Costs are modeled by preset usage patterns, not individual models. The system routes through three presets:
- **Basic Tasks preset:** 80% GPT 5 Nano ($0.201/M) + 20% Gemini Flash ($1.583/M) = $0.477/M blended
- **Balanced preset (default):** 100% DeepSeek V3.2 = $0.333/M blended
- **Complex Tasks preset:** 60% GLM 5 ($1.677/M) + 40% MiniMax M2.5 ($0.697/M) = $1.285/M blended
Tier-appropriate preset usage (lower tiers use Complex Tasks less):
| Tier | Balanced | Basic | Complex | Weighted $/M | Pool | AI Cost |
|------|----------|-------|---------|-------------|------|---------|
| Lite | 85% | 10% | 5% | $0.395 | 8M | €2.91 |
| Build | 75% | 10% | 15% | $0.490 | 15M | €6.76 |
| Scale | 65% | 10% | 25% | $0.585 | 25M | €13.46 |
| Enterprise | 55% | 10% | 35% | $0.681 | 40M | €25.05 |
**Note:** GLM 5 inclusion (Decision #33) is the primary cost driver. GLM 5 at $1.677/M blended is 5x more expensive than DeepSeek V3.2 ($0.333/M). Even modest Complex Tasks usage (15-35%) significantly impacts costs. These estimates assume users consume their full token pools — actual costs will likely be lower as many users won't exhaust their allocation. Reduced pool sizes (8-40M vs. prior 10-50M) combined with the price adjustment restore margins to healthy SaaS levels. Prompt caching reduces AI costs by ~5-8% (see Section 11).
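The preset blending above reduces to a few lines of arithmetic. A minimal sketch — the USD→EUR rate of ~0.92 is an assumption implied by the table's euro figures, and results land within a cent or two of the table due to rounding:

```python
# Reproduces the preset-blended AI cost per tier (Section 4.3).
# Assumption: USD->EUR conversion of ~0.92, implied by the euro figures above.
USD_TO_EUR = 0.92

PRESET_COST = {  # blended $ per 1M tokens
    "basic":    0.80 * 0.201 + 0.20 * 1.583,  # GPT 5 Nano + Gemini Flash
    "balanced": 0.333,                         # DeepSeek V3.2
    "complex":  0.60 * 1.677 + 0.40 * 0.697,  # GLM 5 + MiniMax M2.5
}

# Per tier: (balanced, basic, complex) preset mix and pool size in M tokens
TIER_MIX = {
    "Lite":       ((0.85, 0.10, 0.05), 8),
    "Build":      ((0.75, 0.10, 0.15), 15),
    "Scale":      ((0.65, 0.10, 0.25), 25),
    "Enterprise": ((0.55, 0.10, 0.35), 40),
}

def ai_cost_eur(tier: str) -> float:
    """Full-pool AI cost in EUR for a tier."""
    (bal, bas, cpx), pool_m = TIER_MIX[tier]
    weighted_usd_per_m = (bal * PRESET_COST["balanced"]
                          + bas * PRESET_COST["basic"]
                          + cpx * PRESET_COST["complex"])
    return pool_m * weighted_usd_per_m * USD_TO_EUR

for tier in TIER_MIX:
    print(f"{tier}: €{ai_cost_eur(tier):.2f}")
```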
### 4.4 Subscription Pricing (VPS G12 — Default)
| | Lite | Build | Scale | Enterprise |
|---|------|-------|-------|------------|
| Our Cost | €12.51 | €22.36 | €37.96 | €60.05 |
| **Subscription Price** | **€29/mo** | **€45/mo** | **€75/mo** | **€109/mo** |
| Gross Margin | €16.49 | €22.64 | €37.04 | €48.95 |
| **Gross Margin %** | **56.9%** | **50.3%** | **49.4%** | **44.9%** |
| After Stripe (2.9% + €0.25) | €15.40 | €21.08 | €34.61 | €45.54 |
| **Net Margin %** | **53.1%** | **46.8%** | **46.1%** | **41.8%** |
**Margin Analysis (thoroughly calculated from preset-based routing):**
These margins assume users consume their **full token pools** at realistic model mixes. In practice, not all users will exhaust their allocations, so actual margins will be higher. Blended gross margin (weighted by expected 10/45/30/15 tier mix): **~50%**. Key observations:
- **All tiers above 44% gross margin.** The combination of adjusted pricing (€29-109) and right-sized pools (8-40M) brings margins into healthy SaaS territory across the board.
- **GLM 5 remains the primary cost driver.** At $1.677/M, even 5-35% Complex Tasks usage is the dominant AI cost factor. But reduced pools limit the total exposure.
- **Prompt caching improves all margins by ~1-2pp** (achievable from Month 3+). See Section 11.
- **Enterprise is still the tightest** but at 44.9% it's comfortable rather than concerning.
- **Mitigating factors:** (1) Most users won't exhaust full pools; (2) DeepSeek V3.2 as default captures 55-85% of usage; (3) Prompt caching reduces costs; (4) AI model prices tend downward over time.
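The margin rows above follow mechanically from price and cost. A small sketch, assuming Stripe's standard EU card fee of 2.9% + €0.25 per charge:

```python
# Gross and net margin per tier (Section 4.4), after Stripe's per-charge fee.
STRIPE_PCT, STRIPE_FIXED = 0.029, 0.25  # assumed standard EU card pricing

def margins(price: float, cost: float):
    """Returns (gross €, gross %, net €, net %) for one subscription."""
    gross = price - cost
    stripe_fee = price * STRIPE_PCT + STRIPE_FIXED
    net = gross - stripe_fee
    return gross, gross / price, net, net / price

for name, price, cost in [("Lite", 29, 12.51), ("Build", 45, 22.36),
                          ("Scale", 75, 37.96), ("Enterprise", 109, 60.05)]:
    g, g_pct, n, n_pct = margins(price, cost)
    print(f"{name}: gross €{g:.2f} ({g_pct:.1%}), net €{n:.2f} ({n_pct:.1%})")
```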
### 4.5 Server Upgrade Pricing
Users can upgrade their server beyond what their tool selection requires. Presented as "+€X/mo" in the UI.
**VPS → Larger VPS (more resources, shared):**
| Current Tier | Upgrade To | Additional Cost |
|-------------|-----------|-----------------|
| Lite (VPS 1000) | Build (VPS 2000) | +€16/mo (switches to Build tier) |
| Build (VPS 2000) | Scale (VPS 4000) | +€30/mo (switches to Scale tier) |
| Scale (VPS 4000) | Enterprise (VPS 8000) | +€34/mo (switches to Enterprise tier) |
**VPS → RS (Performance Guarantee — dedicated cores):**
| Tier | VPS Price | RS Price | Uplift |
|------|-----------|----------|--------|
| Lite | €29/mo | €35/mo | +€6/mo |
| Build | €45/mo | €55/mo | +€10/mo |
| Scale | €75/mo | €89/mo | +€14/mo |
| Enterprise | €109/mo | €149/mo | +€40/mo |
### 4.6 RS G12 Full Cost Model (Performance Guarantee)
| | Lite | Build | Scale | Enterprise |
|---|------|-------|-------|------------|
| Netcup RS | €8.74 | €14.58 | €27.08 | €58.00 |
| AI + Other Costs | €5.41 | €9.26 | €15.96 | €27.55 |
| **Total Variable Cost** | **€14.15** | **€23.84** | **€43.04** | **€85.55** |
| **RS Subscription Price** | **€35/mo** | **€55/mo** | **€89/mo** | **€149/mo** |
| Gross Margin | €20.85 | €31.16 | €45.96 | €63.45 |
| **Gross Margin %** | **60%** | **57%** | **52%** | **43%** |
---
## 5. Premium AI Model Revenue
### 5.1 Sliding Markup Structure
Premium models use a **sliding markup**: higher % on cheaper models, lower % on expensive ones. This keeps premium models competitively priced (encouraging adoption) while still generating meaningful absolute margin.
**Full markup schedule (output pricing shown — input follows same % markup):**
| Model | Markup % | Our Cost/1M Out | Our Price/1M Out | Margin/1M Out |
|-------|----------|----------------|-----------------|---------------|
| Gemini 3.1 Pro | 10% | $12.660 | $13.926 | $1.266 |
| GPT 5.2 | 10% | $14.770 | $16.247 | $1.477 |
| Claude Sonnet 4.6 (≤200K) | 10% | $15.825 | $17.408 | $1.583 |
| Claude Sonnet 4.6 (>200K) | 10% | $23.738 | $26.111 | $2.374 |
| Claude Opus 4.6 (≤200K) | 8% | $79.125 | $85.455 | $6.330 |
| Claude Opus 4.6 (>200K) | 8% | $118.688 | $128.182 | $9.495 |
*Gemini 3.1 Pro pricing confirmed on OpenRouter (Feb 2026): $2.00/$12.00 per 1M input/output.
**Markup thresholds (Decision #35):** < $1/M input = 25%, $1-5/M = 15%, $5-15/M = 10%, > $15/M = 8%. A 10% markup on Sonnet output ($1.58 margin per 1M tokens) is meaningful at volume but doesn't feel punitive. An 8% markup on Opus still yields $6-9 margin per 1M output tokens — significant given Opus users will be high-value.
**Note:** GLM 5 moved from premium to included models (Complex Tasks preset, Decision #33). Its cost is now absorbed into the included token pool.
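The threshold schedule and the resulting customer prices can be sketched directly. The per-model markups in the table are taken as given; which price the band test keys on in practice (input, output, or blended) is an implementation detail still to pin down:

```python
# Sliding markup (Decision #35): cheaper models carry a higher percentage.
def markup_pct(cost_per_m: float) -> float:
    """Markup band for a model priced at cost_per_m ($/1M tokens)."""
    if cost_per_m < 1:
        return 0.25
    if cost_per_m <= 5:
        return 0.15
    if cost_per_m <= 15:
        return 0.10
    return 0.08

def premium_price(our_cost_per_m: float, markup: float):
    """Customer price and absolute margin per 1M tokens at a given markup."""
    price = our_cost_per_m * (1 + markup)
    return price, price - our_cost_per_m

# e.g. Sonnet 4.6 output at 10%: premium_price(15.825, 0.10)
# matches the table row above (~$17.41 price, ~$1.58 margin per 1M out)
```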
### 5.2 Premium Revenue Scenarios (with Caching)
With prompt caching enabled, input costs drop significantly. Users benefit from lower bills (encouraging usage) while our margin percentage stays the same.
**Estimated premium cost with caching (50% of input tokens cached):**
| Model | Standard Blended/1M | With 50% Cache/1M | Savings |
|-------|--------------------|--------------------|---------|
| Claude Sonnet 4.6 (≤200K) | $8.229 | $5.595 | 32% |
| GPT 5.2 | $7.016 | $4.379 | 38% |
| Claude Opus 4.6 (≤200K) | $41.145 | $28.059 | 32% |
### 5.3 Estimated Premium Revenue per User Segment
With the lower markups, revenue per user is slightly lower but adoption should be higher (more users willing to try premium). Net effect: more total revenue.
| Segment | % of Users | Avg Model | Avg Spend | Rev/User/Mo | At 100 Users |
|---------|-----------|-----------|-----------|-------------|--------------|
| No premium (basic only) | 40% | — | $0 | $0 | $0 |
| Light premium | 25% | GLM 5 | ~2M tokens | ~$2.70 | $68 |
| Medium premium | 20% | Sonnet/GPT 5.2 mix | ~3M tokens | ~$12.00 | $240 |
| Heavy premium | 10% | Sonnet-dominant | ~8M tokens | ~$35.00 | $350 |
| Opus users | 5% | Opus 4.6 | ~1M tokens | ~$45.00 | $225 |
| **Weighted average** | **100%** | **—** | **—** | **~$8.83** | **$883/mo** |
At 100 users: ~$883/mo ($10,596/yr) in premium AI revenue.
At 500 users: ~$4,415/mo ($52,980/yr).
**Note:** Lower per-user revenue vs. v2.0 ($8.83 vs $10.60) but higher projected adoption rate (60% using premium vs 55% prior) and Opus users are a new high-ARPU segment that didn't exist before.
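The weighted average above is a straight expectation over the segment mix — a quick sketch using the table's per-user estimates:

```python
# Premium revenue expectation across user segments (Section 5.3).
SEGMENTS = [  # (segment, share of users, est. premium $/user/mo)
    ("no premium", 0.40,  0.00),
    ("light",      0.25,  2.70),
    ("medium",     0.20, 12.00),
    ("heavy",      0.10, 35.00),
    ("opus",       0.05, 45.00),
]

weighted = sum(share * spend for _, share, spend in SEGMENTS)
print(f"weighted average: ~${weighted:.2f}/user/mo")
for users in (100, 500):
    print(f"at {users} users: ~${weighted * users:,.2f}/mo")
```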
---
## 6. Agent Strategy
### 6.1 Unlimited Agents — No Caps
**Decision: All users get unlimited agents on every tier.**
Rationale:
1. **Agents are config files, not running processes.** A SOUL.md + TOOLS.md + model selection is ~10KB of YAML/Markdown. 100 agents = 1MB of storage. Zero infrastructure cost to "have" more agents.
2. **Agent customization is the primary lock-in mechanism.** Every custom agent represents hours of user investment in prompts, permissions, and workflows. Capping agents at 3 or 5 artificially limits the thing that makes users unable to leave.
3. **More agents = more AI usage = more revenue.** Users with 8 agents use more tokens than users with 3. Don't limit the revenue engine.
4. **Concurrent execution is the real constraint.** If resource contention becomes an issue, gate concurrent agent tasks per tier (e.g., Build: 3 concurrent, Scale: 5, Enterprise: 10). This is a performance constraint, not a pricing lever.
### 6.2 Agent Delivery Model
Every user gets:
- **5 pre-built agent templates** (Dispatcher, IT Admin, Marketing, Secretary, Sales) with sensible defaults per business type bundle.
- **Full SOUL.md editor** — personality, domain knowledge, tone, preferences, example interactions.
- **Full TOOLS.md editor** — API permissions, destructive action gating, model selection per agent.
- **Clone & modify** — duplicate any template as a starting point for custom agents.
- **Create from scratch** — blank agent with guided setup.
- **Per-agent model selection** — each agent can use a different LLM. IT Agent on DeepSeek V3.2 (cheap, routine ops), Marketing Agent on Gemini 3 Flash (creative content), Sales Agent on Sonnet 4.6 (high-stakes communication).
### 6.3 Token Allocation Model
Included tokens are a **pooled monthly budget** across all agents, not per-agent. The pool **only covers included models** (currently: DeepSeek V3.2, GPT 5 Nano, GLM 5, MiniMax M2.5, Gemini Flash; GPT 5.2 Mini also under consideration — final selection pending). Premium models (Gemini 3.1 Pro, GPT 5.2, Sonnet 4.6, Opus 4.6) are always metered separately — they never draw from the pool.
| Tier | Monthly Token Pool | ~Equivalent Agent Calls* | Applies To |
|------|-------------------|-------------------------|------------|
| Lite | ~8M tokens | ~2,000 calls | Included models only |
| Build | ~15M tokens | ~3,750 calls | Included models only |
| Scale | ~25M tokens | ~6,250 calls | Included models only |
| Enterprise | ~40M tokens | ~10,000 calls | Included models only |
*Assuming ~4K tokens per agent call average (prompt + response).
When the included pool is exhausted:
- Included model usage pauses until next billing cycle, OR
- If user has a credit card on file, they can opt into overage billing at cost + tiered markup (35% for cheapest models, 25% mid, 20% top included).
- Premium model usage is always metered to the credit card regardless of pool status.
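A hypothetical sketch of the overage path. The band each included model falls into is illustrative — the document only fixes the three percentages, not the band boundaries:

```python
# Overage billing once the included pool is exhausted (Section 6.3).
# Band assignment per model is an assumption for illustration.
OVERAGE_MARKUP = {"cheapest": 0.35, "mid": 0.25, "top": 0.20}

def overage_price_per_m(cost_per_m: float, band: str) -> float:
    """Customer overage price per 1M tokens: our cost plus the band's markup."""
    return cost_per_m * (1 + OVERAGE_MARKUP[band])

def bill_overage(pool_m: float, used_m: float, cost_per_m: float,
                 band: str, has_card: bool) -> float:
    """Charge for included-model usage beyond the pool (0 if paused)."""
    overage = max(0.0, used_m - pool_m)
    if overage == 0 or not has_card:
        return 0.0  # no card on file -> usage pauses until next cycle
    return overage * overage_price_per_m(cost_per_m, band)
```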
---
## 7. Complete Revenue Model
### 7.1 Revenue Components
| Revenue Stream | Type | Margin Driver |
|---------------|------|---------------|
| Base subscription | Recurring | Server + platform + included AI token pool |
| Premium AI metering | Usage-based | Sliding markup (8-25%) on OpenRouter |
| Server tier upgrades | Recurring | Larger VPS = higher subscription |
| Performance Guarantee (RS) | Recurring | +€6-40/mo for dedicated cores |
| Domain reselling | Recurring | Netcup wholesale margin |
| Annual discount | Recurring (locked) | 15% off; locks in 12 months revenue |
### 7.2 Scenario: 100 Customers (Month 6-12)
Conservative mix: 10% Lite, 45% Build, 30% Scale, 15% Enterprise. All on VPS G12 default.
| Revenue Stream | Monthly | Annual |
|---------------|---------|--------|
| 10 × Lite @ €29 | €290 | €3,480 |
| 45 × Build @ €45 | €2,025 | €24,300 |
| 30 × Scale @ €75 | €2,250 | €27,000 |
| 15 × Enterprise @ €109 | €1,635 | €19,620 |
| **Subtotal Subscriptions** | **€6,200** | **€74,400** |
| Premium AI Revenue (est.) | €820 | €9,840 |
| RS Upgrades (~10% of users) | €200 | €2,400 |
| Domain Revenue (est.) | €25 | €300 |
| **Total Revenue** | **€7,245** | **€86,940** |
| | | |
| Total Variable Costs | €3,171 | €38,052 |
| **Gross Profit** | **€4,074** | **€48,888** |
| **Gross Margin** | **56%** | **56%** |
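The scenario arithmetic is easy to check. A sketch — the variable-cost line follows the table and reflects subscription tiers only (premium AI, RS, and domain revenue carry their own pass-through costs plus markup):

```python
# 100-customer scenario (Section 7.2): 10/45/30/15 tier mix on VPS G12.
MIX   = {"Lite": 10, "Build": 45, "Scale": 30, "Enterprise": 15}
PRICE = {"Lite": 29, "Build": 45, "Scale": 75, "Enterprise": 109}
COST  = {"Lite": 12.51, "Build": 22.36, "Scale": 37.96, "Enterprise": 60.05}

subscriptions = sum(MIX[t] * PRICE[t] for t in MIX)
other_revenue = 820 + 200 + 25            # premium AI + RS upgrades + domains
total_revenue = subscriptions + other_revenue
variable_cost = sum(MIX[t] * COST[t] for t in MIX)  # subscription tiers only
gross_profit  = total_revenue - variable_cost

print(subscriptions, total_revenue, round(variable_cost), round(gross_profit))
# -> 6200 7245 3171 4074
```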
### 7.3 Scenario: 500 Customers (Month 18-24)
| Revenue Stream | Monthly | Annual |
|---------------|---------|--------|
| Subscription Revenue | €31,000 | €372,000 |
| Premium AI Revenue | €4,100 | €49,200 |
| RS Upgrades (~12%) | €1,200 | €14,400 |
| Domain Revenue | €125 | €1,500 |
| **Total Revenue** | **€36,425** | **€437,100** |
| Total Variable Costs | €15,856 | €190,272 |
| **Gross Profit** | **€20,569** | **€246,828** |
| **Gross Margin** | **56%** | **56%** |
### 7.4 Growth Trajectory
| Milestone | Users | MRR | ARR | Gross Profit/Yr |
|-----------|-------|-----|-----|-----------------|
| Launch (Month 1) | 10 | €725 | €8,694 | €4,889 |
| Traction (Month 6) | 50 | €3,622 | €43,470 | €24,443 |
| Product-Market Fit (Month 12) | 100 | €7,245 | €86,940 | €48,888 |
| Scale (Month 18) | 250 | €18,112 | €217,350 | €122,220 |
| Growth (Month 24) | 500 | €36,425 | €437,100 | €246,828 |
| Maturity (Month 36) | 1,000 | €72,450 | €869,400 | €488,868 |
### 7.5 v2 vs v1 Comparison
| Metric | v1 (100 users) | v2 (100 users) | Delta |
|--------|----------------|----------------|-------|
| MRR | €5,990 | €7,245 | +21% |
| ARR | €71,880 | €86,940 | +21% |
| Gross Margin % | 54% | 56% | +2pp |
| Tiers | 4 | 3 (+ hidden Lite) | Simpler |
| Included models | 2 | 5 | More value |
| Agent limits | 3-8 per tier | Unlimited | More lock-in |
| Premium AI markup | Flat 20% | Sliding 8-25% | Fairer, more adoption |
| Model selection UX | Raw model list | Basic presets + Advanced | More accessible |
| Opus 4.6 | Not offered | Available (card required) | New high-ARPU segment |
---
## 8. Founding Member Economics
First 50-100 customers get founding member pricing: **2× included AI token allotment** for 12 months. Same subscription price. "Double the AI" — clean marketing message, all tiers stay margin-positive.
| Tier | Normal Tokens | Founding (2×) | Normal AI Cost | Founding AI Cost | Extra Cost | Margin w/ 2× |
|------|--------------|---------------|---------------|-----------------|------------|-------------|
| Lite | 8M/mo | 16M/mo | €2.91 | €5.81 | +€2.91/mo | €13.59 (47%) |
| Build | 15M/mo | 30M/mo | €6.76 | €13.53 | +€6.76/mo | €15.87 (35%) |
| Scale | 25M/mo | 50M/mo | €13.46 | €26.93 | +€13.46/mo | €23.57 (31%) |
| Enterprise | 40M/mo | 80M/mo | €25.05 | €50.09 | +€25.05/mo | €23.91 (22%) ✓ |
**All tiers margin-positive.** Even Enterprise at 2× stays at 22% gross margin — thin but sustainable for a 12-month acquisition incentive.
Worst case (100 founding members, all Enterprise): €25.05 × 100 × 12 = **€30,060/year** extra cost.
Realistic case (50 founding members, mixed tiers): ~**€6,130/year** extra cost.
**Why 2× instead of 3×:** The original 3× multiplier was designed before thorough cost modeling. With GLM 5 included at $1.68/M, 3× creates negative margins on Build/Scale/Enterprise tiers. 2× provides a compelling benefit ("double the AI included") while keeping the business healthy. At 50 founding members with realistic tier mix, the extra cost is ~€6,130/year — an effective CAC of ~€123/user/year, which is excellent for early adopters who provide feedback and testimonials.
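A minimal margin check for the 2× offer, using the Section 4.3/4.4 figures. Results land within a cent of the table, which carries unrounded AI costs:

```python
# Founding-member margin: 2x tokens ~ one extra full pool of AI cost (Section 8).
TIERS = {  # tier: (price, total variable cost, normal full-pool AI cost)
    "Lite":       (29,  12.51,  2.91),
    "Build":      (45,  22.36,  6.76),
    "Scale":      (75,  37.96, 13.46),
    "Enterprise": (109, 60.05, 25.05),
}

def founding_margin(price: float, cost: float, ai_cost: float) -> float:
    """Gross margin with the 2x token allotment (doubled AI spend)."""
    return price - (cost + ai_cost)

for tier, (price, cost, ai) in TIERS.items():
    m = founding_margin(price, cost, ai)
    print(f"{tier}: €{m:.2f} ({m / price:.0%})")
```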
---
## 9. Competitive Pricing Context
| Alternative | Typical Monthly Cost | vs LetsBe Build (€45) | What's Missing |
|------------|---------------------|----------------------|---------------|
| SaaS stack (10-15 tools) | €500-1,500/mo | 11-33x more expensive | No AI workforce |
| Virtual assistant | €1,500-3,000/mo | 33-67x more expensive | Limited hours, not 24/7 |
| IT contractor (10 hrs/mo) | €1,000-2,000/mo | 22-44x more expensive | Reactive, not proactive |
| Cloudron/YunoHost + DIY | €10-30/mo hosting | Comparable hosting cost | No AI, no mobile app |
| Coolify self-hosted | €0-20/mo | Cheaper hosting | Developer tool, not business ops |
**Value proposition:** At €45/mo (Build), a customer gets 10-15 business tools + an AI workforce that would cost €2,000-4,000/mo if assembled from SaaS subscriptions + human labor. The 40-90x value multiplier is the core selling point.
---
## 10. Pricing Strategy Decisions (Updated)
| # | Decision | Rationale |
|---|----------|-----------|
| P1 | Three tiers: Build / Scale / Enterprise | Simpler; no underpowered default; hidden Lite for small tool selections |
| P2 | €45/75/109 VPS pricing (€29 Lite) | Floor pushed up to where product delivers; margins support GLM 5 inclusion |
| P3 | €55/89/149 RS pricing (€35 Lite) | Meaningful dedicated-core premium |
| P4 | Server bundled in subscription | No separate hosting line item; cleaner value proposition |
| P5 | 5-6 included AI models (not 2) | DeepSeek V3.2, GPT 5 Nano, GPT 5.2 Mini, GLM 5, MiniMax M2.5, Gemini Flash (final selection pending) |
| P6 | DeepSeek V3.2 as default model | Best quality-to-cost ratio at $0.33/M blended |
| P7 | Gemini 3 Flash high on shortlist | Fast, 1M context, great for content generation |
| P8 | Sliding markup: 25% cheap → 8% expensive (threshold-based) | Don't gouge expensive models; encourage premium adoption |
| P9 | Prompt caching built into agent framework | 10x cheaper input on repeated agent calls; mandatory engineering priority |
| P10 | Unlimited agents, all tiers | Agents are config files; zero infra cost; maximize lock-in and usage |
| P11 | All 5 agent templates + full customization | Templates as starting point; clone, modify, create from scratch |
| P12 | Pooled token budget (not per-agent) | Simpler billing; natural usage allocation |
| P13 | Claude Opus 4.6 offered (8% markup, card required) | Available in Advanced Settings; high-ARPU segment; not BYOK |
| P14 | Hidden Lite tier for small tool selections | Captures price-sensitive users without brand dilution |
| P15 | 15% annual discount | Lock in revenue; aligns with 12-mo Netcup contracts |
| P16 | Founding member 2× tokens (50-100 users) | "Double the AI" — clean message; ~€123/user/yr effective CAC; all tiers margin-positive |
| P17 | Basic/Advanced model selection UX | Basic: 3 presets (Basic Tasks/Balanced/Complex Tasks). Advanced: full catalog. Non-technical users never see model names. |
| P18 | Advanced settings gated behind credit card | No card = basic presets + included pool only. Card = full model catalog + premium metered billing. |
| P19 | Included token pool covers cheap models only | Pool only draws from 5 included models. Premium models always metered to card separately. |
| P20 | Overage markup tiered (35%/25%/20%) | When pool runs out: high markup on cheapest models (invisible), low markup on top included models. |
---
## 11. Open Questions
1. **OpenRouter Enterprise tier** — At what volume do we qualify for bulk discounts (reducing or eliminating the 5.5% platform fee)? This could add 3-5pp to our AI margins at scale.
2. **Overage billing vs. hard cap** — When included tokens run out, do we auto-pause (friction) or auto-bill overages (revenue)? Recommendation: auto-bill with clear in-app warnings at 80% and 95%.
3. **Concurrent agent execution limits** — If VPS resource contention becomes an issue, define per-tier concurrent task limits (e.g., Build: 3, Scale: 5, Enterprise: 10).
4. **Gemini 3 Flash GA pricing** — Currently "Preview" pricing. Monitor for changes when it exits preview.
5. **GLM 5 cost management** — Now included (Complex Tasks preset). At $1.677/M, it's the most expensive included model and the primary margin pressure driver. Monitor actual Complex Tasks preset usage — if > 25% of token consumption, margins compress significantly. Consider smart routing that favors MiniMax M2.5 ($0.697/M) for less demanding "complex" tasks.
---
## 12. Next Steps
1. **Update Foundation Document** to v0.7 with three-tier structure, unlimited agents, updated model lineup.
2. **Design prompt caching architecture** for agent framework — SOUL.md and TOOLS.md as cacheable prefixes.
3. **Build pricing page** for letsbe.biz with three visible tiers + RS upgrade toggle.
4. **Implement Stripe billing** with subscription tiers + metered premium AI component.
5. **Confirm OpenRouter Enterprise tier** requirements and timeline for bulk discount eligibility.
6. **Monitor Gemini 3 Flash** GA pricing and adjust included model pool if needed.
---
*This is a working document. Pricing will be refined as we validate costs, test market response, and gather founding member feedback. Supersedes Pricing Model v1.0.*