Initial commit: Kalei app — docs, mockups, logo, pitch deck

Complete project files including:
- 73 polished HTML mockup screens (onboarding, turn, mirror, lens, gallery, you, ritual, spectrum, modals, guide)
- Design system CSS with Inter font, jewel-tone palette, device frame scaling
- Canonical 6-blade kaleidoscope logo (soft-elegance-final)
- SVG asset library (fragments, icons, patterns, evidence wall, spectrum viz)
- Product docs, brand guidelines, technical architecture, build phases
- Pitch deck and cost projections
- Logo mockup iterations and finalists

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 14:55:22 +01:00
commit 38021c4633
168 changed files with 46724 additions and 0 deletions


@@ -0,0 +1,57 @@
# Build Timeline & Execution Phases
Last updated: 2026-02-22
This folder contains the sequential build plan for developing Kalei. All features ship together in a single v1 release. The phases below represent an execution timeline for managing development complexity, not separate product phases.
Read in order:
1. `phase-0-groundwork-and-dev-environment.md`
2. `phase-1-platform-foundation.md`
3. `phase-2-core-experience-build.md`
4. `phase-3-launch-readiness-and-hardening.md`
5. `phase-4-spectrum-and-scale.md`
## Build Timeline Overview
These phases organize the work sequentially for manageable development and testing. All features — including Spectrum — are built toward a unified v1 launch.
- Phase 0: Groundwork and Dev Environment
- Goal: stable tooling, accounts, repo standards, and local infrastructure.
- Phase 1: Platform Foundation
- Goal: production-quality backend skeleton, auth, entitlements, core data model.
- Phase 2: Core Experience Build
- Goal: ship Mirror, Turn, Lens, Ritual, Evidence Wall end-to-end.
- Phase 3: Launch Readiness and Hardening
- Goal: Spectrum integration, safety, billing, reliability, compliance, app store readiness.
- Phase 4: Spectrum and Scale
- Goal: Spectrum insights, analytics pipeline, asynchronous jobs, growth optimization, scaling controls.
## Development Gate Rules
Do not move to the next build phase until the current phase exit checklist is complete.
If a phase slips, reduce scope but do not skip quality gates for:
- security
- safety
- observability
- data integrity
## Product vs. Build Phases
**Important distinction:** These build phases are execution timelines, not product tiers. The product launches with all features (Mirror, Turn, Lens, Ritual, Evidence Wall, Guide, Spectrum) in a single v1 release. The free tier and the Prism subscription tier differ in usage limits, but both include every feature at launch.
## Tooling policy
These phase docs assume an open-source-first stack:
- Gitea for source control and CI
- GlitchTip for error tracking
- PostHog self-hosted for product analytics
- Ollama (local) and vLLM (staging/prod) for open-weight model serving
Platform exceptions remain for mobile distribution and push:
- Apple App Store and Google Play billing/distribution APIs
- APNs and FCM delivery infrastructure


@@ -0,0 +1,185 @@
# Phase 0 - Groundwork and Dev Environment
Duration: 1-2 weeks
Primary owner: Founder + coding assistant
## 1. Objective
Build a stable base so feature work can move fast without breaking:
- all required accounts are created
- local stack boots reliably
- repo structure and standards are in place
- CI checks run on every pull request
## 2. Prerequisites
- Read `docs/kalei-getting-started.md`
- Read `docs/kalei-system-architecture-plan.md`
## 3. Outcomes
By the end of Phase 0 you will have:
- local Postgres and Redis running via Docker
- mobile app bootstrapped with Expo
- API service bootstrapped with Fastify
- initial DB migration system in place
- lint, format, and test commands working
- CI pipeline validating every PR
## 4. Deep Work Breakdown
## 4.1 Access and Account Setup
Task list:
1. Set up Gitea (self-hosted or managed) for source control and CI.
2. Set up open-weight model serving accounts and endpoints (Ollama local, vLLM target host).
3. Create GlitchTip project for API and mobile error tracking.
4. Create PostHog self-hosted project for product analytics.
5. Set up DNS (PowerDNS self-hosted or managed DNS provider) and add domain.
6. Confirm Apple Developer and Google Play Console access (required for app distribution).
Deliverables:
- shared credential inventory (local secure password manager)
- documented secret naming convention
## 4.2 Repository and Branching Standards
Task list:
1. Define branch policy: `main`, short-lived feature branches.
2. Define PR checklist template.
3. Add CODEOWNERS or at least reviewer policy.
4. Add issue templates for bug and feature requests.
Deliverables:
- `CONTRIBUTING.md`
- `.gitea/pull_request_template.md` (or repository PR template equivalent)
- `.gitea/ISSUE_TEMPLATE/*` (or repository issue template equivalent)
## 4.3 Local Development Environment
Task list:
1. Install and verify Git, Node, npm, Docker, Expo CLI.
2. Add Docker compose for Postgres + Redis.
3. Create `.env.example` for API and mobile.
4. Add one-command local start script.
Deliverables:
- `infra/docker/docker-compose.yml`
- `services/api/.env.example`
- `apps/mobile/.env.example`
- root `Makefile` or npm scripts for local startup
## 4.4 API and Mobile Skeletons
Task list:
1. Create Fastify app with health endpoint.
2. Create Expo app with tabs template.
3. Add API client module to mobile app.
4. Show backend health status in app.
Deliverables:
- API running on local port
- mobile app able to read API response
## 4.5 Data and Migration Baseline
Task list:
1. Choose migration tool (for example, `node-pg-migrate`, `drizzle`, or `knex`).
2. Create first migration set for identity tables.
3. Add migration run and rollback commands.
4. Add seed command for local dev data.
Minimum initial tables:
- users
- profiles
- auth_sessions
- refresh_tokens
## 4.6 Quality and Automation Baseline
Task list:
1. Add ESLint + Prettier for API and mobile.
2. Add API unit test framework and one integration test.
3. Configure Gitea Actions (or Woodpecker CI) for lint + test.
4. Add commit hooks (optional but recommended) using `husky`.
Deliverables:
- passing CI on every push/PR
- at least one passing API integration test
## 5. Suggested Day-by-Day Plan
Day 1:
- account setup
- tooling install
- repo folder scaffold
Day 2:
- docker compose and env files
- API skeleton with `/health`
Day 3:
- Expo app setup
- mobile to API health call
Day 4:
- migration tooling and first migrations
- baseline seed script
Day 5:
- linting and tests
- CI setup
- first stable baseline commit
## 6. Validation Checklist
All items must be true:
- `docker compose` starts Postgres and Redis with no errors.
- API starts and `GET /health` returns 200.
- Mobile app loads and displays backend health.
- Migrations can run on clean DB and rollback at least one step.
- CI runs lint and tests successfully.
## 7. Exit Criteria
You can exit Phase 0 when:
- no manual setup surprises remain for a fresh machine
- all team members can run the stack locally in under 30 minutes
- baseline quality checks are automated
## 8. Platform exceptions
These are not open source, but required for shipping mobile apps:
- Apple App Store tooling and APIs
- Google Play tooling and APIs
## 9. Typical Pitfalls and Fixes
- Pitfall: unclear `.env` expectations.
- Fix: complete `.env.example` files with comments.
- Pitfall: mobile app cannot reach local API on real device.
- Fix: use machine LAN IP, not localhost, for device testing.
- Pitfall: migration drift.
- Fix: never edit applied migration files; create a new migration.


@@ -0,0 +1,333 @@
# Kalei Build Plan — Phases 1 & 2
### From Platform Foundation to Core Experience
**Total Duration:** 10 weeks
**Approach:** Backend-first in Phase 1, then mobile + backend in parallel in Phase 2
---
## Overview
This document consolidates the two core build phases that take Kalei from a configured dev environment to a fully functional app with the Mirror, Turn, Lens, Ritual, and Evidence Wall experiences end-to-end. Phase 1 lays the platform foundation (auth, schema, AI gateway, safety). Phase 2 builds the user-facing experience on top of that foundation.
```
Phase 1: Platform Foundation (Weeks 1-3)
→ Auth, schema, entitlements, AI gateway, safety, observability
Phase 2: Core Experience Build (Weeks 4-10)
→ Mirror v1, Turn v1, Lens v1, Gallery, end-to-end flows
```
---
# PHASE 1 — Platform Foundation
**Duration:** 3 weeks (weeks 1-3)
**Primary owner:** Backend-first with mobile stub integration
## 1.1 Objective
Build a production-grade platform foundation: robust auth and session model, entitlement checks for free vs. paid plans, core domain schema for Mirror/Turn/Lens, AI gateway scaffold with usage metering, and observability + error handling baseline.
## 1.2 Entry Criteria
Phase 0 exit checklist must be complete.
---
## 1.3 Core Scope
### 1.3.1 API Module Setup
Implement service modules: auth, profiles, entitlements, mirror (session/message skeleton), turn (request skeleton), lens (goal/action skeleton), ai_gateway, usage_cost, safety (precheck skeleton).
Each module needs: route handlers, input/output schema validation, service layer, repository/data access layer, and unit tests.
### 1.3.2 Identity and Access
Implement: email/password registration and login, JWT access token (short TTL), refresh token rotation and revocation, logout all sessions, role model (at least `user`, `admin`).
Security details: hash passwords with Argon2id or bcrypt, store refresh tokens hashed, include device metadata per session.
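A minimal sketch of the refresh token rotation described above, assuming an in-memory `Map` in place of the `refresh_tokens` table; `issueRefreshToken` and `rotateRefreshToken` are illustrative names, not a committed API:

```typescript
import { createHash, randomBytes } from "node:crypto";

// In-memory stand-in for the refresh_tokens table (stores hashes only).
const store = new Map<string, { userId: string; revoked: boolean }>();

const hashToken = (raw: string): string =>
  createHash("sha256").update(raw).digest("hex");

// Issue a new refresh token; only its hash is persisted.
function issueRefreshToken(userId: string): string {
  const raw = randomBytes(32).toString("hex");
  store.set(hashToken(raw), { userId, revoked: false });
  return raw;
}

// Rotation: validate the presented token, revoke it, issue a replacement.
// Presenting an already-revoked token signals possible theft; a real
// implementation would revoke the whole session family at that point.
function rotateRefreshToken(raw: string): string | null {
  const record = store.get(hashToken(raw));
  if (!record || record.revoked) return null; // invalid or reused
  record.revoked = true;
  return issueRefreshToken(record.userId);
}
```

The key property to test is that a rotated token can never be replayed.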
### 1.3.3 Entitlement Model
Implement plan model now, even before paywall UI is complete.
Suggested plan keys: `free`, `prism`, `prism_plus`.
Implement gates for: turns per day, mirror sessions per week, spectrum access.
Integration approach: no RevenueCat dependency. Ingest App Store Server Notifications and Google Play Real-time Developer Notifications (RTDN) directly. Maintain local entitlement snapshots as the source of truth for authorization.
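The plan gates could look like the sketch below; the limit values are placeholders (the real numbers are a product decision), and the snapshot lookup is elided:

```typescript
type Plan = "free" | "prism" | "prism_plus";

// Illustrative limits only; real numbers live in product config.
const LIMITS: Record<Plan, { turnsPerDay: number; mirrorPerWeek: number; spectrum: boolean }> = {
  free:       { turnsPerDay: 3,        mirrorPerWeek: 2,        spectrum: false },
  prism:      { turnsPerDay: 25,       mirrorPerWeek: 14,       spectrum: true },
  prism_plus: { turnsPerDay: Infinity, mirrorPerWeek: Infinity, spectrum: true },
};

type GateResult = { allowed: boolean; reason?: string };

// Server-side gate: the local entitlement snapshot decides, never the client.
function gateTurn(plan: Plan, turnsUsedToday: number): GateResult {
  if (turnsUsedToday >= LIMITS[plan].turnsPerDay)
    return { allowed: false, reason: "turn_daily_limit" };
  return { allowed: true };
}

function gateSpectrum(plan: Plan): GateResult {
  return LIMITS[plan].spectrum
    ? { allowed: true }
    : { allowed: false, reason: "spectrum_requires_upgrade" };
}
```

Returning a machine-readable `reason` lets the client map each denial to the right upsell or limit screen.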
### 1.3.4 Data Model (Phase 1 Schema)
Create migrations for: users, profiles, subscriptions, entitlement_snapshots, turns, mirror_sessions, mirror_messages, mirror_fragments, lens_goals, lens_actions, ai_usage_events, safety_events.
Design requirements: every row has `created_at`, `updated_at` where relevant. Index by `user_id` and key query timestamps. Soft delete where legal retention requires it.
### 1.3.5 AI Gateway Scaffold
Implement a strict abstraction now: provider adapter interface, request envelope (feature, model, temperature, timeout), response normalization, token usage extraction, retry + timeout + circuit breaker policy.
Do not expose provider SDK directly in feature modules.
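A sketch of the adapter interface and the circuit-breaker part of the resilience policy; all names here are assumptions, not the actual gateway API:

```typescript
// Request envelope and adapter interface; feature modules depend on this,
// never on a provider SDK directly.
interface AiRequest { feature: string; model: string; temperature: number; timeoutMs: number; prompt: string; }
interface AiResponse { text: string; inputTokens: number; outputTokens: number; }
interface ProviderAdapter { complete(req: AiRequest): Promise<AiResponse>; }

// Minimal circuit breaker: opens after N consecutive failures, allows a
// half-open probe after a cooldown, closes again on success.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  constructor(private threshold = 3, private cooldownMs = 30_000,
              private now: () => number = Date.now) {}
  canRequest(): boolean {
    if (this.failures < this.threshold) return true;
    return this.now() - this.openedAt >= this.cooldownMs; // half-open probe
  }
  recordSuccess(): void { this.failures = 0; }
  recordFailure(): void {
    this.failures += 1;
    if (this.failures === this.threshold) this.openedAt = this.now();
  }
}
```

The injected clock makes the open/half-open/closed transitions unit-testable without real waits.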
### 1.3.6 Safety Precheck Skeleton
Implement now even if rule set is basic: deterministic keyword precheck, safety event logging, return safety status to caller. Mirror and Turn endpoints must call this precheck before generation.
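A minimal deterministic precheck along these lines; the patterns below are placeholders, since the production list is curated, locale-aware, and safety-reviewed:

```typescript
// Deterministic keyword precheck; runs before any model call.
// Placeholder patterns only — not a clinical keyword set.
const CRISIS_PATTERNS: RegExp[] = [
  /\bsuicid\w*\b/i,
  /\bself[- ]harm\w*\b/i,
  /\bend(ing)? my life\b/i,
];

type SafetyStatus = { safe: boolean; matchedPattern?: string };

function safetyPrecheck(text: string): SafetyStatus {
  for (const pattern of CRISIS_PATTERNS) {
    if (pattern.test(text)) {
      // Caller logs a safety event and routes to the crisis path.
      return { safe: false, matchedPattern: pattern.source };
    }
  }
  return { safe: true };
}
```

Returning the matched pattern supports the audit trail without storing extra user text.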
### 1.3.7 Usage Metering and Cost Guardrails
Implement: per-user usage counters in Redis, endpoint-level rate limit middleware, AI usage event write on every provider call, per-feature daily budget checks.
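The per-user counters might look like this fixed-window sketch, with an in-memory `Map` standing in for Redis `INCR` plus `EXPIRE`:

```typescript
// Fixed-window usage counter; in production this is a Redis INCR with an
// EXPIRE set on the first increment of each window.
const windowMs = 86_400_000; // one day
const counters = new Map<string, { windowStart: number; count: number }>();

function incrementUsage(userId: string, feature: string,
                        now: number = Date.now()): number {
  const key = `${userId}:${feature}`;
  const entry = counters.get(key);
  if (!entry || now - entry.windowStart >= windowMs) {
    counters.set(key, { windowStart: now, count: 1 }); // new window
    return 1;
  }
  entry.count += 1;
  return entry.count;
}

function isOverLimit(count: number, limit: number): boolean {
  return count > limit;
}
```

Incrementing before the limit check keeps the counter authoritative even when requests race.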
### 1.3.8 Observability Baseline
Implement: structured logging with request IDs, error tracking to GlitchTip, latency and error metrics per endpoint, AI cost metrics by feature.
---
## 1.4 Build Sequence
**Week 1:**
1. Finalize schema and migration files
2. Implement auth and profile endpoints
3. Add integration tests for auth flows
**Week 2:**
1. Implement entitlements and plan gating middleware
2. Implement AI gateway interface and one real provider adapter
3. Implement Redis rate limits and usage counters
**Week 3:**
1. Implement Mirror and Turn endpoint skeletons with safety precheck
2. Implement Lens goal and action skeleton endpoints
3. Add complete observability hooks and dashboards
---
## 1.5 API Contract — End of Phase 1
**Auth:**
- `POST /auth/register`
- `POST /auth/login`
- `POST /auth/refresh`
- `POST /auth/logout`
- `GET /me`
**Entitlements:**
- `GET /billing/entitlements`
- Webhook endpoints for App Store and Google Play billing event ingestion
**Feature Skeletons:**
- `POST /mirror/sessions`
- `POST /mirror/messages`
- `POST /turns`
- `POST /lens/goals`
## 1.6 Testing Requirements
Minimum automated coverage: auth happy path and invalid credential path, token refresh rotation path, entitlement denial for free limits, safety precheck path for crisis keyword match, AI gateway timeout and fallback behavior. Recommended: basic load test for auth + turn skeleton endpoints.
## 1.7 Phase 1 Deliverables
**Code:** Migration files for core schema, API modules with tests, Redis-backed rate limit and usage tracking, AI gateway abstraction with one provider, safety precheck middleware.
**Operational:** GlitchTip configured, endpoint metrics visible, API runbook for local and staging.
## 1.8 Phase 1 Exit Criteria
You can exit Phase 1 when: core auth model is stable and tested, plan gating is enforced server-side, Mirror/Turn/Lens endpoint skeletons are live, AI calls only happen through AI gateway, logs/metrics/error tracking are active.
## 1.9 Phase 1 Risks
- **Auth complexity balloons early.** Mitigation: keep v1 auth strict but minimal; defer advanced IAM.
- **Schema churn from feature uncertainty.** Mitigation: maintain a schema decision log and avoid premature optimization.
- **Provider coupling in feature code.** Mitigation: enforce gateway adapter pattern in code review.
---
---
# PHASE 2 — Core Experience Build
**Duration:** 7 weeks (weeks 4-10)
**Primary owner:** Mobile + backend in parallel
## 2.1 Objective
Ship Kalei's core user experience end-to-end: Mirror with fragment highlighting and inline reframe, Turn generation with 3 perspectives and micro-action, Lens goals/daily actions/daily focus, Gallery/history views for user continuity.
## 2.2 Entry Criteria
Phase 1 exit checklist complete.
---
## 2.3 Product Scope
### 2.3.1 Mirror (Awareness)
**Required behavior:** User starts mirror session → submits messages → backend runs safety precheck first → backend runs fragment detection on safe content → app highlights detected fragments above confidence threshold → user taps fragment for inline reframe → user closes session and receives reflection summary.
**Backend work:** Finalize `mirror_sessions`, `mirror_messages`, `mirror_fragments`. Add close-session reflection endpoint. Add mirror session list/detail endpoints.
**Mobile work:** Mirror compose UI, highlight rendering for detected fragment ranges, tap-to-reframe interaction card, session close and reflection display.
### 2.3.2 Turn (Kaleidoscope)
**Required behavior:** User submits a fragment or thought → backend runs safety precheck → backend generates 3 reframed perspectives → backend returns micro-action (if-then) → user can save turn to gallery.
**Backend work:** Finalize `turns` table and categories. Add save/unsave state. Add history list endpoint.
**Mobile work:** Turn input and loading animation, display 3 patterns + micro-action, save to gallery and view history.
### 2.3.3 Lens (Direction)
**Required behavior:** User creates one or more goals → app generates or stores daily action suggestions → user can mark actions complete → optional daily affirmation/focus shown.
**Backend work:** Finalize `lens_goals`, `lens_actions`. Daily action generation endpoint. Daily affirmation endpoint through AI gateway.
**Mobile work:** Goal creation UI, daily action checklist UI, completion updates and streak indicator.
### 2.3.4 The Rehearsal (Lens Sub-Feature)
**Required behavior:** User selects "Rehearse" within a Lens goal → backend generates a personalized visualization script (process-oriented, first-person, multi-sensory, with obstacle rehearsal) → app displays as a guided text flow with SVG progress ring → session completes with a follow-up micro-action.
**Backend work:** Rehearsal generation endpoint through AI gateway. Prompt template enforcing: first-person perspective, present tense, multi-sensory detail, process focus, obstacle inclusion, ~10 min reading pace. Cache generated scripts per goal; refresh when actions change. Add `rehearsal_sessions` table.
**Mobile work:** Rehearsal screen (single flowing view with SVG progress ring timer). Step transitions (Grounding → Process → Obstacle → Close). Completion state with generated SVG pattern. Rehearsal history in Gallery.
### 2.3.5 The Ritual (Context-Anchored Daily Flow)
**Required behavior:** User selects a Ritual template (Morning/Evening/Quick) and anchors to a daily context → app delivers a timed, sequenced flow chaining Mirror/Turn/Lens steps → Ritual completion tracked with context consistency metrics.
**Backend work:** `ritual_configs` table (template, anchored time, notification preferences). `ritual_completions` table (timestamp, duration, steps completed). Context consistency calculation logic (same-window tracking per Wood et al.). Ritual notification scheduling.
**Mobile work:** Ritual selection/setup during onboarding or settings. Single flowing Ritual screen with SVG progress segments per step. Step transitions without navigation. Completion state with Ritual pattern. Context consistency display in streaks.
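The context consistency calculation above could be the share of recent completions that fall inside the anchored window; the window width and field names are assumptions:

```typescript
// Context consistency sketch: fraction of completions inside the anchored
// daily window (e.g. anchorHour 7 with windowHours 2 means 07:00-09:00).
interface RitualConfig { anchorHour: number; windowHours: number; }
interface Completion { completedAt: Date; }

function contextConsistency(config: RitualConfig,
                            completions: Completion[]): number {
  if (completions.length === 0) return 0;
  const inWindow = completions.filter((c) => {
    const h = c.completedAt.getHours();
    return h >= config.anchorHour && h < config.anchorHour + config.windowHours;
  }).length;
  return inWindow / completions.length;
}
```

A rolling lookback (say, the last 14 completions) would keep the metric responsive to recent behavior.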
### 2.3.6 The Evidence Wall (Mastery Tracking)
**Required behavior:** System automatically collects proof points from all features (completed actions, saved keepsakes, self-corrections, streak milestones, goal completions, reframe echoes) → Evidence Wall in "You" tab displays as SVG mosaic → AI surfaces evidence contextually when self-efficacy dip detected.
**Backend work:** `evidence_points` table (user_id, type, source_feature, source_id, description, created_at). Background job to detect and log proof points from existing feature activity. Efficacy-dip detection logic (pattern analysis on recent Mirror/Turn language). Evidence surfacing endpoint for contextual AI integration.
**Mobile work:** Evidence Wall view in "You" tab (SVG mosaic grid, color-coded by source). Timeline toggle view. Evidence count badges. Contextual evidence card component for use within Mirror/Turn sessions.
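The efficacy-dip detection above is described as pattern analysis; as a deliberately naive stand-in (the phrase list and threshold are placeholders, not the real model), a phrase-ratio heuristic could look like:

```typescript
// Naive efficacy-dip heuristic: flag when the share of self-defeating
// phrases in recent Mirror/Turn messages crosses a threshold.
// Placeholder markers only — real detection is a richer analysis.
const DIP_MARKERS: RegExp[] = [
  /\bi can'?t\b/i,
  /\bi always fail\b/i,
  /\bwhat'?s the point\b/i,
];

function efficacyDipDetected(recentMessages: string[],
                             threshold = 0.4): boolean {
  if (recentMessages.length === 0) return false;
  const hits = recentMessages.filter((m) =>
    DIP_MARKERS.some((p) => p.test(m))).length;
  return hits / recentMessages.length >= threshold;
}
```

When the flag trips, the evidence surfacing endpoint would pick proof points matching the current feature context.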
---
## 2.4 Deep Technical Workstreams
### 2.4.1 Prompt and Output Contracts
Create strict prompt templates and JSON output contracts per feature: Mirror fragment detection, Mirror inline reframe, Turn multi-pattern output, Lens daily focus output, Rehearsal visualization script, Evidence Wall contextual surfacing. Require server-side validation of AI output shape before returning to clients.
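Server-side shape validation for the Turn contract might look like the sketch below; the field names are illustrative, since the real contract is versioned alongside its prompt template:

```typescript
// Illustrative Turn output contract: exactly 3 perspectives plus one
// if-then micro-action. Reject anything malformed before it reaches a client.
interface TurnOutput {
  perspectives: string[];
  microAction: string;
}

function validateTurnOutput(raw: unknown): TurnOutput | null {
  if (typeof raw !== "object" || raw === null) return null;
  const o = raw as Record<string, unknown>;
  if (!Array.isArray(o.perspectives) || o.perspectives.length !== 3) return null;
  if (!o.perspectives.every((p) => typeof p === "string" && p.length > 0)) return null;
  if (typeof o.microAction !== "string" || o.microAction.length === 0) return null;
  return { perspectives: o.perspectives as string[], microAction: o.microAction };
}
```

On `null`, the caller retries once or serves fallback copy rather than passing broken output to the UI.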
### 2.4.2 Safety Integration
At this phase safety must be complete for user-facing flows: all Mirror and Turn requests pass safety gate, crisis response path returns resource payload (not reframe payload), safety events are queryable for audit.
### 2.4.3 Entitlement Enforcement
Enforce in API middleware: free turn daily limits, free mirror weekly limits, spectrum endpoint lock for non-entitled users. Add clear response codes and client UI handling for plan limits.
### 2.4.4 Performance Targets
Set targets now and test against them: Mirror fragment detection p95 under 3.5s, Turn generation p95 under 3.5s, client screen transitions under 300ms for cached navigation.
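Checking collected latency samples against these targets needs a p95 helper; this sketch uses the nearest-rank definition (one common convention among several):

```typescript
// Nearest-rank p95 over a batch of latency samples in milliseconds.
function p95(samplesMs: number[]): number {
  if (samplesMs.length === 0) return 0;
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil(0.95 * sorted.length) - 1);
  return sorted[idx];
}
```

In practice the metrics backend computes this, but a local helper is useful for load-test scripts and CI latency gates.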
---
## 2.5 Build Plan
**Week 1 (Week 4 overall):**
- Finish Mirror backend and basic mobile UI
- Complete fragment highlight rendering
**Week 2 (Week 5 overall):**
- Finish inline reframe flow and session reflections
- Add Mirror history and session detail view
**Week 3 (Week 6 overall):**
- Finish Turn backend and mobile flow
- Add save/history integration
**Week 4 (Week 7 overall):**
- Finish Lens goals and daily actions
- Add daily focus/affirmation flow
- Build Rehearsal backend + mobile UI (Lens sub-feature)
**Week 5 (Week 8 overall):**
- Build Ritual backend (config, completions, consistency tracking)
- Build Ritual mobile UI (single-flow screen, SVG progress, setup flow)
- Build Evidence Wall backend (proof point collection job, evidence_points table)
**Week 6 (Week 9 overall):**
- Build Evidence Wall mobile UI (mosaic view, timeline, contextual card)
- Wire Evidence Wall contextual surfacing into Mirror/Turn sessions
- Integrate Ritual into onboarding flow
**Week 7 (Week 10 overall — hardening):**
- Optimize latency across all features
- Improve retry and offline handling
- Run end-to-end QA pass across all flows including Ritual → Mirror → Turn → Lens → Evidence
---
## 2.6 Test Plan
**Unit tests:** Prompt builder functions, AI output validators, entitlement middleware, safety decision functions.
**Integration tests:** Full Mirror message lifecycle, full Turn generation lifecycle, Lens action completion lifecycle.
**Manual QA matrix:** Normal usage, plan-limit blocked usage, low-connectivity behavior, crisis-language safety behavior.
## 2.7 Phase 2 Deliverables
**Functional:** Mirror v1 complete, Turn v1 complete, Lens v1 complete (with Rehearsal), Ritual v1 complete, Evidence Wall v1 complete, Gallery/history v1 complete.
**Engineering:** Stable endpoint contracts, documented prompt versions, meaningful test coverage for critical flows, feature-level latency and error metrics.
## 2.8 Phase 2 Exit Criteria
You can exit Phase 2 when: users can complete Mirror → Turn → Lens flow end-to-end, Ritual sequences features into a single daily flow, Rehearsal generates process-oriented visualization scripts, Evidence Wall collects and surfaces proof points, plan limits and safety behavior are consistent and test-backed, no critical P0 bugs in core user paths, telemetry confirms baseline latency and reliability targets.
## 2.9 Phase 2 Risks
- **Output variability from model causes UI breakage.** Mitigation: strict response schema validation and fallback copy.
- **Too much feature scope in one pass.** Mitigation: ship v1 flows first, defer advanced UX polish.
- **Latency drift from complex prompts.** Mitigation: simplify prompts and use cached static context.
---
---
# Cross-Phase Reference
## Combined Timeline
| Week | Phase | Focus |
|------|-------|-------|
| 1 | Phase 1 | Schema, auth, profile endpoints |
| 2 | Phase 1 | Entitlements, AI gateway, rate limits |
| 3 | Phase 1 | Feature skeletons, safety, observability |
| 4 | Phase 2 | Mirror backend + mobile UI |
| 5 | Phase 2 | Mirror inline reframes + history |
| 6 | Phase 2 | Turn backend + mobile flow |
| 7 | Phase 2 | Lens goals + daily actions + Rehearsal |
| 8 | Phase 2 | Ritual (backend + mobile + onboarding) + Evidence Wall backend |
| 9 | Phase 2 | Evidence Wall mobile + contextual surfacing integration |
| 10 | Phase 2 | Hardening, latency, end-to-end QA |
## Dependencies
- Phase 2 Mirror depends on Phase 1 AI gateway + safety precheck
- Phase 2 entitlement enforcement depends on Phase 1 plan gating middleware
- Phase 2 Lens daily actions depend on Phase 1 AI gateway being stable
- All Phase 2 features depend on Phase 1 observability for debugging
## Combined API Surface (End of Phase 2)
**Auth:** register, login, refresh, logout, me
**Billing:** entitlements, App Store + Google Play webhooks
**Mirror:** create session, send message (with fragment detection), close session (with reflection), list sessions, session detail
**Turn:** create turn (with 3 patterns + micro-action), save/unsave, list history
**Lens:** create goal, generate daily actions, complete action, daily affirmation
**Rehearsal:** generate visualization script (per goal), list rehearsal history
**Ritual:** create/update ritual config, start ritual session, complete ritual, list completions, context consistency stats
**Evidence Wall:** list proof points (filterable by type/source), get contextual evidence (for AI surfacing)
**Gallery:** list saved turns + mirror reflections + rehearsal/ritual patterns (unified history)


@@ -0,0 +1,167 @@
# Phase 3 - Launch Readiness and Hardening
Duration: 2-4 weeks
Primary owner: Full stack + operations focus
## 1. Objective
Prepare Kalei for real users with production safeguards:
- safety policy completion and crisis flow readiness
- subscription and entitlement reliability
- app and API operational stability
- privacy and compliance basics for app store approval
## 2. Entry Criteria
Phase 2 exit checklist complete.
## 3. Scope
## 3.1 Safety and Trust Hardening
Tasks:
1. finalize crisis keyword and pattern sets
2. validate crisis response templates and regional resources
3. add safety dashboards and alerting
4. add audit trail for safety decisions
Validation goals:
- crisis path responds in under 1 second (p95)
- no crisis path returns reframing output
## 3.2 Billing and Entitlements
Tasks:
1. complete App Store Server Notifications ingestion
2. complete Google Play RTDN ingestion
3. build reconciliation jobs for both stores (entitlements sync)
4. test expired, canceled, trial, billing retry, and restore scenarios
5. add paywall gating in all required clients
Validation goals:
- entitlement state converges within minutes after billing changes
- no premium endpoint access for expired plans
## 3.3 Reliability Engineering
Tasks:
1. finalize health checks and readiness probes
2. add backup and restore procedures for Postgres
3. add Redis persistence strategy for critical counters if required
4. define incident severity levels and on-call workflow
Validation goals:
- verified DB restore from backup in staging
- runbook exists for API outage, DB outage, AI provider outage
## 3.4 Security and Compliance Baseline
Tasks:
1. secrets rotation policy and documented process
2. verify transport security and secure headers
3. verify account deletion and data export flows
4. prepare privacy policy and terms for submission
Validation goals:
- basic security checklist signed off
- app store privacy disclosures map to real data flows
## 3.5 Observability and Cost Control
Tasks:
1. define alerts for latency, error rate, and AI spend thresholds
2. implement monthly spend cap and automatic degradation rules
3. monitor feature-level token cost dashboards
Validation goals:
- alert thresholds tested in staging
- degradation path verified (Lens fallback first)
## 3.6 Beta and Release Pipeline
Tasks:
1. set up TestFlight internal/external testing
2. set up Android internal testing track
3. run beta cycle with scripted feedback collection
4. triage and fix launch-blocking defects
Validation goals:
- no unresolved launch-blocking defects
- release checklist complete for both stores
## 4. Suggested Execution Plan
Week 1:
- safety hardening and billing reconciliation
- initial reliability runbooks
Week 2:
- security/compliance checks
- backup and restore drills
- full observability alert tuning
Week 3:
- TestFlight and Play internal beta
- defect triage and fixes
Week 4 (if needed):
- final store submission materials
- go/no-go readiness review
## 5. Release Checklists
## 5.1 API release checklist
- migration plan reviewed
- rollback plan documented
- dashboards green
- error budget acceptable
## 5.2 Mobile release checklist
- build reproducibility verified
- crash-free session baseline from beta acceptable
- paywall and entitlement states correct
- copy and metadata final
## 5.3 Business and policy checklist
- privacy policy URL live
- terms URL live
- support contact available
- crisis resources configured for launch regions
## 6. Exit Criteria
You can exit Phase 3 when:
- app is store-ready with stable entitlement behavior
- safety flow is verified and monitored
- operations runbooks and alerts are live
- backup and restore are proven in practice
## 7. Risks To Watch
- Risk: entitlement mismatch from webhook delays.
- Mitigation: scheduled reconciliation and idempotent webhook handling.
- Risk: launch-day AI latency spikes.
- Mitigation: timeout limits and graceful fallback behavior.
- Risk: compliance gaps discovered late.
- Mitigation: complete privacy mapping before store submission.


@@ -0,0 +1,158 @@
# Phase 4 - Spectrum and Scale
Duration: 3-6 weeks
Primary owner: Data + backend + product analytics
## 1. Objective
Deliver the Spectrum intelligence features and scaling maturity:
- Spectrum weekly and monthly insights
- aggregated analytics model over user activity
- asynchronous jobs and batch processing
- cost, reliability, and scaling controls for growth
## 2. Entry Criteria
Phase 3 exit checklist complete.
## 3. Scope
## 3.1 Spectrum Data Foundation
Implement tables and data flow for:
- session-level emotional vectors
- turn-level impact analysis
- weekly aggregates
- monthly aggregates
Data design requirements:
- user-level partition/index strategy for query speed
- clear retention and deletion behavior
- exclusion flags so users can omit sessions from analysis
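Weekly aggregation over session-level vectors, honoring the exclusion flags above, could be sketched as follows (the week key and field names are assumptions):

```typescript
// Average session-level emotional vectors into a weekly aggregate,
// skipping sessions the user has excluded from analysis.
interface SessionVector {
  week: string;      // e.g. ISO week key "2026-W08"
  excluded: boolean; // user exclusion flag
  vector: number[];  // emotional dimensions (names are a product decision)
}

function weeklyAggregate(sessions: SessionVector[],
                         week: string): number[] | null {
  const included = sessions.filter((s) => s.week === week && !s.excluded);
  if (included.length === 0) return null; // nothing to report this week
  const dims = included[0].vector.length;
  const sums = new Array(dims).fill(0);
  for (const s of included)
    for (let i = 0; i < dims; i++) sums[i] += s.vector[i];
  return sums.map((v) => v / included.length);
}
```

Because excluded sessions never enter the aggregate, honoring a new exclusion only requires recomputing the affected weeks.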
## 3.2 Aggregation Pipeline
Build asynchronous jobs:
1. post-session analysis job
2. weekly aggregation job
3. monthly narrative job
Job engineering requirements:
- idempotency keys
- retry with backoff
- dead-letter queue for failures
- metrics for queue depth and job duration
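The idempotency and retry requirements above can be sketched as follows; the `Map` stands in for a persisted job-results table keyed by idempotency key, and real jobs would be async:

```typescript
// Idempotency: re-delivery of a job returns the stored result instead of
// re-running the work. (Sync job signature here for brevity.)
const completed = new Map<string, unknown>();

function runIdempotent<T>(key: string, job: () => T): T {
  if (completed.has(key)) return completed.get(key) as T; // already done
  const result = job();
  completed.set(key, result);
  return result;
}

// Retry delay: exponential backoff with a cap, plus up to 20% jitter so
// retrying workers don't stampede in lockstep.
function backoffMs(attempt: number, baseMs = 500, capMs = 60_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return exp + Math.random() * exp * 0.2;
}
```

A natural key shape is `"{jobType}:{week}:{userId}"`, so a weekly aggregation redelivered by the queue is a no-op.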
## 3.3 Spectrum Insight Generation
Implement AI-assisted summary generation using aggregated data only.
Rules:
- do not include raw user text in generated insights by default
- validate output tone and safety constraints
- version prompts and track prompt revisions
## 3.4 Spectrum API and Client
Backend endpoints:
- weekly insight feed
- monthly deep dive
- spectrum reset
- exclusions management
Mobile screens:
- emotional landscape view
- pattern distribution view
- insight feed cards
- monthly summary panel
## 3.5 Growth-Ready Scale Controls
Implement scale milestones:
- worker isolation from interactive API if needed
- database optimization and index tuning
- caching strategy for read-heavy insight endpoints
- cost-aware model routing for non-critical generation
## 4. Detailed Execution Plan
Week 1:
- schema rollout for spectrum tables
- event ingestion hooks from Mirror/Turn/Lens
Week 2:
- implement post-session analysis and weekly aggregation jobs
- add metrics and retries
Week 3:
- implement monthly aggregation and narrative generation
- implement spectrum API endpoints
Week 4:
- mobile spectrum dashboard v1
- push notification hooks for weekly summaries
Week 5-6 (as needed):
- performance tuning
- scale and cost optimization
- UX polish for insight comprehension
## 5. Quality and Analytics Requirements
Quality gates:
- no raw-content leakage in Spectrum UI
- weekly job completion SLA met
- dashboard load times within agreed target
Analytics requirements:
- track spectrum engagement events
- track conversion impact from spectrum teaser to upgrade
- track retention lift for spectrum users vs non-spectrum users
## 6. Deliverables
Functional deliverables:
- Spectrum dashboard v1
- weekly and monthly insight generation
- user controls for exclusions and reset
Engineering deliverables:
- robust worker pipeline with retries and DLQ
- aggregated analytics tables with indexing strategy
- end-to-end observability for job health and costs
## 7. Exit Criteria
You can exit Phase 4 when:
- weekly and monthly insights run on schedule reliably
- users can view, reset, and control analysis scope
- spectrum cost and performance stay inside defined envelopes
- data deletion behavior is verified for raw and derived records
## 8. Risks To Watch
- Risk: analytics pipeline complexity causes reliability issues.
- Mitigation: isolate workers and enforce idempotent jobs.
- Risk: insight quality is too generic.
- Mitigation: prompt iteration with rubric scoring and blinded review.
- Risk: costs drift with growing history windows.
- Mitigation: aggregate-first processing and strict feature budget controls.