Initial commit: Kalei app — docs, mockups, logo, pitch deck

Complete project files including:
- 73 polished HTML mockup screens (onboarding, turn, mirror, lens, gallery, you, ritual, spectrum, modals, guide)
- Design system CSS with Inter font, jewel-tone palette, device frame scaling
- Canonical 6-blade kaleidoscope logo (soft-elegance-final)
- SVG asset library (fragments, icons, patterns, evidence wall, spectrum viz)
- Product docs, brand guidelines, technical architecture, build phases
- Pitch deck and cost projections
- Logo mockup iterations and finalists

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-22 14:55:22 +01:00
commit 38021c4633
168 changed files with 46724 additions and 0 deletions

@@ -0,0 +1,403 @@
# Kalei — Brand Metaphor & Experience Design
## The Core Metaphor
A kaleidoscope takes broken, random fragments of glass and reveals them as beautiful, symmetrical patterns. It never changes the pieces — it changes the **angle**. Turn it once, and chaos becomes art. Turn it again, and the same fragments form something entirely new.
**Kalei does the same thing with your thoughts.**
Your situation hasn't changed. Your circumstances are the same fragments they were a moment ago. But Kalei shifts the angle — and suddenly you see the pattern, the meaning, the opportunity that was always there.
This isn't toxic positivity. A kaleidoscope doesn't pretend the glass isn't broken. It proves that broken things can still be beautiful.
---
## Brand Vocabulary
Every app builds an unconscious vocabulary through the words it uses in buttons, headers, notifications, and empty states. Kalei's vocabulary should reinforce the metaphor without being heavy-handed.
### Primary Terms (Use Frequently)
| Instead of... | Kalei says... | Why |
|----------------------|----------------------|--------------------------------------------------------|
| Reframe | **Turn** | You "turn" a kaleidoscope to see a new pattern |
| Negative thought | **Fragment** | Broken glass — raw material, not a flaw |
| Reframed perspective | **Pattern** | The beautiful arrangement revealed by a new angle |
| Journal entry | **Reflection** | Light + mirrors = what a kaleidoscope runs on |
| Daily session | **Turn of the day** | Each day you take a fresh turn |
| Progress/history | **Gallery** | A collection of the patterns you've created |
| Insight | **Facet** | One face of a multifaceted view |
| Saved reframe | **Keepsake** | A pattern worth holding onto |
### Secondary Terms (Use Sparingly for Flavor)
- **Shift** — a small adjustment in perspective
- **Prism** — the tool that splits one beam into many colors
- **Mosaic** — the bigger picture built from many small pieces
- **Spectrum** — the full range of ways to see something
- **Illuminate** — to light up what was hidden
- **Refract** — to bend light in a new direction
### Words to Avoid
- "Fix" — implies the user is broken
- "Heal" — too clinical, positions app as therapy
- "Transform" — too dramatic, overpromises
- "Manifest" — save for the Manifestation Engine context only
- "Positive vibes" — trivializes the process
- "Journey" — overused in wellness apps
---
## Feature Naming
### The Reframer → **The Kaleidoscope** (or just **Turn**)
The flagship feature. User inputs a negative thought (a fragment), and Kalei reveals multiple reframed perspectives (patterns).
- **CTA button:** "Turn" (verb — active, simple, one word)
- **Input prompt:** "Drop in a fragment" or "What's on your mind?"
- **Loading state:** A subtle kaleidoscope rotation animation
- **Results header:** "Here's what the same pieces look like from a new angle"
- **Individual reframes:** Displayed as "Pattern 1," "Pattern 2," "Pattern 3" — each a different arrangement of the same facts
- **Save action:** "Keep this pattern"
### Manifestation Engine → **The Lens**
The goal-setting and manifestation feature. If the Kaleidoscope shows you new patterns in what already exists, the Lens focuses your vision on what you're building toward.
- **Section header:** "Your Lens"
- **Goal creation:** "Set your focus"
- **Daily affirmation:** "Today's focus"
- **Vision board:** "The View" — what you see when you look through the lens
- **Progress check-in:** "Sharpen your focus"
- **Milestone reached:** "Crystal clear"
### Combined Narrative
> The Kaleidoscope helps you see beauty in what's already there.
> The Lens helps you focus on what's ahead.
> Together, they're Kalei — a new way to see your life.
---
## Onboarding Flow
The onboarding should teach the metaphor through experience, not explanation.
### Screen 1 — The Fragment
Visual: A single shard of colored glass on a dark background. Simple. Stark.
> **"This is a thought."**
> *On its own, it can feel sharp. Random. Hard to make sense of.*
### Screen 2 — The Turn
Visual: The shard multiplies and rotates into a kaleidoscope pattern. Animated transition.
> **"But change the angle..."**
> *...and the same piece becomes part of something beautiful.*
### Screen 3 — The Reveal
Visual: A full, stunning kaleidoscope pattern fills the screen. Color blooms.
> **"Kalei doesn't change your reality."**
> *It changes how you see it.*
### Screen 4 — First Turn (Interactive)
> **"Let's try your first Turn."**
> *Type something that's been weighing on you.*
The user types a real negative thought. Kalei processes it and returns three reframed perspectives. The user experiences the core value proposition within 60 seconds of opening the app.
### Screen 5 — Welcome
> **"Welcome to Kalei."**
> *Every day is a new turn.*
---
## Visual Design Language
### The Kaleidoscope Aesthetic
The visual identity should evoke the feeling of looking through a kaleidoscope without being literal or childish.
**Color Palette:**
- **Primary:** Deep jewel tones — amethyst purple, sapphire blue, emerald green
- **Secondary:** Warm golds and soft amber (the light passing through glass)
- **Background:** Near-black or deep navy (the dark tube of a kaleidoscope — the fragments shine against darkness)
- **Accent:** Prismatic gradients for highlights and CTAs (light refracting)
**Why dark backgrounds:** A kaleidoscope works by reflecting light against darkness. The dark UI makes the colorful elements pop — and also positions Kalei as premium, not bubbly.
**Avoid:** Pastel wellness aesthetic. No sage green, no cream, no watercolor blobs. Kalei is jewel-toned, rich, and confident.
**Typography:**
- Clean, modern sans-serif for body text (clarity, legibility)
- One geometric or slightly decorative font for headlines (faceted, angular — like cut glass)
**Iconography:**
- Geometric and faceted — hexagons, triangles, crystalline shapes
- Avoid circles and soft curves (that's every other wellness app)
- Subtle symmetry in icon design (mirrors the symmetry of kaleidoscope patterns)
### Signature Animation: The Turn
The core micro-interaction of the app. When a user submits a thought for reframing:
1. **Input phase:** Fragment icon — a single angular shard
2. **Processing phase:** The shard begins to rotate and multiply (kaleidoscope turning). Subtle, smooth, 1.5 seconds
3. **Reveal phase:** Fragments settle into a symmetric pattern. The reframed perspectives appear beneath or within the pattern
This animation should become iconic — the "Kalei Turn" — recognizable in screenshots, marketing, and social media.
### Pattern Generation
Each reframing session could generate a unique, procedurally-created kaleidoscope pattern based on the input. These patterns become:
- **Visual identity for saved reframes** — each keepsake has its own pattern
- **Gallery items** — your collection of patterns grows over time
- **Shareable cards** — "My pattern for today" with the reframe text overlaid
- **Profile decoration** — your most-used patterns become part of your visual identity
This is a powerful retention mechanic: **users build a gallery of beautiful, personal, unique patterns.** Each one tied to a moment where they chose to see things differently.
---
## Navigation & Information Architecture
### Tab Bar (4 tabs)
| Icon | Label | Function |
|------|-------|----------|
| ◇ (geometric shard) | **Turn** | The Kaleidoscope — reframe a thought |
| ◎ (lens/circle) | **Lens** | The Manifestation Engine — goals & focus |
| ▦ (grid of patterns) | **Gallery** | History of all your Turns and patterns |
| ● (profile) | **You** | Settings, stats, subscription, profile |
### Turn Tab (Home)
- Hero area: "What's the fragment?" — text input
- Below: "Turn of the day" — a featured prompt or previous pattern
- Below: "Recent patterns" — quick access to last 3 Turns
- Floating action: Quick Turn button (always accessible)
### Lens Tab
- Current focus (active goal) displayed prominently
- Today's affirmation
- Vision board / The View
- Progress tracker with "sharpening" visual metaphor
### Gallery Tab
- Grid view of kaleidoscope patterns, each linked to a saved reframe
- Filterable by date, mood tag, or theme
- Tap to expand: see the original fragment, the patterns revealed, and any notes
- Option to reshare or re-Turn (reframe the same thought again for fresh perspectives)
### You Tab
- Streak counter: "X-day turning streak"
- Stats: Total turns, patterns saved, most common themes
- Settings, subscription management
- "Your spectrum" — a visual breakdown of the emotional themes you've explored
---
## Notification & Engagement Copy
### Daily Prompt (Push Notification)
Rotate through styles:
- "Ready for today's Turn? 🔮"
- "Same pieces, new angle. What fragment are you carrying today?"
- "Your Gallery is growing. Add today's pattern."
- "The glass hasn't changed. But the view can. Take a Turn."
### Streak Maintenance
- Day 3: "Three days of turning fragments into patterns. Keep going."
- Day 7: "A week of new angles. Your Gallery is filling up."
- Day 30: "30 days. 30 Turns. You're seeing things most people never will."
- Streak broken: "The kaleidoscope is still here when you're ready. No pressure."
### Milestone Celebrations
- First Turn: "Your first pattern. This is where it starts."
- 10th Turn: "10 fragments turned into 10 beautiful patterns."
- 50th Turn: "You've looked at 50 hard things and found something worth keeping in every one."
- 100th Turn: "100 Turns. You don't just see the bright side — you see every side."
### Empty States
- Gallery (no saves yet): "Your Gallery is waiting. The next pattern you save will appear here."
- Lens (no goal set): "What are you focusing on? Set your first Lens."
- Turn history (new user): "Every kaleidoscope starts with a single turn."
---
## Subscription & Monetization Naming
### Free Tier → **Kalei**
- 3 Turns per day
- Basic pattern generation
- Gallery (last 30 days)
### Premium Tier → **Kalei Prism**
- Unlimited Turns
- Full Gallery (all history)
- The Lens (Manifestation Engine)
- Advanced reframe styles (Stoic, Compassionate, Pragmatic, Growth)
- Custom pattern themes
- Export & share patterns
**Why "Prism":** A prism takes a single beam of light and splits it into its full spectrum. Kalei Prism gives you the full spectrum of features.
**CTA for upgrade:** "See the full spectrum" or "Unlock your Prism"
### Pricing Display
> **Kalei Prism — $7.99/month**
> *Unlimited Turns. Full Gallery. The Lens. Your complete spectrum.*
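The tier split above implies a simple entitlement check in the app: the free tier caps daily Turns, Prism removes the cap. A minimal sketch — the tier names come from this doc, but the function and field names are illustrative, not a real API:

```typescript
// Hypothetical entitlement check for the Free vs. Prism tiers described above.
// Tier names match the doc; canTurn and turnsUsedToday are illustrative names.

type Tier = "free" | "prism";

const FREE_TURNS_PER_DAY = 3; // free tier: 3 Turns per day

function canTurn(tier: Tier, turnsUsedToday: number): boolean {
  if (tier === "prism") return true; // Kalei Prism: unlimited Turns
  return turnsUsedToday < FREE_TURNS_PER_DAY;
}
```

The server-side entitlement record, not the client, should be the source of truth for `turnsUsedToday`.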
---
## Social & Sharing Mechanics
### Pattern Cards
When a user saves a reframe, they can generate a **Pattern Card** — a shareable image featuring:
- Their unique kaleidoscope pattern (procedurally generated)
- The reframed thought (the pattern, not the fragment — we never share the negative thought)
- Subtle Kalei branding
These are designed to be **Instagram Story and iMessage native** — correct aspect ratios, visually striking against both light and dark backgrounds.
### "Turn It" Sharing
A user can share a prompt with a friend: "Turn this fragment" — challenging someone else to reframe a thought. This introduces new users to the app through a natural, non-spammy mechanic.
### Community Gallery (Future, v2+)
An opt-in public gallery where users can share their best patterns anonymously. Browse how other people turned their fragments into patterns. Upvote the most powerful reframes. This builds community without requiring social profiles or exposing personal information.
---
## Marketing & Brand Voice
### Tagline Options
1. **"Same pieces. New angle."** — the core proposition in five words
2. **"Turn how you see it."** — active, empowering, references the mechanic
3. **"Find the pattern."** — mysterious, inviting, implies hidden beauty
4. **"A new way to see."** — simple, universal
**Recommended primary tagline:** *Same pieces. New angle.*
### Brand Voice Guidelines
**Kalei speaks like:** A wise friend who sees beauty in hard things — not a therapist, not a guru, not a cheerleader. Calm. Grounded. A little poetic. Never preachy.
**Tone:** Warm but not soft. Confident but not aggressive. Poetic but not flowery.
**Examples:**
| Situation | ❌ Wrong tone | ✅ Kalei tone |
|-----------|-------------|-------------|
| User inputs a negative thought | "Let's turn that frown upside down!" | "Let's see what this looks like from another angle." |
| User completes a reframe | "Amazing job! You're so strong!" | "There it is. A pattern worth keeping." |
| User hasn't opened app in a week | "We miss you! Come back!" | "Still here. Ready when you are." |
| Explaining the app | "AI-powered cognitive reframing tool" | "A kaleidoscope for your mind." |
### App Store Description (Draft)
> **Kalei — A kaleidoscope for your mind.**
>
> A kaleidoscope doesn't change the glass. It changes the angle. Suddenly, broken fragments become a beautiful pattern.
>
> Kalei does the same thing with your thoughts.
>
> Type what's weighing on you. Kalei reveals new perspectives — not toxic positivity, but genuine, research-backed ways to see the same situation differently. Every reframe is grounded in cognitive behavioral science and built to help you think clearer, not just feel better.
>
> **The Kaleidoscope** — Turn any negative thought into multiple new perspectives. Same facts. Different angle. Beautiful patterns.
>
> **The Lens** — Set your focus. Define what you're building toward. Daily affirmations, vision tracking, and goal clarity powered by AI.
>
> **Your Gallery** — Every Turn creates a unique pattern. Save your favorites. Watch your collection grow. See how far you've come.
>
> Same pieces. New angle. That's Kalei.
### Elevator Pitch
> "Kalei is a kaleidoscope for your mind. You give it a negative thought — a broken fragment — and it shows you the beautiful patterns hidden inside. It's AI-powered cognitive reframing that helps you see the same situation from new angles. Not toxic positivity. Real perspective shifts, grounded in science. Same pieces, new angle."
---
## Technical Implementation Notes
### Procedural Pattern Generation
Each reframing session should generate a unique kaleidoscope pattern. Implementation approach:
- Use the **input text as a seed** — same thought always generates the same base pattern (creates personal connection)
- Apply **reframe variant as a rotation** — Pattern 1, Pattern 2, Pattern 3 are visual rotations of the base
- Render using **Canvas/WebGL** in React Native (or pre-rendered SVG for performance)
- Patterns should be **deterministic** — reopening a saved reframe shows the same pattern
- Export patterns as **PNG at 1080×1920** for Instagram Stories and **1080×1080** for feed posts
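The seeding scheme above can be sketched in a few lines: hash the fragment text into a 32-bit seed, feed it to a small deterministic PRNG, and derive the base pattern from that stream, with each reframe variant applied as a fixed rotation. This is a sketch under the doc's stated rules (deterministic, text-seeded, variant-as-rotation); the hash/PRNG choices and all names are illustrative:

```typescript
// Deterministic pattern generation sketch: same fragment text → same base
// pattern; variant n is the base rotated by n × 60°. Names are illustrative.

// FNV-1a string hash → 32-bit seed
function hashSeed(text: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < text.length; i++) {
    h ^= text.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

// mulberry32: tiny deterministic PRNG returning floats in [0, 1)
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

interface Pattern {
  hues: number[];   // one hue (degrees) per kaleidoscope blade
  rotation: number; // degrees; variant n = base rotated by n * 60°
}

function generatePattern(fragment: string, variant: number, blades = 6): Pattern {
  const rand = mulberry32(hashSeed(fragment));
  const hues = Array.from({ length: blades }, () => Math.floor(rand() * 360));
  return { hues, rotation: (variant * 60) % 360 };
}
```

Because the PRNG is seeded only by the text, reopening a saved reframe reproduces the identical pattern, satisfying the determinism requirement.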
### Animation Specs
- **Turn animation:** 1.5s ease-in-out rotation, fragments multiplying from 1→6→full symmetry
- **Loading shimmer:** Prismatic color shift across a geometric skeleton screen
- **Tab transitions:** Subtle faceted wipe (diagonal geometric transition, not standard iOS slide)
- **Pattern reveal:** Fragments drift into position with slight parallax depth (0.3s stagger per fragment)
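The timing numbers above (1.5 s ease-in-out, 1→6 fragments, 0.3 s stagger) translate directly into a couple of pure helpers. A sketch with illustrative function names — the easing curve is a standard cubic ease-in-out, which the spec does not mandate:

```typescript
// Timing helpers for the Turn animation spec above. Function names and the
// specific cubic easing are illustrative; the constants come from the spec.

function easeInOut(t: number): number {
  // standard cubic ease-in-out on t ∈ [0, 1]
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}

// How many fragments are visible `elapsed` seconds into the 1.5 s turn
function visibleFragments(elapsed: number, duration = 1.5, max = 6): number {
  const t = Math.min(Math.max(elapsed / duration, 0), 1);
  return Math.max(1, Math.round(easeInOut(t) * max));
}

// Reveal delay for fragment `index` during the pattern-reveal phase (0.3 s stagger)
function revealDelay(index: number, stagger = 0.3): number {
  return index * stagger;
}
```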
### Reframe Prompt Engineering
The Claude API prompts for reframing should be structured to match the metaphor:
```
System prompt context:
"You are the engine behind Kalei, a kaleidoscope for the mind.
The user gives you a fragment — a negative thought or situation.
Your job is to reveal the patterns — multiple genuine, grounded
perspectives on the same situation. You never change the facts.
You change the angle.
You are not a therapist. You are not toxic positivity.
You are a kaleidoscope: you show what was already there,
arranged in a way the user hadn't seen before."
```
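Wiring that system prompt into an actual call means building a Messages API request around it. A sketch of the request construction only — the model name is a placeholder, the helper names are illustrative, and in the real app the object would be passed to the Anthropic SDK's `client.messages.create` rather than built by hand:

```typescript
// Sketch: assembling a Messages API request body around the Kalei system
// prompt. Model name is a placeholder; names here are illustrative.

const KALEI_SYSTEM_PROMPT = `You are the engine behind Kalei, a kaleidoscope for the mind.
The user gives you a fragment — a negative thought or situation.
Your job is to reveal the patterns — multiple genuine, grounded
perspectives on the same situation. You never change the facts.
You change the angle.`;

interface ReframeRequest {
  model: string;
  max_tokens: number;
  system: string;
  messages: { role: "user"; content: string }[];
}

function buildReframeRequest(fragment: string, model = "claude-model-placeholder"): ReframeRequest {
  return {
    model,
    max_tokens: 1024,
    system: KALEI_SYSTEM_PROMPT,
    messages: [
      { role: "user", content: `Fragment: ${fragment}\nReveal three patterns.` },
    ],
  };
}
```

Keeping the metaphor vocabulary ("fragment", "patterns") inside both the system and user turns keeps the model's output aligned with the UI copy.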
---
## Summary: The Metaphor at Every Layer
| Layer | How the metaphor shows up |
|-------|---------------------------|
| **Name** | Kalei — short for kaleidoscope |
| **Tagline** | Same pieces. New angle. |
| **Core mechanic** | "Turn" a fragment into patterns |
| **Visual design** | Jewel tones, geometric shapes, dark backgrounds, prismatic gradients |
| **Animations** | Kaleidoscope rotation on every reframe |
| **Vocabulary** | Fragments, patterns, turns, facets, gallery, keepsakes |
| **Feature names** | The Kaleidoscope, The Lens, The Gallery |
| **Subscription** | Kalei Prism — "See the full spectrum" |
| **Sharing** | Pattern Cards — unique generative art tied to each reframe |
| **Notifications** | Poetic, grounded, always referencing angles and patterns |
| **Brand voice** | Calm, wise, finds beauty in hard things |
| **Onboarding** | User experiences a real Turn within 60 seconds |
| **Retention** | Gallery of personal patterns grows over time — collectible, visual, meaningful |
---
*The glass hasn't changed. But you have.*


@@ -0,0 +1,31 @@
<svg viewBox="0 0 400 400" xmlns="http://www.w3.org/2000/svg" width="400" height="400">
<defs>
<linearGradient id="b1" x1="50%" y1="0%" x2="50%" y2="100%"><stop offset="0%" stop-color="#A78BFA"/><stop offset="100%" stop-color="#6D28D9"/></linearGradient>
<linearGradient id="b2" x1="50%" y1="0%" x2="50%" y2="100%"><stop offset="0%" stop-color="#60A5FA"/><stop offset="100%" stop-color="#1E40AF"/></linearGradient>
<linearGradient id="b3" x1="50%" y1="0%" x2="50%" y2="100%"><stop offset="0%" stop-color="#34D399"/><stop offset="100%" stop-color="#065F46"/></linearGradient>
<linearGradient id="b4" x1="50%" y1="0%" x2="50%" y2="100%"><stop offset="0%" stop-color="#FCD34D"/><stop offset="100%" stop-color="#92400E"/></linearGradient>
<linearGradient id="b5" x1="50%" y1="0%" x2="50%" y2="100%"><stop offset="0%" stop-color="#F472B6"/><stop offset="100%" stop-color="#9D174D"/></linearGradient>
<linearGradient id="b6" x1="50%" y1="0%" x2="50%" y2="100%"><stop offset="0%" stop-color="#818CF8"/><stop offset="100%" stop-color="#3730A3"/></linearGradient>
<linearGradient id="prismatic" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" stop-color="#C4B5FD"/>
<stop offset="25%" stop-color="#93C5FD"/>
<stop offset="50%" stop-color="#6EE7B7"/>
<stop offset="75%" stop-color="#FDE68A"/>
<stop offset="100%" stop-color="#FBCFE8"/>
</linearGradient>
<filter id="glow">
<feGaussianBlur stdDeviation="6" result="blur"/>
<feMerge><feMergeNode in="blur"/><feMergeNode in="SourceGraphic"/></feMerge>
</filter>
</defs>
<g transform="translate(200,200)">
<path d="M 0,-24 Q 50,-100 30,-170 Q -10,-170 -40,-140 Q -30,-80 0,-24" fill="url(#b1)" opacity="0.9"/>
<path d="M 0,-24 Q 50,-100 30,-170 Q -10,-170 -40,-140 Q -30,-80 0,-24" fill="url(#b2)" opacity="0.87" transform="rotate(60)"/>
<path d="M 0,-24 Q 50,-100 30,-170 Q -10,-170 -40,-140 Q -30,-80 0,-24" fill="url(#b3)" opacity="0.87" transform="rotate(120)"/>
<path d="M 0,-24 Q 50,-100 30,-170 Q -10,-170 -40,-140 Q -30,-80 0,-24" fill="url(#b4)" opacity="0.87" transform="rotate(180)"/>
<path d="M 0,-24 Q 50,-100 30,-170 Q -10,-170 -40,-140 Q -30,-80 0,-24" fill="url(#b5)" opacity="0.87" transform="rotate(240)"/>
<path d="M 0,-24 Q 50,-100 30,-170 Q -10,-170 -40,-140 Q -30,-80 0,-24" fill="url(#b6)" opacity="0.82" transform="rotate(300)"/>
<circle r="30" fill="url(#prismatic)" filter="url(#glow)"/>
<circle cx="-6" cy="-8" r="8" fill="white" opacity="0.3"/>
</g>
</svg>

@@ -0,0 +1,175 @@
<svg viewBox="0 0 400 400" xmlns="http://www.w3.org/2000/svg" width="100%" height="100%">
<defs>
<!-- Background Gradient -->
<radialGradient id="bgGlow" cx="50%" cy="50%" r="50%">
<stop offset="0%" stop-color="#0A0E1A" />
<stop offset="100%" stop-color="#050508" />
</radialGradient>
<!-- Center Backlight Glow -->
<radialGradient id="centerLight" cx="50%" cy="50%" r="50%">
<stop offset="0%" stop-color="#ffffff" stop-opacity="0.2" />
<stop offset="100%" stop-color="#ffffff" stop-opacity="0" />
</radialGradient>
<!-- Blade Gradients: Amethyst (0°) -->
<linearGradient id="g0_F1" x1="20" y1="17" x2="-30" y2="160" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#A78BFA" />
<stop offset="100%" stop-color="#8B5CF6" />
</linearGradient>
<linearGradient id="g0_F2" x1="40" y1="0" x2="140" y2="60" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#8B5CF6" />
<stop offset="100%" stop-color="#5B21B6" />
</linearGradient>
<!-- Blade Gradients: Sapphire (60°) -->
<linearGradient id="g1_F1" x1="20" y1="17" x2="-30" y2="160" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#93C5FD" />
<stop offset="100%" stop-color="#3B82F6" />
</linearGradient>
<linearGradient id="g1_F2" x1="40" y1="0" x2="140" y2="60" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#3B82F6" />
<stop offset="100%" stop-color="#1D4ED8" />
</linearGradient>
<!-- Blade Gradients: Emerald (120°) -->
<linearGradient id="g2_F1" x1="20" y1="17" x2="-30" y2="160" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#6EE7B7" />
<stop offset="100%" stop-color="#10B981" />
</linearGradient>
<linearGradient id="g2_F2" x1="40" y1="0" x2="140" y2="60" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#10B981" />
<stop offset="100%" stop-color="#047857" />
</linearGradient>
<!-- Blade Gradients: Amber (180°) -->
<linearGradient id="g3_F1" x1="20" y1="17" x2="-30" y2="160" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#FCD34D" />
<stop offset="100%" stop-color="#F59E0B" />
</linearGradient>
<linearGradient id="g3_F2" x1="40" y1="0" x2="140" y2="60" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#F59E0B" />
<stop offset="100%" stop-color="#B45309" />
</linearGradient>
<!-- Blade Gradients: Rose (240°) -->
<linearGradient id="g4_F1" x1="20" y1="17" x2="-30" y2="160" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#F9A8D4" />
<stop offset="100%" stop-color="#EC4899" />
</linearGradient>
<linearGradient id="g4_F2" x1="40" y1="0" x2="140" y2="60" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#EC4899" />
<stop offset="100%" stop-color="#BE185D" />
</linearGradient>
<!-- Blade Gradients: Indigo (300°) -->
<linearGradient id="g5_F1" x1="20" y1="17" x2="-30" y2="160" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#A5B4FC" />
<stop offset="100%" stop-color="#6366F1" />
</linearGradient>
<linearGradient id="g5_F2" x1="40" y1="0" x2="140" y2="60" gradientUnits="userSpaceOnUse">
<stop offset="0%" stop-color="#6366F1" />
<stop offset="100%" stop-color="#4338CA" />
</linearGradient>
<!-- Prismatic Core Gradient -->
<linearGradient id="prismatic" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" stop-color="#8B5CF6" />
<stop offset="25%" stop-color="#3B82F6" />
<stop offset="50%" stop-color="#10B981" />
<stop offset="75%" stop-color="#F59E0B" />
<stop offset="100%" stop-color="#EC4899" />
</linearGradient>
<!-- Standard Glow Filter -->
<filter id="glow" x="-50%" y="-50%" width="200%" height="200%">
<feGaussianBlur stdDeviation="6" result="blur" />
<feMerge>
<feMergeNode in="blur" />
<feMergeNode in="SourceGraphic" />
</feMerge>
</filter>
<!-- Strong Core Glow -->
<filter id="coreGlow" x="-50%" y="-50%" width="200%" height="200%">
<feGaussianBlur stdDeviation="12" result="blur" />
<feMerge>
<feMergeNode in="blur" />
<feMergeNode in="SourceGraphic" />
</feMerge>
</filter>
</defs>
<!-- Deep Dark Space Background -->
<rect width="100%" height="100%" fill="url(#bgGlow)" />
<!-- Center Backlight -->
<circle cx="200" cy="200" r="120" fill="url(#centerLight)" />
<!-- The Iris (Rotated by -15 deg to emphasize "The Turn" & "New Angle") -->
<g transform="translate(200, 200) rotate(-15)">
<!--
THE KALEIDOSCOPE SHARDS
Overlapping translucent facets using screen blend mode for emergent colors
-->
<!-- Amethyst Blade (0°) -->
<g transform="rotate(0)" style="mix-blend-mode: screen;">
<!-- Main Outer Face -->
<path d="M 40,0 L -30,160 L 140,60 Z" fill="url(#g0_F2)" fill-opacity="0.80" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.4" stroke-linejoin="round" />
<!-- Inner Crystalline Bevel -->
<path d="M 40,0 L 20,34.641 L -30,160 Z" fill="url(#g0_F1)" fill-opacity="0.95" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.6" stroke-linejoin="round" />
</g>
<!-- Sapphire Blade (60°) -->
<g transform="rotate(60)" style="mix-blend-mode: screen;">
<path d="M 40,0 L -30,160 L 140,60 Z" fill="url(#g1_F2)" fill-opacity="0.80" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.4" stroke-linejoin="round" />
<path d="M 40,0 L 20,34.641 L -30,160 Z" fill="url(#g1_F1)" fill-opacity="0.95" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.6" stroke-linejoin="round" />
</g>
<!-- Emerald Blade (120°) -->
<g transform="rotate(120)" style="mix-blend-mode: screen;">
<path d="M 40,0 L -30,160 L 140,60 Z" fill="url(#g2_F2)" fill-opacity="0.80" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.4" stroke-linejoin="round" />
<path d="M 40,0 L 20,34.641 L -30,160 Z" fill="url(#g2_F1)" fill-opacity="0.95" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.6" stroke-linejoin="round" />
</g>
<!-- Amber Blade (180°) -->
<g transform="rotate(180)" style="mix-blend-mode: screen;">
<path d="M 40,0 L -30,160 L 140,60 Z" fill="url(#g3_F2)" fill-opacity="0.80" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.4" stroke-linejoin="round" />
<path d="M 40,0 L 20,34.641 L -30,160 Z" fill="url(#g3_F1)" fill-opacity="0.95" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.6" stroke-linejoin="round" />
</g>
<!-- Rose Blade (240°) -->
<g transform="rotate(240)" style="mix-blend-mode: screen;">
<path d="M 40,0 L -30,160 L 140,60 Z" fill="url(#g4_F2)" fill-opacity="0.80" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.4" stroke-linejoin="round" />
<path d="M 40,0 L 20,34.641 L -30,160 Z" fill="url(#g4_F1)" fill-opacity="0.95" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.6" stroke-linejoin="round" />
</g>
<!-- Indigo Blade (300°) -->
<g transform="rotate(300)" style="mix-blend-mode: screen;">
<path d="M 40,0 L -30,160 L 140,60 Z" fill="url(#g5_F2)" fill-opacity="0.80" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.4" stroke-linejoin="round" />
<path d="M 40,0 L 20,34.641 L -30,160 Z" fill="url(#g5_F1)" fill-opacity="0.95" stroke="#ffffff" stroke-width="0.5" stroke-opacity="0.6" stroke-linejoin="round" />
</g>
<!--
THE KALEIDOSCOPE CORE (The Point of Transformation)
Solid hexagonal prism reflecting the mirrors inside the optical instrument
-->
<g filter="url(#coreGlow)">
<!-- Prismatic Hexagon Aperture -->
<polygon points="38,0 19,32.909 -19,32.909 -38,0 -19,-32.909 19,-32.909" fill="url(#prismatic)" />
<!-- Internal Kaleidoscope Mirror Lines -->
<line x1="-38" y1="0" x2="38" y2="0" stroke="#ffffff" stroke-width="1.5" opacity="0.6" />
<line x1="-19" y1="-32.909" x2="19" y2="32.909" stroke="#ffffff" stroke-width="1.5" opacity="0.6" />
<line x1="19" y1="-32.909" x2="-19" y2="32.909" stroke="#ffffff" stroke-width="1.5" opacity="0.6" />
<!-- Center Refraction Point -->
<circle r="4" fill="#ffffff" filter="url(#glow)" />
</g>
<!-- Outer Core Reflection Edge -->
<polygon points="38,0 19,32.909 -19,32.909 -38,0 -19,-32.909 19,-32.909" fill="none" stroke="#ffffff" stroke-width="1" opacity="0.8" />
</g>
</svg>

@@ -0,0 +1,57 @@
# Build Timeline & Execution Phases
Last updated: 2026-02-22
This folder contains the sequential build plan for developing Kalei. All features ship together in a single v1 release. The phases below represent an execution timeline for managing development complexity, not separate product phases.
Read in order:
1. `phase-0-groundwork-and-dev-environment.md`
2. `phase-1-platform-foundation.md`
3. `phase-2-core-experience-build.md`
4. `phase-3-launch-readiness-and-hardening.md`
5. `phase-4-spectrum-and-scale.md`
## Build Timeline Overview
These phases organize the work sequentially for manageable development and testing. All features — including Spectrum — are built toward a unified v1 launch.
- Phase 0: Groundwork and Dev Environment
- Goal: stable tooling, accounts, repo standards, and local infrastructure.
- Phase 1: Platform Foundation
- Goal: production-quality backend skeleton, auth, entitlements, core data model.
- Phase 2: Core Experience Build
- Goal: ship Mirror, Turn, Lens, Ritual, Evidence Wall end-to-end.
- Phase 3: Launch Readiness and Hardening
- Goal: Spectrum integration, safety, billing, reliability, compliance, app store readiness.
- Phase 4: Post-Launch Scale
- Goal: analytics pipeline, feature enhancements, growth optimization, scaling controls.
## Development Gate Rules
Do not move to the next build phase until the current phase exit checklist is complete.
If a phase slips, reduce scope but do not skip quality gates for:
- security
- safety
- observability
- data integrity
## Product vs. Build Phases
**Important distinction:** These build phases are execution timelines, not product tiers. The product launches with all features (Mirror, Turn, Lens, Ritual, Evidence Wall, Guide, Spectrum) in a single v1 release. The free tier and the Prism subscription tier differ in usage limits, but both include every feature at launch.
## Tooling policy
These phase docs assume an open-source-first stack:
- Gitea for source control and CI
- GlitchTip for error tracking
- PostHog self-hosted for product analytics
- Ollama (local) and vLLM (staging/prod) for open-weight model serving
Platform exceptions remain for mobile distribution and push:
- Apple App Store and Google Play billing/distribution APIs
- APNs and FCM delivery infrastructure

@@ -0,0 +1,185 @@
# Phase 0 - Groundwork and Dev Environment
Duration: 1-2 weeks
Primary owner: Founder + coding assistant
## 1. Objective
Build a stable base so feature work can move fast without breaking:
- all required accounts are created
- local stack boots reliably
- repo structure and standards are in place
- CI checks run on every pull request
## 2. Prerequisites
- Read `docs/kalei-getting-started.md`
- Read `docs/kalei-system-architecture-plan.md`
## 3. Outcomes
By the end of Phase 0 you will have:
- local Postgres and Redis running via Docker
- mobile app bootstrapped with Expo
- API service bootstrapped with Fastify
- initial DB migration system in place
- lint, format, and test commands working
- CI pipeline validating every PR
## 4. Deep Work Breakdown
### 4.1 Access and Account Setup
Task list:
1. Set up Gitea (self-hosted or managed) for source control and CI.
2. Set up open-weight model serving accounts and endpoints (Ollama local, vLLM target host).
3. Create GlitchTip project for API and mobile error tracking.
4. Create PostHog self-hosted project for product analytics.
5. Set up DNS (PowerDNS self-hosted or managed DNS provider) and add domain.
6. Confirm Apple Developer and Google Play Console access (required for app distribution).
Deliverables:
- shared credential inventory (local secure password manager)
- documented secret naming convention
### 4.2 Repository and Branching Standards
Task list:
1. Define branch policy: `main`, short-lived feature branches.
2. Define PR checklist template.
3. Add CODEOWNERS or at least reviewer policy.
4. Add issue templates for bug and feature requests.
Deliverables:
- `CONTRIBUTING.md`
- `.gitea/pull_request_template.md` (or repository PR template equivalent)
- `.gitea/ISSUE_TEMPLATE/*` (or repository issue template equivalent)
## 4.3 Local Development Environment
Task list:
1. Install and verify Git, Node, npm, Docker, Expo CLI.
2. Add Docker compose for Postgres + Redis.
3. Create `.env.example` for API and mobile.
4. Add one-command local start script.
Deliverables:
- `infra/docker/docker-compose.yml`
- `services/api/.env.example`
- `apps/mobile/.env.example`
- root `Makefile` or npm scripts for local startup
## 4.4 API and Mobile Skeletons
Task list:
1. Create Fastify app with health endpoint.
2. Create Expo app with tabs template.
3. Add API client module to mobile app.
4. Show backend health status in app.
Deliverables:
- API running on local port
- mobile app able to read API response
## 4.5 Data and Migration Baseline
Task list:
1. Choose migration tool (for example, `node-pg-migrate`, `drizzle`, or `knex`).
2. Create first migration set for identity tables.
3. Add migration run and rollback commands.
4. Add seed command for local dev data.
Minimum initial tables:
- users
- profiles
- auth_sessions
- refresh_tokens
## 4.6 Quality and Automation Baseline
Task list:
1. Add ESLint + Prettier for API and mobile.
2. Add API unit test framework and one integration test.
3. Configure Gitea Actions (or Woodpecker CI) for lint + test.
4. Add commit hooks (optional but recommended) using `husky`.
Deliverables:
- passing CI on every push/PR
- at least one passing API integration test
## 5. Suggested Day-by-Day Plan
Day 1:
- account setup
- tooling install
- repo folder scaffold
Day 2:
- docker compose and env files
- API skeleton with `/health`
Day 3:
- Expo app setup
- mobile to API health call
Day 4:
- migration tooling and first migrations
- baseline seed script
Day 5:
- linting and tests
- CI setup
- first stable baseline commit
## 6. Validation Checklist
All items must be true:
- `docker compose` starts Postgres and Redis with no errors.
- API starts and `GET /health` returns 200.
- Mobile app loads and displays backend health.
- Migrations can run on clean DB and rollback at least one step.
- CI runs lint and tests successfully.
## 7. Exit Criteria
You can exit Phase 0 when:
- no manual setup surprises remain for a fresh machine
- all team members can run the stack locally in under 30 minutes
- baseline quality checks are automated
## 8. Platform exceptions
These are not open source, but required for shipping mobile apps:
- Apple App Store tooling and APIs
- Google Play tooling and APIs
## 9. Typical Pitfalls and Fixes
- Pitfall: unclear `.env` expectations.
- Fix: complete `.env.example` files with comments.
- Pitfall: mobile app cannot reach local API on real device.
- Fix: use machine LAN IP, not localhost, for device testing.
- Pitfall: migration drift.
- Fix: never edit applied migration files; create a new migration.


@@ -0,0 +1,333 @@
# Kalei Build Plan — Phases 1 & 2
### From Platform Foundation to Core Experience
**Total Duration:** 5-8 weeks
**Approach:** Backend-first in Phase 1, then mobile + backend in parallel in Phase 2
---
## Overview
This document consolidates the two core build phases that take Kalei from a configured dev environment to a fully functional app with Mirror, Turn, and Lens experiences end-to-end. Phase 1 lays the platform foundation (auth, schema, AI gateway, safety). Phase 2 builds the user-facing experience on top of that foundation.
```
Phase 1: Platform Foundation (Weeks 1-3)
→ Auth, schema, entitlements, AI gateway, safety, observability
Phase 2: Core Experience Build (Weeks 4-8)
→ Mirror v1, Turn v1, Lens v1, Gallery, end-to-end flows
```
---
# PHASE 1 — Platform Foundation
**Duration:** 2-3 weeks
**Primary owner:** Backend-first with mobile stub integration
## 1.1 Objective
Build a production-grade platform foundation: robust auth and session model, entitlement checks for free vs. paid plans, core domain schema for Mirror/Turn/Lens, AI gateway scaffold with usage metering, and observability + error handling baseline.
## 1.2 Entry Criteria
Phase 0 exit checklist must be complete.
---
## 1.3 Core Scope
### 1.3.1 API Module Setup
Implement service modules: auth, profiles, entitlements, mirror (session/message skeleton), turn (request skeleton), lens (goal/action skeleton), ai_gateway, usage_cost, safety (precheck skeleton).
Each module needs: route handlers, input/output schema validation, service layer, repository/data access layer, and unit tests.
### 1.3.2 Identity and Access
Implement: email/password registration and login, JWT access token (short TTL), refresh token rotation and revocation, logout all sessions, role model (at least `user`, `admin`).
Security details: hash passwords with Argon2id or bcrypt, store refresh tokens hashed, include device metadata per session.
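The rotation-and-revocation model above can be sketched in a few lines. This is a minimal in-memory illustration, not the production implementation: the `Map` stands in for the real sessions table, and all names (`issueRefreshToken`, `rotateRefreshToken`) are assumptions for the sketch. Only the SHA-256 hash of a token is ever stored, matching the "store refresh tokens hashed" requirement.

```typescript
import { createHash, randomBytes } from "node:crypto";

interface StoredToken {
  userId: string;
  revoked: boolean;
}

// In-memory stand-in for the refresh_tokens table, keyed by token hash.
const store = new Map<string, StoredToken>();

const hash = (raw: string) => createHash("sha256").update(raw).digest("hex");

// Issue a new refresh token; only its hash is persisted.
function issueRefreshToken(userId: string): string {
  const raw = randomBytes(32).toString("hex");
  store.set(hash(raw), { userId, revoked: false });
  return raw;
}

// Rotate: the presented token is revoked and a fresh one is returned.
// A null result means the token is unknown or already revoked; reuse of a
// revoked token is a theft signal and should revoke the whole session family.
function rotateRefreshToken(raw: string): string | null {
  const entry = store.get(hash(raw));
  if (!entry || entry.revoked) return null;
  entry.revoked = true;
  return issueRefreshToken(entry.userId);
}
```

The key property to test: once a token has been rotated, presenting it again must fail.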
### 1.3.3 Entitlement Model
Implement plan model now, even before paywall UI is complete.
Suggested plan keys: `free`, `prism`, `prism_plus`.
Implement gates for: turns per day, mirror sessions per week, spectrum access.
Integration approach: no RevenueCat dependency. Ingest App Store Server Notifications and Google Play Real-Time Developer Notifications (RTDN) directly. Maintain local entitlement snapshots as the source of truth for authorization.
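A server-side gate for the turn limit might look like the sketch below. The limit values are illustrative placeholders, not pricing decisions; only the plan keys (`free`, `prism`, `prism_plus`) come from the plan model above.

```typescript
type PlanKey = "free" | "prism" | "prism_plus";

// Placeholder limits per plan; real values come from product decisions.
const LIMITS: Record<PlanKey, { turnsPerDay: number; spectrum: boolean }> = {
  free: { turnsPerDay: 3, spectrum: false },
  prism: { turnsPerDay: 25, spectrum: true },
  prism_plus: { turnsPerDay: Infinity, spectrum: true },
};

// Returns a decision the gating middleware can translate into an HTTP response.
function checkTurnAllowed(plan: PlanKey, turnsUsedToday: number) {
  if (turnsUsedToday >= LIMITS[plan].turnsPerDay) {
    return { allowed: false, code: 402, reason: "turn_limit_reached" };
  }
  return { allowed: true, code: 200 };
}
```

Because the check reads from the local entitlement snapshot, it stays correct even if a store webhook is delayed; reconciliation (Phase 3) closes any remaining gap.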
### 1.3.4 Data Model (Phase 1 Schema)
Create migrations for: users, profiles, subscriptions, entitlement_snapshots, turns, mirror_sessions, mirror_messages, mirror_fragments, lens_goals, lens_actions, ai_usage_events, safety_events.
Design requirements: every row has `created_at`, `updated_at` where relevant. Index by `user_id` and key query timestamps. Soft delete where legal retention requires it.
### 1.3.5 AI Gateway Scaffold
Implement a strict abstraction now: provider adapter interface, request envelope (feature, model, temperature, timeout), response normalization, token usage extraction, retry + timeout + circuit breaker policy.
Do not expose provider SDK directly in feature modules.
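The adapter boundary can be sketched as follows. All interface and field names here are assumptions for illustration; the stub adapter stands in for a real Ollama or vLLM HTTP client. The point is the shape: feature code builds an envelope and calls the gateway, never a provider SDK.

```typescript
// Request envelope: everything a provider call needs, provider-agnostic.
interface RequestEnvelope {
  feature: "mirror" | "turn" | "lens";
  model: string;
  temperature: number;
  timeoutMs: number;
  prompt: string;
}

// Normalized response: same shape regardless of which provider answered.
interface NormalizedResponse {
  text: string;
  tokensIn: number;
  tokensOut: number;
}

interface ProviderAdapter {
  complete(req: RequestEnvelope): NormalizedResponse;
}

// Stub adapter for the sketch; a real one wraps the provider's HTTP API
// and extracts token usage from its response format.
const stubAdapter: ProviderAdapter = {
  complete: (req) => ({
    text: `[${req.model}] ok`,
    tokensIn: req.prompt.length,
    tokensOut: 4,
  }),
};

// Feature modules only ever see this function. Retry, timeout, and circuit
// breaker policy would wrap the adapter call here.
function gatewayComplete(adapter: ProviderAdapter, req: RequestEnvelope): NormalizedResponse {
  if (req.timeoutMs <= 0) throw new Error("invalid timeout");
  return adapter.complete(req);
}
```

Swapping Ollama for vLLM (or any future provider) then touches one adapter, not every feature module.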
### 1.3.6 Safety Precheck Skeleton
Implement now even if rule set is basic: deterministic keyword precheck, safety event logging, return safety status to caller. Mirror and Turn endpoints must call this precheck before generation.
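A minimal deterministic precheck, consistent with the skeleton described above. The pattern list here is a tiny placeholder; the real crisis keyword and pattern sets are finalized in Phase 3, and the function name is an assumption for the sketch.

```typescript
// Placeholder patterns only; the production rule set is curated separately.
const CRISIS_PATTERNS: RegExp[] = [/hurt myself/i, /end my life/i, /suicid/i];

type SafetyStatus = "safe" | "crisis";

// Deterministic check run before any AI generation. The matched phrase is
// returned so it can be written to the safety event log for audit.
function precheckSafety(text: string): { status: SafetyStatus; matched?: string } {
  for (const pattern of CRISIS_PATTERNS) {
    const m = text.match(pattern);
    if (m) return { status: "crisis", matched: m[0] };
  }
  return { status: "safe" };
}
```

Because the check is pure string matching, it runs in microseconds, which is what makes the Phase 3 target of a sub-second crisis path realistic.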
### 1.3.7 Usage Metering and Cost Guardrails
Implement: per-user usage counters in Redis, endpoint-level rate limit middleware, AI usage event write on every provider call, per-feature daily budget checks.
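The per-user daily counter maps naturally onto a Redis `INCR` with a date-scoped key. The sketch below uses an in-memory `Map` as a stand-in so the logic is visible; the key shape and function names are assumptions, not a fixed convention.

```typescript
// In-memory stand-in for Redis; production uses INCR with an expiring key.
const counters = new Map<string, number>();

// One key per user, feature, and UTC day, e.g. "usage:u1:turn:2026-03-01".
function dayKey(userId: string, feature: string, date: Date): string {
  return `usage:${userId}:${feature}:${date.toISOString().slice(0, 10)}`;
}

// Increment-and-check, mirroring INCR followed by a budget comparison.
function recordUsage(userId: string, feature: string, date: Date, dailyBudget: number) {
  const key = dayKey(userId, feature, date);
  const next = (counters.get(key) ?? 0) + 1;
  counters.set(key, next);
  return { count: next, overBudget: next > dailyBudget };
}
```

The same counter doubles as the input to the per-feature daily budget check, so rate limiting and cost guardrails share one source of truth.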
### 1.3.8 Observability Baseline
Implement: structured logging with request IDs, error tracking to GlitchTip, latency and error metrics per endpoint, AI cost metrics by feature.
---
## 1.4 Build Sequence
**Week 1:**
1. Finalize schema and migration files
2. Implement auth and profile endpoints
3. Add integration tests for auth flows
**Week 2:**
1. Implement entitlements and plan gating middleware
2. Implement AI gateway interface and one real provider adapter
3. Implement Redis rate limits and usage counters
**Week 3:**
1. Implement Mirror and Turn endpoint skeletons with safety precheck
2. Implement Lens goal and action skeleton endpoints
3. Add complete observability hooks and dashboards
---
## 1.5 API Contract — End of Phase 1
**Auth:**
- `POST /auth/register`
- `POST /auth/login`
- `POST /auth/refresh`
- `POST /auth/logout`
- `GET /me`
**Entitlements:**
- `GET /billing/entitlements`
- Webhook endpoints for App Store and Google Play billing event ingestion
**Feature Skeletons:**
- `POST /mirror/sessions`
- `POST /mirror/messages`
- `POST /turns`
- `POST /lens/goals`
## 1.6 Testing Requirements
Minimum automated coverage: auth happy path and invalid credential path, token refresh rotation path, entitlement denial for free limits, safety precheck path for crisis keyword match, AI gateway timeout and fallback behavior. Recommended: basic load test for auth + turn skeleton endpoints.
## 1.7 Phase 1 Deliverables
**Code:** Migration files for core schema, API modules with tests, Redis-backed rate limit and usage tracking, AI gateway abstraction with one provider, safety precheck middleware.
**Operational:** GlitchTip configured, endpoint metrics visible, API runbook for local and staging.
## 1.8 Phase 1 Exit Criteria
You can exit Phase 1 when: core auth model is stable and tested, plan gating is enforced server-side, Mirror/Turn/Lens endpoint skeletons are live, AI calls only happen through AI gateway, logs/metrics/error tracking are active.
## 1.9 Phase 1 Risks
- **Auth complexity balloons early.** Mitigation: keep v1 auth strict but minimal; defer advanced IAM.
- **Schema churn from feature uncertainty.** Mitigation: maintain a schema decision log and avoid premature optimization.
- **Provider coupling in feature code.** Mitigation: enforce gateway adapter pattern in code review.
---
---
# PHASE 2 — Core Experience Build
**Duration:** 3-5 weeks
**Primary owner:** Mobile + backend in parallel
## 2.1 Objective
Ship Kalei's core user experience end-to-end: Mirror with fragment highlighting and inline reframe, Turn generation with 3 perspectives and micro-action, Lens goals/daily actions/daily focus, Gallery/history views for user continuity.
## 2.2 Entry Criteria
Phase 1 exit checklist complete.
---
## 2.3 Product Scope
### 2.3.1 Mirror (Awareness)
**Required behavior:** User starts mirror session → submits messages → backend runs safety precheck first → backend runs fragment detection on safe content → app highlights detected fragments above confidence threshold → user taps fragment for inline reframe → user closes session and receives reflection summary.
**Backend work:** Finalize `mirror_sessions`, `mirror_messages`, `mirror_fragments`. Add close-session reflection endpoint. Add mirror session list/detail endpoints.
**Mobile work:** Mirror compose UI, highlight rendering for detected fragment ranges, tap-to-reframe interaction card, session close and reflection display.
### 2.3.2 Turn (Kaleidoscope)
**Required behavior:** User submits a fragment or thought → backend runs safety precheck → backend generates 3 reframed perspectives → backend returns micro-action (if-then) → user can save turn to gallery.
**Backend work:** Finalize `turns` table and categories. Add save/unsave state. Add history list endpoint.
**Mobile work:** Turn input and loading animation, display 3 patterns + micro-action, save to gallery and view history.
### 2.3.3 Lens (Direction)
**Required behavior:** User creates one or more goals → app generates or stores daily action suggestions → user can mark actions complete → optional daily affirmation/focus shown.
**Backend work:** Finalize `lens_goals`, `lens_actions`. Daily action generation endpoint. Daily affirmation endpoint through AI gateway.
**Mobile work:** Goal creation UI, daily action checklist UI, completion updates and streak indicator.
### 2.3.4 The Rehearsal (Lens Sub-Feature)
**Required behavior:** User selects "Rehearse" within a Lens goal → backend generates a personalized visualization script (process-oriented, first-person, multi-sensory, with obstacle rehearsal) → app displays as a guided text flow with SVG progress ring → session completes with a follow-up micro-action.
**Backend work:** Rehearsal generation endpoint through AI gateway. Prompt template enforcing: first-person perspective, present tense, multi-sensory detail, process focus, obstacle inclusion, ~10 min reading pace. Cache generated scripts per goal; refresh when actions change. Add `rehearsal_sessions` table.
**Mobile work:** Rehearsal screen (single flowing view with SVG progress ring timer). Step transitions (Grounding → Process → Obstacle → Close). Completion state with generated SVG pattern. Rehearsal history in Gallery.
### 2.3.5 The Ritual (Context-Anchored Daily Flow)
**Required behavior:** User selects a Ritual template (Morning/Evening/Quick) and anchors to a daily context → app delivers a timed, sequenced flow chaining Mirror/Turn/Lens steps → Ritual completion tracked with context consistency metrics.
**Backend work:** `ritual_configs` table (template, anchored time, notification preferences). `ritual_completions` table (timestamp, duration, steps completed). Context consistency calculation logic (same-window tracking per Wood et al.). Ritual notification scheduling.
**Mobile work:** Ritual selection/setup during onboarding or settings. Single flowing Ritual screen with SVG progress segments per step. Step transitions without navigation. Completion state with Ritual pattern. Context consistency display in streaks.
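One plausible reading of the same-window consistency metric: the share of Ritual completions that fall within a tolerance window of the anchored time. This is a sketch under that assumption; the exact formula the app ships should be pinned down against the Wood et al. habit literature.

```typescript
// Minutes after midnight, local time, for a completion timestamp.
function minutesOfDay(d: Date): number {
  return d.getHours() * 60 + d.getMinutes();
}

// Fraction of completions within +/- windowMinutes of the anchored time.
function contextConsistency(
  completions: Date[],
  anchorMinutes: number, // anchored time as minutes after midnight
  windowMinutes: number,
): number {
  if (completions.length === 0) return 0;
  const inWindow = completions.filter(
    (d) => Math.abs(minutesOfDay(d) - anchorMinutes) <= windowMinutes,
  ).length;
  return inWindow / completions.length;
}
```

For example, with a 7:00 anchor and a 30-minute window, completions at 7:10 and 7:25 count toward consistency while a 9:00 completion does not.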
### 2.3.6 The Evidence Wall (Mastery Tracking)
**Required behavior:** System automatically collects proof points from all features (completed actions, saved keepsakes, self-corrections, streak milestones, goal completions, reframe echoes) → Evidence Wall in "You" tab displays as SVG mosaic → AI surfaces evidence contextually when self-efficacy dip detected.
**Backend work:** `evidence_points` table (user_id, type, source_feature, source_id, description, created_at). Background job to detect and log proof points from existing feature activity. Efficacy-dip detection logic (pattern analysis on recent Mirror/Turn language). Evidence surfacing endpoint for contextual AI integration.
**Mobile work:** Evidence Wall view in "You" tab (SVG mosaic grid, color-coded by source). Timeline toggle view. Evidence count badges. Contextual evidence card component for use within Mirror/Turn sessions.
---
## 2.4 Deep Technical Workstreams
### 2.4.1 Prompt and Output Contracts
Create strict prompt templates and JSON output contracts per feature: Mirror fragment detection, Mirror inline reframe, Turn multi-pattern output, Lens daily focus output, Rehearsal visualization script, Evidence Wall contextual surfacing. Require server-side validation of AI output shape before returning to clients.
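For the Turn contract, server-side shape validation might look like the sketch below. The field names (`patterns`, `microAction`) are assumptions for illustration, and production code would likely use a schema library rather than hand-rolled checks; the point is that nothing reaches the client until the shape is confirmed.

```typescript
interface TurnOutput {
  patterns: string[];   // exactly 3 reframed perspectives
  microAction: string;  // must follow the literal "If ... then ..." format
}

// Returns the typed payload on success, or null so the caller can retry
// generation or fall back to static copy instead of breaking the UI.
function validateTurnOutput(raw: unknown): TurnOutput | null {
  if (typeof raw !== "object" || raw === null) return null;
  const o = raw as Record<string, unknown>;
  const patterns = o.patterns;
  if (!Array.isArray(patterns) || patterns.length !== 3) return null;
  if (!patterns.every((p) => typeof p === "string" && p.length > 0)) return null;
  if (typeof o.microAction !== "string" || !/^If\b/i.test(o.microAction)) return null;
  return { patterns: patterns as string[], microAction: o.microAction };
}
```

A null result feeds directly into the Phase 2 risk mitigation of "strict response schema validation and fallback copy".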
### 2.4.2 Safety Integration
At this phase safety must be complete for user-facing flows: all Mirror and Turn requests pass safety gate, crisis response path returns resource payload (not reframe payload), safety events are queryable for audit.
### 2.4.3 Entitlement Enforcement
Enforce in API middleware: free turn daily limits, free mirror weekly limits, spectrum endpoint lock for non-entitled users. Add clear response codes and client UI handling for plan limits.
### 2.4.4 Performance Targets
Set targets now and test against them: Mirror fragment detection p95 under 3.5s, Turn generation p95 under 3.5s, client screen transitions under 300ms for cached navigation.
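Checking those targets requires a percentile helper in the test suite. A minimal nearest-rank p95, with illustrative sample data:

```typescript
// Nearest-rank p95: the smallest sample such that at least 95% of samples
// are less than or equal to it.
function p95(samplesMs: number[]): number {
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil(0.95 * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}
```

Recording per-request latencies in load tests and asserting `p95(samples) < 3500` turns the targets above into an automated gate rather than a manual check.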
---
## 2.5 Build Plan
**Week 1 (Week 4 overall):**
- Finish Mirror backend and basic mobile UI
- Complete fragment highlight rendering
**Week 2 (Week 5 overall):**
- Finish inline reframe flow and session reflections
- Add Mirror history and session detail view
**Week 3 (Week 6 overall):**
- Finish Turn backend and mobile flow
- Add save/history integration
**Week 4 (Week 7 overall):**
- Finish Lens goals and daily actions
- Add daily focus/affirmation flow
- Build Rehearsal backend + mobile UI (Lens sub-feature)
**Week 5 (Week 8 overall):**
- Build Ritual backend (config, completions, consistency tracking)
- Build Ritual mobile UI (single-flow screen, SVG progress, setup flow)
- Build Evidence Wall backend (proof point collection job, evidence_points table)
**Week 6 (Week 9 overall):**
- Build Evidence Wall mobile UI (mosaic view, timeline, contextual card)
- Wire Evidence Wall contextual surfacing into Mirror/Turn sessions
- Integrate Ritual into onboarding flow
**Week 7 (Week 10 overall — hardening):**
- Optimize latency across all features
- Improve retry and offline handling
- Run end-to-end QA pass across all flows including Ritual → Mirror → Turn → Lens → Evidence
---
## 2.6 Test Plan
**Unit tests:** Prompt builder functions, AI output validators, entitlement middleware, safety decision functions.
**Integration tests:** Full Mirror message lifecycle, full Turn generation lifecycle, Lens action completion lifecycle.
**Manual QA matrix:** Normal usage, plan-limit blocked usage, low-connectivity behavior, crisis-language safety behavior.
## 2.7 Phase 2 Deliverables
**Functional:** Mirror v1 complete, Turn v1 complete, Lens v1 complete (with Rehearsal), Ritual v1 complete, Evidence Wall v1 complete, Gallery/history v1 complete.
**Engineering:** Stable endpoint contracts, documented prompt versions, meaningful test coverage for critical flows, feature-level latency and error metrics.
## 2.8 Phase 2 Exit Criteria
You can exit Phase 2 when: users can complete Mirror → Turn → Lens flow end-to-end, Ritual sequences features into a single daily flow, Rehearsal generates process-oriented visualization scripts, Evidence Wall collects and surfaces proof points, plan limits and safety behavior are consistent and test-backed, no critical P0 bugs in core user paths, telemetry confirms baseline latency and reliability targets.
## 2.9 Phase 2 Risks
- **Output variability from model causes UI breakage.** Mitigation: strict response schema validation and fallback copy.
- **Too much feature scope in one pass.** Mitigation: ship v1 flows first, defer advanced UX polish.
- **Latency drift from complex prompts.** Mitigation: simplify prompts and use cached static context.
---
---
# Cross-Phase Reference
## Combined Timeline
| Week | Phase | Focus |
|------|-------|-------|
| 1 | Phase 1 | Schema, auth, profile endpoints |
| 2 | Phase 1 | Entitlements, AI gateway, rate limits |
| 3 | Phase 1 | Feature skeletons, safety, observability |
| 4 | Phase 2 | Mirror backend + mobile UI |
| 5 | Phase 2 | Mirror inline reframes + history |
| 6 | Phase 2 | Turn backend + mobile flow |
| 7 | Phase 2 | Lens goals + daily actions + Rehearsal |
| 8 | Phase 2 | Ritual (backend + mobile + onboarding) + Evidence Wall backend |
| 9 | Phase 2 | Evidence Wall mobile + contextual surfacing integration |
| 10 | Phase 2 | Hardening, latency, end-to-end QA |
## Dependencies
- Phase 2 Mirror depends on Phase 1 AI gateway + safety precheck
- Phase 2 entitlement enforcement depends on Phase 1 plan gating middleware
- Phase 2 Lens daily actions depend on Phase 1 AI gateway being stable
- All Phase 2 features depend on Phase 1 observability for debugging
## Combined API Surface (End of Phase 2)
**Auth:** register, login, refresh, logout, me
**Billing:** entitlements, App Store + Google Play webhooks
**Mirror:** create session, send message (with fragment detection), close session (with reflection), list sessions, session detail
**Turn:** create turn (with 3 patterns + micro-action), save/unsave, list history
**Lens:** create goal, generate daily actions, complete action, daily affirmation
**Rehearsal:** generate visualization script (per goal), list rehearsal history
**Ritual:** create/update ritual config, start ritual session, complete ritual, list completions, context consistency stats
**Evidence Wall:** list proof points (filterable by type/source), get contextual evidence (for AI surfacing)
**Gallery:** list saved turns + mirror reflections + rehearsal/ritual patterns (unified history)


@@ -0,0 +1,167 @@
# Phase 3 - Launch Readiness and Hardening
Duration: 2-4 weeks
Primary owner: Full stack + operations focus
## 1. Objective
Prepare Kalei for real users with production safeguards:
- safety policy completion and crisis flow readiness
- subscription and entitlement reliability
- app and API operational stability
- privacy and compliance basics for app store approval
## 2. Entry Criteria
Phase 2 exit checklist complete.
## 3. Scope
## 3.1 Safety and Trust Hardening
Tasks:
1. finalize crisis keyword and pattern sets
2. validate crisis response templates and regional resources
3. add safety dashboards and alerting
4. add audit trail for safety decisions
Validation goals:
- crisis path returns under 1 second in most cases
- no crisis path returns reframing output
## 3.2 Billing and Entitlements
Tasks:
1. complete App Store Server Notifications ingestion
2. complete Google Play RTDN ingestion
3. build reconciliation jobs for both stores (entitlements sync)
4. test expired, canceled, trial, billing retry, and restore scenarios
5. add paywall gating in all required clients
Validation goals:
- entitlement state converges within minutes after billing changes
- no premium endpoint access for expired plans
## 3.3 Reliability Engineering
Tasks:
1. finalize health checks and readiness probes
2. add backup and restore procedures for Postgres
3. add Redis persistence strategy for critical counters if required
4. define incident severity levels and on-call workflow
Validation goals:
- verified DB restore from backup in staging
- runbook exists for API outage, DB outage, AI provider outage
## 3.4 Security and Compliance Baseline
Tasks:
1. secrets rotation policy and documented process
2. verify transport security and secure headers
3. verify account deletion and data export flows
4. prepare privacy policy and terms for submission
Validation goals:
- basic security checklist signed off
- app store privacy disclosures map to real data flows
## 3.5 Observability and Cost Control
Tasks:
1. define alerts for latency, error rate, and AI spend thresholds
2. implement monthly spend cap and automatic degradation rules
3. monitor feature-level token cost dashboards
Validation goals:
- alert thresholds tested in staging
- degradation path verified (Lens fallback first)
## 3.6 Beta and Release Pipeline
Tasks:
1. set up TestFlight internal/external testing
2. set up Android internal testing track
3. run beta cycle with scripted feedback collection
4. triage and fix launch-blocking defects
Validation goals:
- no unresolved launch-blocking defects
- release checklist complete for both stores
## 4. Suggested Execution Plan
Week 1:
- safety hardening and billing reconciliation
- initial reliability runbooks
Week 2:
- security/compliance checks
- backup and restore drills
- full observability alert tuning
Week 3:
- TestFlight and Play internal beta
- defect triage and fixes
Week 4 (if needed):
- final store submission materials
- go/no-go readiness review
## 5. Release Checklists
## 5.1 API release checklist
- migration plan reviewed
- rollback plan documented
- dashboards green
- error budget acceptable
## 5.2 Mobile release checklist
- build reproducibility verified
- crash-free session baseline from beta acceptable
- paywall and entitlement states correct
- copy and metadata final
## 5.3 Business and policy checklist
- privacy policy URL live
- terms URL live
- support contact available
- crisis resources configured for launch regions
## 6. Exit Criteria
You can exit Phase 3 when:
- app is store-ready with stable entitlement behavior
- safety flow is verified and monitored
- operations runbooks and alerts are live
- backup and restore are proven in practice
## 7. Risks To Watch
- Risk: entitlement mismatch from webhook delays.
- Mitigation: scheduled reconciliation and idempotent webhook handling.
- Risk: launch-day AI latency spikes.
- Mitigation: timeout limits and graceful fallback behavior.
- Risk: compliance gaps discovered late.
- Mitigation: complete privacy mapping before store submission.
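The idempotent webhook handling named in the first mitigation above reduces to a dedupe on the store's notification id. A minimal sketch (in-memory set standing in for a persisted processed-events table; names are illustrative):

```typescript
// Stand-in for a persisted processed_events table keyed by notification id.
const processed = new Set<string>();

// Apply a billing event at most once; replays are acknowledged but ignored,
// so the store can safely redeliver without corrupting entitlement state.
function ingestBillingEvent(eventId: string, apply: () => void): "applied" | "duplicate" {
  if (processed.has(eventId)) return "duplicate";
  processed.add(eventId);
  apply();
  return "applied";
}
```

Both App Store Server Notifications and Google Play RTDN can redeliver events, so the duplicate path is a normal case, not an error.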


@@ -0,0 +1,158 @@
# Phase 4 - Spectrum and Scale
Duration: 3-6 weeks
Primary owner: Data + backend + product analytics
## 1. Objective
Deliver Phase 2 intelligence features and scaling maturity:
- Spectrum weekly and monthly insights
- aggregated analytics model over user activity
- asynchronous jobs and batch processing
- cost, reliability, and scaling controls for growth
## 2. Entry Criteria
Phase 3 exit checklist complete.
## 3. Scope
## 3.1 Spectrum Data Foundation
Implement tables and data flow for:
- session-level emotional vectors
- turn-level impact analysis
- weekly aggregates
- monthly aggregates
Data design requirements:
- user-level partition/index strategy for query speed
- clear retention and deletion behavior
- exclusion flags so users can omit sessions from analysis
## 3.2 Aggregation Pipeline
Build asynchronous jobs:
1. post-session analysis job
2. weekly aggregation job
3. monthly narrative job
Job engineering requirements:
- idempotency keys
- retry with backoff
- dead-letter queue for failures
- metrics for queue depth and job duration
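The retry and dead-letter requirements above can be sketched as follows. This is an illustrative synchronous driver, not a worker framework: backoff delays are computed rather than slept so the logic is testable, and all names are assumptions.

```typescript
interface DeadLetter { jobId: string; error: string }

// Stand-in for the dead-letter queue; production uses a durable queue.
const deadLetterQueue: DeadLetter[] = [];

// Exponential backoff schedule: base, 2x base, 4x base, ...
function backoffDelaysMs(maxRetries: number, baseMs = 500): number[] {
  return Array.from({ length: maxRetries }, (_, i) => baseMs * 2 ** i);
}

// Run a job with bounded retries; on exhaustion, dead-letter it for
// inspection instead of retrying forever or dropping it silently.
function runJob(jobId: string, work: () => void, maxRetries = 3): boolean {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      work();
      return true;
    } catch (err) {
      if (attempt === maxRetries) {
        deadLetterQueue.push({ jobId, error: String(err) });
        return false;
      }
      // A real worker would sleep backoffDelaysMs(maxRetries)[attempt] here.
    }
  }
  return false;
}
```

Idempotency keys (the first requirement) matter precisely because of this loop: a retried job may re-run work that partially succeeded.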
## 3.3 Spectrum Insight Generation
Implement AI-assisted summary generation using aggregated data only.
Rules:
- do not include raw user text in generated insights by default
- validate output tone and safety constraints
- version prompts and track prompt revisions
## 3.4 Spectrum API and Client
Backend endpoints:
- weekly insight feed
- monthly deep dive
- spectrum reset
- exclusions management
Mobile screens:
- emotional landscape view
- pattern distribution view
- insight feed cards
- monthly summary panel
## 3.5 Growth-Ready Scale Controls
Implement scale milestones:
- worker isolation from interactive API if needed
- database optimization and index tuning
- caching strategy for read-heavy insight endpoints
- cost-aware model routing for non-critical generation
## 4. Detailed Execution Plan
Week 1:
- schema rollout for spectrum tables
- event ingestion hooks from Mirror/Turn/Lens
Week 2:
- implement post-session analysis and weekly aggregation jobs
- add metrics and retries
Week 3:
- implement monthly aggregation and narrative generation
- implement spectrum API endpoints
Week 4:
- mobile spectrum dashboard v1
- push notification hooks for weekly summaries
Week 5-6 (as needed):
- performance tuning
- scale and cost optimization
- UX polish for insight comprehension
## 5. Quality and Analytics Requirements
Quality gates:
- no raw-content leakage in Spectrum UI
- weekly job completion SLA met
- dashboard load times within agreed target
Analytics requirements:
- track spectrum engagement events
- track conversion impact from spectrum teaser to upgrade
- track retention lift for spectrum users vs non-spectrum users
## 6. Deliverables
Functional deliverables:
- Spectrum dashboard v1
- weekly and monthly insight generation
- user controls for exclusions and reset
Engineering deliverables:
- robust worker pipeline with retries and DLQ
- aggregated analytics tables with indexing strategy
- end-to-end observability for job health and costs
## 7. Exit Criteria
You can exit Phase 4 when:
- weekly and monthly insights run on schedule reliably
- users can view, reset, and control analysis scope
- spectrum cost and performance stay inside defined envelopes
- data deletion behavior is verified for raw and derived records
## 8. Risks To Watch
- Risk: analytics pipeline complexity causes reliability issues.
- Mitigation: isolate workers and enforce idempotent jobs.
- Risk: insight quality is too generic.
- Mitigation: prompt iteration with rubric scoring and blinded review.
- Risk: costs drift with growing history windows.
- Mitigation: aggregate-first processing and strict feature budget controls.


@@ -0,0 +1,347 @@
# Kalei — The Science Behind Every Turn
### How Peer-Reviewed Research Powers Every Feature in the App
**Version:** 1.0 · February 2026
**Purpose:** Map every Kalei feature to its scientific foundation, ensuring real cognitive science is woven into the user journey — not as decoration, but as structural integrity.
---
## Why This Document Exists
Kalei's core differentiator is that it treats "manifestation" as what it actually is: a structured psychological process operating through known cognitive and behavioral mechanisms. Every feature in the app should be traceable back to published, peer-reviewed research. This document is the bridge between our research library and the product — a reference for anyone building, writing, or designing any part of Kalei.
The rule is simple: **if we can't cite it, we don't claim it.**
---
## The Research Library — 8 Pillars
Our research base spans 18 peer-reviewed papers across 8 scientific domains. Each domain maps directly to one or more of Kalei's 6 Steps and core features.
| # | Research Pillar | Papers | Primary Kalei Feature(s) |
|---|---|---|---|
| 1 | Goal Setting & Implementation | 4 | Step 1: Decide (Clarity) · Step 5: Act in Alignment · The Lens |
| 2 | Visualization & Mental Imagery | 3 | Step 2: See It · Guided Visualizations |
| 3 | Self-Efficacy | 1 | Step 3: Believe It's Possible (But Not Guaranteed) · The Turn |
| 4 | Attention & Neuroscience | 3 | Step 4: Notice Differently · The Mirror · The Reframer |
| 5 | Habit Formation | 2 | Step 6: Repeat and Compound · Streaks & Rituals |
| 6 | Placebo & Expectation Effects | 2 | Overall framework · Onboarding · Belief calibration |
| 7 | Social Networks | 1 | Future community features · The Spectrum |
| 8 | Self-Regulation & Feedback Loops | 2 | The Guide · Goal Check-Ins · Weekly Pulse · Cross-Feature Coaching |
---
## Pillar 1: Goal Setting & Implementation
### The Science
**Locke & Latham (2002)** · *Building a Practically Useful Theory of Goal Setting and Task Motivation: A 35-Year Odyssey*
The foundational paper on goal-setting theory, drawn from over 35 years of research. Core finding: specific, challenging goals consistently lead to higher performance than vague "do your best" goals. The mechanism works through four channels — goals direct attention, energize effort, increase persistence, and promote the discovery of task-relevant strategies.
**Locke & Latham (2006)** · *New Directions in Goal-Setting Theory*
Extends the original theory with moderators and mediators. Goals work best when paired with high self-efficacy, feedback loops, and commitment. Critically, goal complexity matters — overly complex goals without adequate learning time can backfire. This informs how Kalei scaffolds goals progressively rather than demanding perfection upfront.
**Gollwitzer (1999)** · *Implementation Intentions: Strong Effects of Simple Plans*
The landmark paper on "if-then" planning. When people form implementation intentions ("If situation X arises, I will do Y"), follow-through rates increase dramatically — in some studies doubling or tripling goal attainment. The mechanism: implementation intentions create strong mental links between situational cues and planned responses, effectively delegating action initiation to environmental triggers rather than relying on willpower.
**Gollwitzer & Sheeran** · *Implementation Intentions*
A comprehensive overview confirming that implementation intentions are most effective when self-regulatory problems threaten goal striving and when backed by strong, activated goal intentions. The if-then format works because it puts people in a position to both *see* and *seize* opportunities to act.
### Where It Lives in Kalei
**Step 1: Decide (Clarity)** — The Lens feature guides users through defining exactly what they want, using AI to help them move from vague wishes ("I want to be happier") to specific, challenging goals ("I want to complete a 5K run by June, training 3x per week"). The AI draws on Locke & Latham's specificity principle: the more precise the goal, the more it directs attention and effort.
**Step 5: Act in Alignment** — Every micro-action the AI generates follows Gollwitzer's if-then format. Instead of "exercise more," Kalei produces: "If it's 7am on Monday/Wednesday/Friday, then I put on my running shoes and walk out the front door." This isn't a style choice — it's the most empirically validated action-planning format in psychology.
**The Lens (Manifestation Engine)** — The entire goal-creation flow is structured around Locke & Latham's principles: clarity of outcome, challenge calibration (not too easy, not impossibly hard), commitment rituals, and built-in feedback loops through progress tracking.
**Design implications:**
- Goal inputs should guide toward specificity (prompts, not blank fields)
- Challenge level should be calibrated — the AI should push back on goals that are too vague or too easy
- Implementation intentions should always use the literal "If... then..." structure
- Progress feedback should be frequent and visible
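Since the design implications above require the literal "If... then..." structure, the AI output layer can enforce it with a tiny formatter rather than trusting free-form generation (function and parameter names are illustrative):

```typescript
// Assemble a micro-action in the literal implementation-intention format,
// linking a situational cue to a planned response (Gollwitzer, 1999).
function implementationIntention(cue: string, response: string): string {
  return `If ${cue}, then ${response}.`;
}
```

Prompting the model for the cue and response as separate fields, then formatting server-side, guarantees every micro-action ships in the validated format.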
---
## Pillar 2: Visualization & Mental Imagery
### The Science
**Schuster et al. (2011)** · *Best Practice for Motor Imagery: A Systematic Literature Review*
A massive cross-disciplinary review across education, medicine, music, psychology, and sports. Key finding: motor imagery (mentally rehearsing actions) is most effective when combined with physical practice, when sessions are structured with clear protocols, and when the imagery is vivid and first-person. Pure fantasy without behavioral specificity doesn't work — the visualization must be *process-oriented*, not just outcome-oriented.
**Liu et al. (2025)** — *Effects of Imagery Practice on Athletes' Performance: A Multilevel Meta-Analysis*
A meta-analysis of 86 studies with 3,593 athletes confirming that imagery practice enhances performance across agility, strength, and sport-specific skills. The optimal dosage: approximately 10 minutes, 3 times per week, over about 100 days. Combining imagery with 1-2 additional psychological skills (like self-talk or goal setting) produces stronger effects than imagery alone.
**Seok & Choi (2023)** — *The Impact of Mental Practice on Motor Function in Patients With Stroke*
A systematic review and meta-analysis demonstrating that mental practice facilitates motor recovery in stroke patients — evidence that visualization activates overlapping neural circuits with actual physical execution, even when the body cannot currently perform the action.
### Where It Lives in Kalei
**Step 2: See It (Mental Rehearsal)** — Kalei generates personalized visualization scripts that guide users through mentally rehearsing the *process* of achieving their goal, not just imagining the end state. This distinction is critical: the research shows process visualization (imagining yourself studying, training, preparing) outperforms outcome visualization (imagining yourself on the podium).
**Guided Visualization Sessions** — Following Schuster et al.'s best-practice findings, Kalei's visualization prompts are first-person, sensory-rich, and process-focused. The AI asks users to engage multiple senses: what do you see, hear, feel? The recommended frequency (Liu et al.'s finding of ~10 minutes, 3x/week) informs the suggested cadence of visualization reminders.
**Design implications:**
- Visualization scripts must be process-oriented, not just outcome fantasy
- First-person perspective, multi-sensory detail
- Sessions should be ~10 minutes, suggested 3x per week
- Combine visualization with goal-setting and self-talk elements for maximum effect
- Never present visualization as sufficient alone — always pair with action steps
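The dosage finding above translates directly into a reminder heuristic. A minimal sketch, assuming a hypothetical `SessionLog` shape and a trailing 7-day window:

```typescript
// Sketch of a visualization-cadence check based on Liu et al.'s
// ~10 minutes, 3x/week finding. The >= 8-minute qualifying bar is
// an illustrative assumption (allows slightly short sessions).

interface SessionLog {
  endedAt: number; // ms epoch
  minutes: number;
}

const TARGET_PER_WEEK = 3;
const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

// Count qualifying sessions in the trailing 7 days and report
// how many are still due before the weekly target is met.
function sessionsDueThisWeek(logs: SessionLog[], now: number): number {
  const recent = logs.filter(
    (s) => now - s.endedAt <= WEEK_MS && s.minutes >= 8
  );
  return Math.max(0, TARGET_PER_WEEK - recent.length);
}
```

A reminder scheduler would only nudge while `sessionsDueThisWeek` is above zero, so the cadence never nags past the evidence-backed dose.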
---
## Pillar 3: Self-Efficacy
### The Science
**Bandura (1977)** — *Self-Efficacy: Toward a Unifying Theory of Behavioral Change*
One of the most cited papers in all of psychology. Bandura's core claim: the belief in one's *capability* to execute specific behaviors is the strongest predictor of whether someone will attempt, sustain, and succeed at a goal. Self-efficacy is not general confidence — it's domain-specific belief that "I can do this particular thing."
Four sources build self-efficacy, in order of potency:
1. **Mastery experiences** — successfully doing the thing (strongest source)
2. **Vicarious experience** — watching someone similar succeed
3. **Verbal persuasion** — being told you can do it (weakest but still real)
4. **Physiological states** — interpreting your emotional/physical state as capability vs. inadequacy
Critically, Bandura distinguishes *efficacy expectations* (I can do it) from *outcome expectations* (doing it will produce results). Both matter, but self-efficacy is the bottleneck — people don't attempt what they don't believe they can execute.
### Where It Lives in Kalei
**Step 3: Believe It's Possible — But Not Guaranteed** — This is the philosophical soul of Kalei. The app explicitly rejects certainty-based belief ("the universe will provide") in favor of Bandura's capability-based belief ("I have or can develop the skills to make this happen"). This single distinction separates Kalei from every magical-thinking manifestation app on the market.
**The Turn (Reframing Engine)** — When users submit a negative thought, the AI reframe is designed to build self-efficacy, not just provide comfort. A good reframe should help users:
- Recognize past mastery experiences ("You've handled difficult things before — remember when...")
- Reinterpret physiological states ("That anxiety isn't proof you can't do this — it's your body preparing to perform")
- Shift from outcome fixation to capability focus ("You can't control whether you get the job, but you can control how well you prepare")
**Onboarding & Belief Calibration** — The coaching style selection (brutal honesty, gentle guidance, logical analysis, etc.) maps to Bandura's verbal persuasion channel. Different people respond to different persuasion styles. A skeptic needs logical arguments for capability; someone more emotionally oriented needs warmth and encouragement. The coaching style personalizes the persuasion channel.
**Design implications:**
- Reframes should target capability belief, never promise outcomes
- Track and surface mastery experiences ("You've completed 12 micro-actions this week")
- Coaching tone selection = personalizing the verbal persuasion channel
- Never say "you will succeed" — say "you have what it takes to give this your best shot"
- Celebrate effort and execution, not just outcomes
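Surfacing mastery experiences can be as simple as counting recent completions. A minimal sketch — the `CompletedAction` shape, the threshold, and the copy are all illustrative assumptions:

```typescript
// Sketch of surfacing mastery experiences (Bandura's strongest
// efficacy source) from completed micro-actions.

interface CompletedAction {
  completedAt: number; // ms epoch
  label: string;
}

function masteryMessage(actions: CompletedAction[], now: number): string | null {
  const weekMs = 7 * 24 * 60 * 60 * 1000;
  const thisWeek = actions.filter((a) => now - a.completedAt <= weekMs).length;
  // Only surface once there is real evidence to point at.
  if (thisWeek < 3) return null;
  return `You've completed ${thisWeek} micro-actions this week. That's mastery evidence, not luck.`;
}
```

Note the design choice: the message cites the user's own executed behavior, never a promised outcome, which keeps the copy inside Bandura's capability framing.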
---
## Pillar 4: Attention & Neuroscience
### The Science
**Yantis (2008)** — *The Neural Basis of Selective Attention: Cortical Sources and Targets of Attentional Modulation*
Selective attention is an intrinsic component of how the brain processes reality. Modulatory signals from frontal and parietal cortex amplify neural responses to relevant information and suppress irrelevant inputs. What you attend to literally changes what your brain represents — attention isn't just noticing, it's constructing your experienced reality.
**Stevens & Bavelier (2012)** — *The Role of Selective Attention on Academic Foundations*
Attention is trainable. This paper demonstrates that selective attention underlies learning, memory, and skill acquisition. Crucially, attention training transfers — improving attentional control in one domain can enhance performance broadly. The brain's attentional system is plastic and responsive to practice.
**Koch & Tsuchiya** — *Attention and Consciousness: Two Distinct Brain Processes*
A critical theoretical paper distinguishing attention from consciousness. You can attend to things without being conscious of them, and you can be conscious of things without attending to them. This matters for Kalei because it means that training attention (a controllable process) can shift what enters conscious awareness (what feels like "reality") — without requiring mystical explanations.
### Where It Lives in Kalei
**Step 4: Notice Differently** — After setting a goal and building belief, Kalei trains users to notice differently. This isn't "the universe sending signs" — it's Yantis's selective attention at work. When you define a goal, your brain's attentional filters begin prioritizing goal-relevant information. Opportunities that were always there become visible because your attentional system is now tuned to detect them.
**The Mirror (Freeform Notebook)** — The Mirror feature directly applies attentional science. As users write freely, Kalei's AI detects cognitive distortion patterns (catastrophizing, black-and-white thinking, etc.) and gently highlights them. This is attention training in action: the AI acts as an external attentional spotlight, pointing at patterns the user's own attentional system has habituated to and therefore stopped noticing.
**The Reframer's Pattern Analysis** — Over time, the app analyzes which cognitive distortions appear most frequently in a user's Turns and Mirror sessions. This longitudinal attention data helps users see their own attentional biases — "You tend to catastrophize most on Sunday evenings" — turning unconscious attentional habits into conscious, addressable patterns.
**Design implications:**
- Frame "noticing" in neurological terms, never mystical ones
- The Mirror's highlighting is literally externalized attentional modulation
- Pattern analytics should reveal attentional biases over time
- Use language like "your brain is filtering for this now" not "the universe is showing you signs"
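The Mirror's highlighting reduces to producing character spans over the user's text. The real detector is an AI model; the keyword sketch below only illustrates the highlight data shape, and the patterns and distortion names are assumptions:

```typescript
// Naive keyword sketch of the Mirror's distortion highlighting.
// Returns character spans the UI can underline with the amber glow.

interface Highlight {
  distortion: string;
  start: number; // char offset into the text
  end: number;
}

const PATTERNS: Array<[string, RegExp]> = [
  ["all-or-nothing", /\b(always|never|everything|nothing)\b/gi],
  ["catastrophizing", /\b(ruined|disaster|the worst)\b/gi],
];

function detectFragments(text: string): Highlight[] {
  const hits: Highlight[] = [];
  for (const [distortion, re] of PATTERNS) {
    for (const m of text.matchAll(re)) {
      hits.push({ distortion, start: m.index!, end: m.index! + m[0].length });
    }
  }
  return hits.sort((a, b) => a.start - b.start);
}
```

Because detection returns spans rather than rewritten text, highlights can be rendered post-message without ever altering what the user wrote.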
---
## Pillar 5: Habit Formation
### The Science
**Wood & Neal (2007)** — *A New Look at Habits and the Habit-Goal Interface*
Habits form through the gradual learning of associations between responses and context features (physical settings, time of day, preceding actions). Once formed, perception of the context triggers the habitual response *without a mediating goal* — the behavior becomes automatic. Goals can direct habit formation (by motivating repetition) but once habits are established, they run on context cues, not intentions.
**Wood, Mazar & Neal (2021)** — *Habits and Goals in Human Behavior: Separate but Interacting Systems*
Extends the 2007 model: habits and goals are separate cognitive systems that interact. ~43% of daily behavior is habitual. Habit change requires disrupting the context-response link — either by changing contexts, or by introducing friction into the habitual response. For building new habits, the key is consistent repetition in stable contexts until the behavior becomes automatic.
### Where It Lives in Kalei
**Step 6: Repeat and Compound** — The final step in Kalei's manifestation system is explicitly about habit formation. The app helps users build daily rituals — consistent, context-anchored micro-actions that compound over time. The AI generates context-specific triggers ("Every morning after your first coffee, open Kalei and do one Turn") because Wood's research shows that context stability is the single biggest predictor of habit formation.
**Streaks & Ritual Tracking** — Kalei's streak system isn't gamification for its own sake — it's measuring the repetition that Wood et al. show is necessary for habit crystallization. The app tracks not just frequency but context consistency ("You've done your morning Turn at roughly the same time for 18 days — this is becoming automatic").
**The Mirror as Habitual Practice** — Regular Mirror sessions train the habit of self-reflection. Over time, the act of writing and examining thoughts becomes an automatic response to stress or uncertainty, rather than requiring conscious effort each time.
**Design implications:**
- Always pair actions with specific context cues (time, location, preceding action)
- Streak tracking should emphasize context consistency, not just count
- Frame habit formation as ~66 days of consistent repetition (Lally et al.'s median)
- Celebrate automaticity milestones ("This is becoming second nature")
- When habits break, help users rebuild the context-response link rather than relying on willpower
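Context consistency can be measured directly. A minimal sketch that counts consecutive days where the ritual landed within a tolerance window of the user's anchor time; the data shape and the 90-minute tolerance are assumptions:

```typescript
// Sketch of context-consistent streak tracking (Wood & Neal):
// a day counts only if the ritual happened near the anchor time.

const DAY_MIN = 24 * 60;

// `minutesOfDay[i]` is the minute-of-day the ritual happened on
// day i (most recent last), or null if the day was missed.
function contextConsistentStreak(
  minutesOfDay: Array<number | null>,
  anchorMinute: number,
  toleranceMin = 90
): number {
  let streak = 0;
  for (let i = minutesOfDay.length - 1; i >= 0; i--) {
    const m = minutesOfDay[i];
    if (m === null) break;
    // Circular distance handles anchors near midnight.
    const d = Math.abs(m - anchorMinute);
    if (Math.min(d, DAY_MIN - d) > toleranceMin) break;
    streak++;
  }
  return streak;
}
```

This is why the streak copy can say "at roughly the same time for 18 days": the counter breaks not just on missed days but on off-context days, which plain frequency counting would miss.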
---
## Pillar 6: Placebo & Expectation Effects
### The Science
**Pardo-Cabello et al. (2022)** — *Placebo: A Brief Updated Review*
A comprehensive review of placebo/nocebo effects across medicine. The placebo effect has been observed across multiple medical conditions and administration routes. Key finding: expectations directly influence physiological and behavioral outcomes. The doctor-patient relationship (or in Kalei's case, the app-user relationship) is the most important factor in whether expectation effects materialize. The psycho-neurobiological mechanisms are real and measurable.
**Stetler (2014)** — *Adherence, Expectations, and the Placebo Response*
Investigates why adherence to even inert treatments produces health benefits. The model: initial expectations shape behavior and physiological responses, and consistent adherence reinforces those expectations in a positive feedback loop. This is not "it's all in your head" — it's "what's in your head measurably changes what happens in your body and behavior."
### Where It Lives in Kalei
**The Overall Framework** — Kalei's entire approach leverages expectation effects honestly. We don't hide the mechanism — we explain it. Telling users "structured positive expectation, when combined with action, measurably improves outcomes" is both scientifically accurate and itself a form of positive expectation setting. The transparency is the feature.
**Onboarding & Science Education** — When users first encounter Kalei, the app explains *why* it works, citing real research. This serves two purposes: (1) it builds credibility with our skeptic-friendly audience, and (2) it primes legitimate expectation effects. Understanding that these mechanisms are real makes them more effective, not less — unlike placebos in medicine, where disclosure sometimes weakens the effect, in behavioral change, understanding the mechanism often strengthens engagement.
**Belief Calibration** — Stetler's finding about adherence reinforcing expectations informs Kalei's emphasis on daily practice. The more consistently users engage, the stronger their expectation of benefit becomes, which in turn increases the actual benefit. This is not circular logic — it's a documented feedback loop.
**Design implications:**
- Always explain the science behind features — transparency strengthens engagement
- The app-user relationship quality matters (tone, personalization, responsiveness)
- Consistent engagement creates a positive expectation-behavior feedback loop
- Frame this honestly: "Expectation shapes behavior. We're using that deliberately and transparently."
- Never hide the mechanism or pretend Kalei works through unknown forces
---
## Pillar 7: Social Networks & Community
### The Science
**Granovetter (1973)** — *The Strength of Weak Ties*
One of the most influential papers in sociology. Granovetter demonstrates that transformative opportunities — new jobs, novel information, unexpected connections — come disproportionately from *weak ties* (acquaintances, distant contacts) rather than strong ties (close friends, family). Strong ties tend to share overlapping information; weak ties bridge different social worlds and provide access to non-redundant resources.
### Where It Lives in Kalei
**Future Community Features (The Spectrum)** — When Kalei eventually adds social features, Granovetter's weak-ties theory informs the design. Anonymous sharing of reframes, goals, and breakthroughs creates a network of weak ties — users inspiring other users they'll never meet. The value isn't in building close friendships within the app (strong ties) but in being unexpectedly inspired by someone else's Turn on a problem you share (weak ties).
**Design implications:**
- Community features should optimize for weak-tie connections (diverse, anonymous, cross-context)
- Don't try to build a social network — build a constellation of shared perspectives
- Anonymous or pseudonymous sharing preserves the weak-tie benefit (no obligation, no social pressure)
- Surface unexpected resonance: "42 other people Turned a similar thought this week"
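The resonance counter can be prototyped with plain token overlap. Production matching would use embeddings over anonymized Turns; this Jaccard sketch and its threshold are illustrative assumptions:

```typescript
// Sketch of the "N other people Turned a similar thought" count
// using token-set Jaccard similarity as a stand-in for embeddings.

function tokens(s: string): Set<string> {
  return new Set(s.toLowerCase().match(/[a-z']+/g) ?? []);
}

function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((t) => b.has(t)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : inter / union;
}

function resonanceCount(
  myThought: string,
  others: string[],
  threshold = 0.3
): number {
  const mine = tokens(myThought);
  return others.filter((o) => jaccard(mine, tokens(o)) >= threshold).length;
}
```

Only the aggregate count would ever surface to a user, which preserves the anonymous, no-obligation character of weak ties.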
---
## Pillar 8: Self-Regulation & Feedback Loops
### The Science
**Carver & Scheier (1998)** — *On the Self-Regulation of Behavior*
The foundational text on self-regulation theory. Carver & Scheier model all goal-directed behavior as a feedback loop — the "test-operate-test-exit" (TOTE) cycle. People continuously compare their current state to a reference value (their goal), and when a discrepancy is detected, they take corrective action. The cycle repeats until the discrepancy is eliminated or the person disengages.
Critically, the theory shows that **without feedback, self-regulation fails.** People cannot close the gap between where they are and where they want to be if they have no mechanism for detecting the gap. This is why goals without progress monitoring produce no better outcomes than no goals at all. The feedback loop is not optional — it is the mechanism through which goals produce behavior change.
Carver & Scheier also distinguish between two types of feedback loops: **discrepancy-reducing** (moving toward a goal) and **discrepancy-enlarging** (moving away from a threat). Both are relevant to Kalei: the Lens operates primarily through discrepancy-reducing loops (closing the gap toward a desired state), while the Mirror and Turn operate through discrepancy-enlarging loops (moving away from distorted thinking patterns).
**Locke & Latham (2006)** — *New Directions in Goal-Setting Theory* (extended application)
While primarily cited in Pillar 1, Locke & Latham's 2006 paper explicitly identifies **feedback** as a necessary moderator of goal effectiveness. Goals combined with feedback produce significantly better performance than goals alone. The feedback must be: (1) timely — delivered close to the relevant behavior, (2) specific — referencing concrete actions, not vague impressions, (3) self-relevant — connected to the individual's personal goals and values, and (4) actionable — pointing toward what to do differently, not just what went wrong.
This directly informs how the Guide delivers coaching: not through generic encouragement, but through timely, specific, self-relevant, and actionable observations drawn from the user's own data across all features.
### Where It Lives in Kalei
**The Guide (Active Coaching Layer)** — The Guide is the primary implementation of self-regulation theory in Kalei. Without it, the app provides tools (Mirror, Turn, Lens) but no feedback loop connecting them. The Guide implements Carver & Scheier's TOTE cycle across the entire user experience:
- **Test:** The Guide monitors the user's state across all features — Mirror session language, Turn topics, Lens goal progress, Ritual consistency, Evidence Wall accumulation
- **Operate:** When discrepancies are detected (goal progress stalling, self-efficacy dipping, patterns recurring), the Guide surfaces targeted interventions — check-ins, bridges, evidence, attention prompts
- **Test again:** The Weekly Pulse provides a structured moment for the user to self-report their state, while the AI provides its own read — closing the loop between subjective experience and objective data
- **Exit (or recalibrate):** Goals that are completed celebrate and close. Goals that need adjustment get recalibrated through check-in conversations. Goals that have been abandoned are addressed directly.
**Goal Check-Ins** — These implement Locke & Latham's feedback moderator directly. The AI reviews specific actions taken (not vague impressions), references concrete data (Evidence Wall), and proposes adjustments (revised if-then plans). The feedback is timely (scheduled at the user's chosen interval), specific ("you completed 18 of 22 sessions"), self-relevant (tied to the user's chosen goal), and actionable ("what if we add a backup plan for high-stress days?").
**Cross-Feature Bridges** — These implement a form of self-regulation that no single feature can achieve alone: connecting emotional processing (Mirror/Turn) with goal-directed behavior (Lens). When the Guide notices that Mirror sessions keep circling a theme that maps to a Lens goal, it's detecting a discrepancy between the user's emotional state and their goal state — and offering to close it.
**Weekly Pulse** — This is the explicit feedback loop closure. The user reports their subjective state; the AI reports the objective data; the gap (or alignment) between the two becomes the most coaching-rich moment in the entire app. Carver & Scheier's research shows that awareness of discrepancy is the primary driver of corrective action — the Pulse creates that awareness weekly.
**Attention Prompts** — These implement the "operate" phase of the TOTE cycle for Step 4 (Notice Differently). Rather than waiting for the user to naturally redirect attention, the Guide actively trains attention toward goal-relevant information through daily micro-exercises. This accelerates the attentional retraining that the Mirror provides passively.
**Design implications:**
- The Guide must be proactive, not reactive — it initiates contact, not just responds to it
- Feedback must reference concrete, specific, user-generated data — never generic
- The gap between self-report and AI-read (in Weekly Pulse) is a feature, not a bug — surfacing discrepancy is the mechanism of change
- The Guide must adapt its cadence to the user — too frequent feels surveillance-like, too infrequent loses the feedback loop
- Every Guide interaction should close with forward momentum — "here's what to focus on" — not just analysis
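The TOTE pass the Guide runs over each goal can be sketched as a single function. The names, thresholds, and intervention copy below are assumptions, not the shipped logic:

```typescript
// Sketch of one TOTE (test-operate-test-exit) pass over a goal:
// compare progress to target, pick an intervention on discrepancy.

interface GoalState {
  target: number;           // e.g. planned sessions this period
  completed: number;        // sessions actually done
  daysSinceActivity: number;
}

type Intervention =
  | { kind: "celebrate" }
  | { kind: "check-in"; note: string }
  | { kind: "reengage"; note: string };

function totePass(g: GoalState): Intervention | null {
  // Exit: goal met, close the loop with celebration.
  if (g.completed >= g.target) return { kind: "celebrate" };
  // Operate: an abandoned goal gets addressed directly.
  if (g.daysSinceActivity > 7) {
    return { kind: "reengage", note: "It's been a while. Want to recalibrate this goal?" };
  }
  // Operate: a large gap triggers a specific, data-backed check-in.
  const gap = g.target - g.completed;
  if (gap / g.target > 0.3) {
    return {
      kind: "check-in",
      note: `You've completed ${g.completed} of ${g.target}. What's getting in the way?`,
    };
  }
  return null; // Test passed closely enough; keep monitoring.
}
```

Note how the check-in note satisfies Locke & Latham's feedback criteria in miniature: it cites concrete user data and points forward, rather than offering generic encouragement.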
---
## The Chain: How the 8 Pillars Create the Manifestation Mechanism
The research pillars aren't independent — they form a causal chain, and the self-regulation feedback loop (Pillar 8) monitors and steers the entire process:
```
Clear Goal (Locke & Latham)
→ biases attention toward goal-relevant information (Yantis, Stevens & Bavelier)
→ mental rehearsal primes execution (Schuster, Liu, Seok)
→ capability belief sustains effort through setbacks (Bandura)
→ if-then plans automate action initiation (Gollwitzer)
→ repetition builds automatic habits (Wood & Neal)
→ consistent practice reinforces positive expectations (Stetler, Pardo-Cabello)
→ broader action increases exposure to opportunity (Granovetter)
→ probability of desired outcome increases
↑ ↓
└── FEEDBACK LOOP (Carver & Scheier) ──────────┘
The Guide monitors progress at every step,
detects when the chain stalls, and steers
the user back on track through check-ins,
bridges, evidence, and attention prompts.
```
This is what "manifestation" actually is: a chain of well-documented psychological mechanisms that compound to tilt probability in your favor — **plus a feedback loop that keeps the chain running.** Not magic. Not metaphysics. Not "the universe." Just your brain doing what brains do when properly directed, with an AI ensuring you don't get stuck.
---
## Quick Reference: Feature → Science Map
| Kalei Feature | Primary Research | Key Principle |
|---|---|---|
| **The Lens** (goal creation) | Locke & Latham 2002, 2006 | Specific, challenging goals with feedback |
| **The Turn** (reframing) | Bandura 1977; Yantis 2008 | Capability belief + attentional retraining |
| **The Mirror** (freeform notebook) | Stevens & Bavelier 2012; Koch & Tsuchiya | Externalized attentional spotlight |
| **The Rehearsal** (guided visualization) | Schuster 2011; Liu 2025; Seok 2023 | Process-oriented mental rehearsal (~10min, 3x/week) |
| **The Ritual** (daily flow) | Wood & Neal 2007; Wood et al. 2021 | Context-anchored habit formation; context stability |
| **The Evidence Wall** (mastery tracking) | Bandura 1977 | Mastery experiences (strongest efficacy source) |
| **If-Then Micro-Actions** | Gollwitzer 1999; Gollwitzer & Sheeran | Implementation intentions |
| **Coaching Styles** | Bandura 1977 | Personalized verbal persuasion |
| **Pattern Analytics** | Yantis 2008; Stevens & Bavelier 2012 | Revealing attentional biases |
| **Science Explanations** | Pardo-Cabello 2022; Stetler 2014 | Transparent expectation effects |
| **Community (future)** | Granovetter 1973 | Weak-tie opportunity exposure |
| **The Guide** (active coaching) | Carver & Scheier 1998; Locke & Latham 2006 | Feedback loops + self-regulation TOTE cycle |
| **Goal Check-Ins** | Locke & Latham 2006 | Timely, specific, actionable feedback |
| **Cross-Feature Bridges** | Carver & Scheier 1998 | Discrepancy detection across systems |
| **Weekly Pulse** | Carver & Scheier 1998 | Subjective-objective gap awareness |
| **Attention Prompts** | Stevens & Bavelier 2012; Carver & Scheier 1998 | Active attentional retraining via feedback |
---
## Guardrails: What the Science Does NOT Support
Just as important as what we cite is what we explicitly reject:
1. **"The universe responds to your thoughts"** — No research supports metaphysical causation. We never imply it.
2. **"Visualize and it will happen"** — Outcome-only visualization without action can actually *decrease* performance (by providing premature satisfaction). We always pair visualization with process focus and action steps.
3. **"Believe hard enough and you'll succeed"** — Bandura's self-efficacy is about capability belief, not outcome certainty. We always say "possible, not guaranteed."
4. **"Positive thinking cures everything"** — Toxic positivity. Kalei acknowledges real constraints, structural barriers, and randomness. The science improves odds — it doesn't eliminate uncertainty.
5. **"You attracted your problems"** — Victim-blaming disguised as empowerment. Never. The attentional science explains perception, not causation.
---
## How to Use This Document
**For developers:** When building a feature, check this doc to understand the scientific principle behind it. The AI prompts, UX copy, and interaction patterns should all reflect the cited research.
**For AI prompt engineering:** Every reframe, visualization script, and goal scaffold should be traceable to a specific pillar. If a prompt produces output that contradicts the research (e.g., promising guaranteed outcomes), it needs revision.
**For content and copy:** When writing user-facing text — onboarding, tooltips, push notifications, feature descriptions — ground it in the relevant pillar. Users should feel the science without needing to read papers.
**For marketing:** "Science-backed" is not a buzzword for Kalei. It's a specific claim backed by 18 peer-reviewed papers across 8 research domains. This document is the receipt.
---
*Same pieces. New angle. Real science.*

Final Conclusion: What “Manifesting” Really Is (Scientifically)
When stripped of spiritual language, “manifesting” is a structured psychological process that increases the probability of desired outcomes through well-documented cognitive and behavioral mechanisms.
It does not mean:
Reality bends to thought.
The universe delivers outcomes on demand.
Belief overrides randomness.
It does mean:
1⃣ Clear Goals Change Performance
Research on goal-setting shows that specific, challenging goals direct attention, increase effort, and improve persistence.
Clarity is not motivational fluff — it changes how the brain allocates cognitive resources.
2⃣ Mental Rehearsal Primes Execution
Visualization activates overlapping neural circuits used in real performance.
When used as behavioral rehearsal (not fantasy), it improves readiness, confidence, and execution quality.
3⃣ Self-Efficacy Changes Behavior
Belief in capability (self-efficacy) predicts:
Greater effort
Longer persistence
Better stress tolerance
Faster recovery from setbacks
Belief influences behavior — behavior influences outcomes.
4⃣ Attention Is Biased by Goals
The brain filters information according to relevance.
When you define a goal, you increase the likelihood of noticing:
Opportunities
Relevant information
Signals
Threats
You don't create opportunities out of nothing — you detect them more effectively.
5⃣ Planning Automates Action
Implementation intentions (“If X happens, I do Y”) dramatically increase follow-through.
This reduces reliance on willpower and increases consistency.
6⃣ Repetition Builds Habits
Repeated, goal-aligned behaviors become automatic over time.
Once habits form, less conscious effort is required — increasing long-term probability of success.
7⃣ Exposure Increases Opportunity
Network research shows that broader exposure (especially weak ties) increases access to opportunities.
Repeated aligned action increases surface area for luck.
The Core Mechanism
Clear intention
→ biases attention
→ increases belief-driven persistence
→ improves quality and frequency of action
→ expands exposure to opportunity
→ increases probability of desired outcomes
That's it.
The Real Conclusion
“Manifesting” is not mystical causation.
It is:
Goal-directed cognition
Expectancy effects
Behavioral alignment
Habit formation
Increased opportunity exposure
Compounded probability
It works because it operates through known psychological and social mechanisms, not because it overrides reality.
The Honest Boundary
This process:
Improves odds
Does not guarantee outcomes
Cannot eliminate randomness
Cannot override structural constraints
But over time, it systematically tilts probability in your favor.

# Kalei — Complete User Journey Map
> Version 2.0 — February 2026
> Updated to include Ritual, Rehearsal, Evidence Wall, and full cross-feature integration
---
## Overview
This document maps every user-facing flow in Kalei from first launch to long-term mastery. It serves as the single source of truth for what the user experiences, when, and why — covering all 7 features across the 5-tab architecture.
**Navigation Architecture:** Turn ◇ | Mirror ✦ | Lens ◎ | Gallery ▦ | You ●
**Core Features (4 Pillars):**
- The Turn (Kaleidoscope) — Perspective shifting via cognitive reframing
- The Mirror — Awareness through freeform journaling with AI fragment detection
- The Lens — Direction through goal setting, visualization, and action planning
- Gallery — Pattern collection and history
**Connector Features (4 Bridges):**
- The Ritual — Context-anchored daily habit sequences chaining Mirror → Turn → Lens
- The Rehearsal — Guided multi-sensory visualization (Lens sub-feature)
- The Evidence Wall — Mastery tracking mosaic (You tab sub-feature)
- The Guide — Active coaching layer connecting all features with proactive check-ins, cross-feature bridges, attention prompts, evidence interventions, and weekly pulse
**Intelligence Layer:**
- The Spectrum — AI-powered self-knowledge dashboard
---
## Journey Stage 1: First Launch & Onboarding
### Screen 1: Splash
- Breathing logo animation (soft-elegance iris, slow rotation, core pulse)
- Background: Void (#050508) with subtle breathing aura
- Duration: 2 seconds, auto-advance
### Screen 2: Welcome — "Same pieces. New angle."
- Hero kaleidoscope pattern (6-blade prismatic, screen blend mode)
- Tagline in Space Grotesk display font
- Single CTA: "See how it works" (Amethyst shimmer button)
### Screen 3: The Metaphor — Fragment Introduction
- Visual: A single thought fragment (◇) appears, glowing Amber
- Copy: "Your thoughts are like pieces of glass in a kaleidoscope."
- Interaction: User taps the fragment → it pulses with detected-state animation
- Copy continues: "Sometimes you see sharp edges. Sometimes beautiful patterns."
### Screen 4: The Turn Demo — Live Reframe
- Pre-populated negative thought: "I always mess everything up"
- Auto-animated Turn sequence (1.5s): collapse → multiply → crystallize → settle
- Three reframed perspectives appear as jewel-toned cards (Amethyst, Sapphire, Emerald)
- Copy: "Same pieces. New angle. That's a Turn."
### Screen 5: Choose Your Style
- 4 coaching style cards with fragment icons:
- Stoic Sage (Sapphire ◇) — "Clear-eyed perspective"
- Compassionate Friend (Rose ◇) — "Gentle understanding"
- Pragmatic Coach (Emerald ◇) — "Practical next steps"
- Growth Catalyst (Amber ◇) — "Opportunity in everything"
- User selects default (can change later)
### Screen 6: Notification Permission
- Copy: "When would you like a gentle nudge to check in?"
- Time picker with morning/evening presets
- Skip option available
### Screen 7: Account Creation
- Email + password OR Apple/Google SSO
- Minimal fields — name optional at this stage
- Privacy assurance: "Your thoughts stay yours. Always encrypted."
### Screen 8: First Real Turn
- Empty Turn input with prompt: "What's weighing on you right now?"
- User types their first real negative thought
- Full Turn animation plays
- 3 reframed perspectives appear
- User can save favorites (→ Gallery) or dismiss
- Success burst animation on save
### Screen 9: Welcome Complete
- Copy: "Welcome to Kalei. Your kaleidoscope is ready."
- Mini kaleidoscope pattern generated from their first Turn (deterministic, seeded from input)
- CTA: "Start exploring" → Tab bar appears, Turn tab active
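The deterministic, seeded mini-pattern can be sketched as a string hash feeding a tiny PRNG: the same first Turn always yields the same pattern. FNV-1a, mulberry32, and every parameter here are illustrative choices, not the shipped renderer:

```typescript
// Sketch of the seeded pattern generator for Screen 9.

// FNV-1a: stable 32-bit hash of the user's Turn text.
function fnv1a(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h >>> 0;
}

// mulberry32: tiny deterministic PRNG seeded from the hash.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const PALETTE = ["amethyst", "sapphire", "emerald", "amber", "rose"];

function seededPattern(turnText: string, fragments = 4) {
  const rand = mulberry32(fnv1a(turnText));
  // One wedge of the 6-blade kaleidoscope; the renderer mirrors it 6x.
  return Array.from({ length: fragments }, () => ({
    angle: rand() * 60,          // degrees within the wedge
    radius: 0.2 + rand() * 0.8,  // normalized distance from center
    color: PALETTE[Math.floor(rand() * PALETTE.length)],
  }));
}
```

Generating only one 60-degree wedge and mirroring it keeps every output symmetric by construction, matching the 6-blade logo.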
---
## Journey Stage 2: Daily Core Loop
### 2A: The Turn (Tab 1 — Amethyst ◇)
**Entry:** User opens app → Turn tab is default home
**Empty State:**
- Breathing logo at center, subtle floating shards in background
- Rotating prompts: "What thought keeps circling?", "What would you like to see differently?", "What feels heavy right now?"
- Single large text input area
**Active Flow:**
1. User types or speaks a negative thought
2. Tap "Turn it" button (Amethyst shimmer)
3. Turn animation plays (1.5s kaleidoscope rotation)
4. 3 reframed perspectives appear as cards:
- Each card has a coaching style label, the reframe text, and a fragment icon
- Cards use jewel tone gradients matching their style
5. Below perspectives: "If-Then Micro-Action" card
- Format: "If [situation], then I will [specific action]"
- Emerald accent, action-oriented
6. User actions:
- Save any perspective → goes to Gallery as a Keepsake
- Save micro-action → goes to Lens as a suggested action
- Share → generates Pattern Card (kaleidoscope pattern + reframe text)
- Dismiss → confirmation, thought is discarded
**Turn History:**
- Scrollable list below input showing today's Turns
- Each entry: timestamp, first line of thought, fragment count badge, pattern thumbnail
**Rate Limiting (Free tier):**
- 3 Turns per day
- After limit: "You've used your 3 Turns today. Upgrade to Kalei Prism for unlimited Turns."
- Gentle, never punishing
### 2B: The Mirror (Tab 2 — Amber ✦)
**Entry:** Mirror tab → New session or continue existing
**Empty State:**
- Soft amber glow background
- Copy: "What's on your mind? Write freely — no judgment, no rules."
- Suggested prompts rotate: "How are you feeling right now?", "What happened today?", "What's been on repeat in your head?"
**Active Session Flow:**
1. User writes freely in chat-like interface (user messages appear as dark bubbles)
2. After each message, AI processes silently (AI thinking animation: 3 oscillating fragments)
3. AI responds with warm, reflective prompts (light bubbles with subtle amber border)
4. Simultaneously, AI detects cognitive distortions in user's text
5. Detected fragments appear as inline highlights:
- Amber glow underline beneath the distorted phrase
- Small ◇ icon at start of highlight
- Highlight appears post-message (never while typing)
6. User taps a highlight → Half-sheet modal:
- Distortion type name + icon (from icons-distortions.svg)
- Brief explanation: "Catastrophizing: Assuming the worst possible outcome"
- 1-2 quick reframes
- "Take to Turn" button → opens Turn with this thought pre-filled
- Dismiss to continue writing
**Session Close:**
- User taps "End Session" or navigates away
- AI generates Reflection:
- Themes detected
- Fragment count and types
- One-line insight
- Unique kaleidoscope pattern (seeded from session content)
- Reflection saved to Gallery
**Nudge System:**
- If user ignores 5+ highlighted fragments, ONE gentle offer:
- "I noticed a few patterns in what you wrote. Want to look at them together?"
- Max once per session
- Dismissible
**Rate Limiting (Free):** 2 sessions per week, 3 distortion types detected (Catastrophizing, Black-and-White, Should Statements). Prism: unlimited sessions, all 10 types.
### 2C: The Lens (Tab 3 — Emerald ◎)
**Entry:** Lens tab → Goal dashboard
**Dashboard View:**
- Active goals displayed as cards with progress rings
- Each goal card shows: title, progress percentage, next action, streak indicator
- "Add Goal" floating action button (Emerald gradient)
**Goal Creation Flow (6 Steps):**
**Step 1: Decide — Set a SMART Goal**
- AI-guided conversation to refine a vague desire into a specific goal
- Template: "I want to [specific outcome] by [date] measured by [metric]"
- AI suggests refinements if goal is vague
**Step 2: See It — The View (Vision Board)**
- AI generates a visualization description based on goal
- User can add personal images or AI-generated imagery
- "The View" appears as a full-screen card they can revisit
**Step 3: Believe — Capability Building**
- Evidence Wall integration: surfaces past achievements relevant to this goal
- AI generates affirmations based on goal + user's history
- Daily affirmation card appears at Lens tab top
**Step 4: Notice — Attention Training**
- AI prompts awareness exercises: "Today, notice one moment where you [goal-related behavior]"
- Prompts delivered via notification at user-chosen time
- User logs noticed moments → feeds Evidence Wall
**Step 5: Act — If-Then Micro-Actions**
- AI generates situation-specific implementation intentions
- Format: "If [context], then [specific action]"
- User can mark actions complete → feeds Evidence Wall
- Action completion streak tracking
**Step 6: Compound — Habit Tracking**
- Visual habit tracker (fragment-shaped step indicators)
- Streak counter with flame icon
- Weekly review of consistency
**The Rehearsal (Lens Sub-Feature):**
- Accessed from goal detail screen → "Rehearse" button
- Timer ring appears (default: 10 minutes)
- AI generates personalized visualization script:
- First-person perspective
- Multi-sensory (see, hear, feel, smell)
- Process-oriented (not just outcome)
- Includes obstacle rehearsal ("When X happens, I will Y")
- Script plays as text cards with breathing animation pacing
- Progress ring counts down
- Completion → Success burst → logged to Evidence Wall
- Free: 1 Rehearsal per week. Prism: unlimited.
### 2D: Gallery (Tab 4 — Sapphire ▦)
**Entry:** Gallery tab → Collection view
**Views:**
- **All Patterns** (default): Reverse-chronological grid of kaleidoscope pattern thumbnails
- **Keepsakes**: Saved reframes, reflections, and insights
- **By Feature**: Filter by Turn / Mirror / Lens source
- **By Distortion**: Filter by cognitive distortion type
**Pattern Card Detail:**
- Full kaleidoscope pattern (hero variant, animated)
- Source content (the reframe or reflection that generated it)
- Date, feature source, distortion types tagged
- Share button → exports as Pattern Card image
- Delete with confirmation
**Search:**
- Text search across all saved content
- Filter chips: date range, feature, distortion type, favorites
### 2E: You (Tab 5 — Soft Light ●)
**Entry:** You tab → Profile dashboard
**Sections:**
- **Profile**: Name, avatar, member since
- **Stats Overview**: Total Turns, Mirror sessions, Goals active, Streak count
- **Evidence Wall** (prominent card → opens full view)
- **Settings**: Coaching style, notification times, theme (dark only for now), data export
- **Subscription**: Current plan, upgrade CTA (if free)
- **Support**: FAQ, contact, crisis resources
**The Evidence Wall (You Sub-Feature):**
- Accessed from You tab → "Your Evidence Wall" card
- Opens full-screen mosaic view
**Evidence Wall States:**
*Empty State (0-2 items):*
- Ghost tile outlines (dashed borders) showing where tiles will appear
- Central fragment icon with breathing animation
- Copy: "Start collecting evidence. Each Turn adds a tile to your wall."
*Early State (3-7 items):*
- Small cluster of tiles, connections forming
- Tiles are mixed shapes (diamond, hex, rectangle, pentagon, triangle)
- Each tile represents one proof point:
- Completed action (Emerald border)
- Saved keepsake (Sapphire border)
- Self-correction in Mirror (Amber border)
- Streak milestone (Amethyst border)
- Goal completion (Emerald border, larger tile)
- Reframe echo (Indigo border) — when user's later writing echoes a saved reframe
*Mid State (8-20 items):*
- Mosaic takes shape, dashed connection lines between related tiles
- Tiles glow softly when tapped → detail half-sheet
*Full State (20+ items):*
- Dense mosaic with visible connection web
- Zoom/pan enabled
- Most impactful tiles glow brighter
**Contextual Surfacing:**
- During low self-efficacy moments (detected in Mirror or Turn), the Evidence Wall surfaces 1-2 relevant tiles
- Example: User writes "I can never stick to anything" → Evidence Wall suggests: "You completed 12 actions in the last month and maintained a 7-day streak"
- Presented as a gentle card, not a correction
---
## Journey Stage 3: The Ritual (Connector Feature)
The Ritual chains Mirror → Turn → Lens into a single context-anchored daily flow.
**Access:** Dedicated "Start Ritual" button at top of Turn tab, or via notification
**Template Selection:**
*Morning Ritual (15-20 min):*
1. Mirror check-in: "How are you waking up today?" (3 min writing)
2. Turn: AI identifies strongest fragment from Mirror → offers reframe (2 min)
3. Lens: Today's priority action from active goal (1 min review)
4. Affirmation: Daily affirmation card
5. Set intention: One sentence for the day
*Evening Ritual (10-15 min):*
1. Mirror reflection: "What stood out about today?" (3 min writing)
2. Turn: Process any unresolved thought from the day (2 min)
3. Lens review: Mark completed actions, log noticed moments
4. Gratitude: One thing from today (saved to Gallery)
*Quick Ritual (5 min):*
1. One-line check-in
2. Fastest Turn (single perspective)
3. One action reminder
**Ritual Flow UI:**
- Step indicators using fragment-shaped progress bar (from progress-indicators.svg)
- Each step has a timer (visible but not pressuring)
- Smooth transitions between steps (fragment scatter/converge animation)
- Completion → Success burst → streak updated
**Ritual Tracking:**
- Streak calendar (7-day week view, Amber jewel tone)
- Context consistency tracking (Wood et al.): same time, same place → stronger habit
- Ritual completion logged to Evidence Wall
**Rate Limiting (Free):** Quick Ritual only. Prism: all 3 templates.
---
## Journey Stage 3B: The Guide (Active Coaching Layer)
The Guide is not a tab or a destination — it's an intelligence layer that surfaces across all features through five interaction patterns. These screens show how each pattern manifests in the UI.
### Guide Pattern 1: Goal Check-In (Lens)
**Access:** "Check in" button on goal detail screen, or via notification at user's chosen check-in time
**Screen 65: Goal Check-In Conversation**
A chat-like interface within the goal detail screen. The Guide has full context from the user's Lens activity, Mirror sessions, and Turn history.
**Flow:**
1. Guide opens with a recognition of recent progress (evidence-first)
2. Guide asks about specific milestones or actions since last check-in
3. User responds conversationally
4. Guide reviews relevant if-then plans — did the situations arise? Did the plans work?
5. If plans need adjustment, Guide proposes modifications collaboratively
6. Guide closes with a concrete Evidence Wall proof point
**UI Elements:**
- Chat interface within goal detail (not a separate screen — slides up from goal card)
- Guide messages use prismatic gradient border (distinguishing from Mirror's amber)
- User messages in dark bubbles (consistent with Mirror style)
- At bottom: typing area with send button
- Check-in history accessible via "Past check-ins" link
**Screen 66: Check-In Summary**
After the conversation ends, a summary card appears:
- What was reviewed
- Plan adjustments made (if any)
- Evidence highlighted
- Next check-in date
- "Added to your coaching history" confirmation
**Rate Limiting (Free):** 1 check-in per month per goal. Prism: weekly per goal + on-demand.
---
### Guide Pattern 2: Cross-Feature Bridge Cards
**Access:** Appear automatically at the top of Turn, Mirror, or Lens tabs when the Guide detects a cross-feature pattern
**Screen 67: Discovery Bridge**
Appears when 3+ Mirror sessions or Turns share a theme that doesn't map to any existing Lens goal.
**Layout:**
- Half-height card at top of screen (below nav header, above feature content)
- Prismatic gradient border (thin, cycling amethyst → sapphire → emerald → amber)
- Header: "◇ Something keeps coming up" (or "A pattern is forming")
- Body: 1-2 sentences referencing the theme, with quoted user text in italics
- CTAs: Primary action (e.g., "Open Lens" / "Start a goal") + Dismiss ("Just noticing")
- Dismissible with swipe or tap
**Screen 68: Reinforcement Bridge**
Appears when Mirror/Turn content directly relates to an existing Lens goal.
**Layout:** Same card format as discovery bridge.
- Header: "◇ This connects to something you're building"
- Body: References the specific goal and how the current processing connects to it
- CTAs: "Start Rehearsal" / "Check in on goal" + Dismiss
**Screen 69: Integration Bridge**
Appears when current Mirror/Turn writing contradicts a previously saved keepsake.
**Layout:** Same card format, but includes a quoted keepsake.
- Header: "◇ You've seen this differently before"
- Body: Shows the saved keepsake text, then the current contradicting sentiment
- CTAs: "See your Evidence Wall" / "Full Turn" + "Continue writing"
**Rules:** Maximum one bridge per day. Never appears mid-Mirror session. Always dismissible.
---
### Guide Pattern 3: Attention Prompts (Lens)
**Access:** Daily notification → opens in Lens tab. Also accessible from Lens dashboard as a card.
**Screen 70: Daily Attention Prompt**
**Layout:**
- Card in Lens tab (below goals, above rehearsals)
- Emerald accent border (Lens color family)
- Header: "Today's Focus: [Prompt Type]" (Notice / Reflect / Act / Envision)
- Body: The specific prompt, 1-2 sentences, tied to the active goal
- Goal reference: "For your goal: [goal title]"
- CTA: "Got it" (acknowledges) + "Log a moment" (appears later in the day)
- Prompt type rotates based on which step of the manifestation chain the user is in
**Screen 71: Moment Log**
When user taps "Log a moment" (later in the day or from notification):
**Layout:**
- Simple text input: "What did you notice?"
- Below: context reminder of today's prompt
- Submit → confirmation: "That's evidence. Added to your Evidence Wall."
- The logged moment appears as a new Evidence Wall tile
**Rate Limiting (Free):** 3 attention prompts per week. Prism: daily.
---
### Guide Pattern 4: Evidence Intervention
**Access:** Surfaces automatically during Mirror sessions or after Turns when low self-efficacy is detected
**Screen 72: Evidence Intervention Card (Mirror)**
Appears after a Mirror session ends (never mid-session) when the session contained significant self-efficacy dip language.
**Layout:**
- Card at bottom of Mirror reflection screen
- Prismatic border
- Header: "◇ Here's what I've seen"
- Body: 2-3 specific, numbered proof points from Evidence Wall that directly counter the expressed doubt
- Each proof point includes a specific number, date, or action
- CTA: "See your full Evidence Wall" + Dismiss
- Tone: Presenting evidence, not cheerleading. "You said X. Your data shows Y."
**Screen 73: Evidence Intervention Card (Turn)**
Appears below Turn results when the original thought contained capability doubt on a topic where the user has evidence.
**Layout:**
- Same card format as Mirror intervention
- Positioned below the 3 reframe cards, above the action buttons
- Contextually references the Turn's topic
**Rules:** Maximum one intervention per session. Only surfaces when meaningful evidence exists. Never fabricates or exaggerates.
**Rate Limiting (Free):** Not available. Prism: full evidence interventions.
---
### Guide Pattern 5: Weekly Pulse
**Access:** Weekly notification on user's chosen day (default: Sunday evening) → opens dedicated Pulse flow
**Screen 74: Pulse — Self-Report**
Step 1 of 3 in the Weekly Pulse flow.
**Layout:**
- Full-screen flow (no tab bar — immersive like Ritual)
- Header: "Your Weekly Pulse"
- Subheader: "How did this week feel?"
- 5-point fragment scale (SVG diamonds at increasing glow/facet levels):
- ◇ dim, cracked — "Rough"
- ◇ muted — "Harder than usual"
- ◇ neutral — "Steady"
- ◇ glowing — "Good momentum"
- ◇ brilliant, faceted — "Breakthrough week"
- Below scale: optional one-sentence write-in
- Progress indicator: Step 1 of 3
**Screen 75: Pulse — AI Read**
Step 2 of 3.
**Layout:**
- Header: "Here's what I noticed this week"
- 3-5 bullet observations from the AI, each with a jewel-tone accent dot:
- Turn count and theme
- Mirror session emotional trajectory
- Lens goal progress
- Distortion pattern changes
- Streak/consistency data
- If self-report diverges from data: a highlighted callout — "You said this was a rough week, but your data shows progress on two fronts. Sometimes the feeling lags behind the evidence."
- Progress indicator: Step 2 of 3
**Screen 76: Pulse — Next Week Focus**
Step 3 of 3.
**Layout:**
- Header: "For next week"
- 2-3 suggested focus areas as cards:
- Each card: one-sentence suggestion + the feature it relates to (Lens, Mirror, Rehearsal, etc.)
- Examples: "Do a Rehearsal for your 5K — you haven't done one in 10 days" / "Your Mirror streak is at 14 days — keep it going"
- CTAs: "Sounds good" (accepts) / "Adjust" (opens edit)
- Completion: "Pulse complete. See you next week."
- Pulse data saved → feeds Spectrum
**Rate Limiting (Free):** Self-report step only (no AI read, no next-week focus). Prism: full 3-step Pulse.
---
### Guide — Enhanced Turn Results (Updated Screen 13)
The existing Turn Results screen (13) is enhanced with two new elements:
**Addition 1: If-Then Micro-Action Card**
Positioned between the reframe cards and the action buttons:
- Emerald accent border
- Format: "If [situation from the thought], then I will [specific action]"
- CTA: "Save to Lens" → creates an action item on the most relevant active goal
- If no active goal exists: "Start a Lens goal around this"
**Addition 2: Goal Connection (when relevant)**
If the Turn's topic maps to an active Lens goal:
- Small card below the micro-action: "This connects to your goal: [goal title]"
- CTA: "Check in on this goal" / Dismiss
---
### Guide — Enhanced Mirror Reflection (Updated Screen 19)
The existing Mirror Session Reflection screen (19) is enhanced:
**Addition: "The Guide noticed..." section**
Below the existing reflection content (themes, fragment count, patterns, insight):
- Prismatic-bordered card
- Header: "The Guide noticed..."
- 1-2 cross-feature observations:
- Theme connections to Lens goals
- Pattern changes compared to recent sessions
- Integration bridge opportunities (if a saved keepsake was contradicted)
- CTAs appropriate to the observation (e.g., "Open Lens" / "See your Evidence Wall" / Dismiss)
---
## Journey Stage 4: Spectrum (Intelligence Layer)
**Unlock:** After 2 weeks of active use (minimum 5 Turns, 2 Mirror sessions)
**Teaser Period:**
- Notification: "Something is forming... Your Spectrum is almost ready."
- Small locked card on You tab with shimmer animation
**Launch Reveal:**
- Full-screen animation: fragments converge into prismatic kaleidoscope
- User's first Spectrum dashboard appears
**Dashboard Components:**
**The River (Emotional Flow):**
- Flowing prismatic gradient band showing emotional valence over time
- Data points as fragment icons at key moments
- X-axis: days/weeks, Y-axis: emotional valence
- Hover/tap any point → detail card with source Turn/Mirror session
**Your Glass (Distortion Distribution):**
- Radar/spider chart showing which of the 10 distortion types appear most
- Amber jewel tone data shape on hex grid
- Vertices as fragment icons
- Evolves weekly as patterns shift
**Turn Impact (Before/After):**
- Bar chart pairs showing emotional metrics before and after Turns
- Metrics: Distress level, Clarity, Hope
- Ruby bars (before) vs Emerald bars (after)
- Rolling 30-day average
**Rhythm Detection (Your Cycles):**
- Time-of-day engagement pattern
- Bubble sizes represent intensity
- Peak labels with fragment accents
- Helps user identify best times for practice
**Growth Trajectory (The Long View):**
- Line chart with fragment data points
- Y-axis: Resilience Score (composite of fragment density, self-correction rate, reframe adoption, distortion diversity, Turn-to-insight ratio)
- Milestone markers (10th Turn, 30-day streak, etc.)
- Monthly trend with prismatic gradient fill under curve
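The composite Resilience Score above can be sketched as a weighted blend of the five components, each normalized to 0-1. The equal weighting here is an assumption for illustration — the real weighting scheme is not specified in this document:

```python
def resilience_score(
    fragment_density: float,      # fragments noticed per session, normalized 0-1
    self_correction_rate: float,  # share of fragments the user reframes unprompted
    reframe_adoption: float,      # share of offered reframes saved as keepsakes
    distortion_diversity: float,  # breadth of distortion types recognized, 0-1
    turn_insight_ratio: float,    # share of Turns ending in a saved insight, 0-1
) -> float:
    """Composite 0-100 score; equal weights are an illustrative assumption."""
    components = [fragment_density, self_correction_rate, reframe_adoption,
                  distortion_diversity, turn_insight_ratio]
    if not all(0.0 <= c <= 1.0 for c in components):
        raise ValueError("each component must be normalized to 0-1")
    return 100.0 * sum(components) / len(components)
```

Normalizing each input before blending keeps the score comparable month over month even as raw usage volume grows.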
**Cadence:**
- Weekly summary: Sunday evening notification with 1 key insight
- Monthly deep dive: First of month with month-over-month comparison
- In-context nudges: Insights surface within Mirror/Turn/Lens at natural moments
**Rate Limiting (Free):** Simplified weekly summary (1 insight, no visuals, basic fragment counts). Prism: full dashboard, all 5 components, weekly/monthly deep dives, growth trajectory, export.
---
## Journey Stage 5: Engagement Deepening & Retention
### Streak System
- Daily streak counter (consecutive days with at least 1 Turn or Ritual)
- Visual: flame icon with Amber gradient, pulse animation
- Milestones: 3, 7, 14, 30, 60, 90, 180, 365 days
- Each milestone → special pattern generated, saved to Gallery
- Streak freeze: 1 free per week (Prism: 3 per week)
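The streak-with-freeze rules above can be sketched as follows — a single missed day consumes one freeze if available, otherwise the streak resets. Function shape and names are hypothetical:

```python
from datetime import date, timedelta

FREE_FREEZES_PER_WEEK = 1  # Prism: 3

def update_streak(streak: int, last_active: date, today: date,
                  freezes_left: int) -> tuple[int, int]:
    """Advance a daily streak; one missed day may consume one freeze.

    Returns (new_streak, freezes_left). Illustrative sketch only.
    """
    gap = (today - last_active).days
    if gap <= 1:                       # same day, or the consecutive next day
        return (streak + 1 if gap == 1 else streak, freezes_left)
    if gap == 2 and freezes_left > 0:  # exactly one missed day, covered by a freeze
        return streak + 1, freezes_left - 1
    return 1, freezes_left             # streak broken; today's activity restarts it
```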
### Push Notifications
- Daily check-in at user's chosen time
- Streak maintenance reminders (if about to break)
- Milestone celebrations
- Weekly Spectrum insights (Prism)
- Ritual reminders at consistent time/place
- Never more than 2 per day
### Empty States
- Every screen has a warm, encouraging empty state
- Uses breathing logo animation or floating shard clusters
- Copy examples:
- Turn: "What would you like to see differently today?"
- Mirror: "Ready to write? There's no wrong way to start."
- Lens: "What are you working toward? Let's build a path."
- Gallery: "Your first pattern is waiting to be created."
- Evidence Wall: "Every small step is evidence. Start collecting."
### Upgrade Moments (Free → Prism)
- After hitting 3 Turn limit: "You're on a roll. Unlock unlimited Turns."
- After 2nd Mirror session: "Want to explore all 10 distortion types?"
- After first Rehearsal: "That felt good, right? Get unlimited Rehearsals."
- After Evidence Wall shows 10+ tiles: "Your evidence is growing. See the full picture with Spectrum."
- Never blocks current action — always shows after completion
---
## Journey Stage 6: System States
### Loading States
- Initial load: Breathing logo animation
- Feature transitions: Fragment scatter/converge
- AI processing: 3-fragment oscillation (AI thinking bubble)
- Data loading: Skeleton shimmer (text lines + card shapes)
- Long operations: Iris spinner with progress text
### Error States
- Network error: "Lost connection. Your data is safe — we'll sync when you're back."
- AI error: "Our thinking engine needs a moment. Try again in a few seconds."
- Rate limit: Feature-specific messaging (see each feature above)
- Generic: Ruby-accent toast with retry option
### Success States
- Turn saved: Emerald toast "Turn saved" with fragment icon
- Goal completed: Success burst animation (expanding rings + particle fragments)
- Streak milestone: Special celebration with pattern generation
- Ritual complete: Prismatic ring completion animation
### Offline Mode
- Turn input cached locally, syncs when online
- Mirror sessions continue with local fragment detection (basic)
- Gallery browsable offline
- Clear indicator: "Offline — your work will sync automatically"
---
## Monetization Tiers
### Kalei (Free)
| Feature | Limit |
|---------|-------|
| Turn | 3 per day |
| Mirror | 2 sessions per week, 3 distortion types |
| Lens | 1 active goal, basic actions |
| Rehearsal | 1 per week |
| Ritual | Quick template only |
| Evidence Wall | 30-day window |
| Guide | Discovery bridges only, 1 check-in/month/goal, 3 attention prompts/week, self-report Pulse only |
| Gallery | Full access |
| Spectrum | Simplified weekly summary (text only) |
### Kalei Prism ($7.99/month)
| Feature | Access |
|---------|--------|
| Turn | Unlimited + if-then micro-action cards |
| Mirror | Unlimited sessions, all 10 distortion types, unlimited inline reframes + evidence interventions |
| Lens | Unlimited goals, AI-refined actions + weekly check-ins + daily attention prompts |
| Rehearsal | Unlimited |
| Ritual | All 3 templates |
| Evidence Wall | Full history, no time window + contextual AI surfacing |
| Guide | All 5 patterns: full check-ins, all bridge types, daily prompts, evidence interventions, full Pulse |
| Gallery | Full access + export |
| Spectrum | Full dashboard, all 5 components, weekly/monthly insights, growth trajectory |
---
## Appendix: Screen Inventory
| # | Screen | Tab | Feature |
|---|--------|-----|---------|
| 1 | Splash | — | System |
| 2 | Welcome | — | Onboarding |
| 3 | Fragment Intro | — | Onboarding |
| 4 | Turn Demo | — | Onboarding |
| 5 | Style Selection | — | Onboarding |
| 6 | Notification Permission | — | Onboarding |
| 7 | Account Creation | — | Onboarding |
| 8 | First Turn | — | Onboarding |
| 9 | Welcome Complete | — | Onboarding |
| 10 | Turn Home (empty) | Turn | Turn |
| 11 | Turn Input Active | Turn | Turn |
| 12 | Turn Animation | Turn | Turn |
| 13 | Turn Results | Turn | Turn |
| 14 | Turn History | Turn | Turn |
| 15 | Mirror Home (empty) | Mirror | Mirror |
| 16 | Mirror Session Active | Mirror | Mirror |
| 17 | Mirror Fragment Highlight | Mirror | Mirror |
| 18 | Mirror Fragment Detail (half-sheet) | Mirror | Mirror |
| 19 | Mirror Session Reflection | Mirror | Mirror |
| 20 | Lens Dashboard | Lens | Lens |
| 21 | Lens Goal Creation Step 1 | Lens | Lens |
| 22 | Lens Goal Creation Step 2 | Lens | Lens |
| 23 | Lens Goal Creation Step 3 | Lens | Lens |
| 24 | Lens Goal Creation Step 4 | Lens | Lens |
| 25 | Lens Goal Creation Step 5 | Lens | Lens |
| 26 | Lens Goal Creation Step 6 | Lens | Lens |
| 27 | Lens Goal Detail | Lens | Lens |
| 28 | Lens Daily Affirmation | Lens | Lens |
| 29 | Rehearsal Session | Lens | Rehearsal |
| 30 | Rehearsal Complete | Lens | Rehearsal |
| 31 | Gallery All Patterns | Gallery | Gallery |
| 32 | Gallery Keepsakes | Gallery | Gallery |
| 33 | Gallery Pattern Detail | Gallery | Gallery |
| 34 | Gallery Search/Filter | Gallery | Gallery |
| 35 | You Profile | You | You |
| 36 | You Stats | You | You |
| 37 | You Settings | You | Settings |
| 38 | You Subscription | You | Billing |
| 39 | Evidence Wall (empty) | You | Evidence Wall |
| 40 | Evidence Wall (early) | You | Evidence Wall |
| 41 | Evidence Wall (mid) | You | Evidence Wall |
| 42 | Evidence Wall (full) | You | Evidence Wall |
| 43 | Evidence Wall Tile Detail | You | Evidence Wall |
| 44 | Ritual Template Selection | Turn | Ritual |
| 45 | Ritual Morning Flow | Turn | Ritual |
| 46 | Ritual Evening Flow | Turn | Ritual |
| 47 | Ritual Quick Flow | Turn | Ritual |
| 48 | Ritual Complete | Turn | Ritual |
| 49 | Ritual Streak View | Turn | Ritual |
| 50 | Spectrum Dashboard | You | Spectrum |
| 51 | Spectrum The River | You | Spectrum |
| 52 | Spectrum Your Glass | You | Spectrum |
| 53 | Spectrum Turn Impact | You | Spectrum |
| 54 | Spectrum Rhythm | You | Spectrum |
| 55 | Spectrum Growth | You | Spectrum |
| 56 | Spectrum Weekly Summary | You | Spectrum |
| 57 | Spectrum Monthly Deep Dive | You | Spectrum |
| 58 | Upgrade Modal | — | Billing |
| 59 | Rate Limit Notice | — | System |
| 60 | Crisis Response | — | Safety |
| 61 | Pattern Card Share | — | Social |
| 62 | Notification Settings | You | Settings |
| 63 | Data Export | You | Settings |
| 64 | Account Deletion Confirm | You | Settings |
| 65 | Goal Check-In Conversation | Lens | Guide |
| 66 | Check-In Summary | Lens | Guide |
| 67 | Discovery Bridge Card | Turn/Mirror/Lens | Guide |
| 68 | Reinforcement Bridge Card | Turn/Mirror/Lens | Guide |
| 69 | Integration Bridge Card | Mirror | Guide |
| 70 | Daily Attention Prompt | Lens | Guide |
| 71 | Moment Log | Lens | Guide |
| 72 | Evidence Intervention (Mirror) | Mirror | Guide |
| 73 | Evidence Intervention (Turn) | Turn | Guide |
| 74 | Pulse — Self-Report | — | Guide |
| 75 | Pulse — AI Read | — | Guide |
| 76 | Pulse — Next Week Focus | — | Guide |
| 13* | Turn Results (Enhanced) | Turn | Guide + Turn |
| 19* | Mirror Reflection (Enhanced) | Mirror | Guide + Mirror |
# The Mirror — Kalei's Notebook Feature
## Scientific Foundation
The Mirror is Kalei's most direct application of attention research from neuroscience.
**Selective Attention as the Core Mechanism:** Yantis (2008) showed that selective attention operates through modulatory signals that amplify relevant information and suppress irrelevant inputs. The Mirror externalizes this process — Kalei's AI acts as an attentional amplifier, highlighting cognitive patterns the user's own system has habituated to and stopped noticing. The highlighted fragments aren't new information; they're existing patterns made visible through redirected attention.
**Attention Is Trainable:** Stevens & Bavelier (2012) demonstrated that attentional control improves with practice and transfers across domains. Regular Mirror use trains the user to notice their own cognitive distortions — first with AI assistance, eventually independently. The Spectrum's "self-correction rate" metric (Phase 2) directly measures this training effect.
**Attention vs. Consciousness:** Koch & Tsuchiya's (2007) distinction between attention and consciousness is operationally important. The Mirror works at the attention level: it doesn't require the user to be consciously aware of their patterns (consciousness) — it simply redirects attention toward them. The conscious recognition follows naturally.
**Habit Formation Through Consistent Practice:** Wood & Neal (2007) showed that habits form through context-response associations. Regular Mirror sessions — anchored to consistent contexts (time of day, emotional state) — train the habit of reflective self-examination until it becomes automatic.
---
## The Concept
The Kaleidoscope (Turn) is structured: one fragment in, patterns out. It works when you **know** what's bothering you.
But most of the time, people don't. They're carrying a vague heaviness — a bad day, an argument replaying in their head, a worry they can't articulate. They don't need a tool yet. They need a space to **think out loud** first.
**The Mirror** is that space.
It's a freeform notebook with a chat-like interface where you write whatever's on your mind — stream of consciousness, venting, processing. As you write, Kalei's AI reads along quietly and does two things:
1. **Highlights fragments** — gently underlines or marks phrases that carry negative cognitive patterns (catastrophizing, black-and-white thinking, personalization, fortune-telling, etc.)
2. **Offers to Turn them** — tapping a highlighted fragment opens a mini-reframe inline, without leaving the flow of writing
You're not journaling into a void. You're writing into a mirror that reflects back what you can't see yourself.
---
## Why "The Mirror"
A kaleidoscope is built from mirrors. The mirrors are what create the symmetry — what take a random fragment and reveal the pattern. Without the mirrors, it's just broken glass.
The Mirror feature is the reflective surface of Kalei. The Kaleidoscope (Turn) is the active tool. The Mirror is the quiet awareness that makes the tool work.
**Metaphor alignment:**
- You write freely → you're pouring fragments onto the table
- Kalei highlights patterns → the mirror reflects back what you couldn't see
- You tap to reframe → you choose which fragments to Turn
- The session becomes a Reflection → saved to your Gallery with its own pattern
---
## How It Works — User Flow
### Entry Point
The Mirror lives as a **fifth tab** in the app's navigation, or as a secondary action within the Turn tab. Two options:
**Option A — Dedicated tab (recommended):**
| Icon | Label | Function |
|------|-------|----------|
| ◇ | **Turn** | Quick structured reframe |
| ✦ | **Mirror** | Freeform notebook with AI awareness |
| ◎ | **Lens** | Manifestation Engine |
| ▦ | **Gallery** | History of patterns and reflections |
| ● | **You** | Profile and settings |
**Option B — Nested under Turn:**
Turn tab has two modes: "Quick Turn" (current structured input) and "Open Mirror" (freeform). Toggle at top.
**Recommendation:** Option A. The Mirror is different enough in intent and behavior that it deserves its own space. Users will develop separate habits — quick Turns for specific thoughts, Mirror sessions for processing.
### The Writing Experience
**Visual:** Chat-style interface. The user's messages appear as bubbles or blocks on one side. Clean, minimal, dark background consistent with Kalei's aesthetic. No AI responses appear unprompted — the AI is **listening**, not talking.
**Prompt on empty state:**
> "Start writing. Say whatever's on your mind. I'll listen."
> *Kalei will gently highlight patterns it notices. You decide what to do with them.*
**User writes freely.** They can send multiple messages in sequence, like texting a friend or writing in a stream. No character limits. No structure required. Just write.
### The Highlighting — "Fragment Detection"
As the user writes (or after each message is sent), Kalei's AI analyzes the text for **cognitive distortion patterns** — the same patterns that cognitive behavioral therapy identifies as drivers of negative thinking:
| Distortion | Example | What Kalei detects |
|---|---|---|
| Catastrophizing | "This is going to ruin everything" | Absolutist prediction language |
| Black-and-white thinking | "I always fail at this" | Always/never, all-or-nothing |
| Mind reading | "They probably think I'm an idiot" | Assuming others' thoughts |
| Fortune telling | "This will never get better" | Predicting negative outcomes |
| Personalization | "It's all my fault" | Taking undue responsibility |
| Discounting positives | "That win was just luck" | Minimizing good things |
| Emotional reasoning | "I feel like a failure so I must be one" | Feelings presented as facts |
| Should statements | "I should be further along by now" | Rigid self-imposed rules |
| Labeling | "I'm such a loser" | Identity-level negative labels |
| Overgeneralization | "Nothing ever works out for me" | One event → universal pattern |
**How highlighting appears:**
- Detected phrases get a **subtle underline or soft glow** in a warm amber/gold color — the color of light catching a fragment
- The highlight is gentle, not aggressive. It shouldn't feel like a red pen correcting homework. It should feel like sunlight falling on a piece of glass — drawing attention naturally
- A small **◇ icon** (fragment symbol) appears at the end of the highlighted phrase, indicating this fragment can be Turned
- Highlights appear **after the user finishes a message** (not while typing — that would be intrusive and anxiety-inducing)
**Critical UX principle:** The highlighting must feel like **noticing**, not **judging**. The AI is a mirror, not a critic. The user should feel seen, not corrected. This distinction is scientifically grounded — Bandura (1977) showed that perceived criticism undermines self-efficacy, while neutral observation preserves it. The Mirror builds capability awareness, not self-judgment.
### Tapping a Fragment — Inline Reframing
When the user taps a highlighted fragment:
1. A **mini-card slides up** from below (half-sheet modal, not full screen — user can still see their writing above)
2. The card shows:
- The original fragment, quoted
- The cognitive pattern name in plain language (e.g., "This sounds like catastrophizing — predicting the worst outcome")
> - **1–2 reframed alternatives** — shorter and lighter than a full Turn, designed for quick insight
- A "Full Turn" button if they want to take this fragment into the Kaleidoscope for deeper exploration
- A "Dismiss" option — user can say "I see it, moving on" without reframing
**Example interaction:**
> **User writes:** "Had a terrible meeting today. My manager barely acknowledged my presentation. She probably thinks I'm not cut out for this role. I should just start looking for another job."
> **Kalei highlights:** "She probably thinks I'm not cut out for this role"
> **User taps the highlight. Card appears:**
> **◇ Fragment detected**
> *"She probably thinks I'm not cut out for this role"*
>
> This looks like **mind reading** — assuming someone else's thoughts without evidence.
>
> **A different angle:**
> There are many reasons a manager might seem distracted that have nothing to do with your performance. What you observed was her behavior. What she thinks is something you don't have access to yet — but you could ask.
>
> **[Full Turn ◇]** · **[Dismiss]**
### The AI's Role — Passive, Not Conversational
**This is critical.** The Mirror is NOT a chatbot. The AI does not:
- Respond to every message
- Ask follow-up questions unprompted
- Inject unsolicited advice
- Break the user's flow with interjections
The AI **only** does three things:
1. Highlights fragments (passively, after each message)
2. Provides reframes when the user taps a highlight (on demand)
3. Generates a session summary when the user ends the session (see below)
**Why not make it a chatbot?** Because the whole point is that the user is thinking out loud. Inserting AI responses between every message turns it into a conversation with a bot, which changes the psychology entirely. The user stops introspecting and starts performing. The Mirror should feel like writing in a journal that occasionally catches the light — not like talking to a therapist.
**Exception — The Nudge:** If the user has written 5+ messages with zero taps on any highlights and significant negative patterns are accumulating, Kalei can offer ONE gentle nudge at the end of the stream:
> "I noticed a few fragments in what you wrote. Want to look at them together?"
> **[Show me]** · **[Not now]**
This is the only time the AI initiates. Once per session maximum.
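The nudge rule above is simple enough to sketch. A minimal illustration, assuming hypothetical names and thresholds — the "5+ messages, zero taps" condition is from the spec, but the minimum fragment count that counts as "significant accumulation" is an assumption:

```python
def should_nudge(messages_sent: int, highlights_tapped: int,
                 fragments_found: int, already_nudged: bool) -> bool:
    """Offer the single per-session nudge only when the user has written
    5+ messages, tapped no highlights, and fragments are accumulating."""
    MIN_MESSAGES = 5
    MIN_FRAGMENTS = 3  # assumption: what counts as "significant" accumulation
    if already_nudged:
        return False  # once per session maximum
    return (messages_sent >= MIN_MESSAGES
            and highlights_tapped == 0
            and fragments_found >= MIN_FRAGMENTS)
```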
---
## Session Wrap-Up — The Reflection
When the user signals they're done writing (closes the Mirror, presses a "Done" button, or after a period of inactivity), Kalei generates a **Reflection** — a brief session summary.
**The Reflection includes:**
1. **The Mosaic** — a high-level summary of what the user wrote about (themes, not specifics)
> "Today's Mirror covered: work frustration, self-doubt about career, and a conflict with your manager."
2. **Fragments Found** — count of cognitive patterns detected
> "4 fragments noticed. You explored 2 of them."
3. **Patterns Revealed** — the reframes the user chose to engage with
> "You looked at mind reading and catastrophizing from new angles."
4. **A Generated Pattern** — a unique kaleidoscope visual for this session, saved to the Gallery alongside their Turn patterns. Mirror sessions get their own visual style — perhaps slightly different geometry (softer, more organic) to distinguish them from structured Turns
5. **An optional one-line insight** — the AI's single most important observation from the session
> "You were hardest on yourself about things you haven't confirmed are true."
**The Reflection is saved to the Gallery** as a distinct type: a Mirror Reflection. Users can revisit their sessions, re-read what they wrote, see which fragments they explored, and track how their patterns evolve over time.
---
## Where the Mirror Fits in the User's Journey
The three core features now form a **progression**:
```
THE MIRROR THE KALEIDOSCOPE THE LENS
(Awareness) → (Perspective) → (Direction)
"What am I "How else can I "What am I
feeling?" see this?" building toward?"
Freeform Structured Goal-focused
writing reframing manifestation
Fragments Fragments → Patterns Patterns → Focus
detected revealed applied
```
**The natural user flow:**
1. **Mirror** — User dumps their raw thoughts. AI highlights the fragments they can't see themselves
2. **Turn** — User takes the most charged fragment and gives it a full Turn in the Kaleidoscope, getting deep, multi-angle reframes
3. **Lens** — The insights from reframing inform the user's goals. What they thought was a setback becomes fuel for what they're building toward
Not every session follows this sequence. Some days you just need a quick Turn. Some days you just need to write in the Mirror. Some days you go straight to the Lens. But when a user does flow through all three, that's the **Kalei experience** at its deepest.
---
## Engagement & Retention Mechanics
### Mirror Streaks
Track separately from Turn streaks:
- "You've written in the Mirror 5 days in a row"
- Mirror sessions tend to be longer and more personal → higher engagement signal
**Science note:** Wood et al. (2021) found that context stability is the single biggest predictor of habit formation. Mirror streaks should track not just frequency but context consistency — "You've written in the Mirror at roughly the same time for 14 days" is a stronger habit signal than "14 sessions total across random times."
### Fragment Tracking Over Time
The Gallery can show **fragment patterns over time**:
- "This month, your most common fragment type was **should statements**"
- "You've reduced catastrophizing by 40% compared to last month"
- A visual "spectrum" chart showing which cognitive distortions appear most frequently
This turns the Mirror from a journal into a **self-awareness engine**. Users can literally see their thinking patterns change over time.
**Science note:** This longitudinal tracking implements Stevens & Bavelier's (2012) finding that attention training transfers and compounds. The fragment density decline over time is a measurable proxy for improved attentional self-awareness — the user is literally catching patterns earlier and more often because their attentional system has been retrained.
### Mirror Prompts
For days when the user opens the Mirror but doesn't know what to write:
- "What happened today that you're still thinking about?"
- "What would you say to a friend if they were feeling what you're feeling?"
- "What's one thing you're avoiding thinking about?"
- "Describe your mood in a sentence. Then ask yourself why."
These are optional, dismissible, and only shown on empty-state.
---
## Monetization Placement
| Feature | Free (Kalei) | Premium (Kalei Prism) |
|---|---|---|
| Mirror access | 2 sessions/week | Unlimited |
| Fragment highlighting | Basic (3 distortion types) | Full spectrum (all 10 types) |
| Inline reframes | 1 per session | Unlimited |
| Session Reflections | Summary only | Full Reflection with insight |
| Fragment tracking over time | ✗ | ✓ |
| Export Mirror sessions | ✗ | ✓ |
The free tier gives enough Mirror access to experience the value. The paywall hits at the point where the user wants **depth and consistency** — which is exactly when they're most likely to convert.
---
## Technical Implementation Notes
### AI Processing Pipeline
Each message the user sends in the Mirror triggers a lightweight AI analysis:
```
User message → Claude API call → Returns:
{
"fragments": [
{
"text": "She probably thinks I'm not cut out for this role",
"start_index": 89,
"end_index": 143,
"distortion_type": "mind_reading",
"distortion_label": "Mind reading",
"distortion_description": "Assuming someone else's thoughts without evidence",
"confidence": 0.87
}
]
}
```
**Confidence threshold:** Only highlight fragments with confidence > 0.75 to avoid false positives. A false positive (highlighting something that isn't actually distorted thinking) would erode trust quickly.
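Applying the threshold to the analysis payload sketched above is a one-line filter. A minimal sketch (the payload shape follows the JSON example; `filter_fragments` is a hypothetical helper name):

```python
CONFIDENCE_THRESHOLD = 0.75  # from the spec: below this, don't highlight

def filter_fragments(analysis: dict) -> list[dict]:
    """Keep only fragments confident enough to highlight.
    `analysis` is the JSON payload returned by the analysis call."""
    return [f for f in analysis.get("fragments", [])
            if f.get("confidence", 0.0) > CONFIDENCE_THRESHOLD]
```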
**Latency:** Analysis should complete within 1–2 seconds after a message is sent. Highlights appear with a subtle fade-in animation — fragments "catching the light."
### Reframe Generation (On Tap)
When user taps a highlighted fragment, a second API call generates the inline reframe:
```
Input: fragment text + surrounding context + distortion type
Output: {
"distortion_explanation": "Plain language explanation",
"reframe": "1-2 sentence alternative perspective",
"full_turn_prompt": "Pre-filled prompt for Kaleidoscope if user wants deeper exploration"
}
```
### Data Storage
Mirror sessions stored in Supabase:
```sql
-- Mirror sessions table
CREATE TABLE mirror_sessions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
started_at TIMESTAMPTZ DEFAULT NOW(),
ended_at TIMESTAMPTZ,
reflection_summary TEXT,
reflection_insight TEXT,
pattern_seed TEXT, -- for generating the visual pattern
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Mirror messages table
CREATE TABLE mirror_messages (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
session_id UUID REFERENCES mirror_sessions(id),
content TEXT NOT NULL,
sequence_order INTEGER,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Detected fragments within messages
CREATE TABLE mirror_fragments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
message_id UUID REFERENCES mirror_messages(id),
fragment_text TEXT NOT NULL,
start_index INTEGER,
end_index INTEGER,
distortion_type VARCHAR(50),
confidence FLOAT,
was_tapped BOOLEAN DEFAULT FALSE,
was_reframed BOOLEAN DEFAULT FALSE,
reframe_text TEXT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
```
### Privacy & Sensitivity
Mirror content is the most personal data in the app. Requirements:
- **End-to-end encryption** for all Mirror content at rest
- **No Mirror content used for model training** — explicit policy
- **Local-first option** (future): Allow users to keep Mirror data on-device only
- **Easy deletion**: User can delete any session or all Mirror data
- **Content safety**: If AI detects crisis language (self-harm, suicidal ideation), surface crisis resources immediately — not as a highlight/reframe, but as a dedicated intervention with hotline numbers and a warm handoff message
---
## Updated App Structure
With the Mirror, Kalei now has three pillars that map to a complete mental model:
| Pillar | Feature | Metaphor | User need | Interaction style |
|--------|---------|----------|-----------|-------------------|
| **Awareness** | The Mirror | Reflective surface that shows you your fragments | "I need to process" | Freeform writing, passive AI |
| **Perspective** | The Kaleidoscope | The turn that reveals patterns in fragments | "I need to see this differently" | Structured input → output |
| **Direction** | The Lens | Focused vision toward what you're building | "I need to move forward" | Goal setting, affirmations, tracking |
**Together:** You become aware of your patterns (Mirror), you learn to see them differently (Kaleidoscope), and you channel that clarity into what you're building (Lens).
**Tagline still holds:** *Same pieces. New angle.*
**Elevator pitch (updated):**
> "Kalei is a kaleidoscope for your mind. Write freely in the Mirror and Kalei gently highlights the negative thinking patterns you can't see yourself. Take any thought into the Kaleidoscope and see it from entirely new angles. Then focus your clarity through the Lens toward the goals that matter to you. Same pieces. New angle. That's Kalei."
---
*The mirror doesn't tell you what to see. It shows you what's already there.*
# The Spectrum — Kalei v1
## Scientific Foundation
The Spectrum is where multiple research pillars converge into a single intelligence layer.
**Expectation Effects (Stetler 2014, Pardo-Cabello et al. 2022):** The Turn Impact component ("Before & After") is a deliberate evidence engine. Stetler demonstrated that consistent adherence to a process reinforces positive expectations, which in turn improve outcomes — a documented feedback loop. By showing users concrete proof that reframing measurably shifts their subsequent emotional state, the Spectrum accelerates this cycle. The transparency doesn't weaken the effect; it strengthens it.
**Habit Formation (Wood & Neal 2007, Wood et al. 2021):** Rhythm Detection and streak mechanics implement Wood's finding that ~43% of daily behavior is habitual and that context stability predicts habit formation. The Spectrum tracks context patterns (time-of-day rhythms, weekly cycles) to help users understand and leverage their own behavioral patterns.
**Selective Attention (Yantis 2008, Stevens & Bavelier 2012):** Fragment Pattern tracking ("Your Glass") operates the Mirror's attentional principle at a longitudinal scale. Instead of highlighting individual distortions in real-time, it reveals macro-patterns: which cognitive biases dominate, how they shift over time, and which respond most to reframing. This is attentional self-knowledge — seeing your own perceptual filters from the outside.
**Self-Efficacy (Bandura 1977):** The Growth Trajectory ("The Long View") directly implements Bandura's most potent self-efficacy source: mastery experiences. By tracking fragment density decline, self-correction rate, and reframe adoption, the Spectrum provides concrete evidence of growing capability — "You are getting better at this" backed by data, not platitudes.
---
## Emotional Intelligence, Not Mood Tracking
Every wellness app asks you to rate your mood on a scale. Tap a smiley face. Drag a slider. It's self-reported, inaccurate, and most people stop doing it after two weeks because it feels like homework.
Kalei doesn't need to ask how you feel. **It already knows.**
Over time, users accumulate weeks or months of Mirror sessions, Turns, and Lens activity. Every word they've written, every fragment detected, every pattern revealed, every reframe they saved or dismissed — it's all data. Rich, personal, longitudinal emotional data that the user generated naturally while using features they already love.
The Spectrum turns that data into **self-knowledge**.
---
## Why "The Spectrum"
Light enters a prism and exits as a spectrum — the full range of colors that were always present but invisible to the naked eye. The Spectrum takes the raw light of your daily Kalei usage and separates it into its component colors so you can see what's really going on inside.
It also completes the optical metaphor system:
| Feature | Optical element | What it does |
|---------|----------------|--------------|
| The Mirror | Mirror | Reflects your thoughts back to you |
| The Kaleidoscope | Kaleidoscope | Rearranges fragments into patterns |
| The Lens | Lens | Focuses your vision on what's ahead |
| The Spectrum | Prism | Reveals the full range of what you're feeling |
---
## What The Spectrum Shows
### 1. The Emotional Landscape
A visual representation of your emotional state over time — not from self-reporting, but from **AI analysis of your Mirror sessions, Turns, and Lens check-ins.**
**How it works:**
Every Mirror message and Turn input is analyzed for emotional signatures across multiple dimensions:
- **Valence:** Positive ↔ Negative
- **Arousal:** Calm ↔ Activated
- **Certainty:** Confident ↔ Uncertain
- **Agency:** In control ↔ Helpless
- **Social orientation:** Connected ↔ Isolated
- **Temporal focus:** Past-dwelling ↔ Present ↔ Future-focused
These dimensions are plotted over time as a **flowing gradient visualization** — not a line chart, but a river of color that shifts and blends. Warm colors for activated states, cool for calm, dark for negative, bright for positive. The result looks like light passing through a prism: your emotional spectrum, laid out across days and weeks.
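The exact dimension-to-color mapping is a design decision, but the logic described here — warm hues for activated states, cool for calm, dark for negative, bright for positive — can be sketched with a standard HSL conversion. A hedged illustration (the hue and lightness ranges are assumptions, not final design values):

```python
import colorsys

def spectrum_color(valence: float, arousal: float) -> str:
    """Map one session's (valence, arousal) pair, each in -1..1, to a hex
    color: warm hues for activated, cool for calm; darker for negative
    valence, brighter for positive. Ranges are illustrative."""
    # arousal -1..1 -> hue from 0.55 (cool blue) down to 0.0 (warm red)
    hue = 0.55 * (1 - (arousal + 1) / 2)
    # valence -1..1 -> lightness from 0.2 (dark) up to 0.7 (bright)
    lightness = 0.2 + 0.5 * (valence + 1) / 2
    r, g, b = colorsys.hls_to_rgb(hue, lightness, 0.8)
    return "#{:02x}{:02x}{:02x}".format(int(r * 255), int(g * 255), int(b * 255))
```

Rendering each session as one such color, then blending adjacent sessions, yields the "river of color" described above.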
**The user sees:**
- The overall color/tone of their week at a glance
- Shifts and transitions (Tuesday was dark and activated → Wednesday calmed down after a Turn)
- Long-term trends (past month trending brighter, or a slow slide they hadn't noticed)
**What they don't see:** Numbers, scores, or ratings. The Spectrum is visual and intuitive, not clinical. You look at it and *feel* whether things are moving in the right direction.
### 2. Fragment Patterns — "Your Glass"
A breakdown of which cognitive distortion types appear most frequently in the user's writing.
**Visualization:** A faceted gem or crystal with different faces representing different distortion types. The larger the face, the more frequently that pattern appears. The gem evolves over time as patterns shift.
**Insights delivered in plain language:**
> "This month, **should statements** made up 34% of your fragments — up from 22% last month. You're putting more pressure on yourself than usual."
> "**Mind reading** dropped significantly since you started Turning those fragments. You assumed others' thoughts 8 times in January, only twice in February."
> "Your top 3 fragment types this month: catastrophizing, discounting positives, and black-and-white thinking."
**Why this matters:** Most people have 2-3 dominant cognitive distortions they don't know about. Seeing them named and tracked over time is genuinely transformative — it's the kind of insight you'd normally get after months of therapy.
### 3. Turn Impact — "Before & After"
Tracks the measurable effect of reframing on subsequent emotional state.
**How it works:**
The AI compares the emotional tone of Mirror sessions **before and after** a Turn:
- User writes in Mirror (frustrated, catastrophizing)
- User takes a fragment to the Kaleidoscope
- User writes in Mirror again later that day or the next day
- The Spectrum measures the shift
**What the user sees:**
> "After Turning a fragment, your next Mirror session is 62% more likely to show increased agency and reduced catastrophizing."
> "Your most impactful Turn this month was on Feb 3 — the shift in your writing afterward was significant."
> "Turns on work-related fragments have the strongest positive effect for you. Relationship fragments take 2-3 Turns before the shift shows up."
This is the **evidence engine** for Kalei's core thesis: that changing the angle actually changes how you feel. Users can see the proof in their own data.
**Science note:** This directly implements Stetler's (2014) adherence-expectation model. When users see measurable shifts in their own emotional data after Turns, it reinforces the expectation that reframing works — which increases future engagement and actual benefit. Pardo-Cabello et al. (2022) confirmed that the quality of the therapeutic relationship (or in Kalei's case, the app-user relationship) is the strongest predictor of whether expectation effects materialize. The Spectrum builds that trust through evidence.
### 4. Rhythm Detection — "Your Cycles"
Identifies recurring emotional patterns tied to time.
**Weekly rhythms:**
> "Your Mirror sessions on Mondays contain 3x more should statements than any other day."
> "Fridays tend to be your most positive writing days."
**Monthly rhythms:**
> "The last week of each month shows elevated anxiety patterns — possibly tied to deadlines or financial cycles."
**Event correlation (Lens integration):**
> "When you check in with your Lens goals in the morning, your afternoon Mirror sessions show 40% fewer negative fragments."
**Contextual patterns:**
> "After writing about [work] topics, catastrophizing spikes. After writing about [relationships], personalization is more common."
The user starts to see their emotional life as a **landscape with terrain** rather than random weather. Some hills are always there. Some valleys are seasonal. That awareness alone is a superpower.
**Science note:** Rhythm Detection operationalizes Wood et al.'s (2021) finding that habits are triggered by context cues. By revealing temporal patterns ("Mondays are heavy on should-statements"), the Spectrum helps users anticipate and prepare for predictable emotional terrain — turning reactive coping into proactive awareness.
### 5. Growth Trajectory — "The Long View"
The headline metric: **how is this person's relationship with their own thinking changing over time?**
**Tracked indicators:**
- Fragment density: How many distortions per 100 words in Mirror sessions (trending down = growth)
- Self-correction rate: How often the user identifies their own fragments before Kalei highlights them (measured by editing/deleting mid-message)
- Reframe adoption: How often saved patterns from Turns echo in subsequent Mirror writing (user naturally using new perspectives)
- Distortion diversity: Whether the user is getting stuck on one pattern or successfully addressing multiple types
- Turn-to-insight ratio: How many Turns result in a saved keepsake vs. dismissed patterns
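The headline indicator, fragment density, is a straightforward ratio. A minimal sketch (function names are hypothetical; "trending down = growth" per the list above):

```python
def fragment_density(fragment_count: int, word_count: int) -> float:
    """Fragments per 100 words of Mirror writing."""
    if word_count == 0:
        return 0.0
    return 100.0 * fragment_count / word_count

def density_trend(weekly: list[tuple[int, int]]) -> list[float]:
    """Density per week from (fragment_count, word_count) pairs;
    a declining sequence reads as growth."""
    return [round(fragment_density(f, w), 2) for f, w in weekly]
```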
**Visualization:** A single, evolving kaleidoscope pattern that represents your overall growth. The more you use Kalei, the more complex, colorful, and beautiful the pattern becomes. At month 1, it might be simple and muted. At month 6, it's intricate and vivid.
This becomes the **centerpiece of the Spectrum dashboard** — your personal growth, visualized as a living kaleidoscope pattern.
**Science note:** Every tracked indicator maps to Bandura's (1977) self-efficacy sources. Fragment density decline and self-correction rate are mastery experiences (the strongest efficacy source). Reframe adoption shows vicarious learning internalized. The evolving pattern visualization provides a visceral, non-numerical representation of growing capability — "I am getting better at this" made visible.
**Milestone moments:**
> "Your fragment density has dropped 30% since you started. You're catching your own patterns now."
> "This week, you naturally reframed a catastrophizing thought in your Mirror session without needing a Turn. That's new."
> "You've explored all 10 fragment types. You're seeing the full spectrum."
---
## The Spectrum Dashboard — Layout
### Top Section: The River
Your emotional landscape as a flowing color gradient. Swipe horizontally to scroll through time. Tap any point to see the Mirror session or Turn from that day.
### Middle Section: Your Glass
The faceted gem visualization showing fragment type distribution. Toggle between "This week," "This month," "All time." Tap any facet for the distortion deep-dive.
### Bottom Section: Insights Feed
A scrollable feed of AI-generated insights, refreshed weekly. Each insight is a card with:
- A one-line observation
- Supporting data (subtle, not overwhelming)
- An action suggestion when relevant
### Floating Element: Your Pattern
Your evolving kaleidoscope pattern, accessible from the top corner. Tap to expand full-screen. Shareable as a "growth snapshot."
---
## When Insights Are Delivered
The Spectrum doesn't bombard users with data. Insights surface at natural moments:
### Weekly Reflection (Push Notification)
Every Sunday evening (or user-configured day):
> "Your Spectrum updated. See what this week's light revealed. 🔮"
Opens to a **Weekly Spectrum Summary:**
- Dominant emotional color this week
- Top fragment type
- Most impactful Turn
- One insight
- The week's addition to your evolving pattern
### Monthly Deep Dive
First of each month:
> "January's Spectrum is ready. See how your light shifted."
A richer summary with month-over-month comparisons, rhythm detection insights, and growth trajectory updates.
### In-Context Nudges
Subtle, non-intrusive insights surfaced within other features:
- In the Mirror: "You've used the phrase 'I should' 4 times this session. That's a pattern worth noticing."
- After a Turn: "This is the 3rd time you've Turned a work-related fragment this week. The Spectrum can show you more about this pattern."
- In the Lens: "Your Lens focus on [career growth] aligns with the fragments you've been processing. You're working on the right things."
---
## Monetization
The Spectrum is a **Kalei Prism exclusive feature**. It's the single strongest reason to upgrade.
**Free tier gets:**
- A simplified weekly emotional summary (1 insight, no visualizations)
- Fragment type counts (basic numbers only)
- A teaser of what the full Spectrum shows: "Upgrade to see your full Spectrum"
**Prism tier gets:**
- Full Spectrum dashboard with all 5 sections
- Weekly and monthly deep dives
- Growth trajectory and evolving pattern
- Rhythm detection
- Turn impact analysis
- Export and sharing of Spectrum snapshots
**Upgrade CTA:**
> "You've written 47 Mirror sessions and completed 23 Turns. There's a story in that data. See your full Spectrum."
This is a natural paywall because the Spectrum **requires usage history to be valuable.** By the time a user has enough data for the Spectrum to be meaningful, they've already experienced Kalei's value through the free tier and are primed to convert.
---
## Privacy Architecture
The Spectrum analyzes deeply personal data. Trust is non-negotiable.
### Principles
1. **All analysis happens on aggregated patterns, never exposed raw content.** The Spectrum shows "your catastrophizing increased this week" — it never shows "you wrote 'my life is falling apart' on Tuesday"
2. **No Spectrum data leaves the user's account.** Not for model training, not for anonymized research, not for anything
3. **Users control the window.** They can exclude any Mirror session or Turn from Spectrum analysis. They can set the Spectrum to only analyze the last 30/60/90 days
4. **Full deletion.** "Reset my Spectrum" erases all analyzed data and starts fresh
5. **Transparency.** A "How this works" section explains exactly what the AI analyzes and what it doesn't
### Data Processing
Spectrum analysis runs as a **background job**, not in real-time:
- After each Mirror session ends, emotional dimensions are computed and stored as numerical vectors — not raw text
- Fragment types are already captured during Mirror sessions
- Weekly aggregation job runs to compute trends, rhythms, and insights
- The Spectrum dashboard reads from aggregated data only
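The weekly aggregation step can be sketched as a fold over per-session analysis rows. A hedged illustration assuming the field names from the schema in this section — the weekly insight text itself would come from a separate AI call, not this job:

```python
from collections import Counter

def aggregate_week(rows: list[dict]) -> dict:
    """Fold per-session analysis rows (spectrum_session_analysis-shaped)
    into one spectrum_weekly-shaped record."""
    if not rows:
        return {}
    n = len(rows)
    distortions = Counter(r["dominant_distortion"] for r in rows
                          if r.get("dominant_distortion"))
    total_fragments = sum(r["fragment_count"] for r in rows)
    total_words = sum(r["word_count"] for r in rows)
    return {
        "avg_valence": sum(r["valence"] for r in rows) / n,
        "avg_arousal": sum(r["arousal"] for r in rows) / n,
        "avg_agency": sum(r["agency"] for r in rows) / n,
        "total_fragments": total_fragments,
        "total_mirror_sessions": n,
        "dominant_distortion": distortions.most_common(1)[0][0] if distortions else None,
        "distortion_distribution": dict(distortions),
        "fragment_density": 100.0 * total_fragments / total_words if total_words else 0.0,
    }
```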
```sql
-- Emotional analysis per Mirror session
CREATE TABLE spectrum_session_analysis (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
session_id UUID REFERENCES mirror_sessions(id),
session_date DATE NOT NULL,
valence FLOAT, -- -1 (negative) to 1 (positive)
arousal FLOAT, -- -1 (calm) to 1 (activated)
certainty FLOAT, -- -1 (uncertain) to 1 (confident)
agency FLOAT, -- -1 (helpless) to 1 (in control)
social_orientation FLOAT, -- -1 (isolated) to 1 (connected)
temporal_focus FLOAT, -- -1 (past) to 0 (present) to 1 (future)
fragment_count INTEGER,
word_count INTEGER,
dominant_distortion VARCHAR(50),
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Emotional analysis per Turn
CREATE TABLE spectrum_turn_analysis (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
turn_id UUID REFERENCES turns(id),
turn_date DATE NOT NULL,
pre_valence FLOAT, -- emotional state of input
post_valence FLOAT, -- emotional state after reframe engagement
distortion_type VARCHAR(50),
reframe_saved BOOLEAN,
topic_cluster VARCHAR(100),
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Weekly aggregated insights
CREATE TABLE spectrum_weekly (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
week_start DATE NOT NULL,
avg_valence FLOAT,
avg_arousal FLOAT,
avg_agency FLOAT,
total_fragments INTEGER,
total_turns INTEGER,
total_mirror_sessions INTEGER,
dominant_distortion VARCHAR(50),
distortion_distribution JSONB, -- {"catastrophizing": 5, "mind_reading": 3, ...}
fragment_density FLOAT, -- fragments per 100 words
turn_impact_score FLOAT, -- measured shift after turns
insight_text TEXT, -- AI-generated weekly insight
created_at TIMESTAMPTZ DEFAULT NOW(),
UNIQUE(user_id, week_start)
);
-- Monthly deep dive
CREATE TABLE spectrum_monthly (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
month_start DATE NOT NULL,
growth_score FLOAT, -- composite improvement metric
rhythm_insights JSONB, -- detected patterns tied to time
month_over_month_delta JSONB, -- comparison with previous month
top_fragment_types JSONB, -- ranked list
most_impactful_turn UUID, -- references turn with biggest shift
pattern_complexity_score FLOAT, -- drives evolving visual pattern
narrative_summary TEXT, -- AI-generated monthly narrative
created_at TIMESTAMPTZ DEFAULT NOW(),
UNIQUE(user_id, month_start)
);
```
---
## Notification Copy — Spectrum Voice
The Spectrum speaks with slightly more authority than other Kalei features — it has data behind it. But still warm, still poetic.
### Weekly
- "Your Spectrum shifted this week. Come see the colors."
- "7 days of fragments and patterns. Here's what the light reveals."
- "This week had a rhythm. The Spectrum caught it."
### Monthly
- "A month of Turns. Your Spectrum has a story to tell."
- "January's light, separated into its colors. Your monthly Spectrum is ready."
### Milestone Insights
- "First month of Spectrum data. Your baseline is set — now watch it evolve."
- "Your fragment density dropped below 5 per 100 words for the first time. You're catching yourself."
- "3 consecutive weeks of increasing agency in your writing. Something shifted."
- "You haven't catastrophized in 12 days. That's your longest streak."
### Growth Pattern Evolution
- "Your pattern grew a new layer this month. Tap to see it."
- "Remember your first pattern? Compare it to today's. Look how far you've come."
---
## The Spectrum as Retention Engine
The Spectrum solves the biggest problem in wellness apps: **the drop-off after the initial novelty fades.**
**Week 1-2:** Users are engaged with the Mirror and Kaleidoscope. Everything is new.
**Week 3-4:** Novelty fades. This is where most wellness apps lose people.
**With the Spectrum (timed with early user data accumulation):**
- "Your first Spectrum is ready" re-engages users with a new reason to open the app
- The evolving pattern creates **collection psychology** — users want to see it grow
- Weekly insights create a **recurring appointment** with the app
- Growth trajectory shows **concrete progress** — "this is working" evidence
- Fragment tracking creates **self-competition** — users try to beat their own patterns
- Monthly deep dives become **anticipated events** — not notifications to dismiss
The Spectrum turns Kalei from a tool you use when you feel bad into a **dashboard you check because you're curious about yourself.** That's the difference between reactive usage (declining) and proactive usage (compounding).
**Science note:** This retention mechanic is grounded in both habit formation and expectation effects. Wood et al. (2021) showed that shifting behavior from goal-directed (conscious, effortful) to habitual (automatic, context-triggered) is what sustains long-term change. Stetler (2014) showed that consistent engagement reinforces positive expectations. The Spectrum provides the evidence that keeps both loops spinning.
---
## Spectrum Rollout Sequence
### Pre-Launch (2 weeks before)
Notification to existing users:
> "You've completed [X] Turns and [Y] Mirror sessions. Something new is coming that turns all of that into self-knowledge. Stay tuned."
### Launch Day
> "The Spectrum is here. Every Turn you've taken, every fragment you've noticed — it all means something. See your full emotional landscape for the first time."
On first open after launch, the app performs a dramatic reveal of the user's personal Spectrum: the river visualization populating with their historical data, the gem forming its facets, the evolving pattern appearing.
This should be a **wow moment.** The user's own emotional history, visualized beautifully for the first time. Data they generated without thinking about it, now reflecting back as genuine self-knowledge.
### Post-Launch (ongoing)
Weekly and monthly cadence takes over. The Spectrum becomes a background engine that surfaces insights at the right moments and gives users a reason to maintain their Mirror and Turn habits.
---
## Updated Feature Map — Full Kalei Ecosystem
```
PHASE 1 PHASE 2
───────────────────────────────── ──────────────────────
THE MIRROR (Awareness) ──→ feeds data to ──→ THE SPECTRUM
Write freely (Intelligence)
AI highlights fragments See your patterns
Inline reframes Track growth
Discover rhythms
THE KALEIDOSCOPE (Perspective) ──→ feeds data to ──→ Measure impact
Structured reframing Evolving visual
Fragment → Patterns
Save keepsakes
THE LENS (Direction) ──→ informed by ──→
Goal setting
Daily affirmations
Vision tracking
◇ ◇
Kalei Free Kalei Prism
3 Turns/day Unlimited everything
2 Mirror/week + Full Spectrum
Basic Lens + Weekly/monthly insights
+ Growth trajectory
+ Fragment analytics
```
---
*White light looks simple. The Spectrum shows you everything it's made of.*

# Kalei — AI Model Selection: Unbiased Analysis
## The Question
Which AI model should power a mental wellness app that needs to detect emotional fragments, generate empathetic perspective reframes, produce personalized affirmations, detect crisis signals, and analyze behavioral patterns over time?
---
## What Kalei Actually Needs From Its AI
| Task | Quality Bar | Frequency | Latency Tolerance |
|------|------------|-----------|-------------------|
| **Mirror** — detect emotional fragments in freeform writing | High empathy + precision | 2-7x/week per user | 2-3s acceptable |
| **Kaleidoscope** — generate 3 perspective reframes | Highest — this IS the product | 3-10x/day per user | 2-3s acceptable |
| **Lens** — daily affirmation generation | Medium — structured output | 1x/day per user | 5s acceptable |
| **Crisis Detection** — flag self-harm/distress signals | Critical safety — zero false negatives | Every interaction | <1s preferred |
| **Spectrum** — weekly/monthly pattern analysis | High analytical depth | 1x/week batch | Minutes acceptable |
The Kaleidoscope reframes are the core product experience. If they feel generic, robotic, or tone-deaf, users churn. This is the task where model quality matters most.
---
## Venice.ai API — What You Get
Since you already have Venice Pro ($10 one-time API credit), here are the relevant models and their pricing:
### Best Venice Models for Kalei
| Model | Input/MTok | Output/MTok | Cache Read | Context | Privacy | Notes |
|-------|-----------|------------|------------|---------|---------|-------|
| **DeepSeek V3.2** | $0.40 | $1.00 | $0.20 | 164K | Private | Strongest general model on Venice |
| **Qwen3 235B A22B** | $0.15 | $0.75 | — | 131K | Private | Best price-to-quality ratio |
| **Llama 3.3 70B** | $0.70 | $2.80 | — | 131K | Private | Meta's flagship open model |
| **Gemma 3 27B** | $0.12 | $0.20 | — | 203K | Private | Ultra-cheap, Google's open model |
| **Venice Small (Qwen3 4B)** | $0.05 | $0.15 | — | 33K | Private | Affirmation-tier only |
### Venice Advantages
- **Privacy-first architecture** — no data retention, critical for mental health
- **OpenAI-compatible API** — trivial to swap in/out, same SDK
- **Prompt caching** on select models (DeepSeek V3.2 confirmed)
- **You already pay for Pro** — $10 free API credit to test
- **No minimum commitment** — pure pay-per-use
### Venice Limitations
- **No batch API** — can't get 50% off for Spectrum overnight processing
- **"Uncensored" default posture** — Venice optimizes for no guardrails, which is the OPPOSITE of what a mental health app needs. We must disable Venice system prompts and provide our own safety layer
- **No equivalent to Anthropic's constitutional AI** — crisis detection safety net is entirely on us
- **Smaller infrastructure** — less battle-tested at scale than Anthropic/OpenAI
- **Rate limits not publicly documented** — could be a problem at scale
---
## Head-to-Head: Venice Models vs Claude Haiku 4.5
### Cost Per User Per Month
Calculated using our established usage model: Free user = 3 Turns/day, 2 Mirror/week, daily Lens.
| Model (via) | Free User/mo | Prism User/mo | vs Claude Haiku |
|-------------|-------------|--------------|-----------------|
| **Claude Haiku 4.5** (Anthropic) | $0.31 | $0.63 | baseline |
| **DeepSeek V3.2** (Venice) | ~$0.07 | ~$0.15 | **78% cheaper** |
| **Qwen3 235B** (Venice) | ~$0.05 | ~$0.10 | **84% cheaper** |
| **Llama 3.3 70B** (Venice) | ~$0.16 | ~$0.33 | **48% cheaper** |
| **Gemma 3 27B** (Venice) | ~$0.02 | ~$0.04 | **94% cheaper** |
The cost difference is massive. At 200 DAU (traction), monthly AI cost drops from ~$50 to ~$10-15.
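These per-user figures can be reproduced with a small cost model. Prices are $/MTok from the tables above; the token counts per request below are illustrative assumptions, not measured values:

```typescript
interface Price { inPerMTok: number; outPerMTok: number; }

// Dollar cost of one request at a given price point.
function requestCost(p: Price, inTok: number, outTok: number): number {
  return (inTok * p.inPerMTok + outTok * p.outPerMTok) / 1_000_000;
}

// Free-tier month: 3 Turns/day, 2 Mirror/week, 1 Lens/day over ~30 days.
// Token counts per request are placeholder assumptions.
function freeUserMonthlyCost(
  p: Price,
  tok = { turn: [800, 500], mirror: [1200, 400], lens: [300, 150] },
): number {
  const turns  = 3 * 30 * requestCost(p, tok.turn[0], tok.turn[1]);
  const mirror = 2 * 4  * requestCost(p, tok.mirror[0], tok.mirror[1]);
  const lens   = 1 * 30 * requestCost(p, tok.lens[0], tok.lens[1]);
  return turns + mirror + lens;
}
```

Swapping in each model's prices reproduces the relative ordering in the table, even if the absolute numbers shift with real token counts.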
### Quality Comparison for Emotional Tasks
This is the critical question. Here's what the research and benchmarks tell us:
**Emotional Intelligence (EI) Benchmarks:**
- A 2025 Nature study tested LLMs on 5 standard EI tests. GPT-4, Claude 3.5 Haiku, and DeepSeek V3 all outperformed humans (81% avg vs 56% human avg)
- GPT-4 scored highest with a Z-score of 4.26 on the LEAS emotional awareness scale
- Claude models are specifically noted for "endless empathy" — excellent for therapeutic contexts but with dependency risk
- A blinded study found AI-generated psychological advice was rated MORE empathetic than human expert advice
**Model-Specific Emotional Qualities:**
| Model | Empathy Quality | Tone Consistency | Creative Reframing | Safety/Guardrails |
|-------|----------------|-----------------|-------------------|-------------------|
| Claude Haiku 4.5 | ★★★★☆ | ★★★★★ | ★★★★☆ | ★★★★★ |
| DeepSeek V3.2 | ★★★☆☆ | ★★★★☆ | ★★★☆☆ | ★★☆☆☆ |
| Qwen3 235B | ★★★★☆ | ★★★★☆ | ★★★☆☆ | ★★☆☆☆ |
| Llama 3.3 70B | ★★★☆☆ | ★★★☆☆ | ★★★☆☆ | ★★★☆☆ |
| Gemma 3 27B | ★★☆☆☆ | ★★★☆☆ | ★★☆☆☆ | ★★★☆☆ |
**Key findings:**
- DeepSeek V3.2 is described as "slightly more mechanical in tone" with "repetition in phrasing" — problematic for daily therapeutic interactions
- Qwen3 is praised for "coherent extended conversations" and "tone consistency over long interactions" — actually quite good for our use case
- Llama 3.3 is solid but unremarkable for emotional tasks
- Gemma 3 27B is too small for the nuance we need in Mirror and Kaleidoscope
- Claude's constitutional AI training makes crisis detection significantly more reliable out-of-the-box
---
## The Final Decision (Updated February 2026)
After evaluating all options including Venice, Claude-first, and various hybrid strategies, the decision is:
### ★ Chosen: DeepSeek V3.2 via OpenRouter + Non-Chinese Providers
**Primary:** DeepSeek V3.2 routed through DeepInfra/Fireworks (US/EU infrastructure) via OpenRouter
**Fallback:** Claude Haiku 4.5 via OpenRouter (automatic failover on provider outage)
**Single model for all features** — no tiering until 5,000+ DAU justifies the complexity
| | DeepInfra (via OpenRouter) | Claude Haiku 4.5 (fallback) |
|---|---|---|
| Input (cache miss) | $0.26/M | $1.00/M |
| Input (cache hit) | $0.216/M | $0.10/M |
| Output | $0.38/M | $5.00/M |
**Monthly AI cost at 200 DAU: ~$8** (vs $50 with Claude Haiku, vs $12-18 Venice hybrid)
### Why This Beats All Other Options
1. **Data privacy solved** — DeepInfra/Fireworks host on US/EU infrastructure. No data through Chinese servers. Critical for a mental wellness app.
2. **85-90% cheaper than Claude** — per-user AI cost drops from $0.33 to ~$0.034/month (free users).
3. **Automatic failover** — OpenRouter routes to Claude Haiku if DeepInfra goes down. No code changes, no downtime.
4. **No vendor lock-in** — one API key, switch models/providers via config. OpenRouter's API is OpenAI-compatible.
5. **Single model simplicity** — one prompt set to tune, one quality bar to maintain. Solo founder can manage this.
6. **Emotional intelligence validated** — Nature 2025 study shows DeepSeek V3 scores comparably to Claude on standardized EI tests (81% avg vs 56% human avg).
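A hedged sketch of the failover call: OpenRouter's endpoint is OpenAI-compatible and accepts a `models` array for fallback routing. The model slugs below are assumptions and should be checked against OpenRouter's current model list before use.

```typescript
// Build the chat-completions body with an assumed fallback order.
function buildTurnRequest(systemPrompt: string, fragment: string) {
  return {
    model: "deepseek/deepseek-v3.2", // assumed slug for DeepSeek V3.2
    models: ["deepseek/deepseek-v3.2", "anthropic/claude-haiku-4.5"], // assumed slugs; fallback order
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: fragment },
    ],
  };
}

async function callOpenRouter(apiKey: string, body: unknown) {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`OpenRouter error ${res.status}`);
  return res.json();
}
```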
### Why Not Venice, Groq, or Direct DeepSeek
- **Venice:** No batch API, "uncensored" default posture requires extra safety work, rate limits undocumented, smaller infrastructure.
- **Groq:** Great speed but limited model selection. Useful as a future tier for structured generation at 5,000+ DAU.
- **DeepSeek Direct API:** Cheapest option ($0.028 cache hits) but routes all data through Chinese servers. Non-starter for mental health data.
- **Tiered hybrid (Option D):** Saves ~$30-50/month over single-model approach but adds 4 separate prompt configs, routing logic, and quality benchmarks. Not worth the complexity at current scale.
### Safety Layer (Non-Negotiable Regardless of Provider)
```
User input → Keyword crisis detector (local, instant)
→ If flagged: hardcoded crisis response (no LLM needed)
→ If clear: send to OpenRouter with safety-focused system prompt
→ Post-process: scan output for harmful patterns before showing to user
```
We build our own safety layer regardless of provider. This gives us MORE control than relying on any provider's built-in guardrails.
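A minimal sketch of the first stage of that pipeline. The phrase list is a placeholder; a production list needs clinical review and continuous tuning.

```typescript
// Placeholder patterns only: a real list requires clinical review.
const CRISIS_PATTERNS: RegExp[] = [
  /\b(kill|hurt|harm)\s+myself\b/i,
  /\bend\s+(it\s+all|my\s+life)\b/i,
  /\bsuicid(e|al)\b/i,
];

type SafetyVerdict = "crisis" | "ambiguous" | "clear";

// Stage 1: local, zero-latency keyword scan.
function stageOneCheck(text: string): SafetyVerdict {
  if (CRISIS_PATTERNS.some((p) => p.test(text))) return "crisis";
  // Softer signals escalate to an AI confirmation stage instead of blocking.
  if (/\b(hopeless|can't go on|no way out)\b/i.test(text)) return "ambiguous";
  return "clear";
}
```

"crisis" short-circuits to the hardcoded resources response; "ambiguous" goes to the confirmation model; "clear" proceeds to OpenRouter.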
---
## Final Cost Model (OpenRouter + DeepInfra)
| Stage | DAU | AI Cost/mo | Total Infra/mo | Break-even Subscribers |
|-------|-----|-----------|----------------|----------------------|
| Launch (0-500 users) | ~50 | ~$2 | ~$16 | **3 Prism @ $4.99** |
| Traction (500-2K) | ~200 | ~$8 | ~$53 | **11 Prism** |
| Growth (2K-10K) | ~1K | ~$40 | ~$216 | **43 Prism** |
Compared to Claude-first: launch was $26/mo, now $16; growth was $425, now $216. AI drops from 60% of total spend to 19%.
---
## Scaling Roadmap
1. **Launch → 600 DAU:** Single model (DeepSeek V3.2 via OpenRouter/DeepInfra). Focus on prompt quality.
2. **600+ DAU:** Evaluate self-hosted Qwen3-30B-A3B on GPU ($245/month fixed) — cheaper than API at this volume, full data control.
3. **5,000+ DAU:** Introduce tiered model routing if usage data shows certain features benefit from specialized models.
4. **Build the safety layer regardless** — multi-stage crisis filter is a day-one requirement, not a provider feature.

# Kalei — Architecture & User Journey Diagrams
All diagrams below are valid Mermaid syntax. Paste into any Mermaid renderer (mermaid.live, GitHub markdown, VS Code preview) to visualize.
---
## 1. Complete User Journey
Shows the full lifecycle from app discovery through onboarding, daily habit loops, conversion, and long-term retention.
```mermaid
flowchart LR
subgraph Acquisition["Acquisition"]
A1["App Store Discovery"] --> A2["Download App"]
A2 --> A3["Open for First Time"]
end
subgraph Onboarding["Onboarding Flow"]
A3 --> O1["Welcome Screen"]
O1 --> O2["3 Swipeable Intro Cards"]
O2 --> O3["Choose Reframe Style\n(Brutal / Gentle / Logical / Philosophical / Humor)"]
O3 --> O4["Set Daily Check-in Time"]
O4 --> O5["Create Account\n(Email / Google / Apple)"]
O5 --> O6["First Reframe — WOW Moment"]
O6 --> O7["Land on Home Screen"]
end
subgraph DailyLoop["Daily Habit Loop"]
O7 --> D1["Push Notification\n'What's weighing on you?'"]
D1 --> D2{"User Intent"}
D2 -->|"Quick reframe"| T1["The Turn\n(Kaleidoscope)"]
D2 -->|"Need to process"| M1["The Mirror\n(Freeform Writing)"]
D2 -->|"Goal focus"| L1["The Lens\n(Manifestation)"]
D2 -->|"Review history"| G1["The Gallery"]
T1 --> D3["Save Reframe / Keepsake"]
M1 --> D3
L1 --> D3
D3 --> D4["Streak Updated"]
D4 --> D5["Return Tomorrow"]
D5 --> D1
end
subgraph Deepening["Engagement Deepening"]
G1 --> E1["See Thought Patterns Over Time"]
E1 --> E2["Weekly Summary\n(Sunday Push)"]
E2 --> E3["Discover Recurring Fragments"]
E3 --> E4["Motivation to Continue"]
E4 --> D1
end
subgraph Conversion["Free → Premium"]
D4 --> C1{"Hit Free Tier Limit?"}
C1 -->|"Yes"| C2["Soft Paywall\n'Unlock unlimited Turns'"]
C2 --> C3{"Convert?"}
C3 -->|"Yes"| C4["Subscribe to Prism\n($4.99/mo)"]
C3 -->|"Not yet"| D5
C4 --> C5["Full Access Unlocked"]
C5 --> D1
end
subgraph Retention["Long-Term Retention"]
C5 --> R1["Spectrum Dashboard\n(Prism+)"]
R1 --> R2["Monthly AI Insights"]
R2 --> R3["Growth Trajectory"]
R3 --> R4["Deep Self-Knowledge"]
R4 --> D1
end
```
---
## 2. Backend System Architecture
The full backend from client through edge, API services, AI layer, data stores, and external integrations.
```mermaid
flowchart TB
subgraph Client["Client Layer"]
APP["React Native + Expo App"]
TURN_UI["Turn Screen"]
MIRROR_UI["Mirror Screen"]
LENS_UI["Lens Screen"]
GALLERY_UI["Gallery Screen"]
SPECTRUM_UI["Spectrum Dashboard"]
PROFILE_UI["Profile & Settings"]
end
subgraph Edge["Edge Layer"]
CF["Cloudflare\nDNS / CDN / DDoS"]
NGINX["Nginx\nReverse Proxy / SSL / Rate Limit"]
end
subgraph API["API Layer — Modular Monolith (Fastify)"]
GW["API Gateway & Auth\n(JWT + Refresh Rotation)"]
TURN_SVC["Turn Service"]
MIRROR_SVC["Mirror Service"]
LENS_SVC["Lens Service"]
SPECTRUM_SVC["Spectrum Service"]
SAFETY_SVC["Safety Service\n(Crisis Detection)"]
ENT_SVC["Entitlement Service\n(Plan Gating)"]
COST_SVC["Usage Meter &\nCost Guard"]
JOBS["Job Scheduler\n& Workers"]
NOTIF["Notification Service"]
end
subgraph AI["AI Layer (via OpenRouter Gateway)"]
AI_GW["AI Gateway\n(OpenRouter Provider Routing)"]
DEEPSEEK["DeepSeek V3.2\nvia DeepInfra/Fireworks\n(US/EU — Primary)"]
CLAUDE_FALLBACK["Claude Haiku 4.5\n(Automatic Fallback)"]
end
subgraph Data["Data Layer"]
PG["PostgreSQL 16\n(Source of Truth)"]
REDIS["Redis\n(Cache / Rate Limits / Counters)"]
OBJ["Object Storage\n(Spectrum Exports)"]
end
subgraph External["External Services"]
APPLE["Apple App Store\nBilling API"]
GOOGLE["Google Play\nBilling API"]
APNS["APNs"]
FCM["FCM"]
POSTHOG["PostHog\n(Self-Hosted Analytics)"]
GLITCHTIP["GlitchTip\n(Error Tracking)"]
end
APP --> CF --> NGINX --> GW
GW --> TURN_SVC
GW --> MIRROR_SVC
GW --> LENS_SVC
GW --> SPECTRUM_SVC
GW --> ENT_SVC
TURN_SVC --> SAFETY_SVC
MIRROR_SVC --> SAFETY_SVC
LENS_SVC --> SAFETY_SVC
TURN_SVC --> AI_GW
MIRROR_SVC --> AI_GW
LENS_SVC --> AI_GW
SPECTRUM_SVC --> AI_GW
AI_GW --> DEEPSEEK
AI_GW --> CLAUDE_FALLBACK
TURN_SVC --> COST_SVC
MIRROR_SVC --> COST_SVC
LENS_SVC --> COST_SVC
COST_SVC --> REDIS
TURN_SVC --> PG
MIRROR_SVC --> PG
LENS_SVC --> PG
SPECTRUM_SVC --> PG
ENT_SVC --> PG
JOBS --> PG
ENT_SVC --> APPLE
ENT_SVC --> GOOGLE
NOTIF --> APNS
NOTIF --> FCM
GW --> POSTHOG
GW --> GLITCHTIP
```
---
## 3. The Mirror (Awareness) — Sequence Diagram
Complete sequence from session start through writing, fragment detection, inline reframing, and session reflection.
```mermaid
sequenceDiagram
participant U as User
participant App as Mobile App
participant API as Kalei API
participant Safety as Safety Service
participant Ent as Entitlement Service
participant AI as AI Gateway
participant Model as DeepSeek V3.2 via OpenRouter
participant DB as PostgreSQL
participant R as Redis
Note over U,R: Session Start
U->>App: Opens Mirror tab
App->>API: POST /mirror/sessions
API->>Ent: Check plan (free: 2/week, prism: unlimited)
Ent->>R: Read session counter
R-->>Ent: Counter value
Ent-->>API: Allowed / Denied
API->>DB: Create mirror_session row
API-->>App: Session ID + empty state prompt
Note over U,R: Writing & Fragment Detection Loop
U->>App: Writes message freely
App->>API: POST /mirror/messages
API->>Safety: Crisis precheck on text
alt Crisis Detected
Safety->>DB: Log safety_event
API-->>App: Crisis resources (hotlines, warm message)
else Safe Content
API->>AI: Fragment detection prompt + user text
AI->>Model: Inference request (cached system prompt)
Model-->>AI: JSON with fragments + confidence scores
AI-->>API: Validated structured result
API->>DB: Save message + fragments (confidence > 0.75)
API->>R: Increment usage counters
API-->>App: Message with highlighted fragments (amber glow + ◇ icons)
end
Note over U,R: User Taps a Fragment
U->>App: Taps highlighted fragment ◇
App->>API: POST /mirror/fragments/{id}/reframe
API->>AI: Reframe prompt + fragment + surrounding context
AI->>Model: Inference request
Model-->>AI: Reframe + distortion explanation
AI-->>API: Validated reframe response
API->>DB: Update fragment (was_tapped, was_reframed, reframe_text)
API-->>App: Mini-card slides up with reframe
App-->>U: Shows pattern name + alternative angle + Full Turn option
Note over U,R: Session Close & Reflection
U->>App: Presses Done / closes Mirror
App->>API: POST /mirror/sessions/{id}/close
API->>AI: Generate Reflection from all messages + fragments
AI->>Model: Batch summary request
Model-->>AI: Mosaic themes + fragment count + insight
AI-->>API: Reflection payload
API->>DB: Update session with reflection + pattern_seed
API-->>App: Reflection card (Mosaic + fragments found + patterns + one-line insight)
App-->>U: Reflection saved to Gallery
```
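The "confidence > 0.75" filter in the sequence above amounts to a validation step like this sketch; the payload shape is an assumption:

```typescript
interface Fragment {
  text: string;
  distortionType: string;
  confidence: number;
}

// Parse the model's JSON and keep only well-formed fragments above the bar.
function parseFragments(raw: string, minConfidence = 0.75): Fragment[] {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return []; // malformed model output: show the message with no highlights
  }
  if (!Array.isArray(parsed)) return [];
  return parsed.filter(
    (f): f is Fragment =>
      typeof f === "object" && f !== null &&
      typeof (f as Fragment).text === "string" &&
      typeof (f as Fragment).distortionType === "string" &&
      typeof (f as Fragment).confidence === "number" &&
      (f as Fragment).confidence > minConfidence,
  );
}
```

Failing open to "no highlights" keeps a bad model response from ever blocking the user's writing flow.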
---
## 4. The Turn (Kaleidoscope) — Sequence Diagram
The structured reframing flow with entitlement gating, safety checks, and multi-perspective AI generation.
```mermaid
sequenceDiagram
participant U as User
participant App as Mobile App
participant API as Kalei API
participant Ent as Entitlement Service
participant Safety as Safety Service
participant AI as AI Gateway
participant Model as DeepSeek V3.2 via OpenRouter
participant DB as PostgreSQL
participant Cost as Cost Guard
participant R as Redis
Note over U,R: User Submits a Fragment for Turning
U->>App: Types negative thought + selects style
App->>API: POST /turns {text, style, context}
Note over API,R: Validation & Gating
API->>Ent: Validate tier + daily Turn cap
Ent->>R: Check daily counter (free: 3/day)
R-->>Ent: Current count
Ent-->>API: Allowed / Limit reached
alt Limit Reached
API-->>App: Soft paywall prompt
App-->>U: "Unlock unlimited Turns with Prism"
else Allowed
Note over API,R: Safety Gate
API->>Safety: Crisis precheck on text
alt Crisis Detected
Safety->>DB: Log safety_event
API-->>App: Crisis resources response
App-->>U: Hotline numbers + warm message
else Safe Content
Note over API,R: AI Reframe Generation
API->>AI: Build prompt (cached system + user style + fragment + history context)
AI->>Model: Streaming inference request
Model-->>AI: 3 reframe perspectives + micro-action (if-then)
AI-->>API: Validated structured response + token count
Note over API,R: Record & Respond
API->>Cost: Record token usage + budget check
Cost->>R: Update per-user + global counters
API->>DB: Save turn + reframes + metadata
API-->>App: Stream final Turn result
Note over U,App: User Sees the Turn Card
App-->>U: Original fragment quoted
App-->>U: 3 perspective reframes in chosen style
App-->>U: Micro-action (Gollwitzer if-then)
App-->>U: "Why This Works" expandable science drawer
Note over U,App: Post-Turn Actions
U->>App: Save as Keepsake / Try Different Style / Share
App->>API: POST /turns/{id}/save
API->>DB: Mark as saved keepsake
API-->>App: Streak counter updated
end
end
```
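The daily cap check in this sequence maps to the Redis INCR + EXPIRE idiom. Here is a sketch with an in-memory stand-in for Redis (the interface and key format are assumptions):

```typescript
interface CounterStore {
  incr(key: string): number;
  expire(key: string, seconds: number): void;
}

// Minimal in-memory stub mirroring the two Redis commands used here.
function memoryStore(): CounterStore {
  const m = new Map<string, number>();
  return {
    incr: (k) => { const v = (m.get(k) ?? 0) + 1; m.set(k, v); return v; },
    expire: () => { /* no-op in the stub; Redis would set the key's TTL */ },
  };
}

// Free tier: 3 Turns per user per calendar day.
function tryConsumeTurn(store: CounterStore, userId: string, dailyCap = 3): boolean {
  const day = new Date().toISOString().slice(0, 10); // e.g. "2026-02-20"
  const key = `turns:${userId}:${day}`;
  const count = store.incr(key);
  if (count === 1) store.expire(key, 86_400); // first hit of the day sets the TTL
  return count <= dailyCap;
}
```

Because INCR is atomic in Redis, the same logic holds under concurrent requests without extra locking.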
---
## 5. The Lens (Manifestation Engine) — 6-Step Flow
The complete goal creation and daily action system mapped to the 6 research-backed steps.
```mermaid
flowchart TB
subgraph Step1["Step 1: DECIDE — Clarity"]
S1A["User taps 'Create a Manifestation'"] --> S1B["AI Conversation (3-5 exchanges)\n'What do you want to achieve?'"]
S1B --> S1C["SMART Goal Refinement"]
S1C --> S1D["Output: Clarity Statement Card"]
end
subgraph Step2["Step 2: SEE IT — Mental Rehearsal"]
S1D --> S2A["AI generates personalized\nvisualization script"]
S2A --> S2B["First-person, sensory-rich,\nprocess-focused imagery"]
S2B --> S2C["User reads/listens\nMarks as complete"]
S2C --> S2D["Output: Vision Summary\n(revisitable daily)"]
end
subgraph Step3["Step 3: BELIEVE — Self-Efficacy"]
S2D --> S3A["AI asks:\n'What makes you doubt\nthis is possible?'"]
S3A --> S3B["User lists doubts"]
S3B --> S3C["AI addresses each with:\npast successes + transferable skills\n+ role models + small wins"]
S3C --> S3D["Output: Belief Statement Card\n'You CAN — here's the evidence'"]
end
subgraph Step4["Step 4: NOTICE — Attention Training"]
S3D --> S4A["AI sets up daily\nattention prompts"]
S4A --> S4B["Daily push:\n'Notice one thing aligned\nwith your goal today'"]
S4B --> S4C["User logs observations\nBuilds Evidence Journal"]
S4C --> S4D["AI surfaces patterns:\n'23 alignment instances\nthis month'"]
end
subgraph Step5["Step 5: ACT — Implementation Intentions"]
S4D --> S5A["AI generates weekly\nmicro-actions"]
S5A --> S5B["Gollwitzer if-then format:\n'If [time/situation],\nthen I will [15-min action]'"]
S5B --> S5C["User checks off\ncompleted actions"]
S5C --> S5D["AI adapts difficulty:\n2-min actions → scales up\nas habit solidifies"]
end
subgraph Step6["Step 6: REPEAT — Compound"]
S5D --> S6A["Habit Tracking Dashboard"]
S6A --> S6B["Visual growth charts\nMilestone celebrations"]
S6B --> S6C["Weekly AI Summary:\nreframes + patterns +\nprogress + adjustments"]
S6C --> S6D["Celebrates PROCESS\nnot just outcomes"]
S6D --> S4B
end
```
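Step 5's if-then format can be pinned down with a tiny formatter (the field names are assumptions):

```typescript
interface MicroAction {
  trigger: string; // time or situation cue
  action: string;  // small concrete behavior (15 min or less)
}

// Render a micro-action in the Gollwitzer implementation-intention format.
function toIfThen(a: MicroAction): string {
  return `If ${a.trigger}, then I will ${a.action}.`;
}
```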
---
## 6. AI Processing Pipeline & Cost Routing
How every AI request flows through the gateway with prompt caching, safety guards, and cost-aware provider routing.
```mermaid
flowchart LR
subgraph Input["Request Sources"]
MIRROR["Mirror\n(Fragment Detection)"]
TURN["Turn\n(3 Reframes)"]
LENS["Lens\n(Affirmations)"]
SPECTRUM["Spectrum\n(Weekly Insights)"]
end
subgraph Pipeline["AI Gateway Pipeline"]
PROMPT["Prompt Builder\n(System + User Context + Research Grounding)"]
CACHE["Prompt Cache Check\n(~20% savings on cached system prompts)"]
SAFETY_CHECK["Anti-Toxicity Guard\n(No toxic positivity / magical thinking)"]
ROUTER{"Cost Router\nBudget Check"}
end
subgraph PrimaryLane["Primary Lane (via OpenRouter)"]
DEEPSEEK_RT["DeepSeek V3.2\nvia DeepInfra/Fireworks\n($0.26/$0.38 per MTok)"]
end
subgraph FallbackLane["Fallback Lane (Auto-Failover)"]
CLAUDE_FB["Claude Haiku 4.5\n(Automatic on provider outage)"]
end
subgraph TemplateLane["Template Fallback (Budget Pressure)"]
TEMPLATE["Local Template System\n(Pre-written, no AI cost)"]
end
subgraph Output["Response Handling"]
VALIDATE["Structured Output\nValidation (JSON)"]
TOKEN_LOG["Token Usage Logger"]
BUDGET["Budget Tracker\n(Per-user daily + Global monthly)"]
RESPONSE["Stream Response\nto Client"]
end
subgraph Alerts["Cost Controls"]
ALERT50["50% Budget Alert"]
ALERT80["80% Budget Alert\nDegrade Lens to templates"]
ALERT95["95% Budget Alert\nPause Spectrum generation"]
HARDCAP["100% Hard Cap\nGraceful service message"]
end
MIRROR --> PROMPT
TURN --> PROMPT
LENS --> PROMPT
SPECTRUM --> PROMPT
PROMPT --> CACHE
CACHE --> SAFETY_CHECK
SAFETY_CHECK --> ROUTER
ROUTER -->|"All features\n(primary)"| DEEPSEEK_RT
ROUTER -->|"Provider outage\n(auto-failover)"| CLAUDE_FB
ROUTER -->|"Lens / basic content\n(budget pressure)"| TEMPLATE
DEEPSEEK_RT --> VALIDATE
CLAUDE_FB --> VALIDATE
TEMPLATE --> VALIDATE
VALIDATE --> TOKEN_LOG
TOKEN_LOG --> BUDGET
BUDGET --> RESPONSE
BUDGET --> ALERT50
BUDGET --> ALERT80
BUDGET --> ALERT95
BUDGET --> HARDCAP
```
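The Cost Controls box reduces to a threshold function like this sketch (the action labels are assumptions; wiring them to the template fallback and pause switches belongs to the Cost Guard service):

```typescript
type BudgetAction = "ok" | "alert" | "degrade_lens" | "pause_spectrum" | "hard_cap";

// Map spend-to-budget ratio onto the escalation ladder above.
function budgetAction(spent: number, monthlyBudget: number): BudgetAction {
  const ratio = spent / monthlyBudget;
  if (ratio >= 1.0)  return "hard_cap";       // graceful service message
  if (ratio >= 0.95) return "pause_spectrum"; // pause Spectrum generation
  if (ratio >= 0.8)  return "degrade_lens";   // Lens falls back to templates
  if (ratio >= 0.5)  return "alert";
  return "ok";
}
```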
---
## 7. Safety & Crisis Detection Flow
The multi-stage safety pipeline that ensures crisis content is never reframed.
```mermaid
flowchart TB
subgraph Input["User Input Arrives"]
MSG["User text from\nMirror / Turn / Lens"]
end
subgraph Stage1["Stage 1: Deterministic Keyword Scan"]
KW["Keyword & Pattern Matcher\n(Regex-based, zero latency)"]
KW_YES{"Crisis keywords\ndetected?"}
end
subgraph Stage2["Stage 2: AI Confirmation"]
AI_CHECK["Low-latency AI model\nconfirms severity"]
AI_YES{"Confirmed\ncrisis?"}
end
subgraph CrisisResponse["Crisis Response — NEVER Reframed"]
CR1["Hardcoded crisis template\n(Not AI-generated)"]
CR2["Display local hotline numbers\n(988 Lifeline, Crisis Text Line)"]
CR3["Warm handoff message:\n'You matter. Help is available\nright now.'"]
CR4["Log safety_event to DB\nfor monitoring"]
end
subgraph NormalFlow["Normal Processing"]
PROCEED["Continue to AI Gateway\nfor reframing / detection"]
end
subgraph Monitoring["Safety Monitoring"]
FP["Track false positives\nfor filter tuning"]
FN["Track false negatives\nvia user reports"]
REVIEW["Weekly safety review loop"]
end
MSG --> KW
KW --> KW_YES
KW_YES -->|"Yes — high confidence\n(explicit phrases)"| CR1
KW_YES -->|"Maybe — ambiguous"| AI_CHECK
KW_YES -->|"No"| PROCEED
AI_CHECK --> AI_YES
AI_YES -->|"Yes"| CR1
AI_YES -->|"No — false alarm"| PROCEED
CR1 --> CR2
CR2 --> CR3
CR3 --> CR4
PROCEED --> FP
CR4 --> FN
FP --> REVIEW
FN --> REVIEW
```
---
## 8. Subscription & Billing State Machine
All user states from anonymous through free, Prism, and Prism+ with renewal, grace period, and downgrade flows.
```mermaid
stateDiagram-v2
[*] --> Anonymous: App Downloaded
Anonymous --> FreeUser: Account Created
state FreeUser {
[*] --> ActiveFree
ActiveFree --> LimitHit: Daily/weekly cap reached
LimitHit --> ActiveFree: Next day/week resets
LimitHit --> PaywallShown: Soft upgrade prompt
}
state PrismSubscriber {
[*] --> ActivePrism
ActivePrism --> PrismRenewal: Monthly renewal
PrismRenewal --> ActivePrism: Payment success
PrismRenewal --> GracePeriod: Payment failed
GracePeriod --> ActivePrism: Retry success
GracePeriod --> Expired: Grace period ends
}
state PrismPlusSubscriber {
[*] --> ActivePrismPlus
ActivePrismPlus --> PlusMRenewal: Monthly renewal
PlusMRenewal --> ActivePrismPlus: Payment success
PlusMRenewal --> PlusGrace: Payment failed
PlusGrace --> ActivePrismPlus: Retry success
PlusGrace --> PlusExpired: Grace period ends
}
FreeUser --> PrismSubscriber: Subscribe $4.99/mo
FreeUser --> PrismPlusSubscriber: Subscribe $9.99/mo
PrismSubscriber --> PrismPlusSubscriber: Upgrade
PrismPlusSubscriber --> PrismSubscriber: Downgrade
PrismSubscriber --> FreeUser: Cancel / Expired
PrismPlusSubscriber --> FreeUser: Cancel / PlusExpired
state EntitlementCheck {
[*] --> CheckPlan
CheckPlan --> FreeTier: No active subscription
CheckPlan --> PrismTier: Active Prism
CheckPlan --> PrismPlusTier: Active Prism+
FreeTier --> ApplyLimits: 3 Turns/day, 2 Mirror/week
PrismTier --> FullAccess: Unlimited Turns + Mirror
PrismPlusTier --> FullPlusAccess: Full + Spectrum + Insights
}
```
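The EntitlementCheck states map to a plan-to-limits lookup; a sketch (type and field names are assumptions, limits come from the tiers above):

```typescript
type Plan = "free" | "prism" | "prism_plus";

interface Limits {
  turnsPerDay: number;
  mirrorPerWeek: number;
  spectrum: boolean;
}

// Infinity marks unlimited access on paid tiers.
function limitsFor(plan: Plan): Limits {
  switch (plan) {
    case "free":       return { turnsPerDay: 3, mirrorPerWeek: 2, spectrum: false };
    case "prism":      return { turnsPerDay: Infinity, mirrorPerWeek: Infinity, spectrum: false };
    case "prism_plus": return { turnsPerDay: Infinity, mirrorPerWeek: Infinity, spectrum: true };
  }
}
```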
---
## 9. Data Entity Relationship Model
All database domains showing how Users connect to every feature, commerce, and ops table.
```mermaid
flowchart LR
subgraph Identity["Identity Domain"]
USERS["USERS\nid, email, password_hash"]
PROFILES["PROFILES\nuser_id, display_name\nreframe_style, checkin_time"]
AUTH["AUTH_SESSIONS\nid, user_id, device_info"]
REFRESH["REFRESH_TOKENS\nid, user_id, token_hash"]
end
subgraph Mirror["Mirror Domain"]
M_SESS["MIRROR_SESSIONS\nid, user_id, started_at\nreflection_summary"]
M_MSG["MIRROR_MESSAGES\nid, session_id\ncontent (encrypted)"]
M_FRAG["MIRROR_FRAGMENTS\nid, message_id\ndistortion_type, confidence\nwas_tapped, reframe_text"]
end
subgraph Turn["Turn Domain"]
TURNS["TURNS\nid, user_id, input_text\nreframe_style, reframes\nmicro_action, is_saved"]
end
subgraph Lens["Lens Domain"]
GOALS["LENS_GOALS\nid, user_id, goal_text\nclarity_statement\nbelief_statement"]
ACTIONS["LENS_ACTIONS\nid, goal_id, action_text\nif_then_format, completed"]
end
subgraph Spectrum["Spectrum"]
S_WEEK["SPECTRUM_WEEKLY\nuser_id, week_start\naggregates, insight"]
S_MONTH["SPECTRUM_MONTHLY\nuser_id, month\ngrowth_trajectory"]
end
subgraph Commerce["Commerce"]
SUBS["SUBSCRIPTIONS\nid, user_id, plan\nstore, status, expires_at"]
end
subgraph Ops["Safety and Ops"]
SAFE["SAFETY_EVENTS\nid, user_id, trigger_text\ndetection_stage"]
AI_USE["AI_USAGE_EVENTS\nid, user_id, feature\nmodel, tokens, cost"]
end
USERS --> PROFILES
USERS --> AUTH
USERS --> REFRESH
USERS --> M_SESS
M_SESS --> M_MSG
M_MSG --> M_FRAG
USERS --> TURNS
USERS --> GOALS
GOALS --> ACTIONS
USERS --> S_WEEK
USERS --> S_MONTH
USERS --> SUBS
USERS --> SAFE
USERS --> AI_USE
```
---
## 10. Deployment Topology & Scaling Path
How the infrastructure evolves from a single EUR 8.45/mo VPS to a multi-node architecture as DAU grows.
```mermaid
flowchart TB
subgraph Launch["Launch: Single VPS — 50 DAU — ~EUR 16/mo"]
P1_CF["Cloudflare (Free)"] --> P1_NGINX["Nginx"]
P1_NGINX --> P1_API["Node.js API + Workers"]
P1_API --> P1_PG["PostgreSQL 16"]
P1_API --> P1_REDIS["Redis"]
P1_API --> P1_AI["DeepSeek V3.2\nvia OpenRouter/DeepInfra"]
end
subgraph Traction["Traction: Split DB — 200 DAU — ~EUR 53/mo"]
P2_CF["Cloudflare"] --> P2_NGINX["Nginx"]
P2_NGINX --> P2_API["Node.js API + Workers"]
P2_API --> P2_PG["PostgreSQL (Separate VPS)"]
P2_API --> P2_REDIS["Redis"]
P2_API --> P2_AI["DeepSeek V3.2 via OpenRouter\n+ Claude Haiku fallback"]
end
subgraph Growth["Growth: Scale API — 1000 DAU — ~EUR 216/mo"]
P3_CF["Cloudflare"] --> P3_LB["Load Balancer"]
P3_LB --> P3_API1["API Replica 1"]
P3_LB --> P3_API2["API Replica 2"]
P3_API1 --> P3_PG["PostgreSQL (Dedicated)"]
P3_API2 --> P3_PG
P3_API1 --> P3_REDIS["Redis Cluster"]
P3_API2 --> P3_REDIS
P3_WORKER["Spectrum Workers"] --> P3_PG
P3_API1 --> P3_AI["OpenRouter AI Gateway\n(DeepSeek + Claude fallback)"]
P3_API2 --> P3_AI
end
Launch -->|"p95 latency > 120ms\nor storage > 70%"| Traction
Traction -->|"CPU > 70% sustained\nor p95 > SLO"| Growth
```

# Kalei — Claude Code Build Framework
Version: 2.0
Date: 2026-02-20
Status: Implementation-ready technical framework
Author: Matt + Claude
This document is the operational build bible for constructing Kalei from zero to production using Claude Code as the primary development tool. It translates all existing architecture, design, and phase documents into a practical, session-by-session execution framework.
This is not a summary of what exists. This is the missing piece: how you actually sit down and build it.
---
## 1. How This Framework Works
Every section in this document maps to a concrete Claude Code session. Each session has a clear input (what you tell Claude Code), a clear output (what gets committed), and a definition of done (how you know it worked). You work through them in order. If a session fails validation, you fix it before moving on.
The framework is organized into build tracks that run roughly in sequence but with deliberate parallelism where safe. Each track contains numbered sessions. Each session is designed to be completable in a single focused Claude Code interaction (30-90 minutes of wall time).
### 1.1 Session Anatomy
Every session follows this structure:
1. Context: what Claude Code needs to know before starting.
2. Prompt: what you paste or describe to Claude Code.
3. Deliverables: what files get created or modified.
4. Validation: how you confirm it works.
5. Commit: what the git message should look like.
### 1.2 Working Principles
These rules apply to every session:
- Never skip validation. A session is not done until validation passes.
- Commit after every successful session. Small, clean commits.
- If Claude Code generates something that does not match the architecture docs, the architecture docs win.
- Every AI call goes through the AI gateway. No direct provider SDK imports in feature code. Ever.
- Every user-facing AI path goes through the safety gate first. No exceptions.
- TypeScript strict mode everywhere. No `any` unless explicitly justified.
- Tests are not optional. Every endpoint gets at least one happy-path and one failure-path test.
---
## 2. Toolchain Decisions (Locked)
These are final. Do not revisit during build.
### Backend
| Layer | Tool | Version Target | Why |
|---|---|---|---|
| Language | TypeScript | 5.x strict | Type safety across full stack, shared types |
| Runtime | Node.js | 22 LTS | Current LTS, native fetch, native test runner fallback |
| API framework | Fastify | 5.x | Fastest Node framework, plugin encapsulation model maps perfectly to our modular monolith. Note: v5 prohibits decorating request/reply with reference types — use onRequest hooks or getter decorators instead. |
| ORM / query | Drizzle ORM | Latest stable | Type-safe schemas as code, zero-overhead SQL, migration generation via drizzle-kit. Use prepared statements for hot-path queries (auth lookups, usage checks). |
| Database | PostgreSQL | 16 | JSONB, RLS, pgcrypto, mature ecosystem |
| Cache / counters | Redis | 7.x | Rate limiting, usage counters, session cache, idempotency store |
| AI SDK | openai (OpenAI-compatible SDK) | Latest | OpenRouter uses OpenAI-compatible API. Native streaming, structured output. Single SDK works for DeepSeek V3.2 (primary) and Claude Haiku (fallback) via OpenRouter gateway. |
| Password hashing | @node-rs/argon2 | Latest | Argon2id via native Rust binding, fastest safe option |
| JWT | jose | Latest | Standards-compliant, no native deps, supports all JWT operations |
### Mobile
| Layer | Tool | Version Target | Why |
|---|---|---|---|
| Framework | React Native + Expo | SDK 54+ (New Architecture enabled by default) | Fabric renderer + TurboModules = 60fps UI, precompiled iOS builds (10x faster), EAS builds + OTA updates |
| Navigation | Expo Router | v4+ | File-system routing, typed routes, deep linking, shared element transitions |
| Server state | TanStack Query (React Query) | v5 | Industry standard for data fetching in production React Native apps. Provides caching (staleTime/gcTime), background refetching, optimistic mutations, offline persistence via `networkMode: 'offlineFirst'`, and paused mutation resume. Replaces raw HTTP clients as the data layer. |
| Client state | Zustand | Latest | Minimal boilerplate, persist middleware with MMKV adapter for offline-first state |
| Local storage | react-native-mmkv | v4 | 30x faster than AsyncStorage, built-in AES encryption, synchronous reads. Used for auth tokens, user preferences, offline cache. Premium apps universally use MMKV over AsyncStorage. |
| Animations | react-native-reanimated | v4+ | UI-thread animations via shared values + useAnimatedStyle. Spring physics, layout animations, entering/exiting transitions, shared element transitions. Required for premium-feeling interactions. |
| Gestures | react-native-gesture-handler | Latest | Native gesture system — pan, pinch, rotation, tap. Composes with Reanimated for gesture-driven animations (swipe-to-dismiss, pull-to-refresh, card interactions). |
| Haptics | expo-haptics | Latest | Tactile feedback on key interactions: Turn save (success notification), action completion (light impact), fragment tap (selection), onboarding progression (soft impact). Standard in premium wellness apps. |
| Images | expo-image | Latest | Faster than React Native Image, aggressive caching, blurhash placeholders, animated image support. |
| Lists | @shopify/flash-list | Latest | 60fps scrolling for long lists, view recycling, superior to FlatList. Used for Gallery timeline, Turn history, and any scrollable content with 50+ items. |
### Shared / Tooling
| Layer | Tool | Version Target | Why |
|---|---|---|---|
| Validation | Zod | 3.x | Shared schemas between API and mobile, runtime + compile-time safety |
| Linting | Biome | Latest | Single tool for lint + format, faster than ESLint + Prettier combined |
| Unit testing | Vitest | Latest | Vite-powered, native TypeScript, compatible with Fastify test patterns |
| E2E testing | Maestro | Latest | Industry standard for mobile E2E testing. YAML-based flow definitions, visual regression, CI-friendly. Tests critical paths: onboarding, Turn flow, Mirror session, purchase. |
| CI | Gitea Actions (self-hosted, free) or GitHub Actions (2,000 free mins/month private, unlimited public) | N/A | Lint + test + type-check on every PR. Gitea is fully self-hosted on the VPS at zero cost. |
| Monorepo | pnpm workspaces | Native | Strict dependency resolution, disk-efficient, prevents phantom deps |
### 2.1 Package Manager
Use `pnpm` as the package manager. It is faster, more disk-efficient, and has strict dependency resolution that prevents phantom dependencies. Install globally:
```bash
npm install -g pnpm
```
### 2.2 What We Are NOT Using (And Why)
| Rejected | Reason |
|---|---|
| Express | Slower, no schema validation, no encapsulation model |
| Prisma | Heavy runtime, slower queries, migration lock-in |
| Supabase Cloud | 3x cost of self-hosting, unnecessary service overhead |
| RevenueCat | Third-party dependency in billing critical path |
| Redux / MobX | Overkill for this app's state complexity. Zustand + TanStack Query covers both client and server state |
| AsyncStorage | 30x slower than MMKV, no encryption, async-only reads. Not used in premium apps |
| Axios / ky / ofetch | TanStack Query is the data layer now. Raw HTTP clients are only used inside query/mutation functions, not directly in components. Use native fetch inside TanStack Query's queryFn. |
| React Navigation (standalone) | Expo Router wraps React Navigation with file-based routing — no reason to use React Navigation directly |
| Animated (built-in) | React Native's built-in Animated API runs on the JS thread. Reanimated runs on the UI thread — mandatory for 60fps premium animations. |
| Jest | Slower, heavier config, Vitest is drop-in compatible |
| ESLint + Prettier | Two tools where Biome does both faster |
| tRPC | Over-engineering for a mobile + API monorepo with Zod already shared |
| Detox | Slower, more complex setup than Maestro for E2E testing. Maestro's YAML flows are faster to write and maintain |
### 2.3 Zero-Cost Development Guarantee
Every tool and dependency in this stack can be used at zero cost during development. This section maps each component to its free path.
**All npm packages are free and open source.** Every library in the Backend, Mobile, and Shared toolchain tables is MIT, Apache 2.0, or BSD licensed. This includes Fastify, Drizzle ORM, TanStack Query, Zustand, MMKV, Reanimated, FlashList, Zod, Biome, Vitest, Pino, jose, @node-rs/argon2, and ioredis. Zero license costs, ever.
**Local infrastructure is free.** PostgreSQL and Redis run locally via Docker Compose (Docker Engine is free and open source). No cloud database or managed Redis required during development.
**Mobile builds are free locally.** Use `npx expo run:ios` and `npx expo run:android` for development builds — these compile using your local Xcode and Android Studio (both free). For production builds, use `eas build --local --platform ios` and `eas build --local --platform android` to build .ipa and .aab files on your machine at zero cost. EAS Cloud Build's free tier (30 builds/month) is available but optional. OTA updates via EAS Update are free for up to 1,000 active users.
**AI API costs are zero during development.** The mock AI provider (`mock.provider.ts`) returns deterministic, schema-valid responses for all unit and integration tests. You never hit real AI APIs during normal development. For manual testing against real AI, use OpenRouter credits — DeepSeek V3.2 via DeepInfra is so inexpensive ($0.26/$0.38 per MTok) that even extensive manual testing costs pennies. During development, you will spend effectively $0 on AI.
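As a sanity check on that claim, here is the per-call arithmetic at DeepSeek V3.2's quoted rates ($0.26 input / $0.38 output per million tokens). The token counts below are illustrative, not measured:

```typescript
// Rates from the toolchain table above (DeepSeek V3.2 via DeepInfra, USD per million tokens).
const INPUT_RATE_PER_MTOK = 0.26;
const OUTPUT_RATE_PER_MTOK = 0.38;

function callCostUsd(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_RATE_PER_MTOK +
    (outputTokens / 1_000_000) * OUTPUT_RATE_PER_MTOK
  );
}

// An illustrative Turn call: ~800-token prompt, ~400-token structured response.
const perCall = callCostUsd(800, 400); // ≈ $0.00036
```

At that shape, a thousand manual test calls is roughly $0.36 — which is why development AI spend rounds to zero.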
**Testing is free.** Vitest (MIT) for unit/integration tests. Maestro CLI (Apache 2.0) runs locally against simulators/emulators at no cost — do not use Maestro Cloud. k6 (Apache 2.0) for load testing runs locally.
**Error tracking is free.** GlitchTip is self-hosted via Docker Compose on your dev machine (or later on the VPS). It is Sentry-compatible and 100% open source. No SaaS subscription needed.
**Analytics is free.** PostHog can be self-hosted via Docker for up to ~100k events/month, or use their cloud free tier (1M events/month, no credit card required). For development, self-hosted PostHog in Docker Compose alongside Postgres and Redis costs nothing.
**Push notifications are free.** Expo Push Notification API is free with a 600 notifications/second/project rate limit. No paid tier required.
**CI/CD is free.** Gitea Actions is self-hosted and free. GitHub Actions offers 2,000 minutes/month free for private repos, unlimited for public repos.
**SSL/HTTPS is free.** Caddy auto-provisions Let's Encrypt certificates. Let's Encrypt is a free certificate authority.
**Costs that only apply at launch (not during development):**
| Item | Cost | When |
|---|---|---|
| Netcup VPS 1000 G12 | €8.45/month | Production deployment |
| Apple Developer Program | $99/year | App Store submission |
| Google Play Developer | $25 one-time | Play Store submission |
| OpenRouter API — DeepSeek V3.2 via DeepInfra (production traffic) | ~$2-8/month at launch scale | Live users hitting AI endpoints |
| Domain name | ~$10-15/year | Production domain |
Total development cost: **$0.** Total launch cost: approximately **€16/month** plus store developer fees ($99/year Apple, $25 one-time Google).
---
## 3. Repository Architecture
This is the exact folder structure. Claude Code should scaffold this in the first session.
```
kalei/
package.json # Workspace root
pnpm-workspace.yaml # Workspace config
tsconfig.base.json # Shared TS config
biome.json # Lint + format config
.env.example # Root env template
.gitignore
apps/
mobile/ # Expo React Native app
app/ # Expo Router file-based routes
(tabs)/ # Tab navigator group
_layout.tsx # Tab layout with 5 tabs
index.tsx # Home / daily focus
mirror.tsx # Mirror entry
turn.tsx # Kaleidoscope entry
lens.tsx # Lens goals
gallery.tsx # Gallery / history
(auth)/ # Auth flow group
_layout.tsx
login.tsx
register.tsx
onboarding.tsx
(spectrum)/ # Spectrum (ships in v1)
_layout.tsx
dashboard.tsx
weekly.tsx
monthly.tsx
_layout.tsx # Root layout (auth gate)
src/
api/ # API layer
client.ts # Base fetch wrapper with auth header injection
query-client.ts # TanStack Query client config (staleTime, gcTime, offline persistence)
queries/ # TanStack Query hooks organized by feature
use-auth.ts # useLogin, useRegister, useRefresh mutations
use-mirror.ts # useMirrorSession, useSendMessage, useReframe queries/mutations
use-turn.ts # useCreateTurn, useTurns, useSaveTurn
use-lens.ts # useGoals, useCreateGoal, useCompleteAction, useDailyAffirmation
use-spectrum.ts # useWeeklyInsight, useMonthlyInsight
use-billing.ts # useEntitlements
keys.ts # Query key factory for organized cache invalidation
stores/ # Zustand stores (client state only — NOT server state)
auth.store.ts # Token state, persisted to MMKV
ui.store.ts # UI preferences, theme mode, onboarding state
offline-queue.store.ts # Queued mutations for offline-first
lib/ # Core utilities
storage.ts # MMKV instance setup with encryption
haptics.ts # Centralized haptic feedback helpers (tapHaptic, successHaptic, etc.)
analytics.ts # PostHog event tracking (self-hosted or cloud free tier — 1M events/month free)
components/ # Reusable UI components
ui/ # Primitives (Button, Card, Input, TextArea, etc.)
animations/ # Reanimated animation components (FadeIn, SlideUp, PulseGlow, KaleidoscopeSpinner)
mirror/ # Mirror-specific components
turn/ # Turn-specific components
lens/ # Lens-specific components
spectrum/ # Spectrum-specific components
hooks/ # Custom React hooks
use-keyboard-aware.ts # Keyboard avoidance for writing screens
use-network-state.ts # Online/offline detection
use-app-state.ts # Foreground/background lifecycle
utils/ # Helpers, formatters, constants
theme/ # Colors, typography, spacing tokens, dark mode support
types/ # Mobile-specific types
assets/ # Images, fonts, Lottie animations
e2e/ # Maestro E2E test flows
onboarding.yaml
turn-flow.yaml
mirror-session.yaml
purchase-flow.yaml
app.config.ts # Expo config (newArchEnabled: true is default in SDK 54)
package.json
tsconfig.json
.env.example
services/
api/ # Fastify API server
src/
server.ts # Fastify app factory
config.ts # Environment config with Zod validation
modules/ # Feature modules (Fastify plugins)
auth/
auth.routes.ts
auth.service.ts
auth.schemas.ts
auth.test.ts
mirror/
mirror.routes.ts
mirror.service.ts
mirror.schemas.ts
mirror.test.ts
turn/
turn.routes.ts
turn.service.ts
turn.schemas.ts
turn.test.ts
lens/
lens.routes.ts
lens.service.ts
lens.schemas.ts
lens.test.ts
spectrum/
spectrum.routes.ts
spectrum.service.ts
spectrum.schemas.ts
spectrum.test.ts
billing/
billing.routes.ts
billing.service.ts
billing.schemas.ts
billing.test.ts
plugins/ # Cross-cutting Fastify plugins
auth.plugin.ts # JWT verification decorator
entitlement.plugin.ts # Plan gating middleware
rate-limit.plugin.ts # Redis-backed rate limiting
request-id.plugin.ts # Request ID injection
error-handler.plugin.ts
gateway/ # AI Gateway abstraction
ai-gateway.ts # Main gateway interface
providers/
openrouter.provider.ts # Primary: DeepSeek V3.2 via DeepInfra/Fireworks, Fallback: Claude Haiku
prompts/
mirror-detect.prompt.ts
mirror-reframe.prompt.ts
turn-reframe.prompt.ts
lens-affirmation.prompt.ts
spectrum-weekly.prompt.ts
spectrum-monthly.prompt.ts
output-schemas/ # Zod schemas for AI output validation
mirror-fragments.schema.ts
turn-perspectives.schema.ts
lens-output.schema.ts
safety/ # Safety service
safety.service.ts # Multi-stage crisis filter
crisis-keywords.ts # Deterministic keyword sets
crisis-resources.ts # Regional crisis hotline data
safety.test.ts
db/ # Database layer
client.ts # Drizzle client setup
schema/ # Drizzle schema definitions
users.ts
auth.ts
mirror.ts
turn.ts
lens.ts
spectrum.ts
billing.ts
safety.ts
usage.ts
index.ts # Re-exports all schemas
migrations/ # Generated by drizzle-kit
seed.ts # Dev seed data
redis/
client.ts # Redis connection
usage-counter.ts # Per-user usage tracking
rate-limiter.ts # Sliding window rate limiter
idempotency.ts # Idempotency key store
workers/ # Background job processors
spectrum-weekly.worker.ts
spectrum-monthly.worker.ts
push-notification.worker.ts
billing-reconciliation.worker.ts
utils/
logger.ts # Pino structured logger
crypto.ts # Column-level encryption helpers
errors.ts # Custom error classes
drizzle.config.ts # Drizzle Kit configuration
package.json
tsconfig.json
vitest.config.ts
.env.example
packages/
shared/ # Shared types and schemas
src/
schemas/ # Zod schemas shared between API and mobile
auth.schema.ts
mirror.schema.ts
turn.schema.ts
lens.schema.ts
spectrum.schema.ts
billing.schema.ts
types/ # TypeScript type exports
index.ts
constants/ # Shared constants
plans.ts # Plan definitions and limits
cognitive-distortions.ts # Fragment type taxonomy
crisis-patterns.ts # Shared crisis detection patterns
package.json
tsconfig.json
infra/
docker/
docker-compose.yml # Postgres + Redis for local dev
docker-compose.prod.yml # Production compose (VPS deployment)
scripts/
setup-local.sh # One-command local environment setup
deploy.sh # Production deployment script
backup-db.sh # PostgreSQL backup script
restore-db.sh # PostgreSQL restore script
nginx/
kalei.conf # Nginx reverse proxy config
caddy/
Caddyfile # Alternative: Caddy auto-HTTPS config
docs/ # Existing documentation (unchanged)
```
### 3.1 Why This Structure
**Backend: Fastify plugin encapsulation.** Each module in `services/api/src/modules/` is a self-contained Fastify plugin that registers its own routes, schemas, and services. Fastify v5 enforces this cleanly — you cannot decorate request/reply with reference types directly (use onRequest hooks or getter decorators instead). Each module can be tested in isolation, and extraction to a separate service later is a matter of moving the plugin to its own process.
**Mobile: Separation of server state and client state.** This is the pattern every premium React Native app follows in 2026. TanStack Query owns all server state (API data, caching, background refetching, optimistic updates). Zustand owns only client state (auth tokens, UI preferences, offline queue). Components never call fetch directly — they use query hooks from `src/api/queries/`. This separation makes offline-first behavior trivial: TanStack Query's `networkMode: 'offlineFirst'` plus MMKV-backed Zustand persistence means the app works without network.
**Shared schemas.** The `packages/shared` workspace ensures that Zod schemas defined once are used identically for API request validation, TanStack Query response typing, and mobile form validation. When the API says a turn response has exactly 3 perspectives, the mobile app knows this at compile time.
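For instance, the "exactly 3 perspectives" invariant on the turn response: in `packages/shared` this is expressed with Zod; shown here as a dependency-free sketch so the shape is concrete (field names match the shared-schema session later in this document):

```typescript
// Dependency-free sketch; the real contract lives as a Zod schema in packages/shared.
interface TurnResponse {
  perspectives: [string, string, string]; // compile-time: exactly three
  micro_action: string;
}

// Runtime guard mirroring what the Zod schema enforces at the API boundary.
function isTurnResponse(value: unknown): value is TurnResponse {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    Array.isArray(v.perspectives) &&
    v.perspectives.length === 3 &&
    v.perspectives.every((p) => typeof p === "string") &&
    typeof v.micro_action === "string"
  );
}
```

Because both API and mobile import the same schema, a two-perspective response fails at the API boundary and can never reach a mobile screen that renders three cards.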
**AI gateway isolation.** The `gateway/` directory is intentionally separate from `modules/`. Feature code calls the gateway; the gateway calls providers. Feature code never imports provider SDKs. This is the single most important architectural boundary in the entire codebase.
### 3.2 Premium UX Architecture Principles
These patterns separate a forgettable app from a premium one:
**60fps everywhere.** All animations run on the UI thread via Reanimated shared values. Never animate with React state or the built-in Animated API. Screen transitions use Reanimated's layout animations (FadeIn, SlideInRight). The Turn kaleidoscope animation uses withRepeat + withSpring for organic-feeling motion.
**Haptic vocabulary.** Define a consistent haptic language: `Haptics.selectionAsync()` for tapping fragments, `Haptics.impactAsync(Light)` for completing actions, `Haptics.notificationAsync(Success)` for saving turns, `Haptics.impactAsync(Soft)` for onboarding progression. Centralize in `src/lib/haptics.ts` so the haptic vocabulary stays consistent.
**Offline-first by default.** Every screen must work without network. MMKV persists Zustand state synchronously. TanStack Query caches API responses with configurable stale times. Mutations queue locally and resume when connectivity returns (via `resumePausedMutations()`). The Mirror writing experience must never be interrupted by network issues.
**Optimistic UI.** When the user saves a Turn, the UI updates immediately via TanStack Query's optimistic mutation pattern — the save icon fills instantly, and the server sync happens in the background. If it fails, it rolls back. The user never waits.
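The rollback mechanic is simple enough to state without the library. A hypothetical sketch of the snapshot-and-restore pattern that TanStack Query's `onMutate`/`onError` callbacks implement (the `Map` cache and names here are illustrative, not the real query cache):

```typescript
// Illustrative snapshot-and-restore; TanStack Query does this via onMutate/onError/onSettled.
type Turn = { id: string; saved: boolean };

function optimisticSave(cache: Map<string, Turn>, id: string): () => void {
  const previous = cache.get(id); // snapshot before mutating
  if (previous) cache.set(id, { ...previous, saved: true }); // instant UI update
  return () => {
    // rollback: restore the snapshot if the server mutation fails
    if (previous) cache.set(id, previous);
  };
}
```

The returned rollback closure is exactly the "context" TanStack Query threads from `onMutate` to `onError` — the user sees the saved state instantly either way.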
**Image optimization.** Use `expo-image` with blurhash placeholders for any loaded images. Profile avatars, onboarding illustrations, and Spectrum chart images all load with a color-blur placeholder that resolves to the full image.
**Accessibility from day one.** All interactive elements have accessibility labels. The Mirror writing area supports dynamic type sizes. Color contrast meets WCAG AA. VoiceOver/TalkBack navigation works for all core flows.
---
## 4. Build Track A — Foundation (Weeks 1-2)
Duration: 3-5 days
Goal: Everything runs. Nothing is broken. You can develop features.
### Session A1: Repository Scaffold
Context: Starting from empty repo.
Prompt to Claude Code:
> Initialize a pnpm monorepo with three workspaces: apps/mobile, services/api, and packages/shared. Set up tsconfig.base.json with strict mode. Add biome.json with reasonable defaults. Add .gitignore covering node_modules, .env, dist, .expo, and drizzle migration artifacts. Create package.json files for each workspace. The API workspace should use Fastify 5, Drizzle ORM with postgres-js driver, zod, pino, ioredis, jose, and @node-rs/argon2. The mobile workspace should use Expo SDK 54 with expo-router, and these critical dependencies: @tanstack/react-query for server state, zustand for client state, react-native-mmkv v4 for encrypted local storage (use createMMKV API), react-native-reanimated v4 for UI-thread animations, react-native-gesture-handler for native gestures, expo-haptics for tactile feedback, expo-image for optimized image loading, and zod for shared validation. The shared workspace should export Zod schemas and TypeScript types. Add a pnpm-workspace.yaml. Make the whole thing build and type-check cleanly.
Deliverables:
- All package.json files with correct dependencies
- tsconfig files with project references
- biome.json
- pnpm-workspace.yaml
- Clean `pnpm install` and `pnpm -r run type-check`
Validation:
```bash
pnpm install
pnpm -r run type-check # Zero errors
pnpm biome check . # Zero errors
```
### Session A2: Docker Infrastructure
Context: Repo scaffold exists.
Prompt to Claude Code:
> Create infra/docker/docker-compose.yml with PostgreSQL 16 and Redis 7. Postgres should use kalei/kalei credentials with a kalei database. Add health checks for both services. Create a root-level Makefile with targets: up (start docker), down (stop docker), reset-db (drop and recreate), and logs. Also create .env.example files for both API and mobile workspaces with all expected environment variables documented with comments.
Deliverables:
- docker-compose.yml with health checks
- Makefile
- .env.example files
Validation:
```bash
make up
docker compose -f infra/docker/docker-compose.yml ps # Both healthy
make down
```
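The health checks are the part worth getting right, since the `docker compose ps # Both healthy` validation above depends on them. A minimal sketch of the two stanzas — credentials match the session prompt; the intervals and retry counts are illustrative defaults, not requirements:

```yaml
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: kalei
      POSTGRES_PASSWORD: kalei
      POSTGRES_DB: kalei
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U kalei -d kalei"]
      interval: 5s
      timeout: 3s
      retries: 10
  redis:
    image: redis:7
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 10
```

`pg_isready` and `redis-cli ping` are the standard liveness probes for these images; anything fancier belongs in the API's `/health` endpoint, not in Docker.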
### Session A3: API Server Skeleton
Context: Docker running, dependencies installed.
Prompt to Claude Code:
> Create the Fastify server factory in services/api/src/server.ts. It should: load config from environment using Zod validation (src/config.ts), register CORS, helmet, and sensible plugins, add a request ID plugin that generates UUIDs and injects them into pino logs, add a global error handler that logs errors and returns structured JSON errors, register a GET /health endpoint that checks Postgres and Redis connectivity and returns status/uptime/version. Add the database client (src/db/client.ts) using Drizzle ORM with postgres-js as the driver. Add the Redis client (src/redis/client.ts) using ioredis with reconnection strategy. Wire it all up in an index.ts that starts the server. Use Fastify's plugin encapsulation model — each capability should be a registered plugin.
Deliverables:
- server.ts (app factory)
- config.ts (Zod-validated env)
- db/client.ts (Drizzle + postgres-js)
- redis/client.ts (ioredis)
- plugins/request-id.plugin.ts
- plugins/error-handler.plugin.ts
- Health endpoint returning JSON
Validation:
```bash
make up
cd services/api && pnpm dev
curl http://localhost:8080/health # Returns JSON with status: ok
```
### Session A4: Mobile App Skeleton
Context: API server running.
Prompt to Claude Code:
> Create the Expo app in apps/mobile using Expo Router with file-based routing and the New Architecture (enabled by default in SDK 54). Set up the root layout (_layout.tsx) with: a TanStack Query provider (QueryClientProvider with defaults: staleTime 5 minutes, gcTime 10 minutes, networkMode offlineFirst), a GestureHandlerRootView wrapper, and an auth gate that checks for a stored token in MMKV. Set up MMKV storage in src/lib/storage.ts using createMMKV with encryption enabled. Set up the TanStack Query client in src/api/query-client.ts. Create a Zustand auth store in src/stores/auth.store.ts that persists tokens to MMKV (using zustand/middleware persist with a custom MMKV storage adapter). Create the (auth) group with login and register screens (just UI shells with Reanimated FadeIn entering animations). Create the (tabs) group with 5 tabs: Home, Mirror, Turn, Lens, Gallery. Each tab should show its name and a placeholder with a subtle FadeIn.duration(300) entering animation. Set up the theme directory with Kalei's color tokens (primary deep teal #1A3A3A, accent amber #D4A574, background warm off-white #FAF8F5, text near-black #1A1A1A) and typography scale. Create a TanStack Query hook in src/api/queries/use-health.ts that fetches /health. Create src/lib/haptics.ts with centralized haptic helpers (tapHaptic, successHaptic, errorHaptic, selectionHaptic) wrapping expo-haptics. Display the health status on the Home tab using the useHealth query hook.
Deliverables:
- Expo Router file structure with all route groups
- Root layout with TanStack Query provider, GestureHandlerRootView, auth gate
- MMKV storage setup with encryption
- TanStack Query client with offline-first defaults
- Zustand auth store persisted to MMKV
- Haptics helper library
- 5 tab screens with Reanimated entering animations
- Theme tokens and typography scale
- useHealth query hook calling /health
- Query key factory in src/api/queries/keys.ts
Validation:
```bash
cd apps/mobile && npx expo start
# Open on device (not Expo Go — need dev build for MMKV/Reanimated native modules)
# See tabs with fade-in animations, see health status from API
# Kill app and reopen — auth state persists from MMKV
```
### Session A5: Database Schema v1
Context: API server and Drizzle client working.
Prompt to Claude Code:
> Create the full Phase 1 database schema using Drizzle ORM for PostgreSQL. Define all tables in separate files under services/api/src/db/schema/. Tables needed: users (id uuid, email, password_hash, display_name, role, created_at, updated_at), profiles (user_id FK, onboarding_complete, timezone, preferred_reframe_style, created_at, updated_at), auth_sessions (id uuid, user_id FK, device_info jsonb, ip_address, created_at, expires_at), refresh_tokens (id uuid, user_id FK, token_hash, session_id FK, created_at, expires_at, revoked_at), subscriptions (id uuid, user_id FK, plan enum free/prism/prism_plus, store enum apple/google, store_subscription_id, status enum, current_period_start, current_period_end, created_at, updated_at), entitlement_snapshots (id uuid, user_id FK, plan, features jsonb, valid_from, valid_until), turns (id uuid, user_id FK, input_text encrypted, perspectives jsonb, micro_action text, reframe_style, saved boolean, created_at), mirror_sessions (id uuid, user_id FK, status enum, started_at, ended_at, reflection_text), mirror_messages (id uuid, session_id FK, user_id FK, content encrypted, sequence_number, created_at), mirror_fragments (id uuid, message_id FK, user_id FK, fragment_type, start_offset, end_offset, confidence, reframe_text, created_at), lens_goals (id uuid, user_id FK, title, description, status, created_at, updated_at), lens_actions (id uuid, goal_id FK, user_id FK, title, completed boolean, completed_at, due_date, created_at), ai_usage_events (id uuid, user_id FK, feature, model, provider, input_tokens, output_tokens, cost_usd, latency_ms, created_at), safety_events (id uuid, user_id FK, trigger_type, trigger_content_hash, action_taken, created_at). Use proper indexes on user_id and created_at for all tables. Use Drizzle relations for type-safe joins. Create an index.ts that re-exports everything. Then configure drizzle.config.ts and generate the initial migration with drizzle-kit generate.
Deliverables:
- Schema files for all tables
- Drizzle relations defined
- drizzle.config.ts
- Generated SQL migration
- Clean migration apply
Validation:
```bash
cd services/api
pnpm drizzle-kit generate # Generates migration SQL
pnpm drizzle-kit migrate # Applies to local Postgres
pnpm drizzle-kit studio # Opens Drizzle Studio, verify all tables exist
```
### Session A6: Shared Schemas and Quality Gates
Context: Schema exists, both apps scaffold complete.
Prompt to Claude Code:
> Create the shared Zod schemas in packages/shared/src/schemas/ for all API request/response contracts. Auth schemas: RegisterInput, LoginInput, TokenResponse, ProfileResponse. Mirror schemas: CreateSessionInput, SendMessageInput, FragmentResponse, ReframeResponse, ReflectionResponse. Turn schemas: CreateTurnInput (input_text, reframe_style enum), TurnResponse (3 perspectives array + micro_action). Lens schemas: CreateGoalInput, GoalResponse, CreateActionInput, DailyAffirmationResponse. Also create packages/shared/src/constants/plans.ts with plan definitions including feature limits (free: 3 turns/day, 2 mirror/week; prism: unlimited; prism_plus: unlimited + spectrum). Add packages/shared/src/constants/cognitive-distortions.ts with the 10 cognitive distortion types used for fragment detection. Then set up Vitest config for the API workspace with a test that imports the shared schemas. Set up Biome and add scripts to root package.json: lint, format, test, type-check, e2e (runs `maestro test apps/mobile/e2e/`). Add a CI-ready check script that runs lint + format + type-check + test (unit/integration). E2E tests via Maestro run separately in the release pipeline since they require a running device/emulator.
Deliverables:
- All shared Zod schemas
- Plan constants with limits
- Cognitive distortion taxonomy
- Vitest configuration
- Root-level quality scripts
- At least one passing test
Validation:
```bash
pnpm run check # Runs lint + format + type-check + test — all pass
```
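A sketch of what `plans.ts` might contain — the limits come from the session prompt above; the field names are illustrative, not final:

```typescript
// Illustrative shape for packages/shared/src/constants/plans.ts.
// Limits per the Session A6 prompt: free = 3 turns/day + 2 mirror sessions/week.
export type Plan = "free" | "prism" | "prism_plus";

export interface PlanLimits {
  turnsPerDay: number | "unlimited";
  mirrorSessionsPerWeek: number | "unlimited";
  spectrum: boolean; // Spectrum insights are a prism_plus feature
}

export const PLAN_LIMITS: Record<Plan, PlanLimits> = {
  free: { turnsPerDay: 3, mirrorSessionsPerWeek: 2, spectrum: false },
  prism: { turnsPerDay: "unlimited", mirrorSessionsPerWeek: "unlimited", spectrum: false },
  prism_plus: { turnsPerDay: "unlimited", mirrorSessionsPerWeek: "unlimited", spectrum: true },
};
```

Keeping this in the shared workspace means the API's entitlement gates and the mobile app's upgrade screens can never disagree about what a plan includes.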
---
## 5. Build Track B — Platform Core (Weeks 1-3)
Duration: 2-3 weeks
Goal: Auth, entitlements, AI gateway, safety, and rate limiting are production-grade.
### Session B1: Auth — Registration and Login
Prompt to Claude Code:
> Implement the auth module in services/api/src/modules/auth/. Create auth.routes.ts as a Fastify plugin that registers POST /auth/register and POST /auth/login. Registration should: validate input with shared Zod schema, check email uniqueness, hash password with Argon2id, create user + profile, generate JWT access token (15 min TTL) and refresh token (7 day TTL), store refresh token hash in DB, return both tokens. Login should: validate credentials, verify password, create new session with device info from user-agent, generate token pair, return tokens. Use jose for JWT signing with RS256 or HS256 based on config. Create auth.service.ts for business logic and use Drizzle for data access. Add auth.schemas.ts importing from shared schemas. Write tests for: successful registration, duplicate email rejection, successful login, wrong password rejection.
### Session B2: Auth — Token Refresh and Sessions
Prompt to Claude Code:
> Add POST /auth/refresh and POST /auth/logout to the auth module. Refresh should: accept refresh token, verify it exists and is not expired or revoked, rotate it (revoke old, issue new pair), return new tokens. This implements refresh token rotation — if a revoked token is reused, revoke ALL tokens for that user (compromise detection). Logout should: revoke the current refresh token and mark the session as ended. Add GET /me that returns the current user profile. Add PATCH /me/profile for updating display_name, timezone, and preferred_reframe_style. Create the auth.plugin.ts in plugins/ that adds a Fastify decorator `authenticate` — a preHandler hook that verifies the JWT access token, extracts user ID, and decorates the request with `request.user`. Test: full token lifecycle (register, use access token, refresh, use new token, logout, verify old refresh fails).
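The compromise-detection rule is the subtle part of this session. A dependency-free sketch of the rotation state machine — an in-memory `Map` stands in for the `refresh_tokens` table, and the class/method names are illustrative:

```typescript
// In-memory stand-in for the refresh_tokens table, showing only the rotation rule.
interface StoredToken { userId: string; revoked: boolean }

class RefreshStore {
  private tokens = new Map<string, StoredToken>();

  issue(userId: string, token: string): void {
    this.tokens.set(token, { userId, revoked: false });
  }

  // Returns the new token, or null if the presented token was invalid or reused.
  rotate(oldToken: string, newToken: string): string | null {
    const record = this.tokens.get(oldToken);
    if (!record) return null;
    if (record.revoked) {
      // Reuse of a revoked token => assume compromise: revoke ALL tokens for that user.
      for (const t of this.tokens.values()) {
        if (t.userId === record.userId) t.revoked = true;
      }
      return null;
    }
    record.revoked = true;              // rotate: revoke the old token...
    this.issue(record.userId, newToken); // ...and issue a fresh one
    return newToken;
  }

  isValid(token: string): boolean {
    const r = this.tokens.get(token);
    return !!r && !r.revoked;
  }
}
```

The full-lifecycle test in the session prompt is essentially this sequence: rotate once, replay the old token, then confirm the entire token family is dead.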
### Session B3: Entitlement and Plan Gating
Prompt to Claude Code:
> Implement the entitlement system. Create plugins/entitlement.plugin.ts as a Fastify plugin that adds a `requirePlan` decorator. This decorator takes a minimum plan level and returns a preHandler that: loads the user's current entitlement snapshot, checks if their plan meets the minimum, returns 403 with a clear upgrade message if not. Also implement feature-specific gates: a `checkTurnLimit` preHandler that reads today's turn count from Redis and enforces the daily cap for free users (3/day), and a `checkMirrorLimit` preHandler that reads this week's mirror session count and enforces the weekly cap for free users (2/week). Create billing.routes.ts with webhook endpoints for Apple and Google (POST /billing/webhooks/apple and /billing/webhooks/google) — these should parse notification payloads, update subscriptions table, and write new entitlement snapshots. For now, implement the webhook signature verification as a placeholder. Add GET /billing/entitlements that returns the user's current plan and feature limits with usage counts. Test: free user hitting turn limit, prism user bypassing limit, entitlement check after plan change.
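The Redis side of `checkTurnLimit` is just INCR plus a day-aligned expiry. A sketch with a `Map` standing in for Redis — the key shape and the 3/day limit come from the prompt; helper names are illustrative:

```typescript
// Map stands in for Redis; in production this is INCR + EXPIRE on a per-user daily key.
const counters = new Map<string, number>();

function dailyKey(userId: string, now: Date): string {
  // e.g. "turns:u1:2026-02-20" — the real key expires at the user's local midnight
  return `turns:${userId}:${now.toISOString().slice(0, 10)}`;
}

const FREE_TURNS_PER_DAY = 3;

// Returns true and increments if the turn is allowed; false once the free cap is hit.
function checkTurnLimit(
  userId: string,
  plan: "free" | "prism" | "prism_plus",
  now: Date,
): boolean {
  if (plan !== "free") return true; // paid plans are unlimited
  const key = dailyKey(userId, now);
  const used = counters.get(key) ?? 0;
  if (used >= FREE_TURNS_PER_DAY) return false;
  counters.set(key, used + 1);
  return true;
}
```

Note the gate reads the plan first and short-circuits for paid users, so Redis only takes traffic from accounts that can actually be capped.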
### Session B4: AI Gateway — Core Abstraction
Prompt to Claude Code:
> Build the AI gateway in services/api/src/gateway/. Create ai-gateway.ts with a TypeScript interface: AIGateway with methods `generate(request: AIRequest): Promise<AIResponse>` and `stream(request: AIRequest): AsyncGenerator<AIChunk>`. The AIRequest type should include: feature (mirror_detect | mirror_reframe | turn_reframe | lens_affirmation | spectrum_weekly | spectrum_monthly), messages array, model override (optional), temperature, max_tokens, output_schema (Zod schema for structured output validation). Create providers/anthropic.provider.ts that implements this interface using @anthropic-ai/sdk. It should: use prompt caching by marking system message content blocks with `cache_control: { type: "ephemeral" }` (this tells Anthropic to cache the system prompt — subsequent calls within a 5-minute window pay only 10% of base input rate, saving 40-50% on input costs), support streaming via the SDK's stream method, extract token usage from response metadata, validate output against the provided Zod schema and retry once on validation failure. Create providers/openai-compatible.provider.ts as the Venice/Groq/OpenRouter adapter using the OpenAI-compatible API format. The gateway factory should read config to determine which provider to use per feature, with fallback chains. Log every call to ai_usage_events with feature, model, provider, token counts, cost estimate, and latency. Test: mock provider returns valid structured output, mock provider returns invalid output and retry succeeds, token usage logging works.
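
The fallback-chain selection in the gateway factory can be sketched as a pure function; the config shape and provider ids here are assumptions for illustration:

```typescript
// Per-feature provider routing with an ordered fallback chain.
// Feature names come from this doc; RoutingConfig is a hypothetical shape.
type Feature =
  | "mirror_detect" | "mirror_reframe" | "turn_reframe"
  | "lens_affirmation" | "spectrum_weekly" | "spectrum_monthly";

interface RoutingConfig {
  chains: Record<Feature, string[]>; // ordered provider ids, first is primary
}

// Picks the first provider in the chain not currently marked unhealthy.
function pickProvider(feature: Feature, config: RoutingConfig, unhealthy: Set<string>): string {
  const chain = config.chains[feature];
  for (const provider of chain) {
    if (!unhealthy.has(provider)) return provider;
  }
  // All providers unhealthy: return the primary and let retry policy surface errors.
  return chain[0];
}
```

Keeping routing declarative in config (rather than hard-coded per provider class) is what lets the cost-pressure fallbacks in Section 9 swap providers without code changes.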
### Session B5: AI Gateway — Prompt Templates
Prompt to Claude Code:
> Create all prompt templates in services/api/src/gateway/prompts/. Each template exports a function that takes context and returns the messages array for the AI gateway. Mirror fragment detection (mirror-detect.prompt.ts): system prompt instructs the model to analyze freeform writing and identify cognitive distortions from our taxonomy (all-or-nothing, catastrophizing, emotional reasoning, fortune telling, labeling, magnification, mental filtering, mind reading, overgeneralization, should statements). Output must be a JSON array of fragments with: type, start_offset, end_offset, confidence (0-1), and a brief explanation. Mirror inline reframe (mirror-reframe.prompt.ts): takes the original text and a specific fragment, generates a gentle compassionate reframe (2-3 sentences). Turn reframe (turn-reframe.prompt.ts): system prompt generates exactly 3 perspective reframes (compassionate, rational, growth-oriented) plus one micro-action as an if-then implementation intention. Output is strictly structured JSON. Lens affirmation (lens-affirmation.prompt.ts): takes user's active goals and generates a personalized daily affirmation. Create corresponding output validation schemas in output-schemas/ using Zod. Each prompt template should include a version string (e.g., "mirror-detect-v1") for tracking prompt revisions. Test: each template produces valid messages arrays, output schemas validate correct and reject incorrect shapes.
### Session B6: Safety Service
Prompt to Claude Code:
> Build the safety service in services/api/src/safety/. Create crisis-keywords.ts with deterministic keyword and phrase sets for crisis detection: explicit self-harm language, suicidal ideation phrases, immediate danger phrases, and severe distress indicators. These must be comprehensive but avoid false positives on common expressions. Create safety.service.ts with a multi-stage crisis filter: Stage 1 is deterministic keyword/regex matching (instant, no AI call). If Stage 1 flags, immediately return crisis response — no further processing. Stage 2 (optional, for ambiguous cases): send flagged text to AI gateway for confirmation with a safety-specific prompt that returns a confidence score. If confirmed, return crisis response. Create crisis-resources.ts with structured data for crisis hotlines by region (start with US: 988 Suicide and Crisis Lifeline, Crisis Text Line). The crisis response payload should include: is_crisis boolean, resources array with name/phone/text/url per resource, a compassionate message, and explicit instruction that the content was NOT reframed. Wire the safety service as a preHandler on all AI-facing routes (mirror messages, turn creation, lens if accepting user input). Log every safety event to safety_events table. Test: known crisis phrases trigger immediate response, safe text passes through, ambiguous text reaches Stage 2, crisis response never contains reframed content.
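
Stage 1 of the filter is pure pattern matching, which is why it can run instantly with no AI call. A minimal sketch — the phrase list below is illustrative only; the real crisis-keywords.ts must be far more comprehensive and clinically reviewed:

```typescript
// Stage 1 deterministic crisis screen. Patterns shown are a tiny illustrative
// subset, NOT the production keyword set.
const CRISIS_PATTERNS: RegExp[] = [
  /\bkill myself\b/i,
  /\bend my life\b/i,
  /\bsuicid(e|al)\b/i,
  /\bwant to die\b/i,
];

interface Stage1Result {
  flagged: boolean;
  matched: string[]; // pattern sources that hit, for the safety_events log
}

function stage1Screen(text: string): Stage1Result {
  const matched = CRISIS_PATTERNS.filter((p) => p.test(text)).map((p) => p.source);
  return { flagged: matched.length > 0, matched };
}
```

Because Stage 1 is deterministic, the "known crisis phrases trigger immediate response" test can assert exact behavior with no mocking.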
### Session B7: Rate Limiting and Usage Metering
Prompt to Claude Code:
> Build Redis-backed rate limiting and usage tracking. Create redis/rate-limiter.ts implementing a sliding window rate limiter. It should support: per-IP rate limiting (general API protection, 100 req/min), per-user rate limiting (feature-specific, configurable), and burst allowance. Create redis/usage-counter.ts for tracking: daily turn count per user, weekly mirror session count per user, monthly token usage per user. Create redis/idempotency.ts that stores idempotency keys with TTL — if a request includes an X-Idempotency-Key header and we've seen it before, return the cached response. Create plugins/rate-limit.plugin.ts that registers the general rate limiter on all routes and exposes decorators for feature-specific limits. Create a usage cost tracking function that estimates USD cost from token counts based on current model pricing (configurable). Add middleware that records every AI-backed request to ai_usage_events. Test: rate limit kicks in at threshold, idempotency key returns cached response, usage counters increment correctly.
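
The sliding-window algorithm itself is independent of Redis; a hedged sketch with an in-memory store (production keeps the timestamps in a Redis sorted set via ZADD/ZREMRANGEBYSCORE, which this class only approximates):

```typescript
// Sliding-window rate limiter: allow at most `limit` hits per `windowMs`
// per key, measured over a true sliding window rather than fixed buckets.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed (and records it), false if over limit.
  allow(key: string, now: number): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent); // still prune, but do not record the denial
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

A sliding window avoids the burst-at-bucket-boundary problem of fixed windows, which matters for the per-IP 100 req/min general limit.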
### Session B8: Push Notification Infrastructure
Prompt to Claude Code:
> Set up push notification infrastructure. Install expo-notifications in the mobile app and configure it to request permissions and register for push tokens on login. Create a notification service in the API (services/api/src/modules/notifications/) that: stores device push tokens per user session, sends push notifications via Expo Push API (which handles both APNs and FCM routing), supports notification types: daily_affirmation_reminder, mirror_nudge, weekly_spectrum_ready, turn_streak_reminder. Create a notification preferences table in the schema so users can toggle notification types on/off. Create a worker (workers/push-notification.worker.ts) that runs scheduled notifications: daily affirmation reminder at user's preferred time (default 8am local), weekly spectrum summary notification on Monday morning. For development, log notifications to console instead of sending. Test: push token registration, notification send via Expo Push API, user preference toggle, scheduled notification timing.
### Session B9: Observability Baseline
Prompt to Claude Code:
> Set up structured logging and error tracking. Configure Pino logger in utils/logger.ts with: JSON output in production, pretty output in development, request ID correlation, user ID hash (never log raw user ID), redaction of sensitive fields (authorization headers, passwords, tokens). Create a Fastify hook that logs request start (method, url, request_id) and response end (status, duration_ms, request_id). Add GlitchTip integration for error tracking — GlitchTip is free and open source, self-hosted via Docker Compose alongside our existing Postgres and Redis (add it to docker-compose.yml). Install @sentry/node (GlitchTip is Sentry-compatible, same SDK) and create an initialization plugin that points to the local GlitchTip instance. For development, GlitchTip runs on your machine; in production, it runs on the VPS. Zero SaaS cost. Create a /metrics endpoint that returns: request count by endpoint, error count by endpoint, AI token usage by feature (daily/monthly), AI cost by feature, active sessions count, rate limit denial count. For now this can be a simple JSON endpoint — we'll add Prometheus format later if needed. Test: verify request logs contain request_id, verify error tracking captures unhandled errors, verify metrics endpoint returns non-zero data after some requests.
---
## 6. Build Track C — Core Experience (Weeks 4-8)
Duration: 3-5 weeks
Goal: Users can complete the Mirror to Turn to Lens flow end-to-end.
### Session C1: Mirror — Session Lifecycle API
Prompt to Claude Code:
> Implement the Mirror module backend. POST /mirror/sessions creates a new session (status: active). POST /mirror/messages accepts message content, runs it through the safety gate, then through the AI gateway for fragment detection. The response includes the original message with an array of detected fragments (type, offsets, confidence). Each fragment above the confidence threshold (0.7) is stored in mirror_fragments. POST /mirror/fragments/:id/reframe triggers an inline reframe for a specific fragment — calls AI gateway with the original context and fragment, returns the reframed perspective. POST /mirror/sessions/:id/close ends the session and triggers a reflection generation — the AI summarizes the session's themes, patterns noticed, and a gentle closing thought. GET /mirror/sessions lists sessions with pagination. DELETE /mirror/sessions/:id soft-deletes a session. All endpoints enforce authentication, safety precheck, and entitlement limits. The message content should be encrypted at rest using the column-level encryption helper. Test: full session lifecycle, fragment detection returns valid offsets, reframe returns compassionate text, reflection summarizes themes, safety gate blocks crisis content.
### Session C2: Mirror — Mobile UI
Prompt to Claude Code:
> Build the Mirror screen in the mobile app as a premium writing experience. The compose UI should feel like a peaceful writing space — warm off-white background (#FAF8F5), gentle typography (system serif or Inter at 18px), generous padding, no distracting UI elements. Use keyboard-aware scrolling so the text input stays visible above the keyboard. As the user types and pauses (debounce 2 seconds after last keystroke), use a TanStack Query mutation (useSendMessage from src/api/queries/use-mirror.ts) to send content to the API for fragment detection. When fragments come back, render subtle highlights on the detected text — use a warm amber (#D4A574) underline with a Reanimated FadeIn.duration(400) animation, not aggressive red highlighting. Add Haptics.selectionAsync() when the user taps a highlighted fragment. Show a bottom sheet card (animated with Reanimated SlideInDown.springify()) with the fragment type name (e.g., "Catastrophizing"), a brief explanation, and a "See another perspective" button that triggers the reframe mutation. The reframe slides in as an animated card with FadeInUp. When the user closes the session, show the Reflection as a full-screen gentle summary with Reanimated FadeIn. All API calls should go through TanStack Query hooks — useMirrorSession, useSendMessage, useReframe, useCloseSession. Configure the useSendMessage mutation with optimistic updates so fragment highlights appear immediately from cached data while the server processes. Handle offline gracefully using TanStack Query's offline mutation persistence — queue messages locally in the mutation cache and resume when connectivity returns via resumePausedMutations.
### Session C3: Turn — Reframe Engine API
Prompt to Claude Code:
> Implement the Turn module backend. POST /turns accepts input_text and optional reframe_style (compassionate, rational, growth, or all). Runs safety precheck. Calls AI gateway with the turn-reframe prompt. Returns exactly 3 perspectives, each with: style label, reframed text (2-3 sentences), and a brief explanation of the cognitive shift. Also returns one micro_action as an if-then implementation intention (e.g., "If I notice myself catastrophizing about work, then I will take 3 breaths and name one thing I can control"). POST /turns/:id/save toggles the saved state. GET /turns returns paginated history. GET /turns/:id returns a single turn with full details. Support streaming for the reframe generation — the API should use Server-Sent Events so the mobile app can show perspectives appearing in real-time. Enforce daily turn limits for free users. Test: valid turn creates 3 perspectives, streaming delivers chunks, save toggle works, daily limit enforced, crisis content blocked.
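
On the client side, the SSE stream arrives as blank-line-delimited frames of `event:`/`data:` lines. A hedged sketch of the frame parser (the event name is an assumption; the field handling follows the SSE wire format):

```typescript
// Parses one SSE frame into its event name and joined data payload.
// Returns null for comment/keep-alive frames that carry no data.
function parseSseFrame(frame: string): { event: string; data: string } | null {
  let event = "message"; // SSE default event name
  const dataLines: string[] = [];
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
  }
  if (dataLines.length === 0) return null;
  return { event, data: dataLines.join("\n") };
}
```

Emitting each perspective as its own named event keeps the mobile side simple: one JSON.parse per frame, one card animation per event.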
### Session C4: Turn — Mobile UI
Prompt to Claude Code:
> Build the Turn (Kaleidoscope) screen as the premium signature experience. The input should be a focused single-purpose screen: a large text input with placeholder "What thought is weighing on you?" and a "Turn" button that pulses gently using Reanimated withRepeat(withTiming()) when the input has text. Add Haptics.impactAsync(Medium) when the Turn button is pressed. When processing, show a kaleidoscope animation built with Reanimated — rotate a geometric SVG pattern using useSharedValue + withRepeat(withSpring()) for organic, non-mechanical motion. This is the core product moment and must feel magical. As perspectives stream in via SSE, render them as cards that appear one at a time using Reanimated's layout animation FadeInDown.springify().delay(index * 200) — each card shows the perspective style icon, the reframed text, and the cognitive shift explanation. The micro-action appears last with a distinct card style and FadeInUp animation. The save button (bookmark icon) should use Reanimated withSpring for a satisfying scale bounce on tap, with Haptics.notificationAsync(Success) feedback. Use TanStack Query for all data: useCreateTurn mutation (with streaming SSE handling), useTurns query for history, useSaveTurn mutation with optimistic update (the save icon fills immediately, server syncs in background — rollback on failure). The Gallery tab should show saved turns in a timeline view using FlashList for performant scrolling. Implement pull-to-refresh with Reanimated-powered custom refresh indicator. All loading states must use skeleton placeholders with a Reanimated shimmer effect, never a spinner.
### Session C5: Lens — Goals and Actions API
Prompt to Claude Code:
> Implement the Lens module backend. POST /lens/goals creates a goal with title and optional description. GET /lens/goals returns active goals. PATCH /lens/goals/:id updates goal status. POST /lens/goals/:id/actions creates an action item. POST /lens/actions/:id/complete marks it done with timestamp. GET /lens/affirmation/today generates or returns cached daily affirmation — it calls the AI gateway with the user's active goals as context and generates a personalized affirmation. Cache the affirmation in Redis with a TTL that expires at midnight in the user's timezone. If the AI budget is constrained, fall back to a template-based affirmation system using pre-written affirmations matched to goal themes. Test: goal CRUD, action completion, affirmation generation, affirmation caching, template fallback.
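
The midnight-expiry TTL can be derived from the user's stored timezone; a sketch using Intl (the hourCycle-based clock extraction is an implementation assumption, and leap-second/DST edge cases are ignored here):

```typescript
// Seconds until the next midnight in the user's timezone — used as the
// Redis TTL for the cached daily affirmation.
function secondsUntilLocalMidnight(now: Date, timeZone: string): number {
  const fmt = new Intl.DateTimeFormat("en-US", {
    timeZone,
    hourCycle: "h23", // ensure 00-23 hours, never "24"
    hour: "2-digit",
    minute: "2-digit",
    second: "2-digit",
  });
  const parts = fmt.formatToParts(now);
  const get = (t: string) => Number(parts.find((p) => p.type === t)?.value ?? 0);
  const elapsed = get("hour") * 3600 + get("minute") * 60 + get("second");
  return 24 * 3600 - elapsed;
}
```

Computing the TTL per request (rather than storing a fixed expiry time) means a user who changes timezone simply gets a correctly-scoped cache on their next fetch.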
### Session C6: Lens — Mobile UI
Prompt to Claude Code:
> Build the Lens screen as a calm, empowering daily direction hub. Top section shows the daily affirmation in a prominent card with gentle serif typography and a Reanimated FadeIn.delay(200).duration(600) entering animation — this card should feel like the first thing you notice. Use TanStack Query's useDailyAffirmation hook (from src/api/queries/use-lens.ts) with staleTime set to Infinity (the affirmation is cached for the full day). Below the affirmation, show active goals as expandable cards using Reanimated Layout animations (LayoutAnimation.springify()) for smooth expand/collapse. Each goal card shows its title and an animated circular progress indicator — use Reanimated useAnimatedProps to animate the SVG circle stroke-dashoffset as actions are completed. Tapping a goal expands it with a spring animation to show action items as a checklist. Completing an action triggers: Haptics.impactAsync(ImpactFeedbackStyle.Light), a Reanimated withSpring scale bounce on the checkbox (1 → 1.3 → 1), and an optimistic update via TanStack Query's useCompleteAction mutation (checkbox fills immediately, server syncs in background, rollback on failure). When all actions in a goal are complete, trigger Haptics.notificationAsync(NotificationFeedbackType.Success) and play a subtle confetti-like Reanimated animation. Add a floating action button with a Reanimated withSpring scale entrance animation. The create goal flow opens as a bottom sheet (animated with Reanimated SlideInDown.springify()) with title and optional description fields. All data flows through TanStack Query hooks: useGoals, useCreateGoal, useCompleteAction, useDailyAffirmation. Goals and actions persist offline via TanStack Query's cache + MMKV persistence — creating a goal while offline queues the mutation. Keep the UI minimal and focused — this is the "direction" pillar, it should feel empowering and clear, not like a corporate task manager.
### Session C7: Gallery and History Views
Prompt to Claude Code:
> Build the Gallery tab as a premium history experience using FlashList for buttery-smooth scrolling. Use @shopify/flash-list instead of FlatList — it recycles views for 60fps scrolling even with hundreds of items. The gallery shows a unified timeline with three content types: saved turns (with perspective previews), mirror session summaries (with fragment count and date), and lens achievements (completed actions and goals). Each type should have a distinct but cohesive card design using Reanimated entering animations — cards use FadeInUp.delay(index * 80) for staggered appearance as they scroll into view. Add filtering by content type and date range with an animated filter bar (Reanimated layout animations for smooth filter chip transitions). Add search across saved turn content and mirror reflections — the search bar slides in with Reanimated SlideInDown and uses debounced TanStack Query with the search term as a query key. For free users, gallery shows last 30 days; for prism users, full history. Implement infinite scroll with cursor-based pagination using TanStack Query's useInfiniteQuery — configure getNextPageParam for cursor-based pagination and flatMap pages for FlashList data. Add pull-to-refresh with a custom Reanimated-powered refresh indicator (not the default system spinner). Add a detail view that opens when tapping any gallery item with a shared element transition (Reanimated's SharedTransition) — the card expands into the full detail view. Turns show full perspectives, mirror sessions show the reflection, lens items show the goal with all actions. Add Haptics.selectionAsync() when tapping gallery items. All data through TanStack Query hooks: useGalleryItems (infinite query), useGallerySearch, useTurnDetail, useMirrorDetail. Stale time 2 minutes for gallery list, 5 minutes for detail views. Gallery works offline with cached data from TanStack Query's persisted cache.
### Session C8: Onboarding Flow
Prompt to Claude Code:
> Build the onboarding experience in the (auth) group as a premium first impression. After registration, guide the user through 3-4 screens using a horizontal pager with Reanimated-powered page transitions — each page slides in with a spring physics animation using withSpring({ damping: 20, stiffness: 90 }) for a soft, organic feel (not the stiff default transitions). Screen 1 explains the kaleidoscope metaphor — "Your thoughts are like light. Sometimes they get stuck in one pattern. Kalei helps you turn them and see new colors." Animate the kaleidoscope illustration with a gentle Reanimated rotation (withRepeat + withTiming over 8 seconds) so it feels alive. Screen 2 asks the user to set their preferred reframe style (compassionate, rational, growth) with brief descriptions of each — use Reanimated scale animations on the selection cards (withSpring scale 1 → 1.05 on selection) and Haptics.selectionAsync() on tap. Screen 3 invites the user to try their first Turn — embed a mini version of the Turn input with a pulsing "Turn" button (Reanimated withRepeat opacity animation). Screen 4 shows the result and introduces the daily rhythm (morning affirmation, journaling, reframing) — use staggered FadeInUp animations for each rhythm item. Add a progress indicator using Reanimated interpolation — animated dots that fill as the user progresses, with Haptics.impactAsync(ImpactFeedbackStyle.Soft) on each page transition. Store onboarding completion in the profile via a TanStack Query mutation (useCompleteOnboarding). Skip onboarding for returning users by checking the auth store's onboarding_complete flag (persisted in MMKV). The entire onboarding should feel like a guided meditation — slow, intentional, beautiful.
---
## 7. Build Track D — Launch Hardening (Weeks 9-10)
Duration: 2-4 weeks
Goal: Production-safe, store-ready, monitored.
### Session D1: Safety Hardening
> Expand crisis keyword sets with comprehensive coverage. Add regional crisis resources for all launch regions. Add a safety dashboard endpoint that returns: total safety events by type, false positive rate (manually tagged), response time percentiles. Add an admin-only endpoint to review and tag safety events for quality improvement. Verify that no code path can return reframed content when crisis is detected — write an integration test that traces every AI-facing route with crisis input and asserts the response is always the crisis resource payload.
### Session D2: Billing Integration — Apple
> Implement full App Store Server Notifications v2 handling. Verify notification signatures using Apple's public key. Parse all notification types: SUBSCRIBED, DID_RENEW, EXPIRED, DID_FAIL_TO_RENEW, REFUND, REVOKE, GRACE_PERIOD_EXPIRED. Update subscription status and entitlement snapshots accordingly. Implement a reconciliation job that periodically verifies subscription status directly with Apple's API to catch any missed webhooks. Test with Apple's sandbox environment.
### Session D3: Billing Integration — Google
> Implement Google Play Real-Time Developer Notifications. Verify Pub/Sub message authenticity. Handle all notification types: SUBSCRIPTION_RECOVERED, SUBSCRIPTION_RENEWED, SUBSCRIPTION_CANCELED, SUBSCRIPTION_PURCHASED, SUBSCRIPTION_ON_HOLD, SUBSCRIPTION_IN_GRACE_PERIOD, SUBSCRIPTION_EXPIRED, SUBSCRIPTION_REVOKED. Update subscription and entitlement records. Add reconciliation job for Google Play API verification.
### Session D4: Security Audit Pass
> Review and harden: verify all endpoints require authentication (except /health, /auth/register, /auth/login, /auth/refresh, billing webhooks). Verify password hashing uses Argon2id with secure parameters. Verify JWT tokens use appropriate algorithms and key sizes. Verify refresh token rotation and compromise detection work. Add secure HTTP headers (HSTS, CSP, X-Frame-Options). Verify no PII in logs. Verify column-level encryption on mirror message content and turn input text. Run a dependency audit (pnpm audit). Add rate limiting on auth endpoints (5 attempts per minute per IP for login). Add PostgreSQL Row-Level Security (RLS) policies as defense-in-depth: enable RLS on all user-data tables, create policies that restrict SELECT/UPDATE/DELETE to rows where user_id matches the authenticated user's ID from the JWT claim. This ensures that even if application-level ownership checks are bypassed, the database itself prevents cross-user data access. Create a migration that enables RLS and adds the policies. Test: verify a query with the wrong user_id returns zero rows when RLS is active.
### Session D5: Performance and Load Testing
> Create a load test script using k6 or autocannon that simulates: 50 concurrent users performing the Turn flow (the most AI-intensive path), 20 concurrent Mirror sessions with message submission, 100 concurrent health checks. Measure: p50/p95/p99 latency for each endpoint, error rate under load, AI gateway response time distribution, database query time distribution. Identify and fix any bottlenecks. Target: p95 under 3.5s for AI-backed endpoints, p95 under 200ms for non-AI endpoints. Additionally, run the full Maestro E2E suite (apps/mobile/e2e/) on both iOS simulator and Android emulator to verify all critical flows pass under realistic conditions. Measure: screen transition times (target under 300ms), animation frame drops (target zero dropped frames in Reanimated animations), TanStack Query cache hit rates, and offline-to-online mutation resume success rate.
### Session D6: Deployment Pipeline
> Create the production deployment setup. Docker Compose for the VPS (Fastify API, Postgres 16, Redis 7, Caddy reverse proxy with auto-HTTPS via Let's Encrypt — all free). Create deploy.sh that: builds the API, runs migrations, restarts the service with zero-downtime (using Fastify's graceful shutdown). Create backup-db.sh for automated PostgreSQL backups. Set up the mobile build pipeline using local builds as the primary path: configure eas.json with development, preview, and production profiles, then build locally using `eas build --local --platform ios --profile production` and `eas build --local --platform android --profile production`. This avoids EAS Cloud Build costs entirely — builds run on your machine using Xcode and Android Studio (both free). For development builds during day-to-day work, use `npx expo run:ios` and `npx expo run:android` which compile directly against local toolchains. EAS Cloud Build (30 free builds/month) is available as a convenience but not required. Create the app store metadata: screenshots, description, privacy policy URL, support URL.
### Session D7: Beta Testing Pipeline
> Configure TestFlight for iOS internal testing and Google Play internal testing track. Set up EAS Update for over-the-air updates during beta. Create a beta feedback mechanism — either in-app feedback button or a simple form. Add Maestro E2E tests to the release gate: configure the CI pipeline so that all Maestro flows (onboarding, turn-flow, mirror-session, purchase-flow) must pass on both iOS and Android before a TestFlight or Play Store upload proceeds. Run through the complete user journey manually: register, onboard, complete a Mirror session, do a Turn, set a Lens goal, complete an action, view Gallery. Verify that offline-first behavior works: enable airplane mode, create a turn, re-enable network, verify the turn syncs. Verify haptic feedback fires on all premium touch points (fragment tap, turn save, action complete, onboarding page transition). Document any issues. Fix launch-blocking bugs.
---
## 8. Build Track E — Spectrum Intelligence (Weeks 11-12)
Duration: 3-6 weeks
Goal: Weekly and monthly AI-powered insights from accumulated usage data.
### Session E1: Spectrum Schema and Data Pipeline
> Create Drizzle schema for spectrum tables: spectrum_session_analysis (session_id, emotional_vectors jsonb, fragment_distribution jsonb, created_at), spectrum_turn_analysis (turn_id, pre_emotional_state jsonb, post_emotional_state jsonb, impact_score float, created_at), spectrum_weekly (user_id, week_start, emotional_landscape jsonb, top_fragments jsonb, turn_impact_summary jsonb, rhythm_patterns jsonb, narrative text, created_at), spectrum_monthly (user_id, month_start, growth_trajectory jsonb, theme_evolution jsonb, narrative text, created_at). Add event hooks in Mirror and Turn services that write to spectrum analysis tables after each session/turn completes. Add exclusion flags so users can opt out specific sessions from analysis.
### Session E2: Aggregation Workers
> Build background workers for Spectrum aggregation. The weekly worker runs every Sunday night: loads all non-excluded sessions and turns from the past week, computes emotional vectors using the AI gateway (batch mode for cost savings), aggregates fragment distributions, calculates turn impact scores, generates a narrative summary of the week's patterns. The monthly worker runs on the 1st of each month: aggregates weekly data into monthly trends, identifies growth trajectory, generates a deeper narrative. Both workers must be idempotent (safe to re-run), use retry with backoff, and log failures to a dead-letter table. Use BullMQ or a simple PostgreSQL-based job queue for scheduling.
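
Idempotency for the weekly worker falls out of a deterministic job key: re-running the same week for the same user upserts the same row instead of duplicating it. A hedged sketch (the key format and UTC-Monday convention are assumptions):

```typescript
// Deterministic week_start (the Monday of the given date, UTC) as YYYY-MM-DD.
function weekStart(d: Date): string {
  const day = d.getUTCDay();        // 0 = Sunday ... 6 = Saturday
  const diff = (day + 6) % 7;       // days elapsed since Monday
  const monday = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate() - diff));
  return monday.toISOString().slice(0, 10);
}

// Job/upsert key: same user + same week always yields the same key,
// so re-runs are safe by construction.
function weeklyJobKey(userId: string, d: Date): string {
  return `spectrum_weekly:${userId}:${weekStart(d)}`;
}
```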
### Session E3: Spectrum API and Mobile Dashboard
> Create Spectrum API endpoints: GET /spectrum/weekly returns the latest weekly insight (or 404 if not enough data), GET /spectrum/monthly returns the latest monthly summary, POST /spectrum/reset clears all spectrum data for the user, POST /spectrum/exclusions manages session exclusions. Build the Spectrum mobile dashboard as the (spectrum) route group. The dashboard should show: an emotional landscape visualization (could be a simple radar chart or heatmap of the 6 emotional dimensions), fragment pattern distribution (which cognitive distortions appear most), turn impact summary (how much emotional shift the reframes produce), and the AI-generated narrative insight. Gate all Spectrum features behind the prism_plus entitlement.
---
## 9. AI Cost Control Strategy
This section defines the exact implementation of cost guardrails.
### 9.1 Budget Hierarchy
```
Global monthly cap ($50 Phase 1, scales with revenue)
├─ Per-feature daily budget
│   ├─ Turn: 40% of daily budget (core product)
│   ├─ Mirror: 35% of daily budget (core product)
│   ├─ Lens: 10% of daily budget (can fall back to templates)
│   └─ Spectrum: 15% of daily budget (batch, can defer)
└─ Per-user daily token cap
    ├─ Free: 100k tokens/day
    └─ Prism: 500k tokens/day
```
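To make the hierarchy concrete, a sketch of the budget arithmetic — the $50 cap and the percentage splits come from the tree above; the flat 30-day month divisor is an assumption:

```typescript
// Deriving per-feature daily budgets from the budget hierarchy.
const GLOBAL_MONTHLY_CAP_USD = 50;
const DAILY_BUDGET_USD = GLOBAL_MONTHLY_CAP_USD / 30; // assumed 30-day month

const FEATURE_SHARE: Record<string, number> = {
  turn: 0.40,
  mirror: 0.35,
  lens: 0.10,
  spectrum: 0.15,
};

function featureDailyBudget(feature: keyof typeof FEATURE_SHARE): number {
  return DAILY_BUDGET_USD * FEATURE_SHARE[feature];
}
```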
### 9.2 Degradation Order
When budget pressure hits (80% of daily budget consumed):
1. Lens affirmations switch to template-based (no AI cost).
2. Spectrum narrative generation deferred to next day.
3. Mirror fragment detection reduces max fragments per message from 10 to 5.
4. Turn reframes reduce from 3 perspectives to 2.
5. If 95% consumed: all non-critical AI paused, only safety detection continues.
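
The thresholds above map cleanly to a small pure function that the gateway consults before each call; a minimal sketch (level names are illustrative):

```typescript
// Maps daily budget consumption to the degradation steps listed above.
type DegradationLevel = "normal" | "degraded" | "critical";

function degradationLevel(spentUsd: number, dailyBudgetUsd: number): DegradationLevel {
  const ratio = spentUsd / dailyBudgetUsd;
  if (ratio >= 0.95) return "critical"; // only safety detection continues
  if (ratio >= 0.80) return "degraded"; // templates, fewer fragments/perspectives
  return "normal";
}
```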
### 9.3 Infrastructure Scaling Tiers
The build timeline is sequential but culminates in a single v1 launch. Infrastructure scales in tiers:
| Tier | DAU | Infrastructure | Key Milestone |
|------|-----|-----------------|-----------|
| **Launch** | 50 | Single VPS, API + DB + Redis | Initial release |
| **Traction** | 200 | Split DB, keep API monolith | First paid subscribers |
| **Growth** | 1K | Separate workers, scale API replicas | Growing active base |
| **Scale** | 10K+ | Full microservice extraction | Mature platform |
### 9.4 Provider Routing
The AI gateway routes all features through OpenRouter with provider pinning:
| Feature | Primary Provider | Fallback (provider outage) | Cost Pressure Fallback |
|---|---|---|---|
| Turn reframe | DeepSeek V3.2 via DeepInfra | DeepSeek V3.2 via Fireworks | Template system |
| Mirror detect | DeepSeek V3.2 via DeepInfra | DeepSeek V3.2 via Fireworks | — (critical) |
| Mirror reframe | DeepSeek V3.2 via DeepInfra | DeepSeek V3.2 via Fireworks | Template system |
| Lens affirmation | DeepSeek V3.2 via DeepInfra | DeepSeek V3.2 via Fireworks | Template system (no AI) |
| Crisis detection | Keyword + DeepSeek V3.2 | Keyword + Claude Haiku 4.5 | Keywords only (never skip) |
| Guide coaching | DeepSeek V3.2 via DeepInfra | DeepSeek V3.2 via Fireworks | Deferred |
| Spectrum weekly | DeepSeek V3.2 via DeepInfra | Claude Haiku 4.5 | Deferred |
| Spectrum monthly | DeepSeek V3.2 via DeepInfra | Claude Haiku 4.5 | Deferred |
OpenRouter handles failover automatically via the `order` array in API requests. Provider pinning ensures no data flows through Chinese servers (DeepInfra/Fireworks host on US/EU infrastructure).
### 9.5 Prompt Caching Strategy
System prompts for each feature are designed to be identical across users and requests. With DeepInfra's prompt caching:
- System prompt tokens (600-800 per feature) are cached after first call.
- Cache hits receive a ~17% discount on input pricing ($0.216/M vs $0.26/M).
- While less dramatic than Anthropic's 90% cache discount, the base pricing is already so low (~$0.26/M input, $0.38/M output) that effective per-user costs are minimal (~$0.034/month for free users).
- Implementation: OpenRouter's OpenAI-compatible API handles caching transparently at the provider level.
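The cache-discount arithmetic above can be expressed as a small helper, using the DeepInfra prices quoted in this plan ($0.26/M fresh input, $0.216/M cached input):

```typescript
// Effective input cost when the system prompt is served from cache.
// Prices are the DeepInfra figures used in this plan, in USD per token.
function effectiveInputCostUsd(cachedTokens: number, freshTokens: number): number {
  const CACHED_PER_M = 0.216;   // ~20% discount on cache hits
  const FRESH_PER_M = 0.26;     // cache-miss input price
  return (cachedTokens * CACHED_PER_M + freshTokens * FRESH_PER_M) / 1_000_000;
}
```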
---
## 10. Testing Strategy
### 10.1 Test Pyramid
| Layer | Tool | What | Coverage Target |
|---|---|---|---|
| Unit | Vitest | Pure functions, schemas, validators, prompt builders | High (80%+) |
| Integration | Vitest + Supertest | API routes with real DB (test container) | All endpoints |
| Safety | Vitest | Crisis detection accuracy | 100% of known patterns |
| AI Output | Vitest | Schema validation of AI responses | All prompt templates |
| E2E (Mobile) | Maestro | Critical mobile user flows | All happy paths + key error paths |
| Load | k6 | Concurrency and latency under load | pre-launch only |
### 10.2 Test Database Strategy
Use a separate PostgreSQL database for tests. Before each test suite: run migrations on the test DB. After each test suite: truncate all tables. Never run tests against the development database.
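The per-suite reset can be sketched as a SQL builder; the table names are examples from this plan's schema, and the resulting statement would be executed through your pg client in an `afterAll` hook:

```typescript
// Build the reset statement run after each test suite.
// Table list is illustrative; generate it from your migration metadata in practice.
const APP_TABLES = ["turns", "mirror_sessions", "mirror_fragments", "lens_goals", "users"];

function buildTruncateSql(tables: string[]): string {
  // RESTART IDENTITY resets sequences; CASCADE follows foreign keys,
  // so child rows never block the truncate.
  return `TRUNCATE TABLE ${tables.join(", ")} RESTART IDENTITY CASCADE;`;
}
```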
### 10.3 AI Mock Strategy
For unit and integration tests, mock the AI gateway at the provider level: create a `mock.provider.ts` that returns deterministic, schema-valid responses. Tests never hit real AI APIs, so they are fast, free, and repeatable. This is the primary reason AI costs are zero during development: you develop and test against the mock, not real providers. Add a small set of "golden file" tests that run against the real AI provider (gated behind the environment variable `REAL_AI_TESTS=true`) to catch prompt regressions. Run them only occasionally during prompt engineering, using OpenRouter credits; at DeepSeek V3.2's DeepInfra pricing ($0.26/$0.38 per MTok), golden file test costs are negligible.
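A minimal sketch of what `mock.provider.ts` could return. The `TurnResponse` shape is an assumption here; in practice it should mirror whatever Zod schema the real gateway validates:

```typescript
// Deterministic, schema-valid stand-in for the AI provider.
// Shape and perspective names are illustrative assumptions.
interface TurnResponse {
  perspectives: { style: "compassionate" | "rational" | "growth"; text: string }[];
}

function mockTurnProvider(_userText: string): TurnResponse {
  // Always the same output: tests are fast, free, and reproducible.
  return {
    perspectives: [
      { style: "compassionate", text: "A kinder reading of this thought." },
      { style: "rational", text: "The evidence for and against this thought." },
      { style: "growth", text: "What this thought makes possible." },
    ],
  };
}
```

In the test setup, the gateway is constructed with this provider instead of the OpenRouter one, so every route test exercises real validation and persistence logic against fake AI output.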
### 10.4 Maestro E2E Testing Strategy
Maestro is the industry standard for mobile E2E testing in 2026. It uses YAML-based flow definitions that are faster to write and more readable than code-based alternatives.
**Flow files live in `apps/mobile/e2e/`:**
```yaml
# e2e/onboarding.yaml — Verify new user onboarding flow
appId: com.kalei.app
---
- launchApp
- assertVisible: "Your thoughts are like light"
- swipeLeft
- assertVisible: "How would you like your reframes?"
- tapOn: "Compassionate"
- swipeLeft
- assertVisible: "Try your first Turn"
- tapOn: "What thought is weighing on you?"
- inputText: "I feel overwhelmed with everything"
- tapOn: "Turn"
- assertVisible: "perspective"
- swipeLeft
- assertVisible: "Your daily rhythm"
```
```yaml
# e2e/turn-flow.yaml — Verify complete Turn lifecycle
appId: com.kalei.app
---
- launchApp
- tapOn: "Turn"
- tapOn: "What thought is weighing on you?"
- inputText: "Nothing ever works out for me"
- tapOn: "Turn"
- assertVisible: "Compassionate"
- assertVisible: "Rational"
- assertVisible: "Growth"
- assertVisible: "If I notice"
- tapOn:
id: "save-turn-button"
- assertVisible: "Saved"
```
```yaml
# e2e/mirror-session.yaml — Verify Mirror writing and fragment detection
appId: com.kalei.app
---
- launchApp
- tapOn: "Mirror"
- tapOn: "Begin writing"
- inputText: "Everything is terrible and nothing will get better"
- wait: 3000
- assertVisible: "Catastrophizing"
- tapOn: "Catastrophizing"
- assertVisible: "See another perspective"
- tapOn: "See another perspective"
- assertVisible: "perspective"
```
```yaml
# e2e/purchase-flow.yaml — Verify paywall and upgrade path
appId: com.kalei.app
---
- launchApp
- tapOn: "Turn"
# Exhaust free tier (3 turns)
- repeat:
times: 4
commands:
- tapOn: "What thought is weighing on you?"
- inputText: "Test thought"
- tapOn: "Turn"
- wait: 2000
- assertVisible: "Upgrade to Prism"
```
**Running Maestro tests:**
```bash
# Run a single flow
maestro test apps/mobile/e2e/onboarding.yaml
# Run all flows
maestro test apps/mobile/e2e/
# Run with screenshot capture on failure
maestro test --format junit --output e2e-results/ apps/mobile/e2e/
# Run in CI (headless)
maestro test --no-ansi apps/mobile/e2e/
```
**Maestro in CI (Session D7):** Add Maestro to the CI pipeline by running the Maestro CLI locally on a CI runner with an emulator (Maestro CLI is free and open source — do not use Maestro Cloud which is a paid service). For self-hosted CI via Gitea Actions, install Maestro and an Android emulator on the runner. Gate releases on E2E pass — no TestFlight or Play Store upload if Maestro flows fail. Target: all 4 core flows pass on both iOS and Android before every release.
---
## 11. Deployment and Operations
### 11.1 Launch Deployment (Single VPS)
```
Netcup VPS 1000 G12 (€8.45/month)
├── Caddy (reverse proxy, auto-HTTPS)
├── Node.js API (Fastify, PM2 process manager)
├── PostgreSQL 16 (direct install)
├── Redis 7 (direct install)
└── Background workers (same process or separate PM2 app)
```
### 11.2 Deployment Checklist
Before every production deploy:
1. All tests pass locally.
2. Migration is tested on a staging copy of the database.
3. Rollback migration exists and is tested.
4. Health endpoint returns ok after deploy.
5. Error rate does not spike in first 15 minutes.
6. AI cost telemetry is within expected range.
### 11.3 Backup Strategy
- PostgreSQL: daily pg_dump to encrypted offsite storage.
- Redis: RDB snapshots daily (Redis data is ephemeral and rebuildable, but snapshots prevent cold-start recalculation of usage counters).
- Verified restore drill: run monthly.
---
## 12. Session Execution Checklist
Use this checklist for every Claude Code session:
```
[ ] Read the session description and understand the deliverables
[ ] Verify prerequisites from previous sessions are met
[ ] Execute the session with Claude Code
[ ] Run the validation steps — all must pass
[ ] Run the full quality check: pnpm run check
[ ] Review generated code for architecture compliance:
[ ] AI calls go through gateway only
[ ] Safety precheck on all AI-facing routes
[ ] Entitlement checks where required
[ ] Structured logging with request IDs
[ ] Zod validation on all inputs
[ ] TypeScript strict mode — no any
[ ] Commit with descriptive message
[ ] Update this checklist with any issues or deviations
```
---
## 13. Timeline Summary
| Track | Sessions | Duration | Weeks | Outcome |
|---|---|---|---|---|
| A: Foundation | A1-A6 | 3-5 days | 1-2 | Repo, infra, schema, skeletons |
| B: Platform Core | B1-B9 | 2-3 weeks | 1-3 | Auth, billing, AI gateway, safety, push, observability |
| C: Core Experience | C1-C8 | 3-5 weeks | 4-8 | Mirror, Turn, Lens, Gallery, Onboarding |
| D: Launch Hardening | D1-D7 | 2-4 weeks | 9-10 | Safety, billing, security, performance, deployment |
| E: Spectrum | E1-E3 | 3-6 weeks | 11-12 | Analytics pipeline, insights, dashboard |
Total to v1 launch (all features, end of Track E): 12 weeks.
Note: All features ship in a single v1 release. The build timeline is continuous (weeks 1-12) with sequential milestones, not separate "phases."
---
## 14. Critical Path Dependencies
```
A1 (repo) → A2 (docker) → A3 (API) → A5 (schema)
A4 (mobile) ←─── A3 (API)
A6 (shared schemas) → B1 (auth) → B2 (tokens) → B3 (entitlements)
B4 (AI gateway) → B5 (prompts) → B6 (safety) ──────→ C1 (mirror API)
B7 (rate limits) → B8 (push) → B9 (observability) ──→ C3 (turn API)
C1 → C2 (mirror UI) ─────────────────────────────→ C7 (gallery)
C3 → C4 (turn UI) ─────────────────────────────→ C7 (gallery)
C5 → C6 (lens UI) ─────────────────────────────→ C7 (gallery)
C8 (onboarding) → D1-D7 (hardening) → E1-E3 (spectrum)
```
---
## 15. What This Framework Deliberately Excludes
These are not in scope for the build framework and should be handled separately:
- Pixel-level UI specs (refer to kalei-complete-design.md and kalei-brand-guidelines.md)
- Marketing copy and App Store optimization (refer to app-blueprint.md section on ASO)
- Legal review of privacy policy and terms of service
- Detailed threat modeling (should be commissioned separately before pre-launch)
- Community building and growth hacking strategy
- Investor materials or pitch decks
---
*This framework is the operational bridge between Kalei's architecture documents and actual code. Follow it session by session. Deviate only when reality demands it, and document every deviation.*


# Kalei Getting Started Guide (Beginner Friendly)
Last updated: 2026-02-10
Audience: First-time app builders
This guide explains the groundwork you need before coding, then gives you the exact first steps to start building Kalei.
Reference architecture: `docs/kalei-system-architecture-plan.md`
## 1. What You Are Building
Kalei is a mobile-first mental wellness product with four product pillars:
- Mirror: freeform writing with passive AI fragment detection.
- Turn (Kaleidoscope): structured AI reframing.
- Lens: goals, daily actions, and affirmations.
- Spectrum: weekly and monthly insight analytics.
At launch, your implementation target is:
- Mobile app: React Native + Expo.
- Backend API: Node.js + Fastify.
- Data: PostgreSQL + Redis.
- Source control and CI: Gitea + Gitea Actions (or Woodpecker CI).
- AI access: provider-agnostic AI Gateway using open-weight models (Ollama for local dev, vLLM for staging/prod).
- Billing and entitlements: self-hosted entitlement service (direct Apple App Store + Google Play verification, no RevenueCat dependency).
## 2. How To Use This Document Set
Read in this order:
1. `docs/kalei-getting-started.md` (this file)
2. `docs/codex phase documents/README.md`
3. `docs/codex phase documents/phase-0-groundwork-and-dev-environment.md`
4. `docs/codex phase documents/phase-1-platform-foundation.md`
5. `docs/codex phase documents/phase-2-core-experience-build.md`
6. `docs/codex phase documents/phase-3-launch-readiness-and-hardening.md`
7. `docs/codex phase documents/phase-4-spectrum-and-scale.md`
## 3. Groundwork Checklist (Before You Write Feature Code)
## 3.1 Accounts You Need
Create these accounts first so you do not block yourself later.
Must have for early development:
- Gitea (source control and CI)
- Apple Developer Program (iOS distribution, required)
- Google Play Console (Android distribution, required)
- DNS provider account (or self-hosted DNS using PowerDNS)
Strongly recommended now (not later):
- GlitchTip (open-source error tracking)
- PostHog self-hosted (open-source product analytics)
- Domain registrar account (for `kalei.ai`)
## 3.2 Local Tools You Need
Install this baseline stack:
- Git
- Node.js LTS (via `nvm`, recommended)
- npm (bundled with Node) or pnpm
- Docker Desktop (for PostgreSQL + Redis locally)
- VS Code (or equivalent IDE)
- Expo Go app on your phone (iOS/Android)
- Ollama (local open-weight model serving)
Install and verify:
```bash
# Git
git --version
# nvm + Node LTS
nvm install --lts
nvm use --lts
node -v
npm -v
# Docker
docker --version
docker compose version
# Expo CLI (optional global; npx is also fine)
npx expo --version
# Ollama
ollama --version
```
## 3.3 Decide Your Working Model
Set these rules now:
- Work in short feature slices with demoable outcomes.
- Every backend endpoint gets at least one automated test.
- No direct AI calls from client apps.
- No secrets in the repo, ever.
- Crisis-level text is never reframed.
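The crisis rule above can be enforced with a keyword precheck that runs before any reframe call. The keyword list below is illustrative only and needs clinical review; AI-based classification runs in addition to it, never instead of it:

```typescript
// First line of the safety branch: cheap, deterministic, never skipped.
// Keyword list is a placeholder -- the real list requires clinical review.
const CRISIS_KEYWORDS = ["kill myself", "end my life", "suicide"];

function isPotentialCrisis(text: string): boolean {
  const lower = text.toLowerCase();
  // Any match short-circuits the reframe pipeline and routes to the crisis path.
  return CRISIS_KEYWORDS.some((keyword) => lower.includes(keyword));
}
```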
## 4. Recommended Monorepo Structure
If you are starting from scratch, use this layout:
```text
Kalei/
apps/
mobile/
services/
api/
workers/
packages/
shared/
infra/
docker/
scripts/
docs/
```
Why this structure:
- Keeps mobile and backend isolated but coordinated.
- Lets you share schemas/types in `packages/shared`.
- Keeps infra scripts in one predictable place.
## 5. Step-By-Step Initial Setup
These are the first practical steps for week 1.
## Step 1: Initialize folders
```bash
mkdir -p apps/mobile services/api services/workers packages/shared infra/docker infra/scripts
```
## Step 2: Bootstrap the mobile app
```bash
npx create-expo-app@latest apps/mobile --template tabs
cd apps/mobile
npm install
cd ../..
```
## Step 3: Bootstrap the API service
```bash
mkdir -p services/api && cd services/api
npm init -y
npm install fastify @fastify/cors @fastify/helmet @fastify/sensible zod dotenv pino pino-pretty pg ioredis
npm install -D typescript tsx @types/node vitest supertest @types/supertest eslint prettier
npx tsc --init
cd ../..
```
## Step 4: Bring up local PostgreSQL and Redis
Create `infra/docker/docker-compose.yml`:
```yaml
services:
postgres:
image: postgres:16
environment:
POSTGRES_USER: kalei
POSTGRES_PASSWORD: kalei
POSTGRES_DB: kalei
ports:
- "5432:5432"
volumes:
- pg_data:/var/lib/postgresql/data
redis:
image: redis:7
ports:
- "6379:6379"
volumes:
- redis_data:/data
volumes:
pg_data:
redis_data:
```
Start services:
```bash
docker compose -f infra/docker/docker-compose.yml up -d
```
## Step 5: Create environment files
Create:
- `services/api/.env`
- `apps/mobile/.env`
API `.env` minimum:
```env
NODE_ENV=development
PORT=8080
DATABASE_URL=postgres://kalei:kalei@localhost:5432/kalei
REDIS_URL=redis://localhost:6379
JWT_ACCESS_SECRET=replace_me
JWT_REFRESH_SECRET=replace_me
AI_PROVIDER=openai_compatible
AI_BASE_URL=http://localhost:11434/v1
AI_MODEL=qwen2.5:14b
AI_API_KEY=local-dev
GLITCHTIP_DSN=replace_me
POSTHOG_API_KEY=replace_me
POSTHOG_HOST=http://localhost:8000
APPLE_SHARED_SECRET=replace_me
GOOGLE_PLAY_PACKAGE_NAME=com.kalei.app
```
Mobile `.env` minimum:
```env
EXPO_PUBLIC_API_BASE_URL=http://localhost:8080
EXPO_PUBLIC_ERROR_TRACKING_DSN=replace_me
```
## Step 6: Create your first backend health endpoint
Create `/health` returning status, uptime, and version. This is your first proof that the API is running.
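A minimal sketch of the payload. Wiring it up is a one-line Fastify route; the `version` string would come from `package.json`:

```typescript
// Shape of the /health response: status, uptime, version.
interface Health {
  status: "ok";
  uptimeSeconds: number;
  version: string;
}

const startedAt = Date.now(); // captured once at process start

function healthPayload(version: string, now: number = Date.now()): Health {
  return {
    status: "ok",
    uptimeSeconds: Math.floor((now - startedAt) / 1000),
    version,
  };
}

// Fastify wiring (sketch): app.get("/health", () => healthPayload("0.1.0"));
```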
## Step 7: Connect mobile app to backend
Add a tiny service function in mobile that calls `/health` and shows the result on screen.
## Step 8: Add migrations baseline
Create migration folders and your first migration for:
- users
- profiles
- auth_sessions
- refresh_tokens
Add a migration script and run it on local Postgres.
## Step 9: Set up linting, formatting, and tests
At minimum:
- API: `npm run lint`, `npm run test`
- Mobile: `npm run lint`
## Step 10: Push a clean baseline commit
Your first stable commit should include:
- mobile app runs
- API runs
- db and redis run in Docker
- health endpoint tested
- env files templated
## 6. Non-Negotiable Ground Rules
These reduce rework and production risk.
- API-first contracts: Define backend request/response schema first.
- Version prompts: Keep prompt templates in source control with version tags.
- Idempotency: write endpoints should support idempotency keys.
- Structured logs: every request gets a request ID.
- Safety-first branching: crisis path is explicit and tested.
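The idempotency rule above can be sketched with an in-memory `Map` standing in for Redis; key naming and eviction are illustrative:

```typescript
// Idempotency-key replay cache. In production this lives in Redis with a TTL;
// a Map is enough to show the contract.
const completed = new Map<string, unknown>();

function withIdempotency<T>(key: string, handler: () => T): T {
  if (completed.has(key)) {
    return completed.get(key) as T; // replayed request: return the first result
  }
  const result = handler();         // first time: execute the write
  completed.set(key, result);       // remember it so retries are safe
  return result;
}
```

Clients send the key in a header (commonly `Idempotency-Key`), so a retried Turn submission never creates a duplicate row.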
## 7. Open-Source-First Policy
Default to open-source tools unless there is a hard platform requirement.
Open-source defaults for Kalei:
- Git forge and CI: Gitea + Gitea Actions (or Woodpecker CI)
- Error tracking: GlitchTip
- Product analytics: PostHog self-hosted
- AI serving: Ollama (local), vLLM (staging/prod)
- Runtime and data: Fastify, PostgreSQL, Redis
Unavoidable non-open-source dependencies:
- Apple App Store distribution and StoreKit APIs
- Google Play distribution and Billing APIs
- APNs and FCM for push delivery
## 8. What "Done" Looks Like For Groundwork
Before the build starts, you should be able to demonstrate:
- Local stack boot with one command (`docker compose ... up -d`).
- API starts with no errors and serves `/health`.
- Mobile app opens and calls API successfully.
- Baseline DB migrations run and rollback cleanly.
- One CI pipeline runs lint + tests on pull requests.
## 9. Common Beginner Mistakes (Avoid These)
- Building UI screens before backend contracts exist.
- Calling AI provider directly from the app.
- Waiting too long to add tests and logs.
- Keeping architecture only in your head (not in docs).
- Delaying safety and privacy work until late phases.
## 10. Recommended Weekly Rhythm
Use this cycle every week:
1. Plan: define exact outcomes for the week.
2. Build: complete one vertical slice (API + mobile + data).
3. Verify: run tests, manual QA, and failure-path checks.
4. Demo: produce a short demo video for your own review.
5. Retrospective: capture blockers and adjust next week.
## 11. Next Step
Start with:
- `docs/codex phase documents/phase-0-groundwork-and-dev-environment.md`
Then execute each phase in order.


# Kalei — Infrastructure & Financial Plan
## The Constraint
**Starting capital:** €0–€2,000 max
**Monthly burn target:** Under €30/month at launch, scaling only when revenue justifies it
**Goal:** Ship a production-quality AI mental wellness app that can serve its first 1,000 users without going broke
---
## 1. The AI Decision (This Is Everything)
AI is 70–90% of Kalei's variable cost. Every other infrastructure decision is rounding error compared to this one.
### The Research That Changed Everything
A 2025 Nature study tested LLMs on 5 standardized emotional intelligence tests. DeepSeek V3, Claude 3.5 Haiku, and several other LLMs all outperformed humans (81% avg vs 56% human avg). The gap between Claude and cheaper open-weight models on emotion understanding is much smaller than originally assumed. This opened the door to a dramatically cheaper AI strategy.
### The Decision: OpenRouter Gateway + DeepSeek V3.2 (Non-Chinese Hosting)
**Primary engine:** DeepSeek V3.2 via OpenRouter, pinned to non-Chinese providers (DeepInfra / Fireworks)
**Automatic fallback:** Claude Haiku 4.5 via OpenRouter (activated if primary provider has an outage)
**Batch processing:** DeepSeek V3.2 for Spectrum analysis and weekly insights (no separate batch tier needed at this price point)
| | DeepInfra (via OpenRouter) | Claude Haiku 4.5 (fallback) | Savings |
|---|---|---|---|
| Input (cache miss) | $0.26/M | $1.00/M | 74% cheaper |
| Input (cache hit) | $0.216/M | $0.10/M | — |
| Output | $0.38/M | $5.00/M | 92% cheaper |
### Why OpenRouter + Non-Chinese Providers (Not DeepSeek Direct)
DeepSeek's direct API is cheaper ($0.028 cache hits, $0.42 output) but routes all data through Chinese servers. For a mental wellness app handling sensitive emotional content, this is a non-starter for both user trust and GDPR compliance. Routing through DeepInfra/Fireworks (US/EU infrastructure) via OpenRouter costs ~2–3× more than the direct API but still delivers ~85–90% savings vs Claude Haiku.
OpenRouter gives us:
- **Provider pinning** — deterministic routing to non-Chinese hosts via the `order` array in API calls
- **Automatic failover** — if DeepInfra goes down, routes to Fireworks or Claude Haiku automatically
- **One API, one billing** — no lock-in, switching models is a config change not a code change
- **No markup** on base provider pricing (OpenRouter doesn't add fees on paid models)
### Self-Hosted GPU: Not Yet
GPU self-hosting (Qwen3-30B-A3B on RTX 4090 at ~$245/month) only beats the API route at ~600+ DAU. Below that, APIs are cheaper. Revisit when user base justifies fixed GPU costs, or if data sovereignty becomes a hard requirement.
### Why Not Tiered Models (Option D)?
We evaluated a hybrid strategy using different models per feature tier (DeepSeek for emotional tasks, Qwen3 via Groq for structured generation, batch APIs for analytics). At our current scale, the complexity cost outweighs the savings: separate prompt tuning, multiple quality benchmarks, routing logic in the ai_gateway, and edge cases when tasks don't cleanly fit one tier. The ~$30–50/month savings doesn't justify maintaining four model configurations as a solo founder. Introduce tiering only when usage data reveals which tasks genuinely benefit from a different model.
---
## 2. Per-User AI Cost Model
Here's what a real user session looks like in tokens:
### The Mirror (Freeform Writing + AI Highlights)
| Component | Input Tokens | Output Tokens |
|---|---|---|
| System prompt (cached after first call) | ~800 | — |
| User's writing (per session, ~300 words) | ~400 | — |
| Fragment detection (5 highlights avg) | — | ~500 |
| Inline reframe (per tap, user triggers ~2) | ~200 | ~150 |
| Session Reflection | ~300 | ~400 |
| **Total per Mirror session** | **~1,700** | **~1,050** |
With prompt caching (system prompt cached): effective input ≈ 80 tokens (800 cached at 0.1×) + 900 fresh = **~980 billable input tokens**
### The Kaleidoscope (One Turn)
| Component | Input Tokens | Output Tokens |
|---|---|---|
| System prompt (cached) | ~600 | — |
| User's fragment + context | ~300 | — |
| 3 reframe perspectives | — | ~450 |
| **Total per Turn** | **~900** | **~450** |
With caching: ~360 billable input tokens
### The Lens (Daily Affirmation)
| Component | Input Tokens | Output Tokens |
|---|---|---|
| System prompt (cached) | ~400 | — |
| User context + goals | ~200 | — |
| Generated affirmation | — | ~100 |
| **Total per daily affirmation** | **~600** | **~100** |
With caching: ~240 billable input tokens
### The Guide (Active Coaching Layer)
| Component | Input Tokens | Output Tokens |
|---|---|---|
| System prompt (cached after first call) | ~600 | — |
| Goal Check-In conversation (per check-in, ~4 exchanges) | ~1,200 | ~800 |
| Cross-Feature Bridge detection (per analysis pass) | ~500 | ~200 |
| Attention Prompt generation (per prompt) | ~300 | ~100 |
| Evidence Intervention (per intervention) | ~400 | ~300 |
| Weekly Pulse AI Read (per pulse) | ~800 | ~500 |
| **Total per weekly cycle (1 check-in + 7 prompts + 1 pulse + 1 bridge analysis)** | **~4,600** | **~2,600** |
With prompt caching: effective input ≈ 3,200 billable input tokens per week
**Note on Guide intelligence:** The Guide requires cross-feature context analysis — it reads Mirror sessions, Turn history, and Lens goals to generate bridges and check-ins. This makes its per-call token count higher than single-feature interactions, but the calls are less frequent (weekly check-ins, daily prompts, bridges max once/day). The Guide also benefits heavily from prompt caching since its system prompt and user context window are reused across multiple Guide interactions.
### Monthly Usage Per Active User Profile
**Free user** (3 Turns/day, 2 Mirror sessions/week, daily Lens, basic Guide):
| Feature | Sessions/Month | Billable Input Tokens | Output Tokens |
|---|---|---|---|
| Kaleidoscope | 90 Turns | 32,400 | 40,500 |
| Mirror | 8 sessions | 7,840 | 8,400 |
| Lens | 30 affirmations | 7,200 | 3,000 |
| Guide (basic: 1 check-in, 12 prompts, 4 self-report pulses, bridges) | ~15 interactions | 6,400 | 3,200 |
| **Total** | | **53,840** | **55,100** |
**Cost with DeepSeek V3.2 via DeepInfra:** (53,840 × ~$0.24 blended + 55,100 × $0.38) / 1,000,000 = **$0.013 + $0.021 = ~$0.034/month**
*(Previous Claude Haiku 4.5 estimate: $0.33/month — this is a ~90% reduction)*
**Prism subscriber** (unlimited usage, assume 2× free user + full Guide + Spectrum):
| Feature | Sessions/Month | Billable Input Tokens | Output Tokens |
|---|---|---|---|
| Kaleidoscope | 180 Turns | 64,800 | 81,000 |
| Mirror | 16 sessions | 15,680 | 16,800 |
| Lens | 30 affirmations | 7,200 | 3,000 |
| Guide (full: 4 check-ins, 30 prompts, 4 full pulses, evidence interventions, all bridges) | ~50 interactions | 22,400 | 14,000 |
| Spectrum (batch) | 4 analyses | 8,000 | 12,000 |
| **Total** | | **118,080** | **126,800** |
**Cost with DeepSeek V3.2 via DeepInfra:** (118,080 × ~$0.24 + 126,800 × $0.38) / 1,000,000 = **$0.028 + $0.048 = ~$0.076/month**
*(Previous Claude Haiku 4.5 estimate: $0.72/month — this is a ~89% reduction)*
**Reality check:** Most users won't hit max usage. Expect average active user cost of **$0.03–$0.06/month.** The Guide adds ~$0.005–$0.01/month for free users and ~$0.01–$0.02/month for Prism subscribers: negligible cost for a significant retention benefit.
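The arithmetic above can be wrapped in a small helper, useful for the cost-telemetry logging mentioned elsewhere in this plan. The blended $0.24/M input figure is the one used in the estimates above:

```typescript
// Monthly AI cost from billable token counts, at this plan's DeepInfra
// prices: ~$0.24/M blended input (cache-adjusted), $0.38/M output.
function monthlyAiCostUsd(
  inputTokens: number,
  outputTokens: number,
  inputPerM = 0.24,
  outputPerM = 0.38,
): number {
  return (inputTokens * inputPerM + outputTokens * outputPerM) / 1_000_000;
}
```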
---
## 3. Infrastructure Stack
### Server: Netcup VPS 1000 G12
| Spec | Value |
|---|---|
| CPU | 4 vCores (AMD EPYC) |
| RAM | 8 GB DDR5 ECC |
| Storage | 256 GB NVMe |
| Bandwidth | Unlimited, 2.5 Gbps |
| Location | Nuremberg, Germany |
| **Price** | **€8.45/month** (~$9.20) |
This runs everything: API server, database, Redis cache, reverse proxy. Comfortably handles hundreds of concurrent users. Can upgrade to VPS 2000 (€15.59/mo) when we outgrow it.
**What runs on this box:**
- Node.js / Express API server (or Fastify for speed)
- PostgreSQL 16 (direct install, not Supabase overhead)
- Redis (session cache, rate limiting, prompt cache keys)
- Nginx (reverse proxy, SSL termination, rate limiting)
- Certbot (free SSL via Let's Encrypt)
### Why NOT Supabase Cloud
Supabase Cloud Pro is $25/month — that's 3× our VPS cost and we'd still need a separate server for the API layer. Self-hosting Supabase via Docker is possible but adds ~2GB RAM overhead for all the services (GoTrue, PostgREST, Realtime, Storage, Kong). On an 8GB VPS, that leaves very little room.
**Instead:** Run PostgreSQL directly. We get all the database functionality we need (Row Level Security, triggers, functions, JSON support) without the Supabase services overhead. We build our own auth layer (JWT-based, simple) and our own API. This is leaner, cheaper, and gives us full control.
If we later want Supabase features (real-time subscriptions, storage), we can self-host just the components we need.
### Domain & DNS
| Item | Cost |
|---|---|
| kalei.ai domain | ~$50–70/year (~$5/month) |
| Cloudflare DNS (free tier) | $0 |
| Cloudflare CDN/DDoS (free tier) | $0 |
### App Deployment & Distribution
| Item | Cost |
|---|---|
| Expo / EAS Build (free tier) | $0 (limited builds, queue wait) |
| Apple Developer Program | $99/year (~$8.25/month) |
| Google Play Developer | $25 one-time |
| Push Notifications (Firebase Cloud Messaging) | $0 |
**Build strategy:** Use Expo free tier for development. For production releases, use EAS free tier (low priority queue, ~30 min wait) or build locally. At 2–4 builds per month, the free tier is fine.
### Email & Transactional
| Item | Cost |
|---|---|
| Resend (transactional email, free tier) | $0 (up to 100 emails/day) |
| Or Brevo free tier | $0 (300 emails/day) |
### Monitoring & Error Tracking
| Item | Cost |
|---|---|
| Sentry (free tier) | $0 (5K errors/month) |
| UptimeRobot (free tier) | $0 (50 monitors) |
| Custom logging to PostgreSQL | $0 |
---
## 4. Total Monthly Cost Breakdown
### Development (Pre-Launch)
| Item | Monthly Cost |
|---|---|
| Netcup VPS 1000 G12 | €8.45 |
| Domain (kalei.ai) | ~€5.00 |
| OpenRouter API (dev/testing) | ~€5 |
| Expo Free Tier | €0 |
| Cloudflare, Sentry, email | €0 |
| **Total** | **~€18.50/month** |
**Upfront costs:** Apple Developer ($99) + Google Play ($25) + Domain (~$55/year) = **~€180 one-time**
### At Launch (0–500 users, ~50 DAU)
All features ship in v1: Mirror, Turn, Lens, Spectrum, Rehearsal, Ritual, Evidence Wall, Guide. Assuming 50 daily active users, ~200 registered:
| Item | Monthly Cost |
|---|---|
| Netcup VPS 1000 G12 | €8.45 |
| Domain | ~€5.00 |
| AI via OpenRouter (~50 active × $0.04 avg) | ~€2.00 |
| Expo Free Tier | €0 |
| Infrastructure (Cloudflare, etc.) | €0 |
| **Total** | **~€15.50/month** |
### At Traction (500–2,000 users, ~200 DAU)
| Item | Monthly Cost |
|---|---|
| Netcup VPS 2000 G12 (upgrade) | €15.59 |
| Domain | ~€5.00 |
| AI via OpenRouter (~200 active × $0.04 avg) | ~€8.00 |
| Expo Starter (if needed for OTA updates) | €19.00 |
| Email (may need paid tier) | €0–10 |
| **Total** | **~€48–58/month** |
### At Growth (2,000–10,000 users, ~1,000 DAU)
| Item | Monthly Cost |
|---|---|
| Netcup VPS 4000 G12 | €26.18 |
| Domain | ~€5.00 |
| AI via OpenRouter (~1,000 active × $0.04 avg) | ~€40.00 |
| Expo Production plan | €99.00 |
| Email paid tier | ~€20 |
| Sentry paid (if needed) | ~€26 |
| **Total** | **~€216/month** |
AI cost is now only ~19% of total spend at 1,000 DAU (down from ~60% under the Haiku-first plan). Infrastructure and app store tooling become the dominant costs at scale.
---
## 5. Pricing Reevaluation
### The Old Price: $7.99/month (Prism)
Based on the cost model above, let's check if this works:
**At 50 DAU (~10 paying subscribers):**
- Revenue: 10 × $7.99 = $79.90
- Costs: ~$28
- **Margin: +$52 (65%)**
**At 200 DAU (~40 paying subscribers @ 20% conversion):**
- Revenue: 40 × $7.99 = $319.60
- Costs: ~$100
- **Margin: +$220 (69%)**
**At 1,000 DAU (~150 paying subscribers @ 15% conversion):**
- Revenue: 150 × $7.99 = $1,198.50
- Costs: ~$425
- **Margin: +$773 (65%)**
The margins are healthy. But $7.99 feels like a lot for a brand-new app from an unknown brand in a competitive wellness space. Users compare it against Headspace ($12.99) and Calm ($14.99), but those apps have massive brand recognition and content libraries.
### The New Price: $4.99/month (Prism)
**Why $4.99:**
- Psychological barrier is much lower — impulse-buy territory
- Significantly undercuts major competitors while offering AI personalization they don't have
- At ~$0.08/month cost per Prism subscriber (including full Guide coaching), the margin is **98%**
- Annual option: $39.99/year ($3.33/month) — strong incentive to commit
- Free tier remains generous enough to demonstrate value (3 Turns/day, 2 Mirror/week, basic Guide)
**Revised projections at $4.99 (with OpenRouter + DeepSeek V3.2 AI strategy):**
| Scale | Paying Users | Monthly Revenue | Monthly Cost | Margin |
|---|---|---|---|---|
| Launch (~50 DAU) | 15 (higher conversion at lower price) | $74.85 | ~$16 | +$59 (79%) |
| Traction (~200 DAU) | 60 | $299.40 | ~$53 | +$246 (82%) |
| Growth (~1,000 DAU) | 250 | $1,247.50 | ~$216 | +$1,032 (83%) |
The AI cost reduction transforms the unit economics. Margins now exceed 79% at every stage, and break-even comes faster.
### Alternative: Tiered Pricing
| Tier | Price | What You Get |
|---|---|---|
| **Free** | $0 | 3 Turns/day, 2 Mirror/week, basic Lens, 30-day Gallery |
| **Prism** | $4.99/mo | Unlimited Turns + Mirror, advanced reframe styles, full Gallery, fragment tracking |
| **Prism+** | $9.99/mo | Everything in Prism + full Spectrum dashboard, weekly/monthly AI insights, export, priority processing |
This is smart because Spectrum is the most expensive feature (batch AI analysis of historical data) and the most valuable retention tool. Gating it behind a higher tier means only your most engaged (and willing-to-pay) users generate that cost, and they're paying for it.
---
## 6. Revenue Milestones & Sustainability
### Break-Even Analysis
**Monthly fixed costs (at launch):** ~€14 (VPS + domain)
**Variable cost per active user:** ~€0.04
Break-even on fixed costs alone: **3 Prism subscribers at $4.99** cover the infrastructure.
To cover Apple's annual fee ($99) and Google ($25 amortized): add ~$10/month → total of **5 subscribers** to fully break even.
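The break-even arithmetic above can be sketched directly, assuming ~€14 of fixed infrastructure ≈ $14.50 (the exchange rate is an assumption) and ~$10/month of amortized store fees:

```typescript
// Subscribers needed to cover a monthly fixed cost at the Prism price.
function subscribersToBreakEven(monthlyFixedUsd: number, pricePerSub = 4.99): number {
  return Math.ceil(monthlyFixedUsd / pricePerSub);
}

// Infrastructure only (~$14.50): 3 subscribers.
// Infrastructure plus amortized store fees (~$24.50): 5 subscribers.
```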
### Path to Sustainability
| Milestone | Users | Paying | MRR | Costs | Profit |
|---|---|---|---|---|---|
| Month 3 | 100 | 5 | $25 | $17 | +$8 |
| Month 6 | 500 | 30 | $150 | $22 | +$128 |
| Month 9 | 1,500 | 80 | $400 | $35 | +$365 |
| Month 12 | 3,000 | 200 | $1,000 | $60 | +$940 |
| Month 18 | 8,000 | 600 | $3,000 | $150 | +$2,850 |
The model is profitable from **month 3** with just 5 paying subscribers. The 90% AI cost reduction means Kalei reaches profitability immediately at launch rather than needing a 4–5 month runway.
---
## 7. Technical Architecture Summary
```
┌─────────────────────────────────────────────────────┐
│ CLIENTS │
│ React Native (iOS + Android) │
│ via Expo / EAS │
└──────────────────┬──────────────────────────────────┘
│ HTTPS
┌─────────────────────────────────────────────────────┐
│ CLOUDFLARE (Free Tier) │
│ DNS · CDN · DDoS Protection · SSL │
└──────────────────┬──────────────────────────────────┘
┌─────────────────────────────────────────────────────┐
│ NETCUP VPS 1000 G12 (€8.45/mo) │
│ │
│ ┌──────────┐ ┌───────────┐ ┌──────────────────┐ │
│ │ Nginx │→ │ Node.js │→ │ PostgreSQL 16 │ │
│ │ (proxy) │ │ API │ │ (all app data) │ │
│ └──────────┘ └─────┬─────┘ └──────────────────┘ │
│ │ ┌──────────────────┐ │
│ │ │ Redis │ │
│ │ │ (cache/sessions)│ │
│ │ └──────────────────┘ │
└──────────────────────┼──────────────────────────────┘
│ API Calls
┌──────────────────────────────┐
│ OPENROUTER GATEWAY │
│ (single API, one key) │
│ │
│ ┌────────────────────────┐ │
│ │ PRIMARY: DeepSeek V3.2 │ │
│ │ via DeepInfra/Fireworks │ │
│ │ (US/EU infrastructure) │ │
│ │ │ │
│ │ All features: │ │
│ │ • Mirror fragments │ │
│ │ • Kaleidoscope reframes│ │
│ │ • Lens affirmations │ │
│ │ • Crisis detection │ │
│ │ • Guide coaching │ │
│ │ • Spectrum analysis │ │
│ │ │ │
│ │ $0.26/$0.38 per MTok │ │
│ └────────────────────────┘ │
│ │ │
│ (automatic failover) │
│ │ │
│ ┌────────────────────────┐ │
│ │ FALLBACK: Claude Haiku │ │
│ │ 4.5 (Anthropic) │ │
│ │ $1.00/$5.00 per MTok │ │
│ │ Activated on outage │ │
│ └────────────────────────┘ │
└──────────────────────────────┘
```
### Key Technical Decisions
**Auth:** Custom JWT-based auth built into our Node.js API. Uses bcrypt for password hashing, short-lived access tokens (15 min) + long-lived refresh tokens stored in PostgreSQL. Social login (Apple Sign-In, Google) via their SDKs — free.
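To make the token flow concrete, here is a minimal sketch of HS256 access-token issue/verify using only `node:crypto`. It is illustrative, not the production implementation — the real API would use a vetted JWT library plus bcrypt for password hashing, and the function names here are hypothetical.

```typescript
// Sketch: HS256 JWT sign/verify with node:crypto. Illustrative only —
// production would use a maintained library and proper claim validation.
import { createHmac, timingSafeEqual } from "node:crypto";

const b64url = (buf: Buffer): string => buf.toString("base64url");

export function signAccessToken(userId: string, secret: string, ttlSec = 900): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const payload = b64url(Buffer.from(JSON.stringify({
    sub: userId,
    exp: Math.floor(Date.now() / 1000) + ttlSec, // short-lived: 15 minutes
  })));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${payload}`).digest());
  return `${header}.${payload}.${sig}`;
}

export function verifyAccessToken(token: string, secret: string): { sub: string } | null {
  const [header, payload, sig] = token.split(".");
  if (!header || !payload || !sig) return null;
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${payload}`).digest());
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null; // constant-time compare
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString());
  if (claims.exp < Math.floor(Date.now() / 1000)) return null; // expired
  return { sub: claims.sub };
}
```

Refresh tokens are the long-lived half of the pair and live in PostgreSQL so they can be revoked server-side.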
**Database schema:** PostgreSQL with Row Level Security policies. Tables for users, mirror_sessions, mirror_fragments, turns, lens_goals, spectrum_analyses. All user content encrypted at rest (PostgreSQL `pgcrypto` extension).
**AI request pipeline:**
1. Client sends user text to our API
2. API constructs prompt with cached system prompt + user context
3. API calls DeepSeek V3.2 via OpenRouter (pinned to DeepInfra/Fireworks), streams response back to client
4. If primary provider fails, OpenRouter automatically fails over to Claude Haiku 4.5
5. API logs token usage for cost tracking
6. Response stored in PostgreSQL for Spectrum analysis
**Rate limiting:** Redis-based. Free tier: 3 Turns/day, 2 Mirror/week enforced server-side. Prism: unlimited but soft-capped at 50 Turns/day to prevent abuse (99.9% of users will never hit this).
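A sketch of the server-side daily Turn cap described above. An in-memory `Map` stands in for Redis here; in production this would be `INCR` plus an `EXPIRE` on `rate:turns:{userId}:{date}`.

```typescript
// Sketch of the daily Turn quota check. Map stands in for Redis
// (production: INCR key, EXPIRE ~24h on first increment).
const counters = new Map<string, number>();

const TURN_LIMITS = { free: 3, prism: 50 } as const; // free cap + Prism soft cap

export function allowTurn(
  userId: string,
  plan: keyof typeof TURN_LIMITS,
  date: string,
): boolean {
  const key = `rate:turns:${userId}:${date}`;
  const count = (counters.get(key) ?? 0) + 1; // Redis: INCR
  counters.set(key, count);
  return count <= TURN_LIMITS[plan];
}
```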
**Prompt caching strategy:** System prompts for each feature (Mirror, Kaleidoscope, Lens, Guide) are designed to be identical across users. Only the user's specific content changes. DeepInfra supports prompt caching with ~20% discount on cached input tokens ($0.216/M vs $0.26/M). While less dramatic than Anthropic's 90% cache discount, the base pricing is already so low that effective costs remain minimal.
---
## 8. Cost Control Safeguards
These prevent a surprise API bill from killing the project:
1. **Hard spending cap** on OpenRouter dashboard (start at $20/month, increase as revenue grows)
2. **Per-user daily token budget** tracked in Redis. If a user somehow generates excessive requests, they get a "take a break" message (fits the wellness brand perfectly)
3. **Graceful degradation:** If API budget is 80% consumed, route Lens affirmations to local template system (pre-written affirmations, no AI needed). Mirror and Kaleidoscope get priority for remaining budget.
4. **Automatic failover:** OpenRouter handles provider switching transparently. If DeepInfra has an outage, requests route to Fireworks or Claude Haiku automatically — no code changes needed.
5. **Monitor daily:** Simple Telegram bot alerts if daily API spend exceeds threshold
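Safeguard #3 can be expressed as a small routing function. This is a sketch under stated assumptions: the 80% threshold and feature names come from this document, but the behavior at the hard cap (degrading everything to local templates) is an assumption, not a decided policy.

```typescript
// Sketch of budget-aware degradation. At >=80% of the monthly AI cap,
// Lens falls back to local templates; Mirror and Kaleidoscope keep API
// priority. Behavior at the hard cap is an assumption for illustration.
type Feature = "mirror" | "kaleidoscope" | "lens";
type Route = "api" | "local_template";

export function routeRequest(feature: Feature, spentUsd: number, capUsd: number): Route {
  const ratio = spentUsd / capUsd;
  if (ratio >= 1) return "local_template"; // hard cap reached (assumed policy)
  if (ratio >= 0.8 && feature === "lens") return "local_template";
  return "api";
}
```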
---
## 9. Startup Budget Allocation
With a maximum €2,000 to spend wisely:
| Category | Amount | What It Covers |
|---|---|---|
| Apple Developer Account | €99 | Annual fee, required for App Store |
| Google Play Developer | €25 | One-time fee |
| Domain (kalei.ai, 1 year) | ~€55 | Annual registration |
| Netcup VPS (6 months prepaid) | ~€51 | Runway for half a year of hosting |
| OpenRouter API credits (initial deposit) | €50 | Covers dev + testing + first ~1,000+ active user-months at DeepSeek V3.2 pricing |
| Design assets (fonts, if not free) | €0-50 | Inter + custom weight = free. Icon set if needed. |
| Contingency | ~€120 | Unexpected costs |
| **Total startup spend** | **~€400-450** | |
| **Remaining reserve** | **~€1,550-1,600** | 100+ months of launch-scale operating costs |
This means the €2,000 budget gives us effectively **unlimited runway** at launch-scale costs (~€16/month). Even without a single paying customer, we could operate for over 8 years. The AI cost reduction transformed our runway from "comfortable" to "virtually infinite" at early scale.
---
## 10. When to Scale (And What Changes)
| Trigger | Action | Cost Impact |
|---|---|---|
| >200 concurrent connections | Upgrade to VPS 2000 (€15.59) | +€7/month |
| >500 DAU | Add Redis Cluster or separate DB VPS | +€58/month |
| >600 DAU | Evaluate self-hosted Qwen3-30B-A3B on GPU (~$245/mo) | Cheaper than API at this volume, full data control |
| >2,000 DAU | Upgrade to VPS 4000 (€26.18) | +€10/month |
| >5,000 DAU | Introduce tiered model routing (different models per feature) | Saves ~20-30% on AI costs at scale |
| >10,000 DAU | Consider second VPS for API/DB separation | Architecture change |
| >$2,000/month revenue | Consider dedicated server or managed Postgres | Comfort/reliability upgrade |
The beauty of this architecture is that **nothing changes architecturally as we scale** — we just give the same VPS more resources, and the API costs scale linearly and predictably with users.
---
## 11. Competitive Cost Comparison
To put this in perspective — what would this cost on "standard" startup infrastructure?
| Our Stack | "Normal" Startup Stack | Monthly Cost |
|---|---|---|
| Netcup VPS (€8.45) | AWS EC2 t3.medium | $35-50 |
| PostgreSQL on VPS ($0) | Supabase Pro or RDS | $25-50 |
| Redis on VPS ($0) | Redis Cloud or ElastiCache | $15-30 |
| Cloudflare free ($0) | AWS CloudFront + ALB | $20-40 |
| DeepSeek V3.2 via OpenRouter (~$2) | Claude/GPT-4 API ($50+) | 96% cheaper |
| **Our total: ~$16/mo** | **Their total: ~$120-200/mo** | |
We're running at **8-13%** of what a "typical" startup would spend by self-hosting on a European VPS and using cost-optimized AI routing instead of defaulting to AWS/GCP + expensive frontier models.
---
## 12. Final Pricing Recommendation
| | Free | Prism | Prism+ |
|---|---|---|---|
| **Price** | $0 | **$4.99/month** | **$9.99/month** |
| | | $39.99/year | $79.99/year |
| Turns/day | 3 | Unlimited | Unlimited |
| Mirror/week | 2 | Unlimited | Unlimited |
| Lens | Basic | Full | Full |
| Reframe styles | 1 (Compassionate) | All 4 | All 4 |
| Gallery | 30 days | Full history | Full history |
| Fragment tracking | No | Yes | Yes |
| Spectrum | No | No | **Full dashboard** |
| Weekly AI insights | No | No | **Yes** |
| Growth trajectory | No | No | **Yes** |
| Export | No | Basic | Full |
| **Our cost per user** | ~$0.02 | ~$0.06 | ~$0.08 |
| **Margin** | N/A (acquisition) | **99%** | **99%** |
### Why This Works
At **$4.99**, Kalei is:
- Cheaper than Headspace ($12.99), Calm ($14.99), Woebot (free but limited)
- More personalized than any of them (AI-powered, not pre-recorded content)
- Profitable from subscriber #6
- Self-sustaining from month ~3
- Fully funded for 8+ years at launch-scale costs on a €2,000 budget, even with zero revenue
The model scales cleanly because **AI costs are the only meaningful variable cost**, and they scale linearly with usage at a rate that our pricing covers with 98%+ margins on the AI component. Even at scale, total infrastructure costs stay manageable because the OpenRouter + DeepInfra strategy keeps per-user AI spend under $0.10/month.
---
*Last updated: February 2026*
*All prices include VAT where applicable. USD/EUR conversions at approximate current rates.*


@@ -0,0 +1,514 @@
# Kalei System Architecture Plan
Version: 1.0
Date: 2026-02-10
Status: Proposed canonical architecture for implementation
## 1. Purpose and Scope
This document consolidates the existing Kalei docs into one implementation-ready system architecture plan.
In scope:
- Core features: Mirror, Kaleidoscope (Turn), Lens, Gallery, Spectrum analytics, subscriptions (all ship in v1).
- Mobile-first architecture (iOS/Android via Expo) with optional web support.
- Production operations for safety, privacy, reliability, and cost control.
Out of scope:
- Pixel-level UI specs and brand copy details.
- Provider contract/legal details.
- Full threat model artifacts (to be produced separately).
## 2. Inputs Reviewed
- `docs/app-blueprint.md`
- `docs/kalei-infrastructure-plan.md`
- `docs/kalei-ai-model-comparison.md`
- `docs/kalei-mirror-feature.md`
- `docs/kalei-spectrum-phase2.md`
- `docs/kalei-complete-design.md`
- `docs/kalei-brand-guidelines.md`
## 3. Architecture Drivers
### 3.1 Product drivers
- Core loop quality: Mirror fragment detection and Turn reframes must feel high quality and emotionally calibrated.
- Daily habit loop: low friction, fast response, strong retention mechanics.
- Over time: longitudinal Spectrum insights from accumulated usage data.
### 3.2 Non-functional drivers
- Safety first: crisis language must bypass reframing and trigger support flow.
- Privacy first: personal reflective writing is highly sensitive.
- Cost discipline: launch target under ~EUR 30/month fixed infrastructure.
- Operability: architecture must be maintainable by a small team.
- Gradual scale: support ~50 DAU at launch and scale to ~10k DAU without full rewrite.
## 4. Canonical Decisions
This plan resolves conflicting guidance across current docs.
| Topic | Decision | Rationale |
|---|---|---|
| Backend platform | Self-hosted API-first modular monolith on Node.js (Fastify preferred) | Matches budget constraints and keeps full control of safety, rate limits, and AI routing. |
| Data layer | PostgreSQL 16 + Redis | Postgres for source-of-truth relational + analytics tables; Redis for counters, rate limits, caching, idempotency. |
| Auth | JWT auth service in API + refresh token rotation + social login (Apple/Google) | Aligns with self-hosted stack while preserving mobile auth UX. |
| Mobile | React Native + Expo (local/native builds) | Fastest path for iOS/Android while keeping build pipeline under direct control. |
| AI integration | AI Gateway abstraction via OpenRouter with provider pinning | Single API, automatic failover, no vendor lock-in, and deterministic routing to non-Chinese providers for data privacy. |
| AI default | DeepSeek V3.2 via OpenRouter, hosted on DeepInfra/Fireworks (US/EU infrastructure) | 85-90% cheaper than Claude Haiku with comparable emotional intelligence benchmarks. Provider pinning ensures no data flows through Chinese servers. |
| AI fallback | Claude Haiku 4.5 via OpenRouter (automatic failover on provider outage) | Highest-quality safety net activated transparently when primary provider is unavailable. |
| Billing | Self-hosted entitlement authority (direct App Store + Google Play server APIs) | Keeps billing logic in-house and avoids closed SaaS dependency in core authorization path. |
| Analytics/monitoring | PostHog self-hosted + GlitchTip + centralized app logs + cost telemetry | Open-source-first observability stack with lower vendor lock-in. |
## 5. System Context
```mermaid
flowchart LR
user[User] --> app[Expo App]
app --> edge[Edge Proxy]
edge --> api[Kalei API]
api --> db[(PostgreSQL)]
api --> redis[(Redis)]
api --> ai[AI Providers]
api --> billing[Store Entitlements]
api --> push[Push Gateway]
api --> obs[Observability]
app --> analytics[Product Analytics]
```
## 6. Container Architecture
```mermaid
flowchart TB
subgraph Client
turn[Turn Screen]
mirror[Mirror Screen]
lens[Lens Screen]
spectrum_ui[Spectrum Dashboard]
profile_ui[Gallery and Profile]
end
subgraph Platform
gateway[API Gateway and Auth]
turn_service[Turn Service]
mirror_service[Mirror Service]
lens_service[Lens Service]
spectrum_service[Spectrum Service]
safety_service[Safety Service]
entitlement_service[Entitlement Service]
jobs[Job Scheduler and Workers]
ai_gateway[AI Gateway]
cost_guard[Usage Meter and Cost Guard]
end
subgraph Data
postgres[(PostgreSQL)]
redis[(Redis)]
object_storage[(Object Storage)]
end
subgraph External
ai_provider[DeepSeek V3.2 via OpenRouter + DeepInfra/Fireworks + Claude Haiku fallback]
store_billing[App Store and Play Billing APIs]
push_provider[APNs and FCM]
glitchtip[GlitchTip]
posthog[PostHog self-hosted]
end
turn --> gateway
mirror --> gateway
lens --> gateway
spectrum_ui --> gateway
profile_ui --> gateway
gateway --> turn_service
gateway --> mirror_service
gateway --> lens_service
gateway --> spectrum_service
gateway --> entitlement_service
mirror_service --> safety_service
turn_service --> safety_service
lens_service --> safety_service
spectrum_service --> safety_service
turn_service --> ai_gateway
mirror_service --> ai_gateway
lens_service --> ai_gateway
spectrum_service --> ai_gateway
ai_gateway --> ai_provider
turn_service --> cost_guard
mirror_service --> cost_guard
lens_service --> cost_guard
spectrum_service --> cost_guard
turn_service --> postgres
mirror_service --> postgres
lens_service --> postgres
spectrum_service --> postgres
entitlement_service --> postgres
jobs --> postgres
turn_service --> redis
mirror_service --> redis
lens_service --> redis
spectrum_service --> redis
cost_guard --> redis
jobs --> redis
entitlement_service --> store_billing
jobs --> push_provider
gateway --> glitchtip
gateway --> posthog
spectrum_service --> object_storage
```
## 7. Domain and Service Boundaries
### 7.1 Runtime modules
- `auth`: sign-up/sign-in, token issuance/rotation, device session management.
- `entitlements`: direct App Store + Google Play sync, plan gating (`free`, `prism`, `prism_plus`).
- `mirror`: session lifecycle, message ingestion, fragment detection, inline reframe, reflection.
- `turn`: structured reframing workflow and saved patterns.
- `lens`: goals, actions, daily focus generation, check-ins.
- `spectrum`: analytics feature store, weekly/monthly aggregation, insight generation.
- `safety`: crisis detection, escalation, crisis response policy.
- `ai_gateway`: prompt templates, OpenRouter API integration with provider pinning (DeepInfra/Fireworks primary, Claude Haiku fallback), retries/timeouts, structured output validation.
- `usage_cost`: token telemetry, per-user budgets, global spend controls.
- `notifications`: push scheduling, reminders, weekly summaries.
### 7.2 Why modular monolith first
- Lowest operational overhead at launch.
- Strong transaction boundaries in one codebase.
- Easy extraction path later for `spectrum` workers or `ai_gateway` if load increases.
## 8. Core Data Architecture
### 8.1 Data domains
- Identity: users, profiles, auth_sessions, refresh_tokens.
- Product interactions: turns, mirror_sessions, mirror_messages, mirror_fragments, lens_goals, lens_actions.
- Analytics: spectrum_session_analysis, spectrum_turn_analysis, spectrum_weekly, spectrum_monthly.
- Commerce: subscriptions, entitlement_snapshots, billing_events.
- Safety and operations: safety_events, ai_usage_events, request_logs, audit_events.
### 8.2 Entity relationship view
```mermaid
flowchart LR
users[USERS] --> profiles[PROFILES]
users --> auth_sessions[AUTH_SESSIONS]
users --> refresh_tokens[REFRESH_TOKENS]
users --> turns[TURNS]
users --> mirror_sessions[MIRROR_SESSIONS]
mirror_sessions --> mirror_messages[MIRROR_MESSAGES]
mirror_messages --> mirror_fragments[MIRROR_FRAGMENTS]
users --> lens_goals[LENS_GOALS]
lens_goals --> lens_actions[LENS_ACTIONS]
users --> spectrum_session[SPECTRUM_SESSION_ANALYSIS]
users --> spectrum_turn[SPECTRUM_TURN_ANALYSIS]
users --> spectrum_weekly[SPECTRUM_WEEKLY]
users --> spectrum_monthly[SPECTRUM_MONTHLY]
users --> subscriptions[SUBSCRIPTIONS]
users --> entitlement[ENTITLEMENT_SNAPSHOTS]
users --> safety_events[SAFETY_EVENTS]
users --> ai_usage[AI_USAGE_EVENTS]
```
### 8.3 Storage policy
- Raw reflective content remains in transactional tables, encrypted at rest.
- Spectrum dashboard reads aggregated tables only by default.
- Per-session exclusion flags allow users to opt out entries from analytics.
- Hard delete workflow removes raw + derived analytics for requested windows.
## 9. Key Runtime Sequences
### 9.1 Mirror message processing with safety gate
```mermaid
sequenceDiagram
participant App as Mobile App
participant API as Kalei API
participant Safety as Safety Service
participant Ent as Entitlement Service
participant AI as AI Gateway
participant Model as AI Provider
participant DB as PostgreSQL
participant Redis as Redis
App->>API: POST /mirror/messages
API->>Ent: Check plan/quota
Ent->>Redis: Read counters
Ent-->>API: Allowed
API->>Safety: Crisis precheck
alt Crisis detected
Safety->>DB: Insert safety_event
API-->>App: Crisis resources response
else Not crisis
API->>AI: Detect fragments prompt
AI->>Model: Inference request
Model-->>AI: Fragments with confidence
AI-->>API: Validated structured result
API->>DB: Save message + fragments
API->>Redis: Increment usage counters
API-->>App: Highlight payload
end
```
### 9.2 Turn (Kaleidoscope) request
```mermaid
sequenceDiagram
participant App as Mobile App
participant API as Kalei API
participant Ent as Entitlement Service
participant Safety as Safety Service
participant AI as AI Gateway
participant Model as AI Provider
participant DB as PostgreSQL
participant Cost as Cost Guard
App->>API: POST /turns
API->>Ent: Validate tier + daily cap
API->>Safety: Crisis precheck
alt Crisis detected
API-->>App: Crisis resources response
else Safe
API->>AI: Generate 3 reframes + micro-action
AI->>Model: Inference stream
Model-->>AI: Structured reframes
AI-->>API: Response + token usage
API->>Cost: Record token usage + budget check
API->>DB: Save turn + metadata
API-->>App: Stream final turn result
end
```
### 9.3 Weekly Spectrum aggregation (background)
```mermaid
sequenceDiagram
participant Cron as Scheduler
participant Worker as Spectrum Worker
participant DB as PostgreSQL
participant AI as AI Gateway
participant Model as Batch Provider
participant Push as Notification Service
Cron->>Worker: Trigger weekly job
Worker->>DB: Load eligible users + raw events
Worker->>DB: Compute vectors and weekly aggregates
Worker->>AI: Generate insight narratives from aggregates
AI->>Model: Batch request
Model-->>AI: Insight text
AI-->>Worker: Validated summaries
Worker->>DB: Upsert spectrum_weekly and monthly deltas
Worker->>Push: Enqueue spectrum updated notifications
```
## 10. API Surface (v1)
### 10.1 Auth and profile
- `POST /auth/register`
- `POST /auth/login`
- `POST /auth/refresh`
- `POST /auth/logout`
- `GET /me`
- `PATCH /me/profile`
### 10.2 Mirror
- `POST /mirror/sessions`
- `POST /mirror/messages`
- `POST /mirror/fragments/{id}/reframe`
- `POST /mirror/sessions/{id}/close`
- `GET /mirror/sessions`
- `DELETE /mirror/sessions/{id}`
### 10.3 Turn
- `POST /turns`
- `GET /turns`
- `GET /turns/{id}`
- `POST /turns/{id}/save`
### 10.4 Lens
- `POST /lens/goals`
- `GET /lens/goals`
- `POST /lens/goals/{id}/actions`
- `POST /lens/actions/{id}/complete`
- `GET /lens/affirmation/today`
### 10.5 Spectrum
- `GET /spectrum/weekly`
- `GET /spectrum/monthly`
- `POST /spectrum/reset`
- `POST /spectrum/exclusions`
### 10.6 Billing and entitlements
- `POST /billing/webhooks/apple`
- `POST /billing/webhooks/google`
- `GET /billing/entitlements`
## 11. Security, Safety, and Compliance Architecture
### 11.1 Security controls
- TLS everywhere (edge proxy to API origin and service egress).
- JWT access tokens (short TTL) + rotating refresh tokens.
- Password hashing with Argon2id (preferred) or bcrypt with strong cost factor.
- Row ownership checks enforced in API and optionally DB RLS for defense in depth.
- Secrets in environment vault; never in client bundle.
- Audit logging for auth events, entitlement changes, deletes, and safety events.
### 11.2 Data protection
- Encryption at rest for disk volumes and database backups.
- Column-level encryption for highly sensitive text fields (Mirror message content).
- Data minimization for analytics: Spectrum reads vectors and aggregates by default.
- User rights flows: export, per-item delete, account delete, Spectrum reset.
### 11.3 Safety architecture
- Multi-stage crisis filter:
1. Deterministic keyword and pattern pass.
2. Low-latency model confirmation where needed.
3. Hardcoded crisis response templates and hotline resources.
- Crisis-level content is never reframed.
- Safety events are logged and monitored for false-positive/false-negative tuning.
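Stage 1 of the crisis filter can be sketched as a deterministic pattern pass. The patterns below are illustrative placeholders only — a real list would be clinically reviewed and far broader, and ambiguous hits would go to the stage-2 model confirmation rather than straight to escalation.

```typescript
// Sketch of the stage-1 deterministic crisis pre-check. Patterns are
// illustrative placeholders, not a production safety list.
const CRISIS_PATTERNS: RegExp[] = [
  /\bend(ing)? my life\b/i,
  /\bkill myself\b/i,
  /\bno reason to live\b/i,
];

export function crisisPrecheck(text: string): "escalate" | "pass" {
  // Any deterministic hit bypasses reframing and routes to the crisis flow.
  return CRISIS_PATTERNS.some((p) => p.test(text)) ? "escalate" : "pass";
}
```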
## 12. Reliability and Performance
### 12.1 Initial SLO targets
- API availability: 99.5% monthly at launch, 99.9% target at scale.
- Turn and Mirror response latency:
- p50 < 1.8s
- p95 < 3.5s
- Weekly Spectrum jobs completed within 2 hours of scheduled run.
### 12.2 Resilience patterns
- Idempotency keys on write endpoints.
- AI provider timeout + retry policy with circuit breaker.
- Graceful degradation hierarchy when budget/latency pressure occurs:
1. Degrade Lens generation first (template fallback).
2. Keep Turn and Mirror available.
3. Pause non-critical Spectrum generation if needed.
- Dead-letter queue for failed async jobs.
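The idempotency-key pattern from the list above, sketched with an in-memory store (production: Redis `SET ... NX` with a TTL, or a unique column in Postgres). Replaying the same key returns the stored result instead of re-executing the write.

```typescript
// Sketch of idempotent write handling. Map stands in for Redis/Postgres;
// a retried request with the same key never double-executes the handler.
const seen = new Map<string, unknown>();

export function withIdempotency<T>(key: string, handler: () => T): T {
  if (seen.has(key)) return seen.get(key) as T; // replay: return stored response
  const result = handler();
  seen.set(key, result);
  return result;
}
```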
## 13. Observability and FinOps
### 13.1 Telemetry
- Structured logs with request ID, user ID hash, feature, model, token usage, cost.
- Metrics:
- request rate/error rate/latency by endpoint
- AI token usage and cost by feature
- quota denials and safety escalations
- Tracing across API -> AI Gateway -> provider call.
### 13.2 Cost controls
- Global monthly AI spend cap and alert thresholds (50%, 80%, 95%).
- Per-user daily token budget in Redis.
- Feature-level cost envelope with OpenRouter provider routing:
- All features: DeepSeek V3.2 via DeepInfra/Fireworks (US/EU, $0.26/$0.38 per MTok)
- Automatic failover: Claude Haiku 4.5 on provider outage ($1.00/$5.00 per MTok)
- Future: introduce tiered model routing at 5,000+ DAU when usage data justifies complexity
- Prompt caching for stable system prompts (DeepInfra ~20% cache hit discount).
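The per-user daily token budget can be sketched as a counter check (production: a Redis counter keyed by user and date, with a midnight TTL). The budget size below is a hypothetical placeholder, not a decided number.

```typescript
// Sketch of the per-user daily token budget. Map stands in for Redis;
// DAILY_TOKEN_BUDGET is a hypothetical cap for illustration.
const spent = new Map<string, number>();
const DAILY_TOKEN_BUDGET = 50_000;

export function recordUsage(userId: string, date: string, tokens: number): boolean {
  const key = `${userId}:${date}`;
  const total = (spent.get(key) ?? 0) + tokens;
  spent.set(key, total);
  // false => serve the "take a break" message instead of another AI call
  return total <= DAILY_TOKEN_BUDGET;
}
```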
## 14. Deployment Topology and Scaling Path
### 14.1 Launch deployment (single-node)
```mermaid
flowchart LR
EDGE[Caddy or Nginx Edge] --> NX[Nginx]
NX --> API[API + Workers]
API --> PG[(PostgreSQL)]
API --> R[(Redis)]
API --> AIP[AI Providers]
```
### 14.2 Scaling evolution
```mermaid
flowchart LR
launch[Launch single VPS API DB Redis] --> traction[Traction split DB keep API monolith]
traction --> growth[Growth separate workers and scale API]
growth --> scale[Scale optional service extraction]
```
### 14.3 Trigger-based scaling
- Move DB off app node when p95 query latency > 120ms sustained or storage > 70%.
- Add API replica when CPU > 70% sustained at peak and p95 latency breaches SLO.
- Split workers when Spectrum jobs impact interactive endpoints.
## 15. Delivery Plan
All features ship in a single unified v1 release. The build is a continuous 12-week effort:
### 15.1 Weeks 14: Platform Foundation
- API skeleton, auth, profile, entitlements integration.
- Postgres schema v1 and migrations.
- Mirror + Turn endpoints with safety pre-check.
- Usage metering and rate limiting.
### 15.2 Weeks 58: Core Experience
- Lens flows, Rehearsal, Ritual, Evidence Wall, and Gallery history.
- Push notifications and daily reminders.
- Full observability, alerting, and incident runbooks.
- Beta load testing and security hardening.
### 15.3 Weeks 912: Spectrum & Launch Readiness
- Spectrum: vector extraction pipeline, aggregated tables, weekly batch jobs, dashboard endpoints.
- Data exclusion controls and reset workflow.
- Cost optimization pass on AI routing.
- Final QA, store submission, beta launch.
## 16. Risks and Mitigations
| Risk | Impact | Mitigation |
|---|---|---|
| Reframe quality variance by provider/model | Core UX degradation | Keep AI Gateway abstraction + blind quality harness + model canary rollout. |
| Safety false negatives | High trust and user harm risk | Defense-in-depth crisis filter + explicit no-reframe crisis policy + monitoring and review loop. |
| AI cost spikes | Margin compression | Hard spend caps, per-feature budgets, degradation order, model fallback lanes. |
| Single-node bottlenecks | Latency and availability issues | Trigger-based scaling plan and early instrumentation. |
| Sensitive data handling errors | Compliance and trust risk | Encryption, strict retention controls, deletion workflows, audit logs. |
## 17. Decision Log and Open Items
### 17.1 Decided in this plan
- Self-hosted API + Postgres + Redis is the canonical launch architecture.
- AI provider routing is built in from day one.
- Safety is an explicit service and gate on all AI-facing paths.
- Spectrum runs asynchronously over aggregated data.
### 17.2 Resolved: AI Provider Strategy (February 2026)
- **Decided:** DeepSeek V3.2 via OpenRouter, pinned to non-Chinese providers (DeepInfra/Fireworks). Single model for all features at launch. Claude Haiku 4.5 as automatic fallback.
- **Rationale:** 85-90% cost reduction vs Claude Haiku. Nature 2025 study confirms comparable emotional intelligence scores. Non-Chinese hosting avoids data sovereignty concerns. Single-model approach minimizes complexity for solo founder.
- **Revisit at:** 600+ DAU (evaluate self-hosting), 5,000+ DAU (evaluate tiered model routing).
### 17.3 Remaining open decisions
- Exact hosting target for DB scaling at traction stage (dedicated VPS vs managed Postgres).
- Regional crisis resource strategy (US-first or multi-region at launch).
---
If approved, this document should become the architecture source of truth and supersede conflicting details in older planning docs.


@@ -0,0 +1,814 @@
# Kalei — User Journey Technical Map
> Version 1.0 — February 2026
> Maps every user-facing flow to backend API endpoints, database operations, frontend components, and AI calls
---
## Architecture Summary
**Backend:** Fastify 5.x (Node.js 22 LTS), Drizzle ORM, PostgreSQL 16, Redis 7
**Frontend:** React Native + Expo SDK 54+, Expo Router, TanStack Query v5, Zustand, MMKV v4
**AI:** DeepSeek V3.2 via OpenRouter (primary, hosted on DeepInfra/Fireworks US/EU), Claude Haiku 4.5 (automatic fallback), provider-agnostic AI Gateway
**Auth:** JWT with refresh token rotation, Apple/Google SSO
**Billing:** Direct App Store / Google Play webhook integration
---
## 1. Authentication & Onboarding
### 1.1 Account Registration
**User Action:** Enters email + password or taps Apple/Google SSO
| Layer | Detail |
|-------|--------|
| **API** | `POST /auth/register` — body: `{ email, password, provider? }` |
| **Validation** | Zod: email format, password 8+ chars, provider enum |
| **DB Write** | `INSERT INTO users (id, email, password_hash, created_at)` |
| **DB Write** | `INSERT INTO profiles (user_id, display_name, coaching_style, onboarding_complete)` |
| **DB Write** | `INSERT INTO subscriptions (user_id, plan, status, started_at)` — defaults to `free` |
| **Redis** | Set `rate:auth:{ip}` with TTL for brute-force protection |
| **Response** | `{ access_token, refresh_token, user: { id, email, profile } }` |
| **Frontend** | `AuthStore.setTokens()` → MMKV encrypted storage → navigate to onboarding |
### 1.2 Token Refresh
| Layer | Detail |
|-------|--------|
| **API** | `POST /auth/refresh` — body: `{ refresh_token }` |
| **DB Read** | `SELECT FROM refresh_tokens WHERE token = $1 AND revoked = false` |
| **DB Write** | Revoke old token, issue new pair (rotation) |
| **Redis** | Invalidate old session cache |
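The rotation in the table above can be sketched as follows: each refresh revokes the presented token and issues a replacement, so reuse of an already-rotated token is rejected (a common signal of token theft). An in-memory `Map` stands in for the `refresh_tokens` table; function names are illustrative.

```typescript
// Sketch of refresh-token rotation. Map stands in for the refresh_tokens
// table; a reused (revoked) token is rejected outright.
import { randomUUID } from "node:crypto";

const tokens = new Map<string, { userId: string; revoked: boolean }>();

export function issueRefreshToken(userId: string): string {
  const token = randomUUID();
  tokens.set(token, { userId, revoked: false });
  return token;
}

export function rotateRefreshToken(oldToken: string): string | null {
  const row = tokens.get(oldToken);
  if (!row || row.revoked) return null; // unknown or reused token -> reject
  row.revoked = true; // DB: UPDATE refresh_tokens SET revoked = true
  return issueRefreshToken(row.userId);
}
```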
### 1.3 Onboarding Completion
**User Action:** Completes screens 2-9 (style selection, first Turn)
| Layer | Detail |
|-------|--------|
| **API** | `PATCH /me/profile` — body: `{ coaching_style, notification_time, onboarding_complete: true }` |
| **DB Write** | `UPDATE profiles SET coaching_style = $1, notification_time = $2, onboarding_complete = true` |
| **Push** | Schedule first notification via push service at chosen time |
| **Frontend** | `OnboardingStore.complete()` → navigate to main tab navigator |
---
## 2. The Turn (Reframing)
### 2.1 Submit a Turn
**User Action:** Types thought → taps "Turn it"
| Layer | Detail |
|-------|--------|
| **Frontend** | Validate non-empty input, show Turn animation (1.5s kaleidoscope rotation) |
| **API** | `POST /turns` — body: `{ input_text, coaching_style? }` |
| **Rate Check** | Redis: `INCR rate:turns:{user_id}:{date}` → reject if > 3 (free) or > 50 (prism soft cap) |
| **Entitlement** | `SELECT plan FROM subscriptions WHERE user_id = $1` → gate check |
| **Safety** | Deterministic keyword scan → if flagged: `INSERT INTO safety_events`, return crisis response |
| **AI Call** | AI Gateway → DeepSeek V3.2 (Claude Haiku 4.5 on failover): system prompt (coaching style + reframe instructions) + user input |
| **AI Response** | JSON: `{ perspectives: [{ style, text, emotion_before, emotion_after }], micro_action: { if_clause, then_clause }, fragments_detected: [{ type, phrase, confidence }] }` |
| **DB Write** | `INSERT INTO turns (id, user_id, input_text, perspectives, micro_action, fragments, emotion_vector, created_at)` |
| **DB Write** | `INSERT INTO ai_usage_events (user_id, feature, model, input_tokens, output_tokens, latency_ms, cost_usd)` |
| **Redis** | Increment daily counter, update streak cache |
| **Response** | `{ turn_id, perspectives, micro_action, fragments, pattern_seed }` |
| **Frontend** | Dismiss animation → render 3 perspective cards + micro-action card |
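Before the `INSERT INTO turns`, the AI's JSON should be structurally validated. The API uses Zod for this; the hand-rolled guard below is just a dependency-free sketch of the response shape from the table above (field names per this document, checks simplified).

```typescript
// Sketch: structural check on the Turn AI response before persisting.
// Production uses Zod; this guard only illustrates the expected shape.
interface TurnResult {
  perspectives: { style: string; text: string }[];
  micro_action: { if_clause: string; then_clause: string };
  fragments_detected: { type: string; phrase: string; confidence: number }[];
}

export function isTurnResult(v: unknown): v is TurnResult {
  const o = v as TurnResult;
  return (
    Array.isArray(o?.perspectives) &&
    o.perspectives.every((p) => typeof p?.style === "string" && typeof p?.text === "string") &&
    typeof o?.micro_action?.if_clause === "string" &&
    typeof o?.micro_action?.then_clause === "string" &&
    Array.isArray(o?.fragments_detected)
  );
}
```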
### 2.2 Save a Turn / Keepsake
**User Action:** Taps save on a perspective card
| Layer | Detail |
|-------|--------|
| **API** | `POST /turns/{id}/save` — body: `{ perspective_index, save_type: "keepsake" }` |
| **DB Write** | `UPDATE turns SET saved = true, saved_perspective_index = $1` |
| **DB Write** | `INSERT INTO evidence_wall_tiles (user_id, tile_type, source_feature, source_id, color_accent, created_at)` — type: `saved_keepsake` |
| **Response** | `{ saved: true, gallery_id, evidence_tile_id }` |
| **Frontend** | Success toast "Turn saved" → update Gallery cache via TanStack Query invalidation |
### 2.3 Get Turn History
| Layer | Detail |
|-------|--------|
| **API** | `GET /turns?limit=20&offset=0&date=2026-02-21` |
| **DB Read** | `SELECT id, input_text, perspectives, fragments, created_at FROM turns WHERE user_id = $1 ORDER BY created_at DESC LIMIT $2 OFFSET $3` |
| **Cache** | Redis: cache recent 20 turns per user, 5 min TTL |
---
## 3. The Mirror (Journaling + Fragment Detection)
### 3.1 Start Mirror Session
**User Action:** Opens Mirror tab → starts new session
| Layer | Detail |
|-------|--------|
| **API** | `POST /mirror/sessions` — body: `{ prompt_id? }` |
| **Rate Check** | Redis: `INCR rate:mirror:{user_id}:{week}` → reject if > 2 (free) |
| **DB Write** | `INSERT INTO mirror_sessions (id, user_id, status, started_at)` — status: `active` |
| **Response** | `{ session_id, opening_prompt }` |
| **Frontend** | Navigate to session view, show opening prompt in AI bubble |
### 3.2 Send Message in Mirror
**User Action:** Types and sends a message
| Layer | Detail |
|-------|--------|
| **Frontend** | Append user message to local state, show AI thinking animation |
| **API** | `POST /mirror/sessions/{id}/messages` — body: `{ content, message_type: "user" }` |
| **DB Write** | `INSERT INTO mirror_messages (id, session_id, role, content, created_at)` |
| **AI Call #1** | Fragment Detection: system prompt (10 distortion types + detection rules) + session context + new message |
| **AI Response #1** | `{ fragments: [{ type, phrase, start_index, end_index, confidence }] }` |
| **Entitlement Gate** | Free: filter to 3 types (catastrophizing, black_and_white, should_statements). Prism: all 10 |
| **DB Write** | `INSERT INTO mirror_fragments (id, session_id, message_id, distortion_type, phrase, start_idx, end_idx, confidence)` |
| **AI Call #2** | Reflective Response: system prompt (warm, non-directive, Mirror voice) + session history + detected fragments |
| **AI Response #2** | `{ response_text, suggested_prompts: [] }` |
| **DB Write** | `INSERT INTO mirror_messages (id, session_id, role: "assistant", content)` |
| **Response** | `{ message_id, ai_response, fragments: [{ type, phrase, indices }] }` |
| **Frontend** | Render AI response bubble, apply amber highlight underlines to user's message at fragment positions |
### 3.3 Tap Fragment Highlight → Inline Reframe
**User Action:** Taps highlighted text in their message
| Layer | Detail |
|-------|--------|
| **Frontend** | Open half-sheet modal with distortion info (local data from fragment detection response) |
| **API** | `POST /mirror/fragments/{id}/reframe` — body: `{ fragment_id }` |
| **Rate Check** | Free: 1 inline reframe per session |
| **AI Call** | Quick reframe: system prompt + fragment context + distortion type |
| **AI Response** | `{ reframes: [{ text, style }], can_turn: true }` |
| **Response** | `{ reframes, turn_prefill }`, where `turn_prefill` is the fragment phrase ready to hand off to Turn |

| **Frontend** | Render reframes in half-sheet. "Take to Turn" button navigates to Turn tab with `input_text` pre-filled |
### 3.4 Close Mirror Session → Generate Reflection
**User Action:** Taps "End Session" or navigates away
| Layer | Detail |
|-------|--------|
| **API** | `POST /mirror/sessions/{id}/close` |
| **DB Read** | Fetch all messages + fragments for session |
| **AI Call** | Reflection generation: full session transcript + all fragments → themes, insight, pattern seed |
| **AI Response** | `{ themes: [], fragment_summary: { total, by_type }, insight: "string", pattern_seed: "hash" }` |
| **DB Write** | `UPDATE mirror_sessions SET status = "closed", reflection = $1, pattern_seed = $2, closed_at = NOW()` |
| **DB Write** | `INSERT INTO evidence_wall_tiles (user_id, tile_type: "mirror_reflection", source_id, ...)` |
| **Response** | `{ reflection, pattern_seed, fragment_summary }` |
| **Frontend** | Show reflection card with generated kaleidoscope pattern thumbnail → auto-saved to Gallery |
---
## 4. The Lens (Goals + Actions)
### 4.1 Create Goal
| Layer | Detail |
|-------|--------|
| **API** | `POST /lens/goals` — body: `{ title, description, target_date?, category? }` |
| **AI Call** | Goal refinement: make SMART, suggest metrics, generate initial visualization description |
| **DB Write** | `INSERT INTO lens_goals (id, user_id, title, description, target_date, visualization_text, status, created_at)` |
| **Response** | `{ goal_id, refined_title, visualization, suggested_actions }` |
### 4.2 Get Daily Actions + Affirmation
| Layer | Detail |
|-------|--------|
| **API** | `GET /lens/today` |
| **DB Read** | Active goals + incomplete actions + today's affirmation |
| **AI Call** (if no affirmation cached) | Generate daily affirmation based on active goals + recent progress |
| **Redis** | Cache today's affirmation, 24h TTL |
| **Response** | `{ goals: [...], today_actions: [...], affirmation: { text, goal_id } }` |
### 4.3 Complete Action
| Layer | Detail |
|-------|--------|
| **API** | `POST /lens/actions/{id}/complete` |
| **DB Write** | `UPDATE lens_actions SET completed = true, completed_at = NOW()` |
| **DB Write** | `INSERT INTO evidence_wall_tiles (tile_type: "completed_action", source_id, color_accent: "emerald")` |
| **Redis** | Update streak counter |
| **Response** | `{ completed: true, goal_progress_pct, streak_count, evidence_tile_id }` |
### 4.4 Start Rehearsal Session
**User Action:** On goal detail → taps "Rehearse"
| Layer | Detail |
|-------|--------|
| **API** | `POST /lens/goals/{id}/rehearsal` |
| **Rate Check** | Free: 1/week, Prism: unlimited |
| **DB Read** | Fetch goal details + user's recent Mirror/Turn data for personalization |
| **AI Call** | Visualization script generation: first-person, multi-sensory, process-oriented, obstacle rehearsal |
| **AI Response** | `{ script_segments: [{ text, duration_seconds, breathing_cue? }], total_duration }` |
| **DB Write** | `INSERT INTO rehearsal_sessions (id, user_id, goal_id, script, duration, created_at)` |
| **Response** | `{ session_id, script_segments, total_duration }` |
| **Frontend** | Enter Rehearsal mode: timer ring, text card sequence with breathing animation pacing |
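Driving the timer ring and breathing-paced card sequence is simpler if each segment carries an absolute start offset, so the whole session can be rendered from a single elapsed-time value. A minimal sketch, reusing the field names from the AI response shape above (the `starts_at_seconds` field is an assumption added for illustration):

```typescript
// Convert per-segment durations into absolute start offsets for the timer ring.
type ScriptSegment = { text: string; duration_seconds: number; breathing_cue?: string };

function scheduleSegments(segments: ScriptSegment[]) {
  let offset = 0;
  return segments.map((s) => {
    const scheduled = { ...s, starts_at_seconds: offset };
    offset += s.duration_seconds; // next segment begins where this one ends
    return scheduled;
  });
}
```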
### 4.5 Complete Rehearsal
| Layer | Detail |
|-------|--------|
| **API** | `POST /lens/rehearsals/{id}/complete` |
| **DB Write** | `UPDATE rehearsal_sessions SET completed = true, completed_at = NOW()` |
| **DB Write** | `INSERT INTO evidence_wall_tiles (tile_type: "rehearsal_complete", source_id, color_accent: "amethyst")` |
| **Response** | `{ completed: true, evidence_tile_id }` |
| **Frontend** | Success burst animation → navigate back to goal detail |
---
## 5. The Ritual (Daily Habit Sequences)
### 5.1 Start Ritual
**User Action:** Taps "Start Ritual" on Turn tab or from notification
| Layer | Detail |
|-------|--------|
| **API** | `POST /rituals/start` — body: `{ template: "morning" \| "evening" \| "quick" }` |
| **Rate Check** | Free: only `quick` template allowed |
| **DB Write** | `INSERT INTO ritual_sessions (id, user_id, template, status, started_at)` |
| **DB Read** | Fetch user's active goals, recent fragments, streak data for personalization |
| **AI Call** | Personalize ritual prompts based on user context |
| **Response** | `{ session_id, steps: [{ type, prompt, duration_seconds }] }` |
| **Frontend** | Enter Ritual mode: fragment-shaped step progress bar, step-by-step flow |
### 5.2 Ritual Step Completion
Each step may trigger its own API call:
| Step | API Calls |
|------|-----------|
| Mirror check-in | `POST /mirror/sessions` + `POST /mirror/sessions/{id}/messages` (lightweight, 1-2 exchanges) |
| Turn | `POST /turns` (with ritual context flag) |
| Lens review | `POST /lens/actions/{id}/complete` (for each completed action) |
| Affirmation | `GET /lens/today` (cached) |
| Gratitude | `POST /rituals/{id}/gratitude` — body: `{ text }` |
### 5.3 Complete Ritual
| Layer | Detail |
|-------|--------|
| **API** | `POST /rituals/{id}/complete` |
| **DB Write** | `UPDATE ritual_sessions SET status = "completed", completed_at = NOW()` |
| **DB Write** | `INSERT INTO evidence_wall_tiles (tile_type: "ritual_complete", color_accent: "amber")` |
| **Redis** | Update ritual streak: `INCR streak:ritual:{user_id}`, check context consistency (same time ± 30 min) |
| **DB Write** | `UPDATE profiles SET ritual_streak = $1, ritual_consistency_score = $2` |
| **Response** | `{ completed: true, streak_count, consistency_score, evidence_tile_id }` |
| **Frontend** | Prismatic ring completion animation → streak view |
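The "same time ± 30 min" consistency check in the flow above reduces to a clock-distance comparison. A sketch, assuming times are represented as minutes since local midnight (an illustrative choice, not a confirmed storage format):

```typescript
// A completion counts toward consistency when it falls within ±30 minutes
// of the user's usual ritual time.
const CONSISTENCY_WINDOW_MIN = 30;

function isConsistent(usualMinutes: number, completedMinutes: number): boolean {
  // Compare on a 24h circle so 23:50 vs 00:10 is 20 minutes apart, not 1420.
  const raw = Math.abs(usualMinutes - completedMinutes);
  const diff = Math.min(raw, 24 * 60 - raw);
  return diff <= CONSISTENCY_WINDOW_MIN;
}
```

The wrap-around handling matters for evening rituals that occasionally slip past midnight.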
---
## 6. Gallery
### 6.1 Get All Patterns
| Layer | Detail |
|-------|--------|
| **API** | `GET /gallery?view=all&limit=20&offset=0` |
| **DB Read** | `SELECT id, source_feature, pattern_seed, preview_text, created_at FROM gallery_items WHERE user_id = $1 ORDER BY created_at DESC` |
| **Cache** | Redis: cache first page, 2 min TTL |
### 6.2 Get Pattern Detail
| Layer | Detail |
|-------|--------|
| **API** | `GET /gallery/{id}` |
| **DB Read** | Full gallery item with source content, fragments, pattern seed |
| **Frontend** | Render hero kaleidoscope pattern (deterministic from seed), source content, metadata |
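"Deterministic from seed" means the stored `pattern_seed` must map to identical pattern parameters on every render, on every device. One way to sketch that is a string hash feeding a small seeded PRNG; FNV-1a and mulberry32 below are illustrative choices, not the app's confirmed algorithm.

```typescript
// Hash the stored pattern_seed string to a 32-bit integer (FNV-1a).
function hashSeed(seed: string): number {
  let h = 2166136261;
  for (let i = 0; i < seed.length; i++) {
    h ^= seed.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0;
}

// mulberry32: tiny deterministic PRNG seeded by that integer.
function mulberry32(a: number): () => number {
  return () => {
    a |= 0; a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Derive stable render parameters; same seed → same kaleidoscope, always.
function patternParams(seed: string) {
  const rand = mulberry32(hashSeed(seed));
  return {
    blades: 6, // canonical 6-blade kaleidoscope
    rotation: rand() * 360,
    hueOffset: Math.floor(rand() * 360),
  };
}
```

Because rendering is a pure function of the seed, the backend never needs to store or ship pattern imagery for the Gallery grid.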
### 6.3 Search / Filter
| Layer | Detail |
|-------|--------|
| **API** | `GET /gallery/search?q=text&feature=turn&distortion=catastrophizing&from=2026-01-01` |
| **DB Read** | Full-text search on gallery content + filter joins |
### 6.4 Generate Pattern Card (Share)
| Layer | Detail |
|-------|--------|
| **API** | `POST /gallery/{id}/share` |
| **Backend** | Generate Pattern Card image: kaleidoscope pattern + reframe text overlay + Kalei watermark |
| **Response** | `{ share_url, image_url }` |
| **Frontend** | Native share sheet with generated image |
---
## 7. Evidence Wall
### 7.1 Get Evidence Wall
| Layer | Detail |
|-------|--------|
| **API** | `GET /evidence-wall?limit=50` |
| **DB Read** | `SELECT * FROM evidence_wall_tiles WHERE user_id = $1 ORDER BY created_at DESC` |
| **Entitlement** | Free: 30-day window (`WHERE created_at > NOW() - INTERVAL '30 days'`). Prism: all history |
| **Response** | `{ tiles: [...], connections: [...], stage: "empty" \| "early" \| "mid" \| "full" }` |
| **Frontend** | Render mosaic view based on stage, assign tile shapes (diamond, hex, rect, pentagon, triangle) based on tile_type |
### 7.2 Get Tile Detail
| Layer | Detail |
|-------|--------|
| **API** | `GET /evidence-wall/tiles/{id}` |
| **DB Read** | Tile + source data (join to turns/mirror_sessions/lens_actions/ritual_sessions/rehearsal_sessions) |
| **Response** | `{ tile, source_content, source_feature, created_at }` |
| **Frontend** | Half-sheet with tile detail, source content, link to source feature |
### 7.3 Contextual Evidence Surfacing
**Trigger:** AI detects low self-efficacy language in Mirror or Turn
| Layer | Detail |
|-------|--------|
| **AI Detection** | During Mirror fragment detection or Turn processing, flag self-efficacy score < threshold |
| **DB Read** | `SELECT * FROM evidence_wall_tiles WHERE user_id = $1 ORDER BY relevance_score DESC LIMIT 2` |
| **Response** | Included in Mirror/Turn response: `{ evidence_nudge: { tiles: [...], message: "..." } }` |
| **Frontend** | Render gentle card below main content with 1-2 evidence tiles |
---
## 8. Spectrum
### 8.1 Weekly Aggregation (Background Job)
| Layer | Detail |
|-------|--------|
| **Trigger** | Cron: Sunday 6pm UTC |
| **DB Read** | All turns + mirror_sessions + lens_actions for user's week |
| **AI Call** | Batch API (50% cost): analyze emotional vectors, fragment patterns, Turn impact |
| **DB Write** | `INSERT INTO spectrum_weekly (user_id, week_start, river_data, glass_data, impact_data, rhythm_data, growth_score, insight_text)` |
| **Push** | Notification: "Your weekly Spectrum insight is ready" |
### 8.2 Get Spectrum Dashboard
| Layer | Detail |
|-------|--------|
| **API** | `GET /spectrum/dashboard` |
| **Entitlement** | Free: simplified `{ weekly_insight_text, basic_fragment_count }`. Prism: full dashboard |
| **DB Read** | Latest weekly + monthly aggregates |
| **Response** | `{ river, glass, impact, rhythm, growth, weekly_insight, monthly_insight? }` |
| **Frontend** | Render 5 visualization components from spectrum-visualizations.svg data |
### 8.3 Monthly Deep Dive (Background Job)
| Layer | Detail |
|-------|--------|
| **Trigger** | Cron: 1st of month |
| **DB Read** | All weekly aggregates for the month |
| **AI Call** | Batch API: month-over-month narrative generation |
| **DB Write** | `INSERT INTO spectrum_monthly (user_id, month_start, narrative, growth_trajectory, milestone_events)` |
---
## 9. Billing & Entitlements
### 9.1 Entitlement Check (Middleware)
Every rate-limited endpoint runs this check:
```
1. Redis: GET entitlement:{user_id} → if cached, return
2. DB: SELECT plan, status FROM subscriptions WHERE user_id = $1 AND status = 'active'
3. Redis: SET entitlement:{user_id} = plan, EX 300 (5 min cache)
4. Return plan → middleware applies feature gates
```
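The four steps above can be sketched as follows, with an in-memory `Map` standing in for Redis and an injected lookup standing in for the subscription query (both stand-ins are assumptions for illustration; `CACHE_TTL_MS` mirrors the 5-minute `EX 300`):

```typescript
// Entitlement check: cache-first, fall through to the DB, cache the result.
type Plan = "free" | "prism";
const CACHE_TTL_MS = 5 * 60 * 1000; // mirrors Redis EX 300
const cache = new Map<string, { plan: Plan; expiresAt: number }>();

async function getEntitlement(
  userId: string,
  fetchActivePlan: (userId: string) => Promise<Plan | null>, // stands in for the subscriptions query
): Promise<Plan> {
  const hit = cache.get(userId);
  if (hit && hit.expiresAt > Date.now()) return hit.plan; // step 1: cached
  const plan = (await fetchActivePlan(userId)) ?? "free";  // step 2: DB; no active row → free
  cache.set(userId, { plan, expiresAt: Date.now() + CACHE_TTL_MS }); // step 3: cache
  return plan; // step 4: middleware applies feature gates
}
```

Note the webhook handlers in 9.2/9.3 must invalidate this cache, or plan changes can lag by up to the TTL.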
### 9.2 App Store Webhook
| Layer | Detail |
|-------|--------|
| **API** | `POST /billing/webhooks/apple` |
| **Validation** | Verify Apple JWT signature |
| **DB Write** | `INSERT/UPDATE subscriptions SET plan = $1, status = $2, apple_transaction_id = $3` |
| **Redis** | Invalidate `entitlement:{user_id}` cache |
| **DB Write** | `INSERT INTO entitlement_snapshots (user_id, plan, event_type, timestamp)` |
### 9.3 Google Play Webhook
Same pattern as the Apple webhook, with Google-specific verification of the Play notification payload and `google_purchase_token` in place of the Apple transaction ID.
---
## 10. Safety System
### 10.1 Crisis Detection Pipeline
Every AI-processed input runs through:
```
Stage 1: Deterministic keyword scan (regex, ~1ms)
→ If match: flag, skip AI, return crisis template
Stage 2: AI confirmation (during normal processing)
→ AI output includes safety_flag: boolean
→ If flagged: return hardcoded crisis response (never AI-generated)
Stage 3: Logging
→ INSERT INTO safety_events (user_id, input_hash, detection_stage, action_taken)
→ Alert: send to safety dashboard
```
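Stage 1 can be sketched as a handful of compiled regexes run over the raw input before any AI call. The pattern list below is a deliberately tiny illustrative subset, not the production lexicon, which would need clinical review.

```typescript
// Stage 1: deterministic keyword scan (~1ms), runs before any AI processing.
// Illustrative subset only — the real lexicon is maintained separately.
const CRISIS_PATTERNS: RegExp[] = [
  /\b(kill|hurt)\s+myself\b/i,
  /\bsuicid(e|al)\b/i,
  /\bend\s+it\s+all\b/i,
];

function stageOneScan(input: string): boolean {
  // On match: flag, skip the AI call, return the hardcoded crisis template.
  return CRISIS_PATTERNS.some((re) => re.test(input));
}
```

Keeping Stage 1 deterministic guarantees the crisis path never depends on model availability or latency.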
### 10.2 Crisis Response
| Layer | Detail |
|-------|--------|
| **Response** | Hardcoded template: empathetic acknowledgment + 988 Suicide & Crisis Lifeline + Crisis Text Line |
| **UI** | Full-screen modal with prominent crisis resource links, "I'm OK" dismiss button |
| **Logging** | All crisis events logged, never the content itself |
---
## Database Schema Summary
### Core Tables
| Table | Key Columns | Indexes |
|-------|------------|---------|
| `users` | id (uuid), email, password_hash, created_at | email (unique) |
| `profiles` | user_id (FK), display_name, coaching_style, notification_time, onboarding_complete, ritual_streak, ritual_consistency_score | user_id (unique) |
| `subscriptions` | user_id (FK), plan (enum), status, started_at, expires_at, apple_transaction_id?, google_purchase_token? | user_id, status |
| `refresh_tokens` | id, user_id (FK), token_hash, revoked, expires_at | token_hash, user_id |
### Feature Tables
| Table | Key Columns | Indexes |
|-------|------------|---------|
| `turns` | id, user_id, input_text (encrypted), perspectives (jsonb), micro_action (jsonb), fragments (jsonb), emotion_vector (jsonb), saved, pattern_seed, created_at | user_id + created_at |
| `mirror_sessions` | id, user_id, status, reflection (jsonb), pattern_seed, started_at, closed_at | user_id + status |
| `mirror_messages` | id, session_id (FK), role, content (encrypted), created_at | session_id + created_at |
| `mirror_fragments` | id, session_id (FK), message_id (FK), distortion_type (enum), phrase, start_idx, end_idx, confidence | session_id, distortion_type |
| `lens_goals` | id, user_id, title, description, target_date, visualization_text, status, created_at | user_id + status |
| `lens_actions` | id, goal_id (FK), text, if_clause, then_clause, completed, completed_at | goal_id + completed |
| `rehearsal_sessions` | id, user_id, goal_id (FK), script (jsonb), duration, completed, started_at, completed_at | user_id + goal_id |
| `ritual_sessions` | id, user_id, template (enum), status, steps_completed (jsonb), started_at, completed_at | user_id + created_at |
| `gallery_items` | id, user_id, source_feature (enum), source_id, content_preview, pattern_seed, distortion_types (text[]), created_at | user_id + source_feature, full-text on content_preview |
| `evidence_wall_tiles` | id, user_id, tile_type (enum), source_feature, source_id, color_accent, metadata (jsonb), created_at | user_id + created_at, user_id + tile_type |
### Analytics Tables
| Table | Key Columns | Indexes |
|-------|------------|---------|
| `spectrum_weekly` | id, user_id, week_start, river_data (jsonb), glass_data (jsonb), impact_data (jsonb), rhythm_data (jsonb), growth_score, insight_text | user_id + week_start |
| `spectrum_monthly` | id, user_id, month_start, narrative (text), growth_trajectory (jsonb), milestone_events (jsonb) | user_id + month_start |
| `ai_usage_events` | id, user_id, feature, model, input_tokens, output_tokens, latency_ms, cost_usd, created_at | user_id + feature, created_at |
| `safety_events` | id, user_id, input_hash, detection_stage, action_taken, created_at | user_id, created_at |
### Enum Types
```sql
CREATE TYPE plan_type AS ENUM ('free', 'prism');
CREATE TYPE subscription_status AS ENUM ('active', 'expired', 'canceled', 'trial', 'grace_period');
CREATE TYPE distortion_type AS ENUM ('catastrophizing', 'black_and_white', 'mind_reading', 'fortune_telling', 'personalization', 'discounting_positives', 'emotional_reasoning', 'should_statements', 'labeling', 'overgeneralization');
CREATE TYPE tile_type AS ENUM ('saved_keepsake', 'mirror_reflection', 'completed_action', 'self_correction', 'streak_milestone', 'goal_completion', 'reframe_echo', 'rehearsal_complete', 'ritual_complete');
CREATE TYPE ritual_template AS ENUM ('morning', 'evening', 'quick');
CREATE TYPE source_feature AS ENUM ('turn', 'mirror', 'lens', 'rehearsal', 'ritual');
```
---
## Redis Key Patterns
| Key | Type | TTL | Purpose |
|-----|------|-----|---------|
| `rate:turns:{user_id}:{date}` | counter | 24h | Daily Turn rate limit |
| `rate:mirror:{user_id}:{week}` | counter | 7d | Weekly Mirror rate limit |
| `rate:rehearsal:{user_id}:{week}` | counter | 7d | Weekly Rehearsal rate limit |
| `entitlement:{user_id}` | string | 5 min | Cached subscription plan |
| `streak:daily:{user_id}` | hash | — | Current daily streak count + last active date |
| `streak:ritual:{user_id}` | hash | — | Ritual streak + consistency data |
| `affirmation:{user_id}:{date}` | string | 24h | Today's cached affirmation |
| `turns:recent:{user_id}` | list | 5 min | Cached recent turns |
| `session:{session_id}` | hash | 2h | Active Mirror session context |
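As one worked example of the key patterns above, the daily Turn counter can be sketched with a `Map` standing in for Redis `INCR` plus a 24h `EXPIRE` (the stand-in and helper names are illustrative; the key shape matches `rate:turns:{user_id}:{date}`):

```typescript
// Daily Turn rate counter; Map stands in for Redis in this sketch.
const counters = new Map<string, number>();

function turnRateKey(userId: string, now: Date): string {
  // UTC date bucket, e.g. rate:turns:u1:2026-02-22
  return `rate:turns:${userId}:${now.toISOString().slice(0, 10)}`;
}

function incrementTurns(userId: string, now = new Date()): number {
  const key = turnRateKey(userId, now);
  const next = (counters.get(key) ?? 0) + 1; // Redis: INCR key (EXPIRE 24h on first write)
  counters.set(key, next);
  return next;
}
```

Because the date is baked into the key, counters reset naturally at the day boundary and the TTL just garbage-collects stale keys.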
---
## AI Gateway Call Patterns
### Cost Tiers
| Feature | Model | Est. Cost/Call | Caching |
|---------|-------|----------------|---------|
| Turn (reframe) | Claude Haiku 4.5 | ~$0.003 | System prompt cached (40% saving) |
| Mirror (fragment detection) | Claude Haiku 4.5 | ~$0.002 | System prompt cached |
| Mirror (reflective response) | Claude Haiku 4.5 | ~$0.003 | System prompt cached |
| Mirror (session reflection) | Claude Haiku 4.5 | ~$0.005 | — |
| Lens (goal refinement) | Claude Haiku 4.5 | ~$0.004 | — |
| Rehearsal (script gen) | Claude Haiku 4.5 | ~$0.008 | — |
| Ritual (personalization) | Claude Haiku 4.5 | ~$0.003 | System prompt cached |
| Spectrum (weekly) | Claude Batch API | ~$0.010 | Batch (50% off) |
| Spectrum (monthly) | Claude Batch API | ~$0.015 | Batch (50% off) |
| Safety (confirmation) | Included in feature call | $0 | Part of existing call |
### Prompt Template Versioning
All prompts stored as versioned templates: `prompts/{feature}/{version}.json`
| Prompt | Current Version |
|--------|----------------|
| turn_reframe | v1.0 |
| mirror_fragment_detect | v1.0 |
| mirror_reflect | v1.0 |
| mirror_session_close | v1.0 |
| lens_goal_refine | v1.0 |
| rehearsal_script | v1.0 |
| ritual_personalize | v1.0 |
| spectrum_weekly | v1.0 |
| spectrum_monthly | v1.0 |
| safety_check | v1.0 |
---
## Frontend Component Architecture
### Navigation
```
AppNavigator (Expo Router)
├── (auth)
│ ├── login.tsx
│ ├── register.tsx
│ └── onboarding/
│ ├── welcome.tsx
│ ├── metaphor.tsx
│ ├── turn-demo.tsx
│ ├── style-select.tsx
│ ├── notifications.tsx
│ └── first-turn.tsx
├── (tabs)
│ ├── turn/
│ │ ├── index.tsx — Turn home + input
│ │ ├── results.tsx — Turn results display
│ │ ├── ritual-select.tsx — Ritual template picker
│ │ └── ritual-flow.tsx — Active ritual flow
│ ├── mirror/
│ │ ├── index.tsx — Session list + new session
│ │ └── session.tsx — Active Mirror session
│ ├── lens/
│ │ ├── index.tsx — Goal dashboard
│ │ ├── create-goal.tsx — 6-step goal creation
│ │ ├── goal-detail.tsx — Goal detail + actions
│ │ └── rehearsal.tsx — Rehearsal session
│ ├── gallery/
│ │ ├── index.tsx — Pattern grid + filters
│ │ └── detail.tsx — Pattern detail + share
│ └── you/
│ ├── index.tsx — Profile + stats
│ ├── evidence-wall.tsx — Evidence Wall mosaic
│ ├── spectrum.tsx — Spectrum dashboard
│ ├── settings.tsx — App settings
│ └── subscription.tsx — Plan management
├── guide/
│ ├── checkin.tsx — Check-in conversation (nested in Lens goal detail)
│ ├── checkin-summary.tsx — Post check-in summary
│ ├── bridge-card.tsx — Cross-feature bridge card (shared component)
│ ├── attention-prompt.tsx — Daily attention prompt card
│ ├── moment-log.tsx — Log a noticed moment
│ ├── evidence-card.tsx — Evidence intervention card (used in Mirror + Turn)
│ └── pulse/
│ ├── index.tsx — Weekly Pulse container (3-step flow)
│ ├── self-report.tsx — Step 1: self-report
│ ├── ai-read.tsx — Step 2: AI observations
│ └── next-focus.tsx — Step 3: next week focus
└── (modals)
├── fragment-detail.tsx — Half-sheet for fragment info
├── upgrade.tsx — Prism upgrade prompt
├── crisis.tsx — Crisis response (safety)
├── share-card.tsx — Pattern card share sheet
└── rate-limit.tsx — Rate limit notice
```
### Shared Components
| Component | Usage | SVG Asset |
|-----------|-------|-----------|
| `FragmentIcon` | Everywhere — the core ◇ | `fragment-icons.svg` |
| `TabBar` | Main navigation | `icons-tab-bar.svg` |
| `DistortionBadge` | Mirror highlights, Gallery filters | `icons-distortions.svg` |
| `ActionIcon` | Buttons, list items, settings | `icons-actions.svg` |
| `KaleidoscopePattern` | Gallery, Turn result, share card | `patterns-kaleidoscope.svg` |
| `ProgressRing` | Lens goals, Rehearsal timer | `progress-indicators.svg` |
| `StepDots` | Lens 6-step, Ritual flow | `progress-indicators.svg` |
| `StreakCalendar` | Ritual tracking, You stats | `progress-indicators.svg` |
| `EvidenceMosaic` | Evidence Wall | `evidence-wall.svg` |
| `LoadingSpinner` | All loading states | `loading-animations.svg` |
| `SkeletonShimmer` | Data loading | `loading-animations.svg` |
| `TurnAnimation` | Turn processing | `loading-animations.svg` |
| `AIThinkingBubble` | Mirror AI processing | `loading-animations.svg` |
| `SuccessBurst` | Completion celebrations | `loading-animations.svg` |
| `BreathingLogo` | Splash, idle states | `loading-animations.svg` |
| `StatusBar` | Device chrome | `device-chrome.svg` |
| `NavHeader` | All screens | `device-chrome.svg` |
| `TabBarFrame` | Tab bar container | `device-chrome.svg` |
| `Toast` | Success/error feedback | `device-chrome.svg` |
| `InputAccessory` | Mirror, Turn text input | `device-chrome.svg` |
| `ShardCluster` | Empty states, backgrounds | `decorative-shards.svg` |
| `PrismaticDivider` | Section separators | `decorative-shards.svg` |
| `CornerAccent` | Card decorations | `decorative-shards.svg` |
| `SpectrumRiver` | Spectrum dashboard | `spectrum-visualizations.svg` |
| `SpectrumGlass` | Spectrum dashboard | `spectrum-visualizations.svg` |
| `SpectrumImpact` | Spectrum dashboard | `spectrum-visualizations.svg` |
| `SpectrumRhythm` | Spectrum dashboard | `spectrum-visualizations.svg` |
| `SpectrumGrowth` | Spectrum dashboard | `spectrum-visualizations.svg` |
| `GuideCard` | Bridge cards, evidence interventions, Guide notices | — (CSS-only prismatic border) |
| `GuideChat` | Check-in conversation interface | — (reuses Mirror chat styling) |
| `PulseScale` | Weekly Pulse self-report (5-point fragment scale) | `fragment-icons.svg` |
---
## 11. The Guide — Technical Specification
### 11.1 Database Schema
```sql
-- Guide check-in conversations
CREATE TABLE guide_checkins (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
goal_id UUID REFERENCES lens_goals(id),
started_at TIMESTAMPTZ DEFAULT NOW(),
ended_at TIMESTAMPTZ,
plan_adjustments JSONB, -- { old_plan, new_plan, reason }
evidence_surfaced JSONB, -- array of proof point IDs shown
summary TEXT,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Guide check-in messages (conversation history)
CREATE TABLE guide_checkin_messages (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
checkin_id UUID REFERENCES guide_checkins(id),
role VARCHAR(10) NOT NULL CHECK (role IN ('guide', 'user')),
content TEXT NOT NULL,
sequence_order INTEGER,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Cross-feature bridge events
CREATE TABLE guide_bridges (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
bridge_type VARCHAR(20) NOT NULL CHECK (bridge_type IN ('discovery', 'reinforcement', 'integration')),
source_feature VARCHAR(20) NOT NULL, -- 'mirror', 'turn', 'lens'
target_feature VARCHAR(20), -- what feature the bridge suggests
trigger_data JSONB, -- { session_ids, turn_ids, theme, confidence }
bridge_content TEXT, -- the displayed bridge text
was_shown BOOLEAN DEFAULT FALSE,
was_acted_on BOOLEAN DEFAULT FALSE,
action_taken VARCHAR(50), -- 'opened_lens', 'started_rehearsal', 'dismissed', etc.
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Daily attention prompts
CREATE TABLE guide_attention_prompts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
goal_id UUID REFERENCES lens_goals(id),
prompt_type VARCHAR(20) NOT NULL CHECK (prompt_type IN ('vision', 'capability', 'awareness', 'action', 'reflection')),
manifestation_step INTEGER NOT NULL CHECK (manifestation_step BETWEEN 2 AND 6),
prompt_text TEXT NOT NULL,
delivered_at TIMESTAMPTZ,
was_acknowledged BOOLEAN DEFAULT FALSE,
moment_logged BOOLEAN DEFAULT FALSE,
moment_text TEXT,
moment_logged_at TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Evidence interventions
CREATE TABLE guide_evidence_interventions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
trigger_source VARCHAR(20) NOT NULL, -- 'mirror', 'turn'
trigger_session_id UUID, -- mirror_session_id or turn_id
trigger_signals JSONB, -- { signals detected: helplessness_language, etc. }
evidence_shown JSONB, -- array of proof point IDs surfaced
intervention_text TEXT,
was_shown BOOLEAN DEFAULT FALSE,
was_acted_on BOOLEAN DEFAULT FALSE,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Weekly pulse check-ins
CREATE TABLE guide_pulses (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID REFERENCES users(id),
week_start DATE NOT NULL,
self_report_score INTEGER CHECK (self_report_score BETWEEN 1 AND 5),
self_report_text TEXT,
ai_observations JSONB, -- array of { observation, source, accent_color }
divergence_note TEXT, -- when self-report != AI read
next_week_focus JSONB, -- array of { suggestion, feature, reasoning }
completed_at TIMESTAMPTZ,
created_at TIMESTAMPTZ DEFAULT NOW(),
UNIQUE(user_id, week_start)
);
```
### 11.2 API Endpoints
| Endpoint | Method | Description | Auth | Rate Limit |
|----------|--------|-------------|------|------------|
| `/guide/checkins/:goalId` | GET | Get check-in history for a goal | JWT | — |
| `/guide/checkins/:goalId/start` | POST | Start a new check-in conversation | JWT | Free: 1/mo/goal, Prism: unlimited |
| `/guide/checkins/:checkinId/message` | POST | Send a message in check-in conversation | JWT | — |
| `/guide/checkins/:checkinId/complete` | POST | End check-in, generate summary | JWT | — |
| `/guide/bridges` | GET | Get pending bridge cards for user | JWT | — |
| `/guide/bridges/:id/action` | POST | Record bridge action (shown, acted_on, dismissed) | JWT | — |
| `/guide/bridges/scan` | POST | Trigger cross-feature pattern scan (background job) | JWT | Auto: daily |
| `/guide/prompts/today` | GET | Get today's attention prompt | JWT | — |
| `/guide/prompts/:id/acknowledge` | POST | Mark prompt as seen | JWT | — |
| `/guide/prompts/:id/log-moment` | POST | Log a noticed moment | JWT | — |
| `/guide/prompts/generate` | POST | Generate next prompt (background job) | JWT | Auto: daily |
| `/guide/evidence/check` | POST | Check for evidence intervention triggers | JWT | Auto: after Mirror/Turn |
| `/guide/evidence/:id/action` | POST | Record intervention action | JWT | — |
| `/guide/pulse/current` | GET | Get current week's pulse (or create if needed) | JWT | — |
| `/guide/pulse/:id/self-report` | POST | Submit self-report score + text | JWT | 1/week |
| `/guide/pulse/:id/ai-read` | GET | Generate AI observations for the week | JWT | Prism only |
| `/guide/pulse/:id/focus` | GET | Generate next-week focus suggestions | JWT | Prism only |
| `/guide/pulse/:id/complete` | POST | Mark pulse as complete | JWT | — |
### 11.3 AI Pipeline
The Guide requires a cross-feature AI analysis pipeline that differs from single-feature calls:
**Cross-Feature Context Window:**
For check-ins, bridges, and the weekly pulse, the AI needs context from multiple features simultaneously:
```
Guide AI Context = {
user_profile: { coaching_style, member_since, preferences },
active_goals: [ { goal, milestones, if_then_plans, progress } ],
recent_mirror_sessions: [ last 7 days — themes, fragment types, emotional tone ],
recent_turns: [ last 7 days — topics, distortions, saved keepsakes ],
evidence_wall: [ last 30 days — proof points by type and source ],
ritual_data: { streak, consistency_score, last_completed },
previous_checkins: [ last 2 check-ins per goal — summaries only ],
previous_pulse: { last week's scores and focus areas }
}
```
**Token estimate for cross-feature context:** ~1,500-2,000 input tokens. With prompt caching (system prompt + stable user context cached), effective billable input drops to ~800-1,200 tokens per Guide call.
**Background Jobs:**
| Job | Schedule | Purpose | AI Model |
|-----|----------|---------|----------|
| Bridge Pattern Scan | Daily (2am user-local) | Analyze 7-day Mirror/Turn window for cross-feature patterns | Haiku 4.5 Batch |
| Attention Prompt Generation | Daily (5am user-local) | Generate personalized prompt based on goal progress and manifestation step | Haiku 4.5 (cached) |
| Evidence Trigger Check | After each Mirror session / Turn | Check session language for self-efficacy dip signals | Haiku 4.5 (inline, fast) |
| Weekly Pulse AI Read | On pulse open (lazy) | Generate weekly observations from all feature data | Haiku 4.5 |
**Prompt Engineering Notes:**
The Guide's system prompt must enforce:
- Evidence-first framing (always lead with what went well)
- Specific data references (numbers, dates, not vague encouragement)
- Collaborative tone (propose adjustments, don't dictate)
- Forward momentum (always end with next action)
- Cross-feature awareness (reference Mirror/Turn patterns when coaching on Lens goals)
- The user's selected coaching style (brutal honesty, gentle guidance, logical analysis, etc.)
### 11.4 Frontend Component Tree
```
screens/guide/
├── GuideCheckinScreen.tsx — Check-in conversation (within Lens goal detail)
├── GuideCheckinSummary.tsx — Post check-in summary card
├── GuideBridgeCard.tsx — Reusable bridge card component (3 variants)
├── GuideAttentionPrompt.tsx — Daily prompt card (within Lens dashboard)
├── GuideMomentLog.tsx — Log a noticed moment screen
├── GuideEvidenceCard.tsx — Evidence intervention card (used in Mirror + Turn)
├── GuidePulseScreen.tsx — 3-step weekly pulse flow
│ ├── PulseSelfReport.tsx — Step 1: self-report scale
│ ├── PulseAIRead.tsx — Step 2: AI observations
│ └── PulseNextFocus.tsx — Step 3: next week suggestions
└── shared/
├── GuideBorder.tsx — Prismatic gradient border component
├── GuideIcon.tsx — Faceted diamond with directional pulse
└── FragmentScale.tsx — 5-point fragment glow scale (for Pulse)
```
### 11.5 Rate Limiting (Guide-Specific)
| Feature | Free | Prism |
|---------|------|-------|
| Goal check-ins | `rate:guide:checkin:{userId}:{goalId}` — 1/month | Unlimited |
| Cross-feature bridges | Discovery only. `rate:guide:bridge:{userId}` — 1/day | All types, 1/day |
| Attention prompts | `rate:guide:prompt:{userId}` — 3/week | Daily |
| Evidence interventions | Blocked (Prism feature) | `rate:guide:evidence:{userId}` — 1/session |
| Weekly pulse | Self-report only | Full 3-step with AI read |
### 11.6 Push Notifications (Guide-Specific)
| Notification | Trigger | Copy |
|---|---|---|
| Check-in reminder | Scheduled per goal settings | "Time to check in on [goal name]. How's it going?" |
| Attention prompt | Daily at user's chosen time | "Your Lens has a focus for today." |
| Bridge surfaced | After daily bridge scan finds match | "◇ A pattern is forming. Take a look." |
| Weekly pulse | User's chosen day (default Sunday 7pm) | "Your weekly Pulse is ready." |
| Moment log reminder | 6 hours after prompt acknowledgment | "Did you notice anything today?" |