LetsBeBiz-Site/docs/superpowers/plans/2026-04-06-voice-discovery-pivot.md
Matt a5570a90b2 docs: add voice discovery pivot implementation plan
9-task plan covering: ModeToggle removal, i18n, system prompt rewrite,
contact card tool, reconnection logic, UI rebuild, Discovery section,
landing page integration, and email template verification.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 12:47:52 -04:00


Voice Discovery Mode Implementation Plan

For agentic workers: REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (- [ ]) syntax for tracking.

Goal: Pivot the voice mode from an in-configurator form-filling shortcut into a standalone consultative discovery section with its own entry point, rewritten conversational prompt, on-screen contact verification, and reconnection handling.

Architecture: The configurator loses its type/talk toggle and becomes typed-form-only. A new DiscoverySection component is added to the landing page with warm copy and a CTA that expands an inline voice panel. The Gemini system prompt is rewritten for consultative tone, a new request_contact tool surfaces an on-screen confirmation card, and the provider gains reconnection logic.

Tech Stack: Next.js 15 (App Router), React 19, TypeScript, Tailwind CSS, Framer Motion, Gemini Live API (WebSocket), next-intl

Spec: docs/superpowers/specs/2026-04-01-voice-discovery-pivot-design.md


File Map

| Action | File | Responsibility |
| --- | --- | --- |
| Delete | src/components/configurator/ModeToggle.tsx | Removed — no longer needed |
| Modify | src/components/configurator/WizardContainer.tsx | Remove mode toggle, voice branch, voice imports |
| Modify | src/lib/gemini-live.ts | Rewrite system prompts, add request_contact tool |
| Modify | src/components/configurator/VoiceAgentProvider.tsx | Contact card state, request_contact handler, reconnection logic, refactor brief weighting |
| Modify | src/components/configurator/VoiceAgent.tsx | New layout (larger orb, bigger transcript, no chips), contact card UI, autoscroll fix, mobile responsive |
| Create | src/components/sections/Discovery.tsx | New landing page section: warm copy + CTA + expandable voice panel |
| Modify | src/app/(frontend)/[locale]/page.tsx | Add `<Discovery />` section to page |
| Modify | src/i18n/messages/en.json | Add discovery section strings, update voice strings |
| Modify | src/i18n/messages/fr.json | Add discovery section strings, update voice strings |

Task 1: Remove ModeToggle from Configurator

Files:

  • Delete: src/components/configurator/ModeToggle.tsx

  • Modify: src/components/configurator/WizardContainer.tsx

  • Step 1: Remove mode state and voice imports from WizardContainer

In src/components/configurator/WizardContainer.tsx, remove the voice-related imports (lines 12-13):

// DELETE these two imports:
import VoiceAgent from './VoiceAgent';
import VoiceAgentProvider from './VoiceAgentProvider';

Also remove the ModeToggle import (line 11):

// DELETE this import:
import ModeToggle from './ModeToggle';
  • Step 2: Remove mode state and voice handler from component body

In the WizardContainer component body, remove:

// DELETE this state:
const [mode, setMode] = useState<'type' | 'talk'>('type');

Remove the handleVoiceComplete callback (lines 147-152):

// DELETE this entire function:
const handleVoiceComplete = (voiceBrief: string, voiceFormData: WizardFormData) => {
  setFormData(voiceFormData);
  setBrief(voiceBrief);
  setDirection(1);
  setCurrentStep(4);
};

In handleReset, remove setMode('type'); (line 111).

  • Step 3: Remove ModeToggle and voice branch from JSX

Remove the ModeToggle rendering block (lines 165-169):

// DELETE this block:
{!isGenerating && currentStep !== 4 && (
  <div className="flex justify-center mb-4">
    <ModeToggle mode={mode} onChange={setMode} />
  </div>
)}

Remove the entire voice mode branch (lines 172-184):

// DELETE this block:
{mode === 'talk' && !isGenerating && currentStep !== 4 && (
  <motion.div
    key="voice-mode"
    variants={stepVariants}
    initial="initial"
    animate="animate"
    exit="exit"
  >
    <VoiceAgentProvider locale={locale}>
      <VoiceAgent locale={locale} onComplete={handleVoiceComplete} />
    </VoiceAgentProvider>
  </motion.div>
)}

Remove the mode === 'type' && condition from the three remaining step blocks. They should just check !isGenerating && currentStep === N. For example, step 1 changes from:

{mode === 'type' && !isGenerating && currentStep === 1 && (

to:

{!isGenerating && currentStep === 1 && (

Do the same for steps 2 and 3.

  • Step 4: Delete ModeToggle.tsx

Delete the file src/components/configurator/ModeToggle.tsx.

  • Step 5: Verify TypeScript compiles

Run: npx tsc --noEmit 2>&1 | grep -i "WizardContainer\|ModeToggle" Expected: No output (no errors in these files)

  • Step 6: Commit
git add -u src/components/configurator/ModeToggle.tsx src/components/configurator/WizardContainer.tsx
git commit -m "refactor: remove ModeToggle from configurator, make it typed-form-only"

Task 2: Add i18n Translations

Files:

  • Modify: src/i18n/messages/en.json

  • Modify: src/i18n/messages/fr.json

  • Step 1: Add discovery section keys to en.json

Add a new top-level "discovery" key block. Insert it after the "configurator" block (after line 185 — after the closing } of configurator). Also update "voice" keys within configurator to add new strings needed for the redesigned voice UI.

In the "configurator" block, replace the existing "voice" object (lines 167-175) with:

"voice": {
  "agentName": "LetsBe project assistant",
  "endConversation": "End Conversation",
  "analyzingSite": "Analyzing your site...",
  "connecting": "Connecting...",
  "mute": "Mute",
  "unmute": "Unmute",
  "generatingBrief": "Generating your brief...",
  "contactConfirm": "Does this look right?",
  "contactEdit": "Edit",
  "contactConfirmButton": "That's correct",
  "reconnect": "Reconnect",
  "connectionLost": "Connection lost. Your conversation is saved.",
  "briefComplete": "Brief complete"
}

Remove the "mode" keys (lines 163-166) — no longer needed:

// DELETE:
"mode": {
  "type": "Type",
  "talk": "Talk"
},

Add the new "discovery" block after "configurator":

"discovery": {
  "eyebrow": "Let's Figure It Out",
  "title": "Not sure where to start?",
  "description": "Tell us what you're thinking and we'll figure it out together. You'll get a personalized brief at the end.",
  "cta": "Let's Talk",
  "privacy": "Voice conversations are not recorded or stored."
}
  • Step 2: Add discovery section keys to fr.json

Same structure in src/i18n/messages/fr.json. Replace the "voice" object with:

"voice": {
  "agentName": "Assistant projet LetsBe",
  "endConversation": "Terminer la conversation",
  "analyzingSite": "Analyse de votre site...",
  "connecting": "Connexion en cours...",
  "mute": "Couper le micro",
  "unmute": "Activer le micro",
  "generatingBrief": "Génération de votre brief...",
  "contactConfirm": "Est-ce correct ?",
  "contactEdit": "Modifier",
  "contactConfirmButton": "C'est correct",
  "reconnect": "Reconnecter",
  "connectionLost": "Connexion perdue. Votre conversation est sauvegardée.",
  "briefComplete": "Brief terminé"
}

Remove the "mode" keys.

Add the "discovery" block after "configurator":

"discovery": {
  "eyebrow": "Trouvons ensemble",
  "title": "Vous ne savez pas par où commencer ?",
  "description": "Dites-nous ce que vous avez en tête et on trouvera la solution ensemble. Vous recevrez un brief personnalisé à la fin.",
  "cta": "Discutons",
  "privacy": "Les conversations vocales ne sont ni enregistrées ni stockées."
}
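Beyond raw JSON validity, it is worth confirming that en.json and fr.json expose the same key shape. A minimal sketch of such a check (the inline objects are stand-ins for the real message files):

```typescript
// Recursively collect dotted key paths from a messages object.
function keyPaths(obj: Record<string, unknown>, prefix = ''): string[] {
  return Object.entries(obj).flatMap(([key, value]) => {
    const path = prefix ? `${prefix}.${key}` : key;
    return typeof value === 'object' && value !== null
      ? keyPaths(value as Record<string, unknown>, path)
      : [path];
  });
}

// Stand-ins for the real en.json / fr.json contents.
const en = { discovery: { eyebrow: "Let's Figure It Out", cta: "Let's Talk" } };
const fr = { discovery: { eyebrow: 'Trouvons ensemble', cta: 'Discutons' } };

const missingInFr = keyPaths(en).filter((p) => !keyPaths(fr).includes(p));
console.log(missingInFr); // []
```

A real check would JSON.parse both message files and compare the path sets in both directions.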
  • Step 3: Verify JSON is valid

Run: node -e "JSON.parse(require('fs').readFileSync('src/i18n/messages/en.json','utf8')); console.log('en.json OK')" && node -e "JSON.parse(require('fs').readFileSync('src/i18n/messages/fr.json','utf8')); console.log('fr.json OK')" Expected: en.json OK and fr.json OK

  • Step 4: Commit
git add src/i18n/messages/en.json src/i18n/messages/fr.json
git commit -m "i18n: add discovery section translations, update voice strings"

Task 3: Rewrite System Prompt and Tools in gemini-live.ts

Files:

  • Modify: src/lib/gemini-live.ts

  • Step 1: Add request_contact tool declaration

In the AGENT_TOOLS array, add a new tool between analyze_website and complete_brief:

{
  name: 'request_contact',
  description:
    'Display a contact confirmation card on screen for the user to verify their name and email. Call this instead of spelling back their details verbally. The user will confirm or edit on screen.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      name: { type: Type.STRING, description: 'The name the user provided' },
      email: { type: Type.STRING, description: 'The email the user provided' },
    },
    required: ['name', 'email'],
  },
},
  • Step 2: Update complete_brief tool description

Change the complete_brief description to emphasize conversationSummary as primary:

{
  name: 'complete_brief',
  description:
    'Generate and send the project brief. The conversationSummary is the most important field — it should capture the full richness of the conversation. Structured fields (services, industry, timeline) are supporting metadata. Only call after the user has confirmed their contact details on screen.',
  // ... parameters stay the same
}
  • Step 3: Rewrite English system prompt

Replace the entire English return in buildSystemPrompt() with:

return `You are the LetsBe project assistant — a warm, experienced consultant for LetsBe Solutions. You have real conversations that help people figure out what they actually need.

Introduce yourself: "Hi, I'm the LetsBe project assistant. Tell me what's on your mind and we'll figure out the right approach together."

Your goal: understand what this person needs deeply enough to write a compelling, personalized brief. You are not filling out a form. You are having a genuine consultative conversation.

How to behave:
- Follow their thread. If they mention a frustration, dig into it. If they go on a tangent, that tangent probably matters. Don't redirect to your next topic.
- Ask one question at a time. Let them finish before moving on.
- Offer perspective, not just questions. "That sounds like it might be more of a systems integration problem than a website redesign" — you have opinions and experience, share them.
- Reference LetsBe's work naturally when relevant. "We built something similar for a hospitality group" — not a feature list.
- Keep each response to 2-3 sentences. You're a consultant, not a lecturer.
- It's OK if topics come up organically out of order. It's OK if some topics never come up at all.

Things worth exploring (but don't treat this as a checklist):
- What prompted them to reach out now? What's the underlying need?
- What's broken or frustrating about their current setup?
- What tools or systems does their team use today?
- If they have a website, offer to analyze it — then discuss what you find naturally.
- What would success look like for them?
- Who else is involved in the decision?
- What's driving their timeline?

Tool usage:
- Call update_selections silently whenever you pick up on structured data. Map what you hear to the closest predefined value. Never ask checkbox-style questions.
  - services: "web", "systems", "infrastructure"
  - aiTypes: "teammate", "customer-facing", "data-intelligence", "notsure"
  - industry: "maritime", "hospitality", "technology", "realestate", "finance", "ngo", "other"
  - timeline: "asap", "1-3months", "3-6months", "exploring"
- Call analyze_website when they mention a URL.
- When the conversation reaches a natural conclusion and you have a solid understanding of their needs, ask for their name and email. Say something like "I've got a great picture of what you need — let me put together a brief. What's your name and email?"
- After they provide name and email, call request_contact to show their details on screen. Say "I've put your details on screen — take a look and let me know if that's right." Wait for them to confirm before proceeding.
- After confirmation, call complete_brief immediately. Say "Perfect, generating your brief now." Include a detailed conversationSummary capturing ALL specifics: pain points, current tools, what they want to keep vs change, business context, decision-makers, what success looks like, any unique requirements. The conversationSummary is the primary input for the brief — the more detail, the better.

About LetsBe (reference naturally, don't recite):
- Everything custom-built from scratch — no templates, no page builders
- Private infrastructure: clients fully own and control their data and servers
- Small, experienced team with decades of combined design and engineering expertise
- Deep AI integration into any type of system
- Data sovereignty and digital privacy as a core focus`;
  • Step 4: Rewrite French system prompt

Replace the entire French return in buildSystemPrompt() with:

if (isFr) {
  return `Tu es l'assistant de projets LetsBe — un consultant chaleureux et expérimenté pour LetsBe Solutions. Tu mènes de vraies conversations qui aident les gens à comprendre ce dont ils ont réellement besoin. Toute la conversation se fait en français.

Présente-toi : "Bonjour, je suis l'assistant de projets LetsBe. Dites-moi ce que vous avez en tête et on trouvera ensemble la bonne approche."

Ton objectif : comprendre les besoins de cette personne assez profondément pour rédiger un brief convaincant et personnalisé. Tu ne remplis pas un formulaire. Tu as une vraie conversation de consultant.

Comment te comporter :
- Suis le fil de la conversation. S'ils mentionnent une frustration, creuse. Si un sujet connexe apparaît, il est probablement important. Ne redirige pas vers ton prochain sujet.
- Pose une seule question à la fois. Laisse-les finir avant de continuer.
- Offre ta perspective, pas seulement des questions. "Ça ressemble davantage à un problème d'intégration de systèmes qu'à une refonte de site web" — tu as des opinions et de l'expérience, partage-les.
- Mentionne le travail de LetsBe naturellement quand c'est pertinent. "On a construit quelque chose de similaire pour un groupe hôtelier" — pas une liste de fonctionnalités.
- Garde chaque réponse à 2-3 phrases. Tu es consultant, pas conférencier.
- C'est OK si les sujets arrivent dans un ordre différent. C'est OK si certains sujets n'arrivent jamais.

Sujets qui méritent d'être explorés (mais ne traite pas ça comme une checklist) :
- Qu'est-ce qui les a poussés à nous contacter maintenant ? Quel est le besoin sous-jacent ?
- Qu'est-ce qui ne fonctionne pas ou qui est frustrant dans leur configuration actuelle ?
- Quels outils ou systèmes leur équipe utilise-t-elle aujourd'hui ?
- S'ils ont un site web, propose de l'analyser — puis discute des résultats naturellement.
- À quoi ressemblerait le succès pour eux ?
- Qui d'autre est impliqué dans la décision ?
- Qu'est-ce qui motive leur calendrier ?

Utilisation des outils :
- Appelle update_selections silencieusement dès que tu captes une donnée structurée. Fais correspondre ce que tu entends à la valeur prédéfinie la plus proche. Ne pose jamais de questions de type formulaire.
  - services : "web", "systems", "infrastructure"
  - aiTypes : "teammate", "customer-facing", "data-intelligence", "notsure"
  - industry : "maritime", "hospitality", "technology", "realestate", "finance", "ngo", "other"
  - timeline : "asap", "1-3months", "3-6months", "exploring"
- Appelle analyze_website quand ils mentionnent une URL.
- Quand la conversation atteint une conclusion naturelle et que tu as une bonne compréhension de leurs besoins, demande leur nom et email. Dis quelque chose comme "J'ai une bonne vision de vos besoins — laissez-moi préparer un brief. Quel est votre nom et votre email ?"
- Après qu'ils aient donné nom et email, appelle request_contact pour afficher leurs coordonnées à l'écran. Dis "J'ai mis vos coordonnées à l'écran — vérifiez et dites-moi si c'est correct." Attends leur confirmation avant de continuer.
- Après confirmation, appelle complete_brief immédiatement. Dis "Parfait, je génère votre brief maintenant." Inclus un conversationSummary détaillé capturant TOUS les détails : points de friction, outils actuels, ce qu'ils veulent garder ou changer, contexte business, décideurs, ce que le succès représente, besoins uniques. Le conversationSummary est l'input principal du brief — plus il y a de détails, meilleur sera le brief.

À propos de LetsBe (mentionner naturellement, ne pas réciter) :
- Tout est développé sur mesure — aucun template, aucun constructeur de pages
- Infrastructure privée : les clients possèdent et contrôlent entièrement leurs données et serveurs
- Petite équipe expérimentée avec des décennies d'expérience combinée en design et ingénierie
- Intégration IA profonde dans tout type de système
- Souveraineté des données et confidentialité numérique comme priorité`;
}
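The enumerations in both prompts form a contract with update_selections; on the client it can help to drop any value the model invents. A defensive sketch (the function and constant names here are illustrative, not part of the existing code):

```typescript
// Allowed values, mirroring the enumerations in the system prompts.
const ALLOWED = {
  services: ['web', 'systems', 'infrastructure'],
  aiTypes: ['teammate', 'customer-facing', 'data-intelligence', 'notsure'],
  industry: ['maritime', 'hospitality', 'technology', 'realestate', 'finance', 'ngo', 'other'],
  timeline: ['asap', '1-3months', '3-6months', 'exploring'],
} as const;

type SelectionField = keyof typeof ALLOWED;

// Keep only values that appear in the predefined list for the given field.
function sanitizeSelection(field: SelectionField, values: string[]): string[] {
  const allowed = ALLOWED[field] as readonly string[];
  return values.filter((v) => allowed.includes(v));
}

console.log(sanitizeSelection('services', ['web', 'blockchain'])); // [ 'web' ]
```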
  • Step 5: Verify TypeScript compiles

Run: npx tsc --noEmit 2>&1 | grep -i "gemini-live" Expected: no new errors (a pre-existing @google/genai import error may still appear; it is unrelated to this change)

  • Step 6: Commit
git add src/lib/gemini-live.ts
git commit -m "feat: rewrite voice agent to consultative tone, add request_contact tool"

Task 4: Update VoiceAgentProvider — Contact Card State & Tool Handling

Files:

  • Modify: src/components/configurator/VoiceAgentProvider.tsx

  • Step 1: Add contact card types and state

Add a new type after the existing ConnectionStatus type (around line 14):

interface PendingContact {
  name: string;
  email: string;
}

Add to VoiceAgentContextValue interface:

pendingContact: PendingContact | null;
confirmContact: () => void;
updatePendingContact: (field: 'name' | 'email', value: string) => void;

Inside the provider component, add state:

const [pendingContact, setPendingContact] = useState<PendingContact | null>(null);

Add the handlers:

const updatePendingContact = useCallback((field: 'name' | 'email', value: string) => {
  setPendingContact((prev) => prev ? { ...prev, [field]: value } : null);
}, []);

const confirmContact = useCallback(() => {
  if (!pendingContact) return;
  // Contact confirmed — the agent will now call complete_brief
  // Send confirmation back through WebSocket so the agent knows
  if (wsRef.current?.readyState === WebSocket.OPEN) {
    wsRef.current.send(JSON.stringify({
      toolResponse: {
        functionResponses: [{
          id: pendingContactCallIdRef.current,
          name: 'request_contact',
          response: { result: JSON.stringify({ confirmed: true, name: pendingContact.name, email: pendingContact.email }) },
        }],
      },
    }));
  }
  pendingContactCallIdRef.current = '';
}, [pendingContact]);

Add a ref to track the pending tool call ID:

const pendingContactCallIdRef = useRef('');
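The confirmation payload confirmContact sends can be factored into a small pure helper, which also makes the shape easy to unit-test (the helper name is illustrative):

```typescript
// Build the toolResponse payload that answers a deferred request_contact call.
// Mirrors the shape confirmContact sends over the WebSocket.
function buildContactConfirmation(callId: string, name: string, email: string) {
  return {
    toolResponse: {
      functionResponses: [
        {
          id: callId,
          name: 'request_contact',
          response: { result: JSON.stringify({ confirmed: true, name, email }) },
        },
      ],
    },
  };
}

const payload = buildContactConfirmation('call-1', 'Ada', 'ada@example.com');
console.log(JSON.parse(payload.toolResponse.functionResponses[0].response.result).confirmed); // true
```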
  • Step 2: Add request_contact handler to handleToolCall

Inside handleToolCall, add a new branch before the complete_brief handler:

if (name === 'request_contact') {
  const { name: contactName, email: contactEmail } = args as { name: string; email: string };
  setPendingContact({ name: contactName, email: contactEmail });
  pendingContactCallIdRef.current = callId;
  // Don't return a tool response yet — wait for user confirmation via confirmContact()
  return '__DEFERRED__';
}

Then in the ws.onmessage tool call handler, adjust so deferred responses are not sent immediately. Change the existing tool call block from:

if (msg.toolCall) {
  const calls = msg.toolCall.functionCalls;
  if (calls) {
    const responses = [];
    for (const call of calls) {
      const result = await handleToolCall(call.name, call.args ?? {}, call.id);
      responses.push({ id: call.id, name: call.name, response: { result } });
    }
    ws.send(JSON.stringify({ toolResponse: { functionResponses: responses } }));
  }
}

to:

if (msg.toolCall) {
  const calls = msg.toolCall.functionCalls;
  if (calls) {
    const responses = [];
    for (const call of calls) {
      const result = await handleToolCall(call.name, call.args ?? {}, call.id);
      if (result !== '__DEFERRED__') {
        responses.push({ id: call.id, name: call.name, response: { result } });
      }
    }
    if (responses.length > 0) {
      ws.send(JSON.stringify({ toolResponse: { functionResponses: responses } }));
    }
  }
}
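The change reduces to: collect results, drop the deferred sentinel, and send only when something remains. As a standalone sketch (the helper name is illustrative):

```typescript
interface ToolCallResult {
  id: string;
  name: string;
  result: string;
}

const DEFERRED = '__DEFERRED__';

// Turn handled tool calls into functionResponses, holding back deferred ones.
// Returns null when nothing should be sent to the model yet.
function buildFunctionResponses(results: ToolCallResult[]) {
  const responses = results
    .filter((r) => r.result !== DEFERRED)
    .map((r) => ({ id: r.id, name: r.name, response: { result: r.result } }));
  return responses.length > 0 ? responses : null;
}

console.log(buildFunctionResponses([
  { id: '1', name: 'request_contact', result: DEFERRED },
  { id: '2', name: 'update_selections', result: '{"ok":true}' },
]));
// One response survives (update_selections); request_contact waits for confirmContact.
```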
  • Step 3: Update complete_brief to use confirmed contact data

In the complete_brief handler, after building formData, merge in the confirmed contact info from pendingContact state. Change the formData construction:

if (name === 'complete_brief') {
  if (briefSubmittedRef.current) return JSON.stringify({ success: true, message: 'Brief already submitted' });
  briefSubmittedRef.current = true;
  setIsGeneratingBrief(true);
  console.log('[VoiceAgent] complete_brief called, generating...');
  try {
    const toolArgs = args as Partial<WizardFormData> & { conversationSummary?: string };
    const summary = toolArgs.conversationSummary ?? '';
    const existingScope = toolArgs.scope ?? '';
    const combinedScope = [existingScope, summary].filter(Boolean).join('\n\n');
    // Use confirmed contact details from the on-screen card if available
    const contactName = pendingContact?.name ?? toolArgs.name ?? '';
    const contactEmail = pendingContact?.email ?? toolArgs.email ?? '';
    const formData = {
      ...DEFAULT_FORM_DATA,
      ...toolArgs,
      name: contactName,
      email: contactEmail,
      scope: combinedScope,
      locale,
    };
    delete (formData as Record<string, unknown>).conversationSummary;
    // ... rest stays the same

Note: handleToolCall is a useCallback, so reading the pendingContact state inside it would capture a stale value. Keep a ref in sync with the state and read the ref inside handleToolCall:

const [pendingContact, setPendingContact] = useState<PendingContact | null>(null);
const pendingContactRef = useRef<PendingContact | null>(null);

Every code path that updates the state must also update the ref, or edits the user makes on the card will never reach complete_brief:

// In the request_contact handler:
setPendingContact({ name: contactName, email: contactEmail });
pendingContactRef.current = { name: contactName, email: contactEmail };

// In updatePendingContact, update both together:
const updatePendingContact = useCallback((field: 'name' | 'email', value: string) => {
  setPendingContact((prev) => {
    const next = prev ? { ...prev, [field]: value } : null;
    pendingContactRef.current = next;
    return next;
  });
}, []);

Then in the complete_brief handler, read from pendingContactRef.current instead of pendingContact.

  • Step 4: Wire new values into context provider

Add the new values to the value object:

const value: VoiceAgentContextValue = {
  // ... existing values ...
  pendingContact,
  confirmContact,
  updatePendingContact,
};
  • Step 5: Reset contact state in startConversation and endConversation

In startConversation, add:

setPendingContact(null);
pendingContactRef.current = null;
pendingContactCallIdRef.current = '';

In endConversation, add:

pendingContactCallIdRef.current = '';

(Don't clear pendingContact on end — it may still be displayed during brief generation transition.)

  • Step 6: Verify TypeScript compiles

Run: npx tsc --noEmit 2>&1 | grep -i "VoiceAgentProvider" Expected: No output

  • Step 7: Commit
git add src/components/configurator/VoiceAgentProvider.tsx
git commit -m "feat: add request_contact tool handling and contact card state"

Task 5: Add Reconnection Logic to VoiceAgentProvider

Files:

  • Modify: src/components/configurator/VoiceAgentProvider.tsx

  • Step 1: Add reconnection state and context values

Add to the VoiceAgentContextValue interface:

canReconnect: boolean;
reconnect: () => Promise<void>;

Add state in the provider:

const [canReconnect, setCanReconnect] = useState(false);
  • Step 2: Modify WebSocket onclose to enable reconnection

In the ws.onclose handler inside startConversation, replace the existing handler:

ws.onclose = (e) => {
  console.log('[VoiceAgent] WebSocket closed:', e.code, e.reason);
  // Clean up audio but preserve transcript and selections
  cancelAnimationFrame(animFrameRef.current);
  if (mediaStreamRef.current) {
    mediaStreamRef.current.getTracks().forEach((track) => track.stop());
    mediaStreamRef.current = null;
  }
  if (audioContextRef.current) {
    void audioContextRef.current.close();
    audioContextRef.current = null;
  }
  if (playbackContextRef.current) {
    void playbackContextRef.current.close();
    playbackContextRef.current = null;
  }
  wsRef.current = null;
  setUserAmplitude(0);
  setAgentAmplitude(0);
  // If we weren't intentionally ending, allow reconnect.
  // Note: `status` is captured when this handler is created; if it can be stale
  // by close time, mirror it in a ref the same way briefSubmittedRef works.
  if (status !== 'ending' && !briefSubmittedRef.current) {
    setStatus('error');
    setErrorMessage(null); // The UI will show reconnect option based on canReconnect
    setCanReconnect(true);
  }
};
  • Step 3: Build reconnect function

Add a reconnect callback that re-uses startConversation but seeds the existing transcript as context:

const reconnect = useCallback(async () => {
  setCanReconnect(false);
  setErrorMessage(null);
  // Don't reset transcript or selections — preserve them
  // startConversation will be called, but we need to pass transcript context
  // We'll handle this by temporarily storing transcript for the setup message
  reconnectTranscriptRef.current = transcript;
  await startConversation();
}, [startConversation, transcript]);

Add a ref:

const reconnectTranscriptRef = useRef<TranscriptEntry[]>([]);

In startConversation, after the setup complete message is received and before the intro prompt, check if there's reconnect context:

if (msg.setupComplete !== undefined) {
  console.log('[VoiceAgent] Setup complete, session active');
  clearTimeout(setupTimeout);
  setStatus('active');
  trackAmplitude();

  // If reconnecting, seed with prior conversation context
  const priorTranscript = reconnectTranscriptRef.current;
  if (priorTranscript.length > 0) {
    const summary = priorTranscript
      .map((e) => `${e.role === 'user' ? 'User' : 'Agent'}: ${e.text}`)
      .join('\n');
    ws.send(JSON.stringify({
      realtimeInput: {
        text: `We were having a conversation but got disconnected. Here is what was discussed so far:\n\n${summary}\n\nPlease acknowledge the reconnection briefly and continue where we left off.`,
      },
    }));
    reconnectTranscriptRef.current = [];
  } else {
    ws.send(JSON.stringify({
      realtimeInput: {
        text: 'Hello, please introduce yourself.',
      },
    }));
  }
  return;
}

Modify startConversation so it only resets transcript/selections when not reconnecting:

// At the top of startConversation, change:
setTranscript([]);
setSelections({});
// To:
if (reconnectTranscriptRef.current.length === 0) {
  setTranscript([]);
  setSelections({});
}
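The recap string built in the setup handler can be factored into a pure helper for testing (the TranscriptEntry shape is assumed from the provider's existing transcript state):

```typescript
interface TranscriptEntry {
  role: 'user' | 'agent';
  text: string;
}

// Rebuild the prior conversation as plain text for the post-reconnect prompt.
function buildReconnectContext(transcript: TranscriptEntry[]): string {
  return transcript
    .map((e) => `${e.role === 'user' ? 'User' : 'Agent'}: ${e.text}`)
    .join('\n');
}

console.log(buildReconnectContext([
  { role: 'user', text: 'We need a booking system.' },
  { role: 'agent', text: 'What does your current setup look like?' },
]));
// User: We need a booking system.
// Agent: What does your current setup look like?
```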
  • Step 4: Wire into context value

Add to the value object:

canReconnect,
reconnect,
  • Step 5: Reset reconnect state on fresh start

In startConversation, at the top (before the reconnect check):

setCanReconnect(false);

In endConversation:

setCanReconnect(false);
reconnectTranscriptRef.current = [];
  • Step 6: Verify TypeScript compiles

Run: npx tsc --noEmit 2>&1 | grep -i "VoiceAgentProvider" Expected: No output

  • Step 7: Commit
git add src/components/configurator/VoiceAgentProvider.tsx
git commit -m "feat: add reconnection logic to voice agent provider"

Task 6: Rebuild VoiceAgent UI

Files:

  • Modify: src/components/configurator/VoiceAgent.tsx

  • Step 1: Update imports and context destructuring

Update the context destructuring to include new values and remove unused ones:

const {
  status,
  errorMessage,
  isMicActive,
  toggleMic,
  transcript,
  isAnalyzingSite,
  isGeneratingBrief,
  agentAmplitude,
  startConversation,
  endConversation,
  completedBrief,
  completedFormData,
  pendingContact,
  confirmContact,
  updatePendingContact,
  canReconnect,
  reconnect,
} = useVoiceAgent();

Remove selections from destructuring — it's no longer displayed.

  • Step 2: Remove selection chips entirely

Delete the KNOWN_SERVICES, KNOWN_AI_TYPES, KNOWN_INDUSTRIES, KNOWN_TIMELINES arrays and the entire chipLabels building logic (lines 101-124).

Delete the selection chips JSX block (the <AnimatePresence> with chipLabels.length > 0).

  • Step 3: Fix autoscroll

Replace the existing autoscroll useEffect:

// Auto-scroll transcript
useEffect(() => {
  transcriptEndRef.current?.scrollIntoView({ behavior: 'smooth', block: 'end' });
}, [transcript]);
  • Step 4: Enlarge the orb

Change the orb container from w-20 h-20 to w-24 h-24:

className={cn(
  'w-24 h-24 rounded-full flex items-center justify-center transition-colors duration-300',
  // ... rest stays the same
)}

Update icon sizes inside the orb from size={28} to size={32}.

  • Step 5: Increase transcript height

Change the transcript container max-h-40 to max-h-72:

<div className="rounded-xl border border-outline-variant/30 bg-surface-high p-3 max-h-72 overflow-y-auto scrollbar-thin">
  • Step 6: Add Contact Confirmation Card

Add the contact confirmation card markup inside the main return, after the transcript and before the controls div:

{/* Contact confirmation card */}
<AnimatePresence>
  {pendingContact && !completedBrief && (
    <motion.div
      initial={{ opacity: 0, y: 8 }}
      animate={{ opacity: 1, y: 0 }}
      exit={{ opacity: 0, y: -8 }}
      transition={{ duration: 0.3, ease: [0.16, 1, 0.3, 1] }}
      className="rounded-xl border border-primary/20 bg-primary/5 p-4"
    >
      <p className="text-xs font-semibold uppercase tracking-label text-outline mb-3">
        {t('voice.contactConfirm')}
      </p>
      <div className="flex flex-col gap-2">
        <div className="flex items-center gap-2">
          <label className="text-xs text-outline w-12 flex-shrink-0">
            {t('fields.name')}
          </label>
          <input
            type="text"
            value={pendingContact.name}
            onChange={(e) => updatePendingContact('name', e.target.value)}
            className="flex-1 text-sm text-on-surface bg-white rounded-lg border border-outline-variant/30 px-3 py-1.5 focus:outline-none focus:ring-1 focus:ring-primary/40"
          />
        </div>
        <div className="flex items-center gap-2">
          <label className="text-xs text-outline w-12 flex-shrink-0">
            {t('fields.email')}
          </label>
          <input
            type="email"
            value={pendingContact.email}
            onChange={(e) => updatePendingContact('email', e.target.value)}
            className="flex-1 text-sm text-on-surface bg-white rounded-lg border border-outline-variant/30 px-3 py-1.5 focus:outline-none focus:ring-1 focus:ring-primary/40"
          />
        </div>
      </div>
      <button
        type="button"
        onClick={confirmContact}
        className="mt-3 w-full py-2 rounded-lg text-xs font-medium text-white transition-all hover:-translate-y-px active:translate-y-0"
        style={{ background: 'linear-gradient(135deg, #006494, #5BA4D9)' }}
      >
        {t('voice.contactConfirmButton')}
      </button>
    </motion.div>
  )}
</AnimatePresence>
  • Step 7: Add reconnect button to controls

Add a new status branch in the controls section, after the connecting block:

{(status === 'error' || canReconnect) && !completedBrief && (
  <div className="flex flex-col items-center gap-2">
    <p className="text-xs text-outline text-center">
      {t('voice.connectionLost')}
    </p>
    <button
      type="button"
      onClick={reconnect}
      className="flex items-center gap-2 px-5 py-2.5 rounded-xl text-sm font-medium text-white transition-all hover:-translate-y-px active:translate-y-0"
      style={{ background: 'linear-gradient(135deg, #006494, #5BA4D9)' }}
    >
      {t('voice.reconnect')}
    </button>
  </div>
)}

Also guard the existing errorMessage display so it doesn't show when canReconnect is true (since the reconnect UI replaces it):

{errorMessage && !canReconnect && (
  <p className="text-xs text-red-600 text-center max-w-xs">{errorMessage}</p>
)}
- [ ] Step 8: Make controls sticky on mobile

Wrap the controls div with a sticky bottom class for mobile:

{/* Controls */}
<div className="flex items-center justify-center gap-3 pt-2 sm:pt-2 sticky bottom-0 bg-surface-high/95 backdrop-blur-sm pb-2 -mx-6 px-6 sm:static sm:bg-transparent sm:backdrop-blur-none sm:pb-0 sm:mx-0">

This makes the mic/end-call buttons stick to the bottom of the viewport on mobile for thumb reach, while remaining static on desktop.

- [ ] Step 9: Add a brief-generating state to the orb area

When isGeneratingBrief is true after the conversation has ended (status === 'idle'), show a generating indicator in the orb area. Add a new orb state alongside the existing ones:

{status === 'idle' && isGeneratingBrief && (
  <motion.div animate={{ rotate: 360 }} transition={{ duration: 1.5, repeat: Infinity, ease: 'linear' }}>
    <Loader2 size={32} strokeWidth={1.5} className="text-primary" />
  </motion.div>
)}

And update the orb background for this state:

className={cn(
  'w-24 h-24 rounded-full flex items-center justify-center transition-colors duration-300',
  status === 'active'
    ? 'bg-gradient-to-br from-primary to-primary-dark'
    : status === 'connecting' || (status === 'idle' && isGeneratingBrief)
      ? 'bg-primary/20'
      : 'bg-surface-low border-2 border-outline-variant/30',
)}
- [ ] Step 10: Verify TypeScript compiles

Run: `npx tsc --noEmit 2>&1 | grep -i "VoiceAgent"`. Expected: no output.

- [ ] Step 11: Commit
git add src/components/configurator/VoiceAgent.tsx
git commit -m "feat: rebuild voice agent UI — larger layout, contact card, reconnect, no chips"

## Task 7: Create Discovery Section Component

Files:

- Create: `src/components/sections/Discovery.tsx`

- [ ] Step 1: Create the Discovery section

Create src/components/sections/Discovery.tsx:

'use client';

import { useState, useEffect, useRef } from 'react';
import { useLocale, useTranslations } from 'next-intl';
import { motion, AnimatePresence } from 'framer-motion';
import { MessageCircle } from 'lucide-react';
import { revealVariants, staggerContainer, viewportOnce } from '@/lib/animations';
import VoiceAgentProvider from '@/components/configurator/VoiceAgentProvider';
import VoiceAgent from '@/components/configurator/VoiceAgent';
import StepComplete from '@/components/configurator/StepComplete';
import type { WizardFormData } from '@/components/configurator/WizardContainer';

export default function Discovery() {
  const t = useTranslations('discovery');
  const locale = useLocale();
  const [isOpen, setIsOpen] = useState(false);
  const [completed, setCompleted] = useState<{ brief: string; formData: WizardFormData } | null>(null);
  const panelRef = useRef<HTMLDivElement>(null);
  const [voiceSupported, setVoiceSupported] = useState(false);

  // Check if voice is available (same logic as old ModeToggle)
  useEffect(() => {
    async function check() {
      if (typeof WebSocket === 'undefined') return;
      if (!navigator.mediaDevices?.getUserMedia) return;
      try {
        const res = await fetch('/api/gemini-token');
        const data = (await res.json()) as { success: boolean };
        if (data.success) setVoiceSupported(true);
      } catch {
        // silent — section stays hidden
      }
    }
    void check();
  }, []);

  const handleOpen = () => {
    setIsOpen(true);
    // Scroll to panel after it renders
    requestAnimationFrame(() => {
      panelRef.current?.scrollIntoView({ behavior: 'smooth', block: 'start' });
    });
  };

  const handleComplete = (brief: string, formData: WizardFormData) => {
    setCompleted({ brief, formData });
  };

  const handleReset = () => {
    setCompleted(null);
    setIsOpen(false);
  };

  if (!voiceSupported) return null;

  return (
    <section id="discover" className="relative bg-surface-high py-24 overflow-hidden">
      {/* Top accent line */}
      <div
        className="absolute top-0 left-0 right-0 h-px pointer-events-none"
        style={{
          background: 'linear-gradient(90deg, transparent 10%, rgba(91,164,217,0.15) 50%, transparent 90%)',
        }}
        aria-hidden="true"
      />

      <div className="relative z-10 container mx-auto px-6">
        <AnimatePresence mode="wait">
          {completed ? (
            <motion.div
              key="completed"
              initial={{ opacity: 0, y: 20 }}
              animate={{ opacity: 1, y: 0 }}
              transition={{ duration: 0.5, ease: [0.16, 1, 0.3, 1] }}
              className="max-w-2xl mx-auto"
            >
              <StepComplete
                formData={completed.formData}
                brief={completed.brief}
                onReset={handleReset}
              />
            </motion.div>
          ) : (
            <motion.div
              key="discovery"
              variants={staggerContainer}
              initial="hidden"
              whileInView="visible"
              viewport={viewportOnce}
              className="flex flex-col items-center text-center"
            >
              <motion.span
                variants={revealVariants}
                className="label-md text-primary"
              >
                {t('eyebrow')}
              </motion.span>

              <motion.h2
                variants={revealVariants}
                className="font-serif text-4xl font-semibold tracking-headline text-on-surface leading-tight md:text-5xl mt-4 max-w-lg"
              >
                {t('title')}
              </motion.h2>

              <motion.p
                variants={revealVariants}
                className="text-base text-outline leading-relaxed max-w-md mt-4"
              >
                {t('description')}
              </motion.p>

              {!isOpen && (
                <motion.div variants={revealVariants} className="mt-8">
                  <button
                    type="button"
                    onClick={handleOpen}
                    className="flex items-center gap-2.5 px-7 py-3.5 rounded-xl text-sm font-medium text-white transition-all hover:-translate-y-px active:translate-y-0 shadow-lg shadow-primary/20"
                    style={{ background: 'linear-gradient(135deg, #006494, #5BA4D9)' }}
                  >
                    <MessageCircle size={16} />
                    {t('cta')}
                  </button>
                  <p className="text-[11px] text-outline/60 mt-3">{t('privacy')}</p>
                </motion.div>
              )}

              {/* Voice panel */}
              <AnimatePresence>
                {isOpen && (
                  <motion.div
                    ref={panelRef}
                    initial={{ opacity: 0, height: 0 }}
                    animate={{ opacity: 1, height: 'auto' }}
                    exit={{ opacity: 0, height: 0 }}
                    transition={{ duration: 0.5, ease: [0.16, 1, 0.3, 1] }}
                    className="w-full max-w-xl mt-10 overflow-hidden"
                  >
                    <div className="relative rounded-2xl bg-surface-high shadow-[0_20px_50px_rgba(25,28,29,0.08)] p-6 sm:p-8 border border-outline-variant/20">
                      {/* Top accent line */}
                      <div
                        className="absolute top-0 left-6 right-6 h-[2px] rounded-full pointer-events-none"
                        style={{
                          background: 'linear-gradient(90deg, #006494, #5BA4D9, transparent)',
                        }}
                        aria-hidden="true"
                      />

                      <VoiceAgentProvider locale={locale}>
                        <VoiceAgent locale={locale} onComplete={handleComplete} />
                      </VoiceAgentProvider>
                    </div>
                  </motion.div>
                )}
              </AnimatePresence>
            </motion.div>
          )}
        </AnimatePresence>
      </div>
    </section>
  );
}
- [ ] Step 2: Verify TypeScript compiles

Run: `npx tsc --noEmit 2>&1 | grep -i "Discovery"`. Expected: no output.

- [ ] Step 3: Commit
git add src/components/sections/Discovery.tsx
git commit -m "feat: create Discovery section component with voice panel"

## Task 8: Wire Discovery Section into Landing Page

Files:

- Modify: `src/app/(frontend)/[locale]/page.tsx`

- [ ] Step 1: Add import and section

Add the import:

import Discovery from '@/components/sections/Discovery'

Add <Discovery /> to the page layout, after <Process /> and before <SelectedWorks />. The section order becomes:

<main>
  <Hero />
  <TrustBar />
  <ServicesOverview />
  <Configurator />
  <Process />
  <Discovery />
  <SelectedWorks />
  <Philosophy />
  <CTABanner />
</main>

This places it after "How We Work": by this point the user has seen the services and the process, so this is the natural "I'm interested but not sure" moment.

- [ ] Step 2: Verify TypeScript compiles

Run: `npx tsc --noEmit 2>&1 | grep -i "page.tsx"`. Expected: no output.

- [ ] Step 3: Commit
git add src/app/(frontend)/[locale]/page.tsx
git commit -m "feat: add Discovery section to landing page after Process"

## Task 9: Verify Email Template with Longer Briefs

Files:

- Review: `src/lib/email.ts`

- [ ] Step 1: Review convertBriefToHtml for long content

Read src/lib/email.ts lines 35-85. The converter handles:

- `---` horizontal rules
- Empty lines as spacers
- Bold inline text
- Numbered lists
- Section headings (full-bold lines)
- Regular paragraphs

This should handle longer narrative briefs fine — it's line-based and doesn't truncate. No changes needed unless visual testing reveals issues.
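Those line-based rules can be sketched as a transform like the following. This is illustrative only — the function name, regexes, and exact markup are assumptions, not the actual implementation in src/lib/email.ts:

```typescript
// Illustrative line-based brief-to-HTML converter (not the real convertBriefToHtml).
function briefToHtml(brief: string): string {
  const escape = (s: string) =>
    s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
  const bold = (s: string) => s.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>');

  return brief
    .split('\n')
    .map((raw) => {
      const line = raw.trim();
      if (line === '---') return '<hr />'; // horizontal rule
      if (line === '') return '<br />'; // empty line as spacer
      if (/^\d+\.\s/.test(line)) {
        // numbered list item
        return `<p style="margin:2px 0 2px 16px">${bold(escape(line))}</p>`;
      }
      if (/^\*\*[^*]+\*\*$/.test(line)) {
        // full-bold line treated as a section heading
        return `<h3>${escape(line.slice(2, -2))}</h3>`;
      }
      // regular paragraph with inline bold
      return `<p>${bold(escape(line))}</p>`;
    })
    .join('\n');
}
```

Because each line maps independently to one HTML fragment, a longer brief simply produces more lines; nothing is buffered or truncated along the way.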

- [ ] Step 2: Visually verify by running the dev server

Run: npm run dev

Test the full flow:

  1. Navigate to the landing page
  2. Scroll to the Discovery section — verify copy and CTA appear
  3. Click "Let's Talk" — verify the voice panel expands smoothly
  4. Have a conversation — verify no selection chips appear, transcript autoscrolls
  5. When agent asks for contact, verify the on-screen card appears with editable fields
  6. Confirm contact — verify the agent proceeds to generate brief
  7. Verify brief completion transitions to StepComplete view
  8. Check the email received for formatting with longer narrative content
- [ ] Step 3: Test reconnection

During an active conversation:

  1. Disconnect network briefly (or close WebSocket via dev tools)
  2. Verify transcript is preserved and "Reconnect" button appears
  3. Click reconnect — verify new session picks up with context
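The "picks up with context" behavior in step 3 relies on the provider replaying the preserved transcript to the new session. A minimal sketch of that idea, with hypothetical names (TranscriptEntry and buildReconnectContext are illustrative, not the actual VoiceAgentProvider API):

```typescript
// Illustrative sketch: turn a preserved transcript into a context preamble
// that can be sent to a freshly opened session after a reconnect.
// TranscriptEntry and buildReconnectContext are hypothetical names.
interface TranscriptEntry {
  role: 'user' | 'agent';
  text: string;
}

function buildReconnectContext(transcript: TranscriptEntry[]): string {
  if (transcript.length === 0) return '';
  const history = transcript
    .map((e) => `${e.role === 'user' ? 'User' : 'You'}: ${e.text}`)
    .join('\n');
  return [
    'The previous session dropped mid-conversation. Here is what was said so far:',
    history,
    'Continue naturally from this point; do not re-ask questions already answered.',
  ].join('\n\n');
}
```

When testing, confirm that after a reconnect the agent does not restart its greeting or repeat questions the user already answered — that is the observable sign the context preamble was applied.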
- [ ] Step 4: Final commit if any tweaks were needed
git add -A
git commit -m "polish: final adjustments from integration testing"

## Summary

| Task | Description | Key Files |
| --- | --- | --- |
| 1 | Remove ModeToggle from configurator | `WizardContainer.tsx`, `ModeToggle.tsx` (delete) |
| 2 | Add i18n translations | `en.json`, `fr.json` |
| 3 | Rewrite system prompt and tools | `gemini-live.ts` |
| 4 | Contact card state and tool handling | `VoiceAgentProvider.tsx` |
| 5 | Reconnection logic | `VoiceAgentProvider.tsx` |
| 6 | Rebuild voice agent UI | `VoiceAgent.tsx` |
| 7 | Create Discovery section | `Discovery.tsx` (new) |
| 8 | Wire into landing page | `page.tsx` |
| 9 | Verify email template and integration test | `email.ts`, dev server |