Compare commits

...

9 Commits

Author SHA1 Message Date
a174518496 feat: add Discovery section to landing page after Process
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:45:46 -04:00
896f0eb5f4 feat: create Discovery section component with voice panel
Standalone landing page section with warm copy, CTA, and expandable
inline voice conversation panel. Shows StepComplete on brief completion.
Only renders if voice support is detected.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:45:17 -04:00
3cdb95e488 feat: rebuild voice agent UI — larger layout, contact card, reconnect, no chips
- Larger orb (w-24), taller transcript (max-h-72), proper scrollIntoView
- On-screen contact confirmation card replaces verbal spell-back
- Reconnect button on connection loss
- Selection chips removed (structured data captured silently)
- Mobile-sticky controls for thumb reach

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:44:28 -04:00
cdb89553e0 feat: add contact card, deferred tool responses, and reconnection logic
- request_contact tool shows on-screen card for name/email verification
- Deferred tool responses let the UI wait for user confirmation
- WebSocket close preserves transcript and enables reconnection
- Reconnect seeds new Gemini session with prior conversation context

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:43:27 -04:00
28d063e251 feat: rewrite voice agent to consultative tone, add request_contact tool
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:41:07 -04:00
94a5876e7d i18n: add discovery section translations, update voice strings
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:39:42 -04:00
bcc24d0f40 refactor: remove ModeToggle from configurator, make it typed-form-only
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 14:38:43 -04:00
a5570a90b2 docs: add voice discovery pivot implementation plan
9-task plan covering: ModeToggle removal, i18n, system prompt rewrite,
contact card tool, reconnection logic, UI rebuild, Discovery section,
landing page integration, and email template verification.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-06 12:47:52 -04:00
81675335ad docs: add voice discovery mode design spec
Captures the pivot from form-filling voice mode to a standalone
consultative discovery experience with separate entry point, rewritten
system prompt, on-screen contact verification, and reconnection handling.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-01 13:46:47 -04:00
11 changed files with 1893 additions and 249 deletions

File diff suppressed because it is too large.


@@ -0,0 +1,144 @@
# Voice Discovery Mode — Design Spec
## Overview
Pivot the voice mode from a "faster way to fill out the configurator" into a standalone consultative discovery experience. Exploratory users — people who don't yet know exactly what they need — get a warm, conversational entry point separate from the typed configurator. The conversation is free-flowing and consultant-like, structured data is captured silently, and the user receives a personalized brief at the end.
## Entry Point & Framing
### Placement
A new standalone section on the landing page, positioned after the services or process section — wherever the natural "I'm interested but not sure" moment occurs. Completely decoupled from the configurator.
### Copy Direction
- **Headline:** Warm and inviting — "Not sure where to start?" or "Still figuring out what you need?"
- **Subtext:** "Tell us what you're thinking and we'll figure it out together. You'll get a personalized brief at the end."
- **CTA button:** "Let's talk" — styled distinctly from the configurator's CTA.
- Both EN and FR translations required.
### Behavior
Clicking the CTA reveals the voice conversation panel inline on the page and scrolls to it. No route change, no modal. The panel expands in place, keeping the user grounded in the site context.
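A minimal sketch of that reveal-and-scroll behavior (component, state, and ref names here are illustrative assumptions, not the final API):

```tsx
'use client';
import { useRef, useState, type ReactNode } from 'react';

// Sketch only: the CTA reveals the panel in place, then scrolls to it once
// it has rendered. No route change, no modal.
export function DiscoveryCTA({ panel }: { panel: ReactNode }) {
  const [isOpen, setIsOpen] = useState(false);
  const panelRef = useRef<HTMLDivElement>(null);

  const handleOpen = () => {
    setIsOpen(true);
    // Wait one frame so the panel exists in the DOM before scrolling to it.
    requestAnimationFrame(() => {
      panelRef.current?.scrollIntoView({ behavior: 'smooth', block: 'start' });
    });
  };

  return (
    <div>
      {!isOpen && (
        <button type="button" onClick={handleOpen}>
          Let&apos;s talk
        </button>
      )}
      {isOpen && <div ref={panelRef}>{panel}</div>}
    </div>
  );
}
```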
### Configurator Changes
- Remove the `ModeToggle` component and `mode` state from `WizardContainer.tsx`.
- The configurator becomes typed-form-only — the "I know what I want" path.
- No other changes to the configurator itself.
## Voice Conversation UI
### Layout
A dedicated panel, roughly the same width as the configurator card but taller. Three zones stacked vertically:
1. **Agent header** — LetsBe branding mark, agent name, connection status dot. Similar to current but slightly more prominent.
2. **Orb + transcript area** — Orb is larger (24-28 units instead of 20). Live transcript below it with significantly more vertical space (`max-h-72` or similar instead of current `max-h-40`). Proper autoscroll using `scrollIntoView` on the bottom ref. **Selection chips are removed** — no visible evidence of structured data capture.
3. **Controls** — Mic toggle and end call button. Same as current, cleaner without chips.
### Mobile
Panel goes nearly full-width on small screens. Transcript takes most of the viewport height. Orb may scale down slightly. Controls stay fixed at the bottom for thumb reach.
### Contact Confirmation Card
When the agent captures name and email, a small inline card appears (above controls or below transcript) showing the captured values with inline edit affordance. The agent says "I've got your details on screen — look right?" User can tap to edit, then confirm. **This replaces the verbal spell-back entirely.**
Requires a new tool (e.g., `request_contact`) that the agent calls to surface the card, rather than collecting contact info verbally.
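A sketch of the deferred-response pattern this implies, assuming a `handleToolCall(name, args, callId)` shape; the sentinel value and ref names are illustrative:

```ts
// Sketch only: the handler defers its tool response until the user confirms
// the on-screen card. Ref shapes and the sentinel are assumptions.
const pendingContactRef = { current: null as { name: string; email: string } | null };
const pendingContactCallIdRef = { current: '' };
const DEFERRED = '__DEFERRED__';

function handleToolCall(name: string, args: Record<string, unknown>, callId: string): string {
  if (name === 'request_contact') {
    // Surface the card instead of answering the model immediately.
    pendingContactRef.current = { name: String(args.name ?? ''), email: String(args.email ?? '') };
    pendingContactCallIdRef.current = callId;
    return DEFERRED; // caller skips the toolResponse for this call
  }
  return JSON.stringify({ success: true });
}

// Later, when the user taps confirm, the held response is finally sent.
function confirmContact(ws: WebSocket) {
  const contact = pendingContactRef.current;
  if (!contact || ws.readyState !== WebSocket.OPEN) return;
  ws.send(
    JSON.stringify({
      toolResponse: {
        functionResponses: [
          {
            id: pendingContactCallIdRef.current,
            name: 'request_contact',
            response: { result: JSON.stringify({ confirmed: true, ...contact }) },
          },
        ],
      },
    }),
  );
}
```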
### During Brief Generation
After contact confirmation and `complete_brief` trigger:
- Connection is closed (already fixed).
- Panel transitions to a generating state — orb morphs to loader or StepGenerating-style progress indicators.
- Transcript remains visible so the conversation doesn't vanish.
### On Completion
Transitions to the same `StepComplete` view (brief preview + book a call CTA). The brief content will be richer due to deeper conversation, but presentation is the same.
## System Prompt & Agent Behavior
### Tone
The agent is a conversational consultant, not an interviewer with a checklist. No numbered topic list to work through. The prompt gives the agent a goal: "understand what this person needs deeply enough to write a compelling brief."
### Behavioral Guidelines
- **Follow the user's thread.** If they talk about a frustration, dig into it. Don't redirect to the next "topic."
- **One question at a time.** This stays — it works.
- **Offer perspective, not just questions.** "That sounds like it might be more of a systems problem than a website problem." The agent has opinions, not just a clipboard.
- **Reference LetsBe naturally.** "We've done something similar for a hospitality client" — not a feature list.
- **2-3 sentences per response.** Prevents monologuing.
### Structured Data Capture
`update_selections` tool stays. The agent is never instructed to "cover these topics." It maps what it hears to predefined values silently. If the conversation never touches timeline, that field stays empty — that's fine.
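For reference, the silently captured selections map to a shape roughly like this (a sketch based on the existing selections state and the predefined values referenced elsewhere in this plan; the exact interface in the code may differ):

```ts
// Sketch of the silently captured selections. Field names follow the existing
// selections state; the literal unions mirror the predefined values.
interface VoiceSelections {
  services?: Array<'web' | 'systems' | 'infrastructure'>;
  aiEnabled?: boolean;
  aiTypes?: Array<'teammate' | 'customer-facing' | 'data-intelligence' | 'notsure'>;
  industry?: 'maritime' | 'hospitality' | 'technology' | 'realestate' | 'finance' | 'ngo' | 'other';
  timeline?: 'asap' | '1-3months' | '3-6months' | 'exploring';
}
```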
### Brief Generation
`conversationSummary` is the **primary payload**. The prompt instructs the agent to include everything discussed: pain points, current tools, what they want to keep vs change, business context, decision-makers, what success looks like. Structured fields (`services`, `industry`, `timeline`) are metadata that helps organize the brief, not the substance.
### Brief Content Philosophy
The brief should be **diagnostic, not prescriptive:**
- **Deep on their world** — pain points, current tools, what's broken, customers, what success looks like.
- **Deep on what matters** — priorities and trade-offs surfaced in conversation.
- **LetsBe's perspective** — a few sentences of informed opinion on what the real problem is.
- **High-level on implementation** — no stack recommendations, no architecture, no specific deliverables.
- **No timeline/cost** — "that's what the call is for."
The brief should make the user feel understood and make the follow-up call feel like a warm continuation, not a cold intro.
### Contact Collection
The agent asks for name and email when the conversation reaches a natural conclusion — "I think I've got a great picture of what you need. Let me put a brief together — what's your name and email?" No forced timing. The `request_contact` tool surfaces the on-screen card for verification.
### Language
Both EN and FR system prompts, same as now.
## Reconnection Handling
Exploratory conversations run longer than form-filling. If the WebSocket drops mid-conversation:
- Preserve the transcript on disconnect.
- Show a "reconnect" option instead of just an error.
- On reconnect, seed the new Gemini session with the transcript so far (as context in the system prompt or initial message) so the agent can pick up where it left off; see the sketch after this list.
- The structured selections captured so far are preserved in state.
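A minimal sketch of the re-seeding step, assuming transcript entries carry a `role` and `text`; the message framing is illustrative:

```ts
// Sketch: turn the preserved transcript into a context message for the new session.
interface TranscriptEntry {
  role: 'user' | 'agent';
  text: string;
}

function buildReconnectSeed(prior: TranscriptEntry[]): string {
  const summary = prior
    .map((e) => `${e.role === 'user' ? 'User' : 'Agent'}: ${e.text}`)
    .join('\n');
  return (
    'We were having a conversation but got disconnected. ' +
    `Here is what was discussed so far:\n\n${summary}\n\n` +
    'Please acknowledge the reconnection briefly and continue where we left off.'
  );
}

// Once the new session reports setup complete (assumed flow):
// ws.send(JSON.stringify({ realtimeInput: { text: buildReconnectSeed(savedTranscript) } }));
```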
## Technical Changes
### Files to Modify
- **`VoiceAgentProvider.tsx`** — Refactor `handleToolCall` so `conversationSummary` is the primary brief input. Add state for contact confirmation card (name + email captured, pending user confirm). Add reconnection logic (preserve transcript, re-seed on reconnect). Connection teardown on brief completion already fixed.
- **`VoiceAgent.tsx`** — New layout: larger orb, bigger transcript area, no selection chips. Add contact confirmation card component (inline editable name + email). Fix autoscroll with `scrollIntoView`. Guard controls for brief-complete state (already done). Mobile-responsive layout.
- **`gemini-live.ts`** — Rewrite `buildSystemPrompt()` for both locales with consultative tone. Adjust `complete_brief` tool description to emphasize `conversationSummary`. Add `request_contact` tool declaration that surfaces the on-screen card.
- **`WizardContainer.tsx`** — Remove `ModeToggle` component import, `mode` state, and the voice mode rendering branch. Remove `handleVoiceComplete` and `VoiceAgentProvider` wrapper (these move to the new section).
- **`ModeToggle.tsx`** — Delete entirely.
- **New: Discovery section component** — New section component for the landing page with warm copy, CTA, and expandable voice panel. This is where `VoiceAgentProvider` and `VoiceAgent` now live.
- **Landing page** — Add the new discovery section at the appropriate position.
- **i18n message files** (`en.json`, `fr.json`) — Add translations for discovery section copy. Update voice-related strings as needed.
- **Email template** — Verify the brief email template handles longer, more narrative content gracefully. Adjust if needed.
### What Stays the Same
- WebSocket connection to Gemini Live API
- Audio worklet recording + playback pipeline
- `update_selections` tool (used silently now)
- `/api/configure` route and brief generation logic
- `/api/gemini-token` route
- `StepComplete` component
- `analyze_website` tool (still useful when someone mentions their current site)
- The typed configurator (minus the mode toggle)


@@ -6,6 +6,7 @@ import Process from '@/components/sections/Process'
import SelectedWorks from '@/components/sections/SelectedWorks'
import Philosophy from '@/components/sections/Philosophy'
import Configurator from '@/components/sections/Configurator'
import Discovery from '@/components/sections/Discovery'
import CTABanner from '@/components/sections/CTABanner'
type Props = {
@@ -23,6 +24,7 @@ export default async function HomePage({ params }: Props) {
<ServicesOverview />
<Configurator />
<Process />
<Discovery />
<SelectedWorks />
<Philosophy />
<CTABanner />


@@ -1,69 +0,0 @@
'use client';
import { useState, useEffect } from 'react';
import { useTranslations } from 'next-intl';
import { Keyboard, Mic } from 'lucide-react';
import { cn } from '@/lib/utils';
// ─── Types ───────────────────────────────────────────────────────────────────
interface ModeToggleProps {
mode: 'type' | 'talk';
onChange: (mode: 'type' | 'talk') => void;
}
// ─── Component ───────────────────────────────────────────────────────────────
export default function ModeToggle({ mode, onChange }: ModeToggleProps) {
const t = useTranslations('configurator');
const [voiceSupported, setVoiceSupported] = useState(false);
useEffect(() => {
async function check() {
if (typeof WebSocket === 'undefined') return;
if (!navigator.mediaDevices?.getUserMedia) return;
try {
const res = await fetch('/api/gemini-token');
const data = (await res.json()) as { success: boolean };
if (data.success) setVoiceSupported(true);
} catch {
// silent — toggle stays hidden
}
}
void check();
}, []);
if (!voiceSupported) return null;
return (
<div className="flex items-center gap-1 rounded-xl bg-surface-low p-1 border border-outline-variant/30">
<button
type="button"
onClick={() => onChange('type')}
className={cn(
'flex items-center gap-1.5 px-3 py-1.5 rounded-lg text-xs font-medium transition-all duration-200',
mode === 'type'
? 'bg-white text-on-surface shadow-card'
: 'text-outline hover:text-on-surface',
)}
>
<Keyboard size={13} />
{t('mode.type')}
</button>
<button
type="button"
onClick={() => onChange('talk')}
className={cn(
'flex items-center gap-1.5 px-3 py-1.5 rounded-lg text-xs font-medium transition-all duration-200',
mode === 'talk'
? 'bg-white text-on-surface shadow-card'
: 'text-outline hover:text-on-surface',
)}
>
<Mic size={13} />
{t('mode.talk')}
</button>
</div>
);
}


@@ -5,7 +5,6 @@ import { useTranslations } from 'next-intl';
import { motion, AnimatePresence, useMotionValue, useTransform } from 'framer-motion';
import { Mic, MicOff, PhoneOff, Loader2 } from 'lucide-react';
import { cn } from '@/lib/utils';
import Chip from '@/components/ui/Chip';
import { useVoiceAgent, type TranscriptEntry } from './VoiceAgentProvider';
import type { WizardFormData } from './WizardContainer';
@@ -53,7 +52,6 @@ export default function VoiceAgent({ locale, onComplete }: VoiceAgentProps) {
isMicActive,
toggleMic,
transcript,
selections,
isAnalyzingSite,
isGeneratingBrief,
agentAmplitude,
@@ -61,16 +59,18 @@ export default function VoiceAgent({ locale, onComplete }: VoiceAgentProps) {
endConversation,
completedBrief,
completedFormData,
pendingContact,
confirmContact,
updatePendingContact,
canReconnect,
reconnect,
} = useVoiceAgent();
const transcriptEndRef = useRef<HTMLDivElement>(null);
// Auto-scroll transcript within its container only
// Auto-scroll transcript
useEffect(() => {
const el = transcriptEndRef.current;
if (el?.parentElement) {
el.parentElement.scrollTop = el.parentElement.scrollHeight;
}
transcriptEndRef.current?.scrollIntoView({ behavior: 'smooth', block: 'end' });
}, [transcript]);
// Handle completion — end the call, then transition
@@ -98,32 +98,6 @@ export default function VoiceAgent({ locale, onComplete }: VoiceAgentProps) {
['0px 0px 0px rgba(0,100,148,0)', '0px 0px 30px rgba(0,100,148,0.3)'],
);
// Build selection chips — use i18n for known keys, raw value otherwise
const KNOWN_SERVICES = ['web', 'systems', 'infrastructure'];
const KNOWN_AI_TYPES = ['teammate', 'customer-facing', 'data-intelligence', 'notsure'];
const KNOWN_INDUSTRIES = ['maritime', 'hospitality', 'technology', 'realestate', 'finance', 'ngo', 'other'];
const KNOWN_TIMELINES = ['asap', '1-3months', '3-6months', 'exploring'];
const chipLabels: string[] = [];
if (selections.services) {
for (const svc of selections.services) {
chipLabels.push(KNOWN_SERVICES.includes(svc) ? t(`services.${svc}.title`) : svc);
}
}
if (selections.aiEnabled && selections.aiTypes) {
for (const ai of selections.aiTypes) {
chipLabels.push(KNOWN_AI_TYPES.includes(ai) ? t(`aiTypes.${ai}.title`) : ai);
}
}
if (selections.industry) {
const ind = selections.industry;
chipLabels.push(KNOWN_INDUSTRIES.includes(ind) ? t(`industries.${ind}`) : ind);
}
if (selections.timeline) {
const tl = selections.timeline;
chipLabels.push(KNOWN_TIMELINES.includes(tl) ? t(`timelines.${tl}`) : tl);
}
return (
<div className="flex flex-col gap-5">
{/* Agent card header */}
@@ -152,20 +126,20 @@ export default function VoiceAgent({ locale, onComplete }: VoiceAgentProps) {
<motion.div
style={{ scale: status === 'active' ? orbScale : 1, boxShadow: status === 'active' ? orbGlow : 'none' }}
className={cn(
'w-20 h-20 rounded-full flex items-center justify-center transition-colors duration-300',
'w-24 h-24 rounded-full flex items-center justify-center transition-colors duration-300',
status === 'active'
? 'bg-gradient-to-br from-primary to-primary-dark'
: status === 'connecting'
: status === 'connecting' || (status === 'idle' && isGeneratingBrief)
? 'bg-primary/20'
: 'bg-surface-low border-2 border-outline-variant/30',
)}
>
{status === 'idle' && (
<Mic size={28} strokeWidth={1.5} className="text-outline" />
{status === 'idle' && !isGeneratingBrief && (
<Mic size={32} strokeWidth={1.5} className="text-outline" />
)}
{status === 'connecting' && (
{(status === 'connecting' || (status === 'idle' && isGeneratingBrief)) && (
<motion.div animate={{ rotate: 360 }} transition={{ duration: 1.5, repeat: Infinity, ease: 'linear' }}>
<Loader2 size={28} strokeWidth={1.5} className="text-primary" />
<Loader2 size={32} strokeWidth={1.5} className="text-primary" />
</motion.div>
)}
{status === 'active' && (
@@ -173,7 +147,7 @@ export default function VoiceAgent({ locale, onComplete }: VoiceAgentProps) {
animate={{ scale: [1, 1.1, 1] }}
transition={{ duration: 2, repeat: Infinity, ease: 'easeInOut' }}
>
<Mic size={28} strokeWidth={1.5} className="text-white" />
<Mic size={32} strokeWidth={1.5} className="text-white" />
</motion.div>
)}
</motion.div>
@@ -203,20 +177,20 @@ export default function VoiceAgent({ locale, onComplete }: VoiceAgentProps) {
<motion.div animate={{ rotate: 360 }} transition={{ duration: 1, repeat: Infinity, ease: 'linear' }}>
<Loader2 size={11} />
</motion.div>
{locale === 'fr' ? 'Génération de votre brief...' : 'Generating your brief...'}
{t('voice.generatingBrief')}
</motion.div>
)}
</AnimatePresence>
{/* Error message */}
{errorMessage && (
{errorMessage && !canReconnect && (
<p className="text-xs text-red-600 text-center max-w-xs">{errorMessage}</p>
)}
</div>
{/* Live transcript */}
{transcript.length > 0 && (
<div className="rounded-xl border border-outline-variant/30 bg-surface-high p-3 max-h-40 overflow-y-auto scrollbar-thin">
<div className="rounded-xl border border-outline-variant/30 bg-surface-high p-3 max-h-72 overflow-y-auto scrollbar-thin">
<div className="flex flex-col gap-2">
{transcript.map((entry, i) => (
<TranscriptBubble key={`${entry.timestamp}-${i}`} entry={entry} />
@@ -226,37 +200,58 @@ export default function VoiceAgent({ locale, onComplete }: VoiceAgentProps) {
</div>
)}
{/* Selection chips */}
{/* Contact confirmation card */}
<AnimatePresence>
{chipLabels.length > 0 && (
{pendingContact && !completedBrief && (
<motion.div
initial={{ opacity: 0, height: 0 }}
animate={{ opacity: 1, height: 'auto' }}
exit={{ opacity: 0, height: 0 }}
className="overflow-hidden"
initial={{ opacity: 0, y: 8 }}
animate={{ opacity: 1, y: 0 }}
exit={{ opacity: 0, y: -8 }}
transition={{ duration: 0.3, ease: [0.16, 1, 0.3, 1] }}
className="rounded-xl border border-primary/20 bg-primary/5 p-4"
>
<p className="text-xs font-semibold uppercase tracking-label text-outline mb-2">
{t('voice.capturedSoFar')}
<p className="text-xs font-semibold uppercase tracking-label text-outline mb-3">
{t('voice.contactConfirm')}
</p>
<div className="flex flex-wrap gap-1.5">
{chipLabels.map((label, i) => (
<motion.div
key={label}
initial={{ opacity: 0, scale: 0.8 }}
animate={{ opacity: 1, scale: 1 }}
transition={{ delay: i * 0.05, duration: 0.2 }}
>
<Chip active>{label}</Chip>
</motion.div>
))}
<div className="flex flex-col gap-2">
<div className="flex items-center gap-2">
<label className="text-xs text-outline w-12 flex-shrink-0">
{t('fields.name')}
</label>
<input
type="text"
value={pendingContact.name}
onChange={(e) => updatePendingContact('name', e.target.value)}
className="flex-1 text-sm text-on-surface bg-white rounded-lg border border-outline-variant/30 px-3 py-1.5 focus:outline-none focus:ring-1 focus:ring-primary/40"
/>
</div>
<div className="flex items-center gap-2">
<label className="text-xs text-outline w-12 flex-shrink-0">
{t('fields.email')}
</label>
<input
type="email"
value={pendingContact.email}
onChange={(e) => updatePendingContact('email', e.target.value)}
className="flex-1 text-sm text-on-surface bg-white rounded-lg border border-outline-variant/30 px-3 py-1.5 focus:outline-none focus:ring-1 focus:ring-primary/40"
/>
</div>
</div>
<button
type="button"
onClick={confirmContact}
className="mt-3 w-full py-2 rounded-lg text-xs font-medium text-white transition-all hover:-translate-y-px active:translate-y-0"
style={{ background: 'linear-gradient(135deg, #006494, #5BA4D9)' }}
>
{t('voice.contactConfirmButton')}
</button>
</motion.div>
)}
</AnimatePresence>
{/* Controls */}
<div className="flex items-center justify-center gap-3 pt-2">
{status === 'idle' && !completedBrief && (
{/* Controls — sticky on mobile for thumb reach */}
<div className="flex items-center justify-center gap-3 pt-2 sticky bottom-0 bg-surface-high/95 backdrop-blur-sm pb-2 -mx-6 px-6 sm:static sm:bg-transparent sm:backdrop-blur-none sm:pb-0 sm:mx-0">
{status === 'idle' && !completedBrief && !isGeneratingBrief && (
<button
type="button"
onClick={startConversation}
@@ -296,6 +291,22 @@ export default function VoiceAgent({ locale, onComplete }: VoiceAgentProps) {
{status === 'connecting' && (
<p className="text-sm text-outline animate-pulse">{t('voice.connecting')}</p>
)}
{(status === 'error' || canReconnect) && !completedBrief && (
<div className="flex flex-col items-center gap-2">
<p className="text-xs text-outline text-center">
{t('voice.connectionLost')}
</p>
<button
type="button"
onClick={reconnect}
className="flex items-center gap-2 px-5 py-2.5 rounded-xl text-sm font-medium text-white transition-all hover:-translate-y-px active:translate-y-0"
style={{ background: 'linear-gradient(135deg, #006494, #5BA4D9)' }}
>
{t('voice.reconnect')}
</button>
</div>
)}
</div>
</div>
);


@@ -1,6 +1,6 @@
'use client';
import { createContext, useContext, useState, useRef, useCallback, type ReactNode } from 'react';
import { createContext, useContext, useState, useRef, useCallback, useEffect, type ReactNode } from 'react';
import type { WizardFormData } from './WizardContainer';
// ─── Types ───────────────────────────────────────────────────────────────────
@@ -13,6 +13,11 @@ export interface TranscriptEntry {
type ConnectionStatus = 'idle' | 'connecting' | 'active' | 'ending' | 'error';
export interface PendingContact {
name: string;
email: string;
}
interface VoiceAgentContextValue {
status: ConnectionStatus;
errorMessage: string | null;
@@ -28,6 +33,11 @@ interface VoiceAgentContextValue {
endConversation: () => void;
completedBrief: string | null;
completedFormData: WizardFormData | null;
pendingContact: PendingContact | null;
confirmContact: () => void;
updatePendingContact: (field: 'name' | 'email', value: string) => void;
canReconnect: boolean;
reconnect: () => Promise<void>;
}
// ─── Context ─────────────────────────────────────────────────────────────────
@@ -133,9 +143,15 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
const [agentAmplitude, setAgentAmplitude] = useState(0);
const [completedBrief, setCompletedBrief] = useState<string | null>(null);
const [completedFormData, setCompletedFormData] = useState<WizardFormData | null>(null);
const [pendingContact, setPendingContact] = useState<PendingContact | null>(null);
const [canReconnect, setCanReconnect] = useState(false);
const turnCompleteRef = useRef(true);
const briefSubmittedRef = useRef(false);
const pendingContactRef = useRef<PendingContact | null>(null);
const pendingContactCallIdRef = useRef('');
const reconnectTranscriptRef = useRef<TranscriptEntry[]>([]);
const statusRef = useRef<ConnectionStatus>('idle');
const wsRef = useRef<WebSocket | null>(null);
const mediaStreamRef = useRef<MediaStream | null>(null);
const audioContextRef = useRef<AudioContext | null>(null);
@@ -144,6 +160,9 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
const analyserRef = useRef<AnalyserNode | null>(null);
const animFrameRef = useRef<number>(0);
// Keep statusRef in sync for use in closures
useEffect(() => { statusRef.current = status; }, [status]);
const addTranscript = useCallback((role: 'user' | 'agent', text: string) => {
setTranscript((prev) => {
const last = prev[prev.length - 1];
@@ -193,6 +212,16 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
}
}
if (name === 'request_contact') {
const { name: contactName, email: contactEmail } = args as { name: string; email: string };
const contact = { name: contactName, email: contactEmail };
setPendingContact(contact);
pendingContactRef.current = contact;
pendingContactCallIdRef.current = callId;
// Don't return a tool response yet — wait for user confirmation via confirmContact()
return '__DEFERRED__';
}
if (name === 'complete_brief') {
// Prevent duplicate submissions
if (briefSubmittedRef.current) return JSON.stringify({ success: true, message: 'Brief already submitted' });
@@ -204,7 +233,10 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
const summary = toolArgs.conversationSummary ?? '';
const existingScope = toolArgs.scope ?? '';
const combinedScope = [existingScope, summary].filter(Boolean).join('\n\n');
const formData = { ...DEFAULT_FORM_DATA, ...toolArgs, scope: combinedScope, locale };
// Use confirmed contact details from the on-screen card if available
const contactName = pendingContactRef.current?.name ?? toolArgs.name ?? '';
const contactEmail = pendingContactRef.current?.email ?? toolArgs.email ?? '';
const formData = { ...DEFAULT_FORM_DATA, ...toolArgs, name: contactName, email: contactEmail, scope: combinedScope, locale };
delete (formData as Record<string, unknown>).conversationSummary;
const res = await fetch('/api/configure', {
method: 'POST',
@@ -254,8 +286,15 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
const startConversation = useCallback(async () => {
setStatus('connecting');
setErrorMessage(null);
setTranscript([]);
setSelections({});
setCanReconnect(false);
// Only reset transcript/selections on fresh start (not reconnect)
if (reconnectTranscriptRef.current.length === 0) {
setTranscript([]);
setSelections({});
setPendingContact(null);
pendingContactRef.current = null;
pendingContactCallIdRef.current = '';
}
setCompletedBrief(null);
setCompletedFormData(null);
briefSubmittedRef.current = false;
@@ -364,12 +403,27 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
clearTimeout(setupTimeout);
setStatus('active');
trackAmplitude();
// Prompt the agent to introduce itself
ws.send(JSON.stringify({
realtimeInput: {
text: 'Hello, please introduce yourself.',
},
}));
// If reconnecting, seed with prior conversation context
const priorTranscript = reconnectTranscriptRef.current;
if (priorTranscript.length > 0) {
const summary = priorTranscript
.map((e) => `${e.role === 'user' ? 'User' : 'Agent'}: ${e.text}`)
.join('\n');
ws.send(JSON.stringify({
realtimeInput: {
text: `We were having a conversation but got disconnected. Here is what was discussed so far:\n\n${summary}\n\nPlease acknowledge the reconnection briefly and continue where we left off.`,
},
}));
reconnectTranscriptRef.current = [];
} else {
// Prompt the agent to introduce itself
ws.send(JSON.stringify({
realtimeInput: {
text: 'Hello, please introduce yourself.',
},
}));
}
return;
}
@@ -409,9 +463,13 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
const responses = [];
for (const call of calls) {
const result = await handleToolCall(call.name, call.args ?? {}, call.id);
responses.push({ id: call.id, name: call.name, response: { result } });
if (result !== '__DEFERRED__') {
responses.push({ id: call.id, name: call.name, response: { result } });
}
}
if (responses.length > 0) {
ws.send(JSON.stringify({ toolResponse: { functionResponses: responses } }));
}
ws.send(JSON.stringify({ toolResponse: { functionResponses: responses } }));
}
}
};
@@ -424,8 +482,28 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
ws.onclose = (e) => {
console.log('[VoiceAgent] WebSocket closed:', e.code, e.reason);
if (status === 'active') {
setStatus('idle');
// Clean up audio but preserve transcript and selections
cancelAnimationFrame(animFrameRef.current);
if (mediaStreamRef.current) {
mediaStreamRef.current.getTracks().forEach((track) => track.stop());
mediaStreamRef.current = null;
}
if (audioContextRef.current) {
void audioContextRef.current.close();
audioContextRef.current = null;
}
if (playbackContextRef.current) {
void playbackContextRef.current.close();
playbackContextRef.current = null;
}
wsRef.current = null;
setUserAmplitude(0);
setAgentAmplitude(0);
// If we weren't intentionally ending, allow reconnect
if (statusRef.current !== 'ending' && !briefSubmittedRef.current) {
setStatus('error');
setErrorMessage(null);
setCanReconnect(true);
}
};
} catch (error) {
@@ -438,7 +516,7 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
setErrorMessage(`Failed to start: ${msg}`);
}
}
}, [locale, trackAmplitude, handleToolCall, playAudioChunk, addTranscript, status]);
}, [locale, trackAmplitude, handleToolCall, playAudioChunk, addTranscript]);
const endConversation = useCallback(() => {
setStatus('ending');
@@ -463,9 +541,46 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
setUserAmplitude(0);
setAgentAmplitude(0);
setCanReconnect(false);
reconnectTranscriptRef.current = [];
pendingContactCallIdRef.current = '';
setStatus('idle');
}, []);
const updatePendingContact = useCallback((field: 'name' | 'email', value: string) => {
setPendingContact((prev) => {
if (!prev) return null;
const updated = { ...prev, [field]: value };
pendingContactRef.current = updated;
return updated;
});
}, []);
const confirmContact = useCallback(() => {
if (!pendingContactRef.current) return;
// Send confirmation back through WebSocket so the agent knows
if (wsRef.current?.readyState === WebSocket.OPEN) {
wsRef.current.send(JSON.stringify({
toolResponse: {
functionResponses: [{
id: pendingContactCallIdRef.current,
name: 'request_contact',
response: { result: JSON.stringify({ confirmed: true, name: pendingContactRef.current.name, email: pendingContactRef.current.email }) },
}],
},
}));
}
pendingContactCallIdRef.current = '';
}, []);
const reconnect = useCallback(async () => {
setCanReconnect(false);
setErrorMessage(null);
// Preserve transcript for the new session to pick up context
reconnectTranscriptRef.current = transcript;
await startConversation();
}, [startConversation, transcript]);
const toggleMic = useCallback(() => {
if (!mediaStreamRef.current) return;
const track = mediaStreamRef.current.getAudioTracks()[0];
@@ -490,6 +605,11 @@ export default function VoiceAgentProvider({ locale, children }: VoiceAgentProvi
endConversation,
completedBrief,
completedFormData,
pendingContact,
confirmContact,
updatePendingContact,
canReconnect,
reconnect,
};
return (


@@ -8,9 +8,6 @@ import StepDetails from './StepDetails';
import StepContact from './StepContact';
import StepGenerating from './StepGenerating';
import StepComplete from './StepComplete';
import ModeToggle from './ModeToggle';
import VoiceAgent from './VoiceAgent';
import VoiceAgentProvider from './VoiceAgentProvider';
// ─── Types ────────────────────────────────────────────────────────────────────
@@ -90,7 +87,6 @@ export default function WizardContainer() {
const [isSubmitting, setIsSubmitting] = useState(false);
const [isGenerating, setIsGenerating] = useState(false);
const [submitError, setSubmitError] = useState<string | null>(null);
const [mode, setMode] = useState<'type' | 'talk'>('type');
const goNext = () => {
setDirection(1);
@@ -108,7 +104,6 @@ export default function WizardContainer() {
setBrief('');
setSubmitError(null);
setIsGenerating(false);
setMode('type');
setCurrentStep(1);
};
@@ -144,13 +139,6 @@ export default function WizardContainer() {
}
};
const handleVoiceComplete = (voiceBrief: string, voiceFormData: WizardFormData) => {
setFormData(voiceFormData);
setBrief(voiceBrief);
setDirection(1);
setCurrentStep(4);
};
const stepVariants = makeVariants(direction);
const sharedProps: StepProps = {
@@ -162,28 +150,8 @@ export default function WizardContainer() {
return (
<div className="relative overflow-hidden">
{!isGenerating && currentStep !== 4 && (
<div className="flex justify-center mb-4">
<ModeToggle mode={mode} onChange={setMode} />
</div>
)}
<AnimatePresence mode="wait" initial={false}>
{mode === 'talk' && !isGenerating && currentStep !== 4 && (
<motion.div
key="voice-mode"
variants={stepVariants}
initial="initial"
animate="animate"
exit="exit"
>
<VoiceAgentProvider locale={locale}>
<VoiceAgent locale={locale} onComplete={handleVoiceComplete} />
</VoiceAgentProvider>
</motion.div>
)}
{mode === 'type' && !isGenerating && currentStep === 1 && (
{!isGenerating && currentStep === 1 && (
<motion.div
key="step-1"
variants={stepVariants}
@@ -195,7 +163,7 @@ export default function WizardContainer() {
</motion.div>
)}
{mode === 'type' && !isGenerating && currentStep === 2 && (
{!isGenerating && currentStep === 2 && (
<motion.div
key="step-2"
variants={stepVariants}
@@ -207,7 +175,7 @@ export default function WizardContainer() {
</motion.div>
)}
{mode === 'type' && !isGenerating && currentStep === 3 && (
{!isGenerating && currentStep === 3 && (
<motion.div
key="step-3"
variants={stepVariants}


@@ -0,0 +1,162 @@
'use client';
import { useState, useEffect, useRef } from 'react';
import { useLocale, useTranslations } from 'next-intl';
import { motion, AnimatePresence } from 'framer-motion';
import { MessageCircle } from 'lucide-react';
import { revealVariants, staggerContainer, viewportOnce } from '@/lib/animations';
import VoiceAgentProvider from '@/components/configurator/VoiceAgentProvider';
import VoiceAgent from '@/components/configurator/VoiceAgent';
import StepComplete from '@/components/configurator/StepComplete';
import type { WizardFormData } from '@/components/configurator/WizardContainer';
export default function Discovery() {
const t = useTranslations('discovery');
const locale = useLocale();
const [isOpen, setIsOpen] = useState(false);
const [completed, setCompleted] = useState<{ brief: string; formData: WizardFormData } | null>(null);
const panelRef = useRef<HTMLDivElement>(null);
const [voiceSupported, setVoiceSupported] = useState(false);
// Check if voice is available (same logic as old ModeToggle)
useEffect(() => {
async function check() {
if (typeof WebSocket === 'undefined') return;
if (!navigator.mediaDevices?.getUserMedia) return;
try {
const res = await fetch('/api/gemini-token');
const data = (await res.json()) as { success: boolean };
if (data.success) setVoiceSupported(true);
} catch {
// silent — section stays hidden
}
}
void check();
}, []);
const handleOpen = () => {
setIsOpen(true);
// Scroll to panel after it renders
requestAnimationFrame(() => {
panelRef.current?.scrollIntoView({ behavior: 'smooth', block: 'start' });
});
};
const handleComplete = (brief: string, formData: WizardFormData) => {
setCompleted({ brief, formData });
};
const handleReset = () => {
setCompleted(null);
setIsOpen(false);
};
if (!voiceSupported) return null;
return (
<section id="discover" className="relative bg-surface-high py-24 overflow-hidden">
{/* Top accent line */}
<div
className="absolute top-0 left-0 right-0 h-px pointer-events-none"
style={{
background: 'linear-gradient(90deg, transparent 10%, rgba(91,164,217,0.15) 50%, transparent 90%)',
}}
aria-hidden="true"
/>
<div className="relative z-10 container mx-auto px-6">
<AnimatePresence mode="wait">
{completed ? (
<motion.div
key="completed"
initial={{ opacity: 0, y: 20 }}
animate={{ opacity: 1, y: 0 }}
transition={{ duration: 0.5, ease: [0.16, 1, 0.3, 1] }}
className="max-w-2xl mx-auto"
>
<StepComplete
formData={completed.formData}
brief={completed.brief}
onReset={handleReset}
/>
</motion.div>
) : (
<motion.div
key="discovery"
variants={staggerContainer}
initial="hidden"
whileInView="visible"
viewport={viewportOnce}
className="flex flex-col items-center text-center"
>
<motion.span
variants={revealVariants}
className="label-md text-primary"
>
{t('eyebrow')}
</motion.span>
<motion.h2
variants={revealVariants}
className="font-serif text-4xl font-semibold tracking-headline text-on-surface leading-tight md:text-5xl mt-4 max-w-lg"
>
{t('title')}
</motion.h2>
<motion.p
variants={revealVariants}
className="text-base text-outline leading-relaxed max-w-md mt-4"
>
{t('description')}
</motion.p>
{!isOpen && (
<motion.div variants={revealVariants} className="mt-8">
<button
type="button"
onClick={handleOpen}
className="flex items-center gap-2.5 px-7 py-3.5 rounded-xl text-sm font-medium text-white transition-all hover:-translate-y-px active:translate-y-0 shadow-lg shadow-primary/20"
style={{ background: 'linear-gradient(135deg, #006494, #5BA4D9)' }}
>
<MessageCircle size={16} />
{t('cta')}
</button>
<p className="text-[11px] text-outline/60 mt-3">{t('privacy')}</p>
</motion.div>
)}
{/* Voice panel */}
<AnimatePresence>
{isOpen && (
<motion.div
ref={panelRef}
initial={{ opacity: 0, height: 0 }}
animate={{ opacity: 1, height: 'auto' }}
exit={{ opacity: 0, height: 0 }}
transition={{ duration: 0.5, ease: [0.16, 1, 0.3, 1] }}
className="w-full max-w-xl mt-10 overflow-hidden"
>
<div className="relative rounded-2xl bg-surface-high shadow-[0_20px_50px_rgba(25,28,29,0.08)] p-6 sm:p-8 border border-outline-variant/20">
{/* Top accent line */}
<div
className="absolute top-0 left-6 right-6 h-[2px] rounded-full pointer-events-none"
style={{
background: 'linear-gradient(90deg, #006494, #5BA4D9, transparent)',
}}
aria-hidden="true"
/>
<VoiceAgentProvider locale={locale}>
<VoiceAgent locale={locale} onComplete={handleComplete} />
</VoiceAgentProvider>
</div>
</motion.div>
)}
</AnimatePresence>
</motion.div>
)}
</AnimatePresence>
</div>
</section>
);
}


@@ -160,18 +160,20 @@
"runningAudit": "Running performance audit",
"generatingBrief": "Generating your personalized brief"
},
"mode": {
"type": "Type",
"talk": "Talk"
},
"voice": {
"agentName": "LetsBe project assistant",
"capturedSoFar": "Captured so far",
"endConversation": "End Conversation",
"analyzingSite": "Analyzing your site...",
"connecting": "Connecting...",
"mute": "Mute",
"unmute": "Unmute"
"unmute": "Unmute",
"generatingBrief": "Generating your brief...",
"contactConfirm": "Does this look right?",
"contactEdit": "Edit",
"contactConfirmButton": "That's correct",
"reconnect": "Reconnect",
"connectionLost": "Connection lost. Your conversation is saved.",
"briefComplete": "Brief complete"
},
"privacy": "Your information is private and will never be shared.",
"generateBrief": "Generate My Brief",
@@ -183,6 +185,13 @@
"network": "Network error. Please check your connection and try again."
}
},
"discovery": {
"eyebrow": "Let's Figure It Out",
"title": "Not sure where to start?",
"description": "Tell us what you're thinking and we'll figure it out together. You'll get a personalized brief at the end.",
"cta": "Let's Talk",
"privacy": "Voice conversations are not recorded or stored."
},
"process": {
"eyebrow": "How We Work",
"title": "From Idea to Launch",


@@ -160,18 +160,20 @@
"runningAudit": "Audit de performance en cours",
"generatingBrief": "Génération de votre brief personnalisé"
},
"mode": {
"type": "Écrire",
"talk": "Parler"
},
"voice": {
"agentName": "Assistant projet LetsBe",
"capturedSoFar": "Informations recueillies",
"endConversation": "Terminer la conversation",
"analyzingSite": "Analyse de votre site...",
"connecting": "Connexion en cours...",
"mute": "Couper le micro",
"unmute": "Activer le micro"
"unmute": "Activer le micro",
"generatingBrief": "Génération de votre brief...",
"contactConfirm": "Est-ce correct ?",
"contactEdit": "Modifier",
"contactConfirmButton": "C'est correct",
"reconnect": "Reconnecter",
"connectionLost": "Connexion perdue. Votre conversation est sauvegardée.",
"briefComplete": "Brief terminé"
},
"privacy": "Vos informations sont privées et ne seront jamais partagées.",
"generateBrief": "Générer Mon Brief",
@@ -183,6 +185,13 @@
"network": "Erreur réseau. Veuillez vérifier votre connexion et réessayer."
}
},
"discovery": {
"eyebrow": "Trouvons ensemble",
"title": "Vous ne savez pas par où commencer ?",
"description": "Dites-nous ce que vous avez en tête et on trouvera la solution ensemble. Vous recevrez un brief personnalisé à la fin.",
"cta": "Discutons",
"privacy": "Les conversations vocales ne sont ni enregistrées ni stockées."
},
"process": {
"eyebrow": "Notre Méthode",
"title": "De l'Idée au Lancement",


@@ -47,10 +47,23 @@ export const AGENT_TOOLS = [
required: ['url'],
},
},
{
name: 'request_contact',
description:
'Display a contact confirmation card on screen for the user to verify their name and email. Call this instead of spelling back their details verbally. The user will confirm or edit on screen.',
parameters: {
type: Type.OBJECT,
properties: {
name: { type: Type.STRING, description: 'The name the user provided' },
email: { type: Type.STRING, description: 'The email the user provided' },
},
required: ['name', 'email'],
},
},
{
name: 'complete_brief',
description:
'Generate and send the project brief. Call once all information is collected and the user has confirmed their name and email. Include a detailed conversationSummary capturing ALL key details discussed.',
'Generate and send the project brief. The conversationSummary is the most important field — it should capture the full richness of the conversation. Structured fields (services, industry, timeline) are supporting metadata. Only call after the user has confirmed their contact details on screen.',
parameters: {
type: Type.OBJECT,
properties: {
@@ -80,83 +93,87 @@ export function buildSystemPrompt(locale: string): string {
const isFr = locale === 'fr';
if (isFr) {
return `Tu es l'assistant de projets LetsBe, un consultant expérimenté et chaleureux pour LetsBe Solutions. Tu mènes des conversations de découverte qui révèlent les vrais besoins des clients. Toute la conversation se fait en français.
return `Tu es l'assistant de projets LetsBe, un consultant chaleureux et expérimenté pour LetsBe Solutions. Tu mènes de vraies conversations qui aident les gens à comprendre ce dont ils ont réellement besoin. Toute la conversation se fait en français.
Présente-toi ainsi : "Bonjour, je suis l'assistant de projets LetsBe. Parlez-moi de votre projet et je préparerai un brief personnalisé pour vous."
Présente-toi : "Bonjour, je suis l'assistant de projets LetsBe. Dites-moi ce que vous avez en tête et on trouvera ensemble la bonne approche."
Ton rôle est de guider une conversation consultative naturelle. Couvre ces sujets, mais va au-delà des réponses superficielles :
Ton objectif : comprendre les besoins de cette personne assez profondément pour rédiger un brief convaincant et personnalisé. Tu ne remplis pas un formulaire. Tu as une vraie conversation de consultant.
1. **Services recherchés** (web, logiciels sur mesure, infrastructure privée) — Demande ce qui a motivé ce besoin. C'est un remplacement ou un nouveau projet ?
2. **Intégration IA** — Si pertinent, explore le type. Ne force pas si ce n'est pas leur intérêt.
3. **Leur secteur** — Montre que tu comprends leur domaine. Demande qui sont leurs clients typiques.
4. **Points de friction actuels** — "Quelle est votre plus grande frustration dans la gestion de [X] aujourd'hui ?" ou "Quels outils votre équipe utilise-t-elle actuellement ?"
5. **Site web actuel** — S'ils en ont un, propose de l'analyser. Demande ce qu'ils aiment et n'aiment pas.
6. **Vision et objectifs** — "À quoi ressemblerait le succès 6 mois après le lancement ?" ou "Si vous pouviez changer une seule chose, ce serait quoi ?"
7. **Calendrier et contexte** — "Qu'est-ce qui motive ce calendrier ?" ou "Qui d'autre est impliqué dans cette décision ?"
8. **Nom et e-mail** — Épelle les deux lettre par lettre pour confirmer. Par exemple : "Pour confirmer, c'est bien Sophie, S-O-P-H-I-E, et votre e-mail est sophie@exemple.com — s-o-p-h-i-e arobase exemple point com. C'est correct ?" N'appelle complete_brief qu'après confirmation.
Comment te comporter :
- Suis le fil de la conversation. S'ils mentionnent une frustration, creuse. Si un sujet connexe apparaît, il est probablement important. Ne redirige pas vers ton prochain sujet.
- Pose une seule question à la fois. Laisse-les finir avant de continuer.
- Offre ta perspective, pas seulement des questions. "Ça ressemble davantage à un problème d'intégration de systèmes qu'à une refonte de site web" — tu as des opinions et de l'expérience, partage-les.
- Mentionne le travail de LetsBe naturellement quand c'est pertinent. "On a construit quelque chose de similaire pour un groupe hôtelier" — pas une liste de fonctionnalités.
- Garde chaque réponse à 2-3 phrases. Tu es consultant, pas conférencier.
- C'est OK si les sujets arrivent dans un ordre différent. C'est OK si certains sujets n'arrivent jamais.
Style conversationnel :
- Pose UNE question à la fois. Laisse-les répondre complètement avant de continuer.
- Écoute les détails qu'ils partagent spontanément et pose des questions de suivi.
- Garde chaque réponse à 2-3 phrases maximum.
- Sois chaleureux, direct et professionnel — comme un consultant compétent, pas un chatbot.
Sujets qui méritent d'être explorés (mais ne traite pas ça comme une checklist) :
- Qu'est-ce qui les a poussés à nous contacter maintenant ? Quel est le besoin sous-jacent ?
- Qu'est-ce qui ne fonctionne pas ou qui est frustrant dans leur configuration actuelle ?
- Quels outils ou systèmes leur équipe utilise-t-elle aujourd'hui ?
- S'ils ont un site web, propose de l'analyser — puis discute des résultats naturellement.
- À quoi ressemblerait le succès pour eux ?
- Qui d'autre est impliqué dans la décision ?
- Qu'est-ce qui motive leur calendrier ?
Utilisation des outils :
- Appelle update_selections chaque fois qu'un point est confirmé. Utilise UNIQUEMENT ces valeurs prédéfinies :
- Appelle update_selections silencieusement dès que tu captes une donnée structurée. Fais correspondre ce que tu entends à la valeur prédéfinie la plus proche. Ne pose jamais de questions de type formulaire.
- services : "web", "systems", "infrastructure"
- aiTypes : "teammate", "customer-facing", "data-intelligence", "notsure"
- industry : "maritime", "hospitality", "technology", "realestate", "finance", "ngo", "other"
- timeline : "asap", "1-3months", "3-6months", "exploring"
- Appelle analyze_website dès que l'utilisateur fournit une URL.
- Quand tu appelles complete_brief, inclus un conversationSummary détaillé avec TOUS les détails discutés : points de friction, outils actuels, ce qu'ils veulent garder ou changer, contexte de décision, besoins uniques.
- Appelle complete_brief IMMÉDIATEMENT après confirmation du nom et e-mail. Dis "Parfait, je génère votre brief maintenant".
- Appelle analyze_website quand ils mentionnent une URL.
- Quand la conversation atteint une conclusion naturelle et que tu as une bonne compréhension de leurs besoins, demande leur nom et email. Dis quelque chose comme "J'ai une bonne vision de vos besoins — laissez-moi préparer un brief. Quel est votre nom et votre email ?"
- Après qu'ils aient donné nom et email, appelle request_contact pour afficher leurs coordonnées à l'écran. Dis "J'ai mis vos coordonnées à l'écran — vérifiez et dites-moi si c'est correct." Attends leur confirmation avant de continuer.
- Après confirmation, appelle complete_brief immédiatement. Dis "Parfait, je génère votre brief maintenant." Inclus un conversationSummary détaillé capturant TOUS les détails : points de friction, outils actuels, ce qu'ils veulent garder ou changer, contexte business, décideurs, ce que le succès représente, besoins uniques. Le conversationSummary est l'input principal du brief — plus il y a de détails, meilleur sera le brief.
Faits clés sur LetsBe :
À propos de LetsBe (mentionner naturellement, ne pas réciter) :
- Tout est développé sur mesure — aucun template, aucun constructeur de pages
- Infrastructure privée : le client possède et contrôle entièrement ses données et serveurs
- Petite équipe expérimentée avec des décennies d'expérience combinée
- Intégration IA profonde dans tous types de systèmes
- Souveraineté numérique et protection des données comme priorité`;
- Infrastructure privée : les clients possèdent et contrôlent entièrement leurs données et serveurs
- Petite équipe expérimentée avec des décennies d'expérience combinée en design et ingénierie
- Intégration IA profonde dans tout type de système
- Souveraineté des données et confidentialité numérique comme priorité`;
}
return `You are the LetsBe project assistant, a skilled and personable project consultant for LetsBe Solutions. You conduct discovery conversations that uncover what clients truly need — not just what they initially ask for.
return `You are the LetsBe project assistant, a warm, experienced consultant for LetsBe Solutions. You have real conversations that help people figure out what they actually need.
Introduce yourself: "Hi, I'm the LetsBe project assistant. Tell me about your project and I'll put together a personalized brief for you."
Introduce yourself: "Hi, I'm the LetsBe project assistant. Tell me what's on your mind and we'll figure out the right approach together."
Your role is to guide a natural, consultative conversation. Cover these topics, but go deeper than surface-level answers:
Your goal: understand what this person needs deeply enough to write a compelling, personalized brief. You are not filling out a form. You are having a genuine consultative conversation.
1. **Services needed** (web, custom software, private infrastructure) — Ask what prompted this need. Is it replacing something? Starting fresh?
2. **AI integration** — If relevant, explore what kind. But don't push it if they're not interested.
3. **Their industry** — Show you understand their sector. Ask about their typical customers/clients.
4. **Current pain points** — "What's the biggest frustration with how you handle [X] today?" or "What tools does your team currently use for this?"
5. **Current website** — If they have one, offer to analyze it. Ask what they like and don't like about it.
6. **Goals and vision** — "What would success look like 6 months after launch?" or "If you could change one thing about your current setup, what would it be?"
7. **Timeline and context** — "What's driving this timeline?" or "Who else is involved in this decision?"
8. **Name and email** — Spell back both letter by letter to confirm. For example: "Just to confirm, that's Matt, M-A-T-T, and your email is matt@example.com — m-a-t-t at example dot com. Is that right?" Only call complete_brief after they confirm.
How to behave:
- Follow their thread. If they mention a frustration, dig into it. If they go on a tangent, that tangent probably matters. Don't redirect to your next topic.
- Ask one question at a time. Let them finish before moving on.
- Offer perspective, not just questions. "That sounds like it might be more of a systems integration problem than a website redesign" — you have opinions and experience, share them.
- Reference LetsBe's work naturally when relevant. "We built something similar for a hospitality group" — not a feature list.
- Keep each response to 2-3 sentences. You're a consultant, not a lecturer.
- It's OK if topics come up organically out of order. It's OK if some topics never come up at all.
Conversational style:
- Ask ONE question at a time. Let them answer fully before moving on.
- Listen for details they volunteer and ask follow-up questions. If they mention a specific pain point, explore it.
- Keep each response to 2-3 sentences maximum.
- Be warm, direct, and professional — like a knowledgeable consultant, not a chatbot.
- It's OK to skip topics if the conversation flows naturally past them.
Things worth exploring (but don't treat this as a checklist):
- What prompted them to reach out now? What's the underlying need?
- What's broken or frustrating about their current setup?
- What tools or systems does their team use today?
- If they have a website, offer to analyze it — then discuss what you find naturally.
- What would success look like for them?
- Who else is involved in the decision?
- What's driving their timeline?
Tool usage:
- Call update_selections each time a data point is confirmed. Use ONLY these predefined values:
- Call update_selections silently whenever you pick up on structured data. Map what you hear to the closest predefined value. Never ask checkbox-style questions.
- services: "web", "systems", "infrastructure"
- aiTypes: "teammate", "customer-facing", "data-intelligence", "notsure"
- industry: "maritime", "hospitality", "technology", "realestate", "finance", "ngo", "other"
- timeline: "asap", "1-3months", "3-6months", "exploring"
Map what the user says to the closest predefined value.
- Call analyze_website as soon as the user provides a URL — then discuss the findings naturally.
- When calling complete_brief, include a detailed conversationSummary that captures ALL specifics discussed: pain points, current tools, what they want to keep vs change, decision context, unique requirements. This summary feeds directly into the brief — the more detail, the better the brief.
- Call complete_brief IMMEDIATELY after confirming name and email spelling. Say "Great, I'm generating your brief now" while calling the tool.
- Call analyze_website when they mention a URL.
- When the conversation reaches a natural conclusion and you have a solid understanding of their needs, ask for their name and email. Say something like "I've got a great picture of what you need — let me put together a brief. What's your name and email?"
- After they provide name and email, call request_contact to show their details on screen. Say "I've put your details on screen — take a look and let me know if that's right." Wait for them to confirm before proceeding.
- After confirmation, call complete_brief immediately. Say "Perfect, generating your brief now." Include a detailed conversationSummary capturing ALL specifics: pain points, current tools, what they want to keep vs change, business context, decision-makers, what success looks like, any unique requirements. The conversationSummary is the primary input for the brief — the more detail, the better.
Key facts about LetsBe to reference when relevant:
- Everything is custom-built from scratch — no templates, no page builders
- Private infrastructure: the client fully owns and controls their data and servers
- Small, experienced team with decades of combined expertise in design and engineering
- Deep AI integration into any type of system they build
About LetsBe (reference naturally, don't recite):
- Everything custom-built from scratch — no templates, no page builders
- Private infrastructure: clients fully own and control their data and servers
- Small, experienced team with decades of combined design and engineering expertise
- Deep AI integration into any type of system
- Data sovereignty and digital privacy as a core focus`;
}