This document describes intentional implementation patterns in the Parle codebase that may appear unusual or incorrect at first glance, but are deliberate design decisions. Code review agents should NOT flag these as bugs.
Location: services/geminiService.ts - sendVoiceMessage() function
Conversation history is updated BEFORE TTS audio generation in single-character and free-conversation code paths:
```typescript
// Update history BEFORE TTS
addToHistory("user", userText);
addToHistory("assistant", modelText);
syncedMessageCount += 2;

// Then generate TTS (wrapped in try/catch)
let audioUrl = '';
try {
  audioUrl = await abortablePromise(generateCharacterSpeech(...));
} catch (ttsError) {
  console.error('TTS generation failed:', ttsError);
  // Return empty audioUrl - UI shows "Audio unavailable" with retry
}
```

The app is designed to display message text even when TTS fails. This enables users to:
- See what the AI said even if audio generation fails
- Read the text response while audio is unavailable
- Use the UI's retry button to regenerate TTS without losing the message
- Maintain conversation context and history continuity
- ✅ TTS succeeds: Message displays with audio, history updated
- ✅ TTS fails: Message displays without audio (shows "Audio unavailable" warning), history updated, retry button available
- ❌ If history was updated AFTER TTS: Message would be lost entirely on TTS failure, breaking UX
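The ordering can be shown with a minimal, runnable sketch. The `Turn` shape, the `history` array, and the injected `tts` callback below are hypothetical stand-ins for the real history helpers and `generateCharacterSpeech`, not the actual service code:

```typescript
// Sketch: commit text to history first, then attempt TTS.
interface Turn {
  role: "user" | "assistant";
  text: string;
  audioUrl: string; // empty string means "Audio unavailable" + retry in the UI
}

const history: Turn[] = [];

async function handleTurn(
  userText: string,
  modelText: string,
  tts: (text: string) => Promise<string>, // stand-in for generateCharacterSpeech
): Promise<void> {
  // History is committed BEFORE TTS, so the text survives any TTS failure.
  history.push({ role: "user", text: userText, audioUrl: "" });
  const assistantTurn: Turn = { role: "assistant", text: modelText, audioUrl: "" };
  history.push(assistantTurn);
  try {
    assistantTurn.audioUrl = await tts(modelText);
  } catch {
    // Swallow the TTS error: the message text is already in history.
  }
}
```

Even when the `tts` callback rejects, both turns remain in `history` with an empty `audioUrl`, which is exactly the state the retry UI expects.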
The multi-character path uses a different pattern with Promise.allSettled() but achieves the same graceful degradation:
- Generates TTS for all characters in parallel
- Marks failed generations with `audioGenerationFailed: true`
- Still displays all text and updates history
- Returns partial audio results rather than failing completely
Users are rate-limited by TTS request count, not by conversation history size. By updating history before TTS, we ensure:
- Users can retry failed TTS without re-running the LLM (which would consume more tokens)
- Conversation context is preserved for subsequent messages
- Failed TTS doesn't break the conversation flow
- `services/geminiService.ts` - Lines ~590-640 (single-character and free-conversation paths)
- `types.ts` - Line 29 (`audioGenerationFailed?: boolean` field)
- UI components expect this pattern and handle missing audio gracefully
Location: services/geminiService.ts - Multi-character response processing
Character responses are merged when the same character appears successively:
```typescript
const mergedCharacterResponses = characterResponses.reduce((acc, current) => {
  if (acc.length === 0) return [current];
  const lastResponse = acc[acc.length - 1];
  if (lastResponse.characterId === current.characterId) {
    // Merge successive messages from same character
    lastResponse.french = `${lastResponse.french} ${current.french}`;
    lastResponse.english = `${lastResponse.english} ${current.english}`;
    return acc;
  }
  return [...acc, current];
}, []);
```

Users are rate-limited by TTS request count. Merging successive messages ensures:
- Only ONE TTS request per character per turn (not multiple)
- Reduced API calls and faster response times
- Same audio playback result (messages would play back-to-back anyway)
✅ ALLOWED - Character speaks, another character responds, first character speaks again:
- Character 1 (Baker): "Bonjour!"
- Character 2 (Cashier): "Bonjour!"
- Character 1 (Baker): "Que désirez-vous?"
→ 3 separate messages, 3 TTS requests (non-successive)
❌ MERGED - Same character speaks multiple times in a row:
- Character 1 (Baker): "Bonjour!"
- Character 1 (Baker): "Que désirez-vous?"
→ Merged into 1 message: "Bonjour! Que désirez-vous?" → 1 TTS request
The system prompt explicitly instructs the LLM to avoid creating successive messages from the same character (see scenarioService.ts guideline #7), but the defensive merge code ensures it happens regardless of LLM compliance.
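The worked examples above can be reproduced with a self-contained version of the merge. The `CharacterResponse` interface and `mergeSuccessive` name here are illustrative, not the real identifiers from `geminiService.ts`:

```typescript
interface CharacterResponse {
  characterId: number;
  french: string;
  english: string;
}

// Standalone sketch of the defensive merge step.
function mergeSuccessive(responses: CharacterResponse[]): CharacterResponse[] {
  return responses.reduce<CharacterResponse[]>((acc, current) => {
    if (acc.length === 0) return [{ ...current }];
    const last = acc[acc.length - 1];
    if (last.characterId === current.characterId) {
      // Same speaker twice in a row: concatenate text, keep one entry (one TTS call)
      last.french = `${last.french} ${current.french}`;
      last.english = `${last.english} ${current.english}`;
      return acc;
    }
    return [...acc, { ...current }];
  }, []);
}

const merged = mergeSuccessive([
  { characterId: 1, french: "Bonjour!", english: "Hello!" },
  { characterId: 1, french: "Que désirez-vous?", english: "What would you like?" },
  { characterId: 2, french: "Bonjour!", english: "Hello!" },
]);
// merged.length === 2; merged[0].french === "Bonjour! Que désirez-vous?"
```

Two successive Baker turns collapse into one entry (one TTS request), while the Cashier's turn stays separate.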
Location: services/voiceService.ts - Voice assignment functions
Voice names are stored with capital letters in the catalog but converted to lowercase when passed to the API:
```typescript
export const GEMINI_VOICES: VoiceProfile[] = [
  { name: "Aoede", description: "...", gender: "female", ... },
  // ... (names are capitalized)
];

export const assignVoiceToCharacter = (...): string => {
  // Returns lowercase for API compatibility
  return suitableVoices[0].name.toLowerCase();
};
```

The Gemini TTS API requires lowercase voice names. The pattern ensures:
- Human-readable names in code ("Aoede")
- API-compatible names when calling TTS ("aoede")
- Single source of truth for voice metadata
Passing a capitalized name like "Aoede" directly results in an API error:

```
Voice name Aoede is not supported. Allowed voice names are: aoede, kore, leda, ...
```
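A condensed sketch of the catalog-plus-boundary-conversion idea (the `VoiceProfile` shape and `voiceNameForApi` helper are illustrative; the real catalog carries more metadata):

```typescript
// Catalog keeps human-readable, capitalized names as the single source of truth.
interface VoiceProfile {
  name: string;
  gender: "female" | "male";
}

const VOICES: VoiceProfile[] = [
  { name: "Aoede", gender: "female" },
  { name: "Kore", gender: "female" },
];

// Lowercase only at the API boundary, since the TTS API rejects capitalized names.
function voiceNameForApi(profile: VoiceProfile): string {
  return profile.name.toLowerCase();
}
```

Code and UI can keep using "Aoede" while every TTS call receives "aoede".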
Location: services/geminiService.ts - createChatSession()
Both single-character AND multi-character scenarios use JSON response format:
```typescript
const isScenarioMode = activeScenario !== null;
chatSession = ai.chats.create({
  model: 'gemini-2.5-flash-lite',
  config: {
    systemInstruction: systemInstruction,
    ...(isScenarioMode && {
      responseMimeType: 'application/json'
    })
  },
  ...
});
```

Structured responses enable precise French/English separation for TTS control:
- LLM returns `{ "french": "...", "english": "...", "hint": "..." }`
- TTS only uses the French text
- UI displays both French and English
- Prevents LLM from mixing languages in unpredictable ways
Previously, single-character scenarios used free-form text with inline translations. This was changed because:
- Unreliable separation of French and English
- TTS would read both languages aloud
- Harder to parse and display separately
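A hedged sketch of consuming the structured reply. The field names match the documented shape; the `parseScenarioReply` helper itself is illustrative, not the app's real validation code:

```typescript
interface ScenarioReply {
  french: string;
  english: string;
  hint?: string;
}

// Parse and minimally validate the JSON-mode model output.
function parseScenarioReply(raw: string): ScenarioReply {
  const parsed: unknown = JSON.parse(raw);
  if (
    typeof parsed !== "object" || parsed === null ||
    typeof (parsed as ScenarioReply).french !== "string" ||
    typeof (parsed as ScenarioReply).english !== "string"
  ) {
    throw new Error("Model reply is not a valid scenario response");
  }
  return parsed as ScenarioReply;
}

// Only reply.french would be handed to TTS; both strings reach the UI.
const reply = parseScenarioReply(
  '{"french":"Bonjour!","english":"Hello!","hint":"Greet back"}'
);
```

Because the two languages arrive as separate fields, TTS never reads the English aloud.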
Audio flows can have races: a user may cancel, close/reopen a modal, or start a new 'turn' while a previous Gemini request is still in-flight. If the stale request resolves after the user moved on, it can incorrectly update UI state (wrong transcript/messages/spinners) or throw JSON parsing errors.
This section is a developer-facing rule to prevent that entire class of bug.
- Create a new `AbortController` per turn/request and store it in a ref that cancellation/timeout handlers can reach.
- Pass the per-request `signal` into the Gemini SDK via `config.abortSignal` on every relevant SDK call (`ai.models.generateContent(...)` and `chatSession.sendMessage(...)`).
  - Do not rely on `Promise.race` / wrapper rejection alone. The Gemini SDK must receive the signal so it can stop internally and reject with `AbortError`.
- Invalidate/discard stale responses:
  - Track a request token (e.g. `requestIdRef.current` captured into `currentRequestId`) and check it before any state updates.
  - If a newer request started (token changed) or the relevant UI is no longer open, return early and do not mutate UI state.
- Preserve JSON enforcement when passing per-request config with `abortSignal`:
  - Keep `responseMimeType: 'application/json'` and `responseSchema: ...` set in the same request config.
  - This avoids the SDK returning plain text (which breaks downstream JSON parsing/validation).
- Handle `AbortError` according to why the request was aborted:
  - `processingAbortedRef` (exercise exit, TEF timer, leaving summary): suppress the ERROR UI; treat the abort as intentional and return silently from `processAudioMessage` / related flows.
  - User orb cancel during processing: `handleAbortProcessing` is a no-op if `abortControllerRef` is null (nothing in flight). Otherwise set `pipelineFailureKindRef` to `'user_cancel'`, then `abort()` on the user `AbortController`; surface ERROR + retry + a clear message (same retry path as network failures). `lastChatAudio` remains for Retry.
  - Pipeline deadline (`PIPELINE_MAX_MS`, a 90s wall-clock budget for transcribe + chat + TTS): set `pipelineFailureKindRef` to `'timeout'` before aborting a second controller; combine user + deadline signals with `combineAbortSignals` (or `AbortSignal.any`) and pass that composite signal to `sendVoiceMessage`. Clear the deadline `setTimeout` in `finally`.
- One composite `AbortSignal` covers the whole turn: user cancel or `PIPELINE_MAX_MS` (exported from `services/geminiService.ts`). `sendVoiceMessage` uses `config.abortSignal` on transcribe, `sendMessage`, and TTS; no parallel `Promise.race` wrappers around those SDK calls.
- In `App.tsx`, `isAbortLikeError` classifies aborted requests: the SDK may throw `APIUserAbortError`, a plain `Error` with `name === 'AbortError'`, or an `Error` with the default `name` and a message containing `signal is aborted` (from the GenAI client). Do not rely on `instanceof DOMException` alone. The timeout user copy is "Connection timed out" (no seconds in the string).
- If the model returns an invalid multi-character shape (missing `characters`/`modelText`, or an array length mismatch), `App.tsx` sets ERROR and `canRetryChatAudio(true)` so the user can Retry with the same `lastChatAudio` (same as network/cancel failures).
In the scenario description 'describe by voice' flow:
- Each transcription attempt creates a fresh `AbortController` (`scenarioDescriptionAbortControllerRef`) and increments a request token (`scenarioDescriptionRequestIdRef`).
- The in-flight call passes `abortController.signal` into `transcribeAndCleanupAudio(...)`.
- After awaiting, results are discarded if `currentRequestId !== scenarioDescriptionRequestIdRef.current` or if the modal is closed (`scenarioSetupOpenRef`).
- In `catch`, `AbortError` is ignored, and only non-abort failures show errors / enable retry.
- In `finally`, the transcription spinner is only cleared when the request token still matches (so stale requests can't affect UI after close+reopen).
This is the same overall strategy used for the main mic audio flow: per-turn AbortController, request-token guarded state updates, and selective AbortError handling (suppress only when processingAbortedRef indicates an intentional exit).
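A minimal, framework-free sketch of the request-token guard (the `requestId`, `spinnerVisible`, and `lastResult` names are hypothetical; in the app these live in React refs and state):

```typescript
let requestId = 0;
let spinnerVisible = false;
let lastResult: string | null = null;

async function runTranscription(fakeTranscribe: () => Promise<string>): Promise<void> {
  const currentRequestId = ++requestId; // capture the token for this attempt
  spinnerVisible = true;
  try {
    const text = await fakeTranscribe();
    // Discard stale results: a newer attempt started while we were awaiting.
    if (currentRequestId !== requestId) return;
    lastResult = text;
  } finally {
    // Only the latest attempt may clear the spinner.
    if (currentRequestId === requestId) spinnerVisible = false;
  }
}
```

If a slow attempt resolves after a newer one has started, its token no longer matches, so it neither overwrites the result nor touches the spinner.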
- `App.tsx` (main mic + scenario description cancellation/discard logic)
- `utils/combineAbortSignals.ts` (composite signal for user + deadline)
- `utils/isAbortLikeError.ts` (abort detection for `processAudioMessage` catch)
- `services/geminiService.ts` (`transcribeAndCleanupAudio`, `sendVoiceMessage` per-request `config.abortSignal`, `PIPELINE_MAX_MS`, and JSON enforcement config)
- `services/tefReviewService.ts` (`generateTefReview` passes `signal` to `fetch` and to `ai.models.generateContent`; returns `null` on `AbortError`)
- `__tests__/scenarioDescriptionRecordingAbortDiscard.test.tsx` / `__tests__/transcribeAndCleanupAudioAbortSignal.test.ts` (abort + discard + config preservation)
Location: App.tsx — handleExitTefAd, handleExitTefQuestioning, handleDismissTefAdSummary, handleDismissTefQuestioningSummary
For TEF Ad and TEF Questioning sessions, URL.revokeObjectURL is not called in the exit handlers. Revocation is deferred to the dismiss handlers — after the summary screen closes and the user is done with the review.
```typescript
// handleExitTefAd — NO revocation here
const snapshot = messagesRef.current;
tefAdMessagesSnapshotRef.current = snapshot;
startTefAdReview(snapshot); // review service will fetch audio from blob URLs

// handleDismissTefAdSummary — revocation happens here, after review is done
for (const msg of tefAdMessagesSnapshotRef.current) {
  if (msg.audioUrl) {
    // ...
    URL.revokeObjectURL(url); // safe: review service is no longer fetching
  }
}
tefAdMessagesSnapshotRef.current = [];
```

The same pattern applies to TEF Questioning: `handleExitTefQuestioning` captures the snapshot and calls `startTefQuestioningReview`; `handleDismissTefQuestioningSummary` does the revocation.
generateTefReview in services/tefReviewService.ts fetches user audio from blob URLs to send to the Gemini evaluator as inline audio data. If exit handlers revoked the URLs immediately (as other scenario exit handlers do), the fetch inside generateTefReview would fail with a network error and the review would only have transcripts, degrading evaluation quality.
tefAdMessagesSnapshotRef and tefQuestioningMessagesSnapshotRef capture the message array at exit time so that:
- The review service has a stable reference to the messages (including blob URLs) that persists even after React state is cleared.
- The dismiss handler can find the URLs to revoke them after the review is complete.
Do not clear these refs or revoke URLs in the exit handlers. Do not "clean up" the exit handlers by adding URL.revokeObjectURL calls there — this will silently break review audio.
- `App.tsx` - `handleExitTefAd`, `handleExitTefQuestioning` (exit: capture snapshot, no revocation); `handleDismissTefAdSummary`, `handleDismissTefQuestioningSummary` (dismiss: revoke + clear snapshot)
- `services/tefReviewService.ts` - `generateTefReview` fetches blob URLs via `fetchAudioAsInlineData`
Location: services/tefReviewService.ts — generateTefReview()
generateTefReview has return type Promise<TefReview | null>. It returns null when the request is aborted (via AbortSignal) rather than throwing. All callers must check for null and treat it as a graceful cancellation — not as an error.
```typescript
// tefReviewService.ts
if (err instanceof DOMException && err.name === 'AbortError') return null;
if (err instanceof Error && err.name === 'AbortError') return null;

// App.tsx callers
generateTefReview({ ... })
  .then((r) => {
    if (r) { // null check is required — null means aborted
      setReviews([r]);
    }
  })
```

This follows the same abort-suppression convention used throughout the codebase (see "Abort / Cancellation Strategy for Audio Requests" above). Returning null rather than throwing keeps callers free of AbortError-specific catch logic. The review loading state is cleared in finally, so the UI returns cleanly to its idle state.
Do not change null returns to throws. Do not flag the if (r) null-checks in callers as unnecessary — they guard against the abort case.
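A miniature of the null-on-abort convention. The `FakeReview` shape and the fake async step below are stand-ins for the real model call, not the actual `generateTefReview` implementation:

```typescript
interface FakeReview {
  score: number;
}

async function generateFakeReview(signal: AbortSignal): Promise<FakeReview | null> {
  try {
    // The real code would pass `signal` into fetch and the model call;
    // here an already-aborted signal is surfaced the same way.
    if (signal.aborted) throw new DOMException("Aborted", "AbortError");
    return { score: 4 };
  } catch (err) {
    // Abort is a graceful cancellation, not an error: return null.
    if (err instanceof DOMException && err.name === "AbortError") return null;
    if (err instanceof Error && err.name === "AbortError") return null;
    throw err;
  }
}
```

Callers then treat `null` as "aborted, do nothing" instead of wrapping every call site in AbortError-specific catch logic.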
- `services/tefReviewService.ts` - `generateTefReview` return type and AbortError handling
- `App.tsx` - `startTefAdReview`, `regenerateTefAdReview`, `startTefQuestioningReview`, `regenerateTefQuestioningReview` callers
Location: App.tsx handlers, components/ScenarioSetup.tsx, components/AdPersuasionSetup.tsx
When AI functionality requires API credentials, the app handles missing credentials in two complementary ways:
- Warning banners: Setup forms (`ScenarioSetup`, `AdPersuasionSetup`) display a yellow warning banner when required API keys are not configured. This gives users a passive, non-blocking notification about what's needed.
- Modal trigger on action: When users attempt an action that depends on AI credentials (clicking record, uploading an image, starting a conversation), the app intercepts the action, opens the API key configuration modal, and returns early without performing the action. Once the user configures their keys, the action can proceed normally.
Any new feature that depends on AI API credentials MUST implement both of these patterns:
- Add a warning banner in the relevant setup/configuration UI when the required key(s) are missing. Use the yellow warning style (`bg-yellow-900/30 border border-yellow-600/50`) consistent with existing banners.
- Gate user actions that trigger AI calls with a credential check at the top of the handler:

```typescript
if (!hasApiKeyOrEnv('provider')) { setShowApiKeyModal(true); return; }
```
| Feature | Gemini | OpenAI | Why |
|---|---|---|---|
| Free conversation (main mic) | Required | — | Gemini handles transcription + conversation |
| Scenario creation (describe) | Required | Required | Gemini for transcription, OpenAI for scenario planning |
| Scenario practice (mic) | Required | — | Gemini handles conversation |
| Ad Persuasion (TEF Ad) | Required | — | Gemini for image analysis + conversation |
| Ad Questioning (TEF Questioning) | Required | — | Gemini for image analysis + conversation |
- `services/apiKeyService.ts` - `hasApiKeyOrEnv()` function for checking key availability
- `components/ApiKeySetup.tsx` - Modal component for entering API keys
- `App.tsx` - Handler functions with credential gates (`handleStartRecording`, `handleStartRecordingDescription`, `handleOpenTefAdSetup`, etc.)
Location: services/geminiService.ts - sendVoiceMessage(), App.tsx
The TEF Ad Persuasion mode injects coaching context into each turn based on a simple turn counter (tefAdTurnCount). The LLM is not responsible for sequencing objections or declaring conviction — the session ends when the 10-minute timer expires.
```typescript
// App.tsx: inject phase-appropriate coaching text alongside the user's audio
let phaseContextText: string | undefined;
if (tefAdMode === 'practice' && !tefAdIsFirstMessage) {
  if (tefAdTurnCount <= 3) {
    phaseContextText = '[Per-turn context: Encourage the user to introduce and present the advertisement clearly...]';
  } else if (tefAdTurnCount >= 8) {
    phaseContextText = '[Per-turn context: Push back with counter-arguments...]';
  } else {
    phaseContextText = '[Per-turn context: Ask for concrete examples if only bare assertions are given...]';
  }
}
const response = await sendVoiceMessage(audioBase64, mimeType, signal, phaseContextText);
```

```typescript
// geminiService.ts: contextText is prepended as a text part before the audio part
const messageParts = [];
if (contextText) {
  messageParts.push({ text: contextText });
}
messageParts.push({ inlineData: { data: audioBase64, mimeType: mimeType } });
```

Three phases guide the AI's behaviour:
- Early (turns 1–3): encourage clear ad presentation
- Mid (turns 4–7): prompt concrete examples
- Late (turns 8+): introduce counter-arguments and nuance
The AI never expresses being "convinced". The session ends when the 10-minute timer fires.
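The phase boundaries above reduce to a small pure function. A `phaseForTurn` helper like this is a hypothetical refactoring of the inline branching shown earlier, included only to make the thresholds explicit:

```typescript
type Phase = "early" | "mid" | "late";

// Map the turn counter to a coaching phase.
function phaseForTurn(turnCount: number): Phase {
  if (turnCount <= 3) return "early"; // turns 1-3: present the ad clearly
  if (turnCount >= 8) return "late";  // turns 8+: counter-arguments and nuance
  return "mid";                       // turns 4-7: push for concrete examples
}
```

Note the boundaries are inclusive: turn 3 is still "early" and turn 8 is already "late".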
Phase-based coaching produces more realistic exam practice than a mechanical round-counter:
- Natural pacing: The AI adapts its pressure to where the student is in the session rather than cycling through a fixed script.
- No artificial endings: The LLM does not decide when to stop — the timer does. This matches the real TEF exam structure.
- Simpler state: A single `tefAdTurnCount` integer replaces a multi-field state machine, making the flow easier to follow and test.
The phaseContextText parameter to sendVoiceMessage adds a { text: ... } part to the message before the audio blob. This is the intentional mechanism for injecting per-turn coaching instructions. Do not remove it — without it the AI has no phase signal.
The first user turn is always a greeting (e.g., "Bonjour"). It must not receive phase context injection and must not increment tefAdTurnCount. The app tracks this with tefAdIsFirstMessage:
```typescript
// App.tsx: skip context injection on the first turn
if (tefAdMode === 'practice' && !tefAdIsFirstMessage) {
  // inject phaseContextText based on tefAdTurnCount
}

// After the response:
if (tefAdIsFirstMessage) {
  setTefAdIsFirstMessage(false); // greeting done, counter stays at 0
} else {
  setTefAdTurnCount(c => c + 1); // advance phase counter
}
```

Do not flag the missing counter increment on the first turn as a bug; it is intentional. Phase counting should start from the first real persuasion exchange, not the greeting.
After the session ends, generateTefReview evaluates the conversation against the 5 official TEF persuasion criteria. Results are stored in criteriaEvaluation: TefCriterionEvaluation[] on the TefReview object and displayed as a scorecard in TefAdSummary.
```typescript
// types.ts
export interface TefCriterionEvaluation {
  criterion: string;
  met: boolean;
  evidence: string;
}

// TefReview
criteriaEvaluation?: TefCriterionEvaluation[];
```

Do not confuse this with the old "direction/round/isConvinced" state; that mechanism no longer exists.
- `types.ts` - `TefCriterionEvaluation` interface; `criteriaEvaluation` field on `TefReview`
- `services/geminiService.ts` - `sendVoiceMessage()` (accepts `contextText?`)
- `services/scenarioService.ts` - `generateTefAdSystemInstruction()` (timer-based system prompt, no round-counting)
- `components/PersuasionTimer.tsx` - Shows "Turn N" driven by `tefAdTurnCount`
- `components/TefAdSummary.tsx` - Criteria scorecard from `criteriaEvaluation`
- `App.tsx` - `tefAdTurnCount`, `tefAdIsFirstMessage` state; phase context injection per turn
Location: services/geminiService.ts - TefQuestioningSchema, createChatSession(), sendVoiceMessage(); types.ts; App.tsx; components/TefQuestioningSummary.tsx
TEF Ad Questioning is a third synthetic-scenario practice mode (alongside Role Play and Ad Persuasion). It sets isTefQuestioning: true on the Scenario object. This flag drives two distinct behaviors in geminiService.ts:
1. Schema selection in createChatSession()
```typescript
// isTefQuestioning selects the extended schema at session creation time
const schemaToUse = activeScenario.isTefQuestioning
  ? TefQuestioningSchema
  : SingleCharacterSchema;
```

`TefQuestioningSchema` is a superset of `SingleCharacterSchema`: it adds two fields:
```typescript
isRepeat: z.boolean().optional()
  .describe("true if the user asked a question that was already answered"),
conceptLabels: z.array(z.string())
  .describe("Array of 2-4 word topic labels in English for the question asked (e.g. ['pricing', 'opening hours']). Always include this field — use an empty array if no topic applies.")
```

`isRepeat` is optional (the AI may omit it on non-repeat turns). `conceptLabels` is required: Gemini's structured output always emits it because it is not marked optional, ensuring the post-exercise review can always group questions by concept.
Do not flag the two-schema branch as unnecessary complexity or suggest collapsing it into one schema. The schemas must remain separate so that isRepeat and conceptLabels are never present in the standard single-character path and never absent in the questioning path.
2. isRepeat and conceptLabels propagation in sendVoiceMessage()
After validating the JSON response, both fields are extracted and forwarded on the VoiceResponse object using the same conditional guard:
```typescript
const isRepeat = activeScenario.isTefQuestioning && 'isRepeat' in validated
  ? (validated as { isRepeat?: boolean }).isRepeat
  : undefined;
const conceptLabels = activeScenario.isTefQuestioning && 'conceptLabels' in validated
  ? (validated as { conceptLabels?: string[] }).conceptLabels
  : undefined;
```

`App.tsx` reads `response.isRepeat` to increment a `tefQuestioningRepeatCount` and reads `response.conceptLabels` to store topic labels on user `Message` objects for the post-exercise summary. Do not flag either field on `VoiceResponse` as dead code; both are consumed downstream.
Unlike Ad Persuasion (which injects phase-based coaching context on every turn), TEF Ad Questioning does not inject any per-turn context into sendVoiceMessage. The AI customer service agent's system prompt is self-sufficient: it needs no external sequencing signal. Adding context injection to the questioning path would be incorrect.
Like persuasion mode, questioning mode tracks a tefQuestioningIsFirstMessage boolean. The first user turn (a greeting) does not increment tefQuestioningQuestionCount, and neither isRepeat nor conceptLabels are stored on the first message. This is intentional — only genuine questions should count toward the score and appear in the concept summary shown in TefQuestioningSummary.
```typescript
// In App.tsx — both isRepeat and conceptLabels are gated on the same first-message skip
...(tefQuestioningMode === 'practice' && !tefQuestioningIsFirstMessage && {
  isRepeat: response.isRepeat,
  conceptLabels: response.conceptLabels,
}),
```

```typescript
if (tefQuestioningIsFirstMessage) {
  setTefQuestioningIsFirstMessage(false); // greeting done, no count
} else {
  setTefQuestioningQuestionCount(c => c + 1);
  if (response.isRepeat === true) setTefQuestioningRepeatCount(r => r + 1);
}
```

`components/TefQuestioningSummary.tsx` exports a named function `groupRepeatedConcepts(messages: Message[])` that is used internally by `TefQuestioningSummary` to build the "Repeated Concepts" section of the post-exercise review. It is an exported function (not a private helper) to make it independently testable. Do not flag the export as unnecessary or suggest inlining it into the component.
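The grouping step can be sketched independently of the component. This illustrative version takes plain label arrays rather than the real `Message[]` signature, and only keeps concepts asked about more than once:

```typescript
// Count concept labels across turns and keep only the repeated ones.
function groupRepeatedConcepts(labelSets: string[][]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const labels of labelSets) {
    for (const label of labels) {
      counts.set(label, (counts.get(label) ?? 0) + 1);
    }
  }
  // A concept is "repeated" only when it appears in two or more turns.
  return new Map(Array.from(counts).filter(([, n]) => n > 1));
}
```

Exporting a pure function like this lets tests feed in synthetic label sets without rendering the component at all.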
- `types.ts` - `isTefQuestioning?: boolean` on `Scenario`; `isRepeat?: boolean` and `conceptLabels?: string[]` on both `Message` and `VoiceResponse`
- `services/geminiService.ts` - `TefQuestioningSchema`; schema selection in `createChatSession()`; `isRepeat` and `conceptLabels` extraction in `sendVoiceMessage()`
- `services/scenarioService.ts` - `generateTefQuestioningSystemInstruction()` (self-contained prompt, no per-turn injection)
- `components/AdQuestioningSetup.tsx` - Setup UI for the questioning session
- `components/QuestioningTimer.tsx` - In-session timer and question counter
- `components/TefQuestioningSummary.tsx` - End-of-session summary (total questions, repeat count, repeated concepts accordion); exports `groupRepeatedConcepts`
- `App.tsx` - `tefQuestioningIsFirstMessage`, `tefQuestioningQuestionCount`, `tefQuestioningRepeatCount` state; `handleStartTefQuestioning`; conditional storage of `isRepeat` and `conceptLabels` on user messages
When reviewing this codebase:
- Don't flag pre-TTS history updates as bugs - This is intentional graceful degradation
- Don't flag successive message merging as data loss - This is intentional rate-limit optimization
- Don't flag lowercase voice name conversion - This is required by the Gemini API
- Don't suggest removing JSON response mode for single-character - This enables French/English separation
- Don't flag the TEF Ad system prompt as missing objection-counting or round-sequencing logic - The persuasion mode uses phase-based coaching injected per-turn; the session ends by timer, not by LLM conviction (see "Phase-Based Per-Turn Context Injection" section above)
- Don't flag the `contextText` text part alongside audio as an API misuse - It is the intentional mechanism for delivering per-turn phase coaching context to the LLM
- Don't flag the missing `tefAdTurnCount` increment on the first persuasion turn as a bug - The first turn is a greeting; the phase counter must not advance until the first real exchange (see "First-Message Skip" under "Phase-Based Per-Turn Context Injection")
- Don't flag the two-schema branch (`TefQuestioningSchema` / `SingleCharacterSchema`) as unnecessary - `isRepeat` must only appear in the questioning path; the schemas must remain separate
- Don't flag `isRepeat` on `VoiceResponse` as dead code - It is consumed by the repeat counter in `App.tsx` and displayed on `TefQuestioningSummary`
- Don't add per-turn context injection to the questioning mode path - Unlike persuasion mode, questioning mode needs no external sequencing signal; its system prompt is self-sufficient
- Don't add `URL.revokeObjectURL` calls to `handleExitTefAd` or `handleExitTefQuestioning` - Revocation is intentionally deferred to the dismiss handlers so the review service can fetch audio blob URLs for evaluation (see "Deferred Audio URL Revocation" section)
- Don't flag the `if (r)` null-checks on `generateTefReview` results as redundant - `null` is the documented return value for an aborted review request; the check is required (see "TEF Post-Exercise Review: `generateTefReview` Returns `null` on Abort" section)
If you believe you've found a genuine bug in one of these areas, please:
- Reference this document in your review
- Explain why the documented rationale doesn't apply
- Suggest an alternative approach that preserves the documented benefits
- Remove temporary debug instrumentation (e.g. NDJSON ingest / `#region agent log` blocks) only after the user has confirmed the fix in the UI, unless they explicitly ask to clean up earlier.
- Continual-learning transcript processing for this project uses an index file under the main checkout: `01-projects/parle/.cursor/hooks/state/continual-learning-index.json`.
- `AGENTS.md` may be edited from a Cursor worktree (e.g. `worktrees/parle/<branch>/AGENTS.md`), so hook state and agent memory paths are not always the same directory.
- 2025-01-XX: Initial documentation of TTS/history pattern and successive message merging
- 2026-03-11: Added deterministic objection tracking pattern (TEF Ad mode) — superseded 2026-04-07
- 2026-04-07: Replaced objection state machine with phase-based per-turn context injection; AI never expresses conviction; session ends by 10-min timer; added TEF criteria scorecard to post-exercise review
- 2026-03-14: Added TEF Ad Questioning mode patterns (isTefQuestioning schema selection, isRepeat flag, no per-turn context injection, first-message skip); added persuasion first-message skip note; updated credentials table
- 2026-04-04: Added deferred audio URL revocation pattern and `generateTefReview` null-on-abort convention (TEF post-exercise review feature)
- See git history for detailed implementation timeline