
Intelligent Conversational Engine (ICE)

The conversation engine uses a 3-step LLM pipeline to generate context-aware replies to incoming social media messages. It supports auto-reply, escalation, human review, and A/B experiments on reply strategies.


Architecture


Pipeline Steps

Step 1: Intent Classification

Service: intent-classifier.service.ts

Analyzes the incoming message to determine:

| Output | Description | Examples |
| --- | --- | --- |
| Intent type | Category of the message | `question`, `complaint`, `praise`, `purchase_intent`, `support_request`, `spam` |
| Sentiment | Emotional tone | `positive`, `negative`, `neutral` |
| Urgency | How time-sensitive the message is | `low`, `medium`, `high`, `critical` |
| Auto-reply decision | Whether to auto-reply or escalate | Boolean based on bot config |
```typescript
const classification = await intentClassifier.classify(message, botConfig);
// { intent: 'purchase_intent', sentiment: 'positive', urgency: 'medium', shouldAutoReply: true }
```

Uses Gemini with temperature 0.2 for deterministic, consistent classification.
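The classification output can be modeled with types like the following. This is a sketch inferred from the example and the table above; the real service types may differ, and the `allowAutoReply` guard is a hypothetical helper, not part of the service:

```typescript
// Hypothetical types mirroring the classify() output shown above; the union
// members come from the Examples column of the table.
type Intent =
  | 'question' | 'complaint' | 'praise'
  | 'purchase_intent' | 'support_request' | 'spam';
type Sentiment = 'positive' | 'negative' | 'neutral';
type Urgency = 'low' | 'medium' | 'high' | 'critical';

interface Classification {
  intent: Intent;
  sentiment: Sentiment;
  urgency: Urgency;
  shouldAutoReply: boolean;
}

// Example guard: never auto-reply to spam, even if the model says otherwise.
function allowAutoReply(c: Classification): boolean {
  return c.shouldAutoReply && c.intent !== 'spam';
}
```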

Step 2: Context Building

Service: context-builder.service.ts

Gathers all relevant context for generating an informed reply:

| Context Source | Data | Model |
| --- | --- | --- |
| Audience profile | Name, platform, engagement score, tags | `AudienceNode` |
| Conversation history | Recent messages in the thread | `ConversationMessage` |
| Memory (3 tiers) | Short-term, medium-term, long-term memories | `ConversationMemory` |
| Product information | If e-commerce connected, relevant products | `EcommerceProduct` |
| Brand voice | Tone, style, vocabulary guidelines | `BrandVoice` |
| Bot configuration | Auto-reply intents, confidence threshold, response style | `BotConfiguration` |
```typescript
const context = await contextBuilder.build(threadId, audienceNodeId, workspaceId);
// { audienceProfile, conversationHistory, memory, products, brandVoice, botConfig }
```

Step 3: Response Generation

Service: conversation-brain.service.ts

Generates the reply using the classified intent and assembled context:

```typescript
const reply = await conversationBrain.generateReply({
  intent: classification,
  context: assembledContext,
  brandVoice: workspace.brandVoice,
  language: thread.language,
});
// { text: "Thank you for your interest! ...", confidence: 0.92 }
```

Uses Gemini with temperature 0.6 for natural but controlled responses.
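The returned confidence feeds the auto-send decision. A minimal sketch of that routing, assuming the threshold comes from `BotConfiguration.confidenceThreshold` (the `routeReply` helper is hypothetical, not part of the service):

```typescript
// Shape of the generateReply() output shown above.
interface GeneratedReply {
  text: string;
  confidence: number;
}

// Hypothetical routing: auto-send only when the reply clears the
// workspace-configured confidence threshold; otherwise hold for a human.
function routeReply(
  reply: GeneratedReply,
  confidenceThreshold: number, // e.g. BotConfiguration.confidenceThreshold
): 'auto_send' | 'human_review' {
  return reply.confidence >= confidenceThreshold ? 'auto_send' : 'human_review';
}
```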


Core Service Functions

conversation-engine.service.ts orchestrates the full pipeline:

| Function | Description |
| --- | --- |
| `processIncomingMessage()` | Main entry point -- runs the full 3-step pipeline |
| `listThreads()` | List conversation threads (filtered, paginated) |
| `getThread()` | Get a single thread with messages |
| `getThreadMessages()` | Get messages for a thread |
| `sendAgentReply()` | Human agent sends a manual reply |
| `resolveThread()` | Mark thread as resolved |
| `reopenThread()` | Reopen a resolved thread |
| `toggleThreadBot()` | Enable/disable bot for a specific thread |
| `getBotConfig()` | Get workspace bot configuration |
| `updateBotConfig()` | Update workspace bot configuration |
| `getInboxStats()` | Get inbox statistics (open, resolved, escalated counts) |
| `suggestReply()` | Generate an AI-suggested reply without auto-sending |
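How `processIncomingMessage()` could chain the three steps is sketched below. The injected function parameters stand in for the real services; every signature here is an assumption, not the actual interface:

```typescript
// Sketch of the 3-step orchestration: classify, build context, generate,
// then decide between auto-reply and escalation.
interface StepClassification { intent: string; shouldAutoReply: boolean }
interface StepReply { text: string; confidence: number }
type Outcome = { action: 'auto_reply'; text: string } | { action: 'escalate' };

async function processIncomingMessage(
  message: string,
  classify: (m: string) => Promise<StepClassification>,
  buildContext: (m: string) => Promise<unknown>,
  generateReply: (c: StepClassification, ctx: unknown) => Promise<StepReply>,
  confidenceThreshold: number,
): Promise<Outcome> {
  const classification = await classify(message);              // Step 1
  if (!classification.shouldAutoReply) {
    return { action: 'escalate' };                             // hand off to a human
  }
  const context = await buildContext(message);                 // Step 2
  const reply = await generateReply(classification, context);  // Step 3
  return reply.confidence >= confidenceThreshold
    ? { action: 'auto_reply', text: reply.text }
    : { action: 'escalate' };
}
```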

API Endpoints

All ICE routes are under /api/v1/ice:

| Endpoint | Method | Function | Min Role |
| --- | --- | --- | --- |
| `/api/v1/ice/threads` | GET | `listThreads()` | EDITOR |
| `/api/v1/ice/threads/:id/messages` | GET | `getThreadMessages()` | EDITOR |
| `/api/v1/ice/reply` | POST | `sendAgentReply()` | EDITOR |
| `/api/v1/ice/ai-suggest` | POST | `suggestReply()` | EDITOR |
| `/api/v1/ice/threads/:id/resolve` | PATCH | `resolveThread()` | EDITOR |
| `/api/v1/ice/threads/:id/reopen` | PATCH | `reopenThread()` | EDITOR |
| `/api/v1/ice/threads/:id/bot-toggle` | PATCH | `toggleThreadBot()` | ADMIN |
| `/api/v1/ice/bot-config` | GET | `getBotConfig()` | ADMIN |
| `/api/v1/ice/bot-config` | PATCH | `updateBotConfig()` | ADMIN |
| `/api/v1/ice/escalation-rules` | CRUD | Escalation rule management | ADMIN |
| `/api/v1/ice/escalations` | GET | List escalation records | EDITOR |
| `/api/v1/ice/experiments` | GET | Reply experiment results | VIEWER |
| `/api/v1/ice/stats` | GET | `getInboxStats()` | VIEWER |
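An illustrative client helper for the thread-lifecycle endpoints above. The paths come from the route table; the helper itself and its return shape are just for demonstration:

```typescript
// Build a request descriptor for the PATCH thread-lifecycle routes
// (/resolve, /reopen, /bot-toggle from the table above).
function iceThreadAction(
  threadId: string,
  action: 'resolve' | 'reopen' | 'bot-toggle',
): { url: string; method: 'PATCH' } {
  return {
    url: `/api/v1/ice/threads/${threadId}/${action}`,
    method: 'PATCH',
  };
}

// Usage (auth header shape is an assumption):
// const req = iceThreadAction('t_123', 'resolve');
// await fetch(req.url, { method: req.method, headers: { Authorization: `Bearer ${token}` } });
```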

Bot Configuration

Each workspace configures its bot via the BotConfiguration model:

| Setting | Description | Default |
| --- | --- | --- |
| `enabled` | Master switch for auto-reply | `false` |
| `autoReplyIntents` | Which intents to auto-reply to | `['question', 'praise']` |
| `confidenceThreshold` | Minimum confidence to auto-send | `0.85` |
| `responseStyle` | Casual, professional, friendly, etc. | `'professional'` |
| `maxAutoRepliesPerThread` | Limit on auto-replies before escalating | `3` |
| `businessHoursOnly` | Only auto-reply during business hours | `false` |
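Taken together, these settings imply an auto-reply gate roughly like the following sketch. Field names come from the table; the evaluation order and the `canAutoReply` helper are assumptions:

```typescript
// Settings from the BotConfiguration table above.
interface BotConfigurationSettings {
  enabled: boolean;
  autoReplyIntents: string[];
  confidenceThreshold: number;
  maxAutoRepliesPerThread: number;
  businessHoursOnly: boolean;
}

// Hypothetical gate: every check must pass before the bot may auto-reply.
function canAutoReply(
  cfg: BotConfigurationSettings,
  intent: string,
  autoRepliesSoFar: number,
  withinBusinessHours: boolean,
): boolean {
  if (!cfg.enabled) return false;                                    // master switch
  if (!cfg.autoReplyIntents.includes(intent)) return false;          // intent allow-list
  if (autoRepliesSoFar >= cfg.maxAutoRepliesPerThread) return false; // per-thread cap
  if (cfg.businessHoursOnly && !withinBusinessHours) return false;   // schedule gate
  return true;
}
```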

Escalation Rules

Defined via EscalationRule model, evaluated during Step 1:

| Condition Type | Example | Action |
| --- | --- | --- |
| Intent match | `intent == 'complaint'` | Escalate to support team |
| Sentiment | `sentiment == 'negative'` | Escalate to manager |
| Urgency | `urgency == 'critical'` | Immediate escalation |
| Keyword match | Message contains "refund" | Escalate to billing |
| Repeated contact | Same person with > 3 unresolved threads | Escalate to senior agent |
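A minimal evaluator for the first four condition types could look like this. The `RuleCondition` shape here is hypothetical and does not reflect the real `EscalationRule` model:

```typescript
// Discriminated union over the condition types in the table above
// (the repeated-contact condition needs thread history, so it is omitted).
type RuleCondition =
  | { kind: 'intent'; value: string }
  | { kind: 'sentiment'; value: string }
  | { kind: 'urgency'; value: string }
  | { kind: 'keyword'; value: string };

interface MessageFacts {
  intent: string;
  sentiment: string;
  urgency: string;
  text: string;
}

// Returns true when the rule's condition holds for the classified message.
function ruleMatches(rule: RuleCondition, m: MessageFacts): boolean {
  switch (rule.kind) {
    case 'intent':    return m.intent === rule.value;
    case 'sentiment': return m.sentiment === rule.value;
    case 'urgency':   return m.urgency === rule.value;
    case 'keyword':   return m.text.toLowerCase().includes(rule.value.toLowerCase());
  }
}
```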

Reply Experiments

The ReplyExperiment model enables A/B testing of reply strategies; results are evaluated once the experiment duration elapses.


Queues

| Queue | Purpose | Trigger |
| --- | --- | --- |
| `ice-process` | Process incoming messages through the 3-step pipeline | Message webhook |
| `ice-escalation` | Route escalated threads to human agents | Escalation rule match |
| `ice-experiment-eval` | Evaluate experiment results after the duration elapses | Timer |
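A sketch of how pipeline events might be routed to these queues. The queue names come from the table above; the event names are hypothetical:

```typescript
// Hypothetical event-to-queue routing for the ICE background workers.
type IceEvent =
  | 'message.received'
  | 'escalation.matched'
  | 'experiment.duration_elapsed';

function queueFor(event: IceEvent): string {
  switch (event) {
    case 'message.received':            return 'ice-process';
    case 'escalation.matched':          return 'ice-escalation';
    case 'experiment.duration_elapsed': return 'ice-experiment-eval';
  }
}
```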

Cross-Reference