
🤖 AI Auto-Reply

Let the Intelligent Conversational Engine (ICE) handle routine conversations automatically while your team focuses on high-value interactions. The AI reads intent, builds context from memory, and crafts natural replies in your brand voice.


How It Works

Every incoming message passes through a 3-step LLM pipeline:

Step 1 — Intent Classification

The AI analyzes the message to detect:

  • Primary intent — question, complaint, praise, purchase intent, refund request, general inquiry, etc.
  • Sentiment — positive, neutral, or negative
  • Urgency — LOW, MEDIUM, HIGH, or CRITICAL
  • Confidence score — a 0–1 value indicating how sure the AI is about its classification
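For illustration, a classification result from Step 1 might look like the following sketch (the field names and values here are assumptions, not the actual API schema):

```python
# Hypothetical shape of an intent-classification result (illustrative only).
classification = {
    "intent": "refund_request",  # primary intent label
    "sentiment": "negative",     # positive | neutral | negative
    "urgency": "HIGH",           # LOW | MEDIUM | HIGH | CRITICAL
    "confidence": 0.92,          # 0-1 score for the classification
}

# A reply is only sent autonomously when confidence clears the configured
# threshold (0.7 by default; see maxConfidenceForAuto below).
auto_send = classification["confidence"] >= 0.7
```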

Step 2 — Context Building

The engine gathers everything relevant before composing a reply:

  • Conversation memory — prior messages in this thread
  • Customer profile — data from the Audience Graph (engagement score, segment, purchase history)
  • Knowledge base — your configured FAQ entries and product information
  • Decision log — a JSON record of every reasoning step (fully auditable)

Step 3 — Response Generation

Using your configured brand voice, the AI generates a reply that:

  • Matches your tone, style, and terminology
  • Addresses the detected intent directly
  • Incorporates relevant product/order information
  • Respects the configured replyStyle for the thread
Behind the Scenes

Every auto-generated reply stores a contextSnapshot (the data the AI considered) and a decisionLog (the reasoning chain). You can inspect both from the thread detail view — nothing is a black box.
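As a sketch of what that audit trail could contain, a decisionLog might record one entry per pipeline step (the exact schema is not documented here; this shape is assumed):

```python
import json

# Hypothetical decisionLog: one record per reasoning step (illustrative shape).
decision_log = [
    {"step": "intent_classification",
     "result": {"intent": "question", "confidence": 0.91}},
    {"step": "context_building",
     "result": {"memories_used": 2, "faq_matches": 1}},
    {"step": "response_generation",
     "result": {"brandVoiceId": "default", "held_for_review": False}},
]

# The log is stored as JSON, so it round-trips cleanly for auditing.
restored = json.loads(json.dumps(decision_log))
print(restored[0]["step"])  # → intent_classification
```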


Bot Configuration

Navigate to Settings > Conversations > Auto-Reply or use the API (GET/PUT /ice/bot-config) to control every aspect of the bot:

| Setting | Description | Default |
| --- | --- | --- |
| isEnabled | Master on/off switch for the bot | false |
| autoReplyComments | Auto-reply to public comments | true |
| autoReplyDMs | Auto-reply to direct messages | true |
| commentToDmEnabled | Move public comment conversations to DM for privacy | false |
| maxAutoRepliesPerThread | Cap on consecutive bot replies per thread (prevents loops) | 3 |
| replyDelaySeconds | Artificial delay before sending (feels more human) | 0 |
| workingHoursOnly | Only auto-reply during business hours | false |
| workingHoursStart / End | Business hours window (e.g., 09:00 – 18:00) | — |
| workingTimezone | Timezone for working hours (e.g., Africa/Cairo) | — |
| brandVoiceId | Which brand voice profile to use for replies | — |
| maxConfidenceForAuto | Minimum confidence threshold — below this, the reply is held for review | 0.7 |
| blockedTopics | List of topics the bot should never respond to (e.g., legal, medical) | [] |
| escalateKeywords | Keywords that trigger immediate escalation (e.g., "lawyer", "refund") | [] |
| knowledgeBase | JSON knowledge base the AI references for answers | {} |
| faqEntries | Structured FAQ entries for common questions | [] |
| abTestingEnabled | Enable A/B testing of reply variants (see Experiments) | false |
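Conceptually, several of these settings combine into a gate that decides whether a drafted reply is sent, held for review, or escalated. A minimal sketch, assuming this simplified decision order (the function and draft fields are illustrative, not the engine's actual implementation, which also checks working hours and other settings):

```python
def gate_reply(config: dict, draft: dict, bot_replies_in_thread: int) -> str:
    """Decide what happens to a drafted reply: 'send', 'review', or 'escalate'.

    Illustrative only -- field names mirror the config table above.
    """
    text = draft["text"].lower()
    if any(kw.lower() in text for kw in config.get("escalateKeywords", [])):
        return "escalate"  # keyword match triggers immediate escalation
    if draft.get("topic") in config.get("blockedTopics", []):
        return "review"    # bot never answers blocked topics on its own
    if bot_replies_in_thread >= config.get("maxAutoRepliesPerThread", 3):
        return "review"    # loop-prevention cap reached
    if draft["confidence"] < config.get("maxConfidenceForAuto", 0.7):
        return "review"    # below the confidence threshold
    return "send"

config = {"escalateKeywords": ["lawyer"], "blockedTopics": ["legal"],
          "maxAutoRepliesPerThread": 3, "maxConfidenceForAuto": 0.7}
print(gate_reply(config, {"text": "Where is my order?", "confidence": 0.9}, 0))  # → send
print(gate_reply(config, {"text": "I want a refund", "confidence": 0.5}, 0))     # → review
```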

Message Sources

Every message is tagged with its source so you always know who (or what) sent it:

| Source | Meaning |
| --- | --- |
| CUSTOMER | Inbound message from the customer |
| BOT_AUTO | Fully automated reply — sent without human review |
| BOT_SUGGESTED | AI draft that was reviewed/edited by an agent before sending |
| AGENT_MANUAL | Written entirely by a human team member |
| SYSTEM | System-generated message (e.g., auto-close notification) |
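The source tag makes it easy to filter message lists, for example to audit only unreviewed bot output. The enum below simply mirrors the table; it is not an official SDK type:

```python
from enum import Enum

class MessageSource(Enum):
    CUSTOMER = "CUSTOMER"
    BOT_AUTO = "BOT_AUTO"
    BOT_SUGGESTED = "BOT_SUGGESTED"
    AGENT_MANUAL = "AGENT_MANUAL"
    SYSTEM = "SYSTEM"

messages = [
    {"source": "BOT_AUTO", "text": "Your order shipped yesterday."},
    {"source": "AGENT_MANUAL", "text": "Let me check that for you."},
    {"source": "BOT_AUTO", "text": "Thanks for reaching out!"},
]

# Audit only fully automated replies (sent without human review).
unreviewed = [m for m in messages if m["source"] == MessageSource.BOT_AUTO.value]
print(len(unreviewed))  # → 2
```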

Memory System

The AI remembers past interactions using ConversationMemory, a structured memory store with three scopes:

| Scope | What It Stores | Example |
| --- | --- | --- |
| Thread-level | Current conversation context | "Customer asked about sizing for product X" |
| Customer-level | Cross-thread history per customer (senderPlatformId) | "Prefers email communication, has ordered 3 times" |
| Audience-level | Shared knowledge per audience node (audienceNodeId) | "This customer is in the VIP segment" |

Each memory entry includes:

  • memoryType and key — structured categorization
  • value — JSON payload with the actual data
  • confidence — how reliable the memory is (0–1)
  • hitCount — how often this memory has been used in replies
  • expiresAt — optional TTL for time-sensitive information
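A memory entry and its TTL check could be sketched like this (the field names follow the list above; the lookup logic itself is an assumption for illustration):

```python
from datetime import datetime, timedelta, timezone

def is_usable(entry: dict, now: datetime) -> bool:
    """A memory entry is usable if its optional expiresAt TTL has not passed."""
    expires_at = entry.get("expiresAt")
    return expires_at is None or now < expires_at

now = datetime.now(timezone.utc)
entry = {
    "memoryType": "preference",
    "key": "preferred_channel",
    "value": {"channel": "email"},          # JSON payload with the actual data
    "confidence": 0.85,                     # how reliable the memory is (0-1)
    "hitCount": 3,                          # times used in replies so far
    "expiresAt": now + timedelta(days=30),  # optional TTL
}

if is_usable(entry, now):
    entry["hitCount"] += 1                  # record that the memory was used
print(entry["hitCount"])  # → 4
```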

A/B Testing

When abTestingEnabled is turned on, ICE can run ReplyExperiments to optimize response effectiveness:

| Field | Description |
| --- | --- |
| variantA / variantB | Two different reply strategies (JSON config) |
| variantACount / variantBCount | Number of times each variant was served |
| variantAConversions / variantBConversions | Conversion events per variant |
| winnerId | The statistically winning variant (set automatically) |

Manage experiments via GET/POST /ice/experiments.
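Variant serving and winner evaluation can be sketched with the fields above. The 50/50 split and the naive rate comparison here are assumptions; the platform's actual statistical test for winnerId may differ:

```python
import random

def serve_variant(experiment: dict) -> str:
    """Pick a variant for the next reply (assumed 50/50 split) and count the serve."""
    key = random.choice(["A", "B"])
    experiment[f"variant{key}Count"] += 1
    return key

def conversion_rate(experiment: dict, key: str) -> float:
    served = experiment[f"variant{key}Count"]
    return experiment[f"variant{key}Conversions"] / served if served else 0.0

exp = {"variantACount": 120, "variantAConversions": 30,   # 25% conversion
       "variantBCount": 118, "variantBConversions": 41}   # ~35% conversion

winner = max(["A", "B"], key=lambda k: conversion_rate(exp, k))
print(winner)  # → B
```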


Monitoring & Review

Review all AI activity in Conversations > Auto-Replies:

  • Pending review — Replies where confidence was below maxConfidenceForAuto, waiting for human approval
  • Sent auto-replies — Successfully sent bot messages with full decision logs
  • Flagged responses — Replies marked as incorrect by your team for AI improvement
  • Performance stats — Response time, resolution rate, customer satisfaction via GET /ice/stats
tip

Use the BOT_SUGGESTED workflow for sensitive topics: the AI drafts a reply, but a human reviews and sends it. Set this up by raising the confidence threshold (maxConfidenceForAuto) so more drafts are held for review, or by adding topics to blockedTopics.