Prompt Templates

All AI prompts in UniPulse follow structured template patterns that ensure consistent, high-quality output from Gemini 2.5 Flash. Prompts are configured via the PROMPT_CONFIGS index and built dynamically with injected context.
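The shape of the `PROMPT_CONFIGS` index is not shown in this section; a minimal sketch of what such an index could look like (the field names and the non-caption keys are assumptions, not confirmed UniPulse code):

```typescript
// Hypothetical sketch of a PROMPT_CONFIGS index: each entry pairs a
// feature key with the model parameters used for that Gemini call.
interface PromptConfig {
  temperature: number; // sampling temperature for this task type
  maxTokens: number;   // upper bound on the generated response length
}

const PROMPT_CONFIGS: Record<string, PromptConfig> = {
  caption_generate: { temperature: 0.8, maxTokens: 1024 },
  intent_classify: { temperature: 0.2, maxTokens: 256 },
  conversation_reply: { temperature: 0.6, maxTokens: 512 },
};
```

Centralizing these values keeps each feature's temperature and token budget in one place instead of hard-coding them at every call site.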


Architecture


Prompt Structure Pattern

Every AI call uses a system prompt (defines the AI's role) and a user prompt (provides the specific task):

```typescript
const response = await callGemini({
  systemPrompt: SYSTEM_PROMPTS.captionGenerator,
  userPrompt: buildCaptionPrompt(input),
  temperature: PROMPT_CONFIGS.caption_generate.temperature,
  maxTokens: PROMPT_CONFIGS.caption_generate.maxTokens,
});
```

System Prompts

Each AI feature has a dedicated system prompt that defines the AI's role and constraints:

| Feature | System Prompt Role |
| --- | --- |
| Caption generation | Social media content expert for {platform} |
| Caption rewriting | Content editor specializing in social media optimization |
| Hashtag generation | Hashtag strategist for {platform} algorithm optimization |
| CTA generation | Marketing copywriter specializing in calls-to-action |
| Translation | Professional translator for social media content |
| Conversation reply | Customer support agent for {brand} |
| Intent classification | Message classifier for social media conversations |
| Content classification | Content categorizer for social media posts |
| A/B test evaluation | Data analyst specializing in social media experiments |
| Performance advisor | Social media analytics consultant |
| Gap analysis | Competitive intelligence analyst |
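These role definitions live in the `SYSTEM_PROMPTS` index referenced in the call pattern above. A sketch of that index (only the `captionGenerator` key appears in this section; the other keys and exact strings are assumptions):

```typescript
// Hypothetical SYSTEM_PROMPTS index mapping feature names to role prompts.
// Placeholders like {platform} are substituted before the call is made.
const SYSTEM_PROMPTS = {
  captionGenerator: "You are a social media content expert for {platform}.",
  captionRewriter:
    "You are a content editor specializing in social media optimization.",
  intentClassifier:
    "You are a message classifier for social media conversations.",
} as const;
```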

Example: Caption System Prompt

```typescript
const CAPTION_SYSTEM_PROMPT = `You are a social media content expert.
You generate engaging captions optimized for {platform}.
Follow these guidelines:
- Brand voice: {brandVoice}
- Tone: {tone}
- Language: {language}
- Target audience: {audience}
- Keep within platform character limits
- Use emojis appropriately for the platform
- Never include hashtags in the caption (those are generated separately)`;
```
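The `{platform}`, `{brandVoice}`, and similar tokens must be substituted with real values before the prompt is sent. One way to do that (the helper name `fillTemplate` is an assumption for illustration, not part of the documented codebase):

```typescript
// Hypothetical helper: replace {key} tokens in a prompt template with
// values from a context object, leaving unresolved tokens intact so
// missing context is visible rather than silently dropped.
function fillTemplate(
  template: string,
  context: Record<string, string>
): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in context ? context[key] : match
  );
}
```

Usage would look like `fillTemplate(CAPTION_SYSTEM_PROMPT, { platform: 'instagram', tone: 'casual' })`.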

Context Injection

Dynamic context is injected into prompts based on available data:

```typescript
function buildCaptionPrompt(input: CaptionGenerateInput): string {
  const parts = [
    `Topic: ${input.topic}`,
    `Platform: ${input.platform}`,
    `Number of options: ${input.count || 3}`,
  ];

  if (input.brandVoice) {
    parts.push(`Brand voice: ${input.brandVoice.description}`);
    parts.push(`Brand voice samples:\n${input.brandVoice.samples.join('\n')}`);
  }

  if (input.tone) {
    parts.push(`Tone: ${input.tone}`);
  }

  if (input.language && input.language !== 'en') {
    parts.push(`Language: Write in ${input.language}`);
  }

  parts.push(`Generate ${input.count || 3} caption options.`);

  return parts.join('\n');
}
```

Response Formatting

All prompts request structured JSON output for reliable programmatic parsing:

```typescript
const prompt = `
${taskDescription}

Respond in the following JSON format:
{
  "captions": [
    {
      "text": "The caption text",
      "tone": "The tone used",
      "characterCount": 150
    }
  ]
}

Important: Return ONLY valid JSON. No markdown, no explanations.
`;
```

Response Parsing

```typescript
const response = await callGemini({ systemPrompt, userPrompt, temperature });

// Parse JSON from response
const parsed = JSON.parse(response.text);

// Validate against expected schema
const validated = captionResponseSchema.parse(parsed);
```
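Models sometimes wrap their output in markdown fences despite the "ONLY valid JSON" instruction, in which case a bare `JSON.parse` throws. A defensive parse step that strips optional fences first (a sketch of a common mitigation; the helper name and behavior are assumptions, not documented UniPulse code):

```typescript
// Strip optional ```json fences before parsing, since models occasionally
// ignore the "no markdown" instruction and wrap the JSON anyway.
function parseModelJson<T>(raw: string): T {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "");
  return JSON.parse(cleaned) as T;
}
```

The result would still be passed through the schema validator (e.g. `captionResponseSchema.parse`) so malformed fields fail loudly rather than propagating.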

Prompt Templates by Feature

Caption Generation

| Context Injected | Source |
| --- | --- |
| Topic/description | User input |
| Platform | User selection |
| Brand voice | BrandVoice model |
| Language | User setting or detection |
| Trending topics | trend-scanner.service (optional) |
| Top-performing post examples | analytics.service (optional) |

Conversation Reply (ICE)

| Context Injected | Source |
| --- | --- |
| Classified intent | intent-classifier.service |
| Conversation history | ConversationMessage records |
| Customer profile | AudienceNode data |
| 3-tier memory | ConversationMemory (short/medium/long) |
| Product info | EcommerceProduct (if connected) |
| Brand voice | BrandVoice model |
| Escalation rules | EscalationRule (to know boundaries) |
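A reply prompt can assemble these sources into labeled sections, following the same pattern as `buildCaptionPrompt` above (a sketch; the input field names and builder are assumptions, not the actual ICE implementation):

```typescript
// Hypothetical builder for a conversation-reply prompt: each available
// context source becomes a labeled section, optional sources are skipped.
interface ReplyPromptInput {
  intent: string;          // from intent-classifier.service
  history: string[];       // recent ConversationMessage texts
  memorySummary?: string;  // condensed ConversationMemory tiers
  productInfo?: string;    // EcommerceProduct details, if connected
}

function buildReplyPrompt(input: ReplyPromptInput): string {
  const parts = [
    `Classified intent: ${input.intent}`,
    `Conversation history:\n${input.history.join("\n")}`,
  ];
  if (input.memorySummary) {
    parts.push(`Customer memory:\n${input.memorySummary}`);
  }
  if (input.productInfo) {
    parts.push(`Product info:\n${input.productInfo}`);
  }
  parts.push("Write a reply consistent with the brand voice.");
  return parts.join("\n\n");
}
```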

Performance Advisor

| Context Injected | Source |
| --- | --- |
| Recent metrics | PostMetric aggregations |
| Top/bottom posts | Performance-ranked posts |
| Competitor data | CompetitorSnapshot records |
| Industry benchmarks | IndustryBenchmark data |
| Historical trends | Time-series analytics |
Historical trendsTime-series analytics

A/B Test Evaluation

| Context Injected | Source |
| --- | --- |
| Variant A metrics | PostMetric for variant A |
| Variant B metrics | PostMetric for variant B |
| Test duration | ABTest date range |
| Sample size | Impression/reach counts |
| Statistical significance | Calculated p-value |
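The p-value in the last row can be derived from a standard two-proportion z-test on the variants' engagement counts (a generic sketch of that statistic, not the UniPulse implementation):

```typescript
// Two-proportion z-test: compares conversion/engagement rates of A and B.
// Returns the z statistic; |z| > 1.96 corresponds to p < 0.05 (two-tailed).
function twoProportionZ(
  successesA: number, trialsA: number,
  successesB: number, trialsB: number
): number {
  const pA = successesA / trialsA;
  const pB = successesB / trialsB;
  // Pooled proportion under the null hypothesis that A and B perform equally
  const pooled = (successesA + successesB) / (trialsA + trialsB);
  const se = Math.sqrt(
    pooled * (1 - pooled) * (1 / trialsA + 1 / trialsB)
  );
  return (pA - pB) / se;
}
```

Computing the statistic in code and injecting only the result keeps the AI's role to interpreting significance rather than doing arithmetic, which models handle unreliably.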

Prompt Engineering Best Practices

| Practice | Rationale |
| --- | --- |
| Keep system prompts concise and specific | Reduces token usage, improves focus |
| Always include brand voice when available | Maintains brand consistency |
| Request structured JSON output | Enables reliable programmatic parsing |
| Include examples for complex tasks | Few-shot learning improves output quality |
| Set temperature based on task type | Lower (0.2) for classification, higher (0.8) for creative |
| Specify output constraints (length, format) | Prevents overly long or malformed responses |
| Include "Important" instructions at the end | Last instructions carry more weight |
| Test prompts with edge cases | Empty inputs, non-English text, very long text |

Temperature Guide

| Temperature | Tasks | Reasoning |
| --- | --- | --- |
| 0.2 | Intent classification, post classification, sentiment analysis | Need deterministic, consistent results |
| 0.3 | Translation, data analysis, A/B test evaluation, predictions | Need accuracy with slight variation |
| 0.5 | Hashtag generation, trend analysis, gap analysis | Balance between accuracy and creativity |
| 0.6 | Conversation replies, brand voice application | Natural language with consistency |
| 0.7 | Caption rewriting, CTA generation | Creative with guided constraints |
| 0.8 | Caption generation, content calendar, content repurposing | Maximum creativity within guidelines |

Cross-Reference