# Prompt Templates

All AI prompts in UniPulse follow structured template patterns that ensure consistent, high-quality output from Gemini 2.5 Flash. Prompts are configured via the `PROMPT_CONFIGS` index and built dynamically with injected context.
## Architecture

### Prompt Structure Pattern
Every AI call uses a system prompt (defines the AI's role) and a user prompt (provides the specific task):
```typescript
const response = await callGemini({
  systemPrompt: SYSTEM_PROMPTS.captionGenerator,
  userPrompt: buildCaptionPrompt(input),
  temperature: PROMPT_CONFIGS.caption_generate.temperature,
  maxTokens: PROMPT_CONFIGS.caption_generate.maxTokens,
});
```
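`PROMPT_CONFIGS` itself is defined in the Gemini API module (see Cross-Reference). As a sketch of the pattern only: the index maps a task key to per-task generation settings. The temperatures below follow the Temperature Guide later in this page, but the `maxTokens` values and the `PromptConfig` field set are illustrative assumptions, not the actual UniPulse definitions:

```typescript
// Hypothetical shape of the PROMPT_CONFIGS index. Temperatures match the
// Temperature Guide; maxTokens values are illustrative placeholders.
interface PromptConfig {
  temperature: number; // sampling temperature for this task type
  maxTokens: number;   // upper bound on generated output length
}

const PROMPT_CONFIGS: Record<string, PromptConfig> = {
  caption_generate: { temperature: 0.8, maxTokens: 1024 },
  intent_classify:  { temperature: 0.2, maxTokens: 256 },
};
```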
### System Prompts
Each AI feature has a dedicated system prompt that defines the AI's role and constraints:
| Feature | System Prompt Role |
|---|---|
| Caption generation | Social media content expert for {platform} |
| Caption rewriting | Content editor specializing in social media optimization |
| Hashtag generation | Hashtag strategist for {platform} algorithm optimization |
| CTA generation | Marketing copywriter specializing in calls-to-action |
| Translation | Professional translator for social media content |
| Conversation reply | Customer support agent for {brand} |
| Intent classification | Message classifier for social media conversations |
| Content classification | Content categorizer for social media posts |
| A/B test evaluation | Data analyst specializing in social media experiments |
| Performance advisor | Social media analytics consultant |
| Gap analysis | Competitive intelligence analyst |
#### Example: Caption System Prompt
```typescript
const CAPTION_SYSTEM_PROMPT = `You are a social media content expert.
You generate engaging captions optimized for {platform}.

Follow these guidelines:
- Brand voice: {brandVoice}
- Tone: {tone}
- Language: {language}
- Target audience: {audience}
- Keep within platform character limits
- Use emojis appropriately for the platform
- Never include hashtags in the caption (those are generated separately)`;
```
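The `{placeholder}` tokens are substituted with concrete values before the call. A minimal interpolation helper — hypothetical here, since the real substitution lives wherever the system prompts are assembled — could look like:

```typescript
// Replace {name} tokens in a prompt template with concrete values.
// Unmatched tokens are left intact so missing context stays visible in logs.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in vars ? vars[key] : match
  );
}

const filled = fillTemplate('You generate captions optimized for {platform}.', {
  platform: 'Instagram',
});
// → 'You generate captions optimized for Instagram.'
```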
### Context Injection
Dynamic context is injected into prompts based on available data:
```typescript
function buildCaptionPrompt(input: CaptionGenerateInput): string {
  const parts = [
    `Topic: ${input.topic}`,
    `Platform: ${input.platform}`,
    `Number of options: ${input.count || 3}`,
  ];

  if (input.brandVoice) {
    parts.push(`Brand voice: ${input.brandVoice.description}`);
    parts.push(`Brand voice samples:\n${input.brandVoice.samples.join('\n')}`);
  }

  if (input.tone) {
    parts.push(`Tone: ${input.tone}`);
  }

  if (input.language && input.language !== 'en') {
    parts.push(`Language: Write in ${input.language}`);
  }

  parts.push(`Generate ${input.count || 3} caption options.`);

  return parts.join('\n');
}
```
### Response Formatting
All prompts request structured JSON output for reliable programmatic parsing:
```typescript
const prompt = `
${taskDescription}

Respond in the following JSON format:
{
  "captions": [
    {
      "text": "The caption text",
      "tone": "The tone used",
      "characterCount": 150
    }
  ]
}

Important: Return ONLY valid JSON. No markdown, no explanations.
`;
```
#### Response Parsing
```typescript
const response = await callGemini({ systemPrompt, userPrompt, temperature });

// Parse JSON from response
const parsed = JSON.parse(response.text);

// Validate against expected schema
const validated = captionResponseSchema.parse(parsed);
```
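Models occasionally wrap JSON in markdown fences despite the "no markdown" instruction. A defensive extraction step — sketched here without the actual `captionResponseSchema`, which does the real validation — guards against that:

```typescript
// Strip an optional ```json ... ``` wrapper and parse the remainder.
// Throws if the payload still is not valid JSON.
function extractJson<T = any>(raw: string): T {
  const stripped = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, '')
    .replace(/\s*```$/, '');
  return JSON.parse(stripped) as T;
}
```

The same call then handles both bare JSON and fenced output before the schema check runs.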
## Prompt Templates by Feature

### Caption Generation
| Context Injected | Source |
|---|---|
| Topic/description | User input |
| Platform | User selection |
| Brand voice | BrandVoice model |
| Language | User setting or detection |
| Trending topics | trend-scanner.service (optional) |
| Top-performing post examples | analytics.service (optional) |
### Conversation Reply (ICE)
| Context Injected | Source |
|---|---|
| Classified intent | intent-classifier.service |
| Conversation history | ConversationMessage records |
| Customer profile | AudienceNode data |
| 3-tier memory | ConversationMemory (short/medium/long) |
| Product info | EcommerceProduct (if connected) |
| Brand voice | BrandVoice model |
| Escalation rules | EscalationRule (so replies respect handoff boundaries) |
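The ICE reply prompt stitches these layers together in a fixed order before the `callGemini` call. A simplified sketch of that assembly — the interface and field names below are illustrative, not the actual service types, which return far richer records:

```typescript
// Illustrative context shape; real data comes from the services in the table.
interface ReplyContext {
  intent: string;        // from intent-classifier.service
  history: string[];     // recent ConversationMessage texts, oldest first
  memory?: string;       // summarized 3-tier ConversationMemory
  brandVoice?: string;   // BrandVoice description
}

function buildReplyPrompt(ctx: ReplyContext): string {
  const parts = [`Classified intent: ${ctx.intent}`];
  if (ctx.brandVoice) parts.push(`Brand voice: ${ctx.brandVoice}`);
  if (ctx.memory) parts.push(`Known context:\n${ctx.memory}`);
  parts.push(`Conversation so far:\n${ctx.history.join('\n')}`);
  parts.push('Write the next reply as the brand.');
  return parts.join('\n\n');
}
```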
### Performance Advisor
| Context Injected | Source |
|---|---|
| Recent metrics | PostMetric aggregations |
| Top/bottom posts | Performance-ranked posts |
| Competitor data | CompetitorSnapshot records |
| Industry benchmarks | IndustryBenchmark data |
| Historical trends | Time-series analytics |
### A/B Test Evaluation
| Context Injected | Source |
|---|---|
| Variant A metrics | PostMetric for variant A |
| Variant B metrics | PostMetric for variant B |
| Test duration | ABTest date range |
| Sample size | Impression/reach counts |
| Statistical significance | Calculated p-value |
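The statistical-significance row implies a hypothesis test over the two variants' counts; one standard choice is a two-proportion z-test. A self-contained sketch under that assumption — the production calculation is not shown in this page and may differ:

```typescript
// Two-proportion z-test: do variants A and B differ in engagement rate?
// Returns a two-sided p-value via an erf-based normal CDF approximation.
function twoProportionPValue(
  successesA: number, trialsA: number,
  successesB: number, trialsB: number,
): number {
  const pA = successesA / trialsA;
  const pB = successesB / trialsB;
  const pooled = (successesA + successesB) / (trialsA + trialsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / trialsA + 1 / trialsB));
  const z = Math.abs(pA - pB) / se;
  // Abramowitz–Stegun polynomial approximation of erf.
  const erf = (x: number): number => {
    const t = 1 / (1 + 0.3275911 * x);
    const poly = t * (0.254829592 + t * (-0.284496736 +
      t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
    return 1 - poly * Math.exp(-x * x);
  };
  return 1 - erf(z / Math.SQRT2); // two-sided tail probability
}
```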
## Prompt Engineering Best Practices
| Practice | Rationale |
|---|---|
| Keep system prompts concise and specific | Reduces token usage, improves focus |
| Always include brand voice when available | Maintains brand consistency |
| Request structured JSON output | Enables reliable programmatic parsing |
| Include examples for complex tasks | Few-shot learning improves output quality |
| Set temperature based on task type | Lower (0.2) for classification, higher (0.8) for creative |
| Specify output constraints (length, format) | Prevents overly long or malformed responses |
| Include "Important" instructions at the end | Last instructions carry more weight |
| Test prompts with edge cases | Empty inputs, non-English text, very long text |
### Temperature Guide
| Temperature | Tasks | Reasoning |
|---|---|---|
| 0.2 | Intent classification, post classification, sentiment analysis | Need deterministic, consistent results |
| 0.3 | Translation, data analysis, A/B test evaluation, predictions | Need accuracy with slight variation |
| 0.5 | Hashtag generation, trend analysis, gap analysis | Balance between accuracy and creativity |
| 0.6 | Conversation replies, brand voice application | Natural language with consistency |
| 0.7 | Caption rewriting, CTA generation | Creative with guided constraints |
| 0.8 | Caption generation, content calendar, content repurposing | Maximum creativity within guidelines |
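If these defaults are centralized rather than repeated per config entry, the guide collapses to one mapping. The temperatures below come straight from the table; the task keys are illustrative, not the actual `PROMPT_CONFIGS` keys:

```typescript
// Default temperature per task, taken from the Temperature Guide.
// Task keys here are hypothetical examples, one per tier.
const TASK_TEMPERATURES: Record<string, number> = {
  intent_classify:    0.2, // deterministic classification
  translate:          0.3, // accuracy with slight variation
  hashtag_generate:   0.5, // balance accuracy and creativity
  conversation_reply: 0.6, // natural language with consistency
  caption_rewrite:    0.7, // creative with guided constraints
  caption_generate:   0.8, // maximum creativity within guidelines
};
```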
## Cross-Reference

- Gemini API -- `callGemini()` function and `PROMPT_CONFIGS`
- Conversation Engine -- ICE-specific prompts
- Memory System -- memory context in prompts
- Services -- AI service functions