# 🧪 A/B Testing
Run scientifically rigorous experiments to discover which content variations drive the best results. Stop guessing: let the data decide.
## How A/B Tests Work
An A/B test publishes two post variations to comparable audience segments and measures which one performs better on your chosen metric. After the test period ends, an automated evaluation job determines the winner.
The `ABTest` model tracks each experiment with the following structure:

| Field | Description |
|---|---|
| `name` | Human-readable test name |
| `status` | `RUNNING`, `COMPLETED`, or `CANCELLED` |
| `variantAPostId` | The first post variation |
| `variantBPostId` | The second post variation |
| `platform` | Platform where the test runs |
| `socialAccountId` | The account used for publishing |
| `winningMetric` | The metric used to judge (e.g., engagement rate, reach) |
| `winnerId` | The post ID of the winning variant |
| `endsAt` | Scheduled end time for the test |
| `evaluatedAt` | Timestamp when results were evaluated |
When the `endsAt` time is reached, the `ab-test-eval` background queue picks up the test, compares variant metrics, and sets the `winnerId` with statistical confidence.
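The evaluation step can be sketched as follows. This is an illustrative assumption about how a winner might be picked, not the product's actual implementation; the field names mirror the `ABTest` model above, and a standard two-proportion z-test stands in for "statistical confidence":

```python
# Hypothetical sketch of the winner-evaluation logic run once endsAt passes.
from math import sqrt, erf

def two_proportion_z(success_a, total_a, success_b, total_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = success_a / total_a, success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - CDF(|z|))
    return z, p_value

def evaluate(test, metrics_a, metrics_b, alpha=0.05):
    """Set winnerId only when the difference is statistically significant."""
    z, p = two_proportion_z(metrics_a["engaged"], metrics_a["reached"],
                            metrics_b["engaged"], metrics_b["reached"])
    if p < alpha:
        test["winnerId"] = test["variantAPostId"] if z > 0 else test["variantBPostId"]
    test["status"] = "COMPLETED"
    return test

# Example: variant A engages 320 of 2000 reached users, variant B 240 of 2000.
test = {"variantAPostId": "post-a", "variantBPostId": "post-b", "winnerId": None}
result = evaluate(test, {"engaged": 320, "reached": 2000},
                        {"engaged": 240, "reached": 2000})
# result["winnerId"] == "post-a"
```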
## Creating an A/B Test
1. Navigate to **Analytics > A/B Tests**
2. Click **New Test** (or use `POST /analytics/ab-tests`)
3. Configure your experiment:
| Setting | Description |
|---|---|
| Test Name | A descriptive name (e.g., "CTA comparison: Shop Now vs. Learn More") |
| Variant A | First post variation |
| Variant B | Second post variation |
| Platform | Which platform to run the test on |
| Winning Metric | The metric that determines the winner (engagement rate, reach, clicks, revenue) |
| Duration | How long the test should run before evaluation |
4. Launch the test. Both variants are published simultaneously.
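A request body for `POST /analytics/ab-tests` might look like the sketch below. The field names are assumed to match the `ABTest` model described earlier, and the IDs are hypothetical placeholders; check the actual API schema before relying on them:

```python
# Hypothetical request payload for creating an A/B test.
import json
from datetime import datetime, timedelta, timezone

payload = {
    "name": "CTA comparison: Shop Now vs. Learn More",
    "variantAPostId": "post_123",      # hypothetical post IDs
    "variantBPostId": "post_456",
    "platform": "instagram",
    "socialAccountId": "acct_789",     # hypothetical account ID
    "winningMetric": "engagement_rate",
    # Duration expressed as an explicit end time, 48 hours out:
    "endsAt": (datetime.now(timezone.utc) + timedelta(hours=48)).isoformat(),
}
body = json.dumps(payload)  # send as the JSON request body
```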
## Monitoring Active Tests
View all tests and their statuses at `GET /analytics/ab-tests` or from the **A/B Tests** tab in the UI. Each test card shows:
- Current status badge (`RUNNING`/`COMPLETED`/`CANCELLED`)
- Real-time metric comparison between variants
- Time remaining until evaluation
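Client-side, the list response can be filtered down to active tests and the "time remaining" figure derived from `endsAt`. The sample payload below is illustrative, not real API output:

```python
# Sketch: filter a GET /analytics/ab-tests response to RUNNING tests
# and compute hours remaining until evaluation.
from datetime import datetime, timezone

tests = [  # illustrative sample response
    {"name": "CTA test", "status": "RUNNING",
     "endsAt": "2099-01-01T00:00:00+00:00"},
    {"name": "Image test", "status": "COMPLETED",
     "endsAt": "2024-01-01T00:00:00+00:00"},
]

running = [t for t in tests if t["status"] == "RUNNING"]
for t in running:
    remaining = datetime.fromisoformat(t["endsAt"]) - datetime.now(timezone.utc)
    t["hoursRemaining"] = round(remaining.total_seconds() / 3600, 1)
```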
## Reviewing Results
Once a test completes, open it to see the full results (`GET /analytics/ab-tests/:id`):
- **Head-to-head comparison**: side-by-side metrics for both variants
- **Winner declaration**: which variant won and by what margin
- **Statistical significance**: confidence level of the result
- **AI recommendation**: why the winner performed better and how to apply the learning
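A "won by what margin" figure is typically the winner's relative lift over the loser. As a sketch with illustrative numbers (the product may compute margin differently):

```python
# Relative improvement of the winning variant over the losing one.
def lift(winner_value, loser_value):
    """Winner's relative lift over the loser, as a percentage."""
    return round((winner_value - loser_value) / loser_value * 100, 1)

# Example: variant A's 16% engagement rate vs. variant B's 12%.
margin = lift(winner_value=0.16, loser_value=0.12)  # 33.3 (% lift)
```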
## Cancelling a Test
If you need to stop a test early:
```
POST /analytics/ab-tests/:id/cancel
```
This sets the status to `CANCELLED`. Partial data is retained, but no winner is declared.
Cancelling a test early means the results may not be statistically significant. Only cancel if there's a compelling reason (e.g., a variant has an error).
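The state transition implied above can be sketched as follows; this is an assumed model (only a `RUNNING` test can be cancelled, metrics survive, no winner is set), not the service's actual code:

```python
# Hypothetical cancel transition for an A/B test record.
def cancel(test):
    """Move a RUNNING test to CANCELLED; keep partial data, declare no winner."""
    if test["status"] != "RUNNING":
        raise ValueError(f"cannot cancel a {test['status']} test")
    test["status"] = "CANCELLED"
    test["winnerId"] = None  # no winner is declared for cancelled tests
    return test              # collected metrics are retained as-is

cancelled = cancel({"status": "RUNNING", "winnerId": None,
                    "metrics": {"reach": 500}})
```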
## Best Practices
| Practice | Why |
|---|---|
| Test one variable at a time | Changing caption AND image makes it impossible to know which drove the difference |
| Run for at least 48 hours | Shorter tests may not capture enough data for statistical significance |
| Use consistent audiences | Both variants should reach similar audience segments |
| Document learnings | Apply winning patterns to your Brand Voice and content strategy |
| Test regularly | Audience preferences shift; what worked last month may not work today |
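To gauge whether 48 hours is long enough for your account, a common planning heuristic is the sample-size approximation n ≈ 16·p·(1−p)/δ² per variant (roughly 80% power at 5% significance for a rate-based metric). This is a rough rule of thumb, not the platform's evaluation logic:

```python
# Rough per-variant sample size needed to detect a given relative lift.
import math

def min_sample_size(baseline_rate, min_detectable_lift):
    """Approximate reach needed per variant (80% power, 5% significance)."""
    delta = baseline_rate * min_detectable_lift  # absolute difference to detect
    return round(16 * baseline_rate * (1 - baseline_rate) / delta ** 2)

# Example: 10% baseline engagement rate, detecting a 20% relative lift.
n = min_sample_size(baseline_rate=0.10, min_detectable_lift=0.20)  # 3600
```

If your posts typically reach fewer people than this in 48 hours, extend the test duration rather than ending it early.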
## API Reference
| Endpoint | Method | Description |
|---|---|---|
| `/analytics/ab-tests` | POST | Create a new A/B test |
| `/analytics/ab-tests` | GET | List all A/B tests |
| `/analytics/ab-tests/:id` | GET | Get test details and results |
| `/analytics/ab-tests/:id/cancel` | POST | Cancel a running test |
## Related Pages
- Post Analytics: deep-dive into variant performance
- Predictions: predict which variant will win before testing
- AI Advisor: get recommendations based on test learnings
- Auto-Boost & Ads: promote winning variants automatically