
🧪 A/B Testing

Run scientifically rigorous experiments to discover which content variations drive the best results. Stop guessing; let the data decide.


How A/B Tests Work

An A/B test publishes two post variations to comparable audience segments and measures which one performs better on your chosen metric. After the test period ends, an automated evaluation job determines the winner.

Behind the Scenes

The ABTest model tracks each experiment with the following structure:

| Field | Description |
| --- | --- |
| name | Human-readable test name |
| status | RUNNING, COMPLETED, or CANCELLED |
| variantAPostId | The first post variation |
| variantBPostId | The second post variation |
| platform | Platform where the test runs |
| socialAccountId | The account used for publishing |
| winningMetric | The metric used to judge (e.g., engagement rate, reach) |
| winnerId | The post ID of the winning variant |
| endsAt | Scheduled end time for the test |
| evaluatedAt | Timestamp when results were evaluated |

When the endsAt time is reached, the ab-test-eval background queue picks up the test, compares variant metrics, and sets the winnerId with statistical confidence.
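
The document doesn't show the evaluation job's exact algorithm, but its behavior can be sketched with a standard two-proportion z-test on engagement rate. Everything below (the VariantStats shape, the metric, the significance threshold) is an illustrative assumption, not the product's actual code:

```python
from dataclasses import dataclass
from math import sqrt
from statistics import NormalDist
from typing import Optional

@dataclass
class VariantStats:
    """Aggregated metrics for one variant (illustrative shape)."""
    impressions: int
    engagements: int

    @property
    def rate(self) -> float:
        return self.engagements / self.impressions

def evaluate(a: VariantStats, b: VariantStats, alpha: float = 0.05) -> Optional[str]:
    """Pick a winner via a two-tailed two-proportion z-test.

    Returns 'A' or 'B' if the engagement-rate difference is
    statistically significant at the given alpha, else None.
    """
    pooled = (a.engagements + b.engagements) / (a.impressions + b.impressions)
    se = sqrt(pooled * (1 - pooled) * (1 / a.impressions + 1 / b.impressions))
    if se == 0:
        return None
    z = (a.rate - b.rate) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed
    if p_value >= alpha:
        return None  # no winner with statistical confidence
    return "A" if a.rate > b.rate else "B"
```

With a clear gap (7.0% vs. 5.5% on 10,000 impressions each) this declares A the winner; with a near-tie it declares none, which is why the docs stress adequate test duration.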


Creating an A/B Test

  1. Navigate to Analytics > A/B Tests
  2. Click New Test (or use POST /analytics/ab-tests)
  3. Configure your experiment:

| Setting | Description |
| --- | --- |
| Test Name | A descriptive name (e.g., "CTA comparison: Shop Now vs. Learn More") |
| Variant A | First post variation |
| Variant B | Second post variation |
| Platform | Which platform to run the test on |
| Winning Metric | The metric that determines the winner (engagement rate, reach, clicks, revenue) |
| Duration | How long the test should run before evaluation |

  4. Launch the test; both variants are published simultaneously
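
For the API route, the request body can be assembled from the same settings. A minimal sketch in Python; the field names mirror the ABTest model above, but the exact wire names and accepted platform/metric values are assumptions:

```python
from datetime import datetime, timedelta, timezone

def build_ab_test_payload(name: str, variant_a_post_id: str, variant_b_post_id: str,
                          platform: str, winning_metric: str,
                          duration_hours: int = 48) -> dict:
    """Build a plausible JSON body for POST /analytics/ab-tests.

    The duration is expressed as an endsAt timestamp, matching the
    model's endsAt field; actual API semantics may differ.
    """
    ends_at = datetime.now(timezone.utc) + timedelta(hours=duration_hours)
    return {
        "name": name,
        "variantAPostId": variant_a_post_id,
        "variantBPostId": variant_b_post_id,
        "platform": platform,
        "winningMetric": winning_metric,
        "endsAt": ends_at.isoformat(),
    }

payload = build_ab_test_payload(
    "CTA comparison: Shop Now vs. Learn More",
    "post_123", "post_456",                      # hypothetical post IDs
    platform="instagram",
    winning_metric="engagement_rate",
)
```

The 48-hour default matches the minimum duration recommended under Best Practices below.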

Monitoring Active Tests

View all tests and their statuses at GET /analytics/ab-tests or from the A/B Tests tab in the UI. Each test card shows:

  • Current status badge (RUNNING / COMPLETED / CANCELLED)
  • Real-time metric comparison between variants
  • Time remaining until evaluation
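
The "time remaining" indicator can be derived client-side from the test's endsAt field. A small illustrative helper (not part of the product API):

```python
from datetime import datetime, timezone

def time_remaining(ends_at_iso: str, now: datetime = None) -> str:
    """Human-readable time left until evaluation, given endsAt as ISO 8601."""
    now = now or datetime.now(timezone.utc)
    delta = datetime.fromisoformat(ends_at_iso) - now
    if delta.total_seconds() <= 0:
        return "awaiting evaluation"
    hours, rem = divmod(int(delta.total_seconds()), 3600)
    return f"{hours}h {rem // 60}m"
```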

Reviewing Results

Once a test completes, open it to see the full results (GET /analytics/ab-tests/:id):

  • Head-to-head comparison: side-by-side metrics for both variants
  • Winner declaration: which variant won and by what margin
  • Statistical significance: confidence level of the result
  • AI recommendation: why the winner performed better and how to apply the learning
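
The winning margin and significance figures can be reproduced from the raw variant metrics. An illustrative calculation using a normal-approximation confidence interval on the rate difference; this is a textbook method, not necessarily the evaluator's exact one:

```python
from math import sqrt
from statistics import NormalDist

def lift_with_ci(winner_rate: float, loser_rate: float,
                 winner_impressions: int, loser_impressions: int,
                 confidence: float = 0.95):
    """Relative lift (%) of the winner over the loser, plus a
    confidence interval on the absolute rate difference.

    If the interval excludes zero, the margin is significant at
    the chosen confidence level.
    """
    diff = winner_rate - loser_rate
    se = sqrt(winner_rate * (1 - winner_rate) / winner_impressions
              + loser_rate * (1 - loser_rate) / loser_impressions)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    lift_pct = diff / loser_rate * 100
    return lift_pct, (diff - z * se, diff + z * se)
```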

Cancelling a Test

If you need to stop a test early:

POST /analytics/ab-tests/:id/cancel

This sets the status to CANCELLED. Partial data is retained but no winner is declared.
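
A sketch of constructing the cancel call; the base URL is a placeholder, and an HTTP client would simply POST to this URL with no body:

```python
def cancel_url(base_url: str, test_id: str) -> str:
    """Build the cancel endpoint URL for a given test id."""
    return f"{base_url.rstrip('/')}/analytics/ab-tests/{test_id}/cancel"

# e.g. requests.post(cancel_url("https://api.example.com/", "abc123"))
# (api.example.com and abc123 are hypothetical)
```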

Warning: Cancelling a test early means the results may not be statistically significant. Only cancel if there's a compelling reason (e.g., a variant has an error).


Best Practices

| Practice | Why |
| --- | --- |
| Test one variable at a time | Changing caption AND image makes it impossible to know which drove the difference |
| Run for at least 48 hours | Shorter tests may not capture enough data for statistical significance |
| Use consistent audiences | Both variants should reach similar audience segments |
| Document learnings | Apply winning patterns to your Brand Voice and content strategy |
| Test regularly | Audience preferences shift; what worked last month may not work today |
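
The 48-hour guideline exists because significance requires sample size. A standard two-proportion power calculation (a textbook formula, not product-specific) estimates how many impressions each variant needs to detect a given lift:

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(baseline_rate: float, min_detectable_lift: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate impressions per variant to detect a relative lift
    over the baseline rate with a two-sided z-test.
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pbar * (1 - pbar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)
```

For a 5% baseline engagement rate and a 20% relative lift, this lands around 8,000+ impressions per variant, which is why short tests on small audiences rarely produce a confident winner.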

API Reference

| Endpoint | Method | Description |
| --- | --- | --- |
| /analytics/ab-tests | POST | Create a new A/B test |
| /analytics/ab-tests | GET | List all A/B tests |
| /analytics/ab-tests/:id | GET | Get test details and results |
| /analytics/ab-tests/:id/cancel | POST | Cancel a running test |