A/B Testing

A/B testing lets you compare different versions of your email to find out what resonates best with your audience. Instead of guessing which subject line or layout works better, you test both with a small group and send the winner to everyone else.

How A/B testing works

  1. You create a campaign with two or more variants (e.g., different subject lines).
  2. A small percentage of your audience (the test group) is split randomly between the variants.
  3. After an evaluation window, Owlat picks the best-performing variant based on your chosen metric.
  4. The winning variant is automatically sent to the rest of your audience.

This way, the majority of your audience always gets the version that performed best.
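
If it helps to see the mechanics spelled out, here is a minimal sketch of the split-and-pick-winner flow in Python. The function names and data shapes are illustrative only, not how Owlat works internally.

```python
import random

def split_audience(contacts, variant_count, test_share=0.2):
    """Carve off a test group and divide it evenly across variants;
    everyone else waits for the winning variant."""
    shuffled = random.sample(contacts, k=len(contacts))
    test_size = int(len(shuffled) * test_share)
    test_group, remainder = shuffled[:test_size], shuffled[test_size:]
    per_variant = [test_group[i::variant_count] for i in range(variant_count)]
    return per_variant, remainder

def pick_winner(results, metric="open_rate"):
    """Pick the variant with the best value for the chosen metric.
    `results` maps variant name -> dict of metrics."""
    return max(results, key=lambda name: results[name][metric])

# 1,000 contacts, two subject lines, 20% test share:
contacts = [f"user{i}@example.com" for i in range(1000)]
per_variant, remainder = split_audience(contacts, variant_count=2)
print(len(per_variant[0]), len(per_variant[1]), len(remainder))  # 100 100 800

results = {"Subject A": {"open_rate": 0.31}, "Subject B": {"open_rate": 0.27}}
print(pick_winner(results))  # Subject A
```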

Setting up an A/B test

1. Create your campaign

Start a new campaign as usual in Campaigns > All Campaigns and fill in the basics (name, subject, preview text).

2. Add variants

In the campaign setup, enable A/B testing and create your variants:

  • Subject line variants — same email content, different subject lines. This is the most common and easiest test to run.
  • Content variants — different templates or content layouts. Use this when you want to test entirely different approaches.

Test one thing at a time. If you change both the subject line and the content, you won't know which change made the difference. A clear, single hypothesis per test gives you actionable results.

3. Configure the test

  • Test audience share — the percentage of your audience that receives the test variants (e.g., 20%). The remaining 80% gets the winner.
  • Evaluation window — how long to wait before picking a winner (e.g., 4 hours). This gives recipients time to open and click.
  • Winning metric — the metric used to pick the winner, typically open rate for subject line tests or click rate for content tests.
  • Auto-send winner — when enabled, the winning variant is sent automatically to the remaining audience after the evaluation window.
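
If you keep campaign plans in notes or configuration files, these four settings map onto a small config object. The field names below are purely illustrative, not Owlat's actual schema.

```python
# Illustrative only; field names are hypothetical, not Owlat's schema.
ab_test_config = {
    "test_audience_share": 0.20,    # 20% of the audience receives the test variants
    "evaluation_window_hours": 4,   # how long to wait before picking a winner
    "winning_metric": "open_rate",  # or "click_rate" for content tests
    "auto_send_winner": True,       # send the winner to the remaining audience automatically
}
```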

4. Launch the test

Send or schedule your campaign as usual. The test group receives their variants immediately, and the evaluation countdown begins.

Choosing the right settings

Test audience size

  • 10-20% works well for large audiences (10,000+ contacts). You get reliable data while keeping most recipients for the winner.
  • 30-50% is better for smaller audiences. With fewer contacts, you need a larger share of the list to get a meaningful sample.
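
To translate those percentages into absolute numbers, a quick back-of-the-envelope calculation is enough (plain arithmetic, nothing Owlat-specific):

```python
def contacts_per_variant(audience_size, test_share, variant_count=2):
    """How many contacts each variant reaches during the test phase."""
    return int(audience_size * test_share) // variant_count

# Large list: 20,000 contacts at a 10% test share -> 1,000 contacts per variant.
print(contacts_per_variant(20_000, 0.10))
# Small list: 1,500 contacts at a 40% test share -> 300 contacts per variant.
print(contacts_per_variant(1_500, 0.40))
```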

Evaluation window

  • 2-4 hours is usually enough for subject line tests, where most opens happen quickly.
  • 12-24 hours works better for click-based tests, since clicks tend to come in more slowly.
  • For a monthly newsletter, a 4-hour test window is usually enough to see clear differences in open rates.

Setting the evaluation window too short may lead to unreliable results. If only a handful of people have opened the email, the "winner" might just be random noise. Give recipients enough time to engage.
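
For intuition on why small test groups mislead, you can compare two open rates with a standard two-proportion z-test. This is generic statistics shown for illustration, not a feature Owlat exposes:

```python
from math import sqrt

def z_score(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test; as a rule of thumb, |z| >= 1.96 suggests the
    difference is unlikely to be pure chance at the 95% confidence level."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# 12 vs 8 opens out of 50 each: z = 1.0, easily just noise.
print(round(z_score(12, 50, 8, 50), 2))
# 240 vs 160 opens out of 1,000 each: z of about 4.47, a real difference.
print(round(z_score(240, 1000, 160, 1000), 2))
```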

Reviewing results

After the evaluation window closes, check your test results:

  1. Go to Campaigns > A/B Results to see outcomes across all tests.
  2. Open the specific campaign report for detailed variant-by-variant metrics.

For each variant you'll see:

  • Sent count
  • Open rate
  • Click rate
  • Winner indicator — which variant was selected

Best practices

  • Start with subject lines — they're the easiest to test and have the biggest impact on open rates.
  • One variable per test — change only one thing between variants for clear, actionable results.
  • Run tests regularly — what works with your audience evolves over time. Regular testing keeps your email strategy sharp.
  • Document your learnings — keep track of what subject line styles, content approaches, or CTAs perform best with your audience.
  • Don't over-test — not every campaign needs an A/B test. Use them for important sends where optimization matters.

Next steps