
The Ultimate Handbook for Paid Media Testing

Running A/B tests is a necessity for advertisers who want to continuously improve campaign performance. While there are many tests that can be run, the end goal is usually the same: improve performance, achieve efficiency, and/or drive a higher ROAS.

No matter what your goal is, there is always the question of “How do I know which campaigns to include in a test?” 

While testing philosophies differ, there are three steps to keep in mind when evaluating which campaigns to include in a test.


#1 Identifying Conversion Thresholds

The most important metric when identifying campaigns to test is conversions. A conversion is the most valuable action a user can take on your website, and conversion volume is the primary signal for measuring the effectiveness of a test.

So, what is the conversion volume minimum that a campaign should meet before being selected for an A/B test?

For an evenly split A/B test (traffic split 50/50), we recommend campaigns with a minimum of 40 conversions in the last 30 days. A 50/50 split distributes those conversions evenly: roughly 20 for the control campaign and 20 for the experiment campaign.

So why a 40 conversion minimum?

We’ve seen smart bidding strategies such as tCPA and tROAS react more quickly and effectively in campaigns with at least 20 conversions. This matters because when conversion volume falls below 20 per month, it is much harder both to drive incremental conversions and to determine a winner from the test results.

Keep in mind that 40 conversions in the last 30 days is the bare minimum we recommend for a test. Ideally, the campaigns selected for a test have 100 conversions in the last 30 days.

The more conversion volume the better!

#2 Weighing Out the Risk

Now that we’ve identified which campaigns are ideal candidates for an A/B test, it's important to weigh how much risk you or your client is willing to take on.

Every time a test is implemented, there is a chance that conversion volume will be sacrificed if the test is unsuccessful. Therefore, it's important to calculate the number of conversions being put at risk in each test.

Conversion risk is calculated by taking total campaign conversion volume and applying a traffic split that is aggressive, moderate, or conservative. Remember that when choosing a traffic split, each side of the test still needs to stay at or above the 20-conversion minimum.

Below are examples that show the split between the Control and the Test campaign and what to keep in mind depending on risk tolerance.

Traffic Split:

    • Aggressive: 20(control)/80(test)

      • This shifts more traffic to the test campaign, which puts conversion volume at higher risk. Because the control side receives only 20% of traffic, a minimum conversion volume of 100 is needed for it to reach 20 conversions.

    • Moderate: 50/50

      • This is a great fit for a campaign with lower conversion volume: it still has a chance to see a strong reward while limiting the potential loss of conversions.

    • Conservative: 80(control)/20(test)

      • This puts the fewest conversions at risk, but because the test side receives only 20% of traffic, it likewise requires a higher minimum conversion volume of 100.

If you or your client is risk averse, you may want to start by putting a low volume of conversions at risk and then, if the test shows signs of strong performance, slowly increase the amount of traffic driven to the test campaign. However, it's important to remember that the lower the risk you take, the higher the total conversion volume needs to be for your test campaign to hit that 20-conversion minimum.
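The split math above can be sketched in a few lines. This is an illustrative helper (the names and structure are my own, not part of any ads platform API), assuming conversions divide roughly in proportion to traffic:

```python
# Illustrative sketch: given a campaign's 30-day conversion volume and a
# traffic split, estimate each arm's expected conversions and check the
# 20-conversion-per-arm minimum discussed above.

SPLITS = {
    "aggressive": (0.20, 0.80),    # 20% control / 80% test
    "moderate": (0.50, 0.50),      # 50% control / 50% test
    "conservative": (0.80, 0.20),  # 80% control / 20% test
}

MIN_CONVERSIONS_PER_ARM = 20

def split_conversions(total_conversions, split_name):
    """Return (control, test) expected conversions, plus whether both
    arms clear the 20-conversion minimum."""
    control_pct, test_pct = SPLITS[split_name]
    control = total_conversions * control_pct
    test = total_conversions * test_pct
    meets_minimum = min(control, test) >= MIN_CONVERSIONS_PER_ARM
    return control, test, meets_minimum

# A campaign with 40 conversions only supports a 50/50 split;
# a 20/80 split needs 100 conversions so the smaller arm still hits 20.
print(split_conversions(40, "moderate"))      # (20.0, 20.0, True)
print(split_conversions(40, "aggressive"))    # (8.0, 32.0, False)
print(split_conversions(100, "aggressive"))   # (20.0, 80.0, True)
```

This also makes the risk trade-off concrete: the more traffic you send to the test arm, the more conversions sit on the at-risk side of the split.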

#3 Selecting Campaigns for your Test

Now that you have an understanding of the importance of conversion thresholds and risk tolerance for campaign testing, it's time to cover the formula for choosing which campaigns to test. Here is a checklist to go through each time you are about to run a new campaign test in Google Ads:

  • Does the campaign hit the 40 conversions in the last 30 days minimum?
  • Did you identify your or your client's risk tolerance for this test? (Aggressive, Moderate, Conservative)
  • Based on risk tolerance, did you identify the traffic split for your A/B test?
  • Does the traffic split allow for both the control and test campaign to meet the 20 conversion minimum? 

Once all of these have been addressed, you are ready to launch your test!
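The quantitative items on that checklist can be condensed into a single hypothetical check. The function name and inputs here are illustrative, and it assumes conversions split proportionally with traffic:

```python
# Illustrative pre-launch check combining the two numeric checklist items:
# the 40-conversion campaign minimum and the 20-conversion-per-arm minimum.

def ready_to_test(conversions_30d, control_pct, test_pct):
    """Return True if the campaign meets the 40-conversion 30-day minimum
    and the chosen split leaves each arm with at least 20 conversions."""
    if conversions_30d < 40:
        return False
    control = conversions_30d * control_pct
    test = conversions_30d * test_pct
    return min(control, test) >= 20

print(ready_to_test(40, 0.5, 0.5))   # True: 50/50 works at the bare minimum
print(ready_to_test(40, 0.2, 0.8))   # False: aggressive split starves the control arm
print(ready_to_test(100, 0.8, 0.2))  # True: conservative split works at 100 conversions
```

The remaining checklist item, agreeing on risk tolerance with your client, is a conversation, not a calculation, which is why it stays outside the function.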

Never Stop Testing

Running tests will always play a critical role for paid media professionals who are hungry to see consistent growth for their clients over time.

As user behavior evolves, it's important that we continuously test so we can continually learn.


 


David Lacamera
Sr. Manager, Paid Media