
How to Run A/B Tests with AddThis Audience Targeting

Written by Josh Horwitz, AddThis Software Engineer

Recently you might have noticed new functionality in the Audience Targeting section of your dashboard: A/B testing. This feature lets you compare two or more Audience Targeting rules to see which yields the highest conversion rate and lift. The goal of these tests is to determine which rule is better at achieving your website goals, whether that’s collecting emails, getting shares, or promoting a specific link. To start an A/B test, simply create and activate an Audience Targeting rule. Then hover over the cell you’d like to test against and click the “Start an A/B Test” button that appears.


In the general sense, A/B testing is a statistical method of comparing two variants, A and B. Variant A is typically your control in the experiment, and variant B is the variation you wish to compare against it. Things you might traditionally A/B test include website home pages, copy, and email campaigns, among many other possibilities.

How Traditional A/B Testing Platforms Work

Traditional A/B testing platforms are based on statistical t-tests. They require you to pick a sample size, usually based on some function of your pageviews, and to run your test until each variant meets that sample size. Once the sample size has been met, the results can be analyzed to determine which variants, if any, are statistically significant compared to the control. Another common approach is to require your test to run for a minimum time period, such as a week, before the results can be made final.
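
To see why this gets slow, here’s a rough sketch in Python of the kind of sample-size calculation a t-test-based platform performs. The baseline rate, target rate, significance level, and power below are illustrative assumptions, not numbers from any particular platform.

    # A minimal sketch of the sample-size step a t-test-based platform performs.
    # The 5% baseline, 6% target, alpha, and power are illustrative assumptions.
    from scipy.stats import norm

    def required_sample_size(p_control, p_variant, alpha=0.05, power=0.8):
        """Approximate visitors needed per variant for a two-proportion test."""
        z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
        z_power = norm.ppf(power)           # desired statistical power
        variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
        return int((z_alpha + z_power) ** 2 * variance / (p_control - p_variant) ** 2) + 1

    # Detecting a lift from a 5% to a 6% conversion rate needs roughly 8,000
    # visitors per variant before the test can even be analyzed.
    print(required_sample_size(0.05, 0.06))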

Understanding AddThis’ A/B Testing Platform


At AddThis, we decided to use a Bayesian statistical approach instead of traditional t-tests. By using Bayesian statistics, we are able to cut out sample-size calculators and minimum run times, which ultimately lets us give you faster and more accurate results with fewer false positives. The only requirement we impose is that each cell in your test must have a minimum of 100 visits and 25 conversions before we can accurately start declaring winners. This requirement cuts down on false positives and helps you avoid picking the wrong winner.
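
To make the idea concrete, here’s a hedged sketch of how a Bayesian comparison like this can work: each cell’s conversion rate is modeled with a Beta posterior, and cells are compared by sampling from those posteriors. The uniform Beta(1, 1) prior and the example counts are illustrative only and not our exact model; the per-cell minimums come from the paragraph above.

    # A sketch of Bayesian A/B comparison with Beta posteriors.
    # The Beta(1, 1) prior and the example counts are assumptions for
    # illustration; only the per-cell minimums come from the text above.
    import numpy as np

    rng = np.random.default_rng(0)

    MIN_VISITS, MIN_CONVERSIONS = 100, 25  # per-cell minimums described above

    def posterior_samples(visits, conversions, n=100_000):
        """Draw samples of a cell's conversion rate from its Beta posterior."""
        return rng.beta(1 + conversions, 1 + visits - conversions, size=n)

    control = {"visits": 400, "conversions": 28}
    variant = {"visits": 400, "conversions": 41}

    for cell in (control, variant):
        assert cell["visits"] >= MIN_VISITS and cell["conversions"] >= MIN_CONVERSIONS

    p_control = posterior_samples(**control)
    p_variant = posterior_samples(**variant)
    print(f"P(variant beats control) = {(p_variant > p_control).mean():.1%}")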

What’s Confidence?


Confidence is our way of letting you know how sure we are that your current leader is a winner. Once confidence reaches a minimum of 95%, we can declare the best-performing cell a winner. You should try your best to let your test run its course until our stats engine can either determine significance or tell you to keep your control.
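
One natural way to read confidence is as the probability that the current leader truly has the highest conversion rate of all cells in the test. The sketch below estimates that probability by Monte Carlo, reusing the Beta-posterior idea from the previous example; the 95% threshold comes from the text above, and everything else is an illustrative assumption rather than our exact engine.

    # A sketch of "confidence" as the probability each cell is truly best,
    # estimated by sampling each cell's Beta posterior (see previous example).
    # Counts are illustrative; only the 95% threshold comes from the text above.
    import numpy as np

    rng = np.random.default_rng(1)

    cells = {
        "control":   (400, 28),   # (visits, conversions)
        "variant_b": (400, 41),
        "variant_c": (400, 33),
    }

    samples = {
        name: rng.beta(1 + conv, 1 + visits - conv, size=100_000)
        for name, (visits, conv) in cells.items()
    }

    draws = np.column_stack(list(samples.values()))
    win_rate = (draws.argmax(axis=1)[:, None] == np.arange(draws.shape[1])).mean(axis=0)

    for name, confidence in zip(samples, win_rate):
        flag = "winner candidate" if confidence >= 0.95 else "keep testing"
        print(f"{name}: {confidence:.1%} ({flag})")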

Why Is It a Leader?


Any of your A/B test cells can be a leader (or winner). When a test cell gets the label “leader,” it means that cell is on track to be declared a winner. However, this does not mean you should stop your A/B test and declare the leader a winner; when tests are stopped prematurely, the chance of a false positive increases.

What Makes a Cell a Winner?

In order to have a winner, your cell must meet a couple of requirements. The first is that the current leader must have a minimum confidence of 95%. The second, which our platform enforces to reduce false positives, is that the predicted conversion rates for your winner must always beat the control.


In the above graphic, you will notice that the predicted conversion rates of the control and the variant overlap. This means there is still a chance the variant, even though mostly better, could perform the same as or worse than the control. Since there is still a fairly high level of uncertainty, we wait until there is no overlap in the predicted conversion rates.


This helps to ensure that your winning variant is predicted to always outperform your control.

You might be wondering what happens if your control is the winner. We check for that too! We make sure your control meets the same requirements described above when compared against your best variant.
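
One hedged way to picture the “no overlap” requirement is to compare credible intervals for each cell’s predicted conversion rate: a cell only becomes a winner once its interval sits entirely above the other’s, in whichever direction. The interval level and counts below are illustrative assumptions, not our exact rule.

    # A sketch of the "no overlap" winner check using 95% credible intervals
    # from Beta posteriors. The counts and interval level are illustrative
    # assumptions; the exact platform rule may differ.
    from scipy.stats import beta

    def credible_interval(visits, conversions, level=0.95):
        """Credible interval for a cell's conversion rate (Beta(1, 1) prior)."""
        lo = (1 - level) / 2
        return beta.ppf([lo, 1 - lo], 1 + conversions, 1 + visits - conversions)

    control_lo, control_hi = credible_interval(visits=1200, conversions=84)
    variant_lo, variant_hi = credible_interval(visits=1200, conversions=126)

    if variant_lo > control_hi:
        print("Variant wins: its predicted conversion rate always beats the control.")
    elif control_lo > variant_hi:
        print("Control wins: keep your original rule.")
    else:
        print("Intervals still overlap: keep the test running.")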

Why Can’t I Change the Audience or Goal?

One of the golden rules when conducting an A/B test is to keep the changes you are testing to a minimum. To keep tests accurate and geared toward your goals, we decided that you cannot change the goal or audience once a test is running.

Why Does My Test Have a Problem?

Tests can have problems for a few reasons. We’ll highlight the most common ones to help you get the most out of your tests.

  1. You don’t have code on the page. Without code we can’t collect data and run your test. Head here for instructions on how to install the code!
  2. Your test cells are too equally matched or too similar. Try adding another cell that spices the test up!
  3. Your test could be taking too long to determine a winner. Try adding another variation to your test or disabling poorly performing cells to give more visits to your better cells.
  4. You have too many variations. The more variations you have, the fewer visits we can give to each, because we try to distribute views equally. If you turn off poorly performing variations, more users can see your better-performing variations and your test will finish faster (see the sketch after this list).
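
As a rough illustration of point 4, here’s some back-of-the-envelope arithmetic showing how the number of active cells affects how quickly each one reaches the 100-visit minimum mentioned earlier. The daily traffic figure is an illustrative assumption.

    # Rough arithmetic for point 4: with even traffic splitting, more active
    # cells means fewer visits per cell per day. The 500 daily visits figure
    # is an illustrative assumption; the 100-visit minimum comes from above.
    DAILY_VISITS = 500
    MIN_VISITS_PER_CELL = 100

    for active_cells in (2, 4, 8):
        visits_per_cell_per_day = DAILY_VISITS / active_cells
        days_to_minimum = MIN_VISITS_PER_CELL / visits_per_cell_per_day
        print(f"{active_cells} cells: ~{days_to_minimum:.1f} days just to reach "
              f"the per-cell visit minimum")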

Log in to the dashboard to create your own A/B test!
