AB Testing (A/B) | A statistical approach to optimization

Most of us have argued with friends or colleagues about which route is fastest from home to the office, or vice versa. How do you settle that bet? Test it out: leave the same place at the exact same time, take separate routes, and find out whose way is best. That, in a nutshell, is A/B testing 😉, which today’s designers and marketers use to gain insight into visitor behavior and to increase conversion rates. Though it is one of the easiest and most effective optimization techniques, A/B testing is still not as well known as Internet marketing subjects such as SEO, web analytics, and usability. People just aren’t as aware of it. Let’s explore A/B testing in more detail.

What is AB testing?

First of all – A/B testing is NOT about finding defects; it is about finding the best-optimized approach. As the name suggests, A/B testing is a technique that involves testing an original design (A) against an alternate version of that design (B) simultaneously to see which performs better. Statistical analysis is used to determine which variation performs better for a given conversion goal. The original design is known as “the control” and the alternate version is known as a “variation.” The one that performs better is then selected for real-world use. A/B testing is also known as split testing or bucket testing.

The goal: To determine which out of two versions under experiment performs better.

Why Should You A/B Test?

Every business website wants to convert visitors into customers. E-commerce sites want visitors buying products, SaaS web apps want visitors signing up for a trial and converting to paid customers, and news and media websites want readers to click on ads or sign up for paid subscriptions. Measuring the performance of a variation (A or B) means measuring the rate at which it converts visitors to goal achievers. Testing takes the guesswork out of website optimization and enables data-informed decisions that shift business conversations from “we think” to “we know.” By measuring the impact that changes have on your metrics, you can ensure that every change produces positive results. The Return On Investment (ROI) of A/B testing can be massive, as even small changes on a landing page or website can result in significant increases in leads generated, sales, and revenue.

AB testing allows individuals, teams, and companies to make careful changes to their user experiences while collecting data on the results. More than just answering a one-off question or settling a disagreement, AB testing can be used consistently to continually improve a given experience. Testing one change at a time helps pinpoint which changes had an effect on visitors’ behavior. Over time, we can combine the effect of multiple winning changes to demonstrate the measurable improvement of the new experience over the old one.

A/B testing can be performed continuously on almost anything, especially since most marketing automation software now includes the ability to run A/B tests on an ongoing basis. This allows you to keep websites and other tools up to date with changing trends, using current resources.

What Can You A/B Test?

Almost anything on your website that affects visitor behavior can be A/B tested. The choice of what to test obviously depends on your goals. The following is a list of ideas to get you started with testing.

  • A media company might want to increase readership, increase the amount of time readers spend on its site, and amplify its articles with social sharing. To achieve these goals, it might test variations of: email sign-up modals, recommended content, and social sharing buttons.
  • A travel company may want to increase the number of successful bookings or increase revenue from ancillary purchases. To improve these metrics, it may test variations of: homepage search modals, the search results page, and ancillary product presentation.
  • An e-commerce company might want to increase the number of completed checkouts, raise the average order value, or increase holiday sales. To accomplish this, it may A/B test: homepage promotions, navigation elements, and checkout funnel components.
  • A technology company might want to increase the number of high-quality leads for its sales team, increase the number of free-trial users, or attract a specific type of buyer. It might test: lead form components, the free-trial signup flow, and homepage messaging and calls to action.

A/B Testing Process

The correct way to run an AB testing experiment is to follow a scientific process. The following is an AB testing framework you can use to start running tests:

  1. Data collection & analysis: Analytics tools such as Google Analytics provide insight into where you can begin optimizing. For example, you can identify the pages with the highest bounce rate.
  2. Identify Goals: Your conversion goals are the metrics you use to determine whether the variation is more successful than the original version. Goals can be anything from clicking a button or link to product purchases and e-mail sign-ups.
  3. Construct a Hypothesis: Based on the insights from data collection and analysis, build a hypothesis aimed at increasing conversions, i.e. generate A/B testing ideas and articulate why you expect each to beat the current version. For example, “Increasing the size of the CTA button will make it more prominent and will increase conversions.” Once you have a list of ideas, prioritize them by expected impact and difficulty of implementation.
  4. Create Variations: Using your A/B testing software, make the desired changes to an element of your website or mobile app experience. Many leading A/B testing tools have a visual editor that makes these changes easy. Be sure to QA your experiment to confirm it works as expected.
  5. A/B Test your variation: Test the variation against the original page. For example, test your original home page against a version that has a larger CTA button. Visitors to the site or app are randomly assigned to either the control or the variation of your experience. Record statistics such as monthly visitors, conversion rate, and the observed change in conversion rate.
  6. Analyze Results: Once your experiment is complete, it’s time to analyze the results. Analyze the A/B test statistics to determine how each version performed. If there is a clear winner (a statistically significant difference), go ahead with its implementation. If the test remains inconclusive, go back to step three and rework your hypothesis.
  7. Report the results to all concerned: Let others in Marketing, IT, and UI/UX know of the test results and the insights generated.
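The random assignment in step 5 is typically implemented with deterministic hashing, so a returning visitor always lands in the same bucket. Here is a minimal sketch in Python; the function name, the experiment name, and the 50/50 split are illustrative assumptions, not the API of any particular A/B testing tool:

```python
import hashlib

def assign_bucket(visitor_id: str, experiment: str) -> str:
    """Deterministically assign a visitor to 'control' or 'variation'.

    Hashing visitor_id together with the experiment name gives each
    experiment its own independent 50/50 split, and the same visitor
    always sees the same version on repeat visits.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    # Treat the hash as a uniform number in [0, 1); below 0.5 -> control.
    fraction = int(digest, 16) / 16**32
    return "control" if fraction < 0.5 else "variation"

# Example: the assignment is stable across calls for the same visitor.
print(assign_bucket("visitor-42", "cta-button-size"))
```

Because the bucket is derived from the visitor ID rather than stored state, no database lookup is needed to keep the experience consistent across visits.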

A/B testing is worthless if you do not implement what you learn. Create a spreadsheet where you record the tests performed, the results, and the decisions made. Even if you run only one test per week, at the end of a year you will have 52 tests. A/B testing isn’t the end of the story, of course. Once you’ve mastered A/B testing, or are eager for more than one key finding per test, you can move on to multivariate testing, i.e. testing multiple variations instead of just two.


In 1908, William Sealy Gosset altered the Z-test to create Student’s t-test, marking the switch from relying on assumed information about populations to tests performed on the samples alone. Google engineers ran their first A/B test in 2000, in an attempt to determine the optimum number of results to display on a search engine results page. That first test was unsuccessful due to glitches caused by slow loading times. Later A/B testing would become more advanced, but the foundation and underlying principles remain largely the same, and in 2011, 11 years after its first test, Google ran over 7,000 different A/B tests.

A practical email campaign example

  1. Identify Goals: A company with a customer database of 2,000 people decides to create an email campaign with a discount code in order to generate sales through its website.
  2. Construct a Hypothesis & Create variations: It creates two versions of the email with different calls to action. A1: “Offer ends this Saturday! Use code A1”, and B1: “Offer ends soon! Use code B1”. All other elements of the emails’ copy and layout are identical.
  3. A/B Test your variation: To 1,000 people it sends the email with the call to action A1, and to another 1,000 people it sends the email with the call to action B1.
  4. Analyze Results: The company then monitors which campaign has the higher success rate by analyzing the use of the promotional codes. The email using code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using code B1 has a 3% response rate (30 of the recipients used the code to buy a product).
  5. Take action: The company therefore determines that, in this instance, the first call to action is more effective and will use it in future sales.
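Before acting on the numbers in step 4, the company should check that the 5% vs 3% gap is statistically significant rather than noise. One standard way to do this for conversion rates is a two-proportion z-test, sketched below with only Python’s standard library (real A/B testing tools run this kind of analysis for you; the function name here is illustrative):

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))      # two-sided normal tail
    return z, p_value

# The campaign above: 50/1000 conversions for A1 vs 30/1000 for B1.
z, p = two_proportion_z_test(50, 1000, 30, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05, so A1's lead is significant
```

With these sample sizes the p-value falls below the conventional 0.05 threshold, which supports the company’s decision to adopt call to action A1. With much smaller lists, the same 5% vs 3% gap could easily be inconclusive.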

Do you ever wonder how often you’re being A/B tested? The practice is so commonplace among websites and marketers these days that at any given moment, on any given website, you could be part of a grand experiment in optimization. Many A/B testing case studies show that the practice is increasingly popular with small and medium-sized businesses as well. There are various types of A/B testing platforms and providers. All offer basic multivariate testing, along with various integrated services. A/B testing providers include Optimizely, Kissmetrics, and Monetate.

A/B testing, done consistently, can improve your bottom line substantially. By using controlled tests and gathering empirical data, you can figure out exactly which marketing strategies work best for your company and your product. If you know what works and what doesn’t, and have evidence to back it up, it’s easier to make decisions, and you can often craft more effective marketing materials from the outset.


