Are You Monitoring These 4 KPIs in Your A/B Testing Program?

A/B testing is an excellent way to uncover valuable performance insights – but are you monitoring these crucial KPIs? Find out what to observe here.

If you're not currently performing continuous A/B experiments on your website, you might be missing the boat.

Taking a scientific approach, deciding which website changes to pursue or roll out based on actual visitor behavior rather than assumptions and guesses, is critical to the success of your continuous website improvement efforts.

Whether you use an agency or manage these activities in-house, it's important to establish some overall performance metrics for your program.

1. Testing Velocity

Testing velocity is the measure of how many tests you are performing over a certain time period.

This is an operational benchmark that gauges how fast you are able to design, develop, run, analyze, and launch your tests. While larger tests take longer to run and can produce a bigger impact on results, smaller 'quick wins' shouldn't be discounted.

Within the small-to-large testing spectrum, you want to measure the number of tests you can perform bi-weekly or monthly with statistically significant, valid results. This will depend heavily on your website's traffic and the resources you allocate to testing, but a simple way to determine a benchmark is to divide the number of tests that reached valid results by the number of weeks or months in the period.

You may also correlate the number of tests you pursue at one time to your site's number of touchpoints, such as carts, landing pages, product page templates, and headers. Once you've established your testing velocity, you can benchmark it against your win rate.


2. Testing Win Rate

A win rate measures how often the website variation you tested against the original produced better results.

'Losing tests' are winners in their own right, since they tell you the original version is better and can surface other insights. Still, a win is typically the better measure of success, particularly in the eyes of management.

Your win rate is the number of winning tests divided by the total number of tests performed. Set a benchmark to aim for, such as a 70% win rate or better. While this is a simplistic measure, it provides a quick snapshot of whether your efforts are producing more wins than losses.
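As a sketch of that calculation (the counts and the 70% benchmark are just examples):

```python
def win_rate(winning_tests, total_tests):
    """Fraction of completed tests where the variation beat the original."""
    if total_tests == 0:
        return 0.0
    return winning_tests / total_tests

# 14 winners out of 20 completed tests meets a 70%-or-better benchmark.
print(f"{win_rate(14, 20):.0%}")  # → 70%
```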


3. Conversion and Revenue Lift

As you begin to accumulate wins, you'll want to confirm their validity and deepen your understanding of them by monitoring conversion or revenue lift in your analytics platform. Lift is the percentage improvement your variation(s) shows over the original.
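Lift itself is a simple relative-change calculation. A minimal sketch, with illustrative conversion rates:

```python
def lift(variation_rate, original_rate):
    """Relative improvement of the variation over the original, as a fraction."""
    return (variation_rate - original_rate) / original_rate

# Original converts at 2.0%, variation at 2.5%: a 25% lift.
print(f"{lift(0.025, 0.020):.0%}")  # → 25%
```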

Within your analytics platform, set up custom reports to help you carefully watch changes across your various micro and macro conversion goals.

  • Macro-conversion events are often those that are closest to the end result, the sale, and are directly tied to the result that drives business revenue.
  • Micro-conversions, on the other hand, are those goals being tracked leading up to the final goal.

Then, go beyond just measuring outcomes in your testing tool. Once your test reaches validity, use your analytics tool to accurately measure outcomes across different segments. Answer performance questions across visitor, traffic, and device types. If you're leveraging personalization techniques and technology, you may uncover segments that respond better than others, pointing to areas where a personalized approach to your website changes would pay off.
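One standard way to check that a test has reached validity before slicing it by segment is a two-proportion z-test. This is a generic statistical sketch (the visitor and conversion counts are made up), not the method of any particular testing tool:

```python
from math import erf, sqrt

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approximation
    return z, p_value

z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Once p falls below your chosen threshold (commonly 0.05), the same counts can be broken out by device type or traffic source in your analytics tool.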

Additionally, within your conversion measures, monitor lower-level metrics such as bounce rate, time on site, engagement, and page speed to inform your testing hypotheses, create new tests, and ultimately drive improvements in your rate of lift.


4. Projected vs. Actual Spend

Be sure to treat your A/B testing and conversion optimization program like any of your other marketing activities.

  • Give it a line item in your marketing budget or area of focus in your marketing planning
  • Devote adequate resources to ensure your program's success
  • Track hours or budget devoted to running your program
  • Monitor your projected budget vs. your actual spend on your testing projects
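The last bullet is a simple variance check; a sketch with made-up dollar figures:

```python
def spend_variance(projected, actual):
    """Fraction over (positive) or under (negative) the projected budget."""
    return (actual - projected) / projected

# Projected $12,000 for the quarter's tests; actual spend came to $13,800.
print(f"{spend_variance(12_000, 13_800):+.0%}")  # → +15%
```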

Then ask yourself: “Are we allocating resources efficiently as it relates to our overall revenue and conversion lift?”

Ultimately, you'll want to use these numbers to inform how you can run higher-quality or more frequent tests more efficiently. While some of your efforts won't correlate directly with financial ROI, it's worth understanding where you're trending.

Finally, engage the right partners or internal leadership to use this performance measure to improve the systems and processes behind your A/B testing and conversion optimization efforts.


Using the Right Software

So how do you begin measuring these KPIs?

Excel for starters. Create a sheet to serve as your testing dashboard. Tie in data elements from your roadmap to bring in your number of active tests, winning tests, and conversion improvement. If you're ready for the next level, implement software like Experiment Engine to integrate with your testing and project management platforms and display some of these results in its dashboard.
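Even a small script (or the equivalent spreadsheet formulas) can roll these KPIs up into one view. The test records below are invented purely for illustration:

```python
# A stand-in for the spreadsheet dashboard described above.
tests = [
    {"name": "Homepage hero", "status": "won", "lift": 0.08},
    {"name": "Cart button copy", "status": "lost", "lift": -0.02},
    {"name": "Checkout steps", "status": "running", "lift": None},
    {"name": "Pricing page layout", "status": "won", "lift": 0.12},
]

completed = [t for t in tests if t["status"] in ("won", "lost")]
wins = [t for t in completed if t["status"] == "won"]

dashboard = {
    "active_tests": sum(t["status"] == "running" for t in tests),
    "win_rate": len(wins) / len(completed) if completed else 0.0,
    "avg_winning_lift": sum(t["lift"] for t in wins) / len(wins) if wins else 0.0,
}
print(dashboard)
```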

I'm curious. What metrics and measures are you using to run your A/B testing programs? And what challenges do you face in tracking and presenting them?