At its most basic, A/B testing is just what it sounds like. A company has two versions of a product, service or website that are put in front of users randomly to determine which is more successful given a chosen metric of success. For instance, if a business wants to know which version of its website is more likely to turn a visitor into a customer via a “Sign Up Now” button on the site, it would show 50% of users one version of the site and 50% a second version of the site. Whichever version received more clicks would be the more effective website based on the company’s goals.
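The random 50/50 split described above is usually implemented by bucketing each user deterministically, so a returning visitor always lands in the same variant. A minimal sketch in Python (the function name and the use of an MD5 hash are illustrative assumptions, not a prescribed implementation):

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user ID (instead of calling a random generator on each
    page view) keeps the same visitor in the same bucket across visits.
    MD5 is used here only because it is stable across processes, unlike
    Python's built-in hash().
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the hash is effectively uniform, a large pool of user IDs splits roughly 50/50 between the two variants without any coordination between servers.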
In new product development, A/B testing can help improve products without the need for guesswork by marketers or any one person’s gut instinct. Ideas put forth by teams don’t languish in the abstract because the benefits (or lack thereof) associated with minute or even major design changes can be tracked. That data can then drive product evolution that is based on actual user behavior.
A/B Testing Best Practices
Unlike multivariate testing (which tests multiple variables and how they interact to impact user experience), A/B testing should be easy to implement and simple to understand when the results are in. Reports generated as part of A/B testing should focus on a single metric of success – often involving users performing a specific action. This type of testing is a relatively simple way to prove or disprove a vague hypothesis like “Design B is better” – if the statement is true, then Design B will fulfill the chosen metric of success more often than Design A.
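Deciding whether Design B really “fulfills the chosen metric of success more often” than Design A comes down to comparing two conversion rates. A common, simple check is a two-proportion z-test; a stdlib-only sketch (the function name and the example numbers are assumptions for illustration):

```python
import math

def two_proportion_z(conversions_a: int, n_a: int,
                     conversions_b: int, n_b: int) -> float:
    """Z-score for the difference in conversion rates between variants."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: 120 signups out of 1,000 visitors for A,
# 150 out of 1,000 for B.
z = two_proportion_z(120, 1000, 150, 1000)
```

A z-score above roughly 1.96 corresponds to significance at the 95% level (two-tailed), which is a common bar for declaring one design the winner rather than relying on raw click counts alone.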
When it comes to product design, A/B testing should be happening any time there are new features on the table or someone proposes changes to existing ones. Split-testing is a continuous part of the development process.
While that makes it sound like a lot of time and effort will go into testing, the opposite is actually true. Your team shouldn’t be spending months building tests, because your split-testing strategy should involve only one element at a time (e.g., color, text placement, button placement) so you’re not left wondering what it was about the new version that led to the change in behavior among users.
When to Use A/B Testing
Split-testing is great for understanding how incremental improvements influence user actions, but it’s easy to miss opportunities for innovation when you’re focused on element-level UX changes. Sometimes A/B testing will show a negative result in the short term even though the same change would have a positive impact in the long term. In other words, you don’t want to miss the forest for the trees – especially when it comes to new product development. When in doubt, release the product and then A/B test at a later date.
The biggest issue that can arise when A/B testing is part of your development culture is risk aversion. Ideas can stagnate when testing reveals the obvious: that users will often interact with familiar products more readily than with unfamiliar ones, even though the unfamiliar has the potential to one day become the standard. This can lead to teams rehashing the same designs over and over instead of putting out products that are truly pioneering.
Overall, A/B testing can be an incredibly effective way to shape a product for the better, but it’s important to remember that split-test data is only one piece of the product development puzzle. It’s up to you to take that data and put it into the context of your team’s overall strategy instead of being a slave to the results. Think in terms of being metrics-informed versus metrics-driven and your designs will be stronger for the testing you do.