Market Research
How to use A/B testing to de-risk your design decisions

Wouldn’t it be great if you could predict the impact of every single design decision you made? If you could know exactly how something would land with your target market and whether it would lead to an uptick in sales, brand awareness or brand loyalty?

Well, the good news is that you don’t need a crystal ball to do just that.

Whilst it’s true that there’s an element of risk in every design you create for a product or its marketing material, you can take a huge proportion of that risk out of your decisions through simply understanding your customers better. By researching the “real life” reaction of your audience to something, you remove assumption from your design process. 

And through replacing assumptions with evidence, you can help to ensure that what you create is exactly what you and your customers need. 

At Snap Out, we work alongside companies to allow them to step into their customers’ shoes and truly understand what their wants and needs are. You can find out more about our services here.

Based on our expertise and experience, we know that one of the best ways to test any type of design is through split testing. So, today we thought we would go back to basics and explain exactly why you need to include it in every project. 

What is A/B testing and why is it important?

It’s likely that you’ve already heard of (or even run!) split tests. However, in case you haven’t, split testing, also known as A/B testing, is a way of comparing two different versions of a design to assess which performs better against your goals. Usually, there will be only one difference between the two designs, so that it’s easier to pinpoint exactly what affects whether a campaign resonates or not.

To put it simply, split testing allows you to compare the impact of Design A with that of Design B.

This type of testing is particularly useful when creating a product or service, since every target market is slightly different. Whilst a certain approach may work really well with one demographic, it could be a complete flop with another. As such, it allows you to base all of your designs for the product and its marketing on evidence, as opposed to assumptions. 
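To make that comparison concrete, here is a minimal sketch (in Python, with hypothetical visitor and conversion numbers) of how you might check whether the difference between Design A and Design B is statistically meaningful rather than just noise, using a standard two-proportion z-test:

```python
import math

def conversion_rate(conversions, visitors):
    """Fraction of visitors who converted."""
    return conversions / visitors

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) for whether the two designs'
    conversion rates genuinely differ."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis that A and B perform the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: Design A converted 200 of 5,000 visitors,
# Design B converted 260 of 5,000.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
```

In this made-up example, Design B’s higher conversion rate comes with a p-value well below 0.05, suggesting the gap is unlikely to be chance. With a much smaller sample, the same gap might not be significant, which is why letting a test run long enough matters.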

However, whilst split testing may sound simple, it’s easy to make mistakes that could prove costly.


Tools for a brilliant A/B Test 

The first step to creating a great split test is to make sure you have two versions of a design to test. Whether it’s the landing page of a site, the user interface of an app, a logo, a piece of copy on your site or a piece of marketing material, make sure the two versions differ by only one variable.

Then, ensure you have clearly defined exactly what this design needs to achieve. 

For example, maybe you want to test the impact of your logo on website bounce rate. Perhaps you want to test how a change in user interface influences how long people stay in your app. Or you could even test how a certain colour within a Facebook advert impacts click-through rates (CTRs).

Next, you need to decide on the right tool to use. Here are a few that we recommend. 


HubSpot

If you subscribe to HubSpot’s CMS (Content Management System) for marketers, you also get access to their A/B testing software. It is designed to allow you to test two landing pages to see which converts best. You can decide which metrics are measured, based on those that mean success for you.

Then, after running the test for a while, you can assess the results and mark one design as the winner, which will then be implemented automatically.

Find out more here



Facebook

Facebook allows you to test the success of variables within an ad campaign before putting a larger budget behind one of them. You can test the copy, images and calls to action used, then analyse the metrics to measure which was more successful.

That way, not only can you choose to go forward with the more impactful design, but it also gives you some useful information for future marketing materials. 


Google Analytics Experiments

Google Analytics ‘Experiments’ is a much more in-depth testing tool. It allows you to study up to 10 versions of a page at once, though we recommend starting with just two.

It then assesses which of the pages performs best by serving the different designs to a random sample of users. Having defined the objective for your campaign, you can even receive email updates on how it’s going.
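Under the hood, tools like this need to assign each visitor to a variant at random but consistently, so that the same user always sees the same design for the life of the test. Here is a minimal sketch of one common approach, hash-based bucketing (the function and variant names are illustrative, not any tool’s actual API):

```python
import hashlib

def assign_variant(user_id, variants=("A", "B")):
    """Deterministically bucket a user into a variant by hashing
    their ID, so the same user always sees the same design."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user ID always lands in the same bucket:
variant = assign_variant("user-42")
```

Because the hash spreads IDs roughly evenly, each variant receives about an equal share of traffic without the tool having to store which user saw which design.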



Final thoughts 

You already know the value of powerful design and the ability of small changes to have a huge impact. However, it’s important to make sure that you’re not basing these small changes on your assumptions about how they will be received. 

By learning about target markets through split testing, you can take a lot of the risk out of the decisions that you make whilst designing your product/service and any related materials. That way, you can rest assured that the choices you’re making are the right ones, confident in the knowledge that your designs will be a success.

The Snap Out Team