AB-TESTING WITH LOW TRAFFIC
- YES, IT’S POSSIBLE!
AB-testing is something that you’ve probably heard of before, but as a quick recap so we are all on the same page: AB-testing, also called split testing, is a way of testing improvement ideas by creating two variations and then analyzing their respective performance. The variation that performs best is usually the one that gets implemented. You might test a webpage, an app, an ad, an email or anything else that you place in front of your users. It is very similar to a medical experiment where one group receives a placebo while the other receives the medicine, and the scientists want to discover whether the treated group shows any significant relief in symptoms compared to the placebo group.
These types of tests are commonly used by corporations, marketing agencies, and large startups. It is a fantastic way of growing your company without having to invest in traffic, getting more out of the traffic you already have instead. With AB-testing, you become more efficient in your approach to your users. Split testing is also famous for ensuring that your improvements are good business decisions, instead of relying on the HiPPO (Highest Paid Person’s Opinion).
This all sounds great, but is the method applicable to early-stage startups? Split tests rely on statistics, and to reach significance the tested variants need a certain minimum of data. How much traffic you need depends largely on whom you ask, but Chris Goward, founder and CEO of the conversion optimization agency WiderFunnel, says that he has achieved results with 100-400 conversions per variation. That hurdle probably stops many startups from trying out AB-testing, which is unfortunate.
In this article, I’ll talk more about how to get results from your split tests, even if you only have a few hundred conversions per month. I’ve enlisted the help of a pro in AB-testing, Anthony Brebion at AB Tasty, to give you the best recommendations.
However, if you have fewer than 300 conversions per month, I recommend waiting with formal AB-testing until you’ve reached that milestone. Instead, you can try other split experiments along your user journey or do more qualitative market studies to grow your company.
What tool to use
There are plenty of tools available for AB-testing, and it might be difficult to know which one is best for you. Anthony Brebion shares what features a good tool should have:
“A good testing solution is first and foremost a solution that makes marketing teams autonomous without any technical constraints, whether they are related to resources or planning. The tool must be intuitive and set up within a day. A good testing solution is also a solution with advanced features that allow going beyond the first simple tests, to more complex scenarios.
Finally, a good testing solution is a solution that provides useful indicators for your decision-making and that you can fully trust.”
If you want to read more and find out which one is best for your startup, check out the list here.
What to test
AB-testing is the best way to learn about your product and your audience, and what you want to learn about them should guide your ideas of what to test. Your revenue is the sum of the decisions made by your audience, including both customers and potential customers. Your audience might decide to buy your product or service, to choose a competitor, or not to buy anything at all. With AB-testing you can better understand your audience’s decision factors and make sure that more of them pick your products or services.
Split testing done right will bring you closer to your users and make you understand how they are thinking. What gets your users above their conversion tipping point? To find these big learnings about your audience, you’ll have to make bold tests. You won’t achieve that by testing button colors.
Another advantage of bold and drastic tests is that they also work with low traffic. Testing the colors of call-to-action buttons is for giants with millions of users like Amazon and Facebook. If you only have a couple of hundred conversions to run your test on, you need to see drastic changes in your test results to prove anything with high significance.
Let’s say, for example, that you send the same amount of traffic to two variations, and one of them gets 105 conversions and the other 100. You could easily believe that the first variation has a 5 % higher conversion rate, but there is also a large risk that the difference was completely random. If you instead test two drastically different variations and one receives 150 conversions and the other 100, you can be confident that the first variant performs better.
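You can check this yourself with a standard two-proportion z-test. The sketch below uses only Python’s standard library; the figure of 2,000 visitors per variation is an assumed number for illustration, since the example only specifies the conversion counts:

```python
from math import erfc, sqrt

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    rate_a = conv_a / n_a
    rate_b = conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p-value from the z-score

# 105 vs 100 conversions, assuming 2,000 visitors per variation
print(two_proportion_p_value(105, 2000, 100, 2000))  # ~0.72: could easily be random
# 150 vs 100 conversions on the same assumed traffic
print(two_proportion_p_value(150, 2000, 100, 2000))  # ~0.001: well below the 0.05 threshold
```

The 105-vs-100 split gives a p-value far above 0.05, so nothing can be concluded, while the 150-vs-100 split is comfortably significant even on this modest amount of traffic.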
So what drastic and bold tests could you do? One example is to test your marketing approach, to find out what works best on your prospects. Are you approaching your prospects today with a list of features and product details? Then you could instead try a more emotional approach, describing how the customer would feel after using your product. With such a test, you could find out whether your audience is more driven by logic or emotions, a learning that could be applied throughout your communication.
There are six conversion factors that you can test to improve your conversion rate. First of all, you need a value proposition with attractive features. You then need to maximize the clarity and relevance of your presentation and minimize the prospect’s perceived anxiety and distraction. In addition, your message can be fuelled with urgency. Depending on your audience, these factors might be optimized in different ways: clarity for one type of audience might not be the same as clarity for another. For more details, check out Chris Goward’s book “You Should Test That!”.
How to Test
In your testing process, one thing and only one thing is important: to be disciplined. If you don’t know how to put your emotions aside and be a strict scientist, AB-testing might not be for you. I’m sorry to break it to you, but it’s the hard truth.
Your first step will be to build the hypothesis you will test. It should be a sort of mantra that leads you through the whole experiment. There are a couple of different templates online, but I like this one:
Based on (assumption)
We predict that (experiment)
Will cause (outcome)
So, for example, it could sound like this:
Based on that price is important for our customers
We predict that putting the pricing info on top of the product page
Will cause an increase in conversions
Assumptions are ideas you have about your customers: you assume that they think or act in certain ways. If you have one of those ideas but it is not proven, i.e. you don’t have numbers showing that it is true, then that is an assumption. Many of the things you believe about your customers and take for granted might actually be assumptions that you need to challenge.
The outcome will always be in the form of a metric that you want to move in the right direction. You should always aim for that metric to be the final conversion that generates revenue for you. It might be bookings, purchases, transactions or sign-ups: anything that constitutes your ultimate conversion. If that is not possible because you don’t have enough conversions, the second best option is a metric that is closely correlated with your final conversion. For example, at Airbnb they use searches with dates as a metric highly correlated with bookings.
The next step will be to build your test in your AB-testing tool. You will need to decide on certain details for the run, and you should stick to them through the whole process. You need to decide in advance either how long the test will run or how much traffic will be part of it. You might be tempted to end the test as soon as you’ve seen some first results, but that increases the risk of a false positive, since the early change might have another cause than your variation.
If you want to achieve faster results, Anthony Brebion shares two of his best tips: “Test the pages with the most traffic, then you increase your chances of having significant results quickly. Limit the number of variations, i.e. do not create more than 2 variations in addition to the original. Depending on your traffic, you may need to limit yourself to one.”
Another thing to decide is your significance level, which is a statistical concept. “For the results of a test to be considered reliable, this index must be greater than 95%”, says Anthony Brebion. In plain English, 95 % significance means that there is a 5 % risk that the result of the test was completely random; at that level, roughly 1 test in 20 will show a false positive. It is difficult to decide on the significance level you require: a higher significance is of course better, but it also means you need a larger amount of traffic.
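Before starting a test, you can estimate how much traffic it will need with a standard power calculation. The sketch below is a rough guide only, using Python’s standard library and hypothetical baseline numbers; it also shows why drastic uplifts are the only ones provable on low traffic:

```python
from statistics import NormalDist

def visitors_per_variation(baseline_rate, relative_uplift,
                           significance=0.95, power=0.80):
    """Approximate visitors needed per variation to detect a relative uplift."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # ~1.96 at 95%
    z_beta = NormalDist().inv_cdf(power)                        # ~0.84 at 80% power
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_uplift)
    p_avg = (p1 + p2) / 2
    n = 2 * (z_alpha + z_beta) ** 2 * p_avg * (1 - p_avg) / (p2 - p1) ** 2
    return round(n)

# Hypothetical 5% baseline conversion rate:
print(visitors_per_variation(0.05, 0.10))  # a 10% relative lift needs tens of thousands of visitors
print(visitors_per_variation(0.05, 0.50))  # a drastic 50% lift needs far fewer
```

With a 5 % baseline, detecting a modest 10 % relative lift takes roughly twenty times as many visitors per variation as detecting a 50 % lift, which is exactly why bold tests are the realistic option for low-traffic startups.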
To get around your issue of low traffic during the test, Anthony Brebion recommends startups to attract more traffic temporarily. “The simplest is to use advertising on a performance model (pay per click)”, as for example Facebook or display ads.
At the end of the test you will need to analyze the result: did you prove or disprove your hypothesis? Whether it was proven or disproven, I’m sure you’ve learned something new about your customers, and that is what matters most!
The last step will be to take your new learnings and use them in your business. If you’ve tested bold ideas, you will probably have many parts of your business that can be updated based on your new learning.
Let’s say for example that you realized that clarifying the pricing and putting it on the top of the page made more users convert to customers. Then, you should begin by remodeling your site to prioritize pricing, but it should also influence your communication. Maybe you should include the pricing in your ads?
If the test on pricing did not improve your conversion, you learned that your audience doesn’t prioritize pricing in their decision process. Then your next step would be to find out what arguments are the most important for your audience.
Your new learnings will make you ask many new questions. Fantastic! Let it give you new ideas for new tests and restart the amazing loop of improvement.