They think it’s all over: when is a test finished? Top 10 tips for getting the best results

“When is a Multivariate test finished?” This is the most common enquiry among brands and clients using Multivariate or AB Testing to improve conversion rates. Some expect instant data; others accept that test results evolve and mature – in the same way that stock market values can go down as well as up. Given unlimited time, a tester can watch the influence of external factors – such as market conditions and seasonality – play out, and the data only gets better. But therein lies the problem: hardly anyone has unlimited time these days.

So, what is the optimum length of time to run a test? The short answer is: it depends. Test data is made up of known and unknown variables.

The known variables are the average monthly visitors to the test page and the visits to the conversion pages. You may even have Goal Flow set up in analytics to show drop-offs through the funnel. The great unknown, however, is the improvement (or deterioration) against the original.

Data reported within a test is usually:
•    Visitors who saw a design (e.g. the original, combination one and combination two)
•    Number of conversions achieved by a specified design
•    The conversion rate attained (as a percentage)
•    The margin of error (MOE) – effectively your confidence that the sample is big enough to provide robust, valid statistics
•    An uplift against the original web design
•    Confidence level (i.e. 95% confident the new design will beat the original)
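
To make these figures concrete, here is a minimal sketch in Python – with entirely hypothetical visitor and conversion counts – of how the rate, margin of error and uplift relate to the raw numbers a test collects. The normal approximation used for the MOE is one common choice; individual tools may calculate it differently.

```python
from statistics import NormalDist

def conversion_stats(visitors, conversions, confidence=0.95):
    """Conversion rate and its margin of error (normal approximation)."""
    rate = conversions / visitors
    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # ~1.96 for 95%
    moe = z * (rate * (1 - rate) / visitors) ** 0.5
    return rate, moe

# Hypothetical counts: the original design versus one combination
orig_rate, orig_moe = conversion_stats(visitors=5000, conversions=150)
comb_rate, comb_moe = conversion_stats(visitors=5000, conversions=180)
uplift = (comb_rate - orig_rate) / orig_rate          # relative improvement

print(f"Original:    {orig_rate:.2%} ± {orig_moe:.2%}")   # 3.00% ± 0.47%
print(f"Combination: {comb_rate:.2%} ± {comb_moe:.2%}")   # 3.60% ± 0.52%
print(f"Uplift vs original: {uplift:.0%}")                # 20%
```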

Statistically (and in an ideal world), the safest course is to run tests until the margin of error no longer matters – i.e. the MOE has shrunk towards zero, or the increase in conversion rate exceeds it. In many tests it is simply not possible to get enough traffic to reach this stage (the back-of-envelope estimate below gives a sense of the numbers involved), so various optimisers impose their own cut-off points. Some say 30 days of data is enough; others suggest that 100 conversions per variation gives sufficient insight.
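
How much traffic is “enough”? Every figure below is assumed purely for illustration, but the standard two-proportion sample-size formula gives a sense of the scale: detecting a 10% relative uplift on a 2.5% baseline conversion rate at 95% confidence (with the conventional 80% power) calls for tens of thousands of visitors per variation.

```python
from statistics import NormalDist

baseline = 0.025                  # assumed 2.5% baseline conversion rate
variant = baseline * 1.10         # the 10% relative uplift we want to detect

z_alpha = NormalDist().inv_cdf(0.975)  # 95% confidence, two-sided
z_beta = NormalDist().inv_cdf(0.80)    # 80% power

variance = baseline * (1 - baseline) + variant * (1 - variant)
n = (z_alpha + z_beta) ** 2 * variance / (variant - baseline) ** 2
print(f"~{n:,.0f} visitors per variation")  # roughly 64,000
```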

In truth, neither is satisfactory. Test results can change radically from one month to the next – December versus January, say – and with only 100 conversions per variation there will still be a large margin of error, as the rough calculation below shows.
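
The rough calculation, again on assumed traffic figures: 100 conversions from 4,000 visitors gives a 2.5% conversion rate, but the 95% margin of error around it is still enormous relative to the kind of uplift most tests chase.

```python
from statistics import NormalDist

visitors, conversions = 4000, 100          # assumed: 100 conversions at 2.5%
rate = conversions / visitors
z = NormalDist().inv_cdf(0.975)            # 95% confidence -> z ~ 1.96
moe = z * (rate * (1 - rate) / visitors) ** 0.5

# Prints "2.50% ± 0.48% (19% relative)" - a ±19% error band comfortably
# swallows a typical 5-10% uplift, so the result is far from settled.
print(f"{rate:.2%} ± {moe:.2%} ({moe / rate:.0%} relative)")
```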

So, without further ado – here are the top ten tips for getting the speediest, most satisfactory conclusion:
•    Fit the test type to the visitor
With low-traffic pages, use AB testing and iterate; if you have high traffic, you can afford to try MVT

•    Brainstorm
When setting up a test, get your colleagues and agency partners to feed in ideas, and use information from analytics or usability sessions to inform the test design

•    Keep test elements consistent with the test goal
For example, if you want to increase click-through rate (CTR), changing the footer navigation is unlikely to contribute much

•    When using MVT keep the test down to a manageable number of combinations
Combinations multiply fast – seven elements with two variants each already gives 2^7 = 128 – and no one needs to read a test report with 128 combinations

•    Set yourself little milestones
If after a week the test data is inconclusive, run it for another week – but be prepared to make a decision

•    With Multivariate testing check the elements view
Even if there is no clear winner at combination level, your tool’s dashboard should allow you to isolate the specific benefit of a single element

•    Set yourself a confidence milestone
If you think 70% confidence is good, go with it. Test results at 100% confidence are rare, so make a verdict and stick to it (a sketch of how this confidence figure is commonly derived appears after these tips)

•    Be prepared to iterate
Your first test may not be conclusive, but it might be the first step to attaining the perfect web design

•    Think of lifetime value
That small uplift over a two-week period might be a big leap in revenue over 52 weeks – an extra 10 conversions a fortnight, for example, adds up to roughly 260 over a full year

•    There’s no such thing as bad data
A losing test is just as valuable as a winning one. Do not disregard the results – seek answers to their findings
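
Finally, as promised under the confidence-milestone tip: a minimal sketch of one common way a tool’s confidence figure is derived, a one-sided two-proportion z-test of a variation against the original. The counts are hypothetical, and real tools may use other methods (Bayesian or sequential approaches, for instance), so treat it as an approximation.

```python
from statistics import NormalDist

def confidence_beats_original(orig_v, orig_c, var_v, var_c):
    """Approximate confidence that the variation's true rate is higher."""
    p1, p2 = orig_c / orig_v, var_c / var_v
    pooled = (orig_c + var_c) / (orig_v + var_v)
    se = (pooled * (1 - pooled) * (1 / orig_v + 1 / var_v)) ** 0.5
    z = (p2 - p1) / se                      # standardised difference
    return NormalDist().cdf(z)              # one-sided p -> confidence

conf = confidence_beats_original(orig_v=5000, orig_c=150,
                                 var_v=5000, var_c=175)
print(f"Confidence variation beats original: {conf:.0%}")  # ~92%
```

At roughly 92%, this hypothetical result clears a 70% milestone but falls short of 95% – precisely the kind of verdict the tip asks you to make and stick to.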