Blog

A test is not just for Christmas

Christmas starts earlier each year. Next week (it's August at the time of writing) the shelves in your local supermarket will be cleared of back-to-school products: out go the calculators, jumpers, rulers and Ninja Turtle backpacks, and in sneak the wrapping paper and fairy lights. Heck, one of my colleagues has already started buying gifts, the maniac!

This got me thinking (yes, I'm obsessed) about Multivariate and AB Testing. Just as a dog is for life, not just for Christmas, your test programme is not something to be petted over one day and then ignored before the Alka Seltzer has settled at the bottom of your 1st January glass. I have heard the objection from digital marketers: "yeah, we tried AB Testing once; it just didn't work for us". This excuse is continually used to cover a lack of insight and a failure of nerve.

Under no circumstances will testing "not work for you", for the simple reason that there is no such thing as bad data. If you think your test programme has failed, you've been badly advised or you've misunderstood what testing is really for. It is not about winners and losers. It's about introducing a rigorous, data-led approach to managing your design and assessing the risks and benefits of design changes: a high-value insurance policy. It's also about opening the door to the holy grail of digital marketing: proper personalisation and assigning value to all the media touchpoints that turn browsers into customers.

Abandoned test programmes are often about a lack of will on the part of the tester. Let's look at some scenarios of first-time testers "failing" with their test campaigns:

The Christmas Cake Test. This is a classic. First-time testers often get a new testing tool and chuck all the design changes they've wanted to make into a single test. In comes the new carousel, the video and the flashy graphics. This is the equivalent of throwing everything in the cupboard into your Christmas cake and expecting it to taste nice. The problem is that, win or lose, you won't know why. Does the cake taste disgusting because of too much brandy, or not enough? Was it the carousel that made the difference, or the video? This is the fundamental flaw with AB testing a whole design: you'll know whether it won or lost, but you won't know why. So if you're starting testing, begin with something simple: a change in button language, perhaps, or a new heading. Monitor the results, iterate and move on.

The All or Nothing Test. Similar to the Christmas cake test, the All or Nothing is a single test that the tester bets the mortgage on. The tester fights for it in planning meetings. The tester gets emotionally involved. A career is hanging on this. Don't go near these tests: AB testing is scientific and needs a dispassionate approach. When you start, run two or three tests at launch on relatively low-value pages instead of putting all your emotion into a single test. No two tests are alike; chances are you'll find something surprising and unexpected in at least one of them to take back to the planning meeting.

You Win When You Lose. I can't reiterate this enough: THERE IS NO SUCH THING AS BAD DATA. If you've tested a design and the original has won, the test has stopped you going down a design cul-de-sac and losing money. Most tests will be losing tests. I've heard it reported that Adobe lose 89% of their tests. However, the 11% that win make up for it in terms of additional performance.

Less Is More. The temptation is always there to chuck the kitchen sink at a test; when planning a test programme, it's usually harder to decide what to leave out than what to put in. This is where it pays to have the experience of a professional agency behind you when you are a first-time tester. Take a quantum approach to design changes and break an on-page element down into its indivisible parts. Let me give you a scenario: you've changed a square orange "add to basket" button into a rounded green "buy now" button, increasing click-through by 25%. So, what made the difference? You've actually changed seven things: the colour, the number of words, the number of text characters, the edging, the semantic emphasis in the text (from passive to active), the amount of colour space in the button and the unified whole in relation to other elements on the page. Therefore, break your changes down into the smallest indivisible parts you can (this is where Multivariate comes in, as it lets you isolate the impact of the sub-elements).
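
To make the "indivisible parts" idea concrete, here is a minimal sketch in Python of how that button change could decompose into separate multivariate factors. The factor names and levels are hypothetical examples for illustration, not the API of any particular testing tool:

```python
from itertools import product

# A full-factorial breakdown of the hypothetical button change: each
# sub-element becomes a factor, and every combination becomes a variant,
# so the impact of each individual factor can be isolated in the results.
factors = {
    "colour":  ["orange", "green"],
    "wording": ["Add to basket", "Buy now"],
    "shape":   ["square", "rounded"],
}

variants = [dict(zip(factors, combo)) for combo in product(*factors.values())]

for i, variant in enumerate(variants, start=1):
    print(f"Variant {i}: {variant}")
# 2 x 2 x 2 levels = 8 variants to split traffic across.
```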

Winning the World Cup. With all the will in the world, you're not going to win the World Cup of testing (clue: there isn't one). So when you set up your first test, you need to assess realistically what you can achieve and set up multiple goals. For example, if you're a shoe retailer making changes to a landing page, is it realistic to expect that this test will produce a dramatic uplift in your overall number of purchases? It might do, but maybe the landing page is still a heck of a long way from the basket. It's always worth monitoring revenue changes on all tests, but by setting up multiple goals you get a more nuanced view of how the challenger design affects user behaviour. As well as tracking final purchase on a landing page test, you should also track page bounce and click-throughs to important pages, such as specific product pages, to assess the success of a test.
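
As a rough illustration, here is a minimal sketch of what reporting several goals per variant might look like, rather than purchases alone. The numbers and goal names are entirely made up:

```python
# Made-up results for a landing page test; goal names are illustrative only.
results = {
    "control":    {"visitors": 5000, "bounces": 2100, "product_clicks": 1400, "purchases": 90},
    "challenger": {"visitors": 5000, "bounces": 1800, "product_clicks": 1700, "purchases": 95},
}

for name, r in results.items():
    v = r["visitors"]
    print(f"{name}: bounce rate {r['bounces'] / v:.1%}, "
          f"product click-through {r['product_clicks'] / v:.1%}, "
          f"purchase rate {r['purchases'] / v:.2%}")
```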

How Bad Can It Get? Another great test is to look at worst-case scenarios for radical changes. For example, if you force potential customers to register BEFORE they make a purchase, you are likely to see a high rate of cart abandonment. How do you weigh up the long-term benefits and risks of such a radical step? The users that DO register may become loyal customers and brand advocates whom you can re-market to. It's always worth testing something that you expect to be a "loser" against the original, with one eye on whether there are hidden benefits in the long term.

The Flat Soufflé. My last cooking metaphor. Sometimes it's really tempting to open the oven before the test has finished; the result is a flat soufflé, undercooked data, an inedible dish with no real insight. Tests should be left to roll over a normal cycle for your business. If you're in retail you need to run your test over a couple of weekends, and you probably need the weekend after payday too. If you're in leisure and you launch a test in the school holidays, you'll also want a week when the kids are back at school. The most important stat in test data is statistical significance: the confidence that the difference you're seeing is down to the design change rather than pure chance, which depends heavily on how big your sample is. If you can reach 95% confidence or better, great. If not, you'll need a couple of clear weeks to observe the trends in winning or losing designs.
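
For the statistically minded, here is a minimal sketch of a two-proportion z-test on made-up conversion counts. It is one simple way to put a confidence figure on an AB result, not the method any particular testing tool uses:

```python
from math import sqrt, erf

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how confident can we be that B really differs from A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 90/5000 conversions for the original, 120/5000 for the challenger.
z, p = ab_significance(90, 5000, 120, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 is roughly the 95% confidence mark
```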

If you get a shiny new optimisation project for Christmas, I hope the above tips help you. Optimisation needs time, optimisation needs care and attention, and optimisation needs a walk around the block. For a lifetime of successful testing and rewards, hang in there past Boxing Day.