So instead of relying on guesswork, it would make far more sense to perform A/B testing (sometimes called split testing) on your content to see what works best.
You won’t be surprised to hear that different audiences behave differently. Older people will have different interests to younger people, people looking for cheap products will behave differently to those looking for high-end products, and regular customers will behave differently to brand-new customers.
And what works for one business or industry won’t necessarily work for another. When it comes to the content of your landing pages and emails, there really is no such thing as ‘best practice’, because someone else’s best practice won’t be the same as yours.
A/B testing will help you work out how your audience reacts, so you can make sure you are giving them what they really want, to increase conversions and sales.
A/B testing is a marketing experiment (eg on your landing pages or emails) in which you split your audience in two and show each half a slightly different version of the same content, to see which performs better. So you show version A to half of your audience and version B to the other half.
To run an effective A/B test you need to create two versions of a piece of content, with changes to just one thing – this could be something like the headline, the image, the length of the copy or the background colour. You will show one version to one audience and the other version to another audience of the same size. Set a time period which is long enough to draw accurate conclusions about the results, then analyse which version worked better.
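If you manage the send yourself, the 50/50 split can be as simple as shuffling your recipient list and cutting it in half. The Python sketch below is one way to do this; the function name and the example addresses are purely illustrative and not tied to any particular email platform.

```python
import random

def split_audience(recipients, seed=42):
    """Randomly split a recipient list into two equal-sized groups (A and B)."""
    shuffled = recipients[:]              # work on a copy, keep the original intact
    random.Random(seed).shuffle(shuffled) # fixed seed so the split is reproducible
    midpoint = (len(shuffled) + 1) // 2   # group A gets the extra recipient if odd
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical example: each half receives a different subject line.
audience = ["ann@example.com", "bob@example.com", "cat@example.com", "dan@example.com"]
group_a, group_b = split_audience(audience)
print(len(group_a), "recipients get version A;", len(group_b), "get version B")
```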
Two common types of A/B test are the user experience test and the design test.
The user experience test involves moving something, such as a call-to-action button, on your page to see if that increases clicks. The existing version is A, while the new version is the challenger, or B. So in version B, you could move the button from the bottom of the page to the top, or from the sidebar to the middle of the page.
In a design test, you would leave your call-to-action button in its usual place, but perhaps experiment with the colour. It’s important that the B version of your button still leads to the same landing page. If you usually use a green button, but the blue button receives more clicks after your A/B test, it could be worth switching to a blue button for all of your existing landing pages and future marketing campaigns.
Every business is different and there are many reasons why you may want to conduct split testing. Some of the most common reasons to carry out A/B testing are:
Increased conversions – making changes to text, colours or the layout of your page can increase the number of people who click on a link to get to a landing page. This can increase the number of people who convert into a lead – whether they are filling out a contact form on your site, signing up for a subscription or making a purchase.
Increased web traffic – testing different headlines and titles can increase the number of people who click on a link to get to your website. More traffic means more visibility for your products and services and ultimately should lead to more sales.
Reduced bounce rate – if your visitors leave, or bounce, quickly after visiting your site, testing different fonts, images or introductory text can help cut down your bounce rate and encourage your visitors to stick around for longer.
A/B testing almost always makes good economic sense. Testing is low cost but, when you get the answers you are looking for, it is high in reward.
If you employ someone for a salary of £30,000 to write five pieces of content a week, each piece of content would cost on average just over £115 (based on 260 posts a year). If each post generates around 10 leads, that means it costs your business £115 for 10 leads.
Rather than producing two separate pieces of content, your content creator could spend two days producing two versions of the same post (A/B testing). If the new version (version B) is a success and doubles your conversions to 20, you have spent £115 to potentially double the number of customers you get through your site.
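To make that arithmetic concrete, here is a quick back-of-the-envelope calculation using the figures above; the numbers come from the example, not from benchmarks for your business.

```python
salary = 30_000          # annual salary of the content writer
posts_per_year = 5 * 52  # five pieces of content a week

cost_per_post = salary / posts_per_year
print(f"Cost per post: £{cost_per_post:.2f}")                         # roughly £115

leads_a = 10             # leads from the original version (A)
leads_b = 20             # leads if the challenger (B) doubles conversions

print(f"Cost per lead, version A: £{cost_per_post / leads_a:.2f}")    # about £11.54
print(f"Cost per lead, version B: £{cost_per_post / leads_b:.2f}")    # about £5.77
```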
If the test fails, you have lost £115 and haven’t increased your clicks, conversions or sales, but you know what not to use next time! You are one step closer to getting it right for your business. It doesn’t matter how many times your tests ‘fail’, because each failure is giving you answers and the eventual success will almost always outweigh the cost of conducting the testing.
Test one variable at a time – when you work on optimising your website and emails, it is likely that you will want to test a number of variables. But the only way to be sure of the effectiveness of a change is to test one factor at a time, otherwise you can’t be sure which variable led to the change in performance.
Consider the various elements of your landing pages, emails or marketing resources and possible alternatives for wording, layout and design. Easy things to test include your email subject lines or the way you personalise your emails.
Small changes, like different images in your email or different wording on your call-to-action button can produce big results.
Decide what you want to learn before you run the test. If you wait until afterwards to consider why the data is important to you and what your goals are, you might realise that you haven’t set up the test in the right way to get the answers you were looking for.
Your existing version is the control. From there, build a different version, or your challenger. So if you are wondering whether having less text on your landing page would increase conversions, set up your challenger with less text than your control version.
With an email, your audience is fixed, so you can simply split your recipient list in half. A web page is slightly more tricky, as it doesn’t have a finite audience. In this case, you need to leave your test running for long enough to get sufficient views, so that you can tell if there is a significant difference between the two versions.
Run only one test on a campaign at a time. For example, if you are running an A/B test on an email that leads to a landing page at the same time as your colleague is split testing that landing page, you can’t be sure which change caused an increase in leads.
You should also run both versions at the same time, so that timing doesn’t skew the results. The only exception to this rule is, of course, if you are testing the timing itself. In that case, you would send out two identical emails, but on different days of the week or at different times of the day. The right time to send out emails can vary significantly between industries and product types, so it always makes sense to find the optimal time for your business.
How long is long enough will vary for every business and every type of test. The results from testing an email should come through in days, if not hours, but testing a web page can take much longer. A key point to remember is that the more traffic you get, the quicker you will reach statistically significant results; if your business doesn’t get much traffic to your website, the results of your A/B testing will take much longer.
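If you want a rough feel for how much traffic ‘long enough’ implies, a standard two-proportion sample-size estimate can help. The sketch below uses the usual normal-approximation formula; the baseline conversion rate and the uplift you hope to detect are assumptions you would replace with your own figures.

```python
from scipy.stats import norm

def visitors_per_version(baseline_rate, target_rate, alpha=0.05, power=0.8):
    """Approximate visitors needed per version to detect a change in conversion
    rate, using the standard two-proportion normal-approximation formula."""
    z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for a 95% confidence level
    z_beta = norm.ppf(power)            # ~0.84 for 80% power
    p1, p2 = baseline_rate, target_rate
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Assumed example: a 4% baseline conversion rate, hoping to detect a lift to 5%.
n = visitors_per_version(0.04, 0.05)
print(f"Roughly {n:.0f} visitors per version")   # in the region of 6,700 each
```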
It is also worth asking real visitors for feedback, rather than relying on the numbers alone. One way to do this is through a pop-up survey on your site, which comes up as a visitor is about to leave or appears on your thank-you pages. Asking why they did or didn’t click on a button or fill out a form will give you more understanding of your visitors and the way they behave.
To make analysing the results easier, you might want to put them in a simple table. There are only two possible outcomes – converted or did not convert – from your two versions, A and B.
If 5000 people received a version of your email – 2500 received each email – and 200 converted from version A, while 800 converted from version B, you would say that was significant. But if 460 converted from version A and 480 converted from version B, you would probably argue that the difference wasn’t significant enough to warrant making a change.
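A quick way to sanity-check whether a difference like this is statistically meaningful is a chi-squared test on the converted/did-not-convert counts. The sketch below uses SciPy and the figures from the example above.

```python
from scipy.stats import chi2_contingency

def is_significant(converted_a, converted_b, sent_per_version, alpha=0.05):
    """Chi-squared test on the 2x2 table of converted vs did not convert."""
    table = [
        [converted_a, sent_per_version - converted_a],   # version A
        [converted_b, sent_per_version - converted_b],   # version B
    ]
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha, p_value

# 200 vs 800 conversions out of 2,500 each: clearly significant.
print(is_significant(200, 800, 2500))   # (True, p far below 0.05)

# 460 vs 480 conversions out of 2,500 each: not significant.
print(is_significant(460, 480, 2500))   # (False, p well above 0.05)
```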
If neither version is statistically better, you have learned that the variable you tested doesn’t have a significant impact on your business and you can stick with your original version.
If you tested your headline, you could move on to testing your images, colour scheme or copy. The more you test and amend, the more chances you have to increase your conversion rates, leads and sales.