With so many emails flying about nowadays, it’d be silly not to test what works best for your subscribers. Even simple A/B testing can be highly effective, increasing email engagement and conversions. Time to get testing with this advice from our digital partners, Connecting Element.
A/B testing, also known as split-testing, means testing one small change against another, with everything else remaining the same so that the impact of the small change can be truly understood. For example, testing one subject line over another to see what brings in the most email opens.
This method allows you to test different variations within a single email campaign to determine what your subscribers prefer to interact with, meaning you can improve your content by making informed decisions about what subscribers want to see.
You can set up two (or more, depending on the platform you use) variations of the campaign you are sending out. All you need to do is:
- Identify your variable
- Decide on the variations you would like to test, e.g. different banners
- Determine the size of data you would like to test
- Choose your winning metric
- Keep your testing data somewhere safe so that you can refer back to it later
A/B testing is a continuous strategy that will help you understand your subscribers, refine the user experience and influence your approach to your marketing and business decisions.
Let’s go through it step by step…
Step 1: Identify the variable
When thinking of A/B testing, many people automatically assume subject lines… But there are many more variables to choose from, such as:
- From name/address – Does the company name receive more opens or does a personal approach work better? info@ or hello@?
- Content/wording – The tone of voice and length of the content. Does your audience prefer a short and sweet email or a content-heavy newsletter?
- Email design/layout – Short emails or long emails? Wide emails or thin emails? With or without a navigation bar? The options are endless.
- Artwork – GIFs vs static images? Pastel colours or bold colours?
- Send date and time – What day and time are your subscribers most likely to open the email?
- Call to action – Test the colours, positioning and wording used. Do your subscribers respond better to “find out more” or “take a sneak peek”?
- Subject line/summary line – What is the ideal length? Yay or nay to emojis? Does personalisation increase engagement?
It’s important to test only one variable at a time, so that you can easily trace changes in engagement back to the variable that caused them. Also, don’t base decisions on a single email test. Run the same variant test across a number of email campaigns so that you have plenty of results as a foundation for change.
Step 2: Decide on the variations
Once you’ve decided on the topic of your test, you’ll need to decide which variations to use. For example, let’s say you want to test the colours of your calls to action.
Test A = Red button with white text
Test B = Blue button with white text
Depending on your email management platform, you might be able to test more than one variant at a time. The bigger your mailing list, the more tests you can run with a reliable sample size, but we wouldn’t recommend more than two variants if you have a small database. The more variants you add, the harder it becomes to track cause and effect.
Step 3: Define email test settings
The tests should be sent to a small group of your data, for example, 10-20% of the database depending on its size. Half of this group will receive test A, while the other half receives test B. The remaining recipients will receive the winning version.
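To make the sizing concrete, here’s a minimal sketch of the split described above. The list size and the 10% test fraction are illustrative numbers, not recommendations for your database:

```python
def split_test_groups(list_size, test_fraction=0.1):
    """Split a mailing list into two test groups and a remainder.

    test_fraction is the share of the list used for testing
    (e.g. 0.1-0.2, per the 10-20% suggested above); half of the
    test group receives version A, half receives version B, and
    the rest of the list later receives the winning version.
    """
    test_size = int(list_size * test_fraction)
    group_a = test_size // 2
    group_b = test_size - group_a
    remainder = list_size - test_size
    return group_a, group_b, remainder

# A hypothetical 50,000-subscriber list with a 10% test group:
a, b, rest = split_test_groups(50_000, test_fraction=0.1)
print(a, b, rest)  # 2500 2500 45000
```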
Next, you need to decide on test duration. Some email management platforms will give you the option to automatically send the winning test to the remaining recipients after the test period, or you can choose to send the winning test manually once you’ve analysed the results.
The longer the test period, the better. The minimum period of time we’d suggest for testing is about four hours, but if you have planned ahead and have the time, why not test over a 24-hour period? That way you even get to test how the time of send affects engagement rate… Bonus.
But bear in mind what time the winning test will be sent. If you have dispatched your test emails at 6pm, to run for six hours, do you want your remaining subscribers to receive the optimised email at midnight? It’s worth working out the maths ahead of time, otherwise you may get a few caffeine-fuelled complaints the next morning.
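The timing arithmetic is simple but easy to overlook. A quick sketch, using the 6pm send and six-hour window from the example above (the date is arbitrary):

```python
from datetime import datetime, timedelta

test_send = datetime(2024, 3, 4, 18, 0)   # test emails dispatched at 6pm
test_window = timedelta(hours=6)          # test runs for six hours

# The winning version goes out when the test window closes:
winner_send = test_send + test_window
print(winner_send.strftime("%H:%M"))  # 00:00 - midnight, probably not ideal
```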
Step 4: Choose your winning metric
Your winning metric will depend on your test. For example, if you are testing the subject line, summary line, “from” name or the “from” address, you would select “open rate” as your winning metric.
Or, if you are changing a design or content element, you’d select the “click to open rate” or “effective rate” as the winning metric, because the objective of the test is to drive more clicks.
The “click to open rate” gives the percentage of subscribers who clicked on a link in an email relative to the total number who opened it, therefore gauging the overall effectiveness of an email campaign (Source: Sailthru).
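In other words, the calculation is just clicks divided by opens. A minimal sketch, with made-up numbers:

```python
def click_to_open_rate(unique_clicks, unique_opens):
    """Click-to-open rate: subscribers who clicked a link,
    as a percentage of those who opened the email."""
    if unique_opens == 0:
        return 0.0
    return 100 * unique_clicks / unique_opens

# e.g. 150 subscribers clicked out of 1,200 who opened:
print(click_to_open_rate(150, 1200))  # 12.5
```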
Step 5: Create a reporting doc and save it well
There’s no point doing all of this testing if you don’t document the results. Testing is an ongoing process because there is always something to be learned about email optimisation and subscriber behaviour. So, once you have your reporting document that gathers your email testing, results, and insights, save it with a really clear name like ‘GREAT EMAIL TESTING’ and put it somewhere safe.
No. Not on your desktop like everything else. We’ve been testing that one for years and we can guarantee you’ll never see the document again…
If you’d like a helping hand with email marketing – from creating a new email template to email strategy and testing, why not shoot us an email? Contact firstname.lastname@example.org to discuss how your traditional marketing strategies dovetail with digital solutions.