DIY Week Marketing Series: How to make sure your e-shots are winning business

In the last of a series of articles on sales and marketing in the home improvement and DIY industry, first published by DIY Week, Kate from home and garden marketing agency Brookes & Co shares her tips for successful e-shots.

It’s not that long ago that every other unsolicited email received was about falling foul of the GDPR rules (and how ironic is that!). Companies keen to do the right thing ditched their lists of prospects and started again to ensure they were fully permission-based. After the initial pain of that decision, starting over has proved a good way to ensure that e-shots can indeed be a high-reward, low-wastage route to opening dialogue with new customers, generating repeat business and maintaining positive relationships. So, if you’re not currently using them, then it’s time to try – or try again.

The key to e-shots is testing. And then measuring the results, refining and testing again. Even simple A/B testing can be highly effective and increase email engagement and conversions. A/B testing, also known as split-testing, means testing the effect of one small difference against a control, with everything else remaining the same. For example, testing one subject line against another to see which brings in the most opens.

This method allows you to test different variations within a single email campaign to determine what recipients find most motivating. You can set up two (or more, depending on the platform you use) variations of the campaign.

All you need to do is:

  1. Identify the variable
  2. Decide on the variations you would like to test
  3. Determine the size of data you would like to test
  4. Choose your winning metric
  5. Keep your testing data somewhere safe so that you can refer to it.

Step 1: Identify the variable

When thinking of A/B testing, many people automatically think of subject lines, but there are many more variables to choose from, such as:

  • From name/address – Does the company name receive more opens or does a personal approach work better? info@ or hello@?
  • Content/wording – The tone of voice and length of the content. Does your audience prefer a short and sweet email or a content-heavy newsletter?
  • Email design/layout – Short emails or long emails? Wide emails or thin emails? With or without a navigation bar? The options are endless.
  • Artwork – GIFs vs static images? Pastel colours or bold colours?
  • Send date and time – What day and time are your subscribers most likely to open the email?
  • Call to action – Test the colours, positioning and wording used. Do your subscribers respond better to “find out more” or “take a sneak peek”?
  • Subject line/summary line – What is the ideal length? What about emojis? Does personalisation increase engagement?

It’s important to test only one variable at a time so that you can track changes in engagement easily. Also, don’t base decisions on a single email test. Run the same variant test across several email campaigns so that you have plenty of data.

Step 2: Decide on the variations

Once you’ve decided what to test, you’ll need to decide which variations to use. For example, let’s say you want to test the colour of your call-to-action buttons.

Test A = Red button with white text
Test B = Blue button with white text

Depending on your email management platform, you might be able to test more than one variant at a time. The bigger your mailing list, the more tests you can run with a reliable sample size, but the more variants you add, the harder it becomes to track cause and effect.
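How big is “big enough”? As a rough guide, the standard two-proportion sample-size formula tells you how many recipients each variant needs before a difference in, say, open rate is likely to be real rather than luck. A minimal sketch in Python – the function name and the illustrative open rates are our own, not part of any email platform:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_a, p_b, alpha=0.05, power=0.8):
    """Recipients needed in EACH variant to detect the difference between
    two proportions (e.g. open rates), at the usual 5% significance level
    and 80% power. Standard two-proportion approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_a - p_b) ** 2)

# Telling a 20% open rate apart from a 25% one needs roughly
# 1,100 recipients per variant:
print(sample_size_per_variant(0.20, 0.25))  # 1091
```

The smaller the difference you want to detect, the larger the sample each variant needs – which is why small lists are better served by fewer variants.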

Step 3: Define email test settings

The test e-shots should be sent to a small segment of your list, for example 10-20% of the database, depending on its size. Half of this group will receive test A, while the other half receives test B. The remaining recipients will receive the winning version.
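Most email platforms will handle this split for you, but if you ever need to do it by hand – on an exported list, say – the logic is straightforward. A minimal sketch in Python, assuming an illustrative 15% test fraction (the helper name is hypothetical):

```python
import random

def split_for_ab_test(recipients, test_fraction=0.15, seed=42):
    """Randomly split a mailing list into test group A, test group B and
    a remainder that will later receive the winning version.
    `test_fraction` is the share of the whole list used for testing
    (e.g. 0.10-0.20, as suggested above)."""
    shuffled = recipients[:]              # copy so the original is untouched
    random.Random(seed).shuffle(shuffled) # randomise to avoid ordering bias
    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2
    group_a = shuffled[:half]
    group_b = shuffled[half:test_size]
    remainder = shuffled[test_size:]      # gets the winner after the test
    return group_a, group_b, remainder
```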

Next, you need to decide on the test duration. Some email management platforms will give you the option to automatically send the winning test to the remaining recipients after the test period, or you can choose to send the winning test manually once you’ve analysed the results. The longer the test period, the better. If you have the time, why not test over a 24-hour period? That way you even get to test how the time of send affects engagement. But bear in mind when the winning version will go out: if you dispatched your test e-shots at 6pm, to run for six hours, do you want your remaining subscribers to receive the optimised email at midnight?

Step 4: Choose your winning metric

Your winning metric will depend on your test. For example, if you are testing the subject line, summary line, “from” name or “from” address, you would select “open rate” as your winning metric. If you are changing a design or content element, you’d select “click to open rate” (sometimes called the “effective rate”) as the winning metric, because the objective of the test is to drive more clicks.

The “click to open rate” is the percentage of subscribers who clicked on a link in an email relative to the total number of people who opened it, and so gauges the overall effectiveness of an email campaign (source: Sailthru).
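As a quick worked example (the numbers are illustrative and the helper is our own):

```python
def click_to_open_rate(unique_clicks, unique_opens):
    """Unique clicks as a percentage of unique opens."""
    return 100 * unique_clicks / unique_opens

# 150 subscribers clicked a link out of the 1,000 who opened the email:
print(click_to_open_rate(150, 1000))  # 15.0 (%)
```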

Step 5: Create a report and save it

There’s no point doing the testing if you don’t document the results.  Save the report with a clear name like ‘GREAT EMAIL TESTING’ and put it somewhere safe.
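The exact format matters less than consistency: record the same fields for every test so the results stay comparable over time. One possible sketch, appending each completed test to a running CSV log (the file name, columns and figures are all illustrative):

```python
import csv
from datetime import date

# Append one row per completed test to a running log.
with open("email_ab_tests.csv", "a", newline="") as f:
    csv.writer(f).writerow([
        date.today().isoformat(),  # date the test ran
        "subject line",            # variable tested
        "Plain subject",           # variant A
        "Subject with emoji",      # variant B
        "open rate",               # winning metric
        "22.4%",                   # result for A (illustrative)
        "25.1%",                   # result for B (illustrative)
        "B",                       # winner
    ])
```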

A/B testing like this is a continuous strategy that will help you understand your prospects and customers, refine their experience and inform your marketing and business decisions. If you’d like a helping hand, contact kate@brookesandco.net to discuss how your traditional marketing strategies dovetail with digital solutions.