Tuesday, July 25, 2017

3 Common Email A/B Testing Pitfalls – and How To Avoid Them

When creating content for an email, it’s tempting to guess what your audience will respond to. But that’s not necessarily the best approach – especially if you’re emailing multiple contact lists. Different audiences have different preferences, which can affect your email metrics and conversions.

So how can you turn the tide? Try A/B testing (if you haven’t already). It’s one of the easiest and most popular ways to improve your conversion rates. But not all businesses get it right. Some have tried it, only to get inconclusive results – which can be frustrating.

The truth is, small mistakes made during A/B testing can affect your test results. And that hampers your success.

But don’t worry, because I’m going to share some of the most common A/B testing mistakes – and how to avoid them. These tips are designed to help you keep your testing plans on track, so you can achieve more with your email marketing. So, let’s dive in!

 

Pitfall #1: You stop testing too soon.

This is the statistical equivalent of throwing in the towel. Stopping an A/B test as soon as you see a good result can invalidate your overall results. And all your hard work goes down the drain!

Many tools encourage this by letting you stop a test as soon as it hits statistical significance. But if you want to get the best results from your emails, you need to fight the urge to end your tests early. This may seem counterintuitive, but the more often you check the test, the more likely you are to spot a "winner" that's really just noise.
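Don't just take my word for it. Here's a quick back-of-the-envelope simulation (a minimal sketch in Python, purely my own illustration – the rates, checkpoints and function names are all invented) of an A/A test where both variants have the exact same true open rate, but we peek at the results every hundred sends:

```python
import random
from statistics import NormalDist

def z_test_pvalue(opens_a, sends_a, opens_b, sends_b):
    """Two-sided p-value for a two-proportion z-test on open rates."""
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = (p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b)) ** 0.5
    if se == 0:
        return 1.0
    z = (opens_a / sends_a - opens_b / sends_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def false_winner(true_rate, sends, checkpoints, alpha=0.05):
    """One A/A test: True if any checkpoint would have declared a 'winner'."""
    opens_a = opens_b = 0
    for i in range(1, sends + 1):
        opens_a += random.random() < true_rate
        opens_b += random.random() < true_rate
        if i in checkpoints and z_test_pvalue(opens_a, i, opens_b, i) < alpha:
            return True   # we'd have stopped early on pure noise
    return False

random.seed(7)
trials = 1000
peeking = set(range(100, 2001, 100))   # check every 100 sends
patient = {2000}                        # check only at the agreed sample size
print(sum(false_winner(0.20, 2000, peeking) for _ in range(trials)) / trials)
print(sum(false_winner(0.20, 2000, patient) for _ in range(trials)) / trials)
```

Checking once at the end keeps the false "winners" near the 5% you signed up for. In runs like this, peeking every hundred sends pushes that rate several times higher – even though neither email is actually better.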

 

The fix: Stick to the set sample size.

Discipline is key to combating these false positives. Before running an A/B test, set a sample size in stone – and avoid ending your test early (no matter how promising your results look). Not sure how to do that? The default A/B test settings in GetResponse recommend a sample size of 25%.
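If you'd rather work out a number yourself, the standard two-proportion sample size formula gives a ballpark. Here's a minimal Python sketch (my own illustration, not a GetResponse feature – the function name and the 20% to 23% example are just assumptions to fill in with your own figures):

```python
from statistics import NormalDist

def sends_per_variant(p_base, p_target, alpha=0.05, power=0.8):
    """Rough sends needed per variant to detect an open-rate lift p_base -> p_target."""
    norm = NormalDist()
    z_alpha = norm.inv_cdf(1 - alpha / 2)   # two-sided 5% significance
    z_beta = norm.inv_cdf(power)            # 80% chance of catching a real lift
    p_bar = (p_base + p_target) / 2
    spread = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
              + z_beta * (p_base * (1 - p_base) + p_target * (1 - p_target)) ** 0.5)
    return int(spread ** 2 / (p_target - p_base) ** 2) + 1

# e.g. to reliably spot an open rate moving from 20% to 23%:
print(sends_per_variant(0.20, 0.23))   # roughly 2,900 sends per variant
```

The takeaway: the smaller the lift you want to detect, the more sends you need before the result means anything – so decide the number first and stick to it.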


The same goes for the testing time. Give your contacts enough time to open or click – depending on which variable you're testing. I usually run the test for 4 hours and have GetResponse send the winning message for me. This is how I reduce testing bias. Simple!
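If your tool doesn't pick the winner for you, a quick sanity check after the window closes is to put a confidence interval around the gap between the two variants. A minimal sketch (my own illustration with made-up numbers, not a GetResponse report):

```python
from statistics import NormalDist

def open_rate_gap(opens_a, sends_a, opens_b, sends_b, confidence=0.95):
    """Confidence interval for (variant B's open rate minus variant A's)."""
    rate_a, rate_b = opens_a / sends_a, opens_b / sends_b
    se = (rate_a * (1 - rate_a) / sends_a + rate_b * (1 - rate_b) / sends_b) ** 0.5
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    diff = rate_b - rate_a
    return diff - z * se, diff + z * se

# made-up numbers pulled from a report after the 4-hour window:
low, high = open_rate_gap(opens_a=540, sends_a=2500, opens_b=610, sends_b=2500)
print(f"B beats A by somewhere between {low:.1%} and {high:.1%}")
# if that range includes 0, the "winner" could just be noise
```

I've used open rates here; swap in clicks if that's the variable your test is actually about.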


Pitfall #2: You only focus on conversions.

Sometimes when we’re deep in the weeds, we focus on the trees and miss the forest. What does that mean for A/B testing? It’s when you concentrate only on conversions and lose sight of the long-term results. That hyper-focus means you end up choosing your best message based solely on the click rate.

While all email marketers want sky-high CTRs (click-through rates), for some split test combinations the open rate is the more meaningful measure of success.

 

The fix: Measure what you intend to measure.

Before you start your A/B test, you should outline a hypothesis you wish to prove or disprove.

Here’s an example:

If I’m testing different subject lines, then the open rate is the logical metric to track. The subject line that gets the most opens is the winner.

Or:

If I’m split testing emails with different content or design, then I’ll track CTRs. The content that gets the most clicks is the winner.

Bonus tip: Set a hypothesis based on your email marketing KPI. That way, you won’t be distracted and will measure what you intend to.
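To make that concrete, here's a minimal sketch (the variant names and report numbers are invented) that routes the winner decision through whichever metric your hypothesis names:

```python
# made-up report numbers for two variants of the same campaign
variants = {
    "A": {"sends": 2500, "opens": 540, "clicks": 96},
    "B": {"sends": 2500, "opens": 610, "clicks": 88},
}

def metric_value(stats, metric):
    if metric == "open_rate":        # for subject-line tests
        return stats["opens"] / stats["sends"]
    if metric == "ctr":              # for content / design tests
        return stats["clicks"] / stats["sends"]
    raise ValueError(f"unknown metric: {metric}")

def pick_winner(variants, metric):
    """Declare the winner using the metric your hypothesis names, nothing else."""
    return max(variants, key=lambda name: metric_value(variants[name], metric))

print(pick_winner(variants, "open_rate"))  # B: the better subject line
print(pick_winner(variants, "ctr"))        # A: the better content, despite fewer opens
```

Notice that the same two emails can produce different winners depending on the metric – which is exactly why the hypothesis has to come first.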

 

Pitfall #3: You only test incremental changes.

Sure, a newsletter might get a big return after changing something small like a button color. But for most of us, these tiny tweaks won’t produce meaningful results.

With A/B testing, it can be tempting to focus on minuscule improvements. But then you risk missing the bigger opportunities that would make a far greater impact.

 

The fix: Test radical changes.

A good rule of thumb? Test whole content sections or the email design. If you’re seeing weak CTRs, then it may be a sign you should invest in making radical changes, rather than incremental ones.

It does take more work than simply A/B testing subject lines or send times, because you may have to completely redesign the email. So I suggest you perform radical tests occasionally – and on large contact sample sizes.

 

Over to you

Now you know the major pitfalls I spotted when A/B testing our newsletters. But there are many more.

What missteps have you taken in measuring your messages? And how did you avoid them the next time? Share your thoughts below!


The post 3 Common Email A/B Testing Pitfalls – and How To Avoid Them appeared first on GetResponse Blog - Online Marketing Tips.
