Let’s just start by saying we’re not against A/B testing. It can be a powerful way to gather insights and boost the effectiveness of your communications. But if done incorrectly, it can turn into a costly exercise in futility.
A recent article by Campaign Monitor suggested that “a whopping 39% of brands don’t test their broadcast or segmented emails”, and that by “failing to A/B test, their campaigns are not running at their optimum potential”.
We would argue that applying a blanket rule of “test everything, every time” is actually the less optimal strategy. Marketers often get lost in the process of A/B testing, becoming hyper-focused on minor optimisations without stopping to consider other, more effective ways of driving long-term uplift.
Here are a few questions you should ask yourself before falling into the A/B testing trap:
Is your test set up correctly?
- Is there a meaningful difference between the two variants? If you’re testing two practically identical email subject lines, it might be worth saving your energy. Think about what the potential uplift in open or click rates could feasibly be, and whether your list is big enough to detect it (see the sketch after this list).
- Are all other variables constant? Imagine you’re a scientist. You wouldn’t test the effects of a new fertilizer by changing the soil type, watering schedule, and plant species at the same time, would you? The same logic applies here.
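To make “plan accordingly” concrete: a quick sample-size check tells you whether your list can even detect the uplift you’re hoping for. Here’s a minimal sketch using the standard two-proportion sample-size formula – the 20% baseline open rate and the uplifts are illustrative assumptions, not benchmarks.

```python
# Rough sample-size check before committing to a subject-line test.
# The baseline open rate and uplifts below are illustrative assumptions.
from scipy.stats import norm

def sample_size_per_variant(p_base, uplift, alpha=0.05, power=0.80):
    """Recipients needed per variant to detect an absolute uplift in open
    rate with a standard two-proportion z-test."""
    p_test = p_base + uplift
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for a two-sided 5% test
    z_power = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p_base + p_test) / 2
    spread = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
              + z_power * (p_base * (1 - p_base) + p_test * (1 - p_test)) ** 0.5)
    return int(spread ** 2 / uplift ** 2) + 1

# A "practically identical" subject line might shift opens by half a point:
print(sample_size_per_variant(0.20, 0.005))  # ~101,000 recipients per variant
# A genuinely different angle might plausibly shift them by three points:
print(sample_size_per_variant(0.20, 0.03))   # ~3,000 recipients per variant
```

If the number that comes back is bigger than your list, the test can’t reach a verdict either way – save your energy.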
Is the effort worth the learning?
- Setting up A/B tests can be a marathon, not a sprint. If you’re running a test on whether blue or green buttons get more clicks, consider whether it’s worth the time and cost. Will the potential 0.5% increase in clicks justify the hours spent? A quick back-of-the-envelope calculation (like the one after this list) will tell you.
- And don’t forget the domino effect of multiple variants. When you’re juggling several test versions, a change to one can knock on to the others, and every extra variant splits your sample, so each takes longer to reach significance. That’s a recipe for missed details and, ultimately, wasted resources.
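To put numbers on “is it worth it”, a rough value-versus-cost calculation does the job. Every figure in this sketch is a made-up assumption – swap in your own list size, click value and costs.

```python
# Back-of-the-envelope check on whether a button-colour test pays its way.
# Every figure below is a made-up assumption; substitute your own numbers.
list_size       = 50_000   # recipients per send
sends_per_year  = 24       # fortnightly campaign
uplift          = 0.005    # the hoped-for half-point lift (absolute)
value_per_click = 0.40     # revenue attributed to one click, in GBP
test_cost       = 2_000    # GBP of design, build and analysis time

extra_clicks = list_size * sends_per_year * uplift  # 6,000 clicks/year
extra_value  = extra_clicks * value_per_click       # 2,400 GBP/year
print(f"Extra value per year: £{extra_value:,.0f} vs £{test_cost:,} to run the test")
```

With these invented numbers, even a successful test barely pays for itself in year one – exactly the kind of thing worth knowing before you spend the hours.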
Are you testing across a range of emails and audiences?
- What works for one group may not work for another. Before rolling out a finding across your entire audience, make sure it holds in more than one segment (see the sketch below). The hip, young demographic might love your quirky emoji-laden subject line, but your professional audience might not.
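One straightforward way to sanity-check a winner is to re-run the significance test within each segment. This sketch uses a two-proportion z-test from statsmodels; the segment names and open counts are invented for illustration.

```python
# Re-check a "winning" subject line inside each audience segment before
# rolling it out everywhere. The segments and counts here are invented.
from statsmodels.stats.proportion import proportions_ztest

# Per segment: (opens of A, sends of A, opens of B, sends of B),
# where A = plain subject line and B = emoji subject line.
segments = {
    "18-24":        (1_050, 5_000, 1_180, 5_000),
    "professional": (  990, 5_000,   890, 5_000),
}

for name, (opens_a, n_a, opens_b, n_b) in segments.items():
    _, p_value = proportions_ztest([opens_a, opens_b], [n_a, n_b])
    print(f"{name}: A={opens_a/n_a:.1%}, B={opens_b/n_b:.1%}, p={p_value:.3f}")
```

If the winner flips between segments – as it does in this invented data – roll the change out per segment, not across your whole database.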
Are you recording and applying the learnings?
- If a test happens in the woods and no one records the results… well, you get the point. Document your findings – a lightweight log like the sketch after this list is enough – and make sure they’re used in future campaigns. Otherwise, what’s the point of all this testing?
- Use those insights to inform your strategy moving forward. Testing the same thing repeatedly isn’t just annoying – it’s expensive.
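A test log doesn’t need to be fancy – a shared CSV that every test appends to will do. Here’s a minimal sketch; the field names and the log_test helper are suggestions, not a standard.

```python
# One way to keep a lightweight, queryable log of test results.
# The field names and the log_test helper are suggestions, not a standard.
import csv
import datetime

LOG_FIELDS = ["date", "campaign", "hypothesis", "variant_a", "variant_b",
              "metric", "result_a", "result_b", "significant", "decision"]

def log_test(path, **record):
    """Append one test result to a shared CSV so findings outlive the test."""
    record.setdefault("date", datetime.date.today().isoformat())
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow(record)

log_test("ab_test_log.csv",
         campaign="June newsletter", hypothesis="Emoji lifts opens",
         variant_a="Plain subject", variant_b="Emoji subject",
         metric="open rate", result_a="19.8%", result_b="17.8%",
         significant=True, decision="Plain wins for professionals; emoji for 18-24")
```

Six months later, “have we tested this before?” becomes a one-line lookup instead of a guess.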
Ultimately, while A/B testing can be a great way to optimise your opens and clicks, it’s just that – optimisation. If you want to see proper long-term uplift, you should be thinking about your content, your design, the frequency of your comms – and all at a much higher level than specific colours and subject line tweaks.
Here are a few tips for creating proper long-term change:
Make your emails f*cking great
- Email is an often-overlooked channel, but it can be extremely hard-working when done properly. As with any good ad, avoid overloading users with information; keep it visual and fun.
- If relevant to your brand, take a leaf out of Liquid Death’s book – they fill their emails with fun, bite-sized content, they don’t take themselves too seriously, and they don’t ask too much of their audience. It’s one of the few emails I actually look forward to opening.
Send more regularly
- Build a consistent cadence so that your audience learns when to expect your comms.
- Don’t forget that many people won’t open your email simply because they missed it – sending more regularly will at the very least ensure you’re making an impression on the majority of your database.
So, there you have it. By following these guidelines, you’ll stop wasting money on pointless tests and start making smarter, data-driven decisions that enhance your CRM strategy.