Experienced email marketers know how to engage their audience, and A/B testing can help them optimize their efforts.
Each email that marketers send includes various elements, such as subject lines and preheaders, that organizations can tweak to improve open and click-through rates. Presenters at HubSpot's Inbound 2023 conference in Boston shared best practices for email A/B testing, a method that lets marketers see how subsets of their audience respond to different versions of an email.
A/B testing helps marketers improve engagement and increase revenue. However, marketers need to know which variables to isolate and how often to run the tests.
What is A/B testing?
A/B testing, also known as split testing, refers to experiments that marketers can run to compare the performance of email variations. These tests can be automated in most marketing platforms by sending two different email variations to a subset of your organization's contact list. After a certain period of time, the system evaluates which version performs better and sends the winning version to the remaining contacts on your list.
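The mechanics are simple enough to sketch in a few lines of code. The Python snippet below is a minimal illustration, not any particular platform's implementation: it randomly carves a test group out of a hypothetical contact list, splits it evenly between versions A and B, and holds the rest back to receive the winner.

```python
import random

def run_ab_split(contacts, test_fraction=0.2, seed=42):
    """Carve a test group off the contact list: half get version A,
    half get version B, and the rest wait for the winning version."""
    rng = random.Random(seed)
    shuffled = contacts[:]
    rng.shuffle(shuffled)
    test_size = int(len(shuffled) * test_fraction)
    group_a = shuffled[:test_size // 2]
    group_b = shuffled[test_size // 2:test_size]
    holdout = shuffled[test_size:]
    return group_a, group_b, holdout

contacts = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b, holdout = run_ab_split(contacts)
print(len(group_a), len(group_b), len(holdout))  # 100 100 800
```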
Common variables that email marketers test include:
- Sender name.
- Subject line.
- Preheader.
- Email copy.
- Image size.
- Call to action.
- Number of links.
- Send time.
- Email length.
Email testing best practices to adopt
Marketers can follow email testing best practices to identify weaknesses in their campaigns and strengthen them.
1. Design the right tests
While A/B testing provides empirical data that can improve your marketing efforts, each test still requires a human to set parameters. The effectiveness of an A/B test depends on its overall design, said Chris Eichelsheim, head of inbound marketing at Dtch. Digitals, a marketing agency based in the Netherlands.
First, marketers must decide which variables to test, depending on their specific needs. For example, an email may have a high open rate but a low click-through rate (CTR). In this case, the problem lies with the CTR, so the marketer might A/B test the email body copy instead of the subject line.
Next, the marketer selects an appropriate sample size for the test. Larger sample sizes usually yield more accurate results than smaller sample sizes.
Ideally, marketers should choose a sample size large enough to achieve statistical significance at a 95% confidence level. Statistical significance measures the likelihood that the results of an experiment are real and not due to chance. A 95% confidence level means the test result is accurate with 95% certainty. Online calculators can help marketers find the appropriate sample size to achieve this level of certainty.
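For readers who prefer to compute this directly rather than rely on an online calculator, the sketch below implements the standard two-proportion sample-size formula. The baseline and target open rates are hypothetical inputs chosen for illustration.

```python
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Contacts needed per variant to detect an open-rate change from
    p1 to p2 with a two-sided, pooled two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 at a 95% confidence level
    z_beta = norm.ppf(power)           # 0.84 at 80% power
    p_bar = (p1 + p2) / 2
    n = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
         + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2 / (p1 - p2) ** 2
    return int(n) + 1

# Hypothetical inputs: a 20% baseline open rate and a hoped-for lift to 25%
print(sample_size_per_variant(0.20, 0.25))  # about 1,094 contacts per variant
```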
For example, a marketer with more than 1,000 contacts might test with about 20% of their audience, so 10% will receive version A and 10% will receive version B. After a period of time, the marketer identifies the winner and sends the email to the remaining contacts. This ratio allows marketers to test enough people to generate statistical significance with a high confidence level, ensuring that a large proportion of their contacts receive more effective emails.
However, organizations with small contact lists may need to test larger proportions if they want to achieve a high level of statistical significance. For example, if a marketer with a contact list of 200 people tests a subject line with 20% of their audience, only 20 people will receive each version. Four people might open version A and six might open version B, but the marketer can't confidently conclude that version B is more effective from such a small sample.
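A pooled two-proportion z-test makes the point concrete. Running the article's numbers (4 of 20 opens vs. 6 of 20) through the sketch below yields a p-value of roughly 0.47, nowhere near the 0.05 threshold that a 95% confidence level requires.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_pvalue(opens_a, n_a, opens_b, n_b):
    """Two-sided p-value from a pooled two-proportion z-test."""
    rate_a, rate_b = opens_a / n_a, opens_b / n_b
    pooled = (opens_a + opens_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    return 2 * norm.sf(abs(z))

# The example above: 4 of 20 contacts open version A, 6 of 20 open version B
print(round(two_proportion_pvalue(4, 20, 6, 20), 2))  # ~0.47, far above 0.05
```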
2. Start with sender name, subject, and preheader
When marketers start a new email campaign, they can A/B test the sender name, subject line, and preheader before testing the email copy itself. Marketers should prioritize these three elements because they're the only ones recipients see before opening an email.
“If people don't open your emails, who cares what your emails are about? No one's seeing this beautiful artistry of yours. … We need to invest time and energy to get people to respond,” said Jay Schwedelson, CEO of marketing services company Outcome Media, in a session titled “Debate: Get the Open! vs. Get the Response!”
Marketers can run various tests to improve open rates. For example, they can test different sender names, such as “Company Name” and “Company Name Joe.” They can also test subject-line numbers that end in 0 or 5 against other numbers; “7 Tips for Retail Leaders,” for example, is likely to generate a higher open rate than “10 Tips for Retail Leaders.” Organizations can also experiment with emojis in the preheader, Schwedelson said.
Email sender names, subjects, and preheaders have a huge impact on campaign success, so marketers should start there. After testing these elements, you can move on to other aspects of your email, such as headers, calls to action, and overall copy structure.
3. Complement A/B testing with AI tools
Rather than limiting yourself to A/B testing alone, marketers should use a combination of different types of email testing to optimize their campaigns. For example, marketers can find free tools online to evaluate emails and spark ideas.
“Many free services will scan your email and give you tips on [how] … to avoid falling into spam traps,” Eichelsheim said.
Organizations can also use free tools like SubjectLine.com and generative AI tools like ChatGPT to evaluate marketing content, generate subject lines, and offer tips. For example, marketers can paste an email's copy into a generative AI tool and ask, “How can I make this email sound more appealing?” or “How can I add some urgency to this email?”
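For marketers who want to script this step rather than paste copy into a chat window, the sketch below shows one way to do it with the OpenAI Python SDK. The model name and the sample email copy are assumptions for illustration; adapt both to your own account and campaign.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical email copy for illustration
email_copy = (
    "Subject: 7 Tips for Retail Leaders\n"
    "Hi there, our spring sale ends Friday. Browse the catalog today."
)

# Model name is an assumption; use whichever model your account offers
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"How can I add some urgency to this email?\n\n{email_copy}",
    }],
)
print(response.choices[0].message.content)
```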
AI tools can help marketers avoid spam traps and increase open rates and CTR. They can also help users brainstorm email copy ideas to put through A/B tests.
4. Test everything
Innovation in marketing can start with an idea that sounds a little different or non-traditional. For example, a marketer might want to send a promotional email that contains nothing but emojis in the subject line.
A marketing supervisor's reaction to this idea might be that it won't work. But even if marketers have doubts about a new approach or idea, they should test it, Schwedelson said in a session called “ENCORE: New Email Marketing Testing Ideas and Pitfalls to Avoid.”
A/B testing helps marketers use empirical evidence to find more effective marketing strategies. To optimize campaigns, marketers should use best practices for email testing and test every idea they can to improve their campaigns and learn more about their target audience.