2018 was a great year for Revenue Marketers – and 2019 is only going to get better. As you ponder how your revamped email marketing strategy will bolster engagement and increase sales, Demand Spring wants to share a little-known secret with you. Are you ready? Here it is:
Unpopular Opinion: Industry benchmarks for email marketing engagement don’t (really) matter.
That’s right! The truth is that 2018’s hot industry trends can only tell Revenue Marketers how leads are interacting on the whole; they don’t provide insight into how your product or service is performing or how your potential and current customers are engaging with you.
If marketing has taught us one thing, it’s that selling is all about your why – what’s your company’s story? Why would a customer buy from you and not a competitor? While it’s also about your product and what you can offer the consumer, your story, identity, and brand are paramount today.
Don’t get us wrong, these 2019 trends reported by Litmus and others are absolutely useful – they just shouldn’t be taken at face value. We can actually use these benchmarks to form hypotheses and challenge assumptions with data unique to your company. And the best way to uncover these Revenue Marketing truths is with A/B Testing. But where do you start (and hopefully end)?
Step 1: Turn the assumption on its head.
A/B Testing is where art meets science: it’s an experiment that challenges assumptions in creative ways and delivers meaningful results for the Revenue Marketer. Every experiment begins with a hypothesis and an expected outcome. So put on your goggles, think about what those might be, distill your thoughts into testable variables, and carve a path toward success. Here’s a snapshot of what that might look like:
Experiment #1: Time of Day
- Assumption: Mid-morning Eastern Time on business days (especially Tuesday through Thursday) is the best time to send B2B emails.
- Hypothesis: People are blending their work and personal time. They may actually have more time to focus on vendor emails outside of typical business hours, since business hours are often consumed by meetings. Let’s test sending emails before work, after work, and on weekends (see the sketch after this list).
Experiment #2: Email Personalization
- Assumption: The From Name should be the first name and last name of the sender.
- Hypothesis: Personalizing the From Name with the sender’s first name followed by the company name can both humanize an email and provide context about the sending organization.
Experiment #3: Email Length
- Assumption: The shorter the email, the better; 300-500 characters is the sweet spot.
- Hypothesis: Generally, shorter emails are better, but this depends entirely on the target audience and stage of the funnel they occupy. We hypothesize that longer email copy will resonate better with leads further down the funnel.
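To make Experiment #1 measurable, here is a minimal sketch of how you might compare open rates across send-time buckets, assuming you can export send-level data from your marketing automation platform. The column names (send_weekday, send_hour, opened) and the sample rows are illustrative placeholders, not any platform’s real schema.

```python
# A minimal sketch for Experiment #1: compare open rates by send-time bucket.
# Assumes a send-level export with hypothetical columns send_weekday (0 = Monday),
# send_hour (0-23), and opened (0/1).
import pandas as pd

def time_bucket(row):
    """Label each send as weekend, before work, business hours, or after work."""
    if row["send_weekday"] >= 5:        # 5 = Saturday, 6 = Sunday
        return "weekend"
    if row["send_hour"] < 9:
        return "before work"
    if row["send_hour"] < 17:
        return "business hours"
    return "after work"

# Tiny illustrative sample; in practice this would come from your platform's export.
sends = pd.DataFrame({
    "send_weekday": [1, 1, 2, 5, 6, 3, 0, 4],
    "send_hour":    [7, 10, 14, 11, 9, 19, 8, 22],
    "opened":       [1, 0, 1, 1, 0, 1, 1, 0],
})
sends["bucket"] = sends.apply(time_bucket, axis=1)

# Open rate per bucket: the raw material for testing the send-time hypothesis.
print(sends.groupby("bucket")["opened"].agg(["count", "mean"]))
```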
Now that you have your assumptions and challenging hypotheses…
Step 2: It’s time to test!
Distill your goals into measurable variables and remember to test only one variable at a time. If you change your From Name and your send time simultaneously, you won’t know which variable drove the change in engagement. A/B testing must be performed methodically and with care to yield interpretable results. It’s also important to repeat your tests: one positive outcome doesn’t prove a hypothesis, and it certainly doesn’t deserve a permanent entry in your Marketing Bible.
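To illustrate the one-variable-at-a-time rule, here is a minimal sketch of a 50/50 split in which only the From Name differs between groups; subject line, copy, and send time stay identical. The recipient list and sender names are hypothetical placeholders.

```python
# A minimal sketch of a single-variable A/B split: only the From Name changes.
import random

def split_ab(recipients, seed=42):
    """Randomly assign recipients to variant A or B (roughly 50/50)."""
    rng = random.Random(seed)           # fixed seed keeps the split reproducible
    shuffled = recipients[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical recipients and From Name variants.
recipients = ["ana@example.com", "ben@example.com", "cam@example.com", "dee@example.com"]
group_a, group_b = split_ab(recipients)

variants = {
    "A": {"from_name": "Jordan Smith"},              # first and last name
    "B": {"from_name": "Jordan at Demand Spring"},   # first name + company
}
print("Group A:", group_a, variants["A"])
print("Group B:", group_b, variants["B"])
```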
Here are a few ideas on how you can bake this practice into your marketing methodology:
Rally around it
Make it fun! Rally Content Marketing, Operations, and Management to get buy-in for a formal Marketing initiative. Perhaps even convince leadership to make it a measurable internal goal – say, 50% of all outgoing email communications will contain an A/B test strategy or plan this quarter. Next quarter, strive for 60%. Then report out on this goal so leadership can hold the team accountable.
Good habits are hard to break.
Make it a habit by building A/B testing into your SLAs and approval times. Once testing is an expectation instead of a “nice to have,” it becomes a habit that’s harder to break.
A/B testing can still be done after launch.
You: “Let’s take this huge marketing campaign as an opportunity to A/B test!”
Your boss: “A/B testing is going to slow us down and create another operations bottleneck. Let’s just get this thing out the door STAT and get some results.”
Sometimes, you can hit some resistance here – but don’t fret! Tackle that campaign, get it up and running, and then take a second to look back: Is something not performing as well as it should? Can it be improved? Maybe you can still institute a post-launch test.
Step 3: Speaking of methodology…
The true value of A/B testing can be distilled into two words: Statistical. Significance. Without statistical significance, your results don’t really mean anything – just like those industry benchmarks. Here’s what we mean:
Say you ran a test and found a 3 percentage-point difference in performance, with Test B beating Test A. That’s great! But is that difference real, or just noise? What impact did it have on sales? Will rolling option B out across all your existing campaigns be worth the effort? To answer these questions, you can use a statistical significance calculator (like this handy one from Neil Patel).
If the result is statistically significant, you can be confident the change is real. And a 3-point lift on a modest baseline conversion rate (say, from 8.8% to 11.8%) works out to a relative improvement of roughly 34%, which sounds considerably more impressive than a 3% boost in engagement!
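For those who would rather run the numbers themselves, here is a minimal sketch of the same kind of check a significance calculator performs, using a two-proportion z-test from statsmodels. The send and conversion counts are hypothetical, chosen so that a 3-point gap on an 8.8% baseline works out to roughly that 34% relative lift.

```python
# A minimal sketch of a statistical significance check for an A/B email test,
# using a two-proportion z-test. Counts below are hypothetical placeholders.
from statsmodels.stats.proportion import proportions_ztest

conversions = [440, 590]   # conversions for variants A and B
sends = [5000, 5000]       # emails delivered per variant

z_stat, p_value = proportions_ztest(conversions, sends)

rate_a, rate_b = conversions[0] / sends[0], conversions[1] / sends[1]
relative_lift = (rate_b - rate_a) / rate_a

print(f"A: {rate_a:.1%}, B: {rate_b:.1%}, relative lift: {relative_lift:.0%}")
print(f"p-value: {p_value:.3g} (conventionally, p < 0.05 counts as significant)")
```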
Now that you have the know-how, Demand Spring wants to help you kick off your A/B testing goals. We created this handy-dandy worksheet with two extra testing prompts to jump-start your optimization journey. Go forth, Revenue Marketers, and test with care!