
5 Steps To Optimize Your Double Opt-In Email (For Ecommerce)

For example, customers of Klaviyo, a Shopify-tailored email service provider, make at least $75 for every dollar they spend on email marketing. Not too shabby, eh?

When you ask most ecommerce entrepreneurs and marketers what their email acquisition strategies are for double opt-in emails you get blank faces.

With all the money spent on traffic and list building, you’d think people would be considering their double opt-in conversion funnel…

…You’d be wrong.

And don’t worry if you fall into the same boat, you’re not alone.

It seems ecommerce marketers far and wide have overlooked what could be the most valuable optimization process in their business.

But fear not: with this guide you can learn the ins and outs of opt-ins (and how to optimize against opt-outs).

And get more of your sign-ups to confirm their email and drive traffic and sales to your store.

In this in-depth guide to double opt-in email optimization you’ll learn about:

    • Double opt-in vs single opt-in
        • What is a single opt-in email?
        • What is a double opt-in email?
    • Double opt-in email best practices & optimization
        • Email opt-in form best practices
        • ‘Almost Finished’ page secrets
        • Confirmation email template examples & optimization
        • Thank you page analytics
        • Opt-out page optimization

Let’s get started.

Double opt-in vs single opt-in

When it comes to collecting email addresses from your customers, you have to decide whether or not you want to confirm their email address.

There are pros and cons to both approaches. Let’s find out what each approach entails and, for beginners, answer the question: “What does double opt-in mean?”

What is a single opt-in email?


Single opt-in definition: Single opt-in is when a user submits their email address through your form and is then automatically added to your email list. They do not have to confirm or validate their email in any way.

Pros

    • People get on your list fast.
    • You don’t miss out on people that do not confirm their email address.
    • Conversion rates from visitor to lead can be higher for these reasons.

Cons

    • You will likely get a lot of fake or spam emails.
    • If you’re offering content in exchange for their email, you will definitely get many people abusing the system by using fake email addresses to access the content.
    • According to Mailchimp, you’re better off with double opt-in in terms of open rates and click-through rates. This is huge.

What is a double opt-in email?


Double opt-in definition: A double opt-in email is when the user submits their email to your list and is then required to confirm their email. This means when they subscribe to your newsletter they then receive an email with a confirmation link they must click to confirm the validity of their email address. Only after they click the link and visit the confirmation page in the browser are they added to your list.

Pros

    • You generally get better engagement in the form of higher open rates and click-through rates with double opt-in.
    • You are protected from spammers and fake email addresses that cost you email service provider fees and dilute the quality and per-lead value of your list.
    • Your conversion rates from list subscriber to customer will likely be higher, as the user has shown a real interest in your business.

Cons

    • You risk losing a lot of subscribers that don’t double opt-in.
    • You risk losing sales by collecting fewer emails.

The truth is, for some businesses single opt-in does work better; I’ve seen the numbers.

So you could A/B test it for yourself, but I’d go with double opt-in every time.

Why create yourself an extra job of weeding out fake emails and removing people from your list who just aren’t that interested? Plus, you’re paying Klaviyo (or whoever) for them every month!

The most important thing to A/B test would be the monthly cost of the list from your email service provider vs the profit it generates.

Open rates and click-through rates are fine as indicators of engagement.

But I prefer to look at dollars in my account in wages and dividends when it comes to measuring success.

Double opt-in email best practices

There are a surprisingly large number of mistakes you can make in double opt-in email marketing.

In this next section you’ll learn best practices for optimizing your double opt-in email funnel and we’ll look at some double opt-in email examples from clients I’ve helped with optimization.

The first and best lesson to take away from this article is to think about your double opt-in as a sales funnel.

If you think about each step in the user journey like a step in a sales funnel, you can then learn where the biggest drop offs are and optimize them.

The key to this is tracking: you need to set up a Google Analytics goal and funnel visualization so you can understand the numbers.

You can also use tools like Hotjar or Kissmetrics for this purpose.

The main steps in the double email opt-in sales funnel are:

    1. Opt-in form submission
    2. Nearly there page
    3. Confirmation email
    4. Thank you page
    5. Unsubscribe page
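
To see where people drop out, compare each step’s visitor count to the next. Here is a minimal sketch of that drop-off math; the step counts are hypothetical placeholders:

# Funnel drop-off sketch; step names match the funnel above,
# visitor counts are hypothetical placeholders.
funnel = [
    ("Opt-in form submission", 1000),
    ("Nearly there page", 950),
    ("Confirmation email click", 600),
    ("Thank you page", 580),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count * 100
    print(f"{step} -> {next_step}: {rate:.1f}% continue, {100 - rate:.1f}% drop off")

In this made-up example the confirmation email is the leak to fix first: only around 63% of people who reach the ‘nearly there’ page go on to click the link.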

Now let’s look at the first step in our double opt-in funnel. The opt-in form.

Step 1: Email signup form best practices

I’ve written at length before about form optimization, and let me tell you, there are a lot more moving parts than you would think.

In this article you’ll look at one specific form example and how you can improve it.


Here we have a classic opt-in form with one form field for email.

The question you should ask yourself is:

Should I ask for the visitor’s name as well as their email address? Or any other information, for that matter?

The truth is, it depends on your business, and the only way to truly know is to test it.

What it comes down to is something called lead quality.

What is lead quality?

Lead quality describes how profitable a lead is to your business.

Say you removed the name field, conversions increased and you collected more emails. You’d be happy, right?

Wrong.

What if those emails are worth much less in revenue and profit to your business than emails you collect from a longer form where you ask for more lead information?

For example, as a service business you could test asking for a phone number as well as an email address.

You could then end up closing twice as many sales and making twice as much profit from the same number of leads, because you had a direct contact method (or several).

This is because the leads are often more qualified and show more intent to spend money with you by completing a longer form.

But as in all things CRO, there is no hard and fast rule. So test it.

Email opt-in form design and development

So as we learned, your form design will depend on the lead information you need for your business.

The more information you ask for, the lower your conversion rates will be (most of the time) but the higher the lead quality (in theory).

The lead is showing more intent to buy by going through a longer opt-in process and you have more contact information to keep getting in touch with them.

Remember, 80% of sales are made between the fifth and twelfth contact, so having more ways to contact a lead isn’t a bad thing.

The classic design is to simply ask for name and email or just email as an in-line form or pop up, but now you have an array of tools and choices of how to get visitors to opt-in to your ecommerce email marketing newsletter.

Incentivizing email opt-ins

The first thing to strategize before choosing your design is how you will incentivize people to give you their email address.

You and I know well, that getting hundreds of newsletters in your inbox each week isn’t fun, so convincing someone to opt-in to receive more isn’t easy.

Historically in ecommerce people would offer a lead magnet.


This is typically a discount off your first order.

If you’re still doing this, you’re in serious trouble. I wouldn’t be surprised if your email opt-ins were converting at 0.01%.

Best practices in 2016 for email opt-in design suggest three main strategies to collect email leads from your ecommerce traffic.

Strategy 1: Discounts

As I mentioned, many people offer first time buyer discounts.

There are some cases where this works well, but for premium brands that don’t want to be constantly discounting in the future it can be very dangerous and should be avoided.

Strategy 2: The Content Upgrade

The content upgrade is where you ditch the generic email opt-in and use page-specific opt-ins: unique giveaways on each page, related to the context of the page and the keyword that page ranks for.

The main two locations to use the content upgrade are on blog posts and on your product pages.

Product pages


For example, if you were an online wine retailer, rather than have a pop-up that offers discounts or a generic opt-in, you could offer wine content to your visitors in exchange for their email address.

Depending on the wine and keyword you could prepare videos with wine pairings, teach people about wine tasting or even tour them around the vineyards the wine came from in videos or image galleries.

Because you are aligning the search intent of the visitor with the email opt-in offer, your visitor-to-lead conversion rate generally goes from around 0.5% to closer to 5%.

If you wanted a Malbec, landed on this page, and were offered a discount, you might be compelled to opt-in.

But if you were offered a video that explained the difference between a blended Malbec Mendoza and a straight Malbec, or food pairings for this particular Malbec, you’d be much more compelled to opt-in.

Ecommerce blog opt-in optimization

The same content upgrade approach can be used on your ecommerce blogs.

But, again, so many ecommerce brands fail to monetize their content marketing efforts well.

In this blog post Alima Pure talks about what makeup to take on the road in summer.

But they don’t have any email opt-in on the page at all.

All that money and time spent to create content, positioned at a specific keyword and they let the visitors bounce and leave.

They could offer a summer lookbook with makeup recommendations per outfit, or a video of how to apply makeup on a road trip. The content upgrade idea list goes on.

The lesson here is to incentivize the opt-in with a content upgrade: page-specific opt-in offers tied to the search intent of the page’s keyword and the context of the page.

This has 10x’d some of my clients’ visitor-to-lead conversion rates in the past. That’s ten times as many leads to monetize with your email newsletter.

How much more a year would you be making with 10x as many sales from email?
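
Here is a back-of-the-envelope sketch of that question. Every input is a hypothetical placeholder, so plug in your own numbers:

# Back-of-the-envelope impact of 10x leads; all inputs are hypothetical.
monthly_leads = 500            # current visitor-to-lead volume
lead_to_customer_rate = 0.02   # share of leads that eventually buy via email
average_order_value = 60.0     # dollars

def yearly_email_revenue(leads_per_month):
    return leads_per_month * lead_to_customer_rate * average_order_value * 12

print(f"Current: ${yearly_email_revenue(monthly_leads):,.0f}/yr")
print(f"With 10x leads: ${yearly_email_revenue(monthly_leads * 10):,.0f}/yr")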

Strategy 3: Loyalty Programs

The third common strategy is loyalty programs: you can invite people to join your loyalty program in the pop-up.

How to structure a loyalty program really depends on a ton of variables in your business.

So I’ll write a separate post about it rather than discuss it at length here.

Commonly loyalty software rewards customers for actions such as:

    • Page views
    • Social shares
    • Referrals
    • Reviews
    • Hashtag use

Customers can then redeem those points as cash off their orders.

Form design checklist

As I mentioned, I won’t go into length about form design as I already wrote 10k words on it, but I will give you a short checklist to make sure you’re getting the basics right (you can always use a lead capture tool for this):

    • Use real labels, not placeholders as labels.
    • Use the placeholder as intended, as an example entry.
    • Make sure form validation is on-page and inline.
    • Make sure validation is in real time.
    • Include a disclaimer message to reassure privacy.
    • Use timing words.
    • Test personalized call-to-action buttons.

Step 2: How to optimize your ‘Nearly There’ page

When it comes to the double opt-in confirm subscription page, the truth is most people end up with their email service provider’s generic default page.

There are so many things wrong with this approach it makes my head spin.

Here is a great example of how to do it right for my client Haute Hijab.


Here you can see that the confirmation message is onsite: not on the email service provider’s server, but on a webpage in your own store.

Next you’ll notice we added the personal brand image of Melanie, Haute Hijab’s founder, to humanize the process, as if she is personally asking you to complete the confirmation.

We’ve added a timer to create urgency to complete the confirmation immediately.

The UX of the process is clearly explained to the user in a step-by-step flow.

They know what they’ve done and what is left to do.

Here you could also test showing the image of the email and the link that should be clicked.


You could even add a value proposition to the header of the page to convince the user why they should confirm; tie this to your incentive or content upgrade.


You can also give instructions on what to do if the email cannot be found or your confirm-subscription email doesn’t turn up.

Although if you have high traffic, expect customer service work to come from this.

Step 3: Optimizing the confirmation email

Email design

Most people have a generic, default-looking confirmation email.

For Haute Hijab we created one that was more on brand and had a clear call to action.


The blue call to action is made more prominent by the surrounding darker border.

We also continued the personal brand of Melanie through into the email.

You could test tying the value proposition of the incentivized opt-in back into the copy and call to action here as well.

For example:

If the form offered a discount, the header could read “Claim your discount” and the call to action “Subscribe and claim my discount”.

Another opt-in email example would be in a plain text format.


For some brands, particularly personal brands, this converts higher as it feels more authentic: to some people it feels like it was really sent to them personally.

Subject line

The subject line of the email is also important.

Some brands are aggressive and offer warnings or write things like “URGENT”.

However much you fear this might hurt your brand equity, the numbers show it does increase open rates.

Mailchimp’s data suggests that using the first and last name of the soon-to-be subscriber affects the open rate the most.

So I would advise pairing time-sensitive language with personalization for a winning recipe.

Urgent: <<first name>> <<last name>>, confirmation needed.

However, avoid writing copy in all caps: it makes little difference to open rates and looks spammy.


Step 4: Thank You Page Optimization

The thank you page is a bit trickier to optimize, as it really depends on its purpose.

So you may need multiple versions of the page depending on the source of the lead.


In this example we carry the design through from the confirm subscription or ‘nearly there’ page.

The step-by-step flow has progressed and the user is now asked to whitelist your email.

Now at this point you’ve already got them double opted-in and on your list.

Therefore I don’t think it is detrimental to the UX to then ask them to whitelist your email, to make sure your emails don’t end up in their ‘Updates’ tab or spam folder in Gmail (or whatever other email client they use if they’re living in the past).

In our example we then dynamically pull in the content upgrade content below the fold.

You can also choose to make the page more focused on the content upgrade, as in the example below, and mention whitelisting below the fold. This seems fairer to the user, who at this point really just wants their content.


Some people also ask for social sharing at this stage. I’ve never seen this bring much of a result, so I’d focus on whitelisting instead.

Step 5: Optimizing for fewer unsubscribes

When it comes to conversion optimization you should always be looking for hidden wins and profit.

And optimizing the unsubscribe page is definitely an easy win that is normally overlooked.


Firstly, when someone clicks on an unsubscribe link in your emails, give them the option to simply change their subscription preferences.

That is not to say you should make it hard for them to unsubscribe; simply offer them compelling alternatives.

For example:

“I want to receive emails:

    1. Once per week
    2. Every two weeks
    3. Once per month”

This should reduce the unsubscribe rate.

Next, for the people hell-bent on leaving the list, at least try to capture some qualitative data around why they want to leave.

That way, in future, you can optimize those parts of the business or email marketing and cause fewer people to want to unsubscribe.

Try to customize the questions to your audience specifically.


More emails = more cash (mostly)

Most people overlook the need to optimize their double opt-in.

As you’ve learned in this detailed guide, there are many opportunities to fine-tune your sequence and get more emails and, most importantly, more email revenue if done right.

I’ll be following this post up with a couple of case study examples in the coming month or two so stay tuned.


55 A/B Testing Best Practices Every Marketer Should Know

A/B testing is an important part of the ecommerce conversion optimization process.

It’s the step in the process when you try to validate your test hypothesis.

Proving that your new website changes will increase your conversion rate and more importantly profits.

Sounds simple, right? …Maybe not.

The truth is most people get A/B testing wrong.

And it doesn’t come down to poor A/B testing tools.

What it really comes down to is A/B testing best practices.

In this in-depth article you’ll learn 55 best practices to follow when split testing your website.

You’ll learn how to get your A/B split testing right and how to stop wasting your valuable time, resources and dollars on bad website testing.

Here’s what you’ll learn in each section of the article…

  • Should You Be A/B Testing?
  • A/B Testing Mindset
  • Planning A/B Tests
  • A/B Testing Setup
  • Running Your A/B Tests Right
  • Interpreting A/B Test Results

Let’s get started.

This is part 4/5 in a series on ecommerce conversion rate optimization.


Should You Be A/B Testing?

To kick us off let’s first clarify if you should be using A/B testing at all in your business.


1. Low Traffic & Conversion Websites Don’t Need To A/B Test


If your website gets only a few visitors and your business is getting 10 sales a month, you shouldn’t be using A/B testing for conversion rate optimization.

If your tests take 6 months to run, you’re doing it wrong.

Giles’s take

At this early stage in your business you shouldn’t use A/B testing as part of your CRO process.

Instead simply focus on customer learning, collect qualitative data from sources such as customer development interviews, make big changes to your website and business and watch your bank account.

Otherwise you’re simply wasting time and money on the wrong approach to CRO.


The Right A/B Testing Mindset

Now we’ll learn what mindset you should have going into A/B testing to ensure you get the most value in the shortest amount of time.


2. Don’t Run A/A Tests


A/A tests are when you test two identical controls against each other and expect the same results.

Many people use A/A testing to verify that they are collecting clean data and that their testing software is set up right.

Controversial I know, but according to Craig Sullivan, A/A testing is a waste of time.

The truth is, too many other factors can come into play and skew the data: quality assurance of your testing setup, not realizing the sample size needed to compare two identical creatives, the novelty effect and poor segmentation.

The focus of your testing schedule should be to reduce the resource-cost-to-opportunity ratio; A/A testing does the opposite.

In fact, with A/A testing you use valuable testing time and learn nothing about your customer.

You’re testing for noise instead of signal.

Giles’s take

There are too many sources of bias to use A/A testing as a method to check your testing tools are set up to collect clean data.

It may reveal large biases, but it is not the most efficient route to uncovering problems; quality assurance, segmentation and a Google Analytics integration do that better.

Segment your data and cross-check your results across multiple software products, not just your testing tool; integrate with Google Analytics and watch out for small sample sizes.

3. Tracking The Wrong Ecommerce KPIs


Andre Morys suggests that when evaluating success in ecommerce conversion optimization, you should focus on the bottom line, not conversion rate.

What does this mean?

Basically:

Bottom Line = Revenue – VAT – Returns – Cost of Goods

Case Study

In a case study he published, they got the following data:

Variation 1 – focus on discount: 13% uplift, -14% bottom line
Variation 2 – focus on value: 41% uplift, +22% bottom line

So variation 1 was a losing test even though the testing tool reported a winner.

And even for variation 2, the reported uplift in conversions was much bigger than the “real” uplift in bottom line.
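
Here is a minimal sketch of what evaluating on bottom line looks like in practice. The base figures are hypothetical; only the uplift percentages come from the case study:

# Evaluate a test on bottom line, not conversion rate.
# Base figures are hypothetical; uplift percentages are from the case study.
def bottom_line(revenue, vat, returns, cost_of_goods):
    return revenue - vat - returns - cost_of_goods

control = bottom_line(revenue=100_000, vat=19_000, returns=5_000, cost_of_goods=40_000)
print(f"Control bottom line:    ${control:,.0f}")
print(f"Variation 1 (discount): ${control * (1 - 0.14):,.0f}")  # +13% conversions, -14% bottom line
print(f"Variation 2 (value):    ${control * (1 + 0.22):,.0f}")  # +41% conversions, +22% bottom line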

4. Don’t Focus On Design, Focus On Profit Optimization


You all know design execution and usability are an important part of the conversion hierarchy.

But the truth is, if your messaging is wrong, it doesn’t matter how you dress it up.

It just won’t resonate with your target customer.

Don’t focus too much time and resources on design in time-limited projects; copy changes, and more specifically value proposition changes, can be very powerful and are much quicker and cheaper to execute.

Giles’s take

Of course there are no hard and fast rules. In some tests design will prove to be more important than copy changes.

However, many of us don’t give copy its due credit. Alhan Keser of Widerfunnel.com used to make this mistake too:

“I used to make the mistake of focusing entirely on design/usability-related tests. I didn’t give copy and its effect on user motivation enough credit.”

Make sure to focus on copy and value proposition changes in tests if your budget is low or testing time short.

5. Be Creative With Execution For Better Test Results


Sujan Patel suggests that your mindset and approach to testing can be as important as technical best practices.

For example he says:

“One of the easiest ways to drive more conversions on your page is by testing more personalized CTA language against the standard (and boring) CTA language that you’re used to seeing everywhere. For example, in my ebook 100 Days of Growth, I reference the fact that Uber uses the text “Become a Driver” in their CTA button, rather than “sign up” or “join now.” This strategy is more personal, more direct, and has the potential to be a lot more effective.”

As Sujan points out here, the CTA is more personal and actually focuses on the outcome of clicking the CTA not the action itself.

Be creative with your copy, design and development execution. Don’t just test the status quo, challenge it.

6. Don’t Condition The Experiment With Customer Preconceptions

It’s hard to ignore your preconceptions about the customer when testing, but if you don’t take an unbiased approach to testing you can condition the experiment.

Alfonso Prim from Innokabi.com advises:

“The main thing in A/B testing for me is not a technical issue. It is a concept issue: Forget the image of the customer that you have in your brain before the experiment, and don’t try to discern the results… because this can condition the experiment.”

7. Disregarding Test Results


It’s hard to believe, but even after all the effort it takes to get A/B testing right, some people ignore the results.

They’ll get a conversion change that they didn’t expect and go with their opinion or gut feeling anyway.

Giles’s take

The whole point of data-driven marketing is to validate assumptions.

Don’t be bone-headed: if a test doesn’t go your way, listen to the data.

Sure, be diligent and cross-check it any way you can, but please don’t simply disregard test results you don’t like.

8. Always Test Every Website Change


Testing is a state of mind, and it requires commitment.

Everything you change on your site can impact your business.

It is only the reckless that push changes live untested.

Sure, I advocate big tests, and big changes will bring you bigger lifts.

But that is not to say simply changing an image won’t affect your conversion rate and profits too, depending on who you are.

It very well might!

Giles’s take

A/B testing is like your final hand at the poker table: you’re all in or out.

Go all in on testing, and test everything.

Guessing is irrational behavior; is that really the best way to sustainably grow your business?

9. Testing Low Traffic / Low Conversion Sites


Many of you believe the only businesses you can really optimize are high traffic or high conversion volume businesses.

In fact, you don’t need traffic or conversions at scale to be a website optimizer.

The truth is, we can take practices from lean startup methodology and user experience design to test low-traffic and low-conversion businesses early on and optimize them just the same.

Giles’s take

We can employ qualitative data collection and analysis methodologies like those found in early stage startups looking for product/market fit to improve businesses with little to no traffic or low conversions.

Created by Steve Blank and popularized by the lean startup movement, customer development is a great source of qualitative data for your business.

Some other examples of qualitative data to analyse for website changes are:

  • Customer surveys
  • Exit intent polls
  • Talking with customer service or sales team members
  • User testing
  • Usability evaluations

Instead of running tests to find significance, you focus on big changes that test your business model and product/market fit and watch your bank account.

While for more mature businesses this would be guesswork and not advised, for small early-stage companies this can be the best road to product/market fit and success down the line.

10. Don’t Give Up After One Failed Test


Honestly, most initial testing fails.

Sometimes it can take half a dozen tests to see a conversion and profit lift.

Don’t test one page, fail, then move on to the next area of testing.

Giles’s take

The most important thing, especially as an agency, is to communicate to the client that tests often fail.

As long as you set expectations correctly either in-house to management or to your clients, then failing tests won’t be a problem.

Stick with testing and fix the biggest money leaks first; this will take more than one A/B test to get right.

11. Don’t Come Up With Test Ideas, Let The Data Talk

The first step in A/B testing is not coming up with a test idea.

Because every time you personally have a test hypothesis come to mind, you’re actually just remembering something you’ve seen elsewhere.

That’s not to say you or members of your company can’t come up with things worth testing from your experience working in the business.

But make sure to validate these ideas by collecting and analyzing data before testing.

Giles’s take

Every A/B test should come from data collected and analysed from your business.

Use quantitative data sources to learn where the money is being wasted in your business, and qualitative data sources to learn why.

If an A/B test idea didn’t come from data from your business, throw it away. It’s garbage.

12. Tests Are Not Based On A Hypothesis

As well as ensuring test ideas come from data you’ve collected and analysed from your business, ensure they also rest on a hypothesis.

You need a hypothesis to ensure your tests have a clear focus and goal.

They also help to improve communication within teams and they help you to iterate on your customer theory.

Ensure you have a strong hypothesis that aims to validate something about your customer or product/market fit.

The hypothesis should come from an insight you’ve gathered through data collection and analysis.

Giles’s take

When writing your hypothesis you simply need to understand:

    • What lesson you learned from data collection and analysis
    • What the potential solution is
    • Which conversion rate you are trying to improve
    • What business objective this goal is linked to
    • What dollar amount you are trying to improve, within what business cycle

Every hypothesis should aim to give you a better and deeper understanding of your customer.

Use this framework by Michael Aagaard as a guide to writing your hypotheses:


13. Don’t Listen To The HiPPO


Just because someone gets paid more than you, doesn’t mean they understand the customer or the business better.

It especially does not mean that they should run A/B tests based on opinion or using a consultancy approach.

Giles’s take

A/B test ideas should come from data collection and analysis, not HiPPOs (highest paid person’s opinions) or managerial roles without data-driven marketing experience.

14. Not Having A Process


This is a classic and something I see again and again.

Doing any kind of CRO without a proven process is just plain crazy.

Giles’s take

You can iterate on a process; it can be improved every time you cycle through it.

A process helps teams communicate and stops marketers from just throwing stuff at the wall and seeing what sticks.

A process also helps you stay focused and data-driven, and means your A/B tests are backed by business objectives, hypotheses and customer learning.

15. Thinking A Losing Test Doesn’t Allow For Learning And Growth

Many people focus on creating winning A/B tests.

The truth is, A/B testing is not about winning or losing.

It’s actually about customer understanding.

Giles’s take

If you write a great test hypothesis, and plan your test well, you should still validate or invalidate a customer learning from the test, whether you win or lose.

Don’t focus on test wins, focus on customer learning and persona validation.

16. Always Be Testing


Any day that passes you by without testing is a waste of traffic and customer data.

Testing allows you to learn more about your visitors and customers; you get to know their desires and pain points inside out.

The best way to get conversion rate lifts is through customer understanding.

So use any time you have to learn more and focus in on your exact customer personas and differentiator.

Giles’s take

Insights from tests can be used throughout your business.

The more you know about your customer the better your user experience becomes.

By user experience I mean any touch point the user has with your brand.

Your PPC campaigns, content marketing, social messaging.

The more laser-focused you are on developing an understanding of your customer personas, the better your chances of higher conversions and increased profits.

17. Don’t Waste Time & Money On Stupid Tests (Like Button Color)


How many case studies have you seen about button color or copy?

Too many.

How many of these studies tell you the bigger picture? Few to none.

That’s because businesses don’t change overnight due to a button color change.

Giles’s take

Don’t waste time early in your A/B testing schedule on stupid small tests.

Leave this kind of testing for multivariate tests after A/B testing has proved successful.

If you must take inspiration from inflammatory case studies, dig deeper into what is really happening behind the scenes.

For example: For button color tests, if you cross reference a lot of tests, you’ll find most of the tests were actually affected by visual hierarchy or button color contrast.

Learn the principles of conversion design and conversion copywriting to help you see through stupid test case studies.

18. Optimize For Lifelong Customers

We all know that when it comes to growing your business, it’s generally easier and cheaper to first look to your existing customers.

Word of mouth, customer referrals and repeat business.

Lifelong, loyal customers that are personally aligned with your brand’s value proposition are exactly what you want more of.

However, when it comes to conversion optimization people tend to forget this adage.

I constantly see people optimizing for the top of their sales funnel.

More easy leads and more cheap sales.

But if you optimize the top of your funnel at the expense of the quality of your customer or worse at the expense of your customers satisfaction, you are in trouble.

Those customers will end up refunding, complaining and they won’t turn into repeat business.

In fact they could end up costing you more than they make you, in customer service overheads and negative word of mouth.

If you optimize for the top of your funnel rather than the back end, you will make your business feel like a TV infomercial!


And we know from online reviews that these businesses do not continue to sustainably grow!

Giles’s take

Focus on getting your value proposition right, make sure your unique value is communicated to your target audience. Even at the expense of conversion rate.

People who are aligned with and love your product or service will help grow your business much more than dissatisfied customers and negative word of mouth and reviews.

19. Create A Testing Plan That Has Customer Understanding At Its Core


So if you’re not optimizing for the front of the funnel, what are you optimizing for?

Optimize for your value proposition and for your customer theory: what you think you know about your customer and their one true goal or one big pain point.

That is, the one desire they have that your product uniquely enables, or the one big pain point your product solves.

Giles’s take

Use qualitative data collection and analysis as well as quantitative data to inform your test hypothesis.

Focus on creating tests that aim to not only increase conversion rate but solidify or validate customer assumptions.

20. Run Macro-Focused Tests

It’s really easy to read a case study online, copy its results on your website and try to get a conversion lift.

Like, change some button copy from ‘your’ to ‘my’.

These kinds of tests rarely result in sustainable improvements or large conversion shifts.

Giles’s take

If you run micro-tests you limit your optimization to a local maximum.

With a macro focus on testing you can reach a new maximum outside your current vicinity.

Big changes and tests lead to bigger conversion lifts, or, in the worst-case scenario, to bigger customer learnings.

Test something big, your business model or even your product offering.

Smartshoot focused on macro testing and optimized their pricing and products page by offering products that didn’t even exist yet.

By running big tests that focused on customer understanding they learned what features their customers really cared about and increased conversions by 233%.

21. Use A/B Testing As Part Of A Complete Conversion Optimization Process


As powerful as A/B testing can be, it’s not a magical solution for your business.

Often the problem lies deeper than simply getting more sales.

Sometimes what you are offering is just not what people want or need, there is no product/market fit.

Giles’s take

A great test is to try to sell your product in person, or better yet, try the Sean Ellis approach and see if 40% of people would be disappointed if your product or service ceased to exist.

As this article explains, it’s probably a customer development problem not a conversions problem.

Try these two tests before starting A/B testing and using conversion optimization to grow your business.

22. Don’t Copy Your Competitors

We all know how easy it is to read a case study and get carried away; we see big percentage changes in conversion rate and our eyes roll over into dollar signs.

Ching!


However, just because something worked for someone else doesn’t mean it will work for you.

Best practices and conventions can guide us but in CRO there are no hard and fast rules.

Don’t randomly copy tests you’ve found in case studies and stop copying your competitors: they don’t know what they’re doing either.

Giles’s take

Instead of copying case studies, collect and analyse qualitative and quantitative data to generate test hypotheses.

Run tests based on data from your business and from your customer.


Best Practices For Planning Your A/B Tests

In the next section you’ll learn some best practices to follow when planning your A/B tests.


23. Prioritize Your A/B Tests


So you’ve collected and analysed qualitative and quantitative data and have come up with test hypotheses that not only aim to solve a problem but deepen your customer understanding.

What could possibly go wrong!?

Well…you could run the wrong test first.

Prioritization of tests is another best practice most skip over.

Make sure to run the tests that take the least amount of time, are the cheapest to execute and have the biggest business impact first!

Giles’s take

When prioritizing your A/B tests consider these four factors and score each hypothesis out of ten, then execute the highest scoring test first:

Time – The test duration

How long it takes for the test to reach statistical significance, shorter tests score higher.

Ease – How easy it is to execute the test

The easier and cheaper a test is to implement the higher it should score.

Business Impact

How much will this test change the business? The business! Not the conversion rate, not the revenue, but the profit: the business. Big changes score high.

Cost of Advertising

How much will it cost to drive traffic to this page? If it is all organic traffic then a higher score is appropriate; if it is expensive, high-competition CPC keyword traffic, score it lower.

Now you have a framework to rank and prioritize your test hypotheses.
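
Here is a minimal sketch of that scoring in practice; the hypotheses and their scores are made up purely for illustration:

# Score each hypothesis 1-10 on the four factors above, sum the scores,
# and run the highest scoring test first. All entries here are made up.
hypotheses = {
    "New value proposition on product pages": {"time": 8, "ease": 6, "impact": 9, "ad_cost": 7},
    "Shorter checkout form": {"time": 7, "ease": 9, "impact": 6, "ad_cost": 7},
    "Button color tweak": {"time": 9, "ease": 10, "impact": 2, "ad_cost": 6},
}

ranked = sorted(hypotheses.items(), key=lambda kv: sum(kv[1].values()), reverse=True)
for name, scores in ranked:
    print(f"{sum(scores.values()):>2} points: {name}")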

24. Testing Templates Not Individual Pages


One common problem with testing, especially for ecommerce websites, is that they only have a small number of page templates.

But they have potentially hundreds of pages based on those templates.

Product pages, category pages etc.

So to leverage the traffic and conversion volume of all those visitors and sales, test on a template level.

Giles’s take

Of course this comes down to what you’re testing and why, but leveraging all product page traffic to test a new variation is a smart move.

The test will be quicker and cheaper to run, and the larger sample size means it will reach statistical significance sooner.

25. Talk With The Developers While Planning Tests

It’s all well and good toeing the line of best practice and getting all your technical implementation correct.

But if you don’t talk with the website development team and discuss how your plan fits into their normal deployment schedule, you could be in for some trouble.

Overlapping your testing schedule with changes to the website would be a recipe for disaster.

Giles’s take

Make sure you align your conversion optimization efforts with the design, user experience and development processes already in place.

As CRO becomes more prevalent throughout online business we need to learn how to communicate and collaborate between UX, CRO, analytics, design and product more closely.

Combining company wide build, measure & learn processes and iteration cycles.

26. Not Engaging The Whole Team For Test Ideas

Even though your test ideas and hypotheses should come from data collection and analysis, there is nothing stopping anyone in the business having input into testing schedules.

Giles’s take

The truth is your staff are in the trenches; they speak to the customer every day (you should too).

Which means, especially if they work in customer services, they have a good idea of who your business caters for.

Get input and be open to test ideas from all team members, but make sure they can back them up with qualitative data from a number of customers.

You want to see patterns: not one-off extreme cases of anger or delight, but repeated customer feedback that points to test ideas from their department.

27. Run Tests For Full Weeks & Months


Let’s say you’ve cracked the code on organic traffic and you get a bunch of daily uniques and sales. We’re talking 750+ sales a day.

Perfect right?!

You can A/B test to your heart’s content and run whole tests in a single day!

Wrong.

You need to run your tests for full weeks (Monday to Sunday), if not full months.

Better yet run them for a full business cycle and then look at the bank statements.

Just look at your conversions by days of the week report in Google Analytics and you’ll understand.

Joe from Ispionage.com suggests:

“One of the best practices I’ve learned is that you need to let tests run for at least a week and possibly a month because I’ve seen way too many tests where one variation jumps out early and reaches statistical significance only to come back down to earth and score the same conversion rate as the control a week later. With that said, if you don’t have enough traffic you can sometimes pick winners sooner, but when possible, it’s better to let the test run longer so you can make sure the results hold true over time and aren’t merely a fluke.”

Giles’s take

Conversion rate can fluctuate by day of the week, when it’s the holidays or even because of weather.

You need a representative data sample for your data to be valid, so don’t run tests in a single day.

28. Consider Purchase Cycle In Your Sample Size


Understanding how long your typical purchase cycle is can make or break your A/B testing efforts.

If you run a test for only two weeks (Monday to Sunday, of course) when your cycle is more like a month, you’ll end up cutting off a lot of visitors from the experiment.

Giles’s take

The problem with stopping a test before the end of your purchase cycle is you may end up with skewed results.

You end up capturing data only for what I call ‘Early Converters’ and lose data for visitors who would convert later in the cycle.

You’re better off leaving an experiment running in the background once you’ve closed it to new visitors.

With this approach people who are still in their purchase cycle will continue to see the test and become ‘Late Converters’.

You increase your test sample size, which is hardly a bad thing, and push ‘Late Converters’ through the test without showing it to new unique visitors.

29. Consider Visitor Device Types When A/B Testing


Imagine you’re an ecommerce entrepreneur and you mobile-optimize your online store.

You run an A/B test, a site wide comparison of the original design vs the new responsive design.

Oh dear, conversion rate drops by 5%.

Do you stick with the control?

When you look closely again, you realize the visitors’ device type was not considered when preparing the test.

So it is possible that the majority of mobile visitors saw the old unresponsive site.

Giles’s take

A better approach would have been to evenly split the mobile and desktop traffic between the two variations.

You could then understand how the responsive design individually affects desktop and mobile visitors.

Make sure to understand and consider how device type affects your A/B testing results.

30. Prioritize Tests With The PIE Framework

Use your business resources to plan and measure marketing initiatives well.

Lorenzo Grandi, marketing manager at pr.co suggests:

“Prioritise your tests with the PIE framework.

The time you spend to prepare your tests is well spent. Since you’re going to run a lot of tests, and are aiming at making a difference, you’ll need to decide how to prioritise your tests.

A good way to do it is through the PIE framework: put all your test ideas in a spreadsheet, and rate (1-10) each of them for these parameters: Potential, Importance and Ease. Calculate the average, which is the ultimate value of your test idea. You can order your spreadsheet by this value to have your priority list ready.

In this way, you’ll be able to consider an objective ranking to decide which tests to start next.”

Check out this case study recommended by Lorenzo for more information.

31. Set Goals For Your Tests

Sean Si, founder of SEO Hacker suggests:

“One of the biggest A/B testing mistakes a person can make is to start testing without setting the goal properly.

For example, when A/B testing our homepage at Qeryz, our goal was to increase freemium signups. The problem is, we weren’t able to fully qualify the signups because the Original version of the homepage didn’t have a signup form in it while the Variation version did. The goal count was set for people who went to our signup page and then entered onboarding. This was a major mistake because people who were seeing the variation version did not need to go to our signup page because there was a signup form right there in the homepage!

Because of the mistake in goal setting, the data in that A/B test was severely skewed. We had to redo it and it took another 2 weeks for data gathering – which sucked.”

32. Understand Your Customers’ Weekly & Hourly Behaviors

Even though we test for full weeks, it’s still important to understand how buying habits and customer behaviors change hour to hour and day to day.

Jerry from Webhostingsecretrevealed.com suggests:

“Do time/day-based split-tests – Visitors shift their focus throughout the day and week (ie. weekday vs weekend; morning vs night), and hence react to web content (sales copy, blog content, etc) very differently. So to give the biggest bang for my buck, I ought to determine the best day/time and max out my advertising effort on those good days.”

33. Running A/B Tests After Large Changes When You Should Run Multivariate Tests


Even though I am a big advocate of running big tests and going after big changes in the business, there is a time and place where multivariate testing just makes more sense.

This is normally after you have run an A/B test and seen a big lift, and now want to optimize more nuanced parts of the page.

Giles’s take

It can be valuable to run multivariate tests after getting a big win with A/B testing.

However, the truth is, multivariate testing requires a lot more traffic to get statistically significant results.

For every test variation you will require at least another 250-350 conversions.

Multivariate tests do something A/B tests cannot: they help you learn how users behave and interact with single elements, and with variants of those elements, in terms of their design.

You can determine the impact individual elements on a page have in terms of conversion rate.

As a rule of thumb, you should be running nine A/B tests for every one multivariate test.
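
To see why multivariate tests are so traffic-hungry, here is a rough duration sketch built on the 250-350 conversions-per-variation rule of thumb above; the daily conversion volume is a hypothetical input:

# Rough duration estimate per test. 300 sits in the 250-350 range;
# the daily conversion volume is a hypothetical placeholder.
conversions_per_variation = 300
daily_conversions = 40

for variations in (2, 4, 8):  # 2 = a simple A/B test, 8 = a modest multivariate test
    needed = variations * conversions_per_variation
    print(f"{variations} variations: ~{needed} conversions, ~{needed / daily_conversions:.0f} days")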

34. Not Calculating The Sample Size Before The Test


The most important thing to remember from these best practices is that significance is not a stopping factor; sample size is.

Statistical significance does not tell us the probability that B is better than A. It also doesn’t tell us the probability that we will make a mistake in selecting B over A.

These are both common misconceptions.

Giles’s take

Testing without measuring statistical significance is just nuts.

Always calculate the required sample size before you run your test.

Good test planning as highlighted in this section is key.

Use a sample size calculator; I recommend this one from Evan Miller.
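
If you prefer to script the calculation, here is a minimal sketch of the standard two-proportion sample size formula (a normal approximation, the same math behind calculators like Evan Miller’s). The example inputs are hypothetical:

# Sample size per variation needed to detect a lift between two conversion rates.
# Standard normal-approximation formula; uses only the Python standard library.
from statistics import NormalDist

def sample_size(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)  # the rate you hope to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# e.g. a 3% baseline conversion rate, hoping to detect a 20% relative lift
print(sample_size(0.03, 0.20), "visitors needed per variation")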

35. Running Multiple Tests At The Same Time With Overlapping Traffic

It may seem like you are saving time by running multiple tests at the same time.

But this can often lead to skewed data.

If you choose to run A/B tests with overlapping traffic, you must ensure that traffic is evenly distributed.

If you test a single product page, A vs B, and for example your checkout page too, C vs D, be careful to ensure that traffic from B is split 50/50 between C and D (not, say, 30/70).

Giles’s take

If you want to test new variations of several pages in the same flow at once you are better off using multi-page testing.

Just ensure you get attribution right.
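
One way to confirm the split really is even is a sample ratio mismatch (SRM) check. Here is a standard-library-only sketch; the visitor counts are hypothetical:

# Sample ratio mismatch check: are visitors really split 50/50?
# With two buckets the chi-square statistic has one degree of freedom,
# so its p-value can be derived from the normal distribution (chi2 = z**2).
from statistics import NormalDist

def srm_p_value(count_a, count_b):
    expected = (count_a + count_b) / 2
    chi2 = (count_a - expected) ** 2 / expected + (count_b - expected) ** 2 / expected
    return 2 * (1 - NormalDist().cdf(chi2 ** 0.5))

p = srm_p_value(5134, 4866)  # hypothetical visitors entering C and D
if p < 0.01:
    print(f"p = {p:.4f}: the traffic split looks broken, distrust the test")
else:
    print(f"p = {p:.4f}: no evidence of a sample ratio mismatch")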

36. Don’t Surprise Your Regulars


When you run an A/B test, take time to plan how to segment your return and new visitors.

Return visitors know your website well; they have learned the experience of the site and have expectations.

If most of your revenue comes from return visitors think about testing your new variation on only new visitors.

Giles’s take

Make sure to segment your tests and understand what the purpose of the test is.

If you are trying to improve the conversion rate of new visitors, improving your value proposition for example, then consider exposing only new visitors to the test.

You could implement a segmented test like this using Convert.com.

37. Testing Too Many Variables


Don’t A/B test multiple variations at once unless they are significantly different; leave this type of design refinement to multivariate testing (which should come after a strong lift achieved through a large A/B test with a radical redesign approach).

Giles’s take

Testing too many variations can often lead to wasted time.

Each variation will generally require a fairly large number of test subjects and will lengthen the time it takes to reach statistical significance when testing.

Also, multiple variations usually point towards random testing, with either a bad hypothesis or, even worse, no hypothesis.

Make sure your tests have a strong hypothesis that attempts to validate something significant about your customer, not just your webpage.

A/B testing is about improving customer understanding not micro conversion rates of single steps in a funnel.

38. Avoid Sequential Testing

What if you just make a change to your website and then simply wait to see if the goal conversion rate increases or decreases?

Testing the conversion rate of Design A for one week, then swapping it and testing the conversion rate of Design B for one week as a comparison is not real testing.

This is actually called sequential testing.

Sequential testing does not have validity as it doesn’t refer to the same source of visitors.

The data set is different and therefore not comparable.

Giles’s take

Your conversion rate can change seasonally, by the day of the week, or due to external factors such as weather or environment.

Therefore the only way to compare or test in a valid way is at the same time with the same traffic.

Never use sequential testing.

Only test using the same data set, the same traffic at the same time.

The only caveat to this rule is if you do not have enough traffic to test at all.

Then again, don’t run a sequential A/B test, simply make large changes and watch your monthly revenue and sales.

Your bank account can’t lie like misleading A/B test data can.


Best Practices For Setting Up Your A/B Tests

Now we’ve learned a number of key points to keep in mind when planning your A/B tests, let’s dig deeper into best practices for setting up your A/B tests technically.


39. Run A Website Load Time Test Before & After You Install Your Javascript Snippet


You all know that your website performance can affect your conversion rate.

If your page loads too slowly visitors will bounce and leave your site without converting.

This can be expensive, especially if you’re driving paid traffic to landing pages.

Make sure to conduct a page load speed test before and after you install your A/B testing software, as every JavaScript snippet added to your website will slow down the load time.

Giles’s take

Use Google Tag Manager to organize and optimize all your website’s JavaScript snippets in one place (great for non-coders too).

You can also use caching tools or plugins; for WordPress I recommend WP Rocket, which allows you to minify and concatenate (reduce the weight and load time of) your CSS and JavaScript files.


If you’re using Optimizely, toggle jQuery in your project settings: Optimizely bundles jQuery to keep its snippet self-contained, which adds to its size.

You’re almost certainly already running jQuery on your website, so you can disable Optimizely’s bundled copy to speed up your load time.

40. Watch For The Flicker Effect

The flicker effect is one downside to using client-side A/B testing tools, such as VWO or Optimizely.

The flicker effect is when for a brief moment you see the original version of the page before the test variation loads.

Also known as FOOC (flash of original content), this flicker can be seen by humans, who can identify images in as little as 13 milliseconds (the flicker can last up to 100 milliseconds).

Giles’s take

Tools like VWO and Optimizely are great as they remove the need for developers and allow non technical people to implement tests quickly using WYSIWYG editors.

But they do have some down sides.

One of the major drawbacks of client-side tools is that they affect your page load times and website speed, and can cause the flicker effect.

The best way to combat this is to speed up your website and make sure you embed your javascript snippet which loads the testing software in the right location on your page.

For Optimizely this is as high up as possible in the header.

41. Make Sure You ‘No Index’ Your Variation Pages


If you are running an A/B test, ‘Control’ vs ‘Variation 1’, you do not want ‘Variation 1’ to be indexed by Google.

Because to date, the ‘Control’ is the best version of that page you’ve created.

And if ‘Variation 1’ is indexed, people could land on it directly from a Google search result.

Make sure then to ‘No Index’ your variation pages (add a <meta name="robots" content="noindex"> tag to the page) so visitors cannot accidentally land on them directly from search results.

Giles’s take

You can check if Google has indexed your variation pages using the following search query:

site:yourdomainname.com/variation_one.html

If it shows up, it’s indexed.

A quick fix is to add this code to your robots.txt file:

User-Agent: Googlebot
Disallow: /variation_one.html

It’s also a good way to sneakily check what pages your competitors are testing! Simply look at their robots.txt file and see what pages are hidden from Google. They are probably pages they’re testing 😉

You can also add a canonical tag on your variation pages pointing back to the control (<link rel="canonical" href="https://yourdomain.com/control-page/"> is the standard form) to make sure Google knows which version is the preferred content and your pages don’t get marked down for duplicate content in search results.

42. Make Sure Your A/B Testing Tool Doesn’t Cloak

Your website knows whether a visitor is a Google spider or a human from the ‘user-agent’ header.

Some A/B testing tools filter out spiders from tests to get better test results.

Presenting different versions to the spider and normal visitors is called cloaking and can negatively affect your SEO rankings.

Giles’s take

Ask your tool provider if they cloak content.

For the time it takes to ping a tweet to your provider, it is worth making sure your testing is not affecting your SEO standings.

43. Integrate Your Test Data With Google Analytics


When it comes to testing tools the truth is there is no perfect solution.

This stuff is tricky to code and your data is never perfectly clean.

A great fail safe is to integrate your testing tool with Google Analytics.

That way you can cross check your testing tool results with GA and also simply look at the raw unique visitor data in GA and manually reconstruct your test data.

Giles’s take

Just don’t believe the numbers as Craig Sullivan would say.

Cross-check your test data with other sources: Google Analytics reports, bank accounts, GA goals, email list counts, Stripe reports.

Any way you can confirm or disprove your numbers, do it.

44. Don’t Break Your Site When Testing And Lose Money

Seems like a no-brainer, but it is easy to make small mistakes that you don’t realize affect your business.

In one of my A/B tests I redesigned an ecommerce homepage and managed to remove the phone number from the header for both the control and the variation.

The shop didn’t take orders over the phone, so it probably didn’t affect profits, right?

Well…it turns out the phone number was a huge trust factor. Just by seeing it, people recognized the company as legit (most competitors listed no telephone contact), and removing it sent conversions crashing down!

Giles’s take

Obviously, we are trying to increase profits with A/B testing, as with most marketing initiatives.

So get the nitty-gritty details right, as even small mistakes can cost big.

A great way to be proactive here is using checklists that evolve and become more sophisticated as you improve your craft. Grab a checklist of the best practices in this post here.

45. Setup Custom Alerts In Google Analytics To Track Key Conversion Points

A reactive method for dealing with human or technical error in A/B testing is setting up custom alerts.

Custom alerts or intelligence alerts are a way to keep yourself updated with the changes in your key conversion goals.

You can establish custom intelligence alerts for traffic, conversion rate, goal completions, analytics movement and Google referrals.

Giles’s take

When running an A/B test, set up custom alerts for the key conversion goals you will be affecting, in case your test causes technical issues.

That way you can fix the problem before it costs you too much money!

You can receive the alerts by email or SMS.

Here’s how to set one up:

Navigate in your GA account to:

Admin > Custom Alerts

Then click ‘New Alert’.

[Screenshot: custom alert settings in Google Analytics for an A/B test]

Fill in the alert as shown above to track a conversion rate drop below your usual percentage.

Pick specific goal conversions you are affecting in your A/B test for more appropriate alerts.


Best Practices While Running Your A/B Tests

Now that you’ve learned the ins and outs of setting up your A/B tests right, I want to talk about just one point to remember while your tests are running.

It’s so important, it gets its own section!


46. Don’t End Your Tests Too Soon

Let’s be honest, we’ve all stopped A/B tests too early.

And a big problem here is the testing tools themselves.

They can honestly be misleading, and that’s because it’s in their best interest to be.

Peep Laja of ConversionXL points this out in this example.

Here are the test results two days after the test had started:

[Screenshot: test results two days into the test]

The tool here suggests that Variation 1 has 0% chance of beating the original.

[Screenshot: the same test 10 days later]

10 days later the results had turned around completely.

If he had stopped the test early, as the tool suggested, the results would have been completely wrong.

Giles’s take

Stopping tests early is one of the most common ways people get testing wrong.

Here are some rules for when to stop your tests (there’s a quick sketch of these checks after the list):

  • The test has been running for a complete week (Monday to Sunday), or better still a complete month or business cycle
  • You pre-calculated the sample size needed for each variation and met your criteria
  • Each branch or variation in the test has had 250-350 conversions
  • Statistical significance is at least 95%
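To keep yourself honest, you can encode these rules in a tiny helper and refuse to call a test until every box is ticked. A minimal Python sketch, with the thresholds taken from the list above (the function name and inputs are my own):

def ready_to_stop(days_run, conversions_a, conversions_b, significance):
    # Whole weeks only, so every weekday is represented equally
    full_weeks = days_run >= 7 and days_run % 7 == 0
    # At least 250 conversions in each branch
    enough_data = min(conversions_a, conversions_b) >= 250
    # Statistical significance of at least 95%
    significant = significance >= 0.95
    return full_weeks and enough_data and significant

print(ready_to_stop(days_run=14, conversions_a=310, conversions_b=287, significance=0.96))  # True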

Best Practices For Interpreting Your A/B Test Results

Once you’ve completed your tests as above, it’s time to crunch those numbers.

You’re not home and dry yet, so brush up on these best practices below and make sure you’re not falling at the last A/B testing hurdle.


47. Don’t Interpret A Random Fluctuation As A Winning Test

Random fluctuation, also known as statistical fluctuation, is when you see random changes in your data that are not related to the changes you’re testing.

Imagine you tossed a coin ten times and heads came up 80% of the time.

This might lead you to believe that heads comes up the majority of the time.

Not true: because the sample size (10) is so small, it is easy to get misleading results.

Giles’s take

Always use a sample size calculator when planning your A/B tests.

Make sure to calculate how many subjects will be needed for each branch (or variation) of your A/B test first.

This will not only reduce your chances of mistaking a random fluctuation for a win, but also let you estimate how long the test needs to run.
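Here’s what that pre-calculation can look like in Python with statsmodels (the 5% baseline CVR and hoped-for 6% are invented numbers; alpha and power are the conventional 0.05 and 0.8):

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Baseline CVR of 5%, hoping to detect a lift to 6%
effect = proportion_effectsize(0.06, 0.05)

# Visitors needed per branch at 95% significance and 80% power
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8, ratio=1.0)
print(round(n))  # roughly 4,100 visitors per variation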

48. Understand What P Values Really Mean


Ok, so what are P Values?

P Values are used to determine statistical significance in a hypothesis test.

They tell you how likely it is that you’d see data at least as extreme as yours if the null hypothesis were true.

Basically, a P Value measures the compatibility between the data you collected and the null hypothesis.

Giles’s take

High P values: your sample data is consistent with a true null.
Low P values: your sample data is unlikely if the null were true.

A low P value should be interpreted as evidence to reject your null hypothesis for the testing population.
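To make that concrete, here’s a minimal two-proportion z-test in Python with statsmodels (the visitor and conversion counts are invented):

from statsmodels.stats.proportion import proportions_ztest

# Control: 200 conversions from 10,000 visitors (2.0% CVR)
# Variation: 250 conversions from 10,000 visitors (2.5% CVR)
conversions = [200, 250]
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(p_value)  # ~0.017, below 0.05: evidence to reject the null hypothesis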

49. Stop Focusing On CVR (Average Conversion Rate)

What data do you look at when interpreting your A/B test results?

Some marketers look at sample size, conversions and confidence levels.

The more diligent will consider statistical power and test length.

But few, no, I’d go as far as to say nearly none, consider lead-to-MQL, the number of marketing qualified leads that are generated.

Most people look at the conversion rate of all their traffic, not the conversion rate of their target traffic.

And without segmenting, you can get really misleading test results: you can end up making changes that bring in more unqualified leads while decreasing the conversion rate among your target audience.

This means you could be optimizing for more conversions, but from less qualified leads, and therefore for less potential revenue and profit…not good.

Giles’s take

Running A/B tests and interpreting the results without segmenting the data can lead to wasted time and resources.

Imagine you ran a test, Control vs Variation A, and Variation A won with a 150% lift in CVR. Winning test…right?

What if your non-target audience converted better with Variation A, while your target audience saw no change? You’d be optimizing for more unqualified leads.

If your target audience hates Variation A but your non-target audience loves it, you’re losing customers as well as gaining more unqualified leads!

Make sure to segment your traffic and understand how conversion rate changes affect lead quality.

Measure the value of the leads on the back end of your business; measure the profit, not the CVR.
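As a sketch of what that segmentation might look like in Python with pandas (the column names and data are hypothetical):

import pandas as pd

# One row per visitor: which variation they saw, which segment they
# belong to, and whether they converted
df = pd.DataFrame({
    "variation": ["control", "control", "variation_a", "variation_a"] * 2,
    "segment":   ["target", "non_target"] * 4,
    "converted": [1, 0, 0, 1, 0, 1, 0, 1],
})

# CVR per segment per variation, instead of one blended average
print(df.groupby(["segment", "variation"])["converted"].mean())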

50. Regression To The Mean


In the beginning of some tests you see crazy fluctuations in conversion rate data.

And you ask yourself, what is happening here!?

Regression to the mean simply says things will even out over time.

So don’t go jumping to conclusions when running tests by stopping them early.

Giles’s take

A good example is to imagine a researcher who gives a large group of people a test and selects the best-performing 5%.

These people would likely score worse on average if tested again.

In the same way, the worst-performing 5% would likely score better if tested again.

In both cases the extremes of the distribution are likely to ‘regress to the mean’ because of random variation in results.

This is known as regression to the mean.

Aka.

“The phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement.”

So take care when calling tests early based only on reaching significance: it’s possible you’re seeing a false positive, and there’s a good chance your ‘winner’ will regress to the mean.
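You can watch regression to the mean happen in a few lines of Python (a toy numpy simulation; the score model of skill plus luck is invented):

import numpy as np

rng = np.random.default_rng(0)
skill = rng.normal(70, 5, 100_000)           # each person's true ability
test1 = skill + rng.normal(0, 10, 100_000)   # observed score = skill + luck
test2 = skill + rng.normal(0, 10, 100_000)   # fresh luck on the retest

top5 = test1 >= np.quantile(test1, 0.95)     # select the top 5% on test one
print(test1[top5].mean())  # ~93: an extreme first score
print(test2[top5].mean())  # ~75: the retest falls back toward the mean of 70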

51. Novelty Effect


The novelty effect in A/B testing is exactly what it sounds like.

You see a conversion rate change for your new variation.

A lift! Boom!
Wrong.

The truth is, you are only seeing a lift because the variation is new.

New and shiny, nothing more.

Giles’s take

To test whether your new variation is genuinely causing a conversion rate change, segment your results into new and returning visitors.

Returning visitors are the only ones who can notice that something is ‘new’, so if the lift shows up only with them and fades as they get used to the change, you’re looking at the novelty effect.

New visitors have never seen the old design, so a lift that holds with them is much more likely to be real.

You can learn more about testing for the novelty effect in this Adobe article.
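Here’s a rough sketch of that new-vs-returning comparison in Python with pandas (the CVR numbers are hypothetical):

import pandas as pd

# Hypothetical results split by visitor type
df = pd.DataFrame({
    "visitor_type": ["new", "new", "returning", "returning"],
    "variation":    ["control", "variation_a"] * 2,
    "cvr":          [0.020, 0.021, 0.020, 0.027],
})

pivot = df.pivot(index="visitor_type", columns="variation", values="cvr")
pivot["lift"] = pivot["variation_a"] / pivot["control"] - 1
print(pivot)  # a lift concentrated in returning visitors suggests novelty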

52. Base Rate Fallacy


Humans have a funny way of only hearing and seeing what they want to.

Base rate fallacy, or base rate neglect, is when people ignore statistical information in favor of irrelevant information that they incorrectly believe is relevant when making a decision.

This is an irrational behavior.

Giles’s take

Here’s a good example from Logically Fallacious.

Example: Only 6% of applicants make it into this school, but my son is brilliant! They are certainly going to accept him!

Explanation: Statistically speaking, there is a 6% chance they will accept him. The school is for brilliant kids, so the fact that the son is brilliant is merely a necessary condition to be part of the 6% who do make it, not a guarantee.

53. Pay Attention To False Positives (Type I Errors)


A false positive is when your A/B test results suggest something is validated when it is not.

These normally occur when many variations are tested at once.

Start off with large changes and run simple A/B tests, not A/B/n tests.

It’s easy to assume you are right, especially when the data seems to back you up.

Paul from LeadOne advises:

“Always look for ways to prove yourself wrong, assumptions are the killer of great conversion rates.”

Giles’s take

Testing too many variations at once increases your chances of getting a false positive.

Focus on A/B testing first and then use multivariate testing for refinement later.
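The math behind this is simple: each extra variation is another roll of the dice. A quick Python illustration of how the chance of at least one false positive grows with the number of variations tested at a 5% significance threshold:

alpha = 0.05  # the conventional 5% false positive rate per comparison

for n in (1, 3, 5, 10):
    # Probability that at least one of n comparisons is a fluke
    print(n, round(1 - (1 - alpha) ** n, 3))

# Output: 1 -> 0.05, 3 -> 0.143, 5 -> 0.226, 10 -> 0.401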

54. Don’t Ignore Small Gains

Don’t think small wins are worthless in A/B testing.

Small, incremental, month-over-month improvements are more achievable and definitely worth having.

The compounding effect of conversion increases over a year can be astounding.

Giles’s take

Don’t get wrapped up in hitting a home run.

Shoot for a 15% conversion rate increase quarterly, or 5% month over month.

Now work out what that would mean for your business compounded over the next year.

…Not too shabby ey?!

Remember, small gains count and can compound over time.
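Here’s that compounding arithmetic in a few lines of Python:

# 5% month over month compounds to ~80% over a year
print(round((1.05 ** 12 - 1) * 100, 1))  # 79.6

# 15% per quarter compounds to ~75% over a year
print(round((1.15 ** 4 - 1) * 100, 1))   # 74.9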

55. Run Important Tests Twice To Double Check Your Results

Just because a test wins doesn’t mean the money is in the bank.

William Harris, VP of marketing & growth at Dollarhobbyz.com, suggests:

“Double check your results. It’s so common to have a test come back statistically significant in favor of a result, only to have that completely negated by a follow up test. Run the test. Then run it again. If you get conflicting results, run it a third time. The worst thing you can do is run a test and make a hasty change to your website that causes you to lose money.”


A/B Testing Best Practices Checklist

Remember, A/B testing is not a silver bullet; it’s just one step in a complete CRO process.

It’s important to always be testing and to run the right tests first: focus on the tests with the lowest resource cost relative to the opportunity.

For those of you who are hell-bent on getting testing right, I’ve created an A/B testing best practices checklist you can download below.

Let me know in the comments if I missed any best practices you follow when testing your website’s changes…