Let’s face it: we all make mistakes.
We make them while split testing, too: pushing the wrong variation live, missing a typo in a headline, or even (God forbid!) misplacing a comma. Mindvalley went through a period when we were afraid to even touch the checkout because of it.
Keeping your conversion funnel in mind, the closer this missing comma is to the “confirm my order” button, the higher your risks are. One missing comma can just go by unnoticed on the landing page, but on the checkout page that same comma would cost you thousands of dollars.
The potential risks are high, but so are the potential gains. Like open surgery on a beating heart.
We went all the way from holding back on any checkout innovation to testing everything: design, layouts, payment gateways, third-party solutions. Not only have we learned how to create a seamless checkout experience, we’ve also used it as a chance to strengthen our optimization processes and come away with a few lessons about CRO in general.
This article is not about creating the perfect checkout, but about getting comfortable with the testing process. You can learn from it if you are about to work on your checkout, but also if you are testing any other step of your conversion funnel.
Lesson 1: You are ready if you have solid processes
Some companies postpone addressing their checkout experience until “everything else is optimized.” Because why touch something that’s already working?
I won’t waste your time trying to convince you that your biggest optimization opportunities lie in your checkout — there are many articles out there which will do that for me.
However, I will point out a few things, a checklist of sorts, which you can go over to see whether your testing processes are solid enough to step into that territory.
Can you trust your data?
Checkout testing is one place where you can’t allow yourself to jump to decisions based on inaccurate data. So, do you have reliable sources to provide that data?
At Mindvalley, we found that the data-gathering methods we use for all our other split tests couldn’t be used for checkout testing, so we had to build our own data-gathering tool. This was mainly because we wanted a secure way to collect data and didn’t want to allow any third-party tools on our payment page.
However, the experience Mindvalley had gathered optimizing other areas of the conversion funnel allowed us to determine exactly what data we would need, and we then built an analytics script for that. More on that in Lesson 3.
Can you trust your team?
Do you have enough trust in the people responsible for the optimization process in your team? Can you count on them to watch those test pages as if their lives depended on it? If you don’t, look into hiring, not checkout optimization.
Can you trust your technology?
Whether you are using your own platform or a third-party solution, you need to be sure that there are no loose ends in the technology that enables your payment gateways.
Even cosmetic changes can bring those loose ends to light, and that results in revenue losses.
Lesson 2: Have a Plan B
If all is good with people, data, and technology, it is still important to prepare for abnormalities that cannot be predicted before you start testing. Because you are operating at the heart of your business, you need a risk management system in place, a “Plan B,” in case something goes wrong.
In our case, here are some principles that we followed:
Always create variations
We never, ever made changes directly to the live checkout. It didn’t matter whether we were split testing a minor language change or an entirely new layout: we made sure to create a duplicate variation and apply all the changes to that variation only.
This way, we could always choose which variation to send traffic to, and could block all the traffic to a variation (if needed).
Have full control of the test
We made sure that tests can be started and stopped easily.
Any person involved in the testing process should be able to stop all traffic to a variation at any given point in time. They should not need technical knowledge to do so, or have to depend on someone who has it.
In our case, we used Visual Website Optimizer to split the traffic, which meant anyone on the marketing team could stop all traffic to the test at any point if a mistake was found.
Lesson 3: Track results beyond conversion rates
One thing I love about checkout optimization — each decision involves multiple factors.
For example: if you are trying out new payment gateways, you need to account for both conversion changes and changes in merchant fees.
If your new gateway converts better, you then need to decide whether the increase in conversion is big enough to cover the increased merchant fees, given your average order value.
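That break-even decision can be sketched as a quick expected-value calculation. All the numbers below (conversion rates, fee schedules, average order value) are invented for illustration:

```python
# Hypothetical numbers throughout: plug in your own conversion rates,
# merchant fee schedule, and average order value (AOV).
def net_revenue_per_visitor(conversion_rate, aov, fee_rate, fee_fixed=0.0):
    """Expected revenue per checkout visitor, after merchant fees."""
    return conversion_rate * (aov * (1 - fee_rate) - fee_fixed)

# Current gateway: 4.0% conversion at 2.9% + $0.30 per transaction.
current = net_revenue_per_visitor(0.040, aov=99.0, fee_rate=0.029, fee_fixed=0.30)

# New gateway: converts slightly better, but charges 3.4% + $0.30.
new = net_revenue_per_visitor(0.043, aov=99.0, fee_rate=0.034, fee_fixed=0.30)

# The new gateway wins only if its conversion lift outweighs the higher fees.
new_gateway_wins = new > current
```

With these made-up inputs the conversion lift covers the extra fees, but a smaller lift or a larger fee gap could flip the outcome, which is why both numbers belong in the model.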
If you are experimenting with different approaches, for example guest checkout vs. cart, you are weighing immediate conversions against long-term impact, like changes in conversion on the second and third transaction.
In this case, is your repurchase rate enough to justify those changes?
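One way to answer that question is to fold repeat purchases into an expected-orders-per-visitor figure. The rates below are invented for illustration only:

```python
# Invented numbers: guest checkout converts better up front, while an
# account-based checkout lifts the chance of a 2nd and 3rd purchase.
def expected_orders_per_visitor(first_cr, repurchase_rate, n_orders=3):
    """First conversion plus expected follow-up orders (geometric decay)."""
    return sum(first_cr * repurchase_rate ** k for k in range(n_orders))

guest = expected_orders_per_visitor(first_cr=0.050, repurchase_rate=0.20)
account = expected_orders_per_visitor(first_cr=0.045, repurchase_rate=0.35)

# The variation with the lower immediate conversion can still win once
# repeat purchases are counted.
account_wins = account > guest
```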
If you are testing layouts, for example one-step checkout vs. two-step (or more!) checkout, it’s important to keep in mind transactions that happen outside of a single session.
For example, in our case, cart abandonment emails apply only if a checkout has more than one step, so this needs to be accounted for when calculating results.
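A simple way to count those cross-session transactions is an attribution window: any purchase within a set period of the checkout session still counts toward that variation. The 7-day window below is an assumption for illustration, not a recommendation:

```python
from datetime import datetime, timedelta

# Assumed window: purchases within 7 days of the checkout session count,
# so orders recovered later by a cart abandonment email are included.
ATTRIBUTION_WINDOW = timedelta(days=7)

def attributed_to_variation(session_start, purchase_time,
                            window=ATTRIBUTION_WINDOW):
    """True if the purchase falls inside the session's attribution window."""
    return timedelta(0) <= purchase_time - session_start <= window

session = datetime(2016, 5, 1, 12, 0)
recovered = attributed_to_variation(session, session + timedelta(days=3))
too_late = attributed_to_variation(session, session + timedelta(days=14))
```

Here an order placed three days later (say, via an abandonment email) is counted, while one placed two weeks later is not.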
All in all, multiple factors influence the final results of a test. As I mentioned, we chose to create our own tracking system for all checkout tests. We developed it by carefully planning the models that would be used for result calculations, and by making mistakes on those models.
There was a situation when one variation was converting at 24%, while the other converted at 70%. As much as we wished to proclaim a great win and go celebrate, we chose to revisit the data. Sure enough, there was a mistake in the way we collected it.
We had to question the data many times, before we came up with a trustworthy tracking method for checkout optimization. Practically every checkout test was unique, so we had to develop a unique tracking model every time we ran a new test.
But don’t worry. Sometimes “develop a unique tracking model” just meant sitting down, listing all the metrics we wanted to track, and making sure we had a way to track each of them.
A few points to note if you are working on such a system:
For this test, is it important to know the total order value, or just whether a transaction happened?
Is it important to include/exclude declined transactions in the test data?
Is it important to know if it’s the first transaction, or a re-purchase?
Do I want to account for refunds and cancellations for every test variation?
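As a sketch of what those answers might turn into, here is a minimal per-transaction record and one metric computed over it. The field names are illustrative, not Mindvalley’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    variation: str               # which test variation served the visitor
    order_value: float           # total order value, if the test needs it
    declined: bool = False       # kept so a test can include or exclude declines
    is_repurchase: bool = False  # first transaction vs. repeat customer
    refunded: bool = False       # refunds and cancellations, per variation

def net_revenue(transactions, variation):
    """Revenue for one variation, excluding declined and refunded orders."""
    return sum(t.order_value for t in transactions
               if t.variation == variation
               and not t.declined and not t.refunded)

records = [
    Transaction("A", 99.0),
    Transaction("A", 99.0, declined=True),
    Transaction("A", 49.0, refunded=True),
    Transaction("B", 99.0, is_repurchase=True),
]
```

Each question on the checklist becomes either a field on the record or a filter inside the metric, which is most of what “develop a unique tracking model” amounted to.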
All in all, if you have a strong testing and optimization culture in your team, testing the checkout is the next logical step — and one of the most creative and exciting challenges for the team.