In my last blog post in the series on Game-Changing CRO, I talked about what it’s like when you have what seem like unlimited testing resources and the level of success that’s possible. I also showed how hard it is to have big success without a careful strategy to avoid a testing program based on hippos and rats (and if you don’t remember what hippos and rats have to do with testing, give the last post another look. :-) )
In this post, we’ll talk about one of four key ways to take control of your program and deliver big wins: Having a good plan, and prioritizing it.
Build a roadmap by looking at your whole conversion funnel
When we work with our customers, we do a deep-dive analysis of their website to bubble up overall performance trends and visitor behavior.
We look at different data sources, including:
Voice of the customer
Their competitors’ sites
Let me give you an example from a real customer.
First, we took a look at their site metrics by device type. In this case, device type turned out to be really important to planning their roadmap.
For this customer, desktop visitors make up a majority of overall traffic. They also have the highest reservation rate and the highest average order value compared with mobile and tablet devices. This data suggests that the most immediate opportunity lies with desktop visitors.
This is a key insight. If you’re anything like me, you read 10 stories a day about how mobile is taking over the world, think mobile first, desktop is dead, etc. And I don’t disagree that this is both the general trend and often true. But it wasn’t true for this customer, and it might not be true for every one of your sites. So check before you accidentally ignore the most important part of your business in your testing plan!
Second, after learning that desktop needed to be an important part of the testing plan, we also looked at the top actions users took by device, and found that while reservations were always the most important, different users had different needs after that. Mobile (and tablet) users were more likely to be on-the-go business travelers who needed to log in, while desktop visitors were usually looking for deals.
So we made sure that our test plan considered designing specific tests for tablet login, desktop deals, etc.
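If you want to run this kind of device-level check on your own data, the math is simple. Here's a minimal sketch; the session, reservation, and revenue numbers are hypothetical, purely to illustrate the comparison:

```python
# Hypothetical traffic, reservation, and revenue counts by device type.
segments = {
    "desktop": {"sessions": 120_000, "reservations": 4_200, "revenue": 1_050_000.0},
    "mobile":  {"sessions": 80_000,  "reservations": 1_600, "revenue": 240_000.0},
    "tablet":  {"sessions": 20_000,  "reservations": 500,   "revenue": 90_000.0},
}

for device, s in segments.items():
    rate = s["reservations"] / s["sessions"] * 100   # reservation rate, %
    aov = s["revenue"] / s["reservations"]           # average order value
    print(f"{device:8s} rate={rate:.2f}%  AOV=${aov:,.0f}")
```

With numbers like these, desktop wins on both reservation rate and AOV, and that is the kind of simple tally that should drive where your roadmap focuses first.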
Third, we took a look at where in the funnel visitors tend to drop off, and whether that is different by traffic source.
As you can see, visitors tend to exit the funnel between selection & upsell and between upsell & payment, so it makes sense to emphasize those steps in the testing plan.
Additionally, we noticed that organic & referral traffic tends to exit the funnel at higher rates than paid & direct. We've normalized all four traffic sources to an index of 100 at step 1 to protect the innocent, but if these traffic sources differ a lot in order value and cost, it'd be worth factoring that into where to apply the most testing effort.
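The normalization itself is easy to reproduce on your own funnel data. Here's a sketch; the step names and counts are made up for illustration, and each source is indexed to 100 at step 1 so the funnel shapes are directly comparable:

```python
# Hypothetical funnel counts per traffic source; step names are illustrative.
funnel_steps = ["search", "selection", "upsell", "payment"]
raw_counts = {
    "paid":     [50_000, 30_000, 24_000, 21_000],
    "direct":   [40_000, 25_000, 20_000, 17_500],
    "organic":  [60_000, 30_000, 19_800, 15_000],
    "referral": [20_000, 10_400, 7_000, 5_200],
}

indexed = {}
for source, counts in raw_counts.items():
    # Index every step to the source's own step-1 volume (step 1 = 100).
    indexed[source] = [round(c / counts[0] * 100, 1) for c in counts]
    # Drop-off between consecutive steps, in index points.
    drops = [round(indexed[source][i] - indexed[source][i + 1], 1)
             for i in range(len(counts) - 1)]
    print(f"{source:9s} {indexed[source]}  drop-off per step: {drops}")
```

Once every source is on the same 100-point scale, a bigger step-to-step drop is a bigger leak, regardless of how much raw traffic the source sends.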
So, don’t just test based on what’s in vogue, what someone in your business is keen on, or what your boss suggests. Do a little digging in the data (more is better) and figure out where you can move the needle for your business!
Ok, so let’s assume you’ve put together a plan that’s focused on delivering the most leverage for your organization. What next?
Prioritize testing ideas
In many of the organizations we work with, there is no shortage of testing ideas that make sense for the business. Sometimes hundreds. How do you pick and choose what test to run today, tomorrow, next week, next month?
To help our customers with this challenge, we developed a proprietary prioritization algorithm called SELECT. SELECT prioritizes test execution based on several factors including:
Length to run
Custom (customer survey score)
Moreover, because every customer has different needs and stakeholders, we made SELECT 100% customizable – customers can adjust the weighting to place more or less emphasis on any factors they choose.
Using SELECT allows customers to compare all their test ideas on an apples to apples basis, and make smart choices about what they test and when.
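SELECT itself is proprietary, but the general shape of a customizable, weighted prioritization score is easy to sketch. Everything below is hypothetical — the factor names, weights, and 1-10 scores are illustrative, not how SELECT actually works under the hood:

```python
# Illustrative weighted-score prioritization. Factor names, weights, and
# scores are hypothetical; adjust the weights to match your stakeholders.
weights = {"revenue_potential": 0.4, "ease": 0.2, "speed": 0.2, "stakeholder_score": 0.2}

ideas = [
    {"name": "Payment Page 1",       "revenue_potential": 9, "ease": 6, "speed": 7, "stakeholder_score": 8},
    {"name": "Payment Page 2",       "revenue_potential": 9, "ease": 4, "speed": 5, "stakeholder_score": 6},
    {"name": "Loyalty Registration", "revenue_potential": 4, "ease": 8, "speed": 9, "stakeholder_score": 5},
]

def index_score(idea, weights):
    """Weighted sum of factor scores (each 1-10); weights should sum to 1."""
    return sum(weights[f] * idea[f] for f in weights)

ranked = sorted(ideas, key=lambda i: index_score(i, weights), reverse=True)
for idea in ranked:
    print(f"{idea['name']:22s} index={index_score(idea, weights):.2f}")
```

Because every idea gets scored on the same factors and scale, the resulting index is exactly the apples-to-apples comparison described above — and re-weighting a factor re-ranks the whole backlog in one pass.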
One popular view from running SELECT is the overall index score measured against revenue potential. As you can see in the example below, you can compare a large number of potential tests in two ways:
Looking horizontally, to identify tests of similar priority for your business and prioritize those with the greatest revenue potential (e.g., Ancillaries 1 is better than Advanced Search 2, which is better than Loyalty Registration)
Looking vertically, to identify tests with similar revenue potential and prioritize those that have a better overall score (e.g., Payment Page 1 is better than Payment Page 2)
In general, tests toward the top right should receive a higher priority than those in the bottom left.
Another popular view we use from SELECT plots complexity against success potential. If you're just getting started in building internal support for your program, you might prioritize "quick wins" over tests that also have high potential but require a lot of effort. But where revenue potential is larger (bubble size), the effort required may be justified if you can get the support.
Finally, while prioritization is really valuable for figuring out how to order a long list of potential ideas, it’s also an incredibly powerful tool in pushing back on stakeholders – like HIPPOs or other departments – who bring new ideas to the table without a lot of vetting or concern for how those ideas will slow down or displace work already on the plan.
If you aren’t a HP Optimost customer today and don’t have access to SELECT, I’d encourage you to come up with your own prioritization system. Estimate things like effort and potential value. A basic system is a lot better than no system at all, and will keep RAT syndrome away!
In the next blog post, I'll cover the second big step you can take toward delivering outsize results – starting every test with the best possible hypothesis.