Putting together a repeatable and scalable process for consistent Conversion Optimisation wins

Posted by Manuel da Costa / categories: Conversion Optimization, Teams

In his book Outliers, Malcolm Gladwell wrote: “Success is not a random act. It arises out of a predictable and powerful set of circumstances and opportunities.”

The same is true for companies, large and small, that use Conversion Optimisation to grow their businesses. The ones with a set, repeatable process are the ones that consistently learn and improve their results. They use this process to onboard and educate new team members and to stick to their roadmap.

In this blog post, we are going to share with you the process used by highly effective optimisation teams.

Start with the data. Observe your customers.

You might be tempted to start with the idea phase and give your opinions on what should be tested next. However, the data and observation phase is one you cannot skip if you want to build a solid foundation for your tests.

Scour your analytics data and make notes on what trends and insights you can spot. Beware – averages lie. Segment your data to really understand the dropoffs in your conversion funnel and what causes users to leave without taking the action you want them to.
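To see why averages lie, here is a minimal sketch with illustrative, made-up session data (the segment names and figures are assumptions, not from any real export): a healthy-looking overall conversion rate can hide a segment that is performing badly.

```python
# Hypothetical sketch: averages hide segment-level differences.
# Assumes session data exported from analytics with a device segment
# and a converted flag per session (names and numbers are illustrative).
from collections import defaultdict

sessions = [
    ("desktop", 1), ("desktop", 1), ("desktop", 0), ("desktop", 1),
    ("mobile", 0), ("mobile", 0), ("mobile", 1),
    ("mobile", 0), ("mobile", 0), ("mobile", 0),
]

overall = sum(converted for _, converted in sessions) / len(sessions)

totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, sessions]
for device, converted in sessions:
    totals[device][0] += converted
    totals[device][1] += 1

rates = {device: conv / n for device, (conv, n) in totals.items()}
print(f"Overall: {overall:.0%}")          # 40%
for device, rate in rates.items():
    print(f"{device}: {rate:.0%}")        # desktop 75%, mobile 17%
```

The 40% average looks fine in isolation; only the segmented view reveals that mobile is dragging the funnel down, which is where a test idea should come from.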

Your analytics tool gives you quantitative data. It will tell you the what, but it will never tell you the why.

For this, you need tools that give you insight into user behaviour. Hotjar, Crazy Egg and Decibel Insight are just some of the tools at your disposal that provide a wealth of information – heatmaps, scrollmaps and session replay videos. This qualitative data lets you marry up the insights you spotted in your analytics and will help you come up with ideas to test.

Brainstorm ideas based on the data

Bring the team and stakeholders together and present the insights from your exploratory phase. Use this collaborative session to come up with ideas for fixing the problems you spotted. At this stage, don’t worry about whether an idea is worth exploring or not. Instead, make a list of everything that comes up.


Testing is expensive and time-consuming. Prioritisation is key.

This step involves everyone responsible for testing – stakeholders, core optimisation team members, developers and designers.

To prioritise, you can use a scoring system such as the PIE framework to evaluate each idea on its potential, importance and ease of implementation. The stakeholders and core optimisation team will be more interested in the potential and importance of the test, whereas the tech and design teams can score the idea on how much effort it will take to set up.
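The PIE scoring described above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the ideas, ratings and the 1–10 scale are assumptions, and the PIE score is simply the average of the three ratings.

```python
# Hypothetical sketch of prioritising test ideas with the PIE framework.
# Each idea is rated 1-10 on Potential, Importance and Ease; the PIE
# score is the average of the three, and ideas run highest-score first.

def pie_score(potential, importance, ease):
    """Average of the three 1-10 ratings, rounded to one decimal."""
    return round((potential + importance + ease) / 3, 1)

ideas = [
    # (idea, potential, importance, ease) -- illustrative values
    ("Simplify checkout form", 8, 9, 4),
    ("Rewrite homepage headline", 6, 7, 9),
    ("Add trust badges to cart", 5, 6, 8),
]

ranked = sorted(ideas, key=lambda i: pie_score(i[1], i[2], i[3]),
                reverse=True)
for idea, p, imp, e in ranked:
    print(f"{pie_score(p, imp, e):>4}  {idea}")
```

In practice the potential and importance ratings would come from the stakeholders and core team, while the ease rating comes from the developers and designers, so the final score reflects everyone named in this step.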

Test with rigour

It might be tempting to jump the gun and launch the test as soon as it’s ready to deploy. To ensure that your test runs correctly, put it through rigorous quality assurance first.

Run cross-browser checks to see if your test variations render accurately. If you use Optimizely, Conversion.com have created a Chrome extension that lets you verify whether your test is set up correctly. Create a test cookie, run through the test as a visitor would, and check that the goals fire correctly.

This helps you prevent skewed results that could be misinterpreted.

Test for two complete business cycles (typically weeks), and do not stop tests early just because you are happy with the results. Take weekday and weekend trends into account.
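One way to avoid stopping early is to size the test up front. The sketch below uses the common rule of thumb n ≈ 16·p(1−p)/d² visitors per variant (roughly 80% power at 5% significance for detecting an absolute lift d from a baseline rate p); the baseline, lift and traffic figures are illustrative assumptions, not recommendations.

```python
# Rough sketch of sizing an A/B test before launch, using the
# rule of thumb n ~ 16 * p * (1 - p) / d^2 visitors per variant
# (~80% power, 5% significance). All numbers are illustrative.
import math

def sample_size_per_variant(baseline_rate, min_detectable_effect):
    """Visitors needed per variant to detect an absolute lift."""
    p = baseline_rate
    d = min_detectable_effect
    return math.ceil(16 * p * (1 - p) / d ** 2)

n = sample_size_per_variant(0.05, 0.01)   # 5% baseline, +1pp lift
daily_visitors_per_variant = 500
days = math.ceil(n / daily_visitors_per_variant)
print(f"{n} visitors per variant, about {days} days of traffic")
```

Whatever the calculator says, round the duration up to whole business cycles – here, about 16 days of traffic would mean running the test for three full weeks so weekday and weekend behaviour are both captured.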

Results and analysis

Testing is as much about learning as it is about conversion uplifts. Sure, uplifts will always make you look good, but one cannot underestimate the importance of learning from a failed or inconclusive test (or even a successful one).

The test results in your testing tool are only one part of the story. If you have hooked up your analytics tool, you should be able to segment the data further and gain insight into what worked in the test and what didn’t.

And iterate

Once a test is complete, the team must come back together to discuss the findings and learnings. This session is all about deciding the next steps. It could be led by the optimisation manager or the experiment lead.

Iterating on existing ideas is what separates successful optimisation teams from the others.