A/B testing is powerful, but most sites in Sweden don’t have the traffic volume required to run an A/B test within a reasonable amount of time. Can you still optimize if you can’t A/B test? Yes! We’ll explain how.
First, let us explain why not everyone can A/B test.
Low traffic means you don’t have enough data, which means there’s a risk you’ll make the wrong decisions. Why? Because we generally like to believe what we see. Shouldn’t we? Well, with only a few observations (low traffic), the risk that the behaviour you observe is down to coincidence is much higher. More observations lower the risk of coincidence.
Basing your decisions on random behaviour and changing things based on guesses doesn’t sound good, and it’s far from working data-driven.
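To see what coincidence looks like in practice, here is a minimal Python simulation (the 5% conversion rate and the sample sizes are made-up numbers for illustration). The true conversion rate never changes, yet small samples report wildly different rates:

```python
import random

random.seed(1)

TRUE_RATE = 0.05  # assume the page truly converts 5% of visitors

def observed_rate(visitors: int) -> float:
    """Simulate `visitors` sessions and return the measured conversion rate."""
    conversions = sum(random.random() < TRUE_RATE for _ in range(visitors))
    return conversions / visitors

# A handful of small samples: the measured rate swings widely around 5%
print([round(observed_rate(100), 3) for _ in range(5)])

# A large sample lands close to the true 5%
print(round(observed_rate(100_000), 3))
```

With 100 visitors per sample you can easily measure 2% or 9% on a page that truly converts at 5%; with 100,000 visitors the noise all but disappears.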
How do I know if it’s possible to A/B test my site?
Practically speaking, anyone can run experiments, but A/B testing takes more than incoming traffic, a few days and a few conversions. You want to reduce the sources of error and ensure that what you see isn’t the result of coincidence.
Factors affecting the reliability of your A/B test results:
- Even distribution of visitors during the experiment period
- That the selection is random
- For how long the experiment runs
- Number of conversions and change in conversion rate
- Interference from external factors, for example seasonal variations
Because a major change leads to faster test results, different landing pages can be A/B tested against each other, even on lower-traffic sites. However, there’s a possibility that a total redesign of your page doesn’t improve your conversion. Instead, it could make it harder to know which change was for the better and which was for the worse.
We explain this “+/- zero result” more in detail in this blog post (Redesign, does it increase your conversion?).
We have done more A/B tests than we can count, so if you have questions, please contact us.
The table below gives an indication of whether your homepage, product page, landing page, etc. can be A/B tested or not.
| Traffic per day | Current conversion rate | Duration for A/B test (6% change) |
| --- | --- | --- |
| 5 000 | 5% | 10 weeks |
| 50 000 | 5% | 7 days |
Calculation per page: A/B test duration calculator
See where on your site you can feasibly A/B test, and how A/B testing differs depending on the test goal, with the “Experiment Feasibility calculator (post in Swedish)“.
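If you’d rather calculate than use an online calculator, a standard two-proportion sample-size formula gives a rough duration estimate. This is a sketch, not the exact method behind the calculators linked above, so its numbers won’t match the table exactly (assumptions such as statistical power and one- vs two-sided testing differ between calculators):

```python
import math

def sample_size_per_variant(base_rate: float, relative_lift: float) -> int:
    """Approximate visitors needed per variant for a two-proportion z-test
    (two-sided alpha = 0.05, power = 0.80)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = 1.96    # two-sided, alpha = 0.05
    z_beta = 0.8416   # power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

def duration_days(daily_traffic: int, base_rate: float, relative_lift: float) -> int:
    """Days needed to run a 50/50 A/B test at the given daily traffic."""
    total = 2 * sample_size_per_variant(base_rate, relative_lift)
    return math.ceil(total / daily_traffic)

print(duration_days(5_000, 0.05, 0.06))   # weeks of testing at low traffic
print(duration_days(50_000, 0.05, 0.06))  # only days at high traffic
```

The shape of the result matches the table: detecting a 6% relative lift on a 5% conversion rate takes weeks at 5,000 visitors per day but only days at 50,000.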
A tip: try to test at an early stage of your funnel where you have more traffic, your home page for example.
How do you optimize your site if you can’t A/B test?
Conversion optimization is something for everyone, high-traffic as well as low-traffic sites.
Rules for data driven conversion optimization:
- A change in your site should be based on observations and a well researched hypothesis.
- The hypothesis of improvement should be based on data, insight and understanding for your users.
- The change should affect the behaviour of the users.
- The change must be big enough to be measurable
CRO is about data and understanding your users’ behaviour. If you don’t have the quantitative data, focus on understanding instead. That means working dedicatedly with UX, usability tests and web psychology. Measure all you CAN measure, but pick the low-hanging fruit first.
Off we go.
1 Low hanging fruit
The first question to ask yourself is: How do you want your visitors to behave?
WHO are your visitors, what do THEY want to do and what do YOU want them to do?
A few examples of low-hanging fruit
- Use motivating copy (this is huge, a separate blog post about this will come)
- Prioritize content/visual hierarchy (think mobile)
- Clarify the next step, like your CTA (see example below)
- Communicate from the outside in, do others understand?
- Motivate scrolling and get the user to read to the end of the page
- Apply simple web psychology (for example, see paragraph 3)
And also, think about this:
- Don’t copy your competitors, they don’t know what they’re doing either
- There is no “perfect visitor”, one size fits all will never be true
Okay, some examples:
Let’s look at two different online newspapers where one uses a prioritized CTA, and one doesn’t.
We also checked out how it might look when you present too many choices and none of them is prioritized. This particular example uses inside-out communication: CTAs, words and abbreviations that probably only the most well-informed people on the subject understand. Also, it’s not clear whether the areas are clickable and take you somewhere, and if they do, where?
We have a separate blog post in Swedish about choices and how many choices are too many, read it here.
Case (in Swedish): Skandia sold 222% more child insurance plans after a project where we (among other things) picked low-hanging fruit (the project wasn’t A/B tested but 1/2 tested instead).
The less you need to rely on general design principles and “best practices”, the better. Once you have picked your low-hanging fruit, you should always work on improving your data collection. Answering “What do I need to know about my users?” involves adding relevant data sources and measuring points, as well as ensuring that the data is as accurate as possible. To mention some basics: filter out your own IP, identify and filter out bot traffic, tag your campaigns and make sure you know where your traffic comes from, etc.
2 Measure everything that can be measured
The long tail of conversion is a concept we often use. It means that many small components contribute to something bigger, so if you have multiple data sources, they can add up.
BUT stop! The first thing you need to do is understand which of your pages is the most important. It could be a page where conversion happens, but not necessarily. Most commonly, however, it’s a product or landing page. If you have a checkout page, that is probably one of the most important pages too.
1. Qualitative data
If you have low traffic, qualitative data can be an advantageous alternative to waiting years for enough quantitative data (such as page views from tools like Google Analytics). Data is perishable and users behave differently in different cycles. Therefore, you shouldn’t look at too long time periods when you analyze and search for behaviour patterns.
Qualitative analysis methods can show how users interact with your site or app. Below is an example of how qualitative analysis in the form of eye tracking can be used on a wireframe (sketch of the site), before the wireframe proceeds to development and launch.
Qualitative data can be collected by:
- User testing/Eye tracking
- Studies/Customer support
Qualitative data is suitable for sites with high as well as low traffic, and is a great addition to quantitative data.
Usability tests answer the question ‘Why did it happen?’
Recordings and questions will reveal obvious UX problems, problems which in many cases don’t show up in analytics tools like Google Analytics. Ask people who match your target audience to perform specific tasks on the site, record them or sit next to them, and see how and why they navigate the way they do.
Case Spotify: We asked a question on the registration page: “What is the reason you don’t complete your registration?” The answers we got helped us to understand the users and we could easily come up with a solution to the problem.
What prevents you from signing up?
I do not want to leave my card details for something that is free.
Solution: Information that explains why credit card details are required. The result is secret but yes, it led to increased conversion.
Side note: We A/B/C/D-tested different copy alternatives, but that is not the point of this example.
Tools for qualitative data collection:
- Inspectlet (Heatmaps, Clickmaps, Scrollmaps, Funnel report, Visitor recordings, Form tracking, API integrations)
- SessionCam (Heatmaps, Clickmaps, Scrollmaps, Funnel report, Visitor recordings, Form tracking, API integrations)
- Clicktale (Heatmaps, Clickmaps, Scrollmaps, Funnel report, Visitor recordings, Form tracking, API integrations)
- Hotjar (Heatmaps, Clickmaps, Scrollmaps, Funnel report, Visitor recordings, Form tracking, Surveys, Poll, Recruit people to usability tests.)
- Triggerbee (Subscriber and Visitor recordings, Scroll analytics, Popups, Form tracking, API integrations)
2. Quantitative data
Quantitative data is aggregated data from multiple users. The goal is to get as accurate a reflection of reality as possible and minimize the risk of coincidence that comes with a small or insufficient data set, unlike qualitative data, which captures individual user behaviour, responses, actions, etc.
You get quantitative data from:
- Google Analytics (or some other analysis tool)
- Heat maps (click, scroll, mouse movement…)
Make sure the data you collect is fresh.
For the same reason that you can’t run experiments over too-long test periods, you can’t aggregate your data over too-long time periods. To define what a reasonable time period is, the term “business cycle” was coined, which is a way to measure your business’ cycle. Usually, one business cycle is a week long (Monday – Sunday), but seasons also play an important role (rainy autumns/winters, vacations, Christmas, etc.).
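If you aggregate by whole business cycles, a small helper can snap an analysis window to complete Monday–Sunday weeks. A minimal sketch, using the article’s one-week rule of thumb (the dates below are arbitrary examples):

```python
from datetime import date, timedelta

def whole_weeks(start: date, end: date) -> tuple[date, date]:
    """Trim a date range to the complete Monday-Sunday weeks inside it."""
    # advance start to the next Monday (weekday() == 0)
    first = start + timedelta(days=(7 - start.weekday()) % 7)
    # pull end back to the previous Sunday (weekday() == 6)
    last = end - timedelta(days=(end.weekday() + 1) % 7)
    return first, last

first, last = whole_weeks(date(2017, 3, 8), date(2017, 4, 20))
print(first, last)  # first full Monday and last full Sunday in the range
```

Comparing only whole weeks keeps weekday/weekend traffic patterns from skewing the aggregates at the edges of the window.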
Can you answer these questions?
- Do you measure your most important KPIs?
- Where is the biggest leak in your site? That’s where you should start optimizing!
- How does the site perform across different browsers?
- How fast does your page load?
It can be fun to see how the outdoor weather affects your sales, but don’t start by implementing that measurement.
In Google Analytics there are several useful reports that quickly help you identify leaks on important pages, such as non-compatible devices, browsers, relevance and how your site search is used. Or, as mentioned above, suspected crawler bots which distort the data you collect.
Search tip: in one Swedish study we conducted, we found that users of on-site search convert 91% more. Is your search field optimized and visually prioritized?
3 Increase motivation, decrease friction
When a visitor lands on your site, you want to greet them in the following order:
- Increase motivation
- Decrease friction
Not in the reverse order, since you want to help your visitor make a motivated decision first. Only once a visitor is motivated will they want to complete what you want them to do. That is when you simplify and decrease friction.
Motivation is the most important part of the recipe: the ones who really want that new iPhone will queue for weeks. The ones who really need that new sweater will fight through 5 pages of forms if necessary.
Therefore, the page that your visitors land on (starting page, product page etc) should have the following components in place:
and then, in the next and final phase, the following components:
Visualized like this:
Examples of what can go wrong
Several car rental companies fill their start page with campaigns, offers, etc. It’s very rarely just a simple start page where you can simply choose WHERE and WHEN you wish to rent a car (not many people visit this type of site without a mission and spontaneously decide to rent a car for the weekend).
The start page is a place where you want to increase motivation and engage the user: let them switch on autopilot and take small steps, not the opposite.
When (if) the user clicks through, most car rental companies show a page with the cars they offer (1000 photos of different cars, sizes, colours and models). You are asked to pick a red or blue car, but all you want to know is the price and when you can have it (simplify to increase motivation), not 1000 other choices (at this point, they only add friction). Right?
Additional sales and exposure of current campaigns are things you want AFTER the user has 1) chosen when and where to rent a car and 2) seen the price.
The start page (or landing page): don’t let the user face decisions. Engage through small steps and increase motivation.
Checkout: Decrease friction and offer more choices, insurance, bigger car, other colors etc.
You can read more about how we use the components of conversion evaluation (in Swedish) here.
Case (in Swedish): Mathem sold 25% more food
after we used the analysis components (a project that was A/B tested), among other things. Note how the user can start small and engage further by filling in only their post code. Web psychology: Gradual Engagement.
The relevance of copywriting and photos is not marked here.
4 Prioritize hypotheses
You have several different conversion goals on your site. The primary goal might be to make users purchase a product, and a secondary goal could be to increase newsletter sign-ups, get users to read an article or get them to contact you. Have you defined these goals?
You want to make simple changes that have big impact.
Easy as that.
Estimate potential uplift
How simple or time consuming will it be to make this change?
- Technical – Will the change involve IT or development?
- Design – The production and completion of UX, UI etc.
- Political – Will other departments have opinions on your change? Marketing, graphic profile team, law etc.
- Additional – Your own time, consultants etc.
What potential uplift will the change have and will it happen at an important stage of your funnel?
- Perception – Will the user see the change? Has the change been made where users actually land, scroll, click…
- Behavioral Contrast – Will the user experience a clear and noticeable difference which affects how the user wants to/can interact with the element?
- Behavior Patterns – Is the new change based on one or more web psychology principles?
- Conversion funnel – Will the change take place at an important stage in your funnel?
Remember: the bigger the expected change in conversion rate, as a consequence of your site change, the simpler and faster it will be to measure and prove.
We use a framework to weigh Effort vs Impact. You will also need a framework; feel free to use the same one as us:
Take a look at our business colleagues’ tool “Iridion – Control your optimization process“, the framework we used in the description above.
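As a sketch of how such an Effort vs Impact weighing can work in practice (the hypotheses and the 1–5 scores below are invented for illustration, not taken from any real project):

```python
# Rate each effort factor and each impact factor from 1 (low) to 5 (high),
# then rank hypotheses by total impact minus total effort.
hypotheses = [
    {"name": "Rewrite CTA copy",
     "effort": {"technical": 1, "design": 1, "political": 2, "additional": 1},
     "impact": {"perception": 5, "contrast": 4, "patterns": 3, "funnel": 4}},
    {"name": "Redesign checkout flow",
     "effort": {"technical": 5, "design": 4, "political": 4, "additional": 3},
     "impact": {"perception": 4, "contrast": 5, "patterns": 4, "funnel": 5}},
]

def score(h: dict) -> int:
    """Simple prioritization score: higher means do it sooner."""
    return sum(h["impact"].values()) - sum(h["effort"].values())

for h in sorted(hypotheses, key=score, reverse=True):
    print(f'{h["name"]}: impact-effort score {score(h)}')
```

A spreadsheet does the same job; the point is to make the effort and impact factors explicit per hypothesis instead of prioritizing by gut feeling.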
An example of a super simple change
We just showed how Aftonbladet uses a visually prioritized CTA, which is a low hanging fruit.
However, if we look below (an A/B test, yes, but that is not the important part) you will see how something as simple as changing the text on a CTA can make a massive difference.
Experiment taken from Maria del Riccio’s presentation at Conversion Jam 2015. The change in the test was the CTA copy (behavioural contrast?).
We want to show where you, as a low-traffic site, should not focus. Big changes for measurable uplifts, remember.
Copy and CTAs are extremely important, so important that we’ll have to write a separate blog post about them. Put copy and CTA under “just do it” rather than “prioritize your hypotheses”.
An example of a super noticeable change
This is how Talia Wolf (one of the top names within CRO) chooses to build hypotheses of different kinds (specifically for mobile in this example): big changes.
Talia has a framework that she talked about at Conversion Jam 2016.
How can I measure if my changes have effect?
You conduct 1/2 tests, that is, you check your numbers before the change and then check them again after.
The time periods (before vs after) should be of equal length, with an even traffic distribution, and not affected by external factors.
When we say that you should check your numbers, we’re talking more specifically about conversion rate, number of products sold, button clicks, number of times users contacted you, scrolling, etc. Exactly what you should look for depends on the changes you make and what effects you want to achieve.
Exactly what numbers you should look at when analyzing a 1/2 test also depends on traffic volume and the number of conversions, just as with A/B testing. It will be simpler to see the result of a button change on the start page if you look at the number of CTA clicks on the start page than to make out whether the button change affected final conversion and purchases in the checkout. Or, further still, long-term growth, returning customers, etc.
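As an illustration of how you might check whether a before/after difference is more than coincidence, here is a sketch using a two-proportion z-test (the visit and conversion counts are invented). Keep in mind that a 1/2 test, unlike a randomized A/B test, can never fully rule out external factors:

```python
import math

def conversion_change(before_conv: int, before_visits: int,
                      after_conv: int, after_visits: int):
    """Compare conversion rates before vs after a change (two-proportion z-test)."""
    p1 = before_conv / before_visits
    p2 = after_conv / after_visits
    pooled = (before_conv + after_conv) / (before_visits + after_visits)
    se = math.sqrt(pooled * (1 - pooled) * (1 / before_visits + 1 / after_visits))
    z = (p2 - p1) / se
    return p1, p2, z

# Two equally long periods with even traffic: e.g. 4 weeks before, 4 weeks after
p1, p2, z = conversion_change(500, 10_000, 590, 10_000)
print(f"before {p1:.1%}, after {p2:.1%}, z = {z:.2f}")
# |z| > 1.96 roughly corresponds to 95% confidence that the change is real
```

With these example numbers the lift from 5.0% to 5.9% clears the 1.96 threshold; with a tenth of the traffic the same relative lift would not.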
If you have a low traffic volume, A/B testing might not be for you, but that’s no excuse not to work with conversion optimization.
If you can’t A/B test: start with low hanging fruit, focus on your most important pages, understand your users and prioritize your hypotheses!