How we increased Revenue by 27.5% using user feedback from Hotjar Polls and Surveys
In this article, you will learn how we used Hotjar Surveys and Polls on product and cart pages to find out what concerns and hesitations our users have before purchase and what prevents them from buying from us.
We proactively use qualitative analysis in our Conversion Rate Optimization process to better understand users: their motivations, concerns, fears, and hesitations. This allows us to formulate data-driven A/B testing hypotheses and increases the chance that A/B tests will be successful, as every hypothesis is backed by direct feedback from potential customers.

And some of the most powerful qualitative data you can get on your website is user feedback from on-page surveys and polls. We have already written a step-by-step guide about collecting user feedback with Hotjar, and now we will describe how we implemented the whole CRO process backed by user feedback data and got a 27% revenue uplift with just a few changes on product pages.
Step 1: Collection of user feedback
When we started to work with our customer (we can't share the website's name because of an NDA), we didn't understand the main concerns and fears of the target audience, so to get this information we implemented polls and surveys on the main steps of the conversion funnel:
- Product pages
- Cart page
- Thank you page (to survey new customers)
Collecting user feedback takes a long time, as the response rate is very small. That's why we had to wait about three weeks to get almost 200 responses in total for the Product and Cart page polls.

On the other hand, it took much less time to get feedback from new customers through surveys on the Thank You page after purchase and in the order confirmation email.

Is it enough data?

Of course, the more data you have, the more insights you can get from it. But according to a ConversionXL article, 100-200 responses are enough to identify trends and draw conclusions (there is a high chance that any critical problem on your website will be mentioned many times within these 100-200 responses).
As you can see in the image above, if you analyze 100-200 responses, you have roughly a ±6% margin of error in your findings, which is totally okay for qualitative surveys. You're mostly trying to spot patterns and do voice-of-customer research.

Clearly, though, you don't want to survey too few people. Ask 10 people, and a few loud voices can easily skew your data; you're then at high risk of seeing false patterns.
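If you want to sanity-check the ±6% figure yourself, the standard margin-of-error formula for a proportion gives roughly the same number. Here is a quick sketch (95% confidence, worst-case proportion of 0.5):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a proportion.

    n: number of responses, p: observed proportion (0.5 = worst case),
    z: z-score for the confidence level (1.96 for 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 100, 200):
    print(f"n = {n:>3}: ±{margin_of_error(n):.1%}")
# n =  10: ±31.0%
# n = 100: ±9.8%
# n = 200: ±6.9%
```

At 10 responses the error is about ±31%, which is exactly why a handful of loud voices can create false patterns; at 200 it drops to roughly the ±6% cited above.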
Step 2: Analysis
When you have enough responses, it's time to analyze them. To simplify this process, the folks at ConversionXL (yes, we learn a lot from them, as they are real professionals in CRO) suggest doing it the following way, and after doing it a few times I completely agree with their approach:
1. Conduct an initial review of all the information to gain a first sense of the data.
2. Code the data. This is often described as 'reducing the data', and usually involves developing codes or categories (while still keeping the raw data).
3. Interpret the data.
4. Write a summary report of the findings.

When you conduct the initial review of all responses, everything becomes clearer as you notice trends and repeated responses. Then you can easily divide them into categories and compare which categories of feedback occur most often.

Below you can see how we "coded the data". As a result, we got five categories of concerns that were mentioned most often, plus a catch-all "Others" bucket:
- Shipping information (Location, Time, Cost)
- Size/Fit of the product
- Materials and Quality of the product
- Security concerns about the website
- Colour of the product
- Others
"Code the data": categories created after the initial review
Then we summarized the number of responses in each category and visualized it in a chart:
Visualization of user feedback in a Google Spreadsheet
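For reference, once every response has been assigned a code, producing such a tally is straightforward. Here is a minimal sketch using our category names; the counts are illustrative placeholders, not our actual survey data:

```python
from collections import Counter

# Each response gets exactly one code during the review pass.
# The counts below are placeholders for illustration, not the real data.
coded_responses = (
    ["Size/Fit"] * 42
    + ["Materials and Quality"] * 41
    + ["Shipping information"] * 40
    + ["Website security"] * 15
    + ["Colour"] * 12
    + ["Others"] * 30
)

counts = Counter(coded_responses)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:<22} {n:>3}  ({n / total:.0%})")
```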
After that it became clear that our potential customers have 3 major concerns:
1. Size/Fit: they don't know if our product (it's a hat) will fit their heads.
2. Material/Quality: they need specific information about the product's materials.
3. Shipping info: they don't understand whether we deliver worldwide, how much it costs, and how long it takes.

And all three categories were mentioned an almost equal number of times.
Step 3: Preparing for A/B testing
Now it became clear in which direction we should go to improve the website's conversion rate.

We should also say that we decided to add the Size and Material information without A/B testing. As for the Shipping information, we thought the risk was too high, so we decided to A/B test this hypothesis.

The hypothesis was the following:
"We expect that by adding a Shipping information section to the product page, we will increase Ecommerce Conversion Rate and Revenue."

As we are working with a Shopify website, we decided to use a Shopify app that automatically puts a dynamic section (the visitor's country is detected by GeoIP) on the product page below the ATC (add-to-cart) button.
Shipping Information by Shipi Fireapps
As this app works based on GeoIP, it even adds a degree of personalization to the product page.
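We don't know the app's internals, but conceptually a geo-personalized shipping block boils down to a country-to-shipping lookup along these lines (a sketch only; all countries, costs, and delivery times below are made-up examples):

```python
# Conceptual sketch of a geo-based shipping section, not the app's actual code.
# Countries, costs, and delivery times are made-up examples.
SHIPPING_TABLE = {
    "US": {"cost": "Free", "eta": "7-12 business days"},
    "GB": {"cost": "$4.99", "eta": "10-15 business days"},
}
DEFAULT = {"cost": "$6.99", "eta": "14-21 business days"}

def shipping_message(country_code: str) -> str:
    """Text rendered below the add-to-cart (ATC) button for this visitor."""
    info = SHIPPING_TABLE.get(country_code, DEFAULT)
    return f"Ships to {country_code}: {info['cost']} shipping, {info['eta']}"

# In production, the country code would come from a GeoIP lookup on the
# visitor's IP address.
print(shipping_message("US"))
```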

The logic behind this A/B test was the following:
1. We added the Shipping section to all product pages through the Shopify app.
2. Then we used Visual Website Optimizer to change the design of this section (with HTML/CSS) and made it visible to only 50% of visitors.

As a result, we got two versions of the product pages:
Control and Variation for the A/B test
As you can see, we slightly changed the CSS styles of the Shipping section so that it became a little more prominent, looked better, and fit the overall design of the product pages.
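Visual Website Optimizer (VWO) handles the traffic split for us, but conceptually a consistent 50/50 assignment is just deterministic bucketing on a visitor ID, along these lines (a simplified sketch, not VWO's actual implementation):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "shipping-section",
                   split: float = 0.5) -> str:
    """Deterministic 50/50 bucketing: the same visitor always gets the same version."""
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "variation" if bucket < split else "control"

print(assign_variant("visitor-123"))  # stable across page views
```

The hash makes the assignment sticky: a returning visitor sees the same version on every page view, which keeps the measurement clean.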
Step 4: A/B testing through VWO
For A/B and split testing we used Visual Website Optimizer, as it's one of the most powerful tools on the market and can be easily integrated with Google Analytics for deep post-test analysis.
In addition, we are Certified Partners of VWO, as we actively use their platform and recommend it to all of our clients (by the way, all our CRO customers get discounts on VWO plans because of this).
When the test variation was ready, our team thoroughly tested it using query parameters, so we were able to target the A/B test only at ourselves and check that everything was displayed and worked as needed.

Then the A/B test was integrated with Google Analytics for deep post-test analysis and started for all mobile visitors coming to the product pages.
In addition, we started recording user sessions through Hotjar (an extra opportunity to check that everything works correctly in the browsers of real visitors), so 30-60 minutes after the start of each A/B test we review a few dozen user recordings to be 100% sure that everything is going according to plan.
Hotjar User recordings to be 100% sure that everything works correctly
It's worth saying that while a test is live, we always monitor the intermediate results to catch anything strange as soon as possible (yes, that sometimes happens in A/B testing), and we recommend this to everyone who runs A/B tests.
Step 5: Analysis of A/B testing results
After 8 days we had enough data to call the winner with 95% statistical significance, and as you can imagine, it was the variation with the Shipping information below the ATC button.

As a result of this A/B test, we got:

1. 27.5% Revenue uplift
2. 27.72% Ecommerce Conversion Rate uplift
3. 6.02% Add-to-Cart Rate uplift
4. 20.9% uplift in conversion rate from checkout started to transaction

Google Analytics results of A/B test
As you can see, even when A/B testing product pages, we don't focus on micro-conversions (i.e., Add-to-Cart rate in this case); we always measure the performance of each A/B test against the final goals: Revenue and Ecommerce Conversion Rate.

And I think it's the right approach for A/B testing on ecommerce stores, as ultimately the final goal of any CRO is to improve the business by increasing revenue and conversion rate.

For example, you can increase the CTR of the Add-to-Cart button, but it doesn't matter if the extra people who added the product to the cart didn't convert into customers and dropped off at other steps of the funnel. You can lure some people in with unrealistic offers, but you will never convert them into customers. But let's get back to the case study.

In the post-test analysis we also checked the statistical significance of the results to be sure it wasn't just a random uplift. For that we use Abtestguide.com, and I recommend this service, as it's very simple and visualizes results very well. In addition, you can easily save each post-test analysis and share it with other people.
Statistical Significance of results for Ecommerce Conversion Rate
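Abtestguide.com does the math for you, but under the hood such a check is a standard two-proportion z-test. Here is a minimal sketch with placeholder numbers (not our real traffic):

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """One-sided z-test: is the variation's conversion rate higher than the control's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)
    return z, p_value

# Placeholder numbers for illustration, not our real traffic.
z, p = two_proportion_z_test(conv_a=120, n_a=5000, conv_b=155, n_b=5000)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")  # p < 0.05 => significant at 95%
```

With these placeholder numbers, z ≈ 2.14 and p ≈ 0.016, so the uplift would be significant at the 95% level.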
Step 6: Learnings and next steps
Conversion Rate Optimization and A/B testing are ongoing processes. When you finish each A/B test, you have to learn from it, regardless of whether it was a big win or a failure. You have to analyze the results, think about why you got them, and use those learnings and insights in the next A/B tests.

That's how a data-driven business works. So let me explain how we do that...

Before this successful A/B test with its 27% revenue uplift, we had a big failure with almost the same hypothesis.

The previous A/B test produced a ~20% drop in conversion rate and revenue. But it included one additional element: an "Estimated delivery time" text in the Shipping section.
Our failure: the first version of this A/B testing hypothesis
With this variation we got an approximately 20% drop in conversion rate. But when we got this negative result, we didn't stop A/B testing this hypothesis altogether. Instead, we started to think about why it had happened.

As a result, we guessed that it could have happened because of the long delivery time, which can be critical for our potential customers, especially those from the US. So we decided to remove this information in the next A/B testing iteration...

As a result, we got a 27% revenue uplift.

Now imagine if we had given up on the idea of adding shipping information to product pages... In that case we would never have gotten such a good uplift in conversion rate and revenue.

But let's go further.

So we got a 27% revenue uplift.

Should we stop A/B testing this hypothesis, or go further with it to get even more revenue?


Of course, we chose the second option and started to think about how we could make this feature even more profitable for us.

Here are just a few hypotheses that could be implemented in the next A/B tests:
1. By placing the Shipping section above the fold, more visitors will see it, so we can expect it to increase conversion rate.
2. By making the Shipping section more prominent, we will attract more attention to it, so we can expect it to increase conversion rate.

This is what a real data-driven process looks like... You have to test a lot, learn a lot, and apply those learnings to the next tests.
Data-driven Process
That's how we work. It is the process we try to apply to all our clients' businesses. It is the process that distinguishes a successful online company from an unsuccessful one.

It's a long process. It's a complex process that takes a lot of time and effort. But over the long run, it can bring great results for a business.

I hope this article is helpful for you. And since it should be actionable, I recommend doing a few things:
1. Install Hotjar on your website
2. Set up polls on key conversion steps of the funnel
3. Set up a survey on the Thank You page and in the confirmation email.

It will take less than an hour, but in a few weeks you will have a lot of user feedback that will help you better understand your target audience and formulate data-driven A/B testing hypotheses.
Do you want to work with us?
Just click the button and fill in the quick 5-question form.