Use Impact evaluation to:
- Measure the impact on conversion and revenue to gauge the success of your efforts
- Quickly assess whether an issue is worth fixing
- Compare segments to identify significant differences in performance
- Predict future UX/UI opportunities and prioritize investments based on ROI
How to calculate impact
1. Set up your Analysis context
Open the Analysis context and choose your first segment, then click Compare to choose your second segment.
E.g., [Traffic source] Email compared to [Traffic source] Paid search.
2. Define a goal
What's your analysis goal? Define it clearly to measure success and assess its impact on conversion.
Once the Analysis context and goal are defined, you'll start uncovering initial insights such as:
- The extent of the impact.
- The magnitude of the impact on conversion (measured in missed conversions).
- The potential increase in conversions if a certain percentage of one segment converted similarly to the other.
To delve deeper, you can quantify the impact on revenue and refine the improvement scenario by adjusting the impact calculation (refer to step 3).
3. Edit Impact Calculation
Decide on a value for each conversion to estimate revenue impact.
By assigning a value to each conversion related to your goal, you can calculate missed or additional revenue based on this value.
Whether your e-commerce tag is enabled or not, you can visualize and adjust the value per conversion to accurately measure the impact on revenue:
(For e-commerce goals) How to define the value per conversion
For your chosen goal, you can select between median cart (default) and average cart.
Both values are automatically calculated based on the segment that performed better. Once you've made your selection, simply click Apply, and your missed or additional revenue will be calculated accordingly based on the defined value.
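To see why the choice matters, here is a minimal sketch (with made-up order values) contrasting the two statistics; the median cart resists large outlier orders, while the average cart is pulled up by them:

```python
from statistics import mean, median

# Hypothetical order values (in $) from the better-performing segment.
orders = [40, 55, 60, 70, 75, 90, 480]

median_cart = median(orders)   # robust to the $480 outlier
average_cart = mean(orders)    # inflated by the $480 outlier

print(f"median cart:  ${median_cart}")        # $70
print(f"average cart: ${average_cart:.2f}")   # $124.29
```

Which statistic better represents the value per conversion depends on how skewed your order values are.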
(For non-e-commerce goals) How to define the value per conversion
Select a custom value. You can define a custom value for each conversion related to your selected goal. This value is saved with your goal and updated for all users. Once you've made your selection, simply click Apply, and your missed or additional revenue will be calculated based on the value you define.
Adjust the improvement scenario.
You can adjust the impact calculation by using different improvement scenarios.
Learn more about adjusting the displayed opportunity here.
Analysis example: Analyze marketing campaign performance
- Go to your Analysis context
- Toggle on 'Compare'
- Customize the conditions for your segments:
- Segment A: users who engaged with your marketing campaign (e.g., those who clicked on the CTA in a newsletter).
- Segment B: visitors who reached the campaign page through alternative channels (e.g., through general browsing rather than via the newsletter).
Example: Segment A, "Users who arrived at the site through the marketing newsletter" vs. Segment B, "Users who reached the campaign page without using the newsletter".
4. Define a goal value (e.g., clicked on the main CTA on your campaign page)
5. Define a value for each conversion of your selected goal. Click Quantify the revenue and enter a custom value. This value, chosen by you, should best represent the success of your marketing efforts in monetary terms.
6. Read the results.
Example Analysis Interpretation: Users arriving from your email marketing campaign are 15% less likely to click on the 'Submit the Form' CTA. If you cease driving users from email and utilize alternative routes to your campaign page, you could potentially generate an additional estimated revenue of $225,600 (based on your pre-defined value per conversion of $600).
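The arithmetic behind a figure like this can be sketched as follows. The traffic and conversion-rate gap below are invented purely to reproduce the example's numbers; the actual values in your account will differ:

```python
# Hypothetical inputs chosen to reproduce the example figure above.
email_traffic = 4_700         # sessions in the underperforming (email) segment
cr_gap = 0.08                 # conversion-rate gap vs. the better segment
value_per_conversion = 600    # custom value per conversion defined in step 5

missed_conversions = email_traffic * cr_gap
additional_revenue = missed_conversions * value_per_conversion

print(f"missed conversions: {missed_conversions:.0f}")    # 376
print(f"additional revenue: ${additional_revenue:,.0f}")  # $225,600
```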
FAQs
What segments are comparable?
By default, segments selected are compared to All other sessions on the same device(s). However, this comparison is not always the most relevant, since it includes users who could be anywhere in their navigational journey.
Best practice: If you are analyzing a segment of users who clicked a certain zone on a page, compare them to users who visited that same page but did not click the zone, rather than to the default All other sessions.
Does Impact evaluation show the cause of low conversion rates?
It is a start! However, even if you see a strong correlation between, say, a slow load time and a low conversion rate, keep in mind that it is not always the root cause.
Best practice: Check for other factors to explain the conversion rate (e.g., a broken CTA).
Can external factors lead to counterintuitive insights?
Yes, for instance, users who are rage-clicking might convert better than users who do not. Why? Rage clicks indicate a motivation to buy, hence the multiple attempts. But of course, adding rage clicks to users' journeys will not actually yield higher conversion rates or revenue.
If Segment A is underperforming Segment B, should I just apply my campaign for Segment B to A to boost conversion?
Maybe it will help performance, but don't assume that shifting all Segment A users to Segment B's campaign will result in performance parity. Why not? Not 100% of previously non-converting Segment A users would have converted if acquired via the Segment B campaign. (To avoid generating insights based on this unrealistic outcome, Impact evaluation displays the per-session revenue missed/gained, not the total revenue gained/missed.)
How is the revenue opportunity (additional revenue) calculated?
We display an estimated revenue that could be generated if the users from the losing segment convert at the rate of the winning segment. As such, we show a projected revenue based on the difference in the conversion rates of the two segments and the value per conversion of the outperforming segment.
To illustrate this, imagine the following scenarios:
1. You compare a chosen Segment against the default 'All other sessions'.
- The selected Segment outperforms the default ‘All other sessions’ segment.
The revenue opportunity is calculated from the traffic of the lower-performing default segment ('All other sessions'), multiplied by the difference in conversion rate between the default segment and the selected Segment, multiplied by the value per conversion (median cart, average cart, or custom value) of the higher-performing selected Segment.
Additional revenue (e-commerce goal)
Additional revenue (non e-commerce goal)
- The selected Segment underperforms the default ‘All other sessions’ segment.
The revenue opportunity is calculated from the traffic of the lower-performing selected Segment, multiplied by the difference in conversion rate between the default segment ('All other sessions') and the selected Segment, multiplied by the value per conversion (median cart, average cart, or custom value) of the higher-performing default segment ('All other sessions').
Additional revenue (e-commerce goal)
Additional revenue (non e-commerce goal)
2. You compare a chosen Segment A against another chosen Segment B
- Segment A outperforms Segment B.
The revenue opportunity is calculated from the traffic of the lower-performing segment (Segment B), multiplied by the difference in conversion rate between Segment B and Segment A, multiplied by the value per conversion (median cart, average cart, or custom value) of the higher-performing segment (Segment A).
Additional revenue (e-commerce goal)
Additional revenue (non e-commerce goal)
- Segment A underperforms Segment B.
The revenue opportunity is calculated from the traffic of the lower-performing segment (Segment A), multiplied by the difference in conversion rate between Segment A and Segment B, multiplied by the value per conversion (median cart, average cart, or custom value) of the higher-performing segment (Segment B).
Additional revenue (e-commerce goal)
Additional revenue (non e-commerce goal)
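The four cases above reduce to one rule: project the winning segment's conversion rate onto the losing segment's traffic. A minimal sketch of that rule (segment names and numbers below are hypothetical):

```python
def revenue_opportunity(traffic_a, cr_a, traffic_b, cr_b, value_per_conversion):
    """Projected additional revenue if the lower-performing segment
    converted at the higher-performing segment's rate.
    value_per_conversion: the winner's median cart, average cart,
    or the goal's custom value."""
    # The lower-performing segment's traffic drives the projection.
    if cr_a >= cr_b:
        loser_traffic, cr_gap = traffic_b, cr_a - cr_b
    else:
        loser_traffic, cr_gap = traffic_a, cr_b - cr_a
    return loser_traffic * cr_gap * value_per_conversion

# Segment A: 12,000 sessions at a 5% CR; Segment B: 20,000 sessions at 3%;
# $85 median cart in the winning segment (Segment A).
print(f"${revenue_opportunity(12_000, 0.05, 20_000, 0.03, 85):,.0f}")  # $34,000
```

The same function covers the default-segment comparison: pass 'All other sessions' as one of the two segments.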
How are missed conversions, missed revenue, and additional conversions defined and calculated?
- Missed conversions
The number of conversions lost as a result of the difference in conversion rate (of the selected goal) between segment A and segment B, or between the selected segment and all other sessions on the same device(s).
Missed conversions = Traffic of SEGMENT N × (Conversion rate of SEGMENT M − Conversion rate of SEGMENT N)
SEGMENT M: the segment that converted better
SEGMENT N: the segment that converted less
- Missed revenue
The result of multiplying the number of missed conversions (of the selected goal) by the value per conversion of the higher-performing segment.
Missed revenue (e-commerce goal) = Traffic of SEGMENT N × (Conversion rate of SEGMENT M − Conversion rate of SEGMENT N) × Median cart (or average cart) of SEGMENT M
Missed revenue (non-e-commerce goal) = Traffic of SEGMENT N × (Conversion rate of SEGMENT M − Conversion rate of SEGMENT N) × Custom value of the goal
SEGMENT M: the segment that converted better
SEGMENT N: the segment that converted less
- Additional conversions
The number of conversions gained as a result of the difference in conversion rate (of the selected goal) between segment A and segment B, or between the selected segment and all other sessions on the same device(s). You can adjust your improvement scenario to estimate additional conversions.
Additional conversions = Improvement scenario (10%, 25%, 50%, 100%, or a custom percentage) × Traffic of SEGMENT N × (Conversion rate of SEGMENT M − Conversion rate of SEGMENT N)
SEGMENT M: the segment that converted better
SEGMENT N: the segment that converted less
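The three formulas above can be sketched in code as follows (SEGMENT M is the better converter, SEGMENT N the worse; all numbers in the example call are hypothetical):

```python
def missed_conversions(traffic_n, cr_m, cr_n):
    # Conversions lost due to the conversion-rate gap between M and N.
    return traffic_n * (cr_m - cr_n)

def missed_revenue(traffic_n, cr_m, cr_n, value_m):
    # value_m: median/average cart of SEGMENT M (e-commerce goal),
    # or the goal's custom value (non-e-commerce goal).
    return missed_conversions(traffic_n, cr_m, cr_n) * value_m

def additional_conversions(traffic_n, cr_m, cr_n, scenario=1.0):
    # scenario: improvement scenario as a fraction (0.10, 0.25, 0.50, 1.0, ...).
    return scenario * missed_conversions(traffic_n, cr_m, cr_n)

# 8,000 sessions in SEGMENT N, conversion rates of 4% vs. 2.5%, $70 median cart:
print(missed_conversions(8_000, 0.04, 0.025))            # 120.0
print(missed_revenue(8_000, 0.04, 0.025, 70))            # 8400.0
print(additional_conversions(8_000, 0.04, 0.025, 0.25))  # 30.0
```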
How is the statistical significance calculated?
Statistical significance serves as a metric of confidence that we employ to determine whether a segment condition is linked to a decrease in user conversion rates.
It's important to note that we can only gauge the correlation between the error or segment condition and the conversion drop. We cannot claim to measure causation, because factors other than the error or segment condition could contribute to a user's failure to convert.
Our statistical significance calculations are based on the Frequentist statistical model, specifically employing z-tests. This model enables us to draw meaningful inferences using probability. When utilizing this model, our goal is to confidently assert, "We are x% certain that the observed difference is not a result of mere chance."
We set our confidence level at 99%. This signifies that we deem a test result statistically significant only when its z-score exceeds 2.33, as established in a standard normal distribution table, commonly referred to as a z-table. At a 99% confidence level:
- A difference flagged as significant would occur by chance at most 1% of the time if there were no genuine difference between the segments.
- Equivalently, there is a 1% chance of incorrectly concluding that the null hypothesis is false when it is actually true (a false positive).
For example, if we were to set the confidence level at 95%, this would result in an average of 1 false positive for every 20 errors or opportunities (20 × 0.05 = 1). At a 99% confidence level, we anticipate only 1 false positive for every 100 errors or opportunities (100 × 0.01 = 1).
This approach allows us to strike a balance between confidently detecting true differences and minimizing the risk of erroneous conclusions.
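A two-proportion z-test of this kind can be sketched as follows. The pooled-variance form shown here is one common choice; the exact formula used by the product is not documented here, and all counts in the example are hypothetical:

```python
from math import sqrt

def z_score(conv_m, traffic_m, conv_n, traffic_n):
    """Pooled two-proportion z-test comparing conversion rates of two segments."""
    p_m = conv_m / traffic_m
    p_n = conv_n / traffic_n
    # Pooled conversion rate under the null hypothesis (no real difference).
    p_pool = (conv_m + conv_n) / (traffic_m + traffic_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / traffic_m + 1 / traffic_n))
    return (p_m - p_n) / se

# Hypothetical counts: 520 conversions out of 10,000 sessions vs. 400 out of 10,000.
z = z_score(520, 10_000, 400, 10_000)
significant = z > 2.33  # threshold from the z-table for a 99% confidence level
print(f"z = {z:.2f}, significant: {significant}")  # z = 4.05, significant: True
```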