What segments are comparable?
By default, a selected segment is compared to 'All other sessions' on the same device(s). However, this comparison is not always the most relevant, since it includes users who could be anywhere in their navigational journey.
Best practice: If you are analyzing a segment of users who clicked a certain zone on a page, compare them to users who visited that same page but did not click the zone, rather than to the default 'All other sessions'.
Does the Impact widget show the cause of low conversion rates?
It is a start! However, even if you see a strong correlation between, say, a slow load time and a low conversion rate, keep in mind that the correlated factor is not necessarily the root cause.
Best practice: Check for other factors that could explain the low conversion rate (e.g., a broken CTA).
Can external factors lead to counterintuitive insights?
Yes. For instance, users who rage-click might convert better than users who do not. Why? Rage clicks indicate a motivation to buy, hence the multiple attempts. But of course, adding rage clicks to users' journeys will not actually yield higher conversion rates or revenue.
If Segment A is underperforming Segment B, should I just apply Segment B's campaign to Segment A to boost conversion?
Maybe it will help performance, but don't assume that shifting all Segment A users to Segment B's campaign will result in performance parity. Why not? Not 100% of previously non-converting Segment A users would have converted if they had been acquired via the Segment B campaign. (To avoid generating insights based on this unrealistic outcome, the Impact widget displays the per-session revenue win/loss, not the total revenue win/loss.)
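As a minimal illustration of that distinction (all figures below are hypothetical), the per-session view normalizes each segment's revenue by its traffic before comparing:

```python
# Hypothetical traffic and revenue figures, for illustration only.
sessions_a, revenue_a = 10_000, 30_000.0  # Segment A
sessions_b, revenue_b = 2_000, 14_000.0   # Segment B

per_session_a = revenue_a / sessions_a  # 3.00 per session
per_session_b = revenue_b / sessions_b  # 7.00 per session

# The widget reports this per-session gap (4.00 here), rather than
# sessions_a * gap, because not every Segment A session would convert
# under Segment B's campaign.
per_session_gap = per_session_b - per_session_a
print(per_session_gap)  # 4.0
```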
How is the revenue opportunity (additional revenue) calculated?
We display an estimate of the revenue that could be generated if users from the losing segment converted at the rate of the winning segment. In other words, we show a projected revenue based on the difference in conversion rate between the two segments and the value per conversion of the outperforming segment.
To illustrate this, imagine the following scenarios:
1. You compare a chosen Segment against the default 'All other sessions'.
- The selected Segment outperforms the default 'All other sessions' segment.
The revenue opportunity is calculated from the traffic of the lower performing default segment ('All other sessions'), multiplied by the difference in conversion rate between the two segments, multiplied by the median cart's (or the average cart's/custom value's) revenue generated in the higher performing selected Segment:
Additional revenue (e-commerce goal) = The traffic of 'All other sessions' X (Conversion Rate of the selected Segment - Conversion Rate of 'All other sessions') X Median Cart (or Average Cart) of the selected Segment
Additional revenue (non e-commerce goal) = The traffic of 'All other sessions' X (Conversion Rate of the selected Segment - Conversion Rate of 'All other sessions') X Custom Value of the Goal
- The selected Segment underperforms the default 'All other sessions' segment.
The revenue opportunity is calculated from the traffic of the lower performing selected Segment, multiplied by the difference in conversion rate between the two segments, multiplied by the median cart's (or the average cart's/custom value's) revenue generated in the higher performing default segment ('All other sessions'):
Additional revenue (e-commerce goal) = The traffic of the selected Segment X (Conversion Rate of 'All other sessions' - Conversion Rate of the selected Segment) X Median Cart (or Average Cart) of 'All other sessions'
Additional revenue (non e-commerce goal) = The traffic of the selected Segment X (Conversion Rate of 'All other sessions' - Conversion Rate of the selected Segment) X Custom Value of the Goal
2. You compare a chosen Segment A against another chosen Segment B.
- Segment A outperforms Segment B.
The revenue opportunity is calculated from the traffic of the lower performing segment (Segment B), multiplied by the difference in conversion rate between the two segments, multiplied by the median cart's (or the average cart's/custom value's) revenue generated in the higher performing segment (Segment A):
Additional revenue (e-commerce goal) = The traffic of Segment B X (Conversion Rate of Segment A - Conversion Rate of Segment B) X Median Cart (or Average Cart) of Segment A
Additional revenue (non e-commerce goal) = The traffic of Segment B X (Conversion Rate of Segment A - Conversion Rate of Segment B) X Custom Value of the Goal
- Segment A underperforms Segment B.
The revenue opportunity is calculated from the traffic of the lower performing segment (Segment A), multiplied by the difference in conversion rate between the two segments, multiplied by the median cart's (or the average cart's/custom value's) revenue generated in the higher performing segment (Segment B):
Additional revenue (e-commerce goal) = The traffic of Segment A X (Conversion Rate of Segment B - Conversion Rate of Segment A) X Median Cart (or Average Cart) of Segment B
Additional revenue (non e-commerce goal) = The traffic of Segment A X (Conversion Rate of Segment B - Conversion Rate of Segment A) X Custom Value of the Goal
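All four cases reduce to the same pattern: traffic of the losing segment X conversion-rate gap X value per conversion of the winning segment. Here is a minimal sketch of that calculation (the function name and the figures are illustrative, not the product's actual code):

```python
def additional_revenue(traffic_lower, cr_lower, cr_higher, value_per_conversion):
    """Projected revenue if the lower performing segment converted at the
    higher performing segment's rate.

    value_per_conversion: Median Cart (or Average Cart) of the higher
    performing segment for an e-commerce goal, or the goal's custom
    value for a non e-commerce goal.
    """
    return traffic_lower * (cr_higher - cr_lower) * value_per_conversion

# Scenario 2, Segment A outperforms Segment B (illustrative numbers):
# Segment B has 50,000 sessions at a 2% conversion rate; Segment A
# converts at 3.5% with an 80.00 median cart.
print(additional_revenue(50_000, 0.02, 0.035, 80.0))  # approximately 60000.0
```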
How are missed conversions, missed revenue, and additional conversions defined and calculated?
- Missed conversions
The number of conversions lost as a result of the difference in conversion rate (of the selected goal) between Segment A and Segment B, or between the selected segment and all other sessions on the same device(s). (All three formulas in this list are worked through in the sketch after the list.)
Missed conversions = The traffic of SEGMENT N X (Conversion Rate of SEGMENT M - Conversion Rate of SEGMENT N)
SEGMENT M: the segment that converted better
SEGMENT N: the segment that converted less
- Missed revenue
The result of multiplying the number of missed conversions (of the selected goal) by the value per conversion of the higher performing segment.
Missed Revenue (e-commerce goal) = The traffic of SEGMENT N X (Conversion Rate of SEGMENT M - Conversion Rate of SEGMENT N) X Median Cart (or Average Cart) of SEGMENT M
Missed Revenue (non e-commerce goal) = The traffic of SEGMENT N X (Conversion Rate of SEGMENT M - Conversion Rate of SEGMENT N) X Custom Value of the Goal
SEGMENT M: the segment that converted better
SEGMENT N: the segment that converted less
- Additional conversions
The number of conversions gained as a result of the difference in conversion rate (of the selected goal) between Segment A and Segment B, or between the selected segment and all other sessions on the same device(s). You can adjust the improvement scenario to estimate additional conversions.
Additional conversions = Improvement scenario (10%, 25%, 50%, 100%, or a custom percentage) X The traffic of SEGMENT N X (Conversion Rate of SEGMENT M - Conversion Rate of SEGMENT N)
SEGMENT M: the segment that converted better
SEGMENT N: the segment that converted less
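As referenced above, here is a minimal sketch of the three formulas side by side (function names and figures are illustrative only):

```python
def missed_conversions(traffic_n, cr_m, cr_n):
    # SEGMENT M converted better; SEGMENT N converted less.
    return traffic_n * (cr_m - cr_n)

def missed_revenue(traffic_n, cr_m, cr_n, value_per_conversion_m):
    # value_per_conversion_m: Median Cart (or Average Cart) of SEGMENT M
    # for an e-commerce goal, or the goal's custom value otherwise.
    return missed_conversions(traffic_n, cr_m, cr_n) * value_per_conversion_m

def additional_conversions(improvement, traffic_n, cr_m, cr_n):
    # improvement: 0.10, 0.25, 0.50, 1.00, or a custom fraction.
    return improvement * missed_conversions(traffic_n, cr_m, cr_n)

# Illustrative numbers: 20,000 sessions in SEGMENT N, conversion rates
# of 4% (SEGMENT M) vs 2.5% (SEGMENT N), 75.00 median cart in SEGMENT M.
print(missed_conversions(20_000, 0.04, 0.025))            # ~300.0
print(missed_revenue(20_000, 0.04, 0.025, 75.0))          # ~22500.0
print(additional_conversions(0.25, 20_000, 0.04, 0.025))  # ~75.0
```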
How is the statistical significance calculated?
Statistical significance is the confidence metric we use to determine whether a segment condition is linked to a decrease in user conversion rates.
It's important to note that we can only measure the correlation between the error/segment condition and the conversion drop. We cannot claim to measure causation, because factors other than the error/segment condition could contribute to a user's failure to convert.
Our statistical significance calculations are based on the Frequentist statistical model, specifically z-tests. This model lets us draw meaningful inferences using probability: our goal is to confidently assert, "We are x% certain that the observed difference is not the result of mere chance."
We set our confidence level at 99%. This means we deem a test result statistically significant only when the z-statistic exceeds 2.33, the critical value for a 99% confidence level in a standard normal distribution table, commonly referred to as a z-table.
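For intuition, here is a minimal sketch of such a z-test for two conversion rates. The pooled standard error shown is one common formulation and an assumption on our part; the exact computation in the product may differ, and the figures are illustrative:

```python
from math import sqrt

def z_statistic(conversions_m, sessions_m, conversions_n, sessions_n):
    """Two-proportion z-test for a difference in conversion rates,
    using a pooled standard error (an assumption; other variants exist)."""
    p_m = conversions_m / sessions_m
    p_n = conversions_n / sessions_n
    p_pooled = (conversions_m + conversions_n) / (sessions_m + sessions_n)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / sessions_m + 1 / sessions_n))
    return (p_m - p_n) / se

# Illustrative numbers: 4% vs 3% conversion over 10,000 sessions each.
z = z_statistic(400, 10_000, 300, 10_000)
print(z, z > 2.33)  # ~3.85 True -> significant at the 99% level
```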
At Contentsquare, our analyses operate with a 99% confidence level. This means:
- When there is no real difference (the null hypothesis is true), we will correctly conclude so 99% of the time.
- There is a 1% chance of incorrectly concluding that a difference exists when it does not, i.e., a false positive.
For example, if we were to set the confidence level at 95%, we would expect about 1 false positive for every 20 errors or opportunities (20 x 0.05 = 1). At a 99% confidence level, we expect only about 1 false positive for every 100 errors or opportunities (100 x 0.01 = 1).
This approach allows us to strike a balance between confidently detecting true differences and minimizing the risk of erroneous conclusions.