Why do we exclude 'unconcluded' experiments from the Smart Experiment value calculation?
What do we mean by an ‘unconcluded’ experiment?
To calculate the value generated by a Smart Experiment, we need to run experiments within a campaign.
To understand whether the results of our Smart Experiments are reliable, we perform a Repeat Rate Significance Test to check that they are statistically significant.
The Repeat Rate Significance Test tells us that any meaningful difference between the Repeat rate (the rate at which customers come back to make a second purchase) of the Control group and that of the Low Propensity to Refer group is unlikely to be due to chance or sampling bias. We can therefore say with confidence that the alternative offer alone was the reason for any difference in repeat purchase behaviour.
A test is only treated as concluded in the platform if we can determine a statistically significant difference between the rates of the two groups.
If a test has not yet shown a statistically significant difference between the rates of the two groups, it remains ‘unconcluded’.
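To make the concluded/unconcluded distinction concrete, here is a minimal sketch of the kind of check a repeat-rate significance test performs: a standard two-proportion z-test comparing the Control group's repeat rate with the Low Propensity to Refer group's. The function name, group sizes and the 5% threshold are illustrative assumptions, not the platform's exact method.

```python
from math import sqrt, erf

def repeat_rate_test(control_repeats, control_size,
                     low_prop_repeats, low_prop_size,
                     alpha=0.05):
    """Return (p_value, concluded) for the difference in repeat rates."""
    p1 = control_repeats / control_size
    p2 = low_prop_repeats / low_prop_size
    # Pooled repeat rate under the null hypothesis of no real difference
    pooled = (control_repeats + low_prop_repeats) / (control_size + low_prop_size)
    se = sqrt(pooled * (1 - pooled) * (1 / control_size + 1 / low_prop_size))
    z = (p2 - p1) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value, p_value < alpha

# A clear uplift (10% vs 13% repeat rate) reaches significance and concludes...
_, concluded = repeat_rate_test(200, 2000, 260, 2000)
assert concluded
# ...while a small, noisy difference (10% vs 10.5%) stays 'unconcluded'.
_, concluded = repeat_rate_test(200, 2000, 210, 2000)
assert not concluded
```

Until the difference clears the significance threshold, the test stays in the unconcluded state described above.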
Why can’t we use ‘unconcluded’ experiment data to calculate incremental value?
For Repeat Revenue
To calculate the increase in repeat revenue generated by the Low Propensity to Refer group, we first look at the increase in repeat rate generated by showing a repeat purchase offer instead of referral.
If the test is ‘unconcluded’, we can’t see a statistically significant difference between the repeat rate of the Control group and that of the Low Propensity to Refer group. Because we judge any increase or decrease in repeat revenue by comparing the Low Propensity to Refer group against the Control group, we need a statistically significant difference between their repeat rates to do so. Without one, we cannot calculate this value.
When the test does conclude, we will see a statistically significant difference between the rates of the two groups. If the Repeat rate of the Low Propensity to Refer group wins, it indicates that we are generating additional repeat revenue compared to showing everyone referral in the Smart Experiment.
Conversely, if the Repeat rate of the Control group wins, it could indicate that showing referral to everyone would have generated more repeat revenue.
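One plausible way the additional repeat revenue could be estimated once the test has concluded is to apply the uplift in repeat rate to the customers who saw the alternative offer, valued at an average repeat order. This is an illustrative assumption for the sketch below, not the platform's documented formula; the rates and order value are made up.

```python
def additional_repeat_revenue(control_repeat_rate, low_prop_repeat_rate,
                              low_prop_size, average_order_value):
    # Uplift in repeat rate attributable to the alternative offer
    uplift = low_prop_repeat_rate - control_repeat_rate
    # Extra repeat purchases, valued at an average order
    return uplift * low_prop_size * average_order_value

# e.g. a repeat rate rising from 12.5% to 18.75% across 2,000 customers,
# at an average order value of £40:
assert additional_repeat_revenue(0.125, 0.1875, 2000, 40.0) == 5000.0
```

Note that if the Control group had won instead, the uplift (and hence this figure) would be negative.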
For estimated referral revenue missed
When calculating the potential revenue missed by not showing referral to customers (and instead showing them the alternative action, for example to drive repeat purchases), we look at the Share rate of the Control group. We multiply this by the number of customers shown an alternative offer, and by how much an average share is worth for your business (based on all previous referral history).
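The calculation above can be sketched directly. The function name and the figures in the example are illustrative assumptions; the inputs are the three quantities the text describes.

```python
def estimated_referral_revenue_missed(control_shares, control_size,
                                      alternative_offer_customers,
                                      average_share_value):
    # Share rate observed in the Control group, who did see referral
    share_rate = control_shares / control_size
    # Applied to everyone who was shown the alternative offer instead
    return share_rate * alternative_offer_customers * average_share_value

# e.g. a 6.25% Control share rate, 5,000 customers shown the repeat
# offer instead of referral, and an average share worth £12:
missed = estimated_referral_revenue_missed(125, 2000, 5000, 12.0)
assert missed == 3750.0
```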
The Repeat Rate test is still ‘unconcluded’, why do we exclude its results from the overall value calculated?
To calculate the overall value generated by a Smart Experiment, we subtract the estimated referral revenue missed from the repeat revenue generated. If the Repeat Rate Significance Test has not reached significance (i.e. has not concluded), we will not display the Additional Value From Repeats and will only show the Missed Revenue From Repeats part of the Incremental Value page.
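The overall logic can be summarised as a small sketch: the repeats component is only included once the Repeat Rate Significance Test has concluded. Function and key names are illustrative, not the platform's API.

```python
def incremental_value(additional_repeat_revenue,
                      referral_revenue_missed,
                      repeat_test_concluded):
    if not repeat_test_concluded:
        # Unconcluded test: only the missed-revenue component is shown
        return {"missed_revenue_from_repeats": referral_revenue_missed}
    return {
        "additional_value_from_repeats": additional_repeat_revenue,
        "missed_revenue_from_repeats": referral_revenue_missed,
        "overall_value": additional_repeat_revenue - referral_revenue_missed,
    }

# Concluded test: the full incremental value can be reported.
assert incremental_value(6000.0, 2400.0, True)["overall_value"] == 3600.0
# Unconcluded test: no overall value is calculated.
assert "overall_value" not in incremental_value(6000.0, 2400.0, False)
```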