In this guide, you'll learn the fundamentals of A/B testing in Mention Me, one of the most effective ways to optimise your referral programme and increase conversions over time.
You’ll discover how:
- A/B testing allows you to test two (or more) variations of an offer to see which performs better across key metrics like share rate, purchase rate, and repeat revenue.
- The platform supports testing of every element in the referral journey, from copy and rewards to design and sharing methods.
- Our built-in Bayesian statistics tool determines if test results are statistically significant, helping you make confident, data-driven decisions.
- Cohort-based A/B testing enables you to present different offers to distinct groups of customers and their friends simultaneously.
- To define a clear hypothesis, choose the right metric, and set up your experiment correctly to ensure valid, reliable results.
- You should focus tests on individual variables (like message style or incentive value) rather than different use cases to avoid conflicting outcomes.
- Continuous testing and maintaining an A/B testing roadmap are key to ongoing programme growth and success.
By the end, you’ll understand how to strategically plan, run, and analyse A/B tests that uncover what truly resonates with your audience, and drive real performance gains from your referral programme.
What it is
A/B testing is a powerful optimisation tool that tests two different offers side by side to increase conversion metrics.
- It’s used to test hypotheses about your customers
- Testing two or more variants of an offer allows you to see their relative performance
- Learnings are carried forward to build performance over time.
On average, you can achieve four times more referrals over the first six months.
A/B testing is used to test hypotheses
Our Engineering team has built a sophisticated platform that lets us A/B test every element of your referral scheme, from rewards and sharing methods to copy and design.
By testing hypotheses, you can better understand customer behaviour and drive performance.
- Does Stu like £ or %?
- Does Andy like Pink or Red?
- Does Simon like Bees or Ants?
A/B testing pits hypotheses concurrently against one another to answer these questions. It takes the guesswork out of optimisation.
Bayesian statistics model
Using statistics gives an unbiased view of your customer cohort.
- Randomised A/B testing allocates customers into a cohort for each variant
- Statistics are used to determine whether the results are representative of the population
We have a Bayesian statistics tool built into our platform.
Bayesian statistics: Based on what we’ve seen so far, how confident are we that the difference between the metrics is significant? This gives us a degree of certainty that the key metric we’re testing has been improved, and this result is representative of the whole population.
The Bayesian tool is in the campaigns section and highlights metrics that have a statistically significant difference.
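The "how confident are we?" question can be sketched with a simple Beta-Bernoulli model: treat each variant's conversion rate as a Beta posterior and estimate the probability that one beats the other by sampling. This is an illustrative Python sketch only, not Mention Me's actual tool, and the conversion counts below are made up.

```python
import random

def prob_b_beats_a(conv_a, total_a, conv_b, total_b,
                   samples=100_000, seed=42):
    """Estimate P(rate_B > rate_A) by sampling from each variant's
    Beta posterior, assuming a uniform Beta(1, 1) prior."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        # Posterior for each variant: Beta(conversions + 1, non-conversions + 1)
        rate_a = rng.betavariate(conv_a + 1, total_a - conv_a + 1)
        rate_b = rng.betavariate(conv_b + 1, total_b - conv_b + 1)
        if rate_b > rate_a:
            wins += 1
    return wins / samples

# Hypothetical example: variant A converted 200/1000 customers, variant B 260/1000
print(prob_b_beats_a(200, 1000, 260, 1000))
```

A result close to 1.0 would indicate high confidence that variant B genuinely outperforms variant A, while a value near 0.5 means the test cannot yet separate the two.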
A/B testing by cohort
One of the most powerful optimisation tools for a referral programme is A/B testing by cohort. ‘By cohort’ means you can present one variation of your offer to one cohort (group), which they and their friends can benefit from, while showing the next cohort a completely different offer at the same time.
Experiment with every element of your referral customer journey, including incentive, design, copy, or imagery, so you can learn what resonates with your customer segments and optimise performance.
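Cohort allocation like this is commonly implemented by hashing a stable customer identifier, so each customer always lands in the same cohort for a given test. The sketch below is a generic illustration of that pattern; the hashing scheme, function names, and variant labels are assumptions, not Mention Me's actual mechanism.

```python
import hashlib

def assign_variant(customer_id: str, test_name: str,
                   variants=("A", "B")) -> str:
    """Deterministically assign a customer to a cohort.

    Hashing (test_name + customer_id) keeps each customer's assignment
    stable for the lifetime of a test, while giving a fresh, independent
    split for every new test you launch."""
    digest = hashlib.sha256(f"{test_name}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same customer always sees the same offer for a given test
print(assign_variant("customer-123", "share-copy-test"))
```

Because the assignment is a pure function of the identifiers, a customer who returns days later (or whose friend clicks their link) is still served the offer belonging to their original cohort.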
How to run a test
To run a robust test, there are a few things to consider:
- What is my hypothesis? Ahead of testing, you should define specifically what you want to prove or disprove. For example:
- “Reducing the share options will improve customer experience and increase share rate”
- “Showing branded copy will increase referral metrics”
- “Displaying an incentive to encourage customers to sign up for an email will drive more sign-ups”
- “Providing a higher incentive will encourage more people to come back and purchase again”
- What metric am I measuring to validate my hypothesis? This will need to be defined when the test is launched and will likely be part of your hypothesis. For example:
- Share rate
- Purchase rate
- AOV
- Repeat revenue
- Number of email sign-ups
- Where in the funnel is this? Do I have enough volume?
- How long will I need to run this to get a significant result?
- Is my experiment set up to test only my hypothesis?
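As a rough guide to the "do I have enough volume?" and "how long will this take?" questions, a standard frequentist power calculation gives a ballpark sample size per variant. This is a generic approximation, not Mention Me's methodology, and the baseline rate and uplift below are made-up figures.

```python
import math

def sample_size_per_variant(p1: float, p2: float,
                            z_alpha: float = 1.96,  # 95% confidence, two-sided
                            z_beta: float = 0.84    # 80% power
                            ) -> int:
    """Approximate customers needed per variant to reliably detect a
    move in a conversion rate from p1 to p2 (normal approximation)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical: detecting a lift in share rate from 5% to 6%
print(sample_size_per_variant(0.05, 0.06))
```

Dividing the result by your typical weekly traffic at that funnel stage gives a rough test duration; note that smaller expected uplifts require dramatically more volume, which is why metrics deep in the funnel (like repeat revenue) take longer to reach significance.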
Testing referral against other use cases isn’t advised, as the messages are so different that their goals are unlikely to align. A message designed to drive email sign-ups has the key goal of growing the email database, whereas a message offering a discount on the next order encourages customers to come back and purchase again, so repeat revenue would be the logical metric to track.
We recommend testing specific features of the message instead. For example:
- Message to encourage sign-ups vs incentive to sign up for the newsletter
- Discount on next order with no minimum spend vs discount on next order with minimum spend
- 7-day validity period for referral offer vs 14-day validity period for referral offer
Keep Testing
- Once you’ve found a winner, you won’t keep improving unless you try a new test
- So it’s a good idea to prepare an A/B testing roadmap!
In this article you can read how to set up your first A/B test.
FAQs
Why Is the Number of Enrolled Customers in Downloaded Reports Typically Lower than in the A/B Testing Section?
The A/B testing section counts all customers who have opted in to the referral programme, including those who have shared their names. The downloaded reports, however, only count customers who have both enrolled and actively clicked through to share their referral links.
Our system automatically enrols every customer in the referral programme immediately after they make a purchase, and these figures are visible on the platform. The downloadable reports, by contrast, include only customers who have taken the additional step of clicking on the referral pop-up. This distinction matters because it lets you use the report data for purposes such as selecting a winner for a referral competition, or sending targeted emails reminding referrers to share their links more widely.
If you need a comprehensive list of all customers enrolled in your referral offer, your Client Success Manager can assist. To learn more about automatically enrolling customers post-purchase, please click here for further details.