We use generative AI technology to provide you with the most useful information while striving to minimise inaccuracies. We also use reasonable efforts to ensure that personal information is not used in this process, and to the extent we can do so we automatically redact personal information before it reaches any AI. As a precautionary measure, we strongly advise end users not to enter any personal information into any feature that uses AI. For any questions, please contact legal@mention-me.com.
FAQs
What are some of the AI capabilities you offer?
We currently use generative AI to summarise, categorise, and make more actionable the large volumes of verbatim text that comes, if you choose to use this feature, from your customers' share messages and/or NPS feedback. This is designed to help you optimise your advocacy.
We are also experimenting with generative AI to help you optimise your programme, acting as an assistant in your use of our platform.
We use more traditional Machine Learning techniques to make predictions for customer segmentation and personalisation.
What AI technologies and frameworks are used?
We use a variety of AI and ML technologies to power our platform in different ways.
For building predictive models we use ML technologies like XGBoost (Propensity To Refer®) and Keras (Extended Customer Revenue).
For summarisation, sentiment analysis, and other tasks that support our clients' use of the platform, we use LLMs such as ChatGPT.
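For illustration only (this is not our production code), a minimal sketch of how verbatim feedback might be summarised with an LLM, assuming the OpenAI Python SDK; the model name and prompt are placeholders:

    # Hypothetical sketch: summarising customer verbatims with an LLM.
    # Assumes the openai SDK and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    def summarise_feedback(verbatims: list[str]) -> str:
        """Return a short, actionable summary of customer verbatims."""
        joined = "\n".join(f"- {v}" for v in verbatims)
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative; the production model may differ
            messages=[
                {"role": "system",
                 "content": "Summarise the feedback below into key themes and "
                            "suggested actions. Do not repeat personal details."},
                {"role": "user", "content": joined},
            ],
        )
        return response.choices[0].message.content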
What LLM Models/Versions are used?
We use ChatGPT 3.5 and 4, and we adopt later models as they are released.
What controls are in place to prevent inappropriate content being displayed to the user?
We do not display the output of any LLM to end customers; the only people who see generative output are employees of our clients. We also use filters to reduce the likelihood of inappropriate content being included in the inputs, as sketched below.
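As a sketch of one common input-filtering pattern (an assumption for illustration, not a description of our exact implementation), each message could be screened with OpenAI's moderation endpoint before it is processed:

    # Hypothetical sketch: screening inputs with the OpenAI moderation endpoint.
    from openai import OpenAI

    client = OpenAI()

    def filter_messages(messages: list[str]) -> list[str]:
        """Keep only messages the moderation endpoint does not flag."""
        kept = []
        for text in messages:
            result = client.moderations.create(input=text)
            if not result.results[0].flagged:
                kept.append(text)
        return kept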
What measures are in place to ensure the accuracy, performance, and reliability of generative AI-driven functionalities?
We have completed a DPIA on the use of generative LLMs in our features. We are satisfied that, for the use case in question, our use of generative AI is suitably accurate and reliable, and that the risks are far lower than they would be if the generative AI were used in a chat format with end users. We use a variety of caching, filtering, and monitoring methods to achieve the right balance.
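For illustration, caching might look like the following minimal sketch (assumed, not our actual implementation): identical inputs are processed once, which also keeps outputs stable and reviewable.

    # Hypothetical sketch: caching LLM output keyed on a hash of the input.
    import hashlib

    _cache: dict[str, str] = {}

    def cached_summary(text: str, summarise) -> str:
        """Call `summarise` (e.g. an LLM call) at most once per distinct input."""
        key = hashlib.sha256(text.encode()).hexdigest()
        if key not in _cache:
            _cache[key] = summarise(text)
        return _cache[key]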
How do you handle AI model updates and maintain model performance over time?
For our ML, a team of data engineers monitors and manages our pipelines, with weekly training schedules plus monitoring and performance optimisation over time. We also continue to monitor the output of our LLMs and use the latest model features to help us improve.
Provide a brief overview of the machine learning AI capabilities including purpose and benefit to the service/product you provide
We have purpose-built ML models that predict a customer's propensity to refer (powering our Propensity To Refer® feature) and churn/lifetime value (powering the predictive elements of our Extended Customer Revenue feature). Individual models are built for each client using only that client's own data. The specific frameworks are XGBoost and Keras respectively.
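For illustration only, a minimal sketch of training a per-client propensity model with XGBoost; the synthetic data and hyperparameters are placeholders, not our production pipeline:

    # Hypothetical sketch: a per-client propensity-to-refer model with XGBoost.
    import numpy as np
    import xgboost as xgb
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic data standing in for one client's behavioural features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 8))                          # e.g. engagement signals
    y = (X[:, 0] + rng.normal(size=10_000) > 1).astype(int)   # 1 = customer referred

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
    model.fit(X_train, y_train)

    propensities = model.predict_proba(X_test)[:, 1]
    print("ROC AUC:", roc_auc_score(y_test, propensities))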
What measures are in place to ensure the accuracy, absence of bias, performance, and reliability of ML AI-driven functionalities?
We have completed a DPIA covering our use of OpenAI in the processing of share messages and NPS messages for summarising and actioning. We are comfortable that we take all the necessary steps to reduce the risk of PII being processed and to ensure appropriate accuracy and absence of bias.
How do you handle AI model updates and maintain model performance over time?
ML models are retrained at least monthly using the latest data, and some are updated weekly. We monitor performance over time against a holdout group and report on the incremental upside, as sketched below.
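A minimal sketch of what measuring incremental upside against a holdout group can look like (illustrative figures, not real data):

    # Hypothetical sketch: incremental upside versus a holdout group.
    def incremental_upside(treated_revenue: list[float],
                           holdout_revenue: list[float]) -> float:
        """Difference in mean revenue per customer, treated vs. holdout."""
        treated_mean = sum(treated_revenue) / len(treated_revenue)
        holdout_mean = sum(holdout_revenue) / len(holdout_revenue)
        return treated_mean - holdout_mean

    # Illustrative numbers only: per-customer uplift attributable to the feature.
    uplift = incremental_upside([12.0, 0.0, 30.5], [8.0, 0.0, 9.5])
    print(f"Incremental revenue per customer: {uplift:.2f}")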
Is the AI algorithm suitably transparent, i.e., can any part of the service that relies on the outcomes of an AI process (decisions/recommendations) be explained in an intelligible fashion to the user?
We record feature importances for all of our ML models. We use the SHAP framework to understand individual decisions, although this is not currently surfaced to users or calculated regularly.
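As an illustration of how SHAP can explain an individual prediction for a tree-based model (a self-contained sketch with synthetic data, not our production tooling):

    # Hypothetical sketch: per-feature contributions for one prediction via SHAP.
    import numpy as np
    import shap
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1_000, 4))
    y = (X[:, 0] > 0).astype(int)

    model = xgb.XGBClassifier(n_estimators=50).fit(X, y)

    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:1])  # contributions for one customer
    print(shap_values)  # positive values push the predicted propensity up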
How are ethical considerations, such as bias, fairness, and responsible AI practices assured within your service? Do you have an ethics review board or process to ensure adherence to international AI ethics guidelines and standards?
We do not use protected characteristics as features in our models.
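In practice, this kind of exclusion can be as simple as the following sketch (the feature names are illustrative assumptions, not our actual schema):

    # Hypothetical sketch: excluding protected characteristics from model features.
    PROTECTED = {"age", "gender", "ethnicity", "religion"}  # illustrative list

    def allowed_features(all_features: list[str]) -> list[str]:
        """Return only the features that are permitted in training."""
        return [f for f in all_features if f not in PROTECTED]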
What other inherent risks are related to the AI used in the service and how have these been mitigated?
We have covered the risks in our DPIA. We are satisfied that, because of the use cases we have adopted, we carry a low risk.
What metrics do you use to evaluate the performance of AI models, and how do you report them to clients? Can you provide examples of the expected performance of the AI solution for our specific use case?
We use metrics focused on production performance and the accuracy of decision-making. For Propensity To Refer® we report the model's ROC AUC, the predicted propensities in our high and low groups, feature importances, and the incremental revenue generated by the feature.
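As a final sketch, the headline metrics above can be computed as follows (tiny illustrative arrays, not real client data):

    # Hypothetical sketch: the reporting metrics described above.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    y_true = np.array([0, 0, 1, 1, 0, 1])              # 1 = customer referred
    scores = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9])  # predicted propensities

    auc = roc_auc_score(y_true, scores)
    high = scores[scores >= np.median(scores)].mean()  # mean propensity, high group
    low = scores[scores < np.median(scores)].mean()    # mean propensity, low group
    print(f"ROC AUC: {auc:.2f}, high group: {high:.2f}, low group: {low:.2f}")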