Churn
Description
This classification model scores all customers on their likelihood to churn out of your brand’s active customer population. Churn is defined as a customer not making a purchase within a specified window.
Model Time Windows
- PREDICTION_WINDOW: 180 days
- HISTORICAL_WINDOW: 365 days
Predictable’s default value for the prediction window (known as the PREDICTION_WINDOW) is 180 days, meaning the model predicts the probability that an individual will churn within the next 180 days. If needed, we can assign a custom PREDICTION_WINDOW that fits your brand’s unique sales cycle. Each customer’s data is transformed into a custom feature set, with automated training and tuning to deliver the most performant model possible.
The HISTORICAL_WINDOW is how many days Predictable’s models look back for data. To be scored by the Churn model, a customer must have made a purchase within the HISTORICAL_WINDOW. The default value for Churn’s HISTORICAL_WINDOW is 365 days.
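The two windows play different roles: the HISTORICAL_WINDOW gates which customers are scored at all, while the PREDICTION_WINDOW defines the period the prediction covers. A minimal sketch of the eligibility check, assuming last-purchase dates are available per customer (`is_scorable` is a hypothetical helper, not part of the platform):

```python
from datetime import datetime, timedelta

# Default windows from this page; both are configurable per brand.
PREDICTION_WINDOW = 180   # days ahead the model predicts over
HISTORICAL_WINDOW = 365   # days back the model looks for data

def is_scorable(last_purchase: datetime, as_of: datetime) -> bool:
    """A customer is scored only if they purchased within the HISTORICAL_WINDOW."""
    return as_of - last_purchase <= timedelta(days=HISTORICAL_WINDOW)

as_of = datetime(2024, 6, 1)
print(is_scorable(datetime(2023, 9, 1), as_of))   # purchased ~9 months ago -> True
print(is_scorable(datetime(2022, 12, 1), as_of))  # purchased ~18 months ago -> False
```

Customers who fall outside the HISTORICAL_WINDOW are treated as already lapsed rather than at risk of churning, so they receive no score.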
After training and tuning, the model is ready to score the active population.
Data Processed:
- Transaction Data
- Email Engagement Data
- Web / Pixel Engagement Data
Output
Model Results
The model’s output is a normalized SCORE that ranks customers from 100 (most likely) to 1 (least likely) to churn out of your business’s active customer population. Each customer is assigned a percentile, creating a (mostly) even distribution that ranks customers against each other. A customer with a score of 99 is considered more likely to churn than a customer with a score of 75, who in turn is more likely to churn than a customer with a score of 25.
This score can then be deployed downstream for a wide variety of marketing use cases.
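The percentile ranking above can be sketched as follows. This is an illustration of how raw churn probabilities map to a 1–100 SCORE; the exact normalization Predictable uses is not specified here:

```python
import math

def to_percentile_scores(probabilities):
    """Rank customers by churn probability and bucket into percentiles 1-100."""
    n = len(probabilities)
    order = sorted(range(n), key=lambda i: probabilities[i])
    scores = [0] * n
    for rank, i in enumerate(order, start=1):
        scores[i] = math.ceil(100 * rank / n)  # least likely -> 1, most likely -> 100
    return scores

probs = [0.05, 0.92, 0.40, 0.75]
print(to_percentile_scores(probs))  # [25, 100, 50, 75]
```

Because the score is a rank rather than a raw probability, it compares customers against each other within one scoring run, not across runs or across brands.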
Returned Values:
- SCORE: a customer’s likelihood to churn in the PREDICTION_WINDOW
- CUSTOMER_ID: your unique customer identifiers
- DATETIME_STAMP: unix timestamp of scoring run
- MODEL_VERSION: version of platform that scored the run
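A hypothetical downstream use of these returned values: selecting the highest-risk customers for a retention campaign. The rows and the SCORE cutoff below are illustrative, not actual platform output:

```python
# Each scored row carries the fields listed under Returned Values.
rows = [
    {"CUSTOMER_ID": "c-001", "SCORE": 97, "DATETIME_STAMP": 1717200000, "MODEL_VERSION": "1.4"},
    {"CUSTOMER_ID": "c-002", "SCORE": 42, "DATETIME_STAMP": 1717200000, "MODEL_VERSION": "1.4"},
    {"CUSTOMER_ID": "c-003", "SCORE": 91, "DATETIME_STAMP": 1717200000, "MODEL_VERSION": "1.4"},
]

# SCORE >= 91 approximates the top decile of the 1-100 percentile ranking.
at_risk = [r["CUSTOMER_ID"] for r in rows if r["SCORE"] >= 91]
print(at_risk)  # ['c-001', 'c-003']
```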
Model Summary
Additionally, Predictable returns model summary statistics for you to assess how well the model fits your data.
Returned Values:
- TRAIN_ROC_AUC: a metric used to evaluate the overall predictive power of the model on the training data. This value is between zero and one; the higher, the more predictive the model
- TEST_ROC_AUC: a metric used to evaluate the overall predictive power of the model on the test data. This value is between zero and one; the higher, the more predictive the model. It is expected that this value will be less than the TRAIN_ROC_AUC
- TRUE_POSITIVES: the percentage of accurate positive predictions on the test set
- TRUE_NEGATIVES: the percentage of accurate negative predictions on the test set
- FALSE_POSITIVES: the percentage of inaccurate positive predictions on the test set
- FALSE_NEGATIVES: the percentage of inaccurate negative predictions on the test set
- TIMESTAMP: unix timestamp of training run
- MODEL_VERSION: version of platform that trained the model
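Because the four confusion-matrix values are reported as percentages of the test set, they sum to 100 and standard metrics can be derived from them. A sketch with hypothetical values (not real model output):

```python
# Hypothetical confusion-matrix percentages from a model summary.
TRUE_POSITIVES = 12.0
TRUE_NEGATIVES = 70.0
FALSE_POSITIVES = 8.0
FALSE_NEGATIVES = 10.0

accuracy = (TRUE_POSITIVES + TRUE_NEGATIVES) / 100           # share of all predictions correct
precision = TRUE_POSITIVES / (TRUE_POSITIVES + FALSE_POSITIVES)  # of flagged churners, how many churned
recall = TRUE_POSITIVES / (TRUE_POSITIVES + FALSE_NEGATIVES)     # of real churners, how many were caught

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
# accuracy=0.82 precision=0.60 recall=0.55
```

For churn use cases, recall is often the metric to watch: it tells you what fraction of customers who actually churned the model would have flagged in time.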
Feature Importance
Finally, Predictable provides the relative importance of each feature (input) of the model. The higher the value, the more important the feature was to the model. Note, however, that importance does not indicate the direction of a feature’s effect on churn likelihood.
Returned Values:
- FEATURE_NAMES: name of feature
- FEATURE_VALUES: relative importance of the feature
- TIMESTAMP: unix timestamp of training run
- MODEL_VERSION: version of platform that trained the model
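The two parallel lists pair up by position, so the most influential features can be listed by sorting on importance. The feature names and values below are illustrative, not actual model output:

```python
# Hypothetical feature-importance output; names and values are made up.
FEATURE_NAMES = ["days_since_last_purchase", "email_open_rate", "avg_order_value"]
FEATURE_VALUES = [0.41, 0.33, 0.26]

# Pair names with values and list the most important features first.
ranked = sorted(zip(FEATURE_NAMES, FEATURE_VALUES), key=lambda p: p[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")

# Importance measures magnitude only -- it does not say whether a high
# feature value raises or lowers a customer's churn likelihood.
```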