Description

This classification model scores all customers in the population on their likelihood to make a purchase within a given window of time.

Model Time Windows

  • PREDICTION_WINDOW: 30 days
  • HISTORICAL_WINDOW: 90 days

The Purchase Propensity model’s default PREDICTION_WINDOW is 30 days, meaning the model predicts the probability that an individual will make a purchase in the next 30 days. If needed, we can assign a custom PREDICTION_WINDOW that fits your business’s unique sales cycle. Each customer’s data is transformed into a custom feature set, with automated training and tuning to deliver the most performant model possible.

The HISTORICAL_WINDOW is the number of days Predictable’s models look back for data. To be scored by the Purchase Propensity model, a customer must have made a purchase, clicked an email, or triggered a web event within the HISTORICAL_WINDOW. The default HISTORICAL_WINDOW for Purchase Propensity is 90 days.
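The eligibility rule above can be sketched as a simple date filter. This is an illustrative reconstruction, not Predictable’s internal implementation; the function name and inputs are assumptions for the example.

```python
from datetime import datetime, timedelta

# Default look-back window from the doc: 90 days.
HISTORICAL_WINDOW_DAYS = 90

def is_scorable(event_dates, as_of, window_days=HISTORICAL_WINDOW_DAYS):
    """Return True if at least one qualifying event (purchase, email
    click, or web event) falls inside the look-back window."""
    cutoff = as_of - timedelta(days=window_days)
    return any(cutoff <= d <= as_of for d in event_dates)

as_of = datetime(2024, 6, 1)
recent = [datetime(2024, 5, 20)]   # inside the 90-day window -> scored
stale = [datetime(2023, 11, 1)]    # outside the window -> not scored
```

A customer with only the stale event would be excluded from the scoring run; the recent event makes a customer eligible.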

After training and tuning, the model is ready to score the active population.

Data Processed:

  • Transaction Data
  • Email Engagement Data
  • Web / Pixel Engagement Data

Output

Model Results

The model’s output is a normalized SCORE that ranks customers from 100 (most likely) to 1 (least likely) to make a purchase. Each customer is assigned to a percentile, creating a (mostly) even distribution that ranks customers against each other. A customer with a score of 99 is considered more likely to make a purchase than a customer with a score of 75, who in turn is more likely to make a purchase than a customer with a score of 25.

This score can then be deployed downstream for a wide variety of marketing use cases.
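One common way to turn raw model probabilities into a 1–100 percentile score is to rank customers and map ranks onto the 1–100 range. This is a sketch of that general technique, not Predictable’s exact method (ties and binning details may differ):

```python
import math

def to_percentile_scores(probabilities):
    """Map raw probabilities to 1-100 percentile ranks:
    100 = most likely to purchase, 1 = least likely."""
    n = len(probabilities)
    order = sorted(range(n), key=lambda i: probabilities[i])
    scores = [0] * n
    for rank, i in enumerate(order, start=1):
        scores[i] = math.ceil(100 * rank / n)
    return scores

probs = [0.05, 0.90, 0.40, 0.75]
scores = to_percentile_scores(probs)  # [25, 100, 50, 75]
```

Because the score is a rank, it compares customers against each other rather than reporting an absolute probability, which keeps the distribution roughly even across scoring runs.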

Returned Values:

  • SCORE: a customer’s likelihood to make a purchase in the PREDICTION_WINDOW
  • CUSTOMER_ID: your unique customer identifiers
  • DATETIME_STAMP: unix timestamp of scoring run
  • MODEL_VERSION: version of platform that scored the run
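A single scoring-run record containing the fields above might look like the following. The field names come from the doc; the values are made-up examples:

```python
import time

# Hypothetical example record from one scoring run (values are illustrative).
record = {
    "SCORE": 87,                          # 1-100 percentile rank
    "CUSTOMER_ID": "cust_00123",          # your unique customer identifier
    "DATETIME_STAMP": int(time.time()),   # unix timestamp of scoring run
    "MODEL_VERSION": "2.4.1",             # platform version that scored the run
}
```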

Model Summary

Additionally, Predictable returns model summary statistics for you to assess how well the model fits your data.

Returned Values:

  • TRAIN_ROC_AUC: a metric used to evaluate the overall predictive power of the model on the training data. This value is between zero and one; the higher the value, the more predictive the model
  • TEST_ROC_AUC: a metric used to evaluate the overall predictive power of the model on the test data. This value is between zero and one; the higher the value, the more predictive the model. This value is expected to be lower than TRAIN_ROC_AUC
  • TRUE_POSITIVES: the percentage of test-set predictions that were correctly predicted positive
  • TRUE_NEGATIVES: the percentage of test-set predictions that were correctly predicted negative
  • FALSE_POSITIVES: the percentage of test-set predictions that were incorrectly predicted positive
  • FALSE_NEGATIVES: the percentage of test-set predictions that were incorrectly predicted negative
  • TIMESTAMP: unix timestamp of training run
  • MODEL_VERSION: version of platform that trained the model
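The four confusion-matrix percentages above can be derived from test-set labels and thresholded predictions. This is a sketch of the standard computation (Predictable’s internals may differ):

```python
def confusion_rates(y_true, y_pred):
    """Return TP/TN/FP/FN as percentages of the test set."""
    n = len(y_true)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "TRUE_POSITIVES": 100 * tp / n,
        "TRUE_NEGATIVES": 100 * tn / n,
        "FALSE_POSITIVES": 100 * fp / n,
        "FALSE_NEGATIVES": 100 * fn / n,
    }

y_true = [1, 0, 1, 0, 1]   # actual purchase outcomes in the window
y_pred = [1, 0, 0, 1, 1]   # model's thresholded predictions
rates = confusion_rates(y_true, y_pred)
```

Note that the four percentages always sum to 100, since every test-set prediction falls into exactly one of the four cells.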

Feature Importance

Finally, Predictable provides the relative importance of the features (inputs) of the model. The higher the score, the more important the feature was to the model. However, it is extremely important to note that this importance does not indicate the direction of the feature’s effect on the likelihood.

Returned Values:

  • FEATURE_NAMES: name of feature
  • FEATURE_VALUES: relative importance of the feature
  • TIMESTAMP: unix timestamp of training run
  • MODEL_VERSION: version of platform that trained the model
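FEATURE_NAMES and FEATURE_VALUES are parallel arrays, so a typical downstream step is to pair them up and sort by importance. The feature names below are made-up examples, not real model features:

```python
# Hypothetical feature-importance output (parallel arrays, as returned).
feature_names = ["days_since_last_purchase", "email_clicks_90d", "web_sessions_90d"]
feature_values = [0.52, 0.31, 0.17]

# Pair names with values and sort from most to least important.
ranked = sorted(zip(feature_names, feature_values), key=lambda nv: nv[1], reverse=True)
# Importance reflects how much the model relied on a feature,
# not whether it pushed the purchase likelihood up or down.
```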