KL Divergence as Marketing Mix Model (MMM) Calibration Metric
How Aryma Labs uses KL divergence to build accurate MMM models
At Aryma Labs, we strive to incorporate information theoretic approaches into our MMM modeling process, because correlation based approaches have their limitations.
Of late, in our client projects, we have been using KL divergence as a calibration metric over and above other calibration metrics like the R squared value, standard error, p value, within-sample MAPE, etc.
I would urge readers to read my post on the difference between calibration and validation.
Any statistician would be irked to find them used interchangeably. 😅
What exactly is KL Divergence?
KL Divergence is a measure of how much one probability distribution differs from another. There is a true distribution P(x) and an estimated distribution Q(x). The smaller the divergence, the more similar the two distributions are to each other.
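In symbols, for discrete distributions it is the expected log ratio of the two:

$$
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sum_{x} P(x)\,\log \frac{P(x)}{Q(x)}
$$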
From an information theory point of view, KL divergence tells us how much information we lose by approximating the true probability distribution with our estimated one.
It might look like a distance metric, but it is not; since it is not symmetric, it cannot be one.
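The asymmetry is easy to check numerically. Here is a quick sketch using scipy and two made-up toy distributions:

```python
import numpy as np
from scipy.special import rel_entr  # elementwise p * log(p / q)

# Two toy discrete distributions over the same three outcomes (values are illustrative)
p = np.array([0.7, 0.2, 0.1])  # "true" distribution P(x)
q = np.array([0.5, 0.3, 0.2])  # approximating distribution Q(x)

kl_pq = rel_entr(p, q).sum()   # D_KL(P || Q), roughly 0.085 nats
kl_qp = rel_entr(q, p).sum()   # D_KL(Q || P), roughly 0.092 nats

print(kl_pq, kl_qp)  # the two values differ, so KL divergence is not symmetric
```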
Video credit: Ari Seff (link in resources)
Why do we use KL Divergence as calibration metric in MMM?
MMM is all about attribution. There is a true value, or true ROI, of each marketing variable, and through MMM our job is to home in on this true ROI.
Our MMM models hence have to be unbiased so that we converge to these true ROI values.
KL divergence provides information on how biased your model is. Think of it as an alarm that goes off when you veer too far from the ground truth.
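As a rough illustration of the idea (a minimal sketch, not Aryma Labs' exact procedure; the binning scheme and the helper function kl_calibration below are assumptions made for this example), one can bin actual and model-fitted KPI values into normalised histograms and compute the divergence between them:

```python
import numpy as np
from scipy.special import rel_entr

def kl_calibration(actual, fitted, bins=10, eps=1e-9):
    """Illustrative KL-based calibration check: D_KL(actual || fitted)
    computed on normalised histograms built over a shared set of bins."""
    edges = np.histogram_bin_edges(np.concatenate([actual, fitted]), bins=bins)
    p, _ = np.histogram(actual, bins=edges)
    q, _ = np.histogram(fitted, bins=edges)
    p = (p + eps) / (p + eps).sum()  # smooth empty bins so the log ratio stays finite
    q = (q + eps) / (q + eps).sum()
    return rel_entr(p, q).sum()

# Toy usage with made-up weekly KPI data: the closer the score is to 0,
# the less the model has veered off from the ground truth.
rng = np.random.default_rng(0)
actual = rng.normal(100, 10, size=104)
fitted = actual + rng.normal(0, 3, size=104)   # stand-in for a model's fitted KPI
print(round(kl_calibration(actual, fitted), 4))
```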
What causes this veering off?
Answer: bias.
Bias is generally an attribute of the estimator.
Statistically speaking, bias is the difference between an estimator's expected value and the true value of the parameter it is estimating.
An estimator is said to be unbiased if its expected value is equal to the parameter we are trying to estimate.
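In symbols, for an estimator $\hat{\theta}$ of a parameter $\theta$:

$$
\operatorname{Bias}(\hat{\theta}) \;=\; \mathbb{E}[\hat{\theta}] - \theta,
\qquad \hat{\theta}\ \text{is unbiased} \iff \mathbb{E}[\hat{\theta}] = \theta.
$$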
How does bias enter your MMM model?
In MMM, multicollinearity is a notorious problem: too many overlapping signals. In MMM you need clear signals so that you can attribute changes in the KPI to the marketing variables.
To overcome this problem, statisticians use regularization (penalized regression) techniques. This inevitably introduces bias into the model.
As a result, your model might be free of multicollinearity, but it will have veered too far from the ground truth.
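The effect is easy to reproduce in a toy simulation (a sketch, not one of our client experiments; the channel names, coefficients and penalty strength below are made up for illustration). With two highly correlated channels and known true coefficients, OLS recovers the truth on average, while ridge regression pulls the estimates away from it:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(42)
n = 156  # three years of weekly observations

# Two strongly correlated "media channels" -> multicollinearity by construction
tv = rng.gamma(2.0, 50.0, size=n)
digital = 0.9 * tv + rng.normal(0, 10, size=n)   # digital spend tracks TV spend

X = np.column_stack([tv, digital])
true_rois = np.array([0.5, 0.3])                  # known ground-truth coefficients
y = X @ true_rois + rng.normal(0, 10, size=n)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10_000.0).fit(X, y)           # deliberately strong penalty to make the effect visible

print("true ROIs      :", true_rois)
print("OLS estimates  :", ols.coef_.round(3))     # unstable under collinearity, but unbiased on average
print("Ridge estimates:", ridge.coef_.round(3))   # pulled away from the true values, i.e. biased
```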
We have carried out multiple experiments and found consistent evidence that regularization introduces bias into the model.
As they say, the cure should not be worse than the disease. Bias is a worse problem than multicollinearity, especially if your goal is true attribution.
In summary:
KL divergence is a great calibration metric, and it has helped increase the accuracy of our MMM models.
Resources:
Image credit: https://www.npr.org/sections/thetwo-way/2013/11/16/245607276/howd-they-do-that-jean-claude-van-dammes-epic-split
Video Credit: Ari Seff (Twitter) - https://x.com/ari_seff/status/1303741288911638530?s=20
Thanks for reading.
For consulting and help with MMM implementation, Click here
Stay tuned for more articles on MMM.