KL Divergence and Chebyshev's Inequality - The newest calibration metrics in Marketing Mix Modeling (MMM)
Demonstrating the efficacy of KL Divergence and Chebyshev's Inequality on real client data
Marketing Mix Modeling has a bad rap for being more art than science. Historically, there were only a few ways to calibrate an MMM model and even fewer (sometimes none) to validate it.
At Aryma Labs, we resolved to make MMM more scientific. Our goal is to put robust processes in place through which one can clearly discern whether an MMM model is accurate or not.
Most MMM vendors rely only on the R-squared value, in-sample MAPE, p-values and standard errors for MMM calibration.
These metrics are simply not enough.
So we did what we do best - innovate.
Through extensive R&D, we found that KL Divergence and Chebyshev's Inequality make excellent calibration metrics for MMM.
We even demonstrated this on a Robyn dataset and published a paper on it!
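As a rough intuition for how these two metrics can be used: KL Divergence measures how far the distribution of predicted sales drifts from the distribution of actual sales, while Chebyshev's Inequality gives a distribution-free lower bound on how often residuals should fall within k standard deviations of their mean. Below is a minimal illustrative sketch in Python; the binning scheme, the choice of k = 2 and the simulated data are assumptions for illustration only, not the exact procedure used in our client work.

```python
import numpy as np

def kl_divergence(actual, predicted, bins=10):
    """Approximate KL(actual || predicted) by binning both series
    into histograms over a common range."""
    lo = min(actual.min(), predicted.min())
    hi = max(actual.max(), predicted.max())
    p, _ = np.histogram(actual, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(predicted, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12                      # avoid log(0) / division by zero
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()  # normalise to probability masses
    return float(np.sum(p * np.log(p / q)))

def chebyshev_coverage(actual, predicted, k=2.0):
    """Share of residuals within k standard deviations of their mean.
    Chebyshev guarantees at least 1 - 1/k^2 for ANY distribution, so a
    well-calibrated model should comfortably clear that bound."""
    resid = actual - predicted
    within = np.abs(resid - resid.mean()) <= k * resid.std()
    return within.mean(), 1 - 1 / k**2

# Toy usage with simulated weekly sales (illustrative only)
rng = np.random.default_rng(42)
actual = rng.normal(100, 15, size=104)
predicted = actual + rng.normal(0, 5, size=104)

print("KL divergence:", kl_divergence(actual, predicted))
cov, bound = chebyshev_coverage(actual, predicted, k=2.0)
print(f"Within 2 sd: {cov:.2%} (Chebyshev lower bound: {bound:.2%})")
```

A lower KL Divergence means the model reproduces the shape of the actual sales distribution more faithfully, and empirical coverage well above the Chebyshev bound indicates the residuals are not unusually heavy-tailed.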
But the next big question remained: would this work on real client data?
We are happy to report that we recently completed an MMM project with a large Fortune 500 company, in which we used KL Divergence and Chebyshev's Inequality as calibration metrics.
The Methodology:
We built three separate models, each by a different data scientist, to avoid any bias.
We built one model using our proprietary technique - Aryma Labs Model E.
We similarly built two Robyn models - one with Weibull CDF adstock and the other with Weibull PDF adstock.
To ensure a fair comparison, we took care to specify the Robyn models properly.
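For readers unfamiliar with the two adstock options: broadly, a Weibull CDF adstock produces a flexible but monotonically decaying carry-over of media effect, while a Weibull PDF adstock lets the effect peak some weeks after the spend. The sketch below is a simplified, generic illustration of Weibull-shaped adstock weights, not Robyn's exact implementation; the shape and scale values and the 13-week lag window are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_adstock_weights(max_lag, shape, scale, use_pdf=False):
    """Lag weights for an adstock transform shaped by a Weibull distribution.
    use_pdf=False -> weights follow the survival function (monotone decay,
                     analogous in spirit to a CDF-based adstock).
    use_pdf=True  -> weights follow the normalised PDF (effect can peak
                     at a lag greater than zero)."""
    lags = np.arange(max_lag)
    if use_pdf:
        w = weibull_min.pdf(lags + 1, c=shape, scale=scale)
    else:
        w = weibull_min.sf(lags + 1, c=shape, scale=scale)  # 1 - CDF
    return w / w.sum()  # normalise so the carried-over effect sums to 1

def apply_adstock(spend, weights):
    """Convolve a media spend series with the lag weights (causal)."""
    return np.convolve(spend, weights)[: len(spend)]

# Toy usage: 20 weeks of spend with a single burst in week 3
spend = np.zeros(20)
spend[3] = 100.0
cdf_style = apply_adstock(spend, weibull_adstock_weights(13, shape=0.9, scale=4))
pdf_style = apply_adstock(spend, weibull_adstock_weights(13, shape=2.0, scale=4))
print(np.round(cdf_style, 1))  # decays steadily from week 3
print(np.round(pdf_style, 1))  # peaks a couple of weeks after the spend
```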
The Evaluation:
All three models were evaluated on four metrics (a computation sketch for Decomp RSSD follows the list):
- KL Divergence
- Chebyshev's Inequality
- R squared value
- Decomp RSSD
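KL Divergence and the Chebyshev coverage check can be computed as sketched earlier in this article. Decomp RSSD is the metric Robyn uses to measure how far each channel's share of modelled effect sits from its share of spend; a large value signals a business-implausible decomposition. Below is a minimal sketch, assuming the usual root-sum-of-squared-differences definition; the channel shares are made up for illustration, and an R-squared helper is included for completeness.

```python
import numpy as np

def decomp_rssd(spend_share, effect_share):
    """Decomposition RSSD: root sum of squared distances between each
    channel's share of spend and its share of modelled effect."""
    spend_share = np.asarray(spend_share, dtype=float)
    effect_share = np.asarray(effect_share, dtype=float)
    return float(np.sqrt(np.sum((effect_share - spend_share) ** 2)))

def r_squared(actual, predicted):
    """Coefficient of determination of the fitted sales series."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# Toy usage: three hypothetical channels, shares summing to 1
spend_share  = [0.50, 0.30, 0.20]
effect_share = [0.45, 0.35, 0.20]
print("Decomp RSSD:", decomp_rssd(spend_share, effect_share))  # ~0.071
```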
Results:
The Aryma Labs model came out on top on all four metrics.
Conclusion:
The Aryma Labs model doing well was not a surprise to us - we always build accurate models. But we wanted to benchmark our models against more robust metrics like KL Divergence and Chebyshev's Inequality.
We are glad that our idea of using KL Divergence and Chebyshev's Inequality for model calibration stands vindicated.
These two metrics are now part of Aryma Labs' arsenal for evaluating MMM models. With this, clients can have more confidence in MMM models from Aryma Labs, since they are accurate and calibrated on robust metrics.
Thanks for reading.
For consulting and help with MMM implementation, Click here
Stay tuned for more articles on MMM.