There is no 'Uncertainty' in MMM.
Why Bayesian MMM modelers are wrong in their outlook towards marketing reality
Bayesian MMM practitioners often say 'we quantify uncertainty'.
However, I am about to tell you that there is no uncertainty.
Surprised?
Bayesians invent a concoction of irrelevant and complex techniques that simply evaporates when it touches reality.
As a marketer, you need to ask the question -
Is the uncertainty in the phenomenon that we are studying?
or
Is the uncertainty in the technique developed to study the phenomenon?
Bayesian MMM vendors assume the phenomenon to be 'wavering' and hence justify the concept of 'quantifying uncertainty'.
But is their understanding correct? No.
In MMM, there is no uncertainty, at least in the phenomenon.
The sales are already realized, and a specific, non-wavering set of marketing numbers (spends) resulted in those sales. The marketing spend numbers don't keep dancing around like whack-a-mole.
Our goal is to find the precise channel contributions that led to those sales. In a way, this aligns with the frequentist outlook of a fixed, unknown parameter. As a result, there is also only one true MMM.
Now we have clearly established that the phenomenon has no uncertainty.
Let's move on to the process and technique used by Bayesians, and ask whether there is uncertainty there.
The answer is yes, and it stems from their wrong outlook. There are two types of uncertainty: aleatoric and epistemic.
Aleatoric uncertainty is uncertainty in the phenomenon itself. We have just established that the MMM phenomenon is fixed, so aleatoric uncertainty does not apply.
This brings us to epistemic uncertainty.
Epistemic uncertainty is caused by a lack of knowledge. Good knowledge infused into the model generally reduces epistemic uncertainty.
But here is the catch.
In the Bayesian framework, a bad prior increases the epistemic uncertainty.
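To see the mechanics, here is a minimal sketch, assuming a conjugate normal model with known noise variance and synthetic data (all names and numbers are made up, and the informative prior is centered near the truth purely for illustration). With little knowledge infused (a vague prior), the posterior for the TV coefficient stays wide; with an informative prior, it tightens:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic weekly data: standardized TV spend driving sales (made-up numbers).
n = 20
X = rng.normal(size=(n, 1))          # standardized TV spend
sigma = 1.0                          # noise std, assumed known for a closed form
y = 0.5 * X[:, 0] + rng.normal(scale=sigma, size=n)

def posterior_beta(prior_mean, prior_var):
    """Posterior mean/std of the TV coefficient under a N(prior_mean, prior_var) prior."""
    post_var = 1.0 / (X.T @ X / sigma**2 + 1.0 / prior_var)
    post_mean = post_var @ (X.T @ y / sigma**2 + prior_mean / prior_var)
    return post_mean.item(), np.sqrt(post_var).item()

# Vague prior (little knowledge) vs informative prior (knowledge infused):
for label, m0, v0 in [("vague", 0.0, 100.0), ("informative", 0.5, 0.01)]:
    mean, sd = posterior_beta(m0, v0)
    print(f"{label:11s} prior -> beta = {mean:.2f} +/- {1.96 * sd:.2f}")
```

Notice what the interval width is measuring here: the modeler's knowledge, not anything wavering in the phenomenon itself.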
So now we are in the room of 'uncertainty'.
Bayesians then try to solve this self-created problem by creating another problem: 'quantifying uncertainty'.
I call it a problem because their act of 'quantifying uncertainty' serves no practical purpose.
Example: the probability that the contribution of TV is between 3% and 7% is 70%.
I know some of you would go, 'wow, this is so much more intuitive than a confidence interval'. But I will argue that, from a practical standpoint, the above is not saying anything useful.
If you are a decision maker, you still don't know whether the true contribution is 3.2%, 4.5%, or 7%.
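To make this concrete, here is a minimal sketch, assuming hypothetical posterior draws standing in for a real sampler's output (all numbers are made up). It reproduces the vendor's statement and then shows that the same posterior offers no single number to act on:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of TV's contribution share, standing in for
# what an MCMC sampler in a Bayesian MMM would return (numbers are made up).
tv_share = rng.normal(loc=0.05, scale=0.019, size=4000)

# The vendor's statement, reproduced from the draws:
p_in_range = np.mean((tv_share > 0.03) & (tv_share < 0.07))
print(f"P(3% < TV contribution < 7%) = {p_in_range:.0%}")

# But a budget reallocation needs one number, and the posterior alone
# is equally happy with any of these:
for q in (10, 50, 90):
    print(f"{q}th percentile: {np.percentile(tv_share, q):.1%}")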
Your Bayesian vendor will sound smart by saying 'we quantify uncertainty', but what clarity have you got for decision making? The vendor will hide behind the remaining (100 - P)% and say, 'hey, look, there was also a 30% probability that the contribution is not in that range.'
So either way you can't hold a Bayesian MMM vendor culpable.
If they give you point estimates, then they are giving you frequentist results, at which point they will cleverly say 'our Bayesian MMM has good frequentist properties' 😅.
In MMM, there is no uncertainty, at least not in the phenomenon. If you really want to know how good your MMM model's build is, a confidence interval answers that question much more robustly.
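For comparison, here is a minimal frequentist sketch, assuming statsmodels and synthetic data (channel names and all numbers are hypothetical). The channel effect is treated as fixed and unknown, and the confidence interval reports how precisely the build has pinned it down:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic weekly spends and sales; channel names and numbers are made up.
n = 104
tv = rng.gamma(2.0, 50.0, size=n)
search = rng.gamma(2.0, 30.0, size=n)
sales = 500 + 1.8 * tv + 2.5 * search + rng.normal(scale=80.0, size=n)

X = sm.add_constant(np.column_stack([tv, search]))
fit = sm.OLS(sales, X).fit()

# 95% confidence intervals for the fixed (but unknown) channel effects.
ci = fit.conf_int(alpha=0.05)
print(f"TV effect:     {fit.params[1]:.2f}  95% CI [{ci[1, 0]:.2f}, {ci[1, 1]:.2f}]")
print(f"Search effect: {fit.params[2]:.2f}  95% CI [{ci[2, 0]:.2f}, {ci[2, 1]:.2f}]")
```

A narrow interval means the build has pinned the effect down; a wide one means you need better data or a better specification, not a philosophy of wavering phenomena.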
Thanks for reading.
For consulting and help with MMM implementation, Click here