One of the criticisms Bayesians usually raise against Frequentist methods is that "they give only point estimates".
And that Bayesian methods help "quantify uncertainty".
The funny part is that Bayesians don't realize that pointing to a problem is not solving the problem, especially if part of the problem is of their own creation.
There are two kinds of uncertainty.
Aleatoric uncertainty: the inherent randomness in the data or the process that generated it. It cannot be reduced by adding knowledge to the model.
Epistemic uncertainty: uncertainty caused by a lack of knowledge. Infusing good knowledge into the model, e.g. through informative priors, generally reduces epistemic uncertainty.
In the Bayesian framework, a bad or uninformative prior increases epistemic uncertainty.
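The effect of the prior on epistemic uncertainty can be seen in a minimal sketch using a conjugate normal-normal update (all numbers here are hypothetical, chosen only for illustration): a vague prior leaves a wider posterior than an informative one, even on the same data.

```python
import numpy as np

def posterior_normal(prior_mean, prior_var, data, noise_var):
    """Conjugate normal-normal update: returns posterior mean and variance
    for the mean of a normal likelihood with known noise variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / noise_var)
    return post_mean, post_var

# Hypothetical data: 10 noisy observations of some true effect around 2.0
data = np.random.default_rng(0).normal(2.0, 1.0, size=10)

# Vague (near-uninformative) prior vs an informative prior
_, var_vague = posterior_normal(0.0, 100.0, data, 1.0)
_, var_informed = posterior_normal(2.0, 0.5, data, 1.0)

print(var_vague > var_informed)  # True: the informative prior yields a tighter posterior
```

The posterior variance shrinks whenever the prior variance shrinks, which is exactly the sense in which good prior knowledge reduces epistemic uncertainty.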
Increasing Entropy
Bayesians might claim to 'quantify uncertainty' by making statements like "There is a 70% probability that paid search contributes 3-7% of sales".
But from a decision-making point of view, this simply increases entropy. Entropy is an information-theoretic concept.
If you want a good refresher on Entropy, we would highly recommend the article by Naoki.
Basically, low entropy means receiving very predictable information, while high entropy means receiving very unpredictable information (some also describe it as the element of surprise).
When you provide a solution in the form of a probability distribution, you inherently increase entropy because you are telling the user that you don't know exactly what the value is.
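This can be made concrete with a small sketch of Shannon entropy. The bucket values below are hypothetical, but they show the point: a distribution spread over a 3-7% contribution range carries maximal entropy, while a single point estimate carries none.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete probability distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # terms with zero probability contribute nothing
    return float(-np.sum(p * np.log2(p)))

# Hypothetical contribution-to-sales estimates over 5 buckets (3%, 4%, ..., 7%)
point_like = [0.0, 0.0, 1.0, 0.0, 0.0]   # "contribution is 5%": a point estimate
spread_out = [0.2, 0.2, 0.2, 0.2, 0.2]   # uniform over 3-7%: maximal uncertainty

print(shannon_entropy(point_like))  # 0.0 bits
print(shannon_entropy(spread_out))  # ~2.32 bits (log2 of 5)
```

Reporting the full distribution hands the decision-maker the ~2.32 bits of unresolved uncertainty; reporting a point estimate hands them zero.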
Bayesian MMM vendors are aware of this problem and hence they report the expected value of the probability distribution to mitigate the entropy.
But in doing so, they are no different from Frequentist MMM.
And Bayesian MMM is complex and compute-intensive overall. Why adopt a complex and compute-intensive process only to provide the same kind of answers as Frequentist MMM?
How do Frequentists measure and mitigate uncertainty?
Frequentists measure uncertainty through confidence intervals. Confidence Interval is all about coverage.
That is, if one repeated the same procedure on many different samples and constructed a confidence interval each time, the parameter of interest would fall inside roughly 90% or 95% of those intervals, depending on the chosen confidence level.
Think of MMM as an apparatus that is trying to capture the true Marketing ROI. The data that goes into modeling is a sample from the marketing reality (population).
This data is generated by a process governed by the true marketing ROI, and your MMM is the experiment that either captures that true ROI or not.
As a client, you would want to know how well this apparatus (the MMM) is constructed and whether it captures the true marketing ROI most of the time. Basically, you need performance guarantees.
Confidence intervals thus, in a way, inform you about the construction of the MMM model and its reliability.
Thanks for reading.
For consulting and help with MMM implementation, click here.
Stay tuned for more articles on MMM.