The subjective Bayesian synthesis represents an impressive theoretical achievement. Yet I think it is dangerous to see it as providing *foundations* for Bayesian applied statistics. Such a foundational perspective, which amounts to embracing the subjective Bayesian paradigm a bit too wholeheartedly, corresponds to the kind of ideological attitude I was alluding to in the previous post (in fact, perhaps any foundational argument is ideological).

Rather, I prefer to see the subjective Bayesian theoretical synthesis as a *model*. By this I mean a prescriptive logical model of how a rational agent can conduct inference and make reasonable decisions in the presence of uncertainty. By identification with the rational agent, we can then use this model as a *heuristic* tool for figuring out how, as a gambler, as an economic agent, or even as a scientist doing fundamental research, we could, well, conduct inference and make reasonable decisions in the presence of uncertainty.
However, in spite of all its theoretical and axiomatic justifications, this is just a model, with all the idealization and the pragmatic considerations that come with it. In fact, the idealization becomes more than apparent when the time comes to define our priors and utilities: there, we immediately realize how much using the Bayesian paradigm in practical situations requires a fair dose of pragmatism.

A heuristic model is like a tool in the hands of a craftsman: something that you recruit temporarily for a specific task and that you use concomitantly with other tools. Something that requires some external know-how, some sense of the context.

The word 'heuristic' is also important here. The heuristic power of the subjective Bayesian agent model is great, making it possible to derive fairly sophisticated probabilistic models that formalize and address delicate scientific questions. However, again, it is only a heuristic tool. In a second step, we should perhaps complement it with additional arguments to back up what we have done.

Complementing the heuristics with additional arguments is even more important in the context of scientific research, for the following reason. Subjective Bayesian inference is supposed to be a model of a rational agent conducting inference and making *private* decisions in the presence of uncertainty. Using it to formalize, for instance, one's private gambling activity (a bit like Nate Silver, 2012) is relatively unproblematic: as long as it is all about my money, I do not need to justify to anyone why I used this or that particular prior or utility.
When we write a scientific article, however, we enter a new logical context: that of public scientific reporting. In this context, personal degrees of belief have no legitimate existence. I am not supposed to report on how much I am personally willing to bet on the monophyly of a clade (it would be idle anyway: we won't be able to determine whether or not I won my bet). Instead, I am supposed to give some meaningful and objective measure of the statistical support in favor of the monophyly of the group of interest.

This last point gives us a hint at the kind of argument we should develop in order to back up our Bayesian heuristics. Fundamentally, the only way to attach a clear, objective meaning to a statistical procedure is to refer to its *operational* properties (i.e., to how it behaves in practice under controlled conditions). And I have the impression that, in a statistical context, operational somehow means: frequentist.
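To make this notion of operational, frequentist behavior concrete, here is a minimal simulation sketch (the model, the parameter values, and all variable names are my own illustrative assumptions, not something from this post): we can ask how often a Bayesian 95% credible interval actually covers the true parameter under repeated sampling, in a conjugate normal model with known variance.

```python
import random
import statistics

# Illustrative sketch: frequentist coverage of a Bayesian 95% credible
# interval. Model: x_i ~ N(theta, sigma2) with sigma2 known, and a
# N(0, tau2) prior on theta. All settings below are assumptions.
random.seed(1)
tau2, sigma2 = 10.0, 1.0   # prior variance, known data variance
n, theta_true = 20, 0.5    # sample size, true parameter value
reps = 2000
covered = 0
for _ in range(reps):
    data = [random.gauss(theta_true, sigma2 ** 0.5) for _ in range(n)]
    xbar = statistics.fmean(data)
    # Conjugate posterior: N(post_mean, post_var)
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)
    post_mean = post_var * (n * xbar / sigma2)  # prior mean is 0
    half = 1.96 * post_var ** 0.5
    if post_mean - half <= theta_true <= post_mean + half:
        covered += 1
coverage = covered / reps
print(coverage)  # with this diffuse prior, typically close to 0.95
```

With a diffuse prior the credible interval nearly coincides with the classical confidence interval, so its coverage is close to nominal; with a strongly informative prior that conflicts with the truth, the same experiment would reveal poor coverage, which is exactly the kind of operational diagnosis the argument above calls for.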
All this is not new. For instance, Rubin (1984) emphasizes the need to complement the classical Bayesian procedure with other tools that are not strictly Bayesian (such as posterior predictive checks). See also Cox (2006), who suggests that Bayesian procedures, "if formulated carefully, [..] may provide a convenient algorithm for producing procedures that may have very good frequentist properties".
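As a small illustration of the kind of posterior predictive check Rubin advocates, here is a sketch (the binary data, the iid Bernoulli model, the Beta(1,1) prior, and the longest-run test statistic are all my own illustrative assumptions): we fit the model, simulate replicated datasets from the posterior predictive distribution, and ask whether they reproduce a feature of the observed data.

```python
import random

# Illustrative posterior predictive check, in the spirit of Rubin (1984).
# Model: iid Bernoulli(p) with a Beta(1, 1) prior on p.
# Test statistic: longest run of identical outcomes.

def longest_run(seq):
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

random.seed(0)
obs = [1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1]
n, k = len(obs), sum(obs)
t_obs = longest_run(obs)

reps = 5000
exceed = 0
for _ in range(reps):
    p = random.betavariate(1 + k, 1 + n - k)            # posterior draw
    rep = [1 if random.random() < p else 0 for _ in range(n)]
    if longest_run(rep) >= t_obs:                        # replicate as extreme?
        exceed += 1
ppp = exceed / reps  # posterior predictive p-value
print(t_obs, ppp)
```

A posterior predictive p-value near 0 or 1 would signal that the iid model fails to reproduce the run structure of the data; the check is a frequency calculation performed within a Bayesian fit, which is precisely why it sits outside the strictly subjective paradigm.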

Yet, from various recent discussions and readings, I have got the impression that we are still too often stuck in an "either frequentist or subjectivist" dichotomy, which suggests that these questions are worth revisiting.


--

Cox D.R., 2006. Principles of Statistical Inference. Cambridge University Press.

Goldstein M., 2006. Subjective Bayesian analysis: principles and practice. Bayesian Analysis, 1:403-420.

Rubin D.B., 1984. Bayesianly justifiable and relevant frequency calculations for the applied statistician. The Annals of Statistics, 12:1151-1172.

Silver N., 2012. The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't. Penguin Books.
