Here are some situations where explicit computation of Bayes factors could still be meaningful and useful:
Bayes factors can be used for comparing a small number of alternative tree topologies in the context of a phylogenetic analysis. In particular, if the topology that would have been considered the most reasonable one before the analysis turns out to be entirely absent from the Monte Carlo sample, we would typically want to know how strong the empirical evidence is against what we had thus far deemed to be true. It would therefore make sense to compute a Bayes factor in that case. This would have to be done explicitly, using thermodynamic integration or the stepping-stone method.
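For reference, both of these methods work by bridging between the prior and the posterior through a sequence of power posteriors $p_\beta(\theta) \propto p(D \mid \theta)^\beta \, p(\theta)$, for $0 \le \beta \le 1$. Thermodynamic integration relies on the path-sampling identity, while stepping-stone writes the marginal likelihood as a telescoping product of normalizing-constant ratios (the discretization of $\beta$ shown here is just one standard choice):

$$
\ln m(D) \;=\; \int_0^1 \mathbb{E}_{p_\beta}\!\left[\ln p(D \mid \theta)\right] \, d\beta,
\qquad
m(D) \;=\; \prod_{k=1}^{K} \frac{z_{\beta_k}}{z_{\beta_{k-1}}},
\quad 0 = \beta_0 < \beta_1 < \dots < \beta_K = 1,
$$

where $z_\beta = \int p(D \mid \theta)^\beta \, p(\theta) \, d\theta$, so that $z_0 = 1$ and $z_1 = m(D)$. In practice, each ratio $z_{\beta_k}/z_{\beta_{k-1}}$ is estimated by importance sampling, using draws from the power posterior at $\beta_{k-1}$.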
Another reason for explicitly comparing the marginal likelihoods of several candidate tree topologies is to make sure that the MCMC did not get stuck in a local maximum, returning a high apparent posterior probability for a tree that in fact has a low marginal likelihood.
On the other hand, the differences in log marginal likelihoods between alternative (reasonable) topologies are usually very small compared to the absolute values of the log marginal likelihoods themselves, pushing these numerical methods to the limits of their accuracy.
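To make the difficulty concrete (with purely illustrative numbers): the log Bayes factor is a difference of two independently estimated quantities,

$$
\ln B_{12} \;=\; \ln m_1(D) \;-\; \ln m_2(D),
$$

and for a typical alignment each $\ln m_i(D)$ may be in the tens of thousands in absolute value, while $\ln B_{12}$ is only a few units. Even a small relative error in each estimate can then be of the same order as the difference we are trying to resolve.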
Even in the situations mentioned in my previous post, where automatic model selection can easily be implemented, it can still be useful to compute marginal likelihoods or Bayes factors: just to make a point, for instance, about how a new model improves on the models currently considered standard in the field or, more generally, about which features of a model matter in terms of empirical fit.
Finally, in those cases where it is difficult to implement an efficient reversible-jump MCMC for averaging over non-nested models, model averaging may sometimes be replaced by model selection through explicit calculation of marginal likelihoods (although it is often easy to come up with a more general model encompassing all of the non-nested models that we want to compare).
But then, these are relatively specific situations, quite distinct from the idea of systematically computing Bayes factors in order to first select the best model, and then conduct inference by parameter estimation, which is more specifically what I was arguing against.
For me, systematic and explicit model selection is one of those old reflexes inherited from maximum likelihood thinking -- but not really adapted to the context of Bayesian inference.
A great post as usual.
What about settings where we are interested in the importance of one component of the model average versus another? For example, one could imagine wondering about the level of evidence for an inferred collection of site-specific selection coefficients (say, using the procedure in your 2010 PNAS paper).