
> At which point you've just found a more cumbersome way to do frequentist statistics.

Hmm, in one way, yes... but on the other hand, Bayesian posteriors are a lot more intuitive for most people to interpret, so you trade one form of convenience for another. And as you sort of hint at, the results should usually be fairly similar whether you do a frequentist or a Bayesian analysis. So in most cases I doubt it matters much. Where it does matter is when you have grounds for strong priors that you want to take advantage of. In such cases a Bayesian analysis improves your chances of being correct in the "here and now", whereas a frequentist analysis is only concerned with asymptotic error rates. (But of course frequentist vs Bayesian is more of a spectrum than a black-and-white distinction.)
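To make the "strong priors" point concrete, here's a minimal sketch (a made-up Beta-Binomial example; all the numbers are mine, chosen for illustration) of how an informative prior pulls the estimate away from the frequentist MLE:

    from scipy import stats

    # Hypothetical small-sample data: 9 successes out of 12 trials.
    k, n = 9, 12
    print(f"frequentist MLE: {k / n:.2f}")  # 0.75, the sample proportion

    # Bayesian: with a Beta(a, b) prior, the posterior is Beta(a + k, b + n - k).
    flat_prior = (1, 1)     # uniform: "let the data speak"
    strong_prior = (2, 18)  # strong prior belief that the true rate is ~0.1

    for a, b in (flat_prior, strong_prior):
        post = stats.beta(a + k, b + n - k)
        lo, hi = post.ppf([0.025, 0.975])
        print(f"Beta({a},{b}) prior -> posterior mean {post.mean():.2f}, "
              f"95% credible interval [{lo:.2f}, {hi:.2f}]")

With the flat prior the posterior mean (~0.71) stays close to the MLE; the strong prior drags it down to ~0.34. Whether that's a feature or a bug is exactly what the rest of this thread is about.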

> Well, the fact is there are too many small-sample studies being produced for all or even most of them to be critically analysed by people with deep understanding.

And this I totally agree with. If there's one thing I dislike about academia, it's the tendency to fund underpowered studies that get nowhere. Better to go all in on fewer, bigger studies, with sufficient support from experienced people.
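For a sense of what "underpowered" means in practice, a quick power calculation with made-up but typical numbers (sketched here with statsmodels):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    # Hypothetical: subjects per group needed for a two-sample t-test to
    # detect a small effect (Cohen's d = 0.3) with 80% power at alpha = 0.05.
    n_needed = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
    print(f"needed: ~{n_needed:.0f} per group")  # ~175 per group

    # Conversely, the power of a typical n=30-per-group study for that effect.
    power = analysis.solve_power(effect_size=0.3, alpha=0.05, nobs1=30)
    print(f"power at n=30: {power:.2f}")  # ~0.21, i.e. badly underpowered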



> So in most cases I doubt it matters much. Where it does matter is when you have grounds for strong priors that you want to take advantage of. In such cases a Bayesian analysis improves your chances of being correct in the "here and now".

I completely agree with this - but it's exactly this dynamic that I think does more harm than good, at least in the current academic environment. Effectively it normalizes publishing a result that isn't strong enough to swamp the prior, paired with some detailed situational argument for why a different prior should be used in this particular case. We already get every social science paper arguing that it should be allowed to use a one-tailed t-test rather than a two-tailed one, because surely there's no possibility the intervention could do more harm than good, and you have to dig into the details of the paper to see why that's nonsense. Letting them pick their own prior multiplies that kind of thing many times over.
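To illustrate the one-tailed trick with made-up numbers: for a t-statistic in the hypothesized direction, the one-tailed p-value is exactly half the two-tailed one, which is enough to push a borderline result under the 0.05 threshold:

    from scipy import stats

    # Hypothetical borderline result: t = 1.8 from a study with n = 30.
    t, df = 1.8, 29

    p_two = 2 * stats.t.sf(abs(t), df)  # two-tailed p-value
    p_one = stats.t.sf(t, df)           # one-tailed, harm assumed impossible

    print(f"two-tailed p = {p_two:.3f}")  # ~0.082: not significant at 0.05
    print(f"one-tailed p = {p_one:.3f}")  # ~0.041: suddenly "significant"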


> Letting them pick their own prior multiplies that kind of thing many times over.

I'm a big fan of sensitivity analysis in this context. Don't just pick one prior and call it a day; show the effect of liberal vs conservative priors, and discuss the results in light of the domain knowledge. That gives the next researcher a much better foundation than a single prior, or a p-value, ever could.

Unfortunately, if it was a non-trivial paper to begin with, it has now turned into a whole book.
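Continuing the toy Beta-Binomial example from upthread, a sensitivity analysis in this spirit might look like the following (the priors, data, and decision threshold are all made up for illustration):

    from scipy import stats

    # Hypothetical trial data: 14 successes out of 40.
    k, n = 14, 40

    # A small grid of priors, from skeptical to enthusiastic.
    priors = {
        "flat Beta(1,1)":         (1, 1),
        "skeptical Beta(2,8)":    (2, 8),
        "enthusiastic Beta(8,2)": (8, 2),
    }

    # Conjugacy: a Beta(a,b) prior plus binomial data gives a
    # Beta(a + k, b + n - k) posterior. Report the mean, 95% credible
    # interval, and P(rate > 0.35) (an arbitrary threshold) per prior.
    for name, (a, b) in priors.items():
        post = stats.beta(a + k, b + n - k)
        lo, hi = post.ppf([0.025, 0.975])
        print(f"{name:24s} mean {post.mean():.2f}  "
              f"95% CI [{lo:.2f}, {hi:.2f}]  P(rate > 0.35) = {post.sf(0.35):.2f}")

If P(rate > 0.35) barely moves across the grid, the data dominate; if it swings widely, as here, the conclusion is mostly the prior talking, and readers deserve to see that.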



