Suppose the likelihood that it misclassified a ball is significantly different from zero, but not yet known precisely.
If you use a model that doesn't ask you to think about this likelihood at all, you will get the same result as if you had used Bayes and consciously chosen to approximate the likelihood of misclassification as zero.
You may get slightly better results if you have a reasonable estimate of that probability, but you will do no worse if you just tell Bayes zero.
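To make this concrete, here is a minimal sketch using a hypothetical urn example (the urn, the grid-based posterior, and the error rate `eps` are all my own illustration, not anything from the discussion above): each ball's colour is reported with misclassification probability `eps`, and setting `eps=0` reproduces exactly the update a model that never asks about misclassification would give.

```python
import numpy as np

def posterior(reports, eps, grid=np.linspace(0, 1, 101)):
    """Grid-based posterior over p (the fraction of red balls),
    starting from a uniform prior. Each report is flipped with
    probability eps before we see it."""
    post = np.ones_like(grid)  # uniform prior
    for saw_red in reports:
        # P(report says red | true fraction p, flip rate eps)
        p_red = grid * (1 - eps) + (1 - grid) * eps
        post *= p_red if saw_red else 1 - p_red
    return post / post.sum()

reports = [True, True, False, True]
naive = posterior(reports, eps=0.0)  # "tell Bayes zero"
noisy = posterior(reports, eps=0.1)  # a rough estimate of the error rate
# With eps=0 this collapses to the standard binomial update, i.e. the
# same answer as a model that never raised the question at all.
```

The point is that `eps=0` is not a special hack: it is just one value of a parameter the model explicitly asked about, so the naive model is a strict special case.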
It feels like you're criticizing the model for asking hard questions.
I feel like explicitly not knowing an answer is always a small step ahead of not considering the question.
The criticism is important because Bayes carries that probability forward between experiments. Garbage in, garbage out.
As much as people complain about frequentist approaches, examining the experiment independently from the output of the experiment effectively limits contamination.
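To illustrate the "garbage in, garbage out" worry, here is a toy simulation (the true values, batch sizes, and the `simulate` helper are all invented for illustration): the posterior from each experiment becomes the prior for the next, so an assumed misclassification rate of zero gets baked into every later inference rather than being re-examined.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 101)
true_p, true_eps = 0.7, 0.2  # true red fraction and true flip rate

def update(prior, reports, eps):
    """One Bayesian update of the grid posterior over p."""
    post = prior.copy()
    for saw_red in reports:
        p_red = grid * (1 - eps) + (1 - grid) * eps
        post *= p_red if saw_red else 1 - p_red
    return post / post.sum()

def simulate(eps_assumed, n_batches=5, batch=200):
    """Run several experiments, feeding each posterior in as the
    next prior, and return the final point estimate of p."""
    post = np.ones_like(grid) / grid.size  # uniform prior
    for _ in range(n_batches):
        truth = rng.random(batch) < true_p
        flips = rng.random(batch) < true_eps
        reports = truth ^ flips  # observed, noisy labels
        post = update(post, reports, eps_assumed)
    return grid[post.argmax()]

good = simulate(eps_assumed=true_eps)  # honest about the noise
bad = simulate(eps_assumed=0.0)        # "tell Bayes zero" despite real noise
```

The model that assumes no misclassification converges confidently on the observed label rate (about 0.62 here) rather than the true fraction (0.7), and every downstream experiment inherits that bias through the prior; a frequentist re-analysis of each experiment in isolation would at least not compound it.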