toejam wrote:E.g. What is the likelihood that all of your "probably nots" would fall in line?
This is a mathematical question, and the question is what kind of mathematical model we would construct to answer it. (Sorry!)
There is only one real, completely direct way to do this. There are ten items, so there are ten factors in the chain rule:
[1] P( A,B,C,D,E,F,G,H,I,J ) = P( A ) * P( B | A ) * P( C | A, B ) * P( D | A, B, C ) * P( E | A, B, C, D ) * P( F | A, B, C, D, E ) * P( G | A, B, C, D, E, F ) * P( H | A, B, C, D, E, F, G ) * P( I | A, B, C, D, E, F, G, H ) * P( J | A, B, C, D, E, F, G, H, I )
Now my "Probably Not" statements were, of course, statements about these values:
[2] P( A ), P( B ), P( C ), P( D ), P( E ), P( F ), P( G ), P( H ), P( I ), P( J )
Notice that these are different expressions. The expressions we need are conditional probabilities, each conditioned on all the previous ones in the chain, in order to apply the multiplication chain rule. The only way to reduce the terms of [1] to the terms of [2] is to assume conditional independence of the individual events involved, but that seems not only unjustifiable but intuitively wrong.
This is where Carrier goes off the rails. He is wary of combining multiple statements to form a theory, afraid that stacking conclusions will amass into a huge ball of improbability and shove his theory into unlikely-land. That is true when the probabilities involved can be assumed to be conditionally independent. But it is not necessarily true when they are not; in that case, it depends on what we believe about the relationships between the individual statements.**
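To see concretely how dependence changes things, here is a minimal sketch with made-up numbers: a shared latent "biased box" state makes ten black outcomes far more likely jointly than the naive product of the marginals would suggest.

```python
# Minimal sketch of dependence via a common cause (all numbers are made up).
# With probability q the box is "biased" (each ball black with prob 0.95);
# otherwise it is "fair" (each ball black with prob 0.50).
q = 0.5
p_biased, p_fair = 0.95, 0.50

# Marginal probability that any single ball is black.
marginal = q * p_biased + (1 - q) * p_fair

# Joint probability that all ten balls are black, computed correctly
# by conditioning on the shared latent state.
joint = q * p_biased**10 + (1 - q) * p_fair**10

# Joint probability under the (wrong) assumption of independence.
naive = marginal**10

print(joint, naive)  # the true joint is several times larger than the naive product
```

The point of the sketch: multiplying marginals as if they were independent can badly understate the joint probability when a common influence is at work.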
Analogy interlude. Suppose you had a theory about a black box that spits out balls, each of which is either black or white. But suppose these balls are so tiny and fragile that they disintegrate when measured: the instruments consume the ball's material to get a reading. Suppose further that your instruments can give you only a "Probably Black" (x > 70%), a "Probably Not Black" (x < 30%), or a "Non Liquet" (30% <= x <= 70%). This is all you have to work with. Now you want to make a theory about your black box. You see it spit out:
Probably Black, Probably Black, Probably Black, Probably Black, Probably Black, Probably Black, Probably Black, Probably Black, Non Liquet, Probably Not Black, Probably Black, Probably Black
You record 1 Probably Not Black, 1 Non Liquet, and 10 Probably Black.
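(As an aside, the instrument described above is just a threshold classifier. A sketch, where the helper name `reading` is mine and the 30%/70% cutoffs come from the setup:)

```python
def reading(x):
    """Map an underlying probability x of blackness to the instrument's output."""
    if x > 0.70:
        return "Probably Black"
    if x < 0.30:
        return "Probably Not Black"
    return "Non Liquet"  # 0.30 <= x <= 0.70

# e.g. reading(0.85) -> "Probably Black", reading(0.10) -> "Probably Not Black"
```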
You go over to your fellow research scientist and you present your results. You say:
"We have to apply the chain rule. When I do so, I get approximately 0.85^10 * 0.5^1 * 0.15^1, or a 1.476558% chance of them all being black."
Your colleague asks, "And the 'probably black' balls? What are the odds that they are all in fact black?"
"We have to apply the chain rule. When we do so, I get approximately 0.85^10, or a 19.68744% chance of all the 'probably black' balls being black."
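(An aside: both figures are easy to verify, taking 0.85, 0.50, and 0.15 as the representative point values for the three reading types, as the dialogue does:)

```python
# Representative point values implied by the dialogue, not measured quantities.
p_pb, p_nl, p_pn = 0.85, 0.50, 0.15

just_the_pbs = p_pb**10              # ~0.1968744, i.e. ~19.68744%
all_twelve = p_pb**10 * p_nl * p_pn  # ~0.01476558, i.e. ~1.476558%
```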
Your colleague asks, "Hmm. I'm not so sure. Have you considered that there might be a relationship at work? A common influence could mean that one such reading makes another such reading more likely, because each reading is indicative of the underlying influence, and the consistency of the results makes that influence seem more likely, suggesting that the box is biased or perhaps even has a law at work here."
You say, "Maybe, but how would we model that hunch mathematically?"
And the answer there is that you would start with a null hypothesis and see whether it is likely at all. If it is not, then you can at least suspect that there is some kind of principle at work. Then you can take every principle that has been suggested and compare them, to see which is the better scientific explanation, which hypothesis better fits the results without becoming too ad hoc.
"Let's start with the null hypothesis that the machine spits out balls just like any other ordinary machine," your colleague says.
You say, "I'm sorry, I think we forgot something." Your colleague asks, "What's that?" You say, "We need a control group." Your colleague says, "Of course! We'll find an appropriate control group and then test the null hypothesis using a statistical p-value test of the significance of the results!"
So you go out and get a greyish-black box. This box is roughly like the first one in every respect, except that it lacks the one significant trait you want to test. And you set it up, let it rip, and observe:
PN, PB, PN, PN, PN, NL, NL, NL, NL, NL, NL, NL, NL, PN, PN, PB, PB, NL, NL, PN, PN, PN, NL, PN, PN, PN, NL, PN, PN, PN, NL, NL, NL, PB, NL, PB, PN
16 PN (Probably Not), 16 NL (Non Liquet), and 5 PB (Probably Black). Okay, let's make our table.

              | PN | NL | PB |
Test Group    |  1 |  1 | 10 |
Control Group | 16 | 16 |  5 |
Now your colleague says, "hey, what if we combined the categories of PN+NL, and then set that against the category of PB, just to see what happens?"
And you say, "sure, let's give that a shot then."
              | PN or NL | PB |
Test Group    |        2 | 10 |
Control Group |       32 |  5 |
Fisher's exact test on this table gives a two-tailed p-value of about 1.9E-05. The result is significant at p < 0.01.
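For anyone who wants to reproduce that number, here is a self-contained sketch of the two-tailed Fisher exact test via the hypergeometric distribution (the function name is mine):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-tailed Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table at least as
    extreme (i.e. no more probable) than the observed one, holding the
    row and column totals fixed.
    """
    n = a + b + c + d
    row1 = a + b          # test-group total
    col1 = a + c          # first-column total
    denom = comb(n, row1)

    def prob(k):
        # probability of the table with k in the top-left cell
        return comb(col1, k) * comb(n - col1, row1 - k) / denom

    p_obs = prob(a)
    lo = max(0, row1 + col1 - n)
    hi = min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

p_value = fisher_exact_two_sided(10, 2, 5, 32)
print(p_value)  # roughly 1.9e-05
```

Here the table is [[10, 2], [5, 32]]: the PB and PN+NL counts for the test and control groups from above.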
Anyway, I feel that I am rambling, and I haven't answered your specific question, but I am indicating the general way in which the question is a good one, even though the answer is not nearly as simple as it may seem. The first step is to observe that a phenomenon exists here (if we can confirm that), and then to seek an explanation (and to compare the ones we can come up with).
That the original letter writer had no HJ in mind seems to me to be one hypothesis with explanatory power, particularly for this phenomenon. And this is not the only reason I'd suggest the hypothesis, so it's definitely worth investigating, despite the difficulty of determining mathematically or scientifically the exact probability that should be assigned to the conjunction of several individual interpolation hypotheses.
(PS--If we did a control group here, it would be all the other verses in the letters of Paul. Or maybe just the "Not-Completely-Useless-MJ" passages. Or maybe both of those would be test groups against a control group that had neither. I guess... this is why I don't like to use mathematical arguments that start from very human judgments. They get very confusing, so most people aren't really following them, and since it all starts from hunches anyway, trusting the human brain seems just as valid as brain-plus-quite-possibly-pseudoscience.)
** You do get one saving grace: your theory is no more likely than your *least* likely single necessary assumption. This makes it easier to think about, without going crazy with math. Identify the weakest necessary assumption and ask how likely it is; that sets an upper bound on the likelihood of the entire theory. Why?
[1] P( A,B,C,D,E,F,G,H,I,J ) = P( A ) * P( B | A ) * P( C | A, B ) * P( D | A, B, C ) * P( E | A, B, C, D ) * P( F | A, B, C, D, E ) * P( G | A, B, C, D, E, F ) * P( H | A, B, C, D, E, F, G ) * P( I | A, B, C, D, E, F, G, H ) * P( J | A, B, C, D, E, F, G, H, I )
Just let P( A ) be your least likely assumption. Every other factor is at most 1, so P( A,B,C,D,E,F,G,H,I,J ) <= P( A ). Identifying the weakest component of a hypothesis and showing it to be unlikely is therefore an excellent, simple way to reject the whole hypothesis as unlikely.
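A quick numeric illustration of the bound (the individual chain-rule factors here are arbitrary made-up values): whatever the conditional terms happen to be, the product can never exceed the smallest factor.

```python
# Arbitrary made-up chain-rule factors: P(A), P(B|A), P(C|A,B), ...
factors = [0.30, 0.90, 0.95, 0.80, 0.99, 0.85, 0.70, 0.92, 0.88, 0.97]

joint = 1.0
for f in factors:
    joint *= f

# Since every factor is at most 1, the joint probability is bounded
# above by any single factor -- in particular by the least likely one.
assert joint <= min(factors)
print(joint, min(factors))
```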
"... almost every critical biblical position was earlier advanced by skeptics." - Raymond Brown