Reviews of Carrier's books form a very "biased sample" of opinion on the application of mathematics to real-world problems. If it's not completely clear what I mean already, it's simply this: reviews of Carrier's books react specifically to Carrier's actual central hypothesis (re: "MinMyth" vis-a-vis "MinHist" and "why we might have reason for doubt") and react generally to Carrier's specific application of Bayes' theorem in this context (i.e., the resolution of the question of the historicity of Jesus). Besides the fact that the reviewers can be influenced by human subjectivity (one meaning of the word "biased"), the sample itself is "biased" because it is a very non-random selection out of the total pool of opinion on the application of mathematics to real-world problems.
While I appreciate the search for at least one reviewer with credentials in mathematics who can "back him up" (and I've conducted no search at all, personally--though I vaguely recall that Carrier himself submitted his manuscripts to one or more persons prior to publication for exactly that kind of review [unfortunately, my memory does not come with page numbers]), the general argument can't really be settled by such polling. All we really know from the fact of this argument is that the argument is real and worthy of our attention.
With this in mind, I don't really see the point of referring the general question back to Carrier all the time (that's why the first quote has an ellipsis), as if he's the first to try anything like this. (No, he's not the first to try anything like this.)
from that other thread: http://www.earlywritings.com/forum/view ... f=3&t=1317
GakuseiDon wrote:I don't see a problem with ... providing best/worst case odds myself. IMHO ... approach is reasonable, and says more about the subjective nature of the evidence ...
GakuseiDon wrote:I wouldn't call the odds 'suspicious' but rather 'subjective'.
GakuseiDon wrote:Given the subjective nature of the evidence available, I suspect that each person will end up with their own odds, which is a problem in itself.
some 1-star reviewer wrote:suspect, due to his insistence on applying subjective quanities to an objective theorem
some other reviewer wrote:When we’re dealing with rare evidence for rare events, then small errors in the inputs can end up giving a huge range of outputs, enough of a range that there is no usable information to be had... These issues combine to make it very difficult to make any sensible conclusions from Bayes’s Theorem in areas where probabilities are small, data is low quality, possible reference classes abound, and statements are vague.
GakuseiDon wrote:If the author is correct, it sounds like BT is simply not suitable when the probabilities are based on guesses.
GakuseiDon wrote:for investigating history, where the data is of low quality, etc. It becomes "Garbage In, Garbage Out."
This is all well and good, as one option for an opinion, and I'm sure there are some mathematicians that would stand for all of the above.

Bernard Muller wrote:The problem is about the Bayesian theorem not applying to cases where the inputs are highly suggestive & can very greatly differ from one person to another.
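The reviewer's point about small input errors producing a huge range of outputs is easy to demonstrate. Here is a minimal sketch (the numbers are invented placeholders, not anyone's actual estimates): run the same Bayes' theorem calculation over best-case and worst-case values for each input, and look at how wide the resulting posterior interval gets.

```python
from itertools import product

def posterior(prior, p_e_if_true, p_e_if_false):
    """Bayes' theorem for a single true/false hypothesis."""
    num = prior * p_e_if_true
    return num / (num + (1 - prior) * p_e_if_false)

# Best-case/worst-case bounds for each input (purely illustrative).
priors = (0.05, 0.30)
likelihood_if_true = (0.4, 0.9)
likelihood_if_false = (0.2, 0.7)

# Evaluate the posterior at every combination of bounds.
results = [posterior(p, lt, lf)
           for p, lt, lf in product(priors, likelihood_if_true, likelihood_if_false)]
print(min(results), max(results))  # roughly 0.03 to 0.66
```

Even modest-looking input ranges stretch the posterior from near-negligible to better-than-even, which is exactly the "no usable information to be had" worry quoted above.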
If I understand correctly (and this is me speaking--I haven't read Proving History yet), some of them would self-identify as frequentists. To reduce a complicated subject to the bare basics (possibly misrepresenting it along the way), frequentists believe in the real objectivity of probabilistic statements. Not in the sense of the "completely neutral observer, with limited information" (which is one of two interpretations of Bayesian probability--the other is the "subjectively interpreting observer, with limited information," and that seems to be the sense in which someone following Proving History would use her or his probabilities when it is necessary to do so) but in the sense of "God himself looking down" (which is frequentist probability).
So, for a hypothesis, which is either true or false in objective reality ("God himself looking down"), the strict frequentist says that its probability is either 0 or 1, corresponding to false or true. After all, he's counted (or the Universe has counted, if he can't), and there's only one value to count.
A Bayesian interprets the "probability" of a hypothesis that is either true or false differently than a frequentist does. He assigns it a value in the range from 0 to 1, and he does so with reference to the limited information that the observer has relevant to the hypothesis. Now, mathematically, this is not a contradiction, because the two are describing different things. What they really disagree over is what the most useful definition of "probability of a hypothesis" is. (Or, sometimes, perhaps they both think that the Bayesian definition could be useful, but the frequentist might still think that to attempt to work with it mathematically is bollocks, for whatever reason. Either way, Carrier didn't create this controversy. It is centuries old, with the frequentist interpretation seeming to have the limelight in the mid-20th century but seemingly less important before or after.)
The most relevant wiki page (and I should have linked it earlier) seems to be:
Bayesian probability
And here we find this very debate, being played out in a completely general sort of way:
Broadly speaking, there are two views on Bayesian probability that interpret the probability concept in different ways. According to the objectivist view, the rules of Bayesian statistics can be justified by requirements of rationality and consistency and interpreted as an extension of logic. According to the subjectivist view, probability quantifies a "personal belief".
The page continues with some remarks on "Personal probabilities and objective methods for constructing priors":

Broadly speaking, there are two views on Bayesian probability that interpret the 'probability' concept in different ways. For objectivists, probability objectively measures the plausibility of propositions, i.e. the probability of a proposition corresponds to a reasonable belief everyone (even a "robot") sharing the same knowledge should share in accordance with the rules of Bayesian statistics, which can be justified by requirements of rationality and consistency. For subjectivists, probability corresponds to a 'personal belief'. For subjectivists, rationality and coherence constrain the probabilities a subject may have, but allow for substantial variation within those constraints. The objective and subjective variants of Bayesian probability differ mainly in their interpretation and construction of the prior probability.
Accordingly, we must make an important distinction. The use of Bayes' theorem with "subjective" probabilities is controversial within mathematics (and in the sciences) but is not controverted by mathematics. Not all mathematicians would want to use Bayes' theorem with "subjective" probabilities, but mathematics itself doesn't tell us whether we should or should not do so. Mathematics simply tells us how to do so, if we really want to go down that road.

Following the work on expected utility theory of Ramsey and von Neumann, decision-theorists have accounted for rational behavior using a probability distribution for the agent. Johann Pfanzagl completed the Theory of Games and Economic Behavior by providing an axiomatization of subjective probability and utility, a task left uncompleted by von Neumann and Oskar Morgenstern: their original theory supposed that all the agents had the same probability distribution, as a convenience.[21] Pfanzagl's axiomatization was endorsed by Oskar Morgenstern: "Von Neumann and I have anticipated" the question whether probabilities "might, perhaps more typically, be subjective and have stated specifically that in the latter case axioms could be found from which could derive the desired numerical utility together with a number for the probabilities (cf. p. 19 of The Theory of Games and Economic Behavior). We did not carry this out; it was demonstrated by Pfanzagl ... with all the necessary rigor".[22]
Ramsey and Savage noted that the individual agent's probability distribution could be objectively studied in experiments. The role of judgment and disagreement in science has been recognized since Aristotle and even more clearly with Francis Bacon. The objectivity of science lies not in the psychology of individual scientists, but in the process of science and especially in statistical methods, as noted by C. S. Peirce.[23] Recall that the objective methods for falsifying propositions about personal probabilities have been used for a half century, as noted previously. Procedures for testing hypotheses about probabilities (using finite samples) are due to Ramsey (1931) and de Finetti (1931, 1937, 1964, 1970). Both Bruno de Finetti and Frank P. Ramsey acknowledge[citation needed] their debts to pragmatic philosophy, particularly (for Ramsey) to Charles S. Peirce.
The "Ramsey test" for evaluating probability distributions is implementable in theory, and has kept experimental psychologists occupied for a half century.[24] This work demonstrates that Bayesian-probability propositions can be falsified, and so meet an empirical criterion of Charles S. Peirce, whose work inspired Ramsey. (This falsifiability-criterion was popularized by Karl Popper.[25][26])
Modern work on the experimental evaluation of personal probabilities uses the randomization, blinding, and Boolean-decision procedures of the Peirce-Jastrow experiment.[27] Since individuals act according to different probability judgments, these agents' probabilities are "personal" (but amenable to objective study).
Personal probabilities are problematic for science and for some applications where decision-makers lack the knowledge or time to specify an informed probability-distribution (on which they are prepared to act). To meet the needs of science and of human limitations, Bayesian statisticians have developed "objective" methods for specifying prior probabilities.
Indeed, some Bayesians have argued the prior state of knowledge defines the (unique) prior probability-distribution for "regular" statistical problems; cf. well-posed problems. Finding the right method for constructing such "objective" priors (for appropriate classes of regular problems) has been the quest of statistical theorists from Laplace to John Maynard Keynes, Harold Jeffreys, and Edwin Thompson Jaynes: These theorists and their successors have suggested several methods for constructing "objective" priors:
Maximum entropy
Transformation group analysis
Reference analysis
Each of these methods contributes useful priors for "regular" one-parameter problems, and each prior can handle some challenging statistical models (with "irregularity" or several parameters). Each of these methods has been useful in Bayesian practice. Indeed, methods for constructing "objective" (alternatively, "default" or "ignorance") priors have been developed by avowed subjective (or "personal") Bayesians like James Berger (Duke University) and José-Miguel Bernardo (Universitat de València), simply because such priors are needed for Bayesian practice, particularly in science.[28] The quest for "the universal method for constructing priors" continues to attract statistical theorists.[28]
Thus, the Bayesian statistician needs either to use informed priors (using relevant expertise or previous data) or to choose among the competing methods for constructing "objective" priors.
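For the curious, the first of the listed methods, maximum entropy, has a simple intuition: among all distributions consistent with what you actually know, pick the one with the highest Shannon entropy, i.e., the one that smuggles in the least extra information. With no constraints at all over a finite set of options, that's the uniform distribution. A toy sketch (my own illustration, not from the wiki article):

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

uniform = [0.25] * 4              # no information: all four options equal
opinionated = [0.7, 0.1, 0.1, 0.1]  # this prior smuggles in a preference

print(entropy(uniform))       # 2.0 bits -- the maximum for four options
print(entropy(opinionated))   # strictly lower than 2.0
```

Adding genuine constraints (a known average, say) shrinks the set of candidate distributions, and the maximum-entropy member of that smaller set becomes the "objective" prior.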
This is basically a form of the old "is-ought" problem, except that what might be true of philosophy in general is certainly true for math. Math can't tell you whether you "ought" to do something. For example, it can't tell you whether you should attempt to express your subjective opinions in probabilistic form. What it can do is tell you how to work with these numbers after you have them, which is what it does in the form of Bayes' theorem and the method of updating prior probabilities with the consequent probabilities in order to calculate posterior probabilities. The first part, choosing to express opinions as probabilities, is just as human and subjective a decision as the numbers representing the guesses; the second part is just the math.
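To make "the second part is just the math" concrete, here is the entire mechanical core in a few lines. The input numbers are arbitrary placeholders standing in for someone's subjective estimates; the function body is just Bayes' theorem for a single true-or-false hypothesis.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Update a prior probability with the likelihoods of the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Arbitrary illustrative inputs: a 25% prior, and evidence four times
# likelier if the hypothesis is true than if it is false.
print(posterior(0.25, 0.8, 0.2))  # 4/7, about 0.571
```

Everything contentious lives in the three arguments; the division is the uncontroversial part that mathematics actually dictates.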
I would just make a couple final comments, however. Keep in mind what we are calling "Garbage" here, in the phrase "Garbage In Garbage Out." If what we are calling "Garbage" is just one person's particular opinions, then that's not a problem whatsoever, Bayes or no Bayes--just ignore them and use the non-garbage instead. But I do get the feeling that people mean more than just one person's wayward beliefs and that this "Garbage" is seen as a real problem for us all, not just as an individual's problem. GakuseiDon (and the author he summarizes here) said it this way: "for investigating history, where the data is of low quality," GIGO. What we are calling "Garbage" in this phrase is the state of our knowledge of the facts. If that is "Garbage," it is a problem for everybody. Avoiding precise mathematical representation doesn't help us out of the swamp. Maybe it makes us feel better about being in a swamp of "garbage" opinions, but "Garbage In Garbage Out" is true even if you are just "muddling through" this swamp.
Just letting loose a little with another comment (and this is still just me talking). In what way are we enlisting math in this swamp, if we do at all? I'll have more time to talk about this when summarizing Carrier, but here's just a single paragraph. Earlier I called it an "aid to honesty." Another way of putting that phrase is "an accountability measure." Basically it's just bookkeeping. It's keeping track of what the assumptions are and what weights have been assigned to those assumptions. It's formal logic with some numbers, because history deals in uncertainty and not in certainty. And it's better than the deductive method in this application for various formal reasons (mostly because deductive reasoning is terrible at bookkeeping, works best only if everything is 100% full-steam-ahead True, and is especially terrible at representing results as anything other than a simple binary true or false). Who has a problem with bookkeeping? Mostly people that don't like keeping books. Which is most people. I know I wait to do my taxes until April. It's cumbersome and difficult for most, and it's just cumbersome for the rest. The only reason we'd ever use it in history, really, is when the matter is very controversial and somewhat ambiguous. The controversy makes the bookkeeping more help than hurt (because everything is challenged at some point), and the ambiguity makes the exercise more than pure wank (like trying to prove that the world didn't pop into existence last Tuesday). Now that doesn't mean we have to do so. But it might mean that we can without looking too foolish.
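The bookkeeping metaphor even has a standard mathematical form. In odds terms, Bayes' theorem turns updating into simple addition: each assumption gets a log-likelihood-ratio "ledger entry," and the posterior is just the running total converted back to a probability. A sketch with made-up weights:

```python
import math

def log_odds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def prob(lo):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-lo))

# Ledger of assumptions: each entry is log(P(evidence|H) / P(evidence|not-H)).
# All the weights below are invented placeholders, not anyone's real estimates.
ledger = [
    ("prior",      log_odds(0.5)),  # start indifferent
    ("evidence A", math.log(2.0)),  # twice as likely if H is true
    ("evidence B", math.log(0.5)),  # twice as likely if H is false
    ("evidence C", math.log(3.0)),  # three times as likely if H is true
]

total = sum(weight for _, weight in ledger)
print(prob(total))  # 0.75
```

The appeal as an accountability measure is that every entry is on the books: challenge one weight and you rerun the sum, rather than rearguing the whole case from scratch.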