# The Old Evidence Problem

Document Type: Essay

Subject Area: Mathematics

Clark Glymour (1980) presented a particular problem for Bayesians. Suppose a hypothesis h is proposed which turns out to explain data e that is already known. Can h ever be confirmed by e? Intuitively, the answer is that the hypothesis can be confirmed by the already-existing data; in not a few examples, such hypotheses are taken to be supported by such data. One very often mentioned example is general relativity, with its allegedly strong support from the data on the anomalous precession of Mercury's perihelion; the data was already fifty years old when the field equations of general relativity were first obtained by Einstein. On the other hand, since e is already known, its probability is one, and it is then hard to see how, on the Bayesian account, conditionalizing on e could in principle raise the probability of h at all.
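The difficulty can be put in a single line. In this standard reconstruction (not verbatim from the essay), P is the agent's probability function; since e is already known we have P(e) = 1, and hence also P(e | h) = 1, so Bayes's theorem gives

$$
P(h \mid e) \;=\; \frac{P(e \mid h)\, P(h)}{P(e)} \;=\; \frac{1 \cdot P(h)}{1} \;=\; P(h).
$$

Conditionalizing on old evidence leaves the probability of h unchanged, so e appears to confirm nothing.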

While Glymour's objection may seem to have been easily answered, it is not the only one brought against relativizing to K − {e} the probabilities used in assessing support. Chihara (1987) raised several objections. His first is that if K − {e} merely signifies the set-theoretic subtraction of e from K, and K is, as expected, a deductively closed set of sentences, then K − {e} will not be well defined, for there will remain other sentences in K from which e follows; nor is there any unambiguous directive concerning which of these to remove. For if a and b jointly entail e, then there is a choice whether to remove a, or b, or both, and that selection will be arbitrary.

The second doubt is that if K is deductively closed and e is simply erased from K, then all the consequences of e by itself, and of e together with the remainder of K, will remain. If that is all that can be said, we must conclude that in such cases the probabilities in the support function are relativized to an indeterminate state of background information, with the outcome that the support e gives to any proposed hypothesis is also indeterminate. There is, moreover, a very influential and familiar objection to regarding K as an expression of the background information. It is that a scientific theory can be axiomatized in different ways, and removing the same sentence from two equivalent axiomatizations may give rise to different consequence classes.

That is, to non-equivalent theories. A simple example is the pair {a, b} and {a, a → b}, where a and b are logically independent sentences. a and a → b are likewise independent, and the two axiom pairs generate the same set of consequences; but if we remove a from both, the resulting consequence sets are distinct: b, for example, is in one and not the other.

Consider two cases. (i) A subject predicts the outcomes of 100 tosses of a coin in advance, and the first 99 tosses all come out as predicted; this gives us a very high level of confidence in the prediction of the result of the 100th toss. (ii) Another person predicts the same result of the 100th toss as in (i), but only after having learned the outcomes of the first 99 tosses. Let us suppose that roughly 60% of the outcomes of the 99 tosses, in this case as in (i), are heads, and that the predicted outcome of the 100th toss is a head.
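The axiomatization example above can be checked mechanically. The following is a minimal sketch, not the essay's own formalism: it brute-forces propositional entailment over the two atoms a and b, restricted to a small fixed list of candidate formulas, and confirms that {a, b} and {a, a → b} have identical consequence sets while their remainders after deleting a do not.

```python
from itertools import product

# Candidate formulas as truth functions of the atoms a, b.
CANDIDATES = {
    "a":      lambda a, b: a,
    "b":      lambda a, b: b,
    "a -> b": lambda a, b: (not a) or b,
    "a & b":  lambda a, b: a and b,
}

def consequences(axioms):
    """Candidate formulas true in every model of the axiom set."""
    fns = [CANDIDATES[x] for x in axioms]
    models = [(a, b) for a, b in product([True, False], repeat=2)
              if all(f(a, b) for f in fns)]
    return {name for name, f in CANDIDATES.items()
            if all(f(a, b) for a, b in models)}

# Equivalent axiomatizations: identical consequence sets.
print(consequences(["a", "b"]) == consequences(["a", "a -> b"]))  # True

# Delete a from each: the remainders are no longer equivalent.
print("b" in consequences(["b"]))        # True
print("b" in consequences(["a -> b"]))   # False
```

As the last two lines show, b survives the deletion in one axiomatization but not the other, which is exactly the indeterminacy Chihara complains of.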

Our intuitive degree of confidence in this prediction of the 100th toss, measured by the probability we assign that outcome in the light of all the data, is here probably around 60%, whereas in (i) it is practically one. Now for a Bayesian analysis of these judgments of support; first, some abbreviations: let e be the outcomes of the first 99 tosses and h the prediction that the 100th toss lands heads. Now for (ii). What is P(h | e) here? Well, the background data in this case includes the information that e is true, and also that the subject predicted h after being apprised of e's truth. Let us look, as in (i), at P(e | h) and P(e), where, in the light of the analysis in Section 3, we relativize the probabilities to the background minus e. Relative to this trimmed background, there is no more reason to believe in the truth of e given h than otherwise, so we set P(e | h) equal to P(e).
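The Bayes-theorem step just described can be sketched numerically. The specific figures below are illustrative assumptions of my own, not the essay's: the prior for h is taken to be the observed relative frequency of heads in the 99 recorded tosses, and the exact value of P(e) does not matter because it cancels.

```python
# A hedged numerical sketch of the last step: when P(e | h) = P(e),
# conditionalizing on e leaves the probability of h where it was.

def posterior(prior_h, p_e_given_h, p_e):
    """P(h | e) by Bayes's theorem."""
    return p_e_given_h * prior_h / p_e

p_e = 0.5 ** 99   # illustrative; any positive value cancels below
prior_h = 0.6     # ~60% of the 99 recorded tosses were heads

# Setting P(e | h) = P(e), as the essay does for case (ii):
print(posterior(prior_h, p_e, p_e))   # 0.6
```

The posterior equals the prior, matching the intuitive verdict of roughly 60% confidence in case (ii).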
