I, by contrast, propose that the first thing to be taken into account should be the severity of tests… And I hold that what ultimately decides the fate of a theory is the result of a test, i.e. an agreement about basic statements… for me the choice is decisively influenced by the application of the theory and the acceptance of the basic statements in connection with this application…

This stands in opposition to preferring the simpler theory on aesthetic grounds. More importantly, he suggests we agree on the basic statements, not on universals.

He draws a long analogy to trial by jury:

The verdict is reached in accordance with a procedure which is governed by rules. These rules are based on certain fundamental principles which are chiefly, if not solely, designed to result in the discovery of objective truth. They sometimes leave room not only for subjective convictions but even for subjective bias.

The ideal these days, I guess, is that everyone can play juror if data are made available. Of course, taking data as basic (or near-basic) statements requires a decision.

The empirical basis of objective science has thus nothing ‘absolute’ about it. Science does not rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp.

I see no reason not to believe this. The question is, then, to what extent can the theories built upon the swamp be objective — in particular, when most measurements have an associated error? We need to get into Popper’s treatment of probability before we can deal with this question.
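A toy sketch of the problem, entirely my own construction and not Popper's: when a prediction can only be compared with noisy measurements, the decision that a basic statement "disagrees" with theory already rests on a convention, such as the arbitrary z-threshold below.

```python
# Toy sketch (my own, not Popper's): deciding whether a noisy measurement
# "refutes" a point prediction requires a conventional threshold -- the
# acceptance of the basic statement is itself a decision.
def refutes(predicted, measured, std_error, z_threshold=3.0):
    """Return True if the measurement disagrees with the prediction
    by more than z_threshold standard errors."""
    return abs(measured - predicted) > z_threshold * std_error

# A prediction of 9.81 with a measurement of 9.79 +/- 0.05 survives:
print(refutes(9.81, 9.79, 0.05))   # False: within 3 standard errors
# The same measurement with a ten-times-smaller error bar refutes it:
print(refutes(9.81, 9.79, 0.005))  # True
```

Change the threshold and the same data yield a different verdict: the "swamp" goes all the way down to the individual measurement.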

[The scientist’s] aim is to find explanatory theories (if possible, true explanatory theories); that is to say, theories which describe certain structural properties of the world, and which permit us to deduce, with the help of initial conditions, the effects to be explained.

“Initial conditions” are singular statements that apply to a specific event in question. Combining these with universal laws produces predictions. Popper doesn’t require that every event can be deductively predicted from universal laws. But science has to search for such laws that causally explain events. Popper contends that while scientific laws are not verifiable, they are falsifiable.

One angle from which the primacy of falsification might be challenged is instrumentalism. Berkeley suggested abstract theories are instruments for the prediction of observable phenomena, and not genuine assertions about the world. The difference is that between “all models are wrong” and “all models are falsifiable”.

There is no sharp dividing line between an ’empirical language’ and a ‘theoretical language’: we are theorizing all the time, even when we make the most trivial singular statement.

We are always using models, so we’re always wrong. Personally, I can live with this. Under instrumentalism, the crucial question becomes “how wrong”. As long as measurements are taken to be real features of the world, the answer to this can be used in falsificationism.

But what if measurements are dependent on assumptions? This is an implication of conventionalism. Duhem held that universal laws are merely human conventions. Since measurements depend on these laws, a conventionalist might argue that theoretical systems are not only unverifiable but also unfalsifiable. Popper makes a value judgement against conventionalism, not because it’s demonstrably wrong but because it allows explaining away, rendering it useless for science. He quotes Joseph Black:

A nice adaptation of conditions will make almost any hypothesis agree with the phenomena. This will please the imagination but does not advance our knowledge.

Statistics makes such adaptation even easier: the phenomena, we can always say, were merely improbable. The rise of probabilistic models makes it all the more valuable to guard against ad hoc adaptations.
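To illustrate (my own contrived example, with made-up numbers): under a probabilistic model, retroactively inflating the assumed error term can rescue almost any observation from counting as a refutation.

```python
# Toy illustration (my construction): an ad hoc widening of a model's
# assumed error term can make an apparently refuting observation
# compatible with the theory after all.
from math import erf, sqrt

def two_sided_p(observed, mean, sd):
    """Two-sided normal tail probability of a deviation at least this
    large, using the standard normal CDF Phi(z) = (1 + erf(z/sqrt(2)))/2."""
    z = abs(observed - mean) / sd
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# An observation 4 standard deviations from the prediction looks fatal...
print(two_sided_p(14.0, 10.0, 1.0) < 0.05)   # True: p is tiny
# ...until the theorist retroactively quadruples the error term.
print(two_sided_p(14.0, 10.0, 4.0) < 0.05)   # False: p is about 0.32
```

Nothing in the data changed; only the "nice adaptation of conditions" did, which is exactly the move Popper's methodological rules are meant to forbid.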

I’m finding important contrasts between The Logic of Scientific Discovery and my fourth-hand preconceptions of the book. Popper differentiates between four kinds of tests:

“the logical comparison of the conclusions among themselves, by which the internal consistency of the system is tested”

“the investigation of the logical form of the theory, with the object of determining whether it has the character of an empirical or scientific theory”

“the comparison with other theories, chiefly with the aim of determining whether the theory would constitute a scientific advance should it survive our various tests”

“the testing of the theory by way of empirical applications of the conclusions which can be derived from it”

Statistics is largely concerned with the last of these, and so it should be. But it’s worth reminding ourselves and JPSP editors that the first three kinds of tests exist and are worth doing.

The demarcation problem — “finding a criterion which would enable us to distinguish between the empirical sciences on the one hand, and mathematics and logic as well as ‘metaphysical’ systems on the other” — is something I think about a lot. I hadn’t previously connected this to the induction problem, and will have to think about whether accepting a convention for demarcation lets us build science without induction.

Popper says that scientific statements are objective in the sense that they can be criticised “inter-subjectively”. In practice this seems to mean that other scientists can test the statements. This means “there can be no ultimate statements in science”, which I am satisfied with.