Do you think objectivity is attainable in science? Is it 'asymptotically attainable' (a phrase I just made up on the spot), by which I mean we can get ever closer to it as methods advance, just not 100%? Is it even desirable?

The scientific method, as we all know very well from school, starts from observation. However, Kuhn famously stated that, so long as we have committed ourselves to what he calls a scientific paradigm, our observations must by nature be 'theory-laden'. For example, astronomers in the tradition of Ptolemy saw the sun rising from the horizon, while Copernicans saw the sun appear as the earth moved (http://plato.stanford.edu...). I can certainly relate to that. One of my professors just warned that while she wanted to keep the class theory-neutral, i.e. as a description of the facts as-is without committing to any major theory, it is impossible to do this 100%: the very definitions of concepts like 'subject' and 'noun' necessarily have theoretical undertones.

If it is not possible to describe observations with a theory-neutral language, is theory-laden observation a good thing? Couching observations in theoretical language makes it simpler for scientists to communicate, to set up hypotheses and design experiments, and so on. However, in doing so, we do not allow the possibility of thinking outside the paradigm.

After observation comes the formulation of a hypothesis. The process of stating a hypothesis is almost certainly subjective, but this does not necessarily imply that the end result of the scientific method will be subjective. After making a hypothesis, we gather data and determine whether the data support it. Perhaps, if we keep to objective standards of testing, then we will get objective results. But can we?

It is not clear that the process of gathering data is objective. The measures we use to gauge our results are often subjective. This is usually less of a problem in physics, but in the social sciences, there can be a wide range of indicators, each of which makes underlying normative assumptions.

How about the process of drawing conclusions from the data? In theory, this sounds easy. If our data fail to match the prediction from our hypothesis, then the hypothesis is bunk, and we must revise it. Otherwise, the hypothesis is confirmed and our belief in the hypothesis increases (though this doesn't necessarily mean the hypothesis is correct). This is why, according to Popper, the difference between science and pseudoscience is based on falsifiability and not verifiability.

But can we really make such judgements objectively? Consider the two main paradigms in statistical inference, Bayesian and frequentist inference. Their difference is captured in this strip: https://xkcd.com....

The first one, probably first used in science by Laplace (though in an informal manner), is Bayesian inference. This approach is explicitly subjective in that it assumes a prior probability, which, by Bayes' theorem, is combined with the information supplied by the data to form a posterior probability. In the strip, the Bayesian statistician's prior probability of the sun going nova is maybe 0.0001, so the presence of new evidence only slightly budges the probability. The subjectivity of this approach lies mainly in the prior, which is the scientist's subjective view of the probability of a hypothesis. It is probably based on the scientist's own biases, so how can this be objective?
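To make the arithmetic concrete, here is a minimal sketch of that update, assuming the strip's set-up (the detector lies only when two dice both come up six) and a made-up prior of 0.0001:

```python
# Toy Bayesian update for the sun-nova example (all numbers illustrative).
prior_nova = 1e-4               # hypothetical subjective prior
p_lie = 1 / 36                  # detector lies only on double sixes

# Likelihood of the detector answering "yes, it exploded":
p_yes_given_nova = 1 - p_lie        # truthful "yes"
p_yes_given_no_nova = p_lie         # lying "yes"

# Bayes' theorem: P(nova | yes) = P(yes | nova) * P(nova) / P(yes)
p_yes = p_yes_given_nova * prior_nova + p_yes_given_no_nova * (1 - prior_nova)
posterior_nova = p_yes_given_nova * prior_nova / p_yes

print(round(posterior_nova, 4))  # -> 0.0035: nudged up from 0.0001, still tiny
```

The evidence multiplies the odds by 35, but against so small a prior the posterior barely moves, which is exactly the Bayesian's point in the strip.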

Frequentism offers an attractive alternative to Bayesianism, and has been the dominant school of statistical inference for some time. In general, what frequentists do in the Neyman-Pearson paradigm is this: they have a null hypothesis and an alternative hypothesis. They obtain a result, and determine how surprising the result would be if the null hypothesis were true. If it's really surprising, then the null hypothesis is rejected. Yet it is often said that while Bayesians are honest about their subjective judgements, frequentists simply call them 'assumptions' and give their work a guise of objectivity. Determining 'how surprising' the result is forces us to make assumptions about the entire population (e.g. it's normally distributed), to which we have no access. A funnier feature of frequentism, which Bayesians criticise, is that your conclusion depends not just on the data, but also on the design of your experiment, which depends on your hypotheses - whence the source of subjectivity in inference.
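As a sketch of the procedure just described (the data, parameters and the normality assumption are all hypothetical), a simple two-sided z-test might look like:

```python
# Minimal frequentist test in the Neyman-Pearson mould (toy numbers).
from statistics import NormalDist

sample_mean, mu0, sigma, n = 103.2, 100.0, 15.0, 50  # hypothetical data
z = (sample_mean - mu0) / (sigma / n ** 0.5)

# 'How surprising would this be if H0 were true?' -- the p-value,
# computed under the assumed normal population.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

alpha = 0.05  # conventional, and arguably subjective, cut-off
print(p_value < alpha)  # -> False here: not surprising enough to reject H0
```

Note that the assumed population shape and the choice of alpha both enter before any data do; those are the 'assumptions' the Bayesian critique targets.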

Since both major paradigms of statistical inference are subjective in some way, can we really make conclusions about the correctness of hypotheses in an objective manner?

A related problem is the underdetermination of theory by evidence. Consider the classical case of correlation. If we do find a significant linear correlation between two variables, we usually cannot readily tell from the data whether we can conclude causation, unless we are looking at a randomised experiment. The conclusion that can be drawn, it seems, is necessarily subject to judgement calls. Subjective judgements aren't always reliable. Even the frequentist Fisher, who stressed the objectivity of his methods, made the mistake of denying that smoking caused lung cancer despite strong correlations - likely because he personally enjoyed smoking.
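The point about correlation without causation is easy to demonstrate by simulation. In this hypothetical sketch, a hidden confounder drives both variables, so they correlate strongly even though neither causes the other:

```python
# Illustrative simulation: a confounder produces a strong correlation
# between two variables that have no causal link between them.
import random

random.seed(0)
confounder = [random.gauss(0, 1) for _ in range(10_000)]
x = [c + random.gauss(0, 0.5) for c in confounder]   # caused by confounder
y = [c + random.gauss(0, 0.5) for c in confounder]   # also caused by confounder

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / (va * vb) ** 0.5

# x and y correlate strongly (about 0.8 here), yet neither causes the other.
print(corr(x, y) > 0.7)  # -> True
```

From the data alone, this sample is indistinguishable from one where x really does cause y; only the (subjective, or design-based) causal story separates them.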

So, can we really achieve objectivity in science? Or is the Bayesian picture of objectivity from subjective beginnings - the picture of all scientists eventually converging to the same objective truth - correct? Or, perhaps, is it impossible for us to ever find scientific 'truth' in an absolute sense?

I think it is well established that the only reason aliens come to earth is to slice up cows and examine inside peoples' bottoms. Unless you are a cow or suffer haemerrhoids I don't think there is anything to worry about from aliens. - keithprosser

At 9/3/2016 5:50:13 PM, Diqiucun_Cunmin wrote:Do you think objectivity is attainable in science? Is it 'asymptotically attainable' (a phrase I just made up on the spot), by which I mean we can get ever closer to it as methods advance, just not 100%? Is it even desirable?

This is a massive subject! Let me pick on one aspect which might or might not be what other people would start with.

The idea that describing facts without reference to any underlying theory is impossible seems undeniable to me. Near the bottom of the edifice of science is the theory that matter is made of 'atoms' that interact with each other via 'forces' and 'fields' such as gravity and electro-magnetism. (We can go deeper into atoms being made of quarks, but for simplicity let's pretend atoms and the forces between them are fundamental - it doesn't matter for now).

I want to ask if the 'fact' that the universe is made of 'atoms', 'forces' and 'fields' is not an objective fact at all but a 'theory' (or better a 'model') that is useful for visualisation and calculation but is only one of many possible consistent, non-isomorphic models we could use equally well. That is, could science have taken a different tack early on and built up an equivalent body of knowledge on a different set of fundamental concepts?

If so then it could have been that scientists today don't have the concept of an atom or the concept of a force, yet with their completely different set of concepts they developed mobile phones and landed probes on Mars.

I am tempted to say "No". Although atoms and forces are indeed aspects of our currently favoured theory/model of the universe, they are not only theoretical entities. I think there are things very like atoms and forces 'out there'. No doubt we do not understand atoms and forces 100%, but I think we know enough to say that whilst we are certain to be wrong about some details, we can't be so wrong that there is nothing remotely atom-like nor anything remotely force-like in the universe. Reality and our model are 'isomorphic'.

At 9/3/2016 5:50:13 PM, Diqiucun_Cunmin wrote:Do you think objectivity is attainable in science? Is it 'asymptotically attainable' (a phrase I just made up on the spot)

I like that one.

by which I mean we can get ever closer to it as methods advance, just not 100%? Is it even desirable?

The scientific method, as we all know very well from school, starts from observation. However, Kuhn famously stated that, so long as we have committed ourselves to what he calls a scientific paradigm, our observations must by nature be 'theory-laden'.

Yes, but ironically, unlike Kuhn, I take that to be an argument for scientific realism and with that, scientific objectivity.

For example, astronomers in the tradition of Ptolemy saw the sun rising from the horizon, while Copernicans saw the sun appear as the earth moved (http://plato.stanford.edu...). I can certainly relate to that. One of my professors just warned that while she wanted to keep the class theory-neutral, i.e. as a description of the facts as-is without committing to any major theory, it is impossible to do this 100%: the very definitions of concepts like 'subject' and 'noun' necessarily have theoretical undertones.

If it is not possible to describe observations with a theory-neutral language, is theory-laden observation a good thing? Couching observations in theoretical language makes it simpler for scientists to communicate, to set up hypotheses and design experiments, and so on. However, in doing so, we do not allow the possibility of thinking outside the paradigm.

After observation comes the formulation of a hypothesis. The process of stating a hypothesis is almost certainly subjective, but this does not necessarily imply that the end result of the scientific method will be subjective. After making a hypothesis, we gather data and determine whether the data support it. Perhaps, if we keep to objective standards of testing, then we will get objective results. But can we?

It is not clear that the process of gathering data is objective. The measures we use to gauge our results are often subjective. This is usually less of a problem in physics, but in the social sciences, there can be a wide range of indicators, each of which makes underlying normative assumptions.

I agree, which is why I am a bit sceptical of the social sciences.

How about the process of drawing conclusions from the data? In theory, this sounds easy. If our data fail to match the prediction from our hypothesis, then the hypothesis is bunk, and we must revise it. Otherwise, the hypothesis is confirmed and our belief in the hypothesis increases (though this doesn't necessarily mean the hypothesis is correct). This is why, according to Popper, the difference between science and pseudoscience is based on falsifiability and not verifiability.

But can we really make such judgements objectively? Consider the two main paradigms in statistical inference, Bayesian and frequentist inference. Their difference is captured in this strip: https://xkcd.com....

The first one, probably first used in science by Laplace (though in an informal manner), is Bayesian inference. This approach is explicitly subjective in that it assumes a prior probability, which, by Bayes' theorem, is combined with the information supplied by the data to form a posterior probability. In the strip, the Bayesian statistician's prior probability of the sun going nova is maybe 0.0001, so the presence of new evidence only slightly budges the probability. The subjectivity of this approach lies mainly in the prior, which is the scientist's subjective view of the probability of a hypothesis. It is probably based on the scientist's own biases, so how can this be objective?

Frequentism offers an attractive alternative to Bayesianism, and has been the dominant school of statistical inference for some time. In general, what frequentists do in the Neyman-Pearson paradigm is this: they have a null hypothesis and an alternative hypothesis. They obtain a result, and determine how surprising the result would be if the null hypothesis were true. If it's really surprising, then the null hypothesis is rejected. Yet it is often said that while Bayesians are honest about their subjective judgements, frequentists simply call them 'assumptions' and give their work a guise of objectivity. Determining 'how surprising' the result is forces us to make assumptions about the entire population (e.g. it's normally distributed), to which we have no access. A funnier feature of frequentism, which Bayesians criticise, is that your conclusion depends not just on the data, but also on the design of your experiment, which depends on your hypotheses - whence the source of subjectivity in inference.

Since both major paradigms of statistical inference are subjective in some way, can we really make conclusions about the correctness of hypotheses in an objective manner?

A related problem is the underdetermination of theory by evidence. Consider the classical case of correlation. If we do find a significant linear correlation between two variables, we usually cannot readily tell from the data whether we can conclude causation, unless we are looking at a randomised experiment. The conclusion that can be drawn, it seems, is necessarily subject to judgement calls. Subjective judgements aren't always reliable. Even the frequentist Fisher, who stressed the objectivity of his methods, made the mistake of denying that smoking caused lung cancer despite strong correlations - likely because he personally enjoyed smoking.

So, can we really achieve objectivity in science? Or is the Bayesian picture of objectivity from subjective beginnings - the picture of all scientists eventually converging to the same objective truth - correct? Or, perhaps, is it impossible for us to ever find scientific 'truth' in an absolute sense?

It is important to note that the objectivity of science does not stem from every scientist or result being objective. Rather, it is the self-correction mechanisms employed that make science as a whole converge to objectivity and truth.

At 9/3/2016 5:50:13 PM, Diqiucun_Cunmin wrote:But can we really make such judgements objectively? Consider the two main paradigms in statistical inference, Bayesian and frequentist inference. Their difference is captured in this strip: https://xkcd.com....

Statistical reasoning is inherently subjective to the extent that truly objective estimation only permits probabilities of 0 and 100 percent. Anything in between implies localized ignorance and therefore non-generality (i.e., non-objectivity).

To the extent that we have any confidence at all in our original low estimation or in fact any estimation that deviates from total ignorance, some amount of Bayesian inference is called for. This is because if we have some idea as to the probability of the star exploding, we have some idea what "truth" and "falsehood" look like, and therefore some idea what telling the truth and lying look like. Under that assumption, the probability that the machine is lying goes way up when it gives an answer of "yes, the star exploded." The answer is consistent with the machine telling the truth and the machine lying, but it's more consistent with it lying. A lot more in fact. According to our original estimation, the machine is probably only telling the truth if it gives an answer of "no, the star has not exploded." The only way in which the "95 percent telling the truth 5 percent lying" description remains unchanged in light of the answer it gives is if we possess absolutely no background statistical reasoning, in which case we can hardly know whether the machine is a good predictor since we don't know what methods make a prediction good or not. Simply defining the machine to have the predictive powers arbitrarily ascribed to it does not get around this problem, because thought experiments only have relevance to reality insofar as the conclusions reached do in fact follow within the scenario. Blindly following the machine's proclamations is simply not justified.
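The poster's reasoning can be made quantitative; a toy calculation, assuming a small (hypothetical) prior on the star having exploded:

```python
# If the nova is antecedently very unlikely, a "yes" answer is far more
# consistent with the machine lying than with it telling the truth.
prior_nova = 1e-4          # hypothetical prior probability of the nova
p_lie = 1 / 36             # machine lies only on double sixes

# P(machine is lying | it says "yes, the star exploded"), by Bayes' theorem
p_yes = (1 - p_lie) * prior_nova + p_lie * (1 - prior_nova)
p_lying_given_yes = p_lie * (1 - prior_nova) / p_yes

print(p_lying_given_yes > 0.99)  # -> True: the "yes" is almost surely a lie
```

So the "5 percent lying" figure describes the machine's mechanism, not the credibility of any particular answer; conditioning on the answer changes the picture drastically.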

Interestingly enough, there's really only one way to go as far as the bet is concerned. Since winning the bet only counts as a positive outcome if you're alive to claim the winnings, betting on the sun exploding can gain you nothing, as you're only around to claim the winnings in the event that you lost. :)

-- And the light shineth in darkness; and the darkness comprehended it not.

"I believe that my powers of mind are surely such that I would have become in a
certain sense a resolver of all problems. I do not believe that I could have remained in
error anywhere for long. I believe that I would have earned the name of Redeemer,
because I had the nature of a Redeemer. "

Sorry for the late replies everyone; I'll try to respond to all the responses to my threads within the next couple of days, because I won't have much time afterwards...

At 9/3/2016 9:42:27 PM, PetersSmith wrote:

At 9/3/2016 5:50:13 PM, Diqiucun_Cunmin wrote:Do you think objectivity is attainable in science? Is it 'asymptotically attainable' (a phrase I just made up on the spot), by which I mean we can get ever closer to it as methods advance, just not 100%? Is it even desirable?

Your reply was simple, but it seems to reveal two different problems about objectivity in science. The first one is whether science can be unaffected by values irrelevant to science. Falsifiability, economy and consistency with data are values almost universally agreed to be relevant to science, but cultural and political values, for example, are not. Totalitarian regimes have historically judged theories of science based on their conformity with their official ideologies, but are scientists in freer states, free from intervention, also affected by their values when they decide on the plausibility of different theories? I do think so - this is probably the case in many of the social sciences. For example, a tabula rasa narrative of human nature appeals to progressives, and this may be one of the reasons behind its popularity.

The second sentence seems to point to something else - whether we are able to perceive the world objectively. In general, modern scientists don't use their senses to observe the world directly; instead, they utilise instruments as an intermediary, since they are not prone to human perceptual biases and are therefore more accurate. But the problem becomes: How do we know that our instruments are accurate? Is it based also on our human perceptions? I'm not sufficiently familiar with this aspect of science, so I'll leave it here without making comments of my own :P

At 9/3/2016 9:50:52 PM, keithprosser wrote:This is a massive subject! Let me pick on one aspect which might or might not be what other people would start with.

The idea that describing facts without reference to any underlying theory is impossible seems undeniable to me. Near the bottom of the edifice of science is the theory that matter is made of 'atoms' that interact with each other via 'forces' and 'fields' such as gravity and electro-magnetism. (We can go deeper into atoms being made of quarks, but for simplicity let's pretend atoms and the forces between them are fundamental - it doesn't matter for now).

I agree, and the same thing happens in high-level sciences like linguistics as well - it's hard if not impossible to describe a language's grammar without referring to noun phrases or objects, even though these are very much theoretical constructs.

I want to ask if the 'fact' that the universe is made of 'atoms', 'forces' and 'fields' is not an objective fact at all but a 'theory' (or better a 'model') that is useful for visualisation and calculation but is only one of many possible consistent, non-isomorphic models we could use equally well. That is, could science have taken a different tack early on and built up an equivalent body of knowledge on a different set of fundamental concepts?

If so then it could have been that scientists today don't have the concept of an atom or the concept of a force, yet with their completely different set of concepts they developed mobile phones and landed probes on Mars.

I am tempted to say "No". Although atoms and forces are indeed aspects of our currently favoured theory/model of the universe, they are not only theoretical entities. I think there are things very like atoms and forces 'out there'. No doubt we do not understand atoms and forces 100%, but I think we know enough to say that whilst we are certain to be wrong about some details, we can't be so wrong that there is nothing remotely atom-like nor anything remotely force-like in the universe. Reality and our model are 'isomorphic'.

I'm not sure about scientific realism in general, but as applied to the subject I study (linguistics) I don't think it applies. Rather, I believe phonemes, subjects, prepositional phrases, inflectional affixes, etc. emerge from lower-level cognitive capacities of our brains; these words are useful in describing the high-level behaviour of our minds, but do not really exist. Only neurons and synapses have actual existence. In linguistics, I have an instrumentalist view of science.

I'm curious as to your position. Why do you think that is the case for physics? (I'm not familiar with physics btw, so I don't have an opinion on this myself.) The caloric theory of heat was very successful for a while, but with the benefit of hindsight we now know it isn't 'real', having found counter-examples to it. How do we know that, just maybe, modern physics isn't in the same position as the caloric theory?

The scientific method, as we all know very well from school, starts from observation. However, Kuhn famously stated that, so long as we have committed ourselves to what he calls a scientific paradigm, our observations must by nature be 'theory-laden'.

Yes, but ironically, unlike Kuhn, I take that to be an argument for scientific realism and with that, scientific objectivity.

That sounds like an interesting position. Why is that? (My guess is that if observations can be made in theoretical terms, then this means they are good at describing reality - and hence isomorphic with reality. But can we not do the same with, say, Marxist historiography?)

It is not clear that the process of gathering data is objective. The measures we use to gauge our results are often subjective. This is usually less of a problem in physics, but in the social sciences, there can be a wide range of indicators, each of which makes underlying normative assumptions.

I agree, which is why I am a bit sceptical of the social sciences.

Actually, the Stanford Encyclopedia of Philosophy does present some arguments about the subjectivity of measurement, even for the physical sciences (http://plato.stanford.edu...). I'm not knowledgeable enough to form an opinion either way, though. What do you think of these arguments?

So, can we really achieve objectivity in science? Or is the Bayesian picture of objectivity from subjective beginnings - the picture of all scientists eventually converging to the same objective truth - correct? Or, perhaps, is it impossible for us to ever find scientific 'truth' in an absolute sense?

It is important to note that the objectivity of science does not stem from every scientist or result being objective. Rather, it is the self-correction mechanisms employed that make science as a whole converge to objectivity and truth.

That's a reasonable position, but I think it is not always obvious that the intersubjective standards of evaluation are necessarily objective. In the biomedical sciences, it certainly isn't. Even ignoring the notorious fact that publication bias keeps falsifications out of print to protect the interests of pharmaceutical transnationals, the standards in such fields are tied to the consequences of incorrectly rejecting or accepting a hypothesis. That may be more problematic for applied fields than pure ones, however.

Nevertheless, can we ever be sure that even the standards that we, as a community in the pure sciences, use in confirming or falsifying hypotheses are objective? Examples of the Duhem problem seem to cast some doubt. When the orbit of a planet deviated from Newton's predictions, the default position was not to refute Newton's theory but to challenge some other underlying assumption. In Uranus' case, this was not only successful but also led to the discovery of Neptune. But for Mercury, before Einstein came along, everyone was confident about the existence of Vulcan as well - which even led to many bogus observations of the non-existent celestial body. On the one hand, the defence of an empirically successful theory seems to be an objective consideration, but on the other, was it not the subjectivity of the scientific community that led it to accept the spurious evidence for Vulcan?

At 9/3/2016 5:50:13 PM, Diqiucun_Cunmin wrote:But can we really make such judgements objectively? Consider the two main paradigms in statistical inference, Bayesian and frequentist inference. Their difference is captured in this strip: https://xkcd.com....

Statistical reasoning is inherently subjective to the extent that truly objective estimation only permits probabilities of 0 and 100 percent. Anything in between implies localized ignorance and therefore non-generality (i.e., non-objectivity).

But that sounds circular. To assert that probabilistic estimates imply ignorance is to presuppose the Bayesian interpretation of probability, rather than the frequentist interpretation. A frequentist would argue that probability only represents the long-run relative frequency of a certain event, which can be justified by the law of large numbers. In this case, the probability 1/36 represents the number of times the machine would lie divided by the number of times the machine makes a prediction in the long run, which doesn't make any reference to subjectivity, merely to the real world.
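That long-run reading is easy to illustrate by simulation (the dice mechanism is taken from the strip; the trial count and seed are arbitrary):

```python
# The frequentist's 1/36: the long-run relative frequency of double sixes.
import random

random.seed(1)
trials = 200_000
lies = 0
for _ in range(trials):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    if d1 == 6 and d2 == 6:
        lies += 1

long_run_frequency = lies / trials
# By the law of large numbers, this converges to 1/36 ~ 0.0278.
print(abs(long_run_frequency - 1 / 36) < 0.002)  # -> True
```

No prior or degree of belief appears anywhere: the number 1/36 is cashed out purely as a property of the repeated physical process.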

To the extent that we have any confidence at all in our original low estimation or in fact any estimation that deviates from total ignorance, some amount of Bayesian inference is called for. This is because if we have some idea as to the probability of the star exploding, we have some idea what "truth" and "falsehood" look like, and therefore some idea what telling the truth and lying look like. Under that assumption, the probability that the machine is lying goes way up when it gives an answer of "yes, the star exploded." The answer is consistent with the machine telling the truth and the machine lying, but it's more consistent with it lying. A lot more in fact. According to our original estimation, the machine is probably only telling the truth if it gives an answer of "no, the star has not exploded." The only way in which the "95 percent telling the truth 5 percent lying" description remains unchanged in light of the answer it gives is if we possess absolutely no background statistical reasoning, in which case we can hardly know whether the machine is a good predictor since we don't know what methods make a prediction good or not. Simply defining the machine to have the predictive powers arbitrarily ascribed to it does not get around this problem, because thought experiments only have relevance to reality insofar as the conclusions reached do in fact follow within the scenario. Blindly following the machine's proclamations is simply not justified.

I think the strip was not intended as a thought experiment, but as an analogy of actual scientific research, and therefore was not meant to be taken literally. The machine's die-rolling mechanism, which makes it lie only when both dice come up six (probability 1/36), represents the sampling distributions (Z, t, chi^2, F, etc.) generated by ideal sampling methods. Obviously, it is rare in practice that the idealisations hold true, particularly in the social sciences; but they can be roughly true in, for example, randomised clinical trials. So we can assume that, apart from the die roll, the machine always detects correctly whether the sun has exploded.

This was what the frequentist probably had in mind:
- The sample space is simply the set {E, NE}, where E is the event that the machine proclaims the sun to have exploded, and NE is the event that the machine says it hasn't.
- The null hypothesis H0 is that the sun has not gone nova.
- The alternative hypothesis H1 is that the sun is indeed a goner.
- The 'test statistic' is E.
- The probability of a type I error (rejecting the null hypothesis when it's true) is fixed at 0.05, and we need to minimise the probability of a type II error (accepting the null hypothesis when it's false).

The likelihood functions are:
L0 = 1/36 when the result is E, 35/36 when the result is NE
L1 = 35/36 when the result is E, 1/36 when the result is NE

Obviously, the likelihood ratio L0/L1 is 1/35 when the result is E, and 35 when the result is NE. Now, we want to set the probability that we reject H0 when H0 is true to 0.05, i.e. P(E|H0) = 0.05. Obviously we can't set that exactly, since this distribution is categorical. We can only approximate it by setting up the test this way:
- Believe everything the machine says.
where the probability of a type I error is 1/36 (and the probability of a type II error is also 1/36).
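As a sanity check on those error rates, here is a minimal simulation of the "believe everything the machine says" test, with the two-dice lying mechanism taken straight from the strip (the function name and trial count are just illustrative choices):

```python
import random

random.seed(0)

def detector_answer(sun_exploded):
    """One run of the detector: it lies iff both dice come up six (prob. 1/36)."""
    lying = random.randint(1, 6) == 6 and random.randint(1, 6) == 6
    if lying:
        return "NE" if sun_exploded else "E"
    return "E" if sun_exploded else "NE"

trials = 200_000

# Type I error: H0 true (no nova), but the machine says E, so we reject H0.
type1 = sum(detector_answer(False) == "E" for _ in range(trials)) / trials

# Type II error: H1 true (nova), but the machine says NE, so we retain H0.
type2 = sum(detector_answer(True) == "NE" for _ in range(trials)) / trials

print(type1, type2)  # both close to 1/36 ≈ 0.0278
```

Both empirical error rates come out near 1/36, comfortably under the nominal 0.05 level, which is why "believe the machine" is the best available approximation of the test.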

The point is that when frequentists deal with numeric parameters and sampling distributions instead of explosions and die-rolling explosion detectors, they utilise the same basic methodology I outlined above, and assume their sampling distributions are correct, just as we assumed the machine is always right about explosions. This usually gives them an appearance of objectivity, which is also displayed in the die example, since the subjective probability of the sun going nova is ignored.

In the context of my discussion in the OP, the salient point is that Bayesians may argue there is another type of subjectivity implicit in such analyses, in that the choice of hypotheses affects the conclusion - think of Z-statistics between 1.64 and 1.96, for example. Hypotheses are often formulated subjectively, so are we allowing our subjective biases to sneak in?

Interestingly enough, there's really only one way to go as far as the bet is concerned. Since winning the bet only counts as a positive outcome if you're alive to claim the winnings, betting on the sun exploding can gain you nothing, as you're only around to claim the winnings in the event that you lost. :)

That's an interesting twist on the Dutch book argument :P

I think it is well established that the only reason aliens come to earth is to slice up cows and examine inside people's bottoms. Unless you are a cow or suffer haemorrhoids I don't think there is anything to worry about from aliens. - keithprosser

At 9/3/2016 5:50:13 PM, Diqiucun_Cunmin wrote:Do you think objectivity is attainable in science? Is it 'asymptotically attainable' (a phrase I just made up on the spot), by which I mean we can get ever closer to it as methods advance, just not 100%? Is it even desirable?

The scientific method, as we all know very well from school, starts from observation. However, Kuhn famously stated that, so long as we have committed ourselves to what he calls a scientific paradigm, our observations must by nature be 'theory-laden'. For example, astronomers in the tradition of Ptolemy saw the sun rising from the horizon, while Copernicans saw the sun appear as the earth moved (http://plato.stanford.edu...). I can certainly relate to that. One of my professors just warned that while she wanted to keep the class theory-neutral, i.e. as a description of the facts as-is without committing to any major theory, it is impossible to do this 100%: the very definitions of concepts like 'subject' and 'noun' necessarily have theoretical undertones.

If it is not possible to describe observations with a theory-neutral language, is theory-laden observation a good thing? Couching observations in theoretical language makes it simpler for scientists to communicate, to set up hypotheses and design experiments, and so on. However, in doing so, we do not allow the possibility of thinking outside the paradigm.

After observation comes the formulation of a hypothesis. The process of stating a hypothesis is almost certainly subjective, but this does not necessarily imply that the end result of the scientific method will be subjective. After making a hypothesis, we gather data and determine whether the data support it. Perhaps, if we keep to objective standards of testing, then we will get objective results. But can we?

It is not clear that the process of gathering data is objective. The measures we use to gauge our results are often subjective. This is usually less of a problem in physics, but in the social sciences, there can be a wide range of indicators, each of which makes underlying normative assumptions.

How about the process of drawing conclusions from the data? In theory, this sounds easy. If our data fail to match the prediction from our hypothesis, then the hypothesis is bunk, and we must revise it. Otherwise, the hypothesis is confirmed and our belief in the hypothesis increases (though this doesn't necessarily mean the hypothesis is correct). This is why, according to Popper, the difference between science and pseudoscience is based on falsifiability and not verifiability.

But can we really make such judgements objectively? Consider the two main paradigms in statistical inference, Bayesian and frequentist inference. Their difference is captured in this strip: https://xkcd.com....

The first one, probably first used in science by Laplace (though in an informal manner), is Bayesian inference. This approach is explicitly subjective in that it assumes a prior probability, which, by Bayes' theorem, is combined with the information supplied by the data to form a posterior probability. In the strip, the Bayesian statistician's prior probability of the sun going nova is maybe 0.0001, so the presence of new evidence only slightly budges the probability. The subjectivity of this approach lies mainly in the prior, which is the scientist's subjective view of the probability of a hypothesis. It is probably based on the scientist's own biases, so how can this be objective?
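The Bayesian update is easy to write out directly with Bayes' theorem. The prior of 0.0001 is just the illustrative figure from the paragraph above, and 35/36 and 1/36 are the machine's truth-telling and lying probabilities from the strip:

```python
# Posterior probability that the sun has gone nova, given the machine says "yes".
prior_nova = 0.0001             # illustrative prior, not a measured quantity
p_yes_given_nova = 35 / 36      # machine tells the truth unless both dice are sixes
p_yes_given_no_nova = 1 / 36    # machine lies only on a double six

p_yes = (p_yes_given_nova * prior_nova
         + p_yes_given_no_nova * (1 - prior_nova))
posterior_nova = p_yes_given_nova * prior_nova / p_yes

print(posterior_nova)  # ≈ 0.0035: the evidence budges the prior only slightly
```

Even though the machine's answer is 35 times more likely under the nova hypothesis, the tiny prior keeps the posterior well below one percent, which is the Bayesian statistician's point in the strip.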

Frequentism offers an attractive alternative to Bayesianism, and has been the dominant school of statistical inference for some time. In general, what frequentists do in the Neyman-Pearson paradigm is this: they have a null hypothesis and an alternative hypothesis. They obtain a result, and determine how surprising the result would be if the null hypothesis were true. If it's really surprising, then the null hypothesis is rejected. Yet it is often said that while Bayesians are honest about their subjective judgements, frequentists simply call them 'assumptions' and give their work a guise of objectivity. Determining 'how surprising' the result is forces us to make assumptions about the entire population (e.g. that it's normally distributed), to which we have no access. A funnier feature of frequentism, which Bayesians criticise, is that your conclusion depends not just on the data, but also on the design of your experiment, which depends on your hypotheses - whence the source of subjectivity in inference.

Since both major paradigms of statistical inference are subjective in some way, can we really make conclusions about the correctness of hypotheses in an objective manner?

A related problem is the underdetermination of theory by evidence. Consider the classical case of correlation. If we do find a significant linear correlation between two variables, we usually cannot tell from the data alone whether we can conclude causation, unless we are looking at a randomised experiment. The conclusion that can be drawn, it seems, is necessarily subject to judgement calls. Subjective judgements aren't always reliable. Even the frequentist Fisher, who stressed the objectivity of his methods, made the mistake of denying that smoking caused lung cancer despite strong correlations - likely because he personally enjoyed smoking.
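The correlation-without-causation point can be made concrete with a small sketch in which a hypothetical confounder z drives two variables that have no causal link to each other (all names and parameters here are invented for illustration):

```python
import random

random.seed(42)

# z is an unobserved confounder; x and y each depend on z plus noise,
# but x has no causal effect on y and vice versa.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(corr(x, y))  # strongly positive (roughly 0.8) despite no causal link
```

From the (x, y) data alone, this is indistinguishable from a genuine causal relationship; only a randomised intervention on x, or knowledge of z, would reveal otherwise.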

So, can we really achieve objectivity in science? Or is the Bayesian picture of objectivity from subjective beginnings - the picture of all scientists eventually converging to the same objective truth - correct? Or, perhaps, is it impossible for us to ever find scientific 'truth' in an absolute sense?

All scientific theories are put together from the subjective sensory experiences of man. No one person's sensory experience is the same as another person's. They can only agree with (or believe) each other's stories of what those experiences were.

At 9/3/2016 5:50:13 PM, Diqiucun_Cunmin wrote:But can we really make such judgements objectively? Consider the two main paradigms in statistical inference, Bayesian and frequentist inference. Their difference is captured in this strip: https://xkcd.com....

Statistical reasoning is inherently subjective to the extent that truly objective estimation only permits probabilities of 0 and 100 percent. Anything in between implies localized ignorance and therefore non-generality (i.e., subjectivity).

But that sounds circular. To assert that probabilistic estimates imply ignorance is to presuppose the Bayesian interpretation of probability, rather than the frequentist interpretation. A frequentist would argue that probability only represents the long-run relative frequency of a certain event, which can be justified by the law of large numbers. In this case, the probability 1/36 represents the number of times the machine would lie divided by the number of times the machine makes a prediction in the long run, which doesn't make any reference to subjectivity, merely to the real world.

I agree that "over the course of many trials the machine has lied 1/36th of the time" is an objective statement. But to claim that any given prediction has a 1/36 probability of being a lie is to arrogate the status of human ignorance to that of an objective feature of the universe. It amounts to treating every prediction the same simply because that's all we as subjects are in a position to do.

To illustrate what I mean, let's say we're trying to estimate the odds that a six-sided die will come up on the number 4 when tossed. A statistician brought up in the frequentist tradition is assigned to the task, and as he repeatedly tosses the die and records the results, a probability of "1/6" begins to emerge. Satisfied with his result, he's confident that under conditions similar to those of his experiment, any die in midair has a 1/6 probability of coming to rest with 4 on top.

Another statistician comes along to test his results. He finds that, indeed, over the long run tossed dice have a 1/6 chance of coming up on the number 4. But he makes another discovery as well. He notices that, for whatever reason, when a tossed die starts on the number 4 in the thrower's hand it never ends up on the number 4, the same being true of all the other numbers. In light of this knowledge, the 1/6 probability can be broken down into two separate probabilities of which it is the weighted average (assuming the thrower is not manipulating the starting number or biased toward/against certain numbers). In other words, the probability of the die coming up 4 oscillates between 0 and higher-than-1/6 and gives the impression, to someone who has not yet perceived this phenomenon, that the probability is 1/6 every time, when in fact it is not.
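This two-level structure is easy to exhibit in a toy simulation. The "never lands on its starting face" mechanics are invented purely for illustration, not a claim about real dice:

```python
import random

random.seed(1)

# Invented mechanics: the die never lands on the face it started on;
# the other five faces are equally likely.
def toss(start):
    return random.choice([face for face in range(1, 7) if face != start])

n = 120_000

# First statistician's view: random starting face, record the outcome.
results = [toss(random.randint(1, 6)) for _ in range(n)]
marginal = results.count(4) / n            # long-run frequency: close to 1/6

# Second statistician's view: condition on the die starting on 4.
given_start_4 = [toss(4) for _ in range(n)]
conditional = given_start_4.count(4) / n   # given start = 4: exactly 0

print(marginal, conditional)
```

The marginal frequency is 1/6 (since 5/6 of tosses start elsewhere, each landing on 4 with probability 1/5), yet the conditional probability given a start of 4 is zero, which is exactly the gap between the two statisticians' descriptions.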

Eventually, science reaches the point where it is able to predict with near 100 percent accuracy what number a thrown die will land on, given knowledge of the way in which it was thrown and the conditions of its trajectory. And while it remains true that over the long run tossed dice will come up 4 one out of six times, saying that a given die has a 1/6 chance of coming up 4 is no longer tenable. Because the universe did not suddenly change in the meantime, this implies that the original probability of 1/6 was a subjective interpretation of objective data -- subjective, because justified within the original subject's epistemological frame.

For the sake of communication we practically define objectivity into existence and use it for its application. The philosophical ramifications and considerations are generally not of any concern to the scientist.

The scientific method, as we all know very well from school, starts from observation. However, Kuhn famously stated that, so long as we have committed ourselves to what he calls a scientific paradigm, our observations must by nature be 'theory-laden'.

Yes, but ironically, unlike Kuhn, I take that to be an argument for scientific realism and with that, scientific objectivity.

That sounds like an interesting position. Why is that? (My guess is that if observations can be made in theoretical terms, then this means they are good at describing reality - and hence isomorphic with reality. But can we not do the same with, say, Marxist historiography?).

Almost. Because they can be made in theoretical terms and the theory has predictive success. I am not very familiar with Marxist historiography, but afaik it never had the latter.

It is not clear that the process of gathering data is objective. The measures we use to gauge our results are often subjective. This is usually less of a problem in physics, but in the social sciences, there can be a wide range of indicators, each of which makes underlying normative assumptions.

I agree, which is why I am a bit sceptical of the social sciences.

Actually, the Stanford Encyclopedia of Philosophy does present some arguments about the subjectivity of measurement, even for the physical sciences (http://plato.stanford.edu..., http://plato.stanford.edu...). I'm not knowledgeable enough to form an opinion either way, though. What do you think of these arguments?

I don't have a lot of time lately (hence the late reply), but it seems to me that both are different formulations of Kuhn's point about theory-ladenness. At least I would answer both concerns with the same reply I gave to his argument: predictive success + the no miracles argument.

So, can we really achieve objectivity in science? Or is the Bayesian picture of objectivity from subjective beginnings - the picture of all scientists eventually converging to the same objective truth - correct? Or, perhaps, is it impossible for us to ever find scientific 'truth' in an absolute sense?

It is important to note that the objectivity of science does not stem from every scientist or result being objective. Rather, it is the self-correction mechanisms employed that make science as a whole converge to objectivity and truth.

That's a reasonable position, but I think it is not always obvious that the intersubjective standards of evaluation are necessarily objective. In the biomedical sciences, it certainly isn't. Even ignoring the notorious fact that publication bias keeps falsifications out of print to protect the interests of pharmaceutical transnationals, the standards in such fields are tied to the consequences of incorrectly rejecting or accepting a hypothesis. That may be more problematic for applied fields than pure ones, however.

Nevertheless, can we ever be sure that even the standards that we, as a community in the pure sciences, use in confirming or falsifying hypotheses are objective? Examples of the Duhem problem seem to cast some doubt. When the orbit of a planet deviated from Newton's predictions, the default position was not to refute Newton's theory but to challenge some other underlying assumption. In Uranus' case, this was not only successful but also led to the discovery of Neptune. But for Mercury, before Einstein came along, everyone was confident about the existence of Vulcan as well - which even led to many bogus observations of the non-existent celestial body. On the one hand, the defence of an empirically successful theory seems to be an objective consideration, but on the other, doesn't the community's acceptance of the spurious evidence for Vulcan cast doubt on its objectivity?

Well, I am not sure it was accepted as scientific consensus, but it is one of the examples I always cite when arguing against falsificationism. Falsification and verification are a bit more complex than the popular picture might suggest, because we never test a single hypothesis in a vacuum, so to speak; rather, we test the hypothesis in conjunction with a whole theoretical framework, such that any observation adds verification or takes it away.