Cognitive Sciences Stack Exchange is a question and answer site for practitioners, researchers, and students in cognitive science, psychology, neuroscience, and psychiatry.

(a) Apples grow to different sizes, from small to average and large. Farmers want large apples.

(b) There is this rare bug, the apple bark bug, which lives in the bark of apple trees. Due to its rarity, the apple bark bug is not well understood.

(c) Agronomists are divided about the effect the apple bark bug has on the size of apples. Some believe that apples from trees infested with colonies of this bug are smaller than those from uninfested trees; others believe that they are in fact larger.

The ministry for agriculture has commissioned a study on the effects of the apple bark bug on apple size. Based on the results of this study they want to either exterminate the apple bark bug, ignore it, or cultivate it. There is no scientific literature on the matter, there is no reliable data, and the research institute commissioned with this study now wonders:

What should be the null hypothesis?

As natural scientists they know they cannot present their limited observations as general fact, but only falsify a hypothesis. But what should it be?

The third semester student apprentice from the nearby university, who has just finished a course in the theory of measurement, suggests:

H0 : Apple(bug) = Apple(no bug)

H1 : Apple(bug) ≠ Apple(no bug)

His friend, who has heard the same lecture, rolls her eyes and suggests:

H0 : Apple(bug) ≤ Apple(no bug)

H1 : Apple(bug) > Apple(no bug)

The research group leader smiles and compliments them on their attentiveness.

What does he suggest?

This is, of course, an example, illustrating a question relating to psychological research: How do we define the null hypothesis, if we have no presupposition about the effects of an intervention?

To me it looks like there are two competing research hypotheses, rather than no hypothesis.
– Ana, Jun 9 '13 at 19:25


Not really. Any hypothesis must have an alternative hypothesis such that between them they cover all possibilities: either x = 1 or x ≠ 1. Tertium non datur (there is no third possibility). The two opposing opinions, that the bugs make the apples large or that the bugs make the apples small, leave open a third possibility: that the bugs don't affect the apples at all. So you cannot test one hypothesis against one other; you'd be testing one against two others, which is impossible. You always need exactly two, and the sum of their probabilities must be 1: p(H0) = a; p(H1) = 1 − a.
– what, Jun 9 '13 at 21:31


This seems way out of scope... Philosophy S.E. might be a better place, since this deals more with the philosophy of science than with any particular field.
– zergylord, Jun 10 '13 at 23:47

I disagree with @zergylord about this being out of scope. Cross Validated would've been a far better suggestion if s/he'd been right, but I don't see a need for migration.
– Nick Stauner, Jun 2 '14 at 8:44

2 Answers

So your independent variable is the bug, and your dependent variable is apple size.

The question asks whether you are testing a directional hypothesis.
Given that researchers are divided about the direction of the effect of the IV on the DV, it makes more sense to treat it as a non-directional hypothesis:

i.e., H0 is that apple size is equal with and without bugs.
The alternative hypothesis is that they are not equal. Or in some senses, there are two alternative hypotheses, one suggesting an increase and another suggesting a decrease.

Of course, you ultimately want to conclude whether there is a difference and, if so, what the direction of the effect is. Thus confidence intervals from a frequentist perspective, or a posterior density from a Bayesian perspective, might be more useful in understanding the direction and degree of any effect.
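A minimal sketch of what such a non-directional test plus confidence interval might look like, using invented apple diameters (all numbers here are hypothetical; with ~50 apples per group a normal approximation is a reasonable shortcut):

```python
from statistics import NormalDist, mean, stdev
import random

random.seed(1)
# Hypothetical apple diameters in cm; the true effect of the bug is
# unknown, so we test the non-directional H0: mean(bug) == mean(no bug).
no_bug = [random.gauss(7.0, 0.8) for _ in range(50)]
bug = [random.gauss(7.3, 0.8) for _ in range(50)]

def two_sided_test(a, b):
    """Two-sided test of H0: mean(a) == mean(b), normal approximation."""
    diff = mean(a) - mean(b)
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = diff / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    ci = (diff - 1.96 * se, diff + 1.96 * se)  # 95% CI for the difference
    return diff, p, ci

diff, p, (lo, hi) = two_sided_test(bug, no_bug)
print(f"mean difference = {diff:.2f} cm, two-sided p = {p:.3f}")
print(f"95% CI: ({lo:.2f}, {hi:.2f}) cm")
```

The confidence interval does double duty here: it answers the non-directional question (does it contain zero?) while also showing the direction and size of any difference, which is what the ministry actually needs.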

I received this same answer from another source through personal communication. The phrasing was: if you have no presuppositions, you conduct a two-sided test. If you can reject H0: p = 0.5 and your estimator is, for example, p̂ = 0.7, you can interpret this as p > 0.5.
– what, Jun 11 '13 at 9:16
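That comment's recipe can be made concrete with an exact two-sided binomial test (the counts below are invented to match p̂ = 0.7):

```python
from math import comb

def two_sided_binom_p(k, n, p0=0.5):
    """Exact two-sided binomial test of H0: p == p0, summing the
    probabilities of all outcomes at most as likely as the observed k."""
    def pmf(i):
        return comb(n, i) * p0**i * (1 - p0)**(n - i)
    pk = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk + 1e-12)

# Hypothetical data: 70 'large apple' outcomes in 100 trials.
k, n = 70, 100
p_hat = k / n
p_value = two_sided_binom_p(k, n)
print(f"p_hat = {p_hat}, two-sided p = {p_value:.5f}")
# Having rejected the non-directional H0: p = 0.5, the sign of
# p_hat - 0.5 licenses the directional reading p > 0.5.
```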

I encounter a lot of situations like this in personality research. I don't think it's true that:

As natural scientists they know they cannot present their limited observations as general fact, but only falsify a hypothesis.

Limited observations of empirical information are still as factual as anything else. One may not want to present explanations of them as general theory, but one doesn't need a null hypothesis to generate theory. If the ministry's budget for the study is set, and the sampling procedure is too, then why not just collect the data and describe it the best one can? The scenario suggests a natural quasi-experiment, so causal inference would be limited unless laboratory-controlled experiments were used instead of naturalistic observation...and controlled experiments still might fail to apply to natural ecosystems by limiting the bugs' expression of any preferences they might have. Given no perfect solution, it seems inferential power is inevitably limited in this scenario. I don't see any hope for retaining a confirmatory hypothesis testing framework either, given the explicit lack of a hypothesis to confirm.

In an exploratory framework, clear data description is roughly as good as (even debatably better than) hypothesis testing. It doesn't pretend to be something it's not if reported appropriately, and doesn't replace real information with a choice of simplistic models. One way to take an exploratory approach in this case would be to estimate the effect size of the group difference and calculate confidence intervals. If zero is within your confidence interval, you might not want to reject the null hypothesis, but then again, why wouldn't you? It's probably false anyway, even if it's only ever-so-slightly off in whichever direction, even if only due to other uncontrolled variables. IMO, better to focus on what you do know:

I don't see why the ministry couldn't make an appropriate decision based on this information; none of its limitations is unnecessary AFAIK, whereas the value of a test statistic is tied to the choice of null. What if the ministry only cares about substantial nonzero effects, but throws enough money at the study to detect very small effects reliably? A rejected null then does less good than a narrow confidence interval, which, given a small true effect and a large sample, might rule out both a zero effect and any substantially-larger-than-zero effect.
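That last point can be demonstrated numerically. In the sketch below (every number is invented, including the 0.3 cm threshold for what the ministry would consider "substantial"), a large sample with a tiny true effect yields a confidence interval that excludes zero yet also rules out any effect the ministry would care about:

```python
from statistics import mean, stdev
import random

random.seed(7)
# Large hypothetical sample with a small true effect (+0.05 cm).
no_bug = [random.gauss(7.00, 0.8) for _ in range(20000)]
bug = [random.gauss(7.05, 0.8) for _ in range(20000)]

diff = mean(bug) - mean(no_bug)
se = (stdev(bug) ** 2 / len(bug) + stdev(no_bug) ** 2 / len(no_bug)) ** 0.5
lo, hi = diff - 1.96 * se, diff + 1.96 * se  # 95% CI for the difference

SUBSTANTIAL = 0.30  # hypothetical threshold the ministry cares about, in cm
print(f"95% CI: ({lo:.3f}, {hi:.3f}) cm")
print("CI excludes zero effect:", lo > 0 or hi < 0)
print("CI rules out a substantial effect:", -SUBSTANTIAL < lo and hi < SUBSTANTIAL)
```

Here a significance test would report "reject the null" and stop, while the interval tells the ministry the one thing it needs: the bug's effect, whatever its direction, is too small to justify extermination or cultivation.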