Rationality requires an impartial concern for all parts of our life. The mere difference of location in time, of something’s being earlier or later, is not a rational ground for having more or less regard for it.

While Rawls’s principle is a little broader, both hold that there is no good reason to grant greater weight to benefits one might receive now rather than later.

Look: I believe this. But it seems to me especially difficult to argue for. And argument is needed. Many of us in fact display temporal biases: we concern ourselves more with particular parts of our lives than with our lives as a whole. The most common example is future-bias: we prefer goods to be in our future rather than our past. Only slightly less common is near-term bias: we prefer benefits to come sooner rather than later (even holding fixed the size of a benefit and its certainty). Given how common such patterns of concern are, it seems sensible to ask what the rationale can and should be for rejecting them.

Furthermore, or so I suggest, there is a reasonably potent philosophical rationale to be had for such biases. Take an analogy from morality: We typically believe that we have stronger moral reason to benefit those close to us, to benefit those with whom we share certain bonds (such as friendships, parent-child relations, and so on). These reasons can give rise to stronger obligations to, for example, assist, nurture, and care for our loved ones, compared with our obligations towards those with whom we do not share such bonds. But if this is correct and what we owe to each other can depend on the closeness of our bonds, it’s not clear why what we owe to ourselves cannot similarly be structured by our bonds to ourselves at different points in time. And, at least for some, these bonds to our other temporal selves will be biased toward those selves at particular periods of our lives.

In some folks, such bonds can yield forms of near-term bias. For instance, it could be—indeed, quite plausibly is—the case that I have a stronger set of interactions and relationships with my near-term future self than with my far-term future self. In addition, the bonds of affection are likely to be more robust in the case of near-term future selves. For instance, we may be committed to different projects later on in our lives, and so we may care more about those nearer parts of our lives, where we are still committed to our current projects. And if this is the case, we seem to have at least some justification for privileging our near-term selves over our far-term selves.

Here’s another way to put this point: Moral views that allow obligations or special permissions to our near and dear will hold that each individual maintains a particular ‘moral perspective’—one that grants special reason to care for the interests of those close to us. My moral perspective will be different to yours, insofar as we are related to different people, care about different people, and so on. But if we allow similar considerations to cross from relationships between different people to the relationship between different parts of one person’s life, a similar verdict follows. My perspective now may very well be different to yesterday’s perspective or tomorrow’s perspective. I, today, have bonds with my various past and future selves that may differ from the bonds I will have tomorrow. And given plausible psychological assumptions, for many people these perspectives will be biased toward the near future.

(Notice that this argument does not entail that one should maintain a near-term bias, only that it is rational for individuals to adopt a near-term bias, depending on their bonds to other temporally located selves. This is reflected in the analogous moral view: while those without specific bonds to other people may be permitted to remain neutral, those who maintain such bonds will have reason—requiring or merely permitting them, depending on the view—not to be neutral.)

Can this argument be rebutted? Perhaps. The classic counterargument appeals to compensation and runs like this: Suppose we face a choice between two actions, one that shows no bias towards any future self and another that favours near-future selves. The unbiased action can be supported by the fact that, though it may initially yield smaller benefits than the biased one, the person will be compensated by benefits at later times; or, first pain, then gain. As David Brink writes: ‘Now-for-later sacrifice is rational because the agent is compensated later for her earlier sacrifice’.

Brink compares this rationale to debates in ethics and political philosophy. Imposing a sacrifice on one person to benefit another is (at least on the face of things) unjust unless the person who loses out is in some way compensated for their sacrifice. However, when it comes to sacrificing one’s present benefits for the sake of later benefits, according to Brink, ‘the benefactor and beneficiary are the same person, so compensation is automatic’. Hence there should be no worries about sacrificing one’s present benefits for later benefits. After all, one will be compensated for the burdens one undergoes now.

I’d like this argument to succeed, but it does not. The largest problem is that the argument’s structure is flawed; the argument already assumes the truth of the conclusion and is what philosophers call ‘question-begging’. The claim is that we are later compensated (we ‘gain’) for the ‘pain’ we endure now, where the ‘pain’ and the ‘gain’ have been calculated in a manner that treats all our temporal selves as equal. But the very issue at stake is whether we should be neutral about our different temporal selves. So in claiming that the near-term bias fails because we are later compensated for our present sacrifices, one is already assuming the truth of a temporally neutral picture in an argument for a temporally neutral picture.

Indeed, the insistence that a near-term bias ought to be rejected given that one will be later compensated for one’s present (comparative) sacrifice isn’t even addressing the request for an argument for temporal neutrality. According to those who would accept a near-term bias, it is permissible to hold that near-term benefits are weightier than longer-term benefits—they matter more to individuals. And even when we know for sure that we will be compensated in the future, a temporal bias holds that this compensation is less weighty or less important in comparison to the sacrifice. The question that must be addressed is why one’s future compensation for present sacrifice should be considered to be as significant as the sacrifice. And the argument from compensation provides precisely no answer to that question.

Here’s another way to put the present point: According to the neutral view, adequate compensation for some sacrifice is any benefit of equivalent size (or greater). But whether one should sacrifice present for future benefits is not a question that involves only a measurement of the size of the sacrifice and that of the compensation. Compare the moral question: I might admit that a benefit to some stranger might be larger than a benefit to my child. But this does not determine whether I should impose that sacrifice on my child, because moral considerations treat the benefit to my child as more significant, despite its comparative size. The same applies here. It may be that the benefit I stand to gain later is larger than the sacrifice I make now. But, assuming a near-term bias, this doesn’t determine whether I should make that sacrifice; only the importance of the sacrifice compared with that of the compensation can answer this question. And because a temporally biased view will say that some benefits for particular future selves are less important, such a view will hold that adequate compensation must either involve other (nearer) temporal segments of a person’s life, or be that much larger in comparison to the sacrifice.

Though we may hold that one ought to make a sacrifice if one’s compensation is adequate, it is simply to beg the question against temporal biases to hold that any benefit that is equal-to-or-greater-than the sacrifice will be adequate. Such temporal biases will hold that adequate compensation must either occur now or, if it is to occur later, be duly amplified.

So this is a bit awkward. Like many, I accept a temporally neutral view, but it seems we must look elsewhere to justify it.

Dale Dorsey is Professor of Philosophy at the University of Kansas. His research interests include moral philosophy, political philosophy, especially welfarism, and the history of philosophy, especially the work of the British Moralists.