I wanted to represent a timespan in terms of decimal years, for no apparent reason. This isn’t the most interesting thing I have to talk about, but in this “era”* of political correctness, you can’t even be politically correct without someone calling you out on it. Yeah, I’m taking the easy way out.

Anyway, I noticed that there’s a variety of ways to calculate this timespan, specifically its non-integer portion. As always, there’s a naïve way to compute the entire number: calculate the difference in milliseconds (or some similarly small unit of time), and divide by the number of milliseconds in a year. How many milliseconds are in a year? That’s a silly question, because years don’t have a constant length. We could use the average, but we’d generally be off.
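A minimal sketch of that naïve approach in Python (the function name is mine, and it assumes the mean Gregorian year length of 365.2425 days as the average):

```python
from datetime import datetime

# Assumed average year length: 365.2425 days (the mean Gregorian year).
AVG_MS_PER_YEAR = 365.2425 * 24 * 60 * 60 * 1000

def naive_decimal_years(start: datetime, end: datetime) -> float:
    """Divide the millisecond difference by an average year length."""
    delta_ms = (end - start).total_seconds() * 1000
    return delta_ms / AVG_MS_PER_YEAR
```

Note the problem in action: the span from 2000-01-01 to 2001-01-01 is exactly one calendar year, but since 2000 was a leap year (366 days), this function reports roughly 1.002 years.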

So we know the root issue is that years have a non-constant length. What does a decimal year represent, then? Just like everything else about our measurement of time, it’s an arbitrary representation. I think it makes the most sense to make the representation continuous - if my decimal year clock says “3.5” and I wait half of this year, it should say “4.0,” and every moment in between should map linearly onto the interval between those two readings.

So I’ve established something meaningless here, because that doesn’t get us any closer to an answer. Since the decision is arbitrary, here are a few possibilities for calculating the fractional part: (1) use the average length of a year; (2) use the length of the current year; (3) use the length of the destination year; (4) use a combination of the current and destination years; or (5) use the average/combined lengths of the years between the current and destination years (inclusively or exclusively).

I ended up deciding on the fourth option - using a combination of the current and destination years. First, convert each date into its absolute decimal year, then take the difference of those two decimals.
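Here’s a sketch of that approach in Python (the helper names are my own): each date is anchored within its own calendar year, so each year contributes its actual length, and the clock reads “4.0” exactly at each New Year.

```python
from datetime import datetime

def to_decimal_year(dt: datetime) -> float:
    """Absolute decimal year: year number plus the fraction of that
    year which has elapsed, using the year's actual length."""
    year_start = datetime(dt.year, 1, 1)
    next_year_start = datetime(dt.year + 1, 1, 1)
    elapsed = (dt - year_start).total_seconds()
    year_length = (next_year_start - year_start).total_seconds()
    return dt.year + elapsed / year_length

def decimal_year_span(start: datetime, end: datetime) -> float:
    """Timespan in decimal years: difference of absolute decimal years."""
    return to_decimal_year(end) - to_decimal_year(start)
```

With this version, the span from 2000-01-01 to 2001-01-01 comes out to exactly 1.0, and within each year the mapping is continuous and linear, as required above.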