The technical reason that floating-point ranges are deprecated is that precise addition of Double and Float is not guaranteed. In particular, it fails on addition of numbers with a decimal fraction, even if that fraction is “not tiny”, like 0.1. The library should not contain methods that mysteriously give you unreliable behavior, hence the deprecation.
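To make the failure concrete (my example, not from the original post): repeatedly adding 0.1 as a Double accumulates rounding error, so an additively stepped range drifts off its decimal endpoints.

```scala
// 0.1 has no exact binary double representation, so each addition
// carries a small rounding error that accumulates across the loop.
var x = 0.0
for (_ <- 1 to 10) x += 0.1
println(x)        // 0.9999999999999999, not 1.0
println(x == 1.0) // false
```

Ten additions of 0.1 already miss 1.0, which is exactly the kind of silent unreliability the deprecation is about.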

The BigDecimal approach solves the issue because the math is performed without imprecision.
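A small illustration (mine, not from the post): a NumericRange over BigDecimal steps exactly, hitting every decimal endpoint.

```scala
// BigDecimal represents 0.1 exactly, so the stepped values land
// precisely on 0.1, 0.2, ..., 0.7 with no drift.
val steps = BigDecimal("0.1") to BigDecimal("0.7") by BigDecimal("0.1")
println(steps.size)                      // 7
println(steps.last == BigDecimal("0.7")) // true
```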

There’s no advantage to trying epsilon-shifting schemes because then you may as well just use integers to represent your decimal fraction, for example in the manner that Oliver demonstrated.

One can dispute whether Range.Double is useful (stepping with precision but delivering Doubles), or whether there is some other useful sense of Double stepping that could be controlled by a context. But please acknowledge that I am discussing the former, which is limited but by itself unproblematic. Sometimes all you need or want is an unproblematic solution to an unproblematic problem.

I came back from the angry Java thread even angrier. You wouldn’t like me when I’m angry.

It is fairly easy to write a routine that works correctly by using multiplication and division rather than addition. In fact, I did exactly that long ago for my Scalar class (which represents physical scalars with units). Here it is:
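The original Scalar code isn't reproduced here; as a rough sketch of the multiply-and-divide idea using plain Doubles (names and details are mine, not the original):

```scala
// Sketch: compute each element as start + i * step instead of
// accumulating additions, so rounding error never compounds.
def doubleSteps(start: Double, end: Double, step: Double): Vector[Double] = {
  // number of whole steps, found by division rather than by counting
  // additions; assumes end lies (approximately) on a whole step
  val n = math.rint((end - start) / step).toInt
  (0 to n).map(i => start + i * step).toVector
}
```

With this, doubleSteps(0.0, 1.0, 0.1) ends within one ulp of 1.0 instead of drifting, though of course it still cannot make 0.1 itself exact.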

A simpler version of this (with Scalar replaced by Double) could be provided by default for Doubles in Scala, so that the deprecated syntax could be kept and would work correctly. That would spare users the effort of figuring out how to use BigDecimal. It would also incur a tiny performance penalty, but I would gladly take that slight hit in return for the convenience.

For what it’s worth, it just occurred to me that if the human race had chosen base 8 (octal) instead of base 10 as the standard numeral system, we wouldn’t have this problem. People say we use base 10 because we have ten fingers, but actually we have 8 fingers and two thumbs! Too late to fix that one, I guess!

Even if you step with precision you run into surprises. What should the behavior of 0.1 until 3*0.1 by 0.1 be? More sneakily, suppose you have def steps(x: Double) = x until 3*x by x. Should this sometimes give you three elements and sometimes two?
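To spell out why the upper bound misbehaves (my illustration): the computed bound 3 * 0.1 is not the Double written as 0.3, so an exclusive range sees an endpoint that sits just above the value the user meant.

```scala
// The binary rounding of 0.1 makes 3 * 0.1 land slightly above 0.3,
// so a range's computed upper bound differs from the literal 0.3.
println(3 * 0.1)        // 0.30000000000000004
println(3 * 0.1 == 0.3) // false
```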

The imprecision can easily arrive in the input, which is why a solution with precise stepping isn’t really a solution.

I still insist that in the age of literal types and macros, it’s not too much magic to insist on literals, or at least to emit a warning. For 0 to .7 by .1, give me BigDecimals or Doubles or whatever seems to be expected.

I actually use something like this to discretize a bounding area for a numerical algorithm, and I definitely need to capture the end point. But I need to capture the end point even if it is in the middle of a step, which is a slightly different problem. I could just add the end point to the end of the sequence, but then I would usually be repeating the end point. So I came up with this little scheme:

def scalarStepsx(start: Scalar, end: Scalar, step: Scalar): Vector[Scalar] = {
  // same as scalarSteps except guaranteed to include the end point,
  // even when it falls in the middle of a step
  val steps = scalarSteps(start, end, step)
  if (steps.last == end) steps else steps :+ end
}

@som-snytt - I don’t have any objection to a working macro. I’m not likely to be able to write one in a reasonable amount of time myself, though.

As Russ’s examples indicate, it’s tricky to get it working. The only really safe thing to do is pass literal numeric arguments into the BigDecimal string constructor, picking them directly out of the text of the code (not the Double literal computed by the compiler).
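The reason the literal text matters (my illustration): by the time a Double reaches a method, the written decimal is already gone. The two java.math.BigDecimal constructors make the difference visible.

```scala
// The String constructor preserves the decimal text exactly; the
// double constructor exposes the binary value the literal became.
println(new java.math.BigDecimal("0.1"))
// 0.1
println(new java.math.BigDecimal(0.1))
// 0.1000000000000000055511151231257827021181583404541015625
```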

Seriously though, a person is extremely unlikely to actually use a number like 0.300000000001, and roundoff error will be a couple orders of magnitude less than 1e-12. Hence, I don’t see it as a practical issue. Nevertheless, I can understand that you cannot allow even the tiniest “loophole” in the standard language and library.

Some applications actually hinge upon these kinds of differences: those with chunked intervals where the intervals are used as a denominator, for instance, or those that count on hitting the endpoint exactly in order to generate a difference between a to b and a until b. This can be really important to get right if you’re, say, trying to generate angles between 0 and 2*Pi; overshooting on the last endpoint and getting a second approximately-zero angle can be a big deal.
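A hedged sketch of the safer pattern for the angle case (names are mine): derive each angle from its index so the exclusive endpoint is exact by construction, and no spurious near-zero angle can appear.

```scala
// Each angle is computed directly from its index. At i = n the factor
// i/n would be exactly 1.0, so `0 until n` cleanly excludes the
// duplicate of angle 0 at 2*Pi.
def angles(n: Int): Vector[Double] =
  (0 until n).map(i => (i.toDouble / n) * (2 * math.Pi)).toVector
```

Here angles(8).head is exactly 0.0 and every element stays strictly below 2 * math.Pi, which an accumulating loop cannot guarantee.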

I’d love to have a better story here, but unfortunately it is all too easy to have an “intuitive” result that’s just wrong. For example, people will reason, “Well, if I hit the endpoint exactly, to and until will be different, so I’ll just boost the endpoint up/down a tiny bit to make them the same,” and then they get weird unexpected behavior because it’s fighting secret heuristics in the algorithm put there to try to preserve a different kind of intuition.

I can see that arithmetic with Doubles is imprecise, and that makes a naive range of Doubles unintuitive. But I don’t really see the problem anymore when you can make the steps of the range precise by using a BigDecimal underneath. The argument now is that you can give an imprecise result of a calculation with Doubles as input to the range (e.g. 0.1 until 3*0.1 by 0.1). But isn’t this just the case for everything one might do with Doubles? If that’s a reason not to have a range of Doubles, then shouldn’t you just remove Double itself?

For instance:

scala> Ordering[Double].equiv(0.3, 0.1 * 3)
res0: Boolean = false

Should we now deprecate Ordering[Double]?

Also, if you force people to use Range.BigDecimal instead, this is what’s going to happen:

There’s a limit to how much we can protect people. But the bottom line is that Double represents decimal fractions imprecisely, and NumericRange has an API that presupposes accurate treatment of endpoints. There’s an inherent conflict there. We shouldn’t present an API and then blame the user for assuming that it works reliably because of course Double is imprecise.

So either we need an alternate API, e.g. 0.1 to 0.7 size 7 and 0.1 to 0.7 every 0.1, where you promise to hit the endpoints regardless (and the step size for every is not strictly adhered to); or we need to bail on Double entirely and/or leave the deprecations in place forever, telling people that what they’re trying to do can’t be made reliable, because the endpoint guarantees the API assumes are something Double can’t deliver.
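As a sketch of what the size-based variant could look like (a hypothetical API of my own devising, not an actual proposal): interpolate between the endpoints so that both are hit exactly by construction.

```scala
// Hypothetical sketch: both endpoints are exact because the weights
// (1 - t) and t are exactly 1.0 and 0.0 at i = 0 and i = size - 1.
def sizedRange(start: Double, end: Double, size: Int): Vector[Double] = {
  require(size >= 2, "need at least the two endpoints")
  (0 until size).map { i =>
    val t = i.toDouble / (size - 1)
    start * (1 - t) + end * t
  }.toVector
}
```

With this shape, sizedRange(0.1, 0.7, 7) returns exactly seven elements whose first and last are the Doubles 0.1 and 0.7 as written; the interior spacing absorbs whatever imprecision there is.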