The developers are planning on porting the code to a universe where the laws of mathematics are different?
– vaughandroid Jul 3 '12 at 10:47


Seriously though, I can't think of a single good reason for this. The only explanations I can come up with are over-zealous coding standards, or some devs who have heard "magic numbers are bad" but don't understand why (or what would constitute a magic number)...
– vaughandroid Jul 3 '12 at 10:49

@Baqueta – An alternate universe? I think they already live there! As for magic numbers, I agree; my rule of thumb, though, is that everything except 0 and 1 should be made a constant.
– NWS Jul 3 '12 at 10:56

If x has type float, then x > 0.0 forces promotion to double, which might be less efficient. That's not a good reason for using a named constant though, just for making sure your constants have the correct type (e.g. 0f, float(0) or decltype(x)(0)).
– Mike Seymour Jul 3 '12 at 14:32

That's as hilarious as my tutor on a C++ project claiming I should not write #define MAGIC_NUM 13.37 but rather static const float MAGIC_NUM = 13.37 because it would be "type-safe". (I guess that's also part of the Google Style Guide.) So we do something semantically wrong for that? O tempora, o mores!
– Jo So Jul 3 '12 at 14:34

5 Answers

One legitimate motivation is avoiding the cost of creating an object every time you make a comparison. In Java, an example would be

BigDecimal zero = new BigDecimal("0.0");

This involves a fairly heavy creation process and is better served by the provided static constant:

BigDecimal zero = BigDecimal.ZERO;

This allows comparisons without incurring a repeated creation cost, since BigDecimal.ZERO is a constant created once when the class is initialised.
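One caveat worth knowing when comparing against BigDecimal.ZERO (an illustrative sketch, not from the original answer): equals() is scale-sensitive, so compareTo() is the safer way to test for zero.

```java
import java.math.BigDecimal;

public class ZeroComparison {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("0.00"); // scale 2

        // equals() compares value AND scale: 0.00 is not "equal" to ZERO (scale 0)
        System.out.println(value.equals(BigDecimal.ZERO));         // false

        // compareTo() compares numeric value only, ignoring scale
        System.out.println(value.compareTo(BigDecimal.ZERO) == 0); // true
    }
}
```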

In the case you describe, however, a primitive is doing the same job, so a named constant is largely redundant in terms of both caching and performance.

Naming (unlikely)

The original developer may be attempting to provide a uniform naming convention for common values throughout the system. This has some merit for uncommon values, but something as basic as zero is only worth naming in the caching case above.

Forcing type (most likely)

The original developer may be attempting to force a particular primitive type, to ensure that comparisons are performed at the correct type and possibly at a particular scale (number of decimal places). This is reasonable, but the bare name "zero" conveys too little for that use case; ZERO_1DP would express the intent more clearly.
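The scale-forcing idea can be sketched in Java with BigDecimal (the class name is illustrative; ZERO_1DP is the name suggested above):

```java
import java.math.BigDecimal;

public class ScaledConstants {
    // Constant pinned to exactly one decimal place, as its name advertises
    static final BigDecimal ZERO_1DP = new BigDecimal("0.0");

    public static void main(String[] args) {
        System.out.println(ZERO_1DP.scale()); // prints 1
    }
}
```

Here the constant's name documents its scale, so callers know which precision the comparison will use.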

+1 for forcing type. I'll add that in languages like C++ that allow operator overloading, combining a typedef with a named constant keeps the variable's type in exactly one place, so it can be changed without altering the rest of the code.
– Blrfl Jul 3 '12 at 11:32


Forcing type is most likely not what they were trying for, but this is the best explanation of why it might be done!
– NWS Jul 3 '12 at 12:46


For forcing type, I'd probably rather just use 0.0f.
– Svish Jul 3 '12 at 13:17

Forcing type can sometimes be useful in VB.NET, where performing bitwise operators on bytes yields a byte result. Writing byteVar1 = byteVar2 Or CB128 reads a little nicer than byteVar1 = byteVar2 Or CByte(128). Of course, having a proper numeric suffix for bytes would be better still. Since C# promotes the operands of bitwise operators to int even when the result is guaranteed to fit in a byte, the issue isn't as relevant there.
– supercat Jul 11 '12 at 16:56

It's almost certainly exactly as efficient during execution (unless your compiler is very primitive) and very slightly less efficient during compilation.

As to whether that's more readable than x > 0... remember that there are people who honestly, genuinely, think that COBOL was a great idea and a pleasure to work with - and then there are people who think exactly the same about C. (Rumor has it that there even exist some programmers with the same opinion about C++!) In other words, you are not going to get general agreement on this point, and it's probably not worth fighting over.

It's Because of "Tooling Nagging"

A possible reason I don't see listed here: many code-quality tools flag the use of magic numbers. It is often bad practice to scatter magic numbers through an algorithm without making them clearly visible for later change, especially when they are duplicated in multiple places in the code.

So, while these tools are right to flag such issues, they often generate false positives for values that are harmless: genuinely static values, or simple initialization values.

And when that happens, sometimes you face the choice of:

marking them as false positives, if the tool allows it (usually with a specially formatted comment, which is annoying for people NOT using the tool)

or extracting these values to constants, whether it matters or not.

About Performance

It depends on the language, I guess, but this is fairly common in Java and has no performance impact: values declared static final are true compile-time constants and are inlined by the compiler. Nor would it have an impact in C or C++ if the values are declared as constants or even as pre-processor macros.
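For instance, in Java (a sketch; the class and method names are illustrative), a static final primitive constant is folded into the bytecode, so the comparison costs the same as the literal:

```java
public class Inlining {
    // A compile-time constant: javac inlines its value at every use site
    static final double ZERO = 0.0;

    static boolean isPositive(double x) {
        // Compiles to the same bytecode as: return x > 0.0;
        return x > ZERO;
    }
}
```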