typeof null actually makes a lot of sense. The object type in JS represents an object _reference_. And null is just such a reference, only one that points to nowhere. A NULL pointer is still a pointer, after all.

And why should null === Object in your opinion? Object is a constructor object reference, null is a null object reference… why should they be equal?
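A quick sketch of the distinction being drawn here (`isObject` is just an illustrative helper, not anything from the thread):

```javascript
// typeof reports "object" for null, yet null is not equal to the
// Object constructor, nor is it an instance of anything.
console.log(typeof null);            // "object"
console.log(null === Object);        // false
console.log(null instanceof Object); // false

// A practical "is this really an object?" check has to exclude null:
function isObject(value) {
  return typeof value === "object" && value !== null;
}

console.log(isObject({}));   // true
console.log(isObject(null)); // false
```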

NaN === NaN // false is absolutely okay and kind of logical, because NaN stands for a huge variety of values which are not numbers. NaN, so to speak, is a “symbol” into a hash of things which are not numbers. A fish is NaN and a dog is NaN, and the two are not very equal.

It is a numeric representation of something that is not a number. The value expressed by NaN has the type “number” because it is produced by numeric expressions (such as 0/0), and all numeric expressions have to produce numeric values.

In some ways it is the equivalent of null, in the sense that null is the representation of an object reference that does not exist, while NaN is the representation of a numeric value that does not exist.
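Both points can be checked directly at a console (note that in JS, division by zero with a nonzero numerator gives Infinity, not NaN):

```javascript
console.log(typeof NaN);     // "number" -- NaN is a value of the number type
console.log(0 / 0);          // NaN
console.log(1 / 0);          // Infinity -- not NaN in JS
console.log(NaN === NaN);    // false -- NaN never compares equal, even to itself
console.log(Number("fish")); // NaN -- a failed numeric conversion
```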

Anyone who would expect typeof null returning “object” to mean that null === Object needs to take a step back and think about it. The same reasoning would apply to every value whose typeof is “object”; null is no exception.

And to ywg,

“These do not belong to WTFJS, but to WTFANYPROGRAMMINGLANGUAGE”

Even the nightmare that is ECMA’s float standard choice, really? I don’t use any other language that treats 0.1 that way. I’m sure plenty do, but it’s definitely not a given and it can definitely lead to unexpected errors.

@eyelidlessness
Especially for floating-point numbers. This is just a side effect of Double, which is the most common floating-point abstraction; almost every programming language ships with an implementation of Double.

Honestly, I find it funnier that NaN != NaN. One of the annoyances is when you want to determine whether a reference is valid (in terms of core types): not null, not undefined, not an empty string, and not an invalid date or number. Just posted about that on my blog this morning, actually.

(typeof obj == ‘number’ && isNaN(obj)) will let you know if the value is NaN. It would be nicer to simply be able to compare == NaN or === NaN the way you can for an empty string. The other thing that’s nice and painful at the same time is that ”, false and 0 all compare loosely equal to each other (while null only loosely equals undefined), so you need to test for strict equality if 0 is valid but, say, false isn’t, etc.
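A sketch of the kind of validity check described above (a hypothetical helper, not the code from the commenter’s blog post):

```javascript
// Hypothetical helper: true only for "valid" core-type values --
// not null/undefined, not an empty string, not NaN, not an invalid Date.
function isValidValue(value) {
  if (value === null || value === undefined) return false;
  if (typeof value === "string" && value === "") return false;
  if (typeof value === "number" && isNaN(value)) return false;
  if (value instanceof Date && isNaN(value.getTime())) return false;
  return true;
}

console.log(isValidValue(0));     // true  -- 0 is a valid number
console.log(isValidValue(false)); // true  -- false is a valid boolean
console.log(isValidValue(""));    // false -- empty string
console.log(isValidValue(NaN));   // false -- invalid number
console.log(isValidValue(new Date("nonsense"))); // false -- invalid date
```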

@eyelidlessness
This is not ridiculous, and this has nothing to do with ECMAScript; do a search yourself if you don’t believe me.

Java comes with double, C/C++ comes with double, Python comes with double (in fact, I believe this was the first language that implemented it).
The only reason your above examples work is because they use Float by default.

JS did not cook up its own buggy floating-point abstraction; it’s something perfectly standard and very common.

Let me clarify what I am and am not saying. First, what I’m not saying:

1. ECMAScript is unique in floating point imprecision.
2. Imprecise floating point operations are unavailable in other languages.

What I am saying:

1. 0.1 + 0.2 != 0.3 is not (as you claimed) the case in “any programming language”, and I demonstrated that.
2. It then doesn’t follow that this is something any programmer should expect, which is exactly why the oddity is worth pointing out in ECMAScript for those who expect 0.1 + 0.2 == 0.3.

Therefore, 0.1 + 0.2 != 0.3 does not belong to “WTFANYPROGRAMMINGLANGUAGE”, as in quite a lot of programming languages that is not the case.
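For concreteness, this is what ECMAScript itself does with the expression under discussion:

```javascript
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
console.log(0.3 - (0.1 + 0.2)); // a tiny error, about -5.5e-17
```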

I do not agree:
1 – You just demonstrated that you are confusing Float and Double.
2 – “It then doesn’t follow that this is something any programmer should expect”

IMHO, not knowing the difference between a Float and a Double is a real fault for any experienced developer.
Don’t take it personally, this is not against you. But I consider this to be fundamental knowledge.

You said that any programmer in any language should expect 0.1 + 0.2 === 0.3 // false.

You are incorrect. This isn’t a matter of opinion; I provided half a dozen examples where a programmer would not and should not expect that. The difference between double and float is therefore moot.

Look again at your first comment. It had nothing to do with double or float, just whether or not 0.1 + 0.2 === 0.3. All the “WTF” was pointing out was that ECMAScript’s floating-point precision is not sufficient for that equation to behave as expected from a mathematical standpoint. This is true in some other languages (Java being the example you provided), but not in others (Perl, PHP, Python, Ruby, AppleScript and bc being the examples I provided). It is without a doubt language-specific, because some (but not all) languages have this problem.

Perl, PHP, Python, and Ruby all show the same rounding errors, but not necessarily for simple operations like 0.1 + 0.2.

I don’t think that’s an accurate statement. The question isn’t the simplicity or complexity of the operations, but the degree of precision. The precision of floating-point numbers in Perl, PHP, Python and Ruby is greater than in ECMAScript and Java. I didn’t attempt to claim that numbers in those languages have infinite precision; I hope that’s clear now.
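One plausible reason the same double arithmetic can *look* different across languages is default output precision rather than the stored value; this can be mimicked in JS (a sketch, assuming the other languages print roughly 15 significant digits by default):

```javascript
var sum = 0.1 + 0.2;
console.log(sum.toString());      // "0.30000000000000004" -- JS prints enough digits to round-trip
console.log(sum.toPrecision(15)); // "0.300000000000000"   -- the error vanishes at 15 digits
console.log(sum === 0.3);         // false either way -- printing doesn't change the stored bits
```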

JavaScript doesn’t have a “decimal” type. There was an effort by IBM to get it into ECMAScript 5, but it was voted down by the committee.

Yes, this is true. This is because IBM was steadfast in the particular standard they insisted on using. Both sides were in error, in my opinion.

I’m not sure if this point was intended as a rebuttal or correction, so if it was I don’t know how to address it.

bc is a different case; it’s a calculator, and has its own oddities. Try calling it (without the -l option), and it will tell you that 3/2 makes 1.

This isn’t a rounding error; it’s a default that is simply not what one might intuit: with its default scale of 0, bc truncates results to an integer.

I don’t have AppleScript available for testing.

AppleScript doesn’t support the operations in question, so it is untestable. It seems the precision is equal to that of Perl, PHP, Python and Ruby, as 1234567.1229999999 and 1234567.123 both output 1.234567123E+6.

If you need more information about this, search for “IEEE-754”.

Again, I’m not sure if this is meant as a rebuttal or correction of something, so I’m not sure how to address it. Nothing I’ve said here is inaccurate, and I think I’ve adequately addressed the question of precision as it differs from ECMAScript to certain other languages.

Not going to jump back into the argument, but I’ll just fix this:
“Precision of floating point numbers in Perl, PHP, Python and Ruby is greater than that in ECMAScript and Java.”
Java does implement float, and BigDecimal as well.

Using equality with floating point has always been asking for trouble, whether with NaN or with the result of a floating-point expression. This is true in any language which uses standard floats or doubles, and has been for many years now. JavaScript follows the standard (NaNs should not compare equal; this is part of the IEEE spec).
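In practice, that means never testing for NaN with equality; the self-inequality trick (or isNaN) is the idiomatic check:

```javascript
var x = 0 / 0;          // NaN
console.log(x === NaN); // false -- equality never works for NaN
console.log(isNaN(x));  // true, but beware: isNaN("fish") is also true (it coerces)
console.log(x !== x);   // true -- only NaN is unequal to itself
```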

@eyelidlessness, I don’t think you understand what I’m telling you: the expression “0.1 + 0.2 == 0.3” is *false* in Perl, Ruby, PHP, and Python (your counterexamples). This is not some JavaScript weirdness; it’s a well-known consequence of the impossibility of expressing any of these numbers exactly as a floating-point number. You can call it ridiculous if you want, but if you want to understand what’s going on, why these numbers cannot be exact, and why it’s a bad idea to test for equality where the results of float operations are concerned, I suggest you read up on this topic.

All in all, none of these WTFJS examples are really surprising for anyone who’s been working with the language for a while. Personally, I think giving null the type “object” was not a wise design decision, but I can live with that.
If you want some serious WTFs, look at IE’s treatment of host objects.
window == document; // true, but
document == window; // false
Ugh. And this is only the tip of the iceberg…

Very interesting from a psychological/philosophical perspective! I’ve been developing for about 15 years now; I started in Pascal, then C, then Java… that’s when I got it. There are endless overflows, NPEs, false positives/negatives and vulnerabilities caused by FP math in general. I have the scars to prove it.

The thing that I “got” when I started in Java (and this was 8 years ago) was that the vast majority of young developers have little knowledge of these ever-present inaccuracies. There are, in fact, many of them. It’s not necessarily a bad thing, though. Young developers expect their tools to work, and I can’t blame them at all; it frees up a few time slices for other processes. That’s why VMs like Java and .Net are so popular (disclaimer: I have never used nor do I know much about .Net). I started getting serious about a year ago, and I’m somewhat dismayed at the state of the language myself.

Maybe because what followed never appeared anywhere in your previous comment? Anyhow, you’re right: 0.1 + 0.2 == 0.3 returns false in those languages. I’m surprised, given the different results of 0.1 + 0.2 in those languages versus ECMAScript and Java.

This is not some JavaScript weirdness; it’s a well-known consequence of the impossibility of expressing any of these numbers exactly as a floating-point number.

I’m well aware. It can, however, be solved.
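One common way to sidestep it (a sketch, not the only approach; the tolerance below is an arbitrary choice, not anything from the thread):

```javascript
// Compare floats within a tolerance instead of with ===.
function nearlyEqual(a, b, epsilon) {
  epsilon = epsilon || 1e-9; // assumed default tolerance; tune for your data
  return Math.abs(a - b) < epsilon;
}

console.log(0.1 + 0.2 === 0.3);           // false
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
```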

As far as your continued suggestion to read more: I’m familiar with the issues involved on a general level, and never claimed it was “JavaScript weirdness”. I was misled in practice by the inconsistent results in those languages versus in ECMAScript.

All in all, none of these WTFJS examples are really surprising for anyone who’s been working with the language for a while. Personally, I think giving null the type “object” was not a wise design decision, but I can live with that.

Agreed, agreed.

If you want some serious WTFs, look at IE’s treatment of host objects.

But here I think we’re looking at WTFDOM, not WTFJS. I don’t know the ins and outs of this bug, but it seems to me that the internal DOM is lying to the JScript engine.

Anyway, thanks for correcting my error. I hope you’ll understand why I was misled, and accept that my misunderstanding of a specific behavior doesn’t betray a misunderstanding of how floating-point numbers work generally.

I don’t think this is very useful, but it certainly pays to remember that all high-level languages are full of abstractions, and it can be very confusing to use them without a full understanding of exactly what is abstracted away, and how.