Thursday, February 16, 2017

The year was 1995. Google wasn't born yet, Mark Zuckerberg was 10 years old, and JavaScript was taking its first steps. 22 years later, JavaScript is everywhere. It's probably one of the simplest programming languages around, and it's a good choice for someone who wants to start programming. However, there are some funny things about JavaScript that anyone entering the language might find odd. Here's my top 5 funny things about the language. Just open a Google Chrome window and press F12. Have fun.

#5 – null is not an object?

What does a null look like in JavaScript? Well, let's check:
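In the console, the check looks like this:

```javascript
// Ask JavaScript what type null is:
typeof null; // → "object"
```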

Object? Shouldn't null be the absence of a meaningful value? Well, yes. Despite the above result, null is not considered an instance of an object:
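Checking it in the console:

```javascript
// typeof says "object", yet null is not an instance of Object:
null instanceof Object; // → false
```

So `typeof null === "object"` is best understood as a historical quirk of the language, not as evidence that null actually is an object.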

#4 – NaN is not equal to NaN

Funny as this might look, there's more. NaN ("Not a Number") has "identity issues" and is not equal to itself:

Well, the technical explanation for this is complex and is related to the types of NaN (quiet NaN and signaling NaN). You can read more here, but the right way to check whether a value is NaN is the function isNaN():
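Here's how that plays out in the console. Note that the global isNaN() coerces its argument first, while Number.isNaN(), added in ES2015, does not:

```javascript
NaN === NaN;           // → false: NaN is never equal to anything, itself included
isNaN(NaN);            // → true
isNaN("hello");        // → true: the string is coerced to a number first, giving NaN
Number.isNaN("hello"); // → false: no coercion, and a string is not NaN
```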

#3 – Math.min() > Math.max()

Hum... So, the minimum value is higher than the max?
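Try it yourself:

```javascript
// Called with no arguments, min really is greater than max:
Math.min() > Math.max(); // → true
```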

Let's look at what they "represent":
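With no arguments at all:

```javascript
Math.min(); // → Infinity
Math.max(); // → -Infinity
```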

Well, that seems wrong. But these don't represent the max or min values for a number; they are functions that, given a list of numbers, return the max or the min of the provided parameters.

But why Infinity? And, apparently, in the inverse order? Well, think of Infinity as min()'s starting value: every number it compares against Infinity is smaller, so the true minimum always wins; with no arguments, the starting value is all that's left. The same logic applies to max() with -Infinity. So, the below makes sense:
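A few calls make the "starting value" idea concrete:

```javascript
Math.min(5);       // → 5: smaller than the starting value, Infinity
Math.min(1, 2, 3); // → 1
Math.max(1, 2, 3); // → 3
```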

#2 – true + true === 2, but true !== 1

As odd as it might look, it actually makes sense. The ECMAScript standard specifies that unless either of the arguments is a string, the + operator is assumed to mean numeric addition and not string concatenation. So true + true is the sum of each boolean converted to a number. As for the second part, true === 1 yields false because strict equality also compares types, and a boolean is not a number.
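You can see all three behaviors in the console:

```javascript
true + true; // → 2: both operands are converted to numbers (1 + 1)
true === 1;  // → false: strict equality also compares types
true == 1;   // → true: loose equality coerces the boolean to 1 first
```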

#1 – 0.1 + 0.2 !== 0.3

This is the coolest one, and it's not a bug. It actually happens in several other programming languages, like C# (.NET), and I've written about it in the past:

"Computers can only natively store integers, so they need some way of representing decimal numbers. This representation comes with some degree of inaccuracy. That’s why, more often than not, 0.1 + 0.2 !== 0.3."
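And in JavaScript it looks like this. A common way to deal with it is to compare within a small tolerance instead of using exact equality:

```javascript
0.1 + 0.2;         // → 0.30000000000000004
0.1 + 0.2 === 0.3; // → false

// Workaround: compare against a tolerance instead of exact equality
Math.abs((0.1 + 0.2) - 0.3) < Number.EPSILON; // → true
```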