
Machine (Numeric) Constants in JavaScript

I am updating some JavaScript code which presently uses the numeric constant Number.MIN_VALUE, which I believe represents the smallest positive number of type double that can be used in JavaScript.
I believe it corresponds to the numeric constant DBL_MIN in C++ (though strictly speaking, Number.MIN_VALUE is the smallest denormal value, closer to C's DBL_TRUE_MIN, while DBL_MIN is the smallest normalized double).

The code also uses the numeric constant Number.MAX_VALUE, which corresponds to the numeric constant DBL_MAX in C++.

(Or so I think. Somebody please correct me if I am making the wrong assumption.)

I would now like to edit the code so that it uses the limits for type float instead of type double.

In C++, values for these machine constants are held in FLT_MIN and FLT_MAX:

Code:

FLT_MIN: 1.17549435082229e-38
FLT_MAX: 3.40282346638529e+38

However, I do not think JavaScript has any built-in corresponding values.

I suppose I could declare two constant variables and explicitly assign these values to them, but I would prefer not to use "magic numbers".

Anybody here have suggestions for computing values in JavaScript that correspond to FLT_MIN and FLT_MAX?
Are there any built-in constants in JavaScript from which these values can be defined?
For example, is there a way to compute FLT_MIN in terms of Number.MIN_VALUE?
Could FLT_MIN be computed by some brute-force method, and computed each time it is required?
Anything ...?
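For example, rather than hard-coding decimal literals, perhaps the limits could be derived from their IEEE 754 definitions. A sketch of what I mean (assuming standard single-precision semantics: a 24-bit significand and exponents from -126 to 127):

```javascript
// Derive the single-precision (float) limits from their IEEE 754
// definitions instead of hard-coding decimal literals:
//   FLT_MAX = (2 - 2^-23) * 2^127   (largest finite float)
//   FLT_MIN = 2^-126                (smallest positive *normalized* float)
var FLT_MAX = (2 - Math.pow(2, -23)) * Math.pow(2, 127);
var FLT_MIN = Math.pow(2, -126);

console.log(FLT_MAX); // 3.4028234663852886e+38
console.log(FLT_MIN); // 1.1754943508222875e-38
```

Both values are exactly representable as doubles, so no precision is lost computing them this way.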

A related question:
Do these values vary from machine to machine?
Since the code runs in the client on a user's computer, would that influence the value computed by code that determines these values?
Say a small code block is written to compute FLT_MIN by brute force. Would its value be different on, say, a supercomputer with arbitrary precision than it would be on a simple 32-bit desktop computer?
(If so, that would be another reason to avoid "magic numbers" and try to better customize the code for the machine on which it runs.)

I don't expect to be all that helpful but there are a few things I wanted to point out.

As it turns out, JavaScript essentially has only one numeric data type (from MDN):

According to the ECMAScript standard, there is only one number type: the "double-precision 64-bit binary format IEEE 754 value".

So as far as wanting to match something like C++ and its various numeric data type limits, you won't find what you are looking for. The code and values posted above by JMRKER are the only min and max values for numeric javascript values (given that there is only one numeric data type).
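To make that concrete, a quick console check (in any ECMAScript-compliant engine) shows there is no int/float split, and that the built-in limits are the double-precision ones:

```javascript
// All JavaScript numbers are IEEE 754 doubles; there is no separate
// integer or single-precision type.
console.log(1 === 1.0);         // true: integer and float literals are the same type
console.log(Number.MAX_VALUE);  // 1.7976931348623157e+308, i.e. C's DBL_MAX
console.log(Number.MIN_VALUE);  // 5e-324, the smallest positive denormal double
```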

And to answer the related question: no, there are not different min or max values depending on the computer the code is run on. The min and max values are dictated by the language specification, not by the client's computer (unlike C++, where the limits are implementation-defined, though in practice IEEE 754 hardware makes them the same nearly everywhere). So those min and max values will remain the same regardless of a computer's architecture, processing power, or any other local factors.

"Given billions of tries, could a spilled bottle of ink ever fall into the words of Shakespeare?"

That's correct. Those numbers correspond to the values C++ returns for DBL_MAX and DBL_MIN, the limits for variables of type double.
What I'd like now are the corresponding values in JavaScript for variables of type float (i.e., single precision).

Unfortunately, it looks like such built-in quantities don't exist; I will have to declare and define some constant variables.

I have written a small routine that computes machine epsilon (DBL_EPSILON) by brute force, to confirm the value returned by the built-in C++ constant. The value returned by the JavaScript routine matches the built-in C++ constant on various computers. I wonder if a similar small program could be written to confirm FLT_MIN and FLT_MAX?
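For example, if the engine provides Math.fround (an ES2015 addition, so not available in older browsers), something like this might confirm FLT_MAX by brute force, since fround rounds a double to the nearest single-precision value:

```javascript
// Brute-force search for FLT_MAX, assuming Math.fround is available:
// find the largest value that still rounds to a finite float.
function bruteForceFltMax() {
  var x = 1;
  while (isFinite(Math.fround(x * 2))) x *= 2;  // x ends at 2^127
  var max = x, add = x / 2;
  // Set the remaining significand bits one at a time, high to low.
  while (Math.fround(add) > 0) {
    var next = Math.fround(max + add);
    if (isFinite(next)) max = next;
    add /= 2;
  }
  return max;
}

console.log(bruteForceFltMax()); // 3.4028234663852886e+38
```

FLT_MIN (the smallest *normalized* float) is harder to find this way, because Math.fround also produces denormals below 2^-126; simply halving until fround returns 0 yields the smallest denormal (2^-149) instead.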

As it turns out, JavaScript essentially has only one numeric data type. (from MDN)

While by the spec that is 'correct', you also can't rely upon it... if you run JMRKER's example on different OSes, processors, and browsers, you'll find that's actually the low end of the spectrum. Since IE's "JScript" is not really even close to ECMAScript-compliant, older versions often have 32-bit limits instead of 64-bit, and some newer ones exceed them.

I can't remember which browser it was, but there's one of them that will actually switch to arbitrary precision when a number gets larger than 64 bits... gah, for the life of me I can't remember which one. Once you go arbitrary-precision, "BCD" style, concepts like min and max become more a matter of system memory than processor limitations (admittedly with one heck of a speed penalty).

This routine uses the fundamental definition of DBL_EPSILON to compute it:
i.e., it finds the smallest number that can be added to 1.0 such that (1 + DBL_EPSILON) is distinguishable from 1.
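A minimal version of the idea (not my exact code) looks like this, halving eps until 1 + eps becomes indistinguishable from 1:

```javascript
// Compute machine epsilon by brute force: the smallest power of two eps
// such that 1 + eps is still distinguishable from 1.
function machineEpsilon() {
  var eps = 1;
  while (1 + eps / 2 > 1) {
    eps /= 2;
  }
  return eps;
}

console.log(machineEpsilon()); // 2.220446049250313e-16
```

In ES2015+ engines this value is also available directly as Number.EPSILON, which sidesteps the question entirely.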

I had done it this way because I do not know ahead of time what kind of browser/computer combination the program will be run on.
However, if it is a function of the language itself, I would be better off declaring DBL_EPSILON as a global constant in my programs, and not bother computing it within the program.

So, does it matter?

Should I edit my programs to declare DBL_EPSILON as a (constant) variable with an assigned value (e.g., 2.2204460492503131e-16)?
Or am I okay computing it explicitly, as I am presently doing?

JMRKER has it right that you're better off using the constant in most cases. It will almost always be calculated to the limit of the language implementation's data types. Brute-force recreating something the language provides for you is rarely, if ever, useful, unless you need higher precision than the constant offers, something unlikely to be an issue in an untyped language.

I mean, if you were in a Pascal or C compiler where you needed pi accurate to an 80-bit extended, THEN you brute-force it (or pre-calculate and assign to your own constant), since the built-in one usually stops at 32-bit (single), 48-bit (real), or 64-bit (double) floating-point precision, depending on the compiler and libraries used... but in JavaScript? Not so much.

Though beware that with the differences in JavaScript engine implementations, the precision of floating point math can result in browsers giving slightly different results depending on how 'deep' your math goes.

For a root-finding routine, a root can theoretically be found to within DBL_EPSILON.
For an optimization routine, a maximum can theoretically be found to within √DBL_EPSILON.

My programs try to get results as close to these theoretical limits as possible; hence the need for DBL_EPSILON.
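As an illustration of how that tolerance enters a root-finder, here is a simplified bisection sketch (names and structure are illustrative, not my actual solver):

```javascript
var DBL_EPSILON = Math.pow(2, -52);

// Bisection on [a, b], assuming f(a) and f(b) have opposite signs.
// The interval is shrunk until its width is within about one ulp of the
// endpoints, i.e. roughly as close to the root as doubles allow.
function bisect(f, a, b) {
  while (b - a > DBL_EPSILON * Math.max(Math.abs(a), Math.abs(b))) {
    var m = a + (b - a) / 2;
    if (m === a || m === b) break;  // interval can no longer shrink
    if (f(a) * f(m) <= 0) b = m;
    else a = m;
  }
  return a + (b - a) / 2;
}

console.log(bisect(function (x) { return x * x - 2; }, 1, 2)); // ≈ 1.4142135623730951
```

The relative tolerance (scaled by the endpoint magnitudes) matters: an absolute tolerance of DBL_EPSILON would loop forever for roots far from 1, since the interval can never get that narrow.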

At the moment, I am updating the polynomial root-finder.
The underlying algorithm is the same for the 100th-degree root solver as it is for the quartic, cubic, and quadratic solvers; however, since most people who have gone through Grade 10 know the quadratic equation, the quadratic solver is the most popular.

I do, indeed, calculate DBL_EPSILON once per program, and save it in a variable for use throughout the rest of the program, but now I am wondering if even that is too much.
Perhaps I should just explicitly assign it the value I get from my C++ program (2.2204460492503131e-16).

Suggestions?
Explicitly assign DBL_EPSILON to the value of 2.2204460492503131e-16?
Or leave the brute-force code block in place?