Answers

The decimal has more significant figures than the double, therefore it can be more precise; it also takes up slightly more memory. Other than certain math or physics-related algorithms, the double or float should do fine.

One other thing to remember is that the decimal, double and float are real numbers (i.e. 1.5, 1.83, or 3.33) whereas the short, int and long are integers (i.e. 75, 600, and -9). You would use an integer as a counter in a 'for' loop, for example; whereas a float would be used for a monetary or interest-rate-calculating app, or anything else that requires fractions.

The chart below details most of the common variable types, as well as their size and possible values.

Another issue to take care of with decimal is the following. Assume you have:

Code Snippet

int x = 6;
decimal y = x / 12;   // integer division happens first, so y == 0, not 0.5

You were too lazy to cast 12.0 into decimal (otherwise the / operator would be applied to operands of different types) and decided that a shorter way was to drop the 0 and simply use 12.

Next, assume x is an int and equals, say, 6. In this case your result for y will be 0, and not the 0.5 you would expect, precisely because the / operator above is performed on two ints and its result is, of course, another int.
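
A minimal sketch of the fix (the m suffix makes 12m a decimal literal, so the division is done in decimal):

Code Snippet

int x = 6;
decimal y = x / 12m;   // x is promoted to decimal, so y == 0.5m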

I think the answer above has this backwards. Floating-point numbers are intended for scientific use, where the range of numbers is more important than absolute precision. Decimal numbers are an exact representation of a number and should always be used for monetary calculations. Decimal fractions do not necessarily have an exact representation as a floating-point number.
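
As a quick illustration of that last point, a minimal C# sketch (class and method names are for illustration only):

Code Snippet

using System;

class ExactnessDemo
{
    static void Main()
    {
        double d = 0.1 + 0.2;      // binary floating point: 0.1 and 0.2 are approximated
        decimal m = 0.1m + 0.2m;   // base-10 storage: both fractions are exact

        Console.WriteLine(d == 0.3);    // False
        Console.WriteLine(m == 0.3m);   // True
    }
}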

Thank you for saying this. Floating-point numbers should NOT be used for monetary or currency-related calculations either. That is to say, cds333's much-voted answer is fundamentally flawed, as it downplays the real difference between the two types.

"The decimal has more significant figures than the double, therefore it can be more precise- it also takes up slightly more memory. "

If you will pardon the unintentional pun, that's not very precise. Compared to floating-point types, the decimal type has BOTH a greater precision and a smaller range. The main difference between the decimal and double data types is that decimals are used to store exact values, while doubles and other binary-based floating-point types are used to store approximations. A binary-based floating-point number can only approximate a decimal floating-point number, and how well it approximates is directly correlated with its precision.

Doubles use floating-point storage in base 2, whereas the decimal stores the information in base 10.

So, for example, 2.25 as a decimal would be stored as 225 × 10^-2 (the significand 225 and the exponent -2 are what is actually stored), or some variation thereof.

The double would store 1001 × 2^-2 (the stored digits are in base 2: 1001 is binary for 9, and 9 × 2^-2 = 2.25).

You can think of an integer binary number as giving each digit a power-of-two place value, i.e.

128 64 32 16 8 4 2 1

For a floating-point number, you just need to extend that to negative powers of two as well, i.e.

16 8 4 2 1 1/2 1/4 1/8 1/16

or

16 8 4 2 1 .5 .25 .125 .0625
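
If you want to peek at the bits a double actually stores, here is a minimal sketch using .NET's BitConverter (2.25 is 1.001 in binary times 2^1, so the 11-bit exponent field holds 1023 + 1):

Code Snippet

using System;

class DoubleBits
{
    static void Main()
    {
        // IEEE 754 layout: 1 sign bit, 11 exponent bits, 52 significand bits
        long bits = BitConverter.DoubleToInt64Bits(2.25);
        Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));
        // 0 10000000000 0010000... : exponent 1024 - 1023 = 1, significand 1.001 (binary)
    }
}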

Some of the implications:

In my example I picked a number that is easily represented in binary format, but some numbers that are short, simple base-10 fractions are very long, infinitely repeating binary fractions. This means that when using the double, the number can sometimes be off from what you would expect.
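
For example, a minimal sketch showing that the double closest to 0.1 is not exactly one tenth (the G17 format prints enough digits to expose it):

Code Snippet

using System;

class TenthDemo
{
    static void Main()
    {
        double tenth = 0.1;   // binary 0.000110011... repeating, so it must be rounded
        Console.WriteLine(tenth.ToString("G17"));       // 0.10000000000000001
        Console.WriteLine((tenth * 3).ToString("R"));   // 0.30000000000000004
    }
}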

Hi!

I don't know much about this, but why is 2.25 stored in a double as 1001 × 2^-2 and not as 10010 × 2^-3 or 100100 × 2^-4 (all three significands in base 2)?

It's just a normalization convention: without it, the same number could be stored in a double in many different ways.

You are absolutely right. But I guess the purpose was to explain the rounding problem you can have when trying to store a number such as 0.2 (binary 0.00110011...). The exponent is not relevant for that.

If you want "as high precision as possible" and performance is also a priority, Double is the way to go.

If you need to avoid rounding errors or use a consistent number of decimal places, Decimal is the way to go.
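
One concrete example of "a consistent number of decimal places": decimal keeps track of its scale (trailing zeros), which double cannot do. A minimal sketch:

Code Snippet

using System;

class ScaleDemo
{
    static void Main()
    {
        // decimal results carry the scale of their operands
        Console.WriteLine(1.00m + 2.00m);   // 3.00
        Console.WriteLine(1.0m * 0.5m);     // 0.50
        // double has no notion of scale
        Console.WriteLine(1.00 + 2.00);     // 3
    }
}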

Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy. Of course, decimals are much slower than a double or float.

Decimal uses the most space and is the most accurate, but it's also quite a bit more expensive in processor time, as it is not an intrinsic type. One advantage of Decimal is that it is optimized for financial calculations.

Decimals have much higher precision and are usually used within financial applications that require a high degree of accuracy.

But be aware that high precision does not guarantee high accuracy; and high accuracy does not necessarily require high precision - check out the Wikipedia articles on 'Accuracy and Precision' and 'Numerical Methods/Errors'.
Much depends on the algorithm used - especially in iterative calculations.

Regards David R
---------------------------------------------------------------
The great thing about Object Oriented code is that it can make small, simple problems look like large, complex ones.
Object-oriented programming offers a sustainable way to write spaghetti code. - Paul Graham.
Every program eventually becomes rococo, and then rubble. - Alan Perlis
The only valid measurement of code quality: WTFs/minute.

Can someone explain this? Why would this be calculated in int? The variables are already in double. Even though it didn't say double r = 120.0, a double is still a double, right? Also, the code seems to be fine; z = 2.1333333 on my machine. I am using .NET 3.5 in C#. Maybe he was using C++ or something?

Also I just want to make sure I get this right.

Double has a higher range, and thus has the potential to store something much closer to the actual value.

Decimal has "exact" precision within its smaller range, and is thus more suitable for financial applications. But it is not used in scientific applications, because you lose range and end up rounding or truncating too early.

Decimal is a lot slower and is 16 bytes.

Remember to write decimal z = x / 5m; to make sure the division is calculated in decimal.
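
A minimal sketch of why the m suffix matters here (variable names are for illustration):

Code Snippet

using System;

class SuffixDemo
{
    static void Main()
    {
        int x = 8;
        decimal a = x / 5;    // integer division first: a == 1
        decimal b = x / 5m;   // x is promoted to decimal: b == 1.6
        Console.WriteLine(a);
        Console.WriteLine(b);
    }
}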