Dot Net Thoughts

October 20, 2007

Doubles, Decimals, and Dividing by Zero

Filed under: Misc Thoughts — dotnetthoughts @ 7:02 am

“.Net Exception Divide by zero” often shows up as a search term that brings people to this blog. (I use division by zero a lot when I’m writing about errors and exception handling. It’s easy to create, and it’s easy to understand.) I couldn’t quite figure out why anybody would be querying on it directly, but I think I’ve finally worked it out.

Last week, I was typing up a division example for the blog. For whatever reason, I used double instead of decimal for my input parameters and return value.

   private static double Divide(double i, double j)
   {
      return (i / j);
   }

When I passed in values of 5 and 0 for this method, I was expecting a divide by zero exception. Instead, my console app ran just fine and printed the word Infinity on my screen. I was totally caught off guard by this result. If you pull out an old calculus book, you’ll find that mathematicians say the limit of 1/x, as x approaches 0 from the positive side, is infinity (a standard limit), but that 1/0 itself is undefined. (You can’t take one object and break it into groups of zero.)

When I run the same code above using decimals, instead of doubles, I get a divide by zero exception. This is what I would expect.
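
Here’s a quick sketch of the contrast, with both versions side by side. (The DivideDecimal and CompareDivideByZero names are just ones I made up for this example; it reuses the Divide method above.)

   private static decimal DivideDecimal(decimal i, decimal j)
   {
      return (i / j);
   }

   private static void CompareDivideByZero()
   {
      Console.WriteLine(Divide(5, 0));          // prints "Infinity" -- the double version never throws

      try
      {
         Console.WriteLine(DivideDecimal(5, 0));
      }
      catch (DivideByZeroException ex)
      {
         Console.WriteLine(ex.Message);         // the decimal version throws instead
      }
   }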

With a little bit of thought and poking around, I think that there is a method behind the madness.

Doubles are floating point types, and floating point arithmetic is specifically engineered never to throw an exception for operations like dividing by zero. Instead, it returns special values: positive infinity, negative infinity, and NaN (not a number). Why?
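
Before digging into the why, here’s what those special values look like in practice. (This is my own quick check; like the snippet further down, it writes to the output window with Trace from System.Diagnostics.)

   private static void SpecialValues()
   {
      double positive = 5.0 / 0.0;     // double.PositiveInfinity
      double negative = -5.0 / 0.0;    // double.NegativeInfinity
      double notANumber = 0.0 / 0.0;   // double.NaN

      Trace.WriteLine(double.IsPositiveInfinity(positive));   // True
      Trace.WriteLine(double.IsNegativeInfinity(negative));   // True
      Trace.WriteLine(double.IsNaN(notANumber));              // True
   }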

I suspect that the reason has to do with precision. The double type supports numbers as small as +-5 * 10^-324 and as large as +-1.7 * 10^308. Its precision, however, is only 15 to 16 significant digits. Because of the way floating point numbers are stored, you can’t ever be totally sure exactly what value your double contains for very large or very small numbers. For example, the following code prints out a value of 9.99988867182683E-321 in my output window when I run it:

   private static void DoublePrecision()
   {
      // 10^-320 is within double's range, but the stored value drifts from the
      // mathematical one: this prints "double1: 9.99988867182683E-321" for me
      double double1 = 1 * Math.Pow(10, -320);
      Trace.WriteLine("double1: " + double1.ToString());
   }
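
That fuzziness isn’t limited to the extremes of the range, either. Here’s another small check of my own showing that even an everyday sum isn’t stored exactly:

   private static void DoubleRounding()
   {
      double sum = 0.1 + 0.2;
      Trace.WriteLine(sum == 0.3);          // False
      Trace.WriteLine(sum.ToString("R"));   // 0.30000000000000004
   }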

This loss of precision means that the runtime itself can’t determine whether a value truly is zero, or whether it is a very small number quite close to zero. Mathematically, 1/(10^-324)^2 is 1/10^-648, or 10^648. If that first expression were plugged into .Net, the true result wouldn’t be infinity, but it would be greater than anything a double can represent. The convention seems to be to return infinity in both cases, which answers my initial divide by zero question.
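
To see that convention in action without an actual zero, here’s a small sketch of my own using double.Epsilon, the smallest positive double:

   private static void OverflowToInfinity()
   {
      // double.Epsilon is about 4.9 * 10^-324 -- not zero, but 1 divided by it
      // is roughly 2 * 10^323, well past double.MaxValue (about 1.7 * 10^308),
      // so the runtime returns Infinity here too
      double tiny = double.Epsilon;
      Trace.WriteLine(1 / tiny);   // Infinity
   }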

Decimals, on the other hand, support numbers as small as +-1 * 10^-28 and as large as +-7.9 * 10^28. A decimal’s precision is 28 to 29 significant digits. Since the precision covers the full exponent range the type supports, the runtime knows that the value it holds is exact, at least to the defined precision. If it thinks it has a zero, it actually has a zero, and it can safely throw a DivideByZeroException if you try to divide by it. This exactness is why Microsoft encourages the decimal type for financial and monetary calculations.
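
Decimal follows the same philosophy at the top of its range, too: rather than falling back on a special value, it throws. (Another quick check of my own; the local variable keeps the multiplication from being treated as a compile-time constant.)

   private static void DecimalBoundaries()
   {
      Trace.WriteLine(decimal.MaxValue);   //  79228162514264337593543950335
      Trace.WriteLine(decimal.MinValue);   // -79228162514264337593543950335

      decimal big = decimal.MaxValue;      // forces the multiply below to happen at runtime
      try
      {
         Trace.WriteLine(big * 2);         // no Infinity to fall back on...
      }
      catch (OverflowException ex)
      {
         Trace.WriteLine(ex.Message);      // ...so decimal throws instead
      }
   }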

I’m making some educated guesses about the thinking of the authors of the IEEE 754 standard, which defines floating point types, but I don’t suspect I’m too far off. Let me know if you have any insights!

Good luck and code safe!

Mike
