Dot Net Thoughts

October 20, 2007

Doubles, Decimals, and Dividing by Zero

Filed under: Misc Thoughts — dotnetthoughts @ 7:02 am

“.Net Exception Divide by zero” often comes up as a search criteria when this blog is hit. (I use division by zero a lot when I’m writing about errors and exception handling. It’s easy to create, and it’s easy to understand.) I couldn’t quite figure out exactly why anybody would be querying on it directly. I think I’ve figured it out, though.

Last week, I was typing up a division example for the blog. For whatever reason, I used double instead of decimal for my input parameters and return value.

   private static double Divide(double i, double j)
   {
       return (i / j);
   }

When I passed in values of 5 and 0 for this method, I was expecting a divide by zero exception. Instead, my console app ran just fine and printed the word Infinity on my screen. I was totally caught off guard by this result. If you pull out an old calculus book, you’ll find that mathematicians say the limit of 1/x, as x approaches 0 from the right, is infinity (a standard limit), but that 1/0 itself is undefined. (You can’t take one object and break it into groups of zero.)

When I run the same code above using decimals, instead of doubles, I get a divide by zero exception. This is what I would expect.
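Both behaviors are easy to reproduce in a small console test. The method names below are my own; the point is simply that the same inputs give Infinity for doubles and an exception for decimals:

```csharp
using System;

class DivideDemo
{
    private static double DivideDouble(double i, double j)
    {
        return i / j;
    }

    private static decimal DivideDecimal(decimal i, decimal j)
    {
        return i / j;
    }

    static void Main()
    {
        // IEEE 754 doubles never throw on division; they produce special values.
        Console.WriteLine(DivideDouble(5, 0));

        // Decimals have no concept of infinity, so the same call throws.
        try
        {
            DivideDecimal(5, 0);
        }
        catch (DivideByZeroException)
        {
            Console.WriteLine("DivideByZeroException");
        }
    }
}
```

(The exact string printed for the double result depends on the runtime's formatting of double.PositiveInfinity.)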

With a little bit of thought and poking around, I think that there is a method behind the madness.

Doubles are floating point types, and floating point arithmetic is specifically engineered never to throw an exception. Instead, it returns special values such as positive infinity, negative infinity, and not a number (NaN). Why?

I suspect that the reason has to do with precision. The double type supports numbers as small as ±5×10^-324 and as large as ±1.7×10^308. Its precision, however, is only 15 to 16 digits. Due to the way that floating points are handled, you can’t ever be totally sure exactly what value your double contains for very large or very small numbers. For example, the following code prints out a value of 9.99988867182683E-321 in my output window when I run it:

   private static void DoublePrecision()
   {
       double double1 = 1 * Math.Pow((double)10, (double)-320);
       Trace.WriteLine("double1: " + double1.ToString());
   }

This loss of precision means that the runtime can’t determine whether a value truly is zero, or whether it is a really small number quite close to zero. Mathematically, 1/(10^-324)^2 is 1/10^-648, which is 10^648. If that expression were plugged into .Net, the result wouldn’t be mathematical infinity, but it would be greater than any value a double can represent. Returning infinity in these cases seems to be the convention, and it answers my initial divide by zero question.
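The IEEE special values are easy to see directly. Nothing here is specific to the code above; these are just the standard outcomes of floating point edge cases:

```csharp
using System;

class SpecialValues
{
    static void Main()
    {
        double posInf = 1.0 / 0.0;   // positive infinity, no exception
        double negInf = -1.0 / 0.0;  // negative infinity
        double nan = 0.0 / 0.0;      // not a number

        Console.WriteLine(double.IsPositiveInfinity(posInf)); // True
        Console.WriteLine(double.IsNegativeInfinity(negInf)); // True
        Console.WriteLine(double.IsNaN(nan));                 // True

        // Overflowing the double range also saturates to infinity
        // rather than throwing.
        double huge = double.MaxValue;
        Console.WriteLine(double.IsInfinity(huge * 2));       // True
    }
}
```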

Decimals, on the other hand, support numbers as small as ±1×10^-28 and as large as ±7.9×10^28. A decimal’s precision is 28 to 29 significant digits. Since the precision covers the full exponential range supported, the runtime knows the value it holds is accurate, at least to the defined precision. If it thinks it has a zero, it actually has a zero, and it can safely throw a DivideByZeroException if you try to divide by it. This accuracy is why Microsoft encourages decimals for financial and monetary values.
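The same contrast shows up with overflow. Where a double quietly saturates to infinity, a decimal refuses to degrade and throws instead. A quick sketch:

```csharp
using System;

class OverflowContrast
{
    static void Main()
    {
        // double: past the top of the range, you quietly get infinity.
        double dmax = double.MaxValue;
        Console.WriteLine(double.IsInfinity(dmax * 2)); // True

        // decimal: the same idea throws an OverflowException instead.
        try
        {
            decimal m = decimal.MaxValue;
            decimal big = m * 2;
            Console.WriteLine(big);
        }
        catch (OverflowException)
        {
            Console.WriteLine("OverflowException");
        }
    }
}
```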

I’m making some educated guesses behind the thinking of the writers of the IEEE 754 standard, which defines floating types, but I don’t suspect I’m too far off. Let me know if you have any insights!

Good luck and code safe!


October 12, 2007

Why Language is Important (Why I prefer C#)

Filed under: Debugging,Misc Thoughts — dotnetthoughts @ 7:13 pm

How many times have you heard this statement?

 “It doesn’t really matter whether you choose Visual Basic.Net or C#. It all compiles down to the CLR, anyway.”

This statement makes me shudder. It’s at least partially true. All managed code does compile down into the common language runtime. This is what allows us to mix and match components written in different languages when building an application. What this statement doesn’t recognize is the fact that every language was created for a specific purpose.

In his book CLR via C#, Jeffrey Richter lists around two dozen different compilers he knows of. These include many well known languages such as C#, J#, LISP, Perl, and Eiffel, just to name a few. Having this many different compilers would be a lot of wasted effort if each language acted exactly the same as every other language out there.

The reality is that each language has its advantages and disadvantages. C# and VB.Net are great for handling Input and Output. APL is optimized for mathematics and finance. PERL is a monster when it comes to string manipulation. LISP, one of the oldest programming languages, is still the language of choice for AI. C++.Net allows both managed and unmanaged code to run within the same module. Every language is designed for a specific type of development.

I would love to learn the ins and outs of all the different languages out there. When working as a consultant, however, reality dictates that I will almost always be programming either C# or Visual Basic.Net.

Choosing between these two isn’t always an easy decision. In many ways, I find the UI presented by the Visual Basic team to be superior to the UI offered by C#. Filtering methods and properties down to those most used, the My namespace, and better on-the-fly error detection make Visual Basic.Net a very enjoyable programming experience. When I’m in the IDE, I feel that VB wins hands down.

Ultimately, though, when I look at the intent of the entire language (and not just the UI), VB loses some of its shine. C# was designed from scratch around the .Net runtime, with a clean mapping between language features and runtime capabilities. VB, on the other hand, was designed to maintain market share by retaining much of the syntax of the previous version of VB, which often does not make sense in the context of the new .Net environment.

Let me give you an example. In legacy VB days, there was no such thing as structured exception handling. All errors were handled either by calling On Error Goto <label>, or by calling On Error Resume Next. On Error Goto isn’t a particularly good way to handle errors. On Error Resume Next is a disaster. Take this code, for example:

  Sub ResumeNextTest()
     On Error Resume Next
     Dim xml As XmlDocument = New XmlDocument
     xml.Load("MissingFile.xml") ' illustrative path; a failure here is silently ignored
     Console.WriteLine(xml.OuterXml)
     Console.WriteLine("ResumeNext complete.")
  End Sub

If the attempt to load the xml fails, there will be no indication. You could probably infer that the xml didn’t load when the Console prints out nothing on the next line, but there is no explicit indication. In this example, the Xml data is accessed right away. In a more complex app, though, the problem of the missing Xml data may not show up for quite some time.

The .Net runtime has no concept of On Error Resume Next, though. If an error is thrown, something has to happen. How does VB get around this?

To find out, I wrote a simple Console App with the above method included, and ran Reflector against it to see what the VB compiler did when it converted the code to IL. (I’ve included the converted code at the bottom of this post.) Essentially, VB wraps the entire On Error Resume Next method in a try-catch block. As it progresses through the code, the VB$CurrentStatement variable is updated to track the current location in the code. Should an exception occur, it is swallowed, and execution is redirected to the statement following the point where the trouble happened.
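In C# terms, the shape the compiler generates looks roughly like the sketch below. This is my own simplified reconstruction of the pattern, not the literal decompiled output; the statement counter and resume target mirror the VB$CurrentStatement and VB$ResumeTarget variables in the real listing:

```csharp
using System;

class ResumeNextSketch
{
    // Simplified reconstruction of the On Error Resume Next pattern:
    // track the current statement, swallow any exception, and jump back in
    // at the statement after the one that failed.
    public static string ResumeNextShape()
    {
        string log = "";
        int currentStatement = 0;
        int resumeTarget = 0;
    Resume:
        try
        {
            switch (resumeTarget)
            {
                case 1: goto Statement2;
                case 2: goto Done;
            }
            currentStatement = 1;
            int zero = 0;
            int boom = 1 / zero;     // throws; control resumes at Statement2
            log += "statement1;" + boom;
        Statement2:
            currentStatement = 2;
            log += "statement2;";
        Done:
            return log;
        }
        catch (Exception)
        {
            resumeTarget = currentStatement; // remember where we failed
            goto Resume;                     // swallow the exception and resume
        }
    }

    static void Main()
    {
        // Only "statement2;" makes it into the log; the failure left no trace.
        Console.WriteLine(ResumeNextShape());
    }
}
```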

While this is a clever way of solving the problem, it allows a very dangerous practice, one that should have been eliminated, to live on. Furthermore, if a developer ever presented me with this kind of spaghetti code during a code review, we would have to have a serious talk.

Your choice of VB, C#, or any other compiler is up to you. Be sure you know the pros and cons of whatever language before you develop with it, though.

Good luck and code safe!


public static void ResumeNextTest()
{
    // This item is obfuscated and can not be translated.
    int VB$ResumeTarget;
    try
    {
        int VB$CurrentStatement;
        int VB$ActiveHandler = -2;
        VB$CurrentStatement = 2;
        XmlDocument xml = new XmlDocument();
        VB$CurrentStatement = 3;
        goto Label_0089;
        VB$ResumeTarget = 0;
        switch ((VB$ResumeTarget + 1))
        {
            case 1:
                goto Label_0001;
            case 2:
                goto Label_0009;
            case 3:
                goto Label_0011;
            case 4:
                goto Label_0089;
        }
        goto Label_007E;
        VB$ResumeTarget = VB$CurrentStatement;
        switch (((VB$ActiveHandler > -2) ? VB$ActiveHandler : 1))
        {
            case 0:
                goto Label_007E;
            case 1:
                goto Label_0024;
        }
    }
    catch (object obj1) when (?)
    {
        ProjectData.SetProjectError((Exception) obj1);
        goto Label_0044;
    }
    throw ProjectData.CreateProjectError(-2146828237);
    if (VB$ResumeTarget != 0)

September 22, 2007

Error Handling – Best Practices

Filed under: Best Practices — dotnetthoughts @ 6:54 am

I had a question from a friend about exception handling best practices earlier this week. He’d mentioned that it’s pretty hard to find good information about the topic. I rattled back a list of fairly generic dos and don’ts that we’ve all seen with exception handling before. As I thought about it a little bit more, I came to realize that we accept a lot of these on blind faith, so I thought I’d take a closer look at some of the error handling recommendations made in the .Net Framework Design Guidelines.


Exceptions Are Expensive

Exceptions are fairly costly operations and will generally be considerably slower than a programmatic solution. How much slower? To find out, I wrote a quick-and-dirty test application. This application divides two values, but, instead of returning an error to the client, it returns a null when attempting to divide by zero. The project contains two implementations of the calculator object: the calculator using exceptions, and the calculator not using exceptions. The only difference between the two is the way the division by zero is handled.

In the exception version, we are defaulting the return value to null, and simply hiding the exception when it occurs.

  public decimal? Divide(decimal numerator, decimal denominator)
  {
      decimal? returnValue = null;
      try
      {
          return numerator / denominator;
      }
      catch (DivideByZeroException) { }
      return returnValue;
  }

In the non-exception version, we are checking the denominator, and simply returning null if the denominator is equal to 0.

  public decimal? Divide(decimal numerator, decimal denominator)
  {
      decimal? returnValue = null;
      if (denominator != 0)
      {
          returnValue = numerator / denominator;
      }
      return returnValue;
  }

While I was expecting there to be a little bit of a difference between the results, I was amazed at how big the difference was. On my machine, the object which was using exceptions ran 5000 iterations in 35 seconds. The version which explicitly checks the denominator value instead of using exception conditions ran in two milliseconds. [Note: Jon Skeet notes in the comments that this calculation is skewed, because I was running the code in the IDE at the time. He is correct. When running in the IDE, I get the 35 seconds to 2 milliseconds results. When running the compiled versions (Release or Debug), I’m seeing something on the order of 500 milliseconds to 2 milliseconds. It’s still pretty bad, but nothing on the order presented above.]
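For reference, the timing loop looks roughly like this. The method names here are my own, and the absolute numbers will vary by machine and by whether a debugger is attached:

```csharp
using System;
using System.Diagnostics;

class ExceptionCost
{
    // Division that swallows the divide-by-zero exception.
    public static decimal? DivideWithException(decimal numerator, decimal denominator)
    {
        decimal? returnValue = null;
        try
        {
            return numerator / denominator;
        }
        catch (DivideByZeroException) { }
        return returnValue;
    }

    // Division that checks the denominator up front.
    public static decimal? DivideWithCheck(decimal numerator, decimal denominator)
    {
        decimal? returnValue = null;
        if (denominator != 0)
        {
            returnValue = numerator / denominator;
        }
        return returnValue;
    }

    static void Main()
    {
        const int iterations = 5000;

        Stopwatch watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            DivideWithException(1m, 0m);
        }
        Console.WriteLine("With exceptions:   " + watch.ElapsedMilliseconds + " ms");

        watch.Restart();
        for (int i = 0; i < iterations; i++)
        {
            DivideWithCheck(1m, 0m);
        }
        Console.WriteLine("With a zero check: " + watch.ElapsedMilliseconds + " ms");
    }
}
```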

Handling Generic Exceptions

Section 7.2.2 talks about avoiding handling non-specific exceptions. For example:

catch (Exception ex) { /* handling here */ }

In general, this guideline is quite sound. If you are catching the general base class exception, you have no idea what has gone wrong, and shouldn’t attempt to recover from it. If an exception is thrown while writing to a file, for example, you do not know if the exception was because of an illegal file operation or due to an out-of-memory exception. It does take more time to research the appropriate exceptions that can be thrown by an object, but it can prevent a lot of pain in the long run.
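A file-writing sketch makes the point. The method name and paths below are illustrative, not from a real project; the idea is that each catch names a failure you actually know how to respond to, while anything else is left to bubble up:

```csharp
using System;
using System.IO;

class SpecificCatch
{
    public static void SaveReport(string path, string contents)
    {
        try
        {
            File.WriteAllText(path, contents);
        }
        catch (DirectoryNotFoundException)
        {
            // A specific, recoverable condition: we know exactly what went wrong.
            Console.WriteLine("Report folder is missing; creating it would be a sane recovery.");
        }
        catch (UnauthorizedAccessException)
        {
            Console.WriteLine("No write permission; tell the user rather than guess.");
        }
        // Anything else (OutOfMemoryException, etc.) bubbles up untouched.
    }

    static void Main()
    {
        // Writing into a directory that does not exist exercises the first catch.
        string missingDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString());
        SaveReport(Path.Combine(missingDir, "report.txt"), "hello");
    }
}
```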

The one exception I’ll often make to this rule is at the boundary of an object. If an exception has been thrown and has bubbled all the way up to the top of the stack, something truly unexpected has occurred. I’ll catch the generic exception object, log the exception to a log file, and then kill the application.
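That boundary handler tends to look something like the sketch below. The log file name and method names are placeholders of mine, not a prescribed pattern:

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        try
        {
            RunApplication();
        }
        catch (Exception ex)
        {
            // Boundary of the app: something truly unexpected happened.
            // Log everything we know, then get out.
            File.AppendAllText("fatal.log",
                DateTime.Now + ": " + ex + Environment.NewLine);
            Environment.Exit(1);
        }
    }

    static void RunApplication()
    {
        // ... real work goes here ...
    }
}
```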

Use an Empty Throw

Section 7.2.2 also talks about using just an empty throw command when rethrowing an exception. In other words, use:

  catch (Exception ex)
  {
      throw;
  }

Note that I’m not using throw ex. The primary reason for this is to maintain the stack trace. If you look at the Rethrowing Exceptions application, you’ll note that the only difference between the GoodErrorHandling and the BadErrorHandling methods is that the good method uses throw, and the bad one uses throw ex. Both methods call MyFlawedMethod, which throws an InvalidOperationException. The resulting stack traces are displayed in a message box.

Examining the stack trace messages closely, you’ll notice something. The line number in the top frame of the stack points to different locations, even though the same base exception was thrown. The GoodErrorHandling frame points to the actual location of the error. When BadErrorHandling rethrew with throw ex, however, the stack trace was reset and now points at the error handling code in BadErrorHandling. This makes chasing down the originator of the actual error much less fun.
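The difference is easy to demonstrate without a message box. This is my own reduced version of the two methods, not the code from the Rethrowing Exceptions app; the NoInlining attribute just keeps the JIT from collapsing the flawed method's frame:

```csharp
using System;
using System.Runtime.CompilerServices;

class RethrowDemo
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void MyFlawedMethod()
    {
        throw new InvalidOperationException("boom");
    }

    public static string GoodErrorHandling()
    {
        try
        {
            try { MyFlawedMethod(); }
            catch (Exception)
            {
                throw;        // preserves the original stack trace
            }
        }
        catch (Exception outer)
        {
            return outer.StackTrace;
        }
        return "";
    }

    public static string BadErrorHandling()
    {
        try
        {
            try { MyFlawedMethod(); }
            catch (Exception ex)
            {
                throw ex;     // resets the stack trace at this line
            }
        }
        catch (Exception outer)
        {
            return outer.StackTrace;
        }
        return "";
    }

    static void Main()
    {
        // The preserved trace still names MyFlawedMethod; the reset one does not.
        Console.WriteLine(GoodErrorHandling().Contains("MyFlawedMethod")); // True
        Console.WriteLine(BadErrorHandling().Contains("MyFlawedMethod"));  // False
    }
}
```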

Code safe!
