Dot Net Thoughts

November 29, 2007

Debug and Release Builds

Filed under: csharp,Debugging,Uncategorized — dotnetthoughts @ 9:41 pm

Occasionally, I find old code concepts rattling around in my head that just don’t apply in today’s world. My most recent one was the difference between release and debug builds.

Back in VB6 days, one of the big reasons for creating a debug build was the generation of the pdb file. The pdb (program database) file contained information about the names of items such as variables, classes, and methods. Furthermore, it contained information about where these values were located in the code. Without the pdb file, all of this information was unavailable.

Imagine my surprise when I ran some code without a pdb file the other day, and I received a stack trace containing method names. With a little further thought, though, I realized it wasn’t that surprising. When disassembling a dll with ILDASM, I can see all of the method and variable names with or without a pdb file present. It turns out that the inclusion of a pdb file will allow you to trace a bug to a module and line number, but it is no longer needed for artifact names.
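This is easy to verify yourself: even with no pdb alongside the assembly, a caught exception’s stack trace still lists method names, because those names come from the assembly’s metadata. Only the file and line number information requires the pdb. A minimal sketch (the Crash method is just an illustration):

```csharp
using System;

class StackTraceDemo
{
    static void Crash() => throw new InvalidOperationException("boom");

    static void Main()
    {
        try
        {
            Crash();
        }
        catch (InvalidOperationException ex)
        {
            // The method name (Crash) appears even with no pdb present;
            // with a pdb, the trace also includes file and line numbers.
            Console.WriteLine(ex.StackTrace);
        }
    }
}
```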

Release builds now contain pdb files by default. So what is the difference between the two build types? To help figure this out, I generated a very simple HelloWorld class and compiled it as both a release and a debug build. This class contained two methods.


        static void Main(string[] args) 
        { 
            Console.WriteLine(SayHello("World")); 
        } 

        static string SayHello(string name) 
        { 
            return String.Format("Hello {0}!", name); 
        } 
Opening the SayHello method of the release dll in ILDasm yields a pretty straightforward implementation of the code. In the disassembly, you can see the loading of the format string, the loading of the parameter, the call to String.Format, and the return of the value.


  .method private hidebysig static string 
          SayHello(string name) cil managed 
  { 
    // Code size       12 (0xc) 
    .maxstack  8 
    IL_0000:  ldstr      "Hello {0}!" 
    IL_0005:  ldarg.0 
    IL_0006:  call       string [mscorlib]System.String::Format(string, 
                                                                object) 
    IL_000b:  ret 
  } // end of method Program::SayHello       

The debug code looks almost exactly the same, but it includes a few differences to assist in debugging. A local variable is initialized to hold the return value so that it is available to a debugger. Also added are several nop (no operation) instructions and a br.s (branch) instruction. These create points within the method on which a breakpoint can be set.


  .method private hidebysig static string 
          SayHello(string name) cil managed 
  { 
    // Code size       17 (0x11) 
    .maxstack  2 
    .locals init ([0] string CS$1$0000) 
    IL_0000:  nop 
    IL_0001:  ldstr      "Hello {0}!" 
    IL_0006:  ldarg.0 
    IL_0007:  call       string [mscorlib]System.String::Format(string, 
                                                                object) 
    IL_000c:  stloc.0 
    IL_000d:  br.s       IL_000f 
    IL_000f:  ldloc.0 
    IL_0010:  ret 
  } // end of method Program::SayHello       

I expected that tweaking the code to call a few subroutines would at least lead to some inlining optimizations within the IL. The differences between the Debug and Release versions of the IL were similar to those above. So where were the optimizations?

The only other major difference between the IL versions is the set of DebuggingModes flags stored in the assembly’s DebuggableAttribute. This attribute links back to the debug compilation flags that can be set when compiling a project. Since these are the only differences in the IL, the optimizations must occur within the JIT compiler itself. Scott Hanselman has an excellent post on release versus debug compilation that is well worth reading.
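You can inspect those flags from managed code as well. A small sketch: it reads the DebuggableAttribute off the executing assembly via reflection and reports whether JIT optimization was disabled, as it typically is in a debug build.

```csharp
using System;
using System.Diagnostics;
using System.Reflection;

class DebugModeCheck
{
    static void Main()
    {
        Assembly assembly = Assembly.GetExecutingAssembly();
        DebuggableAttribute attribute =
            (DebuggableAttribute)Attribute.GetCustomAttribute(
                assembly, typeof(DebuggableAttribute));

        if (attribute == null)
        {
            // Some release builds carry no DebuggableAttribute at all.
            Console.WriteLine("No DebuggableAttribute found.");
        }
        else
        {
            // Debug builds typically disable JIT optimizations;
            // release builds leave them enabled.
            Console.WriteLine("JIT optimizer disabled: {0}",
                attribute.IsJITOptimizerDisabled);
        }
    }
}
```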

That’s it for today! Good luck and code safe!



November 18, 2007

An Interview Question (and a Thanksgiving note)

Filed under: Management — dotnetthoughts @ 7:21 pm

Interviewing potential hires is always tough. Determining whether or not somebody is good for your team from a technical and personality standpoint based on a two page resume and an hour interview is a nearly impossible job. I’ve made a number of hiring decisions over the years. Some of them worked out quite well; others, not so much.

I used to have a stack of “tricky” interview questions that I would ask. In my VB6 days, I would ask applicants to explain binary compatibility to me. I was amazed by the number of individuals who didn’t have any idea about such a fundamental concept in COM development. (In the interest of full disclosure, I would only give myself partial marks the first time somebody asked me that question. I told the interviewer that you could add a method to a COM contract. VB6 allowed you to get away with this through a horrible concept called interface forwarding, but it violates just about every COM rule known to man.)

Over the years, though, I’ve come to realize that tricky questions don’t mean anything. The percentage of the .Net Framework that the average developer will use in a typical day is tiny. If a competent developer doesn’t know exactly how to do something, there are plenty of resources on the web for them to research exactly what they need to know. Over the past couple of months, I’ve watched a team of developers with little or no experience in WPF and WCF start to build some pretty serious enterprise level applications using these new technologies.

I’m not saying that experience shouldn’t come into play when deciding to hire somebody, but I am saying that we need to know exactly why we are hiring for experience. If a decent developer with a moderate amount of experience can pick up any new coding techniques they need fairly easily, what is the real difference between a junior and a senior level developer?

As near as I can tell, the difference comes down to how they would answer the question “What does a developer do in a day?”

I recently talked with a fairly senior developer at another company. He typically works from home, and hates the thought of having to ever go into the office. “I never like to go in, because when I’m in the office no coding ever gets done. It is nothing more than meetings with other team members to talk about the project.” While I think all developers feel this frustration occasionally, his aversion to going into the office is constant.

Even though he has a senior title, I have a hard time accepting him as a senior developer. (This may be a bit unfair, as I don’t know exactly what standards he is being held to by his management.) I’ve found that the extremely good developers focus less on the code, and focus more on the team. They work to bring the weakest developers in the team up to be more in line with the stronger developers. They push the strong developers to continually improve and to keep expanding their horizons. They work with management on long-term planning to ensure the long-term success of the project. They do it all, and still have a love for the code and the languages and the bits and the bytes.

Leadership skills aren’t all that easy to come by. For developers to truly learn these skills, they have to see them modeled effectively. (I also believe that experience in seeing these skills modeled ineffectively can be extremely helpful in understanding the difference.) Reading books on methodologies, researching leadership on the internet, even getting an MBA, are all helpful activities when looking to learn about leadership, but they don’t go far enough. It takes mentorship and modeling from existing leaders to truly build and grow a successful organization. When a self-managing development team begins to work effectively, it is a beautiful sight to see.

I’ve been very blessed over the past couple of years to work with some extremely talented and effective leaders, and I just wanted to say a huge thank you this Thanksgiving for all of your time and guidance. I have learned so much from you and look forward to working with you over the next year.

Happy Turkey day, all!


November 10, 2007

TransactionScope and Unit Tests

Filed under: Uncategorized — dotnetthoughts @ 6:29 am

Writing unit tests to validate a data access layer (DAL) can be a time consuming (but life saving) task. One of the biggest challenges of DAL unit tests is ensuring that you have consistent data to pull from the database. Dedicating a database with static data for unit tests doesn’t always work. As unit tests are added to the project, data may need to be added to the database. This can cause previously created tests to fail, and a lot of time can be lost trying to resync everything.

One technique for getting around the unit test database consistency issue is to write the data you expect to find to the database yourself at the start of each test. The steps for doing this would be:

  • Begin a transaction in your unit test.
  • Write the data to the database that your data access layer will need.
  • Test the data access layer functionality against the inserted data.
  • Roll back the transaction.

Using a transaction has a couple of distinct advantages. Since you are running in the scope of an uncommitted transaction, your fellow developers running unit tests will not see your added data (isolation). Also, rolling back the transaction places the database back to the original state in which you found it.
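With plain ADO.NET, the explicit-transaction version of those steps looks something like the sketch below. The AddNewUser helper and the connection string are hypothetical stand-ins for your own test code.

```csharp
// Sketch of managing the test transaction explicitly. AddNewUser and
// connectionString are illustrative assumptions, not real APIs.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        // Insert the data the test expects to find.
        AddNewUser("Fred", connection, transaction);

        // ... exercise the code under test here ...

        // Put the database back the way we found it.
        transaction.Rollback();
    }
}
```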

Managing transactions by explicitly attaching them to the connection object doesn’t always work well when testing a data access layer. Since the DAL often contains code to retrieve its own connection, the following sequence often occurs:

  • Begin a transaction in your unit test.
  • Write the data to the database that your data access layer will need.
  • Call the Data Access Layer. The Data Access layer creates its own connection.
  • The DAL attempts to read the new data from the database. It is blocked, though, because it is in a different transaction than the unit test transaction, and will not be able to complete until the DAL commits.

The TransactionScope object helps alleviate this problem. It sits on top of the Distributed Transaction Coordinator, and will assign any code running within its context to the same transaction. In other words, a TransactionScope context within your unit tests will force your data access layer code to run within the same context. Isolation is maintained from other developers, but your DAL can access and manipulate the data as needed. (This does require that you have the Distributed Transaction Coordinator Service running on the box that handles the transaction.)

To demonstrate, let’s assume that I have a DAL method that I want to test that returns all users from a database. This method gets its own connection, retrieves the users, and returns them as a DataSet.

public DataSet GetUsers() 
{ 
    DataSet dataSet = new DataSet(); 
    string connectionString = GetConnectionString();      

    using (SqlConnection connection = new SqlConnection(connectionString)) 
    { 
        string sql = "SELECT * FROM [User]"; 
        SqlCommand command = new SqlCommand(sql, connection); 
        SqlDataAdapter adapter = new SqlDataAdapter(command); 
        adapter.Fill(dataSet); 
    } 

    return dataSet; 
} 

Here is the unit test to test this code. It doesn’t test nearly all of the functionality you would want to check in a real unit test, but it does demonstrate the TransactionScope. Note that the TransactionScope object doesn’t have an explicit Rollback() method. The rollback occurs if the TransactionScope object is disposed without Complete() being called on it. This happens at the end of the using block.

public void GetUsersTest() 
{ 
    string connectionString = GetConnectionString();     

    using (TransactionScope ts = new TransactionScope()) 
    { 
        using (SqlConnection connection = 
            new SqlConnection(connectionString)) 
        { 
            connection.Open(); 
            DataLayer dataAccessLayer = new DataLayer();     

            DataSet dataSet = dataAccessLayer.GetUsers(); 
            AddNewUser("Fred", connection);     

            dataSet = dataAccessLayer.GetUsers(); 
            DataRow[] dr = dataSet.Tables[0].Select("[UserName] = 'Fred'"); 
            Assert.AreEqual(1, dr.Length); 
        } 
        // No ts.Complete() call: disposing the scope rolls everything back. 
    } 
} 
Hope this is helpful. Good luck and code safe!

