
Tuesday, September 29, 2009

Memory leaks in .NET applications have always been a programmer's nightmare, and they are at their worst on production servers. Production servers need to run with minimal downtime, yet memory leaks grow slowly and after some time bring the server down by consuming huge chunks of memory. More often than not, people reboot the system, get it working temporarily, and send a sorry note to the customer for the downtime.

The first and foremost task is to confirm that there actually is a memory leak. Many developers use Windows Task Manager to answer the question "Is there a memory leak in the application?". Using Task Manager is not only misleading, it also gives little information about where the memory leak is.

First, let's try to understand how the Task Manager memory information is misleading. Task Manager shows the working set, not the memory actually used. The working set is allocated memory, not used memory. Furthermore, parts of the working set can be shared with other processes / applications.

To measure the memory really consumed by the application, we need to track its private bytes. Private bytes are memory regions which are not shared with any other application. To monitor the private bytes consumed by an application, we use performance counters.
Below are the steps to track private bytes in an application using performance counters:-

Start the application which has the memory leak and keep it running.

Click Start → Run and type 'perfmon'.

Delete all the current performance counters by selecting each counter and hitting the delete button.

Add the 'Private Bytes' counter from the 'Process' performance object, and from the instance list select the application which you want to test for memory leaks.

If your application shows a steady increase in the private bytes value, we have a memory leak. You can see in the below figure how the private bytes value increases steadily, confirming that the application has a memory leak.

The above graph shows a linear increase, but in a live deployment it can take hours for the uptrend to show. To check for a memory leak you may need to run the performance counter for hours, or even days, on the production server.

Before we try to identify the type of leak, let's understand how memory is allocated in .NET applications. A .NET application has two types of memory: managed memory and unmanaged memory. Managed memory is controlled by the garbage collector, while unmanaged memory lies outside the garbage collector's boundary.

So the first thing to determine is the type of memory leak: is it a managed leak or an unmanaged leak? To find out, we need to measure two performance counters.
The first is the private bytes counter for the application, which we have already seen in the previous section.
The second counter is 'Bytes in all Heaps'. Select '.NET CLR Memory' in the performance object list, select 'Bytes in all Heaps' from the counter list, and then select the application which has the memory leak.

Private bytes are the total memory consumed by the application, while bytes in all heaps is the memory consumed by managed code. So the equation becomes something like the below figure shows:
Unmanaged memory + Bytes in all heaps = Private bytes. So if we want to find the unmanaged memory, we can always subtract the bytes in all heaps from the private bytes.
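If you prefer to read the same two counters from code instead of perfmon, the System.Diagnostics.PerformanceCounter class exposes them. Below is a small sketch; the category and counter names are the standard Windows ones, and 'MyApp' is a placeholder for your process name as perfmon shows it.

```csharp
using System;
using System.Diagnostics;

class LeakTypeCheck
{
    static void Main()
    {
        // Placeholder instance name: use your process name as shown in perfmon.
        string instance = "MyApp";

        var privateBytes = new PerformanceCounter("Process", "Private Bytes", instance);
        var heapBytes = new PerformanceCounter(".NET CLR Memory", "# Bytes in all Heaps", instance);

        float total = privateBytes.NextValue();   // total memory consumed by the process
        float managed = heapBytes.NextValue();    // memory consumed by managed code

        // Unmanaged memory = private bytes - bytes in all heaps
        Console.WriteLine("Unmanaged bytes: {0}", total - managed);
    }
}
```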
Now we will make two statements:-

If private bytes increase while bytes in all heaps remains constant, it's an unmanaged memory leak.

If bytes in all heaps increases linearly, it's a managed memory leak.

Below is a typical screenshot of an unmanaged leak. You can see private bytes increasing while bytes in all heaps remains constant.

Below is a typical screenshot of a managed leak. Bytes in all heaps is increasing.

Now that we have answered what type of memory is leaking, it's time to see how the memory is leaking. In other words, who is causing the memory leak?
So let's inject an unmanaged memory leak by calling the 'Marshal.AllocHGlobal' function. This function allocates unmanaged memory, thereby injecting an unmanaged leak into the application. The call is run from a timer a number of times to build up a large unmanaged leak.
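The timer-driven leak described above can be sketched as a minimal console version. The timer interval and the 1 MB allocation size below are my own choices for illustration, not from the original sample.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Timers;

class UnmanagedLeakDemo
{
    static void Main()
    {
        var timer = new Timer(100); // fire every 100 ms
        timer.Elapsed += (s, e) =>
        {
            // Allocate 1 MB of unmanaged memory and deliberately "lose"
            // the pointer: Marshal.FreeHGlobal is never called, so private
            // bytes climb while bytes in all heaps stays flat.
            IntPtr leaked = Marshal.AllocHGlobal(1024 * 1024);
        };
        timer.Start();
        Console.ReadLine(); // keep the process alive while perfmon watches it
    }
}
```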

It's very difficult to inject a true managed leak, because the GC ensures that memory is eventually reclaimed. To keep things simple we simulate a managed memory leak by creating lots of brush objects and adding them to a list held in a class-level variable. This is a simulation rather than a real managed leak: once the application is closed, the memory is reclaimed.
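The simulation described above might look like the following sketch. The brush type, loop count and method names are assumptions; the key point is that the class-level list roots every object, so the GC can never collect them and bytes in all heaps keeps growing.

```csharp
using System.Collections.Generic;
using System.Drawing; // requires a reference to System.Drawing

class ManagedLeakDemo
{
    // Class-level list keeps every brush reachable, defeating the GC.
    private static readonly List<SolidBrush> _brushes = new List<SolidBrush>();

    // Called from a timer, just like the unmanaged example.
    public static void OnTimerTick()
    {
        for (int i = 0; i < 1000; i++)
        {
            _brushes.Add(new SolidBrush(Color.Red));
        }
    }
}
```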

Once you know the source of the memory leak, it's time to find out which logic is causing it. There is no automated tool that detects the logic which caused a memory leak. You need to go through your code manually, using the pointers provided by 'DebugDiag' to work out where the issues are.
For instance, from the report it's clear that 'AllocHGlobal' is causing the unmanaged leak, while one of the GDI objects is causing the managed leak. Using these details we then go into the code to see exactly where the issue lies.

It would be unfair on my part to claim the above article is entirely my own knowledge. Thanks to all the lovely people below who have written articles so that one day someone like me could benefit.

http://blogs.msdn.com/tess/ : a great blog by a lovely lady, Tess, on debugging. There are some great labs on memory leak detection using WinDbg; do not miss them. Tess, God bless you, your blog rocks like anything.

Saturday, September 26, 2009

Bandwidth performance is one of the critical requirements for every website. In today's world the major cost of a website is not hard disk space but bandwidth, so transferring the maximum amount of data over the available bandwidth becomes very critical. In this article we will see how we can use IIS compression to improve bandwidth performance.

Best Practice No 4:- Improve bandwidth performance of ASP.NET sites using IIS compression

Note :- All examples shown in this article use IIS 6.0. The only reason we have used IIS 6.0 is that 7.0 is still not that common.

Before we move ahead and talk about how IIS compression works, let's understand how IIS normally works. Say the user requests a 'Home.html' page which is 100 KB in size. IIS serves this request by passing the 100 KB HTML page over the wire to the end user's browser.

When compression is enabled on IIS the sequence of events changes as follows:-

• The user requests a page from the IIS server. While requesting the page, the browser also sends the compression types it supports. Below is a simple request sent to the server which says the browser supports 'gzip' and 'deflate'. We used Fiddler (http://www.fiddler2.com/fiddler2/version.asp) to capture the request data.

• Depending on the compression type support sent by the browser, IIS compresses the data and sends it over the wire to the browser.

• The browser then decompresses the data and displays it.

Compression fundamentals:- Gzip and deflate

IIS supports two kinds of compression, gzip and deflate. Both are more or less the same; gzip is an extension over deflate. Deflate is a compression algorithm which combines LZ77 and Huffman coding. In case you are interested in reading more about LZ77 and Huffman coding, see http://www.zlib.net/feldspar.html .

Gzip is based on deflate algorithm with extra headers added to the deflate payload.

Below are the header details added to the deflate payload. Gzip starts with a 10-byte header which has the version number and a timestamp, followed by optional headers such as the file name. At the end come the actual deflate-compressed payload and an 8-byte checksum to ensure data is not lost in transmission.
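A quick way to see the gzip framing around deflate is to compress the same data with .NET's DeflateStream and GZipStream and compare the output; this is a small sketch, and the only format fact it relies on is that a gzip stream begins with the magic bytes 0x1F 0x8B.

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

class GzipVsDeflate
{
    static void Main()
    {
        byte[] data = Encoding.ASCII.GetBytes(new string('a', 1000));

        byte[] deflated = Compress(data, useGzip: false);
        byte[] gzipped = Compress(data, useGzip: true);

        // gzip = deflate payload plus header and trailer bytes,
        // so the gzip output is slightly larger than raw deflate.
        Console.WriteLine("deflate: {0} bytes, gzip: {1} bytes", deflated.Length, gzipped.Length);

        // Every gzip member starts with the magic bytes 1F 8B.
        Console.WriteLine("gzip magic: {0:X2} {1:X2}", gzipped[0], gzipped[1]);
    }

    static byte[] Compress(byte[] data, bool useGzip)
    {
        using (var ms = new MemoryStream())
        {
            Stream zip = useGzip
                ? (Stream)new GZipStream(ms, CompressionMode.Compress)
                : new DeflateStream(ms, CompressionMode.Compress);
            using (zip) { zip.Write(data, 0, data.Length); }
            return ms.ToArray();
        }
    }
}
```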

Google, Yahoo and Amazon use gzip, so we can safely assume it's supported by most browsers.

Till now we have done enough of theory to understand IIS compression. Let’s get our hands dirty to see how we can actually enable IIS compression.

Step 1:- Enable compression

The first step is to enable compression on IIS. Right click on 'Web Sites' → Properties and click on the Service tab. To enable compression we need to check the two checkboxes on the Service tab of the IIS website properties. The below figure shows the location of both checkboxes.

Step 2:- Enable metabase.xml edit

Metadata for IIS comes from 'Metabase.xml', which is located at "%windir%\system32\inetsrv\". For compression to work properly we need to make some changes to this XML file, and for that IIS must give us edit rights. So right click on your IIS server root → Properties and check the 'Enable Direct Metabase Edit' checkbox as shown in the below figure.

Step 3:- Set the compression level and extension types

The next step is to set the compression levels and extension types. The compression level can be set between 0 and 10, where 0 specifies mild compression and 10 specifies the highest level of compression. This value is specified using the 'HcDynamicCompressionLevel' property. There are two compression schemes, 'deflate' and 'gzip', and the property needs to be specified for both, as shown in the below figures. We also need to specify which file types should be compressed; 'HcScriptFileExtensions' helps us specify the same. For the current scenario we specified that ASPX output should be compressed before being sent to the browser.
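After these edits, the relevant part of Metabase.xml ends up looking roughly like this. This is a hand-written sketch of the two IIsCompressionScheme entries, not a verbatim copy of a real metabase; attribute values are illustrative.

```xml
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/deflate"
    HcDoDynamicCompression="TRUE"
    HcDynamicCompressionLevel="4"
    HcScriptFileExtensions="asp
        aspx" />
<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/gzip"
    HcDoDynamicCompression="TRUE"
    HcDynamicCompressionLevel="4"
    HcScriptFileExtensions="asp
        aspx" />
```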

Step 4:- Does it really work?

Once you are done with the above steps, it's time to see if the compression really works. So we will create a simple C# ASP.NET page which loops 10000 times and sends some output to the browser.
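A page like the one described might look like this. This is my own minimal version of the loop page; the loop count of 10000 is from the article, the page and method names are assumptions.

```csharp
// Code-behind for a hypothetical Loop.aspx page: writes a line of text
// 10000 times so there is enough repetitive output for IIS compression
// to shrink dramatically.
using System;

public partial class Loop : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        for (int i = 0; i < 10000; i++)
        {
            Response.Write("This is loop iteration number " + i + "<br/>");
        }
    }
}
```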

To see the difference before and after compression, we run the Fiddler tool while we request our ASP.NET loop page. You can download Fiddler from http://www.fiddler2.com/fiddler2/version.asp . The below screens show the data captured by Fiddler without compression and with compression. Without compression the data is "80501 bytes"; with compression it comes to "629 bytes". I am sure you will agree that's a great improvement from a bandwidth point of view.

In the previous section we set 'HcDynamicCompressionLevel' to the value '4'. The higher the compression level, the smaller the data, but the downside of increasing the level is higher CPU utilization. One of the big challenges is figuring out the optimum compression level, which depends on many things: the type of data, the load, etc. In the coming sections we will try to derive the best compression level for different scenarios.

If your site only serves already-compressed data like 'JPEG' and 'PDF', it's probably not advisable to enable compression at all, as CPU utilization increases considerably for small compression gains. In general we need to balance compression against CPU utilization: the more we increase the compression level, the more CPU resources are used.

Different data types need different IIS compression levels for optimal results. In the coming sections we will take different data types, analyze them at different compression levels, and see how CPU utilization is affected. The below figure shows the different data types with some example file types.

Let's start with the easiest one: static content such as HTML and HTM files. If a user requests a static page from an IIS server with compression enabled, IIS compresses the file and puts it in the '%windir%\IIS Temporary Compressed Files' directory. Below is a simple screen which shows a snapshot of the compressed folder. Compression happens only the first time; on subsequent calls for the same content, the compressed data is picked up from the compressed files directory.

Below are some sample readings we took for HTML files ranging from 100 KB to 2048 KB, with the compression level set to '0'. You can easily see that even at the lowest compression level, the compression is almost 5 times. Since the compression happens only once, we can happily set the compression level to '10': the first request will show high CPU utilization, but on subsequent calls the CPU usage will be small compared to the compression gains.

Dynamic data compression is a bit different from static compression: dynamic compression happens every time a page is requested, so we need to balance CPU utilization against the compression level. To find the optimized compression level we ran a small experiment. We took 5 files ranging from 100 KB to 2 MB, then changed the compression level from 0 to 10 for every file size to check how much the data was compressed. Below are the compressed data readings in bytes. The raw readings do not show anything specific; it's a bit messy. So we plotted the below graph using the data, and we hit the sweet spot: you can see that even after increasing the compression level from 4 to 10, the compressed size does not change. We repeated this on 2 to 3 different environments and it always hit the value '4', the sweet spot.

So the conclusion we draw is that a compression level of '4' is an optimized setting for dynamic data pages.

Compressed files are files which are already compressed, for example JPEG and PDF. We did a small test with JPEG files; below are our readings. After applying IIS compression, the files did not change much in size.

When we plot a graph you can see that the compression benefits are very small. We may end up using more CPU resources and gain nothing in terms of compression. So the conclusion for compressed files is that we can disable compression for already-compressed file types like JPEG and PDF.

CPU usage, dynamic compression and load testing

One of the important points to remember for dynamic data is to balance CPU utilization, compression level and load on the server. We used WCAT to stress the server with 100 concurrent users. For every file size from 100 KB to 2 MB we recorded CPU utilization at every compression level. We recorded processor time for the W3WP process using a performance counter: to add this counter, go to the 'Process' object → select '% Processor Time' → select 'w3wp' from the instances. If we plot a graph using the above data we hit a sweet spot of 6: up to compression level 6, CPU utilization was not really affected.

TTFB, also termed time to first byte, is the number of milliseconds that pass before the first byte of the response is received. We performed a small experiment on 1 MB and 2 MB dynamic pages with different compression levels, then measured the TTFB for every combination of compression level and file size. WCAT was used to measure TTFB.

When we plot the above data we get the value '5' as the sweet spot: up to compression level '5', TTFB remains constant.

All the above experiments and conclusions were done on IIS 6.0. IIS 7.0 adds a very important feature: CPU roll-off. CPU roll-off acts like a cut-off gateway so that CPU resources are not consumed without limit.

When CPU usage goes beyond a certain level, IIS stops compressing pages, and when it drops below a different level, it starts again. This is controlled using the 'staticCompressionDisableCpuUsage' / 'staticCompressionEnableCpuUsage' and 'dynamicCompressionDisableCpuUsage' / 'dynamicCompressionEnableCpuUsage' attributes. It's like a safety valve so that your CPU usage does not take you by surprise.
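In IIS 7.0 these thresholds live in the httpCompression section of applicationHost.config. Below is a sketch; the percentage values are illustrative examples of the pattern, not recommendations.

```xml
<system.webServer>
  <httpCompression
      dynamicCompressionDisableCpuUsage="90"
      dynamicCompressionEnableCpuUsage="50"
      staticCompressionDisableCpuUsage="100"
      staticCompressionEnableCpuUsage="50" />
</system.webServer>
```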

• If the files are already compressed, do not enable compression for them. We can safely disable compression for EXE, JPEG, PDF, etc.

• For static pages the compression level can be set to 10, as the compression happens only once.

• For dynamic pages the compression level can range from '4' to '6', depending on the server environment and configuration. The best way to judge which compression level suits best is to perform the TTFB, CPU utilization and compression tests explained in this article.

Monday, September 14, 2009

Is this Article worth reading ahead?

This article discusses how we can use performance counters to gather data from an application. We will first understand the fundamentals and then walk through a simple example that collects some performance data.

Introduction:- My application performance is the best, like a rocket

Let us start this article with a small chat between a customer and a developer.

Scenario 1
Customer:- How's your application performance?
Subjective developer:- Well, it's speedy, it's the best ... huuh aaa ooh, it's like a rocket.

Scenario 2
Customer:- How's your application performance?
Quantitative developer:- With 2 GB RAM, an xyz processor and 20000 customer records, the customer screen loads in 20 seconds.

I am sure the second developer sounds more promising than the first. In this article we will explore how we can use performance counters to measure the performance of an application. So let's start counting 1, 2, 3, 4....

At the end of the day it's count, calculate and display

Any performance evaluation works on count, calculate and display. For instance, if you want to know how many pages in memory were processed per second, we first need to count the number of pages and the number of seconds elapsed. Once we are finished counting, we calculate: divide the number of pages by the seconds elapsed. Finally, we display the performance data.

So it's a 3-step process: count, calculate and display. The counting part is done by the application, which needs to feed in the data during the counting phase. Please note the data is not automatically detected by the performance counters; some help needs to be provided by the application. The calculation and display are done by the performance counter and the performance monitor.

If the application does not provide counter data, performance counters cannot measure anything by themselves. In other words, the application needs to feed in counter data by creating performance counter objects.

Types of measures in an application

Almost all application performance measurements fall into one of the below 5 categories.

Instantaneous values:- Many times we just want to measure the most recent value. For instance, how many customer records were processed? How much RAM has been used? These types of measures are termed instantaneous or absolute values. Performance counters support these measurement types through instantaneous counters.

Average values:- Sometimes instant / recent values do not show the real picture. For instance, just saying that the application consumed 1 GB of space is not enough; but if we can get some kind of average, like 10 MB of data consumed per 1000 records, we get more insight into what is happening inside the application. Performance counters support these measurement types using average performance counters like AverageBase, AverageTimer32, AverageCount64, etc.

Rate values:- There are situations where you want to know the rate of events with respect to time. For example, you would like to know how many records were processed per second. Rate counters help us calculate these kinds of performance metrics.

Percentage values:- Many times we would like to see values as percentages for comparison purposes. For example, if you want to compare performance data between 2 computers, comparing raw values will not be fair, but percentage values from both computers make the comparison meaningful. Likewise, if we want to compare values between different performance counters, percentages are a much better option than absolute values: comparing 1 GB of RAM usage with 50 GB of hard disk usage is like comparing apples with oranges, but expressed as percentages the comparison is fair and justifiable. Percentage performance counters help us express absolute values as percentages.

Difference values:- Many times we would like difference-based performance data, for instance how much time has elapsed since the application started, or how much hard disk space the application has consumed since it started. To collect this kind of performance data we record the original value and the recent value, and subtract the original from the recent. Performance counters provide difference counters to calculate such data.

Summarizing, there are 5 types of performance counters which satisfy all the above counting needs. The below figure shows the same in pictorial format.

Example on which the performance counters will be tested

Throughout this article we will use the simple example explained below. A timer generates a random number every 100 milliseconds. The random number is then checked to see if it's less than 2; if it is, the function 'MyFunction' is invoked.

Below is the code where the timer runs every 100 milliseconds and generates a random number. If the random number is smaller than 2, we invoke 'MyFunction'.
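A minimal equivalent of that timer code might look like this console sketch. The random range of 0 to 9 is an assumption, chosen so that values less than 2 occur roughly 20% of the time.

```csharp
using System;
using System.Timers;

class CounterSample
{
    static readonly Random _random = new Random();

    static void Main()
    {
        var timer = new Timer(100); // fire every 100 milliseconds
        timer.Elapsed += (s, e) =>
        {
            // Assumed range 0..9: values below 2 trigger MyFunction.
            if (_random.Next(0, 10) < 2)
            {
                MyFunction();
            }
        };
        timer.Start();
        Console.ReadLine(); // keep the sample running
    }

    static void MyFunction()
    {
        // Intentionally empty, as in the article's sample.
    }
}
```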

Below is the code for 'MyFunction', which is invoked when the random number is less than 2. The method does not do anything as such.

private void MyFunction()
{
}

All the performance counter examples in this article will use the above sample.

Adding our first instantaneous performance counter in 4 steps

Before we go into the depth of how to add performance counters, let's first understand their structure. A performance counter needs to belong to a group, so we create a category, and all our performance counters will live under that category.

We would like to count how many times 'MyFunction' was called, so let's create an instantaneous counter called 'NumberOfTimeFunctionCalled'. Before we move ahead, let's see the different types of instantaneous counters provided:-

NumberOfItems32:- An instantaneous counter that shows the most recently observed value.

NumberOfItems64:- An instantaneous counter that shows the most recently observed value. Used, for example, to maintain a simple count of a very large number of items or operations. It is the same as NumberOfItems32 except that it uses larger fields to accommodate larger values.

NumberOfItemsHEX32:- An instantaneous counter that shows the most recently observed value in hexadecimal format. Used, for example, to maintain a simple count of items or operations.

NumberOfItemsHEX64:- An instantaneous counter that shows the most recently observed value. Used, for example, to maintain a simple count of a very large number of items or operations. It is the same as NumberOfItemsHEX32 except that it uses larger fields to accommodate larger values.

Step 1 Create the counter:- For our scenario 'NumberOfItems32' will suffice, so let's create a 'NumberOfItems32' instantaneous counter. There are two ways to create counters: through code, or using the Server Explorer of VS 2008. We will see the code approach later; for now we will use Server Explorer. Open Visual Studio → click View → Server Explorer and you should see the Performance Counters section as shown in the below figure. Right click on the Performance Counters section and select 'Create New Category'.

When we create a new category we can specify the category name and add counters to it. For the current example we have given the category name 'MyApplication' and added a counter of type 'NumberOfItems32' named 'NumberOfTimeFunctionCalled'.

Step 2 Add the counter to your Visual Studio application:- Once you have added the counter in Server Explorer, you can drag and drop it on the ASPX page as shown below.

You need to set the 'ReadOnly' property to false so that you can modify the counter value from code.

Step 3 Add the code to increment the counter:- Finally we need to increment the counter. We first clear any old value during form load. Please note that counter values are stored globally, so they do not reset by themselves; we need to do it explicitly. So in the form load we set the raw value to zero.
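A sketch of what that code looks like; the category and counter names are taken from the article, the surrounding class and method names are assumptions.

```csharp
using System.Diagnostics;

class CounterDemo
{
    // The third constructor argument (readOnly = false) lets us write to the counter.
    static readonly PerformanceCounter _counter =
        new PerformanceCounter("MyApplication", "NumberOfTimeFunctionCalled", false);

    static void OnFormLoad()
    {
        // Counter values persist globally between runs; clear stale data.
        _counter.RawValue = 0;
    }

    static void MyFunction()
    {
        _counter.Increment(); // bump the count on every call
    }
}
```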

Step 4 View the counter data:- Now that the counter in the application increments every time 'MyFunction' is called, it's time to use the performance monitor to display it. Go to Start → Run and type 'perfmon'. You will see there are lots of default performance counters; for clarity's sake we will remove all of them for now and add our performance counter, 'NumberOfTimeFunctionCalled'.

You can now view the graphical display as shown in the below figure. Ensure that your application is running, because the application emits the data which is then interpreted by the performance monitor.

The above is a graphical view. To view the same data in textual format, use the View Report tab provided by the performance monitor. You can see the report shows that 'MyFunction' was called 9696 times since the application started.

In the previous section we measured how many times 'MyFunction' was called. But this count by itself does not show much. It would be great if we could also see how many times the timer fired; then we could compare the number of timer ticks against the number of 'MyFunction' calls. So create another instantaneous counter and increment it when the timer fires, as shown in the below code.

In the previous section we had two counters: one saying how many times the timer fired, and another saying how many times 'MyFunction' was called. It would make more sense to have an average figure telling us, on average, how many times 'MyFunction' was called per timer tick. To get this kind of metric we use average performance counters. So for our scenario we count the number of function calls and the number of timer ticks, then divide the first by the second.

We need to add two counters: one for the numerator and one for the denominator. For the numerator we add a counter of type 'AverageCount64', and for the denominator a counter of type 'AverageBase'.

You need to add the 'AverageBase' counter immediately after the 'AverageCount64' counter, or else you will get an error as shown below.

For every timer tick we increment the 'number of times timer called' counter.
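Putting the average pair together might look like the sketch below. The counter names here are my own placeholders; what matters is that AverageCount64 holds the numerator, AverageBase holds the denominator, and the base is created right after its partner.

```csharp
using System.Diagnostics;

class AverageCounterDemo
{
    // Numerator: how many MyFunction calls happened.
    static readonly PerformanceCounter _avgFunctionCalls =
        new PerformanceCounter("MyApplication", "AverageFunctionCallsPerTick", false);

    // Denominator: how many timer ticks happened.
    static readonly PerformanceCounter _avgBase =
        new PerformanceCounter("MyApplication", "AverageFunctionCallsPerTickBase", false);

    static void OnTimerTick(bool myFunctionWasCalled)
    {
        _avgBase.Increment(); // one per timer tick
        if (myFunctionWasCalled)
        {
            _avgFunctionCalls.Increment(); // one per MyFunction call
        }
    }
}
```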

From our sample we would now like to find out the rate of 'MyFunction' calls with respect to time, i.e. how many calls are made every second. Browse to Server Explorer and add a 'RateOfCountsPerSecond32' counter as shown in the below figure, and increment this counter every time 'MyFunction' is called. If you run the application you should be able to see the 'RateofMyFunctionCalledPerSecond' value. Below is a simple report showing rate counter data collected over 15 seconds. The total calls made in those 15 seconds were 72, so the average is about 5 'MyFunction' calls per second.

We have left out percentage counters and difference counters, as they are pretty simple and straightforward; to keep this article to the point, I have excluded both these counter types.

Adding counters by C# code

Till now we have added the performance counters using Server Explorer. You can also add them in code. First we need to import the System.Diagnostics namespace; we then need to create a 'CounterCreationDataCollection' object.
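A sketch of the code-based approach, reusing the category and counter names from earlier in the article; the help strings are my own.

```csharp
using System.Diagnostics;

class CreateCountersInCode
{
    static void Main()
    {
        // Recreate the category from scratch on each run.
        if (PerformanceCounterCategory.Exists("MyApplication"))
        {
            PerformanceCounterCategory.Delete("MyApplication");
        }

        // Collect the counter definitions for the category.
        var counters = new CounterCreationDataCollection();
        counters.Add(new CounterCreationData(
            "NumberOfTimeFunctionCalled",
            "How many times MyFunction was called",
            PerformanceCounterType.NumberOfItems32));

        // Create the category with its counters in one shot.
        PerformanceCounterCategory.Create(
            "MyApplication",
            "Counters for the sample application",
            PerformanceCounterCategoryType.SingleInstance,
            counters);
    }
}
```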

It's quite a pain to write counter creation code by hand. You can use the performance counter helper to ease this and make your code smaller; you can find it at http://perfmoncounterhelper.codeplex.com/ .

• Use performance counters to measure application data.
• Performance counters come in various categories like instantaneous, average, rate, etc.
• Performance counters should not be left enabled in production; if they are used, there should be a mechanism to disable them.
• Performance counters cannot measure by themselves; the application needs to provide data so that performance monitors can calculate and display it.
