.NET GUI performance problem

I have a sporadic performance issue with a .NET application, and I'm at a loss as to what to try next. Please excuse me if this is the wrong forum for this post.

The application was developed in C# in Visual Studio 2005. On most systems it runs fine. Recently we've discovered two different customer systems (both Windows XP) on which this application is effectively unusable.

We've checked to make sure the .NET Framework is installed. We require a minimum of version 2.0; both systems had that as well as 4.0 installed.

The symptom is that all input (mouse clicks, typed text) to this application is abysmally slow on these two systems. Task Manager does not show our application taking an unusual amount of CPU time, and no other apps appear to be affected. I doubt the problem is in our own code, because it occurs simply when accessing a control: for example, I click on an edit box and it takes over a minute for the selection to show, and typing characters into that edit box is just as slow. The response is poor on all the controls, including buttons, so I wonder if for some reason the screen is not refreshing. This is a multi-threaded application, but the child threads are temporary and perform a specific task, such as opening a file (in case the open takes a long time and the user wants to cancel it).
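One classic cause of a GUI that crawls without burning CPU is a worker thread touching controls directly instead of marshalling updates back to the UI thread. A minimal WinForms sketch of the safe pattern follows; the control and method names are illustrative, not taken from the original application:

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

public class MainForm : Form
{
    private readonly TextBox statusBox = new TextBox();

    public MainForm()
    {
        Controls.Add(statusBox);
    }

    // Hypothetical worker started for a long-running file open.
    private void OpenFileInBackground(string path)
    {
        new Thread(() =>
        {
            // ... long-running work here ...

            // Never touch a control from the worker thread directly;
            // marshal the update back to the UI thread instead.
            if (statusBox.InvokeRequired)
                statusBox.BeginInvoke(new Action(() => statusBox.Text = "Done: " + path));
            else
                statusBox.Text = "Done: " + path;
        }) { IsBackground = true }.Start();
    }
}
```

If a child thread in the real application ever sets a control property without `Invoke`/`BeginInvoke`, the symptoms can range from exceptions to exactly this kind of erratic responsiveness.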

More Related Resource Links

Hello Everyone,
I have been developing one large and complex WPF application for 13 months now. All is going well, especially tonight, as I finally have some sort of clue about what I have been seeing for quite some time. Here is the rundown:
1) Running on i5 processor with 4GB RAM and Windows 7
2) The application was developed all in .Net 4.0, WPF C#.
3) It is heavily GUI intensive and also uses Entity Framework Detached for Database Access.
4) Is multi-threaded / multi-tasking.
I am a nutcase when it comes to testing. My graphical application heavily depends upon performance and determinism (please, nobody start a C++ vs .NET debate here), and it has been performing great for the most part. Every once in a while, though, the application runs into the weeds for 5 or 6 seconds and the interface is unresponsive, even though I have this thing finely tuned. Now, toward the end of development, I have finally had to face that bug, so to speak. I can force it by clicking rapidly on my buttons, causing an event storm and making the system update the GUI.
So, great start: I can now cause the problem on a regular basis (for you youngsters, that is actually a good thing)! Alright, so I start with the easiest investigative tool, Ctrl+Alt+Del into the Windows 7 Performance Monitor. As you can tell from
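A common way to keep a click storm from flooding the dispatcher is to disable the control while its handler's work is in flight and re-enable it when the work completes. A minimal sketch using .NET 4.0-era task APIs (the handler name and workload are hypothetical, not from the post):

```csharp
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;

public partial class MainWindow : Window
{
    // Hypothetical handler for a button being clicked rapidly.
    private void RefreshButton_Click(object sender, RoutedEventArgs e)
    {
        var button = (Button)sender;
        button.IsEnabled = false;  // swallow further clicks while busy

        Task.Factory.StartNew(() =>
        {
            // ... expensive, non-UI work (e.g. an EF query) ...
        })
        .ContinueWith(t =>
        {
            // This continuation runs back on the UI thread,
            // so it is safe to touch controls here.
            button.IsEnabled = true;
        }, TaskScheduler.FromCurrentSynchronizationContext());
    }
}
```

Disabling the button turns N queued click events into at most one unit of work per completion, which is often enough to stop the multi-second freezes described above.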

I have a production database replicated on my local machine. I run a select query for 10k rows. On my local box the query takes less than half a second; on production the query takes a full 4 seconds. I have looked at SQL Server Profiler and I see a lot of activity on production. I would like to pinpoint exactly which item or group of items is causing the performance to degrade on production. What are your suggestions?
Local environment: SQL Server 2008, 32-bit quad-core processor, 4 GB RAM
Production: SQL Server 2000, 32-bit, 8 processor cores, 16 GB RAM
BrianMackey.NET

Say @startDate and @endDate are my datetime parameters. If a user does not want to limit @startDate, for example, he or she sends a null value for @startDate. The same goes for @endDate (and any other parameter).
What I like doing:
Select * From MyTable
Where MyTable.Date >= IsNull(@startDate, MyTable.Date)
And MyTable.Date <= IsNull(@endDate, MyTable.Date)
The above looks nice as far as coding goes, but when I look at the execution plan I'm surprised to see that my clustered index (which has "MyTable.Date" as its first column) is not used; instead I get an index scan. More frustrating is that the following uses an index seek and is much faster:
Select * From MyTable
Where MyTable.Date >= @startDate
And MyTable.Date <= @endDate
So what should I do? Am I doing something wrong? I don't want to use dynamic SQL because it's generally slower and prone to SQL injection (is dynamic SQL my only choice?). I don't want to use "if statements" because then I'd have to rewrite my code several times (in this case three combinations, but some of my procedures have up to 8 "nullable" parameters). Or maybe there is another way of implementing what I call "nullable parameters" here.
Obviously, performance is crucial.
thanks,
Dror
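One commonly suggested middle ground for this kind of "nullable parameter" query is to keep the null checks explicit in the WHERE clause and add a statement-level recompile hint. This is a sketch, not a guaranteed fix: it trades a compile on every execution for a plan built with the actual parameter values, and on builds where the optimizer embeds those values at recompile time the NULL branches fold away, leaving a seekable predicate:

```sql
-- Sketch: explicit null checks plus OPTION (RECOMPILE).
-- When @startDate is NULL, its whole OR branch is true and can be
-- discarded at compile time, so the remaining range predicate on
-- MyTable.Date can still use the clustered index with a seek.
SELECT *
FROM MyTable
WHERE (@startDate IS NULL OR MyTable.Date >= @startDate)
  AND (@endDate   IS NULL OR MyTable.Date <= @endDate)
OPTION (RECOMPILE);
```

This scales to many optional parameters without the combinatorial "if statement" rewrite, at the cost of losing plan caching for the statement.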

Hi guys,
Just a quick question regarding a problem I seem to be having at the moment which I can't get my head around.
I have started at an organisation which has a very basic setup: a clustered SQL 2000 SP4 server with approximately 90 databases on it. Only a few of these have much traffic, and performance on the whole seems OK. The server is connected to a SAN, with two LUNs made available for the data and log drives. Performance in general is OK, with disk latency for reads and writes on both LUNs around 10-20 ms. There is a constant stream of reads using approximately 1.5-2 MB per second from the data drive. On the odd occasion, disk reads go up to 20-30 MB per second and the disk queue may rise to 10 for a very short period of time, but on the whole the disks seem to handle it, and latency during these peaks may go up to 200 ms.
The problem seems to occur when checkpointing happens. I have set the trace flags so I can see which database is checkpointing at any given time, and when the highest-throughput database checkpoints we see performance problems.
Latency can go to 20-30 seconds for 30-60 seconds, which effectively halts processing for that time. Errors start appearing in the SQL log for files taking longer than 15 s to respond.
What's throwing me is that the throughput on the disks doesn't seem very high at these points. As an example, checkpoint pages
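On SQL Server 2000, one commonly suggested mitigation for checkpoint-driven stalls (a sketch, and only applicable if the stall really is checkpoint I/O rather than something else on the SAN) is to lower the recovery interval, so that checkpoints fire more often but each one flushes fewer dirty pages, smoothing out the write bursts:

```sql
-- Make checkpoints more frequent but smaller.
-- 'recovery interval' is the target recovery time in minutes;
-- a lower value means fewer dirty pages accumulate between
-- checkpoints, so each checkpoint writes a smaller burst.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval', 1;
RECONFIGURE;
```

The trade-off is more frequent (but gentler) checkpoint activity; it is worth measuring the peak latencies before and after the change.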

Hi
I have a report containing a rectangle/tablix attached to a dataset of course details. The rectangle contains a number of text boxes showing course details (fields from the dataset: course title, department, etc.) and a sub-report which is a list of students on that course (controlled by passing appropriate parameters). This worked fine until I decided I wanted one of the text boxes (course_title) to repeat on each page when printing.
To achieve this I added a column group on course_title and set the header to repeat on each page, as suggested elsewhere on this forum. This works fine when simply viewing the report in BIDS or after deployment (SharePoint), but not when attempting to print, print preview, or convert to another format (e.g. PDF). The report itself runs in a matter of seconds (<10), but the conversion runs for about 90 seconds before aborting (probably due to server settings), and in IE it takes the browser down with it. I've tried not repeating the group header on each page, but the problem persists unless I remove the group altogether. I've also tried adding the student dataset and the list directly into the main report, but that created too many restrictions on the list (e.g. not enabling list column headers to be repeated on each page).
Are there known issues with grouping affecting performance in this way? Are there other ways around any of these problems?

When I join two tables where one table has two columns which specify a date range and the other table has a single date column which must fall in that range for the join, performance is not so hot. The T-SQL example only shows the basic query scheme; in reality there are appropriate indexes (but not on the date columns, since I found them not helpful), the DateRange table has about 100 million rows, and the Incident table about 200,000 rows. The query currently takes hours; I must speed it up by at least a factor of 10.
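The example referred to above is not included here. As a purely hypothetical sketch of the scheme being described (the table and column names below are illustrative, not the poster's), a range join of this shape typically looks like:

```sql
-- Hypothetical reconstruction of the described query scheme.
-- A join on a range predicate like this forces the optimizer to
-- consider every (incident, range) pair whose dates might overlap,
-- which is why it scales so badly at 100M x 200K rows.
SELECT i.IncidentId, r.RangeId
FROM dbo.Incident AS i
JOIN dbo.DateRange AS r
    ON i.IncidentDate >= r.StartDate
   AND i.IncidentDate <= r.EndDate;
```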

We are facing a problem that we think is connected with SQL Server.

Problem:

We have some software (OLTP-like) which receives data (with the same structure) over the network and then writes it into the DB. The problem is that every minute SQL Server stops processing transactions for approximately 10 seconds. You can see this situation in the picture (the green line is the intensity of incoming data and the red line is the output intensity after processing, i.e. adding to the DB).

Environment:

IBM server with 4 x Xeon 2.2 GHz (CPU utilization is less than 10%)

The DB files are located on RAID 6 over 20 spindles (physical disks); the max I/O speed is more than 300 MB/s (I/O throughput usage is about 2 MB/s). The DB log is located on the same drive as the other DB files. LOBs are stored in separate files. The DB schema is a typical warehouse: 2 large tables (Table A has 400 M rows, contains images, and has a reference to Table B; Table B has 40 M rows).

I have designed a custom server control. It involves handling a large HTML table and a lot of dropdown lists with JavaScript. Everything works fine except that I have a real performance issue with IE8: the page takes 10.1 seconds to display completely in IE8, while it takes only 0.8 seconds in Chrome 8! This is the exact same page delivered by the same server on the same computer; only the browser changes.

My question is: how can I profile the JavaScript execution? I would like to find out what is taking so much time in IE8 and see if I can optimize that code. I really would like IE8 to load the page as fast as Chrome does.
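Absent a full profiler, a crude but IE8-safe starting point is manual instrumentation: wrap suspect routines and record elapsed time with `Date`, since IE8 has no `console.time` and its `console` object only exists while the developer tools are open. A minimal sketch (the routine being timed is a hypothetical stand-in for the real dropdown-population code):

```javascript
// Minimal timing helper that works in IE8.
// Wraps a function, runs it once, and reports elapsed milliseconds.
function timeIt(label, fn) {
  var start = new Date().getTime();
  var result = fn();
  var elapsed = new Date().getTime() - start;
  // In IE8, console only exists when the developer tools are open,
  // so guard the logging call.
  if (typeof console !== "undefined" && console.log) {
    console.log(label + ": " + elapsed + "ms");
  }
  return { result: result, elapsed: elapsed };
}

// Usage: time a suspect routine (hypothetical stand-in workload).
var timing = timeIt("populateDropdowns", function () {
  var total = 0;
  for (var i = 0; i < 100000; i++) { total += i; }
  return total;
});
```

Sprinkling a few of these around the table-building and dropdown code should narrow down which block accounts for the 10-second gap; common IE8 culprits include repeated DOM reflows and building `<select>` options one `appendChild` at a time.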

Yes, it may sound strange: when we started using Excel 2007, our pivot table connectivity saw a performance drop.

As near as I can tell, Excel 2007 wants to do a Discover for MDSCHEMA_PROPERTIES with a RestrictionList of <PROPERTY_TYPE>2</PROPERTY_TYPE>. Since the restriction does not filter on a particular cube, it appears that the operation causes the user's dimension security to be calculated for every single cube in the database.

This article lists the techniques that you can use to maximize the performance of your ASP.NET applications. It covers common issues and provides design guidelines and coding tips for building optimal and robust solutions.

Every time I work with one of our .NET customers to help them manage their application's performance, I come across the same problems seen with other clients before: lots of ADO.NET queries, many hidden exceptions in core or third-party .NET libraries, slow third-party components, and inefficient custom code.

It has always been a goal of project architects to plan an effective strategy from the ground up for a new application. All relevant factors are taken into consideration with respect to the application, from its design and layout to a functional website infrastructure. Pre-.NET strategies and design guidelines that are still effective today were developed with Microsoft's DNA (Distributed interNet Application) platform. This model successfully served the purpose of architecting N-tier (any number of levels) applications. In its basic sense, as in most robust, distributed applications, you'll architect 3 main layers or tiers: presentation, business rules, and data access.

I have been writing a series of blog posts, which I have named High Performance ASP.NET Websites Made Easy! There is no rhyme or reason to the order of these posts, and they can certainly be read in any order: