Yesterday, one of my coworkers from our Hosting department and I (a programmer) started a series of tests to address performance issues with our websites. We are running Windows Server 2008 with 8 GB of RAM, and we are going to add another 8 GB because our customers are experiencing slow performance in the backend (Sitecore). We suspect the performance issues are related to having too many databases on one server. We had a quick look at performance counters on the server we are testing on, but our test server is configured very differently from our live server. We isolated some websites on the test server, ran some stress tests / database tests there, and tried measuring with counters that we have no experience with.
How can we address the problem on our live server using performance counters? Is there a good guide out there on how to approach this kind of performance problem?
Any advice would be much appreciated!

2 Answers

I don't think that performance counters will give a useful answer for this, since they only really tell you how often certain things happened or what kind of throughput you are getting, and you already know that the throughput isn't great.

I've had a look at the available performance counters on our MS SQL Server, and nothing in there strikes me as particularly helpful for this. You could try out some of the counters in the SQLServer:Wait Statistics category to get some indications.
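If you'd rather query the wait data directly instead of watching those counters, the `sys.dm_os_wait_stats` DMV exposes the same information. A sketch for SQL Server 2005 and later (the list of benign wait types to filter out is just a starting point, and the numbers are cumulative since the last restart):

```sql
-- Sketch: top waits on the instance, by total wait time.
SELECT TOP 10
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

Whatever dominates here (disk, locking, memory grants) tells you which of the suspects below to chase first.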

However, I would start with the usual suspects:

More often than not, performance bottlenecks with db-backed web sites are connected to slow database queries. This is often caused by missing indices and bad query design. Depending on your version of SQL, you should be able to google some advice for identifying slow queries.
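On SQL Server, for example, the plan-cache DMVs can surface the worst offenders without any extra tooling. A hedged sketch (the statistics reset on restart, so treat the numbers as relative, not absolute):

```sql
-- Sketch: top 10 cached statements by average elapsed time (SQL Server 2005+).
SELECT TOP 10
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
    qs.execution_count,
    SUBSTRING(st.text, qs.statement_start_offset / 2 + 1,
        (CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
         END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_elapsed_us DESC;
```

Run this on the live server during a slow period and you usually get a short list of queries worth putting under the microscope.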

If you are running MySQL as your database, you should regularly analyze the slow query log.
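Enabling the slow query log is a my.cnf change; the option names below are the MySQL 5.1+ style, so check the documentation for your release (the log file path is an assumption, adjust it to your setup):

```ini
# Sketch for the [mysqld] section of my.cnf, MySQL 5.1+ option names.
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/mysql-slow.log
long_query_time               = 2   # seconds; queries slower than this are logged
log_queries_not_using_indexes = 1
```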

Also for MySQL, inspect the server status variables, especially the counters for full-join selects (queries that join without a usable index).
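The full-join counter and a couple of its neighbours can be read straight from the server; steadily climbing values here usually mean missing indexes:

```sql
-- Sketch: MySQL status variables that often point at missing indexes.
SHOW GLOBAL STATUS LIKE 'Select_full_join';
SHOW GLOBAL STATUS LIKE 'Select_scan';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
```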

If at all possible, separate the DB server from the web server. The constant context switching is not good for performance, either.

I would also try to isolate some of the slow pages in the application and review their queries. Sometimes heavily hit pages, like the main page, have slow queries that drag down other, properly performing queries. Once I've tracked down some of the queries, I typically execute them in Management Studio and look at their query plans to find pain points like a missing or improper index. This is pretty basic in the end. It was also easy for me because it was a SaaS application where many clients all connected to the same servers, using the same software.
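When re-running a suspect query in Management Studio, turning on the I/O and timing statistics (alongside the actual execution plan) gives you concrete numbers to compare before and after an index change. A sketch, with a placeholder where the captured query goes:

```sql
-- Sketch: measure a suspect query's cost in Management Studio.
SET STATISTICS IO ON;    -- reports logical/physical reads per table
SET STATISTICS TIME ON;  -- reports parse/compile and execution times

-- ...the suspect query goes here, e.g. one captured from a slow Sitecore page...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```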

The slowness could also be coming from the fact that you are hitting a natural tipping point where your web traffic plus your database load is too much for one box. But before you go dumping $5,000 into a new database server, take a peek at the perfmon data and see whether something stands out already. In my case, too much disk I/O on a direct-attached storage array was the issue. Adding additional spindles was one possible solution, as was adding more memory to allow more caching, and offloading the log files onto a different RAID array in the machine. In that instance the web tier was not on the DB server, so that was not the issue for me.
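If you want to capture the relevant perfmon data on the live box from the command line rather than the GUI, something along these lines should work on Windows Server 2008 (the counter names are the English ones, the 15-second interval and output file are arbitrary choices, and the SQL Server counter category name varies for named instances):

```
typeperf -si 15 -o perf.csv ^
    "\PhysicalDisk(_Total)\Avg. Disk Queue Length" ^
    "\PhysicalDisk(_Total)\Avg. Disk sec/Read" ^
    "\Memory\Available MBytes" ^
    "\SQLServer:Buffer Manager\Page life expectancy"
```

A queue length consistently above the number of spindles, reads slower than ~20 ms, or a page life expectancy that keeps collapsing would each point at the kind of disk/memory pressure described above.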