Server using SSD

Can anyone offer input on how to reliably use SSDs in a server? The main objection I've heard to this idea is the limited-write issue. Is there a reliable way to address that so the server can benefit from the speed of SSDs? The main bottleneck in our custom app is large SQL queries, and I think a simple solution, as opposed to a massive rewrite of our app, is to utilize SSDs where those SQL queries are bogging us down.

So long as you use enterprise SSDs and RAID them there shouldn't be a problem. They will wear out, but then you just replace them with new ones. Another option is NAND flash PCIe cards; you're not tied to the server manufacturer with PCIe cards. Admittedly you're not tied to the server manufacturer with SSDs either, but their RAID controller may not play nicely with them.

Most people think of an SSD as an extension of memory and that there will be a huge performance boost by all of the memory to memory transfers. In reality, an SSD is a storage media that mostly eliminates the rotational lag and seek time from a disk (at 7,200 or 10,000 RPM the lag is pretty small). That's all. The SSD is still a channel/bus connected media that adheres to all of the defined channel protocols. Channel transfer speed, device contention, etc. are exactly the same as with rotating disk.

You can see performance gains from the correct usage of SSDs. But it's typically a marginal increase, not a quantum leap. Most people seem to be hugely disappointed as they expect magic.

You're probably better off spending the same money for memory expansion on the server. If you've already maxed memory, then the SSD may increase the life of the server.

The "limited write" issue shouldn't concern you. With enterprise drives it takes an enormous amount of writing over the drive's lifetime for it to come into play. That said, assign semi-static data to the SSD, if possible, so that the predominant usage is read, not a heavy mixture of read and write.

You may want to consider something like the EqualLogic PS6000XVS or PS6010XVS iSCSI storage arrays. These combine SSD and SAS drives with intelligent distribution of data, keeping your most frequently accessed data on the SSD.

We are a small company, 5 LAN and 25 WAN users. Primary app is a custom app using IE8 to access SQL database on SBS 2008.

My assumption is that SQL and the OS residing on a SSD would be much faster access. I don't know anything, so, really just assuming that placing both onto a SSD will bring much faster query response. Valid or no?

weird, my screen on the sbs 2008 is black and only shows a pointer. I just logged on to check the database size and getting this black screen. Any idea what is going on? Server seems to be running fine. We are getting email and I can access the shared files, just this black screen...

"We are getting email and I can access the shared files"
Hmm, that may be a clue; how much RAM do you have, and have you set a cap on how much RAM SQL can use? SQL and Exchange both like to eat RAM unless you tell them how much they can have.
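For reference, the SQL cap can be set with sp_configure rather than through SSMS. A minimal sketch — the 4096 MB figure is purely illustrative for an 8 GB box shared with Exchange; size it for your own workload:

```sql
-- Cap SQL Server's memory so Exchange and the OS keep some RAM for themselves.
-- 4096 MB is an illustrative value, not a recommendation.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```

Exchange has its own mechanism (database cache size limits) that would need to be set separately.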

Queries should come from RAM as Kent says, since your DB is tiny, but writes have to be committed to disk, so SSD helps there. SSDs shouldn't be much use for the SQL executables or OS; those ought to stay in RAM too if there's enough of it.

So, when there are 50+ lines that have to be displayed, those take a long time to come up. Each line has to run a number of calculations. Where is the speed bottleneck, if not in the RAID5 spinning platter?

I don't know much. I am going on the experience I had of switching my desktop computer from a quad core AMD and spinning platter to an i7 core with an SSD, and this thing is stunningly, amazingly fast. I am looking for that kind of difference in our server experience and feel SSDs would contribute to opening up the bottleneck. Again, I don't know anything technically, just looking at this from my perspective. I have posted this question on EE and had responses that run the spectrum, from little speed pickup to amazing speed pickup. Why such a disparity in answers from EE guys?

Also, we did add an SSD to this current server, as a "G" drive and put the SQL database on it and it did help the speed, some, but not what I expected. Now I think the OS and maybe the SQL app have to go onto the SSD to see the real speed benefit?

The disparity is due to us not knowing your complete setup until half way through the question. SSDs will reduce your disk latency but if you can keep it all in RAM (except for the writes) then that'll speed it up even more.

8GB is not much RAM to put SQL, Exchange and AD on. What make/model of computer is it?

You can sneak in a bit more info into the thread by running perfmon and looking at disk write ops per second and memory usage.
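If perfmon feels unfamiliar, SQL Server also exposes some of the same counters through a DMV. A hedged sketch — thresholds are rules of thumb, not hard limits:

```sql
-- Quick look at memory pressure from inside SQL Server.
-- A 'Page life expectancy' well below ~300 seconds often suggests the
-- buffer pool is being churned, i.e. not enough RAM for the working set.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Buffer cache hit ratio');
```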

it is one we put in 2008, a Xeon 3220 with RAID 5. It is a basic server and we run SBS 2008. It was our first time to implement a server. Now that we have this custom app developed, I would like to upgrade the hardware to work better. The users from their workstations are running IE8 and the app is querying SQL database on teh server. I think it is all the operations it is doing that makes the 50+ line page run slow. Where should I look to optimize? What can be kept on RAM? I think the IE8 app causes SQL to run data queries to multple related tables and run calculations from those tables data to compile each "line" that displays in our app. No problem when a page has 20 or less lines, but get up to 50 lines and it bogs down. I don't want to address that with code, would rather address it with the SSD or with "kept in RAM" where possible.

Others have covered most of it; you can use the SSD for OS/app installs, while TMP/page file should be on a SAS/SATA spindle drive.
Based on what you are reporting and what you have running, your system is I/O bound because of swapping.
Exchange + SQL + IIS is using more than the physical RAM you have.

Adding RAM will improve things.
Make sure that the server is configured to prioritize applications, and check whether SQL's performance is boosted from normal to above average (priority boost/affinity).

For the first:
Properties of Computer > Advanced system settings > Advanced > Performance Settings: turn off the graphical niceties, then hit the Advanced tab and make sure processor scheduling and memory/cache are prioritized for programs.
Properties of the network adapter > properties of File and Printer Sharing: make sure it is prioritized for network applications versus file sharing.

For the second, using SSMS, open the properties of the database server and check the boost SQL Server priority option.
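The same checkbox maps to the 'priority boost' option in sp_configure; a sketch. Note that Microsoft has deprecated this option and it can starve the OS on a box that also runs Exchange and IIS, so test carefully before leaving it on:

```sql
-- Equivalent of the SSMS "boost SQL Server priority" checkbox.
-- Deprecated option; can starve other services on a shared box.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'priority boost', 1;
RECONFIGURE;
-- A restart of the SQL Server service is required for this to take effect.
```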

The performance issues you might be experiencing are isolated to the IIS application rather than SQL.
Run the query in SSMS and see how long the SQL server takes to complete it.
Use the SQL tuning advisor with the query to see whether it provides suggestions/recommendations (e.g. add an index or statistics) to improve responsiveness/the execution plan.

Have you considered the fact that it COULD be the clients that are actually slow?
Some pages that are poorly designed for displaying dynamic data may respond fine when there is a limited amount of data, but give it a lot, 50+ rows in your instance, and the page rendering takes ages!!

Just a thought if you have not explored this yet. Do you have a high-end machine with lots of RAM and an i3/i5/i7 processor in it that you can try this with?

It could just be a very poorly designed page that is pushed back to the browser to render.

Neilsr, yes my PC is i7 core and while pages do load faster, the 50+ line thing is still a factor on this machine and why I think the issue needs to be addressed on server.

arnold, are you available to do screen share session or would you be willing to be on screen and walk me thru what you are talking about? It would be nice to try those suggestions and narrow it down.

I am preparing to implement a new server and continue using the old server as a second in the mix. So, really, the more I think about it, the focus being on how to implement SSD's on the new server in a way that best speeds the SQL query issue, that would be a good focus, moving forward, with settings set appropriately.....

Also bear in mind that depending on your license you may be able to use Hyper-V and have 2 VMs, one with SBS and one with SQL on it. DDR3 RAM is so much cheaper than the old DDR2 used in your server that the new server would be nearly as cheap as a RAM upgrade.

That's actually a pretty slow system by today's standards. Based on the database being so small, the likelihood that most/all of it is in memory, and statement by the OP that the performance degrades as the query output increases from 20 to 50 lines, I'm leaning towards I/O probably not being the issue.

What happens if you run the same queries from any desktop client (Visual Studio, Enterprise Manager, BCP, etc.)? If the performance is OK, the server isn't the issue. If the server degrades, then we need an explain plan of the query to see if the query needs tuning. If the execution plan is acceptable, then it's time to focus on where the bottleneck is.
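A cheap way to see the server-side cost in SSMS is to wrap the suspect query in statistics switches, along these lines:

```sql
SET STATISTICS TIME ON;   -- reports CPU and elapsed time per statement
SET STATISTICS IO ON;     -- reports logical vs physical reads per table
-- ... run the suspect query here ...
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

If logical reads dominate and physical reads are near zero, the data is already coming from RAM, and an SSD would change little; high CPU time with few reads points at the calculations instead.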

Given all your responses to the above, I would tend to start thinking about the actual SQL statement also. As Kent says, an explain plan of the actual SQL statement in question would be a good starting point. It could just be a very poorly performing statement.

each "line" runs a number of calculations, has a pdf link and does a number of things across 15 columns/cells. I think it is just task intensive, each line computing and displaying its data. When there are more than 20 lines to retrieve, it is just a lot of calculations to run. And each time that page has to refresh, it runs all those calculations again. The "lines" are live billing lines with data in each cell that has to be calculated and also with links refreshed. I think it is just a very intensive process....

Based on the above, it is unclear exactly what you are dealing with.
You could use triggers so that on update/insert of new data the computed columns are auto-updated,
or have a separate table/view that holds the totals/computations.
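As a hypothetical illustration of the computed-column idea (the table and column names here are invented, not from the app): a persisted computed column is calculated once at write time instead of on every page load.

```sql
-- Hypothetical billing-line table; names are invented for illustration.
-- PERSISTED stores the result on disk, so reads don't recompute it,
-- and it can be indexed if queries filter or sort on the total.
ALTER TABLE BillingLines
    ADD LineTotal AS (Quantity * UnitPrice) PERSISTED;
```

For cross-row totals, an indexed view (which requires SCHEMABINDING and a unique clustered index) plays a similar role.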

You're saying it takes you a long time to get from point A to point B and asking whether replacing the engine, transmission, or wheels will speed up the trip.
There is no way to answer unless the path from point A to point B is clear.
I.e. if, no matter the day or time, the travel speed on the path from A to B cannot exceed 5 km or mi per hour because of the road type and conditions,
then there are no changes that can be made to the vehicle to shorten the travel time from A to B.

If you have a 20x10 table of AJAX items where each change triggers a refresh for the server to compute and respond to, that is where you have to improve.
The problem in this scenario is that you cannot lock update access such that only one person can make updates. If you could, you could perform the computations/adjustments on the client side via JavaScript while at the same time sending notification of the changes to the server.
JavaScript updates the totals in the browser (onChange) while relaying the changed column data to the server.
Locking is required to make sure that the data presented to user A remains consistent with changes made elsewhere. If user A updates row 5, column 4 from 3 to 4 while user B updates row 15, column 2 from 12 to 34, each will see incorrect cumulative information until both refresh their displays.

I posted a couple pics and a Technologies list in hopes of clarifying. I can't really give precise and definitive responses to your questions, as I am the biz owner, not the programmer. I do know that we brought this app to a point a couple of years ago and stopped. It is fully developed and functioning, but, as you say and as the programmer has said, there are areas that could be addressed, in code, to make it more efficient. My point in all this is that a change in hardware may get us through another 2 years without spending 50-100k on reworking the code.

The page that is problematic on reload is perhaps best addressed in code, but, when looking at cost and time to implement, I think using a faster server with SSD's will essentially eliminate the speed problem and keep us from having to go into "round three" on our custom app.

This seems to be stirring frustrations and I do not mean for this post to go that direction. Should we continue or is it best to close this question? If we continue on this question, what can I give you to help isolate the speed drag? Based on what I remember from the programmer's description of it, the page is running multiple calculations on each "line" and that won't change without much re-work. Given that, will SSD implementation help the calculations run more quickly?

Good looking reports, but there's nothing magical there. That is, nothing to suggest why you're having a performance issue.

And I do agree that a few thousand dollars of faster hardware can make a lot of sense when compared to 10s of thousands of dollars of "expert" time.

But a few hundred dollars is better still, with a great risk/reward ratio. Put more memory in the server. If the MB supports 16GB, go there.

And check the location of the O/S and swap space. Swap should be on the fastest device you have. It does not need to be on a RAID device. Do you happen to know if you're using hardware or software RAID? Software RAID is considerably slower than hardware RAID. It provides the same redundancy/protection as hardware RAID at a lower cost, but trades away performance in exchange.

Your delay will likely not be addressed by any hardware-related updates.
The issue is that the items presented may rely on an ever-increasing number of rows in various tables that keep growing.

Hey, it is your money to spend. SSDs have capacity limits if you are simply looking to swap existing SAS/SATA drives for equivalent-capacity SSDs without making any other changes to the server.
Calculations occur in memory, and memory is the fastest component after the CPU in the system. The more data that can be maintained in memory, the faster the processing will be.

i.e. you have a pair of 146GB SAS drives in a RAID 1 group on the server. You eject one of them and replace it with a 146GB SAS SSD. Once the RAID 1 is rebuilt and in optimal state, you eject the second SAS drive and replace it with a 146GB SAS SSD. At this point you have spent X amount of money. On restart the OS boots faster (presumably you moved the page file off the SSD RAID partition and onto another to avoid excessive writes to the SSD). So the bootup process is faster. The issue that you are currently experiencing with the display will not change a bit, and might actually slow down over time as more and more data is added to the database that this page is supposed to analyze.

An equivalent would be: 10 years ago you hired a bookkeeper to maintain your records. For the first six months, you called and got a report within 30 minutes.
As your business grew and expanded its product offering, you call the same bookkeeper and now it takes them a day to provide the same report covering the same request.
Replacing the existing bookkeeper with a "faster" bookkeeper is unlikely to improve the speed by which that report is produced. Nor is improving the speed at which the books are delivered to the bookkeeper, i.e. instead of the bookkeeper having to go and get the books, someone else brings them.

In the end you will have to analyze the causes and come to a decision on what the solution is, e.g. limiting the scope of data presented for alteration on a screen,
or altering the mechanism: display 50 lines, but only allow one row at a time to be updated/adjusted.

The only thing I can say with almost certainty is that using SSDs will not significantly improve the issue you are looking to resolve.

The programmer/developer could use the SQL Query Analyzer/Profiler/Tuning Advisor to explore what suggestions, if any, there are to improve the performance of the SQL server as it relates to the existing query.
i.e. SQL has a tool that allows the capture of queries during peak hours. Using that data, it is possible to analyze what recommendations the SQL Tuning Advisor makes to speed up the retrieval/processing of the data.
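A lighter-weight alternative to running Profiler during peak hours is to query the plan-cache statistics that SQL Server keeps since its last restart; a sketch:

```sql
-- Top 5 cached queries by total elapsed time since the last restart.
-- Times in this DMV are reported in microseconds.
SELECT TOP 5
    qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;
```

Whatever surfaces at the top is the natural candidate to feed to the Tuning Advisor.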

Not sure if it makes a difference, but those pics are of live, editable lines, not reports. I am not sure of the terminology to use, so bear with me, those pics are of the pages I work in all day long. Those lines can be "opened and edited" from the pages depicted.

The OS is on the C drive, which is part of a RAID 5, and the RAID is Intel software RAID. Swap space? Not sure how to figure that out.

Consider putting the O/S and swap on an internal SATA connected disk and using the RAID device(s) for all other storage. RAID systems are very reasonably priced these days and you may find it beneficial to use an external RAID array that gives you much better performance than the software RAID that you're using now.

I really don't figure why anyone is encouraging this questioner to even consider moving to SSD. From the information given and the answers to the questions he has been asked, any competent expert would be giving the same advice.

I am such a procrastinator, don't hold your breath, but most def, if it is in the next month or so, I will post here. From this volley of answers on this post, I am leaning towards a single-processor build with hardware RAID, using SSDs in the array and then swapping out with spinners, just to see what it is like. Will probably do 16 GB RAM, to see how that works. Seems we are always using max RAM with the current system at 8GB, so it will be interesting to see if it maxes out 16 GB right away. Between the RAM and the SSDs, and also by installing the OS on a different machine from Exchange (I think?), all that should relieve a lot of the bog.
