
Stability vs. Security

One issue that seems to come up often is whether to apply a patch immediately or wait to ensure it's stable.

In an eWeek article, ISS was quoted as saying:

Databases are also particularly vulnerable to attack, since DBAs are loathe to install patches that haven't been thoroughly tested

I know that from a security standpoint you want to apply patches and updates that protect your system from known vulnerabilities. I am curious about others' thoughts on balancing that against the stability of the system.

As they mentioned in the quote, if you have a mission-critical database that is running flawlessly and a critical new vulnerability is announced, do you patch immediately and risk screwing up the database, or do you hold off and risk getting hit by the vulnerability?

I think the best answer to that dilemma is cash-intensive. We have three systems: a development box, a staging box, and a production box. We roll out any new patch to the staging box (an exact copy of the production box) first and see if everything continues to work. If there are any bugs, we don't roll out the patch.

On a similar note, we can't install MS SP3 for Win2k because of its licensing agreement. If we were to agree to their license we would be in violation of federal laws regarding our industry... needless to say, we are looking at becoming a *nix shop.

We always test the patches before putting them on production systems. This can mean we are vulnerable for some time before we can apply the patch. You were probably already vulnerable before the patch came out, so an extra couple of days doesn't matter. If it's a really bad one (exploitable, with exploits seen in the wild) we might consider shutting the vulnerable service down in the meantime. It all depends on how vulnerable the service is and what kind of data it is serving.

Oliver's Law:
Experience is something you don't get until just after you need it.

I always test patches on my test machine (an old 400 MHz PII box with Win2k) before rolling them out to the rest of the company. I have had lots of issues with Windows 2000 and service packs, and I have learned my lesson: back up and test before rolling patches out.

N00b> STFU i r teh 1337 (english: You must be mistaken, good sir or madam. I believe myself to be quite a good player. On an unrelated matter, I also apparently enjoy math.)

Sounds like everyone so far agrees to test patches first. One thing we have been doing, probably like everyone else here, is to come up with and deploy a patching strategy: 'normal' maintenance patching on a set time frequency, and an 'emergency' patching strategy triggered by an event, akin to a new exploit being found.

For us, the difference between the time-based and event-based patching strategies is in how we carry out testing.

For time-based patching, we let the patch(es) 'sit' in the open to see how they fare with the company that released them, then begin testing in the DEV/QA tiers, and if successful, the patch(es) get moved into PROD. Under these 'normal' conditions we classify each server or environment for patching accordingly.

In event-based patching, again triggered by the release of a new exploit, we shrink the time for testing and moving into production based on the severity of the exploit.

We have quite a number of servers/environments, so patching is not cookie-cutter across the board today, although we are now working toward having all environments on one patching strategy. The strategy has to work for the server, but also for your clients and users.

I think about things like this: I would much rather have to reboot and lose my uptime because I installed a patch than reboot and lose it because the patch I didn't install didn't stop that worm from spanking my network. All in all, if a patch comes out, try talking to people who have used it and see if they had any problems, but you should install it.

I think the consensus is that it is a system-by-system, patch-by-patch decision. In each instance you have to weigh the existence, propagation, payload, and other factors of the potential threat against the impact on your systems if you crash the network with an unstable patch.

Sometimes that may mean patching immediately, and sometimes it may mean holding off. Another suggestion for holding off is to assess the mitigating factors of the threat and put up alternative safeguards. For instance, if a threat attacks over port 1443 and a patch is available, you could block external access on port 1443 until you have had sufficient time to test and implement the patch. If you can find ways to reduce the likelihood or severity of the threat, you buy more time to assess and test the patch.
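One way to sanity-check that kind of stopgap is a quick reachability probe from outside the perimeter. A minimal Python sketch (the host name and port below are placeholders, not real targets):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, etc.
        return False

# Example: after adding the firewall rule, confirm the service is no
# longer reachable from an external vantage point.
# if port_open("db.example.com", 1443):
#     print("WARNING: port 1443 still reachable -- mitigation not in place")
```

Run it from a box outside your firewall; running it from inside only tells you the service is up, not that the block works.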

I liked what Vescovono offered. We play it by ear and assess risk with each action, as best we can. If the risk of applying a patch is small compared to the risk of not applying it (such as the 1443 SQL Slammer), we get busy as soon as possible.

If the risk of applying the patch is larger compared to not running it immediately (such as one MS Exchange 2000 patch from a couple of years ago), then we wait and seek out updates and reports from the field.

However, to patch or not to patch is also related to other issues. Hardware changes and firmware updates often push the need to update OSs and applications.

In our case, we live in an environment of constant vigilance. So, we must find some balance between our need for security, our need for stability, and our users' need for access and reliability.

I think I need to agree with the need to evaluate each system and each patch individually. It kills me, because I started out a long time ago in the land of helpdesk, where I learned that standardization is a beautiful thing. Unfortunately it's just not as practical in the data center.

Still, I would love to hear if anyone has a really good system for tracking revision levels and patch deployments.

A three ring binder or a spreadsheet is better than nothing, but it just isn't sexy if you know what I mean.
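For what it's worth, even the spreadsheet approach can be made queryable with a few lines of scripting. A minimal sketch in Python (the field names, host names, and patch ID are all hypothetical):

```python
import csv
import datetime
import io

# Hypothetical minimal patch log: one CSV row per (host, patch) event.
FIELDS = ["date", "host", "patch_id", "environment", "status"]

def record_patch(log, host, patch_id, environment, status):
    """Append one deployment record to a CSV file-like object."""
    csv.DictWriter(log, fieldnames=FIELDS).writerow({
        "date": datetime.date.today().isoformat(),
        "host": host,
        "patch_id": patch_id,
        "environment": environment,
        "status": status,
    })

def hosts_missing(log_text, patch_id, all_hosts):
    """Return the hosts with no successful deployment of patch_id."""
    done = {
        row["host"]
        for row in csv.DictReader(io.StringIO(log_text), fieldnames=FIELDS)
        if row["patch_id"] == patch_id and row["status"] == "deployed"
    }
    return sorted(set(all_hosts) - done)
```

Not sexy either, but it answers the question the binder can't answer quickly: "which boxes still need this patch?"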

And I have definitely worked in environments that could not implement a patch in short order due to industry and/or governmental regulations. Things needed to be tested on systems mirroring production systems for literally months before upgrades could be pushed to systems containing production data.

Fortunately, those types of environments tend to be very heavy on security too, limiting the potential for an exploit and greatly limiting its spread if one were to happen.