Heterogeneity may sound nice, but if I have x different systems, it multiplies the number of vulnerabilities too. And of course I would need staff who know these systems very well and are able to test and patch them.

I guess there are a lot of small and medium shops out there who patch their Windows systems regularly but run a five-year-old unpatched Linux server that will never be touched, because applying all the outstanding patches is too risky, or because nobody in the shop knows Linux well enough, or simply because it just runs.
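For a box like that, a first question is whether its kernel even knows about these flaws. As a rough sketch (assuming a Linux host; the sysfs paths below are the standard kernel ones, but an old enough kernel simply won't have them):

```shell
# Report Spectre/Meltdown mitigation status on a Linux host.
# The sysfs files below are provided by kernels that know about the
# flaws (mainline 4.15+, or older kernels with backported patches);
# on a long-unpatched server they will be absent, which is itself
# a strong hint that the machine is unmitigated.
VULNDIR=/sys/devices/system/cpu/vulnerabilities
if [ -d "$VULNDIR" ]; then
    # Each file holds one line, e.g. "Mitigation: PTI" or "Vulnerable".
    grep . "$VULNDIR"/*
else
    echo "Kernel predates vulnerability reporting; assume unmitigated."
fi
```

On a patched system this prints one status line per known flaw; the word "Vulnerable" anywhere in the output means the kernel recognizes that issue but is not mitigating it.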

If your SQL server is physically hosted, then doing a test conversion to a virtual machine has quite limited practical value. It will confirm that your application still works with the amended OS (operating system), but with another layer of indirection between the OS and the hardware, and most likely a different chipset with a different configuration, the overall usefulness of that testing is low. Unfortunately, the only really practical test is an identical physical clone of the original system, configured identically but running the patched OS, and that is hard to arrange unless spare systems were included in the original order. If your physically hosted SQL server is locked down well enough that users cannot run arbitrary executables on it (the general principle being that access to such a system should be very effectively restricted), and the few privileged users who do have access do not use it to browse the Internet, then the system should be pretty much safe and you can relatively safely skip the OS patches (software workarounds for hardware errors). The key point here is that the server should be effectively isolated; if you're not effectively isolating your SQL server and running just SQL Server on it, then you really should be.

If your SQL server is virtually hosted, then unless the host OS (VMware, etc.) is patched there is not much you can do to make it safe, because a running process on one guest (hosted) system could read the physical memory of the host OS, which naturally includes the physical memory of all the other guest OSes. Assuming you have multiple identical host systems, the test process becomes upgrading one of them, then moving or cloning a guest system onto the patched host and testing it there.

Dell has rolled back patches for many of their models; people who jumped on the patch train early are having to manually undo patches on hundreds of laptops, with a good percentage bricking, and the patches Intel submitted to Linux are poorly written and flawed. Plus there are no firmware patches for older devices.

This situation is a mess, but with no true competition in the laptop, workstation and server world, Intel brought it on through their hubris. Knowledgeable voices on the outskirts of the tech community have warned about this possible situation with Intel processors for years.

It's sad that my two Raspberry Pis are the only semi-useful computers in the house not affected.


Caveat emptor!

Anything you buy or use that needs maintenance is your ultimate responsibility no matter the OS. Linux consultants are available, it's not rocket-surgery or the domain of grey-bearded priests. It's no more difficult than Windows, just different.

The flaw in chip design has been there since the late '90s, and then the news breaks overnight, 20 years later. If some insiders (e.g. tech manufacturers, NSA, Wall Street) knew about it long before it went public, that information could have been used for insider trading or hacking. When big stuff like this happens in the financial, healthcare, or automotive industries, there are typically Congressional hearings and FBI investigations. Perhaps the IT industry should be held to the same standard.

One of the implications of Steve's advice is that if you are a Windows shop, your firewall and external security solution should be based on Linux, and you should invest as necessary to maintain it. If you are a Linux shop, your security should be Windows. Exploiters tend to have expertise in one or the other, rarely both. This one measure will knock out virtually all of the opportunistic exploiters, leaving you to deal only with those who are targeting you specifically. As for having systems around the operation unpatched because nobody knows them and everyone is afraid to touch them: that is just unprofessional. Raise your bar.


Frankly, unless you run a gateway packet-inspection service (i.e. something that tries to determine malware intent in protocols), your firewall brand really doesn't matter as long as it is adequately secure. This isn't to say that firewalls aren't important; they are an important layer in security. It's just that they don't protect most systems from attacks, because most attacks pass through routes that were intentionally opened in the firewall. Why should an attacker bother hacking the firewall when you've already opened a route through it? It's usually not worth the trouble, it often requires a lot of specialist knowledge, and the attack is much more likely to be noticed.

I wasn't specifying hardware, just a principle. I think Steve's advice about heterogeneous systems is good; if you are looking for an application of it in your operation, security is a good one.

OK, yes - heterogeneous systems are good. In fact, I nearly gave an example of exactly that: an MS-SQL server (usually on Windows) sitting behind a hardware firewall, often Cisco but it could be anything. The fact that it's not worth an attacker's effort to hack the firewall device, because it's too hard for too little gain, does imply that having different systems is a good thing. It doesn't help protect the open route, but it does help protect the firewall itself. In a big IT shop, staff should be encouraged to run a mixture of systems, partly for support reasons (it's hard to support a system you have no access to) but also because an attack against one shouldn't affect the others.