Posted
by
EditorDavid
on Sunday February 25, 2018 @04:04AM
from the patchy-patches dept.

Esther Schindler (Slashdot reader #16,185) writes that the Spectre and Meltdown vulnerabilities have become "a serious distraction" for sysadmins trying to apply patches and keep up with new fixes, sharing an HPE article describing "what other sysadmins have done so far, as well as their current plans and long-term strategy, not to mention how to communicate progress to management."
Everyone has applied patches. But that sounds ever so simple. Ron, an IT admin, summarizes the situation succinctly: "More like applied, applied another, removed, I think re-applied, I give up, and have no clue where I am anymore." That is, sysadmins are ready to apply patches -- when a patch exists. "I applied the patches for Meltdown but I am still waiting for Spectre patches from manufacturers," explains an IT pro named Nick... Vendors have released, pulled back, re-released, and re-pulled back patches, explains Chase, a network administrator. "Everyone is so concerned by this that they rushed code out without testing it enough, leading to what I've heard referred to as 'speculative reboots'..."

The confusion -- and rumored performance hits -- are causing some sysadmins to adopt a "watch carefully" and "wait and see" approach... "The problem is that the patches don't come at no cost in terms of performance. In fact, some patches have warnings about the potential side effects," says Sandra, who recently retired from 30 years of sysadmin work. "Projections of how badly performance will be affected range from 'You won't notice it' to 'significantly impacted.'" Plus, IT staff have to look into whether the patches themselves could break something. They're looking for vulnerabilities and running tests to evaluate how patched systems might break down or be open to other problems.
The article concludes that "everyone knows that Spectre and Meltdown patches are just Band-Aids," with some now looking at buying new servers. One university systems engineer says "I would be curious to see what the new performance figures for Intel vs. AMD (vs. ARM?) turn out to be."
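For admins trying to work out where they actually stand after all the apply/remove/re-apply churn, recent Linux kernels report per-vulnerability mitigation status under /sys/devices/system/cpu/vulnerabilities/. A minimal sketch of reading it (the helper name and the base-path parameter are mine, for illustration; kernels older than 4.15 don't expose this directory):

```python
import os

def mitigation_status(base="/sys/devices/system/cpu/vulnerabilities"):
    """Return {vulnerability_name: kernel-reported status} from sysfs.

    On a patched kernel each file holds a single line such as
    'Mitigation: PTI' or 'Vulnerable'.
    """
    status = {}
    if not os.path.isdir(base):
        return status  # older kernels don't expose this directory
    for name in sorted(os.listdir(base)):
        with open(os.path.join(base, name)) as f:
            status[name] = f.read().strip()
    return status

if __name__ == "__main__":
    for vuln, state in mitigation_status().items():
        print(f"{vuln}: {state}")
```

Running this on each box at least tells you what the kernel thinks is mitigated, independent of which patches you believe you applied.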

Both vulnerabilities are blown out of proportion, and you need to rush to actively fix them only when your platform runs untrusted code, which is mostly relevant for VPSes, clouds, etc.

When you only run your own trusted code (say, a DB or an HTTP server), there's little if any need to patch urgently. Of course, this implies that your authentication process is properly secured; when it's not, the intruder might as well find other local unpatched vulnerabilities.

True that. I mean, what's more secure than a system that can't boot and resists troubleshooting?

"But fisted", you say, "Look into journalctl." [...] "Oh, you're running cyrus and the mail subsystem produces awful amounts of logs so journalctl will take minutes to even get you to the pager?" "I guess you could.. no, not that, but, maybe grep on journalctl -f?" "What do you mean, you need to see past events?" "Well don't run such a stupid mail server, it's not systemd's fault!!!" "What do you mean, it was

Trust isn't binary. No code is fully trusted. There is a whole spectrum from core kernel security functions in an open source OS to random Javascript served up by ads.

Most people run some proprietary software. Most people have not carefully security-audited all their open source software. That's why operating systems have features to isolate tasks, protect the kernel, and manage hardware access rights.

For most people the Meltdown patch is essential. Exploits are already in the wild.

No, it's overhyped. Perhaps if you're running VMs and mixing publicly accessible services with internal services on the same host, then you will want to worry about Meltdown and Spectre potentially letting the public VM grab data from the secure VM. Of course, the other solution you can use is to separate the machines physically, so someone exploiting Meltdown on your public VM only gets access to the other public VMs.

Here, the threat is not from the software on the VM, but from someone finding an exploit in the software and exploiting it. But there is nothing you can run that will get you access to the other private servers, especially with proper firewalling in place.

For single-server machines, the patches aren't as useful -- if you break into the server via an exploit and then get root, having patched it against Meltdown means nothing, since you can access kernel memory much more easily anyway.

Plus, there are plenty of user-mode Meltdown mitigations out there -- the whole JavaScript exploit is now useless because all the major browsers have made it so "high resolution timers" aren't so high-resolution. They're around the 1 ms range, which is enough for scripts but too coarse to actually do a Meltdown exploit (the timing difference between cached and uncached accesses is tiny, and 1 ms resolution isn't fine enough to tell them apart).
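The coarse-timer point is easy to see with a toy calculation: a cache hit and a cache miss differ by on the order of tens to hundreds of nanoseconds, and a clock rounded to 1 ms buckets both into the same reading. A sketch (the latency numbers are ballpark figures for illustration, not measurements):

```python
def quantize(t_seconds, resolution=1e-3):
    """Round a timestamp down to the clock's resolution, as browsers
    now do with their timers to blunt timing side channels."""
    return (t_seconds // resolution) * resolution

# Ballpark access latencies (illustrative, not measured):
cache_hit = 50e-9    # ~50 ns
cache_miss = 300e-9  # ~300 ns

# With a fine-grained clock an attacker can tell them apart...
assert cache_hit != cache_miss
# ...but a 1 ms clock collapses both to the same reading.
assert quantize(cache_hit) == quantize(cache_miss) == 0.0
```

Real browser mitigations also add jitter on top of the rounding, which makes averaging over many samples harder as well.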

The goal is to recognize that the problem is localized to one machine, and that it inadvertently allows processes to read memory they're not supposed to. For a VM server, this is bad, since it means one VM can read the memory of another VM. For a cloud service provider, it's disastrous, since it means an evil VM can read other customers' data.

Within a company, it's a lot less serious if you already have proper network segregation in place, don't mix internal and externally accessible VMs on the same machine, and take other precautions. In a non-VM situation, it's a non-event -- exploiting the service grants you access to the machine, and once that happens, it can be assumed you can access the entire filesystem and everything else accessible to the machine anyway.

As far as I know, no one has yet been able to provide a working Spectre exploit that doesn't rely on custom-written victim code or specially instrumented code. If I'm wrong, please post a link to the code. Meltdown is different -- basically, never use Intel again until, in five years, they have fixed it in hardware. As for running trusted code on anything other than bare metal: why would you trust the virtualization layer any more than your trusted code? That's a failure in your trust model.

After decades of struggling with virus scanners that insisted on slowly, laboriously scanning every .h file on every access during every compile, the insistence of sysadmins on braindead security policies has already wasted months of my life. I guess my only question is: what's different now? Is it, perhaps, that they themselves would also be bothered by it this time?

Go do your f'ing job and install the patches from hell, I say. And if the drop in performance bothers you, maybe we can finally talk about tur

Maybe it's just nostalgia and rose-tinted glasses, but life was a lot less complicated then.

The complications were different. You had to fiddle with hypermodem or xzmodem or UUCP, you had to know the AT command set, you had to know arcane technical details of the PC ISA just to get a sound card working.

Besides this, the hosts file is missing certain tools that would make it viable for most things.

It needs wildcards and regex patterns to be worth a damn in today's world, where you want to block not just a specific IP but a whole block of IPs. iptables generally does a much better job of this than rinky-dinking around with a billion IP addresses which change like the wind blows.
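The gap is easy to demonstrate with the standard-library ipaddress module: a single CIDR rule (the kind iptables takes, e.g. `iptables -A INPUT -s 203.0.113.0/24 -j DROP`) covers what would take hundreds of individual hosts-file entries. A sketch, using the RFC 5737 documentation range as the example block:

```python
import ipaddress

# One iptables-style CIDR rule can drop a whole block:
blocked = ipaddress.ip_network("203.0.113.0/24")

def is_blocked(addr):
    """True if addr falls inside the blocked CIDR range."""
    return ipaddress.ip_address(addr) in blocked

print(is_blocked("203.0.113.77"))  # -> True, inside the /24
print(is_blocked("198.51.100.1"))  # -> False, outside it
print(blocked.num_addresses)       # -> 256 hosts-file entries replaced
```

A hosts file, by contrast, matches only exact hostnames -- no ranges, no wildcards -- which is why it scales so poorly for blocklists.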

I thought the sysadmin had pretty much been eliminated in favor of outsourcing IT and making the developers do it themselves. It's a prime area for cost cutting, good sysadmins aren't cheap and you won't notice they're gone because they tend to automate their jobs.

I guess what I'm referring to is digging into every single patch to try to figure out what the fuck it actually patches. And if you *do* get some kind of detail on what a specific patch actually fixes, is the information meaningful enough to decide whether you *should* apply this specific patch (relevance, risk, etc)?

Is it easier or harder now, with so many vendors releasing "rollup" patches that contain multiple patches -- some all-inclusive, some requiring a previous rollup to already be installed? Now picking and choosing specific patches is more or less out the door.

And then there's the question of whether the vendor even makes it easy/hard to have any control over patches, automatically just giving you patch(es) in some form or other. And of course let's not forget support -- will the vendor provide any support if you are missing patches or do you have to have them all installed anyway?

I guess what I see this boiling down to is "Who cares?" Install all the latest available patches and hope for the best. Only a full-time dedicated patch admin for a narrow product silo has the time/energy/understanding to break down the compound patching environment into something coherent, and is probably also the only one to have a complex patch management system that gives granular control over which patches get installed and which don't.

Also, based on the last few years of software quality, we're all beta testers anyway. Pretty much everything released is beta quality, and hits true stability and reliability right about the time the new version is released and its worst initial bugs have been tamed.

As some have undoubtedly pointed out, the true vulnerability of your system depends on exposure and the type of code run on it, but the idea of patching only a certain segment is less than appetizing. Any business of any size has a test/QA tier, maybe a dev tier, and finally a production line. Obviously you patch one set -- test/QA or dev -- hopefully let your developers abuse the hell out of it, then roll it out to production. The company I currently work for actually has QA engineers who follo