My experiences as an IT professional - Anything that I write here is my personal opinion and should not be officially associated with any other entity

11 posts from June 2012

Friday, June 29, 2012

If you are a relative MySQL newbie like me, and someone tells you to back up your databases, you might make the same mistake that I did and copy the database files (all of those .frm, .MYD and .MYI files) to a backup folder instead of using MySQL's built-in backup method.
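For the record, the built-in method is mysqldump, which writes out the SQL needed to recreate your databases. A minimal sketch (the paths and the root credentials are just examples):

```shell
# Back up every database to a single SQL dump (prompts for the root password)
mysqldump -u root -p --all-databases > /tmp/all-databases.sql

# Restoring is just feeding the dump back into the server
mysql -u root -p < /tmp/all-databases.sql
```

A dump like this restores cleanly without any of the permission or SELinux headaches described below.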

Of course, when you make this kind of mistake, the day eventually comes when you have to restore from those backups. You copy the files back to their original location, and MySQL bombs on you when you try to start it.

In my case, I backed up the entire /var/lib/mysql directory to /tmp/mysql-backups. When I copied the /tmp/mysql-backups stuff back to /var/lib/mysql, nothing worked. Doing a 'cat /var/log/mysqld.log' showed a "Can't find file: './mysql/plugin.frm' (errno: 13)" error.

Turns out that you have to reset the permissions on those files and adjust the SELinux contexts so that the mysql account is allowed to access the databases.

This help is specific to MySQL using InnoDB on CentOS 6.x. I have no way of knowing if it will work for any other configurations.

Here's how to restore your databases and reset the permissions, assuming that you have a backup of the entire /var/lib/mysql directory. I'll run a bunch of similar chown/chmod statements below just to be thorough. You could collapse the ownership changes into a single 'chown -R mysql:mysql <directory>', but keep the chmod commands separate, since directories and files should allow different operations.
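A condensed sketch of the whole procedure, assuming the standard CentOS datadir and that the mysql user and group exist; the exact modes (700 for directories, 660 for files) are my assumptions, so adjust to your environment:

```shell
#!/bin/bash
# Reset ownership, permissions and SELinux contexts on a restored
# MySQL datadir (CentOS 6.x). Run as root.
DATADIR=/var/lib/mysql

# mysql must own everything under the datadir
chown -R mysql:mysql "$DATADIR"

# Directories and files allow different operations, so chmod them separately
find "$DATADIR" -type d -exec chmod 700 {} \;
find "$DATADIR" -type f -exec chmod 660 {} \;

# SELinux: restore the default file contexts so mysqld may read its files
restorecon -R "$DATADIR"

service mysqld start
```

The restorecon step is what clears the "errno: 13" permission-denied error when the Unix permissions already look correct.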

Right now, one of my main functions at my job is to manage patches for nearly a thousand Windows systems. "Pain in the ass" doesn't even begin to describe the difficulties of patching that many Windows servers and workstations.

First off, you can't do it without an enterprise tool like LANDesk, Secunia, GFI LANGuard, Microsoft SCCM, etc. These tools are big, complex, hard to manage and typically require agents, which often have their own issues.

Second, the number of third-party apps that you have to track and patch is significant and growing. Because of compliance requirements, some organizations have to assess every single "security patch" for applicability. To do this, you have to know every piece of software installed in areas requiring compliance, and you have to regularly search for security updates for all of those applications. Yes, it is as laborious and mind-numbing as it sounds. I have a lot of fun at my job, but this area is not intellectually stimulating at all.

Third, because Windows patches require a reboot, scheduling downtime for critical production systems can be a total nightmare.

I should mention right now that Linux does all of this far better. If you pick your tools right (avoid tools that can't be installed from a repository), you manage all of your patching from one place: yum for Red Hat/CentOS or aptitude for Debian. If your server is headless (who needs a GUI that eats resources and adds another 200 packages that need updating?), patching a brand new server up to current takes minutes and requires ZERO reboots. It's as easy as typing "yum update". And since the package manager tracks which security patches apply to each system, there's no research to be done on anything other than the stuff you installed from source.
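On CentOS 6 you can even limit a run to security errata if the yum-plugin-security package is installed; a quick sketch of a patching session:

```shell
# Apply every available update
yum -y update

# Or, with yum-plugin-security installed, list and apply only security fixes
yum --security check-update
yum -y --security update
```

That second form is what makes the compliance-driven "is this security patch applicable?" research largely disappear on Linux.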

Linux is quite simply light years ahead of Windows in this area. It is probably the main reason that I'm getting into Linux so much of late.

It sometimes takes over an hour to install a service pack on a Windows server. I can build up a new Linux server and fully patch it in less time. In fact, I can build up a new server with the LAMP stack, install an app on top of that (for log management, for instance) AND fully patch it in less time.

It's no wonder that Microsoft's share of datacenter systems has remained weak and will continue to slide. This situation is unacceptable. Even with clustering, which is expensive, hard to maintain and requires a lot more hardware, Microsoft can't maintain the kind of uptime that Linux is able to. Because guess what, patching invariably BREAKS your cluster.

On Windows 7, it can take over an hour to fully patch a new laptop or workstation, and that's with SP1 already installed.

Even a shark with a congested nose can smell the blood in the water.

Linux administrators are able to perform patch management on their systems without buying expensive tools, without wasting a lot of time performing CVE and NVD searches for their apps, with a minimal amount of downtime and with significantly lower risk to their systems. Not to mention that it's much easier to deal with patching of systems isolated from the internet because you can build your own repositories.
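Building such a repository for isolated systems is little more than running createrepo over a directory of RPMs and pointing the clients at it; a sketch, where the paths and hostname are my own made-up examples:

```shell
# On the repo host: collect the RPMs and generate the repository metadata
mkdir -p /srv/repo
cp /path/to/rpms/*.rpm /srv/repo/
createrepo /srv/repo    # creates the repodata/ directory

# On each isolated client, drop in /etc/yum.repos.d/local.repo:
# [local]
# name=Local mirror
# baseurl=http://repohost.example.com/repo
# enabled=1
# gpgcheck=0
```

After that, "yum update" on the clients pulls everything from your internal mirror with no internet access required.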

Microsoft will continue to bleed marketshare and loyal customers like me until they solve this problem.

It will likely take a complete rewrite of Windows to do it, but given that the survival of the company depends on it, at least in my opinion, it would be stupid of them not to do it, and soon.

I'm sure that anyone who's used Windows has at least one pet peeve that they'd like to see fixed. As much as I like Windows (at least in its latest incarnation), I'm no exception.

What's more annoying is that some issues have been around for several versions of Windows (Office too).

OK, so let's get to it! Here's my list of things I'd like to see changed in Windows Mountain View (sorry, I still can't call it what Microsoft wants to call it... psst, Windows Vista... it hurts my fingers just thinking about typing it).

1. Focus, focus focus!

The way Windows changes focus, or how it designates what is the current active window, has been an annoyance of mine for a while now.

For example: you're working on a Word document and need to reference a web page, so you click the Firefox icon (or IE) in the Quick Launch bar and get back to typing your current thought stream in the Word document. Then, all of a sudden, mid-sentence, you realize that you are now typing on your homepage in Firefox. Exasperated, you sigh, click back on your Word document and retype the thought stream from the point where Windows, thoughtfully, changed the active window.

Hey Microsoft, here's a hint for you in as plain English as I can make it: Never, ever, never, ever, ever, ever, never, ever, ever, ever, ever change focus from the window that I'm WORKING on until I EXPLICITLY COMMAND Windows to do so!!!!

I don't CARE what your usability testing shows you. I work in specific ways, and it would be MOST APPRECIATED if I were given a choice in how Windows handles changes of focus.

2. I'm the boss!!

I'm trying to delete files, a lot of them, from the Temp directory for instance. Windows pops up an error window telling me that a file is in use and can't be deleted. It doesn't tell me which app is holding the file open. It doesn't ask me if I want to delete it anyway, and it doesn't give me the choice to skip that file and continue deleting the rest. LAME!

Give me the choice; I don't care how hidden it is, as long as it's there and documented. Unless the file is a required system file, ask me if I want to disconnect the app's connection to the file and delete the file.

3. What's going on under the hood?

An error comes up when I'm running an application. An error shows up in the Event Viewer. I'm having problems performing a specific task (such as trying to connect to the console session of a Windows 2003 Server using RDP).

Where's the info in the event logs? Why are you making me click on a link to find out what the error means? Why is there nothing in the event logs at all? Why is the error not listed anywhere in the knowledge base with the same context as my error?

Windows should do a much better job of telling the user (and an even better job of telling the admin) what's going on under the hood. What does the error mean? Why is that application not starting? What's causing my inability to connect to another system's event logs?

I shouldn't have to download support tools in order to drill down to the lower levels of what's going on in Windows. It should tell me.

All error codes should translate into plain English messages in the Event Viewer. All problems should be logged somewhere, or Windows should ask the user if they'd like the problem logged in full to the event log, and Microsoft should at least let customers know that their problem is known even if there's not currently a fix.

Microsoft could even go the extra mile and ask the user if they'd like to be notified if a fix is created for their particular issue.

4. Welcome to the land of a thousand reboots.

So you install a new app, Windows asks you to reboot. You install a new driver for your printer, Windows asks you to reboot. You install a new security update, Windows asks you to reboot. You sneeze, Windows asks you to reboot. You run into a problem, so you reboot. Reboot, reboot, reboot after reboot.

I think you get the idea even if I have exaggerated the issue, but not by much. Sometimes it feels like I spend half my life waiting for systems to reboot. Perhaps it's not quite that much, but it's still too much.

Even with all the improvements that Microsoft has made in this area, it's still not enough. Linux only needs to be rebooted for kernel updates (as far as I know; perhaps Martin MC Brown could shed more light on this), in other words, very, very rarely.

With the number of updates, patches and security fixes that Microsoft puts out for Windows, the reboot requirements are way too high. The only way to get ultra-high reliability with Windows currently is to run it in clusters. This is simply not acceptable for small businesses that can't afford double the servers and licenses.

The fact that we can't afford double the hardware and software is no excuse for us not to have high levels of uptime.

These are just 4 of the issues that I have with Microsoft. To some they might seem minor, but in my line of work, I am bumping up against these problems almost every day.

Microsoft is starting to bump up against competitors in both the consumer and business sectors. In light of this, I would suggest to Microsoft and all of its employees that good enough isn't. New features aren't enough. You have to be better than your competitors, some of whom offer their products for free. Far better. You just about have to be perfect.

You have a long way to go. And Windows Vista better be the next big leap instead of the next small step.

stop() {
    # Check whether the current user is able to stop the agent; if not, su to
    # the zabbix user and kill the agent. Try to kill the zabbix_agentd
    # processes up to 5 times using a counter; if the agent is still running
    # after the fifth try, output a message to the console.
    count=0
    while [ $count -lt 5 ]; do
        su - zabbix -c "killall zabbix_agentd" 2>/dev/null
        # 10-second delay to avoid attempting to kill a process that is still
        # shutting down; otherwise the kill attempt would error out.
        sleep 10
        # After 10 seconds, check the number of zabbix_agentd processes again;
        # if it's 0, exit the loop and continue on.
        if [ -z "$(pgrep zabbix_agentd)" ]; then
            return 0
        fi
        count=$((count + 1))
    done
    echo "zabbix_agentd is still running after 5 attempts"
}

Wednesday, June 20, 2012

I work with Word and Visio a bit, incorporating network diagrams into documents. By default, Word will munge the dashed lines of a Visio drawing when the drawing is embedded into Word, which is of course highly annoying.

After some searching, I found out that this is known behavior caused by the way that Visio saves longer lines to save space, but there is a registry hack to fix the problem.

Saturday, June 16, 2012

Recently, I've been doing research on tools for log management and SIEM (Security Information and Event Management). As anyone who's ever been part of selecting an enterprise tool can attest, it's quite the daunting task.

Enterprise tools are BIG, as in they have a footprint larger than a brontosaurus. They typically require a lot of work to get them set up to where they are useful and usable, and they aren't any good unless someone (sometimes a small team) is managing them on a daily basis. Figuring out which tool best meets your needs is just as large an endeavor.

Sometimes it can be hard to figure out just where to start. So you do some Google searches trying to find what SIEM tools are on the market, and maybe some searches for open source products that are up to the task, and if you are lucky, you'll hit on one of Dr. Anton Chuvakin's blog posts on the subject.

He is definitely an expert in the field and currently works at Gartner doing analysis for them on the subject.

You'll also notice, if you pay close attention, that he's got a great sense of humor (and apparently likes Cognac).

Some notable blog posts of his that I think are required reading for anyone who is looking to purchase a SIEM tool are:

Top 10 criteria for a SIEM? - He goes against his better instincts and writes an informative post on what are, in my opinion, excellent criteria for beginning your assessment of the various SIEM tools.

On choosing a SIEM - Here he provides some questions that a security analyst, business exec or IT manager should ask before they even begin looking for a SIEM. The questions are quite insightful and get to the heart of "do we really need this functionality?" and, more importantly, "do we have the resources to support it and make it successful?"

The myth of SIEM as "analyst in a box" - A good post in his series on how not to choose a SIEM. Basically, it boils down to this: if you don't already have a security analyst or analysis team on staff, you'd better get one before deploying a SIEM, or your effort will be a complete failure.

Tuesday, June 12, 2012

With few exceptions (Puppet being one of them, because it just rocks) open source enterprise tools need to be packaged and available in one of the mainstream repositories (even EPEL is good enough! The bar isn't that high!) or your tool just isn't ready for primetime.

Looking at you, Plone, LogAnalyzer, nxLog and LogStash! OSSEC gets a pass, but just barely, because it uses the funky Atomic repo.

This is where tools like Zabbix and OSSIM come out ahead. The last thing you want to be doing when managing an enterprise, particularly in the industrial control space because of the funky requirements, is worrying about manually updating ANYTHING.

Really, the only reason why Puppet gets a pass is because it's so hugely useful in an enterprise.

And no, I don't want a bunch of excuses about why your tool isn't in a repository, whether it's for Debian, Ubuntu or the Red Hat side of things. If it's not available as a managed package, chances are your tool isn't going to be deployed by me or other IT people like me... and we tend to be a very stodgy group.

Saturday, June 09, 2012

One part of the conversation is completely *facepalm*/smh (shaking my head) worthy, though. Someone brought up the ludicrous notion that Linux would have protected the Iranians against Stuxnet and Flame. Linux would not have given the Iranians any extra protection.

The Stuxnet attacks were highly sophisticated and targeted, and were able to get through every layer of the Iranian infrastructure to become embedded within the centrifuge controls. Flame, at least as far as I can tell, may have been used to provide the reconnaissance needed to build Stuxnet. It wouldn't have mattered what types of computers and OSes the Iranians used; the attackers would have just found vulnerabilities in them and developed exploits accordingly.

These hacks were very difficult to defend against and would have required a very thorough "defense in depth" strategy, with a dedicated staff of security analysts to monitor and analyze network, OS and application logs and health. Even then, given that spies were involved, it would have been an uphill battle to stop these attacks from succeeding.

There are certainly advantages that Linux gives to organizations within datacenters. Protecting them against highly advanced and targeted threats that exploit unknown weaknesses is not one of them, in my opinion.

Friday, June 08, 2012

I've recently become a big fan of LastPass as a "cloud based" password management tool and have been using it to secure my identities on various sites by randomizing the password and storing it within LastPass.

It's good to know that companies like LastPass are doing valuable things within the consumer security space.

If you haven't changed your LinkedIn password within the last week, please use the tool to see if it was compromised, although you should change your password anyhow.

And if you aren't already a LastPass user, now would be a good time to start. For just $12 a year, you can have it manage your passwords, store them in encrypted and hashed form on its infrastructure, and sync the password vault to your PC, Mac, iPad, iPhone, Android phone, etc. I highly recommend using this or a similar service to manage your passwords. Your identity deserves protection, and you can't protect it with one weak password that you reuse across multiple sites. Have LastPass generate a strong password (at least 20 characters, 32 where possible) for each site and manage the passwords for you.
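If you're curious what a password of that strength looks like, or you need one without LastPass handy, openssl can generate one from the shell; 24 random bytes base64-encode to a 32-character string:

```shell
# 24 random bytes -> 32 base64 characters, a reasonable per-site password
openssl rand -base64 24
```

Paste the result into the site's password field and let your password manager remember it for you.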

Monday, June 04, 2012

For security as well as debugging purposes, it's a good idea to log the inbound, outbound, invalid and dropped packets that flow through the network interfaces on your Linux systems. This is part one of a series; the rest, such as how to configure rsyslog.conf, will come later.

Then I modified what they had done and came up with my own file and here it is. It is set up to allow port 80 through as well as ICMP pings, SSH connections, syslog messages and traffic from your Puppet server.
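My actual file is specific to my environment, but a sketch of the general shape of such an /etc/sysconfig/iptables file might look like the following. The subnet, host addresses and the Puppet port 8140 are my own example assumptions; the three-digit numbers in the comments exist so an automated tool can assemble the rules in order:

```shell
# /etc/sysconfig/iptables -- illustrative sketch only
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# 100 loopback traffic
-A INPUT -i lo -j ACCEPT
# 110 established and related connections
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# 200 ICMP ping
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT
# 300 SSH
-A INPUT -p tcp --dport 22 -j ACCEPT
# 310 HTTP
-A INPUT -p tcp --dport 80 -j ACCEPT
# 400 syslog from the local subnet (example address)
-A INPUT -p udp -s 192.168.1.0/24 --dport 514 -j ACCEPT
# 500 traffic from the Puppet master (example address, standard port 8140)
-A INPUT -p tcp -s 192.168.1.10 --dport 8140 -j ACCEPT
# 900 log everything else before the default DROP policy catches it
-A INPUT -j LOG --log-prefix "iptables-dropped: "
COMMIT
```

The final LOG rule is what feeds the dropped-packet entries into syslog for the rsyslog configuration covered later in the series.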

Just do 'nano /etc/sysconfig/iptables' when logged in as root (using 'sudo su -' of course) to modify your file to do what you want it to do.

The three-digit numbers are there to allow for automated building of iptables files using DevOps systems like Puppet or Chef. The benefit is that you manage the iptables file at the enterprise level, and these systems keep an audit trail of changes and also ensure that systems stay properly configured. If someone were to change an iptables file managed by one of these systems, the system would revert it to the proper file. That's the power of DevOps.

As you can see, the file is broken up into modules with the header defining what those modules are and the various rules either accept, log, drop or send packets further on down the chain.