5 Answers

On Linux systems, all of this information is available through the proc interface, and as such is fairly easily scriptable. As a working example (coming from a RHEL6 system) let's look at the rsyslog process.

The arguments look like they're all run together. In fact, they are separated by null characters. You'll get a more friendly display by turning the null characters into newlines (but note this will drop the distinction between separations between arguments and actual newlines in an argument):
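A minimal sketch of that translation, using a throwaway `sleep` process rather than rsyslog so it works anywhere (the process and its arguments are placeholders):

```shell
# Start a throwaway long-running process with known arguments.
sleep 300 &
pid=$!
# Raw read: the arguments appear run together because of the NUL separators.
cat "/proc/$pid/cmdline"; echo
# Translate the NUL separators into newlines for a friendlier display.
tr '\0' '\n' < "/proc/$pid/cmdline"
kill "$pid"
```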

Granted, this doesn't really give us anything beyond what we got from the ps output. However, depending on what you want, it may be more easily scriptable. If we wanted to work exclusively in bash, for instance, we could use this structure to iterate over all processes:

for p in /proc/[0-9]*/cmdline; do
…
done

Then use that as a file list for processing.
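Fleshed out, that loop might look like the sketch below (the function name and the default "pass" pattern are my own choices, not anything standard). Note that short-lived processes can vanish between the glob expansion and the read, hence the fallback:

```shell
# Scan every process's argument list for a substring (default: "pass").
scan_cmdlines() {
    pattern=${1:-pass}
    for p in /proc/[0-9]*/cmdline; do
        # A process may have exited since the glob expanded; skip it.
        args=$(tr '\0' ' ' < "$p" 2>/dev/null) || continue
        case $args in
            *"$pattern"*) echo "${p%/cmdline}: $args" ;;
        esac
    done
}
```

Call it as `scan_cmdlines` or `scan_cmdlines secret`, and treat its output as sensitive.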

If you are instead into Perl, there exists a module called Proc::ProcessTable that queries the proc system and exposes the same information as an object.

All that being said, if you want to look for passwords on the command line, you may sometimes be disappointed. Some applications deliberately mask the password in their argument list; the mysql client, for example, overwrites it shortly after startup (though there is a brief window before it does so):

Awesome answer. I wish I could give you all my upvotes, seriously.
–
chao-mu, Jun 15 '12 at 16:48

Given that this question is more or less answered by "Simple example auditd configuration?", I'm surprised you didn't recommend auditd. I wouldn't recommend sampling here; you'll miss important stuff. Regarding cmdline, the arguments are separated by null characters, and find is overkill for locating the cmdline files.
–
Gilles, Jun 18 '12 at 1:17

@Gilles: It's true, auditd is normally the "right" answer. I think I fell into the classic case of "answering the question as asked, not giving the answer needed".
–
Scott Pack, Jun 18 '12 at 12:50

I am accepting this answer because it seems (from the answers and my research) that there is no such tool. This post would be a good starting point for someone willing to write one.
–
Gael Muller, Jun 18 '12 at 15:40

Don't forget that you may have access to the environment too, so putting a password in an environment variable may not be safer than passing it as a command-line argument: ps auxwwe | grep --color -i pass. That said, on my modern Ubuntu I cannot see the environment of processes I don't own, so it's slightly more secure.
–
jrwren, Jun 21 '12 at 18:38

This is the sort of task humans do a lot better than machines. And by that I mean pattern recognition. You could try writing a script that continually calls ps (cutting out the command and then sorting out duplicates with sort -u) and then review the output later. However note that this is now persisting the data that you've already identified as potentially being sensitive. Make sure its permissions are sane or that it is stored in an encrypted form. Review periodically until you feel comfortable with your users and then ditch the script to reduce maintenance costs.
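One sampling pass of that script might look like the sketch below; the function name and the file location are assumptions, and the umask line is there because, as noted above, the sample file may itself contain the secrets you're hunting for:

```shell
#!/bin/sh
# One sampling pass: snapshot every running command line, fold it into
# a deduplicated file, and keep that file readable by its owner only.
umask 077
sample_commands() {
    samples=${SAMPLES:-/var/tmp/ps-samples.txt}   # hypothetical location
    ps -eo args= >> "$samples"                    # one line per process
    sort -u -o "$samples" "$samples"              # drop duplicates across runs
}
```

Run `sample_commands` periodically, then review the accumulated file by hand.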

Jumping back to your hypothetical tool, the number of false positives it would generate would be astounding unless the passwords were already known to it. In that case, since you are now sharing sensitive information across users, you are increasing risk not decreasing it.

One way to reduce those false positives is to keep the script as simple as possible: for example, only match common commands with consistent arguments that are known to take passwords on the command line. Even then, shell invocations can get so complex that you'll still get plenty of false negatives and positives. The best you can do is regular expressions, and those will be a nightmare to implement. This diminishes the value of such a tool, especially once you factor in the maintenance cost; there are probably better things an admin could be doing with their time.

Also, this is a reactive approach to security, not a preventative one. Perhaps there is a way on your system to prevent users from seeing other users' process information? It may be a long shot, but it may also be the better question to ask.
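As it happens, newer Linux kernels do provide exactly this (an addition on my part, not something this answer originally covered): since Linux 3.3, procfs supports the hidepid mount option. A minimal sketch, requiring root; hidepid=2 hides other users' /proc/PID directories entirely:

```shell
# Remount /proc so users can only see their own processes (Linux >= 3.3).
mount -o remount,hidepid=2 /proc
# To make it persistent, an /etc/fstab entry along these lines:
#   proc  /proc  proc  defaults,hidepid=2  0 0
```

Be aware that some monitoring daemons expect a fully visible /proc, so test before deploying broadly.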

Also note that this information ends up in the user's shell history file as well, essentially persisting the disclosure. Checking the permissions and ownership of those files might be a useful task, and the same goes for logs that might contain passwords or other sensitive information. Some applications, such as the mysql client, also keep their own command history.
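That permission check is easy to script. A sketch, assuming GNU find (the -perm /mode syntax) and flagging history files readable by group or other; the function name and directory arguments are mine:

```shell
# List shell/mysql history files under the given directories that are
# readable by group or other (i.e. any of the 044 permission bits set).
lax_history_files() {
    find "$@" -maxdepth 2 \
         \( -name '.bash_history' -o -name '.mysql_history' \) \
         -perm /044 2>/dev/null
}
```

For example: `lax_history_files /home /root`.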

Automating such a process is a fairly trivial task, once you know what you want accomplished.

Assuming your systems are linux boxes, bash/python/perl scripts can easily be written to execute commands and parse the results. Such scripts could also compare the differences between the outputs day-to-day. The scripts could be set to execute regularly through the use of cron jobs.
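For instance, a crontab wiring this up might look like the following sketch; both script paths are hypothetical stand-ins for whatever you write:

```shell
# Hypothetical crontab entries (paths are assumptions):
# sample the process list every 10 minutes...
*/10 * * * *  /usr/local/sbin/ps-sample.sh
# ...and once a day, diff today's results against yesterday's.
5 0 * * *     /usr/local/sbin/ps-diff.sh
```

Cron mails any output to the job's owner by default, which is a convenient way to surface the day-to-day differences.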

The usefulness of the results would depend on the person monitoring/interpreting it.

The same could presumably be done for Windows boxes using batch scripts (though I'm not sure).

Hmm, you know, as I think about this question, I'm not so certain how trivial a task this would be. You want information about each image, and presumably about short-lived instances as well.

So if you want all of this information, you could be talking about using the Windows Filtering Platform on Windows, and something like epoll on POSIX systems, to receive all of the event I/O notifications.

Then you need a way to persist this data and transport it to a central repository where it can be aggregated for consumption. That would be a lot of data, and analyzing it live would require a lot of horsepower; you would probably have to depend on a process like ETL or map/reduce before you could make sense of it. You could, of course, just send it all to a syslog server and go through it by hand, but I believe that would quickly overwhelm you.

Suppose instead you did the processing and filtering locally on each machine and exported only positive matches to a syslog/aggregation server.

I don't know of a tool that does this specifically. If you have an AV/HIPS product installed that lets you write custom rules, you could maybe get close. A custom, from-scratch application would be a healthy project for a single developer, and probably a fun one.

Looking at the process list at regular intervals isn't very reliable. Any sampling method is highly likely to miss short-lived processes, which is precisely what a more careful attacker, one who isn't afraid of temporarily using a lot of CPU, will rely on.

Use your operating system's existing logging mechanisms. Even if you go for sampling because systematic logging is too expensive, sample for (say) a full minute now and then (preferably at times when things happen).

For example, under Linux, use the audit subsystem (auditd) to log all calls to the execve system call. See "Linux command logging?" for a general overview and "Simple example auditd configuration?" for a how-to. Note that under Linux, the arguments of a process are publicly readable (except under some restrictive security settings, such as SELinux or similar high-security environments), but the environment variables can only be seen by other processes run by the same user. To get an idea of what is public and what isn't, run ls /proc/1.
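As a sketch of what that audit rule looks like (64-bit syscall ABI; add a matching arch=b32 rule on mixed systems, and the key name here is an arbitrary label of my choosing):

```shell
# Log every execve() with a searchable key (run as root, auditd running):
auditctl -a always,exit -F arch=b64 -S execve -k cmd-log
# Later, review the recorded command executions:
ausearch -k cmd-log -i
```

Putting the same rule in /etc/audit/audit.rules (or a file under /etc/audit/rules.d/ on newer distributions) makes it survive reboots.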