After using Acquia Drupal for a while, I took advantage of a trial subscription to the Acquia Network. The network’s services showed me that I had files present in my install that the agent could not account for.

When the archive is extracted in this way, my repository’s working directory shows modified, unknown, and deleted files. This allows me to treat each category of files individually before I commit the changes for a Drupal update as a revision.

$ hg status

The modified files will be tracked normally because they’ve already been added to the Mercurial repository, so I don’t need to do anything special for them.

The unknown files are ones that are completely new, and have not appeared in the same position in a previous revision. They have yet to be tracked by Mercurial, so I have to add them to the repository. To add just those unknown files, then, I have to pick them out from the status listing:

$ hg status --unknown

In order to operate just on those files to add them to the repository, I run a for loop:
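The loop itself didn't survive in this copy of the post; a minimal sketch of such a loop (assuming filenames without embedded whitespace) might be:

```shell
# Add each file that "hg status" reports as unknown ("?").
# --no-status prints bare paths, with no status-code column,
# so each path can be handed directly to "hg add".
for f in $(hg status --unknown --no-status); do
    hg add "$f"
done
```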

This changes each file’s “?” status to “A,” because the files are now being tracked by Mercurial.
I use the “--no-status” flag on the “status” command so that only the file paths are printed, without the status codes; that makes the output suitable for passing to the “add” command in the loop.

I do the same basic steps with deleted files. These are files that were in the previous revisions but have been deleted by the --recursive-unlink option from the tar extraction and not replaced with the extraction of the new Acquia Drupal tar archive. If the deleted files had been replaced by the tar extraction, they would either be unchanged (which would not show up in the “status” output) or marked as modified.

To remove the files that are marked as deleted from the repository’s working directory:
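The command block is missing from this copy; a sketch using the same loop pattern (the “--after” flag tells Mercurial to record the removal of files that are already gone from disk) would be:

```shell
# Stop tracking each file "hg status" reports as deleted ("!").
# --after records the removal without touching the (already
# missing) file itself.
for f in $(hg status --deleted --no-status); do
    hg remove --after "$f"
done
```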

However, that may be the same as simply using the following, which I have to explore further:

$ hg remove --after

So, to follow all of these changes in the repository, I run the loop for the unknown files and the loop for the deleted files. The modified files are already tracked, so I don’t need to do anything additional for them. After that, a “commit” will record all of the changes — modifications, additions, and deletions — in the repo.

These commands are based on my current understanding of Mercurial, and they do work for me right now. There could certainly be another better way to do this in one fell swoop — or at least fewer steps. I would welcome that, so if you’re aware of a way, feel free to comment or contact me.

Update: I found that the “hg addremove” command cleanly replaces all of the shell loops I mentioned above. Therefore, I recommend using it instead of the “for” loops I described.

Radmind transcripts with symlinks will be damaged when edited in the Radmind Transcript Editor. I have confirmed this with RTE version 0.7.7 used in conjunction with the version 1.13.0 Radmind command line tools.

The problem appears to be an interaction between RTE 0.7.7 (which is old) and newer Radmind tools, according to [Bad link]. It apparently affects RTE 0.7.7 when used with any version of the Radmind tools greater than 1.12.0, which introduced symlink ownership. The bug [Bad link] and has been fixed in the CVS version of RTE. To use that newer version of RTE, you have to [Bad link].

You only see the problem — assuming you are using the right combination of versions — if you edit and save a transcript or create a new transcript within RTE (either by drag and drop or the “Add Item to Transcript” command). So, using the RTE to simply view the transcript file — and then editing with a different editor (which is an inconvenience) — is a workaround.

To get a count of the affected transcripts (on your Radmind server), use the following command:

$ grep '????' /var/radmind/transcript/* | cut -f1 -d":" | sort | uniq | wc -l

You can simplify the grep search to only return the path of each match, and then process that with Awk to get just the basename of the file. Here’s how to use that technique to get the list of affected transcript files on a Radmind client:
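The snippet itself is missing from this copy; a sketch of the technique, assuming client transcripts live under /var/radmind/client and end in “.T” (adjust both for your setup):

```shell
# -l makes grep print only the names of files containing a match;
# awk splits on "/" and keeps the final path component (the basename).
grep -l '????' /var/radmind/client/*.T | awk -F/ '{print $NF}'
```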

[Bad link] and [Bad link] present [Bad link] in the MacIT track at Macworld Expo 2009. They are two of the first Mac system administrators I knew of using [Bad link], and both had a background in Radmind.

I’ve been reading through James Turnbull’s Pulling Strings with Puppet, since our library had a copy. I had hoped to get through it during our winter break, but illness and other factors (no Puppet pun intended) conspired to get in the way. From what I’ve read about it already, Puppet is clearly interesting. Nigel was very enthusiastic about it when we talked at WWDC 2008.

To me, it seems that it would take some effort to model what you want in it and build up a repository of what you want managed. Perhaps I’m feeling like an old dog trying to learn new tricks. Grin.

One point that Nigel and Jeff made in their presentation slides that struck me is that they needed a solution that works when offline, which Puppet does. Radmind can work offline but I daresay that’s not the way that most people would think to use it (lapply with its “-n” flag would be the most basic change).

[Bad link] also mentioned to me that he’s been using Puppet in conjunction with Radmind. I believe he has Puppet managing configurations and Radmind managing the bulk of the filesystem.

You can find out if an SSL certificate has expired with the command below. I’ve found it useful to be able to check for expired certificates in my use of Radmind, where you can uniquely identify clients to the server with them.

$ openssl x509 -in /path/to/cert.pem -noout -checkend 0

I mention this command primarily because I reviewed the OpenSSL x509 man page (“man x509”) that comes with Mac OS X Leopard, and it didn’t show the “checkend” option for the command. That was odd, because that option was just what I needed.

I did, however, find it documented in the usage statement-style help for the command:

$ openssl x509 --help

In that usage statement, the “checkend” option is described (with little punctuation) as a way to “check whether the cert expires in the next arg seconds [sic] exit 1 if so, 0 if not.” So, using zero seconds shows you if the certificate has already expired, while an integer greater than zero will show if it will expire within that window. No matter how many seconds you check against, you must examine the exit code (the “$?” shell variable) to see whether the certificate has expired or is about to.
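In a script, that exit-code check might look like this (the certificate path is the same placeholder used above):

```shell
# -checkend 0 exits 1 if the certificate has already expired,
# 0 if it is still valid; the if statement branches on that status.
if openssl x509 -in /path/to/cert.pem -noout -checkend 0 >/dev/null; then
    echo "certificate is still valid"
else
    echo "certificate has expired (or could not be read)"
fi
```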

I find this is tremendously useful knowledge when dealing with certificates in Radmind, where an expired certificate can mean the failure of a client to connect to the Radmind server. It could be beneficial in other circumstances, of course — but I don't have those circumstances.

Taking this further, you could check for certificate expiration on a Radmind server — if your certificates are stored in the Radmind special directory for each hostname of a managed client. (Substitute one of your own managed clients’ hostnames for “hostname” in the path below.)
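As a sketch: a file’s path under the special directory mirrors its path on the client, so both “hostname” and the certificate location below are placeholders to replace with your own layout.

```shell
# Check one managed client's stored certificate on the server.
# The path beneath special/hostname/ is an assumption; use the
# path where your deployment actually keeps the client cert.
openssl x509 -noout -checkend 0 \
    -in /var/radmind/special/hostname/var/radmind/cert/cert.pem
```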

Since you can do it for one client certificate, you could also loop through all of the certificates on a Radmind server. In this example, I’ll continue to use the path of /var/radmind even though, on Mac OS X, I’d generally prefer to specify the full /private/var/radmind; your Radmind server may not be on Mac OS X even if your clients are. Also, you may need to modify the “depth” parameter on your search to accommodate the paths on your server. Finally, I’ll change the “checkend” parameter to 604800, for seven days (60*60*24*7 = 604800). That produces something along the lines of:
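A sketch of such a loop (the base path, the -maxdepth value, and the certificate filename are all assumptions to adjust for your server):

```shell
# Report any stored client certificate that expires within the
# next seven days (60*60*24*7 = 604800 seconds). Tune -maxdepth
# to match how deep the certificates sit under special/.
find /var/radmind/special -maxdepth 5 -name cert.pem |
while read -r cert; do
    if ! openssl x509 -in "$cert" -noout -checkend 604800 >/dev/null; then
        echo "expiring soon: $cert"
    fi
done
```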

It’s great to get just the CN of the certificate in these circumstances, since it’s likely you’ll want to act on just those that need attention. One way to do this relatively cleanly is to use OpenSSL x509’s “subject” and “nameopt” options, and then parse the output. Below, I’ll use awk for that. (Again, substitute one of your own managed clients’ hostnames for “hostname” in the path below.)
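The snippet is missing from this copy; one way to sketch the technique (again, the hostname and certificate path are placeholders) is to ask for a multiline subject and let awk pick out the commonName line:

```shell
# "-nameopt multiline" prints one subject component per line, e.g.
#     commonName                = somehost.example.com
# so awk can match that line and print the third field (the value).
# This assumes a CN without embedded spaces.
openssl x509 -noout -subject -nameopt multiline \
    -in /var/radmind/special/hostname/var/radmind/cert/cert.pem |
    awk '/commonName/ {print $3}'
```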

Getting even fancier, you can find out just which certificate CNs are expiring or expired compared to the managed hosts listed in your config file. (Make sure you don’t get any that are commented out, and also watch out for curly braces.) You can accomplish much of that with this snippet:
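The snippet didn’t survive here either; a self-contained sketch (the config location, temp-file paths, and certificate layout are all assumptions) might look like:

```shell
# 1. Hostnames from the config file: first field of each line,
#    skipping comment lines and curly-brace patterns.
# 2. CNs of stored certificates that fail a seven-day expiry check.
# 3. comm -12 prints only the names present in both sorted lists.
awk '!/^#/ && !/[{}]/ {print $1}' /var/radmind/config | sort > /tmp/config-hosts
find /var/radmind/special -name cert.pem | while read -r cert; do
    openssl x509 -in "$cert" -noout -checkend 604800 >/dev/null ||
        openssl x509 -in "$cert" -noout -subject -nameopt multiline |
        awk '/commonName/ {print $3}'
done | sort > /tmp/expiring-cns
comm -12 /tmp/config-hosts /tmp/expiring-cns
```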

Beyond checking for expiration on the server, it may be valuable to do so in your Radmind client scripts, especially if you favor SSL connections. If you find an expired certificate, you can take some remedial action right away that might allow the client to communicate with the server.

I thought about this a while, and the easiest way I came up with — after having already developed more complex logic — was to simply rename or remove the expired certificate from its normal path. Then, allow the client to connect with another authorization level where the client certificate is unnecessary. (Use of a client certificate implies Radmind’s “-w2” authorization level, while a lesser level would mean you’re performing hostname/DNS rather than certificate verification.) This would probably mean you have multiple Radmind server processes running, each on its own port, to accept such incoming requests on the server.

In my [Bad link]-based workflow for updating [Bad link] sites, there is a sequence of commands I need whenever a new version of Drupal comes out. I have a hard time remembering the options for “tar” in this sequence — and my [Bad link] differs from what I need to do on my Web host — so I need to help my memory. The tar command, as constructed below, places its output into the specified destination directory.
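The command itself is missing from this copy. A sketch with GNU tar follows; the --recursive-unlink option is the one mentioned earlier, while --strip-components is my assumption about how the tarball’s top-level drupal-x.y/ directory gets handled, and both the archive name and destination are placeholders.

```shell
# Extract the new Drupal release over the site directory (GNU tar):
#   -x  extract    -z  decompress gzip    -f  archive file
#   -C  destination directory to extract into
#   --strip-components 1   drop the leading drupal-x.y/ directory
#   --recursive-unlink     remove existing directories before replacing
tar -xzf drupal-x.y.tar.gz -C /path/to/site \
    --strip-components 1 --recursive-unlink
```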

I’ve been using zsh for a while as my preferred shell. I have a hacked-together zshrc file, yet I really wanted to use it across multiple systems. Some of those systems are running Mac OS X, others Solaris, and still others Linux. Executables are in different locations and even have different switches across this range of systems, so my cobbled zshrc was not helping me.

As I was about to fall asleep last night, it finally hit me that fixing my zshrc would be a good thing to do. I jotted down some notes about an idea to reorganize it, and did something about it today.

Of course, since I’ve checked my zshrc and other dotfiles into a [Bad link] repository, I could experiment without fear.

I created three top-level functions, with one “case” statement in each. [Bad link], but they are one of the things I like about shell scripting. These statements allow the script to make choices based on the host, operating system, or shell it is running in. (Yeah, it’s a zshrc, but I sometimes do stupid things — like sourcing it in bash on the one Linux system that won’t let me switch to zsh. [Bad link])

I separated all the important sections of my zshrc into their own individual function calls. Each of those function calls was placed into one of the applicable case statements.

The case statement functions figure out the conditions the zshrc is running in, and then run the other functions to set up my environment.
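As a sketch of that structure (the function names, paths, and OS list here are illustrative, not from my actual zshrc):

```shell
# Per-platform setup lives in small functions; the case statement
# dispatches on the operating system and calls the right ones.
setup_path_macos()   { PATH="/usr/local/bin:$PATH"; }
setup_path_solaris() { PATH="/usr/xpg4/bin:$PATH"; }
setup_path_linux()   { PATH="/usr/bin:$PATH"; }

os_setup() {
    case "$(uname -s)" in
        Darwin) setup_path_macos ;;
        SunOS)  setup_path_solaris ;;
        Linux)  setup_path_linux ;;
    esac
}
os_setup
```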

The changes tested well from the first try across the various platforms and hosts I log in to. I did have a minor problem with the hostname command, because Solaris doesn’t have a “-s” flag for it. Eventually, I solved that — and the odd “uname: error in setting name: Not owner” error I got, even though I wasn’t directly running uname there — by replacing hostname with uname.

Thankfully, it works for me, and it should be a little easier to manage changes in the future.

Drat! I’ve learned that the commands module for Python, which I use, [Bad link]. That means I can no longer safely call commands.getstatusoutput(). I’ve frequently used this call in the past because it seemed the sanest, easiest way to call a shell utility and get both its output and return status (for success or error).

I’ll have to find some other way to perform the same function — preferably one that will work on Python 2.3 from Mac OS X Tiger, Python 2.5.1 from Leopard, and future Pythons. The stated replacement for a number of similar modules (including popen2, which frankly kind-of frightened me off with its name) is the [Bad link], but I don’t know if that will work for my purposes.

There are also a bunch of Mac-specific modules being removed. I don’t use any of them right now, but that doesn’t mean they wouldn’t have been useful.

This kind of thing is spirit-crushing to me for some reason. I’m especially annoyed that Python has been around for so long and it is still reorganizing the ways it calls shell commands. Just settle on something! It seems hard to take it seriously as a system administration scripting language when things like this happen.

On the other hand, I love so much of the Python Standard Library, which has afforded me a lot for system administration …

Here, we can see that an entry created with repo looks like the standard Radmind log messages above. The client hostname and IP address are reported after the “report” text. The CertificateCN for the client — if the highest authorization level is specified (with the -w2 flag) — is also listed; if not, a dash takes its place. I haven’t seen a case where the second dash is substituted, however.

Finally, where the Radmind command/tool used would normally be, the “event” specified by repo will be printed. After that, the message text appears.

The value proposition is that if you’re using Radmind, the repo command can help you send arbitrary messages to the server for logging. As a bonus, if you’ve taken the time and effort to build the certificate infrastructure for Radmind, you can send these messages securely between the clients and the server, cloaked in SSL.

If you’re using multiple servers, you may want to combine their logs in one location so that you can get all of the clients’ reports in one location. You may also want or need to retain these reports for more time. In either case, determine what policies you should apply to the syslog or Apple System Logger (ASL, for Mac OS X) configuration for your server systems.

Whether or not you use repo, it’s good to know that the tools do some logging. The logging can be followed to try to determine the status of your clients, or whether they are failing their updates.

Unfortunately, the most common client failures I have seen tend to involve the lapply tool, and the default level of detail reported back to the server gives no indication of what problem was encountered. You see only that there was an error. Still, even though you may not get enough detail to remotely resolve the problem, it’s something to go on when trying to find problems in the first place.

When I come across software I might need to add into Mac OS X that requires compilation, I typically want to produce one Universal Binary. Make it a four-way UB and you get both 32- and 64-bit support.

A single binary is ideal for a Radmind transcript (or other package, if you wanted to bundle it into an installer) that can be deployed on both PowerPC and Intel Macs on Leopard.

Since [Bad link] 3.0.2 with some patches is apparently working [Bad link] — passing the [Bad link] tests — I thought I'd try my hand at a four-way Universal Binary.

What worked for me, using a Mac Pro 4x2.8 GHz with Mac OS X 10.5.2 and Xcode 3.0, was to start with [Bad link] and modify them with some [Bad link]. The configure and compile were both less than a minute on this system.

I have seen “-Wl,-syslibroot,/Developer/SDKs/MacOSX10.5.sdk” used in the LDFLAGS environment variable when compiling some applications, but this did not work for me with rsync; when I removed it, rsync 3.0.2 configured successfully for me.
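The exact flags from the linked instructions and patches aren’t reproduced here, but the general shape of a four-way build looks something like this (the flags shown are illustrative, not the ones I actually used):

```shell
# All four architectures in a single pass; Leopard's GCC accepts
# multiple -arch flags. --disable-dependency-tracking is commonly
# needed for multi-arch autoconf builds.
export CFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64"
export LDFLAGS="-arch ppc -arch ppc64 -arch i386 -arch x86_64"
./configure --disable-dependency-tracking
make
```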

The result of the above build process appears to be a full four-way UB:
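To check that for yourself, lipo (or file) will list the slices contained in the binary; the ./rsync path assumes you run this from the build directory:

```shell
# Each tool lists the architectures baked into the binary.
lipo -info ./rsync
file ./rsync
```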

A local transfer on the build system appears to have worked correctly. I did not test with Backup Bouncer, sync with a non-Mac system, or when shuttling data between architectures. So, accept these results with a grain of salt; I’m just happy I got rsync to compile for now.