I needed to update many of the links in our wiki because a team member left, so I had to reupload all of her files to a shared service and change all the URLs to point to the new files. Unfortunately, the file service didn’t send me the former URLs of the files, so that was going to be a manual process. Our wiki had 149 pages in it. Not fun.

After a few pages of editing (and correcting the occasional typo that crept in as I changed URLs), I decided to partially automate the process. Using a smidgen of Emacs Lisp, I created a function that pasted text into a temporary buffer, performed whatever automatic fixes it could make, prompted me for any URLs it didn’t recognize, remembered the old URL – new URL mapping I defined, and copied the text back.
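The function might have looked something like this minimal sketch. The name `my/fix-wiki-text`, the `my/url-map` table, and the example URL pattern are all my assumptions, not the original code:

```elisp
;; A minimal sketch of the approach, assuming a hypothetical URL
;; pattern and function name. The real rewrite rules were specific
;; to the wiki in question.
(defvar my/url-map (make-hash-table :test 'equal)
  "Remembered mappings from old URLs to new URLs.")

(defun my/fix-wiki-text ()
  "Paste the copied wiki text into a temp buffer, fix URLs, copy it back."
  (interactive)
  (with-temp-buffer
    (yank)                              ; paste the copied wiki text
    (goto-char (point-min))
    ;; Replace every matching URL, consulting the saved mappings and
    ;; prompting for any URL we haven't seen before.
    (while (re-search-forward "http://old-server/files/[^ \n\"]+" nil t)
      (let* ((old (match-string 0))
             (new (or (gethash old my/url-map)
                      (puthash old
                               (read-string (format "New URL for %s: " old))
                               my/url-map))))
        (replace-match new t t)))
    ;; Put the fixed text on the kill ring so it can be pasted back
    ;; into the browser.
    (kill-new (buffer-string))))
```

Because the mapping lives in a hash table, the function only has to ask about each unknown URL once; every later page that mentions it gets fixed silently.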

I used M-x global-set-key to bind a convenient function key to it (F12, I think), and then it was just a matter of clicking on each page, clicking on Edit, typing Ctrl-C to copy the text, switching to Emacs, pressing F12, switching back to my browser, typing Ctrl-V, and saving the wiki page. I also added some lines (not shown here) to convert the previous wiki gardener’s full links to intrawiki links, change server URLs, and do other fun things.
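In an init file, the binding is a one-liner; the command name here is just a placeholder for whatever interactive function does the fixing:

```elisp
;; Bind F12 to the URL-fixing command. `my/fix-wiki-text' is a
;; placeholder name; any interactive function works here.
(global-set-key (kbd "<f12>") 'my/fix-wiki-text)
```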

I thought about fully automating it (somehow hooking into w3, perhaps?), but that seemed to be more trouble than needed. Besides, it was good to review all the pages.

As a result of this Emacs wizardry, processing all 149 wiki pages took me a few hours instead of a few days. Yay!

Of course, as soon as I finished the last wiki page, I found out that I needed to change the servers in the URLs. I decided to go ahead and fully automate the darn thing.


I extracted a list of URLs for the wiki by viewing the tree version of the wiki index. It used JavaScript, so I couldn’t just pull the URLs out of the source code. Fortunately, the Firebug plugin for Firefox let me copy the rendered HTML, so I used that instead. Some judicious text-editing later (replace-regexp rocks), I had a list of URLs to the different pages.

I knew I needed to put in some kind of delay when loading web pages; sleep-for let me spread out my requests so I didn’t hammer the server too badly. Reading the w3m.el source code turned up w3m-async-exec. Once I set that to nil, requesting web pages and running code on the results turned out to be straightforward. Selecting the right widgets was a bit of a hack (re-search-forward here, w3m-previous-anchor there), but hey, it worked. After confirming it by manually running it on a few pages, I left it merrily running in the background.
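The batch loop could be sketched roughly like this. The URL list and the placeholder step are my assumptions; the real widget-driving code depended on the wiki’s page layout:

```elisp
;; A rough sketch of the batch loop, assuming emacs-w3m is installed.
;; `my/wiki-urls' and the example URLs are placeholders.
(require 'w3m)

(setq w3m-async-exec nil)   ; fetch pages synchronously, so code can
                            ; run right after each page finishes loading

(defvar my/wiki-urls '("http://wiki.example.com/PageOne"
                       "http://wiki.example.com/PageTwo")
  "List of wiki page URLs extracted from the rendered index.")

(defun my/process-all-pages ()
  "Visit each wiki page in turn, fix it, and pause between requests."
  (dolist (url my/wiki-urls)
    (w3m-goto-url url)      ; loads synchronously with w3m-async-exec nil
    ;; ... here the original code used re-search-forward and
    ;; w3m-previous-anchor to locate the right form widgets,
    ;; rewrite the URLs, and submit the edit ...
    (sleep-for 5)))         ; spread requests out; be gentle on the server
```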

I’m sure this kind of automation is possible with lots of hacking in Mozilla Firefox, and I’ve seen great scripts for the Mac, too. But I know Emacs, I’m comfortable digging into source code, and I can make things work.