To those following this blog: you will have guessed that there’s not a lot of domestic geekery going on nowadays, so I’ve decided to archive these pages for the time being.

What am I up to instead? During the day ECMWF keeps me very busy – I really love it. Besides that I’m still improving on the piano, devouring a book or two a week (not as fast as the to-read pile grows, mind you), and herding the Netdisco community and source code towards world domination. All good stuff.

For the time being, the best place to keep tabs on me is probably Twitter.

So far so good for access to the new Cat Cam: from within the house we can view video from the cats’ shed, yet the camera is safely on its own DMZ.

In this final post I’ll show how I made the camera video feed available on the Internet.

One thing I wanted from the outset was for Internet clients not to make direct connections to the camera itself. I was a little worried about whether the camera’s web server and CPU could cope with multiple clients, and about the security implications of direct access. A second requirement was multi-platform access – that is, desktop and iOS – which potentially means different streaming video formats.

We have one Linux server in the house, which is used for many different things and runs virtual machines. My back-of-an-envelope plan looked something like this:

First step was to create the VM, but remember that the camera feed is in a DMZ using a VLAN, so the VM must live there too. In KVM it’s possible either to send all traffic to a guest system and let it process the VLANs, or to separate the tagged VLAN traffic in the host system so that the guest is dumb and sees only untagged frames. Clearly the latter is preferable: were the guest to suffer attack from the Internet, it ought not to be able to put traffic onto the house workstation network. The guest is completely within the DMZ.
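On the host that amounts to terminating the tagged VLAN and bridging the untagged side through to the guest. A sketch of the idea, assuming VLAN 30 on eth0 and made-up interface names:

```shell
# Terminate tagged VLAN 30 on the host NIC (the VLAN ID and names
# here are assumptions, not the actual values used)
ip link add link eth0 name eth0.30 type vlan id 30
brctl addbr br-dmz
brctl addif br-dmz eth0.30
ip link set eth0.30 up
ip link set br-dmz up
# The guest's virtual NIC is then attached to br-dmz in its libvirt
# config, so the guest only ever sees untagged DMZ frames.
```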

With that done and a basic Ubuntu system installed, I started work on Apache and VLC (the Swiss Army Chainsaw of video processing). First up, VLC…

Luckily the camera’s video feed comes in MJPEG format with a discoverable URL. The idea is to take this feed, duplicate it, and transcode the respective feeds into something suitable for a desktop browser and for iOS. As a bonus, I’ll timestamp the video to make it easy to tell if the transcoder has crashed (the timestamp would be wrong). After a lot of reading online about how to configure VLC I came up with the following monstrosity:
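The exact invocation isn’t reproduced here, but based on the description below it would have been along these lines. Treat this strictly as a sketch: the camera URL path, web root, bitrates, segment length, and hostname are all assumptions:

```shell
# Sketch only: URL path, paths, bitrates, and hostname are assumptions
cvlc 'http://172.16.30.10:8888/videostream.cgi' --sout '#duplicate{dst="transcode{vcodec=h264,vb=256,sfilter=marq{marquee=%Y-%m-%d %H:%M:%S,position=10}}:std{access=livehttp{seglen=10,delsegs=true,numsegs=6,index=/var/www/live/cats.m3u8,index-url=http://cam.example.org/live/cats-######.ts},mux=ts,dst=/var/www/live/cats-######.ts}",dst="transcode{vcodec=theo,vb=512,sfilter=marq{marquee=%Y-%m-%d %H:%M:%S,position=10}}:http{mux=ogg,dst=:8081/cats.ogg}"}'
```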

Of the two transcodes (“dst=”), the second is more straightforward. It creates an Ogg format stream using the Theora video codec, which modern browsers should be able to cope with. This is a video stream being served from VLC’s built-in web server, so I’ll need to proxy it via Apache. The configuration also applies a filter (“sfilter=”) to add a timestamp on the video stream.

The first transcode uses the new HTTP Live Streaming support in VLC. This is a rather elegant specification from Apple (which is why I selected it for the iOS clients) for simple and efficient delivery of streaming video. It creates a set of files and assumes you have a web server to serve them. The files each contain a few seconds of video, and the client retrieves them and plays one after another. The “######” templates an incrementing number within the segment filename. Again, the timestamp is added to the video stream.

CPU load for this runs at about 60% (in the VM) on the dual core Athlon X2 245e processor. I wrapped the command in an Upstart init file, and just in case VLC gets its knickers in a twist, I added a cron job to periodically stop and start the service.
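The cron job is nothing clever: just a periodic bounce of the Upstart job (the job name here is an assumption):

```
# /etc/cron.d/cat-cam: restart the transcoder nightly, in case VLC
# has wedged ("cat-cam" as the Upstart job name is an assumption)
0 4 * * *   root   /sbin/stop cat-cam ; /sbin/start cat-cam
```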

Now on to Apache. It needs to proxy the Ogg stream and serve the Live Streaming files, and prevent any other access to the web server:
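Something along these lines does the job, assuming Apache 2.2 with mod_proxy enabled; the paths, mount points, and VLC port are assumptions:

```
<VirtualHost *:80>
    DocumentRoot /var/www

    # deny everything by default...
    <Directory /var/www>
        Order deny,allow
        Deny from all
    </Directory>

    # ...except the Live Streaming playlist and segment files
    <Directory /var/www/live>
        Order allow,deny
        Allow from all
    </Directory>

    # proxy the Ogg stream from VLC's built-in web server
    ProxyPass        /cats.ogg http://127.0.0.1:8081/cats.ogg
    ProxyPassReverse /cats.ogg http://127.0.0.1:8081/cats.ogg
</VirtualHost>
```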

All that remains is to enable a NAT rule and firewall pinhole on the home router for the web server (which is, of course, in the DMZ network connected directly to the router).

Let’s see the end result, taken on my iPhone this evening, also demonstrating the automatically activated night vision mode:

It’s nice to be able to check in on the wee beasties when I’m out at work. Other than a lot of reading about VLC, it wasn’t particularly difficult to do, and I think the end result is really quite good.

Previously I discussed the selection and installation of a Loftek CXS 3200 wireless camera, for us to keep an eye on our cats in their shed. As a reminder, here’s a screenshot of two cute, snoozing cats:

This post will cover the network changes made at home for the camera, and in the next, how it was made available on the Internet (for us to check up on away from home).

Naturally the camera needed to go on our home network, but I was a little wary of what shenanigans its software might get up to. For example, I know the camera automatically registers itself with a public dynamic DNS service; it’s possible to update the firmware to disable that feature.

A safe design is to set up a DMZ, and put the camera on that. Our workstations in the house would be able to talk to the camera, as would the Internet, but the camera would not be permitted access to our workstations. Without dedicated cables for the DMZ I needed to enable a VLAN on the network. Thankfully the switch, wireless access point, and router we use support VLANs.

The switch is a Netgear GS108E, an eight port gigabit device (it lives in the loft, and I ran Cat5e inside the walls to recessed sockets in each room). It’s a simple job to enable a tagged VLAN on the ports to the router, the wireless access point, and our Linux server (which I’ll come back to, next time).

I set up a new SSID on the wireless access point dedicated to the camera, which placed all traffic onto this new tagged VLAN. Now the camera and router were linked, via the switch, on a separate path from the rest of the house.

At the router I needed to configure a VLAN subinterface and add some access control lists to set up the DMZ access rules I mentioned above. The DMZ of course needs its own subnet so I gave it a new /24 network.
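In Cisco-style syntax the result looks roughly like this. Only the DMZ subnet is real; the VLAN ID, interface name, and house subnet are assumptions for illustration:

```
! Sketch: VLAN 30, FastEthernet0/0 and 192.168.1.0/24 are assumptions
interface FastEthernet0/0.30
 encapsulation dot1Q 30
 ip address 172.16.30.1 255.255.255.0
 ip access-group DMZ-IN in
!
ip access-list extended DMZ-IN
 ! replies to the house network are fine, new connections are not
 permit tcp 172.16.30.0 0.0.0.255 192.168.1.0 0.0.0.255 established
 deny   ip  172.16.30.0 0.0.0.255 192.168.1.0 0.0.0.255
 permit ip  172.16.30.0 0.0.0.255 any
```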

So far, so good: workstations in the house can now browse to http://172.16.30.10:8888/ (the new DMZ network, via the router) and log in to the Loftek camera to see video of the cats. The camera can only initiate connections to the Internet, or reply to requests from workstations in the house.

In the next post, I’ll talk about using our home Linux server to make the camera video feed available on the Internet.

a few shelves so they can sit in different places or at the window to look out,

a cat flap which lets only them enter and leave,

carpeted floor(!).

A true cat palace, I think you’ll agree. We visit several times a day for feeding and cuddles, and mostly they’re out in the fields behind our house, failing to catch any wildlife.

One thing I miss is just being able to check up on them any time, to see that they’re okay. When they were in the house, of course you’d see them all the time. A good friend of mine mentioned cheap wireless webcams (or CCTV cams).

In this blog post and probably one other, I’ll talk about my selection and installation of the camera and how I made it Internet accessible (well, it’s still “Cats and Code” after all). Here’s an executive summary of the story:

Enough sleeping... it's time for hunting.

Several companies make so-called Internet-enabled cameras, for different budgets and with varying software quality. At the domestic end of the market are:

Axis, which might be more appropriate for business than the home, because the quality is high, with a price to match.

Foscam seem to be the one everyone goes for if they want a little home security on a small budget, at reasonable quality.

There are several far-east clones of the Foscam, many sharing the same designs, for example Loftek.

After some research online and a trawl through the Amazon marketplace, I selected the Loftek CXS 3200 Black. I didn’t want to spend much money at all, in case no camera would work inside the shed, but this model at least had good reviews and several useful features.

Like most similar models the camera can pan and tilt and runs an embedded web server so you can view the video and control the camera. The 3200 automatically switches between day and night vision modes, but interestingly includes the “IR cut” feature. This filters infrared when in day vision mode, to solve the common problem of (e.g.) green foliage appearing purple.

The camera is, of course, wireless, which is handy because the shed has power but no networking. I mounted the camera upside-down as in the image below, and was pleased to find the 3200 has settings to invert the image and pan/tilt controls so everything appears the right way up when viewing the video feed.

Loftek CXS 3200 Black

In the next post I’ll talk about the technical set-up of the camera on our home network.

I have a problem with accidentally becoming locked in confined places. I know it sounds odd, but after several such situations in my life I’m becoming convinced it’s a running theme.

Back in 1993, I was visiting Russia and on the overnight train from St. Petersburg to Moscow. Nature called so I left the sleeping compartment and visited the toilet at the end of the carriage. After finishing my business I couldn’t unlock the door. Needless to say I wasn’t too happy, and soon began to shout for assistance.

After a while no-one had come, so I attacked the door and broke my way out. I didn’t feel too bad about the broken door – these were the days when guards would be bribed to unlock your sleeping compartment in the night so you could be robbed. Fun times.

A long time later, around 2003 I think, we were living in a flat in Oxford and Suzanne had left early for work. I hopped into the en-suite shower, closing the door behind me so that steam didn’t fill the flat. I guess the lock closed somehow, but it also broke, and when I got out of the shower I was stuck.

The door opened inwards so I couldn’t kick it out very easily. After pondering my situation for a good while, I had a MacGyver moment and realised I might be able to dismantle the lock.

From memory, a pair of tweezers was used as a screwdriver, and various other tools fashioned from Suzanne’s make-up kit. It was a bit like in the WWII movies, taking a long time (probably an hour) to do the work, but I got there in the end. Suzanne wasn’t happy about the destruction of her kit, but was glad I was OK.

And then the other night, I went out to feed our cats, who live in a (heated) shed in the garden. I was inside the shed, but it was windy and raining so I pulled the door of the shed closed for shelter. You guessed it, one possessed door lock later and I was stuck.

This time I was in luck: I had my iPhone, and Suzanne was in the house. I phoned home and was rescued, if a little embarrassed.

What to make of all this? Is it normal for people to get accidentally locked into confined spaces? I suspect not. Should I avoid locking doors and risk mutual embarrassment in bathroom situations? I guess I could be plagued with a worse theme for my life; I don’t mind the odd minor drama like this. Where next, though – that’s the question?!

As a free software developer I come into contact with a wide variety of opinions on what development process makes for a good end product. I don’t believe the answer is straightforward. Here’s an excellent quote from Linus Torvalds which sums things up nicely:

“Don't underestimate the power of survival of the fittest. And don't ever make the mistake that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That's giving your intelligence much too much credit.”

My point being: evolution is a tremendously powerful system which creates some very functional products. Don’t over-engineer what you’re doing. Keep it simple, try to avoid second system syndrome, and accept that one day someone may come out of the woodwork with a competing product which is either only incrementally better, or blows yours away.

I’ve had this happen to me. It doesn’t upset me, or put me off trying out more ideas, because the itch I wanted to scratch by writing the software in the first place is now being scratched better. I might join in with the competitor, borrow and build upon their ideas, or simply retire from that scene. It’s all good. It’s simply evolution.

Often when a system at work is having a problem I won’t actually know where it is. I can remember where most of the network kit is located, but not all, and certainly not all the servers connected.

Two tools we already use can help: Netdisco shows which switch port a device is connected to, and a good cable and patching database indicates the room or even cabinet location of the device. However I wanted a more visual understanding of just where cabinet “CX1” might be located on our campus, and particularly in our large data centre halls.

There are a few options, in terms of web-based, Linux-hosted, open source tools:

RackMonkey is a simple bit of Perl CGI with an easy to use web interface. It’s no longer actively maintained, but is hosted on SourceForge. Good features include easy installation, numbering U (position in the rack) from top or bottom, and supporting SQLite3 storage. Missing features include specifying front/back position or facing-direction of the device.

RackTables is, I think, PHP and MySQL, and in addition to rack layout it includes IP address and VLAN registries. These latter features we don’t need, and they would be a distraction or confusion to my colleagues if I couldn’t disable them. What I do like is that devices can occupy front/mid/back zones in each rack unit. Sadly racks can’t be numbered from top or bottom, and we have a mix of both. Update: From version 0.20.4 RackTables will support numbering in either direction.

RackSmith is new and has some good ideas, such as being able to place the racks on a tiled floor plan in a room, in a building. There seems to be sufficient flexibility in how devices are placed in racks, but I notice several user interface bugs, which are understandable as RackSmith is still under development. Update: It seems public development of RackSmith is “on hold” and it’s being rewritten under a SaaS model.

Both RackTables and RackSmith have demos on-line. RackMonkey is really easy to install so there’s less need there, anyway.

At first my choice was going to be RackTables, because of its front/mid/back device positioning, and clear hierarchy of location/room/rack. Sadly, because we can’t reverse the U numbering, it’s rejected. RackSmith would be lovely but just isn’t ready. So, I’ve installed RackMonkey; seeing as it only provides information supplementary to Netdisco and our cable database, its being lightweight and unmaintained is probably not an issue.

Update: So, RackTables it is! Its continuing development has leap-frogged the competition.

(p.s. given sufficient tuits, I’d rather take the network inventory tool I once worked on, and extend that to support these features, instead. I don’t like duplicating information between systems.)

Recently at work I migrated from a Linux desktop to a Windows 7 desktop. This is an account of how I then configured the Win7 system.

First I should mention that working somewhere where they actually have an official, managed Linux desktop is awesome. However for the way I do things, it’s just not too pleasant an experience. On Linux I got tired of never being able to print something in the appropriate orientation. I also used to spend at least 50% of my time in the Windows VM anyway, working in MS Word/Excel/Visio.

The rest of my time is spent in Chrome and SSH in a terminal. So the first things to install on Win7 are Chrome and PuTTY. I created a directory called Applications in my home folder and saved apps such as PuTTY, which do not come with automated installers, there.

I pinned a shortcut to my home folder to the taskbar. This is trickier than it sounds, as you have to do a two-level right click to get to the pinned folder properties window, then edit the Target: field (and possibly the icon if desired). I still don’t fully understand the Win7 “Library” system.

To make PuTTY sessions more manageable, I installed two additional apps. First is PuttyTabs, which despite its age works just fine, and allows easy firing-up of a shell to one of our servers, and management of the windows once open. Rarely I might also use PuTTYTabManager for staged system maintenance when I need to work on several similar systems at once (KDE’s terminal app is a superb tabbed experience which I do miss).

Update: The PuTTY Session Manager is another useful app to manage the session list; it can copy attributes/settings between session configs, and back up/restore session configs in bulk.

Once a few windows are up and running I long for focus-follows-mouse. It’s not available on my iMac at home, but Win7 does have a registry key one can poke. Well done MS! Good luck with the process, which I think could benefit from improved official documentation (pay attention to the comments on that page, as well).

The next little gem to install is AutoHotKey, as I have an Apple keyboard and like to remap some keys (for example the # sign which I use a lot). Here’s my current config:

This maps the #-sign onto the unused section-sign (§) key. It also remaps the left Windows key, which on the Apple UK keyboard is of course the Command key, to be a left Control key. This means that at home and at work my left thumb can take care of cut/copy/paste. Finally, the last line means I always use the correct spelling for my manager’s name.
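For reference, a sketch of what such a script looks like in AutoHotkey. The scancode of the section-sign key can vary by keyboard driver, and the name-correcting hotstring is omitted:

```
SC029::Send {#}   ; unused section-sign key produces # (scancode may vary)
LWin::LCtrl       ; Apple Command key acts as left Control
; (a ::hotstring correcting the manager's name went here)
```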

By the way, I did also install the Apple software drivers for Win7 from their Boot Camp installer on the OS X install disk. This sets the correct keymap for my Apple UK keyboard, as well as enabling the special function keys such as volume control.

The next app, in no way less awesome, is Dexpot, which provides true, full-featured virtual desktop support for Windows. I have four desktops, and use the application pinning feature so that everything in my startup folder is put in the right place when I boot (which isn’t that often, to be honest). Plugins enabled include Dexcube (3D switching effects), MouseEvents (hot corners), and SevenDex (shortcut buttons in taskbar).

In case you’re not familiar with Dexpot, there are loads of features included (for instance context menu items on window taskbars, to allow moving them between desktops or show-on-all). One thing I’d like built-in is using the scroll-wheel on the desktop background to change virtual desktop; apparently it’s possible with an AutoHotKey script.

That’s it for the main tweaks. Here’s a list of some of the other applications I’ve installed:

Overall, after a couple of months now with the Win7 desktop, it’s a vast improvement on Linux. I know some of you may argue all of the above is possible with Linux. I don’t disagree, except to add that it isn’t easy. A well-configured desktop is a means to an end, not an end in itself. The Win7 experience is beautiful and slick, and makes me happier in my work – what more can I say?

When my Mac’s hard disk died, I replaced it with an SSD and reinstalled OS X. At the time I also restored my user’s “login.keychain” file from backup, because I knew that’s where Chrome had stashed all my stored web site passwords. Well, it turns out I only half-knew what was going on.

After the restore, Chrome seemed not to know about any of the credentials. It was quite frustrating because Safari was working fine with the restored Keychain. As a test I entered some credentials in Chrome and they appeared in its own Saved Passwords list! I was expecting that list to be empty because documentation suggests Chrome uses the Keychain on OS X.

After some digging, what I found is that Chrome is quite sensibly engineered, but the user interface is just a little confusing (especially to those with inquiring and suspicious minds!).

This page of Chromium developer documentation explains that Chrome needs to store more metadata about a set of credentials than is supported by the Keychain attributes. Therefore, even though Chrome does use Keychain for credential storage on OS X, it still uses its own built-in (“non-secure”) LoginDatabase. My mistaken belief was that the LoginDatabase would be empty once Keychain is in use.

This can be confirmed by poking around in the local user’s Chrome application data, and opening the SQLite database called “Login Data”. On OS X this contains a row for each stored password but, crucially, not the password itself (the username is there, though). Yet in the Chrome Settings user interface, you click “Manage saved passwords” and up pops a list of usernames and passwords as if Chrome were storing them. In fact it’s merging data from both LoginDatabase and Keychain.
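You can see this for yourself with the sqlite3 command-line tool (quit Chrome first, as it locks the file). The path and schema here are from Chrome at the time of writing and may change between versions:

```shell
cd ~/Library/Application\ Support/Google/Chrome/Default
sqlite3 "Login Data" \
  'SELECT origin_url, username_value, length(password_value) FROM logins;'
# on OS X password_value is empty: the secret itself lives in Keychain
```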

The next confusion arises over the lack of Chrome’s awareness of Keychain entries. As explained in the same developer documentation page, until LoginDatabase has a matching entry, the credentials in Keychain are untrusted by Chrome and hence not displayed.

To be fair to the Chrome developers, it’s not an easy thing to get over to the user that the attributes of a set of credentials are split between two stores, and that you can delete or read back from one, but possibly not the other. At least now I know what’s going on, and I can also make sure always to restore the Chrome application data in future.

SPNEGO is a negotiated authentication mechanism for HTTP which can be used to take advantage of Kerberos credentials for web site login (an alternative to simple username/password, or client digital certificates).

You’ll need to install a keytab for the HTTP service principal. The method differs depending on the type of KDC you have, but for Windows AD this would be:

net ads -U 'username@realm%password' keytab add HTTP
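On the web server side, assuming Apache httpd with mod_auth_kerb (the realm and keytab path below are illustrative assumptions), the protected area is configured something like:

```
<Location /secure>
    AuthType           Kerberos
    AuthName           "Kerberos Login"
    KrbMethodNegotiate On
    KrbMethodK5Passwd  Off
    KrbAuthRealms      EXAMPLE.ORG
    Krb5KeyTab         /etc/apache2/http.keytab
    Require            valid-user
</Location>
```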

As verification I wrote a simple Perl CGI script to echo back $ENV{REMOTE_USER} which emitted user@REALM, as expected.
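That script need be nothing more than a few lines; a minimal sketch:

```perl
#!/usr/bin/perl
# echo back the user authenticated by the web server via SPNEGO
use strict;
use warnings;

print "Content-Type: text/plain\r\n\r\n";
print 'REMOTE_USER is: ', ($ENV{REMOTE_USER} || '(unset)'), "\n";
```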

Sadly, when testing this out I found that SPNEGO is not enabled by default in all browsers (for example, Google Chrome). A managed desktop seems the only way to ensure the user has both Kerberos credentials and a browser started with the correct features enabled; otherwise it’s just too much work to expect of users.